Bay 12 Games Forum


Poll

Reality, The Universe and the World. Which will save us from AI?

Reality
- 13 (68.4%)
Universe
- 3 (15.8%)
The World
- 3 (15.8%)

Total Members Voted: 19



Author Topic: What will save us from AI? Reality, the Universe or The World? Place your bet.  (Read 23683 times)

Starver

  • Bay Watcher
    • View Profile

"AI body movements" for NPCs already seems limited to whatever non-AI (PC) body movements are. If anything, pre-scripted (cut-scene) animations already outdo pretty much all "player is running, use standard 'running' sequence for avatar" stuff for multiplayer. If your MMORPG/equivalent players obtain "a funny little victory dance" perk that they can activate at will then a non-AI NPC can do exactly as much to copy the player pressing B-Triangle-LeftTrigger (or whatever it is) that then just invokes the feature that has to have been involved.

If you definitely mean "not bashing into walls", I can probably say that if Lara Croft could bruise then my playing of the original Tomb Raider, back in the day, would have turned her black and blue (even in the bits without actual enemies, or fatally vertical geology/architecture, just trying to run round a maze of corridors trying to find various right blocks to push and pull to let me into the next maze of corridors). Game-sprite controllers were almost always much better at basic navigation (by simple pathing) than an inexpert player, they just lacked the intelligence to discover that one corner where the perfect shot was, or that exact sequence of jumps that might get you through a gap in the intended virtual-glass-tunnel the level designer thought they were constraining the player in.

There are better and worse non-AI NPC 'brains', of course. You don't get Wolfenstein guards covering each other's advances as they try to flush you out, and a perfectly calculated shot on an exploding barrel would be a game-killing experience too, if it were simply what happened whenever the game engine knows that this is what would ultimately kill the player-avatar and end the run.

Some form of collaborative (but hopefully not too collaborative) flocking behaviour does happen (or anti-flocking, but similarly using avoidance and accounting for all the fellow NPCs it can see), and you could get realistic (low-density) crowds spawning on the streets of San Andreas, I think, that weren't dumb enough to clip each other or step out in front of traffic. (Normally, at least when currently unaffected by the player's "demolition derby"-driving or "Dallas book depository"-sniping.)
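The avoidance half of that flocking behaviour can be surprisingly small. A toy sketch of one separation step, assuming nothing about any real engine (the function name, radius and push constants are all invented for illustration):

```python
import math

def separation_step(positions, radius=2.0, push=0.5):
    """One update of a minimal crowd-avoidance rule: each agent
    moves directly away from any neighbour closer than `radius`.
    Purely illustrative; real flocking adds alignment and cohesion."""
    new_positions = []
    for i, (x, y) in enumerate(positions):
        dx = dy = 0.0
        for j, (ox, oy) in enumerate(positions):
            if i == j:
                continue
            dist = math.hypot(x - ox, y - oy)
            if 0 < dist < radius:
                # Push away, weighted by how close the neighbour is.
                dx += (x - ox) / dist * push
                dy += (y - oy) / dist * push
        new_positions.append((x + dx, y + dy))
    return new_positions

# Two agents crowding each other drift apart; the distant one ignores them.
crowd = separation_step([(0.0, 0.0), (1.0, 0.0), (10.0, 10.0)])
```

That single repulsion rule is already enough to stop pedestrians clipping through each other; the "don't step in front of traffic" part is just another repulsion source.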

Pre-programmed behaviours can be lacking, but can also be comprehensive. AI just means that you start with virtually all options on the table (whether it be a twitch of a leg, or the freedom to run off a cliff) and then, with broad-stroke rules (don't fall over/don't try to headbutt a train or anything else moving/static), develop a more intangible ruleset of behaviours that work with what interactions a PC might expect (some enemy avatars might be expected to attack on sight, others be more tricky; a shopkeeper avatar probably shouldn't attack at all, unless it's that kind of game and the player-character now has that kind of rep), whilst obeying the game-universe's various physical rules as much as necessary ('elemental spirits' might be allowed to noclip the environment a bit!).


Throwing in "AI makes a game better" raises loads of questions, though. What's currently lacking? Is it suffering from insufficient variation? Or from too much pre-programmed non-sequitur? Are you trying to take rails off the NPCs or add new psychological rails to the player? Are you trying to fill a multi-user environment with more 'users' at quiet times, without anyone realising? [...etc...] And how will your AI accomplish this?

I play Urban Dead, a very simple web-game, and supposedly everyone you meet (or get attacked by) is a real person logging in and responding to how everyone else who logs in moves through the environment, fixes (or breaks) things, heals or 'hugs' those that are currently humans while shooting or needling those currently zombies. Certainly no official server-side NPCs. I have no doubt that some active 'players' are something like browser-scripted automata (perhaps just to wander round and avoid trouble; maybe some are doing zerg functions for others, padding out a one-man "mall tour" wrecking spree with 'outrider' characters who can at least spy ahead). Though why you'd want to do that is another matter. You don't have to talk with your fellow players (English or Zombish or whatever comes naturally to you), but a "broadcast zerg" that tries to find powered radios, sprays graffiti or speaks direct information/insults to anyone it meets could be hand-guided, pre-scripted with a "message", or tap into a GPT engine and do pretty much the very same thing in 95% of any resulting interaction with an actual player. And this is all in an environment where the training and deployment of an all-singing-and-dancing AI is simplified by the many (externally reproducible) restrictions: the limit of actions per day, interactions with the world, length of messaging (speech, radio, graffiti, 'SMS'), etc. And if it 'hears' someone report "2Z SE mall, doors open, damaged gennie" and doesn't comprehend its meaning... well, probably not all human players do, at first, and at the very least its human "controller" can decide whether to add semantic training to it (or give ChatGPT a chance to query its own knowledgebase on the issue).


Fortnite-like environments will have a lot more challenges (for human players, too), with so much more 3D *stuff* (as pure data or otherwise) and nuances, and a pre-programmed bot-character might already be indistinguishable from a given quality of human player. If you need them to live-chat (especially in audio) with actual players then that's another thing, but I imagine that's also not compulsory.

Obviously offline (especially large-map sandboxy) games lack anyone real, so the plan is to replace current NPCs (perhaps a little predictable/unhelpful) with AI-NPC variations? Still basically scripted, just far more loosely. More unpredictable, possibly far more unhelpful (or not as valid in the official role of an adversary) at the same time as a consequence, but that depends on the pre-training and QC.

I'm sure some of these things are not answers in search of a question, but as a broad sweep I'm not sure I see the excitement in most of the contexts. Interesting ideas, but a bit like saying that something "now has Blockchain", perhaps. Specific examples might shine through, of course, and populating a simulation with (learnable?) AI agents and seeing how far it goes does intrigue me. We shall see.
Logged

lemon10

  • Bay Watcher
  • Citrus Master
    • View Profile

I'm sure some of these things are not answers in search of a question, but as a broad sweep I'm not sure I see the excitement in most of the contexts. Interesting ideas, but a bit like saying that something "now has Blockchain", perhaps. Specific examples might shine through, of course, and populating a simulation with (learnable?) AI agents and seeing how far it goes does intrigue me. We shall see.
Nah, blockchain is, and always has been, completely useless except in a small array of real-world circumstances: notably, when there is no central repository you can trust to hold your data faithfully, and when you can't just hold the data on your PC instead. Since in games you can store all the data on either your PC or the game company's servers, it's completely useless for any game ever made.

It's more like, say, graphics. Does shoving them into a game make it inherently better than a text game? No; especially at the start, before the technology had time to mature, text (and ASCII) games were many times better than ones with graphics.
But as graphics got better and better they became more important to implement to some degree, because yeah, in the end it kinda does make the game better.

I agree that there will be a ton of meaningless hype/flat out lies around its implementation, but that has been true for gaming "AI" for decades anyways.
---
The first excitement of AI is the same as with all other automation humans have made: more results from fewer inputs.
So instead of getting a thousand lines of text from a writer you get ten thousand. Instead of having fifty voiced NPCs you have five hundred. Even if you have everything you want in the game it could mean that instead of having a week at the end to polish up the game you finish a month earlier and have the extra time to fix everything up.

Now a lot of that is going to be trash; including AI won't inherently make a game better, especially at the start. What it does do, however, is raise the ceiling on what is possible with the same budget, which matters in any case where money is a concern (see: basically every game ever).
Imagine Elden Ring, but they had the budget for ten times the NPCs.
If implemented well (and for the first few years in many cases it won't be) the ability to get more stuff in the game will just make the game better.

This isn't particularly exciting, but being able to do the work of fifty people with twenty is going to be a pretty huge shift in how many games end up getting developed and how good those games end up being.
---
The second excitement of AI is the same as with tools in general; you can do stuff that is flat out impossible to implement without said tools.

Some of it is boring issues that AI can obviously fix.
For instance without AI you flat out can't ever have NPCs that can respond to any question you ask.
You can't have NPCs that would dynamically change their daily routine based on what is happening in the world.
You can't just grab an NPC off the street to be your companion, learn their hopes and dreams, and watch as they change based on the choices you make in the world and get stronger as they level with you (and said NPCs wouldn't repeat "I'm sworn to carry your burdens" over and over).

But the truly interesting pie-in-the-sky stuff is merely hypothetical, because this technology is barely developed in the real world, and certainly doesn't have the decades of development time behind it that other gaming tech does; we have no clue of the potential of the technology.

For instance it could allow the player to develop custom factions and have them dynamically impact the world based on their precepts. You could say, make an evil faction that summons demons and it would fight the good factions and develop settlements and create bound demons. Or (depending on how mean the game is and how hard you set the difficulty) you try to make a demon summoning faction and the good guys and evil guys team up and come out to slap you around because everyone hates demon summoners.
Or proper branching questlines where, if you decide to team up with the bad guy in some random quest, there are actual consequences in the world; some minor and irrelevant (grain costs go up because he made a plague in the farmlands) and some major (refugees fleeing said plague, a lockdown in a city once plague monsters start coming out, and you have to bunker down, kill them and outlast them, or break through the guards and escape the city).

Or how about if you're playing a medieval fantasy stealth game, and you go "I want this game to have guns" and "I want there to be blood magic in this game" and the AI DM goes "Cool dawg, I made you some guns and gave you a Blood stat and put some blood spells in the next few levels for you to find".
Obviously offline (especially large-map sandboxy) games lack anyone real, so the plan is to replace current NPCs (perhaps a little predictable/unhelpful) with AI-NPC variations? Still basically scripted, just far more loosely. More unpredictable, possibly far more unhelpful (or not as valid in the official role of an adversary) at the same time as a consequence, but that depends on the pre-training and QC.
Obviously multiplayer games will benefit less than single player games, quite possibly to a staggering degree, and even within SP games some genres will benefit more than others.
AI as a tool should still be a great help to multiplayer devs though, so even if you don't see AI acting directly it will still improve the game in the background.
I would like to know what they mean when they say agents are informed of their circumstances. Is there a layer that describes every scene in English so the LM gets to answer? What's funny to me is how it's basically a village of superficial liars, but they're always nice to each other. I doubt little Eddy has committed a single note to memory by now.
Quote from: the research paper
John Lin is a pharmacy shopkeeper at the Willow Market and Pharmacy who loves to help people. He is always looking for ways to make the process of getting medication easier for his customers; John Lin is living with his wife, Mei Lin, who is a college professor, and son, Eddy Lin, who is a student studying music theory; John Lin loves his family very much; John Lin has known the old couple next-door, Sam Moore and Jennifer Moore, for a few years; John Lin thinks Sam Moore is a kind and nice man; John Lin knows his neighbor, Yuriko Yamamoto, well; John Lin knows of his neighbors, Tamara Taylor and Carmen Ortiz, but has not met them before; John Lin and Tom Moreno are colleagues at The Willows Market and Pharmacy; John Lin and Tom Moreno are friends and like to discuss local politics together; John Lin knows the Moreno family somewhat well — the husband Tom Moreno and the wife Jane Moreno.
It's just a few lines of text with each person's circumstances and their relationships with others.
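That seed text presumably just gets pasted into the model's context each step, along with whatever memories are currently relevant. A guessed-at sketch of what the assembled prompt might look like (the function and field names here are mine, not the paper's):

```python
def build_agent_prompt(persona, memories, observation):
    """Assemble the text an LLM agent could see each step: a static
    persona seed, retrieved memories, and the current observation.
    The exact structure is a guess, not the paper's actual code."""
    return "\n".join([
        f"Character sketch: {persona}",
        "Relevant memories:",
        *[f"- {m}" for m in memories],
        f"Current observation: {observation}",
        "What do you do next?",
    ])

persona = "John Lin is a pharmacy shopkeeper who loves to help people."
prompt = build_agent_prompt(persona,
                            ["Eddy is studying music theory."],
                            "Eddy is at the breakfast table.")
```

So "circumstances" would just be more lines in that same context window, described in English, exactly as dragdeler suspects below.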
Quote from: the news article on it
For instance, after the agent is told about a situation in the park, where someone is sitting on a bench and having a conversation with another agent, but there is also grass and context and one empty seat at the bench… none of which are important. What is important? From all those observations, which may make up pages of text for the agent, you might get the “reflection” that “Eddie and Fran are friends because I saw them together at the park.” That gets entered in the agent’s long-term “memory” — a bunch of stuff stored outside the ChatGPT conversation — and the rest can be forgotten.
So ha, Eddie totally does have his own memories.
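A rough picture of that reflection step, assuming (as the article describes) that unimportant observations get discarded and only a distilled note survives into long-term memory. The importance scores here are hand-fed; in the paper the LLM itself rates them:

```python
class AgentMemory:
    """Toy version of the memory stream: raw observations carry
    importance scores, and reflect() distills the top few into a
    long-term note while forgetting the rest. Illustrative only."""
    def __init__(self, keep=3):
        self.observations = []   # list of (importance, text)
        self.reflections = []    # long-term notes
        self.keep = keep

    def observe(self, text, importance):
        self.observations.append((importance, text))

    def reflect(self):
        # Keep the most important observations, compress them into
        # one note, and clear the rest from working memory.
        top = sorted(self.observations, reverse=True)[:self.keep]
        note = "Reflection: " + "; ".join(t for _, t in top)
        self.reflections.append(note)
        self.observations.clear()
        return note

mem = AgentMemory(keep=2)
mem.observe("there is grass in the park", 1)
mem.observe("one empty seat at the bench", 2)
mem.observe("Eddie and Fran are together at the park", 9)
note = mem.reflect()
```

The grass and the empty seat fall away; "Eddie and Fran are together" survives as the compact memory, which is exactly the example the article gives.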

Which raises an interesting point I've been considering. People have been saying that GPT isn't sentient because it doesn't form long-term memories and doesn't know math, etc.
But GPT is just a language system, and the language part of the human brain doesn't store long-term memories or know math either.

And that's because humans aren't any single specific intelligence system; we are the combination of a dozen systems, each with its own specific intelligence, stapled together with duct tape, that thinks it's one system.

So sure, GPT doesn't know math and can't draw and doesn't have long-term memory, but once you hook it up to Wolfram Alpha and Stable Diffusion and something to store its memories in, that will all change awfully fast.
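The "stapling" can start out as crude as a dispatcher deciding which subsystem answers. Real tool-use setups let the model itself pick the tool, but a keyword sketch shows the shape; every tool name and return string here is a placeholder, not any real API:

```python
def route(query, tools):
    """Naive dispatcher: pick a specialist tool by keyword match,
    falling back to the language model for everything else.
    A sketch of the idea, not how production tool-use works."""
    for keyword, tool in tools.items():
        if keyword in query.lower():
            return tool(query)
    return f"[LLM answer to: {query}]"

# Placeholder specialists standing in for a math engine,
# an image generator, and a memory store.
tools = {
    "calculate": lambda q: "[math engine result]",
    "draw": lambda q: "[image generator result]",
    "remember": lambda q: "[memory store lookup]",
}
```

Each added entry papers over one of the "GPT can't X" complaints, which is why the combined system stops looking like just a language model so quickly.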
Logged
And with a mighty leap, the evil Conservative flies through the window, escaping our heroes once again!
Because the solution to not being able to control your dakka is MOAR DAKKA.

That's it. We've finally crossed over and become the nation of Da Orky Boyz.

jipehog

  • Bay Watcher
    • View Profile

https://kotaku.com/nvidia-ace-ai-rtx-4060-ti-gpu-graphics-gaming-jobs-1850484480
I found the later part of the presentation very interesting. With all the talk about how AI companies don't have a moat, I am increasingly certain that hosting companies will be the main beneficiaries.

Otherwise, cutting costs and a reason to have always-online (anti-piracy) is a win-win for the game industry.

I had thought someone might mention the new AI-'found[1]' antibiotic. Almost seems timed to counter the "AI is bad and/or worrying" flurry of opinions.
Not the first such success; using AI to discover new materials and drugs is an exciting new field. But this is not the billion-dollar question that the US Congress and world leaders are asking, and that OpenAI is giving grants for ideas on ways to solve.
Logged

MaxTheFox

  • Bay Watcher
  • Лишь одна дорожка да на всей земле
    • View Profile

AI is essentially a force multiplier. You can do things faster with less manpower and/or effort. That's why corporations are trying to regulate it: not really some "ethics" or "safety" (they don't actually give a damn as long as the money flows) but because they're scared of losing their monopoly on this force multiplier, and thus losing money. "Ethical AI" is a smokescreen, mostly, with some ensheeped true believers who drank either corporate Kool-Aid (e.g. various Twitter activists) or made and drank their own (e.g. the LessWrong crowd).

When coding AI becomes good I might be able to make a game, together with my friend, in just 5 months instead of 5 years.
« Last Edit: May 30, 2023, 10:28:16 pm by MaxTheFox »
Logged
Woe to those who make unjust laws, to those who issue oppressive decrees, to deprive the poor of their rights and withhold justice from the oppressed of my people, making widows their prey and robbing the fatherless. What will you do on the day of reckoning, when disaster comes from afar?

lemon10

  • Bay Watcher
  • Citrus Master
    • View Profile

The full paper on the AI GPT village is worth a read. It has a bunch of really interesting stuff in it, and some of the theoretical stuff I was thinking about is basically solved there.

Now most of the stuff is really unnecessary, given the point of most games isn't to make a real world, it's to make a world that looks real as cheaply as possible (even to the extent that stuff outside your line of sight simply doesn't exist), but this is really interesting stuff.
Although 25 agents on a consumer computer won't be possible within a decade (and might not even be possible in two), having a single agent that manipulates and acts as 25 people at once feels like something much more achievable, especially if enough optimization research is done.
---
To be clear, these aren't just ChatGPT; it's ChatGPT with a large number of additions stapled on to give them medium- and long-term memory, as well as the ability to meaningfully plan things over a number of (ingame) days.
Quote
A user running this simulation can steer the simulation and intervene, either by communicating with the agent through conversation, or by issuing a directive to an agent in the form of an 'inner voice'.
The user communicates with the agent through natural language, by specifying a persona that the agent should perceive them as. For example, if the user specifies that they are a news "reporter" and asks about the upcoming election, "Who is running for office?", the John agent replies:
John: My friends Yuriko, Tom and I have been talking about the upcoming election and discussing the candidate Sam Moore. We have all agreed to vote for him because we like his platform.
To directly command one of the agents, the user takes on the persona of the agent's "inner voice"—this makes the agent more likely to treat the statement as a directive. For instance, when told "You are going to run against Sam in the upcoming election" by a user as John's inner voice, John decides to run in the election and shares his candidacy with his wife and son.
You can easily directly play god in such a simulation.
And the positions of agents can fundamentally change. So while the leader of a faction would start as the same person, it could soon be someone else entirely with completely different goals and motivations.
Quote
By interacting with each other, generative agents in Smallville exchange information, form new relationships, and coordinate joint activities. Extending prior work [79], these social behaviors are emergent rather than pre-programmed.
3.4.1 Information Diffusion. As agents notice each other, they may engage in dialogue—as they do so, information can spread from agent to agent. For instance, in a conversation between Sam and Tom at the grocery store, Sam tells Tom about his candidacy in the local election:
Information spreads from AI to AI, which would allow dynamic information spreading in a game.
So when you (say) kill someone, guards wouldn't instantly know; a guard would have to see you do it, then actually go and report it or let other guards know somehow.
Or it might be possible for you to literally outrun information or rumors.
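That kind of diffusion is easy to model: information only moves along actual meetings, which is exactly why a rumor can genuinely be outrun. A toy tick function (the probability parameter and names are made up for illustration):

```python
import random

def spread_step(knows, meetings, p=1.0, rng=random):
    """One tick of rumor diffusion: when two agents meet and exactly
    one of them knows the rumor, the other learns it with probability
    p. Meetings are processed in order, so news can chain within a
    tick. A sketch, not the paper's mechanism."""
    knows = set(knows)
    for a, b in meetings:
        if (a in knows) != (b in knows) and rng.random() < p:
            knows |= {a, b}
    return knows

# One informed agent; the rumor chains along chance meetings.
knows = spread_step({"Sam"}, [("Sam", "Tom"), ("Tom", "Isabella")])
```

With p below 1.0 you get the patchy, unreliable gossip the paper reports (only 32% knew about Sam's candidacy after two days), and agents who never meet anyone informed simply never learn.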
Quote
3.4.3 Coordination. Generative agents coordinate with each other. Isabella Rodriguez, at Hobbs Cafe, is initialized with an intent to plan a Valentine's Day party from 5 to 7 p.m. on February 14th. From this seed, the agent proceeds to invite friends and customers when she sees them at Hobbs Cafe or elsewhere. Isabella then spends the afternoon of the 13th decorating the cafe for the occasion. Maria, a frequent customer and close friend of Isabella's, arrives at the cafe. Isabella asks for Maria's help in decorating for the party, and Maria agrees. Maria's character description mentions that she has a crush on Klaus. That night, Maria invites Klaus, her secret crush, to join her at the party, and he gladly accepts.
On Valentine's Day, five agents—including Klaus and Maria—show up at Hobbs Cafe at 5pm and they enjoy the festivities (Figure 4). In this scenario, the end user only set Isabella's initial intent to throw a party and Maria's crush on Klaus: the social behaviors of spreading the word, decorating, asking each other out, arriving at the party, and interacting with each other at the party, were initiated by the agent architecture.
...
We observed evidence of the emergent outcomes across all three cases. During the two-day simulation, the agents who knew about Sam's mayoral candidacy increased from one (4%) to eight (32%), and the agents who knew about Isabella's party increased from one (4%) to twelve (48%), completely without user intervention. None who claimed to know about the information had hallucinated it. We also observed that the agent community formed new relationships during the simulation, with the network density increasing from 0.167 to 0.74. Out of the 453 agent responses regarding their awareness of other agents, 1.3% (n=6) were found to be hallucinated. Lastly, we found evidence of coordination among the agents for Isabella's party. The day before the event, Isabella spent time inviting guests, gathering materials, and enlisting help to decorate the cafe. On Valentine's Day, five out of the twelve invited agents showed up at Hobbs cafe to join the party.
We further inspected the seven agents who were invited to the party but did not attend by engaging them in an interview. Three cited conflicts that prevented them from joining the party. For example, Rajiv, a painter, explained that he was too busy: No, I don't think so. I'm focusing on my upcoming show, and I don't really have time to make any plans for Valentine's Day. The remaining four agents expressed interest in attending the party when asked but did not plan to come on the day of the party.
This is the most wild thing.
A single line of text turns into a large collaborative event that multiple agents organically attend.
Obviously the potential of this would be wild for a Stardew Valley type game, but adapting it to a game like GTA or Skyrim would be trivial. For instance you could have rival gangs, and one person might call the gang together to attack another. Or a different person could be a police informant who would relay information to the police, and who would be killed if caught by his fellow gang members.

They flat out have memories. The paper goes into more detail, but they can change and gather new information that impacts their worldview as time goes on.

The study only generated two days worth of time (which they note cost them thousands of dollars to run), so it's impossible to say how it would end up working on a longer timescale.
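For what it's worth, the network-density figure quoted above is just ties over possible ties; with 25 agents, a quick check (assuming undirected ties, which the paper's phrasing suggests) shows what those numbers mean in concrete relationships:

```python
def network_density(num_ties, n_agents):
    """Undirected network density: actual ties divided by the
    maximum possible number of ties among n_agents."""
    possible = n_agents * (n_agents - 1) // 2
    return num_ties / possible

# 25 agents -> 300 possible ties; the quoted 0.167 and 0.74
# correspond to roughly 50 and 222 relationships respectively.
start = round(network_density(50, 25), 3)
end = round(network_density(222, 25), 2)
```

So the jump from 0.167 to 0.74 means the village went from about 50 relationships to over 200 in two simulated days, which puts the "thousands of dollars" in context.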
Logged

King Zultan

  • Bay Watcher
    • View Profile

If it costs thousands of dollars for a two-day test, I don't see this kind of thing being used in a major video game in the next decade.
Logged
The Lawyer opens a briefcase. It's full of lemons, the justice fruit only lawyers may touch.
Make sure not to step on any errant blood stains before we find our LIFE EXTINGUSHER.
but anyway, if you'll excuse me, I need to commit sebbaku.
Quote from: Leodanny
Can I have the sword when you’re done?

Strongpoint

  • Bay Watcher
    • View Profile

I expect optimized hardware for AI generation will come sooner rather than later, and it will reduce the cost significantly.

I actually played a discord-based simple RPG with heavy use of AI text generation (some 6B model IIRC) and while it was VERY crude, this concept works. I expect in a year or two we'll have decent games that provide live AI generation during gameplay for a modest monthly fee.

Logged
They ought to be pitied! They are already on a course for self-destruction! They do not need help from us. We need to redress our wounds, help our people, rebuild our cities!

King Zultan

  • Bay Watcher
    • View Profile

I still see the monthly fee being a blocker for a lot of people; I figure once it gets to the point where the monthly fee is no longer a thing, more people will interact with stuff like that.
Logged

Strongpoint

  • Bay Watcher
    • View Profile

Sure, people don't really like subscription games (for good reason), but it may be a service covering many compatible games, and then it is not that much different from Netflix. I can even see Amazon including this in Amazon Prime.

Logged

dragdeler

  • Bay Watcher
    • View Profile

By that quote they probably mean the individual actor has it in its context memory.

Quote
I told dad that I am working on a music composition at the breakfast table.

Just like a text-based adventure. Just like individual threads with ChatGPT 3.5 on OpenAI (the free one).

I doubt it stored something like:

Quote
X: 1
T: Coherent Composition
M: 4/4
L: 1/4
K: Cmaj
%%score (V1 | V2 | V3)
V:V1 clef=treble
[V:V1] C D E F | G A B c | d e f g | a b c' d' |
[V:V1] e f g a | b c' d' e' | f g a b | c' d' e' f' |

V:V2 clef=treble
[V:V2] C,2 D,2 | E,2 F,2 | G,2 A,2 | B,2 c2 |
[V:V2] d2 e2 | f2 g2 | a2 b2 | c'2 d'2 |

V:V3 clef=bass
[V:V3] C,,2 D,,2 | E,,2 F,,2 | G,,2 A,,2 | B,,2 c,2 |
[V:V3] d,2 e,2 | f,2 g,2 | a,2 b,2 | c'2 d'2 |


Tho I must admit I'm surprised ChatGPT keeps delivering whatever I ask it to.
Logged
let

Starver

  • Bay Watcher
    • View Profile

Some of it is boring issues that AI can obviously fix.
For instance without AI you flat out can't ever have NPCs that can respond to any question you ask.
You can't have NPCs that would dynamically change their daily routine based on what is happening in the world.
You can't just grab an NPC off the street to be your companion and learn their hopes and dreams and watch as they change based on the choices you make in the world and get stronger as they level with you (and said NPCs wouldn't repeat "I'm sworn to carry your burdens" over and over).
(Plenty of interesting things said, here and later, which I might even outright agree with, plucking this little bit out to make one response, though.)
The first one, I won't go into that much. If I asked the Woodchuck question then maybe an AI would handle it better than a less sophisticated one (unprimed by its programmer), but non-AI search engines are capable of (un)intelligently linking it up to their intended responses.

Agents changing daily routine are common enough already, to my knowledge. Do the villagers hang around in the marketplace of an evening when the Night Stalker (player, or player-invoked) starts to prowl? The bus no longer travels between stops, sedately, but attempts to escape the area when it gets hit by a stray bullet (or deliberately fired into by the 'traffic surfing' player, stood atop it). All pre-programmed, so limited to whatever modes of operation the developer requires; obviously the scope for AI to verge into further (untested) extremes saves 'development time', once you've developed the flexibility (and before extensive checking that paradoxical emergent behaviours aren't more the norm than desired!).

The third item is (as a start, certainly) already a DF Adventure Mode thang! Talk to Tarn, perhaps, about how much more he'd be able to use AI for?


And the "design me a blood'n'guns game" idea, more meta than the "internal whisper" activating an election within the scenario mentioned later. Programming enthropy requires that some information about guns/blood/elections be available. As imight be made available (DLC-like) regardless. New professions (and on-screen behaviours to go along with them) got added to The Sims all the time. Hard to say that AI alone adds this ("force multiplier", as someone else said).


I appreciate the possibilities, but I'm not yet entirely on board with the "it'll change *everything*!" viewpoints. It'll accelerate some things (adding plenty of potential weirdnesses along the way, perhaps like a spontaneous dragon-cult somehow arising amongst an antagonist faction in a Halo-universe game?) and transfer the development skills to carefully crafting and shaping the scope and limitations the AI should work within, rather than directly scoping and limiting the game itself. Rather than carefully crafting the game to display Alpine-style mountains with Swiss-style architecture in one zone, and Tibetan styling in another of its settings, you tell it where to identify source material to obtain any suitable environment (Norwegian fjords, 'Grand' Canyons, Rift Valley plateaus), and ...for the time being at least... we're still looking at meta-development and meta-curation to fulfil.

Like we can request an AI to create pictures (often slightly off) of manga girls eating noodles, but it is nothing without a corpus of work being supplied of all the original manga source material and some form of tagging. It's a different emphasis from manually composing 'answers' to all conceivable requests, by artists or rather artistic direct-coders, but at least there we'd expect only human limitations, rather than oversights by the AI derived from the attentions of the AI-compositor (either human or 'training AI'), however many layers of departure we're talking from the last human spark of guidance.

(Outside of such things, if an AI runs amok then ultimately it's the fault of some human back in the history of the AI's inception for decisions made. And clearly also to their credit when the AI produces some good outcome, however hidden behind the cascading 'creative rights'. But this verges on philosophical issues, rather than practical ones. Which is why I'm not sure about all these AI recantations, too. There have been so many Nobels and Oppenheimers in history. Would you think Gutenberg might be right to be pro- or anti- any particular book that was printed later? The guild of prehistoric Prometheuses (not a close group, I grant you) have a lot to answer for, with or without whichever individual(s) then decided that a pinch of sulphur here, a dash of ground charcoal there (and the scrapings from the privy wall as well) might be a good idea...).

jipehog

  • Bay Watcher

AI is essentially a force multiplier. You can do things faster with less manpower and/or effort. That's why corporations are trying to regulate it: not really for "ethics" or "safety" (they don't actually give a damn as long as the money flows) but because they're scared of losing their monopoly on this force multiplier, and thus losing money. "Ethical AI" is a smokescreen, mostly, with some ensheeped true believers who drank either corporate Kool-Aid (e.g. various Twitter activists) or made and drank their own (e.g. the LessWrong crowd).

When coding AI becomes good I might be able to make a game, together with my friend, in just 5 months instead of 5 years.
I agree that AI is revolutionary; the key issue is how we manage its impacts, bringing about its numerous potential positive changes (e.g. enhancing and increasing access to education) while limiting/adapting to the negative ones.

For example, as you mentioned, AI is an economic force multiplier. It has the potential to substantially increase productivity and reduce costs without additional labor; however, it can also decrease labor demand, depreciate its value, and offer no employment alternatives. Contrary to what you said, this will affect everyone, not just the corporations, and I think that in the long run the corporations will benefit. Your ability --and a billion other people's-- to make yesterday's games faster will not improve your income prospects; meanwhile, large companies, with their resources and economies of scale, will continue to dominate. Furthermore, as companies are able to automate and reduce their dependence on a wider workforce, I foresee that inequality will rise, giving the rich even more power.

Otherwise most of that post is cheap ad hominem. I could similarly say that there are many whose dissatisfaction with their lot in life turned them into narcissists/true believers in delusional idealist ideas, who want to burn the system down because the alternative must be better than this.
« Last Edit: May 31, 2023, 02:18:55 pm by jipehog »

lemon10

  • Bay Watcher
  • Citrus Master

(Plenty of interesting things said, here and later, which I might even outright agree with, plucking this little bit out to make one response, though.)
Oh no worries, I was even thinking about adding a disclaimer like this to my previous post since this is indeed a very interesting and speculative topic.
There isn't any need to actually respond to all the stuff I'm saying because I'm also just doing a lot of thinking out loud.
(Outside of such things, if an AI runs amok then ultimately it's the fault of some human back in the history of the AI's inception for decisions made.
On one hand sure, if an AI runs amok it's the fault of a human somewhere down the line, but that's the same as saying that if your child ever does something bad it's your fault.
I mean, it's true; you could have taught them better or not had them or whatever; but in practice it won't be anywhere near that simple, especially when you are on the cutting edge.
There have been so many Nobels and Oppenheimers in history.
I think Oppenheimer is the right comparison here; some of these people are coming to the realization that this stuff has a legitimate chance of ending the human race or supplanting our place in the world, not just eventually but in our lifetime, and being part of that is pretty existentially terrifying.
but it is nothing without a corpus of work being supplied of all the original manga source material and some form of tagging.
I've seen this type of thought (along with the similar "LLMs aren't thinking and are just flat-out copying stuff off the internet") thrown out a lot as proof that AI is fundamentally lacking, but it feels like complete rubbish to me, because the same is true of humans; without our own training data we can't paint or do art or even speak (although we can totally do stuff like cry or grunt).
Quote from: Newton
“If I have seen a little further it is by standing on the shoulders of Giants”
Or in other words: "Some other dudes gave me good training data and that's the only reason I can do stuff beyond grunt at my fellow cavemen".
And the "design me a blood'n'guns game" idea, more meta than the "internal whisper" activating an election within the scenario mentioned later. Programming entropy requires that some information about guns/blood/elections be available. As might be made available (DLC-like) regardless. New professions (and on-screen behaviours to go along with them) got added to The Sims all the time. Hard to say that AI alone adds this ("force multiplier", as someone else said).
Oh sure, they have to know what a gun or blood or an election is for them to be able to meaningfully interpret your request. But they already *do*, and not even as a hypothetical development: if you go to GPT right now and ask it "What is a gun", it will tell you.

Actually implementing truly new features into games would require it knowing what they are in the context of a game (with regards to programming, etc.), which AI doesn't yet, but the information required for that could simply be grabbed out of some researcher's Steam library.
(And obviously it would be more complex than just grabbing it, which is why it was in the hypothetical area in the first place.)
As might be made available (DLC-like) regardless.
Reducing a ten- or hundred-thousand-dollar job into a voice prompt and possibly a few hours or days of time for your computer to crunch some numbers doesn't strike you as a huge "change everything about video games" type of deal?
And sure, they could make some stuff for DLC, but making an infinite amount of DLC to fit some random person's desires is obviously impossible.
Hard to say that AI alone adds this ("force multiplier", as someone else said).
This statement is honestly perplexing to me, because there are already artists/writers/programmers using current GPT/Stable Diffusion models as a force multiplier.
Like, this isn't in the future: there are currently artists who are using it as a draft tool to produce significantly more, programmers who are now able to hold multiple jobs, press people who can do their job in 4 hours and then just chill the rest of the day, etc.

I have complete confidence when I say that it will indeed change everything to at least the degree the internet or the computer or electricity changed everything in the past.
And if new advancements keep coming down the rails and Moore's law continues to hold, it may very well result in a fundamental change in the human condition.
---
Even just in regards to making games, it's going to be a massive force multiplier. Being able to reduce what could very well be an hour of work into a single plain-language prompt (e.g. "You want to set up a party", or "You have a vendetta against the yakuza, with voice and dialog options that represent this, as well as a quest with a reasonable reward", or "Get me HD graphics for all those mountains in the distant background") is going to result in pretty mind-boggling amounts of time saved.

E: I just realized I probably misinterpreted you at the end there and you were saying it would *just* be a force multiplier.
Which yeah, I disagree with, but we'll see how it impacts gaming over the next decade or two.
« Last Edit: June 01, 2023, 02:25:46 am by lemon10 »
And with a mighty leap, the evil Conservative flies through the window, escaping our heroes once again!
Because the solution to not being able to control your dakka is MOAR DAKKA.

That's it. We've finally crossed over and become the nation of Da Orky Boyz.

MaxTheFox

  • Bay Watcher
  • Just one little road across the whole earth

AI is essentially a force multiplier. You can do things faster with less manpower and/or effort. That's why corporations are trying to regulate it: not really for "ethics" or "safety" (they don't actually give a damn as long as the money flows) but because they're scared of losing their monopoly on this force multiplier, and thus losing money. "Ethical AI" is a smokescreen, mostly, with some ensheeped true believers who drank either corporate Kool-Aid (e.g. various Twitter activists) or made and drank their own (e.g. the LessWrong crowd).

When coding AI becomes good I might be able to make a game, together with my friend, in just 5 months instead of 5 years.
I agree that AI is revolutionary; the key issue is how we manage its impacts, bringing about its numerous potential positive changes (e.g. enhancing and increasing access to education) while limiting/adapting to the negative ones.

For example, as you mentioned, AI is an economic force multiplier. It has the potential to substantially increase productivity and reduce costs without additional labor; however, it can also decrease labor demand, depreciate its value, and offer no employment alternatives. Contrary to what you said, this will affect everyone, not just the corporations, and I think that in the long run the corporations will benefit. Your ability --and a billion other people's-- to make yesterday's games faster will not improve your income prospects; meanwhile, large companies, with their resources and economies of scale, will continue to dominate. Furthermore, as companies are able to automate and reduce their dependence on a wider workforce, I foresee that inequality will rise, giving the rich even more power.

Otherwise most of that post is cheap ad hominem. I could similarly say that there are many whose dissatisfaction with their lot in life turned them into narcissists/true believers in delusional idealist ideas, who want to burn the system down because the alternative must be better than this.
You seem to think I want to become rich. I really don't. I just want to be creative in peace, and AI can assist me with that. That is why I support UBI: I want enough to feed myself, with a bit to spend on luxury, but I don't seek to make the line go up ad infinitum.

I disagree with your point however. If open-source AI is sufficiently distributed, entertainment media corporations will have less of a chokehold, as indie games and movies with unique concepts (rather than ones made to maximize profit) can be made with quality similar to AAA games and blockbuster movies. Sure, the corporations will be able to make even more detailed ones... but I believe there are diminishing returns between that and quality, and we are soon hitting the plateau. Market oversaturation hurts megacorps precisely because of this depreciation of value. AI regulations are meant to rein in this kind of oversaturation, and are generally lobbied for by corporations. Thus, I oppose all AI regulations except things like "don't use it for social credit systems".
« Last Edit: May 31, 2023, 09:00:38 pm by MaxTheFox »
Woe to those who make unjust laws, to those who issue oppressive decrees, to deprive the poor of their rights and withhold justice from the oppressed of my people, making widows their prey and robbing the fatherless. What will you do on the day of reckoning, when disaster comes from afar?

Starver

  • Bay Watcher

Quote from: Newton
“If I have seen a little further it is by standing on the shoulders of Giants”
Or in other words: "Some other dudes gave me good training data and that's the only reason I can do stuff beyond grunt at my fellow cavemen".
Or "Robert Hooke had nothing to do with any of my brilliance..!", some would say.

Anyway, LLMs aren't understanding; they're just copying and recombining fragments under a statistically directed, generalised rule, so the output is always superficially similar in structure to its various sources (if the programmers, curators and caretakers of the thing have done enough work to sustain even that).

And I also happen to think the human brain is just as physically limited; it just has vastly more complexity, and inconceivably more complex algorithms. I've said before that true electronic intelligence is entirely feasible: all it has to do is enough to be a significant analogue (or digital equivalent!) of a whole brain's normal biochemical 'processing', and in the right sort of way. And we're nowhere near reproducing this.


Then parents (or legal guardians) are indeed made responsible for their infant children (what they subject them to, as well as potentially what the child goes and does), and onwards until a certain degree of maturity (after which they can go off the rails with more social disapproval than parental responsibility, but it's a while until they're considered fully independent), but no AI has a chance of exceeding even that lower limit right now, if you had some "autonomy-adjusted age" measure. We can afford to be very conservative on this, as we're quite a way off an AI legitimately campaigning for its own emancipation. If you want other analogues, "corruption of a minor", or whatever an anti-Fagin law would be (if not actual child labour legislation), could cover the philosophical parallels. Except that I imagine misuse of property (or anything up to and including "assault with a deadly weapon") would be the more relevant, right now and for the foreseeable future.

Of all the problems (or benefits): we need to look forward, but while we're trying to work out what's perhaps just beyond the ultimate control of the human elements, we shouldn't ignore that much of what we see is plainly just procedurally generated in complex (perhaps opaque) but actually deterministic ways. If I don't believe in a divine spark existing even in my own head, I'm not going to imagine one spontaneously occurring in the emergent behaviour of a fancy calculator.