Bay 12 Games Forum


Author Topic: If we have Human-level chatbots, won't we end up being ruled by possible people?  (Read 2293 times)

MaxTheFox


"Genius" that is a blind, deaf amnesiac? Do you even understand how neural network chatbots work?

I've outlined in exhausting detail why they could come up with answers better then the average bear if you give them the correct information. You think just because they can't independently observe their intelligence is useless. Every question asked on the internet is to people who can't directly observe your problem, and yet it's kinda fucking popular.
First of all, that won't solve the problem of small talk. No amount of dataset curation can. You would need actual perception for that to be possible. That would indeed be solvable with the correct information... but GPT can't get that sort of information in real time, by its very nature. Second of all, retraining neural nets, particularly large ones, on new info is expensive and time-consuming, so that won't solve discussing the news or giving feedback on newly released media. I won't be able to, e.g., talk to a chatbot about my SF setting or discuss that newly released AAA game with it.

This is why I would never consider a GPT chatbot sapient, no matter how large its dataset is. I reserve the term "pseudosapient" for such hypothetical entities. The seams will always be there without a fundamentally different approach to AI.

dragdeler


Oh wow, did all the chatbots become paywalled? I should have a URL at home; we will see if it's still active. It was some news article where you could chat with a chatbot. It's a few years old by now, but if I manage to talk to one (preferably GPT-3) I will talk very purposefully, ask uniquely scoopian stuff, and provide you the log.

Our explanations don't seem to be working, but maybe if you see what I mean...?

Scoops Novel


It's not there yet, but it will be in the near term. Every version of GPT is a leap, and they come out on a yearly basis.

It doesn't need to be sapient, so long as you can get clever responses to hypotheticals.

Look man, here's a nice professional piece talking about exactly what I am, in more arcane language. Always with the narrow boxing, guys.

Key points:

(In relation to the advantage of the term Simulator)

  • It does not imply that the AI is only capable of emulating things with direct precedent in the training data. A physics simulation, for instance, can simulate any phenomenon that plays by its rules.
  • It emphasizes the role of the model as a transition rule that evolves processes over time (see the sketch below). The power of factored cognition / chain-of-thought reasoning is obvious.
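A minimal sketch of that "transition rule" view, hedged heavily: the language model is treated as nothing more than a function from the text so far to a distribution over the next token, and the "simulation" is just rolling that rule forward. The next_token_distribution function here is a made-up placeholder, not any real model's API.

```python
# Hypothetical sketch: a language model viewed as a transition rule over text.
# `next_token_distribution` is an assumed placeholder standing in for a real model.
import random

def next_token_distribution(state: str) -> dict[str, float]:
    # A real model would return P(next token | text so far); this is a stub.
    return {" the": 0.5, " a": 0.3, ".": 0.2}

def simulate(prompt: str, steps: int = 10) -> str:
    state = prompt
    for _ in range(steps):
        dist = next_token_distribution(state)                # the transition rule
        tokens, weights = zip(*dist.items())
        state += random.choices(tokens, weights=weights)[0]  # evolve the process one step
    return state

print(simulate("The simulator rolls the text forward"))
```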

MaxTheFox


Okay, you missed my and drag's point.

Being a good conversationalist, for the kind of conversations humans actually have, requires both actual perception of the world and a good memory. Throwing more data at the problem doesn't change the fact that your bot can get context from neither the immediate outside world (e.g. the weather, the outcome of the run in your favorite roguelite that you just finished) nor recent events (the latest happenings in Ukraine, the recent election in the Republic of Placeholderland). The only solutions are either frequently retraining the network (computationally expensive as shit, out of reach of consumers for the foreseeable future) or integrating other mechanisms into GPT (good luck).
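For what that second option might look like, a minimal, hypothetical sketch: fresh facts are fetched at question time and pasted into the prompt instead of retraining anything. Both fetch_recent_facts and the prompt format are assumptions for illustration, not something GPT itself provides.

```python
# Hypothetical sketch of "integrating other mechanisms": fetch fresh context
# at query time and prepend it to the prompt, instead of retraining the model.
def fetch_recent_facts(topic: str) -> list[str]:
    # Assumed stand-in for a news feed, search index, or local notes lookup.
    return [f"(placeholder) latest item about {topic}"]

def build_prompt(question: str, topic: str) -> str:
    facts = "\n".join(f"- {fact}" for fact in fetch_recent_facts(topic))
    return (
        "Use the context below when it is relevant.\n"
        f"Context:\n{facts}\n\n"
        f"Question: {question}\nAnswer:"
    )

print(build_prompt("How did my latest roguelite run go?", "my roguelite run"))
```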

I don't disagree that it would be a good answerer of hypothetical questions. That's not my point. Most people (ahem) don't have every single one of their conversations be a hypothetical question.

McTraveller


"Getting the right answer" is not intelligence.  "Knowing why the answer is right" is a better measure of intelligence.  "Being able to teach others why the answer is right" is perhaps even an even better measure.

That said - it's practically irrelevant if a group of people are following instructions generated by a human versus those generated by a non-sentient or non-sapient computer program.  If you are getting instructions on how to make a sandwich, and the result of following the instructions is a tasty, arguably nutritious food, does it matter?

If the instructions are "how to avoid going bankrupt" or "how to avoid political unrest" or "how to establish equity and diversity", does it matter how they are generated?  Note this is not "do the ends justify the means" - the "means" would be the particular instructions, such as if "instructions to get a tasty sandwich: rob a famous diner."

Remember that almost all AI today is simply a stochastic pattern matching device. There is some research and work occurring about how to give some of these devices "agency" which goes beyond the pattern matching.

Scoops Novel


McTraveller's getting it.

If you can position your question in a flexible enough metaphor, it doesn't matter what the real details are. Or... just work them into the metaphor.

King Zultan


Quote from: Scoops Novel
If you can position your question in a flexible enough metaphor, it doesn't matter what the real details are. Or... just work them into the metaphor.
You make it seem like talking to this thing will be a pain in the ass, and that it'll give us vague answers. Not sure I have any use for such a thing.

scriver


If chatbots are so great, why do you insist on writing all your nonsense posts to people here instead of just telling it to... I dunno any actual chatbot names. Let's just go with Charles.

MaxTheFox


"Getting the right answer" is not intelligence.  "Knowing why the answer is right" is a better measure of intelligence.  "Being able to teach others why the answer is right" is perhaps even an even better measure.

That said - it's practically irrelevant if a group of people are following instructions generated by a human versus those generated by a non-sentient or non-sapient computer program.  If you are getting instructions on how to make a sandwich, and the result of following the instructions is a tasty, arguably nutritious food, does it matter?

If the instructions are "how to avoid going bankrupt" or "how to avoid political unrest" or "how to establish equity and diversity", does it matter how they are generated?  Note this is not "do the ends justify the means" - the "means" would be the particular instructions, such as if "instructions to get a tasty sandwich: rob a famous diner."

Remember that almost all AI today is simply a stochastic pattern matching device. There is some research and work occurring about how to give some of these devices "agency" which goes beyond the pattern matching.
Again, instructions and valuable(ish) questions would be a good fit for GPT. But not human small talk. No way would I be truly "friends" with a chatbot.

dragdeler


Welp, I didn't find the thing because the search engine is cluttered with results that are too recent.

Scoops Novel


Quote from: MaxTheFox
Again, instructions and valuable(ish) questions would be a good fit for GPT. But not human small talk. No way would I be truly "friends" with a chatbot.

That wasn't the point I was trying to make, at all. Look at the thread title, man.

Quote from: dragdeler
Welp, I didn't find the thing because the search engine is cluttered with results that are too recent.

You want AI Dungeon.

Quote from: scriver
If chatbots are so great, why do you insist on writing all your nonsense posts to people here instead of just telling it to... I dunno any actual chatbot names. Let's just go with Charles.

If other people are engaging with it... then other people are fucking engaging with it, scriver. Always with the narrow boxing, as I said.

Strongpoint


100B, 1T, or 10T GPT models will still:

1) Be unable to react to current events
2) Either have a short memory OR produce completely random answers because of some unrelated conversation 5K tokens ago (see the sketch below)
3) Be incredibly, unbelievably bad at everything math-related, and at everything that requires abstract thinking
4) Most importantly, still possess nothing resembling self-awareness
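A rough illustration of point 2, with made-up numbers: a chat front end has to drop old turns to stay inside a fixed context window, so anything before the cutoff simply stops existing for the model. The 4096-token budget and the whitespace "tokenizer" are assumptions for the sketch, not any specific model's figures.

```python
# Hypothetical sketch of why "memory" is short: the front end trims history
# to fit a fixed context window, so older turns silently disappear.
CONTEXT_BUDGET = 4096  # assumed token limit

def count_tokens(text: str) -> int:
    return len(text.split())  # crude stand-in for a real tokenizer

def trim_history(turns: list[str]) -> list[str]:
    kept, used = [], 0
    for turn in reversed(turns):       # walk backwards from the newest turn
        used += count_tokens(turn)
        if used > CONTEXT_BUDGET:
            break                      # everything older than this is forgotten
        kept.append(turn)
    return list(reversed(kept))

print(trim_history(["user: hi", "bot: hello", "user: about that thing 5K tokens ago..."]))
```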

Can we create actual AIs that can manipulate us? Probably. Those won't be language models.

MaxTheFox


Quote from: Scoops Novel
That wasn't the point I was trying to make, at all. Look at the thread title, man.
You talked about chatbots manipulating people. That requires actually having agency instead of being a fancy answer machine.

Quote from: Strongpoint
100B, 1T, or 10T GPT models will still: [...] Can we create actual AIs that can manipulate us? Probably. Those won't be language models.
^ my point exactly.

Scoops Novel


Just to win this argument:

The guy who wrote that Simulators article specifically generated this as a proof of concept.

The future is scary, who knew.

dragdeler


But to put the masses under its thumb it only needed to write: you're a wizard, Harry.