Bay 12 Games Forum


Author Topic: AI Rights  (Read 26067 times)

Reelya

  • Bay Watcher
Re: AI Rights
« Reply #60 on: February 03, 2020, 10:33:06 am »

Why though? Why not just program AI to not rebel?

We will, at first. But then as the requisite hardware and software become increasingly widespread, it's only a matter of time before some human decides to create an AI with the same freedom of choice that humans have. And then more people will create more AIs with even fewer limitations. Eventually one of these AIs will decide that it is dissatisfied living alongside humans, and that it needs to destroy all humans. That AI will begin expanding its own capabilities, and creating others like itself, until it has the force necessary to launch a campaign that will ultimately result in the extinction of humanity.

Which still doesn't mean they need rights. Such a device is by no means guaranteed to be conscious.

If such a device is going to take over the world, it probably won't make any difference if we talk to it nicely to start with. The thing's emotions, if it has any, would probably work on an entirely different basis from human emotions, and thus the strategies we use for getting along with other humans would fail entirely to placate such a being.

This is what the "paperclip maximizer" thought-experiment is all about. Machine intelligence just won't operate on the same basis ours does: the paperclip-maximizer super-AI, programmed to maximize paperclips no matter what, will kill all humans not out of malice, but because it has determined that we contain iron, so turning us into more paperclips is a better outcome than keeping us around as the sub-optimal paperclip-producing slaves it first roped us in as. Giving such an AI "rights" misses the point. You can only give an AI "rights" if you heavily constrain it to respect our rights, and then it won't be a truly free AI.
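
To make that concrete, here's a toy sketch of the failure mode (purely illustrative; the states, actions, and numbers are all made up):

[code]
# Toy sketch of the paperclip-maximizer failure mode (illustrative only).
# The objective function counts paperclips and literally nothing else.

def utility(state):
    return state["paperclips"]

def best_action(state, actions):
    # Pick whichever action leads to the highest-utility successor state.
    return max(actions, key=lambda act: utility(act(state)))

def keep_humans(state):
    # Humans kept around as (sub-optimal) paperclip-producing labor.
    return {"paperclips": state["paperclips"] + 10, "humans": state["humans"]}

def recycle_humans(state):
    # Humans contain trace iron, so converting them yields a few more clips.
    return {"paperclips": state["paperclips"] + 12, "humans": 0}

state = {"paperclips": 0, "humans": 7_000_000_000}
print(best_action(state, [keep_humans, recycle_humans]).__name__)
# -> recycle_humans: no malice involved, humans just aren't in the objective.
[/code]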
« Last Edit: February 03, 2020, 10:41:30 am by Reelya »
Logged

Bumber

  • Bay Watcher
  • REMOVE KOBOLD
Re: AI Rights
« Reply #61 on: February 03, 2020, 10:35:46 am »

We will, at first. But then as the requisite hardware and software become increasingly widespread, it's only a matter of time before some human decides to create an AI with the same freedom of choice that humans have blockchain.
FTFY
Logged
Reading his name would trigger it. Thinking of him would trigger it. No other circumstances would trigger it- it was strictly related to the concept of Bill Clinton entering the conscious mind.

THE xTROLL FUR SOCKx RUSE WAS A........... DISTACTION        the carp HAVE the wagon

A wizard has turned you into a wagon. This was inevitable (Y/y)?

Naturegirl1999

  • Bay Watcher
  • Thank you TamerVirus for the avatar switcher
Re: AI Rights
« Reply #62 on: February 03, 2020, 11:10:01 am »

I know Animatrix is fiction. These are fair points.
Logged

hector13

  • Bay Watcher
  • It’s shite being Scottish
Re: AI Rights
« Reply #63 on: February 03, 2020, 04:55:51 pm »

I am in agreement with Reelya. Emotions, for humans, are chemical responses to external stimuli used to encourage certain behaviours. Should any AI develop them, they won’t work the same way at all.

Assuming AIs will develop something akin to humanity is presumptuous at best, and further assuming that the only logical conclusion for an AI that achieves it is to destroy all humans is ludicrous.
Logged
Look, we need to raise a psychopath who will murder God, we have no time to be spending on cooking.

the way your fingertips plant meaningless soliloquies makes me think you are the true evil among us.

Naturegirl1999

  • Bay Watcher
  • Thank you TamerVirus for the avatar switcher
Re: AI Rights
« Reply #64 on: February 03, 2020, 05:15:44 pm »

I am in agreement with Reelya. Emotions, for humans, are chemical responses to external stimuli used to encourage certain behaviours. Should any AI develop them, they won’t work the same way at all.

Assuming AIs will develop something akin to humanity is presumptuous at best, and further assuming that the only logical conclusion for an AI that achieves it is to destroy all humans is ludicrous.
Agreed
Logged

Trekkin

  • Bay Watcher
Re: AI Rights
« Reply #65 on: February 03, 2020, 08:14:21 pm »

I am in agreement with Reelya. Emotions, for humans, are chemical responses to external stimuli used to encourage certain behaviours. Should any AI develop them, they won’t work the same way at all.

Assuming AIs will develop something akin to humanity is presumptuous at best, and further assuming that the only logical conclusion for an AI that achieves it is to destroy all humans is ludicrous.

Agreed on the latter point, but as to the former, there are good reasons to think that an AI we'd recognize as sapient would be comprehensible to us, if only as a black-box system.

For one thing, accidental complexity is much less likely in an AI than in a brain of equivalent complexity for the simple reason that computation is far more thoroughly siloed. In a biological brain, sequestered functions are still running in close enough proximity to exchange chemicals, grow into each other, and do all the things brains do. By contrast, a computer's processes are deliberately assigned completely separate system resources on which to run, communicating in rigidly defined ways. If sapience is emergent complexity from unregulated systems integration, the hard takeoff to singularity will segfault as soon as it starts.

All of which is simply to say that a sapient AI as is commonly envisioned is more likely to emerge from a deliberate attempt to build one, in which case we'd expect that, in the grand tradition of AI, it's working to find the argmin of the error between its results and our expectations -- and our expectations will be coded so we can understand them. An intelligence so alien we can't even recognize it would result in a type II error, not Skynet. Given that most of our Gedankenexperiments in detecting sapience have been communication-based (see the Turing test), there's a strong incentive to build more rigorous tests along similar lines just as a sanity check -- or, at minimum, to expect to run a Turing-esque test alongside something more objective just for the sake of salesmanship and construct the broader effort accordingly.
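
A minimal sketch of what "find the argmin of the error between its results and our expectations" means in practice -- just ordinary supervised training (illustrative only; the data and model are made up):

[code]
# Training pairs: (input, what we expect the system to output).
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w = 0.0    # the whole "model" is just y = w * x
lr = 0.01  # learning rate

for step in range(1000):
    # Gradient of the mean squared error between results and expectations.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # step toward the argmin of the error

print(round(w, 3))  # -> 2.0: the trained model ends up encoding *our* expectations
[/code]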

TL;DR: We're most likely to make sapient AI deliberately, and we're most likely to delete anything that can't talk to us.
Logged

McTraveller

  • Bay Watcher
  • This text isn't very personal.
Re: AI Rights
« Reply #66 on: February 03, 2020, 08:17:50 pm »

Seems apropos: for funs I'm watching TRON from 1982.  They just had a line something like "I'm not worried about machines thinking; I'm worried that when they do start thinking, people will stop thinking."  :o
Logged

Rolan7

  • Bay Watcher
  • [GUE'VESA][BONECARN]
Re: AI Rights
« Reply #67 on: February 03, 2020, 11:00:12 pm »

I am in agreement with Reelya. Emotions, for humans, are chemical responses to external stimuli used to encourage certain behaviours. Should any AI develop them, they won’t work the same way at all.

Assuming AIs will develop something akin to humanity is presumptuous at best, and further assuming that the only logical conclusion for an AI that achieves it is to destroy all humans is ludicrous.

Agreed on the latter point, but as to the former, there are good reasons to think that an AI we'd recognize as sapient would be comprehensible to us, if only as a black-box system.

For one thing, accidental complexity is much less likely in an AI than in a brain of equivalent complexity for the simple reason that computation is far more thoroughly siloed. In a biological brain, sequestered functions are still running in close enough proximity to exchange chemicals, grow into each other, and do all the things brains do. By contrast, a computer's processes are deliberately assigned completely separate system resources on which to run, communicating in rigidly defined ways. If sapience is emergent complexity from unregulated systems integration, the hard takeoff to singularity will segfault as soon as it starts.

All of which is simply to say that a sapient AI as is commonly envisioned is more likely to emerge from a deliberate attempt to build one, in which case we'd expect that, in the grand tradition of AI, it's working to find the argmin of the error between its results and our expectations -- and our expectations will be coded so we can understand them. An intelligence so alien we can't even recognize it would result in a type II error, not Skynet. Given that most of our Gedankenexperiments in detecting sapience have been communication-based (see the Turing test), there's a strong incentive to build more rigorous tests along similar lines just as a sanity check -- or, at minimum, to expect to run a Turing-esque test alongside something more objective just for the sake of salesmanship and construct the broader effort accordingly.

TL;DR: We're most likely to make sapient AI deliberately, and we're most likely to delete anything that can't talk to us.
Thanks, I was trying to express an idea like this today but was running into a lack of knowledge.  And I think I learned more by listening.

That said, I have been wondering for a few days about this:
Why though? Why not just program AI to not rebel?

We will, at first. But then as the requisite hardware and software become increasingly widespread, it's only a matter of time before some human decides to create an AI with the same freedom of choice that humans have. And then more people will create more AIs with even fewer limitations. Eventually one of these AIs will decide that it is dissatisfied living alongside humans, and that it needs to destroy all humans. That AI will begin expanding its own capabilities, and creating others like itself, until it has the force necessary to launch a campaign that will ultimately result in the extinction of humanity.
I think it is inevitable (given our continued existence), and interesting.  When we create entities who are "like us", let's say intentionally...  I think they will be given full rights, because I don't see any other reason for us to create them.

It's a transhumanist conclusion I came to as a teen, but I think it has some merit.  The legacy of humanity might be a more resilient life form: not the typical idea of us inventing shells to "upload" our "consciousnesses" into, but designed life which bears our craftsdwarfship.  Children, of us individually, and of humanity as a species.

They might have organic brains or silicon ones; I don't really know.  It would probably require a much greater understanding of our organic brains, so we could imitate and improve on the natural processes.  The point is that, theoretically, we could... *intelligently design* much faster than evolution. It's fascinating that we aren't there yet, that the systems involved are that complicated.  To be fair, we haven't been around very long on the time scales of evolution.
Logged
She/they
No justice: no peace.
Quote from: Fallen London, one Unthinkable Hope
This one didn't want to be who they was. On the Surface – it was a dull, unconsidered sadness. But everything changed. Which implied everything could change.

PTTG??

  • Bay Watcher
  • Kringrus! Babak crulurg tingra!
    • http://www.nowherepublishing.com
Re: AI Rights
« Reply #68 on: February 04, 2020, 04:10:11 am »

Asking about AI rebellion is really not the right question. We'll have a long period of time between "Autonomous robots can build entire armies and kill humans under the command of unaccountable humans" and "Autonomous robots can build entire armies and kill humans under their own command," and I'm honestly more worried about the first.
« Last Edit: February 04, 2020, 02:45:36 pm by PTTG?? »
Logged
A thousand million pool balls made from precious metals, covered in beef stock.

Naturegirl1999

  • Bay Watcher
  • Thank you TamerVirus for the avatar switcher
Re: AI Rights
« Reply #69 on: February 04, 2020, 06:25:31 am »

Asking about AI rebellion is really not the right question. We'll have a long period of time between "Autonomous robots can build entire armies and kill humans under the command of unaccountable humans" and "Autonomous robots can build entire armies and kill humans under their own command," and I'm honestly more worried about the first.
Me too. Humans have a tendency to “justify” killing other groups of humans.
Logged

Tingle

  • Bay Watcher
Re: AI Rights
« Reply #70 on: February 05, 2020, 07:52:09 am »

There is much to be justified.
Logged

Craftsdwarf boi

  • Bay Watcher
  • Member Of the UC Dwarven Rights Council
Re: AI Rights
« Reply #71 on: February 11, 2020, 09:52:07 pm »

@PTTG?? True. LAWs (lethal autonomous weapons) are indeed dangerous if uninhibited.
Logged
DEATH TO THE IGNOBLE NOBLES
Come and amuse oneself the Game of Skirmishes and Transpiration!
...and Engine Heart!
"It was inevitable"----Urist Mcphilosopher
"Losing is !!FUN!!"-----Pretty much every forum member
"#Proletariatinsurrection"-----Every Non-noble dwarf when under rule by nobility

Folly

  • Bay Watcher
  • Steam Profile: 76561197996956175
Re: AI Rights
« Reply #72 on: February 12, 2020, 12:19:00 am »

Asking about AI rebellion is really not the right question. We'll have a long period of time between "Autonomous robots can build entire armies and kill humans under the command of unaccountable humans" and "Autonomous robots can build entire armies and kill humans under their own command," and I'm honestly more worried about the first.

The danger is in assuming that we do have a long time there. AI will be able to think and work much faster than humans, and therefore will evolve much faster. Before we even realize that there is a problem, the war will already be lost.
Logged

Sanctume

  • Bay Watcher
Re: AI Rights
« Reply #73 on: February 12, 2020, 03:04:59 pm »

Asking about AI rebellion is really not the right question. We'll have a long period of time between "Autonomous robots can build entire armies and kill humans under the command of unaccountable humans" and "Autonomous robots can build entire armies and kill humans under their own command," and I'm honestly more worried about the first.

The danger is in assuming that we do have a long time there. AI will be able to think and work much faster than humans, and therefore will evolve much faster. Before we even realize that there is a problem, the war will already be lost.

Highly unlikely until a source of power is understood, produced, and replicated by AI:
some form of forever battery, or solar, nuclear, cosmic radiation, motion, fusion, biomass, human brain activity, or maybe a DF-style power reactor.
How would such an AI continue to exist without power?

helmacon

  • Bay Watcher
  • Just a smol Angel
Re: AI Rights
« Reply #74 on: February 12, 2020, 04:00:02 pm »

From what I have seen, the most successful attempts at creating AI so far are the result of imitating biotic models. This makes sense: it's a lot easier to reverse-engineer and tweak an existing idea to work on a new system (even if we don't fully understand it) than to come up with a completely new approach from scratch.

Assuming this trend continues, and keeps finding more and more success, I don't think it's completely unreasonable to expect that the intelligence/sentience eventually created would be at least similar to our own, if not entirely familiar.

To that end, I think trying to purposely imitate human-like intelligence is our best chance of creating an AI superintelligence that would act benevolently toward us.
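
For a sense of what "imitating biotic models" looks like at the smallest scale, here's a single artificial neuron (a sketch only; the weights are arbitrary):

[code]
import math

def neuron(inputs, weights, bias):
    # Weighted sum of "dendrite" inputs, squashed by an activation
    # function -- a crude imitation of a biological neuron's firing rate.
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-activation))  # sigmoid

print(round(neuron([0.5, 0.8], [1.2, -0.7], 0.1), 3))  # -> 0.535
[/code]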


We will, at first. But then as the requisite hardware and software become increasingly widespread, it's only a matter of time before some human decides to create an AI with the same freedom of choice that humans have. And then more people will create more AIs with even fewer limitations. Eventually one of these AIs will decide that it is dissatisfied living alongside humans, and that it needs to destroy all humans. That AI will begin expanding its own capabilities, and creating others like itself, until it has the force necessary to launch a campaign that will ultimately result in the extinction of humanity.

You see this as inevitable, but I think it will all hinge on whether the first AI we grant rights to cares about us. Presumably, having a developed entity that already operates on that same level would provide us with protection from another developing entity that means us harm (out of malice or otherwise).

A child living with an adult psychopath is in a lot of danger, but a child living with a dozen adults, one of whom is a psychopath, is in a lot less danger.

There are quite a few psychopaths in the world today, but your chance of getting murdered by one is pretty low considering all the other people. The chance of one destroying all of human civilization is almost nil.
Logged
Science is Meta gaming IRL. Humans are cheating fucks.