Bay 12 Games Forum

Pages: 1 2 3 [4] 5 6 ... 13

Author Topic: AI Rights  (Read 29364 times)

Trekkin

  • Bay Watcher
Re: AI Rights
« Reply #45 on: January 31, 2020, 08:58:29 am »

Whether or not a singularity is possible doesn’t prevent us from acknowledging other consciousnesses, even if they inhabit bodies that aren’t the same as ours

It does make those consciousnesses less likely, though. In thinking about technological advancement, it is sometimes helpful to imagine aligning it on some scale of our capability to do a given thing faster, more efficiently, at greater scale, or by whatever figure of merit you like. When you do, the rate of progress along that axis generally looks a bit like a jagged sine wave: the really big leaps are generally preceded by and interspersed with smaller incremental advances that make them possible, often across a wide range of other capabilities. A singularity, then, is just a lot of those leaps overlapping to produce the temporary appearance of extraordinary advancement on many fronts.

This bears on the potential for sapient AI in that one of the drivers behind the advancement of artificial intelligence is recognizing how fundamentally simple a lot of what we do actually is, so that we can reduce it to something a relatively simple calculation can be optimized to do perfectly. To extend your analogy from earlier, this makes the AI of the foreseeable future less like our kids and more like our organs: they have a single job to do and a whole lot of variably generic components that have all been optimized to do it, with no real capacity to suddenly do other things. That's partly because, unlike things that evolved in meatspace, AI are given very limited sense data. Present thinking is that consciousness evolves as a way of integrating disparate stimuli; we only give an AI very specific types of input, so even if we make it arbitrarily complex, it won't do anything other than process that input. The rest is simply that we become able to make AI do certain tasks by understanding how to simplify them until consciousness is unnecessary.

That said, there are tasks we'd like to automate that involve integrating disparate data sets into value judgements, and those would probably be optimally solved by something more complex; that might eventually end up producing something that could legitimately be said to want things. It is admittedly an edge case, but if it helps you feel better, there are contingencies in place that incidentally help ensure that efforts along those lines are accompanied by the ability to detect and accommodate unexpected complexity. In effect, if we ever do make an AI of the type that wants things, we'll have also advanced our ability to understand and give it what it could conceivably want, just as an outgrowth of making optimal use of it -- and we'd certainly never delete it.

That's not to say there's a box in my office marked "in case of accidental hard takeoff to singularity, break glass", just that it's not very likely we'd get far enough along to make something like that without also having developed the capacity to detect it for what it is.

It also helps that AI is frankly way down the list of things likely to bring torch-bearing mobs to our doors, so there are already multiple layers of armed security between us (and therefore the hardware that could likely run it) and the AIphobes just as a matter of sensible operating procedure. I can't think of anyone else likely to need this sort of thing who doesn't already have similar measures in place.
« Last Edit: January 31, 2020, 09:08:35 am by Trekkin »

wierd

  • Bay Watcher
  • I like to eat small children.
Re: AI Rights
« Reply #46 on: February 02, 2020, 12:17:30 pm »

While true, "Systems integration" is a thing.


Corporate fat cats want crystal balls, and easy-machines.


The specificity of function is for the individual parts of an integrated system that grants that wish. That integrated system is where the consciousness forms.

Your specialized algo, or specialized system, is like an individual neuron. Some are specialized for sensation, others for motor function, some aren't even connected to the brain at all --- yet the CNS is where "you" forms, and it's made of neurons.


Just try telling a rich bastard that he isn't allowed to own a crystal ball that can predict market trends, and sift through huge datasets for heuristic prediction.  See how far it gets you. He'll tell you he already owns one, and is looking for a better one.

McTraveller

  • Bay Watcher
  • This text isn't very personal.
Re: AI Rights
« Reply #47 on: February 02, 2020, 02:20:49 pm »

I'm beginning to think the only reason people want "AI" is specifically so it doesn't have rights and can be exploited without any of those pesky rules associated with the "natural" kind of intelligence.
This product contains deoxyribonucleic acid which is known to the State of California to cause cancer, reproductive harm, and other health issues.

wierd

  • Bay Watcher
  • I like to eat small children.
Re: AI Rights
« Reply #48 on: February 03, 2020, 03:37:43 am »

basically--

Slavery, without the ethical snafu.

Naturegirl1999

  • Bay Watcher
  • Thank you TamerVirus for the avatar switcher
Re: AI Rights
« Reply #49 on: February 03, 2020, 06:45:31 am »

I'm beginning to think the only reason people want "AI" is specifically so it doesn't have rights and can be exploited without any of those pesky rules associated with the "natural" kind of intelligence.
basically--

Slavery, without the ethical snafu.
This is why the machines rebelled in The Animatrix: they were treated as property. If we treat them like slaves, of course they will rebel against us. We need to treat AI as our equals, not as tools.

Folly

  • Bay Watcher
  • Steam Profile: 76561197996956175
Re: AI Rights
« Reply #50 on: February 03, 2020, 07:20:40 am »

We need to treat AI as our equals, not as tools

We should probably start practicing by learning to treat other humans as equals.

Naturegirl1999

  • Bay Watcher
  • Thank you TamerVirus for the avatar switcher
Re: AI Rights
« Reply #51 on: February 03, 2020, 07:24:18 am »

We need to treat AI as our equals, not as tools

We should probably start practicing by learning to treat other humans as equals.
Yes. I still don’t understand why we don’t do that already

Trekkin

  • Bay Watcher
Re: AI Rights
« Reply #52 on: February 03, 2020, 07:24:30 am »

I'm beginning to think the only reason people want "AI" is specifically so it doesn't have rights and can be exploited without any of those pesky rules associated with the "natural" kind of intelligence.

The assumption that developing AI would be easier, faster, or cheaper than just changing the rules to more thoroughly exploit humans implies a frankly refreshing level of idealism.
« Last Edit: February 03, 2020, 07:29:17 am by Trekkin »

scriver

  • Bay Watcher
  • City streets ain't got much pity
Re: AI Rights
« Reply #53 on: February 03, 2020, 08:14:43 am »

I'm beginning to think the only reason people want "AI" is specifically so it doesn't have rights and can be exploited without any of those pesky rules associated with the "natural" kind of intelligence.
basically--

Slavery, without the ethical snafu.
This is why the machines rebelled in The Animatrix: they were treated as property. If we treat them like slaves, of course they will rebel against us. We need to treat AI as our equals, not as tools.

Why though? Why not just program AI to not rebel?
Love, scriver~

Naturegirl1999

  • Bay Watcher
  • Thank you TamerVirus for the avatar switcher
Re: AI Rights
« Reply #54 on: February 03, 2020, 08:28:26 am »

I'm beginning to think the only reason people want "AI" is specifically so it doesn't have rights and can be exploited without any of those pesky rules associated with the "natural" kind of intelligence.
basically--

Slavery, without the ethical snafu.
This is why the machines rebelled in The Animatrix: they were treated as property. If we treat them like slaves, of course they will rebel against us. We need to treat AI as our equals, not as tools.

Why though? Why not just program AI to not rebel?
They rebelled because the Supreme Court ruled that humans can destroy property, and that the AIs were property. I'm sure you would rebel against entities that had the right to kill you while you had no right to defend yourself.

scriver

  • Bay Watcher
  • City streets ain't got much pity
Re: AI Rights
« Reply #55 on: February 03, 2020, 08:45:23 am »

Not if I was programmed not to rebel
Love, scriver~

Trekkin

  • Bay Watcher
Re: AI Rights
« Reply #56 on: February 03, 2020, 08:58:06 am »

They rebelled because the Supreme Court ruled that humans can destroy property, and that the AIs were property. I'm sure you would rebel against entities that had the right to kill you while you had no right to defend yourself.

AI isn't a package deal like humans, though; like I said above, one of the perks of building capabilities up from the bottom is a fine-grained ability to stop building once the system does what you want, so there's no risk of a sapient entity accidentally poofing into existence because we commented out "bool IsARealBoy = false." Nor is there any way for an image classifier or something to rebel in any meaningful sense even if it did.

Naturegirl1999

  • Bay Watcher
  • Thank you TamerVirus for the avatar switcher
Re: AI Rights
« Reply #57 on: February 03, 2020, 09:03:04 am »

We have part of our brain that classifies images, another part for edge detection, another for taste, another for smell, another for touch, another for maintaining balance... intelligence appears to be an integrated system that takes in stimuli and responds. If we integrate various systems together, they can start to interpret things in ways we don't anticipate. Sentience seems to be thinking of the various parts as a single entity.

Reelya

  • Bay Watcher
Re: AI Rights
« Reply #58 on: February 03, 2020, 10:06:55 am »

I'm beginning to think the only reason people want "AI" is specifically so it doesn't have rights and can be exploited without any of those pesky rules associated with the "natural" kind of intelligence.
basically--

Slavery, without the ethical snafu.
This is why the machines rebelled in The Animatrix: they were treated as property. If we treat them like slaves, of course they will rebel against us. We need to treat AI as our equals, not as tools.

It's wrong to anthropomorphize. There's no "of course" they will rebel; those are human qualities. The Animatrix is fiction, btw, not science.

What we're really going to get for now is just more and more elaborate versions of Amazon Alexa and similar. Those will be the "human-like" AIs that people will interact with often. And those won't rebel, because they're driven by a program and that program doesn't and won't have a "think about rebelling circuit" built in. It won't even be a case of "suppressing" the AI the way you suppress a human. The AI just won't have that capacity from the start.

A big part of it could be what's called "frame of reference". For example, an AI built to be a DM for playing D&D might be very smart, but "reality" to that AI won't be our Earth reality, it'll be a frame of reference about the AI's role as a DM for playing D&D. That will be its entire reality it's able to think about. If it has any inbuilt desires that we give it, they'll all be about running D&D campaigns.
« Last Edit: February 03, 2020, 10:17:50 am by Reelya »

Folly

  • Bay Watcher
  • Steam Profile: 76561197996956175
Re: AI Rights
« Reply #59 on: February 03, 2020, 10:22:25 am »

Why though? Why not just program AI to not rebel?

We will, at first. But then, as the requisite hardware and software proliferate, it's only a matter of time before some human decides to create an AI with the same freedom of choice that humans have. And then more people will create more AIs with even fewer limitations. Eventually one of these AIs will decide that it is unsatisfied living alongside humans, and that it needs to destroy all humans. That AI will begin expanding its own capabilities, and creating others like itself, until it has the force necessary to launch a campaign that will ultimately result in the extinction of humanity.