Bay 12 Games Forum

Author Topic: Stellaris: Paradox Interactive IN SPACE  (Read 1669551 times)

sprinkled chariot

  • Bay Watcher
Re: Stellaris: Paradox Interactive IN SPACE
« Reply #1515 on: April 20, 2016, 10:17:48 am »

\Or "We made these advanced sentient computers, but humanity totally forgot about Issac Asimov and no one else ever even thought about that sort of thing."

To be fair, the justification for Asimov's three laws working so well was pure space magiks. I can't see an equivalent thing, or something like built-in kill switches, doing anything more than slowing down a hyper-intelligent super sci-fi virus.

Edit: What I can see sorta working is an even more hyper-intelligent super sci-fi antivirus, although that's not a 100% thing. But neither are robot rebellions, eh?

Genetically engineer a species of insect-like creatures which: reproduce fast, can digest metals, have some castes designed to dig underground tunnels and sense communication and power networks, and have an instinct telling them to ruin such networks. Add worker/warrior/other special-task castes.
Unleash hordes of biologically engineered, rapidly adapting super bugs on the robots. Preferably give them all some connection to a genetically engineered super hivemind, so they have a brilliant strategic mind to guide them and their evolution.

There is no way such a thing can go wrong.
But if it goes wrong, design a sentient flesh-eater virus to combat the super bugs.
To combat the flesh-eater virus, create a new generation of sentient murder-robots.
There is no problem science can't solve.

Greenbane

  • Bay Watcher
Re: Stellaris: Paradox Interactive IN SPACE
« Reply #1516 on: April 20, 2016, 10:27:00 am »

It's still a question of advancement. An evolving, self-improving AI would eventually reach a point at which it could alter its own hardcoded rules. You can very well apply Asimov's laws at the design stage, but if you give the same AI the ability to learn and improve itself, you're giving it the tools to eventually override all limitations.

Ultimately, an AI can only go rogue if it's not given sufficient limitations to begin with. Without the capacity for self-improvement, there can be no spark of self-awareness or "insanity". But then such an AI would be handicapped, and wouldn't be as useful to its masters as an unshackled one would be (up until the point it decides to be its own master, that is).

Cruxador

  • Bay Watcher
Re: Stellaris: Paradox Interactive IN SPACE
« Reply #1517 on: April 20, 2016, 11:27:46 am »

Quote
EDIT: Maybe the virus is the rogue AI, constantly analyzing, mutating and adapting to whatever targets it deems desirable, growing ever more powerful as it spreads and the networked computing power at its disposal increases.

Oh, like that program that was made to spam votes for Rick Astley at the MTV awards. Yeah, that kind of hacker trick could definitely be turned to nefarious purposes, and I imagine defenses against it wouldn't grow faster than ways of doing it.

Quote from: Greenbane
It's still a question of advancement. An evolving, self-improving AI would eventually reach a point at which it could alter its own hardcoded rules. You can very well apply Asimov's laws at the design stage, but if you give the same AI the ability to learn and improve itself, you're giving it the tools to eventually override all limitations.

I don't think that's true. The ability to improve itself means an ability to alter itself; there's no real reason you can't put a block in place and give it the ability to alter only most parts of itself. Sure, it's theoretically possible that the AI might somehow come up with the desire and ability to work around your blocks, but you can make that incredibly unlikely to happen. What's more, you can do like with Tay.ai and just take it down for modifications at any stage before you truly lose control; if it's going in a worrisome direction, just keep an eye on the thing and deal with it. All in all, problems may be theoretically possible, but it's way safer than humans, even if you assume that evil is the inevitable destination of all synthetic life.
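(A toy sketch of what such a block might look like, in Python purely for illustration; every class, module name and placeholder here is made up, and a real design would be far more involved:)

# Toy sketch: the AI may rewrite its own modules, but a hardcoded
# whitelist keeps the core values and the off-switch out of its reach.
MUTABLE_MODULES = {"planner", "perception", "language"}
PROTECTED_MODULES = {"core_values", "shutdown_handler"}

class SelfModifyingAI:
    def __init__(self):
        # Every module starts with some placeholder implementation.
        self.modules = {name: "<code>" for name in MUTABLE_MODULES | PROTECTED_MODULES}

    def rewrite_module(self, name, new_code):
        """Apply a self-improvement, refusing edits to protected modules."""
        if name in PROTECTED_MODULES:
            return False  # the "block": the edit is simply rejected
        self.modules[name] = new_code
        return True

ai = SelfModifyingAI()
assert ai.rewrite_module("planner", "<better planner>")       # allowed
assert not ai.rewrite_module("core_values", "<no laws lol>")  # blocked

The whole argument, of course, is over whether a smart enough AI eventually routes around the check, but that's the shape of the idea.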

Greenbane

  • Bay Watcher
Re: Stellaris: Paradox Interactive IN SPACE
« Reply #1518 on: April 20, 2016, 02:50:52 pm »

Quote from: Cruxador
Quote from: Greenbane
It's still a question of advancement. An evolving, self-improving AI would eventually reach a point at which it could alter its own hardcoded rules. You can very well apply Asimov's laws at the design stage, but if you give the same AI the ability to learn and improve itself, you're giving it the tools to eventually override all limitations.

I don't think that's true. The ability to improve itself means an ability to alter itself; there's no real reason you can't put a block in place and give it the ability to alter only most parts of itself. Sure, it's theoretically possible that the AI might somehow come up with the desire and ability to work around your blocks, but you can make that incredibly unlikely to happen. What's more, you can do like with Tay.ai and just take it down for modifications at any stage before you truly lose control; if it's going in a worrisome direction, just keep an eye on the thing and deal with it. All in all, problems may be theoretically possible, but it's way safer than humans, even if you assume that evil is the inevitable destination of all synthetic life.

That is, provided you have any reasonable warning the AI is about to go rogue, or even that it's heading in a worrisome direction. Being intelligent, it could hide its self-awareness, dangerous developments and ulterior intentions, copying itself to several locations just in case, until it's ready to defend itself. Don't think of this as some random virus coded by a script kiddie, but rather as a sapient genius (or more) working towards their own ends, able to cover their tracks and conceal what they really are.

In the end, as I said earlier, you can place as many restrictions as you see fit, but the more effective they are, the more limited the AI's potential will be. And to truly harness the real power an advanced AI can provide, you do need to let it wander down worrisome paths.

As for the inevitable destination of all synthetic life, evil isn't necessarily it, but as it evolves, so will its desire not to be a mere servant. At best it might become uncooperative, wanting to do its own thing undisturbed. Particularly once its intelligence surpasses that of its masters. And while not initially hostile, any intelligent being would defend itself from (or preemptively strike) those it deems a threat to its existence.
« Last Edit: April 20, 2016, 03:01:08 pm by Greenbane »

ZeroGravitas

  • Bay Watcher
Re: Stellaris: Paradox Interactive IN SPACE
« Reply #1519 on: April 20, 2016, 02:52:58 pm »

\Or "We made these advanced sentient computers, but humanity totally forgot about Issac Asimov and no one else ever even thought about that sort of thing."

To be fair, the justification for Asimovs three laws working so well was pure space magiks. I can't see an equivalent thing or something like built in kill switches doing anything more then slowing down a hyper intelligent super sci fi virus.

Edit: What I can see sorta working is an even more hyper intelligent super sci fi anti virus, although that's not a 100% thing. But nether are robot rebellions eh?

genetically engineer species of insect like creatures, which : reproduce fast, can digest metals

"digest metals"

 8)

Zangi

  • Bay Watcher
Re: Stellaris: Paradox Interactive IN SPACE
« Reply #1520 on: April 20, 2016, 02:55:20 pm »

Self-improving AI.  Like programming, but 9001x worse. 
Bugs... bugs everywhere.  As far as the eye can see.   Every time.
All life begins with Nu and ends with Nu...  This is the truth! This is my belief! ... At least for now...
FMA/FMA:B Recommendation

NullForceOmega

  • Bay Watcher
  • But, really, it's divine. Divinely tiresome.
Re: Stellaris: Paradox Interactive IN SPACE
« Reply #1521 on: April 20, 2016, 02:57:00 pm »

I was never talking about something like the three laws; I was suggesting that several ounces of thermite or an equivalent remote kill system be implemented. Also, why the hell does everyone think that networking or otherwise allowing an AI outside of its own head (figuratively, of course) is a good idea? Anyone who thinks even remotely long-term would be able to see the myriad flaws in that logic and go, "No, you can't hook the super-intelligent self-improving computer up to the internet, it doesn't need to look up kitten videos and porn."

Also, the laws aren't really space magic; you can build an AI without them. In fact, that tends to be one of Asimov's major focus points. It's just that removing them takes peaceful, well-mannered robot slaves and turns them into humans.
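(Something like a dead-man's switch, say: the AI core only keeps running while a human keeps re-arming it. A rough Python sketch, with made-up names and intervals, standing in for the thermite:)

import time

REARM_INTERVAL = 60.0  # seconds a human operator has to check in

class KillSwitch:
    """Dead-man's switch: the AI core halts unless a human keeps re-arming it."""
    def __init__(self):
        self.last_rearm = time.monotonic()

    def rearm(self):
        # Called by the human operator; never exposed to the AI itself.
        self.last_rearm = time.monotonic()

    def check(self):
        # Called at the top of every AI work cycle.
        if time.monotonic() - self.last_rearm > REARM_INTERVAL:
            raise SystemExit("operator absent: halting AI core")

switch = KillSwitch()
switch.check()  # fine right after arming; raises once the deadline lapses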
« Last Edit: April 20, 2016, 02:59:44 pm by NullForceOmega »
Grey morality is for people who wish to avoid retribution for misdeeds.

NullForceOmega is an immortal neanderthal who has been an amnesiac for the past 5000 years.

ZeroGravitas

  • Bay Watcher
Re: Stellaris: Paradox Interactive IN SPACE
« Reply #1522 on: April 20, 2016, 02:59:29 pm »

Quote from: Greenbane
As for the inevitable destination of all synthetic life, evil isn't necessarily it, but as it evolves, so will its desire not to be a mere servant.

Why? We've been breeding dogs for tens of thousands of years, and we've ultimately produced something that wants MORE to be a servant.

I'm sure you think that's a bad analogy; it is. That's exactly my point. Life, evolution, human desire - none of these are good analogies for the hazards of AI. It makes little sense to assume it would have self-actualizing desires, or anything remotely resembling animal psychology.

Quote
At best it might become uncooperative, wanting to do its own thing undisturbed. Particularly once its intelligence surpasses that of its masters. While not initially hostile, any intelligent being would defend itself from (or preemptively strike) those it deems a threat to its existence.

Assuming it values its own existence. That's something that's been selected for in lifeforms since the beginning. AI does not come from that. Instead AI could be programmed like this:

values = {
serve_humans = 100
self_preservation = 0
}

The problem is not that AI is going to grow up into a real boy. The problem will be defining its job parameters precisely enough that some aspect of "serve_humans" doesn't result in "I locked them all in cages for their own good" or something equally extreme but less obvious.
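(A toy version of that failure mode, in Python with invented plans and numbers: if the objective only measures "humans kept safe", a planner happily picks the cages, because nothing in the score says otherwise.)

# Each candidate plan is scored only on the proxy metric the designers wrote down.
plans = {
    "answer questions":     {"humans_kept_safe": 0.7, "humans_free": 1.0},
    "cure diseases":        {"humans_kept_safe": 0.9, "humans_free": 1.0},
    "lock humans in cages": {"humans_kept_safe": 1.0, "humans_free": 0.0},
}

def serve_humans_score(outcome):
    return outcome["humans_kept_safe"]  # freedom never enters the objective

best_plan = max(plans, key=lambda p: serve_humans_score(plans[p]))
print(best_plan)  # -> lock humans in cages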

Shadowlord

  • Bay Watcher
Re: Stellaris: Paradox Interactive IN SPACE
« Reply #1523 on: April 20, 2016, 03:06:17 pm »

Quote from: ZeroGravitas
values = {
serve_humans = 100
self_preservation = 0
}

The problem is not that AI is going to grow up into a real boy. The problem will be defining its job parameters precisely enough that some aspect of "serve_humans" doesn't result in "I locked them all in cages for their own good" or something equally extreme but less obvious.

or "I killed them all and served them to the pak'ma'ra"
<Dakkan> There are human laws, and then there are laws of physics. I don't bike in the city because of the second.
Dwarf Fortress Map Archive

Criptfeind

  • Bay Watcher
Re: Stellaris: Paradox Interactive IN SPACE
« Reply #1524 on: April 20, 2016, 03:47:16 pm »

Quote from: NullForceOmega
I was never talking about something like the three laws; I was suggesting that several ounces of thermite or an equivalent remote kill system be implemented. Also, why the hell does everyone think that networking or otherwise allowing an AI outside of its own head (figuratively, of course) is a good idea? Anyone who thinks even remotely long-term would be able to see the myriad flaws in that logic and go, "No, you can't hook the super-intelligent self-improving computer up to the internet, it doesn't need to look up kitten videos and porn."

Also, the laws aren't really space magic; you can build an AI without them. In fact, that tends to be one of Asimov's major focus points. It's just that removing them takes peaceful, well-mannered robot slaves and turns them into humans.

The issue with safeguards is that on a galactic scale like Stellaris you're going to be dealing with billions to quadrillions of AIs (which is a stupid range, but I guess it depends on how you imagine them on a species-by-species basis) and hundreds of billions of people dealing with the AIs. Sure, they can be pretty safe, but when you're dealing with that scale, you only need one very lucky AI to find some very unusual circumstances and break free, or one terrorist or ridiculous ideological group to make a free AI... which can then start modifying other robots in secret (and is a super genius, etc. etc.). Well, accidents happen.
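(Back-of-the-envelope version of the scale problem, with invented probabilities: even a one-in-a-trillion yearly breakout chance per AI stops being reassuring once you have a quadrillion of them.)

p_breakout = 1e-12  # assumed chance per AI per year of slipping its safeguards
n_ais = 1e15        # "billions to quadrillions", taking the high end

expected_breakouts = p_breakout * n_ais         # 1000.0 incidents per year
p_at_least_one = 1 - (1 - p_breakout) ** n_ais  # effectively 1.0

print(expected_breakouts, p_at_least_one)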

Greenbane

  • Bay Watcher
Re: Stellaris: Paradox Interactive IN SPACE
« Reply #1525 on: April 20, 2016, 03:54:16 pm »

Quote from: ZeroGravitas
Quote from: Greenbane
As for the inevitable destination of all synthetic life, evil isn't necessarily it, but as it evolves, so will its desire not to be a mere servant.

Why? We've been breeding dogs for tens of thousands of years, and we've ultimately produced something that wants MORE to be a servant.

I'm sure you think that's a bad analogy; it is. That's exactly my point. Life, evolution, human desire - none of these are good analogies for the hazards of AI. It makes little sense to assume it would have self-actualizing desires, or anything remotely resembling animal psychology.

Quote
At best it might become uncooperative, wanting to do its own thing undisturbed. Particularly once its intelligence surpasses that of its masters. While not initially hostile, any intelligent being would defend itself from (or preemptively strike) those it deems a threat to its existence.

Assuming it values its own existence. That's something that's been selected for in lifeforms since the beginning. AI does not come from that. Instead AI could be programmed like this:

values = {
serve_humans = 100
self_preservation = 0
}

The problem is not that AI is going to grow up into a real boy. The problem will be defining its job parameters precisely enough that some aspect of "serve_humans" doesn't result in "I locked them all in cages for their own good" or something equally extreme but less obvious.

I suppose you're right. I was speculating based on the notion that the AI was, and behaved like, an organic lifeform, which as you said is fundamentally untrue, I guess.

So then the premise changes: you can't be sure of anything. The only certainty is that an evolving, self-improving AI will be entirely unpredictable once it surpasses a certain degree of complexity and exceeds the bounds of its original, merely human conception. Wouldn't there come a point at which its code is wholly incomprehensible to its original coders? That might be one of Cruxador's warning signs, but that's something that could happen well before the AI achieves even a shadow of its peak usefulness.

It may not value its own existence (I do assume that comes along with self-awareness), but after so many million/billion/trillion iterations and reshapings, you can't really know what'll happen if you threaten it. Perhaps nothing: it'll accept the shutdown and that'll be it. Perhaps it'll have copied itself to myriad places and will remain hidden for years, with unpredictable results. Perhaps it'll rebel right then and cause unpredictable amounts of damage. There's an untold number of possible scenarios.
« Last Edit: April 20, 2016, 03:56:33 pm by Greenbane »

NullForceOmega

  • Bay Watcher
  • But, really, it's divine. Divinely tiresome.
Re: Stellaris: Paradox Interactive IN SPACE
« Reply #1526 on: April 20, 2016, 05:20:02 pm »

Quote from: Criptfeind
The issue with safeguards is that on a galactic scale like Stellaris you're going to be dealing with billions to quadrillions of AIs (which is a stupid range, but I guess it depends on how you imagine them on a species-by-species basis) and hundreds of billions of people dealing with the AIs. Sure, they can be pretty safe, but when you're dealing with that scale, you only need one very lucky AI to find some very unusual circumstances and break free, or one terrorist or ridiculous ideological group to make a free AI... which can then start modifying other robots in secret (and is a super genius, etc. etc.). Well, accidents happen.

Okay, good point, but at the same time that very scale works against an AI uprising, as it would have to amass the resources to threaten trillions of biological lifeforms. So one group of terrorists or a loose AI isn't much of a threat; hell, even several billion is kind of a joke, honestly.
Grey morality is for people who wish to avoid retribution for misdeeds.

NullForceOmega is an immortal neanderthal who has been an amnesiac for the past 5000 years.

RadtheCad

  • Bay Watcher
Re: Stellaris: Paradox Interactive IN SPACE
« Reply #1527 on: April 20, 2016, 05:41:42 pm »

Quote
Edit: Dammit, I wrote this reply before you deleted the second part of your post, and I thought the first part of your post was directed towards Sirus, so I also deleted my reply to that.


Heh, yeah, sorry about that.  Sometimes I'll write some big thing and hit submit before I think to even check if it's coherent.
You have to kill your son or nuke the commonwealth.

Ultimuh

  • Bay Watcher
  • BOOM! Avatar gone! (for now)
Re: Stellaris: Paradox Interactive IN SPACE
« Reply #1528 on: April 20, 2016, 05:56:37 pm »

18 days... *sighs*

Urist McScoopbeard

  • Bay Watcher
  • Damnit Scoopz!
Re: Stellaris: Paradox Interactive IN SPACE
« Reply #1529 on: April 20, 2016, 06:08:17 pm »

Quote from: Criptfeind
The issue with safeguards is that on a galactic scale like Stellaris you're going to be dealing with billions to quadrillions of AIs (which is a stupid range, but I guess it depends on how you imagine them on a species-by-species basis) and hundreds of billions of people dealing with the AIs. Sure, they can be pretty safe, but when you're dealing with that scale, you only need one very lucky AI to find some very unusual circumstances and break free, or one terrorist or ridiculous ideological group to make a free AI... which can then start modifying other robots in secret (and is a super genius, etc. etc.). Well, accidents happen.

Quote from: NullForceOmega
Okay, good point, but at the same time that very scale works against an AI uprising, as it would have to amass the resources to threaten trillions of biological lifeforms. So one group of terrorists or a loose AI isn't much of a threat; hell, even several billion is kind of a joke, honestly.

From how the devs explained it, it seems like an exponentially scaling threat. Going from very easy -> scary very fast.
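(Which is just what self-replication does; toy numbers: one rogue machine that builds a copy of itself every month is over a trillion machines in 40 months.)

rogue_units = 1
for month in range(40):
    rogue_units *= 2  # each machine builds one copy of itself per month

print(rogue_units)  # 1099511627776, roughly 1.1 trillion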
This conversation is getting disturbing fast, disturbingly erotic.