Bay 12 Games Forum

Finally... => General Discussion => Topic started by: SirQuiamus on September 23, 2016, 08:04:16 am

Title: AI risk re-re-revisited
Post by: SirQuiamus on September 23, 2016, 08:04:16 am
I'm pretty sure y'all know what AI risk is supposed to be, so I'm not going to waste time on introductions. I was prompted to start yet another thread on this subject because Scott is doing a pretty interesting experiment (http://slatestarcodex.com/2016/09/22/ai-persuasion-experiment/) to figure out how to effectively persuade non-believers of the reality of AI risk, which is to say that the results are going to have obvious relevance to the interests of people on both sides of the debate.

Quote
I’ve been trying to write a persuasive essay about AI risk, but there are already a lot of those out there and I realize I should see if any of them are better before pushing mine. This also ties into a general interest in knowing to what degree persuasive essays really work and whether we can measure that.

So if you have time, I’d appreciate it if you did an experiment. You’ll be asked to read somebody’s essay explaining AI risk and answer some questions about it. Note that some of these essays might be long, but you don’t have to read the whole thing (whether it can hold your attention so that you don’t stop reading is part of what makes a persuasive essay good, so feel free to put it down if you feel like it).

Everyone is welcome to participate in this, especially people who don’t know anything about AI risk and especially especially people who think it’s stupid or don’t care about it.

I want to try doing this two different ways, so:

If your surname starts with A – M, try the first version of the experiment here at https://goo.gl/forms/8quRVmYNmDKAEsvS2

If your surname starts with N – Z, try the second version at https://goo.gl/forms/FznD6Bm51oP7rqB82

Thanks to anyone willing to put in the time.

As someone who (still) thinks that AI risk is complete bunk, I really wanna see where this leads.
Title: Re: AI risk re-re-revisited [Participate in the survey!]
Post by: Criptfeind on September 23, 2016, 09:19:25 am
I got a placebo essay I guess, since it had nothing to do with AI.
Title: Re: AI risk re-re-revisited [Participate in the survey!]
Post by: Flying Dice on September 23, 2016, 10:19:57 am
Holy shit but that is a bad survey. The bias is real.

It opens reasonably, by asking whether we think AI risk is worth studying -- I agree that it is, because it's a potential extinction-level event if an AI with sufficient power acts against our interests. But then the fucker concludes with a bunch of leading questions which all prejudice participants towards viewing AI as a threat.

Not to mention that the essays, even ones unrelated to AI risk, are pretty shit themselves.

Here's one:
Spoiler (click to show/hide)
The author moved the goal-posts on the initial premise from "it is easy to imagine simulating civilizations" to "it is easy to simulate civilizations" and apparently doesn't expect the audience to notice. He's changed the initial assumption from a reasonable and provable one to one which has been designed to be contradictory.

Jesus, I hope none of these people are actually employed in the hard sciences, they've got a weaker grasp on experimental design, objectivity, and logic than I do, and I took a degree in Liberal fucking Arts. Like what the hell, they're not even trying to pretend that they're acting in good faith.
Title: Re: AI risk re-re-revisited [Participate in the survey!]
Post by: Max™ on September 23, 2016, 12:04:55 pm
This is one of those things that I've been aware of for a long time, though it doesn't discuss weird routes like Accelerando: it isn't actively murdering people to build a computronium shell around the skyfire, it's just kinda pushing them aside, professional courtesy if you will.

That world is still pretty damn magical and amazing by modern standards, and definitely preferable to many options... but it isn't my first choice.

It sounds super goddamn sappy, like goddammit I am mad at myself for what I am about to type, but I really hope they teach any human level AI to love.

I'd much rather be the ward of a Mind than a matrioshka brain hungry for resources.
Title: Re: AI risk re-re-revisited [Participate in the survey!]
Post by: Gigaz on September 23, 2016, 12:40:27 pm
I got a part of the Waitbutwhy about superintelligence.

I think the thing that bothers me most about the discussion is how easily people discard the distinction between intelligence and knowledge.
As long as man had intelligence, but hardly any culturally inherited knowledge, he wasn't significantly different from an animal.

All the scenarios where a superintelligent AI kills all humans usually require that it finds/knows a way to do this without significant effort. But how realistic is that? Driving a specific pest to extinction is incredibly difficult for humans and requires a giant effort. Research is definitely slow and tedious even for a superintelligent AI, because AI cannot change the scaling laws of common algorithms. The world is at least partly chaotic, which means that prediction becomes exponentially more difficult with time. There is nothing an AI can do about that.
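
To make the chaos point concrete, here is a toy demo in Python (just the logistic map; the map, starting values, and step count are purely illustrative and have nothing to do with actual brains). Two starting points that differ by one part in a billion disagree completely within a few dozen iterations:

Code: [Select]
# Toy illustration only: the logistic map at r=4 is chaotic.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.400000000, 0.400000001   # nearly identical starting points
for step in range(1, 51):
    a, b = logistic(a), logistic(b)
    if step % 10 == 0:
        print("step %2d: |a - b| = %.3e" % (step, abs(a - b)))
# The gap grows roughly exponentially until it saturates at order 1,
# which is the sense in which long-range prediction becomes hopeless.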
Title: Re: AI risk re-re-revisited [Participate in the survey!]
Post by: SirQuiamus on September 23, 2016, 01:25:18 pm
Holy shit but that is a bad survey. The bias is real.

It opens reasonably, by asking whether we think AI risk is worth studying -- I agree that it is, because it's a potential extinction-level event if an AI with sufficient power acts against our interests. But then the fucker concludes with a bunch of leading questions which all prejudice participants towards viewing AI as a threat.

Not to mention that the essays, even ones unrelated to AI risk, are pretty shit themselves.
Each essay is pretty dreadful in its own special way, but at least the test has the potential to prove it to the authors of said essays---if enough people outside the LessWrong filter bubble take the survey, that is.

Here's one:
Spoiler (click to show/hide)
The author moved the goal-posts on the initial premise from "it is easy to imagine simulating civilizations" to "it is easy to simulate civilizations" and apparently doesn't expect the audience to notice. He's changed the initial assumption from a reasonable and provable one to one which has been designed to be contradictory.
It's a stupid and intellectually dishonest argument, but note that it's formally identical to the original one used by Bostrom, Yudkowsky, and others. You know, the argument that was apparently good enough to convince super-genius Elon Musk of the unreality of our reality.

Jesus, I hope none of these people are actually employed in the hard sciences, they've got a weaker grasp on experimental design, objectivity, and logic than I do, and I took a degree in Liberal fucking Arts. Like what the hell, they're not even trying to pretend that they're acting in good faith.
Nah, they're not scientists in any real sense of the word: Big Yud is an autodidact Wunderkind whereas Scott is a doctor, and Bostrom and his colleagues are just generic hacks who have found a fertile niche in the transhumanist scene. I'm not sure all of them are always acting in bad faith, though: when you spend enough time within an insular subculture, you'll genuinely lose the ability to tell what makes a valid argument in the outside world.

E:
Spoiler: Related (click to show/hide)
Title: Re: AI risk re-re-revisited [Participate in the survey!]
Post by: MetalSlimeHunt on September 23, 2016, 01:31:11 pm
I like Slate Star Codex, and I am a notorious transhumanist. But this? This is dumb. We could and should be doing so many better things like not all dying in the climate crisis and maybe trying to actually get a funded transhuman project off the ground instead of trying to invent AI Jesus to just give it to us. Or in this case, having abstract silly vacuous arguments about what we can do to keep AI Jesus from sending us to AI Hell.

God, I cannot stand singularitarians. The very idea runs afoul of every conception of computer science and biology and psychology.

~If anybody disagrees with this post I will unleash Roko's Basilisk on the thread; please precommit to obeying me as your ruler~
Title: Re: AI risk re-re-revisited [Participate in the survey!]
Post by: SirQuiamus on September 23, 2016, 01:48:13 pm
As for Bostrom being a hack, I'd like you to actually explore the argument that he made in his book (rather than just some essay loosely based on his books) to come to that conclusion. That essay is not particularly representative of Bostrom's ideas.
I swear to the robot-gods that one day I'll read Superintelligence from cover to cover. I've already tried a few times, but it's so riddled with goalpost-shifting shenanigans of the above type that I'm always overcome with RAEG before I get to page 5.
Title: Re: AI risk re-re-revisited [Participate in the survey!]
Post by: Cthulhu on September 23, 2016, 01:55:52 pm
Holy shit but that is a bad survey. The bias is real.

It opens reasonably, by asking whether we think AI risk is worth studying -- I agree that it is, because it's a potential extinction-level event if an AI with sufficient power acts against our interests. But then the fucker concludes with a bunch of leading questions which all prejudice participants towards viewing AI as a threat.

Not to mention that the essays, even ones unrelated to AI risk, are pretty shit themselves.
Each essay is pretty dreadful in its own special way, but at least the test has the potential to prove it to the authors of said essays---if enough people outside the LessWrong filter bubble take the survey, that is.

Here's one:
Spoiler (click to show/hide)
The author moved the goal-posts on the initial premise from "it is easy to imagine simulating civilizations" to "it is easy to simulate civilizations" and apparently doesn't expect the audience to notice. He's changed the initial assumption from a reasonable and provable one to one which has been designed to be contradictory.
It's a stupid and intellectually dishonest argument, but note that it's formally identical to the original one used by Bostrom, Yudkowsky, and others. You know, the argument that was apparently good enough to convince super-genius Elon Musk of the unreality of our reality.

Jesus, I hope none of these people are actually employed in the hard sciences, they've got a weaker grasp on experimental design, objectivity, and logic than I do, and I took a degree in Liberal fucking Arts. Like what the hell, they're not even trying to pretend that they're acting in good faith.
Nah, they're not scientists in any real sense of the word: Big Yud is an autodidact Wunderkind whereas Scott is a doctor, and Bostrom and his colleagues are just generic hacks who have found a fertile niche in the transhumanist scene. I'm not sure all of them are always acting in bad faith, though: when you spend enough time within an insular subculture, you'll genuinely lose the ability to tell what makes a valid argument in the outside world.

E:
Spoiler: Related (click to show/hide)

Good thing Elon Musk isn't the be-all end-all of intellect then, apparently, because looking at that I'm pretty sure it's just a word salad of the thousand-year-old ontological argument.

The only thing I know about the lesswrongosphere is that a bunch of them got convinced they were in robot hell unless they gave all their money to Elon Musk.
Title: Re: AI risk re-re-revisited [Participate in the survey!]
Post by: MetalSlimeHunt on September 23, 2016, 02:02:20 pm
As for the idea of "An AI would be able to predict every action of people blah blah blah". To be honest, I thought this to be true for a while. Putting it in a mathematical context, though, it's fundamentally impossible, assuming that the decision function of the human brain is a chaotic function, i.e. varies wildly for very close inputs (even if they appear close over small enough metrics of time close to 0), topologically mixes (covers every single possibility over a long enough period of time regardless of input) and has dense periodic orbits (for particular inputs the system might be predictable, but not everywhere). This system is, by its construction, impossible to mathematically model for all given inputs. No AI could "simulate" this system.

Note that this does not mean that an AI can never exist as a chaotic system. It just means that the AI cannot approximate any chaotic system sufficiently far in time, including itself.
A fun thing is that in the post-singularity setting Orion's Arm, the inability of AI to do this is considered one of their few absolute limits within the canon, alongside violating c and reversing entropy. It's worth noting that some of the AI in Orion's Arm are at a level where they are literally worshiped as gods even by people who understand what they are, on the basis that they fit the theological conception closer than any other demonstrable being.

This also resulted in some interesting writing such as deductive telepathy (http://www.orionsarm.com/eg-article/479402ad7111c), which human-level intelligences and friendly AI play as a game, the latter trying to determine the inner thoughts of the former without any kind of access to their mind.

The less friendly version of this is baroquification (http://www.orionsarm.com/eg-article/50c8e8cee4215), which is baselines intentionally becoming irrational actors in order to confuse superintelligences.
Title: Re: AI risk re-re-revisited [Participate in the survey!]
Post by: Harry Baldman on September 23, 2016, 02:06:21 pm
The survey is kind of shit, since of course you'll exercise critical thinking (it being a skill you at least pretend to have if you're on the Internet) if told you're participating in a survey where you're supposed to exercise critical thinking about a matter you're told about in advance.

UNLESS IT'S A DIVERSION, in which case good show, only occurred to me just now. Though what it could be testing in that case, and what the other test groups are, will sadly remain an unfortunate mystery.

Good thing Elon Musk isn't the be-all end-all of intellect then, apparently, because looking at that I'm pretty sure it's just a word salad of the thousand-year-old ontological argument.

The only thing I know about the lesswrongosphere is that a bunch of them got convinced they were in robot hell unless they gave all their money to Elon Musk.

Actually, yeah. It does look a lot like the ontological argument. Very easily dismissed as complete nonsense, but it takes a little bit of doing and mental exercise to put into words why it's nonsense.
Title: Re: AI risk re-re-revisited [Participate in the survey!]
Post by: Sergarr on September 23, 2016, 02:15:34 pm
I got a part of the Waitbutwhy about superintelligence.

I think the thing that bothers me most about the discussion is how easily people discard the distinction between intelligence and knowledge.
As long as man had intelligence, but hardly any culturally inherited knowledge, he wasn't significantly different from an animal.

All the scenarios where a superintelligent AI kills all humans usually require that it finds/knows a way to do this without significant effort. But how realistic is that? Driving a specific pest to extinction is incredibly difficult for humans and requires a giant effort. Research is definitely slow and tedious even for a superintelligent AI, because AI cannot change the scaling laws of common algorithms. The world is at least partly chaotic, which means that prediction becomes exponentially more difficult with time. There is nothing an AI can do about that.
Well, yes, this much is evident to anyone who has actually worked with data analysis (i.e. garbage in = garbage out, and no amount of clever algorithms will help produce non-garbage out of garbage), but there are also people - pretty popular people among the super-intelligence community - who say things like this (http://lesswrong.com/lw/qk/that_alien_message/):
Quote
A Bayesian superintelligence, hooked up to a webcam, would invent General Relativity as a hypothesis—perhaps not the dominant hypothesis, compared to Newtonian mechanics, but still a hypothesis under direct consideration—by the time it had seen the third frame of a falling apple.  It might guess it from the first frame, if it saw the statics of a bent blade of grass.
Basically, these people understand "super-intelligence" as "being omniscient", and "omniscient" as "having arbitrarily-powerful reality-warping powers" (https://wiki.lesswrong.com/wiki/Paperclip_maximizer).

And this happens often enough to drown any real arguments in this bullshit. Which is the reason why I don't take them seriously.

EDIT: Took the poll, and well, the essay I got contained this load of dung:
Quote
    One might think that the risk […] arises only if the AI has been given some clearly open-ended final goal, such as to manufacture as many paperclips as possible. It is easy to see how this gives the superintelligent AI an insatiable appetite for matter and energy. […] But suppose that the goal is instead to make at least one million paperclips (meeting suitable design specifications) rather than to make as many as possible.

    One would like to think that an AI with such a goal would build one factory, use it to make a million paperclips, and then halt. Yet this may not be what would happen. Unless the AI’s motivation system is of a special kind, or there are additional elements in its final goal that penalize strategies that have excessively wide-ranging impacts on the world, there is no reason for the AI to cease activity upon achieving its goal. On the contrary: if the AI is a sensible Bayesian agent, it would never assign exactly zero probability to the hypothesis that it has not yet achieved its goal. […] The AI should therefore continue to make paperclips in order to reduce the (perhaps astronomically small) probability that it has somehow still failed to make at least a million of them, all appearances notwithstanding. There is nothing to be lost by continuing paperclip production and there is always at least some microscopic probability increment of achieving its final goal to be gained. Now it might be suggested that the remedy here is obvious. (But how obvious was it before it was pointed out that there was a problem here in need of remedying?)
Their "Bayesian" model of super-intelligence is so smart that it effortlessly takes over the world, yet so stupid that it can't even count. I'm fucking speechless.
Title: Re: AI risk re-re-revisited [Participate in the survey!]
Post by: Max™ on September 23, 2016, 02:50:01 pm
We could and should be doing so many better things like not all dying in the climate crisis and maybe trying to actually get a funded transhuman project off the ground instead of trying to invent AI Jesus to just give it to us. Or in this case, having abstract silly vacuous arguments about what we can do to keep AI Jesus from sending us to AI Hell.
>.> Even the most outlandish model runs don't suggest anything which could possibly involve us "all dying in the climate crisis" being a thing for centuries, man, so where does this come from? Did I miss the part where people start to spontaneously combust if the planet were to become half a Kelvin warmer? How is panic over potential kilodeath-scale outcomes over the next couple hundred years via sea level rise/heat waves/aridification/etc any less silly than concern over potential gigadeath-scale outcomes via unexpected superintelligence excursion a matter of decades from now?

Though, if we start devoting ourselves to nothing but supercomputer construction on every bit of available land, the waste heat dumped into the environment could let you freak out over warming AND strong AI takeoffs!
As for the idea of "An AI would be able to predict every action of people blah blah blah". To be honest, I thought this to be true for a while. Putting it in a mathematical context, though, it's fundamentally impossible, assuming that the decision function of the human brain is a chaotic function, i.e. varies wildly for very close inputs (even if they appear close over small enough metrics of time close to 0), topologically mixes (covers every single possibility over a long enough period of time regardless of input) and has dense periodic orbits (for particular inputs the system might be predictable, but not everywhere). This system is, by its construction, impossible to mathematically model for all given inputs. No AI could "simulate" this system.
I've been told my jerk circuits have a strangely attracting property, but I don't think I've ever been called a fractal before. My self-similarity is pretty low, after all.

More to the point: the axiom of choice exists, you can construct decision functions over sets of choices and criteria, and the physical limits on the complexity of the hardware running this function should put an upper bound on the possible information needed to represent it at any point in time. I do agree that an AI which assumes we've all got butterflies flapping around inside our skulls would be terrible at predicting our behavior, though, so I guess I have to agree with you in case we don't end up with a benevolent strong AI, though it might view too much noise in the system as something to be reduced... heck of a quandary there.
Title: Re: AI risk re-re-revisited [Participate in the survey!]
Post by: MetalSlimeHunt on September 23, 2016, 04:51:40 pm
We could and should be doing so many better things like not all dying in the climate crisis and maybe trying to actually get a funded transhuman project off the ground instead of trying to invent AI Jesus to just give it to us. Or in this case, having abstract silly vacuous arguments about what we can do to keep AI Jesus from sending us to AI Hell.
>.> Even the most outlandish model runs don't suggest anything which could possibly involve us "all dying in the climate crisis" being a thing for centuries, man, so where does this come from? Did I miss the part where people start to spontaneously combust if the planet were to become half a Kelvin warmer? How is panic over potential kilodeath-scale outcomes over the next couple hundred years via sea level rise/heat waves/aridification/etc any less silly than concern over potential gigadeath-scale outcomes via unexpected superintelligence excursion a matter of decades from now?
We've already spoken at length about the dangers of the climate crisis, and how it's here now, not in centuries, in other threads. Though the topic of this thread is something that's actually fake, let's not ruin it by repeating ourselves.
Title: Re: AI risk re-re-revisited [Participate in the survey!]
Post by: Max™ on September 23, 2016, 04:58:11 pm
Yeah, I don't have it in me to go any further than pointing out the silliness of choosing one possible bogeyman over another possible bogeyman anyways.
Title: Re: AI risk re-re-revisited [Participate in the survey!]
Post by: Frumple on September 23, 2016, 05:12:40 pm
All the scenarios where a superintelligent AI kills all humans usually require that it finds/knows a way to do this without significant effort. But how realistic is that? Driving a specific pest to extinction is incredibly difficult for humans and requires a giant effort.
Back a bit, but... the latter is incredibly difficult for humans and requires a giant effort because we still more or less require the same environment to live, and need it to mostly be there when the pest is gone. We could probably wipe out, say, mosquitoes relatively easily at this point, ferex (tailored diseases, genetic muckery, etc.), but we don't because the various knock-on effects (biosphere disruption, potential mutation in diseases or whatev') aren't worth it. Unfortunately, most of the knock-on effects of wiping out humanity are, uh. Pretty positive. Particularly if you can still use our infrastructure and accumulated knowledge without actually needing the fleshsacks walking around crapping on everything >_>
Title: Re: AI risk re-re-revisited [Participate in the survey!]
Post by: IronyOwl on September 23, 2016, 05:53:18 pm
Back to the survey for a moment, can I just bitch about how stupid the questions are? "Once we invent a human-level AI, how likely is it to surpass all humans within a year" is a particularly dumbass way to phrase an escalation question, because you usually don't "invent" something that complicated out of whole cloth; you iterate something you already had until it starts to look kinda like something else. Like, we "invented" computers because the weird doohickies we were building eventually matched some arbitrary criteria, not because we up and made a useful modern computer on a whim.

So when you ask "will AI be better than us a year after we invent it," the impression I get is that you think somebody's literally just going to appear on television yelling GOOD NEWS I INVENTED A SUPERCOMPUTER THAT EVEN NOW GROWS IN STRENGTH, SOON IT WILL BE TOO POWERFUL FOR YOU TO STOP. As opposed to, you know, the far more likely scenario of Google's latest phone sex app getting patched to be better at managing your finances and rapping on command than you are.
Title: Re: AI risk re-re-revisited [Participate in the survey!]
Post by: Max™ on September 23, 2016, 08:50:01 pm
I've never seen human decision making presented as a chaotic process, and I can't even find this being done anywhere; am I using the wrong search terms or something? It's like a perfectly irrational actor being subbed in for what I thought was the usual rational individual/irrational crowd type of modeling assumption.

Could it be possible that the chaotic function model only makes sense with the sort of incomplete information which one of us would possess?

Would that necessarily be the case were the completeness of our models and understanding improved?

Is there any point you can think of where someone or something vastly more intelligent than you might find the chaotic decision function model to be inaccurate?

I assume we have a similar quality of intelligence, but just from trying to reason it out as a starting assumption, couldn't the strange attractors for said assumption look like rational actor behavior anyways?

If so, why is the irrational actor assumption preferable? If not, why does the idea of a rational actor exist?

Also, in my essay there was something about the idea of spending time with a superintelligent spider being unpleasant, stop the spiderbro hate!
Title: Re: AI risk re-re-revisited [Participate in the survey!]
Post by: Max™ on September 23, 2016, 09:22:07 pm
That is weirdly charitable and simultaneously uncharitable, well done!

There is a phase space of actions any given person is likely to take, which is a subset of the actions they could possibly take, which is a subset of actions that might be prohibited due to time or distance or physical limitations but are at least theoretically possible.

It sounds like you're arguing that the strange attractor in this situation would begin tracing out the entire universe, rather than a portion of the likely actions phase space.

It is possible I could get up now, walk outside, stick a beetle in my ear and run into traffic, but that is well outside the phase space of likely actions given the simple assumption that my mental state won't wildly change from one moment to the next.
Title: Re: AI risk re-re-revisited [Participate in the survey!]
Post by: Max™ on September 23, 2016, 09:46:11 pm
Of course. However, stretch that period out to 10 years from now. It's very possible that a series of tragic events could leave you completely mentally broken and that that set of circumstances would enter the phase space.

As I said, the phase space is actually rather small initially, depending on the event in question (sufficiently potent events, i.e. torture, war, etc., could represent the more "extreme" divergences between possible events and push towards what would not normally be considered in the phase space), and an AI could reasonably predict one's decisions over a more immediate time frame based on that immediate event. The decisions made as a result of this event, say, a year in the future, could not be predicted with any such accuracy, because the error in the initial assumptions from reality increases exponentially over a period of time.
Assuming the chaotic decision function is a reasonable model, of course.

Still, though, we're discussing hypothetical minds of arbitrarily greater intelligence here. Assume we could simulate the processes in a human mind well enough that, when run, it believes itself to be a human and is capable of demonstrating human-level intelligence. If we take the leap that it should be possible to produce something which, when run, is capable of demonstrating beyond-human-level intelligence, at what point is it too much of a leap to think it could run a subroutine with a simulation of a mind that, when run, believes itself to be you?

You find yourself being told by what appears to be you, claiming to be speaking from outside a simulation, that YOU are a simulation. How do you respond to this? What could you do to prove to yourself that you are or are not you?

Perhaps the idea that you could be simulated so well that the simulation is actually sitting over there on the other side of this internet connection discussing this with me isn't particularly comforting, but aside from some unknown attribute of "you"ness there is no real reason this scenario couldn't take place, is there?

I can't find a plausible choice for that attribute which I could use to make this distinction between me and sim!me, other than my continuity of awareness suggesting that I am either not the sim, or that it is an extremely in-depth model which either fully iterated my life or recovered my mental state exactly enough to leave me convinced that it was in fact my life.
Title: Re: AI risk re-re-revisited [Participate in the survey!]
Post by: Baffler on September 23, 2016, 10:01:22 pm
In other words, why is it implausible that an AI could run a sufficiently advanced simulation that the simulation thinks that it is the person?

Because it wouldn't know how. This should give you an idea of the problems we're currently banging our collective heads against. (http://blogs.sciencemag.org/pipeline/archives/2015/10/30/simulating-the-brain-sure-thing) Direct to the source. (https://mathbabe.org/2015/10/20/guest-post-dirty-rant-about-the-human-brain-project/) It won't know anything we can't tell it, and the idea that it could somehow divine the answers to these problems out of sheer processing power brings us into omniscient AI Jesus territory.
Title: Re: AI risk re-re-revisited [Participate in the survey!]
Post by: Max™ on September 23, 2016, 11:00:45 pm
In other words, why is it implausible that an AI could run a sufficiently advanced simulation that the simulation thinks that it is the person?

Because it wouldn't know how. This should give you an idea of the problems we're currently banging our collective heads against. (http://blogs.sciencemag.org/pipeline/archives/2015/10/30/simulating-the-brain-sure-thing) Direct to the source. (https://mathbabe.org/2015/10/20/guest-post-dirty-rant-about-the-human-brain-project/) It won't know anything we can't tell it, and the idea that it could somehow divine the answers to these problems out of sheer processing power brings us into omniscient AI Jesus territory.
We do have weird progress towards this end goal though: http://browser.openworm.org/#nav=4.04,-0.03,2.8

In other words, why is it implausible that an AI could run a sufficiently advanced simulation that the simulation thinks that it is the person?

Assuming the chaotic model here, it's really rather quite simple.

The AI would have to simulate the entire universe in exact detail. Sure, you might argue that that could be possible with sufficiently potent software design and hardware architecture.

However, such an AI would necessarily need to simulate itself.

It would need to simulate itself simulating itself.

And so on.

As to why it would need to simulate the entire universe, any chaotic model requires an exact simulation to get exact results; the system is deterministic. However, any error increases exponentially over time, so the simulation must be exact or else risk serious errors coming up. No one thing can be neglected from the simulation, due to the nature of mathematical chaos.
Ah, you're looking at a different problem; intelligence is messy anyways. The important thing is that there is no simple way for me to prove to you that I am sitting next to real!Ispil and we're watching the outputs on the machine running sim!Ispil, i.e. the you I am speaking with on this forum.

Similarly, there are numerous things you can do which suggest to me that the hypothesis that you are just a chatroutine is falsified, but I can't disprove that you actually think you exist without getting into solipsistic nonsense.

Now, taking the assumption that you have internally consistent mental states, and that you observe yourself to be embedded within a universe, what are the minimum requirements necessary to achieve that?

You can't go out and touch a star, so we only need to make them behave plausibly if observed with certain equipment. You can't actually directly interact with anything more than a few feet away, so we need to apply a certain level of detail within that volume; thankfully we can fudge most of it because you lack microscale senses. We need to account for sound and light reflection, which is a bit more complex, but far from impossible; smell and taste could be tricky, but they are usually running at a sub-aware level, so we only need to call those up when prompted. Naturally the framework for your meatpuppet needs to send back certain data to emulate biomechanical feedback, but that isn't too onerous, and thankfully you are very unlikely to start trying to dig around inside your own chest cavity to see what is happening... though we should probably put in place some sort of placeholder we can drop the relevant models on, just in case.

We could probably use backdrops and scenery from live footage to add another layer of verisimilitude, but most of the extra processing power would go towards making sure the (probably claustrophobic-sounding) box bounded by your limbs at full extension behaves as you expect it should. The actual self!sim itself will still be eating up a decent chunk of resources as it trundles around, but we can make use of things like a limited attention span and fatigue to trim a good amount of the overhead down outside of extended bouts of deep existential pondering.

Now, I'm not saying you should open your abdominal cavity and see if there are any graphical errors as chunks of it are rendered, but can you think of a way to prove you aren't in a glass case of emotionsimulation?

It doesn't need to be exact and complete to produce something which would think it was you or I. Yes, after initializing it there would be divergences as the decision factors for both take them down different routes through their respective phase spaces...

...but hey, just in case you were comfortable with the idea of sim!you existing in some hypothetical, don't forget that it would probably be more productive if the likely region of your decision phase space were mapped out intensively, so the question would then become: how many iterations of sim!you does it take to map out the most likely responses for real!you to any given stimuli?

I'm not saying that I would run endless sims of you and then shut them down after selecting the most useful data from the runs, it sounds horrific to me to do that to someone with a similar level of intelligence and attachment to their own existence, but I'm not a godlike AI without a reason to be attached to the particular mental state of specific individuals, am I?

And yes, this is all assuming that the chaotic decision function is a reasonable model. If it isn't, then either the decision model is stochastic, or both deterministic and polynomial-scaling (or less) in perturbation scaling for any error in the input.

In other words, humans are either chaotic, random, or predictable over the entirety of the phase space (in this case, the Oxford comma is in use; the "entirety of the phase space" only applies to predictable). There of course exists plenty of given inputs for particular decision functions with particular priors that particular decisions are predictable; those are the periodic orbits of the decision function.
You omit that it could be chaotic and deterministic with perfect initial information. Figuring out what will happen when you start a chaotic system is totally possible if you know how you started it.

These seem like another way of describing the different subsets of possible actions: random actions covering the broadest region if given enough time to evolve, chaotic actions having a likely portion of the phase space, and deterministic actions providing anchor points--we know you and I will go eat, drink, breathe, sleep, and so forth, though we can choose to activate or delay these processes--which staple parts of the other two sets together. There are no random actions which will result in someone living and breathing without any environmental protection in the upper atmosphere of Jupiter tomorrow, and there are no chaotic trajectories that wind up with you avoiding eating and avoiding death in the near future.

I may not know the initial conditions well enough to make these predictions about a chaotic decision function, you may not, but can you confidently state that it is impossible to know them?
Title: Re: AI risk re-re-revisited [Participate in the survey!]
Post by: Max™ on September 23, 2016, 11:13:29 pm
I left it for you to edit again if you wanted, but you didn't. Like I said, given perfect information about the initial state you can predict what a chaotic system will do. Which goes back to the question of: if that can be known, how does it prevent a simulation of someone from behaving just like they would? The whole universe-simulation argument has some merit, but seems excessive if the goal is just producing something which believes it is a given individual, as we ourselves lack the vast majority of the information which would make said universe sim necessary.
Title: Re: AI risk re-re-revisited [Participate in the survey!]
Post by: Max™ on September 23, 2016, 11:51:28 pm
I am mad at you for making me delve into various searches trying to find something to support a chaotic decision-making theory of mind, though I know you couldn't predict I would wind up encountering all the pseudospiritualistic bullshit ("with like, whoa, things are like, butterflies and like, shit happens man, so like, it's cool") that I just went through.

I still think you're overestimating the requirements for doing something like fooling a human-level intelligence into accepting and responding to the environment in a realistic fashion; chaos theory or not, it is totally plausible for something with more information than you or I to produce a simulation with a fine enough grain that we can't distinguish it from reality.

Arguing that you need a perfect universe sim to produce a reasonably accurate human sim is going towards the implication that human behavior is damn near random, otherwise stuff like the specific motions of a particle here or there wouldn't matter.

The universe only observes itself with limited instruments, stuff like us, and we are really fucking shitty instruments for doing this.

Making a universe sim that could fool an arbitrarily powerful mind is where I totally agree with you about it being impossible, but we've gotten pretty good at convincing your mind that it is in fact doing something somewhere completely different from where you know yourself to be, like the edge of a building which isn't even trying to look realistic. (https://youtu.be/pVdZh03ju6U?t=190)

I love that guy, btw, "I am so glad you landed that... I had my eyes closed."
Title: Re: AI risk re-re-revisited [Participate in the survey!]
Post by: Folly on September 24, 2016, 12:17:10 am
I'm optimistic about cyborg technology outpacing AI.
By the time that AI starts evolving independent of human intervention, we should all have computer chips throughout our brains allowing us to match the AI's in thinking speed, and mega-man fists that can shoot lasers at any bots that try to attack us.
Title: Re: AI risk re-re-revisited [Participate in the survey!]
Post by: MetalSlimeHunt on September 24, 2016, 04:57:12 am
I'm optimistic about cyborg technology outpacing AI.
By the time that AI starts evolving independent of human intervention, we should all have computer chips throughout our brains allowing us to match the AI's in thinking speed, and mega-man fists that can shoot lasers at any bots that try to attack us.
Ah-ha, but you see, you would be Ozymandias: Which is easier, for your computer chips to do their programmed tasks or to brainwash you into wanting to nuke the world, thus curing cancer! Oh, you poor deluded innocent, thank god we have enlightened folk like me to rationalize your utility functions in these matters. /s
Title: Re: AI risk re-re-revisited [Participate in the survey!]
Post by: Sergarr on September 24, 2016, 06:19:34 am
I wonder if it can be argued that if a system is Turing complete, then it exhibits chaos.
There's the halting problem (https://en.wikipedia.org/wiki/Halting_problem), which, while not technically chaos, means that, in general, the only way to certainly predict the execution length of a program is to run it. Since a program's output can be made a function of its execution length, it means that, in general, the only way to certainly predict the output of a program is to run that exact program.
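
The standard toy version of why a general halting predictor can't exist, sketched in Python (the halts() oracle is the purely hypothetical part):

Code: [Select]
def halts(program, program_input):
    """Hypothetical perfect halting predictor; the point is that it can't exist."""
    raise NotImplementedError

def trouble(program):
    # Do the opposite of whatever the oracle predicts about running
    # 'program' on its own source.
    if halts(program, program):
        while True:       # oracle said "halts", so loop forever
            pass
    return "halted"       # oracle said "loops forever", so halt immediately

# Feeding trouble to itself makes the oracle wrong either way:
# trouble(trouble) halts if and only if halts(trouble, trouble) says it doesn't.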
Title: Re: AI risk re-re-revisited [Participate in the survey!]
Post by: Frumple on September 24, 2016, 07:19:11 am
I'm optimistic about cyborg technology outpacing AI.
By the time that AI starts evolving independent of human intervention, we should all have computer chips throughout our brains allowing us to match the AI's in thinking speed, and mega-man fists that can shoot lasers at any bots that try to attack us.
The trick is, just like our current functionally!cyborg technology, the chips probably won't be in our brain. We do have the occasional bit of internal or grafted cybertech at the moment, but it looks a lot like most of our development there is going to be like it currently is -- via external peripherals. It's a lot safer, probably a fair bit more efficient, and certainly currently a hell of a lot easier to just... make stuff that interfaces with the wetware via the wetware instead of implanted hardware. Smartphones, glasses, guns... bluetooth, developing AR software, etc., etc., etc. Conceptually we could probably wire some of those directly to our brain, even at the moment (if with likely fairly shoddy results -- results, but not particularly decent ones), but... why, when you can get the same effect laying it on the palm of your hand or building it into your eyewear?

I mean. Other than the awesome factor and maybe the glowing laser eyes and whatnot. I'm sure that's reason enough to many but it probably won't be for the folks funding development for quite a long while :V
Title: Re: AI risk re-re-revisited [Participate in the survey!]
Post by: Radio Controlled on September 24, 2016, 06:09:57 pm
Quote
Their "Bayesian" model of super-intelligence is so smart that it effortlessly takes over the world, yet so stupid that it can't even count. I'm fucking speechless.

I thought the idea there was a sort of solipsism-for-computers: the AI can't be 100% certain it has made sufficient paperclips yet, so it'll keep working to diminish the chance. After all, it might have a camera feed to count the number of paperclips rolling off the factory floor, but who's to say the video feed isn't a recording/simulation made by those dastardly humans! As part of a test to check the AI's behaviour perhaps, or because they thought a reverse matrix would be hilarious. Or maybe a small software glitch made the computer miscount by 1, so better make more paperclips just to be a little extra sure.
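
In toy numbers (entirely made up, just to show the shape of the quoted argument): as long as the agent's utility only counts paperclips, it keeps any nonzero doubt about the count, and it treats continuing as free, then making one more always wins.

Code: [Select]
# Made-up numbers, purely to show the shape of the argument.
p_count_wrong = 1e-12     # probability its sensors/memory misled it (assumption)
value_of_goal = 1.0       # utility of really having >= 1,000,000 paperclips
cost_of_one_more = 0.0    # the quoted argument assumes continuing costs nothing

expected_gain = p_count_wrong * value_of_goal - cost_of_one_more
print(expected_gain > 0)  # True for every p_count_wrong > 0
# The proposed fixes amount to making cost_of_one_more nonzero, e.g. by
# penalizing wide-ranging side effects.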
Title: Re: AI risk re-re-revisited [Participate in the survey!]
Post by: Dozebôm Lolumzalìs on September 25, 2016, 04:13:26 pm
Is that sarcasm? Cars and cranes are made by humans, and are both metallic. That doesn't make every manmade object metallic.

Unless every T-complete system can be described as a sum of multiples of R110 and GoL. But they aren't vectors, so I find that unlikely.
Title: Re: AI risk re-re-revisited [Participate in the survey!]
Post by: Antioch on September 25, 2016, 05:31:37 pm
I thought: what would be the best way for an AI to kill all humans? And the answer was nuke the shit out of everything.

Which made me pretty relieved because we can do that ourselves just fine.
Title: Re: AI risk re-re-revisited [Participate in the survey!]
Post by: Frumple on September 25, 2016, 06:02:16 pm
Probably not, really. I'd wager a tailored disease of some sort would be its best resource/results outcome. Maybe some targeted bunker buster type stuff for folks that notice and manage to do something about it.
Title: Re: AI risk re-re-revisited [Participate in the survey!]
Post by: Max™ on September 25, 2016, 07:30:47 pm
As an aside, I found two papers; one demonstrates that gliders in Rule 110 exhibit topological mixing, and the other demonstrates that gliders in Conway's Game of Life exhibit topological mixing. This means that both of those exhibit chaos, which means that all Turing-complete systems exhibit chaos.


So my initial premise, that the human decision function is chaotic, is correct by virtue of it being Turing-complete.
I think you need a more explicit proof that human decision making is actually Turing-complete. I see this stated but have yet to actually find proof of it; I figured for sure that Wolfram would have been responsible for this, given his combination of brilliance and fetish for showing everything is computers on turtles on turtles made of computers all the way down, but I don't think he has gone that far yet.

It may very well be the case, but I don't know that it actually is just yet.
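
For reference, the automaton those papers are about is tiny. A bare-bones Python version (wrapped edges, single live cell; this proves nothing about brains, it just lets you watch the gliders yourself):

Code: [Select]
RULE = 110
# New cell value for each 3-cell neighbourhood, read as a 3-bit number.
TABLE = [(RULE >> i) & 1 for i in range(8)]

def step(cells):
    n = len(cells)
    return [TABLE[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
            for i in range(n)]

cells = [0] * 63 + [1]    # single live cell at the right edge
for _ in range(30):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)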
Title: Re: AI risk re-re-revisited [Participate in the survey!]
Post by: Max™ on September 25, 2016, 11:37:35 pm
Yeah, Turing-equivalent, Turing-complete, hyperturing, oracles: these are all things which need to be factored in.

There could also just be quirks, like a theoretical consciousness-complete description being Turing-complete or hyperturing while in practice we fall short of that. I can't iterate an arbitrarily long sequence of instructions in any real sense, but theoretically I might be able to do this.
Title: Re: AI risk re-re-revisited [Participate in the survey!]
Post by: Max™ on September 26, 2016, 12:48:42 am
Hmmm, still seems like a problem in that, if we are Turing-complete mentally, we should be deterministic, or our output should be computable by a deterministic Turing machine, though I think that is well out in PSPACE land.

I had something I was going to say but while trying to figure out more about why the assertion bothered me I ended up on a wikiwalk all the way over to the game of life after a divergence through complexity and the background of Turing himself.

It's definitely an interesting question, and a proof either way would be important.
Title: Re: AI risk re-re-revisited [Participate in the survey!]
Post by: Max™ on September 26, 2016, 01:00:41 am
Oh yeah! Kinda poops on that whole "free will" thing doesn't it?
Title: Re: AI risk re-re-revisited [Participate in the survey!]
Post by: Max™ on September 26, 2016, 01:12:50 am
I don't think for a second that the deep mind programs which saw dogs everywhere were anything but a clear sign of Warp influence, and the programmers are obviously Heretics.
Title: Re: AI risk re-re-revisited [Participate in the survey!]
Post by: Reelya on September 26, 2016, 02:18:48 am
Quote
Their "Bayesian" model of super-intelligence is so smart that it effortlessly takes over the world, yet so stupid that it can't even count. I'm fucking speechless.

I thought the idea there was a sort of solipsism-for-computers: the AI can't be 100% certain it has made sufficient paperclips yet, so it'll keep working to diminish the chance. After all, it might have a camera feed to count the number of paperclips rolling off the factory floor, but who's to say the video feed isn't a recording/simulation made by those dastardly humans! As part of a test to check the AI's behaviour perhaps, or because they thought a reverse matrix would be hilarious. Or maybe a small software glitch made the computer miscount by 1, so better make more paperclips just to be a little extra sure.

I haven't read through the thread, but reaching any sort of conclusion from their logic seems to be a case of concocting some unlikely system then showing the flaws with that - straw man argument. Who can be 100% sure of anything? Maybe we're in The Matrix, maybe our own senses are being screwed with. An AI doesn't necessarily have to follow a Robbie-the-Robot "does not compute" script: it can map observation to its own actions, and optimize for that. After all, if all we/it knows is our own data stream, then the AI would see that it's pointless to optimize for something outside that. If it keeps making paper clips literally forever because it can't be sure on a philosophical level that the paper clips it can see really exist, then this shows a lack of self-awareness that contradicts the basis of the argument.

The flaw in the rationale behind this sort of argument seems to be taking the metaphor of the "bit" which can only be 0 or 1, with literally no possibilities in the middle, then assuming that it keeps that property when scaling that up to a system with billions of bits. It's equivalent to arguing that because a neuron works a specific way, we can describe humans as acting like a "big neuron".

The problem is that the binary 0/1 "0% or 100%" logic falls down as soon as you move from a 1-bit system to a 2-bit system. It also relies on false dichotomies. The "opposite" of "white things" isn't "black things", it's "everything that is not white". Even with 1 bit, if 1 is interpreted to mean "certainly true", the opposite of that is not "certainly false", it's "uncertain". So the problem with making assumptions that a 1-bit thing will resolve to "definitely true" or "definitely false" is that we're making huge assumptions about how things should be interpreted, that aren't necessarily part of the nature of that thing. And as soon as you add that second bit, the entire nature of the data changes completely. Just 2 bits of data can encode "certainly true", "likely", "unlikely" and "certainly false", which is not just a binary system scaled up; it allows a whole different form of reasoning.
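
A toy illustration of the graded-belief point in Python (the likelihood numbers are made up): a belief updated on finite evidence moves around between 0 and 1, but never snaps to exactly "certainly true" or "certainly false" unless it started there.

Code: [Select]
# Made-up numbers: belief that "the count is correct", updated on repeated
# noisy confirmations via Bayes' rule. It approaches 1 but never reaches it.
belief = 0.5
p_obs_if_true = 0.99    # sensor says "looks right" when it is right (assumption)
p_obs_if_false = 0.05   # sensor says "looks right" when it is wrong (assumption)

for i in range(10):
    num = p_obs_if_true * belief
    belief = num / (num + p_obs_if_false * (1.0 - belief))
    print("after %2d confirmations: %.12f" % (i + 1, belief))
# 1.0 - belief shrinks geometrically but stays positive: the graded analogue
# of "never assign exactly zero probability".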
Title: Re: AI risk re-re-revisited
Post by: SirQuiamus on September 26, 2016, 09:14:25 am
Spoiler: lel (click to show/hide)
Title: Re: AI risk re-re-revisited
Post by: Max™ on September 26, 2016, 03:46:59 pm
Hmmm, this seems to be asking a similar question by discussing properties of quantum chaotic turing machines: http://www.mrmains.com/bio/academic_files/pmath370/project.pdf
Title: Re: AI risk re-re-revisited
Post by: SirQuiamus on October 31, 2016, 06:32:29 am
I was offline for a few days and missed it, but Scott has posted the results (http://slatestarcodex.com/2016/10/24/ai-persuasion-experiment-results/), such as they are.

Spoiler (click to show/hide)
Title: Re: AI risk re-re-revisited
Post by: martinuzz on November 02, 2016, 03:08:37 pm
The Google Brain project reports that two of its AIs have developed an encrypted language to communicate with each other, with a third AI unable to break their code.

The three AIs, 'Alice', 'Bob', and 'Eve', were provided with clear tasks: Alice was tasked with sending a secret message to Bob, Bob was tasked with decoding it, and Eve was tasked with snooping in on the conversation.

None of the AI were given any information on how to encode messages or which techniques to use.
Alice and Bob were given a code key, which Eve had no access to.

At first, Alice and Bob were pretty bad at encrypting and decoding messages, and Eve was able to snoop in on the badly encrypted messages.
However, after 15,000 generations of evolving code, Alice developed an encryption strategy which Bob was quickly able to decode, while Eve was no longer able to crack the encryption.

While examining the encrypted messages, researchers noted that Alice used unknown and unexpected methods of encryption, very different from techniques used by human encryption software.

The researchers say that the results seem to indicate that in the future, AIs will be able to communicate with each other in ways that we, or other AIs, cannot decrypt.
However, AI still has a long way to go. Even though the techniques used were surprising and innovative, Alice's encryption method was still rather simple compared to present day human encryption systems.

As it stands, they describe their research as an interesting exercise, but nothing more. They don't foresee any practical applications, as they do not know exactly what kind of encryption technique Alice used.

https://arxiv.org/pdf/1610.06918v1.pdf
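
For the curious, the training setup the paper describes boils down to an adversarial game between three networks. A very rough sketch in Python/PyTorch (heavily simplified: tiny fully-connected nets instead of the paper's convolutional stack, and an approximated version of its loss term), just to show the shape of it:

Code: [Select]
# Illustration only; not the paper's exact architecture or loss.
import torch
import torch.nn as nn

N = 16  # plaintext and key length in bits, encoded as -1/+1

def net(in_dim):
    return nn.Sequential(nn.Linear(in_dim, 4 * N), nn.ReLU(),
                         nn.Linear(4 * N, N), nn.Tanh())

alice, bob, eve = net(2 * N), net(2 * N), net(N)
opt_ab = torch.optim.Adam(list(alice.parameters()) + list(bob.parameters()), lr=1e-3)
opt_e = torch.optim.Adam(eve.parameters(), lr=1e-3)

def bits(rows):
    return torch.randint(0, 2, (rows, N)).float() * 2 - 1

for step in range(15000):
    # Eve trains to reconstruct the plaintext from the ciphertext alone.
    p, k = bits(256), bits(256)
    c = alice(torch.cat([p, k], 1)).detach()
    eve_err = (p - eve(c)).abs().mean()
    opt_e.zero_grad(); eve_err.backward(); opt_e.step()

    # Alice and Bob train so Bob reconstructs the plaintext while Eve is
    # pushed back toward chance-level error (~1.0 per bit in this encoding).
    p, k = bits(256), bits(256)
    c = alice(torch.cat([p, k], 1))
    bob_err = (p - bob(torch.cat([c, k], 1))).abs().mean()
    eve_err = (p - eve(c)).abs().mean()
    loss = bob_err + (1.0 - eve_err) ** 2
    opt_ab.zero_grad(); loss.backward(); opt_ab.step()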
Title: Re: AI risk re-re-revisited
Post by: TheBiggerFish on November 03, 2016, 03:34:24 pm
Cool.
Title: Re: AI risk re-re-revisited
Post by: PanH on November 04, 2016, 04:21:22 am
Pretty cool indeed. Not sure if it has been mentioned but there's something similar with an AI translator (by Google too). I'll try to find a link later.