Bay 12 Games Forum


Author Topic: AI risk re-re-revisited  (Read 6791 times)

Harry Baldman

  • Bay Watcher
  • What do I care for your suffering?
Re: AI risk re-re-revisited [Participate in the survey!]
« Reply #15 on: September 23, 2016, 02:06:21 pm »

The survey is kind of shit, since of course you'll exercise critical thinking (it being a skill you at least pretend to have if you're on the Internet) if you're told you're participating in a survey where you're supposed to exercise critical thinking about a matter you're told about in advance.

UNLESS IT'S A DIVERSION, in which case good show - it only occurred to me just now. Though what it could actually be testing in that case, and what the other test groups are, will sadly remain a mystery.

Good thing Elon Musk isn't the be-all end-all of intellect then, apparently, because looking at that I'm pretty sure it's just a word salad of the thousand-year-old ontological argument.

The only thing I know about the lesswrongosphere is that a bunch of them got convinced they were in robot hell unless they gave all their money to Elon Musk.

Actually, yeah. It does look a lot like the ontological argument. Very easily dismissed as complete nonsense, but it takes a little bit of doing and mental exercise to put into words why it's nonsense.
« Last Edit: September 23, 2016, 02:44:00 pm by Harry Baldman »

Ispil

  • Bay Watcher
Re: AI risk re-re-revisited [Participate in the survey!]
« Reply #16 on: September 23, 2016, 02:12:44 pm »

I'm actually starting to really like my assumption that more powerful decision functions exhibit greater and greater chaos relative to the intelligence of the system.

Or, perhaps, that only a chaotic decision system exhibits "intelligence" at anything beyond our current threshold, which would explain why we have not passed it.
"Those who can make you believe absurdities can make you commit atrocities." - Voltaire

"I have never made but one prayer to God, a very short one: 'O Lord, make my enemies ridiculous.' And God granted it."- Voltaire

I transcribe things, too.

Sergarr

  • Bay Watcher
  • (9) airheaded baka (9)
Re: AI risk re-re-revisited [Participate in the survey!]
« Reply #17 on: September 23, 2016, 02:15:34 pm »

Quote
I got a part of the Waitbutwhy essay about superintelligence.

I think the thing that bothers me most about the discussion is how easily people discard the distinction between intelligence and knowledge. As long as man had intelligence but hardly any culturally inherited knowledge, he wasn't significantly different from an animal.

All the scenarios where a superintelligent AI kills all humans usually require that it finds/knows a way to do this without significant effort. But how realistic is that? Driving a specific pest extinct is incredibly difficult for humans and requires a giant effort. Research is definitely slow and tedious even for a superintelligent AI, because an AI cannot change the scaling laws of common algorithms. The world is at least partly chaotic, which means that prediction becomes exponentially more difficult with time. There is nothing an AI can do about that.
Well, yes, this much is evident to anyone who has actually worked with data analysis (i.e. garbage in = garbage out, and no amount of clever algorithms will help produce non-garbage out of garbage), but there are also people - pretty popular people among the super-intelligence community - who say things like this:
Quote
A Bayesian superintelligence, hooked up to a webcam, would invent General Relativity as a hypothesis—perhaps not the dominant hypothesis, compared to Newtonian mechanics, but still a hypothesis under direct consideration—by the time it had seen the third frame of a falling apple.  It might guess it from the first frame, if it saw the statics of a bent blade of grass.
Basically, these people understand "super-intelligence" as "being omniscient", and "omniscient" as "having arbitrarily powerful reality-warping powers".

And this happens often enough to drown out any real arguments in this bullshit, which is why I don't take them seriously.
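(Since the garbage-in-garbage-out point deserves numbers: here's a minimal sketch - stdlib Python, all rates invented - of why no amount of computation recovers the truth from corrupted measurements unless you know the corruption process exactly.)

Code: (Python)
import random

# Garbage in = garbage out: if 30% of labels were flipped at recording
# time, extra samples and extra cleverness don't recover the true rate.
random.seed(0)
true_rate = 0.9   # the quantity we'd like to estimate
flip_rate = 0.3   # fraction of measurements corrupted (unknown to the analyst)

n = 100_000
observed = sum(
    (random.random() < true_rate) ^ (random.random() < flip_rate)
    for _ in range(n)
) / n
print(f"observed rate: {observed:.3f}")  # ~0.66, not 0.9
# 0.9*0.7 + 0.1*0.3 = 0.66; the bias survives infinite data. You could
# de-bias only if you knew flip_rate exactly -- which is more garbage in.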

EDIT: Took the poll, and, well, the essay I got contained this load of dung:
Quote
    One might think that the risk [..] arises only if the AI has been given some clearly open-ended final goal, such as to manufacture as many paperclips as possible. It is easy to see how this gives the superintelligent AI an insatiable appetite for matter and energy. […] But suppose that the goal is instead to make at least one million paperclips (meeting suitable design specifications) rather than to make as many as possible.

    One would like to think that an AI with such a goal would build one factory, use it to make a million paperclips, and then halt. Yet this may not be what would happen. Unless the AI's motivation system is of a special kind, or there are additional elements in its final goal that penalize strategies that have excessively wide-ranging impacts on the world, there is no reason for the AI to cease activity upon achieving its goal. On the contrary: if the AI is a sensible Bayesian agent, it would never assign exactly zero probability to the hypothesis that it has not yet achieved its goal. […] The AI should therefore continue to make paperclips in order to reduce the (perhaps astronomically small) probability that it has somehow still failed to make at least a million of them, all appearances notwithstanding. There is nothing to be lost by continuing paperclip production and there is always at least some microscopic probability increment of achieving its final goal to be gained. Now it might be suggested that the remedy here is obvious. (But how obvious was it before it was pointed out that there was a problem here in need of remedying?)
Their "Bayesian" model of super-intelligence is so smart that it effortlessly takes over the world, yet so stupid that it can't even count. I'm fucking speechless.
._.

Ispil

  • Bay Watcher
Re: AI risk re-re-revisited [Participate in the survey!]
« Reply #18 on: September 23, 2016, 02:18:52 pm »

I believe, actually, that that's Bostrom's argument right from Superintelligence. As in, quoted word-for-word.

Maybe I should've read that book twice rather than just once. Seems I really didn't pay attention to the arguments it presented.
"Those who can make you believe absurdities can make you commit atrocities." - Voltaire

"I have never made but one prayer to God, a very short one: 'O Lord, make my enemies ridiculous.' And God granted it."- Voltaire

I transcribe things, too.

Ispil

  • Bay Watcher
Re: AI risk re-re-revisited [Participate in the survey!]
« Reply #19 on: September 23, 2016, 02:25:08 pm »

I have to apologize for the various earlier arguments where I bought into this crap wholesale without actually considering the arguments at hand.

EDIT: I mean, hell, a few hours ago I was looking at Gigaz's post and was thinking of all the ways to refute it. My next post in this thread was me agreeing wholesale with his premise without even realizing it, and I was completely blind to this.

Turns out you can seriously not notice how deep into this shit you went until you get out.
« Last Edit: September 23, 2016, 02:30:49 pm by Ispil »
"Those who can make you believe absurdities can make you commit atrocities." - Voltaire

"I have never made but one prayer to God, a very short one: 'O Lord, make my enemies ridiculous.' And God granted it."- Voltaire

I transcribe things, too.

Max™

  • Bay Watcher
  • [CULL:SQUARE]
Re: AI risk re-re-revisited [Participate in the survey!]
« Reply #20 on: September 23, 2016, 02:50:01 pm »

Quote
We could and should be doing so many better things like not all dying in the climate crisis and maybe trying to actually get a funded transhuman project off the ground instead of trying to invent AI Jesus to just give it to us. Or in this case, having abstract silly vacuous arguments about what we can do to keep AI Jesus from sending us to AI Hell.
>.> Even the most outlandish model runs don't suggest anything which could possibly involve us "all dying in the climate crisis" being a thing for centuries, man. Where does this come from? Did I miss the part where people start to spontaneously combust if the planet were to become half a kelvin warmer? How is panic over potential kilodeath-scale outcomes over the next couple hundred years via sea level rise/heat waves/aridification/etc. any less silly than concern over potential gigadeath-scale outcomes via unexpected superintelligence excursion a matter of decades from now?

Though, if we start devoting ourselves to nothing but supercomputer construction on every bit of available land, the waste heat dumped into the environment could let you freak out over warming AND strong AI takeoffs!
Quote
As for the idea that "an AI would be able to predict every action of people blah blah blah": to be honest, I thought this was true for a while. Putting it in a mathematical context, though, it's fundamentally impossible, assuming that the decision function of the human brain is a chaotic function, i.e. it varies wildly for very close inputs (even ones that appear close over metrics of time arbitrarily near 0), it is topologically mixing (over a long enough period of time it covers every single possibility, regardless of input), and it has dense periodic orbits (for particular inputs the system might be predictable, but not everywhere). This system is, by its construction, impossible to mathematically model for all given inputs. No AI could "simulate" this system.
I've been told my jerk circuits have a strangely attracting property, but I don't think I've ever been called a fractal before. My self-similarity is pretty low, after all.

More to the point: the axiom of choice exists, you can construct decision functions over sets of choices and criteria, and the physical limits on the complexity of the hardware running this function should put an upper bound on the information needed to represent it at any point in time. I do agree that an AI which assumes we've all got butterflies flapping around inside our skulls would be terrible at predicting our behavior, though, so I guess I have to agree with you in case we don't end up with a benevolent strong AI - though it might view too much noise in the system as something to be reduced... heck of a quandary there.
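(For concreteness, a toy sketch of "decision functions over sets of choices and criteria" - the choices, criteria, and weights are all invented. The whole function is pinned down by a handful of numbers, which is the finite-information point.)

Code: (Python)
# Toy decision function over a set of choices and weighted criteria.
# Everything here is invented purely for illustration.

choices = {
    "make_tea":   {"effort": 0.2, "payoff": 0.5, "risk": 0.0},
    "post_reply": {"effort": 0.4, "payoff": 0.6, "risk": 0.1},
    "go_outside": {"effort": 0.9, "payoff": 0.8, "risk": 0.2},
}

# The entire "mind" here is three weights -- a finite amount of
# information, per the hardware-bound argument above.
weights = {"effort": -1.0, "payoff": 2.0, "risk": -3.0}

def decide(choices, weights):
    def score(criteria):
        return sum(weights[k] * v for k, v in criteria.items())
    return max(choices, key=lambda name: score(choices[name]))

print(decide(choices, weights))  # make_tea (score 0.8 beats 0.5 and 0.1)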
Engraved here is a rendition of an image of the Dwarf Fortress learning curve. All craftsdwarfship is of the highest quality. It depicts an obsidian overhang which menaces with spikes of obsidian and tears. Carved on the overhang is an image of Toady One and the players. The players are curled up in a fetal position. Toady One is laughing. The players are burning.
The VectorCurses+1 tileset strikes the square set and the severed part sails off in an arc!

MetalSlimeHunt

  • Bay Watcher
  • Gerrymander Commander
Re: AI risk re-re-revisited [Participate in the survey!]
« Reply #21 on: September 23, 2016, 04:51:40 pm »

Quote
We could and should be doing so many better things like not all dying in the climate crisis and maybe trying to actually get a funded transhuman project off the ground instead of trying to invent AI Jesus to just give it to us. Or in this case, having abstract silly vacuous arguments about what we can do to keep AI Jesus from sending us to AI Hell.
Quote
>.> Even the most outlandish model runs don't suggest anything which could possibly involve us "all dying in the climate crisis" being a thing for centuries, man. Where does this come from? Did I miss the part where people start to spontaneously combust if the planet were to become half a kelvin warmer? How is panic over potential kilodeath-scale outcomes over the next couple hundred years via sea level rise/heat waves/aridification/etc. any less silly than concern over potential gigadeath-scale outcomes via unexpected superintelligence excursion a matter of decades from now?
We've already spoken at length in other threads about the dangers of the climate crisis, and how it's here now, not centuries away. Though the topic of this thread is an actually fake thing, let's not ruin it by repeating ourselves.
Quote from: Thomas Paine
To argue with a man who has renounced the use and authority of reason, and whose philosophy consists in holding humanity in contempt, is like administering medicine to the dead, or endeavoring to convert an atheist by scripture.
Quote
No Gods, No Masters.

Max™

  • Bay Watcher
  • [CULL:SQUARE]
Re: AI risk re-re-revisited [Participate in the survey!]
« Reply #22 on: September 23, 2016, 04:58:11 pm »

Yeah, I don't have it in me to go any further than pointing out the silliness of choosing one possible bogeyman over another possible bogeyman anyways.
Engraved here is a rendition of an image of the Dwarf Fortress learning curve. All craftsdwarfship is of the highest quality. It depicts an obsidian overhang which menaces with spikes of obsidian and tears. Carved on the overhang is an image of Toady One and the players. The players are curled up in a fetal position. Toady One is laughing. The players are burning.
The VectorCurses+1 tileset strikes the square set and the severed part sails off in an arc!

Frumple

  • Bay Watcher
  • The Prettiest Kyuuki
Re: AI risk re-re-revisited [Participate in the survey!]
« Reply #23 on: September 23, 2016, 05:12:40 pm »

Quote
All the scenarios where a superintelligent AI kills all humans usually require that it finds/knows a way to do this without significant effort. But how realistic is that? Driving a specific pest extinct is incredibly difficult for humans and requires a giant effort.
Back a bit, but... the latter is incredibly difficult for humans and requires a giant effort because we still more or less require the same environment to live in, and we need it to mostly still be there when the pest is gone. We could probably wipe out, say, mosquitoes relatively easily at this point, ferex (tailored diseases, genetic muckery, etc.), but we don't, because the various knock-on effects (biosphere disruption, potential mutation in diseases, or whatever) aren't worth it. Unfortunately, most of the knock-on effects of wiping out humanity are, uh... pretty positive. Particularly if you can still use our infrastructure and accumulated knowledge without actually needing the fleshsacks walking around crapping on everything >_>
Ask not!
What your country can hump for you.
Ask!
What you can hump for your country.

IronyOwl

  • Bay Watcher
  • Nope~
Re: AI risk re-re-revisited [Participate in the survey!]
« Reply #24 on: September 23, 2016, 05:53:18 pm »

Back to the survey for a moment: can I take a moment to bitch about how stupid the questions are? "Once we invent a human-level AI, how likely is it to surpass all humans within a year" is a particularly dumbass way to phrase an escalation question, because you usually don't "invent" something that complicated out of whole cloth; you iterate on something you already had until it starts to look kinda like something else. Like, we "invented" computers because the weird doohickeys we were building eventually matched some arbitrary criteria, not because we up and made a useful modern computer on a whim.

So when you ask "will AI be better than us a year after we invent it," the impression I get is that you think somebody's literally just going to appear on television yelling GOOD NEWS I INVENTED A SUPERCOMPUTER THAT EVEN NOW GROWS IN STRENGTH, SOON IT WILL BE TOO POWERFUL FOR YOU TO STOP. As opposed to, you know, the far more likely scenario of Google's latest phone sex app getting patched to be better at managing your finances and rapping on command than you are.
"Next, you're gonna wanna fuse the soul of a god with the devourer of time and space and cook it into a nice curry, which you'll find the instructions to in our previous video."
2,500+ chapters about slapping faces

Ispil

  • Bay Watcher
Re: AI risk re-re-revisited [Participate in the survey!]
« Reply #25 on: September 23, 2016, 07:25:54 pm »

Quote
More to the point: the axiom of choice exists, you can construct decision functions over sets of choices and criteria, and the physical limits on the complexity of the hardware running this function should put an upper bound on the information needed to represent it at any point in time. I do agree that an AI which assumes we've all got butterflies flapping around inside our skulls would be terrible at predicting our behavior, though, so I guess I have to agree with you in case we don't end up with a benevolent strong AI - though it might view too much noise in the system as something to be reduced... heck of a quandary there.

You can construct decision functions, sure, but you have no idea to what degree they approximate the particular person in question. You also don't know to what accuracy you managed to replicate the input.

The axiom of choice doesn't matter here; it just means you can pick one function out of the infinite span of possible decision functions, which isn't the point. The point is that they're approximations. The way a dynamical system that exhibits chaos works is that two inputs, when particularly close, diverge exponentially over the span of several iterations. This applies to any two sets of inputs that are not equal.

You can reasonably predict humans in the short term. You cannot reasonably predict them in the long term. Assuming that the decision function in question takes into account very intricate details of a person's personality and such, we can say that such a decision function is unique to each person, though it exhibits similarities to other decision functions. Regardless, only short-term approximations can be created; this is fundamentally how a chaotic system works. Even trying to recreate the system in question through approximations doubles down on the expected error, and there's no way to know for certain whether such a model is correct or incorrect other than to apply identical inputs to both and see where you end up.
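(The divergence claim is easy to demonstrate with the usual textbook stand-in - the logistic map, which is emphatically not a brain, but shows the mechanism: two inputs differing by one part in a billion decorrelate completely within a few dozen iterations.)

Code: (Python)
# Sensitive dependence on initial conditions, logistic-map edition.
# Not a model of a person; just the standard chaotic toy system.

def logistic(x, r=4.0):   # r = 4.0 is the fully chaotic regime
    return r * x * (1.0 - x)

a, b = 0.400000000, 0.400000001   # two inputs, particularly close
for step in range(1, 61):
    a, b = logistic(a), logistic(b)
    if step % 10 == 0:
        print(f"step {step:2d}: |a - b| = {abs(a - b):.3e}")

# The gap roughly doubles each step (Lyapunov exponent ln 2 for r = 4)
# until it saturates at the size of the attractor itself.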
"Those who can make you believe absurdities can make you commit atrocities." - Voltaire

"I have never made but one prayer to God, a very short one: 'O Lord, make my enemies ridiculous.' And God granted it."- Voltaire

I transcribe things, too.

Max™

  • Bay Watcher
  • [CULL:SQUARE]
Re: AI risk re-re-revisited [Participate in the survey!]
« Reply #26 on: September 23, 2016, 08:50:01 pm »

I've never seen human decision making presented as a chaotic process, and I can't even find it being done anywhere - am I using the wrong search terms or something? It's like a perfectly irrational actor being subbed in for what I thought was the usual rational-individual/irrational-crowd type of modeling assumption.

Could it be possible that the chaotic function model only makes sense with the sort of incomplete information which one of us would possess?

Would that necessarily be the case were the completeness of our models and understanding improved?

Is there any point you can think of where someone or something vastly more intelligent than you might find the chaotic decision function model to be inaccurate?

I assume we have a similar quality of intelligence, but just from trying to reason it out as a starting assumption, couldn't the strange attractors for said assumption look like rational actor behavior anyways?

If so, why is the irrational actor assumption preferable? If not, why does the idea of a rational actor exist?

Also, in my essay there was something about the idea of spending time with a superintelligent spider being unpleasant, stop the spiderbro hate!
Engraved here is a rendition of an image of the Dwarf Fortress learning curve. All craftsdwarfship is of the highest quality. It depicts an obsidian overhang which menaces with spikes of obsidian and tears. Carved on the overhang is an image of Toady One and the players. The players are curled up in a fetal position. Toady One is laughing. The players are burning.
The VectorCurses+1 tileset strikes the square set and the severed part sails off in an arc!

Ispil

  • Bay Watcher
Re: AI risk re-re-revisited [Participate in the survey!]
« Reply #27 on: September 23, 2016, 09:07:03 pm »

I will admit, this is very much a construction made on the fly, without regard to grounding it in any prior works of similar nature. The idea is simple: construct the span of universes that contain a person at a particular point in time, where each universe has some ever-so-slight change in some random quantity. Assume there's no random action acting within human consciousness (that is, human thought is deterministic; if it isn't, this whole exercise is meaningless, because it implies people are inherently unpredictable due to random action upon the decision model). Over a sufficiently long period of time, the decisions made by that person will diverge exponentially across those universes, though not necessarily immediately. This doesn't imply that the person is an irrational actor; rather, it implies that the slightest perturbation of circumstance will cause exponentially increasing errors in any approximation over a long enough period of time. In the end, those universes will cover every single possibility of decisions that can be made by that person.

This, of course, applies to an identical person reacting to slightly different situations, with the starting point the same for each test. Human decision making is chaotic in a second sense as well: the experiences and knowledge feeding into the decision function are themselves chaotic. That is, any slight perturbation of experience or knowledge, when applied to the person's decision function, will cause exponentially greater distance between future decisions despite identical situations.

If I have a person A, and at time t0 have them experience either event a or event b, then at time t the distance between the decisions made following event a and those made following event b is exponentially related to the distance between events a and b themselves. Note that the ordering of the initial events matters, and can further change the distance between two future states.

In this sense, a sufficiently intelligent AI could predict, given an event, how a person would make decisions further down the road as a consequence of that event. However, if the simulation relies on an approximation of the experience and knowledge feeding into the human decision function, of the exact details of the event, or of both, then the simulation will exponentially diverge from reality.
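(A toy version of the event-a-versus-event-b setup, with the same logistic-map caveat and invented numbers: the "event" is a nudge of size |a-b| to the state at t0, and shrinking the nudge a thousand-fold buys only about ten more steps of agreement.)

Code: (Python)
# Person A experiences event a or event b at t0, modeled (very loosely)
# as two perturbations of the same state; we track how long the two
# resulting decision trajectories stay close.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

def trajectory(x0, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1]))
    return xs

state_at_t0 = 0.3
for event_gap in (1e-12, 1e-9, 1e-6):   # |a - b|
    xa = trajectory(state_at_t0, 60)
    xb = trajectory(state_at_t0 + event_gap, 60)
    t_split = next(t for t in range(61) if abs(xa[t] - xb[t]) > 0.1)
    print(f"|a-b| = {event_gap:.0e}: trajectories split by step {t_split}")

# Divergence time scales with log(1/|a - b|): better knowledge of the
# event buys predictability only logarithmically.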
« Last Edit: September 23, 2016, 09:11:04 pm by Ispil »
"Those who can make you believe absurdities can make you commit atrocities." - Voltaire

"I have never made but one prayer to God, a very short one: 'O Lord, make my enemies ridiculous.' And God granted it."- Voltaire

I transcribe things, too.

Max™

  • Bay Watcher
  • [CULL:SQUARE]
Re: AI risk re-re-revisited [Participate in the survey!]
« Reply #28 on: September 23, 2016, 09:22:07 pm »

That is weirdly charitable and simultaneously uncharitable, well done!

There is a phase space of actions any given person is likely to take, which is a subset of the actions they could possibly take, which is in turn a subset of the actions that are at least theoretically possible even if time, distance, or physical limitations rule them out in practice.

It sounds like you're arguing that the strange attractor in this situation would begin tracing out the entire universe, rather than a portion of the likely actions phase space.

It is possible I could get up now, walk outside, stick a beetle in my ear and run into traffic, but that is well outside the phase space of likely actions given the simple assumption that my mental state won't wildly change from one moment to the next.
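(The toy map actually backs this up - same caveat that it's a stand-in, not a brain: in a chaotic-but-not-maximal regime, the strange attractor occupies only a sub-band of the state space, so chaotic dynamics can still stay inside a "likely actions" region.)

Code: (Python)
# A strange attractor need not fill the whole phase space. At r = 3.6
# the logistic map is chaotic, yet its long-run states stay inside a
# strict sub-band of [0, 1]; states outside it simply never occur.

def logistic(x, r):
    return r * x * (1.0 - x)

x, r = 0.5, 3.6
for _ in range(1_000):       # discard the transient
    x = logistic(x, r)

lo = hi = x
for _ in range(100_000):     # sample the attractor
    x = logistic(x, r)
    lo, hi = min(lo, x), max(hi, x)

print(f"attractor range: [{lo:.3f}, {hi:.3f}]")   # roughly [0.324, 0.900]
# All of [0, 1] is reachable in principle, but the dynamics confine the
# system to a band -- beetle-in-ear states lie outside the attractor.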
Engraved here is a rendition of an image of the Dwarf Fortress learning curve. All craftsdwarfship is of the highest quality. It depicts an obsidian overhang which menaces with spikes of obsidian and tears. Carved on the overhang is an image of Toady One and the players. The players are curled up in a fetal position. Toady One is laughing. The players are burning.
The VectorCurses+1 tileset strikes the square set and the severed part sails off in an arc!

Ispil

  • Bay Watcher
Re: AI risk re-re-revisited [Participate in the survey!]
« Reply #29 on: September 23, 2016, 09:28:36 pm »

Of course. However, stretch that period out to ten years from now. It's very possible that a series of tragic events could leave you completely mentally broken, and that set of circumstances would then enter the phase space.

As I said, the phase space is actually rather small initially, depending on the event in question (sufficiently potent events - torture, war, and so on - could represent the more "extreme" divergences between possible outcomes and push toward what would not normally be considered part of the phase space), and an AI could reasonably predict one's decisions over the more immediate time frame following that event. The decisions made as a result of the event a year in the future, though, could not be predicted with any such accuracy, because the error in the initial assumptions grows exponentially over time.
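(The short-term-fine, long-term-hopeless asymmetry has a standard back-of-envelope form: if model error grows like d(t) = d0 * e^(lam*t), the useful horizon grows only with the logarithm of the initial precision. Rate and units below are invented.)

Code: (Python)
import math

# Predictability horizon: error d(t) = d0 * exp(lam * t) reaches the
# tolerance `tol` at t* = ln(tol / d0) / lam. Rate and units invented.

lam = 0.1   # assumed divergence rate, per day
tol = 1.0   # error level at which the prediction becomes useless

for d0 in (1e-3, 1e-6, 1e-9):
    horizon = math.log(tol / d0) / lam
    print(f"initial model error {d0:.0e} -> useful for ~{horizon:.0f} days")

# 1e-3 -> ~69 days, 1e-6 -> ~138, 1e-9 -> ~207: a million-fold better
# model of the person only doubles the horizon, which is why immediate
# predictions can work while year-out ones can't.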
"Those who can make you believe absurdities can make you commit atrocities." - Voltaire

"I have never made but one prayer to God, a very short one: 'O Lord, make my enemies ridiculous.' And God granted it."- Voltaire

I transcribe things, too.