Bay 12 Games Forum


Author Topic: AI risk re-re-revisited  (Read 16643 times)

Antioch

  • Bay Watcher
Re: AI risk re-re-revisited [Participate in the survey!]
« Reply #30 on: September 25, 2016, 05:31:37 pm »

I thought: what would be the best way for an AI to kill all humans? And the answer was nuke the shit out of everything.

Which made me pretty relieved, because we can do that ourselves just fine.
You finish ripping the human corpse of Sigmund into pieces.
This raw flesh tastes delicious!

Frumple

  • Bay Watcher
  • The Prettiest Kyuuki
Re: AI risk re-re-revisited [Participate in the survey!]
« Reply #31 on: September 25, 2016, 06:02:16 pm »

Probably not, really. I'd wager a tailored disease of some sort would be its best resources-to-results outcome. Maybe some targeted bunker-buster type stuff for the folks who notice and manage to do something about it.
Ask not!
What your country can hump for you.
Ask!
What you can hump for your country.

Max™

  • Bay Watcher
  • [CULL:SQUARE]
Re: AI risk re-re-revisited [Participate in the survey!]
« Reply #32 on: September 25, 2016, 07:30:47 pm »

Quote
As an aside, I found two papers: one demonstrates that gliders in Rule 110 exhibit topological mixing, and the other demonstrates the same for gliders in Conway's Game of Life. This means that both of those systems exhibit chaos, which means that all Turing-complete systems exhibit chaos.

So my initial premise, that the human decision function is chaotic, is correct by virtue of it being Turing-complete.
I think you need a more explicit proof that human decision-making is actually Turing-complete. I see this stated but have yet to actually find a proof of it. I figured for sure that Wolfram would have been responsible for one, given his combination of brilliance and his fetish for showing that everything is computers on turtles on turtles made of computers all the way down, but I don't think he has gone that far yet.

It may very well be the case, but I don't know that it actually is just yet.
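For anyone who wants to poke at this stuff directly, here's a minimal Python sketch of Rule 110 (my own toy code, not from either paper; the grid size and step count are arbitrary). Run it from a single live cell and you can watch the kind of glider-like structures the mixing results are about:

Code: [Select]
# Rule 110 elementary cellular automaton, with periodic boundaries.
# The rule number's bits encode the next state for each of the eight
# possible 3-cell neighborhoods (111 -> 0, 110 -> 1, ..., 000 -> 0).
RULE = 110

def step(cells):
    n = len(cells)
    nxt = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        pattern = (left << 2) | (center << 1) | right  # value 0..7
        nxt.append((RULE >> pattern) & 1)
    return nxt

cells = [0] * 79 + [1]  # one live cell on the right edge
for _ in range(40):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)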

Max™

  • Bay Watcher
  • [CULL:SQUARE]
Re: AI risk re-re-revisited [Participate in the survey!]
« Reply #33 on: September 25, 2016, 11:37:35 pm »

Yeah, Turing-equivalent, Turing-complete, hyperturing, oracles: these are all things which need to be factored in.

There could also just be quirks: a theoretical "consciousness-complete" description might be Turing-complete or hyperturing, while in practice we fall short of that. I can't iterate an arbitrarily long sequence of instructions in any real sense, even if in theory I might be able to.

Max™

  • Bay Watcher
  • [CULL:SQUARE]
Re: AI risk re-re-revisited [Participate in the survey!]
« Reply #34 on: September 26, 2016, 12:48:42 am »

Hmmm, there still seems to be a problem: if we are Turing-complete mentally, we should be deterministic, or at least our output should be computable by a deterministic Turing machine, though I think that computation is well out in PSPACE land.

I had something I was going to say, but while trying to figure out why the assertion bothered me I ended up on a wikiwalk all the way over to the Game of Life, after a detour through complexity theory and the background of Turing himself.

It's definitely an interesting question, and a proof either way would be important.

Max™

  • Bay Watcher
  • [CULL:SQUARE]
Re: AI risk re-re-revisited [Participate in the survey!]
« Reply #35 on: September 26, 2016, 01:00:41 am »

Oh yeah! Kinda poops on that whole "free will" thing, doesn't it?

Max™

  • Bay Watcher
  • [CULL:SQUARE]
Re: AI risk re-re-revisited [Participate in the survey!]
« Reply #36 on: September 26, 2016, 01:12:50 am »

I don't think for a second that the DeepDream programs which saw dogs everywhere were anything but a clear sign of Warp influence, and the programmers are obviously Heretics.

Reelya

  • Bay Watcher
Re: AI risk re-re-revisited [Participate in the survey!]
« Reply #37 on: September 26, 2016, 02:18:48 am »

Quote
Their "Bayesian" model of super-intelligence is so smart that it effortlessly takes over the world, yet so stupid that it can't even count. I'm fucking speechless.

I thought the idea there was a sort of solipsism-for-computers: the AI can't be 100% certain it has made sufficient paperclips yet, so it'll keep working to shrink that residual doubt. After all, it might have a camera feed to count the number of paperclips rolling off the factory floor, but who's to say the video feed isn't a recording/simulation made by those dastardly humans! As part of a test to check the AI's behaviour, perhaps, or because they thought a reverse Matrix would be hilarious. Or maybe a small software glitch made the computer miscount by 1, so better make more paperclips just to be a little extra sure.
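To make that reading concrete, here's a toy Python sketch (every number and name in it is invented for illustration): an agent whose sensor model never yields certainty, told to act until it is certain, simply never stops:

Code: [Select]
# Toy "never 100% sure" paperclipper: the sensor is right with
# probability 0.999, so the belief that the target has been reached
# can never hit 1.0, and a naive "act until certain" policy runs
# until an external cap stops it.
SENSOR_RELIABILITY = 0.999

def p_done(observed_count, target):
    # Belief that the real count matches the target, from one noisy reading.
    return SENSOR_RELIABILITY if observed_count >= target else 0.0

belief, clips, target = 0.0, 0, 1000
while belief < 1.0 and clips < 10_000:
    clips += 1
    belief = p_done(clips, target)
print(clips, belief)  # -> 10000 0.999: it stopped only because we capped it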

I haven't read through the thread, but reaching any sort of conclusion from their logic seems to be a case of concocting some unlikely system and then showing the flaws in it: a straw-man argument. Who can be 100% sure of anything? Maybe we're in The Matrix, maybe our own senses are being screwed with. An AI doesn't necessarily have to follow a Robby-the-Robot "does not compute" script: it can map observations to its own actions and optimize for that. After all, if all we (or it) know is our own data stream, then the AI would see that it's pointless to optimize for something outside that stream. If it keeps making paperclips literally forever because it can't be sure on a philosophical level that the paperclips it can see really exist, then it shows a lack of self-awareness that contradicts the basis of the argument.

The flaw in the rationale behind this sort of argument seems to be taking the metaphor of the "bit", which can only be 0 or 1 with literally no possibilities in the middle, and then assuming that it keeps that property when scaled up to a system with billions of bits. It's equivalent to arguing that because a neuron works a specific way, we can describe humans as acting like one "big neuron".

The problem is that the binary 0/1, "0% or 100%" logic falls down as soon as you move from a 1-bit system to a 2-bit system. It also relies on false dichotomies. The "opposite" of "white things" isn't "black things", it's "everything that is not white". Even with 1 bit, if 1 is interpreted to mean "certainly true", the opposite of that is not "certainly false", it's "uncertain". So the problem with assuming that a 1-bit thing will resolve to "definitely true" or "definitely false" is that we're making huge assumptions about how it should be interpreted that aren't necessarily part of the nature of the thing itself. And as soon as you add a second bit, the entire nature of the data changes completely. Just 2 bits of data can encode "certainly true", "likely", "unlikely", and "certainly false", which is not just a binary system scaled up: it allows a whole different form of reasoning.
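A trivial Python illustration of that last point (the four labels are just one possible interpretation of the bit patterns):

Code: [Select]
# With 1 bit you get a true/false switch; with 2 bits you can already
# encode graded belief. n bits give you 2**n distinct confidence levels.
CONFIDENCE = {
    0b00: "certainly false",
    0b01: "unlikely",
    0b10: "likely",
    0b11: "certainly true",
}
for bits, label in CONFIDENCE.items():
    print(f"{bits:02b} -> {label}")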
« Last Edit: September 26, 2016, 02:28:43 am by Reelya »

SirQuiamus

  • Bay Watcher
  • Keine Experimente!
Re: AI risk re-re-revisited
« Reply #38 on: September 26, 2016, 09:14:25 am »

Spoiler: lel

Max™

  • Bay Watcher
  • [CULL:SQUARE]
Re: AI risk re-re-revisited
« Reply #39 on: September 26, 2016, 03:46:59 pm »

Hmmm, this seems to be asking a similar question by discussing properties of quantum chaotic Turing machines: http://www.mrmains.com/bio/academic_files/pmath370/project.pdf

SirQuiamus

  • Bay Watcher
  • Keine Experimente!
Re: AI risk re-re-revisited
« Reply #40 on: October 31, 2016, 06:32:29 am »

I was offline for a few days and missed it, but Scott has posted the results, such as they are.

Spoiler

martinuzz

  • Bay Watcher
  • High dwarf
Re: AI risk re-re-revisited
« Reply #41 on: November 02, 2016, 03:08:37 pm »

The Google Brain project reports that two of its AIs have developed an encrypted language to communicate with each other, with a third AI unable to break their code.

The three AIs, 'Alice', 'Bob', and 'Eve', were given a clear task: Alice was to send a secret message to Bob, Bob was to decode it, and Eve was to snoop on the conversation.

None of the AIs were given any information on how to encode messages or which techniques to use.
Alice and Bob were given a shared key, which Eve had no access to.

At first, Alice and Bob were pretty bad at encrypting and decoding messages, and Eve was able to snoop on the badly encrypted traffic.
However, after some 15,000 rounds of training, Alice developed an encryption strategy which Bob was quickly able to decode, while Eve was no longer able to crack the encryption.

While examining the encrypted messages, the researchers noted that Alice used unknown and unexpected methods of encryption, very different from the techniques used by human-designed encryption software.

The researchers say the results seem to indicate that in the future, AIs will be able to communicate with each other in ways that we, or other AIs, cannot decrypt.
However, AI still has a long way to go: even though the techniques used were surprising and innovative, Alice's encryption method was still rather simple compared to present-day human encryption systems.

As it stands, they describe their research as an interesting exercise, but nothing more. They don't foresee any practical applications, as they do not know exactly what kind of encryption technique Alice used.

https://arxiv.org/pdf/1610.06918v1.pdf
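For the curious, here's a heavily simplified Python/PyTorch sketch of the paper's adversarial setup. It uses tiny fully connected nets instead of the paper's mix-and-transform architecture, and all sizes and hyperparameters are made up, so treat it as a cartoon of the idea rather than a reproduction:

Code: [Select]
import torch
import torch.nn as nn

N = 16  # bits per plaintext/key/ciphertext, represented as -1/+1 values

def net(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                         nn.Linear(64, out_dim), nn.Tanh())

alice = net(2 * N, N)  # (plaintext, key) -> ciphertext
bob   = net(2 * N, N)  # (ciphertext, key) -> plaintext guess
eve   = net(N, N)      # ciphertext only  -> plaintext guess

opt_ab = torch.optim.Adam(list(alice.parameters()) + list(bob.parameters()), lr=1e-3)
opt_e  = torch.optim.Adam(eve.parameters(), lr=1e-3)
l1 = nn.L1Loss()

for _ in range(5000):
    p = torch.randint(0, 2, (256, N)).float() * 2 - 1  # random plaintexts
    k = torch.randint(0, 2, (256, N)).float() * 2 - 1  # shared random keys

    # Alice/Bob turn: Bob should reconstruct p, while Eve's error is pushed
    # toward 1.0, which is chance level for -1/+1 bits (mirroring the paper's
    # idea of driving Eve to 50% accuracy rather than total failure).
    c = alice(torch.cat([p, k], dim=1))
    bob_err = l1(bob(torch.cat([c, k], dim=1)), p)
    eve_err = l1(eve(c), p)
    loss_ab = bob_err + (1.0 - eve_err) ** 2
    opt_ab.zero_grad(); loss_ab.backward(); opt_ab.step()

    # Eve's turn: train her alone on fresh ciphertexts (detached so only
    # Eve's weights update).
    c = alice(torch.cat([p, k], dim=1)).detach()
    loss_e = l1(eve(c), p)
    opt_e.zero_grad(); loss_e.backward(); opt_e.step()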
Friendly and polite reminder for optimists: Hope is a finite resource

We can ­disagree and still love each other, ­unless your disagreement is rooted in my oppression and denial of my humanity and right to exist - James Baldwin

http://www.bay12forums.com/smf/index.php?topic=73719.msg1830479#msg1830479

TheBiggerFish

  • Bay Watcher
  • Somewhere around here.
Re: AI risk re-re-revisited
« Reply #42 on: November 03, 2016, 03:34:24 pm »

Cool.
Sigtext

It has been determined that Trump is an average unladen swallow travelling northbound at his maximum sustainable speed of -3 Obama-cubits per second in the middle of a class 3 hurricane.

PanH

  • Bay Watcher
Re: AI risk re-re-revisited
« Reply #43 on: November 04, 2016, 04:21:22 am »

Pretty cool indeed. Not sure if it has been mentioned, but there's something similar with an AI translator (by Google too). I'll try to find a link later.
« Last Edit: November 04, 2016, 04:23:45 am by PanH »