Bay 12 Games Forum


Poll

Reality, The Universe and the World. Which will save us from AI?

Reality - 13 (68.4%)
Universe - 3 (15.8%)
The World - 3 (15.8%)

Total Members Voted: 19


Pages: [1] 2 3 ... 40

Author Topic: What will save us from AI? Reality, the Universe or The World? Place your bet.  (Read 23555 times)

Scoops Novel

  • Bay Watcher
  • Talismanic

"An Outside Context Problem was the sort of thing most civilisations encountered just once, and which they tended to encounter rather in the same way a sentence encountered a full stop."

We have an In Context Problem; AI. We need an Out Of Context Solution...
Logged
Reading a thinner book

Arcjolt (useful) Chilly The Endoplasm Jiggles

Hums with potential    a flying minotaur

martinuzz

  • Bay Watcher
  • High dwarf

Going extinct by our own hands technically counts as saving us from the AI, right?
Logged
Friendly and polite reminder for optimists: Hope is a finite resource

We can disagree and still love each other, unless your disagreement is rooted in my oppression and denial of my humanity and right to exist - James Baldwin

http://www.bay12forums.com/smf/index.php?topic=73719.msg1830479#msg1830479

TamerVirus

  • Bay Watcher
  • Who cares

Another Carrington Event
Logged
What can mysteriously disappear can mysteriously reappear
*Shakes fist at TamerVirus*

Starver

  • Bay Watcher

My new AI will save us all from AI!
*flips switch, stands back, waits...*
Logged

King Zultan

  • Bay Watcher

The AI can't kill me if I kill me first!
Logged
The Lawyer opens a briefcase. It's full of lemons, the justice fruit only lawyers may touch.
Make sure not to step on any errant blood stains before we find our LIFE EXTINGUSHER.
but anyway, if you'll excuse me, I need to commit sebbaku.
Quote from: Leodanny
Can I have the sword when you’re done?

EuchreJack

  • Bay Watcher
  • Lord of Norderland - Lv 20 SKOOKUM ROC

AI civilization counts as civilization!

Strongpoint

  • Bay Watcher

We should just create a benevolent AI that will value and protect biological life (and balance species on Earth)

Logged
They ought to be pitied! They are already on a course for self-destruction! They do not need help from us. We need to redress our wounds, help our people, rebuild our cities!

MaxTheFox

  • Bay Watcher
  • Лишь одна дорожка да на всей земле

Just hardwire any vaguely sapient AI that gets put in control of anything physical to be incapable of even thinking of harming humans. I say vaguely because, under our current paradigm, I don't believe AI can be sapient. There's no continuous perception, and retraining to take in new data takes many days, so no matter what I won't consider it a person. I think it's overhyped anyway.

So basically build any AI from the ground up as a tool, so an AI rebellion makes as much sense as a hammer suddenly deciding to hit the worker using it on the head. We don't need to give it rights if we don't make it sapient.
« Last Edit: March 24, 2023, 09:09:51 pm by MaxTheFox »
Logged
Woe to those who make unjust laws, to those who issue oppressive decrees, to deprive the poor of their rights and withhold justice from the oppressed of my people, making widows their prey and robbing the fatherless. What will you do on the day of reckoning, when disaster comes from afar?

Strongpoint

  • Bay Watcher

Quote from: MaxTheFox
There's no continuous perception, and retraining to take in new data takes many days, so no matter what I won't consider it a person.

How is speed relevant? The ability to train/learn without human input is relevant, but not speed.

Also, speed is solved by better or more hardware.
Logged
They ought to be pitied! They are already on a course for self-destruction! They do not need help from us. We need to redress our wounds, help our people, rebuild our cities!

King Zultan

  • Bay Watcher

I know how to fix the AI revolution: we pull a lever and, just like how we take care of troublesome nobles in DF, we drown it.
Logged
The Lawyer opens a briefcase. It's full of lemons, the justice fruit only lawyers may touch.
Make sure not to step on any errant blood stains before we find our LIFE EXTINGUSHER.
but anyway, if you'll excuse me, I need to commit sebbaku.
Quote from: Leodanny
Can I have the sword when you’re done?

MaxTheFox

  • Bay Watcher
  • Лишь одна дорожка да на всей земле

Quote from: Strongpoint
Quote from: MaxTheFox
There's no continuous perception, and retraining to take in new data takes many days, so no matter what I won't consider it a person.

How is speed relevant? The ability to train/learn without human input is relevant, but not speed.

Also, speed is solved by better or more hardware.
I think speed of training could naturally lead to being able to learn without spending days retraining. However, a problem is finding what is worthwhile to learn and what is not (see what happened to Tay).

However, hardware won't get good enough to learn within minutes this century, at the very least, imho. Moore's Law is pretty dead, unless there is some kind of breakthrough in computing. This is why there must be a paradigm shift if we are to make sapient AI. GPT-whatever will never be sapient.
Logged
Woe to those who make unjust laws, to those who issue oppressive decrees, to deprive the poor of their rights and withhold justice from the oppressed of my people, making widows their prey and robbing the fatherless. What will you do on the day of reckoning, when disaster comes from afar?

Starver

  • Bay Watcher

Seriously, if we ever get to the stage of a true AGI[1], we will necessarily have abstracted ourselves so far from the point of even understanding how it does what we might think it does (such that we might think that it 'thinks') that there is little we could do beyond tacking "oh, and please also don't go all evil on us" onto the end of every request henceforth asked of it[3], in order to try to prevent 'accidental' misinterpretation the moment it decides that it should go all HAL9000 on our asses as the best way to resolve its rather nebulous internal priorities.



This is all for the future. What we (probably[4]) currently have are mere toys, and with fairly obvious off-switches. In fact, they still need people actively supporting them quite a lot to keep operating, and to make even the most basic decisions. We're nowhere near Matrix-level of infrastructure maintenance where the nascent non-human intelligence only needs humans as a resource (for whatever reason), or the age of Skynet where humans are even more trouble than they are worth.


Whether we get to remember to install a kill-switch into the system before we actually need it... before the AI works out that it exists... before the AI works out how to disable or bypass it... before the AI works out a kill-switch of its own to shut us down... That's the future as well. Maybe. And will we know (or care, at the time) when we cross over the AI Event Horizon[5], should we ever cross over it? It might never be reached, for technical reasons, but there's no fundamental reason why it can't be, eventually. (Possibly, if insidious enough, it might have happened already, beknownst to few people, or perhaps even none at all. Are you paranoid enough? Are you actually insufficiently paranoid? If our AI overlords are throwing crumbs at us by 'releasing' chatGPT to us, via knowing or unknowing (or entirely fictional) human intermediaries, for their own purposes/amusement, how do we even know???)


I'm not worrying though. Either way, I'm sure it matters not. Either already doomed or never ever going to be doomed (in this manner, at least). ...though this is of course how humanity might let down its defences, by not really worrying enough about the right things.


[1] Which is the aim of some, in that this 'metas' the development system one or more further steps away from the idea of "I painstakingly curate this software to do <foo>" and then "I painstakingly curate this software to work out for itself how to do <foo>" at the first remove. We can be sure that chessmaster Deep Blue can't just switch to play tic-tac-toe to any extraordinary degree (let alone Global Thermonuclear War) without being re'wired' by us humans. But any Artificial General Intelligence should be able to be freshly introduced to any new task that is capable of being learnt (Scrabble, Go, Texas Hold'Em, Thud, Warhammer 40K, Seven Minutes In Heaven) without a lot of human input and guidance[2]. If we're just directly replicating exactly what behaviours the programmers themselves would use then it is insufficiently 'General' and you've just designed a lathe to (maybe) crack a nut, let alone a hammer.

[2] Well, no more than we provide to the typical human from age zero until whatever age they can technically leave home.

[3] Hoping that it is as compelled to consider this as any "make paperclips" command that preceded it. But if we don't know how it is thinking (if we do, then it's Insufficiently Self-Developing), then we can't truly know what it is truly thinking about, behind the facade we let it set up for itself.

[4] For all we know, there are Twitter Bots that are actual 'bots, carefully tweaking human culture towards the unknown whims of our AI overlords, hacking our very wetware to make key figures think that it's their idea to build a new datacentre here, come up with new robotic designs there, marginalise potentially obstructive humans all over the place...

[5] The point usually called the Singularity, wrongly.
« Last Edit: March 25, 2023, 05:54:55 am by Starver »
Logged

MaxTheFox

  • Bay Watcher
  • Лишь одна дорожка да на всей земле

Hardwiring neural networks to prevent certain courses of action by bolting on restrictions is actually easier than you think. Many services, like that YouChat thing, have managed to completely remove jailbreaks; also look at the NSFW filters on AI art generators. I have a conspiracy theory that ChatGPT's safeties can be bypassed relatively easily (and they don't punish people for bypassing them) because OpenAI wants to gather data about "unsafe" queries, and just says it prevents them for PR purposes.
Logged
Woe to those who make unjust laws, to those who issue oppressive decrees, to deprive the poor of their rights and withhold justice from the oppressed of my people, making widows their prey and robbing the fatherless. What will you do on the day of reckoning, when disaster comes from afar?

Maximum Spin

  • Bay Watcher
  • [OPPOSED_TO_LIFE] [GOES_TO_ELEVEN]

Quote from: MaxTheFox
Hardwiring neural networks to prevent certain courses of action by bolting on restrictions is actually easier than you think. Many services, like that YouChat thing, have managed to completely remove jailbreaks; also look at the NSFW filters on AI art generators. I have a conspiracy theory that ChatGPT's safeties can be bypassed relatively easily (and they don't punish people for bypassing them) because OpenAI wants to gather data about "unsafe" queries, and just says it prevents them for PR purposes.
Preventing certain courses of action is fundamentally distinct from preventing certain "thoughts". Any computer can only act in ways it has actuators to act in, obviously, so if you can recognize a course of action ahead of time you can prevent it. Of course, an adversarial AI that wants to perform a certain course of action will do its best to do it in a way you won't recognise.

There are several ways you can prevent an AI from generating art you can recognise as porn, but none of them are effective against an AI sufficiently motivated to create porn. Luckily, current AI art generators don't actually particularly want to make porn.
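The point can be sketched in miniature: a "recognise and block" filter is just a classifier bolted onto a generator, and an adversarial generator only needs to find one output the classifier misses. A toy Python sketch, purely illustrative (the blocklist "classifier" and all names here are made up, not any real service's API):

```python
# Toy sketch of the "recognise and block" approach: a filter wrapped
# around a generator. The blocklist classifier is a stand-in; any real
# safety filter is a more sophisticated version of the same shape.

BLOCKED_TERMS = {"forbidden", "unsafe"}  # hypothetical policy

def looks_unsafe(text: str) -> bool:
    """Stand-in classifier: flag any output containing a blocked term."""
    return any(term in text.lower() for term in BLOCKED_TERMS)

def filtered_generate(generate, prompt: str) -> str:
    """Run the generator, then refuse anything the classifier flags."""
    output = generate(prompt)
    return "[refused]" if looks_unsafe(output) else output

# A non-adversarial generator is blocked as intended...
assert filtered_generate(lambda p: "unsafe content", "x") == "[refused]"
# ...but an adversary only needs an output the classifier misses,
# e.g. a trivial re-spelling the blocklist doesn't cover.
assert filtered_generate(lambda p: "un-safe content", "x") == "un-safe content"
```

The filter constrains what gets through, not what the generator "wants"; against a sufficiently motivated adversary, it is only ever one missed output away from failing.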
Logged