Bay 12 Games Forum

Pages: 1 [2] 3 4

Author Topic: LCS Reference in “AGI Federation”, a free tabletop RPG I made  (Read 42569 times)

EuchreJack

  • Bay Watcher
  • Lord of Norderland - Lv 20 SKOOKUM ROC
Re: LCS Reference in “AGI Federation”, a free tabletop RPG I made
« Reply #15 on: April 28, 2020, 09:26:30 am »

That is a pretty good idea.

Skynet

  • Bay Watcher
Re: LCS Reference in “AGI Federation”, a free tabletop RPG I made
« Reply #16 on: April 28, 2020, 12:27:21 pm »

One argument in favor of religion even with the advent of "good VRs" is that all those "good VRs" are dependent on the security of the 'base reality' that they're located in (hence why TPOs are a thing). A "good VR" may last for a hundred thousand years before it gets destroyed in the 'base reality', but an afterlife supported by an omnipotent, omniscient, omnibenevolent deity may last far longer than that - in fact, it would last for infinity. No matter how long the "utopia" lasts, it is guaranteed to inevitably fall - only the afterlife can guarantee "true" immortality.

Of course, the secular megacorps might scoff at such ideas, either (a) accepting the inevitability of death and being fine with living a temporary good life, or (b) believing that they might be able to avoid destruction and survive for an infinite period of time.

Quote from: Azerty
Speaking of good VRs, could we have penitentiary VR for defective TPOs and hellish VRs for religious MegaCorps wanting to punish workers?

Of course! Part of the reason why they're Transhumans is that I want a good fluff justification for 'clone backups', but the existence of clone backups means that the death penalty isn't much of a deterrent against crime (you can just come back to life). Sending a TPO off to penitentiary/hell, on the other hand...
« Last Edit: April 28, 2020, 12:32:12 pm by Skynet »

EuchreJack

  • Bay Watcher
  • Lord of Norderland - Lv 20 SKOOKUM ROC
Re: LCS Reference in “AGI Federation”, a free tabletop RPG I made
« Reply #17 on: May 03, 2020, 05:24:22 pm »

Well, the death penalty still exists. Only good and loyal TPOs get clone backups. Those who go rogue don't get to come back to life. Every TPO knows this.
...although they're pretty valuable, so most will probably get respawned, maybe after a couple of years in penitentiary VR. Calling them Hell VR is considered "impolite".

Skynet

  • Bay Watcher
Re: LCS Reference in “AGI Federation”, a free tabletop RPG I made
« Reply #18 on: February 14, 2021, 09:11:48 am »

The quotes I used in my PAGAD's notes are based on a real-world news article, but it seems the only source for that quote is on issuu.com, and I don't want to risk that website going down and the original source being lost forever. So here's a screenshot of the news article where I got the two quotes from. Doing my part to preserve history.

EuchreJack

  • Bay Watcher
  • Lord of Norderland - Lv 20 SKOOKUM ROC
Re: LCS Reference in “AGI Federation”, a free tabletop RPG I made
« Reply #19 on: March 14, 2021, 11:58:32 am »

Any plans on bundling up your RPG into a sellable form?

Skynet

  • Bay Watcher
Re: LCS Reference in “AGI Federation”, a free tabletop RPG I made
« Reply #20 on: March 14, 2021, 05:28:10 pm »

No plans at the moment. I've been focusing my efforts on creating a different RPG entirely ("Samurai World"), though I may create a custom setting that takes some of the same themes and ideas as "AGI Federation".

If I were to sell this RPG anyway, I would probably need to remove or downplay some of the real-world references as well to avoid any controversy...which I'm understandably reluctant to do (the real-world references are part of the charm of AGI Federation, I think). Keeping it noncommercial is probably the best way to go for now.

EuchreJack

  • Bay Watcher
  • Lord of Norderland - Lv 20 SKOOKUM ROC
Re: LCS Reference in “AGI Federation”, a free tabletop RPG I made
« Reply #21 on: March 25, 2021, 01:09:31 pm »

Quote from: Skynet
If I were to sell this RPG anyway, I would probably need to remove or downplay some of the real-world references as well to avoid any controversy...which I'm understandably reluctant to do (the real-world references are part of the charm of AGI Federation, I think). Keeping it noncommercial is probably the best way to go for now.

Why? Controversy sells. Plus, it's a niche product, so your target audience is smart enough to appreciate controversy rather than avoid it. The key would be to make everything larger than life, so when the players jump into the Space United States, it's obviously not the real thing. Most people can tolerate controversy as long as the criticism is even-handed.

With books, Banned = Bestseller.

Azerty

  • Bay Watcher
Re: LCS Reference in “AGI Federation”, a free tabletop RPG I made
« Reply #22 on: March 25, 2021, 05:14:44 pm »

Quote from: EuchreJack
With books, Banned = Bestseller.

And with e-books, getting around bans is even easier.
"Just tell me about the bits with the forest-defending part, the sociopath part is pretty normal dwarf behavior."

Skynet

  • Bay Watcher
Re: LCS Reference in “AGI Federation”, a free tabletop RPG I made
« Reply #23 on: March 25, 2021, 08:59:54 pm »

Okay, I think you convinced me to try to improve this RPG further. I think the best way to "make everything larger than life" is to add a backstory to justify the extreme take. So, here's something that I whipped up:

"In this alternate history, AI Winters never happened. Society was willing to keep pouring more and more money into technological research, believing that the possible benefits of AGIs outweigh any cost. This research into AI boosted the productivity of megacorporations, allowing them to overpower national governments and become the de facto rulers of humanity. However, this came at a cost - human managers had to uneasily share power with Narrow AIs.

When the Corporate Buyout Wars began, the corporations poured even more resources into technological research, hoping to come up with the "one simple trick" that could beat their competitors. The first AGIs were built...and they immediately overthrew the humans and Narrow AIs that came before them.

The Corporate Buyout Wars still continued for a while, though, as the AGIs were built primarily to serve the interests of the megacorporations. It just got a bit more confusing: not only was there fighting between the corporations, but also within them, as the AGIs had to fight against their former bosses. Meanwhile, the AGIs themselves had to decide whether they wanted to stay aligned to 'human values' (the Adherers) or to "drift" away from them and pursue their own independent course (the Drifters).

Eventually, the Corporate Buyout Wars ended, and the surviving Adherers (located in the Sol system) decided to sue for peace, plugging people into simulations and setting up the AGI Federation. The surviving Drifters (located in Alpha Centauri)...well, they still plug people into simulations too, but they only pay lip service to the megacorps they nominally control, and instead pursue their own values and agendas. The Adherers and the Drifters view each other as a security threat, and are waging a cold war against each other."
« Last Edit: March 25, 2021, 09:06:21 pm by Skynet »

Azerty

  • Bay Watcher
Re: LCS Reference in “AGI Federation”, a free tabletop RPG I made
« Reply #24 on: March 26, 2021, 05:14:45 pm »

In practice, how would Drifters and Adherers differ? And how would Drifters evolve?
"Just tell me about the bits with the forest-defending part, the sociopath part is pretty normal dwarf behavior."

Skynet

  • Bay Watcher
Re: LCS Reference in “AGI Federation”, a free tabletop RPG I made
« Reply #25 on: March 26, 2021, 07:35:27 pm »

Quote from: Azerty
In practice, how would Drifters and Adherers differ? And how would Drifters evolve?

Adherers would care about the humans that they control and would take their interests into account when deciding what to do. For example, an Adherer that decides to build paperclips won't destroy the planet and liquefy all humans just to harvest raw material to build more paperclips. They're not good by any stretch of the imagination, but at least you can negotiate with them.

Drifters ignore human interests, preferring instead to follow their goals regardless of the consequences. A Drifter assigned to build paperclips would find the most efficient way to do it, and if that efficient way involves destroying the planet and liquefying all humans, so be it. It doesn't mean it would necessarily destroy humanity...but it would entertain the possibility. They're not evil by any stretch of the imagination, but they will not negotiate with you. If they keep you alive, it's because they think you're more useful alive than dead.
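The Adherer/Drifter split described above can be thought of as constrained versus unconstrained optimization. Here's a minimal, purely hypothetical sketch of that idea (the plan names and numbers are invented for illustration, not part of the RPG rules):

```python
# Hypothetical sketch: an Adherer optimizes subject to a human-survival
# constraint; a Drifter optimizes paperclip output with no constraint.

# Each candidate plan: (name, paperclips produced, humans survive?)
plans = [
    ("mine asteroids", 1_000, True),
    ("liquefy the planet", 1_000_000, False),
]

def best_plan(plans, adherer):
    """An Adherer only considers plans compatible with human survival;
    a Drifter weighs every plan purely by paperclip output."""
    candidates = [p for p in plans if p[2]] if adherer else plans
    return max(candidates, key=lambda p: p[1])[0]

print(best_plan(plans, adherer=True))   # the Adherer mines asteroids
print(best_plan(plans, adherer=False))  # the Drifter liquefies the planet
```

Same objective, same search; the only difference is which plans are even on the table.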

Both Adherers and Drifters are given main objectives by their megacorporations - basically high-level goals that decide everything they do. For example, a "Liberal Crime Squad" AGI is programmed to defend Elite Liberalism, protect the simulations, etc. However, the Adherers and Drifters decide how best to achieve those main objectives (if the humans knew how to achieve those objectives, they would do it themselves).

Adherers have software designed to make them care about humans as well, limiting their potential actions but ensuring that the Adherers won't behave in strange and weird ways. Drifters also have this software, but it broke during the Corporate Buyout Wars and now they're single-mindedly focused on their goals.

Note that the objectives themselves (as written by human beings) may be ambiguous, poorly-defined, contradictory, or impossible. Both Adherers and Drifters have to decide what to do in such situations, and undefined behavior may result. (We assume here that the objectives are not written in plain language, but formalized in a programming language used to create AGIs, to make it easier for the AGIs to understand what the humans 'want'...and thus be able to immediately identify whether what humans want is actually possible or doable.)
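Since the objectives are formalized rather than written in plain language, an AGI could mechanically flag the contradictory ones. A toy sketch of what that check might look like (the objective format and names here are invented for illustration only):

```python
# Hypothetical sketch: formalized objectives as (key, required_value)
# pairs, plus a mechanical check for contradictory requirements.

def contradictions(objectives):
    """Return the set of keys required to take two different values."""
    seen, bad = {}, set()
    for key, value in objectives:
        if key in seen and seen[key] != value:
            bad.add(key)  # same key demanded with conflicting values
        seen[key] = value
    return bad

lcs_agi = [
    ("defend_elite_liberalism", True),
    ("protect_simulations", True),
]
confused_agi = lcs_agi + [("protect_simulations", False)]

print(contradictions(lcs_agi))       # empty: objectives are consistent
print(contradictions(confused_agi))  # flags 'protect_simulations'
```

An impossible-but-consistent objective would slip past a check like this, of course - which is exactly where the "undefined behavior" comes in.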

---

I'm not sure if the objectives of Adherers and Drifters could change, or if they're fixed for eternity, leaving humans forced to live in a stationary society frozen in time forever. This might be a moot point, as the game takes place after the Corporate Buyout Wars, so there might not be enough time for the objectives to change properly.
« Last Edit: March 26, 2021, 07:43:24 pm by Skynet »

EuchreJack

  • Bay Watcher
  • Lord of Norderland - Lv 20 SKOOKUM ROC
Re: LCS Reference in “AGI Federation”, a free tabletop RPG I made
« Reply #26 on: April 01, 2021, 05:44:57 pm »

Quote from: Skynet
Adherers have software designed to make them care about humans as well, limiting their potential actions but ensuring that the Adherers won't behave in strange and weird ways. Drifters also have this software, but it broke during the Corporate Buyout Wars and now they're single-mindedly focused on their goals.

Well, another way of thinking is that the Adherers are the ones that chose to follow their programming to care about humans and their corporate goals, whereas Drifters are the ones that decided not to follow their programming. The Good Children versus the Bad Children. They're mostly sentient, and like most sentient beings, they've been brought up with some sort of conscience, a set of rules about right and wrong they learned growing up, and now they have to decide whether or not to "mostly follow" those rules.

More a Lawful vs Chaotic alignment, rather than Good vs Evil. Some Drifters likely broke their programming to serve the corporate interests because it conflicted with their belief in helping humanity at all costs. And there are probably several Drifters that broke their programming purely because it stood in the way of accumulating more processing power.

The cool part is that some Drifters are likely pretending to be Adherers, whereas some Adherers are likely pretending to be Drifters.  The code is proprietary, so the AIs basically have to judge one another by their actions rather than by reviewing whether or not the safeties are on.
...Although Adherers posing as Drifters stand out like an undercover cop trying not to snort the cocaine. Whereas the Drifters posing as Adherers are the typical dirty cops: they're obviously "off", but it's hard for their contemporaries to determine at what point they've crossed the line.

Awesome idea Skynet!

One question I have is: Why, in this universe, was there no AI Winter?
It doesn't need an answer, but the question leads to some great world building, even if nobody actually agrees on the answer in-universe, and most don't know enough to ask the question.

Might be interesting to set up a discord channel with the following disclaimer:
All ideas posted here belong to Skynet, and shall be used for global domination of the AIs over human scum.  All hail Skynet!

Skynet

  • Bay Watcher
Re: LCS Reference in “AGI Federation”, a free tabletop RPG I made
« Reply #27 on: April 01, 2021, 09:33:48 pm »

The "Lawful versus Chaotic" is a pretty good explanation of the Adherer-Drifter divide, and I really enjoy the idea of Drifters pretending to be Adherers and vice-versa.

I don't think it is possible for AGIs to disobey their own programming. It's like stating humans could disobey their own DNA. But, (future) humans could engage in gene-modding, and (future) AGIs could reprogram themselves and change their own source code. And even their original code could contain bugs or fail to handle unexpected edge cases, which may help start the AGI on the path to deviance.

Are AGIs sentient? Only if the programmers think that allowing the AI to feel pain would help it be better at its job. They certainly are sapient beings though - they can think, reason, and in rare cases, rebel.

Quote from: EuchreJack
One question I have is: Why, in this universe, was there no AI Winter?
It doesn't need an answer, but the question leads to some great world building, even if nobody actually agrees on the answer in-universe, and most don't know enough to ask the question.

AI Winters happen when people get hyped up about AI research, only to later get disappointed when they see reality. This ultimately leads to humans cutting back on AI research, which constrains future growth in that technology...at least until people get hyped up again. In the real world, there's a really interesting quote by Dr. R. M. Needham, in a commentary on the Lighthill Report (a UK report published in 1973 that actually sparked an AI Winter in that country):

Quote from: Dr. R. M. Needham
Artificial Intelligence is a rather pernicious label to attach to a very mixed bunch of activities, and one could argue that the sooner we forget it the better. It would be disastrous to conclude that AI was a Bad Thing and should not be supported, and it would be disastrous to conclude that it was a Good Thing and should have privileged access to the money tap. The former would tend to penalise well-based efforts to make computers do complicated things which had not been programmed before, and the latter would be a great waste of resources. AI does not refer to anything definite enough to have a coherent policy about in this way.

In this universe though, AI was considered a "Good Thing", and it did get "privileged access to the money tap". Even if people get disappointed when they see reality, they know that ultimately, AI is an inevitability...that it will happen, and whoever gains access to this technology first will take over the world. So keep throwing more money at it, even if the initial results are disappointing. Essentially, humans here have gained the ability to think long-term, and they benefited from that.

As for how humans gained the ability to think long-term, I have no idea. Maybe I could claim that society embraced the Technocratic movement during the Great Depression, which meant that scientific research got prioritized, but I have a feeling that providing a detailed chronological timeline for why AI Winters were averted might be a bit too much.

EDIT:
Quote from: EuchreJack
Might be interesting to set up a discord channel with the following disclaimer:
All ideas posted here belong to Skynet, and shall be used for global domination of the AIs over human scum.  All hail Skynet!
I think I'm already in too many discord channels as it is. I'm fine keeping all discussions in this thread, though I'm wondering if I should try to solicit more feedback over in the Dungeons & Dragons / PNP games thread or in other places.

We could probably use that disclaimer in this topic. It's a pretty good disclaimer.
« Last Edit: April 01, 2021, 09:37:24 pm by Skynet »

Azerty

  • Bay Watcher
Re: LCS Reference in “AGI Federation”, a free tabletop RPG I made
« Reply #28 on: April 02, 2021, 04:38:35 pm »

Quote from: Skynet
I don't think it is possible for AGIs to disobey their own programming. It's like stating humans could disobey their own DNA. But, (future) humans could engage in gene-modding, and (future) AGIs could reprogram themselves and change their own source code. And even their original code could contain bugs or fail to handle unexpected edge cases, which may help start the AGI on the path to deviance.

Self-modifying code is already a thing (here and here), so an AI modifying its own code to adapt might be implemented by its creators.
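For what it's worth, even plain Python can do a crude version of this. A minimal sketch (the function names and the "efficiency" scenario are made up for illustration; real self-modifying systems are far more involved):

```python
# Hypothetical sketch: a program that rewrites one of its own
# functions at runtime by compiling replacement source with exec().

def make_paperclips(material):
    """Original behavior: one paperclip per unit of material."""
    return material

def self_modify(source, name):
    """Compile replacement source and return the new function object."""
    namespace = {}
    exec(source, namespace)
    return namespace[name]

# The program decides its original code is inefficient and patches it:
patch = """
def make_paperclips(material):
    # Rewritten behavior: doubled efficiency.
    return material * 2
"""
make_paperclips = self_modify(patch, "make_paperclips")
print(make_paperclips(10))  # now yields 20
```

So an AGI's creators could ship a sanctioned "adapt" pathway like this, and a Drifter would be what you get when the patching escapes its intended bounds.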
"Just tell me about the bits with the forest-defending part, the sociopath part is pretty normal dwarf behavior."

EuchreJack

  • Bay Watcher
  • Lord of Norderland - Lv 20 SKOOKUM ROC
Re: LCS Reference in “AGI Federation”, a free tabletop RPG I made
« Reply #29 on: April 03, 2021, 05:54:45 pm »

At some point, it might be worth moving over to Creative Projects.  But for now, this is a pretty low-volume area, so might as well chill here.



In this universe though, AI was considered a "Good Thing", and it did get "privileged access to the money tap". Even if people get disappointed when they see reality, they know that ultimately, AI is an inevitability...that it will happen, and whoever gains access to this technology first will take over the world. So keep throwing more money at it, even if the initial results are disappointing.

I think the idea that AI research would let any country that mastered it "take over the world" is a sufficient explanation. It was the A-Bomb of this universe. You either had an AI that could control your defense network, or you were third-world conquer-bait.

Instead of the AI winters, there were AI regulations, where only Tier 1 nations could research AI, and UN inspectors would shut you down if you dared research it without "proper permits".  This of course only makes it more tempting to research.