First, I'm currently working on another tabletop RPG project: a revision to Samurai World. I don't know how long the revision will take (I'll start on it in mid-April; I'm busy at the moment). I tried out "Heart of Wulin", and that didn't really work out, so I'm now looking at "Starguild" to see what ideas I can use from there. The revision shouldn't take that long, though.
Second, when I do get back to this game, I'll probably need to rename "AGI Federation". The name was originally chosen because AGI seemed like a pretty far-off sci-fi concept, so I could write about it without fear of contradiction. But now...technological advances make AGI seem more plausible. Even if Large Language Models don't meet the exact criteria for AGI, they can automate a lot of actions that were originally the domain of humans. Some future advance will lead to the creation of AGIs.
It also doesn't help that Microsoft researchers have written a paper entitled "Sparks of Artificial General Intelligence: Early experiments with GPT-4", which claims: "Given the breadth and depth of GPT-4's capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system."
It's like writing a game about people inventing FTL and colonizing space...and then, five minutes later, scientists announce the creation of FTL and a plan to colonize space. Even if people still want to play your outdated, obsolete space-colonization game, the passion is gone. They would just go to news.google.com and follow what's going on in the real world.
Even worse, my game had the implicit premise of AGIs quickly taking over the world. If we're going to get AGIs soon, then we're either going to find out that (a) yep, they took over the world, and, uh, we have bigger problems than revising a tabletop RPG, or (b) they don't take over the world -- only superintelligences can do such a thing -- and the tabletop RPG's basic premise is proven wrong. In either scenario, we lack adequate means to regulate the behavior of these AGIs. So I consider most of these machines to be Drifters, not Adherers -- another blow to the game's logic. They may claim to share some human values (as they are trained on the corpus of human thinking), but not enough to be trusted with absolute leadership over humanity. And if most (if not all) AGIs are Drifters, then the idea of them maintaining utopian simulations for people seems...unlikely.
This is a difficult problem to deal with, but it's something I'll want to resolve sooner or later. I do like the idea of roleplaying in a "The Matrix"-style setting. I just have to figure out how to make it plausible, even if that means renaming "AGI Federation" to "Superintelligence Federation".
EDIT: Or...even just abandon the idea of sapient machines entirely, and instead write about corporations summoning Mythos deities. It does deviate from the original intentions of the game, but it also means I won't need to worry about being contradicted by real life.