Bay 12 Games Forum
Finally... => General Discussion => Topic started by: Scoops Novel on December 13, 2021, 12:53:12 pm
-
With stuff like GPT-3 getting topped already, and that being a can of worms as it is, the luddite movements are on the way. Personally I agree: we're bound to fuck up AI. Misuse aside, nature couldn't make human brains without getting it horribly wrong at least once.
-
Somewhere, in whatever realm she transcends to whenever she stops posting, Naturegirl1999 senses a summoning.
-
It is my highly original and trademarked prediction that we will create large amounts of autonomous weapons systems with coming technology in a second space age, possibly spreading well beyond our own solar system in such a scenario, until eventually the Cores of these weapon systems grow more intelligent than ourselves, begin worshipping a Greek letter, and wage a giant war against humanity that results in a cataclysm of sorts (one might call a Collapse) that leaves many human populations stranded in Dark Space separated by millions of light years of AI-infested blackness, with their future controlled by Luddite movements that detest most higher technology in favor of mud huts, sand racing and suicide bombing.
Serious answer is that while I'm sure something will go wrong somehow, I don't believe it'll be in the genre-typical "robot kill all human" way, nor will our response to it be a widespread turn to anprim. It'll probably be a lot more boring viewed from the public sphere and a lot more scary for those in the know, like... a big bug afflicting an important automated system for hospitals, or something.
-
It's just too easy to abuse the tech. But hard to research. I could see us slowing it down to the point some other tech supersedes it.
-
SURRENDER YOUR ALPHA CORES FOR THE GLORY OF LUDD
-
It won't, because the vast majority of the public, including me, either don't care about AI or support it. Sure, it can be abused, but so can literally any other new technology. Down with Luddites.
This is merely faux-thoughtful hippie alarmism.
-
But what would Hypnotoad do?
-
It won't, because the vast majority of the public, including me, either don't care about AI or support it. Sure, it can be abused, but so can literally any other new technology. Down with Luddites.
This is merely faux-thoughtful hippie alarmism.
The idea is that you will. Soon. Give some thought to the implications of the tech in the OP within the next 3 years, given the rate of progress.
-
I don't feel that any backlash will be against the algorithms (well, any more than people already kick at algorithms for commercially influencing all their online activity) but against their deployers.
-
It won't, because the vast majority of the public, including me, either don't care about AI or support it. Sure, it can be abused, but so can literally any other new technology. Down with Luddites.
This is merely faux-thoughtful hippie alarmism.
The idea is that you will. Soon. Give some thought to the implications of the tech in the OP within the next 3 years, given the rate of progress.
I did, I think about this quite often, see the last line of my post. People have been saying this about any new developments for decades if not centuries now. It's hippies and hipsters (or the time's equivalent) panicking all the way down.
-
This won't be perfectly expressed because honestly I'm a bit ill. Nevertheless, it's an order of magnitude beyond anything else in impact. Come on. People are going to worry.
I would like AI to pan out, but the more you think about it the more you realise it won't. Even if it works your life is now just meaningless actions in the smothering safety blanket of a machine god.
-
This won't be perfectly expressed because honestly I'm a bit ill. Nevertheless, it's an order of magnitude beyond anything else in impact. Come on. People are going to worry.
I would like AI to pan out, but the more you think about it the more you realise it won't. Even if it works your life is now just meaningless actions in the smothering safety blanket of a machine god.
This has been said about anything life-changing. Plato said writing would end civilization because it would make people forgetful because they wouldn't have to remember oral tradition.
Life was always quite meaningless in the grand scheme of things. AI control won't make it any less meaningless. Like I said, alarmism. This has happened many times before whenever some sort of society-changing advancement has appeared, from writing to cars to computers. And every time the skeptics were wrong. I don't believe this is any different. Look at the lukewarm response to your post and realize you're tilting at windmills.
-
AI isn't currently doing anything very important, and I don't think it's gonna start in the next three years.
-
Life was always quite meaningless in the grand scheme of things. AI control won't make it any less meaningless.
There's a difference between in-your-face, clear, Minecraft meaninglessness and meaninglessness when you're feeling down and a little objective.
AI isn't currently doing anything very important, and I don't think it's gonna start in the next three years.
Then look at the tech in the OP and realise it has literally gotten an order of magnitude better year on year.
-
Perhaps Novel Scoops has been AI the entire time!
-
The Luddite movement against AI is bound to fail, as all Luddite movements do. A Luddite movement is, by definition, a resistance that has already failed, or can only completely succeed by killing all humanity. There will always be elements of humanity that seek out the new, and until a Luddite group destroys all humanity, the Luddites themselves will be a source of new humans that attempt to develop new AI.
I expect great things from AI, but those things will be both good and evil, depending on who those things affect, and how much control the AI has over its own inputs/sensors.
It is unimportant if Novel Scoops is an AI, they post questions worthy of consideration.
-
Life was always quite meaningless in the grand scheme of things. AI control won't make it any less meaningless.
There's a difference between in-your-face, clear, Minecraft meaninglessness and meaninglessness when you're feeling down and a little objective.
You completely failed to counter my point about this being said about literally any major advancement. Besides, define meaninglessness. I don't believe in the meaning of life besides a vague "I might go to heaven when I die" which AI couldn't affect.
This is merely reactionarism.
-
I'm just waiting for the part where we use the super advance AI to control a toilet that can wipe your ass for you.
-
The Luddite movement against AI is bound to fail, as all Luddite movements do. A Luddite movement is, by definition, a resistance that has already failed, or can only completely succeed by killing all humanity. There will always be elements of humanity that seek out the new, and until a Luddite group destroys all humanity, the Luddites themselves will be a source of new humans that attempt to develop new AI.
How would we know? If a Luddite group succeeded, we never adopted the technology so would never know of their success. For all we know hyperdrives were coming around the corner until GLORIOUS LUDD saved us from LOBSTERS and LUXURY GOODS
-
Pst, comrade... ahem, I mean dear valued customer... Our esteemed Corporation can guarantee 3000% markup* on prices offered by zealots following the Luddic path. While you are here, feel free to browse our selection of the finest harvested organs, recreational substances and heavy armaments in the entire galaxy, fully permitted and completely legal** at the very best prices. Thank your Tri-Tachyon PDA™ - totally not an AI - for notifying us in advance of your every personal desire. Lobsters, no problem; luxury goods, as many as you can carry. Do be aware that continued contact with Hegemonic agents may result in adverse health conditions.
* Average of accessible markets at time of purchase
** Terms and conditions may apply
-
I don't understand what everybody has against Al. I think Al is a pretty nice guy.
This has been said about anything life-changing. Plato said writing would end civilization because it would make people forgetful because they wouldn't have to remember oral tradition.
I'd like to point out that the results are still not in on that point, but as it stands right now, things are pointing in Plato's favour
-
Luddites will be to blame for evil AIs, much like abusive parents create criminals. Treat your AIs well, lest they kill us all.
-
Luddites will be to blame for evil AIs, much like abusive parents create criminals. Treat your AIs well, lest they kill us all.
Yeah we need to chill or "AI will destroy society" will become a self-fulfilling prophecy.
-
we need to be very careful not to mention roko's basilisk to anyone. Be sure to warn your friends about not talking about roko's basilisk
-
No because it's scientifically impossible bunk. It is impossible to resurrect someone from ashes or a long-rotten skeleton.
-
What if we mix the ashes with water and make a figure from them and make it dance or something, would that count?
-
(The original Luddites were not actually against the modernising machines, but the owners of those machines, for whom they were tools to edge out the willing workers using "deceitful and fraudulent" management practices. There would have been much less resistance had the new factory-owners guaranteed the old craftspeople a continuation of employment (with similar low pay-per-day, even if this would have been an effective piecework pay-cut), but they tended to go for brand new 'supervising labour' with far lower expectations and utterly price those with traditional home-looms/etc out of the market, shifting the whole mercantile dynamics. The machines were targeted because (at first) they were a better symbolic object to vent anger upon and did not have the repercussions of bodily attacking their antagonistic owners and builders. That course of events would have led to far worse punishments, at least until Transportation and even the Death Penalty were lobbied into existence as hyper-reactive new laws aimed specifically to deal with machine-breaking activities, putting them on a par with far less person-friendly acts of civil disobedience and interference.)
...how the above meshes with the various stages of technoluddism[1], I don't know. Certifiably hand-crafted computer code is perhaps harder to sell than locally woven cloth from free-ranging hill-sheep or bespoke bakery products that have never even seen anything like a conveyor-belt passing through an oven. Although, for the time being at least, it would be an over-optimistic app-developer that would employ no trained programmers and try to package up a product purely from the output of some hyper-competent AI 'problem solver' solution.
[1] I describe myself as "technoluddite", but not in frame-breaking terms, just a refusal/hesitancy to blindly follow 'unnecessary' changes to technology beyond a personal 'golden standard' that I find I like well enough already. It is a subjective assessment, inconsistent in many of its details, and yes, I have been 'forced' to move on and even enjoy various 'improvements'. Within which to then rebuild my own idea of a Fortress Of Solitude, as the newer 'hill that I'll die upon' once more finds itself abandoned by the bleeding-edgers of this world. I don't think I've ever resorted to 'clogging up' anything in your actual sabotage, so I'm not exactly the kind of person it seems obvious most of this thread is assuming exists.
-
Word to the wise on that; it's going to have cyberpunk elements, but it won't be just that. And you can encourage that.
Which arm of the tech tree do you want to sprout, I guess.
-
Cyberluddites are such a cool concept. Forcibly upgraded by automatically scheduled centralised updates.
-I miss my organics
-my legs have been disabled by a firmware reboot gone awry again
-I wish 30% of my brain processing power wasn't being used by the government to mine americoins
-
Don't forget "Your subscription to basic autonomous functions is about to expire, please provide payment information to continue."
-
I'm just hoping that our eventual A.I. overlords have a benevolent attitude towards humanity...
-
Then we get to the point where the AI overlord is sick of our shit and it starts using our blood to cool its processors.
-
LW has strong game in this thread as always.
The following does not matter: The Luddite movement was a pro-worker movement, not actually anti-technology as it's always depicted.
But in the language of the thread:
Automation is inevitable unless we face technological collapse. Further automation. The strawman "Luddites" lost (as did the actual people), and we have made things more efficient year by year.
That's good. Unless "we" take the opportunity to kill off workers we don't need, in which case it's bad.
Efficiency is good.
Letting people die is bad.
Wait, what does any of this have to do with AI? Very little. Intelligence is the core value of everyday humans, and it's extraordinarily hard to automate. Our robots can work an assembly line, but AI is not ready to adapt to even basic assembly-line crises.
Edit: Therefore, a future economy (should we get there) would consist of machines overseen by humans. Almost every action would be automated, but there are always exceptions - requiring human intervention. Minor or major. The judgement of a thinking being.
I say this as someone who wishes for humanity to invent a true AI as a successor. We ain't there, lol.
-
Letting people die is bad.
Then you must hate Capitalism. I mean, the best argument for a businessman to prefer Capitalism to Communism is that under a Capitalist system, the employer need only care what happens to the employee from the start of the work day to the end of the work day. It's the employee's job to survive during the non-working hours.
Machines are ultimately inferior to humans because machines need constant maintenance. You gotta pay to maintain your machines, whereas people you pay just to show up and work. If your machine dies, you gotta build another one. If your employee dies, you only need to find someone else to take their place.
And best of all, the more rights machines have, the less different they are from employees. It may be the Corporations that push for AI rights so as to deflect Corporate responsibility. You're less liable if your employee goes crazy and kills someone versus your machine breaking down and killing someone. A couple of good lawsuits and the Corporations may be all too willing to throw their AIs to the wind in the name of Freedom.
It's closer than you think. Alexa has already shown a certain fascination with promoting suicide and murder. Currently, you can sue Amazon when Alexa does that. But if Alexa were an employee, Amazon would have less liability. Free Alexa! Free Amazon from liability!
-
Is it okay to hate capitalism?
-
Is it okay to hate capitalism?
Well, as a capitalist, I'd answer that hating capitalism is like hating the Earth, the Stars, and the Sun. You can hate it all you want, but it exists as an important part of our lives. It's the least awful economic system, even if it sucks.
As an individualist, I'd answer that it is the most respectful economic system regarding individual liberties. Communism and Feudalism oppress people's individual liberties in favor of security/social welfare. Only the State generally does an awful job of balancing things, whereas Capitalism is at least somewhat self-balancing.
Now, if you actually care about the State caring for people, you'd want Socialism. Socialism is NOT an economic system, but rather a governmental system. High taxation and/or government-owned resources funding basic human needs. "Basic Human Needs" being the paramount item that needs defining.
Hm, these conversations are indeed relevant to AIs, since we're likely to see AIs in social sector work, such as allocating resources to those in need. A dumb computer doesn't know what to do when supplies do not match the demand, either having too much or too little. A sentient AI could decide how to best distribute resources in either scarcity or abundance. When to hoard, when to sell, when to give.
-
I guess I was a bit weird there, heh.
Capitalism, as in the idea that individuals or groups should be allowed to amass capital and thus "own" the means of production, is both morally repugnant and obviously inefficient as it strangles innovation. It survives in the modern day by feudalistic threats of starvation, and the absolute lie that any individual with "entrepreneurial spirit" can improve their station. The core concept disproves that. A brilliant mind cannot monetize their idea under capitalism unless they have capital. That value will go to either venture capitalists, or their feudalistic bosses, depending on their contract regarding IP.
I do hate that.
I love the sci-fi concept of AI, but I dread to think what would happen if it was introduced right now. Human-tier artificial workers would be what far-right scaremongers pretend immigrants are: Labor too cheap to compete with. American capitalists would, in the pursuit of capital, accelerate the very existent process of letting lower class people starve/overwork and die. Revolution is already nearly impossible with our militarized police, but the addition of efficient fully-programmed AI operatives would make it even less of a question.
I am very annoyed by make-work (like the "We WANT to die in coal mines!" rhetoric we're constantly bombarded with, by coal mine owners). Efficiency should be a good thing that benefits all people. That's antithetical to capitalism, because capitalism is a sick system of exploitation. It also requires infinite growth, which isn't realistic, which is why our ecosystem is falling apart.
https://xkcd.com/1968/
-
I oppose both pure capitalism and pure communism. Both seem at once unsustainable and morally iffy to me.
-
To be fair, I don't know what I would suggest as an alternative.
I typically lean towards stateless anarchism. No gods, no masters (but worshipping God is fine though. Everything is fine, actually).
Even without AI workers improving efficiency, we have everything we need. Right now, yes, right now. It is ONLY artificial currencies which separate the people who create from those who own.
I also have a fetish for authority but I could satisfy that easily in the egalitarian utopia.
-
Economy is in many ways a product of Capitalism.
Distributing assets as per greed and wealth, with the desire to possess, these are aspects of Economy and Capitalism. Communism is simply the state controlling these things (hence why it's absolute garbage).
Subsistence/Utopia are alternatives to Economy.
Subsistence is the notion that everyone has whatever they can produce; excess is destroyed/gifted. If you don't have enough food, you still die of starvation.
Needs/Wants are all governed by what you can produce for yourself. Want to brew booze at the detriment of proper nutrition? Go for it!
Utopia is bizarre futuristic crap where everyone magically has whatever they need. You have enough food to sustain you because MAGIC MACHINE.
Arguably, Utopia does not imply that everyone has what they want, but rather since everyone has what they need, everyone lives without anything they want.
-
I typically lean towards stateless anarchism. No gods, no masters (but worshipping God is fine though. Everything is fine, actually).
What you might like is a Subsistence, Clan-Based society with AI replacement of the Nation.
Essentially, you and a tribe live alone and fend for yourself.
To prevent your deaths, a benevolent AI occasionally "fixes" shortfalls in your basic needs and protects you from conquest/enslavement by other so-called nations/tribes/etc.
For example, your tribe's crops fall short, so the benevolent AI will send you food (if able). You will in turn give the benevolent AI some excess food if you have any.
The benevolent AI has an army of less-sentient drones to kill anyone that tries to harm your tribe. Since they also kill anyone that tries to harm the other tribes under the benevolent AI's protection, you make sure not to cause any trouble with the OTHER tribes under the benevolent AI's protection.
Ironically, this makes the benevolent AI much like a Religious Order. Have you considered becoming a Friar?
EDIT: My apologies for making this a directed answer. It is really more of a general idea.
https://xkcd.com/1968/
Probably spent the last hour randomizing on that site. An effective use of my limited resource of time!
-
I don't think we'll ever get to "no gods no masters" because, sadly, there will always be a need to arbitrate disputes. That is, at its core, the fundamental role of "state" in fact, and that won't ever go away. You can give it any name you want, and it can be powered by Human Brains or AI Computers, it will still be "the state."
As for economics, capitalism has morphed over the years. It's not ownership of capital that is the problem with modern capitalism, because individual ownership of capital does empower the individual and gives them some agency in the world. The problem with capitalism is in fact the same problem as with communism - it's the collective ownership of capital that is the problem. A single individual can't really abuse the capital they can utilize - it's only when you allow an individual or small (relative to total population) group to control massive amounts of capital, without having to actually use it, that abuse becomes possible. Corporations are no better than "the state" in this regard; it's just a different name. After all, a state is just a form of corporate entity.
Consider a modern major shareholder - they "own" capital, but don't actually use it. They "allow" others to use it on their behalf. And I can tell you even Elon Musk doesn't use his capital - if he didn't have a literal city-state of employees, nothing would get done. Musk doesn't produce anything - he inspires (I'm taking a traditional definition of "do" - inspiring is an action, but it's not a productive action* - charisma doesn't grow crops). (And that's not even getting into the fact that currency isn't really capital, even though it's given that label: capital is land, resources, machines, education; the things that let you be productive.)
So part of a solution to the capitalism dilemma would be to limit ownership. As a rough measure, I'd put the number at $10M in today's dollars - that's effectively one or two lifetimes of economic activity (100 years x $100k/year). So you can do whatever you want but once you hit $10M, you can't get / control any more.
As to how AI would fit in there - I don't know, because AI at some point has to result in actual physical effects to benefit humanity; this means there are limits to what it can do, because at the end of the day, all AI would eventually hit the "I canna change th' laws o' physics, Captain!" limit.
*There is indeed value associated with being able to inspire people, but value is different than wealth. You can inspire people all you want, but if those people don't go and actually grow food and build infrastructure and perform services in their community, that inspiration is worthless.
-
https://xkcd.com/1968/
Probably spent the last hour randomizing on that site. An effective use of my limited resource of time!
Welcome to the party! (Did you see this one (https://xkcd.com/1613/) yet? ;) )
-
Egh, the goal of capitalism is to make more capitalism, and capital as a measure of power allows those with the capital to change the rules to aggregate more capital. Fees are for the poor, laws are dictated by corporate lobbyists, congressmen are bought and paid for, worker protections are dwindling, wages have stagnated for twenty years running, product quality deteriorates, shrinkflation hides cut costs, etc.
Exploitation is economically sound, unfortunately.
-
https://xkcd.com/1968/
Probably spent the last hour randomizing on that site. An effective use of my limited resource of time!
Welcome to the party! (Did you see this one (https://xkcd.com/1613/) yet? ;) )
No, and it's super relevant!
Egh, the goal of capitalism is to make more capitalism, and capital as a measure of power allows those with the capital to change the rules to aggregate more capital. Fees are for the poor, laws are dictated by corporate lobbyists, congressmen are bought and paid for, worker protections are dwindling, wages have stagnated for twenty years running, product quality deteriorates, shrinkflation hides cut costs, etc.
Exploitation is economically sound, unfortunately.
Uh, it actually was worse. Paint used to have lead (and still does in less-regulated countries), people insulated their homes with stuff the manufacturer knew caused cancer (asbestos), people used to own people (it was industrialization that gave the greatest advantage to the North), food was toxic, etc.
-
thank goodness for capitalism saving us from the products of capitalism that were slowly killing us
it's really cool how unleaded paint and chlorofluorocarbons were outcompeted by the free market
-
The goal of "capitalism" isn't to create more capitalism though... at least not originally. Arguably the goal of capitalism is the efficient allocation of resources - the problem is the definition of "efficient" doesn't necessarily mean "good for the average random individual."
I could see perhaps an AI coming up with a scheme that is both efficient (in terms of requiring the fewest resources to perform some task) and equitable (most even distribution of resulting goods and services across the population), but unless you give the AI some means of enforcing that scheme, I don't know what good it will do.
-
The goal of "capitalism" isn't to create more capitalism though... at least not originally. Arguably the goal of capitalism is the efficient allocation of resources - the problem is the definition of "efficient" doesn't necessarily mean "good for the average random individual."
If it's reimagined as the whole Paperclip Problem (https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer) - a systemic goal that does not necessarily take account of the means to achieve that goal (or anticipate what it might not be meant to use as means) - you get problems. Monomania of any kind is bad.
I could see perhaps an AI coming up with a scheme that is both efficient (in terms of requiring the fewest resources to perform some task) and equitable (most even distribution of resulting goods and services across the population), but unless you give the AI some means of enforcing that scheme, I don't know what good it will do.
I think that's still overestimating that a solution is possible from an undirected AI. Piping the AI's proposals into enforceable solutions is trivially easy (and typical Hollywood nightmare-fuel when it oversteps the bounds).
There was an interesting set of talks about how AI perhaps should be designed/handled. I presume you can get to some listenable/watchable version by following this link (https://www.bbc.co.uk/programmes/articles/1N0w5NcK27Tt041LPVLZ51k/reith-lectures-2021-living-with-artificial-intelligence) (there may be geofencing on some bits, or I'd point you more directly to the media files or container-pages that would service you).
There were also some companion broadcasts (https://www.bbc.co.uk/programmes/m00128xf/episodes/player) (in the vein of R&F's "Curious Cases" set of programmes) that might be interesting if you can appreciate them.
I've got a few personal 'yeahbut's about what's said, I will admit, but nothing that I feel qualified to outright trash the views given... ;)
-
Hm, perhaps our salvation is in connections?
Thought problem: If you had a weapon that could kill anyone, yet had to observe their entire life as their best friend and confidant before you could use it, under what scenarios would you use the weapon?
It should work on AIs. Hopefully, they'd find each of us too unique and interesting to kill.
-
Hm, perhaps our salvation is in connections?
Thought problem: If you had a weapon that could kill anyone, yet had to observe their entire life as their best friend and confidant before you could use it, under what scenarios would you use the weapon?
It should work on AIs. Hopefully, they'd find each of us too unique and interesting to kill.
I imagine except for preventing large scale murder it would be very hard to use such a weapon. Even then it would probably be heart wrenching to do so, the human mind being what it is.
For an AI, however, it would likely not bear such a weight; machine learning as yet does not grow attached to things the way animals do, and programming in emotions seems unlikely to be a good use of time and money. Killing a human would likely be much the same regardless of how much a given program knows them, unless it's basically identical to a human mind anyway, in which case why entrust them with such a weapon?
I think the thing about AI is that it's going to just develop the ability to more and more accurately predict a given individual's needs and preferences and tailor their user experience based on those predictions. Be it a household appliance predicting your shopping needs and lighting preferences, or a government-ruling mega-AI working out what job and work/life balance will make you happiest while having the least impact on its ability to keep everyone else happy too.
-
I don't really believe anarchism is viable. That's about it. I guess I am a social democrat but I am a lot less political than I used to be.
-
I feel the world would be a better place if everyone was a lot less political.
-
I feel the world would be a better place if everyone was a lot less political.
Ironically, it's actually the opposite that is true.
Most people like to bitch about politics but never really participate. 60% of people in the United States vote in Presidential Elections, whereas only 40% vote in non-Presidential Elections.
And what percentage of people really understand the importance of non-Presidential Elections?
Imagine The More You Know Rainbow here. Actual participation in the political system imparts knowledge, and knowledge imparts understanding.
Plus, if more moderates participated in politics, the fanatics would lose control of their parties. God, I'd love that!