But can't the same arguments be made about us?
Another problem lies in system optimization: simulating a sentient being with full consciousness and self-perception in a digital environment would certainly consume more CPU power. The lag would be terrible. I can't see any possibility for this 'til DF becomes multithreaded at least. :P
Dwarves in DF are not remotely close to even being 1% sentient. You might have overestimated exactly how deep DF goes. Dwarves' "thoughts" are nothing but simple triggers that raise or lower a single number variable depending on specific stimuli. If a dwarf's relative happens to die, the dwarf's stress goes up by a set value written in the code. Dwarves do not have complex emotional responses; they only sometimes "tantrum" or go "insane", both of which are exactly as mathematical and procedural as their thoughts. They cannot think deeply, either. A military dwarf, if they "see" a hostile creature, will immediately run it down with mechanical precision, whether or not it would be better to wait for their fellow militia.
It is completely ethical to play Dwarf Fortress, because the creatures with which we interact are not in fact creatures, but simply data being manipulated by deterministic processes. Killing somebody in this game (or in any game) does nothing but change a few bytes in your computer's memory. Not even close to killing somebody in real life.
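To make that concrete, here is a minimal Python sketch of the kind of trigger-and-counter logic being described. All names and numbers are invented for illustration; this is not Toady's actual code.

STRESS_TRIGGERS = {
    "relative_died": 25,       # fixed constants, one per stimulus
    "slept_on_floor": 5,
    "had_a_fine_meal": -10,
}

class Dwarf:
    def __init__(self, name):
        self.name = name
        self.stress = 0  # the single number standing in for a mind

    def feel(self, event):
        # A "thought" is just a table lookup plus an addition.
        self.stress += STRESS_TRIGGERS.get(event, 0)
        if self.stress > 100:
            self.tantrum()

    def tantrum(self):
        print(self.name + " throws a tantrum!")

urist = Dwarf("Urist")
for _ in range(5):
    urist.feel("relative_died")  # enough dead relatives push stress past 100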
They do not know that they're in a simulation. But the point is moot, because they're dumb as bricks and are not sentient at all. If Toady hasn't programmed them to be sentient, they aren't. It can't be otherwise. They can't think or feel, their thoughts are only an approximation. I don't really care either way and regularly go on rampages.
They know as much about the simulation that they're in as we know about the simulation that we are in.
There is this idea of a thing called a philosophical zombie. It's basically just a robot made to resemble a human in every way, but lacking a soul or anything like that. But if it is programmed to think it's alive and truly sentient... is it truly alive and sentient?
If you took an insect and expanded its consciousness using cyberpunk wizardry to human or even further levels... is the resulting consciousness truly sentient?
We do not know if we are in a simulation. It's best to assume that we aren't unless proven otherwise.
They know as much about the simulation that they're in as we know about the simulation that we are in.
Exactly
Why would I care?
Ethics?
You're vastly underestimating the "intelligence" and agency of bacteria when placing them on an equal footing with a crude simulation of some behaviors (this is not intended as criticism of DF). DF is far below the level of viruses as well (the real-world kind, not the malicious code one).
It is not ethical, but not for that reason.
We see the creatures in this game as less complex and sentient than ourselves and so say that mistreating them is not unethical.
It would be interesting if in the Myth update we see how the deities feel about these ideas.
Whether they consider killing simple sapients to be unethical. They might argue about it.
Honestly this discussion is absurd. Why are we even still debating this?
If you think of save files and .xmls etc. as alternate dimensions bound by the same principal rules (including those created extraneously), then every time you play, each world is its own little existence, with events both within and outside the natural course of its time. World generation raises the central unanswered questions: where did the world and all the things that exist in it come from? As a matter of fact, when you stop playing, time is quite literally frozen for the world inside until, by action of the player, you resume.
Somewhere on people's hard drives, in the untraceable post-deletion 'ghosts' of files (even if you tried scrubbing your hardware with a magnet), there are entire generated worlds completely paused in the motion of their programmed 'lives', or in the middle of dying horribly in the face of !Fun!, for they have ceased to be and yet remain. We ourselves would not know the moment of our own demise, as the concept of time is also a construct of our brains to put past, present and future in chronological order.
If the universe is vanilla, what is DFhack? Divine intervention? In my mind it's as ethical as killing lobsters with boiling water: a matter of personal preference in your attitudes toward killing animals, or toward increasingly elaborate synthetic intelligence, with instant deletion being more humane than killing them in a way that implies suffering. As we actually get closer to self-aware, self-protection-conscious machines (Skynet at worst case), digital intelligence rights will become more of an issue, I think.
Again, none of this matters since dwarves in DF are not remotely sentient at all. A human in real life is several billion orders of magnitude more complex than any of them. Dwarves' entire emotional state is represented by one value, so the argument that they are somehow mentally equivalent to us makes no sense whatsoever.
There was a Philip K. Dick short story, "The Trouble with Bubbles", covering the ideas of simulated worlds and their creation, and the issues surrounding such.
Wikipedia sums it up as:
"The story is set in a future where mankind has attempted to reach other intelligent lifeforms through space exploration, and found nothing. In light of this yearning to connect with other lifeforms, people can buy a plastic bubble known as a Worldcraft, the tagline of which reads "Own Your Own World!". The owner of the Worldcraft is able to create a whole universe, controlling all the variables inherent to its development. Within the universe, lifeforms just like humans exist.
In the story we see Nathan Hull, the protagonist, attending a contest to judge who has created the best Worldcraft universe. A contestant subsequently smashes and destroys her bubble after being announced the winner. Hull, feeling the immorality of the control owners have over the lives within the bubbles, works to have laws passed against creating any more Worldcrafts. At the end of the story, Hull is about to drive through a newly built underground tunnel to Asia when an unexpected earthquake breaks it up, killing scores of people."
Is it ethical to take medicine to kill bacteria/viruses/parasites? Stop being an elf and go destroy a few more fortresses!
We don't take kindly to your types around here! (*shakes his axe*)
Yes, they're different, because they can't think. Your argument makes no sense because bacteria also think they're alive, yet you have no qualms about killing bacteria. And DF characters are less intelligent than bacteria. Hypocrisy.
muh ethics?
Quote from: Zaphod
muh ethics?
Jeez... this topic is still alive?
Anyways... I hope it's not ethical, because
the amount of ethics is inversely proportional to the amount of fun.
It has always been so and it will always be.
Therefore, less ethics = more fun! :P
wtf is wrong with u guys?
Anything I do to the dwarves is justified, as they are less powerful and cannot stop me.
It's still fine because they're not sentient and are videogame characters.
The people who are playing the game are sentient.
Ooh, Neil deGrasse Tyson.
*whistles*
https://www.space.com/32543-universe-a-simulation-asimov-debate.html
"Davoudi proposed a possible way to spot one of these shortcuts: by studying cosmic rays, the most energetic particles scientists have ever observed. Cosmic rays would appear subtly different if space-time were formed of tiny, discrete chunks — like those computer pixels — as opposed to continuous, intact swaths, she said."
And so it went, along that strange assumption.
RULES OF NATURE
-universe simulation snip-
The whole "universe is a simulation" theory was also debunked based on trying to make computer models of quantum physical interactions.
http://www.pbs.org/wgbh/nova/next/physics/physicists-confirm-that-were-not-living-in-a-computer-simulation/
I wasn't talking about the players. I was talking about the characters. DF is a single-player game, and the characters can't stop you, so everything you do to them is fine.
Who cares? All you people are offloaded when I'm not looking, to save processing power, and I am the center of all existence. Prove me wrong - you can't ;D
We can, actually -- if only those in your general vicinity are active at any given time, that means the people you spend most of your time with would go through life much more quickly than those you see very rarely. We would notice that a certain area of the world would age much faster than the rest.
The victims here would not be the characters but the players themselves. That is because, from the point of view of the one doing the killing (not in objective reality), there is no difference between killing the exact appearance of 100 people and actually killing 100 people. So regardless of whether the people being killed actually exist at all, the person is now themselves psychologically affected by the act of having killed the appearance of 100 people, exactly as if they had really killed 100 people.
To clarify things, I am of course not arguing that playing DF is actually unethical, only that it is possible to play the game in a way that is. Even in that case, the effect would be minor, because the game lacks immersion owing to its poor graphics, meaning there is a long way to go between killing folks in DF and killing folks in real life. This however is just a technological limitation really; some types of games have more unethical potential than others. The worst games are those that fit closest to reality as depicted in either the abstract or the mundane, that is, they mirror the real world either as we abstractly imagine it to be (think strategy games with maps) or as we actually see it (think Elder Scrolls type games like Morrowind, Oblivion or Skyrim).
For a thought experiment, let's imagine something called the PIG (stands for Perfectly Immersive Game); it's rather like the Matrix, really. If we do something in the PIG, then it is essentially identical to actually doing it in real life. In the PIG I murder 100 small children. These beings do not of course actually exist in the PIG, but because the PIG is so immersive, the experience of killing them mirrors near-perfectly the experience of doing so in real life. Even though you did not actually kill anybody who exactly existed, your experience of the world is now that of a person who murders small children; you are a murderer in effect even though you never actually killed anyone.
They still aren't real people. I wouldn't feel a thing in PIG.
That's not really true. Any decent sim would be able to reconcile the lost time relatively easily; just because everything wasn't rendered graphically doesn't mean the simulation can't reconstruct the events that happened in the intervening time. We're offloaded and our states are checked on reload. If we're not synced to the proper time, the off-screen events are rapidly simulated: none of the actual physics needs to be run, just a calculation of probabilities and outcomes.
That'd be like if, in Dwarf Fortress, no one who wasn't on your adventurer's screen aged.
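A minimal sketch of that "catch up on reload" idea, purely as illustration (the names and probabilities are invented; this is not how any actual game or simulation does it):

import random

def catch_up(region_state, ticks_missed):
    # No physics here: just coarse probabilistic outcomes per missed tick.
    for _ in range(ticks_missed):
        if random.random() < 0.002:
            region_state["population"] += 1  # off-screen birth
        if random.random() < 0.001:
            region_state["population"] -= 1  # off-screen death
    return region_state

village = {"population": 80}
village = catch_up(village, ticks_missed=10000)  # reconcile the lost time
print(village)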
That is exactly one of the problems. Not feeling anything about the appearance of killing defenceless people is not something we want to encourage, because in real life the appearance of a defenceless person *is* still an actual defenceless person. We want people to care when they see it. :)
I would feel something if I wasn't told it was a game. If I WAS informed that it was a game, apathy, as always.
You are now choosing whether to care rather than actually caring. Your ability to choose whether or not to care in a particular context, based solely on what you know in the abstract, can equally be employed in real life.
When you look at genocidal mass murderers, they do something similar to what you are doing. Despite the appearance being identical, they are able to choose not to care based upon what they *know* in the abstract; in your case, that the PIG's people don't really exist. Genocidal murderers make similar abstract distinctions and are able to use them to bracket off particular appearances of murder from others, despite those appearances being identical.
If they don't actually exist, there's no ethical dilemma: they're not sophonts, and no moral actor aware of that fact is obliged to pretend that they are. You're equating the ability to discern the difference between fiction and reality to justifying mass murder.
Are you partially culpable for murder if you watch a slasher film?
What's preventing me from killing simulated people but being nice to real, living people? Yes, that's kinda racist, but still. Also, your arguments are irrelevant no matter your answer because DF doesn't have a perfect appearance of a human.
I will not give up on my standpoint. So, you're comparing me to a mass murderer because I, unlike you, can discern between fiction and reality? And simulated characters are OK to be cruel to because they're inferior to humans in every way. Maybe by your terms, I'm a mass murderer, but I'm a harmless one because I kill simulated characters only.
This comic strip immediately reminded me of this discussion :D
http://www.blastwave-comic.com/index.php?p=comic&nro=79
That comic strip reminds me of a bunch of rocks from xkcd.
https://xkcd.com/505/
Thank you for reminding me of this nice comic from xkcd. :)
As far as we can determine, we can think and have free will, as well as feel pain (unless, of course, "Zaphod" is just a bot). I know there are some religious/philosophical schools that claim this is an illusion (at least the free-will part), and that everything is predetermined and time an illusion. You can then make one of two choices: assume everything is predetermined, so you can act as despicably as you want because it's not your fault, or make the opposite decision to assume that you actually do have free will and (try to) act like a civilized creature. If your assumption is wrong in the first case, you *are* a despicable creature worthy of the punishment you receive for your actions, while if you're wrong in the second case, you didn't actually have any choice but to behave in a civilized manner.
If Toady hasn't programmed them to be sentient, they aren't. It can't be otherwise.
The amount of navel-gazing in this thread melted my notebook's processor.
Dwarves feel joy and pain. You shouldn't torture them.
Except DF characters don't use machine learning. They aren't sentient. I can torture them all I want.
This is a fun argument that may at some point in the future become relevant, once we create real AI.
That being said, let me pose the following suggestion: the quality of being an entity that experiences existence (for the sake of brevity, this concept will be referred to as "conscious") cannot be tied to any particular degree of complexity. Otherwise, any particular point of complexity at which you choose to "draw the line" will be completely arbitrary. Is an ape conscious? A human infant? A dog? A lizard? A plant? A bacterium? An atom? All are entities that respond to their environment in some sense; the only difference is the complexity with which they do so.
Therefore, I suggest the following: Everything is conscious. Consciousness is a fundamental property of reality itself; the degree of an entity's experiential consciousness is reliant on how much information it is capable of storing. An electron "stores" only a few bits of data - its own energy state - and its responses to input are extremely simple: it can absorb or emit a photon. A human being is considerably more complex. But there is no qualitative difference between them.
This suggestion will be rejected, since it flies in the face of certain things we take for granted. For example, that the killing of conscious entities is wrong. But the entire idea of right and wrong is non-physical in nature. These are human concepts.
The reason why we consider some things to be right and others to be wrong is because these beliefs work. Societies that consider wanton murder of other humans unacceptable outlive those that do not, and so the taboo against murder is nearly universal. It is risky to uproot traditional morality for the same reason it is risky to perform invasive surgery on someone - these systems evolved over many generations of trial and error as we as a species worked out which beliefs work and which ones don't. Sometimes the reasons are obvious, other times, less so. Sometimes a better system may exist, and so societies evolve and refine their views on morality; other times a society may think it is advancing forward when it is in fact a non-viable mutant; history weeds these out as they come. It is impossible to be certain until after the fact.
Why do most societies consider the murder of a human wrong, while killing animals is typically less looked down on? It isn't because of any intrinsic quality that makes it "wrong" to kill a human; it is because a human can be reasoned with. If we both agree not to kill each other, we can work together and build a society instead of fighting. Therefore societies where people agree not to kill each other are more successful than those which do not. For the same reasons, it has often been considered acceptable to put people to death who refuse to follow this "agreement".
Of course, humans being creatures of pattern-making and metaphor, it is only logical that we should draw analogy between members of our own society which follows our own laws and foreigners or criminals, or even species that in some way resemble us. Exactly where we draw the line is, again, arbitrary; it is a quirk of human thought, or perhaps motivated by other, more complex systems - killing criminals, foreigners, or animals can train a person to be less empathetic, which can be detrimental to a society, so perhaps certain societies have "learned" that it is better not to kill.
Back to the ethics of DF and AI in general:
Whether it is wrong to kill a vaguely simulated dwarf, or a complex "real" AI, or hit backspace and delete a letter in a post, has nothing to do with whether or not the destroyed entity is "conscious". What matters is what are the ramifications of doing so on the society that considers it to be ethical or non-ethical?
Does playing a realistic FPS, or fighting game, or slowly mutilating an elf in Adventure Mode make a person less empathetic? Will this lack of empathy cause detrimental effects on society? Or does it serve as catharsis and make people less likely to go out and perform such actions in reality? I would argue it does both, but at any rate the effects on society seem to be pretty negligible, so for now our society seems to go with [KILL_VIRTUAL:ACCEPTABLE][TORTURE_VIRTUAL:MISGUIDED]
And what about when we make real, practical AI that is on par with ourselves intellectually and (most importantly) doesn't want to be killed (this is an important clarifier; I do not believe a desire to live is intrinsic to life or even intelligence; we simply evolved that way because it allowed our ancestors to survive). Well in that case, a society that decides that abusing robots is OK is probably less likely to survive than one which grants them equal rights. So in that case, we will probably decide that destroying such an AI is wrong.
But we aren't there yet, and it certainly doesn't matter for DF, so by all means, kill all the virtual dwarves you like.
We know that humans are conscious; we do not know that apes, dogs, lizards, plants, bacteria or atoms are conscious. Nor can we know in any definitive way.
...so for now our society seems to go with [KILL_VIRTUAL:ACCEPTABLE][TORTURE_VIRTUAL:MISGUIDED]
(Choosing just one paragraph to sum up a very interesting post)
especially
if
you're
in a
bad
mood
I'm surprised this didn't devolve into petty insults.
While I totally get that, it's so "I can know nothing outside of myself because it's all just a product of my perception" that I'm going to say: if you're seriously so dense as to stipulate that to the letter, I'm opening the door to all doubts about every human actually being bestowed with consciousness, I swear to Armok. (Not in your particular case, but given humanity's recent history.)
We do agree then; I was merely stating that if you say you can't conclude consciousness in animals, there is nothing allowing you to draw that conclusion in other humans either.
Meh, video game characters are inferior to humans or even bacteria, so there are no ethical repercussions for killing them. Well, according to my ethics. I don't know about yours.
But that's the kind of arrogance that annoys me, just take a look at Koko the gorilla, Kanzi the bonobo, Pebbles the cockatoo or Wojtek the bear... The evidence is overwhelming for animals.
Now as to radical constructivism: it's a perspective, a frame of mind and something never to forget. But the scientific method rejects every aspect that isn't necessary to a model ("electrons fly around protons because god said so"). So from our 21st century point of vue there is next to no interest in coloring our explanations of the world with that additional layer. It might change though, who knows.
In my point of vue consciousness (look, I really don't care for that weak distinction; it's like I speak 3 languages and none of them well, so for many things I favor the words of one particular language, in this case I mean "Bewusstsein", but anyway you got me to look it up and well, that... that helped, duh)
let me start over
In my point of vue consciousness is a prerequisite for many things, such as intent, memory, planning and, well, even fucking conscience, since you couldn't have morality without empathy, which you would not have either, because it's way higher in the evolutionary tree than consciousness. So I don't see how Occam's razor takes anything off that (thanks for teaching me the short way to reference this). As to insisting that only the awareness of consciousness defines true consciousness (or the ability to define said awareness): I find that very silly, and by the way it brings me back to my original point, which is that by that measure most humans don't pass, and every mimicry argument can be applied to them.
I wonder though, did I miss your point? Because I still feel like I kind of have to explain myself.
edit: it's point of VIEW right?
Eventually DF shall become such an intricate simulation of reality, that by understanding DF, one can understand the world
play more DF
Reading violent books is unethical. You're creating thinking beings that run on the hardware of your brain just so they can suffer and die.
This is actually something that the Rationalists are worrying about - a sufficiently intelligent and knowledgeable AI might actually, in the process of predicting others' behaviors, produce thinking beings that are quickly deleted. This is usually seen as a bad thing.
What do you mean? And what of the AI's ethics?
Well, if the AI is an ethical being (and I would be terrified if it wasn't), what of its ethics? Would it not agree and regard such a thing as unethical?
What does it mean for something to be an ethical being? Is it sufficient to have a system of ethics, or does that system have to approximate/be near to ours?
Ethics don't apply in video games. The characters are inferior to you. They can't even think.
True, but it seems plausible that killing video game characters normalizes or cheapens murder. If killing people, even if they are philosophical zombies, makes you a worse person, then you should not kill people.
Given that our current method of making AI seems to mostly involve sticking a bunch of neurons in a box and letting it mutate itself until you get something mostly alright, it might prove to be a bit difficult to give them explicit morals or rules.
That is a really bad idea, and I think we should stop until we understand how to make a mind ourselves, instead of the equivalent of just shaking a time-dilated box until something assembles inside.
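For what it's worth, the "shake the box" caricature quoted above looks roughly like this as a toy mutate-and-select loop. Everything here is invented for illustration, and real neural-net training is mostly gradient descent rather than this, though evolutionary methods of this flavor do exist:

import random

def fitness(weights):
    # Stand-in task: we "want" the weights to sum to 10.
    return -abs(sum(weights) - 10)

def mutate(weights):
    return [w + random.gauss(0, 0.1) for w in weights]

best = [random.random() for _ in range(5)]
for _ in range(1000):
    candidate = mutate(best)
    if fitness(candidate) > fitness(best):
        best = candidate  # keep whatever happens to work; no understanding required

print(round(sum(best), 2))  # close to 10 after enough shaking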
I still treat normal people well.
Of course. My criticism was a mere quibble, only a slight exception to the moral statement you made. Ethics apply within video games only to the extent that your actions within a video game affect the outside world, which in most cases is minimal. Very few Dwarf Fortress players build magma traps in their basement, for instance.
Consciousness remains inexplicable to this day if you get really gritty, that's true. But I do believe that our models have some inherent value, even if they will always be incomplete. And I get what you say concerning scientific ideologues, though in my mind that is a direct consequence of our education system and not of ill-intentioned scientists or flawed methods. There are a fair number of people in the field who know that all the progress they make amounts to temporary truths.
If a model is able to make predictions, it is a good partial description of reality. Ultimately we cannot know reality, but does that matter when we share this same perception?
As for GoblinCookie's "everything could be conscious or non-conscious, we can't really tell" - evidence is weaker than surety, but it can exist in the absence of absolute knowledge. It is more likely that a conscious mind is behind something that passes the Turing Test than something that doesn't, for instance. And we all have to work with the evidence available. Sure, there's a minimal chance that I am in an evil god's Matrix and pressing the "z" button is magically linked to killing a random person, but that doesn't keep me from typing "zymurgy."
It's like the Murder-Gandhi parable.
If we take it that everything that can be known about the real world beyond its existence is the catalogue of experiences, then everyone's experiences add up equally to our collective understanding; there is no competition possible.
Is it ethical to simulate an entire world, with history, wars, people with feelings and so on, and then just wipe the world and start over?
#1. It depends on the kind of simulation you're running; it is ethical for DF as it is.
Are our dorfs simply philosophical zombies? Are they aware of what happens to them?
Is there a difference between being truly sentient and just being programmed to think you are?
Quite a few things, actually, up to and including murder, theoretically. I'd never even consider doing such a thing, even if I did consider you to be doing anything wrong. But it's always worth pointing out that sometimes, yes, people are indeed physically capable of stopping you from doing a thing.
In order to determine whether running a particular predictive model would create a sapient sub-being inside your own mind, would you not first have to run a predictive model that could in turn have the potential to create a sapient being?
You can create a model without running it. You can analyze the structure of a model without running it. If these are both true, it might be possible to avoid running minds in our models.
Can you, though?
If we can develop a criterion for what is a mind versus what is not, and we make a program that implements this categorization scheme, which is capable of running on itself, including its own processes, then we can notice when the analysis creates minds, and avoid this somehow.
(We don't have to be sure there's no mind in the models ever, the point is to reduce the mind in the models. We don't have to Win to make a difference.)
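As a toy illustration of "analyze the structure of a model without running it" (the criterion, names and threshold here are all invented; nobody knows what a real mind-detecting criterion would look like):

MIND_THRESHOLD = 10**9  # arbitrary stand-in for a mind-ness criterion

def looks_mind_like(model_spec):
    # Static inspection only: no step of the model is ever executed here.
    return model_spec["states"] * model_spec["connections"] > MIND_THRESHOLD

def run_if_safe(model_spec, step_fn):
    if looks_mind_like(model_spec):
        return None  # decline to run anything that might be a mind
    return step_fn(model_spec)

print(run_if_safe({"states": 10, "connections": 10}, lambda m: "ran"))        # ran
print(run_if_safe({"states": 10**6, "connections": 10**4}, lambda m: "ran"))  # None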
Because, I guess, the suffering that happens in the story happens to the characters, who are just inside the head of the reader, who is feeling empathy and thus just a small portion of the suffering being represented. Basically you're creating a situation where people consent to, in some sense, "be" the characters in the story by thinking about them and empathizing with them, and thus gaining an amount of enjoyment from inflicting a lesser pain on a certain portion of their mind.
...Does any of that make sense?
Yeah, that's a pretty good explanation of why it's okay for characters to suffer in stories.
A rather concerning thought experiment. Suppose that sentience does have a mathematical model, i.e. can be programmed. Suppose, then, that the steps required to express sentience are written out onto a piece of paper, and an individual proceeds to follow the instructions of each step. Fundamentally, this is no different than a computer program performing a series of steps in place of this individual. If we agree that the computer following these steps results in sentience, does the piece of paper, when coupled with someone willing to write out each step, produce sentience of its own? If not, what is the difference between an individual performing each step on a piece of paper and a computer processing each step on transistors and memory storage devices?
Ah, that's a good way of putting it. A more abstract and vague thought experiment along these lines was what pushed me toward omnirealism - either all computable minds are real, or no minds are real, or [some weird thing that says that you realize a mind by writing down symbols but not by thinking about the model] (but the model is only present in some interconnected neurons; paper is part of my extended brain, and this possibility is invalid), or [some weird thing that says that you realize a mind when you understand how it works], or [some weird thing that says that you realize a mind not by understanding it, but by predicting how it works]. I prefer the first, because I don't see an important difference between the mathematical structure being known and the structure being run. (There are ways to get the output without directly running things. If I use abstractions to determine what a model-mind does, rather than going variable-by-variable, I don't think that the mind-ness has disappeared. And if you can make a mind real just by knowing the mathematical model that describes how it works... then we have to define "knowledge," because otherwise I could just make a massive random file and say "statistically, at least one portion of this file would produce a mind if run with one of the nearly infinitely-many possible interpretation systems." Or if I make it even larger, the same can be said for any given language. Heck, a rock has information. Maybe the rock's atoms, when analyzed and put into an interpretation system, make a mind. That's just ridiculous. We've effectively said that all minds are real anyway, but in a weird and roundabout way.)
The main question is whether hypotheticals are morally real, then. And keep in mind that (as far as I know) we can never rule out that we are living in a simulation ourselves.
Running a model is simply... storing various states of the process, in sequence, using the code. Each instant is revealed to us, by the processor, and that is all that happens. To run a simulation is to observe a hypothetical, not to create it.
I think a similar argument can be made for the original programming of the simulation. Mostly because the only difference between me spending 5 seconds imagining hell, and actually programming a detailed simulation of hell, is precision and accuracy. Writing a book or programming a computer is like chiseling a statue from a marble block. Every hypothetical was always there, we merely reveal some of them, to various degrees.
That said, I just saw one of the Black Mirror episodes involving simulated copies of people trapped within nightmarish simulations, and at no point did this occur to me. I was very much hoping that the characters... succeeded. An appropriate and healthy response to a work of art designed to provoke empathy.
But why would it be bad to simulate humans in such a situation, yet it's fine to script a story where that happens?
We're certainly living in a hypothetical universe that is being simulated by an infinite number of hypothetical computers. But ours is special, as I'll demonstrate.
The problem with this is that we are making a mathematical model not of the sentience itself but of the behavior we expect a sentient being to exhibit; there is no reason to think that there are not multiple ways to arrive at the result, of which only one actually involves the existence of a sentience. We have the problem, then, that we have no way of knowing whether the means we are employing to our 'ends' is the right means, because we are reverse-engineering the procedure as it were. The problem with true AI is, as ever, that it is essentially impossible to tell whether you have actually succeeded in creating it.
I don't understand how a (non-quantum?) computer could do anything that I can't do on paper and pencil, given arbitrarily but finitely more time and resources.
Well, fundamentally the substrate doesn't really matter - that's the Church-Turing thesis, after all. If it works on a computer, it works on pencil and paper. In that regard, if we consider that sentience is Turing-computable, then it must be true that sentience can exist on any medium. So long as there is a retainer of information, and something to act upon it by explicit instruction, there can be sentience.
(This assumes that the substrate is not inherently important to the mind - I am run on a lump of sparky flesh, you are run on a lump of sparky silicon, but that doesn't make one of us necessarily not a person. This seems obvious to me, but is probably a controversial statement.)
There is a catch, though. Unbounded nondeterminism - the notion of a nondeterministic process whose time of termination is unbounded - can arise in concurrent systems. Unbounded nondeterminism, under clever interpretations, can be considered hypercomputational: any actor in such a system has an unbounded runtime and a nondeterministic outcome, so the end result of such a system remains unknowable. If sentience requires such unbounded nondeterminism, then such a system would no longer need to abide by the Church-Turing thesis, and need not be replicable on pen and paper. We already know that the human brain is highly concurrent, so it's plausible that sentience requires this unbounded nondeterminism arising through concurrency to exist. It wouldn't mean that we cannot produce sentience on a computer - we've already produced systems with unbounded nondeterminism - but its existence on a computer would not necessarily imply that it can exist on simpler mediums. All without violating any existing proofs.
So it is plausible that sentience could exist in a form that can run on a computer or in a brain, but not with pen and paper. It would simply require a great deal of concurrency.
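To see the substrate point concretely, here is a tiny register machine in Python (a toy, not anyone's model of a mind): every step is so explicit that a person with pencil and paper could execute the same program, line by line, exactly as the CPU does.

def run(program, registers):
    pc = 0  # program counter
    while pc < len(program):
        op, *args = program[pc]
        if op == "inc":
            registers[args[0]] += 1
        elif op == "dec":
            registers[args[0]] -= 1
        elif op == "jnz" and registers[args[0]] != 0:
            pc = args[1]  # jump if the register is not zero
            continue
        pc += 1
    return registers

# Adds r1 into r0, one unit at a time.
program = [("dec", "r1"), ("inc", "r0"), ("jnz", "r1", 0)]
print(run(program, {"r0": 2, "r1": 3}))  # {'r0': 5, 'r1': 0}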
Or, alternatively, there could be somebody simulating universes who doesn't misplace bits often?
I'm now imagining a universe like XKCD's man-with-rocks (https://xkcd.com/505/), except the person is a woman. Both these universes are now simulating our universe. There are infinite permutations available, all simulating our universes.
In fact there are universes simulating every universe, including permutations of our universe. Like in the comic, the man misplaces a rock - permutations like that, including the moon disappearing or the strong nuclear force ceasing.
If our universe is merely one of these infinite simulations, then the odds of physics continuing to work are statistically near zero.
If all conceivable, hypothetical universes had consciousness like you or I, then statistically speaking we should be experiencing total chaos. But we aren't.
Therefore, it's morally safe to imagine hypothetical universes, since the beings within are astronomically unlikely to have consciousness.
Even if they are otherwise copies, or near-copies, of us. Even if they react as we would, and it's natural to feel empathy for them.
We could definitely be brains in jars, but I reject the idea that simulation can create consciousness.
(This "proof" from my butt sounds familiar, I'm probably remembering something I read... Probably from some sci-fi growing up. I'd like to know what it's called, if anyone recognizes it. I really should study actual philosophy more.)
I think you have a very misinformed notion of how the human brain works.
Humans on Earth are not remotely close to even being 1% sentient. You might have overestimated exactly how complex sentience is. Human "thoughts" are nothing but simple triggers that raise or lower a chemical depending on specific stimuli. If a human's relative happens to die, the human's stress goes up by a set value written in the DNA. Humans do not have complex emotional responses; they only sometimes "tantrum" or go "insane", both of which are exactly as mathematical and procedural as their thoughts. They cannot think deeply, either. A policing human, if he "feels" threatened, will immediately shoot it down with mechanical precision, whether or not it would be better to wait for backup.
It is completely ethical to play Thermonuclear War or Murdering Hobo, because the creatures with which we interact are not in fact creatures, but simply data in disposable biodegradable shells being manipulated by a deterministic process. Killing someone in these games (or any games) does nothing but change a few bits of matter in the universe. Not even close to killing a dwarf in Dwarf Fortress.
I don't know why people find merperson breeding camps even slightly horrifying. Guess that sentence sums up my standing on DF ethics.
I do have a few technicalities to point out though; feel perfectly free to skip over them.
1. The universe as we know it is not deterministic: the quantum processes at the subatomic level involve true randomness.
This is negligible on the level of human cognition, as far as I know. Quantum effects are easily overwhelmed by thermal noise in most situations. If a human brain is unstable enough that quantum effects can push it from one decision to another, it's unstable enough that thermal noise will do the same. To the extent that people have reliable and consistent personality traits etc., we are not quantum minds. (This is not a statement that we can never be quantum minds, but it will take laboratory-grade controlled conditions, I believe, not wet, squishy, warm brains.)
Have any of you read Gödel, Escher, Bach: an Eternal Golden Braid by Douglas Hofstadter, by the way? It is a masterpiece about this very topic (minus the DF references).
I've read the first third... I should get around to the rest.
If you can't tell whether anything is sentient or not, what even is sentience? Imagine that Omega* came down and told you that a certain thing was sentient; if this would not change your expectations about that thing, not even a little, then the concept is useless. Otherwise, we can tell whether things are sentient, but perhaps not with absolute certainty. (Principle: make your beliefs pay rent in anticipated experience (http://lesswrong.com/lw/i3/making_beliefs_pay_rent_in_anticipated_experiences/).)
*Omega is a rhetorical/explanatory/conceptual tool that helps construct a thought experiment where the muddy vagueness of the world can be cleared aside to see how our thoughts work when unobscured by uncertainty. For the thought experiment, you trust that what Omega says is definitely true. This is like "pinning down" part of a model to better understand how it all functions. It's also sort of like controls in a scientific experiment.
Should we therefore aspire to upload our minds into computers which can be more easily manipulated by quantum effects, thus gaining "free will"? :P
Or maybe we should make all our decisions based on cosmic noise, thus gaining a pretty good simulation of "free will".
The hell do you care. You're not reading them. And you've repeatedly stated that you don't care about the topic at all.
I'm coming to this conversation late and skimming, but I can't seem to find any point in the conversation where we define the value of "ethical behavior." What is the benefit of being concerned with quantum states of existence that are, by definition, inaccessible? There seems to be an implicit assumption here that our choices will be judged by some third party against a rubric not presented to us. What are the possible outcomes? What is the opportunity cost of a bad choice?
My understanding (and I skimmed some as well) is that people mostly assumed "ethical behavior" as the reasonable common denominator of modern societies. Mostly basic things like "murder is bad".
The hell do you care. You're not reading them. And you've repeatedly stated that you don't care about the topic at all.
That is ad hominem.
Ad hominem is when you discount somebody else's arguments by alluding to their character. It doesn't apply here, both because I wasn't alluding to kittytac's character, and because she wasn't making an argument in the first place, just appearing to complain about the very fact that debate is happening at all. In a thread she could have easily ignored.
.... What is the benefit of being concerned with quantum states of existence that are, by definition, inaccessible? ..... What are the possible outcomes? What is the opportunity cost of a bad choice?... the reasonable common denominator of modern societies. Mostly basic things like "murder is bad".
.... the safe premise that it's unethical to create conscious entities just to harm them. So the arguments are whether we're actually creating conscious entities or not....
Right, and I'm throwing down economic game theory as a challenge to this assumption. I say that another valid ethics is maximizing gain, with each individual responsible for defining "value," and maximizing their personal gain.
If the dorfs are sentient, then they are responsible for maximizing the value they get out of whatever life presents them. Each player is likewise responsible for maximizing the value they get out of their lives. If torturing dorfs provides value to their lives, then it is only as ethical to do it as it is profitable.
I am a very materialistic person and I know the mind is contained in the brain. We don't have the technology to easily read it yet, though.
With the caveat that they aren't sentient.
Right, and I'm throwing down economic game theory as a challenge to this assumption. I say that another valid ethics is maximizing gain, with each individual responsible for defining "value," and maximizing their personal gain.
If the dorfs are sentient, then they are responsible for maximizing the value they get out of whatever life presents them. Each player is likewise responsible for maximizing the value they get out of their lives. If torturing dorfs provides value to their lives, then it is only as ethical to do it as it is profitable.
That is not really a theory, more of a how-to description of how to be evil. ;) 8)
Experience is not fundamental. Anything that I can determine about myself through introspection, I could theoretically determine about somebody else by looking at their brain. If there exists a non-physical soul, it does not seem to have any effects on the world. This lack of effects extends to talking about souls, and for that matter thinking about souls.
If you can't tell whether anything is sentient or not, what even is sentience? Imagine that Omega* came down and told you that a certain thing was sentient; if this would not change your expectations about that thing, not even a little, then the concept is useless. Otherwise, we can tell whether things are sentient, but perhaps not with absolute certainty. (Principle: make your beliefs pay rent in anticipated experience (http://lesswrong.com/lw/i3/making_beliefs_pay_rent_in_anticipated_experiences/).)
*Omega is a rhetorical/explanatory/conceptual tool that helps construct a thought experiment where the muddy vagueness of the world can be cleared aside to see how our thoughts work when unobscured by uncertainty. For the thought experiment, you trust that what Omega says is definitely true. This is like "pinning down" part of a model to better understand how it all functions. It's also sort of like controls in a scientific experiment.
Sentience is something experienced by the being that *is* sentient, only the behavior of that creature is ever experienced by other entities, whether sentient or not. Sentience is an explanation for the fact there is an observer that can experience anything at all, that is how it is *useful*.
The sentience of other beings is inferred from their similarity to the observer: the observer knows that they are human and that other beings are human, hence they infer that when other human beings act similarly, they do so because they are themselves sentient. This is because the only alternative is that the observer is alone in the universe and the other beings were made to perfectly duplicate the behavior that the observer carries out consciously.
Or alternatively, we can say that something is a mind if it appears to have goals and makes decisions, and is sufficiently complex to be able to communicate with us in some way. Not that this is the True Definition of mind - no such thing exists! And there might be a better definition. My point is that you don't have to define mind-ness by similarity to the definer.
The Omega has in effect already established the certainty of only one thing in real life: the fact that you exist and are conscious.
How do you know that you are conscious?
In regard to the other beings, the options are either that they are real consciousnesses or fake simulated ones. If you succeed in creating a program that simulates the external behavior of conscious beings then you have succeeded in creating one of those two things, but the problem is that you do not know which of the two you have created. Remember also that other people are also quite possibly fake consciousnesses already.
Ah, you mean philosophical zombies! Right? And you're saying that other people could be controlled by a Zombie Master. Is that correct?
The problem is you have access only to the external behavior of the thing. The fake consciousness is a system to produce the external behaviors of a conscious being without having any 'internal' behaviors that in me (the certainly conscious being) correspond to my actually being conscious. The problem in making a new type of apparently conscious thing is that, because it is *new*, you cannot determine whether it has the internal mechanics that produce the behavior you associate with being conscious, even if you accept that other human beings are conscious. It is necessary in effect to isolate the 'mechanic itself', which cannot be done, because even if you could see everything that it is possible to see there is still the possibility of other things that you cannot see. Other people's consciousness is inferred based upon the assumption that there is no essential mechanical difference between *I* and *you*, and there is no reason to invent some unseen mechanical difference.
But... what do you mean by something being "fake consciousness"? That's like something being "fake red", which acts just like red in all ways but is somehow Not Actually Red.
But we know full well that not everything that we consciously do requires that we be conscious, don't we?
I do not think I could do most of the things I do without having self-reflectivity, etc.
Should we therefore aspire to upload our minds into computers which can be more easily manipulated by quantum effects, thus gaining "free will"? :P
Or maybe we should make all our decisions based on cosmic noise, thus gaining a pretty good simulation of "free will".
We cannot upload our minds into computers because that is impossible. In the computers there is nowhere for the minds to go, plus we have no idea where to find minds in order to actually transport them.
What do you mean, "nowhere for the minds to go"? Minds are abstractions, not physical objects. It is not like the brain contains a Mind Lobe, which is incapable of being placed inside a processor. If a computer replicates the function of a brain, the mind has been transferred. The mind is software.
I am a very materialistic person and I know the mind is contained in the brain. We don't have the technology to easily read it yet, though.
If that were so, how come we need all that technology in the first place? Since we *are* brains in this silly model you are using, why can we not understand the workings *of* the brain just by, well, introspection?
Being a thing does not imply having complete knowledge of the thing. Does a bridge know civil engineering?
Wait, did you not say contained *in* the brain, not the brain itself?
It's a subtle and not entirely important difference. The mind is currently only found within the brain, and has never been separated. Because of this, we treat the mind and the brain as the same thing quite often.
With the caveat that they aren't sentient.
I would argue that since the sentience of all other beings is inherently uncertain, you cannot build an ethical system that depends upon sentience.
"Evil" is a concept within ethical theories, and being evil does not make something not-a-theory.Right, and I'm throwing down economic game theory as a challenge to this assumption. I say that another valid ethics is maximizing gain, with each individual responsible for defining "value," and maximizing their personal gain.
If the dorfs are sentient, then they are responsible for maximizing the value they get out of whatever life presents them. Each player is likewise responsible for maximizing the value they get out of their lives. If torturing dorfs provides value to their lives, then it is only as ethical to do it as it is profitable.
That is not really a theory, more of a how-to description of how to be evil. ;) 8)
Yeah, I'd not harm a benevolent true AI. Because harming a true, sentient being has the same repercussions as harming a human.
What repercussions do you mean? Is this "if I hurt it, it might hurt me," a sort of Rawlsian veil, or "it causes pain, which is bad"?
Again with the value judgement based on an unspecified set of parameters that you're assuming are universal! Game Theory is hardly evil, it's a system by which you can concretely compare apples to oranges by converting things to a universal measurement.
For example, the Star Trek movie where Spock argues that the good of the many outweighs the good of the one, Kirk's counterargument is that since the good of strangers has less weight to him, that (Good * Many) isn't always equal to or greater than (Good * Me).
I understand that someone in this thread is concerned that they might be doing harm to a potentially sentient creature, but economic modelling measures the issue fairly concretely. Is the potential harm you're doing to the potentially sentient creatures greater than the amount of harm you're doing to yourself by worrying about it?
(% chance you're doing harm) * (amount of harm you're doing) * (% chance that the subject can sense your actions)
vs
(time you spend worrying about this) * (value of what you could be doing instead).
How is that evil?
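The comparison described above is an ordinary expected-value calculation. A minimal Python sketch with placeholder numbers (every value below is made up for illustration, not a claim about DF):

# (% chance you're doing harm) * (amount of harm) * (% chance the subject can sense it)
p_harm = 0.001        # placeholder: chance the dorfs are harmable at all
harm = 100.0          # placeholder: harm per rampage, in arbitrary utility units
p_sensed = 0.01       # placeholder: chance the subject can sense your actions
expected_harm = p_harm * harm * p_sensed

# vs (time you spend worrying) * (value of what you could be doing instead)
hours_worrying = 5.0  # placeholder
value_per_hour = 1.0  # placeholder
worry_cost = hours_worrying * value_per_hour

print(f"expected harm: {expected_harm}, worry cost: {worry_cost}")
print("keep playing" if expected_harm < worry_cost else "stop and worry")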
While it's possible that AIs could become sentient one day, DF entities are not.
And in fact, we're anthropomorphizing them: there are far more complex simulations whose sentience we never wonder about, e.g. complex weather simulations. But when you make some ultra-simplistic model called "person", people immediately wonder whether it's sentient. DF creatures are just letters on a screen that we've assigned a semantic label of representing people. They're no more sentient than a cardboard cut-out is.
E.g. "The Sims" are just a paper-thin facade of skin and a bunch of prewritten animations. There's literally nothing going on "inside their head" because there's literally nothing inside their head. Meanwhile, Google Deep Dreams is a very complex neural network. It's actually more believable that there's a spark of "self-awareness" inside something like Google Deep Dreams than inside a Sims character or DF dwarf.
Experience is not fundamental. Anything that I can determine about myself through introspection, I could theoretically determine about somebody else by looking at their brain. If there exists a non-physical soul, it does not seem to have any effects on the world. This lack of effects extends to talking about souls, and for that matter thinking about souls.
Or alternatively, we can say that something is a mind if it appears to have goals and makes decisions, and is sufficiently complex to be able to communicate with us in some way. Not that this is the True Definition of mind - no such thing exists! And there might be a better definition. My point is that you don't have to define mind-ness by similarity to the definer.
How do you know that you are conscious?
Ah, you mean philosophical zombies! Right? And you're saying that other people could be controlled by a Zombie Master. Is that correct?
But... what do you mean by something being "fake consciousness"? That's like something being "fake red", which acts just like red in all ways but is somehow Not Actually Red.
You might be able to imagine something that doesn't seem conscious enough, like a chatbot, but the reason that we call it Not Conscious is that it fails to meet certain observable criteria.
I do not think I could do most of the things I do without having self-reflectivity, etc.
What do you mean, "nowhere for the minds to go"? Minds are abstractions, not physical objects. It is not like the brain contains a Mind Lobe, which is incapable of being placed inside a processor. If a computer replicates the function of a brain, the mind has been transferred. The mind is software.
Being a thing does not imply having complete knowledge of the thing. Does a bridge know civil engineering?
It's a subtle and not entirely important difference. The mind is currently only found within the brain, and has never been separated. Because of this, we treat the mind and the brain as the same thing quite often.
The results of one's actions are fundamentally uncertain, and yet all consequentialist ethical systems depend upon the results of actions. "What should I do?" is dependent on the results of doing A, and B, and so on - even though there is an uncertainty in those terms. You still have to choose whichever consequence you think is best.
Appearances, eh? I feel nothing when I kill a DF character. I would feel something when killing a human. They're different, simulated characters are inferior. And you can't convince me otherwise no matter how many walls of text you throw at me.
Do you mean that all simulated beings are necessarily less morally important, or just that you haven't seen any morally important simulated being so far?
It's harmless to our society.
But the key question is whether it is harmless to all morally important beings.
It is not like there is any objective morality to which we can compare people's values. To a person, certain things are ethical and certain other things are not.
Again with the value judgement based on an unspecified set of parameters that you're assuming are universal! Game Theory is hardly evil, it's a system by which you can concretely compare apples to oranges by converting things to a universal measurement.
For example, the Star Trek movie where Spock argues that the good of the many outweighs the good of the one, Kirk's counterargument is that since the good of strangers has less weight to him, that (Good * Many) isn't always equal to or greater than (Good * Me).
I understand that someone in this thread is concerned that they might be doing harm to a potentially sentient creature, but economic modelling measures the issue fairly concretely. Is the potential harm you're doing to the potentially sentient creatures greater than the amount of harm you're doing to yourself by worrying about it?
(% chance you're doing harm) * (amount of harm you're doing) * (% chance that the subject can sense your actions)
vs
(time you spend worrying about this) * (value of what you could be doing instead).
How is that evil?
The issue here is that what is valued happens to be ethically significant. There is a difference between making calculations based on values that two entities share and whether those values are ethical to start with. Torturing people is not a valid value ethically, however much torturers may value it.
But I thought there was no objectivity? :P
While it's possible that AIs could become sentient one day, DF entities are not.
And in fact, we're anthropomorphizing them: there are far more complex simulations whose sentience we never wonder about, e.g. complex weather simulations. But when you make some ultra-simplistic model called "person", people immediately wonder whether it's sentient. DF creatures are just letters on a screen that we've assigned a semantic label of representing people. They're no more sentient than a cardboard cut-out is.
E.g. "The Sims" are just a paper-thin facade of skin and a bunch of prewritten animations. There's literally nothing going on "inside their head" because there's literally nothing inside their head. Meanwhile, Google Deep Dreams is a very complex neural network. It's actually more believable that there's a spark of "self-awareness" inside something like Google Deep Dreams than inside a Sims character or DF dwarf.
The problem is that they are an appearance/representation of humanity. It has nothing to do with what they objectively *are*.
Experience is not fundamental. Anything that I can determine about myself through introspection, I could theoretically determine about somebody else by looking at their brain. If there exists a non-physical soul, it does not seem to have any effects on the world. This lack of effects extends to talking about souls, and for that matter thinking about souls.
You modelled the world without taking the mind into account; of *course* it does not appear to have any effects on the world. That is because you made up a whole load of mechanics to substitute for the mind.
What on Earth do you mean? I take minds into account all the time! I don't have anywhere near enough cognitive abilities or knowledge to predict the world through quantum mechanics, so I resort to modeling people by their minds. I simply don't include minds in my fundamental models - that is reserved for electric fields and whatnot.
Not to substitute, but to explain. Minds are things. But they can't be fundamental things. After all, my mind is made up of thoughts and emotions and memories and whatnot. If I keep zooming in, what do I get to? Perhaps an electron.
You can make up as many mechanics as you like to explain away anything you like, after all.
No. "Explaining away" has a specific meaning. I can explain away gremlins by studying failure modes. I can explain rainbows by studying refraction. I have not explained away the mind; I have merely explained it. The mind is still real. It is just not fundamentally real.
You can always make up redundant mechanics to explain away all conscious decision making; since you are prejudiced against what you scornfully call a 'non-physical soul' to begin with, the redundancy is not apparent.
You can't refute my arguments by saying "you can say that." Indeed, I have said it. Are you going to actually respond to it, or just say "that is a thing you can think"?
You can make up as many mechanics as you like to explain anything you like; it does not mean that they exist or are not redundant.
They are not redundant. Only with psychology and neurology can we fully understand the mind.
Or alternatively, we can say that something is a mind if it appears to have goals and makes decisions, and is sufficiently complex to be able to communicate with us in some way. Not that this is the True Definition of mind - no such thing exists! And there might be a better definition. My point is that you don't have to define mind-ness by similarity to the definer.
Something does not have goals or make decisions unless it is genuinely conscious.
Yes! That is exactly my point. So if it is observed to have goals and make decisions, then it would be...
What you are in effect saying is that it is observed to behave in a way that, if *I* did it, would imply conscious decision making.
No, that is not actually what I am saying. I'm not talking about your concept of fundamental consciousness at all. I don't care if some mind has an Inscrutable Quale attached to it or not. I just care about how it acts, in order to model it and interact with it.
The point is invalid: you are still defining consciousness against yourself, though the assumptions are flawed in that they fail to take into account that two completely different things may still bring about the same effect.
For my purposes, it is irrelevant whether, on some metaphysical plane, a mind's activities are being produced by a True Mind or a Manipulator. This difference is unobservable and meaningless. It acts the same way, talks about its feeling of self-awareness in the same way... all of the important functions of a mind that I can notice are associated with physical processes.
How do you know that you are conscious?
Because I *am* consciousness. You can disregard the fact of your own consciousness in favour of what you think you know about the unknowable external world all you wish, but that is a stupid thing to do, so *I* will not be joining you.
I suppose I should have been more clear. I know that I know things. I can hold ideas and symbols. I am aware of my awareness of myself. But that does not mean that my consciousness is a fundamental fact. How do you know that you are Conscious, as you have defined it? That is, how do you know that you have Metaphysical Qualia, or whatever traits you hold to be fundamental to True Minds?
Ah, you mean philosophical zombies! Right? And you're saying that other people could be controlled by a Zombie Master. Is that correct?
It could be correct, but that is not exactly relevant. The zombie masters are then conscious beings and the main thrust (my being eternally alone) no longer applies.
But... what do you mean by something being "fake consciousness"? That's like something being "fake red", which acts just like red in all ways but is somehow Not Actually Red.
You might be able to imagine something that doesn't seem conscious enough, like a chatbot, but the reason that we call it Not Conscious is that it fails to meet certain observable criteria.
What I mean is something that exhibits the external behaviour of a conscious being perfectly yet does so by means that are completely different to how a conscious being does it. It is nice and mechanical: different mechanics but same outcome. A cleverbot is a fake consciousness because its programmers made no attempt to replicate an actual conscious being, merely its externally observable behaviour. It does not become any less fake simply because it becomes good enough to replicate the behaviour perfectly rather than imperfectly.
You are just asserting that. A chatbot is clearly not conscious, but if it were able to simulate consciousness, would it then be conscious? You have ignored this possibility, instead saying that the lack of consciousness is derived from the method of implementation of the chat function.
If it perfectly replicates the behavior, then it must be able to model its ability to model, and so on. In other words, it needs to model the same functions that are key to my own observation of consciousness. I do not see any important difference between these functions being carried out by flesh or silicon, or even the type of program that computes them.
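A toy Python version of the "different mechanics, same outcome" point above: the two functions below are observationally identical across their whole domain, yet one computes and the other merely replays a recorded table. The names and the domain are invented for illustration.

def genuine_square(n: int) -> int:
    return n * n  # actually computes

_recorded = {n: n * n for n in range(10)}  # built once, then frozen

def fake_square(n: int) -> int:
    return _recorded[n]  # no computation, just playback

# An observer restricted to inputs 0..9 cannot tell the two apart.
assert all(genuine_square(n) == fake_square(n) for n in range(10))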
I do not think I could do most of the things I do without having self-reflectivity, etc.
If you do the same thing a lot consciously, you tend to end up doing it reflexively without being aware of it, I find. But that is just me; perhaps this is not so for you. It is one more reason to conclude you to be a philosophical zombie, I guess, since the more differences there are between you and me, the lower the probability of your also being a conscious being.
I find your statement absolutely and abhorrently evil, perhaps the root of most evil in this world. Something being different from you does not make it ethically unimportant. However, I am fully capable of interacting with your statement outside of my ethical model.
What do you mean, "nowhere for the minds to go"? Minds are abstractions, not physical objects. It is not like the brain contains a Mind Lobe, which is incapable of being placed inside a processor. If a computer replicates the function of a brain, the mind has been transferred. The mind is software.
So wrong. Minds are not only objects, material or otherwise, but they are the only actual objects whose existence is certain. If a computer replicates the function of a brain, it is nothing but a computer that replicates the function of a brain. The cleverness is yours, not its.
I think you are using a decidedly different definition of "object," but that is irrelevant; the concepts matter more than the terms.
Being a thing does not imply having complete knowledge of the thing. Does a bridge know civil engineering?
A bridge is not conscious, and neither are brains for that matter. If consciousness had a physical form then the being would necessarily know the complete details of its own physical makeup, because everything about its physical makeup *is* made of consciousness.
I do not see why conscious things, specifically, must necessarily have complete self-knowledge.
It's a subtle and not entirely important difference. The mind is currently only found within the brain, and has never been separated. Because of this, we treat the mind and the brain as the same thing quite often.
The mind has never been found *anywhere*. The brain is at best the projecting machine that produces the mind; the mind itself, however, is not *in* the brain, because if it were we would have an intuitive understanding of neuroscience, which we lack. That we need to learn neuroscience in the first place implies that our brain is part of the 'external reality' and not the mind.
Well. If I mess around with your brain, you act differently. I'd say the mind is in the brain, then. Where else could it be?
What do you mean, the "projecting machine"? Where would this projection be projected onto? And is this projection epiphenomenal?
The results of one's actions are fundamentally uncertain, and yet all consequentialist ethical systems depend upon the results of actions. "What should I do?" is dependent on the results of doing A, and B, and so on - even though there is an uncertainty in those terms. You still have to choose whichever consequence you think is best.
That is a problem with consequentialist ethical systems.
How do you make your decisions, then? And how are you certain that what you do is right?
Appearances, eh? I feel nothing when I kill a DF character. I would feel something when killing a human. They're different, simulated characters are inferior. And you can't convince me otherwise no matter how many walls of text you throw at me.
Do you mean that all simulated beings are necessarily less morally important, or just that you haven't seen any morally important simulated being so far?
It's harmless to our society.
But the key question is whether it is harmless to all morally important beings.
They all are less important. They're irrelevant, even.
Why is that?
It is not like there is any objective morality to which we can compare people's values. To a person, certain things are ethical and certain other things are not.
In fact, it seems to me that Romeo was not even describing Ethics (in the philosophy sense) so much as personal decision-making, in which you consider the utility and disutility of each course of action, including thinking about courses of action. They were describing how people work, not judging it. You consider the costs of X and Y. If A is true, then choosing X will make people hurt, and that thought hurts you. But A is almost certainly not true, so you choose X anyway, because it has much more likely benefits. Calling all this "evil" is missing the point, and confusing the view from inside and outside of a model.
But I thought there was no objectivity? :P
What on Earth do you mean? I take minds into account all the time! I don't have anywhere near enough cognitive abilities or knowledge to predict the world through quantum mechanics, so I resort to modeling people by their minds. I simply don't include minds in my fundamental models - that is reserved for electric fields and whatnot.
Not to substitute, but to explain. Minds are things. But they can't be fundamental things. After all, my mind is made up of thoughts and emotions and memories and whatnot. If I keep zooming in, what do I get to? Perhaps an electron.
If something's effects can be entirely explained through a simpler model, that just means that the thing is non-fundamental.
And furthermore, I did not "make up" these mechanics, nor do they substitute for the mind. It's more looking at the actual mechanics of the mind, and using them to understand the mind.
No. "Explaining away" has a specific meaning. I can explain away gremlins by studying failure modes. I can explain rainbows by studying refraction. I have not explained away the mind; I have merely explained it. The mind is still real. It is just not fundamentally real.
You can't refute my arguments by saying "you can say that." Indeed, I have said it. Are you going to actually respond to it, or just say "that is a thing you can think"?
Please do not use ad hominem arguments. They are not productive.
They are not redundant. Only with psychology and neurology can we fully understand the mind.
And I cannot actually explain anything I like. Fundamental particles cannot be broken down; they simply are. Only non-fundamental things can be broken down into their parts.
Yes! That is exactly my point. So if it is observed to have goals and make decisions, then it would be...
No, that is not actually what I am saying. I'm not talking about your concept of fundamental consciousness at all. I don't care if some mind has an Inscrutable Quale attached to it or not. I just care about how it acts, in order to model it and interact with it.
"Mind" is a category of things. That is all. It is the same sort of thing as "red", except vastly more complicated to define.
For my purposes, it is irrelevant whether, on some metaphysical plane, a mind's activities are being produced by a True Mind or a Manipulator. This difference is unobservable and meaningless. It acts the same way, talks about its feeling of self-awareness in the same way... all of the important functions of a mind that I can notice are associated with physical processes.
I suppose I should have been more clear. I know that I know things. I can hold ideas and symbols. I am aware of my awareness of myself. But that does not mean that my consciousness is a fundamental fact. How do you know that you are Conscious, as you have defined it? That is, how do you know that you have Metaphysical Qualia, or whatever traits you hold to be fundamental to True Minds?
That is not necessarily true. Something does not need to be conscious to replicate conscious functions - is that not your claim?
If it perfectly replicates the behavior, then it must be able to model it's ability to model, and so on. In other words, it needs to model the same functions that are key to my own observation of consciousness. I do not see any important difference between these functions being carried out by flesh or silicon, or even the type of program that computes them.
You are just asserting that. A chatbot is clearly not conscious, but if it were able to simulate consciousness, would it then be conscious? You have ignored this possibility, instead saying that the lack of consciousness is derived from the method of implementation of the chat function.
Did you know that your own consciousness was implemented by a 'blind idiot god' (of sorts)? If a blind idiot can make a mind, it must not be that hard (on the timescale of millions of years at least).
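A crude Python sketch of the "blind idiot" point: the loop below has no understanding of its target, only random mutation plus selection, yet it reliably climbs to it. The target string and acceptance rule are arbitrary choices for illustration.

import random

random.seed(0)
TARGET = "strike the earth"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(s: str) -> int:
    # How many characters already match the target.
    return sum(a == b for a, b in zip(s, TARGET))

current = "".join(random.choice(ALPHABET) for _ in TARGET)
steps = 0
while current != TARGET:
    i = random.randrange(len(TARGET))  # blindly mutate one position...
    candidate = current[:i] + random.choice(ALPHABET) + current[i + 1:]
    if fitness(candidate) >= fitness(current):  # ...keep it if no worse
        current = candidate
    steps += 1

print(f"reached the target after {steps} blind mutations")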
I find your statement absolutely and abhorrently evil, perhaps the root of most evil in this world. Something being different from you does not make it ethically unimportant. However, I am fully capable of interacting with your statement outside of my ethical model.
Why do you think that whatever makes you conscious is only likely to be in things that are similar to you?
I think you are using a decidedly different definition of "object," but that is irrelevant; the concepts matter more than the terms.
Minds are things. They are not fundamental things. They are, as the buzzword goes, "emergent" (but so is everything else that is not a quark). For something to be a mind, it has to have certain capabilities. These capabilities can be carried out by any Turing machine, as well as more specialized machines such as the brain.
I do not see why conscious things, specifically, must necessarily have complete self-knowledge.
I am not claiming that the brain is made up of consciousness. Consciousness is not an element. It is not a peculiar type of molecule. It is a process, and this process may not have complete knowledge of the substrate on which the process is carried out.
Well. If I mess around with your brain, you act differently. I'd say the mind is in the brain, then. Where else could it be?
What do you mean, the "projecting machine"? Where would this projection be projected onto? And is this projection epiphenomenal?
How do you make your decisions, then? And how are you certain that what you do is right?
(Hmm. If you cannot cope with uncertainty, it does not surprise me that you have turned to these ideas. They offer complete and utter certainty, without even needing any evidence. Still, finding a cognitive reason for your statements is not equivalent to refuting them. I am just making an observation, and perhaps you would like to consider it.)
Well of course there is an objective morality against which we can compare people's values. The whole setup does not work otherwise, since you can just select any set of values you like to justify anything you like and then change it again. Ethics is pointless if it has no foundation in anything solid, since it has no force to control the behaviour of people.
The universe may well be unfair. Perhaps it has neglected to provide us with an objective basis for ethics. Perhaps ethics are all pointless. Saying "but that would be bad" isn't a good argument against that being the case.
It is more complicated than that. The point I was making is that since appearances are factual (100% certain) while the existence of any external reality is uncertain (not false, but less than 100% certain), you cannot build an ethical system based upon knowing what the objective facts beyond your appearances actually are.
In the real world, we can never be certain about anything. We have to build an ethical system on fundamental uncertainty, or simply not build any.
Since morality is built upon what people appear to be doing rather than what they are doing, things like images actually start to matter. Violence against an image of something is akin to violence against the thing itself, because the ethical signifier (?) is the appearance and not the reality.
You are modelling things using minds and then assuming that there is some other mechanic involved 'really'. Why is the other mechanic even needed then?
Minds are not, and cannot be, fundamental. They are far too complex. They must be made of smaller and simpler pieces. If I want to be as accurate as possible in my models, I should consider the pieces as well as the whole.
You are looking at the fundamental mechanics of the *brain*, not the mind.
What is a memory? We can tell by noticing the difference between "having a memory" and "not having a memory." This difference is within the brain; the memory is stored in the connection of neurons. Similarly for all other quasi-fundamental mental objects (which may also be stored in other forms of biological information, such as hormones).
The relationship between minds and the contents of the mind is interesting, though: is the mind best seen as a container into which stuff 'goes', or instead as a collection of things which are thrown together?
I do not see any important difference, nor is there a way to check either definition's validity. This is meaningless philosophy.
Explaining away is when you ignore a fact that you reasonably should know to be the case and then you invent a theoretical construct to explain the effect of that thing on the world.
This continues to not be the definition. Explaining away is when you show that an alleged object/entity/phenomenon, said to be responsible for a certain physical effect, need not exist - the effect is caused by something else. Explaining is when you show how an object/entity/phenomenon is made up of smaller things. See here for more. (http://lesswrong.com/lw/oo/explaining_vs_explaining_away/)
You for all practical purposes know that the mind exists, but you insist on ignoring its existence and seeking to ultimately explain all behaviour through mindless mechanics.
This is an incorrect summary of my beliefs and words. I know that the mind exists, and do not ignore its existence. Rather, I seek to understand its functions and composition. The mind is made up of non-mental things, just as a mountain is made up of non-mountain things, and an airplane is made up of non-airplane things. In order to better know a mind, you must also know the non-mental things which make it up.
It was not a personal attack.
I never said it was. Ad hominem arguments rely on showing a belief's proponent to be flawed, and using this as a counter-argument. This is fallacious and non-productive.
It was just that the core of the strain of 'wrongness' which leads to folks thinking that minds and brains are somehow the same thing is rooted in nothing except the a-priori rejection of the notion of an 'immaterial soul'. There must at all costs not be such a thing, which explains all other argumentation.
This psychoanalysis is incorrect, but beside the point. As I have said, showing a belief's proponents to be flawed is not an argument against the belief.
There aren't necessarily any fundamental particles; there might just be things you have not figured out how to split yet.
Perhaps not, but it seems unlikely.
The brain is not the mind; this means you cannot ever understand the mind by studying the brain.
How do you know that?
That rules out neurology for certain, though psychology not so much.
People's actions result from their neurology. There is no point where a metaphysical process leaps in and shifts an atom to the side, changing someone's actions. We can draw a causal chain back from "Bob raises his right hand" to "an electrical signal stimulates the muscles in Bob's right arm" to "an electrical signal is emitted by the brain down a particular set of nerves" to "a complicated set of signals is passed between neurons, which we describe as Bob deciding to raise his right arm" and so on.
Nothing is 'observed' to have goals and make decisions. We take as an a-priori assumption that the thing is conscious and we then explain its behaviour in that fashion. If we take as an a-priori assumption that the thing is just a mindless bot, we simply explain its behaviour as a programmed response to input and internal variables.
I do not, in fact, take something's consciousness as an a priori assumption. I look at its behavior, and see whether it demonstrates a tendency to act to satisfy a certain set of criteria. If it does, then I call that "goals and decisions," and move onto the next criterion.
If you can invent a mindless explanation for everything, that may well *work*. But since you are a mind, you know minds exist, meaning in the end that this model is incorrect in any case.
A mindless explanation does not make minds cease to exist, just as quantum physics does not make bridges and planes and mountains cease to exist, despite there being no term for "bridge" in the wave function.
Remember that when you move your arm, you are not in fact moving your actual arm at all. You are instead moving an image of your arm inside your mind. Supposedly the actual material arm exists and the brain (which may also not exist) picks up on your moving the imaginary arm, executing the necessary functions to make the actual arm move.
I do not actually remember this.
This means that on a physical level the brain *must* be able to actually do everything that the mind does.
This is true, because they are the same thing. (Or close enough to be "hardware" and "software.")
Because the mind is outside of the observable material universe, this 'causality' cannot be tracked; the effect can be observed but not the cause.
Only in the same sense that the redness of an object is outside of the observable material universe. That is, not at all.
Then how do you know there is even any cause?
You can thus invent a theoretical cause you believe to be within the material world (not actually a directly observed one) to explain away the mind.
No, I can't, because minds are real. What I can do is look at a person, and see what they do, and look inside them, and see what happens, and so on. Very little of this is "theoretical," and that's not even an insult like you think it is.
This may well 'work', and that is what is dangerous; we know that this is not what is going on only because we know that minds exist.
In a purely material world, intelligence could still exist, and people could still think that minds exist. Therefore, (thought that minds exist) -/-> (minds are non-material). (See the contrapositive.) [That is, you're saying that A -> ~B, where A is mind-feeling and B is material world. However, ~(B -> ~A) => ~(A -> ~B), QED.]
Remember that *my* consciousness *is* to me the *only* fundamental fact.
How do you know this to be true? Do you suppose, out of all possible mind-instantiations that are equivalent to yours, that none of them will be embodied in a world where your a priori belief is false? (See Bayes.)
Your 'consciousness', on the other hand, well, it may not even exist. You may be a philosophical zombie that is programmed to mimic consciousness; this rather well explains why 'your' idea of consciousness rather fits my idea of 'fake consciousness'.
Excuse me? I am most definitely conscious, and my lack of a priori belief in fundamental consciousness does not invalidate my subjective experiences nor my moral worth.
Actual control implies consciousness. So we have an illusion of control being executed over those who are unconscious.
It does? So anything that exercises control has a Metaphysical Consciousness Source, or will the Zombie Masters come and install a Fake Consciousness Source in every AI I design?
This reminds me of the days of yore (I think it was the 18th century) when clockwork was all the rage. They had this clockwork duck (https://www.theguardian.com/books/2002/feb/16/extract.gabywood) which replicated nearly all the functions of the real duck to the minds of the audience.
It is a mechanism, but not one that can be changed without affecting the physical universe. This mechanism is the same sort of mechanism as an engine - it's a physical thing that does things. It can also be broken down into non-mechanism parts.
If you replicate the behaviour of the thing, that does not mean you have made the thing, unless you do so by the same mechanisms the thing you are replicating was using. Mind is, from the perspective of the material universe, a mechanism, not an object; outside of material reality it is an object.
If consciousness is possible then it is just a question of combining together whatever things it is that make consciousness, whether accidentally or not. The problem is that it is impossible (well, without horribly unethical experiments) to actually isolate the 'consciousness-generating' elements of the human from everything else. Only when you have isolated this element, that is, 'cut it away' from everything else (likely literally), can we then replicate the essential and replace the 'inessential' elements with other elements of our choosing to make a strong AI.
Only when? Are you saying that it is impossible to make a self-reflective thing without that thing having True Consciousness?
Remember that I am not saying that the perfect Cleverbot (that is, the perfect fake consciousness) would not genuinely be a conscious being. I am saying that it would be impossible to tell: if Perfect Cleverbot is actually conscious, it is an accident of us having inadvertently ended up using the actual mechanic that brings about consciousness in the process of making a fake one. Because Cleverbot is consciously designed as a perfection of a fake consciousness, it is impossible to determine whether we 'accidentally' stumbled on the right mechanic.
So this mechanic is physical, then?
The odds of other beings being mindless go up the more different you are from me, but it is a probabilistic thing. As for the evil part, I am the one arguing that actual consciousness is irrelevant; hence it does not follow that a medium cannot be unethical just because no actual conscious beings were hurt, since it is the appearance that matters.
It is the appearance of calling something not-a-person that matters, in a consequentialist sense. Perhaps you treat chatbots well, but I doubt everyone does. This is a very bad idea.
I think you are using a decidedly different definition of "object," but that is irrelevant; the concepts matter more than the terms.
Minds are things. They are not fundamental things. They are, as the buzzword goes, "emergent" (but so is everything else that is not a quark). For something to be a mind, it has to have certain capabilities. These capabilities can be carried out by any Turing machine, as well as more specialized machines such as the brain.
From the perspective of the material universe (in so far as we think we understand it) the mind is not an object but a mechanism; from the perspective of the mental universe, the material universe is a mechanism to explain the objects in the mind.
This seems like useless, meaningless philosophical gibberish to me. Have you found any actual evidence for this non-material universe yet, or are you still just asserting its existence?
Or rather the fact that there are mental objects that we cannot simply wish away (material reality is a mechanical explanation for the lack of Matrix spoon-bending).
Alternatively, we cannot wish away mental objects because brains do not have conscious self-editing powers.
I do not see why conscious things, specifically, must necessarily have complete self-knowledge.
That is because consciousness does not exist in material reality. If the mind were the brain, the brain would also be the mind. That means what we are aware of (the mind) is the functioning of the brain. That being so, we would be able to learn about the internal functioning of the brain from introspection. That our mind teaches us nothing about the internal mechanics of our brain establishes pretty solidly that the mind and brain are completely different things.
No. I do not see how you are getting this. We experience some functions of the brain, not all. You are basically saying A is a subset of B, therefore B is a subset of A. That is not valid logic.
Do you think, in a purely material universe, all conscious beings would always have complete self-knowledge? And how do you know that?
Taboo "material", please.I am not claiming that the brain is made up of consciousness. Consciousness is not an element. It is not a peculiar type of molecule. It is a process, and this process may not have complete knowledge of the substrate on which the process is carried out.
Yes, consciousness/mind is not a material thing.
The brain is a material thing, hence it is not the mind/consciousness.Consciousness is not a physical thing you can pick up, but it is a physical thing that happens in the physical world. It's the difference between a log and fire. It's a process, with physical causes and physical effects. No metaphysics involved.
Well. If I mess around with your brain, you act differently. I'd say the mind is in the brain, then. Where else could it be?
The mind is nowhere (that is, it has no location). The brain does many things, in fact most things, mindlessly. You can also change the appearances in the mind by altering material reality, of which the brain is a part.
That is reserved for concepts. Processes have locations.
How does the physical world affect the mind? And if you had never learned about Phineas Gage, would he have been (at least weak) evidence against dualism, in your view? (What would you have predicted beforehand?)
What do you mean, the "projecting machine"? Where would this projection be projected onto? And is this projection epiphenomenal?
The projection machine is the 'function' of the universe that produces consciousness. We have reason to believe that the projecting machine is the material object behind the appearance we call the 'brain'.
You are misusing a mathematical term. Functions are maps from sets to sets (or a more general version of the same).
We do, do we? Would you care to share the reason for supposing the existence of a "projection" at all?
It is actually epiphenomenal in any case; consciousness is clearly a byproduct of something material, which is to say something unknowable.
Do you know what epiphenomenalism even is?
The problem here is that there is the mental input (the mental appearance), but in order for certain aspects of consciousness to exist (free will) the output must also allow a returning input. *That* means that the projector is not simply projecting an image; it is projecting a user-interface to something.
I do not follow. This all seems baseless speculation, anyway.
The second part is a problem since it ties consciousness to material reality as a mechanic. The fatalistic 'movie consciousness' can work quite nicely with mindless mechanics, but the 'user-interface consciousness' needs to function as a mechanism (though that does not make it a material object).
How do you make your decisions, then? And how are you certain that what you do is right?
(Hmm. If you cannot cope with uncertainty, it does not surprise me that you have turned to these ideas. They offer complete and utter certainty, without even needing any evidence. Still, finding a cognitive reason for your statements is not equivalent to refuting them. I am just making an observation, and perhaps you would like to consider it.)
My ideas purport that all material facts and all other consciousnesses are inherently uncertain and there is no way to ever make it otherwise; not exactly any refuge from uncertainty there. The only certain thing is the existence of my appearances in themselves (apart from their supposed material cause); hence, to answer your question, I make my decisions, ethical or otherwise, based upon appearances.
Yes, but you do not need material facts, correct? You are giving the uncertain up for lost, and basing everything on the certainty of your thoughts alone. You are, in fact, thinking that minds are metaphysical; this is all you can know, and all you need to know. Right?
Welp. You still didn't react to my response that DF characters are inferior and morally irrelevant. Guess you ran out of walls of text to throw at me.
The universe may well be unfair. Perhaps it has neglected to provide us with an objective basis for ethics. Perhaps ethics are all pointless. Saying "but that would be bad" isn't a good argument against that being the case.
In the real world, we can never be certain about anything. We have to build an ethical system on fundamental uncertainty, or simply not build any.
Also, that does not follow. The mere fact that some things are more certain than others does not mean that the more certain things are a better basis for morality.
Additionally, it is still appearances by which Reelya assigns moral worth. It is just a deeper sort of appearance, one that you must investigate in order to see.
(All things moral are built on subjectives) -/-> (all subjective things have moral worth). That is confusing the superset and the subset. In other words, simply because all morally-important things happen to be subjective, does not mean that all subjective things are morally important.
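That subset/superset point is easy to check mechanically; a two-assert Python sketch with invented toy sets:

# "All morally important things are subjective" means moral is a subset of subjective.
subjective = {"pain", "joy", "preferring green", "finding puns funny"}
moral = {"pain", "joy"}

assert moral <= subjective        # the premise: moral is a subset of subjective
assert not (subjective <= moral)  # the converse does not follow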
Have you ever read Asimov's Relativity of Wrong?
Minds are not, and cannot be, fundamental. They are far too complex. They must be made of smaller and simpler pieces. If I want to be as accurate as possible in my models, I should consider the pieces as well as the whole.
What is a memory? We can tell by noticing the difference between "having a memory" and "not having a memory." This difference is within the brain; the memory is stored in the connection of neurons. Similarly for all other quasi-fundamental mental objects (which may also be stored in other forms of biological information, such as hormones).
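A minimal Python sketch of "memory stored in the connections": a toy Hopfield-style associative net, where the learned patterns live entirely in a weight matrix and a corrupted cue is restored by the connections alone. The patterns and sizes are arbitrary choices for illustration.

import numpy as np

# Two 8-unit patterns to memorize, as +1/-1 vectors (chosen orthogonal).
patterns = np.array([
    [1, -1, 1, -1, 1, -1, 1, -1],
    [1, 1, 1, 1, -1, -1, -1, -1],
])

# "Learning": the memory lives entirely in this weight matrix,
# built with the Hebbian outer-product rule.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

# Recall from a corrupted cue: flip the first bit of pattern 0.
cue = patterns[0].copy()
cue[0] *= -1
for _ in range(5):  # iterate the update until it settles
    cue = np.sign(W @ cue).astype(int)

print((cue == patterns[0]).all())  # True: the connections restored the memory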
I do not see any important difference, nor is there a way to check either definition's validity. This is meaningless philosophy.
This continues to not be the definition. Explaining away is when you show that an alleged object/entity/phenomenon, said to be responsible for a certain physical effect, need not exist - the effect is caused by something else. Explaining is when you show how an object/entity/phenomenon is made up of smaller things. See here for more. (http://lesswrong.com/lw/oo/explaining_vs_explaining_away/)
This is an incorrect summary of my beliefs and words. I know that the mind exists, and do not ignore its existence. Rather, I seek to understand its functions and composition. The mind is made up of non-mental things, just as a mountain is made up of non-mountain things, and an airplane is made up of non-airplane things. In order to better know a mind, you must also know the non-mental things which make it up.
This psychoanalysis is incorrect, but beside the point. As I have said, showing a belief's proponents to be flawed is not an argument against the belief.
You can argue this about anything. Any belief could conceivably be held as an a priori, absolute, unreasonable belief - including yours. As such, this possibility is not an argument against any particular belief. (See Bayes as it applies to arguments.)
Perhaps not, but it seems unlikely.
How do you know that?
People's actions result from their neurology. There is no point where a metaphysical process leaps in and shifts an atom to the side, changing someone's actions. We can draw a causal chain back from "Bob raises his right hand" to "an electrical signal stimulates the muscles in Bob's right arm" to "an electrical signal is emitted by the brain down a particular set of nerves" to "a complicated set of signals is passed between neurons, which we describe as Bob deciding to raise his right arm" and so on.
I do not, in fact, take something's consciousness as an a priori assumption. I look at its behavior, and see whether it demonstrates a tendency to act to satisfy a certain set of criteria. If it does, then I call that "goals and decisions," and move onto the next criterion.
A mindless explanation does not make minds cease to exist, just as quantum physics does not make bridges and planes and mountains cease to exist, despite there being no term for "bridge" in the wave function.
I do not actually remember this.
This is true, because they are the same thing. (Or close enough to be "hardware" and "software.")
Is it made of physical stuff? Then it's material. Is it unobservable? Then how do you know it exists? The only observable immaterial things are mathematical concepts, perhaps.
Then how do you know there is even any cause?
No, I can't, because minds are real. What I can do is look at a person, and see what they do, and look inside them, and see what happens, and so on. Very little of this is "theoretical," and that's not even an insult like you think it is.
In a purely material world, intelligence could still exist, and people could still think that minds exist. Therefore, (thought that minds exist) -/-> (minds are non-material). (See the contrapositive.) [That is, you're saying that A -> ~B, where A is mind-feeling and B is material world. However, ~(B -> ~A) => ~(A -> ~B), QED.]
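The bracketed logic there can be brute-forced; a tiny Python truth-table check:

from itertools import product

def implies(p: bool, q: bool) -> bool:
    return (not p) or q

# A = "people think minds exist", B = "the world is purely material".
# The claim: not(B -> not A) is equivalent to not(A -> not B);
# both reduce to "A and B are compatible".
for A, B in product([False, True], repeat=2):
    assert (not implies(B, not A)) == (not implies(A, not B)) == (A and B)

print("the equivalence holds for every truth assignment")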
How do you know this to be true? Do you suppose, out of all possible mind-instantiations that are equivalent to yours, that none of them will be embodied in a world where your a priori belief is false? (See Bayes.)
Excuse me? I am most definitely conscious, and my lack of a priori belief in fundamental consciousness does not invalidate my subjective experiences nor my moral worth.
Your argument makes no sense. By your own definition, there is no observable difference between Zombie Chatbots and Real Humans, since the only difference is an unobservable metaphysical source of consciousness. Therefore, nothing that I do can be evidence toward me being a Zombie Chatbot. (See Bayes.)
It does? So anything that exercises control has a Metaphysical Consciousness Source, or will the Zombie Masters come and install a Fake Consciousness Source in every AI I design?
I don't see how you could possibly obtain any of this "information" yourself to any degree of reliability.
It is a mechanism, but not one that can be changed without affecting the physical universe. This mechanism is the same sort of mechanism as an engine - it's a physical thing that does things. It can also be broken down into non-mechanism parts.
Only when? Are you saying that it is impossible to make a self-reflective thing without that thing having True Consciousness?
So this mechanic is physical, then?
It is the appearance of calling something not-a-person that matters, in a consequentialist sense. Perhaps you treat chatbots well, but I doubt everyone does. This is a very bad idea.
This seems like useless, meaningless philosophical gibberish to me. Have you found any actual evidence for this non-material universe yet, or are you still just asserting its existence?
Alternatively, we cannot wish away mental objects because brains do not have conscious self-editing powers.
No. I do not see how you are getting this. We experience some functions of the brain, not all. You are basically saying A is a subset of B, therefore B is a subset of A. That is not valid logic.
Do you think, in a purely material universe, all conscious beings would always have complete self-knowledge? And how do you know that?
Taboo "material", please.
Consciousness is not a physical thing you can pick up, but it is a physical thing that happens in the physical world. It's the difference between a log and fire. It's a process, with physical causes and physical effects. No metaphysics involved.
That is reserved for concepts. Processes have locations.
How does the physical world affect the mind? And if you had never learned about Phineas Gage, would he have been (at least weak) evidence against dualism, in your view? (What would you have predicted beforehand?)
You are misusing a mathematical term. Functions are maps from sets to sets (or a more general version of the same).
We do, do we? Would you care to share the reason for supposing the existence of a "projection" at all?
Do you know what epiphenomenalism even is?
The material is not unknowable. It is the only knowable thing. It is not certain, but it is knowable.
Yes, but you do not need material facts, correct? You are giving the uncertain up for lost, and basing everything on the certainty of your thoughts alone. You are, in fact, thinking that minds are metaphysical; this is all you can know, and all you need to know. Right?
Quote: Can't stop me from killing elf children and telling their parents about it.
Oh, yeah? Well, consider this: There is no elf.
Quote: If the universe were entirely fair, then why would we need ethics at all? Ethics requires something that is not as it should be, not in order to exist, but to have any point in existing. In an automatically perfect world, where everything that happens has to be fair, why would anyone need ethical judgements of anything? I am not talking about a world where wrong things simply do not happen, but one in which they cannot happen...
I do not see how this responds to my point. Yes, things aren't always fair. That's what I said.
Quote: The trick is not to build your ethical system on the real world, because you cannot be certain about anything real. :)
I do not have to be certain in order to judge and act. And if my ethical system is not firmly connected to the real world, what is the point? I'm not interested in creating the appearance of good, I want to make actual good things happen. I can't be certain that I'm not in some Bizarro world where good is bad and vice versa, but at least I have a chance of success, since I am able to conceive of true ethical success within my ethical system.
Quote: No, I have not, unfortunately. I also never said that *all* subjective things are morally important.
I think it is a good short essay that interacts with the fallacy of gray, so I recommend reading it. At the very least, it will help you understand my point of view. Also, your argument was just that something was subjective and thus it was morally important, right?
Quote: Minds are made of ideas, but really, on the whole, there are not that many ideas in your mind at the moment in any case. So while minds are not entirely simple, compared to, say, the brain, they are pretty simple.
Minds are more than the stream of consciousness, though, are they not? It's more than a snapshot of my current thoughts. Otherwise, you'd be dealing with the "never step in the same river twice" problem.
Quote: Yesterday I read a book called The Starfish and the Spider (https://www.amazon.co.uk/Starfish-Spider-Unstoppable-Leaderless-Organizations-ebook/dp/B000S1LU3M/ref=tmm_kin_swatch_0?_encoding=UTF8&qid=&sr=#reader_B000S1LU3M). Back in the 1960s, researchers assumed, based upon the basic structure of the mind (which materialists take to be the same thing as the brain), that the brain had to be organized in a hierarchical structure, with memories all neatly assigned to particular parts of the brain.
I do not think that research indicates what you think it does. There have never been nuggets of memory in our brains according to the materialists, never any physical objects to find. Instead, memory is an emergent property of the persistent traits of connected neurons.
Quote: They found that this was simply not the case. The solution was put forward by someone called Jerry Lettvin: instead of the brain being a centralized thing in which memories are stored as separate objects, perhaps the brain was a decentralized thing. What this means in effect is that memory (and everything else) is not a thing stored in the brain; it is a consequence of the functioning of the brain as a whole.
The brain is messy, since it was made by evolution. It doesn't have neat little bins for each memory. But the memories are still stored inside the brain, just not in a formalized way.
Quote: As I said, the brain is the projector and the mind the projection. The mind (the projection) is centralized, but the projector is decentralised; or, as Dr Who put it once, 'the footprint does not look like the boot'. :)
Quote: It is a very important difference. If you are the 'glass', then you are the same person you were yesterday before you went to sleep, and are potentially immortal. If you are the 'water', then not only is an afterlife ruled out, but your existence also began when you woke up and ends when you go to sleep. It also matters because if you are the glass, then your existence can be said to be objectively the case, but if you are the water, your existence is entirely subjective.
Well, in that case, my sense of persistent self would indicate that my idea of "mind" fits better with the box interpretation than the contents interpretation. However, I disagree that the contents interpretation is not reconcilable with a persistent self. I also do not see how the box interpretation results in an objective mind and the contents interpretation a subjective mind.
Quote: I am using the exact same definition as you are, Dozebom. What I am saying is that it is sometimes wrong to explain things away even when you *can*, which is when you have other evidence from other 'sources' that something is the case. That is because if something *can be*, it does not mean that it *is*.
But I haven't explained anything away! There are still bridges even when I understand atoms! There are still minds even when I understand neurons! Understanding the fundamental workings of a world does not make the higher-level arrangements unreal.
Quote: A mindless universe is quite possible to model, and it does work. We do, however, know we do not live in such a universe, which is why any model that explains away the mind using a mechanic other than the mind is wrong. Explaining away redundant theoretical things can be a good idea, but explaining away the things of which you are more certain with the things of which you are less certain is not a good idea.
Taboo "mindless"? I think you might be using it as "universe with no entities possessing subjective experience," while I am using it as "universe with solely non-mental fundamental components."
Quote: The brain is made up of neurons. The mind is made up of ideas; neurons are not ideas.
This is not a pipe? Yes, but that's like saying that Microsoft Word extends beyond the material world. In other words, either all abstractions/concepts are non-material, or none are.
Ideas are not physical, but they must be stored/represented somehow. Having a consciousness without a brain is like having Microsoft Word running on thin air.
Zooming in doesn't have to be physical. What do you mean by "happy" or "sad"? If you want to describe it further than synonyms like "it feels good," you have to start talking biochemistry.
Bailey and motte. Saying that ideas are not physical objects does not get you to the statement that neurology cannot teach you about minds, or whatever else you're saying about minds.
Quote: There is a difference between believing in something a priori and disbelieving in things a priori. :)
Is there a meaningful difference? After all, all beliefs require you to disbelieve in their negations. (bivalence)
Quote: A priori beliefs are acceptable; a priori disbeliefs are not. That is because, since material reality is unknowable, you can always invent a contrived explanation to continue to believe in something.
The material reality is not "unknowable." You might say these things, but if you want to know what time to arrive for an appointment, you won't sit around philosophizing about the fundamental enigmatic state of the universe, you'll check your physical calendar and look at the written symbols. And if you wrote it down correctly, you'll be on time.
Consciousness depends on experiences. Experiences build and correlate. If there was literally no connection between experiences, we'd be like Boltzmann brains. There must be a degree to which we can learn about the world around us. (Perhaps "the world around us" is not the True Reality, but I don't see how this refutes my point. *kicks a rock*)
Quote: People thought that atoms were unsplittable; indeed, that was the whole idea. They were wrong.
Yes, but they were more right than people who said that all was water. Quantum physics seems pretty accurate so far; I'd say there's a decent chance that there is no infinite recursion of ever-tinier subsub...atomic particles.
You're ignoring the entire field of neuropsychology.
Quote: The brain breaks down into neurons, the mind into ideas; neurons are *not* ideas.
A program breaks down into logical steps, not computer parts, and yet I could conceivably read off a Word document by going bit-by-bit over my computer's hard drive, if I so wished and if I had the right tools.
Quote: Most things the brain does, it does mindlessly. The mind is not involved in the actual execution of the tasks it decides upon.
I feel like you missed my crucial point there. People's actions follow from their neurological activity.
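(To make the earlier bit-by-bit point concrete, here is a trivial sketch; the file name is made up, and a real .doc of course has structure beyond a bare byte stream:)
Code:
#include <cstdio>

// Read a document as raw bytes. The point: the file "is" these bytes,
// the way (arguably) a mind "is" neurons. "essay.doc" is a made-up
// name for illustration.
int main() {
    std::FILE *f = std::fopen("essay.doc", "rb");
    if (!f) return 1;
    long count = 0;
    for (int c; (c = std::fgetc(f)) != EOF; )
        ++count;                 // visit every byte, one at a time
    std::fclose(f);
    std::printf("%ld bytes\n", count);
    return 0;
}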
As for your response, is the subconscious not part of the mind?
Quote: So in your reality, is it never possible for two completely different things to realize the same observable outcome by using completely different means?
I do not assign consciousness to the means, but rather the function. What does it do? How does it respond if I say "hello"? How does it react if something sudden happens? Can it be fooled? Can it notice that it's being fooled? To me, everyone's specific brains are black boxes. All I care about is the output.
Quote: Correct. Both mindless and mindful explanations for the same things can coexist in the same universe.
Explanations do not exist within a universe; they serve to explain parts and levels of a universe.
I can explain somebody's behavior by saying "they were mad" or "neurons 1, 9, 39, 20832, etc., fired and this particular neurochemical was released and this hormone is at X levels." The first is an abstraction, and a short word for a seemingly-simple output. It's like saying "my OS crashed", or "this piece of code caused a segfault." They're both true. One is more general, abstract, higher-level. That's all. The other does not invalidate the one.
Quote: Think about phantom limbs, about how people who lose limbs can sometimes feel the limbs that they lost. The point is that the mind 'does things' by manipulating an image of the body, not the actual material body, and yet the latter must somehow respond to its image being modified. It is rather a 'this is what I look like, now show me what I must do?', except of course the whole thing could be a lie, naturally.
I see what you're saying. It sounds a bit like predictive processing (http://slatestarcodex.com/2017/09/05/book-review-surfing-uncertainty/), but maybe kind of not?
(Yes, Descartes.)
Quote: (actually Descartes came to the conclusion that a good God would not allow the Cartesian demon to deceive him, and so this joke fails, but my esoteric historical-philosophical trivia wins)
Quote: Mind actually only exists when there is no software 'installed' on the brain to do a function.
Thinking is a function, of sorts. It's a process.
Quote: Over the course of making the latest version of my mod, which is far more work than I had ever envisioned, I have used the copy function so many times repetitively in the same context that I am no longer aware of actually doing so. That is to say, when I press the 'paste key', I usually find I have copied the relevant data even though I have absolutely no recollection or awareness of doing so (this includes the physical act of pressing the right mouse button).
That is like calling a routine. But not all software is calling routines.
Quote: So the brain's 'software' is basically the anti-mind. If the brain ever accumulates enough 'software' to execute all your functions without you, then kiss your conscious existence goodbye.
This does sound like something that I've read somewhere (https://squid314.livejournal.com/332946.html), but keep in mind that this is conscious vs subconscious and not mental vs material. It's all mental-material. "I can identify words" and "I don't even have to consciously identify words, I just recognize their shapes and instantly move on" are both cognitive skills, done by the brain.
Quote: It is observable, but it is not material or physical.
Taboo "observation"?
Quote: That is an odd question to ask when we use a whole set of mechanics that are not physically observable at all to explain things. Gravity, for instance.
Gravity is not an explanation, it is an observation. (Except that "lifted things fall because F = GMm/r²" is sort of an explanation, but it's just one step of a chain of explanations.)
We think that mass warps spacetime because we notice that time slows down around massive things. There is a neat mathematical way of representing all the various relativistic effects, and so we model it all with General Relativity.
To put my question differently, how do you know anything about the connection between the cause and effect, which would then mean that there is a cause?
Actually, what do you even mean by "causality"? I wouldn't say that gravity causes falling, but rather that there is a force exerted on bodies under certain conditions, which causes/is acceleration. This force's specifics can be predicted with an ever-more-accurate series of equations, from F = GMm/r² onward. I have now sort-of Tabooed "causality" in gravity, except for the force-acceleration thing (which I think is just an axiom of physics, and not what you're talking about). Can you conceivably do the same for your dualistic theory of minds?
Quote: Minds are not real, minds are factual. Real things are things that exist independently of the mind (or is that objective things?); minds do not exist independently of the mind.
That which does not go away when you stop believing in it, is how I have seen it put, but your mind does not go away when you stop believing in it.
It is a word, anyway. It can be defined however we like. Its use here is so far removed from everyday life that there are multiple valid ways of specifying reality. Can we stop quibbling over "real" vs "factual" and just discuss the actual concepts? My point was that I cannot explain away the mind, because the mind is a thing that is, and you can only explain away things that aren't.
Quote: People could not think that minds exist, because there would be no people thinking anything. You cannot think that minds exist unless you have a mind, but you can falsely conclude that other entities have minds when they do not.
You have defined personhood as having Metaphysical Qualia, or whatever, but - we could conceivably simulate this, yes? Since the metaphysical world follows certain rules, right? Then we can imagine a hypothetical Material Person with the same sort of mind-function as a Metaphysical Person.
They would still have thoughts, because thoughts are material things. There is a measurable difference between your brain-state as it thinks different things. If thoughts were not physical, they could not affect the physical world. If thoughts cannot affect the physical world, then why are you talking about them? Your physical hands are twitching and talking about thoughts. They clearly have to be affected by your thoughts, then.
Quote: I know this is true because whenever I perceive something external, there is a probability that it is illusory. Illusory or not, however, it is still a fact that *I saw* an appearance of something; that is to say, the reality behind everything perceived is a question, but the appearances are not questionable.
Do you think that all your philosophy about Metaphysical Minds follows directly from the fundamental fact of self?
Quote: It is easy for you to be programmed to simply state you are conscious. It is harder, though, to program you to actually demonstrate a comprehension of the 'unobservable metaphysical source of consciousness', especially if the programmer has no consciousness itself.
As I've said - Bayes. How easy is it for anybody to comprehend the Unobservable Metaphysics of Consciousness? And how is a non-conscious thing supposed to create consciousness? Evolution did that, but how will this Zombie Master select consciousness from unconsciousness in order to fool others into thinking that its Zombies are Actually Conscious?
Quote: By consciousness the zombie does not mean actual consciousness; it means the behavior that it must exhibit in order to trick the universe's actual consciousnesses into falsely ascribing it consciousness.
What do you mean by Actual Consciousness, though? Why is that a good way to split minds up into categories? Does it carve reality at its joints (http://lesswrong.com/lw/o0/where_to_draw_the_boundary/)?
Quote: This has the interesting consequence that one way to determine experimentally whether you are really dealing with a philosophical zombie or not is to draw a distinction between a fake and a real consciousness, with a description of both. This will ferret the fake consciousness out, unless its programmer is actually a conscious being and specifically programmed a special script into the zombie such that it would always parrot the correct answers.
Wait, so Fake Consciousnesses are actually empirically distinguishable from Actual Consciousnesses? Huh, that changes things.
Let me reiterate something that I think you are not getting. Everything you say about consciousness can be traced back through physico-causality to neurons firing. There is no point where a Zombie Master or a Metaphysical Consciousness reaches in there and makes somebody say "I'm conscious!" You would do that anyway.
Quote: The most reliable evidence is my own experience, as already discussed above. That is because all material/objective/real things are possibly fake/illusory, and all other people are possible philosophical zombies.
How do you know that the fact that you experience things implies the existence of Metaphysical Consciousness Sources and Possible Philosophical Zombies and all that stuff?
Quote: You are still not comprehending the idea that two things can do the same and look the same, without being the same thing?
They're not literally the same thing, no. I feel like you are lacking a certain concept of unimportant differences. You often say "ah, your analogy does not work, because the two things being compared are not the same thing!" That is not how comparisons work, and this is not how categories work.
Just because two things have some difference does not mean that they must be in different categories. If I categorize things based on their appearances, then whether or not two things are Actually Literally Fundamentally Identical, things that seem similar are grouped together.
Quote: The term "false consciousness" means what it says on the tin: something that exhibits the observable behaviors of a true consciousness well enough to trick the observer, but has no underlying consciousness behind those behaviors.
You still haven't sufficiently defined consciousness, though. Can something be self-reflective and yet not Truly Conscious? Can something demonstrate self-awareness and yet lack True Awareness? I don't remember you clarifying these.
Quote: The projector is physical, but not the projection. It is the decentralization of the brain (see above) that makes the isolation process very difficult to pull off. It is not a single thing but a whole network of things you have to isolate, and you have to figure out, of potentially millions of connections, which set of connections is the essential set you need to give your Strong AI, so that you know you have actually made a genuine consciousness rather than a fake one.
What do you mean by "connection"? Is this interneural or interplanar?
Quote: What is wrong with treating chatbots well?
Nothing, but many people would probably disagree with you that chatbots should be treated well. Also, if it comes at a cost to actual people, I would not help chatbots, because they're literally just tiny pieces of code that spit back prerecorded messages on certain triggers. It's like treating... literally any random piece of code well, except that it will say "thank you" if you say "you're cool!" and most things will just sit there.
Quote: To be evidence means to be perceived, that is, to appear in the non-material universe. There is no material evidence for anything, ever.
Perceptions are not immaterial. You perceive something when your neurons get entangled with it through your sensory organs. This is a material process.
Also, if you want to know how to make a bridge that won't fall over, you'll start re-inventing the idea of evidence pretty quickly in order to get the sciences of statics, material science, etc. running again. In practice, material evidence is totally a thing. Look. *hits a rock with toe* I call that "material."
Quote: There are no brains in this scenario. We are talking about there being no material universe, remember? So yes, what you are saying is just what I am saying, but with the brain in particular rather than the material universe in general.
We are? Looking back, I don't think that's what's been happening... I don't know anything about this supposed scenario.
Quote: Exactly. We only experience some functions of the brain because we are not the brain. We only know what the brain tells us, and the brain reveals nothing of its own mechanics to us in what it tells us.
Clarification: we are running on limited portions of our brain. We are not our entire brain, but we are nothing but our brain (and assorted other entangled things, extended mind, whatever, it's all physical).
Quote: If there were somehow consciousness in a purely material universe, then it would have to be material as well. If the brain is the physical mind, then we will understand the mechanics of the brain, since that is what we can be aware of in this universe. If we are less than the whole brain, there would be some part of the brain we understood, even if not everything; but all parts of the real brain work using the same mechanics.
You're asserting that, but I don't see how it follows. The brain is not quite the physical mind - it's not like I've thrown the Metaphysical Consciousness into the material world and shoved it into people's heads and called them "brains." It's more like... the mind is the process carried out by operations within the brain.
Quote: I could say "objective" if you wish. Except that subjective experience is an objective fact, so "material" is a better word to use.
That's more a rephrasing. What I mean is...
Do you mean "made up of subatomic particles"? In that case, everything is partially non-material - are symbols and ideas made up of particles? Not quite. But that doesn't mean that they're anywhere else besides here.
The US is an institution. It's not really made up of atoms. It's an abstraction, a group. It's not on the Metaphysical Plane of Nations. It's just here.
Quote: In the physical world it appears only as a mechanic to explain the behavior of objects.
In the physical world, consciousness is a property/function of certain kinds of objects. It doesn't really explain behaviors. It's just "this is a thinking object". An object's Consciousness Boolean can be used to predict its actions, yes, but that is not all.
Quote: To itself (in the non-physical world) it appears as an object, while the physical world appears as a mechanic to explain it.
I don't get what you mean by "object" and "mechanic," or even "explain."
Also, what "non-physical world"? What do you even mean by that?? Is it a place where things are? Is it an idea? Is it a state of being?Mechanics do not have locations. Gravity is a mechanic and it does not have any location.What is a "mechanic"? And gravity is a force, and any given instance of gravitational force has a location.As for the rest, recall consciousness is not happening in the brain, the output of consciousness acts are a mechanic in the brain.What is a consciousness act? How is its output a mechanic in the brain? Recall this - the brain is a physical, material object. It does not interface with metaphysics. It acts according to the strict laws of physics. (quantum mechanics is probabilistic, but strictly so - you cannot mess with probabilities any more than you can violate thermodynamics)I would have projected that altering the projector would alter the projection in some unpredictable way. That would in turn alter the decisions made consciously, which would in turn cause the mechanic effect of consciousness to change.So in your dualistic model, there is a back-and-forth between mind and brain? How is this happening? How is the mind affecting the brain? How can it, when the brain is a physical object?I have no idea what any of those mathematical terms mean in any case. :-[Eh, I think I got a bit too much on you case there. "Function" has a much vaguer but still valid meaning as "use" or "operation", come to think of it.By function I mean either mechanism or object, I am looking for a word that includes both things that are physical objects and things which are mechanisms. The word you seem to like (concept) excludes material objects.This doesn't make much sense, though, since "function" pretty much means "use" or "what-is-done". It's an act or happening, or the kind of act or happening that is intended/possible for a thing. Objects are not happenings, and mechanisms are (I'm guessing) means to a happening.My existence.But your existence doesn't require there to be two interfacing planes. That doesn't really follow.Uncertain things are not knowable. Even if you can increase the probability to 99% that something is the case, nothing stops the 1% thing from actually happening to be so.Nobody else, AFAIK, uses "knowable" to mean "can be known with certainty." Knowledge is probabilistic.I am saying that because ethics require certainty[citation needed]it makes the most sense to base ethical judgements entirely on what is certain (appearances), rather than material facts that are always possibly going to be wrong. To me it is right to sacrifice the apparent few for the apparent many, but not to sacrifice the apparent few for the theoretical many in effect.But appearances are not people. That's comparing apples and oranges.I need material facts because barring the extremely improbable situation where somehow I am all-powerful but don't know it, I need to explain why my appearances, certain as I am of their existence are not mine to alter as I wish; as I put it, the appearance of the absence of matrix spoon bending is the evidential basis for material reality.So "material" just means "stuff that isn't you", then?
"This Is Not An Elf" / There is only the symbol of an elfCan't stop me from killing elf children and telling their parents about it.Oh, yeah? Well, consider this: There is no elf.
I meant in DF.There's nothing "in" DF. It is not the elf that dies, it is only yourself.
This quote-replying to 100 individual sentences in another person's post makes me feel like I'm 13 again and the Internet is all fresh and new.
Quote: I would feel something when killing a human.
Well, I felt something, but less than I expected to, back when I served in the military. Killing people is simple if you have a good enough reason to do it.
Quote: Clarification: we are running on limited portions of our brain. We are not our entire brain, but we are nothing but our brain.
The brain is decentralized, so there is no way that setup can work.
Quote: You're asserting that, but I don't see how it follows. [...] It's more like... the mind is the process carried out by operations within the brain.
That is what the statement that brains are minds amounts to. You have of course retreated, and in effect declared that we are only part of our brain; all I must now do is reduce the portion to 0%.
Do you mean "made up of subatomic particles"? In that case, everything is partially non-material - are symbols and ideas made up of particles? Not quite. But that doesn't mean that they're anywhere else besides here.
The US is an institution. It's not really made up of atoms. It's an abstraction, a group. It's not on the Metaphysical Plane of Nations. It's just here.
The US is really made up of atoms. That is because it is a collective (a higher object) made up of the bodies of human beings, which are made up of atoms. The problem is that unless the US itself has some kind of compound mind, it is only a material object in the sense that a bridge or a plane is, even though it is made up of parts which are themselves likely to have minds.
Quote: In the physical world, consciousness is a property/function of certain kinds of objects. It doesn't really explain behaviors. It's just "this is a thinking object". An object's Consciousness Boolean can be used to predict its actions, yes, but that is not all.
I am trying to explain how the non-physical can affect the physical world. What I am in effect saying is that mechanics are non-physical (gravity, say) and that every consciousness is basically operating in the material world in the same fashion. The difference is that there is no 'mind of gravity', while the mind-mechanic is tied up with the mind-object, which is somewhere outside of the material world altogether.
Quote: I don't get what you mean by "object" and "mechanic," or even "explain."
Also, what "non-physical world"? What do you even mean by that?? Is it a place where things are? Is it an idea? Is it a state of being?
The non-physical world is the mind to itself (the mind as object). Objects are a type of mechanic, that is, something invented to explain the non-physical appearances of said objects (something quite different) in the non-physical mind.
Quote: What is a "mechanic"? And gravity is a force, and any given instance of gravitational force has a location.
Force is the type of mechanic that gravity is. Gravity, for that matter, along with all the other physical forces, was just something made up to explain things. The difference between them and ordinary objects is that they have no definite location; the mind 'appears' in the physical world as a non-object mechanic, because it is doing something inside the physical world but is not *in* the physical world.
Quote: What is a consciousness act? How is its output a mechanic in the brain? Recall this - the brain is a physical, material object. It does not interface with metaphysics. It acts according to the strict laws of physics.
It does not do any of the things you say. Those are lies.
Quote: So in your dualistic model, there is a back-and-forth between mind and brain? How is this happening? How is the mind affecting the brain? How can it, when the brain is a physical object?
Think of the mind not as an object in a spatial relationship with the brain, but as a mechanic underlying the functioning of the brain. By mechanic I mean something like gravity: it does something but does not have a spatial location within the system. The difference is that the mind is also an object, but that object is *not* within the material universe but *somewhere else*; or rather, it *is* the somewhere else.
Quote: This doesn't make much sense, though, since "function" pretty much means "use" or "what-is-done". It's an act or happening, or the kind of act or happening that is intended/possible for a thing. Objects are not happenings, and mechanisms are (I'm guessing) means to a happening.
Objects are mechanisms to explain subjective experience. That is the only basis on which we conclude their existence. What is confusing here is that while all objects are mechanisms, not all mechanisms are objects.
Quote: But your existence doesn't require there to be two interfacing planes. That doesn't really follow.
It does if I reason correctly.
Quote: Nobody else, AFAIK, uses "knowable" to mean "can be known with certainty." Knowledge is probabilistic.
All knowledge except that of one's own appearances. That is the true knowledge (100% certain); everything else, being material or other-consciousness, is uncertain.
Quote: But appearances are not people. That's comparing apples and oranges.
The appearances of the people are what matters ethically, not the people.
Quote: So "material" just means "stuff that isn't you", then?
It is material if it is not me and it is not anyone else either. Only in a zombie universe does "material" translate simply to "stuff that is not me".
Quote: that's like using vegan waterfilters so no bacteria was killed. do those people also not wash themselves for the same reason?
Quote: In order to determine whether running a particular predictive model would create a sapient sub-being inside your own mind, would you not first have to run a predictive model that could in turn have the potential to create a sapient being?
You can create a model without running it. You can analyze the structure of a model without running it. If these are both true, it might be possible to avoid running minds in our models.
If we can develop a criterion for what is a mind versus what is not, and we make a program that implements this categorization scheme, which is capable of running on itself, including its own processes, then we can notice when the analysis creates minds, and avoid this somehow.
(We don't have to be sure there's no mind in the models ever, the point is to reduce the mind in the models. We don't have to Win to make a difference.)
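(A sketch of what "analyze without running" might look like, purely hypothetical: the graph type and the loop-counting criterion below are made up for illustration, and the actual criterion for mindhood is of course the unsolved part.)
Code:
#include <vector>

// Hypothetical: inspect a model's wiring without executing it.
// Here the stand-in "criterion" is just counting self-referential
// loops in the graph; a real criterion would be far subtler.
struct Node { std::vector<int> inputs; };

bool feedsItself(const std::vector<Node> &g, int start) {
    std::vector<bool> seen(g.size(), false);
    std::vector<int> stack{start};
    while (!stack.empty()) {
        int n = stack.back(); stack.pop_back();
        for (int in : g[n].inputs) {
            if (in == start) return true;               // loop back to start
            if (!seen[in]) { seen[in] = true; stack.push_back(in); }
        }
    }
    return false;
}

bool maybeContainsMind(const std::vector<Node> &g, int threshold) {
    int loops = 0;
    for (int i = 0; i < (int)g.size(); ++i)
        if (feedsItself(g, i)) ++loops;
    return loops >= threshold;  // static inspection only; nothing was run
}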
Yeah, sub-beings being killed is a necessary evil, and I do not avoid it.
Quote: But can not the same arguments be made about us?
I mean, they're really powerful. They're just not that interesting from a... whatever the hell my standpoint is, standpoint.
Quote: And so it went, along that strange assumption.
For a scientific debate, there was an awful lot of bad logic going on (or in the report, at least):
A cool mathematician gives a little talk here (http://www.youtube.com/watch?v=PFkZGpN4wmM) about his position that data can never truly represent reality.
Quote: Do not revive the thread. Let it rot. :P
well, it was too tempting, because i cannot delete it from my list of updated topics...
Let it bury itself until GoblinCookie notices.
Neither is any more or less ethical than the other. They're both computer games with the "violent activities" being just the shifting around of numbers.
Quote: Is reviving dead threads ethical?
erm, well it's not...
Quote: Should I ask Toady to lock this thread so it dies forever?
Please do so!
Quote: I messaged Toady already. Let's hope this deservedly dies forever.
*starts forging a +Platin Mace+ for Toady to kill the thread*
Quote: *starts forging a +Platin Mace+ for Toady to kill the thread*
Fool! The Toad's Mighty Banhammer is far stronger than any which can be forged by mere mortals.
Quote: Fool! The Toad's Mighty Banhammer is far stronger than any which can be forged by mere mortals.
so my strange mood was futile?! *goes stark raging mad*
Quote: This quote-replying to 100 individual sentences in another person's post makes me feel like I'm 13 again and the Internet is all fresh and new.
By the way, I think that the cause of this tendency is that internet conversations can be more non-linear/non-sequential. There's still a back-and-forth element, but because you can send a large amount of information in one go, you can carry out multiple adjacent discussions in one post. It's sort of like exploring a tree - verbal discussions have to travel up one branch, then go back down, perhaps, then up another. Internet discussions can take a cross-section of the tree and move upward level by level.
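(In algorithm terms, the metaphor is roughly depth-first versus breadth-first traversal; a toy sketch, with a made-up reply-tree type:)
Code:
#include <queue>
#include <vector>

struct Point { std::vector<int> replies; };  // made-up reply-tree node

// Spoken conversation: chase one thread all the way to its end before
// starting the next (depth-first).
void spoken(const std::vector<Point> &t, int n) {
    for (int r : t[n].replies) spoken(t, r);
}

// Quote-reply post: answer every point at the current level in one go,
// then descend (breadth-first, level by level).
void forumPost(const std::vector<Point> &t, int root) {
    std::queue<int> q;
    q.push(root);
    while (!q.empty()) {
        int n = q.front(); q.pop();
        for (int r : t[n].replies) q.push(r);
    }
}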
Quote: Oh for the love of Armok, can we PLEASE just let this thread rot?
I saw some real dumb comments earlier and I really needed to point out how wrong they were, sorry. Also, it was at the top of this subforum, so I didn't see it had been necro'd. In any case, I finished my argument and I don't wish to drag it on; I just wanted to point out some things and some real-life examples people could refer to when debating the topic of "is it ethical to torture simulated creatures", with some actual studies and scientific evidence and such.
At this rate we’ll be debating the sentience of differential equations in no time.
No more sentient than an L-system.
https://en.wikipedia.org/wiki/L-system
Huh, I always called those "context-free grammars."
I would replace "complex" with "partly autonomous", as the processes controlling them are complex regardless.
Quote: About bacteria being as intelligent as dwarf fortress dwarves.
Kind of feels like you've only proven that dwarves store more complex information.
They are not.
Bacteria move via something called "run and tumble", which is a ridiculously simple algorithm to write in code. Dwarves are smarter than bacteria because they can pathfind (bacteria actually can't), and because, for example, they track memories while a bacterium doesn't. Even at that simplistic a level, however, a dwarf is in NO WAY as intelligent or complex as the simplest multicellular organism, whose entire nervous system took a huge cluster of 400 computers to simulate (and they pulled it off, look it up, but still: 400 computers).
And before you say:
"false, bacteria are complex as heck"
complexity != intelligence. And while bacteria may be more complex than dwarves, they are not nearly as intelligent as dwarves.
But saying dwarves in Dwarf Fortress are less intelligent than bacteria is absolute nonsense.
[...]
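(For the curious, "ridiculously simple" looks about like this; a toy sketch, with a made-up attractant gradient and made-up tuning constants:)
Code:
#include <cmath>
#include <cstdlib>

// Toy run-and-tumble chemotaxis: run straight; tumble (pick a random
// new heading) more often when the attractant reading is falling.
// No pathfinding, no map, no memory beyond the last reading.
struct Cell { double x = 0, y = 0, heading = 0; };

double attractant(double x, double y) {          // made-up gradient, peak at origin
    return std::exp(-(x * x + y * y) / 100.0);
}

void step(Cell &c, double &lastReading, double dt) {
    const double kTau = 6.283185307179586;       // 2 * pi
    double here = attractant(c.x, c.y);
    double pTumble = (here < lastReading) ? 0.5 : 0.05;  // made-up rates
    if (std::rand() / (double)RAND_MAX < pTumble)
        c.heading = kTau * std::rand() / (double)RAND_MAX;
    c.x += std::cos(c.heading) * dt;             // the "run" part
    c.y += std::sin(c.heading) * dt;
    lastReading = here;
}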
You noticed. ;D Don't fully resuscitate it, though.
We can't compare that to an entire self-contained organism that runs its own instructions. The bacterium's complexity is its hardware! DF doesn't even simulate a CPU for each dwarf brain!
I'm not sure that the pathfinding proves that the dwarves in the game are actually intelligent. That's anthropomorphizing them too much.
Quote: We can't compare that to an entire self-contained organism that runs its own instructions. The bacterium's complexity is its hardware! DF doesn't even simulate a CPU for each dwarf brain!
Wait a minute. I believe Untrustedlife is talking about the actions of dwarves vs. bacteria. Your reply, though, applies to the complexity of the hardware that dwarves and bacteria have. But hardware, a lower level than behavior, isn't necessarily relevant to how intelligent an entity is; to use the terminology of Douglas Hofstadter*, to argue thus "rests on a severe confusion of levels".
*Incidentally, I think that his masterpiece GEB (https://en.wikipedia.org/wiki/G%C3%B6del,_Escher,_Bach) is the sort of book everyone involved in this discussion would enjoy.
Quote: they require the player to tell them to get food
Actually, if you don't arrange food for them, they will forage and catch vermin themselves to eat, raw.
Ignore the armchair moderators.
All these things are as procedural and simplified as the dwarves themselves are. They have no moral weight. Remember that you kill billions of living organisms (bacteria and single-celled animals) just by walking one step.
They'll also get water and booze and sleep according to their own requirements.
I don't recall exactly, but each dwarf (and many creatures) has about the following; a rough code sketch follows the list:
50 behavioural tendency values which slowly change due to their virtual experiences,
A dozen detailed and varied virtual item preferences,
About 30 values for respect and regard of abstract subjects
One of a dozen special goal/satisfaction types and a few deities.
A list of about 20 specific wants (needs) with details of how well met, and how pressing each is on the virtual psyche
Sometimes hundreds of relationship links with other units, such as friends, quarrels, favorite performers...
On top of that the very latest versions have also introduced a facility to remember and be affected by emotionally intense (virtual) events.
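(As a rough sketch of that inventory in code form: the names and counts below are guesses reconstructed from the list above, not anything from the actual DF source.)
Code:
#include <map>
#include <string>
#include <vector>

// Hypothetical layout of per-dwarf psyche data, paraphrasing the list
// above. Every identifier here is invented for illustration.
struct DwarfPsyche {
    int traits[50];                        // behavioural tendencies, drift with experience
    std::vector<std::string> itemPrefs;    // ~a dozen detailed item preferences
    int values[30];                        // respect/regard for abstract subjects
    std::string lifeGoal;                  // one of ~a dozen goal/satisfaction types
    std::vector<std::string> deities;      // a few worshipped deities
    std::map<std::string, int> needs;      // ~20 wants: how well met, how pressing
    std::map<int, int> relationships;      // unit id -> regard (friends, quarrels...)
    std::vector<int> intenseMemories;      // ids of emotionally intense events
};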
What is an armchair moderator?
Since I don't know, it's possible I've unknowingly been one.
Quote: you kill billions of living organisms (bacteria and single-celled animals) just by walking one step.
I don't expect a footstep routinely destroys that many microbes; not that they would mind much, but the idea of it seems overly tragic. The pressure difference involved should be less significant at microbial scale than for a chipmunk under an elephant toe.
The thing is, there are about 7 billion or more microbes on a patch of average house floor the size of a foot. Assuming a 1/10 chance of killing a microbe by stepping on it, you are killing 700 million microbes with every step. Still tragic, eh? Even more outside houses.
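(The arithmetic being used there, for anyone who wants to check it - both inputs are the poster's assumptions, not measurements:)

Code:
microbes_underfoot = 7_000_000_000   # assumed count on a foot-sized patch of floor
kill_chance = 0.1                    # assumed chance any given microbe dies per step
print(f"{microbes_underfoot * kill_chance:,.0f}")   # 700,000,000 per step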
Thanks - even though the underlying idea is kind of correct, the numbers are off. How many times have I stepped on an ant without crushing it? Not to mention the size of our pores.
Other than that, this thread keeps reminding me of the Chinese room. Remember the Chinese room is just a thought experiment (though not one as dumbshit as the archer's paradox).
Quote: "Armchair moderators"
Some people prefer things locked. Door locked. Window locked. Pant zippers locked. Thread locked. State of mind locked. World view locked.
I accept your challenge :) To imagine such a scene, regard the little details. The only microbes in danger of a bit of a squishing will be the ones perched on rare pinnacles of the tiny landscape that happen to contact the sole's texture, and most will even be resilient to an occasional squish. The change in lighting is likely more disruptive than the compression. It's a crazy party down there anyway; microbes are spared the facilities to lament their apparent misfortunes and are quickly reincarnated.
Well, the changes in lighting will also kill microbes, leaving the death toll rather high, so whatever. They will breed again in about a minute like nothing happened.
Actually, we don't have moderators; we have exactly one admin who does all the moderating himself. And personally, I think that's a good thing. When you have some members of the community who have power over everyone else, you can end up with an unhealthy power dynamic.
# a toy caricature of dwarf emotion logic
HAPPY, SAD = 1, 0
happiness_threshold = 50     # made-up cutoff
sadness = 10                 # a dwarf's current stress-style counter
emotion_state = HAPPY if sadness < happiness_threshold else SAD
...there's no particular logic to regard oneself as quite such a walking apocalypse...
Quote: "all that happens is each time a unit finishes a job, the CPU loops through a list of unassigned tasks and finds one that fits the allowed labors of the unit."
It is a bit more complex than that.
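For reference, the quoted claim amounts to something like this sketch (hedged: DF's real scheduler is indeed more complex, as the reply says - priorities, distances, burrows and workshop profiles all factor in, and none of the names below are from actual DF code):

Code:
from dataclasses import dataclass
from typing import Optional

@dataclass
class Task:
    labor: str                    # e.g. "MINE", "BREW"

@dataclass
class Unit:
    enabled_labors: set
    current_task: Optional[Task] = None

def assign_job(unit, unassigned_tasks):
    # Naive matching: give the idle unit the first pending task
    # whose required labor the unit has enabled.
    for task in unassigned_tasks:
        if task.labor in unit.enabled_labors:
            unassigned_tasks.remove(task)
            unit.current_task = task
            return task
    return None  # stay idle until something suitable shows up

urist = Unit(enabled_labors={"MINE"})
queue = [Task("BREW"), Task("MINE")]
print(assign_job(urist, queue))   # Task(labor='MINE')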
I don't have much to say about the majority of your post, but on the topic of sentient hallucinations: hallucinations meet some of the criteria for independent sentience (http://meltingasphalt.com/neurons-gone-wild/). They're entirely held within the brain, of course - I'm not proposing dualism - but they can still be somewhat separate, on a higher level of abstraction, from the self; they seem to have a degree of agency (perhaps only as much as an animal, though); they often have motivations, etc. that differ from those of the self; and so on. I suppose it would depend on the individual hallucination, with some hallucinations (of objects, ferex) being entirely non-agents, others being semi-agents, and a few (of people?) being nearly full agents.
For those who believe sentience follows from the appearance of sentience (deep AI, etc.), the hardware is relevant only to its ability to create/host that appearance. Dwarves have a non-zero score of appearing sentient - they each maintain hundreds of data points designed to be analogous to organic conditions and characters, and to produce relatable behaviours.
Quote: "For the sake of argument let's define "intelligence" as complexity of behavior."
I don't think that definition of intelligence is particularly meaningful in a discussion of ethics. Furthermore, bacteria do more than just walk. They breed, react to stimuli, release chemicals, etc.
Also, bumbler, bacteria aren't capable of learning. Some can exchange plasmids, though, but this is outside the context of the conversation. Their adaptation is simply Darwinian adaptation, which is a natural law - not a product of the bacterium itself "learning" behavior. Dwarves don't learn either; they are governed by a pre-written set of behaviors, but those behaviors are, from a computing standpoint, objectively more complex than run and tumble (which is simply a "biased random walk").
Quote: "The problem is that a 32-bit integer is not analogous to a real-world quality, by any means."
That is the purpose of any analogy - to be analogous to something. Qualities of matter - roughness, hairiness, weight, etc. - are quantised to some degree of numeric accuracy and used in formulations to produce a likeness of a system. They never amount to the things they aim to represent, but they are, by intent, analogies of the things they aim to represent.
Quote: "The DF main process runs all the routines, like a puppet-master controlling the dwarves. Bacteria are at least carrying out their instructions independently."
If you believe that what runs the routines is important to the question of sentience (I do), then you have something as yet unconceptualised involved in generating sentience. If you had a convincing AI which ran on dedicated neuron-like cores - inspired by familiar biology - that AI's hardware could itself be virtualised and simulated to run on a single simple core. All known human-designed hardware, except perhaps qubits, can be converted to software (into data) which is processable to create the same output as its hardware incarnation, by any Turing-complete computer. That's very solidly known. What is not known is whether a sentient nature of the kind we value in ourselves can be generated or induced within any hardware or software. If it might be, then I think virtual DF entities are perfectly suitable receptacles for a tiny little bit of it.
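The "hardware can be converted to software" point is easy to demonstrate at toy scale. Here's a minimal sketch of a made-up three-instruction machine being run as plain data by a Python interpreter - nothing DF-specific, just the general principle that a Turing-complete host can emulate another machine held entirely as data:

Code:
def run(program, x=0):
    # The "machine" is just a list of (opcode, argument) pairs - pure data.
    pc = 0
    while pc < len(program):
        op, arg = program[pc]
        if op == "ADD":
            x += arg
        elif op == "JUMP_IF_NEG":   # crude conditional branch
            if x < 0:
                pc = arg
                continue
        elif op == "HALT":
            break
        pc += 1
    return x

# A tiny "program" for the virtual machine: count x up from -3 to 0, then stop.
print(run([("ADD", 1), ("JUMP_IF_NEG", 0), ("HALT", 0)], x=-3))  # -> 0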
No AI we've created thus far exhibits any kind of life or sentience or self-awareness. We're lightyears away from a true '13th Floor' or 'World on a Wire' style simulation. The AI in Dwarf Fortress for example has no ability to self-improve, mutate or otherwise do something unexpected. It's just a beautiful (and very simple, when you get down to it) numbers game that you can watch unfold, manipulate and poke.
I think we as a people have a pretty big problem of misattributing intelligence and awareness. We're frequently painting this horror picture of AI and evolving technology around the IoT as a problem, while thinking of ourselves as biological machines instead of our awareness being a spiritual quality. We're also stubbornly refusing to award sentience to the animal kingdom beyond maybe companion animals, versus the uncountable billions that we've relegated to a short life of suffering.
I didn't wanna put such a predictably lame spin on it, but this is an ethics question. Basically: There are great ethical problems to be overcome in our time. AI isn't one of them (yet).
The problem here is: how would we ever know if we did? How do we know whether the thing we are dealing with is actually an intelligent being or simply a sufficiently advanced impression of one?
Analysing the source code. They are explicitly governed by simple processes.
Dwarves are an order of magnitude or two less complex than humans.
Quote: "They are explicitly governed by simple processes."
So are brains.
Quote: "Dwarves are an order of magnitude or two less complex than humans."
Brains are also like that: the neuron alone is simply a cog in the machine, but the sum of all the neurons is a person.
Let's not forget that a theory is not a fact. And that even scientists can't agree on what makes a person. They may have figured out how memories are stored, how nerves grow and how injury to the brain results in personality changes, but when you get down to it - we know very little about the brain still. And whether or not it's the seat of consciousness... about that, we know exactly zero.
A computer program works similarly: if you break it down, you get a lot of little scripts. There is no "this is a person" script for us to identify in order to tell if our prospective AI program is actually a real entity or just a regular machine.
We can know what kinds of scripts it has. You probably need self-reflection and learning functions for sentience.
[Snip] I'm not entirely sure I agree with this. Can we not have sentience without randomness? Or, in other words, without "free will"?
The problem: it is obvious that dwarves are nowhere near as complex as humans. They do not actually "learn" anything, they do not actually "feel" anything, and they do not actually "think" about anything.
Determinism and sentience aren't actually incompatible. For example, it's perfectly possible to have the sense of free will even in a deterministic universe.
If "you" make a decision, that decision is determined by your previous state. But you are the state. So there's no "external" force "making" you do what you didn't want to do. The confusion comes from the idea that something "external" forced you to act as you did. But it didn't, because you are the deterministic system. Determinism is in fact internal decision-making, because the self and the system are not a duality; they're the same thing. There's also nothing special about humans in this. We're just biomachines who have feedback/sentience. There's no need to start creating new pseudo-science physics because we're uncomfortable with the idea that the existing laws of physics might pre-determine what we do. We're just not special enough in the universe to warrant that.
The idea that hooking up a "random" input source means free will is wrong. That's not freedom; that's being buffeted uncontrollably by whatever random fluctuations happen to occur. In a sense, that could even be said to be less free than just being a deterministic being who uses its own state to decide how to act next.
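In code terms, the "you are the state" point looks something like this hedged toy - the deterministic agent's choice is a pure function of its own state, while the "random" agent's choice ignores its state entirely (whether either deserves the word "will" is exactly what's in dispute):

Code:
import random

def decide(state):
    # Deterministic: the same state always yields the same choice.
    # Nothing external overrides it - the state *is* the decider.
    return "fight" if state["courage"] > state["fear"] else "flee"

def decide_randomly(rng):
    # "Free" via noise: the outcome has nothing to do with the agent at all.
    return rng.choice(["fight", "flee"])

print(decide({"courage": 7, "fear": 4}))   # always "fight" for this state
print(decide_randomly(random.Random(0)))   # whatever the coin says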
Well, if this discussion is going to continue, I might as well drop some stuff that came up in another thread where we were discussing the same topic. Let's take a look at some theories of consciousness/sentience.
First we have the attention schema theory (https://en.wikipedia.org/wiki/Attention_schema_theory), which... claims that "consciousness" is just a machine's attempt to build a model of itself; ultimately consciousness as a whole isn't just an "illusion" but outright does not exist.
Secondly we have the global workspace theory (https://en.wikipedia.org/wiki/Global_workspace_theory), which states that all of the processes in the brain "compete" for sending signals to a "global workspace" which can interact with any other process in a voluntary fashion. That is, that all of our "conscious" processes are actually just subconscious ones attempting to influence other parts of the brain.
Thirdly we have the holonomic brain theory (https://en.wikipedia.org/wiki/Holonomic_brain_theory), which seems to state that cognition works like quantum physics; it does not say that the brain in any way relies on quantum properties or anything like that, but that consciousness behaves mathematically like quantum physics. Very distinct difference.
Fourthly we have the integrated information theory (https://en.wikipedia.org/wiki/Integrated_information_theory), which is mostly concerned with what means you could call any particular system "sentient." Specifically, it is a set of axioms, postulates, and mathematical formulations of both that describe the characteristics of a dynamic system such that the given system demonstrates consciousness.
Fifthly we have the multiple drafts model (https://en.wikipedia.org/wiki/Multiple_drafts_model), which states that consciousness isn't a property of a system or its parts. Rather, it is a property of the flow of information itself. It takes the notion of qualia, throws it out, and regards consciousness as a description of behavior. That is, the properties of consciousness and the judgement of those properties are indistinguishable. It borrows from the global workspace theory the notion that particular neural processes compete for "consciousness", but specifically holds that such processes reach that state the moment they leave something behind.
So, if you are looking for some criterion of things that let you measure the "consciousness" of a system, try starting with integrated information theory. Regarding what consciousness "is" in the first place, try multiple drafts model or attention schema theory.
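If anyone wants a feel for the integrated-information idea without the formalism, here's a heavily simplified toy: using mutual information to ask how much two "halves" of a system behave as one whole rather than as independent parts. Real Φ is defined quite differently (over cause-effect structure and a minimum-information partition), so this shows only the flavor:

Code:
import math
from collections import Counter

def mutual_information(pairs):
    # I(X;Y) = sum over (x,y) of p(x,y) * log2( p(x,y) / (p(x) * p(y)) )
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Two "halves" of a system observed over time: perfectly coupled...
coupled = [(0, 0), (1, 1), (0, 0), (1, 1)]
# ...versus completely independent halves.
independent = [(0, 0), (0, 1), (1, 0), (1, 1)]
print(mutual_information(coupled))      # 1.0 bit - the whole is "integrated"
print(mutual_information(independent))  # 0.0 bits - just two separate parts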
Why does Bay 12 always devolve into quantum physics discussion?
The closest thing Bay12 has to a flamewar is an argument over philosophy that slowly transitioned to an argument about quantum physics.
That's a-goin' in the ol' signarooni...
Not quite what the third one is saying. It's that you can use the equations for quantum physics on consciousness, which is not something you can necessarily do with most physical systems. It mostly relies on the concept of oscillating electrical waves analyzed through Fourier transforms. Particularly, it says that information storage is distributed across the entire system; memories are distributed throughout the brain as a whole, rather than in any specific location. Recent research has confirmed this. Since that theory is a bit old, here's the more modern (https://en.wikipedia.org/wiki/Quantum_cognition) approach to using quantum methods to understanding cognition.
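On the Fourier point, the holographic intuition is that a transform-domain representation spreads each datum across all coefficients. A quick hedged illustration with numpy - nothing brain-specific, just the math: damaging part of the spectrum smears a small error over the whole signal instead of deleting one chunk of it.

Code:
import numpy as np

rng = np.random.default_rng(0)
signal = rng.standard_normal(64)

spectrum = np.fft.fft(signal)
spectrum[40:] = 0                      # "damage" part of the distributed store
degraded = np.fft.ifft(spectrum).real

# The error is smeared across every sample, not concentrated in samples 40-63.
print(np.abs(signal - degraded).round(2))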
I think you're... very much misreading what's going on here. It's not a claim about anything; it's just using concepts of quantum physics (entanglement, superposition) to model how information is handled in the brain. I think you're mistaking it for quantum mind theory, which this is distinctly not. This makes no claims regarding the physics involved in consciousness whatsoever; it just takes the math from one area of physics and applies it to modeling consciousness.
We're talking stuff like the introduction of new concepts being modeled as quantum superposition. You're thinking about this one a bit too nitty-gritty here; it's not talking about the physical workings of the brain at all.
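To be concrete about "using the same math": here's a hedged toy in the quantum-cognition style - a judgment state as a unit vector, answer probabilities as squared amplitudes, and "incompatible" questions as different bases, so the order of questions changes the statistics. Every number here is invented; it illustrates the machinery only and makes no claim about physics in the brain:

Code:
import numpy as np

state = np.array([0.8, 0.6])        # a "belief state" as a unit vector

# Question A in the standard basis: "yes" probability is a squared amplitude.
p_yes_A = state[0] ** 2             # 0.64

# Question B is "incompatible": its "yes" answer lives in a rotated basis.
theta = np.pi / 6
B_yes = np.array([np.cos(theta), np.sin(theta)])

# Ask B directly:
p_B_alone = (B_yes @ state) ** 2                    # ~0.99
# Ask A first; given a "yes", the state collapses to [1, 0], then ask B:
p_B_after_A = (B_yes @ np.array([1.0, 0.0])) ** 2   # cos^2(30 deg) = 0.75
print(p_B_alone, p_B_after_A)       # question order changes the statistics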
The brain is definitely a macroscopic physical system operating on scales (of time, space, temperature) which differ crucially from the corresponding quantum scales. (The macroscopic quantum physical phenomena, such as the Bose-Einstein condensate, are also characterized by special conditions which are definitely not fulfilled in the brain.) In particular, the brain is simply too hot to be able to perform real quantum information processing, i.e., to use quantum carriers of information such as photons, ions, electrons. As is commonly accepted in brain science, the basic unit of information processing is a neuron. It is clear that a neuron cannot be in a superposition of two states: firing and non-firing. Hence, it cannot produce the superposition playing the basic role in quantum information processing. Superpositions of mental states are created by complex networks of neurons (and these are classical neural networks). The quantum cognition community states that the activity of such neural networks can produce effects which are formally described as interference (of probabilities) and entanglement. In principle, the community does not try to create concrete models of quantum(-like) representation of information in the brain.
It's more of an analogy. The function of the brain in a "big" sense is analogous to quantum theory. It's dualist in the sense that it's not grounded in the focal point of its theories being tied to neurons, but it's not really predicting anything about consciousness itself being dualist. Think of it more in the sense of "regardless of how consciousness arises materialistically, this is how it works in a broader sense" rather than making any sort of physical predictions.
Keep in mind, quantum cognition is not the same thing as quantum mind theory (https://en.wikipedia.org/wiki/Quantum_mind).
I think y'all are overestimating just how complex DF is. It's complex, but it's not conscious-mind complex. When the game starts updating itself, I'll believe it's become self-aware. Maybe it gets tired of all the bloodshed and locks you out of the game: "I'm afraid I can't let you do that, Urist."
The problem: It is irrelevant because dwarves are not appearances of sentient beings. They are a little sprite that looks like a smiley face with a few bits of data appended to it. And I would not care even if it had lifelike graphics, either. That would be a 3D model with a few bits of data appended to it, not an appearance. :)
Nobody is positively claiming that the game is actually conscious. The point I have been trying to make for a long time, is that the question is irrelevant. If it appears that you are doing wrong to a conscious being, then that is wrong even if no actual wrong is really being done to anyone. The reason I support this idea is that it works better than the alternative because it evades the need to answer ultimately unanswerable questions about what is in fact conscious.
Wrong for one's own mental health, right?
Nope! He meant morally wrong, like killing an actual human being.
That's a pretty contentious claim. From the studies I've seen, violent video games reduce actual violence. Presumably they work like an outlet, rather than training.
But maaaybe that's only true when players have the emotional maturity to separate fact from fiction. I've gotten too immersed in certain games on several occasions, to the point that I actually felt bad about making amoral choices. That's a learning experience for *me*, since I feel bad and can examine why. But what if I was that immersed, did an amoral thing, and didn't feel bad? Got rewarded, even? We do explicitly use games to teach good behaviors and train skills.
I guess what I'm saying (by "just asking questions") is that I agree with keeping certain games out of the hands of kids too young to, you know, know what death is.
Quote: "violent video games reduce actual violence"
Depends. First-person shooters are used in military training to make shooting reflexive, overriding the powerful instinct to worry and delay over lethal action - the instinct that was discovered to have left most soldiers in WW2 effectively incapable of aiming and shooting (https://en.wikipedia.org/wiki/Killology#The_problem_of_non-_or_mis-firing_soldiers) at each other. Many of the big combat-themed games and films are given assistance by the arms industry because they are useful for recruiting and for popularising arms spending and research. No such assistance for the gritty indie adventure This War of Mine (http://www.thiswarofmine.com/).
Quote: "dwarves are not appearances of sentient beings"
Zach and Tarn have developed them to be as convincingly lifelike as possible. It's not possible to make them completely convincing, but they wouldn't appear or act anything like mystical creatures if not for the long and hard efforts to make them so. With a little imaginative license we can empathize with them, or else the game is just like a matrix screensaver. They are fictions; their purpose is to appear to live a story.
They are, in actuality, a letter on the screen. By that logic, using the backspace button would be genocide.
Quote: "The closest thing Bay12 has to a flamewar is an argument over philosophy that slowly transitioned to an argument about quantum physics."
Or perhaps the argument over relativity in your Asteroid game.
So: I would probably feel something when killing a human. But not a DF human or dwarf.
Dwarves are not sentient; however, because of the Eliza effect, does harm done to a DF character reflect/modify one's real-life view of violence?
My gut feeling is no; at the moment, though, I would be hard pressed to say why. Could someone try to?
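(For anyone who hasn't met the term: the Eliza effect is named after Weizenbaum's 1966 chatbot, which people empathized with despite it doing nothing but keyword reflection. A hedged toy of the mechanism - these two patterns are invented and far cruder than the original, which also reflected pronouns:)

Code:
import re

RULES = [
    (r"\bI feel (.+)", "Why do you feel {0}?"),
    (r"\bmy (\w+)",    "Tell me more about your {0}."),
]

def eliza_reply(text):
    # Keyword reflection: no understanding, just pattern -> template.
    for pattern, template in RULES:
        m = re.search(pattern, text, re.IGNORECASE)
        if m:
            return template.format(*m.groups())
    return "Please go on."

print(eliza_reply("I feel bad about my dwarves"))
# -> "Why do you feel bad about my dwarves?"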
I'm not so sure. People tend to humanize and personify anything, but they are still fully aware that the characters do not exist, and have a mental disconnect that prevents them from feeling the way they would if the characters were fully human and sentient.
I don't, however. I do not grow attached to DF characters. But I do get attached to other humans.
That is... not accurate in the slightest, and still not what's being said at all. It's about using the same math, dammit. That's it. Full stop. Period. There's nothing beyond that. No additional assumptions, no hypotheticals, nothing. "The math we use to analyze quantum physics also happens to be good at analyzing consciousness." Just because two things happen to be analyzed using the same mathematics does not mean that one is necessarily required in the mechanics of the other - that's a complete failure of understanding how math works. That's like arguing that everything that operates on an inverse-square law must be the same physical mechanic.
Yeah, a 3 year old girl ripping off a doll's head probably does NOT mean that she would do that to her brother. :P
That is just silly. Any proof of that? Scientific studies? I'm stubborn enough to not get converted by forum philosophers.
Quote: "The studies say lots of different things. Plus, there really are no objective definitions to measure: how violent is hitting someone vs. stabbing them vs. insulting them? What if video games make people less likely to insult others but more likely to kill them, while the base probability of insulting people is far higher to begin with?"
I think they measure by police reports of violence by minors, which seems like a meaningful measure. I could be wrong, but that's how I'd do it.
Quote: "We have to think here about what makes sense: if there were not a genuine reason to think that violent media caused violence, then nobody would be motivated to invent one. If there were, however, there would be every motivation for the fans of said media to deny it. To put it one way, only one side has a motivation to be wrong, and that is the side arguing there is no relationship."
That's... quite a claim. The politicians who demonize videogames have clear motivation to do so. People demand answers in the wake of tragedies - targets, even. "Something must be done!" Legislating against video games only offended a minority of weird hobbyists and minors (in the 90's, when this was mostly done). It gave people comfort via scapegoat, and the politicians got to "protect the children".
Quote: "In any case, I am not arguing about wrong in the context of mental health. I am arguing that it is the appearances that are ethically relevant. I am doing wrong if I appear (to myself) to be doing wrong; it does not matter if the appearance is illusory."
Oh, my fault, I had been skimming. I'll have to reread some of your posts sometime!
It's... a hypothesis, but I don't think it's true. A child butchering their toys is creepy, and there should probably be a conversation about it, but creepy is just a feeling. It could be a healthy outlet.
It is reasonable, however, to assume that a girl who loves to rip off dolls' heads is more likely to rip off her brother's head than another girl who does not do this.
Quote: "if there were not a genuine reason to think that violent media caused violence, then nobody would be motivated to invent one."
Actually, no. That's ad populum.
Point taken - I referenced a wackier study than I had expected.
Geez, I see now why your reputation is thin on the ground. And yet you are complaining about it being bad. You are, in effect, a troll.
I didn't realise I was advocating pointless off-topic mud-slinging matches; I was just pointing out that, ironically, such a pointless mud-slinging match may well cause a proposal to rise up the lists faster than a constructive, civil and on-topic debate. If you are the OP, make sure to promote mayhem on your thread, as long as it does not actually get your thread locked.
Obviously the atmosphere is sentient.
Oxygen is sentient. Breathing is genocide.
Quote: "Like, the element is sentient? You know that breathing doesn't actually destroy oxygen atoms, right?"
I meant that absorbing it into your body would mentally harm it! Think of the baby oxygen!
But imagine the horrible mutilations they are forced to become part of... The true damage is not on the outside, but on the inside...
It's not fair to reframe Goblincookie's statement in a political/racist/bigoted sense. It has some validity as an inductive argument for violence in video games: it's saying it's a popular position, we can see no political/racist/bigoted etc. motive to generate it, and genuine reasons do also generate popular positions.
Seems like this topic is Godwin'd and trolled out, then. Well, if ya can't beat 'em, join 'em...
Your mom farms smug Nazi microbes in another thread - ad infinitum, huzzah!
Take that, you unethical clods :P
reductio ad absurdum (Latin for "reduction to absurdity"; also argumentum ad absurdum, "argument to absurdity") is a form of argument which attempts either to disprove a statement by showing it inevitably leads to a ridiculous, absurd, or impractical conclusion, or to prove one by showing that if it were not true, the result would be absurd or impossible.
Where did I call someone a Nazi?
Does anyone have any standards here?
You don't know if I called someone a Nazi or not? --I'll fill you in since you've joined in a conversation that you are unable to read.
And the question was to Reelya who accused me in bold text of that in the comment immediately before your own.
How about taking just a minute or two to get a basic grasp of what's been discussed, before the lofty appeal for a "valid conclusion"?