Bay 12 Games Forum


Author Topic: Is playing dwarf fortress ethical?  (Read 48764 times)

KittyTac

  • Bay Watcher
  • Impending Catsplosion. [PREFSTRING:aloofness]
    • View Profile
Re: Is playing dwarf fortress ethical?
« Reply #180 on: January 23, 2018, 07:51:28 am »

Appearances, eh? I feel nothing when I kill a DF character. I would feel something when killing a human. They're different, simulated characters are inferior. And you can't convince me otherwise no matter how many walls of text you throw at me.
Logged
Don't trust this toaster that much, it could be a villain in disguise.
Mostly phone-posting, sorry for any typos or autocorrect hijinks.

dragdeler

  • Bay Watcher
    • View Profile
Re: Is playing dwarf fortress ethical?
« Reply #181 on: January 23, 2018, 02:19:19 pm »

I'm getting amused at how genocidal you're starting to sound. And it kinda makes me consider going full Hitler on this world; I'm already missing 60k gobbos, merely 8 years later, despite my best efforts to preserve populations. Also, I might just have a master-race breeding pair (no spoilers  8) STOP DISTRACTING ME)
Logged
let

KittyTac

  • Bay Watcher
  • Impending Catsplosion. [PREFSTRING:aloofness]
    • View Profile
Re: Is playing dwarf fortress ethical?
« Reply #182 on: January 23, 2018, 09:06:09 pm »

It's harmless to our society.
Logged
Don't trust this toaster that much, it could be a villain in disguise.
Mostly phone-posting, sorry for any typos or autocorrect hijinks.

Dozebôm Lolumzalìs

  • Bay Watcher
  • what even is truth
    • View Profile
    • test
Re: Is playing dwarf fortress ethical?
« Reply #183 on: January 23, 2018, 11:55:03 pm »

Appearances, eh? I feel nothing when I kill a DF character. I would feel something when killing a human. They're different, simulated characters are inferior. And you can't convince me otherwise no matter how many walls of text you throw at me.
Do you mean that all simulated beings are necessarily less morally important, or just that you haven't seen any morally important simulated being so far?

It's harmless to our society.
But the key question is whether it is harmless to all morally important beings.
Logged
Quote from: King James Programming
...Simplification leaves us with the black extra-cosmic gulfs it throws open before our frenzied eyes...
Quote from: Salvané Descocrates
The only difference between me and a fool is that I know that I know only that I think, therefore I am.
Sigtext!

KittyTac

  • Bay Watcher
  • Impending Catsplosion. [PREFSTRING:aloofness]
    • View Profile
Re: Is playing dwarf fortress ethical?
« Reply #184 on: January 24, 2018, 12:11:48 am »

Appearances, eh? I feel nothing when I kill a DF character. I would feel something when killing a human. They're different, simulated characters are inferior. And you can't convince me otherwise no matter how many walls of text you throw at me.
Do you mean that all simulated beings are necessarily less morally important, or just that you haven't seen any morally important simulated being so far?

It's harmless to our society.
But the key question is whether it is harmless to all morally important beings.

They all are less important. They're irrelevant, even.
Logged
Don't trust this toaster that much, it could be a villain in disguise.
Mostly phone-posting, sorry for any typos or autocorrect hijinks.

Dozebôm Lolumzalìs

  • Bay Watcher
  • what even is truth
    • View Profile
    • test
Re: Is playing dwarf fortress ethical?
« Reply #185 on: January 24, 2018, 12:31:20 am »

Again with the value judgement based on an unspecified set of parameters that you're assuming are universal! Game Theory is hardly evil; it's a system by which you can concretely compare apples to oranges by converting things to a universal measurement.

For example, in the Star Trek movie where Spock argues that the good of the many outweighs the good of the one, Kirk's counterargument is that, since the good of strangers has less weight to him, (Good * Many) isn't always equal to or greater than (Good * Me).

I understand that someone in this thread is concerned that they might be doing harm to a potentially sentient creature, but economic modelling measures the issue fairly concretely. Is the potential harm you're doing to the potentially sentient creatures greater than the amount of harm you're doing to yourself by worrying about it?

(% chance you're doing harm) * (amount of harm you're doing) * (% chance that the subject can sense your actions)

vs

(time you spend worrying about this) * (value of what you could be doing instead).

How is that evil?
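The cost-benefit comparison quoted above can be sketched as a quick calculation. All the numbers below are made-up placeholders for illustration, not actual estimates about DF or anything else:

```python
# Hypothetical expected-value comparison from the quoted post.
# Every number here is an illustrative placeholder.

p_harm = 1e-9          # % chance you're doing harm at all
harm_amount = 100.0    # amount of harm, in arbitrary "harm units"
p_can_sense = 1e-6     # % chance the subject can sense your actions

expected_harm = p_harm * harm_amount * p_can_sense

hours_worrying = 10.0  # time you spend worrying about this
value_per_hour = 1.0   # value of what you could be doing instead

cost_of_worry = hours_worrying * value_per_hour

# With these (made-up) inputs, worrying costs far more than the
# expected harm of just playing.
print(expected_harm < cost_of_worry)  # True with these inputs
```

The comparison only ever tells you which side is larger under the numbers you feed it; the argument in the thread is about whether those numbers can be chosen non-arbitrarily.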

The issue here is that what is valued happens to be ethically significant.  There is a difference between making calculations based on values that two entities share and whether those values are ethical to start with.  Torturing people is not a valid value ethically, however much torturers may value it.
It is not like there is any objective morality to which we can compare people's values. To a person, certain things are ethical and certain other things are not.

In fact, it seems to me that Romeo was not even describing Ethics (in the philosophy sense) so much as personal decision-making, in which you consider the utility and disutility of each course of action, including thinking about courses of action. They were describing how people work, not judging it. You consider the costs of X and Y. If A is true, then choosing X will make people hurt, and that thought hurts you. But A is almost certainly not true, so you choose X anyway, because it has much more likely benefits. Calling all this "evil" is missing the point, and confusing the view from inside and outside of a model.

While it's possible that AIs could become sentient one day, DF entities are not.

And in fact, we're anthropomorphizing them: there are far more complex simulations that we never wonder about being sentient, e.g. complex weather simulations. But when you make some ultra-simplistic model called a "person", people immediately wonder whether it's sentient. DF creatures are just letters on a screen that we've assigned a semantic label of representing people. They're no more sentient than a cardboard cut-out is.

E.g. "The Sims" are just a paper-thin facade of skin and a bunch of prewritten animations. There's literally nothing going on "inside their head" because there's literally nothing inside their head. Meanwhile, Google Deep Dreams is a very complex neural network. It's actually more believable that there's a spark of "self-awareness" inside something like Google Deep Dreams than inside a Sims character or DF dwarf.

The problem is that they are an appearance/representation of humanity.  It has nothing to do with what they objectively *are*.
But I thought there was no objectivity? :P

Experience is not fundamental. Anything that I can determine about myself through introspection, I could theoretically determine about somebody else by looking at their brain. If there exists a non-physical soul, it does not seem to have any effects on the world. This lack of effects extends to talking about souls, and for that matter thinking about souls.
You modelled the world without taking the mind into account
What on Earth do you mean? I take minds into account all the time! I don't have anywhere near enough cognitive abilities or knowledge to predict the world through quantum mechanics, so I resort to modeling people by their minds. I simply don't include minds in my fundamental models - that is reserved for electric fields and whatnot.

of *course* it does not appear to have any effects on the world; that is because you made up a whole load of mechanics to substitute for the mind.
Not to substitute, but to explain. Minds are things. But they can't be fundamental things. After all, my mind is made up of thoughts and emotions and memories and whatnot. If I keep zooming in, what do I get to? Perhaps an electron.

If something's effects can be entirely explained through a simpler model, that just means that the thing is non-fundamental.

And furthermore, I did not "make up" these mechanics, nor do they substitute for the mind. It's more looking at the actual mechanics of the mind, and using them to understand the mind.

You can make up as many mechanics as you like to explain away anything you like, after all.
No. "Explaining away" has a specific meaning. I can explain away gremlins by studying failure modes. I can explain rainbows by studying refraction. I have not explained away the mind; I have merely explained it. The mind is still real. It is just not fundamentally real.

You can always make up redundant mechanics to explain away all conscious decision making, since you are prejudiced against what you scornfully call a 'non-physical-soul' to begin with the redundancy is not apparent.
You can't refute my arguments by saying "you can say that." Indeed, I have said it. Are you going to actually respond to it, or just say "that is a thing you can think"?

Please do not use ad hominem arguments. They are not productive.

You can make up as many mechanics as you like to explain anything you like; it does not mean that they exist or are not redundant.
They are not redundant. Only with psychology and neurology can we fully understand the mind.

And I cannot actually explain anything I like. Fundamental particles cannot be broken down; they simply are. Only non-fundamental things can be broken down into their parts.

Or alternatively, we can say that something is a mind if it appears to have goals and makes decisions, and is sufficiently complex to be able to communicate with us in some way. Not that this is the True Definition of mind - no such thing exists! And there might be a better definition. My point is that you don't have to define mind-ness by similarity to the definer.

Something does not have goals or make decisions unless it is genuinely conscious.
Yes! That is exactly my point. So if it is observed to have goals and make decisions, then it would be...

What you are in effect saying is that it is observed to behave in a way that, if *I* did it, would imply conscious decision making.
No, that is not actually what I am saying. I'm not talking about your concept of fundamental consciousness at all. I don't care if some mind has an Inscrutable Quale attached to it or not. I just care about how it acts, in order to model it and interact with it.

"Mind" is a category of things. That is all. It is the same sort of thing as "red", except vastly more complicated to define.

The point is invalid, you are still defining consciousness against yourself, though the assumptions are flawed in that they fail to take into account that two completely different things may still bring about the same effect.
For my purposes, it is irrelevant whether, on some metaphysical plane, a mind's activities are being produced by a True Mind or a Manipulator. This difference is unobservable and meaningless. It acts the same way, talks about its feeling of self-awareness in the same way... all of the important functions of a mind that I can notice are associated with physical processes.

How do you know that you are conscious?

Because I *am* consciousness.  You can disregard the fact of your own consciousness in favour of what you think you know about the unknowable external world all you wish, but that is a stupid thing to do so *I* will not be joining you.
I suppose I should have been more clear. I know that I know things. I can hold ideas and symbols. I am aware of my awareness of myself. But that does not mean that my consciousness is a fundamental fact. How do you know that you are Conscious, as you have defined it? That is, how do you know that you have Metaphysical Qualia, or whatever traits you hold to be fundamental to True Minds?

Ah, you mean philosophical zombies! Right? And you're saying that other people could be controlled by a Zombie Master. Is that correct?

It could be correct, but that is not exactly relevant.  The zombie masters are then conscious beings and the main thrust (my being eternally alone) no longer applies.
 
That is not necessarily true. Something does not need to be conscious to replicate conscious functions - is that not your claim?

But... what do you mean by something being "fake consciousness"? That's like something being "fake red", which acts just like red in all ways but is somehow Not Actually Red.

You might be able to imagine something that doesn't seem conscious enough, like a chatbot, but the reason that we call it Not Conscious is that it fails to meet certain observable criteria.

What I mean is something that exhibits the external behaviour of a conscious being perfectly yet does so by means that are completely different to how a conscious being does it.
If it perfectly replicates the behavior, then it must be able to model its ability to model, and so on. In other words, it needs to model the same functions that are key to my own observation of consciousness. I do not see any important difference between these functions being carried out by flesh or silicon, or even the type of program that computes them.

It is nice and mechanical, different mechanics but same outcome.  A cleverbot is a fake consciousness because its programmers made no attempt to replicate an actual conscious being, merely its externally observable behaviour.  It does not become any less fake simply because it becomes good enough to replicate the behaviour perfectly rather than imperfectly.
You are just asserting that. A chatbot is clearly not conscious, but if it were able to simulate consciousness, would it then be conscious? You have ignored this possibility, instead saying that the lack of consciousness is derived from the method of implementation of the chat function.

Did you know that your own consciousness was implemented by a 'blind idiot god' (of sorts)? If a blind idiot can make a mind, it must not be that hard (on the timescale of millions of years at least).

I do not think I could do most of the things I do without having self-reflectivity, etc.

If you do the same thing a lot consciously, you tend to end up doing it reflexively without being aware of it, I find.  But that is just me; perhaps this is not so for you.  It is one more reason to conclude you to be a philosophical zombie, I guess, since the more differences there are between you and I, the lower the probability of your also being a conscious being.
I find your statement absolutely and abhorrently evil, perhaps the root of most evil in this world. Something being different from you does not make it ethically unimportant. However, I am fully capable of interacting with your statement outside of my ethical model.

Why do you think that whatever makes you conscious is only likely to be in things that are similar to you?

What do you mean, "nowhere for the minds to go"? Minds are abstractions, not physical objects. It is not like the brain contains a Mind Lobe, which is incapable of being placed inside a processor. If a computer replicates the function of a brain, the mind has been transferred. The mind is software.

So wrong.  Minds are not only objects, material or otherwise, but they are the only actual objects whose existence is certain.  If a computer replicates the function of a brain, it is nothing but a computer that replicates the function of a brain.  The cleverness is yours, not its.
I think you are using a decidedly different definition of "object," but that is irrelevant; the concepts matter more than the terms.

Minds are things. They are not fundamental things. They are, as the buzzword goes, "emergent" (but so is everything else that is not a quark). For something to be a mind, it has to have certain capabilities. These capabilities can be carried out by any Turing machine, as well as more specialized machines such as the brain.
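The claim that these capabilities "can be carried out by any Turing machine" is the standard substrate-independence point: the abstract machine's behaviour is fixed by its rule table, not by what runs it. A toy sketch of that idea (flipping bits, not thinking; the `run_machine` helper and its rule format are invented for illustration):

```python
# Toy substrate independence: the machine below is fully defined by
# its rule table, so any interpreter of the table (flesh, silicon,
# pen and paper) produces identical behaviour. This machine just
# flips every bit of its input tape, then halts at the tape's end.

def run_machine(tape, rules, state="start"):
    """Run a Turing-style machine until it halts or runs off the tape."""
    tape = list(tape)
    pos = 0
    while state != "halt" and 0 <= pos < len(tape):
        symbol = tape[pos]
        write, move, state = rules[(state, symbol)]
        tape[pos] = write
        pos += move
    return "".join(tape)

# Rule table: in state "start", flip the current bit and move right.
rules = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
}

print(run_machine("1010", rules))  # -> "0101"
```

Nothing about `run_machine` cares whether the rule table describes bit-flipping or something vastly more complicated; that is the whole of the substrate-independence argument in miniature.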

Being a thing does not imply having complete knowledge of the thing. Does a bridge know civil engineering?

A bridge is not conscious
I do not see why conscious things, specifically, must necessarily have complete self-knowledge.

and neither are brains for that matter.  If consciousness had a physical form then the being would necessarily know the complete details of its own physical makeup, because everything about its physical makeup *is* made of consciousness.
 
I am not claiming that the brain is made up of consciousness. Consciousness is not an element. It is not a peculiar type of molecule. It is a process, and this process may not have complete knowledge of the substrate on which the process is carried out.

It's a subtle and not entirely important difference. The mind is currently only found within the brain, and has never been separated. Because of this, we treat the mind and the brain as the same thing quite often.

The mind has never been found *anywhere*.
Well. If I mess around with your brain, you act differently. I'd say the mind is in the brain, then. Where else could it be?

The brain is at best the projecting machine that produces the mind; the mind itself, however, is not *in* the brain, because if it were we would have an intuitive understanding of neuroscience, which we lack.  That we need to learn neuroscience in the first place implies that our brain is part of the 'external reality' and not the mind.
What do you mean, the "projecting machine"? Where would this projection be projected onto? And is this projection epiphenomenal?

The results of one's actions are fundamentally uncertain, and yet all consequentialist ethical systems depend upon the results of actions. "What should I do?" is dependent on the results of doing A, and B, and so on - even though there is an uncertainty in those terms. You still have to choose whichever consequence you think is best.

That is a problem with consequentialist ethical systems.
How do you make your decisions, then? And how are you certain that what you do is right?

(Hmm. If you cannot cope with uncertainty, it does not surprise me that you have turned to these ideas. They offer complete and utter certainty, without even needing any evidence. Still, finding a cognitive reason for your statements is not equivalent to refuting them. I am just making an observation, and perhaps you would like to consider it.)
Logged
Quote from: King James Programming
...Simplification leaves us with the black extra-cosmic gulfs it throws open before our frenzied eyes...
Quote from: Salvané Descocrates
The only difference between me and a fool is that I know that I know only that I think, therefore I am.
Sigtext!

Dozebôm Lolumzalìs

  • Bay Watcher
  • what even is truth
    • View Profile
    • test
Re: Is playing dwarf fortress ethical?
« Reply #186 on: January 24, 2018, 12:34:20 am »

Appearances, eh? I feel nothing when I kill a DF character. I would feel something when killing a human. They're different, simulated characters are inferior. And you can't convince me otherwise no matter how many walls of text you throw at me.
Do you mean that all simulated beings are necessarily less morally important, or just that you haven't seen any morally important simulated being so far?

It's harmless to our society.
But the key question is whether it is harmless to all morally important beings.

They all are less important. They're irrelevant, even.
Why is that?

If I hypothetically simulated a human brain (and perhaps the body as well) on a supercomputer, would the simulation have any moral value?

If AI were somehow kept from going foom and instead stayed at roughly human-level cognition and power, I suspect that we would need to modify our moral theories to include them. In practice, it can be useful to treat something as a person if it is capable of doing the same for you (for purposes of cooperation, etc.).
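The cooperation point at the end is the classic iterated prisoner's dilemma result: reciprocators do better against each other than mutual defectors do. A minimal sketch, using the standard Axelrod payoff values (the framing of reciprocity as "treating the other as a person" is the post's, not the code's):

```python
# Iterated prisoner's dilemma: reciprocating agents (tit-for-tat)
# outscore mutual defectors when paired with their own kind.

PAYOFF = {  # (my move, their move) -> my score per round
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(their_history):
    # Cooperate first, then copy the opponent's last move.
    return their_history[-1] if their_history else "C"

def always_defect(their_history):
    return "D"

def play(a, b, rounds=10):
    seen_by_a, seen_by_b = [], []  # each side's record of the other's moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = a(seen_by_a), b(seen_by_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (30, 30): mutual cooperation
print(play(always_defect, always_defect))  # (10, 10): mutual defection
```

This is why "treat something as a person if it can do the same for you" can be useful in practice: reciprocal recognition unlocks the cooperative payoff that neither defector ever reaches.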
Logged
Quote from: King James Programming
...Simplification leaves us with the black extra-cosmic gulfs it throws open before our frenzied eyes...
Quote from: Salvané Descocrates
The only difference between me and a fool is that I know that I know only that I think, therefore I am.
Sigtext!

KittyTac

  • Bay Watcher
  • Impending Catsplosion. [PREFSTRING:aloofness]
    • View Profile
Re: Is playing dwarf fortress ethical?
« Reply #187 on: January 24, 2018, 12:49:39 am »

Wait, I misread. DF characters are morally irrelevant. An AI with the same intelligence as a human is as morally relevant as a human.
Logged
Don't trust this toaster that much, it could be a villain in disguise.
Mostly phone-posting, sorry for any typos or autocorrect hijinks.

GoblinCookie

  • Bay Watcher
    • View Profile
Re: Is playing dwarf fortress ethical?
« Reply #188 on: January 24, 2018, 09:48:07 am »

It is not like there is any objective morality to which we can compare people's values. To a person, certain things are ethical and certain other things are not.

In fact, it seems to me that Romeo was not even describing Ethics (in the philosophy sense) so much as personal decision-making, in which you consider the utility and disutility of each course of action, including thinking about courses of action. They were describing how people work, not judging it. You consider the costs of X and Y. If A is true, then choosing X will make people hurt, and that thought hurts you. But A is almost certainly not true, so you choose X anyway, because it has much more likely benefits. Calling all this "evil" is missing the point, and confusing the view from inside and outside of a model.

Well of course there is an objective morality against which we can compare people's values.  The whole setup does not work otherwise, since you can just select any set of values you like to justify anything you like and then change it again.  Ethics is pointless if it has no foundation in anything solid, since it then has no force to control the behaviour of people.

But I thought there was no objectivity? :P

It is more complicated than that.  The point I was making is that since appearances are factual (100% certain) while the existence of any external reality is uncertain (not false, but less than 100% certain), you cannot build an ethical system based upon knowing what the objective facts beyond your appearances actually are.

Since morality is built upon what people appear to be doing rather than what they are doing, things like images actually start to matter.  Violence against an image of something is akin to violence against the thing itself, because the ethical signifier (?) is the appearance and not the reality. 

What on Earth do you mean? I take minds into account all the time! I don't have anywhere near enough cognitive abilities or knowledge to predict the world through quantum mechanics, so I resort to modeling people by their minds. I simply don't include minds in my fundamental models - that is reserved for electric fields and whatnot.

You are modelling things using minds and then assuming that there is some other mechanic involved 'really'.  Why is the other mechanic even needed then?

Not to substitute, but to explain. Minds are things. But they can't be fundamental things. After all, my mind is made up of thoughts and emotions and memories and whatnot. If I keep zooming in, what do I get to? Perhaps an electron.

If something's effects can be entirely explained through a simpler model, that just means that the thing is non-fundamental.

And furthermore, I did not "make up" these mechanics, nor do they substitute for the mind. It's more looking at the actual mechanics of the mind, and using them to understand the mind.

You are looking at the fundamental mechanics of the *brain*, not the mind.  The relationship between minds and the contents of the mind is interesting, though: is the mind best seen as a container into which stuff 'goes', or instead as a collection of things which are thrown together?

No. "Explaining away" has a specific meaning. I can explain away gremlins by studying failure modes. I can explain rainbows by studying refraction. I have not explained away the mind; I have merely explained it. The mind is still real. It is just not fundamentally real.

Explaining away is when you ignore a fact that you reasonably should know to be the case, and then invent a theoretical construct to explain the effect of that thing on the world.  You for all practical purposes know that the mind exists, but you insist on ignoring its existence and seeking to ultimately explain all behaviour through mindless mechanics.

You can't refute my arguments by saying "you can say that." Indeed, I have said it. Are you going to actually respond to it, or just say "that is a thing you can think"?

Please do not use ad hominem arguments. They are not productive.

It was not a personal attack.  It was just that the core of the strain of 'wrongness' which leads to folks thinking that minds and brains are somehow the same thing is rooted in nothing except the a priori rejection of the notion of an 'immaterial soul'.  There must at all costs not be such a thing, which explains all other argumentation.

They are not redundant. Only with psychology and neurology can we fully understand the mind.

And I cannot actually explain anything I like. Fundamental particles cannot be broken down; they simply are. Only non-fundamental things can be broken down into their parts.

There aren't necessarily any fundamental particles; there might just be things you have not figured out how to split yet.

The brain is not the mind; this means you cannot ever understand the mind by studying the brain.  That rules out neurology for certain, though psychology not so much.

Yes! That is exactly my point. So if it is observed to have goals and make decisions, then it would be...

Nothing is 'observed' to have goals and make decisions.  We take as an a priori assumption that the thing is conscious, and we then explain its behaviour in that fashion.  If we take as an a priori assumption that the thing is just a mindless bot, we simply explain its behaviour as a programmed response to input and internal variables.

No, that is not actually what I am saying. I'm not talking about your concept of fundamental consciousness at all. I don't care if some mind has an Inscrutable Quale attached to it or not. I just care about how it acts, in order to model it and interact with it.

"Mind" is a category of things. That is all. It is the same sort of thing as "red", except vastly more complicated to define.

If you can invent a mindless explanation for everything, that may well *work*.  But since you are a mind, you know minds exist, meaning that in the end this model is incorrect in any case.

For my purposes, it is irrelevant whether, on some metaphysical plane, a mind's activities are being produced by a True Mind or a Manipulator. This difference is unobservable and meaningless. It acts the same way, talks about its feeling of self-awareness in the same way... all of the important functions of a mind that I can notice are associated with physical processes.

Remember that when you move your arm, you are not in fact moving your actual arm at all.  You are instead moving an image of your arm inside your mind.  Supposedly the actual material arm exists and the brain (which may also not exist) picks up on your moving the imaginary arm, executing the necessary functions to make the actual arm move. 

This means that on a physical level the brain *must* be able to actually do everything that the mind does.  Because the mind is outside of the observable material universe, this 'causality' cannot be tracked; the effect can be observed but not the cause.  You can thus invent a theoretical cause you believe to be within the material world (not actually a directly observed one) to explain away the mind.  This may well 'work', and this is what is dangerous; we know that this is not what is going on only because we know that minds exist.

I suppose I should have been more clear. I know that I know things. I can hold ideas and symbols. I am aware of my awareness of myself. But that does not mean that my consciousness is a fundamental fact. How do you know that you are Conscious, as you have defined it? That is, how do you know that you have Metaphysical Qualia, or whatever traits you hold to be fundamental to True Minds?

Remember that *my* consciousness *is*, to me, the *only* fundamental fact.  Your 'consciousness', on the other hand, may not even exist.  You may be a philosophical zombie that is programmed to mimic consciousness, which rather well explains why 'your' idea of consciousness fits my idea of 'fake consciousness'.   :)

That is not necessarily true. Something does not need to be conscious to replicate conscious functions - is that not your claim?

Actual control implies consciousness.  So we have an illusion of control being executed over those who are unconscious. 

If it perfectly replicates the behavior, then it must be able to model it's ability to model, and so on. In other words, it needs to model the same functions that are key to my own observation of consciousness. I do not see any important difference between these functions being carried out by flesh or silicon, or even the type of program that computes them.

This reminds me of the days of yore (I think it was the 18th century) when clockwork was all the rage.  They had this clockwork duck which replicated nearly all the functions of the real duck to the minds of the audience.

If you replicate the behaviour of a thing, that does not mean you have made the thing, unless you do so by the same mechanisms the thing you are replicating was using.  Mind is, from the perspective of the material universe, a mechanism, not an object; outside of material reality it is an object.

You are just asserting that. A chatbot is clearly not conscious, but if it were able to simulate consciousness, would it then be conscious? You have ignored this possibility, instead saying that the lack of consciousness is derived from the method of implementation of the chat function.

Did you know that your own consciousness was implemented by a 'blind idiot god' (of sorts)? If a blind idiot can make a mind, it must not be that hard (on the timescale of millions of years at least).

If consciousness is possible, then it is just a question of combining whatever things it is that make consciousness, whether accidentally or not.  The problem is that it is impossible (well, without horribly unethical experiments) to actually isolate the 'consciousness-generating' elements of the human from everything else.  Only when we have isolated this element, that is, 'cut it away' from everything else (likely literally), can we then replicate the essential elements and replace the 'inessential' ones with other elements of our choosing to make a strong AI.

Remember that I am not saying that the perfect Cleverbot (that is, the perfect fake consciousness) would not genuinely be a conscious being.  I am saying that it would be impossible to tell, because if the perfect Cleverbot is actually conscious, that is an accident of us having inadvertently ended up using the actual mechanic that brings about consciousness in the process of making a fake one.  Because Cleverbot is consciously designed as the perfection of a fake consciousness, however, it is impossible to determine whether we 'accidentally' stumbled on the right mechanic.

I find your statement absolutely and abhorrently evil, perhaps the root of most evil in this world. Something being different from you does not make it ethically unimportant. However, I am fully capable of interacting with your statement outside of my ethical model.

Why do you think that whatever makes you conscious is only likely to be in things that are similar to you?

The odds of other beings being mindless go up the more different you are from me, but it is a probabilistic thing.  As for the evil part, I am the one arguing that actual consciousness is actually irrelevant; hence it does not follow that a medium cannot be unethical just because no actual conscious beings were hurt, because it is the appearance that matters.

I think you are using a decidedly different definition of "object," but that is irrelevant; the concepts matter more than the terms.

Minds are things. They are not fundamental things. They are, as the buzzword goes, "emergent" (but so is everything else that is not a quark). For something to be a mind, it has to have certain capabilities. These capabilities can be carried out by any Turing machine, as well as more specialized machines such as the brain.

From the perspective of the material universe (in so far as we think we understand it) the mind is not an object but a mechanism; from the perspective of the mental universe the material universe is a mechanism to explain the objects in the mind.  Or rather, the fact that there are mental objects that we cannot simply wish away (material reality is a mechanical explanation for the lack of Matrix spoon-bending). 

I do not see why conscious things, specifically, must necessarily have complete self-knowledge.

That is because consciousness does not exist in material reality.  If the mind were the brain, the brain would also be the mind.  That means what we are aware of (the mind) would be the functioning of the brain.  That being so, we would be able to learn about the internal functioning of the brain from introspection.

That our mind teaches us nothing about the internal mechanics of our brain establishes pretty solidly that the mind and brain are completely different things. 

I am not claiming that the brain is made up of consciousness. Consciousness is not an element. It is not a peculiar type of molecule. It is a process, and this process may not have complete knowledge of the substrate on which the process is carried out.

Yes, consciousness/mind is not a material thing.  The brain is a material thing, hence it is not the mind/consciousness. 

Well. If I mess around with your brain, you act differently. I'd say the mind is in the brain, then. Where else could it be?

The mind is nowhere (that is it has no location). 

The brain does many things, in fact most things, mindlessly.  You can also change the appearances in the mind by altering material reality, of which the brain is a part.

What do you mean, the "projecting machine"? Where would this projection be projected onto? And is this projection epiphenomenal?

The projection machine is the 'function' of the universe that produces consciousness.  We have reason to believe that the projecting machine is the material object behind the appearance we call the 'brain'. 

It is actually epiphenomenal in any case; consciousness is clearly a byproduct of something material, which is to say something unknowable.  The problem here is that there is the mental input (the mental appearance), but in order for certain aspects of consciousness to exist (free will) the output must also allow a returning input.  *That* means that the projector is not simply projecting an image; we are projecting a user interface to something.

The second part is a problem since it ties consciousness to material reality as a mechanic.  The fatalistic 'movie consciousness' can work quite nicely with mindless mechanics; the 'user-interface consciousness' needs to function as a mechanism (though that does not make it a material object). 

How do you make your decisions, then? And how are you certain that what you do is right?

(Hmm. If you cannot cope with uncertainty, it does not surprise me that you have turned to these ideas. They offer complete and utter certainty, without even needing any evidence. Still, finding a cognitive reason for your statements is not equivalent to refuting them. I am just making an observation, and perhaps you would like to consider it.)

My ideas purport that all material facts and all other consciousnesses are inherently uncertain and there is no way to ever make it otherwise; not exactly any refuge from uncertainty there.  The only certain thing is the existence of my appearances in themselves (apart from their supposed material cause), hence to answer your question: I make my decisions, ethical or otherwise, based upon appearances. 
Logged

KittyTac

  • Bay Watcher
  • Impending Catsplosion. [PREFSTRING:aloofness]
    • View Profile
Re: Is playing dwarf fortress ethical?
« Reply #189 on: January 24, 2018, 08:28:46 pm »

Welp. You still didn't react to my response that DF characters are inferior and morally irrelevant. Guess you ran out of walls of text to throw at me.
Logged
Don't trust this toaster that much, it could be a villain in disguise.
Mostly phone-posting, sorry for any typos or autocorrect hijinks.

Dozebôm Lolumzalìs

  • Bay Watcher
  • what even is truth
    • View Profile
    • test
Re: Is playing dwarf fortress ethical?
« Reply #190 on: January 25, 2018, 01:26:53 am »

Well of course there is an objective morality against which we can compare people's values.  The whole setup does not work otherwise, since you can just select any set of values you like to justify anything you like and then change it again.  Ethics is pointless if it has no foundation in anything solid, since it then has no force to control the behaviour of people.
The universe may well be unfair. Perhaps it has neglected to provide us with an objective basis for ethics. Perhaps ethics are all pointless. Saying "but that would be bad" isn't a good argument against that being the case.

It is more complicated than that.  The point I was making is that since appearances are factual (100% certain) while the existence of any external reality is uncertain (not false, but less than 100% certain), you cannot build an ethical system based upon knowing what the objective facts beyond your appearances actually are.
In the real world, we can never be certain about anything. We have to build an ethical system on fundamental uncertainty, or simply not build any.

Also, that does not follow. The mere fact that some things are more certain than others does not mean that the more certain things are a better basis for morality.

Additionally, it is still appearances by which Reelya assigns moral worth. It is just a deeper sort of appearance, one that you must investigate in order to see.

Since morality is built upon what people appear to be doing rather than what they are doing, things like images actually start to matter.  Violence against an image of something is akin to violence against the thing itself, because the ethical signifier (?) is the appearance and not the reality.

(All things moral are built on subjectives) -/-> (all subjective things have moral worth). That is confusing the superset and the subset. In other words, simply because all morally-important things happen to be subjective, does not mean that all subjective things are morally important.
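The superset/subset point can be made explicit as a one-line counterexample (a minimal sketch; the predicates M for "morally important" and S for "subjective" are labels introduced here, not from the thread):

```latex
% Premise: all morally important things are subjective.
\forall x\,\bigl(M(x) \rightarrow S(x)\bigr)
% This does not entail the converse:
\forall x\,\bigl(M(x) \rightarrow S(x)\bigr)
  \;\not\Rightarrow\;
\forall x\,\bigl(S(x) \rightarrow M(x)\bigr)
% Counterexample: a world containing one thing a with
% S(a) and \neg M(a) satisfies the premise but not the conclusion.
```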

Have you ever read Asimov's The Relativity of Wrong?

You are modelling things using minds and then assuming that there is some other mechanic involved 'really'.  Why is the other mechanic even needed then?
Minds are not, and cannot be, fundamental. They are far too complex. They must be made of smaller and simpler pieces. If I want to be as accurate as possible in my models, I should consider the pieces as well as the whole.

You are looking at the fundamental mechanics of the *brain*, not the mind.
What is a memory? We can tell by noticing the difference between "having a memory" and "not having a memory." This difference is within the brain; the memory is stored in the connection of neurons. Similarly for all other quasi-fundamental mental objects (which may also be stored in other forms of biological information, such as hormones).

The relationship between minds and the contents of the mind is interesting, though: is the mind best seen as a container into which stuff 'goes', or instead as a collection of things which are thrown together?
I do not see any important difference, nor is there a way to check either definition's validity. This is meaningless philosophy.

Explaining away is when you ignore a fact that you reasonably should know to be the case and then you invent a theoretical construct to explain the effect of that thing on the world.
This continues to not be the definition. Explaining away is when you show that an alleged object/entity/phenomenon, said to be responsible for a certain physical effect, need not exist - the effect is caused by something else. Explaining is when you show how an object/entity/phenomenon is made up of smaller things. See here for more.

You for all practical purposes know that the mind exists, but you insist on ignoring its existence and seeking to ultimately explain all behaviour through mindless mechanics.
This is an incorrect summary of my beliefs and words. I know that the mind exists, and do not ignore its existence. Rather, I seek to understand its functions and composition. The mind is made up of non-mental things, just as a mountain is made up of non-mountain things, and an airplane is made up of non-airplane things. In order to better know a mind, you must also know the non-mental things which make it up.

It was not a personal attack.
I never said it was. Ad hominem arguments rely on showing a belief's proponent to be flawed, and using this as a counter-argument. This is fallacious and non-productive.

It was just that the core of the strain of 'wrongness' which leads to folks thinking that minds and brains are somehow the same thing is rooted in nothing except the a-priori rejection of the notion of an 'immaterial soul'.  There must at all costs not be such a thing, which explains all other argumentation.
This psychoanalysis is incorrect, but beside the point. As I have said, showing a belief's proponents to be flawed is not an argument against the belief.

You can argue this about anything. Any belief could conceivably be held as an a priori, absolute, unreasonable belief - including yours. As such, this possibility is not an argument against any particular belief. (See Bayes as it applies to arguments.)

There aren't necessarily any fundamental particles; there might just be things you have not figured out how to split yet.
Perhaps not, but it seems unlikely.

The brain is not the mind, this means you cannot ever understand the mind by studying the brain.
How do you know that?

That rules out neurology for certain, though psychology not so much.
People's actions result from their neurology. There is no point where a metaphysical process leaps in and shifts an atom to the side, changing someone's actions. We can draw a causal chain back from "Bob raises his right hand" to "an electrical signal stimulates the muscles in Bob's right arm" to "an electrical signal is emitted by the brain down a particular set of nerves" to "a complicated set of signals is passed between neurons, which we describe as Bob deciding to raise his right arm" and so on.

Nothing is 'observed' to have goals and make decisions.  We take as an a-priori assumption that the thing is conscious and we then explain its behaviour in that fashion.  If we take as an a-priori assumption that the thing is just a mindless bot, we simply explain its behaviour as a programmed response to input and internal variables.
I do not, in fact, take something's consciousness as an a priori assumption. I look at its behavior, and see whether it demonstrates a tendency to act to satisfy a certain set of criteria. If it does, then I call that "goals and decisions," and move on to the next criterion.

If you can invent a mindless explanation for everything that may well *work*.  But since you are a mind, you know minds exist meaning in the end that this model is incorrect in any case.
A mindless explanation does not make minds cease to exist, just as quantum physics does not make bridges and planes and mountains cease to exist, despite there being no term for "bridge" in the wave function.

Remember that when you move your arm, you are not in fact moving your actual arm at all.  You are instead moving an image of your arm inside your mind.  Supposedly the actual material arm exists and the brain (which may also not exist) picks up on your moving the imaginary arm, executing the necessary functions to make the actual arm move.
I do not actually remember this.

This means that on a physical level the brain *must* be able to actually do everything that the mind does.
This is true, because they are the same thing. (Or close enough to be "hardware" and "software.")

Because the mind is outside of the observable material universe
Only in the same sense that the redness of an object is outside of the observable material universe. That is, not at all.

Is it made of physical stuff? Then it's material. Is it unobservable? Then how do you know it exists? The only observable immaterial things are mathematical concepts, perhaps.

then this 'causality' cannot be tracked, the effect can be observed but not the cause.
Then how do you know there is even any cause?

You can thus invent a theoretical cause you believe to be within the material world (not actually a directly observed one) to explain away the mind.
No, I can't, because minds are real. What I can do is look at a person, and see what they do, and look inside them, and see what happens, and so on. Very little of this is "theoretical," and that's not even an insult like you think it is.

This may well 'work' and this is what is dangerous, we know that this is not what is going on only because we know that minds exist.
In a purely material world, intelligence could still exist, and people could still think that minds exist. Therefore, (thought that minds exist) -/-> (minds are non-material). (See the contrapositive.) [That is, you're saying that A -> ~B, where A is mind-feeling and B is material world. However, ~(B -> ~A) => ~(A -> ~B), QED.]
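The bracketed step can be laid out in full (a sketch, with A = "people think minds exist" and B = "the world is purely material" as above):

```latex
% The claim under attack:
A \rightarrow \neg B
% A conditional is equivalent to its contrapositive:
(A \rightarrow \neg B) \;\equiv\; (B \rightarrow \neg A)
% Observation: in a purely material world people could still think
% minds exist, i.e. the contrapositive fails:
\neg\,(B \rightarrow \neg A)
% Hence the original claim fails too:
\neg\,(B \rightarrow \neg A) \;\Rightarrow\; \neg\,(A \rightarrow \neg B)
```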

Remember that *my* consciousness *is* to me the *only* fundamental fact.
How do you know this to be true? Do you suppose, out of all possible mind-instantiations that are equivalent to yours, that none of them will be embodied in a world where your a priori belief is false? (See Bayes.)

Your 'consciousness', on the other hand, well, it may not even exist.  You may be a philosophical zombie that is programmed to mimic consciousness; this rather well explains why 'your' idea of consciousness fits my idea of 'fake consciousness'.
Excuse me? I am most definitely conscious, and my lack of a priori belief in fundamental consciousness does not invalidate my subjective experiences nor my moral worth.

Your argument makes no sense. By your own definition, there is no observable difference between Zombie Chatbots and Real Humans, since the only difference is an unobservable metaphysical source of consciousness. Therefore, nothing that I do can be evidence toward me being a Zombie Chatbot. (See Bayes.)

Actual control implies consciousness.  So we have an illusion of control being executed over those who are unconscious.
It does? So anything that exercises control has a Metaphysical Consciousness Source, or will the Zombie Masters come and install a Fake Consciousness Source in every AI I design?

I don't see how you could possibly obtain any of this "information" yourself to any degree of reliability.

This reminds me of the days of yore (I think it was the 18th century) when clockwork was all the rage.  They had this clockwork duck which replicated nearly all the functions of the real duck to the minds of the audience.

If you replicate the behaviour of the thing, that does not mean you have made the thing, unless you do so by the same mechanisms the thing you are replicating was using.  Mind is, from the perspective of the material universe, a mechanism, not an object; outside of material reality it is an object.
It is a mechanism, but not one that can be changed without affecting the physical universe. This mechanism is the same sort of mechanism as an engine - it's a physical thing that does things. It can also be broken down into non-mechanism parts.

If consciousness is possible then it is just a question of combining whatever things it is that make consciousness, whether accidentally or not.  The problem is that it is impossible (well, without horribly unethical experiments) to actually isolate the 'consciousness-generating' elements of the human from everything else.  Only when we have isolated this element, that is, 'cut it away' from everything else (likely literally), can we replicate the essential and replace the 'inessential' elements with other elements of our choosing to make a strong AI.
Only when? Are you saying that it is impossible to make a self-reflective thing without that thing having True Consciousness?

Remember that I am not saying that the perfect Cleverbot (that is, the perfect fake consciousness) would not genuinely be a conscious being.  I am saying that it would be impossible to tell: if Perfect Cleverbot is actually conscious, it is by accident of our having inadvertently used the actual mechanic that brings about consciousness in the process of making a fake one.  Because Cleverbot is consciously designed as a perfected fake consciousness, it is impossible to determine whether we 'accidentally' stumbled on the right mechanic.
So this mechanic is physical, then?

The odds of other beings being mindless go up the more different you are from me, but it is a probabilistic thing.  As for the evil part, I am the one arguing that actual consciousness is actually irrelevant; hence it does not follow that a medium cannot be unethical because no actual conscious beings were hurt, because it is the appearance that matters.
It is the appearance of calling something not-a-person that matters, in a consequentialist sense. Perhaps you treat chatbots well, but I doubt everyone does. This is a very bad idea.

I think you are using a decidedly different definition of "object," but that is irrelevant; the concepts matter more than the terms.

Minds are things. They are not fundamental things. They are, as the buzzword goes, "emergent" (but so is everything else that is not a quark). For something to be a mind, it has to have certain capabilities. These capabilities can be carried out by any Turing machine, as well as more specialized machines such as the brain.

From the perspective of the material universe (in so far as we think we understand it) the mind is not an object but a mechanism; from the perspective of the mental universe the material universe is a mechanism to explain the objects in the mind.
This seems like useless, meaningless philosophical gibberish to me. Have you found any actual evidence for this non-material universe yet, or are you still just asserting its existence?

Or rather, the fact that there are mental objects that we cannot simply wish away (material reality is a mechanical explanation for the lack of Matrix spoon-bending).
Alternatively, we cannot wish away mental objects because brains do not have conscious self-editing powers.

I do not see why conscious things, specifically, must necessarily have complete self-knowledge.

That is because consciousness does not exist in material reality.  If the mind were the brain, the brain would also be the mind.  That means what we are aware of (the mind) would be the functioning of the brain.  That being so, we would be able to learn about the internal functioning of the brain from introspection.
No. I do not see how you are getting this. We experience some functions of the brain, not all. You are basically saying A is a subset of B, therefore B is a subset of A. That is not valid logic.

That our mind teaches us nothing about the internal mechanics of our brain establishes pretty solidly that the mind and brain are completely different things.
Do you think, in a purely material universe, all conscious beings would always have complete self-knowledge? And how do you know that?

I am not claiming that the brain is made up of consciousness. Consciousness is not an element. It is not a peculiar type of molecule. It is a process, and this process may not have complete knowledge of the substrate on which the process is carried out.

Yes, consciousness/mind is not a material thing.
Taboo "material", please.

The brain is a material thing, hence it is not the mind/consciousness.
Consciousness is not a physical thing you can pick up, but it is a physical thing that happens in the physical world. It's the difference between a log and fire. It's a process, with physical causes and physical effects. No metaphysics involved.

Well. If I mess around with your brain, you act differently. I'd say the mind is in the brain, then. Where else could it be?

The mind is nowhere (that is it has no location).
That is reserved for concepts. Processes have locations.

The brain does many things, in fact most things, mindlessly.  You can also change the appearances in the mind by altering material reality, of which the brain is a part.
How does the physical world affect the mind? And if you had never learned about Phineas Gage, would he have been (at least weak) evidence against dualism, in your view? (What would you have predicted beforehand?)

What do you mean, the "projecting machine"? Where would this projection be projected onto? And is this projection epiphenomenal?

The projection machine is the 'function' of the universe that produces consciousness.
You are misusing a mathematical term. Functions are maps from sets to sets (or a more general version of the same).

We have reason to believe that the projecting machine is the material object behind the appearance we call the 'brain'.
We do, do we? Would you care to share the reason for supposing the existence of a "projection" at all?

It is actually epiphenomenal in any case; consciousness is clearly a byproduct of something material, which is to say something unknowable.
Do you know what epiphenomenalism even is?

The material is not unknowable. It is the only knowable thing. It is not certain, but it is knowable.

The problem here is that there is the mental input (the mental appearance), but in order for certain aspects of consciousness to exist (free will) the output must also allow a returning input.  *That* means that the projector is not simply projecting an image; we are projecting a user interface to something.

The second part is a problem since it ties consciousness to material reality as a mechanic.  The fatalistic 'movie consciousness' can work quite nicely with mindless mechanics; the 'user-interface consciousness' needs to function as a mechanism (though that does not make it a material object).
I do not follow. This all seems baseless speculation, anyway.

How do you make your decisions, then? And how are you certain that what you do is right?

(Hmm. If you cannot cope with uncertainty, it does not surprise me that you have turned to these ideas. They offer complete and utter certainty, without even needing any evidence. Still, finding a cognitive reason for your statements is not equivalent to refuting them. I am just making an observation, and perhaps you would like to consider it.)

My ideas purport that all material facts and all other consciousnesses are inherently uncertain and there is no way to ever make it otherwise; not exactly any refuge from uncertainty there.  The only certain thing is the existence of my appearances in themselves (apart from their supposed material cause), hence to answer your question: I make my decisions, ethical or otherwise, based upon appearances. 
Yes, but you do not need material facts, correct? You are giving the uncertain up for lost, and basing everything on the certainty of your thoughts alone. You are, in fact, thinking that minds are metaphysical; this is all you can know, and all you need to know. Right?
Logged
Quote from: King James Programming
...Simplification leaves us with the black extra-cosmic gulfs it throws open before our frenzied eyes...
Quote from: Salvané Descocrates
The only difference between me and a fool is that I know that I know only that I think, therefore I am.
Sigtext!

GoblinCookie

  • Bay Watcher
    • View Profile
Re: Is playing dwarf fortress ethical?
« Reply #191 on: January 26, 2018, 10:58:37 am »

Welp. You still didn't react to my response that DF characters are inferior and morally irrelevant. Guess you ran out of walls of text to throw at me.

A response has already been given, by many people, many times. 

The universe may well be unfair. Perhaps it has neglected to provide us with an objective basis for ethics. Perhaps ethics are all pointless. Saying "but that would be bad" isn't a good argument against that being the case.

If the universe were entirely fair then why would we need ethics at all?  Ethics requires, not in order to exist but in order to have any point in existing, something that is not as it should be.  In an automatically perfect world, where everything that happens has to be fair, why would anyone need ethical judgements of anything?  I am not talking about a world where wrong things simply do not happen but one in which they cannot happen. 

The thing about ethics is that it must have *teeth*, as it were.  That is because not only is life apparently unfair, but folks are rather attached to certain particular wrongs they commit; you have to be able to counter that 'motivation' with some conviction of your own, or else they will win. 

In the real world, we can never be certain about anything. We have to build an ethical system on fundamental uncertainty, or simply not build any.

Also, that does not follow. The mere fact that some things are more certain than others does not mean that the more certain things are a better basis for morality.

Additionally, it is still appearances by which Reelya assigns moral worth. It is just a deeper sort of appearance, one that you must investigate in order to see.

The trick is not to build your ethical system on the real world, because you cannot be certain about anything real.   :)

(All things moral are built on subjectives) -/-> (all subjective things have moral worth). That is confusing the superset and the subset. In other words, simply because all morally-important things happen to be subjective, does not mean that all subjective things are morally important.

Have you ever read Asimov's The Relativity of Wrong?

No I have not unfortunately.  I also never said that *all* subjective things are morally important. 

Minds are not, and cannot be, fundamental. They are far too complex. They must be made of smaller and simpler pieces. If I want to be as accurate as possible in my models, I should consider the pieces as well as the whole.

Minds are made of ideas, but really, on the whole, there are not that many ideas in your mind at any given moment.  So while minds are not entirely simple, compared to, say, the brain they are pretty simple. 

What is a memory? We can tell by noticing the difference between "having a memory" and "not having a memory." This difference is within the brain; the memory is stored in the connection of neurons. Similarly for all other quasi-fundamental mental objects (which may also be stored in other forms of biological information, such as hormones).

Yesterday I read a book called The Starfish and the Spider.  Back in the 1960s they assumed, based upon the basic structure of the mind (which, as materialists do, they took to be the same thing as the brain), that the brain had to be organized in a hierarchical structure, with memories all nicely assigned to a particular part of the brain. 

They found that this was simply not the case.  The solution was put forward by someone called Jerry Lettvin: instead of the brain being a centralized thing in which memories are stored as separate objects, perhaps the brain was a decentralized thing.  What this means in effect is that memory (and everything else) is not a thing stored in the brain; it is a consequence of the functioning of the brain as a whole. 

As I said, the brain is the projector and the mind the projection.  The mind (the projection) is centralized but the projector is decentralized, or as Doctor Who put it once, 'the footprint does not look like the boot'.   :)

I do not see any important difference, nor is there a way to check either definition's validity. This is meaningless philosophy.

It is a very important difference.  If you are the 'glass' then you are the same person you were yesterday before you went to sleep and are potentially immortal.  If you are the 'water' then not only is the afterlife out, but your existence also began when you woke up and ends when you go to sleep.  It also matters because if you are the glass then your existence can be said to be objectively the case, but if you are the water your existence is entirely subjective. 

This continues to not be the definition. Explaining away is when you show that an alleged object/entity/phenomenon, said to be responsible for a certain physical effect, need not exist - the effect is caused by something else. Explaining is when you show how an object/entity/phenomenon is made up of smaller things. See here for more.

I am using the exact same definition as you are, Dozebom.  What I am saying is that it is sometimes wrong to explain away things even when you *can*, which is when you have other evidence from other 'sources' that something is the case.  That is because if something *can be*, that does not mean that it *is*. 

A mindless universe is quite possible to model, and such a model does work.  We do however know that we do not live in such a universe, which is why any model that explains away the mind, using some mechanic other than the mind, is wrong.  Explaining away redundant theoretical things can be a good idea, but explaining away the things of which you are more certain with the things of which you are less certain is not.

This is an incorrect summary of my beliefs and words. I know that the mind exists, and do not ignore its existence. Rather, I seek to understand its functions and composition. The mind is made of of non-mental things, just as a mountain is made up of non-mountain things, and an airplane is made up of non-airplane things. In order to better know a mind, you must also know the non-mental things which make it up.

The brain is made up of neurons.  The mind is made up of ideas, neurons are not ideas. 

This psychoanalysis is incorrect, but beside the point. As I have said, showing an belief's proponents to be flawed is not an argument against the belief.

You can argue this about anything. Any belief could conceivably be held as an a priori, absolute, unreasonable belief - including yours. As such, this possibility is not an argument against any particular belief. (See Bayes as it applies to arguments.)

There is a difference between believing in something a-priori and disbelieving in things a-priori.   :)

A-priori beliefs are acceptable; a-priori disbeliefs are not.  That is because, since material reality is unknowable, you can always invent a contrived explanation to continue to believe in something.

Perhaps not, but it seems unlikely.

People thought that atoms were unsplittable, indeed that was the whole idea.  They were wrong. 

How do you know that?

Because people have already tried and found this not to be so.  The brain breaks down into neurons, the mind into ideas; neurons are *not* ideas. 

People's actions result from their neurology. There is no point where a metaphysical process leaps in and shifts an atom to the side, changing someone's actions. We can draw a casual chain back from "Bob raises his right hand" to "an electrical signal stimulates the muscles in Bob's right arm" to "an electrical signal is emitted by the brain down a particular set of nerves" to "a complicated set of signals is passed between neurons, which we describe as Bob deciding to raise his right arm" and so on.

Most things the brain does it does mindlessly.  The mind is not involved in the actual execution of the tasks it decides upon. 

I do not, in fact, take something's consciousness as an a priori assumption. I look at its behavior, and see whether it demonstrates a tendency to act to satisfy a certain set of criteria. If it does, then I call that "goals and decisions," and move onto the next criterion.

So in your reality it is never possible for two completely different things to realize the same observable outcome by using completely different means?

A mindless explanation does not make minds cease to exist, just as quantum physics does not make bridges and planes and mountains cease to exist, despite there being no term for "bridge" in the wave function.

Correct.  Both mindless and mindful explanations for the same things can coexist in the same universe. 

I do not actually remember this.

Think about phantom limbs, about how people who lose limbs can sometimes feel the limbs that they lost.  The point is that the mind 'does things' by manipulating an image of the body, not the actual material body, yet the latter must somehow respond to its image being modified.  It is rather a 'this is what I look like, now show me what I must do?', except of course the whole thing could be a lie, naturally. 

This is true, because they are the same thing. (Or close enough to be "hardware" and "software.")

Mind actually only exists when there is no software 'installed' on the brain to do a function.  Over the course of making the latest version of my mod, which is far more work than I had ever envisioned, I have used the copy function so many times repetitively in the same context that I am no longer aware of actually doing so.  That is to say, when I press the 'paste key', I usually find I have copied the relevant data even though I have absolutely no recollection or awareness of doing so (this includes the physical act of pressing the right mouse button). 

So the brain's 'software' is basically the anti-mind.  If the brain ever accumulates enough 'software' to execute all your functions without you, then kiss your conscious existence goodbye. 

Is it made of physical stuff? Then it's material. Is it unobservable? Then how do you know it exists? The only observable immaterial things are mathematical concepts, perhaps.

It is observable, but it is not material or physical.

Then how do you know there is even any cause?

That is an odd question to ask when we use a whole set of mechanics that are not physically observable at all to explain things.  Gravity for instance. 

No, I can't, because minds are real. What I can do is look at a person, and see what they do, and look inside them, and see what happens, and so on. Very little of this is "theoretical," and that's not even an insult like you think it is.

Minds are not real, minds are factual.  Real things are things that exist independently of the mind (or is that objective things?), minds do not exist independently of the mind.

In a purely material world, intelligence could still exist, and people could still think that minds exist. Therefore, (thought that minds exist) -/-> (minds are non-material). (See the contrapositive.) [That is, you're saying that A -> ~B, where A is mind-feeling and B is material world. However, ~(B -> ~A) => ~(A -> ~B), QED.]
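The bracketed step is just the contrapositive, and since there are only four truth assignments it can be checked exhaustively. A minimal truth-table sketch in Python (the names `a` and `b` here stand for the post's A and B; this is an illustration, not anything from the thread):

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    """Material implication: p -> q is false only when p is true and q is false."""
    return (not p) or q

# Claim used above: not(B -> ~A) entails not(A -> ~B).
# Check every possible truth assignment of A (mind-feeling) and B (material world).
for a, b in product([False, True], repeat=2):
    if not implies(b, not a):          # premise: B does not imply not-A
        assert not implies(a, not b)   # conclusion: A does not imply not-B
print("contrapositive check passed")
```

Only the world where both A and B hold triggers the premise, and in that world the conclusion holds too, so the entailment goes through.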

People could not think that minds exist because there are no people thinking anything.  You cannot think that minds exist unless you have a mind, but you can falsely conclude that other entities have minds when they do not. 

How do you know this to be true? Do you suppose, out of all possible mind-instantiations that are equivalent to yours, that none of them will be embodied in a world where your a priori belief is false? (See Bayes.)

I know this is true because whenever I perceive something external there is a probability that it is illusory.  Illusory or not, however, it is still a fact that *I saw* an appearance of something; that is to say, the reality behind everything perceived is a question but the appearances are not questionable. 

Excuse me? I am most definitely conscious, and my lack of a priori belief in fundamental consciousness does not invalidate my subjective experiences nor my moral worth.

Your argument makes no sense. By your own definition, there is no observable difference between Zombie Chatbots and Real Humans, since the only difference is an unobservable metaphysical source of consciousness. Therefore, nothing that I do can be evidence toward me being a Zombie Chatbot. (See Bayes.)

It is easy for you to be programmed to simply state you are conscious.  It is harder though to program you to actually demonstrate a comprehension of the 'unobservable metaphysical source of consciousness', especially if the programmer has no consciousness itself.  By consciousness the zombie does not mean actual consciousness, it means the behavior that it must exhibit in order to trick the universe's actual consciousnesses into falsely ascribing it consciousness. 

This has the interesting consequence that one way to determine experimentally whether you are really dealing with a philosophical zombie or not is to draw a distinction between a fake and a real consciousness, with a description of both.  This will ferret the fake consciousness out, unless its programmer is actually a conscious being and specifically programmed a special script into the zombie such that it would always parrot the correct answers. 

It does? So anything that exercises control has a Metaphysical Consciousness Source, or will the Zombie Masters come and install a Fake Consciousness Source in every AI I design?

I don't see how you could possibly obtain any of this "information" yourself to any degree of reliability.

The most reliable evidence is my own experience, as already discussed above.  That is because all material/objective/real things are possibly fake/illusory and all other people are possible philosophical zombies. 

It is a mechanism, but not one that can be changed without affecting the physical universe. This mechanism is the same sort of mechanism as an engine - it's a physical thing that does things. It can also be broken down into non-mechanism parts.

You are still not comprehending the idea that two things can do the same things and look the same, without being the same thing? 

Only when? Are you saying that it is impossible to make a self-reflective thing without that thing having True Consciousness?

The word false consciousness means what it says on the tin.  Something that exhibits the observable behaviors of a true consciousness enough to trick the observer but has no underlying consciousness behind those behaviors. 

So this mechanic is physical, then?

The projector is physical but not the projection.  It is the decentralization of the brain (see above) that makes the isolation process very difficult to pull off.  It is not a single thing but a whole network of things you have to isolate, and you have to figure out, of potentially millions of connections, which set of connections is the essential one you need to give your Strong AI, so that you know you have actually made a genuine consciousness rather than a fake one. 

It is the appearance of calling something not-a-person that matters, in a consequentialist sense. Perhaps you treat chatbots well, but I doubt everyone does. This is a very bad idea.

What is wrong with treating chatbots well?

This seems like useless, meaningless philosophical gibberish to me. Have you found any actual evidence for this non-material universe yet, or are you still just asserting its existence?

To be evidence means to be perceived, that is to appear in the non-material universe.  There is no material evidence for anything ever. 

Alternatively, we cannot wish away mental objects because brains do not have conscious self-editing powers.

There are no brains in this scenario.  We are talking about there being no material universe, remember?  So yes, what you are saying is just what I am saying, but with the brain in particular rather than the material universe in general. 

No. I do not see how you are getting this. We experience some functions of the brain, not all. You are basically saying A is a subset of B, therefore B is a subset of A. That is not valid logic.

Exactly.  We only experience some functions of the brain because we are not the brain.  We only know what the brain tells us, and the brain reveals nothing of its own mechanics to us in what it tells us.

Do you think, in a purely material universe, all conscious beings would always have complete self-knowledge? And how do you know that?

If there were somehow consciousness in a purely material universe, then it would have to be material as well.  If the brain is the physical mind, then we will understand the mechanics of the brain, since that is what we can be aware of in this universe.  If we are less than the whole brain, there would be some part of the brain we understood even if not everything; but all parts of the real brain work using the same mechanics. 

Taboo "material", please.

I could say objective if you wish.  Except that subjective experience is an objective fact, so material is a better word to use. 

Consciousness is not a physical thing you can pick up, but it is a physical thing that happens in the physical world. It's the difference between a log and fire. It's a process, with physical causes and physical effects. No metaphysics involved.

In the physical world it appears only as a mechanic to explain the behavior of objects.  To itself (in the non-physical world) it appears as an object, while the physical world appears as a mechanic to explain it. 

That is reserved for concepts. Processes have locations.

Mechanics do not have locations.  Gravity is a mechanic and it does not have any location.  As for the rest, recall that consciousness is not happening in the brain; the outputs of conscious acts are a mechanic in the brain. 

How does the physical world affect the mind? And if you had never learned about Phineas Gage, would he have been (at least weak) evidence against dualism, in your view? (What would you have predicted beforehand?)

I would have projected that altering the projector would alter the projection in some unpredictable way.  That would in turn alter the decisions made consciously, which would in turn cause the mechanic effect of consciousness to change. 

You are misusing a mathematical term. Functions are maps from sets to sets (or a more general version of the same).

I have no idea what any of those mathematical terms mean in any case.   :-[

By function I mean either mechanism or object, I am looking for a word that includes both things that are physical objects and things which are mechanisms.  The word you seem to like (concept) excludes material objects. 

We do, do we? Would you care to share the reason for supposing the existence of a "projection" at all?

My existence.

Do you know what epiphenomenalism even is?

The material is not unknowable. It is the only knowable thing. It is not certain, but it is knowable.

Uncertain things are not knowable.  Even if you can increase the probability that something is the case to 99%, nothing stops the 1% alternative from actually turning out to be so. 

Yes, but you do not need material facts, correct? You are giving the uncertain up for lost, and basing everything on the certainty of your thoughts alone. You are, in fact, thinking that minds are metaphysical; this is all you can know, and all you need to know. Right?

I am saying that because ethics require certainty, it makes the most sense to base ethical judgements entirely on what is certain (appearances), rather than material facts that are always possibly going to be wrong.  To me it is right to sacrifice the apparent few for the apparent many, but not to sacrifice the apparent few for the theoretical many in effect. 

I need material facts because, barring the extremely improbable situation where somehow I am all-powerful but don't know it, I need to explain why my appearances, certain as I am of their existence, are not mine to alter as I wish; as I put it, the appearance of the absence of matrix spoon-bending is the evidential basis for material reality. 

KittyTac

Re: Is playing dwarf fortress ethical?
« Reply #192 on: January 26, 2018, 09:11:15 pm »

Can't stop me from killing elf children and telling their parents about it.

Bumber

Re: Is playing dwarf fortress ethical?
« Reply #193 on: January 27, 2018, 03:55:48 am »

Can't stop me from killing elf children and telling their parents about it.
Oh, yeah? Well, consider this: There is no elf.
« Last Edit: January 27, 2018, 03:59:33 am by Bumber »

Dozebôm Lolumzalìs

Re: Is playing dwarf fortress ethical?
« Reply #194 on: January 27, 2018, 03:58:26 am »

If the universe were entirely fair then why would we need ethics at all?  Ethics requires, not in order to exist but in order to have any point in existing, something that is not as it should be.  In an automatically perfect world, where everything that happens has to be fair, why would anyone need ethical judgements of anything?  I am not talking about a world where wrong things simply do not happen but one in which they cannot happen.
...I do not see how this responds to my point. Yes, things aren't always fair. That's what I said.

The trick is not to build your ethical system on the real-world because you cannot be certain about anything real.   :)
I do not have to be certain in order to judge and act. And if my ethical system is not firmly connected to the real world, what is the point? I'm not interested in creating the appearance of good, I want to make actual good things happen. I can't be certain that I'm not in some Bizarro world where good is bad and vice versa, but at least I have a chance of success, since I am able to conceive of true ethical success within my ethical system.

No I have not unfortunately.  I also never said that *all* subjective things are morally important.
I think it is a good short essay that interacts with the fallacy of gray, so I recommend reading it. At the very least, it will help you understand my point of view.

Also, your argument was just that something was subjective and thus it was morally important, right?

Minds are made of ideas, but really, on the whole, there are not that many ideas in your mind at the moment in any case.  So while minds are not entirely simple, compared to say the brain they are pretty simple.
Minds are more than the stream of consciousness, though, are they not? It's more than a snapshot of my current thoughts. Otherwise, you'd be dealing with the "never step in the same river twice" problem.

Yesterday I read a book called The Starfish and the Spider.  Back in the 1960s they assumed, based upon the basic structure of the mind (which they assumed, as materialists do, to be the same thing as the brain), that the brain had to be organized in a hierarchical structure with memories all nicely assigned to a particular part of the brain. 

They found that this was simply not the case.  A solution was put forward by someone called Jerry Lettvin: instead of the brain being a centralized thing in which memories are stored as separate objects, perhaps the brain was a decentralized thing.  What this means in effect is that memory (and everything else) is not a thing stored in the brain; it is a consequence of the functioning of the brain as a whole. 

As I said, the brain is the projector and the mind the projection.  The mind (the projection) is centralized but the projector is decentralised, or as Dr Who put it once 'the footprint does not look like the boot'.   :)
I do not think that research indicates what you think it does. There have never been nuggets of memory in our brains according to the materialists, never any physical objects to find. Instead, memory is an emergent property of the persistent traits of connected neurons.

The brain is messy, since it was made by evolution. It doesn't have neat little bins for each memory. But the memories are still stored inside the brain, just not in a formalized way.

It is a very important difference.  If you are 'glass' then you are the same person you were yesterday before you went to sleep and are potentially immortal.  If you are the 'water' then not only is the afterlife out but your existence also began when you woke up and ends when you go sleep.  It also matters because if you are glass then your existence can said to be objectively the case, but if you are the water your existence is entirely subjective.
Well, in that case, my sense of persistent self would indicate that my idea of "mind" fits better with the box interpretation than the contents interpretation. However, I disagree that the contents interpretation is not reconcilable with a persistent self. I also do not see how the box interpretation results in an objective mind and the contents interpretation a subjective mind.

I am using the exact same definition as you are, Dozebom.  What I am saying is that it is sometimes wrong to explain things away even when you *can*, which is when you have other evidence from other 'sources' that something is the case.  That is because if something *can be*, it does not mean that it *is*.
But I haven't explained anything away! There are still bridges even when I understand atoms! There are still minds even when I understand neurons! Understanding the fundamental workings of a world does not make the higher-level arrangements unreal.

A mindless universe is quite possible to model and it does work.  We do however know we do not live in such a universe, which is why any model that explains away the mind using another mechanic to the mind is wrong.  Explaining away redundant theoretical things can be a good idea, but explaining away the things of which you are more certain of with the things you are less certain of is not a good idea.
Taboo "mindless"? I think you might be using it as "universe with no entities possessing subjective experience," while I am using it as "universe with solely non-mental fundamental components."

The brain is made up of neurons.  The mind is made up of ideas; neurons are not ideas.
This is not a pipe? Yes, but that's like saying that Microsoft Word extends beyond the material world. In other words, either all abstractions/concepts are non-material, or none are.

Ideas are not physical, but they must be stored/represented somehow. Having a consciousness without a brain is like having Microsoft Word running on thin air.

Zooming in doesn't have to be physical. What do you mean by "happy" or "sad"? If you want to describe it further than synonyms like "it feels good," you have to start talking biochemistry.

Bailey and motte. Saying that ideas are not physical objects does not get you to the statement that neurology cannot teach you about minds, or whatever else you're saying about minds.

There is a difference between believing in something a-priori and disbelieving in things a-priori.   :)
Is there a meaningful difference? After all, all beliefs require you to disbelieve in their negations. (bivalence)

A-priori beliefs are acceptable; a-priori disbeliefs are not.  That is because, since material reality is unknowable, you can always invent a contrived explanation to continue to believe in something.
The material reality is not "unknowable." You might say these things, but if you want to know what time to arrive for an appointment, you won't sit around philosophizing about the fundamental enigmatic state of the universe, you'll check your physical calendar and look at the written symbols. And if you wrote it down correctly, you'll be on time.

Consciousness depends on experiences. Experiences build and correlate. If there were literally no connection between experiences, we'd be like Boltzmann brains. There must be a degree to which we can learn about the world around us. (Perhaps "the world around us" is not the True Reality, but I don't see how this refutes my point. *kicks a rock*)

People thought that atoms were unsplittable, indeed that was the whole idea.  They were wrong.
Yes, but they were more right than people who said that all was water. Quantum physics seems pretty accurate so far; I'd say there's a decent chance that there is no infinite recursion of ever-tinier subsub...atomic particles.

Because people have already tried and found this not to be so.
You're ignoring the entire field of neuropsychology.

The brain breaks down into neurons, the mind into ideas; neurons are *not* ideas.
A program breaks down into logical steps, not computer parts, and yet I could conceivably read off a Word document by going bit-by-bit over my computer's hard drive, if I so wished and if I had the right tools.
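That bit-by-bit recovery is easy to demonstrate in miniature; here is a hedged sketch, with a plain UTF-8 string standing in for a real Word file and a hard drive (the message text is invented for illustration):

```python
message = "neurons are not ideas, but they can store them"

# Low level: the text as raw bits, the way it would physically sit on a disk.
bits = "".join(f"{byte:08b}" for byte in message.encode("utf-8"))

# High level: walk the bit string eight bits at a time and decode the text back out.
recovered = bytes(
    int(bits[i:i + 8], 2) for i in range(0, len(bits), 8)
).decode("utf-8")

assert recovered == message  # the high-level object survives the low-level description
print(recovered)
```

The bit string contains nothing but 0s and 1s, yet the sentence is fully recoverable from it, which is the point of the analogy: a low-level description does not abolish the high-level object.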

Most things the brain does it does mindlessly.  The mind is not involved in the actual execution of the tasks it decides upon.
I feel like you missed my crucial point there. People's actions follow from their neurological activity.

As for your response, is the subconscious not part of the mind?

So in your reality it is never possible for two completely different things to realize the same observable outcome by using completely different means?
I do not assign consciousness to the means, but rather the function. What does it do? How does it respond if I say "hello"? How does it react if something sudden happens? Can it be fooled? Can it notice that it's being fooled? To me, everyone's specific brains are black boxes. All I care about is the output.

Correct.  Both mindless and mindful explanations for the same things can coexist in the same universe.
Explanations do not exist within a universe; they serve to explain parts and levels of a universe.

I can explain somebody's behavior by saying "they were mad" or "neurons 1, 9, 39, 20832, etc., fired and this particular neurochemical was released and this hormone is at X levels." The first is an abstraction, and a short word for a seemingly-simple output. It's like saying "my OS crashed", or "this piece of code caused a segfault." They're both true. One is more general, abstract, higher-level. That's all. The other does not invalidate the one.

Think about phantom limbs, about how people who lose limbs can sometimes feel the limbs that they lost.  The point is that the mind 'does things' by manipulating an image of the body, not the actual material body, yet the latter must somehow respond to its image being modified.  It is rather a 'this is what I look like, now show me what I must do?', except of course the whole thing could be a lie, naturally.
I see what you're saying. It sounds a bit like predictive processing, but maybe kind of not?

(Yes, Descartes.)

(actually Descartes came to the conclusion that a good God would not allow the Cartesian demon to deceive him and so this joke fails but my esoteric historical-philosophical trivia wins)

Mind actually only exists when there is no software 'installed' on the brain to do a function.
Thinking is a function, of sorts. It's a process.

Over the course of making the latest version of my mod, which is far more work than I had ever envisioned, I have used the copy function so many times repetitively in the same context that I am no longer aware of actually doing so.  That is to say, when I press the 'paste key', I usually find I have copied the relevant data even though I have absolutely no recollection or awareness of doing so (this includes the physical act of pressing the right mouse button).
That is like calling a routine. But not all software is calling routines.

So the brain's 'software' is basically the anti-mind.  If the brain ever accumulates enough 'software' to execute all your functions without you, then kiss your conscious existence goodbye.
This does sound like something that I've read somewhere, but keep in mind that this is conscious vs subconscious and not mental vs material. It's all mental-material. "I can identify words" and "I don't even have to consciously identify words, I just recognize their shapes and instantly move on" are both cognitive skills, done by the brain.

It is observable, but it is not material or physical.
Taboo "observation"?

That is an odd question to ask when we use a whole set of mechanics that are not physically observable at all to explain things.  Gravity for instance.
Gravity is not an explanation, it is an observation. (Except that "lifted things fall because F = GMm/r²" is sort of an explanation, but it's just one step of a chain of explanations.)

We think that spacetime is warped by gravity because we notice that time slows down around massive things. There is a neat mathematical way of representing all the various relativistic effects, and so we model it all with General Relativity.

To put my question differently, how do you know anything about the connection between the cause and effect, which would then mean that there is a cause?

Actually, what do you even mean by "causality"? I wouldn't say that gravity causes falling, but rather that there is a force exerted on bodies under certain conditions, which causes/is acceleration. This force's specifics can be predicted with an ever-more-accurate series of equations, from F = GMm/r² onward. I have now sort-of Tabooed "causality" in gravity, except for the force-acceleration thing (which I think is just an axiom of physics, and not what you're talking about). Can you conceivably do the same for your dualistic theory of minds?
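As a concrete instance of that "series of equations", the first step alone (Newton's F = GMm/r²) already predicts the Moon's acceleration toward the Earth. A sketch using standard textbook values (the constants are approximate reference numbers, not anything from this thread):

```python
G = 6.674e-11        # gravitational constant, N m^2 / kg^2
M_earth = 5.972e24   # mass of the Earth, kg
m_moon = 7.348e22    # mass of the Moon, kg
r = 3.844e8          # mean Earth-Moon distance, m

F = G * M_earth * m_moon / r**2  # Newton's law: F = GMm/r^2
a_moon = F / m_moon              # the "force causes/is acceleration" step: a = F/m

print(f"F = {F:.2e} N, a = {a_moon:.2e} m/s^2")  # a comes out near 2.7e-3 m/s^2
```

The computed acceleration (roughly 2.7 mm/s²) matches the Moon's observed centripetal acceleration, which is the cash value of calling the equation predictive rather than "causal."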

Minds are not real, minds are factual.  Real things are things that exist independently of the mind (or is that objective things?), minds do not exist independently of the mind.
"That which does not go away when you stop believing in it" is how I have seen it put, but your mind does not go away when you stop believing in it.

It is a word, anyway. It can be defined however we like. Its use here is so subtle and removed from everyday life that there are multiple valid ways of specifying reality. Can we stop quibbling over "real" vs "factual" and just discuss the actual concepts? My point was that I cannot explain away the mind, because the mind is a thing that is, and you can only explain away things that aren't.

People could not think that minds exist because there are no people thinking anything.  You cannot think that minds exist unless you have a mind, but you can falsely conclude that other entities have minds when they do not.
You have defined personhood as having Metaphysical Qualia, or whatever, but - we could conceivably simulate this, yes? Since the metaphysical world follows certain rules, right? Then we can imagine a hypothetical Material Person with the same sort of mind-function as a Metaphysical Person.

They would still have thoughts, because thoughts are material things. There is a measurable difference between your brain-state as it thinks different things. If thoughts were not physical, they could not affect the physical world. If thoughts cannot affect the physical world, then why are you talking about them? Your physical hands are twitching and talking about thoughts. They clearly have to be affected by your thoughts, then.

I know this is true because whenever I perceive something external there is a probability that it is illusory.  Illusory or not, however, it is still a fact that *I saw* an appearance of something; that is to say, the reality behind everything perceived is a question but the appearances are not questionable.
Do you think that all your philosophy about Metaphysical Minds follows directly from the fundamental fact of self?

It is easy for you to be programmed to simply state you are conscious.  It is harder though to program you to actually demonstrate a comprehension of the 'unobservable metaphysical source of consciousness', especially if the programmer has no consciousness itself.
As I've said - Bayes. How easy is it for anybody to comprehend the Unobservable Metaphysics of Consciousness? And how is a non-conscious thing supposed to create consciousness? Evolution did that, but how will this Zombie Master select consciousness from unconsciousness in order to fool others into thinking that its Zombies are Actually Conscious?

By consciousness the zombie does not mean actual consciousness, it means the behavior that it must exhibit in order to trick the universe's actual consciousnesses into falsely ascribing it consciousness.
What do you mean by Actual Consciousness, though? Why is that a good way to split minds up into categories? Does it carve reality at its joints?

This has the interesting consequence that one way to determine experimentally whether you are really dealing with a philosophical zombie or not is to draw a distinction between a fake and a real consciousness, with a description of both.  This will ferret the fake consciousness out, unless its programmer is actually a conscious being and specifically programmed a special script into the zombie such that it would always parrot the correct answers.
Wait, so Fake Consciousnesses are actually empirically distinguishable from Actual Consciousnesses? Huh, that changes things.

Let me reiterate something that I think you are not getting. Everything you say about consciousness can be traced back through physico-causality to neurons firing. There is no point where a Zombie Master or a Metaphysical Consciousness reaches in there and makes somebody say "I'm conscious!" You would do that anyway.

The most reliable evidence is my own experience, as already discussed above.  That is because all material/objective/real things are possibly fake/illusory and all other people are possible philosophical zombies.
How do you know that the fact that you experience things implies the existence of Metaphysical Consciousness Sources and Possible Philosophical Zombies and all that stuff?

You are still not comprehending the idea that two things can do the same things and look the same, without being the same thing?
They're not literally the same thing, no. I feel like you are lacking a certain concept of unimportant differences. You often say "ah, your analogy does not work, because the two things being compared are not the same thing!" That is not how comparisons work, and this is not how categories work.

Just because two things have some difference does not mean that they must be in different categories. If I categorize things based on their appearances, then whether or not two things are Actually Literally Fundamentally Identical, things that seem similar are grouped together.

The word false consciousness means what it says on the tin.  Something that exhibits the observable behaviors of a true consciousness enough to trick the observer but has no underlying consciousness behind those behaviors.
You still haven't sufficiently defined consciousness, though. Can something be self-reflective and yet not Truly Conscious? Can something demonstrate self-awareness and yet lack True Awareness? I don't remember you clarifying these.

The projector is physical but not the projection.  It is the decentralization of the brain (see above) that makes the isolation process very difficult to pull off.  It is not a single thing but a whole network of things you have to isolate, and you have to figure out, of potentially millions of connections, which set of connections is the essential one you need to give your Strong AI, so that you know you have actually made a genuine consciousness rather than a fake one.
What do you mean by "connection"? Is this interneural or interplanar?

What is wrong with treating chatbots well?
Nothing, but many people would probably disagree with you that chatbots should be treated well. Also, if it comes at a cost to actual people, I would not help chatbots, because they're literally just tiny pieces of code that spit back prerecorded messages on certain triggers. It's like treating... literally any random piece of code well, except that it will say "thank you" if you say "you're cool!" and most things will just sit there.
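To make concrete what "tiny pieces of code that spit back prerecorded messages on certain triggers" looks like, here's a minimal sketch; the trigger phrases and responses are made up for illustration:

```python
# A minimal trigger-based chatbot of the kind described above: a lookup table
# of canned responses, with a fallback. Nothing here reflects or reasons;
# it just matches strings.

CANNED_RESPONSES = {
    "you're cool!": "thank you",
    "hello": "hi there",
    "how are you?": "I am fine",
}

def chatbot_reply(message: str) -> str:
    """Return a prerecorded response for a recognized trigger phrase."""
    return CANNED_RESPONSES.get(message.strip().lower(), "I don't understand.")

print(chatbot_reply("You're cool!"))  # prints "thank you"
```

Whether something this trivial deserves "good treatment" is exactly the question at issue; the point is only that this is the entire mechanism of the bots being discussed.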

To be evidence means to be perceived, that is to appear in the non-material universe.  There is no material evidence for anything ever.
Perceptions are not immaterial. You perceive something when your neurons get entangled with it through your sensory organs. This is a material process.

Also, if you want to know how to make a bridge that won't fall over, you'll start re-inventing the idea of evidence pretty quickly in order to get the science of statics, materials science, etc. running again. In practice, material evidence is totally a thing. Look. *hits a rock with toe* I call that "material."

There are no brains in this scenario.  We are talking about there being no material universe remember?  So yes what you are saying is just what I am saying but with the brain in particular rather than the material universe in general.
We are? Looking back, I don't think that's what's been happening... I don't know anything about this supposed scenario.

Exactly.  We only experience some functions of the brain because we are not the brain.  We only know what the brain tells us, and the brain reveals nothing of its own mechanics to us in what it tells us.
Clarification: we are running on limited portions of our brain. We are not our entire brain, but we are nothing but our brain (and assorted other entangled things, extended mind, whatever, it's all physical).

If there were somehow consciousness in a purely material universe, then it would have to be material as well.  If the brain is the physical mind, then we will understand the mechanics of the brain, since that is what we can be aware of in this universe.  If we are less than the whole brain, there would be some part of the brain we understood even if not everything; but all parts of the real brain work using the same mechanics.
You're asserting that, but I don't see how it follows. The brain is not quite the physical mind - it's not like I've thrown the Metaphysical Consciousness into the material world and shoved it into people's heads and called them "brains." It's more like... the mind is the process carried out by operations within the brain.

I could say objective if you wish.  Except that subjective experience is an objective fact, so material is a better word to use.
That's more of a rephrasing. What I mean is...

Do you mean "made up of subatomic particles"? In that case, everything is partially non-material - are symbols and ideas made up of particles? Not quite. But that doesn't mean that they're anywhere else besides here.

The US is an institution. It's not really made up of atoms. It's an abstraction, a group. It's not on the Metaphysical Plane of Nations. It's just here.

In the physical world it appears only as a mechanic to explain the behavior of objects.
In the physical world, consciousness is a property/function of certain kinds of objects. It doesn't really explain behaviors. It's just "this is a thinking object". An object's Consciousness Boolean can be used to predict its action, yes, but that is not all.

To itself (in the non-physical world) it appears as an object, while the physical world appears as a mechanic to explain it.
I don't get what you mean by "object" and "mechanic," or even "explain."

Also, what "non-physical world"? What do you even mean by that?? Is it a place where things are? Is it an idea? Is it a state of being?

Mechanics do not have locations.  Gravity is a mechanic and it does not have any location.
What is a "mechanic"? And gravity is a force, and any given instance of gravitational force has a location.
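To illustrate the point that any instance of gravitational force has a location: in the Newtonian picture, force is evaluated at a specific point, and the answer depends on where that point is. A small sketch (the Earth figures are standard approximate values):

```python
# Newtonian gravitational force between two masses, F = G * m1 * m2 / r^2.
# The force only has a definite value *at* a given separation, i.e. for
# bodies at particular locations.

G = 6.674e-11  # gravitational constant, N·m²/kg²

def gravitational_force(m1: float, m2: float, r: float) -> float:
    """Magnitude (in newtons) of the force between masses m1, m2 at distance r."""
    return G * m1 * m2 / r**2

# Force on a 1 kg object at Earth's surface (~9.8 N), using Earth's
# approximate mass and radius. Move the object and the number changes.
force = gravitational_force(5.972e24, 1.0, 6.371e6)
```

This is only the Newtonian approximation, but it's enough to show that "gravity" as actually used in physics is always evaluated somewhere.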

As for the rest, recall consciousness is not happening in the brain; the outputs of conscious acts are a mechanic in the brain.
What is a consciousness act? How is its output a mechanic in the brain? Recall this - the brain is a physical, material object. It does not interface with metaphysics. It acts according to the strict laws of physics. (quantum mechanics is probabilistic, but strictly so - you cannot mess with probabilities any more than you can violate thermodynamics)

I would have projected that altering the projector would alter the projection in some unpredictable way.  That would in turn alter the decisions made consciously, which would in turn cause the mechanic effect of consciousness to change.
So in your dualistic model, there is a back-and-forth between mind and brain? How is this happening? How is the mind affecting the brain? How can it, when the brain is a physical object?

I have no idea what any of those mathematical terms mean in any case.   :-[
Eh, I think I came down a bit too hard on your case there. "Function" has a much vaguer but still valid meaning as "use" or "operation", come to think of it.

By function I mean either mechanism or object, I am looking for a word that includes both things that are physical objects and things which are mechanisms.  The word you seem to like (concept) excludes material objects.
This doesn't make much sense, though, since "function" pretty much means "use" or "what-is-done". It's an act or happening, or the kind of act or happening that is intended/possible for a thing. Objects are not happenings, and mechanisms are (I'm guessing) means to a happening.

My existence.
But your existence doesn't require there to be two interfacing planes. That doesn't really follow.

Uncertain things are not knowable.  Even if you can increase the probability to 99% that something is the case, nothing stops the 1% thing from actually happening to be so.
Nobody else, AFAIK, uses "knowable" to mean "can be known with certainty." Knowledge is probabilistic.

I am saying that because ethics require certainty
[citation needed]

it makes the most sense to base ethical judgements entirely on what is certain (appearances), rather than material facts that are always possibly going to be wrong.  To me it is right to sacrifice the apparent few for the apparent many, but not to sacrifice the apparent few for the theoretical many in effect.
But appearances are not people. That's comparing apples and oranges.

I need material facts because, barring the extremely improbable situation where somehow I am all-powerful but don't know it, I need to explain why my appearances, certain as I am of their existence, are not mine to alter as I wish; as I put it, the appearance of the absence of matrix spoon-bending is the evidential basis for material reality.
So "material" just means "stuff that isn't you", then?