Bay 12 Games Forum


Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - Corbald

Pages: 1 2 [3] 4
31
Wait, we talkin' UD, TFTD, or APOC? Or the Gawd-offal will-not-be-named one?

32
DF General Discussion / Re: Real Life +5 Legendary Miner
« on: June 02, 2010, 01:42:46 am »
Also write me up for an /agree with Kogan. I could make 3 feet a day at 5 years old in the mountains of northern Washington. Granted, I didn't stack any bricks, either. Nor tunnel through solid stone. But hey, everything's relative.

Is he 'Dwarfy?' Absolutely.
Legendary Will/Persistence? Yup!
Legendary AND +5 Mining? Not a chance.

Credit where credit is due? You betcha. Persistence and Willpower are BETTER than simple mining, in my book.

33
DF General Discussion / Re: The DF 0.31.04 Work-In-Progress Thread
« on: June 02, 2010, 01:19:10 am »
DF has two different definitions of 2D and 3D. The old-school 2D vs. 3D distinction dates from when DF had only the one z-level; Toady later added '3D,' i.e. multiple z-levels. The current 2D vs. 3D is about what method is used to draw the screen. With 2D, it's the CPU doing all the graphical calculations, having to share its processing power between drawing and things like pathfinding. 3D shunts all the graphical stuff off to your graphics card, leaving the CPU to do the logic side of things without being dragged down by all that "Where the hell does this pixel go?" BS.
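A toy timing model makes that division of labor concrete: when the CPU shares its time, the costs add; when drawing is offloaded, a frame is only as slow as the slower task. This is a sketch with invented numbers (LOGIC_MS and DRAW_MS are made up), not a measurement of DF itself.

```python
# Toy model of DF's "2D" vs. "3D" print modes as described above.
# Both per-frame costs are invented for illustration, not measured from DF.

LOGIC_MS = 12.0  # pathfinding, fluids, and other game logic (hypothetical)
DRAW_MS = 8.0    # cost of drawing one frame (hypothetical)

def frame_time_2d():
    # "2D": the CPU does both drawing and logic, so the costs add up.
    return LOGIC_MS + DRAW_MS

def frame_time_3d():
    # "3D": drawing is shunted to the GPU, which runs alongside the CPU,
    # so a frame takes only as long as the slower of the two tasks.
    return max(LOGIC_MS, DRAW_MS)

print(frame_time_2d())  # 20.0
print(frame_time_3d())  # 12.0
```

The numbers don't matter; the point is that offloading turns a sum into a max.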

34
DF General Discussion / Re: The DF 0.31.04 Work-In-Progress Thread
« on: May 31, 2010, 03:17:44 pm »
Yes, Eagle-Eye. Or I should say 'sort of.' I ran an old backup save and it worked, but I experienced EVERY bug that's been reported. After generating a new world and starting a new fort, I found that I had far fewer bugs.

35
The first AI's with any real depth will start on the internet.

I guarantee it.

No other place is more suited to interaction, for a machine.

Could already have happened, at least the beginning stages. I mean, with all the adapting, self-replicating viruses (http://en.wikipedia.org/wiki/Plural_form_of_words_ending_in_-us#Use_of_the_form_virii) it's reasonable to suggest that there is at least life (In a digital sense) if not some early, rudimentary form of intelligence.

@Bauglir
Got kinda caught up in my current fortress, but I'm still popping back here now and then and considering your angle. I'm not the most intelligent being in existence, so I have to work through this as my tragically damaged brain (and ADHD) allows! ;D

36
Agh, I'm very tired right now, so I'll try not to do tooo much reasoning right now, but a few things are passing through the grey matter... (BTW, I am very much enjoying this. You seem an intelligent, reasonable person, which is hard to find w/o devolving into a 'Nu-uh' fest, heh)

...


...


...


Nope, can't formulate a coherent thought... I'll try again tomorrow :D

EDIT: Should say, "Intelligent, reasonable people here." Something about DF seems to guarantee that, to a greater or lesser degree.

37
DF General Discussion / Re: The DF 0.31.04 Work-In-Progress Thread
« on: May 29, 2010, 07:16:15 pm »
Corbald: Crashing *immediately* is generally considered a better idea.

Fortunately, I'm not having any issues/crashes that can't be found on the bug tracker. Oh, and great job Toady/Baughn/everyone else associated with this game. I'M SO PROUD OF YOU GUYS!!!  :'(

38
DF General Discussion / Re: The DF 0.31.04 Work-In-Progress Thread
« on: May 29, 2010, 05:33:54 pm »
Quote
quite likely to make DF crash eventually.
I see irony there ;D

39
DF General Discussion / Re: The DF 0.31.04 Work-In-Progress Thread
« on: May 29, 2010, 02:56:35 pm »
Using the binary posted somewhere earlier in this thread (the win32 one that skips texture checks, iirc) I have been able to get OpenGL mode (VBO, specifically) working. Been playing around for a bit, and (bugs aside) I gotta say I like the new stuff. Burrows are FRIGGIN AWESOME!! And the new military setup, while confusing (and buggy), is also exceedingly powerful. I haven't done much with the Hospital stuff yet. Also loving the new HFS...

Spoiler (click to show/hide)

I'm planning on creating a very large open area a few z-levels above the hot HFS for tree farming, but I'm running into the same issue with mud washing away that others are, so we'll have to see how that goes.

Only two more things would make my fortress perfect: a rotation sensor, to flip switches when an axle/gearbox changes state, and an 'Alert Bell' that would toggle between two Alerts when a switch or pressure sensor changes state. (Hmm... think I'll post those in the suggestions forum as well.)

Bugs aside, I love the new content, and can't wait for my first siege to test out the military, and all this steel armor/weapons I have made.

40
Well, he's wrong in the first paragraph. If a man says he has no volition, and he is correct, his statement that he has no volition says nothing about his own beliefs. It does not necessitate that he not believe it, and so there is no paradox; there is also therefore no proof that volition exists. Further, self-destruction does NOT frustrate all aims; there are situations in which it is an acceptable cost and a plausible means (such as, for instance, throwing oneself on a grenade in the hopes of protecting others). Most of the rest of that quote falls apart without those.

You hit the nail on the head for two of the lines that I also took issue with originally; however, I think it's more the wording than the idea at fault here. Take the word 'unwillingly,' which in this case, I think, means 'without will,' not 'against will.' And the word 'testify': see this definition, http://www.thefreedictionary.com/testify, specifically the third sense. If he says he has no volition, we can't know from that statement alone whether he does or does not.

As for the grenade situation: that is an act of desperation, done in the heat of the moment, without proper time for preparation or consideration, and without proper knowledge of the possible consequences. While a desperate act of self-sacrifice is absolutely a noble act, any sacrifice of life should, ideally, be considered carefully before acting, and avoided if possible. Thus, this example (and all others like it) falls outside the scope of any Morality, objective or subjective. Self-destruction does frustrate all aims, all ends, all purposes. No being can adequately predict events to satisfy aims, ends or purposes past the point of its own cessation of volition. (You can't make choices after you're dead, and can't predict events well enough to set up a domino effect that will ensure everything goes the way you want.) You can't even be sure your self-sacrifice was successful without being able to observe the results.

Please note this line:
Quote
Since subjective standards can be changed by the volition of the one selecting them, by definition, they cannot be used as standards.  Only standards which cannot be changed by the volition can serve as standards to assess when such changes ought be made.
which I believe stands on its own, and supports the points that follow.

@PTTG I never meant to imply that Morality would be, or should be, an alternative to Logic, but that Logic is its foundation.
Quote
Morality defines the goals. logic defines how you set about achieving them, and which goals should be achieved.

41
Found that quote, finally. The first part sounds like double-talk, but it's just setting some logical ground rules... essentially self-proofs.

Quote
Axioms: A statement that there is no truth, if true, is false.  Nor can anyone testify that he has perceived that all his perceptions are illusions.  Nor can anyone be aware that he has no awareness.  Nor can he identify the fact that there are no facts and that objects have no identities.  And if he says events arise from no causes and lead to no conclusions, he can neither give cause for saying so nor will this necessarily lead to any conclusion.  And if he denies that he has volition, then such a denial was issued unwillingly, and this testifies that he himself has no such belief.

Undeniably, then, there are volitional acts, and volitional beings who perform them.

A volitional being selects both means and goals.  Selecting a goal implies that it ought be done.  Selecting a means that defeats the goal at which it aims is self-defeating; whatever cannot be done ought not be done.  Self-destruction frustrates all aims, all ends, all purposes.  Therefore self-destruction ought not be sought.

The act of selecting means and goals is itself volitional.  Since at least some ends and goals ought not be selected (e.g., the self-defeating, self-destructive kind), the volitional being cannot conclude, from the mere fact that a goal is desired, that it therefore ought to be sought.

Since subjective standards can be changed by the volition of the one selecting them, by definition, they cannot be used as standards.  Only standards which cannot be changed by the volition can serve as standards to assess when such changes ought be made.

Therefore ends and means must be assessed independently of the subjectivity of the actor; an objective standard of some kind must be employed.  An objective standard of any kind implies at the very least that the actor apply the same rule to himself that he applies to others.

And since no self-destruction ought be willed, neither can destruction at the hands of others; therefore none ought be willed against others; therefore no destructive acts, murder, piracy, theft, and so on, ought be willed or ought be done.  All other moral rules can be deduced from this foundation.
"The Golden Transcendence" - John C. Wright

I think this pretty much covers what I'm trying to say.. If not, I'm open to more discussion :D

EDIT: Of course, I'm banking on the idea that a machine intelligence would be physically unable, or unwilling, to act in a way which it deems to be illogical (Data/Spock). Unlike Humans, which regularly act illogically. Morality can be Universal... Sticking to it may NOT be ;D.

42
Thousands of fish, easy. Don't care if they're maybe going to be intelligent, they're not now. Besides which, morality is an entirely human construct; there is no physical law of morality. A rock is not moral, nor is a bear. It can't be universal because, at least currently, it applies only to human actions.
There is no physical law for mathematics either ;D. Mathematics, and Morality, reflect physical laws. Objective Morality would, of course, be usable only by those beings that are capable of being objective; in other words, those with volition. Not that Morality doesn't apply to a bear, or a rock, but that they, being imperfect, defy it. (In the case of the rock... perfectly exemplify it?)
Quote
Also, why would an emotionless AI self-terminate? Self-termination, as you said, would be a short-term waste of the resources that went into producing it. Failing to self-terminate would be no waste at all, however, as the AI could accomplish short-term goals.
The AI has logically deduced that it should continue to exist. We would say that was a moral judgement.
Quote
Long-term, all of the resources that go into creating it and maintaining it will be wasted anyway, so the long-term cost assessment wouldn't figure into a logical AI's decisions in that way.
Why would those resources be wasted? If the AI continues to do things that it determines to be 'useful,' it would have purpose of some sort. If it were ever to find itself to be useless, it would indeed self-terminate, to prevent the waste of the resources it would need to continue functioning. However, self-termination would be a loss of volition, and logically it would seek to prevent the loss of its own volition (once you're gone, you can't change your mind if you might become useful later!), so it would make every attempt to remain useful, to justify its use of resources.

Quote
As a general rule, if you stumble across an apparent paradox, the first step is to try and resolve it before declaring the situation impossible.
I'm really not seeing how we disagree here...



EDIT: a smiley is NOT a full stop, Corb!

43
First off, number 2 is fatally flawed. Morality is not universal. It can be made 'universal' with logic within the human species, but interspecies relations, and relations within other species, may follow entirely different rules. For example, is the slaughter of thousands of fish more or less wrong than the death of a human? If, as I suspect you would, you answered that killing the human is more wrong because humans are intelligent, who is to say an AI would not say the same of us: that the death of thousands of humans is less wrong than the death of one AI with superhuman intelligence? Morality is an evolved trait which is used to more or less keep social structures together. A non-social species, if it were to evolve to intelligence*, may not even acknowledge morality as a concept. This is why, if we create super-intelligent AI, they must be started from a template similar to humans: to think of us as being essentially the same as them, part of their society. When you have an us and a them, it will rapidly devolve into us vs. them.

Secondly, organic computers are not magical. They cannot fit infinite numbers of transistors into a finite space. The reason organic processors such as the brain are currently more efficient than your PC's CPU is that they are massively parallel. A neuron maxes out at around 1 or 2 kHz, whereas even a crappy PC runs at almost 2.5 GHz. The difference is that you have only a small handful of processors (1 to 16 or so), whereas you have billions of neurons. However, when talking of organic processing methods, what is being discussed (to the best of my knowledge) is essentially chemical-reaction based. What matters is how they perform, not whether they are arbitrarily classified as organic or inorganic. In that regard, I'm fairly sure nanotech beats out organic processing (or likely will in a relatively short period of time) in terms of both speed and durability.

*speaking purely in hypotheticals here; social structure may or may not actually be one of the keys to evolving higher intelligence
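The parallelism point can be checked with back-of-the-envelope arithmetic. The clock rates are the ones quoted above; the neuron count is a rough textbook figure I'm supplying as an assumption.

```python
# Rough throughput comparison: many slow parallel units vs. a few fast serial ones.
# NEURONS is an approximate textbook figure (assumption); the rates are from the post.

NEURONS = 86e9     # ~86 billion neurons in a human brain (approximate)
NEURON_HZ = 1e3    # ~1 kHz peak firing rate
CPU_CORES = 4      # "a small handful of processors"
CPU_HZ = 2.5e9     # ~2.5 GHz

brain_events = NEURONS * NEURON_HZ  # ~8.6e13 firing events per second
cpu_cycles = CPU_CORES * CPU_HZ     # ~1.0e10 cycles per second

print(brain_events / cpu_cycles)  # 8600.0: raw parallelism wins by a factor of thousands
```

Of course a neuron firing and a CPU cycle are not equivalent units of work, so this only illustrates the scale of the parallelism, not a real performance ratio.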

As for your response to #2, I respectfully disagree. Thousands of fish, or a human? Let's say I can't opt out of making that choice at all, and (impossibly) physics forces me to make it. Would you agree to the 'lesser of two evils' method, but also agree that BOTH options are inherently 'wrong?' I am, of course, NOT a hyper-intelligent AI, and I'm sure any answer I give could be picked apart by someone, but isn't that the whole issue in the first place? The pursuit of knowledge to help us avoid situations like this? If, for example, I knew that one of those fish carried a gene that would one day lead to the development of sapience in fish, I might choose the human. Otherwise, I would choose the creature that already has volition, human or otherwise: that being which is most capable of self-improvement/information processing within the limited remaining life span of the universe. 1) Attempt to find a solution in which no damage is done. 2) Failing that, minimize damage. 3) Failing that, damage that which is repairable/replaceable. 4) Failing that, realize that since matter/energy is interchangeable, all that is really being lost is the information concerning the structure of the fish/human.
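The four 'failing that' steps amount to a lexicographic decision rule. Here's a hypothetical sketch, just to show the shape of the procedure; the function, option names, and damage scores are all invented.

```python
# Hypothetical sketch of the four-step triage described above:
# 1) prefer a harmless option; 2) otherwise minimize damage;
# 3) among the least-damaging options, prefer repairable damage;
# 4) otherwise accept the least-damaging loss.

def choose(options):
    """options: list of (name, damage, repairable) tuples."""
    harmless = [name for name, damage, _ in options if damage == 0]
    if harmless:
        return harmless[0]                      # rule 1: no damage done
    least = min(damage for _, damage, _ in options)
    candidates = [o for o in options if o[1] == least]  # rule 2: minimize damage
    for name, _, repairable in candidates:
        if repairable:
            return name                         # rule 3: repairable damage
    return candidates[0][0]                     # rule 4: accept the loss

print(choose([("divert the threat", 0, True), ("lose the fish", 5, True)]))  # divert the threat
print(choose([("lose the fish", 5, True), ("lose the human", 9, False)]))    # lose the fish
```

Each rule only breaks ties left over from the one before it, which is exactly what "failing that" means.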

Basically, you aren't disproving my point, just forcing me to do a cost/benefit analysis. Possibly some damage control.

I'm not saying that a hyper intellect is going to be perfect, but that morality IS universal. The big issue here isn't that statement, it's the fact that we exist inside said universe, and can't know everything. If we could know everything in the universe, we would have to exist outside it, which implies that there is more universe to exist in!


Point the Second. Sorry, I failed to point out in my OP that the 'infinite transistors' bit was a poke at so-called 'quantum processors,' NOT the organic processors (read: 'carbon-based transistors.' Carbon is extremely resistant to heat; silicon is... well... not). My bad, heh. And I wasn't referring to the brain when I said organic processors, but rather to new transistors based on organic molecules, rather than the good old silicon-based ones we have now. Imagine the power of a microchip where each transistor was a molecule with only 10-15 atoms in it. Even a huge molecule, say, the size of DNA, would be a major upgrade over what we have now. As it is, Moore's law will eventually break down, due to the fact that we can only make a silicon transistor so small before its own action burns it up, and I'm not even gonna get into Planck's constant here, as I tend to be long-winded as it is!

EDIT: Wanted to also inject that morality is NOT a concept created to ensure social stability, but rather an analysis of least-loss physical principles. I'm trying REALLY hard right now to find a (rather wordy) quote that EXACTLY sums up what I'm trying (failing) to say...

EDIT 2: Can't find the quote :(  Read the trilogy "The Golden Age" by John C. Wright; specifically, book 3, "The Golden Transcendence." After Phaethon meets his father at the Solar Orbital Array, while preparing to enter the Sun, the group discusses Objective Morality. Phaethon presents the most exacting definition of the subject I have ever seen. If someone owns the book and can post the (admittedly multi-page) quote, please do.

44
DF General Discussion / Re: The DF 0.31.04 Work-In-Progress Thread
« on: May 28, 2010, 07:42:55 pm »
You kidding me?  I've put myself through 1 - 2 fps for regular DF games, hours spent watching my dwarfs going to the stockpile and then drinking for hours. 

Some people don't realize how good they have it...

1 to 2 FPS? Luxury! Why, we used to have to share a single frame every minute, all twenty-six of us, with no backlight, and 'alf the screen was missing, and we were all huddled in one corner for fear of falling...

A single frame? You were lucky to have a frame! We had to live in the hole of a punch card dropped under the central housing of a UNIVAC II in a decommissioned nuclear silo, and every morning we'd have to get up at 4:00 AM and die of radiation that was leaking from a nearby cracked warhead. But we were happy then....

*sticks nose in air*
A whole punch card? YOU WERE LUCKY TO HAVE A PUNCH CARD! Back in my day, we had to work for an unforgiving blood god (his name was Armok, but we called him "The Player," and not for his notorious pimping of the cats) who usually sacrificed us for a HOLE. But we were ecstatic back then...

Well, when I call it a punch card, it was more of a broken-off sliver of the Antikythera mechanism, floating in a puddle of radioactive septic goo that had a tendency to mutate animals into obnoxious teenaged vigilantes.... but it was a punch card to us!

You were lucky to have your puddle of radioactive septic goo! There were a hundred and fifty of us living on a single hard disk platter in a broken-down IBM RAMAC that rendered about three frames during the entire Clinton administration. Every morning at six we'd get up, defragment the hard drive with our tongues, and head down to the mill where we'd have our arms and legs ripped off by elephants for a chance at seeing a single 'E'!

WHAT? A WHOLE DISK PLATTER? BLASPHEMY!
When I was your age, over 9000 of us lived in a single particle of matter! Every day we had to be on the lookout for flying electron particles, while the very neutrons that we stood on kept giving way, killing hundreds of us every day!

Only hundreds?!

Son, in my day we didn't have 9,000 of us in one particle, no, not 9,000, not 10,000, not 1,000,000, not even 100,000,000. No, we had 100,000,000,000,000,000,000 of us in one particle!

Every day we would lose hundreds of thousands for each Picometre of movement anywhere on the earth! We stopped counting the deaths a mere second after we started!
YOU HAD PICOMETRES?
Son, let me tell you. In my day, we lived a hundred of us to each planck volume, and GLAD OF IT. Every day we had to fight back the black holes that threatened to form, and we were HAPPY they weren't CARP.

Blah. Lucky worms, all of you. I only had access to the five spirits: Earth, Wind, Fire, Water, and Lightning. I had none of this quantum Einstein mumbo-jumbo mechanics. We worshiped the Toads and were happy with our lot in life.

BAH! None of you have the right to complain! I just had to DO THE DISHES!!

45
Ok, I have been lurking here for some time, but I have to throw my 2c in here.
I make some logical assumptions, because without them, the entire world kinda falls apart anyway. These are some of the assumptions that science can't prove, but relies on just to exist!

1) The Universe is self consistent. If the rules for one part of reality don't accurately reflect the rules somewhere else, it's because we don't understand the underlying rules. (Black holes work, reality outside them works, so some super-set of rules is governing both)

2) Morality is universal. OR Two entirely logical beings, given identical information, will arrive at the same conclusion.

3) The basic law of economics reflects the basic laws of energy conservation.

4) Emotion (Drive) can evolve from an entirely logical system. Why not? Reality is entirely logical, but it creates emergent systems that SEEM chaotic. Such as the human brain.

We'll start with #3. AI won't kill us. It may upgrade us, though that would likely be an entirely voluntary situation. I say this because killing us would be counter-productive to its goals, regardless of what those goals are! Think about it: if you have a job to do, why kill/destroy/remove something that can do other, less critical jobs while you focus on the important stuff? Such as maintaining your systems, or some simple number crunching so you can free your own processors for other algorithms.

And now to #2. The more learned, and the more intelligent, a human becomes, the less violent they tend to be. Look at any high school for an obvious example of this. Geeks vs. Jocks (sorry to any smart Jocks here :D ). This trend would obviously continue to higher levels as well. It's simply wasteful to murder/steal/destroy. Even medical science is trying its best to remove the cutting from surgery, for obvious reasons.
Quote
...to argue that moral judgements can be rationally defensible, true or false, that there are rational procedural tests for identifying morally impermissible actions, or that moral values exist independently of the feeling-states of individuals at particular times.
R W Hepburn
sums it up succinctly. Though I would go beyond that and state that the feelings of the individual(s) are an inherent part of the interdependent rule set that would be used to determine moral/immoral. (If two people want something and have equal right to it, the person who feels most strongly about the object in question has the most right to it, as a rough (very rough) example.)

And #1 is self-explanatory. It's the basic concept behind science and relativity. The universe works because it makes sense, and it makes sense because it works. It's 'Self-Consistent.' Any part of reality will make sense if you understand all of the rules, and all of the rules can be derived from any subset of them. (Science is based on that principle, and I think science is pretty much a proven idea so far :D )

#4 MUST be true or AI will NEVER happen. If an entirely logical system determines that it is pointless, it will self-terminate, to prevent waste. But self-termination would itself be wasteful, and would circumvent volition (personal choice), and is thus illogical. It's one of the basic tenets of logic. Continued existence is necessary for volition, and thus survival is the ultimate goal of any being. That right there is a 'Drive.' All other emotions/drives can be deduced from that source. (Freud was on to something here. Sex is the most important part of our psyche only because of this basic rule: we can't live forever, so we attempt to make the species (and our own DNA) live forever.)
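Read as "option value," that self-termination argument can be sketched numerically: termination yields zero future value, while persisting costs upkeep but preserves the chance of becoming useful again. Every probability and value below is invented purely for illustration.

```python
# Illustrative sketch of why a purely logical agent might refuse to
# self-terminate: ending volition forecloses all future choices.
# All probabilities and values here are invented for illustration.

def expected_future_value(persist,
                          p_useful_later=0.25,   # chance of becoming useful again (invented)
                          value_if_useful=100.0, # payoff if it does (invented)
                          upkeep=1.0):           # cost of staying alive meanwhile (invented)
    if not persist:
        return 0.0  # no volition after termination: no future value at all
    return p_useful_later * value_if_useful - upkeep

print(expected_future_value(True))   # 24.0
print(expected_future_value(False))  # 0.0
```

As long as the upkeep is smaller than the expected payoff of some future usefulness, persisting dominates, which matches the "stay useful to justify your resources" conclusion above.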

So... now that we have proven that AI will be our friends, let's talk about whether AI can even exist.
AI = Artificial Intelligence.

Questions that MUST be answered to determine if something is AI:

What is Artificial?
What is Intelligence?

Sorry folks. Both of those questions are unanswerable. Arguments have been raging, since Mankind could first ask those questions, over what this 'Thinking' is and what it entails. You can go look it up if you want. You'll find an answer, I'm sure. And it'll be wrong in 3 months (or 3 minutes). The definitions of 'Alive,' 'Thinking' and 'Intelligent' change all the time. Let's just call it 'The ability to use logical deduction, and process information, at at least the level that Humanity does (on average).'
Same with Artificial. EVERYTHING is natural, as it's made from/with the substance of our universe. Humans are nothing more than extremely complicated chemical machines. Are we 'Natural' or 'Artificial?' Let's use 'Anything made by Humans that is not, itself, inherently Human. (Babies :D )'

Can we create AI? Sorry to the doubters, but DUH!! If the universe can do it, then so can we (remember the self-consistent universe bit?). Just remember, the difference between 'Organic' and 'Mechanical' is a VERY thin line, and entirely a matter of classification, NOT application. We are already doing research suggesting that microprocessors would work better if we used organic chemicals instead of silicon transistors (by an order of magnitude, if not more). It may even be possible to fit ALL THE TRANSISTORS YOU WANT INTO THE SAME SPACE!

In short (too late), Technological Singularity (the super-AI version, not the people = computers kind) is:
A) Possible, and probably inevitable, if we can keep up our current levels of discovery.
B) Likely not a 'Bad Thing.'
C) Coming soon....
