DF Suggestions / Conflicts and Incomplete Knowledge
« on: November 12, 2009, 01:45:10 pm »
In DF Talk #4, Toady talked about how he'd like to have characters and entities develop more interesting conflicts than just "Our ethics disagree" (on the entity level) and "We like different things" (on the personal level). His solution was to give each individual a set of motivations, which would clash with other motivations and cause difficult decisions, rivalries, and wars.
Quote from: Toady
[...]they all want things, but there has to be conflict in those wants, not just between two people but within a person as well; their responsibilities might conflict with the things that they want. Then you need to be able to resolve those conflicts.
This works great for modeling situations where there are two (or more) sides, and they want different things, and they're eager to take actions to pursue their goals. Unfortunately, as Toady kinda sorta mentioned, it's not quite as effective in cases where both sides want the same thing and ought to be in agreement.
Quote from: Toady
[...] Right now [entities] bump into each other to do a peace agreement or whatever, but it doesn't really simulate what happened there. [...] You could simulate that event and of course it's more interesting when things go all wrong. I prefer to keep running it as a simulation where it just randomly decides based on if those two people really disagree, [...] rather than having it say 'there hasn't been a fight in this world for ten years so let's just start something'. Because the game could do that too. I'd like to lean away from that stuff but if a guiding hand is needed ... well, I should probably just improve the simulation.
This kind of conflict, the kind where productive peace talks suddenly go sour, where friendships turn to hatred, where things fall apart despite everybody's best efforts to the contrary, doesn't really follow the conflict-of-interests mechanic. There's more to these conflicts than just direct opposition, and so they'll never show up in the current model. That's a shame; a lot of the saddest tragedies and funniest comedies fall into this category. We're going to need a different conflict model.
As it happens, I have a few ideas about how this model could work. Over the course of my life, I've seen and experienced more than my fair share of personal drama, in varying degrees and from every perspective. In that time, I've come to notice that the great majority--perhaps as many as 90%--of these conflicts follow the same basic template: Offense, Interpretation, Response.
You have some (let's say two) characters, A and B, and a bunch of events that occur. ("Events" can be as simple as somebody saying something or as complex as the gradual decline and collapse of an empire.) Then:
- A receives information that B was involved in an event and is offended. (This information may or may not be true, but A believes it is.) E.g.: B says something to A that A finds hurtful; C tells A that B said something mean to C; A has a dream about B trying to kill him.
- A considers what he knows about B, and uses that knowledge to work out a feasible reason why B might have been involved in the event. (This interpretation is likely to be flawed, as A doesn't actually know B's true intentions.) E.g.: B must not want to be A's friend any more, and so is pushing A away; C must have wronged B somehow, so B is retaliating; B must be a demon, and so is controlling A's thoughts for fun.
- A involves himself in an event in a way that he feels would be an appropriate response to B's intentions, as A perceives them. (Note that A is not responding to the offensive event itself.) E.g.: A says something mean back to B; A invites B and C to lunch, each without the other's knowledge, in hopes that B and C will make up; A puts on a tin foil hat to protect his brainwaves.
- B learns of this new event and A's involvement and is offended. E.g. B is hurt by A's nasty reply to his joke; B is offended that A would try to trick him like that; B finds A's hat kinda creepy.
- The cycle repeats.
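To make the cycle above a bit more concrete, here's a minimal sketch in Python. It's purely illustrative; every name in it is a placeholder I made up, and the "interpretation" is just a canned guess standing in for the real machinery described later in this post.

Code:
# Purely illustrative sketch of the Offense-Interpretation-Response cycle.
# Nothing here corresponds to actual DF code; all names are invented.

def interpret(offended, offender, event, beliefs):
    """Guess why the offender was involved in the event, using whatever the offended party believes."""
    # Later in this post this becomes a constraint-satisfaction step over incomplete
    # knowledge; here it just grabs a belief on file, right or wrong.
    return beliefs.get((offended, offender), "doesn't want to be my friend any more")

def respond(offended, presumed_reason):
    """Produce a new event aimed at the *presumed* reason, not at the original event."""
    return f"{offended} acts as though the other party {presumed_reason}"

def drama_cycle(a, b, first_event, beliefs, rounds=3):
    offended, offender, event = a, b, first_event
    for _ in range(rounds):          # in the real thing, this runs until somebody lets it go
        reason = interpret(offended, offender, event, beliefs)   # Interpretation
        event = respond(offended, reason)                        # Response
        offended, offender = offender, offended                  # that response is the next Offense
        print(event)

drama_cycle("A", "B", "B said something hurtful", {})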
This model is surprisingly powerful. Go ahead, give it a spin on movie/play/story plots, historical events, and episodes from your own life. It won't cover everything, like those situations mentioned earlier where the sides have a legitimate disagreement, but it's still pretty damn good, if I may say so myself.
The real question here, however, is "How should we implement this in Dwarf Fortress?" While I don't have the same self-indulgent certainty about this that I had on the basic offense-interpretation-response model, I do think that I have some pretty solid ideas.
The crux of the concept is knowledge representation, and more specifically incomplete knowledge. Knowledge consists of a number of data (singular: datum) about various topics. There'll be three types of knowledge that a being can possess:
- Facts. These are definite statements about the world and its inhabitants. This includes data about all the stuff you'd find in the raws (e.g. creature definitions, ethics tokens, entity definitions), plus geography, history, entity properties, information about specific individuals, and lots of other stuff. This can easily be represented with a slew of boolean variables, one for each datum, although the possibility that a being could "know" a false fact makes this a little more complicated.
- Metaknowledge. Knowledge about what beings know. This can get pretty convoluted, like if A knows B's secret and B knows that A knows and A knows that B knows that A knows but B doesn't know that A knows that B knows that A knows. I'm not entirely sure how best to represent all this, since it's an arbitrarily large structure that's in constant flux, but Toady is a much better programmer than I am and I'm sure he'd have no trouble with it.
- Physics. If facts are knowledge about the game world, then physics is knowledge about the game itself. Data might include stuff like "Not eating makes you die", or "A king with strong links to his home town is X% more likely to respond to threats to that town." This allows beings to make deductions about the past and predictions about the future in interesting ways. This could be represented as a set of rules "X -> Y", a limited simulation of a small portion of the world, or more likely some combination of the two.
Spoiler: "more about knowledge" (click to show/hide)
Okay, so. Here's how it works. When creatures interact, they do whatever they'd normally do. Talk, fight, anything else that's in the regular rules. In addition, each participant "mentions" a bunch of data. "Mentioning" a datum adds it to the other participants' knowledge, along with some appropriate metaknowledge about the other participants and the datum. In addition, each datum mentioned affects each participant much as if he had encountered the subject of that datum directly. (For instance, someone who hates turtles would get a minor negative thought from someone talking about turtles.)
Spoiler: "more about mentions" (click to show/hide)
Now, lest you start to decry my idea as just a rehash of Clue with a bit of Dogs in the Vineyard and The Sims thrown in, here's the cool part. Each time a datum is mentioned, the participants try to figure out the reason for that datum. (This is the Interpretation step.) They do this by applying the Physics rules they know to the subject of the datum, to the datum itself, and to any metaknowledge generated by the mention. It's a pretty straightforward constraint satisfaction problem, but since nobody knows all the factors involved, any deductions will be naturally imperfect. They'll fill in as many of the factors as possible with their own knowledge, but anything else they'll have to guess at.
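A crude sketch of how that interpretation step might look, again on top of the toy classes above. A real version would search much harder and fold in metaknowledge too; the guess certainty is an arbitrary number of mine:

Code:
# Crude interpretation: find a physics rule that would explain the mentioned datum,
# reuse premises that are already believed, and guess at the rest.
GUESS_CERTAINTY = 0.3   # arbitrary: how much faith goes into a premise we had to invent

def interpret_mention(mind: Mind, datum: Fact) -> list[Belief]:
    """Return new beliefs (the presumed reason, plus any filled-in guesses) explaining the datum."""
    known = {b.fact: b.certainty for b in mind.facts}
    best: list[Belief] = []
    best_score = 0.0
    for rule in mind.physics:
        if rule.conclusion != datum:
            continue   # this rule wouldn't explain the datum at all
        # Prefer explanations that lean on things we already believe.
        certainties = [known.get(p, GUESS_CERTAINTY) for p in rule.premises]
        score = rule.certainty
        for c in certainties:
            score *= c
        if score > best_score:
            best_score = score
            best = [Belief(p, known.get(p, GUESS_CERTAINTY)) for p in rule.premises]
    return best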
When a participant decides that he's found the most likely reason for the datum, he adds the reason to his knowledge as a new datum. He also adds any data that he had to fill in to make the reason work. With so much guesswork involved, there's a chance he made a mistake somewhere, so he gives each datum some kind of "certainty rating" that says how certain he is that it's true. If applying a very certain datum to a very certain rule immediately implies a new datum, that datum will be very certain; if the reasoning requires a lot of guesswork, the result will be less certain.
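For the certainty rating itself, one plausible rule of thumb is just to multiply together the certainties of everything that went into the deduction, so a single shaky ingredient drags the whole conclusion down. That's my own guess at a formula, not something the game does:

Code:
# A guessed-at certainty formula: multiply the certainty of each premise and of the rule.
def derived_certainty(premise_certainties: list[float], rule_certainty: float) -> float:
    result = rule_certainty
    for c in premise_certainties:
        result *= c
    return result

print(derived_certainty([0.95], 0.95))          # one solid datum, one solid rule -> still ~0.90
print(derived_certainty([0.3, 0.4, 0.5], 0.8))  # a pile of guesses -> ~0.05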
The more certain a participant is about a datum, the more likely he is to use it in an interpretation; that is, he'll prefer to use a certain datum over an uncertain one, and an uncertain one over one that he doesn't "know" at all. The more a datum gets used in interpretations, the more certain it becomes, and the less certain any conflicting data become. What this all means is that it's very easy for a bunch of bad circumstances to collide and cause a disaster. One uncertain datum can influence someone's interpretation of other uncertain data until he's got a mass of mutually-reinforced suspicions, each baseless on its own, that all add up to absolute certainty about something that, from a distance, looks completely ridiculous.
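And the feedback loop could be as blunt as this: every time a belief gets used in an interpretation, nudge its certainty up and nudge anything that contradicts it down. The increments, and the stand-in conflict test, are placeholders of mine:

Code:
# Placeholder reinforcement: beliefs that get used grow more certain; conflicting ones erode.
def reinforce(mind: Mind, used: Belief, conflicts_with=lambda a, b: False) -> None:
    """Call this whenever the `used` belief participates in an interpretation."""
    used.certainty = min(1.0, used.certainty + 0.1)
    for other in mind.facts:
        if conflicts_with(other.fact, used.fact):   # a real version needs a real contradiction test
            other.certainty = max(0.0, other.certainty - 0.1)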
Then he acts on his 'knowledge', and everybody wonders "What the hell was that for? Is he angry at me?"
Spoiler: "some examples" (click to show/hide)
So, whaddaya think?