Bay 12 Games Forum


Author Topic: Dwarf Therapist (Maintained Branch) v.37.0 | DF 42.06  (Read 967935 times)

thistleknot

  • Bay Watcher
  • Escaped Normalized Spreadsheet Berserker
    • View Profile
Re: Dwarf Therapist (Maintained Branch) v.21.8
« Reply #705 on: June 21, 2014, 07:50:55 pm »

Preferences.

Back to the drawing board.

Splinterz has prototyped my proposals, and we even got skills normalized and corrected so that a skill-only role reports 0% when nothing contributes to the role % other than a padded skill value (a value that allows us to "normalize" the outputted rating). There's still a slight issue with preferences, but it's not a big deal.

The problem with preferences is that they're binary: a dwarf either has one or doesn't.

A preference isn't like a skill, where a role will generally have only one skill (though sometimes more, as with a combat dwarf). It is like a skill in that the distribution is skewed: dwarfs have preferences, there are many possible preferences, and a role can include many of them. The major difference is that preferences have no value in themselves until they are added to a role. Before a role calculation, it's just a yes/no question for a dwarf, which can't fairly be reduced to a % on its own.

In other words, I can't deskew preferences the way I deskew skills. I run through all skills and scale them from 0 to 100%, but I can't do that with preferences, since preferences aren't really comparable to each other.

A dwarf either has a preference or doesn't; I can't scale a dwarf's "preference" %, because the role defines which preferences matter. It does me no good to reduce a binary preference value to a % on its own.

In other words, I'm using a rank transformation on a matrix of attributes x # of dwarfs (and the same for traits and skills), but I can't do that with preferences unless I compute them from their roles (i.e. roles x # of dwarfs). Then I can calculate a normalized scale across all roles, but that's a different technique from the one I'm using for attributes, skills, and traits, which are done OUTSIDE OF ROLES.

So I'm not entirely sure how to approach preferences.

My BEST idea is to just add a straight additive value to the overall %, so the weight can affect the role by multiplying its preference score by the weight to give a +/- to the overall percent.

I find this undesirable, though, since no other category (skills, attributes, or traits) works this way, but I don't really see how else to scale preferences to a 0 to 100% range.

Another idea was to calculate the preference category value for all roles, exclude dwarfs that have 0 preferences for a role so we're left with dwarfs that actually have ratings, and run the result through an ecdf. That way I can identify the dwarfs with the highest preference grouping and give them the highest preference %.

This is a new idea: it would require going into the roles, finding the preferences defined, generating a quick binary count of the listed preferences each dwarf has, dividing by the preferences that actually exist in the population, running those values through an ECDF, and then applying the same deskew concepts used for skills.

Anyways, posted for anyone's thoughts.

Here's a pic of what the proposed role counts look like right now:

http://imgur.com/qMX9ETx

The biggest difference is that all the roles are now comparable left to right!
« Last Edit: June 21, 2014, 09:38:33 pm by thistleknot »
Logged

thistleknot

  • Bay Watcher
  • Escaped Normalized Spreadsheet Berserker
    • View Profile
Re: Dwarf Therapist (Maintained Branch) v.21.8
« Reply #706 on: June 22, 2014, 11:54:10 am »

I think I have a proposal for how preferences are calculated.

If a dwarf has a preference, its % value = 1 / (# of preferences in the role - # of those preferences that don't exist in the population) for each matching preference. This can be used as a simple additive/subtractive term alongside the role calculation, ((attributes * weight) + (skills * weight) + (traits * weight)) / (sum of weights).
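A minimal sketch of that formula, just to make the bookkeeping concrete (hypothetical names; here a role is just a list of preference names and the population is the set of preferences anyone in the fort actually has):

Code: [Select]
def preference_value(dwarf_prefs, role_prefs, population_prefs):
    # role preferences nobody in the fort can have are dropped, i.e.
    # (# of preferences in role) - (# that don't exist in population)
    usable = [p for p in role_prefs if p in population_prefs]
    if not usable:
        return 0.0
    per_match = 1.0 / len(usable)          # each match is worth 1/usable
    return sum(per_match for p in usable if p in dwarf_prefs)

# e.g. 2 of the 3 role preferences exist in the fort, the dwarf has 1 of them
print(preference_value({"likes iron"},
                       ["likes iron", "likes cats", "likes adamantine"],
                       {"likes iron", "likes cats"}))   # 0.5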

...

To take that number a step further and get a normalized, .5-mean-adjusted value, we could derive its % across the population compared with all other roles (i.e. dwarfs * roles, each role having a % calculated with the method above), then remove the 0%'s, run the rest through an ecdf, add the 0's back in, deskew, and replace the preference % used in role calculations with this new number.

Update:
With the potential for negative values (i.e. dwarfs that dislike a preference a role wants), we might not need to "deskew" at all.

We can run it from its negative values up to 100% through an ecdf conversion, and I think that would be it.
Negative values should give us a median of 0, and they can actually save us from having to deskew preferences... tricky, though.

That would most definitely work, since running the % across the matrix of dwarfs * roles would certainly produce negative values: some dwarf somewhere will dislike one of the checked preferences and come out negative for the preference category. That makes normalizing preferences easier.

That would put our "median" value at 0, so any dwarfs with matching preferences would be 50% or higher, and those with negative percents 50% or lower!

Update 2:
Preference weights would be applied internally as (# of matching preferences * their weights) / ((# of specific preferences within the population) * (sum of weights))

prior to being fed into the ecdf. This includes values flagged as negative. So if 'likes the outdoors' is a preference for the farming role with a weight of 3, DISLIKING the outdoors would give -1 (flag) * 3 (weight) = -3 when summing with the other preference matches, before dividing by the # of preference categories that exist within the population (DT already calculates and stores this number; the preference window groups preferences by their named 'category' when listing preference frequency within the fort).
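A rough sketch of that weighting step under the same kind of assumptions (hypothetical names; pop_category_count stands in for the preference-category count DT already tracks):

Code: [Select]
def weighted_pref_score(likes, dislikes, role_prefs, pop_category_count):
    # role_prefs is a list of (preference, weight) pairs for the role;
    # a match counts +1, a dislike of a desired preference counts -1
    total_weight = sum(w for _, w in role_prefs)
    score = 0.0
    for pref, weight in role_prefs:
        if pref in likes:
            score += weight
        elif pref in dislikes:
            score -= weight
    # normalized before being fed into the ecdf, per the formula above
    return score / (pop_category_count * total_weight)

# disliking the outdoors on a farming role weighted 3 contributes -3 to the sum;
# 10 preference categories exist in this made-up fort
print(weighted_pref_score(set(), {"likes the outdoors"},
                          [("likes the outdoors", 3)], 10))   # -0.1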

Update:

Now that I think about it, for mods with lower/higher skill learning rates than default: Maklak's skill simulation method produces a new 'interpolated' level that gets averaged with the current level, and that could be used to derive negative skill levels to feed into the ecdf. As it stands, skills that come out negative are truncated to 0, but if we could get a negative level out of Maklak's formula, we wouldn't have to deskew skills (only for mods that change skill rates; for default, unmodded Dwarf Fortress we still have to deskew).
« Last Edit: June 22, 2014, 12:32:31 pm by thistleknot »
Logged

krenshala

  • Bay Watcher
    • View Profile
Re: Dwarf Therapist (Maintained Branch) v.21.8
« Reply #707 on: June 22, 2014, 12:39:43 pm »

The DFFD link in the OP isn't working right now.  It looks like a problem with dffd and not just Therapist.  Is there an alternate download (github or something) we can use until DFFD is back to serving pages?
Logged
Quote from: Haspen
Quote from: phoenixuk
Zepave Dawnhogs the Butterfly of Vales the Marsh Titan ... was taken out by a single novice axedwarf and his pet war kitten. Long Live Domas Etasastesh Adilloram, slayer of the snow butterfly!
Doesn't quite have the ring of heroics to it...
Mother: "...and after the evil snow butterfly was defeated, Domas and his kitten lived happily ever after!"
Kids: "Yaaaay!"

thistleknot

  • Bay Watcher
  • Escaped Normalized Spreadsheet Berserker
    • View Profile
Re: Dwarf Therapist (Maintained Branch) v.21.8
« Reply #708 on: June 22, 2014, 01:02:47 pm »

https://github.com/splintermind/Dwarf-Therapist

Download zip


Sorry, I led you astray; that's only the source code.
« Last Edit: June 23, 2014, 12:52:49 pm by thistleknot »
Logged

krenshala

  • Bay Watcher
    • View Profile
Re: Dwarf Therapist (Maintained Branch) v.21.8
« Reply #709 on: June 22, 2014, 01:04:14 pm »

Quote
https://github.com/splintermind/Dwarf-Therapist

Download zip
Thank you.  I'm currently using 0.6.12, and felt it was well past time to upgrade. ;)
Logged

thistleknot

  • Bay Watcher
  • Escaped Normalized Spreadsheet Berserker
    • View Profile
Re: Dwarf Therapist (Maintained Branch) v.21.8
« Reply #710 on: June 22, 2014, 09:01:45 pm »

So we figured out the last piece of the puzzle: preferences. Basically, # of matches / # of preferences in a role, for all roles x all dwarfs, gives the set of values we rank-transform into %'s. Splinterz has already coded it and it works well. I'd expect good things soon ;)

thistleknot

  • Bay Watcher
  • Escaped Normalized Spreadsheet Berserker
    • View Profile
Re: Dwarf Therapist (Maintained Branch) v.21.8
« Reply #711 on: June 23, 2014, 10:03:20 pm »

This is how attributes are scaled before (raw values) and after our conversion. You'll notice that the rank transform tends to pull the distributions back toward each other at their tail ends, but the relevant ranking information between neighbouring values is maintained.

http://imgur.com/67JzUDm

Attributes compared are: Strength, Agility, Toughness, Endurance, Recuperation, Disease Resistance

over 45 dwarfs

Here's what 6 skills look like before and after the transform (before, using a simple lowest-to-highest value, aka a minmax conversion). Only non-0 values are shown.

Skills are: Logging, Carpentry, Woodcrafting, Bowyery, Mining, Masonry



http://imgur.com/Ipw68Ng - correction, that's ecdf vs ecdf deskew

http://imgur.com/FKjBS8A
« Last Edit: June 24, 2014, 11:46:14 am by thistleknot »
Logged

thistleknot

  • Bay Watcher
  • Escaped Normalized Spreadsheet Berserker
    • View Profile
Re: Dwarf Therapist (Maintained Branch) v.21.8
« Reply #712 on: June 25, 2014, 07:28:08 pm »

I thought we were really, really close to ready on this last beta/alpha, until I noticed a problem that was about to derail the whole new version.

A description of the problem can be found here

http://stats.stackexchange.com/questions/104637/ecdf-skewed-distribution-wish-to-mean-adjust-to-5

Almost every question I've asked on stats.stackexchange I have failed miserably to communicate properly, but I have found my own answers.

The problem is how the ECDF returns a %. It works fine when the data is mostly distinct and roughly uniform, meaning a majority of distinct, comparable values. But when a set of data is all the same, or mostly the same, whether those tied values are 0, low in the distribution, or high in the distribution has an effect on the %.

Rank() in Excel returns the ordinal position of a value within a list. If there is a tie, it returns the earliest position and skips the next position, since it has been taken up by the tied value. So if 2 values are tied at rank 3, the next rank displayed is rank 5. Likewise, if two values are tied at rank 36, the next rank is 38; if 3 values were tied at rank 36, the next rank reported would be 39.

What's confusing is ECDF isn't called a ranking function, it derives a % based on rank.  I assumed it worked with the same position as rank() in excel, but when I actually compared the two, I found that it was working with the earliest position, where-as ecdf was working with the last position of the tie.  It was like ecdf was rank's evil other twin, and you needed them to combine to make a centered value.

It was this effect that was having a huge impact on the way we were trying to "deskew" skills when most skill values were 0 and a few were non-0. On its own, the ECDF gave the non-0 values high %'s, but the 0's got a huge percent, which I back-end corrected with a very convoluted formula: pad a 0's value to just under 50%, and transform the rest of the non-0 values to above 50% based on an ECDF of just the non-0 values. I know it's complicated, but you don't have to remember it, because it's being removed thanks to our new insight.

Very complicated hack.

Well, our problem occurred when we had non-0 values from Maklak's skill emulation formula (plus some pre- vs post-weighting issues) for a starting embark of dwarfs with no skills: because they were being reported at skill level 3, they were getting a big boost from the default skill rate formula (only old-schoolers who really follow this thread will know what Maklak's formula is; this is in no way meant as a slight to mk, just a reference to his contributions and our desire to preserve them in any future work). I know it's confusing, but our deskew method assumed, and really needed, 0 values rather than minimum values. We were about to just replace the minimum value with 0 if it was also the median. Then we thought: what happens if there is one little value below the large skew? Splinterz found a null below a 0! So yeah, we had to come up with something.

Anyways... it was showing a bunch of 0-skilled dwarfs as really good fits for skill-only roles.

It was a conundrum. We had dwarfs with 0 skill, transformed to level 3 by the skill rate, being listed as a better fit for the job than other labors. I had to figure out a way to autocorrect it, and I was failing.

Then along came rank.

It autocorrected it for me. Rank takes values that are similar but low and gives them a % starting at the first position of the tie; the ECDF works the opposite way, returning a % from the last position. So I combined the two and found that low-value skews landed under 50% and the values above them landed at 50%+, which was exactly the behaviour we wanted.
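A minimal sketch of the combination, assuming the two pieces are an Excel-style RANK (first position of a tie) and a plain ECDF (last position of a tie), each divided by the population count:

Code: [Select]
from bisect import bisect_left, bisect_right

def centered_pct(value, sorted_values):
    n = len(sorted_values)
    rank_pct = (bisect_left(sorted_values, value) + 1) / n   # first tie position
    ecdf_pct = bisect_right(sorted_values, value) / n        # last tie position
    # averaging the two lands a run of ties in the middle of its % space
    return (rank_pct + ecdf_pct) / 2.0

# nine level-0 dwarfs and one skilled dwarf
skills = sorted([0]*9 + [8])
print(centered_pct(0, skills))   # 0.5  -> the tied zeros sit at neutral
print(centered_pct(8, skills))   # 1.0  -> the one skilled dwarf tops out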

This breakthrough should also be able to replace all the other convoluted formulas we had worked out for preferences.

I believe it will make the whole system more robust and centered, and 50% will now mean neutral: <50% = bad for the job, 50%+ = good for the job. There will no longer be columns of jarring reds, but instead columns of blanks, or 50%.

It also means the labor optimizer will treat a starting-embark population with no shearer skills (a skill-only role at the time of this writing) as a 50% drawn value rather than 0%. It's basically saying this person is neither bad nor good at the job compared to the rest of the population (they are all tied). This is an important distinction in the labor optimizer's behavior: no skill now means 50%. However, as soon as a dwarf starts to improve in that skill, you'll notice a 100% value for them and a ~<50% value for the rest. This means that during labor optimization, dwarfs who are truly bad at a job compared to the rest of the population will be scored lower than these neutral values, so the optimizer will assign neutral jobs before bad jobs. It also means that when looking at the screen, 50% = good, and your labor optimizer shouldn't be stretched into assigning values below 50% (i.e. trying to assign too many labors).

The way we derive %'s in this new setup is based on the comparable value of items within categories.

What's a category?  Attributes, traits, skills, and preferences.

So when we look at a category, we lay it out as a grid, for example:

Attributes
x = dwarfs, y = attribute names (19)
Traits
x = dwarfs, y = traits (~60)
Skills
x = dwarfs, y = skill names (~119)
Preferences
x = dwarfs, y = roles (~100+)


This gives us a large number of comparable elements of varying size that we can definitely relate to each other internally, within their own category, on a scale of 0 to 100%, but we cannot do it outside that category. So how do we do it? We use the ECDF/rank % conversion on each value compared to every other value within a category. You get very large datasets doing this: 53 dwarfs gave me 1007 comparable attribute values, which turned into comparable %'s that, combined with the %'s derived for the other categories, allow a distinct % for each and every combination. Even for very small populations, a starting embark will have 133 comparable attribute values to draw 133 distinct % values from. If your fort dies and you're down to your last dwarf, that one dwarf will still have 19 distinct % values for his set of attributes.
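A small sketch of that category-wide conversion (assuming numpy and reusing the tie-centered rank/ecdf average; names are hypothetical): the whole dwarfs x items grid is flattened, every value is ranked against every other value in the grid, and the %'s are reshaped back into the grid.

Code: [Select]
import numpy as np

def category_percents(grid):
    # grid: one category laid out as dwarfs x items,
    # e.g. 53 dwarfs x 19 attributes = 1007 comparable values
    flat = np.asarray(grid, dtype=float).ravel()
    order = np.sort(flat)
    n = flat.size
    first = np.searchsorted(order, flat, side="left") + 1   # rank (first tie position)
    last = np.searchsorted(order, flat, side="right")       # ecdf (last tie position)
    pct = (first + last) / (2.0 * n)
    return pct.reshape(np.shape(grid))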

Friggin' amazing right?

Here's how the %'s are derived from some standard numbers:

https://docs.google.com/spreadsheets/d/1gitnUzUyaROi-QroCXvXbY2raFBJTHZk7YCMWTlOHjw/edit?usp=sharing

Here's what it looks like raw vs scaled (left is raw, right is scaled; top is attributes, bottom is skills):

http://imgur.com/WlulpOh

What you're seeing is the distributions scaled based on their ordinal rank positions, centered within a % where ties exist.

The reason for the "squish" of values is how ordinal ranks work: every difference in value is only worth 1 point, whereas the raw values stretch over much larger ranges. For deriving %'s on a scale of 1 to 100%, though, this works perfectly, as it retains the ordinal positions and achieves a mean/median of .5, a min of ~1% and a max of ~100%.

The reason these individual comparisons don't have a mean of .5 (although the median of most of those skills is our <50% value) is that only the larger category (in this case "skills") is centered around a .5 median/mean. The elements within roles are subsection views of the attributes and skills in the larger grid of data we normalized. Ordinal positions within categories are respected (compare the median with the min and max). It cuts off excessive differences in values and tries to preserve them as a % based on the # of elements being counted.

However, in the case of a skewed distribution, the range in values was preserved and produced a mixture model for me.

http://m.imgur.com/KKljFOg

So you can see that 0 values sit below 50%, and there is a huge gap before the ~90% values start, which is entirely as intended. Those 90%'s represent the vast gap produced by the skew. It's hard to wrap your head around, but this achieves an overall output target of 50% when we apply the same methods to all categories, which allows for maximally centered comparisons when defining roles.

A final picture showing quartile comparisons of attributes, sorted by attribute median:

http://imgur.com/ki3fgyo
« Last Edit: June 28, 2014, 03:38:32 pm by thistleknot »
Logged

thistleknot

  • Bay Watcher
  • Escaped Normalized Spreadsheet Berserker
    • View Profile
Re: Dwarf Therapist (Maintained Branch) v.21.8
« Reply #713 on: June 27, 2014, 10:51:43 am »

[deleted]
« Last Edit: June 27, 2014, 10:59:44 am by thistleknot »
Logged

splinterz

  • Bay Watcher
    • View Profile
    • Dwarf Therapist Branch
Re: Dwarf Therapist (Maintained Branch) v.21.8
« Reply #714 on: June 27, 2014, 11:53:07 am »

i've been working with Thistleknot to make some fairly major adjustments to how role ratings are determined within DT. the main issues were that it wasn't possible to compare roles to each other in the gridviews, since they were drawn relative to each role, and that the priorities in the optimizer couldn't be set accurately because too much information was hidden to make decisions on what jobs should have what priorities.

here's a before and after picture of a chunk of the roles gridview:

old roles:
Spoiler (click to show/hide)
new roles:
Spoiler (click to show/hide)

what you'll notice is that the new roles can be compared against each other very easily at a glance. you can see immediately who would be your best pick for which jobs based on the roles.

this also ties into the optimizer, as now the priorities you set will be upheld, as long as you have a decent spread among the priorities (explained in the GUI). so if you want to ensure certain jobs are chosen before others, you can.

the other feature visible in the new role view is the shaded cells, which indicate whether at least one labor associated with the role is currently enabled, and yes, you can toggle them on and off.

Thistleknot has uploaded the last beta version we've been using to test, and i'd very much like to get any feedback before rolling it out as the next version. a lot of extra information is currently dumped into the log file, specifically when running the optimizer, so you can check things out there as well.

Download Vanilla Test Version @ DFFD

Maklak

  • Bay Watcher
    • View Profile
Re: Dwarf Therapist (Maintained Branch) v.21.8
« Reply #715 on: June 27, 2014, 01:47:52 pm »

The order of paragraphs is somewhat random and they're poorly edited, but I finally have some time and energy now.

Hm, this quantile normalisation would be an answer to a problem with weighting entrance exams I saw a few years ago (there were 5 tests with 50 questions each, everyone took 2 of them, then they multiplied the results, divided them by the means for the tests you took, and multiplied that by 2500, which gave the final score).

I do not think quantile normalisation is a good way to sort INSIDE a role (it loses information and just gives the order), but if you want to compare roles to each other (weighting them by priority, so that soldiers are more important than farmers), then yes, I think it would work to a degree. It has its anomalies, though, especially for a low population. If there is a big gap in skill (say one or more dwarves really good at it and one or more really bad), that gap won't show in the results. This is why I would prefer normalising everyone's score in a category to 0-100, where 100 is the current best and 0 is literally 0 (with no one being that bad). That way I would see a gap (if any) and make a more informed decision.

Quantile normalisation might work as part of aggregating skills, attributes and traits into a final score, I guess.

Another consideration is that quantile normalisation seems made for comparing a small number of data sets. You have so many that a rapid change of values within one (such as a squad dying) won't really show in the means for each rank.

For things like traits / preferences, I think you'll just need a vector of arbitrary weights for each role, then take a dot product and factor that into the overall rating. It's far from ideal, but it should kinda work. In other words, treat each trait as a 0 or 1 (or if you prefer, 0 or 100%), then multiply by arbitrary weights (defined for each role, most of them 0; they can be negative for undesired traits), sum up the results, and add that to the final score with a smallish weight.
If you want, you can even sort of scale this "dot product of traits and weights" to 0-100% by taking -sum{abs(weight)} as 0% and sum{abs(weight)} as 100%. Once you have that, just aggregate it with the skills and attributes into the final score.
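A quick sketch of what I mean (my own hypothetical names; trait flags are 0/1 and weights are per-role, negative for undesired traits):

Code: [Select]
def trait_rating(trait_flags, weights):
    # dot product of 0/1 trait flags with per-role weights
    dot = sum(f * w for f, w in zip(trait_flags, weights))
    # rescale so -sum(|w|) -> 0% and +sum(|w|) -> 100%
    span = sum(abs(w) for w in weights)
    return 0.5 if span == 0 else (dot + span) / (2.0 * span)

# one desired trait present (weight 4), one undesired trait present (weight -2)
print(trait_rating([1, 1], [4, -2]))   # (2 + 6) / 12 = 0.667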
Quote
If a dwarf has a preference, its % value = 1 / (# of preferences in the role - # of those preferences that don't exist in the population) for each matching preference. This can be used as a simple additive/subtractive term alongside the role calculation, ((attributes * weight) + (skills * weight) + (traits * weight)) / (sum of weights). [...]
Ah, you've figured out something like that already.

Quote
Max-min conversion = (x - min)/(max-min)
For attributes, it would be easy to scale them to the current highest, the maximum for the species (min{5k, 2*max_starting_attribute}), or just 5k. Then just give everyone an "attribute rank" of att/max, which would always be bigger than 0 (except in odd cases with mods).
My method is better because if the population is oscillating near the maximum (say the 1200-3000 range), they will all get high scores (66-100%), which will not hide information when aggregating with other criteria, and it results in a more accurate final score; I don't think it can be improved upon.

To me it seems that you fiddle with the numbers until you get something you like, without giving deeper thought to how or why. Looking at the last 10 or so posts, it is obvious to me that the results are mangled and I shouldn't trust them. This makes me much less eager to suggest anything. I agree with everything tussock said about it.

Another reason I don't contribute more is that while the problem is certainly interesting, and I learned some maths, optimisation and operational research during my studies, I don't have the energy or the time to work on it. It took me 10 days to even answer. But I admit that, skewed or not, a working labour optimiser would be a useful thing, especially for newbies or 150+ dwarves, where you don't really care anymore.

Quote
But a high Agility skill 15 miner will easily beat a high Strength skill 25 miner to the job and also mine faster than them until their endurance and persistence comes into play and various ones head off for a drink before finishing.
I've conclusively proven through tests that speed (factoring in agility, strength and the SPEED token) improves the speed of working at workshops. My conclusion is that (almost) all actions take a certain number of "turns" and speed lowers the delay between those turns. So speed should be a factor unto itself for any role, improving all roles equally.

Quote
Like, if my best dwarf is only 37% of the estimated maximum combination of numbers, that could be handy to know.
Yes, this. Also, if I have someone at 235% I don't really care that it's higher than 100%, just that this guy is really good. http://en.wikipedia.org/wiki/Udarnik (That was a joke.)

Quote
Normalising everything, it just seems like you're taking what little accurate data you do have and hiding it under more layers of ... I struggle to find a kind word.
Yep. Just keep it simple and scale by (value / max_value) for every number for everyone. Aggregating this will be more accurate than the complicated thing you're trying to do. Sure, you might not like having a 100% and a 0% everywhere, but I'd find it more informative to know that all my guys kinda suck at this job, so I can pick the most useless ones, or that I have a group of good candidates, then a gap, then poor ones.

Your typical response to criticism is "but we're working hard and our formulae work, how dare you say they have no merit". Well, my answer is that this is not how I would go about solving this problem, but you're free to try. Yelling at you would only confuse you and make me look like an asshole.

Communication with you is also difficult. For example, your graphs and streams of pre-processed numbers aren't adequately described, and I often can't even guess what I'm looking at. A graph of 1-0 on the y axis and what I assume to be spreadsheet row number on the x axis tells me almost nothing. Well, if those are ordered, then the ecdf is at least monotonic, while the orange line looks like random noise. But this is really not the level of information I'd like to get from a graph, and I can't have an informed discussion without understanding what I'm looking at and what you're doing. At the very least, add a third line that's basically (value / highest_value_everywhere).

I do admire your dedication, though. I have trouble working on one project for a week. 

Finally, too much maths can be a bad thing. It introduces more work and more possible errors where a simpler method would suffice.

Quote
Almost each and every question I've asked on stats.stackexchange I have failed miserably to communicate properly, but I have found my own answers.

The problem is how ecdf returns a %.  It works fine when the data is somewhat distinct uniformly.  [Distinct] Meaning more or less a majority of distinct comparable values.  When you have a set of data that are all the same, or a majority are the same, and if these values are 0 or if these values are either low in the distribution or high in the distribution, can have an affect on the %.
Yep, this is pretty much my argument for why this skews the results compared to simple (value / max).

Quote
Rank() in Excel returns the ordinal position of a value within a list. If there is a tie, it returns the earliest position and skips the next position, since it has been taken up by the tied value. So if 2 values are tied at rank 3, the next rank displayed is rank 5. Likewise, if two values are tied at rank 36, the next rank is 38; if 3 values were tied at rank 36, the next rank reported would be 39.
Well then, learn some real maths software or a scripting language and stop using Excel. Matlab / Scilab / Octave should work pretty well.

Quote
It also means the labor optimizer will treat a starting-embark population with no shearer skills (a skill-only role at the time of this writing) as a 50% drawn value rather than 0%. It's basically saying this person is neither bad nor good at the job compared to the rest of the population (they are all tied). This is an important distinction in the labor optimizer's behavior: no skill now means 50%. However, as soon as a dwarf starts to improve in that skill, you'll notice a 100% value for them and a ~<50% value for the rest. This means that during labor optimization, dwarfs who are truly bad at a job compared to the rest of the population will be scored lower than these neutral values, so the optimizer will assign neutral jobs before bad jobs. It also means that when looking at the screen, 50% = good, and your labor optimizer shouldn't be stretched into assigning values below 50% (i.e. trying to assign too many labors).
I find this highly undesirable behaviour, and it is not how my formula worked at all. For a skill, it was supposed to give 100% only for level 20, then lower (but never quite reaching 0) values as the skill and its learning rate drop, and 0 for 0 skill and 0 learning rate.
Telling me everybody starts at 50%, then differentiating the values (but always giving me a 0% and a 100% from that point on), is highly confusing. I'd much rather see a 5-20% fit on unskilled dwarves, with the values eventually increasing as they skill up. I mean, if a role has a 12.4%, an 8.7% and 2 times 5% (4 dwarves total, for simplicity), that gives me a reasonably good idea of the situation when I compare it to another role. For example, I might have someone so talented with spears that it's worth making another squad. If all military roles show 0 for the current lowest and 100 for the current highest, I won't notice it.

Hm, I guess one idea would be to display a score based on (value / max), but highlight it green / red based on how good a dwarf is compared to other dwarves within the same role, so that in my earlier example the 12.4% guy gets a green and the 2 times 5% get a red.

For a labour optimiser, some roles should be more important than others (either by listing them in order or giving them weights). There should be a cap on how many roles a dwarf can have (or a max counter, with roles having different scores). There should also be a counter for how many dwarfs are needed for a role... and a lot of other variables, but from what I see, you just display the suggestions and don't autocommit them to DF.
Logged
Quote from: Omnicega
Since you seem to criticize most things harsher than concentrated acid, I'll take that as a compliment.
On mining Organics
Military guide for FoE mod.
Research: Crossbow with axe and shield.
Dropbox referral

thistleknot

  • Bay Watcher
  • Escaped Normalized Spreadsheet Berserker
    • View Profile
Re: Dwarf Therapist (Maintained Branch) v.21.8
« Reply #716 on: June 27, 2014, 02:17:54 pm »

Hey mk

All the posts you've read up to this point cover 3 different proposals leading up to the current method.

The current method uses an ecdf/rank function to draw a percent. There is no longer a problem with skewed data having a non-.5 mean after we derive a percent, since this new function draws a percent from the center of tied values (i.e. it returns the rank position of the center of a tie), which solved my ECDF skew problem. It also preserves gaps in values as gaps in percent, pushing values above the median above .5 and those below it, below .5.

I talked to a few software guys, and normalizing to a .5 mean is normal: half your data is below and half is above. When you have all your dwarfs at the same skill level (whether they're all level 20 or level 0), a player would see that all his dwarfs are equal because they all show 50 percent. That's just a behaviour of normalizing; it isn't bad in itself. It just means it makes no difference who you assign to the task, because they are all neutral for that skill.

I hope to address your entire post, but I'm at work. The minmax is a horrible idea, as it doesn't preserve any sense of a comparative median.

We also opted to normalize all attributes at once rather than individually (all 19 together vs normalizing each of the 19 separately). This places values closer together as they approach 0% and 100%, but it allows direct comparison between attribute types as well (i.e. rank position relative to each other). My last post before the (deleted) one is all relevant to how the 22 alpha works; the rest of the posts really document my journey to the current conclusion.
« Last Edit: June 27, 2014, 04:06:08 pm by thistleknot »
Logged

splinterz

  • Bay Watcher
    • View Profile
    • Dwarf Therapist Branch
Re: Dwarf Therapist (Maintained Branch) v.21.8
« Reply #717 on: June 27, 2014, 02:38:04 pm »

the biggest issue was that with value/max, when the different aspects of a role were aggregated (attributes, traits, skills, preferences), the weights applied to them ended up being fairly inaccurate because the underlying sets of data were badly skewed. so you'd have to do things like give skills very high weights to compensate for their very low value/max percentages compared to attributes or traits.

so the main work here was to equalize those sets of data, so that aggregation was more even across attributes, traits, skills and preferences. currently it's using a very simple method of taking the average of the ecdf rating and the rank rating. based on what you said about value/max i guess that's close (not a maths person). that's probably not sufficient for someone as technical as you, so i'll give you an example (assuming vanilla df) and hopefully that will make more sense. to determine a rating for a skill, the skill levels for the whole population's skills are put into a sorted list. a single skill level is then passed in, and it returns the average of the ecdf from the list and the rank / list total count. the same method is used for traits, attributes and preferences. now, as i've mentioned, i'm not too keen on the maths, so if you've got time, let me know why this is a bad thing.

i may end up trying to get speed in there as well, but after looking at some dfhack scripts, it's affected by a lot of variables (attributes, terrain, syndromes, curses, and more). but maybe to start just pulling the caste's speed token would help.

your skill simulation is still in there as well, so if you changed the weight on the skill rate, it actually does what it sounds like you expect: simulated fast dwarves are instantly identifiable due to higher role ratings.

the optimizer does allow you to specify jobs per dwarf and dwarfs per job (via a ratio). when you run an optimization, it toggles all the labors, and it's just up to you to review, if you want, and commit. auto-commit could be added as an option.

Maklak

  • Bay Watcher
    • View Profile
Re: Dwarf Therapist (Maintained Branch) v.21.8
« Reply #718 on: June 27, 2014, 03:19:42 pm »

OK, now I understand a bit more and disagree less with your methods, though I still don't quite agree with them.

Yep, (value / max{values}), or my formula combining value and skill rate for skills, would often result in pretty low skill ratings all around, which would have to be aggregated with a relatively heavy weight. But since I weight skill far above anything else when it comes to labours, I didn't have an issue with it.

Using the median as 50%, the highest as 100% and 0 as 0% sort of works, except for a few anomalies with a low number of dwarves, similar values, etc. For important skills and a 30+ population it should kinda work. Except I can fish out "average for this population" just by looking at (value / max), which also gives me a sort of "objective" measure of how good the current population is at some task.

I think we understand "skewed" data differently. For me, a population with 0-3 skill is not skewed; it is just how it is. For you, it is skewed towards lower values. For me, scaling these guys to 0 -> 0% and 3 -> 100% is what skews it, because it hides the information about how good they are objectively, not just relative to each other.

If your optimiser works best with 50% being the median of the current population, then I don't really have enough info to argue, but I would still prefer an alternate rating that I can choose to display, which is pretty much a weighted arithmetic mean of skill rating (with 100% only for skill 20), attribute rating (value / 5000, weighted for each role), speed and preferences. With all the work you did, this should be fairly simple to add.

Quote
When you have all your dwarfs at the same skill level (whether they're all level 20 or level 0), a player would see that all his dwarfs are equal because they all show 50 percent. That's just a behaviour of normalizing; it isn't bad in itself. It just means it makes no difference who you assign to the task, because they are all neutral for that skill.

A player would also see a bunch of 10%'s OR (in another fort) a bunch of 70%'s as roughly equal, without turning both groups into all 50%'s (where 50% means a different thing in each group).

Quote
i may end up trying to get speed in there as well, but after looking at some dfhack scripts, it's affected by a lot of variables (attributes, terrain, syndromes, curses, and more). but maybe to start just pulling the caste's speed token would help.
There should be a final speed / turn delay somewhere in DF's variables, unless it is calculated on the fly by a C function and not cached in the creature's struct. The important factors are the SPEED token, agility, strength, weight of worn equipment and injury (crutch walkers are slower), so I guess it makes sense if the dfhack library can't fetch it.

Quote
your skill simulation is still in there as well, so if you changed the weight on the skill rate, it actually does what it sounds like you expect: simulated fast dwarves are instantly identifiable due to higher role ratings.
Good to hear. I came here in the first place because my guys had 0 skill but vast differences in learning rates. Come to think of it, attributes have learning and decay rates too, but those are more difficult to factor in, because attributes train slowly for everyone except the military.

Well, you seem to have figured out what you want to do, and I'm not able to contribute much at this point. Maybe I should read up on this ECDF.

I was going to propose that the aggregate function raise sub-ratings to the power of their weights, multiply them, then take the root given by the sum of those weights, but that would promote all-rounders (must have decent skill, stats and traits), while a weighted arithmetic mean is more forgiving of really poor values when others are high enough to compensate. Plus its weights are much simpler to understand. Anyway, I think I'm blabbering and I'm tired, so goodnight.
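For reference, a tiny sketch of the aggregate I had in mind (a weighted geometric mean, with hypothetical names) next to the weighted arithmetic mean:

Code: [Select]
def weighted_geometric_mean(ratings, weights):
    product = 1.0
    for r, w in zip(ratings, weights):
        product *= r ** w                     # raise each sub-rating to its weight
    return product ** (1.0 / sum(weights))    # root given by the sum of weights

def weighted_arithmetic_mean(ratings, weights):
    return sum(r * w for r, w in zip(ratings, weights)) / sum(weights)

# a lopsided dwarf: great skill, terrible traits
print(weighted_geometric_mean([0.9, 0.1], [2, 1]))   # ~0.43, punishes the weak spot
print(weighted_arithmetic_mean([0.9, 0.1], [2, 1]))  # ~0.63, more forgiving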
Logged

thistleknot

  • Bay Watcher
  • Escaped Normalized Spreadsheet Berserker
    • View Profile
Re: Dwarf Therapist (Maintained Branch) v.21.8
« Reply #719 on: June 27, 2014, 04:13:39 pm »

Quote
"A player would also see a bunch of 10% OR (in another fort) a bunch of 70% to be about equal, without turning both groups into all 50% (where 50% means different thing in each group). "

Mk, this new method bases all its drawings on your fort's current data, so comparing one game's percents to another's is not going to work. Everything is relative to the current population.

Quote
"For me scaling these guys to 0->0, 3->100% is skewed because it hides the information about how good they are objectively, not just relative to each other. "

Initially, the skills were stretched from 50% to 100%, and 0's were set to just below 50%. With this update, though, their behavior depends on the size of the population. If you have a transformative suggestion for stretching them down toward 50% (if you think that's a good idea), I'd love to hear it, but it would have to not destroy our mean of .5 (harder than I thought when I first attacked the problem). Otherwise your suggestions might involve a large rewrite of the entire models we base our initial values on.

You may feel that way, but the way it's transformed is relative to its distribution amongst all skills in the population, which gives the non-0 values a truly low representation in the counts and turns them into a high % when present. It's similar to my 128 unique values out of 2400+: those 128 values sit at the 95%+ end of the distribution and are listed as such when counted against the 0 values. A missing skill will only nudge a role's overall mean a little, since its rating is ~<50%. Unfortunately, trying to account for it any other way would require a post-transform of the %'s, because we are using ranks, and transforming ranks generally retains the same rank positions after transformation (at least with linear formulas).

[Update] Skills are truly represented when compared against other skills. Skills are otherwise not an issue, because a skill only matters when compared with other skills; if a role doesn't use skills, its output won't be affected by the skill portion of the role.

But attributes scale from 0 to 100%, and so does everything else, each over the whole of its category: the whole of all attributes, the whole of all skills, etc. The only true comparative value for skills comes from comparing skill values against other skill values. This is where we get our percents.

To see the formula in action on a small set of data, here it is (in production we work with a much larger dataset, across the matrix of a given category). I used spreadsheets to verify this simple algorithm. It also has the behaviour of emulating mixture models, apparently.

Quote
""I was going to propose that the aggregate function raises sub-ratings to the power determined by their weights, then takes a root of that value of the power of sum of those weights, but that would promote all-arounders (must have decent skill, stats and traits), while weighted arithmethic mean is more forgiving about really poor values when others are high to compensate. Plus it's weights are much simpler to understand. Anyway, I thing I'm blabbering and I'm tired. so goodnight. "

Sometimes "keep it simple, stupid" (KISS) works, especially when everyone sees and understands how it operates. It more or less pads the value around a large run of ties by reporting the center of the % space occupied by the tied rank.

https://docs.google.com/spreadsheets/d/1gitnUzUyaROi-QroCXvXbY2raFBJTHZk7YCMWTlOHjw/edit?usp=sharing

By the way, I may be a little behind on my math, but I do have an educational background in statistics, so at least when I draw on my knowledge of statistics, ranks, and matrices, I can stand behind what I say.

Preserving ranks within an attributes matrix gives us 19 degrees of freedom of variation between rank values. That means they all start at relatively low rank positions, relative to their distance from the minimum, and they also approach 100% faster, because some distributions will have already reached their maximum position while others have not, so the remaining ones draw closer to 100% as the finished distributions stop being counted. The rest of the differences in spacing come from the differences in ranks among the 19 sets through the middle ranks, which retain a lot of the relationships between each other through their rank measurements.

This rank is converted to a % based on the count of the matrix.

This preserves rank position as % position when distributions are compared in weighted averages, which lets us maintain an output mean of 50%.

By the way, we preserved your formulas. The attribute potential formula is applied to the raw value, and the skill simulation formula is applied to the raw value (i.e. raw + simulated value based on the attribute upper limit or the skill rate formula, respectively), by taking the raw value and adding the simulated value in a weighted average, aka:

Code: [Select]
((raw value * weight) + (simulated value * weight) ) /sumOfWeights
This is the value that is fed into our rank function when calculating roles; the attribute potential and skill rate weights are applied in the options.

Just to note, I originally had a formula that mimicked the behaviour of our (rank% + ecdf%) / 2 by targeting a desired mean for skills, but it didn't work for non-zero (aka skill-emulated) values. I was about to throw in the towel until I realized I could probably apply some sort of slope formula to the ecdf %... then, when I realized I wasn't going to be able to do that easily (one idea was to factor down the median), I found that the rank % worked differently from the ecdf %. That's when I noticed how each of them reports ties, and that averaging them together gives the center between the two positions, which is an entirely acceptable result. When it reproduced the desired "padding" value I was using before, I realized I didn't have to do anything else: it auto-corrected skewed distributions, such as skills and preferences, for me. For example, out of 2400+ skill values, nearly all were ~0, with only 128 unique values overall, ranging from 0 to 21.3. The ~0 values were rated ~46%, and the rest were ~95 to 100%.

Weird behavior, but not when you consider that 128/2400 ≈ 5.3%: those ~127 non-0 values were rated in the upper 5% region, whereas the roughly 95% of the distribution taken up by the tie, running from about the 2nd rank position to around position 2272, was reported %-wise as the center of those two positions, which in this case was about 46.7%.
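To make that arithmetic concrete, a quick back-of-the-envelope check using the approximate counts above:

Code: [Select]
n = 2400                    # total skill values in the matrix (approximate)
tied_zeros = 2272           # the big run of ~0 values
rank_pct = 1 / n            # first position of the tie
ecdf_pct = tied_zeros / n   # last position of the tie
print((rank_pct + ecdf_pct) / 2)   # ~0.47, the ~46-47% reported above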

So this led me to believe that undesired values are reported as below 50% (in this case, at least), and desired values (those above the median) are above 50%, with an overall desired mean of ~50%.

BTW, I'm totally open to suggestions, improvements, etc. If you think this is a "bad" system, please explain why, offer a replacement, and suggest how we could use it. However, it's generally best for us to understand what is being implemented; otherwise we're coding "magic numbers", at least to us.
« Last Edit: June 28, 2014, 03:50:21 pm by thistleknot »
Logged