Back in March I put up the following testbed for path-finding:
http://idefix.uchicago.edu/~bhudson/src/astar_test.tgz

It reads a file (look at test-input for an example), stores it as a grid, then does 500 random A* searches. It's easy to do the same 500 random searches using a different algorithm. Ideally we'd have a library of input files, and a library of paths to find in those files.
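For reference, the core of such a testbed is small. Here's a minimal sketch of A* on a 4-connected grid — this is not the code from the tarball, just my own illustration (the `astar` name, Manhattan heuristic, and unit step costs are all assumptions):

```cpp
#include <vector>
#include <queue>
#include <cstdlib>

struct Node { int f, g, x, y; };  // f = g + heuristic, g = cost so far
bool operator>(const Node& a, const Node& b) { return a.f > b.f; }

// Returns the length (in steps) of a shortest path from (sx,sy) to (tx,ty),
// or -1 if unreachable. grid[y][x] != 0 means walkable.
int astar(const std::vector<std::vector<int>>& grid,
          int sx, int sy, int tx, int ty) {
    int H = grid.size(), W = grid[0].size();
    // Manhattan distance: admissible on a 4-connected grid with unit costs.
    auto h = [&](int x, int y) { return std::abs(x - tx) + std::abs(y - ty); };
    std::vector<std::vector<int>> dist(H, std::vector<int>(W, -1));
    std::priority_queue<Node, std::vector<Node>, std::greater<Node>> open;
    dist[sy][sx] = 0;
    open.push({h(sx, sy), 0, sx, sy});
    int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
    while (!open.empty()) {
        Node n = open.top(); open.pop();
        if (n.x == tx && n.y == ty) return n.g;
        if (n.g > dist[n.y][n.x]) continue;  // stale queue entry
        for (int d = 0; d < 4; ++d) {
            int nx = n.x + dx[d], ny = n.y + dy[d];
            if (nx < 0 || ny < 0 || nx >= W || ny >= H || !grid[ny][nx]) continue;
            int ng = n.g + 1;
            if (dist[ny][nx] == -1 || ng < dist[ny][nx]) {
                dist[ny][nx] = ng;
                open.push({ng + h(nx, ny), ng, nx, ny});
            }
        }
    }
    return -1;
}
```

Swapping in another algorithm for the 500 searches just means replacing this one function.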
It is in C++, which is of course very slightly slower than C, except when it's much faster.
User annotations are not my idea of a good time: users will get them wrong, and they've (we've) got more fun things to do than help the pathfinder.
A convex subdivision of above-ground space would be a fairly big boon for flyers. I'm not sure it would be for below-ground space, where much less is convex. We have to make sure our subdivision cells are sufficiently larger than single grid squares, or the subdivision buys us nothing.
We don't have to decompose geometrically, though. We have the whole field of graph partitioning to draw on. For example, here's one subdivision commonly used for clustering in multi-commodity flow problems and the like (i.e. network routing — which is pretty much what we're doing here, if you think of dwarves as packets):

Start a seed at a random spot somewhere in the graph. Build a cluster by absorbing neighbours of nodes already in the cluster until the ratio (number of neighbours) / (number of nodes in the cluster) drops below some small value. Intuitively, this means you grow until you hit a chokepoint. Repeat until every node has been put into some cluster. Now create a new graph in which the clusters are the nodes, with an edge between two clusters if the original graph had an edge between a node in one and a node in the other. Repeat until you have a small number of clusters, and you've built a clustering hierarchy.

Under various assumptions that I'm fairly sure we satisfy (I don't remember them exactly, so I can't check — but grids usually satisfy most of what you'd want), clustering runs in roughly linear time and yields a structure of depth about log n. The structure is linear in size, but depending on how you set the constants, particularly at the lowest level of the clustering, it's basically tiny. Edge weights (e.g. restricted zones) are easy to include and don't change anything fundamental: instead of (number of neighbours) / (number in cluster), use (sum of edge weights to neighbours) / (sum of edge weights within the cluster).
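To make the cluster-growing step concrete, here's a rough sketch under a few assumptions of my own: an adjacency-list graph, unit edge weights, and the lowest-numbered frontier node absorbed first (the `grow_cluster` and `eps` names are mine, not from any particular paper):

```cpp
#include <vector>
#include <set>

// Grow one cluster from `seed`, absorbing frontier nodes until the
// (boundary neighbours) / (cluster size) ratio drops to `eps` -- i.e.
// until the cluster reaches a chokepoint. `taken` marks nodes already
// claimed by earlier clusters; the full partition repeats this from
// fresh seeds until every node is taken.
std::vector<int> grow_cluster(const std::vector<std::vector<int>>& adj,
                              int seed, double eps,
                              std::vector<bool>& taken) {
    std::set<int> cluster{seed};
    taken[seed] = true;
    std::set<int> frontier;  // unclaimed neighbours of the cluster
    for (int v : adj[seed])
        if (!taken[v]) frontier.insert(v);
    while (!frontier.empty() &&
           (double)frontier.size() / cluster.size() > eps) {
        int v = *frontier.begin();   // absorb one frontier node
        frontier.erase(frontier.begin());
        cluster.insert(v);
        taken[v] = true;
        for (int w : adj[v])
            if (!taken[w] && !cluster.count(w)) frontier.insert(w);
    }
    return std::vector<int>(cluster.begin(), cluster.end());
}
```

On a graph made of two triangles joined by a single edge (the edge being the chokepoint), growing from a node in one triangle stops after swallowing exactly that triangle, which is the behaviour we want.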
Given the hierarchy, it's relatively cheap to find an approximately shortest path from s to t: find the least common ancestor of s and t in the hierarchy, and find a path in that cluster's graph. Then recursively find a path from s to the first "node" on that path (actually, itself a cluster), and so on. Each cluster is approximately round, so whatever path you find is pretty much a straight-line path through the graph — that's why it's approximately a shortest path. The path-finding time is near-linear in the length of the path.
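The first step of that query — finding the least common ancestor — is cheap if each cluster keeps a parent pointer up the hierarchy. A sketch (the `lca` name and the parent-array representation are just my assumptions about how you'd store the tree):

```cpp
#include <vector>

// Least common ancestor in the cluster hierarchy, given parent pointers
// (parent[root] == -1). The LCA cluster is the coarsest level at which
// we actually need to run a path search for this s,t pair.
int lca(const std::vector<int>& parent, int a, int b) {
    auto depth = [&](int v) {
        int d = 0;
        while (parent[v] != -1) { v = parent[v]; ++d; }
        return d;
    };
    int da = depth(a), db = depth(b);
    while (da > db) { a = parent[a]; --da; }  // lift the deeper node
    while (db > da) { b = parent[b]; --db; }
    while (a != b) { a = parent[a]; b = parent[b]; }  // lift both together
    return a;
}
```

With a log n depth hierarchy this is O(log n) per query, which is noise next to the path search itself.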
It doesn't seem hard to run this dynamically (maintain the clustering as you add or remove edges and nodes), but we never did work out the details.
There are two types of pathfinding we should be interested in: (1) going from s to t, where s is the dwarf and t is the dwarf's unique bed; (2) going from s to T, where s is the dwarf and T is the set of all booze barrels, of which there are many, strewn all over the place. What I mentioned above handles (1), but I'm not clear on whether it handles (2) nicely. Note that A* also only handles (1).
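For what it's worth, on the flat graph (2) falls out of plain BFS/Dijkstra if you simply stop at the first target popped — equivalently, add a virtual node connected to every barrel and search for that. The catch is that A*'s goal-directed heuristic has no obvious analogue when the goal is a scattered set. A sketch with unit edge weights (names are mine):

```cpp
#include <vector>
#include <queue>
#include <set>

// Type-(2) query: return the member of `targets` closest to `s`,
// or -1 if none is reachable. BFS suffices for unit edge weights;
// with weights you'd use Dijkstra, same early-exit idea.
int nearest_target(const std::vector<std::vector<int>>& adj,
                   int s, const std::set<int>& targets) {
    std::vector<int> dist(adj.size(), -1);
    std::queue<int> q;
    dist[s] = 0;
    q.push(s);
    while (!q.empty()) {
        int v = q.front(); q.pop();
        if (targets.count(v)) return v;  // first target reached is the closest
        for (int w : adj[v])
            if (dist[w] == -1) { dist[w] = dist[v] + 1; q.push(w); }
    }
    return -1;
}
```

Whether the clustering hierarchy can answer this kind of set query without falling back to a flat search is exactly the open question.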