They do not know that they're in a simulation. But the point is moot, because they're dumb as bricks and are not sentient at all. If Toady hasn't programmed them to be sentient, they aren't. It can't be otherwise. They can't think or feel, their thoughts are only an approximation. I don't really care either way and regularly go on rampages.
They know as much about the simulation that they're in as we know about the simulation that we are in.
I think that the point isn't metaphysical knowledge, as that could be easy enough to achieve given a sufficiently simple simulation. A "creature" with a single dimension of travel, that knows that it exists on a single dimension of travel and knows how and why to travel along it, could be said to know everything important.
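To make that concrete, here is a toy sketch of such a creature in Python (the names and the "goal" drive are my own illustration, nothing to do with DF):

```python
# A minimal sketch of the one-dimensional thought experiment above.
# The names (LineCreature, goal) are illustrative only, not from DF.

class LineCreature:
    """A 'creature' whose entire world is a position on one axis."""

    def __init__(self, position: int, goal: int):
        self.position = position  # where it is on its single dimension
        self.goal = goal          # where it "wants" to be, i.e. why it travels

    def step(self) -> None:
        """Move one unit toward the goal; the creature's whole model of its
        world (one axis, one direction worth moving) is captured here."""
        if self.position < self.goal:
            self.position += 1
        elif self.position > self.goal:
            self.position -= 1


c = LineCreature(position=0, goal=3)
while c.position != c.goal:
    c.step()
print(c.position)  # 3 -- it "knows everything important" about its world
```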
I also think that the point isn't knowledge of whether or not one is in a simulation, as that is an entirely moot point unless the machine running the simulation either malfunctions or else intentionally messes with the program.
So long as the same algorithm with a single path of execution runs on a Turing-complete machine capable of running it without interference, that algorithm cares not what machine it runs on. So long as the program executes properly, the medium of execution is irrelevant.
Or, for that matter, a given set of possible paths of execution (from, say, every possible result of a hypothetical random number generator) would remain the same set of paths, so long as the algorithm, the initial conditions, and the machine running the simulation are all maintained.
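As a toy illustration of both points (Python's seeded random module standing in for the hypothetical generator; obviously an assumption for illustration, not how DF actually seeds anything):

```python
import random

def run_simulation(seed: int, steps: int = 5) -> list:
    """One 'path of execution': a deterministic function of the algorithm,
    the initial conditions (seed), and the machine it runs on."""
    rng = random.Random(seed)
    return [rng.randint(0, 9) for _ in range(steps)]

# The same algorithm with the same initial conditions yields the same path,
# regardless of which conforming machine executes it.
assert run_simulation(42) == run_simulation(42)

# The full set of possible paths is just the set over all possible seeds;
# re-enumerating them reproduces exactly the same set.
paths_a = {tuple(run_simulation(s)) for s in range(100)}
paths_b = {tuple(run_simulation(s)) for s in range(100)}
assert paths_a == paths_b
```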
Perhaps more to the point, however, is the ability of an emergent algorithm (a "creature") to change itself relative to its environment, in a way that is not wholly dependent on that environment, e.g. creature "A" observes creature "B" attacking creature "C" over trying to take a resource... that is extremely plentiful.
Creature "A" can take a number of abstracted ideas from this scenario, which may be applied elsewhere. For example, taking the resources of another needlessly (or, for that matter, attacking another needlessly), is something to avoid oneself and discourage in others.
That, perhaps, is what I think sentience means in a world of hard determinism: self-aware modification of behavior in response to external circumstances.
...But to bring this around to DF, such is not really possible emergently. Only those exact issues/beliefs/practices that were programmed in can be decided on, and even then, they exist in a way totally abstracted from their supposed meaning.
For example, the ethic [ETHIC:TORTURE_FOR_FUN:ACCEPTABLE] lacks any possible nuance on the topic, and is effectively only a numerical value that civs can disagree on. Others such as [ETHIC:THEFT:PUNISH_SERIOUS] aren't much better, as they do only that: they govern how pre-existing creatures will react to your presence given their knowledge of your character's allegiance and deeds. The effects are produced by specific triggers, instead of arising organically.
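To put that flatness in code terms: the whole "ethic" boils down to a key and an ordinal severity that civs can merely differ on. This is a caricature of the point in Python, not DF's actual internals, and the severity scale is my own rough guess at the spirit of the raw values:

```python
# The entire 'moral stance' is a number looked up by trigger; no nuance,
# no context, no reasoning behind it.

SEVERITY = {  # rough ordinal scale, illustrative only
    "ACCEPTABLE": 0,
    "SHUN": 1,
    "PUNISH_SERIOUS": 2,
    "UNTHINKABLE": 3,
}

civ_ethics = {
    "civ_elves":   {"THEFT": "UNTHINKABLE", "TORTURE_FOR_FUN": "UNTHINKABLE"},
    "civ_goblins": {"THEFT": "ACCEPTABLE",  "TORTURE_FOR_FUN": "ACCEPTABLE"},
}

def reaction(civ: str, deed: str) -> int:
    """How harshly a civ reacts to a known deed: a single lookup."""
    return SEVERITY[civ_ethics[civ].get(deed, "ACCEPTABLE")]

print(reaction("civ_elves", "THEFT"))    # 3 -- punished
print(reaction("civ_goblins", "THEFT"))  # 0 -- shrugged off
```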
And this doesn't even touch on the fact that only creatures that are actively loaded and "running" are able to perceive things and/or react to them at all! After all, what is sapience if you are a conscious, free-willed being only while you stay within five meters of the player?
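Which is to say, something like this caricature (radius and names invented for illustration; DF's real loading rules are per-site/per-map rather than a literal five meters):

```python
# Perception simply isn't run for anyone outside the active radius.
from dataclasses import dataclass

ACTIVE_RADIUS = 5  # the "five meters" of the quip above

@dataclass
class Npc:
    name: str
    distance_to_player: float
    observations: list

    def maybe_perceive(self, event: str) -> None:
        # Only loaded ("running") creatures get to perceive anything at all.
        if self.distance_to_player <= ACTIVE_RADIUS:
            self.observations.append(event)


near = Npc("guard", distance_to_player=3.0, observations=[])
far = Npc("farmer", distance_to_player=300.0, observations=[])
for npc in (near, far):
    npc.maybe_perceive("theft in the market")

print(near.observations)  # ['theft in the market']
print(far.observations)   # [] -- the world simply didn't happen for them
```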