"Self-improving" does not mean "evolved".
Evolutionary algorithms exist, and they are useful for some problems, but they're basically dumb.
A self-improving AI... well, in the limit, if we can make an AI just a little bit smarter than a human, then that AI can write a still smarter AI, and so on. Think intelligent design, not evolution; this path also has a much better chance of maintaining invariants like ethics and whatnot. (See CFAI for details.)
That's the conservative assumption for futurism. For safety, the conservative assumption is that an AI can start self-improving at an intelligence level considerably below that of a human. After all, unlike a human, it might understand artifacts like source code, and programming in general, at an intuitive level.
When a human programs, it's as if a person without a visual cortex were painting a picture, pixel by pixel. It can be done, but we can't measure up to beings that really understand it at a more fundamental level.
Of course, that's just surmise. We don't have AIs yet. I'd be surprised if it turned out to be false, but I can't say for sure - yet.