Whether or not a singularity is possible doesn't prevent us from acknowledging other consciousnesses, even if they inhabit bodies that aren't the same as ours.
It does make those consciousnesses less likely, though. It is sometimes helpful, when thinking about technological advancement, to imagine plotting it on some scale of our capability to do a given thing faster, or more efficiently, or at greater scale, or against whatever figure of merit you like. When you do, the rate of progress along that axis generally looks less like a smooth curve and more like a jagged wave: the really big leaps are generally preceded by, and interspersed with, smaller incremental advances that make them possible, often across a wide range of other capabilities. A singularity, then, is just a lot of those leaps overlapping to produce the temporary appearance of extraordinary advancement on many fronts.
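If it helps to see that overlap effect in the abstract, here is a toy sketch, entirely made up for illustration and not a model of anything real: a handful of logistic "leap" curves with staggered midpoints get summed, and the combined growth rate spikes wherever several of them happen to overlap, which reads, briefly, like runaway advancement.

```python
import math

def logistic(t, midpoint, steepness=1.0):
    """A single 'leap': slow start, rapid middle, plateau."""
    return 1.0 / (1.0 + math.exp(-steepness * (t - midpoint)))

# Hypothetical capability leaps with staggered midpoints (arbitrary units).
midpoints = [5, 12, 14, 15, 16, 25]

def total_capability(t):
    """Sum of all the leaps at time t."""
    return sum(logistic(t, m) for m in midpoints)

# Growth per unit time; it spikes where several leaps overlap (around t=15),
# which is the 'temporary appearance of extraordinary advancement'.
for t in range(0, 31, 3):
    rate = total_capability(t + 1) - total_capability(t)
    print(f"t={t:2d}  capability={total_capability(t):5.2f}  rate={rate:4.2f}  {'#' * int(rate * 20)}")
```

The rate column stays modest while the leaps are isolated, spikes where several midpoints cluster, and then falls back; that spike is the whole shape of a "singularity" in this picture.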
This bears on the potential for sapient AI because one of the drivers behind advancing artificial intelligence is recognizing how fundamentally simple a lot of what we do actually is, so that it can be reduced to something a relatively simple calculation can be optimized to do perfectly. To extend your analogy from earlier, this makes the AI of the foreseeable future less like our kids and more like our organs: they have a single job to do and a whole lot of variably generic components that have all been optimized to do it, with no real capacity to suddenly do other things. That's partly because, unlike things that evolved in meatspace, AI are given very limited sense data. Present thinking is that consciousness evolves as a way of integrating disparate stimuli; we only give them very specific types of input, so even if we make them arbitrarily complex, they won't do anything other than process it. The rest is simply that we get AI to do certain tasks precisely by learning how to simplify those tasks to the point where consciousness is unnecessary.
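For a deliberately oversimplified picture of what "reduce it to a simple calculation" means (the task, feature names, and weights below are invented for illustration, not anything anyone actually runs): a job that sounds like it needs judgement, say flagging suspicious messages, collapses into a weighted sum compared against a threshold once you understand which inputs matter, and there is nothing in that with any capacity to start doing something else.

```python
# Toy example: a "judgement" task reduced to a fixed calculation.
# Feature names and weights are invented for illustration only.
WEIGHTS = {
    "sender_known": -2.0,
    "contains_urgent_keyword": 1.5,
    "link_count": 0.8,
    "all_caps_ratio": 2.2,
}
THRESHOLD = 1.0

def looks_suspicious(features: dict[str, float]) -> bool:
    """A weighted sum against a threshold: that's the entire 'decision'."""
    score = sum(WEIGHTS[name] * value for name, value in features.items())
    return score > THRESHOLD

print(looks_suspicious({
    "sender_known": 0.0,
    "contains_urgent_keyword": 1.0,
    "link_count": 3.0,
    "all_caps_ratio": 0.4,
}))  # True: 1.5 + 2.4 + 0.88 = 4.78, which is above the threshold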
That said, there are tasks we'd like to automate that involve integrating disparate data sets into value judgements, and those would probably be solved optimally by something more complex, something that might eventually end up producing a system that could legitimately be said to want things. It is admittedly an edge case, but if it helps you feel better, there are contingencies in place that incidentally help ensure that efforts along those lines are accompanied by the ability to detect and accommodate unexpected complexity. In effect, if we ever do make an AI of the type that wants things, we'll also have advanced our ability to understand it and give it what it could conceivably want, simply as an outgrowth of making optimal use of it, and we'd certainly never delete it.
That's not to say there's a box in my office marked "in case of accidental hard takeoff to singularity, break glass", just that it's not very likely we'd get far enough along to make something like that without also having developed the capacity to detect it for what it is.
It also helps that AI is frankly way down the list of things likely to bring torch-bearing mobs to our doors, so there are already multiple layers of armed security between us (and therefore the hardware that could likely run it) and the AIphobes, just as a matter of sensible operating procedure. I can't think of anyone else likely to need this sort of thing who doesn't already have similar measures in place.