An ant can think in some sense of the word, trundling along with its little pedometer clicking away as it checks inputs against an "is this food" filter...
That's a general school of thought, yes. The issue is that the more "primitive" versions of those things aren't necessarily considered "thinking". After all, an ant trundling around is basically just a collection of biological control systems trying to obtain food, avoid danger, and so on. Does a thermostat "think"? It's also just a few control loops.
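To make that concrete, here's roughly the entire "mind" of a thermostat as a toy sketch. Every name and threshold here is invented for illustration, not any real device's API:

    # Toy bang-bang (hysteresis) controller: one sensed input, one actuator
    # output, one comparison. That's the whole loop.
    def thermostat_step(current_temp, setpoint, heater_on, deadband=0.5):
        if current_temp < setpoint - deadband:
            return True          # too cold: turn the heater on
        if current_temp > setpoint + deadband:
            return False         # too warm: turn it off
        return heater_on         # inside the deadband: keep doing what we're doing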
Most of the complaints about the "AI" in the media today are that all we have so far is just really advanced control loops. Image recognition, language recognition and processing, pattern matching, sure, but at their core they are just optimization solvers or control loops.
My thoughts on the "mechanics" of it are that "thinking" or "consciousness" is itself another kind of control loop, or set of control loops, that creates and modifies the other control loops, including itself. That is, it's a controller whose "goals" are very abstract. It's not just a single loop saying "I'm hungry, must find food" but "Given that I'm hungry, thirsty, attracted to that person over there, enjoy music, etc., what should I do in the next minute? Ten minutes? Day?" It's a continually self-updating prioritization of all the other control loops, including adding and removing loops, roughly like the sketch below.
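In code, the shape I have in mind looks something like this. It's a hand-wavy sketch, every name is made up, and how reprioritize() should actually work is precisely the open question:

    # Sketch of a "controller of controllers": a meta-loop that re-weights,
    # adds, and removes sub-loops.
    class MetaController:
        def __init__(self):
            self.loops = {}              # name -> (controller_fn, priority)

        def add_loop(self, name, fn, priority=1.0):
            self.loops[name] = (fn, priority)

        def remove_loop(self, name):
            self.loops.pop(name, None)

        def reprioritize(self, state):
            # The interesting part: the priorities are themselves the output
            # of a control loop driven by abstract goals ("hungry", "enjoy
            # music"). Here it just reads made-up urgency values from state.
            for name, (fn, _) in self.loops.items():
                self.loops[name] = (fn, state.get(name + "_urgency", 1.0))

        def step(self, state):
            self.reprioritize(state)
            # Run whichever sub-loop is currently most urgent.
            name, (fn, _) = max(self.loops.items(), key=lambda kv: kv[1][1])
            return name, fn(state)

    mc = MetaController()
    mc.add_loop("find_food", lambda s: "walk to fridge")
    mc.add_loop("listen_to_music", lambda s: "press play")
    print(mc.step({"find_food_urgency": 0.9, "listen_to_music_urgency": 0.2}))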
Consciousness is that mechanism, or ability, or desire to decide what we are going to do. Deep-learning chess machines, Go machines, even autonomous driving machines don't have that: they can only do their one task. They can't even "decide" to apply their learning and forecasting machinery to any other problem space. To get "general AI", I think all we need is another deep learning machine whose job is to manage several other deep learning machines. The difficult part is that we have no idea what the training data for that would need to be. For humans, it came from survival pressure and/or divine influence (depending on your beliefs). But what would such pressures even be for an AI? What is the point, other than mimicry, of an AI being able to choose between playing Go and looking for cancer markers? An AI doesn't need rest, doesn't have to prioritize food, doesn't have reproductive pressure...
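Structurally, the "manager of deep learners" is trivial to sketch; it's the training signal that can't be filled in. Something like the following, where everything is invented for illustration and the specialists are stand-ins for real models:

    # Sketch: a manager policy that picks which specialist model to run.
    # The architecture is basically a bandit over tasks; reward_for() is the
    # part nobody knows how to define, which is exactly the point above.
    import random

    SPECIALISTS = {
        "play_go": lambda inp: "go move",
        "find_cancer_markers": lambda inp: "diagnosis",
    }

    def reward_for(task, result):
        # For an animal this is survival pressure; for an AI... what?
        raise NotImplementedError("no idea what the training signal is")

    def manager_step(inp, q_values):
        # Epsilon-greedy choice over learned per-task values.
        task = (random.choice(list(SPECIALISTS))
                if random.random() < 0.1
                else max(q_values, key=q_values.get))
        result = SPECIALISTS[task](inp)
        # q_values[task] would be updated here from reward_for(task, result).
        return task, result

Even in this toy form the manager is just another control loop; nothing in it supplies a reason to prefer one task over the other.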
It's my belief that, for this reason, we won't have general AI except in specialized situations like companion AI that needs to mimic humans. In reality we'll just end up with specialized AI control logic for various specific tasks.