I think we're too technologically primitive to create true artificial life just yet (at least, life that follows our definition of sentience) and we've got our heads jammed too far up our own arses to recognise sentient life if it didn't meet our conditions.
There is nothing wrong with reading/watching a shitload of sci-fi. But don't read too much into it.
Those presuming that "we wouldn't recognize sentient life because it wouldn't meet our conditions" don't have their heads up their asses? How so?
I'm not even sure that self-perception is a necessary part of intelligence.
Intelligence != Consciousness (also Self-perception != Self-consciousness != Self-awareness != ... a whole lotta other self-things). A dog / rat may display intelligent behavior, but are they conscious? Let's say intelligence is also one of the raw, enabling conditions.
Speech / communication (sufficiently sophisticated) would be yet another one. Let's say that for consciousness as we know it, there needs to be an above-critical number of enabling conditions.
Better to talk/reason about something we know/can imagine than to say "fuck thinking, reality will be different" and start pipe-dreaming about "what if the universe worked differently and we met a life-form so alien we cannot even imagine it... it would be different, somehow, like, I don't know, but it would really be different".

Anybody interested might want to read up on experiments with apes (who are able to learn sign language) and big parrots (who are able to talk - not just repeat, but understand the meaning of ~4000 words, recognize them when they hear them, and reply in context - in effect, hold a conversation). An interesting question is whether they discovered their concept of "I" (which they demonstrated) or whether humans imprinted it upon them by teaching them to communicate and therefore to think like them.
It would be interesting to select a big group of apes, train them individually (so they don't hold each other back / fall back into their natural behavior), then put them together and monitor the development of their brains / brain activity over the generations to come.
Closer to the topic, though: he's hyping the "artificial intelligence" too much for me to buy in. If Alife had been conquered, it'd be far bigger news.
He's not hyping it (quite the contrary: he's very humble and constantly says he doesn't know whether he'll succeed in creating something even self-sufficient). But he's gotten further than anybody before. What he's got (partly, for the time being) is something that, given sufficiently large storage/computing capacity, could develop into self-aware beings.
In a - of course - very simplified environment; but hey, if it works, the simulation could be gradually "deepened" to the point where you just replace the simulation with a robotic body & a set of sensors and ... (sorry for the obvious joke) get maimed because you are mistakenly interpreted as a danger ;-))))))
By not trying to "create intelligence by stuffing more and more data into a database and hoping it becomes self-aware", but by mimicking life as we know it, he might finally create something that doesn't need to be programmed - something able to "program itself", to learn.
Once you have a dog that doesn't age, you can always teach him new tricks ;-)