Roko's basilisk is just a variant of Pascal's wager that plays into the hands of Silicon Valley techbros, and they love to anthropomorphize their little toys. Be nice to our AI, and it won't torture your immortal soul. It seems divorced from any real problems that people might have, even the ones that could actually be enabled by so-called AI. Say the government wants to create a registry of undesirables - criminals, political dissidents, immigrants and their descendants, the neurodivergent, any number of sexual minorities - and track them. That's something AI can actually do: scraping their posts and detecting their faces.
If there's one thing that shows AI to be incapable of value judgements, it's the countless incidents where a chatbot says something inappropriate, misleading, or legally inadvisable. There isn't an actual solution to this, because it's all smoke and mirrors around taking a prompt and predicting the most likely sequence of words to follow. Of course, AI companies want you to think they've somehow managed to tap into some Platonic ideal of all truth, because it's more profitable to fool suckers and run a call center in the third world to clean up whatever mistakes they find in the output.
An AI that is able to perceive true forms rather than mathematical representations thereof is more fantasy than theory. We all want to imagine Commander Data, who is considered a person, is capable of moral intelligence, and advances the Federation ethos that all intelligent beings deserve respect. I'm not sure that creating true artificial intelligence in that sense is even socially viable - neural networks are already fundamentally unreliable, and people with power would prefer an amoral tool that executes their will with exactitude to an independent (and inhuman) actor. The crossbow did not choose its own targets; that was the archer's decision.
The last thread devolved into every rhetorical sleight of hand possible, because it's mental gymnastics to say that dwarves and sims have some moral nature that invites non-harm towards them and then boot up the game and put them through the wringer. The impulse comes from a place of empathy, but debating the ontology of it is misplaced.
It comes from the same place as people getting mad that their favorite character dies, attributing evil to the author for crimes against the imagined. There can be metatextual criticism of a moral nature, like the "Bury Your Gays" trope, where gay characters are killed in service of the real-world idea that sexual divergence is unacceptable. I personally find the insistence on hating elves, for example, too close to real-world discrimination to be comfortable. But that isn't a criticism of the game's representation of an elf so much as of how players choose to interact with these images.