PARIS: Silicon Valley's favourite philosophy, longtermism, has helped to frame the debate on artificial intelligence around the idea of human extinction.
Yet the movement and linked ideologies like transhumanism and effective altruism hold huge sway in universities from Oxford to Stanford and throughout the tech sector. This kind of thinking makes the ideology "really dangerous", said Torres, author of Human Extinction: A History of the Science and Ethics of Annihilation.
When asked in March by a user of Twitter, the platform now known as X, how many people could die to stop this happening, longtermist ideologue Eliezer Yudkowsky replied that there only needed to be enough people "to form a viable reproductive population".

"Do I support eugenics? No, not as the term is commonly understood," he wrote in his apology, pointing out it had been used to justify "some of the most horrific atrocities of the last century".

Despite these troubles, longtermists like Yudkowsky, a high school dropout known for writing Harry Potter fan-fiction and promoting polyamory, continue to be feted.