The fight over a 'dangerous' ideology shaping AI debate

  • 📰 ChannelNewsAsia


PARIS: Silicon Valley's favourite philosophy, longtermism, has helped to frame the debate on artificial intelligence around the idea of human extinction.

Yet the movement, and linked ideologies like transhumanism and effective altruism, holds huge sway in universities from Oxford to Stanford and throughout the tech sector. This kind of thinking makes the ideology "really dangerous", said Torres, author of Human Extinction: A History of the Science and Ethics of Annihilation.

When asked in March by a user of Twitter, the platform now known as X, how many people could die to stop this happening, longtermist ideologue Eliezer Yudkowsky replied that there only needed to be enough people "to form a viable reproductive population".

"Do I support eugenics? No, not as the term is commonly understood," he wrote in his apology, pointing out the term had been used to justify "some of the most horrific atrocities of the last century".

Despite these troubles, longtermists like Yudkowsky, a high school dropout known for writing Harry Potter fan-fiction and promoting polyamory, continue to be feted.

