Another Warning Letter from A.I. Researchers and Executives

  • 📰 NewYorker

In this New Yorker humor piece, the world’s most prominent A.I. researchers and executives are taking action. By writing a letter. (This one.)

We write this letter as humbly as a collection of overeducated and overcompensated executives can, in the hope that you will hear our cries and do something about A.I. before it’s too late. Humanity is in grave danger of becoming extinct, and we—the world’s most prominent A.I. researchers and executives who got us into this mess—are writing a letter. That’s right.

In writing this letter, we acknowledge that there are many other actions we might have taken. We could have banded together to create a global regulatory agency that would set guidelines and standards and monitor the development of A.I. systems possessing human-competitive intelligence. We could have paused that development indefinitely, or at least until we were more certain of its risks and rewards.

We understand that you may perceive the tiniest hint of hypocrisy in a letter warning against the threat of A.I. written by the very people who created that threat. But we honestly didn’t know that this would happen. How could we? After all, we …

 
