We write this letter as humbly as a collection of overeducated and overcompensated executives can, in the hope that you will hear our cries and do something about A.I. before it’s too late. Humanity is in grave danger of becoming extinct, and we—the world’s most prominent A.I. researchers and executives who got us into this mess—are writing a letter. That’s right.
In writing this letter, we acknowledge that there are many other actions we might have taken. We could have banded together to create a global regulatory agency that would set guidelines and standards and monitor the development of A.I. systems possessing human-competitive intelligence. We could have paused that development indefinitely, or at least until we were more certain of its risks and rewards.
We understand that you may perceive the tiniest hint of hypocrisy in a letter warning against the threat of A.I. written by the very people who created that threat. But we honestly didn’t know that this would happen. How could we? After all, we