Over the past year, as the startling capabilities of artificial intelligence have emerged into public view, attention has been drawn to the existential risk, or “x-risk”, that the technology may pose. The concern is that computers endowed with superhuman intelligence might destroy most or all human life.
That concern presumes that AI will be made superintelligent in the future. Researchers are divided on how soon that may happen, or even whether it will. Still, today’s AI models are impressive, and arguably possess a form of intelligence and understanding of the world; otherwise they would not be so useful. Yet they are also easily fooled, liable to generate falsehoods, and prone to errors of reasoning.
Intelligence surely plays a role in natural selection. But extinctions are not the outcomes of struggles for dominance between “higher” and “lower” organisms. Rather, life is an interconnected web, with no top or bottom. Symbiosis and mutualism—mutually beneficial interactions between different species—are common, particularly when one species depends on another for resources. And in this case, AIs depend utterly on humans.
Perhaps regulations could be designed so as to reduce the potential for x-risk while also attending to more immediate harms. But measures aimed at x-risk are often in tension with those directed at existing problems. Restrictions on AI models or datasets make sense if the goal is to prevent the emergence of an autonomous, networked AI beyond human control. However, such restrictions may handicap other regulatory processes, for instance those for promoting transparency in AI systems or preventing monopolies.