Over the last year, the rapid advancement of artificial intelligence has captivated the world’s attention, sparking both excitement and concern. As AI systems like OpenAI’s ChatGPT become increasingly sophisticated, policymakers are grappling with how to ensure that this transformative technology is developed and deployed responsibly. In California, State Senator Scott Wiener has introduced legislation aimed at mitigating the potential existential risks posed by AI.
The shutdown switch and incident-reporting requirements are the most sensible provisions in SB 1047, but even these are not foolproof. For example, a shutdown switch on an AI system running the electricity grid creates a vulnerability for potential hackers. This solution thus creates new risks even as it reduces others.
While SB 1047 may have limited impact on existential risks, it could have unintended consequences of its own. By imposing burdensome regulations on AI development, the legislation risks slowing innovation and putting California’s technology companies at a competitive disadvantage in the global race to develop advanced AI. This is particularly concerning given that these companies may be our best hope for developing superintelligent AI that is aligned with human values and interests.