The move, ostensibly aimed at managing the risks posed by AI and protecting Americans’ rights and safety, has provoked a range of questions, the foremost being: What does the new voluntary AI agreement mean?
At first glance, the voluntary nature of these commitments looks promising. Regulation in the technology sector is always contentious, with companies wary of stifled growth and governments eager to avoid missteps. By sidestepping command-and-control regulation, the administration avoids the pitfalls of imposing excessively burdensome rules.
Furthermore, the commitments lack specificity and seem broadly aligned with what most AI companies are already doing: ensuring the safety of their products, prioritizing cybersecurity, and aiming for transparency. Although the president touted these commitments as groundbreaking steps, it might be more accurate to view them as the formalization of existing industry practices.
Despite its rhetoric, the Biden administration hasn’t taken much action to regulate AI. To be clear, this may well be the right approach. But it suggests this agreement might be seen primarily as a symbolic gesture aimed at placating the so-called nervous ninnies, the vocal critics concerned about the impact of AI, rather than as a move toward aggressive regulation.