The White House said Friday that it has secured voluntary commitments from seven U.S. companies meant to ensure their AI products are safe before they release them. Some of the commitments call for third-party oversight of the workings of commercial AI systems, though they don't detail who will audit the technology or hold the companies accountable.
The companies will also publicly report flaws and risks in their technology, including effects on fairness and bias, the White House said. Not everyone was reassured by the voluntary nature of the pledges. "History would indicate that many tech companies do not actually walk the walk on a voluntary pledge to act responsibly and support strong regulations," said a statement from James Steyer, founder and CEO of the nonprofit Common Sense Media.
But some experts and upstart competitors worry that the type of regulation being floated could be a boon for deep-pocketed first movers led by OpenAI, Google and Microsoft, as smaller players are elbowed out by the high cost of making their AI systems, known as large language models, adhere to regulatory strictures.
A number of countries have been looking at ways to regulate AI, including European Union lawmakers, who have been negotiating sweeping AI rules for the 27-nation bloc.