technology to meet a set of AI safeguards brokered by his White House are an important step toward managing the “enormous” promise and risks posed by the technology.
The four tech giants, along with ChatGPT-maker OpenAI and startups Anthropic and Inflection, have committed to security testing "carried out in part by independent experts" to guard against major risks, such as those to biosecurity and cybersecurity, the White House said in a statement. "It's a big deal to bring all the labs together, all the companies," said Suleyman, whose Palo Alto, California-based startup is the youngest and smallest of the firms. "This is super competitive, and we wouldn't come together under other circumstances."
"A closed-door deliberation with corporate actors resulting in voluntary safeguards isn't enough," said Amba Kak, executive director of the AI Now Institute. "We need a much more wide-ranging public deliberation, and that's going to bring up issues that companies almost certainly won't voluntarily commit to because it would lead to substantively different results, ones that may more directly impact their business models."
A number of technology executives have called for regulation, and several attended an earlier White House summit in May. Governments around the world are also weighing how to regulate AI: European Union lawmakers are negotiating sweeping AI rules for the 27-nation bloc that could restrict applications deemed to pose the highest risks.