By MATT O’BRIEN and ZEKE MILLER
WASHINGTON (AP) — President Joe Biden said Friday that new commitments by Amazon, Google, Meta, Microsoft and other companies that are leading the development of artificial intelligence technology to meet a set of AI safeguards brokered by his White House are an important step toward managing the “enormous” promise and risks posed by the technology.
Biden announced that his administration has secured voluntary commitments from seven U.S. companies meant to ensure that their AI products are safe before they release them. Some of the commitments call for third-party oversight of the workings of the next generation of AI systems, though they don’t detail who will audit the technology or hold the companies accountable.
“We must be clear eyed and vigilant about the threats emerging technologies can pose,” Biden said, adding that the companies have a “fundamental obligation” to ensure their products are safe.
“Social media has shown us the harm that powerful technology can do without the right safeguards in place,” Biden added. “These commitments are a promising step, but we have a lot more work to do together.”
A surge of commercial investment in generative AI tools that can write convincingly human-like text and churn out new images and other media has brought public fascination as well as concern about their ability to trick people and spread disinformation, among other dangers.
The four tech giants, along with ChatGPT-maker OpenAI and startups Anthropic and Inflection, have committed to security testing “carried out in part by independent experts” to guard against major risks, such as to biosecurity and cybersecurity, the White House said in a statement.
That testing will also examine the potential for societal harms, such as bias and discrimination, and more theoretical dangers about advanced AI systems that could gain control of physical systems or “self-replicate” by making copies of themselves.
The companies have also committed to methods for reporting vulnerabilities to their systems and to using digital watermarking to help distinguish between real and AI-generated images or audio known as deepfakes.
Executives from the seven companies met behind closed doors with Biden and other officials Friday as they pledged to follow the standards.
“He was very firm and clear” that he wanted the companies to continue to be innovative, but at the same time “felt that this needed a lot of attention,” Inflection CEO Mustafa Suleyman said in an interview after the White House gathering.
“It’s a big deal to bring all the labs together, all the companies,” said Suleyman, whose Palo Alto, California-based startup is the youngest and smallest of the firms. “This is supercompetitive and we wouldn’t come together under other circumstances.”
The companies will also publicly report flaws and risks in their technology, including effects on fairness and bias, according to the pledge.
The voluntary commitments are meant to be an immediate way of addressing risks ahead of a longer-term push to get Congress to pass laws regulating the technology.
Some advocates for AI regulations said Biden’s move is a start but more needs to be done to hold the companies and their products accountable.
“A closed-door deliberation with corporate actors resulting in voluntary safeguards isn’t enough,” said Amba Kak, executive director of the AI Now Institute. “We need a much more wide-ranging public deliberation, and that’s going to bring up issues that companies almost certainly won’t voluntarily commit to because it would lead to substantively different results, ones that may more directly impact their business models.”
While voluntary, agreeing to submit to “red team” tests that poke at their AI systems is not an easy promise, said Suleyman.
“The commitment we’ve made to have red-teamers basically try to break our models, identify weaknesses and then share those methods with the other large language model developers is a pretty significant commitment,” Suleyman said.
Senate Majority Leader Chuck Schumer, D-N.Y., has said he will introduce legislation to regulate AI and is working closely with the Biden administration “and our bipartisan colleagues” to build upon the pledges made Friday.
A number of technology executives have called for regulation, and several attended an earlier White House summit in May.
Microsoft President Brad Smith said in a blog post Friday that his company is making some commitments that go beyond the White House pledge, including support for regulation that would create a “licensing regime for highly capable models.”
Some experts and upstart competitors worry that the type of regulation being floated could be a boon for deep-pocketed first-movers led by OpenAI, Google and Microsoft as smaller players are elbowed out by the high cost of making their AI systems adhere to regulatory strictures.
The White House pledge notes that it applies mostly to models that “are overall more powerful than the current industry frontier,” set by recent models such as OpenAI’s GPT-4 and image generator DALL-E 2 and similar releases from Anthropic, Google and Amazon.
A number of countries have been looking at ways to regulate AI, including European Union lawmakers negotiating sweeping AI rules for the 27-nation bloc that could restrict applications deemed to have the highest risks.
U.N. Secretary-General Antonio Guterres recently said the United Nations is “the ideal place” to adopt global standards and appointed a board that will report back on options for global AI governance by the end of the year.
Guterres also said he welcomed calls from some countries for the creation of a new U.N. body to support global efforts to govern AI, inspired by such models as the International Atomic Energy Agency or the Intergovernmental Panel on Climate Change.
The White House said Friday that it has consulted on the voluntary commitments with a number of countries.
The pledge is heavily focused on safety risks but doesn’t address other worries about the latest AI technology, including the effect on jobs and market competition, the environmental resources required to build the models, and copyright concerns about the writings, art and other human handiwork being used to teach AI systems how to produce human-like content.
Last week, OpenAI and The Associated Press announced a deal for the AI company to license AP’s archive of news stories. The amount it will pay for that content was not disclosed.