Leading AI companies reveal their safety plans following White House summit
Seven leading AI companies participated in a White House summit, where they agreed with the Biden administration to implement new safety measures and guardrails to enhance the security of their AI systems and protect users.
The seven companies involved in the summit are OpenAI, Amazon, Google, Anthropic, Meta, Microsoft, and Inflection — firms at the forefront of AI development.
During the summit, President Joe Biden emphasised the importance of responsible AI innovation and the critical role these companies play in ensuring safety by design.
As part of their commitment to safe AI development, the companies have agreed to invest in research and safety measures, perform security stress testing, and assist in third-party audits to identify system vulnerabilities.
The goal is to ensure that AI benefits society and is developed and deployed responsibly. Transparency is also highlighted, with companies urged to collaborate with industry, government, academia, and civil society to foster responsible AI development.
One consumer-facing commitment from the companies is the implementation of watermarks on AI-generated content. The measure will apply to audio and visual content generated by their systems, allowing users to identify material that originated from them.
OpenAI clarified that AI voice assistants would not be covered under this commitment. Google has pledged to deploy similar disclosures, integrating watermarking and metadata into its upcoming generative systems.
While the companies issued statements confirming their commitments, specific details on implementation were not provided in the announcement. Additionally, the White House has not outlined enforcement mechanisms to ensure companies adhere to their promises.
President Biden underscored the seriousness of the undertaking and the significance of getting AI right on safety and security. Before the summit, several companies had already taken related steps with their AI technologies, including Meta's decision to open-source its large language model Llama 2 for research and commercial use.