IT heavyweights, including Microsoft and Google, launched a forum focusing on safeguards for artificial intelligence projects
Illustration © Cottonbro Studio / Pexels
Major players in the AI industry launched a group dedicated to “safe and responsible development” in the field on Wednesday. Founding members of the Frontier Model Forum include Anthropic, Google, Microsoft, and OpenAI.
The forum aims to promote and develop standards for evaluating AI safety, while helping governments, corporations, policymakers, and the public understand the risks, limits, and possibilities of the technology, according to a statement published by Microsoft on Wednesday.
The forum also seeks to develop best practices for addressing “society’s greatest challenges,” including “climate change mitigation and adaptation, early cancer detection and prevention, and combating cyber threats.”
Anyone can join the forum so long as they are involved in the development of “frontier models” aimed at achieving breakthroughs in machine-learning technology, and are committed to the safety of their projects. The forum plans to form working groups and partnerships with governments, NGOs, and academics.
“Companies creating AI technology have a responsibility to ensure that it is safe, secure and remains under human control,” Microsoft President Brad Smith said in a statement.
AI thought leaders have increasingly called for meaningful regulation in an industry some fear could doom civilization, pointing out the risks of runaway development that humans can no longer control. The CEOs of all forum participants, except Microsoft, signed a statement in May urging governments and global bodies to make “mitigating the risk of extinction from AI” a priority at the same level as preventing nuclear war.
Anthropic CEO Dario Amodei warned the US Senate on Tuesday that AI is much closer to overtaking human intelligence than most people believe, urging lawmakers to pass strict regulations to prevent nightmare scenarios such as AI being used to produce biological weapons.
His words echoed those of OpenAI CEO Sam Altman, who testified before the US Congress earlier this year, warning that AI could go “quite wrong.”
The White House assembled an AI task force under Vice President Kamala Harris in May. Last week, it secured an agreement with the forum participants, as well as Meta and Inflection AI, to submit their products to outside audits for security flaws, privacy risks, discrimination potential, and other issues before releasing them onto the market, and to report all vulnerabilities to the relevant authorities.