Government Report: U.S. Must Prep to Fight ‘Extinction-Level’ AI Threat

Programs have ‘the potential to destabilize global security.’
wnd.com

A new report commissioned by the federal government suggests it needs to move immediately to get the United States ready for the possibility of an "extinction-level" threat from artificial intelligence.

A report from Time cites "An Action Plan to Increase the Safety and Security of Advanced AI," produced by Gladstone AI.

The report suggests wide-ranging policy actions that "would radically disrupt the AI industry."

It recommends that Congress make it illegal to train AI models using more than a certain level of computing power.

In recent months, OpenAI has unleashed GPT-4 and Google has released Gemini, the latter greeted largely with guffaws over its image of a black George Washington and other unlikely computer-created pictures.

The report calls for a new federal AI agency that could require companies on the cutting edge of the tech to get federal permission to "train" such software beyond a certain point.

And Washington should ratchet down limits on the manufacture and export of AI chips.

Time explains the report was commissioned by the State Department in 2022 and written by Gladstone, a four-person company that "runs technical briefings on AI for government employees."

Although commissioned by the federal government, the report states that it does not reflect the current views of the government.

"Current frontier AI development poses urgent and growing risks to national security. The rise of advanced AI and AGI [artificial general intelligence] has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons," the report said.

Time explained that AGI is a hypothetical technology that could perform most tasks at or above the level of a human. That technology does not exist now, but labs claim to be moving in that direction.

The authors drew on interviews with members of OpenAI, Google DeepMind, Anthropic and Meta to "paint a disturbing picture" of "perverse incentives driving decision-making" at various corporations.

"New tools, with more capabilities, have continued to be released at a rapid clip … As governments around the world discuss how best to regulate AI, the world’s biggest tech companies have fast been building out the infrastructure to train the next generation of more powerful systems—in some cases planning to use 10 or 100 times more computing power," Time reported.

And, the report noted, some 80% of Americans think AI could accidentally cause a catastrophic event.

Time noted, "Outlawing the training of advanced AI systems above a certain threshold, the report states, may 'moderate race dynamics between all AI developers' and contribute to a reduction in the speed of the chip industry manufacturing faster hardware."

Gladstone officials Jeremie and Edouard Harris have been warning the government of AI risks for several years.

The brothers say their company identified the State Department's Bureau of International Security and Nonproliferation as one agency with a responsibility in the field.

That office is the one to address risks from emerging tech, including from chemical and biological weapons and nuclear risks, Time said.

The report addresses the risk of weaponizing the technology, warning that such systems could even "execute catastrophic biological, chemical, or cyber attacks."

Also, the report addresses the possibility of losing control of a system so advanced it would take over from its creators.

One way to slow down such advances, it suggests, would be to restrict access to the specialized chips used to train AI systems.

"The report also raises the possibility that, ultimately, the physical bounds of the universe may not be on the side of those attempting to prevent proliferation of advanced AI through chips," Time found.

The report itself noted, "As AI algorithms continue to improve, more AI capabilities become available for less total compute. Depending on how far this trend progresses, it could ultimately become impractical to mitigate advanced AI proliferation through compute concentrations at all."
