Artificial intelligence poses an “existential risk” to humanity, a key innovator warned during a visit to the United Arab Emirates on Tuesday, suggesting that an international agency like the International Atomic Energy Agency oversee the ground-breaking technology.
OpenAI CEO Sam Altman is on a global tour to discuss artificial intelligence.
“We face serious risk. We face existential risk,” said Altman, 38. “The challenge that the world has is how we’re going to manage those risks and make sure we still get to enjoy those tremendous benefits. No one wants to destroy the world.”
OpenAI’s ChatGPT, a popular chatbot, has grabbed the world’s attention as it offers essay-like answers to prompts from users. Microsoft has invested some $1 billion in OpenAI.
ChatGPT’s success, offering a glimpse of how artificial intelligence could change the way humans work and learn, has sparked concerns as well. Hundreds of industry leaders, including Altman, signed a letter in May warning that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Altman made a point to reference the IAEA, the United Nations nuclear watchdog, as an example of how the world came together to oversee nuclear power. That agency was created in the years after the U.S. dropped atomic bombs on Japan at the end of World War II.