Zang Langum

OpenAI CEO says international agency like UN’s nuclear watchdog should oversee AI laws, regulations.

During a visit to the United Arab Emirates on Tuesday, Sam Altman, CEO of OpenAI, a prominent innovator in artificial intelligence, warned that AI poses an "existential risk" to humanity. He suggested establishing an international agency similar to the International Atomic Energy Agency (IAEA) to oversee the technology. Altman, 38, is currently on a global tour to discuss artificial intelligence, its risks, and its benefits. He emphasized the need to manage the risks associated with AI while ensuring that its tremendous benefits can still be enjoyed, stressing that no one wants to bring harm to the world.

One of OpenAI's notable AI systems, ChatGPT, a popular chatbot capable of providing detailed answers to user prompts, has garnered significant attention worldwide. Microsoft has invested around $1 billion in OpenAI. While ChatGPT's success demonstrates the potential of AI to revolutionize human work and learning, it has also raised concerns. In May, Altman and hundreds of other industry leaders signed a letter highlighting the importance of addressing the risks of AI on a global scale, alongside other major risks like pandemics and nuclear war.

Altman pointed to the IAEA, the international nuclear watchdog, as an example of global cooperation in overseeing nuclear power, and proposed a similar approach for AI built around guardrails. He expressed hope that the UAE could play a significant role in this effort, stating, "We talk about the IAEA as a model where the world has said, 'OK, very dangerous technology, let's all put some guardrails.' And I think we can do both. I think in this case, it's a nuanced message because it's saying it's not that dangerous today, but it can get dangerous fast. But we can thread that needle." Lawmakers worldwide are also weighing AI regulation. The 27-nation European Union is pursuing an AI Act that could become the de facto global standard for artificial intelligence. During his testimony to the U.S. Congress in May, Altman emphasized the critical role of government intervention in governing the risks associated with AI.

However, the UAE, an autocratic federation of seven hereditarily ruled sheikhdoms, presents a contrasting side of AI risks. Freedom of speech remains tightly controlled, and rights groups have raised concerns about the UAE and other Persian Gulf states using surveillance software to monitor activists, journalists, and others. Such restrictions impede the flow of accurate information, which AI systems like ChatGPT depend on to provide reliable responses to users.

Andrew Jackson, CEO of the Inception Institute of AI, spoke at the event in Abu Dhabi alongside Altman. The institute is described as a company affiliated with G42, which has connections to Sheikh Tahnoun bin Zayed Al Nahyan, Abu Dhabi's influential national security adviser and deputy ruler. G42's CEO, Peng Xiao, previously oversaw Pegasus, a subsidiary of DarkMatter, an Emirati security firm that has drawn scrutiny for employing former CIA and NSA personnel, as well as individuals from Israel. G42 also owns a video and voice calling app that reportedly served as a surveillance tool for the Emirati government. During his speech, Jackson positioned himself as a representative of "the Abu Dhabi and UAE AI ecosystem" and, alluding to its political influence, said it intends to be a significant force in global AI regulation.
