During his address at the United Nations General Assembly, U.S. President Joe Biden outlined his strategy to cooperate with international competitors to harness the power of artificial intelligence for good while safeguarding citizens from its risks.
Biden emphasized the dual nature of emerging technologies like artificial intelligence, which carry both tremendous promise and significant peril. He stressed the need to ensure that AI is used as a tool for creating opportunity rather than as a means of oppression. In partnership with leaders from around the world, the United States is working to strengthen rules and policies that ensure AI technologies are safe before they are released to the public. The goal is to keep control over this technology rather than letting it dictate terms to society.
These remarks come as U.S. policymakers work to better understand how AI functions so they can establish appropriate safeguards for the American public while still fostering constructive innovation. The discussion takes place against the backdrop of intense competition with China, which is also striving to establish itself as a global leader in AI technology.
Days earlier, Senate Majority Leader Chuck Schumer of New York had convened a meeting with prominent tech CEOs, including Elon Musk of Tesla and SpaceX and Mark Zuckerberg of Meta, along with labor and civil rights leaders, to discuss AI with senators as they weigh legislation to regulate the technology. Schumer told reporters afterward that everyone present at the meeting agreed government intervention is necessary to oversee AI.
The specifics of how this regulation will take shape remain a subject of debate among lawmakers. Opinions differ on which entity should be responsible for AI regulation and how extensive government involvement should be. Schumer cautioned against hasty decisions, citing the European Union's rapid creation of the AI Act as a potentially counterproductive approach. He nonetheless emphasized the need for a timeline, one measured in months rather than days or years.
In the interim, several government agencies have asserted that they can curb AI abuses under existing legal authority. The National Institute of Standards and Technology, part of the U.S. Department of Commerce, introduced a voluntary AI risk management framework earlier this year.
Additionally, the Biden administration has secured voluntary commitments from leading AI companies to subject their tools to security testing before making them available to the public.