President Biden Takes Action to Tackle AI Risks With New Executive Order

By TrendsWatch

U.S. President Joe Biden has signed a significant executive order aimed at mitigating the risks artificial intelligence (AI) poses to consumers, workers, minority groups, and national security. The order, signed at the White House, seeks to establish clearer guidelines for the development and deployment of AI systems.

One of the key provisions of the executive order requires developers of AI systems that pose potential risks to U.S. national security, the economy, public health, or safety to share the results of safety tests with the U.S. government.

This information sharing will take place under the Defense Production Act and must happen before these AI systems are released to the public.

Additionally, the order directs government agencies to establish standards for AI testing. It will also address concerns related to chemical, biological, radiological, nuclear, and cybersecurity risks associated with AI systems.

“In the wrong hands, AI can make it easier for hackers to exploit vulnerabilities in the software that makes our society run,” President Biden emphasized. He stressed the importance of governing AI technology to harness its potential while minimizing risks, such as the technology being used to develop nuclear or biological weapons.

The executive order has garnered mixed reactions from industry and trade groups. While some, like Bradley Tusk, CEO of Tusk Ventures, welcomed the move, others expressed reservations.

In a Reuters report, Tusk pointed out concerns that tech companies might be hesitant to share proprietary data with the government due to fears of it falling into the hands of rivals.

NetChoice, a national trade association that includes major tech platforms, criticized the order, describing it as an “AI Red Tape Wishlist” that could stifle innovation and expand federal government power.

The new order goes beyond the voluntary commitments made earlier by AI companies, such as OpenAI, Alphabet, and Meta Platforms. These companies had pledged to watermark AI-generated content to enhance safety.

As part of the executive order, the Commerce Department will develop guidance for content authentication and watermarking to ensure clear labeling of items generated by AI, especially in government communications.

The order also directs intellectual property regulators and federal law enforcement agencies to address the use of copyrighted works in AI training, including an evaluation of AI systems for violations of intellectual property law. Tech companies and artists have been embroiled in disputes over the use of creative content in training AI systems.

The executive order aligns with ongoing international efforts to address AI concerns. The Group of Seven (G7) industrialized nations is set to agree on a code of conduct for companies developing advanced AI systems. However, some experts suggest that the United States lags behind Europe in terms of AI regulation.

President Biden also called on Congress to play a more substantial role in regulating AI, particularly in safeguarding personal data. U.S. Senate Majority Leader Chuck Schumer said he hopes to have AI legislation ready within a few months.

U.S. officials have expressed concerns about AI exacerbating bias and civil rights violations. The executive order seeks to address these issues by providing guidance across sectors to prevent discrimination and other harms caused by AI systems, such as job displacement.
