
President Biden has signed an executive order to regulate the development of artificial intelligence (AI), citing concerns about the technology’s potential to deepen social media addiction and enable fraud.
Biden’s apprehensions deepened after he encountered a “deepfake” video of himself; the term refers to increasingly sophisticated manipulated videos. The catalyst for this heightened concern was reportedly his viewing of Tom Cruise’s latest action film, “Mission: Impossible – Dead Reckoning Part One,” at Camp David in July.
Biden Takes Action: Executive Order Sets Standards for AI Security and Privacy
Bruce Reed, White House deputy chief of staff, said Biden’s curiosity about AI technology has been profound. The president has seen AI-generated images of himself and of his dog, and has been shown voice-cloning technology that can turn as little as three seconds of a person’s voice into an entire fabricated conversation.
The dynamic and rapidly evolving field of artificial intelligence presents multifaceted challenges, encompassing legal, national security, and civil rights concerns. Recognizing the urgency, some U.S. cities and states have enacted legislation restricting AI use in areas like police investigations, and the European Union has proposed comprehensive regulations for the technology.
Public Concerns Rise: Mitre-Harris Poll Reveals Unease Over AI Risks
AI, already embedded in products ranging from toothbrushes to drones, has the potential to revolutionize entire industries. However, the shift toward machine learning introduces risks, including the spread of misinformation, the amplification of bias, threats to test integrity, and privacy violations. Notably, AI-powered facial recognition has led to false accusations, and AI-generated imagery has even impacted financial markets.
In response to these challenges, President Biden’s executive order on AI establishes standards for security and privacy protections. The order builds on voluntary commitments from numerous companies and directs government agencies to assess AI products for potential national or economic security risks.
While Congress explores legislation on AI, the President’s executive order aims to ensure safety in the absence of a comprehensive legislative strategy. Proposed bills address specific concerns, such as prohibiting the automated launching of nuclear weapons without human input and mandating clear labeling of AI-generated images in political ads.
Simultaneously, the European Union is finalizing its AI Act, slated to take effect by 2026. This groundbreaking legislation aims to oversee how developers handle the risks of deploying AI, prohibiting exploitative systems such as social scoring and requiring transparency when people interact with AI systems.
Major technology companies, including Amazon, Alphabet, IBM, and Salesforce, have pledged to adhere to the Biden administration’s voluntary transparency and security standards. This includes subjecting new AI products to rigorous internal and external testing before release.
A recent Mitre-Harris Poll underscores public sentiment, revealing that 54% of surveyed U.S. adults are more concerned about the risks of AI than excited about its potential benefits. The poll also highlights specific concerns, with respondents expressing more worry about AI’s involvement in cyberattacks and identity theft than in causing harm to disadvantaged populations or replacing jobs.
In conclusion, the expanding landscape of AI regulation reflects the intricate challenges posed by this transformative technology. As stakeholders weigh security, privacy, and ethical considerations, regulatory efforts will be pivotal to ensuring that AI is developed and deployed responsibly.