US lawmakers unite against AI abuse, support NO FAKES Act
On September 12, 2024, U.S. Representatives Madeleine Dean and María Elvira Salazar introduced the Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act. This bill is a timely response to the growing misuse of artificial intelligence (AI) in the creation of deepfakes—hyper-realistic yet fake digital media that have become increasingly dangerous, especially in the realm of cryptocurrency fraud.
AI deepfakes, which once seemed like an amusing novelty, have evolved into a potent weapon for fraudsters and cybercriminals. These tools, which imitate authentic video or audio content, have been used to trick investors and users in the cryptocurrency space, causing severe financial losses. With this bill, U.S. lawmakers aim to protect citizens and hold accountable those who use AI to harm others.
The NO FAKES Act primarily focuses on safeguarding individuals from unauthorized and malicious uses of AI-generated replicas, whether as videos, images, or other digital content. Lawmakers argue that as artificial intelligence technology becomes more accessible, the risks to privacy, security, and digital integrity grow sharply.
AI Deepfakes and Crypto Scams on the Rise
Over the past few years, AI deepfakes have caused significant concern due to their ability to deceive even the most experienced tech users. In the cryptocurrency sector, deepfakes have led to financial scams, where cybercriminals create realistic AI-generated videos or voiceovers to manipulate and steal funds from crypto holders.
The second quarter of 2024 saw at least $5 million lost to scams driven by AI-generated deepfakes, according to cybersecurity firm Gen Digital.
As AI-generated content becomes harder to detect, users are increasingly falling victim to sophisticated scams. This trend has prompted security firms and lawmakers alike to stress the need for protective legislation like the NO FAKES Act.
CertiK, a leading Web3 security company, has warned that AI-powered attacks could soon extend beyond video and audio deepfakes. The firm believes such attacks may even target cryptocurrency wallets that rely on facial recognition for access.
A spokesperson from CertiK advised that developers of wallets using facial recognition technology should evaluate their systems to ensure they are adequately prepared for potential AI-driven attack vectors. This growing threat landscape underscores the urgency of enacting legislation to combat AI misuse.
Legal Concerns: A Recipe for Private Censorship?
Despite the necessity for laws like the NO FAKES Act, concerns about unintended consequences have emerged, particularly in relation to free speech and digital rights.
Corynne McSherry, the legal director at the Electronic Frontier Foundation (EFF), has raised warnings that the bill could inadvertently lead to private censorship. McSherry, whose organization focuses on defending digital civil liberties, believes that the legislation might suppress legitimate content in its pursuit of curbing harmful deepfakes.
In an op-ed published in August 2024, McSherry argued that the NO FAKES Act, although well-intentioned, could become a “recipe for private censorship.”
According to McSherry, the act offers fewer protections for lawful speech than the existing Digital Millennium Copyright Act (DMCA). The DMCA allows for a simple counter-notice process, enabling individuals to restore their content if it is wrongfully removed. However, under the NO FAKES Act, individuals whose lawful content is removed must file a lawsuit within 14 days to challenge the takedown.
McSherry voiced concerns that while large companies and powerful individuals often have legal teams at their disposal, most independent creators, activists, and citizen journalists do not have the same resources. The burden of filing a legal case within two weeks could be overwhelming for these individuals, potentially stifling their voices in the digital space.
“The powerful have lawyers on retainer who can do that, but most creators, activists, and citizen journalists do not,” McSherry noted, underscoring the disproportionate impact the law could have on smaller content creators. She added that these flaws could ultimately doom the bill, since its restrictive takedown process risks chilling legitimate digital expression rather than protecting it.
NO FAKES Act’s Aim to Balance Innovation and Protection
Despite these criticisms, the NO FAKES Act's sponsors remain adamant about its necessity in the digital age. In a press release announcing the legislation, U.S. Representative Joe Morelle explained that the bill is designed not only to prevent malicious actors from abusing AI technology but also to protect innovation and free speech.
A key provision of the NO FAKES Act shields media platforms from legal liability when they remove offending content. This clause encourages platforms to take down harmful or unauthorized AI-generated media quickly, without fear of legal repercussions from those who posted it. Lawmakers believe this will help curb the spread of deepfakes while allowing platforms to continue providing open spaces for lawful expression.
Proponents of the bill argue that the legislation is vital for maintaining the integrity of digital media and protecting consumers. By holding malicious actors accountable and giving individuals the tools to challenge unauthorized uses of their likeness, the NO FAKES Act aims to ensure that the misuse of AI technology does not go unchecked.
At the same time, supporters claim the bill's framework leaves room for continued innovation in the AI and cryptocurrency sectors. The goal is a balance in which technological advances can thrive without being weaponized for criminal activity. Lawmakers involved in drafting the act emphasized that it is an evolving piece of legislation that can be refined as the digital landscape changes.
Conclusion
The NO FAKES Act is an important step toward regulating the use of AI-generated content, particularly in the fight against cryptocurrency fraud and unauthorized digital replicas. While the bill is designed to protect individuals from the misuse of AI deepfakes, critics argue that it may have unintended consequences, including potential private censorship and barriers to free speech.
As AI technologies continue to evolve, there is a delicate balance that lawmakers must strike between protecting consumers and encouraging innovation. The NO FAKES Act is an initial attempt at addressing these challenges, and its progress will likely be closely watched by tech experts, digital rights advocates, and the general public. Whether the bill can adapt to changing digital realities without stifling lawful creativity remains to be seen.