AI-generated “deepfake” materials are flooding the internet, sometimes with dangerous results. In just the last year, AI has been used to make deceptive voice clones of a former US president and spread fake, politically charged images depicting children in natural disasters. Nonconsensual, AI-generated sexual images and videos, meanwhile, are leaving a trail of trauma impacting everyone from high schoolers to Taylor Swift. Large tech companies like Microsoft and Meta have made some efforts to identify instances of AI manipulation, but with only muted success. Now, governments are stepping in to try to stem the tide with something they know quite a bit about: fines.
This week, lawmakers in Spain advanced new legislation that would fine companies up to $38.2 million or between 2 percent and 7 percent of their global annual turnover if they fail to properly label AI-generated content. Within hours of that bill advancing, lawmakers in South Dakota pushed forward their own legislation seeking to impose civil and criminal penalties on individuals and groups who share deepfakes intended to influence a political campaign. If it passes, South Dakota will become the 11th US state to pass legislation criminalizing deepfakes since 2019. All of these laws use the threat of potentially drained bank accounts as an enforcement lever.
According to Reuters, the Spanish bill follows guidelines set by the broader EU AI Act that officially took effect last year. Specifically, this bill is intended to add punitive teeth to provisions in the AI Act that impose stricter transparency requirements on certain AI tools deemed “high risk.” Deepfakes fall into that category. Failing to properly label AI-generated content would be considered a “serious offense.”
“AI is a very powerful tool that can be used to improve our lives … or to spread misinformation,” Spain’s Digital Transformation Minister Oscar Lopez said in a statement sent to Reuters.
In addition to its rules on deepfake labeling, the legislation also bans the use of so-called “subliminal techniques” on certain groups classified as vulnerable. It would also place new limits on organizations attempting to use biometric tools like facial recognition to infer individuals’ race, political or religious beliefs, or sexual orientation. The bill still needs to be approved by Spain’s lower house to become law. If it does, Spain will become the first country in the EU to enact legislation enforcing the AI Act’s guidelines around deepfakes. It could also serve as a template for other nations to follow.
A handful of US states are taking the lead on deepfake enforcement
The newly proposed South Dakota bill, by contrast, is more narrowly tailored. It requires individuals or organizations to label deepfake content if it is political in nature and created or shared within 90 days of an election. The version of the bill that advanced this week includes exemptions for newspapers, broadcasters, and radio stations, which had reportedly expressed concerns about potential legal liability for unintentionally sharing deepfake content. The bill also includes an exception for deepfakes that “constitute satire or parody,” a potentially broad and difficult-to-define carveout.
Still, watered down as it may be, South Dakota’s bill represents the latest addition to a growing patchwork of state laws targeting deepfakes.