As AI-generated media proliferates, one emerging privacy idea is to let people hold the copyright to their own image and likeness. Back in 2021, Danish live-streamer and video game enthusiast Marie Watson got an unexpected direct message on Instagram: a photo of herself that had been manipulated using AI. It was a familiar vacation picture from her own feed, altered to strip away her clothes and depict her nude, a classic example of a deepfake.

The shock hit her hard. “It just completely overwhelmed me,” Watson remembered. “I broke down crying right away, seeing myself exposed like that.”

Since that incident, deepfake technology—AI-created media that convincingly mimics real individuals or occurrences—has exploded in accessibility and sophistication. Thanks to breakthroughs in generative AI from companies like OpenAI and Google, anyone can now produce eerily lifelike videos, images, or audio clips. This has led to widespread misuse, from fake nudes of stars like Taylor Swift and Katy Perry, to election meddling and the targeted shaming of young people and women.

Updating Copyright Protections

To counter this, Denmark is pushing forward a legislative change aimed at safeguarding everyday people, along with entertainers and creators whose images or voices could be exploited. Set for approval in early 2026, the proposed amendment to copyright rules would outlaw the distribution of deepfakes that replicate someone’s personal traits—like their face or voice—without explicit approval.

Under the new framework, individuals would essentially own the copyright to their own image and voice, empowering them to request the swift removal of unauthorized content from digital platforms. Exceptions would cover humorous parodies and satirical works, though the exact boundaries remain to be clarified.

Advocates, including AI specialist Henry Ajder of Latent Space Advisory, hail the move as a groundbreaking governmental effort to tackle deepfake-driven falsehoods. “It’s fantastic that Denmark is acknowledging the need for legal evolution here,” Ajder noted. Currently, victims often hear the grim advice: “Your options are limited—basically, vanish from the online world,” which he calls impractical. He emphasized, “We can’t treat these assaults on our identity and self-respect as if nothing has changed.”

Tackling Misinformation Risks

Similar initiatives are emerging globally. In the U.S., President Donald Trump signed a bipartisan law in May 2025 criminalizing the knowing publication of, or threats to publish, non-consensual intimate imagery, including deepfakes. South Korea, meanwhile, intensified penalties and platform oversight last year to stamp out deepfake pornography.

Denmark’s Culture Minister Jakob Engel-Schmidt attributes the bill’s strong parliamentary backing to its role in preserving trust in information and preventing democratic erosion. “Fabricating a video of a leader without recourse to delete it erodes faith in our systems,” he warned journalists at a September conference on AI and intellectual property.

Striking a Fair Equilibrium

The rules would apply only within Denmark, and enforcement would target major platforms rather than individual posters: users would face neither jail time nor fines. Non-compliant tech giants, however, could incur hefty fines, according to Engel-Schmidt.

Ajder pointed to YouTube’s robust mechanisms for balancing infringement takedowns with creative expression as a model, signaling Big Tech’s growing awareness of the escalating threat. Platforms like Twitch, TikTok, and Meta (parent of Facebook and Instagram) declined to comment.

As the EU’s current rotating president, Denmark has sparked curiosity about the bill among peers like France and Ireland. IP attorney Jakob Plesner Mathiasen views it as a timely response to deepfakes infiltrating daily life, from fabricated news and political campaigns to explicit content targeting celebrities and ordinary folks alike. “This wouldn’t be on the table without real-world triggers,” he observed.

The Danish Rights Alliance, a defender of online creative rights, backs the proposal, arguing existing laws fall short. Take voice performer David Bateson, known for “Hitman” games and Lego ads: When AI duplicates of his voice went viral, platforms dismissed removal requests for lack of a clear Danish rule, said alliance director and lawyer Maria Fredenslund.

Lasting Scars of Exposure

Watson knew of other creators enduring similar violations but never imagined it for herself. Exploring underground forums where anonymous creators trade deepfake nudes—mostly of women—she was stunned by the simplicity: A quick Google search for “deepfake maker” yields countless free tools.

At 28, she’s encouraged by official steps but skeptical without tougher platform accountability. “These images shouldn’t be uploadable at all,” she insisted. “Once they’re out there, it’s irreversible—you lose all say in it.”

© 2024 Richart Ruddie - Internet Entrepreneur, All Rights Reserved.