Deepfake Alert: AI Synthetic Media Takes a Dangerous Turn

Meet the App that Makes You a Movie Star (or a Bad Sexy Fantasy)

“Do you want to see yourself acting in a movie or on TV?” the app’s blurb reads, luring anyone who’s ever dreamed of Hollywood glory—or just wanted to prank their friends with a face swap. It ups the ante with “Do you want your best friend, colleague, or boss dancing?” Then it throws in a meme‑ready line: “Ever wonder what you would look like if you swapped faces with a celeb or your buddy?”

From Casual Fun to Adult-Only Ads

While the same software also pops up on dozens of adult sites, the copy shifts to something far more risqué: “Make deepfake porn in a sec.” And the tagline? “Deepfake anyone.” The contrast is stark: one ad suggests star‑studded selfies; the other pitches slick, on‑demand porn.

Why This Stuff is a Technological Rollercoaster

The heart of the operation is machine learning, which scours images to build a digital model of your face before artfully injecting it into any clip. In just four years, the tech has gone from experiments to the point where the average user with a phone can create videos that look almost indistinguishable from reality.
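Any real app's pipeline is proprietary and far more complex, but the core idea of mapping one face onto another can be sketched in a few lines. As a purely illustrative toy (the function names and landmark values below are assumptions, not any product's code): given matching facial landmarks in a source photo and a target frame, the app estimates a transform that places the source face over the target one.

```python
def estimate_similarity(src, dst):
    """Estimate a uniform scale + translation mapping src landmarks onto dst.

    A toy stand-in for the alignment step of a face-swap pipeline; real
    systems fit full affine or non-rigid warps over dozens of landmarks.
    """
    n = len(src)
    cx_s = sum(p[0] for p in src) / n
    cy_s = sum(p[1] for p in src) / n
    cx_d = sum(p[0] for p in dst) / n
    cy_d = sum(p[1] for p in dst) / n
    # Compare the spread of points around each centroid to recover the scale.
    spread_s = sum(abs(p[0] - cx_s) + abs(p[1] - cy_s) for p in src)
    spread_d = sum(abs(p[0] - cx_d) + abs(p[1] - cy_d) for p in dst)
    s = spread_d / spread_s
    return s, cx_d - s * cx_s, cy_d - s * cy_s

def warp(point, s, tx, ty):
    """Apply the recovered scale + translation to a single 2-D point."""
    return (s * point[0] + tx, s * point[1] + ty)

# Toy landmarks (eye corners, mouth corners) in a source photo,
# and the same landmarks found in a target video frame.
src = [(30.0, 40.0), (70.0, 40.0), (40.0, 80.0), (60.0, 80.0)]
dst = [(2 * x + 10, 2 * y + 5) for x, y in src]
scale, tx, ty = estimate_similarity(src, dst)
```

With the transform recovered, every pixel of the source face can be warped into the target frame; the hard part modern models add is blending lighting, expression, and motion so the result looks seamless.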

According to a mix of company spokespeople, researchers, and safety groups, the barrier has dropped to a one‑click, no‑effort level. “Once the entry point is so low that it requires no effort at all, and an unsophisticated person can create a very sophisticated non-consensual deepfake pornographic video— that’s the inflection point,” says Adam Dodge, founder of EndTab and lawyer-turned‑digital‑safety advocate.

“That’s where we start to get into trouble,” Dodge adds, as the concern grows that anybody, without any tech chops, can produce a potentially harmful deepfake.

No Consent = Catastrophe

For the safety squad, consent isn’t just a polite nod; it’s the single vital safeguard. But getting every party on board is easier said than done. As Henry Ajder, head of policy at Metaphysic, notes, “The vast majority of harm caused by deepfakes right now is a form of gendered digital violence.” He points to a stark statistic: 96% of the more than 14,000 deepfake videos identified in a 2019 study by Sensity were non-consensual. And the proliferation is alarmingly fast: the number of deepfakes online reportedly doubles every six months.

Meanwhile, stopping the spread of non-consensual content is a step everyone agrees on. The experts contend that a stricter approach is urgent, especially when the tech can be weaponized against vulnerable groups.

Bottom Line

At first glance, it’s a fun app that lets you swap faces, dance with your boss, or see the world through superhero lenses. But beneath the delight lies a legal, ethical, and social minefield. The key takeaway? If you want to play with deepfakes, make sure everyone involved is on board, or better yet, think twice about what you share before you apply the filter.

Ad network axes app 

ExoClick Pulls App From Ads Amid Deepfake Concerns

When the controversial “Make deepfake porn in a sec” app popped up on the App Store, the ad network ExoClick stepped in. Their spokesperson, Bryan McDonald, shared that the app’s face‑swap tech was a new territory for them, and they’d pulled it from all advertising campaigns to avoid any irresponsible promotion of the technology.

What ExoClick Said

  • New Product Alert: ExoClick hasn’t seen anything quite like this face‑swap software before.
  • Advertising Pull: The app was removed from their ad inventory after a review of its marketing materials.
  • Responsible Use: While most users likely enjoy the app for entertainment, McDonald warns it could also facilitate malicious use.
  • Compliance Notice: The wording used in the app’s promos was deemed unacceptable by the network’s standards.

Other Networks’ Silence

Reuters reached out to six other major ad networks. None of them responded with any policies or experiences involving deepfake software, leaving a gap in the industry’s collective stance.

Apple’s Front

  • No Deepfake Rules: Apple says it doesn’t have specific regulations governing deepfake apps.
  • Broad Guidelines: Apps that contain defamatory, discriminatory, or potentially humiliating or harmful content are disallowed.
  • Marketing Standards: Developers can’t mislead about their products, either inside or outside the App Store. Apple is working with the app’s developers to ensure compliance.

Google’s Quick Move

  • Takedown: After noticing the app’s adult‑site ads, Google pulled its Play Store page, which had previously carried an “Everyone” rating.
  • Reinstated: Roughly two weeks later, the app returned, now rated “Teen” for sexual content.
  • No Response: Google didn’t reply to additional comments from Reuters.

In short, the deepfake porn app has sparked a tug‑of‑war between tech giants and ad networks over how to handle emerging, potentially dangerous technology while still upholding user safety and advertising ethics.

Filters and watermarks

Face‑Swap Fiesta: Keeping the Fun in Check

The world of face‑swapping apps is booming, but not all tools are playing nice. Ajder, a vocal advocate for ethical synthetic media, highlights that many apps are actively putting up barriers to stop the bad guys from misusing the tech.

What These Apps Are Doing Right

  • Scene‑Limited Swaps: Some apps force users to stick their faces into pre‑approved backgrounds—think of it like a safe video‑editing sandbox.
  • Identity Checks: A few require an ID snapshot of the person whose face you’re swapping in. Spoiler alert: not foolproof.
  • Auto‑Porn Filters: Using AI to flag any adult content. Again, it’s a first defence, but it’s not a guarantee.
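As a toy illustration of how such an automated filter might gate content (the threshold values, score data, and function names here are assumptions, not any vendor’s real pipeline): a per‑frame classifier emits an adult‑content score, and the app blocks a clip only when several frames are flagged, which cuts down on single‑frame false positives.

```python
def flag_frames(scores, threshold=0.8):
    """Return indices of frames whose adult-content score meets the threshold."""
    return [i for i, s in enumerate(scores) if s >= threshold]

def should_block(scores, threshold=0.8, min_flagged=3):
    """Block the clip only when several frames are flagged, not just one.

    Requiring multiple flagged frames trades a little recall for far
    fewer false positives on borderline single frames.
    """
    return len(flag_frames(scores, threshold)) >= min_flagged

# Hypothetical per-frame scores a classifier might emit for a short clip.
sample = [0.10, 0.95, 0.90, 0.20, 0.85]
```

Even with sensible thresholds, such filters remain a first line of defence, which is why human moderation still matters.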

Reface: The Superstar of the Gen‑Z Meme League

This US‑based venture has racked up over 100 million downloads since 2019. Users love tossing their faces onto celebrities, comic book heroes, or meme icons to churn out snappy videos. Reface has pledged extra safeguards:

  1. Both automated and human moderators patrol content.
  2. A dedicated pornography filter screens out explicit footage.
  3. Watermarks and labels slap the “AI‑made” tag on every synthetic clip.
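Reface hasn’t published how its labels are applied; visible watermarks and metadata tags are the common approaches. As a minimal, purely illustrative sketch of the idea, a tag like “AI-MADE” can even be hidden in the least‑significant bits of a clip’s pixel values (a classic toy steganographic watermark, not Reface’s actual method):

```python
def embed_tag(pixels, tag="AI-MADE"):
    """Hide an ASCII tag in the least-significant bits of pixel values.

    A toy watermark: each character becomes 8 bits, and each bit
    overwrites the lowest bit of one pixel, leaving the image
    visually unchanged.
    """
    bits = [(ord(c) >> i) & 1 for c in tag for i in range(8)]
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b  # overwrite only the lowest bit
    return out

def read_tag(pixels, length):
    """Recover a tag of `length` characters embedded by embed_tag."""
    chars = []
    for c in range(length):
        byte = sum((pixels[c * 8 + i] & 1) << i for i in range(8))
        chars.append(chr(byte))
    return "".join(chars)
```

Such invisible tags survive casual viewing but not re-encoding, which is why production systems pair them with visible labels.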

“Since the very beginning, we knew the tech could be twisted,” the company told Reuters. “We’ve built controls to keep it playful, not perilous.”

Bottom Line

While face‑swap apps are a blast, developers are tackling the potential pitfalls head‑on. The fusion of tech safeguards, consent checks, and a sprinkle of human oversight keeps the digital world a little safer, and a whole lot more entertaining.

‘Only perpetrator liable’

Deepfakes Are Stepping Up Their Game—and So Are We

Ever since the smartphone became a portable powerhouse, everyone’s been surfing the wave of cutting‑edge AI. The catch? The same tech that lets you turn a simple photo into a next‑gen deepfake is improving at a breakneck pace.

From “Do I Need a LOT of Photos?” to “One Snap Is Enough”

  • Back in 2017: Making a convincing deepfake meant uploading thousands of images—yeah, we fed the computer a photo feast.
  • Fast forward to today: One crisp selfie can produce a realistic result. The underlying models are deeper, faster, and far more capable.
  • Result? “It looks like me” is no longer hearsay. If it looks like you, the psychological sting is the same, whether or not it’s actually you.

Lawmakers Pull the Trigger, But the Targets Keep Moving

Governments have rolled out laws to thwart online scams and pornography that spread via deepfakes. Case in point: California, China, and South Korea all have deepfake laws on the books, with damages of up to US$150,000 on the table for a single violation.

Legislators Still Look for the Right Target

  • European Parliament researchers admit: “We’re only putting the blame on the perpetrator.” That’s a problem, because many perpetrators hide behind anonymity, slipping past both police and social‑media watchdogs.
  • Under current rules, the tech developers and distributors aren’t held accountable, at least not yet.
  • Experts predict that bringing these stakeholders within the law’s reach will be the next step.

What Laws Are Coming Our Way?

“A new AI Act in the European Union and the EU’s GDPR may give us a handle on deepfakes,” says Marietje Schaake of Stanford’s Cyber Policy Center.

“The draft AI Act wants manipulated content flagged. That’s a start, but the real test is: Does knowing the truth stop the drama? Conspiracy theories prove that absurdity can still spread like wildfire.”

Bottom Line

It’s a ride where the car is learning faster than we can climb into the driver’s seat. The only thing we can do right now is keep our eyes peeled, understand that “fake” is never truly harmless, and lobby for laws that punish not only the outright bad actors but also hold to account the tech folks who make the weaponry available.