How AI-generated deepfakes are targeting Taylor Swift and her fans

Taylor Swift is one of the most popular and influential celebrities in the world, but she is also a victim of a disturbing and unethical trend: AI-generated deepfakes. These are fake images or videos that use artificial intelligence to manipulate the appearance and voice of a person, often without their consent or knowledge.

In recent weeks, Swift and her fans have been exposed to two types of deepfake attacks: one that creates NSFW pictures of the singer, and another that uses her likeness to promote a cookware brand.

NSFW pictures of Taylor Swift

On Wednesday night, some highly explicit images of Taylor Swift, generated by artificial intelligence, started to spread on X (formerly Twitter). The actual source of these images is unknown, but they are a clear and disgusting violation of Swift’s privacy and dignity.

Swift’s fans, known as Swifties, were quick to express their anger and disgust at the images, and they took action to bury the ‘Taylor Swift AI’ trending topic under unrelated posts. They showed their support for Swift, who has become the latest victim of this appalling trend of fake, explicit imagery.

This trend has become more prevalent and alarming in recent years as AI image generation has grown in popularity and accessibility. That same accessibility raises serious ethical questions about how the technology can be used to harm and exploit people.

The Grammy Award winner is not the only one targeted by this kind of malicious content. Other prominent figures in the entertainment industry, such as 23-year-old TikTok star and actress Addison Rae, have faced similar attacks, including deepfake videos that manipulate their faces and voices.

This kind of content echoes the personal photo leaks that celebrities have endured over the years due to hacking. It is extremely distressing and unfair for anyone to have their image and identity used without their consent and in such a degrading way.

Unfortunately, the legal system has not caught up with this emerging threat. As MSNBC reports, “there is no such [federal] crime that covers AI-generated nudes” — even though such content can have devastating effects on a person’s mental and emotional health, as well as their professional, personal and social reputation.

Many celebrities have been victims of hacks and nude photo leaks over the years, including Bella Thorne, Miley Cyrus, Jennifer Lawrence, Demi Lovato, Kristen Stewart, Dakota Johnson, Rihanna and more.

Fake endorsement of Le Creuset cookware

Taylor Swift has also become a victim of AI-generated deepfake technology in a different way. Earlier this month, an advertisement purporting to be from cookware brand Le Creuset surfaced on social media, using a deepfake of Swift’s voice alongside her likeness.

In the ad, the AI-generated Swift claims she is giving free Le Creuset cookware sets to her “loyal fans.” Viewers are prompted to click a button and answer a few questions to receive the free set — and any personal information they enter is harvested by the scammers.

The fraudulent ads, which have made the rounds on Facebook, Instagram, and TikTok, are part of a sophisticated scam that features an AI-generated deepfake endorsement. These scams are now “more convincing than ever,” according to the Better Business Bureau, which warned consumers of a “rise in deepfake scams and ever-improving AI technology” in April.

Le Creuset says it has no involvement in the ads, and representatives for Swift have not provided comment on the videos.

Celebrity deepfakes are gradually edging their way into advertising and media, posing a threat to the credibility and authenticity of public figures and brands. In recent years, several celebrities have openly distanced themselves from ads featuring their AI-generated voice or likeness.

Oprah Winfrey expressed frustration over the prevalence of fraudulent ads in 2022, when a fan asked her about weight loss gummies. “It’s come to my attention many times over, somebody’s out there misusing my name, even sending emails to people, advertising weight loss gummies,” she said in an Instagram video. “I don’t want you all to be taken advantage of by people misusing my name.”

Journalist Gayle King asked her fans not to “be fooled” by AI-generated advertisements of her promoting a weight loss product. “They’ve manipulated my voice and video to make it seem like I’m promoting it,” King wrote in an Instagram post in October.

How to protect yourself from deepfake scams

Deepfake technology is advancing rapidly, and it can be hard to tell the difference between real and fake content. However, there are some steps you can take to protect yourself from falling prey to deepfake scams:

  • Be skeptical of any offer that sounds too good to be true, especially if it involves celebrities or free products.
  • Check the source and authenticity of the content before you click on any link or share any information. Look for signs of manipulation, such as poor quality, mismatched audio, or inconsistent lighting.
  • Do your own research and verify the information from multiple sources. If you are unsure, contact the official website or social media account of the celebrity or brand involved.
  • Report any suspicious or fraudulent content to the platform where you saw it, and warn your friends and family about it.

Deepfake technology can be used for creative and positive purposes, such as entertainment, education, or art. However, it can also be used for malicious and unethical purposes, such as fraud, blackmail, or defamation. As consumers and citizens, we need to be aware of the potential risks and benefits of this technology, and exercise caution and critical thinking when we encounter it.
