YouTube Rolls Out Likeness Detection Technology to Protect Creators from AI Misuse
Deepanker Verma · October 21, 2025 · Internet

YouTube has officially rolled out its likeness detection technology to eligible creators in the YouTube Partner Program, after months of testing. This new feature lets creators find and request the removal of AI-generated videos that copy their face, voice, or overall likeness without permission.

This is the first phase of the rollout, and YouTube has already emailed eligible creators about it. The company says more creators will get access gradually over time.

The likeness detection system is designed to help creators identify when their identity, especially their face or voice, is being used in AI-generated content. This includes deepfake videos, fake endorsements, or any misleading content that makes it look like the creator said or did something they never did.
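YouTube has not disclosed how its detection works under the hood, but likeness matching of this kind is commonly built on face embeddings: a model turns a face into a fixed-length vector, and two faces are treated as a match when their vectors are close enough. The sketch below is purely illustrative and not YouTube’s actual system; the embedding vectors, the `is_likeness_match` helper, and the 0.85 threshold are all assumptions.

```python
# Illustrative sketch only: NOT YouTube's implementation, just a common pattern
# for likeness matching. A real system would get embeddings from a face-recognition
# model applied to the creator's verification selfie and to frames of uploaded videos;
# here they are stand-in random vectors.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_likeness_match(reference: np.ndarray, candidate: np.ndarray,
                      threshold: float = 0.85) -> bool:
    """Flag a candidate face embedding that is close to the creator's
    verified reference embedding (threshold is an assumed value)."""
    return cosine_similarity(reference, candidate) >= threshold

rng = np.random.default_rng(0)
reference_embedding = rng.normal(size=512)                       # creator's verified face
candidate_embedding = reference_embedding + rng.normal(scale=0.1, size=512)  # similar face

print(is_likeness_match(reference_embedding, candidate_embedding))  # True for this toy data
```

In practice, voice cloning detection would follow the same idea with audio embeddings instead of face embeddings, and the threshold would be tuned to balance missed detections against false alarms.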

To get started, creators need to go to the new “Likeness” tab on their YouTube Studio dashboard. They must first agree to data processing, then scan a QR code on the screen using their phone. This leads to a verification page where they upload a photo ID and a short selfie video.

Once verified, YouTube will begin scanning the platform for videos that resemble the creator. Any detected videos will appear in their dashboard, where they can:

  • Request removal under YouTube’s privacy policy,
  • File a copyright complaint, or
  • Archive the video if they decide not to take action.

Creators can also opt out of this program at any time. YouTube says the system will stop scanning for their likeness within 24 hours after opting out.

AI-generated deepfakes have become a growing concern in recent years. Many creators and celebrities have seen their voices and faces used without permission, often to promote products they never endorsed or to spread fake news. One well-known case involved YouTuber Jeff Geerling, whose AI-generated voice was used by a company called Elecrow to promote its products. Cases like this show how easily AI can be misused to damage someone’s reputation or mislead viewers.

Also see: How to Spot Deepfake Videos

I have also found many deepfake videos of creators and celebrities in Facebook ads promoting scams, yet Facebook has not even acknowledged the issue.

YouTube’s new technology aims to prevent such misuse and give creators more control over their identity. It is not just about privacy; it is about protecting trust and authenticity on the platform.

YouTube has been testing this feature since early 2024. Last year, the company also partnered with Creative Artists Agency (CAA) to help well-known personalities, including actors, athletes, and major creators, identify videos that use their AI-generated likeness.

In April, YouTube publicly supported the NO FAKES Act, a proposed U.S. law that would make it illegal to create or distribute fake AI replicas of someone’s image or voice without their consent. The act could become a key legal protection for digital identities in the near future.

This move from YouTube feels both necessary and overdue. Deepfakes and AI clones are becoming more realistic and harder to detect manually. For many creators, especially those with large audiences, this technology could be a real relief.

However, there are still some unanswered questions. For example, it is unclear how accurate YouTube’s detection system is at spotting AI-generated videos. Will it flag false positives? And how quickly will YouTube respond once a creator requests a removal? These factors will determine how effective the tool actually is.
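To make the accuracy question concrete, here is a minimal sketch of how precision and recall would be measured from review counts. The numbers are entirely hypothetical; YouTube has not published any accuracy figures for this system.

```python
# Hypothetical counts only, to show how detection accuracy is usually summarized.
true_positives = 180   # flagged videos that really used the creator's likeness
false_positives = 20   # flagged videos that did not (false alarms)
false_negatives = 40   # genuine misuse the system missed

precision = true_positives / (true_positives + false_positives)  # share of flags that are correct
recall = true_positives / (true_positives + false_negatives)     # share of real misuse that gets caught

print(f"precision: {precision:.2f}, recall: {recall:.2f}")  # precision: 0.90, recall: 0.82
```

A tool that flags too many false positives buries creators in reviews, while one that misses too much gives a false sense of security, so both numbers matter.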

There is also the issue of who gets access first. Right now, the feature is limited to creators in the YouTube Partner Program, which means smaller creators might not have access yet. Yet smaller creators are often the most vulnerable to impersonation, since they lack the resources to fight back.

On the positive side, this could be the start of something much bigger. YouTube might expand this system in the future to protect non-partnered creators, and perhaps even regular users. It could also inspire other platforms like TikTok, Instagram, and X to build similar tools.


About the Author: Deepanker Verma

Deepanker Verma is a well-known technology blogger and gadget reviewer based in India. He has been writing about technology for over a decade.
