You can now make takedown requests for AI-generated YouTube videos that mimic your likeness


YouTube has changed its privacy policies to allow people to request the removal of AI-generated content that simulates their appearance or voice.

“If someone has used AI to alter or create synthetic content that looks or sounds like you, you can ask for it to be removed,” YouTube’s updated privacy guidelines state. “In order to qualify for removal, the content should depict a realistic altered or synthetic version of your likeness.”

YouTube quietly made the change in June, according to TechCrunch, which first reported on the new policy.

A removal request won’t be automatically granted; rather, YouTube’s privacy policy states the platform may give the uploader 48 hours to remove the content themselves. If the uploader doesn’t take action in that time, YouTube will initiate a review.

The Alphabet-owned platform says it will consider a number of factors in determining whether it will remove the video:

  • Whether the content is altered or synthetic
  • Whether the content is disclosed to viewers as altered or synthetic
  • Whether the person can be uniquely identified
  • Whether the content is realistic
  • Whether the content contains parody, satire or other public interest value
  • Whether the content features a public figure or well-known individual engaging in a sensitive behavior such as criminal activity, violence, or endorsing a product or political candidate

YouTube also notes that it requires “first-party claims,” meaning only the person whose privacy is being violated can file a request. There are some exceptions, however: a claim can be made by a parent or guardian, by a legal representative of the person in question, by someone acting on behalf of a person who doesn’t have access to a computer, or by a close relative on behalf of a deceased person.

Notably, the removal of a video under this policy doesn’t count as a “strike” against the uploader, the kind of penalty that can lead to a ban, withdrawal of ad revenue or other consequences. That’s because the policy falls under YouTube’s privacy guidelines rather than its community guidelines, and only community guidelines violations result in strikes.

The policy is the latest in a series of changes that YouTube has made to address the problem of deepfakes and other controversial AI-generated content appearing on its platform.

Last fall, YouTube announced it was developing a system that will enable its music partners to request the removal of content that “mimics an artist’s unique singing or rapping voice.”

That announcement came in the wake of a number of musical deepfakes going viral last year, including the infamous “fake Drake” track, which garnered hundreds of thousands of streams before it was pulled from streaming platforms.

YouTube has also announced that AI-generated content on its platform must be labeled as such, and it has introduced tools that let uploaders add labels alerting viewers that content was created with AI.

“Creators who consistently choose not to disclose this information may be subject to content removal, suspension from the YouTube Partner Program, or other penalties,” YouTube said.

And regardless of labels, AI-generated content will be removed if it violates YouTube’s community guidelines, the platform said.

“For example, a synthetically created video that shows realistic violence may still be removed if its goal is to shock or disgust viewers.”

YouTube is not alone in trying to tackle deepfakes; TikTok, Meta and others have also taken steps in the wake of controversies over deepfakes appearing on their platforms.


Legislation incoming

The problem is also being addressed at the legislative level. The US Congress is considering a number of bills, including the No AI FRAUD Act in the House of Representatives and the NO FAKES Act in the Senate, that would extend the right of publicity to cover AI-generated content.

Under these bills, individuals would be granted intellectual property rights over their likeness and voice, allowing them to sue the creators of unauthorized deepfakes. Among other aims, the proposed laws are meant to protect artists from having their work or image stolen, and to protect individuals from being exploited by sexually explicit deepfakes.


Even as it works to mitigate the worst impacts of AI-generated content, YouTube is itself developing AI technology.

The platform is in talks with the three majors – Sony Music Entertainment, Universal Music Group, and Warner Music Group – to license their music to train AI tools that will be able to create music, according to a report last month in the Financial Times.

That follows YouTube’s partnerships last year with UMG and WMG to create AI music tools in collaboration with musical artists.

Per the FT, YouTube’s earlier efforts at creating AI music tools fell short of expectations. Only 10 artists signed up to help develop YouTube’s Dream Track tool, which was meant to bring AI-generated music to YouTube Shorts, the video platform’s answer to TikTok.

YouTube hopes to sign up “dozens” of artists to its new AI music tool efforts, people familiar with the matter told the FT.


