YouTube Launches New AI Process Requirements for Uploads

YouTube's looking to expand its disclosures around AI-generated content, with a new element within Creator Studio where creators will have to disclose when they upload realistic-looking content that's been made with AI tools.

Under the new process, YouTube creators will be required to check a box when the content of their upload "is altered or synthetic and seems real", a measure aimed at curbing deepfakes and misinformation spread via manipulated or simulated depictions.

When the box is checked, a new marker will be displayed on the video clip, letting viewers know that they're not watching real footage.

As per YouTube:

“The new label is meant to strengthen transparency with viewers and build trust between creators and their audience. Some examples of content that require disclosure include using the likeness of a realistic person, altering footage of real events or places, and generating realistic scenes.”

YouTube further notes that not all AI use will require disclosure.

AI-generated scripts and production elements are not covered by the new rules, while "clearly unrealistic content" (e.g. animation), color adjustments, special effects, and beauty filters will also be safe to use without the new disclosure.

But content that could mislead will need a label. And if you don't add one, YouTube may add it for you, if it detects the use of synthetic and/or manipulated media in your clip.

It's the next step for YouTube in ensuring AI transparency, after the platform announced its initial requirements around AI usage disclosure last year, including labels to inform users of such use.

This new update is the next stage in that effort, adding more transparency requirements around simulated content.

Which is a good thing. We've already seen AI-generated images cause confusion, while political campaigns have used manipulated visuals in the hope of swaying voter opinion.

And AI-generated content is only going to be used more and more often.

The only question, then, is how long we'll actually be able to detect it.

Various solutions are being tested on this front, including digital watermarking, to ensure that platforms know when AI has been used. But a watermark won't survive a copy of a copy: if a user re-films that AI content on their phone, for example, any embedded checks are stripped out.
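
To illustrate that weakness, here's a minimal sketch in Python of a naive pixel-level watermark, in no way YouTube's or anyone's actual system: a short, hypothetical bit pattern is embedded in an image's least significant bits. It survives a clean digital copy of the file, but the moment the frame is re-captured, with sensor noise re-quantizing the pixels, the pattern is destroyed.

```python
# A toy LSB (least-significant-bit) watermark, for illustration only --
# not any platform's real detection system. It shows why pixel-level
# marks don't survive re-capture: re-filming adds sensor noise that
# re-quantizes pixels and wipes out the embedded bits.
import numpy as np

# Hypothetical 8-bit "AI-generated" tag.
WATERMARK = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)

def embed(frame: np.ndarray) -> np.ndarray:
    """Write the tag into the LSBs of the first few pixels."""
    marked = frame.copy()
    flat = marked.ravel()  # view into `marked`, so writes stick
    flat[: WATERMARK.size] = (flat[: WATERMARK.size] & 0xFE) | WATERMARK
    return marked

def detect(frame: np.ndarray) -> bool:
    """Return True if the LSB pattern is intact."""
    return bool(np.array_equal(frame.ravel()[: WATERMARK.size] & 1, WATERMARK))

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in video frame

marked = embed(frame)
print(detect(marked))    # True: a clean digital copy keeps the mark

# Simulate a "copy of a copy": re-filming the screen adds a little noise.
noise = rng.integers(-3, 4, size=marked.shape)
refilmed = np.clip(marked.astype(int) + noise, 0, 255).astype(np.uint8)
print(detect(refilmed))  # almost certainly False: the mark is gone
```

Real provenance schemes, such as C2PA metadata or more robust watermarks, are far more resilient than this toy, but the underlying issue is the same: metadata doesn't survive re-filming at all, and enough re-processing will eventually scrub any embedded signal.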

There will be ways around any such system, and as generative AI continues to improve, particularly in video generation, it's going to become more and more difficult to know what's real and what's not.

Disclosure rules like this are critical, as they give platforms a means of enforcement. But they might not be effective for long.
