YouTube is introducing new rules for videos made or altered with artificial intelligence (AI), including disclosure labels.
The Google-owned video platform said in a blog post on Tuesday that it will roll out several changes in the coming months. One of them: creators will be required to disclose whether a video was made with AI when they upload it, and that disclosure will appear as a label on the video so viewers are informed.
YouTube’s New Rules: Making AI Videos Clear and Responsible
YouTube gave an example of why these rules matter: AI-generated videos that realistically depict events that never happened, such as showing a person saying or doing something they never did. The company said disclosure is especially important when content addresses sensitive topics like elections, ongoing conflicts, public health crises, or public officials.
Under the new rules, creators who repeatedly fail to disclose that their content was made with AI may have their videos removed from YouTube, according to the company.
Video descriptions will now carry labels indicating the use of artificial intelligence. For content on sensitive topics, a more prominent label will appear directly on the video player.
YouTube’s Content Management
YouTube also announced a new option: people will soon be able to request the removal of AI-generated content that imitates a specific, identifiable person, using their face or voice, through the company's privacy request process. Not every removal request will be granted, however.
The company said it will not remove all such content and will weigh several factors when evaluating removal requests. These may include whether the content is parody or satire, whether the person making the request can be uniquely identified, and whether it features a public official or well-known figure, who may face a higher bar for removal.
YouTube also said it plans to build responsibility into its own AI tools and features.
“We’re thinking carefully about how we can build upon years of investment into the teams and technology capable of moderating content at our scale,” the announcement said. “This includes significant, ongoing work to develop guardrails that will prevent our AI tools from generating the type of content that doesn’t belong on YouTube.”
In short, YouTube is introducing rules for AI-created videos, requiring creators to disclose AI use and adding labels to flag it. The goal is to curb misleading content, especially on sensitive topics. Creators who repeatedly fail to comply risk having their videos removed. YouTube will also let people request the removal of AI-generated content imitating specific individuals, though not all requests will be granted. The company says it is continuing to invest in content moderation and in making its AI tools more responsible.