YouTube to Revise Monetization Rules Amid Rising Concerns Over AI-Generated Content

Wesley Pattinson
3 Min Read

YouTube is set to revise its policies to tackle the monetization of what it deems ‘inauthentic’ content, including mass-produced and repetitive videos that have proliferated due to advancements in AI technology. On July 15, the platform will refresh its YouTube Partner Program (YPP) Monetization policies, providing clearer guidelines on which types of content will be eligible for creators to earn revenue.

While the specific policy wording has yet to be disclosed, YouTube’s Help documentation states that creators have long been required to produce ‘original’ and ‘authentic’ material. The upcoming update aims to help creators better recognize what constitutes ‘inauthentic’ content today.

Some creators have expressed concern that the impending changes could restrict their ability to monetize certain video formats, such as reaction videos or videos that feature clips. However, Rene Ritchie, the Head of Editorial & Creator Liaison at YouTube, clarified in a video update that this is merely a ‘minor update’ to existing YPP policies, intended to more effectively identify mass-produced or repetitive content.

Additionally, Ritchie noted that this type of content has historically been ineligible for monetization, as it is often perceived by viewers as spam. What he did not address, however, is how much easier it has become to produce such videos at scale.

The surge in AI technology has resulted in a flood of what some are calling ‘AI slop,’ a term used to describe low-quality media generated by generative AI. For example, AI-generated voices are frequently overlaid on images, video clips, or repurposed content using text-to-video tools. Channels filled with AI-generated music boast millions of subscribers, while fake AI-generated news videos, such as those related to the Diddy trial, have garnered significant viewership.

Notably, a true crime series on YouTube that gained viral attention was reported earlier this year to be entirely AI-generated, according to 404 Media. Even YouTube CEO Neal Mohan’s likeness was misused in an AI-generated phishing scam, despite the platform offering users tools to report deepfake content.

Though YouTube characterizes these forthcoming adjustments as a ‘minor’ update, the reality is that letting this kind of content proliferate unchecked, and letting its creators profit from it, could damage YouTube’s reputation and overall value. It is understandable, then, that the company wants firm policies on the books that allow it to enact widespread bans on creators of AI slop from the YPP.
