Artificial intelligence content has proliferated online over the past few years, and those early AI creations with mutated hands have given way to synthetic images and videos that are hard to distinguish from reality. Having helped create this problem, Google bears some responsibility for keeping AI video under control on YouTube. To that end, the company has begun rolling out its promised likeness detection system for creators.
Google's powerful and freely available AI models have fueled a surge in AI content, some of which aims to spread misinformation and harass individuals. Creators and influencers fear their brands could be tarnished by a flood of AI videos showing them saying and doing things that never happened, and it's a legitimate worry. Google has bet big on the value of AI content, though, so banning AI from YouTube, as many want, simply isn't happening.
Earlier this year, YouTube promised tools that would identify face-stealing content on the platform. That likeness detection tool, which works much like the site's longstanding copyright detection system, has now expanded beyond the initial small group of testers. YouTube says the first wave of eligible creators have been notified that they can use likeness detection, but interested parties will have to hand Google even more personal information to receive protection from AI impersonation.
Sneak peek: likeness detection on YouTube.
Likeness detection is currently in beta and undergoing limited testing, so not all creators will see it as an option in YouTube Studio. When it appears, it will live in the existing Content detection menu. In YouTube's demo video, the setup process assumes the channel has a single host whose likeness needs protecting. That person must verify their identity, which requires a photo of a government-issued ID and a video of their face. It's unclear why YouTube needs this data on top of the videos people have already posted featuring their oh-so-attractive faces, but rules are rules.