AI Gone Wrong: TikTok Tool Lets Users Create Offensive Content

In a security snafu, TikTok accidentally shared a link to an unfinished version of its new AI tool, which lets users create videos of digital avatars that can be made to say whatever script they are given. The early version lacked safety measures, and CNN journalists were able to generate videos containing harmful content, including quotes from Hitler and other dangerous messages. The unpolished version has since been taken down, and TikTok says the final, secured tool remains on track for launch.

TikTok’s new advertising tool, Symphony Digital Avatars, launched this week with a surprising glitch. The tool allows businesses to create ads featuring digital spokespeople modeled after real actors. These avatars can even be programmed to speak specific lines using AI dubbing technology, all within TikTok’s advertising guidelines.

However, CNN discovered a security flaw. While the intended version of Symphony Digital Avatars restricts access to business accounts, CNN found a temporary version that anyone with a regular TikTok account could access. This earlier version lacked safeguards, allowing anyone to make the avatars say potentially harmful things.

https://x.com/jonsarlin/status/1804172061113487606

Using this unsecured version, CNN was able to create videos of the AI avatars reciting harmful content, including excerpts from Osama bin Laden’s “Letter to America,” a white supremacist slogan, and even misinformation about voting dates.

These videos lacked a crucial safeguard present in the final version of the tool: a watermark indicating that the content is AI-generated.

Thankfully, this unsecured version has been taken down. TikTok assures users that the official launch version will be secure and accessible only to authorized businesses, complete with AI-generated-content watermarks.


