TikTok claims to clean up its feeds as it increases the removal of fake accounts, ads and pre-teen users
TikTok released its latest “community guidelines enforcement report” on Wednesday, which showed the short-form video app has removed many millions of ads, videos and accounts for violating a range of policies.
- Removed 59 million+ accounts in Q2, more than half of which were fake
- Removed 20 million+ accounts for age violations
- Removed 5 million+ ads for policy and guideline violations
- Removed 113 million videos for various violations related to safety, illegal activities, violence and other issues
The report itself reflects what has become a new standard for social media companies, which have taken to releasing quarterly disclosures — summarized internally — that address these topics as they seek to prove their brand safety to users, advertisers and regulators. Since Google released its first transparency report in 2010, others have followed with their own versions in more recent years, including Facebook in 2018 and TikTok itself for the first time in 2019.
TikTok’s quarterly disclosures come as the rapidly growing platform faces more regulatory pressure. Members of Congress, both Democrats and Republicans, have grown increasingly concerned with TikTok on a number of fronts ranging from data privacy and child safety to misinformation. TikTok is also under the microscope in Europe, where the company is facing a potential $29 million fine in the United Kingdom over claims that it didn’t protect children’s privacy between 2018 and 2020. And during a forum in Brussels earlier this week, Brendan Carr, a Federal Communications Commission commissioner, joined European Union officials to discuss international data flows and whether TikTok poses a potential national security threat if user data can be accessed by government officials in China.
A TikTok spokesperson declined Digiday’s interview request about its own report. However, in a blog post about the findings, Cormac Keenan, head of trust and safety, wrote that TikTok recognizes “the impact misinformation can have in eroding trust in public health, electoral processes, facts, and science.”
“We are committed to being part of the solution,” Keenan wrote. “We treat misinformation with the utmost seriousness and take a multi-pronged approach to stop it from spreading while elevating authoritative information and investing in digital literacy education to help get ahead of the problem at scale.”
Here’s a summary of the findings:
Finding and removing fake accounts
TikTok, which now claims to have more than 1 billion active users, removed 59.4 million accounts during the second quarter of 2022, according to the company’s new report — up from 14.9 million during the same period in 2021. Of the accounts it removed between April and the end of June, more than half — or 33.6 million — were deemed to be fake accounts. (The company got rid of 20.8 million fake accounts during the first three months of 2022, but just 1.7 million during the second quarter of 2021.) TikTok also removed another 20.6 million accounts suspected to have been created by kids under the age of 13, nearly twice as many as it removed for age violations a year ago.
As the company continues to grow its advertiser base, it’ll be increasingly important to have an accurate count so it doesn’t face the same backlash as other social networks when it comes to accurate marketing measurement.
The stakes are especially high right now for TikTok, said Claudia Ratterman, a director analyst at Gartner. Although major social networks release regular transparency reports, she said TikTok’s parent company, the China-based tech company ByteDance, makes it even more important to keep “building trust for continued growth” amid U.S. concerns that Chinese government officials could access U.S. users’ data.
“When users trust a platform, they are more likely to continue to use it and spend time on it regularly which translates into revenue,” Ratterman said. “Same goes for advertisers, if they trust the platform, they are more likely to invest advertising dollars on the platforms.”
Removing more ad and video content
The total ads that TikTok removed for violating policies and guidelines declined from 5.5 million in the first quarter to 5.1 million in the second quarter. (For comparison, the company removed 1.8 million ads in the second quarter of 2021.) TikTok also removed another 4.2 million ads in the second quarter due to actions taken against accounts — down from 8.7 million ads removed during the first three months of the year — but didn’t disclose what the enforcement actions were so that malicious actors couldn’t gain insights for evading detection.
In addition to removing ads and accounts, TikTok also removed more videos in the second quarter, taking down 113 million compared to 102 million during the first quarter of the year. Of the total videos removed in the past three months, it took down 43.7% with “minor safety” violations, 21.2% related to illegal activities and regulated goods, 10.7% that had adult nudity or sexual activity, 9.3% with violence and just 1.7% for having hateful behavior.
Advertiser impact
David Hook, who leads marketing at the Portland-based influencer agency Outloud Group, said TikTok’s disclosures about fake accounts contrast it with Twitter, which has faced allegations from Elon Musk and whistleblower Peiter Zatko questioning its true total number of bots. According to Hook, TikTok’s use of artificial intelligence to eliminate harmful content also helps make it a safer platform for its core audience of younger users. (According to Statista, around 25% of TikTok users in the U.S. were between the ages of 10 and 19 as of September 2021 and another 22.4% were between 20 and 29.)
TikTok’s latest report “demonstrates an ongoing maturation process for the industry on brand safety issues,” according to Mike Zaneis, CEO of Trustworthy Accountability Group and the Brand Safety Institute.
“TikTok has leaned into these efforts and it is great to see them increasing transparency into their operations as well as achieving real improvements to protect marketers and consumers,” Zaneis said.
Yomei Kajita, svp of paid social at digital marketing agency 3Q/Dept, said some smaller advertisers don’t want to pull their ads unless there’s something “extremely bad” happening that they want to avoid. In some ways, it’s similar to what many said about Facebook: for years, the ROI was too good to justify pulling spending even if advertisers had issues with brand safety on the platform. Kajita said TikTok isn’t yet as effective in the same way, but it’s too popular to ignore.
Along with its quality control efforts, TikTok has been rolling out a number of new features for advertisers looking to reach the platform’s user base. So far this year, it’s introduced new tools for creators, beta tests for search ads, new formats for shoppable content and even a BookTok feature with Penguin Random House.
“It’s a good thing that they have the tech to find these accounts and shut them down,” Kajita said. “But as an advertiser, when I’m talking to my clients and they ask if I recommend TikTok for them, are the people they’re targeting real people or are they bots?”