TikTok has published its latest Transparency Report, as required under the EU Code of Practice, which outlines the enforcement actions it undertook within EU member states over the second half of 2024.
And there are some interesting notes in regard to the impact of content labeling, the rise of AI-generated or manipulated media, foreign influence operations, and more.
You can download TikTok’s full H2 2024 Transparency Report here (warning: it’s 329 pages long), but in this post, we’ll take a look at some of the key notes.
First off, TikTok reports that it removed 36,740 political ads in the second half of 2024, in line with its ban on political advertising in the app.
Political ads are not permitted on TikTok, though as the number would suggest, that hasn’t stopped a number of political groups from seeking to use the reach of the app to expand their messaging.
That highlights both the rising influence of TikTok more broadly, and the ongoing need for vigilance in managing potential misuse by these groups.
TikTok also removed almost 10 million fake accounts in the period, as well as 460 million fake likes that had been distributed by these profiles. These could have been used to manipulate content ranking, and the removal of this activity helps to ensure authentic interactions in the app.
Well, “authentic” in terms of this coming from real, actual people. It can’t do much about you liking your friend’s crappy post because you’ll feel bad if you don’t.
In terms of AI content, TikTok also notes that it removed 51,618 videos in the period for violations of its AI-generated content rules.
“In the second half of 2024, we continued to invest in our work to moderate and provide transparency around AI-generated content, by becoming the first platform to begin implementing C2PA Content Credentials, a technology that helps us identify and automatically label AIGC from other platforms. We also tightened our policies prohibiting harmfully misleading AIGC and joined forces with our peers on a pact to safeguard elections from deceptive AI.”
Meta recently reported that AI-generated content wasn’t a major factor in its election integrity efforts last year, with ratings on AI content related to elections, politics, and social topics representing less than 1% of all fact-checked misinformation. Which, on balance, is probably close to what TikTok saw as well, though at such a massive scale, even 1% still represents a lot of AI-generated content being assessed and rejected by these apps.
This figure from TikTok puts that in some perspective, while Meta also reported that it rejected 590k requests to generate images of U.S. political candidates within its generative AI tools in the month leading up to election day.
So while AI content hasn’t been a major factor as yet, more people are at least trying it, and you only need a few of these hoax images and/or videos to catch on to make an impact.
TikTok also shared insights into its third-party fact-checking efforts:
“TikTok recognizes the important contribution of our fact-checking partners in the fight against disinformation. In H2 we onboarded two new fact-checking partners and expanded our fact-checking coverage to a number of wider-European and EU candidate countries with existing fact-checking partners. We now work closely with 14 IFCN-accredited fact-checking organizations across the EU, EEA and wider Europe who have technical training, resources, and industry-wide insights to impartially assess online misinformation.”
Which is interesting in the context of Meta moving away from third-party fact-checking, in favor of crowd-sourced Community Notes to counter misinformation.
TikTok also notes that content shares were reduced by 32%, on average, among EU users when an “unverified claim” notification was displayed to indicate that the information presented in the clip may not be true.
In fairness, Meta has also shared data which suggests that the display of Community Notes on posts can reduce the spread of misleading claims by 60%. That’s not a direct comparison to this stat from TikTok (TikTok’s measuring total shares by count, while the study looked at overall distribution), but the results could be roughly comparable.
Though the problem with Community Notes is that the majority are never displayed to users, because they don’t gain cross-political consensus from raters. As such, TikTok’s stat here does indicate that there’s value in third-party fact-checks, and/or “unverified claim” notifications, in reducing the spread of potentially misleading claims.
For further context, TikTok also reports that it sent 6k videos uploaded by EU users to third-party fact-checkers within the period.
That points to another issue with third-party fact-checking: it’s very difficult to scale, meaning that only a tiny fraction of content can actually be reviewed.
There’s no definitive right answer, but the data here does suggest that there is at least some value to maintaining an impartial third-party fact-checking presence to monitor some of the most harmful claims.
There’s a heap more in TikTok’s full report (again, over 300 pages), including a range of insights into EU-specific initiatives and enforcement programs.