Report Finds Community Notes Fail to Address Proven Misinformation on X

With Meta launching the first stage of its Community Notes roll-out, which will replace third-party fact-checkers and put the onus of slowing the spread of misinformation in the hands of its users, a new report has once again highlighted the flaws of the Community Notes system currently in place on X, the model that Meta is building its own approach around.

According to new analysis conducted by Bloomberg, which looked at over a million Community Notes that have been listed in X’s system, the vast majority are never actually shown to users of the app, despite many of those unpublished notes being deemed to be both helpful and accurate.

As per Bloomberg:

“A Bloomberg Opinion analysis of 1.1 million Community Notes — written in English, from the start of 2023 to February 2025 — shows that the system has fallen well short of counteracting the incentives, both political and financial, for lying, and allowing people to lie, on X. Furthermore, many of the most cited sources of information that make Community Notes function are under relentless and prolonged attack — by Musk, the Trump administration, and a political environment that has undermined the credibility of truly trustworthy sources of information.”

According to Bloomberg’s analysis, fewer than 10% of the Community Notes submitted via X’s notes system are ever shown in the app, primarily because of the qualifier that all notes have to gain consensus from people of differing political opinions in order to be displayed.

As X explains:

“Community Notes assesses “different perspectives” entirely based on how people have rated notes in the past; Community Notes does not ask about or use any other information to do this (e.g. demographics like location, gender, or political affiliation, or data from X such as follows or posts). This is based on the intuition that Contributors who tend to rate the same notes similarly are likely to have more similar perspectives, while contributors who rate notes differently are likely to have different perspectives. If people who typically disagree in their ratings agree that a given note is helpful, it’s probably a good indicator the note is helpful to people from different points of view.”

That means that notes on the most divisive political misinformation, in particular, are never seen, and thus, such falsehoods are not addressed, nor impacted by crowd-sourced fact-checking.
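The consensus mechanism X describes can be sketched roughly as follows. This is a toy illustration only, not X's actual algorithm (the production system uses matrix factorization over the full rating history), and every name, data structure, and threshold here is invented for demonstration:

```python
# Toy sketch of "bridging" consensus: a note is shown only when raters
# who *usually disagree* with each other both rate it helpful.
# This is NOT X's actual implementation, which uses matrix factorization.
from itertools import combinations


def similarity(a, b, history):
    """Fraction of past notes two raters scored the same way."""
    shared = [n for n in history if a in history[n] and b in history[n]]
    if not shared:
        return 0.0
    agree = sum(history[n][a] == history[n][b] for n in shared)
    return agree / len(shared)


def should_publish(note_ratings, history, sim_threshold=0.5):
    """Publish only if at least one pair of raters who typically
    disagree (low rating similarity) both rated this note helpful."""
    helpful = [r for r, v in note_ratings.items() if v == "helpful"]
    return any(
        similarity(a, b, history) < sim_threshold
        for a, b in combinations(helpful, 2)
    )


# Hypothetical rating history: alice and bob agree often; carol
# disagrees with both — a stand-in for "different perspectives".
history = {
    "n1": {"alice": "helpful", "bob": "helpful", "carol": "not"},
    "n2": {"alice": "not", "bob": "not", "carol": "helpful"},
    "n3": {"alice": "helpful", "bob": "helpful", "carol": "not"},
}

# alice + carol rarely agree, so their joint "helpful" is a strong signal.
print(should_publish({"alice": "helpful", "carol": "helpful"}, history))  # True
# alice + bob almost always agree, so their agreement alone is not enough.
print(should_publish({"alice": "helpful", "bob": "helpful"}, history))    # False
```

The sketch also makes the failure mode visible: on sharply polarized topics, the low-similarity pairs almost never co-rate a note as helpful, so the note never clears the bar and is never shown.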

This echoes what the Center for Countering Digital Hate (CCDH) found in its analysis of X's Community Notes, published in October last year, which showed that 74% of proposed notes that the CCDH judged to be accurate and warranted were never displayed to users.

Looking at the topics involved, it's not difficult to understand why notes on these specific subjects fail to reach cross-political consensus. But these narratives are also among the most harmful forms of misinformation, sparking unrest, distrust, and broad-ranging division.

And in many cases, they're wholly untrue, yet Community Notes does little to stop them being amplified within an app that has 250 million daily users. And it's about to become the primary tool against the spread of similar misinformation in apps with 12x that audience.

Yet another study, conducted by Spanish fact-checking site Maldita, and published earlier this year, found that 85% of notes remain invisible to users on X.

Some have suggested that these stats actually prove that the Community Notes approach is working, by weeding out potentially biased and unnecessary censorship of information. But rejection rates of 80% to 90% hardly suggest an efficient, effective program, and the CCDH report notes that it independently assessed the legitimacy of the notes in its study, and found that many rightfully deserved to be displayed as a means of countering misleading info.

In addition to this, reports also suggest that X’s Community Notes system has been infiltrated by organized groups of contributors who collaborate daily to up and downvote notes.

Which is also alluded to in Bloomberg’s analysis:

“From a sample of 2,674 notes about Russia and Ukraine in 2024, the data suggests more than 40% were unpublished after initial publication. Removals were driven by the disappearance of 229 out of 392 notes on posts by Russian government officials or state-run media accounts, based on analysis of posts that were still up on X at the time of writing.”

So almost half of the Community Notes that had been appended to posts from Russian state media accounts, and approved by Community Notes contributors, later disappeared due to disputes from other contributors.

Seems like more than a glitch or coincidence, right?

To some degree, there will always be a level of erroneous or malicious activity within the Community Notes process, because of the deliberately low barriers for contributor entry. In order to be approved as a Community Notes contributor on X, all you need is an account that’s free of reports, and has been active for a period of time. Then all you need to do is basically tick a box that says that you’ll tell the truth and act in good faith, and you go onto the waiting list.

So it’s easy to get into the Community Notes group, and we don’t know if Meta is going to be as open with its contributors.

But that’s kind of the point, that the system uses the opinions of the average user, the average punter watching on, as part of a community assessment on what’s true and what’s not, and what deserves to have additional contextual info added.

That means that X, and Meta, don’t have to make that call themselves, which ensures that Elon and Zuck can wash their hands of any content amplification controversies in future.

Better for the company, and in theory, more aligned with community expectations, as opposed to potentially biased censorship.

But then again, there are certain facts that are not in dispute, that clear evidence supports, yet which are still regularly debated within political circles.

And in a time where the President himself is prone to amplifying misleading and incorrect reports, this seems like an especially problematic time for Meta to be shifting to the same model.

At 3 billion users, Facebook’s reach is far more significant than X’s, and this shift could see many more misleading reports gain traction among many communities in the app.

For example, is Russia’s claim that Nazis are taking over Ukraine, which it’s used as part of justification for its attack on the nation, accurate?

This has become a talking point for right-wing politicians, as part of the push to lessen America's support for Ukraine, yet researchers and academics have refuted the claim, providing definitive evidence that there's been no political uprising around Nazism or fascism in the nation.

But this is the kind of claim that won’t achieve cross-political consensus, due to ideological and confirmation bias.

Could misinformation like this, at mass scale, reduce support for the pushback against Russia, and clear the way for certain political groups to dilute opposition to this, and similar pushes?

We’re going to find out, and when it’s too late, we’re likely going to realize that this was not the right path to take.    
