
Meta discontinues fact-checking, shifting to community-driven moderation similar to X

Piyush Singh

Meta is making a significant shift in its content moderation strategy. Inspired by X, the company is replacing its fact-checking program with a community-driven system. Here’s what you need to know.

Meta CEO Mark Zuckerberg has announced a significant shift in the company’s content moderation approach, signaling the end of its fact-checking program in favor of a community-driven system modeled after Elon Musk's X (formerly Twitter). The new policy will initially roll out in the U.S. across Meta platforms, including Facebook, Instagram, and Threads, and represents a marked change in how Meta handles misinformation and moderates speech. 

In a video announcement, Zuckerberg said it’s time for Meta to go back to its roots. The 2024 U.S. presidential election, he explained, was a cultural turning point that underscored the need to prioritize free speech. “It’s time to get back to our roots around free expression,” he said. He was candid about the trade-offs, admitting the move could lead to more harmful content while stressing the importance of minimizing the unintentional removal of legitimate posts and accounts.

Meta first rolled out its fact-checking program in 2016 to tackle misinformation after criticism over Facebook’s role in spreading false information during the U.S. presidential election that year. Over time, the program expanded to work with nearly 100 organizations in more than 60 languages. But according to Zuckerberg, the system just wasn’t cutting it anymore. “Too many mistakes, too much censorship,” he said.


What to expect from the new moderation strategy:

Letting the community take the lead

The big idea now? Let the users help moderate. Meta’s new Community Notes program is inspired by a similar system on X, where users can flag and correct inaccuracies in posts. Zuckerberg called this a more democratic and tailored way of handling misinformation, saying it allows for “free expression with accountability.”

To support this shift, Meta is moving its trust and safety team from California to Texas. The idea is to work in a less polarized environment and potentially reduce bias in moderation decisions.

The risks and the rewards

Of course, this approach isn’t without its risks. Zuckerberg himself acknowledged that this change means more questionable content could slip through. “The reality is that this is a trade-off,” he said. “We’ll catch less bad stuff, but we’ll also reduce the number of innocent people’s posts and accounts that we accidentally take down.”

Critics worry the move might fuel the spread of misinformation and hate speech, while free speech advocates see it as a step toward less corporate control over what’s shared online.

The tech giant now faces the transition from a heavily moderated platform to one that leans on its global user base to self-regulate the information it shares. Whether this shift strikes the right balance between free expression and responsible content management remains to be seen.
