Meta is making a significant shift in its content moderation strategy. Inspired by X, the company is replacing its fact-checking program with a community-driven system. Here’s what you need to know.
Meta CEO Mark Zuckerberg has announced a significant shift in the company’s content moderation approach, signaling the end of its fact-checking program in favor of a community-driven system modeled after Elon Musk's X (formerly Twitter). The new policy will initially roll out in the U.S. across Meta platforms, including Facebook, Instagram, and Threads, and represents a marked change in how Meta handles misinformation and moderates speech.
In a video announcement, Zuckerberg said it’s time for Meta to go back to its roots. The 2024 U.S. presidential election, he explained, was a cultural turning point that showed the need to prioritize free speech. “It’s time to get back to our roots around free expression,” he said. He was candid about the trade-offs, admitting the move could lead to more harmful content while stressing the importance of minimizing the unintentional removal of legitimate posts and accounts.
Meta first rolled out its fact-checking program in 2016 to tackle misinformation after criticism over Facebook’s role in spreading false information during the U.S. presidential election that year. Over time, the program expanded to work with nearly 100 organizations in more than 60 languages. But according to Zuckerberg, the system just wasn’t cutting it anymore. “Too many mistakes, too much censorship,” he said.
What to expect from the new moderation strategy:
Letting the community take the lead
The big idea now? Let the users help moderate. Meta’s new Community Notes program is inspired by a similar system on X, where users can flag and correct inaccuracies in posts. Zuckerberg called this a more democratic and tailored way of handling misinformation, saying it allows for “free expression with accountability.”
To support this shift, Meta is moving its trust and safety team from California to Texas. The idea is to work in a less polarized environment and potentially reduce bias in moderation decisions.
The risks and the rewards
Of course, this approach isn’t without its risks. Zuckerberg himself acknowledged that this change means more questionable content could slip through. “The reality is that this is a trade-off,” he said. “We’ll catch less bad stuff, but we’ll also reduce the number of innocent people’s posts and accounts that we accidentally take down.”
Critics worry the move might fuel the spread of misinformation and hate speech, while free speech advocates see it as a step toward less corporate control over what’s shared online.
The tech giant now faces the challenge of transitioning from a heavily moderated platform to one that relies on its global user base to self-regulate the information it shares. Whether this shift will strike the right balance between free expression and responsible content management remains to be seen.
Here is how people on social media are reacting to the decision:
This! Is! Pathetic!
— Christopher Webb (@cwebbonline) January 7, 2025
X is a disinformation dumpster fire—so why would anyone use it as a model for fact-checking? Oh wait…
🚨 Meta is ditching its fact-checking program for a ‘community notes’ system like X. Because who needs truth when you can crowdsource chaos? pic.twitter.com/lKUBbfiTVg
As Meta announces the end to professional fact-checking, please -please- accept finally that social media is not a viable way to get factual information. It was a noble experiment. It failed. Traditional journalism, while imperfect, is the way. Please pay a little bit and use it.
— Brian Mann (@BrianMannADK) January 7, 2025
Mark Zuckerberg says Meta's fact-checking program became "something out of 1984" due to political bias on behalf of the fact-checkers and it was destroying so much trust that it had to be stopped pic.twitter.com/LY5WKTfALT
— Tsarathustra (@tsarnick) January 10, 2025
Guy spent billions making AI models that know the whole internet and didn’t even consider just having his models do the fact checking. I wonder why? $META pic.twitter.com/gFp4798ipw
— Dr_Gingerballs (@Dr_Gingerballs) January 7, 2025
Mark Zuckerberg is removing fact-checking from Meta. 👀
— Ryan Shead (@RyanShead) January 7, 2025
I hope most rational people are asking themselves why removing fact-checking is necessary for free speech and expression.
Only liars and con-artists don’t like being fact-checked. pic.twitter.com/RuER0Yhpt5
What are your thoughts on this shift? Tell us in the comments below!
For more such content, follow us @socialketchup