By now you’ve probably heard: Meta’s CEO, Mark Zuckerberg, announced in a video on Tuesday that his company, which includes Facebook, Instagram, WhatsApp, and Threads, will soon stop fact-checking user posts on its platforms. Instead, it will rely on a “Community Notes”-style, user-based fact-checking system similar to the one Elon Musk implemented at Twitter, which he renamed “X.”
Less well known and understood is that the elimination of most content moderation includes lifting restrictions on highly abusive and dehumanizing content, particularly content directed at women, immigrants, and the LGBTQ+ community.
It’s about to get very ugly out there.
Zuckerberg styles this as a return to his free speech ideals and the need to avoid too much censorship. What he’d love us to ignore, however, is what has happened on Meta’s sites when misinformation or foreign propaganda was permitted to propagate unchecked.
From Covid conspiracies to right-wing militia calls to arms to the deadly rhetoric of ethnic cleansing, Zuckerberg’s platforms have been at the center of some of the most dangerous and violent societal moments in recent years. Until now, the company’s moderators and outside fact-checkers often stepped in to tamp down the worst of it, banning extremist groups and removing harmful, inaccurate statements before they could spread widely. When they failed to act, deadly violence sometimes erupted.
Now Zuckerberg has told us we’re on our own, left to battle it out using what amounts to community post-it notes on a few of the most viral bits of mis- or disinformation.
The likely result is entirely, and tragically, predictable. We need only look at what has happened over at Twitter/X since Musk took over to see what the future will hold on Meta’s platforms: a sharp rise in hate groups and hate speech; an unending gusher of false or misleading claims; and the gradual drowning out of voices for tolerance, acceptance, and peace.
With this change in policy now looming, it’s important to understand and review how we got to this point, how Meta’s positions on content moderation evolved over time, and what precisely the new policies at Meta will mean, especially for some of our most vulnerable communities.
Why the change?
Much of the press coverage to date has focused on Zuckerberg’s apparent capitulation to Trump and the MAGA radicals, who have been pressuring him to ditch fact-checking and moderation. The coercion from Trump has even included threats of jail time. To Zuckerberg’s critics, Tuesday’s announcement was just another example of a tech bro bowing to the new powers in Washington. Indeed, Trump himself indicated that the changes were “probably” the result of his own threats against Zuckerberg.
Those who know Zuckerberg personally, however, offer a somewhat different view, one that aligns with Facebook’s origin story (it was, after all, originally a site to rate female classmates’ appearances) and his recent interest in hypermasculine culture, including taking up mixed martial arts fighting and appearing on Joe Rogan. Just as we’ve seen with Musk, the embattled Meta CEO’s politics have shifted noticeably to the right, and he appears eager to wash his hands of what he perceives as “woke” leftist handwringing over content on his sites.
The changes at Meta go beyond policy. As Kevin Roose, a tech writer for the Times and host of the Hard Fork podcast, noted, they also include important personnel changes:
Last week, Meta’s global policy chief, Nick Clegg — a former British deputy prime minister who was chosen for his centrist bona fides — was replaced by Joel Kaplan, a longtime Republican operative who has acted for years as Mr. Zuckerberg’s liaison to the pro-Trump right.
On Monday, Meta announced the appointment of three new board members, including Dana White, the chief executive of the Ultimate Fighting Championship and a close friend and political ally of Mr. Trump’s.
Democracy and human rights activists are understandably alarmed. Some of the biggest social platforms, where billions of people obtain their daily news and information, are now set to go the way of X. And in that cesspool of a site, misinformation and hate speech have proliferated rapidly since Musk’s acquisition of the company.
From mea culpa to not my problem
Facebook used to champion the idea of fact-checking. In 2016, after the first election of Donald Trump, the company really took it on the chin. Critics charged, correctly, that Facebook had permitted the Russians to sow distrust and target specific groups of citizens such as African American voters. It also had allowed companies like Cambridge Analytica to harvest psychographic profiles of voters so they could be directly targeted with political ads.
Zuckerberg responded by turning to outside fact-checking organizations from AP to Snopes, along with third parties approved by the International Fact-Checking Network, to help identify and flag or remove misleading or false posts. For years, Facebook (and later Meta) struggled to keep misinformation in check.
But these efforts, which cost billions of dollars and absorbed millions of people-hours, came at a high political cost. The right viewed Facebook’s efforts as politically motivated and biased against their information ecosystem. In fairness, the fact-checkers probably did lean far harder on the right, though not for the reasons MAGA folks asserted. Most misinformation and conspiracies originate and spread from the right, so it’s hardly surprising that their content would be flagged or removed more often, with occasional over-policing happening on the left, too.
Zuckerberg found, however, that playing the content police, especially over politically controversial content, was a thankless job. The right was howling, and the left was still suspicious of his motives and future plans. Most users would never see how much dangerous disinformation got suppressed; they would only remember the times they got put in Facebook jail.
In 2019, Zuckerberg began to push back. In September of that year, he announced that his companies would no longer moderate politicians’ speech or fact-check their political ads—a move that could give a big boost to serial liars such as Trump and GOP politicians generally.
This move seemed only to confirm Democrats’ suspicions that Zuckerberg wouldn’t stand up to right-wing attacks and was only ever in it for his company’s bottom line.
To make matters worse, Zuckerberg followed this up with a speech in October 2019 at Georgetown University in which he declared that he didn’t want to see his company become an “arbiter of speech.” As the Times reported,
To make his case, Mr. Zuckerberg invoked Frederick Douglass, the Rev. Dr. Martin Luther King Jr., the Vietnam War and the First Amendment. He contrasted Facebook’s position with that of China, where the authorities control and censor speech, and which he tried unsuccessfully for years to enter to turbocharge his company’s business.
“People having the power to express themselves at scale is a new kind of force in the world—a Fifth Estate alongside the other power structures of society,” Mr. Zuckerberg, [then] 35, said.
He added that despite the messiness of free speech, “the long journey towards greater progress requires confronting ideas that challenge us.”
“I’m here today because I believe we must continue to stand for free expression,” he said.
His words were not well received by many on the left, who were still smarting from his new policy allowing right-wing political lies in ads.
A spokesperson for the Biden campaign at the time derided the speech as a “feigned concern for free expression” which “demonstrates how unprepared his company is for this unique moment in our history and how little it has learned over the past few years.” Sen. Elizabeth Warren (D-MA) tweeted, rather prophetically in hindsight, that “Facebook is actively helping Trump spread lies and misinformation. Facebook already helped elect Donald Trump once. They might do it again—and profit off of it.”
Even a daughter of Dr. King tweeted that she’d heard the speech and took issue with it. “I’d like to help Facebook better understand the challenges #MLK faced from disinformation campaigns launched by politicians. These campaigns created an atmosphere for his assassination.”
Covid as a tipping point
When the Covid-19 pandemic began in 2020, Facebook was flooded with dangerous misinformation about the virus and, later, about vaccine safety. Eager to present itself as a good global citizen, Facebook began removing false claims about Covid-19 vaccines that had been debunked by public health experts, and it expanded that list in February 2021. These included claims that
COVID-19 is man-made or manufactured
Vaccines are not effective at preventing the disease they are meant to protect against
It’s safer to get the disease than to get the vaccine
Vaccines are toxic, dangerous or cause autism
You can almost hear the pride behind these moves in the company’s public statements about its efforts:
These new policies will help us continue to take aggressive action against misinformation about COVID-19 and vaccines.
We will begin enforcing this policy immediately, with a particular focus on Pages, groups and accounts that violate these rules, and we’ll continue to expand our enforcement over the coming weeks. Groups, Pages and accounts on Facebook and Instagram that repeatedly share these debunked claims may be removed altogether. We are also requiring some admins for groups with admins or members who have violated our COVID-19 policies to temporarily approve all posts within their group. Claims about COVID-19 or vaccines that do not violate these policies will still be eligible for review by our third-party fact-checkers, and if they are rated false, they will be labeled and demoted.
Finally, we are continuing to improve Search results on our platforms. When people search for vaccine or COVID-19 related content on Facebook, we promote relevant, authoritative results and provide third-party resources to connect people to expert information about vaccines. On Instagram, in addition to surfacing authoritative results in Search, in the coming weeks we’re making it harder to find accounts in search that discourage people from getting vaccinated.
But to hear Zuckerberg tell it today, the impetus for all this was external pressure from politicians, not internal conviction. According to him now, the Biden administration forced the company to take down false or misleading Covid-19 content. He described this in a rather craven letter in August 2024 to Rep. Jim Jordan (R-OH), who heads the House Judiciary Committee.
The letter fairly reeks of revisionism. In it, Zuckerberg alleged that officials “repeatedly pressured” Facebook for months to take down “certain COVID-19 content including humor and satire.” The officials “expressed a lot of frustration” when the company didn’t agree, he wrote.
“I believe the government pressure was wrong and I regret that we were not more outspoken about it,” Zuckerberg stated.
Comparing the company’s tough stance in 2020 and 2021 to its head-down, kneeling posture today, it’s hard not to conclude that Zuckerberg is part political chameleon, part weathervane, changing his colors as the political winds shift.
“Too many mistakes”
In his video address announcing the policy changes at Meta, Zuckerberg stated that the content moderation and fact-checking efforts his company implemented, especially following the 2016 election, led to “too many mistakes” and “too much censorship.” In place of its current system, Meta will turn to a “Community Notes”-style program after watching “this approach work on X.”
This is a highly dubious claim. Community Notes has not meaningfully slowed the spread of false information on Twitter/X, in part because it takes time for a Community Note to be written and vetted by the “community” on X. By the time a note is attached to a post, the misinformation has usually already gone viral; indeed, virality is often what prompts a note in the first place. Meanwhile, screenshots and shares of the same content typically won’t carry the note, and the false claim can simply be copied and reposted in a fresh post.
Without some kind of automated review and actual shutting down of the worst offending accounts, it is hard to see how a few Community Notes here and there would ever be enough to arrest the spread of false information and conspiracies.
Zuckerberg nonetheless is jettisoning fact-checking, even though studies have shown that it actually works. In 2020, a group of researchers tested the effectiveness of fact-checking across four countries: Argentina, Nigeria, South Africa, and the United Kingdom. The study evaluated the effect of 22 fact-checks, including two used in all four countries, and found that they reduced belief in misinformation, with most effects persisting even two weeks later. It further concluded,
A meta-analytic procedure indicates that fact-checks reduced belief in misinformation by at least 0.59 points on a 5-point scale. Exposure to misinformation, however, only increased false beliefs by less than 0.07 points on the same scale. Across continents, fact-checks reduce belief in misinformation, often durably so.
Similarly, a Stanford study concluded that, thanks to efforts to combat misinformation online, a far smaller share of Americans visited websites with false or misleading information by the 2020 election, even though the number of such sites had increased dramatically. Among those who did visit, the average number of visits dropped, as did time spent on those sites.
It is disingenuous for Zuckerberg to claim today that Meta must shut the entire program down because it creates “too many mistakes” and “too much censorship.” The truth is that, in his mind, fact-checking has become too politically costly to continue. Policing misinformation means constantly monitoring, flagging, and deleting posts from the right, and Zuckerberg is simply unwilling to keep doing it, regardless of the consequences.
He hinted at this a bit in his video address when discussing what will happen to what remains of the company’s U.S.-based trust and safety and content moderation operations. According to Zuckerberg, the team will be moved to Texas instead of California “to do this work in places where there’s less concern about the bias of our teams.”
The dangers of limiting content moderation
Author Michael Harriot is known for his takedowns and unique perspectives, and last night he reminded readers what’s at stake with Meta’s coming abandonment of content moderation and fact-checking. He provided a few examples:
When the alt-right and Nazis marched in Charlottesville, Facebook removed the event page the day before the march took place. It was late to the game, but if that page had remained up, how much bigger might the rally have been?
Thirteen hours before Kyle Rittenhouse grabbed his AR-15 and headed across state lines to Kenosha, Wisconsin, where he shot three people, a Facebook page was actively soliciting armed individuals to protect neighborhoods that night. It urged followers to “take up arms” and defend the city “from the evil thugs.” Over 300 people RSVPed to an event titled “Armed Citizens to Protect Our Lives and Property.” Infowars picked up that call to arms and broadcast it to its listeners. Facebook was late to take the page down, but imagine today if such pages and events were simply left up.
While 95 percent of the George Floyd protests were peaceful, those that were not often featured right-wing groups instigating violence. Will these groups be permitted to organize on Meta platforms again? At a time when civil protest is again likely, given that Trump will once more be in the White House, will Meta do nothing as the far right openly plans disruption and chaos?
There was also “Pizzagate,” a debunked conspiracy theory that falsely claimed Democrats were running a pedophile ring inside the basement of a D.C. pizza shop (which had no basement). One unstable adherent was so worked up that he entered the shop with a gun and began shooting off locks, terrifying employees and patrons. Thankfully no one was injured or killed, but is this the kind of conspiracy Meta will allow to proliferate now?
In parts of the world without sufficient content moderation guardrails, misinformation can have deadly consequences and even fuel genocidal campaigns. In Sri Lanka, for example, false rumors spread of a Muslim plot to destroy the country’s Buddhist majority. On Sinhalese-language Facebook groups, extremists with large followings goaded members to attack Muslims, and a man was burned to death. That was but one episode in a spiral of attacks across the island nation.
As the Times reported at the time, this provides a chilling glimpse into what the future could hold:
A reconstruction of Sri Lanka’s descent into violence, based on interviews with officials, victims and ordinary users caught up in online anger, found that Facebook’s newsfeed played a central role in nearly every step from rumor to killing. Facebook officials, they say, ignored repeated warnings of the potential for violence, resisting pressure to hire moderators or establish emergency points of contact.
Facebook declined to respond in detail to questions about its role in Sri Lanka’s violence, but a spokeswoman said in an email that “we remove such content as soon as we’re made aware of it.” She said the company was “building up teams that deal with reported content” and investing in “technology and local language expertise to help us swiftly remove hate content.”
With these teams soon depleted or dissolved and hate speech permitted to proliferate, mob rule, fed by false rumors and likely also aided by AI, will almost certainly lead to more such incidents.
Nowhere was the ultimate risk of unfettered misinformation and hate speech more apparent than in Myanmar. Amnesty International investigated the violence that erupted there against an ethnic minority, the Rohingya, as part of a state-sanctioned pogrom. It concluded,
“In 2017, the Rohingya were killed, tortured, raped, and displaced in the thousands as part of the Myanmar security forces’ campaign of ethnic cleansing. In the months and years leading up to the atrocities, Facebook’s algorithms were intensifying a storm of hatred against the Rohingya which contributed to real-world violence.”
The use of Facebook to spread genocidal rhetoric was chilling. Per Amnesty,
In the months and years prior to the crackdown, Facebook in Myanmar had become an echo chamber of anti-Rohingya content. Actors linked to the Myanmar military and radical Buddhist nationalist groups flooded the platform with anti-Muslim content, posting disinformation claiming there was going to be an impending Muslim takeover, and portraying the Rohingya as “invaders.”
In one post that was shared more than 1,000 times, a Muslim human rights defender was pictured and described as a “national traitor.” The comments left on the post included threatening and racist messages, including ‘He is a Muslim. Muslims are dogs and need to be shot,’ and ‘Don’t leave him alive. Remove his whole race. Time is ticking.’
Content inciting violence and discrimination went to the very top of Myanmar’s military and civilian leadership. Senior General Min Aung Hlaing, the leader of Myanmar’s military, posted on his Facebook page in 2017: “We openly declare that absolutely, our country has no Rohingya race.” He went on to seize power in a coup in February 2021.
There are strong parallels here with our own politics, especially given the use by Trump and the right of the same labels (“invaders” and “traitor”) and similar dehumanizing language (“dogs” in their case, “vermin” in ours).
Allowing hate speech again
Lost in the details of Meta’s policy shift on content moderation is precisely what kind of language will now be permitted on its sites. Meta has answered only vaguely, by crossing out parts of its old policy and adding new language.
Wired recently compiled a list of changes that the new policy will bring, and online commentators have pointed out others. Below are some examples.
Under the old policy, users were not permitted to “refer to women as household objects or property.” That part of the former policy is now crossed out, so we can assume objectification of women is now fair game.
A new policy will now also permit “allegations of mental illness or abnormality when based on gender or sexual orientation, given political and religious discourse about transgenderism and homosexuality.”
Meta’s new policy has removed language prohibiting the targeting of people based on race, ethnicity, or gender when paired with “claims they have spread the coronavirus.” This is likely to impact AANHPI communities the most, as they are frequently and falsely attacked as carriers or superspreaders of disease.
Meta appears to have limited its prohibition on hateful speech to anything that will “incite imminent violence or intimidation.” Before, it warned against the use of hate speech generally to incite “offline” violence. In other words, if the hate speech isn’t directly calling for immediate violence, it could pass muster. We know from experience, however, that by the time violence is imminent, it is far too late to implement controls.
To underscore how horrible and dehumanizing the new Meta policy is, the company will now even permit users to refer to transgender or non-binary people as “it” based on parts of the old policy that are crossed out.
Technology and the mob
In his latest book, Nexus: A Brief History of Information Networks From the Stone Age to AI, author Yuval Noah Harari reminds us that, despite what we have been led to believe, new technology doesn’t always bring about greater enlightenment. In fact, the record is quite grimly the opposite.
Take the movable-type printing press, which flooded Europe with books and supposedly spawned the scientific revolution. Harari reminds us, “Nothing guaranteed that printing would be used for science.” Indeed, the era’s best seller wasn’t Copernicus or Isaac Newton. It was The Hammer of Witches, a treatise published in 1486 about a supposed satanic conspiracy of women who bedded demons and cursed men’s penises. Writes the Atlantic,
The historian Tamar Herzig describes Kramer’s treatise as “arguably the most misogynistic text to appear in print in premodern times.” Yet it was “a bestseller by early modern standards,” she writes. With a grip on its readers that Harari likens to QAnon’s, Kramer’s book encouraged the witch hunts that killed tens of thousands. These murderous sprees, Harari observes, were “made worse” by the printing press.
There were no content moderators or fact-checkers for The Hammer of Witches. Readers were left to decide on their own whether what was printed was true.
Zuckerberg and Musk, armed with massive platforms and artificial intelligence, expect humanity to fare far better this time around, again without any meaningful guardrails against the sudden proliferation of misinformation spread rapidly by new technologies.
Tragically, all of the evidence so far strongly suggests that we will not.