When “Fake” Goes Viral: The Mariupol Hospital Bombing and the Social Media Storm That Followed
An air strike on a maternity and children’s hospital in Mariupol on March 9 killed three people, sending shockwaves not just across Ukraine but straight into the heart of the social‑media giants. While the world mourned, a new battle raged in the comment sections of Facebook and Instagram – one that turned a Ukrainian beauty blogger into the target of a ruthless online campaign to “expose” her.
From Glam to Ground‑Zero
On the day of the strike, Mariana Vishegirskaya – a well‑known Ukrainian beauty blogger – was photographed making her way down a debris‑strewn stairwell of the bombed hospital, heavily pregnant and clutching her belongings. The Associated Press images spread fast, and comment feeds usually full of selfies suddenly turned into a battleground.
Instant Global Outrage
Images of pregnant women fleeing the blast, one of them wincing in pain, poured into feeds worldwide. They sparked protests, pleas for aid, and – unfortunately – a flood of comments accusing the victims of fabricating the attack.
The Propaganda Storm
Officials in Russia seized on the contrast, placing the hospital photographs side by side with Vishegirskaya’s polished Instagram posts. The Kremlin’s narrative? She had staged the entire event as Ukrainian propaganda. The claim was false, of course. Yet it reached millions.
On state TV, across social media accounts, and even at a UN Security Council briefing, Moscow painted her as an actress in a production orchestrated by Ukrainian forces. The message was clear: the photographs could not be trusted.
Picking Apart the Conspiracy
Inside Meta’s content‑moderation trenches, two contractors reviewing posts reported that their queues filled with comments attacking Vishegirskaya. “The posts were vile,” one said, likening them to a coordinated spam campaign. Yet most of the comments slipped through because they did not explicitly deny that the attack had happened. The pile‑on was evident, but it largely stayed outside the reach of Meta’s written policies.
When the Queue Filled Up
The moderators described feeling stuck behind a wall of misinformation they were not empowered to act on.
“I couldn’t do anything about them,” one said. That despair echoed across the moderation floor.
Meta’s Response
Meta declined to comment on its handling of the comments. In a brief statement, the company said that “separate, expert teams and outside partners” were reviewing misinformation and “inactivating fake behaviour.” The company’s policy chief, Nick Clegg, suggested new steps were in the pipeline to counter Russian disinformation, but offered no specifics.
The Limits of Moderation
In the world of social‑media oversight, flagged posts are judged in seconds by anonymous contractors working to rules that cannot anticipate every case. That always leaves room for a puzzle piece or two to be lost.
What’s Next?
- Comment queues will remain swamped, and moderators must juggle an ever‑growing tide of “fake” claims.
- Meta has promised stronger checks on coordinated disinformation, but whether they can catch out‑of‑context images and doctored “evidence” remains to be seen.
- Subtle, insinuating comments – the kind that stop just short of an explicit false claim – remain the hardest posts to police.
In the end, a response did come, but slowly – a stark reminder that even the most familiar of faces can get caught in a web of propaganda. The story remains ongoing, as Meta polices content from the conflict while dancing around the exact stance it has taken on the global stage.
Spirit of the policy
Based at a moderation hub of several hundred people reviewing content from Eastern Europe, the two contractors are foot soldiers in Meta’s battle to police content from the conflict. They are among tens of thousands of low-paid workers at outsourcing firms around the world that Meta contracts to enforce its rules.
The tech giant has sought to position itself as a responsible steward of online speech during the invasion, which Russia calls a “special operation” to disarm and “denazify” its neighbour.
Just a few days into the war, Meta imposed restrictions on Russian state media and took down a small network of coordinated fake accounts that it said were trying to undermine trust in the Ukrainian government.
It later said it had pulled down another Russia-based network that was falsely reporting people for violations like hate speech or bullying, while beating back attempts by previously disabled networks to return to the platform.
Meanwhile, the company attempted to carve out space for users in the region to express their anger over Russia’s invasion and to issue calls to arms in ways Meta normally would not permit.
In Ukraine and 11 other countries across Eastern Europe and the Caucasus, it created a series of temporary “spirit of the policy” exemptions to its rules barring hate speech, violent threats and more; the changes were intended to honour the general principles of those policies rather than their literal wording, according to Meta instructions to moderators seen by Reuters.
For example, it permitted “dehumanising speech against Russian soldiers” and calls for death to Russian President Vladimir Putin and his ally Belarusian President Alexander Lukashenko, unless those calls were considered credible or contained additional targets, according to the instructions viewed by Reuters.
The changes became a flashpoint for Meta as it navigated pressures both inside the company and from Moscow, which opened a criminal case into the firm after a March 10 Reuters report made the carve-outs public. Russia also banned Facebook and Instagram inside its borders, with a court accusing Meta of “extremist activity.”
Meta walked back elements of the exceptions after the Reuters report. It first limited them to Ukraine alone and then cancelled one altogether, according to documents reviewed by Reuters, Meta’s public statements, and interviews with two Meta staffers, the two moderators in Europe and a third moderator who handles English-language content in another region who had seen the advisories.
The documents offer a rare lens into how Meta interprets its policies, called community standards. The company says its system is neutral and rule-based.
Critics say it is often reactive, driven as much by business considerations and news cycles as by principle. It is a complaint that has dogged Meta in other global conflicts including Myanmar, Syria and Ethiopia. Social media researchers say the approach allows the company to escape accountability for how its policies affect the 3.6 billion users of its services.
The shifting guidance over Ukraine has generated confusion and frustration for moderators, who say they have 90 seconds on average to decide whether a given post violates policy, as first reported by The New York Times. Reuters independently confirmed such frustrations with three moderators.
After Reuters reported the exemptions on March 10, Mr Clegg said in a statement the next day that Meta would allow such speech only in Ukraine.
Two days later, Mr Clegg told employees the company was reversing altogether the exemption that had allowed users to call for the deaths of Mr Putin and Mr Lukashenko, according to a March 13 internal company post seen by Reuters.
At the end of last month, the company extended the remaining Ukraine-only exemptions through April 30, the documents show.
Reuters is the first to report this extension, which allows Ukrainians to continue engaging in certain types of violent and dehumanising speech that normally would be off-limits.
Inside the company, writing on an internal social platform, some Meta employees expressed frustration that Facebook was allowing Ukrainians to make statements that would have been deemed out of bounds for users posting about previous conflicts in the Middle East and other parts of the world, according to copies of the messages viewed by Reuters.
“Seems this policy is saying hate speech and violence is ok if it is targeting the ‘right’ people,” one employee wrote, one of 900 comments on a post about the changes.
Meanwhile, Meta gave moderators no guidance to enhance their ability to disable posts promoting false narratives about Russia’s invasion, like denials that civilian deaths have occurred, the people told Reuters.
The company declined to comment on its guidance to moderators.
Denying violent tragedies
In theory, Meta did have a rule that should have enabled moderators to address the mobs of commenters directing baseless vitriol at Vishegirskaya, the pregnant beauty influencer. She survived the Mariupol hospital bombing and delivered her baby, the Associated Press reported.
Meta’s harassment policy prohibits users from “posting content about a violent tragedy, or victims of violent tragedies that include claims that a violent tragedy did not occur,” according to the Community Standards published on its website.
It cited that rule when it removed posts by the Russian Embassy in London that had pushed false claims about the Mariupol bombing following the March 9 attack.
But because the rule is narrowly defined, two of the moderators said, it could be used only sparingly to battle the online hate campaign against the beauty influencer that followed.
Posts that explicitly alleged that the bombing was staged were eligible for removal, but comments such as “you’re such a good actress” were considered too vague and had to stay up, even when the subtext was clear, they said.
Guidance from Meta enabling moderators to consider context and enforce the spirit of that policy could have helped, they added.
Meta declined to comment on whether the rule applied to the comments on Vishegirskaya’s account.
At the same time, even explicit posts proved elusive to Meta’s enforcement systems.
A week after the bombing, versions of the Russian Embassy posts were still circulating on at least eight official Russian accounts on Facebook, including its embassies in Denmark, Mexico and Japan, according to an Israeli watchdog organisation, FakeReporter.
One showed a red “fake” label laid over the Associated Press photos of Mariupol, with text claiming the attack on Vishegirskaya was a hoax, and pointing readers to “more than 500 comments from real users” on her Instagram account condemning her for participating in the alleged ruse.
Meta removed those posts on March 16, hours after Reuters asked the company about them, a spokesman confirmed. Meta declined to comment on why the posts had evaded its own detection systems.
The following day, on March 17, Meta designated Vishegirskaya an “involuntary public person,” which meant moderators could finally start deleting the comments under the company’s bullying and harassment policy, they told Reuters.
But the change, they said, came too late. The flow of posts related to the woman had already slowed to a trickle.
