As the world learned over the weekend of Hamas’ surprise military incursion and hostage-taking in southern Israel and Israel’s subsequent airstrikes in Gaza, millions flocked to X for news.
But the site, formerly known as Twitter, has become a massive engine for false, fake and manipulated information, clouding the reality of events on the ground. In many cases, X users appeared to be following economic and political incentives to muddy the waters. And Elon Musk’s overhaul of the site, several experts observed, could be fueling the widespread misinformation.
Unlike in years past, nearly any X user can now pay for a blue checkmark, which previously indicated someone was a journalist or notable public figure and that Twitter had verified their identity.
X users can now also monetize their content ― they can receive a portion of the revenue for ads displayed on their content, in addition to tips and subscription revenue ― providing an economic incentive for amplifying emotionally charged material, even if it’s fake or misleading.
“People who have paid for blue checks have a financial incentive to LARP [live action role-play] as war reporters by dredging up old stories or fake footage,” Emerson T. Brooking, a researcher at the Atlantic Council’s Digital Forensics Research Lab, wrote. “Elon Musk enables this.”
Examples of that sort of content have been overwhelming in recent days.
One fake document, which became widespread on X, falsely stated that President Joe Biden had approved $8 billion in emergency aid for Israel. Though some posts showing the fake document included “Community Notes” from X users clarifying that it was fake, many more did not, NBC News reported, noting that a group of “verified” accounts eligible for monetization were among the early disseminators of the fake document. While Community Notes were appended to some other false and misleading posts elsewhere on X, they’re often attached well after a given post has gone viral.
Another post, now with 3.9 million “views” according to X’s metrics, falsely claimed that Hamas had bragged of purchasing U.S.-funded weapons from Ukraine for the attack, according to the fact-checking website Lead Stories.
A video post shared by top right-wing influencers ― including Charlie Kirk and Ian Miles Cheong ― claimed to depict Hamas militants going door-to-door killing Israelis. It actually showed Israeli police assembling outside a single house at an unspecified time. Though some users have removed the post, others reposted the clip ― and plagiarized the false commentary ― well after it had been identified as misleading.
One video, claiming to show Israeli helicopters being shot out of the sky by Hamas, actually showed a video game. Another, purportedly showing Israeli generals taken into custody by Hamas, was actually old footage showing Azerbaijani forces arresting Karabakh separatist leaders, the BBC reported.
“I’ve been fact-checking on Twitter for years, and there’s always plenty of misinformation during major events. But the deluge of false posts in the last two days, many boosted via Twitter Blue, is something else,” commented Shayan Sardarizadeh, a BBC journalist covering disinformation who’s done yeoman’s work recording lengthy tallies of dozens of misleading posts on the platform in recent days. “Neither fact-checkers nor Community Notes can keep up with this.”
X responded to HuffPost’s request for comment Wednesday morning with a single automated sentence: “Busy now, please check back later.” But the company said in a post Monday night that it was enrolling new accounts to participate in the Community Notes program and that “Community Notes typically appear within minutes of content posting.” Avi Asher-Schapiro, a Thomson Reuters Foundation journalist, noted the statement included “no comment on the proliferation of verified (paid) accounts spreading fabricated & fake images/information.”
Scores more misleading posts featured real footage but falsely claimed that it depicted the past two days’ events in Israel and Gaza when it showed years-old developments in other countries, including Syria and Algeria. NBC News reported that accounts spreading misleading and recycled videos were largely conservative Twitter Blue, now known as “X Premium,” users who had previously spoken about participating in X’s monetization program.
Even Musk himself took part, recommending that X users follow two accounts for war coverage that were, in fact, notorious sources of bad information ― including the A.I.-fueled lie last May that there had been an explosion at the Pentagon, The Washington Post noted. After X users flagged several anti-Semitic posts by one of the accounts, @WarMonitor, Musk deleted his recommendation.
The other account Musk recommended, @sentdefender, posted in July about making money on the platform just as monetization payments began to hit users’ bank accounts.
“Thank You to all of my Followers and to the People over @TwitterCreators for making this possible, I never expected to ever make much money off of this App because I primarily do it as a Hobby but this could honestly change so much about what I do on here,” the account wrote.
At the time, @sentdefender had just over 400,000 followers on the platform. After Musk’s promotion, it was at nearly 780,000 on Tuesday.
The monetization changes are hardly the only explanation for the high rate of false claims on X recently: Musk punctuated his takeover of Twitter with historic layoffs, dismissing staff who’d worked on countering disinformation and hateful content. He also bought (or rather, was essentially forced to buy) the company with the explicit pledge to loosen Twitter’s content moderation rules. Finally, Musk broke promises to manually authenticate blue-checkmark, or “Twitter Blue,” accounts.
Emma Steiner, the information accountability project manager at Common Cause, told HuffPost that Musk’s monetization program, combined with his new system for receiving “verified” badges, had encouraged misleading information.
“The new verification system means that it’s almost impossible to discern real news from fake news on the platform now, especially since people are posting specifically to gain revenue for engagement,” Steiner said. “That creates some really perverse incentives for breaking news events.”
If a false post generates views ― and therefore money ― deleting it may be responsible, but it’s also financially irrational.
For example, an account identified as Sulaiman Ahmed claimed in one tweet that Israeli forces had bombed the Saint Porphyrios Orthodox Church in Gaza. Even though the church quickly corrected the rumor ― and even after Ahmed acknowledged he was mistaken ― Ahmed kept his initial post up rather than deleting it. At press time, it had garnered more than 3 million views according to X’s metrics. HuffPost was unable to reach Ahmed for comment, though he has acknowledged participating in the monetization program in the past.
Eliot Higgins, founder of the investigative outlet Bellingcat, said Tuesday that the church misinformation was indicative of a broader issue under Musk’s tenure.
“Last night, we saw a totally false claim about Israel bombing a church in Gaza go viral, thanks to multiple blue tick accounts repeating an unverified claim that had no evidence to back it up,” Higgins said. “Musk has created a fundamental issue with Twitter’s credibility in moments of crisis.”
Nonetheless, Ahmed ― who also runs a YouTube account mostly dedicated to Andrew Tate news ― wasn’t finished, posting a video later Monday night that he claimed showed “ISRAEL ATTEMPTING TO CREATE FAKE FOOTAGE OF DEATHS.” In fact, he’d posted old footage of a film set. That post, too, is still on X, having racked up 1.5 million views, the bulk of which, as with the church video, came after it was revealed to be misleading.
“THEIR ARE CLAIMS THIS IS FROM A MOVIE SET,” Ahmed followed up. “Even if it is, the general propaganda is real.”
Source: Huff
Author: Matt Shuham