Democracy Gone Astray

Democracy, being a human construct, needs to be thought of as a directionality rather than an object. As such, understanding it requires not so much a description of existing structures and/or other related phenomena as a declaration of intentionality.
This blog aims to create labeled lists of published infringements of that intentionality – points in time where democracy strays from its intended directionality. In addition to outright infringements, this blog also collects important contemporary information and/or discussions that impact our socio-political landscape.

All the posts here were originally published in the electronic media – mainstream as well as fringe – and maintain links to the original texts.

[NOTE: Due to changes in the blogging software that I hadn't caught in time, all of the 'Original Article' links were broken between September 11, 2012 and December 11, 2012. My apologies.]

Sunday, July 29, 2018

Putin’s Trolls Used the Texas Church Massacre to Sow More Chaos

False information inundated social media after Sunday’s mass shooting in Sutherland Springs, Texas, and Russian trolls were in the thick of it.

Conspiracy theorists like Mike Cernovich led the way, falsely branding shooter Devin Patrick Kelley as a member of the far-left antifa movement, and the Russian media outlet RT America left the lie posted on Facebook for five hours, according to BuzzFeed. The hashtags #antifa, #sutherlandsprings and #texas were three of the top 10 recorded over the weekend by Hamilton 68, a nonpartisan research project that tracks Russian influencers on Twitter in real time.

Much of last week’s congressional hearings focused on Russia’s interference operations on Facebook during the 2016 election, when Kremlin-planted ads and fraudulent posts reached upwards of 130 million users. But disinformation attacks by Putin’s trolls seeking to sow chaos in American politics have continued apace ever since—and Twitter continues to provide an optimal platform for them, according to researchers.

“Twitter is ideal for spreading messages anonymously,” says Ben Nimmo, a senior fellow at the Atlantic Council’s Digital Forensics Research Lab, who studies automated bots on Twitter. “You can make the account look more popular, or a post look more popular, using bots. It’s relatively easy to distort the message.”

Bots amp up the volume on a message by retweeting at rates far faster than any human and by adding legions of fake followers to specific accounts. Nimmo documented one attack by Russian-linked accounts against ProPublica after the news outlet exposed connections between Russian bots and far-right accounts following the violence in Charlottesville last August. In his analysis of the attack, Nimmo found a bot network of fake accounts that amassed 23,000 retweets on one post within just a few hours. When the bot armies then attacked him personally, Nimmo showed Twitter that tens of thousands of accounts were involved by triggering 50,000 bot accounts to tweet to @TwitterSupport. Twitter Support subsequently shut down many, but not all, of the bot accounts.
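
To make that amplification mechanism concrete, here is a minimal sketch of the kind of volume check researchers use as a first pass, assuming you already have timestamped posts per account; the function name and the cutoff value are illustrative assumptions, not figures or tools from the article.

```python
from datetime import timedelta

# Minimal sketch (not from the article): flag accounts whose sustained posting
# rate looks implausible for a human. The cutoff is an illustrative assumption,
# not a published threshold.
SUSPICIOUS_POSTS_PER_DAY = 72

def flag_high_volume_accounts(timestamps_by_account):
    """timestamps_by_account: dict mapping a handle to a list of datetime
    objects, one per tweet or retweet. Returns handles posting above the cutoff."""
    flagged = []
    for handle, stamps in timestamps_by_account.items():
        if len(stamps) < 2:
            continue
        days = max((max(stamps) - min(stamps)) / timedelta(days=1), 1.0)
        if len(stamps) / days > SUSPICIOUS_POSTS_PER_DAY:
            flagged.append(handle)
    # Fake-follower inflation, the other tactic described above, would need a
    # separate check and is not modeled here.
    return flagged
```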

Some bots are dedicated exclusively to political networks, but commercial bots can also be rented from “bot herders” to tweet on any message that’s paid for, including political ones. It’s impossible for bot spotters to know the source of the accounts; they can only see the behavior, Nimmo says. “A political bot will only retweet a particular point of view.” Conversely, he says, “With a commercial bot, there’s no linking logic. If you look down the feed, you’ll see them selling fast cars, high-heeled shoes, bitcoin, often porn. It will be maybe 90 percent commercial and 10 percent selling a point of view.”
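
Nimmo’s political-versus-commercial distinction comes down to the mix of content in a feed. A rough sketch of that logic, assuming each tweet has already been labeled by some upstream topic classifier (hypothetical, not part of the article):

```python
# Rough sketch of the distinction Nimmo describes, assuming each tweet in an
# account's feed has already been labeled "political" or "commercial" by some
# upstream classifier (hypothetical; not described in the article).
def characterize_bot_feed(tweet_topics):
    """tweet_topics: list of labels such as 'political' or 'commercial'."""
    if not tweet_topics:
        return "unknown"
    political_share = tweet_topics.count("political") / len(tweet_topics)
    if political_share > 0.9:
        return "likely political bot"   # feed pushes a single point of view
    if political_share < 0.2:
        return "likely commercial bot"  # mostly ads, roughly 10% point of view
    return "mixed"

# Example: a feed that is 9 parts commercial spam to 1 part politics
print(characterize_bot_feed(["commercial"] * 9 + ["political"]))  # likely commercial bot
```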

In an analysis of election-related tweets in the weeks before the 2016 vote, University of Southern California researchers found that about one in five was generated by bots. The bots supporting Donald Trump outnumbered the bots supporting Hillary Clinton by more than 3:1, and the Trump bots overwhelmingly tweeted positive sentiments about their candidate. “The fact that bots produce systematically more positive content in support of a candidate can bias the perception of the individuals exposed to it, suggesting that there exists an organic, grassroots support for a given candidate, while in reality it’s all artificially generated,” the researchers wrote.

Twitter claims that only 5 percent of its overall traffic comes from automated bot accounts – but that’s a highly misleading number, says USC professor Emilio Ferrara, one of the authors of the 2016 election study. “Bots are not uniformly scattered throughout the conversation,” Ferrara says, pointing out that Twitter’s 5 percent figure covers all bots tweeting on all topics – positive, negative and benign – across the network. “I don’t care about the average across the platform and the bots tweeting about fashion and cats. That number doesn’t tell anything about the involvement of bots in the political conversation.” Joint research out of USC and Indiana University estimated that the share of Twitter accounts that are bots is likely much higher – up to 15 percent. And in a focused look at political messaging, Ferrara’s 2016 election study put the share of bot-generated election-themed tweets at almost 20 percent.
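
Ferrara’s objection is about conditioning: a platform-wide average says nothing about the bot share inside any one conversation. A toy calculation, using made-up numbers chosen only to illustrate the arithmetic, shows how a 5 percent overall figure can coexist with roughly 20 percent in the political slice:

```python
# Toy numbers only, illustrating Ferrara's point that a platform-wide bot
# average can coexist with a much higher bot share inside one conversation.
total_tweets = 1_000_000_000
bot_tweets_overall = 0.05 * total_tweets        # the 5 percent headline figure

political_tweets = 50_000_000                   # a small slice of the platform (assumed)
political_bot_tweets = 10_000_000               # bots concentrate here (assumed)

print(bot_tweets_overall / total_tweets)        # 0.05 -> 5% averaged across all topics
print(political_bot_tweets / political_tweets)  # 0.2  -> 20% within the political conversation
```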

Some researchers also questioned the number of Russian-linked bots that Twitter identified in its testimony to Congress last week. The company said it found 36,746 bots linked to Russian accounts and suspended them. Nimmo speculates that the number reflects a conservative approach of removing only botnets definitively linked to the St. Petersburg-based Internet Research Agency, the troll factory identified as a source of Kremlin-linked accounts. “That sounds like a single medium-sized botnet,” he says, noting that the largest network he’s seen involved 108,000 accounts.

Thomas Rid, professor of strategic studies at Johns Hopkins University, told Mother Jones that Twitter’s methods for identifying the bot accounts it removed required indicators like a Russian IP address or the use of Cyrillic characters on the account. “It’s very easy to defeat that kind of analysis by not using a Russian phone number or IP address, or to use English instead of Cyrillic,” he says.
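
As a sketch of why such indicator matching is brittle, consider a naive filter over hypothetical account metadata (the field names here are assumptions, not Twitter’s actual data model); an operator who registers with a non-Russian number and posts in English sails straight past it, which is exactly the weakness Rid describes.

```python
import re

# Sketch of the indicator matching Rid describes, with hypothetical field names;
# real account metadata is far richer than this.
CYRILLIC = re.compile(r"[\u0400-\u04FF]")

def looks_russia_linked(account):
    """account: dict with hypothetical keys 'ip_country', 'phone_country',
    and 'recent_posts' (a list of strings)."""
    if account.get("ip_country") == "RU" or account.get("phone_country") == "RU":
        return True
    return any(CYRILLIC.search(post) for post in account.get("recent_posts", []))

# A troll account using a US proxy, US phone number and English text is not flagged:
print(looks_russia_linked({"ip_country": "US", "phone_country": "US",
                           "recent_posts": ["Antifa did it! #sutherlandsprings"]}))  # False
```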

“If we give [tech companies] the benefit of the doubt, it’s because they want to be as safe and conservative as possible” in removing potentially malicious accounts, Rid adds. “I think the actual reason is: For Facebook, bots are a threat to their business model. But for Twitter, bots are a part of their business model.”

In a recent piece for Motherboard, Rid expands on this point: “Twitter’s poor market performance makes the problem worse. The social news platform, in contrast to Facebook or Google, has never made money. It therefore pays more attention to its shareholders. One of the most important metrics for its stock price is the ‘active user base.’ Millions of bots and fake accounts are boosting the numbers, making the active user base appear much larger than it actually is. The open market is thus creating an incentive to hide the bots.”

In his Motherboard piece, Rid also explores how Twitter’s privacy policies complicate efforts to measure the bot accounts and their impact: “Twitter is granting the same level of privacy protection to hives of anonymous bots commandeered by authoritarian spy agencies as it grants to an American teenager tweeting under her real name from her sofa at home.”

And while it’s impossible to know how much the election-related bots’ efforts could have influenced actual votes, research last year from the RAND Corp. suggests that sheer volume matters: “Endorsement by a large number of users boosts consumer trust, reliance, and confidence in the information, often with little attention paid to the credibility of those making the endorsements,” RAND wrote in its “Firehose of Falsehood” study of contemporary Russian propaganda tactics.

Rid told Mother Jones that the use of Twitter is a continuation of an old propaganda strategy deployed by Soviets, East Germans and others in the Cold War. “The old style was mailing a leak to a journalist you could trust,” he says. “Twitter is the perfect surfacing tool.”

And Twitter is a particularly effective way for propaganda to reach mainstream media, Nimmo says: “Twitter tends to be where the politicians and the journalists congregate.” He pointed out that the fake Russian account TEN_GOP had been quoted in The Washington Post and Huffington Post.

“If social media is the ocean, an operation on Twitter is like detonating an explosion,” Nimmo says. “It creates a loud bang and a big wave.” Influencers then surf the wave, spreading the message even farther.

It’s more challenging to create a fake account on Facebook, Nimmo adds, but once it exists, the potential for covert operations increases because of the ability to microtarget audiences. “With Facebook, you’re in a submarine. People on the surface don’t see the attack until you’ve hit your target.”

Original Article
Source: motherjones.com
Author: Denise Clifton
