The day Donald Trump was elected president, nearly 2,000 Twitter accounts that had pumped out pro-Trump messages in the run-up to the vote suddenly went dark. Then, in spring 2017, these bot-controlled accounts reemerged to campaign in French for Marine Le Pen in the French election, and then once again this fall, to tweet in German on behalf of the far-right party in Germany’s election.
The bots were part of a larger group tracked over a month-long period before the US election by University of Southern California researchers, who discovered that bots were deeply entwined in political conversation on Twitter—accounting for 1 in 5 election-related tweets. And the bots were just as effective at spreading messages as human-controlled accounts were, says USC professor and lead researcher Emilio Ferrara, who has studied the influence of bot networks since 2012. “Botnets accrued retweets at the same rate as humans,” he says of the pre-election activity. His most recent research explores how bots are particularly effective at getting a message to go viral among authentic human users.
Ferrara has found that up to 15 percent of all Twitter accounts are run by automated bots. He focuses on understanding bots’ effectiveness, though he doesn’t track their provenance. But researchers at the cybersecurity firm FireEye recently told the New York Times they had determined that possibly thousands of Twitter accounts that campaigned against Hillary Clinton were likely controlled by Russian interests, including many automated by bots.
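To make the idea of bot detection concrete, here is a toy sketch of the kind of feature-based scoring this research relies on. The features, weights, and example account below are invented for illustration; real systems such as Botometer use many more signals and trained models, not hand-tuned rules like these.

```python
# A toy, hand-tuned bot-scoring heuristic -- illustrative only, not the
# classifier used by Ferrara's team or any production system.
def bot_score(account):
    """Crude heuristic: add points for bot-like traits, scaled to 0..1."""
    score = 0.0
    if account["tweets_per_day"] > 100:       # superhuman posting rate
        score += 0.4
    if account["default_profile_image"]:      # never customized the profile
        score += 0.2
    if account["followers"] < 0.1 * account["following"]:  # follow-spam ratio
        score += 0.2
    if account["account_age_days"] < 30:      # freshly created account
        score += 0.2
    return score

# Invented example account with several bot-like traits.
account = {"tweets_per_day": 250, "default_profile_image": True,
           "followers": 12, "following": 900, "account_age_days": 14}
print(f"bot score: {bot_score(account):.2f}")  # 1.00 -> likely automated
```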
Ferrara is among various researchers trying to quantify the impact of social-media propaganda efforts, particularly with respect to the 2016 US election. As Mother Jones has reported, Oxford University research found that key battleground states saw a higher concentration of tweets spreading fake news and Russian-pushed content just prior to the vote. Lead researcher Samantha Bradshaw recently shared some examples of the Oxford-tracked tweets with Mother Jones and pointed out that such Russian-pushed content and fake news peaked in the swing state of Michigan—which Trump won narrowly—the day before the election.
Russian influencers on Twitter, including bot accounts, are tracked in real time by the Hamilton 68 dashboard launched in August by the Alliance for Securing Democracy. As evidence continues to emerge about Russian-planted propaganda on Twitter, Facebook, YouTube, Instagram and Google during Election 2016, one of the dashboard creators, former FBI special agent Clint Watts, warned recently that the full scope of the Russian attack is still unknown.
Along with the other tech giants, Twitter has come under sharp criticism from Congress for responding sluggishly to Russian interference in the 2016 election; Sen. Mark Warner (D-Va.) recently called Twitter’s follow-up “inadequate on every level.”
While independent researchers can’t easily quantify what impact the Twitter bots controlled by Russia and other malicious operators may have had on voters’ decisions, Ferrara’s newest work sheds some light on how influential this automated messaging might prove to be. Here are some of the key findings:
Distorting the political debate
In their analysis of 20 million election-related tweets sent between Sept. 16, 2016, and Oct. 21, 2016, Ferrara’s team found that 19 percent were tweeted by bot-controlled accounts. The bot-driven messaging was more intense in the Midwest and South, especially in Georgia, while human-driven messaging was more intense in the most populous states (California, New York, Texas, Florida, and Massachusetts). Analyzing partisan hashtags (#donaldtrump, #neverhillary, #hillaryclinton, and #imwithher), they found that pro-Trump bots outnumbered pro-Clinton bots by 3 to 1.
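As an illustration of this kind of analysis, here is a minimal sketch that tags tweets by partisan hashtag and computes the bot share of a sample. The sample tweets and is_bot labels are invented placeholders, not the USC team’s dataset or pipeline.

```python
# Minimal sketch: label tweets by partisan hashtag, then tally bot vs. human
# volume. All data here is synthetic, for illustration only.
from collections import Counter

PRO_TRUMP = {"#donaldtrump", "#neverhillary"}
PRO_CLINTON = {"#hillaryclinton", "#imwithher"}

tweets = [
    {"text": "Get out and vote #neverhillary", "is_bot": True},
    {"text": "Early voting is open #imwithher", "is_bot": False},
    {"text": "Rally tonight #donaldtrump", "is_bot": True},
]

def classify(tweet):
    """Label a tweet's partisan leaning from the hashtags it contains."""
    tags = {w.lower() for w in tweet["text"].split() if w.startswith("#")}
    if tags & PRO_TRUMP:
        return "pro_trump"
    if tags & PRO_CLINTON:
        return "pro_clinton"
    return "other"

counts = Counter((classify(t), t["is_bot"]) for t in tweets)
bot_share = sum(t["is_bot"] for t in tweets) / len(tweets)
print(f"bot share of tweets: {bot_share:.0%}")  # on real data, ~19 percent
print(counts)
```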
“Our findings suggest that the presence of social media bots can indeed negatively affect democratic political discussion rather than improving it, which in turn can potentially alter public opinion and endanger the integrity of the presidential election,” Ferrara wrote in the team’s report.
Infectious messaging—positive or negative
In research published this fall, Ferrara and a team from Denmark explored how bots spread their messages. They programmed 39 bots to gather 25,000 human followers in the San Francisco Bay Area and send out positive messaging. “We don’t want to inject negative things into the system,” Ferrara says, so the messages were innocuous tweets featuring hashtags that were unique at the time, like #getyourflushot and #highfiveastranger. To measure how contagious the messages were, the bots logged the impressions and retweets associated with each message. They found that when humans were exposed to a single message spread by multiple accounts, they were more likely to spread it themselves. “You’re more likely to retweet something if you see it tweeted by many different sources,” Ferrara says.
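Here is a hedged sketch of the exposure analysis that finding implies: group users by how many distinct accounts showed them a message, then compare retweet rates across groups. The exposure records below are synthetic stand-ins, not the study’s actual logs.

```python
# Group users by the number of distinct accounts that exposed them to a
# message, then compare retweet rates. Records are invented for illustration.
from collections import defaultdict

# (user, distinct_sources_seen, retweeted) -- toy exposure records
exposures = [
    ("alice", 1, False), ("bob", 1, False), ("carol", 3, True),
    ("dave", 2, True), ("erin", 3, True), ("frank", 1, False),
]

by_sources = defaultdict(list)
for user, n_sources, retweeted in exposures:
    by_sources[n_sources].append(retweeted)

for n in sorted(by_sources):
    group = by_sources[n]
    rate = sum(group) / len(group)
    print(f"seen from {n} source(s): retweet rate {rate:.0%} (n={len(group)})")
# The study's finding: rates climb with the number of distinct sources.
```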
Ferrara’s team touts the research as evidence that bots can be used for social good, but it also recognizes the potential for viral misinformation. Social-media networks act as echo chambers: “You’re more likely to ‘catch’ a meme because of pre-existing beliefs,” Ferrara notes.
The role of echo chambers also figured in a 2016 RAND Corp. report on the effectiveness of Russian propaganda online, which, the report said, “is produced in incredibly large volumes and is broadcast or otherwise distributed via a large number of channels.” With this “Firehose of Falsehood” strategy, “messages received in greater volume and from more sources will be more persuasive.”
A market for redeploying bots?
Ferrara’s research into connections among bot accounts used in the US and French elections appeared in a paper published in August, and additional connections to the German election were identified this month, he told Mother Jones. The use of the same Twitter accounts in separate political efforts around the world suggests there could be a type of black-market demand for these accounts, he says: “The sheer number of these accounts suggests there may be organizations that provide these bots as a service. The scale suggests a mix of state and non-state actors.”
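To illustrate how such reuse could be spotted, here is a minimal sketch that intersects the sets of account IDs active in each campaign’s dataset. The IDs are invented placeholders, not the accounts Ferrara actually tracked.

```python
# Find accounts reused across election datasets via set intersection.
# Account IDs below are invented placeholders for illustration.
us_2016 = {"acct_001", "acct_002", "acct_003", "acct_004"}
fr_2017 = {"acct_002", "acct_003", "acct_105"}
de_2017 = {"acct_003", "acct_201"}

reused_all = us_2016 & fr_2017 & de_2017
reused_any_two = (us_2016 & fr_2017) | (fr_2017 & de_2017) | (us_2016 & de_2017)

print("active in all three campaigns:", sorted(reused_all))
print("active in at least two:", sorted(reused_any_two))
```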
The battle is just beginning
“We don’t think it’s a good idea to use the research so far to create measures for countering disinformation,” Ferrara says. “At best you do no harm. More likely, you make it worse.”
That’s because there isn’t enough research to go on yet, he says. “There has been a lot of focus on the characterization of political manipulation on Twitter and other platforms, but little to understand the mechanisms that explain why these strategies are effective in changing people’s information sharing and consumption behaviors,” he says. He hopes his continuing research will reveal motivations that could inform ways to counter disinformation on social media. “We are currently starting to investigate what factors play a role in the tendency to share fake news or political rumors, including cognitive biases, socio-demographics, and social influence.”
In other recent research into the impact of propaganda on social media, teams at Oxford and at the University of Washington have tracked connections between the American political far right and Russian-controlled accounts. And this week, Oxford researchers spotlighted propaganda efforts that specifically targeted US troops. “We find that on Twitter there are significant and persistent interactions between current and former military personnel and a broad network of Russia-focused accounts, conspiracy theory focused accounts, and European right-wing accounts,” the researchers wrote. “These interactions are often mediated by pro-Trump users and accounts that identify with far-right political movements in the US.”
The Pell Center for International Relations and Public Policy at Salve Regina University also recently issued a report on Russian influence during the election, with contributors from the US, Europe, and Australia. The report’s authors advocated increased public education about Russia’s influence operations and urged Congress to go beyond the current House and Senate committee investigations to create a bipartisan commission on Russian influence operations. “Among the commission’s objectives, it must look beyond the 2016 campaign and expose the activities of trolls, bots, and other foreign actors,” they wrote, “including those that are still active today, whether in battleground states or in states with active secession movements or vulnerable to exploitation over divisive social issues.”
It remains an open question whether President Trump or his team colluded with Russia in the 2016 election. But “the Russian effort is larger than the election of a president,” the Pell Center report notes. “It seeks to sow division within the United States and within the broader community of Western democracies.”
Source: motherjones.com
Author: Denise Clifton