After combing through 17 million tweets posted in the 10 days leading up to the French presidential election, a new study determined exactly when and how Twitter bots fueled a disinformation campaign against now-elected French President Emmanuel Macron, VentureBeat reported.
The study, led by computer science researcher Emilio Ferrara at the University of Southern California, examined those 17 million tweets, posted by more than 2 million Twitter users discussing the election, and identified a subset dedicated to spreading false documents dubbed the MacronLeaks.
The online campaign against Macron started on the afternoon of April 30, peaking at nearly 300 tweets per minute in the days leading up to the election, the study found. Of the 17 million tweets evaluated, a small batch of 350,000 was dedicated to the MacronLeaks — a trove of falsified and doctored documents, photos, and correspondence that purportedly came from Macron and his campaign staff.
The false documents made news headlines but did little to sway French voters, who overwhelmingly elected Macron over nationalist opponent Marine Le Pen. However, Ferrara’s study highlights how influential bots can be when humans get duped by fake news.
Ferrara traced the documents back to an “email dump” posted to an anonymous 4chan thread two days before the election. They wound up being shared by prominent alt-right activist Jack Posobiec and controversial government transparency site WikiLeaks, which further amplified the disinformation campaign that was already underway.
The human element was crucial to the spread of disinformation, elevating bot accounts from relative obscurity and increasing their follower counts 100 times over thanks to retweets.
For example, the now-deleted account @jewishhotjean had only 46 followers. That number jumped to 14,033 after just 39 retweets. Another suspended account, @yhesum, jumped from 21 to 9,476 followers after 291 retweets.
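To put those figures in perspective, here is a minimal, purely illustrative Python sketch that works out the growth factor for each account using only the follower counts quoted above; it is not part of the study’s methodology.

# Illustrative arithmetic only: growth factors for the two bot accounts
# cited in the article, based on the follower counts reported above.
accounts = {
    "@jewishhotjean": (46, 14033, 39),   # (followers before, followers after, retweets)
    "@yhesum": (21, 9476, 291),
}
for handle, (before, after, retweets) in accounts.items():
    print(f"{handle}: {before} -> {after} followers, "
          f"roughly {after / before:.0f}x growth after {retweets} retweets")

Run as written, this prints growth of roughly 305x for @jewishhotjean and 451x for @yhesum, both well beyond the hundred-fold increase the study describes.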
French users didn’t engage much with MacronLeaks accounts, according to the study. The users who did were “mostly foreigners belonging to the alt-right Twitter community.”
The study also found that about one in five Twitter bots used to spread disinformation were active in both the French and U.S. presidential elections, suggesting “the possible existence of a black-market for reusable political disinformation bots,” as Ferrara wrote in the study.
The findings shed new light on the behavior patterns of online campaigns and their ability to sway public opinion. In the case of the French election, the campaign mainly lured people who had no say or vote.
Ferrara didn’t discuss political ties in the study, but his findings on the reuse of political bots from the U.S. election confirm some of the patterns that security researchers and policy analysts have identified regarding Russia’s involvement.
As the Atlantic Council’s information defense researcher Ben Nimmo previously told ThinkProgress, the likelihood of Russia’s involvement increases when the person or issue being discussed has political implications for the country. Incontrovertible attribution in cyberspace is very difficult, but when bots are involved, patterns emerge and activity clusters around certain issues and accounts. Macron, notably, has been sharply critical of the Kremlin, calling out Russia’s state-run media before and shortly after winning the presidency.
That alone was enough to make him a target because, as Nimmo said, “in Russia’s view, it’s about convincing people that Russia’s right and the West is evil.”
Original Article
Source: thinkprogress.org
Author: Lauren C. Williams