Facebook Shuts Misleading Accounts Ahead of 2020 Election

Newly Discovered Pages Tied to Russia, Iran
Facebook has announced that it has removed four networks from its platform - three connected to Iran and one from Russia - after an internal investigation revealed that these accounts were spreading misinformation related to the 2020 U.S. presidential election as well as other political events around the world.
The removal of these four networks of accounts, pages and groups on the Facebook and Instagram platforms is part of a larger effort by the company to clamp down on what it calls "coordinated inauthentic behavior," says Nathaniel Gleicher, the company’s head of cybersecurity policy.
In addition to the U.S., these four networks were also targeting Facebook users in parts of North Africa as well as Latin America, Gleicher says.
"The Iranian operations were relatively small and exhibited links to previous operations we've removed. They frequently repurposed Iranian state media content and tailored their content for particular countries they targeted around the world," Gleicher says.
Gleicher notes that the Russian operation appeared to be better funded and to have links to Russia’s Internet Research Agency, which has been tied by several investigations to interference in the 2016 U.S. presidential election (see: Russian Troll Farm Targeted With Fresh US Sanctions).
"They [the Russian-backed network] took operational security steps to conceal their identity and location. And it appears that this operation was still in the early stages and was focused on trying to build its audience when we took it down,” Gleicher adds.
The removal of these pages comes as Facebook, as well as other social media platforms, are under increasing scrutiny by elected officials in the U.S. and elsewhere for how bad actors and trolls are using technology to spread misinformation and fake news stories with an eye toward swaying voters (see: Feds, Tech Giants Meet to Coordinate 2020 Election Security).
Iranian and Russian Networks
Facebook, along with Graphika, a social network analysis company, offered details about the four networks it took down.
The first network linked to Iran consisted of 93 Facebook accounts, 17 pages and four Instagram accounts, all of which were removed, Gleicher notes. This network’s activity targeted the U.S. as well as some French-speaking users in North Africa. Overall, this network had about 7,700 followers on Facebook, and around 145 people followed one or more of the Instagram accounts, the company says.
The second network from Iran consisted of 38 Facebook accounts, six pages, four groups and 10 Instagram accounts targeting countries in Latin America, including Venezuela, Brazil, Argentina, Bolivia, Peru, Ecuador and Mexico, the company says. There were about 13,500 accounts that followed one or more of these pages. About 4,200 accounts joined at least one of these groups, and roughly 60,000 people followed one or more of the Instagram accounts, according to Facebook.
The third Iranian-linked network was smaller, with four Facebook accounts, three pages and seven Instagram accounts targeting U.S. users. In this case, however, the group admins used apps to push content to a page called BLMNews, which was designed to resemble a legitimate news source related to the "Black Lives Matter" cause, the company says. About 45 accounts followed one or more of these pages, and about 7,300 people followed one or more of the Instagram accounts, the company says.
For the Russian-linked network, the company found one Facebook account and 50 Instagram accounts targeting U.S. users, with about 246,000 followers in total across both platforms, the company says.
This is the second time in a month that Facebook has removed fake networks and accounts from its platforms. In September, the company removed accounts spreading misinformation that originated in Iraq and Ukraine (see: Facebook Removes Hundreds of Fake Accounts).
In addition to removing these four networks, Facebook announced Monday other steps that it plans to take as the 2020 U.S. presidential election approaches. Many of these steps come as a result of the scandal involving Cambridge Analytica - the now-defunct voter profiling firm that improperly obtained profile data for 87 million Facebook users without their consent (see: It's Official: FTC Fines Facebook $5 Billion).
Among the initiatives is Facebook Protect, which is designed for accounts that have a high risk of hacking, including those of elected officials, their staff and potential candidates. Facebook and Instagram will now monitor these accounts for suspicious behavior, such as logging in from an unknown device or an unusual location, the company says.
Starting in November, the company will also begin labelling media outlets that are wholly or partially under the editorial control of their governments as state-controlled media. These labels will appear both on the outlets' pages and in Facebook's ad library.
On Facebook and Instagram, misinformation and false content will now be prominently labelled as such, the company says. Before users can share such content, they will be warned again that it has been rated false, Facebook adds.
This labelling of both political advertisements and misinformation was a key suggestion made earlier this month when the U.S. Senate Intelligence Committee released a new report on foreign interference in the 2016 election (see: Preventing Election Interference: New Recommendations).
Facebook will also ban paid advertising that suggests that voting is useless or advises people not to vote.