With reports of Russia using social media and bots to push fake news to influence the 2016 U.S. presidential election, questions are arising over how these same tactics could be used against an enterprise.
“Twitter bots could absolutely be used against a company,” said Dan Olds, an analyst with OrionX. “Someone using bots could manufacture a fake groundswell of opinion against a company or a product.”
The subject of Twitter bots has made headlines since federal investigations into Russia’s interference with the presidential election unearthed evidence that the Kremlin used chatbots, particularly on Twitter, to seed fake news stories in order to confuse discussions and taint certain candidates, especially Democratic candidate Hillary Clinton.
A bot is a simple software program, sometimes aided by artificial intelligence, that performs automated tasks such as sending out messages or reposting other messages.
During the presidential election, Twitter bots were used to pick up on tweets that included certain topics or hashtags, such as #HillaryClinton.
Once the bots detected those tweets, they would respond, often flooding the Twitter user or the hashtag with Twitter rants or even phony stories, such as promoting the falsehood that Clinton was in jail or about to go to jail.
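The mechanism described above is simple to reproduce. Here is a minimal sketch in plain Python, using invented tweet data rather than Twitter's real API, of a bot that watches for a trigger hashtag and fires back canned replies:

```python
# Sketch of a hashtag-triggered reply bot. The hashtag, replies and
# incoming tweets are invented for illustration; a real bot would read
# from Twitter's streaming API and post replies through it.

CANNED_REPLIES = [
    "Don't believe the mainstream coverage!",
    "The real story is being suppressed.",
]

def replies_for(tweet_text, trigger="#hillaryclinton"):
    """Return the canned replies if the tweet mentions the trigger hashtag."""
    if trigger in tweet_text.lower():
        return list(CANNED_REPLIES)  # flood every matching tweet
    return []

incoming = [
    "Watching the debate tonight #HillaryClinton",
    "Nice weather today",
]
for tweet in incoming:
    for reply in replies_for(tweet):
        print(f"REPLY to {tweet!r}: {reply}")
```

A fleet of such bots, each running the same trivial rule, is what makes a handful of operators look like a crowd.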
A University of Southern California study last November showed that this tactic wasn’t simply a few rogue bots at work.
Between Sept. 16 and Oct. 21, 2016, researchers at USC’s Information Sciences Institute found that Twitter bots produced 3.8 million tweets, or 19% of all election tweets during that period.
The USC report also showed that social bots accounted for 400,000 of the 2.8 million individual users tweeting about the election, or nearly 15% of the population under study.
Researchers at Oxford University also reported last fall that bots were part of a concerted effort to influence what people were learning about the candidates, particularly Clinton, on social media.
There were, of course, Twitter bots working for both Clinton and Republican presidential candidate Donald Trump, but the Oxford study shows that the bot tide was heavily in Trump’s favor.
Across the three presidential debates, for instance, pro-Trump bots generated seven tweets for every one posted by pro-Clinton bots, according to the Oxford study. The study also found that bots produced 23% of the tweets about the first debate and 27% of those about the third.
“Political actors and governments worldwide have begun using bots to manipulate public opinion, choke off debate, and muddy political issues,” the report said.
Now federal investigators are looking into whether Russia was behind the pro-Trump Twitter bots.
“Operatives for Russia appear to have strategically timed the computer commands, known as ‘bots,’ to blitz social media with links to the pro-Trump stories at times when the billionaire businessman was on the defensive in his race against Democrat Hillary Clinton,” the McClatchy news service reported last month, citing anonymous sources.
Last week, in testimony at a Senate Intelligence Committee hearing on Russia’s meddling in the U.S. elections, Clinton Watts, a former FBI special agent and senior fellow at the George Washington University Center for Cyber & Homeland Security, said influencing debate on social media isn’t a role simply for bots.
Watts said the practice is a combination of automated bots and “humans that work in their psychological warfare groups” and command less-automated bots.
“That amplifies your appearance,” Watts testified. “It games the social media system, such that such a high volume of content being pushed at the same time raises that into the Trends. … The goal is to get that in the top of Twitter stream so mainstream media has to respond to that story. When mainstream media responds to it or looks at it without commenting on it, it takes over organically and you’ll see it move over the internet like a virus.”
Sen. Mark Warner (D-Va.), a member of the Senate Intelligence Committee, said at the hearing that the disinformation being spread by Internet trolls and bots was designed to disparage Clinton and appeared to target key swing states in the weeks leading up to the November election.
Twitter bots are not new.
Sam Woolley, one of the researchers on the Oxford study, noted that bots are an important part of Twitter that have been used on the site since it was launched.
Normally, bots are used to send spam or to tweet about a news story or event at a particular time of day. They can also be humorous: @Betelgeuse_3, for instance, sends automatic replies to tweets that include the phrase “Beetlejuice, Beetlejuice, Beetlejuice,” in reference to the movie Beetlejuice.
The difference is that bots are increasingly being used to flood discussions and sway opinion by appearing to represent massive groups of real-life users.
How bots could hurt a company
While bots were used to influence a presidential campaign, they could just as easily be used to taint the image of a company or to plant phony news items about a corporate executive or enterprise.
“Malicious social media bots could feasibly be launched against any entity that has an online presence,” Woolley told Computerworld. “They have long been used as a marketing tool for spreading information on companies’ products … But there is no reason that social media bots couldn’t be used to launch a campaign of disinformation or slander at a company.”
Jenny Sussin, an analyst with market research firm Gartner, said she hasn’t seen bots used with any strength in favor of or against a particular company, but she has seen them used in online enterprise discussions.
“You can look at any trending topic associated with a particular event or organization and you’ll see bots slide in there with typically inappropriate comments,” she said. “Those bots work off volume prompts. If something is increasing in frequency of mention, tweet ‘xyz.’ Again, because these are all rules-based, they could be used for or against a company to extend the reach of a message that may or may not be truthful.”
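The “volume prompt” rule Sussin describes can be sketched in a few lines: a bot counts mentions of a watched term and emits a canned message once the frequency crosses a threshold. The company name, threshold and message below are invented for illustration:

```python
# Sketch of a rules-based "volume prompt" bot: it watches a stream of
# tweets, counts mentions of a target term, and fires a canned message
# when mentions spike past a threshold. All values are hypothetical.

class VolumeTriggerBot:
    def __init__(self, term, threshold, message):
        self.term = term.lower()
        self.threshold = threshold
        self.message = message
        self.count = 0

    def observe(self, tweet_text):
        """Count a mention; return the canned message exactly once,
        at the moment the threshold is reached."""
        if self.term in tweet_text.lower():
            self.count += 1
            if self.count == self.threshold:
                return self.message
        return None

bot = VolumeTriggerBot("acmecorp", threshold=3,
                       message="AcmeCorp is hiding the truth! #boycott")
stream = ["AcmeCorp earnings today", "love my AcmeCorp gadget",
          "AcmeCorp stock up", "unrelated tweet"]
for tweet in stream:
    fired = bot.observe(tweet)
    if fired:
        print(f"BOT TWEETS: {fired}")
```

Because the trigger keys off mention volume rather than content, the same rule works equally well to amplify a true story or a false one.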
It would be easy to set up bots to try to harm a company’s reputation by making it appear that real people are complaining about a company’s product or making false claims about something a company has done.
Because of that risk, executives need to watch out for bots.
“All you can do is pay attention to the popularity of specific messages,” Sussin said. “What messages about your company are being retweeted the most? Where did they originate? All companies can do is try to disprove any false story that would come out about them.”
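The monitoring Sussin recommends, tracking which messages about your company are being retweeted most and where they originated, can be sketched simply. The tweet records below are invented dictionaries, not a real API payload:

```python
# Sketch of basic brand monitoring: rank messages mentioning the
# company by retweet count and surface their authors, so suspicious
# high-volume accounts stand out. Sample data is hypothetical.

def top_messages(tweets, company, n=3):
    """Return the n most-retweeted tweets mentioning the company,
    as (text, author, retweets) tuples."""
    mentions = [t for t in tweets if company.lower() in t["text"].lower()]
    ranked = sorted(mentions, key=lambda t: t["retweets"], reverse=True)
    return [(t["text"], t["author"], t["retweets"]) for t in ranked[:n]]

sample = [
    {"text": "AcmeCorp recalls product", "author": "@newswire", "retweets": 950},
    {"text": "AcmeCorp CEO arrested??", "author": "@bot4821", "retweets": 4200},
    {"text": "great lunch today", "author": "@alice", "retweets": 2},
]
for text, author, rts in top_messages(sample, "AcmeCorp"):
    print(rts, author, text)
```

In practice a dubious story ranking first, pushed by an account with no history, is the signal that a rebuttal needs to go out quickly.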
While Olds said he hasn’t yet seen any companies using Twitter bots against their competitors, it’s not out of the realm of possibility.
“I think companies should take some time to consider exactly what might happen if they had a social media, particularly a Twitter, campaign mounted against them,” Olds said. “They need to be ready to investigate the social media claim, then respond as quickly and thoroughly as possible. A negative campaign could pick up traction very quickly with today’s social media, and the company in the crosshairs of such a campaign had better be ready to deal with it.”