One of the first things cybersecurity researcher Aaron Barr noticed in his analysis of the recent trading frenzy involving stocks mentioned on the Reddit channel WallStreetBets was the pace and frequency at which promotional posts propping up these stocks were flooding other social media platforms.
The timing of posts about stocks such as GameStop was too structured to be organic, realized Mr. Barr, the co-founder and chief technology officer of PiiQ Media, a cybersecurity intelligence firm based in Boston. For example, posts appeared exactly on the hour at the start of the trading day and again before the end of the trading day.
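That kind of clockwork cadence is straightforward to quantify. The sketch below is purely illustrative, not PiiQ's actual method: it measures what fraction of an account's posts land within a minute of the top of an hour, with hypothetical timestamps.

```python
from datetime import datetime

def on_the_hour_share(timestamps, tolerance_s=60):
    """Fraction of posts landing within `tolerance_s` seconds of the top of
    an hour. Organic posting rarely clusters this tightly; a high share
    suggests scheduled, automated activity."""
    if not timestamps:
        return 0.0
    hits = 0
    for ts in timestamps:
        sec_into_hour = ts.minute * 60 + ts.second
        if sec_into_hour <= tolerance_s or sec_into_hour >= 3600 - tolerance_s:
            hits += 1
    return hits / len(timestamps)

# Hypothetical posting times for one account during a trading day
posts = [datetime(2021, 1, 27, 9, 30, 5),
         datetime(2021, 1, 27, 10, 0, 12),
         datetime(2021, 1, 27, 15, 59, 40),
         datetime(2021, 1, 27, 16, 0, 3)]
print(on_the_hour_share(posts))  # 0.75 — three of four posts within a minute of the hour
```

A human trader's posts would scatter across the hour; a share this high across hundreds of posts is one of the statistical tells Mr. Barr describes.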
He also observed that while day traders had been using social media for years to promote specific stocks, it was out of the ordinary to see such high volumes of conversations pumping up a stock for a prolonged period of time.
A deep dive confirmed Mr. Barr’s suspicions – beyond the army of day traders that made a living out of speculating on stock movements, non-humans also played a role in the GameStop saga. Specifically, Mr. Barr’s team found that bots – automated social media accounts designed to perform repetitive tasks – were dominating conversations around certain stocks on social media platforms such as Twitter and Facebook, with the ultimate intention of manipulating the price of those specific stocks.
In both the U.S. and Canada, paid stock promotion by companies is legal, as long as the information disseminated to investors is factually accurate and disclosed appropriately. Lately, however, evidence suggests that stock promoters have stepped up their game, using more sophisticated technological methods beyond the traditional radio ads, e-mail newsletters or Facebook posts to aggressively pump a stock.
It is unclear if campaigns using bots are run by companies or by individuals with an interest in the stock, and little is known about the extent to which bots are used as a stock promotion tactic. But experts are beginning to sound the alarm about the tactic's potential to manipulate financial markets, especially if securities regulators do not develop a quick and effective grasp of the technology being employed.
PiiQ Media’s research analyzed the use of two key phrases frequently used on the WallStreetBets Reddit channel: “Stonks,” which signalled to investors that a hot, volatile stock was on the market, and “hold the line,” a battle cry to hold on to a stock even though its value was plummeting.
“The idea was to analyze how these words, imported from conversations on Reddit around certain stocks, were used on other social media platforms like Twitter and Facebook – and, more importantly, how often they were used – so that we could determine if they were coming from posts made by a real person,” Mr. Barr said.
A bot account on Twitter – one that usually has no picture associated with it and a profile name consisting of a first name and a string of numbers – would automatically compose a tweet instructing investors what to do with a particular stock, using keywords in conversations (such as “stonks” and “hold the line”) also found on Reddit. For example, “$GME HOLD THE LINE” indicates that investors should not sell GameStop stock just yet.
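The profile traits described above — no picture, a handle shaped like a first name followed by a string of digits — lend themselves to a crude screening rule. This is a hypothetical heuristic for illustration, not a real classifier used by any of the researchers quoted here:

```python
import re

# Handle shape described above: a first name followed by a long run of
# digits, e.g. "karen58201743" (a hypothetical example).
BOT_HANDLE = re.compile(r"^[A-Za-z]+\d{5,}$")

def looks_like_bot(handle, has_profile_photo):
    """Crude first-pass filter: flag accounts with a name-plus-digits
    handle and no profile photo. Real detection layers many more signals."""
    return bool(BOT_HANDLE.match(handle)) and not has_profile_photo

print(looks_like_bot("karen58201743", has_profile_photo=False))   # True
print(looks_like_bot("wsb_diamondhands", has_profile_photo=True)) # False
```

A rule this simple produces false positives on its own, which is why researchers combine it with behavioural signals such as posting cadence and account age.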
Based on when and how an account was posting tweets, Mr. Barr and his team came up with a methodology called a “trust score” that rated the authenticity of social media profiles.
“What we found was a significant number of accounts that were active in conversations using keywords from WallStreetBets were set up in the two months leading up to the trading frenzy. Most of them had a trust score in the 20- to 30-per-cent range, indicating that they were inauthentic … or bots,” he said.
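PiiQ Media has not published how its trust score is computed, but the idea of blending profile and behaviour signals into a single authenticity rating can be sketched as follows. The signals, weights, and cutoffs here are entirely illustrative assumptions:

```python
def trust_score(account_age_days, handle_is_name_plus_digits, on_hour_share):
    """Hypothetical 0-100 trust score: older accounts, human-looking handles,
    and irregular posting times all raise trust. Weights are illustrative."""
    age_signal = min(account_age_days / 365, 1.0)    # caps at one year
    handle_signal = 0.0 if handle_is_name_plus_digits else 1.0
    cadence_signal = 1.0 - on_hour_share             # scheduled posting lowers trust
    score = 100 * (0.4 * age_signal + 0.3 * handle_signal + 0.3 * cadence_signal)
    return round(score)

# A hypothetical account created two months before the frenzy, with a
# name-plus-digits handle, posting on the hour half the time:
print(trust_score(60, True, 0.5))   # 22 — in the low band flagged as bot-like
print(trust_score(3650, False, 0.0))  # 100 — a long-lived, organic-looking account
```

The design choice worth noting is that no single signal condemns an account; it is the combination of a young account, a machine-generated handle, and scheduled posting that drags the score into the inauthentic range.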
The danger of bots, according to Italian researcher Stefano Cresci, who specializes in disinformation on social media, is that they can quickly amplify an existing message from a legitimate user on social media to create the illusion that something is a bigger trend than it actually is. Used in the context of trading, bots can generate immense excitement over a stock, effectively persuading an unassuming investor that a buy or sell move is worth making at that particular moment.
Mr. Cresci’s own research at the Institute of Informatics and Telematics in Pisa, Italy, studied how bots played a role in promoting stocks by analyzing seven million tweets over the span of two months in late 2017. One of his findings was that bots designed to tamper with financial markets – financial bots, as he called them – were often created as very simplistic fake accounts, meaning that they usually had no picture or profile description. This was in part because these bots were targeting automatic trading algorithms, rather than humans.
“The idea here was to tamper with a sentiment related to certain stocks, or simulate a grassroots discussion around a particular stock to create positive content about that stock,” Mr. Cresci said.
His research identified that as many as 71 per cent of the authors of what he deemed “suspicious financial” posts turned out to be bots. After the results of his investigation were published in early 2019, Twitter suspended the accounts of 37 per cent of these suspicious bots.
Ultimately, however, he found that while bots did attempt to manipulate trading behaviour of targeted stocks, it was hard to definitively determine how successful they were, especially with stocks that had a large trading volume.
“With big stocks like Apple and Tesla, it is extremely difficult to measure the impact of bots because so many other factors are at play in their stock movements,” Mr. Cresci said. “With the smaller stocks, we did find some evidence that there was impact, but I cannot say it is definitive.”
In Canada, most securities regulators have not indicated that they are even aware of the existence of bots in stock promotion. The last time a bulletin was issued warning companies about unlawful stock promotion activity on social media was back in November, 2018, and it did not mention bots or fake accounts.
The Canadian Securities Administrators, an umbrella organization representing all 13 provincial and territorial securities regulators, indicated in an e-mail to The Globe and Mail that it did not currently have a strategy to monitor bot activity in stock promotion.
“CSA members respond quickly to misleading stock promotion, which is a constant with the advancement of new technology, especially social media,” CSA spokesperson Ilana Kelemen wrote.
The British Columbia Securities Commission told The Globe and Mail that it had not come across any examples of bots being used in stock promotions, but that it was “aware of this activity” and would actively monitor such gambits.
Not keeping up with the evolution of new and aggressive stock promotion tactics is a grave mistake on the part of regulators, argues Cromwell Coulson, president and CEO of U.S.-based OTC Markets Group. His exchanges list thousands of early-stage, speculative stocks ripe for promotion scams, including hundreds of Canadian companies.
“In the U.S., promotion rules were written at a time where newsletter writers or a radio-show host were being paid off … so people knew the human who was promoting that stock. Now we are seeing things that are very, very hard to track,” Mr. Coulson told The Globe.
The team within OTC Markets that focuses on identifying and quashing illegal stock promotion scams has found that while numerous fake accounts are created as part of a stock promotion campaign, they are not terribly effective. With GameStop, Mr. Coulson said, bots may have been “out there doing things,” but the echo chamber of retail investors hyping up individual stocks probably had more of an effect in inducing volatility.
“The biggest challenge for regulators as information becomes more real-time and markets move faster, is: How can the regulator step in more quickly? Market regulators need to think more like credit-card issuers – immediately stop a trade when something’s not right,” he said.
There is also the question of who exactly is employing bots to promote stocks. Stock promoters are usually paid by the companies they promote, or by hedge funds and other wealthy investor groups that have an interest in either pumping or shorting a stock. In Canada, disclosure requirements for companies are lax, and the source of promotion is often unknown.
“The concerning thing about bots is that if you were a CEO of a penny-stock trading company, and you didn’t have a good moral compass, you could theoretically hire someone to buy your company’s visibility in a massive way,” PiiQ Media’s Mr. Barr said.
A simple Google search of “buy Twitter followers” reveals hundreds of companies that specialize in adding fake followers to an existing account or creating a new fake account with fake followers.
“Anyone can buy credibility. Anyone can buy visibility. It depends how sophisticated you want to get, and it is incredibly difficult to find out who is behind what,” Mr. Barr added.
Mr. Cresci, the Italian disinformation researcher, warns that the use of fake accounts or bots in financial markets is only going to grow more prevalent.
“Right now, the bots are not even working very efficiently. But if a bot works efficiently, it can take an original tweet or message and replicate it exponentially, really impacting smaller stocks with lower valuations,” he said. “We need to start paying attention to the fact that stock promoters are in this bot game.”