Look out, corporations – there’s a new metric in town, and it’s changing the way media measurement and analytics should be conducted. It’s a disinformation metric, which calculates the effect of malicious bot attacks on news stories or social media conversations. And it’s a metric that should be a fundamental part of your analytics program to protect your corporate reputation.

You can use this disinformation metric to measure how much of the amplification (shares or retweets) of a social story, video or image is driven by damaging bot activity. More importantly, you can find out if a negative bot-generated conversation – for example, one built around a fake, manipulated or old news story – gets elevated or legitimized through mainstream media.
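To make the idea concrete, here is a minimal sketch of how such a metric could be computed. Everything in it is a simplifying assumption: the Share record and its is_bot flag stand in for whatever upstream bot-detection model you use, and Zignal’s actual scoring is proprietary.

    # Minimal sketch of a "bot amplification share" metric for one story.
    # The Share record and is_bot flag are hypothetical stand-ins for the
    # output of an upstream bot-detection model.
    from dataclasses import dataclass

    @dataclass
    class Share:
        account_id: str
        is_bot: bool  # flagged by a bot-detection model upstream

    def bot_amplification_share(shares: list[Share]) -> float:
        """Fraction of a story's shares/retweets attributable to bot accounts."""
        if not shares:
            return 0.0
        return sum(s.is_bot for s in shares) / len(shares)

    # Example: half of this story's amplification came from flagged accounts.
    story_shares = [Share("a1", False), Share("a2", True),
                    Share("a3", True), Share("a4", False)]
    print(f"{bot_amplification_share(story_shares):.0%}")  # 50%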

I’ve seen the results of this metric firsthand, and was amazed at the volume and vitriol of some of these assaults.

Yes, there are good bots for things such as customer service, fraud detection and earthquake reporting, but it’s also well-documented that harmful bots have influenced American and international elections. What is less known is that these same tactics are now being used across all media platforms to damage corporate and brand reputations.

“Based on extensive media analysis of Fortune 1000 companies, Zignal Labs has found that virtually all businesses, regardless of size or industry, are potential prey to this malicious bot activity,” said Josh Ginsberg, CEO. “We have seen massive amounts of bot activity lately. People are spinning up fake accounts or fake sites to amplify false or negative news stories and impact the bottom line of a company.

“Bots are accounts controlling millions of online conversations in a coordinated manner to influence the outcome. To give you a sense of the magnitude, we have literally not found a company in the past six months that has not gotten hit with a major bot attack.”

By the Numbers

Here are some additional stats from Zignal’s research:

  • Bots make up 52% of online traffic, versus 48% generated by actual people.
  • False news stories are 70% more likely to be retweeted than true stories.
  • Botnets have produced more than 150,000 tweets per day.
  • 30% of people are deceived by online bot activity.
  • Bots have been found with 350,000 fake followers.
  • The day before the 2016 U.S. election, bots generated nearly 20% of all election-related messages.
  • In 2017, bot-driven mentions also began targeting companies, and by 2018, the cultural reputations of some big names in the news came under fire.
  • 92% of global communications executives cite false news as the most challenging ethical threat to their profession.

Aligned with this research, Zignal Labs developed bot intelligence software to detect nefarious bot activity; in many cases, more than one botnet is at work at the same time. (A botnet is a group of computers coordinated for malicious purposes; each computer – or, in this case, each fake social media profile – in a botnet is called a bot.)

Using the Zignal bot software, I watched live visualizations of some attacks. The images below reflect what I saw: on the left, the white nodes represent conversations by actual people. When bots are involved, the nodes are differently colored and clustered. The more colors and the tighter the clustering, the nastier the assault, as seen in the right image.

[Image: side-by-side network visualizations – organic human conversations (white nodes, left) versus a coordinated bot attack (colored, clustered nodes, right)]
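To give a rough sense of how such a picture is constructed, here is a toy sketch – emphatically not Zignal’s software – that renders the same idea with the networkx and matplotlib libraries: organic conversations as small, loosely connected white components, a botnet as a tight colored cluster around a single source.

    # Toy rendering of the visualization idea: white nodes for people,
    # a colored, tightly clustered group for a botnet. Illustrative only.
    import matplotlib.pyplot as plt
    import networkx as nx

    G = nx.Graph()
    # Organic human conversations: small, loosely connected components.
    G.add_edges_from([("h0", "h1"), ("h2", "h3"), ("h4", "h5"), ("h6", "h7")])
    # A botnet: many accounts all amplifying one source in lockstep.
    G.add_edges_from(("source", f"bot{i}") for i in range(12))

    colors = ["red" if n == "source" or n.startswith("bot") else "white"
              for n in G.nodes]
    nx.draw(G, node_color=colors, edgecolors="black", with_labels=False)
    plt.show()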

Techniques Used by Bad Bots to Spread Disinformation

Bad bots are deployed with four main motives: manipulating stock prices; damaging corporate and brand reputations; influencing politics and the democratic process; and weaponizing cultural debates.

One of the worst outcomes for corporations comes from stock price manipulation. Bots specifically target earnings-day announcements, planting disinformation that can wreak havoc on both reputation and valuation.

Other reputation attacks come from bots that effectively recycle old news – this means a past crisis or negative story resurfaces as breaking news, long after the event actually occurred. In September 2017, a high-profile CPG company was hit with an unfavorable story. For the next six months, bots kept this story alive and prominent, ultimately fueling a steady stream of negative conversations.
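A crude version of a check for this recycling pattern is sketched below; it assumes you can resolve each shared link back to the story’s original publish date, which real monitoring tools do at scale.

    # Hypothetical heuristic: flag a share when the underlying story is
    # months old, i.e. a past crisis being recycled as "breaking news".
    from datetime import date

    def looks_recycled(published: date, shared: date,
                       min_age_days: int = 90) -> bool:
        return (shared - published).days >= min_age_days

    # The CPG story above would keep tripping this check for six months.
    print(looks_recycled(date(2017, 9, 1), date(2018, 2, 15)))  # True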

Bots are also known to target high-profile cultural debates. For example, more than 60% of online conversations involving Roseanne Barr and Samantha Bee were originated and/or amplified by bots, which retweeted conversations and news stories.

For cultural weaponization, bots are programmed to amplify sentiment on hot-button issues, both political and financial. These bots can sometimes lie dormant or post irregularly until keywords or phrases come up in social media conversations, at which point they re-post to amplify the stories. In some cases, the same unfavorable story is amplified every month, which can be very damaging.
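One way a monitoring team might approximate this “sleeper” pattern – purely a sketch, assuming you have each account’s posting timestamps – is to flag accounts that go silent for weeks and then burst back to life:

    # Hypothetical dormancy heuristic: an account that posts nothing for
    # a month and then fires off a burst of posts within an hour is
    # behaving like a keyword-triggered sleeper bot.
    from datetime import datetime, timedelta

    def dormant_then_burst(post_times: list[datetime],
                           dormancy: timedelta = timedelta(days=30),
                           burst_window: timedelta = timedelta(hours=1),
                           burst_size: int = 10) -> bool:
        times = sorted(post_times)
        for prev, wake in zip(times, times[1:]):
            if wake - prev >= dormancy:
                burst = sum(1 for t in times if wake <= t <= wake + burst_window)
                if burst >= burst_size:
                    return True
        return False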

How do they do it? A nefarious bot network is usually controlled by one entity – a “botmaster,” which can be a single person or a group of people. The botmaster typically follows four steps:

  • Create or purchase social media profiles to enlist in a “bot army.” It can take less than 30 minutes to create an army of bots and cost less than $100.
  • Automatically find a story, hashtag or author to follow, using AI.
  • Use bots to comment on or republish specific posts.
  • Finally, amplify a story by sending hundreds or thousands of posts in a day.
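The signature this playbook leaves behind – many distinct accounts pushing near-identical text in a short window – is also what makes it detectable. Here is a minimal sketch of that detection idea, with all names and thresholds invented for illustration:

    # Hypothetical detection heuristic: surface any post text pushed by an
    # unusually large number of distinct accounts, the hallmark of a bot army.
    def coordinated_texts(posts: list[tuple[str, str]],
                          min_accounts: int = 50) -> list[str]:
        """posts is a list of (account_id, text) pairs from one time window."""
        accounts_per_text: dict[str, set[str]] = {}
        for account, text in posts:
            accounts_per_text.setdefault(text.strip().lower(), set()).add(account)
        return [text for text, accounts in accounts_per_text.items()
                if len(accounts) >= min_accounts]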

A coordinated assault across every media platform can move very quickly. “They’re lighting fires throughout the Internet,” Ginsberg told Kara Swisher, host of the Recode Decode podcast. “This is happening every single day, and you are at a major disadvantage if you don’t know where it originated, and if you don’t have a strategy to deal with it.”

Bot Detection and Strategic Response

With bot detection intelligence, corporations can measure how bots are influencing media conversations. First, companies need to know when they are being hit by a damaging bot attack. Second, they should develop a bot communications plan in advance, so they are prepared for one. The plan should include targeted messages and both offensive and defensive strategies for pushing back on these assaults – or, in some cases, for deliberately not pushing back.

For example, if your stock price is affected, the plan might call for your CEO to appear on TV and elsewhere to talk about the stock manipulation. It might also pull in people from your investor relations, cybersecurity and risk teams.

If your company’s reputation is in danger during a bot attack – particularly if the attack targets your C-suite – a targeted strategy is also important.

But sometimes these outbreaks do not generate widespread public attention; in these cases, the best approach might be not to react at all. You don’t want your executives to draw attention to a situation that a majority of people don’t know about.

Of note, bot networks frequently use a “three-wave strategy” – three bot incidents spread over two to four weeks. According to Ginsberg, in the first two waves, the bots test messages to see what’s resonating with certain groups. “The third wave is where they use what they learned to make an impact, to pack a big punch.”

If bot detection intelligence makes you aware of the first two waves, you can quickly activate your bot communications plan and be prepared to respond if and when the third wave occurs.
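As an illustration of how detection tooling might surface those first two waves, here is a minimal sketch – not Zignal’s method – that flags any day on which bot-attributed mention volume spikes well above its trailing baseline:

    # Hypothetical wave detector: flag days whose bot-attributed mention
    # volume is several times the trailing seven-day average.
    from statistics import mean

    def wave_days(daily_bot_mentions: list[int],
                  window: int = 7, factor: float = 3.0) -> list[int]:
        flagged = []
        for i in range(window, len(daily_bot_mentions)):
            baseline = mean(daily_bot_mentions[i - window:i])
            if baseline > 0 and daily_bot_mentions[i] >= factor * baseline:
                flagged.append(i)
        return flagged

    # Two spikes flagged here would be the cue to activate the plan
    # before a potential third wave.
    print(wave_days([10, 12, 9, 11, 10, 13, 10, 95, 11, 12, 10, 12, 11, 120, 10]))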

Summary

Harmful bot attacks take place across all media platforms, but you cannot see the full impact with the naked eye. As mentioned, Zignal Labs has released a Bot Intelligence Solution to detect and alert companies whenever their reputations are at risk from bot-led disinformation and media manipulation campaigns.

Learn more about how to protect your company’s reputation from social media bots in our eBook: Your Guide to Protecting Your Brand from Social Media Bots.