The Disinformation Dilemma (Part I)

Disinformation attacks create the perfect storm on a global level by traversing hemispheres and social classes in a matter of moments.

Ed. note: Disinformation attacks and influence campaigns, once confined to advancing political objectives and outcomes, have entered the commercial arena.  This two-part series will discuss how threat actors increasingly target the corporate sector with cyber-centric disinformation campaigns and the accompanying need for lawyers to lean into cyber. Part one of this article discusses this new wave of cyber-attacks. In part two, we will delve into what lawyers can do to be part of the solution when it comes to getting out in front of disinformation campaigns against companies.

The old axiom “seeing is believing” began its rapid slide toward obsolescence with the advent of photography. Within decades of cameras coming to market, photos could be enhanced or composited to portray something other than what they actually captured. Nearly two centuries later, virtually undetectable photo and video manipulations (“deepfakes”) can trick the public into adopting new viewpoints almost instantaneously.

The geopolitical arena was the first proving ground for the use of deepfakes and disinformation to influence public opinion in a meaningful and disruptive manner.  Today, countries like China and Russia manage countless social media accounts designed to rapidly swell public commentary toward nation-state-favored viewpoints on issues like the Hong Kong protests or the U.S. elections.

As with most industries, the growth of disinformation as a strategy for controlling conversations and viewpoints in public forums is tied to the ability to scale and profit. Tactics have grown in scale and complexity, and the dark web provides cheap solutions for threat actors who don’t want to invest much time or money in building online personas from scratch.

The Case for Corporate Vigilance

Malicious actors intent on spreading disinformation about companies typically have one or more motivations: damaging a company for competitive reasons, satisfying a disgruntled employee’s grudge, or furthering a socio-political agenda.  Cybercriminals are also growing increasingly adept at using tools like deepfakes to extort money.  The website Threatpost recently reported that the CEO of an energy company transferred $243,000 to a malicious actor after manipulated audio convinced him that he was speaking with his boss at the parent company.

Disinformation attacks are also becoming more common on social media, where it’s relatively easy to hack into someone’s account and create PR havoc.  In one recent case, a Twitter user’s account was hacked to spread the rumor that Olive Garden was helping fund President Trump’s reelection campaign, a claim quickly refuted by campaign finance data showing that Olive Garden does not donate to presidential candidates.  Nevertheless, the attackers achieved their aims at least in part: the hashtag #BoycottOliveGarden appeared in more than 52,500 tweets and retweets.

A threat actor who wants to use these avenues to disrupt an organization financially has many ways to do so.  Whether launched by a competitor or deployed to derail an M&A deal, a sophisticated disinformation campaign can outpace a traditional cyberattack in both speed and effectiveness because its content spreads virally.

Tactics for Online Disinformation

Social media hacks: As in the Olive Garden exploit, hackers have ample opportunity to hijack a VIP’s account, or any employee’s account, and spread false information like wildfire.  In 2013, the Syrian Electronic Army, a hacker group, compromised the Twitter account of the Associated Press and tweeted that a bomb had exploded in the White House. The group gained access to the account through a phishing email sent to AP employees. Within three minutes of the tweet, U.S. markets briefly shed an estimated $130 billion in value.  Employee education around common hacker tactics is one way to thwart these attacks, according to Cindy Otis, Director of Analysis at Nisos and author of True or False: A CIA Analyst’s Guide to Identifying and Fighting Fake News (full disclosure: we are colleagues).  Yet there is increasing pressure, she says, on social media companies to do a better job of so-called self-policing, or taking down fake content as soon as it’s discovered.  A recent report by NYU’s Stern School of Business outlines these concerns and sets forth recommendations on social media companies’ responsibility to counter disinformation campaigns.

Many of these hacks propagate through bot farms, troll networks (fake accounts run by humans rather than automation), fake websites, memes, misleading content, manipulated photos, encrypted messaging apps, and a crowded menu of anonymity tools, such as virtual private networks, used to obscure a person’s or group’s digital identity.

Deepfakes:  Using a machine learning technique known as a generative adversarial network (GAN), deepfakes replace existing images, audio, and video with altered versions that represent something that never happened.  In the past, deepfakes have been used to create fake celebrity pornographic videos and to alter news media footage.  One widely distributed altered video targeted Barack Obama; another targeted Nancy Pelosi, who appeared to stumble over her words as if drunk.  Notably, the Facebook video of Pelosi was not a deepfake: it had simply been slowed down to alter her voice.  Deepfake technology is accessible through widely available applications such as FakeApp and DeepFaceLab.  “In reality, pulling this off requires a level of sophistication and effort most people don’t have, but new tools and apps allow individuals of any technical skill level to easily manipulate and edit videos and recordings that can look quite convincing,” explains Otis.  “Those kinds of manipulated videos are the real threats.”
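For readers curious about the underlying mechanics, the sketch below illustrates, at toy scale, the adversarial training loop at the heart of a GAN: a generator learns to produce synthetic data while a discriminator learns to distinguish it from real data.  This is a minimal illustration in Python, assuming the PyTorch library; the network sizes and the train_step helper are hypothetical simplifications, and real deepfake pipelines layer face detection, alignment, and far larger models on top of this core idea.

```python
# Toy GAN training loop: a generator learns to produce synthetic
# samples while a discriminator learns to tell them from real ones.
# Illustrative only; dimensions and names are hypothetical.
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 128  # hypothetical sizes

# Generator: maps random noise to a synthetic sample
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, data_dim))
# Discriminator: outputs a logit scoring how "real" a sample looks
D = nn.Sequential(nn.Linear(data_dim, 256), nn.ReLU(),
                  nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(real_batch: torch.Tensor) -> None:
    n = real_batch.size(0)
    fake = G(torch.randn(n, latent_dim))

    # 1) Train the discriminator to separate real from fake
    opt_d.zero_grad()
    d_loss = (loss_fn(D(real_batch), torch.ones(n, 1)) +
              loss_fn(D(fake.detach()), torch.zeros(n, 1)))
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator
    opt_g.zero_grad()
    g_loss = loss_fn(D(fake), torch.ones(n, 1))
    g_loss.backward()
    opt_g.step()

# Each call pits the two networks against each other; over many
# iterations the generator's output becomes harder to distinguish
# from real data, which is what makes convincing fakes possible.
train_step(torch.randn(32, data_dim))  # stand-in for a real data batch
```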

Disinformation attacks create a perfect storm on a global level, traversing hemispheres and social classes in a matter of moments.  The unifying themes are disruption and the pursuit of money, influence, and power. Yet by leaning into this swelling category of cyber threats and understanding its impact on security tactics, organizations can take back some power in what is otherwise becoming a zero-sum game.  In the next part of this two-part series, we will address how companies can begin to deter disinformation and influence campaigns or strategically mitigate their fallout.


Jennifer DeTrani is General Counsel and EVP of Nisos, a technology-enabled cybersecurity firm.  She co-founded a secure messaging platform, Wickr, where she served as General Counsel for five years.  You can connect with Jennifer on Wickr (dtrain), on LinkedIn, or by email at dtrain@nisos.com.
