
The Disinformation Dilemma (Part II)

Countering disinformation in a deepfake world.

Ed. note: Disinformation attacks and influence campaigns have gone mainstream. Once primarily a tool for influencing political opinions, this threat model now poses substantial risk to the private sector. This two-part series discusses the growing trend of disinformation campaigns disrupting the commercial sector, and the accompanying need for lawyers to lean into cyber to provide effective counsel when analyzing risk or addressing the fallout. Part one of this article discussed this new wave of cyber-attacks against companies. Here in part two, we delve into what lawyers can do to help companies mitigate disinformation attacks when they occur.

The number of countries in the crosshairs of political disinformation campaigns more than doubled to 70 in the last two years, according to a recent report from researchers at Oxford University. Given the efficacy of such attacks, it’s not surprising that disinformation campaigns are also becoming a business problem. Companies as varied as Olive Garden, Koch’s Turkeys, and Columbian Chemicals have been recent victims of massive social media hoaxes spreading false information connected to their product or brand.

Beyond maintaining tight controls over online accounts and training staff in basic cybersecurity practices, such as strong password management, there isn’t much that companies can do to prevent disinformation attacks, which can ramp up quickly and threaten corporate value. Criminal groups are even beginning to offer disinformation-campaign services to help bad actors get started, according to ZDNet.

Free open-source tools like Social Bearing and Hoaxy, along with sophisticated brand-monitoring tools like Sysomos, can monitor social media accounts to provide advance warning of disinformation attacks in progress. Yet mainstream social media isn’t where attack coordination and planning take place, according to Cindy Otis, author of True or False: A CIA Analyst’s Guide to Identifying and Fighting Fake News and Director of Analysis at Nisos (full disclosure: we are colleagues). It pays to have cybersecurity experts ready to discover disinformation campaigns before they surface on mainstream platforms, and to determine the extent and source of those campaigns when they appear, says Otis.
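To make the early-warning idea concrete, here is a minimal, hypothetical Python sketch of the kind of check a monitoring workflow might run on data exported from whatever tool a company already uses. The function name, thresholds, and data format are all assumptions, not any vendor’s API: it simply compares each hour’s brand-mention count against a rolling baseline and flags sudden surges.

```python
from collections import deque
from statistics import mean, stdev

def spike_alerts(hourly_mention_counts, window=24, threshold=3.0):
    """Flag hours where brand mentions spike far above the recent baseline.

    hourly_mention_counts: a list of (hour_label, count) pairs, e.g. exported
    from an existing monitoring tool. Illustrative only.
    """
    baseline = deque(maxlen=window)
    alerts = []
    for hour, count in hourly_mention_counts:
        if len(baseline) >= 2:
            mu, sigma = mean(baseline), stdev(baseline)
            # Crude z-score test: a sudden surge in chatter merits a closer look.
            if sigma > 0 and (count - mu) / sigma > threshold:
                alerts.append((hour, count))
        baseline.append(count)
    return alerts

# Example: a quiet baseline followed by a sudden burst of mentions.
counts = [("h%d" % i, 40 + (i % 5)) for i in range(24)] + [("h24", 400)]
print(spike_alerts(counts))  # -> [('h24', 400)]
```

A spike alone proves nothing, of course; it is merely a trigger to put analysts on the question of who is doing the amplifying and why.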

Part of disarming the enemy means first understanding who they are. Attributing the attacks is a rigorous process that can involve sifting through tens of thousands of tweets and messages to understand the identities and motivations of the players. But attribution, when done properly, can pay generous dividends. Often, the ability to directly attribute disinformation activities to a threat actor is enough to quell public chatter, but attribution is often beyond the capabilities of internal IT/security teams. A company called New Knowledge purports to identify and mitigate disinformation campaigns for companies. And while these services can get expensive, the spend may still be worthwhile given the reputational or actual damages they can prevent or at least limit.
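By way of illustration only, and not a description of any firm’s actual methodology, the hedged Python sketch below shows one common first pass in that sifting: grouping accounts that push near-identical text within a tight time window, a frequent signature of coordinated amplification. The function name, field names, and thresholds are assumptions.

```python
import re
from collections import defaultdict
from datetime import timedelta

def coordinated_clusters(posts, window_minutes=10, min_accounts=5):
    """Group accounts that publish near-identical text in a short burst.

    posts: a list of dicts with 'account', 'text', and 'timestamp' (datetime)
    keys. Purely illustrative; real attribution layers in many more signals
    (account creation dates, shared infrastructure, language artifacts, etc.).
    """
    def normalize(text):
        # Strip URLs, punctuation, and case so lightly edited copies still match.
        text = re.sub(r"https?://\S+", "", text.lower())
        return re.sub(r"[^a-z0-9 ]+", "", text).strip()

    by_text = defaultdict(list)
    for post in posts:
        by_text[normalize(post["text"])].append(post)

    clusters = []
    for text, group in by_text.items():
        group.sort(key=lambda p: p["timestamp"])
        accounts = {p["account"] for p in group}
        span = group[-1]["timestamp"] - group[0]["timestamp"]
        # Many distinct accounts pushing the same message in a tight burst
        # is a classic sign of coordinated amplification.
        if len(accounts) >= min_accounts and span <= timedelta(minutes=window_minutes):
            clusters.append({"text": text, "accounts": sorted(accounts)})
    return clusters
```

Clusters like these are leads, not conclusions; tying them to a named threat actor is the harder, human-driven part of the work.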

In certain cases, when bringing on outside cyber experts to investigate these incidents, companies are best off engaging outside counsel to retain those experts. This approach helps preserve legal privileges and can yield valuable evidence that can be provided directly to law enforcement when criminal justice or regulatory measures are implicated. Before deciding whether to pursue an investigation, companies should carefully weigh the risk that it will surface negative information about them, such as gaps in compliance and security, along with the drain on resources, the likelihood of success, and the relevance of potential findings to possible outcomes, including legal implications.

It’s important to note that governments across the world are beginning to create programs and laws designed to prevent and thwart disinformation campaigns. In January 2019, CNN reported that a company that profited millions of dollars by selling fake social media posts and comments settled a case with the New York State Attorney General. The settlement was based on a finding, as a matter of first impression, that selling fake social media activity is illegal.

Deepfakes are harder to detect. Software developers and companies like Facebook are experimenting with applying AI to speed up and improve detection. Other technologies that could disrupt deepfakes include watermarks or tracking technology that monitors original content for alteration. The Pentagon’s research arm, DARPA, is also spearheading industry efforts to better understand disinformation attacks and discover solutions for combating them. However, for now, Otis says, “The reality is that human solutions and human eyes on target are still the best solution even though it’s incredibly difficult to correct false information once it is out there, or at least to get the truth in front of the same people who bought into the false information.”
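As a rough illustration of the content-tracking idea (and not Facebook’s or DARPA’s actual approach), the sketch below registers a fingerprint of original media at publication time and later checks a circulating copy against it. The registry file and function names are hypothetical.

```python
import hashlib
import json
from pathlib import Path

# Hypothetical local registry of assets a company has published.
REGISTRY = Path("content_registry.json")

def fingerprint(path):
    """Return the SHA-256 digest of a media file's bytes."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def register(path):
    """Record an asset's fingerprint at publication time."""
    registry = json.loads(REGISTRY.read_text()) if REGISTRY.exists() else {}
    registry[Path(path).name] = fingerprint(path)
    REGISTRY.write_text(json.dumps(registry, indent=2))

def verify(path):
    """Check whether a circulating copy still matches the registered original."""
    registry = json.loads(REGISTRY.read_text()) if REGISTRY.exists() else {}
    recorded = registry.get(Path(path).name)
    return recorded is not None and recorded == fingerprint(path)
```

A byte-level checksum like this only catches exact alterations, and any re-encoding breaks the match, which is why the efforts described above focus on more robust approaches such as embedded watermarks.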

Nailing the Response

While solutions are slim for preventing disinformation attacks, a rapid, effective response plan is critical to minimize the fallout.

  1. The issue of whether to respond is as critical as the question of how to respond

The first thing any company should assess is whether the level and potential overall impact of disinformation in the public domain merits a response or whether a response might in fact further inflame the issue.  There’s a marked difference between a person or a handful of people spreading false information with limited reach and a more sophisticated and calculated attack, Otis says.  In the former scenario, it’s important to keep in mind that legally, anyone can render an unfavorable opinion about a company, invoking their right to free speech.  In the latter scenario, a far more insidious situation, a network of individuals or bots using automated fake accounts can quickly spread fabricated or misleading content.  Fake websites are programmed to propagate the same nefarious messaging while coming off as credible and neutral.

  2. Crisis communications are central to a strong defense

Company executives should develop a crisis communications plan as a playbook for mounting a rapid response to the public and the media. Plans should incorporate a clear assessment of the origins and breadth of the attack before the company issues any public statements. A proper assessment includes understanding the methodology behind the attack, the perpetrators themselves (subject to the variable degree to which the actors can be positively attributed), and their intentions. “In short, companies need solid attribution,” Otis says. Otherwise, sharing information about a disinformation attack against your company may appear as unfairly targeting critics or, even worse, competitors. Finally, a comprehensive crisis communications plan should encompass a logistical plan of attack, including dissemination channels: social media platforms, press releases, or interviews with news outlets. A PR firm or a company’s internal PR/marketing team, in conjunction with legal, can ensure that the message to the public is on track to dispel the disinformation and set the record straight.

In this era of fake news, disinformation and influence campaigns, things are not what they seem and public mistrust of companies, institutions, and leaders has never been higher.   But disinformation paralysis and unbridled skepticism can only exacerbate the problem.   With the right preparation, a healthy dose of vigilance, and a strong team of stakeholders, we can hold precious ground.


Jennifer DeTrani is General Counsel and EVP of Nisos, a technology-enabled cybersecurity firm.  She co-founded a secure messaging platform, Wickr, where she served as General Counsel for five years.  You can connect with Jennifer on Wickr (dtrain), LinkedIn or by email at dtrain@nisos.com.