The Case For ‘Smart’ Security (Part I)

Artificial Intelligence is on track to disrupt, well, everything. But when it comes to AI and cybersecurity, there’s plenty to consider before implementing AI-based technology in your organization.

Ed. note: This is the first article in a two-part series about AI, its potential impact on how organizations approach security, and the accompanying considerations around implementation, efficacy, and compliance. 

Is Artificial Intelligence (AI) on track to help the world streamline and automate tasks that are better left to a machine? One might think so, given everything we’ve seen and heard about the impact of AI on our society — from our phones telling us the best way to drive home, to chatbots on e-commerce sites answering product questions, to devices as small as a thermostat or as large as an electric vehicle removing friction from everyday life.

Now AI is entering the cybersecurity space, promising greater speed and accuracy in detecting and responding to breaches, analyzing user behavior, and predicting new strains of malware. AI and machine learning technologies can help protect organizations from a continuously evolving threat landscape — but AI is not just for sophisticated attacks; it can also help protect against classic attack scenarios.

Take a real-world case in which an investigations firm identified malware running on a workstation. The infection was traced back to a phishing email containing a malicious Word document, disguised as an invoice, that the organization’s controller opened and executed. Once executed, the malicious document created a backdoor into the organization’s internal network. The malware ran on the employee’s system for six months, evading multiple antivirus products and a highly trained internal security team.

In this scenario, AI could have helped detect changes to the employee’s system better and faster than its human counterparts.  Even though commercial systems are still at an early stage of functionality, and only the largest companies can afford the expertise to build or support AI systems in-house, security teams should start thinking about how to best apply AI tools when they become more readily available.

According to Rodger Sadler, Senior Counsel with the global IP Center of Excellence at Bank of Montreal (BMO Financial Group), it’s no surprise that hackers are incorporating AI into their bags of tricks. AI algorithms enable exploits like spear-phishing on social media platforms to target victims more effectively and efficiently, with messaging designed to elicit a much higher response rate than a human attacker could achieve.

State of the Technology


The reality is that machine learning technology has a long way to go, says Will Pearce, Senior Operator with Silent Break Security, a cybersecurity consulting firm. “In terms of implementation on networks, machine learning systems still suffer from the classic issues: false positives, poor software development practices, misconfigurations, and a lack of network logging,” he says. AI-enabled security solutions are still relatively immature, but vendors are investing heavily in improving them. The problem, says Pearce, is that most of these new cybersecurity solutions are not yet designed to let algorithms make better decisions, but merely to aid human decision-makers, which falls short of the technology’s true potential and in some cases is simply redundant.

“We already have the required knowledge to create alerts for particular events,” Pearce says.  “Teaching an algorithm to alert on the same events only adds complexity and adds yet another system for security teams to integrate and manage.” He recommends organizations begin experimenting with AI technologies to monitor malicious web traffic, weed out phishing emails, and conduct user behavior analysis on external portals such as VPNs.  “The data sets are smaller, the algorithms are simpler, and the logging is already there in most cases.”
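To make Pearce’s suggestion concrete, here is a minimal sketch of what user behavior analysis on VPN logs might look like with an off-the-shelf anomaly detector. The log format, column names, and thresholds below are illustrative assumptions, not a vendor’s design or a production recipe.

```python
# A minimal sketch of user behavior analysis on VPN authentication logs,
# along the lines Pearce describes. The per-session features and values
# below are hypothetical; a real deployment would parse its own logs.
import pandas as pd
from sklearn.ensemble import IsolationForest

logins = pd.DataFrame({
    "hour":            [9, 10, 11, 9, 14, 3],        # hour of day the session started
    "mb_transferred":  [120, 80, 95, 110, 70, 4200],
    "session_minutes": [55, 40, 60, 50, 35, 610],
    "failed_attempts": [0, 0, 1, 0, 0, 7],
})

# Unsupervised anomaly detection: flag sessions that look unlike the rest.
model = IsolationForest(contamination=0.1, random_state=42)
logins["anomaly"] = model.fit_predict(logins)   # -1 marks an outlier

print(logins[logins["anomaly"] == -1])          # sessions worth a human look
```

The point is the one Pearce makes: the data set is small, the algorithm is off the shelf, and the logging most organizations already collect is enough to start experimenting.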

Peter Clay, COO of Dark Cubed, a cybersecurity software platform that detects cyber threats, recommends that companies use AI-enabled tools to solve discrete and well-defined problems. “For small to mid-sized businesses, this probably means using a single AI tool, placed where it will do the most good on the endpoint.”

Both Clay and Pearce agree that having the proper infrastructure and data strategy is as vital as having the right tools. (We will delve deeper into best practices in the second article of the series.) “Bad or incomplete data limits the utility of the AI, and from a cybersecurity standpoint, despite all of the marketing hype, no single tool can make accurate decisions from simply its own data without reference to what else is happening with the state/other tools managing and protecting the data,” Clay says.

The Dark Cloud Over AI


The potential of AI technologies to simplify and improve security is clear, but it’s too soon to expect AI to comprehend and properly classify nuanced existential threats. Privacy is another consideration. AI tools that intrude on users’ privacy are troubling enough, but worse scenarios exist, such as an “intelligent” clinical system misdiagnosing a patient.

IT and security professionals should take care to closely evaluate how AI systems use and protect the data their organizations collect, especially as it relates to personally identifiable information (PII). Clay points to the EU’s GDPR requirements, which treat IP addresses tied to user behavior as PII that must be protected. Because security systems collect IP data in their analysis, and may combine it with other PII data sets, they increase a company’s risk of falling out of compliance with GDPR or other privacy laws. “From a privacy perspective, creating additional repositories of PII is seldom, if ever, a good thing from the point of view of corporate counsel,” Clay says. Adds Pearce: “The concept of ‘reidentification’ attacks, in which an application can analyze anonymous information across multiple data sets to identify an individual, is a concern too,” from a privacy law perspective.
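One mitigation worth sketching, assuming the raw addresses don’t need to be retained, is to pseudonymize IPs with a keyed hash before they reach long-term security logs. The key handling and event schema below are illustrative only, and pseudonymized data may still count as personal data under GDPR; this is a sketch, not compliance advice.

```python
# A minimal sketch of pseudonymizing source IPs before they land in
# long-lived security logs, to shrink the extra PII repositories Clay
# warns about. Key management and the event schema are assumptions.
import hmac
import hashlib
import os

# In practice the key would come from a secrets manager, not an env default.
PSEUDONYM_KEY = os.environ.get("IP_PSEUDONYM_KEY", "rotate-me").encode()

def pseudonymize_ip(ip: str) -> str:
    """Return a keyed, non-reversible token that still lets analysts
    correlate events from the same source without storing the raw IP."""
    return hmac.new(PSEUDONYM_KEY, ip.encode(), hashlib.sha256).hexdigest()[:16]

event = {"user": "controller@example.com", "action": "vpn_login",
         "src_ip": pseudonymize_ip("203.0.113.42")}
print(event)  # the raw IP never reaches the stored log
```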

IT and security departments must also be on the lookout for even more aggressive tactics by hackers skilled in AI. “In the cybersecurity space, network defenders will have the additional task of securing their algorithms and datasets against attackers who actively try to influence or break machine learning systems,” Pearce remarks.
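To illustrate one such attack class, the short sketch below shows how “poisoning” a training set (here, flipping a fraction of labels) degrades a simple detector. The synthetic data and model are stand-ins chosen for brevity, not an example Pearce gave.

```python
# A minimal sketch of training-data poisoning: flipping 30% of labels
# before (re)training noticeably hurts a simple classifier. Data and
# model are synthetic illustrations only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Attacker flips 30% of the training labels.
rng = np.random.default_rng(0)
flip = rng.choice(len(y_tr), size=int(0.3 * len(y_tr)), replace=False)
y_bad = y_tr.copy()
y_bad[flip] = 1 - y_bad[flip]

poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_bad)

print("clean accuracy:   ", round(clean.score(X_te, y_te), 3))
print("poisoned accuracy:", round(poisoned.score(X_te, y_te), 3))
```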

Then there’s the alarming concept of “Deepfakes,” in which an AI tool uses PII data, such as an image or recording, to falsify identities and trick employees.  “Imagine the CEO gives a talk or is featured in marketing material,” Pearce says. “Attackers could take the recording, create a Deepfake, then use it to phish employees, and how are they supposed to know the difference?”

The Inevitable Push

The bottom line: AI technology is powerful and complex, and companies should do plenty of research before using it to buttress or replace existing cybersecurity measures. But organizations with sophisticated security teams also know that they must meet their enemies on the battlefield: as attackers increasingly use AI to breach enterprise networks, defenders will need to understand the technology themselves.

The merits of AI don’t point to immediate procurement of AI-enabled security technology, or to training staff in algorithm design and machine learning software development. Instead, much as mapping applications and embedded chips have turned ordinary consumer devices into “smart” ones, machine learning will increasingly be built directly into the applications vendors ship. “I don’t think organizations will have to spend significant time or energy building skills internally to gain the benefits of machine learning, as vendors will build it right into their products,” Pearce says. Given the immaturity of commercial systems, he also recommends proceeding cautiously until the market dynamics play out: “Organizations shouldn’t be in a rush to implement this technology. Machine learning is here to stay; it just needs some time. It’s okay to wait.”


Jennifer DeTrani is General Counsel and EVP of Nisos, a technology-enabled cybersecurity firm.  She co-founded a secure messaging platform, Wickr, where she served as General Counsel for five years.  You can connect with Jennifer on Wickr (dtrain), LinkedIn or by email at dtrain@nisos.com.
