Spectacular recent developments in Artificial Intelligence (AI) are feeding many fantasies in the world of cybersecurity. Claims run the gamut, from the looming obsolescence of even the best defence solutions to an open war between AIs developed by rival tech powers – states included. For executives, preparing for what lies ahead can feel dauntingly complicated.
Experts agree that AI should eventually benefit both attackers and defenders: progress made by malicious AIs necessarily pushes defensive AIs to improve, and vice versa. The eventual equilibrium of this cat-and-mouse game is far from settled, however, and it remains unclear which camp will ultimately lead the race.
Beyond such considerations, which sometimes read like sci-fi speculation, “Artificial Intelligence” is in fact often a misnomer for the improvement and acceleration of techniques that have existed for decades, such as the algorithmic mining of event logs to detect relevant security events. Very often, AI is used as a generic term for what is really Machine Learning (ML) – a crucial difference between these two intertwined concepts being the inability of current algorithms to extrapolate new conclusions in the absence of new data.
As a consequence, the quality – and quantity – of the data used to train these cyber-defence tools is absolutely crucial: a defence solution will only be good at detecting threats it has already seen. Hackers understand this link well and often train their AIs on massive datasets shared over the internet. Organisations, on the other hand, seem to be lagging behind, even though sharing data with one another would obviously benefit them all. Apart from the recent publication by Endgame – a cybersecurity vendor of ML-powered solutions – of a large dataset (“EMBER”) intended to support the development of defensive AI, such initiatives remain rare. Companies’ reluctance to share what they often perceive as sensitive data can of course be explained by the difficulty of anonymising it, as well as by a legacy attitude rooted in competition and fear of reputational damage rather than in cooperation. On this last point, lines will need to move if the good guys intend to face the bad guys effectively – that is, collectively.
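The point that a data-driven defence only recognises what it has already seen can be sketched with a deliberately naive token-matching “detector”. This is an illustrative toy, not a real product or the EMBER approach; all signatures and log lines below are invented for the example:

```python
# Toy sketch: a "defence tool" that only knows the malicious patterns
# present in its training data. All strings below are invented.

def train(known_malicious_logs):
    """Learn a set of suspicious tokens from labelled malicious log lines."""
    signatures = set()
    for line in known_malicious_logs:
        signatures.update(line.lower().split())
    return signatures

def detect(signatures, log_line):
    """Flag a log line only if it contains a token seen during training."""
    return any(token in signatures for token in log_line.lower().split())

# Hypothetical training set of previously observed malicious activity.
training_data = [
    "powershell -enc base64payload",
    "mimikatz sekurlsa::logonpasswords",
]
model = train(training_data)

print(detect(model, "user ran mimikatz on host-42"))    # flagged: seen before
print(detect(model, "novel zero-day exploit attempt"))  # missed: never seen
```

The second line slips through precisely because nothing resembling it appeared in the training data – which is why the breadth and freshness of shared datasets matters so much for defenders.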
The confusion between AI and ML would be benign if it did not hide an important reality: people remain – and will long remain – central to the cybersecurity practice of any organisation. In the near future, AI should assist – not replace – humans in their fight against cyberattacks, by efficiently automating the detection of suspicious activities for analysts to inspect.
The human aspect matters all the more given that many organisations sit at cybersecurity maturity levels that are far too low – for reasons that have more to do with human and governance issues than with technological underinvestment.
Indeed, no AI – however intelligent or expensive – will ever protect you against employees’ bad practices, be they cultural or managerial: passwords that are too simple or mindlessly scribbled on post-it notes, visits to dangerous websites, malware downloads, the systematic sharing of confidential data, the deprioritisation of security patch deployments, and so on.
In the end, it bears repeating that good security practices – well established for ages – are often enough to protect effectively against many threats.
Before tackling topics as complex as Artificial Intelligence, it is our very human common sense that should prevail in most firms.