“Within the next 10 to 50 years artificial intelligence systems will have been developed, that will not only be more intelligent than humans but also exhibit a significant number of advantages; they will be faster, more reliable, quicker to learn and more robust.” Prophetic words from Kevin Warwick’s “In the Mind of the Machine”, written in 1998.
In 2018, 20 years on, we find ourselves in a more or less cashless society in which protecting our personal wealth, including our most personal data, is no longer as simple as keeping an eye on who is hanging around or locking things up at home. We and our personal assets are as exposed to theft, manipulation and destruction as perhaps never before.
Crime has gone digital, taking full advantage of our dependency on technology and our desire for ever more convenience. Purchasing has turned into digital transactions, and our smartphones contain more personal assets than our apartments ever did. Cyber-criminals also have it easier: crimes can be committed remotely, and there is nothing physical to carry away.
Artificial intelligence (AI) is expanding into our daily lives, and not all of it is to our benefit. Cyber-criminals are using AI to find and attack system vulnerabilities, opening a seemingly endless contest between autonomous AI hacking and the protection of such systems. If cyber-attacks increasingly rely on artificial intelligence, then we clearly need to counter them effectively to remain on par and, if we are lucky, a step ahead.
Although AI in IT security is a popular buzzword among vendors keen to demonstrate their innovativeness, just two years ago only a handful of vendors actually used AI in an IT security context. For example, Cylance uses trained neural networks to detect malware instead of relying on millions of virus signatures. More and more vendors are now bringing AI into their products. In most cases, however, this means big-data processing with machine learning on their back-end systems to create signatures and similar artefacts, an approach that does not strictly qualify as AI.
A pure AI approach would be a self-learning system that evolves against sophisticated and continually mutating attacks, much as we might imagine an artificial brain would behave. At its core is an Artificial Neural Network (ANN), which consists only of nodes (neurons) connected by weighted links. Only the weights of these connections change, and they alone represent the “knowledge” of the ANN. Continual training adjusts the weights and thereby improves that “knowledge”.
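To make this concrete, here is a minimal sketch in plain NumPy of a tiny ANN trained on hypothetical toy data; the task, labels and network size are invented purely for illustration. Note that the only thing that changes during training is the two weight matrices, which is exactly where the “knowledge” lives.

```python
# Minimal sketch (NumPy, toy data): a network's "knowledge" is nothing
# but its connection weights, and training only nudges those weights.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy task: classify 2-D points as "good" (0) or "bad" (1).
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# One hidden layer; W1 and W2 are the ANN's only "memory".
W1 = rng.normal(scale=0.5, size=(2, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 0.5
for epoch in range(500):
    # Forward pass
    h = sigmoid(X @ W1)            # hidden activations
    p = sigmoid(h @ W2).ravel()    # predicted probability of "bad"

    # Backward pass (gradients of the binary cross-entropy loss)
    grad_out = (p - y)[:, None] / len(X)
    grad_W2 = h.T @ grad_out
    grad_h = grad_out @ W2.T * h * (1 - h)
    grad_W1 = X.T @ grad_h

    # "Learning" is only this: adjusting the weights.
    W1 -= learning_rate * grad_W1
    W2 -= learning_rate * grad_W2

accuracy = ((p > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

There is no rule or signature anywhere in this code that encodes the decision boundary; it emerges solely from the weight updates, which is the point the paragraph above makes.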
A plausible adoption of AI in IT security could be an Intrusion Protection System (IPS) that does not rely on attack patterns to recognize a security breach, but instead uses an ANN to learn the difference between good and bad network traffic. Such a trained ANN can protect networks without daily pattern updates, learning and evolving while it protects against continually evolving threats. Training such a system is a huge challenge, yet image recognition, one of the most investigated AI technologies, offers an interesting way to approach it. If good network traffic and network attacks are represented as grayscale pixel images (every byte can be shown as a pixel), images of good and bad traffic can be generated. With the same training and technology used in convolutional neural networks (CNNs), which let a search engine recognize a dog in unknown images, the IPS can recognize bad traffic in all facets of these images. In this scenario, the system learns from experience. An interesting metaphor: unlike conventional software, which may have “bugs”, there is no code to fix. A “bug” in an ANN can only be “trained” away through experience, much as humans learn not to repeat their mistakes.
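As a hedged illustration of this byte-to-image idea, the sketch below maps raw packet bytes to fixed-size grayscale images and trains a small CNN (PyTorch, here) to separate good from bad traffic. The image size, the byte-to-pixel mapping, the labels and the training loop are assumptions chosen for illustration, not a description of any particular product.

```python
# Sketch: turn traffic payloads into 32x32 grayscale images and train
# a small CNN to classify them as good (0) or bad (1). All sizes and
# labels here are hypothetical choices for illustration.
import numpy as np
import torch
import torch.nn as nn

IMG_SIDE = 32  # assumption: first 32*32 = 1024 bytes of a flow

def flow_to_image(payload: bytes) -> torch.Tensor:
    """Map each byte (0-255) to one grayscale pixel, pad/truncate to 32x32."""
    buf = np.frombuffer(payload[: IMG_SIDE * IMG_SIDE], dtype=np.uint8)
    buf = np.pad(buf, (0, IMG_SIDE * IMG_SIDE - len(buf)))
    img = buf.reshape(IMG_SIDE, IMG_SIDE).astype(np.float32) / 255.0
    return torch.from_numpy(img).unsqueeze(0)  # shape: (1, 32, 32)

class TrafficCNN(nn.Module):
    """Tiny CNN: two conv blocks, then a binary good/bad classifier head."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, 2)  # logits: [good, bad]

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

def train(model, labelled_flows, epochs=5):
    """Toy training loop over hypothetical (payload_bytes, label) pairs."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for payload, label in labelled_flows:
            x = flow_to_image(payload).unsqueeze(0)   # (1, 1, 32, 32)
            y = torch.tensor([label])                 # 0 = good, 1 = bad
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
    return model
```

In practice the labelled flows would have to come from captured traffic classified as benign or malicious, and far more data, batching and regularization would be needed than this toy loop suggests; the point is only that the classifier, like the dog-recognizing CNN, learns from examples rather than from hand-written signatures.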
All in all, protecting ourselves seems to come down to staying a step ahead. That doesn’t just mean being the fastest, an advantage we ceded to machines many moons ago; it also means being able to handle enormous complexity and to process knowledge so as to continually improve. As Kevin Warwick predicted in 1998, AI doesn’t just have one advantage, it has all of them.