Recently there has been a rise in cyber attacks, with 81% of UK organisations suffering some form of cyberattack in 2021. In the UK, the cost amounts to $1.08 million per incident, while the lack of a specialised workforce is the largest challenge. SMEs, large organisations, and the public sector alike are expected to continue expanding their online presence. In critical sectors in particular (e.g. power, oil and gas, defence, health), this online presence introduces a larger attack surface, with more and more private and protected information under threat. As a result, demand for cybersecurity solutions is rising, and such solutions are now more relevant than ever. Considerable research is being undertaken in the UK and globally to develop more secure hardware and software solutions that address this growing need. Among the directions that aim to automate cybersecurity efforts are Intrusion Detection and Management (IDM) and Intrusion Prevention Systems (IPS).
IDM and IPS will become necessary for any organisation that handles and stores sensitive information (e.g. GDPR-protected information, critical infrastructure, operations management, etc.). However, as many modern solutions depend heavily on Artificial Intelligence (AI), they demand users who are highly specialised in both cybersecurity and AI. This in turn introduces issues of trust, particularly for organisations in critical domains. The main challenge for innovative AI approaches is that they focus on either signature-based or anomaly-based detection in large volumes of network traffic monitoring data. AI has been applied in both categories, but industrially deployed products still lack explainable, robust, and transparent solutions. Moreover, publicly available datasets are limited, so models are trained only on a restricted set of known patterns. Models are usually trained on well-known datasets and cannot identify or respond to more sophisticated attacks. This is a barrier to their widespread use at present.
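To make the distinction between the two paradigms concrete, the minimal sketch below contrasts a signature-based check against an anomaly-based one. It is an illustrative example only, not part of the proposed system: the signature patterns, the per-flow feature layout, and the choice of an IsolationForest detector are assumptions made purely for illustration.

```python
# Minimal sketch contrasting signature-based and anomaly-based detection.
# All signatures, feature names, and numeric values below are illustrative
# assumptions, not taken from any specific product or dataset.

import re
import numpy as np
from sklearn.ensemble import IsolationForest

# --- Signature-based detection: match payloads against known attack patterns ---
KNOWN_SIGNATURES = [
    re.compile(rb"(?i)select\s.+\sfrom\s.+--"),   # crude SQL-injection pattern (assumed)
    re.compile(rb"\x90{16,}"),                    # long NOP sled, typical of shellcode
]

def signature_match(payload: bytes) -> bool:
    """Return True if the payload matches any known attack signature."""
    return any(sig.search(payload) for sig in KNOWN_SIGNATURES)

# --- Anomaly-based detection: model "normal" flow statistics, flag outliers ---
rng = np.random.default_rng(0)

# Hypothetical per-flow features: [duration (s), bytes sent, packet count]
normal_flows = rng.normal(loc=[1.0, 500.0, 10.0],
                          scale=[0.3, 100.0, 3.0],
                          size=(1000, 3))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_flows)

suspicious_flow = np.array([[30.0, 50000.0, 2000.0]])  # far outside the training data
print(detector.predict(suspicious_flow))                # -1 means flagged as anomalous

# A signature engine only catches patterns it already knows; the anomaly model
# can flag novel behaviour but cannot, by itself, explain why a flow is
# anomalous -- the explainability gap this project targets.
```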
We propose to address this barrier by developing solutions compliant with an Ethical AI strategy and with the relevant legislation and legal frameworks. This ensures that the AI approach will be Transparent, Reliable, and Accountable, and that AI models will be designed and developed with an increased level of explainability and reliability, thus reducing the training overhead for companies and organisations. The project's objectives are to expand our solutions for the IDM use case and to evaluate their explainability and robustness through stress testing under orchestrated attacks from single and multiple sources.