We are seeing increased use of artificial intelligence (AI) and machine learning (ML) systems across all sectors and industries. AI is enabling newer and better products to be brought to market, and there is almost an arms race when it comes to AI and ML.

Those that can get a new product to market first are likely to capture a bigger share of the market thanks to first-mover advantage. Furthermore, more and more organisations are making AI and ML key aspects of their digital transformation projects as they try to gain an upper hand over the competition.

Before we can discuss the threats and countermeasures related to AI and ML, we first need to understand some key contextual points:

- AI and ML systems require large, complex data sets to power and train the algorithms that actually make these systems work

- Due to the large data requirements, organisations are increasingly using cloud platforms as these platforms provide greater storage and processing capabilities

- AI and ML systems typically require three sets of data:

  • Training data to build a predictive model
  • Testing data to assess how well the model works
  • Live transactional or operational data when the model is put to work (illustrated in the sketch below).
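
To make these three data sets concrete, the minimal sketch below separates training and testing data before the model ever sees a live record. scikit-learn is an assumed library choice; the original text does not prescribe any tooling.

```python
# Minimal sketch of the three data sets, using scikit-learn (an assumed
# library choice; any ML framework follows the same pattern).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Historical data is split so the model is never scored on what it trained on.
X, y = make_classification(n_samples=1_000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression().fit(X_train, y_train)              # training data
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")  # testing data

# Live transactional/operational data arrives later and only ever hits predict().
live_record = X_test[:1]  # stand-in for a new operational record
print("Prediction on live data:", model.predict(live_record))
```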

This context is important to understand because it drives some of the key threats, which I will discuss next.

Some of the common threats that we see with AI and ML systems include:

AI and ML sprawl – not knowing what AI and ML systems are in use and where they are located. Without this information it can be very difficult to secure these systems: you cannot protect what you do not know exists

Cloud related risks – as more and more AI and ML systems are being put in the cloud, common cloud security vulnerabilities creep in. These include misconfigurations and poorly implemented cloud security controls

Data-related risks – the large data needs of AI and ML systems create risk in themselves, including the use of production data in test and training systems

Non-security development – AI projects are generally built by non-security professionals. As a result, required security measures may not be applied, leaving common security vulnerabilities in these systems

Inversion attacks – this is where an attacker gets the AI model to reveal information about itself and the data it was trained on, potentially leaking confidential and sensitive data
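
A full inversion attack is beyond a short example, but the leak it exploits is easy to demonstrate with the closely related membership inference technique: an overfitted model is measurably more confident on records it was trained on, so prediction confidence alone reveals facts about the training data. A minimal sketch (scikit-learn assumed):

```python
# Sketch of the information leak behind inversion-style attacks: an
# overfitted model is far more confident on its own training records,
# which an attacker can measure with nothing but query access.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, random_state=1)
X_train, X_out, y_train, y_out = train_test_split(X, y, test_size=0.5, random_state=1)

model = RandomForestClassifier(random_state=1).fit(X_train, y_train)

conf_members = model.predict_proba(X_train).max(axis=1)   # records in training set
conf_outsiders = model.predict_proba(X_out).max(axis=1)   # records never seen

# A confidence gap like this lets an attacker guess membership, i.e. the
# model leaks facts about its training data through ordinary predictions.
print(f"Mean confidence, training records: {conf_members.mean():.3f}")
print(f"Mean confidence, unseen records:   {conf_outsiders.mean():.3f}")
```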

Black box systems that lack explainability – AI systems can be created without adequate documentation. This creates a major problem if there is an issue with the system, including a security issue, as it will be difficult to resolve without knowing what the system does

Insecure interaction with users and backend systems – AI and ML systems do not exist in isolation; they still need to interact with other systems and with users. If these interfaces are not built securely, they create an avenue for attacks into the AI and ML systems

Data poisoning – this is where an attacker feeds malicious data into an AI or ML system to force it to make inaccurate predictions based on the poisoned data
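
A crude form of poisoning, flipping a slice of the training labels, is enough to show the effect; the sketch below (scikit-learn assumed) compares a clean model against one retrained on poisoned labels:

```python
# Crude poisoning demo: flipping a slice of training labels degrades accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, random_state=2)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=2)

clean = LogisticRegression().fit(X_train, y_train)

# Attacker flips 30% of the training labels before the model is (re)trained.
rng = np.random.default_rng(2)
poisoned_y = y_train.copy()
idx = rng.choice(len(poisoned_y), size=int(0.3 * len(poisoned_y)), replace=False)
poisoned_y[idx] = 1 - poisoned_y[idx]
poisoned = LogisticRegression().fit(X_train, poisoned_y)

print(f"Clean model accuracy:    {clean.score(X_test, y_test):.2f}")
print(f"Poisoned model accuracy: {poisoned.score(X_test, y_test):.2f}")
```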

Bias and model drift – this is where malicious data is fed into the model, causing it to become biased or to drift, leading to inaccurate or malicious predictions. This can in turn cause compliance and ethical issues, particularly if the model becomes biased against a particular group of people
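
Drift, whether malicious or natural, shows up as a shift in the statistical distribution of the data the model sees, which is also the key to detecting it. A minimal sketch comparing a live feature against its training baseline with a two-sample Kolmogorov-Smirnov test (SciPy assumed; the threshold is illustrative):

```python
# Minimal drift check: compare a live feature's distribution to its
# training baseline with a two-sample Kolmogorov-Smirnov test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(3)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)  # baseline
live_feature = rng.normal(loc=0.4, scale=1.0, size=1_000)      # shifted input

stat, p_value = ks_2samp(training_feature, live_feature)
if p_value < 0.01:  # threshold is an assumption; tune per feature
    print(f"Possible drift detected (KS statistic {stat:.3f}, p={p_value:.2g})")
else:
    print("No significant shift detected")
```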


Having discussed some of the common threats applicable to AI and ML systems, let's now look at some countermeasures that can help mitigate them (several are illustrated with short code sketches after the list):

- Create a complete and auditable AI and ML inventory. This will allow you to track and secure your AI and ML systems (an illustrative inventory record is sketched after this list)

- Align AI risk management with broader risk management efforts. AI risk management follows the same principles as general risk management, so the practices you already apply to broader risks should be extended to AI

- Have a single executive in charge of AI related risks. This creates a single point of accountability and helps ensure that AI related risks are appropriately managed

- Cloud security considerations – ensure common cloud controls such as authentication, authorisation, encryption, logging, backup and recovery, and appropriate jurisdictional location are considered and in place

- Don't store or use data if it's not needed. Strictly adhere to local privacy requirements and avoid use of live Personally Identifiable Information (PII) or Personal Health Information (PHI)

- Have a process in place to identify and purge PII or PHI from test or training systems (a toy purge step is sketched after this list)

- Only use dummy, anonymised, tokenised, or encrypted data in test or training systems (a tokenisation sketch follows after this list)

- AI security by design: all AI powered projects must be built around core data security principles, including appropriate use of encryption, logging and monitoring, authentication, authorisation and access controls (an encryption-at-rest sketch follows after this list)

- Create a secure development process for both traditional code and the new AI and ML powered systems:

  • Use ISO or other industry standards on developing secure products and code
  • Include penetration testing and red teaming as part of the development lifecycle
  • Apply secure coding principles
  • Check open source code for vulnerabilities before reuse (a dependency-check sketch follows after this list)
  • Consider SOC 2 certification for your AI and ML system.

- Explainability – ensure AI and ML systems are well documented, with the algorithms used well understood. Without this, any issues will be hard to resolve as they will be difficult to diagnose and remediate

- Secure interaction with users and back-end platforms:

  • Use strong authentication and the principles of least privilege
  • Secure connections to the back-end databases 
  • Secure connections to third-party data sources
  • Ensure the user interface is resilient against injection attacks (a parameterised-query sketch follows after this list)

- Secure access to the algorithms, the model and the training data. Any access at this level can allow poisoning of the model (an integrity-manifest sketch follows after this list)

- Add specific goals to the model to counter bias, e.g. to prevent gender bias, add diversity goals (a bias-measurement sketch follows after this list)

- Train practitioners on how to recognise and resolve ethical issues around AI

- Collaborate with external parties on leading practices around sound AI ethics

- Establish policies or a board to guide AI ethics.
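
The sketches below illustrate several of the countermeasures above in code. All library choices, names, and values in them are illustrative assumptions, not prescribed tooling. First, a hypothetical shape for an AI and ML inventory record:

```python
# Hypothetical shape of an AI/ML inventory record; field names are
# illustrative, not a standard.
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    owner: str                  # accountable person or team
    environment: str            # e.g. "cloud-prod", "on-prem-lab"
    data_classification: str    # e.g. "public", "PII", "PHI"
    model_type: str             # e.g. "logistic regression"
    training_data_sources: list = field(default_factory=list)
    last_reviewed: str = ""     # date of last security review

inventory = [
    AISystemRecord(
        name="churn-predictor",
        owner="data-science-team",
        environment="cloud-prod",
        data_classification="PII",
        model_type="logistic regression",
        training_data_sources=["crm_exports"],
    )
]
print(f"{len(inventory)} AI/ML system(s) on record")
```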
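
For purging PII from test or training extracts, a toy detection-and-redaction step might look like the following (only two patterns shown; a real purge process would rely on a dedicated PII-detection tool and far broader rules):

```python
# Toy PII scrubber for test/training extracts. Only two patterns are shown;
# a production purge process needs a proper PII-detection tool.
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "us_phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def purge_pii(text: str) -> str:
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text

print(purge_pii("Contact Jane at jane.doe@example.com or 555-867-5309."))
# -> Contact Jane at [REDACTED-EMAIL] or [REDACTED-US_PHONE].
```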
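
For tokenised data, one common approach is a keyed hash that maps an identifier to a consistent but irreversible token. A minimal sketch, with key management deliberately out of scope:

```python
# Tokenisation sketch: replace an identifier with a keyed, irreversible
# token that is still consistent across records (same input, same token).
import hashlib
import hmac

SECRET_KEY = b"store-me-in-a-key-vault"  # placeholder; never hard-code keys

def tokenise(value: str) -> str:
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

# The same customer ID always maps to the same token, so joins still work,
# but the original value cannot be recovered from test/training data.
print(tokenise("customer-12345"))
print(tokenise("customer-12345"))  # identical token
```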
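
As one instance of security by design, encrypting sensitive records at rest takes only a few lines with an established library; the sketch below assumes the widely used `cryptography` package:

```python
# Sketch of encryption at rest using the cryptography package's Fernet
# recipe (authenticated symmetric encryption).
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice, store this in a key vault
fernet = Fernet(key)

ciphertext = fernet.encrypt(b"sensitive training record")
print(fernet.decrypt(ciphertext))  # b'sensitive training record'
```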
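
For checking open source dependencies before reuse, one option is to query the public OSV.dev vulnerability database; the package name and version below are placeholders:

```python
# Sketch: query the OSV.dev vulnerability database for a pinned dependency
# before reusing it. Package name and version below are placeholders.
import json
import urllib.request

query = {
    "package": {"name": "requests", "ecosystem": "PyPI"},
    "version": "2.19.0",
}
req = urllib.request.Request(
    "https://api.osv.dev/v1/query",
    data=json.dumps(query).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    vulns = json.load(resp).get("vulns", [])

print(f"{len(vulns)} known vulnerabilities reported for this version")
```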
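
For resilience against injection attacks, the core habit is parameterised queries rather than string concatenation. A minimal sketch, with the standard library's sqlite3 standing in for the back-end database:

```python
# Parameterised queries keep user input as data, never as SQL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

user_input = "alice' OR '1'='1"  # a classic injection payload

# Unsafe: string concatenation would splice the payload into the SQL.
# Safe: the placeholder binds the payload as a plain value instead.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print(rows)  # [] -- the payload matches nothing, so the injection fails
```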
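
For securing the training data against tampering, one simple tamper-evidence control is an integrity manifest: hash the data when access is granted and verify the hashes before every training run. A sketch with illustrative file paths:

```python
# Tamper-evidence sketch: record SHA-256 digests of training files and
# verify them before each training run. Paths below are illustrative.
import hashlib
from pathlib import Path

def digest(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def build_manifest(data_dir: Path) -> dict:
    return {str(p): digest(p) for p in sorted(data_dir.glob("*.csv"))}

def verify(manifest: dict) -> bool:
    return all(digest(Path(p)) == h for p, h in manifest.items())

# Illustrative usage (assumes a "training_data" directory of CSV files):
# manifest = build_manifest(Path("training_data"))
# assert verify(manifest), "training data changed since manifest was created"
```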
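
Finally, anti-bias goals can be made measurable. A demographic parity check, for example, compares positive-prediction rates across groups; the numbers below are illustrative only:

```python
# Demographic parity sketch: compare the rate of positive predictions
# across groups. Predictions and group labels below are illustrative.
import numpy as np

predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

rate_a = predictions[group == "a"].mean()
rate_b = predictions[group == "b"].mean()
print(f"Positive rate, group a: {rate_a:.2f}")
print(f"Positive rate, group b: {rate_b:.2f}")
print(f"Demographic parity gap: {abs(rate_a - rate_b):.2f}")  # target: near 0
```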

AI and ML systems are here to stay, and their application will improve our lives in many known and as-yet-unknown ways. However, as with any new technology, new risks emerge. Within this paper I have discussed some of these key risks and some key ways to counteract them.