It’s A.I. risky business

Tuesday 3rd August, 2021

Introduction

Just as businesses are finally getting to grips with the GDPR (in its various forms), another hoop to jump through has materialised in the shape of the European Commission’s proposal for an Artificial Intelligence Regulation (the “Regulation”). Published in April this year, the proposal confirms that the Regulation will seek to harmonise the rules for the placing on the market, the putting into service and the use of artificial intelligence systems.

An artificial intelligence system is defined very broadly as any “software that is developed with one or more of the techniques and approaches listed in Annex I [which include machine learning and knowledge-based approaches] and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with”.

There are some exceptions to which the Regulation will not apply, for example AI systems developed or used solely for military purposes, but otherwise the scope is incredibly broad.

The sliding scale of risk

The Regulation takes a risk-based approach: different practices are assigned different classes of risk, ranging from practices that are prohibited altogether (such as systems that deploy subliminal techniques beyond a person’s consciousness to materially distort that person’s behaviour in a manner that causes or is likely to cause harm, or systems that exploit the vulnerabilities of a group due to their age or physical or mental disability) down to practices of “limited” risk (to which only transparency requirements apply) and “minimal” risk (which are left largely unregulated). Perhaps the most interesting category, and the one that requires the most scrutiny, is that of “high risk” practices.

High risk practices

Depending on the nature of the activity, mandatory requirements (including risk management measures, a conformity assessment and the ongoing maintenance of key documentation) will need to be met in connection with the provision of high-risk AI. What amounts to “high-risk” is not defined, but indicative criteria are set out which may be used to determine whether a system should be considered high risk (see chapter 1 of the proposal for further information). Certain AI systems are deemed high risk by nature, including where the AI system is intended to be used as a safety component of a product. For others, the position is less clear cut.

The obligations attaching to high-risk practices fall on both providers and users of high-risk AI. For users, the obligations are less onerous and include using the systems in accordance with the provider’s instructions and implementing all technical and organisational measures stipulated by the provider to address the risks of using the high-risk AI system. Manufacturers of products covered by EU legislation are responsible for compliance as if they were the provider of the high-risk AI system, and distributors, importers, users and other third parties may also be caught if they place a high-risk AI system on the market or modify an AI system already on the market.

Who will the Regulation apply to and when?

The Regulation will not apply directly to the UK, which will therefore need to consider how it introduces its own AI regulation (the UK is expected to publish an update on this later this year). However, like the GDPR, the Regulation is extra-territorial in scope. It will therefore apply to:

  • providers placing on the market or putting into service AI systems in the EU, irrespective of whether those providers are established in the EU or not; and
  • providers or users of AI systems based in the EU or, if not based in the EU, where the “output produced by the system is used” in the EU.

There is some uncertainty over the second limb and further guidance is awaited.

The intention is for the Regulation to enter into force 20 days after its publication, followed by a 24-month grace period before most provisions apply (although some may apply sooner). The timeframe for implementation is currently uncertain, but an update on progress, as a minimum, is expected in 2022.

What might happen if I don’t comply?

The fines that may be imposed for non-compliance with the Regulation are on a par with, and in some instances (for example, engaging in a prohibited practice) in excess of, the fines businesses may face if they fail to comply with the GDPR. Below we set out a summary of the maximum fines that may be issued, depending on the nature of the infringement, followed by a short worked illustration:

  • Up to €30m or 6% of total worldwide annual turnover for the preceding financial year (whichever is higher) for engaging in prohibited practices or non-compliance with the requirements relating to data;
  • Up to €20m or 4% of total worldwide annual turnover for the preceding financial year (whichever is higher) for non-compliance with any of the other requirements or obligations of the Regulation; and
  • Up to €10m or 2% of total worldwide annual turnover for the preceding financial year (whichever is higher) for the supply of incorrect, incomplete or misleading information to notified bodies and national competent authorities in reply to a request.
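
To make the “whichever is higher” mechanics concrete, here is a minimal sketch in Python, assuming a hypothetical company with a worldwide annual turnover of €2bn (the turnover figure and the max_fine helper are illustrative only and do not appear in the Regulation):

    def max_fine(turnover_eur: float, fixed_cap_eur: float, turnover_pct: float) -> float:
        # The maximum fine is the higher of the fixed amount and the given
        # percentage of total worldwide annual turnover for the preceding
        # financial year.
        return max(fixed_cap_eur, turnover_pct * turnover_eur)

    turnover = 2_000_000_000  # hypothetical €2bn worldwide annual turnover

    print(max_fine(turnover, 30_000_000, 0.06))  # prohibited practices tier: €120m
    print(max_fine(turnover, 20_000_000, 0.04))  # other obligations tier: €80m
    print(max_fine(turnover, 10_000_000, 0.02))  # misleading information tier: €40m

For a business of this size, the turnover-based limb dominates each tier; for smaller businesses, the fixed caps of €30m, €20m and €10m would bite instead.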

It will be up to each Member State to appoint a national competent authority to oversee the implementation of the Regulation. As mentioned above, it remains to be seen what approach the UK will take. Interestingly, the Regulation does not replicate the “one stop shop” system under the GDPR, which could result in a lack of consistency across Member States. However, a European Artificial Intelligence Board will provide advice and assistance to the Commission in connection with the consistent application of the Regulation.

What do I need to do now (if anything)?

Regardless of whether your business is a provider or a user of AI, certain steps can be taken now to prepare for the Regulation coming into force. The Regulation will not have retrospective effect, so it will only apply to AI systems placed on the market from the date it comes into effect. However, once it applies, providers of high-risk AI systems will need to put in place a quality management system and maintain up-to-date logs with details of any high-risk AI systems. If any AI systems are in a development phase, it would be wise to continue developing them with the Regulation in mind, in particular the requirement to draw up documentation for a high-risk AI system before it is placed on the market or put into service.

Sector specific considerations

If a business operates in a particular industry or sector, there may be sector-specific guidance to follow. The financial services sector, for example, has come under the spotlight because certain financial products, such as those used to evaluate a person’s creditworthiness, and certain systems used to monitor and evaluate work performance and behaviour, are likely to be deemed high-risk. Those in the recruitment sector would also be wise to consider their use of application-screening tools to recruit staff and to make promotion decisions; the use of certain interview-related tests in the candidate hiring process may also be caught.

No matter what industry or sector you operate in, the wide-reaching scope of the Regulation means there will undoubtedly be relevant considerations for every business. Even if no immediate action is required, it is worth businesses educating themselves on the requirements of the Regulation in the context of both their current and future use, or provision, of AI systems.