A guide to buying Artificial Intelligence (AI) tools for your business

Your business will no doubt be considering many of the AI tools available on the market to (1) help run your business smarter, faster and more cost-effectively, and/or (2) transform how you deliver services to customers.

Before you sign on the dotted line with your preferred AI vendor, there are a number of considerations you will need to work through to mitigate your legal and reputational risk. To get you started, we’ve prepared this AI procurement guide, which you can use to steer your conversations with your internal business leaders, lawyers and the AI vendor.

"AI is an enabling layer that can be used to improve everything and will be in everything" - Jeff Bezos

AI is clearly here to stay and most, if not all, businesses are looking at how to integrate AI into their business. It is likely that if you have not already, you will be thinking about contracting with an AI vendor in the short to medium term.

Whether your business is at the start of or far into its AI journey, if you would like to talk about any of the points in this guide or would like to discuss your AI and technology procurement plans more broadly, please get in touch.


Key considerations for you, the AI customer

Think carefully about your objectives and consider AI’s current limitations e.g. it may produce inaccurate or misleading answers.

    • Will the AI system be used to generate or manipulate images, audio or video content as “deep fakes”? Deployers of AI systems who are subject to the EU AI Act must disclose that the content has been artificially generated or manipulated.
    • Will you be using the AI system for internal business purposes only? Or will the AI system and/or its outputs be used or interacted with externally, e.g. by your customers and/or website users? Under the EU AI Act, AI systems that interact with humans need to be designed and developed so that individuals interacting with them are informed that they are interacting with AI (unless this is obvious to a reasonably well-informed individual).

You’ll need to ensure that your licence is broad enough to include the entities that require use of the AI system.

Your use may be subject to AI regulations in different jurisdictions.

It’s important to have a clear understanding of how the AI system works and to feel confident in explaining how it’s used.

Under the EU AI Act, providers and deployers of limited risk AI systems must comply with specific transparency requirements. The exact obligations depend on the nature of the AI system, but they all envisage the provision of specific information to individuals in a clear and distinguishable manner no later than their first interaction with, or exposure to, the AI system concerned.

Under the EU AI Act, deployers of all AI systems must ensure a sufficient and appropriate level of AI literacy among staff using AI systems on their behalf. That is, they must ensure that staff possess the skills, knowledge and understanding of the AI systems to make informed use of them, and to be aware of both the opportunities and the risks of potential harm.

Having appropriate insurance coverage in place is critical to protect the business against risks related to the AI vendor’s breach, as well as data protection and cybersecurity incidents.

Some vendors will use AI systems for back-office tasks such as payroll functions and data analysis. If you object to this, you could include a restriction in your agreement with the vendor.

If you agree to this, you’ll need to ensure that the use of AI systems in this way is strictly limited to these purposes and that the AI vendor has appropriate safeguards in place.

Key questions to ask your AI vendor

    • Does it integrate a foundation model or General-Purpose AI (GPAI) model? Some AI vendors offer AI applications that are built on top of underlying GPAI models such as LLMs (e.g. OpenAI’s GPT-3 and GPT-4, foundation models that underpin the conversational chat agent ChatGPT). Providers of GPAI models are subject to specific requirements under the EU AI Act, including providing AI system providers who intend to integrate the GPAI model into their AI systems with the information they require to have a good understanding of the capabilities and limitations of the GPAI model.
    • Does the AI system incorporate any other third-party AI systems? If the AI system incorporates third-party elements, the business may be expected to comply with the terms of use of these third parties and the agreement with your AI vendor should make it clear where responsibility for this compliance lies.

Under the EU AI Act, this would be classed as a limited risk AI system: deployers must inform individuals exposed to the operation of the AI system that they are subject to its use, and must adhere to data protection law, with a limited exception for law enforcement.

Automated decision-making and/or profiling can be very useful for organisations but they also carry risks of bias and discrimination and need to be assessed very carefully.

AI is not infallible. Human oversight is a helpful way to mitigate risk and achieve quality and accuracy of outputs.

AI systems are not static and require an ongoing approach to risk and harms mitigation. This is a key theme underlying the Government’s Model for Responsible Innovation.

 

Your IPR is valuable and you must process personal data in accordance with your obligations under the UK GDPR. Ideally, you would not allow your IPR or personal data to be used for training or for inputting into an AI system unless the vendor provides you with robust warranties and indemnities against IPR infringement and/or data protection breaches.

 

The business’ data should be siloed and inaccessible to other users of the AI system outside your organisation, to avoid personal data and confidentiality breaches as well as potential IPR infringement.

Depending on the AI provider, some data may be processed or hosted outside the UK/EEA. International data transfers must be subject to appropriate safeguards and measures to protect your personal data.

Ownership of the AI system’s outputs will depend on the vendor agreement; typically, the business will own the outputs unless otherwise specified.

The vendor should provide sufficiently robust warranties and indemnities in respect of the security of the AI system. Data breaches wreak havoc on organisations, operationally, financially and reputationally.

Using fake information generated by an AI system could cause reputational damage and is potentially negligent. The recent case of R (Ayinde) v London Borough of Haringey, which concerned the actual or suspected use by lawyers of generative AI tools to produce legal arguments and witness statements containing false information, offers a stark reminder of the danger of overreliance on AI systems.

Responsible Artificial Intelligence (RAI) is an approach to developing and deploying AI systems in a safe, trustworthy and ethical way. Opting for vendors that embrace RAI could save your business from unexpected costs and potential embarrassment.