
A guide to buying Artificial Intelligence (AI) tools for your business
Your business will no doubt be considering many of the AI tools available on the market to (1) help run your business smarter, faster and more cost-effectively and/or (2) transform how you deliver services to customers.
Before you sign on the dotted line with your preferred AI vendor, there are a number of considerations you will need to work through to mitigate your legal and reputational risk. To get you started, we’ve prepared this AI procurement guide, which you can use to steer your conversations with your internal business leaders, lawyers and the AI vendor.

"AI is an enabling layer that can be used to improve everything and will be in everything" - Jeff Bezos
AI is clearly here to stay and most, if not all, businesses are looking at how to integrate AI into their business. It is likely that if you have not already, you will be thinking about contracting with an AI vendor in the short to medium term.
Whether your business is at the start of or far into its AI journey, if you would like to talk about any of the points in this guide or would like to discuss your AI and technology procurement plans more broadly, please get in touch.
Key considerations for you, the AI customer
Think carefully about your objectives and consider AI’s current limitations e.g. it may produce inaccurate or misleading answers.
- Will the AI system be used to generate or manipulate images, audio or video content as “deep fakes”? Deployers of AI systems who are subject to the EU AI Act must disclose that the content has been artificially generated or manipulated.
- Will you be using the AI system for internal business purposes only? Or will the AI system and/or its outputs be used or interacted with externally e.g. by your customers and/or website users? Under the EU AI Act, AI systems that interact with humans need to be designed and developed so that individuals interacting with it are informed that they are interacting with AI (unless obvious to a reasonably well-informed individual).
You’ll need to ensure that your licence is broad enough to include the entities that require use of the AI system.
Your use may be subject to AI regulations in different jurisdictions.
It’s important to have a clear understanding of how the AI system works and to feel confident in explaining how it’s used.
Under the EU AI Act, providers and deployers of limited risk AI systems must comply with specific transparency requirements. The exact obligations depend on the nature of the AI system, but they all envisage the provision of specific information to individuals in a clear and distinguishable manner no later than the individual's first interaction with, or exposure to, the AI system concerned.
Under the EU AI Act, deployers of all AI systems must ensure a sufficient and appropriate level of AI literacy of their staff using AI systems on their behalf i.e. they must ensure that staff possess the skills, knowledge and understanding of the AI systems to make informed use of them and to be aware of both the opportunities and the risks of potential harm.
Having appropriate insurance coverage in place is critical to protect the business against risks related to the AI vendor’s breach, as well as data protection and cybersecurity incidents.
Some vendors will use AI systems for back-office tasks such as payroll functions and data analysis. If you object to this, you could include a restriction in your agreement with the vendor.
If you agree to this, you’ll need to ensure that the use of AI systems in this way is strictly limited to these purposes and that the AI vendor has appropriate safeguards in place.
Key considerations for your legal team
Does the EU AI Act apply to the business' use of the AI system? If so:
- Is the AI system an “AI System” within the meaning of the EU AI Act? The EU AI Act defines “AI System” as a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. This would include everything from a simple chatbot to a sophisticated image-generator like Midjourney.
- What role does the business play (if any) under the EU AI Act? Would it be classed as a provider, deployer, importer, distributor or authorised representative? Each of these “operators” has different obligations under the EU AI Act.
- What is the level of risk involved in the AI system under the EU AI Act? Would it constitute a prohibited AI system, high-risk AI system, limited risk AI system or a general-purpose AI model (“GPAI”) (i.e. a model that can serve many intended purposes, perform several different tasks and be integrated into a variety of downstream systems or applications)? Each risk category carries its own key compliance obligations.
Adopting the EU AI Act’s governance principles can help futureproof your business, demonstrate accountability, and build trust – even if the Act doesn’t directly apply.
For example, the business may choose to conduct an AI impact assessment to assess and manage the risks posed by the AI system to the business and its people.
For example, informing employees and/or customers as to when their personal data may be used to train, test or deploy an AI system.
It’s likely that the business will want to own the IPR in any outputs and if so, the agreement will need to contain a valid assignment.
While market practice is still developing, many AI vendors consider that as between the vendor and the customer, the customer owns the IPR in the AI outputs. This is not a given though, so the IPR provisions will need to be carefully reviewed and potentially negotiated.
Including:
- A warranty that the AI vendor and AI system comply with all applicable laws, including but not limited to the EU AI Act (where applicable).
- A warranty that use of the AI system and its outputs by the business will not infringe a third party’s IPR and an indemnity against any third-party claims that may arise from such use. Using proprietary material to train and/or develop an AI system could lead to potential copyright, database right and/or trademark infringement claims.
- A warranty that no confidential information of the business will be input into the AI system and an indemnity against breach of the vendor’s confidentiality obligations.
- If the EU AI Act applies, a warranty that staff using the AI system have a sufficient and appropriate level of AI literacy.
If the AI vendor cannot be expected to provide these warranties, it should not agree to do so.
Assuming the vendor limits its liability, the cap should not be so low as to fail to adequately protect the business against the risks of the vendor’s breach.
Key questions to ask your AI vendor
- Does it integrate a foundation model or general-purpose AI (GPAI) model? Some AI vendors offer AI applications built on top of underlying GPAI models such as LLMs (e.g. OpenAI’s GPT-3 and GPT-4, the foundation models that underpin the conversational chat agent ChatGPT). Providers of GPAI models are subject to specific requirements under the EU AI Act, including providing AI system providers who intend to integrate the GPAI model into their AI systems with the information they need to gain a good understanding of the capabilities and limitations of the GPAI model.
- Does the AI system incorporate any other third-party AI systems? If the AI system incorporates third-party elements, the business may be expected to comply with the terms of use of these third parties and the agreement with your AI vendor should make it clear where responsibility for this compliance lies.
Under the EU AI Act, this would be classed as a limited risk AI system; deployers must inform individuals exposed to the operation of the AI system that they are subject to its use, and must adhere to data protection law (with a limited exception for law enforcement).
Automated decision-making and/or profiling can be very useful for organisations but they also carry risks of bias and discrimination and need to be assessed very carefully.
AI is not infallible. Human oversight is a helpful way to mitigate risk and achieve quality and accuracy of outputs.
AI systems are not static and require an ongoing approach to risk and harms mitigation. This is a key theme underlying the UK Government’s Model for Responsible Innovation.
Your IPR is valuable and you must process personal data in accordance with your obligations under the UK GDPR. Ideally, you would not allow your IPR or personal data to be used for training or for inputting into an AI system unless the vendor provides you with robust warranties and indemnities against IPR infringement and/or data protection breaches.
The business’ data should be siloed and inaccessible to other users of the AI system outside your organisation, to avoid personal data and confidentiality breaches as well as potential IPR infringement.
Depending on the AI provider, some data may be processed or hosted outside the UK/EEA. International data transfers must be subject to appropriate safeguards and measures to protect your personal data.
Ownership of the AI system’s outputs will depend on the vendor agreement but, as explored above, the customer will often own the outputs; this should be checked in the agreement rather than assumed.
The vendor should provide sufficiently robust warranties and indemnities in respect of the security of the AI system. Data breaches wreak havoc on organisations, operationally, financially and reputationally.
Using fake information generated by an AI system could cause reputational damage and is potentially negligent. The recent case of R (Ayinde) v London Borough of Haringey, which concerned actual or suspected use by lawyers of generative AI tools to produce legal arguments and witness statements containing false information, offers a stark reminder of the danger of overreliance on AI systems.
Responsible Artificial Intelligence (RAI) is an approach to developing and deploying AI systems in a safe, trustworthy and ethical way. Opting for vendors that embrace RAI could save your business from unexpected costs and potential embarrassment.