What the FCA wants to see from advisers using AI

Recent speeches and discussion papers have set out the Financial Conduct Authority's plans to rely on the existing regulatory regime to oversee the use of technology in financial services.

It is focusing on artificial intelligence (AI) because the financial services sector is investing heavily in it, and because AI has the potential to harm consumers if not deployed responsibly.

At a firm level, advisers could harness AI's predictive capabilities to make or inform decisions that affect clients.

This could make advisers more efficient and improve how they serve and interact with clients. AI can streamline tasks such as collating and summarising research, giving advisers an edge in a competitive market.

But if AI systems are trained on incomplete data, or on data that reflects historic biases, their outputs can be unfair, wrong or discriminatory. For example, there has been widespread concern within the financial services sector that algorithms used to automate credit-risk assessments, streamline know-your-customer (KYC) checks, or predict consumer needs or suitability could, without careful scrutiny, produce unfair outcomes.

The alternative isn't perfect: human decision-making is also flawed. But AI's ability to work at pace and scale, combined with the fact that use cases often involve removing humans from processes, increases the potential risks. AI products can also be complex proprietary systems owned by third parties, and so may not be transparent or explainable enough for regulated firms to demonstrate regulatory compliance.

So, how can advisers innovate responsibly with AI?

Senior manager engagement and governance

Though precise details have not been published, the FCA has said the Senior Managers and Certification Regime will provide overall accountability for the impact of AI systems.

The FCA has consulted on whether there should be a prescribed responsibility for AI to be allocated to a senior management function (SMF), whether it should be added to statements of responsibilities for existing SMFs, or whether specific guidance is needed to help senior managers understand how their reasonable steps obligation extends to AI.

Whatever is decided, we anticipate any guidance the FCA produces will stress the importance of senior manager accountability for the governance and oversight of AI, as well as the need for adequate AI skills and experience within the firm and its senior team.

Evaluate impact on customers

The FCA wants AI to be applied fairly across all customer groups, and for firms to ensure that vulnerable customers and those who are less digitally literate are not disadvantaged, either directly by a system's decisions or indirectly by being unable to use the system.

Firms will need to work closely with suppliers to understand how products work, what data they collect and how they process this data – all in the context of Consumer Duty obligations. They will need to evaluate the performance and fairness of AI systems and scrutinise supplier claims. This may require independent external technical expertise. The FCA has highlighted its regulatory sandbox as an environment where innovative AI use cases can be tested.

Supplier due diligence

In the eyes of the FCA, firms will be responsible for the impacts of AI products even if these are provided by third parties (Consumer Duty again!). Supplier contracts must set out roles and responsibilities and give the assurances and information firms need to fulfil their regulatory obligations and to audit the AI system on an ongoing basis. Standard terms may need to be negotiated for the UK regulatory context, particularly when they are produced by overseas suppliers.

Collaborative approach to procurement and risk management

Evaluating the risks of AI involves engagement across regulatory domains and business functions. The key stakeholders will vary depending on the AI use case. Use cases that influence services provided to clients will be higher risk than those that are purely back-office.

Data protection, equality and human rights laws will be relevant, as will copyright and intellectual property considerations relating to ownership of the data used by the AI system (and potentially ownership of the system's outputs).

Financial risk management teams will need to be heavily involved if AI systems use or develop financial models. Firms will need a collaborative approach within the business and should engage external advisers where relevant.

Communications and change management

Most harms caused by AI systems result from decisions made by humans either in the design or implementation of the system. Beyond the initial procurement process, firms will need a clear plan to train staff to use any new system, assess whether it is meeting its objectives, and flag concerns.

Firms implementing AI will also need to communicate clearly with clients (and, importantly, data protection legislation imposes specific requirements if firms use automated decision-making in respect of client data).

AI offers huge opportunities for advisers, but advisers remain responsible for the outputs of AI systems, particularly when these impact the services they provide to clients. To stay on the right side of the regulator, advisers must evaluate AI products on an ongoing basis to protect clients, ensure their governance structures are robust, and communicate clearly to clients the impacts of AI tools.
