Introduction
Whilst regulators and the UK government debate how best to regulate artificial intelligence (AI) to ensure that it delivers the best outcomes for consumers and markets, firms and their senior managers are already grappling with the internal governance and control frameworks needed to evidence compliance with existing regulatory requirements to regulators (particularly in light of recent regulator messaging around the need for robust governance in this area). In this briefing, we consider some of the emerging risks and challenges that continually evolving AI technology poses for regulated firms, as well as steps that firms can take now to ensure they are best placed to meet future regulatory expectations.
1. Risk management
As firms seek to keep pace with increasingly sophisticated technology and adopt AI tools themselves, a key challenge for many will be balancing greater efficiency (through more automation and machine learning, for example) with an appropriate degree of human judgment and control over those tools, such that firms can still demonstrate to the regulator that: (1) they understand and can explain the AI tools being adopted; and (2) human judgment is exercised to an adequate extent in their application and is not so limited that using these tools risks undermining the effectiveness of the firm’s governance.
As the regulatory landscape of AI continues to evolve, firms are likely to be required to consider and evidence how they address the following key risks:
- Limiting human oversight: Where technology is used for automated, autonomous decision-making, this can restrict or remove human engagement and control, which may in turn limit the firm’s ability to evidence sufficiently robust governance and oversight to the FCA or PRA, and heighten the risk of automation bias (the assumption that an automated process is better than human judgment).
- Regulatory enquiries/challenge: Firms should be sufficiently familiar with how the AI they use works to be able to respond to any queries from the regulator as to how it has been embedded, risk-assessed and monitored, such that any changes or self-taught developments of the AI can be explained.
- Calibration: The update on the FCA’s progress on reducing and preventing financial crime, published on 8 February, reiterated the FCA’s expectation that firms must “calibrate” how they use technology to their individual requirements, and “keep fine tuning” to respond to the changing threats and risks to the business. The FCA has previously criticised firms for not sufficiently testing or tailoring ‘off the shelf’ automated surveillance systems in the context of market abuse.
- Insufficient skills and resources: Lacking the requisite skills, expertise and resources (both financial and non-financial) to monitor advances in technology, and therefore to identify evolving or additional risks to the business that might need to be addressed through changes to systems and controls, will expose firms to a greater risk of exploitation by increasingly sophisticated AI being used to target consumers and firms. For instance, firms may need to consider whether their defences are sufficiently robust to detect increasingly sophisticated techniques used by bad actors, such as voice cloning to trick consumers into thinking they are speaking to a genuine employee of their bank.
- Senior management responsibility: The absence of clearly defined senior management responsibility for, and oversight of, the firm’s use of AI, together with a lack of monitoring of the evolving AI regulatory landscape, may result in ineffective challenge to the business and an inability to scrutinise AI models and the underlying data so as to promote good customer outcomes, for example. Whilst a lack of technical expertise may be mitigated by employing such experts within the business, the management team will still require a sufficient understanding to support individual risk and accountability in relation to the firm’s adoption of AI (in other words, the SMFs cannot leave this to the IT team to manage).
2. FCA Messaging: With great power comes great responsibility
The FCA’s 8 February update on its progress in tackling financial crime reiterated its warning to firms that criminals are using increasingly sophisticated technology to target consumers and that “cyber fraud, cyber attacks and identity fraud are increasing in scale, sophistication and impact as artificial intelligence becomes more widespread”.
In that context, the FCA highlighted how some firms are using behavioural biometrics to combat that threat and tackle APP fraud. By using behavioural biometrics tools to monitor customers’ internet and mobile banking activity and collate information on how, and how frequently, a customer uses their devices, these firms are building a detailed picture of what typical customer behaviour looks like, and therefore what unusual or suspicious patterns of behaviour might look like.
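By way of illustration only, the sketch below (in Python; the feature names, values and threshold are our own assumptions, not a description of any particular firm’s or vendor’s system) shows the kind of baseline-and-deviation logic that underpins behavioural biometrics: a profile of a customer’s typical session behaviour is built from historical data, and new sessions that deviate markedly from that profile are flagged for human review.

```python
from dataclasses import dataclass
from statistics import mean, stdev

# Hypothetical behavioural features captured per online banking session.
# Feature names, values and the threshold below are illustrative assumptions.
@dataclass
class Session:
    typing_speed_cpm: float       # characters per minute while typing
    session_length_secs: float    # how long the session lasted
    logins_past_24h: int          # how frequently the customer has logged in

FEATURES = ["typing_speed_cpm", "session_length_secs", "logins_past_24h"]

def build_profile(history: list[Session]) -> dict[str, tuple[float, float]]:
    """Summarise a customer's historical sessions as per-feature mean and spread."""
    profile = {}
    for feature in FEATURES:
        values = [float(getattr(s, feature)) for s in history]
        profile[feature] = (mean(values), stdev(values) if len(values) > 1 else 0.0)
    return profile

def anomaly_score(profile: dict, session: Session) -> float:
    """Average absolute z-score of a new session against the customer's profile."""
    scores = []
    for feature in FEATURES:
        mu, sigma = profile[feature]
        value = float(getattr(session, feature))
        scores.append(abs(value - mu) / sigma if sigma else 0.0)
    return sum(scores) / len(scores)

# Usage: flag sessions that deviate markedly from the customer's usual behaviour
# and route them to a human reviewer rather than blocking them automatically.
history = [Session(210, 320, 1), Session(190, 290, 2), Session(205, 350, 1)]
profile = build_profile(history)
new_session = Session(typing_speed_cpm=90, session_length_secs=45, logins_past_24h=9)
if anomaly_score(profile, new_session) > 3.0:   # threshold is an assumption
    print("Session flagged for human review")
```

The governance point is that the threshold and the routing to human review are parameters a firm would need to be able to explain, calibrate to its own customer base and keep under review.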
Similarly, the FCA’s 5 October 2023 speech suggested AI could help bridge the advice gap amongst everyday investors and ensure better-tailored communications and information for consumers, and applauded participants who developed AI techniques and tools to identify greenwashing in financial services.
However, firms need to ensure they have appropriate governance around the use of AI tools. In a speech in Q4 2023, the FCA commented that a key feature of AI is the “ability to process large volumes of data, and to detect and exploit patterns in those data” and that it is therefore necessary to consider the question of ethical data usage. When firms have the power to process data, they need to consider whether and how to use that power in a way that is helpful and not harmful; “responsible AI” depends on data quality, management, governance and accountability and ownership structures.
3. What could firms be doing now?
In order to get ahead of the regulatory agenda, mitigate the risks and put themselves in the best position to respond to regulatory enquiries and deal with any issues or complaints, firms and their senior managers may wish to consider the following:
- Senior management accountability: Appointing an appropriate individual to be accountable in respect of: (i) any new technology or AI used; and (ii) keeping abreast of new illicit uses of technology that might threaten the business and the emerging risks to the business as a result. Training and education may also be needed for the wider management team given the regulators’ expectations around challenge and informed decision-making. Robust record-keeping will be needed so that appropriate engagement by relevant stakeholders can be evidenced, including by way of meeting agendas, packs and minutes; training materials and attendance records; responsibility maps and role descriptions; and dissemination of policies and procedures. Basic governance hygiene can often be the first casualty in a fast-paced environment and when teams are stretched, but keeping it up is vital to being able to meet a regulatory challenge when it comes.
- Internal governance framework: To support accountability at senior management level, and to ensure that the appropriate senior manager (and the Board) receives quality MI on the business’s use of AI and on relevant regulatory developments, firms might want to consider establishing a central steering committee or other sub-committee, involving relevant stakeholders and with clear escalation procedures, to support the exchange of knowledge and expertise in relation both to AI risk to the firm and to its own use of AI, and to report to the Board. This could take the form of a separate forum/committee with its own terms of reference, or form part of an existing committee that addresses other data and technology issues, operational resilience and/or cyber risk, for example. Where there are opportunities for lessons to be learned, these should be identified and embedded in internal processes.
- Controls: Firms should review the effectiveness of the controls currently in place to mitigate increasingly sophisticated criminal and financial crime risks in particular; specifically, firms might want to consider the following:
- Testing: of any new technology being rolled out; whether a phased approach to implementation might be appropriate; how the basis for a ‘go live’ decision would be explained and evidenced and whether there is sufficient assurance in place from all relevant parties; continuous testing of existing technology and ensuring relevant staff understand how it is currently working;
- Avoid a ‘plug and play’ approach: Rather, calibrate and tailor the use of AI models to the firm’s individual requirements and continuously monitor and “fine tune” their effectiveness; avoid over-reliance on ‘off-the-shelf’ models;
- Update policies & procedures: Internal policies may require updating in light of the firm’s adoption of new technology, including those that may not appear directly related, such as data protection policies if the firm changes its approach to communication monitoring through the use of AI (as considered further below);
- Contingency planning: Consider whether improvements or enhancements could be made to existing crisis management processes to cater for potential AI failure (which can be implemented without reliance on AI technologies).
- Data protection: UK firms must comply with the DPA 2018 and the UK GDPR, which means that any monitoring and processing of data must be proportionate and necessary. Taking specialist advice may be appropriate in order to obtain comfort regarding compliance at the implementation stage, and to ensure that advice is updated as necessary as the technology develops.
- Adequate training and resources: As regulators determine what the AI regulatory framework will be, both in the UK and globally, staff in all jurisdictions may require tailored training as both AI and the regulatory landscape advance, including on any forthcoming regulatory and industry guidance, as well as on new technologies being implemented across business areas. Engaging with AI experts and/or fraud prevention experts, for example, may be necessary to upskill and train staff and senior management on the risks where there is a skills gap in internal resources. Those with responsibility for advising on, monitoring and challenging aspects of the business need to understand what the business is doing in order to provide adequate oversight and fulfil their roles, and so those in functions such as compliance, legal, risk and internal audit should be well versed in new developments.
- Notification and reporting obligations: In the wake of the increasing risks (as well as benefits) of emerging AI-based technology, firms will need to be alive to where a risk tips over into a reporting obligation to the FCA or PRA, for example where it might give rise to a breach of an FCA rule, have a significant impact on the firm’s reputation or result in serious detriment to a customer.
- Monitoring developments/announcements from Government and regulators: There is a steady stream of regulator communication relating to AI and in this evolving area it pays to keep abreast. For example:
- on 29 March 2023 the Government published a white paper detailing its plans for implementing a pro-innovation approach to AI regulation, which set out a new framework underpinned by five principles: (1) safety, security and robustness; (2) appropriate transparency and explainability; (3) fairness; (4) accountability and governance; (5) contestability and redress;
- on 22 April 2024 the FCA published its AI Update which included, at a high level, an outline of how the FCA’s existing regulatory framework maps to each of the white paper’s five principles. For instance, the FCA considers its Principles for Businesses, Threshold Conditions and the more granular SMCR and SYSC rules, and in particular the regulatory regime relating to operational resilience, to be relevant to firms’ safe, secure and robust use of AI systems. Although, at this stage, there is little by way of practical guidance on what steps firms should be taking to comply, in the context of continually evolving AI, the FCA’s plans for the next 12 months include:
- Continuing to further the FCA’s understanding of AI deployment in UK financial markets, including diagnostic work, re-running a third edition of the machine learning survey jointly with the Bank of England and collaborating with the PSR;
- Building on existing regulatory foundations such as those relating to operational resilience, outsourcing and critical third parties, and feeding into those regimes any lessons learned from the FCA’s understanding of AI deployment in the financial markets;
- Collaboration with other domestic regulators such as the Bank of England and PSR, as well as international partners and forums including IOSCO, the FSB, the OECD, and the G7;
- Testing for beneficial AI by delivering the pilot AI and Digital Hub alongside other DRCF member regulators, and continuing to run the Digital and Regulatory Sandboxes, both of which the regulator notes have already supported a range of AI applications;
- Using AI itself, including web scraping and social media tools, to detect and triage potential scam websites. The regulator plans to invest more in these technologies to proactively monitor markets, including for market surveillance purposes, and is exploring AI models to assist with this work;
- Looking towards the future by conducting research into emerging AI technological developments including in relation to deepfakes, and monitoring advancements in quantum computing.
- on 30 May 2024, the European Securities and Markets Authority (ESMA) issued a statement providing initial guidance for firms that use AI technologies when providing investment services to retail clients. Whilst the statement acknowledges that AI offers potential benefits to firms and clients, it warns of inherent risks, including algorithmic biases and data quality issues; opaque decision-making by firms’ staff; over-reliance on AI by both firms and clients; and privacy and security concerns.
Against this background, firms’ ability to monitor and review the effectiveness of the implementation of new technology (whether to combat the risks of AI or to enhance existing systems and controls), through adequate reporting and MI, will be critical to ensuring that senior management and the Board are kept up to date with the challenges faced by the firm and its exposure to regulatory risks in such a fast-moving area.
4. Key regulatory risks
Whilst the technology may be advancing at pace, the regulators will expect firms to deliver the same good outcomes for market participants and consumers irrespective of the technology involved. Failing to implement and stress test governance that appropriately challenges and oversees the development of AI and machine learning tools is likely to limit compliance and senior management visibility of which specific business areas may be at risk from, or over-reliant on, AI or machine learning at any one time. This might be compounded by a lack of reporting and escalation and/or record keeping, thereby resulting in poor quality management information.
Firms would be well-advised to understand the potential and likely consequences in the event of an AI failure, to be better equipped when something goes wrong with the AI itself or the firm’s use of it, and to consider whether it might be appropriate to have contingency plans in place to deal with such incidents. Depending on the scale of the incident, this might require crisis management policies to be updated, for example.
In light of the FCA’s drive towards more data-driven regulation, firms should be ready for its information gathering and investigation powers to be exercised to obtain potentially broader and more detailed data and information, for example on how firms are making use of algorithms, how an algorithm operates, what data is being fed into it, the parameters being used and how outputs are monitored, and to show that they are not simply adopting a ‘plug and play’ policy.
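To make that expectation concrete, the following is a minimal sketch (in Python; all names, fields and values are illustrative assumptions rather than a prescribed format) of the kind of record a firm might keep for each automated decision, capturing the model version, parameters, input data, output and any human reviewer, so that the information a regulator is likely to request can be retrieved and explained later.

```python
import json
from datetime import datetime, timezone

def log_decision(record_store: list, *, model_name: str, model_version: str,
                 parameters: dict, inputs: dict, output: dict,
                 reviewer: str | None = None) -> dict:
    """Append an audit record for a single automated decision.

    In practice, records would be written to durable, access-controlled storage
    with retention aligned to regulatory requirements; an in-memory list is used
    here purely for illustration.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_name": model_name,
        "model_version": model_version,
        "parameters": parameters,    # e.g. thresholds or weights in use at decision time
        "inputs": inputs,            # the data fed into the algorithm
        "output": output,            # the decision or score produced
        "human_reviewer": reviewer,  # populated where human oversight was applied
    }
    record_store.append(json.dumps(record))
    return record

# Usage: an illustrative transaction-screening decision (all values are invented).
audit_log: list[str] = []
log_decision(
    audit_log,
    model_name="txn_screening",
    model_version="2.3.1",
    parameters={"risk_threshold": 0.8},
    inputs={"amount": 4200, "new_payee": True},
    output={"score": 0.91, "action": "held_for_review"},
    reviewer="ops_analyst_17",
)
print(audit_log[0])
```

In practice such records would need to sit within the firm’s wider record-keeping, retention and data protection arrangements; the point of the sketch is simply that inputs, parameters and outputs are captured at the time of the decision rather than reconstructed afterwards.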
The FCA and PRA are acutely aware of the importance of governance and accountability and will not refrain from intervention or enforcement action where there have been failures in a firm’s systems and controls and/or poor customer treatment, for example. Any investigation could also extend to senior managers, including where the regulator takes the view that a relevant individual is ‘knowingly concerned’ in a firm’s failure and/or failed to take reasonable steps to prevent a breach by the firm. Regulators are encouraging firms to keep pace with new technology, whilst being alive to the risks and the increasingly sophisticated ways in which bad actors can use it. Accordingly, all algorithms, AI systems, machine learning techniques and other such tools must be able to withstand challenge if scrutinised, and firms and senior managers should be ready to explain why those tools work in a fair, effective and proportionate way, and how they are monitored such that they enhance, rather than hinder, good consumer and market outcomes.
In the event that, despite all best efforts on the governance front, an issue arises and/or a regulator makes an enquiry, developing a co-ordinated response strategy will be key to optimising the outcome, particularly given the potential for cross-border implications and the need to respond at pace in often fast-moving crisis situations. Putting in place a pre-planned crisis-management governance structure and response plan that can be implemented when required will assist in the effective handling of such situations and facilitate decision-making.