What does the ECB think about AI?
The ECB has only talked in general terms about banks’ use of artificial intelligence. That may soon have to change
Rise of the Machines
The growth of artificial intelligence (AI) is without question one of the most significant trends of our time. Hardly a week goes by without a new prediction of how AI will transform society - or a warning of the dangers the technology could pose. This has driven a global debate on AI safety and regulation - to which Europe's answer has been the EU AI Act, passed into law this summer.
Banking and finance have been a key part of this debate. Across the world, financial firms are exploring how AI could help them improve services, increase efficiency and cut costs. Meanwhile potential risks - from biased credit scoring models entrenching economic inequality to hallucinating chatbots offering ruinous investment advice - have been widely discussed. Regulators are now considering how to ensure that the benefits of the new technology can be harnessed while its risks are appropriately managed (see, for example, the recent ESMA guidelines on AI in investment services and last week's European Commission consultation on AI in finance).
The ECB view?
In the midst of this debate, the ECB has been remarkably quiet. It has issued no formal guidelines on the use of AI by the banks under its supervision. And when ECB leaders have discussed AI in public, they have done so only at a very general level. In a March interview, for example, Supervisory Board Chair Claudia Buch said banks should not "blindly" follow AI models' recommendations when making decisions. In a similar vein, Supervisory Board member Elizabeth McCaul emphasised last June that AI "can only augment human judgement, not replace it." Few would argue with these as general principles. But they are a long way from a detailed picture of what the ECB considers acceptable in practice.
(Interestingly, the ECB has said much more about how it intends to use AI itself. Last month, announcing reforms to the Supervisory Review and Evaluation Process (SREP), Buch said the ECB “will explore how generative artificial intelligence and large language models can support supervisors”. Similarly, in a speech in February, McCaul described a suite of AI-powered supervisory tools the ECB is using or developing.)
For now, the ECB may see little need to go further. After all, AI deployment by banks is still in its infancy, and the AI Act is not yet in force. So the ECB may feel it is too soon for detailed policy pronouncements. Given how fast the technology is developing, however, there is a limit to how long the ECB can wait and see. Here are some thoughts on a few areas the ECB will - sooner or later - need to address.
Model scrutiny
Credit scoring models are one of the most widely discussed potential use cases for AI in banking. The AI Act classifies "AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score" as high-risk, subjecting them to strict rules on risk management, accuracy and human oversight. The ECB will not be responsible for policing banks' compliance with these rules: the AI Act's primary concern here is preventing discriminatory lending - more a conduct issue than a prudential one (and credit models used for prudential purposes, such as calculating capital requirements, are excluded from the high-risk definition).
That said, scrutinising banks' credit models - to ensure they accurately assess the risks banks face - is a key task of the ECB. So even though it is not the 'AI supervisor', the ECB will need to consider how the use of AI in credit modelling affects banks' safety and soundness - and how it should adapt its current expectations of model robustness to a world of AI. Key questions will include:
what controls are needed on the training data used by AI models?
how can risk managers and supervisors effectively validate super-complex self-designing AI models? and
how should rules governing model changes apply to AIs that continuously learn?
The answers to these and similar questions could amount to a fundamental redesign of how bank models are supervised.
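The third question, in particular, cuts against the grain of existing rules: supervisory model-change frameworks treat a change as a discrete event to be assessed (and, if material, approved) before deployment, whereas a continuously learning model is, in effect, changing all the time. A minimal sketch - in Python, using scikit-learn's online learner as a stand-in for a bank's credit model, with an entirely hypothetical drift threshold and no claim to reflect ECB policy - illustrates the mismatch:

```python
# A toy continuously-learning credit model. Illustrative only: the drift
# metric and threshold are invented for this sketch, not supervisory rules.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

def monthly_batch(n=500, shift=0.0):
    """Synthetic borrower features and default labels; 'shift' moves the
    data distribution a little each month."""
    X = rng.normal(loc=shift, size=(n, 4))
    logits = X @ np.array([0.8, -1.2, 0.5, 0.3])
    y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)
    return X, y

# Initial model: think of this as the version the supervisor approved.
model = SGDClassifier(loss="log_loss", random_state=0)
X, y = monthly_batch()
model.partial_fit(X, y, classes=np.array([0, 1]))
approved_coef = model.coef_.copy()

DRIFT_LIMIT = 0.5  # hypothetical 'materiality' threshold

for month in range(1, 13):
    X, y = monthly_batch(shift=0.05 * month)
    model.partial_fit(X, y)  # the model quietly re-learns every month
    drift = float(np.linalg.norm(model.coef_ - approved_coef))
    if drift > DRIFT_LIMIT:
        # No single 'model change' was ever released - yet the model now
        # differs materially from the one that was approved.
        print(f"Month {month}: drift {drift:.2f} exceeds limit")
```

In this toy setup no single 'model change' is ever released, yet after enough monthly updates the live model differs materially from the version that was approved - exactly the situation that today's point-in-time change rules struggle to capture.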
Regulatory cooperation
The AI Act gives responsibility for monitoring compliance with its requirements to national authorities. For AI in financial services, EU governments can assign this oversight role either to existing financial regulators or to newly designated AI authorities. This means that in supervising banks' use of AI, the ECB will have to work alongside multiple national partners - some familiar, others not. Banks will fear facing inconsistent, even contradictory, sets of expectations from these various authorities.
For the ECB, an important question will be how it approaches cooperation with these different agencies (some of which may have only limited experience of financial supervision). The ECB has substantial experience of working with other authorities - both within the Single Supervisory Mechanism (SSM) for banks and beyond (e.g. EU and national market regulators). The more the ECB can successfully collaborate with its new AI partners, the higher the chance that a single, coherent set of expectations for AI in European banks can be forged.
AI governance
Internal governance is one of the four pillars of the SREP and a long-standing supervisory priority for the ECB. Governance is also at the heart of many of the AI Act's requirements, such as those on risk management, transparency and human oversight. So how will the ECB integrate AI considerations into its expectations for bank governance? Will it include AI-specific requirements in its new Guide to Governance and Risk Culture, due later this year? (Will this be where the ECB spells out what exercising human judgement - rather than blindly following AI tools - means in practice?) More broadly, will the ECB treat breaches of AI Act requirements as indicative of poor governance overall, analogous to its treatment of anti-money laundering breaches?
Related is the question of AI expertise. The AI Act (Article 4) requires AI deployers to ensure sufficient AI literacy among relevant staff. Meanwhile the ECB has recently updated its policy on bank board expertise to require at least one non-executive member of a bank's management body to have expertise in IT and cyber security. Will the ECB issue a similar requirement for AI expertise on bank boards?
Setting expectations
Responding to innovation is never easy for regulators. There is always a tension between policymakers' understandable wish to see how a new technology (or business model) plays out before making rules, and businesses' equally understandable desire for swift clarity over the rules they must comply with.
AI is no exception. So as more banks adopt AI technology in more areas of their business, the ECB will face increasing pressure to move beyond generalities and issue more specific guidelines. This will be a complex process, covering many areas of supervisory activity. (And it may be further complicated by the fact that Elizabeth McCaul, the Board member who has led on digital issues, will leave the ECB later this year; the ECB has not yet disclosed who will take over the digital portfolio after she departs.) Banks will be hoping that the ECB rises successfully to the AI challenge.