AI Principles
Pedro Machado outlines ECB expectations for banks' use of AI
Amid all the external hype about how AI could transform our world, the ECB has taken its time deciding what it thinks about the implications of the new technology for the banking system.
As I wrote last June, by that point the ECB had made only very high-level comments about the opportunities and risks of AI. This summer, the ECB then issued guidance on the narrow topic of the use of machine learning in banks’ internal models. But we have had no detailed statement of policy on the use of AI more broadly.
So it was significant - and welcome - that ECB Supervisory Board member Pedro Machado set out some general principles for the use of AI in an important speech last week. The speech revealed an emerging set of supervisory expectations for how banks could benefit from the technology, and the risks they will have to manage.
Self-Experimentation
For several years now the ECB has been talking up its efforts to use the latest digital technology in its supervisory work. In particular, ECB leaders have emphasised supervisors’ adoption of advanced analytical and process-management tools to spot the main risks facing each bank and streamline supervisory analysis and decision-making. (See, for example, this speech by ‘robo-supervisor’ Elizabeth McCaul.) Public sector IT projects are always easy to mock - and privately banks do grumble about continuing mistakes, glitchy interfaces or long timelines for decisions. Nonetheless the ECB has taken this topic seriously and invested significantly in modernising its information infrastructure to ensure supervisors can collaborate effectively using consistent data. By developing and deploying new tools, the ECB has also given itself the opportunity to build up IT skills among supervisors and to understand better the technology being used by banks.
The latest stage in this process has been experimenting with AI tools. In his speech, Machado described some now in use across the Single Supervisory Mechanism (SSM - i.e. ECB banking supervision plus national supervisors). These include the Delphi risk scanner and the Medusa analysis tool. Further AI supervision aids are in development, Machado said.
Machado discussed the ECB’s AI deployment not only to burnish supervisors’ high-tech image or establish the ECB’s credentials to participate in the AI debate (important though it is to demonstrate that supervisors are not automatically suspicious of or opposed to technological innovation). He also drew out some broad reflections from the ECB’s experience on the potential benefits and risks of AI.
Benefits
The ECB’s main gain from using AI, Machado said, was in the depth and quality of analysis it enabled. Supervisors use AI tools to scan a wider range of data than is possible unaided and spot patterns that a human alone might miss. AI analysis does not replace supervisory judgement - but it provides a more complete, more consistent view. Meanwhile, by automating routine processes such as the assembly of data, AI frees up time for supervisors to concentrate on analysis and forming conclusions.
Machado argued that AI tools thus support ‘human-centred supervision’ - empowering ECB staff so they can more effectively oversee a financial system that is becoming ever more complex. AI does not, Machado emphasised (and repeated in a recent interview), replace human judgement or staff.
That may make sense for the ECB - but I couldn’t help thinking that Machado’s emphasis on not replacing staff is a little at odds with the ECB’s wider message that banks should use digitalisation to cut costs (e.g. automate processes to reduce staff costs) and improve efficiency and profitability. No doubt this is informed by the very different objectives of a central bank (and perhaps the currently very tense state of employment relations at the ECB), but I wondered if this is an area where Machado may not want commercial banks to follow the ECB’s example in how to make the best use of AI.
Risks
On the risk side, Machado highlighted five main risks the ECB has identified in its experience with AI so far:
Hallucination and inaccuracy: AI tools producing ‘confident but wrong’ outputs;
Skill erosion: Staff becoming so used to trusting AI recommendations that they lose the ability to scrutinise and challenge them;
Opacity: AI reasoning being opaque, making it impossible to explain how a recommendation or decision was reached;
Cyber insecurity: AI techniques making cyber attacks more potent;
AI talking to AI: Human staff not seeing key information because communication between banks and supervisors is increasingly automated.
(I was intrigued that Machado did not include risks of bias or discrimination in his list, despite these being widely discussed in the AI debate, and a key concern of the EU AI Act. Perhaps these are simply not relevant to the specific set of AI tools the ECB is using? Presumably those don’t include AI-powered HR systems for recruitment or performance management.)
Machado briefly described how the ECB is managing each of these risks: by ensuring AI systems base results on authoritative (and checkable) sources and make their reasoning clear; by checking underlying data; by investing in skills; and by upgrading cyber defences. More broadly, the ECB has established a governance framework for AI, including risk assessments for all tools it develops and deploys. I took this as a strong hint that banks should learn from the ECB’s example.
Emerging Expectations
As of now the ECB has not published an AI Guide setting out its expectations for banks. But privately ECB officials say it is only a matter of time before supervisors require banks to put in place an explicit (written) AI Policy, setting out the governance arrangements and risk controls for all the AI systems a bank chooses to employ.
Machado’s comments provide an early indication of what the ECB will expect an AI governance and risk management framework to include:
Humans must remain in the loop - and ultimately responsible for all decisions;
AI models should be accurate and trained on appropriate data;
AI systems’ workings should be sufficiently transparent, and employees sufficiently skilled, that the rationale for all AI recommendations can be understood and explained;
Robust cyber security measures must be in place for all systems; and
Banks should have effective mechanisms to ensure compliance with the requirements of the EU AI Act and all other relevant legislation.
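To make the shape of such a framework concrete, one way a bank could operationalise these expectations is as a per-use-case checklist in its AI inventory. The sketch below is purely illustrative - the field names and check labels are my own shorthand for Machado's five points, not ECB terminology or a prescribed structure:

```python
from dataclasses import dataclass

# Hypothetical record for one AI use-case in a bank's inventory.
# Field names are illustrative shorthand, not ECB or EU AI Act terms.
@dataclass
class AIUseCase:
    name: str
    human_in_the_loop: bool        # a named human remains responsible for decisions
    training_data_reviewed: bool   # model trained on appropriate, checked data
    explainable: bool              # rationale for outputs can be understood and explained
    cyber_controls_in_place: bool  # system covered by robust security measures
    ai_act_assessed: bool          # mapped against EU AI Act (and other) obligations

def policy_gaps(uc: AIUseCase) -> list[str]:
    """Return which of the five expectations the use-case currently fails."""
    checks = {
        "human oversight": uc.human_in_the_loop,
        "data quality": uc.training_data_reviewed,
        "explainability": uc.explainable,
        "cyber security": uc.cyber_controls_in_place,
        "EU AI Act compliance": uc.ai_act_assessed,
    }
    return [label for label, ok in checks.items() if not ok]

chatbot = AIUseCase(
    name="customer service chatbot",
    human_in_the_loop=True,
    training_data_reviewed=True,
    explainable=False,
    cyber_controls_in_place=True,
    ai_act_assessed=False,
)
print(policy_gaps(chatbot))  # → ['explainability', 'EU AI Act compliance']
```

A real framework would of course attach evidence, owners and review dates to each check rather than booleans, but even this skeletal form shows why the exercise cuts across risk, compliance, legal and IT functions.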
AI Control Framework
Setting up a comprehensive AI governance and risk management framework will be a major task, involving several functions and units within a bank, including frontline businesses, risk management, compliance, legal and HR as well as IT (this is not a job to be left to IT departments alone).
As for what precise structures and rules banks put in place, the ‘right answer’ will likely not be the same for every bank. The risks - and hence the appropriate controls - for one AI use-case (e.g. a credit-scoring system) will be different from those for another (e.g. a customer-service chatbot). Differences in business and operating models may also lead to banks’ adopting different arrangements. So I hope the ECB will resist the urge to be too prescriptive when it does start to develop its expectations and/or best-practices for banks’ use of AI.
Nonetheless, the general principles Machado set out in his speech are broadly applicable. And while formal supervisory guidance may still be some way off, the ECB will no doubt assess banks’ AI control frameworks as part of its regular scrutiny of internal governance and risk management under the annual Supervisory Review and Evaluation Process (SREP).
So banks would do well to study Machado’s comments closely, and check that they are adequately managing the risks of a potentially transformative technology.

