Faced with the rise of artificial intelligence, financial players must adapt their governance to reconcile process automation with human intervention. Legal column by Marie-Aude Noury, lawyer at the Paris Bar, partner at Squair A.A.R.P.I.
The use of artificial intelligence in the financial sector is now both widespread and growing. It offers regulated players numerous opportunities to increase efficiency, particularly in processing very large volumes of data, performing repetitive tasks and even identifying discrepancies in documentation or anomalies in behaviour. This was echoed in the recent AMF-AEFR discussions held in autumn 2025[1]. The subject has naturally been embraced by regulators[2] as well as by the financial community[3].
The multiplicity of AI use cases among financial players, regulatory challenges and new risks
Numerous opportunities have been identified across the financial sector for all players (banks, investment firms, insurance companies, asset managers), whether in lending, risk management, algorithmic trading, robo-advisory, customer relations or regulatory compliance.
Faced with these use cases, regulated players are naturally required to comply with existing regulations, which were initially designed with human intervention in mind.
They must also comply with a complex, multifaceted and evolving body of rules, including the regulation on artificial intelligence (the AI Act), DORA and the GDPR.
The growing use of AI by financial players reduces the risks associated with human error but exposes them to new, AI-specific risks (technological risks, data quality, bias, hallucinations, heightened vulnerabilities carrying systemic risk, cyber attacks, etc.).
In this context, ESMA reiterated that management bodies remain responsible for decisions, whether these are made by individuals or AI-based tools, and emphasised the importance of human judgement[4].
Human intervention
The line between using AI to support human decision-making and using AI as a decision-maker can be blurred. Purely automated decision-making raises several questions and difficulties[5].
At present, investment funds that formally promote the use of AI in the investment process appear to be in the minority. Instead, AI is mainly used to support human investment decisions and to improve the productivity of certain activities such as risk management, compliance and administrative tasks[6].
ESMA emphasises the need for management to understand the AI models deployed and the need for human oversight mechanisms to monitor the results produced by AI[7].
Towards revised governance
Financial players are encouraged to integrate AI into their overall governance framework[8], carry out regular model testing, ensure that the data fed into AI systems is relevant, sufficient and representative, and monitor and evaluate their processes.
A number of measures can be implemented and documented (AI oversight committee, regular audits, continuity plan, team training, etc.).
Given the opacity of AI systems, this governance must address two specific challenges: transparency and explainability, that is, the ability to understand and explain how decisions are made.
Such governance will be necessary for financial players to respond to the naturally evolving challenges of AI and to the foreseeable litigation risks arising from automated decisions that are insufficiently transparent and difficult to explain.
Naturally, regulators and supervisors at both international and national level are placing such challenges and developments at the heart of their priorities for the new year.
[1] AMF-AEFR conference on technological frontiers in the financial sector, 3 October 2025
[2] See recently "Implementing effective AI oversight in the financial sector", Denis Beau, First Deputy Governor of the Banque de France, published online on 12 December 2025, Banque de France.
[3] HCJP report on the legal and regulatory impacts of artificial intelligence in banking, finance and insurance, 20 June 2025
[4] ESMA, Public Statement on the use of Artificial Intelligence (AI) in the provision of retail investment services (ESMA35-335435667-5924), 30 May 2024
[5] See the aforementioned HCJP report in this regard.
[6] ESMA, Artificial intelligence in EU investment funds: adoption, strategies and portfolio exposure (ESMA50-43599798-9923), 25 February 2025
[7] ESMA35-335435667-5924, cited above
[8] IOSCO, Artificial Intelligence in Capital Markets: Use Cases, Risks and Challenges, Consultation Report, CR/01/2025, March 2025