Will the EU AI Act Stifle the Banking Sector’s AI-Adoption Plans?
By Steve Morgan, Banking Industry Market Lead, Pegasystems
As we kick off the second quarter of the year, it is encouraging that banks are steaming ahead with technology that automates their services and delivers more efficient and secure solutions for both their employees and customers. Machine learning (ML), artificial intelligence (AI) and, most recently, generative AI (GenAI) have been key drivers of innovation in the sector.
The growing trend to apply AI solutions is accompanied by a wave of new regulations that seek to ensure the technology is safe, responsible and ethical. The first of these to become law is the European Union’s Artificial Intelligence Act (EU AI Act), which received final approval from the European Parliament in March 2024.
Once such regulations come into effect, it will be vital for banks to be properly equipped to address the new requirements while keeping up with the competition. Failure to do so could lead to hefty fines and reputational damage. So, how will the new EU AI Act impact the sector and its ongoing AI-adoption plans?
Breaking down the EU AI Act
The EU AI Act, whose provisions will begin to take effect in stages from next year, is the world’s first comprehensive legal framework for AI. Its main goal is to encourage trustworthy AI use in Europe and beyond by ensuring AI systems respect fundamental rights, safety and ethical principles, specifically targeting the potential risks of powerful AI models.
Like other EU regulations, the Act takes a risk-based approach, requiring businesses to ensure their products comply with the law before they are introduced to the public. The regulation directly affects not only tech businesses but any sector leveraging AI in some capacity, from manufacturing to education and, of course, banking.
The implications for the sector
At the core of the EU AI Act is an assessment of how risky each AI use case could be. Notably, the European Commission (EC) cites systems used in credit checks, which could lead to a customer’s loan being turned down, as examples of high-risk AI. Additionally, the new regulation includes guidance on how financial services use AI to direct pricing and risk assessment in life and health insurance. Having said that, there is still work to be done by standardisation organisations and national authorities to impose new AI governance and risk-management requirements and standards on how banks and other financial institutions leverage AI.
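To make the risk-based approach concrete, the short Python sketch below shows how a bank’s compliance team might triage its AI use cases against the Act’s broad risk tiers. The tier names follow the Act’s general categories, but the specific use-case mapping and function names here are illustrative assumptions, not an official classification.

```python
# Illustrative sketch only: the tier assignments below are assumptions for
# demonstration purposes, not an official EU AI Act classification.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"   # e.g., social-scoring systems
    HIGH = "high risk"            # e.g., creditworthiness assessment
    LIMITED = "limited risk"      # e.g., chatbots (transparency duties)
    MINIMAL = "minimal risk"      # e.g., spam filters

# Hypothetical mapping of banking use cases to the Act's tiers.
USE_CASE_TIERS = {
    "credit_scoring": RiskTier.HIGH,              # could turn down a loan
    "life_health_insurance_pricing": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "internal_document_search": RiskTier.MINIMAL,
}

def triage(use_case: str) -> RiskTier:
    """Return the assumed risk tier for a use case. Unknown use cases
    default to HIGH so they receive a full governance review."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

for uc in USE_CASE_TIERS:
    print(f"{uc}: {triage(uc).value}")
```

Defaulting unknown use cases to the high-risk tier is a deliberately conservative choice: it forces a governance review rather than silently waving a system through.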
The EU AI Act also aims to address generative AI and large language models (LLMs), such as OpenAI’s GPT-4, to mitigate concerns about how these powerful but new forms of AI could influence people’s lives. Additionally, the European Union has introduced its new European AI Office to further support the development and use of trustworthy AI. Its main goal will be to enforce and closely monitor the rules for general-purpose AI systems used directly or indirectly by financial-services businesses. The AI Office will also seek to foster international cooperation while forming the foundation for a single European AI-governance system.
Although the AI Office was established to support and guide AI usage, financial institutions should bear in mind that they remain solely responsible for any tools and services they bring on board, including AI-powered decision-making. They should, therefore, ensure all solutions follow the appropriate guidelines to avoid penalties.
Compliance is at the heart of operations
With new regulations being proposed and existing ones being adjusted, the stakes for compliance have never been higher. This heightened regulatory scrutiny requires businesses to stay on top of the changes and ensure their AI services comply with the most recent requirements. Any oversights could lead to hefty fines and reputational damage.
For example, when it comes to the EU AI Act, businesses operating systems prohibited by the new Act will be subject to fines of up to €35,000,000 or 7 percent of their worldwide annual turnover, whichever is higher. Notably, this is the heftiest penalty for non-compliance in the EU so far, exceeding even the maximum fines under the General Data Protection Regulation (GDPR).
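As a rough illustration of how that cap works in practice (a sketch assuming the “whichever is higher” rule in the final text; the turnover figures are invented), the arithmetic looks like this:

```python
# Rough illustration of the EU AI Act's top penalty band for prohibited
# practices: up to EUR 35,000,000 or 7% of worldwide annual turnover,
# whichever is higher. The turnover figures below are invented examples.
FIXED_CAP_EUR = 35_000_000
TURNOVER_RATE = 0.07

def max_fine(annual_turnover_eur: float) -> float:
    """Maximum possible fine for the most serious infringements."""
    return max(FIXED_CAP_EUR, TURNOVER_RATE * annual_turnover_eur)

# Mid-sized bank, EUR 400m turnover: 7% = EUR 28m, so the EUR 35m figure applies.
print(f"EUR 400m turnover -> cap of EUR {max_fine(400e6):,.0f}")
# Large bank, EUR 2bn turnover: 7% = EUR 140m, exceeding the fixed amount.
print(f"EUR 2bn turnover  -> cap of EUR {max_fine(2e9):,.0f}")
```

In other words, the fixed €35-million figure acts as a floor for smaller firms, while the turnover-based calculation dominates for large institutions.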
Such significant penalties leave no room for mistakes; businesses must, therefore, keep up to date with the new requirements. Partners and third-party tech providers should also keep compliance at the top of their agendas so they can support businesses on their compliance journeys.
In addition, failing to comply with such regulations directly damages a business’s reputation; customers may lose faith in its capabilities, eventually leading to financial losses.
Acclimatising to the new regulatory framework
While the new EU regulation may not directly apply to banks operating outside the region, it will still influence many financial-services markets beyond the EU, including the United Kingdom, by becoming part of the blueprint for how other countries develop and apply their own laws. In the UK, for example, regulators are expected to concentrate on embedding fairness and transparency into the use of AI, building on the Financial Conduct Authority’s (FCA’s) new Consumer Duty regulation.
Naturally, existing regulations will also adjust to the rise in AI adoption by banks and others. Financial-services organisations have leveraged the technology for many years in credit processes, claims management, anti-money laundering (AML) and fraud detection. But as demand grows and AI continues to evolve at a fast pace, regulators will need to consider whether existing rules are sufficient to address current uses of AI or whether they should be enhanced rather than replaced.
Even though the EU AI Act is designed to build trust and safety, concerns have already been raised that the regulation may hold back innovation and that the sector should aim to be bolder in its adoption of AI. Countries such as the UK may also seek to diverge from the EU AI Act, seeing divergence as a chance to take the lead in international competition.
There is no doubt that there will be some political gamesmanship in how AI regulations are framed publicly, but it is in everyone’s interest that there is consistency and alignment on regulations internationally, to avoid confusion and onerous checks. Whether it is welcomed or not in certain places, markets outside the EU are expected to follow, if not explicitly copy, the EU AI Act, in another instance of the “Brussels effect” on regulatory best practice, as with the GDPR.
The reality is that some banks and financial institutions that have worked with AI applications for many years to streamline workflows are taking no risks when implementing newer, more powerful AI technologies. They are being cautious, especially about how generative AI uses sensitive data or becomes involved in direct customer interactions. The focus will, and should, always be on the outcomes for both the customer and the bank.
The real aim is to deliver the right outcome and improve rather than undermine business processes. The sector will welcome new regulations, such as the EU AI Act, because they provide clearer guidelines and guardrails on what can and cannot be done with this transformative technology. It is, therefore, in everyone’s best interest to be vigilant about remaining compliant and to collaborate with trustworthy tech partners who share the same priorities and understand the importance of adopting compliance measures proactively and early.
ABOUT THE AUTHOR
Steve Morgan is the Global Banking Industry Market Lead at Pegasystems. Steve’s background is in both management consultancy and executive roles in the financial-services industry. He has led transformation programmes at JPMorgan Chase, Barclays, NatWest and Deutsche Bank and was the COO for Mortgages and Retail and Commercial Lending at ANZ Bank.