AI in Financial Systems: Balancing Innovation with Caution

By Sireesh Patnaik, on June 28, 2024


Financial institutions (FIs) have been increasingly incorporating AI into their operations and processes to drive efficiencies. According to The Economist, over 85% of IT executives in the financial services space already have a strategy for incorporating AI into new product and service development. The growing acceptance of AI’s transformative potential points to a new era of smarter, highly personalized financial services, which is good news for both the industry and its end customers.

For example, AI technology enables round-the-clock customer service while personalizing each interaction to match each customer’s unique needs. Bank of America’s virtual assistant Erica recently crossed a billion customer interactions since its launch in 2018, with nearly 1.5 million interactions per day. Beyond better customer service, AI automates daily tasks, enables deep data analysis, allows for more accurate risk assessment and credit scoring, drives new product development, and improves fraud detection. As the technology evolves, it will continue to reshape the financial services industry into one that is more innovative and profitable. Studies estimate that by 2030, even traditional FIs will be able to lower their costs by 22% by implementing AI throughout their front, middle, and back offices.

There’s a fly in my AI soup

For all the opportunities for greater revenue and efficiency, AI technology also brings ethical and security risks to the table, which can wreak havoc if not managed well. AI for banking and financial services requires appropriate governance and guardrails that strike the right balance between innovation and growth on the one hand and risk mitigation on the other. According to the Monetary Authority of Singapore, “Compared to human decision-making, the nature and the increasing use of AIDA [artificial intelligence and data analytics] may heighten the risks of systematic misuse. This may result in impacts which are more widespread, perpetuated at greater speed.” Let’s look at the most crucial risks involved.

Biased decisions

Using AI in data analysis, such as when evaluating a prospective lending customer, is as prone to bias as human analysis. Existing unconscious bias and prejudice of the creator can sneak into the algorithms that underpin AI. This could prevent a bank from meeting its gender, socio-economic, and diversity targets and from building a wider customer base. Wells Fargo recently faced legal action over alleged bias in the algorithms it used to approve mortgages, wherein certain neighbourhoods were deemed ineligible for rapid loan processing because of their predominant race. To prevent such incidents, some financial bodies are already incorporating countermeasures. For instance, the European Banking Federation has put controls in place to minimize reliance on AI output data, so that attributes such as age, gender, ethnicity, religion, or sexual orientation cannot be derived from it.
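To make the bias risk concrete, here is a minimal sketch of how an analytics team might screen a loan-decision dataset for disparate impact using the common “four-fifths” rule of thumb. The column names, toy data, and 0.8 threshold are illustrative assumptions, not a description of any regulator’s or bank’s actual methodology.

```python
# A minimal sketch of a disparate-impact check on lending decisions.
# Column names ("approved", "group") and the 0.8 threshold (the informal
# "four-fifths rule") are illustrative assumptions.
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Ratio of the lowest to the highest approval rate across groups."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

# Toy data: approval decisions for two applicant groups.
loans = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

ratio = disparate_impact_ratio(loans, "group", "approved")
if ratio < 0.8:  # flag for human review below the four-fifths threshold
    print(f"Potential disparate impact: ratio = {ratio:.2f}")
```

A check like this only flags outcomes for review; investigating and correcting the underlying model or data remains a human, policy-driven task.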

Data security

Perhaps the biggest challenge facing the AI industry today is balancing AI’s need for large amounts of data with the right to privacy. After all, AI’s ability to improve a bank’s operations and drive efficiencies cannot be fully realized if it is overregulated. However, using AI to analyze and profile customers requires collating existing customer data stored with the FI, which at times could lead to the FI breaching privacy laws. Much of the sensitive personal information in this data may not always be anonymised and could be used to identify customers for targeted phishing attacks. There is also the issue of consent, where a customer might agree to their data being used without fully understanding the implications and possible dangers. Addressing these concerns requires strong data protection and security practices whenever data is collected, shared, and stored, along with appropriate regulatory requirements where AI processing is outsourced to an independent organization.
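One practical mitigation, sketched below under simplifying assumptions, is to pseudonymize direct identifiers before customer records reach an AI or analytics pipeline. The field names and keyed-hash approach are illustrative; a real deployment would combine this with encryption, access controls, and proper key management.

```python
# A minimal sketch of pseudonymizing direct identifiers before customer data
# is handed to an AI/analytics pipeline. Field names are illustrative.
import hmac
import hashlib

SECRET_KEY = b"replace-with-a-managed-secret"  # assumption: kept in a key vault, not in code

def pseudonymize(value: str) -> str:
    """Replace an identifier with a keyed hash: records stay linkable but not directly identifying."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"name": "Jane Doe", "account_id": "ACC-001234", "balance": 5400.25}
safe_record = {
    "name": pseudonymize(record["name"]),
    "account_id": pseudonymize(record["account_id"]),
    "balance": record["balance"],  # non-identifying attribute kept as-is
}
print(safe_record)
```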

Lack of Transparency and Explainability

Many AI models, especially those based on deep learning, are often described as “black boxes” due to their lack of transparency. This can be problematic in the financial sector, where decisions need to be explainable to customers and regulators. The inability to understand how AI models make certain decisions complicates compliance with regulations like GDPR, which mandates a right to explanation.
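As a hedged illustration of what explainability can look like in practice, the sketch below scores a hypothetical applicant with an inherently interpretable model (logistic regression) and reports each feature’s contribution to the decision. The features and data are invented for the example; production systems might instead rely on dedicated explainability tooling.

```python
# A minimal sketch of an explainable credit decision using an interpretable
# model. Feature names, data, and labels are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income", "debt_ratio", "late_payments"]
X = np.array([[60, 0.2, 0], [25, 0.6, 3], [45, 0.4, 1], [80, 0.1, 0]], dtype=float)
y = np.array([1, 0, 1, 1])  # 1 = approved, 0 = declined

model = LogisticRegression().fit(X, y)

applicant = np.array([30, 0.5, 2], dtype=float)
contributions = model.coef_[0] * applicant  # per-feature pull toward approve/decline
for name, c in sorted(zip(features, contributions), key=lambda t: abs(t[1]), reverse=True):
    print(f"{name}: {c:+.2f}")
print("decision:", "approve" if model.predict(applicant.reshape(1, -1))[0] else "decline")
```

The design choice here is to trade some predictive power for transparency: a simple linear model makes it straightforward to tell a customer or regulator which factors drove a decision.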

Insufficient regulations

Although regulators have expressed concerns about AI in banking, including algorithmic bias in decision making, AI-driven social engineering, and deepfake attacks, there are not yet enough clear regulations governing the adoption of AI in sensitive banking and financial operations. The rapid adoption of generative AI, for all its convenience, is worrying regulators because governance rules remain deficient, especially when it comes to data privacy and security.

Operational Risks and Dependence

Overreliance on AI systems can lead to operational vulnerabilities, especially if these systems experience failures or inaccuracies. Additionally, the need for constant updates and maintenance of AI models can introduce new risks and dependencies on specific technologies or vendors.

Gen AI transforming the Cybersecurity Battleground

The evolution of generative AI has opened new avenues for cybercriminals, providing them with tools to create more convincing phishing emails, generate deepfake content for social engineering attacks, and automate the discovery of vulnerabilities at scale. Hackers are leveraging generative AI to craft personalized and contextually relevant messages that are harder for both users and traditional security systems to identify as malicious. Furthermore, AI’s ability to analyze vast datasets can be exploited to identify potential targets more efficiently, making cyberattacks more targeted and difficult to prevent.

Generative AI models can also be used to bypass security mechanisms, such as CAPTCHA, by learning to solve these challenges at human or superhuman levels, further complicating the security landscape for financial institutions. The adaptability of AI-driven tools means that they can quickly adjust to new security measures, requiring constant vigilance and innovation from cybersecurity teams.

Moreover, the use of AI in creating sophisticated voice and video deepfakes poses a significant threat to identity verification processes. These technologies can be used to mimic the voice or appearance of trusted individuals, tricking victims into divulging sensitive information or authorizing fraudulent transactions.

Voice cloning

Voice-enabled banking, which lets a customer complete basic banking tasks with just a voice prompt, sounds like a futuristic movie, but it’s happening now. Market studies predict that voice-based banking will reach USD 3.7 billion by 2031. Despite its benefits, there is growing concern about security risks, as advanced AI voice technology and generative AI open avenues for identity theft, phishing attacks, and more. Fraudsters can easily fabricate counterfeit audio clips or voice instructions that closely resemble the authentic voice of a real customer and use these clips to authenticate banking transactions. Such risks are fueling new and creative forms of financial fraud and security breaches, even as few security tools exist to detect or prevent voice cloning and deepfake attacks.

Data poisoning

Like voice cloning, data poisoning is another emerging form of cyberattack that can dupe AI systems by manipulating the data they are trained on or process. The quality of the insights and analysis an AI system produces depends heavily on the quality of its input data. Any introduction, deliberate or otherwise, of inaccurate or misleading elements into that data directly compromises the AI’s performance. With the advent of large language models in generative AI, the risks associated with data poisoning grow in magnitude, especially because such attacks become harder to detect in complex models.
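The sketch below illustrates the basic mechanics with a deliberately simple example: flipping the labels of a small fraction of training records and comparing model accuracy before and after. The dataset, model, and 10% poisoning rate are illustrative assumptions, not a real attack scenario.

```python
# A minimal sketch of label-flipping data poisoning: corrupting a small
# fraction of training labels can degrade model accuracy. All settings here
# (synthetic data, logistic regression, 10% flip rate) are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean_acc = LogisticRegression(max_iter=1000).fit(X_train, y_train).score(X_test, y_test)

# "Attacker" flips the labels of 10% of the training set.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=len(poisoned) // 10, replace=False)
poisoned[idx] = 1 - poisoned[idx]

poisoned_acc = LogisticRegression(max_iter=1000).fit(X_train, poisoned).score(X_test, y_test)
print(f"clean accuracy:    {clean_acc:.3f}")
print(f"poisoned accuracy: {poisoned_acc:.3f}")
```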

Next steps

Highlighting concerns about the double-edged nature of AI for the global financial system, the World Economic Forum reported in 2023 that “While continued tech integration into the financial system has many benefits, it’s important that industry leaders, regulators and consumers be aware of emerging tech-driven risks, and take appropriate action to mitigate them.” As FIs become more dependent on AI technology, they also need to stay aware of emerging risks and apply appropriate security measures to ensure resilience and stability.

Financial services firms need to establish stringent data governance policies and practices that protect data privacy and security and give customers the strongest possible assurance of safety. This includes strong encryption, strict access controls, regular security audits, demonstrable compliance with data protection regulations like GDPR, and regular policy revisions. In fact, GDPR already covers some aspects of using AI for data analysis, such as a customer’s right to know when companies use algorithms to make automated decisions. In addition, rigorous penetration testing must be conducted to identify and address new vulnerabilities, protect sensitive customer information, ensure fairness and transparency, and prevent data breaches.
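As one concrete building block of such a governance program, the sketch below shows field-level encryption of sensitive customer attributes using the third-party cryptography package. Key storage, rotation, access control, and audit logging are assumed to be handled elsewhere.

```python
# A minimal sketch of field-level encryption for sensitive customer attributes.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # assumption: in production this lives in a KMS/HSM, not in code
cipher = Fernet(key)

customer = {"name": "Jane Doe", "ssn": "123-45-6789", "segment": "retail"}

# Encrypt only the sensitive fields before the record is stored or shared.
protected = dict(customer)
for field in ("name", "ssn"):
    protected[field] = cipher.encrypt(customer[field].encode("utf-8"))

# Authorized services holding the key can still recover the original values.
assert cipher.decrypt(protected["ssn"]).decode("utf-8") == customer["ssn"]
print(protected)
```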

In conclusion, as we harness the transformative power and immense possibilities of AI and Generative AI to redefine financial services, it is paramount to adopt a proactive and thorough approach towards cybersecurity, ethical utilization of AI, and adherence to regulatory frameworks. Such measures are vital for reducing risks and guaranteeing a resilient and ethical progression of AI within the financial sector.