Treasury report calls out AI-fueled cyber risks to financial sector
The financial services industry could be increasingly vulnerable to cyber-enabled fraud perpetrated by threat actors leveraging artificial intelligence tools, according to a Treasury Department report released Wednesday that examines AI-specific cyber risks to the critical infrastructure sector.
The report, led by Treasury’s Office of Cybersecurity and Critical Infrastructure Protection to fulfill a requirement in President Joe Biden’s AI executive order, delivers no cyber-related mandates to the financial services sector, nor does it recommend or argue against the use of AI in the industry’s work. But the report, based in part on interviews with representatives from 42 financial services and tech-related companies, provides warnings to the industry at large about AI’s potential to worsen fraud while also sharing best practices and AI use cases for cyber and fraud prevention.
“Artificial intelligence is redefining cybersecurity and fraud in the financial services sector, and the Biden administration is committed to working with financial institutions to utilize emerging technologies while safeguarding against threats to operational resiliency and financial stability,” Under Secretary for Domestic Finance Nellie Liang said in a statement. “Treasury’s AI report builds on our successful public-private partnership for secure cloud adoption and lays out a clear vision for how financial institutions can safely map out their business lines and disrupt rapidly evolving AI-driven fraud.”
The fear of an uptick in cyber-enabled fraud is fueled by the increasing accessibility of emerging AI tools, which the report notes could give threat actors an “advantage by outpacing and outnumbering their AI targets,” at least initially.
To combat that advantage, the report pushes financial institutions to “expand and strengthen their risk management and cybersecurity practices to account for AI systems’ advanced and novel capabilities, consider greater integration of AI solutions into their cybersecurity practices, and enhance collaboration, particularly threat information sharing.”
Managing AI-related cyber risks should follow the same best practices used to protect other IT systems, the report said. Several of the participating financial institutions told the report’s authors that their current practices match elements of the National Institute of Standards and Technology’s AI Risk Management Framework, though “many also noted that it is challenging to establish practical and enterprise-wide policies and controls for emerging technologies like Generative AI.”
Other financial sector report participants said they were developing AI-specific risk management frameworks in-house, many of which are guided by the principles laid out in NIST’s AI RMF as well as the Organisation for Economic Co-operation and Development’s AI principles and the Open Worldwide Application Security Project’s AI security and privacy guide.
But financial firms’ experimentation with and development of in-house AI systems and frameworks underscores “a widening capability gap” between the sector’s largest and smallest companies.
“One firm has stated that it has approximately 400 employees working on fraud-prevention AI systems, and AI service providers noted being approached with thousands of use cases by larger firms,” the report said. “Smaller firms report that they do not have the IT resources or expertise to develop their own AI models; therefore, these firms solely rely on third-party or core service providers for such capabilities.”
Many financial institution participants said they believed AI adoption was important because of the technology’s potential to “significantly improve the quality and cost efficiencies of their cybersecurity and anti-fraud management functions.” Among the ways in which cyber threat actors can utilize AI, the report specifically called out social engineering, malware and code generation, vulnerability discovery and disinformation. Cyberthreats to AI systems include data poisoning, data leakage, evasion and model extraction.
The automation currently used by financial institutions for “time-consuming and labor-intensive anti-fraud and cybersecurity-related tasks” will likely be enhanced by generative AI “by capturing and processing broader and deeper data sets and utilizing more sophisticated analytics.” Technologies of that kind, the report added, can also enable financial firms to take on “more proactive cybersecurity and fraud-prevention postures.”
Going forward, the financial services sector relayed that it would be helpful to have “a common lexicon” for AI tools, so that firms, third parties and regulators are all speaking the same language in discussions. Report participants also said their firms would “benefit from the development of best practices concerning the mapping of data supply chains and data standards.”
The Treasury Department said it would work with the financial sector, as well as NIST, the Cybersecurity and Infrastructure Security Agency and the National Telecommunications and Information Administration, to further discuss potential recommendations tied to those asks.
In the coming months, Treasury officials will collaborate with industry, other agencies, international partners and federal and state financial sector regulators on critical initiatives tied to AI-related challenges in the sector.