Do We Need AI, and Does AI Need Us?

By Vera Romano
February 29th 2024 | 4 minute read

In conversation with The Dark Money Files’ Graham Barrow

Are we turning to artificial intelligence too often and too quickly? asks The Dark Money Files director and anti-money laundering expert Graham Barrow.

A week after chip giant Nvidia reported blowout earnings on frenzied customer spending on AI hardware, and forecast strong conditions for continued growth, questions continue to mount over where AI can best be leveraged, and where it will, or won’t, make humans redundant.

AI is transforming financial services tasks

Integration in the workplace will clearly increase. Much of the discussion at present centres around artificial intelligence’s potential as a collaborative tool to boost productivity. The financial services use cases are extensive and growing, as AI’s expanding capabilities allow the automation of routine tasks and big data processing, freeing employees to focus on areas where they can better add value.

For example, asset management companies are already using generative AI to justify new trade ideas, communicate returns and performance attribution to clients, improve operational efficiency and combat cyberattacks. Higher-skill tasks such as asset allocation, model portfolios, security selection and risk mitigation will follow, reckons former PIMCO chief Mohamed El-Erian.

In the AML arena, generative AI’s ability to take information and produce high-quality, comprehensible Suspicious Activity Reports (SARs) ready for filing is just one area where the technology can ease staff workloads and enhance compliance, notes Barrow.

Is AI always the best solution?

But just because artificial intelligence can take on some of these functions, should it? “AI is becoming such a darling that we increasingly start down the AI route when, if we thought about it, we could write explainable rules that do the job just as well,” suggests Barrow.
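Barrow’s point about explainable rules can be made concrete. The Python sketch below is purely illustrative: the thresholds, field names and jurisdiction codes are assumptions for the example, not any real compliance rule set. What it shows is the property he values, that every flag carries a human-readable reason a regulator could inspect:

```python
# Illustrative sketch of an explainable rules-based transaction check.
# All thresholds, field names and jurisdiction codes are assumptions
# made for this example, not a real rule set.
from dataclasses import dataclass


@dataclass
class Transaction:
    amount: float
    country: str
    is_cash: bool


def flag_reasons(tx: Transaction) -> list[str]:
    """Return human-readable reasons a transaction was flagged.

    An empty list means no rule fired; every positive decision can be
    explained to a regulator by reading the reasons back.
    """
    reasons = []
    if tx.is_cash and tx.amount >= 10_000:
        reasons.append("cash transaction at or above 10,000 reporting threshold")
    if tx.is_cash and 9_000 <= tx.amount < 10_000:
        reasons.append("cash amount just below reporting threshold (possible structuring)")
    if tx.country in {"XX", "YY"}:  # placeholder high-risk jurisdiction codes
        reasons.append(f"counterparty in high-risk jurisdiction {tx.country}")
    return reasons
```

Unlike a model score, the output of such rules is its own audit trail: the answer to “why was this flagged?” is simply the list of reasons returned.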

Explainability is key. As Barrow observes, with AI you may get a solution to a problem, but you don’t always know if it’s the right, or a reliable, solution.

“If you use AI in a regulatory environment, you cannot be in a situation where you’re unable to explain to a regulator why your AI either said yes or no to, say, a suspicious transaction,” he warns. “What you’re doing must have some robustness. But if you need explainable AI, that might limit its ability to generate novel solutions.”

AI’s propensity to “hallucinate” poses a related issue. Whereas 200 years ago almost everything people knew came from direct experience, in today’s more complex world the vast majority of our information, knowledge and beliefs rests on trust, because nobody can understand everything anymore, Barrow notes. AI makes things up, but its falsehoods are presented so confidently that it is easy to take its proclamations on faith. Without enough people with the knowledge and time to corroborate those assertions, we risk falsehoods circulating unchecked and hardening into “fact”.

The jagged frontier of where AI really can help

A further problem is what researchers have termed the “jagged technological frontier” of generative AI performance.

Working with Boston Consulting Group (BCG), the researchers found that consultants armed with GPT-4 performed dramatically better than those without it on tasks such as brainstorming product ideas, performing a market segmentation analysis and writing a press release. But on a task designed to fool the AI – making strategy recommendations to a client based on misleading financial data and staff interview transcripts – the consultants who relied on it tended to give far worse advice.

As Financial Times columnist Tim Harford put it: “Sometimes the AI is better than you, and sometimes you are better than the AI. Good luck guessing which is which.”

The pace of progress may eradicate these weaknesses in time. But we are not there yet. In the meantime, human oversight and control over AI processes remain essential.

AI needs humans

Humans may not be able to deal with the vast quantities of data that AI applications can, but we retain the ability to examine information critically, spot patterns and make intuitive leaps that can then be tested, says Barrow.

For example, in his work Barrow regularly sees nonsensical company names on the official register, generated by algorithms from stolen identity data to create fictitious shell corporations – perhaps mashing together parts of someone’s name to call a company ‘Smjeda’. Human critical reasoning can spot these fake company names where an AI tool may not.
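Even here, a transparent heuristic can capture part of what the human eye does. The Python sketch below flags names containing letter pairs that almost never occur in English words, catching a fabrication like ‘Smjeda’ (which contains ‘mj’). The rare-bigram list is an illustrative assumption, not a production screening list:

```python
# Illustrative heuristic for spotting algorithmically generated company
# names. The rare-bigram set is an assumption for this sketch, not a
# production screening list.
RARE_BIGRAMS = {"mj", "jq", "qx", "zx", "vq", "qj", "jx", "wq", "fq", "zq"}


def looks_generated(name: str) -> bool:
    """Return True if any word in the name contains a letter pair
    rarely seen inside English words."""
    for word in name.lower().split():
        letters = "".join(ch for ch in word if ch.isalpha())
        if any(letters[i:i + 2] in RARE_BIGRAMS for i in range(len(letters) - 1)):
            return True
    return False
```

A real screening system would need a far richer model of name plausibility, but the point stands: a rule like this is fully explainable, whereas a black-box classifier’s verdict is not.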

Applying AI to transaction monitoring requires similar human control to guard against rogue results. “Potentially AI could do a good job, but will it do that accurately every time, or unjustly indicate people are engaging in criminal acts where there has been nothing wrong with their transactional activity?” says Barrow. “If we are going to use AI, we need to understand the limits of its effectiveness and make sure humans are watching what the technology does.”

And given the exponential advances AI will make in the coming years, in directions we cannot anticipate, there are serious philosophical questions to consider about how much capability we want the technology to have, he concludes.

“We need to proceed with caution because firms’ reputations are on the line. If we don’t have that philosophical conversation now, we may create a future over which we have no control because we haven’t thought about what those futures might look like.”

Vera Romano
Vera is responsible for driving Deep Pool’s overall marketing strategy. A qualified and proven marketer with 20+ years of experience at companies ranging from tech start-ups to large corporates, she has led creative teams in developing and managing innovative brands through strategic campaigns to grow market share, sales and hit targets.