Is AI Talk About AML All Wrong?

Can you help me make sense of something we hear a lot about in AML?

The IMF, the UN, FATF, and other international bodies estimate that global money laundering exceeds $3 trillion annually.

These same bodies report that each year law enforcement seizes less than 1% of this $3 trillion in illegal profits, well under $30 billion. In other words, nearly all laundered money goes undetected.

We are told AML compliance is a critical part of the system that helps law enforcement identify financial crime.

Yet current approaches to transaction monitoring and suspicious activity detection leave us bogged down in false positives and dependent on decades-old, monotonous manual steps for gathering information and documenting case files. Depressingly, despite all this work, we don't seem to be detecting much real money laundering.

We are told that modern software built on AI and machine learning will reduce false positives and automate many of the manual steps investigators now must endure. As a result, there will be far less work, and AML teams will shrink.

Here is my question: if existing AML software detects so little suspicious activity now, and that software is replaced by sophisticated modern software, won't the modern software detect more suspicious activity? And won't that lead to more AML work, not less?

Or do many AML leaders believe that current transaction monitoring systems already detect most suspicious activity, and that once modern software clears out false positives and automates file documentation, AML work will shrivel up?

It seems to me these are the essential questions AML management, regulators, bank executives, software developers, and national policymakers should be thinking and talking about. Imagine the increase in AML workloads if AI- or machine-learning-based transaction monitoring systems were to double or triple the number of alerts that end up as SARs.

Once modern software figures out how to detect suspicious activity more effectively, SAR volumes will skyrocket. After all, isn't the promise of AI and machine learning that these systems will "learn" to operate better and better over time?

A false positive takes roughly 20 minutes of analysis to resolve; a case investigation takes several hours. Even when new information-gathering tools automate much of the documentation, case investigations will still require time and experienced human judgment.
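To make that arithmetic concrete, here is a rough back-of-envelope sketch. The alert volumes, SAR-conversion rates, and automation discount below are purely hypothetical illustrations; only the 20-minutes-per-false-positive and several-hours-per-case figures come from the estimates above.

```python
# Back-of-envelope AML workload estimate (illustrative assumptions only).
FALSE_POSITIVE_MINUTES = 20   # time to clear one false positive (from the text)
CASE_HOURS = 4                # "several hours" per real case investigation

def analyst_hours(alerts, sar_rate, automation_discount=0.0):
    """Split annual analyst hours into false-positive triage and case work.

    alerts              -- total alerts generated by the system (hypothetical)
    sar_rate            -- fraction of alerts that become cases/SARs (hypothetical)
    automation_discount -- fraction of case time removed by new tooling (hypothetical)
    """
    cases = alerts * sar_rate
    false_positives = alerts - cases
    triage_hours = false_positives * FALSE_POSITIVE_MINUTES / 60
    case_hours = cases * CASE_HOURS * (1 - automation_discount)
    return triage_hours, case_hours

# Today: 100,000 alerts, roughly 1% end up as SARs.
today = analyst_hours(alerts=100_000, sar_rate=0.01)

# Tomorrow: smarter software halves total alerts but triples the number of
# SARs, and automation trims 30% off case documentation time.
tomorrow = analyst_hours(alerts=50_000, sar_rate=0.06, automation_discount=0.30)

print(f"Today:    triage {today[0]:,.0f} h, investigations {today[1]:,.0f} h")
print(f"Tomorrow: triage {tomorrow[0]:,.0f} h, investigations {tomorrow[1]:,.0f} h")
```

Under these assumed numbers, triage hours fall but investigation hours more than double, and investigation is exactly the work that still needs experienced humans. Whether total hours rise or fall hinges entirely on how many new true positives the smarter systems surface, which is the open question above.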

No one really knows the impact modern software will have on AML, but viewing AI- and machine-learning-based systems as the great "compliance cost cutter" is unwise.

At no point in human history have financial crime and the desire for its profits decreased. As crime and laundering increase, won't better AML software detect more of it, and shouldn't we be prepared when it does?