How to reduce False Positives
The amount banks spend on AML is burgeoning, hastened by technology that is letting them down: rather than identifying risk, it fogs it with needless and inaccurate assumptions. How to reduce false positives is therefore a critical question for saving resource. Read on for some ideas…
Define the problem.
The problem is caused by poor data management. Every bank should have a data governance strategy with teams of individuals specifically focused on improving data relevance, accuracy, timeliness and categorization.
One of the key ways this can be done is by improving metadata management, so that teams understand not only what the data is but where, when, why, how and by whom it was sourced.
From that, an algorithm can relatively easily grade and weight the accuracy of each record and merge it with other equally assessed data. The problems stem from the following:
- Volumes of alerts, transactions and entity lists are growing.
- “Sectoral” sanctions – focused on a specific area or activity rather than blanket sanctions; these are here now, and more are coming.
- This combination creates more false positives than have been experienced in the past.
- Poor data management, timeliness and accuracy cause inaccurate results.
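The grading-and-weighting idea above can be sketched in a few lines. This is a minimal illustration, not a real bank schema: the metadata fields (`source`, `last_updated`, `verified`), the trusted-source names and the weightings are all assumptions chosen for the example.

```python
# Sketch: grade a record by its metadata quality so downstream matching
# can weight it. Field names and weights are illustrative assumptions.
from datetime import date

def grade_record(meta, today=date(2024, 1, 1)):
    """Return a 0.0-1.0 quality score from simple metadata checks."""
    score = 0.0
    # A known, trusted source contributes the most weight.
    if meta.get("source") in {"registry", "kyc_refresh"}:
        score += 0.5
    # Recency: data refreshed within the last year scores higher.
    updated = meta.get("last_updated")
    if updated and (today - updated).days <= 365:
        score += 0.3
    # Independently verified data earns the remainder.
    if meta.get("verified"):
        score += 0.2
    return score

record = {"source": "registry", "last_updated": date(2023, 6, 1), "verified": True}
print(grade_record(record))  # 1.0 — fully sourced, recent, verified
```

Records scoring low would be queued for data remediation rather than fed straight into screening, where their inaccuracy would only generate alerts.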
Knowing not only what to include but, just as importantly, what not to include is a critical decision.
- When is a match not a match?
- How can you meet issues such as sectoral sanctions?
- The focus here is on transaction screening, but it holds lessons for customer screening too.
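The "when is a match not a match" question can be made concrete: a lexical hit alone should not escalate an alert unless corroborating data agrees. A minimal sketch, assuming an invented rule that pairs a similarity threshold with a country check (the threshold, field names and example entities are all illustrative):

```python
# Sketch: a name hit is only escalated when string similarity clears a
# threshold AND a secondary attribute (country here) also agrees.
from difflib import SequenceMatcher

def is_true_match(alert_name, list_name, alert_country, list_country,
                  threshold=0.85):
    similarity = SequenceMatcher(None, alert_name.lower(),
                                 list_name.lower()).ratio()
    # A lexical hit alone is not a match: require corroborating data.
    return similarity >= threshold and alert_country == list_country

# Both pairs trip a naive name scanner; only the corroborated one escalates.
print(is_true_match("ABC Trading Ltd", "ABC Trading Ltd.", "GB", "GB"))  # True
print(is_true_match("ABC Trading Ltd", "ABC Trading Ltd.", "GB", "IR"))  # False
```

In practice the corroborating attributes would be richer (date of birth, identifiers, address), but the principle is the same: the match decision is a conjunction, not a single string comparison.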
By introducing artificial intelligence systems, the hard work can be shifted to a machine and away from the highlighting pen of an AML investigator. With human resource costs burgeoning and false positive rates running as high as 90%, the decision makes itself; it is simply time to implement it.
Consider the customer cost, too. Imagine being stopped and accused of shoplifting: what, really, is the difference when we freeze accounts or cards pending investigation of a transaction?
The matching of data across categories defines the problem: poorly laid out matching rules, applied to poorly managed data, alert on the wrong result.
The table below gives some examples of poorly matched and alerted data that give rise to false positive results.
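To make the kind of mismatch concrete, here is a sketch of the crude substring screening that produces such hits. The screened strings and list entry are invented illustrations, not real list data:

```python
# Sketch: crude substring screening — the style of rule that fires on
# data matched across the wrong categories. Examples are invented.
def naive_hit(text, entry):
    """Alert whenever the list entry appears anywhere in the text."""
    return entry.lower() in text.lower()

examples = [
    ("Jim Cuba, 14 High St", "Cuba"),   # surname matched against a country term
    ("Socuba Shipping Co", "Cuba"),     # token embedded inside another word
]
for text, entry in examples:
    print(f"{entry!r} hit in {text!r}: {naive_hit(text, entry)}")
```

Both fire, and both are false positives: the rule never asks which field the text came from or whether the hit is a whole token, which is exactly the category-blind matching described above.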
False Positive Impact
Time, money and morale are the three biggest issues with false positives, and together they amply demonstrate how critical it is to improve false positive rates.
By examining the nature of false positives, even with a human eye, we can categorise the main components causing the errors and focus on the top 20% of categories. The Pareto principle suggests these will account for 80% of all errors, so improving them improves the bulk of the problem.
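The 80/20 triage above amounts to counting false positives by root cause and attacking the biggest slice first. A minimal sketch, where the category labels and counts are invented for illustration:

```python
# Sketch of Pareto triage: count false positives by root-cause category
# and surface the top slice. Labels and volumes are invented.
from collections import Counter

alerts = (["partial name hit"] * 40 + ["stale list entry"] * 25 +
          ["country-in-address"] * 15 + ["transliteration"] * 12 +
          ["other"] * 8)

counts = Counter(alerts)
total = sum(counts.values())
# Top 20% of categories (1 of 5 here) — fix these first.
for category, n in counts.most_common(1):
    print(f"{category}: {n/total:.0%} of false positives")
```

Even this crude count tells the remediation team where a single rule fix removes the largest share of wasted investigation time.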
If machine learning is used to learn the patterns of errors, this step can be automated with supervised AI systems, reducing manual handling and improving accuracy. But the overarching issue remains improving the management and accuracy of data through a governance structure across the whole business.
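The supervised step can be sketched without any ML library: learn from analyst-labelled history which alert categories are almost always false positives, then route new alerts accordingly. This stands in for a real model; the categories, labels and 95% threshold are all assumptions for the example:

```python
# Sketch of supervised triage: learn per-category false-positive rates
# from analyst-labelled history, then auto-route new alerts. A stand-in
# for a real ML model; the data and threshold are invented.
from collections import defaultdict

def train(history):
    """history: (category, was_false_positive) pairs -> FP rate per category."""
    totals, fps = defaultdict(int), defaultdict(int)
    for category, was_fp in history:
        totals[category] += 1
        fps[category] += was_fp
    return {c: fps[c] / totals[c] for c in totals}

def route(category, fp_rates, auto_close_at=0.95):
    rate = fp_rates.get(category, 0.0)  # unseen categories go to a human
    return "auto-close" if rate >= auto_close_at else "human review"

history = ([("partial name hit", True)] * 98 +
           [("partial name hit", False)] * 2 +
           [("exact name + country", False)] * 10)
rates = train(history)
print(route("partial name hit", rates))      # auto-close
print(route("exact name + country", rates))  # human review
```

Keeping a human in the loop for everything below the threshold — and for categories the model has never seen — is what makes this supervised automation rather than blind dismissal.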