Can artificial intelligence (AI) write and submit suspicious activity reports (SARs)? I pondered this subject last week and canvassed the opinions of others, including fellow financial crime professionals. Then, I read a recent article in The Guardian newspaper written by AI. So, I determined to stop pondering and start writing.

I am a big fan of AI; I don’t hold any fears or anxieties about a world controlled by cruel and demanding robots. As The Guardian article rightly points out, we have far more to fear from humans influenced by hatred seeking to inflict violence upon each other.

The article was written by a self-taught robot; we can also teach a robot to be an anti-money laundering compliance professional. I believe AI can make a big difference in the fight against financial crime because of its capacity to examine vast quantities of data and identify gaps, contradictions, anomalies, and perhaps even suspicions of money laundering or other financial crime. AI can be in multiple places simultaneously, 24 hours a day. Properly programmed, AI can represent you: it can apply your thinking, your policy, and your logic.

Within transaction monitoring, AI can logically articulate why a transaction or series of transactions is unusual, but can it go a step further and apply a label of suspicion? AI can learn the law and regulations; it can understand the ingredients required for an offense to be committed. But is the determination of suspicion an opinion? Can AI give an opinion? The Guardian article references the non-judgmental characteristics of AI.
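
To make the distinction concrete, here is a minimal, hypothetical sketch in Python, with invented field names and thresholds, of the first half of that question. It shows the mechanical part a monitoring system can do today: flag a transaction and articulate, in plain language, why it is unusual. The final step, deciding whether those reasons amount to suspicion, is deliberately left to a person.

```python
from dataclasses import dataclass
from statistics import mean, stdev


@dataclass
class Transaction:
    customer_id: str
    amount: float
    country: str


def flag_unusual(history: list[float], txn: Transaction,
                 z_threshold: float = 3.0) -> list[str]:
    """Return human-readable reasons why a transaction looks unusual.

    An illustrative rule set only, not a real monitoring system:
    the thresholds and rules are invented for this example.
    """
    reasons = []
    if len(history) >= 5:
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and (txn.amount - mu) / sigma > z_threshold:
            reasons.append(
                f"Amount {txn.amount:.2f} is more than {z_threshold} standard "
                f"deviations above the customer's average of {mu:.2f}."
            )
    if txn.country not in {"GB", "US"}:  # placeholder 'expected countries' list
        reasons.append(
            f"Destination country {txn.country} is outside the customer's usual profile."
        )
    return reasons


if __name__ == "__main__":
    past_amounts = [120.0, 95.0, 150.0, 110.0, 130.0, 90.0]
    txn = Transaction("C-001", 9_500.0, "PA")
    for reason in flag_unusual(past_amounts, txn):
        print("UNUSUAL:", reason)
    # Whether these reasons amount to *suspicion* is left to the human analyst.
```

Everything above the closing comment is logic and arithmetic; the step this article is questioning is the one the comment leaves to the human analyst.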

In many countries, the human being who files SARs holds an appointed and approved position. Consequently, this person has responsibility and accountability. I have previously held this position, and more than once I have found myself wrapped up in legal arguments with lawyers representing customers and judges opining upon my actions. I cannot envisage a robot in the witness box at the High Court in London.

My reservations are compounded by my sense of “gut instinct”: we all have it, and we apply it. Some years ago, scientists at Cambridge University undertook a study that pitted man against machine. More specifically, the study pitted a trading algorithm against an experienced manual trader. Guess who won. The manual trader was victorious, and the scientists concluded it was his gut instinct, influenced by his experience, emotional intelligence, and wider reading of the market, that gave him an advantage.

The study did not assert that the algorithm could not trade effectively; rather, it asserted that the manual trader was better. When I train financial crime professionals and other staff within regulated businesses, I tell them never to suppress their own gut instinct, because it is seldom wrong. This raises the question: Can AI develop a gut instinct? For sure, AI constantly learns, develops, and improves, but gut instinct is not always logical, and AI makes all decisions based upon logic (I think).

I don’t know if there is a definitive answer to the question I have posed, but if I were to find myself in the witness box in a courtroom, summoned there because of a SAR submitted on my watch, I would want to have been the author of that SAR.