The future of risk management will likely be a combination of humans and machines, rather than humans versus machines. After all, both humans and machines have their relative strengths and weaknesses, and hybrid models will deliver outcomes greater than the sum of their parts, as Christian Chmiel explains.
Q: So, let’s make sure we’re all on the same page from the start: what is artificial intelligence or AI?
Christian: AI can mean different things depending on the context. It’s a broad discipline containing a spectrum of approaches and technologies, for example machine learning, deep learning, natural language processing, optimisation, cognitive computing and swarm intelligence. Which one you choose depends on the problem you’re trying to solve.
I think that this is really critical. It’s as Einstein said: “If I had an hour to solve a problem I’d spend 55 minutes thinking about the problem and five minutes thinking about solutions.” AI has been trending in the hype cycle for so long that there is a temptation to make it the solution without properly defining the problem.
Most businesses don’t really need AI, namely machines that sense, think, act and learn in a feedback loop. Some type of machine learning would probably be adequate. This uses complex statistical methods to identify patterns across past datasets and, without being explicitly programmed, predict what may happen. It’s about creating algorithms that can learn from experience, unlike the databases of the 1950s and 60s, where you got out what you put in.
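To make the “learning from experience” point concrete, here is a deliberately tiny sketch. The data, the single-feature rule and the numbers are all made up for illustration; no real fraud model works on transaction amount alone. The point is only that the decision rule is derived from labelled past examples rather than hand-coded.

```python
def fit_threshold(amounts, labels):
    """Pick the amount threshold that best separates the two classes
    on the historical data, by trying each observed value."""
    best_thr, best_acc = 0.0, 0.0
    for thr in sorted(set(amounts)):
        preds = [a >= thr for a in amounts]
        acc = sum(p == l for p, l in zip(preds, labels)) / len(labels)
        if acc > best_acc:
            best_thr, best_acc = thr, acc
    return best_thr

# Hypothetical historical transactions: amount in EUR, True = fraudulent
past_amounts = [12, 25, 40, 900, 1200, 1500]
past_labels  = [False, False, False, True, True, True]

threshold = fit_threshold(past_amounts, past_labels)
print(threshold)           # 900 -- learned from the data, not programmed in
print(1000 >= threshold)   # True -- prediction for a new transaction
```

Swap the toy threshold search for a proper statistical model and the shape of the idea is the same: past data in, predictive rule out.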
Machine learning has been around for forty-odd years but is currently in the eye of a perfect storm. Firstly, there’s the huge rise in data: sensors everywhere, ubiquitous connectivity, plus structured and unstructured data, much of it consumer-generated rather than from industrial and scientific sources. Secondly, there’s the increase in computational power, which is scalable and available instantly and cheaply; the costs of data storage are also falling. Lastly, there have been huge advances in core AI techniques and the quality of machine learning algorithms.
Q: Your central thesis is that the hybrid underwriter of the future will be half human, half machine. Why do you think that?
Christian: I think popular culture has influenced how we think about machines. Are they our masters or our tools? We all know the story of Frankenstein’s monster, which has stoked a latent fear of creating something cleverer and more powerful than ourselves. Who is working for whom? Is it machines working for humans, or vice versa?
This can quickly become a dystopian discussion on runaway robots and evil AI. Which is not really relevant or helpful in the context of risk management and merchant underwriting. Nor are discussions about humans being replaced by robots. In truth, the future will likely be human and machine, rather than human versus machine, because both have their relative strengths and weaknesses.
Machines are good at spotting patterns and analysing huge amounts of data quickly. More so than humans. Machines also love repetitive tasks. It’s what they’re designed for. They can continue doing them again and again without getting bored, tired, making mistakes, or needing a break, unlike humans. Machines are ideal for identifying patterns whether in photographs, speech or datasets that escape human eyes. But their intelligence is niche and somewhat narrow.
Ask machines to make judgements on data — that’s something different. This is amusingly illustrated on the Spurious Correlations website, which pokes fun at the idea that correlation equals causation. Graphs on the site show that the number of people who have drowned in swimming pools correlates with the number of Nicolas Cage films, and that divorce rates in the US state of Maine correlate with the per capita consumption of margarine.
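A naive pattern-matcher falls into exactly this trap. The sketch below (with invented yearly figures, not the site’s real data) shows two series that merely both trend upwards producing a Pearson correlation of 1.0, despite having no causal connection whatsoever.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Made-up yearly figures for illustration: both simply rise over time
films     = [1, 2, 3, 4, 5]        # hypothetical film releases per year
drownings = [10, 12, 14, 16, 18]   # hypothetical drownings per year

print(round(pearson(films, drownings), 2))  # 1.0 -- yet no causal link
```

The machine reports a perfect correlation; only a human recognises that the relationship is meaningless.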
Humans can see immediately that these correlations are coincidental, but what about machines? They can match data but may struggle with understanding its significance. In certain risk management situations, there are no right or wrong answers, often just well-informed opinions based on judgement. That’s why at Web Shield we believe that the underwriter of the future will be a hybrid between man and machine. Decisioning will be machine assisted. The algorithms will do the heavy lifting. They will make recommendations and escalate only the cases where human input adds value.
The humans will take over when the machines reach their limits. This frees them up to focus on complex, higher-value work, and to take decisions too important to delegate wholly to machines. This hybrid model should produce better results than either working alone. Clearly, it’ll have implications for workflow and processes: humans will hand off to machines, and machines to humans, in a type of human-machine tango.
Q: You’ve mentioned workflow and processes, what are some other implications of the hybrid model?
Christian: Machine learning is only as good as the data put into it and the humans that use and check it. This encompasses everything from defining the use cases, programming the algorithms, iterating the models and determining the limits and metrics of success.
Iterating the models takes a combination of man and machine. Data hygiene, feature extraction and model updates used to be manual, labour-intensive processes. They have become automated, yet the technology is simply not good enough to take over completely. There have been some high-profile failures. These are inevitable, by the way, and add to our understanding of the problem. For example, the Microsoft chatbot that was subjected to a coordinated Twitter attack and had to be taken offline after making racist and sexist comments.
Users need to ask themselves regularly — just as they would with monitoring parameters — is the machine learning working properly? Humans must define the limits of machines as well as effectiveness and success rates.
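One simple way to operationalise that question is a rolling accuracy check against a human-defined floor. The sketch below is a toy example under assumed numbers; the 0.9 floor and the outcome format are illustrative, and a production system would track richer metrics.

```python
def needs_review(recent_outcomes, min_accuracy=0.9):
    """Flag the model for human review when its accuracy on recently
    verified decisions drops below a human-defined floor.
    1 = decision later confirmed correct, 0 = confirmed wrong.
    The 0.9 floor is an illustrative figure, set by humans, not machines."""
    accuracy = sum(recent_outcomes) / len(recent_outcomes)
    return accuracy < min_accuracy

print(needs_review([1, 1, 1, 1, 1, 1, 1, 1, 1, 1]))  # False -- performing well
print(needs_review([1, 1, 0, 0, 1]))                 # True -- escalate to humans
```

The machine computes the metric; the humans decide where the floor sits and what happens when it is breached.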
Q: What do businesses need to have in place to take advantage of AI or machine learning?
Christian: Firstly, I’d say that this is about cultural change. It’s as much about ‘headware’ or mindset as about hardware or software. A conceptual up-skilling in what is possible — and desirable — is required, particularly at senior management level. This will have all sorts of implications, from how AI is integrated into business as usual to organisational structure and the nature and future of work done in specific areas.
Secondly, build the business case for AI or machine learning. Does it help to do a particular part of the value chain better, faster, cheaper, more accurately or with higher quality? If so, investigate further. Remember though, AI won’t be the answer to every question.
Third, take a long, hard look at your data. Payments has traditionally been data-rich, but how can this be harnessed by risk professionals? Is it the right type of data, in the right format and clean enough? Can this be supplemented by non-payment or external data? The algorithms are only as good as the data they are trained on.
As I’ve said above, machine learning is only as good as the humans that use and check it. So lastly, consider how you’d operate and monitor your systems. This is to ensure that they are resilient and remain compliant. But also, so you can refine and improve them. A learning system doesn’t always behave tomorrow the way it behaves today.
So, while culture, business cases, data, algorithms and machine systems are important, the success of AI depends on having enough of the right staff — the right humans.
For more information
Web Shield products draw on a range of public and proprietary data sources and use a machine learning element to improve over time. To find out more about our on-boarding and monitoring tools, Investigate and Monitor, please visit https://www.webshield.com/.