The digital transformation of existing business models is a theme of our age. Robotic process automation (RPA) is one of the many acronyms to have found its way into the terminology of businesses today. I highlighted the potential for telecoms to digitalise their business models in this post. Klaus Schwab of the World Economic Forum, in his book “The Fourth Industrial Revolution”, describes the current era as one of “new technologies that are fusing the physical, digital and biological worlds, impacting all disciplines, economies and industries, and even challenging ideas about what it means to be human”.
The financial services business is one that is regularly touted as being ripe for transformation, with fintech being the much-hyped buzzword. I last posted here and here on fintech and insurtech, the use of technological innovation to squeeze savings and efficiency out of existing insurance business models.
Artificial intelligence (AI) is used as an umbrella term for everything from process automation to robotics to machine learning. As referred to in this post on equity markets, the Financial Stability Board (FSB) released a report called “Artificial Intelligence and Machine Learning in Financial Services” in November 2017. In relation to insurance, the FSB report highlights that “some insurance companies are actively using machine learning to improve the pricing or marketing of insurance products by incorporating real-time, highly granular data, such as online shopping behaviour or telemetrics (sensors in connected devices, such as car odometers)”. Other areas highlighted include machine learning techniques in claims processing and the preventative benefits of remote sensors connected through the internet of things. Consultants are falling over themselves to get on the bandwagon, as reports from the likes of Deloitte, EY, PwC, Capgemini, and Accenture illustrate.
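To give a flavour of what “incorporating real-time, highly granular data” into pricing might look like in practice, here is a deliberately simple sketch of a multiplicative rating scheme using telematics signals. The base premium, the rating factors, and their thresholds are all invented for illustration; real pricing models are far more sophisticated (and typically fitted statistically rather than hand-coded).

```python
# Toy illustration (not any insurer's actual model): a multiplicative
# rating scheme that adjusts a base motor premium using telematics
# signals such as annual mileage and hard-braking frequency.
# All factor values below are made up for the sketch.

BASE_PREMIUM = 400.0  # hypothetical base annual premium

def mileage_factor(annual_km: float) -> float:
    """More exposure on the road -> higher expected claim frequency."""
    if annual_km < 8_000:
        return 0.90
    if annual_km < 20_000:
        return 1.00
    return 1.25

def braking_factor(hard_brakes_per_100km: float) -> float:
    """Frequent hard braking is a common proxy for risky driving."""
    if hard_brakes_per_100km < 1:
        return 0.95
    if hard_brakes_per_100km < 3:
        return 1.00
    return 1.30

def telematics_premium(annual_km: float, hard_brakes_per_100km: float) -> float:
    return round(BASE_PREMIUM
                 * mileage_factor(annual_km)
                 * braking_factor(hard_brakes_per_100km), 2)

print(telematics_premium(6_000, 0.5))   # cautious low-mileage driver -> 342.0
print(telematics_premium(25_000, 4.0))  # high-mileage, harsh braking -> 650.0
```

The point of the sketch is the structure, not the numbers: granular behavioural data feeds directly into the rating factors, which is exactly why the FSB flags both the pricing benefits and the data-ownership questions discussed below.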
One of the better recent reports on the topic is this one from the reinsurer SCOR. CEO Denis Kessler states that “information is becoming a commodity, and AI will enable us to process all of it” and that “AI and data will take us into a world of ex-ante predictability and ex-post monitoring, which will change the way risks are observed, carried, realized and settled”. Kessler believes that AI will impact the insurance sector in three ways:
- Reducing information asymmetry and bringing comprehensive and dynamic observability in the insurance transaction,
- Improving efficiencies and insurance product innovation, and
- Creating new “intrinsic” AI risks.
I found one article in the SCOR report by Nicolas Miailhe of the Future Society at the Harvard Kennedy School particularly interesting. Whilst talking about the overall AI market, Miailhe states that “the general consensus remains that the market is on the brink of a revolution, which will be characterized by an asymmetric global oligopoly” and the “market is qualified as oligopolistic because of the association between the scale effects and network effects which drive concentration”. When referring to an oligopoly, Miailhe highlights two global blocks – GAFA (Google/Apple/Facebook/Amazon) and BATX (Baidu/Alibaba/Tencent/Xiaomi). In the insurance context, Miailhe states that “more often than not, this will mean that the insured must relinquish control, and at times, the ownership of data” and that “the delivery of these new services will intrude heavily on privacy”.
At a more mundane level, Miailhe highlights the difficulty for stakeholders such as auditors and regulators to understand the business models of the future which “delegate the risk-profiling process to computer systems that run software based on “black box” algorithms”. Miailhe also cautions that bias can infiltrate algorithms as “algorithms are written by people, and machine-learning algorithms adjust what they do according to people’s behaviour”.
In a statement that seems particularly relevant today in terms of the current issue around Facebook and data privacy, Miailhe warns that “the issues of auditability, certification and tension between transparency and competitive dynamics are becoming apparent and will play a key role in facilitating or hindering the dissemination of AI systems”.
Now, that’s not something you’ll hear from the usual cheer leaders.
I flipped quickly (ok, not so quickly) through the SCOR paper. I’m not sure whether these people actually worked with AI tools or whether someone (a helpful consultant, maybe) fed them the right buzzwords. In my book, AI is a cool tool for interpolating and extrapolating existing data and finding patterns (think face recognition); AI, however, has no understanding of “meaning” (that’s why, a while ago, face-recognition software identified black people as monkeys). The latter is a nice, albeit toy, example of AI risk, btw, since the person(s) who trained the model presumably didn’t plan for that.
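That “no understanding of meaning” point can be made concrete with a tiny sketch: a nearest-centroid classifier trained on data that under-represents one group will confidently mislabel members of that group, with no sense that anything is wrong. All features, labels, and numbers here are invented for the illustration.

```python
# Toy sketch of bias from unrepresentative training data: a
# nearest-centroid classifier whose training set barely samples
# class "B" mislabels a typical class-B point as class "A".

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def classify(x, centroids):
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    # Pick the label whose centroid is closest to x.
    return min(centroids, key=lambda label: dist2(x, centroids[label]))

# Class "A" is well sampled; class "B" has one unrepresentative
# example, so B's centroid sits far from where most real B points live.
train = {
    "A": [(1.0, 1.0), (1.2, 0.9), (0.8, 1.1), (1.1, 1.0)],
    "B": [(0.9, 1.0)],  # a single outlier; most B points cluster near (3, 3)
}
centroids = {label: centroid(pts) for label, pts in train.items()}

print(classify((3.0, 3.0), centroids))  # a typical "B" point -> labelled "A"
```

The model isn’t malicious; it just generalises from whatever it was shown, which is exactly how skewed training data (like the face-recognition example) turns into a biased system.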
I found it interesting (read: mildly funny) that there is also a blockchain initiative for insurance companies. If you have time and are interested, this blog provides a nice antidote to the prevailing blockchain hype: https://davidgerard.co.uk/blockchain/ (Attack of the 50 Foot Blockchain)
Have a nice long weekend!
Thanks Eddie, I’ll check out the blockchain blog as I am curious to separate the hype from the reality of blockchain.
Here’s a funny example of teething problems in this bold new world from the SCOR report…
“A well-known extreme example of this is Microsoft’s attempt at launching an AI chatbot on Twitter. Microsoft launched a Twitter account for “Tay” on the 23rd March 2016 and within 16 hours the Twitter account had to be shut down. Tay’s tweets became racist, political and inflammatory. Twitter trolls trained Tay by tweeting it provocative comments causing the AI chatbot to learn and develop an offensive bias. This example highlights the risk of bias in data and how algorithms should be well validated and audited to avoid such problems.”
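The mechanism behind the Tay failure can be sketched in a few lines: a bot that learns replies directly from user input, with no moderation or validation step, will simply parrot whatever a coordinated group feeds it most often. The bot and its inputs below are entirely invented; this is only the shape of the flaw, not Microsoft’s actual system.

```python
# Minimal sketch of why unfiltered online learning is fragile:
# the bot repeats its most frequently seen input, so a flood of
# coordinated messages steers its output.

from collections import Counter

class NaiveBot:
    def __init__(self):
        self.seen = Counter()

    def learn(self, message: str):
        # No moderation or validation step -- this is the flaw.
        self.seen[message] += 1

    def reply(self) -> str:
        # Parrots whatever it has seen most often.
        return self.seen.most_common(1)[0][0]

bot = NaiveBot()
bot.learn("hello there")
for _ in range(50):            # a flood of coordinated provocation
    bot.learn("inflammatory slogan")

print(bot.reply())  # -> "inflammatory slogan"
```

This is the sense in which the report’s conclusion holds: the validation and auditing have to sit between the input stream and the learning step, because the learner itself has no notion of which inputs it should refuse.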
Have a great Easter, all the best.