AI and Machine Learning in Insurance: can we ensure Fairness and Explainability?


Artificial Intelligence (AI) and Machine Learning (ML) are rapidly marching into finance. Credit scoring, fraud detection and quant investing are just a few of the finance areas where ML-powered models are already used. Such models are also finding their way into the insurance sector.


Recently, the AFM (Autoriteit Financiële Markten, the Dutch Authority for the Financial Markets) published a paper, ‘Technologie richting 2023: De toekomst van verzekeren en toezicht’ (Technology towards 2023: the future of insurance and supervision), in which it warns of risks associated with new technologies and digitalization, and outlines approaches to mitigating these risks. The AFM argues that, by using vast amounts of (often personal) data and advanced models, insurers could exclude certain customers or dramatically increase their insurance premiums. Existing legislation, such as the acceptance obligation or the GDPR privacy regulation, does not offer a full solution here: the acceptance obligation can be circumvented by charging an exorbitant premium for a policy, and customers can be nudged into (unwittingly) consenting to the use of their personal data with a single mouse click.

The two main objections of regulators such as the ECB and DNB against the use of ML models in material decisions are their lack of explainability and the potential unfairness of their outcomes. These concerns are echoed by the AFM for the insurance sector.
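To make the fairness concern concrete, one commonly cited (illustrative) check is demographic parity: comparing a model's acceptance rates across customer groups. The sketch below is a minimal example with synthetic data; the group labels, decisions, and metric choice are assumptions for illustration, not part of the AFM paper or any regulatory standard.

```python
# Illustrative fairness check: demographic parity difference.
# All data is synthetic; this is a sketch of one possible metric,
# not a prescribed regulatory test.

def acceptance_rate(decisions):
    """Share of applicants the model accepts (decision == 1)."""
    return sum(decisions) / len(decisions)

def demographic_parity_difference(decisions_a, decisions_b):
    """Absolute gap in acceptance rates between two customer groups."""
    return abs(acceptance_rate(decisions_a) - acceptance_rate(decisions_b))

# Synthetic model decisions (1 = policy offered at a normal premium).
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # acceptance rate 0.75
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # acceptance rate 0.375

gap = demographic_parity_difference(group_a, group_b)
print(f"demographic parity difference: {gap:.3f}")  # prints 0.375
```

A large gap does not by itself prove unfair treatment, but it flags a model for closer inspection, which is exactly the kind of monitoring regulators expect insurers to build around ML-driven decisions.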

Read the rest of this article under Download.

About the author

Svetlana Borovkova

Dr. S. Borovkova is a partner and Head of Quantitative Modelling at risk advisory firm Probability & Partners. She is also an Associate Professor of Quantitative Finance and Risk Management at Vrije Universiteit Amsterdam. Find her research on SSRN and her columns on various issues in finance in Financial Investigator.