The perils of a modelled world: technical challenges for catastrophe modelling

The adoption and incorporation of catastrophe models and their outputs into the enterprise risk management frameworks and processes of (re-)insurance companies for property and associated lines has developed apace since the early 1990s (see the article on catastrophe modelling by Marc Melsen).

Solvency II in Europe and similar regulatory regimes around the world have further accelerated this process.

Despite the widespread adoption of these models, from direct underwriting to reinsurance and capital modelling, challenges remain, not least those related to data, model complexity and model validity.

“You can have data without information, but you cannot have information without data” – Daniel Keys Moran

As catastrophe models have become more widespread and, driven by advances in the scientific community, more complex, (re-)insurers have been encouraged, and to some degree required, to collect increasingly granular data on their exposures, such as:

1. Location – detailed address-level information enabling precise geolocation is becoming the norm.

2. Occupancy & Construction – detailed specification of the building's usage and construction materials is expected.

3. Building Characteristics – data on building height, age and size increasingly differentiate insurers.

4. Financial conditions – knowledge of deductibles and limits of cover for all lines of business is expected.
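The four categories above map naturally onto a per-location exposure record. A minimal sketch of such a record and of how the financial conditions (deductible and limit) shape a gross loss into a net insured loss; all field names and the class are illustrative, not any vendor's or model's actual schema:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ExposureRecord:
    """Illustrative per-location exposure record (hypothetical schema)."""
    # 1. Location: address-level detail for precise geolocation
    address: str
    latitude: float
    longitude: float
    # 2. Occupancy & construction (free-text here; real schemas use code lists)
    occupancy: str
    construction: str
    # 3. Building characteristics
    storeys: Optional[int] = None
    year_built: Optional[int] = None
    floor_area_sqm: Optional[float] = None
    # 4. Financial conditions of the cover
    deductible: float = 0.0
    limit: float = 0.0


def apply_financial_terms(gross_loss: float, deductible: float, limit: float) -> float:
    """Net insured loss: gross loss net of the deductible, capped at the limit."""
    return min(max(gross_loss - deductible, 0.0), limit)
```

For example, a gross loss of 100,000 against a 10,000 deductible and a 50,000 limit yields a net insured loss of 50,000, while a 5,000 loss falls entirely within the deductible.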

About the author

Tim Fewtrell MSc

is Head of Catastrophe Analytics for EMEA North & East at Willis Re and a member of the Dutch client team.