
A model of efficiency

Greg Hendrick, president, XL Re Ltd
Tim Mardon, president and chief underwriting officer, Torus Re
Underwriting department, Alterra
Tom Larsen, senior vice president, EQECAT, Inc.
Paul VanderMarck, chief products officer, RMS
Adam Champion, manager of client relations, AIR Worldwide

With the 2010 hurricane season approaching, we reflect on past criticism that the underwriting community was over-reliant on cat modelling. How do you respond, and how do you see this changing in the future?

Greg Hendrick, president, XL Re Ltd

While some participants in the insurance market had become over-reliant on cat models, many within the industry exercised effective cat management by blending modelling with traditional aggregate controls.

As we look ahead to the next event, the cat models have improved dramatically since 2004/2005, and data quality across the industry has improved, although on some of the larger commercial exposures the data still requires a great deal of refinement.

The improvement in cat models has been driven by several factors. First, information from the claims activity of 2004 and 2005 has now been incorporated into the models. Second, the probability of loss has increased as modellers have used the near-term view of events rather than the long-term average to reflect current climate conditions. Last, but not least, the models now better reflect a number of secondary characteristics and the way those characteristics impact loss size in an event. For instance, square footage of a home turned out to be a driving factor of loss in 2004 and 2005, and the damageability functions of the models now reflect that differential.
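The effect of the near-term frequency view described above can be illustrated with a toy calculation. The rates and losses below are hypothetical, not any vendor's actual figures:

```python
# Toy illustration of how a near-term (elevated) hurricane frequency
# assumption raises modelled expected annual loss relative to the
# long-term historical average. All figures are hypothetical.
LONG_TERM_RATE = 1.7   # landfalling hurricanes per year (full history)
NEAR_TERM_RATE = 2.0   # elevated near-term rate (current climate phase)

def expected_annual_loss(event_rate, mean_loss_per_event):
    """Simple frequency-times-severity expected annual loss."""
    return event_rate * mean_loss_per_event

# With severity held fixed, the near-term view uplifts loss cost by
# the ratio of the two frequency assumptions (~18% here).
uplift = (expected_annual_loss(NEAR_TERM_RATE, 100.0)
          / expected_annual_loss(LONG_TERM_RATE, 100.0))
```

Severity changes from secondary characteristics (such as square footage) would compound on top of this frequency uplift.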

While cat models and data quality across the industry have improved, there’s always a concern that bad habits will repeat themselves, as the industry has demonstrated so well in the past. It is unfortunate, but inevitable, that as pricing margins begin to erode, some risk assumers resort to old habits of pushing the envelope to generate more return.

However, most prudent catastrophe risk management programmes have an element of zonal aggregate limitation. In addition to looking at various return period losses and other metrics of catastrophe models, at XL Re, we also track every dollar of limit that we have exposed in numerous zones around the world. This allows us to balance the model output with an absolute downside loss perspective. It’s always healthy to blend modelling with the conservative approach.
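The zonal aggregate control described above can be sketched in a few lines. This is a minimal illustration, assuming hypothetical zone names and caps, not a description of XL Re's actual system:

```python
from collections import defaultdict

# Illustrative zonal aggregate control: track every dollar of limit
# exposed per cat zone and flag zones that breach an absolute cap,
# independently of any modelled return-period loss. Caps are hypothetical.
ZONE_CAPS = {
    "US-Southeast-Wind": 500_000_000,
    "Japan-Quake": 300_000_000,
}

def aggregate_exposure(policies):
    """Sum exposed limit by zone from (zone, limit) records."""
    totals = defaultdict(float)
    for zone, limit in policies:
        totals[zone] += limit
    return dict(totals)

def breached_zones(policies, caps=ZONE_CAPS):
    """Return zones whose aggregate exposed limit exceeds the cap."""
    totals = aggregate_exposure(policies)
    return {z: t for z, t in totals.items() if t > caps.get(z, float("inf"))}
```

The point of such a check is exactly the blending the author describes: the cap binds on absolute downside exposure even when modelled return-period losses look acceptable.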

Tim Mardon, president and chief underwriting officer, Torus Re

In our view, cat models are an indispensable tool in assessing cat risk. They form the backbone of objective risk pricing, portfolio risk management, reinsurance buying strategy, rating agency risk assessments, portfolio concentration, event loss reserving and diversification measurements, and are critical for enterprise risk management, capital raising and placement of capital markets hedges.

Any company that does not use them cannot perform these functions adequately. That being said, the accuracy of such models clearly underpins much of a sophisticated company’s operations and therefore needs to be treated with caution. We note the following shortfalls/problems that need to be considered:

• Input data accuracy (garbage in = garbage out)

• Data reflects a static point in time and underlying exposures may change

• Models are better developed for some risk zones/perils (e.g. US hurricane versus Chile earthquake)

• There are material differences between models for some zone territories

• Some perils are not modelled or are very weakly modelled (e.g. brushfire)

• Models are never precisely correct for any event due to unique and unmeasurable characteristics

• Models rely on historical data (which may be adjusted) for storm frequency, and the latest research on earthquake likelihood may not be correct

• Models are becoming increasingly costly

• Models do not quantify all risks that may arise from a catastrophe (e.g. contingent credit risk, ‘political’ risk of after-the-fact changes to policy interpretation).

To allow for these shortfalls, we supplement model usage in a number of ways:

• Adjustments are made to base model output for data quality, portfolio growth and unmodelled losses (e.g. ALAE)

• We have separate internal models for non-headline perils

• We use at least two models

• Frequency assumptions are continually reviewed and adjusted as deemed necessary

• Reliance on policy features (e.g. deductibles) needs to be considered in relation to the litigiousness of the states/countries covered

• We have other risk management guidelines (e.g. aggregate limits by zone, ‘RDS’-type scenarios) in case of model failure

• We also review cedant experience in major losses versus model loss costs

• Loss reserves are also checked by simpler market share calculations

• Where there is a large divergence in model loss costs, we are more cautious in writing a risk

• Raw model ‘ROEs’ are treated with different levels of attractiveness depending on the underlying exposures (e.g. homeowners better modelled than commercial or retro business).
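One of the checks above, validating event loss reserves against a simpler market-share calculation, is easy to make concrete. The sketch below is illustrative only, with hypothetical figures and a hypothetical tolerance, not Torus Re's actual method:

```python
# Illustrative market-share cross-check for event loss reserves:
# a top-down estimate (industry loss x estimated market share) used
# to sanity-check the bottom-up modelled reserve.
def market_share_reserve(industry_loss, market_share):
    """Top-down reserve estimate from industry loss and share."""
    return industry_loss * market_share

def reserve_flag(modelled_reserve, industry_loss, market_share,
                 tolerance=0.5):
    """Flag if the modelled reserve deviates from the top-down
    benchmark by more than the given relative tolerance."""
    benchmark = market_share_reserve(industry_loss, market_share)
    return abs(modelled_reserve - benchmark) / benchmark > tolerance
```

A flag does not mean the model is wrong; it means the bottom-up and top-down views disagree enough to warrant investigation.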

In summary, cat models need to be used in an educated fashion and treated with a degree of scepticism, but they are a critical tool for both reinsurers and insurers with cat exposure. We believe that underwriters who constantly criticise model use in an uninformed fashion are excusing the writing of risks below model-required profitability, rather than demanding higher pricing where the models are inadequate, a position with which we would have sympathy.

We perceive that the biggest problem in the future is over-reliance on one or two modelling companies, which are now leveraging their market position to make their products increasingly unaffordable for smaller companies, leading to a lack of divergent opinion.

There is also the concern of companies effectively ‘outsourcing’ one of the most critical assumptions in their business (this obviously depends on the degree to which the models are used unadjusted).

We think this may ultimately lead to larger companies increasingly developing their own internal cat models and/or industry collaboration to create an independent model agency at lower cost.

Underwriting department, Alterra

Our response to the question posed is: yes, we rely heavily on models, because they are the best tools available. But as sound and experienced underwriters, we know many of the clients and their business exposures, which we weigh in our pricing and decisions. We also have proprietary in-house tools to measure risk, which are used in conjunction with the models we purchase.

The model makers

Tom Larsen, senior vice president, EQECAT, Inc.

EQECAT provides natural catastrophe loss estimation tools to the underwriting community to assist in the quantification of risk from low-frequency, high-severity catastrophes such as hurricanes and earthquakes. These models are only one of many tools used by underwriters in their assessment of insurance risk. In a competitive underwriting environment, underwriters are continually seeking better tools to help in the differentiation and management of risk.

CAT models are models of the physical world in which we live and must address many uncertainties. EQECAT works diligently to address these uncertainties, both by developing ever more sophisticated models and by educating our clients and the market about the impacts and sensitivities of these uncertainties, including the limitations of our knowledge.

EQECAT’s response to criticism of the underwriting community’s use of models is to communicate more effectively the strengths and limitations of CAT models. EQECAT’s goal of transparency in catastrophe modelling seeks to empower the underwriter with a greater understanding of what we know for certain and of the randomness that cannot be controlled.

The future brings many aspects of catastrophe modelling to the forefront. Better tools to audit input exposure data will enable sensitivity testing of the impacts of this uncertainty. More refined models will enable underwriters to understand the sensitivity of various modelling components with instantaneous and detailed risk sensitivity reports. Enhanced post-event reporting and discussions will enable better real-time understanding of the insights gained from events around the world and the implications of this new understanding to areas that have not yet seen their ‘big event’.

Paul VanderMarck, chief products officer, RMS

Catastrophe models are central to critical decisions, ranging from pricing and underwriting individual accounts to managing and allocating capital on a global scale. When appropriately used, they are essential tools for making informed judgements. But it is important to properly understand the limitations, assumptions and capabilities of the models to fully appreciate the uncertainty behind any one decision.

At RMS, we work with clients to architect and deliver modelling solutions that give greater insight into areas of uncertainty. Our next-generation models for earthquake risks released last year make explicit how different views in scientific understanding can have a significant bearing on loss estimates. The models allow companies to explore how uncertainty in the science that underlies the models propagates through to sensitivities in loss estimates.

We’re at the early stages of a paradigm shift in catastrophe modelling and, increasingly, the focus is turning from the ability to quantify the risk to the ability to understand more robustly the uncertainty in that quantification of risk and use that explicitly when making decisions.

Adam Champion, manager of client relations, AIR Worldwide

Bermuda re/insurers employ many of the most sophisticated users of catastrophe models, who understand how models are developed, as well as how best to employ them to differentiate and manage risk. This includes how to interpret model output and how to incorporate probabilistic, historical and real-time loss estimates to improve their underwriting and portfolio management.

They also know that using model results to inform underwriting decisions requires an understanding of the uncertainty inherent in the model. AIR works closely with our clients to help them better understand the assumptions and methods behind the models, as well as how to best interpret the results, so that they can do their job more effectively and with greater confidence.

In addition, most Bermuda companies now realise that the reliability of model output is only as good as the quality of the exposure data input into catastrophe models. AIR is working closely with companies to help them better assess and improve the quality of exposure data.

Catastrophe models are an important tool for differentiating and managing catastrophe risk, and AIR is committed to helping our clients incorporate the most comprehensive and robust view of risk into all of their decision-making processes.