
CAT models and fat tails: an illustration from Florida

I have posted numerous times now (to the point of boring myself!) on the dangers of relying on a single model for estimating losses from natural catastrophes. The practice is reportedly widespread in the rapidly growing ILS fund sector. The post on assessing probable maximum losses (PMLs) outlined the sources of uncertainty in such models, especially the widely used commercial vendor models from RMS, AIR and EqeCat.

The Florida Commission on Hurricane Loss Projection Methodology (FCHLPM) was created in 1995 as an independent panel of experts to evaluate computer models used for setting rates for residential property insurance. The website of the FCHLPM contains a treasure trove of information on each of the modelling firms who provide detailed submissions in a pre-set format. These submissions include specifics on the methodology utilised in their models and the output from their models for specified portfolios.

In addition to the three vendor modellers (RMS, AIR, EqeCat), there are also details on two other models approved by the FCHLPM, namely Applied Research Associates (ARA) and the Florida Public Hurricane Loss Model (FPHLM), developed by Florida International University.

In one section of the mandated submissions, each model's predictions of the number of annual landfall hurricanes over a 112-year period (1900 to 2011 is the historical reference period) are outlined. Given the issue over the wind speed classification of Superstorm Sandy as it hit land and the use of hurricane deductibles, I assume that the definition of landfall hurricanes is consistent between the FCHLPM submissions. The graph below shows the assumed frequency over 112 years of 0, 1, 2, 3 or 4 landfall hurricanes from the five modellers.

Landfalling Florida Hurricanes (click to enlarge)

As one of the objectives of the FCHLPM is to ensure insurance rates are neither excessive nor inadequate, it is unsurprising that each of the models closely matches known history. It does however demonstrate that the models are, in effect, limited by that known history (100 odd years of climatic experience is limited by any stretch!). One item to note is that most of the models have a higher frequency for 1 landfall hurricane and a lower frequency for 2 landfall hurricanes when compared with the 100 odd years of history. Another item of note is that only EqeCat and FPHLM assign any frequency to 4 landfall hurricanes in any one year over the reference period.
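To make that kind of comparison concrete, the sketch below fits a simple Poisson frequency to an assumed 112-year landfall history and compares the implied probabilities of 0 to 4 landfalls in a year with the observed relative frequencies. The counts used are illustrative placeholders, not the actual FCHLPM reference data or any vendor's output.

```python
import math

# Illustrative 112-year history of annual Florida landfall counts
# (placeholder figures, NOT the actual FCHLPM reference data).
years_with_n_landfalls = {0: 75, 1: 28, 2: 7, 3: 1, 4: 1}   # sums to 112 years

total_years = sum(years_with_n_landfalls.values())
total_events = sum(n * y for n, y in years_with_n_landfalls.items())
lam = total_events / total_years        # fitted mean annual landfall frequency

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

print(f"Fitted annual frequency: {lam:.3f}")
for n, observed_years in sorted(years_with_n_landfalls.items()):
    print(f"{n} landfalls/year: observed {observed_years / total_years:.3f}, "
          f"Poisson {poisson_pmf(n, lam):.3f}")
```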

Each of the modellers is also required to detail loss exceedance estimates for two assumed risk portfolios. The first portfolio is set by the FCHLPM and is limited to 3 construction types, geocoded by ZIP code centroid (always be wary of the anti-selection dangers in relying on centroid data, particularly in large counties or zones with a mixture of coastal and inland exposure), and specific policy conditions. The second portfolio is the 2007 Florida Hurricane Catastrophe Fund aggregate personal and commercial residential exposure data. The graphs below show the results for the different models, with the dotted lines representing the 95th percentile margin of error around the average of all 5 model outputs.

click to enlarge

Modelled Losses Florida Notional Residential Portfolio; Modelled Losses FHCF Commercial Residential Portfolio

As would be expected, uncertainty over losses increases as the return periods increase. The tails of catastrophe model outputs clearly need to be treated with care and need to be fattened up to take account of uncertainty. Relying solely on a single point from a single model is just asking for trouble.
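As a rough illustration of how such an exceedance comparison can be assembled, the sketch below simulates annual losses for five hypothetical models and derives the mean curve with an approximate 95% band around it at a few return periods. The lognormal parameters and simulation approach are assumptions for illustration only, not the FCHLPM submission figures or any vendor methodology.

```python
import numpy as np

rng = np.random.default_rng(42)
return_periods = np.array([20, 50, 100, 250, 500])          # years
exceedance_probs = 1.0 / return_periods

# Placeholder annual-loss distributions for five hypothetical models
# (lognormals with slightly different parameters; NOT vendor output).
model_params = [(21.0, 1.00), (21.1, 0.90), (20.9, 1.10), (21.0, 1.05), (21.2, 0.95)]
n_sims = 100_000

pml_curves = []
for mu, sigma in model_params:
    annual_losses = rng.lognormal(mu, sigma, n_sims)
    # The loss exceeded with probability p is the (1 - p) quantile of annual losses.
    pml_curves.append(np.quantile(annual_losses, 1.0 - exceedance_probs))
pml_curves = np.array(pml_curves)

mean_pml = pml_curves.mean(axis=0)
# Crude 95% band around the 5-model average at each return period.
band = 1.96 * pml_curves.std(axis=0, ddof=1) / np.sqrt(len(model_params))

for rp, m, b in zip(return_periods, mean_pml, band):
    print(f"1-in-{rp:>3}: mean {m / 1e9:.2f}bn, 95% band +/- {b / 1e9:.2f}bn")
```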

Insurance & capital market convergence hype is getting boring

As the horde of middle aged (still mainly male) executives pack up their chinos and casual shirts, the overriding theme coming from this year’s Monte Carlo Rendez-Vous seems to be the impact of the new ILS capacity or “convergence capital” on the reinsurance and specialty insurance sector. The event, described in a Financial Times article as “the kind of public display of wealth most bankers try to eschew”, is where executives start the January 1 renewal discussions with clients in quick meetings crammed together in the luxury location.

The relentless chatter about the new capital will likely leave many bored senseless of the subject. Many may now hope that, just like previous hot discussion topics that were worn out (Solvency II anybody?), the topic fades into the background as the reality of the office hits them next week.

The more traditional industry hands warned of the perils of the new capacity for underwriting discipline. John Nelson of Lloyd’s highlighted that “some of the structures being used could undermine some of the qualities of the insurance model”. Tad Montross of Gen Re cautioned that “bankers looking to replace lost fee income” are pushing ILS as the latest asset class but that the hype will die down when “the inability to model extreme weather events accurately is better understood”. Amer Ahmed of Allianz Re predicted the influx “bears the danger that certain risks get covered at inadequate rates”. Torsten Jeworrek of Munich Re said that “our research shows that ILS use the cheapest model in the market” (presumably a sideswipe at AIR).

Other traditional reinsurers with an existing foothold in the ILS camp were more circumspect. Michel Lies of Swiss Re commented that “we take the inflow of alternative capital seriously but we are not alarmed by it”.

Brokers and other interested service providers were the loudest cheerleaders. Increasing the size of the pie for everybody, igniting coverage innovation in the traditional sector, and cheap retrocession capacity were some of the advantages cited. My favourite piece of new risk management speak came from Aon Benfield’s Bryon Ehrhart in the statement that “reinsurers will innovate their capital structures to turn headwinds from alternative capital sources into tailwinds”. In other words, as Tokio Millennium Re’s CEO Tatsuhiko Hoshina said, the new capital offers an opportunity to leverage increasingly diverse sources of retrocessional capacity. An arbitrage market (as a previous post concluded)?

All of this talk reminds me of the last time that “convergence” was a buzz word in the sector in the 1990s. For my sins, I was an active participant in the market then. Would the paragraph below from an article on insurance and capital market convergence by Graciela Chichilnisky of Columbia University in June 1996 sound out of place today?

“The future of the industry lies with those firms which implement such innovation. The companies that adapt successfully will be the ones that survive. In 10 years, these organizations will draw the map of a completely restructured reinsurance industry”

The current market dynamics are driven by low risk premia in capital markets bringing investors into competition with the insurance sector through ILS and collateralised structures. In the 1990s, capital inflows after Hurricane Andrew into reinsurers, such as the “class of 1992”, led to overcapacity in the market which resulted in a brutal and undisciplined soft market in the late 1990s.

Some (re)insurers sought to diversify their business base by embracing innovation in transaction structures and/or by looking at expanding the risks they covered beyond traditional P&C exposures. Some entered head first into “finite” type multi-line multi-year programmes that assumed structuring could protect against poor underwriting. An over-reliance on the developing insurance models used to price such transactions, particularly in relation to assumed correlations between exposures, left some blind to basic underwriting disciplines (sound familiar, CDOs?). Others tested (unsuccessfully) the limits of risk transfer and legality by providing limited or no risk coverage to distressed insurers (e.g. FAI & HIH in Australia) or by providing reserve protection that distorted regulatory requirements (e.g. AIG & Cologne Re) by way of back-to-back contracts and murky disclosures.

Others, such as the company I worked for, looked to cover financial risks on the basis that mixing insurance and financial risks would allow regulatory capital arbitrage benefits through increased diversification (and might even offer an inflation & asset price hedge). Some well-known examples* of the financial risks assumed by different (re)insurers at that time include the Hollywood Funding pool guarantee, the BAe aircraft leasing income coverage, Rolls Royce residual asset guarantees, dual trigger contingent equity puts, Toyota motor residual value protection, and mezzanine corporate debt credit enhancement coverage.

Many of these “innovations” ended badly for the industry. Innovation in itself should never be dismissed as it is a feature of the world we live in. In this sector however, innovation at the expense of good underwriting is a nasty combination, as the experience of the 1990s surely teaches us.

Bringing this back to today, I recently discussed the ILS market with a well informed and active market participant. He confirmed that some of the ILS funds have experienced reinsurance professionals with the skills to question the information in the broker pack and who do their own modelling and underwriting of the underlying risks. He also confirmed, however, that there are many funds (some with well known sponsors and hungry mandates) that, in the words of Kevin O’Donnell of RenRe, rely “on a single point” from a single model provided to them by an “expert” third party.

This conversation got me thinking again about the comment from Edward Noonan of Validus that “the ILS guys aren’t undisciplined; it’s just that they’ve got a lower cost of capital.” Why should an ILS fund have a lower cost of capital than a pure property catastrophe reinsurer? There is the operational risk of a reinsurer to consider. However, there is also operational risk involved with an ILS fund, given items such as multiple collateral arrangements and other functions contracted out to third-party service providers. Expenses shouldn’t be a major differing factor between the two models. The only item that may justify a difference is liquidity, particularly as capital market investors are so focussed on a fast exit. However, should this be material given the exit option of simply selling the equity in many of the quoted property catastrophe reinsurers?

I am not convinced that the ILS funds should have a material cost of capital advantage. Maybe the quoted reinsurers should simply revise their shareholder return strategies to be more competitive with the yields offered by the ILS funds. Indeed, traditional reinsurers in this space may argue that they are able to offer more attractive yields than a fully collateralised provider, all other things being equal, given their more leveraged business model.

*As a complete aside, an article this week in the Financial Times on the anniversary of the Lehman Brothers collapse and the financial crisis highlighted the role of poor lending practices as a primary cause of a significant number of the bank failures. This article reminded me of a “convergence” product I helped design back in the late 1990s. Following changes in accounting rules, many banks were no longer allowed to hold general loan loss provisions against their portfolios. These provisions (akin to an IBNR type bulk reserve) had been held in addition to specific loan provisions (akin to case reserves). I designed an insurance structure under which banks would pay premiums, previously set aside as general provisions, for coverage against massive deterioration in their loan provisions. After an initial risk period in which the insurer could lose money (which was required to demonstrate an effective risk transfer), the policy would act as a fully funded coverage similar to a collateralised reinsurance. In effect the banks could pay away some of the profits in good years (assuming the initial risk period was set over the good years!) for protection in the bad years. The attachment of the coverage was designed in a way similar to the old continuous ratcheting retention aggregate reinsurance coverage popular at the time amongst some German reinsurers. After numerous discussions, no banks were interested in a cover that offered them an opportunity to use profits in the good times to buy protection for a rainy day. They didn’t think they needed it. Funny that.

Assessing reinsurers’ catastrophe PMLs

Prior to the recent market wobbles over what a post-QE world will look like, a number of reinsurers with relatively high property catastrophe exposures suffered pullbacks in their stock due to fears about catastrophe pricing pressures (the subject of a previous post). Credit Suisse downgraded Validus recently, stating that “reinsurance has become more of a commodity due to lower barriers to entry and vendor models.”

As we head deeper into the US hurricane season, it is worth reviewing the disclosures of a number of reinsurers in relation to catastrophe exposures, specifically their probable maximum losses or PMLs. In S&P’s influential 2012 annual publication – Global Reinsurance Highlights – there is an interesting article called “Just How Much Capital Is At Risk”. The article looked at net PMLs as a percentage of total adjusted capital (TAC), an S&P determined calculation, and also examined the relative tail heaviness of PMLs disclosed by different companies. The article concluded that “by focusing on tail heaviness, we may have one additional tool to uncover which reinsurers could be most affected by such an event”. In other words, not only is the amount of the PMLs for different perils important but the shape of the curve across different return periods (e.g. 1 in 50 years, 1 in 100 years, 1 in 250 years, etc.) is also an important indicator of relative exposures. The graphs below show the net PMLs as a percentage of TAC and the net PMLs as a percentage of aggregate limits for the S&P sample of insurers and reinsurers.

click to enlarge

PML as % of S&P capital

PML as % of aggregate limit

Given the uncertainties around reported PMLs discussed in this post, I particularly like seeing PMLs as a percentage of aggregate limits. In the days before the now common use of catastrophe models (by such vendor firms as RMS, AIR and EqeCat), underwriters would subjectively calculate their PMLs as a percentage of their maximum possible loss or MPL (in the past, when unlimited coverage was more common, an estimate of the maximum loss was made, whereas today the MPL is simply the sum of aggregate limits). This practice, being subjective, was obviously open to abuse (and often proved woefully inadequate). It is interesting to note however that some of the commonly used MPL percentages applied for peak exposures in certain markets were higher than those produced today by the vendor models at high return periods.
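The S&P article does not spell out how it measures tail heaviness, so the sketch below uses a simple assumed proxy – the ratio of the 1-in-250 PML to the 1-in-100 PML – alongside the 1-in-100 PML as a percentage of TAC and of aggregate limit. All figures are hypothetical.

```python
# Hypothetical reinsurer disclosures, all in $m (illustrative figures only).
reinsurers = {
    # name:         (pml_100, pml_250, aggregate_limit, total_adjusted_capital)
    "Reinsurer A": (450.0, 700.0, 2_000.0, 3_000.0),
    "Reinsurer B": (300.0, 650.0, 1_500.0, 2_500.0),
}

for name, (pml_100, pml_250, agg_limit, tac) in reinsurers.items():
    # Assumed proxy for tail heaviness: how quickly the PML curve grows beyond 1-in-100.
    tail_ratio = pml_250 / pml_100
    print(f"{name}: 1-in-100 PML = {pml_100 / tac:.0%} of TAC, "
          f"{pml_100 / agg_limit:.0%} of aggregate limit, "
          f"tail ratio (250yr/100yr) = {tail_ratio:.2f}")
```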

The vendor modellers themselves are very open about the limitations of their models and regularly discuss the sources of uncertainty within them. There are two main areas of uncertainty – primary and secondary – highlighted in the models. Some also refer to tertiary uncertainty in the use of model outputs.

Primary uncertainty relates to the uncertainty in determining events in time, in space, in intensity, and in spatial distribution. There is often limited historical data (sampling error) to draw upon, particularly for large events. For example, scientific data on the physical characteristics of historical events such as hurricanes or earthquakes is only as reliable, over the past 100 odd years, as the instruments available at the time of the event. Even then, due to changes in factors like population density, the record of the area over which many events occurred may lack important physical elements of the event. Also, there are many unknowns relating to catastrophic events and we are continuously learning new facts, as this article on the 2011 Japan quake illustrates.

Each of the vendor modellers builds a catalogue of possible events by supplementing known historical events with other possible events (i.e. they fit a tail to the known sample). Even though the vendor modellers stress that they do not predict events, their event catalogues determine implied probabilities that are now dominant in the catastrophe reinsurance price discovery process. These catalogues are subject to external validation from institutions such as the Florida Commission, which certifies models for use in setting property rates (and has an interest in ensuring rates stay as low as possible).

Secondary uncertainty relates to data on possible damage from an event, such as soil type, property structures, construction materials, location and aspect, building standards and similar factors (other factors include liquefaction, landslides, fires following an event, business interruption, etc.). Considerable strides have been made, especially in the US, in reducing secondary uncertainty in developed insurance markets as databases have grown, although Asia and parts of Europe still lag.

A Guy Carpenter report from December 2011 on uncertainty in models estimates crude confidence levels of -40%/+90% for PMLs at national level and -60%/+170% for PMLs at State level. These are significant levels and illustrate how all loss estimates produced by models must be treated with care and a healthy degree of scepticism.
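To put those ranges in context, the short sketch below applies them to an illustrative 1-in-100 PML point estimate of $500m; the point estimate is hypothetical.

```python
# Crude Guy Carpenter uncertainty ranges quoted above, applied to a
# hypothetical 1-in-100 PML point estimate of $500m.
pml_point_estimate = 500.0                      # $m, illustrative only
ranges = {"national level": (-0.40, 0.90), "state level": (-0.60, 1.70)}

for level, (low, high) in ranges.items():
    lo = pml_point_estimate * (1 + low)
    hi = pml_point_estimate * (1 + high)
    print(f"{level}: plausible range ${lo:.0f}m to ${hi:.0f}m")
```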

Disclosures by reinsurers have also improved in recent years in relation to specific events. In the recent past, many reinsurers simply disclosed point estimates for their largest losses. Some still do. Indeed some, such as the well-respected Renaissance Re, still do not disclose any such figures on the basis that such disclosures are often misinterpreted by analysts and investors. Those that do disclose figures do so with comprehensive disclaimers. One of my favourites is “investors should not rely on information provided when considering an investment in the company”!

Comparing disclosed PMLs between reinsurers is rife with difficulty. Issues to consider include how firms define zonal areas, whether they use a vendor model or a proprietary model, whether model options such as storm surge are included, how model results are blended, and annual aggregation methodologies. These are all critical considerations and the detail provided in reinsurers’ disclosures is often insufficient to make a detailed determination. An example of the difficulty is comparing the disclosures of two of the largest reinsurers – Munich Re and Swiss Re. Both disclose PMLs for Atlantic wind and European storm on a 1 in 200 year return basis. Munich Re’s net loss estimate for each event is 18% and 11% respectively of its net tangible assets and Swiss Re’s net loss estimate for each event is 11% and 10% respectively of its net tangible assets. However, the comparison is of limited use as Munich’s is on an aggregate VaR basis and Swiss Re’s is on the basis of pre-tax impact on economic capital of each single event.

Most reinsurers disclose their PMLs on an occurrence exceedance probability (OEP) basis. The OEP curve is essentially the probability distribution of the largest single event loss in a year, combining the distribution of the loss amount given an event with an assumed frequency of events. Other bases used for determining PMLs include an aggregate exceedance probability (AEP) basis or an average annual loss (AAL) basis. AEP curves show aggregate annual losses, and how single event losses are aggregated or ranked in calculating the AEP (each vendor has its own methodology) is critical to understand for comparisons. The AAL is the mean value of a loss exceedance probability distribution and is the expected loss per year averaged over a defined period.
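The distinction between OEP, AEP and AAL is easiest to see from a simulated year loss table. Below is a minimal sketch, assuming a simple Poisson event frequency and lognormal severities (illustrative parameters, not any vendor's methodology): the OEP at a return period is read off the largest event loss per year, the AEP off the total annual loss, and the AAL is the mean annual loss.

```python
import numpy as np

rng = np.random.default_rng(7)
n_years = 100_000
event_freq = 0.6                    # assumed mean number of events per year
mu, sigma = 19.0, 1.2               # assumed lognormal severity parameters

max_event_loss = np.zeros(n_years)  # drives the OEP curve
annual_loss = np.zeros(n_years)     # drives the AEP curve

for year in range(n_years):
    n_events = rng.poisson(event_freq)
    if n_events:
        losses = rng.lognormal(mu, sigma, n_events)
        max_event_loss[year] = losses.max()
        annual_loss[year] = losses.sum()

for rp in (50, 100, 250):
    q = 1.0 - 1.0 / rp
    print(f"1-in-{rp}: OEP {np.quantile(max_event_loss, q) / 1e6:.0f}m, "
          f"AEP {np.quantile(annual_loss, q) / 1e6:.0f}m")
print(f"AAL: {annual_loss.mean() / 1e6:.1f}m")
```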

An example of the potentially misleading nature of disclosed PMLs is the case of Flagstone Re. Formed after Hurricane Katrina, Flagstone’s business model was based upon building a portfolio of catastrophe risks with an emphasis on non-US risks. Although US risks carry the highest premium (by value and rate on line), they are also the most competitive. The idea was that superior risk premia could be delivered by a diverse portfolio sourced from less competitive markets. Flagstone reported their annual aggregate PML on a 1 in 100 and a 1 in 250 year basis. As the graph below shows, Flagstone was hit by a frequency of smaller losses in 2010 and particularly in 2011 that resulted in aggregate losses far in excess of their reported PMLs. The losses invalidated their business model and the firm was sold to Validus in 2012 at approximately 80% of book value. Flagstone’s CEO, David Brown, stated at the closing of the sale that “the idea was that we did not want to put all of our eggs in the US basket and that would have been a successful approach had the pattern of the previous 30 to 40 years continued”.

click to enlarge

Flagstone CAT losses

The graphs below show a sample of reinsurers’ PML disclosures as at end Q1 2013 as a percentage of net tangible assets. Some reinsurers show their PMLs as a percentage of capital including hybrid or contingent capital. For the sake of comparison, I have not included such hybrid or contingent capital in the net tangible assets calculations in the graphs below.

US Windstorm (click to enlarge)

US windstorm PMLs 2013

US & Japan Earthquake (click to enlarge)

US & Japan PMLs 2013

As per the S&P article, it’s important to look at the shape of the PML curves as well as the levels for different events. For example, the shape of Lancashire’s PML curve stands out in the earthquake graphs and for the US Gulf of Mexico storm. Montpelier for US quake and AXIS for Japan quake also stand out in terms of the increased exposure levels at higher return periods. In terms of the level of exposure, Validus stands out on US wind, Endurance on US quake, and Catlin & Amlin on Japan quake.

Any investor in this space must form their own view on the likelihood of major catastrophes when determining their own risk appetite. When assessing the probabilities of historical events reoccurring, care must be taken to ensure past events are viewed on the basis of existing exposures. Irrespective of whether you are a believer in the impact of climate change (which I am), graphs such as the one below (based on Swiss Re data inflated to 2012) are often used in the industry. They imply an increasing trend in insured losses in the future.

Historical Insured Losses: 1990 to 2012 historical insured catastrophe losses, Swiss Re (click to enlarge)

The reality is that, as the world population increases, resulting in higher housing density in catastrophe-exposed areas such as coastlines, the past needs to be viewed in terms of today’s exposures. Pictures of Ocean Drive in Florida in 1926 and in 2000 best illustrate the point (click to enlarge).

Ocean Drive Florida 1926 & 2000

There has been interesting analysis performed in the past on exposure adjusting, or normalising, US hurricane losses by academics, most notably Roger Pielke (as the updated graph on his blog shows). Running historical US windstorms through commercial catastrophe models with today’s exposure data on housing density and construction types shows a trend similar to that in Pielke’s graph. The historical picture from these analyses is far more variable, and a lot less certain, than the increasing trend in the graph based on Swiss Re data. These losses suggest that the 1970s and 1980s may have been decades of reduced US hurricane activity relative to history and that more recent decades are returning to a more “normal” level of US windstorm activity.
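For readers unfamiliar with the normalisation approach, the sketch below shows the general idea, broadly in the spirit of the Pielke et al. work: a historical loss is scaled up by inflation, real wealth per capita and the population of the affected coastal counties between the event year and today. The adjustment factors and the example loss are placeholders, not figures from the published studies.

```python
def normalise_loss(loss_m, inflation_factor, wealth_factor, population_factor):
    """Scale a historical loss ($m) to today's exposure, broadly in the spirit
    of the Pielke et al. normalisation: each factor is the ratio of today's
    value to the event-year value (all placeholders here)."""
    return loss_m * inflation_factor * wealth_factor * population_factor

# Hypothetical example: a $100m loss in 1950 with assumed adjustment factors.
normalised = normalise_loss(100.0, inflation_factor=9.5,
                            wealth_factor=3.0, population_factor=4.0)
print(f"Normalised loss: ${normalised:,.0f}m in today's exposure terms")
```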

In conclusion, reviewing PMLs disclosed by reinsurers provides an interesting insight into potential exposures to specific events. However, the disclosures are only as good as the underlying methodology used in their calculation. Hopefully, in the future, further detail will be provided to investors on these PML calculations so that real and meaningful comparisons can be made. Notwithstanding what PMLs may show, investors need to understand the potential for catastrophic events and adapt their risk appetite accordingly.