
Correlation contagion

I fear that the daily announcements of bankruptcies, specifically in the retail sector, are just the beginning of the journey into our new reality. Despite relatively positive noises from US banks about short-term loan provisions and rebounding consumer spending, the real level of defaults, particularly in the SME sector, will not become clear until the sugar high of direct government stimulus is withdrawn. In the UK, for example, the furlough scheme is paying 80% of the wages of approximately 9 million workers and is currently costing as much in monthly government spending as the NHS. This UK subsidy is due to be withdrawn in October. In the US, the $600 weekly boost to unemployment payments is due to expire at the end of July.

The S&P forecasts for the default rate on US junk debt, shown below, illustrate one current projection. The many uncertainties about the course of the pandemic, and its economic impacts over the coming months and quarters, will dictate which scenario lies in our future.

It is therefore not surprising that the oft-highlighted concerns about the leveraged loan market have been getting a lot of recent attention, as the following articles in the New Yorker and the Atlantic dramatically attest – here and here. I would recommend both articles to all readers.

I must admit to initially feeling that the dangers have been exaggerated in these articles in the name of journalistic licence. After all, the risks associated with the leveraged loan market have been known for some time, as this post from last year illustrates, and therefore we should be assured that regulators and market participants are on top of the situation from a risk management perspective. Right? I thought I would dig a little further into the wonderful world of collateralized loan obligations, commonly referred to as CLOs, to find out.

First up is a September report from the Bank for International Settlements (BIS) on the differences between collateralised debt obligations (CDOs) and CLOs. I was heartened to learn that “there are significant differences between the CLO market today and the CDO market prior to the great financial crisis”. The report highlighted the areas of difference as well as the areas of similarity as follows:

CLOs are less complex, avoiding the use of credit default swaps (CDS) and resecuritisations; they are little used as collateral in repo transactions; and they are less commonly funded by short-term borrowing than was the case for CDOs. In addition, there is better information about the direct exposures of banks. That said, there are also similarities between the CLO market today and the CDO market then, including some that could give rise to financial distress. These include the deteriorating credit quality of CLOs’ underlying assets; the opacity of indirect exposures; the high concentration of banks’ direct holdings; and the uncertain resilience of senior tranches, which depend crucially on the correlation of losses among underlying loans.

The phrase “uncertain resilience of senior tranches” and the reference to correlation sent a cold shiver down my spine. According to the BIS, the senior AAA tranches now sit higher up the structure, with more subordination beneath them (e.g. making up around 65% of the deal versus 75%-80% in the bad old days), as this primer from Guggenheim illustrates:
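To make the attachment mechanics concrete, below is a minimal sketch of how a pool loss flows to tranches. The attachment and detachment points are illustrative assumptions only (an equity, mezzanine and senior split consistent with a roughly 65% AAA tranche), not figures from the BIS report or the Guggenheim primer.

```python
# Illustrative sketch of tranche loss mechanics. The attachment/detachment
# points below are assumptions for illustration only.

def tranche_loss(pool_loss_pct: float, attach: float, detach: float) -> float:
    """Loss to a tranche, as a fraction of tranche notional, given a
    fractional loss on the underlying loan pool."""
    width = detach - attach
    return min(max(pool_loss_pct - attach, 0.0), width) / width

# Hypothetical structure: equity 0-10%, mezzanine 10-35%, senior AAA 35-100%
# (i.e. the AAA tranche is ~65% of the deal, with 35% subordination beneath it).
for pool_loss in (0.05, 0.15, 0.30, 0.40):
    print(f"pool loss {pool_loss:.0%}: "
          f"equity {tranche_loss(pool_loss, 0.00, 0.10):.0%}, "
          f"mezz {tranche_loss(pool_loss, 0.10, 0.35):.0%}, "
          f"senior {tranche_loss(pool_loss, 0.35, 1.00):.0%}")
```

The senior tranche is only touched once pool losses exceed its attachment point, which is why its resilience hinges on how correlated defaults are across the pool.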

As in the good old CDO days, the role of the rating agencies is critical to the CLO ecosystem. This May report from the European Securities and Markets Authority (ESMA) shows that EU regulators are focused on the practices the rating agencies are following in relation to their CLO ratings. I was struck by the paragraph below in the executive summary (not exactly the reassurance I was hoping for).

The future developments regarding the Covid 19 outbreak will be an important test for CLO methodologies, notably by testing: i) the approaches and the assumptions for the modelling of default correlation among the pool of underlying loans; and ii) the sensitivity of CLO credit ratings to how default and recovery rates are calibrated. Moreover, the surge of covenant-lite loans prevents lenders and investors from early warning indicators on the deterioration of the creditworthiness of the leveraged loans.

As regular readers will know, the extensive use of correlations in financial modelling is a source of much blog angst on my part (examples of previous posts include here, here and here). As I may have previously explained, I worked for over a decade in a quant-driven firm back in the 1990s that totally underestimated correlations in a tail event across supposedly diverse risk portfolios. The firm I worked for did not survive long after the events of 9/11 and the increased correlation across risk classes that resulted. It was therefore with much bewilderment that I watched the blow-up of complex financial structures during the financial crisis and the gross misunderstanding of tail correlations that were absent from the historical data sets used to calibrate quant models. It is therefore with some trepidation that I see default correlation being discussed yet again in relation to the current COVID19 recession. To paraphrase Buffett, bad loans do not become better by simply repackaging them. Ditto for highly leveraged loans with the volume turned up to 11. As many commentators have highlighted in recent years and the Fed noted recently (see this post), leverage in terms of debt to EBITDA ratios in leveraged loans had crept back up to pre-financial crisis levels even before the COVID19 global outbreak.

Next up, I found this blog from MSCI in early April insightful. By applying market-implied default rates and volatilities from late March to MSCI’s CLO model of a sample 2019 CLO deal with 300 loans diversified across 10 industry sectors, they arrived at some disturbing results. Using 1-year default rates for individual risks of approximately 20% to 25% across most sectors (which does not seem outrageous to me when talking about leveraged loans – they are, after all, highly leveraged!), they estimated the probability of joint defaults for the sample 2019 CLO deal at 1 and 3-year horizons, as below.

The MSCI analysis also showed the implied cross-sector default-rate correlations and a comparison with the correlations seen in the financial crisis, as below.

Even to me, some of these correlations (particularly those marked in red) look too elevated, and the initial market reaction to COVID19 of shooting first and asking questions later may explain why. The MSCI article concludes with an emphasis on default correlation, as below.

During periods of low default correlation, even with relatively high loan default rates, the tail probability of large total default is typically slim. If the current historically high default-rate correlations persist — combined with high loan default rates and default-rate volatility — our model indicates that a large portion of the examined pool may default and thereby threaten higher-credit tranches considered safe before the crisis.
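The mechanism MSCI describes – default correlation fattening the tail of pool losses – can be illustrated with a toy one-factor Gaussian copula. To be clear, this is not MSCI’s model; the pool size, default probability and factor correlation below are assumptions for illustration only.

```python
# A generic one-factor Gaussian copula sketch of joint defaults in a CLO-like
# pool. Not MSCI's model: pool size, default rate and correlation are assumptions.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
n_loans, n_sims = 300, 50_000
pd_1yr = 0.20            # assumed 1-year default probability per loan
rho = 0.30               # assumed correlation to a single common factor
thresh = norm.ppf(pd_1yr)

common = rng.standard_normal((n_sims, 1))          # systematic factor per scenario
idio = rng.standard_normal((n_sims, n_loans))      # idiosyncratic noise per loan
asset = np.sqrt(rho) * common + np.sqrt(1 - rho) * idio
default_rate = (asset < thresh).sum(axis=1) / n_loans   # pool default rate per scenario

for q in (0.50, 0.90, 0.99):
    print(f"{q:.0%}ile pool default rate: {np.quantile(default_rate, q):.1%}")
print("P(more than 40% of the pool defaults):", (default_rate > 0.40).mean())
```

Re-running the sketch with a higher rho shows how quickly the tail of the pool default distribution fattens, which is exactly the quantity that determines whether the senior tranches get hit.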

I decided to end my CLO journey by looking at what one rating agency was saying. S&P states that the factors which determine their CLO ratings are the weighted average rating of a portfolio, the diversity of the portfolio (in terms of obligors, industries, and countries), and the weighted average life of the portfolio. Well, we know we are dealing with highly leveraged loans, the junkiest if you like, with an average pre-COVID rating of B (likely lower in today’s post-COVID environment), so I have focused on portfolio diversity as the most important risk mitigant. Typically, CLOs have 100 to 300 loans, which should give a degree of comfort, although in a global recession the number of loans matters less given the common risky credit profile of each. In my view, the more important differentiator in this recession is its character in terms of the split between sector winners and losers, as the extraordinary rally in the equity market of the technology giants dramatically illustrates.

S&P estimated that as at year-end 2019, the average CLO contained approximately 200 loans and had an average industry diversity metric of 25. It’s important to stress the word “average” as it can hide all sorts of misdemeanours. Focusing on the latter metric, I investigated further the industry sector classifications used by S&P. These classifications are different from, and more specific than, the usual broad industry sector classifications used in equity markets, given the nature of the leveraged loan market. There are 66 industry sectors in all used by S&P, although 25 of the sectors make up 80% of the loans by size. Too many spurious variables is the myth that often lies at the altar of quant portfolio diversification. To reflect the character of this recession, I judgmentally grouped the industry sectors into three exposure buckets – high, medium and low. By number of sectors the split was roughly a third for each bucket. However, by loan amount the split was 36%, 46% and 18% for the high, medium and low buckets respectively. Over 80% in the high and medium buckets! That simplistic view of the exposure would make me very dubious about the real amount of diversification in these portfolios given the character of this recession. As a result, I would question the potential risk to the higher credit quality tranches of CLOs if their sole defence is diversification.

Maybe the New Yorker and Atlantic articles are not so sensationalist after all.

Confounding correlation

Nassim Nicholas Taleb, the dark knight or rather the black swan himself, said that “anything that relies on correlation is charlatanism”. I am currently reading the excellent “The Signal and the Noise” by Nate Silver. In Chapter 1 of the book he has a nice piece on CDOs as an example of a “catastrophic failure of prediction”, where he points to certain CDO AAA tranches which were rated on an assumption of a 0.12% default rate and which eventually suffered an actual default rate of 28%, an error factor of over 200 times!

Silver cites a simplified CDO example of 5 risks used by his friend Anil Kashyap at the University of Chicago to demonstrate the difference in default rates if the 5 risks are assumed to be totally independent versus totally dependent. It got me thinking about how such a simplified example could illustrate the impact of applied correlation assumptions. Correlations between core variables are critical to many financial models; they are commonly used in most credit models and will be a core feature of insurance internal models (which, under Solvency II, will be used to calculate a firm’s own regulatory solvency requirements).

So I set up a simple model (all of my models are generally so) of 5 risks and looked at the impact of varying the correlation between each risk from 100% to 0% (i.e. from totally dependent to totally independent). The model assumes a 20% probability of default for each risk and the results, based upon 250,000 simulations, are presented in the graph below. What it does show is that even at a high level of correlation (e.g. 90%) the impact is considerable.

5 risk pool with correlations from 100% to 0%
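For anyone who wants to replicate the idea, below is a sketch of the kind of 5-risk model described above. The original post does not specify how the dependence was implemented, so the Gaussian copula used here is an assumption; the 20% default probability and 250,000 simulations follow the description.

```python
# Sketch of a 5-risk correlated default model. The Gaussian copula is an
# assumption; the original post does not specify how dependence was modelled.
import numpy as np
from scipy.stats import norm

def prob_all_default(p=0.20, corr=0.9, n_risks=5, n_sims=250_000, seed=1):
    """Probability that all risks default under a Gaussian copula with a
    common pairwise correlation and marginal default probability p."""
    rng = np.random.default_rng(seed)
    cov = np.full((n_risks, n_risks), corr) + (1 - corr) * np.eye(n_risks)
    z = rng.multivariate_normal(np.zeros(n_risks), cov, size=n_sims)
    defaults = z < norm.ppf(p)                 # correlated default indicators
    return defaults.all(axis=1).mean()

for corr in (1.0, 0.9, 0.75, 0.5, 0.25, 0.0):
    # cap at 0.9999 so the covariance matrix stays positive definite
    p_all = prob_all_default(corr=min(corr, 0.9999))
    print(f"corr {corr:.0%}: P(all 5 default) = {p_all:.3%}, "
          f"as % of fully dependent (20%) = {p_all / 0.20:.1%}")
```

At 0% correlation the joint default probability collapses towards 0.2^5 (about 0.03%), versus 20% under full dependence, which is the diversification effect shown in the second graph below.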

The graph below shows the default probabilities as a percentage of the totally dependent levels (i.e. 20% for each of the 5 risks). In effect it shows the level of diversification that results from varying correlation from 0% to 100%. It underlines how misestimating correlation can confound model results.

Default probabilities & correlations

The imperfect art of climate change modelling

The completed Working Group I report from the 5th Intergovernmental Panel on Climate Change (IPCC) assessment was published in January (see previous post on the summary report in September). One of the few definite statements made in the report was that “global mean temperatures will continue to rise over the 21st century if greenhouse gas (GHG) emissions continue unabated”. How we measure the impact of such changes is therefore incredibly important. A recent article in the FT by Robin Harding, which highlighted the shortcomings of models used to assess the impact of climate change, therefore caught my attention.

The article referred to two academic papers, one by Robert Pindyck and another by Nicholas Stern, which contained damning criticism of so-called integrated assessment models (IAMs) – the models that combine climate and economic modelling.

Pindyck states that “IAM based analyses of climate policy create a perception of knowledge and precision, but that perception is illusory and misleading”. Stern also criticizes IAMs stating that “assumptions built into the economic modelling on growth, damages and risks, come close to assuming directly that the impacts and costs will be modest and close to excluding the possibility of catastrophic outcomes”.

These comments remind me of Paul Wilmott, the influential English quant, who included in his Modeller’s Hippocratic Oath the following: “I will remember that I didn’t make the world, and it doesn’t satisfy my equations” (see Quotes section of this website for more quotes on models).

In his paper, Pindyck breaks the IAMs currently in use into 6 core components, as the graphic below illustrates.

Integrated Assessment Models

Pindyck highlights a number of the main elements of IAMs which involve a considerable amount of arbitrary choice, including climate sensitivity and the damage and social welfare (utility) functions. He cites important feedback loops in the climate system as difficult, if not impossible, to determine. Although there has been some good work in specific areas like agriculture, Pindyck is particularly critical of the damage functions, saying many are essentially made up. The final pieces, social utility and the rate of time preference, are essentially policy parameters which are open to political forces and therefore subject to considerable variability (and that’s a polite way of putting it).

The point about damage functions is an interesting one as these are also key determinants in the catastrophe vendor models widely used in the insurance sector. As a previous post on Florida highlighted, even these specific and commercially developed models result in varying outputs.

One example of IAMs directly influencing current policymakers is those used by the Interagency Working Group (IWG), which under the Obama administration is the entity that determines the social cost of carbon (SCC), defined as the net present damage done by emitting a marginal ton of CO2 equivalent (CO2e), used in regulating industries such as the petrochemical sector. Many IAMs are available (the sector even has its own journal – The Integrated Assessment Journal!) and the IWG relies on three of the oldest and most well known: the Dynamic Integrated Climate and Economy (DICE) model, the Policy Analysis of the Greenhouse Effect (PAGE) model, and the fun-sounding Climate Framework for Uncertainty, Negotiation, and Distribution (FUND) model.

The first IWG paper in 2010 included an exhibit, reproduced below, summarizing the economic impact of rising temperatures based upon the 3 models.

Climate Change & Impact on GDP (IWG SCC 2010)

To be fair to the IWG, they do highlight that “underlying the three IAMs selected for this exercise are a number of simplifying assumptions and judgments reflecting the various modelers’ best attempts to synthesize the available scientific and economic research characterizing these relationships”.

The IWG released an updated paper in 2013 in which revised SCC estimates were presented based upon a number of amendments to the underlying models. Included in these changes were revisions to the damage functions and to climate sensitivity assumptions. The effect of the changes on the average and 95th percentile SCC estimates, at varying discount rates (which are obviously key determinants of the SCC given the long-term nature of the impacts), can be clearly seen in the graph below.

Social Cost of Carbon IWG 2010 vs 2013
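To see why the discount rate is such a powerful lever on the SCC, consider a purely hypothetical damage path from a marginal tonne of CO2e, discounted at rates in the range used by the IWG. The damage figures in the sketch below are illustrative assumptions only; the point is the sensitivity of the net present value to the rate.

```python
# Why the discount rate matters so much to the SCC: damages from a marginal
# tonne of CO2e accrue over centuries, so the net present value is highly
# sensitive to the rate. The damage path here is purely hypothetical.
damage_per_year = [1.0] * 200          # assume $1/yr of marginal damage for 200 years

def npv(damages, rate):
    """Net present value of a stream of annual damages at a constant discount rate."""
    return sum(d / (1 + rate) ** t for t, d in enumerate(damages, start=1))

for rate in (0.025, 0.03, 0.05):
    print(f"discount rate {rate:.1%}: NPV = {npv(damage_per_year, rate):.1f}")
```

Even in this toy example the present value roughly halves in moving from a 2.5% to a 5% rate, which is why the choice of rate is as politically charged as the damage functions themselves.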

Given the magnitude of the SCC changes, it is not surprising that critics, including vested interests such as petrochemical lobbyists, are highlighting the uncertainty in IAMs as a counter against the charges. The climate change deniers love any opportunity to discredit the science, as they demonstrated so ably with the 4th IPCC assessment. The goal has to be to improve modelling as a risk management tool that results in sensible preventative measures. Pindyck emphasises that his criticisms should not be an excuse for inaction. He believes we should follow a risk management approach focused on the risk of catastrophe, with models updated as more information emerges, and uses the threat of nuclear oblivion during the Cold War as a parallel. He argues that “one can think of a GHG abatement policy as a form of insurance: society would be paying for a guarantee that a low-probability catastrophe will not occur (or is less likely)”. Stern too advises that our focus should be on potential extreme damage and that the economics community needs to refocus and combine current insights with “an examination and modelling of ways in which disruption and decline can occur”.

Whilst I was looking into this subject, I took the time to look over the completed 5th assessment report from the IPCC. First, it is important to stress that the IPCC acknowledges the array of uncertainties in predicting climate change. They state the obvious in that “the nonlinear and chaotic nature of the climate system imposes natural limits on the extent to which skilful predictions of climate statistics may be made”. They assert that the use of multiple scenarios and models is the best way we have of determining “a wide range of possible future evolutions of the Earth’s climate”. They also accept that “predicting socioeconomic development is arguably even more difficult than predicting the evolution of a physical system”.

The report uses a variety of terms in its findings which I summarised in a previous post and reproduce below.

IPCC uncertainty terms

Under the medium-term prediction section (Chapter 11), which covers the period 2016 to 2035 relative to the reference period 1986 to 2005, notable predictions include:

  • The projected change in global mean surface air temperature will likely be in the range 0.3 to 0.7°C (medium confidence).
  • It is more likely than not that the mean global mean surface air temperature for the period 2016–2035 will be more than 1°C above the mean for 1850–1900, and very unlikely that it will be more than 1.5°C above the 1850–1900 mean (medium confidence).
  • Zonal mean precipitation will very likely increase in high and some of the mid-latitudes, and will more likely than not decrease in the subtropics. The frequency and intensity of heavy precipitation events over land will likely increase on average in the near term (this trend will not be apparent in all regions).
  • It is very likely that globally averaged surface and vertically averaged ocean temperatures will increase in the near term. It is likely that there will be increases in salinity in the tropical and (especially) subtropical Atlantic, and decreases in the western tropical Pacific over the next few decades.
  • In most land regions the frequency of warm days and warm nights will likely increase in the next decades, while that of cold days and cold nights will decrease.
  • There is low confidence in basin-scale projections of changes in the intensity and frequency of tropical cyclones (TCs) in all basins to the mid-21st century and there is low confidence in near-term projections for increased TC intensity in the North Atlantic.

The last bullet point is especially interesting for the insurance sector involved in providing property catastrophe protection. Graphically, I have reproduced two interesting projections below (note: no volcanic activity is assumed).

IPCC temperature near-term projections

Under the longer term projections in Chapter 12, the IPCC makes the definite statement that opened this post. It also states that it is virtually certain that, in most places, there will be more hot and fewer cold temperature extremes as global mean temperatures increase and that, in the long term, global precipitation will increase with increased global mean surface temperature.

I don’t know about you, but it seems to me a sensible course of action to take the scenarios that the IPCC is predicting with virtual certainty and apply a risk management approach to how we can prepare for or counteract extremes, as recommended by experts such as Pindyck and Stern.

The quote “it’s better to do something imperfectly than to do nothing perfectly” comes to mind. In this regard, for the sake of our children at the very least, we should embrace the imperfect art of climate change modelling and figure out how best to use these models in getting things done.

ILS Fund versus PropertyCat Reinsurer ROEs

Regular readers will know that I have queried how insurance-linked securities (ILS) funds, currently so popular with pension funds, can produce a return on equity that is superior to that of a diversified property catastrophe reinsurer, given that the reinsurer only has to hold a fraction of its aggregate limit issued as risk-based capital whereas all of the limits in ILS are collateralised. The recent FT article which contained some interesting commentary from John Seo of Fermat Capital Management got me thinking about this subject again. John Seo referred to the cost advantage of ILS funds and asserted that reinsurers staffed with overpaid executives “can grow again, but only after you lay off two out of three people”. He damned the traditional sector with “these guys have been so uncreative, they have been living off earthquake and hurricane risks that are not that hard to underwrite”.

Now, far be it from me to defend the offshore chino-loving reinsurance executives with a propensity for large salaries and low taxation. However, I still can’t see how the “excessive” overheads John Seo refers to could offset the capital advantage that a traditional property catastrophe reinsurer would have over ILS collateral requirements.

I understood the concept of ILS structures that provided blocks of capacity at higher layers, backed by high quality assets, which could (and did until recently) command a higher price than the traditional market. Purchasers of collateralised coverage could justify paying a premium over traditional coverage by way of the large limits on offer and a lower counterparty credit risk (whilst lowering concentration risk to the market-leading reinsurers). This made perfect sense to me and provided a complementary, yet different, product to that offered by traditional reinsurers. However, we are now in a situation whereby such collateralised reinsurance providers may be moving to compete directly with traditional coverage on price and attachment.

To satisfy my unease around the inconsistency in equity returns, I decided to do some simple testing. I set up a model of a reasonably diversified portfolio of 8 peak catastrophic risks (4 US and 4 international wind and quake peak perils). The portfolio broadly reflects the market and is split 60:40 US:International by exposure and 70:30 by premium. Using aggregate exceedance probability (EP) curves for each of the 8 perils, based on extrapolated industry losses as a percentage of limits offered across standard return periods, the model is set up to test differing risk premiums (i.e. rates on line, or ROL) for each of the 8 perils in the portfolio and their returns. For the sake of simplicity, zero correlation is assumed between the 8 perils.

The first main assumption in the model is the level of risk-based capital needed by the property catastrophe reinsurer to compete against the ILS fund. Reviewing some of the Bermudian property catastrophe players, equity (common & preferred) varies between 280% and 340% of risk premiums (net of retrocessions). Where debt is also included, ratios of up to 400% of net written premiums can be seen. However, the objective is to test different premium levels, and therefore setting capital levels as a function of premiums distorts the results. As reinsurers’ capital levels are now commonly assessed on the basis of stressed economic scenarios (e.g. PMLs as a % of capital), I did some modelling and concluded that a reasonable capital assumption for the reinsurer is the level required at the 99.99th percentile, or a 1 in 10,000 year return period (the graph below shows the distribution assumed). As the graph illustrates, this equates to a net combined ratio (the net figure includes all expenses) for the reinsurer of approximately 450% for the average risk premium assumed in the base scenario (the combined ratio at the 99.99th level will change as the average portfolio risk premium changes).

PropCAT Reinsurer Combined Ratio Distribution
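For illustration, the sketch below shows the kind of calculation involved: simulate annual aggregate losses for 8 independent perils (each capped at its aggregate limit), set capital at the 99.99th percentile of the total loss distribution, and express it as a combined ratio of premium. The limits, rates on line and loss distributions are assumptions chosen only to give roughly a 700 bps average ROL; this is not the actual model used in this post.

```python
# A highly simplified sketch of the setup described: 8 independent peak perils
# with annual aggregate losses capped at each peril's limit, and capital set at
# the 99.99th percentile of total annual losses. All parameters (limits, rates
# on line, frequencies, severities) are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(7)
n_sims = 200_000
limits = np.array([100, 80, 60, 60, 40, 40, 30, 30], dtype=float)   # aggregate limits
rols = np.array([0.09, 0.08, 0.07, 0.07, 0.06, 0.06, 0.05, 0.05])   # risk premium rates
premium = (limits * rols).sum()                                      # ~700bps average ROL

losses = np.zeros(n_sims)
for lim in limits:
    hit = rng.random(n_sims) < 0.10                 # assumed chance of any annual loss
    agg = rng.pareto(1.2, n_sims) * 0.05 * lim      # assumed heavy-tailed aggregate loss
    losses += np.where(hit, np.minimum(agg, lim), 0.0)

capital = np.quantile(losses, 0.9999)               # 1-in-10,000 year annual loss
expense_ratio = 0.30
print(f"average ROL: {premium / limits.sum():.1%}")
print(f"capital at 99.99th percentile: {capital:.0f} vs premium {premium:.1f}")
print(f"net combined ratio at 99.99th percentile: {capital / premium + expense_ratio:.0%}")
```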

With the limit profile of the portfolio set to broadly match the market, risk premiums per peril were also set according to market rates such that the average risk premium from the portfolio was 700 bps in a base scenario (again, broadly where I understand the property catastrophe market currently is).

Reviewing some of the actual figures from property catastrophe reinsurers’ published accounts, the next important assumption is that the reinsurer’s costs are made up of 10% acquisition costs and 20% overhead (the overhead assumption is a bit above the actual rates seen, but I am going high to reinforce Mr Seo’s point about greedy reinsurance executives!), thereby reducing risk premiums by 30%. For the ILS fund, the model assumes a combined acquisition and overhead cost of just 10% (this may also be too light, as many ILS funds are now sourcing some of their business through brokers and many reinsurance fund managers share the greedy habits of the traditional market!).

The results below show the average simulated returns for a reinsurer and an ILS fund writing the same portfolio with the expense levels as detailed above (i.e. 30% versus 10%), and with different capital levels (the reinsurer at the 99.99th percentile and the ILS fund with capital equal to the limits issued). It’s important to stress that the figures below do not include investment income, so historical operating ROEs from property catastrophe reinsurers are not directly comparable.

PropCAT Reinsurer & ILS Fund ROE Comparison
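A back-of-the-envelope version of the comparison is sketched below, using the 30% versus 10% expense loads above, an assumed long-run expected loss ratio, capital for the reinsurer at roughly the 450% combined ratio level and full collateral for the ILS fund at a ~700 bps average rate on line. The loss ratio and capital multiples are illustrative assumptions and investment income is excluded.

```python
# A back-of-the-envelope ROE comparison under the expense and capital assumptions
# described above. The expected loss ratio and capital multiples are illustrative
# assumptions; investment income is excluded.
premium = 100.0                  # index of net risk premium written
expected_loss_ratio = 0.45       # assumed long-run expected loss ratio

def roe(expense_ratio: float, capital: float) -> float:
    underwriting_profit = premium * (1 - expense_ratio - expected_loss_ratio)
    return underwriting_profit / capital

reinsurer_capital = premium * 4.5     # capital ~ the 450% level implied at the 99.99th pct
ils_capital = premium / 0.07          # full collateral for limits at a ~700bps average ROL

print(f"Reinsurer ROE: {roe(0.30, reinsurer_capital):.1%}")   # 30% expenses, leveraged capital
print(f"ILS fund ROE:  {roe(0.10, ils_capital):.1%}")         # 10% expenses, fully funded
```

Even with the ILS fund’s lower expense load, the fully funded capital base dilutes the return, which is the crux of the argument in the conclusion below.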

So, the conclusion of the analysis reinforces my initial argument: the cost savings of an ILS fund cannot compensate for the leverage inherent in a reinsurer’s business model compared to the fully funded ILS model. However, this is a simplistic comparison. Why would a purchaser not go with a fully funded ILS provider if the product on offer was exactly the same as that of a reinsurer? As outlined above, both risk providers serve different needs and, as yet, are not full-on competitors (although this may be the direction of the changes currently underway in the market).

Also, many ILS funds likely do use some form of leverage in their business model whether by way of debt or retrocession facilities. And competition from the ILS market is making the traditional market look at its overhead and how it can become more cost efficient. So it is likely that both business models will adapt and converge (indeed, many reinsurers are now also ILS managers).

Notwithstanding these issues, I can’t help concluding that (for some reason) our pension funds are the losers here by preferring the lower returns of an ILS fund sold to them by investment bankers to the higher returns on offer from simply owning the equity of a reinsurer (admittedly without the same operational risk profile). Innovative or just cheap risk premia? Go figure.

Assessing reinsurers’ catastrophe PMLs

Prior to the recent market wobbles over what a post-QE world will look like, a number of reinsurers with relatively high property catastrophe exposures suffered pullbacks in their stock due to fears about catastrophe pricing pressures (the subject of a previous post). Credit Suisse downgraded Validus recently, stating that “reinsurance has become more of a commodity due to lower barriers to entry and vendor models.”

As we head deeper into the US hurricane season, it is worth reviewing the disclosures of a number of reinsurers in relation to catastrophe exposures, specifically their probable maximum losses or PMLs. The 2012 edition of S&P’s influential annual publication – Global Reinsurance Highlights – contains an interesting article called “Just How Much Capital Is At Risk”. The article looked at net PMLs as a percentage of total adjusted capital (TAC), an S&P-determined calculation, and also examined the relative tail heaviness of PMLs disclosed by different companies. The article concluded that “by focusing on tail heaviness, we may have one additional tool to uncover which reinsurers could be most affected by such an event”. In other words, not only is the amount of the PMLs for different perils important but the shape of the curve across different return periods (e.g. 1 in 50 years, 1 in 100 years, 1 in 250 years, etc.) is also an important indicator of relative exposures. The graphs below show the net PMLs as a percentage of TAC and the net PMLs as a percentage of aggregate limits for the S&P sample of insurers and reinsurers.


PML as % of S&P capital

PML as % of aggregate limit

Given the uncertainties around reported PMLs discussed in this post, I particularly like seeing PMLs as a percentage of aggregate limits. In the days before the now common use of catastrophe models (from vendor firms such as RMS, AIR and Eqecat), underwriters would subjectively calculate their PMLs as a percentage of their maximum possible loss or MPL (in the past, when unlimited coverage was more common, an estimate of the maximum loss was made, whereas today the MPL is simply the sum of aggregate limits). This practice, being subjective, was obviously open to abuse (and often proved woefully inadequate). It is interesting to note, however, that some of the commonly used MPL percentages applied to peak exposures in certain markets were higher than those produced today by the vendor models at high return periods.

The vendor modellers themselves are very open about the limitations in their models and regularly discuss the sources of uncertainty in their models. There are two main areas of uncertainty – primary and secondary – highlighted in the models. Some also refer to tertiary uncertainty in the uses of model outputs.

Primary uncertainty relates to the uncertainty in determining events in time, in space, in intensity, and in spatial distribution. There is often limited historical data (sampling error) to draw upon, particularly for large events. For example, scientific data on the physical characteristics of historical events such as hurricanes or earthquakes are only as reliable, over the past 100-odd years, as the instruments available at the time of the event. Even then, due to changes in factors like population density, the area over which many events were recorded may lack important physical elements of the event. Also, there are many unknowns relating to catastrophic events, and we are continuously learning new facts, as this article on the 2011 Japan quake illustrates.

Each of the vendor modellers builds a catalogue of possible events by supplementing known historical events with other possible events (i.e. they fit a tail to the known sample). Even though the vendor modellers stress that they do not predict events, their event catalogues determine implied probabilities that are now dominant in the catastrophe reinsurance pricing discovery process. These catalogues are subject to external validation from institutions such as the Florida Commission, which certifies models for use in setting property rates (and has an interest in ensuring rates stay as low as possible).

Secondary uncertainty relates to data on the possible damage from an event, such as soil type, property structures, construction materials, location and aspect, building standards and similar factors (other factors include liquefaction, landslides, fires following an event, business interruption, etc.). Considerable strides, especially in the US, have been made in reducing secondary uncertainties in developed insurance markets as databases have grown, although Asia and parts of Europe still lag.

A Guy Carpenter report from December 2011 on uncertainty in models estimates crude confidence levels of -40%/+90% for PMLs at national level and -60%/+170% for PMLs at State level. These are significant levels and illustrate how all loss estimates produced by models must be treated with care and a healthy degree of scepticism.

Disclosures by reinsurers have also improved in recent years in relation to specific events. In the recent past, many reinsurers simply disclosed point estimates for their largest losses. Some still do. Indeed some, such as the well-respected Renaissance Re, still do not disclose any such figures on the basis that such disclosures are often misinterpreted by analysts and investors. Those that do disclose figures do so with comprehensive disclaimers. One of my favourites is “investors should not rely on information provided when considering an investment in the company”!

Comparing disclosed PMLs between reinsurers is rife with difficulty. Issues to consider include how firms define zonal areas, whether they use a vendor model or a proprietary model, whether model options such as storm surge are included, how model results are blended, and annual aggregation methodologies. These are all critical considerations and the detail provided in reinsurers’ disclosures is often insufficient to make a detailed determination. An example of the difficulty is comparing the disclosures of two of the largest reinsurers – Munich Re and Swiss Re. Both disclose PMLs for Atlantic wind and European storm on a 1 in 200 year return basis. Munich Re’s net loss estimate for each event is 18% and 11% respectively of its net tangible assets and Swiss Re’s net loss estimate for each event is 11% and 10% respectively of its net tangible assets.  However, the comparison is of limited use as Munich’s is on an aggregate VaR basis and Swiss Re’s is on the basis of pre-tax impact on economic capital of each single event.

Most reinsurers disclose their PMLs on an occurrence exceedance probability (OEP) basis. The OEP curve is essentially the probability distribution of the loss amount from a single event, combined with an assumed event frequency. Other bases used for determining PMLs include an aggregate exceedance probability (AEP) basis and an average annual loss (AAL) basis. The AEP curve shows aggregate annual losses; how single event losses are aggregated or ranked when calculating the AEP (each vendor has its own methodology) is critical to understand when making comparisons. The AAL is the mean value of a loss exceedance probability distribution and is the expected loss per year averaged over a defined period.
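The difference between the three bases is easiest to see with a toy simulated event set, as in the sketch below. The Poisson frequency and lognormal severity are arbitrary assumptions; the point is that the OEP is driven by the largest single event in a year, the AEP by the annual total, and the AAL is the mean of that total.

```python
# A minimal sketch of OEP vs AEP vs AAL using a toy simulated event set.
# The Poisson frequency and lognormal severity are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_years = 50_000
annual_max = np.zeros(n_years)   # largest single event loss per year -> OEP
annual_sum = np.zeros(n_years)   # total annual loss per year -> AEP / AAL
for y in range(n_years):
    n_events = rng.poisson(0.5)                       # assumed event frequency per year
    sev = rng.lognormal(mean=2.0, sigma=1.5, size=n_events)
    annual_max[y] = sev.max() if n_events else 0.0
    annual_sum[y] = sev.sum()

rp = 100                                              # 1-in-100 year return period
print(f"OEP 1-in-{rp}: {np.quantile(annual_max, 1 - 1/rp):.1f}  (largest single event)")
print(f"AEP 1-in-{rp}: {np.quantile(annual_sum, 1 - 1/rp):.1f}  (aggregate annual loss)")
print(f"AAL: {annual_sum.mean():.2f}  (mean annual loss)")
```

An aggregate basis will always be at or above the occurrence basis at the same return period, which matters when comparing disclosures such as Flagstone’s aggregate PMLs below.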

An example of the potentially misleading nature of disclosed PMLs is the case of Flagstone Re. Formed after Hurricane Katrina, Flagstone’s business model was based upon building a portfolio of catastrophe risks with an emphasis upon non-US risks. Although US risks carry the highest premium (by value and rate on line), they are also the most competitive. The idea was that superior risk premia could be delivered by a diverse portfolio sourced from less competitive markets. Flagstone reported their annual aggregate PML on a 1 in 100 and a 1 in 250 year basis. As the graph below shows, Flagstone were hit by a frequency of smaller losses in 2010 and particularly in 2011 that resulted in aggregate losses far in excess of their reported PMLs. The losses invalidated their business model and the firm was sold to Validus in 2012 at approximately 80% of book value. Flagstone’s CEO, David Brown, stated at the closing of the sale that “the idea was that we did not want to put all of our eggs in the US basket and that would have been a successful approach had the pattern of the previous 30 to 40 years continued”.


Flagstone CAT losses

The graphs below show a sample of reinsurers’ PML disclosures as at end Q1 2013 as a percentage of net tangible assets. Some reinsurers show their PMLs as a percentage of capital including hybrid or contingent capital. For the sake of comparison, I have not included such hybrid or contingent capital in the net tangible assets calculations in the graphs below.

US Windstorm

US windstorm PMLs 2013

US & Japan Earthquake

US & Japan PMLs 2013

As per the S&P article, it’s important to look at the shape of the PML curves as well as the levels for different events. For example, the shape of Lancashire’s PML curve stands out in the earthquake graphs and for the US Gulf of Mexico storm. Montpelier for US quake and AXIS for Japan quake also stand out in terms of increased exposure levels at higher return periods. In terms of the level of exposure, Validus stands out on US wind, Endurance on US quake, and Catlin & Amlin on Japan quake.

Any investor in this space must form their own view on the likelihood of major catastrophes when determining their risk appetite. When assessing the probabilities of historical events reoccurring, care must be taken to ensure past events are viewed on the basis of existing exposures. Irrespective of whether you are a believer in the impact of climate change (which I am), graphs such as the one below (based on Swiss Re data inflated to 2012) are often used in the industry. They imply an increasing trend in insured losses in the future.

Historical Insured Losses: 1990 to 2012 historical insured catastrophe losses (Swiss Re)

The reality is that, as the world population increases, resulting in higher housing density in catastrophe-exposed areas such as coastlines, the past needs to be viewed in terms of today’s exposures. Pictures of Ocean Drive in Florida in 1926 and in 2000 best illustrate the point.

Ocean Drive Florida 1926 & 2000

There has been interesting analysis performed in the past on exposure adjusting, or normalising, US hurricane losses by academics, most notably Roger Pielke (as the updated graph on his blog shows). Running historical US windstorms through commercial catastrophe models with today’s exposure data on housing density and construction types shows a trend similar to that of Pielke’s graph. The historical trend from these analyses is more variable and a lot less certain than the increasing trend in the graph based on Swiss Re data. These losses suggest that the 1970s and 1980s may have been decades of reduced US hurricane activity relative to history and that more recent decades are returning to more “normal” activity levels for US windstorms.
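The normalisation idea is simple in structure, as the sketch below shows: restate a historical loss using adjustments for inflation, wealth per capita and the population (or housing stock) now exposed. The structure is in the spirit of Pielke’s work, but the function name and the adjustment figures are purely illustrative assumptions, not his actual data.

```python
# A sketch of the kind of exposure normalisation applied to historical hurricane
# losses (in the spirit of Pielke et al.). The adjustment structure and the
# figures below are illustrative assumptions only.
def normalised_loss(original_loss, inflation_adj, wealth_per_capita_adj, population_adj):
    """Restate a historical loss in terms of today's exposure."""
    return original_loss * inflation_adj * wealth_per_capita_adj * population_adj

# e.g. a hypothetical $100m loss from decades ago, adjusted to the present day
print(normalised_loss(100e6, inflation_adj=6.0, wealth_per_capita_adj=2.5,
                      population_adj=4.0))   # => $6bn in today's exposure terms
```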

In conclusion, reviewing PMLs disclosed by reinsurers provides an interesting insight into potential exposures to specific events. However, the disclosures are only as good as the underlying methodology used in their calculation. Hopefully, in the future, further detail will be provided to investors on these PML calculations so that real and meaningful comparisons can be made. Notwithstanding what PMLs may show, investors need to understand the potential for catastrophic events and adapt their risk appetite accordingly.