Tag Archives: Swiss Re

Insurance & capital market convergence hype is getting boring

As the horde of middle-aged (still mainly male) executives pack up their chinos and casual shirts, the overriding theme coming from this year's Monte Carlo Rendez-Vous seems to be the impact of the new ILS capacity or "convergence capital" on the reinsurance and specialty insurance sector. The event, described in a Financial Times article as "the kind of public display of wealth most bankers try to eschew", is where executives start the January 1 renewal discussions with clients in quick meetings crammed together in the luxury location.

The relentless chatter about the new capital will likely leave many bored senseless of the subject. Many may now hope that, just like previous hot discussion topics that were worn out (Solvency II anybody?), the topic fades into the background as the reality of the office hits them next week.

The more traditional industry hands warned of the perils of the new capacity on underwriting discipline. John Nelson of Lloyd's highlighted that "some of the structures being used could undermine some of the qualities of the insurance model". Tad Montross of Gen Re cautioned that "bankers looking to replace lost fee income" are pushing ILS as the latest asset class but that the hype will die down when "the inability to model extreme weather events accurately is better understood". Amer Ahmed of Allianz Re predicted the influx "bears the danger that certain risks get covered at inadequate rates". Torsten Jeworrek of Munich Re said that "our research shows that ILS use the cheapest model in the market" (presumably a sideswipe at AIR).

Other traditional reinsurers with an existing foothold in the ILS camp were more circumspect. Michel Lies of Swiss Re commented that “we take the inflow of alternative capital seriously but we are not alarmed by it”.

Brokers and other interested service providers were the loudest cheerleaders. Increasing the size of the pie for everybody, igniting coverage innovation in the traditional sector, and cheap retrocession capacity were some of the advantages cited. My favourite piece of new risk management speak came from Aon Benfield's Bryon Ehrhart in the statement "reinsurers will innovate their capital structures to turn headwinds from alternative capital sources into tailwinds". In other words, as Tokio Millennium Re's CEO Tatsuhiko Hoshina said, the new capital offers an opportunity to leverage increasingly diverse sources of retrocessional capacity. An arbitrage market (as a previous post concluded)?

All of this talk reminds me of the last time that “convergence” was a buzz word in the sector in the 1990s. For my sins, I was an active participant in the market then. Would the paragraph below from an article on insurance and capital market convergence by Graciela Chichilnisky of Columbia University in June 1996 sound out of place today?

“The future of the industry lies with those firms which implement such innovation. The companies that adapt successfully will be the ones that survive. In 10 years, these organizations will draw the map of a completely restructured reinsurance industry”

The current market dynamics are driven by low risk premia in capital markets bringing investors into competition with the insurance sector through ILS and collateralised structures. In the 1990s, capital inflows after Hurricane Andrew into reinsurers, such as the "class of 1992", led to overcapacity in the market which resulted in a brutal and undisciplined soft market in the late 1990s.

Some (re)insurers sought to diversify their business base by embracing innovation in transaction structures and/or by looking at expanding the risks they covered beyond traditional P&C exposures. Some entered head first into “finite” type multi-line multi-year programmes that assumed structuring could protect against poor underwriting. An over-reliance on the developing insurance models used to price such transactions, particularly in relation to assumed correlations between exposures, left some blind to basic underwriting disciplines (Sound familiar, CDOs?). Others tested (unsuccessfully) the limits of risk transfer and legality by providing limited or no risk coverage to distressed insurers (e.g. FAI & HIH in Australia) or by providing reserve protection that distorted regulatory requirements (e.g. AIG & Cologne Re) by way of back to back contracts and murky disclosures.

Others, such as the company I worked for, looked to cover financial risks on the basis that mixing insurance and financial risks would allow regulatory capital arbitrage benefits through increased diversification (and may even offer an inflation & asset price hedge). Some well-known examples* of the financial risks assumed by different (re)insurers at that time include the Hollywood Funding pool guarantee, the BAe aircraft leasing income coverage, Rolls Royce residual asset guarantees, dual trigger contingent equity puts, Toyota motor residual value protection, and mezzanine corporate debt credit enhancement coverage.

Many of these “innovations” ended badly for the industry. Innovation in itself should never be dismissed as it is a feature of the world we live in. In this sector however, innovation at the expense of good underwriting is a nasty combination that the experience in the 1990s must surely teach us.

Bringing this back to today, I recently discussed the ILS market with a well informed and active market participant. He confirmed that some of the ILS funds have experienced reinsurance professionals with the skills to question the information in the broker pack and who do their own modelling and underwriting of the underlying risks. He also confirmed, however, that there are many funds (some with well known sponsors and hungry mandates) that, in the words of Kevin O'Donnell of RenRe, rely "on a single point" from a single model provided to them by an "expert" 3rd party.

This conversation got me thinking again about the comment from Edward Noonan of Validus that "the ILS guys aren't undisciplined; it's just that they've got a lower cost of capital." Why should an ILS fund have a lower cost of capital than a pure property catastrophe reinsurer? There is the operational risk of a reinsurer to consider. However, there is also operational risk involved with an ILS fund, given items such as multiple collateral arrangements and other contracted 3rd party service functions. Expenses shouldn't be a major differentiating factor between the two models. The only item that may justify a difference is liquidity, particularly as capital market investors are so focussed on a fast exit. However, should this be material given the exit option of simply selling the equity in many of the quoted property catastrophe reinsurers?

I am not convinced that the ILS funds should have a material cost of capital advantage. Maybe the quoted reinsurers should simply revise their shareholder return strategies to be more competitive with the yields offered by the ILS funds. Indeed, traditional reinsurers in this space may argue that they are able to offer more attractive yields than a fully collateralised provider, all other things being equal, given their more leveraged business model.

*As a complete aside, an article this week in the Financial Times on the anniversary of the Lehman Brothers collapse and the financial crisis highlighted the role of poor lending practices as a primary cause of a significant number of the bank failures. The article reminded me of a "convergence" product I helped design back in the late 1990s. Following changes in accounting rules, many banks were no longer allowed to hold general loan loss provisions against their portfolios. These provisions (akin to an IBNR-type bulk reserve) had been held in addition to specific loan provisions (akin to case reserves). I designed an insurance structure under which banks would pay premiums previously set aside as general provisions for coverage against massive deterioration in their loan provisions. After an initial risk period in which the insurer could lose money (required to demonstrate effective risk transfer), the policy would act as a fully funded coverage similar to a collateralised reinsurance. In effect, the banks could pay away some of their profits in the good years (assuming the initial risk period was set over the good years!) for protection in the bad years. The attachment of the coverage was designed in a way similar to the old continuous ratcheting retention reinsurance aggregate coverage popular at the time amongst some German reinsurers. After numerous discussions, no banks were interested in a cover that offered them an opportunity to use profits in the good times to buy protection for a rainy day. They didn't think they needed it. Funny that.

Assessing reinsurers’ catastrophe PMLs

Prior to the recent market wobbles on what a post QE world will look like, a number of reinsurers with relatively high property catastrophe exposures have suffered pullbacks in their stock due to fears about catastrophe pricing pressures (subject of previous post). Credit Suisse downgraded Validus recently stating that “reinsurance has become more of a commodity due to lower barriers to entry and vendor models.”

As we head deeper into the US hurricane season, it is worth reviewing the disclosures of a number of reinsurers in relation to catastrophe exposures, specifically their probable maximum losses or PMLs. In S&P's influential 2012 annual publication – Global Reinsurance Highlights – there is an interesting article called "Just How Much Capital Is At Risk". The article looked at net PMLs as a percentage of total adjusted capital (TAC), an S&P-determined calculation, and also examined the relative tail heaviness of PMLs disclosed by different companies. The article concluded that "by focusing on tail heaviness, we may have one additional tool to uncover which reinsurers could be most affected by such an event". In other words, not only is the amount of the PMLs for different perils important but the shape of the curve across different return periods (e.g. 1 in 50 years, 1 in 100 years, 1 in 250 years, etc.) is also an important indicator of relative exposures. The graphs below show the net PMLs as a percentage of TAC and the net PMLs as a percentage of aggregate limits for the S&P sample of insurers and reinsurers.
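For readers less familiar with the mechanics, a PML at a given return period is simply a point on a loss exceedance curve: the loss level exceeded with annual probability of one over the return period. A minimal sketch, using a purely hypothetical simulated loss distribution as a stand-in for vendor model output (the distribution parameters are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical simulated annual maximum event losses ($m) for one peril --
# a heavy-tailed stand-in for the output of a catastrophe model.
n_years = 100_000
annual_max_loss = rng.lognormal(mean=3.0, sigma=1.2, size=n_years)

def pml(losses, return_period):
    """Occurrence PML: the loss exceeded with annual probability 1/return_period."""
    return np.quantile(losses, 1 - 1 / return_period)

for rp in (50, 100, 250):
    print(f"1-in-{rp} PML: {pml(annual_max_loss, rp):,.0f}m")
```

The shape point made in the S&P article falls out naturally here: two portfolios can share the same 1-in-100 PML yet diverge sharply at 1-in-250 if one has a heavier tail.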

click to enlarge

PML as % of S&P capital

PML as % of aggregate limit

Given the uncertainties around reported PMLs discussed in this post, I particularly like seeing PMLs as a percentage of aggregate limits. In the days before the now common use of catastrophe models (by vendor firms such as RMS, AIR and Eqecat), underwriters would subjectively calculate their PMLs as a percentage of their maximum possible loss or MPL (in the past, when unlimited coverage was more common, an estimate of the maximum loss was made, whereas today the MPL is simply the sum of aggregate limits). This practice, being subjective, was obviously open to abuse (and often proved woefully inadequate). It is interesting to note, however, that some of the commonly used MPL percentages applied for peak exposures in certain markets were higher than those produced today by the vendor models at high return periods.

The vendor modellers themselves are very open about the limitations in their models and regularly discuss the sources of uncertainty in their models. There are two main areas of uncertainty – primary and secondary – highlighted in the models. Some also refer to tertiary uncertainty in the uses of model outputs.

Primary uncertainty relates to the uncertainty in determining events in time, in space, in intensity, and in spatial distribution. There is often limited historical data (sampling error) to draw upon, particularly for large events. For example, scientific data on the physical characteristics of historical events such as hurricanes or earthquakes are only as reliable for the past 100 odd years as the instruments available at the time of the event. Even then, due to changes in factors like population density, the space over which many events were recorded may lack important physical elements of the event. Also, there are many unknowns relating to catastrophic events and we are continuously learning new facts as this article on the 2011 Japan quake illustrates.

Each of the vendor modellers builds a catalogue of possible events by supplementing known historical events with other possible events (i.e. they fit a tail to the known sample). Even though the vendor modellers stress that they do not predict events, their event catalogues determine implied probabilities that are now dominant in the catastrophe reinsurance pricing discovery process. These catalogues are subject to external validation from institutions such as the Florida Commission, which certifies models for use in setting property rates (and has an interest in ensuring rates stay as low as possible).

Secondary uncertainty relates to data on possible damages from an event like soil type, property structures, construction materials, location and aspect, building standards and such like factors (other factors include liquefaction, landslides, fires following an event, business interruption, etc.). Considerable strides, especially in the US, have taken place in reducing secondary uncertainties in developed insurance markets as databases have grown although Asia and parts of Europe still lag.

A Guy Carpenter report from December 2011 on uncertainty in models estimates crude confidence levels of -40%/+90% for PMLs at national level and -60%/+170% for PMLs at State level. These are significant levels and illustrate how all loss estimates produced by models must be treated with care and a healthy degree of scepticism.

Disclosures by reinsurers have also improved in recent years in relation to specific events. In the recent past, many reinsurers simply disclosed point estimates for their largest losses. Some still do. Indeed some, such as the well-respected Renaissance Re, still do not disclose any such figures on the basis that such disclosures are often misinterpreted by analysts and investors. Those that do disclose figures do so with comprehensive disclaimers. One of my favourites is “investors should not rely on information provided when considering an investment in the company”!

Comparing disclosed PMLs between reinsurers is rife with difficulty. Issues to consider include how firms define zonal areas, whether they use a vendor model or a proprietary model, whether model options such as storm surge are included, how model results are blended, and annual aggregation methodologies. These are all critical considerations and the detail provided in reinsurers’ disclosures is often insufficient to make a detailed determination. An example of the difficulty is comparing the disclosures of two of the largest reinsurers – Munich Re and Swiss Re. Both disclose PMLs for Atlantic wind and European storm on a 1 in 200 year return basis. Munich Re’s net loss estimate for each event is 18% and 11% respectively of its net tangible assets and Swiss Re’s net loss estimate for each event is 11% and 10% respectively of its net tangible assets.  However, the comparison is of limited use as Munich’s is on an aggregate VaR basis and Swiss Re’s is on the basis of pre-tax impact on economic capital of each single event.

Most reinsurers disclose their PMLs on an occurrence exceedance probability (OEP) basis. The OEP curve is essentially the probability distribution of the loss amount given an event, combined with an assumed frequency of an event. Other bases used for determining PMLs include an aggregate exceedance probability (AEP) basis or an average annual loss (AAL) basis. The AEP curves show aggregate annual losses and how single event losses are aggregated or ranked when calculating (each vendor has their own methodology) the AEP is critical to understand for comparisons. The AAL is the mean value of a loss exceedance probability distribution and is the expected loss per year averaged over a defined period.
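The OEP/AEP/AAL distinction can be made concrete with a toy year-loss table. A minimal sketch, assuming a hypothetical Poisson frequency / lognormal severity catalogue (invented parameters, not any vendor's methodology): the OEP is built from the largest single event in each simulated year, the AEP from the sum of all events in the year, and the AAL is just the mean annual aggregate loss.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical year-loss table: Poisson event counts per simulated year,
# lognormal severities -- a stand-in for a vendor model's event catalogue.
n_years = 50_000
losses_by_year = [rng.lognormal(2.5, 1.0, size=k) for k in rng.poisson(0.8, n_years)]

occ_max = np.array([y.max() if y.size else 0.0 for y in losses_by_year])  # largest single event
agg_sum = np.array([y.sum() for y in losses_by_year])                     # all events summed

def exceedance(losses, return_period):
    """Loss exceeded with annual probability 1/return_period."""
    return np.quantile(losses, 1 - 1 / return_period)

oep_100 = exceedance(occ_max, 100)  # OEP: single-event basis
aep_100 = exceedance(agg_sum, 100)  # AEP: aggregate annual basis
aal = agg_sum.mean()                # AAL: expected annual loss

print(f"1-in-100 OEP {oep_100:,.1f}  AEP {aep_100:,.1f}  AAL {aal:,.1f}")
```

Since the aggregate annual loss can never be less than the largest single event in the year, the AEP sits at or above the OEP at every return period, which is one reason Flagstone-style frequency years (discussed below) can blow through occurrence-based disclosures.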

An example of the potential misleading nature of disclosed PMLs is the case of Flagstone Re. Formed after Hurricane Katrina, Flagstone’s business model was based upon building a portfolio of catastrophe risks with an emphasis upon non-US risks. Although US risks carry the highest premium (by value and rate on line), they are also the most competitive. The idea was that superior risk premia could be delivered by a diverse portfolio sourced from less competitive markets. Flagstone reported their annual aggregate PML on a 1 in 100 and 1 in 250 year basis. As the graph below shows, Flagstone were hit by a frequency of smaller losses in 2010 and particularly in 2011 that resulted in aggregate losses far in excess of their reported PMLs. The losses invalidated their business model and the firm was sold to Validus in 2012 at approximately 80% of book value. Flagstone’s CEO, David Brown, stated at the closing of the sale that “the idea was that we did not want to put all of our eggs in the US basket and that would have been a successful approach had the pattern of the previous 30 to 40 years continued”.

click to enlarge

Flagstone CAT losses

The graphs below show a sample of reinsurers' PML disclosures as at end Q1 2013 as a percentage of net tangible assets. Some reinsurers show their PMLs as a percentage of capital including hybrid or contingent capital. For the sake of comparisons, I have not included such hybrid or contingent capital in the net tangible assets calculations in the graphs below.

US Windstorm (click to enlarge)

US windstorm PMLs 2013

US & Japan Earthquake (click to enlarge)

US & Japan PMLs 2013

As per the S&P article, it's important to look at the shape of PML curves as well as the levels for different events. For example, the shape of Lancashire's PML curve stands out in the earthquake graphs and for the US Gulf of Mexico storm. Montpelier for US quake and AXIS for Japan quake also stand out in terms of increased exposure levels at higher return periods. In terms of the level of exposure, Validus stands out on US wind, Endurance on US quake, and Catlin & Amlin on Japan quake.

Any investor in this space must form their own view on the likelihood of major catastrophes when determining their own risk appetite. When assessing the probabilities of historical events reoccurring, care must be taken to ensure past events are viewed on the basis of existing exposures. Irrespective of whether you are a believer in the impact of climate change (which I am), graphs such as the one below (based on Swiss Re data inflated to 2012) are often used in the industry. They imply an increasing trend in insured losses in the future.

Historical Insured Losses (click to enlarge)

1990 to 2012 historical insured catastrophe losses Swiss Re

The reality is that, as the world population increases, resulting in higher housing density in catastrophe-exposed areas such as coastlines, the past needs to be viewed in terms of today's exposures. Pictures of Ocean Drive in Florida in 1926 and in 2000 best illustrate the point (click to enlarge).

Ocean Drive Florida 1926 & 2000

There has been interesting analysis performed in the past on exposure-adjusting or normalising US hurricane losses by academics, most notably Roger Pielke (as the updated graph on his blog shows). Running historical US windstorms through commercial catastrophe models with today's exposure data on housing density and construction types produces results similar to Pielke's graph. These analyses show a more variable trend that is a lot less certain than the increasing trend in the graph based on Swiss Re data. They suggest that the 1970s and 1980s may have been decades of reduced US hurricane activity relative to history and that more recent decades are returning to a more "normal" activity level for US windstorms.

In conclusion, reviewing PMLs disclosed by reinsurers provides an interesting insight into potential exposures to specific events. However, the disclosures are only as good as the underlying methodology used in their calculation. Hopefully, in the future, further detail will be provided to investors on these PML calculations so that real and meaningful comparisons can be made. Notwithstanding what PMLs may show, investors need to understand the potential for catastrophic events and adapt their risk appetite accordingly.

Relative valuations of selected reinsurers and wholesale insurers

It’s been a great 12 months for wholesale insurers with most seeing their share price rise by 20%+, some over 40%. As would be expected, there has been some correlation between the rise in book values and the share price increase although market sentiment to the sector and the overall market rally have undoubtedly also played their parts. The graph below shows the movements over the past 12 months (click to enlarge).

12 month share price change selected reinsurers March 2013

The price to tangible book is one of my preferred indicators of value, although it has limitations when comparing companies reporting under differing accounting standards & currencies and trading on different exchanges. The P/TBV valuations as at last weekend are depicted in the graph below. The comments in this post are purely made on the basis of the P/TBV metric calculated from published data and readers are encouraged to dig deeper.
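For anyone new to the metric, P/TBV strips intangibles (goodwill, capitalised value of in-force business, etc.) out of book value before comparing it to market value. A minimal sketch with entirely hypothetical figures (not the actual numbers for any company discussed here):

```python
# Hypothetical figures purely for illustration -- not the actual March 2013
# numbers for any company discussed in this post.
companies = {
    # name: (market capitalisation, total equity, intangible assets), all in $m
    "Reinsurer A": (5_200, 4_000, 300),
    "Reinsurer B": (3_100, 3_500, 900),
}

def price_to_tangible_book(market_cap, equity, intangibles):
    """P/TBV: market value over book value stripped of intangibles."""
    return market_cap / (equity - intangibles)

ratios = {name: price_to_tangible_book(*figs) for name, figs in companies.items()}
for name, p_tbv in ratios.items():
    print(f"{name}: P/TBV = {p_tbv:.2f}")
```

Note how the intangible-heavy "Reinsurer B" looks cheaper on plain price-to-book but less so on P/TBV, which is precisely why the composite reinsurers' life intangibles discussed below complicate cross-bucket comparisons.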

I tend to look at the companies relative to each other in 4 broad buckets – the London market firms, the continental European composite reinsurers, the US/Bermuda firms, and the alternative asset or "wannabe Buffett" firms. Comparisons across buckets can be made but adjustments need to be made for factors such as those outlined in the previous paragraph. Some firms such as Lancashire actually report in US$, as that is where the majority of their business is, but trade in London with sterling shares. I also like to look at the relative historical movements over time & the other graph below from March 2011 helps in that regard.

Valuations as at March 2013 (click to enlarge):

Price to net tangible book & 5 year average ROE reinsurers March 2013

Valuations as at March 2011 (click to enlarge):

Price to net tangible book & 5 year average ROE reinsurers March 2011

The London market historically trades at the highest multiples – Hiscox, Amlin, & Lancashire are amongst the leaders, with Catlin being the poor cousin. Catlin's 2012 operating results were not as strong as the others but the discount it currently trades at may be a tad unfair. In the interest of open disclosure, I must admit to having a soft spot for Lancashire. Their consistently shareholder-friendly actions account for their high historical valuation. These actions and the clear communication of their straightforward business strategy shouldn't distract investors from their high risk profile. The cheeky way they present their occurrence PMLs in public disclosures cannot hide their high CAT exposures when those PMLs are compared to their peers' on a % of tangible assets basis. Their current position relative to Hiscox and Amlin may reflect this (although they tend to go down when ex-dividend, usually a special dividend!).

Within the continental European composite reinsurer bucket, the Munich and Swiss, amongst others, classify chunky amounts of the present value of future profits from their life business as an intangible. As this item will be treated as capital under Solvency II, further metrics need to be considered when looking at these composite reinsurers. The continental Europeans' love of hybrid capital, and how the characteristics of the varying instruments compare, is another factor that will become clearer in a Solvency II world. Compared to 2011 valuations, Swiss Re has been a clear winner. It is arguable that the Munich deserves a premium given its position in the sector.

The striking thing about the current valuations of the US/Bermudian bucket is how concentrated they are, particularly when compared to 2011. The market seems to be making little distinction between the large reinsurers like Everest and the likes of Platinum & Montpelier. That is surely a failure of these companies to distinguish themselves and effectively communicate their differing business models & risk profiles.

The last bucket is the most eccentric. I would class firms such as Fairfax in this bucket. Although each firm has its own twist, generally these companies are interested in the insurance business as the provider of cheap "float", à la Mr Buffett, with the focus going into the asset side. Generally, their operating results are poorer than their peers and they have a liking for longer tail business if the smell of the float is attractive enough (which is difficult with today's interest rates). This bucket really needs to be viewed through different metrics which we'll leave for another day.

Overall then, the current valuations reflect an improved sentiment on the sector. Notwithstanding the musings above, nothing earth shattering stands out based solely on a P/TBV analysis.  The ridiculously low valuations of the past 36 months aren’t there anymore. My enthusiasm for the sector is tempered by the macro-economic headwinds, the overall run-up in the market (a pull-back smells inevitable), and the unknown impact upon the sector of the current supply distortions from yield seeking capital market players entering the market.

Historical Price to Tangible Book Value for Reinsurers and Wholesale Insurers

Following on from the previous post, the graph below shows the historical P/TBV ratios for selected reinsurers and wholesale insurers with a portfolio including material books of reinsurance (company names as per previous post). The trend shows the recent uptick in valuations highlighted in the previous post. The graph is also consistent with the Guy Carpenter price to book value graph widely used in industry presentations.

Historical P to TBV Reinsurers & Wholesale Insurers 2001 to 2013

Over the past 12 months the sector has broken out of the downward trend seen across the financial services sector following the financial crisis, most notably in the banking sector, as the graph below from TT International illustrates.

TT International Bank Price to Book Ratio

Tangible book value growth across the wholesale insurance sector was approximately 10% from YE2011 to YE2012 and the weighted average operating ROE of 11% in 2012 has been rewarded with higher multiples.

The sector faces a number of significant issues and a return to valuations prior to the financial crisis remains unrealistic. An increase in capacity from non-traditional sources and the increased loss costs from catastrophes are cited in industry outlooks as headwinds, although I tend to agree with EIOPA's recently published risk dashboard in highlighting the impact of macro-economic risks on insurers' balance sheets as the major headwind.

One issue that deserves further attention in this regard is the impact low interest rates have had on boosting unrealised gains and the resulting impact on the growth in book values. Swiss Re is one of the few companies to explicitly highlight the role of unrealised gains in its annual report, making up approximately 13% of its equity. In a presentation in September 2012, the company had an interesting slide on the impact of unrealised gains on the sector's capital levels, reproduced below.

Reinsurer Capital & Unrealised Gains

P/TBV is one of my favoured metrics for looking at insurance valuations. But no one metric should be looked at in isolation. The impact of any sudden unwinding of unrealised gains if the macro environment turns nasty is just one of the issues facing the sector which deserves a deeper analysis.