Tag Archives: AIR

The Big Wind

With four US hurricanes and one earthquake in recent times, Mother Nature is reminding us Homo sapiens of her power and her unpredictability. As the massive Hurricane Irma is about to hit Florida, we all hope that the loss of life and the damage to people’s livelihoods will be minimal and that the coming days will prove humane. Forgive me if it comes across as insensitive to be posting now on the likely impact of such events on the insurance industry.

For the insurance sector, these events, and particularly Hurricane Irma, which is now forecast to move up the west coast of Florida at strength (rather than the more destructive path up the middle of Florida, given that the maximum forces are on the top right-hand side of a hurricane like this one), may be a test of the predictive power of its models, which are so critical to pricing, particularly in the insurance linked securities (ILS) market.

Many commentators, including me (here, here and here are recent examples), have expressed worries in recent years about current market conditions in the specialty insurance, reinsurance and ILS sectors. On Wednesday, Willis Re reported that it estimates the subset of firms it analyses is earning only a 3.7% ROE once losses are normalised and reserve releases dry up. David Rule of the Prudential Regulation Authority in the UK recently stated that London market insurers “appear to be incorporating a more benign view of future losses into their technical pricing”, that terms and conditions have continued to loosen, that reliance on untested new coverages such as cyber insurance is increasing, and that insurers “may be too sanguine about catastrophe risks, such as significant weather events”.

With the reinsurance and specialty insurance sectors struggling to meet their cost of capital, and with pricing, terms and conditions having been so weak for so long (see this post on the impact of soft pricing on risk profiles), if Hurricane Irma impacts Florida as predicted (i.e. on Saturday), it has the potential to be a capital event for the catastrophe insurance sector rather than just an earnings event. On Friday, Lex in the FT reported that the South-East US makes up 60% of the exposures of the catastrophe insurance market.

The output of the models utilised in the sector becomes more variable as events get bigger in their impact (i.e. at higher return periods). A 2013 post on the variation in loss estimates for a selected portfolio of standard insurance coverage, based on submissions to the Florida Commission on Hurricane Loss Projection Methodology (FCHLPM), illustrates the point, and one of the graphs from that post is reproduced below.

[Figure: variation in modelled loss estimates by return period, FCHLPM submissions]

Based upon the most recent South-East US probable maximum losses (PML) and Atlantic hurricane scenarios from a group of 12 specialty insurers and reinsurers I selected, the graph below shows net losses by return period as a percentage of each firm’s net tangible assets. The graph does not consider the impact of hybrid or subordinate debt that may absorb losses before the firm’s capital. I have extrapolated many of these curves based upon industry data on US South-East exceedance curves and judgement on each firm’s exposures (and for that reason I have anonymised the firms).

[Figure: net losses by return period as a percentage of net tangible assets]
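
For those curious about the mechanics, the sketch below shows the kind of log-linear interpolation and extrapolation involved, expressed in Python. The PML points and the net tangible assets figure are invented for illustration; they are not any firm’s actual disclosures.

```python
# A minimal sketch of extrapolating a net loss exceedance curve from a few
# disclosed PML points and expressing losses as a percentage of net tangible
# assets (NTA). All figures below are hypothetical.
import numpy as np

return_periods = np.array([20.0, 50.0, 100.0, 250.0])  # disclosed points
net_losses = np.array([150.0, 320.0, 510.0, 700.0])    # net loss, $m
net_tangible_assets = 2800.0                           # $m, illustrative

def net_loss_at(rp: float) -> float:
    """Net loss at a given return period, working log-linearly in the
    annual exceedance probability (1 / return period); the last segment's
    slope is extended for return periods beyond the disclosed points."""
    x = np.log(1.0 / return_periods)  # decreasing as RP increases
    y = net_losses
    xq = np.log(1.0 / rp)
    if xq < x[-1]:  # beyond the 1-in-250 point: extrapolate the last segment
        slope = (y[-1] - y[-2]) / (x[-1] - x[-2])
        return float(y[-1] + slope * (xq - x[-1]))
    return float(np.interp(xq, x[::-1], y[::-1]))  # interp needs ascending x

for rp in (100, 250, 500):
    loss = net_loss_at(rp)
    print(f"1-in-{rp}: net loss {loss:.0f}m = {loss / net_tangible_assets:.1%} of NTA")
```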

The results of my analysis confirm that specialty insurers and reinsurers, in aggregate, have reduced their South-East US exposures in recent years: comparing average figures to S&P 2014 data shows a reduction of about 15% at the 1-in-100 return period. Expressed as a net loss ratio, the averages for the 1-in-100 and 1-in-250 return periods are 15% and 22% respectively. These figures do look low for events with the characteristics of those return periods (the average net loss ratio of the 12 firms from the catastrophic events of 2005 and 2011 was 22% and 25% respectively), so it will be fascinating to see what the actual figures are, depending upon how Hurricane Irma pans out. Many firms are utilising their experience and risk management prowess to transfer risks through collateralised reinsurance and retrocession (i.e. reinsurance of reinsurers) to naïve capital market ILS investors.

If the models are correct and maximum losses come in around the 1-in-100 return period estimates for Hurricane Irma, well capitalized and well managed catastrophe-exposed insurers should trade through recent and current events. We will see if the models pass this test. For example, demand surge (whereby labour and building costs increase following a catastrophic event due to overwhelming demand and fixed supply) is a common feature of widespread windstorm damage and is a feature in the models (it is one of those inputs that underwriters can play with in soft markets!). Well, here’s a thought: could Trump’s immigration policy be a factor in the level of demand surge in Florida and Texas?
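
As a rough illustration of how demand surge enters such models, here is a minimal sketch; the thresholds and multipliers are invented, and real vendor implementations are far more granular.

```python
# A minimal sketch of demand surge as a post-event loss amplifier. The step
# thresholds and factors below are invented for illustration; vendor models
# vary them by line of business, region and event characteristics.
def demand_surge_factor(industry_loss_bn: float) -> float:
    """Repair costs inflate as an event grows and overwhelms local labour
    and materials supply; bigger industry losses mean bigger mark-ups."""
    if industry_loss_bn < 10:
        return 1.00
    elif industry_loss_bn < 30:
        return 1.10
    elif industry_loss_bn < 60:
        return 1.20
    return 1.30

industry_loss_bn = 45.0       # assumed total industry loss drives the factor
portfolio_ground_up_bn = 2.0  # one insurer's share of the loss, pre-surge
surged = portfolio_ground_up_bn * demand_surge_factor(industry_loss_bn)
print(f"Ground-up {portfolio_ground_up_bn}bn -> {surged:.2f}bn after demand surge")
```

Dialling a factor like this down is one of the ways soft-market pricing can quietly flatter a technical rate.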

The ILS sector is another matter, however, in my view, due to the rapid growth of the private and unregulated collateralised reinsurance and retrocession markets to satisfy the demand for product from ILS funds and yield-seeking investors. The prevalence of aggregate covers and increased expected loss attachments in the private ILS market resembles features of previous soft and overheated retrocession markets (generally before a crash) in bygone years. I have expressed my concerns on this market many times (most recently here). Hurricane Irma has the potential to really test underwriting standards across the ILS sector. The graph below from Lane Financial LLC on the historical pricing of the senior catastrophe bonds of USAA, the insurer of US military members and their families, again illustrates how the market has taken on more risk for less risk-adjusted premium (exposures include retired military personnel living in Florida).

[Figure: historical pricing of USAA senior catastrophe bonds, Lane Financial LLC]
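
A common way to read graphs like this is through the multiple of a bond’s spread to its modelled expected loss. The sketch below computes that metric on invented figures (they are not actual USAA bond terms); a falling multiple at a constant expected loss means investors are accepting less premium for the same modelled risk.

```python
# A minimal sketch of the risk-adjusted pricing metric behind such graphs:
# the multiple of coupon spread to modelled expected loss (EL).
# All numbers are hypothetical, for illustration only.
bonds = [
    # (issue year, spread over risk-free, modelled expected loss)
    (2009, 0.090, 0.020),
    (2012, 0.070, 0.020),
    (2015, 0.055, 0.020),
]

for year, spread, expected_loss in bonds:
    multiple = spread / expected_loss
    print(f"{year}: spread {spread:.1%}, EL {expected_loss:.1%}, "
          f"multiple {multiple:.1f}x")
```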

The events of the coming days may tell us, to paraphrase Mr Buffett, who has been swimming naked, or, as Lex put it on Friday, “this weekend may be a moment when the search for uncorrelated returns bumps hard into acts of God”.

Hopefully, all parts of the catastrophe insurance sector will prove their worth by speedily indemnifying people’s material losses (nothing can indemnify the loss of life). After all, that is its function and its economic utility to society. Longer term, recent events may also lead to more debate, and real action being taken, to ensure that the insurance sector, in all its guises, can have an increased economic function and relevance in an increasingly uncertain world, in insuring perils such as flood, for example (and avoiding the ridiculous political interference in risk transfer markets that has made the financial impact of the flooding from Hurricane Harvey in Texas so severe).

Insurance matters aside, our thoughts must be with the people suffering from nature’s recent wrath, and our prayers are with all of those negatively affected now and in the future.

Mega-Tsunami Fright Scenario

There was a nice piece on the online FT last night on the forces impacting the reinsurance sector. Lancashire, which is behaving oddly these days, was one of the firms mentioned. Lancashire looks like it’s set to drop by approximately 12% (the amount of the special dividend) when it goes ex-dividend after today, the 28th (although Yahoo has been showing it dropping by 10%-12% at the end of trading for several days now, including yesterday). If it does drop to a £5.50 level, that’s approximately a 123% price to diluted tangible book value. Quite a come-down from the loftier valuations of 150%-170% under previous CEO Richard Brindle!
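
For what it’s worth, the valuation arithmetic is simple enough to check on the back of an envelope (figures assumed, not exact):

```python
# Back-of-envelope check of the price-to-tangible-book arithmetic above.
price_after_div = 5.50   # GBP, assumed post-dividend share price
price_to_tbv = 1.23      # 123% of diluted tangible book value
tangible_book_per_share = price_after_div / price_to_tbv
print(f"Implied diluted tangible book value: ~GBP {tangible_book_per_share:.2f}/share")
```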

Anyway, this post is not about that. A major part of modern risk management in the insurance sector today is applying real-life scenarios to risk portfolios to assess their impact. Lloyd’s has been doing it for years with its realistic disaster scenarios (RDS). Insurers are adept at using scenarios generated by professional catastrophe modelling firms like RMS and AIR for so-called peak zones like US hurricane or Japan earthquake. Many non-peak scenarios are not explicitly modelled by such firms.

The horrors of the tsunamis from the 2011 Tōhoku and the 2004 Indian Ocean earthquakes have been brought home vividly in this multi-media age. The damage in human terms from the receding waters full of debris makes the realities of such events all too clear. Approximately 80% of tsunamis are caused by earthquakes, and history is littered with examples of large destructive tsunamis resulting from earthquakes: the 1755 Great Lisbon earthquake in Portugal, the 1783 Calabrian and the 1908 Messina earthquakes in Italy, the 1896 Sanriku earthquake in Japan, the recently discovered 365 AD Mediterranean quake, the 1700 Cascadia megathrust earthquake off the west coast of North America, and the 1958 Lituya Bay quake in Alaska are but a few examples.

Volcanoes are another potential cause of mega-tsunamis, as many volcanoes are found next to the sea, notably in countries bordering the Pacific Ocean, the northern Mediterranean and the Caribbean Sea. One scenario, put forward in a 2001 paper by Steven Ward and Simon Day, is the possibility of a mega-tsunami from the collapse of an unstable volcanic ridge on La Palma in the Canary Islands, left unstable by the Cumbre Vieja eruptions of 1949 and 1971. The threat has been dramatically brought to life by a 2013 BBC Horizon programme called “Could We Survive A Mega-Tsunami?”. Unfortunately I could not find a link to the full programme, but a taster can be found here.

The documentary detailed a scenario in which a future eruption causes a massive landslide, sending 500 km3 of rock crashing into the sea and generating multiple waves that would travel across the Atlantic Ocean and devastate major cities along the US east coast, as well as parts of Africa, Europe, southern England and Ireland. The damage would be unimaginable, with over 4 million deaths and economic losses of over $800 billion. The impact of the damage on port and transport infrastructure would also create horrible obstacles to rescue and recovery efforts after the event.

The possibility of such a massive landslide resulting from a La Palma volcano has been disputed by many scientists. In 2006, Dutch scientists released research which concluded that the south-west flank of the island was stable and unlikely to fall into the sea for at least another 10,000 years. More recent research, in 2013, has shown that eight historical landslides associated with volcanoes in the Canary Islands occurred as staggered, discrete failures, and that the likelihood of one large 500 km3 landslide is therefore extremely remote. The report states:

“This has significant implications for geohazard assessments, as multistage failures reduce the magnitude of the associated tsunami. The multistage failure mechanism reduces individual landslide volumes from up to 350 km3 to less than 100 km3. Thus although multistage failure ultimately reduce the potential landslide and tsunami threat, the landslide events may still generate significant tsunamis close to source.”

Another graph from the research shows that the timeframe over which such events should be viewed is in the thousands of years.

[Figure: Historical Volcanic & Landslide Activity, Canary Islands]

Whatever the feasibility of the events dramatised in the BBC documentary, the scientists behind the latest research do highlight the difference between the probability of occurrence and the impact upon occurrence:

“Although the probability of a large-volume Canary Island flank collapse occurring is potentially low, this does not necessarily mean that the risk is low. Risk is dependent both on probability of occurrence and the resultant consequences of such events, namely generation of a tsunami(s). Therefore, determining landslide characteristics of past events will ultimately better inform tsunami modelling and risk assessments.”

And, after all, that’s what good risk management should be all about. Tsunamis are caused by large, infrequent events, so, as with all natural catastrophes, we should be wary that historical event catalogues may be a poor guide to future hazards.

Will the climate change debate now move forward?

The release earlier this month of the synthesis reports by the IPCC – in summary, short and long form – has helped to keep the climate change debate alive. I have posted previously (here, here, and here) on the IPCC’s 5th assessment. The IPCC should be applauded for trying to present its findings in different formats targeted at different audiences. Statements such as the following could not be clearer:

“Anthropogenic greenhouse gas (GHG) emissions have increased since the pre-industrial era, driven largely by economic and population growth, and are now higher than ever. This has led to atmospheric concentrations of carbon dioxide, methane and nitrous oxide that are unprecedented in at least the last 800,000 years. Their effects, together with those of other anthropogenic drivers, have been detected throughout the climate system and are extremely likely to have been the dominant cause of the observed warming since the mid-20th century.”

The reports also try to outline a framework to manage the risk, as per the statement below.

“Adaptation and mitigation are complementary strategies for reducing and managing the risks of climate change. Substantial emissions reductions over the next few decades can reduce climate risks in the 21st century and beyond, increase prospects for effective adaptation, reduce the costs and challenges of mitigation in the longer term, and contribute to climate-resilient pathways for sustainable development.”

The IPCC estimates the costs of the adaptation and mitigation needed to keep warming below the critical 2°C inflection level at a loss of global consumption of 1%-4% in 2030 or 3%-11% in 2100. Whilst acknowledging the uncertainty in its estimates, the IPCC also provides estimates of the changes in investment needed in each of the main GHG-emitting sectors, as the graph reproduced below shows.

[Figure: IPCC Changes in Annual Investment Flows, 2010-2029]

The real question is whether this IPCC report will be any more successful than previous reports at instigating real action. For example, is the agreement reached today by China and the US for real, or just a nice photo opportunity for Presidents Obama and Xi?

In today’s FT, Martin Wolf has a rousing piece on the subject in which he summarises the laissez-faire forces justifying inertia on climate change action as relying on the costs argument and the (freely acknowledged) uncertainties behind the science. Wolf argues that “the ethical response is that we are the beneficiaries of the efforts of our ancestors to leave a better world than the one they inherited” but concludes that such an obligation is unlikely to overcome the inertia prevalent today.

I, maybe naively, hope for better. As Wolf points out, the costs estimated in the reports, although daunting, are less than those experienced in the developed world from the financial crisis. The costs also don’t take into account any economic benefits that a low carbon economy may bring. Notwithstanding this, the scale of the task of changing the trajectory of the global economy is illustrated by one of the graphs from the report, reproduced below.

[Figure: IPCC global CO2 emissions]

Although the insurance sector has a minimal impact on the debate, it is interesting to see that the UK’s Prudential Regulation Authority (PRA) recently issued a survey to the sector asking for responses on what the regulatory approach to climate change should be.

Many industry players, such as Lloyd’s of London, have been pro-active in stimulating debate on climate change. In May, Lloyd’s issued a report entitled “Catastrophe Modelling and Climate Change” with contributions from across the industry. In his piece in the Lloyd’s report, Paul Wilson of RMS concluded that “the influence of trends in sea surface temperatures (from climate change) are shown to be a small contributor to frequency adjustments as represented in RMS medium-term forecast” but that “the impact of changes in sea-level are shown to be more significant, with changes in Superstorm Sandy’s modelled surge losses due to sea-level rise at the Battery over the past 50-years equating to approximately a 30% increase in the ground-up surge losses from Sandy’s in New York.” In relation to US thunderstorms, another piece in the Lloyd’s report, from Ioana Dima and Shane Latchman of AIR, concludes that “an increase in severe thunderstorm losses cannot readily be attributed to climate change. Certainly no individual season, such as was seen in 2011, can be blamed on climate change.”

The uncertainties associated with the estimates in the IPCC reports are well documented (I have posted on this before, here and here). The Lighthill Risk Network also has a nice report on climate model uncertainty, which concludes that “understanding how climate models work, are developed, and projection uncertainty should also improve climate change resilience for society.” The report highlights the need to expand geological data sets beyond the short durations of decades and centuries on which we currently base many of our climate models.

However, as Wolf says in his FT article, we must not confuse the uncertainty of outcomes with the certainty of no outcomes. On the day that man has put a robot on a comet, let’s hope the IPCC’s latest assessment results in an evolution of the debate and real action on the complex issue of climate change.

Follow-on comment: Oh dear, the outcome of the Philae lander may not be a good omen!

CAT models and fat tails: an illustration from Florida

I have posted numerous times now (to the point of boring myself!) on the dangers of relying on a single model for estimating losses from natural catastrophes. The practice is reportedly widespread in the rapidly growing ILS fund sector. The post on assessing probable maximum losses (PMLs) outlined the sources of uncertainty in such models, especially the widely used commercial vendor models from RMS, AIR and EQECAT.

The Florida Commission on Hurricane Loss Projection Methodology (FCHLPM) was created in 1995 as an independent panel of experts to evaluate computer models used for setting rates for residential property insurance. The website of the FCHLPM contains a treasure trove of information on each of the modelling firms, which provide detailed submissions in a pre-set format. These submissions include specifics on the methodology utilised in their models and the output of their models for specified portfolios.

In addition to the three vendor modellers (RMS, AIR, EQECAT), there are also details on two other models approved by the FCHLPM, namely Applied Research Associates (ARA) and the Florida Public Hurricane Loss Model (FPHLM) developed by Florida International University.

In one section of the mandated submissions, each model’s predictions of the number of annual landfalling hurricanes over a 112-year period (1900 to 2011 is the historical reference period) are outlined. Given the issue over the wind speed classification of Superstorm Sandy as it hit land and the use of hurricane deductibles, I assume that the definition of landfalling hurricanes is consistent between the FCHLPM submissions. The graph below shows the assumed frequency over 112 years of 0, 1, 2, 3 or 4 landfalling hurricanes from the five modellers.

[Figure: Landfalling Florida Hurricanes]

As one of the objectives of the FCHLPM is to ensure that insurance rates are neither excessive nor inadequate, it is unsurprising that each of the models closely matches known history. It does, however, demonstrate that the models are, in effect, limited by that known history (100-odd years of climatic experience is limited by any stretch!). One item to note is that most of the models have a higher frequency for one landfalling hurricane and a lower frequency for two landfalling hurricanes when compared with the 100-odd years of history. Another item of note is that only EQECAT and the FPHLM assign any frequency to four landfalling hurricanes in any one year over the reference period.
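
A simple way to see why the models spread probability a little differently from raw history is to fit a Poisson distribution to the 112-year record. The counts below are illustrative rather than the actual FCHLPM submission data, but they show how a fitted frequency model assigns a small probability to four landfalls even where history contains none.

```python
# A minimal sketch: fit a Poisson frequency model to a hypothetical 112-year
# record of annual Florida landfalling hurricane counts and compare the
# fitted and observed number of years for each count.
from math import exp, factorial

years = 112
counts = {0: 66, 1: 34, 2: 10, 3: 2, 4: 0}  # hypothetical years with n landfalls
mean_rate = sum(n * y for n, y in counts.items()) / years  # MLE for lambda

for n in range(5):
    poisson_years = years * exp(-mean_rate) * mean_rate**n / factorial(n)
    print(f"{n} landfalls: observed {counts[n]:>3} years, "
          f"Poisson fit {poisson_years:5.1f} years")
```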

Each of the modellers is also required to detail its loss exceedance estimates for two assumed risk portfolios. The first portfolio is set by the FCHLPM and is limited to three construction types, geocoded by ZIP code centroid (always be wary of anti-selection dangers in relying on centroid data, particularly in large counties or zones with a mixture of coastal and inland exposure), and specific policy conditions. The second portfolio is the 2007 Florida Hurricane Catastrophe Fund aggregate personal and commercial residential exposure data. The graphs below show the results for the different models, with the dotted lines representing the 95th percentile margin of error around the average of all five model outputs.

[Figure: Modelled Losses, Florida Notional Residential Portfolio]
[Figure: Modelled Losses, FHCF Commercial Residential Portfolio]

As would be expected, uncertainty over losses increases as the return period increases. The tail outputs of catastrophe models clearly need to be treated with care, and tails need to be fattened up to take uncertainty into account. Relying solely on a single point from a single model is just asking for trouble.
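
By way of illustration, the sketch below blends several models’ estimates and applies a crude uncertainty load that grows with the inter-model spread; the loss figures are invented, and a real blending exercise would be considerably more sophisticated.

```python
# A minimal sketch of blending several models' exceedance-loss estimates and
# "fattening" the tail for model uncertainty. Loss figures ($bn) are invented.
import numpy as np

return_periods = [20, 50, 100, 250]
model_losses = {          # five models' losses at each return period
    "A": [11, 22, 31, 45],
    "B": [9, 19, 28, 40],
    "C": [12, 25, 37, 58],
    "D": [10, 21, 33, 50],
    "E": [8, 18, 26, 38],
}

arr = np.array(list(model_losses.values()), dtype=float)
mean = arr.mean(axis=0)
spread = arr.std(axis=0, ddof=1)  # inter-model disagreement grows in the tail

for i, rp in enumerate(return_periods):
    # Crude uncertainty load: widen the central estimate by one standard
    # deviation of the inter-model spread at that return period.
    fattened = mean[i] + spread[i]
    print(f"1-in-{rp}: model average {mean[i]:.1f}bn, "
          f"inter-model sd {spread[i]:.1f}bn, loaded estimate {fattened:.1f}bn")
```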

Insurance & capital market convergence hype is getting boring

As the horde of middle-aged (still mainly male) executives pack up their chinos and casual shirts, the overriding theme coming from this year’s Monte Carlo Rendez-Vous seems to be the impact of the new ILS capacity, or “convergence capital”, on the reinsurance and specialty insurance sector. The event, described in a Financial Times article as “the kind of public display of wealth most bankers try to eschew”, is where executives start the January 1 renewal discussions with clients in quick meetings crammed together in the luxury location.

The relentless chatter about the new capital will likely leave many bored senseless of the subject. Many may now hope that, just like previous hot discussion topics that were worn out (Solvency II, anybody?), the topic fades into the background as the reality of the office hits them next week.

The more traditional industry hands warned of the perils of the new capacity for underwriting discipline. John Nelson of Lloyd’s highlighted that “some of the structures being used could undermine some of the qualities of the insurance model”. Tad Montross of GenRe cautioned that “bankers looking to replace lost fee income” are pushing ILS as the latest asset class but that the hype will die down when “the inability to model extreme weather events accurately is better understood”. Amer Ahmed of Allianz Re predicted the influx “bears the danger that certain risks get covered at inadequate rates”. Torsten Jeworrek of Munich Re said that “our research shows that ILS use the cheapest model in the market” (presumably in a side-swipe at AIR).

Other traditional reinsurers with an existing foothold in the ILS camp were more circumspect. Michel Lies of Swiss Re commented that “we take the inflow of alternative capital seriously but we are not alarmed by it”.

Brokers and other interested service providers were the loudest cheerleaders. Increasing the size of the pie for everybody, igniting coverage innovation in the traditional sector, and cheap retrocession capacity were some of the advantages cited. My favourite piece of new risk management speak came from Aon Benfield’s Bryon Ehrhart in the statement that “reinsurers will innovate their capital structures to turn headwinds from alternative capital sources into tailwinds”. In other words, as Tokio Millennium Re’s CEO Tatsuhiko Hoshina said, the new capital offers an opportunity to leverage increasingly diverse sources of retrocessional capacity. An arbitrage market (as a previous post concluded)?

All of this talk reminds me of the last time “convergence” was a buzzword in the sector, in the 1990s. For my sins, I was an active participant in the market then. Would the paragraph below, from an article on insurance and capital market convergence by Graciela Chichilnisky of Columbia University in June 1996, sound out of place today?

“The future of the industry lies with those firms which implement such innovation. The companies that adapt successfully will be the ones that survive. In 10 years, these organizations will draw the map of a completely restructured reinsurance industry”

The current market dynamics are driven by low risk premia in capital markets bringing investors into competition with the insurance sector through ILS and collateralised structures. In the 1990s, capital inflows after Hurricane Andrew into reinsurers, such as the “class of 1992”, led to overcapacity in the market, which resulted in a brutal and undisciplined soft market in the late 1990s.

Some (re)insurers sought to diversify their business base by embracing innovation in transaction structures and/or by looking to expand the risks they covered beyond traditional P&C exposures. Some entered head first into “finite”-type multi-line, multi-year programmes that assumed structuring could protect against poor underwriting. An over-reliance on the developing insurance models used to price such transactions, particularly in relation to assumed correlations between exposures, left some blind to basic underwriting disciplines (sound familiar, CDOs?). Others tested (unsuccessfully) the limits of risk transfer and legality by providing limited or no risk coverage to distressed insurers (e.g. FAI & HIH in Australia) or by providing reserve protection that distorted regulatory requirements (e.g. AIG & Cologne Re) by way of back-to-back contracts and murky disclosures.

Others, such as the company I worked for, looked to cover financial risks on the basis that mixing insurance and financial risks would allow regulatory capital arbitrage benefits through increased diversification (and might even offer an inflation and asset price hedge). Some well-known examples* of the financial risks assumed by different (re)insurers at that time include the Hollywood Funding pool guarantee, the BAe aircraft leasing income coverage, Rolls Royce residual asset guarantees, dual-trigger contingent equity puts, Toyota motor residual value protection, and mezzanine corporate debt credit enhancement coverage.

Many of these “innovations” ended badly for the industry. Innovation in itself should never be dismissed, as it is a feature of the world we live in. In this sector, however, innovation at the expense of good underwriting is a nasty combination, as the experience of the 1990s must surely teach us.

Bringing this back to today, I recently discussed the ILS market with a well-informed and active market participant. He confirmed that some of the ILS funds have experienced reinsurance professionals with the skills to question the information in the broker pack and to do their own modelling and underwriting of the underlying risks. He also confirmed, however, that there are many funds (some with well-known sponsors and hungry mandates) that, in the words of Kevin O’Donnell of RenRe, rely “on a single point” from a single model provided to them by an “expert” 3rd party.

This conversation got me thinking again about the comment from Edward Noonan of Validus that “the ILS guys aren’t undisciplined; it’s just that they’ve got a lower cost of capital.” Why should an ILS fund have a lower cost of capital than a pure property catastrophe reinsurer? There is the operational risk of a reinsurer to consider. However, there is also operational risk involved with an ILS fund, given items such as multiple collateral arrangements and other functions contracted out to 3rd party service providers. Expenses shouldn’t be a major differentiating factor between the two models. The only item that may justify a difference is liquidity, particularly as capital market investors are so focussed on a fast exit. However, should this be material given the exit option of simply selling the equity of many of the quoted property catastrophe reinsurers?

I am not convinced that ILS funds should have a material cost of capital advantage. Maybe the quoted reinsurers should simply revise their shareholder return strategies to be more competitive with the yields offered by the ILS funds. Indeed, traditional reinsurers in this space may argue that they are able to offer more attractive yields than a fully collateralised provider, all other things being equal, given their more leveraged business model.
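
To make the comparison concrete, here is a minimal sketch of the rate on line each capital model might require under very simplified assumptions (single layer, one-year horizon, no expenses; all parameters invented).

```python
# A minimal sketch of the cost-of-capital comparison in the text. A fully
# collateralised writer posts the whole limit as capital; a rated reinsurer
# holds only a fraction of its aggregate limit as capital (leverage).
def required_rate_on_line(expected_loss: float, capital_per_limit: float,
                          cost_of_capital: float, risk_free: float = 0.02) -> float:
    """Premium rate needed so the capital backing the layer earns its target:
    expected loss plus (capital share) x (target spread over risk-free)."""
    return expected_loss + capital_per_limit * (cost_of_capital - risk_free)

el = 0.04  # 4% modelled expected loss on the layer
print("ILS (fully collateralised, 8% target): "
      f"{required_rate_on_line(el, 1.0, 0.08):.1%}")
print("Reinsurer (30c of capital per $1 of limit, 12% target): "
      f"{required_rate_on_line(el, 0.3, 0.12):.1%}")
```

On these invented numbers, the leveraged balance sheet can write the layer more cheaply even with a higher target return on capital, which is the nub of the argument.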

*As a complete aside, an article this week in the Financial Times on the anniversary of the Lehman Brothers collapse and the financial crisis highlighted the role of poor lending practices as a primary cause of a significant number of the bank failures. The article reminded me of a “convergence” product I helped design back in the late 1990s. Following changes in accounting rules, many banks were no longer allowed to hold general loan loss provisions against their portfolios. These provisions (akin to an IBNR-type bulk reserve) had been held in addition to specific loan provisions (akin to case reserves). I designed an insurance structure under which banks would pay premiums previously set aside as general provisions in return for coverage against a massive deterioration in their loan provisions. After an initial risk period in which the insurer could lose money (which was required to demonstrate effective risk transfer), the policy would act as a fully funded coverage similar to a collateralised reinsurance. In effect, the banks could pay away some of their profits in the good years (assuming the initial risk period was set over the good years!) for protection in the bad years. The attachment of the coverage was designed in a way similar to the old continuous ratcheting retention aggregate covers popular at the time amongst some German reinsurers. After numerous discussions, no banks were interested in a cover that offered them an opportunity to use profits in the good times to buy protection for a rainy day. They didn’t think they needed it. Funny that.
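
For the curious, a minimal sketch of the funded-cover mechanics described above, assuming a simple experience account (all figures illustrative):

```python
# Annual premiums accumulate with interest in an experience account; a claim
# draws first on the account, with the insurer at risk only for the shortfall
# (which is mainly a feature of the early years of the cover).
annual_premium = 10.0  # paid from what were general loan loss provisions
interest = 0.03
account = 0.0

for year in range(1, 6):
    account = account * (1 + interest) + annual_premium
    print(f"Year {year}: experience account {account:.1f}")

claim = 60.0  # a massive deterioration in loan provisions, say in year 5
insurer_shortfall = max(0.0, claim - account)
print(f"Claim {claim}: funded {min(claim, account):.1f}, "
      f"insurer shortfall {insurer_shortfall:.1f}")
```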