Tag Archives: Prudential Regulatory Authority

The Big Wind

With four US hurricanes and one earthquake in recent times, Mother Nature is reminding us homo sapiens of her power and her unpredictability. As the massive Hurricane Irma is about to hit Florida, we all hope that the loss of life and the damage to people’s lives will be minimal and that the coming days will prove humane. Forgive me if it comes across as insensitive to be posting now on the likely impact of such events on the insurance industry.

For the insurance sector, these events, and particularly Hurricane Irma, which is now forecast to move up the west coast of Florida at strength (rather than the more destructive path up the middle of Florida, given that the maximum forces sit on the top right-hand side of a hurricane like this one), may be a test of the predictive power of the models that are so critical to pricing, particularly in the insurance-linked securities (ILS) market.

Many commentators, including me (here, here and here are recent examples), have expressed worries in recent years about market conditions in the specialty insurance, reinsurance and ILS sectors. On Wednesday, Willis Re reported that it estimates the subset of firms it analyses would earn only a 3.7% ROE if losses were normalised and reserve releases dried up. David Rule of the Prudential Regulatory Authority in the UK recently stated that London market insurers “appear to be incorporating a more benign view of future losses into their technical pricing”, that terms and conditions continue to loosen, that reliance on untested new coverages such as cyber insurance is increasing, and that insurers “may be too sanguine about catastrophe risks, such as significant weather events”.

With the reinsurance and specialty insurance sectors struggling to meet their cost of capital, and pricing, terms and conditions having been so weak for so long (see this post on the impact of soft pricing on risk profiles), if Hurricane Irma hits Florida as predicted (i.e. on Saturday) it has the potential to be a capital event for the catastrophe insurance sector rather than just an earnings event. On Friday, Lex in the FT reported that the South-East US makes up 60% of the exposures of the catastrophe insurance market.

The output of the models used in the sector becomes more variable as events get bigger in their impact (i.e. the higher the return period). A 2013 post on the variation in loss estimates for a selected portfolio of standard insurance coverage, based on data from the Florida Commission on Hurricane Loss Projection Methodology (FCHLPM), illustrates the point; one of the graphs from that post is reproduced below.

[Figure: Variation in FCHLPM modelled loss estimates by return period]

Based upon the most recent South-East US probable maximum loss (PML) and Atlantic hurricane scenario disclosures from a group of 12 specialty insurers and reinsurers I selected, the graph below shows net losses by return period as a percentage of each firm’s net tangible assets. The graph does not consider the impact of hybrid or subordinated debt that may absorb losses before the firm’s capital. I have extrapolated many of these curves based upon industry data on US South-East exceedance curves and judgement on firms’ exposures (and for that reason I have anonymised the firms).

[Figure: Net losses by return period as a percentage of net tangible assets for 12 selected insurers and reinsurers]

The results of my analysis confirm that specialty insurers and reinsurers, in aggregate, have reduced their South-East US exposures in recent years when average figures are compared to S&P 2014 data (by about 15% at the 1 in 100 return period). Expressed as a net loss ratio, the averages for the 1 in 100 and 1 in 250 return periods are 15% and 22% respectively. These figures look low for events with the characteristics of these return periods (the average net loss ratio of the 12 firms from the catastrophic events of 2005 and 2011 was 22% and 25% respectively), so it will be fascinating to see what the actual figures are, depending upon how Hurricane Irma pans out. Many firms are using their experience and risk management prowess to transfer risks through collateralised reinsurance and retrocession (i.e. reinsurance of reinsurers) to naïve capital market ILS investors.
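To make the mechanics concrete, here is a minimal sketch (in Python, using entirely invented PML figures for a single hypothetical firm, not any real company’s disclosures) of the calculation behind the graph above: take net modelled losses at selected return periods, express them as a percentage of net tangible assets, and interpolate between the disclosed points to approximate an exceedance curve.

```python
import numpy as np

# Hypothetical net PML estimates (USD millions) for one firm at selected
# return periods - purely illustrative, not real company data.
return_periods = np.array([10, 25, 50, 100, 250])   # years
net_pml = np.array([150, 320, 520, 780, 1100])       # net loss in $m
net_tangible_assets = 3500.0                          # $m

# Annual exceedance probability for each return period (1 / RP).
exceedance_prob = 1.0 / return_periods

# Net loss as a percentage of net tangible assets at each return period.
loss_pct_nta = 100.0 * net_pml / net_tangible_assets

for rp, p, pct in zip(return_periods, exceedance_prob, loss_pct_nta):
    print(f"1-in-{rp:>3} (annual prob {p:.3f}): {pct:5.1f}% of net tangible assets")

# Simple log-linear interpolation between disclosed points to estimate
# the loss at an intermediate return period, e.g. 1-in-150.
rp_target = 150
est_loss = np.interp(np.log(rp_target), np.log(return_periods), net_pml)
print(f"Interpolated 1-in-{rp_target} net loss: ~${est_loss:.0f}m "
      f"({100.0 * est_loss / net_tangible_assets:.1f}% of NTA)")
```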

If the models are correct and maximum losses for Hurricane Irma are around the 1 in 100 return period estimates, well-capitalised and well-managed catastrophe-exposed insurers should trade through recent and current events. We will see if the models pass this test. For example, demand surge (whereby labour and building costs increase following a catastrophic event due to overwhelming demand and fixed supply) is a common feature of widespread windstorm damage and is built into the models (it is one of those inputs that underwriters can play with in soft markets!). Well, here’s a thought – could Trump’s immigration policy be a factor in the level of demand surge in Florida and Texas?
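As a toy illustration of how a demand surge assumption feeds through a loss estimate (my own simplified sketch, not the methodology of any particular model vendor), the factor is essentially a post-event cost inflation multiplier applied to the modelled loss:

```python
# Illustrative demand surge calculation - a simplified sketch, not how any
# specific catastrophe model vendor implements it.
ground_up_loss = 50_000_000_000      # modelled loss before demand surge, $
demand_surge_factor = 1.20           # assumed 20% post-event cost inflation

surged_loss = ground_up_loss * demand_surge_factor
print(f"Loss with demand surge: ${surged_loss:,.0f}")

# The sensitivity of the final number to this single dial is one reason
# underwriters can materially move technical prices in soft markets.
low, high = 1.05, 1.35               # plausible range of surge assumptions
print(f"Range: ${ground_up_loss * low:,.0f} to ${ground_up_loss * high:,.0f}")
```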

The ILS sector is, in my view, another matter, however, due to the rapid growth of the private and unregulated collateralised reinsurance and retrocession markets to satisfy the demand for product from ILS funds and yield-seeking investors. The prevalence of aggregate covers and higher expected loss attachments in the private ILS market resembles features of previous soft and overheated retrocession markets (generally seen before a crash) in bygone years. I have expressed my concerns about this market many times (most recently here). Hurricane Irma has the potential to really test underwriting standards across the ILS sector. The graph below from Lane Financial LLC on the historical pricing of US military insurer USAA’s senior catastrophe bonds again illustrates how the market has taken on more risk for less risk-adjusted premium (exposures include retired military personnel living in Florida).

[Figure: Lane Financial LLC historical pricing of USAA senior catastrophe bonds]
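One way to see the “more risk for less risk-adjusted premium” point is to track the spread paid per unit of expected loss. The sketch below uses invented figures rather than the actual terms of any USAA issuance:

```python
# Hypothetical catastrophe bond terms - illustrative only, not the actual
# pricing of any USAA (or other) issuance.
bonds = {
    "older issue":  {"spread": 0.090, "expected_loss": 0.020},  # 9.0% spread, 2.0% EL
    "recent issue": {"spread": 0.055, "expected_loss": 0.022},  # 5.5% spread, 2.2% EL
}

for name, b in bonds.items():
    multiple = b["spread"] / b["expected_loss"]
    print(f"{name}: spread / expected loss = {multiple:.1f}x")

# A falling spread-to-expected-loss multiple over time is the usual sign
# that investors are being paid less for each unit of modelled risk.
```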

The events of the coming days may tell us, to paraphrase Mr Buffett, who has been swimming naked, or, as Lex put it on Friday, “this weekend may be a moment when the search for uncorrelated returns bumps hard into acts of God”.

Hopefully, all parts of the catastrophe insurance sector will prove their worth by speedily indemnifying people’s material losses (nothing can indemnify the loss of life). After all, that is its function and its economic utility to society. Longer term, recent events may also lead to more debate, and real action being taken, to ensure that the insurance sector, in all its guises, has an increased economic function and relevance in an increasingly uncertain world, for example in insuring perils such as floods (and in avoiding the ridiculous political interference in risk transfer markets that has made the financial impact of the flooding from Hurricane Harvey in Texas so severe).

Insurance considerations aside, our thoughts must be with the people who will suffer from nature’s recent wrath, and our prayers are with all of those negatively affected now and in the future.

Stressing the scenario testing

Scenario and stress testing by financial regulators has become a common supervisory tool since the financial crisis. The EU, the US and the UK all now regularly stress their banks using detailed adverse scenarios. In a recent presentation, Moody’s Analytics illustrated the variation in some of the metrics in the adverse scenarios used in recent tests by regulators, as per the graphic below of the peak-to-trough fall in real GDP.

[Figure: Banking Stress Tests]

Many commentators have criticised these tests for their inconsistency and flawed methodology, while pointing out the political conflict faced by regulators with responsibility for financial stability: they cannot be seen to be promoting a draconian stress scenario on the one hand whilst assuring markets of the stability of the system on the other.

The EU tests in particular have had a credibility problem, given the political difficulty of really stressing possible scenarios (hello, a euro break-up?). An article last year by Morris Goldstein stated:

“By refusing to include a rigorous leverage ratio test, by allowing banks to artificially inflate bank capital, by engaging in wholesale monkey business with tax deferred assets, and also by ruling out a deflation scenario, the ECB produced estimates of the aggregate capital shortfall and a country pattern of bank failures that are not believable.”

In a report from the Adam Smith Institute in July, Kevin Dowd (a vocal critic of the regulators’ approach) stated that the Bank of England’s 2014 tests were lacking in credibility and “that the Bank’s risk models are worse than useless because they give false risk comfort”. Dowd points to the US, where the annual Comprehensive Capital Analysis and Review (CCAR) tests have been supplemented by the DFAST tests mandated under Dodd-Frank (these use a more standardised approach to provide relative comparisons between banks). In the US, the whole process has been turned into a vast and expensive industry, with consultants (many of them ex-regulators!) making a fortune on ever-increasing compliance requirements. The end result may be that the original objectives have been somewhat lost.

According to a report from a duo of Columbia University professors, banks have learned to game the system, whereby “outcomes have become more predictable and therefore arguably less informative”. The worry here is that, to ensure consistent application across the sector, regulators have been captured by their models and are perpetuating groupthink by dictating “good” and “bad” business models. Whatever about the dangers of the free market dictating optimal business models (and Lord knows there’s plenty of evidence on that subject!!), relying on regulators to do so is, well, scary.

To my way of thinking, the underlying issue here results from the systemic “too big to fail” nature of many regulated firms. Capitalism is (supposedly!) based upon punishing imprudent risk-taking through the threat of bankruptcy, and therefore we should be encouraging a diverse range of business models, of sensible sizes, that don’t, individually or in clusters, threaten financial stability.

On the merits of using stress testing for banks, Dowd quipped that “it is surely better to have no radar at all than a blind one that no-one can rely upon” and concluded that the Bank of England should, rather harshly in my view, scrap the whole process. Although I agree with many of the criticisms, I think the process does have merit. To be fair, many regulators understand the limitations of the approach. Recently, Deputy Governor Jon Cunliffe of the Bank of England acknowledged the fragilities of some of the Bank’s testing and stated that “a development of this approach would be to use stress testing more counter-cyclically”.

The insurance sector, particularly the non-life sector, has a longer history with stress and scenario testing. Lloyd’s of London has long required its syndicates to run mandatory realistic disaster scenarios (RDS), primarily focussed on known natural and man-made events. The most recent RDS are set out in the exhibit below.

[Figure: Lloyd’s Realistic Disaster Scenarios 2015]

A valid criticism of the RDS approach is that insurers know what to expect and are therefore able to game the system. Risk models, such as the commercial catastrophe models sold by firms like RMS and AIR, have proven ever more adept at running historical or theoretical scenarios through today’s exposures to get estimates of losses to insurers. The difficulty comes in assigning probabilities to known natural events, where the historical data is only really reliable for the past 100 years or so, and to man-made events in the modern world, such as terrorism or cyber risks, which are virtually impossible to predict. I previously highlighted some of the concerns about the methodology (e.g. on correlation here and VaR here) used in many of the models employed to assess insurance capital, which have now been embedded into the new European regulatory framework, Solvency II, calibrated at a 1-in-200 year level.
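For readers less familiar with the Solvency II calibration, the 1-in-200 year level corresponds to a 99.5% value-at-risk of the one-year loss distribution. A minimal simulation sketch, using an arbitrary lognormal loss assumption of my own rather than any regulatory model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Assume, purely for illustration, that annual aggregate losses follow a
# lognormal distribution - my assumption, not a Solvency II internal model.
annual_losses = rng.lognormal(mean=4.0, sigma=1.0, size=1_000_000)

# The 1-in-200 calibration is the 99.5th percentile (99.5% VaR)
# of the one-year loss distribution.
var_99_5 = np.percentile(annual_losses, 99.5)
mean_loss = annual_losses.mean()

print(f"Mean annual loss:      {mean_loss:8.1f}")
print(f"1-in-200 (99.5% VaR):  {var_99_5:8.1f}")
print(f"Capital over the mean: {var_99_5 - mean_loss:8.1f}")
```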

The Prudential Regulatory Authority (PRA), now part of the Bank of England, detailed a set of scenarios last month to stress test its non-life insurance sector in 2015. The detail of these tests is summarised in the exhibit below.

[Figure: PRA General Insurance Stress Test 2015]

Robert Childs, the chairman of the Hiscox group, raised some eyebrows by saying the PRA tests did not go far enough and called for a war-game type exercise to see “how a serious catastrophe may play out”. Childs proposed that such an exercise would give regulators the confidence to let the industry get on with dealing with the aftermath of any such catastrophe without undue fussing from the authorities.

An efficient insurance sector is important to economic growth and development: it facilitates trade and commerce through risk mitigation and dispersion, thereby allowing firms to allocate capital to productive uses more effectively. Too much “fussing” by regulators through overly conservative capital requirements, perhaps resulting from overly pessimistic stress tests, can impede economic growth through excess cost. However, given the global movement towards larger insurers, which in my view will accelerate under Solvency II given its unrestricted credit for diversification, the regulators’ focus on financial stability and the experience in banking mean that fussy regulation will be in vogue for some time to come.

The scenarios selected by the PRA are interesting in that the focus for known natural catastrophes is on the frequency of large events, as opposed to the emphasis on severity in the Lloyd’s RDS. It is arguable that the probability of two major European storms in one year, or three US storms in one year, is significantly more remote than the 1 in 200 probability level at which capital is set under Solvency II. One of the more interesting scenarios is the reverse stress test, whereby the firm becomes unviable. I am sure many firms will select a combination of events with an implied probability of all occurring within one year so remote as to be impossible, or will select some ultra-extreme events such as the Cumbre Vieja mega-tsunami (as per this post). A lack of imagination in looking at different scenarios would be a pity, as good risk management should be open to really testing portfolios rather than running through the same old known events.
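To see why a frequency-based scenario can sit well beyond the 1 in 200 level, here is a rough Poisson sketch (the frequency and independence assumptions are mine, purely for illustration): if a storm of a given severity has a 1-in-20 annual frequency, the chance of three such storms in one year is orders of magnitude remoter than 0.5%.

```python
from math import exp, factorial

def prob_at_least(k: int, lam: float) -> float:
    """P(N >= k) for a Poisson random variable with annual frequency lam."""
    return 1.0 - sum(exp(-lam) * lam**i / factorial(i) for i in range(k))

lam = 0.05   # assume a 1-in-20 year annual frequency for storms of this size
for k in (2, 3):
    p = prob_at_least(k, lam)
    print(f"P(at least {k} such storms in one year) = {p:.2e} "
          f"(~1-in-{1/p:,.0f} years)")

# Compare with the Solvency II calibration of 1-in-200 (0.5% per year).
```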

New scenarios are constantly being suggested by researchers. Swiss Re recently published a paper on a recurrence of the New Madrid cluster of earthquakes of 1811/1812, which it estimated could result in $300 billion of losses, of which 50% would be insured (breakdown as per the exhibit below). Swiss Re estimates the probability of such an event at 1 in 500 years, or roughly a 10% chance of occurrence within the next 50 years.

[Figure: 1811/1812 New Madrid earthquakes repeated]
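Swiss Re’s translation of a 1 in 500 annual probability into roughly a 10% chance over 50 years follows from assuming independent years; the short sketch below shows the arithmetic.

```python
# Converting an annual exceedance probability into a multi-year probability,
# assuming each year is independent (the standard simplification).
annual_prob = 1 / 500      # 1-in-500 year event
horizon = 50               # years

prob_over_horizon = 1 - (1 - annual_prob) ** horizon
print(f"Chance of at least one occurrence in {horizon} years: "
      f"{prob_over_horizon:.1%}")   # ~9.5%, i.e. roughly 10%
```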

Another interesting scenario, developed by the University of Cambridge and Lloyd’s, and technologically possible, is a cyber attack on the US power grid (in this report). There has been a growing number of cases of hacking into power grids in the US and Europe, which makes this scenario ever more real. The authors estimate the event at a 1 in 200 year probability and detail three scenarios (S1, S2 and the extreme X1) with insured losses ranging from $20 billion to $70 billion, as per the exhibit below. These figures are far greater than the probable maximum loss (PML) estimated for the sector by a March UK industry report (as per this post).

[Figure: Cyber Blackout Scenario]

I think it will be a very long time before any insurer willingly publishes the results of scenarios that could cause it financial difficulty. I may be naive, but I think that is a pity, because insurance is a risk business and increased transparency could only lead to more efficient capital allocation across the sector. Everybody claiming that they can survive any foreseeable event up to a notional probability of occurrence (such as 1 in 200 years) can only lead to misplaced solace. History shows us that, in the real world, risk has a habit of surprising, and not in a good way. Imaginative stress and scenario testing, performed in an efficient and transparent way, may help to lessen the surprise. Nothing, however, can change the fact that the “unknown unknowns” will always remain.

Will the climate change debate now move forward?

The release of the synthesis reports by the IPCC – in summary, short and long form – earlier this month has helped to keep the climate change debate alive. I have posted (here, here, and here) on the IPCC’s 5th assessment previously. The IPCC should be applauded for trying to present their findings in different formats targeted at different audiences. Statements such as the following cannot be clearer:

“Anthropogenic greenhouse gas (GHG) emissions have increased since the pre-industrial era, driven largely by economic and population growth, and are now higher than ever. This has led to atmospheric concentrations of carbon dioxide, methane and nitrous oxide that are unprecedented in at least the last 800,000 years. Their effects, together with those of other anthropogenic drivers, have been detected throughout the climate system and are extremely likely to have been the dominant cause of the observed warming since the mid-20th century.”

The reports also try to outline a framework to manage the risk, as per the statement below.

“Adaptation and mitigation are complementary strategies for reducing and managing the risks of climate change. Substantial emissions reductions over the next few decades can reduce climate risks in the 21st century and beyond, increase prospects for effective adaptation, reduce the costs and challenges of mitigation in the longer term, and contribute to climate-resilient pathways for sustainable development.”

The IPCC estimates the cost of the adaptation and mitigation needed to keep warming below the critical 2°C level at a loss of global consumption of 1%-4% in 2030 or 3%-11% in 2100. Whilst acknowledging the uncertainty in its estimates, the IPCC also provides estimates of the changes in investment needed for each of the main GHG-emitting sectors, as the graph reproduced below shows.

[Figure: IPCC Changes in Annual Investment Flows 2010-2029]

The real question is whether this IPCC report will be any more successful than previous reports at instigating real action. For example, is the agreement reached today by China and the US for real, or just a nice photo opportunity for Presidents Obama and Xi?

In today’s FT, Martin Wolf has a rousing piece on the subject in which he summarises the laissez-faire case for inertia on climate change action as resting on the costs argument and the (freely acknowledged) uncertainties behind the science. Wolf argues that “the ethical response is that we are the beneficiaries of the efforts of our ancestors to leave a better world than the one they inherited”, but concludes that such an obligation is unlikely to overcome the inertia prevalent today.

I, maybe naively, hope for better. As Wolf points out, the costs estimated in the reports, although daunting, are less than those experienced in the developed world from the financial crisis. The costs also do not take into account any economic benefits that a low-carbon economy may bring. Notwithstanding this, the scale of the task of changing the trajectory of the global economy is illustrated by one of the graphs from the report, reproduced below.

[Figure: IPCC global CO2 emissions]

Although the insurance sector has a minimal impact on the debate, it is interesting to see that the UK’s Prudential Regulatory Authority (PRA) recently issued a survey to the sector asking for responses on what the regulatory approach to climate change should be.

Many industry players, such as Lloyd’s of London, have been proactive in stimulating debate on climate change. In May, Lloyd’s issued a report entitled “Catastrophe Modelling and Climate Change” with contributions from industry. In his piece in the Lloyd’s report, Paul Wilson of RMS concluded that “the influence of trends in sea surface temperatures (from climate change) are shown to be a small contributor to frequency adjustments as represented in RMS medium-term forecast” but that “the impact of changes in sea-level are shown to be more significant, with changes in Superstorm Sandy’s modelled surge losses due to sea-level rise at the Battery over the past 50-years equating to approximately a 30% increase in the ground-up surge losses from Sandy’s in New York.” In relation to US thunderstorms, another piece in the Lloyd’s report, from Ioana Dima and Shane Latchman of AIR, concludes that “an increase in severe thunderstorm losses cannot readily be attributed to climate change. Certainly no individual season, such as was seen in 2011, can be blamed on climate change.”

The uncertainties associated with the estimates in the IPCC reports are well documented (I have posted on this before here and here). The Lighthill Risk Network also has a nice report on climate model uncertainty, which concludes that “understanding how climate models work, are developed, and projection uncertainty should also improve climate change resilience for society.” The report highlights the need to expand geological data sets beyond the short durations of decades and centuries on which we currently base many of our climate models.

However, as Wolf says in his FT article, we must not confuse the uncertainty of outcomes with the certainty of no outcomes. On the day that man has put a robot on a comet, let’s hope the IPCC’s latest assessment results in an evolution of the debate and real action on the complex issue of climate change.

Follow-on comment: Oh dear, the outcome of the Philae lander may not be a good omen!!!