Tag Archives: solvency ii

Model Ratings

An interesting blog from PwC on the new European Solvency II regulatory regime for insurers commented that “analysts were optimistic about the level of detail they could expect from the Solvency II disclosures” and that it “was hoped that this would enable the volatility of cash generation and the fungibility of capital within a group to be better understood”.

The publication of insurer’s solvency ratios, under the Solvency II regulations to be introduced from next year, using the common mandated template called the standard formula is hoped to prove to be a good comparative measure for insurers across Europe. In the technical specifications for Solvency II (section 6 in this document), the implied credit ratings of the solvency ratios as calculated using the standard formula are shown below.

Figure: Implied Ratings in Solvency II Standard Formula
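
To make the idea concrete, here is a minimal sketch in Python of mapping a standard formula solvency ratio to an implied rating band. The cut-offs below are purely hypothetical placeholders for illustration, not the figures from the technical specifications:

def implied_rating(solvency_ratio_pct):
    # Hypothetical cut-offs for illustration only -- the actual mapping is
    # in section 6 of the technical specifications referenced above.
    bands = [(200, "AA"), (150, "A"), (100, "BBB")]
    for cutoff, rating in bands:
        if solvency_ratio_pct >= cutoff:
            return rating
    return "below investment grade"

print(implied_rating(165))  # -> "A" under these illustrative cut-offs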

Firm-specific internal models present a different challenge in comparing results as, by definition, there is no common standard or template. The PwC blog states “the outcome of internal model applications remains uncertain in many jurisdictions and different national practices are emerging in implementing the long-term guarantee package including the transitional measures”. It's therefore interesting to look at a number of Solvency II ratios published to date using insurers' own models (or partial internal models) and compare them to their financial strength ratings (in this case S&P ratings), as per the graphic below.

Figure: Internal Model Solvency II Ratios & S&P Ratings

As firms start to publish their Solvency II ratios, whether under the standard formula or using an internal model, it will be interesting to see what useful insights for investors emerge. Or… eh, based upon the above sample, otherwise.

Low risk premia and leverage

The buzz from the annual insurance speed-dating festival in Monte Carlo last week seems to have been subdued. Amid all the gossip about the M&A bubble, insurers and reinsurers tried to talk up a slowing of the rate of price decreases. Matt Weber of Swiss Re said “We've seen a slowing down of price decreases, although prices are not yet stable. We believe the trend will continue and we'll see a stabilisation very soon”. However, analysts are not so sure. Moody's stated that “despite strong signs that a more rational marketplace is emerging in terms of pricing, the expansion of alternative capital markets continues to threaten the traditional reinsurance business models”. Fitch commented that “a number of fundamental factors that influence pricing remain negative” and that “some reinsurers view defending market share by writing business below the technical price floor as being an acceptable risk”. KBW commented that on-going pricing pressures are “eventually compressing underwriting margins below acceptable returns”.

It is no surprise then that much of the official comment from firms focused on new markets and innovation. Moody's states that “innovation is a defence against ongoing disintermediation, which is likely to become more pronounced in areas in which reinsurers are not able to maintain proprietary expertise”. Munich Re cited developing new forms of reinsurance cover and partnering with hi-tech industries to create covers for emerging risks in high-growth sectors. Aon Benfield highlighted three areas of potential growth: products based upon advanced data and analytics (for example in wider indemnification for financial institutions or pharmaceuticals), emerging risks such as cyber, and covering risks currently covered by public pools (like flood or mortgage credit). Others think the whole business model will change fundamentally. Stephan Ruoff of Tokio Millennium Re said “the traditional insurance and reinsurance value chain is breaking up and transforming”. Robert DeRose of AM Best commented that reinsurers “will have a greater transformer capital markets operation”.

Back in April 2013, I posed the question of whether financial innovation always ends in reduced risk premia (here). The risk-adjusted ROE today from a well-spread portfolio of property catastrophe business is reportedly somewhere between 6% and 12% (depending upon who you ask and on how they calculate risk-adjusted capital). Although I'd be inclined to believe the lower end of that range, the results are likely near or below the cost of capital for most reinsurers. That leads you to the magic of diversification and the over-hyped “non-correlated” feature of certain insurance risks relative to other asset classes. There's little point in reiterating my views on those arguments as per previous posts here, here and here.
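
A toy illustration, with entirely hypothetical figures, of why that quoted range is so wide: the same expected profit divided by different definitions of risk-adjusted capital spans the full 6% to 12%.

expected_profit = 60.0                  # $m, well-spread property cat book
capital_bases = {
    "99.5% VaR capital": 1_000.0,       # $m, a strict capital measure
    "99.0% VaR capital": 750.0,
    "rating agency capital": 500.0,     # a looser measure flatters the ROE
}
for basis, capital in capital_bases.items():
    print(f"{basis}: ROE = {expected_profit / capital:.1%}")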

In the last post cited above, I commented that “the use by insurers of their economic capital models for reinsurance/retrocession purchases is a trend that is only going to increase as we enter into the risk based solvency world under Solvency II”. Dennis Sugrue of S&P said “we take some comfort from the strength of European reinsurers’ capital modelling capabilities”, which can’t but enhance the reputation of regulatory approved models under Solvency II. Some ILS funds, such as Twelve Capital, have set up subordinated debt funds in anticipation of the demand for regulatory capital (and provide a good comparison of sub-debt and reinsurance here).

One interesting piece of news from Monte Carlo was the establishment of a fund by Guy Carpenter and a new firm founded by ex-PwC partners called Vario Partners. Vario states on their website they were “established to increase the options to insurers looking to optimise capital in a post-Solvency II environment” and are proposing private bonds with collateral structured as quota share type arrangements with loss trigger points at 1-in-100 or 1-in-200 probabilities. I am guessing that the objective of the capital relief focussed structures, which presumably will use Vario proprietary modelling capabilities, is to allow investors a return by offering insurers an ability to leverage capital. As their website says, “the highest RoE is one where the insurer's shareholders' equity is geared the most, and therefore [capital] at its thinnest”. The sponsors claim that the potential for these bonds could be six times that of the cat bond market. The prospect of allowing capital markets easy access to the large quota share market could add to the woes of the current reinsurance business model.
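
A minimal sketch of that gearing argument, with hypothetical figures: if part of a fixed capital requirement can be met by a collateralised bond rather than equity, the same underwriting profit (less the bond coupon) is spread over a thinner equity base.

underwriting_profit = 40.0    # $m, unchanged by the capital structure
capital_required = 400.0      # $m, e.g. set at a 1-in-200 level
bond_cost = 0.05              # assumed coupon on the capital-relief bond

for bond_share in (0.0, 0.25, 0.50):
    bond = capital_required * bond_share
    equity = capital_required - bond
    roe = (underwriting_profit - bond_cost * bond) / equity
    print(f"bond share {bond_share:.0%}: RoE = {roe:.1%}")
    # prints 10.0%, 11.7% and 15.0% -- the thinner the equity, the higher the RoE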

Low risk premia and leverage. Now that’s a good mix and, by all accounts, the future.

Follow-on (13th October 2015): Below are two graphs from the Q3 report from Lane Financial LLC which highlight the reduced risk premia prevalent in the ILS public cat bond market.

Figure: ILS Pricing September 2015

Figure: ILS Price Multiples September 2015

Stressing the scenario testing

Scenario and stress testing by financial regulators has become a common supervisory tool since the financial crisis. The EU, the US and the UK all now regularly stress their banks using detailed adverse scenarios. In a recent presentation, Moody's Analytics illustrated the variation in some of the metrics in the adverse scenarios used in recent tests by regulators, as per the graphic below of the peak-to-trough fall in real GDP.

Figure: Banking Stress Tests

Many commentators have criticised these tests for their inconsistency and flawed methodology while pointing out the political conflict faced by many regulators with responsibility for financial stability. They cannot be seen to be promoting a draconian scenario for stress testing on the one hand whilst assuring markets of the stability of the system on the other.

The EU tests have particularly had a credibility problem given the political difficulties in really stressing possible scenarios (hello, a Euro break-up?). An article last year by Morris Goldstein stated:

“By refusing to include a rigorous leverage ratio test, by allowing banks to artificially inflate bank capital, by engaging in wholesale monkey business with tax deferred assets, and also by ruling out a deflation scenario, the ECB produced estimates of the aggregate capital shortfall and a country pattern of bank failures that are not believable.”

In a report from the Adam Smith Institute in July, Kevin Dowd (a vocal critic of the regulators' approach) stated that the Bank of England's 2014 tests were lacking in credibility and “that the Bank's risk models are worse than useless because they give false risk comfort”. Dowd points to the US where the annual Comprehensive Capital Analysis and Review (CCAR) tests have been supplemented by the DFAST tests mandated under Dodd-Frank (these use a more standard approach to provide relative tests between banks). In the US, the whole process has been turned into a vast and expensive industry with consultants (many of them ex-regulators!) making a fortune on ever increasing compliance requirements. The end result may be that the original objectives have been somewhat lost.

According to a report from a duo of Columbia University professors, banks have learned to game the system whereby “outcomes have become more predictable and therefore arguably less informative”. The worry here is that, to ensure a consistent application across the sector, regulators have been captured by their models and are perpetuating group-think by dictating “good” and “bad” business models. Whatever about the dangers of the free market dictating optimal business models (and Lord knows there's plenty of evidence on that subject!!), relying on regulators to do so is, well, scary.

To my way of thinking, the underlying issue here results from the systemic “too big to fail” nature of many regulated firms. Capitalism is (supposedly!) based upon punishing imprudent risk taking through the threat of bankruptcy and therefore we should be encouraging a diverse range of business models with sensible sizes that don’t, individually or in clusters, threaten financial stability.

On the merits of using stress testing for banks, Dowd quipped that “it is surely better to have no radar at all than a blind one that no-one can rely upon” and concluded, rather harshly in my view, that the Bank of England should scrap the whole process. Although I agree with many of the criticisms, I think the process does have merit. To be fair, many regulators understand the limitations of the approach. Recently Deputy Governor Jon Cunliffe of the Bank of England admitted the fragilities of some of their testing and stated that “a development of this approach would be to use stress testing more counter-cyclically”.

The insurance sector, particularly the non-life sector, has a longer history with stress and scenario testing. Lloyd's of London has long required its syndicates to run mandatory realistic disaster scenarios (RDS), primarily focussed on known natural and man-made events. The most recent RDS are set out in the exhibit below.

Figure: Lloyd's Realistic Disaster Scenarios 2015

A valid criticism of the RDS approach is that insurers know what to expect and are therefore able to game the system. Risk models, such as the commercial catastrophe models sold by firms like RMS and AIR, have proven ever adept at running historical or theoretical scenarios through today's modern exposures to get estimates of losses to insurers. The difficulty comes in assigning probabilities to known natural events, where the historical data is only really reliable for the past 100 years or so, and to man-made events in the modern world, such as terrorism or cyber risks, which are virtually impossible to predict. I previously highlighted some of the concerns on the methodology used in many models (e.g. on correlation here and VaR here) used to assess insurance capital, which have now been embedded into the new European regulatory framework Solvency II, calibrated at a 1-in-200 year level.

The Prudential Regulation Authority (PRA), now part of the Bank of England, detailed a set of scenarios last month to stress test its non-life insurance sector in 2015. The detail of these tests is summarised in the exhibit below.

Figure: PRA General Insurance Stress Test 2015

Robert Childs, the chairman of the Hiscox group, raised some eyebrows by saying the PRA tests did not go far enough and called for a war game type exercise to see “how a serious catastrophe may play out”. Childs proposed that such an exercise would give regulators the confidence to let the industry get on with dealing with the aftermath of any such catastrophe without undue fussing from the authorities.

An efficient insurance sector is important to economic growth and development: by facilitating trade and commerce through risk mitigation and dispersion, it allows firms to allocate capital to productive means more effectively. Too much “fussing” by regulators through overly conservative capital requirements, perhaps resulting from overly pessimistic stress tests, can impede economic growth through excess cost. However, given the movement globally towards larger insurers, which in my view will accelerate under Solvency II given its unrestricted credit for diversification, the regulators' focus on financial stability and the experiences in banking mean that fussy regulation will be in vogue for some time to come.

The scenarios selected by the PRA are interesting in that the focus for known natural catastrophes is on the frequency of large events, as opposed to the emphasis on severity in the Lloyd's RDS. It's arguable that the probability of two major European storms in one year, or three US storms in one year, is significantly more remote than the 1-in-200 probability level at which capital is set under Solvency II; the rough calculation below illustrates the point. One of the more interesting scenarios is the reverse stress test whereby the firm becomes unviable. I am sure many firms will select a combination of events with an implied probability of all occurring within one year so remote as to be impossible. Or select some ultra extreme events such as the Cumbre Vieja mega-tsunami (as per this post). A lack of imagination in looking at different scenarios would be a pity as good risk management should be open to really testing portfolios rather than running through the same old known events.
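
As a rough sketch, with purely illustrative return periods and ignoring any clustering between events: if each major European storm is a 1-in-50-year event and the two are independent, the joint probability of both occurring in one year is (1/50) × (1/50) = 1/2,500, already far beyond the 1-in-200 calibration. Stack three or four such events for a reverse stress test and the implied return period quickly runs into the millions of years.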

New scenarios are constantly being suggested by researchers. Swiss Re recently published a paper on a recurrence of the New Madrid cluster of earthquakes of 1811/1812, which they estimated could result in $300 billion of losses, of which 50% would be insured (breakdown as per the exhibit below). Swiss Re estimates the probability of such an event at 1 in 500 years, or roughly a 10% chance of occurrence within the next 50 years.

Figure: 1811 New Madrid Earthquakes repeated
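
The 50-year figure follows directly from the quoted return period, assuming the annual probability is independent from year to year: 1 − (1 − 1/500)^50 ≈ 9.5%, or roughly the 10% chance cited.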

Another interesting scenario, developed by the University of Cambridge and Lloyd's and considered technologically possible, is a cyber attack on the US power grid (in this report). There have been a growing number of cases of hacking into power grids in the US and Europe, which make this scenario ever more real. The authors estimate the event at a 1-in-200 year probability and detail three scenarios (S1, S2, and the extreme X1) with insured losses ranging from $20 billion to $70 billion, as per the exhibit below. These figures are far greater than the probable maximum loss (PML) estimated for the sector by a March UK industry report (as per this post).

Figure: Cyber Blackout Scenario

I think it will be a very long time before any insurer willingly publishes the results of scenarios that could cause it to be in financial difficulty. I may be naive but I think that is a pity because insurance is a risk business and increased transparency could only lead to more efficient capital allocations across the sector. Everybody claiming that they can survive any foreseeable event up to a notional probability of occurrence (such as 1 in 200 years) can only lead to misplaced solace. History shows us that, in the real world, risk has a habit of surprising, and not in a good way. Imaginative stress and scenario testing, performed in an efficient and transparent way, may help to lessen the surprise. Nothing however can change the fact that the “unknown unknowns” will always remain.

Converts on a comeback

My initial reaction, from a shareholder viewpoint, when a firm issues a convertible bond is negative, and I suspect that many other investors feel the same. My past experience as a shareholder of firms that relied on such hybrid instruments has been varied. Whether it's a sign that a growing firm has limited options and may have put the shareholder at the mercy of some manipulative financier, or the prospect that arbitrage quants will randomly buy or sell the stock at the whim of some dynamic hedging model chasing the “greeks”, my initial reaction is one of discomfort at the uncertainty over how, by whom, and when my shareholding may be diluted.

In today's low risk premia environment, it's interesting to see a pick-up in convertible issuances; in the on-going search for yield, investors are again keen to forego some coupon for the upside that the embedded call option in convertibles may offer. Names like Tesla, AOL, RedHat, Priceline and Twitter have all been active in recent times, with conversion premiums averaging over 30%. The following graph shows the pick-up in issuances according to UBS.

Figure: Convertible Bond Market Issuances 2004 to 2014

Convertible bonds have been around since the days of the railroad boom in the US. In theory, they combine the certainty of a regular corporate bond with an equity call option, offering the issuer a source of low-cost debt at an acceptable dilution rate to shareholders whilst offering the investor the relative safety of a bond with a potential for equity upside. The following graphic illustrates the return characteristics.

Figure: Convertible Bond Illustration
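
A minimal sketch of those return characteristics, with a hypothetical par value and conversion ratio and ignoring coupons and credit risk: at maturity the holder's payoff is simply the greater of par redemption and the conversion value.

def convertible_payoff(stock_price, par=100.0, conversion_ratio=2.0):
    # At maturity the holder takes the better of redemption at par
    # or converting into shares (conversion_ratio shares per bond).
    conversion_value = conversion_ratio * stock_price
    return max(par, conversion_value)

# Illustrative: a conversion price of par/ratio = 50 against a stock trading
# near 38 at issue implies a conversion premium of roughly 30%, in line with
# the premiums mentioned above.
for s in (30, 50, 70):
    print(s, convertible_payoff(s))   # 100, 100, 140 -- bond floor plus upside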

The problem for the asset class in the recent past came when the masters of the universe embraced convertible arbitrage strategies of long/short the debt/equity combined with heavy doses of leverage and no risk capital. The holy grail of an asymmetric trade without any risk was assumed to be at hand [and why not, given their preordained godness…or whatever…]! Despite the warning shot to the strategy that debt and equity pricing can diverge when Kirk Kerkorian increased his stake in General Motors in 2005 just after the debt was downgraded, many convertible arb hedge funds continued to operate at leverage multiples well in excess of 4.
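
A minimal sketch of the mechanics, with hypothetical numbers: the classic trade is long the convertible and short delta shares of the stock, so small equity moves net out; when bond and equity prices diverge, as in the GM episode, the residual loss is magnified by the leverage.

def hedged_pnl(conv_move, stock_move, delta=0.6, leverage=4.0):
    # P&L per unit of equity capital for a leveraged long-convertible /
    # short-stock position; the short delta*stock leg offsets equity moves.
    residual = conv_move - delta * stock_move
    return leverage * residual

# Normal day: bond and stock move together and the hedge nets to ~zero.
print(hedged_pnl(conv_move=0.6, stock_move=1.0))    # 0.0
# Divergence: the stock rallies while the downgraded debt falls -- the
# "riskless" trade loses on both legs, times four.
print(hedged_pnl(conv_move=-2.0, stock_move=3.0))   # -15.2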

The 2008 financial crisis, and the unwinding of the dubious lending practices used to facilitate hedge fund leverage, such as the beautifully named rehypothecation lending by banks and brokers (unfortunately the actual explanation sounds more like a Ponzi scheme), caused the arbitrage crash, not only across convertibles but across many other asset classes mixed up in so-called relative value strategies. This 2010 paper, entitled “Arbitrage Crashes and the Speed of Capital”, by Mark Mitchell and Todd Pulvino is widely cited and goes into the gory detail. There were other factors that exacerbated the impact of the 2008 financial crisis on the convertible debt market, such as market segmentation, whereby investors in other asset classes were slow to move into the convertible debt market to correct mis-pricing following the forced withdrawal of the hedge funds (more detail on this impact in this paper from 2013).

Prior to the crisis, convertible arb hedge funds dominated the convertible bond market, responsible for up to 80% of activity. Today, the market is dominated by long-only investors, with hedge funds reported to be responsible for only 25% of activity and operating at much lower leverage levels (prime brokers are restricted to leverage of less than 1.5 times these days, with recent talk of an outright rehypothecation ban for certain intermediaries on the cards). One of the funds that made it through the crash, Ferox Capital, stated in an article that convertible bonds have “become the play thing of long only investors” and that the “lack of technically-driven capital (hedge funds and proprietary trading desks) should leave plenty of alpha to be collected in a relatively low-risk manner” (well, they would say that, wouldn't they!).

The reason for my interest in this topic is that one of the firms I follow just announced a convertible issue and I wanted to find out if my initial negative reaction is still justified. [I will be posting an update on my thoughts concerning the firm in question, Trinity Biotech, after their Q1 results due this week].

Indeed, the potential rehabilitation of convertible bonds in the eyes of today's investors is highlighted by the marketing push from the likes of EY and Credit Suisse on the benefits of convertible bonds as an asset class for insurers (as per their recent reports here and here). EY highlight the benefit of equity participation with downside protection, the ability to de-risk portfolios, and the use of convertible bonds to hedge equity risk. Credit Suisse, bless their little hearts, go into more technical detail about how convertibles can be used to lower the solvency requirement under Solvency II and/or for the Swiss Solvency Test.

With outstanding issuances estimated at $500 billion, the market has survived its turbulent past and it looks like there is some magic left in the old convertible bond dog yet.

Tails of VaR

In an opinion piece in the FT in 2008, Alan Greenspan stated that any risk model is “an abstraction from the full detail of the real world”. He talked about never being able to anticipate discontinuities in financial markets, unknown unknowns if you like. It is therefore depressing to see articles talk about the “VaR shock” in the Swissie that resulted from the decision of the Swiss National Bank (SNB) to lift the cap on its FX rate on the 15th of January (examples here from the Economist and here from FT Alphaville). If traders and banks are parameterising their models from periods of unrepresentatively low volatility, or from periods when artificial central bank caps are in place, then I worry that they are not even adequately considering known unknowns, let alone unknown unknowns. Have we learned nothing?
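
A minimal sketch of the parameterisation problem, using synthetic data rather than actual EUR/CHF history: a historical-simulation VaR fitted to an artificially calm, capped window says nothing about the discontinuity that ends it.

import numpy as np

rng = np.random.default_rng(42)
calm = rng.normal(0.0, 0.1, 500)   # daily % moves while a cap suppresses volatility

var99 = -np.percentile(calm, 1)    # 99% one-day VaR fitted to the capped period
shock = -15.0                      # a cap-removal style discontinuity
print(f"99% VaR from the calm window: {var99:.2f}%")
print(f"Actual move on the break:     {shock:.2f}%  ({-shock / var99:.0f}x the VaR)")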

Of course, anybody with a brain knows (that excludes traders and bankers then!) of the weaknesses in the value-at-risk measure so beloved in modern risk management (see the Nassim Taleb and Barry Schachter quotes from the mid-1990s on the Quotes page). I tend to agree with David Einhorn when, in 2008, he compared the metric to “an airbag that works all the time, except when you have a car accident”. A piece in the New York Times by Joe Nocera from 2009 is worth a read to remind oneself of the sad topic.

This brings me to the insurance sector. European insurance regulation is moving rapidly towards risk-based capital with VaR and T-VaR at its heart. Solvency II calibrates capital at 99.5% VaR whilst the Swiss Solvency Test is at 99% T-VaR (which is approximately equal to 99.5% VaR). The specialty insurance and reinsurance sector is currently going through a frenzy of deals due to pricing and over-capitalisation pressures. The recently announced PartnerRe/AXIS deal follows hot on the heels of the XL/Catlin and RenRe/Platinum merger announcements. Indeed, it's beginning to look like the closing hours of a swingers' party with a grab for the bowl of keys! Despite being unattractive to investors, the trend highlights the need to take out capacity and overhead expenses in the sector.
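
On the aside that 99% T-VaR approximately equals 99.5% VaR: for a normally distributed P&L, the 99.5% VaR of a standard normal is about 2.58 standard deviations, while the 99% T-VaR (the average loss in the worst 1% of outcomes) is φ(2.33)/0.01 ≈ 2.67, so the two measures sit close together; for fatter-tailed distributions the gap widens.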

I have posted previously on the impact of reduced pricing on risk profiles, shifting and fattening distributions. The graphic below is the result of an exercise in trying to reflect where I think the market is going for some businesses today. Taking previously published distributions (as per this post), I estimated a “base” profile (I prefer them with profits and losses running left to right) of a phantom specialty re/insurer. To illustrate the impact of current market conditions, I then fattened the tail to account for the dilution of terms and conditions (effectively reducing risk-adjusted premia further without having a visible impact on profits in a low-loss environment). I also added risks outside of the 99.5% VaR/99% T-VaR regulatory levels whilst increasing the profit profile to reflect an increased risk appetite under pressure to maintain target profits. This resulted in a decrease in expected profit of approx. 20% and increases in the 99.5% VaR and 99.5% T-VaR of 45% and 50% respectively. The impact on ROEs (being expected profit divided by capital at 99.5% VaR or T-VaR) shows that a headline 15% can quickly deteriorate to 7-8% due to the loosening of T&Cs and the addition of some tail risk.

Figure: Tails of VaR
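
A minimal sketch of the arithmetic behind the exercise, with entirely hypothetical distributions standing in for the base and current-market profiles: capital is set at the 99.5% VaR or T-VaR of the simulated P&L, and ROE is expected profit over that capital.

import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# "Base" book: modest expected profit, moderately thin tail.
base = rng.normal(15.0, 45.0, n)

# "Current market" book: similar headline profit, but 1% of years now draw
# from a severe loss distribution (looser T&Cs plus added tail risk).
severe = rng.random(n) < 0.01
current = np.where(severe, rng.normal(-150.0, 60.0, n), rng.normal(14.0, 45.0, n))

def roe_metrics(pnl):
    var995 = -np.percentile(pnl, 0.5)        # 99.5% VaR as a positive loss
    tvar995 = -pnl[pnl <= -var995].mean()    # average loss beyond the VaR point
    return pnl.mean(), var995, tvar995

for name, pnl in (("base", base), ("current", current)):
    mean, var995, tvar995 = roe_metrics(pnl)
    print(f"{name}: ROE on VaR capital = {mean / var995:.1%}, "
          f"on T-VaR capital = {mean / tvar995:.1%}")

With these made-up inputs the base book shows a roughly 15% ROE on VaR capital, falling to high single digits for the fattened profile, in line with the deterioration described above.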

For what it is worth, T-VaR (despite its shortcomings) is my preferred metric over VaR, given its superior measurement of tail risk, and the 99.5% T-VaR is the level at which I would prefer to analyse firms so as to take account of accumulating downside risks.

The above exercise reflects where I suspect the market is headed through 2015 and into 2016 (riskier profiles, lower operating ROEs). With Solvency II coming in from 2016, introducing the deeply flawed VaR metric at this stage of the market cycle may prove to be inappropriate timing, especially if too much reliance is placed upon VaR models by investors and regulators. The “full detail of the real world”, today and in the future, is where the focus of such stakeholders should be, with much less emphasis on what the models, calibrated on what came before, say.