Tag Archives: economic capital models

Beautiful Models

It has been a while since I posted on dear old Solvency II (here). As highlighted in the previous post on potential losses, the insurance sector is perceived as having robust capital levels that mitigate against the current pricing and investment return headwinds. It is therefore interesting to look at some of the detail emerging from the new Solvency II framework in Europe, including the mandatory disclosures in the new Solvency and Financial Condition Report (SFCR).

The June 2017 Financial Stability report from EIOPA, the European insurance regulator, contains some interesting aggregate data from across the European insurance sector. The graph below shows solvency capital requirement (SCR) ratios, primarily driven by the standard formula, averaging consistently around 200% for non-life, life and composite insurers. The ratio is a firm’s assets in excess of liabilities (valued as per Solvency II rules) divided by its regulatory capital requirement, as calculated by the mandated standard formula or the firm’s own internal model. As the risk profile of each business model would suggest, the variability around the average SCR ratio is largest for the non-life insurers, followed by life insurers, with the composite insurers being the least variable.
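As a minimal sketch of the ratio calculation (the figures below are hypothetical, chosen simply to land on the ~200% sector average):

```python
# Illustrative Solvency II SCR ratio calculation; all figures are hypothetical.
own_funds = 4_000  # eligible own funds: assets in excess of liabilities (EURm)
scr = 2_000        # solvency capital requirement, per standard formula (EURm)

scr_ratio = own_funds / scr
print(f"SCR ratio: {scr_ratio:.0%}")  # prints "SCR ratio: 200%"
```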


For some reason I can’t completely comprehend, the EIOPA Financial Stability report highlights differences in the SCR breakdown (as per the standard formula, expressed as a % of net basic SCR) across countries, as per the graph below, presumably due to the different profiles of each country’s insurance sector.


A review across several SFCRs from the larger European insurers and reinsurers that use internal models to calculate their SCRs highlights the differences in their risk profiles. A health warning should be attached to any such comparison given the different risk categories and modelling methodologies used by each firm (the varying treatments of asset credit risk and of business/operational risk are good examples of the differing approaches). The graph below shows each main risk category as a percentage of the undiversified total SCR.


By way of putting the internal model components in context, the graph below shows the SCR breakdown as a percentage of total assets (which obviously reflects insurance liabilities and the associated capital held against them). This comparison is also fraught with difficulty, as a (re)insurer’s total assets are not necessarily a reliable measure of extreme insurance exposure in the way that risk-weighted assets are for banks (where they are used as the denominator in bank capital ratios). For example, some life insurers can have low insurance-related liabilities and associated assets (e.g. for mortality-related business) compared to other insurance products (e.g. most non-life exposures).

Notwithstanding that caveat, the graph below shows a marked difference between firms depending upon whether they are a reinsurer or insurer, or whether they are a life, non-life or composite insurer (other items such as retail versus commercial business, local or cross-border, specialty versus homogeneous are also factors).


Initial reactions by commentators on the insurance sector to the disclosures by European insurers through SFCRs have been mixed. Some have expressed disappointment at the level and consistency of detail being disclosed. Regulators will have their hands full in ensuring that sufficiently robust standards relating to such disclosures are met.

Regulators will also have to ensure that a fair and consistent approach is adopted in calculating SCRs across all European jurisdictions, particularly for those calculated using internal models, whilst avoiding the pitfall of forcing everybody to use the same assumptions and methodology. Recent reports suggest that EIOPA is looking for a greater role in approving internal models across Europe. Systemic model risk under the Basel II banking rules, published in 2004, was arguably one of the contributors to the financial crisis.

Only time will tell if Solvency II has avoided the mistakes of Basel II in the handling of such beautiful models.

Model Ratings

An interesting blog from PwC on the new European Solvency II regulatory regime for insurers commented that “analysts were optimistic about the level of detail they could expect from the Solvency II disclosures” and that it “was hoped that this would enable the volatility of cash generation and the fungibility of capital within a group to be better understood”.

The publication of insurers’ solvency ratios under the Solvency II regulations, to be introduced from next year, using the common mandated calculation called the standard formula, will hopefully prove a good comparative measure for insurers across Europe. In the technical specifications for Solvency II (section 6 in this document), the implied credit ratings of the solvency ratios as calculated using the standard formula are shown below.

[Figure: Implied Ratings in Solvency II Standard Formula]

Firm-specific internal models present a different challenge in comparing results as, by definition, there is no common standard or template. The PwC blog states that “the outcome of internal model applications remains uncertain in many jurisdictions and different national practices are emerging in implementing the long-term guarantee package including the transitional measures”. It’s therefore interesting to look at a number of Solvency II ratios published to date using insurers’ own models (or partial internal models) and compare them to their financial strength ratings (in this case S&P ratings), as per the graphic below.

[Figure: Internal Model Solvency II Ratios & S&P Ratings]

As firms start to publish their Solvency II ratios, whether under the standard formula or using an internal model, it will be interesting to see what useful insights for investors emerge. Or… eh, based upon the above sample, otherwise.

Tails of VaR

In an opinion piece in the FT in 2008, Alan Greenspan stated that any risk model is “an abstraction from the full detail of the real world”. He talked about never being able to anticipate discontinuities in financial markets, unknown unknowns if you like. It is therefore depressing to see articles talk about the “VaR shock” in the Swissie that resulted from the decision of the Swiss National Bank (SNB) to lift the cap on its FX rate on the 15th of January (examples here from the Economist and here from FTAlphaVille). If traders and banks are parameterising their models from periods of unrepresentatively low volatility, or from periods when artificial central bank caps were in place, then I worry that they are not even adequately considering known unknowns, let alone unknown unknowns. Have we learned nothing?

Of course, anybody with a brain (that excludes traders and bankers then!) knows of the weaknesses in the value-at-risk measure so beloved in modern risk management (see the Nassim Taleb and Barry Schachter quotes from the mid-1990s on the Quotes page). I tend to agree with David Einhorn who, in 2008, compared the metric to “an airbag that works all the time, except when you have a car accident”. A piece in the New York Times by Joe Nocera from 2009 is worth a read to remind oneself of this sad topic.
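The parameterisation worry can be made concrete in a few lines of Python. This is a toy sketch under my own illustrative assumptions (the volatility and shock figures are made up, not actual Swissie data): a VaR calibrated on an artificially calm period says nothing useful about a regime break.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical daily FX returns: a calm period of artificially low volatility
# (e.g. while a central bank cap is in place), then a one-day regime break.
calm_returns = rng.normal(0.0, 0.002, 500)  # 0.2% daily volatility
shock_return = -0.15                        # an assumed 15% one-day discontinuity

def historical_var(returns, level=0.99):
    """Historical VaR: the loss threshold exceeded with probability (1 - level)."""
    return -np.quantile(returns, 1 - level)

var_99 = historical_var(calm_returns)
print(f"99% VaR calibrated on the calm period: {var_99:.2%}")
print(f"Actual one-day shock loss:             {-shock_return:.2%}")
print(f"Shock as a multiple of VaR:            {-shock_return / var_99:.0f}x")
```

The shock comes out at many multiples of the calm-period VaR, which is exactly the airbag failing in the car accident.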

This brings me to the insurance sector. European insurance regulation is moving rapidly towards risk based capital with VaR and T-VaR at its heart. Solvency II calibrates capital at 99.5% VaR whilst the Swiss Solvency Test is at 99% T-VaR (which is approximately equal to 99.5% VaR). The specialty insurance and reinsurance sector is currently going through a frenzy of deals due to pricing and over-capitalisation pressures. The recently announced PartnerRe/AXIS deal follows hot on the heels of the XL/Catlin and RenRe/Platinum merger announcements. Indeed, it’s beginning to look like the closing hours of a swingers’ party with a grab for the bowl of keys! Whilst the trend may be unattractive to investors, it highlights the need to take capacity and overhead expenses out of the sector.

I have posted previously on the impact of reduced pricing on risk profiles, shifting and fattening distributions. The graphic below is the result of an exercise in trying to reflect where I think the market is going for some businesses today. Taking previously published distributions (as per this post), I estimated a “base” profile (I prefer them with profits and losses running left to right) for a phantom specialty re/insurer. To illustrate the impact of current market conditions, I then fattened the tail to account for the dilution of terms and conditions (effectively reducing risk-adjusted premia further without having a visible impact on profits in a low-loss environment). I also added risks outside of the 99.5%VaR/99%T-VaR regulatory levels whilst increasing the profit profile to reflect an increased risk appetite driven by pressure to maintain target profits. This resulted in a decrease in expected profit of approx. 20% and increases in the 99.5%VaR and 99.5%T-VaR of 45% and 50% respectively. The impact on ROEs (being expected profit divided by capital at 99.5%VaR or T-VaR) shows that a headline 15% can quickly deteriorate to 7-8% due to the loosening of T&Cs and the addition of some tail risk.
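An exercise of this kind can be sketched in a few lines. The distributions and parameters below are entirely illustrative assumptions of mine (not those underlying the graphic): a normal “base” profit profile, and a “fattened” profile with a lower mean and a heavy-tailed loss component bolted on.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sims = 250_000

# Illustrative profit distributions (as % of premium) for a phantom
# specialty re/insurer; all parameters are made up for this sketch.
base = rng.normal(10, 15, n_sims)  # "base" profile
# Fattened profile: lower mean plus a heavy-tailed (Pareto) loss component
# to mimic loosening terms & conditions and added tail risk.
fattened = rng.normal(8, 15, n_sims) - 3 * rng.pareto(2.5, n_sims)

def var(profits, level=0.995):
    """VaR: loss at the (1 - level) quantile of the profit distribution."""
    return -np.quantile(profits, 1 - level)

def tvar(profits, level=0.995):
    """T-VaR: average loss beyond the VaR threshold."""
    q = np.quantile(profits, 1 - level)
    return -profits[profits <= q].mean()

for name, p in [("base", base), ("fattened", fattened)]:
    capital = tvar(p)  # capital held at 99.5% T-VaR
    print(f"{name:9s} mean profit {p.mean():5.1f}  "
          f"99.5% VaR {var(p):6.1f}  99.5% T-VaR {capital:6.1f}  "
          f"ROE {p.mean() / capital:6.1%}")
```

The direction of travel is the point, not the numbers: the fattened profile delivers a lower expected profit on a larger tail-risk capital base, so the ROE drops on both counts.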

[Figure: Tails of VaR]

For what it is worth, T-VaR (despite its shortfalls) is my preferred metric over VaR, given its superior measurement of tail risk, and 99.5%T-VaR is the level at which I would prefer to analyse firms to take account of accumulating downside risks.

The above exercise reflects where I suspect the market is headed through 2015 and into 2016 (more risky profiles, lower operating ROEs). As Solvency II comes in from 2016, introducing the deeply flawed VaR metric at this stage in the market may prove to be inappropriately timed, especially if too much reliance is placed upon VaR models by investors and regulators. The “full detail of the real world”, today and in the future, is where the focus of such stakeholders should be, with much less emphasis on what the models, calibrated on what came before, say.

Computer says yes

Amlin reported their Q1 figures today and had some interesting comments on their reinsurance and retrocession spend, which was down £50 million on the quarter (from 23% of gross premiums to 18%). Approximately £20 million was due to a business line withdrawal, with the remainder due to “lower rates and improved cover available on attractive terms”.

Amlin also stated: “with the assistance of more sophisticated modelling, we have taken the decision to internalise a proportion of a number of programmes. Given the diversifying nature of many of our insurance classes, this has the effect of increasing mean expected profitability whilst only modestly increasing extreme tail risk.”

The use by insurers of their economic capital models for reinsurance/retrocession purchases is a trend that is only going to increase as we enter the risk-based solvency world under Solvency II. Current market conditions have resulted in reinsurers being more open to offering multi-line aggregate coverage, protecting against both frequency and severity, with generous exposure inclusions.

It will only be a matter of time, in my opinion, before reinsurers underwrite coverage directly based upon an insurer’s own capital model, particularly where such a model has been approved by the firm’s regulator or been given the blessing of a rating agency.

Also, in the future I expect that firms will more openly disclose their operating risk profiles. There was a trend a few years ago whereby firms such as Endurance (pre-Charman) and Aspen included net risk profiles, such as those in the graphs below, in their investor presentations and supplements (despite the bad blood in the current Endurance-Aspen hostile takeover bid, at least it’s one thing they can say they have in common!).

[Figure: Operating Risk Distributions]

Unfortunately, it was a trend that did not catch on and was quickly discontinued by those firms. If insurers and reinsurers are increasingly using their internal capital models in key decision making, investors will need to insist on understanding them in more detail. A first step would be more public disclosure of the results, the assumptions, and their strengths and weaknesses.

Confounding correlation

Nassim Nicholas Taleb, the dark knight, or rather the black swan himself, said that “anything that relies on correlation is charlatanism”. I am currently reading the excellent “The Signal and the Noise” by Nate Silver. In Chapter 1 of the book he has a nice piece on CDOs as an example of a “catastrophic failure of prediction”, where he points to certain AAA CDO tranches that were rated on an assumption of a 0.12% default rate and eventually suffered an actual rate of 28%, an error factor of over 200!

Silver cites a simplified CDO example of 5 risks used by his friend Anil Kashyap at the University of Chicago to demonstrate the difference in default rates if the 5 risks are assumed to be totally independent versus totally dependent. It got me thinking about how such a simplified example could illustrate the impact of applied correlation assumptions. Correlations between core variables are critical to many financial models; they are commonly used in most credit models and will be a core feature of insurance internal models (which, under Solvency II, will be used to calculate a firm’s own regulatory solvency requirements).

So I set up a simple model (all of my models are generally so) of 5 risks and looked at the impact of varying the correlation between each risk from 100% to 0% (i.e. from totally dependent to totally independent). The model assumes a 20% probability of default for each risk and the results, based upon 250,000 simulations, are presented in the graph below. What it does show is that even at a high level of correlation (e.g. 90%) the diversification impact is considerable.
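A minimal version of such a model can be written as follows. The one-factor Gaussian copula construction is my own assumption for the sketch (the post does not specify the dependency structure used); it gives a uniform pairwise correlation between the latent default drivers.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)
n_sims, n_risks, p_default = 250_000, 5, 0.20
# A risk defaults when its latent Gaussian variable falls below this threshold.
threshold = NormalDist().inv_cdf(p_default)

def prob_all_default(rho):
    """P(all 5 risks default) under a one-factor Gaussian copula.

    Latent variables: Z_i = sqrt(rho)*M + sqrt(1-rho)*e_i, giving a
    uniform pairwise correlation of rho between the risks.
    """
    m = rng.standard_normal((n_sims, 1))        # common factor
    e = rng.standard_normal((n_sims, n_risks))  # idiosyncratic factors
    z = np.sqrt(rho) * m + np.sqrt(1 - rho) * e
    return (z < threshold).all(axis=1).mean()

for rho in [1.0, 0.9, 0.5, 0.0]:
    print(f"rho = {rho:.1f}: P(all 5 default) = {prob_all_default(rho):.4%}")
```

At 100% correlation the joint default probability equals the single-risk 20%; at 0% it collapses to 0.2^5 = 0.032%; and even at 90% correlation it sits well below 20%, which is the point made above.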

[Figure: 5 risk pool with correlations from 100% to 0%]

The graph below shows the default probabilities as a percentage of the totally dependent levels (i.e. 20% for each of the 5 risks). In effect, it shows the level of diversification that results from varying correlation from 0% to 100%. It underlines how misestimating correlation can confound model results.

[Figure: Default probabilities & correlations]