Monthly Archives: June 2013

Assessing reinsurers’ catastrophe PMLs

Prior to the recent market wobbles over what a post-QE world will look like, a number of reinsurers with relatively high property catastrophe exposures suffered pullbacks in their stock due to fears about catastrophe pricing pressure (the subject of a previous post). Credit Suisse downgraded Validus recently, stating that "reinsurance has become more of a commodity due to lower barriers to entry and vendor models."

As we head deeper into the US hurricane season, it is worth reviewing reinsurers' disclosures in relation to catastrophe exposures, specifically their probable maximum losses or PMLs. The 2012 edition of S&P's influential annual publication, Global Reinsurance Highlights, contains an interesting article called "Just How Much Capital Is At Risk". The article looked at net PMLs as a percentage of total adjusted capital (TAC), an S&P-determined measure, and also examined the relative tail heaviness of the PMLs disclosed by different companies. It concluded that "by focusing on tail heaviness, we may have one additional tool to uncover which reinsurers could be most affected by such an event". In other words, not only is the amount of the PML for different perils important, but the shape of the curve across different return periods (e.g. 1-in-50 years, 1-in-100 years, 1-in-250 years) is also an important indicator of relative exposure. The graphs below show net PMLs as a percentage of TAC and as a percentage of aggregate limits for the S&P sample of insurers and reinsurers.


PML as % of S&P capital

PML as % of aggregate limit

Given the uncertainties around reported PMLs discussed in this post, I particularly like seeing PMLs as a percentage of aggregate limits. In the days before the now-common use of catastrophe models (from vendor firms such as RMS, AIR and Eqecat), underwriters would subjectively estimate their PMLs as a percentage of their maximum possible loss or MPL (in the past, when unlimited coverage was more common, an estimate of the maximum loss was made, whereas today the MPL is simply the sum of aggregate limits). This practice, being subjective, was obviously open to abuse (and often proved woefully inadequate). It is interesting to note, however, that some of the MPL percentages commonly applied to peak exposures in certain markets were higher than those produced today by the vendor models at high return periods.

The vendor modellers themselves are very open about their models' limitations and regularly discuss the sources of uncertainty involved. The models highlight two main areas of uncertainty – primary and secondary. Some also refer to tertiary uncertainty in the use of model outputs.

Primary uncertainty relates to the uncertainty in determining events in time, in location, in intensity and in spatial distribution. There is often limited historical data (sampling error) to draw upon, particularly for large events. For example, scientific data on the physical characteristics of historical events such as hurricanes or earthquakes is only as reliable, for the past 100-odd years, as the instruments available at the time of the event. Even then, due to changes in factors like population density, the areas over which many events were recorded may miss important physical elements of the event. There are also many unknowns relating to catastrophic events, and we are continuously learning new facts, as this article on the 2011 Japan quake illustrates.

Each of the vendor modellers builds a catalogue of possible events by supplementing known historical events with other possible events (i.e. they fit a tail to the known sample). Even though the vendor modellers stress that they do not predict events, their event catalogues determine implied probabilities that are now dominant in the catastrophe reinsurance pricing discovery process. These catalogues are subject to external validation from institutions such as the Florida Commission, which certifies models for use in setting property rates (and has an interest in ensuring rates stay as low as possible).
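To make the tail-fitting idea concrete, below is a minimal sketch in Python of a peaks-over-threshold fit – a generalised Pareto distribution fitted to the largest losses in a hypothetical record and used to extrapolate exceedance probabilities beyond the observed data. All figures are assumptions for illustration; real vendor catalogues are built from physical event simulation, not a simple statistical fit.

```python
# Minimal sketch: fit a Generalized Pareto Distribution (GPD) to the
# tail of a hypothetical loss record and extrapolate exceedance
# probabilities beyond the observed maximum.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical record: 100 years of annual maximum event losses ($bn)
losses = rng.lognormal(mean=1.0, sigma=1.2, size=100)

# Peaks-over-threshold: keep exceedances above a high threshold
threshold = np.percentile(losses, 90)
exceedances = losses[losses > threshold] - threshold

# Fit the GPD to the exceedances (location fixed at zero)
shape, _, scale = stats.genpareto.fit(exceedances, floc=0)

def prob_exceed(x):
    """Annual probability that the maximum loss exceeds x, combining the
    empirical threshold-crossing rate with the fitted GPD tail."""
    p_over = (losses > threshold).mean()
    return p_over * stats.genpareto.sf(x - threshold, shape, loc=0, scale=scale)

# Implied return period of a loss twice the largest observed
x = 2 * losses.max()
p = prob_exceed(x)
print(f"P(annual max loss > {x:.1f}bn) = {p:.4f} (~1-in-{1/p:.0f} years)")
```

Note how much the implied return period moves if the threshold or the sample changes – the sampling error discussed above flows straight into the tail probabilities.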

Secondary uncertainty relates to data on the possible damage from an event – soil type, property structure, construction materials, location and aspect, building standards and similar factors (others include liquefaction, landslides, fire following an event, business interruption, etc.). Considerable strides have been made, especially in the US, in reducing secondary uncertainty in developed insurance markets as databases have grown, although Asia and parts of Europe still lag.

A Guy Carpenter report from December 2011 on uncertainty in models estimates crude confidence levels of -40%/+90% for PMLs at a national level and -60%/+170% at a state level. These ranges are significant and illustrate why all loss estimates produced by models must be treated with care and a healthy degree of scepticism.
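As a rough illustration of what those bands mean in practice, here is the arithmetic applied to hypothetical modelled PMLs:

```python
# Applying the crude Guy Carpenter confidence bands to hypothetical PMLs
national_pml = 500.0  # $m, hypothetical modelled national-level PML
print(f"National: {national_pml * 0.60:.0f} to {national_pml * 1.90:.0f} ($m)")

state_pml = 200.0     # $m, hypothetical modelled state-level PML
print(f"State:    {state_pml * 0.40:.0f} to {state_pml * 2.70:.0f} ($m)")
```

A $500m national PML could plausibly be anywhere from $300m to $950m – hardly a precise number.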

Disclosures by reinsurers have also improved in recent years in relation to specific events. In the recent past, many reinsurers simply disclosed point estimates for their largest losses; some still do. Indeed some, such as the well-respected Renaissance Re, do not disclose any such figures at all, on the basis that they are often misinterpreted by analysts and investors. Those that do disclose figures do so with comprehensive disclaimers – one of my favourites is "investors should not rely on information provided when considering an investment in the company"!

Comparing disclosed PMLs between reinsurers is rife with difficulty. Issues to consider include how firms define zonal areas, whether they use a vendor model or a proprietary model, whether model options such as storm surge are included, how model results are blended, and annual aggregation methodologies. These are all critical considerations, and the detail provided in reinsurers' disclosures is often insufficient to make a determination. An example of the difficulty is comparing the disclosures of two of the largest reinsurers – Munich Re and Swiss Re. Both disclose PMLs for Atlantic wind and European storm on a 1-in-200 year return basis. Munich Re's net loss estimates for the two events are 18% and 11% respectively of its net tangible assets; Swiss Re's are 11% and 10%. However, the comparison is of limited use, as Munich's figures are on an aggregate VaR basis while Swiss Re's reflect the pre-tax impact on economic capital of each single event.

Most reinsurers disclose their PMLs on an occurrence exceedance probability (OEP) basis – essentially the probability that the largest single-event loss in a year exceeds a given amount, combining an assumed event frequency with the distribution of loss per event. Other bases used for determining PMLs include an aggregate exceedance probability (AEP) basis and an average annual loss (AAL) basis. AEP curves show the distribution of aggregate annual losses; understanding how single-event losses are aggregated or ranked in the calculation (each vendor has its own methodology) is critical for comparisons. The AAL is the mean of the aggregate loss exceedance probability distribution – the expected loss per year averaged over a defined period.
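To make the distinction between these bases concrete, here is a toy Monte Carlo sketch – Poisson event counts with lognormal severities, all parameters assumed for illustration rather than taken from any vendor model:

```python
# Toy event-set simulation to illustrate OEP vs AEP vs AAL
import numpy as np

rng = np.random.default_rng(7)
n_years, freq = 100_000, 0.8              # simulated years; mean events/year

annual_max = np.zeros(n_years)            # largest single event each year
annual_agg = np.zeros(n_years)            # sum of all events each year
for i in range(n_years):
    n = rng.poisson(freq)
    if n:
        sev = rng.lognormal(mean=2.0, sigma=1.5, size=n)  # $m per event
        annual_max[i], annual_agg[i] = sev.max(), sev.sum()

x = 500.0  # $m threshold
oep = (annual_max > x).mean()   # P(largest single event loss in a year > x)
aep = (annual_agg > x).mean()   # P(aggregate annual loss > x)
aal = annual_agg.mean()         # mean annual aggregate loss

print(f"OEP({x:.0f}) = {oep:.4%}, AEP({x:.0f}) = {aep:.4%}, AAL = {aal:.1f}m")
```

The AEP always sits at or above the OEP at a given threshold, since the annual aggregate includes the largest event – which is why an aggregate PML, like Flagstone's below, can be breached by a frequency of events none of which is individually extreme.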

An example of the potentially misleading nature of disclosed PMLs is the case of Flagstone Re. Formed after Hurricane Katrina, Flagstone's business model was based upon building a portfolio of catastrophe risks with an emphasis on non-US risks. Although US risks carry the highest premium (by value and rate on line), they are also the most competitive. The idea was that superior risk premia could be earned on a diverse portfolio sourced from less competitive markets. Flagstone reported its annual aggregate PML on a 1-in-100 and a 1-in-250 year basis. As the graph below shows, Flagstone was hit by a frequency of smaller losses in 2010 and, particularly, in 2011 that resulted in aggregate losses far in excess of its reported PMLs. The losses invalidated the business model and the firm was sold to Validus in 2012 at approximately 80% of book value. Flagstone's CEO, David Brown, stated at the closing of the sale that "the idea was that we did not want to put all of our eggs in the US basket and that would have been a successful approach had the pattern of the previous 30 to 40 years continued".


Flagstone CAT losses

The graphs below show a sample of reinsurers' PML disclosures as at end-Q1 2013 as a percentage of net tangible assets. Some reinsurers show their PMLs as a percentage of capital including hybrid or contingent capital; for the sake of comparison, I have excluded such hybrid or contingent capital from the net tangible assets calculations in the graphs below.

US Windstorm

US windstorm PMLs 2013

US & Japan Earthquake

US & Japan PMLs 2013

As per the S&P article, it's important to look at the shape of PML curves as well as their levels for different events. For example, the shape of Lancashire's PML curve stands out in the earthquake graphs and for US Gulf of Mexico windstorm. Montpelier for US quake and AXIS for Japan quake also stand out in terms of increased exposure at higher return periods. In terms of the level of exposure, Validus stands out on US wind, Endurance on US quake, and Catlin & Amlin on Japan quake.

Any investor in this space must form their own view on the likelihood of major catastrophes when determining their risk appetite. When assessing the probability of historical events recurring, care must be taken to ensure past events are viewed on the basis of existing exposures. Irrespective of whether you are a believer in the impact of climate change (which I am), graphs such as the one below (based on Swiss Re data inflated to 2012) are often used in the industry. They imply an increasing trend in insured losses in the future.

Historical Insured Catastrophe Losses 1990 to 2012, Swiss Re data

The reality is that, as the world's population increases, resulting in higher housing density in catastrophe-exposed areas such as coastlines, the past needs to be viewed in terms of today's exposures. Pictures of Ocean Drive in Florida in 1926 and in 2000 best illustrate the point.

Ocean Drive Florida 1926 & 2000

There has been interesting analysis performed in the past on exposure-adjusting or normalising US hurricane losses, most notably by Roger Pielke (as the updated graph on his blog shows). Running historical US windstorms through commercial catastrophe models with today's exposure data on housing density and construction types shows a similar pattern to Pielke's graph. These analyses produce a more variable trend, a lot less certain than the increasing trend in the graph based on Swiss Re data. They suggest that the 1970s and 1980s may have been decades of reduced US hurricane activity relative to history and that more recent decades are returning to more "normal" activity levels for US windstorms.
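The normalisation idea itself is simple: scale each historical loss for inflation, for growth in real wealth per capita and for population growth in the affected area. A sketch with purely hypothetical factors:

```python
# Pielke-style normalisation of a historical hurricane loss.
# All input figures below are hypothetical placeholders.
def normalise_loss(nominal_loss, inflation, wealth, population):
    """Normalised loss = nominal x inflation x real wealth per capita
    growth x affected-area population growth."""
    return nominal_loss * inflation * wealth * population

loss_1926 = 100.0   # $m nominal, hypothetical 1926 hurricane loss
inflation = 12.0    # hypothetical 1926 -> 2012 price-level multiple
wealth = 4.0        # hypothetical growth in real wealth per capita
population = 15.0   # hypothetical growth in coastal county population

print(f"${normalise_loss(loss_1926, inflation, wealth, population):,.0f}m")
# -> $72,000m: a modest nominal loss from an early-century storm implies
#    an enormous loss against today's coastal exposures.
```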

In conclusion, reviewing PMLs disclosed by reinsurers provides an interesting insight into potential exposures to specific events. However, the disclosures are only as good as the underlying methodology used in their calculation. Hopefully, further detail on these PML calculations will be provided to investors in the future so that real and meaningful comparisons can be made. Notwithstanding what PMLs may show, investors need to understand the potential for catastrophic events and adapt their risk appetite accordingly.

Apple's options

Nothing that has occurred concerning Apple (AAPL) since my previous post in April has radically changed my view. The shareholder-friendly proposals announced at the last quarterly call and the updates announced at the recent conference don't alter the fundamentals. Reduced margins and revenue growth from the flagship iPhone product lines may be partially offset by iPad growth and service revenues (from the likes of iRadio), but Apple's trajectory now suggests that its status as a growth stock is in the past. In short, I think AAPL will trade within a range around +/-20% of its current level until its medium-term future is more certain. I don't see much of a compelling investment case in the short term, as I do not think the market has yet fully grasped the new reality for Apple (e.g. analyst targets still look too rosy to me). I think Apple's future lies in reinforcing its ecosystem through upgrades, new services and incremental changes to product offerings. New product offerings may restore Apple's growth status, but nothing currently being speculated on seems to be a game changer to me.

I did have a quick look at Apple's options for opportunities. I used data from Yahoo last weekend, with AAPL at $430 (I would caution against using data from sources like Yahoo for detailed analysis, but they are generally okay for a rough initial feel), as per the graph below.

Apple's options

Given the recent volatility in the stock and the current uncertainty, it was no surprise to find that Apple's options are expensive, with significant time-decay premia. The option curves are skewed to the upside, which reflects current market expectations and doesn't offer any obvious mispricing opportunities against a normal distribution. My analysis shows that Apple's balance sheet limits the downside potential beyond a 30% drop. I was thinking of a possible strategy along the lines of buying the stock and using the dividend to fund downside protection, but the curve shows that would pay for only a 6-month put at a strike around $360 – not a compelling strategy given my views and lack of conviction on the stock.
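As a rough cross-check on that dividend-funded put idea, the sketch below prices puts with the standard Black-Scholes formula and scans for the strike a given premium budget can fund. The dividend budget, rate and volatility inputs are assumptions for illustration, and listed prices (with the skew noted above) would differ:

```python
# What protective put does a given annual premium budget buy?
import numpy as np
from scipy.stats import norm

def bs_put(S, K, T, r, sigma):
    """Black-Scholes price of a European put (no dividend adjustment)."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return K * np.exp(-r * T) * norm.cdf(-d2) - S * norm.cdf(-d1)

S = 430.0               # AAPL price at the time of the post
budget = 12.20          # hypothetical annual dividend per share
r, sigma = 0.01, 0.35   # hypothetical risk-free rate and implied vol

for T, label in [(0.5, "6-month"), (1.0, "12-month")]:
    for K in range(420, 300, -10):  # scan strikes downwards
        if bs_put(S, K, T, r, sigma) <= budget:
            print(f"{label}: budget of ${budget:.2f} buys a put struck near ${K}")
            break
```

Under these assumed inputs the result lands in the same ballpark as the $360 strike noted above – the dividend simply doesn't buy much protection.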

I do use options occasionally, primarily for two reasons – for risk management purposes (mainly insurance on the downside) or as an investment in a stock by way of an out-of-the-money option that I think will break out of a historical range (generally on the upside, but equally valid for a downside punt). For the latter purpose, the difficulty is finding a liquid out-of-the-money option market over 12 months on such breakout opportunities. Many people follow strategies such as selling puts or calls to supplement returns, such as the one in this article on Apple. I really don't get these strategies (negative gamma in trader speak). Why take such downside (albeit tail) risk for so little upside? It seems like the ultimate pennies-in-front-of-a-steamroller play to me. Adding in the negative liquidity impact of the strategy in a stress scenario (just when liquidity becomes so valuable) further highlights the dangers. In the words of the prop trader interviewed in "Inside the House of Money", Steven Drobny's excellent book, "you should never be short gamma".
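The asymmetry is easy to see with toy numbers (all assumed):

```python
# Toy illustration of short-put ("pennies in front of a steamroller") P&L
premium = 2.0        # collected per option sold
crash_loss = 70.0    # loss if the stock gaps far through the strike
p_crash = 0.02       # assumed per-period probability of such a gap

expected = (1 - p_crash) * premium + p_crash * (premium - crash_loss)
print(f"Expected P&L per period: {expected:+.2f}")              # +0.60 on average
print(f"Worst case:              {premium - crash_loss:+.2f}")  # -68.00
```

A thin positive expectancy against a tail loss many multiples of the income collected – and the loss arrives exactly when liquidity is scarcest.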

New valuation realities

As the market pulls back again this week in a much-needed dose of worry about where QE is leading us and how it will end, there is another interesting article from Buttonwood in this week's Economist. Based upon the work of analysts at the investment banks BNP Paribas, Société Générale and Goldman Sachs (Andrew Lapthorne of SG does high-quality analysis and his work generally makes for insightful reading), the article highlights how valuations based upon price-to-book ratios have broken with pre-crisis history and currently differentiate more acutely between "quality" stocks and the rest (with varying criteria for quality applied by the said analysts).

The article highlights the limited pool of “quality” stocks no-matter what criteria is used and Buttonwood also makes a point (which I fully agree with), namely that “investors have been flocking to equities because interest rates are so low; some, perhaps, on the naive view that using a lower discount rate on future cashflows translates into higher share prices today“.

As readers of this blog will be aware, two sectors that I follow are the wholesale insurance and alternative telecom sectors. In previous posts, I have presented my historical valuation metrics for both sectors (albeit from limited samples) and they are combined in the graph below (one based upon price to tangible book, the other on an EV/EBITDA metric). The alternative telecom sector is as far away from any "quality" stock criteria as one could imagine and would be in the lowest quintile (on volatility alone!) of any sensible ranking. Although results in the wholesale insurance sector are by definition volatile, some of the bigger names like Munich Re may get higher ratings, maybe a 2 or 3 on Buttonwood's graph.


wholesale insurer & altnet valuation metric comparison

The main point I am trying to make in this post is that relying on valuations returning to their pre-crisis levels for certain sectors is just not realistic or sensible. Unless the market goes into fantasy land on the upside (this may seem idle speculation given the market's current mood, but just think where sentiment was a few short weeks ago), the differentiation currently being made in the market between business models and their inherent volatility is rational. The worry, as the article points out, is that there are not enough "quality" stocks around currently to whet the appetite of hungry investors, and historically that has been a negative indicator for future stock returns.

Will it be different for Level3 this time?

As per my previous post on telecom experiences, I reviewed my projections and valuation methodology for Level3. Level3 has a frustrating yet fascinating past. It miraculously survived the telecom implosion with an oversized debt load, growing into its debt by buying up smaller metro-focused telecoms and, most recently, merging with a post-Chapter 11 Global Crossing.

Level3 struggled with the integration of its numerous merger partners from 2005 to 2007 and, with the downturn from 2008 to 2010, suffered reductions in both the top and bottom lines of the combined entities. There is now some hope, however, that the integration with Global Crossing will not suffer the same fate. For a start, Level3 approached the integration with a much sharper focus on the customer experience and on ensuring minimal service disruption. Also, Global Crossing itself had a number of years following its restructuring in which it focused on its core products and de-emphasised low-margin commodity business. Finally, the recent replacement of long-time CEO Jim Crowe with COO Jeff Storey seems to have brought a new focus on growing the larger business organically rather than through continuous M&A.

I developed three scenarios to illustrate the benefits and dangers of Level3's current leveraged business model. The pessimistic scenario assumes that Level3 does not succeed in growing the top line and stumbles on achieving material EBITDA margin improvement, only managing margins in the mid-20s range. The base scenario assumes that Level3 grows its higher-margin business modestly (against a stodgy economic background, with interest rates gradually stepping up over the medium term), offsetting reductions in voice-based business and achieving an EBITDA margin around 30% in the medium term. The optimistic scenario assumes Level3 achieves ongoing synergies and material EBITDA margin improvement, reaching a 33% margin by 2017 and thereafter. Graphs representing the scenarios, including the resulting leverage ratios, are below.

LVLT Projection: Pessimistic Scenario

LVLT Projection: Base Scenario

LVLT Projection: Optimistic Scenario

The pessimistic scenario assumes that Level3 can't get its leverage materially below 500% and would ultimately need to be restructured. Assuming the equity would be wiped out here may be conservative given the equity's history to date at higher leverage levels; a takeover may also give the equity some value in this scenario. Notwithstanding these possibilities, the pessimistic scenario illustrates the dangers of investing in a highly leveraged firm and, given the current macro-economic headwinds and the likely higher interest rate environment to come, I believe an equity wipe-out remains a risk for Level3.

The thin line between madness and sanity for highly leveraged firms is illustrated by the upside that modest and healthy growth in both top and bottom lines could deliver in the base and optimistic scenarios respectively. The following table shows the DCF results at discount rates ranging from 5% to 15%. The analysis assumes a termination multiple of discounted free cash flow after 10 years, in 2022 (with different multiples for each scenario). As I stated in the previous telecom post, I take the results of a DCF analysis for these firms with a healthy pinch of salt given the timeframe involved and the number of assumptions that have to be made (e.g. the cost of debt). Focusing on a discount rate of between 7.5% and 12.5% (which is where I think LVLT should be) shows that Level3's leveraged business model provides a 2 to 3 times upside against a 100% downside risk profile (assuming a current $21 per share price).
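For anyone wanting to replicate the mechanics, below is a bare-bones sketch of the DCF calculation described: ten years of free cash flow plus a terminal value at a multiple of the final-year figure, discounted and expressed per share against the $21 price. The cash-flow paths, terminal multiples and share count are hypothetical placeholders, not the actual scenario outputs behind the table below:

```python
# Bare-bones scenario DCF: PV of a 10-year FCF path plus a terminal
# value at a multiple of the final year, expressed per share.
def dcf_per_share(fcf_path, terminal_multiple, r, shares):
    pv = sum(f / (1 + r) ** (t + 1) for t, f in enumerate(fcf_path))
    terminal = fcf_path[-1] * terminal_multiple / (1 + r) ** len(fcf_path)
    return (pv + terminal) / shares

# Hypothetical 10-year free cash flow paths ($m) and terminal multiples
scenarios = {
    "pessimistic": ([50] * 10, 6),
    "base":        ([300 + 100 * t for t in range(10)], 12),
    "optimistic":  ([400 + 150 * t for t in range(10)], 14),
}
shares, price = 220.0, 21.0   # millions of shares (hypothetical); $/share

for r in (0.075, 0.10, 0.125):
    for name, (fcf, mult) in scenarios.items():
        v = dcf_per_share(fcf, mult, r, shares)
        print(f"r={r:.1%} {name:>11}: ${v:6.2f} ({v / price - 1:+.0%} vs $21)")
```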

Summary of DCF Analysis

LVLT Share Price Upside & Downside

An alternative valuation method is to look at the EV/EBITDA multiples implied by the scenarios above. This analysis confirms a possible 200% to 300% upside for the base and optimistic scenarios respectively over a 5-year time horizon (and the 100% downside!).
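The multiple-based cross-check follows the same logic in one line: equity value equals projected EBITDA times an assumed EV/EBITDA multiple, less net debt. A sketch with hypothetical figures:

```python
# Implied equity value per share from an EV/EBITDA multiple, less net debt.
# All figures are hypothetical placeholders for a 5-year horizon.
def implied_per_share(ebitda, ev_multiple, net_debt, shares):
    return max(ebitda * ev_multiple - net_debt, 0.0) / shares

net_debt, shares, price = 8_000.0, 220.0, 21.0  # $m, millions, $/share

for name, ebitda, mult in [("pessimistic", 1_500.0, 5.5),
                           ("base",        2_500.0, 7.5),
                           ("optimistic",  3_000.0, 8.5)]:
    v = implied_per_share(ebitda, mult, net_debt, shares)
    print(f"{name:>11}: ${v:.2f}/share ({v / price - 1:+.0%} vs ${price:.0f})")
```

Note how sensitive the equity value is to the multiple once net debt is this large relative to EV – the essence of the leveraged upside and downside discussed above.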

EV/EBITDA Projection: Pessimistic Scenario

EV/EBITDA Projection: Base Scenario

EV/EBITDA Projection: Optimistic Scenario

Conclusion

Level3 has broken many hearts in the past. However, if the new CEO can execute on organic growth and margin improvement, the stock offers an attractive upside over the next few years due to its leveraged balance sheet and operating model. An absence of macro-economic turmoil will also be an important factor in any success. For even more aggressive investors, playing the stock through long-dated out-of-the-money options offers the prospect of leveraging returns even further (with the accompanying increase in risk profile). As I keep stating, Level3 has promised much in the past and failed to deliver on a spectacular basis. This time, maybe, just maybe, it could deliver something for patient investors. Anybody considering Level3 should always keep in mind that it remains a high-risk/high-return play and is not for the faint-hearted.

Are equity markets in bubble territory?

With Friday's selloff, it will be interesting to see if this week brings a pause in the equity run-up. The rise has been dramatic, with most US indices up 12% to 14% this year and over 20% since the November lows. I was struck, during the last market pause in May, by the comments on the US business TV shows; one commentator said there was a wall of money on the sidelines waiting to buy on the dip. Institutional money desperate for yield and companies filling buy-back programmes do seem to provide this market with a floor.

Historical multiples for the Dow, such as the TTM PE at 18.85 and the PE10 at 23.89 at the end of May, are high relative to their historical averages of 15.5 and 16.47 respectively. However, given the flood of money printing by central banks around the world, such levels are neither surprising nor excessive. I don't think we are in bubble territory yet but, given the lack of alternatives for money, we will likely end up there. Whether that takes another 6 or 12 or 24 months is not really important. In cases where risk premia are irrational, a quote from Jim Leitner in "The Invisible Hands" comes to mind: he advises that an investor should focus on "the possibility of buying cheap insurance when the market is willing to sell it, before the horse has left the barn". It seems to me that this is such a time, and I will be looking for such opportunities in the absence of a major pullback. To my mind, it's better to spend some profit to buy peace of mind whilst also participating in further run-ups.

Longer term, I am disturbed by the macro policies currently being pursued and the impact that an exit from QE may have. It makes little sense to respond to every crisis with loose monetary policy designed to reinflate asset values so that Western consumers can get back to the mall. I thought our response was going to be more fundamental this time! On that subject, I noticed a review in the Sunday papers of "When the Money Runs Out: The End of Western Affluence" by the HSBC economist Stephen King. The review was just okay, although the Economist seems to have given it a better one. Cheery reading for the holidays!