
Keep on moving, 2018

As I re-read my eve-of-2017 post, it's clear that the trepidation coming into 2017, primarily caused by Brexit and Trump's election, proved unfounded in the short term. Economically, stability was the byword of 2017 for inflation, monetary policy and growth, resulting in what the Financial Times are calling a "goldilocks year" for markets, with the S&P500 gaining an impressive 18%.

Politically, the madness that is British politics produced the June election surprise, and the year ended in a classic European fudge of an agreement on the terms of the Brexit divorce, where everybody seemingly got what they wanted. My anxiety over the possibility of a European populist curveball in 2017 proved unfounded with Emmanuel Macron's election. Indeed, Germany's election result has proven a brake on any dramatic federalist push by Macron (again the goldilocks metaphor springs to mind).

My prediction that "volatility is likely to be ever present" in US markets as the "realities of governing and the limitations of Trump's brusque approach becomes apparent" also proved misguided – the volatility part, not the part about Trump's brusque approach! According to the fact checkers, Trump made nearly 2,000 false or misleading claims in his first year; that's an average of over 5 a day! Trump has claimed credit for the amazing performance of the 2017 equity market no fewer than 85 times (something that may well come back to bite him in the years ahead). The graph below shows the amazingly smooth performance of the S&P500 in 2017 compared with analysts' predictions at the beginning of the year (see this recent post on my views relating to the current valuation of the S&P500).


As for the equity market in 2018, I can't help but think that volatility will make a come-back in a big way. Looking at the near-unanimous positive predictions from commentators for the US equity market, I am struck by a passage from Andrew Lo's excellent book "Adaptive Markets" (which I am currently reading) which states that "it seems risk-averse investors process the risk of monetary loss with the same circuit they contemplate viscerally disgusting things, while risk-seeking investors process their potential winnings with the same reward circuits used by drugs like cocaine". Lo further opines that "if financial gain is associated with risky activities, a potentially devastating loop of positive feedback can emerge in the brain from a period of lucky investments".

In a recent example of feeding that loop of positive feedback, Credit Suisse stated that "historically, strong returns tend to be followed by strong returns in the subsequent year". Let's party on! With a recent survey of retail investors in the US showing that over 50% are bullish and believe now is a good time to get into equities, this looks like a time when positive feedback should be restrained rather than espoused, as Trump's mistimed plutocratic policies are currently doing. Add in a new Fed chair, Jay Powell, and the rotation of many FOMC members in 2018, and any restriction of the punch bowl could well get a pass in the short term. Continuing the goldilocks theme feeding the loop, many commentators are currently predicting that the 10-year treasury yield won't even breach 3% in 2018! But hey, what do I know? This party will likely just keep on moving through 2018 before it comes to a messy end in 2019 or even 2020.

As my post last year proved, trying to predict the next 12 months is a mug's game. So eh, proving my mug credentials, here goes…

  • I am not even going to try to make any predictions about Trump (I'm not that big of a mug). If the Democrats can get their act together in 2018 and capitalize on Trump's disapproval ratings with sensible policies and candidates, I think they should win back the House in the November mid-terms. Gaining control of the Senate as well may be too big an ask, given the number of Trump strongholds they'll have to defend.
  • Will a Brexit deal, both the final divorce terms and an outline of trade terms, get the same fudge treatment by October 2018? Or could it all fall apart with a Conservative implosion and another possible election in the UK? My guess is on the fudge; kicking the can down the transition road seems the best way out for all. I also don't see a Prime Minister Corbyn, or a Prime Minister Johnson for that matter. In fact, I suspect this time next year Theresa May will still be the UK leader!
  • China will keep on growing (according to official figures anyway), both in economic terms and in global influence, and despite the IMF's recent warning about a high probability of financial distress, will continue to massage its economy through choppy waters.
  • Despite a likely messy result in the Italian elections in March, with the usual drawn-out coalition drama to follow, a return to power of Silvio Berlusconi on a bandwagon of populist right-wing policies is too Pythonesque even for today's reality (imagine both Trump and Berlusconi on the world stage!).
  • North Korea is the one that scares me the most, so I hope that the consensus that neither side will go there holds. The increasingly hawkish noises from the US security advisors are a worry.
  • Finally, as always, the winner of the World Cup in June will be ……. the bookies! Boom boom.

A happy and healthy New Year to all.

Piddling Productivity

Walk around any office today and you will likely see staff on the internet or playing with their smartphones, to an extent that depends upon the office etiquette. The rise of the networked society would intuitively imply increased productivity. Data analytics, the cloud, the ease with which items can be researched and purchased all imply a rise in efficiency and productivity. Or do they?

Productivity is about "working smarter" rather than "working harder" and it reflects our ability to produce more output by better combining inputs, owing to new ideas, technological innovations and business models. Productivity is critical to future growth. Has the rise of social media, knowing what your friend's favourite type of guacamole is, made any difference to productivity? The statistics from recent years indicate the answer is no, with the slowdown in productivity vexing economists and prompting a multitude of recent opinion pieces and papers on the topic. Stanley Fischer of the Fed stated in an interesting speech earlier this month that "we simply do not know what will happen to productivity growth" and included the graph below in his presentation.

US Average Productivity Growth, 1952 to 2015
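As an aside, the standard way economists formalise "better combining inputs" is growth accounting: total factor productivity (TFP) growth is the residual output growth not explained by growth in capital and labour inputs. A minimal sketch of the arithmetic, using assumed illustrative figures rather than real data:

```python
# A minimal growth-accounting sketch of total factor productivity (TFP):
# TFP growth is the output growth left over after accounting for growth
# in capital and labour inputs (the Solow residual):
#   dA/A = dY/Y - alpha*dK/K - (1 - alpha)*dL/L
# All figures below are assumed, purely for illustration.

output_growth = 0.025   # 2.5% annual GDP growth (assumed)
capital_growth = 0.030  # 3.0% capital stock growth (assumed)
labour_growth = 0.010   # 1.0% hours-worked growth (assumed)
alpha = 0.33            # capital's share of income (a typical assumption)

tfp_growth = (output_growth
              - alpha * capital_growth
              - (1 - alpha) * labour_growth)
print(f"Implied TFP growth: {tfp_growth:.2%}")  # ~0.84% per year
```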

Martin Wolf in a piece in the FT on recent projections by the Office for Budget Responsibility (OBR) calls the prospects for productivity “the most important uncertainty affecting economic prospects of the British people”.

Some think the productivity statistics have misestimated growth and the impact of technology (e.g. the amount of free online services). A paper from earlier this month by Fed and IMF economists Byrne, Fernald and Reinsdorf concluded that "we find little evidence that the slowdown arises from growing mismeasurement of the gains from innovation in IT-related goods and services".

The good news seems to be that productivity slumps are far from unprecedented, according to a paper from September last year by Eichengreen, Park and Shin. The bad news is that the authors conclude the current slump is widespread, evident in advanced countries like the US and UK as well as in emerging markets in Latin America, Southeast Europe and Central Asia, including China.

A fascinating paper from December 2015 by staff at the Bank of England called “Secular drivers of the global real interest rate” covers a wide range of issues which are impacting growth, including productivity growth. I am still trying to digest much of the paper but it does highlight many of the economists’ arguments on productivity.

One of those is Robert Gordon, who has a new bestseller out called “The Rise and Fall of American Growth”. Gordon has long championed the view of a stagnation in technology advances due to structural headwinds such as an educational plateau, income inequality and public indebtedness.

Average Annual Total Factor Productivity

Others argue that productivity comes in waves and new technology often takes time to be fully integrated into the production process (e.g. electricity took 20 years before the benefits showed in labour productivity).

Clearly this is an important issue and one which deserves the current level of debate. Time will tell whether we are in a slump and will remain there, or whether we are at the dawn of a golden era of innovation-led productivity growth…..

Stressing the scenario testing

Scenario and stress testing by financial regulators has become a common supervisory tool since the financial crisis. The EU, the US and the UK all now regularly stress their banks using detailed adverse scenarios. In a recent presentation, Moody's Analytics illustrated the variation in some of the metrics in the adverse scenarios used in recent tests by regulators, as per the graphic below of the peak-to-trough fall in real GDP.

Banking Stress Tests

Many commentators have criticized these tests for their inconsistency and flawed methodology, while pointing out the political conflict faced by regulators with responsibility for financial stability: they cannot be seen to be promoting a draconian scenario for stress testing on the one hand whilst assuring markets of the stability of the system on the other.

The EU tests in particular have had a credibility problem, given the political difficulties in really stressing possible scenarios (hello, a Euro break-up?). An article last year by Morris Goldstein stated:

“By refusing to include a rigorous leverage ratio test, by allowing banks to artificially inflate bank capital, by engaging in wholesale monkey business with tax deferred assets, and also by ruling out a deflation scenario, the ECB produced estimates of the aggregate capital shortfall and a country pattern of bank failures that are not believable.”

In a report from the Adam Smith Institute in July, Kevin Dowd (a vocal critic of the regulatory approach) stated that the Bank of England's 2014 tests were lacking in credibility and "that the Bank's risk models are worse than useless because they give false risk comfort". Dowd points to the US, where the annual Comprehensive Capital Analysis and Review (CCAR) tests have been supplemented by the DFAST tests mandated under Dodd-Frank (these use a more standard approach to provide relative tests between banks). In the US, the whole process has been turned into a vast and expensive industry, with consultants (many of them ex-regulators!) making a fortune on ever-increasing compliance requirements. The end result may be that the original objectives have been somewhat lost.

According to a report from a duo of Columbia University professors, banks have learned to game the system, whereby "outcomes have become more predictable and therefore arguably less informative". The worry here is that, to ensure a consistent application across the sector, regulators have been captured by their models and are perpetuating group think by dictating "good" and "bad" business models. Whatever about the dangers of the free market dictating optimal business models (and Lord knows there's plenty of evidence on that subject!!), relying on regulators to do so is, well, scary.

To my way of thinking, the underlying issue here results from the systemic “too big to fail” nature of many regulated firms. Capitalism is (supposedly!) based upon punishing imprudent risk taking through the threat of bankruptcy and therefore we should be encouraging a diverse range of business models with sensible sizes that don’t, individually or in clusters, threaten financial stability.

On the merits of using stress testing for banks, Dowd quipped that “it is surely better to have no radar at all than a blind one that no-one can rely upon” and concluded that the Bank of England should, rather harshly in my view, scrap the whole process. Although I agree with many of the criticisms, I think the process does have merit. To be fair, many regulators understand the limitations of the approach. Recently Deputy Governor Jon Cunliffe of the Bank of England admitted the fragilities of some of their testing and stated that “a development of this approach would be to use stress testing more counter-cyclically”.

The insurance sector, particularly the non-life sector, has a longer history with stress and scenario testing. Lloyd's of London has long required its syndicates to run mandatory realistic disaster scenarios (RDS), primarily focussed on known natural and man-made events. The most recent RDS are set out in the exhibit below.

Lloyd's Realistic Disaster Scenarios 2015

A valid criticism of the RDS approach is that insurers know what to expect and are therefore able to game the system. Risk models, such as the commercial catastrophe models sold by firms like RMS and AIR, have proven ever adept at running historical or theoretical scenarios through today's modern exposures to get estimates of losses to insurers. The difficulty comes in assigning probabilities: the historical data on known natural events is only really reliable for the past 100 years or so, and man-made events in the modern world, such as terrorism or cyber risks, are virtually impossible to predict. I previously highlighted some of the concerns on the methodology used in many models (e.g. on correlation here and VaR here) used to assess insurance capital, which has now been embedded into the new European regulatory framework Solvency II, calibrated at a 1-in-200 year level.
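To make that calibration concrete: a 1-in-200 year event corresponds to an annual exceedance probability of 0.5%, i.e. the 99.5th percentile of the annual loss distribution. A minimal sketch below, using an assumed lognormal loss distribution purely for illustration (not calibrated to any real portfolio):

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative only: simulate 100,000 years of aggregate annual losses
# from a heavy-tailed lognormal distribution. The parameters are assumed
# for demonstration and are not calibrated to any real portfolio.
annual_losses = rng.lognormal(mean=0.0, sigma=1.5, size=100_000)

# A "1-in-200 year" event has an annual exceedance probability of
# 1/200 = 0.5%, i.e. the 99.5th percentile of the annual loss
# distribution, the calibration level used under Solvency II.
var_99_5 = np.quantile(annual_losses, 1 - 1/200)
print(f"Estimated 1-in-200 year loss (99.5% VaR): {var_99_5:.2f}")
```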

The Prudential Regulation Authority (PRA), now part of the Bank of England, detailed a set of scenarios last month to stress test the non-life insurance sector in 2015. The detail of these tests is summarised in the exhibit below.

PRA General Insurance Stress Test 2015

Robert Childs, the chairman of the Hiscox group, raised some eyebrows by saying the PRA tests did not go far enough and called for a war-game type exercise to see "how a serious catastrophe may play out". Childs proposed that such an exercise would give regulators the confidence in the industry to let it get on with dealing with the aftermath of any such catastrophe without undue fussing from the authorities.

An efficient insurance sector is important to economic growth and development: by facilitating trade and commerce through risk mitigation and dispersion, it allows firms to allocate capital more effectively to productive means. Too much "fussing" by regulators through overly conservative capital requirements, perhaps resulting from overly pessimistic stress tests, can impinge on economic growth through excess cost. However, given the movement globally towards larger insurers, which in my view will accelerate under Solvency II given its unrestricted credit for diversification, the regulators' focus on financial stability and the experiences in banking mean that fussy regulation will be in vogue for some time to come.

The scenarios selected by the PRA are interesting in that the focus for known natural catastrophes is on the frequency of large events, as opposed to the emphasis on severity in the Lloyd's RDS. It's arguable that the probability of 2 major European storms in one year, or 3 US storms in one year, is significantly more remote than the 1-in-200 probability level at which capital is set under Solvency II (see the sketch below). One of the more interesting scenarios is the reverse stress test, whereby the firm becomes unviable. I am sure many firms will select a combination of events with an implied probability of all occurring within one year so remote as to be impossible. Or select some ultra-extreme events such as the Cumbre Vieja mega-tsunami (as per this post). A lack of imagination in looking at different scenarios would be a pity, as good risk management should be open to really testing portfolios rather than running through the same old known events.
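On that frequency point, here is a minimal sketch of how one might size the probability of multiple large storms in a single year, assuming such storms arrive as a Poisson process; the rate of 0.05 qualifying storms per year is a made-up figure for illustration, not a calibrated estimate:

```python
from math import exp, factorial

# Illustrative only: assume major European windstorms arrive as a
# Poisson process. The rate of 0.05 qualifying storms per year is a
# made-up figure for demonstration, not a calibrated estimate.
lam = 0.05

def poisson_pmf(k: int, lam: float) -> float:
    """Probability of exactly k events in one year."""
    return exp(-lam) * lam**k / factorial(k)

# Probability of two or more such storms in a single year.
p_two_or_more = 1 - poisson_pmf(0, lam) - poisson_pmf(1, lam)
print(f"P(>=2 storms in one year) = {p_two_or_more:.3%}")
# ~0.12%, i.e. a return period of roughly 1-in-800 years, comfortably
# more remote than the 1-in-200 (0.5%) Solvency II calibration level.
```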

New scenarios are constantly being suggested by researchers. Swiss Re recently published a paper on a recurrence of the New Madrid cluster of earthquakes of 1811/1812, which they estimated could result in $300 billion of losses, of which 50% would be insured (breakdown as per the exhibit below). Swiss Re estimates the probability of such an event at 1 in 500 years, or roughly a 10% chance of occurrence within the next 50 years.

1811 New Madrid Earthquakes repeated
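Swiss Re's translation of a 1-in-500 year return period into roughly a 10% chance over 50 years follows from the usual simplifying assumption of independence between years; a quick sketch:

```python
# Converting a return period into the chance of at least one occurrence
# over a longer horizon, assuming independence between years (the
# standard simplification):
#   P(at least one event in n years) = 1 - (1 - 1/T)**n
T, n = 500, 50  # a 1-in-500 year event over a 50-year horizon
p = 1 - (1 - 1/T) ** n
print(f"P(event within {n} years) = {p:.1%}")  # ~9.5%, roughly 10%
```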

Another interesting scenario, developed by the University of Cambridge and Lloyd's and technologically possible, is a cyber attack on the US power grid (in this report). There have been a growing number of cases of hacking into power grids in the US and Europe, which makes this scenario ever more real. The authors estimate the event at a 1-in-200 year probability and detail three scenarios (S1, S2, and the extreme X1) with insured losses ranging from $20 billion to $70 billion, as per the exhibit below. These figures are far greater than the probable maximum loss (PML) estimated for the sector in a March UK industry report (as per this post).

Cyber Blackout Scenario

I think it will be a very long time before any insurer willingly publishes the results of scenarios that could cause it financial difficulty. I may be naive, but I think that is a pity: insurance is a risk business, and increased transparency could only lead to more efficient capital allocation across the sector. Everybody claiming that they can survive any foreseeable event up to a notional probability of occurrence (such as 1 in 200 years) can only lead to misplaced solace. History shows us that, in the real world, risk has a habit of surprising, and not in a good way. Imaginative stress and scenario testing, performed in an efficient and transparent way, may help to lessen the surprise. Nothing however can change the fact that the "unknown unknowns" will always remain.