Category Archives: Equity Market

Broken Record II

With the S&P500 hitting an intraday all-time high yesterday, it has been nearly 9 months since I posted on the valuation of the S&P500 (here). Since then, I have touched on factors like the reversal of global QE flows by Central Banks (here) and the lax credit terms that may be exposed by tightening monetary conditions (here). Although the traditional pullback after Labor Day in the US hasn’t been a big feature in recent years, the market feels frothy and a pullback seems plausible. The TINA (There Is No Alternative) trade is looking distinctly tired as the bull market approaches the 3,500-day mark. So now is an opportune time to review some of the arguments on valuations.

Fortune magazine recently had an interesting summary piece on the mounting headwinds in the US which indicate that “the current economic expansion is much nearer its end than its beginning”. Higher interest rates and the uncertainty from the ongoing Trump trade squabble are obvious headwinds that have caused nervous investors to moderate valuation multiples slightly from late last year. The Fortune article points to factors like low unemployment rates and restrictions on immigration pushing up wage costs, rising oil prices, the fleeting nature of Trump’s tax cuts against their long-term impact on federal debt, high corporate debt levels (with debt to EBITDA at a 15-year high) and the over-optimistic earnings growth estimates of analysts.

That last point may seem harsh given the 24% and 10% growth in reported quarterly EPS and revenue respectively in Q2 2018 over Q2 2017, according to FactSet as at 10/08/2018. The graph below shows the quarterly reported growth projections by analysts, as per S&P Dow Jones Indices, with a fall-off in quarterly growth in 2019 from the mid-20s down to a 10-15% range, as items like the tax cuts wash out. Clearly, 10-15% earnings growth in 2019 still assumes strong earnings and has some commentators questioning whether analysts are being too optimistic given the potential headwinds outlined above.

click to enlarge

According to FactSet as at 10/08/2018, the 12-month forward PE of 16.6 is around the 5-year average level and 15% above the 10-year average, as below. As at the S&P500 high on 21/08/2018, the 12-month forward PE is 16.8.

click to enlarge

In terms of the Shiller PE, or the cyclically adjusted PE (PE10), the graph below shows the current PE10 ratio of 32.65, as at the S&P500 high on 21/08/2018, which is 63% higher than the 50-year average of 20. For the purists, the current PE10 is 89% above the 100-year average.

click to enlarge (CAPE/Shiller PE10 as at 21/08/2018 S&P500 high)
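For those unfamiliar with the mechanics, the PE10 is simply the index level divided by the average of the last ten years of inflation-adjusted earnings. A minimal sketch of the calculation is below; the earnings and CPI series are illustrative placeholders rather than the actual Shiller data behind the graph above.

```python
# Minimal sketch of the PE10 (CAPE) calculation: price divided by the
# average of ten years of inflation-adjusted earnings. The inputs below
# are illustrative placeholders, not actual S&P500 or CPI data.

def pe10(price, eps_history, cpi_history, cpi_now):
    """price: current index level
    eps_history: last 10 annual EPS figures for the index
    cpi_history: CPI level matching each EPS figure
    cpi_now: current CPI level"""
    real_eps = [eps * cpi_now / cpi for eps, cpi in zip(eps_history, cpi_history)]
    return price / (sum(real_eps) / len(real_eps))

# Hypothetical example: index at 2,874 (the 21/08/2018 intraday high)
# against placeholder earnings and CPI series.
eps_hist = [87, 97, 103, 100, 106, 112, 106, 95, 109, 125]
cpi_hist = [218, 221, 227, 231, 234, 237, 237, 241, 245, 250]
print(round(pe10(2874, eps_hist, cpi_hist, 251), 1))
```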

According to this very interesting research paper called King of the Mountain, the PE10 metric varies across different macro-economic conditions, specifically the level of real interest rates and inflation. The authors further claim that PE10 becomes a statistically significant and economically meaningful predictor of shorter-term returns under the assumption that PE10 levels mean-revert toward the levels suggested by prevailing macroeconomic conditions rather than toward long-term averages. The graph below shows the results from the research for different real yield and inflation levels, the so-called valuation mountain.

click to enlarge

At a real yield around 1% and inflation around 2%, the research suggests a median PE around 20 is reasonable. Although I know that the median is not the same as the mean, the 20 figure is consistent with the 50-year PE10 average. The debates on CAPE/PE10 as a valuation metric have been extensively aired in this blog (here and here are examples) and revolve around the use of historically applicable earnings data, adjustments for changes in accounting methodology (such as FAS 142/144 on intangible write-downs), the relevant time periods to reflect structural changes in the economy, changes in dividend pay-out ratios, the increased contribution of foreign earnings in US firms, and the reduced contribution of labour costs (due to low real wage inflation).

One hotly debated issue around CAPE/PE10 is the impact of changing profit margin levels. One conservative adjustment to PE10 for changes in profit margins is the John Hussman adjusted CAPE/PE10, as below, which attempts to normalise profit margins in the metric. This metric indicates that the current market is at an all-time high, above the 1920s and internet bubbles (it sure doesn’t feel like that!!). In Hussman’s most recent market commentary, he states that “we project market losses over the completion of this cycle on the order of -64% for the S&P 500 Index”.

click to enlarge

Given the technological changes in business models and structures across economic systems, I believe that assuming current profit margins “normalise” back to the long-term average is too conservative, particularly given the potential for AI and digital transformation to cut costs across a range of business models over the medium term. Based upon my crude adjustment to the PE10 for 2010 and prior, as outlined in the previous Broken Record post (i.e. adjusted to an 8.5% margin), using US corporate profits as a % of US GDP as a proxy for profit margins, the current PE10 of 32.65 is 21% above my profit-margin-adjusted 50-year average of 27, as shown below.

click to enlarge (margin-adjusted CAPE/Shiller PE10 as at 21/08/2018 S&P500 high)
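For transparency on the mechanics, the sketch below shows one way to implement such a margin adjustment: rescale each pre-2010 PE10 reading by the ratio of the normalised 8.5% margin to that year’s corporate-profits-to-GDP proxy, which lifts the 50-year average towards 27. This is my reading of the crude adjustment rather than a definitive reconstruction, and the margin series would come from published US national accounts data in practice.

```python
# Sketch of one way to implement a profit-margin adjustment to the
# historical PE10 series: rescale each pre-2010 reading by the ratio of
# a "normalised" margin (8.5% here, per the post) to that year's actual
# US corporate profits as a % of GDP. This reflects my reading of the
# crude adjustment described above, not a definitive reconstruction.

NORMALISED_MARGIN = 0.085  # the 8.5% assumption from the post

def adjust_pe10_series(pe10_by_year, margin_by_year, cutoff_year=2010):
    """Return the PE10 series with pre-cutoff years rescaled for margins."""
    adjusted = {}
    for year, pe10 in pe10_by_year.items():
        if year <= cutoff_year:
            adjusted[year] = pe10 * NORMALISED_MARGIN / margin_by_year[year]
        else:
            adjusted[year] = pe10
    return adjusted
```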

So, in summary, the various estimates of overvaluation for the S&P500 at its current high range from 15% to 60%. If the 2019 estimates of 10-15% quarterly EPS growth start to look optimistic, whether through deepening trade tensions or tighter monetary policy, I could see a 10% to 15% pullback. If the economic headwinds above start to get serious and the prospect of a recession becomes real (although these things normally come as a surprise), then something more serious could be possible.

On the flipside, I struggle to see where significant upside can come from in terms of getting earnings growth in 2019 past the 10-15% range. A breakthrough in trade tensions may be possible, although unlikely before the mid-term elections. All in all, the most likely short-term path to me is the S&P500 going sideways from here, absent a post-Labor Day spurt of profit taking.

But hey, my record on calling the end to this bull market has been consistently broken….

Value Matters

I recently saw an interview with Damian Lewis, the actor who plays hedge fund billionaire Bobby “Axe” Axelrod in the TV show Billions, where he commented on the differences in reaction to the character in the US and the UK. Lewis said that in the US the character is treated like an inspirational hero, whereas in the UK he’s seen as a villain. We all like to see a big-shot hedgie fall flat on their face so that we mere mortals can feel less stupid.

The case of David Einhorn is not so clear cut. A somewhat geeky character, Einhorn has seen a recent run of bad results at his hedge fund, Greenlight Capital, which is raising some interesting questions amongst the talking heads about the merits of value stocks versus the runaway success of growth stocks in recent years. Einhorn’s recent results can be seen in a historical context, based upon published figures, in the graph below.

click to enlarge

Einhorn recently commented that “the reality is that the market is cyclical and given the extreme anomaly, reversion to the mean should happen sooner rather than later”, whilst adding that “we just can’t say when”. The under-performance of value stocks is also highlighted by Alliance Bernstein in this article, as per the graph below.

click to enlarge

As an aside, Alliance Bernstein also have another interesting article which shows the percentage of debt to capital of S&P500 firms, as below.

click to enlarge

Einhorn not only invests in value stocks, like BrightHouse Financial (BHF) and General Motors (GM), but also shorts highly valued so-called growth stocks like Tesla (TSLA), Amazon (AMZN) and Netflix (NFLX), his bubble basket. In fact, Einhorn’s bubble basket has been one of the reasons behind his recent poor performance. He queries AMZN on the basis that just because they “can disrupt somebody else’s profit stream, it doesn’t mean that AMZN earns that profit stream”. He trashes TSLA and its ability to deliver safe, mass-produced electric cars, and points to the growing competition from “old media” firms for NFLX.

A quick look at 2019 projected forward PE ratios, based off today’s valuations against average analyst estimates for 2018 and 2019 EPS from Yahoo Finance, for some of today’s most hyped growth stocks, their Chinese counterparts, and some more “normal” firms like T and VZ as a counterweight, provides considerable justification for Einhorn’s arguments.

click to enlarge
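For what it’s worth, the forward PE figures in the graph are nothing more exotic than the current share price divided by the average analyst EPS estimate for 2019. A minimal sketch of the calculation is below; the prices and estimates are placeholders for illustration, not the actual Yahoo Finance figures behind the chart.

```python
# Minimal sketch of the 2019 forward PE calculation used for the graph
# above: current share price divided by the average analyst EPS estimate
# for 2019. Prices and estimates below are illustrative placeholders,
# not the actual Yahoo Finance figures behind the chart.

def forward_pe(price, est_eps):
    """Return the forward PE, or None where estimated EPS is not positive."""
    return price / est_eps if est_eps > 0 else None

inputs = {
    # ticker: (price, average 2019 EPS estimate) -- placeholders only
    "AMZN": (1900.0, 39.0),
    "NFLX": (340.0, 4.2),
    "TSLA": (320.0, -1.5),   # negative estimate -> no meaningful PE
    "T":    (33.0, 3.6),
}

for ticker, (price, eps) in inputs.items():
    pe = forward_pe(price, eps)
    print(ticker, round(pe, 1) if pe else "n/a (negative estimated EPS)")
```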

[As another aside, I am keeping an eye on Chinese valuations, hit by trade war concerns, for opportunities in case Trump’s trade war turns out to be another “huge” deal where he folds like the penny hustler he is.]

And the graph above only shows the firms projected to have positive earnings, and hence a PE ratio, in 2019 (eh, hello TSLA)!! In fact, the graph makes Einhorn’s rationale seem downright sensible to me.

Now, that’s not something you could say about Axe!

Heterogeneous Future

It seems like wherever you look these days there are references to the transformational power of artificial intelligence (AI), including cognitive or machine learning (ML), on businesses and our future. A previous post on AI and insurance referred to some of the changes ongoing in financial services in relation to core business processes and costs. This industry article highlights how machine learning (specifically multi-objective genetic algorithms) can be used in portfolio optimization by (re)insurers. To further my understanding of the topic, I recently bought a copy of a new book called “Advances in Financial Machine Learning” by Marcos Lopez de Prado, although I suspect I will be out of my depth on the technical elements of the book. Other posts on this blog (such as this one) on the telecom sector refer to the impact intelligent networks are having on telecom business models. One example is the efficiency Centurylink (CTL) has shown in its capital expenditure allocation processes from using AI, and this got me thinking about the competitive impact such technology will have on costs across numerous traditional industries.

AI is a complex topic and in its broadest context it covers computer systems that can sense their environment, think, and in some cases learn, and take applicable actions according to their objectives. To illustrate the complexity of the topic, neural networks are a subset of machine learning techniques. Essentially, they are AI systems based on simulating connected “neural units” loosely modelled on the way neurons interact in the brain. Neural networks need large data sets to be “trained”, and the number of layers of simulated interconnected neurons, with the units often numbering in the millions, determines how “deep” the learning can be. Before I embarrass myself in demonstrating how little I know about the technicalities of this topic, it’s safe to say that AI as referred to in this post encompasses the broadest definition, unless a referenced report or article specifically narrows the definition to a subset and is highlighted as such.
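To make the idea of layers of connected neural units a little more concrete, below is a toy sketch of a single forward pass through a small feed-forward network; real systems differ mainly in scale (units numbering in the millions, many more layers) and in how the weights are learned from the training data.

```python
import numpy as np

# Toy illustration of the "layers of connected neural units" idea: a
# single forward pass through a small feed-forward network. Real networks
# differ mainly in scale and in how the weights are learned from data.

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# One input of 4 features, a hidden layer of 8 units, a single output.
x = rng.normal(size=4)
w1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
w2, b2 = rng.normal(size=(1, 8)), np.zeros(1)

hidden = relu(w1 @ x + b1)      # each hidden unit combines all inputs
output = w2 @ hidden + b2       # the output combines all hidden units
print(output)
```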

According to IDC (here), “interest and awareness of AI is at a fever pitch” and global spending on AI systems is projected to grow from approximately $20 billion this year to $50 billion in 2021. David Schubmehl of IDC stated that “by 2019, 40% of digital transformation initiatives will use AI services and by 2021, 75% of enterprise applications will use AI”. By the end of this year, retail will be the largest spender on AI, followed by banking, discrete manufacturing, and healthcare. Retail AI use cases include automated customer service agents, expert shopping advisors and product recommendations, and merchandising for omni-channel operations. Banking AI use cases include automated threat intelligence and prevention systems, fraud analysis and investigation, and program advisors and recommendation systems. Discrete manufacturing AI use cases include automated preventative maintenance and quality management investigation and recommendation systems. Improved diagnosis and treatment systems are a key focus in healthcare.

In this April 2018 report, McKinsey highlights numerous use cases, concluding that “AI can most often be adopted and create value where other analytics methods and techniques are also creating value”. McKinsey emphasises that “abundant volumes of rich data from images, audio, and video, and large-scale text are the essential starting point and lifeblood of creating value with AI”. McKinsey’s AI focus in the report is particularly on deep learning techniques such as feed-forward neural networks, recurrent neural networks, and convolutional neural networks.

Examples highlighted by McKinsey include a European trucking company that reduced fuel costs by 15 percent by using AI to optimise the routing of delivery traffic, an airline that uses AI to predict congestion and weather-related problems in order to avoid costly cancellations, and a travel company that increased ancillary revenue by 10-15% using a recommender system algorithm trained on product and customer data to offer additional services. Other specific areas highlighted by McKinsey are captured in the following paragraph:

“AI’s ability to conduct preventive maintenance and field force scheduling, as well as optimizing production and assembly processes, means that it also has considerable application possibilities and value potential across sectors including advanced electronics and semiconductors, automotive and assembly, chemicals, basic materials, transportation and logistics, oil and gas, pharmaceuticals and medical products, aerospace and defense, agriculture, and consumer packaged goods. In advanced electronics and semiconductors, for example, harnessing data to adjust production and supply-chain operations can minimize spending on utilities and raw materials, cutting overall production costs by 5 to 10 percent in our use cases.”

McKinsey calculated the value potential of AI from neural networks across numerous sectors, as per the graph below, amounting to $3.5 to $5.8 trillion. Value potential is defined as both increased profits for companies and lower prices or higher-quality products and services captured by customers, based off the 2016 global economy. They did not estimate the value potential of creating entirely new product or service categories, such as autonomous driving.

click to enlarge

McKinsey identified several challenges and limitations with applying AI techniques, as follows:

  • Making effective use of neural networks requires labelled training data sets, and therefore data quality is a key issue. Ironically, machine learning often requires large amounts of manual effort in “teaching” machines to learn. The experience of Microsoft with their chatter bot Tay in 2016 illustrates the shortcoming of learning from bad data!
  • Obtaining data sets that are sufficiently large and comprehensive to be used for training is also an issue. According to the authors of the book “Deep Learning”, a supervised deep-learning algorithm will generally achieve acceptable performance with around 5,000 labelled examples per category and will match or exceed human-level performance when trained with a data set containing at least 10 million labelled examples.
  • Explaining the results from large and complex models in terms of existing practices and regulatory frameworks is another issue. Product certifications in health care, automotive, chemicals, aerospace industries and regulations in the financial services sector can be an obstacle if processes and outcomes are not clearly explainable and auditable. Some nascent approaches to increasing model transparency, including local-interpretable-model-agnostic explanations (LIME), may help resolve this explanation challenge.
  • AI models continue to have difficulties in carrying their experiences from one set of circumstances to another, i.e. in generalising what they have learned. That means companies must commit resources to train new models for similar use cases. Transfer learning, in which an AI model is trained to accomplish a certain task and then quickly applies that learning to a similar but distinct activity, is one area of focus in response to this issue.
  • Finally, one area that has been the subject of focus is the risk of bias in data and algorithms. As bias is part of the human condition, it is engrained in our behaviour and historical data. This article in the New Scientist highlights five examples.

In 2016, Accenture estimated that US GDP could be $8.3 trillion higher in 2035 because of AI, doubling growth rates largely due to AI-induced productivity gains. More recently, in February this year, PwC published an extensive report on the macro-economic impact of AI and projected a baseline scenario in which global GDP will be 14% higher due to AI, with the US and China benefiting the most. Using a Spatial Computable General Equilibrium (SCGE) model of the global economy, PwC quantifies the total economic impact (as measured by GDP) of AI on the global economy via both productivity gains and consumption-side product enhancements over the period 2017-2030. The impact on the seven regions modelled by 2030 can be seen below.

click to enlarge

PwC estimates that the economic impact of AI will be driven by productivity gains from businesses automating processes as well as augmenting their existing labour force with AI technologies (assisted, autonomous and augmented intelligence) and by increased consumer demand resulting from the availability of personalised and/or higher-quality AI-enhanced products and services.

In terms of sectors, PwC estimates that the services industry encompassing health, education, public services and recreation stands to gain the most, with retail and wholesale trade as well as accommodation and food services also expected to see a large boost. Transport and logistics as well as financial and professional services will also see significant, though smaller, GDP gains by 2030 because of AI, although PwC estimates that the financial services sector gains relatively quickly in the short term. Unsurprisingly, PwC finds that capital-intensive industries have the greatest productivity gains from AI uptake and specifically highlights the Technology, Media and Telecommunications (TMT) sector as having substantial marginal productivity gains from the uptake of replacement and augmenting AI. The sectoral gains estimated by PwC by 2030 are shown below.

click to enlarge

A key element of these new processes is the computing capabilities needed to process so much data that underlies AI. This recent article in the FT highlighted how the postulated demise of Moore’s law after its 50-year run is impacting the micro-chip sector. Mike Mayberry of Intel commented that “the future is more heterogeneous” when referring to the need for the chip industry to optimise chip design for specific tasks. DARPA, the US defence department’s research arm, has allocated $1.5 billion in research grants on the chips of the future, such as chip architectures that combine both power and flexibility using reprogrammable “software-defined hardware”. This increase in focus from the US is a direct counter against China’s plans to develop its intellectual and technical abilities in semiconductors over the coming years beyond simple manufacturing.

One of the current leaders in specialised chip design is Nvidia (NVDA), which developed software-led chips for video cards in the gaming sector through its graphics processing unit (GPU). The GPU accelerates applications running on standard central processing units (CPU) by offloading some of the compute-intensive and time-consuming portions of the code whilst the rest of the application still runs on the CPU. The chips developed by NVDA for gamers have proven ideal for handling the huge volumes of data needed to train the deep learning systems used in AI. The exhibit below from NVDA illustrates how they assert that new approaches such as the GPU can overcome the slowdown in capability from the density limitation of Moore’s Law.

click to enlarge

NVDA, whose stock is up over 400% in the past 24 months, has been a darling of the stock market in recent years and reported strong financial figures for its quarter to the end of April, as shown below. Its quarterly figures to the end of July are eagerly expected next month. NVDA has been range-bound in recent months, with the trade war often cited as a concern given that approximately 20%, 20%, and 30% of its products are sold into supply chains in China, other Asia-Pacific countries, and Taiwan respectively.

click to enlarge

Although seen as the current leader, NVDA is not alone in this space. AMD recently reported strong Q1 2018 results, with revenues up 40%, and has a range of specialised chip designs to compete in the datacentre, auto, and machine learning sectors. AMD’s improved results also reduce risk on its balance sheet, with leverage decreasing from 4.6X to 3.4X and projected to decline further. AMD’s stock is up approximately 70% year to date. AMD’s 7-nanometer product launch planned for later this year also compares favourably against Intel’s delayed release date of 2019 for its 10-nanometer chips.

Intel has historically rolled out a new generation of computer chips every two years, enabling chips that were consistently more powerful than their predecessors even as the cost of that computing power fell. But as Intel has run up against the limits of physics, it has reverted to making upgrades to its aging 14nm processor node, which it says performs 70% better than when initially released four years ago. Despite advances by NVDA and AMD in data centres, Intel chips still dominate. In relation to the AI market, Intel is focused on an approach called the field-programmable gate array (FPGA), an integrated circuit designed to be configured by a customer or a designer after manufacturing. This approach of domain-specific architectures is seen as an important trend in the sector for the future.

Another interesting development is Google’s (GOOG) recently reported move to commercially sell, through its cloud-computing service, its own big-data chip design that it has been using internally for some time. Known as a tensor processing unit (TPU), the chip was specifically developed by GOOG for neural network machine learning and is an AI accelerator application-specific integrated circuit (ASIC). For example, in Google Photos an individual TPU can process over 100 million photos a day. What GOOG will do with this technology will be an interesting development to watch.

Given the need for access to large labelled data sets and significant computing infrastructure, the large internet firms like Google, Facebook (FB), Microsoft (MSFT), Amazon (AMZN) and Chinese firms like Baidu (BIDU) and Tencent (TCEHY) are natural leaders in using and commercialising AI. Other firms highlighted by analysts as riding the AI wave include Xilinx (XLNX), a developer of high-performance FPGAs, Yext (YEXT), which specialises in managing digital information relevant to specific brands, and Twilio (TWLO), a specialist in voice and text communication analysis. YEXT and TWLO are loss making. All of these stocks, possibly excluding the Chinese ones, are trading at lofty valuations. If the current wobbles in the stock market do lead to a significant fall in technology valuations, the stocks on my watchlist will be NVDA, BIDU and GOOG. I’d ignore the one-trick ponies, particularly the loss-making ones! Specifically, Google is one I have been trying to get into for years at a sensible value, and I will watch NVDA’s results next month with keen interest as they have consistently beaten estimates in recent quarters. Now, if only the market would fall from its current heights to allow for a sensible entry point…….maybe enabled by algorithmic trading or a massive trend move by the passives!

Hi there LIBOR

According to this article in the FT by Bhanu Baweja of UBS, the rise in the spread between the dollar 3-month LIBOR, now over 2.25% compared to 1.7% at the start of the year, and the overnight indexed swap (OIS) rate, as per the graph below, is a “red herring” because “supply is at play here, not rising credit risk”. This view reflects the current market consensus, up until recently at least.

click to enlarge

Baweja argues that the spread widening is due to a wider T-bill-OIS spread, driven by increased T-bill yields on the back of widening fiscal deficits in the US, and a wider commercial paper (CP) to T-bill spread, driven by US company repatriations as a result of the Trump tax cuts. Although Baweja lists the current bull arguments for being cheerful, he does acknowledge that an increasing LIBOR will impact US floating-rate borrowers of $2.2 trillion of debt, half of whom are rated BB- and below, particularly if 3-month US LIBOR breaks past 3%. Baweja points to rises in term premiums as the real red flags to look out for.
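One way to frame Baweja’s argument, my framing rather than the article’s, is as a simple telescoping identity for the spread, where the last two terms are the repatriation and supply effects he points to and the first term is where bank credit concerns would primarily show up:

```latex
\text{LIBOR} - \text{OIS}
  = \underbrace{(\text{LIBOR} - \text{CP})}_{\text{bank credit / funding premium}}
  + \underbrace{(\text{CP} - \text{T-bill})}_{\text{repatriation effect}}
  + \underbrace{(\text{T-bill} - \text{OIS})}_{\text{T-bill supply effect}}
```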

Analysts such as Matt Smith of Citi and Jonathan Garner of Morgan Stanley are not as nonchalant as the market consensus articulated by Baweja. The potential for unintended consequences and/or imbalances in this tightening phase, coming out of the greatest monetary experiment ever undertaken, is on many people’s minds, including mine. I cannot help but think of a pressure cooker, with every US rate rise ratcheting the heat higher.

Citi worry that LIBOR may be a 3-month leading indicator for dollar strengthening which may send shock-waves across global risk markets, particularly if FX movements are disorderly. Garner believes that “we’re already looking at a significant tightening of monetary policy in the US and in addition China is tightening monetary policy at the same time and this joint tightening is a key reason why we are so cautious on markets”. Given Chairman Powell’s debut yesterday and the more hawkish tone in relation to 2019 and 2020 tightening, I’ll leave this subject on that note.

The intricacies of credit market movements are not my area of expertise, so I’ll take counsel on this topic from people who know better.

Eh, help Eddie….what do you think?

The Bionic Invisible Hand

Technology is omnipresent. The impact of technology on markets and market structures has been a topic of much debate recently. Some point to its influence to explain the lack of volatility in equity markets (ignoring this week’s wobble). Marko Kolanovic, a JPMorgan analyst, is reported to have estimated that a mere 10% of US equity market trading is now conducted by discretionary human traders.

The first wave of high frequency trading (HFT) brought about distortive practices by certain players, such as front running and spoofing, as detailed in Michael Lewis’s bestselling exposé Flash Boys. Now HFT firms are struggling to wring profits from the incremental millisecond, as reported in this FT article, with revenues for HFT firms trading US stocks falling from over $7 billion in 2009 to below $1 billion in 2017, according to the consultancy Tabb Group. According to Doug Duquette of Vertex Analytics, “it has got to the point where the speed is so ubiquitous that there really isn’t much left to get”.

The focus now is on the impact of various rules-based automatic investment systems, ranging from exchange traded funds (ETFs) to computerised high-speed trading programs to new machine learning and artificial intelligence (AI) innovations. As Tom Watson said about HFT in 2011, these new technologies have the potential to give “Adam Smith’s invisible hand a bionic upgrade by making it better, stronger and faster like Steve Austin in the Six Million Dollar Man”.

As reported in another FT article, some experts estimate that computers are now generating around 50% to 70% of trading in equity markets, 60% of futures and more than 50% of treasuries. According to Morningstar, by year-end 2017 the total assets of actively managed funds stood at $11.4 trillion compared with $6.7 trillion for passive funds in the US.

Although the term “quant fund” covers a multitude of mutual and hedge fund strategies, under certain classifications such funds are estimated to manage around $1 trillion in assets, out of total assets under management (AUM) invested in mutual funds globally of over $40 trillion. It is believed that machine learning or AI drives only a small subset of quant funds’ trades, although such systems are thought to be used as investment tools for developing strategies by an increasing number of investment professionals.

Before I delve into these issues further, I want to take a brief detour into the wonderful world of quantitative finance expert Paul Wilmott and his recent book, with David Orrell, called “The Money Formula: Dodgy Finance, Pseudo Science, and How Mathematicians Took Over the Markets”. I am going to try to summarize the pertinent issues highlighted by the authors in the following sequence of my favourite quotes from the book:

“If anybody can flog an already sick horse to death, it is an economist.”

“Whenever a model becomes too popular, it influences the market and therefore tends to undermine the assumptions on which it was built.”

“Real price data tend to follow something closer to a power-law distribution and are characterized by extreme events and bursts of intense volatility…which are typical of complex systems that are operating at a state known as self-organized criticality…sometimes called the edge of chaos.”

“In quantitative finance, the weakest links are the models.”

“The only half decent, yet still toy, models in finance are the lognormal random walk models for those instruments whose level we don’t care about.”

“The more apparently realistic you make a model, the less useful it often becomes, and the complexity of the equations turns the model into a black box. The key then is to keep with simple models, but make sure that the model is capturing the key dynamics of the system, and only use it within its zone of validity.”

“The economy is not a machine, it is a living, organic system, and the numbers it produces have a complicated relationship with the underlying reality.”

“Calibration is a simple way of hiding model risk, you choose the parameters so that your model superficially appears to value everything correctly when really, it’s doing no such thing.”

“When their [quants’] schemes – their quantitative seizing – cratered, the central banks stepped in to fill the hole with quantitative easing.”

“Bandwagons beget bubbles, and bubbles beget crashes.”

“Today, it is the risk that has been created by high speed algorithms, all based on similar models, all racing to be the first to do the same thing.”

“We have outsourced ethical judgments to the invisible hand, or increasingly to algorithms, with the result that our own ability to make ethical decisions in economic matters has atrophied.”

According to Morningstar’s annual fund flow report, flows into US mutual funds and ETFs reached a record $684.6 billion in 2017 due to massive inflows into passive funds. Among fund categories, the biggest winners were passive U.S. equity, international equity and taxable bond funds with each having inflows of more than $200 billion. “Indexing is no longer limited to U.S. equity and expanding into other asset classes” according to the Morningstar report.

click to enlarge

Paul Singer of the Elliott hedge fund, known for its aggressive activism and distressed debt focus (and famous for its Argentine debt battles), dramatically said that “passive investing is in danger of devouring capitalism” and called it “a blob which is destructive to the growth-creating and consensus-building prospects of free market capitalism”.

In 2016, JP Morgan’s Nikolaos Panagirtzoglou stated that “the shift towards passive funds has the potential to concentrate investments to a few large products” and “this concentration potentially increases systemic risk making markets more susceptible to the flows of a few large passive products”. He further stated that “this shift exacerbates the market uptrend creating more protracted periods of low volatility and momentum” and that “when markets eventually reverse, the correction becomes deeper and volatility rises as money flows away from passive funds back towards active managers who tend to outperform in periods of weak market performance”.

The International Organization of Securities Commissions (IOSCO), proving that regulators are always late to the party (hopefully not too late), is to broaden its analysis on the ETF sector in 2018, beyond a previous review on liquidity management, to consider whether serious market distortions might occur due to the growth of ETFs, as per this FT article. Paul Andrews, a veteran US regulator and secretary general of IOSCO, called ETFs “financial engineering at its finest”, stated that “ETFs are [now] a critical piece of market infrastructure” and that “we are on autopilot in many respects with market capitalisation-weighted ETFs”.

Artemis Capital Management, in this report highlighted in my previous post, believe that “passive investing is now just a momentum play on liquidity” and that “large capital flows into stocks occur for no reason other than the fact that they are highly liquid members of an index”. Artemis believes that “active managers serve as a volatility buffer” and that if such a buffer is withdrawn then “there is no incremental seller to control overvaluation on the way up and no incremental buyer to stop a crash on the way down”.

Algorithmic trading (automated trading, black-box trading, or simply algo-trading) is the process of using computers programmed to follow a defined set of instructions for placing a trade in order to generate profits at a speed and frequency that is impossible for a human trader.
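As a trivial illustration of what a “defined set of instructions” can look like, below is a sketch of a simple moving-average crossover rule; it is purely illustrative, and real algorithmic systems layer execution, risk management and latency handling on top of far richer signals.

```python
# Toy example of a rules-based trading algorithm: a simple moving-average
# crossover that signals long when the short average is above the long one.
# Purely illustrative; real algorithmic systems add execution, risk
# management and latency handling on top of far richer signals.

def moving_average(prices, window):
    return sum(prices[-window:]) / window

def crossover_signal(prices, short_window=10, long_window=50):
    """Return +1 (long), -1 (flat/short) or 0 (not enough data)."""
    if len(prices) < long_window:
        return 0
    short_ma = moving_average(prices, short_window)
    long_ma = moving_average(prices, long_window)
    return 1 if short_ma > long_ma else -1
```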

Machine learning uses statistical techniques to infer relationships between data. The artificial intelligence “agent” does not have an algorithm to tell it which relationships it should find but infers, or learns if you like, from the data using statistical analysis to revise its hypotheses. In supervised learning, the machine is presented with examples of input data together with the desired output. The AI agent works out a relationship between the two and uses this relationship to make predictions given further input data. Supervised learning techniques, such as Bayesian regression, are useful where firms have a flow of input data and would like to make predictions.
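A minimal sketch of the supervised case is below, using scikit-learn’s Bayesian ridge regression with synthetic data standing in for whatever flow of input data a firm actually has.

```python
import numpy as np
from sklearn.linear_model import BayesianRidge

# Minimal supervised learning sketch: fit a Bayesian ridge regression on
# labelled examples (inputs paired with the desired output), then predict
# on new inputs. The data here is synthetic and purely illustrative.

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))                     # input features
y = X @ np.array([0.5, -0.2, 0.1]) + rng.normal(scale=0.05, size=200)

model = BayesianRidge()
model.fit(X, y)                                   # learn the relationship
print(model.predict(rng.normal(size=(2, 3))))     # predictions for new data
```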

Unsupervised learning, in contrast, does without learning examples. The AI agent instead tries to find relationships between input data by itself. Unsupervised learning can be used for classification problems determining which data points are similar to each other. As an example of unsupervised learning, cluster analysis is a statistical technique whereby data or objects are classified into groups (clusters) that are similar to one another but different from data or objects in other clusters.

Firms like Bloomberg use cluster analysis in their liquidity assessment tool, which aims to cluster bonds with sufficiently similar behaviour so that their historical data can be shared and used to make general predictions for all bonds in that cluster. Naz Quadri of Bloomberg, with the wonderful title of head of quant engineering and research, said that “some applications of clustering were more useful than others” and that their analysis suggests “clustering is most useful, and results are more stable, when it is used with a structural market impact model”. Market impact models are widely used to minimise the effect of a firm’s own trading on market prices and are an example of machine learning in practice.
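To illustrate the clustering idea (this is a generic sketch, not Bloomberg’s actual tool), the example below uses k-means to group instruments by a few behavioural features, after which historical data could be pooled within each cluster.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Illustrative unsupervised learning sketch: group instruments into
# clusters based on a few behavioural features (e.g. volatility, average
# daily volume and bid-ask spread), so that history could be pooled
# within each cluster. Synthetic data; not Bloomberg's actual model.

rng = np.random.default_rng(2)
features = rng.normal(size=(100, 3))          # one row per instrument

scaled = StandardScaler().fit_transform(features)
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(scaled)

print(labels[:10])                            # cluster assignment per instrument
```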

In November 2017, the Financial Stability Board (FSB) released a report called “Artificial Intelligence and Machine Learning in Financial Services”. In the report the FSB highlighted some of the current and potential use cases of AI and machine learning, as follows:

  • Financial institutions and vendors are using AI and machine learning methods to assess credit quality, to price and market insurance contracts, and to automate client interaction.
  • Institutions are optimising scarce capital with AI and machine learning techniques, as well as back-testing models and analysing the market impact of trading large positions.
  • Hedge funds, broker-dealers, and other firms are using AI and machine learning to find signals for higher (and uncorrelated) returns and optimise trading execution.
  • Both public and private sector institutions may use these technologies for regulatory compliance, surveillance, data quality assessment, and fraud detection.

The FSB report states that “applications of AI and machine learning could result in new and unexpected forms of interconnectedness” and that “the lack of interpretability or ‘auditability’ of AI and machine learning methods has the potential to contribute to macro-level risk”. Worryingly they say that “many of the models that result from the use of AI or machine learning techniques are difficult or impossible to interpret” and that “many AI and machine learning developed models are being ‘trained’ in a period of low volatility”. As such “the models may not suggest optimal actions in a significant economic downturn or in a financial crisis, or the models may not suggest appropriate management of long-term risks” and “should there be widespread use of opaque models, it would likely result in unintended consequences”.

With the increased use of machine learning and AI, we are seeing the potential rise of self-driving investment vehicles. Using self-driving cars as a metaphor, Artemis Capital highlights that “the fatal flaw is that your driving algorithm has never seen a mountain road” and that “as machines trade against each other, self-reflexivity is amplified”. Others point out that machine learning in trading may involve machine learning algorithms learning the behaviour of other machine learning algorithms, in a regressive loop, all drawing on the same data and the same methodology. 13D Research opined that “when algorithms coexist in complex systems with subjectivity and unpredictability of human behaviour, unforeseen and destabilising downsides result”.

It is said that there is nothing magical about quant strategies. Quantitative investing is an approach for implementing investment strategies in an automated (or semi-automated) way. The key seems to be data: its quality and its uniqueness. A hypothesis is developed and tested, then tested again against various themes, to identify anomalies or inefficiencies. Jim Simons of Renaissance Technologies (called RenTec), one of the oldest and most successful quant funds, said that the “efficient market theory is correct in that there are no gross inefficiencies” but “we look at anomalies that may be small in size and brief in time. We make our forecast. Then, shortly thereafter, we re-evaluate the situation and revise our forecast and our portfolio. We do this all day long. We’re always in and out and out and in. So we’re dependent on activity to make money”. Simons emphasised that RenTec “don’t start with models” but “we start with data” and “we don’t have any preconceived notions”. They “look for things that can be replicated thousands of times”.

The recently departed co-CEO of RenTec, Robert Mercer [yes, the Mercer who backs Breitbart, which adds a scary political Big Brother surveillance angle to this story], has said: “RenTec gets a trillion bytes of data a day, from newspapers, AP wire, all the trades, quotes, weather reports, energy reports, government reports, all with the goal of trying to figure out what’s going to be the price of something or other at every point in the future… The information we have today is a garbled version of what the price is going to be next week. People don’t really grasp how noisy the market is. It’s very hard to find information, but it is there, and in some cases it’s been there for a long long time. It’s very close to science’s needle in a haystack problem.”

Kumesh Aroomoogan of Accern recently said that “quant hedge funds are buying as much data as they can”. The so-called “alternative data” market was worth about $200 million in the US in 2017 and is expected to double in four years, according to research and consulting firm Tabb Group. The explosion of data that has become, and is becoming, available in this technological revolution should keep the quants busy, for a while.

However, what’s scaring me is that these incredibly clever people will inevitably end up farming through the same data sets, coming to broadly similar conclusions, and the machines who have learned each other’s secrets will all start heading for the exits at the same time, in real time, in a mother of all quant flash crashes. That sounds too much like science fiction to ever happen though, right?