Tag Archives: Google

Heterogeneous Future

It seems like wherever you look these days there are references to the transformational power of artificial intelligence (AI), including cognitive computing and machine learning (ML), on businesses and our future. A previous post on AI and insurance referred to some of the changes ongoing in financial services in relation to core business processes and costs. This industry article highlights how machine learning (specifically multi-objective genetic algorithms) can be used in portfolio optimization by (re)insurers. To further my understanding of the topic, I recently bought a copy of a new book called “Advances in Financial Machine Learning” by Marcos Lopez de Prado, although I suspect I will be out of my depth on the technical elements of the book. Other posts on this blog (such as this one) on the telecom sector refer to the impact intelligent networks are having on telecom business models. One example is the efficiencies CenturyLink (CTL) has shown in its capital expenditure allocation processes from using AI, and this got me thinking about the competitive impact such technology will have on costs across numerous traditional industries.

AI is a complex topic and in its broadest context it covers computer systems that can sense their environment, think, in some cases learn, and take applicable actions according to their objectives. To illustrate the complexity of the topic, neural networks are a subset of machine learning techniques. Essentially, they are AI systems based on simulating connected “neural units” loosely modelling the way that neurons interact in the brain. Neural networks need large data sets to be “trained”, and the number of layers of simulated interconnected neurons, which can number in the millions, determines how “deep” the learning can be. Before I embarrass myself by demonstrating how little I know about the technicalities of this topic, it’s safe to say that AI as referred to in this post encompasses the broadest definition, unless a referenced report or article specifically narrows the definition to a subset of the broader definition and is highlighted as such.
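To make the “layers of neural units” idea concrete, here is a toy sketch (emphatically not production code) of a feedforward network with one hidden layer, trained by plain gradient descent on the classic XOR problem; the data set, layer sizes and learning rate are illustrative assumptions of mine, not anything from the cited sources:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A tiny "labelled training set": XOR of two inputs
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of four "neural units"; more layers = "deeper" learning
W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)

def forward(X):
    h = sigmoid(X @ W1 + b1)          # hidden layer activations
    return h, sigmoid(h @ W2 + b2)    # network output

def mse(pred):
    return float(np.mean((pred - y) ** 2))

_, pred = forward(X)
loss_before = mse(pred)

lr = 0.5
for _ in range(2000):
    h, out = forward(X)
    # Backpropagation: chain rule through the two sigmoid layers
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

_, pred = forward(X)
loss_after = mse(pred)  # training error falls as the weights are "trained"
```

Real systems differ from this sketch mainly in scale: many more layers, millions of units, and far larger labelled data sets.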

According to IDC (here), “interest and awareness of AI is at a fever pitch” and global spending on AI systems is projected to grow from approximately $20 billion this year to $50 billion in 2021. David Schubmehl of IDC stated that “by 2019, 40% of digital transformation initiatives will use AI services and by 2021, 75% of enterprise applications will use AI”. By the end of this year, retail will be the largest spender on AI, followed by banking, discrete manufacturing, and healthcare. Retail AI use cases include automated customer service agents, expert shopping advisors and product recommendations, and merchandising for omnichannel operations. Banking AI use cases include automated threat intelligence and prevention systems, fraud analysis and investigation, and program advisors and recommendation systems. Discrete manufacturing AI use cases include automated preventative maintenance and quality-management investigation and recommendation systems. Improved diagnosis and treatment systems are a key focus in healthcare.

In this April 2018 report, McKinsey highlights numerous use cases, concluding that “AI can most often be adopted and create value where other analytics methods and techniques are also creating value”. McKinsey emphasises that “abundant volumes of rich data from images, audio, and video, and large-scale text are the essential starting point and lifeblood of creating value with AI”. McKinsey’s AI focus in the report is particularly in relation to deep learning techniques such as feedforward neural networks, recurrent neural networks, and convolutional neural networks.

Examples highlighted by McKinsey include a European trucking company that reduced fuel costs by 15 percent by using AI to optimize the routing of delivery traffic, an airline that uses AI to predict congestion and weather-related problems in order to avoid costly cancellations, and a travel company that increased ancillary revenue by 10-15% using a recommender-system algorithm trained on product and customer data to offer additional services. Other specific areas highlighted by McKinsey are captured in the following paragraph:

“AI’s ability to conduct preventive maintenance and field force scheduling, as well as optimizing production and assembly processes, means that it also has considerable application possibilities and value potential across sectors including advanced electronics and semiconductors, automotive and assembly, chemicals, basic materials, transportation and logistics, oil and gas, pharmaceuticals and medical products, aerospace and defense, agriculture, and consumer packaged goods. In advanced electronics and semiconductors, for example, harnessing data to adjust production and supply-chain operations can minimize spending on utilities and raw materials, cutting overall production costs by 5 to 10 percent in our use cases.”

McKinsey calculated the value potential of AI from neural networks across numerous sectors, as per the graph below, amounting to $3.5 to $5.8 trillion. Value potential is defined as both in the form of increased profits for companies and lower prices or higher quality products and services captured by customers, based off the 2016 global economy. They did not estimate the value potential of creating entirely new product or service categories, such as autonomous driving.

click to enlarge

McKinsey identified several challenges and limitations with applying AI techniques, as follows:

  • Making effective use of neural networks requires labelled training data sets, so data quality is a key issue. Ironically, machine learning often requires large amounts of manual effort in “teaching” machines to learn. The experience of Microsoft with their chatter bot Tay in 2016 illustrates the shortcomings of learning from bad data!
  • Obtaining data sets that are sufficiently large and comprehensive to be used for comprehensive training is also an issue. According to the authors of the book “Deep Learning”, a supervised deep-learning algorithm will generally achieve acceptable performance with around 5,000 labelled examples per category and will match or exceed human level performance when trained with a data set containing at least 10 million labelled examples.
  • Explaining the results from large and complex models in terms of existing practices and regulatory frameworks is another issue. Product certifications in health care, automotive, chemicals, aerospace industries and regulations in the financial services sector can be an obstacle if processes and outcomes are not clearly explainable and auditable. Some nascent approaches to increasing model transparency, including local-interpretable-model-agnostic explanations (LIME), may help resolve this explanation challenge.
  • AI models continue to have difficulties in carrying their experiences from one set of circumstances to another, applying a generalisation to learning. That means companies must commit resources to train new models for similar use cases. Transfer learning, in which an AI model is trained to accomplish a certain task and then quickly applies that learning to a similar but distinct activity, is one area of focus in response to this issue.
  • Finally, one area that has been the subject of focus is the risk of bias in data and algorithms. As bias is part of the human condition, it is engrained in our behaviour and historical data. This article in the New Scientist highlights five examples.
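The data-size rules of thumb in the first two bullets can be restated as a trivial check; the function name and structure below are my own illustrative framing, while the 5,000-per-category and 10-million-total thresholds come from the “Deep Learning” heuristics quoted above:

```python
from collections import Counter

ACCEPTABLE_PER_CLASS = 5_000       # per-category heuristic for acceptable performance
HUMAN_LEVEL_TOTAL = 10_000_000     # total examples to approach human-level performance

def dataset_adequacy(labels):
    """Apply the rule-of-thumb data-size heuristics to a list of example labels."""
    counts = Counter(labels)
    return {
        "per_class_ok": min(counts.values()) >= ACCEPTABLE_PER_CLASS,
        "human_level_scale": sum(counts.values()) >= HUMAN_LEVEL_TOTAL,
    }

# A 10,000-example set with an under-represented class fails the per-class test
print(dataset_adequacy(["cat"] * 6_000 + ["dog"] * 4_000))
# → {'per_class_ok': False, 'human_level_scale': False}
```

The point of the heuristic is that adequacy is per category, not just in aggregate: a large but imbalanced data set can still leave a class effectively untrained.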

In 2016, Accenture estimated that US GDP could be $8.3 trillion higher in 2035 because of AI, doubling growth rates largely due to AI-induced productivity gains. More recently, in February this year, PwC published an extensive report on the macro-economic impact of AI and projected a baseline scenario in which global GDP will be 14% higher due to AI, with the US and China benefiting the most. Using a Spatial Computable General Equilibrium Model (SCGE) of the global economy, PwC quantifies the total economic impact (as measured by GDP) of AI on the global economy via both productivity gains and consumption-side product enhancements over the period 2017-2030. The impact on the seven regions modelled by 2030 can be seen below.

click to enlarge

PwC estimates that the economic impact of AI will be driven by productivity gains from businesses automating processes as well as augmenting their existing labour force with AI technologies (assisted, autonomous and augmented intelligence) and by increased consumer demand resulting from the availability of personalised and/or higher-quality AI-enhanced products and services.

In terms of sectors, PwC estimates that the services industry encompassing health, education, public services and recreation stands to gain the most, with retail and wholesale trade as well as accommodation and food services also expected to see a large boost. Transport and logistics as well as financial and professional services will also see significant, albeit smaller, GDP gains by 2030 because of AI, although PwC estimates that the financial services sector gains relatively quickly in the short term. Unsurprisingly, PwC finds that capital-intensive industries have the greatest productivity gains from AI uptake and specifically highlights the Technology, Media and Telecommunications (TMT) sector as having substantial marginal productivity gains from adopting replacement and augmenting AI. The sectoral gains estimated by PwC by 2030 are shown below.

click to enlarge

A key element of these new processes is the computing capabilities needed to process so much data that underlies AI. This recent article in the FT highlighted how the postulated demise of Moore’s law after its 50-year run is impacting the micro-chip sector. Mike Mayberry of Intel commented that “the future is more heterogeneous” when referring to the need for the chip industry to optimise chip design for specific tasks. DARPA, the US defence department’s research arm, has allocated $1.5 billion in research grants on the chips of the future, such as chip architectures that combine both power and flexibility using reprogrammable “software-defined hardware”. This increase in focus from the US is a direct counter against China’s plans to develop its intellectual and technical abilities in semiconductors over the coming years beyond simple manufacturing.

One of the current leaders in specialised chip design is Nvidia (NVDA), which developed software-led chips for video cards in the gaming sector through its graphics processing unit (GPU). The GPU accelerates applications running on standard central processing units (CPU) by offloading some of the compute-intensive and time-consuming portions of the code whilst the rest of the application still runs on the CPU. The chips developed by NVDA for gamers have proven ideal for handling the huge volumes of data needed to train the deep learning systems used in AI. The exhibit below from NVDA illustrates how it asserts that new processors such as the GPU can overcome the slowdown in capability from the density limitation of Moore’s Law.

click to enlarge

NVDA, whose stock is up over 400% in the past 24 months, has been a darling of the stock market in recent years and reported strong financial figures for its quarter to end April, as shown below. Its quarterly figures to the end of July are eagerly expected next month. NVDA has been range bound in recent months, with the trade war often cited as a concern given that approximately 20%, 20%, and 30% of its products are sold into supply chains in China, other Asia Pacific countries, and Taiwan respectively.

click to enlarge

Although seen as the current leader, NVDA is not alone in this space. AMD recently reported strong Q1 2018 results, with revenues up 40%, and has a range of specialised chip designs to compete in the datacentre, auto, and machine learning sectors. AMD’s improved results also reduce risk on their balance sheet, with leverage decreasing from 4.6X to 3.4X and projected to decline further. AMD’s stock is up approximately 70% year to date. AMD’s 7-nanometer product launch planned for later this year also compares favourably against Intel’s delayed release date of 2019 for its 10-nanometer chips.

Intel has historically rolled out a new generation of computer chips every two years, enabling chips that were consistently more powerful than their predecessors even as the cost of that computing power fell. But as Intel has run up against the limits of physics, it has reverted to making upgrades to its aging 14nm processor node, which it says performs 70% better than when initially released four years ago. Despite advances by NVDA and AMD in data centres, Intel chips still dominate. In relation to the AI market, Intel is focused on an approach called the field-programmable gate array (FPGA), an integrated circuit designed to be configured by a customer or a designer after manufacturing. This approach of domain-specific architectures is seen as an important trend in the sector for the future.

Another interesting development is Google’s (GOOG) recently reported move to sell commercially, through its cloud-computing service, its own big-data chip design that it has been using internally for some time. Known as the tensor processing unit (TPU), the chip was specifically developed by GOOG for neural network machine learning and is an AI accelerator application-specific integrated circuit (ASIC). For example, in Google Photos an individual TPU can process over 100 million photos a day. What GOOG will do with this technology will be an interesting development to watch.

Given the need for access to large labelled data sets and significant computing infrastructure, the large internet firms like Google, Facebook (FB), Microsoft (MSFT), Amazon (AMZN) and Chinese firms like Baidu (BIDU) and Tencent (TCEHY) are natural leaders in using and commercialising AI. Other firms highlighted by analysts as riding the AI wave include Xilinx (XLNX), a developer of high-performance FPGAs, Yext (YEXT), which specialises in managing digital information relevant to specific brands, and Twilio (TWLO), a specialist in voice and text communication analysis. YEXT and TWLO are loss making. All of these stocks, possibly excluding the Chinese ones, are trading at lofty valuations. If the current wobbles on the stock market do lead to a significant fall in technology valuations, the stocks on my watchlist will be NVDA, BIDU and GOOG. I’d ignore the one-trick ponies, particularly the loss-making ones! Specifically, Google is one I have been trying to get into for years at a sensible value, and I will watch NVDA’s results next month with keen interest as they have consistently beaten estimates in recent quarters. Now, if only the market would fall from its current heights to allow for a sensible entry point… maybe enabled by algorithmic trading or a massive trend move by the passives!

Artificial Insurance

The digital transformation of existing business models is a theme of our age. Robotic process automation (RPA) is one of the many acronyms to have found its way into the terminology of businesses today. I highlighted the potential for telecoms to digitalise their business models in this post. Klaus Schwab of the World Economic Forum in his book “Fourth Industrial Revolution” refers to the current era as one whereby “new technologies that are fusing the physical, digital and biological worlds, impacting all disciplines, economies and industries, and even challenging ideas about what it means to be human”.

The financial services business is one that is regularly touted as being ripe for transformation, with fintech being the much-hyped buzzword. I last posted here and here on fintech and insurtech, the use of technology innovations designed to squeeze out savings and efficiency from existing insurance business models.

Artificial intelligence (AI) is used as an umbrella term for everything from process automation, to robotics and to machine learning. As referred to in this post on equity markets, the Financial Stability Board (FSB) released a report called “Artificial Intelligence and Machine Learning in Financial Services” in November 2017. In relation to insurance, the FSB report highlights that “some insurance companies are actively using machine learning to improve the pricing or marketing of insurance products by incorporating real-time, highly granular data, such as online shopping behaviour or telemetrics (sensors in connected devices, such as car odometers)”. Other areas highlighted include machine learning techniques in claims processing and the preventative benefits of remote sensors connected through the internet of things. Consultants are falling over themselves to get on the bandwagon as reports from the likes of Deloitte, EY, PwC, Capgemini, and Accenture illustrate.

One of the better recent reports on the topic is this one from the reinsurer SCOR. CEO Denis Kessler states that “information is becoming a commodity, and AI will enable us to process all of it” and that “AI and data will take us into a world of ex-ante predictability and ex-post monitoring, which will change the way risks are observed, carried, realized and settled”. Kessler believes that AI will impact the insurance sector in 3 ways:

  • Reducing information asymmetry and bringing comprehensive and dynamic observability in the insurance transaction,
  • Improving efficiencies and insurance product innovation, and
  • Creating new “intrinsic” AI risks.

I found one article in the SCOR report by Nicolas Miailhe of the Future Society at the Harvard Kennedy School particularly interesting. Whilst talking about the overall AI market, Miailhe states that “the general consensus remains that the market is on the brink of a revolution, which will be characterized by an asymmetric global oligopoly” and the “market is qualified as oligopolistic because of the association between the scale effects and network effects which drive concentration”.  When referring to an oligopoly, Miailhe highlights two global blocks – GAFA (Google/Apple/Facebook/Amazon) and BATX (Baidu/Alibaba/Tencent/Xiaomi). In the insurance context, Miailhe states that “more often than not, this will mean that the insured must relinquish control, and at times, the ownership of data” and that “the delivery of these new services will intrude heavily on privacy”.

At a more mundane level, Miailhe highlights the difficulty for stakeholders such as auditors and regulators to understand the business models of the future which “delegate the risk-profiling process to computer systems that run software based on “black box” algorithms”. Miailhe also cautions that bias can infiltrate algorithms as “algorithms are written by people, and machine-learning algorithms adjust what they do according to people’s behaviour”.

In a statement that seems particularly relevant today in terms of the current issue around Facebook and data privacy, Miailhe warns that “the issues of auditability, certification and tension between transparency and competitive dynamics are becoming apparent and will play a key role in facilitating or hindering the dissemination of AI systems”.

Now, that’s not something you’ll hear from the usual cheer leaders.

Confused but content

As regular readers will know, I have posted on Level 3 (LVLT) many times over the years, more recently here. I ended that post with the comment that following the firm was never boring and the announcement of a merger with CenturyLink (CTL) on the 31st of October confirmed that, although the CTL tie-up surprised many observers, including me.

Before I muse on the merger deal, it is worth looking over the Q3 results, which were announced at the same time as the merger. The recent trend of disappointing revenue, particularly in the US enterprise business, was compounded by an increased projection for capex at 16% of revenue. Although the free cash-flow guidance for 2016 was unchanged at $1-$1.1 billion, the lack of growth in the core US enterprise line for a second quarter is worrying. Without the merger announcement, the share price could well have tested the $40 level, as revenue growth is core to maintaining the market’s positive story, and premium valuation, for Level 3 continuing to demonstrate its operating leverage through free cash-flow generation.

click to enlarge (LVLT revenue and operating trends)

Level 3 management acknowledged the US enterprise revenue disappointment (again!) and produced the exhibit below to show the impact of the loss of smaller accounts due to a lack of focus following the TW Telecom integration. CEO Jeff Storey said “coupling our desire to move up market, with higher sales quotas we assigned to the sales team and with compensation plans rewarding sales more than revenue, we transitioned our customers more rapidly than they would have moved on their own”. The firm has refocused on the smaller accounts and realigned sales incentives towards revenue rather than sales. In addition, LVLT stated that the higher capex estimate for 2016, due to strong demand for 100 Gig wavelengths and dark fibre, is a sign of future strength.

click to enlarge (LVLT Q3 revenue by customer)

Although these figures and explanations do give a sense that the recent hiccup may be temporary, the overall trends in the sector do raise the suspicion that the LVLT story may not be as distinctive as previously thought. Analysts rushed to reduce their ratings, although the target price remains over $60 (the merger announcement led to some confused commentary). On a stand-alone basis, I also revised my estimates down, with the resulting DCF value falling to $60 from $65.

Many commentators point to overall revenue weakness in the business telecom sector (which includes wholesale), as can be seen in the exhibit below. Relative newcomers to this sector, such as Comcast, are pressuring traditional telecoms. Comcast is a firm that some speculators thought would be interested in buying LVLT. Some even suggest, as per this article in Wired, that the new internet giants will negate the need for firms like Level 3.

click to enlarge (business telecom revenue trends Q3 2016)

However, different firms report revenues differently and care needs to be taken in making generalisations. If you take a closer look at the revenue breakdown for AT&T and Verizon it can be seen that not all revenue is the same, as per the exhibit below. For example, AT&T’s business revenues are split 33%:66% into strategic and legacy business compared to a 94%:6% ratio for LVLT.

click to enlarge (AT&T and Verizon business revenue breakdown)

That brings me to the CenturyLink deal. The takeover/merger proposes $26.50 in cash and 1.4286 CTL shares for each LVLT share. $975 million of annualised expense savings are estimated. The combined entity’s debt is estimated at 3.7 times EBITDA after expense savings (although this may be slightly reduced by CTL’s sale of its data centres for $2.3 billion). LVLT’s $10 billion of NOLs are also cited by CTL as attractive in reducing its tax bill and maintaining its cherished $2.16 annual dividend (CTL is one of the highest yield dividend plays in the US).

The deal is expected to close in Q3 2017 and includes a breakup fee of about $2 per LVLT share if a 3rd party wants to take LVLT away from CTL. Initially, the market reaction was positive for both stocks, although CTL shares have since cooled to $23 (from $28 before the deal was announced) whilst LVLT is around $51 (from $47 before), which is 13% less than the implied takeover price. The consistent discount to the implied takeover price since the deal was announced suggests that the market has reservations about the deal closing as announced. The table below shows the implied value of the deal to LVLT shareholders depending upon CTL’s share price.
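The deal arithmetic behind that table follows directly from the announced terms ($26.50 in cash plus 1.4286 CTL shares per LVLT share); a quick sketch:

```python
CASH_PER_LVLT_SHARE = 26.50   # deal terms as announced
CTL_SHARES_PER_LVLT = 1.4286

def implied_lvlt_value(ctl_price):
    """Implied per-share value of the deal to LVLT holders at a given CTL price."""
    return CASH_PER_LVLT_SHARE + CTL_SHARES_PER_LVLT * ctl_price

# At CTL's current $23, the implied value is roughly $59.36, so LVLT at $51
# trades at a discount in the region of 13-14% to the implied takeover price
print(round(implied_lvlt_value(23.0), 2))  # → 59.36
```

The cash component floors the value, so the sensitivity to CTL’s price is 1.4286 dollars of implied value per dollar of CTL share-price movement.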

click to enlarge (CenturyLink Level 3 merger deal)

CTL’s business profile includes the rural consumer RBOC business of CenturyTel and nationwide business customers from the acquired business assets of Qwest and Sprint. It’s an odd mix encompassing a range of cultures. For example, CTL have 43k employees of which 16k are unionised. The exhibit below shows the rather uninspiring recent operating results of the main segments.

click to enlarge (CenturyLink consumer and business operating metrics)

CTL’s historical payout ratio, being its dividend divided by operating cash-flow less capex, can be seen below. This was projected to increase further but is expected to stabilise around 60% after the merger synergies have been realised. The advantage to CTL of LVLT’s business is an enhancement, due to its free cash-flow plus the expense synergies and the NOLs, to CTL’s ability to pay its $2.16 dividend (which represents a 9.4% yield at its current share price) at a more sustainable payout rate.
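The payout-ratio and yield definitions used here are simple to state in code. The $2.16 dividend and ~$23 share price come from the post; the cash-flow inputs below are hypothetical placeholders purely to illustrate the ~60% stabilised ratio:

```python
def dividend_yield(dividend_per_share, share_price):
    """Annual dividend as a fraction of the share price."""
    return dividend_per_share / share_price

def payout_ratio(dividends_paid, operating_cash_flow, capex):
    """Payout ratio as defined above: dividends / (operating cash flow - capex)."""
    return dividends_paid / (operating_cash_flow - capex)

# $2.16 annual dividend at a $23 share price gives the ~9.4% yield cited
print(round(dividend_yield(2.16, 23.0) * 100, 1))  # → 9.4

# Hypothetical figures (in $bn) to show how a ~60% ratio arises
print(round(payout_ratio(1.2, 4.0, 2.0) * 100))  # → 60
```

Note the denominator is cash flow after capex, so a capex-heavy year mechanically inflates the ratio even if the dividend is unchanged.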

click to enlarge (CenturyLink payout ratio)

For LVLT shareholders, like me, the value of the deal all depends upon CTL’s share price at closing. I doubt I’ll keep many of the CTL shares after the deal closes, as the post-merger CTL doesn’t excite me anywhere near as much as a standalone LVLT, although it is an issue that I am still trying to get my head around.

As per the post’s title, I’m confused but content about events with LVLT.

Restrict the Renters?

It is no surprise that the populist revolt against globalisation in many developed countries is causing concern amongst the so-called elite. The philosophy of the Economist magazine is based upon its founder’s opposition to the protectionist Corn Laws in 1843. It is therefore predictable that they would mount a strong argument for the benefits of free trade in their latest edition, citing multiple research sources. The Economist concludes that “a three pronged agenda of demand management, active labour-market policies and boosting competition would go a long way to tackling the problems that are unfairly laid at the door of globalisation”.

One of the studies referenced in the Economist articles which caught my eye is one by Jason Furman of the Council of Economic Advisers in the US. The graph below from Furman’s report shows the growth in return on invested capital (excluding goodwill) of US publicly quoted firms and the stunning divergence of those in the top 75th and 90th percentiles.

click to enlarge (return on invested capital, US nonfinancial public firms)

These top firms, primarily in the technology sector, have increased their return on invested capital (ROIC) from 3 times the median in the 1990s to 8 times today, dramatically demonstrating their ability to generate economic rent in the digitized world we now live in.

Furman’s report includes the following paragraph:

“Traditionally, price fixing and collusion could be detected in the communications between businesses. The task of detecting undesirable price behaviour becomes more difficult with the use of increasingly complex algorithms for setting prices. This type of algorithmic price setting can lead to undesirable price behaviour, sometimes even unintentionally. The use of advanced machine learning algorithms to set prices and adapt product functionality would further increase opacity. Competition policy in the digital age brings with it new challenges for policymakers.”

IT firms have the highest operating margins of any sector in the S&P500, as can be seen below.

click to enlarge (S&P 500 operating profit margins by sector)

And the increasing size of these technology firms has contributed materially to the increase in the overall operating margin of the S&P500, as can also be seen below. These expanding margins are a big factor in the rise of the equity market since 2009.

click to enlarge (S&P 500 historical operating profit margins)

It is somewhat ironic that one of the actions which may be needed to show the benefits of free trade and globalisation to citizens in the developed world is coherent policies to restrict the power of economic rent generating technology giants so prevalent in our world today…

Level3 hiccup

I have posted on one of my major holdings, Level 3 (ticker LVLT), a facilities-based provider of a range of integrated telecommunications services, many times before, most recently here. One of the features of LVLT is its volatility, and the past weeks have proven no exception. LVLT broke below $50 in late June to $47 before being buoyed to above $56 by an unsubstantiated rumour that the firm was “reviewing strategic alternatives to maximize holder value, including outright sale or large buyback”. After the quarterly report on the 27th of July, when LVLT reported disappointing revenues but beat on the bottom line, the stock is now back below $50 without any news from the firm on buybacks or M&A.

The revenue figures, particularly the increase in CNS monthly churn to 1.2%, were disappointing, with the loss of accounts being driven by SME enterprise customers. One possible reason for the lack of focus was the temporary absence of the CEO due to a heart issue earlier in the year. As the chart below shows, LVLT does have form with revenue dips after initially successful M&A integration. Many, including me, thought that the current management was more on top of the issue this time around.
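A 1.2% monthly churn figure compounds into a meaningful annual loss of the customer base. A quick sketch of the standard conversion, under the simplifying assumption that churn is constant month to month:

```python
def annualised_churn(monthly_churn):
    """Fraction of the customer base lost over 12 months at a constant monthly rate."""
    return 1 - (1 - monthly_churn) ** 12

# LVLT's 1.2% monthly CNS churn implies roughly 13.5% annual turnover
print(round(annualised_churn(0.012) * 100, 1))  # → 13.5
```

In other words, at that run rate roughly one customer in seven would need to be replaced each year just to keep revenue flat.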

click to enlarge (Level 3 operating history 2005 to 2017e)

Despite this disappointment, the revenue impact is likely to be more contained this time around, and I believe the case for LVLT in the longer term remains strong. I have reduced my revenue estimates in the graph above, but the free cashflow that LVLT’s business is throwing off makes the bull case. My PV cash-flow analysis still has a price target of over $65, which represents a 2018 EV/EBITDA multiple of slightly below 10. Although the multiple is high compared to the incumbent US telecom giants, I think it is warranted given the quality of LVLT’s assets in an ever data-hungry economy. The current favourable, albeit political, regulatory trends (net neutrality and the ban on lock-up agreements) are another plus factor.

I estimate that the FCF generated by LVLT could, in the absence of any M&A, mean the firm could afford $1 billion of buybacks in 2017, rising by $250 million a year thereafter. An aggressive buyback programme over a five year period, 2017 to 2021, could amount to approx $7.5 billion or approx 30% of current share count at an average price of $65.
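The buyback arithmetic above is easy to verify: $1 billion in 2017 rising by $250 million a year over 2017-2021 sums to $7.5 billion, and at an average price of $65 that would retire roughly 115 million shares, consistent with the approximate 30% of the share count cited if the count is in the high 300-millions. A sketch:

```python
def buyback_schedule(start_bn=1.0, step_bn=0.25, years=5):
    """Annual buyback capacity in $bn: $1bn in 2017, rising $0.25bn per year."""
    return [start_bn + step_bn * i for i in range(years)]

total_bn = sum(buyback_schedule())
shares_retired_m = total_bn * 1_000 / 65  # millions of shares at an average $65

print(total_bn)                    # → 7.5
print(round(shares_retired_m, 1))  # → 115.4
```

The $65 average price is the post's own assumption; a lower average purchase price would retire proportionately more shares for the same $7.5 billion.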

In terms of M&A, management are obviously keen, although they did emphasise the need for discipline. An interesting response to an analyst question on the Q2 call, about potential fiber M&A targets trading at higher EV/EBITDA multiples than LVLT, was as follows:

“So as we look at M&A, and you mentioned fiber companies, we look at fiber companies post-synergies and believe that we are very good at acquiring and capturing synergies and moving networks together, combining networks, and creating value for shareholders through that. So I don’t feel that the M&A environment is necessarily constrained.”

One of the firms that the analyst was possibly referring to is Zayo, which interestingly just hired LVLT’s long-time CTO Jack Waters. Zayo currently trades at over 10 times its 2017 projected EBITDA compared to LVLT currently at a 2017 multiple in the low 9s. Obviously a premium would be needed in any M&A, so the synergies would have to be meaningful (in Zayo’s case, with a 50%-plus EBITDA margin, the synergies would likely have to be mainly in the capex line). COLT telecom is another potential M&A target as Fidelity’s self-imposed M&A embargo runs out after 2016 (see this post).

A significant attraction however is for LVLT itself to become a target. One of the US cable firms, most likely Comcast, is touted as a potential to beef up their enterprise offerings to compete with the incumbents. Other potential candidates include the ever data hungry technology firms such as Google or Microsoft who may wish to own significant fiber assets and reduce their dependence on telecoms such as Verizon who are increasingly looking like competitors.

As ever with LVLT, the ride is never boring, but hopefully not ever ending….