The Queen is Dead

Long live the Queen… Aretha will never die.

There are many, many songs that sound-tracked our lives and will continue to do so for generations. Rock Steady is one of my favourites for its pure raw funkiness… she knew how to sock it to us!

Aretha Franklin 1942-2018 RIP.

[Figure: Aretha Franklin Rock Steady quote]


Value Matters

I recently saw an interview with Damian Lewis, the actor who plays hedge fund billionaire Bobby “Axe” Axelrod in the TV show Billions, where he commented on the differences in reaction to the character in the US and the UK. Lewis said that in the US the character is treated like an inspirational hero, whereas in the UK he’s seen as a villain. We all like to see a big-shot hedgie fall flat on their face so that we mere mortals can feel less stupid.

The case of David Einhorn is not so clear cut. A somewhat geeky character, Einhorn has had a recent run of bad results at his hedge fund, Greenlight Capital, which is raising some interesting questions amongst the talking heads about the merits of value stocks versus the runaway success of growth stocks in recent years. Einhorn’s recent results can be seen in a historical context, based upon published figures, in the graph below.


Einhorn recently commented that “the reality is that the market is cyclical and given the extreme anomaly, reversion to the mean should happen sooner rather than later”, whilst adding that “we just can’t say when”. The under-performance of value stocks is also highlighted by Alliance Bernstein in this article, as per the graph below.


As an aside, Alliance Bernstein also have another interesting article which shows the debt-to-capital percentages of S&P 500 firms, as below.


Einhorn not only invests in value stocks, like BrightHouse Financial (BHF) and General Motors (GM), but also shorts highly valued so-called growth stocks like Tesla (TSLA), Amazon (AMZN) and Netflix (NFLX), his “bubble basket”. In fact, Einhorn’s bubble basket has been one of the reasons behind his recent poor performance. He queries AMZN on the basis that just because they “can disrupt somebody else’s profit stream, it doesn’t mean that AMZN earns that profit stream”. He trashes TSLA and its ability to deliver safe mass-produced electric cars and points to the growing competition from “old media” firms for NFLX.

A quick look at projected 2019 forward PE ratios, based on today’s valuations against analysts’ average 2018 and 2019 EPS estimates from Yahoo Finance, for some of today’s most hyped growth stocks, plus their Chinese counterparts and some more “normal” firms like T and VZ as a counterweight, provides considerable justification for Einhorn’s arguments.
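The arithmetic behind such a comparison is simple enough to sketch in a few lines. The names and numbers below are hypothetical placeholders for illustration only, not actual prices or analyst estimates:

```python
def forward_pe(price, eps_estimate):
    """Projected forward P/E: today's price divided by an estimated
    future EPS. A negative or zero EPS estimate makes the ratio
    meaningless, so loss-makers simply drop out of the comparison."""
    if eps_estimate <= 0:
        return None
    return price / eps_estimate

# Hypothetical figures for illustration only.
examples = {
    "HypedGrowthCo": (1800.0, 24.0),  # richly valued growth stock
    "SteadyTelcoCo": (32.0, 3.0),     # a "normal" counterweight like T or VZ
    "LossMakerCo": (300.0, -2.0),     # negative 2019 EPS estimate
}
for name, (price, eps_2019) in examples.items():
    pe = forward_pe(price, eps_2019)
    print(f"{name}: {pe:.1f}x" if pe else f"{name}: n/a (no positive EPS)")
```

Loss-makers have no meaningful forward PE at all, which is why they fall off such charts entirely.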


[As another aside, I am keeping an eye on Chinese valuations, hit by trade war concerns, for opportunities in case Trump’s trade war turns out to be another “huge” deal where he folds like the penny hustler he is.]

And the graph above only shows those firms with positive projected earnings, and hence a meaningful 2019 PE ratio (eh, hello TSLA)! In fact, the graph makes Einhorn’s rationale seem downright sensible to me.

Now, that’s not something you could say about Axe!

Heterogeneous Future

It seems like wherever you look these days there are references to the transformational power of artificial intelligence (AI), including cognitive computing and machine learning (ML), on businesses and our future. A previous post on AI and insurance referred to some of the changes ongoing in financial services in relation to core business processes and costs. This industry article highlights how machine learning (specifically multi-objective genetic algorithms) can be used in portfolio optimization by (re)insurers. To further my understanding of the topic, I recently bought a copy of a new book called “Advances in Financial Machine Learning” by Marcos Lopez de Prado, although I suspect I will be out of my depth on the technical elements of the book. Other posts on this blog (such as this one) on the telecom sector refer to the impact intelligent networks are having on telecom business models. One example is the efficiencies Centurylink (CTL) have shown in their capital expenditure allocation processes from using AI, and this got me thinking about the competitive impact such technology will have on costs across numerous traditional industries.

AI is a complex topic and in its broadest context it covers computer systems that can sense their environment, think, in some cases learn, and take applicable actions according to their objectives. To illustrate the complexity of the topic, neural networks are a subset of machine learning techniques. Essentially, they are AI systems based on simulating connected “neural units”, loosely modelling the way that neurons interact in the brain. Neural networks need large data sets to be “trained”, and the number of layers of simulated interconnected neurons, often numbering in their millions, determines how “deep” the learning can be. Before I embarrass myself by demonstrating how little I know about the technicalities of this topic, it’s safe to say AI as referred to in this post encompasses the broadest definition, unless a referenced report or article specifically narrows the definition to a subset and is highlighted as such.
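As a toy illustration of the “neural unit” idea, the sketch below wires a handful of such units into two layers. The weights are made-up numbers purely for illustration; a real network would learn millions of them from labelled training data:

```python
import math

def neural_unit(inputs, weights, bias):
    # One simulated neural unit: a weighted sum of its inputs pushed
    # through a non-linear "activation" (a sigmoid here).
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def forward(inputs, layers):
    # A "deep" network is layers of such units, each layer's outputs
    # feeding the next layer's inputs. Training is the process of
    # finding weights that fit labelled data; these are arbitrary.
    activations = inputs
    for layer in layers:  # each layer is a list of (weights, bias) pairs
        activations = [neural_unit(activations, w, b) for w, b in layer]
    return activations

layers = [
    [([0.5, -0.6], 0.1), ([0.8, 0.2], -0.3)],  # hidden layer: 2 units
    [([1.0, -1.0], 0.0)],                      # output layer: 1 unit
]
output = forward([1.0, 0.5], layers)
print(output)
```

The sigmoid squashes every unit's output into (0, 1); "deeper" simply means more such layers stacked between input and output.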

According to IDC (here), “interest and awareness of AI is at a fever pitch” and global spending on AI systems is projected to grow from approximately $20 billion this year to $50 billion in 2021. David Schubmehl of IDC stated that “by 2019, 40% of digital transformation initiatives will use AI services and by 2021, 75% of enterprise applications will use AI”. By the end of this year, retail will be the largest spender on AI, followed by banking, discrete manufacturing, and healthcare. Retail AI use cases include automated customer service agents, expert shopping advisors and product recommendations, and merchandising for omni-channel operations. Banking AI use cases include automated threat intelligence and prevention systems, fraud analysis and investigation, and program advisors and recommendation systems. Discrete manufacturing AI use cases include automated preventative maintenance and quality management investigation and recommendation systems. Improved diagnosis and treatment systems are a key focus in healthcare.

In this April 2018 report, McKinsey highlights numerous use cases, concluding that “AI can most often be adopted and create value where other analytics methods and techniques are also creating value”. McKinsey emphasises that “abundant volumes of rich data from images, audio, and video, and large-scale text are the essential starting point and lifeblood of creating value with AI”. McKinsey’s AI focus in the report is particularly on deep learning techniques such as feed-forward neural networks, recurrent neural networks, and convolutional neural networks.

Examples highlighted by McKinsey include a European trucking company that reduced fuel costs by 15 percent by using AI to optimize routing of delivery traffic, an airline that uses AI to predict congestion and weather-related problems in order to avoid costly cancellations, and a travel company that increased ancillary revenue by 10-15% using a recommender system algorithm trained on product and customer data to offer additional services. Other specific areas highlighted by McKinsey are captured in the following paragraph:

“AI’s ability to conduct preventive maintenance and field force scheduling, as well as optimizing production and assembly processes, means that it also has considerable application possibilities and value potential across sectors including advanced electronics and semiconductors, automotive and assembly, chemicals, basic materials, transportation and logistics, oil and gas, pharmaceuticals and medical products, aerospace and defense, agriculture, and consumer packaged goods. In advanced electronics and semiconductors, for example, harnessing data to adjust production and supply-chain operations can minimize spending on utilities and raw materials, cutting overall production costs by 5 to 10 percent in our use cases.”

McKinsey calculated the value potential of AI from neural networks across numerous sectors, as per the graph below, amounting to $3.5 trillion to $5.8 trillion. Value potential is defined as increased profits for companies as well as lower prices or higher-quality products and services captured by customers, based on the 2016 global economy. They did not estimate the value potential of creating entirely new product or service categories, such as autonomous driving.


McKinsey identified several challenges and limitations with applying AI techniques, as follows:

  • Making effective use of neural networks requires labelled training data sets, and therefore data quality is a key issue. Ironically, machine learning often requires large amounts of manual effort in “teaching” machines to learn. The experience of Microsoft with their chatter bot Tay in 2016 illustrates the shortcomings of learning from bad data!
  • Obtaining data sets that are sufficiently large and comprehensive for training is also an issue. According to the authors of the book “Deep Learning”, a supervised deep-learning algorithm will generally achieve acceptable performance with around 5,000 labelled examples per category and will match or exceed human-level performance when trained with a data set containing at least 10 million labelled examples.
  • Explaining the results from large and complex models in terms of existing practices and regulatory frameworks is another issue. Product certifications in the health care, automotive, chemicals, and aerospace industries, and regulations in the financial services sector, can be an obstacle if processes and outcomes are not clearly explainable and auditable. Some nascent approaches to increasing model transparency, including local-interpretable-model-agnostic explanations (LIME), may help resolve this explanation challenge.
  • AI models continue to have difficulties in carrying their experience from one set of circumstances to another, that is, in generalising what they have learned. That means companies must commit resources to train new models for similar use cases. Transfer learning, in which an AI model is trained to accomplish a certain task and then quickly applies that learning to a similar but distinct activity, is one area of focus in response to this issue.
  • Finally, one area that has been the subject of focus is the risk of bias in data and algorithms. As bias is part of the human condition, it is engrained in our behaviour and historical data. This article in the New Scientist highlights five examples.

In 2016, Accenture estimated that US GDP could be $8.3 trillion higher in 2035 because of AI, doubling growth rates largely due to AI-induced productivity gains. More recently, in February this year, PwC published an extensive report on the macro-economic impact of AI, projecting a baseline scenario in which global GDP will be 14% higher due to AI, with the US and China benefiting the most. Using a Spatial Computable General Equilibrium Model (SCGE) of the global economy, PwC quantifies the total economic impact (as measured by GDP) of AI on the global economy via both productivity gains and consumption-side product enhancements over the period 2017-2030. The impact on the seven regions modelled by 2030 can be seen below.


PwC estimates that the economic impact of AI will be driven by productivity gains from businesses automating processes as well as augmenting their existing labour force with AI technologies (assisted, autonomous and augmented intelligence) and by increased consumer demand resulting from the availability of personalised and/or higher-quality AI-enhanced products and services.

In terms of sectors, PwC estimate that the services industry encompassing health, education, public services and recreation stands to gain the most, with retail and wholesale trade as well as accommodation and food services also expected to see a large boost. Transport and logistics as well as financial and professional services will also see significant but smaller GDP gains by 2030 because of AI, although they estimate that the financial services sector gains relatively quickly in the short term. Unsurprisingly, PwC finds that capital-intensive industries have the greatest productivity gains from AI uptake and specifically highlight the Technology, Media and Telecommunications (TMT) sector as having substantial marginal productivity gains from the uptake of replacement and augmenting AI. The sectoral gains estimated by PwC by 2030 are shown below.


A key element of these new processes is the computing capability needed to process the vast amounts of data that underlie AI. This recent article in the FT highlighted how the postulated demise of Moore’s law after its 50-year run is impacting the micro-chip sector. Mike Mayberry of Intel commented that “the future is more heterogeneous” when referring to the need for the chip industry to optimise chip design for specific tasks. DARPA, the US defence department’s research arm, has allocated $1.5 billion in research grants on the chips of the future, such as chip architectures that combine both power and flexibility using reprogrammable “software-defined hardware”. This increased focus from the US is a direct counter to China’s plans to develop its intellectual and technical abilities in semiconductors over the coming years, beyond simple manufacturing.

One of the current leaders in specialised chip design is Nvidia (NVDA), who developed software-led chips for video cards in the gaming sector through their graphics processing unit (GPU). The GPU accelerates applications running on standard central processing units (CPU) by offloading some of the compute-intensive and time-consuming portions of the code whilst the rest of the application still runs on the CPU. The chips developed by NVDA for gamers have proven ideal for handling the huge volumes of data needed to train the deep learning systems used in AI. The exhibit below from NVDA illustrates how they assert that new processes such as the GPU can overcome the slowdown in capability from the density limitation of Moore’s Law.


NVDA, whose stock is up over 400% in the past 24 months, has been a darling of the stock market in recent years and reported strong financial figures for their quarter to end April, as shown below. Their quarterly figures to the end of July are eagerly expected next month. NVDA has been range-bound in recent months, with the trade war often cited as a concern given approximately 20%, 20%, and 30% of their products are sold into supply chains in China, other Asia Pacific countries, and Taiwan respectively.


Although seen as the current leader, NVDA is not alone in this space. AMD recently reported strong Q1 2018 results, with revenues up 40%, and has a range of specialised chip designs to compete in the datacentre, auto, and machine learning sectors. AMD’s improved results also reduce risk on their balance sheet, with leverage decreasing from 4.6X to 3.4X and projected to decline further. AMD’s stock is up approximately 70% year to date. AMD’s 7-nanometer product launch planned for later this year also compares favourably against Intel’s delayed release date of 2019 for its 10-nanometer chips.

Intel has historically rolled out a new generation of computer chips every two years, enabling chips that were consistently more powerful than their predecessors even as the cost of that computing power fell. But as Intel has run up against the limits of physics, it has reverted to making upgrades to its aging 14nm processor node, which it says performs 70% better than when initially released four years ago. Despite advances by NVDA and AMD in data centres, Intel chips still dominate. In relation to the AI market, Intel is focused on an approach called the field-programmable gate array (FPGA), an integrated circuit designed to be configured by a customer or a designer after manufacturing. This approach of domain-specific architectures is seen as an important trend in the sector for the future.

Another interesting development is Google’s (GOOG) recently reported move to commercially sell, through its cloud-computing service, its own big-data chip design that it has been using internally for some time. Known as a tensor processing unit (TPU), the chip was specifically developed by GOOG for neural network machine learning and is an AI accelerator application-specific integrated circuit (ASIC). For example, in Google Photos an individual TPU can process over 100 million photos a day. What GOOG does with this technology will be an interesting development to watch.

Given the need for access to large labelled data sets and significant computing infrastructure, the large internet firms like Google, Facebook (FB), Microsoft (MSFT), Amazon (AMZN) and Chinese firms like Baidu (BIDU) and Tencent (TCEHY) are natural leaders in using and commercialising AI. Other firms highlighted by analysts as riding the AI wave include Xilinx (XLNX), a developer of high-performance FPGAs, Yext (YEXT), who specialise in managing digital information relevant to specific brands, and Twilio (TWLO), a specialist in voice and text communication services. YEXT and TWLO are loss-making. All of these stocks, possibly excluding the Chinese ones, are trading at lofty valuations. If the current wobbles on the stock market do lead to a significant fall in technology valuations, the stocks on my watchlist will be NVDA, BIDU and GOOG. I’d ignore the one-trick ponies, particularly the loss-making ones! Specifically, Google is one I have been trying to get into for years at a sensible value, and I will watch NVDA’s results next month with keen interest as they have consistently beaten estimates in recent quarters. Now, if only the market would fall from its current heights to allow for a sensible entry point… maybe enabled by algorithmic trading or a massive trend move by the passives!

The Centurylink Conundrum

It’s been over four months since I made my overtly positive 2018 EBITDA call on Centurylink (CTL), as per this February post. I estimated EBITDA guidance for 2018 at around $9.25 billion, whereas CTL’s actual guidance was between $8.75 billion and $8.95 billion. In my previous CTL post back in August 2017, the base case 2018 EBITDA was much closer to the mark at $8.95 billion! The reason for my $0.4 billion overshoot was an over-optimistic reaction to comments CFO Sunit Patel made earlier in the year about a possible 5%-7% margin improvement over the next three to five years. In my defense, the exhibit below, comparing the cumulative margin improvement following the LVLT/TWTC merger with that now forecast by analysts for the CTL/LVLT merger, shows how one could see more upside than indicated by guidance from the new management at CTL (i.e. the old LVLT management team after their successful reverse takeover of CTL).


However, the LVLT/TWTC merger was a very different deal from the CTL/LVLT one. For starters, TWTC was a growing fiber-based business, at both the revenue and EBITDA lines, when it merged with LVLT, whereas CTL is a declining one, at both lines, with over 40% of its standalone business in legacy services.

I have rebuilt my model on the combined entity and carefully considered the top and bottom line impact of the declining legacy business on the combined CTL/LVLT projections in addition to the potential cost savings compared to those articulated by CFO Sunit Patel and new CEO Jeff Storey (I am assuming $1 billion of operating synergies compared to the guided figure of $0.85 billion). The graph below illustrates that looking at historical proforma margins on a combined business to project the future is misleading given the underlying trends at CTL, particularly the declining legacy business, and the improving margins at LVLT from the TWTC synergies.


For the newly combined business, I estimate the old legacy business will make up approximately 25% of the new CTL’s revenue base. Historically, for the combined business, I estimate the legacy business (split 65:35 between the enterprise and consumer businesses) has been declining on average by 2.5% quarter on quarter over the past 6 quarters, whereas the strategic (i.e. non-legacy) business has grown by 0.5% on average. The strategic growth rate has been lower on average in recent quarters if Q1 2018 is excluded. The previous quarters’ poor performance could be due to the uncertainty over the merger, but the performance of the key strategic business will be an important metric to watch in future quarters (unfortunately they no longer split the business out this way in their reports). The forward quarter-on-quarter decline/growth rates for the legacy/strategic blocks are critical in determining future revenues and margins. The graph below shows the different impacts on annual revenue growth and EBITDA margins for different sets of legacy and strategic quarterly declines/increases.


This analysis shows that the underlying business is facing the headwind of up to a 3% revenue decline in 2019 and up to 2.5% decline by 2022. As a result, underlying EBITDA margins also could be facing an annual 40 basis point decline from 2019 through 2022. This decline in underlying EBITDA margins explains an element of the difference in cumulative EBITDA margin improvement in the first graph above.

For my base scenario, I have assumed a quarterly revenue decline of 2.75% in the legacy business (slightly worse than the past-6-quarter average of 2.5%) and a quarterly increase of 0.50% in strategic revenue (in line with the past-6-quarter average). I have also included an increase in EBITDA margin due to new business that the enlarged group can attract through its larger footprint and relevance (the revenue impact is minimal as it will likely come mainly from existing clients scaling up, although the margin impact could be more significant). I have sense-checked the resulting revenue figures against independent projections using the new business classifications presented by the firm. The breakdown of the different cumulative impacts in my base scenario is shown below.
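The mechanics of these scenarios amount to compounding the two revenue blocks separately each quarter. A minimal sketch follows; the 25:75 legacy/strategic split is a stylised 100-unit revenue index rather than CTL's reported figures, and the new-business margin uplift is ignored:

```python
def project_revenue(legacy, strategic, legacy_qoq, strategic_qoq, quarters):
    # Compound the quarterly decline/growth rate on each block
    # separately, then sum for total revenue each quarter.
    path = []
    for _ in range(quarters):
        legacy *= 1 + legacy_qoq
        strategic *= 1 + strategic_qoq
        path.append(legacy + strategic)
    return path

# Base scenario: legacy -2.75% q/q, strategic +0.50% q/q, over 3 years.
base = project_revenue(25.0, 75.0, -0.0275, 0.0050, quarters=12)
# Pessimistic scenario: legacy -3.00% q/q, strategic flat.
pessimistic = project_revenue(25.0, 75.0, -0.0300, 0.0000, quarters=12)

print(round(base[-1], 1))         # revenue index after 12 quarters
print(round(pessimistic[-1], 1))
```

Even under the base assumptions the total index drifts lower, which is why the overall revenue trend remains a headwind despite the growing strategic block.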


These figures get to just within the 5%-7% range of margin improvement articulated by Sunit Patel with 5.3% combined improvement after the 3rd year, leaving room for further margin improvement in subsequent years. The graph below shows my base scenario revenue figures using the new classifications from CTL.


This time around, I will not be considering an optimistic scenario but rather focusing on the downside to my base selections. Upside to my base scenario is possible if recent revenue trends improve, say as a result of rapid 5G deployments (see this post). Upside could also come from CTL’s deep enterprise network being tempting to possible acquirers in a vertical M&A frenzy, although that sounds a bit like wishful thinking to my mind. The balance of probabilities is more likely to be on the downside in terms of revenue (although I do have confidence in CTL management’s ability to manage the various levers to hit their EBITDA targets).

For my pessimistic selections, I have assumed an accelerated quarterly revenue decline of 3% in the legacy business and a flatlining quarterly change of 0% in strategic revenue (due to pressure on the enterprise business from increased software-enabled competitors). These are fairly brutal assumptions. The impact of new business, particularly on margins, is also assumed to be diminished compared to my base assumptions. Again, I sense-checked these top-line figures against projections using the new business classifications from CTL, as below. These projections clearly show a business model under significant pressure.


Taking all of the above factors into account, my revenue and post synergy EBITDA projections come out as per the graph below.


On CTL’s debt, despite the issues surrounding LIBOR as a base rate for the floating debt (see this post), I am reasonably comfortable given approximately 65% of the debt is fixed. The debt load is high, at net debt to mid-point 2018 guided EBITDA of 4.2, but manageable (one area where Sunit has proven his ability is debt management!). I estimate that for every 25 basis point increase in the base floating rate (there are alternatives to LIBOR detailed in its floating credit facilities, which is important given LIBOR’s likely replacement by something like SOFR, as per this article) the impact on CTL’s total debt interest rate is 8.5 basis points. Also, no significant debt repayment is due until 2020, which gives time to ensure operating efficiencies are delivered. Of course, if the overall business model is in trouble despite achieving operational targets (e.g. software-based telecom services disrupt CTL) as per the pessimistic scenario, the debt load will become a big issue by 2020 (I still have the scars from the telecom bust)!
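The rate sensitivity quoted above is just the floating share of the debt stack times the base-rate move, which can be sanity-checked in one line (the roughly 35% floating share below is simply 100% less the approximately 65% fixed):

```python
def blended_rate_impact_bp(floating_share, base_rate_move_bp):
    # Only the floating portion of the debt reprices when the base
    # rate moves, so the blended cost of debt shifts by the floating
    # share times the base-rate move.
    return floating_share * base_rate_move_bp

# ~35% floating debt and a 25bp rise in the base rate:
impact = blended_rate_impact_bp(0.35, 25)
print(impact)  # roughly 8.75bp, close to the ~8.5bp estimate above
```

The small gap between 8.75bp and the quoted 8.5bp presumably reflects the fixed/floating split not being exactly 65:35.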

The key question concerning CTL in the short term is the sustainability of its dividend, given its current dividend yield of over 11.5% is amongst the highest in the S&P 500. Under my base scenario, I project that the dividend is sustainable, just. It will be tight, particularly as CTL will want to demonstrate continued progress on deleveraging year on year. Under my pessimistic scenario, I assume a 50% dividend cut would be required in 2019.

My valuation for CTL under my base and pessimistic scenarios is $18 and $10 respectively. Given the stock currently trades around $18, the market is indicating a belief in the new management team and its guidance. I have a high degree of confidence in current management and their ability to navigate the integration of CTL/LVLT and the challenges ahead in telecomland. However, the dividend sustainability issue will not be resolved until we see the quarterly progress over the next few quarters. Well into 2019 would be my guess.

It’s also likely that markets will be increasingly volatile over the next 12 months (eh, the market seems to be waking up to the folly of Mr Trump’s trade war this morning). Highly indebted firms will likely be battered as we get deeper into a tighter monetary environment, particularly those without topline growth. Any dividend cut will hit the stock heavily. I would not be surprised if the stock fell as low as $6 following a 50% cut. Given the juicy dividend yield will provide the cashflow, buying options seems a sensible means of protecting against such a downside. Ultimately, I have high hopes for CTL but this one is not for the faint-hearted. It’s a risky stock, as the dividend yield implies, in an increasingly unbalanced market, and it has to execute flawlessly over the coming quarters to justify the risk.

5G: Telecom Hype or Saviour?

John Legere of T-Mobile is a canny operator and knows how to play the sycophant to Trump’s nationalist instincts, touting the ability of a combined T-Mobile & Sprint to invest in a super-charged 5G roll-out, as per this presentation, playing the job creation and beat-the-Chinese technological advancement cards. Legere cites an Analysys Mason report commissioned by the US industry lobby group CTIA to back up such claims, which in turn cites an Accenture report from 2017 on 5G in the US claiming that “telecom operators are expected to invest approximately $275 billion in infrastructure, which could create up to 3 million jobs and boost GDP by $500 billion”. In 2016, the European Commission stated in this report that 5G “investments of approximately €56.6 billion will be likely to create 2.3 million jobs in Europe”. An IHS Markit 2017 report commissioned by Qualcomm claims that in 2035, “5G will enable $12.3 trillion of global economic output” and “the global 5G value chain will generate $3.5 trillion in output and support 22 million jobs” on the basis that “the global 5G value chain will invest an average of $200 billion annually”.

These are fantastical figures. Many assumptions go into their computation, including the availability, range and cost of spectrum, plus infrastructure spend and policy in relation to streamlining procedures and fee structures for the deployment of the small shoe-box cell sites (between 10 and 100 more antennae are required for 5G than for current networks). Larger issues such as privacy and security also need to be addressed before we enter a world of ubiquitous ultra-reliable low latency networks as envisaged by the reports referenced above. Those of us who lived through, and barely survived, the telecom boom of the late 1990s can be forgiven for having a jaundiced view of a new technology saving the telecom industry. This blog illustrates some of the challenges facing the wired telecom sector and the graph below shows the pressures that the US mobile players are under in terms of recent trends in service revenues.


The mobile service revenue trends are remarkably similar to those in the enterprise and wholesale space. The graph above also shows the rationale for the T-Mobile/Sprint merger in terms of size, as well as the impact of T-Mobile’s aggressive pricing strategy. All these trends are in the context of the insatiable increase in bandwidth traffic, as illustrated by the IP figures from Cisco below.


This report from 2017 by Oliver Wyman is one of the better ones and contains some illuminating context for the 5G era. It shows that in Europe despite a 40% annual increase in mobile subscribers and a 36% annual increase in European IP traffic from 2006 to 2016, mobile service revenue and total telecom service revenue decreased by 22% and 19% respectively, as per the graphic below.


The Oliver Wyman report concludes as follows:

“In the next five to ten years, demand in fixed-line broadband bandwidth will grow exponentially, leading to speeds that can only be supplied by FTTH/B. Mobile broadband demand will follow in parallel. Virtual reality is the “killer app” that will drive massive demand. Mobile broadband supply will begin to reach its limits, with spectral efficiency gains and additional attractive spectrum in the current bands not growing as fast as they have in the past. High-frequency beam technology in 5G will be radically new and will be able to meet future demand. At the same time, however, it will create massive mobile backhaul demand. The outcome is likely to shake the industry, leading not only to a new balance of power between mobile-only and integrated/fixed-line operators, but also to new potential revenue growth for the first time in many years.”

Another interesting graphic from the report, as below, is the historical and projected broadband usage.


This 2017 report from Deloitte argues that “5G, across both the core and radio access network, stands to have a potentially greater impact on the overall ecosystem than any previous wireless generation”. Deloitte sees a “convergence of supply between wireline and wireless broadband, as almost all devices become connected over short-range wireless”. Deloitte concludes that “with an increasingly converged ecosystem of network and content players, an increasingly software-managed and defined physical networking space, and the demands and needs of consumers becoming complex enough that they no longer can manage individually, 5G and its associated technologies may have the power to reset the wireless landscape”.

This paper from an Infinera executive, Jon Baldry, highlights the need for “improvements to the overall network infrastructure in terms of performance, features and bandwidth” to support 5G, noting that “using software-defined networking (SDN) control and network functions virtualization (NFV) will play a major role in the optimization of the network”. I came across an interesting claim that SDN and network virtualisation can reduce opex and capex by 63% and 68% respectively compared to traditional telecom networking. Baldry concludes that “these improvements will drive new fiber builds, and fiber upgrades to an ever-growing number of cell sites, creating significant opportunity for cable MSOs and other wholesale operators to capture significant share of cell backhaul and fronthaul services for 4G and 5G mobile networks”.

Whether all these investments and resulting new networks will halt the declining revenue trend for the telecom sector, or merely provide a survival avenue for certain telecoms, is something I have yet to be convinced about. One thing seems certain, however: traditional telecom models will change beyond recognition in the forthcoming 5G era.