Guest Post: OpenAI Reboot: Ecosystem Still in Trouble

OpenAI has been put back more or less the way it was, but its governance will be very different, and the damage done to its ecosystem ambitions could greatly impact its long-term future.

  • Altman and his crew have been reinstated at OpenAI and a new board has been formed. But how the non-profit part of OpenAI’s mission has changed is not clear at this stage.
  • The previous structure of OpenAI was a non-profit company with a subsidiary that would be able to make money. It is in this subsidiary that most of the investments have been made.
  • The problem is that OpenAI’s mission to develop artificial general intelligence (AGI) has a rapacious appetite for compute resources which is where the vast majority of the over $11 billion that Microsoft has invested has been spent.
  • This is where the non-profit and for-profit ideologies bump against each other as Microsoft has a fiduciary duty to make money for its shareholders and shoveling $11 billion into a black hole with no prospect of a return is a breach of that duty.
  • This is why there is an unusual situation in the for-profit subsidiary where Microsoft’s return is capped at 100x which, for all intents and purposes, is no cap at all.
  • The problem is that the board that oversaw OpenAI (and the for-profit subsidiary) was only supposed to care about AI benefitting humanity, which also means capping AI that it thinks could trigger the machine takeover of the human race.
  • I am pretty sure that this was the source of the conflict that led to the firing of Altman, but I suspect that it was rumors of a breakthrough in AI that were the catalyst for the recent events.
  • This “breakthrough” in AI, which has been termed Q*, appears to have been enough to make the board nervous and it may have something to do with reasoning.
  • I suspect that this “breakthrough” will be an enhancement of GPT models that makes them appear to be better at reasoning.
  • So far, I have seen no evidence whatsoever that any deep learning system is capable of reasoning.
  • Instead, what they are very good at is learning from examples and then applying that learning in a controlled setting.
  • The minute the setting becomes uncontrolled, deep learning systems go off the rails: they start making things up (hallucinating) or, in the case of autonomous driving, make horrible errors on the road that force the humans to take over.
  • This is because they have no causal understanding of the tasks that they are performing; they only understand correlations.
  • If this “breakthrough” involves reasoning and is real, then this would represent a step along the way to AGI.
  • However, all of the evidence I have seen suggests that while the machines can simulate reasoning, they always fall over the minute that they are put to a real test on data that they have not seen before.
  • This would also not be the first time that a heralded breakthrough from OpenAI turned out to be a red herring (see the robotic Rubik’s Cube solver).
  • Hence, I suspect that all of the fuss about a robot apocalypse may have damaged OpenAI’s long-term outlook and greatly aided its competitors.
  • OpenAI launched its play for the AI ecosystem just this month. For it to succeed, developers need complete confidence in OpenAI as a going concern, as they will be basing their apps and services on its foundation models or on GPT itself.
  • The recent antics have shattered that confidence and now OpenAI will have to work much harder to shore up developer confidence that it will be around for the long term.
  • To make matters worse, it will now be much easier for rivals to lure developers, meaning that the whole ecosystem proposition has taken a large hit.
  • OpenAI is out of the woods and has a future, but its valuation and the prospect of dominating the AI ecosystem remain in disarray.

(This guest post was written by Richard Windsor, our Research Director at Large.  This first appeared on Radio Free Mobile. All views expressed are Richard’s own.) 

Related Posts

Podcast #69: ChatGPT and Generative AI: Differences, Ecosystem, Challenges, Opportunities

Generative AI has been a hot topic, especially after the launch of ChatGPT by OpenAI. It has even exceeded Metaverse in popularity. From top tech firms like Google, Microsoft and Adobe to chipmakers like Qualcomm, Intel, and NVIDIA, all are integrating generative AI models in their products and services. So, why is generative AI attracting interest from all these companies?

While generative AI and ChatGPT are both used for generating content, what are the key differences between them? The content generated can include solutions to problems, essays, email or resume templates, or a short summary of a big report to name a few. But it also poses certain challenges like training complexity, bias, deep fakes, intellectual property rights, and so on.

In the latest episode of ‘The Counterpoint Podcast’, host Maurice Klaehne is joined by Counterpoint Associate Director Mohit Agrawal and Senior Analyst Akshara Bassi to talk about generative AI. The discussion covers topics including the ecosystem, companies that are active in the generative AI space, challenges, infrastructure, and hardware. It also focuses on emerging opportunities and how the ecosystem could evolve going forward.

Click to listen to the podcast

Click here to read the podcast transcript.

Podcast Chapter Markers

01:37 – Akshara on what is generative AI.

03:26 – Mohit on differences between ChatGPT and generative AI.

04:56 – Mohit talks about the issue of bias and companies working on generative AI right now.

07:43 – Akshara on the generative AI ecosystem.

11:36 – Akshara on what Chinese companies are doing in the AI space.

13:41 – Mohit on the challenges associated with generative AI.

17:32 – Akshara on the AI infrastructure and hardware being used.

22:07 – Mohit on chipset players and what they are actively doing in the AI space.

24:31 – Akshara on how the ecosystem could evolve going forward.

Also available for listening/download on:


Guest post: AI Business Model on Shaky Ground

OpenAI, Midjourney and Microsoft have set the bar for chargeable generative AI services with ChatGPT (GPT-4) and Midjourney costing $20 per month and Microsoft charging $30 per month for Copilot. The $20-per-month benchmark set by these early movers is also being used by generative AI start-ups to raise money at ludicrous valuations from investors hit by the current AI FOMO craze. But I suspect the reality is that it will end up being more like $20 a year.

To be fair, if one can charge $20 per month, have 6 million or more users, and run inference on NVIDIA’s latest hardware, then a lot of money can be made. If one then moves inference from the cloud to the end device, even more is possible as the cost of compute for inference is transferred to the user. Furthermore, this is a better solution for data security and privacy, as the user’s data in the form of requests and prompt priming remains on the device and is not transferred to the public cloud. This is why it can be concluded that, for services that run at scale and for the enterprise, almost all generative AI inference will be run on the user’s hardware, be it a smartphone, a PC or a private cloud.
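As a back-of-envelope illustration of the economics described above (the price point and user count are the article's own illustrative figures, not reported numbers for any specific service):

```python
# Subscription revenue sketch using the article's assumed figures.
price_per_month = 20          # USD, the benchmark set by ChatGPT/Midjourney
subscribers = 6_000_000       # the article's "6 million or more users"

annual_revenue = price_per_month * 12 * subscribers
print(f"${annual_revenue / 1e9:.2f}B per year")  # → $1.44B per year
```

At well over a billion dollars a year in gross revenue, the attraction for investors is obvious, which is why the pricing assumption matters so much.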

Consequently, assuming that there is no price erosion and endless demand, the business cases being touted to raise money certainly hold water. While the demand is likely to be very strong, I am more concerned with price erosion. This is because outside of money to rent compute, there are not many barriers to entry and Meta Platforms has already removed the only real obstacle to everyone piling in.

The starting point for a generative AI service is a foundation model, which is then tweaked and trained by humans to create the desired service. However, foundation models are difficult and expensive to design and cost a lot of money in compute power to train. Up until March this year, there were no trained foundation models widely available, but that changed when Meta Platforms’ family of LLaMA models “leaked” online. The family has now become the gold standard for any hobbyist, tinkerer or start-up looking for a cheap way to get going.

Foundation models are difficult to switch out, which means that Meta Platforms now controls an AI standard in its own right, similar to the way OpenAI controls ChatGPT. However, because LLaMA is freely available online, any number of AI services for generating text or images are now freely available without any of the constraints or costs that apply to the larger models.

Furthermore, some of the other better-known start-ups such as Anthropic are making their best services available online for free. Claude 2 is arguably better than OpenAI’s paid ChatGPT service and so it is not impossible that many people notice and start to switch.

Another problem with generative AI services is that outside of foundation models, there are almost no switching costs to move from one service to another. The net result of this is that freely available models from the open-source community combined with start-ups, which need to get volume for their newly launched services, are going to start eroding the price of the services. This is likely to be followed by a race to the bottom, meaning that the real price ends up being more like $20 per year rather than $20 per month. It is at this point that the FOMO is likely to come unstuck as start-ups and generative AI companies will start missing their targets, leading to down rounds, falling valuations, and so on.
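The scale of the price erosion scenario sketched above is easy to quantify. Assuming the article's two endpoints ($20 per month today, $20 per year after a race to the bottom):

```python
# Per-user revenue impact if pricing falls from $20/month to $20/year,
# the article's hypothetical endpoint (assumed figures, not forecasts).
revenue_today = 20 * 12   # $240 per user per year at $20/month
revenue_after = 20        # $20 per user per year after the race to the bottom

decline = 1 - revenue_after / revenue_today
print(f"Per-user revenue falls by {decline:.0%}")  # → falls by 92%
```

A roughly 92% drop in per-user revenue is the kind of gap that no amount of volume growth in a spreadsheet can realistically close, which is why the down rounds would follow.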

There are plenty of real-world use cases for generative AI, meaning that it is not the fundamentals that are likely to crack but merely the hype and excitement that surrounds them. This is precisely what has happened to the Metaverse where very little has changed in terms of developments or progress over the last 12 months, but now no one seems to care about it.

(This guest post was written by Richard Windsor, our Research Director at Large.  This first appeared on Radio Free Mobile. All views expressed are Richard’s own.) 

Related Posts

AI Drives Cloud Player Capex Amid Cautious Overall Spend

  • Cloud service providers’ capex is expected to grow by around 8% YoY in 2023 due to investments in AI and networking equipment.
  • Microsoft and Amazon are among the highest spenders as they invest in data center development. Microsoft will spend over 13% of its capex on AI infrastructure.
  • AI infrastructure can be 10x-30x more expensive than traditional general-purpose data center IT infrastructure.
  • Chinese hyperscalers’ capex is decreasing due to their inability to access NVIDIA’s GPU chips, and decreasing cloud revenues.

New Delhi, Beijing, Seoul, Hong Kong, London, Buenos Aires, San Diego – July 25, 2023

Global cloud service providers will grow capex by an estimated 7.8% YoY in 2023, according to the latest research from Counterpoint’s Cloud Service. Higher debt costs, enterprise spending cuts and muted cloud revenue growth are impacting infrastructure spend in data centers compared to 2022.

Commenting on the large cloud service providers’ 2023 plans, Senior Research Analyst Akshara Bassi said, “Hyperscalers are increasingly focusing on ramping up their AI infrastructure in data centers to cater to the demand for training proprietary AI models, launching native B2C generative AI user applications, and expanding AIaaS (Artificial Intelligence-as-a-Service) product offerings”.

According to Counterpoint’s estimates, around 35% of the total cloud capex for 2023 is earmarked for IT infrastructure including servers and networking equipment compared to 32% in 2022.

Global Cloud Service Providers’ Capex (Source: Counterpoint Research)
2023 Capex Share (Source: Counterpoint Research)

In 2023, Microsoft and Amazon (AWS) will account for 45% of the total capex, while US-based hyperscalers as a whole will contribute 91.9% of the overall global capex.

Chinese hyperscalers are spending less due to slower growth in cloud revenues amid a weak economy and difficulties in acquiring the latest NVIDIA GPU chips for AI due to US bans. The A800 – the scaled-down version of the flagship A100/H100 chips that NVIDIA has been supplying to Chinese players – may also come under the purview of the ban, further reducing Chinese hyperscalers’ access to AI silicon.

Global Cloud Service Providers’ AI Spend as % of Total Capex, 2023 (Source: Counterpoint Research)

Based on Counterpoint estimates, Microsoft will spend proportionally the most on AI-related infrastructure with 13.3% of its capex directed towards AI, followed by Google at around 6.8% of its capex. Microsoft has already announced its intention to integrate AI within its existing suite of products.

AI infrastructure can be 10x-30x more expensive than traditional general-purpose data center IT infrastructure.

Though Chinese players are directing a larger portion of their spend towards AI, the amount is significantly less than that of their US counterparts due to lower overall capex.

 The comprehensive and in-depth ‘Global Cloud Service Providers Capex’ report is available. Please contact Counterpoint Research to access the report.


Counterpoint Technology Market Research is a global research firm specializing in products in the technology, media and telecom (TMT) industry. It services major technology and financial firms with a mix of monthly reports, customized projects, and detailed analyses of the mobile and technology markets. Its key analysts are seasoned experts in the high-tech industry.

Analyst Contacts

Akshara Bassi


Peter Richardson


 Neil Shah


Follow Counterpoint Research

Related Posts

Guest Post: Artificial Intelligence: Irrational Exuberance is in Full Swing

As surely as autumn and winter follow summer, the current exuberance around AI is not going to last simply because the machines remain incapable of living up to the expectations that have been set for them.

These cycles typically take the form of a discovery of some description followed by a ramping of expectations which in turn leads to large amounts of money being invested for fear of missing out (FOMO).

The problem is that the expectations that are set are always unrealistic, meaning that when the time comes to deliver on those expectations, disappointment sets in. This is followed by collapsing valuations, bankruptcies and forced consolidation as investors are no longer willing to suspend disbelief.

This is the fourth AI Hype cycle with the others occurring in the 1960s, 1980s and 2017-2019, and this hype cycle looks exactly the same as the others except that it is much larger. Looking at investment activity and news flow, it is also very clear exactly where we are in the cycle.

First, expectations 

  • The ability of Large Language Models (LLMs) to mimic human behavior has convinced some of the big names (like Professor Geoffrey Hinton) that artificial superintelligence is now materially closer than it was before.
  • While LLMs do have some very useful and lucrative use cases, they still have no causal understanding of the tasks they are performing.
  • This is why they hallucinate, make the most basic factual errors and are generally completely unreliable.
  • Therefore, the machines remain as stupid as ever. There is no evidence whatsoever that these machines are able to think.
  • But the problem is that they are so good at pretending to think that they are able to fool the great minds that created them.
  • Instead, all they do is calculate statistical relationships, meaning that the big promises that have been made will not be kept.

Second, investment

  • There are already many examples of money being thrown at start-ups with valuations and fundamentals being an afterthought:
  • OpenAI’s $30-billion valuation with a corporate culture that doesn’t want to make any profit.
  • Inflection AI raising $1.3 billion from Microsoft and NVIDIA at an estimated valuation of around $5 billion despite having been around for only a year and having no commercial product.
  • Mistral AI raising $113 million at a $260-million pre-money valuation despite being only a few weeks old with no revenues, no product and probably only the vaguest idea of what it is going to do.
  • This can be described as the very definition of a bubble where rationality gets lost in the mad rush toward the next big thing. A lot of shirts are going to be lost.

The latest innovations around LLMs have produced some remarkable abilities which, no doubt, will be put to both good and lucrative use. However, the technology upon which they are based has not changed, meaning that the limitations that prevented digital assistants and autonomous driving from being useful for anything more than the most basic tasks are also going to trip LLMs up.

Furthermore, this is no longer the exclusive realm of big, well-financed companies that can pay tens of millions of dollars for massive compute capacity, as hobbyists and enthusiasts are now creating generative AI too. Meta Platforms’ series of LLMs called LLaMA is now freely available to anyone who wants to tinker, and advances in training techniques mean that it is possible to fine-tune a 7-billion-parameter model on a powerful laptop.
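Some rough memory arithmetic shows why a 7-billion-parameter model is within reach of a powerful laptop. The bytes-per-parameter figures below are standard precision sizes (not numbers from the article), and they ignore the extra memory that training itself requires:

```python
# Approximate memory needed just to hold the weights of a 7B-parameter model
# at common precisions (fp32, fp16, 4-bit quantized). Illustrative only.
params = 7_000_000_000

for label, bytes_per_param in [("fp32", 4), ("fp16", 2), ("int4", 0.5)]:
    gib = params * bytes_per_param / 2**30
    print(f"{label}: ~{gib:.1f} GiB for the weights")
```

At half precision the weights fit in roughly 13 GiB, and 4-bit quantization brings them under 4 GiB, which is why quantized fine-tuning techniques put such models within reach of consumer hardware.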

This is why there are models popping up all over the place that are completely free to use. Some of them actually work quite well. Hence, the pricing of $20 per month for services like GPT-4, Perplexity AI and Midjourney may soon come under relentless pressure. This is really bad news for investors relying on spreadsheets for their return because no one seems to have modeled this scenario out.

The first sign of trouble will come when companies come back to the market after spending the money on fancy offices and expensive staff but nothing to show for the investments so far. This is when the down rounds begin, disillusionment sets in, reality makes its presence felt and winter begins.

One suspects this will begin sometime in the first half of 2024 and the fallout will not be pretty.

(This guest post was written by Richard Windsor, our Research Director at Large.  This first appeared on Radio Free Mobile. All views expressed are Richard’s own.) 

Related Posts

Rakuten and Dish – 2022 Will Be A Critical Year For Greenfield Networks

Rakuten and Dish are pioneering the development of open, cloud-native 5G multi-vendor networks and operate in highly competitive and mature markets. Although with very different backgrounds, both are broadly similarly sized companies with established businesses. Both initially joined the mobile market as MVNOs and are at varying stages of deploying their own network infrastructure. However, there are also many important differences between the two companies, particularly with respect to their network architectures and vendor ecosystems.

Rakuten vs Dish

Rakuten has deployed a 4G network and ultimately intends to migrate all customers to a cloud-native 5G SA network. Dish will deploy a 5G SA core-based network from day 1. Rakuten has developed its own telco cloud with all network functions deployed at its own data centres located at various Rakuten premises. In contrast, Dish is adopting an “off-prem” model with the whole network running on AWS data centres, at least initially.

Rakuten relies mostly on single vendors with a heavy emphasis on its own in-house developed technologies and acquisitions. It also has ambitions to become a telco platform provider generating revenues from its software and technology expertise. Dish has generally adopted a dual-vendor strategy to avoid reliance on a single supplier. Although early days, Dish could conceivably follow Rakuten’s lead in time and similarly launch its own services platform.

Rakuten: Subscribers and Network Coverage

At the end of September, Rakuten Mobile had 5.1 million subscribers, having added just 2 million subscribers during the past 12 months, despite the recent expansion of its 4G network. In a country with a population of over 125 million, this is hardly a huge amount. In contrast, market leader NTT DoCoMo has more than 83 million subscribers.

Management still maintains – probably to alleviate investor concerns – that Rakuten will break even by 2023. It thus has a 12-to-18-month window in which to dramatically increase subscriber acquisitions. Offering much cheaper tariffs, the company is banking on consumers switching en masse from bigger rivals, as well as finally hitting its 96% network coverage target in early 2022. However, in a largely cost-insensitive market with notoriously fickle consumers, will this be enough to attract sufficient new subscribers? Although the break-even target now apparently includes revenues from its telco software platform Symphony – perhaps a saving grace – management expects losses to be at their worst in the first quarter of 2022 and to improve from Q2 onwards.

In the likely event that Rakuten deviates from its planned 2023 break-even path, it is vital that the company can nevertheless show a substantial ramp up in subscriber acquisition over the next six months to demonstrate that its aggressive loss-leading pricing strategy can work in Japan. Otherwise, there is a danger that investors will start to lose confidence in the company, which may also extend to telco customers at its Symphony business.

Exhibit 1: Rakuten Subscriber Growth (© Counterpoint Research, Data Source: Rakuten Group)

Dish: Network and Service Launch Plans

Against a backdrop of falling pay-TV and MVNO subscribers, Dish faces challenges in both its established and new businesses. At its Q3 earnings call, the company reported 10.98 million pay-TV subscribers, down 13,000 YoY, and 8.77 million mobile subscribers, down 121,000 YoY. Up until now, Dish has been able to blunt the impact of its pay-TV subscriber losses with higher prices. However, pricing power cannot last forever.

Dish claims that it will be able to build a fourth nationwide 5G network for $10 billion and plans to launch in its first market – Las Vegas – in early 2022. With regulatory deadlines to provide 70% population coverage by the end of 2023, it is now in a race against time to deploy its 5G network and make the business a success. As with Rakuten, network roll-out will be its biggest challenge initially. While using AWS for its core network may yield cost advantages, putting the entire 5G network in the public cloud is a risky bet, as the recent outages at AWS demonstrate.

Unlike Rakuten, Dish plans initially to target the wholesale and enterprise markets rather than focusing exclusively on the consumer market, where it lacks its rivals’ brand recognition. With its considerable spectrum assets, targeting the wholesale market looks like a no brainer. However, with limited network coverage initially, selling to enterprises will not be easy, plus Dish has no experience of working with enterprises.  Despite its recent ties with AWS, it will be challenging to generate revenues from the enterprise market.

The Endgame – Big Tech Takeover?

Building a commercially successful network from scratch with new, innovative technologies that promise significant cost benefits is easier said than done. Rakuten’s progress to date has revealed a lot of issues and there are lessons here for other aspiring greenfield networks. While the contract award with 1&1 Drillisch demonstrates that there is demand for its technology, this is a business where scale and mindshare are required to be successful. With continued high cash burn and a high credit risk rating, any disappointment on the subscriber front over the next six months could hit investor sentiment as well as impact its ability to expand its Symphony business.

Like Rakuten in Japan, Dish is smaller than its main rivals and may also struggle financially, particularly as it will need to offer substantial discounts to attract customers. Counterpoint Research believes that an eventual takeover by a big tech company such as AWS, Google, Microsoft or even Apple looks the most likely outcome for Dish, while over in Japan, open RAN ambitions by domestic vendors, coupled with national pride, will probably ensure that Rakuten Mobile will survive in some form or other.


Related Reports

Rakuten’s Future – Operator, Vendor or Something Else?

Cloud RAN – Waiting for a Viable Business Case?

Infrastructure Insights – Cloud RAN Dominates 3rd Quarter

Microsoft Bets Big on Cloud Gaming

The recently-concluded edition of the annual Electronics Entertainment Expo (E3), where the video game industry gets together and showcases the latest and greatest upcoming game titles, had many key announcements. Microsoft’s Xbox Cloud Gaming announcements demonstrated its ambitions to build out its cloud gaming offerings and services. Another console giant, Nintendo, announced two new cloud-based games for the Nintendo Switch. Nvidia’s GeForce Now, Google’s Stadia and Amazon’s Luna announced more additions to their libraries. However, all these announcements were dwarfed by Microsoft’s lofty goals for cloud gaming.

Current cloud gaming efforts

There are already multiple cloud gaming experiences available, each catering to slightly different consumer groups. Nvidia’s GeForce Now enables PC players to stream their PC games on different iOS and Android devices. Then there are exclusive cloud-based platforms such as Google Stadia and Amazon Luna, which enable you to play games on your browser, streaming stick, and certain Android and now even iOS devices (for Stadia).

Lastly, console makers such as Sony, Nintendo and Microsoft have their own cloud-based offerings. PlayStation Now is the most traditional approach, allowing subscribers to stream older PS2, PS3 and PS4 games on demand on their latest consoles and Windows PCs. Nintendo has a cloud streaming service for the Nintendo Switch, called Nintendo Cloud Streaming, which enables players to stream full games, purchased from the eShop, on their Switch. It is the most nascent cloud offering, with only four titles so far. Microsoft’s cloud gaming service, Xbox Cloud Gaming, comes as an added benefit with its monthly Xbox Game Pass Ultimate subscription. Formerly known as Project xCloud, this service enables streaming of over 260 titles to various devices. There are various other cloud gaming providers as well.

Shadow (which was recently acquired by Jezby Cloud) offers a cloud computing service that gives subscribers access to a high-end computer in the cloud that they can use for gaming or other processor-demanding applications. Other players like Blacknut and Boosteroid specialize purely in video games streamed from the cloud. The market itself has slowly been consolidating. Hatch, which specialized in mobile cloud gaming, shuttered its offices in 2020 despite early promising deals with carriers such as AT&T. Jump on This, another smaller cloud gaming provider specializing in indie games, also ceased operations in 2020.

Cloud gaming competition 2021

Microsoft’s E3 announcements and follow-ups

Microsoft spoke at length about its vision for cloud gaming, and it didn’t stop at selling consoles. Xbox Game Pass Ultimate (which retails for $15 per month and is essentially an all-you-can-eat buffet of games) now enables subscribers to stream video games through web browsers, which gives Microsoft much broader reach as the service can be used on iOS devices such as iPhones and iPads, and on Macs, through browser support for Safari and Chrome. Xbox Game Pass Ultimate will also launch in Australia, Brazil, Mexico and Japan later this year.

Microsoft is further expanding its reach to more actual screens, as it plans to integrate directly into internet-connected TVs by working with TV manufacturers. It is also working on its own streaming device to enable cloud gaming on any display. Lastly, Microsoft is upgrading its data server racks to Xbox Series X servers to improve the streaming experience.

The server upgrade went live just a week after the E3 announcements. Over 260 games can now be played on the new Series X server blades, enabling faster loading times, higher refresh rates and more graphics-related options. Microsoft has also hired Google Stadia design director Kim Swift to join Xbox Game Studios Publishing and help build partnerships with independent studios for new cloud games.

Xbox Game Pass Ultimate

The Implications

These are by far the biggest announcements Microsoft has made on cloud gaming. With an aim to reach 3 billion players, the company is first integrating cloud gaming more into its core competency, i.e. consoles. Many products such as Google Stadia and Amazon Luna are using cloud gaming as a standalone model as they see this as the future of gaming. However, there are still issues of latency and connectivity that make cloud gaming cumbersome at times.

What Microsoft is doing is building the foundation for a future where cloud gaming will be the center of its strategy. For now, it is utilizing the technology and features of cloud gaming to bolster its console play. For example, it allows consumers to first try out a game via the cloud before purchasing and downloading the full version for the console. Cloud gaming will certainly continue to grow and improve on performance.

2021 may be a breakout year for cloud gaming, especially with current component shortages limiting the supply of consoles and other tech gadgets. 5G can be another driver for cloud gaming despite previous false starts. In 2020, Stadia partnered with Verizon 5G Home to offer the Stadia Premiere Edition, including a Stadia controller, free for three months. In June 2021, AT&T began offering six months of Stadia Pro to new 5G unlimited wireless subscribers. The hardest part of these subscription offerings is providing a demonstrable value-add for consumers to continue paying once the free period ends. 2021 looks to be the most promising time for these cloud efforts to become sticky and show growth.

Reliance Jio: World's First 'Super Operator'?

In the world’s second largest mobile market, Reliance Jio, India’s leading Communication Service Provider (CSP) or mobile operator, has taken the entire global ICT industry by storm with its vision and its remarkably successful “serial” fund-raising over the last eight weeks. Reliance Group Chairman Mukesh Ambani (the world’s sixth richest man with a net worth of $72 billion) changed the entire competitive and technology landscape in India with the launch of the greenfield 4G network Jio in 2016, amassing close to 400 million 4G subscriptions.

India went from 14 operators down to four, with two of the remaining four already on life support and a third reeling under debt, effectively handing Reliance Jio a monopoly position. That monopoly position, married with serious money muscle and a clairvoyant vision to build a digital platform to empower and democratize technology in a country of 1.3 billion people, makes it the most attractive company for any investor or stakeholder to partner with. This entity is called Jio Platforms.

Jio Platforms Needed Strategic Partnerships to Cross the Chasm

Jio, while amassing a healthy base of 4G subscribers consuming almost 13GB per month each, has still been generating an ARPU of less than $2 per user. To create stickiness, it has also built a platform full of services spanning Content, Commerce, Cloud and Communication, the four key pillars of any consumer’s digital life. Jio also has vertical ambitions, from own-branded devices to in-house development of network and data center elements, as its scale gives it strong buying power over suppliers. Having said that, the depth of capability, reach, execution and adoption of some of these OTT apps and services has lagged as Jio competes with bigger rivals. This has led Jio to pursue acquisitions and partnerships over the last year or so to help it build, integrate and execute this vision for its 400 million subscribers, tens of millions of businesses, hundreds of millions of households and beyond. There have been significant gaps in the platform’s capabilities when it comes to pushing digital services and applications over the chasm. The vision is great, but it needed investments and partnerships with scale and tech know-how. The slider graphic at the end of the post highlights the breadth of Jio Platforms, even though its capability to execute and drive adoption has been low to moderate.

$20 Billion Investments in Jio Platforms in Eight Weeks

With the world reeling from the COVID-19 pandemic, demand for operator services has shot up, offering the right moment for Jio to announce a series of investments-cum-partnerships, worked on for several months, with the biggest technology companies such as Facebook, Google, Intel and Qualcomm, and investors such as Silver Lake, KKR, Vista, Saudi Arabia’s PIF, General Atlantic and TPG. Reliance raised $20 billion in just eight weeks, selling close to a 33% stake as the valuation of Jio Platforms climbed to $60 billion, making it one of the best funding runs ever for a technology company. With this money, Jio has cleared all its debts, including the more than $11 billion it took on to build its 4G network. The end result: Jio Platforms is now debt-free and well-funded, with a clear vision and the monopoly status to become a ‘Super Operator’.

‘Super Operator’

We believe Reliance Jio, or Jio Platforms, is transforming into a Super Operator that is no longer a dumb pipe. With a platform approach, it is laying a strong foundation to play a key role across users’ digital lives and businesses’ digital transformation journeys. No other operator we can think of globally promises to build, offer and control what Jio is capable of.

Jio’s strategic partnerships with Facebook, Google and Microsoft, the three biggest tech giants, will help it drive the Commerce, Communication and Cloud areas, respectively, where it had been weak in capabilities, reach and adoption. Further, acquisitions such as Haptik (AI voice assistants), Embibe (education content platform), Reverie (multi-language integration), Saavn (music streaming), Tesseract (AR/VR) and Radisys (network stack) bridge many capability gaps ahead of the upcoming 5G era.

With 5G around the corner, Jio can lay out a fully controlled greenfield network, building the software stack via Radisys for fixed as well as wireless networks. It remains to be seen whether there are any other big moves on the Radio Access Network (RAN) hardware side, or whether it will depend on its long-term partner Samsung.

With Google, Jio aims to democratize 5G hardware with plans to offer the most affordable 5G Android smartphone in the market with the OS specially optimized for a low-cost 5G Jio smartphone. We estimate this to soft launch in Q4 2021 and proliferate through 2022 and 2023 with a target price point of sub-$100.

This is how the portfolio-versus-capability graphic looks after the investments, partnerships and acquisitions that make the company a Super Operator. All eyes will be on Jio over the next five years as it builds on this vision and generates returns for its investors and partners, once it goes public and expands beyond Indian shores.

For more details on what Reliance Jio announced, from its latest partnerships to new offerings from video conferencing to AR/VR headsets to e-commerce platform and more, follow the upcoming post here.

Microsoft Gets Serious About Telco Cloud With Metaswitch Acquisition

Hot on the heels of its acquisition of Affirmed Networks in April, Microsoft last week announced that it is buying UK-based Metaswitch Networks. Metaswitch, founded in 1981, is a network functions virtualisation (NFV) pioneer that provides networking software stacks to Tier-1 operators and system vendors around the world. The terms of the deal were not announced.

Trend to Cloud-Native and Network Convergence

Telecom companies are transitioning from running network functions on specialist dedicated hardware to software-based network functions running as virtual machines in the cloud. At the same time, mobile and fixed networks are converging, driven by legacy Tier-1 operators who want to leverage the same cloud-native infrastructure for both their mobile and fixed assets.

Interestingly for Microsoft, Metaswitch serves both mobile and fixed operators, so the company will be able to benefit from the trend to cloud native as well as the convergence of mobile and fixed networks. A potential downside, however, is that operators may choose to sweat their assets, particularly in the current uncertain economic climate, and upgrade in small increments, as operational expenditure still represents a big investment and cost. Some operators are also concerned that Microsoft may soon stop supporting Metaswitch’s non-cloud products.

Cloud Migrating to the Edge

Another important trend is that the cloud is migrating to the edge, and Microsoft is participating very actively here too. A few weeks ago, the company launched Azure Edge Zones, which connect directly with 5G networks, and Azure Private Edge Zones, designed to connect LTE/5G private networks with on-premises Azure Stack edge hardware.

It has also been collaborating with mobile network operators (MNOs) to integrate its edge computing capabilities and Azure cloud services with 5G networks, AT&T being the prime example. Microsoft is working on a proof-of-concept platform with AT&T that it hopes will provide new low-latency connectivity capabilities, enabling services such as online gaming and video conferencing for remote meetings, as well as a number of IoT use cases in healthcare, retail, public safety and manufacturing.

Other operator partners include Etisalat, NTT Communications, Proximus, Rogers, SK Telecom, Telefonica, Telstra and Vodafone. In addition, it has numerous private-company partnerships, particularly in manufacturing, which will allow companies building their own private cellular networks to set up their own private Azure data centres (Exhibit 1).

Rather than being located in a few large central data centres, Microsoft intends to deploy standalone Azure Edge Zones into a large number of smaller, local data centres starting in New York, Los Angeles and Miami.

Needless to say, Microsoft is not alone in this space and its biggest rivals are also engaged in extending the cloud to the edge. Amazon announced its Wavelength edge computing platform last year and Google has announced its Global Mobile Edge Cloud and Anthos for Telecom platforms.

Exhibit 1. Microsoft Private Edge Zone Ecosystem (Source: Microsoft)

Targeting the Telco Cloud

The big Internet giants such as Amazon, Google, Microsoft and others see the telco cloud as a major opportunity and are looking at ways of leveraging their existing cloud infrastructure to build a presence at the edge.

For Microsoft, the acquisition of Metaswitch dramatically increases its telecoms know-how at a stroke. The company already has some of the best cloud infrastructure in the world, and this acquisition signals that it is serious about being a big player in this space. The deal also brings many big-name telcos as customers, as Metaswitch has a long history of working with the biggest telecom companies in the world. Although Microsoft intends to work with its telco partners to offer its services to enterprises, its ability to offer its edge cloud directly to private network operators, i.e. non-telco companies such as manufacturing and industrial firms, utilities, etc., means that telecom operators could potentially be bypassed completely.

With more than $140 billion cash on its balance sheet, expect more telco-related acquisitions to follow!

Microsoft with Azure Sphere Looks to Set Gold Standard in End-to-End IoT Security

Traditionally, security centered on securing networks and software applications. However, as more devices get connected to the internet and threats rise, there is an unprecedented need to secure hardware alongside the data flow from edge devices to the cloud. Integrating security across all four layers (hardware, software, network, and cloud) therefore becomes vital for a secure IoT deployment. We are already seeing this adopted across data-centric devices such as smartphones.

What are the options to enable hardware security?

The key is to secure the hardware at the chipset (MCU/SoC) level to first secure the data flowing through the internal bus. This can be done by embedding Secure Elements (SEs) such as Physically Unclonable Functions (PUFs), Trusted Platform Modules (TPMs), or Hardware Security Modules (HSMs) into the device. Further, key injection into the secure enclave/PUF, along with cryptographic key management, ensures the secure identity of the device and creates secure tunneling of the data flowing within the device and then from the device to the cloud.
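As an illustration of the key-injection idea, here is a minimal Python sketch (ours, not any vendor SDK) of challenge-response device authentication: a per-device secret injected at manufacture lets the cloud verify a device's identity without the secret ever leaving the chip. Production systems such as Azure Sphere use asymmetric keys and certificates; the HMAC here is a simplified symmetric stand-in, and all names are hypothetical.

```python
import hashlib
import hmac
import secrets

# Hypothetical per-device secret, injected into the secure element at manufacture.
# The cloud keeps a copy indexed by device ID; the device never transmits it.
DEVICE_ID = "dev-001"
cloud_key_store = {DEVICE_ID: secrets.token_bytes(32)}

def device_respond(device_secret: bytes, challenge: bytes) -> bytes:
    """Runs inside the secure element: MAC the cloud's challenge with the injected key."""
    return hmac.new(device_secret, challenge, hashlib.sha256).digest()

def cloud_verify(device_id: str, challenge: bytes, response: bytes) -> bool:
    """Cloud side: recompute the MAC and compare in constant time."""
    expected = hmac.new(cloud_key_store[device_id], challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# One authentication round trip.
challenge = secrets.token_bytes(16)                  # fresh nonce from the cloud
response = device_respond(cloud_key_store[DEVICE_ID], challenge)
assert cloud_verify(DEVICE_ID, challenge, response)  # genuine device passes
assert not cloud_verify(DEVICE_ID, secrets.token_bytes(16), response)  # stale/wrong nonce fails
```

The fresh nonce per round trip is what prevents replay: capturing one valid response is useless against the next challenge.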

How will secure hardware help Microsoft?

Microsoft is the leading end-to-end IoT platform provider globally connecting millions of edge IoT devices across tens of thousands of enterprises to its Azure cloud via its Azure IoT platform. Microsoft also has been offering Azure Edge IoT software to enable computing and intelligent decision making at the edge. As a result, Microsoft must ensure the millions of devices running its Azure instances are not compromised and securely connected to its cloud.

In light of this, Microsoft has been looking to build secure chips with silicon partners to create a “hardware-based root of trust”. This will help solve cloning and counterfeit issues and will also establish secure authentication with its IoT hub platform via a unique trusted identity.

To achieve this goal, Microsoft announced Azure Sphere back in 2018 to build multi-layered, end-to-end security. Since then, Azure Sphere has evolved and now comprises three key elements:

Source: Microsoft
  • Hardware: Azure Sphere embeds secure keys (public) within a secure MCU/MPU powered by its Pluton security subsystem.
    • Pluton includes a security processor unit with a random number generator (RNG)
    • Tamper and side-channel attack resistant
    • Other cryptography and encryption tools
    • Secure booting for remote attestation and certificate-based security

As an example, the MediaTek MT3620 contains an isolated security subsystem with its own Arm Cortex-M4F core that handles secure boot and secure system operation. This M4F security processor features a 128kB secured TCM and a 64kB secured mask ROM bootloader.

Source: Microsoft
  • Software: Azure Sphere OS:
    • Azure Sphere OS is built on a custom Linux kernel that runs in 2.4MB of code storage, carefully tuned to the flash and RAM footprint of the Azure Sphere MCU to reduce its attack surface.
    • The OS communicates with the Azure Sphere Security Service in the cloud for secure device authentication, network management, and application management of all outbound traffic.
    • It undertakes secure monitoring to protect memory, flash and other MCU resources, limiting exposure.
    • The OS includes a Microsoft-provided application runtime that restricts access to file I/O and the shell.
    • It also includes a high-level application platform, signed by the Microsoft Certificate Authority (CA) through a trusted pipeline, which maintains all software other than device-specific applications.
  • Cloud: Azure Sphere Security Service
    • The Azure Sphere Security Service brokers trust for device-to-cloud communication, detects threats, and renews device security via CA-based authentication, failure reporting and automatic OS updates.
    • The cloud side of Azure Sphere thus embeds a private key that enables asymmetric encryption and authenticates devices against public keys paired at the time of manufacture.
    • Further, Azure Sentinel provides cloud security through artificial intelligence.

The integration of all three elements enables a hardware root of trust with asymmetric encryption. Further, it creates a secure tunnel for the flow of data from chip to cloud, ensuring data security both at rest and in transit.
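To make the asymmetric mechanism concrete, here is a toy "textbook" RSA sketch in Python, with tiny primes and no padding, for illustration only; real deployments use vetted cryptographic libraries, 2048-bit-plus keys and hardware-protected storage. In the common arrangement sketched here, a private key locked in secure silicon signs a hash of the telemetry, and the verifier checks it with the public key paired at manufacture.

```python
# Toy "textbook" RSA — illustration only, never use parameters like these in practice.
import hashlib

p, q = 61, 53                      # toy primes
n = p * q                          # public modulus, part of both keys
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (never leaves the chip)

def sign(message: bytes) -> int:
    """Device side: sign a hash of the message with the private key."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)

def verify(message: bytes, signature: int) -> bool:
    """Verifier side: check the signature using only the public key (e, n)."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest

telemetry = b"machine=espresso-3 temp=92.1C"
sig = sign(telemetry)
assert verify(telemetry, sig)                # authentic signature verifies
assert not verify(telemetry, (sig + 1) % n)  # a corrupted signature fails
```

The point of the asymmetry: the cloud can verify millions of devices while holding only public keys, so a cloud-side breach does not leak any device's signing key.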

The following chart depicts Azure Sphere running on a Guardian IoT module in a brownfield IoT deployment:

Source: Microsoft

Growing Partner Ecosystem

  • Chipsets:
    • In 2018, STMicroelectronics’ STM32, a secure MCU embedded with a secure element, was integrated with the Azure IoT C SDK, enabling direct and secure connectivity to the Azure IoT Hub as well as full support for Azure device management.
    • In mid-2019, NXP’s i.MX 8 series integrated Microsoft’s Azure Sphere security architecture and the Pluton security subsystem.
    • MediaTek’s MT3620 is Azure Sphere-ready.
    • At the end of 2019, Qualcomm’s 9205 LTE multimode modem, supporting both LTE-M and NB-IoT, was integrated with Microsoft’s Azure Sphere.
  • Modules
    • Avnet and qiio offer the Avnet Guardian 100 and qiio q200 Guardian (add-on) modules for retrofitting onto existing brownfield devices which lack connectivity and security but need to be connected to the internet.
    • Other modules include Avnet AES-MS-MT3620, AI-Link WF-M620-RSC1 and USI Wi-Fi module with Bluetooth option.

With this approach, Microsoft is building a highly scalable and secure way to onboard, manage and connect IoT devices and to ensure data is securely transmitted from device to cloud. This eliminates the need for most IoT customers to hire expensive security professionals.

Case Study: Starbucks

Starbucks has deployed Azure Sphere across its stores in North America. Each Starbucks store has around ten to twelve pieces of equipment that are operational for more than fifteen hours a day and need to be connected to the cloud for beverage-related data (10 to 12 data points, worth 5MB, generated per beverage), asset monitoring and predictive maintenance to avoid disruptions. This is important because any equipment breakdown directly hurts the store’s performance, its business and customer satisfaction. Starbucks has therefore been using Azure Sphere guardian modules, deployed with Microsoft’s help across all its brownfield equipment, to securely connect and aggregate the data to the cloud.
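A quick back-of-envelope sketch of the telemetry volume those figures imply; the equipment count and data-per-beverage come from the case study, while the daily beverage count is our hypothetical input:

```python
# Inputs from the case study, plus one assumed figure.
machines_per_store = 12    # "ten to twelve pieces of equipment"
mb_per_beverage = 5        # "5MB generated per beverage"
beverages_per_day = 500    # hypothetical daily volume for one busy store

daily_mb = mb_per_beverage * beverages_per_day   # 2500 MB
print(f"~{daily_mb / 1024:.1f} GB of telemetry per store per day "
      f"from {machines_per_store} connected machines")
# → ~2.4 GB of telemetry per store per day from 12 connected machines
```

Even at this rough scale, a single store pushes gigabytes daily, which is why secure aggregation at a guardian module, rather than per-machine cloud connections, matters.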

Source: Microsoft

Chip-to-Cloud Security is the Gold Standard

Security and privacy are global concerns around IoT, irrespective of country, and security remains one of the major roadblocks to IoT adoption. However, over the past two years we have seen chip-to-cloud security gain adoption, driven by rising awareness of the threats and the scalability of the solution. End-to-end security will be critical to the success of any future IoT deployment, protecting the asset as well as the data, which in most cases is even more valuable.
