Highlights:
XR headset shipments declined 49% YoY in Q2 2023. The decline was significantly more than in previous second quarters as the market struggled with lackluster demand.
The performance of the newly launched next-generation Sony PSVR2 (PlayStation VR2), along with the price reduction on Meta’s Quest 2, saved the global market from a bigger decline.
Meta captured half of the shipments in Q2 2023, similar to Q1 2023. The year-on-year share decline was a result of the highly anticipated launch of Sony’s successor to its 2016 PSVR headset.
2023 is the year of next-generation VR headset launches. The PSVR2, E4 and Vive XR Elite are some of the prominent launches so far. And then, of course, Apple has announced its Vision Pro and Meta its Quest 3.
WATCH: AjnaLens VR Training – Teleporting Trainees to Job Site
For a more detailed AR & VR headsets (XR) shipments tracker, click below:
This is a comprehensive database of Extended Reality (XR) headset model-level shipments by quarter, including retail price and 30+ specifications and features. It covers tethered as well as standalone Virtual Reality (VR) and Augmented Reality (AR) headset models. We are tracking 35+ XR brands and 70+ headset models by memory variants, covering 99% of the global market. Data: Model-level shipments of XR headsets including retail price, specs and features. Time period: Q1 2020 – Q2 2023
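For readers who work with the tracker, the short Python sketch below illustrates how brand-level share can be derived from model-level records. The column names and figures are hypothetical placeholders, not the tracker's actual schema or Counterpoint data.

```python
import pandas as pd

# Hypothetical extract of a model-level XR shipments tracker.
# Column names and figures are placeholders, not Counterpoint data.
records = pd.DataFrame(
    {
        "quarter":     ["Q2 2023", "Q2 2023", "Q2 2023", "Q2 2023"],
        "brand":       ["Meta", "Meta", "Sony", "Pico"],
        "model":       ["Quest 2 128GB", "Quest Pro", "PSVR2", "Pico 4"],
        "shipments_k": [750, 50, 500, 300],   # thousands of units, illustrative
    }
)

# Aggregate model-level rows into brand share per quarter
by_brand = records.groupby(["quarter", "brand"])["shipments_k"].sum()
share = by_brand / by_brand.groupby(level="quarter").transform("sum") * 100
print(share.round(1))
```

With model-level rows like these, quarterly brand share, ASP bands or spec-level cuts follow from the same groupby pattern.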
For detailed insights on the data, please reach out to us at sales(at)counterpointresearch.com. If you are a member of the press, please contact us at press(at)counterpointresearch.com for any media enquiries.
Twenty years ago, I was an equity analyst for a Wall Street investment bank. At the time, my research director liked to get all the analysts to write occasional thought pieces. In the following article written in June 2003, I chose to write a speculative piece that looked back to 2003 from five years in the future, i.e. 2008. I speculated that there would be quite a few technological leaps in the five intervening years.
Given the 20 years that have now passed since I wrote the article, how many of those technologies have actually come into being? As you will see, not many, while others that were not foreseen have matured – for example, app-based smartphones and music streaming.
Without specifically naming it as artificial intelligence, I foresaw a role for cloud-based intelligent software agents that would provide intuitive assistance in multiple situations – a true digital assistant. These have not come into being and they are not even much discussed. We do have digital assistants such as Apple’s Siri, Google Assistant or Amazon’s Alexa, but they are mostly incapable of anything more than answering simple questions and certainly couldn’t be trusted to book travel tickets, make restaurant reservations or update other people’s diaries. While ChatGPT and derivatives of Large Language Models seem superficially smarter, they are still not yet at the stage of being able to function as a general assistant.
One other technology referenced in the article that is still far from maturity is augmented reality. The glasses described were not too far-fetched – Microsoft’s HoloLens can achieve some of what is described and Epson and Vuzix, for example, have developed glasses that are in use by field service engineers. But these products are not able to reference real-world objects. Apple’s forthcoming Vision Pro, while technically brilliant, would not be a suitable solution for the use case described.
At the end of the article, I listed companies that I expected to be playing a significant role in the development of the various technologies highlighted. But where are those companies now?
For context, and for the younger readers, around the turn of this century, third-generation cellular licenses had been expensively auctioned in several countries and many mobile operators were struggling to generate a return on their investment. Oh, how things have changed (or not)! As an analyst covering mobile technology, I could see that investors were valuing mobile operators solely on their voice and text revenues, with zero value being ascribed to future data revenues. My article was also an attempt to awaken investors to the potential value beyond voice.
Anyway, here’s the report that I wrote in mid-2003. It was written as though it were an article in a business newspaper.
Special Report – June 2008
Connected People
It is just eight years since European wireless telecom companies became the subject of outright derision for spending billions of dollars on licenses to operate third-generation cellular networks. Now the self-same companies have become core to our everyday existence. Their stock, which bottomed in the middle of 2002, has risen steadily ever since.
The original promise of 3G technology was high-speed data networking coupled with an exceptional capacity for both voice and data. But critics said that it was an innovation users didn’t need or want, and wouldn’t be willing to pay for.
When the first commercial 3G networks appeared in 2003 and faltered at the first step, the doubters started to look dangerously like they had a point. But the universe is fickle and within the last two or three years, the combination of maturing networks and the inevitable power of Moore’s Law has started to deliver wireless devices and applications that would have been thought of, if not as science fiction, then at least science-stretching-the-bounds-of-credibility, when the licenses were issued.
However, while the long-time infamy of 3G means it is taking the starring role as industry watchers chart the chequered history of the technology, it is the supporting cast of technologies that has really delivered the goods. Without them, 3G would have remained just another method to access the backbone network.
The following snapshots from one perfectly ordinary day last month show how the coordinated application of a whole slew of technologies has subtly but distinctly altered our lives.
Bristol – May 1, 2008, 12:57 pm
Beads of sweat form on the face of Jim McKenna, a 24-year-old technician, as he studies the guts of a damaged generator. McKenna is a member of a rapid response team, looking after mission-critical power generation facilities across Southern England.
“Dave, I’ve located the damaged circuits, I think I can repair it, but the control unit is non-standard and I’ve not seen one like it before. Can you help me out here?”
McKenna’s voice is picked up by a tiny transducer microphone embedded in a Bluetooth-enabled hands-free earbud. The bud is so small it nestles unobtrusively in the technician’s ear. The earbud is wirelessly connected to the small transceiver on McKenna’s belt. His voice activates a ‘push’-to-talk connection to his controller in the Scottish technical support center. The word push is in quotes because it is his voice that effects the push, leaving McKenna’s hands entirely free.
In the Edinburgh-based command center, David Sanderson, an experienced engineer, maximizes the image from one of a half-dozen sub-screens that compete for his attention. Each screen shows live pictures from his team of technicians with data about their location and degree of job completion.
Sanderson taps the screen again and, 400 miles away in Bristol, a tiny camera on McKenna’s smart glasses zooms in on the generator specification plate. Sanderson peers intently at the screen:
“I see a code on the side panel. I’ve highlighted it for you. Can you scan it? I can then pull the circuit files for you”.
Seemingly in mid-air, a red circle appears around a barcode away to McKenna’s right. The heads-up display in McKenna’s glasses maintains a fix on the code even though he moves his head. He leans across and uses the camera to scan the code, which is instantaneously transmitted back to Edinburgh where the circuit plans are uploaded from the database. Sanderson extracts the relevant section before speaking again to McKenna.
“Jim, I’m initiating the synchronization, you should have it in a few seconds.”
The 3G transceiver on Jim’s belt receives the information and immediately routes it to his smart glasses via Bluetooth. As Jim looks at the damaged circuitry, the heads-up display begins to superimpose the circuit diagram over the actual circuits, adjusting for size. He spends a few minutes comparing the damaged circuits with the schematic images. He calls for more backup.
“Dave, the problem is definitely in this sector of the step-down circuit,” McKenna points to a series of circuit boards, “is there a suggested workaround in the troubleshooting file?”
Within minutes the heads-up display starts guiding McKenna through a series of measures that isolate and bypass the damaged circuits. Within 20 minutes, McKenna successfully reboots the system – power is restored.
Five years ago, very little of the above could have been done as efficiently and intuitively. Field service engineers needed substantial experience to tackle complex tasks – they also had to carry heavy, often ruggedized PCs and a whole series of manuals on CD-ROMs. Technical backup, where available, was a cellular voice call.
Liverpool Street, London – May 1, 2008, 2:32 pm
Joanne King, an equity analyst, is meeting a buy-side client. As they settle into the soft leather chairs of the meeting room, she slides a flexible plastic sheet across the table. The sheet is printed with electronic ink. The latest marketing pack was downloaded to her mobile terminal on the way over in the taxi. She taps the screen of her smartphone and the slide set appears on the sheet. As Joanne and her client discuss the vagaries of the stock market, they are able to use virtual tabs to flip between ‘pages’ within the pack. When the client requests more information on the balance sheet of one of the companies they’re discussing, Joanne is able to pull down the necessary information, adding it to the slide set.
Partway through the discussion, Joanne hears a subtle tone in her ear indicating an urgent communication request from her personal digital assistant. She apologizes to the client before initiating the communication path. “Wildfire, what’s the problem?” she knows that Wildfire will only override her no-interrupt rule if an issue requires immediate attention.
“An air traffic control strike in Paris has disrupted all flights. Your 6 pm Brussels flight is showing a two-hour delay and may be canceled. The best alternative is to take the Eurostar train. Services leave at 16:30 and 18:30.”
After a moment’s thought, Joanne comes to a decision: “Book the 16:30, please.” Conscious of the topics still to cover in her meeting, she adds, “Can you also have a taxi waiting when I am through here?”
Wildfire confirms the instructions and drops back into meeting mode. Joanne apologizes to the client and resumes her meeting. Meanwhile, Joanne’s software agent communicates with various travel services, canceling her flight reservation and booking the rail service.
Having learned from Joanne’s prior behavior, the agent books a First Class seat in a carriage toward the front of the train. The agent also communicates with a taxi firm – a car will be waiting when her meeting is completed. The agent is authorized to spend money within predefined limits. Simultaneously, the agent modifies Joanne’s expense report and calendar.
Joanne’s dinner date with friends in Brussels will be hard to keep given the change in travel plans. The agent negotiates with the diaries of her three dinner guests and the reservation computer at their chosen restaurant. A new reservation is agreed and four diaries are updated accordingly.
At the conclusion of her meeting, Joanne leaves the slide set contained in the pre-punched flexible display. Her client will be able to store it in standard folders and refer to it at leisure. Solar cells ensure that there is enough power to display the material without having to worry about battery charge.
As she heads for the taxi, Joanne’s location-aware PDA recognizes she is in motion and, therefore, ready to communicate. “Joanne, you have 2 voice messages, 23 business e-mails and 12 personal e-mails. How would you like me to handle them?” Joanne chooses to listen and respond to a voicemail on the short taxi ride to Waterloo, deferring the e-mails for the train.
Once in her seat on the Eurostar train, Joanne unfolds a screen and keyboard that work alongside her 3G smartphone. Bluetooth provides the link between the smartphone, screen and keyboard. The Light Emitting Polymer screen is extremely lightweight and flexible, yet delivers high contrast and color resolution. Power consumption is low.
Joanne spends an hour responding to the e-mails before kicking off her shoes and taking out an e-book to settle down to listen to some music. She is particularly looking forward to a new album she bought on the way to the station. A song she was unfamiliar with came over the radio in the taxi – loving it, but not knowing what it was, Joanne recorded a quick burst. Vodafone, her service provider, was able to identify the music and offered to sell her the single or album. In anticipation of her long train ride, she chose the album. Leaning back in her seat, she lets the cool beats ease her to Brussels.
In 2003, one-on-one presentations were either made from a PC screen or delivered on regular paper. Meeting interruptions were either obtrusive or impossible, and changing travel reservations on the fly typically required several people – often with intervention by the traveler herself. Meanwhile, mobile e-mail was possible but only on large-screen PCs, compromised by size, weight and power consumption, or devices with screens and keyboards too small for anything other than limited responses.
Hyde Park – May 1, 2008, 2:18 pm
Mike Lee is on his way home from high school. He flips his skateboard down three steps and dives for cover in the bushes, the sound of gunfire ringing in his ears. Peering through the leaves, he holds a small flat panel console in front of him. He scans through 120 degrees, concentrating on the screen. The intense rhythms of electro-house are now the loudest sounds he hears, but there is also the distant rap of gunfire. On the screen, he sees the surrounding park, but in addition, the occasional outlandish figure appears, flitting between hiding places among the trees. “Josh! Where are you?” Mike demands in an urgent whisper.
“I’m by the lake dude. Surrounded. Can you get down here? I’m running out of ammo.”
Mike swings around, looking toward the lake through his device. He sees Josh’s position highlighted on the screen. He turns back, takes a deep breath and starts jabbing buttons on his device. Explosions and smoke fill the screen. Then running to the path, he jumps back on his skateboard and carves down the hill to the lake, pitching into the shrubbery next to his buddy Josh. They proceed to engage the advancing enemy in a frenzy of laser grenades, gunfire and whoops of delight.
After a few minutes, they both hear the words they have been waiting for, “Well done men, you have completed Level 12. Hit the download button to move on to the next level.”
Mobile gaming, even as recently as 2003, offered a relatively poor user experience. Simple Java games were the norm. Games now not only involve online buddies but are also embedded in the surrounding environment, massively enhancing the experience.
3G has come a long way from its ignominious start. However, the real catalyst that has made it a life-changing technology has been the incredible range of diverse technologies that have emerged to support the growth in wireless voice and data applications.
My original cast of technology characters has seen mixed fortunes: some are still around but with different owners, while others have disappeared altogether. Few are still going in their original business niche:
Nokia and Motorola are brands that are still making mobile devices, but in different guises than in 2003.
I don’t know what became of Sound ID. There is an app called SoundID created by Sonarworks, but it is unrelated to the Sound ID identified in the article. But Bluetooth True Wireless earbuds are now a huge market.
Microvision is still in business but has shifted its focus to LiDAR in the automotive space.
Sonim is still in business and still making ruggedized devices, including push-to-talk devices for the safety and security sectors.
Advanced Recognition Technologies was acquired by ScanSoft in 2005.
Wildfire was an innovative voice-controlled personal assistant that was acquired by the operator Orange in 2000. But Orange killed the service in 2005.
E-Ink still exists, although Philips parted ways with it in 2005.
Shazam still exists but was acquired by Apple in 2018. When it started in 2002, you had to dial a short number and hold your phone to the sound source. Users would then receive an SMS with the song title and artist.
Cambridge Display Technology is still around. It was floated on Nasdaq in 2004 and acquired by Sumitomo Chemical in 2007.
Hewlett Packard is clearly still around. However, it doesn’t make intelligent software agents. But then again, neither does anyone else, at least not in the way portrayed in the article.
Openwave no longer exists, although many of its businesses have been absorbed into other entities.
Despite the Vision Pro’s steep $3,499 price point, Apple has still managed to create a “wow” factor. The device goes on sale early next year.
The MacBook Air 15 with a thin and light profile but a bigger screen size could be an ideal option for content creators on the go.
By bringing Apple Silicon to the Mac Pro, Apple has completed its transition from Intel, giving it more control over the hardware and software stack.
Updates to iOS, iPadOS, macOS, and watchOS bring minor improvements, but the personalization and interactive widgets show a more consistent approach to user experience across Apple products.
Apple hosted its annual Worldwide Developers Conference (WWDC 2023) at Apple Park, California, from June 5 to June 9, during which it announced exciting new hardware and software launches. These included fresh devices such as the new MacBook Air, Mac Studio Gen-2, and Mac Pro with M2 Ultra SoC, alongside the latest iOS 17, iPadOS 17, macOS Sonoma, tvOS 17, and watchOS 10. However, the star of the show was Apple’s hotly anticipated mixed-reality headset, called the ‘Vision Pro’, which took our breath away.
Here is a quick recap of everything Apple announced at WWDC 2023.
Hardware announcements
Apple’s ‘Vision Pro’ mixed-reality headset is finally here
“One more thing…” – this has been synonymous with every major announcement that Apple has made over the past decade, and WWDC 2023 was no different. After months of leaks and rumors, Apple finally unveiled its “revolutionary” new product, the Vision Pro, along with its mixed-reality platform, the VisionOS. Apple has its sights on the next decade and beyond with the Vision Pro, which will be available for sale early next year.
Source: Apple
Pioneering the era of spatial computing, the cutting-edge Vision Pro is powered by an M2 processor along with a custom R1 co-processor for real-time processing. It features a micro-OLED display with 4K resolution per eye. It has 12 cameras, six microphones and five sensors to offer an immersive overall experience.
Users can control the Vision Pro UI with their eyes, voice and hands. As the headset allows video pass-through, you are not isolated and can still see and interact with the people around you. The cameras let you capture 3D photos so you can relive those moments later. You can also turn your laptop screen into a giant display, giving you an effectively unlimited canvas. All your apps can be used anywhere, and you can even resize them. Apple demonstrated many impressive features at WWDC and will continue to refine them ahead of the start of sales early next year. The Apple Vision Pro is priced at a whopping $3,499.
WATCH: Apple Vision Pro Mixed Reality Headset: Quick Look at Key Features
MacBook Air gets bigger, better and more powerful
Though WWDC is a software event, it was dominated by hardware announcements this year, starting with the Mac. The new MacBook Air 15 is incredibly thin at 11.5mm, with Apple claiming it to be the world’s thinnest 15-inch laptop. It weighs 3.3 pounds, and features two Thunderbolt (Type-C) ports, MagSafe charging and a 3.5mm headphone jack. It comes with a 15.3-inch Liquid Retina display with thin bezels, 500 nits of peak brightness and one billion colors.
Source: Apple
For video calls, the laptop includes a 1080p camera, a three-mic array and six speakers so you can hear and be heard loud and clear. Under the hood is an M2 chip, which Apple says is 12x faster than the Intel-based MacBook Air and is efficient enough to offer up to 18 hours of battery life. The new 15-inch MacBook Air starts at $1,299 ($1,199 for education) and will be available from the third week of June.
Source: Apple
The 13-inch MacBook Air now starts at $1,099, making it cheaper than before by $100. Meanwhile, the 13-inch MacBook Air M1 retains its $999 price tag, giving users more choices when looking for a MacBook Air.
Mac Studio gets more powerful with M2 Ultra
The Mac Studio is loved by all types of creators, be it for editing photos, videos, podcasts or even presentations. It is now getting a big upgrade with the powerful M2 Max SoC, which Apple says is 25% faster than the previous M1 Max. Apple continued with the stats, saying video editors can now render videos 50% faster on Adobe After Effects.
Source: Apple
Apple also announced the M2 Ultra SoC, which connects two M2 Max dies with the UltraFusion architecture to double the performance. It comes with a 24-core CPU offering 20% faster CPU performance, and its 76-core GPU is 30% faster than the M1 Ultra. There is also a 32-core Neural Engine which is 40% faster than the previous generation. It supports 192GB of unified memory. Built on a 5nm process node, it has 134 billion transistors and 800GB/s of memory bandwidth. The Mac Studio with M2 Ultra can support six Pro Display XDR monitors.
Source: Apple
Apple Silicon comes to Mac Pro, completing the transition from Intel
For the heavy and demanding workflows of film editors and sound engineers, Apple is bringing its Apple Silicon with PCI expansion to the Mac Pro. This also completes the transition from Intel to Apple Silicon. Powered by the M2 Ultra SoC, the Mac Pro comes with eight Thunderbolt ports – two on the front and six at the back. There are six PCI expansion slots too, allowing users to customize their Mac by adding audio/video IO, networking and storage.
Source: Apple
The Mac Studio will start at $1,999 with M2 Max SoC, whereas the Mac Pro will start at $6,999. They will be available from the third week of June.
Software announcements
iOS 17 gets more personalized and intuitive
With iOS 17, Apple is bringing new experiences, better communication and sharing to iPhone users. For Apple ecosystem users, Phone, iMessage and FaceTime are the three essential apps for everyday communication. The Phone app now comes with personalized contact posters, where you can use either a photo or an emoji. Apple also supports the vertical layout for Japanese text. And it is not just for calling; this new visual identity is also part of the contact card for a more consistent experience.
Source: Apple
The next feature is Live Voicemail, which shows a live transcription as the caller leaves a message. If you think it is important, you can answer right away. The feature is similar to Bixby Text Call on Samsung smartphones. Apple is also bringing voicemails to FaceTime. When you call someone on FaceTime and they are not available, you can leave a video voicemail instead.
Messages now has search filters. When you start a search, you can add more words to narrow it down and find exactly what you are looking for. And for the times when you are in a meeting or traveling and miss a whole bunch of conversation, a new catch-up arrow at the top right lets you quickly jump to the first message you haven’t seen.
Other key messaging features of iOS 17 include:
Swipe on the bubble for quick in-line replies
Audio message transcription
In-line location within the conversation
Check-in to keep in touch with your loved ones
Besides location, you can also share battery level and cellular service status
All the information shared with your family and friends is also end-to-end encrypted.
Source: Apple
Next, Apple is also bringing a better Stickers experience. All the recently used stickers and memoji are now available in a brand-new drawer. You can peel and stick emoji stickers in the conversation, and even rotate and resize them. These stickers are available systemwide, so you can use them with any app.
Apple is also changing the way we share contact details with someone new. A new feature called NameDrop lets you bring two iPhones close together to share phone numbers and other contact details. The feature also works between an iPhone and an Apple Watch. But that’s not all: you can now AirDrop content over the internet too.
Source: Apple
There is also a new StandBy mode, activated by turning the iPhone to landscape while charging. It turns your phone into a desk clock displaying the time, date, weather and alarm information at a glance. You can add widgets and even customize the screen with different clock styles to fit your needs.
Apple has made the assistant hot word even simpler, so instead of “Hey Siri”, you can now just say “Siri” followed by the command. And you can now use back-to-back commands like a conversation, without having to call “Siri” again and again. Lastly, Apple Maps can now be used offline by selecting an area and downloading offline maps. This can be very helpful when there is no cellular network.
Source: Apple
iPadOS gets more personalized with a customizable lock screen and interactive Home Screen widgets
With the new iPadOS 17, Apple is adding interactive widgets through which you can carry out tasks without having to open the app. For instance, you can turn the lights on and off from the Home widget, or even play/pause music from the Apple Music widget.
Just like on the iPhone, you will now be able to customize the lock screen as well. From photos to astronomy to a kaleidoscope, there are a lot of options to choose from. On the left, you can also add multiple widgets to get more information at a glance on the lock screen.
Source: Apple
The iPad is also getting Live Activities, allowing users to keep track of food with Uber Eats, travel plans with Flighty, live scores from the sports app, and more, all from the lock screen.
Source: Apple
Taking full advantage of the large screen canvas, the Health app can show rich details of health-related activities like heart rate, steps, and more at a glance in one place.
Source: Apple
iPadOS 17 can identify the fields in a PDF and fill in relevant details using autofill, such as name, address, phone number and email. You can even add a signature to the document using the Apple Pencil. Apple has also added a collaboration feature which can be helpful when working together – you can see each other’s updates in real time as you scribble.
Source: Apple
macOS Sonoma gets a new Gaming Mode and more
A lot of key features from iOS 17 and iPadOS 17, such as widgets, messaging and the “Siri” hot word, are coming to macOS Sonoma. With the new update, Apple is bringing new screensavers and widgets that you can now move around the desktop and out of the Notification Center.
Source: Apple
Apple is finally getting serious about gaming on macOS and is bringing “Game Mode” with the new OS update. It will prioritize CPU and GPU resources to optimize the gaming experience on the Mac. Apple says it has also worked on reducing audio latency when using AirPods, and input latency when using a PlayStation or Xbox controller, by doubling the Bluetooth sampling rate.
But what’s even more impressive is the “Game Porting Toolkit”, with which developers can quickly evaluate whether their game can run well on the Mac. The process used to take months, but with the toolkit it can now be completed in days, bringing down development time.
Source: Apple
Another big feature is coming to help users when they are presenting remotely. Using Apple Silicon and the Neural Engine, you get a new overlay option when doing remote presentations. It can be a small bubble showing your face or a large overlay where you remain prominent in front of your presentation. But that’s not all: you can even add emoji reactions to your video stream, adding more fun to your presentation.
Apple has been focusing on user privacy-related features and in the new macOS Sonoma, it is taking a step further with the Safari browser. The new features allow you to lock browser windows, block trackers and more. The new macOS also gains the ability to help you share your passwords and passkeys with your family. Lastly, there is also web app support, allowing you to quickly access your favorite sites.
Source: Apple
watchOS 10 adds widgets for quick access and more
watchOS 10 now gives you quick access to widgets in a Smart Stack by simply rotating the crown from any watch face. Users can also add a widget that holds their favorite complications, like quick access to a stopwatch, music or timer. Apps like World Clock get an update with dynamic background colors reflecting the time of day in that particular time zone.
Source: Apple
When you wear the Apple Watch and work out, the live activity will also be shown as a widget on the iPhone lock screen. Apple is also updating the Compass and Maps apps with a safety feature to help users who go hiking. The compass will generate two waypoints, one of them being a cellular waypoint.
Source: Apple
If you move into an area with no cellular network, the newly generated waypoint will indicate the last place where you had reception. We think it is a great addition, as you can track back to the area where you had reception and make a phone call or send a text message to your family and friends.
There are many other features and improvements, apart from the ones mentioned here, that are coming to Apple Watch with the watchOS 10 update.
Apple announced Vision Pro at the June 5 WWDC with a launch price of $3,499.
It will be released early next year starting with the US, the biggest XR headset market with over 70% share in 2022.
Featuring advanced specs and a sleek design, it has enterprise, gaming, content and connectivity use cases.
However, with a price of 12 times that of an entry-level Quest headset, it is unlikely to ship over half a million units in its first year.
Apple made its long-anticipated foray into the extended reality (XR) market with the announcement of a $3,499 headset, Vision Pro, at this year’s WWDC on June 5. While Apple is calling it an augmented reality (AR) headset, it is effectively a mixed reality headset based on video pass-through, although done better than anyone else. This is an important step forward for the technology which may eventually replace smartphones, personal computers and televisions.
Apple’s short-term and long-term prospects
With such high expectations, Apple’s stock reached an all-time high before the announcement but fell during the keynote address. This shift in investor stance reflects the challenges that complicate this opportunity.
Apple has also not jumped onto the AI bandwagon so far, as it is not its core strength, even though AI may yield dividends in the nearer term. This, too, is influencing investor perception of the stock’s attractiveness.
Given primarily the hefty price tag, which is 12 times that of an entry-level Quest headset, the first iteration of the headset is unlikely to sell more than half a million units in the first year of availability. Investors’ reaction also reflects this. Apple’s concern, however, is not the day’s stock movement but the next decade and beyond of technological evolution – about a post-smartphone future and how to secure it.
WATCH: Apple Vision Pro Mixed Reality Headset: Quick Look at Key Features
Cutting-edge technology and Apple premium explain the price tag
In order to secure this long-term future, after eight years of work and 5,000 patents, Apple has announced what it describes as “the most advanced personal electronics device ever”. It features Apple’s powerful M2 processor with its custom R1 co-processor that helps manage the computational load from multiple cameras and other sensors in the spatial computing device.
Its two micro-OLED displays offer an unrivalled viewing experience with more than 4K resolution per eye. So far, only tethered VR devices by Czech-based VRGineers and China-based Pimax have offered 4K displays, but using LCD panels.
The Vision Pro also takes the industry forward with an immersive audio experience enabled by two amplified drivers in audio pods next to each ear.
In demos, Apple employees scanned reviewers’ ears and their surroundings to calibrate spatial audio, besides scanning their faces for Face ID.
The device uses advanced scanning to personalize the experience. Facial scanning is done to create a representation of the user’s face. This is used in, for example, virtual conferencing. Eye movements and facial expressions are rendered faithfully. The device also scans the environment to optimize the audio settings to deliver accurate spatial audio.
With an external battery pack, Vision Pro is just shy of being completely self-sufficient
The headset does not come with controllers as it uses advanced eye, voice and gesture tracking through 12 cameras, 6 microphones and 5 sensors.
An external battery pack, however, prevents the device from being completely standalone despite featuring multiple integrated chipsets which enable autonomous computing. A two-hour battery life, then, is disappointing.
Developer kits and six months to create apps for wide-ranging use cases
The gestation period of six months before the headset is available for purchase in early 2024 in the US will enable developers to build, iterate and test apps on the headset. They carry a heavy weight of expectations to update existing apps for the spatial environment and to create killer new apps offering use cases for both consumers and enterprises on Apple’s all-new VisionOS platform.
Scale and size to allow Apple to forge partnerships critical for the technology’s success
The partnerships, such as those Apple has struck with Disney, Unity and Zeiss, are also key to ensuring the success of Vision Pro, and indeed the technology in general, especially in the early days when buyers may need every push to try out a technology with which few are familiar.
Meta has tried this for its enterprise-grade headset, the 2022-launched Quest Pro, with indeterminate although likely unremarkable outcomes. Apple’s advantage lies in its ability to entice a whole host of firms, including Hollywood studios, to create custom content for its headset.
Concerns and challenges that may obstruct Apple’s path to spatial success
Vision Pro is clearly only an early step in what is going to be a long journey before face-worn computers become mainstream. Several obstacles stand in the way and will need to be overcome to realize such a future.
Form factor
While Apple’s ski goggle-like design is sleek and attractive, widespread acceptance will require compressing similar compute into a compact, glasses-like design.
Weight
The headset offloads some of its weight to an external battery pack but is still described by reviewers as being hefty. For a headset to become mainstream, it will need to be lightweight enough to be comfortably worn for extended periods.
Battery
Eventually, the battery needs to be integrated with the main headset while concurrently reducing its weight. Besides, the battery life will also need to be increased to at least 8-10 hours before headsets can come close to becoming integral parts of our daily lives.
Privacy
In this regard, Apple has already taken steps to allay concerns by ensuring that consumer data is protected, and in some cases, not even accessible to Apple. With its current headset looking clearly like a tech device and unlikely to be used for extended periods in public, Apple has also dodged one of the bullets that killed Google Glass – the fear of headset users breaching the privacy of unsuspecting passersby. However, as Apple’s headset becomes sleeker, these concerns will have to be addressed.
Apple’s success will be the industry’s gain
Regardless of these challenges, Apple’s long-awaited entry into the segment has already generated an upswing in consumer interest towards XR hardware that perhaps even Facebook’s name change to Meta did not. This interest is likely to translate into increased sales of headsets of all types. For those unable to afford Apple’s prices, or unwilling to wait long enough for it to become available for sale (especially outside of the US), rival headsets will be good alternatives to try out the tech.
So, even if the launch of what Apple described as “the most advanced personal electronics device ever” may not be an iPhone moment, it is a positive step and will take the industry forward.
Feel free to reach us at press@counterpointresearch.com for questions regarding our latest research and insights.
Meta announced the launch of the Quest 3 headset on June 1. To be retailed at just under $500, it will be released in autumn. The Quest 3 will have both VR and MR capabilities.
The Quest 2 has also received a $100 price cut, with the entry-level variant available at $299 starting June 4.
Together with its newly discounted predecessor, the Quest 3 is expected to help the company maintain market dominance for now.
Meta’s announcement came days ahead of WWDC, where Apple will reportedly announce its own MR headset.
London, San Diego, New Delhi, Beijing, Buenos Aires, Seoul, Hong Kong – June 5, 2023
The announcement of Meta’s Quest 3 headset at $499.99 and the Quest 2’s $100 price cut to $299 just before the rumoured launch of Apple’s first mixed reality (MR) headset shows the social media parent’s determination to lead the extended reality (XR) headset market.
Meta described the Quest 3, which will have both VR and MR capabilities, as its “most powerful headset yet”. The announcement of a successor to the best-selling XR model in history after three years of no consumer-grade headset launches by Meta is an important step forward for the company as well as for the industry.
In line with the season’s flavour, mixed reality, the Quest 3 features the next generation of Qualcomm’s Snapdragon chipset along with yet-to-be-disclosed but likely superior display resolution, memory, battery life and weight.
The Quest 3’s launch in autumn, together with the price cut of the Quest 2, will be enough to maintain Meta’s market dominance in terms of shipments for the foreseeable future.
Apple’s expected announcement of a $3,000 MR headset during this year’s Worldwide Developers Conference (WWDC) on June 5 will create the biggest challenge to Meta since its entry into the segment through the acquisition of Oculus VR in 2014. If Apple succeeds in bringing the cost down and gaining a foothold in the market through successive iterations of the $3,000 headset, it may supplant Meta as the biggest revenue generator in the market which Meta has dominated thus far both in terms of revenue and shipments.
Background
Counterpoint Technology Market Research is a global research firm specializing in products in the TMT (technology, media and telecom) industry. It services major technology and financial firms with a mix of monthly reports, customized projects and detailed analyses of the mobile and technology markets. Its key analysts are seasoned experts in the high-tech industry.
Feel free to reach us at press@counterpointresearch.com for questions regarding our latest research and insights.
The AR/VR (Augmented Reality/Virtual Reality) hype in China has gone through two waves of growth, despite the current relatively small market size. The market and investment hype in China has waned and returned to a more rational level since the beginning of 2023. This can be attributed to the dismal profitability of internet giants under macroeconomic pressure and the underwhelming sales performance of VR devices.
Exhibit 1: Development stage of the AR/VR Industry in China
Source: Counterpoint Analysis
Nonetheless, China’s national government has recognized the long-term potential of XR (eXtended Reality) technologies, with the XR industry prioritized as one of the top seven key industries to constitute the digital economy of China in the country’s 14th Five-Year Plan. Recently, CAICT (China Academy of Information and Communications Technology) has also introduced action plans to foster the integration of virtual reality technologies (including augmented reality and mixed reality technologies) and vertical applications from 2022 to 2026.
The development of VR glasses took off in China in 2016, with standalone devices becoming mainstream in 2019. As of 2023, we are seeing the market frenzy for VR devices subsiding, with the industry waiting for the introduction of Apple’s first MR headset. Meanwhile, the development of AR glasses is still at an early stage, with only limited products available in China prior to 2022. However, since 2022/2023, we are seeing more products being commercialized.
Regarding the XR device value chain, Chinese companies dominate in certain technology aspects, such as optic and display solutions, battery, and ODM & EMS (Original Design Manufacturing/Electronics Manufacturing Services), but still lag behind in the development of SoC (System on Chip), connectivity, memory, as well as some areas of sensing and interaction technologies.
Source: Counterpoint Analysis
In the following sections of this report, we will present an analysis of the development of China and key Chinese players in the core AR/VR technology domains.
SoC: Qualcomm clearly leads with the first-mover advantage
Before 2018, most AR/VR headsets in the market were supported by Qualcomm’s Snapdragon mobile platforms and its bundled XR SDKs. Following the launch of the Snapdragon XR1 Platform, popular VR headsets both domestically and internationally were predominantly developed on Qualcomm’s dedicated XR platforms.
Compared to Chinese counterparts, Qualcomm’s XR chips offer distinct advantages in terms of computing power, GPU rendering capabilities, connectivity, and overall hardware-software integration. Chinese companies, including mainland China players Rockchip, Allwinner Technology and HiSilicon, and Taiwanese player MTK, have developed products for AR/VR headsets. However, only MTK’s VR SoC was adopted by leading player Sony, while solutions developed by mainland China companies were primarily utilized in lower-tier devices focused on VR video.
Apart from MTK, China currently lacks a competitive AR/VR SoC provider capable of challenging the dominance of international players like Qualcomm and Samsung. While Chinese players such as Rockchip, GPT, and Rokid have announced their ambitions to develop more advanced AR/VR chips, it remains uncertain whether or when their efforts will pay off.
The following sections of the report cover our analysis of various AR/VR technology domains: optic technologies such as Fresnel lenses, pancake solutions, and birdbath/freeform optics/waveguide solutions; display technologies such as Fast-LCD, OLED, LCoS, Micro OLED and Micro LED; sensing and interaction technologies such as rotational and translational tracking of the head and controllers, hand recognition, eye tracking and video see-through; connectivity, memory and battery technologies; ODM/EMS services; and the global supply ecosystem and the latest developments of Chinese companies in these domains. To access the full report, please click this link, or contact sales@counterpointresearch.com.
After announcing the Snapdragon 8 Gen 2 mobile platform yesterday at Snapdragon Summit 2022, Qualcomm announced its augmented reality and audio platforms today. The chipmaker also teased its custom-architecture Oryon CPU for the Snapdragon Compute platform. Below is a summary of all Day 2 announcements from Qualcomm.
Qualcomm Snapdragon AR2 Gen 1 platform for thin and light Augmented Reality glasses
The concept of augmented reality glasses in a thin, lightweight, and unobtrusive form factor has been around for a few years now, but there are a lot of challenges in bringing that technology to market. These include providing sufficient computational horsepower on the glasses while offloading other processing to partner devices or the cloud, doing so at low power within a tight thermal envelope, providing fast connectivity, and keeping size and weight minimal while enabling ease of use. Qualcomm, with the purpose-built 4nm Snapdragon AR2 Gen 1 platform, is addressing many of these challenges. Qualcomm is not addressing the optics part, which will remain a challenge, but the AR2 is a big step in the right direction.
The multi-chip architecture of the Qualcomm Snapdragon AR2 Gen 1 platform has three key elements:
First is an AR processor that manages sensing and tracking of the user and the environment, and also manages the critical task of feeding the visual pipeline.
Second is an AR co-processor, which takes care of the sensors, AI, and computer vision. The use of a processor and co-processor allows for overall smaller and lighter designs with fewer interconnects between sensors and the processing hubs.
The third chip is the FastConnect 7800 connectivity module which is the first to bring Wi-Fi 7 connectivity to AR, offering less than 2ms latency, and up to 5.8Gbps peak speeds, while consuming 40% less power.
Source – Qualcomm
Single-chip AR solutions have a larger PCB and wires running all around the temples and the nose bridge. This is where Qualcomm’s latest multi-chip solution will help reduce the clutter and make way for a thinner form factor. The main processor PCB is now 40% smaller (10mm x 12mm), while the co-processor is also small (4.2mm x 6.2mm), both of which help reduce the wiring around the frame by 45%.
Source – Qualcomm
Snapdragon AR2 Gen 1 also focuses on “Distributed Processing”, which, as the name suggests, distributes the processing between the AR glasses and the host (a smartphone, a PC, the cloud, or even a mix). It also consumes 50% less power than the Snapdragon XR2 platform, at less than 1W.
Qualcomm also says that the Hexagon processor on the new platform offers a 2.5X jump in AI performance for different tasks such as image classification, object recognition, and hand tracking.
Source – Qualcomm
But it is not just about the hardware; Qualcomm is also helping developers build immersive AR content with the Snapdragon Spaces dev platform and SDK. The software tools include object recognition and tracking, positional tracking, hand tracking, plane detection, scene understanding, and much more. The Snapdragon 8 Gen 2 platform is Snapdragon Spaces ready, and OEMs such as HONOR, OnePlus, OPPO, Xiaomi, REDMAGIC, Pico, Nreal, and others are already working on bringing these immersive experiences to their devices.
Qualcomm S5 and S3 Gen 2 Sound platforms
Qualcomm unveiled its second-generation Bluetooth audio platforms, S5 Gen 2 and S3 Gen 2, with support for Snapdragon Sound technology. The new platforms are optimized to work with the new Snapdragon 8 Gen 2 SoC, to deliver premium audio experiences. While the Qualcomm S5 Gen 2 is designed for TWS and over-the-ear headphones, S3 Gen 2 is designed for mainstream accessories such as speakers.
Source – Qualcomm
Some key features include:
Optimization for Bluetooth LE Audio with lossless audio support.
48ms latency for lag-free gaming.
Spatial audio with dynamic head-tracking.
Enhanced ANC to solve issues such as wind noise, howling, and more.
Adaptive transparency mode with automatic speech detection.
AuraCast Broadcast Audio support allows users to share music with family and friends in a personal environment or a public setting.
Commercial devices powered by Qualcomm S5 and S3 Gen 2 platforms are expected in H2 2023.
Custom Oryon CPU, a competitor to Apple Silicon
Besides smartphones, Qualcomm has also been focusing on always-on, always-connected compute platforms. These platforms combine built-in 5G connectivity with a low-power, high-performance CPU to offer long-lasting battery life and powerful productivity in a single package. With the Nuvia acquisition, Qualcomm is taking a big leap to “shape the future of computing” with its next-generation custom Oryon CPU.
Presently, Qualcomm relies on ARM for the CPU core design, but with Oryon, it will own the hardware and software stack, and will no longer need to wait for ARM to release new designs. This means it can operate independently more like Intel, AMD, and even Apple. The Oryon CPU is designed for Windows on ARM but can be extended to mobile also. While Qualcomm did not offer any further details, we may have to wait for a year or more to see the custom CPU in action.
Source – Qualcomm
The keynote also saw some announcements from Citibank and Adobe. Citibank announced that it will be transitioning 70% of its global users to Qualcomm-based computers, whereas Adobe said it will be bringing Creative Cloud applications to Snapdragon-based compute platforms in 2023.
Qualcomm also highlighted some of the recent AI advancements in Windows 11 around mic and camera capabilities. On Teams and Zoom calls, the Neural Processing Unit (NPU) can offer noise cancellation to focus on the user’s voice while reducing ambient background noise. The AI camera can also blur the background and auto-track the user to keep them in the frame and in focus. These features can already be found in the Microsoft Surface Pro 9 5G powered by the Microsoft SQ3 SoC, which is basically a customized Snapdragon 8cx Gen 3 chipset.
The existing Snapdragon 8cx Gen 3 platform offers good performance for many of the tasks that users undertake on a daily basis, but it doesn’t have the power to perform heavy computational loads; it was notable that Qualcomm compared the performance of its solutions to Intel Core i5 class of processors rather than an Apple M1 or Core i7, for example – for this level of power, we expect that the new Oryon CPU will be needed. But for many users, the power it offers will be more than sufficient and the benefits offered in core use cases combined with long battery life and fanless designs will mean a step-change in the usability of PCs.
AjnaLens, an India-based startup founded in 2014 by co-founders with IIT and engineering backgrounds, is one of the Indian players in the XR (Extended Reality) space joining the Metaverse revolution. Designing and manufacturing in India, AjnaLens offers AR (Augmented Reality), VR (Virtual Reality) and Mixed Reality solutions with applications across sectors ranging from skill training to enterprise and even the Indian defense sector.
We recently got to spend some time at the AjnaLens office in Mumbai to talk with the co-founders, understand the product offerings, and experience the solutions in action.
The company’s mission & services
The key mission of AjnaLens is to upskill the workforce and bridge the digital divide. The company has joined hands with Tata Technologies to upgrade 150 ITIs (Industrial Training Institutes) in Karnataka, India, and to upskill over 9,000 students using a VR-based simulator.
AjnaLens also leverages technologies such as artificial intelligence and mixed reality to upgrade defense weapon systems and tanks to help increase the effectiveness of combat missions. The mixed reality glasses can be mounted on soldiers’ helmets, enabling them to efficiently carry out surveillance and security.
Credit – AjnaLens
Though defense was just a byproduct initially, it now accounts for most of the company’s business. This military-grade Mixed Reality helmet also includes features like GPS for navigation, night vision, LiDAR, sonar and thermal scanners.
There are three core product applications:
• AR-glasses for enterprise
• Mixed Reality glasses for the military
• XR Station for VR training purposes
AjnaLens also has its own app marketplace where it can customize apps based on specific client needs. The marketplace also allows third-party app developers to submit and publish their apps. Using Android OS as a base, AjnaLens has filed for over 15 national & international patents in augmented reality, and its algorithms are its secret sauce for powering and integrating the entire system.
The upskilling challenge for industries
One of the biggest challenges facing industries is training the workforce with new skills and capabilities. Post-COVID-19 hybrid and remote working is making training even more challenging. But with VR and the metaverse, these challenges can be more easily overcome.
In VR training, like that offered by AjnaLens, workers are instantly teleported to the job site (or workshop). It is one of the most effective ways to develop new skills and train the workforce. Scientific research has proven that VR training is more completely and readily absorbed by the brain than traditional classroom-type training.
VR training can offer several benefits, but the two important aspects are that it offers realistic simulations and the ability to teach even hard skills. And what better example than a flight simulator where challenging emergency scenarios can be recreated for pilot training?
WATCH: AjnaLens VR Training – Teleporting Trainees to Job Site
AjnaLens VR for training institutes: Immersive & interactive way of learning
The AjnaLens team offered us a demo of its VR solution for training institutes, and we were left impressed.
The VR headset, AjnaLite 2, is tethered to a dedicated VR workstation called Ajna XR Station, and the software is scalable across different use cases; institutes just need to load the training modules.
Currently, it supports a variety of jobs such as welding, painting, and fire and safety training.
With this 360-degree immersive environment, students or workers can learn skills like painting for automobiles and aviation.
Upon completing the tasks, students get instant grades and they can practice for an unlimited time until they perfect the processes.
As there is no need to have actual paint and car doors to learn painting, it allows organizations to greatly reduce overall training costs.
For those who wear specs, the VR glasses have a dial to adjust the lens power.
The display is bright and crisp, and did not cause any eye-fatigue issues during our limited usage.
The VR glasses and equipment like the spray gun and welding gun have trackers to follow your movements.
We were impressed with the painting demo and the precision of detail in the spray angle and distance.
AjnaLens AR glasses for enterprise
AjnaLens also has tethered AR glasses with ambient-aware (see-through) features.
These are lightweight glasses that have a 2K display, speaker, and camera.
It has a 50-degree field of view and can be used to create a virtual space from the connected device.
These glasses are powered by tethering, over a Type-C cable, to a smartphone with a Qualcomm Snapdragon 845 SoC or above, or to a laptop or tablet.
There is no processing on the glasses; they only have MCUs for the camera, tracking, sensors and audio.
This virtual space can have holograms and avatars, digital twins, web browsers, CAD designs, and Office apps.
Users can also take virtual team calls over platforms like Microsoft Teams or Zoom.
We did give it a try and it was quite comfortable to wear.
The display was bright enough, and color reproduction was good too.
Overall, we were left impressed with the demos we saw at AjnaLens’ office, and with this hybrid work culture and remote assistance use cases, there is room to grow and expand beyond India. AjnaLens is one of the companies in the XR space to watch out for.
With the time spent on console games, home entertainment and online education increasing against the backdrop of COVID-19, AR/VR devices are drawing more attention. Development of a killer consumer application is the key to the XR market’s growth. But prior to that, improvements are needed on the hardware side. Technical barriers and inconvenience in wearing AR/VR headsets remain. Therefore, R&D needs to be ramped up in this direction to minimize limitations in use.
Limits to AR/VR displays
AR/VR headsets should be good enough to deliver an immersive experience. They should also be light enough to be worn for long periods. Further, the AR/VR devices released so far can cause eye strain due to the low resolution of their displays, while low refresh rates delay screen updates, causing dizziness even after short periods of use.
Technical requirements for AR/VR displays
– Fine Pixel Size
– High Refresh Rate
– High Resolution
An AR/VR display must first have a very fine pixel size for accurate color and image reproduction. A high refresh rate is equally important: even smartphones need at least 120Hz to reduce motion blur in video, yet recent AR/VR devices such as the Oculus Quest 2 run at 72Hz, far short of the 120Hz required. Resolution matters too. OLED smartphones average around 550 ppi, but AR/VR devices require about 3,500 ppi because they feature near-eye displays that are viewed through magnifying optics.
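To see why the requirement jumps by roughly an order of magnitude, a rough calculation helps. The sketch below is only a back-of-the-envelope check under assumed values (a ~60 pixels-per-degree acuity target, a 100-degree field of view and a roughly 1.8-inch-wide panel per eye); none of these inputs come from the article itself.

```python
# Back-of-the-envelope check of the ~3,500 ppi figure quoted above.
# All three inputs below are illustrative assumptions, not figures from this article.

ACUITY_PPD = 60        # assumed target: ~60 pixels per degree approximates 20/20 acuity
FOV_DEG = 100          # assumed horizontal field of view of a VR headset, in degrees
PANEL_WIDTH_IN = 1.8   # assumed active width of one near-eye panel, in inches

pixels_needed = ACUITY_PPD * FOV_DEG          # horizontal pixels to hit the acuity target
ppi_needed = pixels_needed / PANEL_WIDTH_IN   # density the small panel must deliver

print(f"Horizontal pixels needed: {pixels_needed}")
print(f"Required pixel density:  ~{ppi_needed:,.0f} ppi")
# ~3,333 ppi with these assumptions -- the same order of magnitude as the ~3,500 ppi
# cited above, versus ~550 ppi for an OLED smartphone held at arm's length.
```

Plugging in a wider field of view or a smaller panel pushes the requirement even higher, which is why the gap with smartphone displays is so large.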
OLEDoS: Solution for ultra-high resolution
Source: LG Display Blog
OLEDoS (OLED on Silicon) is a display panel that typically has a diagonal length of less than 1 inch and meets the 3,000-4,000 ppi resolution requirement of AR/VR device displays. Existing OLED displays use Low-Temperature Polycrystalline Silicon (LTPS) or Oxide TFTs on glass substrates, whereas OLEDoS uses silicon-wafer-based CMOS substrates. On silicon substrates, the ultra-fine circuit structures typical of semiconductor processes can be reproduced, which in turn leads to ultra-high-resolution OLEDs when the organic material is deposited on them.
Specifications of OLED and OLEDoS
Source: Sony / Korea Science
OLEDoS is also called Micro OLED in the market and offers high efficiency, high luminance, infinite contrast, fast response and a long lifetime compared with conventional OLED. Because the panel is smaller than 1 inch, the user does not look at it directly but sees an enlarged image through an optical lens. Used in AR/VR equipment, it delivers high resolution in a small, lightweight wearable device.
Apple is also likely to use OLEDoS in its second-generation AR/VR product, which is expected to enter the market around 2025. Meta, too, is likely to adopt OLEDoS in its Quest 3 device, expected to be released in 2023, bringing augmented reality that meets the above technical conditions closer to reality.
OLEDoS expected to achieve 28% share in 2025
Currently, OLEDoS faces high market entry barriers because the technology is not yet mature. The cost of producing the semiconductor substrate remains high and the related value chain has yet to fully form. However, with Apple and Meta expected to introduce OLEDoS-based AR/VR equipment in the next two to three years, many manufacturers are expected to actively adopt OLEDoS. Higher production volumes will lower per-unit costs, prompting more demand and lifting OLEDoS’ share to 28% of the market by 2025.
Next big supplier of AR/VR displays
Apple’s first headset is expected to carry two OLEDoS displays, with Sony as the initial supplier and LG Display supplying the conventional OLED panel used for the external display. In the long run, however, Apple is expected to favor LG Display over Sony for OLEDoS. Although Sony’s technology is somewhat ahead today, the company has its own gaming console, making it a potential competitor to Apple in the XR market, where having a killer application is key. Once Apple enters the XR market, the market is expected to grow rapidly, and Samsung Display is expected to catch up quickly with arch-rival LG Display. Once again, we will see a fierce race between SDC and LGD, this time in the XR market.
Ahead of MWC 2022, OPPO announced its flagship Find X5 and Find X5 Pro smartphones. Both were showcased at OPPO’s booth along with other products, including the fastest smartphone charging solutions, AR Glass and a 5G CPE device. Fast charging is becoming a key smartphone trend to watch this year, with OEMs racing to offer the fastest charging solution on the market.
OPPO Find X5 Series
OPPO has always introduced new, premium experiences with its Find series smartphones. From a pop-up camera to a periscope-style zoom lens, a microscope camera system and a compact foldable smartphone, OPPO has done something different with every new device. This year, OPPO is focusing on improving the photography experience and, to achieve that, has partnered with Hasselblad for the Find X5 series. The partnership aims to improve mobile photography using Hasselblad’s color science for natural color calibration.
In addition, the Find X5 and Find X5 Pro come with OPPO’s own MariSilicon imaging NPU. It unlocks features such as 4K Night and 4K Ultra HDR video recording, as well as AI noise reduction to improve night photography. The dedicated NPU also speeds up other AI-based image-processing tasks.
The Find X5 Pro features a triple rear camera setup comprising a 50MP primary sensor with 3-axis sensor-shift and 2-axis lens-shift stabilization from Cambridge Mechatronics. This compensates for shaky hand movements when taking photos and for jerks when recording videos, allowing you to capture blur-free photos and videos. A 50MP ultrawide lens and a 13MP telephoto lens complete the camera system.
The Find X5 Pro is powered by a Qualcomm Snapdragon 8 Gen 1 SoC and features 5G connectivity and a 6.7-inch LTPO AMOLED display that can refresh anywhere between 1Hz and 120Hz depending on the on-screen content.
WATCH: MWC 2022: Quick Look at OPPO Find X5 Pro
150W and 240W Fast Charging Solutions
OPPO also introduced the industry’s fastest smartphone charging tech. First is the 150W SuperVOOC flash charge solution, which can charge a 4,500mAh battery to 50% in just five minutes, with a full charge taking about 15 minutes. A OnePlus smartphone with 150W fast charging will be launched in Q2 2022. Next is the 240W fast charging tech, which can fully charge a 4,500mAh battery in just nine minutes.
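As a sanity check on these figures, the short sketch below works out the average power implied by the quoted charge times. The 3.85V nominal cell voltage is our assumption rather than an OPPO specification, and the estimate ignores charge tapering and conversion losses.

```python
# Rough implied *average* charging power behind the quoted charge times.
# The 3.85V nominal cell voltage is our assumption, not an OPPO specification,
# and the calculation ignores charge tapering and conversion losses.

NOMINAL_V = 3.85                 # assumed nominal battery voltage, volts
capacity_wh = 4.5 * NOMINAL_V    # 4,500mAh = 4.5Ah -> ~17.3Wh of stored energy

def avg_power_w(fraction_charged: float, minutes: float) -> float:
    """Average power (W) needed to add `fraction_charged` of the battery in `minutes`."""
    return fraction_charged * capacity_wh / (minutes / 60)

print(f"150W claim, 0-50% in 5 min:   ~{avg_power_w(0.5, 5):.0f} W average")
print(f"150W claim, 0-100% in 15 min: ~{avg_power_w(1.0, 15):.0f} W average")
print(f"240W claim, 0-100% in 9 min:  ~{avg_power_w(1.0, 9):.0f} W average")
# The averages land well below the headline wattages, which is expected:
# the peak rating applies early in the charge, before the current tapers off.
```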
Now, on one hand, we have fast-charging tech with which smartphone OEMs are trying to address one pain point. On the other, there are concerns about battery longevity and the wear that fast charging could induce. To allay these concerns, OPPO is also packing Battery Health Engine technology into the Find X5 series, which the company says can retain 80% of the battery’s original capacity even after 1,600 charge cycles, roughly double the industry standard of around 800 cycles. Considering the average smartphone replacement cycle of about 28-30 months, the battery should remain good enough until users upgrade to a new smartphone, as the rough calculation below illustrates.
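To put those cycle counts in context, here is a quick back-of-the-envelope estimate; the one-full-charge-cycle-per-day usage pattern is our own assumption, not OPPO’s.

```python
# Quick check that a 1,600-cycle rating outlasts a 28-30 month replacement cycle.
# The one-full-charge-cycle-per-day usage pattern is our assumption, not OPPO's.

CYCLES_PER_DAY = 1.0      # assumed: roughly one full charge cycle per day
DAYS_PER_MONTH = 30.4     # average number of days in a month

def cycles_used(months: float) -> float:
    """Full charge cycles accumulated over `months` of ownership."""
    return months * DAYS_PER_MONTH * CYCLES_PER_DAY

for months in (28, 30):
    print(f"{months} months -> ~{cycles_used(months):.0f} cycles "
          f"(vs 1,600-cycle claim, ~800-cycle industry norm)")
# ~850-910 cycles over 28-30 months: right around the ~800-cycle industry norm,
# but comfortably inside the claimed 1,600-cycle rating.
```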
BONUS VIDEO: Fast Charging – A Key Trend This Year at MWC 2022
OPPO AR Glass
At MWC 2022, OPPO also showcased a demo of its AR Glass 2021. These AR (augmented reality) glasses are much closer to the casual eyewear most of us are used to wearing. They feature a 0.71-inch OLED display that delivers a viewing experience equivalent to watching a 90-inch TV from three meters away. As these are AR glasses, the display is transparent, allowing you to see your surroundings as well.
The AR glasses draw their power from a tethered OPPO smartphone with a Snapdragon 865 SoC or above. I tried connecting a couple of other smartphones, such as the Huawei P50 Pro and an iQOO phone; while the smartphone screen was mirrored on the glasses, the resolution and aspect ratio could not be adjusted to the required settings, possibly because the glasses need ColorOS and a dedicated app.
Otherwise, the display experience was good. Content is limited for now, and though the glasses have built-in speakers, I was unable to hear clearly as the show floor was too noisy. They would likely work well at home or in office meeting rooms. Another notable point about the OPPO AR Glass is that you can get lenses matching the power of your prescription glasses. The AR and XR (extended reality) space is seeing increasing development, but there are still many significant hurdles to overcome before mainstream adoption can occur.
5G CPE Device
Lastly, OPPO also introduced the 5G CPE T2, essentially a 5G hub built to convert a 5G signal into a LAN or Wi-Fi connection. The device could be ideal for a small office or home office where there is no existing broadband infrastructure. With a 5G SIM card installed, the CPE lets multiple devices such as smartphones, TVs and laptops connect and access fast 5G internet.
OPPO mentioned that the device is made from recycled materials. It also comes with O-Reserve 2.0 smart antenna technology and is integrated with Qualcomm’s Snapdragon X62 5G Modem-RF system. The hardware architecture is designed so that the X62 modem can be swapped for the Snapdragon X65 depending on operator needs. There is no word on pricing, but the OPPO 5G CPE T2 will be available in H2 2022 across the Middle East, Africa, Asia Pacific and Europe.