Samsung’s recently launched Galaxy S20 series reveals several key camera trends we are likely to see for premium smartphones this year. They include large image sensors, powerful optical zoom functionality, 108MP sensors and advanced camera array setups.
Big on image sensors
Image sensors are semiconductors that convert light coming through a lens into a digital signal. The sensor is the main element determining image quality and pixel count, so its size is critical to how much detail can be captured in a given shot.
The Galaxy S20 Ultra has a 1/1.33-inch image sensor, larger than those in many compact cameras (typically 1/2.00-inch – 1/1.70-inch), and we expect competition here to intensify. In the short term, Chinese brands like Huawei, Xiaomi, and OPPO will continue to introduce larger sensors. The Huawei P40 Pro Plus already has a 1/1.28-inch sensor but is not available outside China.
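The "1/X-inch" designations above are legacy optical-format labels rather than physical measurements. A common industry rule of thumb, inherited from vidicon tube sizing, puts the usable sensor diagonal at roughly two-thirds of the nominal figure. The sketch below uses that 2/3 factor and a 4:3 aspect ratio as illustrative assumptions (not exact sensor specifications) to compare the sensor areas mentioned in this article:

```python
# Rough comparison of smartphone image-sensor sizes from their optical-format
# designations (e.g. 1/1.33"). The 2/3 diagonal factor and 4:3 aspect ratio
# are rule-of-thumb assumptions for illustration, not exact specs.
import math

def sensor_dimensions(format_inches: float, aspect=(4, 3)):
    """Approximate (width_mm, height_mm, area_mm2) for a given optical format."""
    diagonal_mm = format_inches * 25.4 * (2 / 3)   # 2/3 rule of thumb
    w, h = aspect
    scale = diagonal_mm / math.hypot(w, h)
    width, height = w * scale, h * scale
    return width, height, width * height

s20_ultra = sensor_dimensions(1 / 1.33)      # Galaxy S20 Ultra, 1/1.33"
p40_pro_plus = sensor_dimensions(1 / 1.28)   # Huawei P40 Pro Plus, 1/1.28"
typical_compact = sensor_dimensions(1 / 2.0)  # low end of the compact range

print(f"S20 Ultra area:   {s20_ultra[2]:.1f} mm^2")
print(f"P40 Pro+ area:    {p40_pro_plus[2]:.1f} mm^2")
print(f"1/2.0-inch area:  {typical_compact[2]:.1f} mm^2")
```

Because light-gathering area scales with the square of the diagonal, a seemingly small jump in format label translates into a much larger sensor: under these assumptions the 1/1.33-inch sensor collects well over twice the light of a 1/2.0-inch one.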
Image sensor size – a rising trend
Hybrid optical zoom: the emerging battlefield
Digital zoom has typically been used in smartphones to enlarge images, but at the cost of image quality. To overcome this, optical zoom features using multiple lenses have increasingly been introduced since 2016. The Galaxy S20 is the culmination of Samsung’s efforts here, and its 103mm telephoto lens now rivals the focal lengths of telephoto lenses found on DSLRs (100mm-300mm).
Optical zoom by key flagship model
The Galaxy S20 Ultra uses a combination of multiple lenses (referred to as a folded or ‘periscope’ lens) to offer 4X optical zoom, then deploys a “hybrid-optic” 5X-10X zoom function with lossless image quality by combining sensor cropping and pixel-binning technology with AI multi-frame processing. We recently put the Galaxy S20 Ultra’s cameras through their paces and the results are impressive.
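The cropping-and-binning idea can be illustrated with a toy example: a very high-resolution sensor averages blocks of pixels into one for normal shots (binning, which improves low-light performance), or reads out only its central region at native resolution to zoom without interpolation (cropping). This is a deliberate simplification, not Samsung's actual pipeline, which additionally layers AI multi-frame processing on top:

```python
# Toy sketch of the sensor-crop + pixel-binning idea behind "hybrid optic"
# zoom. Illustrative only: real pipelines work on Bayer data and add
# AI multi-frame processing, which is omitted here.
import numpy as np

def bin_pixels(sensor: np.ndarray, factor: int = 3) -> np.ndarray:
    """Average each factor x factor block into one output pixel."""
    h, w = sensor.shape
    return sensor[:h - h % factor, :w - w % factor] \
        .reshape(h // factor, factor, w // factor, factor) \
        .mean(axis=(1, 3))

def crop_zoom(sensor: np.ndarray, zoom: float) -> np.ndarray:
    """Take the central 1/zoom crop at native resolution (lossless zoom)."""
    h, w = sensor.shape
    ch, cw = int(h / zoom), int(w / zoom)
    top, left = (h - ch) // 2, (w - cw) // 2
    return sensor[top:top + ch, left:left + cw]

rng = np.random.default_rng(0)
raw = rng.random((1080, 1440))   # stand-in for a high-resolution sensor readout
binned = bin_pixels(raw)         # 360 x 480: brighter, lower-noise output
zoom2x = crop_zoom(raw, 2.0)     # 540 x 720: central crop, no upscaling
print(binned.shape, zoom2x.shape)
```

The trade-off is visible in the shapes: binning sacrifices resolution for light sensitivity, while crop-zoom sacrifices field of view for magnification without the interpolation losses of digital zoom.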
This year, we expect competition in high-resolution optical or hybrid optical zoom with lossless image quality to intensify, with broad adoption of periscope lenses across many flagship models. To enhance lossless zoom, we also expect OEMs to continue with multi-sensor (four to five camera) implementations.
Linear camera lens array setup
A camera’s lens setup has a significant impact on overall smartphone design. There are various array setups including circular, square, triangle, and linear, with the latter two being most common.
Main camera lens array setups
Triangle arrays enable quicker image processing as each lens is equidistant from the subject, requiring less processing as the camera switches from lens to lens. Linear arrays provide faster autofocus by measuring distance from the differences between images captured by two or more lenses.
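The distance measurement behind that autofocus method is classic stereo triangulation: two lenses a known baseline apart see the subject shifted by a disparity, and depth follows by similar triangles. The focal length and baseline values below are illustrative assumptions, not the specifications of any particular phone:

```python
# Sketch of distance-from-disparity, the principle that lets a linear camera
# array range-find for autofocus. Pinhole stereo model: Z = f * B / d.
# Focal length and baseline values are illustrative, not real phone specs.

def depth_from_disparity(focal_px: float, baseline_mm: float,
                         disparity_px: float) -> float:
    """Subject distance in mm from the measured pixel disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive (subject at infinity?)")
    return focal_px * baseline_mm / disparity_px

# Example: 2800 px focal length, lenses 10 mm apart, 28 px disparity
# -> subject roughly 1 metre away.
z_mm = depth_from_disparity(2800, 10.0, 28.0)
print(f"estimated subject distance: {z_mm / 1000:.2f} m")
```

Note that depth resolution degrades with distance (disparity shrinks toward zero), which is why such passive ranging works best for nearby subjects; halving the disparity in the example doubles the estimated distance.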
With the increasing adoption of ‘periscope lenses’ in premium devices, we expect linear arrays to dominate as they are necessary components of the periscope-based camera platform.
Rising camera module BOM costs
With the camera being one of the most competitive elements of premium smartphones, camera component costs have been rising continuously and we expect their share of the BOM to increase in 2020. The Galaxy S20 Ultra has one of the costliest camera setups in a smartphone: at 21% of the total BOM, it is second only to the system-on-chip (Snapdragon 865 or Exynos 990) together with the 5G baseband (X55 5G or Exynos 5G 5123).
The ability of an OEM to control production costs for camera modules will play a critical role in securing competitive advantage, especially in the 2020 environment where consumer price sensitivity is likely to be higher than normal.
Ratio of camera module costs within the smartphone BOM continues to rise
Hybrid optical zoom to see wider adoption
2020 started with the Galaxy S20 series offering hybrid optical zoom capabilities, and the same can be expected from the flagship Galaxy Note series later this year. The feature could also make its way to some A-series smartphones. Besides Samsung, Huawei with its P40 Pro series and OPPO with the Find X2 Pro have also introduced hybrid zoom features. We expect more OEMs to embrace periscope-style lenses to include hybrid optical zoom capabilities in upcoming premium smartphones.
Recent developments in autonomous vehicles (AVs) have been encouraging, with numerous tests demonstrating safety and reliability improvements. Tesla recently made bold claims, saying its cars would be capable of Level 4/5 autonomy in 2020. How realistic is this, and by when can we expect a significant rollout of autonomous cars?
Counterpoint Research predicts that, with the resolution of both the technological and regulatory issues, around 15% of new cars sold in 2030 will be fully autonomous (Level 4 to 5).
To make the vision of AVs a reality, OEMs, suppliers, and start-ups are applying cutting-edge technology to solve some of the biggest problems in computing, engineering, software development, and algorithm design today. However, it is regulatory and legal hurdles, rather than technological issues, that will prove to be the biggest barriers for self-driving technology.
Exhibit 1: ADAS Evolution in Automobile
Advanced Driver Assistance Systems (ADAS) are progressively demonstrating the reality of vehicles taking over control from drivers, and play the crucial role of preparing regulators, consumers, and corporations for the possibilities that lie ahead.
ADAS introduction has demonstrated that the challenges holding back adoption of AVs are consumer awareness, pricing, and most significantly, issues in privacy, safety, and security.
Safety remains the critical and overriding component of ADAS technology. ADAS uses both visual and aural warnings if it suspects an accident is imminent. Advanced versions can now actively steer the vehicle away or apply the brakes if drivers ignore the warnings. Driver assistance technologies available today include adaptive cruise control, lane-departure warning systems, autonomous emergency braking systems, and parking-assistance systems. Some vehicles also offer the option of blind-spot monitoring and rear cross-traffic alerts to supplement the other assistance systems.
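The escalation from warning to intervention described above is commonly governed by a time-to-collision (TTC) calculation: the system warns when the gap to the vehicle ahead is closing too fast, and brakes when a collision is imminent. The sketch below is a minimal illustration of that logic; the thresholds are illustrative assumptions, not values from any production ADAS stack:

```python
# Minimal sketch of the time-to-collision (TTC) logic behind forward-collision
# warning and autonomous emergency braking. Thresholds are illustrative
# assumptions, not from any production system.

def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact at constant closing speed; inf if gap is opening."""
    if closing_speed_mps <= 0:
        return float("inf")
    return gap_m / closing_speed_mps

def adas_action(gap_m: float, closing_speed_mps: float) -> str:
    ttc = time_to_collision(gap_m, closing_speed_mps)
    if ttc < 1.5:
        return "brake"    # autonomous emergency braking
    if ttc < 3.0:
        return "warn"     # visual and aural alert
    return "monitor"

print(adas_action(gap_m=40.0, closing_speed_mps=5.0))   # 8 s of margin -> monitor
print(adas_action(gap_m=20.0, closing_speed_mps=10.0))  # 2 s -> warn
print(adas_action(gap_m=10.0, closing_speed_mps=10.0))  # 1 s -> brake
```

Real systems fuse radar, camera, and sometimes lidar measurements and model driver reaction time and braking dynamics rather than a single constant-speed threshold, but the warn-then-intervene staging follows this same pattern.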
As technology evolves and becomes more cost-effective, design engineers are constantly updating their autonomous systems to include new and different types of sensors. Further adding to the complexity is the role cloud computing will eventually play as 5G radio technology is rolled out and finally offers the bandwidth requirements needed for the massive data streams coming off sensor systems.
Exhibit 2: Increasing Focus on Autonomous Vehicles and Safety
Regulatory hurdles holding back autonomous cars from becoming an everyday reality
However, the transition from human-driven to autonomous cars will not be seamless, as it remains unclear how AVs fit into existing legal and regulatory frameworks around the world. Debates on how exactly the laws should, and will, handle the introduction of autonomous vehicles reach differing and often contradictory conclusions. With existing legal frameworks proving inadequate, regulatory changes are urgently needed to address the variety of barriers preventing the successful introduction of autonomous vehicles. As commercially available self-driving cars become more imminent, concerns about how the law, specifically tort law, will treat liability for autonomous vehicles have risen considerably.
Autonomous vehicles give rise to new liability and ethical issues
Assigning negligence forms the legal basis for liability in road accidents. Car owners, or the driver, are in the first instance liable for losses arising from accidents caused by their vehicles. Consequently, car owners are required to have, at a minimum, third-party liability insurance. Where an accident results from a fault or defect in the car, car owners or drivers will then look to the vehicle manufacturer, or any of the component or service providers, to recover their losses. States impose strict liability on producers of defective products for harm caused by those products. Inevitably, the introduction of autonomous vehicles adds another layer of complexity to attributing liability for car accidents. For example, if you are operating a self-driving car and chose not to override it before an incident, does the negligence lie with you as the driver, the AI/software developer, the vehicle manufacturer, or the component supplier?
The problem with autonomy in cars is that drivers tend to over-rely on it. Tesla’s Autopilot, for example, does not have Level 3 autonomy, but Level 2, at best. The self-drive capability difference between Level 3 – when the car can take full control under certain circumstances – and Level 2 is significant. Teslas do not have particularly sophisticated sensors, and fatal crashes have already demonstrated that fact. At Level 3 autonomy, the driver is required to remain ready to take over control at a moment’s notice. Back in 2012, Google tested Level 3 autonomy but found drivers were too trusting, and so decided not to take Level 3 to market at all, preferring instead to leapfrog towards developing fully autonomous Level 5 vehicles, where no steering wheel or any other input is required. There appears to be an emerging consensus that Level 3 autonomy is a bad idea altogether. This means OEMs need to advance from Level 2, something that most leading OEMs have achieved, to at least Level 4. The technological challenges involved in such a jump are akin to progressing directly from powered flight to landing a man on the Moon.
So what does ‘Auto’ stand for again?
Tesla is making bold claims about its autonomous vehicle plans. At an investor event last month, Elon Musk revealed technical details of a new chip and computer for full self-driving capabilities that are already being built into Tesla cars. This is a key part of Tesla’s strategy to make autonomous cars mainstream. The company claims that the new chip will clear the way (subject to receiving regulatory approvals) to improve its software and neural networks to effectively operate its cars as fully autonomous vehicles. In such vehicles, which would be out as early as 2020, drivers would not need to touch the wheel. While it has not always been clear what Musk means when he refers to full self-driving, it is apparent that Tesla does not apply the standard definition of Level 4 or Level 5 autonomy.
Adding further to the debate is the first-ever incident of a self-driving Uber car killing a pedestrian, in Tempe, Arizona, in March 2018. The simple, mundane, everyday situation of a crosswalk, turn, or intersection now presents a much harder and broader ethical predicament: how should a car decide between the lives of its passengers and the lives of pedestrians?
Uncertain timeline to having fully autonomous vehicles
The AV adoption rate largely depends on resolving regulatory issues, as well as changing consumer opinions and overcoming significant technological and economic barriers. It also depends on what we mean by ‘autonomous’ – are we referring to entirely autonomous systems, or systems that demonstrate some level of autonomy along with some degree of human intervention?
The chief differentiator between driver-assisted and fully autonomous systems is the shift in focus away from safety assistance and towards freeing up the driver’s attention and time.
Innovations in autonomous systems’ capabilities will develop quickly. Autonomy in limited, predictable environments, such as freeway driving, will likely be widely available within the next five years. TuSimple, which provides trucks that self-drive long-distance freeway routes, is almost ready for commercial launch, though a human remains in the cab ready to take control.
Dual control is emerging as the interim half-way point between ADAS and full autonomy – cars that blend a degree of autonomous control with human driver control. It is this degree of autonomous control that will gradually shift. The notion of a fully autonomous car, where the driver is hands- and attention-free for the entire journey, is, in our view, much further away.
How far ahead are autonomous vehicles?
Human drivers demonstrate decision-making that is still a long way ahead of an AV’s current abilities. Human drivers process and react to varying information quickly, making rapid decisions based on experience, judgment, and ethics. Further, humans cautiously negotiate roads occupied by other similarly unpredictable human drivers and improvise when confronted with unique situations. Current AVs may perform some of these tasks faster and more consistently, but none can yet compare with a human across all situations.
In early 2018, GM introduced the Cruise AV – an autonomous hatchback based on the Chevrolet Bolt EV – drawing significant attention with its absence of a steering wheel and pedals. While GM has not revealed any plans for a production run, it has been petitioning the US government for permission to test the model on public roads in 2019. Validating the significant role such autonomous cars can play in a future Mobility as a Service (MaaS) automotive market, Japan’s SoftBank Vision Fund, a leading global tech investor, invested US$2.2 billion in May 2018 for a 19.6% stake in GM’s autonomous driving business.
Exhibit 3: Key Firms With Permits to Test Self Driving Cars in California
The other key technology enabler of full autonomy will be the development of comprehensive data networks, comprising edge-based and cloud-based infrastructure. This will require the automotive industry to work closely with other sectors, such as the IT and telecoms industries, as well as public and industry policymakers, reaching agreements on a range of important issues, such as data center design and locations, to enable vehicles to communicate reliably and securely with their local environment.
On the face of it, while the future of AVs looks bright, the automotive industry is still a long way from manufacturing vehicles that can self-drive anywhere and everywhere under all conditions.