Deep Learning Set to Ignite the Machine Vision Market to $193B by 2023



Deep Learning (DL) techniques are taking machine vision systems to the next level in terms of their technical capabilities, driving mass adoption of the technology across several industries, including the automotive, retail, consumer, industrial, and surveillance sectors. DL-based machine vision marks a departure from earlier approaches used in the machine vision sector, which were far more limited in application. On the back of this growth in adoption, ABI Research, a market-foresight advisory firm providing strategic guidance on the most compelling technologies, forecasts that machine vision technology will see a compound annual growth rate (CAGR) of 53% between 2018 and 2023, with US$193.8 billion of direct annual revenue generated from machine vision services and hardware by the end of the forecast period.
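As a quick sanity check on those figures, the implied starting point of the forecast can be back-calculated from the 53% CAGR and the US$193.8 billion 2023 figure; the short Python sketch below does exactly that. The 2018 base value is not stated in the release, so the result is an inference rather than an ABI Research number.

# Back-of-envelope check of ABI Research's forecast figures.
# Inputs come from the release; the 2018 base is inferred, not quoted.
cagr = 0.53                    # 53% compound annual growth rate
revenue_2023 = 193.8           # US$ billions, end of forecast period
years = 2023 - 2018            # five growth periods

implied_2018_base = revenue_2023 / (1 + cagr) ** years
print(f"Implied 2018 machine vision revenue: ~US${implied_2018_base:.1f}B")  # ~US$23.1B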

Machine vision vendors previously relied on hardcoded feature detection techniques, which meant their systems could only be applied in highly controlled environments, such as inspecting a single type of object on a production line. DL-based machine vision systems are far more flexible: one system can recognize many object types and be deployed in a wide range of circumstances. One example is cashier-less stores such as Amazon Go, where cameras track the movement of both customers and items around the store. Another is the machine vision systems being employed to support autonomous driving, which can distinguish between multiple types of road users. “It is these new DL-based applications among others that are set to drive growth in the machine vision space, which would have been impossible using traditional machine vision techniques,” says Jack Vernon, Industry Analyst at ABI Research.
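To make that contrast concrete, here is a minimal Python sketch of DL-based recognition: a single pretrained network scoring an image against roughly a thousand object categories rather than one hardcoded target. torchvision's ResNet-50 and the file name "frame.jpg" are illustrative stand-ins; the article names no specific model or framework.

# Minimal sketch: one pretrained network covering ~1,000 object categories,
# in contrast to a hardcoded single-object detector.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("frame.jpg")          # hypothetical camera frame
batch = preprocess(image).unsqueeze(0)   # shape: (1, 3, 224, 224)

with torch.no_grad():
    probabilities = torch.softmax(model(batch)[0], dim=0)

top5 = torch.topk(probabilities, 5)      # five most likely object classes
print(top5.indices.tolist(), top5.values.tolist())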

“If we look at some of the applications driving increased adoption of machine vision systems, we see that it is innovations in deep learning that are fueling their growth. Take, for instance, advanced driver assistance systems (ADAS), a core technology in autonomous driving. By 2023, some 37.831 million vehicles shipped will contain Level 2 to Level 5 ADAS. Over half of the 34.446 million Level 2 ADAS systems shipped in that year will use DL-based machine vision, while the remaining Level 3-5 vehicles will all use the approach. This represents massive growth in the adoption of machine vision technology and will contribute enormously to the growth in direct revenues from machine vision,” says Vernon.
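For reference, the shipment figures Vernon cites can be combined into a rough lower bound on DL-based ADAS vision shipments in 2023. The short calculation below treats "over half" literally as a 50% floor, which is an assumption rather than an ABI Research estimate.

# Rough breakdown of the ADAS figures cited above (millions of vehicles
# shipped in 2023, per ABI Research; the 50% floor is an assumption).
total_l2_to_l5 = 37.831
level_2 = 34.446
level_3_to_5 = total_l2_to_l5 - level_2              # ~3.385M, all DL-based

dl_based_lower_bound = 0.5 * level_2 + level_3_to_5
print(f"Level 3-5 shipments: {level_3_to_5:.3f}M")
print(f"DL-based machine vision ADAS shipments: > {dl_based_lower_bound:.1f}M")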

The same DL-based image recognition techniques used in machine vision are also being applied to sensors beyond traditional RGB (primary color) cameras. These will have a transformative effect in those markets and are likely to significantly increase adoption of those technologies. For instance, LiDAR systems will be incorporated into autonomous driving systems on the back of the fact that deep learning enables machines to interpret LiDAR data in a more sophisticated way, allowing software to identify features of the landscape and other road users.
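As an illustration of what interpreting LiDAR data with deep learning can look like, here is a compact, hypothetical PointNet-style classifier in PyTorch that maps a raw point cloud to road-user classes. The architecture, layer sizes, and class list are assumptions for demonstration and do not describe any particular vendor's system.

# Illustrative sketch (not a production ADAS component): a PointNet-style
# classifier mapping a raw LiDAR point cloud to road-user classes.
import torch
import torch.nn as nn

class TinyPointNet(nn.Module):
    def __init__(self, num_classes: int = 4):  # e.g. car, pedestrian, cyclist, background
        super().__init__()
        # Shared per-point MLP, then an order-invariant max-pool over the cloud.
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 256), nn.ReLU(),
        )
        self.classifier = nn.Sequential(
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (batch, num_points, 3) raw x/y/z returns from the sensor
        features = self.point_mlp(points)             # (batch, num_points, 256)
        global_feature = features.max(dim=1).values   # (batch, 256)
        return self.classifier(global_feature)        # per-cloud class logits

cloud = torch.randn(1, 2048, 3)   # stand-in for one LiDAR sweep
logits = TinyPointNet()(cloud)
print(logits.shape)               # torch.Size([1, 4])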

DL-based image recognition techniques are also going to change how many different sensor systems are used. In the healthcare space, a number of startups and large research entities are building DL-based image recognition software that can identify health issues directly from MRI, radar, and X-ray data. These examples demonstrate how DL-based machine vision techniques are transforming not only the growth of RGB camera systems but also how many other sensors will be used in the future.
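A hedged sketch of that idea for a single modality: the same convolutional building blocks applied to single-channel imagery such as a chest X-ray, producing a simple finding/no-finding score. Layer sizes and the two-class setup are illustrative assumptions, not a description of any specific product.

# Toy example: a small CNN over single-channel medical imagery.
import torch
import torch.nn as nn

xray_classifier = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2),                 # finding vs. no finding
)

scan = torch.randn(1, 1, 512, 512)    # stand-in for one grayscale X-ray
print(xray_classifier(scan).shape)    # torch.Size([1, 2])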

There are, however, several hurdles to negotiate if the industry is to reach the potential outlined above. Few companies have fully settled on their favored hardware and software technology for machine vision applications across different verticals, creating opportunities and competition for many vendors in both spaces. Consequently, established players and disruptive startups are competing aggressively across the technology stack as potential customers for their technology chase high-value applications like autonomous driving. The scale of the opportunity has attracted significant capital to machine vision over the past four years, a trend that looks set to continue for at least another two years. In 2017 alone, venture capitalists invested US$2.7 billion in machine vision startups, underscoring the strong attractiveness of the area.

About ABI Research

ABI Research provides strategic guidance for visionaries needing market foresight on the most compelling transformative technologies, which reshape workforces, identify holes in a market, create new business models and drive new revenue streams. ABI’s own research visionaries take stances early on those technologies, publishing groundbreaking studies often years ahead of other technology advisory firms. ABI analysts deliver their conclusions and recommendations in easily and quickly absorbed formats to ensure proper context. Our analysts strategically guide visionaries to take action now and inspire their business to realize a bigger picture. 


