Toshiba Incorporating DNN Tech Into Visconti 5 Image Chip

Japanese technology giant Toshiba has developed a hardware platform incorporating deep neural network (DNN) technology to improve image processing in advanced driver assistance systems (ADAS).

The company will integrate the DNN hardware with conventional image processing technology, with sample shipments of the Visconti 5 image-recognition processor starting in September 2019.

Toshiba’s DNN technology uses an algorithm modeled on the neural networks of the human brain, and is expected to perform recognition processing as accurately as, or more accurately than, the human brain.

This algorithm will allow the system to identify different road traffic situations and a variety of obstacles and road markings from all directions and in near real time.

As it gathers more information, the DNN can offer more accurate detection and identification of a wider range of objects than image recognition based on conventional machine learning.

Autonomous driving increases a vehicle’s computational requirements by as much as 100 times compared with those of the most advanced human-driven cars.

This includes the need to process inputs from multiple cameras, radar and lidar sensors around the car, interpret the data and use it to make driving decisions.

In its announcement, Toshiba points out that vehicles are increasingly expected to come with ADAS features, and these features are also expected to offer increasingly advanced capabilities.

“This trend will increase the demand for more advanced and capable systems,” a company release announcing the Visconti 5 noted.

For instance, the 2020 version of the European New Car Assessment Programme (Euro NCAP), the EU-backed safety standard, adds tests of a vehicle’s ability to avoid collisions at intersections.

Deep learning is used to tackle a host of complex problems where a traditional programming approach won’t work. In the creation of autonomous vehicles (AVs), deep learning offers a solution for object recognition in images.
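
To illustrate the general idea, the short sketch below shows a minimal convolutional classifier for road-object recognition. It is a conceptual example only, assuming PyTorch and a hypothetical set of classes and input size; it does not represent Toshiba’s DNN hardware or the Visconti 5 implementation.

```python
# Minimal convolutional classifier sketch (PyTorch). The class list, layer
# sizes and 64x64 input are hypothetical, chosen only to illustrate how a DNN
# turns a camera image crop into object-class scores.
import torch
import torch.nn as nn

ROAD_CLASSES = ["pedestrian", "vehicle", "cyclist", "traffic_sign", "lane_marking"]

class TinyRoadNet(nn.Module):
    def __init__(self, num_classes: int = len(ROAD_CLASSES)):
        super().__init__()
        # Two convolution blocks extract visual features; a linear layer scores classes.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)  # assumes 64x64 RGB input

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Usage: one 64x64 RGB crop from a camera frame -> class scores.
logits = TinyRoadNet()(torch.randn(1, 3, 64, 64))
predicted = ROAD_CLASSES[logits.argmax(dim=1).item()]
```

A production ADAS network would be far deeper and trained on large annotated driving datasets, which is where dedicated acceleration hardware of the kind Toshiba describes comes in.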

A car equipped with ten high-resolution cameras generates 2 gigapixels of data per second, which takes 250 trillion operations per second to process, according to a report on onboard computing power in AVs published in October by chipmaker Nvidia.
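
As a back-of-the-envelope check, the snippet below works out what those figures imply per pixel and per camera; the per-camera split is an assumption for illustration and does not come from Nvidia’s report.

```python
# Rough arithmetic on the Nvidia figures quoted above.
cameras = 10
pixels_per_second = 2e9      # 2 gigapixels/s across all cameras
ops_per_second = 250e12      # 250 trillion operations per second

ops_per_pixel = ops_per_second / pixels_per_second          # ~125,000 ops per pixel
pixels_per_camera = pixels_per_second / cameras / 1e6       # ~200 megapixels/s per camera

print(f"{ops_per_pixel:,.0f} operations per pixel")
print(f"{pixels_per_camera:,.0f} megapixels per second per camera")
```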

Three years ago, Toshiba partnered with Japanese auto components supplier Denso to jointly develop an artificial intelligence technology called Deep Neural Network-Intellectual Property (DNN-IP) for use in ADAS technologies.

Image sensors and cameras are going to play a critical role in the development of self-driving vehicles.

Toshiba already offers the TMPV75 and TMPV76 series of image recognition processors, which integrate RISC-type Media Processing Engines (MPEs) to improve image data processing performance.

The processors are capable of processing input images from one to four cameras in real time, and allow connection of up to eight cameras.

Other chipmaking heavyweights such as ARM are moving into the autonomous vehicle space. In September the company unveiled the Cortex-A76AE (Automotive Enhanced), ARM’s CPU architecture specifically designed for self-driving vehicles.

For the Cortex-A76AE, ARM significantly redesigned the A76 platform to add a capability called Split-Lock for added safety. Put simply, Split-Lock allows two CPU cores to operate either in lock-step mode (both carrying out the same task) or in split mode (performing different tasks and applications).
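
As a rough software analogy of that idea, the sketch below runs a task either redundantly with a cross-check (“lock”) or as two independent tasks (“split”). It illustrates the concept only; the actual Cortex-A76AE feature is implemented in hardware, not in application code like this.

```python
# Conceptual analogy of Split-Lock, not ARM's hardware mechanism: "lock" runs
# the same task twice and flags any mismatch, "split" runs two tasks in parallel.
from concurrent.futures import ThreadPoolExecutor

def run_lockstep(task, arg):
    """Run the same task twice and flag any mismatch as a possible fault."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        a, b = pool.submit(task, arg), pool.submit(task, arg)
        result_a, result_b = a.result(), b.result()
    if result_a != result_b:
        raise RuntimeError("lock-step mismatch: possible fault detected")
    return result_a

def run_split(task_a, arg_a, task_b, arg_b):
    """Run two independent tasks in parallel for throughput."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        return pool.submit(task_a, arg_a).result(), pool.submit(task_b, arg_b).result()

# Usage: lock-step for a safety-critical computation, split for independent work.
safe_value = run_lockstep(lambda x: x * x, 12)
nav_result, cabin_result = run_split(sum, [1, 2, 3], len, "route")
```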

— Nathan Eddy is a filmmaker and freelance journalist based in Berlin. Follow him on Twitter.

