Oxbotica Shows Speed of Recognition Required by AVs

Driverless vehicles will need to detect at least 150 other cars per second to cope with inner-city traffic conditions.

This is one of the findings from UK autonomous vehicle software provider Oxbotica in a study using its autonomous technology on the streets of London. Its technology can also detect changes in traffic stop lights in 1/2,000th of a second – faster than the human eye, according to the company, which is the commercial wing of the Oxford scientists behind the Mars Rover vehicles.
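To put those figures in perspective, here is a rough back-of-the-envelope sketch of what they imply as per-frame processing budgets. This is not Oxbotica's code or benchmark; the 30 fps camera frame rate is an illustrative assumption, while the detection figures are those reported above.

```python
# Rough budgets implied by the reported figures.
# Assumption (not from Oxbotica): a hypothetical camera running at 30 fps.

CARS_PER_SECOND = 150                 # reported detection requirement for inner-city traffic
LIGHT_CHANGE_DETECTION_S = 1 / 2000   # reported traffic-light change detection time
FRAME_RATE_HZ = 30                    # assumed camera frame rate (illustrative only)

cars_per_frame = CARS_PER_SECOND / FRAME_RATE_HZ
frame_budget_ms = 1000 / FRAME_RATE_HZ

print(f"Vehicle detections needed per frame at {FRAME_RATE_HZ} fps: {cars_per_frame:.1f}")
print(f"Processing budget per frame: {frame_budget_ms:.1f} ms")
print(f"Reported traffic-light change detection: {LIGHT_CHANGE_DETECTION_S * 1000:.1f} ms")
# Human reaction to a light change is typically on the order of 200 ms,
# which is why the 0.5 ms figure is described as faster than the human eye.
```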

It claims to be running its Universal Autonomy software system in cities, mines, airports, quarries and ports. The company says its software can run on everyday computer hardware with processing power similar to that of an average desktop PC.

The company is trialing five autonomous vehicles in London as part of the DRIVEN consortium, a £13.6M ($16.9M) research project that seeks to address real-world challenges facing self-driving vehicles such as insurance, cyber-security and data privacy. The project builds on Oxbotica’s initial trials in the Borough of Hounslow in December 2018, with the consortium judging the capital an ideal testing ground owing to its historic infrastructure and complex road networks. London is ranked the sixth most congested city in the world and records more than 30,000 road casualties annually.

The company uses its machine learning algorithms and vision perception technology in existing autonomous driving trials, including a driverless car navigating the narrow streets of Oxford and a truck working a mine in Northern Australia.

Paul Newman, Oxbotica founder, said: “As humans, we get better at driving the more experience we have but we don’t share our learnings with each other. This is the covenant for autonomous vehicles. They learn as a community in a way that we don’t. If we humans have a mishap or see something extraordinary, we aren’t guaranteed to make our neighbor or colleague a better driver.

“Even if we could learn from each other like computers can, we can’t share at scale, across vast numbers and we can’t do it all the time. That’s what our AI software will do for every host vehicle wherever it is in the world. Providing life-long shared learning and, with it, in-depth and continually improved knowledge of the local area allowing our cars to not just read the roads but to predict common hazards with ever greater sophistication.”

— Paul Myles is a seasoned automotive journalist based in London. Follow him on Twitter @Paulmyles_
