The Disrupters: Algocian sees AI spending time in the classroom

If autonomous cars are the future, they're going to need to be built with scalable technology that doesn't break the bank. For that to happen, automakers must either lower the price of the most expensive components or turn to cheaper alternatives.

Algocian, a computer vision artificial intelligence start-up based in Toronto, Canada, hopes to fulfil automakers’ needs by building a cost-effective solution. The company is working on an AI platform that works with any camera and runs on a $5 (£4) CPU. “It’s really a generic machine learning engine which can be trained to recognise any kind of object,” said its CEO Karim Ali. “Our engine can accept any kind of sensor input: camera, radar, LiDAR. We specify what kind of object and pattern to look for.”

Ali demonstrated the platform by connecting Algocian's algorithm to a camera using a Raspberry Pi. The camera identified people within its view and relayed that information to a computer monitor. The people were highlighted with green boxes as the algorithm tracked their movement, while other objects were ignored.
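The demo's flow can be sketched in a few lines of Python. Algocian's engine and its API are not public, so the detector below is a stub standing in for it, and the frame and bounding boxes are plain data structures rather than real camera input; what the sketch shows is the person-filtering step that drives the green boxes.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str    # object class the engine recognised
    box: tuple    # (x, y, width, height) in pixels

def stub_detector(frame):
    """Stand-in for the trained engine: returns fixed detections."""
    return [
        Detection("person", (40, 30, 80, 200)),
        Detection("dog",    (300, 150, 60, 50)),
    ]

def highlight_people(frame):
    """Keep only person detections, as the demo's green boxes did."""
    return [d.box for d in stub_detector(frame) if d.label == "person"]

print(highlight_people(frame=None))  # only the person's box survives
```

In the real demo this loop would run per camera frame on the Pi's quad-core Cortex-A53, with the surviving boxes drawn onto the video feed.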

Ali added: “This could power autonomous vehicles that need to see the world around them. If you look at what companies are doing today, they power all of their artificial intelligence essentially on supercomputers or very powerful GPUs that go for thousands of dollars and use hundreds of watts. What we have here is the equivalent for object detection running on 10 watts.” Only the CPU, a quad-core ARM Cortex-A53, is actually needed to power Algocian’s algorithm. Ali said that at high volumes the processor alone would retail for $5 or less.

Learning as you go

Deep learning is thought to be a key technology for autonomous development, primarily because it will allow automobiles to adapt. Ali, who describes his company as a deep learning solution, is very impressed by the potential of this emerging technology. "One of the great things about deep learning is the ability to throw a lot of data at your learning algorithm and it's able to handle it," he said. "In terms of recognising new objects, that's the beauty of machine learning. That's what really sets it apart from everything else out there. What you build is a generic engine that can take inputs and produce the output that you desire."

For example, Algocian would not have to change the software if it wanted to add car detection to its algorithm. The company would simply change the input learning to the algorithm and teach it to look for automobiles.
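The "generic engine" idea can be illustrated with a toy learner: the training code below never changes, and adding car detection is purely a data change. A nearest-centroid classifier is a deliberately tiny stand-in for a real deep-learning engine, and the feature vectors are invented for the example.

```python
def train(examples):
    """Generic learner: compute one centroid per label."""
    sums, counts = {}, {}
    for features, label in examples:
        s = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            s[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in s]
            for label, s in sums.items()}

def predict(model, features):
    """Return the label whose centroid is closest to the input."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(centroid, features))
    return min(model, key=lambda label: dist(model[label]))

# Original training set: people only.
people_only = [([1.0, 0.1], "person"), ([0.9, 0.2], "person")]

# Adding cars means adding labelled data -- the train() code is untouched.
with_cars = people_only + [([0.1, 1.0], "car"), ([0.2, 0.9], "car")]

model = train(with_cars)
print(predict(model, [0.15, 0.95]))  # car
```

The same separation holds for a real deep network: the architecture and training loop stay fixed while the labelled dataset grows to cover new object classes.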

There are limitations, however. The technology is not yet at the point where it can learn entirely on its own. Said Ali: “I think you’ll hear a lot of different opinions if you ask different experts on that. You’ll have the people who tell you everything is learnable with sufficient data. You hear that a lot from the community. I think it’s true but that is a huge assumption. How will you get every possible scenario out there, every possible category of objects?”

On the upside, it may not be necessary to build an algorithm that knows everything. “In order to have autonomous vehicles, you could probably get away with a little bit less than what humans can do,” Ali explained. “There are situations where the car can just stop and wait, try to get its senses.”

The car would still need to identify and differentiate between cars, motorcycles, pedestrians and other common objects. It would also need to properly read traffic lights. Said Ali: “I think you’re at a point where it’s human-like understanding, not quite human, and it’s enough to have the car be driverless.”

Interacting with the world

There have been countless discussions about the various ways autonomous vehicles will communicate with each other but very little has been said about how they will talk to pedestrians. What happens if a kid drops his ball and it bounces in the street? The car might know that it should stop but how will the kid know if and when it is safe to get the ball?

“The car needs to interact with the world,” said Ali. “It needs to tell the kid, ‘It’s safe, I’m not gonna drive.’ That, to me, is very important. And even more general, when you come to a crosswalk or the light is red and you’re a pedestrian, do you cross? What if the car starts moving? That thing needs to communicate to you, ‘It is safe to cross. I’m not gonna drive, I know you’re there.’ Just like a driver gives you a nod, just like a driver would look at the kid and say, ‘Go get your ball.’”

No Internet required

Finally, Ali weighed in on the debate surrounding autonomous cars and connectivity. “I think you have to be independent of the cloud,” he said. “You can use the cloud when and if possible but it can’t be relying on an Internet connection to be driverless. It’s just not possible. Maybe in 40 or 50 years. Today the infrastructure is not there.”

