Grey wave behind the direction of ADAS development: Part I

With the intention of fostering transparency and greater open cooperation throughout the automotive industry, Toyota established its Collaborative Safety Research Center (CSRC) in 2011 and, with $50M (£32.5M) in hand, immediately began looking for universities, research hospitals, federal agencies and other institutions interested in partnering on research projects.

According to its director, Chuck Gulash, the idea was for CSRC to act as a catalyst for sharing knowledge and for establishing communities of scientists, doctors, engineers and others interested in solving automotive safety questions together, with the work they did benefiting everyone. “You don't have to be in a Toyota vehicle to benefit from the work we're doing with our partners,” says Gulash, who will be speaking at the forthcoming TU-Automotive conference Active Safety: ADAS to Autonomous 2015.

In CSRC's first five years of operation, it has sponsored and funded 34 automotive safety research projects with 17 partner institutions. Much of the work has focused on aspects of active safety: accident prevention and mitigation, pedestrian protection and standards development. “We see it taking us more and more into the automated vehicle realm,” says Gulash. “We plan every step of the way, learning and applying to future active systems. We take each stage as a learning process.”

If the idea is making the world safe for ADAS, then automotive safety is a good place to start, because if there is a single core value shared by everyone in the connected-car space, it is that safety is non-negotiable. Automotive safety is also all the justification needed to make the case for the connected car. Connected cars are safer, by whole orders of magnitude, for the driver, the passengers, other drivers and their passengers, along with all the bike riders, pedestrians, kids with skateboards and everyone else out there on the Great Global Highway. Smart cars improve driving, reduce accidents and make motor vehicle operation more efficient, cost-effective and enjoyable.

The funny thing is that while everybody believes these tenets with a kind of ferocious sincerity, right after it comes the split, the Great Schism, if you will, that divides everybody in the business into two distinct, seemingly irreconcilable camps. In one, you have the people who develop systems designed to maximise driver awareness and engagement, while the other camp develops systems that assist the driver to the point of taking over, ultimately removing them from the operational loop.

So does this mean this Great Schism will soon lead to the carmakers and Tier One manufacturers murdering each other in their beds? Actually, no, or at least not anytime in the near term. Right now both groups are so busy working with each other that few, if any, are even aware a divide exists. The reason they are so busy, of course, is that the great post-war car-buying demographic is now going from their fifties into their sixties. They still think of themselves as young and want to continue driving like they did back in the 1970s and ’80s, only they now recognise that their brains and reaction times are no longer what they were and they welcome what the burgeoning driver assist technologies can offer to keep them driving right up near the top of their game.

It's an exciting time we're in, because with ADAS soon to be a standard feature among even mid-range vehicles, there is suddenly an intense need to come up with fresh, fundamental understandings of the way drivers' brains process information. Only a few years ago, such an understanding wasn't really necessary because drivers didn't have so many things to deal with as they drove. There was the road ahead, there were two or three rear-view mirrors, and a couple of gauges on the dashboard that only told you something critical when the red oil-pressure light came on. Drivers had to be aware and engaged but they didn't have to deal with information overload. With the connected car, they suddenly have information coming at them, seemingly from all directions.

Now, it's gotten serious. OEMs and TSPs all need to re-examine the standing assumptions and all the little things which, until now, it has been safe to overlook, find out where the possible flaws are, and correct them before there's a problem. The goal is not just to reduce driver distraction but to figure out ways to maximise the brain's ability to take in information and understand it. Suddenly, we have both the technology and the intense curiosity necessary to accomplish great things in the field of driver cognition, and it is leading scientists and engineers to examine areas that had never occurred to anyone to examine before.

Improving driver awareness by reducing distraction

Let's start with distraction. It is one of the areas that CSRC partner institution MIT AgeLab is intensely interested in, and it is AgeLab researcher Bruce Mehler's specialty. More specifically, his area is 'cognitive loading', or how much information a driver's brain can handle while still remaining engaged with the road. Mehler studies this by putting drivers behind the wheel of an advanced driving simulator called the AwareCar, which is rigged to monitor their heartbeat, eye movement, sweat-gland activity and other vital signs while they perform various tasks as they drive. These tasks can include using the brakes and changing radio stations, both manually and by voice command. Another test has the drivers repeating back a series of numbers they'd heard read out on the car's radio. Among his findings, Mehler learned that drivers using voice-command interfaces to control in-car navigation or USB-connected music devices spent longer with their eyes off the road than they would using conventional systems.

“Voice-activated systems in newer radios might seem to offer advantages over tuning older radios but problems arise when the system requires clarification,” says Mehler. “A user could say, 'I want to go to Boylston Street,' but the system will find Boylston Street, Boylston Avenue, and Boylston Place, and then repeat the options to the driver and ask them to select one. I could use voice and answer it and keep my eyes on the road, but now I'm keeping a fair amount of information in my head and that could create a level of cognitive demand which makes it harder to attend to my driving. So while I may be looking at the road, I may not be noticing the things on the road that I should.”

Mehler also recently completed a study of frequent and infrequent cell-phone users and says he made an interesting discovery in the process. “People who report using cell phones frequently, even when they don't have one in their hands, tend to drive faster, switch lanes twice as often and spend more time in the far-left lane. They are also more likely to slam on their brakes,” says Mehler.

Too much visual demand can compromise safety. With ADAS on the verge of becoming commonplace in cars, unless the informational inputs flashing at the driver are kept to a minimum, the active safety systems might end up being relied on more than initially intended. NHTSA recommends that smart-car visual interfaces be designed so that they don't require drivers to take their eyes off the road for more than two seconds at a time, or twelve seconds in total for an interaction. Two seconds also happens to be the time it takes a car travelling at normal highway speed to cover half the length of a football field. With this in mind, it's vital that driver interactions with visual displays be carried out correctly in the absolute minimum amount of time. Under these circumstances, shaving off milliseconds makes total sense, because milliseconds add up and keep awareness from eroding.
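The football-field comparison is easy to check with back-of-the-envelope arithmetic. A minimal sketch, assuming a typical US highway speed of 55 mph and a 300 ft (100 yd) field (both figures are illustrative assumptions, not part of the NHTSA guidance):

```python
# Illustrative arithmetic only: how far a car travels while the
# driver's eyes are off the road for a two-second glance.

def glance_distance_ft(speed_mph: float, glance_s: float) -> float:
    """Distance travelled (in feet) during an off-road glance."""
    feet_per_second = speed_mph * 5280 / 3600  # convert mph to ft/s
    return feet_per_second * glance_s

# At ~55 mph, a 2-second glance covers roughly 161 ft,
# a little over half of a 300 ft football field.
print(f"{glance_distance_ft(55, 2.0):.0f} ft")
```

Run at 60 mph instead, the same two-second glance covers 176 ft, which is why small reductions in glance time matter more the faster traffic moves.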

Mehler and his AgeLab colleague, Bryan Reimer, have been researching ways of optimising driver-vehicle interfaces and have made some surprising discoveries about how something as overlooked as the font used in visual displays can make a big difference to driver distraction.

Fonts and text styling have always been part and parcel of a manufacturer's branding. The idea was always for the user to associate it, subliminally or otherwise, with the brand. It never occurred to anyone that font design might play any actual part in overloading the driver's brain. Even a few years ago, it really didn't matter that much. Now it does. Optimising typeface characteristics is a simple, effective way of providing a significant reduction in interface demand and associated distraction.

“We looked at the Eurostile 'grotesque' and the 'humanist' fonts, two typeface styles commonly used throughout the auto industry,” says Reimer.

“Car infotainment systems use a mix of font styles, chosen for no apparent particular reason,” says Mehler. “Nobody had bothered to study their effect until now. In both our studies the humanist font significantly reduced the time that participants glanced away from the road, cutting glance time by 10.6%.”

“It used to not matter so much, since controls were all on fixed, hard buttons,” adds Reimer. “But now touchscreens and LCD displays are all over the dashboard, making it more of a key issue. If we can gain a 10% improvement by changing font characteristics, another 5% from background, and a few more from colour, that can all add up to a sizeable effect.”
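A note on how those figures "add up": independent reductions of this kind compound multiplicatively rather than summing outright, so the combined effect is slightly smaller than the naive total. A minimal sketch, treating the 10%, 5% and "a few more" percent quoted above as independent factors (that independence, and the 3% figure for colour, are assumptions for illustration):

```python
# Illustrative sketch: combining several independent glance-time
# reductions. Each cut applies to the time remaining after the
# previous one, so the overall effect is slightly below the plain sum.

def combined_reduction(*reductions: float) -> float:
    """Overall fractional reduction from applying each cut in turn."""
    remaining = 1.0
    for r in reductions:
        remaining *= (1.0 - r)
    return 1.0 - remaining

# Font (10%), background (5%), colour (assumed 3%):
total = combined_reduction(0.10, 0.05, 0.03)
print(f"{total:.1%}")  # ~17.1%, versus the naive 18% sum
```

Either way, the point Reimer makes stands: several small interface tweaks together yield a sizeable cut in time spent looking away from the road.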

“Eurostile is very popular in the automotive industry because it conveys power and energy,” says Steve Matteson, a typeface expert participating in the project. “However, the letterforms are mechanically rigid and compact, tightly spaced and nearly indistinguishable from each other.” The humanist genre, on the other hand, is better suited to automotive interfaces, he says, because it is deeply rooted in our psyche, founded as it is on the classic book typefaces.

Oddly enough, the three admit, these findings generally apply only to male drivers. “While men can show a 10% difference in glance times depending on the font, women's reactions are exactly the same regardless of what font they are looking at,” says Mehler. None of them would hazard a guess as to why this might be.

You can catch up with all the latest developments at Active Safety: ADAS to Autonomous 2015 at Novi, MI, October 12-13.
