
Publisher of The Virginia Engineer



Autonomous Vehicle Training Headed In Wrong Direction
June 11, 2018

The programming technologies behind autonomous vehicles have recently revealed a major obstacle, according to information provided by Arizona State University (ASU). According to Aviral Shrivastava, a computer science associate professor in Arizona State University’s Ira A. Fulton Schools of Engineering, “The autonomous car industry is trying to walk a line between a human-like driving experience and guaranteed safety. At the moment, the familiarity of human-like driving is the norm and puts safety at risk.”

Arizona State University Associate Professor Aviral Shrivastava's research involves developing technologies used in autonomous vehicle programming. Photo by Ken Fagan/ASU Now.

“Google, Uber and others in the field are using humans to teach cars how to drive themselves,” explained Prof. Shrivastava. “And that’s the problem. They are learning from human drivers, all of whom are fallible, and the autonomous cars are in turn mirroring our unsafe driving behaviors.”

As a case in point, a fatal accident involving a self-driving Uber vehicle and a pedestrian earlier this year in Tempe, Arizona, caused Uber to suspend its driverless operations in Arizona.

The video captured by the vehicle just before the accident shows that the pedestrian was crossing the road, outside of a pedestrian walkway, in the dark. The car’s headlights, streetlights and ambient lighting all failed to illuminate the pedestrian.

“Since the Uber car could not detect anything in the dark area, it did what a human driver might have done — proceeded as though there was no one in the road. When the car’s lights brought the woman suddenly into view, the car was travelling too fast to stop,” Prof. Shrivastava said.

Prof. Shrivastava asserts that an autonomous vehicle should travel only at the speed at which it can stop before its range of vision ends — the vehicle should be traveling slowly enough that it can come to a complete stop before reaching an obstruction that suddenly comes into view.
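This principle can be expressed as a simple inequality: reaction distance plus braking distance must fit within the sensor’s visible range. The sketch below solves that inequality for the maximum safe speed; the deceleration and reaction-time values are illustrative assumptions, not figures from Prof. Shrivastava’s research.

```python
import math

def max_safe_speed(sensing_range_m, decel_mps2=6.0, reaction_s=0.2):
    """Largest speed v (m/s) such that the car can stop within its
    sensing range:

        v * reaction_s + v**2 / (2 * decel_mps2) <= sensing_range_m

    The left side is reaction distance plus braking distance; solving
    the quadratic in v gives the closed form below.
    Default deceleration and reaction time are hypothetical values."""
    a = 1.0 / (2.0 * decel_mps2)
    b = reaction_s
    c = -sensing_range_m
    return (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
```

For example, with a 50 m visible range under these assumptions the safe speed works out to roughly 23 m/s (about 84 km/h); halve the visible range, as darkness effectively did in the Tempe crash, and the safe speed drops well below typical urban limits.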

“If a human driver causes an accident, it is unfortunate but normal,” he said. “If an autonomous car causes an accident on the other hand, it is unacceptable, and it can shut down the whole autonomous car industry.”

“As long as human behaviors are the foundation of automated driving technology, safety will continue to be an issue,” Prof. Shrivastava added. “The priority for autonomous cars should be safety, rather than a human-like driving experience.”

Funded by the National Institute of Standards and Technology and the National Science Foundation, Prof. Shrivastava’s research focuses on cyber-physical systems designs — mechanisms like autonomous vehicles in which a computer controls a physical system — that guarantee the behavior of the systems.

“For example, we look at how we can build a car in which there is a guarantee that if an obstacle is detected, brakes will be applied within one millisecond,” explained Prof. Shrivastava.
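The kind of guarantee described here is a hard real-time deadline: the time from detection to actuation must never exceed a fixed bound. The following is a minimal sketch of a control-loop step that measures and enforces such a deadline; it is an illustration of the idea, not Prof. Shrivastava’s actual mechanism, and the 1 ms figure is taken directly from the quote above.

```python
import time

BRAKE_DEADLINE_S = 0.001  # 1 ms hard deadline, per the quote above

def control_step(obstacle_detected, apply_brakes):
    """One iteration of a simplified control loop: if an obstacle is
    flagged, command the brakes and check that the hard deadline was
    met.  Returns the measured latency in seconds, or None if there
    was no obstacle."""
    if not obstacle_detected:
        return None
    start = time.perf_counter()
    apply_brakes()                       # actuation command
    elapsed = time.perf_counter() - start
    if elapsed > BRAKE_DEADLINE_S:
        # In a verified cyber-physical design this case is ruled out
        # by construction; here we simply flag the violation.
        raise RuntimeError(f"deadline missed: {elapsed * 1e3:.3f} ms")
    return elapsed
```

In a real cyber-physical systems design, such a bound would be established analytically (worst-case execution time on a real-time platform) rather than checked at run time, which is precisely the distinction between guaranteed behavior and best-effort behavior.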
