Uber’s Deadly Robot Ride Foiled by Issue With Deciphering Detected Objects

March 21, 2018

The tragedy of the first pedestrian killed by an autonomous vehicle points to a potential vulnerability in the nascent technology now being tested on open roads: While robo-cars, powered by sophisticated sensors and cameras, can reliably see their surroundings, the software doesn’t always understand what it detects.

New details about the Uber Technologies Inc. autonomous vehicle that struck and killed a woman in Tempe, Arizona, indicate that neither the self-driving system nor the human safety driver behind the wheel hit the brakes when she apparently stepped off a median and onto the roadway at around 10 p.m., according to an account the Tempe police chief gave to the San Francisco Chronicle. The human driver told police he didn’t see the pedestrian coming, and the autonomous system behaved as if it hadn’t either.

Experts say the sophisticated sensors on the autonomous vehicle almost certainly detected the woman pushing her bag-laden bicycle along the median, close to the road. But the software interpreting data from the car’s lidar and radar sensors, which scan the surroundings for objects, may not have recognized that it was detecting a person. (Uber did not immediately respond to a request for comment.)

“The real challenge is you need to distinguish the difference between people and cars and bushes and paper bags and anything else that could be out in the road environment,” said Matthew Johnson-Roberson, an engineering professor at the University of Michigan who works with Ford Motor Co. on autonomous vehicle research. “The detection algorithms may have failed to detect the person or distinguish her from a bush.”

Driverless cars “see” the world around them using data from cameras as well as radar and lidar sensors that bounce laser light off objects to assess shape and location. High-speed processors crunch the data to provide a 360-degree view of lanes, traffic, pedestrians, signs, stoplights and anything else in the vehicle’s path. That’s supposed to enable the vehicle to know, in real time, where to go and when to stop. But pedestrian identification remains a major challenge for self-driving systems.
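The gap between detecting an object and knowing what it is can be sketched in simplified form. The sketch below is purely illustrative: the class labels, confidence scores, and thresholds are hypothetical assumptions, not Uber’s actual software or any real perception stack.

```python
# Illustrative sketch of a perception -> planning loop in a self-driving
# system. All names, labels, and thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # classifier's best guess: "pedestrian", "vehicle", "unknown"
    confidence: float  # 0.0-1.0 score from the classification model
    distance_m: float  # range to the object, from lidar/radar fusion

def plan_action(detections, stop_distance_m=30.0, min_confidence=0.5):
    """Decide whether to brake based on classified objects in the path."""
    for d in detections:
        # A confidently classified pedestrian within stopping range -> brake.
        if (d.label == "pedestrian" and d.confidence >= min_confidence
                and d.distance_m <= stop_distance_m):
            return "brake"
    return "continue"

# The failure mode experts describe: the object IS detected, but the
# classifier never labels it a pedestrian (e.g. a person plus bicycle plus
# bags), so the planner never triggers the brakes.
ambiguous = [Detection(label="unknown", confidence=0.3, distance_m=25.0)]
print(plan_action(ambiguous))  # -> "continue", despite an object ahead
```

The point of the sketch is that braking hinges on the classification step, not on raw detection: the sensors can report an obstacle perfectly while the planner, seeing only an "unknown" label, takes no action.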

Autonomous vehicles also struggle to master the elements. Snow, ice and even rain can obscure sensors and render the most advanced computing power useless. That’s one reason most self-driving cars are being tested in sunny climates like Arizona and Texas. Early autonomous cars have also been knocked for creating hazards by rigidly following traffic rules and driving overly cautiously.

“You need to distinguish the difference between people and cars and bushes and paper bags”

In the case of the fatal collision in Tempe, the dark conditions aren’t likely to have played a role. While darkness can limit the vision of the cameras, radar functions equally well in day or night. Lidar actually functions better in the dark because the glare of sunshine can sometimes create interference, said Raj Rajkumar, a professor of electrical and computer engineering at Carnegie Mellon University who works on autonomous vehicles.

To Rajkumar, there’s little doubt that “lidar would certainly have detected an obstacle.” Any shortcoming would likely be ascribed to classification software “because it was an interesting combination of bicycle, bags and a pedestrian standing stationary on the median,” he said. Had the software recognized a pedestrian standing close to the road, he added, “it would have at least slammed on the brakes.”

Rajkumar said many of the companies he works with require human safety drivers to take the wheel when pedestrians are present along the roadway. “Just as a precautionary measure,” he said. “Nobody wants to deal with this kind of outcome.”

The car was still traveling at 38 miles per hour after the collision, according to the Tempe police chief, and the driver told police he wasn’t aware of the pedestrian until the car collided with her. Posted speed limits in the area range from 35 to 40 mph.

To some observers, the speed of the self-driving vehicle itself raises safety questions. “Why was Uber speeding?” asked Bryant Walker Smith, a University of South Carolina law professor who studies self-driving cars. “Crashes that are unavoidable at higher speeds are avoidable at lower speeds.”

That highlights what Johnson-Roberson describes as a shortcoming in robot reasoning. Had the Uber vehicle’s software recognized that a person was on the median, the car could have slowed down or hit the brakes as a precaution, as human drivers do every day when they spot a risk along the road.
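The precautionary behavior described here, slowing down when something ambiguous appears near the roadway rather than waiting for a confident classification, could be expressed as a simple rule. Again, this is a hypothetical sketch with made-up parameters, not any production planner:

```python
# Hypothetical sketch of precautionary speed reduction: lower the target
# speed whenever a low-confidence, unclassified object sits near the lane.
# All thresholds and values are illustrative assumptions.

def precautionary_speed(base_speed_mph, detections, near_road_m=3.0):
    """Cap the target speed if any uncertain object is close to the roadway."""
    for label, confidence, lateral_offset_m in detections:
        # An object the classifier cannot confidently identify, near the
        # lane edge: slow down rather than assume it is harmless.
        if confidence < 0.5 and lateral_offset_m <= near_road_m:
            return min(base_speed_mph, 25.0)  # precautionary speed cap
    return base_speed_mph

# An ambiguous shape 2 meters from the lane edge caps speed at 25 mph.
print(precautionary_speed(38.0, [("unknown", 0.3, 2.0)]))  # -> 25.0
```

The design choice mirrors what a human driver does on a crowded football weekend: uncertainty itself, not a confirmed hazard, is enough to reduce speed.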

“I live in Ann Arbor, a college town,” Johnson-Roberson said. “So on football weekends, when there’s a bunch of drunk college kids, I drive at a lower speed. Those are the kind of human decisions we make to anticipate a situation, and that’s hard with autonomous cars. We’re not there yet.”

The race to get robot rides onto the road has pushed some companies to take too many risks, said Jake Fisher, director of auto testing at Consumer Reports. “There are some out there who are trying very hard to make self-driving cars a reality before the technology is developed,” Fisher said. “A tragedy like this could undermine people’s faith in this technology and actually set the development back many years.”

The promise of self-driving cars is that they will eliminate deaths on the highway by precisely piloting vehicles without human error, which regulators say is the cause of 94 percent of fatalities on American roads. For that promise to be realized, a driverless car needs to be able to recognize a person when it sees one.

“We are building these cars to take humans around, and there are going to be humans in our cities and on our sidewalks and in our streets,” Johnson-Roberson said. “We’re not going to ban people from cities, so we’re going to have to learn how to work with them.”