Intel Proposes System to Make Self-Driving Cars Blameless
Intel Corp. has developed a system it says ensures that self-driving vehicles can’t cause accidents for which they are at fault, an effort to reassure a skeptical public and help speed the adoption of driverless cars.
The world’s largest chipmaker is publishing a set of standards, based on mathematical formulas, to govern the behavior of robot cars and trucks. If the standards are adopted, Intel argues, they will bring certainty to questions of liability and blame in the event of an accident.
“Any useful autonomous vehicle is going to be involved in accidents,” said Dan Galves, a vice president at Mobileye, a maker of autonomous vehicle technology that Intel bought earlier this year. “One thing that is clear is that the public is going to be a lot less forgiving of accidents that are caused by machines.”
Intel is one of several component makers that see the increasing need for computing in vehicles, caused by the move toward autonomy, as a new growth market. While car makers, their suppliers and companies such as Uber Technologies Inc. and Alphabet Inc.’s Waymo are conducting on-the-road tests, Intel and its rivals need the industry to move beyond trials and into production to get a return on the dollars they’re pouring into research and development.
Intel is trying to come up with a framework that will help prevent the potential chaos of putting machine-driven vehicles on the road alongside unpredictable human drivers, a necessary step on the path to a future where steering wheels become obsolete. The company has taken descriptions of the behavior and circumstances involved in almost all accidents tracked by the U.S. National Highway Traffic Safety Administration and turned them into mathematical models that define a measurable “safe state” for autonomous vehicles.
The standards, if endorsed by the automotive industry, its suppliers and regulators, would also be the basis of software in the vehicles that makes sure the rules are followed. That would help speed the validation of autonomous technology, something that would benefit Intel’s chip-making business.
To illustrate what Intel has in mind: under the guidelines, a robot vehicle would move past parked cars at a speed slow enough to ensure it could stop in time to avoid a pedestrian who suddenly stepped out into the road. That calculation is possible, according to Intel, because the maximum speed at which a human can move is known and can be modeled. Similarly, computers can easily calculate the safe stopping distance to a vehicle in front and make sure the car they’re piloting stays far enough back. If an aggressive human driver cut in front of the robot car and caused an accident, the standards would clearly show whose fault it was, even if the machine-driven car rear-ended the other vehicle.
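The article doesn’t reproduce Intel’s published formulas, but the flavor of these checks can be sketched with basic kinematics. The Python below is a minimal illustration under assumed values: the constants, function names and the sight-distance framing are this sketch’s assumptions, not Intel’s model.

```python
import math

# A minimal kinematic sketch of the "safe state" checks described above.
# All constants and names here are illustrative assumptions, not Intel's model.
RESPONSE_TIME_S = 0.5   # assumed delay before the car starts braking, seconds
OUR_MAX_BRAKE = 6.0     # assumed guaranteed deceleration of our car, m/s^2
LEAD_MAX_BRAKE = 8.0    # assumed worst-case braking of a car ahead, m/s^2


def stopping_distance(speed: float) -> float:
    """Distance covered while reacting, then braking to a full stop."""
    return speed * RESPONSE_TIME_S + speed ** 2 / (2 * OUR_MAX_BRAKE)


def safe_following_gap(our_speed: float, lead_speed: float) -> float:
    """Gap that keeps us from rear-ending a lead car even if it brakes
    at its assumed worst-case rate while we react and then brake."""
    lead_stop = lead_speed ** 2 / (2 * LEAD_MAX_BRAKE)
    return max(0.0, stopping_distance(our_speed) - lead_stop)


def max_safe_speed(sight_distance: float) -> float:
    """Fastest speed from which we can stop within the distance at which
    a pedestrian could first appear (solves stopping_distance(v) = d)."""
    a, t = OUR_MAX_BRAKE, RESPONSE_TIME_S
    return a * (math.sqrt(t ** 2 + 2 * sight_distance / a) - t)
```

With these assumed numbers, max_safe_speed(10.0) works out to roughly 8.4 m/s, about 30 km/h, for a car that can first see a pedestrian 10 meters ahead. The cut-in case falls out of the same logic: if a human driver squeezes in and leaves less room than safe_following_gap requires, the blame sits with the car that cut in, not the robot behind it.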
Intel argues that the path the industry is currently following won’t work, or will take too long. Slow-moving, ultra-cautious vehicles are of limited use and aren’t that safe, because they clog the roads and don’t fit the flow of human-piloted traffic. Attempts to prove that self-driving cars are safe by putting them on the road, letting them learn from experience and then measuring how few accidents they have compared with human-driven vehicles are also ineffective, Intel said, partly because any accident attracts enormous public attention.