According to Waymo, the autonomous vehicle company owned by Google's parent company, Alphabet, “Self-driving vehicles hold the promise to improve road safety and offer new mobility options to millions of people.” While driverless cars may be the wave of the future, headlines peppered with news of autonomous vehicle crashes—and now a pedestrian death involving an Uber self-driving car—have left many drivers wondering whether sharing the roads with artificial intelligence will truly be as safe as they’ve been told.
Safety Is the Incentive
Self-driving vehicles have been developed primarily to lower the rates of injuries and fatalities caused by collisions. A vehicle that can operate without human assistance runs no risk of driving drunk, can’t be distracted by a mobile phone, radio, navigation controls, or other passengers, and will never fall asleep at the wheel.
These vehicles also have the power to improve mobility for millions of senior citizens and people who are blind or have low vision by providing door-to-door transportation in communities where public transit is insufficient or nonexistent.
Autonomous cars also promise people who would normally be content to drive themselves the gift of time better spent during their travel or commute. With a trustworthy autopilot, few would turn down the opportunity to read, study, play a game, or even nap instead of enduring the stress of driving.
However, all of these potential perks do not come without a handful of well-justified concerns and reservations. Could your car be attacked by malicious hackers? What if it contracts a virus? Will it be able to figure out what to do if there’s a sudden change in weather or if road signs or markings are missing? How much can the car really “see”?
Putting Driverless Cars to the Test
To help advance driverless technology and to address some of the specific concerns of motorists and pedestrians, the leading developers of autonomous cars have established a system of tests that subjects the vehicles to real-life scenarios. Based on how the technology responds in those scenarios, engineers can then improve the vehicle’s controlling artificial intelligence (AI) so that it can adapt and calculate safer responses in the future.
The three levels of scenario testing are:

1. Simulation Testing

After the safety of the base vehicle is assessed and the self-driving hardware is tested, autonomous vehicles are subjected to simulation testing. These simulations involve creating and modifying software models of some of the most difficult and unpredictable situations drivers face on the road. The AI, sometimes with help from software engineers, learns to use probability to calculate the safest reaction for the vehicle. This phase of development can take years.
2. Closed-Course Testing
This level of testing pairs the latest software update, a safety-tested base vehicle, and an experienced test driver on a closed track to observe and measure the car’s responses to various driving challenges. Data recovered from these tests can be used to further improve the vehicle’s software if needed.
3. Road Testing
After extensive simulations and closed-course tests, autonomous vehicles are put onto public roads for testing and observation with a human safety driver behind the wheel to take over control of the car in the event of an emergency. California and Michigan allow autonomous vehicles to be tested on public roads without a safety driver, but those tests are generally conducted only after road tests with a safety driver have been successfully completed.
Road testing has been so prevalent in recent years that you may have already shared the highway with a self-driving car without realizing it. Waymo has put its vehicles through over five million miles of real road testing in Washington, California, Arizona, and Texas. As of February 2018, road testing of autonomous vehicles was approved in a total of 26 states.
Autonomous Vehicle Crash Statistics
Since 2005, there have been 21 crashes or collisions involving autonomous test vehicles. Those self-driving cars included models from various developers, including Mercedes, Tesla, Google/Waymo, Uber, and Navya. Out of 21 total incidents, the autonomous vehicles were found to be at fault for only two.
According to studies conducted by Tesla (in cooperation with the National Highway Traffic Safety Administration), there is one fatality for every 94 million miles (150 million km) driven among all types of vehicles in the United States. With one fatality attributed to the company’s Autopilot system after 130 million miles (208 million km) driven by its customers, the data suggests that the autonomous feature performs more safely than the statistical average.
The Uncertainties of Innovation
As with human drivers, autonomous vehicles are not without their flaws. Tesla has struggled to improve its AI’s detection and recognition of pedestrians and cyclists. Mercedes was challenged to improve the brake engagement time of its autonomous assist features to avoid collisions. Waymo was forced to take a closer look at its AI’s ability to rank potential hazards by their level of danger.
Above all, every autonomous vehicle developer has struggled with human unpredictability. NHTSA research has concluded that 90% of all motor vehicle collisions are the result of human error. No matter how well an AI observes and adapts to behavioral statistics and probabilities, there will always be some danger of a crash as long as human drivers share the road with self-driving cars.
However, the findings of this NHTSA study, combined with the crash statistics of autonomous vehicles, do point to these “robotic” cars as a safe method of transportation, even during their development and testing phases, with fewer injuries and crash-related fatalities per mile driven than human-operated vehicles.
Was this article helpful? Like and follow us on Facebook to learn about more articles like this one as soon as they’re published!