On September 19, 2016, the National Highway Traffic Safety Administration (NHTSA) issued a Policy regarding automated vehicles. NHTSA issued the Policy for the purpose of providing guidance to the states, manufacturers, and the public at large on how best to address the variety of challenges posed by the continued progress of automated vehicle technology. These challenges include how states can best regulate automated vehicles, and how NHTSA can encourage technological advances while promoting vehicle safety.
One immediate contribution made by the Policy is in the area of classifying vehicles according to their level of automated function. Previously, NHTSA used a classification scheme that differed from the one used by many industry groups, including SAE International (formerly the Society of Automotive Engineers). NHTSA has now adopted SAE’s classification system, which includes six levels, from 0 to 5, with 0 being a vehicle that possesses no automated function, and 5 being a vehicle that performs all driver functions under all driving conditions. NHTSA considers those vehicles at levels 3, 4 and 5 to be Highly Automated Vehicles (HAVs), meaning the vehicle itself is monitoring the driving environment while steering, accelerating and braking under at least some driving conditions. The adoption of the SAE levels should eliminate a great deal of ambiguity about how vehicles are classified, an important step toward crafting a sensible legal and regulatory scheme that will apply to different types of vehicles.
So what impact does this have on transportation liability? In the very near term, not much. The Policy is advisory, not a rulemaking, and therefore is most useful for projecting what approach the federal and state governments will take in the years ahead. In this way, the Policy is reassuring, demonstrating that NHTSA’s focus includes a number of important and challenging issues, such as vehicle cybersecurity. The Policy recognizes that HAVs will be potentially vulnerable not only to the unauthorized extraction of the extremely large amounts of data they must gather, but also to third-party interference with vehicle operation. It suggests that manufacturers immediately begin considering which best practices to incorporate into their systems development, and suggests a variety to consider, including those offered by NHTSA, SAE, and the National Institute of Standards and Technology.
For transportation risk managers, the most interesting signal being sent by NHTSA may have to do with the transition of driver responsibility that occurs from level 3 to level 4 vehicles. A level 3 vehicle possesses a high degree of automated function that permits the vehicle to operate without the constant monitoring of the driver. One example of this level of technology is Audi’s prototype A7, but as of now there are no commercially available level 3 vehicles. The problem with level 3 is the transitional space it occupies. A level 3 vehicle offers a number of safety-enhancing features making the vehicle nearly autonomous, such that it requires no human monitoring, until (perhaps very suddenly) it does, either because of a vehicle error or an unanticipated driving condition. In this situation a driver may have been lulled into a sense of complacency and inattentiveness by the advanced automation of his car, and be unable to respond in time.
NHTSA recognizes this problem, and encourages manufacturers to consider whether to include driver engagement monitoring in their level 3 vehicles. However, “the truck was supposed to be driving” is likely to be an extremely unpopular defense with juries. For this reason, the commercial transportation industry may want to avoid level 3 automation and its challenges altogether, at least until it has been sufficiently tested and normalized by passenger automobiles.
The other interesting signal being sent by NHTSA has to do with driver liability. In a Washington Post interview that coincided with the release of the Policy, Transportation Secretary Anthony Foxx remarked that, “when a human being is operating [a] vehicle, the conventional rules of state law would apply.” However, Foxx confirmed that NHTSA intends to regulate “when software is operating the vehicle.”
This is noteworthy for two reasons. First, it focuses attention on a perplexing question juries may be forced to contend with: how do you determine whether, at the time of the accident, it was the responsibility of the human driver or the vehicle software to be operating the vehicle and monitoring driving conditions? Second, how, exactly, will NHTSA regulations address liability for accidents involving level 4 and 5 vehicles? Pursuant to those regulations, will an accident caused by a vehicle operating autonomously be treated as a products liability case, requiring the testimony of experts to attempt to educate a jury on the arcana of computer code?
One approach to tort liability that addresses these issues is to maintain current state approaches to liability for automobile accidents, and extend them to cover automated vehicles, even those that are fully autonomous. When a fully autonomous vehicle is involved in an accident, liability could be determined according to the familiar standard of the reasonable driver. If the vehicle departed from this standard, its operator would be liable. This standard could evolve with the technology, would eliminate the question of who should have been operating the vehicle, and would remove the need for an expensive and confusing battle of the experts. Defendants who believe the accident was due to a mechanical or software issue with their vehicle could, as now, raise this as a defense.
The proper approach to liability is just one of many questions posed by the adoption of automated vehicle technology. It is helpful that NHTSA has recognized this and other important issues, and that its Policy encourages the studies and discussions that will lead to their eventual resolution.