Will Ethics-Based Challenges and the Failure of Policy Makers to Intervene Ultimately Derail the Proliferation of High Level Autonomous Vehicles?

Publication type:

Conference Paper

Authors:

Greig Mordue

Source:

Gerpisa colloquium, Paris (2019)

Keywords:

Autonomous Vehicles; Ethics; Regulatory Policy

Abstract:

The tendency for policy to lag the introduction of new technology confounds multiple industries and stakeholders (see Saxena et al., 2017; Justo-Hanani and Dayan, 2015; Naylor et al., 2015). For policy makers in rapidly changing, newly emerging technology areas, the challenge is to anticipate and balance the governance of risks associated with the technology (regulatory policy) against the advancement of the underlying innovation (industrial policy) (OECD, 2011; Bosso, 2010). “Technology symbolizes markets, enterprise, and growth, while regulation represents government, bureaucracy, and limits to growth” (Wiener, 2004, p. 483).

The burgeoning autonomous vehicle space epitomizes the tendency for technological advancement to eclipse regulatory oversight. By way of example, the most widely accepted method of assessing vehicle autonomy is that developed by the Society of Automotive Engineers (SAE). The SAE offers a six-level scale to assess the extent to which a vehicle has self-driving capabilities. At the second-highest rating, Level 4, the SAE explains that “an automated system can conduct the driving task and monitor the driving environment, and the human need not take back control (of the vehicle), but the automated system can operate only in certain environments and under certain conditions” (NHTSA, 2016, p. 9). In other words, by Level 4 the vehicle assumes increasing levels of control, at times playing a role more active than that of the driver. Ford has indicated it will achieve Level 4 by 2021 (Belvedere, 2017); Daimler anticipates doing so by the early 2020s (Daimler, 2017); and BMW has suggested it will have Level 4 available by 2021 (Prodhan, 2017). A looming challenge, however, is that apart from Germany, no jurisdiction has articulated regulations guiding the on-road deployment of upper-tier SAE autonomous capability that are underpinned by a coherent ethical foundation (Awad et al., 2018). Thus, as the early 2020s edge closer and as firms like Ford, Daimler, BMW and others prepare to launch vehicles with advanced capabilities, the implications of the gap between autonomous vehicle technology and regulatory oversight will escalate.
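For reference, the six SAE J3016 levels referred to above can be summarized as follows. This is an illustrative Python sketch, not part of the paper; the level names and one-line descriptions are informal paraphrases of the standard.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving automation levels, summarized informally."""
    NO_AUTOMATION = 0           # human driver performs the entire driving task
    DRIVER_ASSISTANCE = 1       # system assists with steering or speed, not both
    PARTIAL_AUTOMATION = 2      # system handles steering and speed; driver monitors
    CONDITIONAL_AUTOMATION = 3  # system drives; driver must take over when requested
    HIGH_AUTOMATION = 4         # system drives and monitors the environment, but only
                                # within certain environments and conditions
    FULL_AUTOMATION = 5         # system drives under all conditions a human could

# Level 4, the focus of the abstract, is the second-highest rating.
print(SAELevel.HIGH_AUTOMATION, int(SAELevel.HIGH_AUTOMATION))  # SAELevel.HIGH_AUTOMATION 4
```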

The consequences of this chasm are evident in the nature of the decisions that currently rest with drivers and will shift to autonomous vehicles in the potentially not-so-distant future. Drivers and vehicles operating in the here and now make decisions in the moment. Should I brake … accelerate … steer left … veer right? Programmers of autonomous vehicles and operators of the infrastructure underpinning them, however, are not required to make snap decisions. They have the benefit of time and thus bear the burden of subsequently being judged by different and higher standards, ones reflective of the deliberate nature of the decisions underlying their work. Because those standards include considerations of morality and ethics, their decisions will necessarily be judged in moral and ethical terms. Accordingly, accountability for the operation and management of the vehicle, which until now has rested primarily with the driver, will increasingly shift to some combination of the vehicle manufacturer, the operators of the technological systems communicating with the vehicle, and the policy makers who influence their development (Germany, 2017).

The urgency and the challenge of codifying an ethical foundation to guide the development and proliferation of autonomous vehicles are evident in the near-infinite combination of scenarios and decisions that those directing the actions of the people programming autonomous vehicles (e.g. policy makers, regulators, engineers, OEMs, or some combination thereof) are able to consider. Should the algorithms that guide these vehicles prioritize the safety of drivers over passengers … occupants over pedestrians … young over old … good drivers over bad … jaywalking pedestrians over rule-abiding drivers? Ultimately, someone or some body will make these decisions. The question is whether the process that gives rise to those judgments unfolds via coordinated design or through a series of defaults. Buzbee (2003) suggests that when multiple regulators share jurisdiction over a potential regulatory opportunity but primary accountability is unclear (as is the case for guidance of autonomous vehicle decision making), a form of “regulatory commons” occurs: pressure is fragmented and regulatory action stalls. Alternatively, the regulatory void gets filled through some combination of case law (see Petty, 2015) or by those most directly and immediately affected (Hajer, 2003). In the case of autonomous vehicles, the latter option most likely means the void is filled by OEMs and the programmers and engineers they employ. Is that appropriate?

Clearly, issues surrounding ethics and morals are not easy to resolve. The fact that comparatively prosaic matters like fuel economy, safety requirements and emissions have defied harmonization efforts suggests that developing global standards to govern autonomous vehicles will be challenging. Moreover, as higher (Level 5) capabilities develop … as advanced machine learning and artificial intelligence become embedded in vehicles … as dilemma-inducing situations occur, the absence of an ethical foundation may eventually restrict autonomous vehicle proliferation.

So far, these matters are more theoretical than practical. This research, however, begins to demonstrate the tangible nature of the imminent issues. A basis for alternative programming assumptions and decisions is offered, including a suite of ethical foundations to be considered by those engaged in autonomous vehicle programming. To assess the effect that programming assumptions have on the courses of action autonomous vehicles may take in a dilemma-inducing situation, we create a situation in which the autonomous vehicle is compelled to select from several decision options. While the purpose is not to direct policy makers to one ethics-based programming decision over another, the analysis does provide a basis for recognizing that these decisions have important, life-and-death implications and that failure to regulate autonomous vehicles’ programming decisions in dilemma-inducing circumstances may ultimately derail the proliferation of upper-level capabilities.
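To give a flavour of how programming assumptions can change what an autonomous vehicle does in a dilemma-inducing situation, the following is a minimal, purely hypothetical Python sketch. It is not drawn from the paper: the scenario, the candidate manoeuvres, the harm estimates and the ethical weightings are all invented for illustration. The point is simply that the same situation yields a different “best” action under each assumption.

```python
# Hypothetical dilemma: candidate manoeuvres with rough expected-harm estimates.
# All numbers, names and weightings below are invented for illustration only.
ACTIONS = {
    "brake_straight": {"occupant_harm": 0.7, "pedestrian_harm": 0.1},
    "swerve_left":    {"occupant_harm": 0.2, "pedestrian_harm": 0.6},
    "swerve_right":   {"occupant_harm": 0.4, "pedestrian_harm": 0.3},
}

# Alternative "ethical foundations" expressed as weights on whose harm counts most.
POLICIES = {
    "occupant_first":   {"occupant_harm": 1.0, "pedestrian_harm": 0.2},
    "pedestrian_first": {"occupant_harm": 0.2, "pedestrian_harm": 1.0},
    "impartial":        {"occupant_harm": 1.0, "pedestrian_harm": 1.0},
}

def choose_action(policy_weights):
    """Pick the manoeuvre with the lowest weighted expected harm."""
    def weighted_harm(action):
        harms = ACTIONS[action]
        return sum(policy_weights[k] * harms[k] for k in harms)
    return min(ACTIONS, key=weighted_harm)

for name, weights in POLICIES.items():
    print(f"{name}: {choose_action(weights)}")
# occupant_first: swerve_left
# pedestrian_first: brake_straight
# impartial: swerve_right
```

Under these invented numbers, each ethical weighting selects a different manoeuvre, which is the sense in which such programming decisions carry life-and-death implications and invite regulatory attention.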
