With the increasing ubiquity of autonomous and semi-autonomous vehicles has come increasingly frequent media scrutiny of the risks they pose. Every serious accident is reported and pored over, and manufacturers and designers are frequently criticised for the risks to which they have exposed their users.
What that criticism seldom engages with, and what this interesting in-depth piece from Bloomberg does, is the motivation for autonomous vehicle manufacturers to push to get their vehicles onto the road. It is easy to assume that the principal motivation is profit, and of course that will be a factor, but it would be wrong to assume that the pursuit of profit alone leads vehicles to be put into production before their driver support systems can achieve consistent full autonomy or a guaranteed flawless safety record.
As the Bloomberg piece quoted below makes clear, the level of road deaths without this technology is so high that every day these products are kept off the roads means more deaths that might have been avoided.
Although it is never a calculation that should be taken lightly, the balancing exercise required is far from novel. Confining ourselves to the automotive world, almost every regulatory decision (whether mandatory safety features for vehicles, modifications to speed limits, or expenditure on road maintenance or lighting) is accompanied by a weighing of the potential lives saved against the costs of implementation and enforcement, or the interests of the automotive industry.
What is different about the development and commercialisation of autonomous vehicles is that, perhaps for the first time, these macro "life or death" decisions are being taken not by governments but by technology companies, and by the consumers who buy their products. It will be interesting to see how the burgeoning ethical component of consumers' buying decisions affects the ways in which products like autonomous vehicles are marketed and sold. One thing is for sure: there are going to be a good deal more of these discussions to be had along the road.
Should robots be flawless before they’re allowed on the road, or simply better than the average human driver? “Humans have shown nearly zero tolerance for injury or death caused by flaws in a machine,” said Gill Pratt, who heads autonomous research for Toyota Motor Corp., in a 2017 speech. “It will take many years of machine learning, and many more miles ... to achieve the perfection required.” But such a high standard could paradoxically lead to more deaths than a lower one. In a 2017 study for Rand Corp., researchers Nidhi Kalra and David Groves assessed 500 different what-if scenarios for the development of the technology. In most, the cost of waiting for almost-perfect driverless cars, compared with accepting ones that are only slightly safer than humans, was measured in tens of thousands of lives.
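The shape of the Rand comparison can be illustrated with a toy calculation. Every number and assumption below is hypothetical and chosen only to show the structure of the argument, not drawn from the Kalra and Groves study: a fixed annual death toll, instant fleet-wide adoption, and a simple linear "learning curve" in which deployed vehicles improve from 10% to 90% safer than humans over fifteen years.

```python
# Toy illustration of the "cost of waiting" trade-off.
# All figures are hypothetical, for illustration only -- this is not
# the Rand model, just a sketch of its underlying logic.

BASELINE = 37_000      # assumed annual road deaths with human drivers
HORIZON = 30           # years simulated
MATURITY_YEAR = 15     # year the technology reaches 90% safer than humans

def improvement(year, deployed_at):
    """Fractional safety improvement over human drivers in a given year.

    Before deployment the fleet is entirely human-driven (0% improvement).
    After deployment, safety improves linearly from 10% better than humans
    to 90% better by MATURITY_YEAR (a deliberate simplification)."""
    if year < deployed_at:
        return 0.0
    frac = min(year, MATURITY_YEAR) / MATURITY_YEAR
    return 0.10 + 0.80 * frac

def total_deaths(deployed_at):
    """Total road deaths over the horizon for a given deployment year."""
    return sum(BASELINE * (1 - improvement(y, deployed_at))
               for y in range(HORIZON))

# Policy A: deploy now, only slightly (10%) safer than human drivers.
deploy_now = total_deaths(0)
# Policy B: wait until the technology is near-perfect (90% safer).
wait = total_deaths(MATURITY_YEAR)

print(f"deploy now: {deploy_now:,.0f} deaths over {HORIZON} years")
print(f"wait:       {wait:,.0f} deaths over {HORIZON} years")
```

Under these invented parameters, waiting for the near-perfect system costs far more lives over the horizon than deploying the merely-better system early, because the early fleet spends the waiting period displacing human error. The real study ran this kind of comparison across 500 scenarios with far richer assumptions.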