Although we are often awestruck by human ingenuity, there are fairly firm limits to the range of cognitive tasks that it is reasonable to expect of any person. Complex interactions within and between large technological systems are frequently opaque even to experts, and most people find it extremely challenging to babysit technological controls for long periods of time. Though it seems dead obvious in hindsight, military leaders were surprised to find that personnel tasked with monitoring the then newly developed radar screens for a once-in-a-blue-moon sighting soon grew complacent and dozed off. It has been recognized for decades that, even though improved maintenance scheduling alongside technical advancements like autopilot has enhanced the safety record of airlines, new and often deadly mistakes have occurred as a result of automating the control of passenger jets. Pilots are now tasked with monitoring gauges and babysitting the autopilot--precisely the tasks that humans are poorly suited for. Unsurprisingly, when an autopilot error thrusts the human pilot back into control--often in a crisis moment requiring immediate and accurate decision making--they make elementary and deadly errors.
Technology scholars have recognized for decades that automation creates unintended consequences, especially in complex and tightly coupled systems--such as navigating an automobile through a maze of traffic at seventy miles per hour. These scholars have proposed informating as a safer alternative, one that recognizes how people actually interact effectively with technologies rather than trying to cut them out of the loop. In an informated process the human driver would still be in control, but computerized components would aim to ensure that they make timely, accurate, and more sensible decisions. Informated automobiles would be explicitly designed to make their human operators better drivers. There is no guarantee that a car on autopilot would outperform a properly informated driver.
Exactly why companies like Google have not worked to develop informating technologies for automobiles is anyone's guess. I suspect that it has more to do with the reigning business model of the 21st century than with anything like a concern for safety. Lacking firm data on what a massively automated highway would actually look like, claims of improved safety with driverless and semi-autonomous cars are more speculation, conveniently deployed for public relations purposes, than "proven" science. Companies like Google have a financial stake in getting drivers to spend less and less time at the wheel. Time spent operating an automobile is time not spent producing personal data for Google on a digital device. Autonomous automobiles are part of a growing network of technologies aimed at producing an ever more detailed digital profile of a person's desires and purchasing proclivities.
Yet companies like GM and Audi have been hard at work developing semi-autonomous driving technologies, despite not having the same financial stake in people's drive time being occupied by Netflix binges and Facebook rants rather than by navigating traffic. They may be pursuing such technologies for a more mundane reason: not wanting to appear "behind the times." Indeed, given the often fickle and unpredictable swings in consumer markets, car companies are prone to jumping on bandwagons.
At the same time, there is the pervasive--and evidence-resistant--cultural belief that automated technologies automatically outperform human operators in all the relevant aspects of a job. Certainly computers have an advantage when it comes to highly routinized or algorithmic tasks: games, assembly-line work, and the like. But no program has been able to replicate human judgment. Yet it seems taken for granted that any time a human being can be replaced by a robot--e.g., care bots for the elderly and children, or diagnostic algorithms--progress has been made. Indeed, some go so far as to believe that a kind of heaven on Earth can be realized by digitizing our own bodies and consciousness and merging them with artificial intelligence algorithms.
As clear as it is, upon reflection, that such problematic beliefs, business interests, and potentially misguided strategies are at play in any automation effort, there is a profound lack of self-awareness or honesty among the automators themselves and in media reports. Citizens might still decide, of course, that automated automobiles are worth the risks, even when challenged to weigh the issues I have raised here, but at least they would be doing so consciously. Indeed, the most problematic automated process within technological civilization may be technological change itself. Quasi-decisions about the direction of technological innovation get made as if by autopilot; societies react more than they consciously steer. And semi-random technological drifts get interpreted as if they were part of an inevitable process of evolution toward Modernity. Unless social scientists figure out how to cure societies of their technological sleepwalking, innovators seem destined to lurch continually from error to mistake to disaster.