A recent MIT Technology Review article posed a thought-provoking question with an obvious answer, at least to anyone familiar with the history of technology: "Are Semi-Autonomous Cars Making Us Worse Drivers?" It is difficult not to see autonomous and semi-autonomous driving technology as another case in which widespread techno-enthusiasm leads otherwise intelligent people to act unintelligently. Indeed, an answer to the Technology Review's question was available long before driver-assist technologies ever hit the road.
Although we are often awestruck by human ingenuity, there are fairly firm limits to the range of cognitive tasks that it is reasonable to expect of any person. Complex interactions within and between large technological systems are frequently opaque even to experts, and most people find it extremely challenging to babysit technological controls for long periods of time. Though it seems obvious in hindsight, military leaders were surprised to find that personnel tasked with monitoring the then newly developed radar screens for a once-in-a-blue-moon sighting soon became complacent and dozed off. It has likewise been recognized for decades that, even though improved maintenance scheduling and technical advancements like autopilot have enhanced airlines' safety record, automating the control of passenger jets has produced new and often deadly mistakes. Pilots are now tasked with monitoring gauges and babysitting the autopilot, precisely the tasks that humans are poorly suited for. Unsurprisingly, when an autopilot error thrusts the human pilot back into control, often in a crisis moment requiring immediate and accurate decision making, they make elementary and deadly errors. Technology scholars have recognized for decades that automation creates unintended consequences, especially in complex and tightly coupled systems, such as navigating an automobile through a maze of traffic at seventy miles per hour. These scholars have proposed informating as a safer alternative, one that recognizes how people actually interact effectively with technologies rather than trying to cut them out of the loop. In informated processes the human driver would still be in control, but computerized components would aim to ensure that they make timely, accurate, and more sensible decisions. Informated automobiles would be explicitly designed to make their human operators better drivers.
There is no guarantee that a car on autopilot would outperform a properly informated driver. Exactly why companies like Google have not worked to develop informating technologies for automobiles is anyone's guess. I suspect that it has more to do with the reigning business model of the twenty-first century than with anything like a concern for safety. Lacking firm data on what a massively automated highway would actually look like, claims of improved safety for driverless and semi-autonomous cars are more speculation, conveniently deployed for public relations purposes, than "proven" science. Companies like Google have a financial stake in getting drivers to spend less and less time at the wheel: time spent operating an automobile is time not spent producing personal data for Google on a digital device. Autonomous automobiles are part of a growing network of technologies aimed at producing an ever more detailed digital profile of a person's desires and purchasing proclivities. Yet companies like GM and Audi have also been hard at work developing semi-autonomous driving technologies, despite not having the same financial stake in people's drive time being occupied by Netflix binges and Facebook rants rather than by navigating traffic. They may be pursuing such technologies for a more mundane reason: not wanting to appear "behind the times." Indeed, given the often fickle and unpredictable swings in consumer markets, car companies are prone to bandwagons. At the same time, there is also the pervasive, and evidence-resistant, cultural belief that automated technologies automatically outperform human operators in all the (relevant) aspects of a job. Certainly computers have an advantage when it comes to highly routinized or algorithmic tasks: games, assembly-line work, and the like. But no program has been able to replicate human judgment.
Yet it seems taken for granted that progress has been made any time a human being can be replaced by a robot, whether care bots for the elderly and children or diagnostic algorithms. Indeed, some go so far as to believe that a kind of heaven on Earth can be realized by digitizing our bodies and consciousness and merging them with artificial intelligence algorithms. As clear as it is, upon reflection, that such problematic beliefs, business interests, and potentially misguided strategies are at play in any automation effort, there is a profound lack of self-awareness, or honesty, among automakers themselves and in media reports. Citizens might still decide, of course, that automated automobiles are worth the risks, even when challenged to weigh the issues I have raised here, but at least they would be deciding consciously. Indeed, the most problematic automated process within technological civilization may be that of technological change itself. Quasi-decisions about the direction of technological innovation get made as if by autopilot; societies react more than they consciously steer. And semi-random technological drifts get interpreted as if they were part of an inevitable evolution toward Modernity. Unless social scientists succeed in figuring out how to cure societies of their technological sleepwalking, innovators seem destined to continually lurch from error to mistake to disaster.
Author: Taylor C. Dotson is an associate professor at New Mexico Tech, a Science and Technology Studies scholar, and a research consultant with WHOA. He is the author of The Divide: How Fanatical Certitude Is Destroying Democracy and Technically Together: Reconstructing Community in a Networked World. Here he posts his thoughts on issues mostly tangential to his current research.