Automating Medicine

7/10/2017

The stock phrase that “those who do not learn history are doomed to repeat it” certainly seems to hold true for technological innovation. After a team of Stanford University researchers recently developed an algorithm that they say is better at diagnosing heart arrhythmias than a human expert, all the MIT Technology Review could muster was to ask, rhetorically, whether patients and doctors could ever put their trust in an algorithm. I won’t dispute the potential for machine learning algorithms to improve diagnoses; however, we should all take issue when journalists like Will Knight depict these technologies so uncritically, as if their claimed merits will be realized unproblematically, without negative consequences.

Indeed, the same gee-whiz reporting likely accompanied the advent of computerized autopilot in the 1970s—probably with the same lame rhetorical question: “Will passengers ever trust a computer to land a plane?” Of course, we now know that the implementation of autopilot was anything but a simple story of improved safety and performance. As both Robert Pool and Nicholas Carr have demonstrated, automating facets of piloting created new kinds of accidents, produced by unanticipated problems with sensors and electronics as well as by the eventual deskilling of human pilots. It is truly disheartening that such shallow, ignorant reporting continues for similar automation technologies, not just automated diagnosis but also driverless cars, despite our knowledge of those previous mistakes.

That the tendency not to dig too deeply into the potentially undesirable consequences of automation technologies is so widespread is telling. It suggests that something must be acting as a barrier to people’s ability to think clearly about such technologies. The political scientist Charles Lindblom called these barriers “impairments to critical probing,” noting the role of schools and the media in helping to ensure that most citizens refrain from critically examining the status quo.

Such impairments to critical probing with respect to automation technologies are visible in the myriad simplistic narratives that are often presumed rather than demonstrated, such as the belief that algorithms are inherently safer than human operators. Indeed, one commenter on Will Knight’s article prophesied that “in the far future human doctors will be viewed as dangerous compared to AI.”

Not only are such predictions impossible to justify—at this point they can be nothing more than wildly speculative conjectures—but they also rest on a fundamental misunderstanding of what technology is. Too often people treat technologies as autonomous forces in the world, not only acting as if technological changes were foreordained and unstoppable but also failing to see that no technology functions without the involvement of human hands. Indeed, technologies are better thought of as sociotechnical systems.

Even a simple tool like a hammer cannot exist without underlying human organizations, which provide the conditions for its production; nor can it act in the world without having been designed to be compatible with the shape and capacities of the human body. A hammer too big to be effectively wielded by a person would rightly be recognized as an ill-conceived technology; few would fault a manual laborer forced to use such a hammer for any undesirable outcomes of its use.

Yet somehow most people fail to extend the same recognition to more complex undertakings like flying a plane or managing a nuclear reactor: in such cases, the fault is regularly attributed to “human error.” How could it be fair to blame a pilot, deskilled precisely because the job demands almost exclusive reliance on autopilot, for mistakenly pulling up on the controls and stalling the plane during an unexpected autopilot failure? The tendency to do so comes from not recognizing autopilot technology as a sociotechnical system. Autopilot technology that leads to deskilled pilots, and hence to accidents, is as poorly designed as a hammer too large for the human body: it fails to respect the complexities of the human-technology interface.

Many people, including many of my students, find that chain of reasoning difficult to accept, even though they struggle to locate any fault in it. They labor under the weight of an impairing narrative that leads them to assume that substituting computerized algorithms for human action is always unalloyed progress. My students’ discomfort only deepens when they are presented with evidence that early automated textile technologies produced substandard, shoddy products, and that such technologies were most likely implemented to undermine organized labor rather than to advance a broader, more humanistic notion of progress. In any case, the continued power of the automation-equals-progress narrative will likely stifle the development of intelligent debate about automated diagnosis technologies.

If the technological societies currently poised to begin automating medical care are to avoid repeating history, they will need to learn from past mistakes. In particular, how could AI be implemented so as to enhance the diagnostic abilities of doctors rather than deskill them? Such an approach would part ways with traditional ideas about how computers should influence the work process, aiming to empower and “informate” skilled workers rather than replace them. As Siddhartha Mukherjee has noted, while algorithms can be very good at partitioning, that is, at distinguishing minute differences between pieces of information, they cannot deduce “why,” they cannot build a case for a diagnosis by themselves, and they cannot be curious. We replace humans with algorithms only at the cost of these qualities.

Citizens of technological societies should demand that AI diagnostic systems be used to aid the ongoing learning of doctors, helping them to solidify hunches and to avoid overlooking alternative diagnoses or pieces of evidence. Meeting such demands, however, may require that still other impairing narratives be challenged: in particular, the belief that societies must acquiesce to the “disruptions” of new innovations as they are imagined and desired by Silicon Valley elites, and the tendency to think of the qualities of the work process last, if at all, amid the excitement over extending the reach of robotics.



Author

Taylor C. Dotson is an associate professor at New Mexico Tech, a Science and Technology Studies scholar, and a research consultant with WHOA. He is the author of The Divide: How Fanatical Certitude Is Destroying Democracy and Technically Together: Reconstructing Community in a Networked World. Here he posts his thoughts on issues mostly tangential to his current research.
