To hear some of my scientist friends explain it, contemporary medicine is threatened by a tidal wave of pseudoscientific, quackish alternative practices. That narrative has always struck me as a bit of an overreaction. Even though a non-negligible percentage of people have forgone vaccines for their children and many regularly use supplements that run the gamut from the relatively harmless to the risky, the vast majority of people go see a regular doctor when they're ill. So what has some advocates of mainstream medicine in a dither? Why are they so intent on making mountains out of these molehills?
Consider a recent article, "Acupuncture Still Doesn't Work." Its author, a self-identified epidemiologist writing as Gid M-K, exerts considerable effort to twist a recent study into yet another mark against the ostensible scourge of acupuncturists. His argument rests on a study evaluating the benefits of acupuncture for acute pain: namely, that of people coming to an emergency room for low back pain, ankle sprains, and migraines. The authors of that study themselves conclude that acupuncture is comparable in efficacy to drugs, except for migraines, and they cite studies concluding that acupuncture is more effective than sham treatments. Nevertheless, Gid ends his piece with the claim that "acupuncture works no better than placebo. This has been shown time and again in studies from all over the world. There’s no reason to believe that it should work, and when you test it with robust evidence, it doesn’t." One doesn't have to be a scientist to recognize that such a large discrepancy between the evidence an author cites and their conclusions demonstrates something other than solid scientific thinking. Yet such flaws in reasoning are quite common among those, including scientists, involved in the science-pseudoscience debate. One should not be surprised that they are so common, however, for these debates are not really (or perhaps not completely) about the conduct of science but rather about the politics of expertise. Certainly the question of therapeutic efficacy remains important, and these debates do exhibit some of the qualities associated with science. Nevertheless, the political dimensions of the debate are revealed by how selectively critical questions about efficacy are applied.
If advocates for mainstream medicine were really just concerned about the harms of scientifically questionable medical interventions, they would devote more attention to the mainstream doctors and surgeons who routinely administer treatments that are out-of-date, out of line with research findings, or never proven effective in a clinical trial. The fact that some mainstream doctors' practice may be no more grounded in the weight of evidence than an acupuncturist's, however, receives scant attention. Hence, it becomes clear that the debate is not just about the efficacy or the scientific backing of different treatments but is rather a battle over who is permitted to treat illness. While there are institutions that try to combine mainstream medicine with alternative approaches, most acupuncturists are trained at different schools and are steeped in a very different medical paradigm. As a result, many in mainstream medicine appear to feel threatened when people see alternative practitioners: it is likely perceived as a threat to their standing as the preeminent experts on human health. Therefore, they engage in what science and technology studies scholars call "boundary-work": they mobilize political rhetoric aimed at keeping practices like acupuncture outside the sphere of accepted medicine in order to maintain their own relative autonomy. That is, acupuncture is viewed as a problem not simply because of its uncertain therapeutic value but because acupuncturists are seen as competing with mainstream doctors. Medicine, just like science itself, is not only about knowledge but about resources and power: Who gets to decide which treatment a patient receives? Who gets what support in the form of research dollars and insurance coverage?
Insofar as the situation is, or appears to be, zero-sum--the more support and acceptance for acupuncture, the less for mainstream medicine--advocates of mainstream medicine can be expected to react fervently, no differently than any other interest group. Because the source of the dispute is not so much scientific or empirical as political, so is the solution. The problem lies in the way we categorize medicine and health, which in turn is a legacy of the zeal of early champions of science-based medicine, who threw the baby out with the bathwater of pre-20th century medicine (much of which was no doubt harmful). Medicine became only that which could be reduced to biological mechanism. Consequently, the psychosocial facets of human health and wellness were neglected. Consider how 20th century doctors thought it more convenient to restrain pregnant women and induce a zombie-like state in them, relying on episiotomies and forceps to deliver babies. At its worst, mainstream medicine doesn't see people, only bodies needing fixing. This is why efforts toward integrative medicine are so important. Reconceptualizing patients as multifaceted persons who should be treated in mind and body eliminates the ostensible incommensurability of evidence-based medicine and treatments like acupuncture. Personally, I have little faith in the Qi-based explanations for acupuncture's efficacy. I only know that few other treatments leave me feeling as relaxed as acupuncture; few others are so good at relieving painful muscle tension without side effects. Given the risks of opioid addiction, efforts to eliminate the option of acupuncture for pain relief seem callous. No doubt some other alternative treatments are riskier than they are worth, but their following won't be diminished by advocates of mainstream medicine further entrenching themselves in the mechanistic model of 20th century medicine and stepping up their boundary-work efforts.
Indeed, that move only exposes them as more interested in their own political autonomy than in patients' well-being.
The stock phrase that “those who do not learn history are doomed to repeat it” certainly seems to hold true for technological innovation. After a team of Stanford University researchers recently developed an algorithm that they say is better at diagnosing heart arrhythmias than a human expert, all the MIT Technology Review could muster was to rhetorically ask whether patients and doctors could ever put their trust in an algorithm. I won’t dispute the potential for machine learning algorithms to improve diagnoses; however, I think we should all take issue when journalists like Will Knight depict these technologies so uncritically, as if their claimed merits would be unproblematically realized, without negative consequences.
Indeed, the same gee-whiz reporting likely accompanied the advent of computerized autopilot in the 1970s—probably with the same lame rhetorical question: “Will passengers ever trust a computer to land a plane?” Of course, we now know that the implementation of autopilot was anything but a simple story of improved safety and performance. As both Robert Pool and Nicholas Carr have demonstrated, the automation of facets of piloting created new kinds of accidents, produced by unanticipated problems with sensors and electronics as well as the eventual deskilling of human pilots. That such shallow, credulous reporting on similar automation technologies, covering not just automated diagnosis but also technologies like driverless cars, continues despite knowledge of those previous mistakes is truly disheartening. That the tendency not to dig too deeply into the potential undesirable consequences of automation technologies is so widespread is telling. It suggests that something must be acting as a barrier to people’s ability to think clearly about such technologies. The political scientist Charles Lindblom called these barriers “impairments to critical probing,” noting the role of schools and the media in helping to ensure that most citizens refrain from critically examining the status quo. Such impairments with respect to automation technologies are visible in the myriad simplistic narratives that are presumed rather than demonstrated, such as the belief that algorithms are inherently safer than human operators. Indeed, one comment on Will Knight’s article prophesied that “in the far future human doctors will be viewed as dangerous compared to AI.” Not only are such predictions impossible to justify—at this point they cannot be anything more than wildly speculative conjectures—but they fundamentally misunderstand what technology is.
Too often people act as if technologies were autonomous forces in the world, not only in the sense that they treat technological changes as foreordained and unstoppable but also in how they fail to see that no technology functions without the involvement of human hands. Indeed, technologies are better thought of as sociotechnical systems. Even a simple tool like a hammer cannot exist without underlying human organizations, which provide the conditions for its production, nor can it act in the world without having been designed to be compatible with the shape and capacities of the human body. A hammer too big to be effectively wielded by a person would be correctly recognized as an ill-conceived technology; few would fault a manual laborer forced to use such a hammer for any undesirable outcomes of its use. Yet somehow most people fail to extend the same recognition to more complex undertakings like flying a plane or managing a nuclear reactor: in such cases, the fault is regularly attributed to “human error.” How could it be fair to blame a pilot, deskilled by a job that requires almost exclusive reliance on autopilot, for mistakenly pulling up on the controls and stalling the plane during an unexpected autopilot error? The tendency to do so is a result of not recognizing autopilot technology as a sociotechnical system. Autopilot technology that leads to deskilled pilots, and hence accidents, is as poorly designed as a hammer incompatibly large for the human body: it fails to respect the complexities of the human-technology interface. Many people, including many of my students, find that chain of reasoning difficult to accept, even though they struggle to locate any fault with it. They struggle under the weight of the impairing narrative that leads them to assume that the substitution of computerized algorithms for human action is always unalloyed progress.
My students’ discomfort is only further provoked when they are presented with evidence that early automated textile technologies produced substandard, shoddy products, most likely having been implemented to undermine organized labor rather than to contribute to a broader, more humanistic notion of progress. In any case, the continued power of the automation-equals-progress narrative will likely stifle the development of intelligent debate about automated diagnosis technologies. If technological societies currently poised to begin automating medical care are to avoid repeating history, they will need to learn from past mistakes. In particular, how could AI be implemented so as to enhance the diagnostic ability of doctors rather than deskill them? Such an approach would part ways with traditional ideas about how computers should influence the work process, aiming to empower and “informate” skilled workers rather than replace them. As Siddhartha Mukherjee has noted, while algorithms can be very good at partitioning, e.g., distinguishing minute differences between pieces of information, they cannot deduce “why,” they cannot build a case for a diagnosis by themselves, and they cannot be curious. We replace humans with algorithms only at the cost of these qualities. Citizens of technological societies should demand that AI diagnostic systems be used to aid the ongoing learning of doctors, helping them to solidify hunches and avoid overlooking possible alternative diagnoses or pieces of evidence. Meeting such demands, however, may require that still other impairing narratives be challenged, particularly the belief that societies must acquiesce to the “disruptions” of new innovations as they are imagined and desired by Silicon Valley elites, or the tendency to think of the qualities of the work process last, if at all, amid all the excitement over extending the reach of robotics.
Few issues stoke as much controversy, or provoke analysis as shallow, as net neutrality.
Richard Bennett’s recent piece in the MIT Technology Review is no exception. His views represent a swelling ideological tide among certain technologists that threatens not only any possibility for democratically controlling technological change but any prospect for intelligently and preemptively managing technological risks. The only thing he gets right is that “the web is not neutral” and never has been. Yet current “net neutrality” advocates avoid seriously engaging with that proposition. What explains the self-stultifying allegiance to the notion that the Internet could ever be neutral?
Bennett claims that net neutrality has no clear definition (it does), that anything good about the current Internet has nothing to do with a regulatory history of commitment to net neutrality (something he can’t prove), and that the whole debate only exists because “law professors, public interest advocates, journalists, bloggers, and the general public [know too little] about how the Internet works.” To anyone familiar with the history of technological mistakes, the underlying presumption that we’d be better off if we just let the technical experts make the “right” decision for us—as if their technical expertise allowed them to see the world without any political bias—should be a familiar, albeit frustrating, refrain. In it one hears the echoes of early nuclear energy advocates, whose hubris led them to predict that humanity wouldn’t suffer a meltdown in hundreds of years, whose ideological commitment to an atomic vision of progress led them to pursue harebrained ideas like nuclear jets and using nuclear weapons to dig canals. One hears the echoes of those who managed America’s nuclear arsenal and tried to shake off public oversight, bringing us to the brink of nuclear oblivion on more than one occasion. Only armed with such a poor knowledge of technological history could someone make the argument that “the genuine problems the Internet faces today…cannot be resolved by open Internet regulation. Internet engineers need the freedom to tinker.” Bennett’s argument is really just an ideological opposition to regulation per se, a view based on the premise that innovation better benefits humanity if it is done without the “permission” of those potentially negatively affected. Even though Bennett presents himself as simply a technologist whose knowledge of the cold, hard facts of the Internet leads him to his conclusions, he is really just parroting the latest discursive instantiation of technological libertarianism. 
As I’ve recently argued, the idea of “permissionless innovation” is built on an (intentional?) misunderstanding of the research on how to intelligently manage technological risks, as well as on the problematic assumption that innovations, no matter how disruptive, have always worked out for the best for everyone. Unsurprisingly, the people most often championing this view are affluent white guys who love their gadgets. It is easy to have such a rosy view of the history of technological change when one is, and has consistently been, on the winning side. It is a view that is sustainable only so long as one never bothers to ask whether technological change has been an unmitigated wonder for the poor white and Hispanic farmhands who now die at relatively young ages of otherwise rare cancers, the Africans who have mined and continue to mine uranium and coltan in despicable conditions, or the permanent underclass created by continuous technological upheavals in the workplace unpaired with adequate social programs. In any case, I agree with Bennett’s argument in a later comment on the article: “the web is not neutral, has never been neutral, and wouldn't be any good if it were neutral.” Although advocates for net neutrality are demanding a very specific kind of neutrality (that ISPs not treat packets differently based on where they originate or where they’re going), the idea of net neutrality has taken on a much broader symbolic meaning, one that I think constrains people’s thinking about Internet freedoms rather than enhances it. The idea of neutrality carries so much rhetorical weight in Western societies because their cultures are steeped in a tradition of philosophical liberalism, a tradition grounded in the belief that the freedom of individuals to choose is the greatest good.
Even American political conservatives really just embrace a particular flavor of philosophical liberalism, one that privileges the freedom of supposedly individualized actors, unencumbered by social conventions or government interference, to make market decisions. Politics in nations like the US proceeds on the assumption that society, or at least parts of it, can be composed in such a way as to allow individuals to decide wholly for themselves. Hence, it is unsurprising that changes in Internet regulations provoke so much ire: The Internet appears to offer that neutral space, both in terms of the forms of individual self-expression valued by left-liberals and the purportedly disruptive market environment that gives Steve Jobs wannabes wet dreams. Neutrality is, however, impossible. As I argue in my recent book, even an idealized liberal society would have to put constraints on choice: People would have to be prevented from making their relationship or communal commitments too strong. As loath as some leftists would be to hear it, a society that maximizes citizens’ abilities for individual self-expression would have to be more extreme than even Margaret Thatcher imagined: composed of atomized individuals. Even the maintenance of family structures would have to be limited in an idealized liberal world. On a practical level, it is easy to see the cultivation of a liberal personhood in children as imposed rather than freely chosen, with one Toronto family going so far as to not assign their child a gender. On the plus side for freedom, the child now has a choice they didn’t have before. On the negative side, they didn’t get to choose whether or not they’d be forced to make that choice. All freedoms come with obligations, and often some people get to enjoy the freedoms while others must shoulder the obligations. So it is with the Internet as well.
Currently ISPs are obliged to treat packets equally so that content providers like Google and Netflix can enjoy enormous freedoms in connecting with customers. That is clearly not a neutral arrangement, even though it is one that many people (including Google) prefer. However, the more important non-neutrality of the Internet, one that I think should take center stage in debates, is that it is dominated by corporate interests. Content providers are no more accountable to the public than large Internet service providers. At least since it was privatized in the mid-90s, the Internet has been biased toward fulfilling the needs of business. Other aspirations like improving democracy or cultivating communities, if the Internet has even really delivered all that much in those regards, have been incidental. Facebook wants you to connect with childhood friends so it can show you an ad for a 90s nostalgia t-shirt design. Google wants to make sure neo-nazis can find the Stormfront website so they can advertise the right survival gear to them. I don’t want a neutral net. I want one biased toward supporting well-functioning democracies and vibrant local communities. It might be possible for an Internet to do so while providing the wide latitude for innovative tinkering that Bennett wants, but I doubt it. Indeed, ditching the pretense of neutrality would enable the broader recognition of the partisan divisions about what the Internet should do, the acknowledgement that the Internet is and will always be a political technology. Whose interests do you want it to serve?
One of the biggest challenges that social scientists should be committing themselves to solving is the question of how to enable large-scale social change. Our age is rife with injustices: growing income inequality and an increasingly brutal police-prison-industrial complex, among others. At the same time, these injustices are frustratingly chronic. Positive change, where it has occurred at all, has been ploddingly slow. I think a big contributor is the unwillingness or inability of average people to imagine change as possible, a necessary condition for them to even begin to advocate for reform. Yet, as anyone who has read the commentary on a critical article on these issues has probably seen, many Americans seem willing to spare no effort in trying to justify the status quo as either inevitable or the best of all possible worlds. As Steve Fraser argues in The Age of Acquiescence, building a more equal society will require attacking and reconceiving the narratives that today prop up the status quo.
Take college sports, arguably one of the most egregiously unjust labor systems in the US. Nowhere else can you find people laboring—indeed, college football is like a full-time job—and inflicting long-term damage on their bodies for so little reward. The NCAA generates a billion dollars in revenue, all the while players are contractually barred from reaping the fruits of their labor. As others have pointed out, the “NCAA is a plantation, and the players are the sharecroppers.” That many, if not most, of the prospective players hail from poorer, black regions of the country makes the system seem even more destructive. Football combines start to bear an eerie resemblance to the auction block when one reflects on all these similarities. The response to such observations always seems to be the same: Don’t these players voluntarily sign on the dotted line? Aren’t they free to do otherwise? The rhetoric of choice is one of the most pernicious discourses today, routinely mobilized to prevent people from digging too deeply into systematic inequalities. It is a discourse that forestalls deep thinking about the innumerable coercions faced by most people by reframing them all as choices. Consider Paul Ryan’s recent bizarre claim that cuts to Medicaid and the elimination of the ACA wouldn’t eliminate people’s healthcare: Such people would simply be “choosing” not to have it any longer. The transformation of the inability to pay for something into a free choice is one of the daftest—though politically expedient—outcomes of choice-based rhetoric. In the context of college sports, it ignores that players coming out of the most deprived areas of the country typically have few other opportunities for a college education or other routes out of poverty. The rhetoric of choice projects the latitude of choice available only to the most affluent citizens onto everyone, regardless of what their lives actually look like.
The case of college sports also illuminates how the mere possibility of success, no matter how infinitesimal, can lead people to tolerate otherwise intolerable circumstances. Compare it to the Black Mirror episode “15 Million Merits.” Work in the society depicted in this episode is unmitigated drudgery: Citizens’ work lives entail endlessly pedaling stationary bikes. Their only respite comes from a constant connection to an array of entertainment options, and their only hope for a way out lies in winning Hot Shot, an America’s Got Talent-like game show. The metaphor couldn’t be clearer: Clawing one’s way out of the doldrums of working in what David Graeber has labeled “bullshit jobs” is largely a roll of the dice, dependent on the caprice of those who do have the power to decide. The hosts of Hot Shot sit with an air of superiority, judging who is worthy and who is not—much like a few of the hosts of the show Shark Tank. Like college ball players who must subject their bodies to four years of strain for a shot at an NFL contract, some workers acquiesce to an unjust working arrangement partly because they too are caught up in dreams of getting to be one of the lucky few who strike it rich. I’m not the first to note that Americans are limited in their ability to think critically about class because of a belief that inequality is okay as long as they have a chance of being on the right side of it. A common quote, routinely misattributed to John Steinbeck, laments how “the poor [in America] see themselves not as an exploited proletariat, but as temporarily embarrassed millionaires.” The underlying narrative that success invariably comes to those who show grit and determination combines with the rhetoric of choice to forestall critical questions about the sources of poverty.
I will never forget the panicked look on the face of a student who, in a class discussion about economic fairness, tried to claim that if he were parachuted into Haiti he would be successful within six months; while uttering something horrible, he nonetheless seemed to be straining under an immense load of cognitive dissonance, attempting to resolve the conflict between a narrative that gave him hope about his own future and its implication that Haitians are somehow poor because they don’t know how to work as hard as middle-class white people. Also noteworthy in “15 Million Merits” is how those who, for whatever reason, cannot handle the strain of cycling all day are treated. They are widely abused, marked out by particular clothing, and targeted for mockery in violent video games and on television game shows—that society’s equivalent of Jerry Springer and Cops. Citizens of this imagined society, much like our own, are partly driven to labor—often to the detriment of their mental and physical well-being—by the fear of being poor and mocked and by the belief that perhaps they too can achieve a state of transcendent affluence. Who gives any thought to the hundreds or thousands of student athletes who, once injured, are often deprived of their scholarships? Often without a degree, or without one worth anything, and carrying a potentially disabling injury such as cervical spine damage, once-phenomenal athletes on the way to stardom become just more impoverished nobodies, more of the “takers” denigrated in contemporary conservative discourse. It seems to me that achieving a more just American society will not be possible without the simultaneous demise of these poverty-justifying narratives. Not only will new narratives be necessary, but they will need to be uttered by the right people.
As great as it is that attendees of Ivy League universities and participants in urban art collectives have developed counter-narratives to those that today justify status quo inequalities, it seems unlikely that such narratives will ever resonate with average citizens. A recent video by The Onion makes much the same point in satirically depicting a Trump voter whose mind was changed after reading 800 pages of queer feminist theory. In my mind, much of the humanities and social sciences are not worth the paper they have been printed on if they cannot be persuasively conveyed to non-academic—indeed, uneducated—audiences. Unfortunately, many of the academics I know are too busy denigrating Trump voters for being ignorant to consider how things might actually change.
Certainly there are things to like about the March for Science. As you are likely aware, scientists and engineers have a reputation for being politically aloof. I, for one, am glad to see events like it, which run contrary to that stereotype.
The March for Science website describes the event as a nonpartisan call for politicians to recognize that science upholds the public good: in other words, science matters. I want to push those of you reading this post to critically examine this slogan—to treat it as you would any truth claim. At face value, there seems to be little to disagree with: of course science should matter. Good luck solving any 21st century challenge without it. Hence, I think it is more interesting to ask, “Which science should matter? And how much?” Some of you may find this a provocative turn of phrase, because it applies to science a standard definition of politics: that is, politics as any answer to the question “Who gets what, when, and how?” It is a provocative question because many people, including many scientists and engineers, tend to believe that politics is everything science is not, and vice versa, which in turn supports the idea that advocating for science can be a nonpartisan activity, that it can be an apolitical social movement. To say today that science should matter, but little more than that, could be construed to imply that we ought to continue with science as we had prior to recent electoral results. Such an implication would appear rooted in the presumption that science was previously nonpartisan and only recently tainted by political agendas. Is that a wise presumption? Certainly the current administration’s attempts to excise climate science from NASA and muzzle the EPA can be recognized as political. But what about the historical relationship between science and military applications, running all the way from Archimedes to the United States today—where some $77 billion is spent on military R&D annually, compared to $69 billion on nondefense research?
What about the fact that a paltry portion of public research money is dedicated to developing non-toxic alternatives to the suspected and confirmed carcinogens and endocrine disruptors found in most consumer products, toxins which invariably end up in the environment and, thus, in human bodies? Compare that to the billions that always seem to await every new overhyped and highly risky area of innovation: nanotech, synthetic biology, and so on. I don’t assume that you will agree with my own valuation of the relative worthiness of these different areas of science, but I hope you can join me in recognizing that such discrepancies in funding and attention do not exist because one area is more scientific than the others. If historians able to study our time period even exist in 100 years, they will likely find our belief that science is nonpartisan perplexing, to say the least. How could a sophisticated society believe in such an idea when it is obvious that some areas of science matter more than others and some science gets ignored? How could it sustain such a belief when the advantages of military R&D and the harms of toxic consumer products clearly accrue more strongly to some people than others? Some clearly win from this arrangement, while others lose. I don’t say this to denigrate science but to denigrate one of the myths that undergirds the political aloofness so common among scientists and engineers. My message to you is that you are already and always partisan. That is a reality that will not disappear simply because you do not believe in it. Accepting this message, I would argue, is not as destructive as one might at first believe. Rather, I think it is freeing: it enables one to act more wisely in the world, rather than be misguided by a “flat Earth theory” of politics. There is no abyss to fall into wherein one ceases to be scientific and thereby becomes political. One is already and always both.
Therefore, it is not a question of whether science and engineering are partisan, but of what kind of partisans scientists and engineers should be: self-conscious ones, or ones asleep at the wheel? What kind of technoscientific world will you be a partisan for? Which science should matter? And how much?

To say that the case of Anna Stubblefield is controversial would be an understatement. Opinions of the former Rutgers professor, who was recently sentenced to some ten-odd years in prison on the charge of sexually assaulting a disabled man, are highly polarized. Reading the comments on recent news stories about the case, one finds not only people who find her absolutely abhorrent but also people who empathize with or support her. No doubt there are important issues to consider regarding the rights of disabled persons, professional ethics, racism, and the nature of consent. However, I want to focus on how the framing of the case as a battle between science and pseudoscience prevents us from sensibly dealing with the politics underlying the issue.

The case is strongly shaped by a broader dispute over the scientific status of “facilitated communication” (FC), a technique claimed by its advocates to allow previously voiceless people with cerebral palsy or autism to speak. As its name suggests, a facilitator helps guide the disabled person’s hand to a keyboard. In the most favorable reading of the practice, the facilitator simply balances out muscle contractions and lessens the physical barriers to typing. Others, however, see the practice as more than mere assistance: they claim that the facilitator is the one really doing the typing, whether consciously or unconsciously. In the former reading, FC is a wonderful gift for those suffering from disabilities and their families. In the latter, facilitators are charlatans, employing a pseudoscientific technique to deceive people.
This latter view seems to have won out in the case of Anna Stubblefield, who claims that DJ—a man with profound physical and suspected mental disabilities—consented to have sex with her via FC. The court ruled that FC did not meet the state’s standards for science. Hence, Stubblefield was unable to mount much of a defense vis-à-vis FC.
Most people fail to grasp, however, exactly how hard it is to distinguish science from pseudoscience—despite whatever popularizers like Neil deGrasse Tyson or Bill Nye seem to claim. Science does not simply produce unquestionable facts; rather, it is a skilled practice, and its capacity to prove truth is always partial, seen far better in hindsight than in the moment. As science and technology studies scholars well illustrate, experiments are incredibly complex—only becoming more so when their results are controversial. That many scientific activities depend heavily on the skill of the scientist is on the one hand obvious, but it nevertheless eludes most people.

Mid-20th-century experiments attempting to transfer memories (e.g., fear of the dark, how to run a maze) between planarian worms or mice exemplify this facet of science. Skeptical and supportive scientists went back and forth incessantly over methodological disagreements in trying to determine whether the observed effects were “real,” eventually considering more than 70 separate variables as possible influences on the outcome of memory transfer experiments. Even though some skeptical scientists derided skill-based variables as a so-called “golden hands” argument, there are plenty of areas of science where an experimentalist’s skill makes or breaks an experiment. Biologists, in particular, frequently lament the difficulty of keeping an RNA sample from degrading, or find themselves developing fairly eccentric protocols for getting “good” results out of a Western blot or bioassay. What some view as ad hoc “golden hands” excuses are often simply facets of performing a complex and highly sensitive procedure.

A similar dispute over the role of the practitioner’s skill makes FC controversial. After rosy beginnings, skeptical scientists produced results that cast doubt on the technique.
Experiments attempted to duplicate text generated with the help of a disabled person’s usual facilitator using a “naïve” facilitator, or posed questions to which the facilitator could not know the answer. Indeed, just such an experiment was conducted with DJ, and both sides claimed victory (Jeff McMahan and Peter Singer, for instance, argue that DJ is more cognitively able than the prosecution would have one believe). As has been the case for other controversial scientific phenomena, FC only becomes more complex the more deeply one looks into it. Advocates of the method raise their own doubts about studies claiming to disprove the technique’s effectiveness, contending that facilitation requires skills and sensitivities unique to the person being facilitated and that the stressfulness of the testing environment skews the results in the skeptics’ favor. There is enough uncertainty surrounding the abilities of those with autism or cerebral palsy to make reasonable arguments either way. Given our inability to see into the minds of people so disabled, both sides of the debate end up speaking for them in light of indirect observations.

Again, my point is not to argue one way or the other about FC but merely to point out that the phenomenon under consideration is immensely complex; we simplify it only at our peril. Indeed, the history of science and technology provides plenty of evidence suggesting that we are better off acknowledging that even today’s best science is unlikely to provide sure answers to a controversial debate. Advocates of nuclear energy, for instance, once claimed that their science proved that an accident was a near impossibility, happening perhaps once in ten thousand years. Similarly, some petroleum geology experts have claimed that it is physically impossible for fracking to introduce natural gas and other contaminants into water supplies: there is simply too much rock in between.
Yet an EPA scientist has recently produced fairly persuasive evidence to the contrary. “Settled science” rhetoric has mainly served to shut down inquiry, and the discovery of contrary findings in ensuing decades only adds support to the view that reaching something like scientific certainty is a long and difficult struggle. As a result, scientific controversies are often as much settled politically as scientifically: they are as much battles of rhetoric as of facts.

Rather than pretend that absolute certitude is possible, what if we proceeded with controversial practices like FC guided by the presumption that we might be wrong about them? What if we assumed that the method could possibly work—perhaps for a very small percentage of autistic people and those born with severe cerebral palsy—but that our ability to know for whom it works is limited, and that self-deception, of the kind many believe Anna Stubblefield fell prey to, remains a pervasive risk? The situation changes dramatically. Rather than commit oneself to the idea that something is either pure truth or complete pseudoscience, the issue can be framed in terms of risk: given that we may be wrong, who stands to gain which benefits and bear which harms? How many cases of sham communication via FC balance out the possibility of a non-communicative person losing their voice? In other words, do we prefer false positives or false negatives?

Such a perspective challenges people to think more deeply about what matters with respect to FC. Surely the prospect of disabled people being abused or killed because of communication that originates more with the facilitator than with the person being facilitated is horrifying. Yet, on the other hand, Daniel Engber describes meeting families who feel that FC has been a godsend. Even in the scenario in which FC provides only a comforting delusion, is anyone being harmed? A philosophy professor I once knew remarked that he’d take a good placebo over nothing at all any day of the week.
What grounds do we have for depriving people of a controversial (even potentially fictitious) treatment if it is not too harmful and potentially increases the well-being of at least some of the people involved? I don’t have an answer to these questions, but I do know that we cannot begin to debate them if we hide behind a simplistic partitioning of all knowledge into either science or pseudoscience, pretending that such designations can do our politics for us.

The recent vote by Congress, signed by President Trump, to eliminate the FCC rules that constrain the ability of internet service providers (ISPs) to track Internet users’ data has left a lot of people worried about their online privacy. Indeed, the average person’s search history does not merely reflect their consumer desires but also exposes their most personal secrets, worries, and anxieties, along with their health, relationship, and financial struggles. Virtual private networks (VPNs), which hide a user’s Internet behavior by funneling their transmissions through a third party, have been touted as a way for average people to protect themselves from being tracked by their ISP. VPNs don’t work, however, though not for the reason one might think. They represent a technical fix for a problem that ultimately requires a political solution.
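The funneling just described can be sketched in a few lines. This is a toy model, not a real networking implementation, and the hostnames (`vpn.example.net` and the `.example` sites) are purely illustrative; it only shows what an ISP's logs would contain with and without a tunnel.

```python
# Toy model of ISP visibility with and without a VPN tunnel.
# All hostnames here are hypothetical placeholders.

def isp_sees(destination, vpn=None):
    """Return the hostname an ISP could log for one request."""
    if vpn is None:
        return destination  # no tunnel: the ISP sees the real site
    return vpn              # tunneled: the ISP sees only the VPN endpoint

sites = ["health-forum.example", "bank.example", "news.example"]

without_vpn = [isp_sees(s) for s in sites]
with_vpn = [isp_sees(s, vpn="vpn.example.net") for s in sites]

# without_vpn lists every destination; with_vpn is just repeated
# connections to the VPN provider. Note the catch: the full browsing
# history hasn't vanished -- it has moved to the VPN provider's servers.
```

The last comment is the crux of the essay's argument: the tunnel relocates the trust problem rather than solving it.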
Much of the public’s imagination for solving collective problems like privacy is stunted by the belief that technology can and will come to our rescue. Even though most people would, upon reflection, recognize that technologies are more than merely the gadgets in our pockets or on our desks, they nevertheless act as if those gadgets were not tightly intertwined with larger technical, cultural, and political systems. We look to solar panels to save us from climate change, presuming them to be unquestionably “green” despite their resource intensity and all the pollution that results from their production and disposal. We ignore that rebound effects and consumer behavior often cancel out technical improvements in efficiency. The trust put in VPNs to solve the problem of Internet privacy reflects a similar ignorance of the broader sociopolitical context of communication technologies.

In fact, VPN services belong to a class of technical fixes to collective problems already well studied by sociologists: inverted quarantines. Consider citizens’ response to the prospect of nuclear attack during the Cold War: building backyard bunkers. Another example is people buying bottled water to protect themselves from perceived contaminants in their municipal supply. VPNs are like digital bunkers: users wrap themselves in a protective digital cocoon to shield just themselves (hopefully) from online privacy threats.

Inverted quarantines are deceptively alluring solutions to collective problems, with three main limitations. First, they are individualistic and, hence, class-based solutions. Simply put, the level of protection one receives depends upon a willingness and ability to pay. VPN protection will run a person anywhere from five dollars a month for a basic proxy to several times that amount in monthly fees—plus a several-hundred-dollar VPN router—for those wanting a premium level of privacy.
Just as bottled water frames access to clean drinking water as a marker of status, the perceived need to rely on VPNs transforms privacy from a right into a luxury good available mainly to the middle and upper classes.

The second limitation of inverted quarantines is that they are often imaginary refuges. For instance, some studies have found that bottled water is often no cleaner and tastes little better than many cities’ municipal supply; some bottled waters even have higher levels of certain contaminants. Likewise, the tragic irony of the bunker-building mania of the 1950s was that backyard shelters provided a cruel illusion of safety: no family had any hope of surviving the aftermath of a nuclear war—even if they were lucky enough to make it through the initial blast. Similarly, observers have pointed out that VPNs come with significant costs in terms of access speed, and many sites (like Netflix) will not provide access to their content if you’re using one.

Most importantly, VPN services only protect a user’s privacy insofar as they can be trusted not to gather and store personal data themselves. There are plenty of reasons to suspect that many VPN services will begin to collect data. As for-profit companies, they are committed to users’ rights only so long as violating those rights would be unprofitable or illegal. As Internet scholar Lawrence Lessig noted, the pressure to reduce people’s online privacy comes more from the market than from governments. Little would prevent VPN companies from behaving like other tech firms, such as Facebook, quietly altering privacy agreements whenever it suited them. Hence, citizens should avoid falling prey to naïve optimism about the intentions of these companies. Unfortunately, the general tendency is to see them as saviors. Indeed, whenever net neutrality rules are at risk, large corporate firms like Facebook and Google get framed as warriors for civil liberties.
Citizens forget that these firms’ belief in net neutrality has little to do with freedom and everything to do with the bottom line: Google and Facebook maximize their earning potential when users are free to consume as much content as they desire (and hence provide them with more data to mine and analyze). Unsurprisingly, the sites most likely to take a financial hit from users growing worried about who is tracking their behavior, such as pornography sites, are among the few firms to express much concern about changes to FCC rules.

The biggest unintended consequence of VPN-based inverted-quarantine solutions to Internet privacy is that they act as political anesthetics. Users who can afford them will be tempted to say, “I’m protected. What do I care about FCC rules?” Other cases of inverted quarantine leave little reason for optimism. Citizens who can afford expensive water filters, or who sidestep other environmental concerns by buying organic food or pricey non-toxic alternatives to ordinary consumer products, end up advocating less strongly for new EPA or FDA regulations, even when they care deeply about the environment. The issue may matter to them, but it has less salience: they won’t vote someone out of office for undoing environmental legislation. Likewise, citizens who protect themselves with VPNs will not feel as strong a motivation to remove from office the politicians who voted to eliminate FCC regulations.

VPN services are at best a temporary solution. At worst they will distract Americans from the heart of the problem: a corporate-dominated Internet. Shoddy Band-Aid technical fixes don’t address the fact that the current Internet is built on advertising. The entire economic basis of the contemporary online world presents a conflict of interest with regard to users’ right to privacy. Without changes to the Internet as a sociotechnical system, corporate firms will not stop trying to dig ever deeper into users’ private lives.
No doubt it is hard to imagine what a completely public alternative might look like, but that shouldn’t stop people from starting to dream up different designs. We might start with the creation of municipal service providers, perhaps combined with community-run mesh networks. In any case, the economic arrangement through which Internet access is provided is socially constructed: things could be otherwise. In the same way that many societies have considered goods like healthcare or electricity too important to leave completely up to for-profit firms and the market, information access and privacy could come to be treated more as a public good than as a privatized, ad-driven commodity.

Andrew Potter’s recent Maclean’s article claiming that Quebec suffers from a pathological degree of social malaise has certainly raised eyebrows. Indeed, he has since resigned from one of his posts at McGill University in response to public outcry—and, no doubt, to the university administration’s view of the matter. I won’t delve into the question of the damage to academic freedom that this resignation may or may not represent; rather, I take issue with the way Potter charts Quebec’s purported social decline, seeing it as reflective of a widespread failure to grasp the diverse character of social community.
On the one hand, some of the statistics Potter cites to support his case are alarming, especially those regarding the relatively small size of social networks and the low volunteering rates in Quebec. On the other hand, Quebec is noteworthy for having one of the highest rates of happiness and social well-being in Canada. At a minimum, this apparent discrepancy needs to be explained. One would scarcely imagine that a province suffering from widespread social malaise could be simultaneously happy.

Potter, moreover, draws heavily on Robert Putnam’s concept of social capital, which posits that certain social and political activities help build the civic foundation for well-functioning democratic societies. Being familiar with Putnam’s work—it has inspired my own research into the character of contemporary community life—I think that Potter has taken some liberties with it. Sure, volunteering may be low, but Quebecers are known for being politically active, which is another contributor to and reflection of social capital. At the same time, Potter seems to conflate social capital with conformance to a non-Quebecer’s idea of law and order. He cites the colorful pants worn by police as a sign of the corrosion of “social cohesion and trust in institutions.” While I am not an expert on Quebecois culture, it is hard not to see this as reflecting an English-Canadian’s cultural bias. Indeed, the impulse to denigrate protest and collective bargaining disproportionately afflicts Anglophones. For those less afflicted, the camo pants might instead evoke a feeling of solidarity. Left-wing Americans, for the sake of comparison, rarely decry the blocking of streets and highways during protests as the demise of social cohesion. And that is not the only place where Potter could have been more sensitive to how cultural differences make social issues much more complex than one might initially think.
He cites, for instance, the fact that far fewer Quebecers express the belief that “most people can be trusted.” As a social scientist, Potter should readily acknowledge that cultural differences can have a big impact on survey data. It is often claimed—on the basis of survey research—that Asian countries are much less happy than those in the West. However, once one recognizes that readily labeling oneself as happy conflicts with Asian expectations of modesty, such interpretations of the survey data soon seem dubious. Given Quebec’s rates of happiness and its high marks on other dimensions of social capital, one wonders if individually low levels of expressed trust simply reflect a cultural hesitancy to seem too trusting or gullible.

Some of the confusion in Potter’s piece may result from not explicitly acknowledging different scales of analysis. Quebec is unique among the provinces in its social policy (i.e., l’économie sociale): heavily subsidized daycare, generous support of cooperatives, high labor participation, etc. In many ways its citizens are more communitarian than people elsewhere, but more at the level of the province than of the locality or nation, and more via official politics than through non-governmental volunteering. Maybe they don’t have quite the ideal mixture by some accounts, but it seems hyperbolic to argue that the whole society is in a state of alienated malaise.

In any case, both the controversy over Potter’s article and its analytical limitations suggest the need for far better understandings of what community is. The term evokes a fuzzy, warm feeling in some people and worries about the suppression of individuality in others. At the same time, few people seem aware of exactly what they mean by the word, using it to describe racial groups (e.g., the black community), online forums, and physical places—even though none of these things are communal in remotely the same way.
Community is a multi-scalar, multi-dimensional, and highly diversified phenomenon. The sooner people recognize that, the sooner we can start to have a more productive public conversation about what might be missing in contemporary forms of togetherness and how we might collectively realize more fulfilling alternatives.

Adam Nossiter has recently published in the New York Times a fascinating look at the decline of small and medium-sized French cities. I recommend not only reading the article but also perusing the comments section, for the latter gives some insight into the larger psycho-cultural barriers to realizing thicker communities.
Nossiter's article is a lament over the gradual economic and social decline of Albi, a small city of around 50,000 inhabitants not far from Toulouse. He is troubled by the extent to which the once-vibrant downtown has become devoid of social and economic activity, apart, that is, from the periodic influx of tourists interested in its rustic charm as a medieval-era town. Nossiter's piece, however, is not a screed against tourists; rather, he notes that the large proportion of visitors can prevent one from noticing that the town itself now has few amenities to offer locals: it has a single bakery and no local butcher, grocery, or cafe. Residents meet their needs at the supermarkets and malls on the outskirts of town.

One might be tempted to dismiss Nossiter's concerns as mere "nostalgia" in the face of "real progress." Indeed, many of those commenting on the article do just that, suggesting that young people want the exciting nightlife offered by nearby metropolises and that local shops are relics of the past, destined to be destroyed by the ostensibly lower prices and greater efficiency of malls and big-box stores. It is unwise to do so, however, if one wishes to think carefully and intelligently about the issue. Appeals to progress and inevitability are not so much statements of fact—indeed, the evidence to back them up is quite limited—as rhetorical moves meant to shut down debate; their aim, intentionally or not, is to naturalize a process that is actually sociopolitical. If France is at all like the United States, and I suspect it is, the erection of malls was nothing preordained but a product of innumerable policy decisions and failures of foresight. So contingent was the outcome on these external variables that it seems obtuse to claim it was the result of simply providing consumers with what they wanted.
Readers interested in the details can look forward to my soon-to-be-released book Technically Together (MIT Press). For the purposes of this post I can only summarize a few of the ways in which downtown economic decay is not inevitable. The ability of a big-box store or mall to turn a profit depends on far more than the owner's business acumen. Such stores are only attractive to the extent that governments spend public funds to make them easy to get to. Indeed, big-box prices are low enough to attract Americans partly because of the invisible subsidy provided by citizens' tax dollars for building roads and highways. Many, if not most, malls and big-box stores were built with public funds, either as the result of favorable tax deductions offered by municipalities or through schemes like tax-increment financing. Lacking the political clout of the average corporate retailer, a local butcher is unlikely to receive the same deal.

Other forms of subsidy are more indirect. Few shoppers factor in the additional costs of gasoline or car repairs when pursuing exurban discount shopping. Given AAA's estimate that driving costs in excess of ten thousand dollars per year, the full cost of a ten-mile drive to the mall is significant, even if it is not salient to consumers; indeed, they have forgotten it by the time they arrive at the register. Moreover, what about the additional health care costs incurred by driving rather than walking, or the psychic costs of living in areas no longer offering embodied community? Numerous studies have found that local community is one of the biggest contributors to a long life and a spry old age. It seems unlikely to be mere coincidence that Americans have become increasingly medicated against psychological disorders as their previously thick communities have fragmented into diffuse social networks. While these costs do not factor into the prices consumers enjoy via discount exurban shopping, citizens still pay them.
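The arithmetic behind that "significant" cost is worth making explicit. A back-of-the-envelope sketch, assuming a typical 15,000 miles driven per year (the annual mileage is my assumption; only the roughly $10,000 annual figure comes from the AAA estimate cited above):

```python
# Rough per-mile cost of driving, from AAA's ~$10,000/year figure.
ANNUAL_COST_DOLLARS = 10_000   # AAA estimate (lower bound, per the text)
ANNUAL_MILES = 15_000          # assumed typical annual mileage

cost_per_mile = ANNUAL_COST_DOLLARS / ANNUAL_MILES  # roughly $0.67/mile

# A ten-mile drive to the mall is a twenty-mile round trip.
round_trip_cost = 20 * cost_per_mile

print(f"~${cost_per_mile:.2f}/mile, ~${round_trip_cost:.2f} per mall trip")
```

On these assumptions a single trip hides over thirteen dollars of driving cost, which can easily exceed the savings on a basket of discounted goods.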
Despite the fact that these sociopolitical drivers are fairly obvious to anyone who takes the time to think about them, "just so" stories that explain the status quo as the inexorable march of progress remain predominant. Psychologists have theorized that the power of such stories stems from the intense psychological discomfort many people would feel if faced with the possibility that the world as they know it is unjust or was arrived at via less-than-fair means. Progress narratives are just one of the ways in which citizens psychically shore up an arbitrary and, in the view of many, undesirable status quo. Indeed, Americans, as well as Europeans and others to an increasing extent, seem to have an intense desire to justify the present by appealing to abstract "market forces." Yale political economist Charles Lindblom argued that the tendency of citizens to reason their way into believing that what is good for economic elites is good for everyone was one of the main sources of business's relatively privileged position in society. In fact, many people go so far as to talk as if the market were a dangerous but nonetheless productive animal that one must placate with favorable treatment and a long leash, apparently not realizing that acting in accordance with such logic makes the market system seem less like a beacon of freedom and more like a prison. One thing remains certain: as long as citizens think and act as if changes like the economic decline of downtowns in small cities are merely the price of progress, it will be impossible to do anything but watch them decay.

1/16/2017 — CfP: Can Improved Science and Technology Mean Progress? (4S Annual Meeting, Boston, 2017)

Call for Papers
Open Panel: Can Improved Science and Technology Mean Progress? More Intelligently Steering Technoscientific Systems

Organized for the Annual 4S Meeting to be held in Boston, Massachusetts, August 30-September 2, 2017

Description

Must technoscientific “progress” proceed so technocratically? Dominant innovation discourses implicitly support the view that scientific knowledge and technological innovation automatically translate into improved living. Such a view has led to the promotion of “permissionless innovation,” an ideology that legitimates a hands-off approach to the “disruptive technologies” designed by Silicon Valley entrepreneurs and freedom of research in their R&D departments. However, scholars have shown that sociotechnical innovations typically benefit some people and organizations more than others. Thus it is clear to many within STS that those wishing to enact non-technocratic visions of progress face social as well as technical barriers. To mitigate or head off the worst consequences of permissionless innovation and other discourses that naturalize the politics of technoscientific change, scholars must consider alternative ways of steering technoscientific agendas aside from allowing small groups of politically and financially powerful elites to make most of the decisions. How might new technologies and research programs be shaped to be more suitable for public purposes before markets let them loose into the world? The purpose of this panel is to explicitly examine what would be required to guide science and technology toward better fulfilling more humans’ needs more of the time. Possible topics include, but are not limited to: mechanisms for slowing the pace of technoscientific change, addressing the privileged position of particular decision-makers, counteracting the subtle effects of “permissionless innovation” and other naturalizing discourses, and better enabling lay citizens and experts to critically probe the politics of innovation.
Submission

Deadline: March 1, 2017. Submit paper, session, and making and doing proposals here: https://convention2.allacademic.com/one/ssss/4s17/ Please check the box to submit your paper to the open panel "Can Improved Science and Technology Mean Progress? More Intelligently Steering Technoscientific Systems." You can find more details about the conference at http://www.4sonline.org/meeting

Organizers

For more information contact: Taylor Dotson, New Mexico Tech, Taylor.Dotson@nmt.edu; Michael Bouchey, Rensselaer Polytechnic Institute, bouchm4@rpi.edu
Author: Taylor C. Dotson is an associate professor at New Mexico Tech, a Science and Technology Studies scholar, and a research consultant with WHOA. He is the author of The Divide: How Fanatical Certitude is Destroying Democracy and Technically Together: Reconstructing Community in a Networked World. Here he posts his thoughts on issues mostly tangential to his current research.