Are Americans losing their grip on reality? It is difficult not to think so in light of the spread of QAnon conspiracy theories, which posit that a deep-state ring of Satanic pedophiles is plotting against President Trump. A recent poll found that some 56% of Republican voters believe that at least some of the QAnon conspiracy theory is true. But conspiratorial thinking has been on the rise for some time. One 2017 Atlantic article claimed that America had “lost its mind” in its growing acceptance of post-truth. Robert Harris has more recently argued that the world has moved into an “age of irrationality.” Legitimate politics is threatened by a rising tide of unreasonableness, or so we are told. But the urge to divide people into the rational and the irrational is the real threat to democracy. And the antidote is more inclusion, more democracy—no matter how outrageous the things our fellow citizens seem willing to believe.
Despite recent panic over the apparent upswing in political conspiracy thinking, salacious rumor and outright falsehood have been ever-present features of politics. Today’s lurid and largely evidence-free theories about left-wing child abuse rings have plenty of historical analogues. Consider tales of Catherine the Great’s equestrian dalliances and claims that Marie Antoinette found lovers both among court servants and within her own family. Absurd stories about political elites seem to have been anything but rare. Some of my older relatives believed in the 1990s that the government was storing weapons and spare body parts underneath Denver International Airport in preparation for a war against common American citizens—and that was well before the Internet was a thing.

There seems to be little disagreement that conspiratorial thinking threatens democracy. Allusions to Richard Hofstadter’s classic essay on the “paranoid style” in American politics have become cliché. Hofstadter’s targets included 1950s conservatives who saw Communist treachery around every corner, 1890s populists railing against the growing power of the financial class, and widespread worries about the machinations of the Illuminati. He diagnosed their politics as paranoid in light of their shared belief that the world was being persecuted by a vast cabal of morally corrupt elites. Regardless of their specific claims, conspiracy theories’ harms come from their role in “disorienting” the public, leading citizens to have grossly divergent understandings of reality. And widespread conspiratorial thinking drives the delegitimation of traditional democratic institutions like the press and the electoral system. Journalists are seen as pushing “fake news.” The voting booths become “rigged.”

Such developments are no doubt concerning, but we should think carefully about how we react to conspiracism. Too often the response is to endlessly lament the apparent end of rational thought and wonder aloud whether democracy can survive while gripped by a form of collective madness. But focusing on citizens’ perceived cognitive deficiencies presents its own risks. Historian Ted Steinberg called this the “diagnostic style” of American political discourse, which transforms “opposition to the cultural mainstream into a form of mental illness.” The diagnostic style leads us to view QAnoners, and increasingly political opponents in general, as not merely wrong but cognitively broken. They become the anti-vaxxers of politics. While QAnon believers certainly seem to be deluding themselves, isn’t the tendency among leftists to blame Trump’s popular support on conservatives’ faulty brains and an uneducated or uninformed populace equally delusional?

The extent to which such cognitive deficiencies are actually at play is beside the point as far as democracy is concerned. You can’t fix stupid, as the well-worn saying has it. Diagnosing chronic mental lapses actually leaves us very few options for resolving conflicts. Even worse, it prevents an honest effort to understand and respond to the motivations of people with strange beliefs. Calling people idiots will only cause them to dig in further. Responses to the anti-vaxxer movement show as much. Financial penalties and other compulsory measures tend only to anger vaccine-hesitant parents, leading them to more often refuse voluntary vaccines and become more committed in their opposition. But it does not take a social scientific study to know this.
Who has ever changed their mind in response to the charge of stupidity or ignorance? Dismissing people with conspiratorial views blinds us to something important. While the claims themselves might be far-fetched, people often have legitimate reasons for believing them. African Americans, for instance, disproportionately believe conspiracy theories regarding the origin of HIV, such as that it was man-made in a laboratory or that the cure was being withheld, and are more hesitant about vaccines. But they also rate higher in distrust of medical institutions, often pointing to the Tuskegee Syphilis Study and ongoing racial disparities as evidence. And from British sheep farmers’ suspicion of state nuclear regulators in the aftermath of Chernobyl to mask skeptics’ current jeremiads against the CDC, governmental mistrust has often developed after officials’ overconfident claims about the risks turned out to be inaccurate. What might appear to be an “irrational” rejection of the facts is often a rational response to a power structure that feels distant, unresponsive, and untrustworthy.

The influence of psychologists has harmed more than it has helped in this regard. Carefully designed studies purport to show that believers in conspiracy theories lack the ability to think analytically or claim that they suffer from obscure cognitive biases like “hypersensitive agency detection.” Recent opinion pieces exaggerate the “illusory truth effect,” a phenomenon discovered in psych labs whereby repeated exposure to false messages leads to a relatively slight increase in the number of subjects rating them as true or plausible. The smallness of this albeit statistically significant effect doesn’t stop commentators from presenting social media users as if they were passive dupes, who only need to be told about QAnon so many times before they start believing it. Self-appointed champions of rationality have spared no effort to avoid thinking about the deeper explanations for conspiratorial thinking.

Banging the drum over losses in rationality will not get us out of our present situation. Underneath our seeming inability to find more productive political pastures is a profound misunderstanding of what makes democracy work. Hand-wringing over “post-truth” or conspiratorial beliefs is founded on the idea that the point of politics is to establish and legislate truths. Once that is your conception of politics, the trouble with democracy starts to look like citizens with dysfunctional brains. When our fellow Americans are recast as cognitively broken, it becomes all too easy to believe that it would be best to exclude or diminish the influence of people who believe outrageous things. Increased gatekeeping within the media or by party elites and scientific experts begins to look really attractive. Some, like philosopher Jason Brennan, go even further. His 2016 book, Against Democracy, contends that the ability to rule should be limited to those capable of discerning and “correctly” reasoning about the facts, while largely sidestepping the question of who decides what the right facts are and how to know when we are correctly reasoning about them. But it is misguided to think that making our democracy more elitist will throttle the wildfire spread of conspiratorial thinking. If anything, doing so will only temporarily contain populist ferment, letting pressure build until it eventually explodes or (if we are lucky) economic growth leads it to fizzle out.
Political gatekeeping, by mistaking supposed deficits in truth and rationality for the source of democratic discord, fails to address the underlying cause of our political dysfunction: the lack of trust. Signs of our political system’s declining legitimacy are not difficult to find. A staggering 71 percent of Americans believe that elected officials don’t care about the average citizen or what they think. Trust in our government has never been lower, with only 17 percent of citizens expressing confidence in Washington most or all of the time.

By diagnosing rather than understanding, we cannot see that conspiratorial thinking is the symptom rather than the disease. The spread of bizarre theories about COVID-19 being a “planned” epidemic or about child-abuse rings is a response to real feelings of helplessness, isolation, and mistrust as numerous natural and manmade disasters unfold before our eyes—epochal crises that governments seem increasingly incapable of getting a handle on. Many of Hofstadter’s listed examples of conspiratorial thought came during similar moments: at the height of the Red Scare and Cold War nuclear brinkmanship, during the 1890s depression, or in the midst of pre-Civil War political fracturing. Conspiracy theories offer a simplified world of bad guys and heroes. A battle between good and evil is a more satisfying answer than the banality of ineffectual government and flawed electoral systems when one is facing wicked problems.

Perhaps social media adds fuel to the fire, accelerating the spread of outlandish proposals about what ails the nation. But it does so not because it short-circuits our neural pathways and crashes our brains’ rational thinking modules. Conspiracy theories are passed along by word of mouth (or Facebook likes) by people we already trust. It is no surprise that they gain traction in a world where satisfying solutions to our chronic, festering crises are hard to find, and where most citizens are afforded neither a legible glimpse into the workings of the vast political machinery that determines much of their lives nor the chance to substantially influence it.

Will we be able to reverse course before it is too late? If mistrust and unresponsiveness are the cause, the cure should be an effort to reacquaint Americans with the exercise of democracy on a broad scale. Hofstadter himself noted that, because the political process generally affords more extreme sects little influence, public decisions only seem to confirm conspiracy theorists’ belief that they are a persecuted minority. The urge to completely exclude “irrational” movements forgets that finding ways to partially accommodate their demands is often the more effective strategy. Allowing for conscientious objections to vaccination effectively ended the anti-vax movement in early 20th-century Britain. Just as interpersonal conflicts are more easily resolved by acknowledging and responding to people’s feelings, our seemingly intractable political divides will only become productive by allowing opponents to have some influence on policy. That is not to say that we should give in to all their demands. Rather, it is only that we need to find small but important ways for them to feel heard and responded to, with policies that do not place unreasonable burdens on the rest of us. While some might pooh-pooh this suggestion, pointing to conspiratorial thinking as evidence of how ill-suited Americans are for any degree of political influence, this gets the relationship backwards.
Wisdom isn’t a prerequisite to practicing democracy, but an outcome of it. If our political opponents are to become more reasonable it will only be by being afforded more opportunities to sit down at the table with us to wrestle with just how complex our mutually shared problems are. They aren’t going anywhere, so we might as well learn how to coexist.
There has been no shortage of (mainly conservative) pundits and politicians suggesting that the path to fewer school shootings is armed teachers—and even custodians. Although it is entirely likely that such recommendations are not really serious but rather meant to distract from calls for stricter gun control legislation, it is still important to evaluate them. As someone who researches and teaches about the causes of unintended consequences, accidents, and disasters for a living, I find the idea that arming public school workers will make children safer highly suspect—but not for the reasons one might think.
If there is one commonality across myriad cases of political and technological mistakes, it would be the failure to acknowledge complexity. Nuclear reactors designed for military submarines were scaled up by more than an order of magnitude for civilian power plants without sufficient recognition of how that affected their safety. Large reactors can get so hot that containing a meltdown becomes impossible, forcing managers to be ever vigilant to the smallest errors and to install backup cooling systems—which only added difficult-to-manage complexity. Designers of autopilot systems neglected to consider how automation hurt the abilities of airline pilots, leading to crashes when the technology malfunctioned and now-deskilled pilots were forced to take over. A narrow focus on applying simple technical solutions to complex problems generally leads to people being caught unawares by ensuing unanticipated outcomes.

Debate about whether to put more guns in schools tends to emphasize the solution’s supposed efficacy. Given that even the “good guy with a gun” best positioned to stop the Parkland shooting failed to act, can we reasonably expect teachers to do much better? In light of the fact that mass shootings have occurred even at military bases, what reason do we have to believe that filling educational institutions with armed personnel will reduce the lethality of such incidents? As important as these questions are, they divert our attention from the new kinds of errors produced by applying a simplistic solution—more guns—to a complex problem.

A comparison with the history of nuclear armaments should give us pause. Although most Americans during the Cold War worried about a potential atomic war with the Soviets, Cubans, or Chinese, many of the real risks associated with nuclear weapons involved accidental detonation. While many believed during the Cuban Missile Crisis that total annihilation would come from nationalistic posturing and brinkmanship, it was actually ordinary incompetence that brought us closest. Strategic Air Command’s insistence on maintaining U-2 and B-52 flights and intercontinental ballistic missile tests during periods of heightened tension risked a military response from the Soviet Union: pilots invariably got lost and approached Soviet airspace, and missile tests could have been misinterpreted as malicious. Malfunctioning computer chips made NORAD’s screens light up with incoming Soviet missiles, leading the US to prep and launch nuclear-armed jets. Nuclear weapons stored at NATO sites in Turkey and elsewhere were sometimes guarded by a single American soldier. Nuclear-armed B-52s crashed or accidentally released their payloads, with some coming dangerously close to detonation.

Much the same would be true for the arming of school workers: the presence of guns and the likelihood of routine human error would put children at risk. Millions of potentially armed teachers and custodians translate into an equal number of opportunities for a troubled student to steal weapons that would otherwise be difficult to acquire. Some employees are likely to be as incompetent as Michelle Ferguson-Montgomery, a teacher who shot herself in the leg at her Utah school—though others may not be so lucky as to avoid hitting a child. False alarms will result not simply in lockdowns but in armed adults roaming the halls and, as a result, the possibility of children being killed for holding cellphones or other objects that can be confused for weapons. Even “good guys” with guns miss the target at least some of the time.
The most tragic unintended consequence, however, would be how arming employees would alter school life and the personalities of students. Generations of Americans suffered mentally under Cold War fears of nuclear war. Given the unfortunate ways that many from those generations now think in their old age (prone to hyper-partisanship, hawkish in foreign affairs, and excessively fearful of immigrants), one worries how a generation of kids brought up in quasi-militarized schools could be rendered incapable of thinking sensibly about public issues—especially when it comes to national security and crime.

This last consequence is probably the most important one. Even though more attention ought to be paid to the accidental loss of life likely to be caused by arming school employees, it is far too easy to endlessly quibble about the magnitude and likelihood of those risks. That debate is easily scientized and thus dominated by a panoply of experts, each claiming to provide an “objective” assessment of whether the potential benefits outweigh the risks. The pathway out of the morass lies in focusing on values, on how arming teachers—and even “lockdown” drills—fundamentally disrupts the qualities of childhood that we hold dear. Transforming schools into places defined by a constant fear of senseless violence means they cannot feel as warm, inviting, and communal as they otherwise could. We should be skeptical of any policy that promises greater security only at the cost of the more intangible features of life that make it worth living.

As a scholar concerned with the value of democracy in contemporary societies, especially with respect to the challenges presented by increasingly complex (and hence risky) technoscience, I find that a good check on my views is to read arguments by critics of democracy. I had hoped Jason Brennan's Against Democracy would force me to reconsider some of the assumptions that I had made about democracy's value and perhaps even modify my position. Hoped.
Having read through a few chapters, I am already disappointed and unsure if the rest of the book is worth my time. Brennan's main assertion is that because some evidence shows that participation in democratic politics has a corrupting influence--that is, participants are not necessarily well informed and often end up becoming more polarized and biased in the process--we would be better off limiting decision-making power to those who have proven themselves sufficiently competent and rational: an epistocracy. Never mind the absurdity of the idea that a process for judging those qualities in potential voters could ever be designed in an apolitical, unbiased, or just way; Brennan does not even begin with a charitable or nuanced understanding of what democracy is or could be.

One early example that exposes the simplicity of Brennan's understanding of democracy--and perhaps even the circularity of his argument--is a thought experiment about child molestation. Brennan asks the reader to consider a society that has deeply deliberated the merits of adults raping children and subjected the decision to a majority vote, with the yeas winning. Brennan claims that because the decision was made in line with proper democratic procedures, advocates of a proceduralist view of democracy must see it as a just outcome. Due to the clear absurdity and injustice of this result, we must therefore reject the view that democratic procedures (e.g., voting, deliberation) are themselves inherently just.

What makes this thought experiment so specious is that Brennan assumes that one relatively simplistic version of a proceduralist, deliberative democracy can represent the whole. Even worse, his assumed model of deliberative democracy--ostensibly not too far from what already exists in most contemporary nations--is already questionably democratic. Not only are majoritarian decision-making and procedural democracy far from equivalent, but Brennan makes no mention of whether children themselves were participants in either the deliberative process or the vote, or would even have a representative say through some other mechanism. Hence, in this example Brennan actually ends up showing the deficits of a kind of epistocracy rather than democracy, insofar as the ostensibly more competent and rationally thinking adults are deliberating and voting for children. That is, political decisions about children already get made by epistocrats (i.e., adults) rather than democratically (understood as people having influence in deciding the rules by which they will be governed on the issues they have a stake in). Moreover, any defender of the value of democratic procedures would likely counter that a well-functioning democracy would contain processes to amplify or protect the say of less empowered minority groups, whether through proportional representation or mechanisms to slow down policy or to force majority alliances to make concessions or compromises. It is entirely unsurprising that democratic procedures look bad when one's stand-in for democracy is winner-take-all, simple majoritarian decision-making. His attack on democratic deliberation is equally short-sighted.
Observing, quite rightly, that many scholars defend deliberative democracy with purely theoretical arguments while much of the empirical evidence shows that many average people dislike deliberation and are often very bad at it, Brennan concludes that, absent promising research on how to improve the situation, there is no logical reason to defend deliberative democracy. This is where Brennan's narrow disciplinary background as a political theorist biases his viewpoint. It is not at all surprising to a social scientist that average people would neither deliberate well nor enjoy it when nearly the entirety of contemporary society fails to prepare them for democracy. Most adults have spent 18 years or more in schools and up to several decades in workplaces that do not function as democracies but rather are authoritarian, centrally planned institutions. Empirical research on deliberation has merely uncovered the obvious: people with little practice with deliberative interactions are bad at them. Imagine if an experiment put assembly line workers in charge of managing General Motors, then justified the current hierarchical makeup of corporate firms by pointing to the resulting non-ideal outcomes. I see no reason why Brennan's reasoning about deliberative democracy is any less absurd.

Finally, Brennan's argument rests on a principle of competence--and concurrently the claim that citizens have a right to governments that meet that principle. He borrows the principle from medical ethics, namely that a patient is competent if they are aware of the relevant facts, can understand them, appreciate their relevance, and can reason about them appropriately. Brennan sidesteps the obvious objections about how any of the judgments about relevance and appropriateness could be made in non-political ways, merely claiming that the principle is unobjectionable in the abstract. Certainly, for the simplified examples that he provides of plumbers unclogging pipes and doctors treating patients with routine conditions, the validity of the principle of competence is clear. However, for the most contentious issues we face--climate change, gun control, genetically modified organisms, and so on--the facts and the reliability of experts are themselves in dispute. What political system would best resolve such a dispute? Obviously it could not be an epistocracy, given that the relevance and appropriateness of the "relevant" expertise is itself the issue to be decided. Perhaps Brennan's suggestions have some merit, but absent a non-superficial understanding of the relationship between science and politics, the foundation of his positive case for epistocracy is shaky at best. His oft-repeated assertion that epistocracy would likely produce more desirable decisions is highly speculative.

I plan on continuing to examine Brennan's arguments regarding democracy, but I find it ironic that his argument against average citizens--that they suffer too much from various cognitive maladies to reason well about public issues--applies equally to Brennan himself. Indeed, the hubris of most experts is deeply rooted in their unfounded belief that a little learning has freed them from the mental limitations that afflict the less educated. In reality, Brennan is a partisan like anyone else, not a sagely academic doling out objective advice.
Whether one turns to epistocratic ideas in light of the limitations of contemporary democracies or advocates for ensuring the right preconditions for democracies to function better comes back to one's values and political commitments. So far it seems that Brennan's book demonstrates his own political biases as much as it exposes the ostensibly insurmountable problems for democracy.
It is hard to imagine anything more damaging to the movements for livable minimum wages, greater reliance on renewable energy resources, or workplace democracy than the stubborn belief that one must be a “liberal” to support them. Indeed, the common narrative that associates energy efficiency with left-wing politics leads to absurd actions by more conservative citizens. Not only do some self-identified conservatives intentionally make their pickup trucks more polluting at high cost (e.g., “rolling coal”), but they will shun energy-efficient—and money-saving—lightbulbs if their packaging touts their environmental benefits. Those on the left often do little to help the situation, seemingly buying into the idea that conservatives must culturally be everything leftists are not and vice versa. As a result, the possibility of allying for common purposes, against a common enemy (i.e., neoliberalism), is forgone.
The Germans have not let themselves be hindered by such narratives. Indeed, their movement toward embracing renewables, which now make up nearly a third of their power generation, has been driven by a diverse political coalition. A number of villages in the heartland of the German conservative party (CDU) now produce more green energy than they need, and conservative politicians supported the development of feed-in tariffs and voted to phase out nuclear energy. As Craig Morris and Arne Jungjohann describe, the German energy transition resonates with key conservative ideas, namely the ability of communities to self-govern and the protection of valued rural ways of life. Agrarian villages are given a new lease on life by farming energy alongside crops and livestock, and enabling communities to produce their own electricity lessens the control of large corporate power utilities over energy decisions. Such themes remain latent in American conservative politics, now overshadowed by the post-Reagan dominance of “business friendly” libertarian thought styles.

Elizabeth Anderson has noticed a similar contradiction with regard to workplaces. Many conservative Americans decry what they see as overreach by federal and state governments, but tolerate outright authoritarianism at work. Tracing the history of conservative support for “free market” policies, she notes that such ideas emerged in an era when self-employment was much more feasible. Given the immense economies of scale made possible by post-Industrial Revolution technologies, however, the barriers to entry in most industries are now much too high for average people to own and run their own firms. As a result, free market policies no longer create the conditions for citizens to become self-reliant artisans but rather spur the centralization and monopolization of industries. Citizens, in turn, become wage laborers, working under conditions far more similar to feudalism than many people are willing to recognize. Even Adam Smith, to whom many conservatives look for guidance on economic policy, argued that citizens would only realize the moral traits of self-reliance and discipline—values that conservatives routinely espouse—in the right contexts. In fact, he wrote of people stuck doing repetitive tasks in a factory: “He naturally loses, therefore, the habit of such exertion, and generally becomes as stupid and ignorant as it is possible for a human creature to become. The torpor of his mind renders him not only incapable of relishing or bearing a part in any rational conversation, but of conceiving any generous, noble, or tender sentiment, and consequently of forming any just judgment concerning many even of the ordinary duties of private life. Of the great and extensive interests of his country he is altogether incapable of judging.”
Advocates of economic democracy have overlooked a real opportunity to enroll conservatives in this policy area. Right-leaning citizens need not be like Mike Rowe—a man who ironically garnered a following among “hard working” conservatives by merely dabbling in blue collar work—and mainly bemoan the ostensible decline in citizens’ work ethic. Conservatives could be convinced that policies supporting self-employment and worker-owned firms would be far more effective in creating the kind of citizenry they hope for than simply shaming the unemployed for apparently being lazy. Indeed, they could become like the conservative prison managers in North Dakota (1), who are now recognizing that traditionally conservative “tough on crime” legislation is both ineffective and fiscally irresponsible—learning that upstanding citizens cannot be penalized into existence.
Another opportunity has been lost by not constructing more persuasive narratives that connect neoliberal policies with the decline of community life and the eroding well-being of the nation. Contemporary conservatives will vote for politicians who enable corporations to outsource or relocate at the first sign of better tax breaks somewhere else, while they simultaneously decry the loss of the kinds of neighborhood environments that they experienced growing up. Their support of “business friendly” policies had far different implications in the days when the CEO of General Motors would say that “what is good for the country is good for General Motors and vice versa.” Compare that to an Apple executive, who baldly stated: “We don’t have an obligation to solve America’s problems. Our only obligation is making the best product possible.” Yet fights for a higher minimum wage and proposals to limit the destructively competitive process whereby nations and cities try to lure businesses away from each other with tax breaks get framed as anti-American, even though they are poised to reestablish part of the social reality that conservatives actually value. Communities cannot prosper when torn asunder by economic disruptions; what is best for a multinational corporation is often not what is best for a nation like the United States.

It is a tragedy that many leftists overlook these narratives and focus narrowly on appeals to egalitarianism, a moral language that political psychologists have found (unsurprisingly) to resonate only with other leftists. The resulting inability to form alliances with conservatives over key economic and energy issues allows libertarian-inspired neoliberalism to drive conservative politics in the United States, even though libertarianism is as incompatible with conservatism as it is with egalitarianism. Libertarianism, by idealizing impersonal market forces, upholds an individualist vision of society that is incommensurable with communal self-governance and with the kinds of market interventions that would enable more people to be self-employed or to establish cooperative businesses. By insisting that one should “defer” to the supposedly objective market in nearly all spheres of life, libertarianism threatens to commodify the spaces that both leftists and conservatives find sacred: pristine wilderness, private life, and so on.

There are real challenges, however, to realizing political coalitions between progressives and conservatives more often, namely divisions over traditionalist ideas regarding gender and sexuality. Yet even this is a recent development. As Nadine Hubbs shows, the idea that poor rural and blue collar people are invariably more intolerant than urban elites is a modern construction. Indeed, studies in rural Sweden and elsewhere have uncovered a surprising degree of acceptance of non-heterosexual people, though rural queer people invariably understand and express their sexuality differently than urban gays. Hence, even on this issue, the problem lies not in rural conservatism per se but in the way contemporary rural conservatism in America has been culturally valenced. The extension of communal acceptance has been deemphasized in order to uphold consistency with contemporary narratives that present a stark urban-rural binary, wherein non-cis, non-heterosexual behaviors and identities are presumed to be compatible only with urban living. Yet the practice, and hence the narrative, of rural blue collar tolerance could be revitalized.
However, the preoccupation of some progressives with maintaining a stark cultural distinction from rural America prevents progressive-conservative coalitions from coming together to realize mutually beneficial policy changes. I know that I have been guilty of this. Growing up with left-wing proclivities, I was guilty of much of what Nadine Hubbs criticizes about middle-class Americans: I made fun of “rednecks” and never, ever admitted to liking country music. My preoccupation with proving that I was really an “enlightened” member of the middle class, despite being a child of working class parents and only one generation removed from the farm, prevented me from recognizing that I potentially had more in common with rednecks politically than I ever would with the corporate-friendly “centrist” politicians at the helm of both major parties. No doubt there is work to be done to undo all that has made many rural areas into havens for xenophobic, racist, and homophobic bigotry; but that work is no different from what could and should be done to encourage poor, conservative whites to recognize what a 2016 SNL sketch so poignantly illustrated: that they have far more in common with people of color than they realize.

Note 1. A big oversight in the “work ethic” narrative is that it fails to recognize that slacking workers are often acting rationally. If one is faced with few avenues for advancement and is instantly replaced when suffering an illness or personal difficulties, why work hard? What white collar observers like Rowe might see as laziness could be considered an adaptation to wage labor. In such contexts, working hard can reasonably be seen not as the key to success but rather as the mark of a chump. A person would merely be harming their own well-being in order to make someone else rich. The same discourse in the age of feudalism would have involved chiding peasants for taking too many holidays.
The stock phrase that “those who do not learn history are doomed to repeat it” certainly seems to hold true for technological innovation. After a team of Stanford University researchers recently developed an algorithm that they say is better at diagnosing heart arrhythmias than a human expert, all the MIT Technology Review could muster was to rhetorically ask if patients and doctors could ever put their trust in an algorithm. I won’t dispute the potential for machine learning algorithms to improve diagnoses; however, I think we should all take issue when journalists like Will Knight depict these technologies so uncritically, as if their claimed merits will be unproblematically realized without negative consequences.
Indeed, the same gee-whiz reporting likely happened during the advent of computerized autopilot in the 1970s—probably with the same lame rhetorical question: “Will passengers ever trust a computer to land a plane?” Of course, we now know that the implementation of autopilot was anything but a simple story of improved safety and performance. As both Robert Pool and Nicholas Carr have demonstrated, the automation of facets of piloting created new forms of accidents, produced by unanticipated problems with sensors and electronics as well as the eventual deskilling of human pilots. That such shallow, ignorant reporting on similar automation technologies, including not just automated diagnosis but also technologies like driverless cars, continues despite knowledge of those previous mistakes is truly disheartening.

The fact that the tendency not to dig too deeply into the potential undesirable consequences of automation technologies is so widespread is telling. It suggests that something must be acting as a barrier to people’s ability to think clearly about such technologies. The political scientist Charles Lindblom called these barriers “impairments to critical probing,” noting the role of schools and the media in helping to ensure that most citizens refrain from critically examining the status quo. Such impairments to critical probing with respect to automation technologies are visible in the myriad simplistic narratives that are often presumed rather than demonstrated, such as the belief that algorithms are inherently safer than human operators. Indeed, one comment on Will Knight’s article prophesied that “in the far future human doctors will be viewed as dangerous compared to AI.” Not only are such predictions impossible to justify—at this point they cannot be anything more than wildly speculative conjectures—but they fundamentally misunderstand what technology is.

Too often people act as if technologies were autonomous forces in the world, not only in the sense that people act as if technological changes were foreordained and unstoppable but also in how they fail to see that no technology functions without the involvement of human hands. Indeed, technologies are better thought of as sociotechnical systems. Even a simple tool like a hammer cannot exist without underlying human organizations, which provide the conditions for its production, nor can it act in the world without having been designed to be compatible with the shape and capacities of the human body. A hammer that is too big to be effectively wielded by a person would be correctly recognized as an ill-conceived technology; few would fault a manual laborer forced to use such a hammer for any undesirable outcomes of its use. Yet somehow most people fail to extend the same recognition to more complex undertakings like flying a plane or managing a nuclear reactor: in such cases, the fault is regularly attributed to “human error.” How could it be fair to blame a pilot, who only becomes deskilled as a result of a job requiring them to rely almost exclusively on autopilot, for mistakenly pulling up on the controls and stalling the plane during an unexpected autopilot error? The tendency to do so is a result of not recognizing autopilot technology as a sociotechnical system. Autopilot technology that leads to deskilled pilots, and hence accidents, is as poorly designed as a hammer incompatibly large for the human body: it fails to respect the complexities of the human-technology interface.
Many people, including many of my students, find that chain of reasoning difficult to accept, even though they struggle to locate any fault with it. They struggle under the weight of the impairing narrative that leads them to assume that the substitution of human action with computerized algorithms is always unalloyed progress. My students’ discomfort is only further provoked when they are presented with evidence that early automated textile technologies produced substandard, shoddy products—most likely having been implemented in order to undermine organized labor rather than to contribute to a broader, more humanistic notion of progress. In any case, the continued power of the automation-equals-progress narrative will likely stifle the development of intelligent debate about automated diagnosis technologies.

If technological societies currently poised to begin automating medical care are to avoid repeating history, they will need to learn from past mistakes. In particular, how could AI be implemented so as to enhance the diagnostic ability of doctors rather than deskill them? Such an approach would part ways with traditional ideas about how computers should influence the work process, aiming to empower and “informate” skilled workers rather than replace them. As Siddhartha Mukherjee has noted, while algorithms can be very good at partitioning, e.g., distinguishing minute differences between pieces of information, they cannot deduce “why,” they cannot build a case for a diagnosis by themselves, and they cannot be curious. We replace humans with algorithms only at the cost of these qualities. Citizens of technological societies should demand that AI diagnostic systems be used to aid the ongoing learning of doctors, helping them to solidify hunches and not overlook possible alternative diagnoses or pieces of evidence. Meeting such demands, however, may require that still other impairing narratives be challenged, particularly the belief that societies must acquiesce to the “disruptions” of new innovations as they are imagined and desired by Silicon Valley elites—or the tendency to think of the qualities of the work process last, if at all, in all the excitement over extending the reach of robotics.

Adam Nossiter has recently published in the New York Times a fascinating look at the decline of small and medium-sized French cities. I recommend not only reading the article but also perusing the comments section, for the latter gives some insight into the larger psycho-cultural barriers to realizing thicker communities.
Nossiter's article is a lament over the gradual economic and social decline of Albi, a small city of around 50 thousand inhabitants not far from Toulouse. He is troubled by the extent to which the once vibrant downtown has become devoid of social and economic activity, apart, that is, from the periodic influx of tourists interested in its rustic charm as a medieval-era town. Nossiter's piece, however, is not a screed against tourists; rather, he notes that the large proportion of visitors can prevent one from noticing that the town itself now has few amenities to offer locals: it has a single bakery and no local butcher, grocery, or cafe. Residents obtain their needs from supermarkets and malls on the outskirts of town.

One might be tempted to dismiss Nossiter's concerns as mere "nostalgia" in the face of "real progress." Indeed, many of those commenting on the article do just that, suggesting that young people want the exciting nightlife offered by nearby metropolises and that local shops are relics of the past destined to be destroyed by the ostensibly lower prices and greater efficiency of malls and big box stores. I think, however, that it is unwise to do so if one wishes to think carefully and intelligently about the issue. Appeals to progress and inevitability are not so much statements of fact (indeed, the evidence to back them up is quite limited) as rhetorical moves meant to shut down debate; their aim, intentionally or not, is to naturalize a process that is actually sociopolitical. If France is at all like the United States, and I suspect it is, the erection of malls was nothing preordained but a product of innumerable policy decisions and failures of foresight. So contingent was the outcome on these external variables that it seems obtuse to claim that it was the result of simply providing consumers with what they wanted. Readers interested in the details can look forward to my soon-to-be-released book Technically Together (MIT Press). For the purposes of this post I can only summarize a few of the ways in which downtown economic decay is not inevitable.

The ability of a big-box store or mall to turn a profit depends on far more than just the owner's business acumen. Such stores are only attractive to the extent that governments spend public funds to make them easy to get to. Indeed, big box prices are low enough to attract Americans because of the invisible subsidy provided by citizens' tax dollars in building roads and highways. Many, if not most, malls and big box stores were built with public funds, either as the result of favorable tax deductions offered by municipalities or through schemes like tax-increment financing. Lacking the political clout of the average corporate retailer, a local butcher is unlikely to receive the same deal. Other forms of subsidy are more indirect. Few shoppers factor in the additional costs of gasoline or car repairs when pursuing exurban discount shopping. Given AAA's estimate that driving costs in excess of ten thousand dollars per year, the full cost of a ten-mile drive to the mall is significant, even if it is not salient to consumers (see the rough estimate below). Indeed, they forget it by the time they arrive at the register. Moreover, what about the additional health care costs incurred by driving rather than walking, or the psychic costs of living in areas no longer offering embodied community? Numerous studies have found that local community is one of the biggest contributors to a long life and a spry old age.
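To put a rough number on that invisible cost, here is a back-of-the-envelope estimate. The ten-thousand-dollar figure is the AAA estimate cited above; the 12,000 miles driven per year is purely an illustrative assumption, not something AAA reports:

$$
\frac{\$10{,}000\ \text{per year}}{12{,}000\ \text{miles per year}} \approx \$0.83\ \text{per mile},
\qquad
20\ \text{mile round trip} \times \$0.83\ \text{per mile} \approx \$17\ \text{per shopping run}.
$$

Since the AAA figure folds in fixed costs like insurance and depreciation, the marginal cost of any single trip is lower than this, but the exact figures matter less than the point: the per-trip cost is real even though it never shows up at the register.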
It seems unlikely to be mere coincidence that Americans have become increasingly medicated against psychological disorders as their previously thick communities have fragmented into diffuse social networks. While these costs do not factor into the prices consumers enjoy via discount exurban shopping, citizens still pay them.

Despite the fact that these sociopolitical drivers are fairly obvious if one takes the time to think about them, "just so" stories that try to explain the status quo as in line with the inexorable march of progress remain predominant. Psychologists have theorized that the power of such stories results from the intense psychological discomfort that many people would feel if faced with the possibility that the world as they know it is either unjust or was arrived at via less-than-fair means. Progress narratives are just one of the ways in which citizens psychically shore up an arbitrary and, in the view of many, undesirable status quo. Indeed, Americans, as well as Europeans and others to an increasing extent, seem to have an intense desire to justify the present by appealing to abstract "market forces." Yale political economist Charles Lindblom argued that the tendency for citizens to reason their way into believing that what is good for economic elites is good for everyone was one of the main sources of business's relatively privileged position in society. In fact, many people go so far as to talk as if the market were a dangerous but nonetheless productive animal that one must placate with favorable treatment and a long leash, apparently not realizing that acting in accordance with such logic makes the market system seem less like a beacon of freedom and more like a prison. One thing remains certain: as long as citizens think and act as if changes like the economic decline of downtown areas in small cities are merely the price of progress, it will be impossible to do anything but watch them decay.

When reading some observers' diagnoses of what ails the United States, one can get the impression that Americans are living in an unprecedented age of public scientific ignorance. There is reason, however, to wonder whether people today are really any more ignorant of facts like water boiling at lower temperatures at higher altitudes, or whether any more people believe in astrology than in the past. According to some studies, Americans have never been more scientifically literate. Nevertheless, there is no shortage of hand-wringing about the remaining degree of public scientific illiteracy and what it might mean for the future of the United States and American democracy. Indeed, scientific illiteracy is targeted as the cause of the anti-vaccination movement as well as opposition to genetically modified organisms (GMOs) and nuclear power. However, I think such arguments misunderstand the issue. If America has a problem with regard to science, it is not due to a dearth of scientific literacy but to a decline in science's public legitimacy.
The thinking underlying worries about widespread scientific illiteracy is rooted in what is called the “deficit model.” In the deficit model, the cause of the discrepancy between the beliefs of scientists and those of the public is, in the words of Dietram Scheufele and Matthew Nisbet, a “failure in transmission.” That is, it is believed that the media’s failure to dutifully report the “objective” facts, or the inability of an irrational public to correctly receive those facts, prevents the public from having the “right” beliefs regarding issues like science funding or the desirability of technologies like genetically modified organisms. Indeed, a blogger for Scientific American blames the opposition of liberals to nuclear power on “ignorance” and “bad psychological connections.” It is perhaps only a slight exaggeration to say that the deficit model depicts anyone who is not a technological enthusiast as uninformed, if not idiotic.

Regardless of whether the facts regarding these issues are actually “objective” or totally certain (both sides dispute the validity of each other’s arguments on scientific grounds), it remains odd that deficit model commentators view the discrepancy between scientists’ and the public’s views on GMOs and other issues as a problem for democracy. Certainly they are correct that a populace that can think critically and suffers from few cognitive impairments to inquiry is preferable when it comes to wise public decision making. Yet the idea that scientifically literate citizens, once properly “informed” of the relevant facts, would immediately agree with experts is profoundly undemocratic: it belittles and erases all the relevant disagreements about values and rights. Such a view ignores, for instance, the fact that the dispute over GMO labeling has as much to do with ideas about citizens’ right to know and desire for transparency as with the putative safety of GMOs. By acting as if such concerns do not matter – that only the outcomes of recent safety studies do – the people sharing those concerns are deprived of a voice. The deficit model inexorably excludes those not working within a scientistic framework from democratic decision making.

Given the deficit model’s democratic deficits, as well as the lack of any evidence that scientific illiteracy is actually increasing, advocates of GMOs and other potentially risky instances of technoscience ought to look elsewhere for the sources of public scientific controversy. If anything has changed in the last few decades, it is that science and technology have less legitimacy. Indeed, science writers could better grasp this point by reading one of their own. Former Discover writer Robert Pool notes that the point of legal and regulatory challenges to new technoscience is not simply to render it safer but also to make it more acceptable to citizens. Whether or not citizens accept a new technology depends upon the level of trust they have in technical experts (and the firms they work for). Opposition to GMOs, for instance, is partly rooted in the belief that private firms such as Monsanto cannot be trusted to adequately test their products and that the FDA and EPA are too toothless (or too captured by industry interests) to hold such companies to a high enough standard. Technoscientists and cheerleading science writers are probably oblivious to the workings of and requirements for earning public trust because they are usually biased toward seeing new technologies as already (if not inherently) legitimate.
Those deriding the public for failing to recognize the supposedly objective desirability of potentially risky technology, moreover, have fatally misunderstood the relationship between expertise, knowledge, and legitimacy. It is unreasonable to expect members of the public to somehow find the time (or perhaps even the interest) to learn about the nuances of genetic transmission or nuclear safety systems. Such expectations place a unique and unfair burden on lay citizens. Many technical experts, for instance, might be found to be equally ignorant of elementary distinctions in the social sciences or philosophy. Yet few seem to consider such illiteracies to be equally worrisome barriers to a well-functioning democracy. In any case, as political scientists Joseph Morone and Edward Woodhouse argue, the role of the public is not to evaluate complex or arcane technoscientific problems directly but to decide which experts to trust to do so. Citizens, according to Morone and Woodhouse, were quite reasonable to turn against nuclear power when overoptimistic safety estimates were proven wrong by a series of public blunders, including the accidents at Three Mile Island and Chernobyl, as well as by increasing levels of disagreement among experts. Citizens’ lack of understanding of nuclear physics was beside the point: the technology was oversold and overhyped. The public now had good grounds to believe that experts were not approaching nuclear energy or their risk assessments responsibly. Contrary to the assumptions of deficit modelers, legitimacy is not earned simply through technical expertise but via sociopolitical demonstrations of trustworthiness.

If technoscientific experts really cared about democracy, they would think more deeply about how they could better earn legitimacy in the eyes of the public. At the very least, research in science and technology studies provides some guidance on how they ought not to proceed. For example, after radiation from the Chernobyl accident rained down on parts of Cumbria, England, scientists quickly moved in to study the effects as well as to ensure that irradiated livestock did not get moved out of the area. Their behavior quickly earned them the ire of local farmers. Scientists not only ignored the relevant expertise that farmers had regarding the problem but also made bold pronouncements of fact that were later found to be false, including the claim that the nearby Sellafield nuclear processing plant had nothing to do with local radiation levels. The certainty with which scientists made their uncertain claims, as well as their unwillingness to respond to criticism by non-scientists, led farmers to distrust them. The scientists lost legitimacy as local citizens came to believe that they had been sent by the national government to stifle inquiry into what was going on rather than to learn the facts of the matter. Far too many technoscientists (or at least their associated cheerleaders in popular media) seem content to repeat the mistakes of these Cumbrian radiation scientists. “Take your concerns elsewhere. The experts are talking,” they seem to say when non-experts raise concerns. “Come back when you’ve got a science degree.” Ironically (and tragically), experts’ embrace of deficit model understandings of public scientific controversies undermines the very mechanisms by which legitimacy is established.
If the problem is really a deficit of public trust, diminishing the transparency of decisions and eliminating possibilities for citizen participation is self-defeating. Anything resembling a constructive and democratic resolution to controversies like GMOs, fracking, or nuclear energy is only likely to happen if experts engage with and seek to understand popular opposition. Only then can they begin to incrementally reestablish trust. Insofar as far too many scientists and other experts believe they deserve public legitimacy simply by virtue of their credentials – and some even denigrate lay citizens as ignorant rubes – public scientific controversies are likely to remain polarized and pathological.

The belief that science and religion (and science and politics, for that matter) are exact opposites is one of the most tenacious and misguided viewpoints held by Americans today, one that is unfortunately reinforced by many science journalists. Science is not at all faith-based, claims Forbes contributor Ethan Siegel in his rebuke of Matt Emerson’s suggestion otherwise. In arguing against the role of faith in science, however, Siegel ironically embraces a faith-based view of science. His perspective is faith-based not because it has ties to organized religion, obviously, but rather because it is rooted in an idealization of science disconnected from the actual evidence on scientific practice. Siegel mythologizes scientists, seeing them as impersonal and unbiased arbiters of truth. Like any other thought-impairing fundamentalism, the faith-based view of science, if too widespread, is antithetical to the practice of democracy.
Individual scientists, being human, fall prey to innumerable biases, conflicts of interest, motivated reasoning, and other forms of impaired inquiry. To expect otherwise is to sanctify them. Drug research, for instance, is a tangled thicket of financial conflicts of interest, wherein some scientists go to bat for pharmaceutical companies in order to prevent generics from coming to market and put their names on articles ghost-written by corporations. Some have wondered if scientific medical studies can be trusted at all, given that many, if not most, are so poorly designed.

Siegel, of course, would likely respond that the above are simply pathological cases of science, which will hopefully be eventually excised from the institution of science as if they were malignant growths. He consistently tempers his assertions with an appeal to what a “good scientist” would do: “There [is no] such a thing as a good scientist who won’t revise their beliefs in the face of new evidence,” claims Siegel. Rather than go the easy route and simply charge him with committing a No True Scotsman fallacy (given that many otherwise good scientists often appear to hold onto their beliefs despite new evidence), it is better to question whether his understanding of “good” science stands up to close scrutiny.

The image of scientists as disinterested and impersonal arbiters of truth, immediately at the ready to adjust their beliefs in response to new evidence, is not only at odds with the last fifty years of the philosophy and social study of science; it also conflicts with what scientists themselves will say about “good science.” In Ian Mitroff’s classic study of Apollo program scientists investigating the moon and its origins, one interviewed scientist derided what Siegel presents as good science as a “fairy tale,” noting that most of his colleagues did not impersonally sift through evidence but looked explicitly for what would support their views. Far from seeing it as pathological, however, one interviewee stated that “bias has a role in science and serves it well.” Mitroff’s scientists argued that ideally disinterested scientists would fail to have the commitment needed to see their theories through difficult periods. Individual scientists need to have faith that they will persevere in the face of seemingly contrary evidence in order to do the work necessary to defend their theories. Without this bias-laden commitment, good theories would be thrown away prematurely.

Further grasping why scientists, in contrast to their cheerleaders in popular media, would defend bias as often good for science requires recognizing that the faith-based understanding of science is founded upon a mistaken view of objectivity. Far too many people see objectivity as inhering within scientists when it really exists between scientists. As political scientist Aaron Wildavsky noted, “What is wanted is not scientific neuters but scientists with differing points of view and similar scientific standards…science depends on institutions that maintain competition among scientists and scientific groups who are numerous, dispersed and independent.” Science does not progress because individual scientists are more angelic human beings who can somehow enter a laboratory and no longer see the world with biased eyes. Rather, science progresses to the extent that scientists with diverse and opposing biases meet in disagreement.
Observations and theories become facts not because they appear obviously true to unbiased scientists but because they have been met with scrutiny from scientists with differing biases and the arguments for them have been found widely persuasive. Different areas of science vary in how well they support vibrant and productive disagreement. Indeed, part of the reason so many studies are later found to be false is that scientists are not incentivized to repeat studies done by their colleagues; such studies are generally not publishable. Moreover, entire fields have suffered from cultural biases at one time or another. The image of the human egg as a passive "damsel in distress" waiting for a sperm to penetrate her persisted in spite of contrary evidence partly because of a traditional male bias within the biological sciences. Similar biases were discovered in primatology and elsewhere as scientific institutions became more diverse. Without enterprising scientists asking seemingly heretical questions of what appears to be "sound science," sometimes on the basis of meager evidence, entrenched cultural biases masquerading as scientific facts might persist indefinitely. The recognition that scientists exhibit flawed and motivated reasoning, bias, personal commitments and the exercise of faith nearly as much as anyone else is important not merely because it is a more scientific understanding of science, but also because it is politically consequential. If citizens see scientists as impersonal arbiters of truth, they are likely to eschew subjecting science to public scrutiny. Political interference in science might seem undesirable, of course, when it involves creationists getting their religious views placed alongside evolution in high school science books. Nevertheless, as science and technology studies scholars Edward Woodhouse and Jeff Howard have pointed out, the belief that science is value-neutral and therefore best left up to scientists has enabled chemists (along with their corporate sponsors) to churn out ever more toxic chemicals and consumer products. Americans' homes and environments are increasingly toxic because citizens leave decisions over the chemistry behind consumer products up to industrial chemists (and their managers). Less toxic consumer products are unlikely to ever exist in significant numbers so long as chemical scientists are considered beyond reproach. Science is far too important to be left up to an autonomous scientific clergy. Dispensing with the faith-based understanding proffered by Siegel is the first step toward a more publicly accountable and more broadly beneficial scientific enterprise.

Are These Shoes Made for Running? Uncertainty, Complexity and Minimalist Footwear (5/26/2014; repost from Technoscience as if People Mattered)

In almost every technoscientific controversy, participants could take better account of the inescapable complexities of reality and the uncertainties of their knowledge. Unfortunately, many people suffer from significant cognitive barriers that prevent them from doing so. That is, they tend to carry the belief that their own side is in unique possession of Truth and that only their opponents are in any way biased, politically motivated or otherwise lacking in sufficient data to support their claims. This is just as clear in the case of Vibram Five Finger shoes (i.e., "toe shoes") as it is for GMOs and climate change.
Much of humanity would be better off, however, if technological civilization responded to these contentious issues in ways more sensitive to uncertainty and complexity. Five Fingers are the quintessential minimalist shoe, receiving much derision over their appearance and skepticism about their purported health benefits. Advocates of the shoes claim that their minimalist design helps runners and walkers maintain a gait similar to being barefoot while enjoying protection from abrasion. Padded shoes, in contrast, seem to encourage heel striking and thereby stronger impact forces on runners' knees and hips. The perceived desirability of a barefoot stride is based in part on the argument that it better mimics the biomechanical motion that evolved in humans over millennia and on observations of cultures that practice long-distance barefoot running. Correlational data suggests that people in places where shoes are more often eschewed suffer less from chronic knee problems, and some recent studies find that minimalist shoes do lead to improved foot musculature and decreased heel striking. Opponents, of course, are not merely aesthetically opposed to Five Fingers but mobilize their own sets of scientific facts and experts. Skeptics cite studies finding higher rates of injury among those transitioning to minimalist shoes than among those wearing traditional footwear. Others point to "barefoot cultures" that still run with a heel-striking gait. The recent settlement by Vibram with plaintiffs in a class-action lawsuit, moreover, seems to have been taken as a victory of rational minds over pseudoscience by critics who compare the company to 19th century snake oil salesmen. Yet the settlement was not an admission that the shoes do nothing but merely a recognition that there is not yet unequivocal scientific evidence to back up the company's claims about the shoes' purported health benefits.
Neither of the positions, pro or con, is immediately more "scientific" than the other. Both sides use value-laden heuristics to take a position on minimalist shoes in the absence of controlled, longitudinal studies that might settle the facts of the matter. The unspoken presumption among critics of minimalist shoes is that highly padded, non-minimalist shoes are unproblematic when really they are an unexamined sociotechnical inheritance. No scientific study has justified adding raised heels, pronation control and gel pads to sneakers. Advocates of minimalist shoes and barefoot running, on the other hand, trust the heuristics of "evolved biomechanics" and "natural gait" given the lack of substantial data on footwear. They put their trust in the argument that humans ran fine for millennia without heavily padded shoes. There is nothing inherently wrong, of course, with these value commitments. In everyday life as much as in politics, decisions must be made with incomplete information. Nevertheless, participants in debates over these decisions too frequently present themselves as possessing a level of certainty they cannot possibly have, given that the science on what kinds of shoes humans ought to wear remains mostly undone. At the same time, it seems unfair to leave footwear consumers in the position of having to fumble through the decision between purchasing a minimalist or non-minimalist shoe. A technological civilization sensitized to uncertainty and complexity would take a different approach to minimalist shoes than the status quo process of market-led diffusion with very little oversight or monitoring. To begin, the burden of proof would be more appropriately distributed. Advocates of minimalist shoes are typically put in the position of having to prove their safety and desirability, despite the dearth of conclusive evidence about whether contemporary running shoes are even safe. There are risks on both sides. Minimalist shoes may end up injuring those who embrace them or transition too quickly. However, if they do in fact encourage healthier biomechanics, it may be that multitudes of people have been and continue to be unnecessarily destined for knee and hip replacements by their clunky New Balances. Both minimalist and non-minimalist shoes need to be scrutinized. Second, use of minimalist shoes should be gradually scaled up and matched with well-funded, multipartisan monitoring. Simply deploying an innovation with potential health benefits and detriments and then waiting for a consumer response and, potentially, litigation means an unnecessarily long, inefficient and costly learning process. Longitudinal studies on Five Fingers and other minimalist shoes could have begun as soon as they were developed or, even better, when companies like Nike and Reebok started adding raised heels and gel pads. Monitoring of minimalist shoes, moreover, would need to be broad enough to take account of confounding variables introduced by cultural differences. Indeed, it is hard to compare American joggers to barefoot-running Tarahumara Indians when the former have typically been wearing non-minimalist shoes their whole lives and tend to be heavier and more sedentary. Squat toilets make for a useful analogy. Given the association of Western toilets with hiatal hernias and other ills, abandoning them would seem like a good idea.
However, not having grown up with them and likely being overweight or obese, many Westerners are unable to squat properly, if at all, and would risk injury using a squat toilet. Most importantly, multipartisan monitoring would help protect against clear conflicts of interest. The controversy over minimalist and non-minimalist shoes impacts the interests of experts and businesses. There is a burgeoning orthotics and custom running shoe industry that not only earns quite a lot of revenue selling special footwear and inserts but also certifies only certain people as having the "correct" expertise concerning walking and running issues. Its members are likely to adhere to their skepticism about minimalist shoes as strongly as oil executives cling to theirs about climate change, for better or worse. Although large firms are quickly introducing their own minimalist shoe designs, a large-scale shift toward them would threaten their business models: since minimalist shoes do not have cushioning that breaks down over time, there is no need to replace them every three to six months. Likewise, Vibram itself is unlikely to fully explore the potential limitations of its products. Finally, funds should have been set aside for potential victims. Given the long history of unintended consequences resulting from technological change, it should not have come as a surprise that a dramatic shift in footwear would produce injuries in some customers. Vibram Five Finger shoes, in this way, are little different from other innovations, such as the Toyota Prius' electronically controlled accelerator pedal or novel medications like Vioxx. Had Vibram been forced to proactively set aside funds for potential victims, the company would have had an incentive to study its shoes' effects more carefully. Moreover, those ostensibly injured by the company's product would not have had to go through such a protracted and expensive legal battle to receive compensation. Although the process I have proposed might seem strange at first, the status quo itself hardly seems reasonable. Why should companies be permitted to introduce new products with little accountability for the risks posed to consumers and no requirement to discern what risks might exist? There is no obvious reason why footwear and sporting equipment should not be treated similarly to other areas of innovation where the issues of uncertainty and complexity loom large, like nanotechnology or new pharmaceuticals. The potential risks for acute and chronic harms are just as real, and the interests of manufacturers and citizens are just as much in conflict. Are Vibram Five Finger shoes made for running? Perhaps. But without changes to the way technological civilization governs new innovations, participants in any controversy are provided with neither the means nor sufficient incentive to find the answer.

Peddling educational media and games is a lot like selling drugs to the parents of sick children: In both cases, the buyers are desperate. Those buying educational products often do so out of concern (or perhaps fear) for their child's cognitive "health" and, thereby, their future as employable and successful adults. The hope is that some cognitive "treatment," like a set of Baby Einstein DVDs or an iPad app, will ensure the "normal" mental development of their child, or perhaps provide them an advantage over other children.
These practices are in some ways no different than anxiously shuttling infants and toddlers to pediatricians to see if they "are where they should be" or fretting over proper nutrition. However, the desperation and anxiety of parents serve as an incentive for those who develop and sell treatment options to overstate their benefits, if not outright deceive. Although regulations and institutions (e.g., the FDA) exist to help ensure that parents concerned about their son or daughter's physiological development are not being swindled, those seeking to improve or ensure the proper growth of their child's cognitive abilities are on their own, and the market is replete with the educational equivalent of snake oil and laudanum.
Take the example of Baby Einstein. The developers of this DVD series promise that it is designed to "enrich your baby's happiness" and "encourage [their] discovery of the world." The implicit reference to Albert Einstein is meant to persuade parents that these DVDs provide a substantial educational benefit. Yet, there is good reason to be skeptical of Baby Einstein. The American Academy of Pediatrics, for instance, recommends against exposing children under two to television and movies as a precaution against potential developmental harms. A 2007 study made headlines when researchers found evidence that daily watching of educational DVDs like Baby Einstein may slow communicative development in infants, though it had no significant effect on toddlers[1]. At the time, parents were already shelling out $200 million a year on Baby Einstein with the hope of stimulating their child's brain. What they received, however, was likely no more than an overhyped electronic babysitter. Today, the hot new market for education technology is not DVDs but iPad and smartphone apps. Unsurprisingly, the cognitive benefits they provide are just as uncertain. As Cecilia Kang notes, "despite advertising claims, there are no major studies that show whether the technology is helpful or harmful." Given this state of uncertainty, firms can overstate the benefits provided by their products, and consumers have little to guide them in navigating the market. Parents are particularly easy marks. Much like an individual receiving a drug or some other form of medical treatment, who is often in a poor epistemological position to evaluate its efficacy (having little way of knowing how they would have turned out without treatment or with an alternative), parents generally cannot effectively appraise the cognitive boost given to their child by letting them watch a Baby Einstein DVD or play an ostensibly literacy-enhancing game on their iPad. They have no way of knowing whether little Suzy would have learned her letters faster or slower without the educational technology, or if it had been substituted with more time for play or being read to. They simply have no point of comparison. Lacking a time machine, they cannot repeat the experiment. Furthermore, some parents might be motivated to look for reasons to justify their spending on educational technologies or simply want to feel that they have agency in improving their child's capacities. Therefore, they are likely to suffer from confirmation bias. It is far too easy for parents to convince themselves that little David counted to ten because of their wise decision to purchase an app that bleats the numbers out of the tablet's speakers when he jabs his finger toward the correct box. Educational technologies have their own placebo effect. It just so happens to affect the minds of parents, not the child using the technology. Moreover, determining whether or not one's child has been harmed is no easy matter. Changes in behavior could be over- or underestimated depending on the extent to which parents suffer from an overly nostalgic memory of their own childhood or from generational amnesia concerning real and significant differences. Yet, it is not only parents and their children who may be harmed by wasting time and money on learning technologies that are not substantively more effective or are even cognitively damaging. School districts spend billions in taxpayer money on new digital curricula and tools with unproven efficacy.
There are numerous products, from Carnegie's "Cognitive Tutor" to Houghton Mifflin Harcourt's "Destination Reading," that make extravagant claims about their efficacy but have been found, when reviewed by the Department of Education, not to significantly improve learning outcomes over traditional textbooks. Nevertheless, both are still for sale. Websites for these software packages claim that they are "based on over 20 years of research into how students think and learn" and on "empirical research and practice that helps identify, prevent, and remediate reading difficulties." Nowhere on the companies' websites is it stated that third-party research suggests that these expensive pieces of software may not actually improve outcomes. Even if some educational technologies prove to be somewhat more effective than a book or numbered blocks, they may still be undesirable for other reasons. Does an app cut into time that might otherwise be spent playing with parents or siblings? Children, on average, already spend seven hours each day in front of screens, which necessarily translates into less time spent outdoors or on non-electronic hobbies and interactions. The cultural presumption that improved educational outcomes always lie with the "latest and greatest" only exacerbates this situation. Do educational technologies in school districts come at the cost of teachers' jobs or cut into budgets for music and arts programs? The Los Angeles school district has cut thousands of teachers from its payroll in recent years but, as Carlo Rotella notes, is spending $500 million in bond money to purchase iPads. All of the above concerns do not even broach the subject of how people raised on tablets might be changed in undesirable ways as a result. What sorts of expectations, beliefs and dispositions might their usage be more compatible with? Given concerns about how technologies like the Internet influence how people think in general, concerned citizens should not let childhood be dominated by them without adequate debate and testing. Because of the potential for harm, the uncertainty of benefit and the difficulty for consumers to be adequately informed concerning either, the US should develop an equivalent to the FDA for educational technologies. Many Americans trust the FDA to prevent recurrences of pharmaceutical mistakes like thalidomide, the morning sickness drug that led to dead and deformed babies. Why not entrust a similar institution to help ensure that future children are not cognitively stunted, as may have happened with Baby Einstein DVDs, or simply that parents and school districts do not waste money on the educational equivalent of 19th century hair tonics and "water cures"? The FDA, of course, is not perfect. Some aspects of human health are too complex to be parsed out through the kinds of experimental studies the FDA requires. Just think of the perpetual controversy over what percentage of people's diet should come from fats, proteins and starches. Likewise, some promising treatments may never get pursued because the return on investment may not match the expenses incurred in getting FDA approval. The medicinal properties of some naturally occurring substances, for instance, have often not been substantively tested because, in that state, they cannot be patented. Finally, how to intervene in the development of children is ultimately a matter of values. Even pediatric science has been shaped by cultural assumptions about what an ideal adult looks like.
For instance, mid-twentieth century pediatricians insisted, in contrast to thousands of years of human history, that sleeping alone promoted the healthiest outcomes for children. Today, it is easy to recognize that such science was shaped by Western myths of the self-reliant or rugged individual. The above problems would likely also affect any proposed agency for assessing educational technologies. What makes for "good" education depends on one's opinion concerning what kind of person education ought to produce. Is it more important that children can repeat the alphabet or count to ten at earlier and earlier ages, or that they approach the world not only with curiosity and wonder but also as critical inquirers? Is the extension of the logic and aims of the formal education system to earlier and earlier ages via apps and other digital devices even desirable? Why not redirect some of the money going to proliferating iPad apps and robotic learning systems to ensuring all children have the option to attend something more like the "forest kindergartens" that have existed in Germany for decades? No scientific study can answer such questions. Nevertheless, something like an Educational Technology Association would represent one step toward a more ethically responsible and accountable educational technology industry.

_______________________________________

[1] Like any controversial study, its findings are a topic of contention. Other scholars have suggested that the data could be made to show a positive, negative or neutral result, depending on statistical treatment. The authors of the original study have countered, arguing that the critics have not undermined the original conclusion that the educational benefits of these DVDs are dubious at best and may crowd out more effective practices like parents reading to their children.