Although it is a common cliché that a new sitting president will “unite” us, the idea of the “American people” falling in line behind a new administration is a fantasy. Presidential approval ratings have always been polarized along party lines—although the partisan gap has widened over the decades. Yet such rhetoric remains an integral part of the post-election political theater, helpful for relieving some of the partisan pressure built up over a frequently nasty campaign season.
The gaping chasm between Biden supporters and Trump voters, however, is not going to be narrowed by rhetorical olive branches. The pre-election belief that ousting Trump would restore political normalcy is rooted in the mistaken idea that his 2016 success was an aberration, a freak anomaly that would be soon forgotten after a more respectable statesman or woman took over the Oval Office. That story is nothing more than a flight of fancy, one that ignores the underlying causes for rampant polarization, “post-truth,” Trumpism, or however one diagnoses our political malaise. The problem is that Americans worship entirely different notions of Truth.
Many of my fellow leftist friends and colleagues desperately want to believe that Trumpism is little more than the last gasps of a dying racist culture. It is a convenient move, which allows them to think of Trump voters as the political equivalent of anti-vaxxers. Once diagnosed as hopelessly deluded and on the wrong side of history, there is no longer any need to understand Trump voters. They only need to be defeated.
Washington Monthly contributor David Atkins openly wondered how we could “deprogram” the 75 million or so people who voted for Trump, dismissing their electoral choice as the product of “a conspiracy theory fueled belligerent death cult against reality & basic decency.” What a way to talk about almost a full quarter of the population! Only the most dyspeptic will admit to daydreaming about ideological reeducation drives, at least publicly. Optimists, on the other hand, allow themselves to believe that “truth and rationality” will inevitably win out, that the wave of popular support for politicians like Trump will naturally subside, ebbing away like the tide.
The desire to equate Trumpism with white supremacy is understandable, even if it is probably political escapism. Trump’s company faced a 1973 federal lawsuit for racial discrimination against black tenants, and he took out a full-page ad in the New York Times in 1989 calling for the death of five black and Latino youths falsely accused of sexually assaulting a white jogger. His 2016 campaign rhetoric insinuated that Mexican immigrants mostly brought drugs and crime to the United States, and he promised tough-on-crime legislation that disproportionately burdens minority neighborhoods.
The “dying racist America” thesis is not merely propped up by a sober accounting of Trump’s panoply of racial misdeeds but by attacking the next plausible explanation: working-class discontent. Adam Serwer’s review of the 2016 election statistics raises serious challenges to that argument. Clinton won a greater proportion of voters making less than $50,000. Nor did the opioid epidemic ripping through white rural America seem to tip the election toward Trump. White working-class citizens whose loved ones were coping with mental health problems, financial troubles, and substance abuse expressed less support for Trump, not more.
But then again, Biden was most successful in the counties where the bulk of America’s economic activity occurs. The 477 counties accounting for 70 percent of the US economy went to the former vice-president, while a greater proportion of counties that have faced weak job, population, and economic growth went to Trump. To top it off, Trump lost white votes in 2020 and made significant, albeit single-digit, gains among African Americans, Hispanics, and Asian Americans. Although Trump’s base has traditionally been white voters who railed against immigration and the perceived feminization of society, neither the “economically disenfranchised working class” nor the “white supremacy” thesis really explains the former president’s appeal.
Science vs. Common Sense
Political movements are rarely understood by focusing on members’ deficiencies. An especially misleading starting point is wherever those movements’ opponents think the deficits are. Political allies are more strongly bonded by their shared strengths than their flaws—however obvious and fatal the latter may be.
Data on the employment backgrounds of political donors are insightful here. The Democrats are increasingly not just the party of the professional class but of the college-educated in general. Scientists, engineers, lawyers, teachers, and accountants dominated donations to Biden, whereas Trump’s support came from business owners, police officers, truckers, construction workers, pilots, stay-at-home mothers, and farmers. This split doesn’t map well onto income-based conceptions of “working class.” Business owners and pilots often make a good living for their families, while well-educated social workers do not. Rather, the difference lies in which kind of knowledge defines their competency.
Today’s political divisions are only indirectly based on class and race; the more fundamental divide is between formally schooled expertise and experience-based judgment. Office workers, service employees, and elite knowledge-sector analysts are on one side, whereas blue-collar manual laborers and business owners are on the other. Despite our culture’s tendency to equate intelligence with a talent for mathematical abstraction or the patience (or pretension) to read dense books by French authors, writers like Mike Rose and Matthew Crawford remind us of the discerning judgements necessary for competence in physical work. Blue-collar workers and business owners don’t explicitly analyze symbols but have precisely tuned gut feelings that help them do their jobs well.
Much of America’s political malaise is due to the polarized veneration of these competing styles of knowledge. The idolization of science translates into support for technocracy, or rule by experts. The same worship but for common sense drives an attraction to populism. It is this division above all others that increasingly separates Democrats and Republicans. In his victory speech, Biden explicitly painted himself as a defender of science, promising to put properly credentialed experts in charge of pandemic planning.
In contrast, Trump and his allies have gone to great lengths to champion how (some) lay people (fail to) understand public problems. Recall how Newt Gingrich dismissed FBI crime statistics as immaterial in the run-up to the 2016 election, stating that “the average American…does not think that crime is down.” He followed up with, “As a political candidate, I’ll go with what people feel.” Likewise, faced with polls and election numbers showing a Biden victory, Trump voters saw the occurrence of voter fraud as “common sense.” Since Democratic politicians and media outlets despise Trump so much, they reason, why wouldn’t they collude with poll workers to spoil the election? As Democrats have portrayed themselves as defending “the facts,” the right has doubled down on gut feeling.
This same dynamic increasingly defines America’s racial politics as well. Armed with concepts and theories from academics such as Robin DiAngelo, people don’t end up having rich, empathy-evoking conversations about their experiences of race in America. Rather, the more militant members of the “woke” left play armchair psychologist, focusing on diagnosing fellow citizens’ racist personality dysfunctions and prescribing the appropriate therapeutic interventions. They demand that others learn to understand their lives through the vocabulary of critical race scholars, whose conclusions are presented as indisputable.
White people who struggle with some critical race theorists’ stark view that one is either racist or actively anti-racist are labeled as suffering from “white fragility,” an irrational and emotional response to having one’s own undeniable racism exposed. Whether or not one thinks this judgement is apt, it is difficult to deny that the prognosis comes off as arrogant and dismissive. How else would we expect participants to respond when hearing that some of their cherished ideas about valuing hard work, individualism, or the nuclear family are just part of an inherently racist “white culture”? But empathic understanding is often in short supply when antiracists find themselves in a rare teachable moment. As a New York City Education Councilwoman put it to her colleague in a now viral video, “Read a book. Read Ibram Kendi….It is not my job to educate you.”
Many conservative Americans have reacted to the inroads that critical race theory has made into the workplace and popular discourse by retreating to a similarly incurious form of “common sense.” Ideas from critical race theory become labeled as political correctness run amok, or even “cult indoctrination.” All this is somehow imposed on the rest of the nation by left-wing academic activists whose far-reaching powers evaporate when it comes to concretely influencing policymaking or getting their 18-year-old students to actually read for class.
Regardless of where one’s sympathies lie in this debate, it seems clear that productive conversations across party lines about the extent of America’s racial problems and what could be done about them are simply not happening.
The real tragedy of today’s political moment is that the elevation of “truth” over politics has locked us into a vicious cycle. Technocracy breeds popular distrust in experts. And the specter of populism spurs the professional class to wrap themselves further in the mantle of “the facts.” And no one budges on anything. Consider the ongoing protests of pandemic protection measures. Resistance to state governments’ social distancing and stay-at-home requirements has not been a rejection of science so much as a rejection of the inflexibility, strictness, and occasional arbitrariness of executive orders.
Yet many governors have reacted to the pandemic situation as if politics is no longer necessary, as if leaving any room for negotiation were dangerous. California’s Gov. Gavin Newsom stated that “health outcomes and science – not politics” would guide reopening plans. And Michigan’s Gov. Gretchen Whitmer concurred, contending that her decision was a “data-driven approach based on facts, based on science, based on recommendations from experts.”
And much of the media repeats ad nauseam the facts of the matter. It is as if journalists believe that if they present just the right infographic on ICU beds or the size of one’s social “bubble”—along with a healthy dollop of social shaming—less risk-averse Americans will immediately change their holiday plans and social lives to sequester at home. When that fails to happen, the talking points shift to the irrationality and immorality of a public that fails to respect facts.
The problem with the mantra “please just listen to the science!” is that science literacy doesn’t determine whether citizens listen or not. What matters is the perceived trustworthiness of experts and public officials. Expert guidance only gets legitimated through a kind of common sense. Citizens go “with their guts” on whether the elected officials and their science-based advice seems sensible and trustworthy.
One of the best ways to inadvertently sow distrust in experts is to portray them as an unquestionable authority. Wherever “post-truth” seems to be on the upswing—in the response to the pandemic or to childhood vaccinations—the determinations of experts are handed down as if divinely decreed, or as if citizens’ reservations need only be “managed” or countered. The stakes are judged to be so high that ordinary people are deemed unworthy of being consulted regarding how exactly expert recommendations should be implemented. Health researchers and doctors don’t think to ask the vaccine hesitant how they might be made to feel more comfortable with the vaccination process; state governors never bothered to ask people how pandemic protections might be implemented in ways that are less disruptive to their lives.
The other way is for experts to get things wrong. When “the facts” are initially presented as unassailable, later corrections aren’t taken as the routine self-corrections of the scientific process but rather as evidence that the science has been tainted by politics. Early in the pandemic, health experts told the public that masks were ineffective at preventing a COVID-19 infection, but then turned around and recommended that they be mandatory in many states’ pandemic guidelines.
The minor inconveniences of mask wearing and reduced restaurant occupancy have come to be perceived as gross violations of freedom in the “common sense” of a substantial portion of Americans not because those citizens deny science but because expert opinion was arrogantly applied. Officials failed to be honest and admit that even the scientists were feeling their way through an uncertain situation and that they too make value judgements in light of what they don’t yet understand.
And governmental policies have been inconsistent and sometimes unfair. I can get ticketed in my own state if I walk around a deserted athletic field without a mask. In many states, bars and restaurants are open but outdoor playgrounds remain closed. It doesn’t take a scientist to know that these rules probably do not match the on-the-ground risks. But when leaders and experts overplay their hand, some portion of citizens just start taking expert guidance less seriously. Precious political capital is wasted to enforce compliance in ways that may be pure hygiene theater. Nor has it helped that governors such as Andrew Cuomo originally seemed content to let vaccines go to waste, lest they go to someone whom state bureaucrats had calculated to be less deserving.
Beyond Political Road Rage
Amidst the trench warfare between technocrats lobbing fact-laced truth bombs and populists laying “common sense” landmines, Americans have largely forgotten how to do politics. The underlying reasoning is understandable. If our collective problems are recast as completely solvable by straightforwardly applying science or something called common sense, what reason is there to debate, negotiate, or listen? Doing so would needlessly give ground to ignorant rubes and brainwashed ideologues.
Americans need not like the viewpoints of their compatriots, but they ought to at least try to understand them. But far too many people appear to put their faith in Truth with a capital T. If only their opponents could be led to see the light and recognize “the facts” or popular “common sense,” they would quit arguing and toe the line. If only politics were so simple.
It is easy to imagine things getting much worse. A less dysfunctional public sphere feels like a utopian daydream. A good first step would be for Democrats to stop wrapping themselves in the mantle of science and learn to listen better to Americans who don’t have a college degree. Everywhere that populism has developed, it has been in reaction to the popular perception that the political establishment is increasingly elitist and distant. Is it any surprise that 64% of Americans now disagree with the idea that politicians “care what people like me think”? Appearing to hand important political decisions over to scientists in distant universities and governmental agencies is literally the worst thing that elected officials could do right now.
For their part, conservatives, libertarians, and others should lay off the polarizing rhetoric about universities and experts. Rather than attack expertise itself, it is far better to focus on efforts to intellectually diversify higher education or to turn political conversations back toward the moral (rather than factual) divides that define them. The claim that institutions of higher education are “under attack” by an “insidious” woke ideology is needlessly melodramatic. It reinforces the image of the world as hopelessly divided between incommensurable worldviews and further fuels polarization between credentialed expertise and practical common sense. Worst of all, it portrays wokeness as something to be excised from the body politic rather than faced up to as a legitimate political adversary. Hyperbolic handwringing about traitorous cultural Marxists is as antidemocratic as any of the more draconian proposals coming from the left.
Taking one’s political adversaries seriously means making a serious effort to persuade fence sitters—and not just through shaming. When a majority of Americans balked at the “Defund the Police” slogan over the summer, defenders doubled down on the phrase. The problem wasn’t with their poor choice of rhetoric, they seemed to imply, but that the public was misinformed about what the phrase really meant. I can’t think of a more potent example of willful political unawareness.
Whether it is woke-ism or Trumpism, fanatical political movements tell us that something is seriously amiss in our body politic, that an area of public discontent is in need of serious attention. We must be willing to take those discontents seriously. Focusing on apparent deficits in factual understanding or commonsense reasoning prevents us from better addressing the underlying causes of popular frustration.
George Carlin once mused, “Have you ever noticed that anybody driving slower than you is an idiot and anyone going faster than you is a maniac?” American political discourse is little different, having become the rhetorical equivalent of road rage. Citizens need more productive ways to talk about their collective problems. Unless people are willing to admit that others can hold a different political viewpoint without being either an idiot or a maniac, there is little hope for American democracy. Power will just continue to vacillate between aloof technocrats and populist buffoons.
Are Americans losing their grip on reality? It is difficult not to think so in light of the spread of QAnon conspiracy theories, which posit that a deep-state ring of Satanic pedophiles is plotting against President Trump. A recent poll found that some 56% of Republican voters believe that at least some of the QAnon conspiracy theory is true. But conspiratorial thinking has been on the rise for some time. One 2017 Atlantic article claimed that America had “lost its mind” in its growing acceptance of post-truth. Robert Harris has more recently argued that the world has moved into an “age of irrationality.” Legitimate politics is threatened by a rising tide of unreasonableness, or so we are told. But the urge to divide people into the rational and the irrational is the real threat to democracy. And the antidote is more inclusion, more democracy—no matter how outrageous the things our fellow citizens seem willing to believe.
Despite recent panic over the apparent upswing in political conspiracy thinking, salacious rumors and outright falsehoods have been an ever-present feature of politics. Today’s lurid and largely evidence-free theories about left-wing child abuse rings have plenty of historical analogues. Consider tales of Catherine the Great’s equestrian dalliances and claims that Marie Antoinette found lovers both in court servants and within her own family. Absurd stories about political elites seem to have been anything but rare. Some of my older relatives believed in the 1990s that the government was storing weapons and spare body parts underneath Denver International Airport in preparation for a war against common American citizens—and that was well before the Internet was a thing.
There seems to be little disagreement that conspiratorial thinking threatens democracy. Allusions to Richard Hofstadter’s classic essay on the “paranoid style of American politics” have become cliché. Hofstadter’s targets included 1950s conservatives who saw Communist treachery around every corner, 1890s populists railing against the growing power of the financial class, and widespread worries about the machinations of the Illuminati. He diagnosed their politics as paranoid in light of their shared belief that the world was being persecuted by a vast cabal of morally corrupt elites.
Regardless of their specific claims, conspiracy theories’ harms come from their role in “disorienting” the public, leading citizens to have grossly divergent understandings of reality. And widespread conspiratorial thinking drives the delegitimation of traditional democratic institutions like the press and the electoral system. Journalists are seen as pushing “fake news.” The voting booths become “rigged.”
Such developments are no doubt concerning, but we should think carefully about how we react to conspiracism. Too often the response is to endlessly lament the apparent end of rational thought and wonder aloud whether democracy can survive while being gripped by a form of collective madness. But focusing on citizens’ perceived cognitive deficiencies presents its own risks. Historian Ted Steinberg called this the “diagnostic style” of American political discourse, which transforms “opposition to the cultural mainstream into a form of mental illness.” The diagnostic style leads us to view QAnoners, and increasingly political opponents in general, as not merely wrong but cognitively broken. They become the anti-vaxxers of politics.
While QAnon believers certainly seem to be deluding themselves, isn’t the tendency of leftists to blame Trump’s popular support on conservatives’ faulty brains and an uneducated or uninformed populace equally delusional? The extent to which such cognitive deficiencies are actually at play is beside the point as far as democracy is concerned. You can’t fix stupid, as the well-worn saying has it. Diagnosing chronic mental lapses actually leaves us very few options for resolving conflicts. Even worse, it prevents an honest effort to understand and respond to the motivations of people with strange beliefs. Calling people idiots will only cause them to dig in further.
Responses to the anti-vaxxer movement show as much. Financial penalties and other compulsory measures tend only to anger vaccine-hesitant parents, leading them to refuse voluntary vaccines more often and become more committed in their opposition. But it does not take a social scientific study to know this. Who has ever changed their mind in response to the charge of stupidity or ignorance?
Dismissing people with conspiratorial views blinds us to something important. While the claims themselves might be far-fetched, people often have legitimate reasons for believing them. African Americans, for instance, disproportionately believe conspiracy theories regarding the origin of HIV, such as that it was man-made in a laboratory or that the cure was being withheld, and are more hesitant about vaccines. But they also rate higher in distrust of medical institutions, often pointing to the Tuskegee Syphilis Study and ongoing racial disparities as evidence. And from British sheep farmers’ suspicion of state nuclear regulators in the aftermath of Chernobyl to mask skeptics’ current jeremiads against the CDC, governmental mistrust has often developed after officials’ overconfident claims about the risks turned out to be inaccurate. What might appear to be an “irrational” rejection of the facts is often a rational response to a power structure that feels distant, unresponsive, and untrustworthy.
The influence of psychologists has harmed more than it has helped in this regard. Carefully designed studies purport to show that believers in conspiracy theories lack the ability to think analytically or claim that they suffer from obscure cognitive biases like “hypersensitive agency detection.” Recent opinion pieces exaggerate the “illusory truth effect,” a phenomenon discovered in psych labs whereby repeated exposure to false messages leads to a relatively slight increase in the number of subjects rating them as true or plausible. The smallness of this albeit statistically significant effect doesn’t stop commentators from presenting social media users as if they were passive dupes, who only need to be told about QAnon so many times before they start believing it. Self-appointed champions of rationality have spared no effort to avoid thinking about the deeper explanations for conspiratorial thinking.
Banging the drum over losses in rationality will not get us out of our present situation. Underneath our seeming inability to find more productive political pastures is a profound misunderstanding of what makes democracy work. Hand-wringing over “post-truth” or conspiratorial beliefs is founded on the idea that the point of politics is to establish and legislate truths. Once that is your conception of politics, the trouble with democracy starts to look like citizens with dysfunctional brains.
When our fellow Americans are recast as cognitively broken, it becomes all too easy to believe that it would be best to exclude or diminish the influence of people who believe outrageous things. Increased gatekeeping within the media or by party elites and scientific experts begins to look really attractive. Some, like philosopher Jason Brennan, go even further. His 2016 book, Against Democracy, contends that the ability to rule should be limited to those capable of discerning and “correctly” reasoning about the facts, while largely sidestepping the question of who decides what the right facts are and how to know when we are correctly reasoning about them.
But it is misguided to think that making our democracy only more elitist will throttle the wildfire spread of conspiratorial thinking. If anything, doing so will only temporarily contain populist ferment, letting pressure build until it eventually explodes or (if we are lucky) economic growth leads it to fizzle out. Political gatekeeping, by mistaking supposed deficits in truth and rationality for the source of democratic discord, fails to address the underlying cause of our political dysfunction: the lack of trust.
Signs of our political system’s declining legitimacy are not difficult to find. A staggering 71 percent of Americans believe that elected officials don’t care about the average citizen or what they think. Trust in our government has never been lower, with only 17 percent of citizens expressing confidence in Washington most or all of the time. By diagnosing rather than understanding, we cannot see that conspiratorial thinking is the symptom rather than the disease.
The spread of bizarre theories about COVID-19 being a “planned” epidemic or child-abuse rings is a response to real feelings of helplessness, isolation, and mistrust as numerous natural and manmade disasters unfold before our eyes—epochal crises that governments seem increasingly incapable of getting a handle on. Many of Hofstadter’s listed examples of conspiratorial thought came during similar moments: at the height of the Red Scare and Cold War nuclear brinkmanship, during the 1890s depression, or in the midst of pre-Civil War political fracturing. Conspiracy theories offer a simplified world of bad guys and heroes. A battle between good and evil is a more satisfying answer than the banality of ineffectual government and flawed electoral systems when one is facing wicked problems.
Perhaps social media adds fuel to the fire, accelerating the spread of outlandish explanations of what ails the nation. But it does so not because it short-circuits our neural pathways to crash our brains’ rational thinking modules. Conspiracy theories are passed along by word of mouth (or Facebook likes) by people we already trust. It is no surprise that they gain traction in a world where satisfying solutions to our chronic, festering crises are hard to find, and where most citizens are afforded neither a legible glimpse into the workings of the vast political machinery that determines much of their lives nor the chance to substantially influence it.
Will we be able to reverse course before it is too late? If mistrust and unresponsiveness are the cause, the cure should be the effort to reacquaint Americans with the exercise of democracy on a broad scale. Hofstadter himself noted that, because the political process generally affords more extreme sects little influence, public decisions only seemed to confirm conspiracy theorists’ belief that they are a persecuted minority. The urge to completely exclude “irrational” movements forgets that finding ways to partially accommodate their demands is often the more effective strategy. Allowing for conscientious objections to vaccination effectively ended the anti-vax movement in early 20th-century Britain. Just as interpersonal conflicts are more easily resolved by acknowledging and responding to people’s feelings, our seemingly intractable political divides will only become productive by allowing opponents to have some influence on policy. That is not to say that we should give in to all their demands. Rather, we need to find small but important ways for them to feel heard and responded to, with policies that do not place unreasonable burdens on the rest of us.
While some might pooh-pooh this suggestion, pointing to conspiratorial thinking as evidence of how ill-suited Americans are for any degree of political influence, this gets the relationship backwards. Wisdom isn’t a prerequisite to practicing democracy, but an outcome of it. If our political opponents are to become more reasonable it will only be by being afforded more opportunities to sit down at the table with us to wrestle with just how complex our mutually shared problems are. They aren’t going anywhere, so we might as well learn how to coexist.
America’s nuclear energy situation is a microcosm of the nation’s broader political dysfunction. We are at an impasse, and the debate around nuclear energy is highly polarized, even contemptuous. This political deadlock ensures that a widely disliked status quo carries on unabated. Depending on one’s politics, Americans are left either with outdated reactors and an unrealized potential for a high-energy but climate-friendly society, or are stuck taking care of ticking time bombs churning out another two thousand tons of unmanageable radioactive waste every year.
Continue reading at The New Atlantis
Back during the summer, Tristan Harris sparked a flurry of academic indignation when he suggested that we needed a new field called “Science & Technology Interaction,” or STX, which would be dedicated to improving the alignment between technologies and social systems. Tweeters were quick to accuse him of “Columbizing,” claiming that such a field already existed in the form of Science & Technology Studies (STS) or some similar academic department. So ignorant, amirite?
I am far more sympathetic. If people like Harris (and earlier Cathy O’Neil) have been relatively unaware of fields like Science and Technology Studies, it is because much of the research within these disciplines is mostly illegible to non-academics, not all that useful to them, or both. I really don’t blame them for not knowing. I am even an STS scholar myself, and the tables of contents of most issues of my field’s major journals don’t really inspire me to read further.
And in fairness to Harris and contrary to Academic Twitter, the field of STX that he proposes does not already exist. The vast majority of STS articles and books dedicate single digit percentages of their words to actually imagining how technology could better match the aspirations of ordinary people and their communities. Next to no one details alternative technological designs or clear policy pathways toward a better future, at least not beyond a few pages at the end of a several-hundred-page manuscript.
My target here is not just this particular critique of Harris, but the whole complex of academic opiners who cite Foucault and other social theory to make sure we know just how “problematic” non-academics’ “ignorant” efforts to improve technological society are. As essential as it is to try to improve upon the past in remaking our common world, most of these critiques don’t really provide any guidance for what steps we should be taking. And I think that if scholars are to be truly helpful to the rest of humanity they need to do more than tally and characterize problems in ever more nuanced ways. They need to offer more than the academic equivalent of fiddling while Rome burns.
In the case of Harris, we are told that underlying the more circumspect digital behavior that his organization advocates is a dangerous preoccupation with intentionality. The idea of being more intentional is tainted by the unsavory history of humanistic thought itself, which has been used for exclusionary purposes in the past. Left unsaid is exactly how exclusionary or even harmful it remains in the present.
This kind of genealogical takedown has become cliché. Consider how one Gizmodo blogger criticizes environmentalists’ use of the word “natural” in their political activism. The reader is instructed that because early Europeans used the concept of nature to prop up racist ideas about Native Americans, the term is now inherently problematic and baseless. The reader is supposed to conclude from this genealogical problematization that all human interactions with nature are equally natural or artificial, regardless of whether we choose to scale back industrial development or to erect giant machines to control the climate.
Another common problematization is of the form “not everyone is privileged enough to…”, and it is often a fair objection. For instance, people differ in their individual ability to disconnect from seductive digital devices, whether due to work constraints or the affordability or ease of alternatives. But differences in circumstances similarly challenge people’s capacity to affordably see a therapist, retrofit their home to be more energy efficient, or bike to work (and one might add to that: read and understand Foucault). Yet most of these actions still accomplish some good in the world. Why is disconnection any more problematic than any other set of tactics that individuals use to imperfectly realize their values in an unequal and relatively undemocratic society? Should we just hold our breath for the “total overhaul…full teardown and rebuild” of political economies that the far more astute critics demand?
Equally trite are references to the “panopticon,” a metaphor that Foucault developed to describe how people’s awareness of being constantly surveilled leads them to police themselves. Being potentially visible at all times enables social control in insidious ways. A classic example is the Benthamite prison, where a solitary guard at the center cannot actually view all the prisoners simultaneously, but the potential for him to be viewing a prisoner at any given time is expected to reduce deviant behavior.
This gets applied to nearly any area of life where people are visible to others, which means it is used to problematize nearly everything. Jill Grant uses it to take down the New Urbanist movement, which aspires (though fairly unsuccessfully) to build more walkable neighborhoods that are supportive of increased local community life. This movement is “problematic” because the densities it demands mean that citizens are everywhere visible to their neighbors, opening up possibilities for the exercise of social control. It is not exactly clear whether any other way of housing human beings would avoid some form of residential panopticon, except perhaps designing neighborhoods so as to prohibit social community writ large.
Further left unsaid in these critiques is exactly what a more desirable alternative would be. Or at least that alternative is left implicit and vague. For example, the pro-disconnection digital wellness movement is in need of enhanced wokeness, to better come to terms with “the political and ideological assumptions” that they take for granted and the “privileged” values they are attempting to enact in the world.
But what does that actually mean? There’s a certain democratic thrust to the criticism, one that I can get behind. People disagree about what is “the good life” and how to get there, and any democratic society would be supportive of a multitude of them. Yet the criticism that the digital wellness movement centers on one vision of “being human,” one emphasizing mindfulness and a capacity to exercise circumspect individual choosing, rings hollow without the critics themselves showing us what should take its place. Whatever the flaws with digital wellness, it is not as self-stultifying as the defeatist brand of digital hedonism implicitly left in the wake of academic critiques that offer no concrete alternatives. Perhaps it is unfair to expect a full-blown alternative; yet few of these critiques offer even an incremental step in the right direction.
Even worse, this line of criticism can problematize nearly everything, losing its rhetorical power as it is over-applied. Even academia itself is disciplining. STS has its own dominant paradigms, and critique is mobilized in order to mold young scholars into academics who cite the right people, quote the correct theories, and support the preferred values. My success depends on me being at least “docile enough” in conforming myself to the norms of the profession.
I also exercise self-discipline in my efforts to be a better spouse and a better parent. I strive to be more intentional when I’m frustrated or angry, because I too often let my emotions shape my interactions with loved ones in ways that do not align with my broader aspirations. More intentionality in my life has been generally a good thing, so long as my expectations are not so unrealistic as to provoke more anxiety than the benefits are worth. But in a critical mode where self-discipline and intentionality automatically equate to self-subjugation, how exactly are people to exercise agency in improving their own lives?
In any case, advocating devices that enable users to exercise greater intentionality over their digital practices is not a bad thing per se. Citizens pursue self-help, meditate, and engage in other individualistic wellness activities because the lives they live are constrained. Their agency is partly circumscribed by their jobs, family responsibilities, and incomes, not to mention the more systemic biases of culture and capitalism. Why is it wrong for groups like Harris’ center to advocate efforts that largely work within those constraints?
Yet even that reading of the digital wellness movement seems uncharitable. Certainly Harris’ analysis lacks the sophistication of a technology scholar’s, but he has made it obvious that he recognizes that dominant business models and asymmetrical relations of power underlie the problem. To reduce his efforts to mere individualistic self-discipline is borderline dishonest, though he no doubt emphasizes the parts of the problem he understands best. Of course it will likely take more radical changes to realize the humane technology that Harris advocates, but it is not totally clear whether individualized efforts necessarily detract from people’s ability or willingness to demand more from tech firms and governments (i.e., are they like bottled water and other “inverted quarantines”?). At least that is a claim that should be demonstrated rather than presumed from the outset.
At its worst, critical “problematizing” presents itself as its own kind of view from nowhere. For instance, because the idea of nature has been constructed in various biased ways throughout history, we are supposed to accept the view that all human activities are equally natural. And we are supposed to view that perspective as if it were itself an objective fact rather than yet another politically biased social construction.
Various observers mobilize much the same critique about claims regarding the “realness” of digital interactions. Because presenting the category of “real life” as being apart from digital interactions is beset with Foucauldian problematics, we are told that the proper response is to no longer attempt the qualitative distinctions that that category can help people make—whatever its limitations. It is probably no surprise that the same writer wanting to do away with the digital-real distinction is enthusiastic in their belief that the desires and pleasures of smartphones somehow inherently contain the “possibility…of disrupting the status quo.” Such critical takes give the impression that all technology scholarship can offer is a disempowering form of relativism, one that only thinly veils the author’s underlying political commitments.
The critic’s partisanship is also frequently snuck in through the back door by couching criticism in an abstract commitment to social justice. The fact that the digital wellness movement is dominated by tech bros and other affluent whites implies that it must be harmful to everyone else—a claim made by alluding to some unspecified amalgamation of oppressed persons (women, people of color, or non-cis citizens) who are insufficiently represented. It is assumed but not really demonstrated that people within the latter demographics would be unreceptive to or even damaged by Harris’ approach. But given the lack of actual concrete harms laid out in these critiques, it is not clear whether the critics are actually advocating for those groups or whether the social-theoretical existence of harms to them is just a convenient trope to make a mainly academic argument seem as if it actually mattered.
People’s prospects for living well in the digital age would be improved if technology scholars more often eschewed the deconstructive critique from nowhere. I think they should act instead as “thoughtful partisans.” By that I mean that they would acknowledge that their work is guided by a specific set of interests and values, ones that work to the benefit of particular groups.
It is not an impartial application of social theory to suggest that “realness” and “naturalness” are empty categories that should be dispensed with. And a more open and honest admission of partisanship would at least force writers to be upfront with readers regarding what the benefits would actually be to dispensing with those categories and who exactly would enjoy them—besides digital enthusiasts and ecomodernists. If academics were expected to use their analysis to the clear benefit of nameable and actually existing groups of citizens, scholars might do fewer trite Foucauldian analyses and more often do the far more difficult task of concretely outlining how a more desirable world might be possible.
“The life of the critic is easy,” notes Anton Ego in the Pixar film Ratatouille. Actually having skin in the game and putting oneself and one’s proposals out in the world where they can be scrutinized is far more challenging. Academics should be pushed to clearly articulate exactly how the novel concepts, arguments, observations, and claims they spend so much time developing actually benefit human beings who don’t have access to Elsevier or who don’t receive seasonal catalogs from Oxford University Press. Unless they do so, I cannot imagine academia having much of a role in helping ordinary people live better in the digital age.
As a scholar concerned about the value of democracy within contemporary societies, especially with respect to the challenges presented by increasingly complex (and hence risky) technoscience, a good check for my views is to read arguments by critics of democracy. I had hoped Jason Brennan's Against Democracy would force me to reconsider some of the assumptions that I had made about democracy's value and perhaps even modify my position. Hoped.
Having read through a few chapters, I am already disappointed and unsure if the rest of the book is worth my time. Brennan's main assertion is that because some evidence shows that participation in democratic politics has a corrupting influence--that is, participants are not necessarily well informed and often end up becoming more polarized and biased in the process--we would be better off limiting decision-making power to those who have proven themselves sufficiently competent and rational: to an epistocracy. Never mind the absurdity of the idea that a process for judging those qualities in potential voters could ever be made apolitical, unbiased, or just; Brennan does not even begin with a charitable or nuanced understanding of what democracy is or could be.
One early example that exposes the simplicity of Brennan's understanding of democracy--and perhaps even the circularity of his argument--is a thought experiment about child molestation. Brennan asks the reader to consider a society that has deeply deliberated the merits of adults raping children and subjected the decision to a majority vote, with the yeas winning. Brennan claims that because the decision was made in line with proper democratic procedures, advocates of a proceduralist view of democracy must see it as a just outcome. Due to the clear absurdity and injustice of this result, we must therefore reject the view that democratic procedures (e.g., voting, deliberation) themselves are inherently just.
What makes this thought experiment so specious is that Brennan assumes that one relatively simplistic version of a proceduralist, deliberative democracy can represent the whole. Even worse, his assumed model of deliberative democracy--ostensibly not too far from what already exists in most contemporary nations--is already questionably democratic. Not only are majoritarian decision-making and procedural democracy far from equivalent, but Brennan makes no mention of whether or not children themselves were participants in either the deliberative process or the vote, or even would have a representative say through some other mechanism. Hence, in this example Brennan actually ends up showing the deficits of a kind of epistocracy rather than democracy, insofar as the ostensibly more competent and rationally thinking adults are deliberating and voting for children. That is, political decisions about children already get made by epistocrats (i.e., adults) rather than democratically (understood as people having influence in deciding the rules by which they will be governed for the issues they have a stake in). Moreover, any defender of the value of democratic procedures would likely counter that a well-functioning democracy would contain processes to amplify or protect the say of less empowered minority groups, whether through proportional representation or mechanisms to slow down policy or to force majority alliances to make concessions or compromises. It is entirely unsurprising that democratic procedures look bad when one's stand-in for democracy is winner-take-all, simple majoritarian decision-making.
His attack on democratic deliberation is equally short-sighted. Brennan rightly criticizes scholars who defend deliberative democracy with purely theoretical arguments while much of the empirical evidence shows that many average people dislike deliberation and are often very bad at it; he concludes that, absent promising research on how to improve the situation, there is no logical reason to defend deliberative democracy. This is where Brennan's narrow disciplinary background as a political theorist biases his viewpoint. It is not at all surprising to a social scientist that average people would fail to deliberate well, or to enjoy it, when the near entirety of contemporary society fails to prepare them for democracy. Most adults have spent 18 years or more in schools and up to several decades in workplaces that do not function as democracies but rather are authoritarian, centrally planned institutions. Empirical research on deliberation has merely uncovered the obvious: people with little practice in deliberative interactions are bad at them. Imagine if an experiment put assembly-line workers in charge of managing General Motors, then justified the current hierarchical makeup of corporate firms by pointing to the resulting non-ideal outcomes. I see no reason why Brennan's reasoning about deliberative democracy is any less absurd.
Finally, Brennan's argument rests on a principle of competence--and, concurrently, the claim that citizens have a right to governments that meet that principle. He borrows the principle from medical ethics: a patient is competent if they are aware of the relevant facts, can understand them, appreciate their relevance, and can reason about them appropriately. Brennan sidesteps the obvious objections about how any of the judgments about relevance and appropriateness could be made in non-political ways, merely claiming that the principle is unobjectionable in the abstract. Certainly, for the simplified thought examples he provides of plumbers unclogging pipes and doctors treating patients with routine conditions, the validity of the principle of competence is clear. However, for the most contentious issues we face--climate change, gun control, genetically modified organisms, etc.--the facts and the reliability of experts are themselves in dispute. What political system would best resolve such a dispute? Obviously it could not be an epistocracy, given that the relevance and appropriateness of the "relevant" expertise is itself the issue to be decided. Perhaps Brennan's suggestions have some merit, but absent a non-superficial understanding of the relationship between science and politics, the foundation of his positive case for epistocracy is shaky at best. His oft-repeated assertion that epistocracy would likely produce more desirable decisions is highly speculative.
I plan on continuing to examine Brennan's arguments regarding democracy, but I find it ironic that his argument against average citizens--that they suffer too much from various cognitive maladies to reason well about public issues--applies equally to Brennan. Indeed, the hubris of most experts is deeply rooted in their unfounded belief that a little learning has freed them from the mental limitations that infect the less educated. In reality, Brennan is a partisan like anyone else, not a sagely academic doling out objective advice. Whether one turns to epistocratic ideas in light of the limitations of contemporary democracies or advocates for ensuring the right preconditions for democracies to function better comes back to one's values and political commitments. So far it seems that Brennan's book demonstrates his own political biases as much as it exposes the ostensibly insurmountable problems for democracy.
After news broke of the Las Vegas shooting, which claimed some 59 lives, professional and lay observers did not hesitate in trotting out the same rhetoric that Americans have heard time and time again. Those horrified by the events demanded that something be done; indeed, the frequency and scale of these events should be horrifying. Conservatives, in response, emphasized evidence for what they see as the futility of gun control legislation. Yet it is not so much gun control itself that seems futile but rather our collective efforts to accomplish almost any policy change. The Onion satirized America's firearm predicament with the same headline used after numerous other shootings: “‘No Way to Prevent This,’ Says Only Nation Where This Regularly Happens.” Why is it that we Americans seem so helpless to effect change with regard to mass shootings? What explains our inability to collectively act to combat these events?
Political change is, almost invariably, slow and incremental. Although the American political system is, by design, uniquely conservative and biased toward maintaining the status quo, that is not the only reason why rapid change rarely occurs. Democratic politics is often characterized as being composed of a variety of partisan political groups, all vying with one another to get their preferred outcome on any given policy area: that is, as pluralistic. When these different partisan groups are relatively equal and numerous, change is likely to be incremental because of substantial disagreements between these groups and the fact that each only has a partial hold on power. Relative equality among them means that any policy must be a product of compromise and concession—consensus is rarely possible. Advocates of environmental protection, for instance, could not expect to convince governments to immediately dismantle coal-fired power plants, though they might be able to get taxes, fines, or subsidies adjusted to discourage them; the opposition of industry would prevent radical change. Ideally, the disagreements and mutual adjustments between partisans would lead to a more intelligent outcome than if, say, a benevolent dictator unilaterally decided.
While incremental policy change would be expected even in an ideal world of relatively equal partisan groups, things can move even slower when one or more partisan groups are disproportionately powerful. This helps explain why gun control policy—and, indeed, environmental protections, and a whole host of other potentially promising changes—more often stagnates than advances. Businesses occupy a relatively privileged position compared to other groups. While the CEO of Exxon can expect the president’s ear whenever a new energy bill is being passed, average citizens—and even heads of large environmental groups—rarely get the same treatment. In short, when business talks, governments listen. Unsurprisingly the voice of the NRA, which is in essence a lobby group for the firearm industry, sounds much louder to politicians than anyone else’s—something that is clear from the insensitivity of congressional activity to widespread support for strengthening gun control policy.
But there is more to it than just that. I am not the first person to point out that the strength of the gun lobby stymies change. By focusing overly on the disproportionate power wielded by some in the gun violence debate, we miss the more subtle ways in which democratic political pluralism is itself in decline.
Another contributing factor to the slowness of gun policy change is the way Americans talk about issues like gun violence. Most news stories, op-eds, and tweets are laced with references to studies and a plethora of national and international statistics. Those arguing about what should be done about gun violence act as if the main barrier to change has been that not enough people have been informed of the right facts. What is worse is that most participants seem already totally convinced of the rightness of their own version or interpretation of those facts: e.g., that employing post-Port Arthur Australian policy in the US will reduce deaths, or that restrictive gun laws will lead to rises in urban homicides. Similar to two warring nations both believing that they have God uniquely on their side, both sides of the gun control debate lay claim to being on the right side of the facts, if not rationality writ large.
The problem with such framings (besides the fact that no one actually knows what the outcome would be until a policy is tried out) is that anyone who disagrees must be ignorant, an idiot, or both. That is, exclusively fact-based rhetoric—the scientizing of politics—denies pluralism. Any disagreement is painted as illegitimate, if not heretical. Such a view leads to a fanatical form of politics: there is the side with “the facts” and the side that only needs to be informed or defeated, not listened to. If “the facts” have already predetermined the outcome of policy change, then there is no rational reason for compromise or concession; one would simply be harming one’s own position (and entertaining nonsense).
If gun control policy is to proceed more pluralistically, then it would seem that rhetorical appeals to the facts would need to be dispensed with—or at least modified. Given that the uncompromising fanaticism of some of those involved seems rooted in an unwavering certainty regarding the relevant facts, emphasizing uncertainty would likely be a promising avenue. In fact, psychological studies find that asking people to face the complexity of public issues and recognize the limits of their own knowledge leads to less fanatical political positions.
Proceeding with a conscious acknowledgement of uncertainty would have the additional benefit of encouraging smarter policy. Guided by an overinflated trust that a few limited studies can predict outcomes in exceedingly complex and unpredictable social systems, policy makers tend to institute rule changes or laws with no explicit role for learning. Even though scientific theories are only tentatively true, ready to be overturned by ever more discerning experimental tests or a shift in paradigm, participants in the debate act as if events in Australia or Chicago have established eternal truths about gun control. As a result, it is seldom considered that new policies could be tested gradually (background check and registration requirements that become more stringent over time, or regional rollouts), with an explicit emphasis on monitoring for effectiveness and unintended consequences—especially consequences for the already marginalized.
How Americans debate issues like gun control would be improved in still other ways if the narrative of “the facts” were not so dominant in people’s speech. It would allow greater consideration of values, feelings, and experiences. For instance, gun rights advocates are right to note that semiautomatic “assault” weapons are responsible for a minority of gun deaths, but their narrow focus on that statistical fact prevents them from recognizing that it is not the weapons’ “objective” danger that motivates their opponents but their political riskiness. The assault rifle, due to its use in horrific mass shootings, has come to symbolize American gun violence writ large. For gun control advocates it is the antithesis of conservatives’ iconography of the flag: it represents everything that is rotten about American culture. No doubt reframing the debate in that way would not guarantee more productive deliberation, but it would at least offer political opponents some means of beginning to understand each other’s positions.
Even if I am at least partly correct in diagnosing what ails American political discourse, there remains the pesky problem of how to treat it. Allusions to “the facts,” attempts to leverage rhetorical appeals to science for political advantage, have come to dominate political discourse over the course of decades—and without anyone consciously intending or dictating it. How to effect movement in the opposite direction? Unfortunately, while some social scientists study these kinds of cultural shifts as they occur throughout history, practically none of them research how beneficial cultural changes could be generated in the present. Hence, perhaps the first change citizens could advocate for would be more publicly responsive and relevant social research. Faced with an increasingly pathological political process and evermore dire consequences from epochal problems, social scientists can no longer afford to be so aloof; they cannot afford to simply observe and analyze society while real harms and injustices continue unabated.
It is hard to imagine anything more damaging to the movements for livable minimum wages, greater reliance on renewable energy resources, or workplace democracy than the stubborn belief that one must be a “liberal” to support them. Indeed, the common narrative that associates energy efficiency with left-wing politics leads to absurd actions by more conservative citizens. Not only do some self-identified conservatives intentionally make their pickup trucks more polluting at high cost (e.g., “rolling coal”), but they will shun energy-efficient—and money-saving—lightbulbs if their packaging touts their environmental benefits. Those on the left often do little to help the situation, themselves seemingly buying into the idea that conservatives must culturally be everything leftists are not and vice versa. As a result, the possibility of allying for common purposes, against a common enemy (i.e., neoliberalism), is forgone.
The Germans have not let themselves be hindered by such narratives. Indeed, their movement toward embracing renewables, which now make up nearly a third of their power generation market, has been driven by a diverse political coalition. A number of villages in the German conservative party (CDU) heartland now produce more green energy than they need, and conservative politicians supported the development of feed-in tariffs and voted to phase out nuclear energy. As Craig Morris and Arne Jungjohann describe, the German energy transition resonates with key conservative ideas, namely the ability of communities to self-govern and the protection of valued rural ways of life. Agrarian villages are given a new lease on life by farming energy next to crops and livestock, and enabling communities to produce their own electricity lessens the control of large corporate power utilities over energy decisions. Such themes remain latent in American conservative politics, now overshadowed by the post-Reagan dominance of “business friendly” libertarian thought styles.
Elizabeth Anderson has noticed a similar contradiction with regard to workplaces. Many conservative Americans decry what they see as overreach by federal and state governments, but tolerate outright authoritarianism at work. Tracing the history of conservative support for “free market” policies, she notes that such ideas emerged in an era when self-employment was much more feasible. Given the immense economies of scale possible with post-Industrial Revolution technologies, however, the barriers to entry for most industries are much too high for average people to own and run their own firms. As a result, free market policies no longer create the conditions for citizens to become self-reliant artisans but rather spur the centralization and monopolization of industries. Citizens, in turn, become wage laborers, working under conditions far more similar to feudalism than many people are willing to recognize.
Even Adam Smith, to whom many conservatives look for guidance on economic policy, argued that citizens would only realize the moral traits of self-reliance and discipline—values that conservatives routinely espouse—in the right contexts. In fact, he wrote of people stuck doing repetitive tasks in a factory:
“He naturally loses, therefore, the habit of such exertion, and generally becomes as stupid and ignorant as it is possible to become for a human creature to become. The torpor of his mind renders him not only incapable of relishing or bearing a part in any rational conversation, but of conceiving any generous, noble, or tender sentiment, and consequently of forming any just judgment concerning many even of the ordinary duties of private life. Of the great and extensive interests of his country he is altogether incapable of judging.”
Advocates of economic democracy have overlooked a real opportunity to enroll conservatives in this policy area. Right-leaning citizens need not be like Mike Rowe—a man who ironically garnered a following among “hard working” conservatives by merely dabbling in blue collar work—and mainly bemoan the ostensible decline in citizens’ work ethic. Conservatives could be convinced that creating policies that support self-employment and worker-owned firms would be far more effective in creating the kinds of citizenry they hope for, far better than simply shaming the unemployed for apparently being lazy. Indeed, they could become like the conservative prison managers in North Dakota (1), who are now recognizing that traditionally conservative “tough on crime” legislation is both ineffective and fiscally irresponsible—learning that upstanding citizens cannot be penalized into existence.
Another opportunity has been lost by not constructing more persuasive narratives that connect neoliberal policies with the decline of community life and the eroding well-being of the nation. Contemporary conservatives will vote for politicians who enable corporations to outsource or relocate at the first sign of better tax breaks somewhere else, while they simultaneously decry the loss of the kinds of neighborhood environments that they experienced growing up. Their support of “business friendly” policies had far different implications in the days when the CEO of General Motors would say “what is good for the country is good for General Motors and vice versa.” Compare that to an Apple executive, who baldly stated: “We don’t have an obligation to solve America’s problems. Our only obligation is making the best product possible.”
Yet fights for a higher minimum wage and proposals to limit the destructively competitive process by which nations and cities try to lure businesses away from each other with tax breaks get framed as anti-American, even though they are poised to reestablish part of the social reality that conservatives actually value. Communities cannot prosper when torn asunder by economic disruptions; what is best for a multinational corporation is often not what is best for a nation like the United States. It is a tragedy that many leftists overlook these narratives and focus narrowly on appeals to egalitarianism, a moral language that political psychologists have found (unsurprisingly) to resonate only with other leftists.
The resulting inability to form alliances with conservatives over key economic and energy issues allows libertarian-inspired neoliberalism to drive conservative politics in the United States, even though libertarianism is as incompatible with conservatism as it is with egalitarianism. Libertarianism, by idealizing impersonal market forces, upholds an individualist vision of society that is incommensurable with communal self-governance and the kinds of market interventions that would enable more people to be self-employed or establish cooperative businesses. By insisting that one should “defer” to the supposedly objective market in nearly all spheres of life, libertarianism threatens to commodify the spaces that both leftists and conservatives find sacred: pristine wilderness, private life, etc.
There are real challenges, however, to realizing political coalitions between progressives and conservatives more often, namely divisions over traditionalist ideas regarding gender and sexuality. Yet even this is a recent development. As Nadine Hubbs shows, the idea that poor rural and blue-collar people are invariably more intolerant than urban elites is a modern construction. Indeed, studies in rural Sweden and elsewhere have uncovered a surprising degree of acceptance of non-heterosexual people, though rural queer people often understand and express their sexuality differently than urban gays. Hence, even for this issue, the problem lies not in rural conservatism per se but in the way contemporary rural conservatism in America has been culturally valenced. The extension of communal acceptance has been deemphasized in order to uphold consistency with contemporary narratives that present a stark urban-rural binary, wherein non-cis, non-heterosexual behaviors and identities are presumed to be compatible only with urban living. Yet the practice, and hence the narrative, of rural blue-collar tolerance could be revitalized.
However, the preoccupation of some progressives with maintaining a stark cultural distinction from rural America prevents progressive-conservative coalitions from coming together to realize mutually beneficial policy changes. I know that I have been guilty of this myself. Growing up with left-wing proclivities, I did much of what Nadine Hubbs criticizes middle-class Americans for: I made fun of “rednecks” and never, ever admitted to liking country music. My preoccupation with proving that I was really an “enlightened” member of the middle class, despite being a child of working-class parents and only one generation removed from the farm, prevented me from recognizing that I potentially had more in common with rednecks politically than I ever would with the corporate-friendly “centrist” politicians at the helm of both major parties. No doubt there is work to be done to undo all that has made many rural areas into havens for xenophobic, racist, and homophobic bigotry; but that work is no different from what could and should be done to encourage poor, conservative whites to recognize what a 2016 SNL sketch so poignantly illustrated: that they have far more in common with people of color than they realize.
1. A big oversight in the “work ethic” narrative is that it fails to recognize that slacking workers are often acting rationally. If one faces few avenues for advancement and is instantly replaced when suffering an illness or personal difficulty, why work hard? What white-collar observers like Rowe might see as laziness could instead be an adaptation to wage labor. In such contexts, working hard can reasonably be seen not as the key to success but as the mark of a chump: a person merely harming their own well-being in order to make someone else rich. The same discourse in the age of feudalism would have involved chiding peasants for taking too many holidays.
The belief that science and religion (and science and politics for that matter) are exact opposites is one of the most tenacious and misguided viewpoints held by Americans today, one that is unfortunately reinforced by many science journalists. Science is not at all faith-based, claims Forbes contributor Ethan Siegel in his rebuke of Matt Emerson’s suggestion otherwise. In arguing against the role of faith in science, however, Siegel ironically embraces a faith-based view of science. His perspective is faith-based not because it has ties to organized religion, obviously, but rather because it is rooted in an idealization of science disconnected from the actual evidence on scientific practice. Siegel mythologizes scientists, seeing them as impersonal and unbiased arbiters of truth. Similar to any other thought-impairing fundamentalism, the faith-based view of science, if too widespread, is antithetical to the practice of democracy.
Individual scientists, being human, fall prey to innumerable biases, conflicts of interest, motivated reasoning, and other forms of impaired inquiry. To expect otherwise is to sanctify them. Drug research, for instance, is a tangled thicket of financial conflicts of interest, wherein some scientists go to bat for pharmaceutical companies to keep generics from coming to market and put their names on articles ghost-written by corporations. Some have wondered whether scientific medical studies can be trusted at all, given that many, if not most, are so poorly designed.
Siegel, of course, would likely respond that the above are simply pathological cases of science, which will hopefully be excised from the institution eventually, as if they were malignant growths. He consistently tempers his assertions with an appeal to what a “good scientist” would do: “There [is no] such thing as a good scientist who won’t revise their beliefs in the face of new evidence,” claims Siegel. Rather than go the easy route and simply charge him with committing a No True Scotsman fallacy—given that many otherwise good scientists often appear to hold onto their beliefs despite new evidence—it is better to question whether his understanding of “good” science stands up to close scrutiny.
The image of scientists as disinterested and impersonal arbiters of truth, immediately at the ready to adjust their beliefs in response to new evidence, is not only at odds with the last fifty years of the philosophy and social study of science, it also conflicts with what scientists themselves will say about “good science.” In Ian Mitroff’s classic study of Apollo program scientists investigating the moon and its origins, one interviewed scientist derided what Siegel presents as good science as a “fairy tale,” noting that most of his colleagues did not impersonally sift through evidence but looked explicitly for what would support their views. Far from seeing it as pathological, however, one interviewee stated “bias has a role in science and serves it well.” Mitroff’s scientists argued that ideally disinterested scientists would fail to have the commitment to see their theories through difficult periods. Individual scientists need to have faith that they will persevere in the face of seemingly contrary evidence in order to do the work necessary to defend their theories. Without this bias-laden commitment, good theories would be thrown away prematurely.
To grasp why scientists, in contrast to their cheerleaders in the popular media, would defend bias as often good for science, one must recognize that the faith-based understanding of science is founded upon a mistaken view of objectivity. Far too many people see objectivity as inhering within scientists when it really exists between scientists. As political scientist Aaron Wildavsky noted, “What is wanted is not scientific neuters but scientists with differing points of view and similar scientific standards…science depends on institutions that maintain competition among scientists and scientific groups who are numerous, dispersed and independent.” Science does not progress because individual scientists are somehow more angelic human beings who can enter a laboratory and no longer see the world with biased eyes. Rather, science progresses to the extent that scientists with diverse and opposing biases meet in disagreement. Observations and theories become facts not because they appear obviously true to unbiased scientists but because they have been met with scrutiny from scientists with differing biases and the arguments for them found to be widely persuasive.
Different areas of science have varied in how well they support vibrant, productive disagreement. Indeed, part of the reason why so many studies are later found to be false is that scientists are not incentivized to repeat their colleagues’ studies; such replications are generally not publishable. Moreover, entire fields have suffered from cultural biases at one time or another. The image of the human egg as a passive “damsel in distress” waiting for a sperm to penetrate her persisted in spite of contrary evidence partly because of a traditional male bias within the biological sciences. Similar biases were discovered in primatology and elsewhere as scientific institutions became more diverse. Without enterprising scientists asking seemingly heretical questions of what appears to be “sound science,” sometimes on the basis of meager evidence, entrenched cultural biases masquerading as scientific facts might persist indefinitely.
The recognition that scientists exhibit flawed and motivated reasoning, bias, personal commitments, and the exercise of faith nearly as much as anyone else is important not merely because it is a more scientific understanding of science, but also because it is politically consequential. If citizens see scientists as impersonal arbiters of truth, they are likely to eschew subjecting science to public scrutiny. Political interference in science might seem undesirable, of course, when it involves creationists getting their religious views placed alongside evolution in high school science books. Nevertheless, as science and technology studies scholars Edward Woodhouse and Jeff Howard have pointed out, the belief that science is value-neutral and therefore best left up to scientists has enabled chemists (along with their corporate sponsors) to churn out ever more toxic chemicals and consumer products. Americans’ homes and environments are increasingly toxic because citizens leave decisions over the chemistry behind consumer products up to industrial chemists (and their managers). Less toxic consumer products are unlikely to ever exist in significant numbers so long as chemical scientists are considered beyond reproach.
Science is far too important to be left up to an autonomous scientific clergy. Dispensing with the faith-based understanding proffered by Siegel is the first step toward a more publicly accountable and more broadly beneficial scientific enterprise.
Looking at all the polarized rhetoric concerning vaccines, GMO crops, climate change, and processed foods, one might be tempted to conclude that the American status quo is under attack by a fervent anti-science movement. Indeed, it is not hard to find highly educated and otherwise intelligent people making just that claim in popular media. To some, that proposition probably seems commonsensical, if not blatantly obvious. Why else would people be skeptical of all these advances in medical, climate, and agricultural science? However, looking more closely at the style of argumentation employed by critics undermines the claim that they are “anti-science.” Rather, if there is any bias to popular deliberation over the risks of vaccines, climate change, and GMO crops, it is a widespread allergy to engaging in political talk about values.
Consider Vani Hari, aka the “Food Babe.” Her response to a take-down piece in Gawker is filled with references to studies and links to groups like Consumers Union, the Center for Science in the Public Interest, and the Environmental Working Group, which do employ people with scientific credentials and conduct tests. Groups concerned with potential adverse effects from vaccines, similarly, have their own scientists to fall back on and draw upon highly skeptical and scientific language to highlight uncertainties and as-yet-undone studies that might help settle safety concerns. If opponents were truly anti-science, they would not exert so much effort to mobilize scientific rhetoric and expertise. Of course, there remains the question of whose expertise is or should be relevant, as well as whether participants in the debate are attempting a fair and charitable interpretation of the available evidence. Nevertheless, the claim that the debate results from a split between pro- and anti-science factions is pretty much incoherent, if not deluded.
Contrary to recurring moral panics about the supposed emergence of polarized anti-scientism, American scientific controversies are characterized by a surprising amount of agreement. No one seems to dispute the presumption that debates about GMO crops, vaccines, processed foods, and other controversial instances of technoscience should be settled by scientific facts. Even creationists have leaned heavily on a scientific-sounding use of information theory to try to legitimate so-called “intelligent design.” Yet this agreement, in turn, rests on an unquestioned assumption that these are even issues that can be settled by science. Both the pro and the con sides are in many ways radically pro-science.
Because of this pervasive and unstated agreement, the very real value disagreements that drive these controversies are left by the wayside in favor of endless quibbling over what “the facts” really are. Given the experimenter’s regress—the reality that experimental tests and theories are fraught with uncertainties and unforeseen complexities but are nevertheless relied upon to validate each other—there is always some room for both doubt and confirmation bias in nearly all scientific findings. Of course, doubt is usually mobilized selectively by each side in controversies in ways that mirror their underlying value commitments. Those who tend to view developments within modern science as almost automatically amounting to human progress inevitably find some way to depict opponents as out of touch with the scientific method or using improper methodology. Critics of GMOs or pesticides, for their part, routinely claim to find similar inadequacies in existing safety studies. Additional scientific research, moreover, often only uncovers additional uncertainties; more science can just as often make controversies even more intractable.
Therefore, I think that Americans would be better off if social movements were more anti-science. Of course, I do not mean that they should totally disavow the benefits of looking at things scientifically or the more unambiguous benefits of contemporary technoscience. Instead, what I mean is that such groups would reject the assumption that all issues should be viewed, first and foremost, scientistically. Underneath most, if not all, public scientific controversies are real disagreements over values and power. Vaccine critics, rightly or wrongly, are motivated by a felt loss of liberty regarding their ability to make health decisions for their children in an age of seemingly pervasive risk. Advocates for more organic farming and fewer petroleum-derived residues on their food and in ecosystems are concerned not only about health risks but about the lack of input they have regarding what goes into the products they ingest. The real debate should concern the extent to which the technoscientific experts who create the potentially risky and nearly unavoidable products that fill our houses, break down into our local watersheds, and end up in our bloodstreams (along with their allies in business and government) are sufficiently publicly accountable.
Advocacy groups, however, are caught in a Catch-22. As long as the main legitimacy-imparting public language is scientistic, those who fill their discourse primarily with considerations of values will probably have a hard time getting heard. Nevertheless, incremental gains could be had if at least some people endeavored to talk to friends, family, and acquaintances about these matters differently: in more explicitly political ways. Those who support mandatory vaccination would do better to talk about the right of parents of infants and the elderly not to live under the specter of previously eradicated and potentially deadly diseases than to claim a level of certitude about vaccine risk they cannot possibly possess. Advocates of organic farming would do well to frame their opposition to GMOs with reference to questions concerning who owns the means of producing food, who primarily benefits, and who has the power to decide which agricultural products are safe. Far too many citizens talk as if they believe science can do their politics for them. It is about time we put that belief to rest.
Taylor C. Dotson is an associate professor at New Mexico Tech, a Science and Technology Studies scholar, and a research consultant with WHOA. He is the author of The Divide: How Fanatical Certitude is Destroying Democracy and Technically Together: Reconstructing Community in a Networked World. Here he posts his thoughts on issues mostly tangential to his current research.