Are Americans losing their grip on reality? It is difficult not to think so in light of the spread of QAnon conspiracy theories, which posit that a deep-state ring of Satanic pedophiles is plotting against President Trump. A recent poll found that some 56% of Republican voters believe that at least some of the QAnon conspiracy theory is true. But conspiratorial thinking has been on the rise for some time. One 2017 Atlantic article claimed that America had “lost its mind” in its growing acceptance of post-truth. Robert Harris has more recently argued that the world has moved into an “age of irrationality.” Legitimate politics is threatened by a rising tide of unreasonableness, or so we are told. But the urge to divide people into the rational and the irrational is the real threat to democracy. And the antidote is more inclusion, more democracy—no matter how outrageous the things our fellow citizens seem willing to believe.
Despite recent panic over the apparent upswing in political conspiracy thinking, salacious rumor and outright falsehood have been ever-present features of politics. Today’s lurid and largely evidence-free theories about left-wing child abuse rings have plenty of historical analogues. Consider tales of Catherine the Great’s equestrian dalliances and claims that Marie Antoinette found lovers both among court servants and within her own family. Absurd stories about political elites seem to have been anything but rare. Some of my older relatives believed in the 1990s that the government was storing weapons and spare body parts underneath Denver International Airport in preparation for a war against ordinary American citizens—and that was well before the Internet was a thing.
There seems to be little disagreement that conspiratorial thinking threatens democracy. Allusions to Richard Hofstadter’s classic essay on the “paranoid style” of American politics have become cliché. Hofstadter’s targets included 1950s conservatives who saw Communist treachery around every corner, 1890s populists railing against the growing power of the financial class, and widespread worries about the machinations of the Illuminati. He diagnosed their politics as paranoid in light of their shared belief that the world was in the grip of a vast cabal of morally corrupt elites.
Regardless of their specific claims, conspiracy theories’ harms come from their role in “disorienting” the public, leading citizens to have grossly divergent understandings of reality. And widespread conspiratorial thinking drives the delegitimation of traditional democratic institutions like the press and the electoral system. Journalists are seen as pushing “fake news.” The voting booths become “rigged.”
Such developments are no doubt concerning, but we should think carefully about how we react to conspiracism. Too often the response is to endlessly lament the apparent end of rational thought and wonder aloud if democracy can survive while being gripped by a form of collective madness. But focusing on citizens' perceived cognitive deficiencies presents its own risks. Historian Ted Steinberg called this the “diagnostic style” of American political discourse, which transforms “opposition to the cultural mainstream into a form of mental illness.” The diagnostic style leads us to view QAnoners, and increasingly political opponents in general, as not merely wrong but cognitively broken. They become the anti-vaxxers of politics.
While QAnon believers certainly seem to be deluding themselves, isn’t the tendency by leftists to blame Trump’s popular support on conservatives’ faulty brains and an uneducated or uninformed populace equally delusional? The extent to which such cognitive deficiencies are actually at play is beside the point as far as democracy is concerned. You can’t fix stupid, as the well-worn saying has it. Diagnosing chronic mental lapses actually leaves us very few options for resolving conflicts. Even worse, it prevents an honest effort to understand and respond to the motivations of people with strange beliefs. Calling people idiots will only cause them to dig in further.
Responses to the anti-vaxxer movement show as much. Financial penalties and other compulsory measures tend only to anger vaccine-hesitant parents, leading them to refuse voluntary vaccines more often and become more committed in their opposition. But it does not take a social scientific study to know this. Who has ever changed their mind in response to the charge of stupidity or ignorance?
Dismissing people with conspiratorial views blinds us to something important. While the claims themselves might be far-fetched, people often have legitimate reasons for believing them. African Americans, for instance, disproportionately believe conspiracy theories regarding the origin of HIV, such as that it was man-made in a laboratory or that the cure is being withheld, and are more hesitant about vaccines. But they also rate higher in distrust of medical institutions, often pointing to the Tuskegee Syphilis Study and ongoing racial disparities as evidence. And from British sheep farmers’ suspicion of state nuclear regulators in the aftermath of Chernobyl to mask skeptics’ current jeremiads against the CDC, governmental mistrust has often developed after officials’ overconfident claims about risks turned out to be inaccurate. What might appear to be an “irrational” rejection of the facts is often a rational response to a power structure that feels distant, unresponsive, and untrustworthy.
The influence of psychologists has harmed more than it has helped in this regard. Carefully designed studies purport to show that believers in conspiracy theories lack the ability to think analytically or suffer from obscure cognitive biases like “hypersensitive agency detection.” Recent opinion pieces exaggerate the “illusory truth effect,” a phenomenon discovered in psych labs whereby repeated exposure to false messages leads to a relatively slight increase in the number of subjects who rate them as true or plausible. The smallness of this albeit statistically significant effect doesn’t stop commentators from presenting social media users as passive dupes who only need to be told about QAnon so many times before they start believing it. Self-appointed champions of rationality have spared no effort to avoid thinking about the deeper explanations for conspiratorial thinking.
Banging the drum over losses in rationality will not get us out of our present situation. Underneath our seeming inability to find more productive political pastures is a profound misunderstanding of what makes democracy work. Hand-wringing over “post-truth” or conspiratorial beliefs is founded on the idea that the point of politics is to establish and legislate truths. Once that is your conception of politics, the trouble with democracy starts to look like citizens with dysfunctional brains.
When our fellow Americans are recast as cognitively broken, it becomes all too easy to believe that it would be best to exclude or diminish the influence of people who believe outrageous things. Increased gatekeeping within the media or by party elites and scientific experts begins to look really attractive. Some, like philosopher Jason Brennan, go even further. His 2016 book, Against Democracy, contends that the ability to rule should be limited to those capable of discerning and “correctly” reasoning about the facts, while largely sidestepping the question of who decides what the right facts are and how to know when we are correctly reasoning about them.
But it is misguided to think that making our democracy only more elitist will throttle the wildfire spread of conspiratorial thinking. If anything, doing so will only temporarily contain populist ferment, letting pressure build until it eventually explodes or (if we are lucky) economic growth leads it to fizzle out. Political gatekeeping, by mistaking supposed deficits in truth and rationality for the source of democratic discord, fails to address the underlying cause of our political dysfunction: the lack of trust.
Signs of our political system’s declining legitimacy are not difficult to find. A staggering 71 percent of Americans believe that elected officials don’t care about average citizens or what they think. Trust in our government has never been lower, with only 17 percent of citizens expressing confidence in Washington most or all of the time. By diagnosing rather than understanding, we cannot see that conspiratorial thinking is the symptom rather than the disease.
The spread of bizarre theories about COVID-19 being a “planned” epidemic or child-abuse rings is a response to real feelings of helplessness, isolation, and mistrust as numerous natural and manmade disasters unfold before our eyes—epochal crises that governments seem increasingly incapable of getting a handle on. Many of Hofstadter’s listed examples of conspiratorial thought came during similar moments: at the height of the Red Scare and Cold War nuclear brinkmanship, during the 1890s depression, or in the midst of pre-Civil War political fracturing. Conspiracy theories offer a simplified world of bad guys and heroes. A battle between good and evil is a more satisfying answer than the banality of ineffectual government and flawed electoral systems when one is facing wicked problems.
Perhaps social media adds fuel to the fire, accelerating the spread of outlandish explanations of what ails the nation. But it does so not because it short-circuits our neural pathways and crashes our brains’ rational thinking modules. Conspiracy theories are passed by word of mouth (or Facebook likes) among people we already trust. It is no surprise that they gain traction in a world where satisfying solutions to our chronic, festering crises are hard to find, and where most citizens are neither afforded a legible glimpse into the workings of the vast political machinery that determines much of their lives nor given the chance to substantially influence it.
Will we be able to reverse course before it is too late? If mistrust and unresponsiveness are the cause, the cure should be an effort to reacquaint Americans with the exercise of democracy on a broad scale. Hofstadter himself noted that, because the political process generally affords more extreme sects little influence, public decisions only seem to confirm conspiracy theorists’ belief that they are a persecuted minority. The urge to completely exclude “irrational” movements forgets that finding ways to partially accommodate their demands is often the more effective strategy. Allowing for conscientious objections to vaccination effectively ended the anti-vax movement in early 20th-century Britain. Just as interpersonal conflicts are more easily resolved by acknowledging and responding to people’s feelings, our seemingly intractable political divides will only become productive if opponents are allowed some influence on policy. That is not to say that we should give in to all their demands. Rather, we need to find small but important ways for them to feel heard and responded to, with policies that do not place unreasonable burdens on the rest of us.
While some might pooh-pooh this suggestion, pointing to conspiratorial thinking as evidence of how ill-suited Americans are for any degree of political influence, this gets the relationship backwards. Wisdom isn’t a prerequisite to practicing democracy, but an outcome of it. If our political opponents are to become more reasonable it will only be by being afforded more opportunities to sit down at the table with us to wrestle with just how complex our mutually shared problems are. They aren’t going anywhere, so we might as well learn how to coexist.
America’s nuclear energy situation is a microcosm of the nation’s broader political dysfunction. We are at an impasse, and the debate around nuclear energy is highly polarized, even contemptuous. This political deadlock ensures that a widely disliked status quo carries on unabated. Depending on one’s politics, Americans are left either with outdated reactors and an unrealized potential for a high-energy but climate-friendly society, or stuck taking care of ticking time bombs churning out another two thousand tons of unmanageable radioactive waste every year.
Continue reading at The New Atlantis
Back during the summer, Tristan Harris sparked a flurry of academic indignation when he suggested that we needed a new field called “Science & Technology Interaction,” or STX, which would be dedicated to improving the alignment between technologies and social systems. Tweeters were quick to accuse him of “Columbizing,” claiming that such a field already existed in the form of Science & Technology Studies (STS) or a similar academic department. So ignorant, amirite?
I am far more sympathetic. If people like Harris (and earlier Cathy O’Neil) have been relatively unaware of fields like Science and Technology Studies, it is because much of the research within these disciplines is mostly illegible to non-academics, not all that useful to them, or both. I really don’t blame them for not knowing. I am an STS scholar myself, and the table of contents of a typical issue of my field’s major journals doesn’t really inspire me to read further.
And in fairness to Harris and contrary to Academic Twitter, the field of STX that he proposes does not already exist. The vast majority of STS articles and books dedicate single digit percentages of their words to actually imagining how technology could better match the aspirations of ordinary people and their communities. Next to no one details alternative technological designs or clear policy pathways toward a better future, at least not beyond a few pages at the end of a several-hundred-page manuscript.
My target here is not just this particular critique of Harris, but the whole complex of academic opiners who cite Foucault and other social theory to make sure we know just how “problematic” non-academics’ “ignorant” efforts to improve technological society are. As essential as it is to try to improve upon the past in remaking our common world, most of these critiques don’t really provide any guidance for what steps we should be taking. And I think that if scholars are to be truly helpful to the rest of humanity they need to do more than tally and characterize problems in ever more nuanced ways. They need to offer more than the academic equivalent of fiddling while Rome burns.
In the case of Harris, we are told that underlying the more circumspect digital behavior that his organization advocates is a dangerous preoccupation with intentionality. The idea of being more intentional is tainted by the unsavory history of humanistic thought itself, which has been used for exclusionary purposes in the past. Left unsaid is exactly how exclusionary or even harmful it remains in the present.
This kind of genealogical takedown has become cliché. Consider how one Gizmodo blogger criticizes environmentalists’ use of the word “natural” in their political activism. The reader is instructed that, because early Europeans used the concept of nature to prop up racist ideas about Native Americans, the term is now inherently problematic and baseless. From this genealogical problematization, the reader is supposed to conclude that all human interactions with nature are equally natural or artificial, regardless of whether we choose to scale back industrial development or to erect giant machines to control the climate.
Another common problematization takes the form “not everyone is privileged enough to…,” and it is often a fair objection. For instance, people differ in their ability to disconnect from seductive digital devices, whether due to work constraints or the affordability or ease of alternatives. But differences in circumstances similarly challenge people’s capacity to affordably see a therapist, retrofit their home to be more energy efficient, or bike to work (and one might add to that: read and understand Foucault). Yet most of these actions still accomplish some good in the world. Why is disconnection any more problematic than any other set of tactics that individuals use to imperfectly realize their values in an unequal and relatively undemocratic society? Should we just hold our breath for the “total overhaul…full teardown and rebuild” of political economies that the far more astute critics demand?
Equally trite are references to the “panopticon,” a metaphor that Foucault developed to describe how people’s awareness of being constantly surveilled leads them to police themselves. Being potentially visible at all times enables social control in insidious ways. A classic example is the Benthamite prison, where a solitary guard at the center cannot actually view all the prisoners simultaneously, but the potential for him to be viewing a prisoner at any given time is expected to reduce deviant behavior.
This gets applied to nearly any area of life where people are visible to others, which means it is used to problematize nearly everything. Jill Grant uses it to take down the New Urbanist movement, which aspires (though fairly unsuccessfully) to build more walkable neighborhoods that are supportive of increased local community life. This movement is “problematic” because the densities it demands mean that citizens are everywhere visible to their neighbors, opening up possibilities for the exercise of social control. Whether any other way of housing human beings would avoid producing some form of residential panopticon is not exactly clear, except perhaps designing neighborhoods so as to prohibit social community writ large.
Further left unsaid in these critiques is exactly what a more desirable alternative would be. Or at least that alternative is left implicit and vague. For example, the pro-disconnection digital wellness movement is in need of enhanced wokeness, to better come to terms with “the political and ideological assumptions” that they take for granted and the “privileged” values they are attempting to enact in the world.
But what does that actually mean? There’s a certain democratic thrust to the criticism, one that I can get behind. People disagree about what “the good life” is and how to get there, and any democratic society would be supportive of a multitude of visions. Yet the criticism that the digital wellness movement centers on one vision of “being human,” one emphasizing mindfulness and a capacity for circumspect individual choice, seems hollow without the critics themselves showing us what should take its place. Whatever the flaws of digital wellness, it is not as self-stultifying as the defeatist brand of digital hedonism implicitly left in the wake of academic critiques that offer no concrete alternatives. Perhaps it is unfair to expect a full-blown alternative; yet few of these critiques offer even an incremental step in the right direction.
Even worse, this line of criticism can problematize nearly everything, losing its rhetorical power as it is over-applied. Even academia itself is disciplining. STS has its own dominant paradigms, and critique is mobilized in order to mold young scholars into academics who cite the right people, quote the correct theories, and support the preferred values. My success depends on me being at least “docile enough” in conforming myself to the norms of the profession.
I also exercise self-discipline in my efforts to be a better spouse and a better parent. I strive to be more intentional when I’m frustrated or angry, because I too often let my emotions shape my interactions with loved ones in ways that do not align with my broader aspirations. More intentionality in my life has been generally a good thing, so long as my expectations are not so unrealistic as to provoke more anxiety than the benefits are worth. But in a critical mode where self-discipline and intentionality automatically equate to self-subjugation, how exactly are people to exercise agency in improving their own lives?
In any case, advocating devices that enable users to exercise greater intentionality over their digital practices is not a bad thing per se. Citizens pursue self-help, meditate, and engage in other individualistic wellness activities because the lives they live are constrained. Their agency is partly circumscribed by their jobs, family responsibilities, and incomes, not to mention the more systemic biases of culture and capitalism. Why is it wrong for groups like Harris’ center to advocate efforts that largely work within those constraints?
Yet even that reading of the digital wellness movement seems uncharitable. Certainly Harris’ analysis lacks the sophistication of a technology scholar’s, but he has made it obvious that he recognizes that dominant business models and asymmetrical relations of power underlie the problem. To reduce his efforts to mere individualistic self-discipline is borderline dishonest, though he no doubt emphasizes the parts of the problem he understands best. Of course, it will likely take more radical changes than Harris advocates to realize humane technology, but it is not at all clear that individualized efforts necessarily detract from people’s ability or willingness to demand more from tech firms and governments (i.e., are they like bottled water and other “inverted quarantines”?). At least that is a claim that should be demonstrated rather than presumed from the outset.
At its worst, critical “problematizing” presents itself as its own kind of view from nowhere. For instance, because the idea of nature has been constructed in various biased ways throughout history, we are supposed to accept the view that all human activities are equally natural. And we are supposed to treat that perspective as if it were itself an objective fact rather than yet another politically biased social construction.
Various observers mobilize much the same critique about claims regarding the “realness” of digital interactions. Because presenting the category of “real life” as something apart from digital interactions is beset with Foucauldian problematics, we are told that the proper response is to abandon the qualitative distinctions that the category can help people make—whatever its limitations. It is probably no surprise that the same writer who wants to do away with the digital-real distinction is enthusiastic in their belief that the desires and pleasures of smartphones somehow inherently contain the “possibility…of disrupting the status quo.” Such critical takes give the impression that all technology scholarship can offer is a disempowering form of relativism, one that only thinly veils the author’s underlying political commitments.
The critic’s partisanship is also frequently snuck in through the back door by couching criticism in an abstract commitment to social justice. The fact that the digital wellness movement is dominated by tech bros and other affluent whites supposedly implies that it must be harmful to everyone else—a claim made by alluding to some unspecified amalgamation of oppressed persons (women, people of color, or non-cis citizens) who are insufficiently represented. It is assumed but not really demonstrated that people within the latter demographics would be unreceptive to or even damaged by Harris’ approach. Given the lack of actual concrete harms laid out in these critiques, it is not clear whether the critics are actually advocating for those groups or whether the social-theoretical existence of harms to them is just a convenient trope to make a mainly academic argument seem as if it actually mattered.
People’s prospects for living well in the digital age would be improved if technology scholars more often eschewed the deconstructive critique from nowhere. I think they should act instead as “thoughtful partisans.” By that I mean that they would acknowledge that their work is guided by a specific set of interests and values, ones that benefit particular groups.
It is not an impartial application of social theory to suggest that “realness” and “naturalness” are empty categories that should be dispensed with. And a more open and honest admission of partisanship would at least force writers to be upfront with readers regarding what the benefits would actually be to dispensing with those categories and who exactly would enjoy them—besides digital enthusiasts and ecomodernists. If academics were expected to use their analysis to the clear benefit of nameable and actually existing groups of citizens, scholars might do fewer trite Foucauldian analyses and more often do the far more difficult task of concretely outlining how a more desirable world might be possible.
“The life of the critic is easy,” notes Anton Ego in the Pixar film Ratatouille. Actually having skin in the game and putting oneself and one’s proposals out in the world where they can be scrutinized is far more challenging. Academics should be pushed to articulate clearly how the novel concepts, arguments, observations, and claims they spend so much time developing actually benefit human beings who don’t have access to Elsevier or who don’t receive seasonal catalogs from Oxford University Press. Unless they do so, I cannot imagine academia having much of a role in helping ordinary people live better in the digital age.
If your Facebook wall is like mine, you have seen no shortage of memes trying to convince you that a simple explanation for school shootings exists. One claims that their increase coincides with the decline of proper “discipline” (read: corporal punishment) of children thirty years ago. Yet all sorts of things have changed over the last several decades, especially since 2011, when the frequency of mass shootings tripled. In any case, Europeans are equally unlikely to strike their children but have seen no uptick in acts of mass violence—the 2011 attack in Norway notwithstanding. Moreover, assault weapons like the AR-15 have been available for fifty years, and the 1994 federal assault weapons ban expired back in 2004, long before today’s upswing in shootings. Under the slightest bit of scrutiny, any single-cause explanation begins to unravel.
Journalists and other observers often note that the perpetrators of these events were “loners” or socially isolated but do little further investigation when it comes time to recommend solutions. It is as if we have begun to accept the existence of such isolated and troubled individuals as natural, as if little could be done to prevent it, as if eliminating civilian weapons or de-secularizing society were less wicked problems. If there is any mindset my book, Technically Together, tries to eliminate, it is the belief that the social lives offered to us by contemporary networked societies are unalterable—the idea that we have arrived at the best of all possible social worlds. Indeed, it is difficult to square sociologist Keith Hampton’s claim that “because of cellphones and social media, those we depend on are more accessible today than at any point since we lived in small, village-like settlements” with massive increases in the rates of medication use for depression and anxiety, not to mention the frequency of mass shootings. At the very least, digital technologies—for all their wonders—do less than is needed to remedy feelings of isolation.
Such changes, I contend, suggest that something is very wrong with contemporary practices of togetherness. No doubt most of us get by well enough with some mixture of social networks, virtual communities, and perhaps a handful of neighborly and workplace-based connections (if we’re lucky). That said, most goods, social or otherwise, are unequally distributed. Even if sociologists disagree about whether social ties have changed on average, the distribution of connection has changed, and so have the qualitative dimensions of friendship. For every social butterfly who uses online networks to maintain levels of acquaintanceship that would have been impossible in the days of Rolodexes and phone conversations, there are those for whom increasing digital mediation has meant a decline in companionship in both number and intimacy. As nice as “lurking” on Facebook or a pleasant comment from a semi-anonymous Reddit compatriot can be, they cannot match a hug. Indeed, self-reported loneliness and expressed difficulties in sustaining close friendships persist among older generations and young men despite no lack of digital mechanisms for connecting with others.
Some sociologists downplay this, as if highlighting the downsides of social networks invariably leads to simplistically blaming them for people’s problems. No doubt Internet critics like Sherry Turkle overlook many of the complexities of digital-age sociality, but only those socially advantaged by contemporary network technologies benefit from viewing them through rose-colored glasses. Certainly an explanation for mass shootings cannot be reduced to the prevalence of digital technologies, just as it cannot be blamed simply on the ostensible disappearance of God from schools, declines in juvenile corporal punishment, the mere presence of assault weapons, or any of the other purported causes that proliferate in the media. What Internet technologies do provide, however, is a window into society—insofar as they exacerbate or make more visible social changes set in motion decades earlier.
To blame the Internet for social isolation would be to fail to recognize that it was suburbia that first physically isolated people. Suburbia makes the warm intimacy of bodily co-presence hard work; hanging out requires gas money as well as the time and energy to drive somewhere.
Skeptical readers will probably point out that events like mass shootings became prevalent and accelerated well after the suburb-building boom of the mid-20th century. That objection is easy to counter: social lag. The first suburban dwellers brought with them communal practices learned in small towns or tight-knit urban neighborhoods, and their children maintained some of them. 30 Rock’s Jack Donaghy lamented that first-generation immigrants work their fingers to the bone, the second generation goes to college, and the third snowboards and takes improv classes. A similar generational slide could be said to afflict community in suburbia: the first generation bowls together; the second organizes the neighborhood watch; the third waits with their kids in the car until the school bus arrives.
Even while considering all that the physical makeup of our cities does to stifle community life, it would be a mistake not to recognize that there is something unique about many of our Internet activities that make them far more conducive to feelings of loneliness than other media—even if they do connect us with friends.
Consider how one woman in the BBC documentary The Age of Loneliness laments that social media makes her feel even lonelier, because she cannot help but compare her own life to the “highlights reels” posted by acquaintances. Others use the Internet to avoid the painful awkwardness and risk of in-person interactions, getting stuck in a downward spiral of solitude. These features combine with a third to help give birth to mass shooters: the “long tail” of the Internet provides websites that concentrate and amplify pathological tendencies. Forums that encourage people with eating disorders to continue damaging behaviors are as common as racist, violence-promoting websites, many of which have been frequented by recent mass shooters.
While it is the suburbs that physically isolate people and make in-person friendships practically difficult, online social networks too easily exacerbate and highlight that isolation. My point, however, is not to call for dismantling the Internet—though I think it could use a massive redesign. Such a call would be as simple-minded as believing that merely eliminating AR-15s or making kids read the Bible in school would prevent acts of mass violence. Appeals to improving mental health services or calls to arm teachers or place military veterans at schools are equally misguided. These are all band-aid solutions that fail to ask about the underlying causes. What we need most is not more guns, God, scrutiny of the mentally ill, or even necessarily gun bans, but a sober evaluation of our social world: Why does it not provide adequate levels of loving togetherness and belonging to nearly everyone? How could it?
To some this might sound like a call to coddle potential murderers. Yet, given that people’s genetics do not fully explain their personalities, societies have to reckon with the fact that mass shooters are not born ready-made monsters but become that way. It is difficult not to see parallels between many young men today and the “lost generation” that was so liable to fall prey to fascism in the early 20th century. The growth in the number of mass shooters, who are mainly young, white, and male, cannot be totally unrelated to the increase in acolytes, also mainly young, white, and male, of prophets like Jordan Peterson, who extol the virtues of traditional notions of male power. Absent work toward ameliorating the “crisis of connection” that many men currently face, we should be unsurprised if some of them continue to try to replace a lost sense of belonging with violent power fantasies.
As a scholar concerned about the value of democracy within contemporary societies, especially with respect to the challenges presented by increasingly complex (and hence risky) technoscience, I find that a good check on my views is to read arguments by critics of democracy. I had hoped Jason Brennan's Against Democracy would force me to reconsider some of the assumptions that I had made about democracy's value and perhaps even modify my position. Hoped.
Having read through a few chapters, I am already disappointed and unsure if the rest of the book is worth my time. Brennan's main assertion is that because some evidence shows that participation in democratic politics has a corrupting influence--that is, participants are not necessarily well informed and often end up becoming more polarized and biased in the process--we would be better off limiting decision-making power to those who have proven themselves sufficiently competent and rational, that is, to an epistocracy. Never mind the absurdity of the idea that a process for judging those qualities in potential voters could ever be devised in an apolitical, unbiased, or just way; Brennan does not even begin with a charitable or nuanced understanding of what democracy is or could be.
One early example that exposes the simplicity of Brennan's understanding of democracy--and perhaps even the circularity of his argument--is a thought experiment about child molestation. Brennan asks the reader to consider a society that has deeply deliberated the merits of adults raping children and subjected the decision to a majority vote, with the yeas winning. Brennan claims that because the decision was made in line with proper democratic procedures, advocates of a proceduralist view of democracy must see it as a just outcome. Due to the clear absurdity and injustice of this result, we must therefore reject the view that democratic procedures (e.g., voting, deliberation) themselves are inherently just.
What makes this thought experiment so specious is that Brennan assumes that one relatively simplistic version of a proceduralist, deliberative democracy can represent the whole. Even worse, his assumed model of deliberative democracy--ostensibly not too far from what already exists in most contemporary nations--is already questionably democratic. Not only are majoritarian decision-making and procedural democracy far from equivalent, but Brennan makes no mention of whether or not children themselves were participants in either the deliberative process or the vote, or even would have a representative say through some other mechanism. Hence, in this example Brennan actually ends up showing the deficits of a kind of epistocracy rather than democracy, insofar as the ostensibly more competent and rationally thinking adults are deliberating and voting for children. That is, political decisions about children already get made by epistocrats (i.e., adults) rather than democratically (understood as people having influence in deciding the rules by which they will be governed for the issues they have a stake in). Moreover, any defender of the value of democratic procedures would likely counter that a well-functioning democracy would contain processes to amplify or protect the say of less empowered minority groups, whether through proportional representation or mechanisms to slow down policy or to force majority alliances to make concessions or compromises. It is entirely unsurprising that democratic procedures look bad when one's stand-in for democracy is winner-take-all, simple majoritarian decision-making.
His attack on democratic deliberation is equally short-sighted. Noting, quite rightly, that many scholars defend deliberative democracy with purely theoretical arguments, while much of the empirical evidence shows that many average people dislike deliberation and are often very bad at it, Brennan concludes that, absent promising research on how to improve the situation, there is no logical reason to defend deliberative democracy. This is where Brennan's narrow disciplinary background as a political theorist biases his viewpoint. It is not at all surprising to a social scientist that average people would fail to deliberate well, or to enjoy it, when the near entirety of contemporary societies fails to prepare them for democracy. Most adults have spent 18 years or more in schools and up to several decades in workplaces that do not function as democracies but rather are authoritarian, centrally planned institutions. Empirical research on deliberation has merely uncovered the obvious: People with little practice with deliberative interactions are bad at them. Imagine if an experiment put assembly line workers in charge of managing General Motors, then justified the current hierarchical makeup of corporate firms by pointing to the resulting non-ideal outcomes. I see no reason why Brennan's reasoning about deliberative democracy is any less absurd.
Finally, Brennan's argument rests on a principle of competence--and concurrently the claim that citizens have a right to governments that meet that principle. He borrows the principle from medical ethics, namely that a patient is competent if they are aware of the relevant facts, can understand them, appreciate their relevance, and can reason about them appropriately. Brennan immediately sidesteps the obvious objections about how any of the judgments about relevance and appropriateness could be made in non-political ways, merely claiming that the principle is non-objectionable in the abstract. Certainly for the simplified examples that he provides, of plumbers unclogging pipes and doctors treating patients with routine conditions, the validity of the principle of competence is clear. However, for the most contentious issues we face (climate change, gun control, genetically modified organisms, etc.), the facts and the reliability of experts are themselves in dispute. What political system would best resolve such a dispute? Obviously it could not be an epistocracy, given that the relevance and appropriateness of the "relevant" expertise itself is the issue to be decided. Perhaps Brennan's suggestions have some merit, but absent a non-superficial understanding of the relationship between science and politics, the foundation of his positive case for epistocracy is shaky at best. His oft-repeated assertion that epistocracy would likely produce more desirable decisions is highly speculative.
I plan on continuing to examine Brennan's arguments regarding democracy, but I find it ironic that his argument against average citizens--that they suffer too much from various cognitive maladies to reason well about public issues--applies equally to Brennan. Indeed, the hubris of most experts is deeply rooted in their unfounded belief that a little learning has freed them from the mental limitations that infect the less educated. In reality, Brennan is a partisan like anyone else, not a sagely academic doling out objective advice. Whether one turns to epistocratic ideas in light of the limitations of contemporary democracies or advocates for ensuring the right preconditions for democracies to function better comes back to one's values and political commitments. So far it seems that Brennan's book demonstrates his own political biases as much as it exposes the ostensibly insurmountable problems for democracy.
It is hard to imagine anything more damaging to the movements for livable minimum wages, greater reliance on renewable energy resources, or workplace democracy than the stubborn belief that one must be a “liberal” to support them. Indeed, the common narrative that associates energy efficiency with left-wing politics leads to absurd actions by more conservative citizens. Not only do some self-identified conservatives intentionally make their pickup trucks more polluting at high cost (e.g., “rolling coal”), but they will shun energy efficient—and money saving—lightbulbs if their packaging touts their environmental benefits. Those on the left often do little to help the situation, themselves seemingly buying into the idea that conservatives must culturally be everything leftists are not and vice-versa. As a result, the possibility of allying for common purposes, against a common enemy (i.e., neoliberalism), is forgone.
The Germans have not let themselves be hindered by such narratives. Indeed, their movement toward embracing renewables, which now make up nearly a third of their power generation market, has been driven by a diverse political coalition. A number of villages in the German conservative party (CDU) heartland now produce more green energy than they need, and conservative politicians supported the development of feed-in tariffs and voted to phase out nuclear energy. As Craig Morris and Arne Jungjohann describe, the German energy transition resonates with key conservative ideas, namely the ability of communities to self-govern and the protection of valued rural ways of life. Agrarian villages are given a new lease on life by farming energy alongside crops and livestock, and enabling communities to produce their own electricity lessens the control of large corporate power utilities over energy decisions. Such themes remain latent in American conservative politics, now overshadowed by the post-Reagan dominance of “business friendly” libertarian thought styles.
Elizabeth Anderson has noticed a similar contradiction with regard to workplaces. Many conservative Americans decry what they see as overreach by federal and state governments, but tolerate outright authoritarianism at work. Tracing the history of conservative support for “free market” policies, she notes that such ideas emerged in an era when self-employment was much more feasible. Given the immense economies of scale possible with post-Industrial Revolution technologies, however, the barriers to entry for most industries are much too high for average people to own and run their own firms. As a result, free market policies no longer create the conditions for citizens to become self-reliant artisans but rather spur the centralization and monopolization of industries. Citizens, in turn, become wage laborers, working under conditions far more similar to feudalism than many people are willing to recognize.
Even Adam Smith, to whom many conservatives look for guidance on economic policy, argued that citizens would only realize the moral traits of self-reliance and discipline—values that conservatives routinely espouse—in the right contexts. In fact, he wrote of people stuck doing repetitive tasks in a factory:
“He naturally loses, therefore, the habit of such exertion, and generally becomes as stupid and ignorant as it is possible for a human creature to become. The torpor of his mind renders him not only incapable of relishing or bearing a part in any rational conversation, but of conceiving any generous, noble, or tender sentiment, and consequently of forming any just judgment concerning many even of the ordinary duties of private life. Of the great and extensive interests of his country he is altogether incapable of judging.”
Advocates of economic democracy have overlooked a real opportunity to enroll conservatives in this policy area. Right-leaning citizens need not be like Mike Rowe—a man who ironically garnered a following among “hard working” conservatives by merely dabbling in blue collar work—and mainly bemoan the ostensible decline in citizens’ work ethic. Conservatives could be convinced that policies supporting self-employment and worker-owned firms would do far more to create the kind of citizenry they hope for than simply shaming the unemployed for their apparent laziness. Indeed, they could become like the conservative prison managers in North Dakota (1), who are now recognizing that traditionally conservative “tough on crime” legislation is both ineffective and fiscally irresponsible—learning that upstanding citizens cannot be penalized into existence.
Another opportunity has been lost by not constructing more persuasive narratives that connect neoliberal policies with the decline of community life and the eroding well-being of the nation. Contemporary conservatives will vote for politicians who enable corporations to outsource or relocate at the first sign of better tax breaks somewhere else, while they simultaneously decry the loss of the kinds of neighborhood environments that they experienced growing up. Their support of “business friendly” policies had far different implications in the days when the CEO of General Motors would say “what is good for the country is good for General Motors and vice versa.” Compare that to an Apple executive, who baldly stated: “We don’t have an obligation to solve America’s problems. Our only obligation is making the best product possible.”
Yet fights for a higher minimum wage and proposals to limit the destructively competitive processes whereby nations and cities try to lure businesses away from each other with tax breaks get framed as anti-American, even though they are poised to reestablish part of the social reality that conservatives actually value. Communities cannot prosper when torn asunder by economic disruptions; what is best for a multinational corporation is often not what is best for a nation like the United States. It is a tragedy that many leftists overlook these narratives and focus narrowly on appeals to egalitarianism, a moral language that political psychologists have found (unsurprisingly) to resonate only with other leftists.
The resulting inability to form alliances with conservatives over key economic and energy issues allows libertarian-inspired neoliberalism to drive conservative politics in the United States, even though libertarianism is as incompatible with conservatism as it is with egalitarianism. Libertarianism, by idealizing impersonal market forces, upholds an individualist vision of society that is incommensurable with communal self-governance and the kinds of market interventions that would enable more people to be self-employed or establish cooperative businesses. By insisting that one should “defer” to the supposedly objective market in nearly all spheres of life, libertarianism threatens to commodify the spaces that both leftists and conservatives find sacred: pristine wilderness, private life, etc.
There are real challenges, however, to realizing political coalitions between progressives and conservatives more often, namely divisions over traditionalist ideas regarding gender and sexuality. Yet even this is a recent development. As Nadine Hubbs shows, the idea that poor rural and blue collar people are invariably more intolerant than urban elites is a modern construction. Indeed, studies in rural Sweden and elsewhere have uncovered a surprising degree of acceptance for non-heterosexual people, though rural queer people invariably understand and express their sexuality differently than urban gays. Hence, even for this issue, the problem lies not in rural conservatism per se but with the way contemporary rural conservatism in America has been culturally valenced. The extension of communal acceptance has been deemphasized in order to uphold consistency with contemporary narratives that present a stark urban-rural binary, wherein non-cis, non-heterosexual behaviors and identities are presumed to be compatible only with urban living. Yet the practice, and hence the narrative, of rural blue collar tolerance could be revitalized.
However, the preoccupation of some progressives with maintaining a stark cultural distinction with rural America prevents progressive-conservative coalitions from coming together to realize mutually beneficial policy changes. I know that I have been complicit in this. Growing up with left-wing proclivities, I did much of what Nadine Hubbs criticizes about middle-class Americans: I made fun of “rednecks” and never, ever admitted to liking country music. My preoccupation with proving that I was really an “enlightened” member of the middle class, despite being a child of working class parents and only one generation removed from the farm, prevented me from recognizing that I potentially had more in common with rednecks politically than I ever would with the corporate-friendly “centrist” politicians at the helm of both major parties. No doubt there is work to be done to undo all that has made many rural areas into havens for xenophobic, racist, and homophobic bigotry; but that work is no different than what could and should be done to encourage poor, conservative whites to recognize what a 2016 SNL sketch so poignantly illustrated: that they have far more in common with people of color than they realize.
1. A big oversight in the “work ethic” narrative is that it fails to recognize that slacking workers are often acting rationally. If one is faced with few avenues for advancement and is instantly replaced when suffering an illness or personal difficulties, why work hard? What white collar observers like Rowe might see as laziness could be considered an adaptation to wage labor. In such contexts, working hard can reasonably be seen not as the key to success but as the mark of a chump: a person merely harming their own well-being in order to make someone else rich. The same discourse in the age of feudalism would have involved chiding peasants for taking too many holidays.
Few issues stoke as much controversy, or provoke as shallow an analysis, as net neutrality. Richard Bennett’s recent piece in the MIT Technology Review is no exception. His views represent a swelling ideological tide among certain technologists that threatens not only any possibility for democratically controlling technological change but any prospect for intelligently and preemptively managing technological risks. The only thing he gets right is that “the web is not neutral” and never has been. Yet current “net neutrality” advocates avoid seriously engaging with that proposition. What explains the self-stultifying allegiance to the notion that the Internet could ever be neutral?
Bennett claims that net neutrality has no clear definition (it does), that anything good about the current Internet has nothing to do with a regulatory history of commitment to net neutrality (something he can’t prove), and that the whole debate only exists because “law professors, public interest advocates, journalists, bloggers, and the general public [know too little] about how the Internet works.”
To anyone familiar with the history of technological mistakes, the underlying presumption that we’d be better off if we just let the technical experts make the “right” decision for us—as if their technical expertise allowed them to see the world without any political bias—should be a familiar, albeit frustrating, refrain. In it one hears the echoes of early nuclear energy advocates, whose hubris led them to predict that humanity wouldn’t suffer a meltdown in hundreds of years, whose ideological commitment to an atomic vision of progress led them to pursue harebrained ideas like nuclear jets and using nuclear weapons to dig canals. One hears the echoes of those who managed America’s nuclear arsenal and tried to shake off public oversight, bringing us to the brink of nuclear oblivion on more than one occasion.
Only armed with such a poor knowledge of technological history could someone make the argument that “the genuine problems the Internet faces today…cannot be resolved by open Internet regulation. Internet engineers need the freedom to tinker.” Bennett’s argument is really just an ideological opposition to regulation per se, a view based on the premise that innovation better benefits humanity if it is done without the “permission” of those potentially negatively affected. Even though Bennett presents himself as simply a technologist whose knowledge of the cold, hard facts of the Internet leads him to his conclusions, he is really just parroting the latest discursive instantiation of technological libertarianism.
As I’ve recently argued, the idea of “permissionless innovation” is built on an (intentional?) misunderstanding of the research on how to intelligently manage technological risks, as well as the problematic assumption that innovations, no matter how disruptive, have always worked out for the best for everyone. Unsurprisingly, the people most often championing the view are usually affluent white guys who love their gadgets. It is easy to have such a rosy view of the history of technological change when one is, and has consistently been, on the winning side. It is a view that is only sustainable as long as one never bothers to inquire into whether technological change has been an unmitigated wonder for the poor white and Hispanic farmhands who now die at relatively young ages of otherwise rare cancers, the Africans who have mined and continue to mine uranium or coltan in despicable conditions, or the permanent underclass created by continuous technological upheavals in the workplace not paired with adequate social programs.
In any case, I agree with Bennett’s argument in a later comment to the article: “the web is not neutral, has never been neutral, and wouldn't be any good if it were neutral.” Although advocates for net neutrality are obviously demanding a very specific kind of neutrality (that ISPs not treat packets differently based on where they originate or where they are going), the idea of net neutrality has taken on a much broader symbolic meaning, one that I think constrains people’s thinking about Internet freedoms rather than enhances it.
The idea of neutrality carries so much rhetorical weight in Western societies because their cultures are steeped in a tradition of philosophical liberalism, which holds that the freedom of individuals to choose is the greatest good. Even American political conservatives really just embrace a particular flavor of philosophical liberalism, one that privileges the freedoms enjoyed by supposedly individualized actors, unencumbered by social conventions or government interference, to make market decisions. Politics in nations like the US proceeds with the assumption that society, or at least parts of it, can be composed in such a way as to allow individuals to decide wholly for themselves. Hence, it is unsurprising that changes in Internet regulations provoke so much ire: The Internet appears to offer that neutral space, both in terms of the forms of individual self-expression valued by left-liberals and the purportedly disruptive market environment that gives Steve Jobs wannabes wet dreams.
Neutrality is, however, impossible. As I argue in my recent book, even an idealized liberal society would have to put constraints on choice: People would have to be prevented from making their relationship or communal commitments too strong. As loath as some leftists would be to hear it, a society that maximizes citizens’ abilities for individual self-expression would have to be even more extreme than Margaret Thatcher imagined it: composed of atomized individuals. Even the maintenance of family structures would have to be limited in an idealized liberal world.
On a practical level it is easy to see the cultivation of a liberal personhood in children as imposed rather than freely chosen, with one Toronto family going so far as to not assign their child a gender. On the plus side for freedom, the child now has a choice they didn’t have before. On the negative side, they didn’t get to choose whether or not they’d be forced to make that choice. All freedoms come with obligations, and often some people get to enjoy the freedoms while others must shoulder the obligations.
So it is with the Internet as well. Currently ISPs are obliged to treat packets equally so that content providers like Google and Netflix can enjoy enormous freedoms in connecting with customers. That is clearly not a neutral arrangement, even though it is one that many people (including Google) prefer.
However, the more important non-neutrality of the Internet, one that I think should take center stage in debates, is that it is dominated by corporate interests. Content providers are no more accountable to the public than large Internet service providers. At least since it was privatized in the mid-90s, the Internet has been biased toward fulfilling the needs of business. Other aspirations like improving democracy or cultivating communities, if the Internet has even really delivered all that much in those regards, have been incidental. Facebook wants you to connect with childhood friends so it can show you an ad for a 90s nostalgia t-shirt design. Google wants to make sure neo-nazis can find the Stormfront website so they can advertise the right survival gear to them.
I don’t want a neutral net. I want one biased toward supporting well-functioning democracies and vibrant local communities. It might be possible for an Internet to do so while providing the wide latitude for innovative tinkering that Bennett wants, but I doubt it. Indeed, ditching the pretense of neutrality would enable the broader recognition of the partisan divisions about what the Internet should do, the acknowledgement that the Internet is and will always be a political technology. Whose interests do you want it to serve?
One of the biggest challenges that I think social scientists should be committing themselves to solving is the question of how to enable large-scale social change. Our age is rife with injustices: growing income inequality and an increasingly brutal police-prison-industrial complex, among others. At the same time, these injustices are frustratingly chronic. Positive change, if it has occurred at all, has been ploddingly slow. I think a big contributor is the unwillingness or inability of average people to imagine change as possible, a necessary condition for them to even begin to advocate for reform. Yet, as anyone who has read the comments on a critical article about these issues has probably seen, many Americans seem willing to spare no effort in trying to justify the status quo as either inevitable or the best of all possible worlds. As Steve Fraser argues in The Age of Acquiescence, building a more equal society will require attacking and reconceiving the narratives that today prop up the status quo.
Take college sports, arguably one of the most egregiously unjust labor systems in the US. Nowhere else can you find people laboring—indeed, college football is like a full-time job—and inflicting long-term damage on their bodies for so little reward. The NCAA generates a billion dollars in revenue, while players are contractually barred from reaping the fruits of their labor. As others have pointed out, the “NCAA is a plantation, and the players are the sharecroppers.” That many, if not most, of the prospective players hail from poorer, black regions of the country makes the system seem even more destructive. Football combines start to bear an eerie resemblance to the auction block when one reflects on all these similarities.
The response to such observations always seems to be the same: Don’t these players voluntarily sign on the dotted line? Aren’t they free to do otherwise? The rhetoric of choice is one of the most pernicious discourses today, one that is routinely mobilized to prevent people from digging too deep into systematic inequalities. It is a discourse that tries to eliminate deep thinking about the innumerable coercions faced by most people by reframing them all as choices. Consider Paul Ryan’s recent bizarre claim that cuts to Medicaid and the elimination of the ACA wouldn’t eliminate people’s healthcare: Such people would simply be “choosing” not to have it any longer. The transformation of the inability to pay for something into a free choice is one of the daftest—though politically expedient—outcomes of choice-based rhetoric. In the context of college sports, it ignores that players coming out of the most deprived areas of the country typically have few other opportunities for a college education or many other routes out of poverty. The rhetoric of choice projects the latitude of choice available to only the most affluent citizens onto everyone, regardless of what their lives actually look like.
The case of college sports also illuminates how the mere possibility of success, no matter how infinitesimal, can lead people to tolerate otherwise intolerable circumstances. Compare it to the Black Mirror episode “15 Million Merits.” Work in the society depicted in this episode is unmitigated drudgery: Citizens’ work lives entail endlessly pedaling on stationary bikes. Their only respite comes from a constant connection to an array of entertainment possibilities, and their only hope for a way out lies in winning Hot Shot, an America’s Got Talent-like game show. The metaphor in “15 Million Merits” couldn’t be clearer: Clawing one’s way out of the doldrums of working in what David Graeber has labeled “bullshit jobs” is largely a roll of the dice, dependent on the caprice of those who do have the power to decide. The hosts of Hot Shot sit with an air of superiority, judging who is worthy and who is not—much like a few of the hosts of the show Shark Tank. Like college ball players who must subject their bodies to four years of strain for a shot at an NFL contract, some workers acquiesce to an unjust working arrangement partly because they too are caught up in dreams of getting to be one of the lucky few to strike it rich.
I’m not the first to note that Americans are limited in their ability to think critically about class because of a belief that inequality is okay as long as they have a chance of being on the right side of it. A common quote, routinely misattributed to John Steinbeck, laments how “the poor [in America] see themselves not as an exploited proletariat, but as temporarily embarrassed millionaires.” The underlying narrative that success invariably comes to those who show grit and determination adds to the rhetoric of choice to forestall critical questions about the sources of poverty. I will never forget the panicked look on a student’s face when, in a class discussion about economic fairness, he tried to claim that if he were parachuted into Haiti he would be successful within six months; while uttering something horrible, he nonetheless seemed to be straining under an immense load of cognitive dissonance, attempting to resolve the conflict between a narrative that gave him hope about his own future and its implication that Haitians are somehow poor because they don’t know how to work as hard as middle-class white people.
In any case, also noteworthy in “15 Million Merits” is how those who, for whatever reason, are unable to handle the strain of cycling all day are treated. They are widely abused, distinguished by particular clothing, and targeted for mockery in violent video games and on television game shows—that society’s equivalent of Jerry Springer and Cops. Citizens of this imagined society, much like our own, are partly driven to labor—often to the detriment of their mental and physical well-being—by the fear of being poor and mocked and the belief that perhaps they too can achieve a state of transcendent affluence. Who gives any thought to the hundreds or thousands of student athletes who, once injured, are often stripped of their scholarships? Often without a degree, or with one that is worth little, and carrying a potentially disabling injury such as cervical spine damage, these once-phenomenal athletes on the way to stardom become just more impoverished nobodies, more of the “takers” denigrated in contemporary conservative discourse.
It seems to me that achieving a more just American society will not be possible without the simultaneous demise of these poverty-justifying narratives. Not only will new narratives be necessary, but they will need to be uttered by the right people. As welcome as it is that attendees of Ivy League universities and participants in urban art collectives have developed counter-narratives to those that justify status quo inequalities, it seems unlikely that such narratives will ever resonate with average citizens. A recent video by The Onion makes much the same point in satirically depicting a Trump voter whose mind was changed after reading 800 pages of queer feminist theory. To my mind, much of the humanities and social sciences are not worth the paper they are printed on if they cannot be persuasively conveyed to non-academic, indeed uneducated, audiences. Unfortunately, many of the academics I know are too busy denigrating Trump voters for their ignorance to consider how things might actually change.
Certainly there are things to like about the March for Science. As you are likely aware, scientists and engineers have a reputation for being politically aloof. I, for one, am glad to see events like it, which run contrary to that stereotype.
The March for Science website describes the event as a nonpartisan call for politicians to recognize that science upholds the public good: in other words, science matters. I want to push those of you reading this post to critically examine this slogan—to treat it as you would any truth claim.
At face value, there seems to be little to disagree with: of course science should matter. Good luck solving any 21st-century challenge without it. Hence, I think it is more interesting to ask, “Which science should matter? And how much?” Some of you may find this a provocative turn of phrase, because it applies to science a standard definition of politics: politics as any answer to the question “Who gets what, when, and how?”
This question is provocative because many people, including many scientists and engineers, tend to believe that politics is everything science is not, and vice versa. That belief, in turn, supports the idea that advocating for science can be a nonpartisan activity, an apolitical social movement.
To say today that science should matter, but little more, could be construed to imply that we ought to continue doing science just as we did prior to the recent election. Such an implication rests on the presumption that science was previously nonpartisan and only recently tainted by political agendas. Is that a wise presumption?
Certainly the current administration’s attempts to excise climate science from NASA and muzzle the EPA are recognizably political. But what about the historical relationship between science and military applications, running all the way from Archimedes to the United States today, where some $77 billion is spent annually on military R&D compared to $69 billion on nondefense research? What about the fact that a paltry portion of public research money goes to developing non-toxic alternatives to the suspected and confirmed carcinogens and endocrine disruptors found in most consumer products, toxins that invariably end up in the environment and, thus, in human bodies? Compare that to the billions that always seem to await each new overhyped and highly risky area of innovation: nano-tech, syn-bio, and so on.
I don’t assume that you will agree with my own valuation of the relative worthiness of these different areas of science, but I hope you can join me in recognizing that such discrepancies in funding and attention do not exist because one area is more scientific than the others.
If historians who can study our period even exist in 100 years, they will likely find our belief that science is nonpartisan perplexing, to say the least. How could a sophisticated society hold such an idea when some areas of science obviously matter more than others and some science gets ignored altogether? How could it sustain such a belief when the advantages of military R&D and the harms of toxic consumer products clearly accrue more to some people than to others? Some clearly win from this arrangement, while others lose.
I don’t say this to denigrate science but to denigrate one of the myths undergirding the political aloofness so common among scientists and engineers. My message to you is that you are already and always partisan. That reality will not disappear simply because you do not believe in it. Accepting it, I would argue, is not as destructive as one might at first believe. Rather, it is freeing: it enables one to act more wisely in the world, rather than be misguided by a “flat Earth theory” of politics. There is no abyss to fall into wherein one ceases to be scientific and becomes political instead. One is already and always both.
The question, therefore, is not whether science and engineering are partisan, but what kind of partisans scientists and engineers should be: self-conscious ones, or ones asleep at the wheel? What kind of technoscientific world will you be a partisan for? Which science should matter? And how much?
Taylor C. Dotson is an associate professor at New Mexico Tech, a Science and Technology Studies scholar, and a research consultant with WHOA. He is the author of The Divide: How Fanatical Certitude is Destroying Democracy and Technically Together: Reconstructing Community in a Networked World. Here he posts his thoughts on issues mostly tangential to his current research.