Although it is a cliché that a newly elected president will “unite” us, the idea of the “American people” falling in line behind a new administration is a fantasy. Presidential approval ratings have always been polarized along party lines—although the partisan gap has widened over the decades. Yet such rhetoric remains an integral part of the post-election political theater, helpful for relieving some of the partisan pressure built up over a frequently nasty campaign season.
The gaping chasm between Biden supporters and Trump voters, however, is not going to be narrowed by rhetorical olive branches. The pre-election belief that ousting Trump would restore political normalcy is rooted in the mistaken idea that his 2016 success was an aberration, a freak anomaly that would soon be forgotten after a more respectable statesman or stateswoman took over the Oval Office. That story is nothing more than a flight of fancy, one that ignores the underlying causes of rampant polarization, “post-truth,” Trumpism, or however one diagnoses our political malaise. The problem is that Americans worship entirely different notions of Truth.
Many of my fellow leftist friends and colleagues desperately want to believe that Trumpism is little more than the last gasps of a dying racist culture. It is a convenient move, which allows them to think of Trump voters as the political equivalent of anti-vaxxers. Once diagnosed as hopelessly deluded and on the wrong side of history, there is no longer any need to understand Trump voters. They only need to be defeated.
Washington Monthly contributor David Atkins openly wondered how we could “deprogram” the 75 million or so people who voted for Trump, dismissing their electoral choice as the product of belonging to “a conspiracy theory fueled belligerent death cult against reality & basic decency.” What a way to talk about nearly a quarter of the population! Only the most dyspeptic will admit to daydreaming about ideological reeducation drives, at least publicly. Optimists, on the other hand, allow themselves to believe that “truth and rationality” will inevitably win out, that the wave of popular support for politicians like Trump will naturally subside, ebbing away like the tide.
The desire to equate Trumpism with white supremacy is understandable, even if it is probably political escapism. Trump’s company faced a 1973 federal lawsuit for racially discriminating against black tenants, and he took out a full-page ad in the New York Times in 1989 to call for the death of five black and Latino youths falsely accused of sexually assaulting a white jogger. His 2016 campaign rhetoric insinuated that Mexican immigrants mostly brought drugs and crime to the United States, and he promised tough-on-crime legislation that disproportionately burdens minority neighborhoods.
The “dying racist America” thesis is not merely propped up by a sober accounting of Trump’s panoply of racial misdeeds but by attacking the next most plausible explanation: working-class discontent. Adam Serwer’s review of the 2016 election statistics raises serious challenges to that argument. Clinton won a greater proportion of voters making less than $50,000. Nor did the opioid epidemic ripping through white rural America seem to tip the election toward Trump. White working-class citizens whose loved ones were coping with mental health problems, financial troubles, and substance abuse expressed less support for Trump, not more.
But then again, Biden was most successful in the counties where the bulk of America’s economic activity occurs. The 477 counties accounting for 70 percent of the US economy went to the former vice-president, while a greater proportion of counties that have faced weak job, population, and economic growth went to Trump. To top it off, Trump lost white votes in 2020 and made significant, albeit single-digit, gains among African Americans, Hispanics, and Asian Americans. Although Trump’s base has traditionally been white voters who railed against immigration and the perceived feminization of society, neither the “economically disenfranchised working class” nor the “white supremacy” thesis seems to fully explain the former president’s appeal.
Science vs. Common Sense
Political movements are rarely understood by focusing on members’ deficiencies. An especially misleading starting point is wherever those movements’ opponents think the deficits are. Political allies are more strongly bonded by their shared strengths than their flaws—however obvious and fatal the latter may be.
Data on the employment backgrounds of political donors are insightful here. The Democrats are increasingly not just the party of the professional class but of the college-educated in general. Scientists, engineers, lawyers, teachers, and accountants dominated donations to Biden, whereas Trump’s support came from business owners, police officers, truckers, construction workers, pilots, stay-at-home mothers, and farmers. This split doesn’t map well onto income-based conceptions of “working class.” Business owners and pilots often make a good living for their families, while well-educated social workers do not. Rather, the difference lies in which kind of knowledge defines their competency.
Today’s political divisions are only indirectly based on class and race; the more fundamental divide is between formally schooled expertise and experience-based judgment. Office workers, service employees, and elite knowledge-sector analysts are on one side, whereas blue-collar manual laborers and business owners are on the other. Despite our culture’s tendency to equate intelligence with a talent for mathematical abstraction or the patience (or pretension) to read dense books by French authors, writers like Mike Rose and Matthew Crawford remind us of the discerning judgements necessary for competence in physical work. Blue-collar workers and business owners don’t explicitly analyze symbols but have precisely tuned gut feelings that help them do their jobs well.
Much of America’s political malaise is due to the polarized veneration of these competing styles of knowledge. The idolization of science translates into support for technocracy, or rule by experts. The same worship but for common sense drives an attraction to populism. It is this division above all others that increasingly separates Democrats and Republicans. In his victory speech, Biden explicitly painted himself as a defender of science, promising to put properly credentialed experts in charge of pandemic planning.
In contrast, Trump and his allies have gone to great lengths to champion how (some) lay people (fail to) understand public problems. Recall how Newt Gingrich dismissed FBI crime statistics as immaterial in the run-up to the 2016 election, stating that “the average American…does not think that crime is down.” He followed up with, “As a political candidate, I’ll go with what people feel.” Likewise, faced with polls and election numbers showing a Biden victory, Trump voters saw the occurrence of voter fraud as “common sense.” They reasoned that since Democratic politicians and media outlets despise Trump so much, why wouldn’t they collude with poll workers to spoil the election? As Democrats have portrayed themselves as defending “the facts,” the right has doubled down on gut feeling.
This same dynamic increasingly defines America’s racial politics as well. Armed with concepts and theories from academics such as Robin DiAngelo, people don’t end up having rich, empathy-evoking conversations about their experiences of race in America. Rather, the more militant members of the “woke” left play armchair psychologist, focusing on diagnosing fellow citizens’ racist personality dysfunctions and prescribing the appropriate therapeutic interventions. They demand that others learn to understand their lives through the vocabulary of critical race scholars, whose conclusions are presented as indisputable.
White people who struggle with some critical race theorists’ stark view that one is either racist or actively anti-racist are labeled as suffering from “white fragility,” an irrational and emotional response to having one’s own undeniable racism exposed. Regardless of whether one thinks this judgement is apt, it is difficult to deny that the prognosis comes off as arrogant and dismissive. How else would we expect participants to respond when hearing that some of their cherished ideas about valuing hard work, individualism, or the nuclear family are just part of an inherently racist “white culture”? Yet empathic understanding is often in short supply when antiracists find themselves in a rare teachable moment. As a New York City Education Councilwoman put it to her colleague in a now viral video, “Read a book. Read Ibram Kendi….It is not my job to educate you.”
Many conservative Americans have reacted to the inroads that critical race theory has made into the workplace and popular discourse by retreating to a similarly incurious form of “common sense.” Ideas from critical race theory become labeled as political correctness run amok, or even “cult indoctrination.” All this is somehow imposed on the rest of the nation by left-wing academic activists whose far-reaching powers mysteriously disappear when it comes to concretely influencing policymaking or getting their 18-year-old students to actually read for class.
Regardless of where one’s sympathies lie in this debate, it seems clear that productive conversations across party lines about the extent of America’s racial problems and what could be done about them are simply not happening.
The real tragedy of today’s political moment is that the elevation of “truth” over politics has locked us into a vicious cycle. Technocracy breeds popular distrust in experts. The specter of populism spurs the professional class to wrap themselves further in the mantle of “the facts.” And no one budges on anything. Consider the ongoing protests against pandemic protection measures. Resistance to state governments’ social distancing and stay-at-home requirements has not been a rejection of science so much as a rejection of the inflexibility, strictness, and occasional arbitrariness of executive orders.
Yet many governors have reacted to the pandemic situation as if politics is no longer necessary, as if leaving any room for negotiation were dangerous. California’s Gov. Gavin Newsom stated that “health outcomes and science – not politics” would guide reopening plans. And Michigan’s Gov. Gretchen Whitmer concurred, contending that her decision was a “data-driven approach based on facts, based on science, based on recommendations from experts.”
And much of the media repeats the facts of the matter ad nauseam. It is as if journalists believe that if they present just the right infographic on ICU beds or the proper size of one’s social “bubble”—along with a healthy dollop of social shaming—less risk-averse Americans will immediately change their holiday plans and social lives to sequester at home. When that fails to happen, the talking points shift to the irrationality and immorality of a public that fails to respect facts.
The problem with the mantra “please just listen to the science!” is that science literacy doesn’t determine whether citizens listen or not. What matters is the perceived trustworthiness of experts and public officials. Expert guidance only gets legitimated through a kind of common sense. Citizens go “with their guts” on whether elected officials and their science-based advice seem sensible and trustworthy.
One of the best ways to inadvertently sow distrust in experts is to portray them as an unquestionable authority. Wherever “post-truth” seems to be on the upswing—in the response to the pandemic or to childhood vaccinations—the determinations of experts are handed down as if divinely decreed, or as if citizens’ reservations need only be “managed” or countered. The stakes are judged to be so high that ordinary people are deemed unworthy of being consulted regarding how exactly expert recommendations should be implemented. Health researchers and doctors don’t think to ask the vaccine hesitant how they might be made to feel more comfortable with the vaccination process; state governors never bothered to ask people how pandemic protections might be implemented in ways that are less disruptive to their lives.
The other way is for experts to get things wrong. When “the facts” are initially presented as unassailable, later corrections aren’t taken as the routine self-corrections of the scientific process but rather as evidence that the science has been tainted by politics. Early in the pandemic, health experts told the public that masks were ineffective at preventing a COVID-19 infection, but then turned around and recommended that masks be made mandatory in many states’ pandemic guidelines.
The minor inconveniences of mask wearing and reduced restaurant occupancy have come to be perceived as gross violations of freedom in the “common sense” of a substantial portion of Americans not because those citizens deny science but because expert opinion was arrogantly applied. Officials failed to be honest and admit that even the scientists were feeling their way through an uncertain situation and that they too make value judgements in light of what they don’t yet understand.
And governmental policies have been inconsistent and sometimes unfair. I can get ticketed in my own state if I walk around a deserted athletic field without a mask. In many states, bars and restaurants are open but outdoor playgrounds remain closed. It doesn’t take a scientist to know that these rules probably do not match the on-the-ground risks. But when leaders and experts overplay their hand, some portion of citizens just start taking expert guidance less seriously. Precious political capital is wasted enforcing compliance in ways that may be pure hygiene theater. Nor has it helped that governors such as Andrew Cuomo originally seemed content to waste vaccines, lest they go to someone whom state bureaucrats had calculated to be less deserving.
Beyond Political Road Rage
Amidst the trench warfare between technocrats lobbing fact-laced truth bombs and populists laying “common sense” landmines, Americans have largely forgotten how to do politics. The underlying reasoning is understandable. If our collective problems are recast as completely solvable by straightforwardly applying science or something called common sense, what reason is there to debate, negotiate, or listen? Doing so would needlessly give ground to ignorant rubes and brainwashed ideologues.
Americans need not like the viewpoints of their compatriots, but they ought to at least try to understand them. Far too many people, however, appear to put their faith in Truth with a capital T. If only their opponents could be led to see the light and recognize “the facts” or popular “common sense,” they would quit arguing and toe the line. If only politics were so simple.
It is easy to imagine things getting much worse. A less dysfunctional public sphere feels like a utopian daydream. A good first step would be for Democrats to stop wrapping themselves in the mantle of science and learn to listen better to Americans who don’t have a college degree. Everywhere that populism has developed, it has been in reaction to the popular perception that the political establishment is increasingly elitist and distant. Is it any surprise that 64% of Americans now disagree with the idea that politicians “care what people like me think”? Appearing to hand important political decisions over to scientists in distant universities and governmental agencies is literally the worst thing that elected officials could do right now.
For their part, conservatives, libertarians, and others should lay off the polarizing rhetoric about universities and experts. Rather than attack expertise itself, it is far better to focus on efforts to intellectually diversify higher education or to turn political conversations back toward the moral (rather than factual) divides that define them. The claim that institutions of higher education are “under attack” by an “insidious” woke ideology is needlessly melodramatic. It reinforces the image of the world as hopelessly divided between incommensurable worldviews and further fuels polarization between credentialed expertise and practical common sense. Worst of all, it portrays wokeness as something to be excised from the body politic rather than faced up to as a legitimate political adversary. Hyperbolic handwringing about traitorous cultural Marxists is as antidemocratic as any of the more draconian proposals coming from the left.
Taking one’s political adversaries seriously means making a serious effort to persuade fence sitters—and not just through shaming. When a majority of Americans balked at the “Defund the Police” slogan over the summer, defenders doubled down on the phrase. The problem wasn’t with their poor choice of rhetoric, they seemed to imply, but that the public was misinformed about what the phrase really meant. I can’t think of a more potent example of willful political unawareness.
Whether it is woke-ism or Trumpism, fanatical political movements tell us that something is seriously amiss in our body politic, that an area of public discontent is in need of serious attention. We must be willing to take those discontents seriously. Focusing on apparent deficits in factual understanding or commonsense reasoning prevents us from addressing the underlying causes of popular frustration.
George Carlin once mused, “Have you ever noticed that anybody driving slower than you is an idiot and anyone going faster than you is a maniac?” American political discourse is little different, having become the rhetorical equivalent of road rage. Citizens need more productive ways to talk about their collective problems. Unless people are willing to admit that others can hold a different political viewpoint without being either an idiot or a maniac, there is little hope for American democracy. Power will just continue to vacillate between aloof technocrats and populist buffoons.
Joseph Epstein's recent Wall Street Journal op-ed has been controversial, to say the least. While his article is ostensibly penned as a personal letter to the future first lady, Jill Biden, urging her not to insist so adamantly on being called "Doctor," his main focus is on what he sees as the "watering down" of the academic doctorate.
Much of the fury directed at Epstein (and the Wall Street Journal for publishing it) has focused on signs that he was more motivated by sexism than by a good faith exploration of what the honorific Doctor should mean. He doesn't help his case by being informal to the point of condescension. I mean, he referred to her as "kiddo"! There is a more charitable interpretation: Epstein was playing off of future President Biden's own rhetorical style. But that allusion was lost on nearly all readers.
Given both their ages and career experience, Jill Biden and Joseph Epstein are obviously peers. If Epstein had wanted more people to take him seriously, he would have avoided seeming to talk down to the future first lady. Put in the context of the long hard road that women have had to travel to get themselves taken seriously within fields like academia, his approach is tone deaf, if not worse.
I made the mistake on Twitter of trying to engage with the non-sexist part of Epstein's thesis. (Yes, I really am that bad at social media.) As unsavory as the history of women's credentials being disrespected is, I don't think we should let that history totally overshadow all the other readings of Epstein's argument. Certainly one can argue that discussing whether a PhD really merits being called Doctor should wait until our society is more equal. But that is different from the implication that the question is sexist on its face.
Regardless, Epstein's focus is on the increasing ease with which PhDs can be obtained. Exams on Greek and Latin have been dispensed with. More and more PhDs are being minted every year. And honorary doctorates are seemingly handed out to anyone with a sizable donation to offer or even a middling level of celebrity.
I think debating the "difficulty" of the degree is the wrong question. Any kind of credential could be made excessively arduous so as to weed out most of the students that attempt it.
When I get introduced as Doctor, I usually qualify it with a jokey line that my father-in-law used to add: "But not the kind that helps people." The sensibility at the heart of that joke lies at the crux of the issue. The extra respect that medical professionals receive is not simply due to the difficulty of obtaining a medical degree. Though, even in that line of thought, it is easy to forget that medical doctors must take difficult licensing exams, pursue continuing education, and face potential discipline by a professional governing body--things to which PhDs are almost nowhere subject.
Rather, the most important difference between MDs and PhDs is that the former take the Hippocratic Oath. They publicly commit to using their knowledge to help others, although they can and sometimes do fall short of that aspiration. The beneficiaries of the work of PhDs are often unclear. The cliché that PhDs are motivated purely by curiosity or knowledge for knowledge's sake obscures a troubling reality: the most reliable benefits of academic work accrue to the researchers themselves (in terms of professional status) and to the small clique of scholars they associate with. No doubt there are exceptions, such as when PhDs admirably choose to work on "applied" problems. But those researchers usually take a big hit professionally by doing so.
If PhDs are to earn the Doctor label they should be required to take an analogous oath, one that commits them to using scholarship to benefit at least some small group of people who do not hold PhDs. The attitude that PhDs are entitled to the status of Doctor because they successfully wrote a dissertation, in my mind, inevitably culminates in a narcissistic form of elitism. Status should be a product of how a person serves others, not something awarded because they survived a largely arbitrary academic gauntlet.
One of Epstein's major oversights is that he failed to take seriously the difference between a PhD and the degree that Jill Biden actually holds, an EdD. Educational doctorate programs are designed to enable graduates to apply their knowledge to situations that are likely to be encountered in real-life educational settings. It is a credential that sets up graduates to do good in the world, not just produce knowledge for other academics. So, in light of my own argument, Jill Biden is more befitting of the Doctor honorific than I am. That is a more exciting and interesting conclusion than I expected to come from engaging with Epstein's sexist op-ed, and one worth considering.
Are Americans losing their grip on reality? It is difficult not to think so in light of the spread of QANON conspiracy theories, which posit that a deep-state ring of Satanic pedophiles is plotting against President Trump. A recent poll found that some 56% of Republican voters believe that at least some of the QANON conspiracy theory is true. But conspiratorial thinking has been on the rise for some time. One 2017 Atlantic article claimed that America had “lost its mind” in its growing acceptance of post-truth. Robert Harris has more recently argued that the world has moved into an “age of irrationality.” Legitimate politics is threatened by a rising tide of unreasonableness, or so we are told. But the urge to divide people into the rational and the irrational is the real threat to democracy. And the antidote is more inclusion, more democracy—no matter how outrageous the things our fellow citizens seem willing to believe.
Despite the recent panic over the apparent upswing in political conspiracy thinking, salacious rumors and outright falsehoods have been an ever-present feature of politics. Today’s lurid and largely evidence-free theories about left-wing child abuse rings have plenty of historical analogues. Consider tales of Catherine the Great’s equestrian dalliances and claims that Marie Antoinette found lovers both among court servants and within her own family. Absurd stories about political elites have been anything but rare. Some of my older relatives believed in the 1990s that the government was storing weapons and spare body parts underneath Denver International Airport in preparation for a war against common American citizens—and that was well before the Internet was a thing.
There seems to be little disagreement that conspiratorial thinking threatens democracy. Allusions to Richard Hofstadter’s classic essay on the “paranoid style” in American politics have become cliché. Hofstadter’s targets included 1950s conservatives who saw Communist treachery around every corner, 1890s populists railing against the growing power of the financial class, and widespread worries about the machinations of the Illuminati. He diagnosed their politics as paranoid in light of their shared belief that the world was being persecuted by a vast cabal of morally corrupt elites.
Regardless of their specific claims, conspiracy theories’ harms come from their role in “disorienting” the public, leading citizens to have grossly divergent understandings of reality. And widespread conspiratorial thinking drives the delegitimation of traditional democratic institutions like the press and the electoral system. Journalists are seen as pushing “fake news.” The voting booths become “rigged.”
Such developments are no doubt concerning, but we should think carefully about how we react to conspiracism. Too often the response is to endlessly lament the apparent end of rational thought and wonder aloud if democracy can survive while being gripped by a form of collective madness. But focusing on citizens' perceived cognitive deficiencies presents its own risks. Historian Ted Steinberg called this the “diagnostic style” of American political discourse, which transforms “opposition to the cultural mainstream into a form of mental illness.” The diagnostic style leads us to view QANONers, and increasingly political opponents in general, as not merely wrong but cognitively broken. They become the anti-vaxxers of politics.
While QANON believers certainly seem to be deluding themselves, isn’t the tendency of leftists to blame Trump’s popular support on conservatives’ faulty brains and an uneducated or uninformed populace equally delusional? The extent to which such cognitive deficiencies are actually at play is beside the point as far as democracy is concerned. You can’t fix stupid, as the well-worn saying has it. Diagnosing chronic mental lapses leaves us very few options for resolving conflicts. Even worse, it prevents an honest effort to understand and respond to the motivations of people with strange beliefs. Calling people idiots will only cause them to dig in further.
Responses to the anti-vaxxer movement show as much. Financial penalties and other compulsory measures tend only to anger vaccine-hesitant parents, leading them to refuse voluntary vaccines more often and to become more committed in their opposition. But it does not take a social scientific study to know this. Who has ever changed their mind in response to the charge of stupidity or ignorance?
Dismissing people with conspiratorial views blinds us to something important. While the claims themselves might be far-fetched, people often have legitimate reasons for believing them. African Americans, for instance, disproportionately believe conspiracy theories regarding the origin of HIV, such as that it was man-made in a laboratory or that the cure is being withheld, and are more hesitant about vaccines. But they also rate higher in distrust of medical institutions, often pointing to the Tuskegee Syphilis Study and ongoing racial disparities as evidence. And from British sheep farmers’ suspicion of state nuclear regulators in the aftermath of Chernobyl to mask skeptics’ current jeremiads against the CDC, governmental mistrust has often developed after officials’ overconfident claims about the risks turned out to be inaccurate. What might appear to be an “irrational” rejection of the facts is often a rational response to a power structure that feels distant, unresponsive, and untrustworthy.
The influence of psychologists has harmed more than it has helped in this regard. Carefully designed studies purport to show that believers in conspiracy theories lack the ability to think analytically or suffer from obscure cognitive biases like “hypersensitive agency detection.” Recent opinion pieces exaggerate the “illusory truth effect,” a phenomenon discovered in psych labs whereby repeated exposure to false messages leads to a relatively slight increase in the number of subjects rating them as true or plausible. The smallness of this, albeit statistically significant, effect doesn’t stop commentators from presenting social media users as passive dupes who only need to be told about QANON so many times before they start believing it. Self-appointed champions of rationality have spared no effort to avoid thinking about the deeper explanations for conspiratorial thinking.
Banging the drum over losses in rationality will not get us out of our present situation. Underneath our seeming inability to find more productive political pastures is a profound misunderstanding of what makes democracy work. Hand-wringing over “post-truth” or conspiratorial beliefs is founded on the idea that the point of politics is to establish and legislate truths. Once that is your conception of politics, the trouble with democracy starts to look like citizens with dysfunctional brains.
When our fellow Americans are recast as cognitively broken, it becomes all too easy to believe that it would be best to exclude or diminish the influence of people who believe outrageous things. Increased gatekeeping within the media or by party elites and scientific experts begins to look really attractive. Some, like philosopher Jason Brennan, go even further. His 2016 book, Against Democracy, contends that the ability to rule should be limited to those capable of discerning and “correctly” reasoning about the facts, while largely sidestepping the question of who decides what the right facts are and how to know when we are correctly reasoning about them.
But it is misguided to think that making our democracy only more elitist will throttle the wildfire spread of conspiratorial thinking. If anything, doing so will only temporarily contain populist ferment, letting pressure build until it eventually explodes or (if we are lucky) economic growth leads it to fizzle out. Political gatekeeping, by mistaking supposed deficits in truth and rationality for the source of democratic discord, fails to address the underlying cause of our political dysfunction: the lack of trust.
Signs of our political system’s declining legitimacy are not difficult to find. A staggering 71 percent of Americans believe that elected officials don’t care about the average citizen or what they think. Trust in our government has never been lower, with only 17 percent of citizens expressing confidence in Washington most or all of the time. By diagnosing rather than understanding, we cannot see that conspiratorial thinking is the symptom rather than the disease.
The spread of bizarre theories about COVID-19 being a “planned” epidemic or child-abuse rings is a response to real feelings of helplessness, isolation, and mistrust as numerous natural and manmade disasters unfold before our eyes—epochal crises that governments seem increasingly incapable of getting a handle on. Many of Hofstadter’s listed examples of conspiratorial thought came during similar moments: at the height of the Red Scare and Cold War nuclear brinkmanship, during the 1890s depression, or in the midst of pre-Civil War political fracturing. Conspiracy theories offer a simplified world of bad guys and heroes. A battle between good and evil is a more satisfying answer than the banality of ineffectual government and flawed electoral systems when one is facing wicked problems.
Perhaps social media adds fuel to the fire, accelerating the spread of outlandish proposals about what ails the nation. But it does so not because it short-circuits our neural pathways to crash our brains’ rational thinking modules. Conspiracy theories are passed by word of mouth (or Facebook likes) by people we already trust. It is no surprise that they gain traction in a world where satisfying solutions to our chronic, festering crises are hard to find, and where most citizens are neither afforded a legible glimpse into the workings of the vast political machinery that determines much of their lives nor the chance to actually substantially influence it.
Will we be able to reverse course before it is too late? If mistrust and unresponsiveness are the causes, the cure should be the effort to reacquaint Americans with the exercise of democracy on a broad scale. Hofstadter himself noted that, because the political process generally affords more extreme sects little influence, public decisions only seem to confirm conspiracy theorists’ belief that they are a persecuted minority. The urge to completely exclude “irrational” movements forgets that finding ways to partially accommodate their demands is often the more effective strategy. Allowing for conscientious objections to vaccination effectively ended the anti-vax movement in early 20th-century Britain. Just as interpersonal conflicts are more easily resolved by acknowledging and responding to people’s feelings, our seemingly intractable political divides will only become productive if opponents are allowed some influence on policy. That is not to say that we should give in to all their demands. Rather, we need to find small but important ways for them to feel heard and responded to, with policies that do not place unreasonable burdens on the rest of us.
While some might pooh-pooh this suggestion, pointing to conspiratorial thinking as evidence of how ill-suited Americans are for any degree of political influence, this gets the relationship backwards. Wisdom isn’t a prerequisite to practicing democracy, but an outcome of it. If our political opponents are to become more reasonable it will only be by being afforded more opportunities to sit down at the table with us to wrestle with just how complex our mutually shared problems are. They aren’t going anywhere, so we might as well learn how to coexist.
Democracy and the Nuclear Stalemate
America’s nuclear energy situation is a microcosm of the nation’s broader political dysfunction. We are at an impasse, and the debate around nuclear energy is highly polarized, even contemptuous. This political deadlock ensures that a widely disliked status quo carries on unabated. Depending on one’s politics, Americans are left either with outdated reactors and an unrealized potential for a high-energy but climate-friendly society, or are stuck taking care of ticking time bombs churning out another two thousand tons of unmanageable radioactive waste every year.
Continue reading at The New Atlantis
Radiation Politics in a Pandemic
Why is Covid-19 science making us more partisan?
Continued at The New Atlantis
Who Needs What Technology Analysis?
Back during the summer, Tristan Harris sparked a flurry of academic indignation when he suggested that we needed a new field called “Science & Technology Interaction,” or STX, which would be dedicated to improving the alignment between technologies and social systems. Tweeters were quick to accuse him of “Columbizing,” claiming that such a field already existed in the form of Science & Technology Studies (STS) or some similar academic department. So ignorant, amirite?
I am far more sympathetic. If people like Harris (and earlier Cathy O’Neil) have been relatively unaware of fields like Science and Technology Studies, it is because much of the research within these disciplines is mostly illegible to non-academics, not all that useful to them, or both. I really don’t blame them for not knowing. I am an STS scholar myself, and even the tables of contents of most issues of my field’s major journals rarely inspire me to read further.
And in fairness to Harris and contrary to Academic Twitter, the field of STX that he proposes does not already exist. The vast majority of STS articles and books dedicate single digit percentages of their words to actually imagining how technology could better match the aspirations of ordinary people and their communities. Next to no one details alternative technological designs or clear policy pathways toward a better future, at least not beyond a few pages at the end of a several-hundred-page manuscript.
My target here is not just this particular critique of Harris, but the whole complex of academic opiners who cite Foucault and other social theory to make sure we know just how “problematic” non-academics’ “ignorant” efforts to improve technological society are. As essential as it is to try to improve upon the past in remaking our common world, most of these critiques don’t really provide any guidance for what steps we should be taking. And I think that if scholars are to be truly helpful to the rest of humanity they need to do more than tally and characterize problems in ever more nuanced ways. They need to offer more than the academic equivalent of fiddling while Rome burns.
In the case of Harris, we are told that underlying the more circumspect digital behavior that his organization advocates is a dangerous preoccupation with intentionality. The idea of being more intentional is tainted by the unsavory history of humanistic thought itself, which has been used for exclusionary purposes in the past. Left unsaid is exactly how exclusionary or even harmful it remains in the present.
This kind of genealogical takedown has become cliché. Consider how one Gizmodo blogger criticizes environmentalists’ use of the word “natural” in their political activism. The reader is instructed that because early Europeans used the concept of nature to prop up racist ideas about Native Americans, the term is now inherently problematic and baseless. The reader is supposed to conclude from this genealogical problematization that all human interactions with nature are equally natural or artificial, regardless of whether we choose to scale back industrial development or to erect giant machines to control the climate.
Another common problematization is of the form “not everyone is privileged enough to…,” and it is often a fair objection. For instance, people differ in their individual ability to disconnect from seductive digital devices, whether due to work constraints or the affordability or ease of alternatives. But differences in circumstances similarly challenge people’s capacity to affordably see a therapist, retrofit their home to be more energy efficient, or bike to work (and one might add to that: read and understand Foucault). Yet most of these actions still accomplish some good in the world. Why is disconnection any more problematic than any other set of tactics that individuals use to imperfectly realize their values in an unequal and relatively undemocratic society? Should we just hold our breath for the “total overhaul…full teardown and rebuild” of political economies that the far more astute critics demand?
Equally trite are references to the “panopticon,” a metaphor that Foucault developed to describe how people’s awareness of being constantly surveilled leads them to police themselves. Being potentially visible at all times enables social control in insidious ways. A classic example is the Benthamite prison, where a solitary guard at the center cannot actually view all the prisoners simultaneously, but the potential for him to be viewing a prisoner at any given time is expected to reduce deviant behavior.
This gets applied to nearly any area of life where people are visible to others, which means it is used to problematize nearly everything. Jill Grant uses it to take down the New Urbanist movement, which aspires (though fairly unsuccessfully) to build more walkable neighborhoods that are supportive of increased local community life. This movement is “problematic” because the densities it demands mean that citizens are everywhere visible to their neighbors, opening up possibilities for the exercise of social control. Whether any other way of housing human beings would avoid some form of residential panopticon is not exactly clear, except perhaps by designing neighborhoods so as to prohibit social community writ large.
Further left unsaid in these critiques is exactly what a more desirable alternative would be. Or at least that alternative is left implicit and vague. For example, the pro-disconnection digital wellness movement is in need of enhanced wokeness, to better come to terms with “the political and ideological assumptions” that they take for granted and the “privileged” values they are attempting to enact in the world.
But what does that actually mean? There’s a certain democratic thrust to the criticism, one that I can get behind. People disagree about what “the good life” is and how to get there, and any democratic society would be supportive of a multitude of visions. Yet the criticism that the digital wellness movement centers on a single vision of “being human,” one emphasizing mindfulness and a capacity for circumspect individual choice, seems hollow without the critics themselves showing us what should take its place. Whatever the flaws of digital wellness, it is not as self-stultifying as the defeatist brand of digital hedonism implicitly left in the wake of academic critiques that offer no concrete alternatives. Perhaps it is unfair to expect a full-blown alternative; yet few of these critiques offer even an incremental step in the right direction.
Even worse, this line of criticism can problematize nearly everything, losing its rhetorical power as it is over-applied. Even academia itself is disciplining. STS has its own dominant paradigms, and critique is mobilized in order to mold young scholars into academics who cite the right people, quote the correct theories, and support the preferred values. My success depends on me being at least “docile enough” in conforming myself to the norms of the profession.
I also exercise self-discipline in my efforts to be a better spouse and a better parent. I strive to be more intentional when I’m frustrated or angry, because I too often let my emotions shape my interactions with loved ones in ways that do not align with my broader aspirations. More intentionality in my life has been generally a good thing, so long as my expectations are not so unrealistic as to provoke more anxiety than the benefits are worth. But in a critical mode where self-discipline and intentionality automatically equate to self-subjugation, how exactly are people to exercise agency in improving their own lives?
In any case, advocating devices that enable users to exercise greater intentionality over their digital practices is not a bad thing per se. Citizens pursue self-help, meditate, and engage in other individualistic wellness activities because the lives they live are constrained. Their agency is partly circumscribed by their jobs, family responsibilities, and incomes, not to mention the more systemic biases of culture and capitalism. Why is it wrong for groups like Harris’ center to advocate efforts that largely work within those constraints?
Yet even that reading of the digital wellness movement seems uncharitable. Certainly Harris’ analysis lacks the sophistication of a technology scholar’s, but he has made it obvious that he recognizes that dominant business models and asymmetrical relations of power underlie the problem. To reduce his efforts to mere individualistic self-discipline is borderline dishonest, though he no doubt emphasizes the parts of the problem he understands best. Of course, realizing humane technology will likely take changes more radical than those Harris advocates, but it is not at all clear that individualized efforts necessarily detract from people’s ability or willingness to demand more from tech firms and governments (i.e., are they like bottled water and other “inverted quarantines”?). At least that is a claim that should be demonstrated rather than presumed from the outset.
At its worst, critical “problematizing” presents itself as its own kind of view from nowhere. For instance, because the idea of nature has been constructed in various biased ways throughout history, we are supposed to accept the view that all human activities are equally natural. And we are supposed to view that perspective as if it were itself an objective fact rather than yet another politically biased social construction.
Various observers mobilize much the same critique about claims regarding the “realness” of digital interactions. Because presenting the category of “real life” as apart from digital interactions is beset with Foucauldian problematics, we are told that the proper response is to abandon the qualitative distinctions that the category can help people make—whatever its limitations. It is probably no surprise that the same writer who wants to do away with the digital-real distinction is enthusiastic in their belief that the desires and pleasures of smartphones somehow inherently contain the “possibility…of disrupting the status quo.” Such critical takes give the impression that all technology scholarship can offer is a disempowering form of relativism, one that only thinly veils the author’s underlying political commitments.
The critic’s partisanship is also frequently snuck in through the back door by couching criticism in an abstract commitment to social justice. The fact that the digital wellness movement is dominated by tech bros and other affluent whites is taken to imply that it must be harmful to everyone else—a claim made by alluding to some unspecified amalgamation of oppressed persons (women, people of color, or non-cis citizens) who are insufficiently represented. It is assumed but not really demonstrated that people within the latter demographics would be unreceptive to or even damaged by Harris’ approach. But given the lack of actual concrete harms laid out in these critiques, it is not clear whether the critics are actually advocating for those groups or whether the social-theoretical existence of harms to them is just a convenient trope to make a mainly academic argument seem as if it actually mattered.
People’s prospects for living well in the digital age would be improved if technology scholars more often eschewed the deconstructive critique from nowhere. I think they should act instead as “thoughtful partisans.” By that I mean that they would acknowledge that their work is guided by a specific set of interests and values, ones that benefit particular groups.
It is not an impartial application of social theory to suggest that “realness” and “naturalness” are empty categories that should be dispensed with. And a more open and honest admission of partisanship would at least force writers to be upfront with readers regarding what the benefits would actually be to dispensing with those categories and who exactly would enjoy them—besides digital enthusiasts and ecomodernists. If academics were expected to use their analysis to the clear benefit of nameable and actually existing groups of citizens, scholars might do fewer trite Foucauldian analyses and more often do the far more difficult task of concretely outlining how a more desirable world might be possible.
“The life of the critic is easy,” notes Anton Ego in the Pixar film Ratatouille. Actually having skin in the game and putting oneself and one’s proposals out in the world where they can be scrutinized is far more challenging. Academics should be pushed to clearly articulate exactly how the novel concepts, arguments, observations, and claims they spend so much time developing actually benefit human beings who don’t have access to Elsevier or who don’t receive seasonal catalogs from Oxford University Press. Without them doing so, I cannot imagine academia having much of a role in helping ordinary people live better in the digital age.
If your Facebook wall is like mine, you have seen no shortage of memes trying to convince you that a simple explanation for school shootings exists. One claims that their increase coincides with the decline of proper “discipline” (read: corporal punishment) of children thirty years ago. Yet all sorts of things have changed over the last several decades, especially since 2011, when the frequency of mass shootings tripled. In any case, Europeans are equally unlikely to strike their children but have seen no uptick in the likelihood of acts of mass violence—the 2011 attack in Norway notwithstanding. Moreover, assault weapons like the AR-15 have been available for fifty years, and the 1994 federal assault weapons ban expired back in 2004, long before today’s upswing in shootings. Under the slightest bit of scrutiny, any single-cause explanation begins to unravel.
Journalists and other observers often note that the perpetrators of these events were “loners” or socially isolated but do little to no further investigation when it comes time to recommend solutions. It is as if we have begun to accept the existence of such isolated and troubled individuals as if it were natural, as if little could be done to prevent it, as if eliminating civilian weapons or de-secularizing society were less wicked problems. If there is any mindset my book, Technically Together, tries to eliminate, it is the belief that the social lives offered to us by contemporary networked societies are unalterable—the idea that we have arrived at the best of all possible social worlds. Indeed, it is difficult to square sociologist Keith Hampton’s claim that “because of cellphones and social media, those we depend on are more accessible today than at any point since we lived in small, village-like settlements” with massive increases in the rates of medication use for depression and anxiety, not just the frequency of mass shootings. At the very least, digital technologies—for all their wonders—do less than is needed to remedy feelings of isolation.
Such changes, I contend, suggest that something is very wrong with contemporary practices of togetherness. No doubt most of us get by well enough with some mixture of social networks, virtual communities, and perhaps a handful of neighborly and workplace-based connections (if we’re lucky). That said, most goods, social or otherwise, are unequally distributed. Even if sociologists disagree about whether social ties have changed on average, the distribution of connection has changed, and so have the qualitative dimensions of friendship. For every social butterfly who uses online networks to maintain levels of acquaintanceship that would have been impossible in the days of Rolodexes and phone conversations, there are those for whom increasing digital mediation has meant a decline in companionship in both number and intimacy. As nice as “lurking” on Facebook or a pleasant comment from a semi-anonymous Reddit compatriot can be, they cannot match a hug. Indeed, self-reported loneliness and expressed difficulties in sustaining close friendships persist among the older generations and young men despite no lack of digital mechanisms for connecting with others.
Some sociologists downplay this, as if highlighting the downsides to social networks invariably leads to simplistically blaming them for people’s problems. No doubt Internet critics like Sherry Turkle overlook many of the complexities of digital-age sociality, but only those socially advantaged by contemporary network technologies benefit from viewing them through rose-colored glasses. Certainly an explanation for mass shootings cannot be reduced to the prevalence of digital technologies, just as it cannot be blamed simply on the ostensible disappearance of God from schools, declines in juvenile corporal punishment, the mere presence of assault weapons, or any of the other purported causes that proliferate in the media. What Internet technologies do provide, however, is a window into society—insofar as they can exacerbate or make more visible social changes set in motion decades earlier.
To blame the Internet for social isolation would fail to recognize that it was suburbia that first physically isolated people. Suburbia makes the warm intimacy of bodily co-presence hard work; hanging out requires gas money as well as the time and energy to drive somewhere.
Skeptical readers would probably point out that events like mass shootings became prevalent and accelerated well after the suburb-building boom of the mid-20th century. That objection is easy to counter: social lag. The first suburban dwellers brought with them communal practices learned in small towns or tight-knit urban neighborhoods, and their children maintained some of them. 30 Rock’s Jack Donaghy lamented that 1st generation immigrants work their fingers to the bone, the 2nd goes to college, and the 3rd snowboards and takes improv classes. A similar generational slide could be said about community in suburbia: The 1st generation bowls together; the 2nd organizes neighborhood watch; the 3rd waits with their kids in the car until the school bus arrives.
Even while considering all that the physical makeup of our cities does to stifle community life, it would be a mistake not to recognize that there is something unique about many of our Internet activities that make them far more conducive to feelings of loneliness than other media—even if they do connect us with friends.
Consider how one woman in the BBC documentary The Age of Loneliness laments that social media makes her feel even lonelier, because she cannot help but compare her own life to the “highlights reels” posted by acquaintances. Others use the Internet to avoid the painful awkwardness and risk of in-person interactions, getting stuck in a downward spiral of solitude. These features combine with a third to help give birth to mass shooters: the “long tail” of the Internet provides websites that concentrate and amplify pathological tendencies. Forums that encourage people with eating disorders to continue their damaging behaviors are as common as racist, violence-promoting websites, many of which had been frequented by recent mass shooters.
While it is the suburbs that physically isolate people and make physical friendships practically difficult, online social networks too easily exacerbate and highlight that isolation. My point, however, is not to call for dismantling the Internet—though I think it could use a massive redesign. Such a call would be as simple-minded as believing that just eliminating AR-15s or making kids read the Bible in school would prevent acts of mass violence. Appeals to improving mental health services or calls to arm teachers or place military veterans at schools are equally misguided. These are all band-aid solutions that fail to ask about the underlying causes. What we need most is not more guns, God, scrutiny of the mentally ill, or even necessarily gun bans, but a sober evaluation of our social world: Why does it not provide adequate levels of loving togetherness and belonging to nearly everyone? How could it?
To some this might sound like a call to coddle potential murderers. Yet, given that people’s genetics do not fully explain their personalities, societies have to reckon with the fact that mass shooters are not born ready-made monsters but become that way. It is difficult not to see parallels between many young men today and the “lost generation” that was so liable to fall prey to fascism in the early 20th century. The growth in the number of mass shooters, who are mainly white, young, and male, cannot be totally unrelated to the increase in the mainly white, young, and male acolytes of prophets like Jordan Peterson, who extol the virtues of traditional notions of male power. Absent work toward ameliorating the “crisis of connection” that many men currently face, we should be unsurprised if some of them continue to try to replace a lost sense of belonging with violent power fantasies.
As a scholar concerned about the value of democracy within contemporary societies, especially with respect to the challenges presented by increasingly complex (and hence risky) technoscience, a good check for my views is to read arguments by critics of democracy. I had hoped Jason Brennan's Against Democracy would force me to reconsider some of the assumptions that I had made about democracy's value and perhaps even modify my position. Hoped.
Having read through a few chapters, I am already disappointed and unsure if the rest of the book is worth my time. Brennan's main assertion is that because some evidence shows that participation in democratic politics has a corrupting influence--that is, participants are not necessarily well informed and often end up becoming more polarized and biased in the process--we would be better off limiting decision-making power to those who have proven themselves sufficiently competent and rational, to epistocracy. Never mind the absurdity of the idea that a process for judging those qualities in potential voters could ever be designed in an apolitical, unbiased, or just way; Brennan does not even begin with a charitable or nuanced understanding of what democracy is or could be.
One early example that exposes the simplicity of Brennan's understanding of democracy--and perhaps even the circularity of his argument--is a thought experiment about child molestation. Brennan asks the reader to consider a society that has deeply deliberated the merits of adults raping children and subjected the decision to a majority vote, with the yeas winning. Brennan claims that because the decision was made in line with proper democratic procedures, advocates of a proceduralist view of democracy must see it as a just outcome. Due to the clear absurdity and injustice of this result, we must therefore reject the view that democratic procedures (e.g., voting, deliberation) themselves are inherently just.
What makes this thought experiment so specious is that Brennan assumes that one relatively simplistic version of a proceduralist, deliberative democracy can represent the whole. Even worse, his assumed model of deliberative democracy--ostensibly not too far from what already exists in most contemporary nations--is already questionably democratic. Not only are majoritarian decision-making and procedural democracy far from equivalent, but Brennan makes no mention of whether children themselves were participants in either the deliberative process or the vote, or would even have a representative say through some other mechanism. Hence, in this example Brennan actually ends up showing the deficits of a kind of epistocracy rather than democracy, insofar as the ostensibly more competent and rationally thinking adults are deliberating and voting for children. That is, political decisions about children already get made by epistocrats (i.e., adults) rather than democratically (understood as people having influence in deciding the rules by which they will be governed for the issues they have a stake in). Moreover, any defender of the value of democratic procedures would likely counter that a well-functioning democracy would contain processes to amplify or protect the say of less empowered minority groups, whether through proportional representation or mechanisms to slow down policy or to force majority alliances to make concessions or compromises. It is entirely unsurprising that democratic procedures look bad when one's stand-in for democracy is winner-take-all, simple majoritarian decision-making.
His attack on democratic deliberation is equally short-sighted. Noting, quite rightly, that many scholars defend deliberative democracy with purely theoretical arguments while much of the empirical evidence shows that many average people dislike deliberation and are often very bad at it, Brennan concludes that, absent promising research on how to improve the situation, there is no logical reason to defend deliberative democracy. This is where Brennan's narrow disciplinary background as a political theorist biases his viewpoint. It is not at all surprising to a social scientist that average people would neither deliberate well nor enjoy it when the near entirety of contemporary society fails to prepare them for democracy. Most adults have spent 18 years or more in schools and up to several decades in workplaces that do not function as democracies but rather are authoritarian, centrally planned institutions. Empirical research on deliberation has merely uncovered the obvious: people with little practice with deliberative interactions are bad at them. Imagine if an experiment put assembly line workers in charge of managing General Motors, then justified the current hierarchical makeup of corporate firms by pointing to the resulting non-ideal outcomes. I see no reason why Brennan's reasoning about deliberative democracy is any less absurd.
Finally, Brennan's argument rests on a principle of competence--and concurrently the claim that citizens have a right to governments that meet that principle. He borrows the principle from medical ethics, namely that a patient is competent if they are aware of the relevant facts, can understand them, appreciate their relevance, and can reason about them appropriately. Brennan immediately sidesteps the obvious objections about how any of the judgments about relevance and appropriateness could be made in non-political ways, merely claiming that the principle is unobjectionable in the abstract. Certainly for the simplified examples he provides of plumbers unclogging pipes and doctors treating patients with routine conditions, the validity of the principle of competence is clear. However, for the most contentious issues we face (climate change, gun control, genetically modified organisms, and so on), the facts and the reliability of experts are themselves in dispute. What political system would best resolve such a dispute? Obviously it could not be an epistocracy, given that the relevance and appropriateness of the "relevant" expertise is itself the issue to be decided. Perhaps Brennan's suggestions have some merit, but absent a non-superficial understanding of the relationship between science and politics, the foundation of his positive case for epistocracy is shaky at best. His oft-repeated assertion that epistocracy would likely produce more desirable decisions is highly speculative.
I plan on continuing to examine Brennan's arguments regarding democracy, but I find it ironic that his argument against average citizens--that they suffer too much from various cognitive maladies to reason well about public issues--applies equally to Brennan. Indeed, the hubris of most experts is deeply rooted in their unfounded belief that a little learning has freed them from the mental limitations that afflict the less educated. In reality, Brennan is a partisan like anyone else, not a sage academic doling out objective advice. Whether one turns to epistocratic ideas in light of the limitations of contemporary democracies or advocates for ensuring the right preconditions for democracies to function better comes back to one's values and political commitments. So far it seems that Brennan's book demonstrates his own political biases as much as it exposes the ostensibly insurmountable problems for democracy.
It is hard to imagine anything more damaging to the movements for livable minimum wages, greater reliance on renewable energy resources, or workplace democracy than the stubborn belief that one must be a “liberal” to support them. Indeed, the common narrative that associates energy efficiency with left-wing politics leads to absurd actions by more conservative citizens. Not only do some self-identified conservatives intentionally make their pickup trucks more polluting at high cost (e.g., “rolling coal”), but they will shun energy-efficient, money-saving lightbulbs if their packaging touts their environmental benefits. Those on the left often do little to help the situation, themselves seemingly buying into the idea that conservatives must culturally be everything leftists are not and vice versa. As a result, the possibility of allying for common purposes, against a common enemy (i.e., neoliberalism), is forgone.
The Germans have not let themselves be hindered by such narratives. Indeed, their movement toward embracing renewables, which now make up nearly a third of their power generation market, has been driven by a diverse political coalition. A number of villages in the heartland of Germany's conservative party (the CDU) now produce more green energy than they need, and conservative politicians supported the development of feed-in tariffs and voted to phase out nuclear energy. As Craig Morris and Arne Jungjohann describe, the German energy transition resonates with key conservative ideas, namely the ability of communities to self-govern and the protection of valued rural ways of life. Agrarian villages are given a new lease on life by farming energy alongside crops and livestock, and enabling communities to produce their own electricity lessens the control of large corporate utilities over energy decisions. Such themes remain latent in American conservative politics, though they are now overshadowed by the post-Reagan dominance of “business friendly” libertarian thought styles.
Elizabeth Anderson has noticed a similar contradiction with regard to workplaces. Many conservative Americans decry what they see as overreach by federal and state governments, but tolerate outright authoritarianism at work. Tracing the history of conservative support for “free market” policies, she notes that such ideas emerged in an era when self-employment was much more feasible. Given the immense economies of scale possible with post-Industrial Revolution technologies, however, the barriers to entry for most industries are much too high for average people to own and run their own firms. As a result, free market policies no longer create the conditions for citizens to become self-reliant artisans but rather spur the centralization and monopolization of industries. Citizens, in turn, become wage laborers, working under conditions far more similar to feudalism than many people are willing to recognize.
Even Adam Smith, to whom many conservatives look for guidance on economic policy, argued that citizens would only realize the moral traits of self-reliance and discipline—values that conservatives routinely espouse—in the right contexts. In fact, he wrote of people stuck doing repetitive tasks in a factory:
“He naturally loses, therefore, the habit of such exertion, and generally becomes as stupid and ignorant as it is possible for a human creature to become. The torpor of his mind renders him not only incapable of relishing or bearing a part in any rational conversation, but of conceiving any generous, noble, or tender sentiment, and consequently of forming any just judgment concerning many even of the ordinary duties of private life. Of the great and extensive interests of his country he is altogether incapable of judging.”
Advocates of economic democracy have overlooked a real opportunity to enroll conservatives in this policy area. Right-leaning citizens need not follow Mike Rowe—a man who ironically garnered a following among “hard working” conservatives by merely dabbling in blue-collar work—in bemoaning the ostensible decline of citizens’ work ethic. Conservatives could be convinced that policies supporting self-employment and worker-owned firms would do far more to create the kind of citizenry they hope for than simply shaming the unemployed as lazy. Indeed, they could become like the conservative prison managers in North Dakota (1), who are now recognizing that traditionally conservative “tough on crime” legislation is both ineffective and fiscally irresponsible—learning that upstanding citizens cannot be penalized into existence.
Another opportunity has been lost by failing to construct more persuasive narratives that connect neoliberal policies with the decline of community life and the eroding well-being of the nation. Contemporary conservatives will vote for politicians who enable corporations to outsource or relocate at the first sign of better tax breaks somewhere else, while simultaneously decrying the loss of the kinds of neighborhood environments they experienced growing up. Their support of “business friendly” policies had far different implications in the days when the CEO of General Motors would say “what is good for the country is good for General Motors and vice versa.” Compare that to an Apple executive, who baldly stated: “We don’t have an obligation to solve America’s problems. Our only obligation is making the best product possible.”
Yet fights for a higher minimum wage and proposals to limit the destructively competitive processes whereby nations and cities try to lure businesses away from each other with tax breaks get framed as anti-American, even though they are poised to reestablish part of the social reality that conservatives actually value. Communities cannot prosper when torn asunder by economic disruption; what is best for a multinational corporation is often not what is best for a nation like the United States. It is a tragedy that many leftists overlook these narratives and focus narrowly on appeals to egalitarianism, a moral language that political psychologists have found (unsurprisingly) to resonate only with other leftists.
The resulting inability to form alliances with conservatives over key economic and energy issues allows libertarian-inspired neoliberalism to drive conservative politics in the United States, even though libertarianism is as incompatible with conservatism as it is with egalitarianism. Libertarianism, by idealizing impersonal market forces, upholds an individualist vision of society that is incommensurable with communal self-governance and with the kinds of market interventions that would enable more people to be self-employed or establish cooperative businesses. By insisting that one should “defer” to the supposedly objective market in nearly all spheres of life, libertarianism threatens to commodify the spaces that both leftists and conservatives find sacred: pristine wilderness, private life, etc.
There are real challenges, however, to realizing political coalitions between progressives and conservatives more often, namely divisions over traditionalist ideas regarding gender and sexuality. Yet even this is a recent development. As Nadine Hubbs shows, the idea that poor rural and blue-collar people are invariably more intolerant than urban elites is a modern construction. Indeed, studies in rural Sweden and elsewhere have uncovered a surprising degree of acceptance of non-heterosexual people, though rural queer people often understand and express their sexuality differently than urban gays. Hence, even on this issue, the problem lies not in rural conservatism per se but in how contemporary rural conservatism in America has been culturally valenced. The extension of communal acceptance has been deemphasized to maintain consistency with contemporary narratives that present a stark urban-rural binary, wherein non-cis, non-heterosexual behaviors and identities are presumed compatible only with urban living. Yet the practice, and hence the narrative, of rural blue-collar tolerance could be revitalized.
However, some progressives' preoccupation with maintaining a stark cultural distinction from rural America prevents progressive-conservative coalitions from coming together to realize mutually beneficial policy changes. I have been guilty of this myself. Growing up with left-wing proclivities, I embodied much of what Nadine Hubbs criticizes about middle-class Americans: I made fun of “rednecks” and never, ever admitted to liking country music. My preoccupation with proving that I was really an “enlightened” member of the middle class, despite being a child of working-class parents and only one generation removed from the farm, prevented me from recognizing that I potentially had more in common with rednecks politically than I ever would with the corporate-friendly “centrist” politicians at the helm of both major parties. No doubt there is work to be done to undo all that has made many rural areas into havens for xenophobic, racist, and homophobic bigotry; but that work is no different from what could and should be done to encourage poor, conservative whites to recognize what a 2016 SNL sketch so poignantly illustrated: that they have far more in common with people of color than they realize.
1. A big oversight in the “work ethic” narrative is that it fails to recognize that slacking workers are often acting rationally. If one faces few avenues for advancement and is instantly replaced when suffering an illness or personal difficulty, why work hard? What white-collar observers like Rowe see as laziness could be considered an adaptation to wage labor. In such contexts, working hard can reasonably be seen not as the key to success but as the mark of a chump: a person merely harming their own well-being in order to make someone else rich. The same discourse in the age of feudalism would have involved chiding peasants for taking too many holidays.
On the Myth of Net Neutrality
Few issues stoke as much controversy, or provoke such shallow analysis, as net neutrality. Richard Bennett’s recent piece in the MIT Technology Review is no exception. His views represent a swelling ideological tide among certain technologists that threatens not only any possibility of democratically controlling technological change but any prospect of intelligently and preemptively managing technological risks. The only thing he gets right is that “the web is not neutral” and never has been. Yet current “net neutrality” advocates avoid seriously engaging with that proposition. What explains the self-stultifying allegiance to the notion that the Internet could ever be neutral?
Bennett claims that net neutrality has no clear definition (it does), that anything good about the current Internet has nothing to do with a regulatory history of commitment to net neutrality (something he can’t prove), and that the whole debate only exists because “law professors, public interest advocates, journalists, bloggers, and the general public [know too little] about how the Internet works.”
To anyone familiar with the history of technological mistakes, the underlying presumption that we’d be better off if we just let the technical experts make the “right” decision for us—as if their technical expertise allowed them to see the world without any political bias—should be a familiar, albeit frustrating, refrain. In it one hears the echoes of early nuclear energy advocates, whose hubris led them to predict that humanity wouldn’t suffer a meltdown in hundreds of years, whose ideological commitment to an atomic vision of progress led them to pursue harebrained ideas like nuclear jets and using nuclear weapons to dig canals. One hears the echoes of those who managed America’s nuclear arsenal and tried to shake off public oversight, bringing us to the brink of nuclear oblivion on more than one occasion.
Only armed with such a poor knowledge of technological history could someone make the argument that “the genuine problems the Internet faces today…cannot be resolved by open Internet regulation. Internet engineers need the freedom to tinker.” Bennett’s argument is really just an ideological opposition to regulation per se, a view based on the premise that innovation better benefits humanity if it is done without the “permission” of those potentially negatively affected. Even though Bennett presents himself as simply a technologist whose knowledge of the cold, hard facts of the Internet leads him to his conclusions, he is really just parroting the latest discursive instantiation of technological libertarianism.
As I’ve recently argued, the idea of “permissionless innovation” is built on an (intentional?) misunderstanding of the research on how to intelligently manage technological risks, as well as on the problematic assumption that innovations, no matter how disruptive, have always worked out for the best for everyone. Unsurprisingly, the people most often championing this view are affluent white guys who love their gadgets. It is easy to have such a rosy view of the history of technological change when one is, and has consistently been, on the winning side. It is a view that is only sustainable as long as one never bothers to ask whether technological change has been an unmitigated wonder for the poor white and Hispanic farmhands who now die at relatively young ages of otherwise rare cancers, the Africans who have mined and continue to mine uranium and coltan in despicable conditions, or the permanent underclass created by continuous technological upheavals in the workplace unaccompanied by adequate social programs.
In any case, I agree with Bennett’s later comment on the article: “the web is not neutral, has never been neutral, and wouldn't be any good if it were neutral.” Although advocates for net neutrality demand a very specific kind of neutrality (that ISPs not treat packets differently based on where they originate or where they are going), the idea of net neutrality has taken on a much broader symbolic meaning, one that I think constrains people’s thinking about Internet freedoms rather than enhancing it.
The idea of neutrality carries so much rhetorical weight in Western societies because their cultures are steeped in philosophical liberalism, a tradition rooted in the belief that the freedom of individuals to choose is the greatest good. Even American political conservatives really just embrace a particular flavor of philosophical liberalism, one that privileges the freedom of supposedly individualized actors, unencumbered by social conventions or government interference, to make market decisions. Politics in nations like the US proceeds on the assumption that society, or at least parts of it, can be composed in such a way as to allow individuals to decide wholly for themselves. Hence, it is unsurprising that changes in Internet regulations provoke so much ire: The Internet appears to offer that neutral space, both in terms of the forms of individual self-expression valued by left-liberals and the purportedly disruptive market environment that gives Steve Jobs wannabes wet dreams.
Neutrality is, however, impossible. As I argue in my recent book, even an idealized liberal society would have to put constraints on choice: people would have to be prevented from making their relationship or communal commitments too strong. As loath as some leftists would be to hear it, a society that maximizes citizens’ abilities for individual self-expression would have to be more extreme than even Margaret Thatcher imagined: composed of atomized individuals. Even the maintenance of family structures would have to be limited in an idealized liberal world.
On a practical level, it is easy to see the cultivation of liberal personhood in children as imposed rather than freely chosen; consider the Toronto family that went so far as to not assign their child a gender. On the plus side for freedom, the child now has a choice they didn’t have before. On the negative side, they didn’t get to choose whether they’d be forced to make that choice. All freedoms come with obligations, and often some people get to enjoy the freedoms while others must shoulder the obligations.
So it is with the Internet as well. Currently ISPs are obliged to treat packets equally so that content providers like Google and Netflix can enjoy enormous freedoms in connecting with customers. That is clearly not a neutral arrangement, even though it is one that many people (including Google) prefer.
However, the more important non-neutrality of the Internet, the one I think should take center stage in these debates, is that it is dominated by corporate interests. Content providers are no more accountable to the public than large Internet service providers. At least since it was privatized in the mid-90s, the Internet has been biased toward fulfilling the needs of business. Other aspirations, like improving democracy or cultivating communities, have been incidental—if the Internet has even really delivered all that much in those regards. Facebook wants you to connect with childhood friends so it can show you an ad for a 90s nostalgia t-shirt design. Google wants to make sure neo-nazis can find the Stormfront website so it can advertise the right survival gear to them.
I don’t want a neutral net. I want one biased toward supporting well-functioning democracies and vibrant local communities. It might be possible for an Internet to do so while providing the wide latitude for innovative tinkering that Bennett wants, but I doubt it. Indeed, ditching the pretense of neutrality would enable the broader recognition of the partisan divisions about what the Internet should do, the acknowledgement that the Internet is and will always be a political technology. Whose interests do you want it to serve?
Taylor C. Dotson is an associate professor at New Mexico Tech, a Science and Technology Studies scholar, and a research consultant with WHOA. He is the author of The Divide: How Fanatical Certitude is Destroying Democracy and Technically Together: Reconstructing Community in a Networked World. Here he posts his thoughts on issues mostly tangential to his current research.