Continued at The New Atlantis
Why is Covid-19 science making us more partisan?
Back during the summer, Tristan Harris sparked a flurry of academic indignation when he suggested that we needed a new field called “Science & Technology Interaction,” or STX, which would be dedicated to improving the alignment between technologies and social systems. Tweeters were quick to accuse him of “Columbizing,” claiming that such a field already exists in the form of Science & Technology Studies (STS) or similar academic departments. So ignorant, amirite?
I am far more sympathetic. If people like Harris (and earlier Cathy O’Neil) have been relatively unaware of fields like Science and Technology Studies, it is because much of the research within these disciplines is mostly illegible to non-academics, not all that useful to them, or both. I really don’t blame them for not knowing. I am an STS scholar myself, and even I find that the tables of contents of most issues of my field’s major journals don’t inspire me to read further.
And in fairness to Harris and contrary to Academic Twitter, the field of STX that he proposes does not already exist. The vast majority of STS articles and books dedicate single-digit percentages of their words to actually imagining how technology could better match the aspirations of ordinary people and their communities. Next to no one details alternative technological designs or clear policy pathways toward a better future, at least not beyond a few pages at the end of a several-hundred-page manuscript.
My target here is not just this particular critique of Harris, but the whole complex of academic opiners who cite Foucault and other social theory to make sure we know just how “problematic” non-academics’ “ignorant” efforts to improve technological society are. As essential as it is to try to improve upon the past in remaking our common world, most of these critiques don’t really provide any guidance for what steps we should be taking. And I think that if scholars are to be truly helpful to the rest of humanity they need to do more than tally and characterize problems in ever more nuanced ways. They need to offer more than the academic equivalent of fiddling while Rome burns.
In the case of Harris, we are told that underlying the more circumspect digital behavior that his organization advocates is a dangerous preoccupation with intentionality. The idea of being more intentional is tainted by the unsavory history of humanistic thought itself, which has been used for exclusionary purposes in the past. Left unsaid is exactly how exclusionary or even harmful it remains in the present.
This kind of genealogical takedown has become cliché. Consider how one Gizmodo blogger criticizes environmentalists’ use of the word “natural” in their political activism. The reader is instructed that because early Europeans used the concept of nature to prop up racist ideas about Native Americans, the term is now inherently problematic and baseless. The reader is supposed to conclude from this genealogical problematization that all human interactions are equally natural or artificial, regardless of whether we choose to scale back industrial development or to erect giant machines to control the climate.
Another common problematization is of the form “not everyone is privileged enough to…”, and it is often a fair objection. For instance, people differ in their individual ability to disconnect from seductive digital devices, whether due to work constraints or the affordability or ease of alternatives. But differences in circumstances similarly challenge people’s capacity to affordably see a therapist, retrofit their home to be more energy efficient, or bike to work (and one might add to that: read and understand Foucault). Yet most of these actions still accomplish some good in the world. Why is disconnection any more problematic than any other set of tactics that individuals use to imperfectly realize their values in an unequal and relatively undemocratic society? Should we just hold our breath for the “total overhaul…full teardown and rebuild” of political economies that the far more astute critics demand?
Equally trite are references to the “panopticon,” a metaphor that Foucault developed to describe how people’s awareness of being constantly surveilled leads them to police themselves. Being potentially visible at all times enables social control in insidious ways. A classic example is the Benthamite prison, where a solitary guard at the center cannot actually view all the prisoners simultaneously, but the potential for him to be viewing a prisoner at any given time is expected to reduce deviant behavior.
This gets applied to nearly any area of life where people are visible to others, which means it is used to problematize nearly everything. Jill Grant uses it to take down the New Urbanist movement, which aspires (though fairly unsuccessfully) to build more walkable neighborhoods that are supportive of increased local community life. This movement is “problematic” because the densities it demands mean that citizens are everywhere visible to their neighbors, opening up possibilities for the exercise of social control. Whether any other way of housing human beings would avoid some form of residential panopticon is not exactly clear, except perhaps designing neighborhoods so as to prohibit social community writ large.
Further left unsaid in these critiques is exactly what a more desirable alternative would be. Or at least that alternative is left implicit and vague. For example, the pro-disconnection digital wellness movement is in need of enhanced wokeness, to better come to terms with “the political and ideological assumptions” that it takes for granted and the “privileged” values it is attempting to enact in the world.
But what does that actually mean? There’s a certain democratic thrust to the criticism, one that I can get behind. People disagree about what “the good life” is and how to get there, and any democratic society would be supportive of a multitude of visions. Yet the criticism that the digital wellness movement centers on one vision of “being human,” one emphasizing mindfulness and a capacity for circumspect individual choosing, seems hollow without the critics themselves showing us what should take its place. Whatever the flaws of digital wellness, it is not as self-stultifying as the defeatist brand of digital hedonism implicitly left in the wake of academic critiques that offer no concrete alternatives. Perhaps it is unfair to expect a full-blown alternative; yet few of these critiques offer even an incremental step in the right direction.
Even worse, this line of criticism can problematize nearly everything, losing its rhetorical power as it is over-applied. Even academia itself is disciplining. STS has its own dominant paradigms, and critique is mobilized in order to mold young scholars into academics who cite the right people, quote the correct theories, and support the preferred values. My success depends on me being at least “docile enough” in conforming myself to the norms of the profession.
I also exercise self-discipline in my efforts to be a better spouse and a better parent. I strive to be more intentional when I’m frustrated or angry, because I too often let my emotions shape my interactions with loved ones in ways that do not align with my broader aspirations. More intentionality in my life has been generally a good thing, so long as my expectations are not so unrealistic as to provoke more anxiety than the benefits are worth. But in a critical mode where self-discipline and intentionality automatically equate to self-subjugation, how exactly are people to exercise agency in improving their own lives?
In any case, advocating devices that enable users to exercise greater intentionality over their digital practices is not a bad thing per se. Citizens pursue self-help, meditate, and engage in other individualistic wellness activities because the lives they live are constrained. Their agency is partly circumscribed by their jobs, family responsibilities, and incomes, not to mention the more systemic biases of culture and capitalism. Why is it wrong for groups like Harris’ center to advocate efforts that largely work within those constraints?
Yet even that reading of the digital wellness movement seems uncharitable. Certainly Harris’ analysis lacks the sophistication of a technology scholar’s, but he has made it obvious that he recognizes that dominant business models and asymmetrical relations of power underlie the problem. To reduce his efforts to mere individualistic self-discipline is borderline dishonest, though he no doubt emphasizes the parts of the problem he understands best. Of course it will likely take more radical changes to realize the humane technology that Harris advocates, but it is not at all clear that individualized efforts necessarily detract from people’s ability or willingness to demand more from tech firms and governments (i.e., are they like bottled water and other “inverted quarantines”?). At least that is a claim that should be demonstrated rather than presumed from the outset.
At its worst, critical “problematizing” presents itself as its own kind of view from nowhere. For instance, because the idea of nature has been constructed in various biased ways throughout history, we are supposed to accept the view that all human activities are equally natural. And we are supposed to view that perspective as if it were itself an objective fact rather than yet another politically biased social construction.
Various observers mobilize much the same critique about claims regarding the “realness” of digital interactions. Because presenting the category of “real life” as being apart from digital interactions is beset with Foucauldian problematics, we are told that the proper response is to no longer attempt the qualitative distinctions that that category can help people make—whatever its limitations. It is probably no surprise that the same writer wanting to do away with the digital-real distinction is enthusiastic in their belief that the desires and pleasures of smartphones somehow inherently contain the “possibility…of disrupting the status quo.” Such critical takes give the impression that all technology scholarship can offer is a disempowering form of relativism, one that only thinly veils the author’s underlying political commitments.
The critic’s partisanship is also frequently snuck in through the back door by couching criticism in an abstract commitment to social justice. The fact that the digital wellness movement is dominated by tech bros and other affluent whites is taken to imply that it must be harmful to everyone else—a claim made by alluding to some unspecified amalgamation of oppressed persons (women, people of color, or non-cis citizens) who are insufficiently represented. It is assumed but not really demonstrated that people within the latter demographics would be unreceptive to or even damaged by Harris’ approach. But given the lack of actual concrete harms laid out in these critiques, it is not clear whether the critics are actually advocating for those groups or whether the social-theoretical existence of harms to them is just a convenient trope to make a mainly academic argument seem as if it actually mattered.
People’s prospects for living well in the digital age would be improved if technology scholars more often eschewed the deconstructive critique from nowhere. I think they should act instead as “thoughtful partisans.” By that I mean that they would acknowledge that their work is guided by a specific set of interests and values, ones that benefit particular groups.
It is not an impartial application of social theory to suggest that “realness” and “naturalness” are empty categories that should be dispensed with. And a more open and honest admission of partisanship would at least force writers to be upfront with readers regarding what the benefits would actually be to dispensing with those categories and who exactly would enjoy them—besides digital enthusiasts and ecomodernists. If academics were expected to use their analysis to the clear benefit of nameable and actually existing groups of citizens, scholars might do fewer trite Foucauldian analyses and more often do the far more difficult task of concretely outlining how a more desirable world might be possible.
“The life of the critic is easy,” notes Anton Ego in the Pixar film Ratatouille. Actually having skin in the game and putting oneself and one’s proposals out in the world where they can be scrutinized is far more challenging. Academics should be pushed to clearly articulate exactly how the novel concepts, arguments, observations, and claims they spend so much time developing actually benefit human beings who don’t have access to Elsevier or who don’t receive seasonal catalogs from Oxford University Press. Unless they do so, I cannot imagine academia having much of a role in helping ordinary people live better in the digital age.
If your Facebook wall is like mine, you have seen no shortage of memes trying to convince you that a simple explanation for school shootings exists. One claims that their increase coincides with the decline of proper “discipline” (read: corporal punishment) of children thirty years ago. Yet all sorts of things have changed over the last several decades, especially since 2011, when the frequency of mass shootings tripled. In any case, Europeans are just as unlikely to strike their children but have seen no uptick in the likelihood of acts of mass violence—the 2011 attack in Norway notwithstanding. Moreover, assault weapons like the AR-15 have been available for fifty years, and the federal assault weapons ban (enacted in 1994) expired back in 2004, well before today’s upswing in shootings. Under the slightest bit of scrutiny, any single-cause explanation begins to unravel.
Journalists and other observers often note that the perpetrators of these events were “loners” or socially isolated but do little to no further investigation when it comes time to recommend solutions. It is as if we have begun to accept the existence of such isolated and troubled individuals as natural, as if little could be done to prevent it, as if eliminating civilian weapons or de-secularizing society were less wicked problems. If there is any mindset my book, Technically Together, tries to eliminate, it is the belief that the social lives offered to us by contemporary networked societies are unalterable—the idea that we have arrived at the best of all possible social worlds. Indeed, it is difficult to square sociologist Keith Hampton’s claim that “because of cellphones and social media, those we depend on are more accessible today than at any point since we lived in small, village-like settlements” with massive increases in the rates of medication use for depression and anxiety, not to mention the frequency of mass shootings. At the very least, digital technologies—for all their wonders—do less than is needed to remedy feelings of isolation.
Such changes, I contend, suggest that something is very wrong with contemporary practices of togetherness. No doubt most of us get by well enough with some mixture of social networks, virtual communities, and perhaps a handful of neighborly and workplace-based connections (if we’re lucky). That said, most goods, social or otherwise, are unequally distributed. Even if sociologists disagree about whether social ties have changed on average, the distribution of connection has changed, and so have the qualitative dimensions of friendship. For every social butterfly who uses online networks to maintain levels of acquaintanceship that would have been impossible in the days of Rolodexes and phone conversations, there are those for whom increasing digital mediation has meant a decline in companionship, in both number and intimacy. As nice as “lurking” on Facebook or a pleasant comment from a semi-anonymous Reddit compatriot can be, neither can match a hug. Indeed, self-reported loneliness and expressed difficulties in sustaining close friendships persist among the older generations and young men despite no lack of digital mechanisms for connecting with others.
Some sociologists downplay this, as if highlighting the downsides to social networks invariably leads to simplistically blaming them for people’s problems. No doubt Internet critics like Sherry Turkle overlook many of the complexities of digital-age sociality, but only those socially advantaged by contemporary network technologies benefit from viewing them through rose-colored glasses. Certainly an explanation for mass shootings cannot be reduced to the prevalence of digital technologies, just as it cannot be blamed simply on the ostensible disappearance of God from schools, declines in juvenile corporal punishment, the mere presence of assault weapons, or any of the other purported causes that proliferate in the media. What Internet technologies do provide, however, is a window into society—insofar as they can exacerbate or make more visible social changes set in motion decades earlier.
To blame the Internet for social isolation would fail to recognize that it was suburbia that first physically isolated people. Suburbia makes the warm intimacy of bodily co-presence hard work; hanging out requires gas money as well as the time and energy to drive somewhere.
Skeptical readers would probably point out that events like mass shootings became prevalent and accelerated well after the suburb-building boom of the mid-20th century. That objection is easy to counter: social lag. The first suburban dwellers brought with them communal practices learned in small towns or tight-knit urban neighborhoods, and their children maintained some of them. 30 Rock’s Jack Donaghy lamented that first-generation immigrants work their fingers to the bone, the second generation goes to college, and the third snowboards and takes improv classes. A similar generational slide could be described for community in suburbia: the first generation bowls together; the second organizes the neighborhood watch; the third waits with their kids in the car until the school bus arrives.
Even while considering all that the physical makeup of our cities does to stifle community life, it would be a mistake not to recognize that there is something unique about many of our Internet activities that makes them far more conducive to feelings of loneliness than other media—even if they do connect us with friends.
Consider how one woman in the BBC documentary The Age of Loneliness laments that social media makes her feel even lonelier, because she cannot help but compare her own life to the “highlights reels” posted by acquaintances. Others use the Internet to avoid the painful awkwardness and risk of in-person interactions, getting stuck in a downward spiral of solitude. These features combine with a third to help give birth to mass shooters: the “long tail” of the Internet provides websites that concentrate and amplify pathological tendencies. Forums that encourage people with eating disorders to continue their damaging behaviors are as common as racist, violence-promoting websites, many of which have been frequented by recent mass shooters.
While it is the suburbs that physically isolate people and make physical friendships practically difficult, online social networks too easily exacerbate and highlight that isolation. My point, however, is not to call for dismantling the Internet—though I think it could use a massive redesign. Such a call would be as simple-minded as believing that just eliminating AR-15s or making kids read the Bible in school would prevent acts of mass violence. Appeals to improving mental health services or calls to arm teachers or place military veterans at schools are equally misguided. These are all band-aid solutions that fail to ask about the underlying causes. What we need most is not more guns, God, scrutiny of the mentally ill, or even necessarily gun bans, but a sober evaluation of our social world: Why does it not provide adequate levels of loving togetherness and belonging to nearly everyone? How could it?
To some this might sound like a call to coddle potential murderers. Yet, given that people’s genetics do not fully explain their personalities, societies have to reckon with the fact that mass shooters are not born ready-made monsters but become that way. It is difficult not to see parallels between many young men today and the “lost generation” that was so liable to fall prey to fascism in the early 20th century. The growth of mainly white, young, and male mass shooters cannot be totally unrelated to the increase in mainly white, young, and male acolytes of prophets like Jordan Peterson, who extol the virtues of traditional notions of male power. Absent work toward ameliorating the “crisis of connection” that many men currently face, we should be unsurprised if some of them continue to try to replace a lost sense of belonging with violent power fantasies.
As a scholar concerned about the value of democracy within contemporary societies, especially with respect to the challenges presented by increasingly complex (and hence risky) technoscience, a good check for my views is to read arguments by critics of democracy. I had hoped Jason Brennan's Against Democracy would force me to reconsider some of the assumptions that I had made about democracy's value and perhaps even modify my position. Hoped.
Having read through a few chapters, I am already disappointed and unsure if the rest of the book is worth my time. Brennan's main assertion is that because some evidence shows that participation in democratic politics has a corrupting influence--that is, participants are not necessarily well informed and often end up becoming more polarized and biased in the process--we would be better off limiting decision-making power to those who have proven themselves sufficiently competent and rational: an epistocracy. Never mind the absurdity of the idea that a process for judging those qualities in potential voters could ever be devised in an apolitical, unbiased, or just way; Brennan does not even begin with a charitable or nuanced understanding of what democracy is or could be.
One early example that exposes the simplicity of Brennan's understanding of democracy--and perhaps even the circularity of his argument--is a thought experiment about child molestation. Brennan asks the reader to consider a society that has deeply deliberated the merits of adults raping children and subjected the decision to a majority vote, with the yeas winning. Brennan claims that because the decision was made in line with proper democratic procedures, advocates of a proceduralist view of democracy must see it as a just outcome. Due to the clear absurdity and injustice of this result, we must therefore reject the view that democratic procedures (e.g., voting, deliberation) themselves are inherently just.
What makes this thought experiment so specious is that Brennan assumes that one relatively simplistic version of a proceduralist, deliberative democracy can represent the whole. Even worse, his assumed model of deliberative democracy--ostensibly not too far from what already exists in most contemporary nations--is already questionably democratic. Not only are majoritarian decision-making and procedural democracy far from equivalent, but Brennan makes no mention of whether or not children themselves were participants in either the deliberative process or the vote, or even would have a representative say through some other mechanism. Hence, in this example Brennan actually ends up showing the deficits of a kind of epistocracy rather than democracy, insofar as the ostensibly more competent and rationally thinking adults are deliberating and voting for children. That is, political decisions about children already get made by epistocrats (i.e., adults) rather than democratically (understood as people having influence in deciding the rules by which they will be governed for the issues they have a stake in). Moreover, any defender of the value of democratic procedures would likely counter that a well-functioning democracy would contain processes to amplify or protect the say of less empowered minority groups, whether through proportional representation or mechanisms to slow down policy or to force majority alliances to make concessions or compromises. It is entirely unsurprising that democratic procedures look bad when one's stand-in for democracy is winner-take-all, simple majoritarian decision-making.
His attack on democratic deliberation is equally short-sighted. Noting, quite rightly, that many scholars defend deliberative democracy with purely theoretical arguments while much of the empirical evidence shows that many average people dislike deliberation and are often very bad at it, Brennan concludes that, absent promising research on how to improve the situation, there is no logical reason to defend deliberative democracy. This is where Brennan's narrow disciplinary background as a political theorist biases his viewpoint. It is not at all surprising to a social scientist that average people would fail to deliberate well, or to enjoy it, when nearly the entirety of contemporary society fails to prepare them for democracy. Most adults have spent 18 years or more in schools, and up to several decades in workplaces, that do not function as democracies but rather are authoritarian, centrally planned institutions. Empirical research on deliberation has merely uncovered the obvious: people with little practice with deliberative interactions are bad at them. Imagine if an experiment put assembly line workers in charge of managing General Motors, then justified the current hierarchical makeup of corporate firms by pointing to the resulting non-ideal outcomes. I see no reason why Brennan's reasoning about deliberative democracy is any less absurd.
Finally, Brennan's argument rests on a principle of competence--and concurrently the claim that citizens have a right to governments that meet that principle. He borrows the principle from medical ethics: a patient is competent if they are aware of the relevant facts, can understand them, appreciate their relevance, and can reason about them appropriately. Brennan quickly sidesteps the obvious objections about how any of the judgments about relevance and appropriateness could be made in non-political ways, merely claiming that the principle is non-objectionable in the abstract. Certainly for the simplified examples that he provides, plumbers unclogging pipes and doctors treating patients with routine conditions, the validity of the principle of competence is clear. However, for the most contentious issues we face (climate change, gun control, genetically modified organisms, etc.), the facts themselves and the reliability of experts are in dispute. What political system would best resolve such a dispute? Obviously it could not be an epistocracy, given that the relevance and appropriateness of the "relevant" expertise is itself the issue to be decided. Perhaps Brennan's suggestions have some merit, but absent a non-superficial understanding of the relationship between science and politics, the foundation of his positive case for epistocracy is shaky at best. His oft-repeated assertion that epistocracy would likely produce more desirable decisions is highly speculative.
I plan on continuing to examine Brennan's arguments regarding democracy, but I find it ironic that his argument against average citizens--that they suffer too much from various cognitive maladies to reason well about public issues--applies equally to Brennan. Indeed, the hubris of most experts is deeply rooted in their unfounded belief that a little learning has freed them from the mental limitations that afflict the less educated. In reality, Brennan is a partisan like anyone else, not a sagely academic doling out objective advice. Whether one turns to epistocratic ideas in light of the limitations of contemporary democracies or advocates for ensuring the right preconditions for democracies to function better comes back to one's values and political commitments. So far it seems that Brennan's book demonstrates his own political biases as much as it exposes the ostensibly insurmountable problems for democracy.
It is hard to imagine anything more damaging to the movements for livable minimum wages, greater reliance on renewable energy resources, or workplace democracy than the stubborn belief that one must be a “liberal” to support them. Indeed, the common narrative that associates energy efficiency with left-wing politics leads to absurd actions by more conservative citizens. Not only do some self-identified conservatives intentionally make their pickup trucks more polluting at high cost (e.g., “rolling coal”), but they will shun energy-efficient—and money-saving—lightbulbs if their packaging touts their environmental benefits. Those on the left often do little to help the situation, themselves seemingly buying into the idea that conservatives must culturally be everything leftists are not and vice versa. As a result, the possibility of allying for common purposes, against a common enemy (i.e., neoliberalism), is forgone.
The Germans have not let themselves be hindered by such narratives. Indeed, their movement toward embracing renewables, which now make up nearly a third of their power generation market, has been driven by a diverse political coalition. A number of villages in the German conservative party (CDU) heartland now produce more green energy than they need, and conservative politicians supported the development of feed-in tariffs and voted to phase out nuclear energy. As Craig Morris and Arne Jungjohann describe, the German energy transition resonates with key conservative ideas, namely the ability of communities to self-govern and the protection of valued rural ways of life. Agrarian villages are given a new lease on life by farming energy alongside crops and livestock, and enabling communities to produce their own electricity lessens the control of large corporate power utilities over energy decisions. Such themes remain latent in American conservative politics, now overshadowed by the post-Reagan dominance of “business friendly” libertarian thought styles.
Elizabeth Anderson has noticed a similar contradiction with regard to workplaces. Many conservative Americans decry what they see as overreach by federal and state governments, but tolerate outright authoritarianism at work. Tracing the history of conservative support for “free market” policies, she notes that such ideas emerged in an era when self-employment was much more feasible. Given the immense economies of scale possible with post-Industrial Revolution technologies, however, the barriers to entry for most industries are much too high for average people to own and run their own firms. As a result, free market policies no longer create the conditions for citizens to become self-reliant artisans but rather spur the centralization and monopolization of industries. Citizens, in turn, become wage laborers, working under conditions far more similar to feudalism than many people are willing to recognize.
Even Adam Smith, to whom many conservatives look for guidance on economic policy, argued that citizens would only realize the moral traits of self-reliance and discipline—values that conservatives routinely espouse—in the right contexts. In fact, he wrote of people stuck doing repetitive tasks in a factory:
“He naturally loses, therefore, the habit of such exertion, and generally becomes as stupid and ignorant as it is possible for a human creature to become. The torpor of his mind renders him not only incapable of relishing or bearing a part in any rational conversation, but of conceiving any generous, noble, or tender sentiment, and consequently of forming any just judgment concerning many even of the ordinary duties of private life. Of the great and extensive interests of his country he is altogether incapable of judging.”
Advocates of economic democracy have overlooked a real opportunity to enroll conservatives in this policy area. Right-leaning citizens need not follow Mike Rowe—a man who ironically garnered a following among “hard working” conservatives by merely dabbling in blue collar work—in mainly bemoaning the ostensible decline in citizens’ work ethic. Conservatives could be convinced that policies supporting self-employment and worker-owned firms would do far more to create the kind of citizenry they hope for than simply shaming the unemployed for apparently being lazy. Indeed, they could become like the conservative prison managers in North Dakota (1), who now recognize that traditionally conservative “tough on crime” legislation is both ineffective and fiscally irresponsible—learning that upstanding citizens cannot be penalized into existence.
Another opportunity has been lost by not constructing more persuasive narratives that connect neoliberal policies with the decline of community life and the eroding well-being of the nation. Contemporary conservatives will vote for politicians who enable corporations to outsource or relocate at the first sign of better tax breaks somewhere else, while they simultaneously decry the loss of the kinds of neighborhood environments that they experienced growing up. Their support of “business friendly” policies had far different implications in the days when the CEO of General Motors would say “what is good for the country is good for General Motors and vice versa.” Compare that to an Apple executive, who baldly stated: “We don’t have an obligation to solve America’s problems. Our only obligation is making the best product possible.”
Yet fights for a higher minimum wage and proposals to limit the destructively competitive processes where nations and cities try to lure businesses away from each other with tax breaks get framed as anti-American, even though they are poised to reestablish part of the social reality that conservatives actually value. Communities cannot prosper when torn asunder by economic disruptions; what is best for a multinational corporation is often not what is best for a nation like the United States. It is a tragedy that many leftists overlook these narratives and focus narrowly on appeals to egalitarianism, a moral language that political psychologists have found (unsurprisingly) to resonate only with other leftists.
The resulting inability to form alliances with conservatives over key economic and energy issues allows libertarian-inspired neoliberalism to drive conservative politics in the United States, even though libertarianism is as incompatible with conservatism as it is with egalitarianism. Libertarianism, by idealizing impersonal market forces, upholds an individualist vision of society that is incommensurable with communal self-governance and the kinds of market interventions that would enable more people to be self-employed or establish cooperative businesses. By insisting that one should “defer” to the supposedly objective market in nearly all spheres of life, libertarianism threatens to commodify the spaces that both leftists and conservatives find sacred: pristine wilderness, private life, etc.
There are real challenges, however, to realizing political coalitions between progressives and conservatives more often, namely divisions over traditionalist ideas regarding gender and sexuality. Yet even this is a recent development. As Nadine Hubbs shows, the idea that poor rural and blue collar people are invariably more intolerant than urban elites is a modern construction. Indeed, studies in rural Sweden and elsewhere have uncovered a surprising degree of acceptance for non-heterosexual people, though rural queer people invariably understand and express their sexuality differently than urban gays. Hence, even for this issue, the problem lies not in rural conservatism per se but with the way contemporary rural conservatism in America has been culturally valenced. The extension of communal acceptance has been deemphasized in order to uphold consistency with contemporary narratives that present a stark urban-rural binary, wherein non-cis, non-heterosexual behaviors and identities are presumed to be only compatible with urban living. Yet the practice, and hence the narrative, of rural blue collar tolerance could be revitalized.
However, the preoccupation of some progressives with maintaining a stark cultural distinction with rural America prevents progressive-conservative coalitions from coming together to realize mutually beneficial policy changes. I know that I have been guilty of this myself. Growing up with left-wing proclivities, I did much of what Nadine Hubbs criticizes middle-class Americans for: I made fun of “rednecks” and never, ever admitted to liking country music. My preoccupation with proving that I was really an “enlightened” member of the middle class, despite being a child of working class parents and only one generation removed from the farm, prevented me from recognizing that I potentially had more in common with rednecks politically than I ever would with the corporate-friendly “centrist” politicians at the helm of both major parties. No doubt there is work to be done to undo all that has made many rural areas into havens for xenophobic, racist, and homophobic bigotry; but that work is no different than what could and should be done to encourage poor, conservative whites to recognize what a 2016 SNL sketch so poignantly illustrated: that they have far more in common with people of color than they realize.
1. A big oversight in the “work ethic” narrative is that it fails to recognize that slacking workers are often acting rationally. If one is faced with few avenues for advancement and is instantly replaced when suffering an illness or personal difficulties, why work hard? What white collar observers like Rowe might see as laziness could be considered an adaptation to wage labor. In such contexts, working hard can reasonably be seen not as the key to success but as the mark of a chump: a person merely harming their own well-being in order to make someone else rich. This same discourse in the age of feudalism would have involved chiding peasants for taking too many holidays.
Few issues stoke as much controversy, or provoke analysis as shallow, as net neutrality. Richard Bennett’s recent piece in the MIT Technology Review is no exception. His views represent a swelling ideological tide among certain technologists that threatens not only any possibility for democratically controlling technological change but any prospect for intelligently and preemptively managing technological risks. The only thing he gets right is that “the web is not neutral” and never has been. Yet current “net neutrality” advocates avoid seriously engaging with that proposition. What explains the self-stultifying allegiance to the notion that the Internet could ever be neutral?
Bennett claims that net neutrality has no clear definition (it does), that anything good about the current Internet has nothing to do with a regulatory history of commitment to net neutrality (something he can’t prove), and that the whole debate only exists because “law professors, public interest advocates, journalists, bloggers, and the general public [know too little] about how the Internet works.”
To anyone familiar with the history of technological mistakes, the underlying presumption that we’d be better off if we just let the technical experts make the “right” decision for us—as if their technical expertise allowed them to see the world without any political bias—should be a familiar, albeit frustrating, refrain. In it one hears the echoes of early nuclear energy advocates, whose hubris led them to predict that humanity wouldn’t suffer a meltdown in hundreds of years, whose ideological commitment to an atomic vision of progress led them to pursue harebrained ideas like nuclear jets and using nuclear weapons to dig canals. One hears the echoes of those who managed America’s nuclear arsenal and tried to shake off public oversight, bringing us to the brink of nuclear oblivion on more than one occasion.
Only armed with such a poor knowledge of technological history could someone make the argument that “the genuine problems the Internet faces today…cannot be resolved by open Internet regulation. Internet engineers need the freedom to tinker.” Bennett’s argument is really just an ideological opposition to regulation per se, a view based on the premise that innovation better benefits humanity if it is done without the “permission” of those potentially negatively affected. Even though Bennett presents himself as simply a technologist whose knowledge of the cold, hard facts of the Internet leads him to his conclusions, he is really just parroting the latest discursive instantiation of technological libertarianism.
As I’ve recently argued, the idea of “permissionless innovation” is built on an (intentional?) misunderstanding of the research on how to intelligently manage technological risks as well as the problematic assumption that innovations, no matter how disruptive, have always worked out for the best for everyone. Unsurprisingly the people most often championing the view are usually affluent white guys who love their gadgets. It is easy to have such a rosy view of the history of technological change when one is, and has consistently been, on the winning side. It is a view that is only sustainable as long as one never bothers to inquire into whether technological change has been an unmitigated wonder for the poor white and Hispanic farmhands who now die at relatively younger ages of otherwise rare cancers, the Africans who have mined and continue to mine uranium or coltan in despicable conditions, or the permanent underclass created by continuous technological upheavals in the workplace not paired with adequate social programs.
In any case, I agree with Bennett’s argument in a later comment to the article: “the web is not neutral, has never been neutral, and wouldn't be any good if it were neutral.” Advocates of net neutrality are, of course, demanding a very specific kind of neutrality: that ISPs not treat packets differently based on where they originate or where they are going. But the idea of net neutrality has taken on a much broader symbolic meaning, one that I think constrains people’s thinking about Internet freedoms rather than enhancing it.
The idea of neutrality carries so much rhetorical weight in Western societies because their cultures are steeped in a tradition of philosophical liberalism: the belief that the freedom of individuals to choose is the greatest good. Even American political conservatives really just embrace a particular flavor of philosophical liberalism, one that privileges the freedoms enjoyed by supposedly individualized actors unencumbered by social conventions or government interference to make market decisions. Politics in nations like the US proceeds with the assumption that society, or at least parts of it, can be composed in such a way as to allow individuals to decide wholly for themselves. Hence, it is unsurprising that changes in Internet regulations provoke so much ire: The Internet appears to offer that neutral space, both in terms of the forms of individual self-expression valued by left-liberals and the purportedly disruptive market environment that gives Steve Jobs wannabes wet dreams.
Neutrality is, however, impossible. As I argue in my recent book, even an idealized liberal society would have to put constraints on choice: People would have to be prevented from making their relationship or communal commitments too strong. As loath as some leftists would be to hear it, a society that maximizes citizens’ abilities for individual self-expression would have to be more extreme than even Margaret Thatcher imagined: composed of atomized individuals. Even the maintenance of family structures would have to be limited in an idealized liberal world.
On a practical level it is easy to see the cultivation of a liberal personhood in children as imposed rather than freely chosen, with one Toronto family going so far as to not assign their child a gender. On the plus side for freedom, the child now has a new choice they didn’t have before. On the negative side, they didn’t get to choose whether or not they’d be forced to make that choice. All freedoms come with obligations, and often some people get to enjoy the freedoms while others must shoulder the obligations.
So it is with the Internet as well. Currently ISPs are obliged to treat packets equally so that content providers like Google and Netflix can enjoy enormous freedoms in connecting with customers. That is clearly not a neutral arrangement, even though it is one that many people (including Google) prefer.
However, the more important non-neutrality of the Internet, one that I think should take center stage in debates, is that it is dominated by corporate interests. Content providers are no more accountable to the public than large Internet service providers. At least since it was privatized in the mid-90s, the Internet has been biased toward fulfilling the needs of business. Other aspirations like improving democracy or cultivating communities, if the Internet has even really delivered all that much in those regards, have been incidental. Facebook wants you to connect with childhood friends so it can show you an ad for a 90s nostalgia t-shirt design. Google wants to make sure neo-nazis can find the Stormfront website so they can advertise the right survival gear to them.
I don’t want a neutral net. I want one biased toward supporting well-functioning democracies and vibrant local communities. It might be possible for an Internet to do so while providing the wide latitude for innovative tinkering that Bennett wants, but I doubt it. Indeed, ditching the pretense of neutrality would enable the broader recognition of the partisan divisions about what the Internet should do, the acknowledgement that the Internet is and will always be a political technology. Whose interests do you want it to serve?
One of the biggest challenges that I think social scientists should be committing themselves to solving is the question of how to enable large-scale social change. Our age is rife with injustices: growing income inequality, an increasingly brutal police-prison-industrial complex, among others. At the same time, these injustices are frustratingly chronic. Positive change, if it has occurred at all, has been ploddingly slow. I think that a big contributor is the unwillingness or inability of average people to imagine change as possible, a necessary condition for them to even begin to advocate for reform. Yet, as anyone who has read the commentary on a critical article on these issues has probably seen, many Americans seem willing to spare no effort in trying to justify the status quo as either inevitable or the best of all possible worlds. As Steve Fraser argues in The Age of Acquiescence, building a more equal society will require attacking and reconceiving the narratives that today prop up the status quo.
Take college sports, arguably one of the most egregiously unjust labor systems in the US. Nowhere else can you find people laboring—indeed, college football is like a full-time job—and inflicting long-term damage to their bodies for little reward. The NCAA generates a billion dollars in revenue, while the players are contractually barred from reaping the fruits of their labor. As others have pointed out, the “NCAA is a plantation, and the players are the sharecroppers.” That many, if not most, of the prospective players hail from poorer, black regions of the country makes the system seem even more destructive. Football combines start to bear an eerie resemblance to the auction block when one reflects on all these similarities.
The response to such observations always seems to be the same: Don’t these players voluntarily sign on the dotted line? Aren’t they free to do otherwise? The rhetoric of choice is one of the most pernicious discourses today, one that is routinely mobilized to prevent people from digging too deep into systematic inequalities. It is a discourse that tries to eliminate deep thinking about the innumerable coercions faced by most people by reframing them all as choices. Consider Paul Ryan’s recent bizarre claim that cuts to Medicaid and the elimination of the ACA wouldn’t eliminate people’s healthcare: Such people would simply be “choosing” not to have it any longer. The transformation of the inability to pay for something into a free choice is just one of the daftest—though politically expedient—outcomes of choice-based rhetoric. In the context of college sports, it ignores that players coming out of the most deprived areas of the country typically have few other opportunities for a college education or many other routes out of poverty. The rhetoric of choice projects the latitude of choice available to only the most affluent citizens onto everyone, regardless of what their lives actually look like.
The case of college sports also illuminates how the mere possibility of success, no matter how infinitesimal, can lead people to tolerate otherwise intolerable circumstances. Compare it to the Black Mirror episode “15 Million Merits.” Work in the society depicted in this episode is unmitigated drudgery: Citizens’ work lives entail endlessly pedaling on stationary bikes. Their only respite comes from a constant connection to an array of entertainment possibilities, and their only hope for a way out lies in winning Hot Shot, an America’s Got Talent-like game show. The metaphor in “15 Million Merits” couldn’t be clearer: Clawing one’s way out of the doldrums of working in what David Graeber has labeled “bullshit jobs” is largely a roll of the dice, dependent on the caprice of those who do have the power to decide. The hosts of Hot Shot sit with an air of superiority, judging who is worthy and who is not—much like a few of the hosts of the show Shark Tank. Like college ball players who must subject their bodies to four years of strain for a shot at an NFL contract, some workers acquiesce to an unjust working arrangement partly because they too are caught up in dreams of getting to be one of the lucky few to strike it rich.
I’m not the first to note that Americans are limited in their ability to think critically about class because of a belief that inequality is okay as long as they have a chance of being on the right side of it. A common quote, routinely misattributed to John Steinbeck, laments how “the poor [in America] see themselves not as an exploited proletariat, but as temporarily embarrassed millionaires.” The underlying narrative that success invariably comes to those who show grit and determination adds to the rhetoric of choice to prevent critical questions about the sources of poverty. I will never forget the panicked look on a student’s face when, in a class discussion about economic fairness, he tried to claim that if he were parachuted into Haiti he would be successful within six months; while uttering something horrible, he nonetheless seemed to be straining under an immense load of cognitive dissonance, attempting to resolve the conflict between a narrative that gave him hope about his own future and its implication that Haitians are somehow poor because they don’t know how to work as hard as middle-class white people.
In any case, also noteworthy in “15 Million Merits” is how those who, for whatever reason, are unable to handle the strain of cycling all day are treated. They are widely abused, distinguished by particular clothing, and targeted for mockery in violent video games and on television game shows—that society’s equivalent of Jerry Springer and Cops. Citizens of this imagined society, much like our own, are partly driven to labor—often to the detriment of their mental and physical well-being—by the fear of being poor and mocked and the belief that perhaps they too can achieve a state of transcendent affluence. Who gives any thought to the hundreds or thousands of student athletes who, once injured, are often deprived of their scholarships? Often not earning a degree, or perhaps earning one that is not worth anything, and carrying a potentially disabling injury, such as cervical spine damage, once-phenomenal athletes on their way to stardom become just more impoverished nobodies, more of the “takers” denigrated in contemporary conservative discourse.
It seems to me that achieving a more just American society will not be possible without the simultaneous demise of these poverty-justifying narratives. Not only will new narratives be necessary, but such narratives will need to be uttered by the right people. As great as it is that attendees of Ivy League universities and participants in urban art collectives have developed counter-narratives to those that today justify status quo inequalities, it seems unlikely that such narratives will ever resonate with average citizens. A recent video by The Onion makes much the same point in satirically depicting a Trump voter whose mind was changed after reading 800 pages of queer feminist theory. In my mind, much of the humanities and social sciences are not worth the paper they have been printed on if they cannot be persuasively conveyed to non-academic—indeed, uneducated—audiences. Unfortunately, many of the academics I know are too busy denigrating Trump voters for being ignorant to consider how things might actually change.
Certainly there are things to like about the March for Science. As you are likely aware, scientists and engineers have a reputation for being politically aloof. I, for one, am glad to see events like it, which run contrary to that stereotype.
The March for Science website describes the event as a nonpartisan call for politicians to recognize that science upholds the public good: in other words, science matters. I want to push those of you reading this post to critically examine this slogan—to treat it as you would any truth claim.
On face value, there seems to be little to disagree with: of course science should matter. Good luck solving any 21st century challenge without it. Hence, I think it is more interesting to ask, “Which science should matter? And how much?” Some of you may find this to be a provocative turn of phrase, because it applies to science a standard definition of politics: that is, politics as any answer to the question “Who gets what, when, and how?”
This is a provocative question because many people, including many scientists and engineers, tend to believe that politics is everything science is not and vice-versa, which in turn supports the idea that advocating for science can be a non-partisan activity, that it can be an apolitical social movement.
To say today that science should matter, but little more than that, could be construed to imply that we ought to continue with science as we had prior to recent electoral results. Such an implication would appear to be rooted in the presumption that science was previously nonpartisan and only recently tainted by political agendas. Is that a wise presumption?
Certainly the current administration’s attempts to excise climate science from NASA and muzzle the EPA can be recognized as political. But what about the historical relationship between science and military applications, running all the way from Archimedes to the United States today—where some $77 billion gets spent on military R&D annually compared to $69 billion on nondefense research? What about the fact that a paltry portion of public research money is dedicated to developing non-toxic alternatives to the suspected and confirmed carcinogens and endocrine disruptors found inside most consumer products, toxins which invariably end up in the environment and, thus, in human bodies? Compare that to the billions that always seem to await every new overhyped and highly risky area of innovation: nano-tech, syn-bio, and so on.
I don’t assume that you will agree with my own valuation of the relative worthiness of these different areas of science, but I hope you can join me in recognizing that such discrepancies in funding and attention do not exist because one area is more scientific than the others.
If historians who can study our time period even exist in 100 years, they will likely find our belief that science is nonpartisan perplexing, to say the least. How could a sophisticated society believe in such an idea when it is obvious that some areas of science matter more than others and some science gets ignored? How could it sustain such a belief when the advantages of military R&D and the harms of toxic consumer products clearly accrue more strongly to some people than others? Some clearly win because of this arrangement, while others lose.
I don’t say this to denigrate science but to denigrate one of the myths that undergirds the political aloofness that is so common among scientists and engineers. My message to you is that you’re already and always partisan. That is a reality that will not disappear simply by not believing in it. Accepting this message, I would argue, is not as destructive as one might believe at first. Rather, I think it is freeing: it enables one to act more wisely in the world, rather than be misguided by a “flat Earth theory” of politics. There is no abyss to fall into wherein one ceases to be scientific, in turn becoming political. One is already and always both.
Therefore, it is not a question of whether science and engineering are partisan, but a question of what kind of partisans scientists and engineers should be: self-conscious ones or ones asleep at the wheel? What kind of technoscientific world will you be a partisan for? Which science should matter? And how much?
It is an understatement to say that the case of Anna Stubblefield is merely controversial. Opinions of the former Rutgers professor, who was recently sentenced to some ten-odd years in prison on the charge of sexually assaulting a disabled man, are highly polarized. When reading comments on recent news stories on the case, one finds not only people who find her absolutely abhorrent but also people who empathize with her or support her side. No doubt there are important issues to consider regarding the rights of disabled persons, professional ethics, racism, and the nature of consent. However, I want to focus on how the framing of the case as a battle between science and pseudoscience prevents us from sensibly dealing with the politics underlying the issue.
The case is strongly shaped by a broader dispute over the scientific status of “facilitated communication” (FC), a technique claimed by its advocates to allow previously voiceless people with cerebral palsy or autism to speak. As its name suggests, a facilitator helps guide the disabled person’s hand to a keyboard. In the most favorable reading of the practice, the facilitator simply balances out the muscle contractions and lessens the physical barriers to typing. Some see the practice, however, as more than mere assistance: they claim that the facilitator is the one really doing the typing, either consciously or unconsciously. In the former case, FC is a wonderful gift for those suffering from disabilities and their families. In the latter reading, facilitators are charlatans, utilizing a pseudoscientific technique to deceive people.
This latter view seems to have won out in the case of Anna Stubblefield, who claims that DJ—a man with profound physical and suspected mental disabilities—consented to have sex with her via FC. The court ruled that FC did not meet the state’s standards for science. Hence, Stubblefield was unable to mount much of a defense vis-à-vis FC.
Most people fail to grasp, however, exactly how hard it is to distinguish science from pseudoscience—despite whatever popularizers like Neil deGrasse Tyson or Bill Nye seem to claim. Science does not simply produce unquestionable facts; rather, it is a skilled practice, and its capacity to prove truth is always partial, seen far better in hindsight than in the moment. As science and technology studies scholars well illustrate, experiments are incredibly complex—only becoming more so when their results are controversial. The fact that many scientific activities are heavily dependent on the skill of the scientist is on the one hand obvious, but nevertheless eludes most people.
Mid-20th century experiments attempting to transfer memories (e.g., fear of the dark, how to run a maze) between planarian worms or mice exemplify this facet of science. Skeptical and supportive scientists went back and forth incessantly over methodological disagreements in trying to determine whether the observed effects were “real,” eventually considering more than 70 separate variables as possible influences on the outcome of memory transfer experiments. Even though some skeptical scientists derided skill-based variables as a so-called “golden hands” argument, there are plenty of areas of science where an experimentalist’s skill makes or breaks an experiment. Biologists, in particular, frequently lament the difficulty of keeping an RNA sample from breaking down or find themselves developing fairly eccentric protocols for getting “good” results out of a Western Blot or bioassay experiment. What some will view as ad-hoc “golden hands” excuses are often simply facets of doing a complex and highly sensitive procedure.
A similar dispute over the role of the skill of the practitioner makes FC controversial. After rosy beginnings, skeptical scientists produced results that cast doubt on the technique. Experiments attempted to duplicate text generated with the help of a disabled person’s usual facilitator using a “naïve” facilitator, or asked questions to which the facilitator could not know the answer. Indeed, just such an experiment was conducted with DJ, for which both sides claimed victory (Jeff McMahan and Peter Singer, for instance, argue that DJ is more cognitively able than the prosecution would have one believe). As has been the case for other controversial scientific phenomena, FC only becomes more complex the more deeply one looks into it. Advocates of the method raise their own doubts about studies claiming to disprove the technique’s effectiveness, contending that facilitation requires skills and sensitivities unique to the person being facilitated and that the stressfulness of the testing environment skews the results in the favor of skeptics. There is enough uncertainty surrounding the abilities of those with autism or cerebral palsy to make reasonable arguments either way. Given our inability to see into the minds of people so disabled, both sides of the debate end up speaking for them in light of indirect observations.
Again, my point is not to try to argue one way or another for FC but to merely point out that the phenomenon under consideration is immensely complex; we simplify it only at our peril.
Indeed, the history of science and technology provides plenty of evidence suggesting that we are better off acknowledging that even today’s best science is unlikely to provide sure answers to a controversial debate. Advocates of nuclear energy, for instance, once claimed that their science proved that an accident was a near impossibility, happening perhaps once in ten thousand years. Similarly, some petroleum geology experts have claimed that it is physically impossible for fracking to introduce natural gas and other contaminants to water supplies: there is simply too much rock in between. Yet, an EPA scientist has recently produced fairly persuasive evidence to the contrary. “Settled science” rhetoric has mainly served to shut down inquiry, and the discovery of contrary findings in ensuing decades only adds support to the view that reaching something like scientific certainty is a long and difficult struggle. As a result, scientific controversies are often as much settled politically as scientifically: they are as much battles of rhetoric as facts.
Rather than pretend that absolute certitude is possible, what if we proceeded with controversial practices like FC guided by the presumption that we might be wrong about them? What if we assumed that the method could possibly work, perhaps for a very small percentage of autistic people and those born with severe cerebral palsy, but that we are limited in our ability to know for whom it works? Moreover, self-deception, which many believe Anna Stubblefield fell prey to, remains a pervasive risk. The situation then changes dramatically. Rather than commit oneself to the idea that something is either pure truth or complete pseudoscience, the issue can be framed in terms of risk: given that we may be wrong, who might enjoy which benefits and suffer which harms? How many cases of sham communication via FC balance out the possibility of a non-communicative person losing their voice? In other words, do we prefer false positives or false negatives?
Such a perspective challenges people to think more deeply about what matters with respect to FC. Surely the prospect of disabled people being abused or killed because of communication that originates more with the facilitator than with the person being facilitated is horrifying. Yet, on the other hand, Daniel Engber describes meeting families who feel that FC has been a godsend. Even in the scenario in which FC provides only a comforting delusion, is anyone being harmed? A philosophy professor I once knew remarked that he'd take a good placebo over nothing at all any day of the week. What grounds do we have for depriving people of a controversial (even potentially fictitious) treatment if it is not too harmful and potentially increases the well-being of at least some of the people involved? I don't have an answer to these questions, but I do know that we cannot begin to debate them if we hide behind a simplistic partitioning of all knowledge into either science or pseudoscience, pretending that such designations can do our politics for us.
Adam Nossiter has recently published in the New York Times a fascinating look at the decline of small and midsize French cities. I recommend not only reading the article but also perusing the comments section, for the latter gives some insight into the larger psycho-cultural barriers to realizing thicker communities.
Nossiter's article is a lament over the gradual economic and social decline of Albi, a small city of around 50,000 inhabitants not far from Toulouse. He is troubled by the extent to which the once vibrant downtown has become devoid of social and economic activity, apart, that is, from the periodic influx of tourists drawn by its rustic charm as a medieval-era town. Nossiter's piece, however, is not a screed against tourists; rather, he notes that the large proportion of visitors can prevent one from noticing that the town itself now has few amenities to offer locals: it has a single bakery and no local butcher, grocery, or cafe. Residents meet their needs at supermarkets and malls on the outskirts of town.
One might be tempted to dismiss Nossiter's concerns as mere "nostalgia" in the face of "real progress." Indeed, many of those commenting on the article do just that, suggesting that young people want the exciting nightlife offered by nearby metropolises and that local shops are relics of the past, destined to be destroyed by the ostensibly lower prices and greater efficiency of malls and big-box stores.
It is unwise, however, to do so if one wishes to think carefully and intelligently about the issue. Appeals to progress and inevitability are not so much statements of fact (the evidence to back them up is quite limited) as rhetorical moves meant to shut down debate; their aim, intentionally or not, is to naturalize a process that is actually sociopolitical.
If France is at all like the United States, and I suspect it is, the erection of malls was nothing preordained but the product of innumerable policy decisions and failures of foresight. So contingent was the outcome on these variables that it seems obtuse to claim it was simply the result of giving consumers what they wanted. Readers interested in the details can look forward to my soon-to-be-released book Technically Together (MIT Press). For the purposes of this post I can only summarize a few of the ways in which downtown economic decay is not inevitable.
The ability of a big-box store or mall to turn a profit depends on far more than the owner's business acumen. Such stores are attractive only to the extent that governments spend public funds to make them easy to get to. Indeed, big-box prices are low enough to attract Americans because of the invisible subsidy provided by citizens' tax dollars in building roads and highways. Many, if not most, malls and big-box stores were built with public funds, either through favorable tax deductions offered by municipalities or through schemes like tax-increment financing. Lacking the political clout of the average corporate retailer, a local butcher is unlikely to receive the same deal.
Other forms of subsidy are more indirect. Few shoppers factor in the additional costs of gasoline or car repairs when pursuing exurban discount shopping. Given AAA's estimate that driving costs in excess of ten thousand dollars per year, the full cost of a ten-mile drive to the mall is significant, even if it is not salient to consumers; indeed, it is forgotten by the time they arrive at the register. Moreover, what about the additional health care costs incurred by driving rather than walking, or the psychic costs of living in areas that no longer offer embodied community? Numerous studies have found that local community is one of the biggest contributors to a long life and a spry old age. It seems unlikely to be mere coincidence that Americans have become increasingly medicated against psychological disorders as their previously thick communities have fragmented into diffuse social networks. While these costs do not factor into the prices consumers enjoy via discount exurban shopping, citizens still pay them.
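To make the arithmetic concrete, here is a quick back-of-the-envelope sketch. The ten-thousand-dollar annual figure follows the AAA estimate mentioned above; the annual-mileage figure is my own assumption, roughly in line with typical American driving habits.

```python
# Back-of-the-envelope arithmetic: the hidden per-trip cost of driving.
# Both figures below are illustrative assumptions, not exact AAA numbers.
ANNUAL_COST_DOLLARS = 10_000   # all-in yearly cost of owning and driving a car
ANNUAL_MILES = 15_000          # assumed miles driven per year

cost_per_mile = ANNUAL_COST_DOLLARS / ANNUAL_MILES  # roughly $0.67 per mile
round_trip_miles = 2 * 10                           # ten miles each way to the mall
trip_cost = cost_per_mile * round_trip_miles

print(f"Cost per mile: ${cost_per_mile:.2f}")
print(f"Full cost of one mall trip: ${trip_cost:.2f}")
```

On these assumptions, a single trip to the mall costs on the order of thirteen dollars in ownership and operating expenses alone, which can easily erase the discount on the goods being purchased.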
Despite the fact that these sociopolitical drivers are fairly obvious if one takes the time to think about them, "just so" stories that explain the status quo as the inexorable march of progress remain predominant. Psychologists have theorized that the power of such stories results from the intense psychological discomfort many people would feel if faced with the possibility that the world as they know it is unjust or was arrived at via less-than-fair means. Progress narratives are just one of the ways in which citizens psychically shore up an arbitrary and, in the view of many, undesirable status quo. Indeed, Americans, as well as Europeans and others to an increasing extent, seem to have an intense desire to justify the present by appealing to abstract "market forces."
Yale political economist Charles Lindblom argued that the tendency of citizens to reason their way into believing that what is good for economic elites is good for everyone was one of the main sources of business's relatively privileged position in society. In fact, many people go so far as to talk as if the market were a dangerous but nonetheless productive animal that must be placated with favorable treatment and a long leash, apparently not realizing that acting in accordance with such logic makes the market system seem less like a beacon of freedom and more like a prison. One thing remains certain: as long as citizens think and act as if changes like the economic decline of downtowns in small cities are merely the price of progress, it will be impossible to do anything but watch them decay.
Taylor C. Dotson is an associate professor at New Mexico Tech, a Science and Technology Studies scholar, and a research consultant with WHOA. He is the author of Technically Together: Reconstructing Community in a Networked World. Here he posts his thoughts on issues mostly tangential to his current research.