We’ve all got one: that relative, friend, or social media acquaintance who thinks the danger from Covid-19 is overhyped and the real danger is to the economy. I’m less bothered by this than by the response: endless lamenting about people’s inability to respect the experts and listen to the facts. “If only people recognized epidemiological truths when they saw them!” has become a common refrain among my friends and numerous media pundits. People should really quit acting as if sharing videos of handwashing techniques and flattened incidence curves will bring everyone else on board. Actually achieving greater compliance with social distancing and lock-downs will require taking a far different tack.
When we misunderstand disagreement as a product of a deficit in truth, we miss all the ways that it is really rooted in matters of trust. When people fail to accept what we ourselves might see as an obvious fact, we are more likely to denigrate them as ignorant or brainwashed. There’s no shortage of handwringing on Twitter about the MAGA-hat-wearing fools who are less concerned about the need to take precautions. But in understanding the issue as a battle between fools and those of us ostensibly enlightened enough to hang on the CDC director’s every word, we lose the ability to actually understand and sway skeptics. If anything, we push them further away.
In framing skepticism as a matter of scientific literacy, we forget that the risk is actually very uncertain, distant, and intangible for most people—a difficulty the pandemic shares with challenges like climate change. Even epidemiological experts have been at odds in the face of the considerable complexities and uncertainties of the novel disease; some even debate whether we actually know that the mortality rate is significantly worse than the flu’s and question the practicality of a long-term social lock-down.
Furthermore, most of us are not seeing the harms in front of our faces, and demands that we pursue social distancing can seem irrational when much of the rest of everyday life appears unchanged. It won’t be clear for years whether our actions were overly cautious, just right, or not stringent enough—if ever. Health officials are really asking quite a lot from people: ignore what you see, trust us and our models to know what’s best…even if this goes on for months.
That some people more readily trust health officials and the exhortations of heads of state says more about their underlying moral framework than their intelligence. As Jonathan Haidt’s work uncovers, most liberals’ politics is guided by a single principle: care for the least advantaged. Calls to social distance are steeped in this morality, asking people to take precautionary action to preserve precious medical resources for the elderly, the immunocompromised, and others. So it comes as no surprise that liberals are the most convinced that the pandemic is worrisome; the problem already fits neatly into their preferred moral universe.
It isn’t so much that the people who do take Covid-19 more seriously are more scientifically literate. Rather, the vast majority of them had likely already bought in before they ever saw a meme about “flattening the curve.” Although there are likely exceptions, most people’s concerns precede their scientific literacy: if the more precautionary are more scientifically literate, it is because they accepted the crisis as a legitimate one in the first place and then sought out scientific counsel. Do you know anyone who waited for high school biology or earth science to form an opinion about abortion or climate change? Do you know very many people whose opinions have been significantly altered in the short term by taking in new scientific information? I don’t, and I know a lot of very well-educated people.
The idea that simply listening to the epidemiologists (or climate scientists or…) will end petty politicking and lead to objectively correct actions and policies just doesn’t jibe with how people think. I suspect it is embraced more for relieving our anxieties than for accomplishing anything productive. The belief that science can swoop in to establish order in a chaotic and conflictual world is a comforting one. I don’t necessarily begrudge anyone who seeks out its comforts, but the belief drives ill-conceived political communication regardless.
The lesson for coronavirus is the same as for other scientific crises: Don’t expect people to swallow your moral framework along with your advice; meet them within their own. Better yet, leverage already trusted authority figures, like pastors and conservative television hosts.
I won’t pretend to know exactly what is going through the mind of coronavirus skeptics, but there are a few already visible threads that we should follow. Many skeptics argue that the prospect of a global economic crisis is more salient and important to them than the actual deaths that might manifest—rightly or wrongly. That some people still venture out, despite mandated social distancing, is not so much carelessness per se but caring about different things than the rest of us.
Smart pandemic policy would seek to limit the extent to which these economic worries undo social precautions. Apart from the massive economic stimuli on the table, states would be wise to pair limitations on pubs and cafes with a relaxing of laws on the delivery of alcoholic beverages. And many restaurants are surrounded by so much parking that they could function as drive-ins without too much difficulty, if made legally permissible. Similar temporary changes to regulations could enable other brick-and-mortar institutions to still do some business rather than none at all.
Discovering the right moral intuitions to evoke could be done right now, using the same research techniques that marketers already use to fashion more persuasive ads. We should appeal not just to the moral intuition of care but also to loyalty, sanctity, and other ideals. Abstract appeals to the country’s limited stock of ventilators and intensive care beds would be more fruitful if supplemented by other messages: respecting social distancing is an act of loyalty to one’s older relatives, it is an exercise in patriotic togetherness against an invading disease (hopefully without also evoking xenophobia), or it reflects the truism that cleanliness is next to godliness.
Just because we are working to prevent the worst-case scenario of an epidemiological model doesn’t mean we have to also embrace the epidemiologist’s stripped-down moral accounting. Realizing the best possible outcome from this pandemic may rely on us doing anything but.
Despite the barriers to community presented by suburban sprawl, the distraction of digital devices, and a pervasive culture of individualism, people do regularly collaborate to create pockets of togetherness. Mike Lanza’s Playborhood is an important reminder that ordinary people do have the ability to incrementally realize more communal lives for their children.
Although I briefly encountered Lanza’s work as I was doing the research for my first book, Technically Together, only recently did I give it a careful read. Lanza is a staunch advocate of encouraging and supporting free play: getting kids more often away from screens and out of overly structured activities (the endless shuttling between sports and piano lessons) and letting them decide for themselves how to spend their (relatively unsupervised) time. He champions carving out space in neighborhoods for children to structure their own outdoor play spaces and recounts how he and his wife have done so on their own Bay Area block, installing water features, sandboxes, and trampolines in their yard and giving local kids permission to use them whenever they want.
Lanza’s book and motivation no doubt stem from nostalgia for the childhood he enjoyed in a Pittsburgh suburb in the 1960s and 70s. But Playborhood doesn’t simply dwell on a lost past; it focuses on what groups of motivated citizens are doing today, covering efforts in New Urbanist neighborhoods, in cohousing arrangements, and elsewhere.
In contrast to a New York Times profile on Lanza’s work, I did not find any evidence of mom-bashing or unawareness of his own privilege in Playborhood.[i] Lanza is a relatively well-to-do white guy. When he describes how he has prepared his sons to ride to school by themselves, he admits that sometimes their nanny rides with them. Any limitations in his perspective come from the fact that he writes from his own personal standpoint: his own middle-class childhood and those of his young sons. The failure to say enough about how girls and others fit into a playborhood is more a sin of omission than commission, and a fairly understandable one at that.
That is not to say that Lanza doesn’t include diverse cases. One of his main examples is Lyman Place, a road in the Bronx that turns into a car-free play street every summer. One case study may not be enough to convince many readers that playborhoods will spread beyond more affluent residential areas in the near future, but it at least shows that Lanza is making the effort to cast a wide net.
Yet one should not have unfair expectations for works like Playborhood. The book serves as a sort of how-to guide and provides inspiration for concerned parents. It is not a systematic sociological study of free play. While it is clear that Lanza has read widely on the subject—he references Ray Oldenburg and Jane Jacobs—readers looking for insight on the broader structural changes that would make things like playborhoods more the norm rather than the exception will prefer Adrian Voce’s Policy for Play or my own Technically Together. No doubt there is a lot to say about making free play and more communal child rearing feasible for a greater portion of humanity, but I don't think we should expect books like Playborhood to do that kind of work.
Surveying my own street, I find the prospect of a street-level playborhood for my two-year-old son both exciting and discouraging. The closest thing to an already existing playborhood in my town is “Faculty Hill,” a pocket of largely unaffordable homes tucked next to my university’s golf course. Purchasing a home that was within walking distance of work and also within my price range meant buying on a road dominated by college student rentals. Yet my street is also relatively free of car traffic, and my corner-lot backyard seems likely to be compatible with whatever plans my son and local youth eventually dream up.
Still, part of me wonders if the broader barriers will loom too large. Perhaps my street simply lacks a sufficient density of children. Maybe other parents won’t be persuaded by my case for the value of free play. And one of my neighbors has already warned me to keep my kid “out of the street”—ostensibly to save him the trouble of having to watch out for little ones when driving his big truck down it—which foreshadows future conflicts.
Yet one never knows what latent needs and desires may lie just under the surface. When looking at any suburban street, I always wonder: What percentage of houses have lonely people in them at this moment, people sitting in their homes wishing they enjoyed more local togetherness but not knowing how or too discouraged to seek out a community beyond their front door? Books like Playborhood remind us that often the biggest barrier is belief. Small groups of dedicated people can sometimes overcome all the barriers and change their neighborhoods for the better. All it takes is someone to get the ball rolling.
[i] I regret taking this profile at face value in Technically Together. It seems to have exaggerated Lanza’s perspective, if not wholly distorted the position he lays out in Playborhood.
Back during the summer, Tristan Harris sparked a flurry of academic indignation when he suggested that we needed a new field called “Science & Technology Interaction,” or STX, which would be dedicated to improving the alignment between technologies and social systems. Tweeters were quick to accuse him of “Columbizing,” claiming that such a field already existed in the form of Science & Technology Studies (STS) or similar academic departments. So ignorant, amirite?
I am far more sympathetic. If people like Harris (and earlier Cathy O’Neil) have been relatively unaware of fields like Science and Technology Studies, it is because much of the research within these disciplines is mostly illegible to non-academics, not all that useful to them, or both. I really don’t blame them for not knowing. I am an STS scholar myself, and the tables of contents of most issues of my field’s major journals don’t exactly inspire me to read further.
And in fairness to Harris and contrary to Academic Twitter, the field of STX that he proposes does not already exist. The vast majority of STS articles and books dedicate single digit percentages of their words to actually imagining how technology could better match the aspirations of ordinary people and their communities. Next to no one details alternative technological designs or clear policy pathways toward a better future, at least not beyond a few pages at the end of a several-hundred-page manuscript.
My target here is not just this particular critique of Harris, but the whole complex of academic opiners who cite Foucault and other social theory to make sure we know just how “problematic” non-academics’ “ignorant” efforts to improve technological society are. As essential as it is to try to improve upon the past in remaking our common world, most of these critiques don’t really provide any guidance for what steps we should be taking. And I think that if scholars are to be truly helpful to the rest of humanity they need to do more than tally and characterize problems in ever more nuanced ways. They need to offer more than the academic equivalent of fiddling while Rome burns.
In the case of Harris, we are told that underlying the more circumspect digital behavior that his organization advocates is a dangerous preoccupation with intentionality. The idea of being more intentional is tainted by the unsavory history of humanistic thought itself, which has been used for exclusionary purposes in the past. Left unsaid is exactly how exclusionary or even harmful it remains in the present.
This kind of genealogical takedown has become cliché. Consider how one Gizmodo blogger criticizes environmentalists’ use of the word “natural” in their political activism. The reader is instructed that, because early Europeans used the concept of nature to prop up racist ideas about Native Americans, the term is now inherently problematic and baseless. The reader is supposed to conclude from this genealogical problematization that all human interactions with nature are equally natural or artificial, regardless of whether we choose to scale back industrial development or to erect giant machines to control the climate.
Another common problematization is of the form “not everyone is privileged enough to…,” and it is often a fair objection. For instance, people differ in their individual ability to disconnect from seductive digital devices, whether due to work constraints or the affordability or ease of alternatives. But differences in circumstances similarly challenge people’s capacity to affordably see a therapist, retrofit their home to be more energy efficient, or bike to work (and one might add to that: read and understand Foucault). Yet most of these actions still accomplish some good in the world. Why is disconnection any more problematic than any other set of tactics that individuals use to imperfectly realize their values in an unequal and relatively undemocratic society? Should we just hold our breath for the “total overhaul…full teardown and rebuild” of political economies that the far more astute critics demand?
Equally trite are references to the “panopticon,” a metaphor that Foucault developed to describe how people’s awareness of being constantly surveilled leads them to police themselves. Being potentially visible at all times enables social control in insidious ways. A classic example is the Benthamite prison, where a solitary guard at the center cannot actually view all the prisoners simultaneously, but the potential for him to be viewing a prisoner at any given time is expected to reduce deviant behavior.
This gets applied to nearly any area of life where people are visible to others, which means it is used to problematize nearly everything. Jill Grant uses it to take down the New Urbanist movement, which aspires (though fairly unsuccessfully) to build more walkable neighborhoods that are supportive of increased local community life. This movement is “problematic” because the densities it demands mean that citizens are everywhere visible to their neighbors, opening up possibilities for the exercise of social control. Whether any other way of housing human beings would avoid resulting in some form of residential panopticon is not exactly clear, except perhaps designing neighborhoods so as to prohibit social community writ large.
Further left unsaid in these critiques is exactly what a more desirable alternative would be. Or at least that alternative is left implicit and vague. For example, the pro-disconnection digital wellness movement is in need of enhanced wokeness, to better come to terms with “the political and ideological assumptions” that they take for granted and the “privileged” values they are attempting to enact in the world.
But what does that actually mean? There’s a certain democratic thrust to the criticism, one that I can get behind. People disagree about what “the good life” is and how to get there, and any democratic society would be supportive of a multitude of visions. Yet the criticism that the digital wellness movement centers on one vision of “being human,” one emphasizing mindfulness and a capacity for circumspect individual choice, rings hollow without the critics themselves showing us what should take its place. Whatever the flaws with digital wellness, it is not as self-stultifying as the defeatist brand of digital hedonism implicitly left in the wake of academic critiques that offer no concrete alternatives. Perhaps it is unfair to expect a full-blown alternative; yet few of these critiques offer even an incremental step in the right direction.
Even worse, this line of criticism can problematize nearly everything, losing its rhetorical power as it is over-applied. Even academia itself is disciplining. STS has its own dominant paradigms, and critique is mobilized in order to mold young scholars into academics who cite the right people, quote the correct theories, and support the preferred values. My success depends on me being at least “docile enough” in conforming myself to the norms of the profession.
I also exercise self-discipline in my efforts to be a better spouse and a better parent. I strive to be more intentional when I’m frustrated or angry, because I too often let my emotions shape my interactions with loved ones in ways that do not align with my broader aspirations. More intentionality in my life has been generally a good thing, so long as my expectations are not so unrealistic as to provoke more anxiety than the benefits are worth. But in a critical mode where self-discipline and intentionality automatically equate to self-subjugation, how exactly are people to exercise agency in improving their own lives?
In any case, advocating devices that enable users to exercise greater intentionality over their digital practices is not a bad thing per se. Citizens pursue self-help, meditate, and engage in other individualistic wellness activities because the lives they live are constrained. Their agency is partly circumscribed by their jobs, family responsibilities, and incomes, not to mention the more systemic biases of culture and capitalism. Why is it wrong for groups like Harris’ center to advocate efforts that largely work within those constraints?
Yet even that reading of the digital wellness movement seems uncharitable. Certainly Harris’ analysis lacks the sophistication of a technology scholar’s, but he has made it obvious that he recognizes that dominant business models and asymmetrical relations of power underlie the problem. To reduce his efforts to mere individualistic self-discipline is borderline dishonest, though he no doubt emphasizes the parts of the problem he understands best. Of course, it will likely take more radical changes than Harris advocates to realize humane technology, but it is not totally clear whether individualized efforts necessarily detract from people’s ability or willingness to demand more from tech firms and governments (i.e., are they like bottled water and other “inverted quarantines”?). At least that is a claim that should be demonstrated rather than presumed from the outset.
At its worst, critical “problematizing” presents itself as its own kind of view from nowhere. For instance, because the idea of nature has been constructed in various biased ways throughout history, we are supposed to accept the view that all human activities are equally natural. And we are supposed to view that perspective as if it were itself an objective fact rather than yet another politically biased social construction.
Various observers mobilize much the same critique about claims regarding the “realness” of digital interactions. Because presenting the category of “real life” as apart from digital interactions is beset with Foucauldian problematics, we are told that the proper response is to no longer attempt the qualitative distinctions that the category can help people make—whatever its limitations. It is probably no surprise that the same writer wanting to do away with the digital-real distinction is enthusiastic in their belief that the desires and pleasures of smartphones somehow inherently contain the “possibility…of disrupting the status quo.” Such critical takes give the impression that all technology scholarship can offer is a disempowering form of relativism, one that only thinly veils the author’s underlying political commitments.
The critic’s partisanship is also frequently snuck in through the backdoor by couching criticism in an abstract commitment to social justice. The fact that the digital wellness movement is dominated by tech bros and other affluent whites is taken to imply that it must be harmful to everyone else—a claim made by alluding to some unspecified amalgamation of oppressed persons (women, people of color, or non-cis citizens) who are insufficiently represented. It is assumed but not really demonstrated that people within the latter demographics would be unreceptive to or even damaged by Harris’ approach. But given the lack of actual concrete harms laid out in these critiques, it is not clear whether the critics are actually advocating for those groups or whether the social-theoretical existence of harms to them is just a convenient trope to make a mainly academic argument seem as if it actually mattered.
People’s prospects for living well in the digital age would be improved if technology scholars more often eschewed the deconstructive critique from nowhere. I think they should instead act as “thoughtful partisans.” By that I mean that they would acknowledge that their work is guided by a specific set of interests and values, ones that benefit particular groups.
It is not an impartial application of social theory to suggest that “realness” and “naturalness” are empty categories that should be dispensed with. And a more open and honest admission of partisanship would at least force writers to be upfront with readers regarding what the benefits would actually be to dispensing with those categories and who exactly would enjoy them—besides digital enthusiasts and ecomodernists. If academics were expected to use their analysis to the clear benefit of nameable and actually existing groups of citizens, scholars might do fewer trite Foucauldian analyses and more often do the far more difficult task of concretely outlining how a more desirable world might be possible.
“The life of the critic is easy,” notes Anton Ego in the Pixar film Ratatouille. Actually having skin in the game and putting oneself and one’s proposals out in the world where they can be scrutinized is far more challenging. Academics should be pushed to clearly articulate exactly how the novel concepts, arguments, observations, and claims they spend so much time developing actually benefit human beings who don’t have access to Elsevier or who don’t receive seasonal catalogs from Oxford University Press. Unless they do so, I cannot imagine academia having much of a role in helping ordinary people live better in the digital age.
Quartier Vauban in Freiburg, Germany seems to be everything that planned neighborhoods in North America are not. Compact. Supportive of walking and biking. And green. Not only does the neighborhood, which houses some five and a half thousand people, enjoy one of the lowest rates of car ownership and highest percentages of passive-energy housing in Europe, but many of the quarter’s streets are also essentially “car-free.” Automobile owners even pay the construction costs of the parking garage spot (about 17,000 euros) where they are required to store their vehicles. But what’s most notable is not simply the successes of the development, but the process that participants used to get there.
Continue reading at Strong Towns...
I am an educator, and I also hated school. That might seem to be a contradiction at first, but I hope (for my own sake) that it does not have to be one. The fact that nearly every student and faculty member I have known begins counting down the weeks until winter or summer break right after the first day of classes signals to me that something is deeply wrong with what we call education. If it is actually the life-enriching, citizen-building experience that it is often touted to be, why does almost everyone directly involved in teaching or learning seem to merely tolerate it—if not actively dislike it?
I can count the classes that I have actually enjoyed on one hand. For the most part, public school and university courses felt like arduous slogs. Sure, there would be brief moments of inspiration, insight, and reflection, but I mostly focused on completing my assignments as quickly, and with as little effort, as possible. I doubt that I am alone in having treated classes as mostly annoyances on the path to a degree. I hated having to divvy up my curiosity, temporarily pretending to care about whatever it was that my teacher thought I should be interested in.
Even though I eventually earned a PhD in Science and Technology Studies (STS) and now teach courses in that area, I found being a graduate student to be a largely miserable experience. I only woke up excited to go to my office after I completed my coursework, when I could dedicate most of my time to doing research on my own and sometimes under the guidance of my advisor.
The only class that I did enjoy during my studies involved doing largely self-guided group projects. The professors taught us a few tools and then mostly stepped back, letting student groups find their own way through their research and mathematical modeling projects, intervening only when asked to or after groups presented their findings.
Maybe I simply forgot all this once I became a professor. Perhaps I wanted to believe that the STS topics that fill my classes would have been different, or were inherently more interesting and engaging. I do treasure the students who actively engage with my course material beyond the minimum expectations, but most do not. And I have largely failed in my attempts to teach my courses more like the one class that I actually enjoyed and less like all the others that I have fortunately forgotten.
Underlying this failure has been the stubborn belief that I can control my students’ learning. If only I properly incentivized them with grades, set up assignments just right, or made class lectures entertaining enough, perhaps I could get most of them to invest in their own education beyond merely fulfilling syllabus requirements. I have a hard time letting go of the belief that I can make them really learn.
Yet my own experiences tell me that this pedagogy of control is destined to be an abysmal failure. I have forgotten most of the mathematics that I learned over the course of a bachelor’s and a master’s degree. All that remains is a knack for working through quantitative puzzles, usually after surveying Wikipedia or Wolfram to relearn forgotten techniques.
Even this knack has not been worth much. When I interned as a statistician, my supervisor asked me to help him decide which would be the best algorithm to help categorize some “big data” that he had. “Shit,” I thought, “I don’t know how to do that.” My supervisor wanted judgement and deep understanding from me, but I had spent the bulk of my time in university plugging and chugging through calculations, writing up relatively simple programs for canned (rather than real-world) problem sets, and generally avoiding doing too much learning. And I was even an “A” student.
To scholars of decision making and high performance, none of this is surprising. Barry Schwartz and Kenneth Sharpe have written a great deal about phronesis, or “practical wisdom,” and how important, but hard won, it is. People become excellent leaders, doctors, judges, or teachers only by being afforded opportunities to make mistakes, and then being encouraged and enabled to reflect and act upon those errors. They never have a chance to become practically wise when mandatory sentencing requirements preclude exercising judgement, when state-prescribed testing and preparation eats up too much classroom time, or economic incentives prevent doctors from spending the time necessary to talk with and care about patients.
The failure of nuclear energy in the United States, for instance, has been partly caused by the American regulatory system’s tendency to legalistically and tediously outline point-by-point design specifications in an effort to enforce safety. The specifications for a nuclear reactor in the US run to several thousand pages, compared to mere tens of pages in other countries. Whatever its benefits, the unintended consequence of a top-down, control-oriented system is that American nuclear reactor builders have come to believe that safety is accomplished by following the letter of the law, not by thinking.
Of course, there is a reason why the American regulatory system overprescribes and leaves no room for ongoing learning, adaptation, and innovative solutions to safety problems: a lack of trust. An antagonistic relationship between industry and government drives our rule-based system, which in turn prevents a more cooperative back-and-forth. Government bureaucrats focus on controlling industry, whom they believe would do nothing right without intense supervision. Industry, in turn, often acts little differently from contemporary college students: doing the bare minimum and even subverting the rules when advantageous. Wherever learning is supposed to happen, too much control crowds out initiative.
Recognizing this does not mean we should take a “hands off” approach, whether in education or industrial regulation. In my mind, the advocates of “permissionless innovation,” which prescribes that governments never proactively attempt to avert or mitigate the unintended negative consequences of technological change, are complete fanatics. Neither would it be sensible to give students no direction at all with regard to their education, especially given that public schooling has inadvertently trained them to treat learning as a chore to be gotten over with as soon as possible or to think of degrees as merely signaling how hardworking or smart they are. Instructors have good reason not to entrust today’s college students with their own learning.
That leaves those of us involved in higher education in a Catch-22. We cannot begin to trust students without relinquishing control, but it is difficult to relinquish control without broader changes to enable and encourage students to demonstrate their trustworthiness. Teaching at an engineering school, I know that whatever extra time I give my students usually translates into them spending more time studying for Calc II or other courses that they will invariably describe as their “real classes”—which are, ironically, the ones that they probably hate the most.
Yet continuing with the way things are seems perverse. The prospect of dedicating the rest of my life to an institution that mints degree-holders only at the cost of making students miserable fills me with dread. I wish that I had a good answer regarding how to break a sizable chunk of higher education out of the pedagogy of control, and I hope that I can eventually figure out how to more significantly reduce its presence in my own teaching. What would a university that produces practically wise human beings look like and how could we achieve one?
STS research has traditionally focused on innovation processes—and more recently maintenance—leaving practices of technological dismantling, decommissioning, and refusal underexamined and less deeply theorized. This is in spite of the fact that contemporary forms of Luddism are highly visible. Ordinary citizens consciously take a break from digital devices, cities demolish urban infrastructures like elevated highways or trolley systems, parents opt their children out of mandated state testing, and Silicon Valley firms aim to “disrupt” already existing sociotechnical systems and replace them with networked platforms under a startup’s control. How could (or should) STS scholars make sense of these seemingly disparate Luddite activities?
This panel builds on recent scholarship on the interrelations between Luddism as epistemology—a process of learning about technologies as legislation—and as politics—an effort to materially realize a certain vision of the good society. Desirable presentations include ones that draw connections between and contrast contemporary and past movements aspiring to dismantle certain technologies, theorize and elucidate the epistemological dimensions of Luddite politics, discern and examine the barriers to democratizing Luddism, and imagine and propose how technological destruction can proceed in an intelligent and just manner. In exploring deeper theorizations and research on technological dismantling, decommissioning, and refusal, this panel also seeks constructive critiques of epistemological and political Luddism: How can we ensure that dismantling is an ethically just political project and protect against the reactionary instantiations often associated with 20th century neo-Luddites?
Please submit your abstracts by February 1st, 2019 here.
Michael Bouchey, RPI
Michael Lachney, Michigan State
Taylor Dotson, New Mexico Tech
A person who has only half-grasped the lessons of history can sometimes be more dangerous than a naif. Convinced that they have it all figured out, they can be just as wrong but more dogmatic than someone who recognizes their ignorance. This is why I felt considerable unease reading ecomodernist arguments, like those of Michael Shellenberger, about the desirability of nuclear power. Because Shellenberger only partly grasps the historical challenges presented by large-scale technologies like nuclear energy, he manages to argue his way to some fairly reckless conclusions.
Shellenberger, as president of the pro-nuclear interest group Environmental Progress, has dedicated himself to keeping current nuclear plants running around the world and advocating for building new ones. Reading over the group’s materials, I am reminded of the grand technological visions that got us into the nuclear game in the first place. Nuclear energy was originally promised to provide energy “too cheap to meter” and an atomically powered utopia. Ecomodernist environmentalists like Shellenberger see similar nuclear promise, namely as a deus ex machina for climate change, if his cited estimates of its carbon footprint are to be believed.
It is important to note that the economic (and safety) estimates that drove the first nuclear bubble proved to be spurious. Largely based on theoretical projections made in the absence of real experience, they proved too optimistic, and utilities were stuck with underperforming reactors that were completed years, if not decades, behind schedule. Nuclear energy has only become moderately financially competitive as decades-old investments have been paid off. Given these past mistakes, we should be worried about becoming too enchanted with a similar, albeit ecomodernist, nuclear daydream.
Shellenberger and others might prove correct regarding nuclear’s carbon footprint, or they might not. Getting an accurate life-cycle assessment is challenging for most technologies, and the carbon picture for nuclear worsens once the processing of uranium ore and the decommissioning of reactors are included. Considerable uncertainties emerge when recognizing that we do not really know the carbon costs of building and maintaining long-term storage sites. Will nuclear look so advantageous if we massively expand our use of it and finally gain some practical experience with stewarding the waste? Even worse, we might not learn the full environmental costs until it is too late, inadvertently committing ourselves to a net environmental negative for ten thousand years. And that is without accounting for the carbon footprint of the nuclear conflicts made more likely by expansion, because spreading nuclear power across the globe would almost invariably lead to more nuclear weapons programs.
The ecomodernist position acknowledges few of these complexities. Shellenberger seems to know just enough about nuclear energy to paint himself into a technologically conservative corner. In an article for Forbes, he cites the financial and scheduling woes of new designs like the AP1000, which promised more passive safety features to overcome some of the light-water reactor’s inherent design problems, as evidence that nuclear innovation writ large is costly and should be strictly limited. His article dwells on Hyman Rickover’s experience with developing the light water reactor and his statements about the challenges that arise when translating experimental designs to the real world. Shellenberger advocates that we mostly stick with a slightly modified, standardized version of decades-old designs—to be produced in massive numbers—ostensibly because it fits with the ecomodernist demand that we rapidly expand nuclear energy to combat climate change. Moving to a different design would slow down that process.
The costs of innovation, however, are only sometimes and partly due to the radicalness of design changes. Also pertinent are the capital intensity of the technology, the speed of feedback about errors, dependence on specialized infrastructure, and the process for scaling up. It is about the whole sociotechnical system of innovation, not just the technology itself. The challenges that nuclear innovation faces today are partly due to the shape of the system put in place decades ago.
The costliness of pursuing potentially safer reactor designs—or ones that may have a better outlook in terms of waste or carbon emissions—is partly caused by previous generations of nuclear advocates plunging ahead. The currently dominant light water reactor became the de facto standard largely before we learned much about its relative benefits and drawbacks for producing commercial energy. We still do not know much about the alternatives. Options became foreclosed because true believers chasing nuclear dreams, pursuing market dominance, and/or wanting to beat the Russians acted single-mindedly, constructing dozens of light-water plants and scaling them up by a factor of six before people started to recognize their flaws and the bubble burst.
As a result of this nuclear energy bubble, alternative designs face unfair comparisons with a light water standard that received billions in early investment and subsidy, among other advantages. Alternative designs are further stymied because established infrastructures, educational programs, and regulatory regimes are not designed for them. And people are more risk-averse with regard to developing alternatives, lulled into complacency by the knowledge that a working design already exists. Put together, this leads the light water reactor to be the QWERTY keyboard of nuclear energy: a suboptimal design that is nevertheless too entrenched to be altered or dispensed with.
People who have studied how promising innovations turn into expensive technological failures warn about the risks of locked-in or otherwise inflexible technologies. The process of standardization and narrowing of alternatives that Shellenberger sees as so financially beneficial is only desirable once one really knows the ins and outs of competing designs. Ensuring such diversity is no doubt expensive, but expenses can be reduced by ensuring that development is appropriately paced and scaled up gradually. Consider how NASA-led wind turbine development in the United States was a costly flop: Engineers scaled up to megawatt-sized turbines that failed early and catastrophically, providing little to no guidance for commercial builders. The Danish, on the other hand, supported a decentralized process carried out by a diverse set of builders. The size of the turbines grew gradually over the course of a decade, resulting in Danish turbines being the most reliable in the world without millions wasted on inefficient investments. Note that this is the complete opposite of the process that Shellenberger advocates: decentralized rather than centrally controlled, small and gradual rather than deployed at grand scales, and diverse and open rather than standardized from the get-go.
Realizing such a decentralized and gradual process with nuclear energy is challenged by a number of factors—no doubt securing nuclear material being one of them. Perhaps the biggest barrier is size expectations. Any moderately novel reactor design is going to run into major unanticipated problems when it is expected to produce electricity at the scale of hundreds of megawatts. This well-established pattern should give us pause. Can we create a context where intelligent incremental nuclear innovation can occur? If we cannot, then perhaps for all intents and purposes nuclear energy is a pathological technology. If its very character prevents a learning-focused innovation process—especially with respect to safety and environmental impact—maybe it is not worth the cost and effort, especially given that energy reduction is so much more cost effective and freer of undesirable consequences.
The coach of my university’s rugby team instructs players that “Slow is smooth, and smooth is fast.” Behind the apparent contradiction in terms is a recognition that panicked decisions are often more costly in terms of time (and everything that comes with it) than being self-conscious and deliberate. Another popular rugby saying is “Don’t shovel shit.” This means that if you receive a crappy pass, don’t try to pass it on to the next guy. It will only make things worse.
The seriousness of the problems of global climate change should not be viewed as an invitation to make hasty decisions about nuclear; such choices could make the predicament of future generations even worse. Without an emphasis on deliberate learning, the risk of doing so is very high. Moreover, there is no requirement that we remain committed to our nuclear inheritance. The sunk cost of experience, dedicated infrastructure, and other choices that now make current light-water designs seem like a good deal was originally the sociotechnical equivalent of a shit pass. We do not have to shovel it on to the next generation. We can take the opportunity—as more and more of our plants reach the end of their lifespan—to reassess the situation and adjust our line of attack, proceeding in full recognition of our limited knowledge.
Contemporary parents live in constant fear of the authorities—and the “Good Samaritans” who contact them. A friend of mine left his elementary school-aged kids home alone for a mere five minutes to talk to a neighbor, only to return to a police cruiser investigating a call about “abandoned children.” When my brother forgets his work badge on the kitchen counter, he unloads and shuttles his three small children out of their car seats and into the house. He wastes ten minutes coaxing them back into their harnesses, lest a neighbor report him for “neglecting” his little ones by letting them stay buckled in a parked car with all its doors open.
Parents today are increasingly harassed or even arrested for leaving their kids in a car for two minutes to buy a coffee or allowing them to frolic unsupervised in a nearby playground. Yet it was not always like this. Generations ago—when the world was considerably more dangerous—children as young as eight were allowed to roam six miles away from home. Children’s freedom is a litmus test for community vibrancy. We won’t be able to improve one without boosting the other.
In contrast to growing anxieties, the rate of fatal injuries for children in the United States has been in steep decline for decades. There is no reason, however, to credit increased helicopter parenting and the vigilance of authority-contacting strangers for that decline. The rate is even lower in European countries where parents generally give their kids a much longer leash. Heck, Dutch kids don’t even wear helmets when they bike by themselves to school.
The difference is that childhood risks are individualized in the United States. Rather than redesign road systems to make dangerous interactions between cyclists and automobiles less likely—as the Dutch have done—we clad kids in padding and shame parents for not watching closely enough if little Johnny or Jane ends up on the business end of a Buick. Not only does this approach run completely contrary to what we’ve learned about the organization of safety (see nuclear power), but it fundamentally reshapes parents’ relationship to their neighbors. We police, rather than watch out for, one another. We punish and moralize instead of cooperate and empathize.
Calls made to the authorities about unsupervised kids are not made because of any real danger but because many Americans don’t want to have to keep an eye out for children who aren’t their own. As one mother complains about more “free range” parenting, “I don’t want to be responsible for someone else’s kids.” Cops are used to punish parents for the sin of making other people worry about their offspring, of drawing them into community without their consent.
The individualization of much of the rest of American life makes the model of absolute parental responsibility only more difficult. Many of the parents who had law enforcement called on them faced difficult decisions: Let the kids play in the park or fail to show up for work; leave them in the car for twenty minutes or miss out on a job interview. Childcare is unaffordable because we don’t collectively provide for it. Current economic arrangements and policies individuate workers, giving little to no respect to the family or community as a social unit. On top of this, few Americans today have good enough relationships with neighbors to have them babysit.
It will not be easy to redirect American society toward a more communitarian, less individualized model of childrearing. Fortunately, studying how we’ve come to today’s world of neighborhoods full of strangers, near deserted suburban streets, and low levels of communal goodwill can teach us how to reverse the downward spiral.
It would be unreasonable to expect parents to embrace “free range” parenting overnight, given that decades of fear-based news reporting and home-based hermitting has led many to see danger lurking around every corner. But simple measures could allay some parental anxieties while giving children the freedom to play without parental surveillance. Teenagers already earn money ensuring safety at public pools. Why couldn’t “play lifeguards” staff local parks to supply sports equipment, tend to minor injuries, and help prevent major accidents? Why not locate children’s play areas among outdoor cafes, pubs, and other spaces amenable to adult relaxation, letting the eyes of other parents, retirees, and staff help keep kids safe? Such spaces already exist in many malls. Why couldn’t they be built elsewhere?
We could begin to weave back together the frayed communal web in many neighborhoods by redesigning zoning and building codes to encourage community interaction. Few American homes today have a front porch worth gathering on, and few neighborhoods contain places worth walking to. Building residential areas differently would help foster the kinds of social interactions that could restore neighbors’ trust in each other, growing social capital to a level where community members no longer treat keeping an eye out for someone’s kid as an onerous hardship.
No doubt thickening community life in this way would mean taking on new duties and responsibilities, many of which would feel uncomfortable at first. But individualization has been no different, coming with its own set of freedom-limiting burdens. The question is always “Which freedoms are worth what constraints?”
Past thick communities had their share of problems—often being patriarchal and overly demanding of conformity. These are serious issues that demand careful solutions. But there is no free lunch when it comes to the makeup of society. We are now seeing the costs of contemporary individualism: the loneliness of stay-at-home parents and older adults, the use of police to terrorize busy moms, and growing rates of depression. Nevertheless, a more pleasant and communitarian world for parents and children is possible—if we’re willing to collectively reevaluate and reconstruct our social lives.
Taylor Dotson is an assistant professor of science and technology studies at New Mexico Tech, author of Technically Together: Reconstructing Community in a Networked World (MIT Press), and a researcher with Whoa Inc.
A recent diagnosis of obstructive sleep apnea has led me to develop a new level of annoyance with the medical profession. The condition seems simple enough: My throat and tongue musculature relaxes too much when I sleep, cutting off my airway several times an hour and keeping me from getting restful sleep. After my sleep study, I was prescribed a CPAP machine, a device that forces my airway open by pumping pressurized air into my nostrils, and sent home. To say that the road to wellness has not been smooth would be an understatement. As an STS scholar, I am well familiar with cases where patients have been frustrated by the way their conditions and treatment options are understood by the medical community. Their frustrations have become far more real to me in my struggle to deal with my apnea diagnosis.
What struck me most when I first took my CPAP machine home was the large degree to which my sleep became “medicalized.” That is, it became understood in terms of the assumptions, values, and desires of medical professionals, not my own. The “MyAir” app associated with my machine only tells me how long I’ve kept my mask on, whether it has leaked, and how many apnea episodes I have had every hour. Sleep quality is not measured or represented anywhere. Ironically, I get pretty good numbers when I lie awake for hours on end, wishing that the panicked feeling of suffocation would subside just enough for me to fall asleep. Even on nights when I do sleep, I wake four to five times, never reaching the deepest level of sleep. My slumber can be nearly as unhealthy as before despite the “good” numbers sent to my doctor by the machine.
Most telling is the way that my usage of the machine is talked about. The primary concern of my doctor and insurance company is “compliance” – so much so that a respiratory technician was made to show me a scary four-figure number that I would be responsible for paying if I do not wear my mask the required four hours per night. Unfortunately, there is no equally threatening monetary incentive for my doctor to ensure that I am actually asleep and sleeping well through the night. I can be totally compliant while being completely miserable.
The tendency to be overly enthralled with seemingly objective but unrepresentative measures and to take too little care in understanding how people interact with their technologies is tragically common. Robert Pool calls this the “machine-centered philosophy of engineering.” Under the spell of this philosophy, whatever machine technologists come up with is framed as ideal. The only imaginable problem then becomes the failure of people to adapt themselves to the machine, not that designers failed to give empathic consideration to what people can reasonably do. A classic example of this machine-centered view was the control room in nuclear plants like Three Mile Island. Operators were blamed for mistakes made in the run-up to a near meltdown at the plant, but one of the underlying causes was that the array of dials and gauges in the room was set up not to be comprehensible to operators but to be easier for the designers and builders to lay out.
Once one notices that CPAP machines are a machine-centered approach to treating sleep apnea, their status as the “gold standard” treatment begins to appear much less certain. Indeed, nearly 50 percent of diagnosed apnea sufferers never adapt to their machines and stop using them. “Gold standard” status perhaps makes sense in the simplified environment of the clinical study but not in real life. Yet alternative treatments to the CPAP machine receive little attention from sleep doctors, perhaps because they do not reliably get patients’ AHI (apnea-hypopnea index, the average number of incidents per hour) down to the sought-after 5 or less. However, consider that a “compliant” CPAP patient need only wear their mask 4 hours a night. Their actual nightly AHI may be little different from that of people using these alternative treatments. Someone managing to wear a CPAP mask 5 hours per night with an AHI of 4, but going back to an AHI of 25 for the remaining 3 hours, has a nightly AHI of almost 12—which would classify them as suffering from moderate sleep apnea and is no different from what alternative treatments accomplish. However, under the spell of machine-centered thinking, this would be seen as solely the patient’s fault for being insufficiently diligent rather than a failure of the CPAP approach more broadly.
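The time-weighted arithmetic behind that "almost 12" figure can be sketched as follows. The hour and AHI values are the illustrative ones from above, not clinical data, and the helper function is my own construction rather than anything from the MyAir app:

```python
# Time-weighted nightly AHI (apnea-hypopnea index: incidents per hour of sleep).
# Illustrative sketch only; segment values come from the hypothetical example above.

def nightly_ahi(segments):
    """Average AHI over a night, given (hours, ahi) segments."""
    total_hours = sum(hours for hours, _ in segments)
    total_events = sum(hours * ahi for hours, ahi in segments)
    return total_events / total_hours

# 5 "compliant" hours masked at AHI 4, then 3 unmasked hours back at AHI 25:
print(nightly_ahi([(5, 4), (3, 25)]))  # -> 11.875, i.e. "moderate" apnea
```

In other words, (5 × 4 + 3 × 25) ÷ 8 ≈ 11.9 incidents per hour, even though the machine reports only the masked hours as a success.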
Looking at other cases of machine-centered failure, however, provides lessons regarding how sleep apnea treatment could be more person-centered. For instance, autopilot leads to new kinds of plane crashes because trying to completely delegate the process of flying to an algorithm deskills pilots, leading them to make elementary mistakes when the autopilot shuts off in unusual circumstances. The alternative is to “informate,” which involves using automation technologies to help pilots become better at their jobs: help them maintain attention, periodically test their skills, provide feedback on performance, etc. Informating treats pilots’ cognition and the human-machine interface as part of the design, rather than expecting users to be superhuman. The challenge for sleep apnea researchers is to learn to think outside the machine-centered box. Rather than simply delegate the holding open of patients’ throats to a machine, how could patients be better empowered to do that themselves?
This alternative approach is mostly undone science. While there are a few studies looking into how physical therapy exercises, playing the didgeridoo, and a cannabinoid could reduce the frequency of apnea incidents by up to 50 percent, there are few follow-up studies, much less any research attempting these treatment options in combination. Little to no energy has been spent by my doctor to try to diagnose exactly why my airway collapses. Given that breathing is a semi-voluntary act, what reason is there to believe that I could not retrain my respiratory system to have a more suitable level of musculature?
Insofar as today’s paradigm of compliance to CPAP reigns, apnea sufferers like myself are left in the dark, trying to piece together sparse information on the Internet in order to design our own alternative and complementary treatment pathways. This need not be the case. I could use the help of a trained medical professional, rather than go it alone. Absent a less machine-centered, more person-centered paradigm of apnea treatment, I do not have any other options.
There has been no shortage of (mainly conservative) pundits and politicians suggesting that the path to fewer school shootings is armed teachers—and even custodians. Although it is entirely likely that such recommendations are not really serious but rather meant to distract from calls for stricter gun control legislation, it is still important to evaluate them. As someone who researches and teaches about the causes of unintended consequences, accidents, and disasters for a living, I find the idea that arming public school workers will make children safer highly suspect—but not for the reasons one might think.
If there is one commonality across myriad cases of political and technological mistakes, it would be the failure to acknowledge complexity. Nuclear reactors designed for military submarines were scaled up over an order of magnitude to power civilian power plants without sufficient recognition of how that affected their safety. Large reactors can get so hot that containing a meltdown becomes impossible, forcing managers to be ever vigilant to the smallest errors and install backup cooling systems—which only added difficult-to-manage complexity. Designers of autopilot systems neglected to consider how automation hurt the abilities of airline pilots, leading to crashes when the technology malfunctioned and now-deskilled pilots were forced to take over. A narrow focus on applying simple technical solutions to complex problems generally leads to people being caught unawares by ensuing unanticipated outcomes.
Debate about whether to put more guns in schools tends to emphasize the solution’s supposed efficacy. Given that even the “good guy with a gun” best positioned to stop the Parkland shooting failed to act, can we reasonably expect teachers to do much better? In light of the fact that mass shootings have even occurred at military bases, what reason do we have to believe that filling educational institutions with armed personnel will reduce the lethality of such incidents? As important as these questions are, they divert our attention from the new kinds of errors produced by applying a simplistic solution—more guns—to a complex problem.
A comparison with the history of nuclear armaments should give us pause. Although most Americans during the Cold War worried about potential atomic war with the Soviets, Cubans, or Chinese, many of the real risks associated with nuclear weapons involved accidental detonation. While many believed during the Cuban Missile Crisis that total annihilation would come from nationalistic posturing and brinkmanship, it was actually ordinary incompetence that brought us closest. Strategic Air Command’s insistence on maintaining U2 and B52 flights and intercontinental ballistic missile tests during periods of heightened tension risked a military response from the Soviet Union: pilots invariably got lost and approached Soviet airspace, and missile tests could have been misinterpreted as malicious. Malfunctioning computer chips made NORAD’s screens light up with incoming Soviet missiles, leading the US to prep and launch nuclear-armed jets. Nuclear weapons stored at NATO sites in Turkey and elsewhere were sometimes guarded by a single American soldier. Nuclear-armed B52s crashed or accidentally released their payloads, with some coming dangerously close to detonation.
Much the same would be true for the arming of school workers: The presence of weapons and the likelihood of routine human error would put children at risk. Millions of potentially armed teachers and custodians translate into an equal number of opportunities for a troubled student to steal weapons that would otherwise be difficult to acquire. Some employees are likely to be as incompetent as Michelle Ferguson-Montgomery, a teacher who shot herself in the leg at her Utah school—though others may not be so lucky as to avoid hitting a child. False alarms will result not simply in lockdowns but in armed adults roaming the halls and, as a result, the possibility of children killed for holding cellphones or other objects that can be confused for weapons. Even “good guys” with guns miss the target at least some of the time.
The most tragic unintended consequence, however, would be how arming employees would alter school life and the personalities of students. Generations of Americans mentally suffered under Cold War fears of nuclear war. Given the unfortunate ways that many from those generations now think in their old age (prone to hyper-partisanship, hawkish in foreign affairs, and excessively fearful of immigrants), one worries how a generation of kids brought up in quasi-militarized schools could be rendered incapable of thinking sensibly about public issues—especially when it comes to national security and crime.
This last consequence is probably the most important one. Even though more attention ought to be paid toward the accidental loss of life likely to be caused by arming school employees, it is far too easy to endlessly quibble about the magnitude and likelihood of those risks. That debate is easily scientized and thus dominated by a panoply of experts, each claiming to provide an “objective” assessment regarding whether the potential benefits outweigh the risks. The pathway out of the morass lies in focusing on values, on how arming teachers—and even “lockdown” drills—fundamentally disrupts the qualities of childhood that we hold dear. The transformation of schools into places defined by a constant fear of senseless violence turns them into places that cannot feel as warm, inviting, and communal as they otherwise could. We should be skeptical of any policy that promises greater security only at the cost of the more intangible features of life that make it worth living.
Taylor C. Dotson is an associate professor at New Mexico Tech, a Science and Technology Studies scholar, and a research consultant with WHOA. He is the author of The Divide: How Fanatical Certitude is Destroying Democracy and Technically Together: Reconstructing Community in a Networked World. Here he posts his thoughts on issues mostly tangential to his current research.
If You Don't Want Outbreaks, Don't Have In-Person Classes
How to Stop Worrying and Live with Conspiracy Theorists
Democracy and the Nuclear Stalemate
Reopening Colleges & Universities an Unwise, Needless Gamble
Radiation Politics in a Pandemic
What Critics of Planet of the Humans Get Wrong
Why Scientific Literacy Won't End the Pandemic
Community Life in the Playborhood
Who Needs What Technology Analysis?
The Pedagogy of Control
Don't Shovel Shit
The Decline of American Community Makes Parenting Miserable
The Limits of Machine-Centered Medicine
Why Arming Teachers is a Terrible Idea
Why School Shootings are More Likely in the Networked Age
Gun Control and Our Political Talk
Semi-Autonomous Tech and Driver Impairment
Community in the Age of Limited Liability
Conservative Case for Progressive Politics
Hyperloop Likely to Be Boondoggle
Policing the Boundaries of Medicine
On the Myth of Net Neutrality
On Americans' Acquiescence to Injustice
Science, Politics, and Partisanship
Moving Beyond Science and Pseudoscience in the Facilitated Communication Debate
Privacy Threats and the Counterproductive Refuge of VPNs
Andrew Potter's Macleans Shitstorm
The (Inevitable?) Exportation of the American Way of Life
The Irony of American Political Discourse: The Denial of Politics
Why It Is Too Early for Sanders Supporters to Get Behind Hillary Clinton
Science's Legitimacy Problem
Forbes' Faith-Based Understanding of Science
There is No Anti-Scientism Movement, and It’s a Shame Too
American Pro Rugby Should Be Community-Owned
Why Not Break the Internet?
Working for Scraps
Solar Freakin' Car Culture
Mass Shooting Victims ARE on the Rise
Are These Shoes Made for Running?
Underpants Gnomes and the Technocratic Theory of Progress
Don't Drink the GMO Kool-Aid!
On Being Driven by Driverless Cars
Why America Needs the Educational Equivalent of the FDA
On Introversion, the Internet and the Importance of Small Talk
I (Still) Don't Believe in Digital Dualism
The Anatomy of a Trolley Accident
The Allure of Technological Solipsism
The Quixotic Dangers Inherent in Reading Too Much
If Science Is on Your Side, Then Who's on Mine?
The High Cost of Endless Novelty - Part II
The High Cost of Endless Novelty
Lock-up Your Wi-Fi Cards: Searching for the Good Life in a Technological Age
The Symbolic Analyst Sweatshop in the Winner-Take-All Society
On Digital Dualism: What Would Neil Postman Say?
Redirecting the Technoscience Machine
Battling my Cell Phone for the Good Life