I am an educator, and I also hated school. That might seem a contradiction at first, but I hope (for my own sake) that it does not have to be one. The fact that nearly every student and faculty member I have known begins counting down the weeks until winter or summer break right after the first day of classes signals to me that something is deeply wrong with what we call education. If it is actually the life-enriching, citizen-building experience that it is often touted to be, why does almost everyone directly involved in teaching or learning seem to merely tolerate it—if not actively dislike it?
I can count the classes that I have actually enjoyed on one hand. For the most part, public school and university courses felt like arduous slogs. Sure, there were brief moments of inspiration, insight, and reflection, but I mostly focused on completing my assignments as quickly, and with as little effort, as possible. I doubt that I am alone in having treated classes as mostly annoyances on the path to a degree. I hated having to divvy up my curiosity, temporarily pretending to care about whatever it was that my teacher thought I should be interested in.
Even though I eventually earned a PhD in Science and Technology Studies (STS) and now teach courses in that area, I found being a graduate student to be a largely miserable experience. I only woke up excited to go to my office after I completed my coursework, when I could dedicate most of my time to doing research on my own and sometimes under the guidance of my advisor.
The only class that I did enjoy during my studies involved doing largely self-guided group projects. The professors taught us a few tools and then mostly stepped back, letting student groups find their own way through their research and mathematical modeling projects, intervening only when asked to and after groups presented their findings.
Maybe I simply forgot all this once I became a professor. Perhaps I wanted to believe that the STS topics that fill my classes would be different, that they were inherently more interesting and engaging. I do treasure the students who actively engage with my course material beyond the minimum expectations, but most do not. And I have largely failed in my attempts to teach my courses more like the one class that I actually enjoyed and less like all the others that I have fortunately forgotten.
Underlying this failure has been the stubborn belief that I can control my students’ learning. If only I properly incentivized them with grades, set up assignments just right, or made class lectures entertaining enough, perhaps I could get most of them to invest in their own education beyond merely fulfilling syllabus requirements. I have a hard time letting go of the belief that I can make them really learn.
Yet my own experiences tell me that this pedagogy of control is destined to be an abysmal failure. I have forgotten most of the mathematics that I learned over the course of a bachelor’s and a master’s degree. All that remains is a knack for working through quantitative puzzles, usually after surveying Wikipedia or Wolfram to relearn forgotten techniques.
Even this knack has not been worth much. When I interned as a statistician, my supervisor asked me to help him decide which would be the best algorithm to help categorize some “big data” that he had. “Shit,” I thought, “I don’t know how to do that.” My supervisor wanted judgement and deep understanding from me, but I had spent the bulk of my time in university plugging and chugging through calculations, writing up relatively simple programs for canned (rather than real-world) problem sets, and generally avoiding doing too much learning. And I was even an “A” student.
To scholars of decision making and high performance, none of this is surprising. Barry Schwartz and Kenneth Sharpe have written a great deal about phronesis, or “practical wisdom,” and how important, but hard won, it is. People become excellent leaders, doctors, judges, or teachers only by being afforded opportunities to make mistakes, and then being encouraged and enabled to reflect and act upon those errors. They never have a chance to become practically wise when mandatory sentencing requirements preclude exercising judgement, when state-prescribed testing and preparation eats up too much classroom time, or when economic incentives prevent doctors from spending the time necessary to talk with and care about patients.
The failure of nuclear energy in the United States, for instance, has been partly caused by the tendency of the American regulatory system to legalistically and tediously outline point-by-point design specifications in the effort to enforce safety. The specifications for a nuclear reactor in the US run to several thousand pages, compared to mere tens in other countries. Whatever its benefits, the unintended consequence of a top-down, control-oriented system is that American nuclear reactor builders have come to believe that safety is accomplished by following the letter of the law, not by thinking.
Of course, there is a reason why the American regulatory system overprescribes and leaves no room for ongoing learning, adaptation, and innovative solutions to safety problems: a lack of trust. An antagonistic relationship between industry and government drives our rule-based system, which in turn prevents a more cooperative back-and-forth. Government bureaucrats focus on controlling industry, which they believe would do nothing right without intense supervision. Industry, in turn, often acts little differently from contemporary college students: doing the bare minimum and even subverting the rules when advantageous. Wherever learning is supposed to happen, too much control crowds out initiative.
Recognizing this does not mean we should take a “hands off” approach, whether in education or industrial regulation. In my mind, the advocates of “permissionless innovation,” which prescribes that governments never proactively attempt to avert or mitigate the unintended negative consequences of technological change, are complete fanatics. Neither would it be sensible to give students no direction at all with regard to their education, especially given that public schooling has inadvertently trained them to treat learning as a chore to be gotten over with as soon as possible and to think of degrees as merely signaling how hardworking or smart they are. Instructors have good reason not to entrust today’s college students with their own learning.
That leaves those of us involved in higher education in a Catch-22. We cannot begin to trust students without relinquishing control, but it is difficult to relinquish control without broader changes to enable and encourage students to demonstrate their trustworthiness. Teaching at an engineering school, I know that whatever extra time I give my students usually translates into them spending more time studying for Calc II or other courses that they will invariably describe as their “real classes”—which are, ironically, the ones that they probably hate the most.
Yet continuing with the way things are seems perverse. The prospect of dedicating the rest of my life to an institution that mints degree-holders only at the cost of making students miserable fills me with dread. I wish that I had a good answer regarding how to break a sizable chunk of higher education out of the pedagogy of control, and I hope that I can eventually figure out how to more significantly reduce its presence in my own teaching. What would a university that produces practically wise human beings look like and how could we achieve one?
STS research has traditionally focused on innovation processes—and more recently maintenance—leaving practices of technological dismantling, decommissioning, and refusal underexamined and less deeply theorized. This is in spite of the fact that contemporary forms of Luddism are highly visible. Ordinary citizens consciously take a break from digital devices, cities demolish urban infrastructures like elevated highways or trolley systems, parents opt their children out of mandated state testing, and Silicon Valley firms aim to “disrupt” already existing sociotechnical systems and replace them with networked platforms under a startup’s control. How could (or should) STS scholars make sense of these seemingly disparate Luddite activities?
This panel builds on recent scholarship on the interrelations between Luddism as epistemology—a process of learning about technologies as legislation—and as politics—an effort to materially realize a certain vision of the good society. Desirable presentations include ones that draw connections between and contrast contemporary and past movements aspiring to dismantle certain technologies, theorize and elucidate the epistemological dimensions of Luddite politics, discern and examine the barriers to democratizing Luddism, and imagine and propose how technological destruction can proceed in an intelligent and just manner. In exploring deeper theorizations of and research on technological dismantling, decommissioning, and refusal, this panel also seeks constructive critiques of epistemological and political Luddism: How can we ensure that dismantling is an ethically just political project and protect against the reactionary instantiations often associated with 20th-century neo-Luddites?
Please submit your abstracts by February 1st, 2019 here.
Michael Bouchey, RPI
Michael Lachney, Michigan State
Taylor Dotson, New Mexico Tech
A person who has only half-grasped the lessons of history can sometimes be more dangerous than a naif. Convinced that they have it all figured out, they can be just as wrong but more dogmatic than someone who recognizes their ignorance. This is why I felt considerable unease reading ecomodernist arguments, like those of Michael Shellenberger, about the desirability of nuclear power. Because Shellenberger only partly grasps the historical challenges presented by large-scale technologies like nuclear energy, he manages to argue his way to some fairly reckless conclusions.
Shellenberger, as president of the pro-nuclear interest group Environmental Progress, has dedicated himself to keeping current nuclear plants running around the world and advocating for building new ones. Reading over the group’s materials, I am reminded of the grand technological visions that got us into the nuclear game in the first place. Nuclear energy was originally promised to provide energy “too cheap to meter” and an atomically powered utopia. Ecomodernist environmentalists like Shellenberger see similar nuclear promise, namely as a deus ex machina for climate change, if his cited estimates of its carbon footprint are to be believed.
It is important to note that the economic (and safety) estimates that drove the first nuclear bubble proved to be spurious. Largely based on theoretical projections made in the absence of real experience, they proved too optimistic, and utilities were stuck with underperforming reactors that were completed years, if not decades, behind schedule. Nuclear energy has only become moderately financially competitive as decades-old investments have been paid off. Given these past mistakes, we should be worried about becoming too enchanted with a similar, albeit ecomodernist, nuclear daydream.
Shellenberger and others might prove correct regarding nuclear’s carbon footprint, or they might not. Getting an accurate life-cycle assessment is challenging for most technologies, and the carbon picture for nuclear is made worse when including the processing of uranium ore and decommissioning of reactors. Considerable uncertainties emerge when recognizing that we do not really know the carbon costs of building and maintaining long-term storage sites. Will nuclear look so advantageous if we massively expanded our use of it and when we finally gain some practical experience with stewarding the waste? Even worse, we might not actually learn the full environmental costs until it is too late, inadvertently committing ourselves to a net environmental negative for ten thousand years. And that’s not even accounting for the carbon footprint of potential nuclear conflicts made more likely, because expanding nuclear power across the globe would almost invariably lead to more nuclear weapons programs.
The ecomodernist position acknowledges few of these complexities. Shellenberger seems to know just enough about nuclear energy to paint himself into a technologically conservative corner. In an article for Forbes, he cites the financial and scheduling woes of new designs like the AP1000, which promised more passive safety features to overcome some of the light-water reactor’s inherent design problems, as evidence that nuclear innovation writ large is costly and should be strictly limited. His article dwells on Hyman Rickover’s experience with developing the light water reactor and his statements about the challenges that arise when translating experimental designs to the real world. Shellenberger advocates that we mostly stick with a slightly modified, standardized version of decades-old designs—to be produced in massive numbers—ostensibly because it fits with the ecomodernist demand that we rapidly expand nuclear energy to combat climate change. Moving to a different design would slow down that process.
The costs of innovation, however, are only sometimes and partly due to the radicalness of design changes. Also pertinent are the capital intensity of the technology, the speed of feedback about errors, dependence on specialized infrastructure, and the process for scaling up. It is about the whole sociotechnical system of innovation, not just the technology itself. The challenges that nuclear innovation faces today are partly due to the shape of the system put in place decades ago.
The costliness of pursuing potentially safer reactor designs—or ones that may have a better outlook in terms of waste or carbon emissions—is partly caused by previous generations of nuclear advocates plunging ahead. The currently dominant light water reactor became the de facto standard largely before we learned much about its relative benefits and drawbacks for producing commercial energy. We still do not know much about the alternatives. Options became foreclosed because true believers chasing nuclear dreams, pursuing market dominance, and/or wanting to beat the Russians acted single-mindedly, constructing dozens of light-water plants and scaling them up by a factor of six before people started to recognize their flaws and the bubble burst.
As a result of this nuclear energy bubble, alternative designs face unfair comparisons with a light water standard that received billions in early investment and subsidy, among other advantages. Alternative designs are further stymied because established infrastructures, educational programs, and regulatory regimes are not designed for them. And people are more risk-averse with regard to developing alternatives, lulled into complacence by the knowledge that a working design already exists. Put together, this leads the light water reactor to be the QWERTY keyboard of nuclear energy: a suboptimal design that is nevertheless too entrenched to be altered or dispensed with.
People who have studied how promising innovations turn into expensive technological failures warn about the risks of locked-in or otherwise inflexible technologies. The process of standardization and narrowing of alternatives that Shellenberger sees as so financially beneficial is only desirable once one really knows the ins and outs of competing designs. Ensuring such diversity is no doubt expensive, but expenses can be reduced by ensuring that development is appropriately paced and scaled up gradually. Consider how NASA-led wind turbine development in the United States was a costly flop: Engineers scaled up to megawatt-sized turbines that failed early and catastrophically, providing little to no guidance for commercial builders. The Danes, on the other hand, supported a decentralized process carried out by a diverse set of builders. The size of the turbines grew gradually over the course of a decade, resulting in Danish turbines being the most reliable in the world, without millions wasted on inefficient investments. Note that this is the complete opposite of the process that Shellenberger advocates: decentralized rather than centrally controlled, small and gradual rather than deployed at grand scales, and diverse and open rather than standardized from the get-go.
Realizing such a decentralized and gradual process with nuclear energy is challenged by a number of factors—no doubt securing nuclear material being one of them. Perhaps the biggest barrier is size expectations. Any moderately novel reactor design is going to run into major unanticipated problems when it is expected to produce electricity at the scale of hundreds of megawatts. This well-established pattern should give us pause. Can we create a context where intelligent incremental nuclear innovation can occur? If we cannot, then perhaps for all intents and purposes nuclear energy is a pathological technology. If its very character prevents a learning-focused innovation process—especially with respect to safety and environmental impact—maybe it is not worth the cost and effort, especially given that energy reduction is so much more cost effective and freer of undesirable consequences.
The coach of my university’s rugby team instructs players that “Slow is smooth, and smooth is fast.” Behind this apparent contradiction in terms is a recognition that panicked decisions are often more costly in terms of time (and everything that comes with it) than being self-conscious and deliberate. Another popular rugby saying is “Don’t shovel shit.” This means that if you receive a crappy pass, you shouldn’t try to pass it on to the next guy. It will only make things worse.
The seriousness of the problems of global climate change should not be viewed as an invitation to make hasty decisions about nuclear; such choices could make the predicament of future generations even worse. Without an emphasis on deliberate learning, the risk of doing so is very high. Moreover, there is no requirement that we remain committed to our nuclear inheritance. The sunk cost of experience, dedicated infrastructure, and other choices that now make current light-water designs seem like a good deal was originally the sociotechnical equivalent of a shit pass. We do not have to shovel it on to the next generation. We can take the opportunity—as more and more of our plants reach the end of their lifespan—to reassess the situation and adjust our line of attack, proceeding in full recognition of our limited knowledge.
Contemporary parents live in constant fear of the authorities—and the “Good Samaritans” who contact them. A friend of mine left his elementary school-aged kids home alone for a mere five minutes to talk to a neighbor, only to return to a police cruiser investigating a call about “abandoned children.” When my brother forgets his work badge on the kitchen counter, he unloads and shuttles his three small children out of their car seats and into the house. He wastes ten minutes coaxing them back into their harnesses, lest a neighbor report him for “neglecting” his little ones by letting them stay buckled in a parked car with all its doors open.
Parents today are increasingly harassed or even arrested for leaving their kids in a car for two minutes to buy a coffee or allowing them to frolic unsupervised in a nearby playground. Yet it was not always like this. Generations ago—when the world was considerably more dangerous—children as young as eight were allowed to roam six miles away from home. Children’s freedom is a litmus test for community vibrancy. We won’t be able to improve one without boosting the other.
In contrast to growing anxieties, the rate of fatal injuries for children in the United States has been in steep decline for decades. There is no reason, however, to credit increased helicopter parenting and the vigilance of authority-contacting strangers for that decline. The rate is even lower in European countries where parents generally give their kids a much longer leash. Heck, Dutch kids don’t even wear helmets when they bike by themselves to school.
The difference is that childhood risks are individualized in the United States. Rather than redesign road systems to make dangerous interactions between cyclists and automobiles less likely—as the Dutch have done—we clad kids in padding and shame parents for not watching closely enough if little Johnny or Jane ends up on the business end of a Buick. Not only does this approach run completely contrary to what we’ve learned about the organization of safety (see nuclear power), but it fundamentally reshapes parents’ relationship to their neighbors. We police, rather than watch out for, one another. We punish and moralize instead of cooperate and empathize.
Calls to the authorities about unsupervised kids are made not because of any real danger but because many Americans don’t want to have to keep an eye out for children who aren’t their own. As one mother complains about more “free range” parenting, “I don’t want to be responsible for someone else’s kids.” Cops are used to punish parents for the sin of making other people worry about their offspring, of drawing them into community without their consent.
The individualization of much of the rest of American life makes the model of absolute parental responsibility only more difficult. Many of the parents who had law enforcement called on them faced difficult decisions: Let the kids play in the park or fail to show up for work; leave them in the car for twenty minutes or miss out on a job interview. Childcare is unaffordable because we don’t collectively provide for it. Current economic arrangements and policies individuate workers, giving little to no respect to the family or community as a social unit. On top of this, few Americans today have good enough relationships with neighbors to have them babysit.
It will not be easy to redirect American society toward a more communitarian, less individualized model of childrearing. Fortunately, studying how we’ve come to today’s world of neighborhoods full of strangers, near deserted suburban streets, and low levels of communal goodwill can teach us how to reverse the downward spiral.
It would be unreasonable to expect parents to embrace “free range” parenting overnight, given that decades of fear-based news reporting and home-based hermitting has led many to see danger lurking around every corner. But simple measures could allay some parental anxieties while giving children the freedom to play without parental surveillance. Teenagers already earn money ensuring safety at public pools. Why couldn’t “play lifeguards” staff local parks to supply sports equipment, tend to minor injuries, and help prevent major accidents? Why not locate children’s play areas among outdoor cafes, pubs, and other spaces amenable to adult relaxation, letting the eyes of other parents, retirees, and staff help keep kids safe? Such spaces already exist in many malls. Why couldn’t they be built elsewhere?
We could begin to weave back together the frayed communal web in many neighborhoods by redesigning zoning and building codes to encourage community interaction. Few American homes today have a front porch worth gathering on, and few neighborhoods contain places worth walking to. Building residential areas differently would help foster the kinds of social interactions that could restore neighbors’ trust in each other, growing social capital to a level where community members no longer treat keeping an eye out for someone’s kid as an onerous hardship.
No doubt thickening community life in this way would mean taking on new duties and responsibilities, many of which would feel uncomfortable at first. But individualization has been no different, coming with its own set of freedom-limiting burdens. The question is always “Which freedoms are worth what constraints?”
Past thick communities had their share of problems—often being patriarchal and overly demanding of conformity. These are serious issues that demand careful solutions. But there is no free lunch when it comes to the makeup of society. We are now seeing the costs of contemporary individualism: the loneliness of stay-at-home parents and older adults, the use of police to terrorize busy moms, and growing rates of depression. Nevertheless, a more pleasant and communitarian world for parents and children is possible—if we’re willing to collectively reevaluate and reconstruct our social lives.
Taylor Dotson is an assistant professor of science and technology studies at New Mexico Tech, author of Technically Together: Reconstructing Community in a Networked World (MIT Press), and a researcher with Whoa Inc.
A recent diagnosis of obstructive sleep apnea has led me to develop a new level of annoyance with the medical profession. The condition seems simple enough: My throat and tongue musculature relaxes too much when I sleep, cutting off my airway several times an hour and keeping me from getting restful sleep. After my sleep study, I was prescribed a CPAP machine, a device that forces my airway open by pumping pressurized air into my nostrils, and sent home. To say that the road to wellness has not been smooth would be an understatement. As an STS scholar, I am well familiar with cases where patients have been frustrated by the way their conditions and treatment options are understood by the medical community. Those frustrations have become far more real to me in my struggle to deal with my apnea diagnosis.
What struck me most when I first took my CPAP machine home was the large degree to which my sleep became “medicalized.” That is, it became understood in terms of the assumptions, values, and desires of medical professionals, not my own. The “MyAir” app associated with my machine only tells me how long I’ve kept my mask on, whether it has leaked, and how many apnea episodes I have had every hour. Sleep quality is not measured or represented anywhere. Ironically, I get pretty good numbers when I lie awake for hours on end, wishing that the panicked feeling of suffocation would subside just enough for me to fall asleep. Even on nights when I do sleep, I wake four to five times, never reaching the deepest level of sleep. My slumber can be nearly as unhealthy as before despite the “good” numbers sent to my doctor by the machine.
Most telling is the way that my usage of the machine is talked about. The primary concern of my doctor and insurance company is “compliance” – so much so that a respiratory technician was made to show me a scary four-figure number that I would be responsible for paying if I did not wear my mask the required four hours per night. Unfortunately, there is no equally threatening monetary incentive for my doctor to ensure that I am actually asleep and sleeping well for the night. I can be totally compliant while being completely miserable.
The tendency to be overly enthralled with seemingly objective but unrepresentative measures and to take too little care in understanding how people interact with their technologies is tragically common. Robert Pool calls this the “machine centered philosophy of engineering.” Under the spell of this philosophy, whatever machine technologists come up with is framed as ideal. The only imaginable problem then becomes the failure of people to adapt themselves to the machine, not that designers failed to give empathic consideration to what people can reasonably do. A classic example of this machine-centered view was the control room in nuclear plants like Three Mile Island. Operators were blamed for mistakes made in the run-up to a near meltdown at the plant, but one of the underlying causes was that the array of dials and gauges in the room was not set up to be comprehensible to operators but to be easier for the designers and builders to lay out.
Once one notices that CPAP machines are a machine-centered approach to treating sleep apnea, their status as the “gold standard” treatment begins to appear much less certain. Indeed, nearly 50 percent of diagnosed apnea sufferers never adapt to their machines and stop using them. “Gold standard” status perhaps makes sense in the simplified environment of the clinical study but not in real life. Yet alternative treatments to the CPAP machine receive little attention from sleep doctors, perhaps because they do not reliably get patients’ AHI (apnea–hypopnea index: the average number of incidents per hour) down to the sought-after 5 or less. However, consider that a “compliant” CPAP patient need only wear their mask 4 hours a night. Their actual nightly AHI may be little different from that of people using these alternative treatments. Someone managing to wear a CPAP mask 5 hours per night with an AHI of 4, but going back to an AHI of 25 for the remaining 3 hours, has a nightly AHI of almost 12—which would classify them as suffering from moderate sleep apnea and is no different from what alternative treatments accomplish. However, under the spell of machine-centered thinking, this would be seen solely as the patient’s fault for being insufficiently diligent rather than as a failure of the CPAP approach more broadly.
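The arithmetic behind that figure is just a time-weighted average of AHI across masked and unmasked hours. A minimal sketch, using the hypothetical numbers from the example above rather than clinical data:

```python
def nightly_ahi(segments):
    """Time-weighted average AHI across sleep segments.

    segments: list of (hours, ahi) pairs, e.g. masked vs. unmasked sleep.
    Total events = sum of hours * events-per-hour; divide by total hours.
    """
    total_hours = sum(hours for hours, _ in segments)
    total_events = sum(hours * ahi for hours, ahi in segments)
    return total_events / total_hours

# 5 hours masked at AHI 4, then 3 hours unmasked at AHI 25:
# (5*4 + 3*25) / 8 = 95 / 8 = 11.875 — "almost 12," i.e. moderate apnea.
print(nightly_ahi([(5, 4), (3, 25)]))
```

The point the calculation makes is that compliance statistics report only the masked segment, while the patient lives with the whole-night average.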
Looking at other cases of machine-centered failure, however, provides lessons regarding how sleep apnea treatment could be more person-centered. For instance, autopilot leads to new kinds of plane crashes because trying to completely delegate the process of flying to an algorithm deskills pilots, leading them to make elementary mistakes when the autopilot shuts off in unusual circumstances. The alternative is to “informate,” which involves using automation technologies to help pilots become better at their jobs: helping them maintain attention, periodically testing their skills, providing feedback on performance, and so on. Informating treats pilots’ cognitive capacities and the human-machine interface as part of the design, rather than expecting users to be superhuman. The challenge for sleep apnea researchers is to learn to think outside the machine-centered box. Rather than simply delegate the holding open of patients’ throats to a machine, how could patients be better empowered to do that themselves?
This alternative approach is mostly undone science. While there are a few studies looking into how physical therapy exercises, playing the didgeridoo, and a cannabinoid could reduce the frequency of apnea incidents by up to 50 percent, there are few follow-up studies, much less any research attempting these treatment options in combination. My doctor has spent little to no energy trying to diagnose exactly why my airway collapses. Given that breathing is a semi-voluntary act, what reason is there to believe that I could not retrain my respiratory system to have a more suitable level of musculature?
Insofar as today’s paradigm of compliance to CPAP reigns, apnea sufferers like myself are left in the dark, trying to piece together sparse information on the Internet in order to design our own alternative and complementary treatment pathways. This need not be the case. I could use the help of a trained medical professional, rather than go it alone. Absent a less machine-centered, more person-centered paradigm of apnea treatment, I do not have any other options.
There has been no shortage of (mainly conservative) pundits and politicians suggesting that the path to fewer school shootings is armed teachers—and even custodians. Although it is entirely likely that such recommendations are not really serious but rather meant to distract from calls for stricter gun control legislation, it is still important to evaluate them. As someone who researches and teaches about the causes of unintended consequences, accidents, and disasters for a living, I find the idea that arming public school workers will make children safer highly suspect—but not for the reasons one might think.
If there is one commonality across myriad cases of political and technological mistakes, it would be the failure to acknowledge complexity. Nuclear reactors designed for military submarines were scaled up by over an order of magnitude to power civilian power plants without sufficient recognition of how that affected their safety. Large reactors can get so hot that containing a meltdown becomes impossible, forcing managers to be ever vigilant to the smallest errors and install backup cooling systems—which only added difficult-to-manage complexity. Designers of autopilot systems neglected to consider how automation hurt the abilities of airline pilots, leading to crashes when the technology malfunctioned and now-deskilled pilots were forced to take over. A narrow focus on applying simple technical solutions to complex problems generally leads to people being caught unawares by ensuing unanticipated outcomes.
Debate about whether to put more guns in schools tends to emphasize the solution’s supposed efficacy. Given that even the “good guy with a gun” best positioned to stop the Parkland shooting failed to act, can we reasonably expect teachers to do much better? In light of the fact that mass shootings have even occurred at military bases, what reason do we have to believe that filling educational institutions with armed personnel will reduce the lethality of such incidents? As important as these questions are, they divert our attention from the new kinds of errors produced by applying a simplistic solution—more guns—to a complex problem.
A comparison with the history of nuclear armaments should give us pause. Although most Americans during the Cold War worried about a potential atomic war with the Soviets, Cubans, or Chinese, much of the real risk associated with nuclear weapons involved accidental detonation. While many believed during the Cuban Missile Crisis that total annihilation would come from nationalistic posturing and brinkmanship, it was actually ordinary incompetence that brought us closest. Strategic Air Command’s insistence on maintaining U-2 and B-52 flights and intercontinental ballistic missile tests during periods of heightened tension risked a military response from the Soviet Union: pilots invariably got lost and approached Soviet airspace, and missile tests could have been misinterpreted as malicious. Malfunctioning computer chips made NORAD’s screens light up with incoming Soviet missiles, leading the US to prep and launch nuclear-armed jets. Nuclear weapons stored at NATO sites in Turkey and elsewhere were sometimes guarded by a single American soldier. Nuclear-armed B-52s crashed or accidentally released their payloads, with some coming dangerously close to detonation.
Much the same would be true for the arming of school workers: The likelihood of routine human error would put children at risk. Millions of potentially armed teachers and custodians translates into an equal number of opportunities for a troubled student to steal weapons that would otherwise be difficult to acquire. Some employees are likely to be as incompetent as Michelle Ferguson-Montgomery, a teacher who shot herself in the leg at her Utah school—though others may not be so lucky as to avoid hitting a child. False alarms will result not simply in lockdowns but in armed adults roaming the halls and, as a result, the possibility of children being killed for holding cellphones or other objects that can be confused for weapons. Even “good guys” with guns miss the target at least some of the time.
The most tragic unintended consequence, however, would be how arming employees would alter school life and the personalities of students. Generations of Americans mentally suffered under Cold War fears of nuclear war. Given the unfortunate ways that many from those generations now think in their old age (prone to hyper-partisanship, hawkishness in foreign affairs, and excessive fear of immigrants), one worries how a generation of kids brought up in quasi-militarized schools could be rendered incapable of thinking sensibly about public issues—especially when it comes to national security and crime.
This last consequence is probably the most important one. Even though more attention ought to be paid to the accidental loss of life likely to be caused by arming school employees, it is far too easy to endlessly quibble about the magnitude and likelihood of those risks. That debate is easily scientized and thus dominated by a panoply of experts, each claiming to provide an “objective” assessment regarding whether the potential benefits outweigh the risks. The pathway out of the morass lies in focusing on values, on how arming teachers—and even “lockdown” drills—fundamentally disrupts the qualities of childhood that we hold dear. Schools transformed into places defined by a constant fear of senseless violence cannot feel as warm, inviting, and communal as they otherwise could. We should be skeptical of any policy that promises greater security only at the cost of the more intangible features of life that make it worth living.
If your Facebook wall is like mine, you have seen no shortage of memes trying to convince you that a simple explanation for school shootings exists. One claims that their increase coincides with the decline of proper “discipline” (read: corporal punishment) of children thirty years ago. Yet all sorts of things have changed over the last several decades, especially since 2011, when the frequency of mass shootings tripled. In any case, Europeans are equally unlikely to strike their children but have seen no comparable uptick in acts of mass violence—the 2011 attack in Norway notwithstanding. Moreover, assault weapons like the AR-15 have been available for fifty years, and the federal assault weapons ban expired back in 2004, long before today’s upswing in shootings. Under the slightest bit of scrutiny, any single-cause explanation begins to unravel.
Journalists and other observers often note that the perpetrators of these events were “loners” or socially isolated but do little to no further investigation when it comes time to recommend solutions. It is as if we have begun to accept the existence of such isolated and troubled individuals as natural, as if little could be done to prevent it, as if eliminating civilian weapons or de-secularizing society were any less wicked problems. If there is any mindset my book, Technically Together, tries to eliminate, it is the belief that the social lives offered to us by contemporary networked societies are unalterable—the idea that we have arrived at the best of all possible social worlds. Indeed, it is difficult to square sociologist Keith Hampton’s claim that “because of cellphones and social media, those we depend on are more accessible today than at any point since we lived in small, village-like settlements” with massive increases in the rates of medication use for depression and anxiety, not to mention the frequency of mass shootings. At the very least, digital technologies—for all their wonders—do less than is needed to remedy feelings of isolation.
Such changes, I contend, suggest that something is very wrong with contemporary practices of togetherness. No doubt most of us get by well enough with some mixture of social networks, virtual communities, and perhaps a handful of neighborly and workplace-based connections (if we’re lucky). That said, most goods, social or otherwise, are unequally distributed. Even if sociologists disagree about whether social ties have changed on average, the distribution of connection has changed, and so have the qualitative dimensions of friendship. For every social butterfly who uses online networks to maintain levels of acquaintanceship that would have been impossible in the days of rolodexes and phone conversations, there are those for whom increasing digital mediation has meant a decline in companionship in both number and intimacy. As nice as “lurking” on Facebook or a pleasant comment from a semi-anonymous Reddit compatriot can be, they cannot match a hug. Indeed, self-reported loneliness and expressed difficulties in sustaining close friendships persist among the older generations and young men despite no lack of digital mechanisms for connecting with others.
Some sociologists downplay this, as if highlighting the downsides of social networks invariably leads to simplistically blaming them for people’s problems. No doubt Internet critics like Sherry Turkle overlook many of the complexities of digital-age sociality, but only those socially advantaged by contemporary network technologies benefit from viewing them through rose-colored glasses. Certainly an explanation for mass shootings cannot be reduced to the prevalence of digital technologies, just as it cannot be blamed simply on the ostensible disappearance of God from schools, declines in juvenile corporal punishment, the mere presence of assault weapons, or any of the other purported causes that proliferate in the media. What Internet technologies do provide, however, is a window into society—insofar as they can exacerbate or make more visible social changes set in motion decades earlier.
To try to blame the Internet for social isolation would fail to recognize that it was suburbia that first physically isolated people. Suburbia makes the warm intimacy of bodily co-presence hard work; hanging out requires gas money as well as the time and energy to drive somewhere.
Skeptical readers would probably point out that events like mass shootings became prevalent and accelerated well after the suburb-building boom of the mid-20th century. That objection is easy to counter: social lag. The first suburban dwellers brought with them communal practices learned in small towns or tight-knit urban neighborhoods, and their children maintained some of them. 30 Rock’s Jack Donaghy lamented that 1st generation immigrants work their fingers to the bone, the 2nd goes to college, and the 3rd snowboards and takes improv classes. A similar generational slide could be said about community in suburbia: The 1st generation bowls together; the 2nd organizes neighborhood watch; the 3rd waits with their kids in the car until the school bus arrives.
Even while considering all that the physical makeup of our cities does to stifle community life, it would be a mistake not to recognize that there is something unique about many of our Internet activities that makes them far more conducive to feelings of loneliness than other media—even if they do connect us with friends.
Consider how one woman in the BBC documentary The Age of Loneliness laments that social media makes her feel even lonelier, because she cannot help but compare her own life to the “highlights reels” posted by acquaintances. Others use the Internet to avoid the painful awkwardness and risk of in-person interactions, getting stuck in a downward spiral of solitude. These features combine with a third to help give birth to mass shooters: The “long tail” of the Internet provides websites that concentrate and amplify pathological tendencies. Forums that encourage and help people with eating disorders to continue damaging behaviors are as common as racist, violence-promoting websites, many of which have been frequented by recent mass shooters.
While it is the suburbs that physically isolate people and make physical friendships practically difficult, online social networks too easily exacerbate and highlight that isolation. My point, however, is not to call for dismantling the Internet—though I think it could use a massive redesign. Such a call would be as simple-minded as believing that merely eliminating AR-15s or making kids read the Bible in school would prevent acts of mass violence. Appeals to improving mental health services and calls to arm teachers or place military veterans at schools are equally misguided. These are all band-aid solutions that fail to ask about underlying causes. What we need most is not more guns, God, scrutiny of the mentally ill, or even necessarily gun bans, but a sober evaluation of our social world: Why does it not provide adequate levels of loving togetherness and belonging to nearly everyone? How could it?
To some this might sound like a call to coddle potential murderers. Yet, given that people’s genetics do not fully explain their personalities, societies have to reckon with the fact that mass shooters are not born ready-made monsters but become that way. It is difficult not to see parallels between many young men today and the “lost generation” that was so liable to fall prey to fascism in the early 20th century. The growth in mainly white, young, and male mass shooters cannot be totally unrelated to the increase in mainly white, young, and male acolytes of prophets like Jordan Peterson, who extol the virtues of traditional notions of male power. Absent work toward ameliorating the “crisis of connection” that many men currently face, we should be unsurprised if some of them continue to try to replace a lost sense of belonging with violent power fantasies.
As a scholar concerned about the value of democracy within contemporary societies, especially with respect to the challenges presented by increasingly complex (and hence risky) technoscience, a good check for my views is to read arguments by critics of democracy. I had hoped Jason Brennan's Against Democracy would force me to reconsider some of the assumptions that I had made about democracy's value and perhaps even modify my position. Hoped.
Having read through a few chapters, I am already disappointed and unsure if the rest of the book is worth my time. Brennan's main assertion is that because some evidence shows that participation in democratic politics has a corrupting influence--that is, participants are not necessarily well informed and often end up becoming more polarized and biased in the process--we would be better off limiting decision-making power to those who have proven themselves sufficiently competent and rational, to an epistocracy. Never mind the absurdity of the idea that a process for judging those qualities in potential voters could ever be made in an apolitical, unbiased, or just way; Brennan does not even begin with a charitable or nuanced understanding of what democracy is or could be.
One early example that exposes the simplicity of Brennan's understanding of democracy--and perhaps even the circularity of his argument--is a thought experiment about child molestation. Brennan asks the reader to consider a society that has deeply deliberated the merits of adults raping children and subjected the decision to a majority vote, with the yeas winning. Brennan claims that because the decision was made in line with proper democratic procedures, advocates of a proceduralist view of democracy must see it as a just outcome. Due to the clear absurdity and injustice of this result, we must therefore reject the view that democratic procedures (e.g., voting, deliberation) themselves are inherently just.
What makes this thought experiment so specious is that Brennan assumes that one relatively simplistic version of a proceduralist, deliberative democracy can represent the whole. Even worse, his assumed model of deliberative democracy--ostensibly not too far from what already exists in most contemporary nations--is already questionably democratic. Not only are majoritarian decision-making and procedural democracy far from equivalent, but Brennan makes no mention of whether children themselves were participants in either the deliberative process or the vote, or even would have a representative say through some other mechanism. Hence, in this example Brennan actually ends up showing the deficits of a kind of epistocracy rather than democracy, insofar as the ostensibly more competent and rational adults are deliberating and voting for children. That is, political decisions about children already get made by epistocrats (i.e., adults) rather than democratically (understood as people having influence in deciding the rules by which they will be governed on the issues in which they have a stake). Moreover, any defender of the value of democratic procedures would likely counter that a well-functioning democracy would contain processes to amplify or protect the say of less empowered minority groups, whether through proportional representation or mechanisms that slow down policy or force majority alliances to make concessions or compromises. It is entirely unsurprising that democratic procedures look bad when one's stand-in for democracy is winner-take-all, simple majoritarian decision-making.
His attack on democratic deliberation is equally short-sighted. Noting, quite rightly, that many scholars defend deliberative democracy with purely theoretical arguments while much of the empirical evidence shows that many average people dislike deliberation and are often very bad at it, Brennan concludes that, absent promising research on how to improve the situation, there is no logical reason to defend deliberative democracy. This is where Brennan's narrow disciplinary background as a political theorist biases his viewpoint. It is not at all surprising to a social scientist that average people would fail to deliberate well, or to like it, when nearly the entirety of contemporary society fails to prepare them for democracy. Most adults have spent 18 years or more in schools and up to several decades in workplaces that do not function as democracies but rather are authoritarian, centrally planned institutions. Empirical research on deliberation has merely uncovered the obvious: People with little practice with deliberative interactions are bad at them. Imagine if an experiment put assembly-line workers in charge of managing General Motors, then justified the current hierarchical makeup of corporate firms by pointing to the resulting non-ideal outcomes. I see no reason why Brennan's reasoning about deliberative democracy is any less absurd.
Finally, Brennan's argument rests on a principle of competence--and, concurrently, the claim that citizens have a right to governments that meet that principle. He borrows the principle from medical ethics, namely that a patient is competent if they are aware of the relevant facts, can understand them, appreciate their relevance, and can reason about them appropriately. Brennan quickly sidesteps the obvious objections about how any of the judgements about relevance and appropriateness could be made in non-political ways, merely claiming that the principle is unobjectionable in the abstract. Certainly, for the simplified examples that he provides of plumbers unclogging pipes and doctors treating patients with routine conditions, the validity of the principle of competence is clear. However, for the most contentious issues we face--climate change, gun control, genetically modified organisms, etc.--the facts and the reliability of experts are themselves in dispute. What political system would best resolve such a dispute? Obviously it could not be an epistocracy, given that the relevance and appropriateness of the "relevant" expertise is itself the issue to be decided. Perhaps Brennan's suggestions have some merit, but absent a non-superficial understanding of the relationship between science and politics, the foundation of his positive case for epistocracy is shaky at best. His oft-repeated assertion that epistocracy would likely produce more desirable decisions is highly speculative.
I plan on continuing to examine Brennan's arguments regarding democracy, but I find it ironic that his argument against average citizens--that they suffer too much from various cognitive maladies to reason well about public issues--applies equally to Brennan. Indeed, the hubris of most experts is deeply rooted in their unfounded belief that a little learning has freed them from the mental limitations that infect the less educated. In reality, Brennan is a partisan like anyone else, not a sagely academic doling out objective advice. Whether one turns to epistocratic ideas in light of the limitations of contemporary democracies or advocates for ensuring the right preconditions for democracies to function better comes back to one's values and political commitments. So far it seems that Brennan's book demonstrates his own political biases as much as it exposes the ostensibly insurmountable problems for democracy.
After news broke of the Las Vegas shooting, which claimed some 59 lives, professional and lay observers did not hesitate in trotting out the same rhetoric that Americans have heard time and time again. Those horrified by the events demanded that something be done; indeed, the frequency and scale of these events should be horrifying. Conservatives, in response, emphasized evidence for what they see as the futility of gun control legislation. Yet it is not so much gun control itself that seems futile but rather our collective efforts to accomplish almost any policy change. The Onion satirized America's firearm predicament with the same headline used after numerous other shootings: “‘No Way to Prevent This,’ Says Only Nation Where This Regularly Happens.” Why is it that we Americans seem so helpless to effect change with regard to mass shootings? What explains our inability to collectively act to combat these events?
Political change is, almost invariably, slow and incremental. Although the American political system is, by design, uniquely conservative and biased toward maintaining the status quo, that is not the only reason why rapid change rarely occurs. Democratic politics is often characterized as being composed of a variety of partisan political groups, all vying with one another to get their preferred outcome in any given policy area: that is, as pluralistic. When these different partisan groups are relatively equal and numerous, change is likely to be incremental because of substantial disagreements between these groups and the fact that each only has a partial hold on power. Relative equality among them means that any policy must be a product of compromise and concession—consensus is rarely possible. Advocates of environmental protection, for instance, could not expect to convince governments to immediately dismantle coal-fired power plants, though they might be able to get taxes, fines, or subsidies adjusted to discourage them; the opposition of industry would prevent radical change. Ideally, the disagreements and mutual adjustments between partisans would lead to a more intelligent outcome than if, say, a benevolent dictator unilaterally decided.
While incremental policy change would be expected even in an ideal world of relatively equal partisan groups, things can move even slower when one or more groups are disproportionately powerful. This helps explain why gun control policy—and, indeed, environmental protections and a whole host of other potentially promising changes—more often stagnates than advances. Businesses occupy a relatively privileged position compared to other groups. While the CEO of Exxon can expect the president’s ear whenever a new energy bill is being passed, average citizens—and even heads of large environmental groups—rarely get the same treatment. In short, when business talks, governments listen. Unsurprisingly, the voice of the NRA, which is in essence a lobby group for the firearm industry, sounds much louder to politicians than anyone else’s—something that is clear from the insensitivity of congressional activity to widespread support for strengthening gun control policy.
But there is more to it than just that. I am not the first person to point out that the strength of the gun lobby stymies change. By focusing overly on the disproportionate power wielded by some in the gun violence debate, we miss the more subtle ways in which democratic political pluralism is itself in decline.
Another contributing factor to the slowness of gun policy change is the way Americans talk about issues like gun violence. Most news stories, op-eds, and tweets are laced with references to studies and a plethora of national and international statistics. Those arguing about what should be done about gun violence act as if the main barrier to change has been that not enough people have been informed of the right facts. What is worse is that most participants seem already totally convinced of the rightness of their own version or interpretation of those facts: e.g., employing post-Port Arthur Australian policy in the US will reduce deaths or restrictive gun laws will lead to rises in urban homicides. Similar to two warring nations both believing that they have God uniquely on their side, both sides of the gun control debate lay claim to being on the right side of the facts, if not rationality writ large.
The problem with such framings (besides the fact that no one actually knows what the outcome would be until a policy is tried out) is that anyone who disagrees must be ignorant, an idiot, or both. That is, exclusively fact-based rhetoric—the scientizing of politics—denies pluralism. Any disagreement is painted as illegitimate, if not heretical. Such a view leads to a fanatical form of politics: There is the side with “the facts” and the side that only needs to be informed or defeated, not listened to. If “the facts” have already pre-determined the outcome of policy change, then there is no rational reason for compromise or concession; one would simply be harming one’s own position (and entertaining nonsense).
If gun control policy is to proceed more pluralistically, then it would seem that rhetorical appeals to the facts would need to be dispensed with—or at least modified. Given that the uncompromising fanaticism of some of those involved seems rooted in an unwavering certainty regarding the relevant facts, emphasizing uncertainty would likely be a promising avenue. In fact, psychological studies find that asking people to face the complexity of public issues and recognize the limits of their own knowledge leads to less fanatical political positions.
Proceeding with a conscious acknowledgement of uncertainty would have the additional benefit of encouraging smarter policy. Guided by an overinflated trust that a few limited studies can predict outcomes in exceedingly complex and unpredictable social systems, policy makers tend to institute rule changes or laws with no explicit role for learning. Even though scientific theories are only tentatively true, ready to be overturned by evermore discerning experimental tests or shifts in paradigm, participants in the debate act as if events in Australia or Chicago have established eternal truths about gun control. As a result, seldom is it considered that new policies could be tested gradually, with background check and registration requirements that become more stringent over time or with regional rollouts, alongside an explicit emphasis on monitoring for effectiveness and unintended consequences—especially consequences for the already marginalized.
How Americans debate issues like gun control would be improved in still other ways if the narrative of “the facts” were not so dominant in people’s speech. It would allow greater consideration of values, feelings, and experiences. For instance, gun rights advocates are right to note that semiautomatic “assault” weapons are responsible for a minority of gun deaths, but their narrow focus on that statistical fact prevents them from recognizing that it is not the weapons’ “objective” danger that motivates their opponents but their political riskiness. The assault rifle, due to its use in horrific mass shootings, has come to symbolize American gun violence writ large. For gun control advocates it is the antithesis of conservatives’ iconography of the flag: It represents everything that is rotten about American culture. No doubt reframing the debate in this way would not guarantee more productive deliberation, but it would at least give political opponents some means of beginning to understand each other’s positions.
Even if I am at least partly correct in diagnosing what ails American political discourse, there remains the pesky problem of how to treat it. Allusions to “the facts,” attempts to leverage rhetorical appeals to science for political advantage, have come to dominate political discourse over the course of decades—and without anyone consciously intending or dictating it. How to effect movement in the opposite direction? Unfortunately, while some social scientists study these kinds of cultural shifts as they occur throughout history, practically none of them research how beneficial cultural changes could be generated in the present. Hence, perhaps the first change citizens could advocate for would be more publicly responsive and relevant social research. Faced with an increasingly pathological political process and evermore dire consequences from epochal problems, social scientists can no longer afford to be so aloof; they cannot afford to simply observe and analyze society while real harms and injustices continue unabated.
A recent MIT Technology Review article posed a thought-provoking question with an obvious answer--at least to anyone familiar with the history of technology: "Are Semi-Autonomous Cars Making Us Worse Drivers?" It is difficult not to see autonomous and semi-autonomous driving technology as another case where widespread techno-enthusiasm leads otherwise intelligent people to act unintelligently. Indeed, an answer to the Technology Review's question was available long before driver-assist technologies ever hit the road.
Although we are often awestruck by human ingenuity, there are fairly firm limits to the range of cognitive tasks that it is reasonable to expect out of any person. Complex interactions within and between large technological systems are frequently opaque to even experts, and most people find it extremely challenging to babysit technological controls for long periods of time. Though it seems dead obvious in hindsight, military leaders were surprised to find that personnel tasked with monitoring then newly developed radar screens for a once-in-a-blue-moon sighting soon became complacent and dozed off. It has been recognized for decades that, even though improved maintenance scheduling alongside technical advancements like autopilot have enhanced the safety record of airlines, new and often deadly mistakes have occurred as a result of automating the control of passenger jets. Pilots are now tasked with monitoring gauges and babysitting the autopilot, just the tasks that humans are poorly suited for. Unsurprisingly, when there is an autopilot error and the human pilot is thrust back into control--often in a crisis moment requiring immediate and accurate decision making--they make elementary and deadly errors.
It has been recognized by technology scholars for decades that automation creates unintended consequences, especially for complex and tightly coupled systems--such as navigating an automobile through a maze of traffic at seventy miles per hour. Technology scholars have proposed informating as a safer alternative, one that recognizes how people actually interact effectively with technologies rather than trying to cut them out of the loop. In informated processes the human driver would still be in control, but computerized components would aim to ensure that they make timely, accurate, and more sensible decisions. Informated automobiles would be explicitly designed to make their human operators better drivers. It is in no way guaranteed that a car on autopilot would outperform a properly informated driver.
Exactly why companies like Google have not worked to develop informating technologies for automobiles is anyone's guess. I suspect that it has more to do with the reigning business model of the 21st century than anything like a concern for safety. Lacking firm data on what a massively automated highway would actually look like, claims of improved safety with driverless and semi-autonomous cars are more speculation, conveniently used for public relations purposes, than "proven" science. Companies like Google have a financial stake in getting drivers to spend less and less time at the wheel. Time spent operating an automobile is time not spent producing personal data for Google on a digital device. Autonomous automobiles are part of a growing network of technologies aimed at producing an evermore detailed digital profile of a person's desires and purchasing proclivities.
Yet companies like GM and Audi have been hard at work developing semi-autonomous driving technologies, despite not having the same financial stake in people's drive time being occupied by Netflix binges and Facebook rants rather than by navigating traffic. They may be pursuing such technologies for a more mundane reason: not wanting to appear "behind the times." Indeed, given the often fickle and unpredictable swings of consumer markets, car companies are prone to jumping on bandwagons.
At the same time, there is the pervasive--and evidence-resistant--cultural belief that automated technologies automatically outperform human operators in all the (relevant) aspects of a job. Certainly computers have an advantage when it comes to highly routinized or algorithmic tasks--games, assembly-line work, and the like--but no program has been able to replicate human judgment. Yet it seems taken for granted that any time a human being can be replaced by a robot--e.g., care bots for the elderly and children or diagnostic algorithms--progress has been made. Indeed, some go so far as to believe that a kind of heaven on Earth can be realized by digitizing our bodies and consciousness and merging them with artificial intelligence algorithms.
As clear as it is, upon reflection, that such problematic beliefs, business interests, and potentially misguided strategies are at play in any automation effort, there is a profound lack of self-awareness, or honesty, among automakers themselves and in media reports. Citizens might still decide, of course, that automated automobiles are worth the risks, even when challenged to weigh some of the issues that I have raised here, but at least they would be doing so consciously. Indeed, the most problematic automated process within technological civilization may be technological change itself. Quasi-decisions about the direction of technological innovation get made as if by autopilot; societies react more than they consciously steer. And semi-random technological drifts get interpreted as if they were part of an inevitable process of evolution toward Modernity. Unless social scientists succeed in figuring out how to cure societies of their technological sleepwalking, innovators seem destined to lurch continually from error to mistake to disaster.
Taylor C. Dotson is an associate professor at New Mexico Tech, a Science and Technology Studies scholar, and a research consultant with WHOA. He is the author of The Divide: How Fanatical Certitude is Destroying Democracy and Technically Together: Reconstructing Community in a Networked World. Here he posts his thoughts on issues mostly tangential to his current research.