I wrote a short piece for the 4S blog, where I explore alternative roles for scientists and engineers in public controversies. More and more, I believe that experts shouldn't be discerning what we "ought" to do about a situation (whether climate change or gas stoves). Nor are they used fully effectively as honest brokers. Rather, I think they should act as "thoughtful partisans," who use their expertise to facilitate more productive politics. Click here to read it.
If you tried to describe early 21st century politics with a single word, apocalyptic would be among the top contenders. From climate change and COVID to biodiversity loss and abortion, the rhetoric surrounding most issues seems to be "The world (as you know it) is going to end. Only radical action can hope to save it." Whatever the merits of the argument that the relevant indicators have gone from merely "bad" to a "crisis," the consequences for democracy have been deadly. Once political problems are framed as potentially apocalyptic, the implication that they are too important or pressing to be solved by regular old democracy tends to be taken for granted. Participants become polarized and fanatical, seeing their opponents not just as behind the times or ignorant of "the facts" but as enemies of the future of humanity. What's more, this framing imposes a disaster narrative. And like a tornado bearing down upon us, the expectation is that people lay all other concerns aside and just do what they're told. But as I've pointed out earlier, that story about the politics doesn't really hold up.
I think that we're not going to make consistent and dependable progress on the global crises we face until we can learn to talk about them as merely "big problems," rather than as cataclysms. Decades of political experience have told us that no amount of screaming about "the facts" or insisting on the stupidity or evilness of other people will make them drop their own political views and do what we tell them. Most of all, it only takes a bit of reflection to recognize that these problems are not at all like a disaster, like a tornado we can see with our own eyes. They depend upon our trust in experts and those communicating the science. The pandemic, for instance, is not something the average citizen perceives in its totality. It is a diffuse problem, one that people have to be constantly reminded of. Choosing to wear a mask or get vaccinated, as a result, is not at all like stacking sandbags in the face of rising flood waters. It asks people to play their part in improving an epidemiological situation that is understood only by government modelers. It demands that people see imperceptibly influencing statistical projections of viral spread and healthcare burden as a heroic and necessary act. That can't be done except by building trust, by doing democracy, by convincing your opponents that you're not only credible but honest and benevolent.
In any case, see more at The New Atlantis.
Science and technology scholars write and talk a lot about post-normal science, the unique political situation that emerges for issues where there are considerable stakes and high levels of (perceived) uncertainty. I was asked to give a talk on short notice at the World Biodiversity Forum this week, and I used it as an opportunity to think through how people involved in areas of post-normal science and politics try to cope with or escape the situation of post-normality. Can the stakes be reduced while still addressing the problem? Can the perceived uncertainties be lessened without other stakeholders seeing it as dishonest, biased, or unfair? Those are just a few of the thoughts that I explored. Unfortunately, the talk wasn't recorded, but here is a link to the slides.
Biodiversity conservation is riddled with conflict. This is unsurprising, given that people live where we also find important and charismatic animal species. Although there has been a lot of good work looking into how to reconcile the often divergent interests of conservationists and rural peoples, I feel like a lot of it is on the wrong track. No doubt it is well intentioned. There is a long history of treating rural people as conservation "problems," one that goes at least as far back as efforts to remove native peoples from America's newly established national parks. The idea that species can only be protected by creating "pristine" wilderness areas is increasingly recognized as not only ahistorical but also the driving force behind the expropriation of land from rural residents. The future of conservation relies on moving past the historical antagonism between typically urban-dwelling, "science following" conservation advocates and the people who live within the landscapes seen as needing protection.
The response within conservation has paralleled a similar move within Science and Technology Studies, consequently suffering some of the same drawbacks. Researchers in STS have done great work to highlight the existence of "lay expertise": the recognition that non-scientists have important knowledge to contribute. Similarly, environmental scientists have uncovered how indigenous and local peoples often have an intricate understanding of their local environment and have developed strategies that allow them to live off the land in ways that are sometimes more sustainable or supportive of biodiversity than what so-called modern people do. Work in both areas tries to encourage scientists to be more humble and open to listening to non-scientists.
The risk is romanticizing lay people. Not all indigenous peoples have lived so sustainably. Mesoamerica, for instance, was once dominated by groups who sustained themselves as much by imperialism as by their homegrown agricultural practices. More broadly, the antithesis of "follow the science" can devolve into a kind of epistemological populism, in which it is non-experts whose knowledge becomes sacrosanct or unquestionable. Recall how Newt Gingrich, in the lead-up to the 2016 election, argued that Americans' belief that the country was more dangerous than in the past, despite statistics to the contrary, was all that mattered for the election. Democracy is served by putting different kinds of knowledge in conversation, not by venerating the little guy. The question here is not whether the average American is wrong or the FBI statistics are right, but why Americans would still feel unsafe despite this data. The apparent contradiction uncovers an unresolved problem that policy should address.
I think that part of the problem is that we've mistaken political problems for knowledge conflicts. Past injustices were often justified by science, such as when pristine (read "human free") protected areas were argued to be the only way to preserve nature. So the appropriate response seems to be that we can prevent those injustices by elevating lay knowledge to be equal in value to science. The idea is that the power differential was created by the unequal weight given to different kinds of knowledge, but really the causality worked in the opposite direction: power legitimates one group's knowledge over that of others. So we focus excessively on developing ways to give diverse forms of knowledge equal weight when the real issue is simply that the way we decide what is a problem and how to solve it is insufficiently democratic. We're treating the symptom rather than the cause.
The way out is both agnostic and agonistic. I think it's better to not get into the morass of deciding which knowledge should have the most influence or whether different ways of knowing are or are not equal. Rather, groups with different ideas about what is important and different ways of knowing about the environment should have more equal say in deciding how to solve collective problems. We settle political conflicts through democracy, not convoluted analytical schemes for realizing epistemological equality. This is also the right way, because rural people should have a say in what conservation measures are deployed where they live and how, full stop. They have this right not because they have special knowledge or because they live in appropriately non-western or native ways, but because they live there and have a stake. No amount of scientific know-how justifies depriving someone else of a say in decisions that affect them, insofar as we want to live in democratic societies.
In any case, if you are intrigued by this line of thought, take a peek at a commentary that I recently published in One Earth.
We have all heard stories of people pilloried online. One of the earliest instances occurred in South Korea in 2005, when a young woman’s dog pooped in a subway car and she didn’t clean it up. Someone had taken photographs with a flip phone and posted them online, unleashing nationwide public harassment. The most famous story from Twitter is that of communications director Justine Sacco, who in 2013, before a flight to South Africa, tweeted a hamfisted joke about getting AIDS. Even though she had only 170 Twitter followers, the post blew up — as did her life.
The two stories show rather different kinds and levels of offense and shaming. But they both illustrate the same reality. Once upon a time, an ill-advised comment or action drew an appropriately stern rebuke from a friend or a boss or a stranger; today it draws a public firestorm that can ruin you. So now everyone is on guard, because everyone is watching.
Continued at The New Atlantis
Entering the new year, Americans are increasingly divided. They clash not only over differing opinions on COVID-19 risk or abortion, but basic facts like election counts and whether vaccines work. Surveying rising political antagonism, journalist George Packer recently wondered in The Atlantic, “Are we doomed?”... Continued at The Conversation
[We must] rethink the proper place of scientific expertise in policymaking and public deliberation. [An] inventory of the consequences of “follow the science” politics is sobering, applying to COVID-19 no less than to climate change and nuclear energy. When scientific advice is framed as unassailable and value-free, about-faces in policy undermine public trust in authorities. When “following the science” stifles debate, conflicts become a battle between competing experts and studies.
We must grapple with the complex and difficult trade-offs and judgment calls out in the open, rather than hide behind people in lab coats, if we are to successfully and democratically navigate the conflicts and crises that we face.
Continued at ISSUES...
Is it better to tolerate seemingly prejudiced political opinions, or should we be intolerant of people whose views on diversity, equity, and identity strike us as harmful?
I am an advocate for radically tolerating political disagreement, even if that disagreement strikes us as unmoored from facts or common sense. One reason is that dissent makes democracy more intelligent. While many believe that vaccine skeptics misunderstand the relevant science and threaten public health, their opposition to vaccines nevertheless draws attention to chronic problems within our medical system: financial conflicts of interest, racism and sexism, and other legitimate reasons for mistrust. People should have their voices heard because politics shapes the things citizens care about, not just the things they know.
Tolerating disagreement also ensures the practice of democracy. Otherwise, we may find ourselves handing off ever more political control to experts and bureaucrats. Political truths can motivate fanaticism. Whether it is “follow the science” or “commonsense conservatism,” the belief that policy must actualize one’s own view of reality divides the world into “enlightened” good guys and ignorant enemies who just need to go away.
But what about beliefs that seem harmful and intolerant? You might question, as the political philosopher Jonathan Marks does, whether a zealous belief in the idea “that all men are created equal” is so problematic. Why not divide the political world into citizens who believe in equality and harmfully ignorant people to be ignored? The trouble is that doing so makes actually achieving equality more difficult....Continued at Zocalo Public Square
This is a more academic piece of writing than I usually post, but I want to help make a theory so central to my thinking more accessible. This is an excerpt from a paper that I published in The Journal of Responsible Innovation a few years ago. If you find this intriguing, Intelligent Trial and Error (ITE) also showed up in an article that I wrote for The New Atlantis last year.
The Intelligent Steering of Technoscience
ITE is a framework for better understanding and managing the risks of innovation, largely developed via detailed studies of cases of both technological error and instances when catastrophe had been fortuitously averted (see Morone and Woodhouse 1986; Collingridge 1992). Early ITE research focused on mistakes made in developing large-scale technologies like nuclear energy and the United States’ space shuttle program. More recently, scholars have extended the framework in order to explain partially man-made disasters, such as Hurricane Katrina’s impact on New Orleans (Woodhouse 2007), as well as more mundane and slow-moving tragedies, like the seemingly inexorable momentum of sprawling suburban development (Dotson 2016).
Although similar to high-reliability theory (Sagan 1993), an organizational model that tries to explain why accidents are so rare on aircraft carriers and among American air traffic controllers, ITE has a distinct lineage. The framework’s roots lie in political incrementalism (Lindblom 1959; Woodhouse and Collingridge 1993; Genus 2000). Incrementalism begins with a recognition of the limits of analytical rationality. Because analysts lack the necessary knowledge to predict the results of a policy change and are handicapped by biases, their own partisanship, and other cognitive shortcomings, incrementalism posits that they should—and very often do—proceed in a gradual and decentralized fashion. Policy therefore evolves via mutual adjustment between partisan groups, an evolutionary process that can be stymied when some groups’ desires—namely business’—dominate decision making. In short, pluralist democracy outperforms technocratic politics. Consider how elite decision makers in pre-Katrina New Orleans eschewed adequate precautions, having come to see flooding risks as acceptable or less important than supporting the construction industry; encouraging and enabling myriad constituent groups to advocate for their own interests in the matter would have provoked deliberations more likely to have led to preventative action (see Woodhouse 2007). In any case, later political scientists and decision theorists extended incrementalism to technological development (Collingridge 1980; Morone and Woodhouse 1986).
ITE also differs from technology assessment, though both seek to avoid undesirable unintended consequences (see Genus 2000). Again, ITE is founded on a skepticism of analysis: the ramifications of complex technologies are highly unpredictable. Consequences often only become clear once a sociotechnical system has already become entrenched (Collingridge 1980). Hence, formal analytical risk assessments are insufficient. Lacking complete understanding, participants should not try to predict the future but instead strategize to lessen their ignorance. Of course, analysis still helps. Indeed, ITE research suggests that technologies and organizations with certain characteristics hinder the learning process necessary to minimize errors, characteristics that preliminary assessments can uncover.
Expositions of ITE vary (cf. Woodhouse 2013; Collingridge 1992; Dotson 2017); nevertheless, all emphasize meeting the epistemological challenge of technological change: can learning happen quickly and without high costs? The failure to face up to this challenge not only leads to major mistakes for emerging technologies but can also stymie innovation in already established areas. The ills associated with suburban sprawl persist, for instance, because most learning happens far too late (Dotson 2016). Can developers be blamed for staying the course when innovation “errors” are learned about only after large swaths of houses have already been built? Regardless, meeting this central epistemological challenge requires employing three interrelated kinds of precautionary strategies.
The first set of precautions are cultural. Are participants and organizations prepared to learn? Is feedback produced early enough and development appropriately paced so that participants can feasibly change course? Does adequate monitoring by the appropriate experts occur? Is that feedback effectively communicated to those affected and those who decide? Such ITE strategies were applied by early biotechnologists at the Asilomar Conference: They put a moratorium on the riskiest genetic engineering experiments until more testing could be done, proceeded gradually as risks became better understood, and communicated the results broadly (Morone and Woodhouse 1986). In contrast, large-scale technological mistakes—from nuclear energy to irrigation dams in developing nations—tend to occur because they are developed and deployed by a group of true believers who fail to fathom that they could be wrong (Collingridge 1992).
Another set of strategies entail technical precautions. Even if participants are disposed to emphasize and respond to learning, does the technology’s design enable precaution? Sociotechnical systems can be made forgiving of unanticipated occurrences by ensuring wide margins for error, including built-in redundancies and backup systems, and giving preference to designs that are flexible or easily altered. The designers of the 20th century nuclear industry pursued the first two strategies but not the third. Their single-minded pursuit of economies of scale combined with the technology’s capital intensiveness all but locked-in the light water reactor design prior to a full appreciation of its inherent safety limitations (Morone and Woodhouse 1989). No doubt the technical facet of ITE intersects with its cultural dimensions: a prevailing bias toward a rapid pace of innovation can create technological inflexibility just as well as overly capital-intensive or imprudently scaled technical designs (cf. Collingridge 1992; Woodhouse 2016).
Finally, there are political precautions. Do existing regulations, incentives, deliberative forums, and other political creations push participants toward more precautionary dispositions and technologies? Innovators may not be aware of the full range of risks or their own ignorance if deliberation is insufficiently diverse. AIDS sufferers, for instance, understood their own communities’ needs and health practices far better than medical researchers (Epstein 1996). Their exclusion slowed the development of appropriate treatment options and research. Moreover, technologies are less likely to be flexibly designed if deliberation on potential risks occurs too late in the innovation process. Finally, do regulations protect against widely shared conflicts of interest, encourage error correction, and enforce a fair distribution of the burden of proof? Regulatory approaches that demand “sound science” prior to regulation put the least empowered participants (i.e., victims) in the position of having to convincingly demonstrate their case and fail to incentivize innovators to avoid mistakes. In contrast, making innovators pay into victims’ funds until harm is disproven would encourage precaution by introducing a monetary incentive to prevent errors (Woodhouse 2013, 79). Indeed, mining companies already have to post remediation bonds to ensure that funds exist to clean up after valuable minerals and metals have been unearthed.
To these political precautions, I would add the need for deliberative activities to build social capital (see Fleck 2016). Indeed, those studying commons tragedies and environmental management have outlined how establishing trust and a vision of some kind of common future—often through more informal modes of communication—are essential for well-functioning forms of collective governance (Ostrom 1990; Temby et al. 2017). Deliberations are unlikely to lead to precautionary action and productively working through value disagreements if proceedings are overly antagonistic or polarized.
The ITE framework has a lot of similarities to Responsible Research and Innovation (RRI) but differs in a number of important ways. RRI’s four pillars of anticipation, reflexivity, inclusion, and responsiveness (Stilgoe, Owen and Macnaghten 2013) are reflected in ITE’s focus on learning. Innovators must be pushed to anticipate that mistakes will happen, encouraged to reflect upon appropriate ameliorative action, and made to include and be accountable to potential victims. ITE can also be seen as sharing RRI’s connection to deliberative democratic traditions and the precautionary principle (see Reber 2018). ITE differs, however, in terms of scope and approach. Indeed, others have pointed out that the RRI framework could better account for the material barriers and costs and prevailing power structures that can prevent well-meaning innovators from innovating responsibly (De Hoop, Pols and Romijn 2016). ITE’s focus on ensuring technological flexibility, countering conflicts of interest, and fostering diversity in decision-making power and fairness in the burden of proof exactly addresses those limitations. Finally, ITE emphasizes political pluralism, in contrast to RRI’s foregrounding of ethical reflexivity. Innovators need not be ethically circumspect about their innovations provided that they are incentivized or otherwise encouraged by political opponents to act as if they were.
An op-ed that I wrote with Nicholas Tampio was published in the Washington Post on Saturday. The reaction was much stronger than I anticipated it would be. I was ready for the run-of-the-mill social media negativity that I see on Twitter every day, but the vitriol in the article's comments and in Twitter replies was really something else. One person came to my wall to write that I had "blood on my hands," and no shortage of people questioned my intelligence and moral compass. In our article, Nick and I don't exactly come out against mandates writ large (though some have interpreted it that way); rather, we argue that pursuing vaccine mandates right now will not be worth the costs. The thing to remember is that we're living through two pandemics at the moment: COVID and rampant political polarization. Getting vaccine numbers up faster while making our democracy even more pathological is not a wise move.
Things weren't helped by Tom Nichols retweeting it to his 500k followers with such authoritative pronouncements as "It [vaccine mandates] is *exactly* how things work in a democracy, which is why you didn't get polio." For a guy who pronounced the "Death of Expertise," you'd think Nichols would pause to consider that he might want to learn from people who study vaccine hesitancy and resistance before claiming to know what's best, but he hasn't. Nichols, like many people today who decry the declining respect for truth or democracy, is really taking issue with the fact that people think differently than he does, rather than with any failure to respect expertise or act democratically per se.
And that's exactly the problem that, as I argue in my book The Divide, underlies contemporary democracies. It's not so much that people are "irrational," but that political opponents are absolutely convinced that they are on the right side of truth, whether they are pro-vaccine or anti-vaxx. This fanatical certitude produces demand for fanatical policy. Just look at Max Boot's now deleted tweet praising vaccine mandates in the notably undemocratic Saudi Arabia, or Matthew Yglesias's suggestion that vaccine resisters should be given the jab by force.
A lot of leftists and centrists are just as anti-democratic in their thinking as reactionary conservatives. It's just that that attitude only comes out when there's a population of citizens that refuse to embrace a truth that is accepted by the political mainstream. It's in these moments that self-described liberals and centrists out themselves as technocrats and agitate against the fundamental features of democracy: dialogue, negotiation, and compromise.
But they're not alone. As I demonstrate in my book, plenty of otherwise intelligent people have been confusing democracy with The Truth for quite some time, and that explains a lot of the political gridlock and intransigence in modern democracies.
The question will be whether enough of us can rise to the challenge of democratic citizenship in the near term in order to avoid the "death spiral" of polarization that previously infected nations like Venezuela, pre-Pinochet Chile, and pre-Franco Spain. Using mandates as a stick to punish the unvaccinated, especially while also giving the appearance that it's motivated by partisanship, will make a polarization death spiral in this country a real possibility.
But that's not the only thing that I've noticed in the reaction to the piece. First, many vaccine mandate advocates are unsurprisingly similar to vaccine hesitant citizens in how they perceive risk, just in the opposite direction. One commenter described worrying every day about her 3 year old ending up in the ICU with COVID. That's certainly a possibility, but the risk to children is actually not much greater than it has been for other long-prevalent viruses like the flu and RSV. It's common to chide the vaccine hesitant for their "irrational fear" of vaccine side effects, but plenty of vaccine supporters have similarly outsized worries about COVID. Both should be considered "legitimate" concerns that we ought to take seriously, even if the goal is to eventually lessen the magnitude of those worries.
The more annoying argument is the comparison between getting vaccinated and driving drunk: We don't let people onto public roads while drunk, so why should we let the "reckless" unvaccinated into public spaces? This is a terrible metaphor. No one comes into the world drunk or behind the wheel, but we were all born without immunity to COVID. A person has to deliberately imbibe alcohol to become a drunk driver, while not being a COVID spreader (well, less of one) requires permitting someone to inject a vaccine into your body. The metaphor blinds us to all of the important differences between these cases, making it easier to ignore the immense amount of trust in doctors, the FDA, and the pharmaceutical industry that it takes to get vaccinated. Plus, it reduces human beings to disease-ridden virus vectors, which is somewhat dehumanizing.
I think it's better to think of herd immunity as like an airplane, except it's an enormous plane that some 80 percent of us have to board before it can take off. People fear flying. They have to give up control. They have to trust the pilot, the FAA, and that engineers at companies like Boeing are all doing their jobs diligently. Would we strap people into their seats in that case or talk to them to try to alleviate their fears?
Another thing that I've learned from some of the emails I've received is that there's a big overlap between "essential workers" and the vaccine hesitant. At my own college and others throughout the country, it is staff and not faculty who are shunning the vaccine. At hospitals, it is nurses and orderlies who more often refuse, not doctors. Those of us who have been relatively shielded from most of the harms of COVID are often the ones most ardent in calling for mandates. A reader who emailed me framed it as "The professional class took none of the risks during the pandemic and are now forcing an experimental vaccine on us." That's a perspective that I hadn't considered, and it's one that I'm still thinking about. I think it explains some of the class dynamics of vaccine mistrust.
One final realization that I've had concerns vaccines as technological fixes. Dan Sarewitz wrote that he thought they were the best example of using a technology to sidestep the social complexities and difficulties in solving a tenacious public problem. I now think that he's wrong. The techno-fix for a disease like COVID-19 is treatment, not vaccination. This should be obvious, given how many of the vaccine hesitant have latched onto uncertain treatments like Ivermectin. Vaccines, as I noted above, require immense amounts of trust. They also ask that people who are not currently sick take a form of medicine. Treatments don't. People who are exceptionally sick see risks differently: they're looking to get better, not avoid getting sick. In light of the fact that pandemics aren't going to disappear anytime soon, we may want to put as much R&D into improving treatment options as into developing vaccines. We would be better off having a techno-fix that lets us temporarily sidestep the messy problem of vaccine hesitancy, giving us more time to engage with the vaccine hesitant, to hear their concerns, and to build trust.
Taylor C. Dotson is an associate professor at New Mexico Tech, a Science and Technology Studies scholar, and a research consultant with WHOA. He is the author of The Divide: How Fanatical Certitude is Destroying Democracy and Technically Together: Reconstructing Community in a Networked World. Here he posts his thoughts on issues mostly tangential to his current research.
On Vaccine Mandates
Escaping the Ecomodernist Binary
No, Electing Joe Biden Didn't Save American Democracy
When Does Someone Deserve to Be Called "Doctor"?
If You Don't Want Outbreaks, Don't Have In-Person Classes
How to Stop Worrying and Live with Conspiracy Theorists
Democracy and the Nuclear Stalemate
Reopening Colleges & Universities an Unwise, Needless Gamble
Radiation Politics in a Pandemic
What Critics of Planet of the Humans Get Wrong
Why Scientific Literacy Won't End the Pandemic
Community Life in the Playborhood
Who Needs What Technology Analysis?
The Pedagogy of Control
Don't Shovel Shit
The Decline of American Community Makes Parenting Miserable
The Limits of Machine-Centered Medicine
Why Arming Teachers is a Terrible Idea
Why School Shootings are More Likely in the Networked Age
Gun Control and Our Political Talk
Semi-Autonomous Tech and Driver Impairment
Community in the Age of Limited Liability
Conservative Case for Progressive Politics
Hyperloop Likely to Be Boondoggle
Policing the Boundaries of Medicine
On the Myth of Net Neutrality
On Americans' Acquiescence to Injustice
Science, Politics, and Partisanship
Moving Beyond Science and Pseudoscience in the Facilitated Communication Debate
Privacy Threats and the Counterproductive Refuge of VPNs
Andrew Potter's Macleans Shitstorm
The (Inevitable?) Exportation of the American Way of Life
The Irony of American Political Discourse: The Denial of Politics
Why It Is Too Early for Sanders Supporters to Get Behind Hillary Clinton
Science's Legitimacy Problem
Forbes' Faith-Based Understanding of Science
There is No Anti-Scientism Movement, and It’s a Shame Too
American Pro Rugby Should Be Community-Owned
Why Not Break the Internet?
Working for Scraps
Solar Freakin' Car Culture
Mass Shooting Victims ARE on the Rise
Are These Shoes Made for Running?
Underpants Gnomes and the Technocratic Theory of Progress
Don't Drink the GMO Kool-Aid!
On Being Driven by Driverless Cars
Why America Needs the Educational Equivalent of the FDA
On Introversion, the Internet and the Importance of Small Talk
I (Still) Don't Believe in Digital Dualism
The Anatomy of a Trolley Accident
The Allure of Technological Solipsism
The Quixotic Dangers Inherent in Reading Too Much
If Science Is on Your Side, Then Who's on Mine?
The High Cost of Endless Novelty - Part II
The High Cost of Endless Novelty
Lock-up Your Wi-Fi Cards: Searching for the Good Life in a Technological Age
The Symbolic Analyst Sweatshop in the Winner-Take-All Society
On Digital Dualism: What Would Neil Postman Say?
Redirecting the Technoscience Machine
Battling my Cell Phone for the Good Life