Back during the summer, Tristan Harris sparked a flurry of academic indignation when he suggested that we needed a new field called “Science & Technology Interaction,” or STX, dedicated to improving the alignment between technologies and social systems. Tweeters were quick to accuse him of “Columbusing,” claiming that such a field already existed in the form of Science & Technology Studies (STS) or some similar academic department. So ignorant, amirite?
I am far more sympathetic. If people like Harris (and earlier Cathy O’Neil) have been relatively unaware of fields like Science and Technology Studies, it is because much of the research within these disciplines is illegible to non-academics, not all that useful to them, or both. I really don’t blame them for not knowing. I am an STS scholar myself, and even the tables of contents of most issues of my field’s major journals don’t really inspire me to read further. And in fairness to Harris, and contrary to Academic Twitter, the field of STX that he proposes does not already exist. The vast majority of STS articles and books dedicate single-digit percentages of their words to actually imagining how technology could better match the aspirations of ordinary people and their communities. Next to no one details alternative technological designs or clear policy pathways toward a better future, at least not beyond a few pages at the end of a several-hundred-page manuscript.

My target here is not just this particular critique of Harris, but the whole complex of academic opiners who cite Foucault and other social theory to make sure we know just how “problematic” non-academics’ “ignorant” efforts to improve technological society are. As essential as it is to try to improve upon the past in remaking our common world, most of these critiques don’t really provide any guidance for what steps we should be taking. And I think that if scholars are to be truly helpful to the rest of humanity they need to do more than tally and characterize problems in ever more nuanced ways. They need to offer more than the academic equivalent of fiddling while Rome burns.

In the case of Harris, we are told that underlying the more circumspect digital behavior that his organization advocates is a dangerous preoccupation with intentionality. The idea of being more intentional is tainted by the unsavory history of humanistic thought itself, which has been used for exclusionary purposes in the past. Left unsaid is exactly how exclusionary or even harmful it remains in the present.

This kind of genealogical takedown has become cliché. Consider how one Gizmodo blogger criticizes environmentalists’ use of the word “natural” in their political activism. The reader is instructed that, because early Europeans used the concept of nature to prop up racist ideas about Native Americans, the term is now inherently problematic and baseless. The reader is supposed to believe from this genealogical problematization that all human interactions with nature are equally natural or artificial, regardless of whether we choose to scale back industrial development or to erect giant machines to control the climate.

Another common problematization is of the form “not everyone is privileged enough to…”, and it is often a fair objection. For instance, people differ in their individual ability to disconnect from seductive digital devices, whether due to work constraints or the affordability or ease of alternatives. But differences in circumstances similarly challenge people’s capacity to affordably see a therapist, retrofit their home to be more energy efficient, or bike to work (and one might add to that: read and understand Foucault). Yet most of these actions still accomplish some good in the world. Why is disconnection any more problematic than any other set of tactics that individuals use to imperfectly realize their values in an unequal and relatively undemocratic society?
Should we just hold our breath for the “total overhaul…full teardown and rebuild” of political economies that the far more astute critics demand?

Equally trite are references to the “panopticon,” a metaphor that Foucault developed to describe how people’s awareness of being constantly surveilled leads them to police themselves. Being potentially visible at all times enables social control in insidious ways. A classic example is the Benthamite prison, where a solitary guard at the center cannot actually view all the prisoners simultaneously, but the possibility that he is watching any given prisoner at any given moment is expected to reduce deviant behavior. This gets applied to nearly any area of life where people are visible to others, which means it is used to problematize nearly everything. Jill Grant uses it to take down the New Urbanist movement, which aspires (though fairly unsuccessfully) to build more walkable neighborhoods that are supportive of increased local community life. This movement is “problematic” because the densities it demands mean that citizens are everywhere visible to their neighbors, opening up possibilities for the exercise of social control. Whether any other way of housing human beings would avoid some form of residential panopticon is not exactly clear, except perhaps designing neighborhoods so as to prohibit social community writ large.

Further left unsaid in these critiques is exactly what a more desirable alternative would be. Or at least that alternative is left implicit and vague. For example, the pro-disconnection digital wellness movement, we are told, is in need of enhanced wokeness, to better come to terms with “the political and ideological assumptions” that it takes for granted and the “privileged” values it is attempting to enact in the world. But what does that actually mean? There’s a certain democratic thrust to the criticism, one that I can get behind. People disagree about what is “the good life” and how to get there, and any democratic society would be supportive of a multitude of them. Yet the criticism that the digital wellness movement centers on one vision of “being human,” one emphasizing mindfulness and a capacity for circumspect individual choosing, rings hollow without the critics themselves showing us what should take its place. Whatever the flaws with digital wellness, it is not as self-stultifying as the defeatist brand of digital hedonism implicitly left in the wake of academic critiques that offer no concrete alternatives. Perhaps it is unfair to expect a full-blown alternative; yet few of these critiques offer even an incremental step in the right direction.

Even worse, this line of criticism can problematize nearly everything, losing its rhetorical power as it is over-applied. Even academia itself is disciplining. STS has its own dominant paradigms, and critique is mobilized in order to mold young scholars into academics who cite the right people, quote the correct theories, and support the preferred values. My success depends on my being at least “docile enough” in conforming myself to the norms of the profession. I also exercise self-discipline in my efforts to be a better spouse and a better parent. I strive to be more intentional when I’m frustrated or angry, because I too often let my emotions shape my interactions with loved ones in ways that do not align with my broader aspirations.
More intentionality in my life has generally been a good thing, so long as my expectations are not so unrealistic as to provoke more anxiety than the benefits are worth. But in a critical mode where self-discipline and intentionality automatically equate to self-subjugation, how exactly are people to exercise agency in improving their own lives? In any case, advocating devices that enable users to exercise greater intentionality over their digital practices is not a bad thing per se. Citizens pursue self-help, meditate, and engage in other individualistic wellness activities because the lives they live are constrained. Their agency is partly circumscribed by their jobs, family responsibilities, and incomes, not to mention the more systemic biases of culture and capitalism. Why is it wrong for groups like Harris’ center to advocate efforts that largely work within those constraints?

Yet even that reading of the digital wellness movement seems uncharitable. Certainly Harris’ analysis lacks the sophistication of a technology scholar’s, but he has made it obvious that he recognizes that dominant business models and asymmetrical relations of power underlie the problem. To reduce his efforts to mere individualistic self-discipline is borderline dishonest, though he no doubt emphasizes the parts of the problem he understands best. Of course it will likely take more radical changes than Harris advocates to realize humane technology, but it is not totally clear whether individualized efforts necessarily detract from people’s ability or willingness to demand more from tech firms and governments (i.e., are they like bottled water and other “inverted quarantines”?). At least that is a claim that should be demonstrated rather than presumed from the outset.

At its worst, critical “problematizing” presents itself as its own kind of view from nowhere. For instance, because the idea of nature has been constructed in various biased ways throughout history, we are supposed to accept the view that all human activities are equally natural. And we are supposed to view that perspective as if it were itself an objective fact rather than yet another politically biased social construction. Various observers mobilize much the same critique about claims regarding the “realness” of digital interactions. Because presenting the category of “real life” as being apart from digital interactions is beset with Foucauldian problematics, we are told that the proper response is to no longer attempt the qualitative distinctions that that category can help people make—whatever its limitations. It is probably no surprise that the same writer wanting to do away with the digital-real distinction is enthusiastic in their belief that the desires and pleasures of smartphones somehow inherently contain the “possibility…of disrupting the status quo.” Such critical takes give the impression that all technology scholarship can offer is a disempowering form of relativism, one that only thinly veils the author’s underlying political commitments.

The critic’s partisanship is also frequently snuck in through the backdoor by couching criticism in an abstract commitment to social justice. The fact that the digital wellness movement is dominated by tech bros and other affluent whites is taken to imply that it must be harmful to everyone else—a claim made by alluding to some unspecified amalgamation of oppressed persons (women, people of color, or non-cis citizens) who are insufficiently represented.
It is assumed but not really demonstrated that people within the latter demographics would be unreceptive to or even damaged by Harris’ approach. But given the lack of actual concrete harms laid out in these critiques, it is not clear whether the critics are actually advocating for those groups or whether the social-theoretical existence of harms to them is just a convenient trope to make a mainly academic argument seem as if it actually mattered.

People’s prospects for living well in the digital age would be improved if technology scholars more often eschewed the deconstructive critique from nowhere. I think they should act instead as “thoughtful partisans.” By that I mean that they would acknowledge that their work is guided by a specific set of interests and values, ones that work to the benefit of particular groups. It is not an impartial application of social theory to suggest that “realness” and “naturalness” are empty categories that should be dispensed with. And a more open and honest admission of partisanship would at least force writers to be upfront with readers regarding what the benefits of dispensing with those categories would actually be and who exactly would enjoy them—besides digital enthusiasts and ecomodernists.

If academics were expected to use their analysis to the clear benefit of nameable and actually existing groups of citizens, scholars might do fewer trite Foucauldian analyses and more often take up the far more difficult task of concretely outlining how a more desirable world might be possible. “The life of the critic is easy,” notes Anton Ego in the Pixar film Ratatouille. Actually having skin in the game and putting oneself and one’s proposals out in the world where they can be scrutinized is far more challenging. Academics should be pushed to clearly articulate exactly how the novel concepts, arguments, observations, and claims they spend so much time developing actually benefit human beings who don’t have access to Elsevier or who don't receive seasonal catalogs from Oxford University Press. Without their doing so, I cannot imagine academia having much of a role in helping ordinary people live better in the digital age.

In my last post, I considered some of the consequences of instantly available and seemingly endless quantities of Internet-driven novelty for the good life, particularly in the areas of story and joke telling as well as how we converse and think about our lives. This week, I want to focus more on the challenges to willpower exacerbated by Internet devices. In particular, I am concerned with how today’s generation of parents, facing their own particular limitations of will, may be encouraging their children to have a relationship with screens that might be best described as fetishistic. My interest is not merely with the consequences for learning, although psychological research does connect media multitasking with certain cognitive and memory deficits. Rather, I am worried about the ways in which some technologies too readily seduce their users into distracted and fragmented ways of living rather than enhancing their capacity to pursue the good life.
A recent piece in Slate overviews much of the recent research concerning the negative educational consequences of media multitasking. Unsurprisingly, students who allow their focus to be interrupted by a text or some other digital task, whether in lecture or while studying, perform significantly worse. The article, more importantly, notes the special challenge that digital devices pose to self-discipline, suggesting that such devices are the contemporary equivalent to the “marshmallow test.”

The Stanford marshmallow experiment was a series of longitudinal studies that found children's capacity to delay gratification to be correlated with their later educational successes and body-mass index, among other factors. In these experiments, children were rated according to their ability to forgo eating a marshmallow, pretzel, or cookie sitting in front of them in order to obtain two later on. Follow-up studies have shown that this capacity for self-discipline is likely as much environmental as innate; children in “unreliable environments,” where experimenters would make unrelated promises and then break them, exhibited a far lower ability to wait before succumbing to temptation.

The reader may reasonably wonder at this point: what do experiments tempting children with marshmallows have to do with iPhones? The psychologist Roy Baumeister argues that the capacity to exert willpower behaves like a limited resource, generally declining after repeated challenges. Recognizing this aspect of human self-discipline makes the specific challenge of device-driven novelty clearer. Today, more and more time and effort must be expended in exerting self-control over various digital temptations, more quickly depleting the average person's reserves of willpower. Of course, there are innumerable non-digital temptations and distractions that people are faced with every day, but they are of a decidedly different character. I can just as easily shirk by reading a newspaper. At some point, however, I run out of articles. The particular allure of a blinking email notice or instant message that always seems to demand one’s immediate attention cannot be discounted either.

Although it is not yet clear what the broader effects of pervasive digital challenges to willpower and self-discipline will be, other emerging practices will likely only exacerbate the consequences. The portability of contemporary digital devices, for instance, has enabled the move from “TV as babysitter” to media tablet as pacifier. A significant portion of surveyed parents admit to using a smartphone or iPad in order to distract their children at dinners and during car rides. Parents, of course, should not bear all of the blame for doing so; they face their own limits to willpower due to their often hectic and stressful working lives. Nevertheless, this practice is worrisome not only because it fails to teach children ways of occupying themselves that do not involve staring into a screen but also because the device is being used foremost as a potentially pathological means of pacification. I have observed a number of parents stuffing a smartphone in their child’s face to prevent or stop a tantrum. While doing so is usually effective, I worry about the longer-term consequences. Using a media device as the sole curative to their children’s emotional distress and anxiety threatens to create a potentially fetishistic relationship between the child and the technology.
That is, the tablet or smartphone becomes like a security blanket – an object that allays anxiety; it is a security blanket, however, that the child does not have to give up as he or she gets older.

This sort of fetishism has already become fodder for cultural commentary. In the television show “The Office,” the temporary worker Ryan generally serves as a caricature of the millennial generation. In one episode, he leaves his co-workers in the lurch during a trivia contest after being told he cannot both have his phone and participate. Forced to decide between helping his colleagues win the contest and being able to touch his phone, Ryan chooses the latter. This is, of course, a fictional example but, I think, not too unrealistic a depiction of the likely emotional response. I suspect many of the college students I teach would feel a similar sort of distress if (forcibly) separated from their phones. This sort of affect-rich, borderline fetishistic connection with a device can only make more difficult the attempt to live in any way other than by the device’s own logic or script. How easily can users resist the distractions emerging from a technological device that comes to double as their equivalent to a child’s security blanket?

Yet many of my colleagues would view my concerns about people’s capacities for self-discipline with suspicion. For those who have read (perhaps too much) Michel Foucault, notions of self-discipline tend to be understood as a means for the state or some other powerful entity to turn humans into docile subjects. In seminar discussions, places like gyms are often viewed as sites of self-repression first and promoters of physical well-being second. There is, to be fair, a bit of truth to this. Much of the design of early compulsory schooling, for instance, was aimed at producing diligent office and factory workers who followed the rules, were able to sit still for hours and could tolerate both rigid hierarchies and ungodly amounts of tedium. Yet, just because the instilling of self-discipline can be convenient for those who desire a pacified populace does not mean it is everywhere and always problematic. The ability to work for longer than five minutes without getting distracted is a useful quality for activists and the self-employed to have as well; self-discipline is not always self-stultifying. Indeed, it may be the skill needed most if one is to resist the pull of contemporary forms of control, such as advertising.

That last point is one of the critical oversights of many post-modern theorists. So concerned are they about forms of policing and discipline imposed by the state that they overlook how, as Zygmunt Bauman has also pointed out, humans are increasingly integrated into today’s social order through seduction rather than discipline, advertising rather than indoctrination. Fears about the potential for a 1984 can blind one to the realities of an emerging Brave New World. Being pacified by the equivalent of soma and feelies is, in my mind, no less oppressive than living under the auspices of Big Brother and the thought police. Viewed in light of this argument, the desire to “disconnect” can be seen not as the result of an irrational fear of the digital but as a recognition of the particular seductive challenges that the digital poses for human decision making.
Too often, scholars and laypersons alike tend to view technological civilization through the lens of “technological liberalism,” conceptualizing technologies as simply tools that enhance and extend the individual person’s ability to choose their own version of the good life. Insofar as a class of technologies increasingly enables users to give in to their most base and unreflective proclivities – such as enabling endless distraction into a largely unimportant sea of videos, memes and trivia – they seem to enhance neither a substantive form of choice nor the good life.

Evgeny Morozov’s disclosure that he physically locks up his wi-fi card in order to better concentrate on his work spurred an interesting comment-section exchange between him and Nicholas Carr. At the heart of their disagreement is a dispute concerning the malleability of technologies, how this plasticity ought to be recognized and dealt with in intelligent discourse about their effects, and how the various social problems enabled, afforded or worsened by contemporary technologies could be mitigated. Neither mentions, however, the good life.

Carr, though not ignorant of the contingency/plasticity of technology, tends to underplay malleability by defining a technology quite broadly and focusing mainly on its effects on his life and the lives of others. That is, he can talk about “the Net” doing X, such as contributing to increasingly shallow thinking and reading, because he is assuming and analyzing the Internet as it is presently constituted. Doing this heavy-handedly, of course, opens him up to charges of essentialism: assuming a technology has certain inherent and immutable characteristics. Morozov criticizes him accordingly: “Carr…refuses to abandon the notion of ‘the Net,’ with its predetermined goals and inherent features; instead of exploring the interplay between design, political economy, and information science…”

Morozov’s critique reflects the theoretical outlook of a great deal of STS research, particularly the approaches of “social construction of technology” and “actor-network theory.” These scholars hope to avoid the pitfalls of technological determinism – the belief that technology drives history or develops according to its own, and not human, logic – by focusing on the social, economic and political interests and forces that shape the trajectory of a technological development, as well as the interpretive flexibility of those technologies for different populations. A constructivist scholar would argue that the Internet could have been quite different than it is today and would emphasize the diversity of ways in which it is currently used.
Yet I often feel that people like Morozov go too far and overstate the case for the flexibility of the web. While the Internet could be different and likely will be in several years, in the short term its structure and dynamics are fairly fixed. Technologies have a certain momentum to them. This means that most of my friends will continue to “connect” through Facebook whether I like the website or not. Neither is it very likely that an Internet device that aids rather than hinders my deep reading practices will emerge any time soon. Taking this level of obduracy or fixedness into account in one’s analysis is neither essentialism nor determinism, although it can come close.

All this talk of technology and malleability is important because a scholar’s view of the matter tends to color his or her capacity to imagine or pursue possible reforms to mitigate many of the undesirable consequences of contemporary technologies. Determinists or quasi-determinists can succumb to a kind of fatalism, whether it be Heidegger’s lament that “only a god can save us” or Kevin Kelly’s almost religious faith in the idea that technology somehow “wants” to offer human beings more and more choice and thereby make them happy. There is an equal level of risk, however, in overemphasizing flexibility by taking a quasi-instrumentalist viewpoint. One might fall prey to technological “solutionism,” the excessive faith in the potential of technological innovation to fix social problems – including those caused by prior ill-conceived technological fixes. Many today, for instance, look to social networking technologies to ameliorate the relational fragmentation enabled by previous generations of network technologies: the highway system, suburban sprawl and the telephone. A similar risk is the over-estimation of the capacity of individuals to appropriate, hack or otherwise work around obdurate technological systems. Sure, working-class Hispanics frequently turn old automobiles into “Low Riders” and French computer nerds hacked the Minitel system into an electronic singles’ bar, but it would be imprudent to generalize from these cases. Actively opposing the materialized intentions of designers requires expertise and resources that many users of any particular technology do not have. Too seldom do those who view technologies as highly malleable ask, “Who is actually empowered in the necessary ways to be able to appropriate this technology?” Generally, the average citizen is not.

The difficulty of mitigating fairly obdurate features of Internet technologies is apparent in the incident that I mentioned at the beginning of this post: Morozov regularly locks up his Internet cable and wi-fi card in a timed safe. He even goes so far as to include the screwdrivers that he might otherwise use to thwart the timer and access the Internet prematurely. Unsurprisingly, Carr took a lot of satisfaction in this admission. It would appear that some of the characteristics of the Internet, for Morozov, remain quite inflexible to his wishes, since he requires a fairly involved system and coterie of other technologies in order to allay his own in-the-moment decision-making failures in using it. Of course, Morozov is not what Nathan Jurgenson insultingly and dismissively calls a “refusenik,” someone refusing to utilize the Internet based on ostensibly problematic assumptions about addiction, or certain ascetic and aesthetic attachments.
However, the degree to which he must delegate to additional technologies in order to cope with and mitigate the alluring pull of endless Internet-enabled novelty on his life is telling. Morozov, in fact, copes with the shaping power of Internet technologies on his moral choices much as the philosopher of technology Peter-Paul Verbeek would recommend. Rather than attempting to completely eliminate an onerous technology from his life, Morozov has developed a tactic that helps him guide his relationship with that technology, and its effects on his practices, in a more desirable direction. He strives to maximize the “goods” and minimize the “bads.” Because it otherwise persuades or seduces him into distraction, feeding his addiction to novelty, Morozov locks up his wi-fi card so he can better pursue the good life.

Yet these kinds of tactics seem somewhat unsatisfying to me. It is depressing that so much individual effort must be expended in order to mitigate the undesirable behaviors too easily afforded or encouraged by many contemporary technologies. Evan Selinger, for instance, has noted how the dominance of electronically mediated communication increasingly leads to a mindset in which everyday pleasantries, niceties and salutations come to be viewed as annoyingly inconvenient. Such a view, of course, fails to recognize the social value of those seemingly inefficient and superfluous “thank yous” and “warmest regards.” Regardless, Selinger is forced to do a great deal more parental labor to disabuse his daughter of such a view once her new iPod affords an alluring and more personally “efficient” alternative to hand-writing her thank-you notes. Raising non-narcissistic children is hard enough without Apple products tipping the scale in the other direction.

Internet technologies, of course, could be different and less encouraging of such sociopathological approaches to etiquette or other forms of self-centered behavior, but they are unlikely to be so in the short term. Therefore, cultivating opposing behaviors or practicing some level of avoidance is not the response of a naïve and fearful Luddite or “refusenik” but of someone mindful of the kind of life they want to live (or want their children to live) who is pursuing what is often the only feasible option available. Those pursuing such reactive tactics, of course, may lack a refined philosophical understanding of why they do what they do, but their worries should not be dismissed as naïve or illogically fearful simply because they struggle to articulate a sophisticated reasoning.

Too little attention and too few resources are focused on ways to mitigate declines in civility or the other technological consequences that ordinary citizens worry about and that the works of Carr and Sherry Turkle so cogently expose. Too often, the focus is on never-ending theoretical debates about how to “properly” talk about technology or on forever describing all the relevant discursive spaces. More systematically studying the possibilities for reform seems more fruitful than accusations that so-and-so is a “digital dualist,” a charge that I think has more to do with the accused viewing networked technologies unfavorably than with their work actually being dualistic. Theoretical distinctions, of course, are important.
Yet at some point neither scholarship nor the public benefits from the linguistic fisticuffs; it is clearly more a matter of egos and the battle over who gets to draw the relevant semantic frontier, outside of which any argument or observation can be considered insufficiently “nuanced” to be worthy of serious attention. Regardless, barring the broader embrace of systems of technology assessment and other substantive means of formally or informally regulating technologies, some concerned citizens respond to the tendency of many contemporary technologies to fragment their lives or distract them from the things they value by refusing to upgrade their phones or by unplugging their TVs. Only the truly exceptional, of course, lock them in safes. Yet the avoidance of technologies that encourage unhealthy or undesirable behaviors is not the sign of some cognitive failing; for many people, it beats acquiescence, and technological civilization currently provides little support for doing anything in between.

I have been following the “digital dualism” debate of the last few years, which has mostly emerged from Cyborgology blog critiques of writers like Nicholas Carr and Sherry Turkle, who worry about the effects of digital technologies on human thinking and social interaction. The charge of digital dualism is relatively straightforward. Critics of digital technologies, and those concerned about their effects on everyday life, are accused of setting up a false division between the virtual and the real as distinct worlds or realities; they are charged with assuming that the digital is, in some sense, less real or authentic. Anti-digital dualists, drawing upon the work of Donna Haraway and others, contend that it is more sensible to think of the digital and non-digital as composing one completely real augmented/cyborg reality; the digital and non-digital are equally real and not easily separated.

Not only do I find this charge against Carr’s and Turkle’s work unfounded, but I also think that the intention of the digital dualism pejorative has more to do with differing moral imaginaries than with differing comprehension of the ontological effects of digital communication technologies. Moreover, I think people on both sides could benefit from considering Neil Postman's view of technological change.
I find the digital dualism debate deeply troubling, but not because I am a closeted digital dualist. Studying for a PhD in science and technology studies, I am well acquainted with the techniques used to take down dualisms, whether they be online/offline, religious/secular or natural/artificial. The approach generally takes the form of placing intense focus on the fuzzy frontier between categories, highlighting how the drawing of the boundary is socially and historically contingent, and unmasking its arbitrariness. That is, the dividing line between the two sides of a dualism is always already being negotiated. Bonus points are given to those who manage to unearth some unseemly genealogy that connects the dualism with sexism, racism, or another unsavory “–ism.” A short, albeit simple, example of this approach with respect to the natural/artificial dualism can be found here; this author goes so far as to claim that global climate control devices are as natural as “tribal” living.

What do culture warriors stand to gain by taking down a pesky dualism? Both the writer of the natural/artificial dualism post and the Cyborgology critics direct most of their efforts towards taking down those who seek more “natural” arrangements or desire more room in technological civilization for the ability to “disconnect.” On some level, eliminating the dualism from the conversation gives rhetorical power to those who do not find ideas like global climate control devices, or devoting considerable amounts of their waking hours to interfacing with screens, worrisome. If the alternatives are equally natural and real, those who desire bigger and more invasive human interventions in climatic and other earth systems or who dream up increasingly digitally augmented futures gain the argumentative higher ground. The onus then falls on critics to mobilize some other criteria that cannot be so easily deconstructed. At its worst, the taking down of dualisms lends itself to equally fallacious continuity arguments, where problematic aspects of the present can be justified or claimed to be (mostly) innocuous because they bear a family resemblance to instances of the past that, in contemporary eyes, no longer seem to have been all that harmful.

To staunch advocates for their elimination, dualisms are, at best, rooted in nostalgia and, at worst, an unjust exercise of power. Yet I worry that their concerns lead them to throw the baby out with the bath water. Yes, it is true that human categories are somewhat arbitrary and often unfair, but that does not mean they are completely unreliable fictions. True, they are leaky buckets used to imperfectly catch and organize aspects of perceived reality, but they are not always and completely independent of that reality. I view them as similar to the old quote about advertising: half (or some other percentage) of our categorizations reflect reality; the trouble is knowing which half.

Yet, while strict dualisms are very obviously problematic and over-idealizing, holism can be equally misguided and inaccurate. Refusing to make any distinctions at all is simply the pursuit of ignorance. As is clear from later clarifications and Carr’s rebuttal, strict digital dualism and strict holism are straw man positions. Still, the argument persists even when there is seemingly less and less to argue about. Critics like Carr and more techno-optimistic Cyborgology theorists seem equally interested in the dynamic interplay of offline and online spaces and technologies.
As Carr points out, if online and offline were completely separate worlds there would be nothing for people like him and Turkle to write about. Can we drop this already? Could both sides agree that all human practices and activities lie on some spectrum between face-to-face, embodied interaction and relatively isolated, anonymous text chat, and quit going in circles with pointless labeling?

I can’t prove it, but I feel the ostensible disagreement rests on differing moral valences. Those who more optimistically view the promise in an increasingly augmented future feel threatened by those more concerned with the undesirability of some of the unintended side effects. Regardless, it is obvious that my interactions with my wife are phenomenologically different when I have my arms around her than when I send her a text message. Both are real in some sense, but I know which interaction I and most people I know would prefer. While I often enjoy Facebook and writing emails, at some critical point, the more the context of my life leads me to converse mainly through mediated channels rather than face-to-face, the less happy and more lonely I become. Yet it is equally clear that the effect of digital communication technologies on my life is somewhat inescapable; I cannot avoid everyone who uses them and every instance where they are employed, and neither can I stop the effects such technologies have on systems and networks more distant from me that nonetheless impinge on my daily life.

In truth, I think Neil Postman’s perspective is the most apt, though some readers may find this claim initially perplexing. Wasn’t Postman, famous for his critical portrayal of television’s effect on public discourse as “amusing ourselves to death,” a digital dualist bar none and a technological determinist at that (hint: I’m not convinced he was either)? I have a soft spot for Postman; reading his books on weekends in my small house on the plains of Montana motivated me to want to study technology. As such, I tend to read him sympathetically. In spite of the fact that he pays too little attention to the “interpretive flexibility” of technologies and how they are socially constructed, his conceptualization of the effects of technologies, once they are constructed, is insightful. On page 18 of Technopoly he asserts: “Technological change is neither additive nor subtractive. It is ecological.”

Critics of digital technologies, at least the ones worth listening to, do not argue that such technologies have reduced the ability to think or made us lonelier in any simple, linear, or zero-sum way. Instead, they recognize that their introduction has altered the ecology of thinking or socializing. I do not interpret Carr as arguing that his brain has an online mode and an offline mode per se. Rather, as his intellectual practices have come to be primarily mediated by his computer and the Internet, he feels it affecting his thinking in all situations. The previous ecological stasis, which he found comfortable and desirable, has been shifted and perhaps even destabilized. In the same way, an interaction between a grizzly bear and me is substantively different depending on whether it occurs in a Montana forest or in a zoo. Natural/artificial may ultimately fail to accurately capture the distinction, but it is undeniable that these ecologies differ significantly in character and in how exactly they were shaped by human hands.
Those who value less mediated interactions with animals and attempt to minimize the effects of human action on their ecologies are not inevitably being dualists; they may simply value a different balance of their technological ecology because of the activities and practices (the good lives) that such a balance affords or discourages. Of course, one can contend that Carr is making too big a deal of the shift or that the effects of increased screen mediation on thinking are worth bearing because of all the other benefits such mediation might bring. However, that is moving toward a moral argument rather than an ontological one; the confusion of one for the other is what I think really lies at the heart of the digital dualism debate. The real question is: How much should a particular set of technologies be permitted to shape the characteristic ecologies of daily living? That I may disagree with Cyborgologists on the answer to this question does not mean I fail to appropriately grasp that technologies are malleable and socially constructed or that I am committing the sin of digital dualism. Rather, it simply means that I do not happen to share their vision of the good life.