Evgeny Morozov’s disclosure that he physically locks up his wi-fi card in order to better concentrate on his work spurred an interesting comment-section exchange between him and Nicholas Carr. At the heart of their disagreement is a dispute concerning the malleability of technologies: how this plasticity ought to be recognized and dealt with in intelligent discourse about their effects, and how the various social problems enabled, afforded or worsened by contemporary technologies could be mitigated. Neither mentions, however, the good life.
Carr, though not ignorant of the contingency and plasticity of technology, tends to underplay malleability by defining technologies quite broadly and focusing mainly on their effects on his life and the lives of others. That is, he can talk about “the Net” doing X, such as contributing to increasingly shallow thinking and reading, because he is assuming and analyzing the Internet as it is presently constituted. Doing this heavy-handedly, of course, opens him up to charges of essentialism: assuming a technology has certain inherent and immutable characteristics.
Morozov criticizes him accordingly:
“Carr…refuses to abandon the notion of “the Net,” with its predetermined goals and inherent features; instead of exploring the interplay between design, political economy, and information science…”
Morozov’s critique reflects the theoretical outlook of a great deal of STS research, particularly the approaches of “social construction of technology” and “actor-network theory.” These scholars hope to avoid the pitfalls of technological determinism – the belief that technology drives history or develops according to its own, and not human, logic – by focusing on the social, economic and political interests and forces that shape the trajectory of a technological development as well as the interpretive flexibility of those technologies to different populations. A constructivist scholar would argue that the Internet could have been quite different than it is today and would emphasize the diversity of ways in which it is currently used.
Yet I often feel that people like Morozov go too far and overstate the case for the flexibility of the web. While the Internet could be different and likely will be in several years, in the short term its structure and dynamics are fairly fixed. Technologies have a certain momentum to them. This means that most of my friends will continue to “connect” through Facebook whether I like the website or not. Neither is it very likely that an Internet device that aids rather than hinders my deep reading practices will emerge any time soon. Taking this level of obduracy or fixedness into account in one’s analysis is neither essentialism nor determinism, although it can come close.
All this talk of technology and malleability is important because a scholar’s view of the matter tends to color his or her capacity to imagine or pursue possible reforms to mitigate many of the undesirable consequences of contemporary technologies. Determinists or quasi-determinists can succumb to a kind of fatalism, whether it be in Heidegger’s lament that “only a god can save us” or Kevin Kelly’s almost religious faith in the idea that technology somehow “wants” to offer human beings more and more choice and thereby make them happy.
There is an equal risk, however, in overemphasizing flexibility and taking a quasi-instrumentalist viewpoint. One might fall prey to technological “solutionism,” the excessive faith in the potential of technological innovation to fix social problems – including those caused by prior ill-conceived technological fixes. Many today, for instance, look to social networking technologies to ameliorate the relational fragmentation enabled by previous generations of network technologies: the highway system, suburban sprawl and the telephone.
A similar risk is the over-estimation of the capacity of individuals to appropriate, hack or otherwise work around obdurate technological systems. Sure, working class Hispanics frequently turn old automobiles into “Low Riders” and French computer nerds hacked the Minitel system into an electronic singles’ bar, but it would be imprudent to generalize from these cases. Actively opposing the materialized intentions of designers requires expertise and resources that many users of any particular technology do not have. Too seldom do those who view technologies as highly malleable ask, “Who is actually empowered in the necessary ways to be able to appropriate this technology?” Generally, the average citizen is not.
The difficulty of mitigating fairly obdurate features of Internet technologies is apparent in the incident that I mentioned at the beginning of this post: Morozov regularly locks up his Internet cable and wi-fi card in a timed safe. He even goes so far as to include the screwdrivers that he might use to thwart the timer and access the Internet prematurely. Unsurprisingly, Carr took a lot of satisfaction in this admission. It would appear that some of the characteristics of the Internet, for Morozov, remain quite inflexible to his wishes, since he requires a fairly involved system and coterie of other technologies in order to allay his own in-the-moment decision-making failures in using it. Of course, Morozov is not what Nathan Jurgenson insultingly and dismissively calls a “refusenik,” someone refusing to utilize the Internet based on ostensibly problematic assumptions about addiction, or on certain ascetic and aesthetic attachments. However, the degree to which he must delegate to additional technologies in order to cope with and mitigate the alluring pull of endless Internet-enabled novelty on his life is telling.
Morozov, in fact, copes with the shaping power of Internet technologies on his moral choices as philosopher of technology Peter-Paul Verbeek would recommend. Rather than attempting to completely eliminate an onerous technology from his life, Morozov has developed a tactic that helps him guide his relationship with that technology and its effects on his practices in a more desirable direction. He strives to maximize the “goods” and minimize the “bads.” Because it otherwise persuades or seduces him into distraction, feeding his addiction to novelty, Morozov locks up his wi-fi card so he can better pursue the good life.
Yet these kinds of tactics seem somewhat unsatisfying to me. It is depressing that so much individual effort must be expended in order to mitigate the undesirable behaviors too easily afforded or encouraged by many contemporary technologies. Evan Selinger, for instance, has noted how the dominance of electronically mediated communication increasingly leads to a mindset in which everyday pleasantries, niceties and salutations come to be viewed as annoyingly inconvenient. Such a view, of course, fails to recognize the social value of those seemingly inefficient and superfluous “thank yous” and “warmest regards.” Regardless, Selinger is forced to do a great deal more parental labor to disabuse his daughter of such a view once her new iPod affords an alluring and more personally “efficient” alternative to hand-writing her thank-you notes. Raising non-narcissistic children is hard enough without Apple products tipping the scale in the other direction.
Internet technologies, of course, could be different and less encouraging of such sociopathological approaches to etiquette or other forms of self-centered behavior, but they are unlikely to be so in the short term. Therefore, cultivating opposing behaviors or practicing some level of avoidance are not the responses of a naïve and fearful Luddite or “refusenik” but of someone mindful of the kind of life they want to live (or want their children to live) who is pursuing what is often the only feasible option available. Those pursuing such reactive tactics, of course, may lack a refined philosophical understanding of why they do what they do, but their worries should not be dismissed as naïve or illogically fearful simply because they struggle to articulate a sophisticated reasoning.
Too little attention and too few resources are focused on ways to mitigate the declines in civility and other technological consequences that ordinary citizens worry about and that the works of Carr and Sherry Turkle so cogently expose. Too often, the focus is on never-ending theoretical debates about how to “properly” talk about technology or on forever describing all the relevant discursive spaces. More systematically studying the possibilities for reform seems more fruitful than accusations that so-and-so is a “digital dualist,” a charge that I think has more to do with the accused viewing networked technologies unfavorably than with their work actually being dualistic. Theoretical distinctions, of course, are important. Yet, at some point neither scholarship nor the public benefits from the linguistic fisticuffs; it is clearly more a matter of egos and the battle over who gets to draw the relevant semantic frontier, outside of which any argument or observation can be considered insufficiently “nuanced” to be worthy of serious attention.
Regardless, barring the broader embrace of systems of technology assessment and other substantive means of formally or informally regulating technologies, some concerned citizens respond to the tendency of many contemporary technologies to fragment their lives or distract them from the things they value by refusing to upgrade their phones or unplugging their TVs. Only the truly exceptional, of course, lock them in safes. Yet the avoidance of technologies that encourage unhealthy or undesirable behaviors is not the sign of some cognitive failing; for many people, it beats acquiescence, and technological civilization currently provides little support for doing anything in between.
The current job market is tough, especially for recent college graduates with limited experience. The unemployment and underemployment rate for twenty-somethings is around 53%. Many who are employed work jobs for which they are overqualified. My wife, for example, has a master’s degree in biotechnology and several years’ experience but works part-time for fifteen dollars an hour with no benefits as a laboratory technician. It increasingly appears that most job growth is occurring in low-skilled and low-paying positions. Some rise in the demand for highly skilled technical positions, however, suggests not a rising tide but a job market becoming more and more polarized in terms of both skill level and wages.
Yet what is most disturbing about recent job market trends is not the continuation of wage/skill polarization but the dramatic increase in highly skilled knowledge workers, what Robert Reich called “symbolic analysts,” driven to work for nothing or next to nothing. Some have referred to this phenomenon as the “post-income” or “post-employment” economy. While internships and other low-paying positions traditionally amounted to a form of apprenticeship and eventually led to a stable position, the economics of the recession has spurred their development into a more permanent form of employment. Yet it would be inaccurate to blame this simply on the weak job market. These areas of the economy are winner-take-all markets, and the recession has likely just exacerbated their effects.
Economists Robert Frank and Philip Cook described the functioning and explained the rise of winner-take-all markets in many areas of economic life in their 1996 book The Winner-Take-All Society. Such markets exist whenever the institutional and technological circumstances are such that an economic good is enjoyed on a large scale and/or fewer producers are required to produce it. As a result of the exponentially increased payoffs accruing to those who succeed in becoming one of the few producers at the top, ever more contestants enter the game and invest increasing amounts of resources in their attempts to win the competition.
A classic example of a winner-take-all market is popular music since the advent of recording technologies. The ease of distributing music content and the low cost of duplicating performances, coupled with the small number of artists that any particular consumer can remember or remain interested in, both limits the number of pop stars at any particular moment and greatly amplifies the benefits enjoyed by those successful at becoming one. At the same time, the difference in talent between a successful pop star and someone who almost became one is close to negligible. It is this latter aspect that is critical to understanding the winner-take-all phenomenon: ever slighter differences in ability account for ever larger pay differentials. This, along with the increase in competitors and the intensity of the competition, accounts for the main negative consequences of such markets: social waste, inefficiency and augmented levels of inequality. Not only does the contemporary winner-take-all market in music waste the efforts of would-be pop musicians who overestimate their chances and never make it, but it also results in a job market consisting of a few super-rich and many making nothing at all, rather than a larger number of musicians of more moderate means.
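The core dynamic here – negligible ability gaps producing enormous pay gaps – can be illustrated with a small, purely hypothetical simulation. All of the numbers below (the number of contestants, the size of the prize pool, the winner’s 90% share) are assumptions made for illustration, not figures from Frank and Cook:

```python
import random

random.seed(0)

# Hypothetical tournament: 1,000 contestants whose abilities differ by at
# most a few percent, competing in a market where the top-ranked performer
# captures almost all of the rewards.
contestants = [random.uniform(0.97, 1.00) for _ in range(1000)]
ranked = sorted(contestants, reverse=True)

prize_pool = 10_000_000  # assumed total rewards in the market
# Winner takes 90%; everyone else splits the remaining 10% evenly.
payouts = [prize_pool * 0.9] + [prize_pool * 0.1 / 999] * 999

winner, runner_up = ranked[0], ranked[1]
ability_gap = (winner - runner_up) / runner_up  # typically a tiny fraction of 1%
pay_ratio = payouts[0] / payouts[1]             # yet pay differs by thousands of times

print(f"ability gap: {ability_gap:.4%}, pay ratio: {pay_ratio:,.0f}x")
```

However the abilities are drawn, the pay ratio is fixed by the reward structure, not by talent: the near-identical top two contestants end up separated by a payout gap several orders of magnitude larger than their ability gap, which is the essence of the winner-take-all claim.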
While music, art and sports have clearly been winner-take-all markets for at least a generation, their emergence in other fields is new. Increasingly, highly skilled knowledge workers work long hours for poverty wages or even no pay at all, often for years, in the hopes of winning the contest to become career Washington insiders, college professors, magazine/blog writers or attorneys. The second novelty of this growth in winner-take-all markets is that, in contrast to would-be pop singers whose talents go unheard, the fruits of these wannabe career knowledge workers’ labors do get consumed. It just so happens that much of the revenue generated from them never ends up in their paychecks. Seventy percent of college instructional faculty members, for instance, are not tenure-track but adjuncts, some making as little as two thousand dollars to teach a course that twenty or more students each paid several hundred to several thousand dollars to take. Contestants in these markets are not competing for the right to produce goods but for the chance to earn a good wage, status and job security.
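The adjunct example can be made concrete with back-of-the-envelope arithmetic. The figures below come from the ranges given above, with the tuition figure assumed at a round one thousand dollars per student for illustration:

```python
# Illustrative arithmetic for a single adjunct-taught course.
# Assumed figures: 20 students, $1,000 tuition each, $2,000 adjunct pay.
students = 20
tuition_per_student = 1_000
adjunct_pay = 2_000

course_revenue = students * tuition_per_student       # $20,000 in tuition
markup = course_revenue / adjunct_pay                 # revenue is 10x the labor cost
share_to_instructor = adjunct_pay / course_revenue    # instructor keeps 10% of revenue

print(markup, share_to_instructor)  # 10.0 0.1
```

Even at this deliberately conservative tuition figure, the institution collects ten times what it pays the person doing the teaching; with larger enrollments or higher tuition the multiple climbs well past twenty.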
There is a name for workplaces within which human beings toil for long hours creating products that are sold by their managers for ten to twenty times the cost of making them: sweatshops. Unsurprisingly, arguments for maintaining sweatshops for American college graduates often vary little from those mobilized in defense of sweatshops in countries like Haiti. Some decry the idea of raising sweatshop wages or banning unpaid internships since doing so would entail fewer positions in the short run, seemingly implying that social utility is better maximized by large numbers of people scraping by than by some smaller number of people earning a decent paycheck. Such arguments, for either kind of sweatshop, ignore both the multiplying economic benefits of living wages and the larger problems cascading throughout economies as a result of sweatshop practices, including lower incomes for the average worker in the global/national job market and increased inequality. There are, of course, major differences between sweatshops in places like Haiti and the contemporary sweatshop for symbolic analysts. Workers in the former are simply seeking to feed and house themselves; workers in the latter are vying against each other for a shot at a high-status, salaried job.
By many measures the growth of the symbolic analyst sweatshop market is clearly undesirable; something ought to be done about it. Free marketers, of course, will deny any problem, likely claiming that it is just the analyst’s free choice to work for very little; this position obviously ignores how the employers involved are exploiting both the human tendency to overestimate one’s abilities and chances of success and the psychologically seductive power of a potentially high-payoff gamble. It is one thing when the loss from a bet is the ten dollars for a lotto ticket but quite another when someone’s livelihood and several years of their life are at stake.
Regardless, Frank and Cook list several possibilities for reform. They suggest that winner-take-all markets can be mitigated through mechanisms that lower the rewards accruing to winners and through reformulated incentive schemes, such as student loan and aid policies, that prevent too many people from entering the market. True enough, there are likely far too many people being encouraged to pursue careers as lawyers, college professors and Washington policy analysts. Yet the problem may be bigger than that and not solved by simply redirecting students into STEM fields. It might be that there are too many college grads seeking too few highly skilled positions. The continuing polarization of the job market and the decline of moderately skilled but otherwise good jobs due to automation and other technological changes have likely only amplified the winner-take-all competition. It is one thing to give up one’s dream of becoming a college professor to be an engineer or accountant instead; graduates faced with the prospect of stocking grocery store shelves or sweeping floors for minimum wage are understandably desperate.
A broader view of the development of the symbolic analyst sweatshop would take account of the whole range of policies, cultural ideas and sociotechnical systems that facilitate the current ways of doing employment. Rather than aiming to make the nation as compatible as possible with the winner-take-all market of international free trade and as appealing as possible to global financial capital, why not use the standard of the “good job” to guide employment policy? Such a standard would take as given the desirability of a broader distribution of jobs that are mentally stimulating, connect workers to each other and their communities, and pay a living wage. CEOs could be awarded bonuses according to the number of people making a decent wage at their firms, counteracting the tendency to slash positions to appease stockholders. Policies encouraging workplace democracy or cooperative arrangements could avoid the necessity for legislators to actually design the exact conditions of the “good job,” encouraging workers to do it for themselves. Some constellation of such changes would likely create new problems of its own but certainly could not be any worse than the status quo.
I have been following the “digital dualism” debate of the last few years, which has mostly emerged from Cyborgology blog critiques of writers like Nicholas Carr and Sherry Turkle, who worry about the effects of digital technologies on human thinking and social interaction. The charge of digital dualism is relatively straightforward. Critics of digital technologies, and those who are concerned about their effects on everyday life, are accused of setting up a false division between the virtual and the real as distinct worlds or realities; they are charged with assuming that the digital is, in some sense, less real or authentic. Anti-digital dualists, drawing upon the work of Donna Haraway and others, contend that it is more sensible to think of the digital and non-digital as composing one completely real augmented/cyborg reality; the digital and non-digital are equally real and not easily separated. I not only find this charge against Carr’s and Turkle’s work unfounded, but I also think that the intention of the digital dualism pejorative has more to do with differing moral imaginaries than with differing comprehensions of the ontological effects of digital communication technologies. Not only that, I think people on both sides could benefit from considering Neil Postman's view of technological change.
I find the digital dualism debate deeply troubling, but not because I am a closeted digital dualist. Studying for a PhD in science and technology studies, I am well acquainted with the techniques used to take down dualisms, whether they be online/offline, religious/secular or natural/artificial. The approach generally takes the form of placing intense focus on the fuzzy frontier between categories, highlighting how the drawing of the boundary is socially and historically contingent and unmasking its arbitrariness. That is, the dividing line between the two sides of a dualism is always already being negotiated. Bonus points are given to those who manage to unearth some unseemly genealogy that connects the dualism with sexism, racism, or another unsavory “–ism.” A short, albeit simple, example of this approach with respect to the natural/artificial dualism can be found here; that author goes so far as to claim that global climate control devices are as natural as “tribal” living.
What do culture warriors stand to gain by taking down a pesky dualism? Both the writer of the natural/artificial dualism post and the Cyborgology critics direct most of their efforts towards taking down those who seek more “natural” arrangements or desire more room in technological civilization for the ability to “disconnect.” On some level, eliminating the dualism from the conversation gives rhetorical power to those who do not find ideas like global climate control devices, or devoting considerable amounts of one’s waking hours to interfacing with screens, worrisome. If the alternatives are equally natural and real, those who desire bigger and more invasive human interventions in climatic and other earth systems, or who dream up increasingly digitally augmented futures, gain the argumentative higher ground. The onus then falls on critics to mobilize some other criteria that cannot be so easily deconstructed. At its worst, the taking down of dualisms lends itself to equally fallacious continuity arguments, in which problematic aspects of the present are justified, or claimed to be (mostly) innocuous, because they bear a family resemblance to practices of the past that, to contemporary eyes, no longer seem to have been all that harmful.
To staunch advocates of their elimination, dualisms are, at best, rooted in nostalgia and, at worst, an unjust exercise of power. Yet I worry that their concerns lead them to throw the baby out with the bathwater. Yes, it is true that human categories are somewhat arbitrary and often unfair, but that does not mean they are completely unreliable fictions. True, they are leaky buckets used to imperfectly catch and organize aspects of perceived reality, but they are not always and completely independent of that reality. I view them as similar to the old quip about advertising: half (or some other percentage) of our categorizations reflect reality; the trouble is knowing which half. Yet, while strict dualisms are very obviously problematic and over-idealizing, holism can be equally misguided and inaccurate. Refusing to make any distinctions at all is simply the pursuit of ignorance.
As is clear from later clarifications and Carr’s rebuttal, strict digital dualism and strict holism are straw-man positions. Still, the argument persists even as there is seemingly less and less to argue about. Critics like Carr and the more techno-optimistic Cyborgology theorists seem equally interested in the dynamic interplay of offline and online spaces and technologies. As Carr points out, if online and offline were completely separate worlds there would be nothing for people like him and Turkle to write about. Can we drop this already? Could both sides agree that all human practices and activities lie on some spectrum between face-to-face, embodied interaction and relatively isolated, anonymous text chat, and quit going in circles with pointless labeling? I can’t prove it, but I suspect the ostensible disagreement rests on differing moral valences. Those who more optimistically view the promise of an increasingly augmented future feel threatened by those more concerned with the undesirability of some of its unintended side effects.
Regardless, it is obvious that my interactions with my wife are phenomenologically different when I have my arms around her than when I send her a text message. Both are real in some sense, but I know which interaction I and most people I know would prefer. While I often enjoy Facebook and writing emails, at some critical point, the more the context of my life leads me to converse mainly through mediated channels rather than face-to-face, the less happy and more lonely I become. Yet, it is equally clear that the effect of digital communication technologies on my life is somewhat inescapable; I cannot avoid everyone who uses them and all instances where it is employed, and neither can I stop the effects such technologies have on systems and networks more distant from me that nonetheless impinge on my daily life.
In truth, I think Neil Postman’s perspective is the most apt, though some readers may find this claim initially perplexing. Wasn’t Postman, famous for his critical portrayal of television’s effect on public discourse as “amusing ourselves to death,” a digital dualist bar none and a technological determinist at that (hint: I’m not convinced he was either)? I have a soft spot for Postman; reading his books on weekends in my small house on the plains of Montana motivated me to want to study technology. As such, I tend to read him sympathetically. Despite the fact that he pays too little attention to the “interpretive flexibility” of technologies and how they are socially constructed, his conceptualization of the effects of technologies, once they are constructed, is insightful. On page 18 of Technopoly he asserts: “Technological change is neither additive nor subtractive. It is ecological.”
Critics of digital technologies, at least the ones worth listening to, do not argue that they have reduced the ability to think or made us lonelier in any simple, linear, or zero-sum way. Instead, they recognize that their introduction has altered the ecology of thinking or socializing. I do not interpret Carr as arguing that his brain has an online mode and an offline mode per se. Rather, as his intellectual practices have come to be primarily mediated by his computer and the Internet, he feels it affecting his thinking in all situations. The previous ecological stasis, which he found comfortable and desirable, has been shifted and perhaps even destabilized.
In the same way, an interaction between a grizzly bear and myself is substantively different depending on whether it occurs in a Montana forest or in a zoo. Natural/artificial may ultimately fail to accurately capture the distinction, but the fact that the character of these ecologies differs significantly, and that they are distinct in regard to how exactly they were shaped by human hands, is undeniable. Those who value less mediated interactions with animals, and who attempt to minimize the effects of human action on animals’ ecologies, are not inevitably being dualists; they may simply value a different balance in their technological ecology because of the activities and practices (the good lives) that such a balance affords or discourages.
Of course, one can contend that Carr is making too big a deal of the shift, or that the effects of increased screen mediation on thinking are worth bearing because of all the other benefits it might bring. However, that is a moral argument rather than an ontological one; the confusion of one for the other is what I think really lies at the heart of the digital dualism debate. The real question is: How much should a particular set of technologies be permitted to shape the characteristic ecologies of daily living? That I may disagree with Cyborgologists on the answer to this question does not mean I fail to appropriately grasp that technologies are malleable and socially constructed or that I am committing the sin of digital dualism. Rather, it simply means that I do not happen to share their vision of the good life.
It is too often assumed that modern technologies are inherently liberating. Are they not simply tools with which individuals can pursue their own happiness? Allenby and Sarewitz certainly appear to make this assumption, in The Techno-Human Condition, by referring to technologies as “volition enhancers.” There is certainly a bit of truth to the assumption. My cell phone enables me to receive and send voice calls and text messages whenever and wherever I want. If I could muster up the dough to pay for a data plan, I could have the informational wealth of the Internet at my fingertips. Do not all these new capabilities simply improve my ability to choose and to act?
It is true that my cell phone affords me new capabilities and new freedoms, yet those affordances very easily become burdens. By making others more available to me, it also makes me more available to others; I find myself answering my phone in annoyance more often than not. Many decry feeling tethered to their devices, finding that new chains have been wrought as soon as the old ones have been broken. As well, I see myself as more easily distracted and more often attempting to multitask in the belief that it will give me more time, a pursuit suggested to be futile (and maybe even cognitively damaging) by Clifford Nass and Nicholas Carr. I am struck by how, when feeling lonely, I am more likely to text a quick message to my fiancée than to start up a conversation with the person sitting next to me. Mobile communication technologies enable a virtual privatization of public spaces; think about the usual scene in a Starbucks. At the same time that these technologies have enabled users to multiply their social ties, people have increasingly used them to turn away from the public and in on themselves and their own private networks. Why venture an unsatisfying or risky conversation with a stranger when a loved one is always and instantly available?
Imagine the day you bought your first cell phone. What if the salesperson had informed you that eventually you would be constantly on call and working more than ever, that loved ones would be irritated or worried if you did not answer immediately, that you would find yourself texting at times when you should know better, and that you would become a virtual recluse out in public? Would you still have bought it? You may be throwing up your hands at this point, claiming that this is not technology’s doing but a simple lack of human discipline.
Yet, social psychological research increasingly supports the view that the human will is much weaker and less rational than most people wish or think it to be. People generally choose to do what seems immediately easier in the local context, not through rationally self-interested and reflective deliberation. Jonathan Haidt, a moral psychologist, describes the human will through the metaphor of a rider on an elephant. The rational part of the self is the rider, who can only sometimes manage to steer the irrational and emotional elephant. For example, governments can easily quadruple organ donation rates by forcing people to make a check mark to opt out rather than to opt in. A popular computer program promises users the chance to reassert their mastery over their computer and conquer distraction by blocking WiFi access until the next reboot, a program ironically but aptly named “Freedom.”
Philosopher of technology Langdon Winner has cogently argued that technologies have politics. He cites the low-hanging overpasses on Long Island’s parkways as an example, contending that Robert Moses designed them to be low enough to prevent buses, and therefore minorities, from having access to “his” beaches. I would go farther and argue that technologies are also built for particular notions of a good life. Rather than being mere neutral tools, their designs encourage certain ways of living over others. Appropriating a technology for a different kind of life than the one it was built for requires enough extra discipline and effort that many, if not most, people do not bother trying. Again, the elephant leads.
If technologies often nudge people into acting in ways that they, upon reflection, would otherwise find undesirable, then it is logical to conclude that technologies could be better designed to help people live less distracted and more engaged lives. The contemporary culture of innovation, however, inhibits this development. Emphasis is placed continually on more and more functionality and ostensible choice, and new “problems” are manufactured in order to justify the increase. Having to wait until arriving at home or work to check one’s email, or being unable to take a picture of anything and everything, did not seem to be a problem until doing so became part of the functionality of cell phones. Now, to some, it seems an unreasonable inconvenience to do without. The idea that progress means ever-increasing complexity and functionality has become so ingrained that it is now much more difficult to buy a “simple” phone without a touch screen, keyboard, camera or innumerable other gadgets. For my last purchase, I had to settle for a brick phone with a slide-out keyboard, which I subsequently taped shut once I found that the relatively more cumbersome character of traditional T9 texting encouraged me to call more and text less. Henry Ford said of the Model T, “Any customer can have a car painted any color that he wants so long as it is black.” Today, customers can have a gadget with any amount of functionality so long as it has more options and features than yesterday’s.
How can technologists better serve people who want less rather than more from their technologies? Currently, there are few incentives to promote the making of simpler technologies and even fewer to encourage their purchase. Increasing functionality increases profits for mobile providers because it permits the selling of extra services to the consumer. That is why they generally offer cutting-edge models for free, or cheaper than simpler models, with a contract. Part of the problem is that service providers and manufacturers are too intertwined. Rather than allowing a contract to be deceptively bundled with a phone, regulation should separate the two purchases. The bundling of phones with service providers prevents a fair and competitive phone market. Imagine if you had to buy your computer from Microsoft in order to use Windows. Going even further, the technologies should be made open enough that small manufacturers could get in on the game, or perhaps even open-source cell phones could become a viable option. With the demise of the network of pay phones that once dotted public spaces, affordable and simple access to mobile phone networks has become more and more a requirement for modern living and thus a matter of the public good; it should be treated as such.
Furthermore, phones and places should be designed to encourage people to use their phones differently or not at all. Why not require a “Do Not Disturb” setting on phones under which a phone does not ring unless the caller specifies, via a menu system, that the call is urgently important? Why not enforce cell-phone-free zones where the signal is jammed, so long as a wired phone is available nearby? If unnecessarily complex and distracting technologies already shape one’s life and behavior, are these recommendations any more intrusive? Without more intelligent, less somnambulistic technology policy, many people will continue to find themselves taking less time to stop and smell the roses; they will be far too busy buying bouquets with their smartphones.
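To make the “Do Not Disturb” proposal concrete, here is a minimal sketch of the decision logic such a setting might implement. This is purely illustrative: the names (`IncomingCall`, `should_ring`, `marked_urgent`) are my own invention, not part of any real phone API, and a real system would of course involve carrier menus and abuse safeguards.

```python
# Hypothetical sketch of an urgency-gated "Do Not Disturb" setting.
# The caller, via a menu prompt, can flag a call as urgent; only then
# does the phone ring while Do Not Disturb is active.

from dataclasses import dataclass


@dataclass
class IncomingCall:
    caller: str
    marked_urgent: bool  # set by the caller through the menu system


def should_ring(call: IncomingCall, do_not_disturb: bool) -> bool:
    """Ring normally when DND is off; otherwise ring only for urgent calls."""
    if not do_not_disturb:
        return True
    return call.marked_urgent


# With DND on, a routine call is silenced but an urgent one gets through.
routine = IncomingCall(caller="555-0100", marked_urgent=False)
urgent = IncomingCall(caller="555-0101", marked_urgent=True)
print(should_ring(routine, do_not_disturb=True))  # False
print(should_ring(urgent, do_not_disturb=True))   # True
```

The design point is simply that the default is silence and the burden of interruption shifts onto the caller, rather than the recipient having to exercise willpower call by call.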
Taylor C. Dotson is an associate professor at New Mexico Tech, a Science and Technology Studies scholar, and a research consultant with WHOA. He is the author of The Divide: How Fanatical Certitude is Destroying Democracy and Technically Together: Reconstructing Community in a Networked World. Here he posts his thoughts on issues mostly tangential to his current research.