The current job market is tough, especially for recent college graduates with limited experience. The combined unemployment and underemployment rate for twenty-somethings is around 53%, and many who are employed work jobs for which they are overqualified. My wife, for example, has a master’s degree in biotechnology and several years of experience but works part-time as a laboratory technician for fifteen dollars an hour with no benefits. It increasingly appears that most job growth is occurring in low-skilled, low-paying positions. The modest rise in demand for highly skilled technical positions suggests not a rising tide but a job market becoming ever more polarized in terms of both skill level and wages.
Yet what is most disturbing about recent job market trends is not the continuation of wage and skill polarization but the dramatic increase in highly skilled knowledge workers, what Robert Reich called “symbolic analysts,” driven to work for nothing or next to nothing. Some have referred to this phenomenon as the “post-income” or “post-employment” economy. While internships and other low-paying positions traditionally amounted to a form of apprenticeship and eventually led to a stable position, the economics of the recession have spurred their development into a more permanent form of employment. Yet it would be inaccurate to blame this simply on the weak job market. These areas of the economy are winner-take-all markets, and the recession has likely just exacerbated their effects.
Economists Robert Frank and Philip Cook described the functioning and explained the rise of winner-take-all markets in many areas of economic life in their 1996 book The Winner-Take-All Society. Such markets exist whenever institutional and technological circumstances allow an economic good to be enjoyed on a large scale while requiring ever fewer producers to supply it. Because exponentially larger payoffs accrue to those who succeed in becoming one of the few producers at the top, ever more contestants enter the game and invest increasing amounts of resources in their attempts to win the competition.
A classic example of a winner-take-all market is popular music since the advent of recording technologies. The ease of distributing music and the low cost of duplicating performances, coupled with the small number of artists that any particular consumer can remember or remain interested in, both limit the number of pop stars at any particular moment and greatly amplify the benefits enjoyed by those who become one. At the same time, the difference in talent between a successful pop star and someone who almost became one is close to negligible. This latter aspect is critical to understanding the winner-take-all phenomenon: ever slighter differences in ability account for ever larger pay differentials. This, along with the increase in competitors and the intensity of the competition, accounts for the main negative consequences of such markets: social waste, inefficiency, and heightened inequality. The contemporary winner-take-all market in music not only wastes the efforts of would-be pop musicians who overestimate their chances and never make it but also produces a job market consisting of a few super-rich and many making nothing at all, rather than a larger number of musicians of more moderate means.
While music, art, and sports have clearly been winner-take-all markets for at least a generation, their emergence in other fields is new. Increasingly, highly skilled knowledge workers toil long hours for poverty wages or even no pay at all, often for years, in the hopes of winning the contest to become career Washington insiders, college professors, magazine and blog writers, or attorneys. The second novelty of this growth in winner-take-all markets is that, in contrast to would-be pop singers whose talents go unheard, the fruits of these would-be career knowledge workers’ labor do get consumed. It just so happens that much of the revenue generated from them never ends up in their paychecks.
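To make the shape of this logic concrete, here is a minimal back-of-the-envelope sketch in Python. The figures are invented purely for illustration (they are not Frank and Cook’s numbers); the point is only how a near-negligible difference in talent becomes an enormous difference in pay once consumers concentrate their attention on the top few performers.

```python
# Illustrative only: invented numbers showing the winner-take-all payoff shape.

talent = {"star": 1.00, "runner_up": 0.98}  # roughly a 2% difference in ability

# In an ordinary labor market, pay tracks talent more or less proportionally.
ordinary_pay = {name: 50_000 * t for name, t in talent.items()}

# In a winner-take-all market, consumers' limited attention concentrates
# nearly the whole audience (and its revenue) on the top performer.
market_revenue = 10_000_000
audience_share = {"star": 0.95, "runner_up": 0.05}
tournament_pay = {name: market_revenue * share for name, share in audience_share.items()}

print(ordinary_pay)    # {'star': 50000.0, 'runner_up': 49000.0}
print(tournament_pay)  # {'star': 9500000.0, 'runner_up': 500000.0}
```

A two percent gap in ability yields a two percent gap in pay in the first case and a nineteen-fold gap in the second; that basic pattern is what the rest of this argument relies on.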
Seventy percent of college instructional faculty members, for instance, are not on the tenure track but are adjuncts making as little as two thousand dollars to teach a course that twenty or more students each paid several hundred to several thousand dollars to take. Contestants in these markets are not competing merely for the right to produce such goods but for the chance to earn a good wage, status, and job security. There is a name for workplaces in which human beings toil for long hours creating products that are sold by their managers for ten to twenty times more than they cost to make: sweatshops.
Unsurprisingly, arguments for maintaining sweatshops for American college graduates often vary little from those mobilized in defense of sweatshops in countries like Haiti. Some decry the idea of raising sweatshop wages or banning unpaid internships because doing so would mean fewer positions in the short run, seemingly implying that social utility is better maximized by large numbers of people scraping to get by than by a smaller number of people earning a decent paycheck. Such arguments, for either kind of sweatshop, ignore both the multiplying economic benefits of living wages and the larger problems that cascade through economies as a result of sweatshop practices, including lower incomes for the average worker in the global or national job market and increased inequality. There are, of course, major differences between sweatshops in places like Haiti and the contemporary sweatshop for symbolic analysts. Workers in the former are simply seeking to feed and house themselves; those in the latter are vying against each other for a shot at a high-status, salaried job.
By many measures the growth of the symbolic analyst sweatshop is clearly undesirable; something ought to be done about it. Free marketers, of course, will deny any problem, likely claiming that it is simply the analyst’s free choice to work for very little. This position ignores how the employers involved exploit both the human tendency to overestimate one’s abilities and chances of success and the psychologically seductive power of a potentially high-payoff gamble. It is one thing when the loss from a bet is the ten dollars spent on a lotto ticket but quite another when someone’s livelihood and several years of their life are at stake.
Regardless, Frank and Cook list several possibilities for reform. They suggest that winner-take-all markets can be mitigated through mechanisms that lower the rewards accruing to winners and through incentive schemes, such as student loan and aid policies, that keep too many people from entering the market. True enough, there are likely far too many people being encouraged to pursue careers as lawyers, college professors, and Washington policy analysts. Yet the problem may be bigger than that and not solved by simply redirecting students into STEM fields. It may be that there are too many college graduates seeking too few highly skilled positions. The continuing polarization of the job market and the decline of moderately skilled but otherwise good jobs due to automation and other technological changes have likely only amplified the winner-take-all competition. It is one thing to give up one’s dream of becoming a college professor to be an engineer or accountant instead; graduates faced with the prospect of stocking grocery store shelves or sweeping floors for minimum wage are understandably desperate.
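The adjunct example, and the lottery-like gamble it implies, can be put in similarly rough numbers. This is only a hedged sketch: the per-course wage and tuition figures come from earlier in this essay, while the probabilities and the size of the “prize” are invented solely to illustrate why overestimating one’s odds makes entering the contest look rational when, on average, it is not.

```python
# Back-of-the-envelope arithmetic for the symbolic analyst sweatshop.
# Wage and tuition figures are from the essay; everything else is assumed.

students_per_course = 20
tuition_per_course = 1_000   # per student, on the order cited above
adjunct_pay = 2_000          # per course

course_revenue = students_per_course * tuition_per_course
print(course_revenue / adjunct_pay)   # 10.0 -- revenue is ~10x the instructor's pay

# The gamble: years of low-paid work for a shot at a tenure-track career.
years_adjuncting = 5
forgone_income_per_year = 20_000      # vs. an ordinary salaried job (assumed)
cost_of_entry = years_adjuncting * forgone_income_per_year

prize = 1_000_000                     # rough lifetime premium of "winning" (assumed)
true_odds, perceived_odds = 0.05, 0.25  # assumed: contestants overestimate their chances

print(perceived_odds * prize - cost_of_entry)  # 150000.0 -- looks like a good bet
print(true_odds * prize - cost_of_entry)       # -50000.0 -- on average, it is not
```

When many contestants act on the perceived odds rather than the true ones, far more people enter than the market can reward, which is exactly the over-entry and social waste that Frank and Cook describe.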
A broader view of the development of the symbolic analyst sweatshop would take account of the whole range of policies, cultural ideas, and sociotechnical systems that facilitate current ways of doing employment. Rather than aiming to make the nation as compatible as possible with the winner-take-all market of international free trade and as appealing as possible to global financial capital, why not use the standard of the “good job” to guide employment policy? Such a standard would take as given the desirability of a broader distribution of jobs that are mentally stimulating, connect workers to each other and their communities, and pay a living wage. CEOs could be awarded bonuses according to the number of people making a decent wage at their firms, counteracting the tendency to slash positions to appease stockholders. Policies encouraging workplace democracy or cooperative arrangements could relieve legislators of the need to design the exact conditions of the “good job,” letting workers do it for themselves. Some constellation of such changes would likely create new problems of its own but could hardly be worse than the status quo.
I have been following the “digital dualism” debate of the last few years, which has mostly emerged from Cyborgology blog critiques of writers like Nicholas Carr and Sherry Turkle, who worry about the effects of digital technologies on human thinking and social interaction. The charge of digital dualism is relatively straightforward. Critics of digital technologies, and those concerned about their effects on everyday life, are accused of setting up a false division between the virtual and the real as distinct worlds or realities; they are charged with assuming that the digital is, in some sense, less real or authentic. Anti-digital dualists, drawing upon the work of Donna Haraway and others, contend that it is more sensible to think of the digital and the non-digital as composing one completely real augmented/cyborg reality; the digital and the non-digital are equally real and not easily separated. I not only find this charge unfounded with respect to Carr’s and Turkle’s work, but I also think that the intention behind the digital dualism pejorative has more to do with differing moral imaginaries than with differing understandings of the ontological effects of digital communication technologies. Not only that, I think people on both sides could benefit from considering Neil Postman’s view of technological change.
I find the digital dualism debate deeply troubling, but not because I am a closeted digital dualist. Studying for a PhD in science and technology studies, I am well acquainted with the techniques used to take down dualisms, whether online/offline, religious/secular, or natural/artificial. The approach generally takes the form of focusing intently on the fuzzy frontier between categories, highlighting how the drawing of the boundary is socially and historically contingent, and unmasking its arbitrariness. That is, the dividing line between the two sides of a dualism is always already being negotiated. Bonus points are given to those who manage to unearth some unseemly genealogy connecting the dualism to sexism, racism, or another unsavory “–ism.” A short, admittedly simple, example of this approach with respect to the natural/artificial dualism can be found here; its author goes so far as to claim that global climate control devices are as natural as “tribal” living.
What do culture warriors stand to gain by taking down a pesky dualism? Both the writer of the natural/artificial dualism post and the Cyborgology critics direct most of their efforts toward taking down those who seek more “natural” arrangements or desire more room in technological civilization for the ability to “disconnect.” On some level, eliminating the dualism from the conversation gives rhetorical power to those who do not find ideas like global climate control devices, or devoting considerable amounts of one’s waking hours to interfacing with screens, worrisome. If the alternatives are equally natural and real, those who desire bigger and more invasive human interventions in climatic and other earth systems, or who dream up increasingly digitally augmented futures, gain the argumentative high ground. The onus then falls on critics to mobilize some other criterion that cannot be so easily deconstructed. At its worst, the taking down of dualisms lends itself to equally fallacious continuity arguments, in which problematic aspects of the present are justified, or claimed to be mostly innocuous, because they bear a family resemblance to instances from the past that, to contemporary eyes, no longer seem to have been all that harmful.
To staunch advocates of their elimination, dualisms are, at best, rooted in nostalgia and, at worst, an unjust exercise of power. Yet I worry that their concerns lead them to throw the baby out with the bathwater. Yes, human categories are somewhat arbitrary and often unfair, but that does not mean they are completely unreliable fictions. True, they are leaky buckets used to imperfectly catch and organize aspects of perceived reality, but they are not always and completely independent of that reality. I view them as similar to the old quip about advertising: half (or some other percentage) of our categorizations reflect reality; the trouble is knowing which half. Yet while strict dualisms are obviously problematic and over-idealizing, holism can be equally misguided and inaccurate. Refusing to make any distinctions at all is simply the pursuit of ignorance. As should be clear from later clarifications and Carr’s rebuttal, strict digital dualism and strict holism are straw man positions. Still, the argument persists even as there is seemingly less and less to argue about. Critics like Carr and the more techno-optimistic Cyborgology theorists seem equally interested in the dynamic interplay of offline and online spaces and technologies.
As Carr points out, if online and offline were completely separate worlds, there would be nothing for people like him and Turkle to write about. Can we drop this already? Could both sides agree that all human practices and activities lie on a spectrum between face-to-face, embodied interaction and relatively isolated, anonymous text chat, and quit going in circles with pointless labeling? I cannot prove it, but I suspect the ostensible disagreement rests on differing moral valences. Those who more optimistically view the promise of an increasingly augmented future feel threatened by those more concerned with the undesirability of some of its unintended side effects. Regardless, it is obvious that my interactions with my wife are phenomenologically different when I have my arms around her than when I send her a text message. Both are real in some sense, but I know which interaction I, and most people I know, would prefer. While I often enjoy Facebook and writing emails, at some critical point, the more the context of my life leads me to converse mainly through mediated channels rather than face-to-face, the less happy and more lonely I become. Yet it is equally clear that the effect of digital communication technologies on my life is somewhat inescapable; I cannot avoid everyone who uses them and every instance where they are employed, and neither can I stop the effects such technologies have on systems and networks more distant from me that nonetheless impinge on my daily life.
In truth, I think Neil Postman’s perspective is the most apt, though some readers may find this claim initially perplexing. Wasn’t Postman, famous for his critical portrayal of television’s effect on public discourse as “amusing ourselves to death,” a digital dualist bar none, and a technological determinist at that? (Hint: I am not convinced he was either.) I have a soft spot for Postman; reading his books on weekends in my small house on the plains of Montana motivated me to want to study technology. As such, I tend to read him sympathetically. Although he pays too little attention to the “interpretive flexibility” of technologies and how they are socially constructed, his conceptualization of the effects of technologies, once they are constructed, is insightful. On page 18 of Technopoly he asserts: “Technological change is neither additive nor subtractive. It is ecological.”
Critics of digital technologies, at least the ones worth listening to, do not argue that these technologies have reduced the ability to think or made us lonelier in any simple, linear, or zero-sum way. Instead, they recognize that their introduction has altered the ecology of thinking or socializing. I do not interpret Carr as arguing that his brain has an online mode and an offline mode per se. Rather, as his intellectual practices have come to be primarily mediated by his computer and the Internet, he feels the change affecting his thinking in all situations. The previous ecological stasis, which he found comfortable and desirable, has been shifted and perhaps even destabilized. In the same way, an interaction between a grizzly bear and myself is substantively different depending on whether it occurs in a Montana forest or in a zoo. Natural/artificial may ultimately fail to capture the distinction accurately, but it is undeniable that the character of these ecologies differs significantly and that they are distinct with regard to how exactly they were shaped by human hands.
Those who value less mediated interactions with animals and who attempt to minimize the effects of human action on their ecologies are not inevitably being dualists; they may simply value a different balance in their technological ecology because of the activities and practices (the good lives) that such a balance affords or discourages. Of course, one can contend that Carr is making too big a deal of the shift, or that the effects of increased screen mediation on thinking are worth bearing because of all the other benefits such mediation might bring. But that is a moral argument rather than an ontological one, and the confusion of one for the other is what I think really lies at the heart of the digital dualism debate. The real question is: how much should a particular set of technologies be permitted to shape the characteristic ecologies of daily living? That I may disagree with Cyborgologists on the answer to this question does not mean I fail to grasp that technologies are malleable and socially constructed, or that I am committing the sin of digital dualism. It simply means that I do not happen to share their vision of the good life.
Many people in well-off, developed nations are afflicted with an acute myopia when it comes to their understanding of technoscience. Everyone knows, of course, that contemporary technoscientists continually produce discoveries and devices that lessen drudgery, limit suffering, and provide comfort and convenience to human lives. However, there is a pervasive failure to see science and technology not merely as contributing solutions to modern social problems but also as being one of their most significant causes. Sal Restivo[1], channeling C. Wright Mills, uses the metaphor of the science machine. Because most people tend to see only the internal mechanisms of this machine, they remain unaware that the ends to which many contemporary science machines are being directed are anything but objective and value-neutral. Contemporary science too easily contributes to the making of social problems because too many people mistakenly believe it to be autonomous and self-correcting, abdicating their own share of responsibility and allowing others to direct it for them. Most importantly, science machines are too often steered mainly toward developing profitable treatments of symptoms, frequently symptoms brought on in part by contemporary technoscience itself, rather than toward addressing underlying causes.
The world of science is often popularly described as a marketplace for ideas. This economic metaphor conjures an image of science guided and legitimated by some invisible hand of objectivity. As with markets, it is commonly assumed that science as an institution simply aggregates the activities of individual scientists to provide for an objectively “better” world. Unlike markets, however, scientists are assumed to be disinterested, motivated by nothing other than the desire to pursue unadulterated truth. Nonetheless, in the same way that any respectable scientist would aim to falsify an overly optimistic or unrealistic model of physical phenomena, it behooves social scientists to question such a rosy portrayal of scientific practice. Indeed, this has been the focus of the field of science and technology studies for decades.
Like any human institution, science is rife with inequities of power and influence, and there are many socially dependent reasons why some avenues of research flourish while others flounder. For instance, why does nanoscience garner so much research attention but “green” chemistry so little? The answer is likely not that funding providers have been thoroughly and unequivocally convinced by the weight of the available evidence; many of the over-hyped promises of nanoscience are not yet anywhere close to being fulfilled. Edward Woodhouse[2] points to a number of reasons. Pertinent to my argument is his observation of the interdependence, the double binds, among the chemistry discipline, industry, and government. Clearly, there are significant barriers to shifting to a novel paradigm for defining “good” chemistry when the “needs” of the current industry shape the curriculum and the narrowness of the pedagogy inhibits the development of a more innovative chemical industry. All the while, business can shape the government’s opinion of which research will be the most profitable and productive, and the most productive research also generally happens to be whatever has the most government backing. Put simply, the trajectory of scientific research is often not directed by scientific motivations or concerns; rather, it is generally biased toward maintaining the momentum of the status quo and the interests of industry.
The influence of business shapes research paradigms; focus is placed primarily on developments that can be easily marketed to private wants rather than public needs, an observation expanded upon by Woodhouse and Sarewitz[3]. Nanoscientists can promise new drug treatments and individual enhancements that will surely be expensive, although also likely beneficial, for those who can afford them. Yet it seems that many nanomaterials will likely have toxic and/or carcinogenic effects of their own when released into the environment[4]. A world full of more benign, “green” chemicals, on the other hand, would seem to negate much of the need for some of those treatments, though only by threatening the bottom line of a pharmaceutical industry already adapted to the paradigm of symptom treatment. This illustrates the cruel joke too often played by some areas of contemporary science on the public at large. Technoscientists are busy developing privately profitable treatments for public health problems caused in part by the chemicals already developed and deployed by contemporary technoscience. It is a supply that succeeds in creating its own demand, and quite a lucrative process at that. Treating underlying causes rather than symptoms is a public good that often comes at private cost, while the current research support structure too frequently converts public tax dollars into private gain.
It is not only in the competing paradigms of green chemistry and nanochemistry that this issue arises. Biotechnologists are genetically engineering crops to be more pest and disease resistant by tolerating or producing pesticides themselves, solving problems mostly created by the move to industrial monoculture in the first place. Yet research into organic farming methods is poorly funded, and there are concerns that such genetic modifications and pesticide use are leading to a decline in the populations of pollinating insects that are necessary for agriculture[5]. What might be the next step if biotech and agricultural research continues along this dysfunctional trajectory?
Genetically engineering pollinating insects to tolerate pesticides, or engineering plants that need no pollinating insects at all? What unintended ecological consequences might those developments bring? The process seems to lead further and further toward a point at which activities that could be relatively innocuous and straightforward, like maintaining one’s health or growing crops, become increasingly difficult without an ever expanding slew of expensive, invasive, and damaging chemicals and technologies. Goods that were once easily obtainable and cheap, though imperfect, have been transformed into specialized goods available to an ever more select few. However, the breakdown of natural processes into individual components that can each be provided by some new, specialized device or manufactured chemical obviously adds to standard economic measures of growth and progress; more holistic approaches, by comparison, are systematically devalued by such measures. I could go on to note other examples, such as how network technologies and psychiatric medicine are used to cope with the contemporary forms of isolation and alienation brought on by practices of sociality increasingly modeled after communication and transportation networks, but the underlying mechanism is the same.
If modern technoscience were likened to a machine, it would appear to be a treadmill. As Woodhouse notes[6], once technoscientists develop some new capacity, it often becomes collectively unthinkable to forgo it. As a result, the technoscience machine keeps increasing in speed, and members of technological civilization increasingly struggle to keep up. New band-aids and techno-fixes are continually introduced to treat the symptoms caused by previous generations of innovations, band-aids, and techno-fixes. Too little thought, energy, and research funding gets devoted to inquiring into how the dynamics of the science machine could be different: directed toward lessening the likelihood and damage of unintended consequences, removing or replacing irredeemable areas of technoscience, or addressing causes rather than merely treating symptoms.
References
[1] Restivo, S. (1988). Modern science as a social problem. Social Problems, 35(3), 206-225.
[2] Woodhouse, E. (2005). Nanoscience, green chemistry, and the privileged position of science. In S. Frickel & K. Moore (Eds.), The new political sociology of science: Institutions, networks, and power (pp. 148-181). Madison, WI: The University of Wisconsin Press.
[3] Woodhouse, E., & Sarewitz, D. (2007). Science policies for reducing societal inequities. Science and Public Policy, 34(3), 139-150.
[4] Becker, H., Herzberg, F., Schulte, A., & Kolossa-Gehring, M. (2010). The carcinogenic potential of nanomaterials, their release from products and options for regulating them. International Journal of Hygiene and Environmental Health, 214(3), 231-238.
[5] Suryanarayanan, S., & Kleinman, D. L. (2011). Disappearing bees and reluctant regulators. Issues in Science and Technology, 27(4), Summer. Retrieved from http://www.issues.org/27.4/p_suryanarayanan.html
[6] Woodhouse, E. (2005). Nanoscience, green chemistry, and the privileged position of science. In S. Frickel & K. Moore (Eds.), The new political sociology of science: Institutions, networks, and power (pp. 148-181). Madison, WI: The University of Wisconsin Press.
It is too often assumed that modern technologies are inherently liberating. Are they not simply tools with which individuals can pursue their own happiness?
Allenby and Sarewitz certainly appear to make this assumption in The Techno-Human Condition when they refer to technologies as “volition enhancers.” There is certainly a bit of truth to it. My cell phone enables me to receive and send voice calls and text messages whenever and wherever I want. If I could muster up the dough to pay for a data plan, I could have the informational wealth of the Internet at my fingertips. Do not all these new capabilities simply improve my ability to choose and to act?
It is true that my cell phone affords me new capabilities and new freedoms, yet those affordances very easily become burdens. By making others more available to me, it also makes me more available to others; I find myself answering my phone in annoyance more often than not. Many decry feeling tethered to their devices, discovering that new chains have been wrought as soon as the old ones have been broken. I also find myself more easily distracted and more often attempting to multitask in the belief that it will give me more time, a pursuit suggested to be futile (and maybe even cognitively damaging) by Clifford Nass and Nicholas Carr. I am struck by how, when feeling lonely, I am more likely to text a quick message to my fiancée than to start up a conversation with the person sitting next to me. Mobile communication technologies enable a virtual privatization of public spaces; think about the usual scene in a Starbucks. At the same time that these technologies have enabled users to multiply their social ties, people have increasingly used them to turn away from the public and in on themselves and their own private networks. Why venture an unsatisfying or risky conversation with a stranger when a loved one is always and instantly available?
Imagine the day you bought your first cell phone. What if the salesperson had informed you that eventually you would be constantly on call and working more than ever, that loved ones would be irritated or worried if you did not answer immediately, that you would find yourself texting at times when you should know better, and that you would become a virtual recluse out in public? Would you still have bought it?
You may be throwing up your hands at this point, claiming that this is not technology’s doing but a simple lack of human discipline. Yet social psychological research increasingly supports the view that the human will is much weaker and less rational than most people wish or think it to be. People generally choose to do what seems immediately easier in the local context, not what emerges from rationally self-interested and reflective deliberation. Jonathan Haidt, a moral psychologist, describes the human will through the metaphor of a rider on an elephant. The rational part of the self is the rider, who can only sometimes manage to steer the irrational and emotional elephant. For example, governments can easily quadruple organ donation rates by requiring people to make a check mark to opt out rather than to opt in. A popular computer program promises users the chance to reassert their mastery over their computers and conquer distraction by blocking WiFi access until the next reboot, a program ironically but aptly named “Freedom.”
Philosopher of technology Langdon Winner has cogently argued that technologies have politics. He cites the low-hanging overpasses on Long Island’s parkways as an example, contending that they were ostensibly designed by Robert Moses to be low enough to keep public transit, and therefore minorities, from having access to “his” beaches. I would go further and argue that technologies are also built for particular notions of a good life. Rather than being mere neutral tools, their designs encourage certain ways of living over others. Appropriating a technology for a different kind of life than it was built for requires enough extra discipline and effort that many, if not most, people do not bother trying. Again, the elephant leads.
If technologies often nudge people into acting in ways that they, upon reflection, would find undesirable, then it is logical to conclude that technologies could be better designed to help people live less distracted and more engaged lives. However, the contemporary culture of innovation inhibits this development. Emphasis is continually placed on more and more functionality and ostensible choices, and new “problems” are manufactured to justify the increase. Having to wait until arriving home or at work to check one’s email, or being unable to take a picture of anything and everything, did not seem to be a problem until doing so became part of the functionality of cell phones. Now, to some, doing without seems an unreasonable inconvenience. The idea that progress means increasing complexity and functionality has become so ingrained that it is now much more difficult to buy a “simple” phone without a touch screen, keyboard, camera, or innumerable other gadgets. For my last purchase, I had to settle for a brick phone with a slide-out keyboard, which I subsequently taped shut once I found that the relatively more cumbersome character of traditional T9 texting encouraged me to call more and text less. Henry Ford said of the Model T, “Any customer can have a car painted any color that he wants so long as it is black.” Today, customers can have a gadget with any amount of functionality so long as it has more options and features than yesterday’s.
How can technologists better serve people who want less rather than more from their technologies? Currently, there are few incentives to promote the making of simpler technologies and even fewer to encourage their purchase. Increasing functionality increases profits for mobile providers because it permits the selling of extra services to the consumer. That is why they generally offer cutting-edge models for free with a contract, often cheaper than simpler models. Part of the problem is that service providers and manufacturers are too intertwined. Rather than allowing a contract to be deceptively bundled with a phone, the two purchases should be separated by regulation. The bundling of phones with service providers prevents a fair and competitive phone market; imagine if you had to buy your computer from Microsoft in order to use Windows. Going even further, the technologies should be made open enough that small manufacturers could get in on the game, or perhaps even open-source cell phones could become a viable option. With the demise of the network of pay phones that once dotted public spaces, affordable and simple access to mobile phone networks becomes more and more a requirement for modern living and thus a matter of the public good; it should be treated as such.
Furthermore, phones and places should be designed to encourage people to use their phones differently or not at all. Why not require a “Do Not Disturb” setting on phones that keeps them from ringing unless the caller specifies, via a menu system, that the call is urgently important? Why not enforce cell-phone-free zones where the signal is jammed, as long as a wired phone is available nearby? If unnecessarily complex and distracting technologies already shape one’s life and behavior, are these recommendations any more intrusive? Without more intelligent, less somnambulistic technology policy, many people will continue to find themselves taking less time to stop and smell the roses; they will be far too busy buying bouquets with their smartphones.