The Technophobe’s Dilemma: Nicholas Carr’s ‘The Glass Cage’

Nicholas Carr is well known for his critiques of emerging technologies, particularly his argument that “Google is making us stupid.” In his new book, “The Glass Cage: Automation and Us” (W. W. Norton & Company, 2014), he works in the same vein, taking on automation, or “the use of computers and software to do things we used to do ourselves.” Unfortunately, as with his argument against Google, Carr works too hard here to demonize automation technologies, stretching his examples and failing to work through the logic of his arguments. The end result is disappointing: Carr’s genuine insights into the effective uses of automation are lost amid overblown warnings about its dangers.

Carr begins the book on a chilling note, describing an FAA memo that warned pilots and airlines that the increasing automation of commercial jetliners was heightening the risk of accidents. As pilots relied more and more on automatic systems, the memo noted, their flying skills were eroding. Consequently, when autopilot systems failed and human pilots were called on to take over, they were more likely to make errors that led to crashes. In Chapter 4, Carr describes in detail two such crashes, noting how reliance on autopilot systems had lulled pilots into complacency and reduced their opportunities to hone their skills, a mix that ultimately led to fatal errors. In his preface, Carr uses airline automation to drive home the theme of the book, ominously stating that automation “has deeper, hidden effects,” and “not all of them are beneficial.”

Any airline accident is tragic, and its cause should be a matter of concern. But as attention-grabbing as these accounts may be, Carr’s use of autopilot systems to illustrate the dangers of automation is deeply confusing, if not outright misleading. Despite his attempts to paint the automation of commercial air travel as a dangerous trend making flying less safe, Carr eventually acknowledges that the opposite is true. The FAA memo refers to a report on manual flying errors in the first decade of the 2000s, a period that turns out to have been the safest in the history of U.S. aviation. Quoting Carr:

Air travel’s lethal days are, mercifully, behind us. Flying is safe now, and pretty much everyone involved in the aviation business believes that advances in automation are one of the reasons why. Together with improvements in aircraft design, airline safety routines, crew training, and air traffic control, the mechanization and computerization of flight have contributed to the sharp and steady decline in accidents and deaths over the decades. In the United States and other Western countries, fatal airliner crashes have become exceedingly rare. Of the more than seven billion people who boarded U.S. commercial flights in the ten years from 2002 through 2011, only 153 ended up dying in a wreck, a rate of two deaths for every million passengers. In the ten years from 1962 through 1971, by contrast, 1.3 billion people took flights, and 1,696 of them died, for a rate of 133 deaths per million. But this sunny story carries a dark footnote. The overall decline in the number of plane crashes masks the recent arrival of “a spectacularly new type of accident,” says Raja Parasuraman, a psychology professor at George Mason University and one of the world’s leading authorities on automation.

Carr gets the numbers wrong by a couple of orders of magnitude — the rate is 2 deaths for every 100 million passengers — and his immediate emphasis on the “dark footnote” of this “spectacularly new type of accident” undercuts the rational response to those statistics: automation makes flying safer, not more dangerous. After this quick dismissal, the safety record of autopilot systems is barely acknowledged again. When invoking the dangers of aircraft automation, Carr routinely ignores this tenfold decrease in flying deaths, paying far more attention to the loss of romantic notions of flying, in which pilots who rely on automatic systems are said to feel a loss of “social status” and “sense of self.” In an earlier chapter on self-driving automobiles, Carr makes much the same point about the automatic transmission, noting that his first automatic made him “feel a little less like a driver and a little more like a passenger” and ultimately led to his “resent[ing]” the vehicle.

When facts aren’t actively undercutting Carr’s argument, his examples themselves point to another problem with his approach. As he notes in his preface, we should be wary of automation because it changes “what we do and who we are.” This is almost certainly true, but it is difficult to understand the importance of such changes in the context of cars and airplanes, devices so new to human history. Automobiles date to the late 19th century; the Wright brothers’ first flight was in 1903. How central can such young technologies be to “who we are”?

Further, if we accept Carr’s claims about changes to these technologies, must we not also admit that these two inventions profoundly altered what we do and who we are as a species from their first appearance? If so, why should Carr’s attachment to the manual transmission or the self-worth of pilots — again, a profession that is barely a century old — matter, since neither change is nearly as consequential as the introduction of the underlying technologies themselves? Despite Carr’s protestations to the contrary, these incremental changes seem particularly unimportant to an understanding of “who we are” in light of the changes brought about by the initial technology.

Carr attempts to sidestep these difficulties by focusing narrowly on computers and software, rather than treating all tool use as automation, as automation experts do. As one reads the book, for example, the question of why he does not consider earlier, pre-electronic planes “automation” presents itself. Carr claims it is because modern planes, with their electronic controls and automatic pilots, sever the link between the pilot’s actions and the embodied act of flying. Where pilots once could feel the pressure on the flaps through controls linked to them by hydraulics or cables, electronically controlled flight has replaced that tactile feedback with the trappings of video games: joysticks and monitors.

Carr falls prey to what we might call the technophobe’s dilemma: his critique of digital automation requires him sometimes to deny that older technologies, like pre-electronic aircraft, are automation technologies at all, and at other times to romanticize those technologies over their modern counterparts. The result feels less like a rational critique of digital automation and more like a screed against change, the fundamental mode of technophobic argument.

Indeed, Carr frames many of his arguments against automation as a defense of who we are against change: changes in our sense of self, in our mental capacities. But he doesn’t dwell on the fact that who we are, presumably the generation raised in the ’60s and ’70s, is quite different from who we were. The automation advances of previous generations altered Carr and his contemporaries; where their parents and grandparents may have looked back nostalgically to the unique and complicated embodiments of horse or train travel, Carr romanticizes the manual transmission. Rather than grappling with the tangled history of automation, with the untold number of technologies that shape our current existence, he simply singles out computers and software for critique, and he doesn’t seem to recognize that his critiques apply equally to other technologies. Rather than unraveling the knot of automation, he simply cuts it.

This is particularly evident when the book turns to solutions. In Chapter 6, Carr writes of the loss of Inuit wayfinding, the skill of navigating by environmental cues like snowdrift patterns and animal behavior, and his final chapter includes a lengthy, elegiac analysis of the connection to the land forged when one harvests crops with a scythe. But he doesn’t suggest that we should all return to these practices. Rather, his solution is for computer software to be designed around the attentional needs of human operators. He points, for example, to “adaptive automation,” in which the automated system monitors both its task and its human operator, or to simply programming automated systems to turn control over to their operators at random intervals, thereby encouraging operators to maintain an appropriate level of alertness.
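The random-handback idea is simple enough to sketch in a few lines of code. The following is a toy illustration only, under assumptions of my own (the class name, the handback probability, and the one-cycle manual interlude are all invented here, not drawn from Carr’s book or the research he cites):

```python
import random

class AdaptiveAutopilot:
    """Toy sketch of random handback: an automated controller that
    occasionally returns control to its human operator so that the
    operator stays alert and practiced. All names and parameters
    here are illustrative assumptions, not a real avionics design."""

    def __init__(self, handback_probability=0.1, seed=None):
        self.handback_probability = handback_probability
        self.rng = random.Random(seed)
        self.mode = "automatic"

    def step(self):
        # On each control cycle, randomly decide whether to hand
        # control back to the human for one cycle of manual operation.
        if self.mode == "automatic" and self.rng.random() < self.handback_probability:
            self.mode = "manual"
        elif self.mode == "manual":
            # After the manual interlude, resume automatic control.
            self.mode = "automatic"
        return self.mode

# Simulate 20 control cycles with a fixed seed for reproducibility.
autopilot = AdaptiveAutopilot(handback_probability=0.2, seed=42)
modes = [autopilot.step() for _ in range(20)]
print(modes.count("manual"), "manual interludes in 20 cycles")
```

The operator cannot predict which cycles will be manual, which is the point: unpredictability, rather than constant vigilance, is what keeps attention engaged.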

Such systems involve ever more complex software to handle their intelligent automatic processes, enabling them to do so in ways that work better with human capacities. This is the message of the FAA memo with which Carr opened the book: pilots should not abandon automation but should spend more time flying manually, staying alert and keeping their skills sharp for when they are needed.

Although “The Glass Cage” can be at times a compelling read, it ultimately fails to persuade. Carr is certainly right that automation has changed and is changing who we are, but these changes do not seem nearly as dire as he would have us believe. In the end, automation is simply a scapegoat for Carr, just as Google was in his previous work: an idea to which those nervous about how our society is changing can affix their fears. But even in his critique, Carr cannot imagine a world without it.

Banner image credit: jbgeronimi