What We Should Fear

Each December for the past fifteen years, the literary agent John Brockman has pulled out his Rolodex and asked a legion of top scientists and writers to ponder a single question: What scientific concept would improve everybody’s cognitive tool kit? (Or, in another year: What have you changed your mind about?) This year, Brockman’s panelists (myself included) agreed to take on the subject of what we should fear. There’s the fiscal cliff, the continued European economic crisis, the perpetual tensions in the Middle East. But what about the things that may happen in twenty, fifty, or a hundred years? The premise, as the science historian George Dyson put it, is that “people tend to worry too much about things that it doesn’t do any good to worry about, and not to worry enough about things we should be worrying about.” A hundred and fifty contributors wrote essays for the project. The result is a recently published collection, “What *Should* We Be Worried About?,” available without charge at edge.org.

A few of the essays are too glib; it may sound comforting to say that “the only thing we need to worry about is worry itself” (as several contributors suggested), but anybody who has lived through Chernobyl or Fukushima knows otherwise. Surviving disasters requires contingency plans, and so does avoiding them in the first place. But many of the essays are insightful and bring attention to a wide range of challenges for which society is not yet adequately prepared.


One set of essays focusses on disasters that could happen now, or in the not-too-distant future. Consider, for example, our ever-growing dependence on the Internet. As the philosopher Daniel Dennett puts it:

We really don’t have to worry much about an impoverished teenager making a nuclear weapon in his slum; it would cost millions of dollars and be hard to do inconspicuously, given the exotic materials required. But such a teenager with a laptop and an Internet connection can explore the world’s electronic weak spots for hours every day, almost undetectably at almost no cost and very slight risk of being caught and punished.

As most Internet experts realize, the Internet is fairly safe from natural disasters, because of its redundant infrastructure (meaning that there are many pathways by which any given packet of data can reach its destination), but deeply vulnerable to a wide range of deliberate attacks, whether by censoring governments or by rogue hackers. (Writing on the same point, George Dyson makes the excellent suggestion of a kind of emergency backup Internet, “assembled from existing cell phones and laptop computers,” which would allow the transmission of text messages in the event that the Internet itself were brought down.)
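To make the contrast concrete, here is a toy sketch in Python. The network, the router names, and the two failure scenarios are all invented for illustration, not a model of the real Internet; the point is only that redundant links shrug off a random outage, while an attacker who knows the topology can sever every link to a single router and cut it off entirely:

```python
from collections import deque

def reachable(graph, src, dst):
    """Breadth-first search: is there any surviving path from src to dst?"""
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

def cut(graph, a, b):
    """Sever the link between routers a and b in both directions."""
    graph[a].discard(b)
    graph[b].discard(a)

# Six routers in a ring, each also linked to a central hub, so every
# packet has several independent routes to its destination.
ring = ["a", "b", "c", "d", "e", "f"]
graph = {n: set() for n in ring + ["hub"]}
for i, n in enumerate(ring):
    nxt = ring[(i + 1) % len(ring)]
    graph[n].add(nxt)
    graph[nxt].add(n)
    graph[n].add("hub")
    graph["hub"].add(n)

# A random failure (a storm takes out one cable): traffic simply reroutes.
cut(graph, "a", "b")
print(reachable(graph, "a", "d"))   # True: the ring and the hub remain

# A deliberate attack that targets every one of a single router's links:
for neighbor in list(graph["a"]):
    cut(graph, "a", neighbor)
print(reachable(graph, "a", "d"))   # False: router "a" is now cut off
```

The same asymmetry holds at scale: scattered accidents remove links more or less at random, which redundancy is built to absorb, while a determined adversary can concentrate damage on exactly the links that matter.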

We might also worry about demographic shifts. Some are manifest, like the graying of the population (mentioned in Rodney Brooks’s essay) and the decline in the global birth rate (highlighted by Matt Ridley, Laurence Smith, and Kevin Kelly). Others are less obvious. The evolutionary psychologist Robert Kurzban, for example, argues that the growing gender imbalance in China (the combined result of early prenatal sex determination, abortion, the one-child policy, and a preference for boys) is a problem that we should all be concerned about. As Kurzban puts it, by some estimates, by 2020 “there will be 30 million more men than women on the mating market in China, leaving perhaps up to 15% of young men without mates.” He also notes that “cross-national research shows a consistent relationship between imbalanced sex ratios and rates of violent crime. The higher the fraction of unmarried men in a population, the greater the frequency of theft, fraud, rape, and murder.” That, in turn, tends to lead to a lower G.D.P. and, potentially, to considerable social unrest that could ripple around the world. (The same, of course, could happen in any country in which prospective parents systematically act on a preference for boys.)


Another theme that runs throughout the collection is what the Stanford psychologist Brian Knutson calls “metaworry”: the question of whether we are psychologically and politically constituted to worry about the things we most need to worry about.

In my own essay, I suggested that there is good reason to think that we are not so constituted, both because of an inherent cognitive bias that makes us focus on immediate concerns (like getting our dishwasher fixed) at the expense of long-term ones (like getting enough exercise to maintain our cardiovascular fitness) and because of a chronic bias toward optimism known as the “just-world fallacy” (the comforting but unrealistic idea that moral actions will invariably lead to just rewards). In a similar vein, the anthropologist Mary Catherine Bateson notes that “knowledgeable people expected an eventual collapse of the Shah’s regime in Iran, but did nothing because there was no pending date. In contrast, many prepared for Y2K because the time frame was so specific.” Furthermore, as the historian of ideas Noga Arikha puts it, “our world is geared at keeping up with a furiously paced present with no time for the complex past,” leading to a cognitive bias that she calls “presentism.”

As a result, we often move toward the future with our eyes too tightly focussed on the immediate to care much about what might happen in the coming century or two—despite potentially huge consequences for our descendants. As Knutson says, his metaworry

is that actual threats [to our species] are changing much more rapidly than they have in the ancestral past. Humans have created much of this environment with our mechanisms, computers, and algorithms that induce rapid, “disruptive,” and even global change. Both financial and environmental examples easily spring to mind.… Our worry engines [may] not retune their direction to focus on these rapidly changing threats fast enough to take preventative action.

The cosmologist Max Tegmark wonders what will happen “if computers eventually beat us at all tasks, developing superhuman intelligence?” As Tegmark notes, there is “little doubt that this can happen: our brains are a bunch of particles obeying the laws of physics, and there’s no physical law precluding particles from being arranged in ways that can perform even more advanced computations.” That so-called singularity, the point at which machines become smarter than people, could be, as he puts it, “the best or worst thing ever to happen to life as we know it, so if there’s even a 1% chance that there’ll be a singularity in our lifetime, I think a reasonable precaution would be to spend at least 1% of our GDP studying the issue and deciding what to do about it.” Yet, he adds, “we largely ignore it, and are curiously complacent about life as we know it getting transformed.”

The sci-fi writer Bruce Sterling tells us not to be afraid, because

Modern wireless devices in a modern Cloud are an entirely different cyber-paradigm than imaginary 1990s “minds on nonbiological substrates” that might allegedly have the “computational power of a human brain.” A Singularity has no business model, no major power group in our society is interested in provoking one, nobody who matters sees any reason to create one, there’s no there there.

But Sterling’s optimism has little to do with reality. One leading artificial-intelligence researcher recently told me that there was roughly a trillion dollars “to be made as we move from keyword search to genuine [A.I.] question answering based on the web.” Google just hired Ray Kurzweil to ramp up its investment in artificial intelligence, and although nobody has yet built a machine with the computational power of the human brain, at least three separate groups are actively trying, with many parties expecting success sometime in the next century.


Edison certainly didn’t envision electric guitars, and even after the basic structure of the Internet had been in place for decades, few people foresaw Facebook or Twitter. It would be a mistake for any of us to claim that we know exactly what a world full of robots, 3-D printers, biotech, and nanotechnology will bring. But, at the very least, we can take a long, hard look at our own cognitive limitations (in part through increased training in metacognition and rational decision-making), and we can significantly increase the currently modest sums we invest in research into how to keep future generations safe from the risks of new technologies.

Gary Marcus, a professor at N.Y.U. and author of “Guitar Zero: The Science of Becoming Musical at Any Age,” has written for newyorker.com about the future of employment in the robot era, the facts and fictions of neuroscience, moral machines, Noam Chomsky, and what needs to be done to clean up science.

Illustration by Lou Brooks.