Silicon Valley says AI could be apocalyptic. It’s not acting that way.
By Jacob Sweet, The Atlantic, May 18, 2023
If you’re looking for a reason the world will suddenly end, it’s not hard to find one—especially if your job is to convince people they need to buy things to prepare for the apocalypse. “World War III, China, Russia, Iran, North Korea, Joe Biden—you know, everything that’s messed up in the world,” Ron Hubbard, the CEO of Atlas Survival Shelters, told me. His Texas-based company sells bunkers with bulletproof doors and concrete walls to people willing to shell out anywhere from several thousand to several million dollars for peace of mind about potential catastrophic events. Lately, interest in his underground bunkers has been booming. “When the war broke out in Ukraine, my phone was ringing every 45 seconds for about two weeks,” he said.
Many of his clients work in tech: Although the prepper movement in America spans the upper and middle classes, the left and the right, Silicon Valley has in recent years become its epicenter. In his book Survival of the Richest: Escape Fantasies of the Tech Billionaires, Douglas Rushkoff delves into what he calls “The Mindset”—the idea among Silicon Valley doomsday preppers that “winning” means earning enough money to escape the damage that befalls everyone else. In 2018, Bloomberg reported that seven tech entrepreneurs had purchased bunkers in New Zealand. And a 2016 New Yorker profile of Sam Altman quoted the OpenAI CEO as saying he had “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to” in the event of super-contagious viruses, nuclear war, and AI “that attacks us.”
Extreme predictions about what AI could do to the world have since grown louder among a vocal minority of those who work in the field. Earlier this month, the pioneering researcher Geoffrey Hinton quit his role at Google and warned about the dangers of AI. “Look at how it was five years ago and how it is now,” he told The New York Times. “Take the difference and propagate it forwards. That’s scary.” Other people have gone further. “If we go ahead on this everyone will die,” Eliezer Yudkowsky, a senior research fellow at the Machine Intelligence Research Institute, has written, “including children who did not choose this and did not do anything wrong.”
So this should be a moment for AI-doomsday preppers, with frazzled Silicon Valley millionaires shelling out enormous sums of money to shield themselves from whatever AI does to us all. But it’s not. I asked Hubbard if anyone had cited AI to him as their motivator for purchasing a bunker. “I don’t think a single person has brought up AI,” he said. This AI freakout is exposing what has long been true about Silicon Valley’s doomsday preppers: A disaster-proof compound might not save the richest tech moguls, but perhaps that was never the whole point.
Hubbard, one of the biggest names in commercial prepping, told me that his archetypal customer is a 60-year-old man who recently sold his business for $30 million, bought a ranch, and now wants a bunker. Even the tech billionaire he recently worked with didn’t bring up AI as a concern. “What matters is nukes and Yellowstone and meteors,” Hubbard said.
Nobody I talked with in the world of doomsday prepping was sweating AI very much, compared with all the other threats they perceive. J. C. Cole, who runs a prepping business called American Heritage Farms, outlined 13 “Gray Swan” events he believes are both imminent and powerfully destructive. “I don’t worry about AI right now,” he said, “because I think we won’t get there.” He’s pretty sure the U.S. will go to war with Russia and China sometime in the next year. He worries about hyperinflation (“which is happening as we speak”), credit collapse, various natural disasters, biological weapons, and electromagnetic pulses from nuclear bombs or solar storms destroying the electrical grid. “Before AI comes in and shows up as the Terminator,” he said, “I think we’ll just have a banking crash.” In anticipation of these Gray Swans, he is developing organic farms and underground shelters that can help save a handful of paying members.
Part of why AI-doomsday prepping does not seem to be much of a thing is that it’s still hard to imagine the precise mechanics of an AI threat. Familiar methods of destruction come to mind first, but with an AI twist: Rogue AI launches nuclear weapons, bombs the electrical grid, stages cyberattacks. The shelters that Hubbard offers explicitly provide support for situations like these. Whether the nuclear weapon is sent by an unstable foreign leader or by a malfunctioning or malicious robot, a bomb is still a bomb. People who were already concerned about those threats will prepare, but they would have anyway.
People who are particularly focused on AI’s destructive potential have a different reason not to build a bunker. “The threat we’re worried about is one where we build vastly smarter-than-human AI systems that are resource-hungry and therefore harvest every atom of material on every planet of the solar system,” said Rob Bensinger, the head of research communications at the Machine Intelligence Research Institute. “There’s no ‘prepping’ that can be done to physically guard against that kind of threat.” Yudkowsky told me in an email that nobody he’d consider knowledgeable about AI is doomsday prepping; it makes little sense. “Personally,” he wrote, “I don’t spend a lot of mental energy worrying about relatively mild disaster scenarios where there’d be survivors.” The best way to prepare for an AI doomsday, then, is to fight the technology’s further development before it gets too powerful. “If you’re facing a superintelligence, you’ve already lost,” Yudkowsky said. “Building an elaborate bunker would not help the tiniest bit in any superintelligence disaster I consider realistic, even if the bunker were on Mars.”
The conspicuous lack of doomsday prepping during such a consequential era for AI suggests something else: that among the super-rich in Silicon Valley, bunkers and shelters just aren’t as popular as they once were. Rushkoff told me that the hype around end-of-the-world bunkers has settled, and that some people have seen the foolishness of the enterprise. For doomsdayers who really do fret about the least likely, most devastating scenarios, traditional prep won’t be of much use. “I don’t care how insulated the technology in your bunker is,” he said. “The AI nanos are going to be able to penetrate your bunker … You can’t escape them.” An AI takeover would be the final phase of Silicon Valley’s story of disruption—after taxis and food delivery, the entire human race.
But really, Rushkoff doubts that many ultrarich preppers are truly preparing for the end of the world. What they want, he thinks, is a self-sufficient-island fantasy—more White Lotus than The Last of Us. If this AI moment—when apocalyptic warnings seem to pop up by the day—is not producing a prepping boom, then perhaps there isn’t much substance behind all the expensive posturing. No matter the state of the world, prepping has always been a flashy lifestyle choice. “It doesn’t matter if there’s a disaster or not,” Rushkoff said. “The apocalypse was just the excuse to build these things.”