Network Insecurity

Advances in connectivity and mobility tend to increase the possibilities for hacking. Photograph by Mauricio Alejo

Richard McFeely, of the F.B.I., is a former insurance adjuster from Unionville, in eastern Pennsylvania horse country. He has a friendly face, meaty hands, and a folksy speaking style that doesn’t seem very F.B.I.-like. “Call me Rick,” he said, when I met him at his office, in Washington, coming around his wide desk and gesturing toward the soft furniture in the front part of the room.

McFeely, who is fifty-one, and whose official title is executive assistant director (“E.A.D.,” in office shorthand), oversees about sixty per cent of F.B.I. operations, including the Cyber Division: some one thousand agents, analysts, forensic specialists, and computer scientists. The bureau has made several high-profile takedowns in recent years, including the dismantling of the Coreflood botnet, a network of millions of infected “zombie” computers, or bots, controlled by a Russian hacking crew.

“But we are just touching the tip of the surface in terms of what companies and what government agencies are at the most risk,” McFeely said, shaking his big head ruefully. “We simply don’t have the resources to monitor the mammoth quantity of intrusions that are going on out there.” Shawn Henry, McFeely’s predecessor at the F.B.I., told me, “When I started in my career, in the late eighties, if there was a bank robbery, the pool of suspects was limited to the people who were in the vicinity at the time. Now when a bank is robbed the pool of suspects is limited to the number of people in the world with access to a five-hundred-dollar laptop and an Internet connection. Which today is two and a half billion people.” And instead of stealing just one person’s credit card, you can steal from millions of people at the same time. This may have happened when, in 2011, Sony’s PlayStation Network was hacked and its members’ credit-card data compromised.

“It’s not the eighties,” Tony Stark sneers, in “Iron Man 3.” “No one says ‘hack’ anymore.” Hacking used to mean hippie technologists who wanted to set information free. Now hackers can be organized criminal gangs, working out of the former Soviet bloc, who steal financial information; or state-affiliated spies in China, who are carting away virtual truckloads of intellectual property; or saboteurs in Iran or in North Korea who are trying to disrupt or destroy critical infrastructure—not to mention all the small-scale criminals downloading hacking tools and launching attacks because this shit is cool and you can’t get caught.

General Keith Alexander, who is the head of the N.S.A. and of the U.S. Cyber Command, has called the loss of American industrial secrets and intellectual property to cyber espionage “the greatest transfer of wealth in history.” Plans for the F-35 jet fighter, source code from Google, and details about Coca-Cola’s 2009 bid to buy China Huiyuan Juice Group have been stolen. The Times and the Wall Street Journal both revealed in January that Chinese hackers penetrated their networks last year, apparently in order to gather intelligence about upcoming stories on Chinese officials. Until recently, the U.S. has been reluctant to publicly accuse China of spying, but, in March, Tom Donilon, President Obama’s national-security adviser, spoke of “cyber intrusions emanating from China on an unprecedented scale.” This month, the Pentagon released a report that bluntly accused the Chinese military of cyber espionage. China denies these accusations.

China is by no means our only cyber-security problem. Organized criminal gangs, loosely affiliated with nation states, constitute an entirely new category of threat. As the world moves online, traditional boundaries break down, and whether it will ever be possible to secure the Internet is an open question. McFeely, his voice rising plaintively, said, “The cyber bad guys have evened the playing field! In the past, we knew it was the traditional big players who were spying on us. Now you get these small countries that are trying to gain a competitive advantage in some industry. So they can go and hire a hacking group, specifically target a company, and steal years and years of R. & D.”

In October, Leon Panetta, the Defense Secretary at the time, warned that “an aggressor nation or extremist group could gain control of critical switches and derail passenger trains, or trains loaded with lethal chemicals,” resulting in a “cyber Pearl Harbor.” Privacy advocates criticized the statement as scare-mongering, suspecting that Panetta’s ulterior motive was to increase government oversight of the Internet. Approximately eighty-five per cent of the critical infrastructure in the U.S. is privately held, and the government’s authority over it is limited. The Cybersecurity Act of 2012, which would have asked companies to comply with basic security regulations, died in the Senate last year. Republicans thought that the regulations would be too expensive and would entail too much government oversight.

The Department of Homeland Security reported a hundred and ninety-eight attacks on critical U.S. infrastructure in fiscal year 2012; there were just nine attacks in 2009. These included the penetration of twenty-three oil and natural-gas pipeline operators and six attacks on nuclear power plants. Last year, hackers also broke into an unclassified network in the White House Military Office. In all these cases, the intruders seemed more interested in snooping than in sabotage, though they could return, with more sinister intentions.

A large part of the nation’s financial infrastructure is also under siege. The most furious wave of assaults began in September, when almost fifty major U.S. banks suffered “distributed denial of service” (DDoS) attacks, in which botnets—which can be controlled from afar with remote-access tools, known as RATs—directed high volumes of traffic to the banks’ Web sites, causing them to run slowly or to crash altogether.
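
To see why the “distributed” part matters, consider the kind of naive defense a site might try first: throttling any single address that sends too many requests. The sketch below, in Python, is only an illustration (the threshold and the addresses are made up); it shows why a botnet of thousands of machines, each behaving politely on its own, sails past a per-source limit while the combined traffic still overwhelms the servers.

```python
import time
from collections import defaultdict, deque

# A minimal sketch, not any bank's actual defense: throttle any single source
# that exceeds a request budget, then show why a botnet defeats it. The
# threshold and the addresses below are invented for illustration.

WINDOW_SECONDS = 10
MAX_REQUESTS_PER_WINDOW = 100

recent = defaultdict(deque)  # source address -> timestamps of recent requests

def allow_request(source_ip, now=None):
    """Return True if this single source is still under its request budget."""
    now = time.time() if now is None else now
    window = recent[source_ip]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()  # forget requests that have aged out of the window
    if len(window) >= MAX_REQUESTS_PER_WINDOW:
        return False  # one noisy source gets throttled
    window.append(now)
    return True

# Ten thousand bots, one request each: every request is "polite" and allowed,
# yet the site still has to absorb the whole flood at once.
bots = [f"10.0.{i // 256}.{i % 256}" for i in range(10_000)]
allowed = sum(allow_request(ip, now=0.0) for ip in bots)
print(f"{allowed} of {len(bots)} bot requests got through")
```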

In many cases, the F.B.I. knows where cyber attacks originate, and in some cases it knows who the attackers are. Much of this information is gathered by the National Cyber Investigative Joint Task Force, an interagency group, based in an undisclosed location near Washington. But, even when hackers are identified, law enforcement is often powerless to confront them. “If you look at the typical attacks on a bank,” McFeely said, “most of the attacks aren’t coming from within the U.S. What’s the stomach of our policymakers here to conduct a unilateral operation on foreign soil?

“I get sick to my stomach when I see that stuff,” McFeely went on. “We literally watch our intellectual property leave the country. But, if we do stop it, we lose sight of the rest of it.” The bureau would end up revealing its sources and methods to the enemy, while the hackers would simply move on to another target. “So it’s a huge conundrum for us. Could the U.S. government do something about these attacks that the financial sector has been undergoing for over a year? Of course we could. The question is, what are the triggers that are going to cause us to take action and what will the impact be?”

From the earliest days of the Internet, the basic approach to network security has been to play defense. The idea is to secure the perimeter of a network with firewalls and intrusion-prevention systems that keep “blacklists” of suspect bits of code, and rely on algorithms which detect suspicious patterns; the algorithms are constantly updated as new threats emerge. Tom Kellermann, a vice-president of Trend Micro, a cyber-security firm based in Cupertino, California, calls this approach “the citadel paradigm.” It depends heavily on antivirus software and users’ diligent updating of it. “You keep the crown jewels on the inside, and you build electronic walls and a moat around them,” he told me. “It’s like the Federal Reserve in lower Manhattan, where the gold is kept.” This type of strategy works well when the attacks are opportunistic and random: the intruders are simply searching for an easy way in.
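
In miniature, the blacklist half of that defense looks something like the sketch below: compute a fingerprint of each file and compare it against a list of known-bad fingerprints. This is an illustration of the idea, not any security company's product; the blacklist entry is a placeholder, and real scanners also rely on pattern matching and behavioral heuristics.

```python
import hashlib
from pathlib import Path

# An illustrative sketch of blacklist-based scanning, not a real antivirus
# engine: hash every file and flag any whose fingerprint appears on a list of
# known-bad signatures. The entry below is a placeholder, not a real signature.

KNOWN_BAD_HASHES = {
    "0" * 64,  # placeholder SHA-256 value standing in for a real malware hash
}

def sha256_of(path):
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan(directory):
    """Return the files under 'directory' whose hashes match the blacklist."""
    flagged = []
    for path in Path(directory).rglob("*"):
        if path.is_file() and sha256_of(path) in KNOWN_BAD_HASHES:
            flagged.append(path)
    return flagged

for suspect in scan("."):
    print(f"Blacklisted signature found: {suspect}")
```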

In recent years, however, the citadel paradigm has been battered by so-called “targeted attacks,” in which the adversary goes after a particular government agency, company, or individual, with a specific goal in mind. This kind of determined adversary often uses a relatively sophisticated e-mail scam known as “spear phishing.” Earlier phishing attacks tended to be clunky and scattershot, like the bogus-looking e-mails purportedly from Google asking you to verify your log-on information, or Nigerian banking scams, or the “Help, I’ve been robbed in Dublin, can you wire money to Western Union” e-mails supposedly from a friend but actually from a con artist who has hacked your friend’s e-mail account. The flimflam was easy to sniff out, generally because the spelling was atrocious (bad guys apparently don’t use spell-checkers). Toying with these hapless thieves—“I have wired a thousand dollars to the local police station, you can collect your reward there”—used to be good sport.

But a spear-phishing e-mail is tailored especially for you. Not long ago, the National Association of Manufacturers received an e-mail purportedly from a reporter at Bloomberg who was working on a story about the group, with an Excel spreadsheet contained in an enclosure. In fact, Chinese hackers had spoofed the reporter’s e-mail address, and the enclosure contained espionage-related “malware”: malicious code. Social networks make it much easier for hackers to impersonate friends and colleagues. “Here are the numbers we spoke about at last night’s party”—the one you posted pictures of on Facebook. Opening the attachment installs the malware without your noticing; later, you may wonder why your computer’s fan is always on (it’s because the hacker is using your machine’s extra computing power). Now you’ve got a RAT in your machine, which can capture your passwords, credit-card numbers, and banking information, and can turn on your computer’s microphone and camera. (Around this magazine’s offices, people have started putting Post-it notes over their Macs’ little Cyclops-eyed cameras.) Your machine is now part of a botnet (in China, bots are called “meaty chickens”), and can be used to launch denial-of-service attacks or send out spam.

RATs also work on smartphones, turning them into ideal spying and tracking devices; you bug yourself, basically. No one is safe. A computer (and therefore any network) can be infected if you simply open an e-mail or visit the wrong Web site. And anyone can be hacked. Even RSA, the maker of the SecurID tokens that are supposed to keep intruders off networks, got hacked when an employee clicked on an enclosure in a spear-phishing e-mail with the subject line “2011 Recruitment Plan,” ostensibly from beyond.com, a career-advancement Web site, and inadvertently installed malware on his computer. The hackers then captured passwords from that computer and used them to gain access to other machines on the network and steal some of RSA’s data, which, in turn, allowed them to hack RSA’s clients using duplicates of the now compromised security tokens. RSA subsequently offered to replace or monitor all of its tokens, which, as of 2009, numbered forty million. The same spear-phishing attack affected more than seven hundred and fifty other companies, including about a fifth of the Fortune 100. Dmitri Alperovitch, the co-founder and chief technology officer of CrowdStrike, a private security firm, summed all this up for me by saying, “The idea that any company in the world is going to be able to protect itself against an intelligence service or an armed military unit of another country is, quite frankly, ridiculous.”

There are simply too many ways for an attacker to get into your computer now. If you log on to the office network with a smartphone, or if you carry a laptop between work and home (a workplace trend known as B.Y.O.D., for “bring your own device,” although I heard security people say that it means “bring your own disaster”), you make it very easy for intruders to enter the office network. “In fact, some of the biggest espionage cases we’re working on right now involve the home-to-work commuting thing,” McFeely told me. “The company can have great security within its own walls, but as soon as it transits out you’re at the mercy of the weakest link in the chain.” With Wi-Fi hot spots, which can be easy to tap into, popping up everywhere, and with ever more network-enabled devices entering both the office and the home—smart TVs, smart front-door locks—intruders have a panoply of ways to break into your life. Several years ago, Best Buy was discovered to be selling digital picture frames that had been infected with malware.

“Up until four years ago, we kind of had a handle on this shit,” Tom Kellermann says. “Virus scanning and encryption and firewalls were doing a pretty good job. But the latest attack kits are bypassing those perimeter defenses, which is why this paradigm has to shift.”

I spent a day inside the citadel, with Google’s security team, at the company’s headquarters, in Mountain View, California. As part of its mission to organize the world’s information, Google tries to provide its users with a secure way to access it. Google doesn’t guarantee that it will protect its customers from cyber criminals and spies, but it has devised a number of ways of alerting users to suspicious patterns that its security algorithms pick up. The company has been unusually forthcoming about attacks. In January, 2010, for example, Google announced that it had fallen victim to an attack that came to be known as Operation Aurora. (At least twenty other companies, including Adobe, Intel, and Yahoo, were hit, too, but Google was the first to make the information public.) Subsequent analysis by cyber-security experts suggested that hackers affiliated with the Chinese government were behind the attack, and that they had exploited a vulnerability in Internet Explorer to get onto Google’s network and steal some of its source code. Google’s chief legal officer, David Drummond, wrote that “we have evidence to suggest that a primary goal of the attackers was accessing the Gmail accounts of Chinese human-rights activists.” Although only two Gmail accounts were compromised in the attack, Drummond said that the ensuing investigation revealed that the accounts of dozens of Gmail users in the U.S., China, and Europe who are advocates of human rights in China appeared to have been routinely accessed by third parties.

Sergey Brin, one of Google’s co-founders, told me that he invests a lot of time in keeping his security team motivated. “Corporate security teams are often low in morale,” he said last October, at a conference in Arizona. (Brin, dressed in Lycra cycling duds, was sporting Google Glass, Google’s computing eyewear; sometimes I couldn’t tell whether he was talking to me or to the screen inside the lens.) He continued, “In big corporations, people don’t understand what security people do, for the most part, and no one pays attention to them unless something goes wrong. Frankly, a lot of companies aren’t that interested in security. They say they care, but they really don’t; they are vulnerable, they can’t do that much about it, and they know it. They’re just waiting for something big to happen.” Brin meets with the security team every Friday, to review the week’s catalogue of threats. Because Google and its users are attacked thousands of times each day, there is usually much to discuss.

The team at Google is led by Eric Grosse, a software engineer who came from Bell Labs, and includes Linus Upson, who oversees security for Google’s browser, Chrome; Matt Cutts, who handles Web spam; Niels Provos, who leads Google’s anti-malware efforts; and Shane Huntley, who works on targeted threats, which include state-sponsored espionage. The citadel paradigm works better in some of these areas than in others. Google’s anti-spam efforts are the brightest spot in its security landscape; spam is exactly the sort of mass, opportunistic attack that the defensive strategy was designed for. Web spam, which clogged up search-engine results, and unsolicited e-mails advertising prescription drugs, penis-enlargement methods, and casinos, which threatened to sink the Internet in the early two-thousands, have been all but eliminated by Google’s anti-spam algorithms. “We like to say the spammers have the numbers, we have the math,” Cutts told me.
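
Google does not publish its anti-spam algorithms, but the textbook version of “we have the math” is statistical filtering of the naive-Bayes variety, sketched below with a toy training set: count how often each word appears in known spam versus legitimate mail, then score a new message by how strongly its words lean one way or the other.

```python
import math
from collections import Counter

# A toy naive-Bayes spam filter, assuming a tiny hand-labelled training set.
# Google's production systems are proprietary and far more elaborate; this is
# only the textbook idea of scoring messages by per-word log-likelihood ratios.

spam_examples = ["cheap pills casino win", "win cash now casino"]
ham_examples = ["meeting notes attached", "lunch tomorrow at noon"]

def word_counts(messages):
    counts = Counter()
    for message in messages:
        counts.update(message.lower().split())
    return counts

spam_counts, ham_counts = word_counts(spam_examples), word_counts(ham_examples)
spam_total, ham_total = sum(spam_counts.values()), sum(ham_counts.values())
vocabulary = set(spam_counts) | set(ham_counts)

def spam_score(message):
    """Sum of per-word log-likelihood ratios; positive means 'looks like spam'."""
    score = 0.0
    for word in message.lower().split():
        # Laplace smoothing so unseen words don't zero out the probabilities.
        p_spam = (spam_counts[word] + 1) / (spam_total + len(vocabulary))
        p_ham = (ham_counts[word] + 1) / (ham_total + len(vocabulary))
        score += math.log(p_spam / p_ham)
    return score

print(spam_score("win a casino holiday"))    # positive: spammy
print(spam_score("notes from the meeting"))  # negative: looks legitimate
```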

Grosse and I discussed passwords, often the weakest link in the security chain. The recent takeover of the Associated Press’s Twitter account by Syrian hackers, which caused a momentary hundred-and-fifty-point drop in the Dow, is an example of the kind of havoc a stolen password can bring about. “My goal is to get rid of passwords completely,” Grosse said. “Perhaps you will still have a password but it wouldn’t be a prime line of defense.” In the short term, however, more passwords, not fewer, seems to be the solution. “We rolled out this two-step verification”—using two passwords, essentially. “The biggest problem is people can’t be expected to remember two hundred passwords. I mean, I have two hundred passwords, and they’re all different and they’re all strong.”

“How do you remember them?” I asked.

“I have to write them down.”

“But then that piece of paper could be stolen.”

“Yeah, but if your adversary is somebody on the other side of the ocean he can’t get the piece of paper you have in a safe at home. If you’re trying to guard against your roommate, then you need a new roommate.” With the two-step process, you register your mobile number, and when you enter your first password, Google texts a unique code to your phone, and then you enter that.
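
Stripped of Google’s real infrastructure, the second step amounts to something like the sketch below: after the password checks out, the server generates a short-lived random code, sends it to the phone number on file, and grants access only if the user types it back before it expires. This is a simplified illustration; the account name and phone number are invented, and Google’s systems, including its authenticator app, which follows the TOTP standard, are considerably more involved.

```python
import hmac
import secrets
import time

# A simplified sketch of the second step in two-step verification. The SMS
# gateway is simulated with a print statement, and the user name and phone
# number are invented for illustration.

CODE_LIFETIME_SECONDS = 300
pending_codes = {}  # username -> (one-time code, expiry timestamp)

def send_code(username, phone_number):
    code = f"{secrets.randbelow(1_000_000):06d}"  # six-digit one-time code
    pending_codes[username] = (code, time.time() + CODE_LIFETIME_SECONDS)
    print(f"(texting {code} to {phone_number})")  # stand-in for an SMS gateway

def verify_code(username, submitted):
    code, expiry = pending_codes.pop(username, (None, 0))
    if code is None or time.time() > expiry:
        return False  # nothing outstanding, or the code has expired
    # Compare in constant time, so timing doesn't leak how many digits matched.
    return hmac.compare_digest(code, submitted)

# After the password succeeds:
send_code("alice", "+1-555-0100")
print(verify_code("alice", "000000"))  # almost certainly False: wrong code
```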

For Upson, who works on securing Chrome, the principal threat is what’s known as a “zero-day exploit”: a kind of vulnerability, either in an operating system or in an application such as Flash, Quicktime, or Chrome itself, through which intruders can sneak into a computer. (Because the exploit is either unknown to the software vender, or has not yet been patched, it is said to have zero days of remediation.) Exploits can sell for hundreds of thousands of dollars on the black market. Upson told me that Chrome updates itself automatically to fix known exploits, rather than requiring the user to do it. But if an adversary does find his way into your machine through an unpatched hole, he said, “you are better off just throwing your computer away and starting again with a new one, because there are so many places for the malware to hide.” That’s especially true if the hacker uses a “rootkit,” a type of malicious software that can conceal itself from the antivirus software that is supposed to detect it, making cleaning your machine extremely difficult.

One of the Eastern European crews’ favorite ploys, Provos told me, is to masquerade as an anti-malware company. “You go to your computer and your screen flashes and you get this dialogue box that says, ‘We found all this malware on your computer and you are really in deep trouble, but don’t despair, if you pay forty dollars right now you can download this security solution.’ So now the malware authors have got forty dollars and they also have complete control of your computer.” The latest wrinkle in this style of attack is “ransomware”—a program that encrypts your hard drive and sends you a message that appears to come, for example, from the F.B.I.’s Cyber Division, saying that it has detected child pornography or pirated software on your computer and instructing you where to send money, in order to decrypt your data.

State-sponsored political espionage is perhaps the most difficult challenge the Google team faces. Chinese targets include the so-called “five poisons”: the Falun Gong, Taiwan, the democracy movement, and Uighur and Tibetan separatists; even the Dalai Lama’s computers were hacked. But China isn’t alone in practicing cyber espionage. Oppressive regimes from Syria to Bahrain use the latest cyber-surveillance tools, many of them made by Western companies, to spy on dissidents. Finfisher spyware, for example, made by Gamma International, a U.K.-based firm, can be used to monitor Wi-Fi networks from a hotel lobby, hack cell phones and P.C.s, intercept Skype conversations, capture passwords, and activate cameras and microphones. Egyptian dissidents who raided the office of Hosni Mubarak’s secret police after his overthrow found a proposal from Gamma offering the state Finfisher hardware, software, and training for about four hundred thousand dollars.

Shane Huntley told me, “Our analysis shows that if you are engaged in democracy movements or talking about human rights there is a much greater than fifty-per-cent chance that you are going to be the subject of a targeted attack.” He added, “We found that a range of high-level U.S. officials were also having their accounts hijacked” by spear-phishing schemes. “The breadth and depth is kind of amazing.”

“It’s clear now that relying on traditional tools like antivirus alone is not sufficient for defense,” Eric Grosse said. Today, he said, the various social-engineering attacks, like spear phishing, “are actually quite good at tricking and ensnaring victims.” To counter these threats, he went on, “dynamic defenses that evolve almost instantaneously” are required. “For example, attackers who use compromised Web sites to deliver phishing pages now often have to shift the location of their sites within a matter of minutes to avoid being caught by software that blocks them.”
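
One simple way to picture the cat-and-mouse game Grosse describes is a browser-side check against a blocklist of known phishing hosts that has to be refreshed every few minutes to keep up. The sketch below is only an illustration: the hosts are made-up examples, the list is hard-coded rather than pulled from a live threat feed, and Google’s actual mechanism, Safe Browsing, works quite differently under the hood.

```python
import time
from urllib.parse import urlparse

# An illustrative sketch of fast-refreshing blocklists, not Google's Safe
# Browsing: re-fetch the list of known phishing hosts whenever it goes stale,
# because the sites themselves move within minutes. The hosts are invented.

REFRESH_SECONDS = 300
_blocklist = set()
_last_refresh = 0.0

def refresh_blocklist():
    """Stand-in for pulling the latest phishing hosts from a threat feed."""
    global _blocklist, _last_refresh
    _blocklist = {"login-veriffy.example.com", "secure-update.example.net"}
    _last_refresh = time.time()

def is_blocked(url):
    if time.time() - _last_refresh > REFRESH_SECONDS:
        refresh_blocklist()  # the list is stale; fetch a fresh copy
    return urlparse(url).hostname in _blocklist

print(is_blocked("https://login-veriffy.example.com/reset"))  # True
print(is_blocked("https://www.example.org/"))                 # False
```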

Adam Meyers, who is the head of intelligence for CrowdStrike, the security firm, walked me through a hypothetical corporate-espionage attack coming from China. Meyers, who is tall, techy, and wears four small earrings in his left ear (three in the lobe and one higher up, in the cartilage), began by noting that many patterns of corporate espionage bear a suspicious resemblance to China’s five-year plans for modernizing the country’s infrastructure. The scenario he conjured up involved China’s South Sea Fleet, one of three fleets that make up the naval branch of the People’s Liberation Army, or P.L.A. The Chinese navy is known to be interested in expanding its capabilities beyond green-water activities—near to shore—and building up a blue-water, or deep-sea, presence. To do that, it needs to advance its satellite communications, boat building, robotics, and other technologies.

“So the P.L.A. naval officer says to his intelligence forces, ‘Here’s the five-year plan,’ ” Meyers said. “He’s not using the military’s élite hacking crews, because he doesn’t want this traced back to the military. But there are plenty of crews for hire that are only loosely affiliated with the government, so he uses one of those. He says, ‘Get me everything you can on these technologies.’ So they go out and start their operation.

“The first thing they need to do is get access. That starts with open-source intelligence collection—same way you’d start a story, I imagine. They find out who the key people are at the tech companies they’re interested in, and do a Google search. They get people, facilities, potentially who the company’s software venders are, and what kind of security software they run. They get the jargon they can use to start crafting an attack. And if they can’t get access to you they will find out who your partners are and get access to them. It’s all about exploiting a trust relationship.

“Then they run all the names through social media—Facebook, Twitter, LinkedIn—and map your personal relationships.” The spear-phishing e-mail “could be a weaponized press release,” Meyers continued. “If it’s The New Yorker I’m after, I send your P.R. people an e-mail saying, ‘Hey, we’ve got evidence your reporters are paying people for stories, we’re going to go to press with this in the next twelve hours’—and attach the link. Chances are you are going to click on it.” Attacks follow marketing guidelines on what day and time is best to send out e-mails that people will open. “Like Tuesday, late morning. Or they’ll send something on a Friday, before a three-day weekend, because they know all the Americans are going away. Memorial Day is a big one. Then they’ve got until Tuesday before anyone even thinks of doing work”—giving them plenty of time to nose around the network.

Meyers showed me an e-mail that one of CrowdStrike’s customers had received from a Chinese hacking crew, with the identifying information redacted:

Dear Sir, I am writing to you to ask you for some information about [BLANK]. Our company plan to purchase five sets of [BLANK]. We are now ask for quote on this product. . . . Looking for your reply.

Below was an attachment with the header “Details About Requirement.”

Not particularly convincing, I said, noting the poor grammar.

“Yeah,” Meyers replied, “but if you’re a sales guy and you see this come in in the middle of the first week of the first quarter, and the guy plans to purchase five, and this is a million-dollar product—that’s a five-million-dollar deal. So you open it up, it triggers a bug, and your Adobe Reader crashes. But the adversary controls the flow of that crash. As it goes down, he installs the malware. So now they’re in. They establish a back door and then start looking at your system to see what tools you are running.”

In targeted attacks, the intruders generally know exactly what they are looking for, and can use your search tools to navigate around not just your computer but the office network. It’s easy to move around, because most networks are built like Mentos, “crunchy on the outside, but soft and chewy in the middle,” Meyers said, meaning that the networks lack strong internal security. “They install a key logger, dump your passwords, turn on the microphone, turn on the camera. Then they push down a different type of malware, so while the security guys are high-fiving and having a cup of coffee, the second malware is established and the hackers are still in. And, worse, we think we’ve stopped the problem. Then they push down tools that allow them to move laterally across the network. When they find what they’re looking for, they’ll compress it, and encrypt it, and exfiltrate it. And they’ll leave a back door behind to make sure they can come back in the future.”

If the old paradigm was the citadel, the new paradigm, Tom Kellermann, of Trend Micro, contends, is the prison. “You’re not trying to build the Federal Reserve, you’re trying to build Rikers Island. Instead of trying to keep the bad guys out, you keep them in, or you let them in the basement where you keep your Rottweilers, and you make life miserable for them while they are in, so they won’t want to come back.”

What would a cyber prison look like? To get a better idea, I spoke with Shawn Henry. With his shaved head and Bruce Willis tough-guy demeanor, Henry is the G-man from central casting. Having retired as the E.A.D. overseeing the F.B.I.’s Cyber Division last year, he is now a senior executive at CrowdStrike (where he continues to work with the former deputy head of the Cyber Division, Steven Chabinsky, whom he brought to the company). Part of his job is to impress clients with the urgency and the scope of their cyber-security problem. His fierce-looking eyes squint into a grim future as he conjures up cyber threats, and our lack of readiness for them. “When the electronic equivalent of planes crashing into buildings occurs,” he said, “and the lights go out, I guarantee you the public will be up in arms.”

CrowdStrike, which has an office in Crystal City, Virginia, is one of a new generation of security companies, such as FireEye, Damballa, and Mandiant, that offer clients a variety of active strategies—security and intelligence-gathering tools that bring the fight to the attacker in your system. “The old performance measure was, Can you keep a determined adversary off your network,” Henry told me. “The new measure of success needs to be, How soon after they get access can you I.D. them, so you can take immediate action.”

In one instance, which Dmitri Alperovitch, of CrowdStrike, cited approvingly to me, the government of Georgia lured a Russian hacker, who had been breaking into government ministries and banks for more than a year, to a machine that planted spyware on the hacker’s computer and used his Webcam to take his picture; the photographs were published in a government report. “The private sector needs to be empowered to take that kind of action,” Alperovitch said.

But that kind of action, which is generally referred to as “hacking back,” is illegal in the U.S. The same broad-reaching laws, grouped under the 1984 federal Computer Fraud and Abuse Act, or C.F.A.A., that the government uses to go after people like Aaron Swartz—the twenty-six-year-old activist who downloaded millions of articles from the JSTOR database—also limit private companies’ powers to take offensive action. However, the C.F.A.A. is notoriously vague and out of date. “There are gray areas,” Alperovitch told me. “What if the hackers stole malware that you had planted inside your network, and infected themselves? Is that illegal?”

Could those same gray areas allow the government to hack you? During Henry’s time at the F.B.I., the agency developed malware and spyware for possible use in criminal investigations. In 2001, the F.B.I. confirmed the existence of Magic Lantern, a type of spyware that comes in an e-mail attachment. At the time, the agency denied that the spyware had been deployed. But in 2007 the F.B.I. obtained a court order to use a similar program, called a “computer and Internet protocol address verifier,” or CIPAV, which works much the way that RATs do, secretly monitoring a computer’s use remotely.

At CrowdStrike, Henry might not need to obtain a court order to use malware, depending on how it is deployed. However, he said that he has no intention of violating the C.F.A.A. “We don’t hack back,” he said. “We don’t take actions that are illegal. I’ve been enforcing the C.F.A.A. for fifteen years, and I’ve put a lot of people in prison for violation of that law. So we’re not doing that. But,” he went on, “there are a variety of things we are able to do from a deceptive standpoint that don’t involve putting malicious code on hackers’ machines. Feeding them misinformation, giving them the wrong trade secrets. You can’t give them the wrong plans for a plane, or the wrong drug, because people die. But if it’s business plans, tactical information, it’s different.”

Orin Kerr, a professor at George Washington University and an expert in computer-crime law, argues that back-hacking could easily get out of control. He also told me, “It’s hard to know if you are targeting the right person. It’s easy to disguise your location online, so it’s easy to create a false impression that someone else was behind the attack.” But Stewart Baker, the former head of cyber policy for the Department of Homeland Security, maintains that in certain cases hacking back should be within a victim’s rights. “If you had a motorcycle in your garage, and your neighbor stole it, and you could see a trail of oil leading from your garage to his garage, you’re going to go get it back,” he said. “And I don’t think a court of law would convict you of trespassing. So, if you hack my intellectual property, shouldn’t I be able to get it back?”

Most hacking crews have characteristic digital signatures, cryptography keys, and methodologies of attack, and all that information could be used to identify them, and possibly arrest them. But the information is rarely shared between the public and the private sectors. When the F.B.I. detects a cyber-security breach at a company, agents show up at the door with guns and badges and inform the company of the break-in, but they don’t reveal who the intruders were, or what they were looking for, because that information might compromise the F.B.I.’s sources and methods. And when a private company discovers a security breach on its own (on average, more than two hundred days after the initial intrusion), it generally doesn’t share the information either with the F.B.I. or with the public, fearing the impact on its partners, investors, and customers. Even private companies operating critical infrastructure sometimes decline to coöperate with government investigations of cyber attacks on their facilities. “Some of these companies are government contractors—they work with us!” McFeely told me. “That doesn’t seem right.”
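
In crude form, matching an intrusion against known crews looks like the sketch below: collect the artifacts observed during the break-in (file hashes, command-and-control domains, tool names) and rank known groups by how many of those indicators overlap with their past work. The crew names and indicators here are invented, and real attribution weighs far more evidence than a simple overlap count.

```python
from dataclasses import dataclass, field

# A toy sketch of indicator matching for attribution. Every name, domain, and
# hash below is invented; the point is only the overlap-and-rank idea.

@dataclass
class CrewProfile:
    name: str
    indicators: set = field(default_factory=set)

KNOWN_CREWS = [
    CrewProfile("Example Crew A", {"evil-c2.example.com", "hash:aa11", "rat:gh0st-variant"}),
    CrewProfile("Example Crew B", {"drop.example.net", "hash:bb22", "lure:excel-quote"}),
]

def rank_candidates(observed):
    """Rank known crews by how many observed indicators match their profiles."""
    matches = [(crew.name, len(crew.indicators & observed)) for crew in KNOWN_CREWS]
    return sorted(matches, key=lambda pair: pair[1], reverse=True)

observed_indicators = {"evil-c2.example.com", "rat:gh0st-variant", "hash:zz99"}
for name, overlap in rank_candidates(observed_indicators):
    print(f"{name}: {overlap} shared indicator(s)")
```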

Alperovitch said, “Everyone is focussed on malware in the security industry. But malware isn’t really the problem. Organizations think they have a malware problem; in reality they have an adversary problem. Someone is coming after them for a reason. People say attribution is impossible, but there are two fallacies with that. If you are doing multiple attacks, over years, the possibility of attribution goes up. And the second thing is we bemoan the lack of privacy that we have online, but that makes it harder for an adversary to operate in cyberspace without leaving a huge digital footprint as well.”

Adam Meyers, CrowdStrike’s intelligence director, showed me a picture of a Chinese hacker who had penetrated one of its client’s networks. CrowdStrike used the hacker’s unique cryptography key to trace him back to a Chinese university, where it was able to identify him by name, and then find his picture on a social network.

Alperovitch pointed at the young man, who was wearing a tie and a short-sleeved dress shirt and leaning on a large rock with three Chinese symbols on it. “This makes it personal,” he said.

A second picture, also taken from the hacker’s social-network page, showed the man in military gear, posing with a squadron of other, similarly clad men.

“Looks just like P.L.A., right?” Meyers said. “We thought we’d hit gold.” But, on closer inspection of the uniforms, it turned out that the men were dressed for paintball.

Looming darkly over this almost Mordorian cyber threatscape is the prospect of cyber war—a future conflict fought with weaponized code that can do physical damage to infrastructure, and potentially kill people. So far, the only nations known to have deployed such code are the U.S. and Israel. Working together, the two countries produced the Stuxnet worm and, reportedly, Flame, a high-grade espionage tool. Stuxnet, discovered in 2010 but deployed as early as 2007, was designed to attack Iran’s nuclear facilities by exploiting multiple zero-day vulnerabilities in Microsoft’s Windows; it reportedly destroyed about a thousand of the centrifuges used to enrich uranium, by causing them to spin out of control. Iran is believed to have retaliated by launching many of the 2012 DDoS attacks on U.S. banks and by infecting the oil company Saudi Aramco with a virus that damaged tens of thousands of its computers, with the aim of impeding the flow of oil.

“This has the whiff of August, 1945,” Michael Hayden, the former C.I.A. and N.S.A. director, said of Stuxnet, at an event at George Washington University in February. “It’s a new class of weapon, a weapon never before used.” A cyber arms race is getting under way, and it is escalating, as the tools needed to deploy weaponized cyber attacks spread around the world. (In March, General Alexander, of the U.S. Cyber Command, told the House Armed Services Committee that he’s establishing forty new cyber teams, including thirteen dedicated to offensive attacks.) Whether that conflict will be “hot” or “cold” is hard to say, because virtually all the government’s cyber operations against other countries are classified (neither the U.S. nor Israel has taken responsibility for Flame), cloaked in the same secrecy as our drone attacks. David Rothkopf, of Foreign Policy, recently characterized cyber conflict as a “cool war,” writing, “It is a little warmer than cold because it seems likely to involve almost constant offensive measures that, while falling short of actual warfare, regularly seek to damage or weaken rivals or gain an edge through violations of sovereignty and penetration of defenses.” In any case, the Department of Defense recently announced a fivefold increase in our national cyber forces.

And yet, dire though matters appear to be, the American public doesn’t seem particularly alarmed by our cyber-security problem. Danger is everywhere and also nowhere; being invisible, cyber crime is easy to put out of your mind. In this respect, at least, it is nothing like terrorism, which feeds on bloody spectacles, martyred bombers, and public mayhem. The cyber threat is faceless and creeps in on little cat feet. You know that, like death, it’s coming, but all you can do is hope that someone will fix it before it comes for you.

There is nostalgia in the voices of security people when they speak about the “good old days” of the nineteen-nineties, when the gravest threat to network security came from computer viruses. “Wasn’t it nice?” Raimund Genes, Trend Micro’s chief technology officer, said recently, waxing nostalgic about Melissa, the 1999 computer virus, which, along with the “I Love You” virus, of 2000, marked the end of the age of viruses and the dawn of cyber crime. “A virus was highly visible, and everyone knew something was wrong with the system. And the virus was just done for fun. There was no commercial interest in creating viruses.”

Tom Kellermann told me, “Guys like me—I’m not enjoying this anymore. It’s like I’m a forest ranger, and back in the day I used to have to deal with forest fires that were accidentally set by campers, and now I have to deal with fires that are set by arsonists.” He added, “I’m looking at worse shit, crazier crap every day, running on four hours’ sleep, only ever seeing part of the puzzle, and everyone I know in the government who deals with this is completely frazzled. There’s a multiplicity of actors, in a free-fire zone. I’m so tired of people saying China this and China that. If it was just China, then at least we could create international norms and use diplomacy and other mechanisms that have been viable for hundreds of years.”

So is there any solution to our cyber problem? Every advance in connectivity and mobility seems to increase the possibilities for crime.

“We’re completely fucked,” Kellermann said. “I bought a new car yesterday. And the guy says, ‘Hey, man, do you know you can turn on a Wi-Fi hot spot in your car?’ I said, ‘What the?’ He said, ‘Yeah, it’s constantly on, bro, so all you need to do is have any of your passengers synch their devices to it, and you can get high-speed Internet while you are driving!’ I said, ‘Are you fucking kidding me? Where is it? Turn it off!’ ” ♦