About this Issue
We mostly know the story, but it bears repeating: One year ago this week, Glenn Greenwald wrote a news story that would change the world forever. In it, we learned that the National Security Agency had been secretly collecting enormous amounts of telephone metadata on what were presumably ordinary American citizens. The agency had done so without a warrant and without suspicion of any individual person. The revelation changed how Americans think about national security, privacy, and civil liberties in the digital age.
More revelations soon followed. Among many others, these included NSA surveillance of web activity, mobile phone location data, and the content of email and text messages. The NSA also conducted many highly embarrassing acts of surveillance against allied or benign world leaders, including German Chancellor Angela Merkel and the conclave that recently elected Pope Francis. It had subverted commonly used encryption systems. It had co-opted numerous tech companies in its plans. Its leaders had repeatedly lied to, or at the very least misled, the U.S. Congress.
How far should surveillance go? What has been the value of the information gained? What have we given up in the process? What are the risks, should malign actors ever get their hands on the controls of the system?
We are able to ask these questions today because of one individual: Edward Snowden, a systems administrator for the NSA who chose to make public the information to which he had access. We have no choice now but to debate it. That’s simply what democracies do whenever such momentous information becomes public.
Joining us at Cato Unbound this month are four individuals with extensive knowledge in the fields of national security and civil liberties: Cato Senior Fellow Julian Sanchez, Brookings Institution Senior Fellow Benjamin Wittes, Georgetown University Professor Carrie F. Cordero, and independent journalist Marcy Wheeler. Each brings a somewhat different perspective on the matters at hand, and we welcome them all to what is sure to be a vigorous debate.
Snowden: Year One
America’s first real debate about the 21st century surveillance state began one year ago. There had, of course, been no previous shortage of hearings, op-eds, and panels mulling the appropriate “balance between privacy and security” in the post-9/11 era. But for the masses who lacked a security clearance, these had the character of a middle school playground conversation about sex—a largely speculative discussion among participants who’d learned a few of the key terms, but with only the vaguest sense of the reality they described. Secrecy meant abstraction, and in a conflict between abstract fears and the all-too-visible horror of a burning skyscraper, there could be little question which would prevail. The panoptic infrastructure of surveillance developed well out of public view.
A more meaningfully informed public debate finally became possible via a series of unprecedented disclosures about the global surveillance apparatus operated by the National Security Agency—disclosures for which the word “leak” seems almost preposterously inadequate. It was a torrent of information, and it gave even the most dedicated newshounds a glimmer of what intelligence officials mean when they complain about “drinking from the fire hose” of planet-spanning communications networks.
The fountainhead of this stream of revelations, a young former contractor named Edward Snowden, declared himself to be motivated by a “reasonable fear of omniscient State powers kept in check by nothing more than policy documents.” It is a telling formulation, because it concedes at the outset the point on which intelligence officials invariably insist: That there are rules and procedures, safeguards and oversight mechanisms, meant to guarantee that the vast quantities of information ingested by the NSA and its global partners are used only for good purposes. The question remains whether, once the astonishing scope of the spy machine is apprehended, those fetters begin to seem somewhat decorative—and if so, what we can do about it.
Above the doorway to the CIA’s Counterterrorism Center hangs a sign meant to remind Langley’s employees of the urgency of their mission—a sign that reads: “Today’s date is September 12, 2001.” In one respect, for all the hearings and blue-ribbon panels, the fresh dumps of documents and the legislative proposals, the date remains June 6, 2013—and our public debate (at least within the United States) remains fixated on the very first program revealed by the newspaper The Guardian using Snowden’s trove of documents: The NSA’s bulk collection of telephone metadata.
Though USA Today had reported on an earlier version of the program in a single 2006 article, it is not hard to see why the Guardian story was so explosive. There, in black and white, was something vanishingly few people had ever laid eyes on: A classified order from the Foreign Intelligence Surveillance Court (FISC) requiring a single telecommunications provider, Verizon, to provide “all call detail records” on a continuous basis, whether they concerned international calls or those “wholly within the United States, including local telephone calls.” Other major carriers, we soon learned, had been receiving similar orders for years.
It had the ideal mix of ingredients to ignite public controversy. Beyond providing that first tantalizing glimpse of forbidden texts, it was unambiguous confirmation of privacy advocates’ worst fears: The NSA, traditionally barred from deploying its unparalleled signals intelligence capabilities domestically, had been vacuuming up sensitive data about millions of ordinary citizens with no known connection to terrorism or espionage, in a program institutionalized with the blessing of the FISC. Unlike many later Snowden stories, this one involved technology familiar to all but the most dedicated Luddites.
Moreover, it gave the first hint of that Court’s extraordinary secret interpretation of the government’s authority under section 215 of the Patriot Act—one that stunned and outraged even the law’s co-author, Rep. James Sensenbrenner. Language permitting the FBI to obtain documents “relevant to an investigation”—a phrase used in several related authorities—could be used to acquire entire databases of information in order to sift through them for the tiny fraction of records pertaining to investigative targets and their associates. To many, it seemed like the dictionary definition of an impermissible “fishing expedition.” At least one federal judge would ultimately conclude that the NSA program was not just statutorily but constitutionally suspect—too vast and potentially intrusive in scale to fall within the scope of a 1979 Supreme Court opinion that had blessed far more limited collection of phone records without a Fourth Amendment search warrant.
The 215 telephony program is also the one about which we have learned, by far, the most additional details since its initial exposure—through a combination of disclosures from the government itself, both voluntary and legally compelled, as well as thorough investigative reports produced by two independent expert panels: A handpicked review group appointed by President Obama and the long quiescent Privacy and Civil Liberties Oversight Board (PCLOB) established by federal statute.
Initial assurances from the government that the telephony program was both strictly supervised and vital to security fared poorly in light of these subsequent disclosures—continuing a disturbing pattern that has emerged over the past decade. The fact that a 29-year-old contractor had been able to walk out with tens of thousands of the NSA’s most highly classified secrets should already, of course, have raised questions about the efficacy of internal controls. But a 2009 opinion from the FISC also made clear that the agency’s official overseers had little independent ability to monitor whether the rules were being followed. Three full years after the telephony program began, officials acknowledged that it had never actually operated as it had been described to the court. The rules the FISC had imposed to limit access to this vast trove of sensitive records, as the understandably irate opinion put it, “have been so frequently and systematically violated that it can fairly be said that this critical element of the overall regime has never functioned effectively.”
Dramatic defenses of the program’s value soon began to collapse as well. A thorough inquiry by the PCLOB determined that the program had “shown only limited value,” and in the dozen or so cases where it had played some role in a successful investigation, “simply mirrored information about telephone connections that the FBI developed independently using other authorities.” Far from being instrumental in foiling multiple terrorist plots, as some defenders originally suggested, the NSA program had served as a “catalyst” in exactly one investigation, involving not bombs but the transfer of funds to the Somali terror group Al Shabaab. Even in that lone case—the “strongest success story” produced by the program after seven years—the PCLOB concluded that neither the NSA’s vast compendium of records nor its analytic speed was essential to the discovery of the suspect. The FBI, in other words, could have gotten their man by traditional, targeted means.
One of Snowden’s great fears one year ago was that nothing would change as a result of his disclosures—that the public would greet them with a shrug, or at any rate, with insufficient outrage to overcome the inertia of a Congress where the NSA’s allies controlled the intelligence committees. With respect to this one program, that fear has been at least partly dispelled. President Obama has ordered the NSA to seek specific judicial orders before querying its existing database, and the USA FREEDOM Act, legislation requiring the use of “specific identifiers” in government demands for information under a range of intelligence authorities, has already passed the House of Representatives. Already, then, we have powerful confirmation that surveillance secretly approved by “all three branches of government,” as its defenders never tired of reminding us, will not necessarily pass muster with the public.
Yet it is hard to believe that an end to completely indiscriminate bulk collection—under certain authorities, at least—is all the change Snowden hoped to achieve, and the reforms approved by the House thus far have fallen well short of the hopes of privacy advocates.
Slouching Toward Reform
When the USA FREEDOM Act was unveiled at a Cato Institute conference in October 2013, it set civil libertarian hearts aflutter with its ambition. It not only required that demands for records show at least an indirect connection to a suspected foreign agent, but it also implemented an array of procedural changes designed to check the secret expansion of surveillance authorities. Critically, it imposed new limits on large-scale collection of international communications under section 702 of the FISA Amendments Act. But by the time the bill made its way to a floor vote, it had been so thoroughly compromised that many civil liberties groups and technology companies pulled their support.
Though some of the procedural and transparency reforms in the original bill survive in a severely diluted form, the current version jettisoned changes to other surveillance powers in order to focus squarely on barring indiscriminate collection of records. And where the original bill accomplished this by putting teeth into the requirement of “relevance to an investigation,” the current version leaves the FISC’s broad understanding of that phrase untouched, instead requiring the use of vaguely defined “specific identifiers.” With sufficient chutzpah, the government might simply hand the court a stack of telephone directories—or, more plausibly, use extremely broad “identifiers” such as domain names or ranges of Internet Protocol addresses to enable “targeted” collection of records about communications to or from entire Web sites or corporate entities, such as the few remaining major Internet Service Providers.
The bill even includes a novel authority designed to recreate the NSA’s telephony program in a more limited and judicially supervised form, with the compelled “technical assistance” of telephone carriers. This does at least mean that records will generally be left in private hands pending a specific request approved by the FISC. But it also opens the door to demands that carriers redesign their own systems to enable more sophisticated types of searches, utilizing a broader array of personal information than the NSA was previously obtaining. Where the FISC had stopped short of permitting the NSA to acquire the increasingly precise location information that can be derived from cell phone logs, for instance, it remains unclear whether this new authority could be used to compel the production of people “linked” to a suspect by physical proximity rather than direct contact.
However optimistic we choose to be about the likely effects of legislation like USA FREEDOM, it was not any one legally dubious program that Snowden cited as his motive for abandoning his life and career, a decision that landed him in exile in Russia. It was a total architecture of monitoring—divided for legal and clerical convenience into discrete codeworded programs, but functionally operating as an integrated apparatus of surveillance whose true capabilities are more than the sum of its subsystems, and which may be flexible enough to simply route around the disruption of any individual data source.
If we care about seriously assessing the warning Snowden purports to offer, we need to scrutinize the full range of capabilities we’ve learned about, not only as freestanding programs, but as nodes in a network of information gathering and analysis. We’re then in a position to ask whether the design and aims of the system as a whole are compatible with a free society.
Collect It All
Two elementary facts—one strategic, one technological—have driven the design of all the programs that the journalists with access to the Snowden documents have disclosed.
The first is that, as Sen. Lindsey Graham put it, “people are trying to come to our nation and kill us, and we need to find out what they’re up to before they do it. They have to be right only one time. We have to be right all the time.” Foreign intelligence has always been about anticipating the actions of adversaries—but with the attacks on the World Trade Center and the Pentagon, it became imperative to anticipate the identity of the adversary as well, with no margin of error considered small enough to tolerate. It was not enough to monitor known threats; the intelligence community was ordered to foresee unknown threats, from individuals and small groups no less than states, and these threats could materialize nearly anywhere.
The second basic fact is that modern communications networks obliterate many of the assumptions about the importance of geography that had long structured surveillance law. A “domestic” Internet communication between a user in Manhattan and a server in Palo Alto might, at midday in the United States, be routed through nocturnal Asia’s less congested pipes, or to a mirror in Ireland, while a “foreign” e-mail service operated from Egypt may be hosted in San Antonio. “What we really need to do is all the bad guys need to be on this section of the Internet,” former NSA director Keith Alexander likes to joke. “And they only operate over here. All good people operate over here. All bad guys over here.” It’s never been quite that easy—but General Alexander’s dream scenario used to be closer to the truth. State adversaries communicated primarily over dedicated circuits that could be intercepted wholesale without much worry about bumping into innocent Americans, whereas a communication entering the United States could generally be presumed to be with someone in the United States. The traditional division of intelligence powers by physical geography—particularized warrants on this side of the border, an interception free-for-all on the other—no longer tracks the reality of global information flows.
What NSA documents themselves describe as a “collect it all” approach to signals intelligence is an understandable reaction to these two facts. If a national security threat could come from anyone, it’s necessary to track everyone. If their communications can flow anywhere, you want to be able to collect everywhere. Thus “Alexander’s strategy is the same as Google’s,” as a former colleague told Foreign Policy’s Shane Harris: “I need to get all of the data. If he becomes the repository for all that data, he thinks the resources and authorities will follow.”
This broad perception of the intelligence mission has natural consequences for the security and privacy of all users. It’s no longer sufficient to focus on cracking the bespoke cryptographic systems used by foreign states, because now everyone relies on encryption, whether they know it or not. Thus the aggressive BULLRUN program which seeks to “insert vulnerabilities into commercial encryption systems, IT systems, networks and endpoint communications devices used by targets,” and “covertly influence” the design of widely used software to ensure in advance there’s no communication the NSA can’t read.
In a very literal sense, then, network infrastructures themselves have become the agency’s primary targets. When they’re abroad, that might mean “hunting sysadmins,” who hold the “keys to the kingdom” on foreign networks, or implanting back-doors on tens of thousands of routers designed to handle an entire network’s traffic. Domestically, it might mean using the relationships developed with major technology companies under the PRISM program, authorized under the FISA Amendments Act of 2008, not only to rapidly collect on specific foreign users, but to influence the design of online services to render them tappable—which is to say, insecure—by default.
Though the “collect it all” approach may have been motivated chiefly by the desire to identify and anticipate terrorists, wholesale collection capabilities will clearly not remain confined to that purpose once they have been created. An astonishing program known as SOMALGET, for instance, reportedly records nearly every cell phone call in… the Bahamas. The rationale for this mindboggling universal wiretap? Not to catch beachcombing jihadis, but to aid in the war on drugs.
One of the most disturbing manifestations of the imperative to control infrastructure is the system known as TURBINE, an industrial scale delivery system for targeted exploitation that now appears to live right on the Internet backbone itself. Scanning the vast stream of Internet traffic, a suspicious user’s Web browsing session can be automatically hijacked to install malware that allows the NSA to log every action a user takes on their device, or even activate cameras and microphones, transforming smartphones and laptops into remotely operated bugs on a massive scale.
The government’s ability to compel the assistance of domestic companies aids subsequent collection on foreign networks, whether under the general warrants provided for by the FISA Amendments Act or the still broader authority of Executive Order 12333. Likewise, the agency’s relatively free hand when collecting data abroad can enable de facto bulk collection at home under nominally targeted authorities designed for domestic use.
The government has insisted, for example, that the companies with which the NSA partners under the PRISM program don’t simply permit intelligence agencies to troll through their systems and servers hunting for malefactors: They provide the government with user data only in response to targeted requests that use a specific “hard selector” like an e-mail address or user ID. But those “hard” selectors may be derived from analysis of far vaster interception acquired from the very same companies without their knowledge, under programs like MUSCULAR, at overseas data links. Alternatively, the NSA might feed bulk-collected foreign Web traffic through a tool like XKeyscore, which allows the agency to filter communications using more abstract characteristics, or “soft selectors” such as language, location, or software configuration. What looks like targeted collection under a program like PRISM may simply be the final step in a process that, considered as a whole, employs specific target identifiers as an intermediary step in what is functionally algorithmic or pattern-based surveillance.
As has become all too clear from the programs that have received the greatest scrutiny to date, the scale and complexity of these interdependent mechanisms for collection and analysis make them opaque even to the spy agencies themselves—and so, a fortiori, to their assigned overseers. It appears that violations of the rules are discovered only when the NSA itself deigns to report them—sometimes years after the fact.
That’s especially disturbing given the vastly increased scale and speed with which surveillance capabilities could be turned to inappropriate ends. Technologies to enable the most intrusive forms of wholesale monitoring, once directed exclusively overseas, are now installed at the heart of our domestic communications infrastructure. And the massive troves of metadata collected under a wide variety of authorities—with varying degrees of oversight—give the intelligence community enormous flexibility in targeting those surveillance technologies.
The result is that we’ve achieved an unprecedented kind of economy of scale in surveillance. Just as a song can now be sent to a single user or to thousands at essentially the same cost, surveillance systems can be tasked with thousands of new targets—or categories of targets—almost instantaneously.
It is a stroke of historical good fortune that J. Edgar Hoover’s seemingly unbounded willingness to deploy his surveillance powers against domestic political dissidents at least faced technical constraints. Having bugged the offices of the Southern Christian Leadership Conference did not render it any cheaper or easier to bug the next hotel room Dr. Martin Luther King Jr. checked into: Time and resources had to be invested on each occasion.
Infrastructural surveillance is another matter. If a system is technically capable of rapidly collecting the Gmail inboxes of foreigners who frequent jihadist websites, then it is apt to be technically capable of doing the same for Americans who are active on Tea Party or Occupy forums. Tweaking a few lines of code will transform one system into the other.
If we care above all about avoiding that scenario, then perhaps the most significant change wrought by the Snowden disclosures to date has not been the policy proposals it has inspired—which, however vital, tend to focus on rules rather than architectures—but in the way it has transformed the incentives of the technology companies that maintain those architectures. Under cover of secrecy, the few within those companies who understood what was happening had little motive to draw attention to it, to do anything other than quietly comply—even if they had legally been permitted to do so. Exposure has inverted those incentives: Silicon Valley now stands to lose tens of billions in revenue unless it acts to regain the dwindling trust of its global user base.
Thus, America’s largest tech companies quickly banded together to demand greater transparency about the scope of surveillance—an essential safeguard against any abrupt and major expansion of collection. They also began making individual changes to their systems, encrypting the links between their data centers to forestall wholesale interception. As a Snowdenversary present, Google even announced that it will introduce strong, user-friendly end-to-end encryption for its wildly popular Gmail service—a design choice that would render them incapable of handing over the keys needed to read user messages. None of these changes are likely to stop the NSA if it is determined to spy on particular targets, but they do help raise the cost of the indiscriminate mass surveillance that poses the greatest threat to democracy.
We cannot, however, rely on Silicon Valley to avoid hard policy choices: The security they now enhance, they can ultimately be ordered to help undermine. Armed at last with a fuller understanding of the surveillance systems our intelligence agencies have been building, it falls to us to assess whether they are truly so necessary to our security that they justify their inherent risks. And the question we should ask about such systems is the question we should ask about, say, biological weapons: Not whether we are satisfied with how (as far as we know) they are currently being used, but whether the consequences of their misuse are so great that, if and when it occurs, it will be too late to do much about it.
Legal Safeguards, Not Disarmament
As I type these words, I have to take on faith that the Washington D.C. police, the FBI, the DEA, and the Secret Service are not raiding my house. I also have to take on faith that federal and state law enforcement authorities are not tapping my various phones. I have no way of knowing they are not doing these things. They certainly have the technical capability to do them. And there’s historical reason to be concerned. Indeed, there is enough history of government abuse in the search and seizure realm that the Founders specifically regulated the area in the Bill of Rights. Yet I sit here remarkably confident that these things are not happening while my back is turned—and so do an enormous number of other Americans.
The reason is that the technical capability for a surveillance event to take place does not alone amount to the reality—or likelihood—of that event’s taking place. And though the D.C. police certainly have the battering rams to take down my door, there are at least two other less-visible barriers to their entry. One is the substance of the law, which forbids their entry in the absence of probable cause of a crime. The other is the compliance and oversight mechanisms that ensure the police follow the law. If one has confidence in those two things, the technical capability of government to conduct an abuse actually does not pose an unmanageable threat.
For much the same reason as I am not rushing home to guard my house, I have a great deal of confidence that the National Security Agency is not spying on me. No doubt it has any number of capabilities to do so. No doubt those capabilities are awesome—in the wrong hands the tools of a police state. But there are laws and rules that protect me, and there are compliance mechanisms that ensure that the NSA follows those laws and rules. These systems are, to be sure, different from those that restrain the D.C. cops, but they are robust enough to reassure me.
They are not, however, robust enough to reassure Julian Sanchez. Sanchez does not especially question NSA’s compliance mechanisms—nor does he question the integrity of the people who operate the agency’s programs. At a recent Brookings event, in fact, he declared of former NSA Deputy Director John “Chris” Inglis—with whom he was appearing—that “If we absolutely must have a nigh-omniscient, planet-spanning electronic Panopticon, then Chris Inglis is the sort of person whose hands I want on the lever.”
Sanchez’s problem, by and large, is instead with the substance of the law, which in his view permits too much and poses inherent dangers. This is the right focus. The fundamental problem here is not the government’s capabilities. It is not the names of scary-sounding programs. It is not the occasionally bombastic language of PowerPoint presentations. And it is not ultimately a concern that the NSA will fail to follow the rules we give it.
At the base of the NSA controversies, rather, is a lack of social agreement about the proper contours of the rules. It is a lack of agreement about the nature and integrity of the judicial and legislative oversight mechanisms we have created and the degree to which they can reasonably function in secret. It is a lack of agreement about the degree to which the Internet requires patrolling both for threats to the platform itself and for evidence on the platform of threats exogenous to it. It is a lack of agreement about the relative weight we should give to each of several different kinds of security—both personal and collective security—against each of several different kinds of violation.
But like a lot of NSA’s critics in the current debate, Sanchez sometimes mingles fears about the rules with fears about the capabilities themselves. Indeed, he concludes with the arresting observation that “the question we should ask about … systems [like the NSA’s] is the question we should ask about, say, biological weapons: Not whether we are satisfied with how (as far as we know) they are currently being used, but whether the consequences of their misuse are so great that, if and when it occurs, it will be too late to do much about it.”
The analogy here is, of course, imprecise. Biological weapons are not being used against us every day—though cyber attacks, exploitations, and espionage are a daily reality. More fundamentally, we have not all opened our veins to a collective intravenous drip to which the whole world has access and can introduce pathogens the way we have all put the Internet on our desk, in our briefcases, in our pockets and on our bodies. We can plausibly talk about banning biological weapons, at least at the state level. We cannot plausibly talk about banning Internet spying or signals intelligence or even offensive cyber operations. We are in a land, rather, in which some significant amount of this activity is simply part of the landscape and is going to remain so. And the restraints are not going to be—as with biological weapons—flat categorical bans on the development of capabilities or even on their situational deployments and use. The restraints are going to lie, rather, in the development of rules and compliance mechanisms in which we feel comfortable reposing trust.
In other words, we need to separate more rigorously discussion of capabilities from discussion of rules. Yes, these capabilities are dangerous. But so are any number of other government capabilities—the ability to conduct air strikes, for example. Sanchez worries about programs that insert vulnerabilities into commercial encryption systems, implant back doors in routers, and track individuals. But despite his analogy to banning biological weapons, he is presumably not arguing against ever doing these things. If he is, he is arguing for a remarkable unilateral disarmament in an ongoing international cyber arms race. But if he’s not, the question necessarily pivots away from the inherent menace of the programs and precisely to the question Sanchez disclaims: “whether we are satisfied with how … they are currently being used,” and to what extent the rules we have in place do or do not prevent the uses with which we are uncomfortable.
There is no one right answer to this question. A democratic polity can come down in any number of places along a spectrum of aggressiveness and restraint, reflecting different allocations of different sorts of risk—the risk that the NSA is tapping you as you tap your keyboard, and the risk that the NSA is dark on subjects against whom we expect aggressive collection.
Fear vs. Facts: Exploring the Rules the NSA Operates Under
There is no doubt the Snowden disclosures have launched a debate that raises significant issues regarding the extent of U.S. government national security surveillance authorities and activities. And Julian Sanchez’s essay Snowden: Year One raises a number of these issues, including whether the surveillance is too broad, with too few limits and too little oversight. But an overarching theme of Sanchez’s essay is fear, and fear of what might be overshadows what actually is, or is even likely. Indeed, he suggests that by just “tweaking a few lines of code” the NSA’s significant capabilities could be misdirected from targeting valid counterterrorism suspects to Americans involved in the Tea Party or Occupy movements.
So really, what would it take to turn NSA’s capabilities inward, to the dark corner of monitoring political activity and dissent? It turns out, quite a lot. So much, in fact, that after a considered review of the checks and balances in place, it may turn out to be not worth fearing much at all.
First, a little history. Prior to 1978, NSA conducted surveillance activities for foreign intelligence purposes under Executive authority alone. In 1978, Congress passed the Foreign Intelligence Surveillance Act (FISA), which distinguished between surveillance that occurred here at home and that which occurred overseas. FISA requires that when electronic surveillance is conducted inside the United States, the government seek an order from the Foreign Intelligence Surveillance Court (FISC or the Court) based on probable cause. So, if the government wants to conduct surveillance targeting a foreign agent or foreign power here in the United States, it must obtain FISC approval to do so. By law, the Court may not issue an order targeting an American based solely on activities protected by the First Amendment to the Constitution. The Attorney General is required to report on the full range of activities that take place under FISA to four congressional committees: the intelligence and judiciary committees of both the House and the Senate. The law requires that the committees be “fully informed” twice each year.
There have been a number of amendments to FISA over the years. In 1994, the statute was amended to require that physical searches for national security purposes conducted inside the United States also be authorized by an order from the FISC. The USA-PATRIOT Act of 2001 amended several provisions of FISA, one of which enabled better sharing of information between terrorism and criminal investigators. And in 2008, FISA was amended to provide a statutory framework for certain approvals by the Attorney General, Director of National Intelligence, and FISC regarding the targeting of non-U.S. persons reasonably believed to be outside the United States for foreign intelligence purposes, when the cooperation of a U.S. communications service provider is needed.
So how do we know that this system of approvals is followed? Is the oversight over NSA’s activities meaningful, or “decorative,” as Sanchez suggests?
It is worth exploring. Here is how oversight of Section 702 surveillance works, as one example, since it has been the subject of a significant part of the debate of the past year. Section 702 was added to FISA by the FISA Amendments Act of 2008. It authorizes the NSA to acquire the communications, for foreign intelligence purposes, of non-U.S. persons reasonably believed to be outside the United States. These are persons with no Constitutional protections, and yet, because the acquisition requires the assistance of a U.S. electronic communications provider, there is an extensive approval and oversight process. There is a statutory framework. Specifically, the Attorney General and Director of National Intelligence jointly approve certifications. According to declassified documents, the certifications are topical, meaning that, as the statute is being implemented, the certifications are not so specific that they identify individual targets, but neither are they so broad that they cover anything and everything that might be foreign intelligence information. The certifications are filed with the FISC, along with targeting and minimization procedures. Targeting procedures are the rules by which NSA selects valid foreign intelligence targets for collection. Minimization procedures are the rules by which NSA handles information concerning U.S. persons. The FISC has to approve these procedures. If it does not approve them, the government has to fix them. The Court reviews these procedures and processes annually. The Court can request a hearing with government witnesses (like senior intelligence officials, even the NSA Director, if the judge wanted or needed to hear from him personally) or additional information in order to aid in its decision-making process. Information about the 702 certifications is reported to the Congressional intelligence committees.
Once the certifications are in effect, attorneys from the Department of Justice’s (DOJ) National Security Division and attorneys and civil liberties officials from the Office of the Director of National Intelligence (ODNI) review the NSA’s targeting decisions and compliance with the rules. They conduct reviews at least every 90 days. During that 90-day period, oversight personnel are in contact with NSA operational and compliance personnel. Compliance incidents can be discovered in at least two ways: the NSA can self-report them, which it does; or the DOJ and ODNI oversight personnel may discover them on their own. Sometimes the NSA does not report a compliance incident in the required timeframe, in which case the time lag in reporting may itself become an additional compliance incident. The DOJ and ODNI compliance teams write up semi-annual reports describing the results of their reviews. The reports are approved by the Attorney General and Director of National Intelligence and provided to the FISC and to Congress. According to the one report that has been declassified so far, in August 2013, for a six-month period in 2012, the NSA’s rate of error in compliance under Section 702 collection was 0.49% – less than half of one percent. If we subtract the compliance incidents that were actually delays in reporting, the noncompliance rate falls to between 0.15% and 0.25% – less than one quarter of one percent. Hardly an agency run amok.
A few caveats are in order. First, not all compliance matters are equal in importance. While the declassified joint DOJ/ODNI report describes some of the types of compliance matters that may occur under Section 702 acquisition, technical errors in implementing a collection may be of greater significance than other types of errors that occur due to human mistakes. Second, compliance is significantly affected by the people conducting it, both in the quantity of personnel assigned to the task – sometimes an issue in the Executive Branch – and in the quality of their work. It is worth noting that Sanchez’s piece highlighted the views of Congressman Jim Sensenbrenner, a former Chairman of the Judiciary Committee. Congressman Sensenbrenner has since taken on the charge of rolling back Section 215 of the USA-PATRIOT Act, pursuant to which the government has obtained court orders compelling the production of bulk telephone metadata for counterterrorism purposes. In his capacity on the Judiciary Committee, Chairman Sensenbrenner had access to extensive information regarding the implementation of FISA authorities. Yet in remarks he made at Georgetown Law in November 2013, he stated that he “limited” his “participation in secret briefings.” In other words, he opted out of conducting meaningful oversight. Now he claims that he did not know how the law was being applied. No wonder.
Generally, however, Congressional committees charged with oversight of the Intelligence Community do their job. The Intelligence Committees of Congress have professional staff, often with deep experience in national security matters. The Committees conduct substantive hearings, although, due to the sensitive and operational nature of the topics discussed, often in classified session. Congressional staff also receive briefings. During the debate surrounding the passage of the FISA Amendments Act of 2008, many members of Congress and their staffs visited the NSA and received dozens of briefings regarding its details and subsequent implementation.
Decorative? Returning to the question implicitly posed by Sanchez’s argument: what would it take to turn this system inside out? Most likely, it would take either a conspiracy of the highest order, or the complete incompetence of everyone involved in the process – from operators to leadership inside the Intelligence Community, from lawyers to senior officials at the Justice Department, from legal advisors to judges of the FISC, from staff to members of Congress.
Here’s what happens in the real world: people make mistakes; technological implementation goes awry; bureaucracy gets in the way of getting down to the bottom line. The adequacy and rigor of Congressional oversight waxes and wanes based, at times, on the quality of the leadership of the various committees at any given time. Government employees also sometimes do the wrong thing – as in the twelve cases over ten years that the NSA has explained to Congress – and then they are held accountable. Oversight and compliance systems sometimes fail, too, as with the delay in recognizing the problems in the technical implementation of the phone metadata program that was subsequently brought to the Court’s attention. These are all valid reasons to work on improving auditing, compliance, oversight, and accountability mechanisms. They are not valid reasons for adopting reforms that would dramatically scale back important national security capabilities that keep the nation safe.
The Drama Ahead: Google versus America
Five months into the deluge of leaked documents from Edward Snowden, the Washington Post reported that the NSA was tapping into Google and Yahoo’s fiber overseas. That disclosure – perhaps more than any save the revelation that NSA works to weaken encryption standards to make its job easier – may be the signature Snowden story going forward. How will an increasingly adversarial relationship between Google and the U.S. government play out? How will America’s place in the world change given that adversarial relationship? Will those claiming to oversee the NSA start to address the legal issues raised by its data grabs overseas?
As Julian Sanchez laid out, the main disclosures affecting Americans domestically – the phone dragnet, authorized under Section 215 of the USA-PATRIOT Act, and the content collection, including warrantless back door searches of content from U.S. persons, under Section 702 of the FISA Amendments Act – are only now leading to badly diluted reform of just the phone dragnet. It is even possible that this reform will lead to more Americans’ phone records being scrutinized, in part because more phone records will be included in the chaining, and in part because the bill permits chaining on “connections” in addition to actual calls.
And aside from the President’s weak and unenforceable promises to limit spying on foreigners, there has been no consideration of protections for those overseas.
This leaves one central drama to play out, in which Google and other tech companies (and to a much lesser extent, a few telecoms) begin to push back against the NSA’s overreach. It’s not just that U.S. cloud (and other tech) companies stand to lose billions as their clients choose to store data locally rather than expose it easily to the NSA. It’s also that the NSA violated several aspects of the deal the Executive Branch made six years ago with the passage of the FISA Amendments Act (FAA), Section 702 of which authorizes the PRISM program and domestic upstream collection.
Congress passed the FISA Amendments Act several years after the New York Times’ exposure of the illegal wiretap program, ostensibly to address a technical problem used to justify that program. Technology had changed since the analog and radio world in place when FISA was first passed in 1978. Now, much of the world’s communications – including those of extremists who were targeting America – were sitting in Google’s and Yahoo’s and Microsoft’s servers within the United States. So Congress authorized the NSA to conduct collection inside the United States on targets located outside of the country (which swept up those who communicated with those targets, wherever they were located). In exchange, the government and its supporters promised, it would extend protections to Americans who were overseas.
Yahoo and Google played by the rules, as the PRISM slide released last June revealed. The data of both Yahoo and Google have been readily available for any of the broad uses permitted by the law since January 2009. Yet, despite the fact that the NSA has a legal way to obtain this Internet data inside the United States using PRISM, the government also broke in to steal data from Yahoo’s and Google’s fiber overseas.
That’s an important implication of Sanchez’s point that “modern communications networks obliterate many of the assumptions about the importance of geography.” American tech companies now store data overseas, as well as in the United States. Americans’ data is mixed in with foreigners’ data overseas. Many of the more stunning programs described by Snowden’s documents – the collection of 5 billion records a day showing cell location, NSA partner GCHQ’s collection of millions of people’s intimate webcam images, and, of course, the theft of data from Google’s and Yahoo’s servers – may suck up Americans’ records too.
Plus there’s evidence the NSA is accessing U.S. person data overseas. The agency permits specially trained analysts to conduct Internet metadata contact chaining that includes the records of Americans from data collected overseas. And in a Senate Intelligence Committee hearing earlier this year, Colorado Senator Mark Udall asked hypothetically what would happen with “a vast trove of U.S. person information” collected overseas; the answer was that such data would not get FISA protection (California Senator Dianne Feinstein, the Intelligence Committee Chair, asked an even more oblique question on the topic).
Udall and Feinstein’s questions show that a lot of this spying does not undergo the oversight Benjamin Wittes and Carrie Cordero point to. Last year, Feinstein admitted her committee gets less reporting on such spying. Even for programs overseen by FISA, the NSA has consistently refused to provide even its oversight committees and the FISA Court real numbers on how many Americans get sucked into various NSA dragnets.
Moreover, the government’s threat to tech companies is not limited to operations overseas. When a group of tech companies withdrew their support for the USA Freedom Act, they argued the bill could permit the resumption of bulk collection of Internet users’ data domestically. In the past, that has always meant telecoms copying Internet metadata at telecom switches – another outside entity compromising tech companies’ services. As with the data stolen overseas, Internet metadata is available to the government legally under PRISM.
In response to the news that the government at times bypasses the legal means it has to access Google’s clients’ data, the tech giant and others have found new ways to protect their customers. Those protections include the new encryption Sanchez described – both of the fiber compromised overseas and of emails sent using Google – as well as the right to publish how much data the government collects. Even in the criminal context, tech companies (including telecoms Verizon and AT&T) are challenging the U.S. government’s efforts to use tech companies’ presence in the United States to get easy access to customers’ data overseas.
The conflict between Google and its home country embodies another trend that has accelerated since the start of the Snowden leaks. As the President of the Computer & Communications Industry Association, Edward Black, testified before the Senate last year, the disclosure of NSA overreach did not just damage some of America’s most successful companies, it also undermined the key role the Internet plays in America’s soft power projection around the world: as the leader in Internet governance, and as the forum for open speech and exchange once associated so positively with the United States.
The U.S. response to Snowden’s leaks has, to a significant degree, been to double down on hard power, on the imperative to “collect it all” and the insistence that the best cyberdefense is an aggressive cyberoffense. While President Obama paid lip service to stopping short of spying “because we can,” the Executive Branch has refused to do anything – especially legislatively – that would impose real controls on the surveillance system that undergirds raw power.
And that will likely bring additional costs, not just to America’s economic position in the world, but in the need to invest in programs to maintain that raw power advantage – particularly given the paltry results the NSA has to show for its domestic phone dragnet: the single Somali taxi driver donating to al-Shabaab that Sanchez described. It’s not clear that the additional costs of doubling down on hard power bring the United States any greater security.
Google and the rest of the tech industry probably will continue to be – but should not be – the leading edge of the response to the NSA’s spying. As Sanchez noted, Google’s strategy is largely the same as the NSA’s: to collect vast amounts of data on its users; Google only intends to keep its customers’ data private from others, not from its own use. Moreover, Google will continue to keep its data in a relatively centralized location, concentrating the benefits and risks of big data.
But Google and other tech companies will lead the response, both in potentially providing enough heft to make legislative changes and in the only response that can ensure greater privacy: raising the costs of the NSA’s spying by encrypting data, whether via a company like Google or individually.
Correcting Some Inaccuracies about NSA Surveillance
In her response essay, Marcy Wheeler keys on an important, and serious, consequence of the unauthorized disclosures and their aftermath: the developing adversarial relationship between the government and the private sector. Now past the first year of the disclosures, it is clear that companies on the receiving end of orders or directives issued under the Foreign Intelligence Surveillance Act (FISA) may be more likely to challenge them in the future. At least one company has already increased its challenges to requests for data from the government issued under National Security Letters or the Electronic Communications Privacy Act (ECPA). But Wheeler is wrong to suggest that – whatever it is the government may have done to effect the collection of foreign intelligence information overseas (which has happened for decades and continues to occur under Executive Order 12333) – somehow “violated…the deal” that was reached through the FISA Amendments Act of 2008 (FAA). Instead, the information that has been released and declassified over the last year has demonstrated that the FAA has been implemented consistently with how it was described in the public record of legislative text and Congressional hearings that took place up to its passage in 2008.
It is also not accurate to describe the steps the Obama Administration has taken as “refus[ing] to do anything” to limit NSA surveillance. Indeed, the President has already implemented significant reforms to the telephone metadata program, including requiring advance approval from the Foreign Intelligence Surveillance Court (FISC) before querying the data, and limiting the extent of analysis of that data. His Presidential Policy Directive-28 (PPD-28), issued in January 2014, limits the categories of information the NSA may collect in bulk. He has adopted the Surveillance Review Group’s principle of “risk management,” to more formally involve foreign policy implications, for example, in making collection decisions. And he has directed that procedures and rules be changed in order to add privacy protections for foreigners in how the NSA handles information it has acquired. Although it is too soon to assess how the details of some of these and other changes will be implemented, their significance should not be underestimated.
The Single Branch Theory of Oversight
Carrie Cordero made a fairly astonishing claim in her response to my focus on the NSA’s theft from Google and Yahoo fiber overseas. She claims that this disclosure and other documents showing how the NSA double dips from PRISM providers – collecting content domestically under Section 702 and collecting it internationally under Executive Order 12333 – show that “the [FISA Amendments Act] has been implemented consistently with how it was described in the public record of legislative text and Congressional hearings that took place up to its passage in 2008.” Cordero would have you believe that the Administration made it clear it intended to continue to steal data from American providers even after having been given Congressionally authorized access to it.
She is right on one count, though she doesn’t spell out how in her reply. From the very first debates over amendments to FISA in 2007, members of Congress – especially Senator Dianne Feinstein and then-Senator Russ Feingold – raised concerns that the Executive Branch would simply bypass the law if it wanted to. And while then-Director of National Intelligence (and now Booz Allen Hamilton Vice Chairman) Mike McConnell assured the Senators that “the effort to modernize would prevent an operational necessity to do it a different way” (seemingly providing assurances that the Intelligence Community would not bypass the FISA process, as it since has), McConnell and others, including Keith Alexander, kept repeating that “Article II is Article II.”
That is, throughout the FISA amendment process, the intelligence community was quite honest that it did not believe itself to be bound by the laws passed by Congress; they explicitly reserved the authority to simply go overseas to bypass limits and oversight imposed by Congress.
That’s why the 800 words Cordero used to describe the oversight exercised by the FISA Court and Congress as part of the FISA process really describe something that is – as Julian Sanchez argued – decorative. So long as the intelligence community does bypass those authorities to carry out the same collection overseas (they definitely do that with content, and appear to do that with metadata), the oversight of other branches is a mere indulgence from the Executive, made all the weaker because both branches are aware that the Executive will bypass their oversight if the oversight is deemed overly strict.
The reaction to the Privacy and Civil Liberties Oversight Board’s report on the phone dragnet – which listed a number of ways the program did not comply with the law – is instructive as to what happens when an entity tries to exercise real oversight. Before the report, Congress discussed giving PCLOB subpoena power, a role in the FISC advocate process, and expanded review of NSA’s activities. All those plans have been forgotten in the wake of PCLOB actually daring to review the letter of the law, as it had previously informed Congress it would do.
NSA arbitraging of the jurisdiction of FISA may or may not be legal – perhaps one day the Supreme Court will decide. But, legal or not, it undermines the effectiveness of the three-branch oversight that itself covers just a small part of the NSA’s spying.
A Final Thought on Oversight
A final thought on the sufficiency of oversight and accountability structures, given that they have been a central theme of this exchange: before adding additional layers of oversight to different collection frameworks and processes (for example, to foreign intelligence surveillance conducted under Executive Order 12333), there is a need for more dialogue regarding what those both inside and outside of government view as effective oversight. The Executive Branch could certainly take steps to add more oversight to collection programs with the goal of bolstering public confidence; but if there is no confidence in the manner in which existing oversight takes place, then the added people, time, money, and effort will not be worthwhile. And the added bureaucracy will come at some price to operational staffing and efficiency, particularly in the current federal budget environment. Similarly, if outside observers do not have confidence in the oversight that the Intelligence Committees, for example, already conduct, then we need to consider whether it is worthwhile to expand those committees’ current practices of oversight to additional Executive Branch activities. A comprehensive review of oversight and accountability mechanisms across Intelligence Community operations – as they relate to purpose, effectiveness, civil liberties, and privacy – would take longer to get right and probably would not satisfy immediate critiques of government surveillance activities. But it would be a more deliberate and, in the long term, more effective way forward in developing recommendations for improved oversight, and thereby rebuilding confidence in the Intelligence Community’s activities on behalf of the nation’s security.
Check Your Privilege
Ben Wittes begins his essay by observing that, just as he takes on faith that government agencies are not currently raiding his house, he is “remarkably confident” that the NSA is not illicitly wiretapping him. As someone who occasionally corresponds with Guardian editors and human rights activists outside the United States, I am frankly not nearly so confident that none of my communications have been “incidentally” collected, but how either of us is personally affected is rather beside the point. I’m also, after all, confident that, as a person of pallor, I could have strolled through Brooklyn at any time over the past few years without being subject to a humiliating “stop and frisk.” That is, in a sense, the problem.
I dwell on what Ben no doubt intended mostly as a rhetorical flourish because concerns about surveillance so frequently evoke blasé responses to the effect of: “Well, I’m sure they’re not interested in me, so I don’t really care; I have nothing to hide.” Privileged folks like Ben and me may well be right to think the laws, rules, and institutional priorities governing the intelligence community will protect us—a fortiori if we happen to be vocal advocates of that community—but the test of a just system is not how it treats the privileged. That doesn’t mean a privileged perspective is necessarily wrong, but it does mean we ought to be cautious about any inference from “this is not a problem I worry about” to “this is not a problem.”
In a democracy, of course, the effects of surveillance are not restricted to its direct targets. Spying, like censorship, affects all of us to the extent it shapes who holds power and what ideas hold sway. Had the FBI succeeded in “neutralizing” Martin Luther King Jr. earlier in his career, it would hardly have been a matter of concern solely for King and his family—that was, after all, the whole point.
Instead of asking a couple of wonks comfortably ensconced in D.C. institutions, let’s ask a peaceful Pakistani-American who protests our policy of targeted killings, perhaps in collaboration with activists abroad; we might encounter far less remarkable confidence. Or, if that seems like too much effort, we can just look to the survey of writers conducted by the PEN American Center, which found significant percentages of respondents self-censoring or altering their use of the Internet and social media in the wake of revelations about the scope of government surveillance. Or to the sworn declarations of 22 civil society groups in a lawsuit challenging bulk phone records collection, attesting to a conspicuous decline in telephonic contacts and to members expressing increased anxiety about their association with controversial or unpopular organizations.
What’s important to keep in mind here is that even if Ben were well justified in his belief that government is unlikely to ever again misuse its powers against any peaceful citizens, the panoptic chilling effect these systems exert on many who lack his (let us suppose) superior understanding would still inflict a real cost in the currency of democratic engagement. Some people, no doubt, will be unreasonably paranoid regardless of the facts about the government’s powers and capabilities, while others now anxious may be reassured once articulate folks like Ben and Carrie explain the rules already in place. At least some, however, may be reassured only if the law is tightened in appropriately reassuring ways. If so, and if members of marginalized groups have sound historical reason to fear mistreatment by government, then folks like Ben and me should be cautious of generalizing too quickly from the fact that we don’t personally share those fears.
Rules and Architectures
Ben Wittes would like to distinguish two kinds of concerns about NSA surveillance — one he regards as a topic for legitimate debate, one he regards as an unhelpful distraction. While it is all well and good to debate the appropriate contours of the formal rules that should govern NSA spying, Wittes writes, we must not “mingle” arguments about whether these rules may permit too much or protect too weakly with “fears about the capabilities themselves.”
While I agree that this is a useful distinction to make for the sake of clarity, I prefer to talk about “architectures” rather than “capabilities,” because the latter is vaguer and obscures important differences between different mechanisms for achieving the same broad objectives.
I also reject the idea that we should, in effect, permit any architecture of surveillance, however potent, that the intelligence community might find useful to construct, provided only the formal rules governing their use are adequate.
If we were confident that an adequate set of rules would both remain in effect and be followed scrupulously, arbitrarily far into the future, perhaps we could safely follow Ben’s advice. But rules are not always followed scrupulously, and indeed, rules can change far more quickly in times of panic than architectures can. We had a fairly stringent set of rules embedded in statute on September 11, 2001, and soon thereafter loosened them substantially — rather too substantially for my taste — in order to further empower our intelligence community to prevent further terrorist attacks. And yet, as we learned only many years later, President Bush determined that even the loosened rules fettered him too tightly, secretly directing the NSA to launch an expansive program of warrantless surveillance, relying on a legal theory to which only a tiny handful of government officials were made privy — and one ultimately discarded as indefensible.
To dismiss as “speculative” the fear that something similar might recur in a future crisis strikes me as requiring something bordering on a Humean skepticism. It’s like regarding as mere conjecture the belief that, having risen this morning and the morning before, the sun is likely to rise tomorrow as well. We should certainly debate what rules make sense — but if it is reasonably foreseeable that in times of crisis, rules will sooner or later be either violated or improvidently diluted by legislators disinclined to give much weight to the rights of unpopular minorities, we should act now to ensure the tools governments wield are not limited only by those formal rules.
NSA’s defenders sometimes deflect concerns of this sort by observing that, after all, any authority can be abused. Which is true. But some authorities are more susceptible of abuse than others, and some abuses are more catastrophic in their effects. It is easier to misuse a secret authority than a publicly scrutinized one, and it is easier to evade oversight in a system of surveillance operating on a massive scale than in a more narrowly targeted one.
A helpful concrete example of what I mean is provided courtesy of the transparency report recently issued by the telephone carrier Vodafone. In the United States, as in many other countries, the normal protocol for a telephone wiretap works roughly as follows: A law enforcement or intelligence official applies to a judge for a warrant authorizing monitoring of a particular line, and if approved the order is served on the appropriate communications provider, whose lawyers then review and whose technicians then implement the wiretap, arranging for communications over the targeted facility to be transmitted to the appropriate government agency.
In six countries, however, Vodafone revealed that governments demand “direct access” to the company’s network (and presumably those of other carriers), enabling governments to implement taps without involving the carrier. In some ways this approach has obvious advantages in agility and secrecy, and may even enable the rapid addition of “capabilities” (taps targeting a suspect’s voiceprint rather than a phone number, say, or temporary recording of all calls for later retrieval) that would be difficult to implement quickly through the carrier’s own systems.
Equally obviously, however, an architecture of direct access eliminates important structural checks on the use of wiretaps. It makes it easy for the government to radically expand the scale of surveillance without raising any red flags, and it eliminates external chokepoints: The implementation of the tap is carried out by employees of the executive branch, and any records that might subsequently enable investigators to assess the legality of wiretaps remain within the exclusive control of the executive branch. The choice of architecture cannot truly be separated from the choice of optimal rules, because one of these architectures makes it far easier to violate the rules — or to secretly “interpret” them into irrelevance — undetected.
As we now know, American companies were initially willing to obey presidential directives that required both acquisition of records and electronic surveillance outside the procedures established by FISA, but they did eventually begin to insist on court orders. Under an architecture that did not require their cooperation, it seems extremely likely that surveillance programs would have continued for far longer without FISC involvement.
Ben is obviously correct that we are not going to forswear signals intelligence tout court — a policy so insane that I haven’t seen anyone seriously propose it. But that scarcely resolves the question of whether there are “capabilities” — or, as I prefer, “architectures” — that we would be better off not constructing in service of the SIGINT mission, even subject to the best imaginable rules.
NSA appears, for example, to conduct “upstream” collection under §702 via some form of “direct access” to the Internet backbone. If the current architecture bypasses the backbone provider as an intermediary or chokepoint for either individual tasking decisions or even significant programmatic changes to the operation of the surveillance equipment, we can reasonably debate whether this is wise — and whether restoring the chokepoint function in at least the latter case establishes an architecture with less danger of misuse, even if it entails some attenuation of SIGINT capabilities.
Of course the United States is not going to “disarm” in the face of state adversaries that all conduct substantial intelligence operations, but that is hardly the same as saying we must ensure, at any cost and by any means, that every e-mail sent on the planet is at least technically accessible to the NSA. If Ben’s analogy entails that we must match China in every particular — having no more qualms than they do about installing systems of surveillance on domestic networks, or degrading the security of widely used encryption systems — then I question whether that is an “arms race” worth winning.