The Internet of Us
Legally speaking, the case was complicated by a lack of clarity over whether collecting DNA constitutes a search. That is partly because DNA collection does not require a cheek swab; it can be collected from the skin or hair, for example. Thus, as the majority opinion noted, it is unclear why fingerprints wouldn’t also constitute an illegal search if DNA samples do.
What is clear is that photos, fingerprints and DNA samples allow the police to identify and reidentify you—in ways that are increasingly immune to deception or alteration. You can change your name and your appearance, and even fingerprints are not as reliably unique as commonly assumed. But you can’t as easily mess with the DNA—it is, in a real sense, part of who you are.
Of course, being able to identify criminal suspects is generally a very good thing—and DNA has proven to be an effective tool not only in this regard, but also for exonerating innocent people of crimes for which they’ve been falsely accused (and sometimes convicted). At the same time, human beings have always been somewhat suspicious about new means of identification. That includes basic methods that we often overlook. Consider names. A rose may smell as sweet under any other name, but the fact that it has a name at all gives us an ability to reidentify it quickly and communicate that identification to one another. That’s why, in some cultures, knowing someone’s true name can give you (magical) power over them. You know how to identify “who they really are.”
For similar reasons, images have often been said—as Warren and Brandeis were well aware—to have power. Even now, a photograph remains one of our best ways of identifying anything: we record in detail what our memories can’t. Is it any wonder that individuals in some cultures were hesitant to allow their pictures to be taken? The idea that a camera could steal your spirit can be seen as a way of representing a real truth: a picture identifies you, and, like others’ knowledge of your name, it is not necessarily something within your control.
This idea of control is closely connected to the idea of information privacy. The broad notion of privacy is difficult, if not impossible, to define in a straightforward way, and the narrower notion of information privacy is not much better. But even without a precise definition, it is clear that there are several marks or symptoms associated with information privacy. One of those concerns protection: we think of information as private to the extent that it is protected from interference or intrusion.4 Another concerns control: information is private to the extent to which we control access to it.
Why do we value protecting and controlling our information? A cynic might say: we value it only when we have something to hide. But of course, even if this is true, it doesn’t really answer the question. That’s because it depends on what you are hiding and whom you are hiding it from. Hiding a criminal past is one thing; hiding Jews in your basement from the Nazis is another.
In reality, there are much more basic reasons information privacy matters to us.
The Pool of Information
In the summer of 2014, following the revelations of Edward Snowden, the Washington Post revealed what many had long suspected: that the NSA, in targeting foreign nationals, is collecting and storing extremely large amounts of information on American citizens.5 This information is not restricted to metadata of the sort collected by the NSA’s infamous phone data collection program. It is content—photos, Web chats, emails and the like.
U.S. law prevents the targeting of U.S. citizens without a warrant (even if it is just a warrant from the secret court established for this purpose by the Foreign Intelligence Surveillance Act (FISA) of 1978). But citizens’ digital data is often vacuumed up “incidentally” when the NSA is collecting the posts, emails and so forth of legally designated foreign targets. Nothing currently prevents the NSA from engaging in this “incidental collection.” And the incidentally collected data can be stored indefinitely. Moreover, no law prevents the agency—and other U.S. intelligence and law enforcement agencies—from accessing the incidentally collected content without a warrant, in perpetuity.
The storage of incidentally collected data seems clearly wrong. Yet the reasons that make it so also help to explain why as a nation we sometimes sympathize with the sentiment voiced by Representative Mike Rogers in 2013: that your privacy can’t be violated if you don’t know about it—a non sequitur of such numbing grossness that only Peeping Toms could have greeted it with anything other than laughter.
But before getting back to the NSA, let’s do another thought experiment. Imagine for a moment that I could perform something like the Vulcan mind meld with you (okay, okay, I’m dating myself). I telepathically read all your conscious and unconscious thoughts and feelings. You don’t share your thoughts with me; I take them. I’m sure you’d agree that such an act of mental invasion would be wrong and harmful, but let’s think about why.
The first and most obvious reason is that it has potentially dangerous consequences for you. And, Mike Rogers aside, that danger exists whether or not you know about my violation. Suppose you don’t know. The more I know about you and the less you know about my knowledge, the easier it could be for me to take advantage of your ignorance, and the easier you will be to control or exploit.
Intentions matter. The fact that I can read your thoughts doesn’t necessarily mean I will exploit you. I may be purely motivated by science: I might jot down your thoughts and do nothing to profit from them. Or I may be like Professor X of the X-Men comics, motivated only by truth, justice and the American way. More simply, the fact that I know what you like may help me guide you toward experiences you haven’t had yet but would enjoy. Think of Amazon: part of their business model is predicated on acquiring information about their customers’ preferences—information often obtained without the customer really knowing it. They use this information to predict what else the customer might like—and to ensure that the customer is given every opportunity to buy it. There is no doubt that many of us feel uncomfortable about the amount of information now collected by corporations for the purpose of selling us stuff we need (or making us want to buy stuff others would like us to need). But it doesn’t necessarily involve nefarious motives. So intentions matter, and even in the case of mind-reading, it is not absolutely certain that bad things will happen, even if you don’t know about it.
But what if you do know that I am reading your thoughts? Well, you’ll be wary, naturally. So wary that you will likely try to censor your thoughts and even your activities—perhaps by humming some Mozart in your mind to disguise your thoughts as best you can. And the reason you’d do so would be obvious—no matter how good I may seem to you now, you will want to minimize your exposure to exploitation and manipulation. This is not surprising. As Bentham knew when he designed his panopticon, observation affects behavior. But, of course, that too isn’t necessarily bad. It is why security cameras are not always hidden. If you know you are being watched, you are less likely to act out. And that can be an instrument for good. Or not.
So, one reason privacy is important is that invasions of it can lead to exploitation, manipulation and loss of liberty. These, in turn, obviously can negatively affect a person’s autonomy. But the possibility of bad effects is one thing, the actuality another. This is precisely the point that defenders of the NSA programs, for example, have been at pains to make. For all that’s been said so far, there might be negative consequences from, e.g., the NSA’s policy of massive incidental collection and other data-sweep programs, if the agency or its architects were assumed to have bad or corrupt intentions. But why, some say, should we think that?
The fact is, however, that we don’t have to know anything about the intentions of the program’s architects in order to be worried. The NSA programs are dangerous to democracy even if we assume that their architects were motivated by the best of intentions—as no doubt many of them were. Roads to unpleasant places are frequently paved with the sweetest of intentions.
The NSA database could be described as a pool of information. This is an apt metaphor. In law, swimming pools are called attractive nuisances. They attract children and, as a result, if you own a pool, even if you are a watchful, responsible parent yourself, you still have to put up a fence. Similarly, even if we can trust that the architects of the NSA’s various programs had no intention of abusing the information they are collecting about American citizens, the pool of information could easily prove irresistible. And the bigger the pool, the more irresistible it is likely to become. This is not just common sense; it explains why the NSA’s repeated assertions that they aren’t actually looking at the content of emails, or targeting Americans, should have been greeted with skepticism. The pool of data is a pool of knowledge. Knowledge is power; and power corrupts. It is difficult to avoid drawing the inference that absolute knowledge might corrupt absolutely.
That, not surprisingly, is the view of folks like Edward Snowden. But a growing number of stories strongly suggest that fear of abuse is more than a mere theoretical worry. These examples are not limited to the widely reported cases of NSA employees using their access to spy on sexual partners,6 nor to similar cases in the UK where analysts collected sexually explicit photos of citizens without cause. More troubling, if less titillating, is the fact that the secret FISA court itself has complained that the NSA misrepresented its compliance with the court’s previous rulings that various NSA techniques were unconstitutional.7 In other words, the FISA court is being ignored by the very agency it is assigned to oversee and monitor. It is hard not to form the impression of an agency that feels it knows better than the judiciary or the Congress. And that, surely, should be worrying.
But the most disturbing fact is the massive continued storage of incidentally collected content itself (again: emails, photos, chat conversations and so on)—information that, as reported in the Post, is routinely searched by the CIA and the FBI—all without a warrant, even from the ineffective FISA court, and without any real oversight. Such searches needn’t even be reported, and there is, presently, no legal oversight to prevent queries that are unrelated to national security, or even motivated by political ends. And relying on the agencies themselves to report abuses is like relying on the tobacco companies to tell us whether smoking is harmful.
While that’s one of the major problems with the NSA collecting massive amounts of incidental information about Americans, it also helps explain why people don’t seem too concerned. Putting up fences is arduous, time-consuming and expensive. And it does cut down on easy access to the water. So, if you want to get in that pool with the best intentions—you want to find the terrorists—it is natural to think that the fence only gets in the way of what matters. If you trust that is what the owners of the pool are after, then worries about possible long-term negative consequences will seem abstract and, well, philosophical. After all, it is pretty clear human beings find it difficult to think about long-term consequences—that’s true whether we are talking about swimming pools or global warming. If nothing bad has happened already that we know about as a result of privacy invasions, then what’s the problem?
Unfortunately, if the pool of information about American citizens is systematically abused we aren’t going to know about it—at least, not easily. When it comes to global warming, at least we’ll get to realize the consequences of our current policies (or lack of them) one way or another. But the abuse of knowledge isn’t going to be so obvious, and the abusers will have every reason to hide behind good intentions. That was one of the points made by the President’s own review panel’s report in 2013.8 That panel—made up not only of writers and scholars, including Cass Sunstein, but also of former leaders of the CIA—suggested, in fact, more than simply fencing the pool (passing legislation to make it more difficult to access); they suggested the pool be drained. That is, they urged that all incidentally collected information (again, mostly on Americans, and far outweighing the amount being collected on warranted targets) simply be removed from the NSA’s databases. This has not yet been done.9
Privacy and the Concept of a Person
The potential dangers of abusing big data are one reason the storage of incidentally collected information is wrong. But there is another: the more insidious harm is not “instrumental” but “in principle.”
Just this point was made over half a century ago in one of the most cited discussions of the right of privacy. In 1965, Edward J. Bloustein argued in a paper that what is wrong with such intrusions is
not the intentional infliction of mental distress but rather a blow to human dignity, an assault on human personality. Eavesdropping and wiretapping, entry into another’s home, may be the occasion and cause of distress and embarrassment but it is not what makes these acts of intrusion wrongful. They are wrongful because they are demeaning of individuality and they are such whether or not they cause emotional trauma.10
Following what he took as the main point of Warren and Brandeis, Bloustein grounded the right of informational privacy on the intrinsic value of human individuality. The connection was what he called “personal freedom”:
The fundamental fact is that our Western culture defines individuality as including the right to be free from certain types of intrusions. This measure of personal isolation and personal control over the conditions of its abandonment is of the very essence of personal freedom and dignity, is part of what our culture means by these concepts.11
Let’s unpack this thought. Philosophers have traditionally distinguished freedom of choice or action from what we might call the autonomy of decision. To see the difference, think about impulse buying. You may “freely” click on the Buy button in the heat of the moment—indeed, corporations count on it—without that decision reflecting what really matters to you in the long run. Decisions like that might be “free” but they are not fully autonomous. Someone who makes a fully autonomous decision, in contrast, is committed to that decision; she owns it. Were she to reflect on the matter, she would endorse the decision as reflecting her deepest values.
Totally autonomous decisions are no doubt extremely rare; indeed, philosophers have long questioned whether they are possible at all. But it is clear that we value autonomy of decision, even if we can only approximate the ideal. That’s because autonomy of decision is part of what it is to be a fully mature person. And that, I believe, tells us something about why privacy matters. It matters, at least in part, because information privacy is linked to autonomy, and thereby an important feature of personhood itself.
There are two ways to infringe on a person’s autonomy of decision. The most obvious way is by overruling the decision, either by direct compulsion (I point a gun at your head) or by indirectly controlling your values and commitments (I brainwash you). A subtler way of infringing on your autonomy is to undermine it. Suppose a doctor makes the decision to give you a drug without asking your permission. Nobody has made you decide to do something. But your autonomy has been undermined nonetheless, and for an obvious reason: your decision has been foreclosed. You are not in a position to make the decision. It has been made for you.
Apply this to privacy. One mark of information privacy is control: you have at least some control over how and with whom you share aspects of yourself. So consider a limit case. If you have a condition that compels you to say out loud every thought that comes into your head—whether you like it or not—your autonomy of decision has been overruled. You are at the mercy of your condition.
But privacy invasions generally don’t harm autonomy in this way. They don’t overrule privacy. They undermine it. Suppose, to take a more old-fashioned example, that I break into your house and read your diary over and over again, every day. Suppose further that I make copies for my friends. Even if, again, you never learn of this, I am harming you in a new way—by undermining your capacity to control your private information. Whether you know it or not, that capacity is diminished. You may think you have the autonomy to decide whether to share your diary or not. But in fact, you are not in a position to make the decision; I’ve made that decision for you. Your autonomy of decision has been undermined.
As noted above, part of what makes your individual mind your mind is that you have a degree of privileged access to your mental states. And that includes, crucially, the ability to control access to the content of those thoughts and feelings—to choose whether and when you share this information with others. Part of the reason we value having this control is because it is a necessary condition for being in a position to make autonomous decisions, for our ability to determine who and what we are as persons.
It is here we see the danger inherent in systematic and sweeping collection by the government of the private information of citizens who have neither been charged with nor suspected of a crime. Such intrusions on information privacy—whether or not that information is acted upon or whether the intrusion is known—undermine not only dignity but, depending on how systematic the intrusions turn out to be, your actual capacity to control how and what information you share with others. The harm to your autonomy of decision becomes more global.
The systemic nature of the invasion therefore matters. Again, this point can be made starkly by looking back at our telepathic case, where I read your mind without your consent. Suppose it happens only once. Obviously, if I act on this knowledge in order to manipulate and control you, then I may directly harm your autonomy. And the risk of this happening may be great enough to justify preventing me from ever having such power. Moreover, as I’ve been emphasizing, it shows a lack of respect for your status as a person, an autonomous being. But if I only read your mind—or your diary—once (and do nothing with the data), then presumably your autonomy itself is not affected. You don’t become less in control of your self or face a diminished capacity of any sort.
Now let’s suppose this happens again and again over time, in an organized way. From my perspective—the perspective of the knower—your existence as a distinct person will begin to shrink. Our relationship will be so lopsided that I may well cease to regard you as a full subject, as a master of your own destiny. As I learn what reactions you have to stimuli, why you do what you do, you may become like any other object to be manipulated even if I do not, in fact, manipulate you. You may be, as we say, dehumanized in my eyes. The connection between a loss of privacy and dehumanization is a well-known and ancient fact, and one which we don’t need to appeal to science fiction to illustrate. It is employed the world over in every prison and detention camp. It is at the root of interrogation techniques that begin by stripping people literally and figuratively of everything they own.