Now, clearly, any attempt to tell something complicated in a narrative way—the story of the Civil War, for example, as told by a documentary—will necessarily sacrifice some detail in order to illustrate the sweep of events. And in skipping some details, as any description of an event must do, it could be thought to “sacrifice” truth in a certain respect. But these facts don’t bother most people. People know that details must often be left out of historical narratives. And they know that recreations of historical events must “fill in” where there is imperfect knowledge of the past. As a result, we adjust our expectations and, if we are wise, guard against taking the recreation as anything other than a bit of historically informed fiction.
Daisey’s case, however, illustrates how quickly these expectations can shift. We typically don’t treat storytellers as journalists—and this expectation didn’t necessarily change just because Daisey was on NPR. It changed because of his message. His message was directed at uncovering a significant hidden truth about how iPhones were made—a truth that he thought, correctly, needed to be brought to light. That is, what his show was about was the fact that we were ignoring certain facts. When that is the content of your message, expectations change. We expect you to give us the right details, or to explain to us more clearly—as Daisey now does during the show—how and when details are being sacrificed for narrative drive. And this expectation rises the more we are convinced that there is something important at stake. That’s why D’Agata’s “I’m after truth, not accuracy” excuse rings hollow. If, as D’Agata says he is doing, I tell you that something is “significant,” I shift your expectations. So you are right to feel offended if I then ignore the very expectations I’ve helped create. When people sacrifice small-“t” truth—what D’Agata calls accuracy—for one big Truth, their deceptions essentially involve manipulating our expectations.
The Internet makes it ridiculously easy to manipulate people’s expectations. That’s partly due to the relative anonymity of the Internet. But it is also because the expectation-setting context is increasingly difficult to track.
Sock puppets are a good example. A “sock puppet” is Internet-speak for a manufactured online identity used to get people to believe information of some sort. A hotel manager or restaurant owner who logs on to a consumer review site to review his own business, or hires other people to do so, is using sock puppets. Writers who review their own work under aliases on Amazon, and attack the work of others under those same aliases, are doing the same. At a more sinister level, governments have been known to use sock puppets and social media to influence public opinion. Westerners typically associate such behavior with China and other oppressive regimes, and for good reason. In 2013, China was widely accused of creating dozens, perhaps hundreds, of fake Twitter accounts for propaganda purposes concerning Chinese–Tibetan relations. But Western governments are hardly shy about using sock puppets either. As the Guardian newspaper reported in 2011, the U.S. government created Operation Earnest Voice, a program that awarded the Californian company Ntrepid millions of dollars to create sock puppets for the explicit purpose of spreading propaganda on the net in languages such as Arabic.12
One common form of sock puppetry is the use of a socialbot. A socialbot is not one person pretending to be another (or many), but a robot pretending to be a human. By “robot” I don’t mean a walking-talking robot of the sci-fi variety; I mean an algorithm-guided bit of software that steers its false human face to like real people’s sites, to make posts, and to get others to like those posts. Socialbots have been amazingly good at fooling people. They independently post and repost, tweet and retweet about current events—all using expanding databases of information gleaned from the Internet. They can respond to emails. They are often programmed to tweet in patterns that mimic human awake/sleep cycles. In one famous case, a well-known Brazilian journalist—allegedly with more online influence than Oprah—was revealed to be a bot.13 Alan Turing claimed more than sixty years ago that if a machine could reliably fool humans into thinking it was human, then we would have as much reason to think it was thinking as we have to think other human beings are thinking. By some standards, bots might seem to be passing this test.
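It is worth seeing how little machinery this kind of mimicry requires. Here is a minimal sketch, in Python, of the scheduling logic just described; the post_update function and the canned snippets are hypothetical stand-ins for whatever platform API and content database a real bot would use.

```python
import random
import time
from datetime import datetime

def post_update(text):
    # Hypothetical stand-in for a real platform's posting API.
    print(f"[{datetime.now():%H:%M}] posting: {text}")

def is_awake(hour, wake=7, sleep=23):
    # Mimic a human awake/sleep cycle: stay silent overnight.
    return wake <= hour < sleep

def run_bot(snippets):
    while True:  # runs indefinitely, like a real bot
        if is_awake(datetime.now().hour):
            post_update(random.choice(snippets))
        # Irregular gaps between posts look more human than a fixed timer.
        time.sleep(random.uniform(15 * 60, 90 * 60))

run_bot(["Interesting piece on today's news...", "Couldn't agree more!"])
```

Even this toy version illustrates the point: nothing in the output signals that no one is home.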
Even if we don’t think they are thinking (and I don’t), the use of bots is incredibly disturbing. Part of the reason is that they are so cheap—you can buy “armies” of them for just a few hundred dollars. But they are also massive deceit machines, built for the purpose of getting people to buy things, do things, vote for certain candidates and not others. (This is partly why Twitter recently banned such bots.) Again, not all uses of bots may be harmful. But it is a mistake to write the whole technology off as simply a new and updated form of marketing or advertising. (Excusing one’s manipulative behavior by saying it is “just advertising” is a bit like excusing one’s infidelity as “just flirting.”) In reality, these bots are more like cons. They operate by getting people to assume they are dealing with someone real who is sincere in their assertions. And they take advantage of that assumption.
As in the Daisey case, some people use their online personas, or bots, to try to get across general political or moral viewpoints (what they consider “big truths”) while perhaps sacrificing or ignoring inconvenient details. In many cases, this might be inconsequential, or simply a form of self-promotion. But there may be more at stake.
In recent years, various political organizations have used social media to great effect for propaganda. Again, the idea is often to broadcast what is perceived to be a “big truth” by misrepresenting the facts. One particularly common method involves photo-sharing: an older photo is circulated as if it had been taken during a more recent event. For example, a widely circulated photo on the Internet—which was reproduced by various reliable news services—showed a young child jumping over rows of covered corpses. The photo was represented as having been taken in the aftermath of the Houla massacre in Syria in 2012. In fact, the photo was taken a decade earlier in Iraq. Similar uses of photos have been widely documented, and have led to several efforts to develop verification techniques to help the public and journalists spot such abuses.14
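One of the simplest such checks looks at the metadata a camera embeds in the image file. Here is a sketch in Python using the Pillow imaging library; the filename and the claimed date are hypothetical, and the check only works when the metadata hasn’t been stripped, as social media sites often do.

```python
from PIL import Image  # the Pillow imaging library

def capture_date(path):
    # getexif() returns a dict-like mapping of EXIF tags;
    # tag 306 is "DateTime", e.g. "2003:03:31 10:12:44".
    return Image.open(path).getexif().get(306)

# Hypothetical filename and claimed date, for illustration only.
date = capture_date("houla_photo.jpg")
if date and not date.startswith("2012"):
    print(f"Camera timestamp {date} conflicts with the claimed 2012 date.")
```

Real verification efforts combine many such signals (reverse image search, weather records, shadow angles) precisely because any single one can be faked or missing.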
In The Republic, Plato explicitly suggests that it would be good for citizens to believe a myth that would have the effect of making them care about their society and be content in keeping it stratified. He called this “the noble lie,” and he seemed to think that it might be inevitable if the state is to survive. And of course, deception sometimes is justified: to protect someone, or to prevent a panic, or to minimize offense. Life is complicated and moral principles must always be applied with a sense of context. But the problem with so-called noble lies is that they are like potato chips: it is hard to stop with just one. That’s because the moment a noble lie is discovered to be just that—a lie—it suddenly becomes just as “noble” to lie about whether it was really known to be a lie. Cover-ups become noble lies. Assassinations become noble lies. And soon we are sliding down the slope of the deck right into the jaws of the shark.
Objectivity and Our Constructed World
Life in the Internet of Us can make it hard to know what is true. That’s partly because the digital world is a constructed world, but one constructed by a gazillion hands, all using different plans. And it is partly because it seems increasingly difficult to step outside of our constructed reality.
These facts have led some to claim that objectivity is dead. Internet theorist David Weinberger, for example, has suggested that objectivity has fallen so far “out of favor in our culture” that the Society of Professional Journalists dropped it as an official value from their code of ethics (in 1996, no less). Weinberger himself argues that our digital form of life undermines the importance of objectivity, in part because humans always “understand their world from a particular viewpoint.” That’s a problem, apparently, because objectivity rests on a metaphysical assumption: “Objectivity makes the promise to the reader that the [news] report shows the world as it is by getting rid of (or at least minimizing) the individual, subjective elements, providing, [in the philosopher Thomas Nagel’s words], ‘the view from nowhere.’ ”15 For Weinberger, objectivity is an illusion because there is no such thing as a view from nowhere.
Maybe not. But that doesn’t rule out objectivity—since being objective doesn’t require a view from nowhere. Truths are objective when what makes them true isn’t just up to us, when they aren’t constructed. But a person is objective, or has an objective attitude, to the extent to which he or she is sensitive to reasons. Being sensitive to reasons involves being aware of your own limitations, being alert to the fact that some of what you believe may not be coming from reasons but from your own prejudices, your own viewpoint alone. Objectivity requires open-mindedness. It doesn’t require being sensitive to reasons that (impossibly) can be assessed from no point of view. It means being sensitive to reasons that can be assessed from multiple and diverse points of view. Being objective in this sense may not always, as Locke would have acknowledged, bring us closer to the real truth of the matter, to what Kant called “things in themselves.” But that is just to repeat what we already know: being objective, or sensitive to reasons, is no guarantee of certainty.
In Weinberger’s view, objectivity “arose as a public value largely as a way of addressing a limitation of paper as a medium for knowledge.”16 The need to be objective, he argues, stems from the fact that paper—he means the printed word, roughly—is a static medium; it forces you to “include everything that the reader needs to understand a topic.” In his view, the Internet has replaced the value of objectivity with transparency—in two ways. First, the Internet makes it easy to look up a writer’s viewpoint, because you can most likely find a host of information about that writer. And second, it makes a writer’s sources more open: a hyperlink can take you right to them, allowing you to check them out for yourself.
I agree that transparency in both of these senses is a value. And our digital form of life has, indeed, increased transparency in some ways—but not in all. It has increased transparency for those who already desire and value it. But as the use of sock puppets and bots demonstrates, the ability of the Internet to allow deceptive communication leads in precisely the opposite direction. Moreover, we value being able to check on sources and background precisely because we value objective reasons—reasons that are not reasons only for an individual but are valid for diverse viewpoints. We want to know whether the source of our information is biased because we want to sort out that bias from those facts that we can appreciate independently of bias. Transparency is not a replacement for valuing objectivity; it is valuable because we value objectivity.
Just as we can’t let the maze of our digital life convince us to give up on objectivity and reasons, we shouldn’t let it lead us to think that all truth is constructed, that truth is whatever passes for truth in our community, online or off. What passes for truth in a community can be shaped all too easily. That’s what the noble lie is all about. What passes for truth is vulnerable to the manipulations of power. So, if truth is only what passes for truth, then those who disagree with the consensus are—by definition—not capable of speaking the truth. It’s no surprise, then, that the idea that truth is constructed by opinion has been a favorite of the powerful. In the immortal words of a senior Bush advisor, “We are an empire now, and when we act, we create our own reality.”17
That, fundamentally, is why we should hold onto the idea that at least some truth is not constructed by us—even if the digital world in which we live is. Give up that thought, and we undermine our ability to engage in social criticism: to think beyond the consensus, to see what is really there.
Part II
How We Know Now
5
Who Wants to Know: Privacy and Autonomy
Life in the Panopticon
In 1890, Samuel Warren and Louis Brandeis published an article in the Harvard Law Review arguing for what they dubbed “the right to privacy.” It made a splash, and is now one of the most widely cited legal articles in U.S. history. What is less known is what precipitated the article. The Kodak camera had just been invented, and it (and cameras like it) was being used to photograph celebrities in unflattering situations. Because of this newfangled invention, Warren and Brandeis worried that technology—and our unfettered use of it—was negatively affecting the individual’s right to control access to private information. Technology seemed to be outstripping our sense of how to use it ethically.
They had no idea.
In the first part of this book, we’ve seen how some ancient philosophical challenges have become new again. We’ve grappled with whether “reasonableness” is reasonable and whether truth is a fantasy. But these old problems are only half the story. To really appreciate how we can know more but understand less, we need to recognize what is distinctive about how we know now. And a good place to start is with this simple fact: the things we carry allow us to know more than ever about the world, faster than ever. But they also allow the world to know more about us—and in ways never dreamed of by Warren and Brandeis. Knowledge has become transparent. We look out the window of the Internet even as the Internet looks back in.
Most of the data being collected in the big data revolution is about us. “Cookies”—those insidious (and insidiously named) little Internet genies—have allowed websites to track our clicking for decades. Now much more sophisticated forms of data analysis allow the lords of big data, like Google and Amazon, to form detailed profiles of our preferences. That’s what makes the now ubiquitous targeted ad possible. Searching for new shoes? Google knows—and will helpfully provide you with an ad showing a selection of the kind of shoes you are looking for the next time you visit nytimes.com. And you don’t have to click to be tracked. The Internet of Things means that your smartphone is constantly spewing data that can be mined to find out how long you are in a store, which parts of the store you visit and for how long, and how much, on average, you spend and on what. Your new car’s “black box” data recorder keeps track of how fast you are traveling, where you have traveled and whether you are wearing your seatbelt. That’s on top of much older technologies that continue to see widespread use—such as the CCTV monitors that record events at millions of locations across the globe.
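The underlying mechanism is mundane. Here is a bare-bones sketch, in Python, of how a site hands your browser a tracking ID on a first visit and recognizes it on every visit after that; the server address and the cookie name are hypothetical, and real trackers are far more elaborate.

```python
import uuid
from http.cookies import SimpleCookie
from http.server import BaseHTTPRequestHandler, HTTPServer

class TrackingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Parse whatever cookies the browser sent back.
        cookies = SimpleCookie(self.headers.get("Cookie", ""))
        self.send_response(200)
        if "visitor_id" in cookies:
            # A returning visitor: every request is now linkable.
            print("returning visitor:", cookies["visitor_id"].value)
        else:
            # First visit: mint a unique ID the browser will echo
            # back on every future request for the next year.
            self.send_header(
                "Set-Cookie", f"visitor_id={uuid.uuid4()}; Max-Age=31536000"
            )
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<p>Hello again.</p>")

HTTPServer(("localhost", 8000), TrackingHandler).serve_forever()
```

Third-party trackers do the same thing, but set the cookie from an ad or analytics domain embedded in many different sites, which is what lets a single ID follow you across the web.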
And, of course, data mining isn’t done just for business purposes. Arguably, the United States’ largest big data enterprise is run by the NSA, which was intercepting and storing an estimated 1.7 billion emails, phone calls and other types of communications every single day (and that was way back in 2010).1 As I write this, the same organization is reported to be completing several huge research centers around the country to store and analyze this data, including staggeringly large million-square-foot facilities in remote areas of the United States.
We all understand that there is more known about what each of us thinks, feels and values than ever before. It can be hard to shake the feeling that we are living in an updated version of Jeremy Bentham’s famous panopticon—an eighteenth-century building design that the philosopher suggested for a prison. The basic idea was a prison as a fishbowl. Observation, Bentham suggested, affects behavior—and prisoners would control their behavior more if they knew their privacy was completely gone, if they could be seen by and see everyone at all times.
In some ways, our digital lives are fishbowls; but fishbowls we’ve gotten into willingly. One of the more fascinating facts about the amount of tracking going on in the United States is that hardly anyone seems to care. That might be due not to underreporting or lack of Internet savvy by the public (although both are true) but to the fact that the vast majority of people are simply used to it. Moreover, there are lots of positives. Targeted ads can be helpful, and smartphones have become effectively indispensable for many of us. And few would deny that increased security from terrorism is a good thing.
Fig. 2. Elevation, section and plan of Jeremy Bentham’s Panopticon penitentiary, drawn by Willey Reveley, 1791.
Partly for these reasons, writers like Jeremy Rifkin have been saying that information privacy is a worn-out idea. In this view, the Internet of Things exposes the value of privacy for what it is: an idiosyncrasy of the industrial age.2 So no wonder, the thought goes, we are willing to trade it away—not only for security, but for the increased freedom that comes with convenience.
This argument rings true because in some ways it is true: we do, as a matter of fact, have more freedom because of the Internet and its box of wonders. But, as with many arguments that support the status quo, one catches a whiff of desperate rationalization as well. In point of fact, there is a clear sense in which the increased transparency of our lives is not enhancing freedom but doing exactly the opposite—in ways that are often invisible.
The Values of Privacy
If you are arrested for a serious crime in the United States today, your picture is taken, you are fingerprinted, and in some precincts the inside of your cheek is swabbed in order to obtain a sample of your DNA. In his dissenting opinion in the recent Supreme Court case on DNA identification techniques, Justice Antonin Scalia argued that such techniques amount to illegal searches.3 We are, he said, opening our mouths to government invasion and tyranny.