  Information cascades are hardly new. The mob mentality has worked its dark magic as long as there have been mobs. That’s why my mom used to ask, in response to my whine that “everyone else is [doing, saying, believing] something”: “If everyone jumped off a bridge, would you jump off too?” Well, hopefully not, but the history of humanity might suggest otherwise. We not only tend to follow others’ actions, we also seem all too willing to go along with what they believe. We trust their testimony, even when we shouldn’t.

  My mom’s skepticism about the reliability of testimony has deep roots in our culture. The seventeenth-century philosopher John Locke snarled at the idea that you can know something merely because someone else tells you: “I hope it will not be thought arrogance to say,” he begins, “that perhaps we should make greater progress in the discovery of rational and contemplative knowledge if we sought it in the fountain . . . of things in themselves, and made use rather of our own thoughts than other men’s to find it.” He goes on, “The floating of other men’s opinions in our brains makes us not one jot the more knowing, though they happen to be true. What in them was science is in us but opiniatrety.”8 Locke’s point seems to be that real knowledge comes only from your own personal observations, memory, logical reasoning, and so on. Real knowers, he seems to say, are self-reliant: they drink from the fountain of “things in themselves.” That is, they believe only when they have, or at least can easily obtain, reasons—reasons based on personal observations and critical thinking—for one side of the given issue or another.

  An emphasis on self-reliance makes sense given that Locke was a founding figure of the Enlightenment, known for celebrating the individual’s political rights and autonomy. For Locke, citizens had a natural right to their property, and the government needed to be relatively standoffish with regard to how people used that property. This trumpeting of the rights of the individual had a natural epistemic correlate, call it Locke’s command: thou shalt figure things out thyself. The sentiment is echoed again and again throughout the period. Kant, in fact, defined enlightenment partly in terms of it: as humanity’s “emergence from a self-imposed immaturity”—immaturity due to lack of courage to think for oneself as opposed to going with the flow.9 One could not even enter the British Royal Society without passing under its motto (then and now) of nullius in verba (“take nobody’s word for it”).

  These sentiments were largely a reaction to an older idea—that all knowledge requires deferring to, and trusting, authority. Education in the sixteenth century was still very much a matter of mastering certain religious and classical texts, and what you knew came from those, and only those, texts. But as it became apparent that these texts were often wrong (think of Galileo’s and Copernicus’ discoveries, for example) the method of consulting them for knowledge came to seem naive. Thus by 1641, we find Descartes wiping away such methods with the very first sentence of his most famous book: “Some years ago I was struck by the large numbers of falsehoods that I had accepted as true in my childhood, and by the highly doubtful nature of the whole edifice that I had subsequently based on them.”10 Descartes sat down and attempted to reconstruct what he really knew—using only materials that he could find with his own mind—and he implicitly urged his reader to do the same. In other words, don’t trust someone else’s say-so; question authority.

  This is still good advice—to a point. That’s because, really, “Locke’s command” is impossible to follow strictly. We can’t figure everything out for ourselves. So, if we interpret Locke as telling us that you only really know that which you discover on your own, then you and I don’t know very much. And neither did Locke. Even in Locke’s time, when someone like Locke himself could master so much about the world (and write a book titled, non-ironically, An Essay concerning Human Understanding), there was still a cognitive division of labor. Expertise was acknowledged and encouraged in mathematics, navigation, engineering, farming and warfare. Education systems—universities—were already centuries old by Locke’s time, and the printing press, and growing literacy, were allowing more and more people to learn from the knowledge of others.

  The point is partly economic. It simply isn’t efficient to try to have everyone know everything—any more than it is efficient to try to get everyone to grow all their own food. After all, how much of the information that you get from your phone could you work out personally? My experiment from last summer demonstrates that the answer is: not very much. Even if we can figure some things out offline, we still consult experts and outside resources. In all these sorts of cases, we must defer to the testimony and expertise of others.

  Sure, when the zombie apocalypse comes, you want to be able to stand on your own. But this isn’t the zombie apocalypse. In real life, self-reliant folks get that way because of what other people have done for them. Economically self-reliant people typically have benefited from the help of others (and from the public entities that maintain the roads, field armies and teach workers how to read and write). Likewise for the mythic self-reliant believer. This is the fantasy of the rugged individual judging the truth for himself without dependence on anyone else. In the TV commercial version, he is the man in the white lab coat, smoking his pipe, squinting into a microscope. But how did he get there? By learning from others, of course—from education. And while some of the information we learn this way can, at least in principle, be verified by our own reasoning and observation, the fact is that not everything we learn is like this. All of us are finite creatures with limited life spans. We can’t check all the information we rely on in any given day, let alone over a whole lifetime.

  So, while information cascades, rumors and ignorance do spread like wildfire, we aren’t going to give up on Google-knowing because of that. Nor should we, any more than we should give up learning from others. The lesson I glean from Mom and Locke, therefore, isn’t to be an intellectual hermit. You don’t have to throw your iPhone away and stop using Twitter. What we—both as individuals and as a society—should learn from Mom and Locke is that we must be extremely careful about allowing online information acquisition—Google-knowing—to swamp other ways of knowing. And yet that is, increasingly, precisely what we are prone to do.

  Being Reasonable: Uploading Reasons

  Imagine you want to buy a good apple from folks who have their own apple-sorting device. They claim it is great at picking out the good apples from the bad. Does that help you decide whether to buy one of their apples? Not really. Unless you already have reason to trust them, the mere fact that they say they have a reliable apple-sorter is of little use to you, even if they turn out to be right and really do have the good apples. Analogously, where the issue isn’t sorting good apples from bad but true information from false, my merely saying I’ve got good information isn’t generally enough to make you want to buy it—even if I really do have a way of telling the difference between what’s true and what isn’t.

  This tells us something important: if we define knowing as merely being receptive—as accurate downloading and nothing more—we ignore a crucial feature of the human condition.

  In some cases—many cases, in fact—we trade information in situations where trust has already been established to some degree. It is a spectrum. On one end of the spectrum are cases where we have a reasonably high degree of trust in the person we are asking for information: as when you talk to your trusted family doctor about your health, your spouse about your children, your professor about the material she is professing. Of course, no human is infallible, but you ask those you trust for information because you think they have a good probability of being right—or at least of being more right than you. Farther down the spectrum is the case where you stop a passerby to ask for directions. Here too, you already have some degree of trust, since a) you have some reason to think he is local (he is walking down the street) and b) you have no reason to think he will lie. We all know that both of these conditions can be defeated, of course (sometimes it turns out that you’re asking another tourist), but nonetheless, we ask directions with the reasonable expectation of getting useful information.

  Yet while many of our interactions are like this, many are farther down the spectrum still. In some cases, we may need information but have very little to go on. That’s why we seek evidence to help us assess other people’s opinions. When, for example, we aren’t an expert on something ourselves, we seek advice from those who say they are. But if we are wise, we also get evidence of that person’s expertise: references, degrees, or word of mouth. Moreover, we look for them to explain their opinions to us in ways that make sense given what we do know. We don’t trust blindly—we look for reasons.

  This is another place where a philosophical thought experiment can help. The seventeenth-century philosopher Thomas Hobbes postulated that governments are rational responses to the nasty, brutish and short lives we are doomed to lead in what he called the state of nature—a state where everyone is against everyone and no one works together to distribute resources. His thought was that those in the state of nature would face strong pressures to form a government, allowing them to coordinate and share these resources: to stop the “war of all against all.” Usually, when we think about this idea, we are thinking of shelter, food and water as resources. But it is pretty clear that justified accurate information—knowledge—is a resource as well. In order to escape the state of nature, we would need to exchange information in a situation where, at least to begin with, we would have fairly low levels of preexisting trust. In other words, we would face what we might call the information coordination problem.11

  The information coordination problem isn’t just hypothetical. All societies face it, since no society can survive without its citizens trading information with one another. But how do we solve it? You can’t just look and see the truth in my brain. What you need is some reflectively appreciable evidence—you are looking for a reason to believe that my apple-sorting is reliable, so to speak. By a “reason” here, I mean a consideration in favor of believing something. Not all reasons are good ones, of course. But when we are consciously deciding what to believe, we are engaging our capacity for reflection, or “System 2,” as Kahneman calls it. We are trying to sort the true from the false. When we do so successfully, we are knowing in a different way: we are not just being receptive. We are being reflective, responsible believers.

  A key challenge to living in the Internet of Us is not letting our super-easy access to so much information lull us into being passive receptacles for other people’s opinions. Locke and Descartes may have overemphasized the role of reason in our lives. But we can’t fall into the opposite error either. Knowing now is both faster and more dependent on others than Descartes or Locke would have ever imagined. If we are not careful, that can encourage in us the thought that all knowing is downloading—that all knowing is passive. That would be a serious mistake. If we want more than to be just passive, receptive knowers, we need to struggle to be autonomous in our thought. To do that is to believe based on reasons you can own—stemming from principles you would, on reflection, endorse.12

  But if the principles we use to evaluate one another’s information are forever hidden from view, they aren’t of much use. In order to solve the information coordination problem we can’t just live up to our own standards. We need to be willing to explain ourselves to one another in terms we can both understand.13 It is not enough to be receptive downloaders and reflective, responsible believers. We also need to be reasonable.

  Reasonableness isn’t a matter of being polite. It has a public point. Exchanging reasons matters because it is a useful way of laying out evidence of credibility. It is why we often demand that people give us arguments for their views, reasons that they can upload onto our shared public workspace. We use these reasons, for good or ill, as trust-tags. And the converse holds as well—if I want you to trust me, I will find it useful to give you some publicly appreciable evidence for thinking of me as credible. One way to do that is to upload a reason into our shared workspace.

  Public workspaces require public rules. If we are going to live together and share resources, we need people to play by at least some of the same rules. We need them to be reasonable, ethical actors. The same holds when it comes to sharing information. If that is going to work, we need people to be reflective and reasonable believers—to be willing to play the game of giving and asking for reasons by rules most of us could accept were we to stop to think about it. Only if we can hold one another accountable for following the rules can we make sense of having a fair market of information.

  But how realistic is this? Digital media gives us more means for self-expression and autonomous opinion-forming than any human has ever had. But it also allows us to find support for any view, no matter how wacky. And that raises an important question: What if our digital form of life has already exposed “reason” as a naive philosopher’s fantasy? What if we no longer recognize the same rules of reason?

  What if it is too late to be reasonable?

  3

  Fragmented Reasons: Is the Internet Making Us Less Reasonable?

  The Abstract Society

  “We can conceive,” wrote the philosopher Karl Popper in 1946, before television, computers or iPhones, “of a society in which men practically never meet face to face—in which all business is conducted by individuals in isolation who communicate by typed letters or telegrams, and who go about in closed motor-cars. . . . Such a fictitious society might be called a completely abstract or depersonalized society.”1

  This passage is remarkable in several ways. It is certainly prescient. The idea—if not the prose—is more like something you would have found at the time in Amazing Stories or the adventures of Tom Swift than buried in a difficult two-volume essay on democracy, fascism and knowledge. It is also very honest. Popper was, in effect, offering a warning about his own ideas concerning open societies—namely, that such societies could easily become “abstract” or “depersonalized.” Popper was a fierce advocate of openness, and he saw open societies as being marked by their values: they are committed to freedom of speech and thought, equality, reasonableness and an attitude of progressive criticism. Today we might say that a society is open to the degree that it protects freedom of communication and information, imposes little government censorship and has a diverse (and diversely owned) media.2 By this standard, a country like the United States is reasonably open (if not as open as many of us would like it to be, and becoming less so). And the Internet is a large part of this. The Web flourishes in part because it allows us unprecedented control over the sources and types of information we receive, to dip into the flow of information where and how we wish, and to extract and isolate what interests us more quickly, all in the comfort of our pajamas. It allows us to get what we want—or what we think we want—faster. And it allows us to do so without leaving our protective bubble, without sullying ourselves with the messy and inconvenient physical lives of others; it offers anonymity and friends we’ve never met. And that, one might think, is precisely what Popper was warning about: that with increased freedom of expression and consumption comes the risk of increased individual isolation.

  Research over the last couple of decades suggests that Popper was right to be concerned.3 But it is not clear that the only, or even the most fundamental, problem is that we are more isolated individuals. Really, communication is communication—even if some methods might be better for certain purposes than others. And the information technology coursing through our society’s veins has given us more ways of communicating.4 Indeed, we can hardly get away from one another: we email, we text, we tweet and soon, maybe, we’ll just think to one another. But to whom do we talk, and to whom do we listen? That’s the question, and the evidence suggests that we listen and talk to those in our circle, our party, our fellow travelers. We read the blogs of those we agree with, watch the cable news network that reports on the world in the way that we see it, and post and share jokes made at the expense of the “other side.”5 The real worry is not, as Popper feared, that an open digital society makes us into independent individuals living Robinson Crusoe–like on smartphone islands; the real worry is that the Internet is increasing “group polarization”—that we are becoming increasingly isolated tribes.

  As one of the most influential thinkers about digital culture, Cass Sunstein, has noted, one reason the Internet contributes to polarization is that “repeated exposure to an extreme position, with the suggestion that many people hold that position, will predictably move those exposed, and likely predisposed, to believe in it.”6 So, with a steady diet of Fox News, conservatives will become more conservative. Liberals who only read the Huffington Post or the Daily Kos will become more liberal. And that means, Sunstein argues, that true fragmentation of society results, “as diverse people, not originally fixed in their views and perhaps not so far apart, end up in extremely different places, simply because of what they are reading and viewing.”7

  We are getting more and more used to fragmentation now. It is reflected in our social media. Liberals tend to be friends on Facebook with other liberals, and Twitter feeds are clogged with the tweets of daily outrage: the latest news that is sure to piss your friends off as much as it did you.8 Yet most discussions of polarization talk about the fragmentation of our moral and political values. That makes sense: we live in a world of Christians and Muslims, atheists and theists, Republicans and Democrats, free-marketers and socialists, etc. These differences in religious, moral and political values are how we identify one another as members of the same tribe; and they affect our behavior in all sorts of ways, from determining who gets invited to the dinner party to which candidate we’ll vote for in the election.