
  But could it be that the Internet is helping to fragment not only our moral and religious values, but our very standards of reason? Could it be making us less reasonable?

  When Fights Break Out in the Library

  Let’s go back to the idea of the Borgesian library discussed in chapter 1. It encompasses the world. It contains books on every subject, from politics to physics to pencil-making. But not all the books agree. And we cannot leave the library to find out which is right and which is wrong. It is all there is.

  Were we to live in such a Borgesian library world—as we do, in an obvious sense—we would be in a state of information glut. Information on any topic we can imagine—and much that any particular individual can’t—is contained within the infinite walls of the library. Some of it will be accurate, some only partly so, and some complete gibberish. The question is how to tell which is which. Theories will be propagated, and certain reference books will be seen as keys to unlock the secrets of the other books or as useful maps to the truths and falsehoods. But people will disagree over which books are the best references, over the very standards for sorting the good books from the bad.

  How would people react in such a situation? A natural reaction would be an increasing tribalization of the sort we saw Sunstein remarking on above. Just as in a room of shouting people you start by focusing on the voices you recognize, so the library-dwellers would be prone to read some books over others, and to discount not only their rivals’ books but their reference books—the very standards they use to sort good books from bad. As such, rational discussion about whose books are best, and how to sort new books that come in, will become increasingly difficult. Tribes within the library will evaluate one another’s reasons by completely different standards.

  The intellectuals of the world will nod their heads sagely. It is inevitable, they will say. There is no way to know which books contain the objective truth, some will announce. “There is no objective truth!” others will assert; all books are relative to other books. Still others will declare that only faith in the one true book can solve the problem—appeals to references, citation records, card catalogues and rational standards generally are all for naught. Such reactions will only increase tribalization, and the more practical-minded of the inhabitants may begin to listen to those who point out that the only real way to settle the issue is to burn the other tribes’ books.

  There are reasons to think we are living in this sort of environment now. By giving us more information, the Internet not only gives us more things to disagree about, it also allows us to more easily select those sources that validate our existing opinions. And that, in turn, can cause our disagreements to spiral ever deeper.

  Consider, for example, so-called “fact-checking” sites like PolitiFact, which was started by the old media outlet the St. Petersburg Times to help cut through all the tribe-talk and verify different claims to truth made in the cultural and political debates that fill the news. And by and large, such sites have had a healthy impact. But they’ve also come under increasing assault themselves. In his essay “Lies, Damned Lies, and ‘Fact-Checking’: The Liberal Media’s Latest Attempt to Control the Discourse,” Mark Hemingway charged that fact-checkers are themselves biased—toward the left. His evidence: several examples where fact-checkers seem to get things wrong, in a politically biased way. According to Hemingway, “What’s going on here should be obvious enough. With the rise of cable news and the Internet, traditional media institutions are increasingly unable to control what political rhetoric and which narratives catch fire with the public. Media fact-checking operations aren’t about checking facts so much as they are about a rearguard action to keep inconvenient truths out of the conversation.”9

  Notice how Hemingway frames his fact-checking of the fact-checkers. He takes himself to be exposing a hidden truth: the truth that some folks (the fact-checkers) are keen to keep inconvenient truths out of the conversation. Whether or not Hemingway is right about his claim, the point here is that the truth wars in this country have grown to such proportions that the very idea of “fact-checking” is seen as suspect.

  Once debates reach this point they are very difficult to resolve. It has become a matter of principle. Not moral principles but “epistemic” principles—“epistemic” because they are about what is rational to believe and the best sources of evidence and knowledge. Disagreements over principles such as these illustrate a very old philosophical worry: namely, that all reasons end up grounding out on something arbitrary.

  For example, suppose I challenge your epistemic principle P which says that such-and-such a method is a reliable means to the truth. You defend it by appeal to some other principle, Q. If I persist in my skepticism and question Q, your options seem to dwindle. Pretty soon, being a finite creature with a finite mind, you are going to run out of principles. It seems that you must either end up defending your Q with P (whose truth is still not established) or simply dig in your heels and tell me to take off. Either way, you haven’t answered my challenge, and your faith in your principles—and therefore the very methods you use to reach the truth about matters both mundane and sublime—seems blind.

  This paradox goes back at least as far as the pre-Socratic philosophers of ancient Greece. Yet it reappears in cultural debates like clockwork. Today, it is heard in the claims made by evangelicals to the effect that science is really just another religion: “Everyone, scientist or not, must start their quests for knowledge with some unprovable axiom—some a priori belief on which they sort through experience and deduce other truths. This starting point, whatever it is, can only be accepted by faith. . . .”10 This is a powerful idea—in part because it contains more than a grain of truth, and in part because it simply feels liberating. It levels the playing field, intellectually speaking. After all, if all reasons are grounded on something arbitrary, then why assume science is on any firmer foundation than anything else? You might as well just go with what you already accept on faith.

  If we were to concentrate just on the receptivity model of knowledge that we saw in chapter 1, then such debates wouldn’t threaten knowledge at all. But that misses the point. Because the problems they cause are not for receptivity but for reasonableness. What they threaten is our ability to articulate and defend our views. The problem that skepticism about reason raises is not whether I have good evidence by my principles for my principles. Presumably I do.11 The problem is whether I can give a more objective defense of them. That is, whether I can give reasons for them that can be appreciated from what the eighteenth-century philosopher David Hume called a “common point of view”—reasons that can “move some universal principle of the human frame, and touch a string, to which all mankind have an accord and symphony.”12

  Those who wax skeptical about the use of scientific methods to resolve debates such as the origins of life on earth, or the beginning of the universe, for example, are rarely if ever skeptical about science across the board. Their quarrel is with its use in certain domains. The folks at AnswersinGenesis.org aren’t going to say that we should never use observation, logic and experiment to figure things out. What they will argue is that these methods have a lower priority in some subject matters than others, where other methods trump them. People who think that the Torah or the Bible or the Koran is a better—not the only—means to the truth about the origin of our planet, for example, see the matter in that way.

  Imagine a dispute between a scientist and a creationist over these two principles:

  (A) Inference to the best explanation on the basis of the fossil and physical record is the only method for knowing about the distant past.

  (B) Consultation of the Holy Book is the best method for knowing about the distant past.

  The friends of (B) aren’t rejecting outright the strategy of consulting the fossil and historical record. So we can’t just call them out for using it sometimes and not others. And obviously, we can’t travel back in time and use observation (another commonly shared method) to settle who is right and who isn’t about the distant past. What this shows is that debates over even very specific principles like these can end up grounding out—either the participants will end up defending their favored principles by appealing to those very principles (e.g., citing the Book to defend the Book) or appealing to specific principles that the other side shares but assigns a lower priority. Neither side will be able to offer reasons that the other will recognize for his or her point of view.

  As I’ve already noted, the Internet didn’t create this problem, but it is exaggerating it. Yet you might think that this isn’t so bad. As philosopher Allan Hazlett has pointed out, if everyone agrees in a democracy, something’s gone wrong.13 Democracies should be, in John Rawls’ words, places where there are “a plurality of reasonable yet incompatible comprehensive doctrines.”14

  But that’s just the point. How do we figure out, as a society, whose views are “reasonable” and whose are not, if our standards for what counts as reasonable don’t overlap? And how do we engage in dialogue with people with worldviews that are different from our own (as opposed to oppressing them, or manipulating them, or simply shouting at them) without an exchange of reasons? The answer is: we don’t. And that tells us something: civil societies not only need a common currency to exchange money; they also need a common currency to exchange reasons.

  So, the point is not that we should all agree. We all have different experience bases, after all, and that means we can use different evidence even if we agree on what counts as evidence. But if we don’t agree on what counts as evidence, on our epistemic principles, then we aren’t playing by the same rules anymore. And once that happens, game over.

  The Rationalist’s Delusion

  Perhaps we shouldn’t be surprised that we have a hard time defending our “first principles” with reasons. It might be that the fragmentation of reason, while exaggerated by our digital culture, is actually the result of human psychology. If so, then perhaps being reasonable, defined as the willingness to give and ask for reasons that others can appreciate, is an untenable ideal. After all, you don’t have to be Karl Rove to suspect that the evidence often fails to persuade and that what really changes opinion is good advertising, emotional associations and having the bigger stick (or super PAC).

  Recently, some social scientists, most notably the psychologist Jonathan Haidt, have suggested that this is not far from the truth.15 Haidt has done remarkable work exposing some of the psychological causes of our divisions in values. But he thinks this work shows that the philosopher’s dream of reason isn’t just naive; it is radically unfounded, the product of what he calls “the rationalist delusion.” As he puts it, “Anyone who values truth should stop worshipping reason. We all need to take a cold hard look at the evidence and see reasoning for what it is.”16 Haidt takes two points about reasoning to be particularly important: the first concerns the relative efficacy (or lack thereof) of reasoning; the second concerns the point of doing so publicly: of exchanging reasons. According to Haidt, value judgments are less a product of rational deliberation than they are a result of intuition and emotion. In neuroscientist Drew Westen’s words, the political brain is the emotional brain.

  If this is right, then we not only have something of an explanation for why knowledge fragmentation persists (people just won’t listen to one another’s reasons) but also a lesson for what to do about it. Or at least what not to do: trying to come up with reasons to convince our cultural opponents isn’t going to work. If peace is in the offing, it is going to have to come about some other way.

  There is, without question, a lot of sense to the idea that reasons often—perhaps mostly—fail to persuade. As we’ve already seen from Kahneman’s work, our reflective self, whose job it is to monitor our fast judgment-making processes, is often not on the ball. And even when it is, “reasoning” often seems to be post-hoc rationalization: we tend to accept that which confirms what we already believe (psychologists call this confirmation bias). And the tendency goes beyond just politics. When people are told that they scored low on an IQ test, for example, they are more likely to read scientific articles criticizing such tests; when they score high, they are more likely to read articles that support the tests. They are more likely to favor the “evidence,” in other words, that makes them feel good. This is what Haidt calls the “wag the dog” illusion: reason, he says, is the tail that we mistakenly believe wags the dog of value judgment.17

  Much of this has to do with our brain’s ability to trump reason with emotion. Consider some of Haidt’s own well-known research on “moral dumbfounding.” Presented with a story about consensual sex between an adult brother and sister—sex which is never repeated, and which is protected by birth control—most people reacted with feelings of disgust, judging that it was wrong. Yet they struggled to defend such feelings with arguments when questioned by researchers.18 Even so, they stuck to their guns. Haidt suggests that this means that whatever reasons they could come up with were just along for the ride: their feelings were doing the work of judgment.

  Data like this should give us pause, but we need to be careful not to exaggerate the lessons it has to teach us. The inability of people—in particular young college students like those in Haidt’s study—to be immediately articulate about why they’ve made an intuitive judgment doesn’t necessarily show that their judgment is the outcome of a nonrational process, or even that they lack reasons for their view. Intuitions, moral or otherwise, can be the result of sources that can be rationally evaluated and calibrated.19 Moreover, rational deliberation is not a switch to be thrown on or off. It is a process, and therefore many of its effects have to be measured over time. Tellingly, the participants in Haidt’s original harmless-taboo studies had little time to deliberate. But as other studies have suggested, when people are given more time to reflect, they can change their beliefs to fit the evidence, even when those beliefs might initially be emotionally uncomfortable.

  Haidt has been careful to say that reasons do play some role in moral and political judgments. His point is that reasons are far less influential than intuition and emotion. The latter factors trump reasons: “reasons matter (except when intuitions object).”20

  As I’ve said, it is hard to argue with this—even just based on the anecdotal evidence that daily life provides. But it doesn’t show that reason doesn’t have a role to play. Consider the changing attitudes toward homosexuality and same-sex marriage in the United States. What caused this change? Part of the story is simply that younger people, in general, are increasingly tolerant of same-sex marriage. Another part is increased contact and exposure to gays and lesbians through the media. But the battle over same-sex marriage has also been partly a legal battle, where the issue has concerned not just the definition of marriage but the alleged harm same-sex unions cause to others not in those unions. Tellingly, the evidence—reason—has shown those claims to be unjustified.21 And that fact seems to have had an impact on judicial proceedings about the matter—most famously in the 2009 Proposition 8 legal case, when the attorney arguing the case against same-sex marriage was reported to have conceded in court that he did not know what harm would result from letting same-sex partners marry.22 So perhaps we can explain massive moral and political change of this sort without having to invoke the causal influence of reasons, but it seems just as likely that appeals to evidence—evidence, in fact, often uncovered by social scientists—have had an impact on how people view same-sex (or interracial) marriage by affecting institutions such as the law.

  Moreover, as the psychologist Paul Bloom has pointed out, it seems likely that rational deliberation is also going to be involved in the creation of new moral concepts—such as human rights, or the idea that all people should be treated equally under the law.23 Changes in moral concepts are often changes that occur despite the resistance of the “intuitive” or “emotional” judgments people inherit from the culture around them. But such changes take time. So, to show that reasons cannot trump intuition in value judgments, we would need to show that they don’t change our moral judgments over time.

  This brings us around to Haidt’s second main point about reasoning, mentioned above. He endorses what he calls a Glauconian view of reasoning about value. The reference here is to an old saw from Plato: What would you do with a ring of invisibility? Fight for truth, justice and the American way—or spy and steal? In Plato’s Republic, the character Glaucon asks this question to illustrate the idea that it is merely the fear of being caught that makes us behave, not a desire for justice. Haidt takes from this a general lesson about the value of defending our views with reasons. Just as those who do the “right” thing are not really motivated by a desire for justice, those who defend their views with reasons are not “really” after the truth. As the cognitive scientists Hugo Mercier and Dan Sperber put it, the function of both reasoning and the exchange of reasons is persuasion and persuasion alone. If so, then what people are really after when looking for reasons—whether they acknowledge it or not—are arguments supporting their already entrenched views, and/or a way to push people into agreeing with them.24 So even if appeals to evidence are sometimes effective in changing our values over time, that’s because reasons themselves are aimed at manipulating others into agreeing with us, not because they might have also uncovered the facts. On this view, to think otherwise is to once again fall into the rationalist delusion.

  Anyone who has spent time on the Internet will probably feel the pull of the Glauconian view of human rationality. Social media and the blogosphere are filled with “reasoning”—but much of it seems to be either blatant marketing or aimed only at supporting what people already believe. Maybe that is what the Internet is teaching us. Maybe we are all Glauconians, and always have been.