

  Democracy as a Space of Reasons

  I began this chapter by asking whether the Internet is making us less reasonable. Being reasonable, I’ve said, amounts to defending your views with reasons that are in line with shared epistemic principles or standards. We’ve canvassed two deep challenges to reasonableness so defined. The first stems from an ancient philosophical paradox. It points out that when disagreements go all the way down to epistemic principles, reasonableness goes by the board. The second challenge comes from results in social psychology. It forces us to wonder whether reasons are really effective tools for persuasion at all.

  Neither of these challenges is new. And so it would be wrong to say that the Internet itself is making us less reasonable. It would be more accurate to say that we are making ourselves less reasonable with the help of the Internet. Or more precisely still, that the Internet is exaggerating these challenges, making them even more pressing.

  In both cases, it is the very availability of so much information—our life in the library—that is part of the problem. That’s a point Haidt emphasizes: “Whatever you want to believe about the causes of global warming or whether a fetus can feel pain, just Google your belief.”25 Our ability to access so much information just makes it easier than ever to follow our hardwired tendencies to make the facts fit what we already think.

  There are reasons for hope, of course. We actually do use the Internet to hold one another to account—to solve the information coordination problem I talked about in the last chapter. Think of the ubiquitous smartphone check. How often have you been at a party, or in a bar, or in a lecture, and someone makes some point of fact; out come the phones and a race is on to see who can verify (or falsify) it first. We are holding one another accountable when we do this (and sometimes also being annoying). Wikipedia has become one of our most widely shared public standards of evidence. And that is often a very good thing—it cuts down on irresponsible assertions (even if it also cuts down on spontaneity). Moreover, as we’ll see later in the book, there are obvious ways in which the Internet can be a force for social cohesion and democratic discussion.

  We also shouldn’t be too willing to accept the Glauconian view of the function of human rationality. That’s partly because human rationality is too complex to have a single kind of function. In giving reasons, we certainly aim to get others to agree with us (I’m doing that now, after all). And aiming at agreement is a good thing, as is searching out effective means of reaching it (indeed, this is one of the noble ideals of Haidt’s book). But it is less clear that we can coherently represent ourselves as only aiming to get others to agree with us in judgment.

  To see this, think about the difficulty in being skeptical about the role of rationality in our lives today. The judgment that reasons play a weak role in judgment is itself a judgment. And the Glauconian skeptic has defended it with reasons. So, if those reasons persuade me of his theory despite my intuitive feelings to the contrary, then reasons can play a trumping role in judgment—contra the theory.

  Of course, one might reasonably say that the reasons to accept views like Haidt’s are not value judgments. They are scientific claims. But even the most “scientific” of claims is informed by value judgments. Science itself presupposes certain values: truth, objectivity and what I call epistemic principles—principles that give us our standards of rationality. Moreover, outside of mathematics it is rare that the data is so conclusive that there is just one conclusion we can draw.26 Usually the data admits of more than one interpretation, more than one explanation. And that means that we must infer, or judge, what we think is the case. And where there is judgment, there are values in the background. Hence the point: arguing (with reasons) that reason never plays a role in value judgments is apt to be self-defeating.

  There is a larger point here. Even if we could start seeing ourselves as only giving reasons to manipulate, it is unclear that we should. Suppose I offered you a drug that, once dropped in the water supply, would make most folks agree with your political views. It would be tempting, wouldn’t it? But it would also be wrong. And it is wrong in the very way that we think the sleaziest political ads are wrong. To engage in democratic politics means seeing your fellow citizens as equal autonomous agents capable of making up their own minds. That means that in a functioning democracy, we owe each other reasons for our political actions. And obviously these reasons can’t be “reasons” of force and manipulation, for to impose a view on someone is to fail to treat her as an autonomous equal. That is the problem with coming to see ourselves as more like Glauconian rhetoricians than reasoners. Glauconians are marketers; persuasion is the game and truth is beside the point. But once we begin to see ourselves—and everyone else—in this way, we cease to see one another as equal participants in the democratic enterprise. We are only pieces to be manipulated on the board.

  This ethical or political point doesn’t, of course, settle the issue of whether our psychologies are hardwired in such a way that reasons have very little influence over us. But it does illustrate what’s at stake, and cautions against drawing conclusions too quickly.

  A similar point can be made about the first, and older, challenge to reasonableness, the one stemming from the ancient skeptics, as another story from the history of philosophy suggests.

  Johann Friedrich Zöllner was an eighteenth-century clergyman and political essayist now remembered for being the guy who inspired Kant to define the Enlightenment. The context will sound weirdly familiar. At the beginning of the French Revolution, Zöllner published an article in the Berlin Monthly opposing the progressives of his day, who were arguing that marriage should be treated as a civil, not a religious institution. Zöllner said that only religion could provide a proper basis for marriage and that religious authorities should be given more weight in civil matters. Enlightenment values, as the progressives called them, weren’t up to snuff. And besides, he sneered in a footnote, no one could ever explain what “enlightenment” meant anyway.

  A few months later, Kant published a direct answer to Zöllner’s challenge. We encountered it in the first chapter: enlightenment, Kant said, means having the courage to think for yourself. Thus the Kantian bumper sticker: Sapere aude; dare to know.

  Kant’s concern was partly with intellectual autonomy. But he also points out that we are beings that can think for ourselves, and so in our role as citizens we owe it to one another to explain ourselves in ways that respect that fact. That’s because, Kant says, when I give you reasons I treat you as someone who is free to make up your own mind. I treat you with dignity. I treat you as a grownup. So, even if you really do know the truth—even if you are an oracle with all-knowing powers, or Plato’s philosopher king—you shouldn’t just appeal to that fact in public debate. We owe one another reasons that appeal to our shared humanity—that others have the potential to recognize as reasons just because they are human.

  Kant’s point helps to mitigate the force of the ancient skeptical argument even if it doesn’t answer it directly. The skeptical argument says, in effect, that we can’t defend fundamental scientific methods as any more rational than other methods. What Kant points out, however, is that we can show that they are more democratic, more respectful of basic human autonomy. Why? Because scientific methods use human cognitive capacities such as observation and inference. That doesn’t mean these capacities are always reliable, or even that we are very skilled at using them (news flash: we aren’t). But human capacities like these—capacities that are at the very basis of science—do have an obvious virtue for a digitalized society: they aren’t secret or the province of a few. Observation and logic are strategies that everyone can, at least to some extent, use themselves and employ in their social networks, and that can be made at least a little more effective with training. It is no coincidence that Locke and other champions of science were also champions of what we now call human rights. Prioritizing scientific methods is liberating precisely because it frees one from appeals to authority, from the thought that something is true because someone in power says so.

  The Internet has created an explosion of what I called in the last chapter receptive knowledge. We saw there that while this is wonderful in many respects, it isn’t enough; we need to exchange reasons and play by shared epistemic rules if we are going to solve the information coordination problem that faces all societies. But Kant reminds us that reasonableness defined in this way also has serious political and democratic value. That’s why it is so crucial that we pay attention to how we are encouraging people to know about the world, and in particular the sorts of institutions that help them do that. As Haidt has remarked, we aren’t going to get people to be more reasonable with one another by having them sign “civility pledges.”27 We need to promote institutions that encourage cooperation, and even face-to-face contact with people who have very different views. And more than that, we need to promote institutions that encourage us to engage our capacities for reflective thought. Institutional structures can help us overcome our private limitations—our biases, implicit or otherwise. That is what institutions are for. And that’s why, even if in private life you think of the Bible, or the Koran or Dianetics, as the ultimate authority on the universe, in public life you should support institutions like the National Science Foundation or the National Endowment for the Humanities or, frankly, your local university. Institutions that encourage the use of critical thinking and the civil exchange of reasons are doing the work of democracy. In part, that is because broadly scientific principles of reasonableness privilege principles that everyone appeals to most of the time—just because we are built that way.

  Indeed, that’s the very reason some people don’t like the idea that we should privilege these sorts of principles in public discourse. Consider this little item from the “you can’t make this up” department. In 2012, the Republican Party in Texas included the following in their platform:

  Knowledge-Based Education – We oppose the teaching of Higher Order Thinking Skills (HOTS) (values clarification), critical thinking skills and similar programs that are simply a relabeling of Outcome-Based Education (OBE) (mastery learning) which focus on behavior modification and have the purpose of challenging the student’s fixed beliefs and undermining parental authority.

  Part of this is inside-baseball education-speak: the real target is “Outcome-Based Education,” not critical thinking per se. The interesting point here is why they are opposed: because it challenges the student’s fixed beliefs and undermines authority.

  I have no opinions on “Outcome-Based Education,” nor do I think that all Republicans are against critical thinking. My point is that this particular critique is profoundly, utterly undemocratic. It also illustrates Kant’s point. We need to privilege “scientific” epistemic principles and methods of thinking in public discourse precisely because such principles allow us to evaluate authority. What makes scientific methods of rationality important is that without them you can’t hope to have anything like an open society. Critical thinking—the teaching of it, and the use of it in political argument on the Web and in the media at large—matters because without it, we fragment.

  The philosopher Richard Rorty famously declared that, “if you take care of freedom, truth will take care of itself.” His idea, which he found in the educator and philosopher John Dewey, was that we can’t hope to ground our political principles on our scientific or epistemic principles. We can’t hope for a “foundationalist” view that places science on the bottom, holding up democracy. That’s because sometimes it goes the other way around: we have to ground our fundamental epistemic principles on our democratic values. But that doesn’t mean we should put politics first, science and epistemology second. Foundationalism turned on its head is still foundationalism. The right lesson to draw is one Kant would have thought obvious: our political and intellectual values are intertwined. The hard part isn’t seeing this fact; it is in trying to make sense of how we should improve our values—epistemic, intellectual and political—making sure that truth and freedom take care of each other.

  However we ultimately solve this problem, accelerating fragmentation is not to be taken lightly. Civil society requires that we treat one another with respect. We need to view one another (at least some of the time) as autonomous thinkers—as persons who can make up their own mind and have the right to do so. Give up on that and you give up on a central element of what Dewey would have called the public life: a common currency of principles and reasons that we can use to sort information and disputes over that information. The worry we’ve canvassed in this chapter is that the infosphere is making a true public life harder to achieve. We live in a Library of Babel, isolated in our separate rooms, poring over information culled from sources that reinforce our prejudices and never challenge our basic assumptions. No wonder that—as the debates over evolution, or what to include in textbooks, illustrate—we so often fail to reach agreement over the history and physical structure of the world itself. No wonder joint action grinds to a halt. When you can’t agree on your principles of evidence and rationality, you can’t agree on the facts. And if you can’t agree on the facts, you can hardly agree on what to do in the face of the facts, and that just increases tribalization, and so on and on in a recurring loop.

  Before you know it, the library has burned to the ground.

  4

  Truth, Lies and Social Media

  Deleting the Truth

  In July of 2012, Western news outlets reported that “truth” was deleted from the Internet in China. According to Chinese bloggers, searches for the Chinese character for “the truth” on the popular Twitter-like social media site Weibo resulted in the following message: “According to relevant laws, regulations and policies, search results for ‘the truth’ cannot be displayed.”1

  If the reports are to be taken seriously, Chinese censors were not content with preventing people from accessing truth; they wanted to prevent people from even discussing what it is.

  Over and above its inadvertent hilarity, this reminds us that the Internet is a revolutionary tool partly because it allows people to look for the truth on their own—independently of what governments, the scientific establishment or their own mother think is true. Perhaps the most striking example of this was the Arab Spring. As is now well documented, social media—specifically Twitter—not only allowed protesters to effectively organize, it gave them a way to let the world know about what was happening in their countries—countries that were ruthless in squashing regular media outlets. Since then, Twitter activism has only increased, and protest movements around the world use it to get their message out and to speak truth to power. That’s widely known.

  It is also widely known that the Internet is the world’s most powerful tool for controlling and distorting the truth. We’ve already talked about how the geography of the Internet encourages herd mentalities, lemming-like information cascades and group polarization. But it is also, in the philosopher and critic Jason Stanley’s term, an excellent “vehicle” for propaganda.2 And not just in China, obviously. Ask Google, “What happened to the dinosaurs?” and you may get at the top a framed answer “card,” as I recently did, that says, “The Bible gives us a framework for explaining dinosaurs in terms of thousands of years of history, including the mystery of when they lived and what happened to them. Dinosaurs are used more than almost anything else to indoctrinate children and adults in the idea of millions of years of earth history.”3 This alarming result shows that Google can be, and often is, gamed. A savvy organization can make smart use of its metrics to get its result right on the cards themselves—so it becomes the very first result, framed in a way that makes it seem to the casual Google-knower to be a “fact.” And the organization can do all that even while insisting—as all good propaganda does—that it isn’t propaganda at all.

  In short, the Internet is very much a bloody, messy battleground for the truth wars. As such, it can seem harder than ever to know what is true, and this in turn has caused some to think that the concepts of truth and objectivity have outworn their welcome.

  The Real as Virtual

  The problem of distinguishing the real from the unreal, or the true from the untrue, is hardly the result of the digital age. What’s new is how the problem manifests itself.

  Take a coin out of your pocket and hold it in your hand before you. Now look at the coin: what shape does it look like? If, like most people, you say “round,” then I suggest you look again. Unless you are holding the coin directly in front of your face, chances are you are seeing a more elliptical shape. This is confirmed if you make a realistic drawing of the coin. A child might draw a circle, but a more skilled artist would draw the ellipse. Why? Because that’s what we are perceiving. But if so, then we have a puzzle. The coin is circular. What we perceive is not circular. Therefore, what we perceive is not the coin.

  This is the sort of argument that persuaded Locke to hypothesize that what we directly perceive are not the objects themselves but our perceptions or representations of them: our “ideas” of them, as he put it. Locke argued that this was the only way to explain how we sometimes get the world wrong. Optical illusions (like the shape of the coin) are one example.

  Locke also used the “idea” idea to explain the fact that our perceptions are often relative. Here’s another of his experiments, one which you may have done as a child. Take three bowls of water, one hot, one cold and one lukewarm. Put your right hand in the cold, your left in the hot, and then put both in the lukewarm. We know the result: the middle bowl will feel hot to the hand that was in the cold water and cold to the hand that was in the hot water. So, what is it that we perceive? Locke’s answer, following Galileo, was that all substances had two types of “qualities.” Their primary qualities were those aspects that were really “in” the objects, as Locke put it—those properties that they had independently of anyone perceiving them. Locke’s favorite examples were size, shape and extension in space, but today we might say that mass is the prime primary quality. Secondary qualities, on the other hand, were not “in” the object. Instead, Locke said, they were really just the powers that objects had, by virtue of their microstructure, to cause in us certain perceptions or ideas. Colors, smells, tastes and feelings like warmth and cold were like this, he said. Thus to say a fire engine is red is not to say that it has some inherent redness in it: there are no “red” particles that compose it. Redness and other “secondary” qualities are defined by reference to how we perceive them.