Know-It-All Society
Sharing Emotions
One of David Hume’s most famous philosophical maxims is that reason is the slave of the passions. His point is that while reason can tell us how to get someplace, it can’t tell us where to go. Only the heart can tell us our ends; reason gives us the means. Hume himself was a man of large appetites, famous for his charisma, conversation, and literary ambition as much as his philosophy, which was ahead of its time and often shocking to his contemporaries. Another of his central insights was that human beings often deceive themselves about the role of passion in our lives. We know that emotions matter in love and war, but we fool ourselves into thinking they play a lesser role in our ordinary interactions with one another. Hume thought morality itself was based on our “social passions,” and he felt that religion often misled us about the basis of moral distinctions. One doesn’t have to agree with him on this to see the general point. Too often we think we are playing the game of reason when we are actually playing the game of passion. And Hume’s point turns out to be crucial for understanding how our use of technology can unwittingly feed our tendency toward arrogance.
To see how, let’s start by noting that, while useful, the term “information pollution” can be misleading. The metaphor assumes that the broader information culture that is being polluted, like nature, is already given and—save for the interventions of the polluters—pure. But it is we humans who make and convey information, and it is we who construct and live in the wider information culture. The internet is not just something that has happened to us; it is a world we’ve created. As every good propagandist knows, one can’t seize hearts and minds without appealing to something that is already hidden deep within them. And so it is with fake news. We’ve created a digital world that reflects our tendency to care less about the truth than we profess, even while it encourages us to be more arrogant about our tribal convictions. In other words, we are living in not just a polluted information culture but a corrupted one.
Corruption is not the same as pollution. Pollution is something that happens to a system; corruption is something that happens within a system. One way social systems are corrupt is that sometimes their stated rules are not their real rules. A criminal justice system is corrupt, for example, if it purports to treat everyone fairly but actually discriminates on the basis of race, or when quid pro quo actions (such as favors on behalf of police or judges) are widespread. Likewise, an information culture is corrupt when the rules of evidence and reliability that some of its participants allegedly adhere to—their epistemic principles, in other words—are not the ones they more frequently employ.9 This phenomenon might be what some people mean when they talk about living in a “post-truth” culture. Of course, we don’t literally live in a world where nothing is true. Truth exists as much as it ever has. What has happened is that our information culture has become so corrupt as to tolerate and encourage self-deceptive attitudes toward truth and evidence. It encourages us to care more about our convictions than about truth, but to tell ourselves we are doing otherwise.
A small but telling example of how bad faith manifests itself online is the way some people reacted to the Pizzagate story. That story, recall, was very specific: Hillary Clinton was selling children for sex out of one particular pizza joint in Washington, DC. The story was widely circulated. By the time Mr. Welch pulled his Chuck Norris act, it had been running around various far-Right circles for months. Welch himself reported that he had heard it by word of mouth. He reportedly was surprised and “checked into it” by watching videos and reading stories on the internet. All of these stories claimed this was really happening: child abuse in a specific place.
So ask yourself: If you really believed this was happening, wouldn’t you, too, try to do something? Well, some people did act—sort of. There were some scattered and very small protests—actually just a few folks standing across the street with signs. There were a lot of death threats—not just to those owning and working at the pizzeria, but apparently to those owning and working at business establishments up and down the block. Which is, frankly, bizarre, and no doubt frightening. But it is not a particularly effective way to stop a child-trafficking ring that was allegedly happening in a public establishment and that millions of people supposedly knew about. And in any event, death threats weren’t the most common reaction. The most common reaction by those who claimed to believe this story was simply to pass it on, to repeat it, and to form other views based on it.
Even more interesting was the reaction within certain far-Right media circles following Mr. Welch’s attack. It is a reaction that has become all too familiar. Immediately after the attack, the far Right began circulating the theory that Edgar Welch was a “crisis actor”; that is, he was paid by political interests on the Left to enter the pizzeria with a gun in order to embarrass far-Right conservatives.10 These accusations—and similar charges about “false flag attacks”—are now ritualistically repeated following mass shootings in the United States. And they do not come out of the blue; civil rights protestors were often accused in the 1960s of being paid actors. So go conspiracy theories, you might say; there is no getting around their craziness. But this is a bit different: in the case of Edgar Welch, such accusations not only provided a weird spectacle but pointed to one way in which we corrupt our media environment. It was as if the Trump supporters who had insisted they believed the story now wanted to label that very fact—the fact they had believed it—“fake news.” They wanted to deny that anyone like them would act on a story that they constantly said required action. This is bad faith and intellectual arrogance in action. And it shows how disinformation can corrupt as well as pollute.
It would be a grave mistake, however, to think that nutcase conspiracy theorists are the only ones corrupting our information culture. To some extent, we all do—at least those of us on social media. That’s because, as I earlier noted, one way a system can be corrupted is by running on different rules than the “official” ones—the ones we think it runs on. This often happens on social media, which is designed to function as a vehicle for our passions more than for reason.
This fact manifests itself in ways we often don’t realize—right down to the function of the communicative acts we engage in online. One of the things that many people—including many people reading this book—do on Facebook is post or share news stories. The posting of a news story can be seen as a communicative act. Our most common communicative acts are verbal, but written ones count too—memos, letters, and yes, Facebook and Twitter posts. Our communicative acts come in a variety of forms. We can question (“Is the door closed?”) or command (“Close the door!”) or assert and describe (“The door is closed”). Sometimes—indeed, often—we are doing more than one thing at a time, such as when I say that the food at a particular restaurant is really tasty while we are wondering where to grab dinner. In that context, I could perform several communicative actions at once: assert that the food at that establishment is tasty, endorse the establishment as the place where we ought to eat, and just express my feelings of anticipation and hunger at the prospect of eating there.
When we share media stories online—especially when we do so without comment—we typically appear (to others and ourselves) to be doing something similar. We appear to be engaging in one or both of the following communicative acts: providing testimony—asserting that something is the case, typically assumed to be summed up by the headline (for example, “Hillary Clinton Suspected to Have Been Acting for Russians”); or endorsing or recommending a piece of information as worthy of attention or even belief, possibly even saying just that in a comment on the post (“Everyone should read this!”).
That’s not all we do with news stories that we post, naturally. Sometimes we aren’t actually endorsing the story; we are sharing it as something we think is amusing, really dumb, or ironic. When we do something like that, the kind of act we are engaging in is self-consciously expressive; we are expressing our amusement, or ironic detachment, or frustration. We aren’t trying to convey something factual. Yet although that happens, we don’t assume it is the typical case—as evidenced by the fact that most people feel obligated to signal in some way that their act of sharing shouldn’t be understood as endorsement. On Twitter, for example, it is not uncommon for people to declare on their profile page that “retweets are not endorsements.” Such a declaration wouldn’t make sense if the default assumption weren’t that shares are endorsements.
So, shares typically seem to us like assertions and/or endorsements of assertions. But what if that appearance is just that: appearance and not reality? What if we are just confused about the way communication actually functions online? As it turns out, there are reasons to think that we are, in fact, confused.11 These reasons concern both what we do and what we don’t do when we share content online.
Let’s start with what we don’t do. Current research estimates that at least 60 percent of news stories shared online have not even been read by the person sharing them. In 2016, researchers at Columbia University, for example, arrived at this conclusion by cleverly studying the intersection between two data sets.12 The first data set was made up of Twitter shares over the course of a month from five leading news sites—as tracked by tweets containing links to stories on those sites. The second set of data consisted of the clicks over the same period connected to that set of shortened links. The sets were massive—2.8 million shares responsible for seventy-five billion potential views, and almost ten million clicks. After designing a methodology for sorting through these correlations, and correcting for its biases, the researchers found that only four in ten people tweeting out news items have actually read them.13 As one author of the study summed up the matter, “People are more willing to share an article than read it.”14
So that’s what we don’t do: read what we are sharing. What we do is share content that gets people riled up. Research has found that the best predictor of sharing is strong emotion—both emotions like affection (think posts about cute kittens) and emotions like outrage. One study suggests that morally laden emotions are particularly effective: every moral sentiment in a tweet increases its chances of being shared by 20 percent.15 And plausibly, social media actually tends to increase our feelings of outrage. Acts that would not elicit much outrage offline elicit more online. This intensification may be due in part to the fact that the social benefits of expressing outrage online—such as increased tribal bonding—still exist and are possibly amplified, while the risks of expressing outrage are lessened (on the internet, it is harder for those you are yelling at to strike back with violence). Moreover, outrage can itself simply feel good. And since our digital platforms are designed to maximize shares and eyeballs on posts—and outrage does that—it is not surprising that the internet is a great mechanism for producing and encouraging the spread of outrage. As the neuroscientist Molly Crockett puts it, “If moral outrage is like fire, then social media is like gasoline.”16
Put together, these points—what we are doing with our shares and what we are not doing—make it difficult to believe that the primary function of our communicative acts of sharing is really either assertion or endorsement, even though that’s what we typically think we are doing.17 By the “primary function” of a kind of communicative act, I mean that which explains why the act continues to persist. The primary function of yelling “Air ball!” at a basketball player trying to make a free throw is to distract him. It may do other things too—amuse people, or even describe what, in fact, turns out to be an air ball. But the reason people continue to yell “Air ball!” is that it is distracting. Someone new to the game could conceivably get this backward. They might think that people are warning the player or predicting how the shot is going to fall. Such interpretations would be misunderstanding the act’s primary function.
Something like this is happening on a massive scale on social media. We are like the person just described, new to the game of basketball. We think “Air ball!” is meant to describe or predict. But it isn’t. Put differently, we think we are playing by one set of rules—the rules of assertion and endorsement—when we are actually playing by a different set of rules altogether. We think we are sharing news stories in order to transfer knowledge, but much of the time we aren’t really trying to do that at all—whatever we may consciously think. If we were, we would presumably have read the piece that we’re sharing. But most of us don’t. So, what are we doing?
A plausible hypothesis is that the primary function of our practice of sharing content online is to express our emotional attitudes. In particular, when it comes to political news stories, we often share them both to display our outrage—broadcast it—and to induce outrage in others. As Crockett has noted, expression of attitudes like moral outrage is one way that tribes are built and social norms enforced. Social media is an outrage factory. And paradoxically, it works because most folks aren’t aware, or don’t want to be aware, of this point. But it is just this lack of awareness that trolls and other workers in the fake news industrial complex find so useful. Purveyors of fake news are keenly aware that when we share, we’re doing something different from what we think we’re doing.
This is precisely where Hume’s point, noted earlier, gets traction. As Hume and philosophers following him have often been at pains to illustrate, humans frequently misunderstand their own communicative acts. In ordinary speech, that’s partly because the same words can be used to say, and do, very different kinds of things—to engage in different communicative acts. The sentence “I’m sorry” can mean “I apologize for what I did,” or it can be used to simply express feelings of sympathy for someone experiencing a loss. Used that way, it is a bit like a consoling hand on the shoulder or the phrase “There, there.” It is a way of signaling that we care. Given all the things we can do with words, it is not surprising that we can sometimes be confused. It is possible to use a medium of expression for one purpose when we think (and tell ourselves) we are using it for another.
In the middle of the twentieth century, one philosophical movement inspired by Hume diagnosed all of our moral thought and talk as falling into this category. Advocates of this view, appropriately called expressivism, argued that the purpose of moral language isn’t to describe the world. Saying that the death penalty is wrong is not an attempt to describe some feature of the death penalty. It isn’t like describing the weather as balmy. When we make moral judgments on this view, our real aim is to express our feelings and attitudes, in order to motivate others to feel similarly. As a result, it is a mistake, the expressivists argued, to think that an utterance like “the death penalty is wrong” is substantively true or false, because we aren’t really aiming for it to be true in the first place. It is not an attempt (whether successful or not) to state a fact but a way of expressing ourselves and what we feel matters in the world. In some ways, it is more like saying, “Boo, death penalty!” Shouts of “Boo!” and “Yay!” aren’t attempts to state facts at all; they convey emotion. By using them, we express ourselves, but we also hope to instill emotional reactions in others and strengthen the bonds between us. Knowledge and reason are not on our minds.
The original versions of expressivism, like a lot of philosophical views in their initial giddy moments of creation, overstepped. While it seems right that moral judgments are often a kind of self-expression, it seems just as likely that we can also (often at the same time) use them to describe what we think is really true. It is not an either/or situation. As a result, the original theories have been superseded by more nuanced views about the expressive aspects of moral communication.18 Nonetheless, there is something clearly right about the view as well because, as Hume reminds us, we often do ignore the power of the heart in moral life. In any event, whether or not expressivism about all moral judgments is correct, it is a very plausible explanation for what is happening when we share content online. Indeed, it is especially plausible there, for digital platforms are intentionally designed to convey emotional sentiment—because the designers of those platforms know that such sentiment is what increases reshares and ups the amount of attention a particular post gets. And whatever does that makes money.
I am not saying that we don’t endorse and assert facts on social media. Of course we do—just as some of us read what we share. Moreover, it is plausible to take ourselves to be endorsing or asserting that part of a shared post that we typically do read: the headline. As with shouts of “Air ball!” at a basketball game, our communicative acts online can do many things at once. But if you want to understand what I’m calling the primary function of a kind of communicative act, you need to look at the reason that the act continues to be performed. And in the case of sharing online content, that reason is the expression of emotional attitudes—particularly tribal attitudes. Why? Because expressions of tribal emotional attitudes like outrage are rewarded by the number of shares and likes they elicit.
The expressivist account of online communication is also compatible with the fact that we do form beliefs and convictions as a result of sharing attitudes. Compare “team-building” exercises. These kinds of exercises (like falling back into your colleague’s waiting arms) are not directly aimed at conveying information or changing your mind. They are aimed at building emotional bonds with your coworkers. But that, if all goes well, will have a downstream effect on what you believe. In learning to trust your team members, you will come to believe that this is the team you want to be on. A similar thing happens during the training of military recruits. Many of the exercises that new soldiers are put through are aimed at building trust and self-confidence. But especially in wartime, they are also aimed at making soldiers hate the enemy. This aim, too, has downstream effects: the soldiers come to believe they are fighting on the right side.
Social media is like boot camp for our convictions. It bolsters our confidence, increases trust in our cohort, and makes us loathe the enemy. But in doing so, it also makes us more vulnerable to manipulation and feeds our hardwired penchant for being know-it-alls. We think we are playing by the rules of rationality—appealing to evidence and data. But in fact, the rules we are playing by are those that govern our self-expressions and social interactions—the rules of the playground, the dating game, and the office watercooler. These rules have more to do with generating and receiving emotional reactions, solidifying tribal membership, and enlarging social status than with what is warranted by the evidence and what isn’t.19