Know-It-All Society


  Examples abound—from the “smartest guys in the room” at Enron to the “best and brightest” in the Kennedy and Johnson administrations who got America embroiled in Vietnam. Intellectual arrogance blinds us to mistakes. This is the heart of the somewhat misleading proverb “Pride goeth before a fall”—which is deceiving because justified pride is no more dangerous than justified confidence. But intellectual arrogance can be dangerous, precisely because it involves a confusion of ego with truth. When you see yourself as one of the “smartest guys in the room,” you see yourself as confident and strong, and you think your view must be right because, well, it is yours. But that confidence just makes you miss obvious facts, precisely because you think you know all the facts already.

  The delusional nature of intellectual arrogance explains why people rarely see it in themselves. But we can spot it in others. There’s the paradigmatic drunk uncle who weighs in on any political topic and always sees his own opinion as the last word. This is the guy no one wants to sit next to at Thanksgiving, who smugly tells those whose experience he knows little about what they should be feeling and thinking, who refuses to acknowledge alternative viewpoints as anything other than fake news. Then there is the person who, while civil and seemingly reflective, never changes her view or even admits that she might need to think about things from a different perspective. Over time you realize she is not actively listening to what you say, but only waiting for her turn to speak. There is the ubiquitous man who refuses to accept other people’s points except when he repackages them as his own personal insight. Someone like this might seem open to new ideas; he listens, maybe even learns, but he doesn’t see himself as learning from others. He sees his beliefs as being improved by his own genius.23 The fact that we recognize these characters so easily is a sign of the pervasiveness of intellectual arrogance.

  Like other socially oriented attitudes, intellectual arrogance is interpersonal and dependent on context. People can be intellectually arrogant on certain topics but receptive and humble about their knowledge of others. But intellectual arrogance is also typically directed at particular kinds of people or sources of information. Someone who is intellectually arrogant feels superior, and typically, people feel superior not just in general but toward a person or a kind of person in particular. And that is what makes intellectual arrogance politically important and troublesome: it can become tribal.

  Attitudes are often contagious; any group can rapidly come to share almost any attitude. When we feel happy, or sad, or fearful, others around us can come to those same feelings. An attitude becomes tribal, however, when it is not just shared by a group but is implicitly or unconsciously social in its content—that is, experienced as part of a “we” and directed at a “them.” Racist attitudes are a prime example, as are attitudes of contempt or resentment when we direct them at entire groups of people. Intellectual arrogance, too, is most virulent when it is tribal. Tribal (intellectual) arrogance means being arrogant toward others because they are not like us: they are Republicans or Democrats, African Americans or immigrants, atheists or religious believers. We know; “they” don’t. We have nothing to learn from them, and our capacities for knowing (or knowing about a specific topic) are superior, more developed, more refined.

  Tribal arrogance is therefore intrinsically hierarchical. It is the arrogance of whites over nonwhites, of men over women, of native-born over immigrant. But it is also the arrogance of the educated over the uneducated, the rich over the poor, the cosmopolitan over the provincial. For the tribally arrogant, those in other tribes are like children, and for that reason, there is a sad history of the arrogant denying rights to those they consider inferior, precisely because they view those “inferior” people as having less of a capacity to reason and to know.

  When we become tribally arrogant toward others, we go beyond experiencing arrogance as a mere attitude. Attitudes can come and go: one can be arrogant on Tuesday but not on Wednesday. But tribal arrogance is typically more of a mind-set, by which I mean a collection of attitudes and beliefs about your tribal convictions and knowledge and their superiority over others. Mind-sets are less transitory. They become fixed and rigid, and hard to shake. That’s why tribal arrogance, when pervasive, is so very dangerous.

  While he didn’t speak about it in the same terms, this danger was something Montaigne knew all too well. Our arrogant assurance in the superiority of what he called our own habits and customs could lead to the worst in humanity. “I live in a season when unbelievable examples of this vice of cruelty flourish because of the license of our civil wars; you can find nothing in ancient history more extreme than what we witness every day.”24 This is Montaigne’s warning about intellectual arrogance: once it becomes tribal, it dehumanizes and destroys. That’s a warning as relevant today as it was almost five centuries ago. To heed it, we must understand not only the social conditions that give rise to this attitude but the factors promoting its spread in our culture right now.

  2

  The Outrage Factory

  Google Knows All

  Much of what we know we “Google-know.” The internet is our go-to source of information on almost any topic. It is what we check first—and what we check last, using it to settle our disputes both banal and profound. How many times has someone you know pronounced on some point of fact, only for everyone else in the room to race to their phones to verify or falsify it? We routinely use googling to trump other forms of inquiry, even to question experts. While we all know that googling can lead us astray, that doesn’t stop us from using it routinely, or from regarding it as essentially reliable on a broad range of topics. Indeed, for most of us, searching online happens without much forethought. It is just the obvious, immediate first step in answering almost any question about the social world; for those questions, it has a kind of priority.1 We trust it.

  Our reliance on Google-knowing turns out to feed our natural tendency to overinflate what we know. The devices we carry around in our pockets give us access to a world of information at the tip of our fingers (or thumbs). No wonder we feel more knowledgeable with our phones in our hands, as the psychologist Matthew Fisher found in studying the relationship between internet searches and the illusion of explanatory depth. In his study, Fisher again asked different groups questions, like “How does a zipper work?”2 The first group was encouraged to search the internet to check on their explanations; a second group was not allowed to use outside sources. Then each group was asked to rate how much they knew about topics that had nothing to do with the first question they had been asked (in this case, nothing to do with the way zippers work). The result? Those who had searched the internet rated their ability to know answers to unrelated questions higher than those who had not been allowed to search the internet. Merely searching the internet convinces people that they know more than they do—even about things they haven’t yet researched. It is as if the sheer speed and ease of access to information on the internet causes us to lose track of how much we rely on it, thereby “distorting how we view our own abilities.”3 And that makes us think we know more than we do. Of course we’re right, we think—just google it!

  Yet the most important fact about Google-knowing—and indeed about our entire online life over the last decade—is not how much information it gives us, but the fact that it gives us just the information we want. The Internet of Things is really an Internet of Us, our fingers busily curating our online lives, visiting the sites we want, using the apps we want, and carefully crafting our Facebook experience so it reflects the image we want. Our online life, in other words, is deeply personalized. That’s because Facebook, Google, and most of our apps, search engines, and social platforms all work in the same basic way, different algorithms aside. They attempt to track people’s preferences by way of tracking their likes, their clicks, their searches, or their friends. That data is then analyzed and used to predict what a given person’s current and future preferences will be. It is used to predict what sort of information you—and crucially, those similar to you—will find interesting, what posts you will like, and what links you will click. This preference aggregation then helps to dictate the results of your searches, and what you see on the various platforms you use. That is why those shoes you were thinking of buying are being advertised on your Facebook feed. And these same algorithms help predict not only what we’ll click and like next but what we’ll buy, who we’ll find attractive, and how we’ll vote. As a result, both our online and offline lives are increasingly tailored—meant to satisfy our preexisting preferences.

  It seems likely, even obvious, that the preference-tracking structure of our digital platforms is playing a part in our growing know-it-all-ism. That’s not because the internet has any dark power all its own. It just feeds into our human tendency to overinflate what we know by reinforcing what we already believe. Googling is like being in a room with a million shouting voices. It is only natural that many of us hear the voices most similar to our own, shouting what we already believe; as a result, Google can find you confirmation for almost anything, no matter how weird. No wonder there has been an explosion of dogmatism. The mechanisms that make Google-knowing possible make us not only individually overconfident about what we understand but overconfident about what our tribe understands. The internet becomes one big reinforcement mechanism, obtaining for each one of us the information that we are already biased to believe, and encouraging us to regard those in other bubbles as misinformed miscreants.

  But that doesn’t just make us dogmatic and arrogant. It also makes us easy marks.

  Fake News and Information Pollution

  In December 2016, a man by the name of Edgar Welch entered a pizzeria in Washington, DC, armed with an assault rifle. Welch was there to “self-investigate” a bizarre conspiracy theory, according to which Hillary Clinton and other Democratic politicians were allegedly running a child sex-trafficking ring from the basement of the restaurant. To Welch’s surprise, the theory turned out to be false. Not only was there no sex-trafficking ring in the basement; there wasn’t even a basement. As Welch would later tell the New York Times, “The intel on this wasn’t 100 percent.”4

  The dark fantasy that caused Welch to turn up in DC that day is the epitome of “fake news”—by which I mean deliberately misleading news stories that are spread for fun, profit, and political gain. And Welch’s actions are widely cited as an example of the harm that fake news can do. Conspiracy stories like Pizzagate—and even weirder variants, like QAnon—are just one form of a broader phenomenon that we might call “information pollution.” Information pollution is the dumping of potentially toxic information into the media environment. Information can be toxic in different ways, but the most obvious ways are by being false (misinformation), intentionally deceptive and misleading (disinformation), or simply not based on any evidence at all.

  Information pollution is not new, nor are justifications of its use for political ends. Machiavelli advised that princes must always be ready to deceive, and to do so boldly. Benjamin Franklin apparently planted false stories about the Seneca Indians and their alliances with Britain for the purposes of swaying public opinion both in America and abroad. And The Protocols of the Elders of Zion, a virulent, made-up anti-Semitic text that is still in circulation today, was initially devised around 1905 for the purposes of stoking resistance to the Bolsheviks during the Russian Revolution.

  Information has always been, and will continue to be, a chief tool of empire and war. It is by the use, and misuse, of information that those who desire to manipulate hearts and minds have always acted. And the purpose of that action is almost invariably the same: to rouse the passions of the ordinary citizen and to instill a dogmatic and unforgiving attitude. With this attitude instilled in them, people can be made to do the most inhuman things to each other. That is the case in totalitarian regimes, and it is the case in contemporary liberal democracies—where propaganda, while necessarily different in character, is no less effective.5

  Yet while information pollution has been around as long as knives in the back, whispers in the ear, and the written word itself, we are living in a new golden age for information polluters. In large part, that is because of the preference-tracking nature of our digital platforms. Internet personalization is not just good for business; it is good for politics. Russian troll farms, political campaigns, and “research” firms (like the now despised Cambridge Analytica) use the personalized internet to help place targeted political advertisements and to get people to “like” and “follow” fake social-media accounts that feed them “news” reinforcing their political opinions. That’s why the personalization of the internet is great when it comes to shopping for shoes, but terrible when it comes to shopping for facts. When the only facts you receive are those tailored to fit your biases, you are a ripe target for manipulation.6

  Most people tend to assume that information pollution is wrong because it amounts to lying. But framing the issue solely in terms of lying actually underplays and mischaracterizes the danger. That danger isn’t lies per se but deception. Lying is not quite the same thing as deception. To lie is to deliberately say what you believe to be false with the intention of deceiving your audience. I can deceive you without lying (silence at a key moment, for example, can be deceptive). And I can lie to you without deceiving—either because you are skeptical and don’t believe me, or because what I say is inadvertently true; either way, you are lied to but not deceived. You might be led to think that deception occurs when someone is actually caused to believe what is false. “Deception,” as philosophers say, is a “success term.” But that’s only halfway there. Deception can happen even without false belief.

  Reflect on that old con, the shell game. The con man presents three shells, one of which has a coin underneath. He moves the shells around and asks you to pick the shell with the coin. If done right, it looks easy; but it isn’t. Using sleight of hand, he distracts you so that you can’t track the right shell and know where the coin is. But one can lack knowledge without having a false belief. One can be simply confused, and that is typically the case with such tricks. You don’t know what to think, so you simply guess. You can be deceived not only by believing what is false but by not believing what is true.

  The use of social media to spread political misinformation online is partly just a giant shell game. Propagandists often don’t care whether everyone, or even most people, really believe the specific things they are selling. They don’t have to get you to actually believe the coin is under the wrong shell. They just have to get you confused enough that you don’t know what is true. That’s still deception. And it is this kind of deception that dreadful for-profit conspiracy sites, and Russian-sponsored troll farms, have been particularly adept at spreading on social media. No doubt, some percentage of people actually believe such postings, but a far greater number of people come away ever so slightly more doubtful of what is true. They don’t know what to believe.

  It used to be that when someone would say something outrageously false (for example, “the moon landing was faked”), it would be ignored by most folks, who reasoned, “If that were true, I would have heard about it by now” (in other words, heard about it from creditable, independent sources). Filters (primarily, editors) worked not only to weed out the bad but to make sure the truly extraordinary real news came to the surface. The internet has made such reasoning moot, simply because so many of us are ensconced in our own information bubbles. Few people reject strange claims on the grounds that they haven’t heard them before now, because chances are they already have heard them, or something close to them, from the sites that tend to confirm their biases.7 That fact makes them more susceptible to accepting fake news, or at least not rejecting it. It makes them easier to confuse.

  Even more insidiously, the tendency to accept bizarre claims actually can encourage us to be more close-minded and dogmatic—and for weirdly “rational” reasons. Part of being open-minded is taking relevant alternatives to your view seriously.8 You look into the alternatives; if they check out, you modify your view. If they don’t, you carry on. That, after all, is presumably what Edgar Welch tried to do. But as his case also illustrates, the internet has made many more alternatives relevant. And it has normalized crazy. Because for almost any view that you want to think is true—like, leading Democrats are selling children with pizza slices in DC—you can find “evidence” supporting it online. That means not only that some naïve people are deceived but also that the rest of us now need to decide which bizarre stories we pay attention to and which we dismiss. And because there is so much weirdness, we often dismiss stories out of hand when they don’t emerge from sources we already trust. We don’t really check out the alternatives, because (at least we tell ourselves) there are simply too many outrageous claims out there. And that’s true, but the net result is that we are just spending more and more time reading news from sources geared to tell us exactly what we want to hear anyway.

  That’s got to be one of the saddest ironies in a time rich with them: the technology that allows us to know more than ever before can also turn us into close-minded dogmatists. And information pollution, while it feeds that fact, isn’t the only reason. There’s another, even more basic way that the internet encourages our know-it-all culture.