  When one watches some tired hack on the platform mechanically repeating the familiar phrases—bestial atrocities, iron heel, bloodstained tyranny, free peoples of the world, stand shoulder to shoulder—one often has a curious feeling that one is not watching a live human being but some kind of dummy: a feeling which suddenly becomes stronger at moments when the light catches the speaker’s spectacles and turns them into blank discs which seem to have no eyes behind them. And this is not altogether fanciful. A speaker who uses that kind of phraseology has gone some distance toward turning himself into a machine. The appropriate noises are coming out of his larynx, but his brain is not involved, as it would be if he were choosing his words for himself. If the speech he is making is one that he is accustomed to make over and over again, he may be almost unconscious of what he is saying, as one is when one utters the responses in church.*

  And Orwell concludes that “this reduced state of consciousness, if not indispensable, is at any rate favourable to political conformity”—and also, one might add, social conformity. Orwell is quite right to call it a “reduced state of consciousness”: to borrow once again Daniel Kahneman’s language, it is as though complex questions that ought to be actively considered by System 2 have been shunted to System 1, where they run automatically. You have to suspect that if you took this “tired hack” to a pub and bought him a pint and tried to get him to defend his position, he would have nothing to fall back on except “the familiar phrases.” As Hobbes might put it, what ought to be his counters have become his money, and he has no idea what to do if someone refuses to accept them as legal tender.

  THE WORK OF METAPHOR

  These keywords are always dangerous, always threatening to become parasitic on thinking, but they do some of their most wicked work when they take the form of unacknowledged metaphors. This is one of the great themes of that seminal book by George Lakoff and Mark Johnson, Metaphors We Live By. In an especially important passage, they discuss the consequences of one of the most deeply embedded metaphors in our common discourse, the one that identifies argument as a form of warfare. Their examples:

  Your claims are indefensible.

  He attacked every weak point in my argument.

  His criticisms were right on target.

  I demolished his argument.

  I’ve never won an argument with him.

  If you use that strategy, he’ll wipe you out.

  He shot down all of my arguments.*

  The identification of argument with war is so complete that if you try to suggest some alternative way of thinking about what argument is—It’s an attempt to achieve mutual understanding; It’s a means of clarifying our views—you’re almost certainly going to be denounced as a wishy-washy, namby-pamby sissy-britches.

  We fixate so immovably on this notion of argument as war in part because human beings, generally speaking, are insanely competitive about everything; but also because in many arguments there truly is something to be lost, and most often what’s under threat is social affiliation. Losing an argument can be a personal embarrassment, but it can also be an indication that you’ve sided with the wrong people, which means that you need to find a new ingroup or else learn to live with what the Marxists call “false consciousness.” (It was in hopes of avoiding this choice that Phelps-Roper cut off communication with David Abitbol, but, as we saw, she had already crossed a kind of social and intellectual Rubicon.)

  So yes: argument can indeed be war, or at least a contest in which it is possible to lose. But there’s another side to this story: what is lost not in an argument but through passive complicity with that militaristic metaphor. Because there are many situations in which we lose something of our humanity by militarizing discussion and debate; and we lose something of our humanity by dehumanizing our interlocutors. When people cease to be people because they are, to us, merely representatives or mouthpieces of positions we want to eradicate, then we, in our zeal to win, have sacrificed empathy: we have declined the opportunity to understand other people’s desires, principles, fears. And that is a great price to pay for supposed “victory” in debate.

  If we look more closely at the argument-as-war metaphor, we’ll see that it depends on a habit of mind that is lodged very deep in our consciousness: the habit of dichotomizing. The best—the most accurate and nuanced—description of this habit I know was written twenty years ago by the paleontologist and evolutionary theorist Stephen Jay Gould, in an essay on what he called the “science wars”—yes, that metaphor again. Those “wars” pitted what Gould calls “realists”—“working scientists . . . who uphold the objectivity and progressive nature of scientific knowledge”—against what he calls “relativists,” who think that science is but a “social construction” and therefore “just one system of belief among many alternatives.”*

  Now, the people Gould calls “relativists” would probably have called themselves “social constructionists,” because “relativism” is usually perceived as a pejorative word, whereas “social construction” is a mode of doing something. (We’re “constructing”!) And the realists—well, who wouldn’t like to be called that, since it puts you on the side of reality. So we can see already that the words people use to describe themselves become a kind of currency, as Hobbes might put it, and a way to make those who disagree more completely Other. Lining up the definitions in this oppositional way puts us already in the region where military metaphors seem like the most accurate ones.

  Thus the “science wars” are launched. But for Gould, it seems clear that science is indeed a “culturally embedded” set of practices but also a reliable means of getting at the truth about the natural world. Had he left it at that, he would have fallen prey to another characteristic form of not-thinking, the kind that happens when someone cries, “It’s not either-or, it’s both-and!”—and then dusts off his hands and strolls contentedly out of the room. For Gould, the real intellectual work begins when you realize that “for reasons that seem to transcend cultural peculiarities, and may lie deep within the architecture of the human mind, we construct our descriptive taxonomies and tell our explanatory stories as dichotomies, or contrasts between inherently distinct and logically opposite alternatives.” That is, we have an inbuilt and powerful disposition toward dichotomizing—but one that we don’t have to obey. Once you know that the tendency to think dichotomously and militaristically is not just a local phenomenon, pertaining to this or that particular case, but exemplary of “our deeper error in parsing the complexities of human conflicts and natural continua into stark contrasts formulated as struggles between opposing sides,” then you have set yourself a task, not completed one. For now you must try to figure out how these nondichotomous forces work in relation to one another.

  Thus, in Gould’s case, if scientific practices are culturally constructed but can also lead us toward the truth about the world, how do you distinguish between the practices that are really helping us to sort things out and the ones that are leading us astray? This is another version of our discussion in Chapter 3 of bias and prejudice: we’re faced in a slightly different way with the problem of distinguishing the true prejudices, which help us to understand, from the false ones, which generate misunderstanding. And that’s exceptionally hard work. The cheerful both-ander doesn’t know what he’s about.

  So when people say, “They really mean the same thing, they’re just using different vocabularies to express it,” or “We all believe in the same God, we just express that belief in different ways,” we may with some justification commend those people for attempting to get beyond confrontation, dichotomy, argument as war. But we have to go on to say that the attempt is a facile one. The real story will be far more complicated, and not to be grasped by replacing a fictitious polarity with an equally fictitious unity. Blessed are the peacemakers, to be sure; but peacemaking is long, hard labor, not a mere declaration.

  The opposite of the sunny we-all-really-agree optimism is the gloomy we-never-can-agree pessimism. The great nineteenth-century writer Sydney Smith was once walking through a narrow alley (a “close”) in London and saw two women, a couple of stories up, leaning out of their respective windows and shouting at each other across the gap between their buildings. “Those two women will never agree,” he said; “they’re arguing from different premises.”* The oft-stated view of the literary and legal theorist Stanley Fish is that whenever we disagree we do so from different, and irreconcilable, premises. So, in a recent interview he gave to The Guardian of London, he commented that two radically different accounts of the massacre at the Pulse club in Orlando, in June 2016, had immediately arisen: “In the one case, the case of the New York Daily News, the villain was the National Rifle Association, who was practically accused of pulling the trigger. . . . On the other side, the New York Post side, this was an event which was one more round in the long-running battle between ISIS and the United States.” But these accounts run on parallel tracks that never meet.

  Fish points out that this is true of many communities of conspiracy theorists, those who believe that the Holocaust didn’t happen, or that Lyndon Johnson was behind the Kennedy assassination. “The question is, ‘Could you show to those people a set of facts that would lead them to abandon what we consider to be their outlandish views?’ ” said Fish. “The answer to that question is no, because all people who have a story to which they are committed are able to take any set of counter-evidence and turn it back, within the perspective of the story they believe in.”†

  I’ve quoted this example at some length because I want to give Fish his due. And his point is a strong one. But it’s strong actuarially, as it were, not philosophically. It’s true that most people don’t change their minds. But as we have seen repeatedly in this book, some people do—they really do change “the story they believe in.” And that’s a remarkable and encouraging thing.

  We have discussed how keywords lead to governing metaphors (“metaphors we live by”) and how those metaphors do a tremendous amount of underground work, directing our responses to others in ways that we’re often unaware of. We’ve also acknowledged that those metaphors can capture something that’s true about our human situation—sometimes in argument things really are lost—but cannot be universalized without doing harm to our relations with one another. But now it’s time to take our diagnosis a step further.

  THE POWER OF MYTH

  George Lakoff and Mark Johnson wrote Metaphors We Live By; the philosopher Mary Midgley wrote what I like to think of as a companion volume, though she didn’t intend it as such: The Myths We Live By. Introducing her theme, Midgley writes,

  Myths are not lies. Nor are they detached stories. They are imaginative patterns, networks of powerful symbols that suggest particular ways of interpreting the world. They shape its meaning. For instance, machine imagery, which began to pervade our thought in the seventeenth century, is still potent today. We still often tend to see ourselves, and the living things around us, as pieces of clockwork: items of a kind that we ourselves could make, and might decide to remake if it suits us better. Hence the confident language of “genetic engineering” and “the building-blocks of life.”*

  Just as Lakoff and Johnson reveal that we use metaphors without knowing that they are metaphors, so Midgley shows that we rely on myths—which are, in effect, stories woven from metaphors—without knowing that they are myths. Organic creatures do not actually have “building-blocks.” Similarly, despite what thousands of computer scientists, neuroscientists, and philosophers will tell you, the human brain is not a computer. As the psychologist Robert Epstein has recently written, humans aren’t born with features native to computers: “information, data, rules, software, knowledge, lexicons, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols, or buffers.”†

  The myths we choose, or more likely simply inherit, do a tremendous amount of intellectual heavy lifting for us. Even more than the empty words and phrases of Orwell’s “tired hack on the platform,” these myths do our thinking for us. We can’t do without them; the making of analogies is intrinsic to thinking, and we always and inevitably strive to understand one thing in relation to another thing that we already know. (When we call this process the “association of ideas”—association from social, society—we’re engaging in this kind of mythmaking, treating ideas as though they’re little communities. See?) And every analogy helps—but also, as Kenneth Burke reminds us, if it directs our attention one way it also turns our attention aside from other things. To consider the brain as a computer is to ignore its biochemical character and its embodied state—and such a metaphor encourages us to believe that we understand the brain better than we do.

  The most dangerous metaphors for us are the ones that cease to be recognizable as metaphors. For many people the analogy between brain and computer has reached that point: the brain isn’t like a computer, they think, it is a computer. (“A computer made of meat,” some say.) When that happens to us, we are in a bad way, because those screens become permanently implanted, and we lose the ability to redirect our attention toward those elements of reality we have ignored. Consider, as one final and disturbing example, the belief common in the early modern era that animals were effectively robots or (to use the term of the period) “automata”—“actuated by the unerring hand of Providence,” as one eighteenth-century lady wrote, to fulfill the Creator’s inscrutable purposes. Therefore when you strike an animal and it cries out, it does not feel pain—that is reserved for humans. An action has merely produced a preprogrammed reaction, as when you push a button and a doorbell rings. Therefore one need not worry about cruelty to animals; one actually cannot be cruel to them.*

  Think for a moment, if you can bear it, of what the consequences of the “automata” theory were. Such is the power of our myths.

  So that’s the story so far: in search of social belonging, and the blessed shortcuts that we can take when we’re in the presence of like-minded people, we come to rely on keywords, and then metaphors, and then myths—and at every stage habits become more deeply ingrained in us, habits that inhibit our ability to think. We can only hope that there are strategies by which we might counteract the force of those habits—and develop new and better ones.

  OTHER WORDS

  As we seek those new and better habits we should, in the meantime, be tolerant of our inevitable shortcomings. As Daniel Kahneman and his research partner Amos Tversky remind us, nothing is to be gained by demanding that we adhere to a standard of objective rationality that no human being can manage. And deployment of these big prefabricated language machines may be necessary for managing the fire hose of opinionating that online life brings us. But despite their superficially impressive size, our myth machines are more delicate than they appear, and our unconscious awareness of that fact tempts us to deal in less than fair ways with the myth machines of others. Take, for example, one of the most common and least appealing defensive strategies I know: what I call “in-other-wordsing.”

  We see it every day. Someone points at an argument—a blog post, say, or an op-ed column—and someone else replies, “In other words, you’re saying . . .” And inevitably the argument, when put in other words, is revealed to be vacuous or wicked.

  Now, there’s no doubt that writers can use words evasively, to indicate or suggest things that they wouldn’t dare to say straight out. This is what “Politics and the English Language,” that Orwell essay I mentioned earlier, is all about. But often—astonishingly often, really—the “other words” people use to summarize an opponent’s argument grossly distort or even invert that argument.*

  Even worse, perhaps, is the Twitter version, which begins like this: “Shorter David Brooks,” or “Shorter Pope Francis,” or whoever the object of scrutiny is, followed by a colon and then an absurdly reductive account, not of what the person actually said but of what the tweeter is absolutely confident that the person meant.

  This kind of thing is closely related to the building of a straw man. The straw man is an evidently stupid argument that no one actually holds: refuting the ridiculous straw-man argument is easier than refuting the argument that someone actually made, so up in flames goes the figure of straw. And straw-manning is a version of in-other-wordsing. But it’s also possible to in-other-words someone’s argument not to make it seem that she holds simplistic views but rather to indicate that she holds views belonging to your adversary, to your outgroup.*

  In-other-wordsing is a bad, bad habit, but anyone who wants to resist it can do so. (Again, as we have had cause to remember throughout this exploration, many people don’t want to avoid it, they want to use it to win political or social or religious battles. And again: this book is not for such people.)

  Robin Sloan, author of the wonderful novel Mr. Penumbra’s 24-Hour Bookstore, has described attending debates sponsored by the Long Now Foundation. He was struck by the debate format, which, he says, is “nothing like the showdowns on cable news or the debates in election season.”

  There are two debaters, Alice and Bob. Alice takes the podium, makes her argument. Then Bob takes her place, but before he can present his counter-argument, he must summarize Alice’s argument to her satisfaction—a demonstration of respect and good faith. Only when Alice agrees that Bob has got it right is he permitted to proceed with his own argument—and then, when he’s finished, Alice must summarize it to his satisfaction.*

  And Sloan comments: “The first time I saw one of these debates, it blew my mind.” This kind of approach is not unique to the Long Now Foundation: it is, for instance, a feature common to the kind of debate subculture that Leah Libresco belonged to (see Chapter 2). But Sloan was amazed to see it in action because it is so rare in the world of argument as war. And as Sloan starts to unpack the implications of this model of debate, he gets into wonderfully deep waters. Writing that follows this model is profoundly dangerous to people who operate within the dichotomizing Us-Them, Winners-Losers model: