Thinking Fast and Slow by Daniel Kahneman REVIEW
Thinking Fast and Slow is about the different systems you have for thinking and how knowing more about them can help you to think better. Kahneman calls these systems "System 1" and "System 2." System 1 has evolved (or been designed) to give you fast answers to survival questions. It was not designed for accuracy so much as speed. It does not like inaccuracy, but it will trade a little for survival. If it gets you out of the way of a T-Rex, it does not mind if the T-Rex was actually a falling tree.
System 2 is logical and procedural. But it is inherently lazy and quite satisfied if System 1 does all or at least most of its work. The experiments with which Thinking Fast and Slow is rife all draw the same conclusion: System 1 is impossible for us to shut off, and System 2 is very difficult to keep awake. This is something that Kahneman (a psychologist who won the Nobel Prize in economics) stresses over and over. “The question that is most often asked about cognitive illusions is whether they can be overcome,” he writes,
“The message of these examples is not encouraging. Because System 1 operates automatically and cannot be turned off at will, errors of intuitive thought are often difficult to prevent.”
“. . . Many people are overconfident, prone to place too much faith in their intuitions. They apparently find cognitive effort at least mildly unpleasant and avoid it as much as possible.”
“The notion that we have limited access to the workings of our minds is difficult for us to accept because, naturally, it is alien to our experience, but it is true: you know far less about yourself than you feel you do.”
We may not like it but experiment after experiment shows it to be true: “System 1 constructs a story and System 2 generally believes it.” And, I might add, in his conclusion, the author asserts that no amount of study will cure us. “System 1 is not readily educable,” he says there,
“Except for some effects that I attributed mostly to age, my intuitive thinking is just as prone to overconfidence, extreme predictions, and the planning fallacy as it was before I made a study of these issues. I have improved only in my ability to recognize situations in which errors are likely.”
“. . . The [only] way to block errors that originate in System 1 is simple in principle: recognize the signs that you are in a cognitive minefield, slow down, and ask for reinforcements from System 2. . . . We would all like to have a warning bell that rings loudly whenever we are about to make a serious error, but no such bell is available, and cognitive illusions are generally more difficult to recognize than perceptual illusions.”
And cognitive illusions are what this book is all about exposing. Kahneman treats us to a cornucopia of human thinking errors that we cannot seem to defend ourselves against even when we know they exist (just as you can’t “unsee” an optical illusion even when you know it is one).
Start with a simple word problem:
“A bat and ball cost $1.10. The bat costs one dollar more than the ball. How much does the ball cost?”
If you are like most people, you immediately leap to the solution: 10 cents.
But the answer is 5 cents.
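To see why, it helps to spell out the arithmetic the problem hides: if the ball costs x, the bat costs x + $1.00, and together they must come to $1.10, so

x + (x + 1.00) = 1.10, which gives 2x = 0.10, and x = 0.05.

The intuitive answer fails this check: a 10-cent ball would make the bat $1.10 and the pair $1.20.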
And the problem is that once System 1 produces an answer that causes no anxiety or survival concern, System 2 yawns, goes to sleep, and says, “fine by me.” “System 1 is gullible and biased to believe,” writes Kahneman,
“System 2 is in charge of doubting and unbelieving, but System 2 is sometimes busy, and often lazy. . . . When information is scarce, which is a common occurrence, System 1 operates as a machine for jumping to conclusions.”
System 1 is, as the author puts it, “radically insensitive” to both the quality and the quantity of the information it processes. It suffers from what Kahneman calls a serious WYSIATI problem (What You See Is All There Is). In other words, it really couldn’t care less about information that it doesn’t have. Thus, “if a satisfactory answer to a hard question is not found quickly, System 1 will find a related question that is easier and will answer it [instead].” It has been tasked with getting answers fast, and if it has to replace an unanswerable or lengthy question with an unasked but answerable one, it will do it.
At the risk of sounding like this is all a joke, most of us will agree with these conclusions intuitively. But Kahneman makes it clear that we are all far more confident in our System 2 processes than we have reason to be. “Self-criticism is one of the functions of System 2,” he writes,
“In the context of attitudes, however, System 2 is more of an apologist for the emotions of System 1 than a critic of those emotions – an endorser rather than an enforcer.”
We do not realize, he argues, just how often our System 2 processes let us get away with illogical murder. Time after time, the research shows that “the emotional tail wags the rational dog.” As the psychologist Daniel Gilbert observed, “disbelieving is hard work, and System 2 is easily tired.” Or, as Kahneman puts it in one aphorism, “System 2 is not impressively alert.” Many chapters of the book are dedicated to showing how this is so. “The exaggerated faith in small samples,” he says, with respect to how easily we can be convinced by insufficient evidence, for example,
“is only one example of a more general illusion – we pay more attention to the content of messages than to information about their reliability, and as a result end up with a view of the world around us that is simpler and more coherent than the data justify. Jumping to conclusions is a safer sport in the world of our imagination than it is in reality.
“As we saw in the discussion of the law of small numbers, a message, unless it is immediately rejected as a lie, will have the same effect on the associative system regardless of its reliability. The gist of the message is the story, which is based on whatever information is available, even if the quantity of the information is slight and the quality is poor: WYSIATI.”
“You cannot help dealing with the limited information you have as if it were all there is to know. You build the best possible story from information available to you, and if it is a good story, believe it. Paradoxically, it is easier to construct a coherent story when you know little, when there are fewer pieces to fit into the puzzle. Our comforting conviction that the world makes sense rests on a secure foundation: our almost unlimited ability to ignore our ignorance.”
“The amount of evidence and its quality do not count for much, because poor evidence can make a very good story. For some of our most important beliefs we have no evidence at all, except that people we love and trust hold these beliefs. Considering how little we know, the confidence we have in our beliefs is preposterous – and it is also essential.”
“We are confident when the story we tell ourselves comes easily to mind, with no contradiction and no competing scenario. But ease and coherence do not guarantee that a belief held with confidence is true. The associative machine is set to suppress doubt and to evoke ideas and information that is compatible with the currently dominant story. A mind that follows WYSIATI will achieve high confidence much too easily by ignoring what it does not know. It is therefore not surprising that many of us are prone to have high confidence in unfounded intuitions.”
Kahneman talks about the fundamental irrationality that results from the loss aversion of System 1. Essentially, your “fast thinker” weighs loss more heavily than potential gain in its calculations. As he puts it, “loss aversion is built into the automatic evaluations of System 1. . . . Animals, including people, fight harder to prevent losses than to achieve gains.”
“giving up a bottle of nice wine is more painful than getting an equally good bottle is pleasurable.”
His chapter on this subject goes into some detail as to how this reality causes investors to lose millions if not billions of dollars a year. In a similar vein, he shows how System 1 will be dominated by vivid imagery far more than it will be by statistics (which would make a far more reliable foundation for a decision). Vivid imagery, you will recall, can be thought about fast but statistics can’t . . . so System 1 doesn’t bother with them too much. This is why System 1 (when combined with lazy System 2 thinking) leads to frequent error. If the parents of a sick child are told of a potential cure and the doctor says that the “one-month survival rate is 90%,” they are likely to respond very differently than they would to the sentence “There is 10% mortality in the first month.” “The logical equality of the two descriptions is transparent,” says Kahneman,
“and a reality bound decision maker would make the same choice regardless of which version she saw. But System 1, as we have gotten to know it, is rarely indifferent to emotional words: mortality is bad, survival is good, and 90% survival sounds encouraging whereas 10% mortality is frightening.”
“An important finding of the study is that physicians were just as susceptible to the framing effect as medically unsophisticated people,” the book adds. “Medical training is, evidently, no defense against the power of framing.”
“Reframing is effortful and System 2 is normally lazy. Unless there is an obvious reason to do otherwise, most of us passively accept decision problems as they are framed and therefore rarely have an opportunity to discover the extent to which our preferences are frame-bound rather than reality-bound.”
As I often tell my philosophy students, many of our ethical dilemmas are resolved by the people who give us the vocabulary we choose to think about them. One of Kahneman’s chapters talks about the errors we make because we are not aware of the fact that we make decisions as “experiencing selves” and “remembering selves,” and that the two are not necessarily working in collaboration with each other. He describes an experiment in which people undergo two painful episodes, one involving less total pain but ending at a more painful moment. Because of that ending, the remembering self recalls the episode with less total pain as the one more to be avoided in the future. “The experiencing self,” he writes,
“is the one that answers the question: Does it hurt now? The remembering self is the one that answers the question: How was it, on the whole? Memories are what we get to keep from our experience of living, and the only perspective that we can adopt as we think about our lives is therefore that of the remembering self.”
You can see this dichotomy between experiencing self and remembering self if you conduct a little thought experiment about your next vacation. Imagine that at
“the end of the vacation, all pictures and videos will be destroyed. Furthermore, you will swallow potions that will wipe out all your memories of the vacation. How would this prospect affect your vacation plans? How much would you be willing to pay for it, relative to a normally memorable vacation?”
Some people, Kahneman suggests, “say that they would not bother to go at all, revealing that they care only about the remembering self, and care less about their amnesic experiencing self than about an amnesic stranger.” System 1 will often make illogical decisions because of its more intimate relationship to the remembering self than to the less important experiencing self.
One more example of an Achilles’ heel in System 1 thinking:
“Daniel Gilbert and Timothy Wilson introduced the word miswanting to describe bad choices that derive from errors of affective forecasting. This word deserves to be in everyday language. The focusing illusion is a rich source of miswanting. In particular, it makes us prone to exaggerate the effect of significant purchases or changed circumstances on our future well-being.”
If you have ever bought a new car or shoes or house, you know this is true. We are grievously prone to miscalculate the happiness that we will get from such purchases. We are notorious “miswanters.” (You don’t need to be a Buddhist to understand this.)
The implications of Kahneman’s argument are both personal and political. If one agrees with him, one has to conclude that we humans need some protections from ourselves. We need a scaffolding that will counter our natural weaknesses. “Although humans are not irrational,” he says,
“they often need help to make more accurate judgments and decisions, and in some cases policies and institutions can provide that help. These claims may seem innocuous, but they are in fact quite controversial. As interpreted by the important Chicago school of economics, faith in human rationality is closely linked to an ideology in which it is unnecessary and even immoral to protect people against their choices. Rational people should be free, they should be responsible for taking care of themselves. Milton Friedman, a leading figure in that school, expressed this view in the title of one of his popular books: Free to Choose. The assumption that agents are rational provides the intellectual foundation for the libertarian approach to public policy: do not interfere with the individual's right to choose, unless the choices harm others.”
“Freedom has a cost, which is borne by individuals who make bad choices, and by a society that feels obligated to help them. The decision of whether or not to protect individuals against their mistakes therefore presents a dilemma for behavioral economists. The economists of the Chicago school do not face that problem, because rational agents do not make mistakes. For adherents of the school, freedom is free of charge.”
“Humans also need protection from others who deliberately exploit their weaknesses –” Kahneman adds, “and especially the quirks of System 1 and the laziness of System 2.”
The implications for government policies towards things like “saving for retirement” or “luxury taxes” are self-evident. And on a personal level, we must, as we mature, build into our lives external thinking mechanisms to offset our inbred “foolishness.”
“System 1 registers the cognitive ease with which it processes information, but it does not generate any warning signal when it becomes unreliable. Intuitive answers come to mind quickly and confidently, whether they originate from skills or from heuristics. There is no simple way for System 2 to distinguish between a skilled and a heuristic response [a response based on a simplifying shortcut]. Its only recourse is to slow down and attempt to construct an answer on its own, which it is reluctant to do because it is indolent. Many suggestions of System 1 are casually endorsed with minimal checking. . . . This is how System 1 acquires its bad reputation as the source of errors and biases.”
Lacking an internal warning signal, perhaps we need external ones. Perhaps another word for these “brakes” on our tendency for error is the word “friend.”
Question for Comment: Do you think the record of your life shows you to be better or worse at decision making than most? What is the source of most of your worst “life decision errors”?