Traditional economic theory rested on the Enlightenment assumption that, while people were often irrational, we could depend on a trajectory toward rationality as a species, such that one should always bet that humans would be more rational in the future than they are now. Behavioral economists counter that humans are essentially wired for irrationality, that it is smarter to accept that they are irrational and always will be, and then to learn to predict *how* they will be irrational by experimenting and doing empirical tests. Richard Thaler calls this “evidence-based economics.”
There is a part of me that sees the behavioralist argument as the more accurate one. But something deeper inside me asks, “If we can point out an irrationality, why can we not teach people to stop acting irrationally in that way?” The behavioralist, I believe, would answer that we must accept the realities of the brain and its limitations (bounded rationality). Just as we cannot stop ourselves from seeing an optical illusion even when we know it is an optical illusion, we cannot stop ourselves from making irrational decisions even when we know we are making them.
Behavioral economists also argue for something they call “bounded willpower.” That is, while in theory we humans have the ability to make choices that are in our long-term rational self-interest, we fail to exercise that freedom consistently. Our capacity to choose what is best for us is not limited in theory, but it seems to be in practice, just as our capacity to be rational is not limited in theory while it is in practice.
Thirdly, behavioral economists assert bounded self-interest. Where traditional economists would argue that people always act in their own self-interest, behavioral economists say that in practice people place limits on the self’s domain. For example, give a person $100 and tell them that a partner somewhere in the world will receive whatever part of that $100 they choose to give, but that the partner can reject the offer so that neither person gets anything. Many people will split the money close to 50-50 to avoid having the offer rejected (the average offer is about $40 out of the $100). But ask another person whether they would rather split $10 with the player who got the short end of an earlier split, or split $12 with the player who chose to be selfish and keep more of the $100 than he gave away, and the majority will take the $5 instead of the $6 in order to inflict some justice on the greedy person. This is not what a computer would do. In the study Thaler conducted, 81% of people chose to share with the fair allocator. People are clearly willing to take a financial hit to punish unfair offerors. This is referred to as “inequity aversion.”
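The arithmetic of the second choice is worth making explicit. A minimal sketch, using the dollar amounts quoted in the text (these are illustrative figures, not Thaler's full experimental design):

```python
# Payoffs for the two choices described above: split $10 with a fair
# player, or split $12 with a selfish one.
fair_split_pot = 10      # pot shared with the player who was treated fairly/acted fairly
unfair_split_pot = 12    # pot shared with the player who acted selfishly

payoff_with_fair_player = fair_split_pot / 2      # $5
payoff_with_unfair_player = unfair_split_pot / 2  # $6

# A payoff-maximizing agent (the "computer") always takes the larger amount...
assert payoff_with_unfair_player > payoff_with_fair_player
# ...but most subjects forgo the difference to avoid rewarding unfairness.
cost_of_punishing = payoff_with_unfair_player - payoff_with_fair_player
print(f"Cost of punishing the unfair player: ${cost_of_punishing:.2f}")
```

The point of the sketch is that "inequity aversion" has a measurable price: here, subjects pay $1 to deny the selfish player a reward.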
It is worth noting many of the conclusions that behavioral economists have drawn in studying the actual economic decision making of human beings. Here are a few of them:
Reference Points: People decide whether a deal is good or bad based on some imaginary reference point. If a car costs $20,000 but is 5% below the Manufacturer’s Suggested Retail Price (MSRP), it is a good deal. If it costs $20,000 but is 5% above the MSRP, it is a bad deal. Same $20,000. This is why letting manufacturers set the MSRP is just silly. These are examples of the irrationalities of human mental accounting. Thaler argues that we are particularly vulnerable to this sort of manipulation when we face a decision to buy something we rarely shop for (houses, cars, mattresses, etc.). Getting people to perceive that they have gotten a good deal may well be more important than giving them a good deal, since people want products but also want to sense that they somehow saved money in getting them. This is called transaction utility.
Thaler gives the following example from his life as a college teacher.
“Finally, an idea occurred to me. On the next exam, I made the total number of points available 137 instead of 100. This exam turned out to be slightly harder than the first, with students getting only 70% of the answers right, but the average numerical score was a cheery 96 points. The students were delighted! No one’s actual grade was affected by this change, but everyone was happy. From that point on, whenever I was teaching this course, I always gave exams a point total of 137, a number I chose for two reasons. First, it produced an average score well into the 90s, with some students even getting scores above 100, generating a reaction approaching ecstasy. Second, because dividing one’s score by 137 was not easy to do in one’s head, most students did not seem to bother to convert their scores into percentages.”
“In the eyes of an economist, my students were ‘misbehaving.’ By that I mean that their behavior was inconsistent with the idealized model of behavior that is at the heart of what we call economic theory. To an economist, no one should be happier about a score of 96 out of 137 (70%) than 72 out of 100, but my students were. And by realizing this, I was able to set the kind of exam I wanted but still keep the students from grumbling.”
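The arithmetic behind the exam trick is easy to verify. A quick sketch of the numbers from the quote above:

```python
# Thaler's exam example: the same 70% performance "feels" different
# depending on the point total it is expressed against.
old_total, new_total = 100, 137
fraction_correct = 0.70

old_score = fraction_correct * old_total   # 70 out of 100
new_score = fraction_correct * new_total   # ~96 out of 137

print(round(old_score), round(new_score))   # 70 96
# The underlying percentage is unchanged:
print(round(new_score / new_total * 100))   # 70
```

Since dividing by 137 is hard to do in one's head, the 96 stands alone as a "cheery" number, exactly as Thaler describes.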
Another irrational habit the behavioral economists have identified has to do with framing. People can be induced to make different decisions based solely on how a question is presented. “Do you wish to opt out of the organ donor program?” or “Do you wish to opt into the organ donor program?” If the way you ask the question leads people to believe that most people want to be in the program, they will be reluctant to opt out, and vice versa. Consider the illustration Thaler offers. Two groups of people are asked one of the two following questions.
A. Suppose by attending this lecture you have exposed yourself to a rare fatal disease. If you contract the disease you will die a quick and painless death sometime next week. The chance you will get the disease is 1 in 1,000. We have a single dose of an antidote for this disease that we will sell to the highest bidder. If you take this antidote the risk of dying from the disease goes to zero. What is the most you would be willing to pay for this antidote?
B. Researchers at the university hospital are doing some research on that same rare disease. They need volunteers who would be willing to simply walk into a room for five minutes and expose themselves to the same 1 in 1,000 risk of getting the disease and dying a quick and painless death in the next week. No antidote will be available. What is the least amount of money you would demand to participate in this research study?
Mathematically, the two questions are the same, and a rational person would answer both in a similar way. But people do not. Thaler explains:
“The answers to the two questions were not even close to being the same. Typical answers ran along these lines: I would not pay more than $2,000 in version A but would not accept less than $500,000 in version B.”
People are not even close to rational.
Another logical error we often make is connected to what behavioral economists call “diminishing sensitivity to gains and losses.” This describes the reality that we make financial decisions differently when we are flush with money than when we are broke. If someone were to offer me a bet, my house against their mansion, at 20-to-1 odds in my favor, I should probably take it. But if my life will be wrecked if I lose, I might not do it even at 100-to-1. If I have the money for another house in the bank, I probably will. Being on the edge of survival causes humans to act differently than they otherwise would. Thus, our happiness increases as we get wealthier, but at a decreasing rate. As we get wealthier, we are made less and less happy by each added thousand dollars, to the point where it has almost no impact. Thus, “changes in wealth matter more than levels of wealth.” The person who makes 25% more than last year and thus nets $5,000 is made happier than the person who makes 10% more than last year and thereby nets $20,000.
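One standard way to model diminishing sensitivity is logarithmic utility. This is an assumed functional form for illustration, not the specific model Thaler uses, but it captures the idea that each added thousand dollars matters less as wealth grows:

```python
import math

# Logarithmic utility: an assumed, standard model of diminishing
# sensitivity to wealth.
def utility(wealth):
    return math.log(wealth)

# Utility gained from the same extra $1,000 at two wealth levels:
gain_when_poor = utility(11_000) - utility(10_000)
gain_when_rich = utility(1_001_000) - utility(1_000_000)

print(f"gain at $10k wealth: {gain_when_poor:.4f}")
print(f"gain at $1m wealth: {gain_when_rich:.6f}")
# The identical $1,000 produces a far larger utility gain for the
# poorer person, which is why "changes matter more than levels."
assert gain_when_poor > gain_when_rich
```

Under this assumption, the $1,000 gain is worth roughly ninety times more, in utility terms, to the person with $10,000 than to the millionaire.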
This is related to something called “loss aversion” and “sunk costs.” In short, we dislike losing about two and a quarter times as much as we like winning. Thus, someone who has lost a lot of money will take more risks to make the lost money back than a person who is ahead of where he started and just wants to make a bit more. The same amount of money will induce different levels of risky decision making based solely on the fact that one person is trying to avoid suffering a loss. If you buy a pair of shoes, you will wear them even though they hurt, because you can’t stand the idea of losing that money. If you make a bad investment, you will pour good money after bad to somehow avoid suffering that loss. A computer asked to calculate good bets would not care what had been lost or would be lost.
Present Bias and Probability Weighting: This ingenious little irrationality causes us to place more weight on a low-probability event if that event has happened recently. For example, if there is a 1 in 1,000,000 chance of being hit by lightning while golfing, you will golf. But if someone was actually hit by lightning while golfing last week, you will treat that possibility as far more likely than it actually is. You will act as though the odds have somehow changed. Last year, the Patriots came back from a 25-point deficit in the Super Bowl. Over the course of NFL history, teams with a 25-point lead, as the Falcons had, have won 1057 times and lost 4 times. The odds of the Patriots coming back were ridiculously low.
They would be similarly ridiculous next year if the same scenario happened again. There is almost no statistical difference between 1057-4 and 1057-5. And yet, since this memory would be in people’s minds, it would carry more weight in their betting (to their rationalist detriment). A similar problem occurs when we overweight factors simply because they are made available to us. We assume that if a factor is presented to us, it must be the most important factor. Thus, I could load a drink with sugar but print “gluten free” on the bottle, and people would rate the absence of gluten as an important distinction without even looking at the sugar content. By simply not mentioning an important factor, you can influence people’s decisions. People will irrationally treat the factors you supply as important, even if they are not.
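The claim that 1057-4 and 1057-5 are almost statistically identical can be checked directly, using the win-loss figures quoted above as given:

```python
# Base rates implied by the records quoted in the text: wins and losses
# for teams holding a 25-point lead, before and after one more comeback.
before = {"wins": 1057, "losses": 4}
after = {"wins": 1057, "losses": 5}   # record if the Patriots comeback is added

p_before = before["losses"] / (before["wins"] + before["losses"])
p_after = after["losses"] / (after["wins"] + after["losses"])

print(f"comeback probability: {p_before:.4%} -> {p_after:.4%}")
# The base rate moves from roughly 0.38% to roughly 0.47%: still tiny.
# Yet the vivid recent memory makes bettors weight it far more heavily.
```

A bettor reacting strongly to the new data point is responding to less than a tenth of a percentage point of real change.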
Another interesting habit of irrationality we seem vulnerable to is called “discounted utility.” This involves the way we value something right now as opposed to later. If I say to a small child, “Would you like one marshmallow now or two marshmallows in an hour?” many will choose the one-now option. One now is worth more than two later. This makes very little sense. But if I say to the same children, “Would you like one marshmallow at 2:00 tomorrow or two marshmallows at 3:00 tomorrow?” they will almost inevitably choose the two-marshmallow option. Thaler describes this as a negotiation between the planner self and the doer self. The doer self has total control over what happens right now, and it cares entirely about the present. The planner self is the guardian ad litem of tomorrow’s doer self (who has not arrived and has no voice yet). All it can do is inflict some guilt on the present self to offset the present doer self’s immediate pleasure.
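Economists often model this preference reversal with quasi-hyperbolic ("beta-delta") discounting, in which everything that is not "now" is penalized by an extra factor. The parameter values below are illustrative assumptions, not figures from Thaler; the sketch only shows that the model reproduces the marshmallow reversal:

```python
# Quasi-hyperbolic (beta-delta) discounting: a standard model of present
# bias. Parameter values here are illustrative assumptions.
beta, delta = 0.5, 0.99   # beta < 1 penalizes any delay at all

def value(reward, periods_away):
    """Present value of a reward arriving `periods_away` periods from now."""
    if periods_away == 0:
        return reward
    return beta * (delta ** periods_away) * reward

# Choice 1: one marshmallow now vs. two marshmallows one period later.
now_vs_later = (value(1, 0), value(2, 1))
# Choice 2: one marshmallow in 24 periods vs. two in 25 periods.
later_vs_later = (value(1, 24), value(2, 25))

print(now_vs_later)     # the doer self takes one now: 1 > 0.99
print(later_vs_later)   # with both options in the future, two beats one
```

The same one-period gap between the rewards is decisive when it starts "now" and irrelevant when it starts tomorrow, which is exactly the child's pattern.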
This next logical quirk relates to loss aversion in many ways. It is called “the endowment effect.” It means that the outrage provoked by taking a benefit away is far greater than the gratitude produced by granting one. Consider what would happen if a restaurant started charging people to use its tables and chairs. Customers would be outraged. Why? Because they have never paid for tables and chairs before. And yet Comcast will charge for the modem that you use to access the internet that you already pay for. Why? Because it has always been done that way. It is not seen as adding a charge.
As examples of the endowment effect, consider how consumers would feel about a hardware store raising the price of shovels after a major snowstorm. How would they feel if a cold-drink vending machine were programmed to raise the price of drinks as the outside temperature rose (an easy feat of programming)? Consider how people feel about record companies raising the price of a particular singer’s music the week after they die. In many parts of the economy, supply-and-demand pricing is perfectly legitimate, but only where it has always worked that way. People do not like others to introduce randomness, uncertainty, or unpredictability into their status quo.
Confirmation Bias: If I see evidence for or against a certain assertion, it should not matter whether I previously believed or disbelieved that assertion. Logically, my prior relationship to the idea in question should be immaterial. And yet it is not. People generally prefer answers that confirm their preconceptions. An idea is like a child: we become attached to it, because it is work to rebuild a life based on a new idea. We resist replacing an idea that we sense will require us to restructure our lives. We actually prefer to continue in an inaccurate belief until forced to change by hard reality. This is hardly rational. Daniel Kahneman calls this “theory-induced blindness.”
Law of Large Numbers: Another goofy little glitch in our calculating selves has to do with the way the size of a sample affects our thinking. If a bet would be a good bet over a hundred tries, then it is a good bet for just one. And yet we do not act that way. The law of large numbers tells us that outcomes converge on the true odds only over many trials, yet people will base a decision not on the actual odds but on a recent small sample. Flip a coin ten times and it might come out tails eight times. Basing a bet about a million flips on that ratio would be a bad idea. And yet humans make that mistake often. Incidentally, going into Super Bowl 51, tails was leading heads 26-24.
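A quick simulation makes the point: small samples swing wildly around the true odds, while a very large sample sits right on top of them. (The seed value below is arbitrary; any seed shows the same pattern.)

```python
import random

random.seed(0)  # arbitrary seed, for reproducibility

def tails_fraction(n_flips):
    """Fraction of tails in n_flips fair-coin flips."""
    return sum(random.random() < 0.5 for _ in range(n_flips)) / n_flips

# Five runs of ten flips each: noisy, easily as lopsided as 8-2.
small_runs = [tails_fraction(10) for _ in range(5)]
# One run of a million flips: pinned very close to the true 0.5.
big_run = tails_fraction(1_000_000)

print("ten-flip runs:", small_runs)
print("million-flip run:", round(big_run, 4))
```

Betting on the ratio seen in ten flips, like betting on one recent Super Bowl comeback, means treating a noisy small sample as if it were the long-run rate.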
According to behavioral economics, human beings generally live by “heuristics,” rules of thumb, rather than reason. They find more satisfaction in obeying those rules than in following mathematical logic.
There are many implications of this work, and you will see people using it to advertise to you all the time. When was the last time you were mailed a thousand-dollar “check” that you could use to buy a new car? Behavioral economists have also become interested in applying these theories to the ways governments make policy. They advocate the use of “choice architecture” to make it easy for irrational humans to make rational decisions. They also suggest “nudging” people toward rationality at just the points where they are most likely not to be rational. Thaler calls this “libertarian paternalism.”
Question for Comment: Is it time to surrender that Enlightenment assumption that we can teach ourselves out of our irrationalities? Or should we simply accept that our thinking faculties are always going to be handicapped and admit that we need some outside power to “think” for us?