Key insights from

Thinking, Fast and Slow

By Daniel Kahneman

What you’ll learn

What kind of decision-maker are you: an impulsive, shoot-from-the-hip, let-the-chips-fall-where-they-may type? Or a more analytical, methodical, left-brained person? Truth be told, most of us exhibit traits from both camps; rarely is anyone purely one type or the other. Kahneman’s aim in this book isn’t to help us identify what kind of decision-maker we are, but to enlighten us about the factors that influence our decision processes, highlighting those of which we are often unaware.


Read on for key insights from Thinking, Fast and Slow.

1. Evolution has divided our brain’s functionality into two distinct systems.

System 1 is automatic, quick, and intuitive, and it operates continuously without voluntary control. Examples of automatic activities include perceiving depth, locating the source of an unexpected noise, reacting to facial expressions, answering 2 + 2 = ?, comprehending simple sentences, and associating personal characteristics with occupational stereotypes.

System 2 handles mental activities that require effort, including complex computations. Examples of System 2 at work are bracing for the starter’s gun in a race, training attention on a desired object, walking faster or slower than normal, comparing two items for overall value, filling out a tax form, and analyzing the validity of a logical argument.

System 1 operates around the clock, while System 2 engages only when deliberately invoked. The majority of our daily functioning is governed by System 1, which can generate surprisingly complex patterns of thought. When called upon, System 2 can take over the normally automatic functions of attention and memory.

Some interesting experiments have compared what these two systems believe when presented with visual and cognitive illusions. Take, for example, linear perspective as used in drawings and paintings. Our brain automatically sees buildings or people in the background as farther away; this is System 1. System 2, however, knows full well that the work is two-dimensional and that all the objects lie on the same plane, so it instructs us to mistrust the impression. Even so, System 1 will always register background objects as farther away; the illusion never disappears. Yet, thanks to System 2, we won’t be fooled if questioned about it.

A number of psychologists have measured physical reactions to System 2 effort. They have found that the pupils of the eye are very good indicators of mental exertion: pupils dilate substantially when a subject is asked to multiply two-digit numbers, and the harder the problem, the greater the dilation. Researchers have also noted that heart rates increase by an average of about 7 beats per minute. Interestingly, there was a point at which subjects’ pupils shrank and their heart rates subsided; the researchers concluded that the problem had become too taxing and System 2 had shut down.

These experiments have revealed a number of characteristic mental patterns. The aim of the research is to better understand what shapes our beliefs, how judgments are formed, and what triggers biases and cognitive errors. Its benefits extend to every discipline of study.

2. We usually choose the easiest, simplest answer available to us, often ignoring more complex evidence.

Kahneman examines the factors that affect our decisions and that dictate how we tend to make them, based on the characteristics of Systems 1 and 2. When presented with information, System 1 tries to draw an immediate conclusion, paying little or no heed to statistical facts. For example, extreme outcomes, whether high or low, are far more likely to turn up in small samples than in large ones; across the board, large samples yield more precise results than small ones. These facts are not relevant to System 1, and perhaps not to System 2 either, depending on the complexity of the problem. Kahneman’s unflattering conclusion is that we often tend to be lazy in our mental processes.
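
To make the sample-size point concrete, here is a minimal simulation sketch. It is not from the book; the coin-flip setup, the sample sizes, and the 30/70 percent cutoffs are arbitrary illustrative choices. It counts how often a fair coin produces an "extreme" share of heads at different sample sizes.

import random

# Illustrative simulation (assumed setup, not from the book): flip a fair
# coin in small vs. large samples and count how often the observed share
# of heads is "extreme" (below 30% or above 70%).
def extreme_rate(sample_size, trials=10_000, low=0.3, high=0.7):
    extreme = 0
    for _ in range(trials):
        heads = sum(random.random() < 0.5 for _ in range(sample_size))
        share = heads / sample_size
        if share < low or share > high:
            extreme += 1
    return extreme / trials

if __name__ == "__main__":
    for n in (10, 100, 1000):
        print(f"sample size {n:4d}: extreme result in {extreme_rate(n):.1%} of samples")

Run it and the small samples produce extreme proportions a noticeable fraction of the time, while the large samples almost never do, which is exactly the statistical regularity System 1 ignores.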

Heuristics are simple procedures which help us select suitable answers to difficult problems. Very often, they will be imprecise or imperfect processes. We find ourselves attracted to heuristics when the task we are trying to solve is either impossible or extremely complex. One common heuristic which affects our data gathering is WYSIATI—Kahneman’s What You See Is All There Is. In effect, System 1 tells System 2 it has assembled all relevant, available data. No additional help is required.

Another heuristic is pattern seeking. We tend to believe that the world is a logical, coherent place. This belief may have evolutionary advantages in that we tend to be on the lookout for changes in the environment which signal danger. However, Kahneman warns that we might see patterns where there are none because we want to see them.

Biases, or prejudices, are another type of heuristic. They, too, have major impacts on judgments, beliefs, and decisions. Very often these biases rest on a small sample or just a few past experiences, but they provide the ease of decision-making we tend to seek. For example, Joe owned a 2005 Helix automobile. It was subject to two recalls and had several reliability issues, so he will never buy another. Of course, there is often an “unless.” The “unless” might be a super-low price on a new model that Joe can’t ignore, or perhaps Helix now has the industry’s best warranty. Still, Joe is biased against Helix autos.

In general, our minds seek causal explanations even where none exist. We want coherence and ease of operation. Luck isn’t part of either system’s vocabulary, even though randomness is real. And because System 2 tends to be lazy, it will often accept the fluent answer System 1 produces by substituting an easier question for a more difficult one. Data that seem inconsequential, such as a random event, may be noted but given no consideration. We want our efforts to determine the outcome.

System 1 attempts to rely on intuitive predictions based on WYSIATI. In controlled testing, the results of these types of predictions tended to be overconfident and overly extreme. Correcting these predictions falls under the domain of System 2, which may require a good deal of research and thought. Our decision to invoke System 2 will depend on whether or not the stakes are sufficiently high and on our tolerance for accepting mistakes. Coherence, fluency, ease of recall, biases and intuitive predictions are all heuristics—ways to more easily explain, defend, or justify our choices.

3. Our ability to recall accurately and predict reliably is often quite flawed, although we don’t acknowledge it.

Humans are reasonably sure that their understanding of past events is grounded in reality, and thus they are confident they can predict future outcomes. The narrative fallacy describes how we build flawed stories of the past and then use them to anticipate the future. We compose stories that are simple rather than complex, concrete rather than abstract, intentional rather than random, and focused on the few events that happened rather than the many outcomes that didn’t occur. Our minds don’t process non-events well.

One of the major elements of the narrative fallacy is the halo effect: the tendency to view all of a person’s attributes favorably or unfavorably after judging only a single quality, often on brief acquaintance. Simply put, good people always do good; bad people always do bad.

In terms of overconfidence, Kahneman states, "A general limitation of the human mind is its imperfect ability to reconstruct past states of knowledge or beliefs that have changed." Clinical psychologists have tested this phenomenon by asking participants to select a topic they have given some thought to but about which they haven’t reached a firm conclusion. Next, the experimenter presents either a pro or a con message on that topic to each member of the group. When attitudes were re-measured, the majority of participants had accepted the message they heard, had difficulty recalling their prior beliefs, and were surprised they had ever thought differently.

All of this reinforces dominant System 1’s drive to assemble a fluent, coherent, seemingly valid story, typically built from the most recently absorbed data. That story, in turn, supplies feelings of safety, confidence, and cognitive ease. Subjective confidence, however, reflects little or no assessment of whether the narrative is actually true.

4. Minimizing risk and loss governs our major decisions, especially financial ones.

Kahneman was eventually asked to apply his theories about decision-making to the financial field: banking, the stock market, and so on. Long-held theory supposed that utility (desirability), happiness included, depended on one’s state of wealth. The economist Harry Markowitz had proposed a different theory: that utility attaches to changes in one’s wealth rather than to the amount of one’s wealth. This idea generated a great deal of new research into risk aversion and risk seeking.

Not surprisingly, humans are quite averse to loss. Take away a toy a child is playing with and there will be vocal displeasure and tears. It would seem loss aversion is built into the automatic evaluations of System 1 as evidenced early in life.

The theory becomes more interesting as it becomes more complex, when risky choices are added to the mix. Here are two of Kahneman's simpler problems:

Problem 1: Which would you choose? Get $900 for sure OR a 90% chance to get $1,000.

Problem 2: Which would you choose? Lose $900 for sure OR a 90% chance to lose $1,000.

Most participants were risk averse in Problem 1: the certainty of gaining $900 outweighs the chance of winning $1,000. Reasonable. Note that the two options have the same expected value (0.9 × $1,000 = $900), so the preference reflects an attitude toward risk rather than arithmetic. In Problem 2, most people chose to gamble: the sure loss of $900 feels worse than a 90% chance of losing $1,000. As Kahneman's research (and that of others in the field) bears out, the evaluation of probabilities is a significant factor in both risk aversion and risk taking. What also became apparent as the studies continued is that people become risk seekers when all the available options are negative.

Risk and loss aversion were found to have emotional components as well, specifically disappointment and regret. Kahneman and his team found that people experienced an unfulfilled promise as a loss.

Roughly speaking, the research found that people gave about twice as much weight to losses as to equivalent gains. Evidently, in terms of survival, avoiding losses takes higher priority than achieving gains.
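
As a rough illustration of how a value function with that 2-to-1 weighting can reproduce both choices above, here is a minimal sketch. The exact parameters (a loss-aversion factor of about 2 and a diminishing-sensitivity exponent of about 0.88) are commonly cited estimates from Kahneman and Tversky's prospect-theory work, used here only as assumptions for illustration.

# Illustrative prospect-theory-style value function (assumed parameters,
# not figures from the summary above): losses weigh about twice as much
# as gains, and sensitivity to money diminishes as amounts grow.
LOSS_AVERSION = 2.0   # losses hurt roughly twice as much as equal gains please
SENSITIVITY = 0.88    # diminishing sensitivity to larger amounts

def value(amount):
    """Subjective value of a single gain or loss."""
    if amount >= 0:
        return amount ** SENSITIVITY
    return -LOSS_AVERSION * (-amount) ** SENSITIVITY

def gamble_value(outcomes):
    """Probability-weighted value of a gamble given (probability, amount) pairs."""
    return sum(p * value(x) for p, x in outcomes)

# Problem 1: a sure $900 vs. a 90% chance of $1,000.
print(value(900), gamble_value([(0.9, 1000), (0.1, 0)]))    # sure thing scores higher

# Problem 2: a sure loss of $900 vs. a 90% chance of losing $1,000.
print(value(-900), gamble_value([(0.9, -1000), (0.1, 0)]))  # gamble is less bad

With these assumed numbers, the sure $900 beats the gamble in Problem 1, while the sure loss of $900 scores worse than the gamble in Problem 2, matching the pattern most participants showed.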

5. When it comes to events, we cherish memories more than experiences.

Kahneman refers to the “experiencing” self and the “remembering” self. The remembering self typically answers the question: How was it, on the whole? Since memories are really all we keep of our lives, the perspective of the remembering self is the one we typically adopt.

A student of Kahneman’s related a story which illustrates the difficulty of distinguishing memories from experiences. The student had listened to a beautiful symphony on a disc that had a terrible scratch right at the end. This scratch produced a shocking sound, and the student reported that this noise ruined the entire experience. On reflection, the entire experience wasn’t ruined, only the memory of it. Most of the listening experience had been enjoyable. Does the entire experience count for nothing because the memory of it is negative?

Herein lies the crux for Kahneman. If we confuse the experience with the memory, we exhibit a cognitive illusion: we substitute the memory for the experience. The “remembering” self is the keeper of the records in these cases, and as such it is tasked with maximizing the quality of future memories, not future experiences.

In a study termed “Experienced Well-Being,” Kahneman and his team attempted to measure experiences not just from memory but over time as well, since duration obviously plays a role in our lives. To do this, they asked participants to answer questions sent by text to their phones at random times of the day, so that both “selves” would be represented in the findings. The measuring tool has since evolved to include a daily two-hour session during which subjects break down various experiences of the previous day and assign each a value from 0 to 6 (0 = no feeling; 6 = intense feeling). This lets researchers measure negative and positive emotional states over time, capturing duration as well as intensity.
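
As an illustration of how such ratings might be combined so that longer episodes count for more, here is a minimal sketch. The episode names, durations, and the simple duration-weighting scheme are hypothetical; this is not the researchers' actual protocol.

from dataclasses import dataclass

# Illustrative sketch (hypothetical data and weighting, not the researchers'
# actual protocol): each episode of a day gets a 0-6 feeling rating, and
# longer episodes count proportionally more toward the day's score.
@dataclass
class Episode:
    label: str
    minutes: int    # how long the episode lasted
    rating: float   # 0 = no feeling, 6 = intense feeling

def duration_weighted_score(episodes):
    total_minutes = sum(e.minutes for e in episodes)
    return sum(e.rating * e.minutes for e in episodes) / total_minutes

# A hypothetical reconstruction of one day's episodes.
day = [
    Episode("commute", 45, 1.0),
    Episode("work meetings", 90, 2.5),
    Episode("dinner with friends", 120, 5.0),
]

print(f"duration-weighted score for the day: {duration_weighted_score(day):.2f}")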

The “Experienced Well-Being” model is now used in the United States, Canada, Europe, and more than 150 countries overall. The enormous databases generated by this research have enabled psychologists to confirm the top predictors of emotional health: physical health, strong situational factors (marriage or divorce, for example), and interactions with family and friends.

This newsletter is powered by Thinkr, a smart reading app for the busy-but-curious. For full access to hundreds of titles — including audio — go premium and download the app today.



Copyright © 2024 Veritas Publishing, LLC. All rights reserved.

311 W Indiantown Rd, Suite 200, Jupiter, FL 33458