1-Page Summary

In today’s society, people often find themselves in encounters with complete strangers. Judges have to grant or deny a stranger’s freedom. College students meet a stranger at a party and have to decide whether to give out their phone number. People invest large sums of money based on a total stranger’s recommendations.

This book is about why we are so bad at understanding the strangers we come across. By looking at stories from world history and recent news, you’ll begin to understand the strategies people use to translate the words and intentions of strangers. You’ll learn where those strategies came from. And, you’ll begin to notice how those strategies ultimately fail.

Two Puzzles

There are two major puzzles about interacting with strangers that this book will attempt to answer.

  1. Why can’t we identify when a stranger is lying to our face?
  2. Why does meeting a stranger face-to-face sometimes make it harder to make sense of that person than it would be without meeting them at all?

Three Answers

The problem at the heart of the two puzzles is that people assume that they can make sense of others based on relatively simple strategies. But when it comes to strangers, nothing is as simple as it seems.

There are three major strategies that people use to make sense of strangers:

  1. People default to truth.
  2. People assume transparency.
  3. People neglect coupled behaviors.

These three strategies ultimately fail because they operate under the assumption that simple clues are enough evidence of a stranger’s internal thoughts or intentions. We will look at each of these strategies separately to see where they came from and why they often result in failed interactions with strangers.

Flawed Strategy 1: People Default To Truth

The primary reason that most people can’t immediately identify when a stranger is lying is that human beings default to assuming truth in others.

Truth-Default and How It Works

To understand and analyze the Truth-Default Theory, psychologist Tim Levine used hundreds of versions of the same basic experiment (referred to here as the Trivia Experiment). Here is how the Trivia Experiment works:

  1. Levine invites participants to a laboratory. They are told that if they answer the trivia questions correctly, they will win a cash prize.
  2. Each participant is given a partner. The participants don’t know that the partner works for Levine.
  3. The test is administered by an instructor named Rachel. The participants don’t know that Rachel also works for Levine.
  4. Halfway through the test, Rachel leaves the room. The partner points to an envelope lying on the table and asks the participant if they should cheat on the test. The participant is given the opportunity to choose whether or not to cheat.
  5. After the trivia test is over, Levine interviews the participant on tape. He asks the participant if they cheated during the test.
  6. After the completion of the interviews, Levine goes back to watch the tapes and sorts them into two categories: liars and truth-tellers.
  7. Other people then watch the interview tapes and try to decipher which participants are lying about cheating and which participants are telling the truth.

Levine’s conclusion was this: When watching the tapes, most people will guess that each person interviewed is telling the truth, unless they see a behavior that distinctly makes them think the person is lying. In other words, the viewers default to assuming truth—they naturally operate under the assumption that the majority of participants are honest. This is the Truth-Default Theory (TDT).

Is Truth-Default Beneficial?

Ultimately, Levine concluded that human beings do not need to identify lies (from a survival standpoint) as much as we need to be able to have efficient communication and trusting social encounters.

He argues that truth-default is highly advantageous to survival because it allows for effective communication and social coordination. From an evolutionary standpoint, being vulnerable to deception does not threaten human survival, but not being able to communicate (the result of being skeptical of others’ honesty) does threaten human survival.

Truth-Default Example: The CIA

The CIA is supposed to be the most sophisticated intelligence agency in the world. But even some of the best CIA agents have failed to detect liars in their midst.

During the Cold War, Aldrich Ames, one of the most senior officers in the CIA’s Soviet counterintelligence branch, was working as a double agent for the Soviet Union. Years later, one of the most highly respected agents in the CIA, known as the Mountain Climber, said he had always held a low opinion of Ames. But the Mountain Climber never suspected Ames of being a traitor—he defaulted to trusting Ames.

If the Mountain Climber, one of the best agents at one of the most selective agencies in the world, can’t pick out a liar among his own team of spies, how can the average person be expected to be able to catch a total stranger in a lie?

Flawed Strategy 2: Assumed Transparency

The primary reason that meeting a stranger face-to-face sometimes makes that person harder to understand is that people assume a stranger will be transparent—that he will present himself outwardly in a way that accurately represents his inner feelings or intentions. But that is not usually the case.

Transparency and How It Works

Humans are not transparent—it’s all a myth. Because we have all watched the same TV shows and read the same novels where a character’s “jaw drops in surprise,” we have been conditioned to believe that there is one expression associated with any particular emotion. But that is unrealistic.

In reality, it takes getting to know someone well to be able to read them accurately. With a close friend, you come to understand their idiosyncratic expressions and what they mean to express. But when you encounter a stranger, you often have to make assumptions based on their expressions, because you don’t have any personal experience with that person. And those assumptions are based on stereotypes, like a jaw dropping in surprise, that are usually wrong.

After the Trivia Experiment, Tim Levine felt as though there had to be another reason (besides Truth-Default) that people tend to mistake lies for the truth. In an effort to explain this pattern, Levine returned to the tapes of his Trivia Experiment participants.

Two of Levine’s participants were particularly interesting to study. Let’s call one Sally and the other Nervous Nelly.

The viewers were operating under the assumption of transparency: that someone who behaves like a liar really is a liar, and someone who behaves like a truth-teller really is telling the truth. Sally was matched—she was being dishonest and she acted dishonest—so viewers judged her correctly.

Nervous Nelly, by contrast, was mismatched—she was being honest, but her demeanor seemed stereotypically dishonest—so viewers judged her incorrectly. In other words, the average person is only bad at detecting lies when the sender is mismatched. Mismatching confuses the average person because it is at odds with the natural assumption of transparency.

Assumed Transparency Example: Amanda Knox

On November 1, 2007, a British exchange student named Meredith Kercher was murdered in the small Italian town of Perugia. Kercher’s body was found by her roommate, an American student named Amanda Knox. Knox called the police to the scene of the gruesome crime. Knox was convicted of Kercher’s murder.

What doesn’t make sense is why Amanda Knox was convicted, or even suspected. There was no physical evidence or motive that linked her to the crime. The simplest theory of what went wrong with Amanda Knox’s case is this: the police expected Knox to be transparent and she wasn’t. Her case is an example of the consequences of assuming that the way a stranger looks is a reliable indicator of how she feels.

Amanda Knox was innocent. But in the months following the crime, the way she acted made her seem guilty. She was mismatched, like Nervous Nelly: her odd behavior after the murder drew suspicion from investigators.

So Amanda Knox spent four years in prison (and another four years waiting to be declared officially innocent) for the crime of behaving unpredictably—for being mismatched. But being weird is not a crime.

Flawed Strategy 3: Neglect of Coupled Behaviors

The third mistake that people often make when dealing with strangers: We fail to recognize coupled behaviors, behaviors that are specifically linked to a particular context. For example, we fail to see how a person’s personal history might affect his behavior in a particular environment. Instead, people tend to operate with an assumption of displaced behaviors, behaviors that do not change from one context to the next.

Once you understand that some behaviors are coupled to very specific contexts, you’ll learn to see that a stranger’s behavior is powerfully influenced by where and when your encounter takes place. Then, you’ll be able to recognize the full complexity and ambiguity of the people you come across.

Coupled Behavior Example: Crime

In the early 1990s, the Kansas City Police Department decided to study how to deploy extra police officers in an effort to reduce crime in the city. They hired criminologist Lawrence Sherman and gave him free rein to make changes in the department. Sherman was sure that the high number of guns in Kansas City was coupled with the city’s high level of violence and crime. So he decided to focus his experiment specifically on guns in the 144th patrol district of Kansas City, one of the most dangerous areas in the city.

In an effort to target the coupling between guns and crime, Sherman deployed four officers in two cars to patrol District 144 at night. He told these officers to watch for suspicious-looking drivers and pull them over, to search as many qualifying cars as they could, and to confiscate any guns they found. The officers were effectively searching for needles in a haystack: the goal of every stop was to find a gun or drugs. The four officers went through specialized training and worked only in District 144 at night—Sherman wanted to make sure that they knew how to make the right kind of traffic stops, in the right locations, at the right times, leading to the right kind of searches.

Over the 200 days that Sherman ran his experiment, gun crime was cut in half in District 144 of Kansas City. The experiment was successful because it made crime-fighting strategies more focused—it targeted one aspect of the coupled behavior (guns) in order to prevent the other coupled behavior (crime).

A Failed Interaction Between Strangers

On July 10, 2015, a young woman named Sandra Bland was pulled over in a small town in Texas for neglecting to signal a lane change. The police officer’s name was Brian Encinia. His interaction with Sandra Bland began courteously enough. But after a few minutes, Sandra lit a cigarette and Encinia asked her to put it out. She refused, and the interaction deteriorated from there.

Brian Encinia told Sandra Bland to step out of the car. She repeatedly said no, telling the officer that he had no right to ask that of her. Eventually, Encinia reached into the car and tried to remove Sandra by force. Finally, Sandra stepped out of her vehicle. She was arrested and put in jail, where she committed suicide three days later.

Sandra Bland’s arrest and subsequent suicide in jail is a tragic example of what can happen when two strangers use flawed strategies to try and understand each other.

Encinia’s Three Mistakes

In dealing with Sandra Bland, officer Encinia made the same kinds of mistakes that most people make when dealing with a stranger: he treated her demeanor as a transparent window into her intentions, and he failed to consider how her behavior was coupled to the circumstances of the stop.

Conclusion

In our modern, seemingly borderless world, we have no choice but to interact with strangers. Yet we, as a society, are incompetent at making sense of the strangers we come across. So what should we do?

If our society is to avoid failed interactions between strangers, we must learn to accept the limits of our ability to read other people: to remember that we default to truth, that strangers are not transparent, and that their behavior is coupled to its context.

Most importantly, we must learn not to blame the stranger when an encounter goes awry, but to look into how our own instincts might have played a part, as well.

Introduction

What Happened to Sandra Bland?

On July 10, 2015, a young woman named Sandra Bland was pulled over in a small town in Texas. She was a tall, beautiful, African American woman who liked to post inspirational videos online. She had just gotten a new job at Prairie View A&M University, and she had big plans to work while studying for her master’s degree. That day, just blocks away from her new campus, Sandy was pulled over for neglecting to signal a lane change.

The police officer’s name was Brian Encinia. His interaction with Sandra Bland began courteously enough. But after a few minutes, Sandra lit a cigarette and the officer asked her to put it out. She refused, and the interaction deteriorated from there.

Brian Encinia told Sandra Bland to step out of the car. She repeatedly said no, telling the officer that he had no right to ask that of her. Eventually, Encinia reached into the car and tried to remove Sandra by force. Finally, Sandra stepped out of her vehicle. She was arrested and put in jail, where she committed suicide three days later.

Sandra Bland’s death came at a time in American history when police brutality against African Americans had become particularly frequent and high-profile in the media. Michael Brown, Freddie Gray, Philando Castile, and Eric Garner—these are just a few of the names of African Americans who died as a result of police brutality. The Black Lives Matter movement rose to prominence in that time frame. Sandra Bland even made a video urging African Americans and white Americans to find peace and understanding with each other. Three months later, Sandra was found dead in jail. What happened to Sandra Bland?

This book is an attempt to understand what really happened between Brian Encinia and Sandra Bland that day. We will come back to this particular interaction—between a white, armed, male police officer and a black, unarmed, female civilian—to question how it might have ended differently if only our society were better at making sense of strangers.

The First Real Strangers

Throughout human history, the vast majority of human interactions occurred between neighbors, relatives, or people who had at least some things in common, such as worshipping the same God. But when Hernán Cortés landed in Mexico in 1519, he began an entirely new kind of encounter in human history: one of the first times that two peoples with entirely different assumptions, histories, and cultural backgrounds were thrown into contact. Upon meeting the Aztec ruler Montezuma II and seeing the capital, Tenochtitlán, Cortés and his men were in awe. They had never seen a city as grand or a culture as drastically different from their own.

The two leaders, Cortés and Montezuma, knew absolutely nothing about each other’s language, civilization, or cultural nuances. They had to go through a long and complicated chain of translation to communicate with each other. The result? Montezuma’s capture and murder at the hands of Cortés, followed by the deaths of nearly 20 million Aztecs. But why?

One answer to this question lies in the difference between the way Cortés and Montezuma communicated, rooted in their cultural traditions. Montezuma’s native language, Nahuatl, had a reverential mode of speech: when Montezuma spoke, he spoke with false humility, because that was how Aztec nobility communicated power. But to Cortés, Montezuma’s humble words sounded like surrender.

In today’s world, people often find themselves in these kinds of encounters—interactions with complete strangers. This book is about why they often go wrong, and why we are so bad at understanding the strangers we come across. By looking at some stories from world history and recent news, you’ll begin to understand the strategies people use to translate the words and intentions of strangers. You’ll begin to see where those strategies came from. And, you’ll begin to notice the patterns by which those strategies fail.

Part 1: Two Puzzles

There are two major puzzles about interacting with strangers that this book will attempt to answer:

  1. Why can’t we identify when a stranger is lying to our face?
  2. Why does meeting a stranger face-to-face sometimes make it harder to make sense of that person than it would be without meeting them at all?

This book will attempt to answer those questions, but first, let’s look at some examples of these puzzling patterns of human behavior.

Puzzle 1 Example: Spies and the CIA

The CIA is supposed to be the most sophisticated intelligence agency in the world. But even some of the best CIA agents have failed to detect liars in their midst.

In 1987, two years before the fall of the Iron Curtain, Cuban intelligence officer Florentino “Tiny” Aspillaga grew disenchanted with Fidel Castro’s style of leadership. On June 6, he defected from his service in the Cuban government. Aspillaga drove to Vienna and immediately found his way to the American Embassy there. This is known in the spy trade as a walk-in. This particular walk-in was one of the most important in the Cold War.

Aspillaga’s Story

After his surprising appearance at the door of the U.S. Embassy, Aspillaga was taken to Germany for debriefing. He had one condition before giving up his information on Cuba: He wanted to meet the CIA agent known as “el Alpinista,” or the Mountain Climber. The Mountain Climber was highly regarded in the CIA for his impeccable and incorruptible tradecraft, so Aspillaga had always revered him.

El Alpinista was intrigued when he heard that a Cuban official had defected. He flew to Germany right away to meet with Aspillaga. There, Aspillaga revealed shocking news from behind the Iron Curtain: more than a dozen of the CIA’s agents spying in Cuba had been turned into double agents working against the United States. These double agents had been feeding the United States falsified information for years—all of it fabricated by the Cuban government.

The Latin American division of the CIA was shocked and horrified by the news. They were supposed to be the most sophisticated intelligence agency in the world, but they had been made to look like fools.

This was not the first time in the history of the CIA that something like this had happened. Aldrich Ames, one of the most senior officers in the CIA’s Soviet counterintelligence branch, was working as a double agent for the Soviet Union. And even though the Mountain Climber later said he had always held a low opinion of Ames, he never suspected Ames of being a traitor.

If the Mountain Climber, one of the best agents at one of the most selective agencies in the world, can’t pick out a liar among his own team of spies, how can a person be expected to be able to catch a stranger in a lie?

Puzzle 2 Example: Neville Chamberlain and Adolf Hitler

In 1938, Adolf Hitler announced his plan to invade the German-speaking region of Czechoslovakia. In response, British prime minister Neville Chamberlain decided to go to Germany and meet with Adolf Hitler face-to-face. He hoped to look Hitler in the eyes, get a sense of his true intentions, and measure Hitler’s capacity to be reasoned with. This was considered a daring move in the effort to avoid a world war, mostly because so few leaders had ever attempted anything like it.

On September 15, Chamberlain arrived at Hitler’s retreat in Berchtesgaden, Germany. There, Hitler said, in no uncertain terms, that he planned to invade Czechoslovakia and only Czechoslovakia. The prime minister took a long, hard look into Hitler’s eyes and determined that he was telling the truth. Upon arriving back in England, Chamberlain reassured the public that he was confident he understood Hitler’s true intentions.

Chamberlain met with Hitler twice more after that. The two men spent hours together. Hitler even signed an agreement to keep the peace in Europe. Chamberlain made careful mental notes about everything that Hitler said and did. Once, when Chamberlain was nervous that Hitler’s “storm clouds were up,” he was only calmed by the fact that Hitler gave him a friendly handshake with both hands. That made him feel sure that Hitler was in a sound state of mind and meant to keep the peace. But Chamberlain couldn’t have been more wrong.

The mistakes Chamberlain made in his negotiations with Adolf Hitler are considered some of the most crucial mistakes of World War II. He completely misinterpreted Hitler’s intentions, with horrifying results. Chamberlain made the same assumptions with Adolf Hitler that most people make when meeting a stranger: He believed that the observations he made from interacting with Hitler personally would be valuable, like the double-handed handshake.

So why is it that Neville Chamberlain, one of the few world leaders to ever spend significant time with Hitler, was deceived about Hitler’s intentions? And why were others who never spent time with Hitler (like Winston Churchill) able to see the truth of Hitler’s intentions? That is Puzzle Two, and it is a pattern of human behavior that many people have attempted to study and understand. Let’s look at another example.

Puzzle 2 Example: Judges Vs. Artificial Intelligence

On a typical Thursday in Brooklyn, Judge Solomon was presiding over his courtroom. His primary responsibility for the day was arraignments. He had to see every defendant who had been arrested in the last 24 hours, look at their criminal history, listen to the testimony of both the prosecution and the defense, and then decide if the defendant would be offered bail and the chance to be released from custody. In short, Judge Solomon had to look a perfect stranger in the eye, assess his character, and decide if he deserved his freedom. But does looking a person in the eye actually help you judge his nature?

A team from the University of Chicago, led by Sendhil Mullainathan, set out to answer that question. The experiment went like this:

  1. Mullainathan’s team gathered the records of all 554,689 defendants who went through the New York City courts from 2008 to 2013. They found that 400,000 of those defendants had been released by the judges who presided over their arraignments.
  2. Mullainathan’s team built an artificial intelligence system: a machine-learning program designed to predict which defendants were most likely to commit a crime while out on bail.
  3. The computer was fed the data from the same 554,689 cases. It then produced its own list of the 400,000 defendants it judged least likely to commit a crime while out on bail. (A minimal sketch of this kind of record-only ranking appears after this list.)
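
To make the setup concrete, here is a minimal sketch, in Python, of the kind of record-only risk ranking described above. The feature names, the model choice, and the data are hypothetical illustrations, not the research team’s actual pipeline or dataset.

```python
# Hypothetical sketch: rank defendants by predicted risk using only their records.
# The features, model choice, and data here are illustrative, not the study's actual pipeline.
from dataclasses import dataclass

from sklearn.ensemble import GradientBoostingClassifier


@dataclass
class Defendant:
    age: int
    prior_arrests: int
    prior_failures_to_appear: int
    charge_severity: int      # e.g., 1 = minor offense, 5 = serious offense
    reoffended: bool = False  # known outcome, used only for the training set


def features(d: Defendant) -> list[int]:
    return [d.age, d.prior_arrests, d.prior_failures_to_appear, d.charge_severity]


def rank_for_release(train: list[Defendant], pending: list[Defendant], n_release: int) -> list[Defendant]:
    """Train on past cases, then return the n_release pending defendants with the lowest predicted risk."""
    model = GradientBoostingClassifier()
    model.fit([features(d) for d in train], [d.reoffended for d in train])

    risk = model.predict_proba([features(d) for d in pending])[:, 1]  # estimated probability of reoffending
    ranked = sorted(zip(pending, risk), key=lambda pair: pair[1])
    return [d for d, _ in ranked[:n_release]]
```

The point of the sketch is simply that such a model sees nothing but the paper record: no demeanor, no eye contact, no courtroom impressions.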

The Results

The computer was much better than the human judges at determining a defendant’s likelihood of committing another crime. The study found that the judges were not only setting their standards for release too low, but also mis-ranking many defendants completely.

It is important to note that the human judges that presided over these cases had three resources available to them when making their bail decision:

  1. The defendant’s record
  2. The testimony of the attorneys
  3. The judge’s own personal observations of the defendant standing before him.

Mullainathan’s computer only had one of these three resources: the record of each defendant. Yet the machine still beat the human judges when it came to making bail decisions. Why is it that meeting a defendant actually made the judges less able to gauge his trustworthiness? That’s a version of Puzzle Two.

Word-Completion Task

The examples above were meant to illustrate the two puzzles—to show that there is a pattern in human behavior of

  1. Otherwise intelligent people failing to decipher when a stranger is lying
  2. People becoming worse at judging a stranger once they have met him

The problem at the heart of these two questions is that people assume that they can read others based on simple clues. But when it comes to strangers, nothing is as simple as it seems.

The common word-completion task used by psychologists is a good example of this point. Take a look at the following set of words and fill in the blank letters. Do this as quickly as you can, without second-guessing your answers:

_ _ TER

TOU _ _

STR _ _ _

B _ _ T

_ _ NNER

Now, look at your list. Do you feel that your choices say something about who you are as a person? Do you think it’s significant if you completed the word TOUGH instead of the word TOUCH, for example?

Psychologist Emily Pronin and her team gave a group of people this exact same exercise some years ago. When asked if their choices were indicative of their personality, the people in the group said no. They said that their own word completions were nothing but a random set of words.

However, when Pronin gave the people in the group the word completions of other participants (strangers), they seemed to change their minds. They made assumptions about the other participants based on their word choices. For example, one participant accused another of being competitive, based on his completion of the words BEAT and WINNER.

None of these people seemed to be aware of how contradictory it is to call your own word completions random, yet make assumptions about others based on their word choices. This is an example of how strangers often misunderstand each other: A person assumes that he has insights about others, but doesn’t consider that the opposite could be true (that the stranger also has insight about him). This person believes that he can see into the heart of a stranger based on his first impression, but would never expect a stranger to be able to do the same to him.

The three main mistakes that people make when forming a first impression of a stranger are:

  1. Defaulting to believing the stranger will tell the truth.
  2. Assuming the stranger will be transparent.
  3. Neglecting to consider how the stranger’s context affects his behavior.

This book will illustrate those three mistakes and how they relate to the two puzzles of interacting with strangers.

Part 2-1: Why Can’t We Detect a Lie?

The primary reason that most people cannot immediately identify when a stranger is lying is that human beings default to assuming truth in others. This is called the Truth-Default Theory.

Truth-Default and How It Works

In an effort to understand and analyze the Truth-Default Theory, psychologist Tim Levine used hundreds of versions of the same basic experiment (referred to here as the Trivia Experiment). Here is how the Trivia Experiment works:

  1. Levine invites participants to a laboratory. They are told that if they answer the trivia questions correctly, they will win a cash prize.
  2. Each participant is given a partner. The participants don’t know that the partner works for Levine.
  3. The test is administered by an instructor named Rachel. The participants don’t know that Rachel also works for Levine.
  4. Halfway through the test, Rachel leaves the room. The partner points to an envelope lying on the table and asks the participant if they should cheat on the test. The participant is given the opportunity to choose whether or not to cheat.
  5. After the trivia test is over, Levine interviews the participant. He asks the participant if they cheated during the test.
  6. After the completion of the interviews, Levine goes back to watch the tapes and sorts them into two categories: liars and truth-tellers.
  7. Other people then watch the interview tapes and try to decipher which participants are lying about cheating and which participants are telling the truth.

Levine’s conclusion:

When watching the tapes, most people will guess that each person interviewed is telling the truth, unless they see a behavior that distinctly makes them think the person is lying. In other words, the viewers default to assuming truth—they naturally operate under the assumption that the majority of participants are honest. This is the Truth-Default Theory (TDT). Because of this default, viewers are more accurate at identifying truth-tellers than liars.

Truth-Default Triggers

The Trivia Experiment itself is an example of how TDT plays a role in human behavior. Each participant who goes through the trivia test knows she is part of an experiment. Suddenly Rachel leaves the room and just so happens to leave the answers on the desk. The participant’s partner, whom she’s never met before, suggests cheating. Wouldn’t you expect at least some of these college-educated participants to be suspicious at that point?

Every once in a while, a participant might catch on that one or more aspects of the trivia test is a setup, part of the experiment. However, they almost never assume that their partner is involved. Why not?

Levine concluded that a participant can have a suspicion, or even a series of doubts, but will continue to believe the truth of the situation. The only way a person will snap out of the truth-default is by gathering enough doubts: when their suspicions rise to a level that they cannot explain away or rationalize. He called this a “trigger.”

So when a person (let’s call him Fred) comes across a stranger, Fred will generally believe what the stranger says. Even if he doubts the stranger several times, Fred will continue to default to the assumption that the stranger is telling the truth. That truth-default will hold until Fred gathers enough doubts to push him past the trigger point. If there are not enough red flags to trigger Fred to notice a stranger’s lie, it is only human that he would fail to identify the stranger’s deception.
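
As a rough illustration of this threshold idea, here is a minimal sketch of the truth-default as a doubt accumulator. The trigger value and the doubt “weights” are invented for illustration; this is not Levine’s formal model.

```python
# Illustrative sketch: the truth-default as a doubt accumulator.
# The threshold and the doubt weights are invented; this is not Levine's formal model.

TRIGGER_POINT = 1.0   # how much unexplained doubt it takes to abandon the truth-default


def still_believes(doubts: list[float], explain_away: float = 0.3) -> bool:
    """Return True while Fred still defaults to believing the stranger.

    Each doubt has a weight between 0 and 1. Small doubts (below `explain_away`)
    get rationalized and contribute nothing; only the accumulated, unexplained
    doubt can push Fred past the trigger point.
    """
    accumulated = sum(d for d in doubts if d > explain_away)
    return accumulated < TRIGGER_POINT


print(still_believes([0.2, 0.25, 0.1]))   # True: small doubts are explained away
print(still_believes([0.4, 0.5, 0.6]))    # False: enough red flags to cross the trigger point
```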

Truth-Default in the CIA

In the early 1990s, Cubans were fleeing to the United States to escape Fidel Castro’s rule in Cuba. The journey was dangerous, costing the lives of 24,000 Cubans. In response, a group of immigrants in America founded Brothers to the Rescue, or Hermanos al Rescate. The Brothers flew a small fleet of planes over the Florida Straits and into Cuban airspace in an effort to save Cuban lives and incite revolt against Castro’s regime.

On February 24, 1996, two Brothers to the Rescue planes were shot out of the sky near Cuba. Four people died. Cuban emigres and Americans alike were furious about the attack. The United Nations denounced the Cuban government. The press followed the event closely, calling the attack an act of war. But the story shifted after retired U.S. rear admiral Eugene Carroll appeared on CNN for an interview. On TV, Carroll told a shocking story:

The day before the attack, February 23, Carroll had met with the Cuban Ministry of Defense. They told him that they were capable of shooting down private United States aircraft. Understandably, Carroll interpreted that as a warning of an upcoming Cuban attack on the Brothers to the Rescue. He immediately relayed the information to the State Department and the Defense Intelligence Agency (DIA). Neither department did anything to stop Brothers to the Rescue from flying the next day.

Suddenly, the story about Cuba’s violent attack became a story about incompetence and negligence in the American government. Can you see any coincidences in this story so far? Let’s review:

  1. The Cubans deliberately shot down and killed American citizens in international airspace.
  2. Just one day before the attack, a high-ranking American insider (Carroll) was warned it would happen and took that warning to the U.S. government.
  3. The U.S. government failed to protect the Brothers to the Rescue.
  4. Luckily for Cuba, Carroll went on TV the day after the attack and told the world his story. He also asked Americans to see the situation from Cuba’s point of view: imagine that Mexican planes were flying over California, dropping leaflets to incite an uprising. If we warned Mexico to stop sending those planes and they didn’t, how long would the American government tolerate them? Essentially, Carroll made Cuba’s case in front of the world.

Does the timing of all of this strike you as odd? If not, don’t worry. It’s only human that you would default to assuming the most likely circumstance was true and take the story as an unfortunate coincidence. Only one counterintelligence analyst in the DIA, Reg Brown, saw the nuances of the timing and went on alert.

Reg Brown’s Suspicions

The first thing that tipped off Reg Brown that Cuba had orchestrated the events surrounding the attack was the revelation that a Brothers to the Rescue pilot named Juan Pablo Roque was a spy for Fidel Castro. Roque had tipped Castro off that there would be planes in the air on February 24. In Reg Brown’s mind, that made Carroll’s meeting with the Cubans on February 23 very suspicious: it was the perfect day for a warning. Wouldn’t Cuba want to maximize the impact of its warning by delivering it the day before the attack? Reg Brown decided to look into who set up the meeting between Cuba and Eugene Carroll and who selected the date of February 23. After a bit of research, he found out that the meeting had been scheduled by an agent in the DIA named Ana B. Montes.

There was another suspicious event involving Ana Montes on the night of the attack. After the planes were shot down, Montes was called to the Pentagon. But later in the evening, she took a personal phone call and left. This was unheard of, and it made Reg Brown wonder what could possibly have taken the Cuban intelligence expert away from the scene after such a high-profile Cuban attack.

Brown also wondered: If Montes were a spy, wouldn’t her Cuban handler have been desperate to hear from her and learn how the U.S. government planned to respond to the event? Was this why she had left the Pentagon? And on top of that, wasn’t it suspicious that she was also the woman responsible for planning the questionably timed meeting with Eugene Carroll?

Montes was a colleague of Brown’s at the DIA. She was known as the “Queen of Cuba” for her expertise in Cuban intelligence. She was a star in the DIA who had glowing recommendations from her bosses and peers alike. Brown agonized over whether or not to tell someone about his suspicions about Ana Montes. He had very little evidence for such a big accusation. But finally, Reg Brown decided to tell counterintelligence officer Scott Carmichael what he knew.

Scott Carmichael’s Investigation

Reg Brown had to persist for months before Scott Carmichael agreed to pull Ana Montes’ file and look into her. But her file seemed clean—there were no obvious red flags, like large, unexplained sums of money or suspicious behaviors. Still, Carmichael moved ahead, careful to document everything so that he could discount Brown’s suspicions without bringing attention to his investigation.

Eventually, Scott Carmichael called Ana Montes in for a meeting. He was immediately impressed by her, and attracted to her. But Montes seemed to be in a hurry. She said that she needed to leave by 2:00. Carmichael assumed she was only trying to assert her dominance in the situation, and he lost his patience, saying, “Look. I have suspicions about your involvement with Cuban counterintelligence. This will take as long as it takes.” Carmichael was proud of having come up with something that made Ana go quiet and take him seriously. The interview continued.

When he asked her about the Eugene Carroll meeting, Montes had a perfectly good explanation: February 23 was the most convenient date for all the parties involved with the meeting. When Carmichael asked Montes about leaving the Pentagon, she had another understandable explanation: She had food allergies that prevented her from eating out of the Pentagon vending machine. So she hadn’t been able to eat all day. Finally, after being at the Pentagon for 14 hours straight, she had to get something to eat.

After the meeting, Carmichael corroborated all of her stories. Everything checked out. Carmichael called Reg Brown and assured him that Montes was clean—the investigation was closed.

Five Years Later

In 2001, an analyst at the National Security Agency (NSA) approached Scott Carmichael with sensitive information: Two years earlier, the NSA had intercepted a Cuban communication and successfully broken its code.

The communication mentioned a prominent Washington figure who was secretly working for Cuba. They called this person Agent S. Agent S had interests in the SAFE system and had traveled to Guantánamo Bay in July 1996.

Immediately, Scott Carmichael was on alert. The SAFE system was the DIA’s computer system, so Agent S was most likely a DIA employee. When he cross-referenced DIA employees who had traveled to Guantánamo Bay in July 1996, a familiar name came up: Ana Montes. Carmichael had finally hit the trigger point—he knew that Ana was a spy.

It turned out that Ana Montes had been spying for Cuba since the very start of her career. What’s more, she hadn’t even been very sneaky about it. After her arrest, the DIA found the codes she used to communicate with Cuba in her purse. She had a shortwave radio in her closet. She even asked to take a paid research sabbatical in Havana, Cuba. Does this sound like a woman trying to conceal her double life as a Cuban spy? How did no one but Reg Brown have suspicions?

Well, according to Levine, the answer is that Montes’s behavior never pushed anyone but Reg Brown past the trigger point. She kept to herself, behaved normally, succeeded at work—didn’t raise enough red flags to make anyone doubt her.

But there had been red flags. Scott Carmichael thought back to his interview with Montes following the Cuban attack and realized that he had missed a crucial clue. When he confronted Montes by telling her he was suspicious of her, she only sat there. At the time, he was happy he had gotten her quiet. But looking back, he could see that a normal person would have argued for her innocence, or at least acted confused.

After Ana Montes was revealed to be a double agent, Carmichael beat himself up for missing this important clue. But he wasn’t negligent; he was human. He had followed up on all the suspicions raised by Reg Brown, and none of Ana Montes’s behavior pushed him past the trigger point. So he naturally defaulted to truth until, five years later, her name popped up on his computer for visiting Guantánamo.

Think about it this way: Carmichael was faced with two alternatives:

  1. The very rare chance that a respected DIA agent was a double agent for Cuba
  2. The more likely possibility that Brown was being paranoid

The human instinct to default to truth is like playing the odds of a situation. History and statistics say that liars and con men are rare, so our natural default is to trust the strangers around us. Unfortunately, that sometimes means that criminals like Ana Montes go years before triggering the doubt necessary to be caught.

Part 2-2: Is Truth-Default Beneficial?

Being able to detect lies seems like an unequivocal good. The Russian archetype of the Holy Fool demonstrates the benefits of being able to see the truth of a situation. The Holy Fool is typically an eccentric character, sometimes even a crazy one, who has access to truths that other characters don’t have access to.

For example, in “The Emperor’s New Clothes” by Hans Christian Andersen, the emperor believes that he has a magic outfit that can only be seen by intelligent people. When he walks down the street, no one is willing to admit that the emperor is naked, for fear that they’ll be called stupid. Only a small child yells out over the crowd, “The emperor isn’t wearing anything!” The child is a Holy Fool.

In real life, the Holy Fool is a whistleblower. What sets whistleblowers apart from the rest of society is that they have a sharper eye for deception: they are less likely to default to truth and less likely to believe that liars are rare. Instead, the Holy Fool sees con men around every corner.

So why don’t more people think like the Holy Fool? Isn’t it beneficial to be able to spot a lie?

The Benefit of Truth-Default

If you think about it, the truth-default is the reason that criminals so often go undetected. So wouldn’t it be beneficial for humans to have evolved better lie-detection skills? Why is it still a natural human behavior to default to truth? The short answer: defaulting to truth aids our survival.

After the Trivia Experiment, Levine wrestled with a follow-up question: if defaulting to truth leaves us so exposed to deception, why do humans do it at all?

Ultimately, Levine concluded that human beings do not need to identify lies (from a survival standpoint) as much as we need to be able to have efficient communication and trusting social encounters.

He argues that truth-default is highly advantageous to survival because it allows for effective communication and social coordination. From an evolutionary standpoint, being vulnerable to deception does not threaten human survival, but not being able to communicate does threaten human survival. To better understand this point, we have to look at what happens when we don’t default to truth.

Bernie Madoff and Harry Markopolos

In the 1990s and early 2000s, Bernie Madoff was a well-known name within the financial industry. He was successful, wealthy, imperious, and reclusive—the kind of mysterious figure that draws attention.

In 2003, he drew the attention of Nat Simons and the executives at Renaissance Technologies. The firm looked into Madoff’s trading strategies and decided that something wasn’t quite right. They held an internal investigation and became even more skeptical of how Madoff was making money. But did Renaissance Technologies cut ties with Madoff completely? No. They only cut their stake in half. They didn’t have enough doubts to write him off completely.

Five years later, Madoff turned himself in to the authorities. He was exposed as the perpetrator of the biggest Ponzi scheme in history. Over the course of those five years, several journalists, investigators, and bankers had suspicions about the validity of Madoff’s business (just as Nat Simons had), but no one came to the conclusion that he was a criminal. Everyone in the financial industry defaulted to truth and assumed that he couldn’t be a fraud—everyone except Harry Markopolos.

Harry Markopolos

Harry Markopolos is a financial expert who's deeply skeptical of large organizations, and of people in general. He has a very low threshold for doubt. Consequently, his trigger point is extremely sensitive.

Markopolos first heard of Bernie Madoff in the late 1980s. He worked at a hedge fund and was trying to copy Madoff’s trading strategy, but he couldn’t figure out how Madoff was coming up with his figures. So he called some contacts in the financial industry. No one knew where or how Madoff was doing his business. Markopolos was immediately triggered: he was convinced that Madoff’s business was illegitimate.

Markopolos went through the exact same process that Nat Simons and Renaissance Technologies would go through a few years later. But instead of defaulting to truth, Markopolos immediately saw through Madoff’s dishonesty and the incompetent regulation of the financial market.

Markopolos went to the SEC with what he believed to be a foolproof argument against Madoff in 2000, 2001, 2005, 2007, and 2008. If the SEC had acted on his tips and conducted an investigation thorough enough to reach the trigger point, Madoff could have been stopped when the fraud stood at roughly $7 billion, as opposed to the roughly $50 billion he had stolen by 2009.

Technically speaking, the human tendency to default to truth is to blame for allowing Madoff to get away with fraud for all those years. So is it dangerous that people naturally default to truth? Remember, the trade-off for enhanced lie-detection skills is often effective communication and social skills. Here’s how that trade-off materialized in this case:

In 2002, upon realizing the magnitude of Bernie Madoff’s empire, Markopolos began to fear for his life. Very powerful people had personal interests in keeping Madoff in business, and Markopolos had been publicly criticizing Madoff’s business for years. He felt he couldn’t be safe until the Ponzi scheme was revealed. So he made a plan to approach Attorney General Eliot Spitzer.

Markopolos was so paranoid that he made an elaborate plan to hide his identity at one of Spitzer’s events. He put his documented evidence against Madoff in a series of large envelopes and handed them to a woman at the event in the hope that she would pass them along to Spitzer. But the plan failed—the documents never reached the attorney general, and another seven years passed before Madoff was exposed.

Markopolos’s natural skepticism was the reason he was able to catch Madoff in the crime, but it was also the reason he was unable to effectively communicate his evidence in a way that made a difference in the case. That’s the consequence of not having a default to truth. Without a natural sense of trust, you cannot have effective social encounters.

Exercise: Consider Your Tendency to Default to Truth

Psychologist Tim Levine says that the cost of not defaulting to truth is the sacrifice of meaningful social interactions. But most people will default to truth until they have enough doubts about a person to reach the trigger point.

Part 2-3: When People Default To Truth

When scandals break in the news, one of the first things people tend to do is to blame the authorities who oversaw the criminal, like blaming the SEC for not catching Madoff sooner. People jump at the chance to judge that person or entity for covering up the criminal’s behavior or putting their own interests ahead of the truth. There’s a tendency to think of every scandal as a conspiracy.

But that interpretation is not always fair, and it doesn’t take Truth-Default Theory into account. Sometimes, the person in power is truly blinded by his own default to truth.

The Parents of Larry Nassar’s Victims

In 2017, USA Gymnastics team physician Larry Nassar pleaded guilty to federal child pornography charges and, in state court, to sexually assaulting the young gymnasts in his care. By the time of his sentencing, Nassar had over one hundred accusers with similar stories of sexual assault. The police retrieved 37,000 images of child pornography from Nassar’s personal computer. It was a very high-profile case in the media, and one that was relatively open-and-shut.

But why did it take so long for Nassar to be convicted? Over his 20-plus years as a gymnastics physician and sexual predator, allegations against Nassar were brought to people in positions of power (coaches, officials, or parents) at least 14 times. He had even performed acts of sexual assault on young girls with their parents in the room. But because the abuse was hidden under the guise of a medical procedure, none of those people ever stopped it. How is this possible?

What Nassar was accused of was so monstrous that these parents and coaches simply couldn’t believe it. If Nassar had been accused of drinking too much or being openly mean to the girls, someone would have spoken up immediately. But the truth was so devastating that it seemed unlikely, and so the parents’ truth-default kicked in. It is unfair to judge these parents as negligent. They are human, and their inability to detect the lie was a human mistake.

Even a case as clear-cut as Nassar’s was hindered by Truth-Default. So how can we expect witnesses to be any better when the details of a case are inscrutable?

Pennsylvania State University’s Leadership Team

One day in 2001, Michael McQueary entered the locker room of the Lasch Football Building at Pennsylvania State University. He expected the building to be empty, so he was surprised when he heard slapping sounds coming from the showers. Looking around, McQueary saw Jerry Sandusky, retired defensive coordinator of the Penn State football team, showering (naked) with a “minor individual.” McQueary later said that the boy looked to be about ten to twelve years old. Sandusky and the boy were standing close enough to be touching.

McQueary didn’t know what to do. He ran upstairs and called his parents to tell them what he had seen. Some days later, McQueary went to Penn State football’s head coach, Joe Paterno, to tell him about what he saw. Eventually, Paterno took McQueary’s story to the university’s athletic director Tim Curley, senior administrator Gary Schultz, and president Graham Spanier.

A full decade later, in 2011, Jerry Sandusky was finally arrested and convicted of 45 counts of child molestation. Eight young men accused Sandusky of sexual abuse. Once Sandusky was behind bars, heat began to fall on Penn State’s leadership team. Tim Curley and Gary Schultz were charged with conspiracy, failure to report child abuse, and obstruction of justice. Previously beloved university president Graham Spanier was fired. Six years later, Spanier was convicted of child endangerment. (Shortform note: A federal judge has since thrown out Spanier’s conviction.)

It might seem easy to point the finger at these men and say that they allowed Jerry Sandusky to roam free and abuse young boys. You might think that they were putting their own self-interests or the success of the university above the law. But taking into account all the evidence we’ve seen of Truth Default Theory, here’s the real question: If you had been president of Penn State at that time, would you have done anything differently?

Jerry Sandusky

Jerry always loved sports and loved being around young people. He and his wife adopted six children and fostered countless more. In 1977, Sandusky started a charity, a sports program for troubled youth called the Second Mile. Among the children, Sandusky was known to be a goofball. And Sandusky had a great reputation among his coworkers, the families of Second Mile kids, reporters, and almost everyone else he came into contact with.

But allegations against Sandusky began as early as 1998 when a young boy from Second Mile told his mother that he had showered with Sandusky. Ten years later, another boy came forward describing Sandusky’s behavior as inappropriate. But in both cases, Sandusky’s behavior was rationalized away as nothing more than “boundary issues.” The two cases together were not enough to push any doubts past the trigger point.

From 2009-2011, the police interviewed former Second Mile members to find more victims. For two years, they failed. But then, the prosecutor’s office received an email from an unknown source. It said to contact Mike McQueary about what he witnessed in the Penn State locker room a full decade earlier.

All of a sudden, the case against Sandusky was no longer based only on the unreliable testimonies of troubled teens. Mike McQueary’s testimony gave the prosecution a very strong story: a member of the football staff had seen Sandusky alone in the showers with a young boy a decade earlier, had reported what he saw up the chain of command, and the university’s leaders had never gone to the police.

But is that really what happened? Isn’t there a lot more ambiguity involved?

Sandusky’s Trial

Let’s look at how those ambiguities revealed themselves in Sandusky’s trial.

This case, like most sexual assault cases, was very complicated. There were layers of denial, confusion, and shame that confounded the jury. Can you understand how the first people who heard about Mike McQueary’s allegations (the university leadership) might have been confused about how to handle the situation?

Truth-Default and Leadership

Imagine how Curley, Schultz, and Spanier must have felt when Joe Paterno came to them with this disturbing and complicated story. What was more likely: That Jerry was being his goofy self with one of the kids he knew well or that he was engaging in some sort of inappropriate behavior that wasn’t even distinct enough to make Mike McQueary step in and stop it?

The three men sat around and contemplated the best way to handle the situation. They even reached out to the university’s attorney, Wendell Courtney. Courtney thought he just needed to have a word with Sandusky—to warn him to be careful so he wouldn’t be called a pedophile.

Curley, Schultz, and Spanier made a human error by defaulting to truth, but does that deserve to be judged as a criminal offense? No one tried to put the parents of Larry Nassar’s victims in jail for failing to see the abuse that was going on right in front of them. It was assumed that those parents simply trusted the community around their children.

The same level of trust should be given to the communities around every child. If everyone defaulted to assuming that every football coach was a pedophile, there would be no more sports. We default to truth because we have to for peace of mind. Sometimes, that trust is ruined by betrayal. In those instances, the people whose trust was broken deserve to be sympathized with, not blamed.

Of course, if someone like Harry Markopolos had been president of Penn State at the time, he would never have defaulted to trusting Sandusky’s intentions. He would have immediately leaped to the worst possible conclusion. But it’s important to remember that isn’t the most natural human reaction.

People with a high threshold for doubt, those who default to truth, make the best kind of leaders because they have a capacity for loyalty. Sending people like Curley, Schultz, and Spanier to jail sends the message that we want leaders like Harry Markopolos, who are always on the lookout for conspiracy. But leaders like that would come with consequences of their own. (We will come back to this point in the discussion of Sandra Bland in the last chapter.)

Part 3-1: Why Do We Misinterpret Strangers?

Meeting a stranger can make it more challenging to make sense of that person than not meeting him. That’s because people assume transparency in others.

Transparency refers to the assumption that the way people present themselves outwardly (through behavior and demeanor) is an accurate and reliable representation of their inner feelings and intentions. But that’s an unrealistic assumption to make when dealing with strangers.

Before we explore why we can’t assume others will be transparent, let’s look at how transparency works.

Transparency and How It Works

Facial expressions are one of the primary ways that we interpret a stranger’s feelings (because we mistakenly assume that a person’s demeanor is an accurate representation of his feelings).

Coded Facial Actions

Psychologist Jennifer Fugate is an expert in the Facial Action Coding System (referred to here as FACS). FACS assigns a name, or “action unit,” to each of the 43 distinct muscle movements of the face. These action units are used to notate and score people’s facial expressions the way music is scored by notes on a page. For example, a genuine (Duchenne) smile is scored as a combination of action unit 6 (the cheek raiser) and action unit 12 (the lip corner puller), while a polite or insincere-looking smile typically involves action unit 12 alone.

It is important to note that these smiles only seem to be insincere or genuine based on how they look. People’s expressions are not always transparent, as discussed below.
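
As a toy sketch of how a FACS-style score might be represented in code: action units 6 and 12 above are standard FACS units, but the data structure and the simple classification rule below are illustrative simplifications, not Fugate’s actual scoring method.

```python
# Toy sketch: representing a scored facial expression as a set of FACS action units (AUs).
# AU 6 (cheek raiser) and AU 12 (lip corner puller) are standard FACS units; the
# classification rule below is a simplification for illustration, not Fugate's method.

ACTION_UNITS = {
    6: "cheek raiser",
    12: "lip corner puller",
}


def describe_smile(observed_aus: set[int]) -> str:
    """Label a smile based on which action units were scored."""
    names = ", ".join(f"AU {au} ({ACTION_UNITS.get(au, 'other')})" for au in sorted(observed_aus))
    if {6, 12} <= observed_aus:
        return f"Genuine-looking (Duchenne) smile: {names}"
    if 12 in observed_aus:
        return f"Polite or insincere-looking smile: {names}"
    return f"No smile scored: {names or 'no action units observed'}"


print(describe_smile({6, 12}))  # genuine-looking smile
print(describe_smile({12}))     # polite / insincere-looking smile
```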

Transparency and Friends

Friends is one of the most successful and recognizable shows of all time—in part because the characters are completely transparent. To test the connection between transparency and the success of Friends, Jennifer Fugate watched the first scene of season five, episode 15 and performed a FACS analysis of each character’s expressions throughout.

At the beginning of the episode, Ross sees Chandler and Monica in a romantic moment. This is significant because Chandler is Ross’s best friend and Monica is Ross’s sister. Ross rushes to Monica’s apartment to bust in and stop them. He is frantic. Joey and Rachel come into the scene, while Chandler hides behind Monica to stay out of Ross’s warpath. Monica and Chandler announce that they are in love. Slowly, Ross comes around to being happy for them.

That might seem like a lot to keep up with, but Friends is incredibly easy to follow. Why is that? Because the characters are transparent, as Jennifer Fugate showed with her FACS reading of the scene.

The Results

Fugate’s FACS reading shows that the actors in Friends display every emotion their characters feel inwardly directly on their faces—the characters are completely transparent. The show’s popularity is evidence that people like dealing with transparent people (and characters). Transparency makes people and their stories easier to understand.

When we meet strangers, we tend to believe that they’ll be as transparent as the characters on Friends. This is the “Friends Fallacy.” It’s a fallacy because real life isn’t like an episode of Friends. In reality, strangers often aren’t transparent. So why do we assume they are?

Is Assuming Transparency Beneficial?

In The Expression of the Emotions in Man and Animals, Charles Darwin argues that it is beneficial to human survival for people to be able to quickly and accurately communicate emotions to one another.

The abilities to smile, frown, and wrinkle the nose in disgust are examples of how the human face evolved as a tool to represent internal feelings. This will probably strike you as a relatively obvious principle. After all, children everywhere naturally smile when they’re happy and frown when they’re sad, and that helps them get what they need to survive. So it seems reasonable to assume transparency.

But you should be careful not to assume that every stranger you come across will be transparent. That assumption requires everyone you meet to express themselves in the same predictable ways. Unfortunately, that is not the case.

Experiment: Transparency Across Different Cultures

Psychologist Carlos Crivelli spent years testing the limits of human transparency—he wanted to find out whether people from entirely different cultures evolved to express the same emotions in the same ways. So in 2013, he teamed up with anthropologist Sergio Jarillo to conduct a social experiment in the remote Trobriand Islands.

Crivelli and Jarillo chose the Trobriand Islands because the islanders live in relative isolation and have had little exposure to Western media and its conventions for displaying emotion.

The experiment went like this:

  1. Crivelli and Jarillo showed schoolchildren in Madrid six photos of a face. Each photo conveyed a different expression: happy, sad, scared, angry, disgusted, and neutral.
  2. The children were asked to identify which picture went with which emotion.
  3. Crivelli and Jarillo then traveled to the Trobriand Islands and showed the same photos to the people there, asking the islanders to identify which picture went with which emotion.
  4. Crivelli and Jarillo then compared how the two groups matched the pictures to the emotions.

The Results

The two groups matched the photos to emotions in strikingly different ways: the residents of the Trobriand Islands read the same faces entirely differently than the young children in Madrid did. In other words, every human might experience the same emotions inwardly, but the way those emotions are expressed (and interpreted) outwardly varies greatly from culture to culture.

If someone from the Trobriand Islands were to encounter a child from Madrid, the assumption of transparency would make it very difficult for the two strangers to understand one another.

Experiment: Transparency Within a Culture

Cultures that express emotions differently aren’t the only roadblocks to transparency. German psychologists Achim Schützwohl and Rainer Reisenzein designed an experiment to test how consistent a person’s expression is with what he is feeling inwardly—that is, to test whether people are generally transparent. Participants were confronted with a staged surprise as they exited a testing room, and their reactions were captured on camera.

The Results

When the 60 participants who went through this experiment rated how surprised they had felt at the moment they exited the testing room, the average score was 8.14 out of 10. They were genuinely startled.

Schützwohl and Reisenzein then asked each participant whether he thought that level of surprise had registered on his face. Almost all of the participants were convinced that they had made a transparent expression of surprise when exiting the testing room. But that wasn’t the case. Upon reviewing the recordings, Schützwohl and Reisenzein determined that only 5% of participants made the stereotypical expression of surprise (widened eyes, raised eyebrows, dropped jaw).

The participants overestimated their own transparency—they reasoned that if they felt surprised they must have also looked surprised.

Transparency Is a Myth

Humans are not transparent—it’s all a myth. Because we have all watched the same TV shows, like Friends, and read the same novels where a character’s “jaw drops in surprise,” we have been conditioned to believe that there is only one expression associated with any particular emotion. But that is unrealistic.

In reality, it takes getting to know someone well to be able to read their outward demeanor accurately. With a close friend, you come to understand their idiosyncratic expressions and what they mean to express. But when you encounter a stranger, you often have to make assumptions based on their expressions because you don’t have any personal experience with that person. But your assumptions are based on stereotypes, like Ross’s exaggerated facial expressions in Friends, that are usually wrong.

Example:

One day, while George is taking a shower, he hears his wife scream from the kitchen. He runs in and sees a man holding a knife to her throat. George is naked and wet from the shower, but he yells at the assailant in an intimidating voice, “Get out of here NOW.” The young man gets scared and flees the scene.

On the inside, George was absolutely terrified for his wife’s safety in that moment. But he didn’t show it on the outside. Maybe someone who knew him well would have been able to tell that George’s intimidating demeanor was his natural reaction to fear, but the assailant (a stranger) had no way to know that.

What did the assailant assume about George based on his intimidating expression—that he was dangerous, violent, cold, or something else completely? The intruder mistakenly assumed transparency (which was lucky for George and his wife).

Assuming Transparency Can Be Dangerous

The bail hearings described in the section Judges vs. Artificial Intelligence are another example of the transparency problem. Judges who can see their defendants make more mistakes in judging character than a computer that never sees the defendants at all. Watching someone’s facial expressions is not a foolproof way to tell how that person is feeling. Someone who is surprised might not show it, and someone who is dangerous might come across as stereotypically polite and harmless.

Patrick Dale Walker

A man named Patrick Dale Walker was arrested in Texas for trying to kill his girlfriend. The only reason he didn’t succeed in murdering her was that the gun jammed when he pulled the trigger. The judge presiding over the case set bail at $1 million, and Walker went to jail. But four days later, the judge lowered the bail to $25,000, and Walker was released.

The judge reasoned that four days in jail would be enough for Walker to “cool off.” He saw Walker as a polite young man with a clean record. Most important, he saw that Walker was remorseful for what he had done. But could he really see something like remorse in a stranger? Apparently not. Walker shot his girlfriend to death four months later.

In this case, seeing Walker made the judge worse at interpreting his intentions. The information the judge thought he gleaned from observing Walker’s seemingly remorseful behavior was actually misinformation, because Walker was not transparent.

What Is the Solution?

It’s clear that looking at a stranger and assuming transparency has negative consequences. But what do we do with that information? Should judges have to sit behind a curtain when hearing their cases? That might make a certain part of the decision-making process easier, but it would also take away the humanity of the defendant. A judge needs to see the human whose freedom is on the line.

Transparency, just like truth-default, is a flawed strategy for dealing with strangers. However, it is a necessary one. Humanity would suffer if every social interaction were mistrustful and anonymous, because social coordination is a necessary part of the human experience. So, in an effort to keep our humanity, we make ourselves vulnerable to deception. That is the confusing paradox of dealing with strangers: we need to communicate, but we’re terrible at it.

Part 3-2: Transparency and Mismatching

The assumption of transparency is especially dangerous in interactions where one person is mismatched—that is, when his outward demeanor doesn’t match his internal feelings and intentions.

Mismatching and How It Works

Recall Tim Levine’s Trivia Experiment: participants were interviewed about whether or not they had cheated while Rachel was out of the room, and viewers then watched the taped interviews and tried to determine which participants were lying about cheating on the test.

The experiment measures how accurately viewers detect the participants’ lies. Levine found that viewers judge a participant correctly only 54% of the time on average, barely better than chance. Why is that?

One answer is Truth-Default Theory. But Levine felt as though there had to be another reason that people tend to mistake lies for the truth. In particular, Levine was perplexed by the pattern that most lies are not detected until after the fact. (For example, Scott Carmichael missed all the clues that Ana Montes was a Cuban spy in the moment. But later, Carmichael could recognize those red flags.) In an effort to explain this pattern, Levine returned to the tapes of his Trivia Experiment participants.

Participants: Sally and Nelly

Two of Levine’s participants were particularly interesting to study. We will call one Sally and one Nervous Nelly. Sally had cheated on the test and lied about it in her interview, and she looked the part: her demeanor fit the stereotype of a liar, and nearly every viewer correctly judged her to be lying. Nervous Nelly, by contrast, had not cheated and answered honestly, but her anxious, defensive demeanor also fit the stereotype of a liar, and nearly every viewer incorrectly judged her to be lying.

There were a lot of participants like Nelly and Sally, who got the same response from nearly every viewer. In fact, some participants were judged correctly by 80% of judges or more. And some participants were judged incorrectly by 80% of judges or more. Why?

Levine argues that this is an example of a judge using demeanor as a clue to someone’s honesty—an example of the assumption of transparency. These viewers were operating under the assumption that a liar in reality would behave like a liar on Friends.

A person will be judged as honest if her demeanor fits the stereotype of honesty: if she comes across as calm, confident, and friendly, and holds eye contact.

A person will be judged as dishonest if her demeanor fits the stereotype of lying: if she comes across as nervous, fidgety, and evasive, and avoids eye contact.

The participants who were judged correctly by 80% or more of viewers were those whose internal state and external demeanor matched. For example, Sally matched—she was lying and she looked like a liar. The participants who were judged incorrectly were mismatched. For example, Nervous Nelly mismatched—she was telling the truth, but her demeanor seemed stereotypically dishonest.

In other words, the average person is only bad at detecting lies when the sender is mismatched. Mismatching confuses the average person—it is at odds with the natural assumption of transparency. When a liar acts stereotypically honest or an honest person acts stereotypically dishonest, we don’t know how to make sense of it.

Law Enforcement Viewers

It makes sense that the average person might not be a professional when it comes to detecting lies. But what about law enforcement officials who are supposed to be professional lie detectors, such as interrogators? Wouldn’t they be better judges than the average adult? To answer this question, Levine found interrogators with over 15 years of experience and asked them to watch the Trivia Experiment participants’ interviews.

The Results

Law enforcement officials turn out to be just as bad as the average adult, or even worse, at interpreting strangers whose behavior is mismatched. This is distressing because these are the people charged with determining a stranger’s innocence or guilt.

You have to wonder whether this is part of the reason there are so many wrongful convictions (like Amanda Knox’s, discussed below) and failed bail decisions (like the one that freed Patrick Walker). Mistakes like these might seem random, but Levine’s research shows that they aren’t random at all. They are the result of systematic discrimination against people who, often unknowingly, don’t conform to our unrealistic assumption of transparency.

Amanda Knox

On November 1, 2007, a British exchange student named Meredith Kercher was murdered in the small Italian town of Perugia. The crime scene was discovered by her American roommate, Amanda Knox, who called the police. Almost immediately, Knox was added to the list of suspects; she was ultimately convicted of murder and imprisoned.

The arrest and conviction of Amanda Knox was a sensation in the media and tabloids. But it doesn’t make sense that Amanda Knox was implicated in the murder. There was no physical evidence that put Knox at the scene of the crime. Nor was there any motive that explained why she would have murdered her roommate. The police botched the investigation of evidence and DNA, and the prosecution relied on fantasy to make the case. But, somehow, eight years went by before the Italian Supreme Court declared Knox innocent.

There are plenty of articles and books that detail all of the mistakes made by the investigators and prosecutors in Amanda Knox’s case. But the simplest theory of what went wrong with Amanda Knox’s case is this: the police expected Knox to be transparent and she wasn’t. Her case is an example of the consequences of assuming that the way a stranger looks is a reliable indicator of how she feels.

Amanda Knox and Mismatching

Amanda Knox was innocent. But in the months following the crime, the way she acted made her seem guilty. She was mismatched, like Nervous Nelly. The investigators on the case drew wild conclusions about Amanda’s role in Kercher’s murder based on the way she behaved.

Lead investigator Edgardo Giobbi said that he didn’t even need to rely on any other kinds of investigation because he had observed Knox’s guilt through her psychological and behavioral reactions to the murder. And the prosecutor on the case said that although the physical evidence collected had been very unclear, what was clear was that Amanda’s behavior was irrational.

Amanda Knox (like most strangers) was a mystery to the people who didn’t know her closely. She was the kind of outcast kid in high school who sang in the hallways and pretended to be an elephant in front of her classmates. Like most misfits, Amanda Knox had learned to be herself even when the people around her couldn’t understand her. So, in the days following Meredith Kercher’s murder, Knox didn’t adjust her behavior to conform to peoples’ expectations.

Here are a few examples of how Amanda Knox behaved after Meredith Kercher’s murder: while police processed the crime scene, she kissed and cuddled with her boyfriend outside the house, and in the days that followed she stretched and did the splits in the police station waiting room.

As Diane Sawyer would later say in an interview with Amanda Knox (after she had been declared innocent), that kind of behavior didn’t look like grief to most people. Many people found Knox’s demeanor to be cold and calculating.

So Amanda Knox spent four years in prison (and another four years waiting to be declared officially innocent) for the crime of behaving unpredictably—for being mismatched. But being weird is not a crime.

Part 3-3: Alcohol and Transparency

The assumption of transparency is clearly a problem for trained police professionals attempting to make sense of a suspect and seasoned judges trying to read a defendant. But the assumption of transparency also has devastating effects on young people across the nation.

For example, approximately one in five American female college students has been a victim of sexual assault. Many of these cases follow a similar pattern:

  1. Two young people (often strangers) meet and have a conversation.
  2. At some point, a sexual encounter begins.
  3. One party feels violated by the encounter.
  4. Later on, it is difficult to reconstruct how things escalated: Was there consent from both parties? If not, did one person actively ignore the other’s objection? Or did one person simply misunderstand the other’s sexual intentions?

It has been established that it is nearly impossible to judge a person’s general intentions based on their behavior or expressions. So how can we expect young, immature people to judge a person’s sexual intentions based on their behavior?

There are two major elements that contribute to this pattern of sexual assault among young people:

  1. There are no hard and fast rules about what constitutes consent, so people are forced to make assumptions: In a 2015 study, 1,000 college students were asked whether sexual activity when both people have not clearly given consent should be considered sexual assault. Half of the students answered that they were unsure. What are the implications of the fact that half of all young people are unsure whether consent needs to be clearly given in a sexual interaction? How can we expect young people to respect boundaries if there is no real consensus of what those boundaries are?
  2. Alcohol makes it much more difficult to express yourself and to understand others, and young people are often drunk during these sexual encounters: Alcohol puts the drinker into a transformative state of myopia in which only the short-term seems important and longer-term consequences are easy to ignore. Even more problematic, many young people drink to a state of “blackout.” (Myopia and blackout are discussed more fully below).

The misunderstanding of consent is essentially an assumption of transparency gone wrong, and alcohol only adds to the confusion. Consent is something that two people must negotiate together, and it requires both people to be who they say they are and to want what they say they want. Under the influence of alcohol, neither person is their true self, and neither can weigh their short-term wants against the long-term consequences. (Shortform note: Gladwell doesn't address situations in which rape happens with true lack of consent, rather than when alcohol is involved.)

Alcohol and Myopia

Alcohol induces myopia, a state in which the drinker’s mental and emotional field of vision becomes narrow. In other words, the drinker becomes short-sighted and his behavior and emotions are strongly affected by his immediate experience.

Myopia is a result of alcohol’s effects on the brain: alcohol dampens activity in the frontal lobes, the region that governs attention, planning, and self-control, so the drinker’s mental field of vision narrows to whatever is right in front of him.

The most crucial implication of the myopic state of intoxication is that the drinker’s understanding of self changes. A person normally constructs his personality and character by managing the struggle between immediate experience and long-term consequences—that’s ethical decision making. But a drunk person no longer considers those long-term consequences because the immediate experience takes sole focus. His normal character is broken down by myopia. Alcohol doesn’t disinhibit a person—it totally transforms a person.

Example of Myopia:

People who think of alcohol as a disinhibitor might drink in an effort to forget all their troubles. And that might work (for a time) as long as the drinker is in a fun environment, like a football game with tons of excited fans. The myopic state of intoxication will make the football game (the immediate experience) the only thing in his field of vision. And his problems become crowded out.

However, if the drinker is in a quiet, solitary environment his problems will become even more overwhelming. He will become more depressed because there is nothing in his immediate experience to replace his painful emotions and thoughts.

Blackouts

When a person drinks an excessive amount of alcohol in a short period of time, the hippocampus in the brain is affected. The hippocampus is responsible for forming memories, which is why excessive drinking often leads to a state of blackout. Blackout refers to a state in which the brain stops recording new memories, so some or all of what happens is lost to the drinker.

The first stage of a blackout:

The hippocampus starts to be affected when the blood-alcohol level reaches approximately 0.08. The National Institutes of Health’s leading blackout expert, Aaron White, says that there is no traceable logic to what gets remembered and what doesn’t when a drinker reaches this early stage of blackout. For example, it is entirely possible to forget being raped but remember getting in a taxi to go home.

Total blackout:

At a blood-alcohol level of approximately 0.15, the hippocampus shuts down entirely. At that point, all memories disappear completely and there is nothing to recall. Even in this state of total blackout, when the hippocampus is entirely shut down, it is possible for the drinker to continue to function like a “normal” drunk person. In fact, it can be impossible to tell when someone else has reached the point of blackout.

Example of Blackout:

Alcohol researcher Donald Goodwin once gave ten men from St. Louis a bottle of bourbon in an effort to study the blackout state. After letting them drink for four straight hours, Goodwin suggested that the men might be hungry. He handed each man a frying pan, covered with a lid.

When the men took the lid off the pan, there were three dead mice inside. Goodwin thought that would be a highly memorable, even traumatic, experience. But the bourbon drinkers had completely forgotten the event within 30 minutes, and they didn’t regain the memory the next morning.

Blackouts and Sexual Assault

In a state of blackout, all of the strategies people typically use to interact with strangers are dulled—most notably memory. Memory is usually the first line of defense when talking to a stranger: you draw on everything the person has said and done over the course of the encounter to decide how much to trust him and what to do next.

But if you can’t remember your interactions with a stranger, you lose that defense, and you might make your choices very differently: you are left reacting only to whatever is happening in the immediate moment.

Showing respect to another person requires balancing your own desires against the other person’s. But in the blackout state, myopia transforms your brain so that you can only see your own desires—the other person’s desires get crowded out. Essentially, it is impossible to be yourself in a social interaction when you are blackout drunk.

Women are at a much greater risk for blackouts, for physiological reasons. For a woman of average weight, for example, eight beers over four hours would put her at a blood-alcohol level of 0.173. She would be blacked out. But a man of average weight could drink the same amount and have a significantly lower blood-alcohol level of 0.15.
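
To see why the same amount of alcohol hits one body so much harder than another, here is a minimal sketch using the widely cited Widmark approximation for blood-alcohol concentration. The body weights, grams of alcohol per beer, body-water ratios, and elimination rate below are illustrative assumptions rather than the figures Gladwell uses, so the outputs won’t match the book’s numbers exactly; the gap between the two drinkers is the point.

    # Rough Widmark-style BAC estimate (illustrative sketch only; all constants are assumptions).
    def estimate_bac(beers: int, hours: float, weight_kg: float, body_water_ratio: float) -> float:
        alcohol_grams = beers * 14.0                          # ~14 g of ethanol per standard beer (assumed)
        body_water_grams = weight_kg * 1000 * body_water_ratio
        peak_bac = alcohol_grams / body_water_grams * 100     # BAC expressed as a percentage
        metabolized = 0.015 * hours                           # body clears roughly 0.015% BAC per hour (assumed)
        return max(peak_bac - metabolized, 0.0)

    # Same eight beers over four hours, two illustrative bodies:
    # a 73 kg woman (body-water ratio ~0.55) vs. an 89 kg man (~0.68).
    print(round(estimate_bac(8, 4, 73, 0.55), 3))   # ~0.219, deep in blackout territory
    print(round(estimate_bac(8, 4, 89, 0.68), 3))   # ~0.125, below the ~0.15 blackout threshold

Exact thresholds vary from person to person; the sketch only illustrates the mechanism behind the disparity described above.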

Other factors increase the likelihood of blackouts among women as well: compared with men, women’s bodies contain proportionally less water to dilute the alcohol they drink, and drinks like wine and cocktails raise blood-alcohol levels faster than beer does.

When a young woman is blacked out, she is particularly vulnerable. It is important to let young women know that it takes far less alcohol for them to reach blackout than it does for a man. And it’s equally important to empower women with the knowledge that drinking to the point where they can no longer make decisions for themselves drastically increases the chances of being taken advantage of. Of course, sexual assault is always the predator’s fault. Teaching women these principles is an effort to prevent more of them from becoming victims—never to blame the victim.

Men are vulnerable when blacked out, too. It is important to teach young men that when they reach the myopic state, terrible things can happen. We need to teach young men that binge-drinking is not a harmless social activity—it drastically increases the odds of committing a violent or sexual crime. Of course, alcohol does not excuse the predator. Teaching men these principles is an effort to prevent more of them from becoming perpetrators of sexual assault.

Brock Turner

On January 18, 2015, two graduate students witnessed a young man, Brock Turner, lying on top of a young woman (referred to here as Emily Doe) on the ground near a dumpster. The two men observed that Turner was thrusting on top and Emily was completely still underneath. Alarmed, they asked Turner what was going on. Turner turned and ran. The two men followed Turner and tackled him.

Emily Doe was lying near the dumpster, unconscious, with one breast revealed. Her skirt was around her waist and her underwear had been removed. She woke up in the hospital oblivious to what had happened. She was told that she had been the victim of sexual assault.

Most sexual assault cases rely on witnesses to help solidify the victim’s allegations. But in Brock Turner’s case, there were no available witnesses other than the two graduate students who tackled Turner. All of Emily’s friends at the party had gotten too drunk and had to leave. Turner’s friend was too drunk to even make it to the party. So Turner’s account was the only one heard in the case. By his account, Emily Doe had consented to their encounter by the dumpster.

But California law states that someone cannot be considered to have given consent if that person was unconscious or so intoxicated that they couldn’t adequately resist. This means that the person must’ve been so intoxicated that they couldn’t make a reasonable judgment or make sense of their surroundings.

So, really, the case against Brock Turner would be determined by the degree of Emily Doe’s intoxication. Was Doe a consensual partner in the sexual activity, or was she incapable of consent because of her level of drunkenness?

How Intoxicated Was Emily Doe?

When she arrived at the party that night, Emily Doe was already drunk. Just after midnight, she left a voicemail for her boyfriend in which she sounded incoherent—certainly not like someone capable of making reasonable judgments. By the time of the incident, Emily Doe had a blood-alcohol level of .249, roughly three times the legal limit. She has no memory of even meeting Brock Turner.

Brock Turner’s blood-alcohol level was twice the legal limit at the time of the incident—enough to be considered blackout. At trial, he claimed to remember everything that happened between him and Emily Doe that night. But on the night of his arrest, his memory wasn’t at all clear. When asked why he ran away from the scene, Turner said that he didn’t remember running or being tackled. He said, “I think I was kind of blacked out.”

So everything about the story that Turner told in his trial was made up—it was wishful thinking. It is impossible to know what actually happened that night because Brock Turner, Emily Doe, and almost everyone they knew at that party was in a state of blackout.

In the end, Brock Turner was convicted of three felonies associated with sexual assault and sentenced to six months in county jail. For the rest of his life, he will be a registered sex offender.

You might wonder how a seemingly harmless interaction on the dance floor ended in such a horrible crime. The answer is that alcohol and the assumption of transparency are a dangerous combination when interacting with a stranger. Remember, drunk people are in a state of myopia in which their behavior is directly influenced by their environment. That night, Emily Doe and Brock Turner’s environment was a fraternity party dedicated to binge-drinking and sexually charged dancing. And both of them were in a state of blackout, meaning that they were completely out of touch with the consequences of their actions.

What Is the Solution?

The night that he met Emily Doe, Brock Turner had to make sense of a stranger’s sexual desires. He assumed that her behavior was transparent, which is an incredibly flawed system to begin with. But when both parties are young, immature, and drunk, it is impossible. That’s why there is such a prevalent pattern of sexual assault on college campuses. But how do we stop it?

In a letter to Brock Turner, Emily Doe expressed that she wanted to see programs set up to raise awareness of sexual assault and teach men how to respect women to prevent sexual assault in the future.

In his own statement, Brock Turner said that he hoped to set up a program to speak out against the college culture of binge-drinking and sexual promiscuity to prevent sexual assault in the future.

But in reality, shouldn’t we hope for both? It is important to teach young people how to respect each other and how to drink less. The two problems are connected.

Part 4: How to Talk to Strangers

Ana Montes and Bernie Madoff got away with criminal deception for years. Amanda Knox was held in prison for a crime she didn’t commit. And Emily Doe woke up in a hospital completely unaware that she had been sexually assaulted. All of these are evidence of the first two mistakes people make when trying to make sense of strangers:

  1. We default to truth, which leaves us vulnerable to deception.
  2. We expect others to be transparent, even though transparency is a myth.

So once you accept those two major shortcomings, what are you supposed to do to change them?

A Lesson: Trying Harder Isn’t Always the Answer

The lesson you need to learn about interacting with strangers is that strangers become more elusive the harder you try to get to know them. The high-stakes interrogation described below is an extreme example of this.

In other words, it is important to remember that the “truth” you hope to find out about a stranger is fragile. The “truth” of a person cannot be dug out or searched for. You have to tread carefully with strangers, or the interaction will crumble under your feet.

You must accept that there are limits to what you can learn about a stranger. You will never know the total or absolute truth. Speak to strangers with caution and humility, or you risk never getting to know them at all.

High Stakes Interrogation

Perhaps the most extreme example of an interaction between two strangers, and of the danger of trying too hard to dig out the “whole truth,” is a terrorist who wants to keep his secrets being questioned by interrogators who will do anything to learn them.

Charles Morgan and the Impact of Stress on Memory

In the late 1990s, psychiatrist Charles Morgan was researching post-traumatic stress disorder—primarily why some veterans suffer from PTSD while others don’t. To do so, Morgan decided to study military personnel going through a SERE training program.

SERE programs are designed to teach Survival, Evasion, Resistance, and Escape techniques to key military personnel who might be captured by enemy forces. In Morgan’s study, trainees were seized by surprise and brought to a mock prisoner-of-war camp at a SERE facility, where they were subjected to common interrogation practices in a simulated traumatic environment.

To test his questions about PTSD, Morgan did three primary tests on the soldiers before and after the simulated interrogations:

Test #1: Morgan tested blood and saliva samples.

Test #2: Morgan conducted a figure-drawing test, in which the soldiers were shown a complex image made up of abstract lines and shapes and then asked to draw the image from memory.

Test #3: Morgan put together a facial recognition test, in which the soldiers were asked to look at a lineup of people and identify who had been in charge of their interrogation.

Translating the Results

Charles Morgan’s findings showed that SERE students reacted mentally, physically, and chemically to the simulated interrogations as though they had been through a real, traumatic interrogation. The interrogation practices used at SERE clearly impaired the subjects’ brain function and memory, especially the functions governed by the prefrontal cortex.

These results raised concerning questions for Morgan. If the point of these interrogation practices was to extract valuable information from a prisoner, then why would they want to put that prisoner through a process that would negatively affect his ability to remember valuable information?

After the terrorist attacks of September 11, 2001, Charles Morgan went to work for the CIA, where he did his best to share his findings with officials in the agency. He worried that too much credibility was being given to information obtained under duress, whether from prisoners being interrogated or from soldiers just back from combat.

Mitchell, Jessen, and Questionable Interrogation Tactics

In March 2003, James Mitchell and Bruce Jessen, two psychologists working for the CIA, were brought into a CIA black site to deal with a high-value, highly difficult prisoner: Khalid Sheikh Mohammed (referred to here as KSM). KSM was a senior official in the terrorist group Al Qaeda and was believed to have information about upcoming terrorist attacks on the United States. He had proven to be a very difficult prisoner—two CIA interrogators had already tried and failed to extract information from him. So Mitchell and Jessen were called in to question KSM because of their expertise in the “high stakes” interrogation practices known as “enhanced techniques.”

The CIA asked Mitchell and Jessen, both trained in the SERE program, to define which methods of enhanced interrogation were the most reliably effective; waterboarding was among the techniques on their list.

KSM

From their first meeting with KSM, Mitchell and Jessen knew that it would take every enhanced technique in their arsenal to get him to talk. They could tell he was hard-core because he wasn’t even affected by waterboarding—he was able to resist the one technique that was almost 100% effective on other prisoners. Besides that, KSM knew he would be imprisoned for the rest of his life, so he didn’t have much to gain by complying.

Mitchell and Jessen interrogated KSM for three full weeks, using every technique they had. Then one day, KSM suddenly stopped resisting. On March 10, 2007, after four years in captivity, KSM issued a public confession.

Speaking through a “personal representative,” KSM admitted to single-handedly planning, organizing, and executing the 9/11 attacks on America. He also took (at least partial) responsibility for 31 other Al Qaeda operations and attempted operations. The disturbingly detailed confession was considered a big victory for Mitchell, Jessen, and the CIA.

A Question

KSM’s sudden and complete cooperation raises a question: Was KSM telling the truth?

Think of it the way Charles Morgan would: KSM had been subjected to exactly the kind of extreme, prolonged duress that Morgan’s research showed degrades brain function and memory. Could a prisoner who had been waterboarded and held in captivity for years even accurately recall the things he claimed to remember? And a prisoner under that much pressure has a powerful incentive to say whatever will make the treatment stop, or even to inflate his own importance.

Despite all of these reasons for doubt, no one questioned the interrogation of KSM or seriously challenged his confession. Why? Because in the fearful climate after 9/11, with the threat of further attacks looming, getting something, anything, out of KSM seemed more important than asking whether what he said was reliable.

Mitchell and Jessen were in a difficult position. They needed KSM to give them any information he had about possible upcoming terrorist activity. But the more they interrogated him, the less trustworthy his information became. Every attempt to force him to talk diminished the quality of the interaction.

Part 5-1: Coupled Behaviors

The third mistake that people often make when dealing with strangers: We fail to recognize coupled behaviors, behaviors that are specifically linked to a particular context. For example, we fail to see how a person’s personal history might affect his behavior in a particular environment. Instead, people tend to assume displacement: the belief that a behavior will simply carry over, unchanged, from one context to the next.

Whereas default to truth and assumption of transparency both affect your understanding of a stranger as an individual, failure to recognize coupled behaviors affects your understanding of the context in which a stranger operates.

Once you understand that some behaviors are coupled to very specific contexts, you’ll learn to see that a stranger’s behavior is powerfully influenced by where and when your encounter takes place. Then, you’ll be able to recognize the full complexity and ambiguity of the people you come across.

Coupled Behavior: Suicide

Many people jump to conclusions when they hear of a stranger who committed suicide. But suicide is a coupled behavior—it tells us more about the world in which the stranger lived than about the character of the stranger himself.

Suicide Coupled With Method

In 1963, Sylvia Plath committed suicide. She had suffered from depression for most of her life and was often haunted by thoughts of death. She wrote about suicide in her poems and talked about it excitedly with her friends and peers. After a particularly cold and depressing winter, Plath put her children to bed, went into her kitchen and sealed the door, laid her head inside the oven, and breathed in carbon monoxide until she died.

At the time of Plath’s death, British ovens ran on something called “town gas,” a coal-derived mixture with relatively high levels of deadly carbon monoxide. In other words, almost everyone had an easy means of killing themselves right inside their kitchen. Not surprisingly, carbon-monoxide poisoning was responsible for almost half of all suicides in the United Kingdom in 1962, and the suicide rate for women in England was the highest it had ever been.

But by 1977, every gas appliance in Britain had been converted to natural gas, which contains no carbon monoxide at all. As the amount of town gas used in the United Kingdom went down (as more and more homes were converted), the number of carbon-monoxide suicides fell at nearly the same rate. But did the total number of suicides change during that time? How did people commit suicide once the most common method became less and less available?

The displacement theory would assume that people who wanted to kill themselves would simply find another way. If suicide could be displaced, it would mean that a suicidal person would be just as likely to commit suicide no matter what methods were readily available to them. The rate of suicides would be relatively steady over time.

The alternative possibility, the coupling theory, would assume that suicide is coupled to a particular context, such as the availability of carbon monoxide. If suicide is a coupled behavior, it would mean that to commit suicide does not only require a depressed person—it requires a depressed person, in a particular mindset, with a particular means of killing themselves readily accessible. The rate of suicides would vary as contexts changed over time.

So which one proved true? The coupling theory. In the 14 years between Plath’s suicide and the eradication of town gas, suicide rates steadily dropped overall. Thousands of deaths were prevented by switching to natural gas. Women were only half as likely to commit suicide as they had been a decade before. These numbers suggest that suicide is truly a coupled behavior—it is linked to a particular context, such as the availability of carbon monoxide.

Understanding how the context of method affects the behavior of suicide can help you understand Sylvia Plath’s complex character. She wasn’t just a doomed genius destined to kill herself. She was a woman who had the tragic misfortune of living in a time and place where an easy means of suicide sat in her kitchen.

Suicide Coupled With Location

In 1937, the Golden Gate Bridge opened in San Francisco. Since then, it has been the site of 1,500 suicides—more than any other place in the world in that period of time. Yet it took more than eighty years for the bridge’s governing authority to agree to add a suicide barrier. Why? Because the authority couldn’t grasp the idea that a behavior like suicide could be coupled to a place. The public denied the idea, too: in a national survey, 75% of Americans predicted that preventing suicide on the bridge would not prevent suicide overall, and citizens wrote letters to the authority insisting that anyone determined to die would simply find another way.

Psychologist Richard Seiden, however, did not ignore the coupling theory as it applied to the Golden Gate Bridge. He followed up on the 515 people who, for one reason or another, had been stopped from jumping off the bridge. He found that only 25 of those 515 people went on to kill themselves another way. This shows, quite clearly, that suicide is a behavior that can be coupled with a particular place. It also shows that understanding a stranger’s context can help you understand his behavior.

Part 5-2: Crime Is a Coupled Behavior

A person’s likelihood of committing a crime is coupled with their context—like their personal history, their geographic location, or their access to guns. The Kansas City Police Department ran an experiment that demonstrates this theory.

The Kansas City Experiment

In the early 1990s, the Kansas City Police Department decided to study how to deploy extra police officers in an effort to reduce crime in the city. They hired criminologist Lawrence Sherman and gave him free rein to make changes in the department.

Sherman was sure that the high number of guns in Kansas City was a direct cause of the city’s high level of violence and crime. So he decided to focus his experiment specifically on guns in the 144th patrol district of Kansas City, one of the most dangerous areas in the city. Sherman’s experiment was a relatively simple one that made use of a loophole in the American legal system.

The loophole: The U.S. Constitution requires police officers to have reasonable suspicion in order to stop and search a citizen, which is a relatively difficult standard to meet when the citizen is at home or walking down the street. When the citizen is driving a car, however, the bar is much lower. Police can stop a driver for hundreds of legal reasons, such as running a red light or driving with a broken brake light. And once they have stopped a driver, the law gives them broad latitude to search the car if anything about the stop strikes them as suspicious.

Sherman took advantage of this loophole by deploying four officers in two cars to patrol District 144 at night. He told the four officers to watch for suspicious-looking drivers and pull them over for any reason they could justify by law. He told them to search as many of those cars as they legally could and to confiscate as many guns as possible. The officers were effectively searching for needles in a haystack: the ultimate goal of each stop was to find a gun or drugs.

Sherman was careful to warn police leaders about the dangers of aggressive preventative patrol. He knew that overly suspicious (and therefore aggressive) police officers could create distrust between the police and the public. That’s why the four officers in Sherman’s experiment went through specialized training and only worked in District 144 at night—Sherman wanted to make sure they knew how to make the right kinds of traffic stops, in the right locations, at the right times, the kinds of stops that lead to productive searches.

The Results

The four officers deployed by Sherman worked from 7 p.m. to 1 a.m. every night for 200 consecutive nights in District 144. During that time, they issued 1,090 traffic citations, stopped 948 vehicles, arrested 616 people, and seized 29 guns, averaging some kind of police intervention every 40 minutes. And the result? Gun crime was cut in half in District 144 over those 200 days.

Sherman was able to prevent violent crime with only four officers, simply by keeping those officers intensely busy searching for guns. The experiment succeeded because it made crime-fighting more focused: it targeted the specific context (guns, in a high-crime district, at night) to which gun crime was coupled.

The Reaction

Sherman’s success in District 144 inspired law enforcement officials. Police departments across the country asked Sherman how they could achieve the same results, and Sherman shared the technique with them.

In the years following the Kansas City gun experiment, police departments across the nation began to follow Sherman’s model, dramatically ramping up the number of traffic stops they made.

Police officers in America today stop around 20 million drivers per year (approximately 55,000 stops per day), all in an effort to replicate Sherman’s success in Kansas City. However, one of the most crucial elements of Sherman’s technique has been lost—the focus on coupled behaviors. For example, when North Carolina Highway Patrol doubled their stops made per year, they did not focus on suspicious-looking drivers likely to commit crime in a particular area at a particular time—they just stopped as many drivers as they could.

Police Training Post-Kansas City

After Sherman’s success in Kansas City, police training began to change. The new brand of police officer is supposed to be highly suspicious—always on the lookout for any small behavior that could indicate criminal intent. In other words, police officers are being trained to not default to truth.

Police officers are taught that savvy criminals are careful not to commit any obvious infractions, so officers learn to use creative tactics to catch criminals in the act. For example, they are trained to use minor traffic violations as an excuse to stop a suspicious-looking driver and then scan for small clues: an air freshener that might be masking the smell of drugs, fast-food wrappers that suggest the driver practically lives in the car, or a driver who stutters or avoids eye contact when answering questions.

It’s important to note that most drivers who have air fresheners or who stutter when talking aren’t criminals. But post-Kansas City, police were told that in order to be successful, they had to operate as though every civilian they came across was a suspect. They had to break their default to truth.

Police post-Kansas City are also trained to strongly assume transparency—to operate under the assumption that guilty-seeming behavior is an accurate indication of guilt. For example, the Reid Technique, which is used in police training in two-thirds of all American states, instructs police officers to use eye contact to gauge a person’s innocence. If the person breaks eye contact, the Reid Technique says, that person is probably guilty.

The law enforcement community at large missed the real lesson of Sherman’s success—that preventative patrol only works if it is focused on behaviors or places that are specifically coupled with crime. That is an understandable human error on the part of law enforcement officials; for some reason, the idea of coupled behaviors is difficult for most people to grasp. But when you combine that blind spot with a broken default to truth and an absolute assumption of transparency (as post-Kansas City police are trained to have), you get tragic interactions—like the one between Sandra Bland and Brian Encinia.

Part 5-3: Sandra Bland and Brian Encinia

Sandra Bland’s arrest and subsequent suicide in jail is a tragic example of what can happen when two strangers use flawed strategies to try and understand each other. After Bland’s suicide, Brian Encinia was fired on the grounds that he did not conduct himself with courtesy and patience, as required by the Texas State Trooper Manual. But the case is about much more than that.

Everything that happened between Brian Encinia and Sandra Bland happened because Encinia behaved exactly as he was trained to—by police officials and society as a whole.

Brian Encinia’s Mistakes

Broken Default to Truth

At 4:27 p.m. on July 10, 2015, Brian Encinia noticed Sandra Bland roll through a stop sign on campus. He was unable to legally pull her over at that moment because the campus was outside his jurisdiction, so he drove up behind her to get a better look—to see whether she showed any of the supposed signs of criminal intent, like air fresheners or fast-food wrappers. When Sandra Bland saw the police car coming up behind her, she moved over to let it pass without signaling. She had unknowingly given Encinia a lawful excuse to pull her over: failure to signal a lane change.

When Encinia approached Bland’s vehicle, he saw a couple of small things that triggered his doubts: an out-of-state license plate and fast-food wrappers on the floor of the car.

Brian Encinia was taught, through his post-Kansas City police training, not to default to truth. He was taught to treat everyone like a suspect. And (because the cost of not defaulting to truth is mistrustful social interactions) Encinia was immediately scared of Sandra Bland, whose license plate and food wrappers had already triggered his doubts.

Assumption of Total Transparency

When Brian Encinia approached Sandra Bland’s car on the passenger side, he immediately noticed that she was upset. Remember, Encinia was trained to assume transparency in people. So when Sandra Bland acted irritated and defensive, Encinia automatically began to assume the worst. In his testimony, Encinia said that he “immediately” knew that there was something “wrong” about Bland, based on her demeanor. He felt afraid that she was “aggressive,” and even suspected she might have a gun.

Sandra Bland was mismatched—she was an innocent driver who behaved in ways that Brian Encinia believed to indicate criminal intention. But he never stopped to consider that she was innocent, because his assumption of transparency was ingrained through his police training.

Neglect of Coupled Behaviors

In a single day on the job, Encinia often pulled over as many as 15 drivers in Prairie View, Texas for small infractions, like failure to signal a lane change. In fact, he had stopped three other drivers in the 30 minutes before he pulled over Sandra Bland. His post-Kansas City police training taught him that more stops would mean less crime. But remember, Sherman’s experiment showed that the needle-in-a-haystack approach only works when it is focused on particular contexts, like high-crime areas or gun ownership.

The area of Prairie View where Brian Encinia pulled Sandra Bland over was not a high-crime environment. Although Encinia said in his testimony that he had come across drugs and weapons in that area, his record shows that to be untrue. Lawrence Sherman himself was horrified that an officer would pull over a driver in that location, in the middle of the day, when the likelihood of crime was so low.

Likely, Encinia never stopped to think that the context of location and time could be coupled with the likelihood of crime, and he hadn’t been taught to think this way.

Sandra Bland’s Context

There were several things about Sandra Bland’s context that shaped her behavior in the interaction with Brian Encinia: she had just moved from the Chicago area to Texas to start a new job at Prairie View A&M University, she had struggled with depression and had previously attempted suicide, and she had already accumulated thousands of dollars in fines from earlier traffic stops.

Without getting to know Sandra Bland, Brian Encinia had no way to know how much of an emotional crisis Bland was going through during their interaction and her following days in jail. And because he had the false confidence of someone who has been trained to assume transparency, Encinia assumed that her upset behavior was evidence of sinister intentions.

Society’s Collective Mistakes

The death of Sandra Bland was not just a case of a bad police officer, or even a defective police department—it was a collective failure of our society. Everyone that Brian Encinia ever came across, in the police force or otherwise, operated with the same flawed strategies for making sense of strangers. Too few people have ever challenged those strategies or tried to replace them.

In our modern, seemingly borderless world, we have no choice but to interact with strangers. Yet we, as a society, are incompetent at making sense of the strangers we come across. So what should we do about it?

If our society is to avoid tragic interactions like that of Brian Encinia and Sandra Bland, we must learn to:

  1. Accept that defaulting to truth is necessary for a functioning society, even though it sometimes leaves us vulnerable to deception.
  2. Stop expecting strangers to be transparent, and accept the limits of what we can know about them.
  3. Pay attention to the context (the time, the place, and the personal history) in which a stranger’s behavior occurs.

Most importantly, we must learn not to blame the stranger when an encounter goes awry, but to look into how our own instincts might have played a part, as well.

Exercise: Identify Common Strategies for Interpreting Strangers

The most common strategies people use when interacting with strangers are default to truth, assumption of transparency, and neglect of coupled behaviors.