Thursday, January 18, 2018

The Stay Puft Marshmallow Man Paradox: Ironic Processing

Learning By Doing

Let's play a simple game. There's only one rule: Don't think about white bears. I will give you a minute.

Okay, so how did you do? Did you think about white bears? If you didn't think about white bears, what did you think about instead? What was your strategy? Maybe you should teach your strategy to Dr. Ray Stantz from the movie Ghostbusters (1984).

"I couldn't help it. It just popped in there." --Dr. Ray Stantz

Why, oh why, did Dr. Raymond Stantz conjure up the Stay Puft Marshmallow Man? All he had to do was clear his mind! WHY!? It's easy. He fell victim to a rather pernicious feature of the human mind. Sometimes, when you actively try to suppress thinking about something, your mind goes ahead and thinks about it. If you have ever had the experience of not being able to stop laughing in church, then you've experienced ironic processing: the cognitive phenomenon of your mind betraying you and doing exactly the opposite of what you tell it.

The game company Hasbro cleverly figured out a way to monetize ironic processing. They designed a board game aptly called Taboo. If you haven't had the frustrating experience of playing this game, the rules are as follows. You are given a word, and your goal is to get your partner to say that word. But here's the catch. You aren't allowed to use certain words as clues. For example, suppose I want you to say the word "Sweet." I am not allowed to use the words: Sugary, Tea, Nice, Sour, Sixteen. How evil is that? I'm terrible at this game because as soon as I read the list of verboten words, I immediately want to say them. Why? Ironic processing.

"Isn't it ironic...dontcha think?" --Alanis Morissette 

So what is going on? Why doesn't your brain do what it's told? According to one theory, the mind draws upon two separate processes to direct our behavior [1]. The first is an action-oriented process. It has a goal, and it motivates us to take steps toward that goal. Let's call this the "Operate" process. The second process needs to evaluate whether the goal has been achieved. Let's call this the "Test" process. The Operate and Test processes work in tandem to achieve a goal.

The problem arises when the Test process is checking Operate's progress before the Operate process has completely finished. In other words, Test is the annoying kid in the back seat asking, "Are we there yet?" Thus, if you are actively trying to suppress a thought, and the Test process kicks in to evaluate, then it ends up causing a violation of the thought suppression. By testing if you aren't thinking about white bears, you are now in violation of the rule. The Test process puts the "irony" in ironic processing.

The S.T.E.M. Connection

What is the connection to education? Put yourself in the shoes of a student who has test anxiety. It might be tempting to advise that student not to think about his or her anxiety. You could tell the student to avoid negative thoughts about failure or the implications of failing. I'm sure you can see the problem with that advice. It would be analogous to telling the student not to think about "white bears." The problem is, if students are thinking about failing, then they clearly aren't thinking about the material on the test. As we have seen in previous posts, working memory and attention are severely limited resources. If they are focused on the wrong information, then there will be fewer resources available to do well on the test.

What advice should we give instead? We might take a cue from high-pressure sports, where athletes face negative thoughts (e.g., "Don't screw up. Don't screw up"). The advice for them is to focus on something (e.g., a word or concept) that is related to the task at hand [2]. In other words, if the student is worried about failing, then give that student something to think about instead.

It's not easy to do, obviously. But knowing how ironic processing works might help students understand how their mind betrays them. More importantly, knowing about the Operate and Test processes might also help students formulate their own strategies for handling situations when processing turns ironic. Once they have those strategies in hand, perhaps they can teach them to Dr. Stantz so he doesn't "accidentally" destroy downtown New York.

Share and Enjoy! 

Dr. Bob

Going Beyond the Information Given

[1] Wegner, D. M. (1994). Ironic processes of mental control. Psychological Review, 101(1), 34.

[2] Dugdale, J. R., & Eklund, R. C. (2002). Do not pay any attention to the umpires: Thought suppression and task-relevant focusing strategies. Journal of Sport and Exercise Psychology, 24(3), 306-319.

Thursday, December 21, 2017

Joined At the Hip: The Conjunction Fallacy

Learning By Doing

Consider the following scenario: 

Linda is 31 years old, single, outspoken and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.
  1. Linda is active in the feminist movement.
  2. Linda is a bank teller.
  3. Linda is a bank teller and is active in the feminist movement.
Your task is to order the three descriptions from most likely to least likely. In other words, is Linda more likely to be a feminist than a bank teller (or vice versa)? Once you have rank-ordered the descriptions, feel free to read on.

Scientists on a Plane

After attending a conference, I was on a flight home when a woman with a poster tube sat down next to me (I had the middle seat...lucky me). Curious, I asked if she was a scientist or an architect. Her reply took me by surprise. "I'm a scientist. Don't I look like one?" I feebly tried to explain that my guess wasn't based on looks. Instead, I was trying to use information about the base rates of scientists in the US. I assumed there are more scientists than architects [1].

I think her question highlights a very common way of thinking. It is extremely easy to ignore the frequency of a particular class or the probability of the occurrence of a specific event. In other words, it is easy to forget about base rates (e.g., the number of female social scientists employed in the US) when there is a concrete example immediately in front of you.
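A quick back-of-the-envelope sketch shows how a base-rate guess works, using the occupation counts reported in footnote [1] of this post:

```python
# Base-rate guess using the counts from footnote [1] of this post.
architects = 246_000   # architects (non-naval)
scientists = 261_000   # physical scientists (all other)

p_scientist = scientists / (architects + scientists)
print(round(p_scientist, 3))  # 0.515 -- "scientist" is the (slightly) better bet
```

With no other information, the base rates alone make "scientist" the marginally more probable guess.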

Let's return to the question that opened this post. I haven't taken the time to investigate how many feminists there are in the United States, nor have I tried to mine the US Census data to figure out the number of employed bank tellers. But I know for certain that there are fewer feminist bank tellers than there are bank tellers or feminists. How do I know that? Because it is logically impossible to have more feminist bank tellers than either subgroup. The following Venn diagram proves this logical necessity (see Fig. 1).

Figure 1. A Venn diagram depicting feminist bank tellers.
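The same logical necessity can be demonstrated with Python sets. The names and group memberships below are made up purely for illustration; only the set logic matters.

```python
# Made-up sample population; only the set logic matters.
feminists = {"Linda", "Ana", "Bea", "Cara"}
bank_tellers = {"Linda", "Dev", "Eli"}

feminist_bank_tellers = feminists & bank_tellers  # set intersection
print(feminist_bank_tellers)  # {'Linda'}

# The intersection can never outnumber either parent set:
assert len(feminist_bank_tellers) <= len(feminists)
assert len(feminist_bank_tellers) <= len(bank_tellers)
```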

"We are connected/Siamese twins/At the wrist" --Smashing Pumpkins, Geek U.S.A.

Forgetting that overlapping populations are less frequent than their parent populations goes by the term Conjunction Fallacy. This mental slip was first introduced by Amos Tversky and Daniel Kahneman in the early 80s. They noticed that they themselves would initially make this error; however, after they had sufficient time to think about it, they realized their mistake. This dynamic duo decided that their own logical fallacy was probably symptomatic of the larger population (I mean...they were some pretty smart dudes. If they admit to falling into the conjunction fallacy, then none of us are safe). When they tested their hypothesis in a wider group, they found strong evidence that others also make the same mistake [2].

The S.T.E.M. Connection

The educational implications of the conjunction fallacy probably belong in the category of "critical thinking." We need to teach our students not to fall into the trap of thinking that a subpopulation can be more frequent than the pool from which it is drawn. For example, the description of Linda fits closely with the stereotype that we might have about feminists. But it turns out that we should ignore that impression when it conflicts with the parameters of a larger population.

If you are teaching a class on probability, the "Linda Problem" is a good way to introduce the topic of conjunctive probabilities. After the students complete the task, you can then move into a lesson explaining the mathematics behind joint probabilities. In other words, the lesson can focus on the importance of the word "and." As a heuristic, you can explain that each time you add the word and, the probabilities are multiplied together; and when decimals are multiplied together, the overall probability decreases. As a wrap-up to the lesson, you can circle back to the Linda Problem and have a discussion about how the mathematics applies to this scenario. Once we say Linda is a bank teller and a feminist, the proportion of people who fit that description drops precipitously.
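The "and multiplies" heuristic can be sketched numerically. The probabilities below are invented for illustration, and the multiplication step assumes the two traits are independent:

```python
# Invented probabilities for illustration only.
p_bank_teller = 0.02   # assumed P(Linda is a bank teller)
p_feminist = 0.30      # assumed P(Linda is a feminist)

# Assuming independence, "and" multiplies the probabilities:
p_both = p_bank_teller * p_feminist
print(round(p_both, 3))  # 0.006 -- smaller than either probability alone

# In general (independent or not), a conjunction can never exceed
# either of its conjuncts:
assert p_both <= min(p_bank_teller, p_feminist)
```

Even if the traits are correlated, the final assertion still holds: the joint event is always at most as probable as its least probable part.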

Estimating the likelihood of an event or the frequency of a class (or subclass) is extremely important in assessing risk. It is used in financial estimation, medical reasoning, and in psychological experiments when defining a target population. Falling prey to the conjunction fallacy is easy, but if we remember to slow down our thinking, we might be able to detect (and escape!) our error.

Share and Enjoy!

Dr. Bob

Going Beyond the Information Given

[1] To verify my assumption, I tried to look through the US Census occupation data. It was more difficult than I initially thought because the Bureau of Labor Statistics lumps together architects and engineers. However, if we restrict our analyses strictly to architects (non-naval) and physical scientists (all other), then my assumption was correct (246,000 architects and 261,000 physical scientists). 

[2] Tversky, A., & Kahneman, D. (1983). Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment. Psychological Review, 90(4), 293.

Thursday, November 16, 2017

Ab Fab: Confabulation

Learning By Doing

To get this post started, let's memorize a bunch of useless stuff. Do your best to memorize the following list:


What did you notice about the list? Did you reorganize the list to make the items easier to remember? Did you use any other memorization strategy?

"I have memories, but I can't tell if they're real." -K, Blade Runner 2049

In previous posts, we addressed the misconception that "memory is like a video recording." Instead, memories are reconstructed at the moment of recollection, and they are influenced by the way in which the memory is probed (e.g., "About how fast were the cars going when they hit [vs. smashed into] each other?"). Memories are also colored by one's emotional experience [1]. In recalling past events, the mind may put a positive spin on something that was horrific at the time the memory was created. A group of veterans, playing cards at the local VFW, fondly reminiscing of their time at war is a vivid example.

Pop Quiz! Without looking at the top of this post, rate how confident you are that the following words appeared in the original list [2].

Item        High   Med. High   Medium   Med. Low   Low
Kiwi          5        4          3         2       1
Strawberry    5        4          3         2       1
Apple         5        4          3         2       1
Banana        5        4          3         2       1

Were you right? Some people mentally insert the word apple into the list because it is a highly iconic member of the category fruit. Activation of the category spreads to tightly knit members; thus, we infer that apple was on the list. This may be an elementary example, but it is symptomatic of a much larger (and more interesting) issue. Memories are not indelibly stamped onto our neurons. Instead, we are prone to inserting new details at the time of retrieval.

Memory Insertion & False Memories

A much more serious example can be found in a line of research where scientists actually implanted false memories in children [3]. Scientists were able to implant false memories in about a quarter of their volunteers, and the false memories ranged from being lost in the mall to being attacked by a dog. The most outlandish implanted memory was convincing the participant that he or she had witnessed a demonic possession [4]!

Inferring an item on a list that wasn't originally there, or a false memory from childhood, are examples of confabulation, which is defined as a memory disturbance that is neither intentional nor created to deceive other people. The more commonplace version is usually just a harmless insertion of a memory of an event that never actually happened to that person. For example, on a work trip, I was completely convinced that it was my first overnight trip to West Virginia. I firmly believed that until one of my friends helpfully pointed out that I had spent my 30th birthday at a resort in WV. He knew because he had been there with me! In my defense, the trip was a surprise by my wife, so I didn't know we were going to WV until we got there, and then I was shocked to see that many of my good friends were there, too.

The S.T.E.M. Connection

The relevance of confabulation might not present a huge problem in middle or high school, but it may become an issue later in life. For example, the authorship of a scientific paper or assigning credit for an invention can become contentious when the parties involved selectively forget or insert false memories [5]. Consider George H. Daniels, who wrote a book entitled Science in American Society: A Social History (1971). A reviewer of his work pointed out that he had plagiarized entire paragraphs and other large sections without giving proper credit to the original sources. Daniels was mortified, and he wrote an apology to the scientific community [6]. It is probably the case that Daniels did not intend to deceive his readers; instead, he falsely accepted these ideas as his own.

Aside from this specific (and potentially embarrassing) example, confabulation is important when thinking critically about someone's recollection of events. This is, of course, extremely important in eyewitness testimony (as we saw earlier). But it's also important to social scientists who rely on their participants' retrospective accounts of their behavior. Participants might think about what they logically should have done, instead of what they actually did. This inference can color the data that is ultimately collected.

In the end, we can all sympathize with K, the main character from the movie Blade Runner 2049 (2017), because our memories are subject to intrusions. He is rightfully skeptical of his memories because they can be difficult to verify if they are real (or not)! 

Share and Enjoy!

Dr. Bob

Going Beyond the Information Given

[1] A great example of this is in the Pixar movie Inside Out (2015).

[2] The recognition test with confidence ratings was adapted from: Brewer, W. F., & Treyens, J. C. (1981). Role of schemata in memory for places. Cognitive Psychology, 13(2), 207-230.

[3] Elizabeth Loftus is probably the most widely recognized name in this area. A summary of her research can be found in her TED Talk.

[4] Mazzoni, G. A., Loftus, E. F., & Kirsch, I. (2001). Changing beliefs about implausible autobiographical events: a little plausibility goes a long way. Journal of Experimental Psychology: Applied, 7(1), 51-59.

[5] Goldberg, C. (2006) Have you ever plagiarized? If so, you're in good company. Retrieved from

[6] Daniels, G. H. (14 Jan 1972). Acknowledgement. Science, 175(4018), 124-125.

Wednesday, November 1, 2017

Reading Room Material: Stranger Things & The Frontal Lobe

If you're like me, then you are probably working your way through the second season of Stranger Things. Imagine my delight as this particular episode (s2e3) touched on a familiar topic.

Stranger Things: Season 2, Episode 3 "The Pollywog"

The main characters are listening to a lecture by their favorite teacher (complete with overhead transparencies!). He describes one of the most famous people in the history of neuroscience [1]:

Scott Clarke: The case of Phineas Gage is one of the great medical curiosities of all time. Phineas was a railroad worker in 1848 who had a nightmarish accident. A large iron rod was driven completely through his head. Phineas miraculously survived. He seemed fine. And physically, yes, he was. But his injury resulted in a complete change to his personality.

The story of Phineas Gage is a well-worn tale, and it is told in nearly every undergraduate neuroscience course. Thus, I found it extremely curious that Mr. Clarke was telling this story to his 5th grade science class. I also found it curious that Mr. Clarke ends the story with "a complete change to his personality." He didn't explain in what way Phineas changed.

According to The American Phrenological Journal and Repository of Science (1851), Gage's physician reported that he had become "gross, profane, coarse, and vulgar to such a degree that his society was intolerable to decent people" [2]. In other words, Gage became a jerk. Given the change in his personality, it was assumed that the function of the frontal lobe was to inhibit behaviors and thoughts. No frontal lobe? No inhibition.

That doesn't sound like a very fulfilling life. However, if you continue to dig into this fascinating story, there is a small ray of hope (unfortunately, that ray doesn't always make it into the textbooks). A few years after he recovered from his injuries (including a fungal infection!), Phineas's personality renormalized. He wasn't such a jerk, and he even held down a job driving a stagecoach [3]. 

The story of Phineas Gage is hopeful because it demonstrates the brain's amazing ability to overcome severe trauma. He didn't live a very long life, but Gage remains immortalized in the annals of neuroscience (as well as the greatest TV series of all time). 

Share and Enjoy!

Dr. Bob

More Material

[1] Read the transcript or watch the full episode.

[2] Fowler, O. S., & Fowler, L.N. (Eds.). (1851). The American Phrenological Journal and Repository of Science, Literature and General Intelligence, Volumes 13-14, New York, NY: Fowlers & Wells, p. 89.

[3] Hamilton, J. (May 21, 2017). Why Brain Scientists Are Still Obsessed With The Curious Case Of Phineas Gage. Retrieved from

Thursday, October 5, 2017

How to Build an Atom: Analogical Reasoning

Learning By Doing

You are leading a siege on the most fortified castle in the land. Your army is ready to attack, but at the last minute you notice that sending all of your soldiers across the wooden bridge will collapse it. How will you attack the castle without your army being eaten by the moat-dwelling alligators?

Fast forward a few hundred years. You are now a world-class oncologist, and you are working with a new technology to treat cancer. It's called a "gamma knife" because it uses gamma rays to kill cancerous cells. At high energy levels, a gamma ray will destroy healthy tissue. At low energy levels, it can't knock out the cancer. How can you use the gamma knife to destroy the cancerous cells, without harming the surrounding tissue? 

Did you solve each of the problems? If so, how did you solve them? (Note: the image for this blog was meant to serve as a hint.) Did you notice a similarity between the two scenarios? Did the second scenario help with the first (or vice versa)? This famous analogical problem was originally stated by Mary Gick and Keith Holyoak in 1980 [1].

Nucleus : Sun :: Electrons : Planets

Much of our problem solving is done analogically. We see a problem, and when we're lucky, it might remind us of a similar problem we've solved in the past. If a true relationship exists, then we can extrapolate from the past to the current problem. The history of science contains several illuminating examples of this process.

Take, for instance, Ernest Rutherford's model of the atom, which he proposed in 1911 [2]. Knowing that the atom's mass and positive charge were concentrated in a tiny central nucleus, he took what he knew about the solar system (i.e., the base) and applied the same logic to the structure of the atom (i.e., the target). The nucleus sits at the center of the atom, much like the sun sits at the center of the solar system. The electrons revolve around the nucleus in a manner similar to the planets revolving around the sun. In other words, Rutherford saw a mapping between the atomic nucleus and the sun, and between the electrons and the planets (see Figure 1).

Figure 1: The analogical mapping between the solar system and the atom

Notice, however, that there are some properties of the solar system that he did not map onto the atomic structure. For instance, the sun gives off an intense amount of heat and might be considered "yellow." Nowhere in this theorizing did Rutherford claim that the nucleus gave off heat or is "yellow." That means Rutherford was sensitive to the properties and relationships between the two systems. He knew that some of the properties of the base domain (i.e., the solar system) should not map onto the target domain (i.e., the atom).
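Rutherford's selective mapping can be written down as a simple lookup table. This is just a sketch of the idea, not anything Rutherford himself wrote:

```python
# The solar-system-to-atom mapping, as a plain dictionary (illustrative only).
solar_to_atom = {
    "sun": "nucleus",
    "planets": "electrons",
    "orbits around the sun": "orbits around the nucleus",
}

print(solar_to_atom["sun"])  # nucleus

# Surface properties of the base with no structural role stay unmapped:
assert "yellow" not in solar_to_atom
assert "gives off heat" not in solar_to_atom
```

The point of the two assertions is that a good analogy is deliberately incomplete: only the structurally relevant pieces of the base domain carry over.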

"Hey! That thing gotta hemi?"

To better understand the psychological processes used during analogical reasoning, Dedre Gentner and her colleagues built a computational model called the Structure Mapping Engine (SME) [3]. One of the key features of the SME is the emphasis that it places on relations instead of features.

Let's take electricity for example. In the early days, when scientists were trying to make sense of the concept of electricity, they likened it to something they understood quite well: the flow of water. The analogy is that electrons are like water and they move from one location to another. A battery is like a reservoir, and gravity is like the difference in electrical potential. The SME looks for alignments between the relations in the base and target domains. For example, it sees a commonality between two different types of FORCES (i.e., gravity vs. electrical potential) and two different types of ENTITIES (i.e., water vs. electrons).

It necessarily throws out the surface-level features that are irrelevant to understanding how electricity works. For example, one feature of water is that it is blue. Since this is a feature and not a relation, the SME does not transfer the features water is blue or water is wet onto electrons.
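A toy sketch of this relations-over-features idea might look like the following. This is my own drastic simplification for illustration; the real SME is far more sophisticated, and the relation names below are invented.

```python
# Toy relation matcher -- a drastic simplification of the SME, for illustration.
# Each domain is a set of tuples; the first element names a relation or feature.
water_domain = {
    ("CAUSES", "pressure difference", "flow of water"),   # relation
    ("FLOWS", "water", "reservoir", "outlet"),            # relation
    ("COLOR", "water", "blue"),                           # surface feature
}
electricity_domain = {
    ("CAUSES", "potential difference", "flow of electrons"),
    ("FLOWS", "electrons", "battery", "circuit"),
}

def aligned(base, target):
    """Keep base tuples whose relation name and arity also occur in the target."""
    target_shapes = {(t[0], len(t)) for t in target}
    return sorted(b for b in base if (b[0], len(b)) in target_shapes)

print(aligned(water_domain, electricity_domain))
# CAUSES and FLOWS align across the domains; the COLOR feature is dropped,
# so "water is blue" never gets transferred onto electrons.
```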

The S.T.E.M. Connection

There are several learning studies that explicitly instruct students to do their own analogical comparisons between two sources of information. For example, my friend and collaborator Tim Nokes-Malach, together with Dan Belenky, explicitly trained students in a physics class to compare worked-out examples of rotational kinematics problems. The students had to answer questions such as:

  • What is similar and what is different across the two problems?
  • Are there differences in what the two problems ask for in terms of acceleration? If so, what are they?
The goal was to motivate the students to compare and contrast the two examples, with the hope that the students could then see the mappings between the relations of the two examples. In their study, the authors demonstrated that doing this analogical comparison led to better performance on far-transfer problems [4].

This kind of intervention could be done for many topics. The goal, of course, is to show how relations in the base domain map onto the target domain. It's also relevant to talk about how the features of the base and target domains don't necessarily have to align. 

Analogical reasoning is extremely powerful because it can extend the knowledge that we have into the unknown. It can help us draw upon the knowledge we have from previous problems we've solved and apply that knowledge to problems we've never seen before. That's pretty cool (analogically speaking, of course). 

Share and Enjoy!

Dr. Bob

Going Beyond the Information Given

[1] Gick, M. L., & Holyoak, K. J. (1980). Analogical problem solving. Cognitive Psychology, 12(3), 306-355.

[2] Allain, R. (Sept. 9, 2009) The development of the atomic model. Retrieved from

[3] No, the structure mapping engine doesn't gotta hemi, but it does a pretty good job modeling the analogical processes that humans use! Check out their original paper: Falkenhainer, B., Forbus, K. D., & Gentner, D. (1989). The structure-mapping engine: Algorithm and examples. Artificial intelligence, 41(1), 1-63.

[4] Nokes-Malach, T. J., VanLehn, K., Belenky, D. M., Lichtenstein, M., & Cox, G. (2013). Coordinating principles and examples through analogy and self-explanation. European Journal of Psychology of Education, 28(4), 1237-1263.

Thursday, August 17, 2017

Smoking, Non-smoking, or First Available?: Availability Bias

Learning By Doing

Pop quiz! Do your best to answer the following questions. 

  1. Since 1994, the homicide rate in the US has: risen sharply, risen slightly, stayed the same, fallen slightly, or fallen sharply.
  2. After a plane crash, people's estimates of air traffic accidents: increase, decrease, or stay the same.
  3. Bad things always happen in threes. Do you: strongly agree, slightly agree, not have any feelings one way or the other, slightly disagree, or strongly disagree.

Your Information Ecology

Last time, we talked about the confirmation bias. We explored how the mind uses shortcuts to gather information and make judgments about the world. In addition to the confirmation bias, the mind uses many other shortcuts. One of these is the availability bias, which states that our judgments of the "truthiness" [1] of a given statement are based on how easily relevant information comes to mind [2].

Consider the following example. Is the suicide rate among Americans higher or lower than the homicide rate? Stop for a second and think about your answer. Then, pause again and ask yourself how you formed your answer. What information did you draw upon? What long-term memories did you consult?

If you're like me, then you might be surprised to learn that the suicide rate is almost double the homicide rate [3]. If that surprises you, then consider why you thought that the homicide rate was higher. One reason might be because the media reports more stories about homicide than suicide. Therefore, the information we are exposed to does not reflect the actual rates (i.e., there are more news reports of homicide even though the suicide rate is higher). We are influenced by the information that is available (hence the name of this bias).

Availability Mechanisms: How Often and When?

Much of the early work on "heuristics and biases" was conducted by a team of psychologists named Daniel Kahneman and Amos Tversky. In one of their papers, they empirically demonstrated the availability bias with a very simple manipulation [4]. First, they created two lists of 39 names. The first list contained 19 names of famous women and 20 non-famous men's names. The second list was the exact opposite. It featured 19 famous men (and 20 non-famous women's names). After constructing the two lists, they asked people to listen to them and estimate if the list had more names of men or women. Can you guess what they found? 

In the list containing famous women, participants estimated that there were more women than men in the list, even though there was actually one fewer female name. The reason is that participants were able to easily recall the names of the famous women (and less able to recall the non-famous men's names).
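A little arithmetic shows why fame can swamp the actual head count. The recall probabilities below are assumptions chosen for illustration, not figures from the study:

```python
# Assumed recall probabilities -- illustrative, not from Tversky & Kahneman.
n_famous_women, p_recall_famous = 19, 0.9
n_plain_men, p_recall_plain = 20, 0.3

expected_women_recalled = n_famous_women * p_recall_famous  # about 17
expected_men_recalled = n_plain_men * p_recall_plain        # about 6

# The list has one MORE man than women, yet recall favors the women:
assert expected_women_recalled > expected_men_recalled
```

Because frequency judgments lean on whatever is easiest to retrieve, the recalled sample, not the actual list, drives the estimate.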

In their paper, Tversky and Kahneman proposed several potential mechanisms for the availability bias. First, easily generated ideas, thoughts, and memories are the ones that are the most frequently encountered. For example, you see your family members and coworkers more often than your distant cousins or high-school classmates. When asked to name the people you know, it is more likely that you will name the people you see everyday than those whom you haven't seen in years. 

Another property that has an impact on the fluent generation of ideas and memories is recency. It is easier to recall the names of people, places, and things that you've recently encountered. As they say: Out of sight, out of mind.

In summary, the frequency and recency of exposure to information can have a large impact on how easily ideas and memories are called to mind.

The S.T.E.M. Connection

Scientific thinking is synonymous with critical thinking, and knowing about the availability bias might help students become more critical of the information they hear reported in the news. They might also become a little more skeptical of their own beliefs. For example, if they hear someone claim, Bad things happen in threes, they might realize that the claim is based on the (false) notion that, "It must be true because I can think of lots of examples." The same might be true in designing a hypothesis to test. Just because you can easily imagine an outcome to the experiment doesn't make it more true (or likely). 

In conclusion, it is handy to know about cognitive biases. Why? Although you might not become immune to them, it might help reduce their impact (see also the post on metacognition). An understanding of the availability bias might help students better calibrate their view of the world if they realize that frequent and recent information can influence their thinking. In the immortal words of G.I. Joe: Knowing is half the battle. Battle on my friends...and don't be heavily swayed by the first thing that pops into your mind!

Share and Enjoy!

Dr. Bob

Going Beyond the Information Given

[1] In case this term is new to you, truthiness was coined by Stephen Colbert on his show, The Colbert Report.

[2] Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 5(2), 207-232.

[3] According to this Freakonomics podcast, there were "36,500 suicides in the U.S. and roughly 16,500 homicides" in 2009.

[4] Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 5(2), 207-232.

Thursday, May 18, 2017

Can You Either Confirm or Deny?: Confirmation Bias

Learning By Doing

Let's play a game. Unfortunately, I have to send you away from this page. But go play and come right back! 

Reflection Questions
  • How did you do? 
  • Did you figure out the rule that governs the sequence of numbers? 
  • What problem-solving strategies did you use? 
  • Was there something that you wish you would have done differently? 

The Two Flavors of Confirmation Bias

Informally, the confirmation bias is the tendency to seek evidence that is consistent with your beliefs. The more personal the beliefs, the stronger the bias. More formally, there are two parts to the definition. The first part is "searching for confirmatory evidence," and the second part is "selectively interpreting the data to fit with one's hypothesis."

Selective Search of Data: The Luminiferous Ether

The number generation game that you played at the beginning of this post is a good example of looking for evidence that conforms to your initial hypothesis [1]. It's a tricky puzzle, and an overwhelming majority of people submit triples that confirm their suspicions. If this describes you, then you are not alone.
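The hidden rule in Wason's task [1] is simply "any increasing sequence," which makes it easy to sketch why confirmatory triples are uninformative:

```python
# Wason's 2-4-6 task: the experimenter's hidden rule is just "strictly increasing".
def hidden_rule(a, b, c):
    return a < b < c

# Triples chosen to CONFIRM the popular guess ("add 2 each time") all pass...
assert hidden_rule(2, 4, 6)
assert hidden_rule(10, 12, 14)
# ...so they can never reveal that the real rule is broader.
# A triple that tests a rival hypothesis is far more informative:
assert hidden_rule(1, 9, 100)      # fits! so the rule is not "add 2"
assert not hidden_rule(6, 4, 2)    # descending fails: disconfirming evidence
```

Only the last two probes, the ones designed to break the favored hypothesis, actually narrow down what the rule is.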

Confirmation bias is not relegated to the psychological laboratory. It also operates in the real world. Scientists, for example, often have a vested and personal interest in seeing their hypotheses confirmed by their data. A classic example in the history of science is the search for evidence of the "luminiferous ether." Until the late 19th century, the ether was believed to be the substance that carried light; like sound, light was thought to need a medium through which to propagate. Finally, in 1887, Albert Michelson and Edward Morley conducted a famous experiment that conclusively disconfirmed the existence of the ether [2]. Before that experiment, a great deal of effort was invested in finding evidence for this mysterious substance.

Bottom line: The data are selectively collected and disconfirmatory evidence is deemed irrelevant.

Selective Interpretation of Data: The People v. O. J. Simpson 

The O. J. Simpson trial is a good example of selectively interpreting evidence to support your position or claim [3]. As in most trials, there was evidence that nobody could deny: blood at O. J.'s house contained the DNA of Nicole Brown Simpson. Blood found in O. J.'s white Ford Bronco matched both Nicole and Ron Goldman's DNA. O. J. Simpson had been arrested for physically assaulting Nicole. These are all incontrovertible facts. However, the defense and prosecution interpreted the data differently. The defense claimed that the blood samples had been planted by a racist LAPD cop. The prosecution countered that the blood was not planted, but was a result of the murders and O. J.'s subsequent attempt to cover them up.

Bottom line: The data are right, but the interpretation of the data is subject to dispute.

The S.T.E.M. Connection

There are implications of the confirmation bias for the classroom as well. In the mid- to late 1960s, educational psychologists experimentally manipulated teachers' expectations of their students. The teachers were told that certain students were about to experience a learning "spurt" (or not). The researchers randomly selected which kids were assigned to the "spurt" condition.

What did they find? They found that teacher expectations had a measurable impact on the number of IQ points the students gained over the course of an academic year. The effect was particularly strong for kids in first and second grade [4]. Although the authors did not provide a mechanism, we might expect that the confirmation bias was at work. Every time a child in the spurt condition did something notable, it confirmed that teacher's expectation. If the student failed to live up to her expectation, then you might imagine the teacher was able to explain away her behavior (e.g., she was just having a bad day).

Confirmation bias plagues us all, and it can be difficult to avoid. Given that, it is important to experience it first hand, receive feedback when it does happen, and practice looking for and interpreting evidence that goes against one's beliefs. Only then can we get a true picture of the world.  

Share and Enjoy!

Dr. Bob

Going Beyond the Information Given

[1] Wason, P. C. (1960). On the failure to eliminate hypotheses in a conceptual task. Quarterly Journal of Experimental Psychology, 12(3), 129-140.

[2] Motta, L. (2007) Michelson-Morley experiment. Retrieved from

[3] Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2(2), 175.

[4] Rosenthal, R., & Jacobson, L. (1968). Pygmalion in the classroom. New York: Holt, Rinehart and Winston.