Thursday, November 16, 2017

Ab Fab: Confabulation

Learning By Doing

To get this post started, let's memorize a bunch of useless stuff. Do your best to memorize the following list:

banana
orange
kiwi
strawberry
peach
blueberry
mango
watermelon
lemon
grapefruit




What did you notice about the list? Did you reorganize the list to make the items easier to remember? Did you use any other memorization strategy?


"I have memories, but I can't tell if they're real." -K, Blade Runner 2049

In previous posts, we addressed the misconception that "memory is like a video recording." Instead, memories are reconstructed at the moment of recollection, and they are influenced by the way in which the memory is probed (e.g., "About how fast were the cars going when they hit [vs. smashed into] each other?"). Memories are also colored by one's emotional experience [1]. In recalling past events, the mind may put a positive spin on something that was horrific at the time the memory was created. A group of veterans at the local VFW, playing cards and fondly reminiscing about their time at war, is a vivid example.

Pop Quiz! Without looking at the top of this post, rate how confident you are that the following words appeared in the original list [2].


Item         High   Med. High   Medium   Med. Low   Low
Kiwi          5         4          3         2       1
Strawberry    5         4          3         2       1
Apple         5         4          3         2       1
Banana        5         4          3         2       1

Were you right? Some people mentally insert the word apple into the list because it is a highly iconic member of the category fruit. Activation of the category spreads to its most tightly knit members; thus, we infer that apple was on the list. This may be an elementary example, but it is symptomatic of a much larger (and more interesting) issue. Memories are not indelibly stamped onto our neurons. Instead, we are prone to inserting new details at the time of retrieval.


Memory Insertion & False Memories

A much more serious example can be found in a line of research where scientists actually implanted false memories in children [3]. Scientists were able to implant false memories in about a quarter of their volunteers, and the false memories ranged from being lost in the mall to being attacked by a dog. The most outlandish implanted memory was convincing the participant that he or she had witnessed a demonic possession [4]!

Inferring an item that wasn't on the original list, or recalling a false memory from childhood, are both examples of confabulation, which is defined as a memory disturbance that is neither intentional nor created to deceive other people. The more commonplace version is usually just a harmless insertion of a memory that did not necessarily happen to that person. For example, on a work trip, I was completely convinced that it was my first overnight trip to West Virginia. I firmly believed that until one of my friends helpfully pointed out that I had spent my 30th birthday at a resort in WV. He knew because he had been there with me! In my defense, the trip was a surprise planned by my wife, so I didn't know we were going to WV until we got there, and then I was shocked to see that many of my good friends were there, too.


The S.T.E.M. Connection

The relevance of confabulation might not present a huge problem in middle or high school, but it may become an issue later in life. For example, the authorship of a scientific paper or the assignment of credit for an invention can become contentious when the parties involved selectively forget or insert false memories [5]. Consider George H. Daniels, who wrote a book entitled Science in American Society: A Social History (1971). A reviewer pointed out that he had plagiarized entire paragraphs and other large sections without giving proper credit to the original sources. Daniels was mortified, and he wrote an apology to the scientific community [6]. It is probably the case that Daniels did not intend to deceive his readers; instead, he falsely accepted these ideas as his own.

Aside from this specific (and potentially embarrassing) example, confabulation is important when thinking critically about someone's recollection of events. This is, of course, extremely important in eyewitness testimony (as we saw earlier). But it's also important to social scientists who rely on their participants' retrospective accounts of their behavior. Participants might think about what they logically should have done, instead of what they actually did. This inference can color the data that is ultimately collected.

In the end, we can all sympathize with K, the main character from the movie Blade Runner 2049 (2017), because our memories are subject to intrusions. He is rightfully skeptical of his memories because it can be difficult to verify whether they are real (or not)!


Share and Enjoy!

Dr. Bob

Going Beyond the Information Given

[1] A great example of this is in the Pixar movie Inside Out (2015).

[2] The recognition test with confidence ratings was used in: Brewer, W. F., & Treyens, J. C. (1981). Role of schemata in memory for places. Cognitive Psychology, 13(2), 207-230.

[3] Elizabeth Loftus is probably the most widely recognized name in this area. A summary of her research can be found in her TED Talk.

[4] Mazzoni, G. A., Loftus, E. F., & Kirsch, I. (2001). Changing beliefs about implausible autobiographical events: a little plausibility goes a long way. Journal of Experimental Psychology: Applied, 7(1), 51-59.

[5] Goldberg, C. (2006) Have you ever plagiarized? If so, you're in good company. Retrieved from boston.com

[6] Daniels, G. H. (1972, January 14). Acknowledgement. Science, 175(4018), 124-125.

Wednesday, November 1, 2017

Reading Room Material: Stranger Things & The Frontal Lobe

If you're like me, then you are probably working your way through the second season of Stranger Things. Imagine my delight as this particular episode (s2e3) touched on a familiar topic.

Stranger Things: Season 2, Episode 3 "The Pollywog"

The main characters are listening to a lecture by their favorite teacher (complete with overhead transparencies!). He describes one of the most famous people in the history of neuroscience [1]:


Scott Clarke: The case of Phineas Gage is one of the great medical curiosities of all time. Phineas was a railroad worker in 1848 who had a nightmarish accident. A large iron rod was driven completely through his head. Phineas miraculously survived. He seemed fine. And physically, yes, he was. But his injury resulted in a complete change to his personality.

The story of Phineas Gage is a well-worn tale, and it is told in nearly every undergraduate neuroscience course. Thus, I found it extremely curious that Mr. Clarke was telling this story to his 5th-grade science class. I also found it curious that Mr. Clarke ends the story with "a complete change to his personality." He didn't explain in what way Phineas changed. 

According to The American Phrenological Journal and Repository of Science (1851), Gage's physician reported that he had become "gross, profane, coarse, and vulgar to such a degree that his society was intolerable to decent people" [2]. In other words, Gage became a jerk. Given the change in his personality, it was assumed that the function of the frontal lobe was to inhibit behaviors and thoughts. No frontal lobe? No inhibition. 

That doesn't sound like a very fulfilling life. However, if you continue to dig into this fascinating story, there is a small ray of hope (unfortunately, that ray doesn't always make it into the textbooks). A few years after he recovered from his injuries (including a fungal infection!), Phineas's personality renormalized. He wasn't such a jerk, and he even held down a job driving a stagecoach [3]. 

The story of Phineas Gage is hopeful because it demonstrates the brain's amazing ability to overcome severe trauma. He didn't live a very long life, but Gage remains immortalized in the annals of neuroscience (as well as the greatest TV series of all time). 


Share and Enjoy!

Dr. Bob


More Material

[1] Read the transcript or watch the full episode.

[2] Fowler, O. S., & Fowler, L.N. (Eds.). (1851). The American Phrenological Journal and Repository of Science, Literature and General Intelligence, Volumes 13-14, New York, NY: Fowlers & Wells, p. 89.

[3] Hamilton, J. (2017, May 21). Why brain scientists are still obsessed with the curious case of Phineas Gage. Retrieved from npr.org.


Thursday, October 5, 2017

How to Build an Atom: Analogical Reasoning

Learning By Doing


You are leading a siege on the most fortified castle in the land. Your army is ready to attack, but at the last minute you notice that sending all of your soldiers across the wooden bridge will collapse it. How will you attack the castle without your army being eaten by the moat-dwelling alligators?

Fast forward a few hundred years. You are now a world-class oncologist, and you are working with a new technology to treat cancer. It's called a "gamma knife" because it uses gamma rays to kill cancerous cells. At high energy levels, a gamma ray will destroy healthy tissue. At low energy levels, it can't knock out the cancer. How can you use the gamma knife to destroy the cancerous cells, without harming the surrounding tissue? 

Did you solve each of the problems? If so, how did you solve them? (Note: the image for this blog was meant to serve as a hint.) Did you notice a similarity between the two scenarios? Did the second scenario help with the first (or vice versa)? This famous analogical problem was originally stated by Mary Gick and Keith Holyoak in 1980 [1].


Nucleus : Sun :: Electrons : Planets

Much of our problem solving is done analogically. We see a problem, and when we're lucky, it might remind us of a similar problem we've solved in the past. If a true relationship exists, then we can extrapolate from the past to the current problem. The history of science contains several illuminating examples of this process.

Take, for instance, Ernest Rutherford's model of the atom, which he proposed in 1911 [2]. Knowing that the atom consisted of a tiny, dense, positively charged nucleus surrounded by electrons, he took what he knew about the solar system (i.e., the base) and applied the same logic to the structure of the atom (i.e., the target). The nucleus sits at the center of the atom, much like the sun sits at the center of the solar system. The electrons revolve around the nucleus in a manner similar to the planets revolving around the sun. In other words, Rutherford saw a mapping between the nucleus and the sun, and between the electrons and the planets (see Figure 1).


Figure 1: The analogical mapping between the solar system and the atom

Notice, however, that there are some properties of the solar system that he did not map onto the atomic structure. For instance, the sun gives off an intense amount of heat and might be considered "yellow." Nowhere in this theorizing did Rutherford claim that the nucleus gave off heat or was "yellow." That means Rutherford was sensitive to the properties of, and relationships between, the two systems. He knew that some of the properties of the base domain (i.e., the solar system) should not map onto the target domain (i.e., the atom).


"Hey! That thing gotta hemi?"

To better understand the psychological processes used during analogical reasoning, Dedre Gentner and her colleagues built a computational model called the Structure Mapping Engine (SME) [3]. One of the key features of the SME is the emphasis that it places on relations instead of features.

Let's take electricity, for example. In the early days, when scientists were trying to make sense of the concept of electricity, they likened it to something they understood quite well: the flow of water. The analogy is that electrons are like water in that they move from one location to another. A battery is like a reservoir, and gravity is like the difference in electrical potential. The SME looks for alignments between the relations in the base and target domains. For example, it sees a commonality between two different types of FORCES (i.e., gravity vs. electrical potential) and two different types of ENTITIES (i.e., water vs. electrons).

It necessarily throws out the surface-level features that are irrelevant to understanding how electricity works. For example, one feature of water is that it is blue. Since this is a feature and not a relation, the SME does not transfer the features water is blue or water is wet onto electrons.
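
If you'd like to play with this idea, here is a tiny Python sketch of the core intuition. To be clear, this is my own toy illustration and not the actual SME algorithm (which is far more sophisticated); the relation names (FLOWS, CAUSES) and the entities are made up for the example. It represents each domain as a set of relations over entities plus some surface features, aligns relations of the same type, and simply ignores the features.

    # Toy illustration of the structure-mapping idea (NOT the real SME algorithm).
    # Each domain is a set of relations over entities, plus surface features.

    base = {  # the water-flow domain
        "relations": [("FLOWS", "water", "reservoir", "pipe"),
                      ("CAUSES", "gravity", "flow")],
        "features":  [("BLUE", "water"), ("WET", "water")],
    }

    target = {  # the electricity domain
        "relations": [("FLOWS", "electrons", "battery", "wire"),
                      ("CAUSES", "voltage", "current")],
        "features":  [],
    }

    def align(base, target):
        """Pair up relations of the same type and arity; ignore surface features."""
        mappings = []
        for b in base["relations"]:
            for t in target["relations"]:
                if b[0] == t[0] and len(b) == len(t):   # same relation, same arity
                    mappings.extend(zip(b[1:], t[1:]))   # map argument to argument
        return sorted(set(mappings))

    for base_entity, target_entity in align(base, target):
        print(f"{base_entity} -> {target_entity}")
    # Prints pairs such as: water -> electrons, reservoir -> battery,
    # gravity -> voltage. The feature BLUE(water) never transfers, because
    # only relations (not features) participate in the alignment.

Notice that the mapping falls out of the shared relational structure; nothing in the sketch ever compares what water and electrons look like.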


The S.T.E.M. Connection

There are several learning studies that explicitly instruct students to do their own analogical comparisons between two sources of information. For example, my friends and collaborators Tim Nokes-Malach and Dan Belenky explicitly trained students in a physics class to compare worked-out examples of rotational kinematics problems. The students had to answer questions such as: 

  • What is similar and what is different across the two problems?
  • Are there differences in what the two problems ask for in terms of acceleration? If so, what are they?
The goal was to motivate the students to compare and contrast the two examples, with the hope that the students could then see the mappings between the relations of the two examples. In their study, the authors demonstrated that doing this analogical comparison led to better performance on far-transfer problems [4].

This kind of intervention could be done for many topics. The goal, of course, is to show how relations in the base domain map onto the target domain. It's also relevant to talk about how the features of the base and target domains don't necessarily have to align. 

Analogical reasoning is extremely powerful because it can extend the knowledge that we have into the unknown. It can help us draw upon the knowledge we have from previous problems we've solved and apply that knowledge to problems we've never seen before. That's pretty cool (analogically speaking, of course). 


Share and Enjoy!

Dr. Bob

Going Beyond the Information Given

[1] Gick, M. L., & Holyoak, K. J. (1980). Analogical problem solving. Cognitive Psychology, 12(3), 306-355.

[2] Allain, R. (Sept. 9, 2009) The development of the atomic model. Retrieved from https://www.wired.com/2009/09/the-development-of-the-atomic-model.

[3] No, the structure mapping engine doesn't gotta hemi, but it does a pretty good job modeling the analogical processes that humans use! Check out their original paper: Falkenhainer, B., Forbus, K. D., & Gentner, D. (1989). The structure-mapping engine: Algorithm and examples. Artificial intelligence, 41(1), 1-63.

[4] Nokes-Malach, T. J., VanLehn, K., Belenky, D. M., Lichtenstein, M., & Cox, G. (2013). Coordinating principles and examples through analogy and self-explanation. European Journal of Psychology of Education, 28(4), 1237-1263.

Thursday, August 17, 2017

Smoking, Non-smoking, or First Available?: Availability Bias

Learning By Doing

Pop quiz! Do your best to answer the following questions. 

  1. Since 1994, the homicide rate in the US has: risen sharply, risen slightly, stayed the same, fallen slightly, or fallen sharply.
  2. After a plane crash, people's estimates of air traffic accidents: increase, decrease, or stay the same.
  3. Bad things always happen in threes. Do you: strongly agree, slightly agree, not have any feelings one way or the other, slightly disagree, or strongly disagree.



Your Information Ecology

Last time, we talked about the confirmation bias. We explored how the mind uses shortcuts to gather information and make judgments about the world. In addition to the confirmation bias, the mind uses many other shortcuts. One of these is the availability bias, which states that our judgment of the "truthiness" [1] of a given statement is based on how easily relevant information comes to mind [2].

Consider the following example. Is the suicide rate among Americans higher or lower than the homicide rate? Stop for a second and think about your answer. Then, pause again and ask yourself how you formed your answer. What information did you draw upon? What long-term memories did you consult? 

If you're like me, then you might be surprised to learn that the suicide rate is almost double the homicide rate [3]. If that surprises you, then consider why you thought that the homicide rate was higher. One reason might be because the media reports more stories about homicide than suicide. Therefore, the information we are exposed to does not reflect the actual rates (i.e., there are more news reports of homicide even though the suicide rate is higher). We are influenced by the information that is available (hence the name of this bias).


Availability Mechanisms: How Often and When?

Much of the early work on "heuristics and biases" was conducted by two psychologists, Daniel Kahneman and Amos Tversky. In one of their papers, they empirically demonstrated the availability bias with a very simple manipulation [4]. First, they created two lists of 39 names. The first list contained 19 famous women's names and 20 non-famous men's names. The second list was the exact opposite: it featured 19 famous men's names (and 20 non-famous women's names). After constructing the two lists, they asked people to listen to them and estimate whether each list had more men's or women's names. Can you guess what they found? 

In the list containing famous women, participants estimated that there were more women than men, even though there was actually one fewer female name in the list. This works because the participants were able to easily recall the women's names (and were less able to recall the men's names). 
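
To make the mechanism concrete, here is a minimal simulation in Python. The recall probabilities are numbers I made up purely for illustration (this is not a model of Tversky and Kahneman's actual data); the only assumption is that famous names are easier to recall than non-famous ones.

    import random

    random.seed(0)

    # Hypothetical list: 19 famous women and 20 non-famous men (as in the study),
    # with made-up recall probabilities -- famous names come to mind more easily.
    names = [("woman", 0.6)] * 19 + [("man", 0.2)] * 20

    def judge_majority():
        """Recall each name with its probability, then judge which seemed more frequent."""
        recalled = [gender for gender, p in names if random.random() < p]
        return "women" if recalled.count("woman") > recalled.count("man") else "men"

    judgments = [judge_majority() for _ in range(1000)]
    print("Judged 'more women' on", judgments.count("women"), "of 1000 simulated trials")
    # Even though the men outnumber the women 20 to 19, nearly every simulated
    # "participant" judges that there were more women, because the famous
    # (easier-to-recall) names dominate what comes to mind.

The point of the simulation is simply that a biased sample of what we can recall produces a biased estimate of how frequent something is.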

In their paper, Tversky and Kahneman proposed several potential mechanisms for the availability bias. First, easily generated ideas, thoughts, and memories are the ones that are the most frequently encountered. For example, you see your family members and coworkers more often than your distant cousins or high-school classmates. When asked to name the people you know, it is more likely that you will name the people you see everyday than those whom you haven't seen in years. 

Another property that has an impact on the fluent generation of ideas and memories is recency. It is easier to recall the names of people, places, and things that you've recently encountered. As they say: Out of sight, out of mind.

In summary, the frequency and recency of exposure to information can have a large impact on how easily ideas and memories are called to mind.


The S.T.E.M. Connection

Scientific thinking is synonymous with critical thinking, and knowing about the availability bias might help students become more critical of the information they hear reported in the news. They might also become a little more skeptical of their own beliefs. For example, if they hear someone claim, Bad things happen in threes, they might realize that the claim is based on the (false) notion that, "It must be true because I can think of lots of examples." The same might be true in designing a hypothesis to test. Just because you can easily imagine an outcome to the experiment doesn't make it more true (or likely). 

In conclusion, it is handy to know about cognitive biases. Why? Although you might not become immune to them, knowing about them might help reduce their impact (see also the post on metacognition). An understanding of the availability bias might help students better calibrate their view of the world if they realize that frequent and recent information can influence their thinking. In the immortal words of G.I. Joe: Knowing is half the battle. Battle on, my friends...and don't be heavily swayed by the first thing that pops into your mind!


Share and Enjoy!

Dr. Bob

Going Beyond the Information Given

[1] In case this term is new to you, truthiness was coined by Stephen Colbert on his show, The Colbert Report.

[2] Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 5(2), 207-232.

[3] According to this Freakonomics podcast, there were "36,500 suicides in the U.S. and roughly 16,500 homicides" in 2009.

[4] Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. Cognitive psychology, 5(2), 207-232.

Thursday, May 18, 2017

Can You Either Confirm or Deny?: Confirmation Bias

Learning By Doing


Let's play a game. Unfortunately, I have to send you away from this page. But go play and come right back! 

Reflection Questions
  • So...how did you do? 
  • Did you figure out the rule that governs the sequence of numbers? 
  • What problem-solving strategies did you use? 
  • Was there something that you wish you would have done differently? 


The Two Flavors of Confirmation Bias

Informally, the confirmation bias is the tendency to seek evidence that is consistent with your beliefs. The more personal the beliefs, the stronger the bias. More formally, there are two parts to the definition. The first part is "searching for confirmatory evidence," and the second part is "selectively interpreting the data to fit with one's hypothesis."


Selective Search of Data: The Luminiferous Ether

The number generation game that you played at the beginning of this post is a good example of looking for evidence that conforms to your initial hypothesis [1]. It's a tricky puzzle, and an overwhelming majority of people submit triples that confirm their suspicions. If this describes you, then you are not alone.
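
If you want to recreate the game for your students, a few lines of code will do it. The sketch below assumes the rule from Wason's original 1960 task, namely any three numbers in strictly increasing order (the online game may use a different rule), and the guesses are just illustrative.

    def fits_rule(triple):
        """Wason's original hidden rule: any strictly increasing sequence."""
        a, b, c = triple
        return a < b < c

    # A confirmation-biased tester proposes only triples that fit their own
    # "even numbers increasing by two" hypothesis -- and so never learns otherwise.
    confirming_guesses   = [(2, 4, 6), (10, 12, 14), (20, 22, 24)]
    rule_testing_guesses = [(1, 2, 3), (5, 10, 100), (2, 4, 3)]

    for triple in confirming_guesses + rule_testing_guesses:
        print(triple, "->", "fits the rule" if fits_rule(triple) else "does not fit")
    # The confirming triples all fit, which feels like progress but rules nothing out.
    # Only guesses designed to break the hypothesis -- like (1, 2, 3) or (2, 4, 3) --
    # reveal what the rule actually is.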

Confirmation bias is not relegated to the psychological laboratory. It also operates in the real world. Scientists, for example, often have a vested and personal interest in seeing their hypotheses confirmed by their data. A classic example in the history of science is the search for evidence of the "luminiferous ether." Through most of the 19th century, it was believed that this was the substance that carried light. Like sound, light was believed to need a medium through which to propagate. Finally, in 1887, Albert Michelson and Edward Morley conducted a famous experiment that found no evidence of the ether [2]. Before that experiment, a lot of effort had been invested in finding evidence for this mysterious substance.

Bottom line: The data are selectively collected and disconfirmatory evidence is deemed irrelevant.


Selective Interpretation of Data: The People v. O. J. Simpson 

The O. J. Simpson trial is a good example of selectively interpreting evidence to support your position or claim [3]. As in most trials, there was evidence that nobody could deny: blood at O. J.'s house contained the DNA of Nicole Brown Simpson. Blood found in O. J.'s white Ford Bronco matched both Nicole's and Ron Goldman's DNA. O. J. Simpson had been arrested for physically assaulting Nicole. These are all incontrovertible facts. However, the defense and prosecution interpreted the data differently. The defense said that the blood samples were planted by a racist LAPD cop. The prosecution claimed that the blood was not planted, but was the result of the murders and O. J.'s subsequent attempt to cover them up.

Bottom line: The data are right, but the interpretation of the data is subject to dispute.

The S.T.E.M. Connection

There are implications of the confirmation bias for the classroom as well. In the mid- to late-1960s, educational psychologists experimentally manipulated teachers' expectations of their students. The researchers randomly selected certain students and told their teachers that those students were about to experience a learning "spurt"; the remaining students served as controls. 

What did they find? They found that teacher expectations had a measurable impact on the number of IQ points the students gained over the course of an academic year. The effect was particularly strong for kids in first and second grade [4]. Although the authors did not provide a mechanism, we might suspect that the confirmation bias was at work. Every time a child in the spurt condition did something notable, it confirmed the teacher's expectation. If the student failed to live up to that expectation, you might imagine the teacher was able to explain away the behavior (e.g., she was just having a bad day).

Confirmation bias plagues us all, and it can be difficult to avoid. Given that, it is important to experience it first hand, receive feedback when it does happen, and practice looking for and interpreting evidence that goes against one's beliefs. Only then can we get a true picture of the world.  


Share and Enjoy!

Dr. Bob

Going Beyond the Information Given

[1] Wason, P. C. (1960). On the failure to eliminate hypotheses in a conceptual task. Quarterly journal of experimental psychology, 12(3), 129-140.

[2] Motta, L. (2007) Michelson-Morley experiment. Retrieved from http://scienceworld.wolfram.com/physics/Michelson-MorleyExperiment.html

[3] Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2(2), 175.

[4] Rosenthal, R., & Jacobson, L. (1968). Pygmalion in the classroom. New York: Holt, Rinehart and Winston.

Thursday, April 20, 2017

Departures and Arrivals: Linguistic Relativity

Learning By Doing

How many words do you have in your vocabulary for that white stuff that falls from the sky when the weather turns cold? How does your list of words compare to somebody who grew up in the desert?

Arrival

In the movie Arrival (2016), we are introduced to Dr. Louise Banks, who is an expert in linguistics. When an alien ship touches down in Montana, she is called upon by her government to help translate the alien language. During the time she spends with the heptapods (i.e., the aliens), Dr. Banks introduces the audience to an idea from linguistics that helps explain what is happening. Here is her dialog with her collaborator, Ian Donnelly. 

Dr. Louise Banks: If you immerse yourself into a foreign language, then you can actually rewire your brain.
Ian Donnelly: Yeah, the Sapir-Whorf hypothesis. It's the theory that the language you speak determines how you think and...
Dr. Louise Banks: Yeah, it affects how you see everything.

First of all, I am impressed that Ian is familiar with the Sapir-Whorf Hypothesis because his character is a physicist by training. I guess he must have taken a linguistics or psychology course just for fun. Second, I didn't realize it, but there are a bunch of misconceptions swirling around the Sapir-Whorf hypothesis.


"This is pure snow! It's everywhere!" –Charles De Mar

The first misconception I ran into was the name: Sapir-Whorf Hypothesis. According to some references [1], Benjamin Lee Whorf was a student of Edward Sapir. However, they never co-authored a paper espousing "the Sapir-Whorf Hypothesis," nor did they even formulate it as a testable hypothesis. It was only later that the field of linguistics gave it a name and a solid formulation. Hmm. This "hypothesis" is not off to a great start.

The second misconception is my favorite. According to the Sapir-Whorf Hypothesis (i.e., linguistic relativity), words that have many variations are important to that culture. For example, Eskimos supposedly have 50 different words for "snow." Given where they live, snow figures prominently in their daily lives. Ergo, they have lots of ways to refer to snow, right? They must! While it's true that there are lots of words for snow, it turns out that English also has a lot of words for snow (e.g., snow, sleet, slush, powder, freezing rain, drifting snow, etc.). So it is difficult to establish a baseline for what counts as "a lot of words" and what does not. 

Finally, the original statement of the hypothesis was tempered a bit, so now there are two formulations. The first is the strong version, which stipulates that language determines our thoughts. In other words, if I had grown up speaking German, my thought patterns would be different from those that I enjoy as an English speaker. Who knows what I could have achieved if I spoke a different language! If that version seems a bit heavy-handed, there is also the weak version, which says that language influences our thoughts.

The S.T.E.M. Connection 

There is at least some evidence for the weak version of the Sapir-Whorf Hypothesis. Consider, for example, the well-documented difference in mathematical achievement between Chinese and American students. Where does this advantage come from? One possible explanation is the difference in the way numbers are represented in Chinese and English [2]. In both languages, the numbers one through nine have an arbitrary mapping between the numeric concept (e.g., 9) and the spoken word (jiǔ vs. nine). So we wouldn't expect any advantage either way for counting small numbers. 

But after ten, things start to get interesting. In Chinese, the numbers from 11 to 19 are formed by prefixing the digit with "ten." So the Chinese word for "11" can be translated as "ten one." In English, however, the arbitrary naming convention continues because the word "eleven" does not give any information about its place value. The hypothesis, then, is that Chinese-speaking students will have an easier time learning about place value than English-speaking students. Place value becomes extremely important, for example, when learning to "borrow" during multi-column subtraction.
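
To see just how transparent the Chinese convention is, here is a short Python sketch that prints the names for 11 through 19 in both systems. I've glossed the Chinese names in English ("ten one," "ten two," and so on), the way the translation above does, rather than writing out the Mandarin words.

    # Compare English number names with glosses of the Chinese names for 11-19.
    english = {11: "eleven", 12: "twelve", 13: "thirteen", 14: "fourteen",
               15: "fifteen", 16: "sixteen", 17: "seventeen", 18: "eighteen",
               19: "nineteen"}
    digits = {1: "one", 2: "two", 3: "three", 4: "four", 5: "five",
              6: "six", 7: "seven", 8: "eight", 9: "nine"}

    def chinese_gloss(n):
        """Gloss of the Chinese name for 11-19: 'ten' followed by the units digit."""
        return "ten " + digits[n % 10]

    for n in range(11, 20):
        print(f"{n}: {english[n]:<10} vs. {chinese_gloss(n)}")
    # 11: eleven     vs. ten one
    # 12: twelve     vs. ten two
    # ...
    # Nothing in "eleven" or "twelve" tells a young learner that these numbers
    # are "ten plus something" -- the Chinese names wear their place value on
    # their sleeve.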

Linguistic relativity is a fascinating topic, and I am glad that an academy-award winning movie introduced the topic to a broad audience. Maybe it will provoke us to think in new ways...now that we have a word for it!


Share and Enjoy!

Dr. Bob

Going Beyond the Information Given

[1] Am I embarrassed that I'm using wikipedia as a reference? Sure. Is there any reason to believe it isn't true? Not that I know. https://en.wikipedia.org/wiki/Linguistic_relativity

[2] Miller, K. F., Smith, C. M., Zhu, J., & Zhang, H. (1995). Preschool origins of cross-national differences in mathematical competence: The role of number-naming systems. Psychological Science, 6(1), 56-60.

Thursday, March 9, 2017

Rise of the Machines: Machine Learning

Learning By Doing

Allow me to attempt to simulate what it's like to be two years old again. Below are two types of bugs. The first type (on the left) is called a monek, and the second type (on the right) is called a plaple [1]. Study both types and pay particular attention to the attributes of each type of bug. You might even imagine your mom pointing to each one and saying, "That's a monek. Can you point to the monek?"


Figure 1. Examples of moneks and plaples.

Once you've familiarized yourself with these delightful creatures, test your knowledge by taking the following quiz [1]. You might want to scroll your window so that you're not tempted to cheat!


Figure 2. Test your knowledge of these two types of bugs. 

How did you do? Was it easy? What features did you rely on to figure out if something was a monek or a plaple?


What is "Machine Learning?"

Learning to categorize two different types of bugs may not seem all that incredible. That is, until you try to teach a computer how to recognize and classify visual objects. It's not easy! How might you approach this problem? One method is called machine learning.

Maybe you've heard about machine learning as it applies to Facebook's facial recognition software, or Google's reliance on machine learning to serve up highly specific (and accurate) search results. Or maybe you heard about the machine learning project to identify pictures of cats on the internet (I've heard a rumor that there are a couple of pictures of cats on the internet).

As it turns out, all of the big tech companies are using it. Apple, Microsoft, and Amazon all rely on machine learning to solve some of their thorniest technical problems. But have you ever wondered what the heck "machine learning" is? Have you also wondered, Can I learn how to harness the power of machine learning to solve my own problems? If you've given any thought to either of these two questions, then this is your lucky day! I am going to attempt to explain what machine learning is.


"I said you're holding back" –Walk the Moon

To talk about machine learning, it's useful to introduce a few concepts. The first concept is the outcome that we would like to predict, which is called the label; an example paired with its known outcome is a labeled instance. If you recall the steps of the scientific method, you may remember talking about the dependent measure (or "outcome variable"). The label is analogous to the dependent measure. Second, each instance has a set of quantifiable or measurable properties. The properties are used to describe the instances.

Now that we've defined our data, there are three steps in developing our model.


Step 1 - Training

Like the monek/plaple example, we need to train our algorithms on a dataset for which we already know the labels of the instances. When we are training a machine-learning algorithm, it helps if we can provide it with unequivocal examples, which we call the ground truth. Thus, the first step in machine learning is to run the algorithm on a training dataset. The training dataset has values for both the properties and the labels. The machine-learning algorithm attempts to learn the association between the values of the properties and their labels. Table 1 is an example of a very small training dataset, which is derived from Fig. 1.


Table 1: Training Data (with Labeled Instances)
ID Antenna Head Body Legs Tail Number of Legs Label
M-01 Fuzzy Oval Striped Short Stinger 8 Monek
M-02 Short Oval Spotted Short Stinger 8 Monek
P-01 Short Oval Striped Long Long 4 Plaple
P-02 Fuzzy Square Striped Long Long 4 Plaple
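
Here is a minimal sketch of the training step in Python, using pandas and scikit-learn (my choice of tools for the illustration; any machine-learning toolkit would work). The rows mirror Table 1, and the column names are ones I made up for the example.

    import pandas as pd
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import OneHotEncoder
    from sklearn.tree import DecisionTreeClassifier

    cols = ["antenna", "head", "body", "legs", "tail", "num_legs"]

    # The training dataset: property values AND labels (Table 1).
    train = pd.DataFrame(
        [["Fuzzy", "Oval",   "Striped", "Short", "Stinger", 8],   # M-01
         ["Short", "Oval",   "Spotted", "Short", "Stinger", 8],   # M-02
         ["Short", "Oval",   "Striped", "Long",  "Long",    4],   # P-01
         ["Fuzzy", "Square", "Striped", "Long",  "Long",    4]],  # P-02
        columns=cols)
    train_labels = ["Monek", "Monek", "Plaple", "Plaple"]

    # One-hot encode the categorical properties, then let a decision tree learn
    # the association between property values and labels.
    model = make_pipeline(OneHotEncoder(handle_unknown="ignore"),
                          DecisionTreeClassifier(random_state=0))
    model.fit(train, train_labels)

A decision tree is only one of many algorithms you could drop in here; the important part is that the algorithm sees both the properties and the labels during this step.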

Step 2 - Validation

We withhold a subset of data so that we can start the second step, which is to evaluate our machine-learning model. We will call this the validation dataset. The goal is to measure how accurate our model is. We do this by feeding the model all of the property values, and we make it guess what the labels are. We then compare those guesses against the withheld "answers." It's common practice to keep track of the types of errors that the model makes and report them as accuracy statistics. Table 2 is an example of a validation dataset.


Table 2: Validation Data (labels known to us, but withheld from the model)
ID Antenna Head Body Legs Tail Number of Legs Label
M-03 Fuzzy Oval Spotted Long Stinger 4 Monek
M-04 Fuzzy Square Spotted Short Stinger 8 Monek
P-03 Short Square Striped Short Long 8 Plaple
P-04 Short Square Spotted Long Long 4 Plaple
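
Continuing in the same spirit, here is a sketch of the validation step (same hypothetical pandas/scikit-learn setup). So that it runs on its own, it repeats the toy training from the previous sketch and then scores the model on the Table 2 rows, whose labels we know but the model has never seen.

    import pandas as pd
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import OneHotEncoder
    from sklearn.tree import DecisionTreeClassifier

    cols = ["antenna", "head", "body", "legs", "tail", "num_legs"]

    # Training data (Table 1) and its labels.
    train = pd.DataFrame(
        [["Fuzzy", "Oval",   "Striped", "Short", "Stinger", 8],
         ["Short", "Oval",   "Spotted", "Short", "Stinger", 8],
         ["Short", "Oval",   "Striped", "Long",  "Long",    4],
         ["Fuzzy", "Square", "Striped", "Long",  "Long",    4]], columns=cols)
    train_labels = ["Monek", "Monek", "Plaple", "Plaple"]

    # Validation data (Table 2) and the withheld answers.
    valid = pd.DataFrame(
        [["Fuzzy", "Oval",   "Spotted", "Long",  "Stinger", 4],
         ["Fuzzy", "Square", "Spotted", "Short", "Stinger", 8],
         ["Short", "Square", "Striped", "Short", "Long",    8],
         ["Short", "Square", "Spotted", "Long",  "Long",    4]], columns=cols)
    valid_labels = ["Monek", "Monek", "Plaple", "Plaple"]

    model = make_pipeline(OneHotEncoder(handle_unknown="ignore"),
                          DecisionTreeClassifier(random_state=0))
    model.fit(train, train_labels)

    guesses = model.predict(valid)                 # the model's guesses for Table 2
    accuracy = model.score(valid, valid_labels)    # fraction of guesses that match
    print(list(guesses), "accuracy:", accuracy)

With only four training rows, don't expect the accuracy to be impressive; the point is the bookkeeping: guess on held-out rows, compare to the withheld answers, and report the errors.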


Step 3 - Testing

Now it's time to release our fledgling machine and start categorizing instances for which we do not have labels. In other words, we feed our machine the property values, and we let the algorithm choose the labels. The dataset in this case doesn't have a ground truth. We are letting the machine do all the work now. Table 3 is an example of the input to our machine-learning algorithm after it has been trained to recognize the two types of bugs.


Table 3: Test Data (label unknown)
ID Antenna Head Body Legs Tail Number of Legs Label
K-06 Fuzzy Oval Spotted Short Long 8 ???
K-07 Short Square Striped Short Stinger 4 ???
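
And here is a sketch of the final step, again repeating the toy training so it runs on its own (pandas and scikit-learn are still just my illustrative choices). The new bugs are the Table 3 rows, and this time there is no answer key; whatever the model prints is the label we go with.

    import pandas as pd
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import OneHotEncoder
    from sklearn.tree import DecisionTreeClassifier

    cols = ["antenna", "head", "body", "legs", "tail", "num_legs"]

    # Train on Table 1, exactly as in the earlier sketches.
    train = pd.DataFrame(
        [["Fuzzy", "Oval",   "Striped", "Short", "Stinger", 8],
         ["Short", "Oval",   "Spotted", "Short", "Stinger", 8],
         ["Short", "Oval",   "Striped", "Long",  "Long",    4],
         ["Fuzzy", "Square", "Striped", "Long",  "Long",    4]], columns=cols)
    train_labels = ["Monek", "Monek", "Plaple", "Plaple"]

    model = make_pipeline(OneHotEncoder(handle_unknown="ignore"),
                          DecisionTreeClassifier(random_state=0))
    model.fit(train, train_labels)

    # Brand-new bugs (Table 3): property values only, no ground truth.
    new_bugs = pd.DataFrame(
        [["Fuzzy", "Oval",   "Spotted", "Short", "Long",    8],   # K-06
         ["Short", "Square", "Striped", "Short", "Stinger", 4]],  # K-07
        columns=cols)

    print(model.predict(new_bugs))   # the machine's labels for K-06 and K-07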

The S.T.E.M. Connection

Suppose you teach math or computer science, and your students are curious about learning to set up a machine-learning project. There are many different tutorials out there, but these two seem like particularly good starting places:
  1. Categorize Lilies using Python libraries
  2. Handwriting recognition using TensorFlow
The first is a little more basic, and it leaves out many details. However, the author does a good job of getting the user up and running quickly. You may need to install some software on your computer, but I found doing so was as simple as advertised. Personally, I'm not super-excited about categorizing lilies, but this is a good project to get your feet wet. 

The second tutorial is a little more advanced. The authors discuss matrix multiplication and vector addition. If you need a way to motivate these topics in your own class [2], then this would be a good resource. In addition, the topic is cool. Your goal is to teach a computer to recognize handwritten digits between zero and nine. Banks, for example, rely on this technology for cashing personal checks. 

Machine learning is cool for so many reasons. It is accessible to people who are interested in the topic [3], it solves many difficult problems, and it has a connection to psychology. For example, learning how to categorize objects is a fundamental skill that young brains must master to make sense of the world!


Share and Enjoy!

Dr. Bob

Going Beyond the Information Given

[1] I am indebted to Takashi Yamauchi for allowing me to recreate the stimuli he used in his study on categorization: 

Yamauchi, T., & Markman, A. B. (2000). Inference using categories. Journal of Experimental Psychology: Learning, Memory, and Cognition, 26(3), 776.

[2] By "motivate," I am of course referring to a potential answer to the age-old student lament: When are we ever going to need to know this?!

[3] I would be remiss if I didn't mention the weka workbench that's also freely available. It's generally used for educational data-mining projects.