Thursday, August 27, 2015

They Call Me the Working Man: Working Memory (Part 1)


Editorial Note: 
For the next two weeks, I want to discuss the distinction between short-term memory and working memory. Once we've sorted out the differences, we will dive into the connection between working memory and intelligence. First, let's talk about how to model what's going through your mind...right now.


Short-term vs. Long-term Memory

In a previous post, we talked about the distinction between short-term and long-term memory. The evidence for proposing two distinct systems came from a study that demonstrated enhanced memory for items early in a list of words (the primacy effect), as well as superior recall for items at the end of the list (the recency effect). To make sense of this U-shaped curve, the authors theorized that the items early in the list made it into a permanent memory store, whereas the items that occurred later in the list were still hanging around in short-term memory.
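
To see how two stores could produce a U-shaped curve, here is a minimal sketch in Python. The buffer size, rehearsal counts, and encoding probability are illustrative assumptions of my own, not values from the study; the point is only that primacy plus recency yields the U shape.

    # Toy model of the U-shaped serial position curve. Early items get
    # extra rehearsal and may reach long-term memory (primacy); the last
    # few items are still sitting in the short-term buffer (recency);
    # middle items get neither advantage. All parameters are invented.
    BUFFER_SIZE = 4           # assumed capacity of the short-term buffer
    LTM_PER_REHEARSAL = 0.15  # assumed chance one rehearsal encodes to LTM
    LIST_LENGTH = 20

    def recall_probability(position, list_length=LIST_LENGTH):
        # Earlier items get more rehearsals before the list fills up.
        rehearsals = max(1, BUFFER_SIZE - position + 1)
        p_ltm = 1 - (1 - LTM_PER_REHEARSAL) ** rehearsals
        # The last BUFFER_SIZE items are assumed to still be in the buffer.
        p_stm = 1.0 if position > list_length - BUFFER_SIZE else 0.0
        # Recall succeeds if the item is in either store.
        return p_ltm + p_stm - p_ltm * p_stm

    for pos in range(1, LIST_LENGTH + 1):
        bar = "#" * int(20 * recall_probability(pos))
        print(f"item {pos:2d}: {bar}")

Running this prints a crude bar chart: tall bars at the start, tall bars at the end, and a trough in the middle.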

In addition to behavioral evidence, there is also neuroscientific evidence for the two memory systems. Using a logic called double dissociation, neuroscientists demonstrated that some patients have damage to their long-term memory while their short-term memory works just fine. The double dissociation was established when they also found patients with the opposite problem: their long-term memory was intact, but they had difficulty remembering information for a short period of time.


Working Memory & The Three Sub-components

Although the short- versus long-term memory model was successful in explaining some of the empirical findings, it became clear that it couldn't explain all of the behavioral results. Here is an example. Consider the following list of words: pit, day, cow, pen, rig. According to the research on the limitations of short-term memory, these five items should fit comfortably in short-term memory. But consider a different list of words: man, cap, can, map, mad. Does it seem harder to remember these words? According to the model of short-term memory, this list should be neither easier nor harder than the previous list of words because, again, there are only five items. How do we reconcile these observations?
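
Before answering, it helps to quantify the intuition. As a rough proxy for phonological similarity, we can compute the average pairwise edit distance between the words in each list. Letters stand in for phonemes here, which is a simplification of my own, but the contrast is stark:

    # Crude way to quantify why the second list feels harder: average
    # pairwise similarity between the words. Letter overlap is only a
    # rough stand-in for phonological similarity, but it makes the point.
    from itertools import combinations

    def edit_distance(a, b):
        # Classic single-row dynamic-programming Levenshtein distance.
        dp = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            prev, dp[0] = dp[0], i
            for j, cb in enumerate(b, 1):
                prev, dp[j] = dp[j], min(dp[j] + 1,        # deletion
                                         dp[j - 1] + 1,    # insertion
                                         prev + (ca != cb))  # substitution
        return dp[-1]

    for name, words in [("distinct", ["pit", "day", "cow", "pen", "rig"]),
                        ("similar",  ["man", "cap", "can", "map", "mad"])]:
        dists = [edit_distance(a, b) for a, b in combinations(words, 2)]
        print(name, "list: mean pairwise edit distance =",
              round(sum(dists) / len(dists), 2))

The "similar" list comes out far more confusable than the "distinct" one, even though both contain five items.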

Because the concept of "short-term memory" was unable to explain these findings, the notion of a temporary memory buffer had to be extended. To do so, a cognitive scientist named Alan Baddeley proposed a revision to short-term memory that he called working memory [1, 2]. It is similar to short-term memory in the sense that it is a temporary storage facility, but it was elaborated to help explain why phonetically similar words, such as cap/map and man/mad, are easy to confuse when trying to remember them. The new model of memory included three distinct sub-components: the central executive, the phonological loop, and the visuo-spatial sketch-pad. To see how these components interact, Baddeley provided the following diagram (see Fig. 1).


Figure 1. A schematic representation of the working memory components.


Central Executive

The first component is called the central executive. It is responsible for focusing your attention on relevant information and for switching attentional focus when needed. In other words, it is the central executive's job to coordinate the flow of information to and from the subsystems to accomplish a task. An example of coordinating information occurs when you are attempting to navigate with a map. You have to hold spatial information from the map in mind while looking up at the real world. The central executive has to synthesize the spatial information from the map with the verbal information located on the street signs.

Phonological Loop

The next component is the articulatory or phonological loop. The best way to visualize the phonological loop is to imagine an extremely short cassette tape. When I say "extremely short," I mean it can only hold about two seconds of audio or phonological information. It's called an "articulatory" mechanism because it replays the audio over and over. This makes intuitive sense: when people have to remember a list of numbers or words for a short period of time, they repeat it to themselves over and over. The purpose of rehearsing the list is to hold that information until it can be recalled, after which it can be dumped from the phonological loop.
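
If the loop really holds about two seconds of audio, a quick back-of-the-envelope calculation predicts that your span depends on how fast you can say the items. The articulation times below are rough guesses of mine, not measured values:

    # Back-of-the-envelope estimate of span under the "two-second tape"
    # assumption: you can hold however many items you can articulate in
    # about two seconds. Articulation times are illustrative guesses.
    LOOP_SECONDS = 2.0

    def estimated_span(seconds_per_item):
        return int(LOOP_SECONDS / seconds_per_item)

    print("short words (~0.3 s each):", estimated_span(0.3), "items")
    print("long words  (~0.7 s each):", estimated_span(0.7), "items")

On these made-up numbers, you hold roughly six short words but only two long ones, which is the flavor of the well-known word-length effect.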

Visuo-Spatial Sketchpad

Finally, the visuo-spatial sketch-pad is meant to track and momentarily retain spatial information. For example, when driving on the highway, it is necessary to keep track of the arrangement of cars behind you so that you don't unintentionally cut someone off when changing lanes. A quick glance in your rearview mirror quickly updates the spatial information found in the visuo-spatial sketch-pad.

"Are you sure we're not getting some interference?"

Occam's razor posits that the simplest explanation is best. Do we really need three different sub-components? In the case of momentary memory storage, I think the extra machinery is completely warranted [3]. The concept of working memory, which includes a central executive aided by two sub-systems, can explain behavioral findings that a unitary concept of short-term memory could not. Probably the best example of a finding that working memory can explain, but short-term memory cannot, is the concept of interference.

Suppose we play a game similar to the old electronic game Simon. We will play two rounds. In the first round, just play as usual. For the second round, however, you have to repeat the word "the" out loud while you play. How did you do? If you're like most people, repeating "the" doesn't really interfere with your ability to play the game because the information is held in a spatial buffer.

However, suppose I ask you to memorize the following list of words, but after you read through the list, you have to repeat the word "the."
  • Butterfly
  • Airport
  • Kitchen
  • Church
  • School
  • Knife
  • Solid
Now how did you do? If you're like me, it is impossibly hard. Why? Because the articulatory loop can't do its job of refreshing the contents of the list that you want to remember.
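
Here is a toy simulation of what just happened. The decay rate and recall threshold are invented for illustration; the point is only that blocking rehearsal lets the memory traces fade:

    # Toy simulation of articulatory suppression. Items in the loop decay
    # over time; rehearsal refreshes them. Saying "the, the, the..."
    # occupies the articulatory mechanism, so rehearsal is blocked and
    # the items fade. All constants are illustrative, not fitted to data.
    DECAY_PER_SECOND = 0.4   # assumed loss of trace strength per second
    RECALL_THRESHOLD = 0.2   # assumed minimum strength for recall

    def simulate(n_items, seconds, rehearsing):
        strengths = [1.0] * n_items
        for _ in range(seconds):
            strengths = [s - DECAY_PER_SECOND for s in strengths]
            if rehearsing:
                # Rehearsal sweeps through the list, restoring each trace.
                strengths = [1.0] * n_items
        return sum(s > RECALL_THRESHOLD for s in strengths)

    words = 7  # butterfly, airport, kitchen, church, school, knife, solid
    print("rehearsal allowed:  ", simulate(words, 10, rehearsing=True), "recalled")
    print("suppressed ('the'): ", simulate(words, 10, rehearsing=False), "recalled")

With rehearsal, all seven items survive; with the loop tied up, they fade within seconds. A unitary short-term store has no natural way to capture this difference.
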
That concludes Part 1 of our discussion of working memory. Check back next week for the link between working memory and intelligence, plus the connection to education!

Share and Enjoy!

Dr. Bob

For More Information

[1] Baddeley, A. D., & Hitch, G. J. (1974). Working memory. The Psychology of Learning and Motivation, 8, 47-89.

[2] Baddeley, A. (2000). The episodic buffer: A new component of working memory? Trends in Cognitive Sciences, 4(11), 417-423.

[3] There have been further refinements to the model of working memory. For example, Baddeley proposed that an additional set of components are needed to bind episodic information held in long-term memory to the contents of working memory. Here is a schematic of those components (see Fig. 2). 


Figure 2. A further elaboration of the working memory model.

Baddeley, A. (2003). Working memory: Looking back and looking forward. Nature Reviews Neuroscience, 4(10), 829-839.

Thursday, August 20, 2015

Getting Off the Couch: Motivation

In the movie Office Space, the main character is struggling to figure out what he wants to be when he grows up. He recalls a procedure from high school for determining what his profession should be:
Our high school guidance counselor used to ask us what you'd do if you had a million dollars and you didn't have to work. And invariably what you'd say was supposed to be your career. So, if you wanted to fix old cars then you're supposed to be an auto mechanic.
Suppose you didn't have to get up and go to work tomorrow. How would you spend your time? What would motivate you to get out of bed?

"It's a problem of motivation, all right?" -Peter Gibbons

Motivation is a slippery subject. To help clarify our discussion, let's start with a definition. I am using the word motivation to describe your own private rationale for engaging in some activity. In other words, motivation is your internal mechanism for figuring out how to spend your time. You might be motivated to engage in an activity because it results in some concrete output (e.g., earning a paycheck, painting a picture, or writing a poem), builds a new memory or skill (e.g., going sight-seeing, attending a top-rope course, or skateboarding), or simply passes the time (e.g., watching television or playing a game).

What are the sources of motivation? Off the top of my head, I can think of a few:
  • Power
  • Money
  • Prestige
  • Fame
  • Impressing a potential mate
  • Demonstrating mastery
  • Contributing to something bigger than yourself (meaning)
  • The promise of a better future
  • Someone in power tells you what you must do (authority)
  • Your or someone’s life depends on you completing a task (survival)
Some of these sources are better motivators than others [1]. For example, seeing a grizzly bear is a powerful motivator to leave the situation (i.e., survival). On the other hand, some sources are more nuanced, and they might ebb and flow. On some days you might feel like practicing the piano, while on other days you just can't bring yourself to sit down in front of the keys and practice your scales (i.e., demonstrating mastery).

Thanks for all your hard work...bzzz!

Now that we have a working definition, let's talk about what the data say about motivation. One of my favorite motivation studies was led by the behavioral economist Dan Ariely [2]. The study included two experiments. The first asked undergraduate participants (whom we will call "laborers") to find duplicate letters (e.g., "ss") on a sheet of paper filled with hundreds of letters. There were three experimental conditions. In the Acknowledged condition, the laborers turned in their work to the experimenter, who then checked the work to see if the laborer had found all the duplicate letters. In the Ignored condition, the experimenter took the sheet and, without checking the accuracy of the completed work, placed it on a tall stack of paper. In the Shredded condition, the experimenter took the laborer's sheet and promptly ran it through a paper shredder. Bzzt!

After completing the first sheet, the laborer had to make a choice: fill out another sheet or stop? Of course, there was a catch. The first sheet paid a "salary" of $0.55, but each sheet thereafter paid $0.05 less. There was a diminishing return on the laborer's time. The experimenters were interested to see whether manipulating the meaning of the work had any impact on how many sheets the laborers completed in the three experimental conditions. What would you predict?
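
Before you answer, here is the wage schedule the laborers faced, sketched in a few lines of Python. The payment rule comes from the description above; everything else is just arithmetic:

    # Pay schedule: $0.55 for the first sheet, five cents less for each
    # sheet after that, so sheet 12 and beyond pay nothing. This prints
    # the marginal and cumulative pay a laborer faces when deciding
    # whether to do "one more sheet".
    def pay_for_sheet(n):
        return max(0.0, 0.55 - 0.05 * (n - 1))

    total = 0.0
    for sheet in range(1, 13):
        total += pay_for_sheet(sheet)
        print(f"sheet {sheet:2d}: ${pay_for_sheet(sheet):.2f}  (total ${total:.2f})")

By the ninth sheet, a laborer is working for pocket change, which is exactly why the number of sheets completed is a sensitive measure of how meaningful the work feels.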

If you mentally placed yourself in the participants' shoes, you may have predicted that laborers in the Acknowledged condition completed far more sheets. And you would be correct. On average, they completed about 9 sheets, many more than participants in the Ignored (~7 sheets) and Shredded (~6 sheets) conditions. By the way, there was no statistical difference between the Ignored and Shredded conditions.

In the second experiment, participants were asked to assemble Bionicle Lego robots. What could be more fun than getting paid to play with Legos!? In the Meaningful condition, each completed robot was placed on the experimenter's table, so the laborer could see the fruits of her labor. In the Sisyphus condition, when the laborer turned in a robot, the experimenter promptly disassembled it. As in the previous experiment, the laborer could decide to stop at any time. Laborers in the Meaningful condition assembled an average of 10.6 robots, while those in the Sisyphus condition assembled an average of only 7.2.

What is the implication for motivation? These results demonstrate that, when you hold money constant, people are willing to work much longer on tasks that they find even the tiniest bit meaningful. The meaning in these experiments, of course, was derived from acknowledgement by another person: the person in charge had to acknowledge that the work had been completed. Looked at another way, the worst thing a manager can possibly do is fail to acknowledge that an employee has done a task. Instead, a manager should help employees see how their work is in some way connected to a greater purpose or project. In other words, don't run your employees' work through the proverbial paper shredder once they are done.


The STEM Connection

The danger in education is that in-class assignments and homework can feel like busy work. Unfortunately, students can't fall back on rationalizing the time they spend by thinking, "Well, at least I'm getting paid." Instead, students need to find motivation elsewhere. Teachers and guidance counselors might have to periodically remind students that they are investing their time in the promise of a better future. 

Some students, however, are more concrete and live for the moment. How do we help this type of student find motivation to study? Offering rewards won't work because studying is, by its very nature, a deferred investment. Offering an external reward can also easily undermine students' intrinsic motivation [3]. Instead, the Lego study suggests that students might be motivated by keeping track of their progress. Each completed assignment is an incremental step along a much greater path. If each assignment and exam can be quantified in some way, it is highly motivating to look back and see how far you've come.

Another source of inspiration for how to motivate people is the video-game industry. We all know how addictive video games can be. What is it about their design that draws us in and keeps us coming back? Keeping track of progress is almost universal, and so is the idea of "leveling up." As players progress through a game, they become more proficient. Like the idea of the flow channel, a game needs to be simultaneously accessible to beginners and challenging for expert players. Thus, games need to evolve to stay commensurate with the player's improving skill. Video games allow kids to demonstrate their proficiency. Is there a way we can engineer the classroom experience so that demonstrating mastery is looked upon favorably (e.g., spelling bees, math competitions, debates)?

In summary, motivation is fickle. Sometimes we have it; sometimes it is nowhere to be found. Probably the most reliable source of motivation is spending time on an activity that we chose for ourselves, and that we find meaningful. If we can help our students find a connection to something beyond themselves, then we can tap into the same motivation that has built things like Wikipedia.


Share and Enjoy!

Dr. Bob

For More Information

[1] Pink, D. H. (2011). Drive: The surprising truth about what motivates us. New York: Penguin.

[2] Ariely, D., Kamenica, E., & Prelec, D. (2008). Man's search for meaning: The case of Legos. Journal of Economic Behavior & Organization, 67(3), 671-677.

[3] Kohn, A. (1999). Punished by rewards: The trouble with gold stars, incentive plans, A's, praise, and other bribes. New York: Houghton Mifflin Harcourt.

Thursday, August 13, 2015

Target Acquired!: Cognitive Skill Acquisition

Assuming you have your driver's license, think back to when you first learned how to drive. What were the stages that you went through? Did you take a class (e.g., Driver's Ed)? Did your parent take you to an abandoned country road and turn over the wheel? Did you learn to drive a car with a manual or automatic transmission? (Lawyers may object to the next couple of questions because I might be "leading the witness.") In the first few months behind the wheel, were you allowed to listen to the radio? Could you carry on a conversation and drive at the same time? Did you talk to yourself when trying to recall which pedal was the gas or which gear you were in?

Learning to drive is complex because it is a mixture of motor learning (e.g., when to disengage the gas, engage the clutch, and shift gears) and verbal learning (e.g., the traffic laws). After several years of practice, driving becomes second nature. How, then, did we go from a nervous teenage driver to an expert on the road? Acquiring this complex skill requires that we traverse several stages of development.


The Three Stooges, er...Stages!

Acquiring a motor task of sufficient complexity requires passing through three stages [1]. An example of a motor task might be learning to serve overhand in volleyball or learning the breaststroke. These tasks are complex because the person must synchronize the timing of various muscle groups. When a novice begins to learn how to serve overhand, the first stage, called the cognitive stage, is best described as verbal. The person learning the task benefits from hearing a verbal articulation of the steps needed to complete it. The learner might even recite the steps to themselves while practicing. Then the learner transitions to the second stage, the associative stage, where some of the motor subroutines become more fluid. The individual commits fewer errors and relies less on verbal articulation. The final stage, the autonomous stage, is when performance is nearly error-free and completely fluid. You know that the learner has entered the autonomous stage when she can do the task and carry on a conversation at the same time. This indicates that a verbal representation of the task is no longer needed and does not interfere with performance.

Given its usefulness in describing how motor tasks are learned, this three-stage framework was adapted to describe how one acquires a complex cognitive skill [2]. Examples of complex cognitive skills are learning multi-column addition or learning how to balance a checkbook. Like a motor task, the acquisition of a cognitive skill is theorized to undergo three stages. The first stage is the declarative stage, in which information is represented as declarative chunks. Like the cognitive stage of motor learning, the declarative stage can be articulated verbally, and the problem solver is very deliberate when practicing the skill. The second stage, the knowledge compilation stage, takes place when the declarative chunks are "compiled" into procedural representations. Again, there are fewer errors when an individual reaches the second stage, and the problem solver relies less on verbally stating the rules. The final stage, the procedural stage, is similar to the autonomous stage in that all of the declarative chunks have been successfully converted into procedural rules. Performance is smooth and much faster than in the first two stages.

The mapping between the two theoretical frameworks can be summarized as follows:

  Stage   Motor skill [1]       Cognitive skill [2]
  1       Cognitive             Declarative
  2       Associative           Knowledge compilation
  3       Autonomous            Procedural

Steps or Waves? 

I've described both frameworks in terms of discrete stages that transition from one stage to the next. This is a step-wise theory of development, as represented in Figure 1. Is this an accurate depiction of how we acquire a complex skill? 


Figure 1: A step-wise, schematic representation of development.

While this looks good on paper, life is not so simple. Stages of development are rarely discrete [3]. Instead, there can be forward progress on one day, but then a regression back to the old way of doing things on a different day. This would be more like an overlapping set of functions, as represented in Figure 2.


Figure 2: A schematic representation of discontinuous development.

A good example of the contrast between the step and wave models of development is watching kids learn how to add. At first, their performance on this task is heavily dependent on a physical representation of the number system. In other words, kids like to add by counting on their fingers. If I ask a child, "What is three plus four?", one strategy he could use is to start counting, using his fingers as placeholders:
1, 2, 3 [holds up three fingers]
1, 2, 3, 4 [holds up an additional four fingers]
[Goes back to the beginning and counts all of the raised fingers] 1, 2, 3, 4, 5, 6, 7
Three plus four is seven!
If kids do this enough, they begin to realize that they can jump-start the counting by holding up three fingers and counting on from there:
4, 5, 6, 7 [holds up another finger for each new number]
Three plus four is seven!
But when we up the ante and give the child a more difficult problem (e.g., one that goes beyond ten), he may fall back to his first strategy or come up with an entirely different one.
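
The two strategies are easy to express as tiny procedures. This is only a cartoon of the child's behavior, but it makes the shortcut that "counting on" provides explicit:

    # The two finger-counting strategies as tiny procedures. "Count all"
    # raises fingers for both addends and then recounts everything;
    # "count on" starts from the first addend and counts up.
    def count_all(a, b):
        fingers = list(range(1, a + 1))    # "1, 2, 3"
        fingers += list(range(1, b + 1))   # "1, 2, 3, 4"
        total = 0
        for _ in fingers:                  # recount every raised finger
            total += 1
        return total

    def count_on(a, b):
        total = a                          # start from "three"
        for _ in range(b):                 # "4, 5, 6, 7"
            total += 1
        return total

    print(count_all(3, 4), count_on(3, 4))  # both print 7

Both procedures get the right answer; the second simply does less work, which is presumably why children drift toward it with practice.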

A similar observation can be made in the driving example. When traffic is normal, we might be motoring along with no problems (i.e., Stage 2). But then something unexpected happens, and we suddenly find ourselves needing to turn off the radio or interrupt a conversation so we can focus on the current situation (i.e., regress back to Stage 1).


The STEM Connection

Take a minute to solve the following problem. While you are working through each step, keep track of what is currently in your working memory and where you must guide your attention. For bonus points, see if you can supply a mathematical justification for each step. 


   614
   438
 + 683  

The goal of this little exercise is twofold. First, I want to simulate what it was like to be a novice back in Stage 1. As a novice, most of your knowledge is stored declaratively, which means you need to think about what to do in terms of the verbalizable chunks of information stored in long-term memory. Second, I also wanted to interrupt your pre-compiled procedures for this task. You have been doing multi-column addition for so long that you have probably automatized the steps. That means you might not have access to those declarative representations anymore. Asking for a justification is my attempt to get you to "un-compile" your knowledge.
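
For comparison, here is the multi-column procedure "un-compiled" into explicit steps, with the mathematical justification for each step in the comments. This is a sketch of the textbook algorithm, not a model of human memory:

    # Column-by-column addition, with the justification for each step.
    def column_addition(numbers):
        # Represent each number with its ones digit first.
        digits = [[int(d) for d in str(n)][::-1] for n in numbers]
        width = max(len(d) for d in digits)
        carry, result = 0, []
        for place in range(width):          # attend to one column at a time
            column = [d[place] for d in digits if place < len(d)]
            total = sum(column) + carry     # digits share the same place value
            result.append(total % 10)       # write the units of this column
            carry = total // 10             # ten of this unit = one of the next
        if carry:
            result.append(carry)            # a final carry becomes a new column
        return int("".join(str(d) for d in reversed(result)))

    print(column_addition([614, 438, 683]))  # prints 1735

Notice how much state the procedure juggles: the current column, the running total, and the carry. That is precisely the load your working memory carried while you solved the problem by hand.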

To teach a complex cognitive skill, it is informative to try and answer the following questions:
  • What is my current goal? 
  • Which pieces of information should I focus on? 
  • How did I know which action to take? 
  • What information can I ignore after I am done with this step? 
Once we have answers to these types of questions, we can backtrack and figure out the best way to teach the steps. In addition, knowing about the stages of development might help us figure out how to personalize our instruction. If we can figure out which stage the student is in, then we can help support the types of representations that are currently being used and offer guidance that will help the student reach the next stage.


Share and Enjoy!

Dr. Bob

For More Information

[1] Fitts, P. M. (1964). Perceptual-motor skill learning. Categories of Human Learning, 47, 381-391.

[2] Anderson, J. R. (1982). Acquisition of cognitive skill. Psychological Review, 89(4), 369.

[3] Siegler, R. S. (1996). Emerging minds: The process of change in children's thinking. Oxford University Press.

Thursday, August 6, 2015

Mirror, Mirror: Memory As a Reflection of the Environment

Take a few minutes to reflect on these questions:
  • Why is there a distinction between short-term memory and long-term memory?
  • Why does forgetting happen?
  • What are the environmental demands on my memory?
  • Why does a lot of forgetting happen initially, but then it tapers off?
  • What is the optimal amount of time that I need to spend studying to remember something?


Why do we forget?

In a previous post, we graphed the forgetting curve from Hermann Ebbinghaus's study of his own memory. For very short delays, his memory was very good. But as the delays got longer, his memory for his lists of trigrams (e.g., "LEK") dropped precipitously. Then the accuracy for remembering the lists leveled off at around 20%.

We also drew a forgetting curve for the recall of Spanish vocabulary words across an entire life span. The shape of the graph was surprisingly similar to the forgetting curve for the nonsense words used in Ebbinghaus's study. There was a large amount of forgetting initially, but then the percentage of recalled words leveled off at around 60%.
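
Both curves share the same signature shape, and one simple functional form mimics it: a power law that decays toward a floor. The parameters below are eyeballed to produce "steep at first, then level off"; they are not fitted to either data set:

    # One simple functional form with the right shape: a power law that
    # decays toward a floor. Parameters are illustrative, not fitted.
    def retention(t_days, floor=0.2, scale=0.8, rate=0.5):
        # floor: the level the curve never drops below (cf. permastore)
        # scale: how much is forgettable; rate: how fast it is forgotten
        return floor + scale * (1 + t_days) ** (-rate)

    for t in [0, 1, 7, 30, 365]:
        print(f"day {t:3d}: {retention(t):.0%} retained")

This prints a rapid drop over the first few days and a long, flat tail near the floor, which is qualitatively what both forgetting curves look like.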

These two graphs are interesting in their own right, but you might be asking yourself: Why are these graphs shaped this way? Why are forgetting curves steep at first and then level off at some asymptote? There must be a reason why memory works this way. Let's look to finches, moths, and ants for some answers.


Finches, moths, and ants...oh my!

During his trip to the Galápagos Islands, Charles Darwin noticed something peculiar about a particular family of birds. Although they belonged to the same family, there were several different species of finches, each with a distinctive beak. Some of the birds had a wide, stout beak, whereas other finches had a sharp, pointed beak. It turned out that the different shapes were suited to the birds' different diets: the wide-beaked finches ate nuts and berries, while the sharp-beaked finches ate insects. In effect, the shape of the beak was optimized to the finch's environment, which included its dietary requirements [1].

But what happens if the environment changes? Can an organism's features evolve in response? It's hard to conduct a controlled laboratory experiment to answer this question; fortunately, a natural experiment occurred at the turn of the century. During the rise of the Industrial Revolution in Great Britain, the amount of pollution escalated rapidly. Ash from the factories coated trees in the surrounding region, and bark that was once light-colored turned dark gray under the soot. Resting on the bark of these trees was a species of moth called the peppered moth. The most prevalent peppered moths had bodies that matched the original color of the tree bark. When the trees became gray, birds could easily spot the light-colored moths against the dark background. Due to natural variation in pigmentation, some moths were born darker, which made them much more difficult for predators to see. As the lighter moths were eaten, the ratio of darker to lighter moths tipped in favor of the dark-bodied peppered moths [2].

The study of finches demonstrated that species are optimized to their environment, and the study of peppered moths showed that the range of variation within a species can shift toward the traits that better favor survival. So far, however, this conversation has been about the outward appearance of an organism. What about an animal's behavior? Herbert A. Simon wrote this about the complex behavior of the ant:
Imagine watching an ant on the beach. Its path looks complicated. It zigs and zags to avoid rocks and twigs. Very reminiscent of complex behavior — what an intelligent ant! 
Except an ant is just a simple machine. It wants to return to its nest, so it starts moving in a straight line. When it encounters an object, it zigs to avoid it. Repeat until the destination is reached. 
Trying to simulate the path itself would be difficult, but simulating the ant is easy. It’s maybe a half-dozen rules.
The point of this parable is to illustrate the interaction between the environment and perceived complexity. Lots of complex looking things are really the result of the territory, the shape of the beach, and not the agent, in this case, an ant. 
But, of course, with this metaphor, I’m not really talking about ants. I’m talking about people. How much of the complexity of human behavior is really the product of the environment? [3]
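
Simon's claim that the ant is "maybe a half-dozen rules" is easy to take literally. Here is a toy ant in a few lines of Python; the grid, the obstacle layout, and the exact rules are mine, not Simon's, but the flavor is the same: a complicated-looking path produced by a trivially simple program.

    # Simon's ant, reduced to a handful of rules on a 10x10 "beach".
    # Rule 1: step diagonally toward the nest. Rule 2: if that square is
    # blocked, sidestep north, otherwise east. The path looks complicated;
    # the program is not. (A toy illustration, not Simon's actual model.)
    obstacles = {(3, 3), (5, 5), (6, 5), (7, 8)}   # twigs and rocks
    nest, ant = (9, 9), (0, 0)
    path = [ant]

    while ant != nest:
        x, y = ant
        candidates = [(x + (x < 9), y + (y < 9)),  # rule 1: toward the nest
                      (x, min(9, y + 1)),          # rule 2a: sidestep north
                      (min(9, x + 1), y)]          # rule 2b: sidestep east
        ant = next(s for s in candidates if s not in obstacles and s != ant)
        path.append(ant)

    print(" -> ".join(map(str, path)))

The printed path zigs and zags around every obstacle, yet the whole "organism" is two rules and a destination. The complexity lives in the beach.
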
Memory, as we have seen, is highly complex. There appear to be at least two different storage mechanisms (i.e., short- vs. long-term memory) and several different classifications of memory types (i.e., semantic vs. episodic; procedural vs. declarative). Can we explain the complexity of memory by looking at the environment?


"All the news that's fit to print" (in memory).

To answer that question, we need a model of the environment, and we need to see whether it matches (more or less) the models of memory that we currently have (i.e., the forgetting curves). How would you construct a model of the information in your everyday environment? That seems like a tall order. Since we live in an information-rich environment, it might be a good idea to narrow things down.

That's precisely what two cognitive scientists did when attempting to construct their own model of the informational environment [4]. They decided to look at all of the words that appeared in the headlines of the New York Times over a two-year period (i.e., the 730 days between Jan. 1, 1986 and Dec. 31, 1987). They tracked two variables. The first was the day on which a word appeared. For example, the word Challenger occurred on days 29, 31, 34, 36, 40, 44, and 99. Then they counted how many times that word appeared in a 100-day window (n = 7 for Challenger). These two variables allowed them to construct a retention function: the probability that a word will appear on the 101st day, given the number of days since its last occurrence.

To make that a little clearer, let's look at some hypothetical data (see Fig. 1). Suppose we want to know: what is the probability that a word will appear on the 101st day, given that it has been 20 days since the last time I saw it? According to the hypothetical retention function, there is about an 11% chance of that particular word appearing. If it's been 100 days since I last saw it, however, then the probability drops to around 3%. In other words, it is highly unlikely that a particular word will appear in my environment as more and more time passes since its last appearance. I think this makes intuitive sense. It's unlikely that we will read about Muammar Gaddafi today, given that we haven't heard anything about him in several years.


Figure 1: A hypothetical retention function for words appearing on the 101st day.
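
To see how such a function could be computed, here is a miniature version of the analysis on invented data. Only the Challenger day numbers come from the description above; the other words, their appearance days, and the analysis details are my own assumptions, not the authors' corpus or code:

    # Miniature recency analysis on invented appearance data. For each
    # day, record how long it has been since the word last appeared and
    # whether it appears today; the estimated retention function is the
    # fraction of "yes" outcomes at each recency.
    from collections import defaultdict

    appearances = {  # day numbers on which each word appeared (days 1-100)
        "challenger": [29, 31, 34, 36, 40, 44, 99],
        "budget":     [2, 10, 18, 26, 34, 42, 50, 58, 66, 74, 82, 90, 98],
        "summit":     [5, 70],
    }

    counts = defaultdict(lambda: [0, 0])  # recency -> [appeared, opportunities]
    for days in appearances.values():
        shown, last = set(days), None
        for day in range(1, 101):
            if last is not None:
                r = day - last                # days since the last appearance
                counts[r][1] += 1
                counts[r][0] += day in shown  # did it reappear today?
            if day in shown:
                last = day

    for r in sorted(counts):
        appeared, total = counts[r]
        if total >= 3:                        # skip recencies with few samples
            print(f"{r:3d} day(s) since last appearance: P = {appeared/total:.2f}")

Even on this tiny made-up data set, the estimated probabilities tend to fall as the recency grows, which is the qualitative pattern in Figure 1.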

To bring this full circle, it seems that our memory is like a finch's beak, a peppered moth's coloration, or an ant's path on the beach. Memory is optimized to the environment in which it operates; thus, memory is a reflection of the environment. The forgetting curves show that memory is solid over short durations, just as the New York Times headline model shows that a word is highly likely to show up again after a short delay [5]. But as time goes on, that word is less and less likely to show up. So why bother remembering a piece of information if it is unlikely to appear again in one's informational world?


The STEM Connection

If memory is in fact a reflection of our environment, then what does that mean for the way we structure the informational environment in our classrooms? First of all, the forgetting curves reinforce the adage: Use it or lose it. If the informational environment does not demand that I remember something, then guess what? I'm probably not going to remember it. 

However, we can systematically and intentionally structure the informational environment so that important declarative chunks or procedural memories are needed and exercised periodically. If something is important enough to remember (e.g., the slope-intercept form of a linear equation), then keep bringing it up. Keep using important information. As the demands of the informational environment escalate, students will rise to the occasion. If learned well, the information might even make it into the part of the curve that never goes to zero (i.e., permastore).


Share and Enjoy!

Dr. Bob

For More Information

[1] Darwin's finches

[2] Peppered moth evolution

[3] Simon, H. A. (1996). The sciences of the artificial (Vol. 136). MIT press.

[4] Anderson, J. R., & Schooler, L. J. (1991). Reflections of the environment in memory. Psychological Science, 2(6), 396-408.

[5] Some might argue that the editors of newspapers and magazines are sensitive to our ability to remember and therefore might decide not to write about something that occurred long ago. While that might be the case, Anderson and Schooler (1991) also used two other databases to construct their argument: a database of children's speech (CHILDES) and the second author's email inbox.