Thursday, January 1, 2015

Midnight in the Garden of Encoding and Retrieval: Memory Models

A Simple Model of Memory

In a recent post, I made the following claim: Learning cannot occur when there is no attention. In other words, information must pass through all of the attentional filters before learning can take place. We also talked about working memory as an important buffer where information temporarily resides. But what happens after that? How do we store information for later use? Once we have a model of learning, we can theorize about where things might go wrong. And if we better understand where things go wrong, we can debug those processes and help our students do a better job of learning new information.

To motivate this a little bit, remember the first time you learned the Macarena? You first watched people flail on the dance floor in perfect synchrony. Then, you practiced the dance steps in the privacy of your bedroom. Finally, at the next party, you expertly threw down all of the moves. How does that map onto a simple model of memory?

Let us assume that information has passed from your sensory register (a buffer that holds a ton of information, but only for a very brief duration) through the selection mechanism employed by the attentional system. Now the information is in working memory. What next? Here is a simple process model of memory that we can use to track what happens next.



Through the process of encoding, information is passed from working memory into long-term memory. There, the information undergoes a storage process. And now for the moment of truth. The final step in this model is retrieval, where information is pulled out of long-term memory. Retrieval is when you need to recall a fact (or procedure) and put it to use. 
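
If it helps to see the three steps laid out as moving parts, here is a toy sketch in Python. Everything in it (the names, the made-up success probabilities) is mine, purely for illustration; it is not a validated cognitive model, just a way to emphasize that each arrow in the diagram has its own chance of failing.

import random

long_term_memory = {}  # a dictionary standing in for long-term storage

def encode(cue, fact, p_success=0.9):
    # Encoding: move an item from working memory into long-term memory.
    # It only succeeds some of the time.
    if random.random() < p_success:
        long_term_memory[cue] = fact

def retrieve(cue, p_success=0.8):
    # Retrieval: pull an item back out of long-term memory.
    # It can fail even when the item is stored ("tip of the tongue").
    if cue in long_term_memory and random.random() < p_success:
        return long_term_memory[cue]
    return None

encode("def", "the keyword that starts a Python function definition")
print(retrieve("def"))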

Debugging the Process

Each step outlined above has some probability that it will fail. It doesn't necessarily mean something is wrong (only that we are human). Let's consider each process independently. 

Encoding: As a learner, you may fail to encode the information in a precise fashion that helps you recall it later. Let me give you a perfect example of a problem with encoding. You've seen a penny before, right? So which way is Lincoln facing? What words, if any, appear above his head? Is there any information to the left or right of Lincoln? If you're like most people, this is a very difficult task. It's even hard when you are asked to pick the real penny from a lineup of fakes, as opposed to recalling all of the various features. If you struggled with this, it's because you never bothered to encode the features of a penny. And why should you? They are readily available, and it is likely that nothing really depends on you encoding all of the features.

Storage: There is some decay function associated with memories held in long-term memory. Memories typically decay when they aren't actively used. There are exceptions, of course (e.g., permastore [1] or flashbulb memories). But for the most part, memories that aren't used fade from long-term memory. Think back to your first history class. If you're like me, it's been a while since you've thought about the names of the U.S. presidents, in order, and the dates they were in office. If so, then it's likely those memories have faded away.
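
If it helps to picture what a decay function might look like, one classic idealization is the exponential forgetting curve (going back to Ebbinghaus). Here is a tiny sketch; the strength parameter and the numbers are made up purely for illustration.

import math

def retention(t_days, strength=5.0):
    # Fraction of a memory retained after t_days, assuming simple exponential decay.
    return math.exp(-t_days / strength)

for t in (0, 1, 7, 30):
    print("day {:2d}: about {:.2f} retained".format(t, retention(t)))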

Retrieval: Finally, there may be a problem when you go to remember something. You know it is there, but you can't get access to it. A great example of this is the "tip of the tongue" phenomenon. It's largely a problem with retrieval because you know that you know something, but at the moment of retrieval, something is blocking your path to that information. Often, the memory is blocked because of interference from some other, related (but irrelevant) memory. A good strategy for getting around problems with retrieval is to leave it alone and try recalling the item at some later time. The reason this works is that the interfering memory has started to fade away.


A STEM Example

Suppose you are tasked with learning a new procedure, such as programming a computer to add two integers. Your background research suggests Python is a great programming language for beginners because the syntax is simple. You are also delighted to discover that your computer already has Python installed.

To start the interpreter, you locate and launch the application called Terminal. Then you type python to start a session. Next you learn that a program is a collection of functions, which are small blocks of code that do something useful. To define a function, you use the keyword def, followed by the name of the function, then any parameters you wish to include in parentheses, and finally the colon character ":". A function can also hand a result back to its caller, which you specify with the return keyword. Your little program ends up looking like this:

def add_two_numbers(addend1, addend2):
    sum = addend1 + addend2
    return sum
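
Back in the interpreter, you can check your work by calling the function yourself. A quick session might look like this:

>>> add_two_numbers(2, 3)
5
>>> add_two_numbers(40, 2)
42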

By my count, putting this together requires learning at least 8 different pieces of new information. Some of it is pretty arbitrary (e.g., using a colon to close the definition portion of the function), and some is highly conceptual in nature (e.g., a program is a collection of functions).

Now, suppose you want to teach the above lesson to someone, but you soon discover that his or her program does not work. To help diagnose why, you might ask a series of questions to figure out which memory process is at fault. Is there a problem where the information was never encoded in the first place? Has there been a long lag between the initial encoding and the first attempt to retrieve the information? If so, then there might be a problem with decay; those memories might just have faded from lack of use. Finally, it could be a problem with retrieval: prior experience with a related language, such as Java or C++, may be interfering with recall of the Python syntax.

I admit, when I wrote this up, I had to go find an old Python program that I wrote a couple of years ago because I couldn't remember which keyword to use. Namely, I was getting interference from Lisp, which uses defun instead. It is likely that I never actually encoded def because I always have some example code lying around (like pennies).
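
For what it's worth, here is roughly what that interference looks like when it leaks into actual code (the first definition is hypothetical, just to show the failure mode):

defun add_two_numbers(addend1, addend2):    # Lisp's keyword; Python rejects this line with a SyntaxError
    return addend1 + addend2

def add_two_numbers(addend1, addend2):      # the keyword I failed to retrieve from memory
    return addend1 + addend2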

The model presented above is an overly simplified picture of memory. But like most things in life, problems seem easier to solve when you have a good working model of how things should operate in an ideal setting.

Share and Enjoy! 

Dr. Bob


For More Information

[1] The idea of a permastore is extremely interesting and probably warrants a separate blog post in and of itself. Basically, it's the hypothesis that there are some memories you create that will never fade away, no matter how old you get. The hard part, of course, is verifying the veracity of those memories, especially if they are autobiographical.
