Thursday, December 25, 2014

The Myth of Multitasking: Serial Attention

"Achtung!" --U2

Cognitive Science is awesome for (at least) two reasons. First, the field likes to debate all sorts of binary questions (e.g., Does the mind use symbols or not? When reading, do we process surface features or semantic features?). Second, cognitive scientists come up with all sorts of crazy metaphors to better understand the complex inner workings of the mind. The topic today is awesome for both reasons. Early investigations into attention tried to answer binary questions, such as: Is attention parallel or serial? Does information get selected for deeper analysis early in the process or later? Researchers also came up with some pretty cool metaphors to describe attention, such as switches, filters, attenuators, and spotlights.

Before we dive in, let's make a distinction between information selection and information processing. We've all heard the phrase "selective attention" (or its close cousin, "selective hearing"). It seems that some people have an amazing ability to pay attention to only one thing at a time. For example, if your roommate is texting her friend, she might not even notice when you ask a direct question. When texting, your roommate has decided, consciously or not, to select the information emanating from her phone. Processing information, on the other hand, refers to the analysis of and response to that information, such as replying to a text message.

Now that we've laid the groundwork, let's look at the fascinating world of auditory and visual attention! 


"Were you listening to me, Neo? Or were you looking at the woman in the red dress?" --Morpheus

So we know that deep down, at its core, the attentional system is massively parallel. No matter how engrossed you are in a task, you will respond to a very loud siren and a flashing red light. Not much analysis needs to take place because your attentional system is always on high alert to keep you alive. If something threatening comes your way, odds are your attention will be captured and you will respond immediately [1].

Going beyond the loud noises and lights, the attentional system also has to help you select and process information. Most evidence suggests that there is a bottleneck somewhere in the attentional system such that, once selected, we can only process one stream of information at a time. In early auditory attention experiments, researchers asked people to listen to a recording and shadow, or repeat aloud, the message played in one ear while ignoring the message played in the other ear. You can try it for yourself here. After the task was over, the researchers asked about the information in the ignored ear. Most people could say whether the voice was male or female, and report other surface characteristics of the sound, but not much more than that (e.g., they could not report the content of the message).

A similar finding has also been demonstrated for visual attention. Here is one of the coolest demonstrations of this phenomenon. You need to experience it for yourself.


[Embedded video from Daniel Simons' Visual Cognition Lab: the selective-attention ("invisible gorilla") demonstration.]

This is a very powerful demonstration of the effect of a goal (e.g., count the number of passes) on the selection of information. It also demonstrates that we can only process one stream of information at a time. 


A STEM Example

I'll be honest, selectively attending to a single stream of information is one of the most fundamental principles of education. You can't learn what you don't pay attention to! That seems almost too simple to state, but it's easy to forget.

Knowing that we are serial processors is useful when evaluating educational applications. A great example is Duolingo, an app that teaches (or re-teaches) a second language. Each task is presented in a simple interface where the learner concentrates on only one goal at a time. Once that task is finished, the learner progresses to the next task. Again, the app keeps things simple and doesn't try to split the student's attention across too many sources of information.


Share and Enjoy! 

Dr. Bob


For More Information

[1] The lone exception I can think of is some Tibetan monks who are able to get so deep into a meditative state that they do not respond to loud noises (see Chapter 1 in Search Inside Yourself for a description).

Thursday, December 18, 2014

Better Than Soup: Chunking

"Sloth love Chunk!" --Sloth


In a previous post, we talked about the severe constraints on working memory. Early estimates of the capacity of working memory started out around seven (plus or minus two) items. That translates into looking up a phone number in the phonebook (remember those?), walking over to the phone, and dialing the number. Unfortunately, seven seems like a very low number. In fact, later estimates put working-memory capacity around four items. Four items?! But that seems crazy low. Fortunately, there is a way to expand your working-memory capacity through a process called Chunking.

Does chunking really work? If it does work, what are the limits? How far can we stretch this strategy? 


Does Chunking Work?

How do we know that the brain is able to aggregate or "chunk" information? What is the evidence? To generate some, this interesting study asked a few "volunteers" to memorize the positions of chess pieces on a chessboard [1]. There were three types of participants. The first was a world-renowned chess master. The second was an intermediate player, nowhere near the ability of the first. The third knew how to play chess, but was not ranked in any official capacity. The scientists showed them several chessboard configurations taken from the middle of a game. The twist was that some boards came from actual games, while the others had the same number of pieces placed randomly across the board. Before I tell you the outcome, what do you think they found?

As you probably guessed, the chess master's memory for the positions of the chess pieces was vastly superior to the intermediate and novice players' memories. What wasn't totally obvious, however, was how well they did relative to each other on the random boards. It turns out that they all performed about equally. This suggests that the chess master wasn't looking at individual pieces on the actual mid-game boards. Instead, he was aggregating the pieces into groups (e.g., a "castling" position). I love this study because it's an elegant demonstration of the process of chunking.


"Take It To the Limit" --The Eagles

The best answer to the question of limits comes from a study that attempted to train someone to expand his working-memory capacity [2]. Going into the experiment, the person selected to endure the rigorous training regimen was a runner, which means he was well versed in thinking about numbers in terms of running times. He was able to chunk digits into running times. For example, 4:32:8 is an average time for a men's marathon. The runner worked through many training sessions, adding more and more complex retrieval structures. By the conclusion of the study, the participant was able to correctly recall 79 digits. Impossible!

What does that mean for us ordinary mortals? First, this person wasn't special in any obvious way, which means that any one of us could also learn to memorize 79 digits if we were willing to put in the time and effort. Second, the memorization skill seemed to apply only to digits. In other words, the participant wasn't able to transfer what he learned to state capitals or other forms of information (e.g., letters). Finally, it also means that, although we have severe limits on our cognitive capacities, they can be overcome with cognitive strategies and good, old-fashioned hard work (i.e., "deliberate practice").


A STEM Example

I'll be honest. When I took Physics in college, it was brutally difficult. It was hard not because of the math (it was a non-calc version), but because each new concept seemed to arrive out of the blue. Rotational kinematics seemed to have nothing to do with linear kinematics. Sure, the forms of the equations seemed to have something in common, but they were largely taught as disconnected facts.

Fast forward several years to my post-doc. I was blessed to work with a real physicist who pointed out to me that Physics is easy because you only need to know a few "first principles." From there, you can derive many other facts. That hit me like a bolt of lightning. Once someone took the time to sit down with me and demonstrate the interconnections, Physics didn't seem so hard. I don't want to trivialize education, especially for difficult topics, but the whole process can be made simpler (and perhaps more fun?) if the material is presented as a sequence of ever-expanding chunks of information.

Let's take velocity as an example. To build up to this advanced topic, it helps to start with our intuitive understanding of speed. Most of us have ridden in cars and talked about the measurement of speed in terms of "miles per hour." Once that gets translated into a symbolic representation (s = d/t), you can then expand it to include the concept of change (i.e., delta). Now the equation becomes s = Δd/Δt. Not a lot has changed, and that's a good thing because the student needs to see the equation, not as something new, but as slightly expanded. Then you can expand the notion of the delta: Δd = d_final − d_initial. Plug this back into the equation, and you get a slightly more detailed expression. Again, each step is small and needs to be seen as a single chunk of information.
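Written out, the whole chunk sequence looks like this (the last step just substitutes the expanded delta back in):

s = d/t  →  s = Δd/Δt  →  s = (d_final − d_initial)/Δt

Each expression is a small expansion of the one before it, so no single step overloads working memory.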

Share and Enjoy! 

Dr. Bob


For More Information

[1] The chess study was conducted by a pair of researchers at Carnegie Mellon University (CMU) in the early 70s. The first author, Bill Chase, was my graduate-student advisor's late husband. I never had a chance to meet him, but he is a legend in the field of cognitive psychology. On the other hand, I did have the good fortune to take a course from the second author, Herb Simon. It was a fascinating course, and he gave probably the hardest final exam I have ever taken in my life. It had a single question: "Describe a computationally plausible model of cognition." We then had about three hours to provide an answer. 

Chase, W. G., & Simon, H. A. (1973). Perception in chess. Cognitive Psychology, 4, 55–81.

[2] Training someone to expand his working-memory capacity took 230 hours of practice! The training was conducted by K. Anders Ericsson, whom we will hear more about in subsequent posts. The original article can be found here.

Ericsson, K. A., Chase, W. G., & Faloon, S. (1980). Acquisition of a memory skill. Science, 208(4448), 1181–1182.

Thursday, December 11, 2014

Crunched for Space: Working-Memory Capacity

Mental Scarcity

This week, we're going to talk about something so fundamental to cognition that it is easy to overlook. To demonstrate the concept, let's play a simple game:

I'm going to give you a list of numbers, and your job is to repeat them back to me, in the order you saw them. Okay, maybe that's not so simple, but I know you can do it. Ready? Click "Play" to see the list:

[Embedded video: the twelve digits are presented one at a time.]




Ok, quick, what was the list of numbers!? Did you get them all? If not, don't beat yourself up. I may have been a little unfair because I threw in twelve separate digits. According to this very famous paper [1], you should only have been able to repeat back seven digits (give or take two) [2].

In other words, the amount of information that you can cram into working memory is severely limited, and we refer to that as your own personal Working-Memory Capacity. First, the bad news: the amount of information we can focus on and use at any one time is very small. Now, some good news: you can use various tricks to expand your working-memory capacity. 

One of the tricks is called Chunking. When I do this demonstration with large groups of people, there's always at least one person who can repeat back the entire list in order. How do they do it? Are they superhuman? Do they practice memorizing numbers all day long? Maybe. But the most likely explanation is that they don't see each digit as a single thing to be remembered. Instead, they focus on grouping the digits together into larger chunks of information. 

Here's the list again: 


1  4  9  2  1  7  7  6  1  9  4  2

Do you notice any patterns in the data? Let me give you a hint: Think about important dates in American history. How about now? Anything emerge? Instead of trying to remember 12 separate digits, now all you have to remember are three years: 1492, 1776, 1942. That's a lot easier, right?
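If it helps to see the recoding step spelled out, here's a minimal sketch in Python (the fixed four-digit grouping rule is the only assumption):

digits = "149217761942"
# Regroup twelve digits into three four-digit chunks (years).
chunks = [digits[i:i+4] for i in range(0, len(digits), 4)]
print(chunks)  # ['1492', '1776', '1942']

The digits haven't changed; only the unit of storage has: three chunks instead of twelve items.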


A STEM Example

How does this play out in education? The most obvious example that I can think of is when a student is trying to learn a mathematical formula to calculate something complex, like the circumference of a circle. When a math teacher introduces the idea, there are a bunch of new concepts to learn, along with the seemingly arbitrary association of concepts to symbols: C is the circumference; pi is a constant; r is the radius. Not to mention that all of these symbols need to be written with operators between them (there's also a spatial configuration). Depending on how you count, that could be five (or seven) items to hold in working memory.
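Here's the equation itself:

C = 2πr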



Once you learn the circumference equation, it becomes a single chunk of information, which makes it easier to remember. However, it's easy to forget what it was like not to know something. It's important to keep that in mind when teaching this or any concept. The first time we encounter new information, it is going to appear much more complex because it contains several small chunks of information. As you become proficient in the domain, it becomes easier to take on additional complexity due to the process of chunking.


Share and Enjoy!

Dr. Bob


For More Information

[1] Here is a link to George Miller's very famous paper: The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information

[2] Like the speed of light, the estimate of working-memory capacity is always changing. It depends on how you measure it! Some estimates put working-memory capacity at a much lower number: around four separate items. One methodology for measuring working-memory capacity is the n-back task, which is notoriously difficult. Your job is to remember the item (a digit or whatever) that appeared n turns ago. For example, if n = 2, and I give you:

3 5 7 8 x

When you see "x" you have to say "7" because it happened two turns ago. As if this isn't hard enough, there's also the murderously difficult dual n-back task. If you are feeling strong, try it yourself!
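Here's a minimal sketch of that rule in Python (the list-based representation and the function name are my own, not part of any standard task):

def n_back_answer(history, n):
    # Return the item that appeared n turns before the current one.
    if len(history) <= n:
        return None  # not enough turns yet
    return history[-1 - n]

stream = ["3", "5", "7", "8", "x"]
print(n_back_answer(stream, 2))  # prints 7: the item two turns before "x"

The real task is harder, of course, because the stream keeps moving and you have to hold the last n items in working memory the whole time.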

Thursday, December 4, 2014

Rework the Network: Semantic Networks

It's all about the Semantics



In a previous post, we talked about the power of the Associative Network. It explains several interesting cognitive phenomena, such as reminding, creative thinking, and priming. That's pretty powerful; however, there is a weakness in the Associative Network as a way of representing knowledge. It doesn't quite capture the interesting differences between the links. In my previous example, I drew a link from whale to mammal and a second link from mammal to three inner-ear bones. Should we treat these links as the same? Maybe not.

I was purposefully sloppy in the way I presented the whale/fish example. The nodes themselves came in two flavors. The first type of node was a concept (i.e., a noun). These included entities like whale, fish, dog, and cat. Then there was a different breed of node that described those concepts (e.g., adjectives). These included such modifiers as 3 inner-ear bones, fur, nurse young, and give birth to live young. The network would be so much more useful if the links between these two types of nodes were labeled differently. Why is that the case?





One reason is that we can use the network to make some pretty interesting inferences. Going back to our whale example, if I know that a whale is-a mammal, and I also know that a mammal has three inner-ear bones, then I can infer the following fact: "A whale has three inner-ear bones." Nobody has to tell me that fact directly. Instead, I can use the labeled relationships in my network to derive or infer it. Thus, a Semantic Network is a node-link representation of knowledge where the links have meaningful labels.



A STEM Example



This is a pretty powerful idea for education because it means (at least) two things: 


Number 1: You don't have to tell your students every little fact. Instead, you can let them discover these facts for themselves. Not only is the process of discovery more enjoyable for the learner, it also leads to more robust learning (a topic for another time!). 


Number 2: Thinking in terms of a Semantic Network might also help structure the presentation of ideas in class. For example, it might help map out all of the relations between geometric objects: 


  • A square has four equal sides. 
  • A rhombus has four equal sides. 
  • Therefore, a square is-a rhombus. 

Mapping out these relationships explicitly can help students visualize and understand the distinguishing characteristics between different entities. It also (implicitly) teaches a meta-cognitive strategy of mapping out information in a hierarchical manner, which is easier to memorize. 



Share and Enjoy! 


Dr. Bob



For More Information

Setting up and maintaining a Semantic Network can be an actual career! I like to refer to this as "knowledge engineering." As a knowledge engineer, you get to think about and explore the various types of objects in the world (i.e., the nodes) and their properties (e.g., has and is-a relationships). The ultimate goal is to create a system that can either teach existing knowledge or make new discoveries.

The hard part is figuring out a way to represent the nodes and (labeled) links in a way that a machine can read and understand. We call these "propositions," which can take pretty much any format. Here's an example of a format that I made up:


WHALE [ has("blowhole"), has("fins"), is-a("mammal") ]
MAMMAL [ has("3 inner-ear bones"), has("fur"), has("live birth"), is-a("animal") ]

Once you figure out the machine-readable representation, you can then develop a reasoning engine (also not a trivial task) and feed it the propositions. Voila! The reasoning engine can spit out conjectures that you never considered before because it uses chains of logic to derive new concepts and ideas.
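To make that concrete, here's a minimal sketch in Python of the propositions above, plus a one-rule "engine" that inherits has properties through is-a links (the dictionary encoding and the inheritance rule are my own assumptions, not any real system's):

knowledge = {
    "WHALE":  {"has": ["blowhole", "fins"], "is-a": ["MAMMAL"]},
    "MAMMAL": {"has": ["3 inner-ear bones", "fur", "live birth"], "is-a": ["ANIMAL"]},
}

def derived_has(concept):
    # Collect the 'has' properties of a concept, inheriting through is-a links.
    entry = knowledge.get(concept, {})
    facts = list(entry.get("has", []))
    for parent in entry.get("is-a", []):
        facts.extend(derived_has(parent))
    return facts

print(derived_has("WHALE"))
# ['blowhole', 'fins', '3 inner-ear bones', 'fur', 'live birth']

Notice that the engine "discovers" that a whale has three inner-ear bones, even though no proposition says so directly.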

There are several projects that are attempting to make this happen. Check them out:

  1. Viv
  2. Cyc
  3. WordNet
  4. The Semantic Web


Thursday, November 27, 2014

Lies My Turkey Told Me: Tryptophan

Learning By Doing

[Image: the molecular structure of L-Tryptophan.]

This week's activity is pretty easy. Grab a turkey leg (or spirulina or cashews if you're vegan), and take a big bite. Now wait while your body processes the food. Make a mental note of the time of day, any other food you may have eaten, and how alert you feel.

From Gut to Grey Matter

In this special holiday edition, I wanted to cover a topic related to Thanksgiving. The choice was obvious. I need to blog about Tryptophan! I'm sure you've heard the following explanation for why we get sleepy after a massive Thanksgiving feast. Turkey is full of proteins, fatty acids, and nutrients. In particular, turkey is said to contain an unusually large amount of Tryptophan, which is an essential amino acid. Amino acids, as you may recall, combine with other compounds to create proteins that your body can use.

It turns out that Tryptophan is a chemical precursor for other neurotransmitters. The body isn't able to produce its own Tryptophan, so it must be supplied by the food that we eat. Enter the drumstick. Tryptophan is then chemically altered by the body to form Serotonin, a neurotransmitter that serves many different functions, including the regulation of mood (it is associated with feelings of contentment and happiness) and sleep. Serotonin, in turn, can be converted into Melatonin, a hormone the brain sends to the body to signal that it's time to go to bed [1].

In a nutshell, here's the chemical chain of events:


Tryptophan → Serotonin → Melatonin

Tryptophan is a precursor for Serotonin, which gets converted to Melatonin, which then tells the body to go to sleep. That seems logical. Turkey makes us sleepy because it fuels this bio-chemical waterfall.


Armchair Neuroscience: Getting off the couch

Ah...but there's a huge problem with this explanation. Actually, there are a couple of problems. 

First, feeling sleepy after a large Thanksgiving dinner is hopelessly confounded with the time of day at which people typically eat. The body has a natural rhythm (called a "circadian rhythm"): there are times of day when people feel wide awake and alert, and other times when we feel sleepy and tired. For most people, there's a natural dip in the afternoon (siesta, anyone?).

Second, digesting a large meal containing a glut of protein is a taxing process for your body. Blood is redirected from your extremities to your gut to aid digestion and to carry off the newly absorbed nutrients. What better way to pass the time than dozing off for an hour or so?

Finally, the chemical process outlined above, while true, tends to take some time. I would be shocked if this chemical conversion process happens between the time you push away from the table and pass out on the couch. 

The S.T.E.M. Connection

Wouldn't it be cool if we were teaching a chemistry class, and we could synthesize melatonin in the lab? A more likely connection might be to ask a science class to use the scientific method to confirm or disconfirm a hypothesis that everyone seems to take for granted (e.g., "turkey causes people to become sleepy because it contains tryptophan"). 

That could also lead to an interesting discussion about applying critical thinking to claims that sound "scientific." We could then discuss the following questions: 
  • How do you know turkey causes sleepiness? 
  • What other foods cause people to become tired? 
  • Which other foods contain tryptophan but don't generally induce sleep? 
  • How do you counteract alternative explanations, such as time-of-day effects (e.g., circadian rhythms) and other confounding factors (e.g., the size of the meal or the amount of protein)? 
I admit...I thought that I was going to end up blogging about how turkey makes people sleepy. Then again, I believe all sorts of lies that my teachers, family members, TV shows, movies, and friends have taught me. I just need to remember to keep asking myself, "How do I know that it's true? What's the evidence? Is it any good?"


Share and Enjoy!

Dr. Bob


Going Beyond the Information Given

[1] It's useful to distinguish between hormones and neurotransmitters. A neurotransmitter is a chemical that cells in the brain use to communicate with other cells or regions of the brain, whereas a hormone is how the brain communicates with the rest of the body.

Thursday, November 20, 2014

Priming the Pump: Semantic & Perceptual Priming

Musicum Revelio!

I have this amazing app installed on my iPad, called Shazam. It can identify pretty much any song that's ever been recorded. I have no idea how it works. As far as I'm concerned, it operates on magic. I shared my theory with a colleague, and he said it probably uses an algorithm called nearest neighbor. Okay, so maybe magic is the wrong explanation. Instead, the app uses a collection of features (e.g., beats per minute, key, and vocal range) to classify the song. That got me thinking...our brain does a similar bit of magic every day, all the time! Any time you see someone you know or recognize their voice, you just did a quick bit of classification based on very little information. Not only that, it happens almost instantaneously. The brain is super fast at classification.
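Here's a minimal sketch of the nearest-neighbor idea in Python (the feature set and the two-song catalog are made up; Shazam's real features are surely fancier):

import math

def nearest_neighbor(query, catalog):
    # Return the song whose feature vector is closest to the query.
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(catalog, key=lambda song: distance(catalog[song], query))

# Hypothetical feature vectors: (beats per minute, key, vocal range).
catalog = {"Song A": (120, 1, 12), "Song B": (90, 5, 24)}
print(nearest_neighbor((118, 2, 13), catalog))  # prints Song A

Classification, on this view, is just finding the stored example that most resembles the incoming information.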

If our brain is so freakin' awesome, why does it blow it every once in a while? Have you ever had the experience of recognizing someone's face, but you have no idea what the person's name is or any biographical information about them? You know that you know them, but you don't know how you know them. It usually happens when you see that person out of context. For example, you may see a coworker, but they are at the grocery store instead of the office. It takes longer to recognize them out of context. Why is that the case? 


"Lake Superior! That's the answer to the first question!" —Lane Maxwell

Our mind uses a trick to speed up the processing of incoming information. It boosts efficiency by working within the same semantic space. In other words, classification works better when you know where to look. For example, in the gameshow Wheel of Fortune, they always give the contestants the category before they start solving the puzzle. The categories are often vague (e.g., "before and after" or "around the house"), but at least you don't have to comb through your entire body of knowledge to identify the words. In other words, knowing the category primes you to think about certain things. Eliminating huge swaths of information helps increase processing speed. Thus, Priming is the phenomenon whereby early information speeds up the processing of later information [1].

How does priming work? If you've been reading this blog, then you can probably anticipate my answer. The Associative Network and Spreading Activation can help explain how priming works. Once a node in the network is active, activation will spread to its nearest neighbors first, and then radiate out to other, related concepts. I would predict that you would be able to identify the song ABC quickly when, right before the song came on, we were talking about the Jackson family. Indeed, this makes intuitive sense. Getting in the mindset helps you become more accurate and faster at processing new information. 

Another example appeared in a previous post, where I attempted to use priming to help the reader solve a puzzle. The goal is to find the common connection between three unrelated words: blue, cake, and cottage. Later on that page, I had a picture of a piece of Swiss cheese. My hope was that the image would prime the reader to figure out that the connection between blue, cake, and cottage is cheese. Priming can be our friend.

There is, however, a downside. Priming doesn't work at a conscious level. In other words, priming happens outside of our awareness. Why would this be a bad thing? One reason why it can be a disservice is that we don't always give proper credit, or attribution, to the source of our ideas. We may think we are being creative and coming up with our own ideas. But as this amazing video demonstrates, that isn't always the case [2].


A STEM Example

How can we use the idea of Priming to help enhance education? It's tough to exploit, mainly because priming operates outside of our conscious awareness. However, a creative educator might engineer a lesson so that she can prime students to answer a logical chain of questions. 

Our stats teacher in college did something like this. We didn't know it at the time, but the lesson was about calculating the standard deviation of a sample. Instead of putting the formula on the board and asking us to memorize it, he started by talking about something (seemingly) unrelated: linear transformations. How would you shift the mean of a set of data, represented as a vertical line, up or down the x-axis? Once we got good at that, he asked us another question: How far away, on average, is each point from the mean? That got us thinking about the spread of the data, and he drew upon something we already knew: how to compute the mean. Finally, we noticed that the differences between the data points and the mean were both positive and negative, so we had to figure out a way to standardize that.
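For reference, here's the formula we were building toward, assembled from those smaller chunks (x̄ is the sample mean and n is the sample size; squaring is the step that "standardizes" the positive and negative differences):

s = sqrt( Σ(x_i − x̄)² / (n − 1) )

Presented all at once, it's a pile of arbitrary symbols; built up question by question, each piece is something the student already knows.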

Asking students leading questions, and allowing them to explore the problem space in a structured way, is a good way to exploit the power of priming in education. I am curious to hear in the comments section other ways that we can harness the positive power of priming in education. 

Share and Enjoy! 

Dr. Bob


For More Information

[1] Priming is a very cool concept, but how do we know it's real? What is the scientific evidence that convinced the field that priming is a property of the mind? Early evidence came from a study that asked people to judge whether a string of letters was a word (butter) or not (plame). When two words were semantically related (e.g., "butter" and "bread"), participants were faster to respond than when the words were unrelated (e.g., "butter" and "nurse"). Of course, we're talking about a difference of 85 milliseconds, but still! That study gave us early evidence that priming was real.

     Meyer, D. E., & Schvaneveldt, R. W. (1971). Facilitation in recognizing pairs of words: Evidence of a dependence between retrieval operations. Journal of Experimental Psychology, 90, 227–234.


[2] Other real-life examples abound in Walter Isaacson's fascinating book The Innovators. His historical analysis of the computer demonstrates time and again that inventors downplay the influence of talking with other people or of looking at other people's prototypes. When a lot of money is at stake, people conveniently ignore the impact of priming on their inventions.



Thursday, November 13, 2014

Covering the Spread: Spreading Activation


"Oh, rats!" —Indiana Jones

Let's play a game. What do the following three things have in common?   


blue       cake       cottage

One is a color, the second is a type of dessert, and the last is a little house in the woods. It doesn't seem like they have much in common. But there is one concept that binds them together. Keep thinking about it. Or don't! Sometimes the best way to see the connection between (seemingly) unrelated things is to leave it alone and let your mind engage in some background processing.

Switching gears a moment...let's talk about what it means to be "reminded" of something. For example, I had lunch with one of my coworkers, and she told me about this man who randomly stopped by her house when she wasn't home, and he left some candy on her front porch. Her story reminded me of a movie that my wife and I recently watched about a guy who wants to be a freelance journalist. So wait a minute...What does a guy leaving candy at my friend's house have to do with a movie about journalism? Well, the movie is entitled Nightcrawler, and the connection I saw was "anti-social behavior" (or maybe even "mental illness!"). This type of thing happens all the time in conversation. Something one person says reminds another person about a completely different topic. How does that happen?


Back to the Network

One potential explanation is to return to an idea that was introduced in a previous post. I made the claim that an Associative Network is a very powerful way to represent someone's knowledge. It is powerful because it can explain other cognitive phenomena, such as "reminding." When we say something "reminds" us of something else, what are we talking about? And how can we use that information to map someone's knowledge?

One of the properties of a network is called the "connection strength" (or proximity) between two concepts (or nodes). For example, apples are strongly associated with bananas because they are both types of fruit. But apples are only very remotely associated with the Kentucky Derby. (It's a long walk, but you can imagine the following chain of associations: Horses eat apples, which give them energy to run, and people like to watch horses race at the Kentucky Derby.) 

That means something can remind us of another thing either through the strength of the connection between them or through the number of hops needed to connect the two concepts. Back to the original question: According to this theory, how does "reminding" work? The theory states that each node is connected to one (or many) other nodes. When a node becomes active, due to some stimulus in the environment, activation spreads throughout the network of ideas. Back to our fruit example: if I see an apple, then activation spreads out to other fruit, including bananas, and continues to radiate outward to other concepts. Spreading Activation, then, is the idea that one node becomes active, which activates another node, which then activates a third node, and so on until the activation dies out.
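Here's a minimal sketch of that idea in Python (the tiny association graph, the decay rate, and the cutoff threshold are all my own assumptions):

from collections import deque

def spread_activation(graph, start, initial=1.0, decay=0.5, threshold=0.1):
    # Each node passes a decayed share of its activation to unvisited neighbors.
    activation = {start: initial}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        passed = activation[node] * decay
        if passed < threshold:
            continue  # the activation has died out
        for neighbor in graph.get(node, []):
            if neighbor not in activation:
                activation[neighbor] = passed
                queue.append(neighbor)
    return activation

associations = {"apple": ["banana", "horse"], "horse": ["Kentucky Derby"]}
print(spread_activation(associations, "apple"))
# {'apple': 1.0, 'banana': 0.5, 'horse': 0.5, 'Kentucky Derby': 0.25}

Seeing an apple strongly activates banana, and only weakly (two hops later) the Kentucky Derby.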


A STEM Example

I like the idea of an associative network of ideas because, as an educator, you can start to bootstrap your lessons based on what your students already know. A perfect example is Newton's Law of Universal Gravitation, which is summarized by the following equation:


F=(G*m_1*m_2)/r^2


F represents the force between two objects (e.g., the sun and the Earth), G is a constant, m_1 and m_2 are the masses of the two objects, and r is the distance between them. It is extremely helpful to know this particular equation when students later learn Coulomb's law, which describes the force experienced by two charged particles:
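F = (k*q_1*q_2)/r^2

Here k is a constant, q_1 and q_2 are the charges of the two particles, and r is the distance between them.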


Notice anything? There are subtle differences between the two laws, but the overall structure of the equations is remarkably similar. In fact, when teaching Coulomb's law, it is helpful to ask the students if they are reminded of anything from their previous lessons.


Back to the Beginning

I opened this post with a "game." Its origin isn't a game, but a test of creativity called the "Remote Associates Test" (or "R.A.T." for short). The idea is that creative individuals have many connections between nodes, so when activation spreads, it reaches remote parts of the network. This makes intuitive sense because creative people are often described as "divergent thinkers." Now we have a way to visualize what divergent means. Maybe we can even train ourselves to be more creative by not stopping at the first thing we are reminded of. Instead, force yourself to keep activating other parts of your associative network.

Share and Enjoy!

Dr. Bob



For More Information


You can test your creativity by taking the RAT here. Also, remember the overused phrase, "think outside the box"? Ever wonder where that came from? According to internet lore, its origin is the "nine-dot problem," where you have to connect all of the dots with only four lines. Try it!




Thursday, November 6, 2014

Work the Network: Associative Networks

Contradictions in Memory

Our intuition about how memory works says that you can only remember a couple of things at a time, right? For example, if I start rattling off a grocery list, you might want to start jotting things down after I list the fourth fruit or vegetable. 

So here’s the conundrum: Why does memory get better when we start adding additional information? That sounds like a contradiction, right? Absolutely! But there’s a good reason why it works, and it has everything to do with the way memory is structured. 

Our memory system is a fascinating knot of complementary (and often contradictory!) mechanisms. We need these different systems because our environment is complex: we are confronted with many different tasks that involve many different sources of information. If you have a quick task that will only take a few seconds, then you need a fast memory system that inhales information and spits it out quickly. However, most of the interesting things that we do require us to remember something over a long period of time. You might call that "learning."

How, then, can we enhance our learning? How can we make sure the information that we see or hear gets cemented in long-term memory? One memory hack is to start adding all sorts of details that will help enhance the memory that you want to form. Here’s an example from my own life. 

What's in a name?

A few weeks ago, I met one of my new coworkers. I had no problem remembering her first name, but her last name escaped me. It’s embarrassing when you can’t remember someone’s name, even when you try. I needed help, and here’s what I came up with. 

I am a hockey fan, and in college I started following the Detroit Red Wings. They have a history of recruiting promising players from other countries. While these players might not shine during their first year, the Red Wings sign them for extended contracts and commit to developing their talent. A perfect example is when the Red Wings signed Pavel Datsyuk in 2001. 

So what does a forward for the Red Wings have to do with remembering my coworker’s name? Well, the first five letters of her last name are “Pavel” (plus some additional letters at the end). In effect, what I did was add a bunch of seemingly irrelevant information to help me remember her last name. I made an effort to embed her name in a larger network of information. Moreover, when I try to recall her name, I have several hooks to get me to the right name. I can think about the field of Cognitive Science, Hockey, or work, and all routes should lead me to the desired destination. 

Why does that work? Or, said another way, what does the structure of long-term memory look like? I have no idea, mainly because it is so fluid and multi-faceted. However, one way Cognitive Scientists have attempted to visualize the complexity of our memory is to use a node-link structure called an Associative Network. A small portion of my network probably looks like this:

[Image: a node-link diagram connecting nodes such as Hockey, Cognitive Science, John, and Chas.]
Each node is a concept, and a link between two nodes is an "association." In other words, each concept reminds me of the other nodes connected to it. For example, when I think of John, I am reminded of Chas (and vice versa). The degree to which concepts are connected also matters. The connection between Hockey and Cognitive Science is remote, so they shouldn't remind me of each other.

A STEM Example

This has obvious implications for education. For example, suppose you were teaching a biology class to a group of young children. They know the definition of a "mammal," and they can give many examples (e.g., cats and dogs) and counterexamples (e.g., birds and fish). When they first learned about mammals, they learned that mammals have a couple of defining characteristics: they breathe air; they have fur or hair; they have 3 inner-ear bones; they give birth to live offspring; and they nurse their young. Most kids at this age, however, incorrectly classify a whale as a type of fish. That means they think whales don't have hair and don't give birth to live young (or nurse them, for that matter!).

In essence, what you have to do as a teacher is break one of the links in their Associative Network and move it over. Thus, learning might look like this:


[Image: whale linked to fish] ==becomes==> [Image: whale linked to mammal]


An Associative Network representation helps demonstrate the importance of prior knowledge on learning. It also helps explain other cognitive phenomena like priming, cued recall, and spreading activation (all of which will be the topics of future posts). 

Share and Enjoy!

Dr. Bob


For more information

Here is my favorite empirically derived network. It depicts an expert child's representation of her dinosaur knowledge. You can see that some dinosaurs hang together tightly, while others are more remotely associated. The number of links between two animals also shows the strength of the connection between them.


Used with permission from the author.


Source: Chi, M. T. H., & Koeske, R. D. (1983). Network representation of a child’s dinosaur knowledge. Developmental Psychology, 19(1), 29–39.

Thursday, October 30, 2014

Welcome to Dr. Bob's Cog Blog

In grad school, we had a saying, "Science is hard." When I left academia to join the dark side (aka "industry"), I also realized that "Marketing is hard." Taken together, it would be fair to say that "Marketing science is insanely hard." Many have tried, and a few have even succeeded.

Mission and Vision

My goal for this blog is to make Cognitive Science relevant, useful, and understandable (and maybe even a little fun!). That's the goal because I work with a bunch of really great math teachers. They are in the trenches, teaching other math teachers how to teach. It's my belief, shared by others, that teaching improves when you add an understanding of how the mind works.

This belief comes from a couple of sources. First, I went to grad school at an amazing place called the "Learning Research and Development Center" at the University of Pittsburgh. We lovingly referred to it simply as "LRDC." While working and studying at LRDC, I met some of the smartest scientists and educators on the planet. They were all working toward the common goal of understanding how people learn, with an eye toward creating interventions that fed on that understanding. I also had the privilege to teach a course called "Cognitive Science for Non-majors." That class is where I draw the inspiration for this blog: I had to explain heavy-duty scientific concepts in a way that was understandable to my students.

In the posts that will follow, I will attempt to address the question: How can the theories and empirical findings from Cognitive Science help inform instruction and the design of educational technologies? The goal is to present possible answers to this question in the context of everyday examples, and (hopefully!) connect back to the STEM (science, technology, engineering, and math) disciplines. 

Inspiration

Okay, so what's with the name? First of all, I must give credit where credit is due. My wife, Leslie, came up with the name. It's a mashup of two pop-culture references. The first is the beloved Dr. Bob from the Muppets. I earned my PhD in 2005, but I never really felt comfortable with the distance that "Dr. Hausmann" created. Instead, I like the more fun reference. Second, the title of this blog is an homage to Bob Loblaw's Law Blog from Arrested Development. Thus, I give to you Dr. Bob's Cog Blog.

Share and Enjoy!

Dr. Bob