
Thursday, August 4, 2016

The Future Is Now: Preparation for Future Learning

Learning By Doing

I have an assignment for you. A family of bald eagles was spotted on a hillside in an adjacent neighborhood. Your task is to set up a refuge that will keep them safe from poachers, crowds, and other impediments to nesting. As you start this exercise, what do you already know that will help you solve this problem?

Why would I give you this assignment? Suppose your boss came up to you and said, "I need you to construct a plan for an eagle sanctuary. Have it on my desk by the end of the week." What would your reaction be? Mine would be, "Um, that's not my job. I don't know anything about eagles, sanctuaries, or how eagles raise their babies." I would protest and generally lobby to be taken off this project.

Now, consider what it's like to be a kid in school. Teachers assign their students projects from wildly disparate domains. Regardless of what kids know or don't know, they have to oblige their teacher (or face the consequences). Students can't complain that this is "not their job" or that they "don't know anything about eagles." They have to dive in, ask a ton of questions, and try to learn as much as possible.


Hold Please While I Transfer You.

In a previous post, we defined transfer as the application of knowledge from one setting to another. For example, you might learn how to calculate the length of a hypotenuse of a right triangle in math class. While working on building a shed with your Dad, you have to calculate the length of the roof. You need to recognize that your knowledge from math class applies to this construction scenario. Being able to do so would be a great example of positive transfer. 
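The roof calculation is just the Pythagorean theorem applied outside of math class; here is a quick sketch in Python (the run and rise values are made up for illustration):

```python
import math

# The roof is the hypotenuse of a right triangle formed by the horizontal
# run and the vertical rise of the shed (made-up numbers for illustration).
run = 4.0    # feet
rise = 3.0   # feet

roof_length = math.sqrt(run**2 + rise**2)  # a^2 + b^2 = c^2
print(roof_length)  # 5.0
```

Recognizing that the classroom formula applies to the shed is exactly the positive transfer described above.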

Unfortunately, far transfer is rare. People don't see how their knowledge applies in many situations. For example, when I was working on my book, I found out that I had to remove all my links to a certain online bookstore. The software I was using didn't allow me to search for the contents within the hyperlinks. I was lost and didn't know how to solve my problem. A few days later, it occurred to me that I do a similar task all the time. I just needed to use grep, which is a command-line search tool. Once I made the connection, I felt like an idiot because the knowledge for solving this problem was always right there, inside my head. I forgot what I knew because I didn't recognize its application to the new problem at hand!
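For the curious, grep searches the raw text of files, including the markup hidden inside hyperlinks; a minimal sketch (the file name and bookstore domain here are stand-ins, since the post doesn't name the real ones):

```shell
# Create a sample chapter file containing a hyperlink, then use grep to
# find which files mention the bookstore's domain. "examplebookstore.com"
# is a made-up stand-in for the real domain.
printf '<p>Buy it <a href="https://examplebookstore.com/b1">here</a>.</p>\n' > chapter1.html

# -l lists the names of matching files; -r would search a whole directory.
grep -l "examplebookstore.com" chapter1.html   # prints: chapter1.html
```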

The literature on learning is rife with similar examples of transfer failure. So is it a problem with our students? Or is it a problem with the theories on transfer in the literature? The answer is probably a little bit of both [1].

Why Is Transfer So Hard?

Let's make a distinction between two different theories of how to optimize transfer. Under the direct application theory of transfer, we must do something new in the absence of any other resource; for example, the student isn't allowed to look in a textbook, search the web, or phone a friend. Under the preparation for future learning theory of transfer, the student is given an interim learning opportunity, such as an example to study or a related problem to solve. The intermediate step between the initial learning opportunity and the target transfer material should make the student more prepared to master the new problem (see Figure 1).


Figure 1. Two different theories of transfer.

To compare these two theories, Dan Schwartz and Taylor Martin conducted a study contrasting two types of instruction [2]. The first type asked students to invent a procedure for assessing the accuracy of a pitching machine. In other words, the scientists wanted students to struggle with inventing a mathematical formula that captures variation in a set of data. After they struggled, the teacher presented the "real" solution. This was contrasted with the "tell-and-practice" type of instruction, where the students first heard a lecture and then were asked to practice applying the mathematical procedure.
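The kind of formula students were asked to invent resembles standard measures of spread; here is a minimal Python sketch (the pitch data are hypothetical and the specific formula from the study isn't reproduced here):

```python
# Two pitching machines: each list holds the distance (in inches) of each
# pitch from the target. Which machine is more reliable?
machine_a = [2, 3, 2, 3]   # hypothetical data, not from the study
machine_b = [0, 0, 1, 9]   # same average miss, but wildly inconsistent

def mean_deviation(distances):
    """Average absolute distance from the mean: one way to quantify spread."""
    mean = sum(distances) / len(distances)
    return sum(abs(d - mean) for d in distances) / len(distances)

print(mean_deviation(machine_a))  # 0.5
print(mean_deviation(machine_b))  # 3.25
```

Both machines miss by 2.5 inches on average, so students have to invent something beyond the mean to capture how much the pitches vary.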

Ordinarily, this would have been a standard test of the direct application theory of transfer, so they added a twist. On the post-test, they included a worked-out example that was related to the far transfer problem. The worked example served two purposes. First, it represented the resource for an intermediate learning opportunity. Second, it allowed them to evaluate the preparation for future learning theory. So what did they find?

The results of their experiment are captured in Figure 2. The right side of the figure demonstrates that the students in the Tell-and-Practice classroom did not benefit from the worked example. However, if we contrast that with the left side of Figure 2, we can see that the pattern of results for the Invention-Based instruction was different. The students who had the opportunity to invent a formula for variation were better able to take advantage of the worked example. Their performance on the transfer problem was much better relative to all the other students. This pattern of results lends empirical support to the preparation for future learning theory of transfer.


Figure 2. The learning differences between two types of instruction combined with the opportunity to learn from a resource on the test.



The STEM Connection

How does this play out in education? Going back to the eagle example, if we gave this assignment to both children (5th graders) and adults (college students), they would probably both do poorly. It's not really their fault because we just sprang it on both of them. Neither group's prior learning would help them transfer their knowledge to this task. Their poor performance on this type of assessment would be consistent with the direct application theory of transfer.

What if, however, we asked the two groups to generate questions that they would like to have answered so that they could successfully complete this task? By this metric of transfer (i.e., evaluating the questions each group asked), the adults blew the kids out of the water. The 5th graders were more focused on local features of the eagles; whereas, the adult questions demonstrated an appreciation for the inter-relationship between the organisms and their environment.

Example Questions: College Students
  • What type of ecosystems support eagles?
  • Do eagles have predators? How about their babies? 
  • What are some man-made threats to eagles?
  • What kind of experts are needed for the refuge?

Example Questions: 5th Graders
  • How big are eagles?
  • What do they eat?
  • Where do they live?
  • How do they take care of their babies?

In other words, on a "preparation for future learning" measure of transfer, the adults did quite well. They were able to ask the right questions about eagles because they could draw on their general knowledge of biology. Asking the right questions should, in turn, help them find the right answers.

It's still true that transfer is hard. However, we should construct our learning opportunities such that "preparation for future learning" is taken into account. This also has implications for assessment because solving problems in a vacuum might not be the gold standard. Instead, as educators, we might be more interested in making sure students learn how to learn, and structure our assessments accordingly.


Share and Enjoy!

Dr. Bob

Going Beyond the Information Given

[1] Bransford, J. D., & Schwartz, D. L. (1999). Rethinking transfer: A simple proposal with multiple implications. Review of Research in Education, 24, 61-100.

[2] Schwartz, D. L., & Martin, T. (2004). Inventing to prepare for future learning: The hidden efficiency of encouraging original student production in statistics instruction. Cognition and Instruction, 22, 129-184.

Thursday, September 17, 2015

I Work Out!: Brain Training

Editorial Note: I'm really excited about this week's topic because we are going to hear from a very good friend of mine, Dr. Jason Chein. In today's post, our guest writer is going to discuss a highly controversial and extremely interesting topic: brain training. Dr. Chein has conducted research in this area [1-3], which is why I'm so excited that he agreed to write this week's post. Take it away, Jason!

Is it time to hit the gym…for your brain? 

With so many advertisements and pop-culture books claiming that you can achieve a “smarter you” in just a few minutes a day of “brain training,” you might be thinking about hitting the cognitive gym. But don’t start strapping on your brain workout gear just yet. In today’s post I’ll take you through a brief history of some brain training research, and tell you about where the field stands today. (Hint: it’s not ready for primetime.)

If you were going by what had been the conventional wisdom in experimental psychology for the last several decades, then brain training — engaging in regular mental exercises that are intended to enhance your general cognitive functioning — would seem like a pretty silly idea. Just about all of the research from the mid-1960’s up through the turn of the millennium indicated that, while you could get incredibly good at just about any task (even really demanding ones) with enough practice, the benefits would be observed only for that specific task, and wouldn't transfer to other mentally challenging activities. Take for example the seminal work of Chase and Simon (1973) exploring the amazing memory of chess experts [4]. With just a few seconds to glance at the arrangement of pieces on the chessboard, advanced chess players can reconstruct the position of nearly every piece. Pretty impressive stuff! But, that’s true only if the pieces are in positions that “make sense” in the course of actual game play. If you change things up so that the pieces are placed randomly on the board (not in positions that would occur in a real game), then the experts’ memory drops to near novice levels (see for yourself in this video posted by psychologist Daniel Simons). And, it turns out that playing all that chess doesn't make someone generally smarter than others, or any better at problem solving in other situations. All those thousands of hours of practice and all it’s good for is beating someone at chess? Yep.


Core Strength

In the ensuing years, many psychologists have tried to find a mentally engaging activity that would leave a bigger footprint on the landscape of cognitive functioning, but time and time again the results suggested that practice with a given skill just doesn't transfer to other skills. So, you'd think everyone would have given up on the idea of brain training long ago (and many had). But in the early 2000's a new(ish) idea started to gain some traction. What if, just as performance with many physical activities can be enhanced by focusing exercises on "core" musculature, intellectual functioning could be generally improved by focusing mental exercises on "core" cognitive systems? Makes sense, right? And, based on a large body of prior behavioral experiments, and corroborating neuroimaging studies, researchers had a pretty good idea what some of those "core" cognitive abilities might be. One that seemed especially promising was working memory, the topic of an earlier post from Dr. Bob. Working memory is supposed to serve as a general workspace for the mind, and a slew of studies show that individual differences in working memory capacity can explain why some people excel while others lag behind on a very wide range of cognitively demanding tasks. If the capacity of this general mental workspace could somehow be expanded, perhaps through repeated exercises that target working memory, this could have a profound impact on overall intellectual functioning!

With this basic idea in mind, a few pioneering researchers decided to throw caution to the wind and to try their hand once again at the brain training enterprise. And, to many scientists' great surprise (especially those who were pretty settled on the conclusion that practice just doesn't transfer), the early results looked really promising. First came a pair of studies showing that training focused on working memory and other executive processes was effective in improving cognitive performance among kids diagnosed with ADHD, and it turned out, even among the healthy kids and college students who had been included as the comparison groups in those studies [5, 6]. Those exciting early results inspired another study [7] that really captured the imagination of the field, showing that scores on a test of general fluid intelligence (the closest thing we have to an index of someone's general intellectual ability) were improved by working memory training, and in a dose-dependent fashion (more training = more improvement). At the time that paper was published, I was myself already engaged in another working memory training study [1], which ultimately showed that a month of training could enhance both attention control and reading comprehension in college students (we looked, but didn't find any evidence of improved fluid intelligence in this group).


Drinking From the Firehose

What started as a trickle of papers on working memory training soon turned into a deluge. Study after study seemed to be finding the same basic thing: that mental exercises targeting core functions of the mind (not just working memory, but also other “executive” and attentional functions) could produce meaningful transfer to important intellectual abilities. Yay, brain training works!…right?

Well, that depends on who you ask and what you mean by “works.” This is where the story gets interesting (and complicated, but don’t worry, I’ll keep it simple). After some of the initial excitement wore off, reports of failed replication attempts and null results (studies showing no benefits of training) started to come in. Others trying to reproduce the most impressive findings, like the gains in fluid intelligence and improvements in ADHD symptoms, weren’t always meeting with as much success. It seemed like the field was dividing into camps: let’s call them the ‘believers’ and the ‘doubters’. The doubters were understandably worried about failed replications, and raised some really important concerns about the methods used in earlier studies (like whether the groups that completed training and those that didn’t just had different expectations about how they should perform, similar to the placebo effect that can arise in drug studies). The believers kept at it, improved their studies to address the doubters’ concerns, and, even with these more careful measures in place (e.g., better control groups), many of their studies continued to produce exciting results.

So, which camp is right, the believers or the doubters? In situations like this we need to take a step back and look at the overall pattern and weight of the evidence. One way to do that is through meta-analysis – pull all of the relevant studies together, account for the size of the study sample (how many people participated) by giving more weight to larger studies, and then look to see where the "truth" lies. But here too the doubters and believers come to different conclusions. That's because the answer you get depends on which specific studies you think should count, which methods you use to gauge the size of the training effect produced by each study, and most importantly, which behavioral outcomes you decide to focus on. There isn't much debate about the benefits of training on tasks that are really similar to those that made up the training regime (we call these "near transfer" measures). In general, training does seem to improve performance on closely related tasks. So, if by "works" you mean "makes you better able to remember lists of things" (and indeed, that might be an important skill in some scenarios), then yes, it looks like training works. But does it boost your IQ, sharpen your attention, and improve your overall cognitive acumen (does it lead to "far transfer")? I'd say we just don't know yet. While there is some evidence that it can do these things, the overall body of evidence isn't unequivocally favorable. But on the flip side, there also isn't enough evidence that it doesn't work (getting a little technical here, Bayes factor analysis suggests that there is neither enough evidence to accept the claim nor to reject it). So pick your favorite metaphor – the jury is still out, the dust hasn't settled, the waters are still too muddy – and maybe wait until the next New Year before you make a brain training resolution.
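The meta-analytic logic described above can be sketched in a few lines of Python; this toy example weights each study's effect size by its sample size (real meta-analyses typically use inverse-variance weights, and these numbers are made up):

```python
# A toy fixed-effect meta-analysis: pool several studies' effect sizes,
# giving more weight to larger studies. All numbers are hypothetical.
studies = [
    {"n": 20,  "effect": 0.80},   # small study, big training effect
    {"n": 150, "effect": 0.10},   # large study, small effect
    {"n": 60,  "effect": 0.30},
]

total_n = sum(s["n"] for s in studies)
weighted_effect = sum(s["n"] * s["effect"] for s in studies) / total_n
print(round(weighted_effect, 3))  # 0.213
```

Note how the small, impressive study barely moves the pooled estimate once the large null-ish study is weighted in, which is exactly why the believers and doubters can look at the same literature and disagree about which studies should count.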


About the Author

Dr. Jason M. Chein is currently a faculty member at Temple University, where he is the principal investigator of the Neurocognition Lab. I met Jason in 1998 when we were both graduate students at the Learning Research and Development Center. While in grad school, Jason became an expert in cognitive neuroscience, which included learning cool methodologies like conducting studies using fMRI. While in grad school, Jason also became quite proficient at frisbee golf.


For More Information

[1] Chein, J., & Morrison, A. (2010). Expanding the mind's workspace: Training and transfer effects with a complex working memory span task. Psychonomic Bulletin & Review, 17(2), 193-199.

[2] Morrison, A. B., & Chein, J. M. (2011). Does working memory training work? The promise and challenges of enhancing cognition by training working memory. Psychonomic Bulletin & Review, 18(1), 46-60.

[3] Morrison, A. B., & Chein, J. M. (2012). The controversy over Cogmed. Journal of Applied Research in Memory and Cognition, 1(3), 208-210.

[4] Chase, W. G., & Simon, H. A. (1973). Perception in chess. Cognitive Psychology, 4(1), 55-81.

[5] Klingberg, T., Forssberg, H., & Westerberg, H. (2002). Training of working memory in children with ADHD. Journal of Clinical and Experimental Neuropsychology, 24(6), 781-791.

[6] Klingberg, T., Fernell, E., Olesen, P. J., Johnson, M., Gustafsson, P., Dahlström, K., Gillberg, C. G., Forssberg, H., & Westerberg, H. (2005). Computerized training of working memory in children with ADHD: A randomized, controlled trial. Journal of the American Academy of Child & Adolescent Psychiatry, 44(2), 177-186.

[7] Jaeggi, S. M., Buschkuehl, M., Jonides, J., & Perrig, W. J. (2008). Improving fluid intelligence with training on working memory. Proceedings of the National Academy of Sciences, 105(19), 6829-6833.

Thursday, September 10, 2015

There and Back Again: Near and Far Transfer

Imagine for a moment that you are landing in a city you have never visited before, and you have to find your way from the airport to your hotel. When you land and get off the plane, what do you do? What steps might you have to take to navigate your way from the airport terminal all the way to the comfort of your 4-star hotel room? If you have travelled before, what aspects from your prior experiences, if any, might you draw upon to get you to your hotel in this new, unfamiliar city?

Near vs. Far Transfer


Hopefully you will be able to take advantage of some of your previous travel experiences in the above scenario thanks to what is known as transfer. Transfer is when knowledge learned in one domain is applied to a different domain. A domain is the topic or the subject matter of the to-be-learned knowledge. An everyday example of transfer comes from navigating a mass transit system. Suppose you grew up in a place where the only mass transit option was the city bus. When learning how to ride the bus, you figured out that a key to getting on the right bus was the sign found at the top of each bus, which displayed the line number and the terminal destination for that bus. This information was critical because it indicated which route the bus was going to follow, and whether the bus was heading towards or away from your stop.

Now suppose you then find yourself in a new city, such as Washington, DC, and you want to ride their Metro, which is their subway system. You would demonstrate transfer by applying what you know about riding the bus (i.e., the source domain) to riding the subway (i.e., the target domain). Subway trains also display the line number and the terminal destination at the front of each train.

We might say that transferring knowledge from riding a bus on one line to riding a bus on a completely different line isn't much of a stretch because they are both busses, and they both use a destination sign to communicate with the rider. That is an example of near transfer because it involves learning within the same domain (i.e., riding the bus). What would be an example of far transfer? Learning how to ride the subway would be an example of far transfer because the surface features change (trains instead of busses), and so does the setting or context (e.g., bus stops vs. subway stations).


Why Might Transfer Fail?

Although transfer can be very useful, it can also be very hard to accomplish. Why is transfer hard, and what are some of the ways that it might fail?

Transfer might fail when there is a mismatch between the learning setting and the application setting. A lot of learning occurs in the classroom, and it is the hope of every teacher that students transfer their lessons to real life. A semi-famous counterexample is a study of Brazilian street children making change [1]. When working on the streets, these children were able to perform fairly complicated computations in their heads. But when they were given problems that had the same deep structure and only different surface features (i.e., isomorphic problems), they failed to solve them. In other words, the Brazilian street merchants were not able to transfer what they knew from selling candy to the classroom environment.

Another way in which transfer might fail is when the surface features of the problem change. One of my favorite examples of transfer failure comes from geometry [2]. In this example, children learn how to calculate the area of a parallelogram. When learning this particular skill, the problem is accompanied by the following diagram (see Fig. 1).


Figure 1. To calculate the area, drop two perpendicular bisectors.

The student is shown that dropping two perpendicular bisectors (lines 1 & 2) forms two triangles of equal size, which essentially turns the parallelogram into a rectangle. Since students already know how to compute the area of a rectangle, they can easily solve this problem. However, when the students are asked to compute the area of the following parallelogram (see Fig. 2), the children claim that they never learned how to solve this type of problem!

Figure 2. How do you calculate the area of this parallelogram?

This is a rather tragic example because it means that the students cannot see the applicability of their knowledge in the two different situations. This is an example of the failure of near transfer.


The STEM Connection

To reiterate, transfer is hard and can fail for multiple reasons. It can fail when the learning and application settings do not match. It can also fail when the learner does not recognize the connection between the surface features of what they learned (e.g., a parallelogram resting on its base) and a slightly different case (e.g., a parallelogram that is rotated and resting on its side). 

Why does the setting and surface features matter? In a previous post, we learned about procedural knowledge, which can be modeled using what is called a production rule. Production rules have two parts: a condition and an action. When I see this (condition), then I should do that (action).

   Production Rule: [ Condition ] => [ Action ]

We can model problem solving in geometry as a series of production rules. We might, for example, say: 

  Condition: If I see a parallelogram,
    AND: My goal is to compute the area;
  Action: Then, calculate the product of the base and the height. 

One potential explanation why transfer fails is because the condition (i.e., the left side of the production rule) is not general enough for the learner to see when his or her knowledge applies. The goal of education is to help students generalize their knowledge to the point where they can see how it applies across settings and across seemingly disparate situations.
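The condition-action idea above can be sketched as a pair of predicate functions in Python; the shape representation and orientation labels here are made up for illustration:

```python
# A production rule pairs a condition (when does this knowledge apply?)
# with an action (what to do when it applies).

def parallelogram_area(base, height):
    """The action: area of a parallelogram is base times height."""
    return base * height

# An overly specific condition: it fires only for a parallelogram resting
# on its base, so a rotated parallelogram won't trigger it.
def narrow_condition(shape):
    return shape["type"] == "parallelogram" and shape["orientation"] == "on_base"

# A more general condition: it fires for any parallelogram, however it is
# oriented. Generalizing the left side of the rule is what enables transfer.
def general_condition(shape):
    return shape["type"] == "parallelogram"

rotated = {"type": "parallelogram", "orientation": "on_side",
           "base": 6, "height": 4}

print(narrow_condition(rotated))    # False: the rule never fires; transfer fails
print(general_condition(rotated))   # True: the rule applies
if general_condition(rotated):
    print(parallelogram_area(rotated["base"], rotated["height"]))  # 24
```

The narrow condition models the students in the geometry example: their knowledge is intact, but its trigger is tied to one orientation, so the rotated parallelogram never activates it.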


Share and Enjoy!

Dr. Bob

For More Information

[1] Carraher, T. N., Carraher, D. W., & Schliemann, A. D. (1985). Mathematics in the streets and in schools. British Journal of Developmental Psychology, 3, 21-29.

[2] Wertheimer, M. (1945). Productive thinking. New York, NY: Harper.