Thursday, March 22, 2018

More Human Than Human: Human Tutoring

Editorial Note: As of April 1, 2018, our paper, Learning from human tutoring, has been cited over 1,000 times. That qualifies it as a "citation classic." Congratulations to my co-authors, especially Micki, for making this happen!


Learning By Doing

Let's start with a quiz. Rank order the following learning strategies from most effective to least effective.
  1. Listening to a lecture
  2. Participating in a large-group discussion
  3. One-on-one human tutoring
  4. On-the-job training or an apprenticeship 
  5. Hands-on laboratory activity
  6. Collaborative problem solving
  7. Watching an online video
  8. Studying worked-out examples
  9. Solving problems with a computer tutor

"Sort It Out" –Vinny, Snatch (2000)

This is probably an impossible sorting task because it isn't always clear when one particular learning strategy should be employed. For example, it might depend on the learner's prior knowledge. If the material is completely new, then one-on-one human tutoring might be the best approach. Alternatively, if the student is familiar with the material and just needs a refresher, then watching a video might be the most efficient way to learn. In other words, there isn't a grand unified learning theory that can confidently recommend the exact learning activity for a student who is learning a specific topic.

That is not to say, however, that we can't make an educated guess [1, 2]. We know that requiring a student to be active during the learning process (e.g., asking deep questions) is better than letting him or her be passive (e.g., watching a video). This general framework was covered in a previous post.

Thus, being active during learning is important. We also know that being interactive matters for learning. What kind of instruction encourages interactivity? Both parents and students would probably agree that one-on-one human tutoring is highly interactive and extremely effective. The open question is: Why is human tutoring effective?

Three Possible Explanations

Here are three possible explanations for why human tutoring is effective. 

The Tutor-centered Hypothesis

The first hypothesis suggests that human tutoring is effective because the tutor has a deep understanding of both the topic and pedagogical moves that are useful for eliciting learning from a student. For example, suppose a tutor is teaching how to solve projectile-motion problems. She knows that students frequently hold the misconception that motion in the x-direction depends on motion in the y-direction. Therefore, the tutor knows the exact question to ask when the student displays this confusion. Thus, tutoring is effective because the tutor knows exactly what to say and when to say it.


The Student-centered Hypothesis

In contrast, there is an equally valid way of looking at tutoring that focuses on the student. Instead of giving the tutor all the credit, this hypothesis holds that the student is the reason why tutoring is effective. When the student is engaged in constructing his or her own understanding, then tutoring will be effective. We've seen in past research the importance of the generation effect and self-explaining. Tutoring will be effective insofar as the student is able to engage in these constructive learning activities.


The Interactive Hypothesis

The third explanation for the effectiveness of human tutoring takes a more holistic view. Instead of crediting the student or the tutor alone, the effectiveness of tutoring arises from their mutual interaction. In other words, the tutor sets an appropriate learning task, and the student then has the opportunity to engage in that task. The tutor answers questions and gives feedback as needed, and backs up to propose an easier task if the student gets stuck.


Let's Put It To a Test

To test these three equally plausible explanations, we conducted the following study [3]. The first part of the experiment asked tutors to teach eighth-grade students about the human circulatory system. The tutors, who were nursing students, weren't restricted in any way. Their goal, of course, was to help prepare students to answer really tough questions about the circulatory system. In the second part of the experiment, we took the same tutors, gave them a fresh batch of students, and told them not to give away any information. Instead, we wanted them to get the students to do all the work.

Before we get to the results, we had to make sure that the tutors understood and followed our instructions. Sure enough, in the second round of tutoring, they explained a lot less and asked many more questions. Therefore, we were satisfied that our manipulation was a success.

In the first study, we found a correlation between the number of tutorial explanations and shallow learning. I don't find this result very surprising. The student read the text and the tutor explained it, which was effectively like hearing the same lesson twice. The more interesting finding involved students' reflective comments about their own understanding or learning: those comments were correlated with deep learning. So far, we have evidence for the first two hypotheses. What the tutor does matters for shallow learning (i.e., giving tons of explanations), and what the student does matters for deep learning (i.e., being reflective or metacognitive).

In the second study, we found something else that was interesting. Recall that the tutors weren't allowed to explain or give feedback. Instead, they relied on a lot of "scaffolding" moves (e.g., giving hints, asking fill-in-the-blank questions, or asking for an example). There was about three times as much scaffolding in the second study, and the number of explanations dropped precipitously (see Fig. 1). What makes this finding interesting is that the learning outcomes were the same in both studies: the students excelled on their post-tests. The reason, we argue, is that the tutors created situations where students could be generative, and when students engaged in those generative activities, they learned a great deal from the tutoring session.


Figure 1. The number of scaffolding and explanation episodes in the first and second studies.



The S.T.E.M. Connection

The results from the first study clearly indicate that tutors, when left to their own devices, will try to explain the material to the student. This isn't necessarily a bad thing; repetition generally leads to learning. Unfortunately, it isn't the best way to learn the really hard material. Instead, it is a good idea to create a tutoring situation where the tutor asks the student questions and pushes them to further develop a line of reasoning. Tutors are effective because, when they ask a tough question and hear the student struggle, they back up and ask an easier question. That is where the metaphor of "scaffolding" comes from. On the flip side, it is important for the student to monitor his or her understanding. We've talked about the importance of metacognition, and in this case, metacognitive monitoring was correlated with learning. To the extent possible, we should teach our students to monitor their evolving comprehension.

Human tutoring is effective for so many reasons, and it doesn't just boil down to one thing. But as we continue to investigate and probe, eventually we will be able to figure out tutoring's secret sauce. Once we do, then we can provide tutors with concrete suggestions to become even more effective. And maybe, just maybe, we can build a computer tutoring system that is as good as humans. Maybe even better [4]. 


Share and Enjoy!

Dr. Bob

Going Beyond the Information Given

[1] Koedinger, K. R., Corbett, A. T., & Perfetti, C. (2012). The Knowledge-Learning-Instruction framework: Bridging the science-practice chasm to enhance robust student learning. Cognitive Science, 36(5), 757-798.

[2] Chi, M. T. H., & Wylie, R. (2014). The ICAP framework: Linking cognitive engagement to active learning outcomes. Educational Psychologist, 49, 219-243.

[3] Chi, M. T. H., Siler, S. A., Jeong, H., Yamauchi, T., & Hausmann, R. G. (2001). Learning from human tutoring. Cognitive Science, 25(4), 471-533.

[4] VanLehn, K. (2011). The relative effectiveness of human tutoring, intelligent tutoring systems, and other tutoring systems. Educational Psychologist, 46(4), 197-221.