Self-explanation: Meta-cognitive vs. justification prompts

From LearnLab
Revision as of 15:01, 1 February 2008 by Bobhaus (talk | contribs) (Dependent variables)

The Effects of Interaction on Robust Learning

Robert G.M. Hausmann, Brett van de Sande, Sophia Gershman, & Kurt VanLehn

Summary Table

PIs Robert G.M. Hausmann (Pitt), Brett van de Sande (Pitt), Sophia Gershman (WHRHS), & Kurt VanLehn (Pitt)
Other Contributors Tim Nokes (Pitt)
Study Start Date Sept. 1, 2007
Study End Date Aug. 31, 2008
LearnLab Site Watchung Hills Regional High School (WHRHS)
LearnLab Course Physics
Number of Students N = 75
Total Participant Hours 150 hrs.
DataShop Loaded: data not yet collected


The literature on studying examples and text generally shows that students learn more when they are prompted to self-explain the text as they read it. Experimenters have typically used two types of prompts: meta-cognitive and justification. An example of a meta-cognitive prompt would be, "What did this sentence tell you that you didn't already know?" and an example of a justification prompt would be, "What reasoning or principles justify this sentence's claim?" To date, no study has included both types of prompts, and yet there are good theoretical reasons to expect them to have differential impacts on student learning. This study will directly compare them in a single experiment with high school physics students.

Background and Significance


See Hausmann_Study2 Glossary

Research question

How is robust learning affected by self-explanation vs. jointly constructed explanations?

Independent variables

Only one independent variable, with two levels, was used:

  • Explanation-construction: individually constructed explanations vs. jointly constructed explanations

Prompting for an explanation was intended to increase the probability that the individual or dyad would traverse a useful learning-event path.


Dependent variables

  • Normal post-test
    • Near transfer, immediate: During training, worked examples alternated with problems, and the problems were solved using Andes. Each problem was similar to the example that preceded it, so performance on it is a measure of normal learning (near transfer, immediate testing). The log data were analyzed and assistance scores (sum of errors and help requests, normalized by the number of transactions) were calculated.
  • Robust learning
    • Long-term retention: On the student’s regular mid-term exam, one problem was similar to the training. Since this exam occurred a week after the training, and the training took place in just under 2 hours, the student’s performance on this problem is considered a test of long-term retention.
    • Near and far transfer: After training, students did their regular homework problems using Andes. Students did them whenever they wanted, but most completed them just before the exam. The homework problems were divided based on similarity to the training problems, and assistance scores were calculated.
    • Accelerated future learning: The training was on electrical fields, and it was followed in the course by a unit on magnetic fields. Log data from the magnetic field homework was analyzed as a measure of acceleration of future learning.
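
The assistance score described above (sum of errors and help requests, normalized by the number of transactions) can be sketched as a short calculation. The outcome labels and record layout below are assumptions for illustration only, not the actual Andes or DataShop log format:

```python
def assistance_score(transactions):
    """Assistance score for one student's log transactions.

    Each transaction is a dict with an 'outcome' field such as
    'CORRECT', 'ERROR', or 'HINT' (labels assumed for illustration).
    Score = (errors + help requests) / total transactions.
    """
    if not transactions:
        return 0.0
    assists = sum(1 for t in transactions
                  if t["outcome"] in ("ERROR", "HINT"))
    return assists / len(transactions)

# Toy example: 1 error + 1 hint out of 4 transactions -> 0.5
log = [
    {"outcome": "CORRECT"},
    {"outcome": "ERROR"},
    {"outcome": "HINT"},
    {"outcome": "CORRECT"},
]
score = assistance_score(log)
```

Lower scores indicate less reliance on the tutor's help, so under this scheme a lower assistance score on the Andes problems reflects stronger performance.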


Procedure

Participants were randomly assigned to condition. The first activity was to train the participants in their respective explanation activities. They read the instructions for the experiment, presented on a webpage, followed by the prompts used after each step of the example.

All of the participants were enrolled in a year-long, high-school physics course. The task domain, electrodynamics, was taught at the beginning of the Spring semester. Therefore, all of the students were familiar with the Andes physics tutor. They did not need any training in the interface. Unlike our previous lab experiment, they did not solve a warm-up problem. Instead, they started the experiment with a fairly complex problem.

Once they finished, participants watched a video of an isomorphic problem being solved. Note that this procedure is slightly different from previous research, which presented examples before problem solving (e.g., Sweller & Cooper, 1985, Experiment 2). The videos were decomposed into steps, and students were prompted to explain each step. The cycle of explaining examples and solving problems repeated until either 4 problems were solved or 2 hours elapsed. The first problem was used as a warm-up exercise, and the problems became progressively more complex.


Further Information

Annotated bibliography



Future plans