Koedinger - Toward a model of accelerated future learning

From Pslc
Revision as of 09:24, 14 September 2010 by Koedinger (Talk | contribs)


Project Overview

Perhaps the most interesting of the PSLC measures of robust learning is accelerated future learning. A growing number of studies, within PSLC and without, have experimentally demonstrated that some instructional treatments lead to accelerated future learning. These treatments (and associated studies) include inventing for future learning (Schwartz; Roll), self-explanation (Hausmann & VanLehn), and feature prerequisite drill (Pavlik). While results are starting to accumulate, we have little by way of precise understanding of the learning mechanisms that yield these results.

The key goal of this project is to combine data mining and machine learning to create computational models of the learning mechanisms that yield accelerated future learning. We are fitting such models and ablated (or “lesioned”) alternatives against relevant data to isolate critical features of the mechanisms of future learning (e.g., Li, Cohen, & Koedinger, 2010; Matsuda et al., 2007, 2008; Shih et al., 2008). We are considering at least three kinds of data sources and phenomena. One data source is the DataShop data associated with experiments, like those listed above, in which an accelerated future learning result has been achieved. A second data source is any DataShop data set with valid pre- and post-test data by which we can determine differences in student learning rate. A third data source is any DataShop data set with a quality knowledge component model and learning curves. For such a data source, we are creating statistical models of individual differences across students in learning rate. Dividing students into fast learners and slow learners, we are then testing alternative versions of the computational or statistical models to see which best fits the learning rate, and perhaps error patterns, of each group. In cases where we have measures of differences in students’ conceptual prerequisite knowledge (e.g., Booth’s equation solving data or Pavlik’s Chinese radical/character and pre-algebra data), we can use such data to further constrain the computational modeling effort.
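As a minimal illustration of the third data source, the sketch below estimates a per-student learning rate from DataShop-style learning-curve data and splits students into fast and slow learners. It assumes a power-law learning curve (error = a · opportunity^(-b)); the field layout, the example data, and the rate cutoff are illustrative assumptions, not the project's actual modeling pipeline.

```python
# Hypothetical sketch: estimate per-student learning rates from
# DataShop-style data (error rate per practice opportunity), assuming a
# power-law learning curve: error = a * opportunity^(-b).
import math

def fit_power_law(errors):
    """Fit log(error) = log(a) - b*log(opportunity) by least squares.
    Returns (a, b); b is the fitted learning rate."""
    xs = [math.log(i + 1) for i in range(len(errors))]
    ys = [math.log(max(e, 1e-6)) for e in errors]  # guard against log(0)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return math.exp(my - slope * mx), -slope  # (a, b)

def split_fast_slow(student_errors, rate_cutoff=0.5):
    """Partition students by fitted rate b (cutoff is illustrative)."""
    fast, slow = [], []
    for student, errors in student_errors.items():
        _, b = fit_power_law(errors)
        (fast if b >= rate_cutoff else slow).append(student)
    return fast, slow

# Illustrative error rates over five practice opportunities
data = {
    "s1": [0.8, 0.4, 0.27, 0.2, 0.16],    # steep curve: fast learner
    "s2": [0.8, 0.74, 0.70, 0.68, 0.66],  # flat curve: slow learner
}
fast, slow = split_fast_slow(data)
```

With the two illustrative students above, s1's fitted rate is near 1.0 and s2's near 0.12, so s1 lands in the fast group and s2 in the slow group; richer models (e.g., per-knowledge-component slopes) would refine this split.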

A computational model of accelerated future learning that fits a variety of student learning data sets across math, science, and language domains would be a significant achievement in theoretical integration within the learning sciences.

Project Goals

  • Shih, Scheines, & Koedinger will create a “Target Sequence Clustering” technique (Shih’s thesis) that will be applied to identify patterns in tutor log data that characterize good and poor student learning strategies and are predictive of individual differences in student learning rate.
  • Li, Cohen & Koedinger will continue their work to produce a model and demonstration of accelerated learning within the SimStudent architecture. We will extend past work that demonstrated the potential of a deep feature learning technique based on probabilistic grammar learning, by integrating those machine learning techniques into SimStudent and testing whether SimStudent can learn algebra with only weak prior knowledge (shallow features) by acquiring deep features, rather than being programmed with strong prior knowledge as in past work.
  • With leveraged funding (DoE IES and NSF REESE), Matsuda, Booth, & Koedinger will continue to explore SimStudent as a model of algebra learning data in which differences in student prior knowledge (prerequisite concepts) lead to differences in student learning rate. The work of Li, Cohen, & Koedinger may contribute to this effort.
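The first goal above, identifying learning strategies from tutor log data, can be illustrated in miniature. The actual “Target Sequence Clustering” technique is from Shih’s thesis; the sketch below only shows the general idea of reducing each student’s action sequence to a feature (here, an assumed hint-request fraction) and clustering students on it with a tiny k-means. All names and data are hypothetical.

```python
# Illustrative only: featurize tutor-log action sequences, then cluster.
# The feature (hint-request fraction) and 1-D k-means are assumptions,
# not the actual Target Sequence Clustering technique.

def hint_fraction(sequence):
    """Fraction of a student's logged actions that are hint requests."""
    return sum(1 for a in sequence if a == "hint") / len(sequence)

def kmeans_1d(values, iters=20):
    """Tiny 1-D k-means with two clusters; returns a label per value."""
    centers = [min(values), max(values)]
    labels = [0] * len(values)
    for _ in range(iters):
        labels = [min((0, 1), key=lambda c: abs(v - centers[c]))
                  for v in values]
        for c in (0, 1):
            members = [v for v, l in zip(values, labels) if l == c]
            if members:
                centers[c] = sum(members) / len(members)
    return labels

# Hypothetical tutor logs: action sequences per student
logs = {
    "s1": ["attempt", "hint", "hint", "attempt", "hint"],  # hint-heavy
    "s2": ["attempt", "attempt", "attempt", "hint"],
    "s3": ["attempt", "attempt", "attempt", "attempt"],
}
students = list(logs)
labels = kmeans_1d([hint_fraction(logs[s]) for s in students])
```

In this toy example the hint-heavy student (s1) separates from the other two; a real analysis would use sequence-aware features and relate the resulting clusters to measured learning rates.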

Participants

Ken Koedinger & PhD students Ben Shih and Nan Li. Other contributors are Dr. William Cohen (Machine Learning; co-advisor of Nan Li), Dr. Richard Scheines (Philosophy, co-advisor of Ben Shih), Dr. Noboru Matsuda, Dr. Julie Booth, and the SimStudent and CTAT teams.

References

  • Li, N., Cohen, W. W., & Koedinger, K. R. (2010). A computational model of accelerated future learning through feature recognition. In Proceedings of the 10th International Conference of Intelligent Tutoring Systems.
  • Matsuda, N., Cohen, W. W., Sewall, J., Lacerda, G., & Koedinger, K. R. (2008). Why tutored problem solving may be better than example study: Theoretical implications from a simulated-student study. In B. P. Woolf, E. Aimeur, R. Nkambou & S. Lajoie (Eds.), Proceedings of the International Conference on Intelligent Tutoring Systems (pp. 111-121). Berlin, Heidelberg: Springer.
  • Matsuda, N., Cohen, W. W., Sewall, J., Lacerda, G., & Koedinger, K. R. (2007). Evaluating a simulated student using real students data for training and testing. In C. Conati, K. McCoy & G. Paliouras (Eds.), Proceedings of the international conference on User Modeling (LNAI 4511) (pp. 107-116). Berlin, Heidelberg: Springer.
  • Matsuda, N., Lee, A., Cohen, W. W., & Koedinger, K. R. (2009). A computational model of how learner errors arise from weak prior knowledge. In Proceedings of the Conference of the Cognitive Science Society.
  • Shih, B., Koedinger, K. R., & Scheines, R.  (2008). A response time model for bottom-out hints as worked examples. In Proceedings of the 1st International Conference on Educational Data Mining.