October 14, 2013

Research on Cognitive Task Analysis: Capturing expertise for instruction
Dick Clark

view seminar

What have we recently learned about expertise from attempts to analyze the way experts perform tasks and solve problems? Dick Clark will describe the results of a number of experiments where cognitive task analysis (CTA) was applied in professional fields such as health care, software design, and engineering. Among the findings to be discussed are evidence that approximately 70% of expert decisions are automated and non-conscious, the results of using CTA to identify additional expert decisions beyond the 30% usually captured for instruction, and the impact of using CTA-based information for the design of instruction. Dick will also describe some of the problems with the design of CTA studies and the need for future research, including the use of data mining strategies.

October 16, 2012

Using Data Mining and Multidimensional Item Response Theory to analyze an MITx MOOC
Dave Pritchard

view seminar

Prof. Dave Pritchard is the Cecil and Ida Green Professor of Physics at MIT.

~8000 students completed the 6.002x Massive Open Online Course (MOOC) in Spring 2012 – a course with videos, a wiki, a standard textbook, discussion fora, and both embedded and collected problems, some requiring use of a circuit simulator. We investigate the patterns of student attrition, resource use, and behavior on homework and during exams, seeking evidence for behaviors that correlate with skill and/or learning. Multidimensional Item Response Theory, used to analyze student responses to questions, revealed ~20 significant factors with distinct student skills and related question discrimination. During the midterm, students referred back mostly to questions with discrimination patterns similar to the midterm questions, suggesting that they categorize question similarity using these factors. We also show that questions requiring multiple attempts are a rich source of additional assessment information.

August 30, 2012

Adaptive help giving for physics homework problems
Brett van de Sande
Arizona State University

view seminar

Abstract: Both human and computer tutors constantly make decisions about what kind of help they are going to give (or not give). Ideally, they make these decisions based on some determination of how the student is progressing, coupled with some knowledge of what kinds of tutoring strategies have worked in similar situations in the past. With this in mind, we have implemented a method for iteratively improving the help-giving policies of a computer tutor for introductory physics. We created a version of the tutor that randomly uses one of several (reasonable) policies when helping students and deployed it in the classroom. Next, we used the resulting student log data and machine learning techniques to train a new version of the tutor with improved hinting policies. Finally, we deployed the new version of the tutor in the classroom.
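
The loop described above – randomize over candidate help policies, log outcomes, then learn an improved policy from the logs – can be sketched with a simple empirical estimate. This is an illustrative toy, not the authors' actual tutor; the state names, policy names, and log data are all made up:

```python
from collections import defaultdict

# Hypothetical log of (student_state, help_policy, outcome) triples, as if
# collected from a tutor that picked a help policy uniformly at random.
# outcome: 1 = student solved the step after the help, 0 = did not.
LOG = [
    ("stuck_early", "point_hint", 1), ("stuck_early", "point_hint", 1),
    ("stuck_early", "bottom_out", 0), ("stuck_early", "teach_hint", 1),
    ("stuck_late", "bottom_out", 1), ("stuck_late", "bottom_out", 1),
    ("stuck_late", "point_hint", 0), ("stuck_late", "teach_hint", 0),
]

def learn_policy(log):
    """For each state, pick the help policy with the best average outcome
    in the randomized log data."""
    stats = defaultdict(lambda: [0, 0])  # (state, policy) -> [successes, trials]
    for state, policy, outcome in log:
        stats[(state, policy)][0] += outcome
        stats[(state, policy)][1] += 1
    best = {}
    for (state, action), (succ, n) in stats.items():
        rate = succ / n
        if state not in best or rate > best[state][1]:
            best[state] = (action, rate)
    return {state: action for state, (action, rate) in best.items()}

print(learn_policy(LOG))
```

Because the logged policy was uniformly random, the per-state success rates are unbiased estimates, which is what makes this kind of after-the-fact policy improvement sound.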

Bio: Brett van de Sande is an Assistant Research Professional and Computer Science and Engineering faculty member at Arizona State University. His postdoctoral work was in theoretical physics (1994-1999). He taught physics and math at Geneva College (1999-2004) and conducted research in physics education and artificial intelligence at the University of Pittsburgh (2005-2008). He joined ASU in 2008.

July 9, 2012

Putting Research Into Practice
Steve Ritter
Founder and Chief Scientist, Carnegie Learning

Tristan Nixon
Carnegie Learning

view seminar

As an independent company and, now, as a part of the Apollo Group, Carnegie Learning has put an emphasis on active participation in research as an approach to improving educational outcomes. Balancing customer requests, sales needs and development priorities is difficult, and the path to commercialization does not always go as expected. In this talk, we'll discuss the challenges and the promise of translating university research into product improvements that have a significant impact on student learning. We'll provide examples of successful and unsuccessful commercialization and talk about plans for improving the process.

June 25, 2012

Marmoset: Automated Grading and Data Collection for CS Education
Jamie Spacco
Assistant Professor of Computer Science, Knox College

view seminar

Marmoset is an integrated submission, grading, and data collection system for programming courses. In addition to automating the grading of programming assignments, Marmoset provides tools that collect snapshots of students' files every time they save. This data is extremely fine-grained: around 70% of the snapshots collected in CS-2 change only four lines of code or fewer. Marmoset has been in use at the University of Maryland since the Fall 2004 semester and is currently used by a dozen courses and over 700 students at Maryland each semester.
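
A "lines changed per snapshot" statistic like the one above can be computed from consecutive snapshots with a standard diff. This is a generic sketch using Python's difflib, not Marmoset's actual implementation; the snapshot strings are made up:

```python
import difflib

def lines_changed(old: str, new: str) -> int:
    """Count lines added or removed between two snapshots of a file."""
    diff = difflib.ndiff(old.splitlines(), new.splitlines())
    # ndiff prefixes added lines with "+ " and removed lines with "- "
    return sum(1 for line in diff if line.startswith(("+ ", "- ")))

snap1 = "int main() {\n    return 0;\n}\n"
snap2 = "int main() {\n    printf(\"hi\");\n    return 0;\n}\n"
print(lines_changed(snap1, snap2))  # → 1 (one added line)
```

Running this pairwise over a student's saved snapshots and histogramming the counts would reproduce the kind of granularity analysis the talk reports.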

Marmoset also supports "release testing", a novel pedagogical innovation that rewards students who begin work early. The grading tests are divided into public tests (given to the students with the project) and release tests (kept on the server). When students upload their code to the server, they can spend a "release token", which reveals the number of release tests passed and failed, and additional information about only the first 2 failed release tests. Students cannot learn anything about the other failed release tests until they fix their code and spend another token. Furthermore, they only have 3 tokens that regenerate every 24 hours, so procrastination now has the cost of "lost" tokens.
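
The token economy described above (3 tokens, 24-hour regeneration, only the first 2 failed release tests revealed) can be sketched as follows. This is an illustrative model of the mechanism, not Marmoset's actual code, and all names are hypothetical:

```python
class ReleaseTokens:
    """Sketch of a release-token economy: MAX_TOKENS tokens, each one
    regenerating 24 hours after it is spent."""

    MAX_TOKENS = 3
    REGEN_SECONDS = 24 * 60 * 60

    def __init__(self):
        self.spent_at = []  # timestamps (seconds) at which tokens were spent

    def available(self, now):
        # A token regenerates once 24 hours have passed since it was spent.
        self.spent_at = [t for t in self.spent_at
                         if now - t < self.REGEN_SECONDS]
        return self.MAX_TOKENS - len(self.spent_at)

    def spend(self, now, release_results):
        """Spend a token: reveal pass/fail counts plus the names of at
        most the first two failed release tests."""
        if self.available(now) == 0:
            return None  # no tokens left; the student must wait
        self.spent_at.append(now)
        failed = [name for name, passed in release_results if not passed]
        return {
            "passed": sum(1 for _, p in release_results if p),
            "failed": len(failed),
            "revealed": failed[:2],
        }
```

For example, a student with results `[("t1", True), ("t2", False), ("t3", False), ("t4", False)]` who spends a token sees 1 passed, 3 failed, and only `t2` and `t3` named; after three submissions the fourth `spend` returns `None` until a token regenerates.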

Jamie Spacco received a PhD from the University of Maryland at College Park in 2006; his dissertation focused on the Marmoset project described in this talk. He taught for four years at Colgate University in Hamilton, NY as a visiting professor (a series of one-year leave replacements) before starting a tenure-track job at Knox College in Galesburg, IL in 2010. While at Colgate, Jamie mostly worked on software engineering research; since arriving at Knox, he has returned to working on CS education.

June 11, 2012

Automated Student Model Improvement
Ken Koedinger
Carnegie Mellon University

view seminar

Student modeling plays a critical role in developing and improving instruction and instructional technologies. We present a technique for automated improvement of student models that leverages the DataShop repository, crowdsourcing, and a version of the Learning Factors Analysis algorithm. We demonstrate this method on eleven educational technology data sets, from intelligent tutors to games, in a variety of domains from math to second language learning. In at least ten of the eleven cases, the method discovers improved models based on better test-set prediction in cross validation. The improvements isolate flaws in the original student models, and we show how focused investigation of flawed parts of models leads to new insights into the student learning process and suggests specific improvements for tutor design. We also discuss the great potential for future work that substitutes alternative statistical models of learning from the EDM literature or alternative model search algorithms.
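
The core comparison in this kind of work – score alternative knowledge component (KC) labelings of the same items by held-out predictive fit – can be sketched with a deliberately simplified stand-in for the AFM/LFA statistical models. The log data and KC names below are invented for illustration:

```python
import math
from collections import defaultdict

# Illustrative student-step log: (student, item, correct).
LOG = [("s1", "i1", 1), ("s1", "i2", 0), ("s2", "i1", 1), ("s2", "i2", 1),
       ("s1", "i3", 0), ("s2", "i3", 0), ("s3", "i1", 1), ("s3", "i2", 0),
       ("s3", "i3", 1), ("s1", "i4", 1), ("s2", "i4", 1), ("s3", "i4", 1)]

# Two candidate student models: alternative item-to-KC mappings.
KC_MODEL_A = {"i1": "add", "i2": "add", "i3": "add", "i4": "add"}
KC_MODEL_B = {"i1": "add", "i2": "sub", "i3": "sub", "i4": "add"}

def cv_log_likelihood(log, kc_model, folds=3):
    """Cross-validated log-likelihood of a per-KC success-rate model
    (a toy stand-in for AFM; higher is a better-predicting KC model)."""
    total = 0.0
    for k in range(folds):
        train = [r for i, r in enumerate(log) if i % folds != k]
        test = [r for i, r in enumerate(log) if i % folds == k]
        succ, n = defaultdict(int), defaultdict(int)
        for _, item, correct in train:
            succ[kc_model[item]] += correct
            n[kc_model[item]] += 1
        for _, item, correct in test:
            kc = kc_model[item]
            p = (succ[kc] + 1) / (n[kc] + 2)  # Laplace-smoothed success rate
            total += math.log(p if correct else 1 - p)
    return total

better = max([KC_MODEL_A, KC_MODEL_B],
             key=lambda m: cv_log_likelihood(LOG, m))
```

The actual method searches the space of such mappings (e.g., by splitting or merging KCs) rather than comparing a fixed pair, but the selection criterion is the same: prefer the model that predicts held-out student performance better.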

December 13, 2010

Analytic representations: the design of tools to create and exploit them
Gregory Dyke
Carnegie Mellon University

view seminar

My dissertation focussed on computer support for "human analysis" (as opposed to semi-automated or computer-assisted analysis). As a result I created a tool called Tatiana, which has interesting properties for researchers who collect and analyse various kinds of process data (videos, computer logs, transcripts, etc.). Tatiana is built on a framework for constructing and managing analytic representations using four quasi-orthogonal operation types: transformation, enrichment, visualisation and synchronisation.

This talk will describe Tatiana and the underlying framework:
- What Tatiana can currently do and what features are planned for the future.
- How Tatiana may be able to help solve research problems when analyzing videos, computer logs, transcripts, etc.
- Situations in which it might be beneficial to extend Tatiana, rather than constructing new software from scratch.

About Gregory Dyke: I am interested in the creation of tools to help humans analyse data of computer mediated collaboration (and learning). My PhD resulted in the creation of Tatiana (Trace Analysis Tool for Interaction ANAlysts), a flexible, extensible tool particularly well suited for the analysis of small group face to face and computer mediated interaction. My current work involves examining and assisting the discovery of how interaction unfolds over time.

December 10, 2010

Using DataShop Tools to Model Students Learning Statistics - A LearnLab DataShop Case Study
Marsha Lovett
Carnegie Mellon University

view seminar

Marsha Lovett describes using the LearnLab DataShop to improve the Carnegie Mellon Statistics course.

August 30, 2010

Physical Symbols for Powerful Reasoning
Ken Koedinger
Carnegie Mellon University

view seminar

One way that cultures advance the intelligence of their members is through the symbolic forms (e.g., language, numbers) they promulgate through formal and informal education. Algebra is a prime example of an external symbolic system that, once learned, greatly enhances human intelligence. This enhancement is reflected in better performance in more complex problem solving, even though it may inhibit performance for simpler problem solving (Koedinger, Alibali, & Nathan, 2008). Learning to effectively use this external representational tool is not easy: even discounting the time needed to acquire adequate background knowledge, it takes most students a school year or two to learn algebra. In other words, many changes in internal cognition are required before effective use of this external representation is possible. I will discuss our experiments on algebra learning with real and simulated students (e.g., Matsuda, Cohen, Sewall, Lacerda, & Koedinger, 2007) and emphasize the productive interplay between internal and external cognition. I will explore whether there is a human-algebra distributed system that has learning properties beyond the human system.

July 25, 2010

KDD Cup Workshop
John Stamper
Carnegie Mellon University

view seminar

2010 KDD Cup Workshop in Washington, DC.

July 21, 2010

Cognitive Science 2010 Plenary Talk
Marsha Lovett
Carnegie Mellon University

view seminar

Marsha will talk about knowledge component (KC) modeling in the context of the OLI-Statistics course, showing a new tool that helps instructors track their students' progress based on the models, and then describing some results from a series of studies showing accelerated learning in the OLI-Statistics course when instructors use this tool.

June 14, 2010

Thinking with your Hands
Laurens Feenstra

view seminar

Multiple External Representations (MERs) have been used successfully in many instructional domains, including fractions. In most intelligent tutors, however, these representations are used as static graphics, limiting student-tutor interaction to traditional inputs such as text boxes and drop-down menus. This talk is about using the capabilities of Flash in example-tracing tutors to support direct student manipulation of graphics in a dynamic tutor environment. Allowing students to actively manipulate interactive MERs, while retaining the feedback mechanisms that make cognitive tutoring in CTAT so successful, is a potent mix for increasing student learning.

I will demonstrate some of these possibilities by discussing a study with interactive fraction representations that we conducted this spring with 312 4th- and 5th-grade students in 13 classes. Students working with interactive fraction representations came away having learned more than students working with static representations and traditional inputs.

May 12, 2010

Engagement, Learning, and Assessment in Immersive Environments
Chris Dede
Harvard University

view seminar

Becoming a digital person in an immersive virtual world enables types of motivation, learning, and assessment that are potentially quite powerful for education. With Institute of Education Sciences funding, our research team is building immersive ecosystems and virtual performance assessments. This talk describes our design strategies for fostering engagement that does not undercut learning and long-term intrinsic motivation, and for conducting unobtrusive assessments based on event-log analysis.

May 10, 2010

Adventures in Researching Self-Regulated Learning
Philip H. Winne
Simon Fraser University

view seminar

Amidst wide-ranging quests to understand how learners learn, so that principles can be offered for improving education, one topic of interest has been skills for learning. Under various guises and in multiple subject domains (reading comprehension skills, strategies for composing, problem-solving heuristics, and more), it has been hypothesized that learners may sometimes direct their resources toward self-observing and attempting to self-improve learning skills by engaging in various metacognitive activities. This is known as self-regulated learning, or SRL. As is the case in other areas of research, investigating SRL phenomena is challenging. I summarize these challenges and describe how I hope research on SRL will evolve in three respects: operationally defining constructs as data that actually reflect SRL processes, gathering sufficient data needed to advance learning science, and methods for analyzing data. I demonstrate a software tool, nStudy, that can make a dent in the first two of these topics, and offer conjectures for addressing the third. My goal in this talk is to arouse you to improve research on SRL rather than to present findings and quasi-conclusions.

Bio: Philip H. Winne is a professor of educational psychology and Canada Research Chair in Self-Regulated Learning and Learning Technologies at Simon Fraser University. Winne has made significant contributions to research on self-regulated learning. He is the principal investigator of the Learning Kit Project, which has developed educational software founded on principles of self-regulated learning. Before earning a PhD from Stanford University in 1976, Winne received undergraduate and masters degrees from Bucknell University. He has served as co-editor of the Educational Psychologist and associate editor of the British Journal of Educational Psychology. Winne has authored (or co-authored) over 70 peer-reviewed journal articles, over 30 book chapters, and 5 books including an introductory textbook on educational psychology that is widely used in Canada (Woolfolk, Winne, & Perry, 2006).