Photos by Chris Wesselman
“All interesting charts go up and to the right,” said John Mitchell, Vice Provost of Online Learning at Stanford, drawing audience laughs. He was describing his data showing dramatic increases in the number of courses now incorporating web content.
Mitchell, along with Jay McClelland, Stanford Professor and Director of the Center for Mind, Brain and Computation, and Sandra Mitchell, Professor of History & Philosophy of Science at the University of Pittsburgh, spoke at a January 14th forum on teaching and learning as complex systems. The forum was the first in a series on teaching and learning sponsored by Education’s Digital Future within Stanford’s Graduate School of Education.
Professor John Mitchell’s talk focused on the intersection of teaching, learning, and data analysis. The Lytics Lab (short for ‘Learning Analytics’), which he co-directs at Stanford, collects and analyzes data from MOOCs (Massive Open Online Courses) in order to refine models of learning. For instance, by studying the behavioral patterns of people who enroll in online courses, the lab has found a way to predict—about a week ahead of time—approximately half of the people who are likely to drop out of a class (the other half who drop out exhibit such random behavior patterns that their actions cannot be predicted beforehand). As Mitchell explained, this information could prove useful for professors wanting to reach out to struggling students who might just need a bit of encouragement to stick with a course.
The group has also found that students in online peer-grading networks are surprisingly effective at identifying where their peers could improve their work, highlighting an under-utilized strategy for providing quality feedback to students. In research particularly useful to courses that include problem sets, the team is also finding patterns in the pathways that students use to solve problems, noting that some pathways predict exam scores better than others.
Overall, Mitchell highlighted that this research provides examples of the ways that “large data can be useful to determine things that would not be visible otherwise.”
Professor Jay McClelland approached complexity in learning from a different perspective: inside the mind of the learner. McClelland’s presentation focused on the process of learning and the question of how an accumulation of experiences can lead to distinct changes in a person’s knowledge base. In other words, how do people come to know what they know?
McClelland is especially interested in the ‘transitions’ of learning, or those thresholds that must be crossed as a person learns a particular skill or grasps a particular concept. Using examples from historic experimental studies of early childhood learning, he explained that such transitions can now be imitated fairly well using models that incorporate learning dynamics. The models “show gradual knowledge accumulation and capture patterns of change similar to what we see in children.” However, such models have not yet been applied to online learning, and McClelland would “like to take these ideas into that context” in future work.
In her talk, Professor Sandra Mitchell drew an analogy to medical research to discuss the value of using multiple methods to compile evidence for ‘what works’ in teaching.
As she explained, basic questions about what increases and decreases learning in an experimental setting are complicated by questions about when, where, and how a particular activity or initiative could work in practice. Like the process of healing people, the process of learning has ‘complex causality.’ As Mitchell described, there are multiple causes of learning, interactions between different causes, and sensitivity to the context in which learning occurs. What might work in a particular setting or for a particular subject might not work for another.
This complexity means that multiple methods are needed to inform research on learning. Mitchell advocates for “integrative pluralism,” noting that the ‘big data’ studies now possible with MOOCs can be useful for finding associations between particular types of teaching and learning. Then, studies employing ‘in vivo’ designs (e.g., observations of individual courses) or controlled experiments can further refine our knowledge of the mechanisms of learning and where the most useful teaching interventions might occur.
The panelists’ presentations were followed by a lively question-and-answer session dominated by discussions of the pros and cons of MOOCs and ‘big data’ for informing teaching and learning.
In response to a question about the power of ‘big data,’ Professor Sandra Mitchell emphasized the patterns that can help generate hypotheses about learning, and Professor McClelland added that such patterns can help researchers distinguish “between the signal and the noise” in figuring out what works in teaching. Professor John Mitchell highlighted that big data allows for more information to be known about each individual, and that the interdisciplinary nature of researching MOOCs can be a boon to researchers.
At the same time, the panelists agreed that there was still much to learn, and that many approaches are needed to investigate online teaching and learning. Professor Sandra Mitchell concluded that “everyone needs to come to the table” to help build knowledge about learning in different contexts.
Noëlle Boucquey is a social scientist with a Ph.D. in Environment from Duke University, and is currently a postdoctoral fellow with Stanford’s Thinking Matters program and the School of Earth Sciences.