Photos: Anna Koester Marshall and Linda A. Cicero / Stanford News Service
On May 15, 2014, I became a superhero. It all started as I was standing alone in a vacant, foggy city. My mission? To fly around the city and rescue a child in need. I take off by raising my arms and do my best to navigate around the edges of buildings. If I bring my arms together in front of me, my speed increases. I alternate between hovering and darting among the buildings, savoring my new superpower.
Actually, I was in Stanford’s Virtual Human Interaction Lab (VHIL) with a group of about 20 tech-curious onlookers.
Our visit had begun in a small room with unusual wall art. Along one wall a screen shows a dizzying image of a plane passing through thick clouds. I say “thick,” because even without the appropriate 3-D glasses, the sky is anything but flat. To our right, another screen shows Stanford’s football team executing a play. If you were to put on a virtual reality headset, you’d literally get an inside view of the action at the line of scrimmage, with a 360-degree range of vision.
The gadgetry is captivating, but what do 3-D TV and virtual reality have to do with learning?
Cody Karutz, the lab’s full-time manager, introduced us to the lab and its mission. The VHIL is a human subjects lab, meaning its researchers run experiments on people – mostly Stanford undergraduates. Says Karutz, “We are not engineers. We are social scientists. We do this to really study human behavior.” The idea is to help people feel embodied in a virtual experience and then to observe any real-life behavioral outcomes.
“How does my behavior change when I spend time in a virtual environment? What is the transference of spending time in a virtual space into the real space? How can we invent new types of social phenomena?” These are some of the questions that VHIL considers.
Virtual reality (VR) is one kind of immersive media that relies on two things: tracking information about the body’s movement in the real world and multisensory rendering, including 3-D graphics. The lab achieves immersive tracking with infrared cameras all around the room that give very precise information about how a body moves through the space. “This is very similar to motion capture for animation in the movies,” says Karutz. This cutting-edge tracking equipment follows the participant’s every move and maps it onto an embodied virtual being, sometimes called an avatar. At $100,000, the system provides a kind of experience you simply don’t get every day.
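The tracking loop Karutz describes can be pictured as a simple per-frame update: the cameras report the participant’s head pose, and the software copies it onto the avatar so the virtual body mirrors the real one. The sketch below is a minimal, hypothetical illustration of that idea; the class and field names are mine, not the lab’s, and real motion-capture pipelines are far more involved.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    """A tracked position (metres) and orientation (degrees)."""
    x: float
    y: float
    z: float
    yaw: float
    pitch: float
    roll: float

@dataclass
class Avatar:
    """The participant's embodied virtual being."""
    head: Pose

def apply_tracking(avatar: Avatar, tracked: Pose) -> None:
    """Each frame, mirror the latest tracked head pose onto the avatar."""
    avatar.head = tracked

# One frame of the loop: the cameras report a new head pose,
# and the avatar immediately reflects it.
avatar = Avatar(head=Pose(0, 0, 0, 0, 0, 0))
apply_tracking(avatar, Pose(1.2, 1.7, 0.4, 90, 0, 0))
```

In a real system this update would run at the display’s refresh rate, with smoothing and prediction to hide sensor noise and latency; the point here is only the mapping from real movement to virtual embodiment.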
As part of our tour of the facilities, Karutz leads us into the Multisensory Room, where most of the lab’s experiments take place. The room is well equipped to simulate three human senses – sight, hearing, and touch.
“The way that we do sight is by wearing a head-mounted display (HMD) that has two lenses that overlap to give you the sense of 3-D. It’s a set of goggles that block the rest of the world and become your sight,” explains Karutz. To make sure the display’s lag time is as low as possible, the HMD is tethered to the ceiling by connective cables instead of being wireless. With the assistance of a lab technician, however, the participant can move freely throughout the room. The three-dimensional graphics of the virtual world make the brain feel as if the body were actually moving through the virtual space.
In terms of hearing, a spatialized sound system makes sounds travel around the room through a system of 24 speakers so that there’s no need for headphones. It’s called 360 sound. As for touch, the floor, built with special airplane steel, is equipped with subwoofer shakers whose controlled vibrations help reinforce a sense of movement without the person having to hold anything.
Together, three-dimensional imagery, spatialized sound, and virtual touch make the virtual world seem pretty compelling.
I know, because I tried it. That’s when I became a superhero. The superhero world was built to study the pro-social effects that virtual reality can have on people. When people embody a superhero, even if only for a few minutes, they tend to feel more empowered to effect positive change afterward.
In my case, since there are other eager volunteers waiting for a try, I land without completing the mission. When my feet “hit the ground,” my legs are shaking, my heart is racing, and I find myself wondering how I got to the other side of the Multisensory Room.
Because experiences with immersive media can feel very real, they can have real world effects. Those effects are what Stanford researchers are studying. For example, says Karutz, “by putting somebody in the shoes of a logger and having them cut down a virtual tree, they can actually [hold beliefs that are] more sustainable later.”
Video: Stanford News Service
Immersive media could have huge implications for teaching and learning. The football play I mentioned at the beginning of the piece (with a 360-degree vision of the field) could easily have been a lecture. “We can show whatever we can film,” explains Karutz.
With VR, the VHIL is going even further than filming. Just three weeks ago, the VHIL finished designing a world at the bottom of the ocean. Calling it “underwater environmental education,” Stanford researchers are using it to teach a marine science lesson. “You can actually be at the bottom of the ocean, learn about ocean acidification, and learn about how it affects the marine ecology in a way that would be difficult to do otherwise,” says Karutz.
At this point, Jeremy Bailenson, founder and director of VHIL, steps into the room. “This world is going to take about 12 or 13 minutes, so somebody who wants to be underwater for that long, please step up,” he says, smiling.
Bailenson points out that while many people talk about using VR to teach and learn, the problem is that lesson preparation can be extremely time consuming. The content alone for this lesson took about a year to develop.
“The key for designing educational lessons is that content and narrative are critical,” says Bailenson. “The reason that MOOCs are so attractive is not because they’re good, it’s because they’re easy. They’re much simpler than designing a simulation that puts you inside the learning material.”
VHIL collaborated with Roy Pea in the Graduate School of Education and Fiorenza Micheli, one of the world’s two or three foremost experts on ocean acidification, to create a pedagogically and scientifically sound lesson. According to Bailenson, this collaboration has been highly successful in terms of creating an immersive learning experience.
“This is one of the best examples that I’ve ever seen of leveraging immersive VR to create connections with a place, and to have somebody learn about science. We leverage theories of embodied cognition to allow your bodies to move, and we leverage self-presence and body transfer to make you feel as if you are part of a simulation,” says Bailenson.
Their belief is that those who experience immersive marine learning will think more about ocean acidification than those who don’t, because they actually “become” a piece of coral that degrades along with the environment around them.
The lesson is entirely self-contained, meaning that once you get it started, no outside instructor is needed. It begins with a soft, female voice: “You are on a rocky reef close to the Italian coast of Ischia. If you look in front of you, you will see a large piece of coral…”
For the next 13 or 14 minutes we watch a colleague embody a tall, purple coral. Along with him, we see how rich the ecosystem looks at first, with many kinds of marine organisms going to and fro. We can also hear the deep-water sounds flow around the room. A lab technician reinforces the participant’s coral embodiment by touching him with a stick, in motions synchronized with the fishing net he sees periodically hitting his body.
Then acidification sets in. After watching everything around him change, the participant admits, “I really empathized with the coral. I felt like I couldn’t move, but I felt like I wanted to bob in the water at the same time. I only realized that part way through. It felt pretty real.”
Bailenson and his team are now looking at how well VR participants learn the lesson compared to someone who simply watches a normal video of the lesson. Says Bailenson, “The beauty of this as a pedagogical lesson is that all of you probably learned something about ocean acidification, but this gentleman really learned something about ocean acidification.”
“The other thing we can do,” reports Bailenson, “is we can build predictive models that can be used during assessment of all lessons. While he is moving around, we’re tracking where he’s looking, and where his body leans. From that information we build statistical models to try to predict how one learns based on the way their body moves during instruction. We just published a paper showing a very strong relationship using machine learning that takes as input the way you move around a virtual environment, and as output what your test score is. We can predict how well you will do quite accurately.”
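The pipeline Bailenson describes takes movement features as input and a test score as output. As a loose illustration of that idea, the toy example below derives a single feature (total head travel) from synthetic tracking data and fits a one-variable least-squares regression. The actual VHIL models, features, and data are not described in the piece, so every name and number here is hypothetical.

```python
# Hypothetical sketch: predicting a test score from a body-movement
# feature, loosely illustrating the approach described above.
# All data below are synthetic.
import math

def total_travel(positions):
    """Sum of Euclidean distances between consecutive head positions."""
    return sum(math.dist(a, b) for a, b in zip(positions, positions[1:]))

def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept (one feature)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Synthetic training data: per-participant head tracks and test scores.
tracks = [
    [(0, 0, 0), (1.0, 0, 0), (2.0, 0, 0)],   # moved a lot
    [(0, 0, 0), (0.1, 0, 0), (0.2, 0, 0)],   # barely moved
    [(0, 0, 0), (0.5, 0, 0), (1.0, 0, 0)],
]
scores = [90.0, 60.0, 75.0]

features = [total_travel(t) for t in tracks]
slope, intercept = fit_line(features, scores)

def predict(track):
    """Predicted test score for a new participant's head track."""
    return slope * total_travel(track) + intercept
```

The published work would use far richer features (gaze direction, body lean, timing) and a proper machine-learning model with held-out evaluation; this sketch only shows the shape of the idea: movement in, predicted score out.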
VHIL is excited about sharing this learning experience with others. Bailenson estimates that about 75,000 people currently own the Oculus Rift, a much cheaper but still effective way to access immersive learning. The Oculus Rift is not a consumer product yet, but it will probably retail for a couple hundred bucks. That’s a big difference compared to Stanford’s $45,000 headset. With Facebook’s recent purchase of Oculus VR, the spread of the device seems ever more imminent. Bailenson hopes to get this lesson into the hands of every Oculus Rift owner as soon as possible. “The second that this is ready, we’re just going to upload it, and 75,000 plus are going to get to learn.”
Speaking about the future of education, Bailenson says, “When you guys think MOOCs, I think this.”
What are your thoughts on, or experiences with, virtual reality? Especially in education? Comment below.
Anna Koester Marshall is a PhD candidate in Iberian and Latin American Cultures.