DiVE is open to visitors on Thursdays 4:30–5:30 p.m. and by appointment. For more information, visit DiVE’s website.

“With appropriate programming, such a display could literally be the Wonderland into which Alice walked.”
Ivan E. Sutherland on virtual reality displays in 1965

One sunny day, I’m standing in a vast landscape of tall pines, green meadows and distant mountains. Looking up, I see blue sky overhead. Looking down, I see the forest floor beneath my shoes.

A tall man stands nearby, wearing strange glasses and brandishing a wand like a wizard. Another human figure appears on the horizon. She grows larger and accelerates as she strides over the meadow, as if she had spotted me and had something urgent to say. When it becomes clear that she isn’t going to stop, I try to duck aside. Too late. But instead of colliding, she passes right through me, like a ghost.

I’m having this encounter in the Duke immersive Virtual Environment, or DiVE, a 10-by-10-foot chamber formed by six rear-projection screens in the center of a large room that resembles a black-box theater. The term of art for such a system is CAVE, or “cave automatic virtual environment.” The recursive acronym is apt. It calls to mind Plato’s cave, where reality is merely light and shadow thrown across a wall, and its nested repetition underscores a unique ontological status as a world within a world.

One of a handful of nonindustrial CAVEs in the country, DiVE integrates 3D computer graphics with motion tracking to immerse the user in virtual reality, a fantastical-sounding concept that has real, ongoing scholarly and commercial applications. “Virtual reality,” a term popularized by Jaron Lanier, who started the first company to sell VR goggles and gloves in the mid-’80s, is any computer graphics environment that offers a sense of physical entry rather than a view from outside.

One curiosity of writing about VR is that it’s inescapably in the first person, since the first-person experience is its main point. To enter the virtual forest, I’d removed my shoes and stepped inside the cube of screens alongside DiVE director Regis Kopper. Around my neck is a tracking sensor on a lanyard, on my face a pair of boxy goggles. I’m holding a wand that would feel familiar to anyone who has played the Nintendo Wii.

“You want to fire it up?” Kopper, who is similarly outfitted, calls out to software engineer David Zielinski, and the forest springs up around us in an eruption of color and light.

The tableau isn’t precisely realistic: The simple graphics resemble online virtual world Second Life, and I don’t cast a shadow in the digital sun. But it feels eerily real because I can see my body in the context of the graphical environment, craning my neck to turn my gaze rather than pushing a thumb-stick on a controller.

In the simulation, the flat screens appear to form part of a continuous, curving panorama, although the threshold between screen and real space is subtly perceptible when the female visitor seems to faintly pop out into the enclosure, as if breaking a soap-bubble skin between two dimensions. This only makes it more uncanny when I walk around and inspect her, a virtual being in the same physical space as me, from every angle.

Behind this sylvan scene, a million dollars’ worth of technology labors furiously. In the glasses alone, tiny ultrasonic speakers blast inaudible frequencies at directional microphones, working with gyroscopes and accelerometers to track my head position. Liquid crystal shutters fire in finely calibrated tandem with projections generated from my motion capture data. Offset pairs of images flash into my eyes at a rate of about 60 per second, my brain merging them into forms with volume on planes with depth.
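The geometry behind those offset image pairs is simple to sketch. Here is a minimal, illustrative Python version (the function names and the 64 mm interpupillary distance are my assumptions, not details of DiVE’s actual software): each frame, the renderer shifts the tracked head position half the eye separation to either side and draws the scene once per eye, while the shutter glasses let each eye see only its own frame.

```python
from dataclasses import dataclass

IPD = 0.064  # illustrative interpupillary distance in meters

@dataclass
class Pose:
    # Tracked head position in meters; a real tracker also reports
    # orientation, omitted here for simplicity.
    x: float
    y: float
    z: float

def eye_positions(head: Pose, ipd: float = IPD):
    """Offset the tracked head position to get left/right eye positions.
    The scene is rendered once from each, and the LCD shutters alternate
    so each eye sees only its own view."""
    half = ipd / 2.0
    left = (head.x - half, head.y, head.z)
    right = (head.x + half, head.y, head.z)
    return left, right

# Alternating eyes at ~60 images per second per eye means the
# projectors must refresh at roughly 120 Hz in total.
left, right = eye_positions(Pose(0.0, 1.7, 0.0))
```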

But knowing all this doesn’t prevent me from reaching out to touch things that aren’t really there.

The genial Dr. Kopper, who’s controlling the female avatar, seems to enjoy being a little devilish when he shows off DiVE. Moving her to a distant bridge, he plunges her into the pit below. I had been walking around in the physical space immediately around me, but now I’m pulled deeper into the simulation when Kopper “flies” me to the bridge, the landscape streaming past my stationary form. I move to the edge with genuine caution, chuckling to imagine a guy dressed for Laser Tag gingerly picking his way through an empty cube. I peer down into a dark chasm.

“Don’t go too far or you might fall,” Kopper says merrily. Then he retracts the bridge and I plunge, my stomach dropping. It’s as though Second Life is tumbling down Alice’s rabbit-hole and landing in World of Warcraft, as I come to rest in the fiery caverns of the ancient Greek realm of Tartarus. Yes, Kopper has literally (well, virtually) sent me to Hell.

This simulation of paradise and damnation (the first 3D program built by a student when DiVE came to Duke via a National Science Foundation grant under original director Rachael Brady in 2005) seems very on the nose. It probes the ethical, perhaps even spiritual, anxieties that arise as virtual reality increasingly penetrates everyday life, weaving itself into our senses ever more seamlessly. How far in do we want to go, and where will we find ourselves when we get there?

While Tartarus makes an unforgettable first impression, most of DiVE’s simulations are more practical. The next one Kopper shows me is of a homey kitchen. It looks lived-in, with umbrellas by the door and a box of raisin bran on the counter. Highlighting a kettle on the stove with the virtual cursor floating at the tip of my wand (which tracks my hand in the same way the glasses track my head), I hold down a button and pick it up, manipulating it in space as if with my hand. I drop it to the floor with a convincing bounce. I’m rummaging through the refrigerator when the kettle (had I even put it back on the burner?) starts whistling, and I instinctively close the door to stop letting the cold air out.

Test subjects in the kitchen are tasked with finding a lost set of keys under mounting contextual pressure. Their physiological reactions are measured to see if they’re consistent with the real world. “Some people have to leave because they get too distressed,” Kopper says. I can feel the stimuli working on my nerves. A bug crawls over the counter and a cat slinks over the cupboards. The kettle is still whistling as a phone begins to ring. Then a honking car appears in a window that looks out on the virtual street, where a storm is rolling in.

CAVEs are also useful for training situations too dangerous or costly to simulate in reality. One of DiVE’s most recent collaborations, with the Office of Mine Safety and Health Research, is a program for teaching teams of volunteers to rescue miners in hazardous environments. When Zielinski boots it up, I find myself in a maze of dark underground tunnels blocked by smoke and flames. Using the wand as a flashlight, I explore the corridors, passing conveyor belts and barrels of ore, until I find a disaster shelter from which miners come pouring out. (The environment so strongly resembles a first-person shooter video game that my first impulse, shamefully, is to gun them down.)

Other DiVE applications serve educational purposes. Kopper shows me a data visualization tool where equations are translated to sinuous ribbons of color in the air, like 3D weather maps, and a robot simulation that lets engineers study human-robot interaction on the cheap. A hovering 3D model of the brain gives first-year medical students a spatial perspective that flat screens can’t. And a project by Duke classical studies professor Maurizio Forte immerses you in a 3D model of a Neolithic house at multiple stages of excavation, including an artist’s re-creation of how the site might have originally looked. This is where a digital archaeology project edges into an even further frontier of virtual reality: art.

“On the art side,” Zielinski says, “it’s like the early days of film or video games, still developing its language and genres and techniques.” He boots up a project by DiVE multimedia specialist Sarah Goetz for which he created the software. Floating spheres hang in the dark like strings of softly glowing pearls. Programmed with springy physics, they snap back and unleash dazzling particle effects when tugged, divulging video clips distorted into abstract experiences of color and light.

One of the most outlandish promises of virtual reality is that it might go beyond simulating the real world to create experiences with no natural equivalents. Author Steven Millhauser conjures this possibility in his story “The Wizard of West Orange,” where a Victorian-era “haptograph,” a body suit laced with wires and electromagnets, produces at first familiar and then entirely novel tactile sensations. But for now, most resources are committed to the development of technologiesand to acclimating the public, for whom virtual reality evokes science fiction, to its practical side. DiVE offers weekly tours, and Duke engineering students use it for independent projects.

“It’s a big investment,” Kopper says, “and to justify it, it’s not enough to say it’s very close to reality, which in some ways it’s not. A lot of research we do is to prove that it’s effective for certain tasks.” In other words, data obtained in virtual environments are only valuable if they’ve been empirically tested against their real-world equivalents. To learn more about how simulations are finessed into alignment with what they represent, I visit Dave Kaber at North Carolina State University.

Kaber is a professor in the Department of Industrial and Systems Engineering who, with colleague Chang S. Nam and a team of graduate students, uses a virtual reality lab to research things such as pilot behavior in automated aircraft and the potential of haptic interfaces for teaching or rehabilitating the disabled. Rather than a CAVE, State’s lab consists of a treadmill in front of a 3D screen, a driving simulator that looks like a ’90s arcade racer and several computers with different haptic devices.

“We’re investigating human behavior in different circumstances with different types of systems,” Kaber explains. “We use those results as a basis for design: vehicle cockpits, for example. We’re also developing new theories on human performance. These are serious games, because we’re publishing the results and suggesting that what we see in virtual reality is representative of the real world.”

Donning glasses again, I mount the treadmill to try out a student’s dissertation project. Onscreen, people wearing different colors walk around a public environment. The program tests situation awareness and pattern detection, and could theoretically be used for training by organizations ranging from the Transportation Security Administration to the Army. Another N.C. State study was a simulation called Black Hawk Up, where subjects load a helicopter at a landing site, assessing how physical demands affect the performance of cognitive tasks.

But first, the treadmill’s integration with the simulation, in which electromagnetic sensors on the body collect gait data to drive the flow of the visuals, has to be validated. This accounts for the long, narrow platform nearby.

“Before we could study anything with this,” Kaber says, “we had to show that locomotion behavior on the treadmill is comparable to walking over ground.”

The treadmill and the platform both conceal force plates that register data on footsteps, which can be compared. “We were able to show that gait parameters are not significantly different,” Kaber continues. “A lot of people do have fairly imaginative ideas about virtual reality, but this is very applied work.”

Kaber has also studied how we drive. The driving simulator is used mainly for research sponsored by the Department of Transportation; for example, studying how different numbers of food and lodging logos on highway signs affect driver distraction. It collects data on performance in various hazard conditions. You sit in the seat, grasping a steering wheel before a triptych of screens representing a driver’s field of view. An infrared light produces a glint in your eye. Cameras capture the positions of the glint and the edge of your pupil so the system can track your eye and head movements, recording blink rate and off-road glances along with lane and speed control.

Kaber acknowledges that the simulation has some limitations. “People don’t have the same anxiety about crashing,” he says. “The DOT knows this and can do field tests of signage to augment the data we’ve collected. They can use it with some confidence because it’s culling data from a representative set of North Carolina drivers. But we have to make sure the signs are perceptually identical to how they are on the road; otherwise, the results are worthless.”

The other half of State’s lab is taken up with computers linked to robotic-looking haptic controllers and paired with Nvidia 3D glasses. Michael Clamann, a Ph.D. candidate, shows me a block-moving task on a screen, explaining, “We’re reproducing occupational therapy regimens to test psychomotor skills. We can add visual or haptic assistance, highlighting the next block placement or making it pull you in as you reach the target location. By giving people with motor deficiencies assistance in one area, you provide more training in another.”

Graduate student Linus Wooram Jeon shows me how to grip a stylus to move an onscreen cursor, feeling it vibrate when I touch a block and holding a button to pick it up. “With a computer-based simulation, we can capture richer data than we can watching someone perform the physical task,” Kaber says. “If we can deliver these systems at low cost, the tool becomes accessible for medical facilities doing diagnostic work.”

Two other grad students, JaYoung Lee and Shijing Liu, work with students from the nearby Governor Morehead School for the Blind on haptic-based learning for the visually impaired, using an orb-shaped controller that gives tactile feedback on the surface and directional contours of the object being handled. The researchers learn which interfaces are best suited to different kinds of people and tasks. To see this kind of research being therapeutically applied, I visit UNC-Chapel Hill.

As I enter a rehabilitation room near UNC hospital, Michael Lewek is helping a man in a red harness, suspended from a pulley, step down from a treadmill in front of three conjoined screens. Lewek, a researcher in the Department of Physical Therapy, studies why people walk asymmetrically after a stroke.

“The virtual environment allows us to manipulate variables it’s hard to manipulate in the real world and get feedback really quickly,” he explains. “We’re trying to give people specific information about the way they’re walking so they can change it.”

If the technology has unique advantages for the therapist, the visual information has unique advantages for the patient. “You’re used to things moving by you when you’re walking,” Lewek says. “It feels harder to run on a treadmill because you’ve lost that optic flow, which we’ve replaced here.”

Like the treadmill at State, Lewek’s is built on a force plate. The difference is that it’s actually two belts mounted side by side. The onscreen environment turns if one leg’s stride is longer than the other, giving the walker intuitive visual feedback.
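The feedback loop that drives that turning can be sketched in a few lines. This is a hypothetical illustration of the idea, not the lab’s actual code; the function name and the gain constant are my inventions. It maps the difference between left and right stride lengths to a yaw rate for the onscreen scene, so an asymmetric gait visibly curves the virtual path.

```python
def yaw_from_strides(left_stride: float, right_stride: float,
                     gain: float = 30.0) -> float:
    """Map stride-length asymmetry to a turn of the onscreen scene,
    in degrees per step cycle. A longer right stride turns the scene
    left, mimicking how uneven steps curve a real walking path.
    'gain' is an illustrative tuning constant, not a measured value."""
    total = left_stride + right_stride
    if total == 0:
        return 0.0
    # Normalized asymmetry in [-1, 1]; 0 means a symmetric gait.
    asymmetry = (right_stride - left_stride) / total
    # Sign convention: positive yaw turns the scene to the right.
    return -gain * asymmetry

# A symmetric gait leaves the scene heading straight ahead:
straight = yaw_from_strides(0.7, 0.7)
```

In a sketch like this, the walker gets intuitive feedback: shorten one stride and the world drifts, lengthen it back and the path straightens.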

“I wanted to do this,” Lewek says, “because I knew there were folks who could come up with the system here in the computer science department, which is world-class.”

Indeed, UNC has an illustrious history in virtual reality research and invention, much of it spearheaded by a pair of professors, Henry Fuchs and Frederick Brooks, who first teamed up in the ’70s to refine head-mounted displays.

In an influential 1965 paper, several years before he created the first head-mounted stereoscopic display and mechanical tracking system, called The Sword of Damocles, computer scientist Ivan Sutherland wrote, “The ultimate display would, of course, be a room within which the computer can control the existence of matter. A chair displayed in such a room would be good enough to sit in. Handcuffs displayed in such a room would be confining, and a bullet displayed in such a room would be fatal.”

His challenge brought Fuchs and Brooks together at UNC in 1978 to work on head-mounted displays. “I came to this interest after hearing Ivan’s great talk at a computer conference in, I think, 1965,” says Brooks, who founded UNC’s computer science department in 1964. “Henry was captivated by the same vision at Utah.”

Over the next 30 years, Brooks and Fuchs’ labs pioneered large-area tracking systems based on optical rather than mechanical devices, created innovative molecular visualization tools for biochemists, tamped down latency in real-time graphics and refined head-mounted displays for augmented reality (AR), where computer graphics blend with rather than block out the real world.

Rapid advances in miniature high-definition displays and powerful graphics cards have led to the point where many problems Brooks and Fuchs originally worked on are no longer problems. “When we started,” Fuchs says, “only multimillion-dollar sites could generate 3D images of complicated scenes in real time, and we built a succession of machines that were the fastest of their day. Now every laptop and mobile phone can do it.” Nintendo’s current handheld video game platform, the 3DS, has an autostereoscopic screen: 3D, glasses-free. Tracking systems are also becoming consumer items; think of Microsoft’s Kinect.

While CAVEs remain prohibitively expensive and large for private use, head-mounted augmented and virtual reality systems do not. Google Glass is a sort of augmented reality display, though Fuchs argues that true AR has to black out the world behind virtual objects pixel by pixel, not just project them over it as transparencies. “It’s not really augmented reality if you’re just looking at your text messages,” he says. “It’s got to be related to what you’re looking at: seeing the tumor in the patient, or the sofa you’re shopping for in the house.”

Meanwhile, a head-mounted VR display called the Oculus Rift, currently in the hands of video game developers, is predicted to reach consumers at a price of $300 within a couple of years. It has a wide field of view and uses inertia trackers to let you physically look around while steering yourself with a pad or keyboard. You might guess that Fuchs would be excited to see head-mounted virtual reality going mainstream, but he expresses the same skepticism about its utility as he does about closed-off CAVEs.

“It’s fine in terms of price,” he says of the Oculus Rift. “But it’s a totally closed system. You can’t see the room or walk with it. It’s good for video games, which I find to be a very limiting application. People who play them are a small fraction of the world and they tend to be young men. The best thing that can be said is that it will bring companies into the field who want to make a more effective AR display, which I think has a lot more potential than immersive VR.”

A fair point, but as one of those pesky male gamers, I was eager to try it. Back in Durham, DiVE had received an Oculus Rift prototype, and I head to Kopper’s office to check it out. “It’s pretty cool,” he says as I strap the View-Master-like device over my eyes. A 3D Italian villa, displayed on a desktop computer monitor, completely fills my vision. Steering myself through the house, I come out on a high balcony, and leaning over it, I see the ocean. Below the wondrous veneer of veracity, conflicts ripple in my perception. I feel my body but I can’t see it. I look around by turning my head but use keystrokes to walk. Mingled with the calls of the gulls, I hear Kopper speaking, and I see my smartphone on his desk in the crack below the goggles.

The physical element remains the most elusive to control in virtual environments. “One advantage with the head-mounted displays,” Kopper says, “is that you see the scene, not whatever you’re interacting with, and our senses are very easily confused.”

I take off the Oculus Rift and become aware again that outside the window, the sun is playing in the leaves of real trees. It’s a beautiful day. I wonder aloud where our drive comes from to confuse our senses and escape from actual reality. Is it a healthy desire?

“As unrealistic worlds become more realistic, the real world can become less realistic,” Kopper offers. “Think of that Korean couple who played video games at an Internet café until their newborn died. It’s a matter of the adaptation of the human mind, and it can be good or bad depending on how it’s used.”

Alice’s Wonderland was also fraught with peril. But who wouldn’t risk it for the chance to walk through a looking glass, into a magical new world? As virtual reality passes from fantasy to actuality to store shelves, we may soon find out.

This article appeared in print with the headline “Virtual wonderlands.”
