Virtual Reality on the Internet: Collaborative
Virtual Reality
Andrew Johnson, University of Illinois at Chicago
Jason Leigh, University of Illinois at Chicago

Introduction
Virtual Reality
Collaborative Virtual Reality
Avatars
Audio and Video
Other Types of Data
Synchronous and Asynchronous Work
Heterogeneous Views and Abilities
The Future
Conclusion
Glossary
Cross References
References

INTRODUCTION
Collaborative virtual reality—sharing immersive computer-generated
environments over high-speed networks—is a next-generation
interface that will allow collaborators
on different continents to share a space where
they can interact with each other and with the focus of
their collaboration. This text describes ongoing work in
this area at the Electronic Visualization Laboratory at the
University of Illinois at Chicago. We first discuss what
we mean by the term virtual reality and the focus of
our work in collaborative virtual environments. We
then discuss the types of information that must be sent
through the networks to maintain these collaborations.
Finally, we describe current research in the areas of asyn-
chronous collaboration and heterogeneous perspectives
and conclude with a discussion of what we see as the
future of collaborative virtual environments.

VIRTUAL REALITY
Before we discuss collaborative virtual reality, we should
define what we mean by virtual reality. Different disci-
plines have different definitions for what virtual reality is
and what hardware is required. A good novel is a form
of virtual reality that requires no special hardware to be
experienced. For our purposes, virtual reality requires
computer-generated stereo visuals, viewer-centered per-
spective, and an ability to interact with the virtual world.
Computer-generated stereo visuals allow the user to see
the computer-generated world in three dimensions (3D),
which is how most (but not all) people see the real world.
Each eye sees the world from a slightly different position,
allowing us to perceive depth. As with the viewing of stereo
photographs or the watching of a 3D movie from the 1950s
or 1980s, the trick is to give each eye its own view of the
material.
Viewer-centered perspective allows the user to move
his body or turn his head and see the appropriate view
of the virtual world from this new position. Combined
with stereo visuals, this allows the user to not only see a
3D object in the virtual world but to walk around it or look
under it by moving in exactly the same way as a person
would move around a real 3D object. In a 3D movie or
photograph, the viewing position is static—the viewer
sees only what the camera saw. With viewer-centered
perspective, the viewer is the camera and always has the
correct view of the scene. For this to work, the computer
generating the visuals needs to know where the viewer’s
two eyes are.
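The geometry involved is simple to illustrate. The following Python sketch derives both eye positions from a single tracked head pose, assuming a yaw-only head orientation and an average 65 mm interpupillary distance; the function and parameter names are illustrative, not taken from any particular tracking system:

```python
import math

IPD = 0.065  # assumed average interpupillary distance, in metres

def eye_positions(head_pos, yaw, ipd=IPD):
    """Return (left_eye, right_eye) world positions from one tracked pose.

    head_pos -- (x, y, z) reported by the head tracker
    yaw      -- head rotation about the vertical axis, in radians
    """
    # Unit vector pointing toward the viewer's right for this yaw angle.
    rx, rz = math.cos(yaw), -math.sin(yaw)
    half = ipd / 2.0
    x, y, z = head_pos
    left = (x - rx * half, y, z - rz * half)
    right = (x + rx * half, y, z + rz * half)
    return left, right

# A viewer standing at the origin, eyes 1.7 m up, facing straight ahead.
left_eye, right_eye = eye_positions((0.0, 1.7, 0.0), 0.0)
```

A full system would use the tracker's complete orientation (pitch and roll as well as yaw), but the principle is the same: one tracked pose plus a fixed eye separation yields the two viewpoints the renderer needs.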
There are several different ways to do stereo visuals
and head tracking, which lead to different virtual reality
display hardware. With a head-mounted display (HMD),
the user wears a headset, which isolates her from the real
world, with a small cathode ray tube (CRT) or liquid crys-
tal display (LCD) devoted to each eye. This allows the user
to turn and tilt her head in any direction and still see the
virtual world. A tracker attached to the HMD tells the com-
puter the position and orientation of the user’s head. With
that information, the computer can determine where the
user’s eyes are and then draw the graphics appropriately.
A fish tank virtual reality system makes use of a computer
monitor and a special pair of tracked LCD shutter
glasses. The computer monitor displays an image for the
user’s left eye, at the same time telling the glasses to block
out the user’s right eye. The computer then does the re-
verse, showing an image for the right eye while telling the
glasses to block out the left eye. By doing this quickly,
the user can see objects floating in front of the monitor.
The LCD shutter glasses are lighter than an HMD and don’t
isolate the user from the real world. A tracker attached to
the LCD shutter glasses gives the position and orientation
of the user’s head.
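The alternation the glasses perform can be sketched as a simple frame schedule. A minimal Python sketch (in a real system the shutters are driven by the display's vertical sync signal; the names here are illustrative):

```python
import itertools

def stereo_frames(n_frames):
    """Yield (eye_drawn, shutter_closed) for each successive video frame."""
    eyes = itertools.cycle(["left", "right"])
    for _, eye in zip(range(n_frames), eyes):
        other = "right" if eye == "left" else "left"
        # Draw one eye's view while the glasses black out the other eye.
        yield eye, other

frames = list(stereo_frames(4))
```

At a 120 Hz refresh rate this schedule gives each eye 60 views per second, fast enough that the viewer perceives steady, flicker-free stereo.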
This same technique can be used on a larger scale to
create a single, large-drafting-table-size display, such as
the ImmersaDesk®. With a larger back-projected display,
several people can stand in front of the display at the same
time and see the virtual world in stereo, but only one per-
son is head tracked.
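Because the screen is fixed in space while the tracked viewer moves, the correct view here is an off-axis (asymmetric) perspective frustum rather than the usual centered one. A minimal Python sketch of the geometry, assuming a screen lying in the z = 0 plane with the eye some distance in front of it; the function and parameter names are illustrative:

```python
def off_axis_frustum(eye, screen_left, screen_right,
                     screen_bottom, screen_top, near):
    """Asymmetric frustum extents at the near plane for a tracked eye.

    eye      -- (x, y, z) of the eye; z is its distance to the screen plane
    screen_* -- edges of the physical screen, in the same units
    near     -- distance from the eye to the near clipping plane
    """
    ex, ey, ez = eye
    scale = near / ez  # similar triangles: project screen edges to near plane
    return ((screen_left - ex) * scale,
            (screen_right - ex) * scale,
            (screen_bottom - ey) * scale,
            (screen_top - ey) * scale)

# An eye centred 1 m from a 2 m-wide screen sees a symmetric frustum;
# moving the eye sideways would make these extents asymmetric.
centered = off_axis_frustum((0.0, 0.0, 1.0), -1.0, 1.0, -1.0, 1.0, 0.1)
```

Recomputing these extents every frame from the tracker data is what keeps the untracked bystanders' view merely approximate while the tracked viewer always sees correct perspective.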
Moving from a single large screen to several large
screens in a system like the CAVE® allows the user to
physically walk around virtual objects. A CAVE typically
has three 10-foot-square walls and a 10-foot-square floor, although some