discovered in scientific visualization environments, and
give researchers an easy way of creating metadata record-
ings from within the virtual space. Most computational
scientists agree that a crucial part of the knowledge crys-
tallization process includes the creation of snapshots and
annotations to track the progress of the exploration and
to record discoveries. In desktop environments, these an-
notations are typically entered in text windows. However,
in an immersive environment, this common mode of data
entry is problematic as well as limiting. Current virtual
reality displays lack the resolution to display text clearly
in a virtual window. Recording and replaying audio mes-
sages has been used to circumvent these problems.
Adding the avatar of the person recording the message,
complete with their gestures, allows those audio messages
to be put in the proper context. More details can be found
in Imai et al. (2000).
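One way to picture such an annotation is as a small record that bundles the audio clip with the recorder's tracked pose over time, so that an avatar can later re-enact the message in place. The sketch below is only illustrative and is not taken from Imai et al. (2000); the PoseSample and Annotation names and fields are our own assumptions.

    from dataclasses import dataclass, field
    from typing import List, Tuple

    # One tracker sample: time offset (s), position (x, y, z), and
    # orientation as a quaternion (w, x, y, z). Purely illustrative.
    @dataclass
    class PoseSample:
        t: float
        position: Tuple[float, float, float]
        orientation: Tuple[float, float, float, float]

    # A hypothetical immersive annotation: an audio message anchored at a
    # location in the virtual world, plus the gesture track of the recorder
    # so an avatar can re-enact the message in context during playback.
    @dataclass
    class Annotation:
        author: str
        timestamp: float                     # wall-clock time of the recording
        anchor: Tuple[float, float, float]   # where in the world it was left
        audio_file: str                      # path to the recorded audio clip
        gesture_track: List[PoseSample] = field(default_factory=list)

        def duration(self) -> float:
            """Length of the recorded gesture track in seconds."""
            return self.gesture_track[-1].t if self.gesture_track else 0.0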
We have used these annotations in Virtual Harlem to
create prerecorded tour guide avatars for visitors. If there
is no live expert tour guide available, the virtual world can
still introduce itself and take a user on a tour, giving infor-
mation at various points of interest. The students in the
classes visiting Virtual Harlem can also leave annotations
in the system—making comments or posing questions to
which other users can respond within the space.
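A prerecorded tour of this kind can be driven by a simple proximity trigger: each annotation is anchored at a point of interest, and its playback starts when a visitor comes within range. The sketch below assumes the hypothetical Annotation record above and stands in for, rather than reproduces, the actual Virtual Harlem logic; the commented-out play_audio and animate_avatar calls are placeholder hooks.

    import math
    from typing import Iterable

    def distance(a, b):
        """Euclidean distance between two (x, y, z) points."""
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

    def update_tour(visitor_pos, annotations: Iterable, played: set,
                    trigger_radius: float = 3.0):
        """Trigger any not-yet-played annotation whose anchor the visitor
        has approached. 'played' keeps per-visitor state across frames."""
        for note in annotations:
            if note.audio_file in played:
                continue
            if distance(visitor_pos, note.anchor) <= trigger_radius:
                played.add(note.audio_file)
                # Hypothetical hooks into the VR application:
                # play_audio(note.audio_file)
                # animate_avatar(note.gesture_track)
                print(f"Playing tour annotation by {note.author}")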
Because all state changes to the collaborative virtual
world come through the virtual reality application’s net-
working layer, we can store the time-stamped sequence
of changes and play the session back as an immersive
movie from within the virtual environment, watching
the action unfold around us.
This allows users to record their entire virtual reality
experience for playback and analysis. A person playing
back the recording will see everything he would have
seen if he were present in the immersive space at the
time the recording was made. With a video recording of
the collaboration, the user is limited to seeing the action
from the position of the camera. Here, the user can walk
(fly) through the space while the collaboration is under-
way, watching the action from whatever location seems
most interesting. These recordings could also be shared
collaboratively during playback, allowing geographically
distributed participants to watch the
recording of a previous collaborative session. For sci-
entists, perhaps the single most compelling reason for
recording and archiving collaborative virtual reality ses-
sions is the need to create permanent documentation of
their discoveries. Results need to be reproducible and
reviewable.
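In outline, such a recording facility amounts to journaling every state-change message that crosses the networking layer and later re-dispatching those messages on their original schedule, while the viewer remains free to move through the scene. A minimal sketch, with hypothetical names (StateChange, SessionRecorder, apply_change) standing in for the application's actual networking layer:

    import time
    from dataclasses import dataclass, field
    from typing import Any, Callable, List

    @dataclass
    class StateChange:
        timestamp: float   # seconds since the session started
        entity: str        # which object in the shared world changed
        data: Any          # the new state (pose, attribute value, ...)

    @dataclass
    class SessionRecorder:
        """Journals every state change that passes through the networking
        layer, so the whole session can be replayed later."""
        start: float = field(default_factory=time.monotonic)
        log: List[StateChange] = field(default_factory=list)

        def on_message(self, entity: str, data: Any) -> None:
            self.log.append(
                StateChange(time.monotonic() - self.start, entity, data))

    def replay(log: List[StateChange],
               apply_change: Callable[[str, Any], None],
               speed: float = 1.0) -> None:
        """Re-dispatch the recorded changes on their original schedule.
        The viewer remains free to move through the scene while this runs."""
        start = time.monotonic()
        for change in log:
            # Wait until this change is due (scaled by playback speed).
            delay = (start + change.timestamp / speed) - time.monotonic()
            if delay > 0:
                time.sleep(delay)
            apply_change(change.entity, change.data)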
HETEROGENEOUS VIEWS AND ABILITIES
In many existing collaborative virtual reality applications,
participants typically all view and modify the same
representation of the data, but some collaborative
virtual worlds allow, and in fact emphasize, giving
each user a different view of the shared space. These het-
erogeneous views leverage the capabilities of a shared
virtual space while letting each user customize his view
to his needs.

Different users viewing a multidimensional scientific
visualization may partition the dimensions among
themselves to break the problem into smaller
pieces. Different users may see the same space at different
scales. For example, in an architectural space, some users
may walk through the space life-size while others see it in
miniature, to make it easier to reposition the components
of the space. Different users may also have different levels
of security access to the data, so some users can see more
than others. Individuals who are trying to solve a com-
mon problem gather (in workshops, for example) in the
hopes that their combined experience and expertise will
contribute new perspectives and solutions to the problem.
Users may also have heterogeneous abilities in the space,
based on their heterogeneous views.
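One way to realize such heterogeneous views and abilities is to keep a single shared world but give each participant a view configuration of his own: a scale factor, the subset of data dimensions he displays, and a clearance level that filters what he may see. The names below (ViewConfig, can_see) are illustrative rather than drawn from any particular system.

    from dataclasses import dataclass
    from typing import FrozenSet

    @dataclass(frozen=True)
    class ViewConfig:
        """Per-user view of the same shared world."""
        scale: float = 1.0                        # 1.0 = life-size, 0.02 = miniature
        dimensions: FrozenSet[str] = frozenset()  # dimensions shown; empty = show all
        clearance: int = 0                        # security level of this user

    def can_see(view: ViewConfig, item_dimension: str, item_clearance: int) -> bool:
        """An item is drawn for a user only if it falls within the user's
        chosen dimensions and does not exceed the user's clearance."""
        in_partition = not view.dimensions or item_dimension in view.dimensions
        return in_partition and item_clearance <= view.clearance

    # Example: one user walks the space life-size and sees one dimension,
    # another monitors a miniature overview with a higher clearance.
    walker = ViewConfig(scale=1.0, dimensions=frozenset({"temperature"}), clearance=1)
    overseer = ViewConfig(scale=0.02, dimensions=frozenset(), clearance=2)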
For example, students at Abraham Lincoln Elementary
School use collaborative virtual reality with heteroge-
neous views to learn about the shape of Earth. Two chil-
dren collaborate in exploring a small, spherical asteroid.
One child, acting as an astronaut, explores the surface
of the asteroid while the other child, acting as mission
control, guides the astronaut from an orbital (spherical)
view. Virtual reality helps situate the astronaut on the sur-
face of the asteroid, where she can experience circling the
globe and coming back to the same place, not falling off
the “bottom,” and seeing objects appear over the horizon
top first. Virtual reality gives mission control an obviously
spherical world to monitor. The two children share the
same virtual environment but see it in different ways. They
must integrate these different views to complete their mis-
sion, and through integrating these views they learn to
map between these two different views of the same ob-
ject. More details on this work can be found in Johnson,
Moher, Ohlsson, and Gillingham (1999).
What we have seen so far reaffirms the fact that pre-
vious computer-supported cooperative work findings are
applicable to collaborative virtual environments. This is
discussed further in Churchill, Snowdon, and Munro
(2001), who note the following:

There is a need for individual pointers, to allow collaborators to point at shared data items. However, these pointers can become a source of distraction, and users should have the ability to toggle them on/off (a sketch follows this list).

It is useful to have some cue as to which region of space a user is manipulating.

Even in a fully shared environment, participants found the need to work with localized views.

There is a frequent transition between parallel/independent and coordinated activities.

The user interface should be considered part of the visualization, so that collaborators can gain greater awareness of their collective actions as they manipulate the visualization.
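A concrete reading of the first two points: each collaborator owns a pointer into the shared data that carries a cue about the region being manipulated, and every other user can toggle that pointer's visibility locally. The following sketch is hypothetical (RemotePointer and PointerView are our own names), not the interface of any particular system.

    from dataclasses import dataclass
    from typing import Dict, Optional, Tuple

    @dataclass
    class RemotePointer:
        """A collaborator's pointer (ray) into the shared data set."""
        owner: str
        origin: Tuple[float, float, float]
        direction: Tuple[float, float, float]
        active_region: Optional[str] = None  # cue: region the owner is manipulating

    class PointerView:
        """Each local user decides which remote pointers to display,
        so pointers that become distracting can be toggled off."""
        def __init__(self) -> None:
            self.visible: Dict[str, bool] = {}

        def toggle(self, owner: str) -> None:
            self.visible[owner] = not self.visible.get(owner, True)

        def should_draw(self, pointer: RemotePointer) -> bool:
            return self.visible.get(pointer.owner, True)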
The collaborators may also be using heterogeneous de-
vices. Some users, such as those in a CAVE, will have a
wider field of view and be better able to see an overview
of the environment. Those with higher resolution displays
will be better at seeing details. Those with fish-tank sys-
tems will have better access to keyboards and other more