Web3D 2007 Symposium

April 15-18, 2007
University of Perugia, Umbria, Italy

12th International Conference on 3D Web Technology

Sponsored by:

ACM SIGGRAPH

In cooperation with:

Web3D Consortium     Eurographics

All Camera-ready Abstracts

 

Session 1: Rendering


Mobile, hardware-accelerated urban 3D maps in 3G networks
Antti Nurminen - Helsinki University of Technology
3D maps can visualize static and dynamic features of real
environments, and act as 3D gateways to location-based information.
Insufficient network speed has been a major bottleneck for
dynamic download of 3D content for mobile devices. 3G network
technologies promise to solve this issue, allowing faster response
times and higher data rates. Similarly, mobile 3D graphics hardware
should provide a dramatic increase in rendering speed. We examine
wireless IP network properties, and develop an optimized network
scheme suited for navigation purposes. The presented system allows
free roaming in the 3D scene, while progressively downloading 3D
data. As case platforms, we use two 3G Symbian smart phones, one
with 3D hardware and one without. Network, 3D rendering and overall
application performance are measured. For a
scalable 3D engine, 3D hardware improves the rendering performance
by over an order of magnitude. By using a compressed network
protocol and efficiently formatted 3D data, a textured but lightweight
3D city can be progressively downloaded quickly over 3G networks
without degrading application responsiveness.
Interactive Walkthrough of Large 3D Models of Buildings on Mobile Devices
Alessandro Mulloni - HCI Lab
Daniele Nadalutti - HCI Lab
Luca Chittaro - HCI Lab
Interactive visualization of large 3D architectural models on mobile
devices such as PDAs would significantly benefit applications such
as indoor navigators and mobile tourist guides, on-site monitoring
and annotation of architectural designs at construction sites, evacuation
training and evacuation guidance.
Although PDAs are becoming more powerful and a few are even
equipped with 3D hardware accelerators, their performance does
not yet allow handling a large architectural model at an acceptable
frame rate. To face this problem, we propose and experiment with
a system that exploits hierarchical view frustum culling and portal
culling for interactively visualizing 3D architectural models on mobile
devices. We also discuss the performance of the system and
its integration with our mobile X3D player (MobiX3D). The performance
of the system has been evaluated on a large three-floor
building with 39 rooms, 42 stairs and 42 doors.
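
The visibility techniques named above (hierarchical view frustum culling and portal culling) can be illustrated with a brief, self-contained sketch; the Cell/Portal/Box structures and the axis-aligned visibility test below are simplifying assumptions for illustration, not the MobiX3D implementation.

```python
# Hedged sketch of combined view-frustum + portal culling, assuming a
# simplified axis-aligned "frustum" box; not the authors' MobiX3D code.
from dataclasses import dataclass, field

@dataclass
class Box:                      # axis-aligned bounding box (min, max corners)
    lo: tuple
    hi: tuple
    def overlaps(self, other):
        return all(a <= d and c <= b
                   for a, b, c, d in zip(self.lo, self.hi, other.lo, other.hi))
    def clipped_to(self, other):            # intersection box (narrowed frustum)
        return Box(tuple(map(max, self.lo, other.lo)),
                   tuple(map(min, self.hi, other.hi)))

@dataclass
class Portal:
    opening: Box
    target: "Cell"

@dataclass
class Cell:                     # one room of the building model
    objects: list = field(default_factory=list)   # (name, Box) pairs
    portals: list = field(default_factory=list)

def collect_visible(cell, frustum, seen=None, out=None):
    """Return the names of objects visible from `cell` through open portals."""
    if seen is None:
        seen, out = set(), []
    if id(cell) in seen:                     # avoid cycles between adjacent rooms
        return out
    seen.add(id(cell))
    out += [name for name, box in cell.objects if frustum.overlaps(box)]
    for p in cell.portals:
        if frustum.overlaps(p.opening):      # only cross portals inside the frustum
            collect_visible(p.target, frustum.clipped_to(p.opening), seen, out)
    return out
```
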
Enhancing X3D for advanced MR appliances
Tobias Franke - Fraunhofer IGD
Yvonne Jung - Fraunhofer IGD
Patrick Daehne - TU Darmstadt
Johannes Behr - Fraunhofer IGD
In this paper, we explore and discuss X3D as an application description
language for advanced mixed reality environments. X3D has been established as
an important platform for today's web-based visualization and VR applications.
Yet, there are very few examples for augmented reality systems utilizing X3D
beyond a simple geometric description format. In order to fulfill the image
compositing and synthesis requirements of today's augmented reality applications,
we propose extensions to X3D, especially with a focus on lighting and
realistic rendering.

 

Session 2: Encoding and Transmission


High-quality networked terrain rendering from compressed bitstreams
Fabio Bettio - CRS4
Enrico Gobbetti - CRS4
Fabio Marton - CRS4
Giovanni Pintore - CRS4
We describe a compressed multiresolution representation and a
client-server architecture for supporting interactive, high-quality remote
visualization of very large textured planar and spherical terrains.
Our approach incrementally updates a chunked level-of-detail
BDAM hierarchy by using precomputed wavelet coefficient matrices
decoded from a compressed bitstream originating from a thin
server.
The structure combines the aggressive compression rates of
wavelet based image representations with the ability to ensure overall
geometric continuity for variable resolution views of planar and
spherical terrains with no need for run-time stitching.
The approach is evaluated on a number of test cases and has been
incorporated in an application serving tens of thousands of clients.

On the fly Appearance Quantization on GPU for 3D Broadcasting
Julien Hadim - IPARLA project (INRIA Futurs - LaBRI)
Tamy Boubekeur - IPARLA project (INRIA Futurs - LaBRI)
Mickaël Raynaud - IPARLA project (INRIA Futurs - LaBRI)
Xavier Granier - IPARLA project (INRIA Futurs - LaBRI)
Christophe Schlick - IPARLA project (INRIA Futurs - LaBRI)
This paper presents an improved client-server system that increases the
availability of remote 3D data. In order to reduce the required bandwidth, the
data related to the appearance (color and normal) involved in the rendering of
meshes and point clouds is quantized on-the-fly during the transmission to the
final client, without reducing the geometric complexity. Our new appearance
quantization technique, which can be implemented on the GPU, strongly
reduces the CPU load on the server side and largely decreases the
transmission time.
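
As a rough, generic illustration of per-attribute quantization (the paper's GPU implementation is not reproduced here), the sketch below uniformly quantizes a color channel and a normal component to small, arbitrarily chosen bit budgets.

```python
# Hedged sketch: uniform quantization of appearance attributes (color, normal)
# to a small bit budget before transmission; the GPU version is not shown.

def quantize(value, lo, hi, bits):
    """Map a float in [lo, hi] to an integer code on `bits` bits."""
    levels = (1 << bits) - 1
    t = min(max((value - lo) / (hi - lo), 0.0), 1.0)
    return round(t * levels)

def dequantize(code, lo, hi, bits):
    levels = (1 << bits) - 1
    return lo + (code / levels) * (hi - lo)

# Example: a color channel in [0, 1] on 5 bits, a normal component in [-1, 1] on 6 bits.
color_code  = quantize(0.73, 0.0, 1.0, 5)
normal_code = quantize(-0.42, -1.0, 1.0, 6)
print(color_code, dequantize(color_code, 0.0, 1.0, 5))
print(normal_code, dequantize(normal_code, -1.0, 1.0, 6))
```
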
3D Data Codec and Transmission over the Internet
Su Cai - Ministry of Education Key Laboratory of Virtual Reality Technology, Beihang University
Yue Qi - Ministry of Education Key Laboratory of Virtual Reality Technology, Beihang University
Xukun Shen - Ministry of Education Key Laboratory of Virtual Reality Technology, Beihang University
In this paper, a compression method for encoding/decoding 3D meshes based on an
octree is proposed. Vertices of the 3D mesh are re-classified according to the
octree rule. We analyse all the nodes of the octree statistically to identify
the node type that accounts for the largest proportion and encode it with
fewer bits. According to the transmission sequence of geometric information,
we rearrange topology and attribute information and encode them. The progressive
strategies adopted for a single model and for a whole scene differ, in order to
maximize the use of network bandwidth and the computational performance of local
machines. This method achieves a high compression rate, is well suited to network
transmission with a short response time at the client, and can control the level
of detail at which the model is decoded.
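
A minimal sketch of the frequency-based coding idea described above follows; the 8-bit child-occupancy masks and the 1-bit/9-bit code lengths are assumptions for illustration, not the authors' codec.

```python
# Hedged sketch: count octree child-occupancy patterns and give the most
# frequent node type a short code; illustrates the idea only, not the paper's codec.
from collections import Counter

def assign_codes(occupancy_masks):
    """occupancy_masks: one 8-bit child mask per internal octree node."""
    freq = Counter(occupancy_masks)
    most_common, _ = freq.most_common(1)[0]
    codes = {}
    for pattern in freq:
        if pattern == most_common:
            codes[pattern] = "0"                           # 1-bit code for the dominant type
        else:
            codes[pattern] = "1" + format(pattern, "08b")  # escape bit + raw 8 bits
    return codes

# Example: a stream dominated by fully occupied nodes (mask 0b11111111).
stream = [0b11111111] * 90 + [0b00010001] * 7 + [0b10000000] * 3
codes = assign_codes(stream)
bits = sum(len(codes[p]) for p in stream)
print(f"{bits} bits instead of {8 * len(stream)} raw bits")
```
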
Advanced Remote Inspection and Download of 3D Shapes
Emanuele Danovaro - Department of Computer Science - University of Genova
Laura Papaleo - Department of Computer Science - University of Genova
Davide Sobrero - Department of Computer Science - University of Genova
Marco Attene - IMATI - CNR
Waqar Saleem - Computer Graphics group - Max Planck Institut Informatik
Shape inspection options in most of the current online shape repositories
provide limited information on the shape of a desired model.
In addition, stored models can be downloaded only at the original
level of detail (LOD). In this paper, we present our application that
combines remote interactive inspection of a digital shape with real-time
simplification. Simplification is parameterised, is performed
in real time, and the results are again available for inspection. We
have embedded the application in a shape repository whereby, having
found a suitable simplification, users can download the model
at that LOD.

 

Session 3: Applications #1


Using Web3D Technologies for Visualization and Search of Signs in an International Sign Language Dictionary
Fabio Buttussi - HCI Lab - Dept. of Math and Computer Science - University of Udine
Luca Chittaro - HCI Lab - Dept. of Math and Computer Science - University of Udine
Marco Coppo - Italian Deaf Association (ENS) - LIS Working group - Udine
Sign languages are visual languages used by deaf people to communicate.
As with spoken languages, sign languages vary among
countries and have their own vocabulary and grammar. Therefore,
the different deaf communities need a dictionary that associates
signs with the words of the spoken language of their country, as well as
dictionaries that translate signs from one sign language to another.
Several researchers proposed multimedia dictionaries for sign languages
of specific countries, but there are only a few proposals of
multilanguage dictionaries. Moreover, current multimedia dictionaries
suffer from serious limitations. Most of them allow only for
a word-to-sign search, while only a few of them exploit sign parameters
(i.e., handshape, orientation, location, and movement) to
allow for a sign-to-word search. Current solutions also commonly
use pictures or videos to represent signs and their parameters, but
2D images are often misleading for correct identification (e.g.,
recognizing a handshape can be very difficult due to occlusions).
This paper addresses the above issues by proposing an online
international sign language dictionary, called 3DictSL, which exploits
Web3D technologies such as X3D and H-Anim humanoids to make signs
easier to understand and to simplify sign-to-word and sign-to-sign
search. The paper presents the client-server architecture
of 3DictSL and authoring tools which allow deaf communities
to extend the dictionary with their own language. As a practical
case study, the paper discusses the implementation of Italian Sign
Language (LIS).
Protein CorreLogo: an X3D representation of co-evolving pairs, tertiary structure, ligand binding pockets and protein-protein interactions in protein families
Scooter Willis - University of Florida
To understand the functional elements of a protein structure, biologists use
domain-specific 3D viewers (PDB viewers) written to process the coordinates
of the atoms that represent a protein structure solved using X-ray
crystallography or NMR. The PDB viewers have been written to capture specific
or common features of interest to the researcher. With the explosion of
protein sequence data, comparative studies and statistical analysis of data can
indicate regions of interest in 3D models. The ability to integrate
statistical data into existing PDB viewers is difficult because the software
is typically written to accomplish very specific functional goals and does not
support exporting to a standard 3D format. In this paper, the PDB data is
shown as X3D PDB ribbon models that are augmented with statistically
significant data and compared to an Information-Rich Virtual Environment
represented as a Protein CorreLogo X3D model.

A protein family (Pfam) represents multiple alignments of protein sequences
where protein domains and the tertiary structures have evolutionarily conserved
regions representing protein function.  Various information properties of the
protein family, the tertiary structure from a sequence’s PDB structure and
ligand binding pockets are combined to create a 3D Protein CorreLogo model.
The multiple sequence alignment from the protein family is used to detect
co-evolving amino acid pairs using mutual information. Co-evolving pairs are
indicated as a column with color coding to represent the physico-chemical
properties of each co-evolving amino acid combination. Additional
visualizations along each axis include the 2D sequence logo, the degree of
insert regions in the protein family and the surface accessibility of each
amino acid for the referenced PDB sequence. The Protein CorreLogo model is
based on X3D (VRML) facilitating immersive viewing of complex data
relationships and detected co-evolving pairs. 
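
The mutual-information measure used to detect co-evolving columns can be sketched in a few lines; this is the generic formula applied to two alignment columns, not the paper's full statistical pipeline.

```python
# Hedged sketch: mutual information between two columns of a multiple sequence
# alignment, as a generic measure of co-evolution (not the paper's exact method).
from collections import Counter
from math import log2

def mutual_information(col_i, col_j):
    """col_i, col_j: equal-length strings of amino-acid letters, one per sequence."""
    n = len(col_i)
    p_i = Counter(col_i)
    p_j = Counter(col_j)
    p_ij = Counter(zip(col_i, col_j))
    return sum((c / n) * log2((c / n) / ((p_i[a] / n) * (p_j[b] / n)))
               for (a, b), c in p_ij.items())

# Two perfectly co-varying columns give high MI; independent columns give ~0.
print(mutual_information("AAAACCCC", "DDDDEEEE"))  # 1.0 bit
print(mutual_information("ACACACAC", "DDDDEEEE"))  # 0.0 bits
```
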

In the results section, two protein families are presented that compare the
Protein CorreLogo model with a representative X3D PDB ribbon model, showing the
structural significance of co-evolving amino acid pairs predicted using mutual
information. One example protein family, with proteins that bind cyclic
nucleotides (PF00027.18), is given where the co-evolving pairs are potential
markers for ligand binding pocket regions. Another example protein family,
with SH3 domains that are involved in signal transduction related to
cytoskeletal organization (PF00018.16), shows significant mutual information
occurring between two pairs of amino acids that are in contact in the
intertwined dimer structure but are on opposite ends of the tertiary
structure. 

Protein CorreLogo X3D models and X3D PDB ribbon models can be found at
http://www.proteinx3d.com
ISAS: A Human-Centric Digital Media Interface
James Oliverio - Digital Worlds Institute - University of Florida
Yvonne Masakowski - US Naval Undersea Warfare Center
Howard Beck - University of Florida
Raja Appuswamy - University of Florida
The Integrated Situational Awareness System (ISAS) initiative at the
University of Florida Digital Worlds Institute has demonstrated an effective
web services-enhanced graphically-based environment for globally-distributed
operations ranging from humanitarian aid during large-scale environmental
disasters to high-level collaboration and augmented decision-making in civil
and coalition activities.

 

Session 4: Modeling and Semantics


A Reusable 3D Visualization Component for the Semantic Web
Alessio Bosca - Politecnico di Torino
Dario Bonino - Politecnico di Torino
Fulvio Corno - Politecnico di Torino
Marco Comerio - Università degli studi di Milano-Bicocca
Simone Grega - Università degli studi di Milano-Bicocca
Ontology visualization and exploration is not a trivial task as many issues
can affect the effectiveness of interactions. As ontologies are, in the
general case, quite connected graphs where concepts are the nodes and semantic
relationships the edges, the problems include space allocation, edge
superposition, scene over-crowding, etc.

In this paper we propose a solution for the visualization and the exploration
of ontologies using a 3-dimensional
space, where information is represented on a 3D view-port enriched by visual
cues. Our visualization tool aims at tackling representation issues of
ontology models (such as space allocation or the completeness and readability of
displayed information) by adopting different views, at different
granularities, in order to grant a constant navigability of the rendered
model. Each provided view represents semantic information according to a
different, task-based visualization paradigm, at a suitable level of detail.

Besides being primarily implemented as a Protégé plug-in, the proposed
solution (named OntoSphere3D) is designed to be  a reusable visualization
component within Semantic Web applications; in fact, every scene can be
exploited as a standalone facility that provides access to ontological data
through an intuitive and appealing 3D interface. A case study is presented,
where re-usability is demonstrated by integrating the OntoSphere3D
visualization inside an Eclipse-based tool for Web Service design (called Web
Services Design Tool) developed by some of the authors in the context of
another research project.
Semantic-based Rules for 3D Scene Adaptation
Ioan Marius BILASCO - Laboratoire Informatique de Grenoble (LIG)
Marlène VILLANOVA-OLIVER - Laboratoire Informatique de Grenoble (LIG)
Jérôme GENSEL - Laboratoire Informatique de Grenoble (LIG)
Hervé MARTIN - Laboratoire Informatique de Grenoble (LIG)
3D data is becoming widespread on the Web as it becomes available to everybody on
almost all access devices. Still, 3D data is a heavy medium, as it contains a
lot of geometric and texture information. This complexity raises many
problems, especially when data initially designed for high-capacity access
devices is to be deployed on small devices. Due to the great heterogeneity of
access devices and of their users and usages, the adaptation of already
designed data is an important issue. In this paper, we present a rule-based
adaptation framework that deals with the adaptation of X3D scenes. An
adaptation rule indicates a type of adaptation to be applied to a set of
objects that fit the rule criterion. A basic set of adaptation
techniques is registered within the framework. The framework is flexible, and
additional adaptation engines can be registered in order to support large sets
of adaptation techniques.
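
To make the rule = criterion + adaptation idea concrete, here is a small, generic sketch over an X3D fragment; the rule function and the texture-dropping adaptation are hypothetical examples, not the framework's actual API.

```python
# Hedged sketch: apply an adaptation rule (criterion + technique) to an X3D scene;
# a generic illustration, not the rule-based framework described in the paper.
import xml.etree.ElementTree as ET

scene = ET.fromstring("""
<Scene>
  <Shape><Appearance><ImageTexture url='"wall_2048.png"'/></Appearance><Box/></Shape>
  <Shape><Appearance><Material diffuseColor='1 0 0'/></Appearance><Sphere/></Shape>
</Scene>""")

def drop_textures_rule(node):
    """Criterion: Appearance nodes holding an ImageTexture.
    Adaptation: remove the texture for low-capacity devices."""
    if node.tag == "Appearance" and node.find("ImageTexture") is not None:
        node.remove(node.find("ImageTexture"))

rules = [drop_textures_rule]            # a basic registered set of adaptation rules
for rule in rules:
    for element in scene.iter():
        rule(element)

print(ET.tostring(scene, encoding="unicode"))
```
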
COLLADA Physics
Erwin Coumans - Sony Computer Entertainment America
Keith Victor - Media Machines Inc.
This paper gives an overview of the COLLADA 1.4 standard physics format
and its use in the 3D physics content pipeline.
It describes design decisions, implementation, compatibility and
interoperability aspects of adopters of this industry standard, 
as well as its relationship with other standards such as X3D from the Web 3D
Consortium.
TimeClock - Flexible Animation Control in X3D
Olavo Belloc - Laboratory of Integrated Systems - University of São Paulo - Brazil
Marcio Cabral - Laboratory of Integrated Systems - University of São Paulo - Brazil
Marcelo Zuffo - Laboratory of Integrated Systems - University of São Paulo - Brazil
In this paper we propose an alternative approach to create animations in X3D.
This approach allows extended flexibility 
to control animations during run-time. Among the extended features, it is
possible to: control the speed of the animation; play the animation backwards;
repeat any specific time interval of the animation and access any key-frame
instantly. To illustrate this, we propose a new node called TimeClock.
This node implements the same functionality as a TimeSensor node, but with
the ability to independently set the time frame, overcoming current X3D
limitations in the time model specification for creating animations with
interpolators. We think this approach is useful for animators, developers
and users: animators can carefully analyze their work on the fly, users can
easily control an animation using a DVD-like interface, and developers do not
need to worry about creating several different interpolators for the same
animation. We present our current results along with some examples of usage.
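
A minimal sketch of the kind of time-to-fraction mapping such a node could expose (speed control, backwards play, interval looping) is shown below; the parameter names are illustrative assumptions, not the TimeClock node's actual fields. In X3D terms, the resulting fraction would be routed to an interpolator's set_fraction field.

```python
# Hedged sketch of TimeSensor-like fraction output with speed, direction and
# interval looping; parameter names are illustrative, not the proposed TimeClock API.

def clock_fraction(now, start_time, cycle_interval,
                   speed=1.0, loop_start=0.0, loop_end=1.0):
    """Map wall-clock `now` to a fraction in [loop_start, loop_end].
    Negative `speed` plays the interval backwards."""
    span = (loop_end - loop_start) * cycle_interval
    local = ((now - start_time) * abs(speed)) % span       # loop over the chosen interval
    fraction = loop_start + local / cycle_interval
    if speed < 0:                                           # backwards playback
        fraction = loop_end - (fraction - loop_start)
    return fraction

# A 10 s animation restricted to its second half, played backwards at double speed.
for t in (0.0, 1.0, 2.0, 3.0):
    print(round(clock_fraction(t, 0.0, 10.0, speed=-2.0,
                               loop_start=0.5, loop_end=1.0), 2))
```
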
Binding External Interactivity to X3D
John Stewart - CRC Canada
Sarah Dumoulin - CRC Canada
Sylvie Noel - CRC Canada
The VRML and X3D standards have achieved success not only as a method for 3D
model interchange, but also for the creation of complex synthesized 3D worlds.
Shortcomings in VRML and X3D exist in the areas of manipulation of aural
soundscapes, and interaction via intuitive devices.

Cognitive and Computer Scientists at the Communications Research Centre,
Canada, have embarked on a process of exploration to resolve these
shortcomings by binding leading edge audio control software to VRML/X3D, thus
using de facto standards to extend I/O control and audio data manipulation.
This paper will outline the direction of these experiments.

 

Session 5: Multi-User, Distributed VEs


From Coarse-grained Components to DVE Applications: A Service and Component Based Framework
Xiaoyu Zhang - Computer Science Department, Virginia Tech
Denis Gracanin - Computer Science Department, Virginia Tech
Distributed Virtual Environments (DVEs) are distributed, simulated virtual
worlds where users gather and interact within a shared space. 
Web-based DVE applications are attracting more and more attention. 
However, building DVE applications requires a significant effort, even with
modern development tools.  In this paper we propose a component-based and
service-based framework for constructing DVE applications from
coarse-grained components.  This component-based and service-oriented
architecture provides great flexibility for building complex DVE
applications.  Based on the developed terminology and profile, the framework
provides a high-level description language for specifying user interaction
tasks.  DVE developers can concentrate on the application design rather
than worrying about programming details.  The framework also provides a
runtime platform for coarse-grained component integration and a shared scene
graph for coordinating the presentation for individual users.
Grid-Based Large-scale Web3D Collaborative Virtual Environment
Qingping Lin - Nanyang Technological University
Hong Kang Neo - Nanyang Technological University
Liang Zhang - Nanyang Technological University
Guangbin Huang - Nanyang Technological University
Robert Gay - Nanyang Technological University
This paper presents a grid-based large-scale web3D collaborative virtual
environment that has the capability of scaling across multiple geographically
dispersed resources. The architecture consists of distributed mobile agents
working cooperatively in supporting and managing the web3D collaborative
virtual environments. The mobile agents’ tasks include managing the persistency
and consistency of the virtual worlds, maintaining the reliability and efficiency
of user interactions, and ensuring the security and integrity of data and systems.
The mobile agents are autonomous and have the ability to migrate among hosts to
maximize resource utilization. Grid technologies allow the mobile agents to
execute and communicate securely in multiple administrative domains.
Grid-based scheduling components and policies are integrated to provide
intelligent resource optimization. Furthermore, better load balancing can
be achieved by utilizing additional or more accurate information such as
data-user proximity and hosts’ workload. The result will be a more scalable
and robust architecture for supporting large-scale web3D collaborative virtual
environments.
An Open Protocol for Wide-area Multi-user X3D
Jay Weber - Media Machines, Inc.
Tony Parisi - Media Machines, Inc.
This paper describes work to create an open protocol for wide-area multi-user
X3D, incorporating many aspects of prior academic, experimental, and
proprietary systems, but emphasizing simplicity and practicality for use among
heterogeneous Internet user agents.  In the Internet tradition, it is
documented as a protocol (rather than a framework), and backed by
freely-available reference implementations.  The hope is that this protocol is
useful to those working on new X3D networking nodes as well as to those
building multi-user world systems.

 

Session 6: Interaction & Visualization


IRVE-Serve: A Visualization Gateway for Spatially-Registered Time Series Data
Nicholas Polys - Virginia Tech
Michael Shapiro - Tufts University
Karen Duca - VA-MD College of Veterinary Medicine
Scientists regularly confront situations where they are trying to understand
large quantities of information that vary over time and space. Analyzing such
systems where structure and function are related is still a challenge despite
the continued improvement of visualization tools and techniques. Given the
spatial basis of many simulations, Information Rich Virtual Environments
(IRVEs) can be a successful way of presenting heterogeneous information in an
intuitively comprehensible form. 

In this paper we describe the evolution of a web-based IRVE delivery system
for simulation data. Our framework decouples geometry, the underlying data
set, and the expressive repertory for information display. This allows us to
incorporate domain-specific information while providing for easy retargeting
of the information displayed in that domain. As a result of these
abstractions, we are able to continually expand and improve our visual
mappings and components and finally apply our framework in a completely
unrelated domain.
3D SPACE: Using Depth and Movement for Selection Tasks
Dale Patterson - Griffith University
This paper describes two new three-dimensional interface components (The Flow
and Circulatory system). These components utilize the depth provided by 3D
computer graphics to present complex information in a natural three
dimensional form for user interaction. Part of a larger research project with
the objective of applying 3D computer graphics to the field of human computer
interfaces, this research focuses mainly on the content of the 3D space and
how users utilize and interact with that content, rather than physical device
related issues. Each of the new 3D interface components is designed for a
particular mainstream real-world interaction task (e.g. web search/browsing
activities). In addition to the specific components, the paper introduces the concept
of “active 3D interfaces”, a new style of interface that presents its data
to the user rather than statically waiting for the user to interact with it.
Each interface is described in terms of its design, function and performance
in user trials. These trials clearly demonstrate the potential for active 3D
interfaces in a range of common interaction tasks.

 

Session 7: Applications #2


3D Digital Dossiers -- a new way of presenting cultural heritage on the Web
Anton Eliens - Vrije Universiteit Amsterdam
Yiwen Wang - Technische Universiteit Eindhoven
Chris van Riel - Vrije Universiteit Amsterdam
Tatja Scholte - Instituut Collectie Nederland
In this paper we give a comprehensive overview of our work on digital dossiers
for the presentation of cultural heritage, in particular contemporary art, on
the web using standard 3D technology. Digital dossiers allow for navigation
using concept graphs, and use 3D in an essential manner to present artwork
installations as 3D models, together with all the relevant information needed
for understanding the artwork and, for curators, for the preservation and
possible re-installation of the artwork(s). Our discussion encompasses
requirements, implementation issues, and the realization of guided tours in
digital dossiers, which provide a narrative facility as well as tools to
experiment with exhibition parameters in virtual space.
An experience using X3D for Virtual Cultural Heritage
Marcio Cabral - Laboratory of Integrated Systems - University of Sao Paulo - Brazil
Marcelo Zuffo - Laboratory of Integrated Systems - University of Sao Paulo - Brazil
Silvia Ghirotti - Laboratory of Integrated Systems - University of Sao Paulo - Brazil
Olavo Belloc - Laboratory of Integrated Systems - University of Sao Paulo - Brazil
Leonardo Nomura - Laboratory of Integrated Systems - University of Sao Paulo - Brazil
Mario Nagamura - Laboratory of Integrated Systems - University of Sao Paulo - Brazil
In this paper we present our experience in using Virtual Reality Technologies
to accurately reconstruct and further explore ancient and historic city
buildings. Virtual reality techniques provide a powerful set of tools to
explore and access the history of a city. In order to explore, visualize and
hear such history, we divided the process into three phases: historical data
gathering and analysis; 3D reconstruction and modeling; interactive immersive
visualization, auralization and display.
The set of guidelines we devised helped to put into practice the extensible tools
available in VR, which are not always easy for inexperienced users to put together.
These guidelines also helped our work proceed smoothly and helped avoid
problems in the subsequent phases. Most importantly, the X3D standard provided
an environment capable of helping the design and validation process as well as
the visualization phase.
Finally, we present the results achieved and further analyze the
extensibility of the framework. Although VR tools and techniques are widely
available at present, there is still a gap between using the tools and really
taking advantage of VR in historic architectural reconstruction, so that users
might immerse themselves in this world and be able to consider various
scenarios and possibilities that might lead to new insight and inspiration.
This is an ongoing process that we think will grow and help current
architectural development.
The VRML model of Victoria Square in Gorizia (Italy) from laser scanning and photogrammetric 3D surveys
Domenico Visintini - Department of Georesources and Territory - University of Udine
Anna Spangher - Department of Georesources and Territory - University of Udine
Barbara Fico - Department of Georesources and Territory - University of Udine
In this paper, the novel 3D survey technique of laser scanning integrated with
photogrammetric images is introduced as a quasi-automatic way for the detailed
modeling and the virtual rendering of whole cities with complex geometries and
structures.
Afterwards, some specifications and examples are given about the different
reachable levels of detail, on the basis of our surveys carried out in the city of
Gorizia (Italy). Thanks to the laser-derived building volumes and the photogrammetric
image textures, a VRML model of Victoria Square has been obtained; this highly
detailed 3D model is described in the last part of this paper.
sTeam3D: Bringing Together Virtual Communities and CSCW
Stefan Mischke - University of Paderborn
Frank Goetz - University of Paderborn
Robert Hinn - Heinz Nixdorf Institute, University of Paderborn
Thorsten Hampel - Heinz Nixdorf Institute, University of Paderborn
Today CSCW systems are used in various scientific, economic, and industrial
areas. The logical next step is that collaborators no longer want to work
together only by sharing documents or by communicating via mail or chat. Rather,
it would be more interesting to meet colleagues, business partners, and customers
in a virtual world, e.g. to discuss the design and look of various products or
concepts in 3D space and in real time. In doing so, it is important that the whole
cooperative functionality remains available. Most developments that bring a
virtual community to the computer desktop are proprietary solutions that only
partially contain CSCW features. In order to solve this problem and to provide
a satisfying and functional virtual environment, our aim was the development of
an X3D-based Web3D client for a sophisticated CSCW system. As a server system
we chose sTeam, a free CSCW system that is part of the Debian Linux
distribution. Hence, virtual communities are able to use sTeam and our 3D
client sTeam3D to get a fully featured CSCW environment.
Building Information Modeling: The Web3D Application for AEC
Dace Campbell - M. A. Mortenson Co.
There is currently a dramatic shift in the Architecture, Engineering, and
Construction (AEC) industry to embrace Building Information Modeling (BIM) as
a tool that can assist in integrating the fragmented industry by eliminating
inefficiencies and redundancies, improving collaboration and communication,
and enhancing overall productivity.  In the context of this revolution, the
intent of this paper is three-fold:
1) To introduce and define BIM to the Web3D community as an application of
Web3D to the AEC industry.
2) To describe and illustrate the various ways innovative designers and
contractors are applying BIM and Web3D tools in the AEC industry.
3) To challenge the Web3D community to collaborate with BIM and AEC-specific
open standards organizations like the International Alliance for
Interoperability and to discover ways to integrate X3D with the IFC file
formats.
Curriculum visualization in 3D
Lorenzo Sommaruga - SUPSI-DTI
Nadia Catenazzi - Labi
This paper describes a 3D environment for representing a university
undergraduate education programme. More specifically, we detail the curriculum data
selected for representation, how they are visualized in the 3D environment,
and the process of generating that environment. Modules and curricula have been
rendered in such a way that many numerical data, such as credits and duration,
are translated into a graphical form, resulting in a simple and intuitive
overall view. This effective visualization strategy provides added-value in
comparison with the more traditional textual presentation. The 3D environment,
based on the X3D language, is dynamically generated from a database thanks to
an XSLT transformation. The use of a powerful XML based Web publishing tool,
i.e. Apache Cocoon, allows this transformation to be easily performed “on
the fly”.
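
As a simplified, illustrative version of the generation step (the actual system applies an XSLT transformation under Apache Cocoon, which is not reproduced here), the sketch below turns made-up module records into an X3D fragment where box height encodes credits.

```python
# Hedged sketch: turn curriculum records into an X3D fragment where box height
# encodes credits; the real system uses an XSLT transformation in Apache Cocoon.
import xml.etree.ElementTree as ET

modules = [                               # made-up sample data (name, credits)
    ("Programming I", 8),
    ("Computer Graphics", 6),
    ("Web Technologies", 4),
]

scene = ET.Element("Scene")
for i, (name, credits) in enumerate(modules):
    height = credits * 0.5                            # credits -> graphical size
    transform = ET.SubElement(scene, "Transform",
                              translation=f"{i * 2.0} {height / 2} 0")
    shape = ET.SubElement(transform, "Shape")
    ET.SubElement(shape, "Box", size=f"1 {height} 1")
    appearance = ET.SubElement(shape, "Appearance")
    ET.SubElement(appearance, "Material", diffuseColor="0.2 0.6 0.9")

print(ET.tostring(scene, encoding="unicode"))
```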

 

Session 8: Virtual Humans


Optimized MPEG-4 animation encoder for motion capture data
Marius Preda - Institut National des Télécommunications
Blagica Jovanova - Institut National des Télécommunications
Ivica Arsov - Institut National des Télécommunications
Françoise Prêteux - Institut National des Télécommunications
This paper presents the first compression results on using the MPEG-4 BBA standard
for encoding motion capture data. We first introduce a detailed description of
the main compression mechanisms used in the recent BBA standard (prediction,
frequency transform, quantization and entropy encoding) and discuss the
theoretical range of encoding performance. Then we introduce an optimized BBA
encoder that also includes a key-frame reduction mechanism, and show the
compression results on animation files from a typical motion capture database.
We compare the results with the ones reported in the literature, showing the
advantages of the MPEG-4 BBA in terms of bit-rate, complexity and range
control.
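
The generic shape of the stages listed above (prediction, quantization, compact coding of residuals) can be sketched as follows; this only illustrates the principle and is not the MPEG-4 BBA encoder, and the step size and sample data are made up.

```python
# Hedged sketch of the generic stages listed above (prediction, quantization,
# compact coding of residuals); it is not the MPEG-4 BBA encoder itself.
from collections import Counter
from math import log2

def encode_channel(samples, step=0.01):
    """Delta-predict one animation channel (e.g. a joint angle over time),
    then uniformly quantize the residuals."""
    residuals, previous = [], 0.0
    for s in samples:
        residuals.append(round((s - previous) / step))   # prediction + quantization
        previous = s
    return residuals

def entropy_bits(symbols):
    """Shannon lower bound for coding the residual symbols."""
    n, freq = len(symbols), Counter(symbols)
    return sum(-c * log2(c / n) for c in freq.values())

channel = [0.00, 0.02, 0.04, 0.05, 0.05, 0.04, 0.02, 0.00]   # made-up joint curve
residuals = encode_channel(channel)
print(residuals)                                   # small, repetitive symbols
print(round(entropy_bits(residuals), 1), "bits vs", 32 * len(channel), "raw bits")
```
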
Generation and Manipulation of H-Anim CAESAR Scan Bodies
Qiming Wang - National Institute of Standards and Technology
Sandy Ressler - National Institute of Standards and Technology
In this paper we present a procedure to create animated human models,
compliant with the H-Anim standard, from 3D CAESAR scan bodies, which were
captured using a whole body scan device. We also present a VRML prototype of
an “Animated CAESAR Viewer” to view and manipulate the generated CAESAR
body animations interactively on the Web. The animated body model follows the
H-Anim skinned body geometry specification. The vertex blending method has
been used for smoother skin deformations. The model can be integrated with
motion capture data. Although the process to generate an H-Anim body involves
several different techniques, the discussion is focused on the methods of
creating segments and assigning vertex weights. The Viewer provides the
functions for a user to explore the components of the digital human model, to
adjust the joint locations, to make body postures with a direct kinematics
method, and to control the animation using VCR-like controls. The aim of the
Viewer is to help digital human modelers create more realistic postures and
motion sequences intuitively.
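
The vertex blending mentioned here is commonly realized as linear blend skinning; the toy sketch below (2D rotations, made-up weights) illustrates the weighted combination of joint transforms, not the NIST tool's code.

```python
# Hedged sketch of vertex blending (linear blend skinning) for skinned bodies;
# generic illustration with toy 2D transforms, not the authors' viewer.
from math import cos, sin, radians

def transform(point, rotation_deg, translation):
    a = radians(rotation_deg)
    x, y = point
    return (x * cos(a) - y * sin(a) + translation[0],
            x * sin(a) + y * cos(a) + translation[1])

def skin_vertex(rest_position, influences):
    """influences: list of (joint_rotation_deg, joint_translation, weight);
    the weights are expected to sum to 1 (as with H-Anim skinCoordWeight)."""
    x = y = 0.0
    for rotation, translation, weight in influences:
        px, py = transform(rest_position, rotation, translation)
        x += weight * px
        y += weight * py
    return (x, y)

# A vertex near an elbow, influenced 70/30 by upper-arm and forearm joints.
print(skin_vertex((1.0, 0.0), [(0.0, (0.0, 0.0), 0.7),
                               (45.0, (0.0, 0.0), 0.3)]))
```
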
Composing H-Anim Behaviors and Swapping Bodies with Motion Capture Data in X3D
Jeffrey Weekley - Naval Postgraduate School
Curt Blais - Naval Postgraduate School
Don Brutzman - Naval Postgraduate School
This paper describes current work in the evolution of open
standards for 3D graphics for Humanoid Animation (H-Anim). It
builds on previous work to encompass plausible humanoids,
humanoid behaviors and methodologies for composition with
interchangeable and blended behaviors. We present an overview
of the standardization activities for H-Anim, including a proposed
extension for the H-Anim Specification which allows for
interchangeable actors and dynamic behaviors. We demonstrate a
standards-based approach to the complex work flow and data
extraction for 3D optical motion tracking systems. We describe
how to archive, annotate and transform the whole body and
segmented performance data so that they can be used more widely
and with less effort. The approach is compressible, streamable,
scalable, repeatable and suitable for large-scale training and
analysis, entertainment and games.
