Previous experiences during earthquake events emphasize the need for new technologies for real-time monitoring and assessment of facilities with high-value nonstructural elements such as equipment or other contents. Moreover, there are substantial limitations to our ability to rapidly evaluate and identify potential hazard zones within a structure, exposing rescue workers, society and the environment to unnecessary risks. A real-time monitoring system, integrated with critical warning systems, would allow for improved channeling of resources. Ideally, such a system would acquire all relevant data non-intrusively, at high rates and resolution, and disseminate it with low latency over a trusted network to a central repository. This repository can then be used by the building owner and rescue workers to make informed decisions. In recognition of these issues, this paper describes a methodology for image-based tracking of seismically induced motions. The methodology includes calibration, acquisition, processing, and analysis tools geared towards seismic assessment. We present sample waveforms extracted using pixel-based algorithms applied to images collected from an array of high-speed, high-resolution charge-coupled-device (CCD) cameras. This work includes a unique hardware and software design involving a multi-threaded process, which bypasses conventional hardware frame grabbers and uses a software-based approach to acquire, synchronize and time-stamp image data.
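The multi-threaded, software-based acquisition idea can be sketched as follows. This is a minimal illustration, not the authors' implementation: the camera driver is replaced by a stand-in frame source, and each thread time-stamps frames in software at the moment of capture, after which the per-camera streams are merged into one globally ordered sequence.

```python
import queue
import threading
import time

def capture_thread(cam_id, frame_source, out_queue, n_frames):
    # Simulated per-camera capture loop: each frame is time-stamped in
    # software the moment it is pulled from the (stand-in) driver,
    # replacing a hardware frame-grabber trigger.
    for i in range(n_frames):
        frame = frame_source(cam_id, i)   # placeholder for a real driver read
        stamp = time.monotonic()          # software time stamp
        out_queue.put((stamp, cam_id, frame))

def acquire(n_cams=3, n_frames=5):
    q = queue.Queue()
    src = lambda cam, i: f"cam{cam}-frame{i}"   # stand-in for image data
    threads = [threading.Thread(target=capture_thread, args=(c, src, q, n_frames))
               for c in range(n_cams)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # Drain and sort by time stamp to obtain one globally ordered stream.
    return sorted(q.get() for _ in range(n_cams * n_frames))
```

In a real deployment the sort would happen incrementally at the central repository rather than after a fixed batch, but the principle of ordering by software-applied time stamps is the same.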
Tiled display systems have emerged as a means to visualize complex scientific data sets while reducing the need to subsample potentially critical information. This paper presents a global-timestamp-based approach for the development and control of real-time distributed tiled display systems. Two different techniques are presented that enable the development of multi-tile configurations in combination with distributed render clusters. A single-display-multiple-renderer approach is presented that fuses visuals generated by multiple render nodes into one composite image, which can be assigned to one specific display tile. This approach is subsequently extended to a multiple-display-multiple-renderer approach that facilitates the creation of scalable display systems consisting of multiple display tiles and render clusters. This paper investigates challenges that have to be addressed by these systems and describes a proof-of-concept system based on a high-level object-oriented real-time programming scheme called TMO.
The pervasive nature of web-based content has led to the development of applications and user interfaces that port between a broad range of operating systems and databases, while providing intuitive access to static and time-varying information. However, the integration of this vast resource into virtual environments has remained elusive. In this paper we present a 3D web browser (WebVR) that allows users to search for arbitrary information on the Internet and to seamlessly augment the results into virtual environments. WebVR provides access to the standard data input and query mechanisms while supporting active texture skins of web content that can be mapped onto arbitrary surfaces within the environment. Once mapped, the corresponding texture functions as a fully integrated web browser that responds to traditional events such as the selection of links or text input. As a result, any surface within the environment can be turned into a web-enabled resource that provides access to user-definable data. To leverage the continuous advancement of browser technology and to support both static and streamed content, WebVR uses ActiveX controls to extract the desired texture skin from industry-strength browsers, providing a unique mechanism for data fusion and extensibility.
The advent of high-speed, CCD-based camera technologies opens new possibilities for field monitoring applications. In particular, under natural or man-made loading conditions, applying these new technologies to the monitoring of building interiors may substantially help rescue and reconnaissance crews during post-event evaluations. To test such a methodology, we have developed a specialized network of high-speed cameras and supporting hardware for monitoring and tracking nonstructural elements within building structures subjected to vibration loading. In collaboration with the University of California, Los Angeles, a full-scale vibration experiment was conducted on a vacant structure damaged during the 1994 Northridge Earthquake. The building of interest is a four-story office building located in Sherman Oaks, California. The investigation has two primary objectives: (1) to characterize the seismic response of an important class of equipment and building contents and (2) to study the applicability of tracking the response of this equipment and these contents using arrays of image-based monitoring systems. In this paper, the image acquisition system (hardware and software) and the experimental field set-up are described. In addition, the underlying communication, networking and synchronization of the camera sensor system are discussed.
Distributed virtual environments are rapidly gaining in popularity for the implementation of intuitive and collaborative workspaces. In distributed virtual environments, geographically dispersed user sites possess considerable capabilities for computing and for cooperation with other user sites. The primary challenges that have to be addressed by these systems are compensating for network latency jitter, keeping system-wide data consistent, and enabling fair resource sharing and interaction between the users. This paper reviews a global-timestamp-based approach, developed by the authors, to enhance fairness and consistency across distributed virtual environments. The approach is described in combination with three different implementation philosophies: a centralized approach similar to the client-server model, a decentralized approach similar to the peer-to-peer model, and a combined approach consisting of hierarchical layers of the centralized and decentralized approaches. Based on a new object-oriented real-time programming methodology called the time-triggered message-triggered object (TMO) programming scheme, two different implementations were tested and compared.
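The core of a global-timestamp approach can be illustrated with a small sketch. This is not the authors' TMO implementation; it only shows the ordering idea: every user action carries a global time stamp, each site buffers incoming actions, and actions are released in time-stamp order once a release deadline has passed, so all sites apply the same update sequence regardless of network arrival order.

```python
import heapq

class GlobalOrderBuffer:
    """Illustrative event buffer (hypothetical class, not from the paper):
    holds time-stamped actions and releases them in global time order."""

    def __init__(self):
        self._heap = []  # min-heap ordered by global time stamp

    def submit(self, timestamp, site, action):
        # Actions may arrive out of order; the heap keeps them sorted.
        heapq.heappush(self._heap, (timestamp, site, action))

    def release(self, now):
        # Pop every action whose global time stamp has passed the
        # release deadline 'now'; all sites doing this see one order.
        out = []
        while self._heap and self._heap[0][0] <= now:
            out.append(heapq.heappop(self._heap))
        return out

buf = GlobalOrderBuffer()
buf.submit(5.0, "siteB", "move")     # arrives first, stamped later
buf.submit(2.0, "siteA", "rotate")   # arrives second, stamped earlier
released = buf.release(6.0)          # releases in stamp order: 2.0, 5.0
```

The choice of release deadline trades interactivity against consistency: a deadline far enough behind real time absorbs latency jitter at the cost of added delay, which is the fairness trade-off the abstract alludes to.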
KEYWORDS: 3D modeling, Visualization, Cameras, Image segmentation, 3D acquisition, Systems modeling, 3D image reconstruction, 3D image processing, Calibration, Video
Highly accurate avatars of humans promise a new level of realism in engineering and entertainment applications, including areas such as computer-animated movies, computer game development, interactive virtual environments and tele-presence. In order to provide high-quality avatars, new techniques for their automatic acquisition and creation are required. A framework for the capture and construction of arbitrary avatars from image data is presented in this paper. Avatars are automatically reconstructed from multiple static images of a human subject by utilizing image information to reshape a synthetic three-dimensional articulated reference model. A pipeline is presented that combines a set of hardware-accelerated stages into one seamless system. Primary stages in this pipeline include pose estimation, skeleton fitting, body part segmentation, geometry construction and coloring, leading to avatars that can be animated and included in interactive environments. The presented system removes traditional constraints on the initial pose of the captured subject by using silhouette-based modification techniques in combination with a reference model. Results can be obtained in near-real time with very limited user intervention.
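The staged-pipeline structure described above can be sketched generically. The stage names come from the abstract, but the implementations below are placeholders only; the real stages are hardware-accelerated image-processing algorithms, not these toy functions.

```python
def make_avatar_pipeline(stages):
    """Chain named stages into one run; each stage reads the accumulated
    state dict and contributes a new entry that later stages may use.
    (Hypothetical scaffolding, not the paper's implementation.)"""
    def run(images):
        state = {"images": images}
        for name, fn in stages:
            state[name] = fn(state)
        return state
    return run

# Placeholder stages standing in for the real algorithms.
stages = [
    ("pose",         lambda s: {"joints": len(s["images"]) * 2}),
    ("skeleton",     lambda s: {"bones": s["pose"]["joints"] - 1}),
    ("segmentation", lambda s: {"parts": 10}),
    ("geometry",     lambda s: {"mesh": "reshaped reference model"}),
    ("coloring",     lambda s: {"textured": True}),
]
build_avatar = make_avatar_pipeline(stages)
```

The point of the dict-threading design is that each stage depends only on named outputs of earlier stages, which is what lets individual stages be swapped for hardware-accelerated versions without restructuring the whole system.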
KEYWORDS: Human-machine interfaces, Visualization, Virtual reality, Control systems, OpenGL, Data modeling, Visual process modeling, 3D modeling, Algorithm development, Process control
This paper introduces VirtualExplorer, a customizable plugin-based virtual reality framework for immersive scientific data visualization, exploration and geometric modeling. The framework is layered on top of a run-time plugin system and a reconfigurable virtual user interface, and provides a variety of plugin components. The system provides access to scene-graph-based APIs, including Performer and OpenInventor, direct OpenGL support for visualization of time-critical data, as well as collision and generic device managers. Plugins can be loaded, disabled, enabled or unloaded at any time, triggered either through pre-defined events or through an external Python-based interface. The virtual user interface uses pre-defined geometric primitives that can be customized to meet application-specific needs. The entire widget set can be reconfigured dynamically on a per-widget basis or as a whole through a style manager. The system is being developed with a variety of application areas in mind, but its main emphasis is on user-guided data exploration and high-precision engineering design.
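The load/enable/disable/unload lifecycle of such a run-time plugin system can be sketched as a small registry. All names here are hypothetical and stand in for VirtualExplorer's actual C++ machinery; the sketch only shows how disabled plugins stay resident but stop receiving events.

```python
class PluginManager:
    """Minimal plugin registry sketch: plugins can be loaded, enabled,
    disabled or unloaded at any time (names hypothetical)."""

    def __init__(self):
        self._plugins = {}  # name -> (plugin object, enabled flag)

    def load(self, name, plugin):
        # A newly loaded plugin starts out enabled.
        self._plugins[name] = (plugin, True)

    def set_enabled(self, name, enabled):
        # Disabling keeps the plugin resident but mutes its event handling.
        plugin, _ = self._plugins[name]
        self._plugins[name] = (plugin, enabled)

    def unload(self, name):
        self._plugins.pop(name, None)

    def dispatch(self, event):
        # Forward an event (e.g. a frame tick or UI action) to every
        # enabled plugin's handler.
        return [plugin.handle(event)
                for plugin, enabled in self._plugins.values() if enabled]
```

In the framework described above, `dispatch` would be driven by pre-defined events or the external Python-based interface; the separation of "loaded" from "enabled" is what allows reconfiguration without tearing plugins down.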
KEYWORDS: Computer aided design, Solid modeling, Visualization, 3D modeling, Systems modeling, Mathematical modeling, Control systems, Virtual reality, Finite element methods, Computing systems
This paper introduces the Designers Workbench, a semi-immersive virtual environment for two-handed modeling, sculpting and analysis tasks. The paper outlines the fundamental tools, design metaphors and hardware components required for an intuitive real-time modeling system. As companies focus on streamlining productivity to cope with global competition, the migration to computer-aided design (CAD), computer-aided manufacturing, and computer-aided engineering systems has established a new backbone of modern industrial product development. However, a product design traditionally originates from a clay model that, after digitization, forms the basis for the numerical description of CAD primitives. The Designers Workbench aims at closing this technology or 'digital' gap experienced by design and CAD engineers by transforming the classical design paradigm into its fully integrated digital and virtual analog, allowing collaborative development in a semi-immersive virtual environment. This project emphasizes two key components from the classical product design cycle: freeform modeling and analysis. In the freeform modeling stage, content creation in the form of two-handed sculpting of arbitrary objects using polygonal, volumetric or mathematically defined primitives is emphasized, whereas the analysis component provides the tools required for the pre- and post-processing steps of finite element analysis tasks applied to the created models.
The described virtual clay modeling project explores the use of virtual environments (VEs) for the simulation of two-handed clay modeling and sculpting tasks. Traditional clay modeling concepts are implemented and enhanced with new digital design tools leveraging virtual reality (VR) and new input device technology. In particular, the creation of an intuitive and natural work environment for comfortable and unconstrained modeling is emphasized. VR projection devices such as the Immersive WorkBench, shutter glasses, and pinch gloves, equipped with six-degrees-of-freedom trackers, are used to apply various virtual cutting tools to a volumetric data structure. The employment of an octree as the underlying data structure for volume representation and manipulation in immersive environments allows real-time modeling of solids utilizing a suite of either geometrically or mathematically defined cutting and modeling tools.
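The octree-based carving idea can be sketched as follows. This is a simplified illustration under stated assumptions (a spherical cutting tool and conservative sphere-vs-cell overlap tests), not the project's implementation: a cell that the tool fully contains is emptied, a cell the tool misses is left alone, and a partially overlapped cell is subdivided on demand down to a maximum depth.

```python
class OctreeNode:
    """Simplified octree solid (hypothetical sketch): a cubic cell is
    full, empty, or subdivided into 8 children."""

    def __init__(self, center, half, depth):
        self.center, self.half, self.depth = center, half, depth
        self.full, self.children = True, None

    def carve(self, tool_center, tool_radius, max_depth):
        # Conservative tests against the cell's circumscribed sphere.
        dist = sum((c - t) ** 2
                   for c, t in zip(self.center, tool_center)) ** 0.5
        cell_radius = self.half * 3 ** 0.5
        if dist - cell_radius >= tool_radius:
            return                                  # tool misses this cell
        if dist + cell_radius <= tool_radius:
            self.full, self.children = False, None  # cell fully cut away
            return
        if self.depth >= max_depth:
            return                                  # leaf: keep material
        if self.children is None:                   # subdivide on demand
            h = self.half / 2
            self.children = [OctreeNode(
                (self.center[0] + dx * h, self.center[1] + dy * h,
                 self.center[2] + dz * h), h, self.depth + 1)
                for dx in (-1, 1) for dy in (-1, 1) for dz in (-1, 1)]
        for child in self.children:
            child.carve(tool_center, tool_radius, max_depth)

    def volume(self):
        if self.children is not None:
            return sum(c.volume() for c in self.children)
        return (2 * self.half) ** 3 if self.full else 0.0
```

Because subdivision happens only where a tool actually cuts, memory and update cost concentrate along the sculpted surface, which is what makes this representation suitable for real-time two-handed modeling.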