Image-based modeling and rendering is currently one of the most challenging topics in Computer Vision and Photogrammetry. The key issue is establishing dense correspondences between two images, known as dense matching or stereo matching. Among dense matching algorithms, Semi-Global Matching (SGM) is arguably one of the most promising for real-time stereo vision. Unlike Dynamic Programming (DP) approaches that aggregate matching costs only along the epipolar line, SGM aggregates costs along several (eight or sixteen) directions, which eliminates the classical “streaking” problem and greatly improves accuracy and efficiency. In this paper, we aim to further improve the accuracy of SGM without increasing its computational cost. We propose setting the penalty parameters adaptively according to image edges extracted by edge detectors. We carried out experiments on the standard Middlebury stereo dataset and evaluated our modified method against the ground truth. The results show a noticeable accuracy improvement over fixed penalty parameters, with no increase in runtime.
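As a rough illustration of the adaptive-penalty idea (not the exact implementation from the paper), the following Python sketch lowers the SGM smoothness penalty P2 wherever an edge detector fires, so disparity discontinuities are allowed to coincide with image edges; the function name, parameter values, and edge detector are all assumptions made for illustration.

```python
import numpy as np
from scipy import ndimage  # assumed available for the edge detector

def adaptive_penalties(gray, p1=8, p2_base=64, p2_min=16, edge_thresh=30.0):
    """Sketch: lower the SGM smoothness penalty P2 across image edges so
    disparity discontinuities are allowed to follow intensity edges.
    All names and parameter values are illustrative, not from the paper."""
    g = gray.astype(np.float32)
    # Edge strength from a simple Sobel detector (any edge detector would do).
    mag = np.hypot(ndimage.sobel(g, axis=1), ndimage.sobel(g, axis=0))
    # P1 stays fixed; P2 keeps its large base value in smooth regions and
    # drops to p2_min wherever an edge is detected.
    p2 = np.where(mag > edge_thresh, p2_min, p2_base).astype(np.float32)
    return np.full_like(p2, p1), p2
```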
KEYWORDS: Wavelets, Data modeling, Computed tomography, Discrete wavelet transforms, Wavelet transforms, Reconstruction algorithms, Data storage, 3D modeling, Decision support systems, Data compression
Discrete Wavelet Transforms have proven to be a very effective tool for compressing large data sets. Previous
research has sought to select a subset of wavelet coefficients based on a given space constraint. These approaches
require non-negligible overhead to maintain location information associated with the retained coefficients. Our
approach identifies entire wavelet coefficient subbands whose elimination minimizes the total error
introduced into the reconstruction. Further space reduction (at the cost of additional error) can be achieved by
encoding some or all of the retained coefficients as a byte index into a floating-point lookup table. We demonstrate
how our approach can yield the same global sum error using less space than traditional multiresolution (MR) implementations.
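A minimal Python sketch of the two ideas, assuming a 1-D signal, the PyWavelets package, and hypothetical function names: whole detail subbands are dropped cheapest-error-first until a space budget is met, and the surviving coefficients can optionally be re-encoded as one-byte indices into a small floating-point lookup table.

```python
import numpy as np
import pywt  # PyWavelets, assumed available; any orthogonal wavelet works

def prune_subbands(data, wavelet="haar", keep_bytes=4096):
    """Sketch: drop entire detail subbands, cheapest error first, until the
    retained coefficients fit within a space budget (names are hypothetical)."""
    coeffs = pywt.wavedec(np.asarray(data, dtype=np.float64), wavelet)
    # For an orthonormal wavelet, dropping a subband adds the sum of its
    # squared coefficients to the squared reconstruction error.
    errors = [float(np.sum(c ** 2)) for c in coeffs[1:]]   # detail subbands
    cost = sum(c.nbytes for c in coeffs)
    dropped = set()
    for idx in np.argsort(errors):                         # cheapest first
        if cost <= keep_bytes:
            break
        band = idx + 1                                     # index 0 is the approximation
        cost -= coeffs[band].nbytes
        coeffs[band] = np.zeros_like(coeffs[band])
        dropped.add(band)
    return coeffs, dropped

def byte_index_encode(coeffs, table_size=256):
    """Sketch: re-encode retained coefficients as one byte each, an index into
    a small floating-point lookup table (a plain uniform quantizer)."""
    flat = np.concatenate([c.ravel() for c in coeffs])
    lo, hi = float(flat.min()), float(flat.max())
    table = np.linspace(lo, hi, table_size)
    step = (hi - lo) / (table_size - 1) or 1.0
    indices = np.round((flat - lo) / step).astype(np.uint8)
    return indices, table                                  # decode with table[indices]
```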
Interactive visualization of very large data sets remains a challenging problem for the visualization community.
One promising solution uses adaptive resolution (AR) representations of the data. In this model, important
regions of the data are identified through reconstruction-error analysis and are shown in greater detail. During the
visualization, regions with higher error are rendered with high resolution data, while areas of low error are
rendered at a lower resolution. We have developed a new dynamic adaptive resolution rendering algorithm along
with software support libraries. These libraries are designed to extend the VisIt visualization environment by
adding support for adaptive resolution data. VisIt supports domain decomposition of data, which we use to
define our AR representation. We show that with this model, we achieve performance gains while maintaining
error tolerances specified by the scientist.
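The per-domain selection logic might look like the following sketch, with hypothetical names rather than VisIt's actual API: each domain is rendered at the coarsest stored level whose recorded reconstruction error still satisfies the scientist's tolerance.

```python
def select_resolutions(domain_errors, tolerance):
    """Sketch: for each domain, pick the coarsest stored level whose recorded
    reconstruction error still meets the tolerance.  `domain_errors[d]` is a
    list of errors indexed by level, with level 0 = full resolution (error 0).
    Names are hypothetical, not VisIt's API."""
    choice = {}
    for domain, per_level in domain_errors.items():
        level = 0
        for lvl in range(len(per_level) - 1, -1, -1):   # coarsest level first
            if per_level[lvl] <= tolerance:
                level = lvl
                break
        choice[domain] = level
    return choice

# Example: "d0" must be rendered at full resolution, "d1" can stay coarse.
print(select_resolutions({"d0": [0.0, 0.8, 2.5], "d1": [0.0, 0.1, 0.3]}, 0.5))
```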
Visualization and analysis of very large datasets remains a significant challenge for the visualization community.
Scientists have tried various techniques to cope with large data. Multiresolution data models reduce the size of the data using techniques such as mesh decimation, wavelet transformation, or data compression. A low-resolution representation, however, raises questions about how faithfully it represents the data at that resolution level. We address this issue by presenting extensions to the VisIt visualization environment that enable the scientist to visualize
both the multiresolution data and the uncertainty information associated with its lower-resolution representations.
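One simple way to produce such uncertainty information, sketched below with hypothetical names (this is not necessarily the specific method used by the VisIt extensions), is to record a per-cell RMS deviation alongside each block-averaged low-resolution cell.

```python
import numpy as np

def coarsen_with_uncertainty(field, factor=4):
    """Sketch: build a low-resolution representation by block averaging and
    record, for each coarse cell, the RMS deviation of the original samples
    from that average.  The visualization can then present the coarse data
    together with this uncertainty (e.g. as opacity or a second hue).
    Assumes a 3-D array whose dimensions are divisible by `factor`."""
    nx, ny, nz = (s // factor for s in field.shape)
    blocks = field.reshape(nx, factor, ny, factor, nz, factor)
    coarse = blocks.mean(axis=(1, 3, 5))
    dev = blocks - coarse[:, None, :, None, :, None]
    rms = np.sqrt((dev ** 2).mean(axis=(1, 3, 5)))
    return coarse, rms
```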
KEYWORDS: Visualization, Data storage, Data centers, Volume visualization, Java, Data processing, Operating systems, Data acquisition, Visual analytics, Time metrology
In this paper we propose a technique called storage-aware spatial prefetching that can provide significant performance
improvements for out-of-core visualization. This approach is motivated by file chunking, in which a
multidimensional data file is reorganized into multidimensional sub-blocks that are stored linearly in the file.
This increases the likelihood that data close in the n-dimensional volume represented by the file will be closer
together in the physical file. Chunking has been shown to improve typical access to such data, but it
requires a complete reorganization of the file, and efficient access is sometimes achieved only if several different
chunking organizations are maintained simultaneously. Our approach can be thought of as on-the-fly chunking,
but it requires neither a physical reorganization of the data nor multiple copies in different formats. We also
describe an implementation of our technique and present performance results that are very promising.
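The core idea can be sketched as follows, with hypothetical names rather than the actual implementation: when any element of a logical chunk is touched, compute the contiguous byte runs that chunk occupies within the unmodified row-major file and prefetch them together.

```python
def chunk_runs(shape, origin, chunk, itemsize=4):
    """Sketch of on-the-fly chunking for a 3-D array stored row-major in an
    unmodified flat file: compute the contiguous byte runs that make up one
    logical sub-block, so they can be prefetched together as soon as any
    element inside the block is touched.  Names are hypothetical."""
    nx, ny, nz = shape
    ox, oy, oz = origin
    cx, cy, cz = chunk
    runs = []
    for i in range(ox, ox + cx):
        for j in range(oy, oy + cy):
            start = ((i * ny + j) * nz + oz) * itemsize   # offset of (i, j, oz)
            runs.append((start, cz * itemsize))           # cz contiguous values
    return runs

# Example: byte runs covering the 16x16x16 block at (32, 32, 32)
# of a 256x256x256 float32 volume stored in a flat binary file.
runs = chunk_runs((256, 256, 256), (32, 32, 32), (16, 16, 16))
```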
Visualization of multidimensional data presents special challenges for the design of efficient out-of-core data access. Elements that are nearby in the visualization may not be nearby in the underlying data file, which can severely tax the operating system’s disk cache. The Granite Scientific Database System addresses these problems because it is aware of the organization of the data on disk and it knows the visualization method’s pattern of access. The access pattern is expressed using a toolkit of iterators that both describe the pattern and perform the iteration itself. Because our system has knowledge of both the data organization and the access pattern, we are able to provide significant performance improvements while hiding the details of out-of-core access from the visualization programmer. This paper presents a brief description of our disk access system, placing special emphasis on the benefits it offers to a visualization application. We describe a simple demonstration application that shows dramatic performance improvements when used with the 39 GB Visible Woman Dataset.
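A toy sketch of the iterator idea, with hypothetical names rather than Granite's actual API: the iterator object carries an explicit description of its traversal order, which the storage layer can inspect for prefetching, and the same object then performs the traversal.

```python
import itertools

def slab_iterator(shape, slowest_axis=2):
    """Sketch (not the Granite API): an iterator that both *describes* its
    access pattern, so the storage layer can prefetch whole slabs ahead of
    time, and *performs* the traversal in that declared order."""
    axes = [slowest_axis] + [a for a in range(len(shape)) if a != slowest_axis]
    for multi in itertools.product(*(range(shape[a]) for a in axes)):
        idx = [0] * len(shape)
        for a, v in zip(axes, multi):
            idx[a] = v
        yield tuple(idx)

# Example: visit a small volume slice by slice along the z axis;
# z varies slowest, so each z-slab is exhausted before the next one is read.
for idx in slab_iterator((4, 4, 2), slowest_axis=2):
    pass  # idx is an (x, y, z) tuple
```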
We present an application case study for visualizing large data sets of time-series spatial data. Our application is built on a flexible, object-oriented framework that supports the visualization of dynamic internal wave propagation in the Earth's tropopause. Our data model uses a multiresolution hierarchy that integrates spatial and temporal components, and the data include error information at each level of the hierarchy. The application provides the scientist with the tools necessary to examine, query, and interact with visualizations of the data of interest.
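A hedged sketch of such a data model, using hypothetical names rather than the application's actual classes: each node of the hierarchy covers a spatial extent over a time window, stores data at one resolution level, and records the reconstruction error of that level.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class MRNode:
    """Sketch (hypothetical names) of one node in a spatio-temporal
    multiresolution hierarchy: a spatial extent over a time window, data at
    one resolution level, and the reconstruction error recorded for that
    level so queries can trade detail for speed."""
    spatial_extent: tuple            # ((xmin, xmax), (ymin, ymax), (zmin, zmax))
    time_window: tuple               # (t_start, t_end)
    level: int                       # 0 = full resolution
    error: float                     # reconstruction error at this level
    data: Optional[object] = None    # samples stored for this node
    children: List["MRNode"] = field(default_factory=list)

    def refine(self, tolerance: float) -> List["MRNode"]:
        """Return the shallowest nodes whose error meets the tolerance."""
        if self.error <= tolerance or not self.children:
            return [self]
        return [n for child in self.children for n in child.refine(tolerance)]
```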