There has been vast progress in linking semantic information across billions of web pages through the use of ontologies encoded in the Web Ontology Language (OWL), which is based on the Resource Description Framework (RDF). A prime example is Wikipedia, where the knowledge contained in its more than four million pages is encoded in an ontological database called DBpedia (http://wiki.dbpedia.org/). Web-based query tools can retrieve semantic information from DBpedia, encoded in interlinked ontologies, using natural language. This paper will show how this vast context can be used to automate the process of querying images and other geospatial data in support of reporting changes in structures and activities. Computer vision algorithms are selected and provided with context based on natural language requests for monitoring and analysis. The resulting reports provide semantically linked observations from images and 3D surface models.
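As an illustration of the kind of semantic query involved, the sketch below builds a SPARQL request against the public DBpedia endpoint. The `dbo:Airport` and `dbo:city` terms and the Boston resource are illustrative choices, not drawn from the abstract; the HTTP request is only constructed here, not sent:

```python
from urllib.parse import urlencode

# Illustrative SPARQL query: retrieve airports located in a given city,
# the kind of geospatial context that could seed a monitoring request.
query = """
PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?airport ?name WHERE {
  ?airport a dbo:Airport ;
           dbo:city <http://dbpedia.org/resource/Boston> ;
           rdfs:label ?name .
  FILTER (lang(?name) = "en")
}
"""

# DBpedia's public endpoint accepts such a query as an HTTP GET
# (construction only -- no network call is made in this sketch).
request_url = "https://dbpedia.org/sparql?" + urlencode(
    {"query": query, "format": "application/sparql-results+json"})
print(request_url[:40])
```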
Over the last several years, a new representation for geometry has been developed, based on a 3D probability distribution of surface position and appearance. This representation can be constructed from multiple images, using both still and video data. The probability of 3D surface position is estimated by an online algorithm using Bayesian inference. The probability of a point belonging to a surface is updated according to its success in accounting for the intensity of the current image at the point's projected image location. A Gaussian mixture is used to model image appearance. This update process can be proved to converge under relatively general conditions that are consistent with aerial imagery. No explicit surfaces are extracted, only discrete surface probabilities. This paper describes the application of this representation to object recognition, based on Bayesian compositional hierarchies.
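A minimal sketch of the kind of per-point Bayesian update the abstract describes. The function names, the specific mixture, and the uniform background density assumed for the non-surface hypothesis are all illustrative, not taken from the paper:

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Density of a univariate normal at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def mixture_pdf(x, components):
    """Density of a Gaussian mixture; components is a list of (weight, mu, sigma)."""
    return sum(w * gaussian_pdf(x, mu, s) for w, mu, s in components)

def update_surface_probability(p_surface, intensity, appearance_mix,
                               background_density=1.0 / 256):
    """One Bayesian update of a point's surface probability given the observed
    image intensity at its projected image location.

    Likelihood under the surface hypothesis comes from the point's
    Gaussian-mixture appearance model; under the non-surface hypothesis a
    uniform density over 8-bit intensities is assumed (an illustrative choice).
    """
    like_surface = mixture_pdf(intensity, appearance_mix)
    numerator = p_surface * like_surface
    return numerator / (numerator + (1.0 - p_surface) * background_density)

# Example: a point whose appearance model expects bright intensities.
mix = [(0.7, 200.0, 10.0), (0.3, 180.0, 20.0)]
p = 0.5
for obs in [198, 203, 195]:          # observations consistent with the model
    p = update_surface_probability(p, obs, mix)
print(round(p, 4))                   # probability climbs toward 1
```

Observations inconsistent with the appearance model drive the probability down instead, which is how points that fail to account for image intensities are suppressed.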
Over the last 30 years, there have been steady and reasonably extensive R&D activities applied to the problem of interpreting image data automatically. Much of this research has been directed either at automatic target recognition (ATR) or at the automation of the functions of intelligence image analysis. These efforts have greatly advanced our understanding of the nature of image content and what is necessary to describe it using computer algorithms. On the other hand, the goal of automatic interpretation of complex scenes and the recognition of targets in the presence of clutter and occlusion is still not formalized at the level of an engineering discipline. This paper reviews the key issues in achieving the automation of image understanding and provides examples of both successes and still-unattainable interpretation capabilities.
The goal of automatic change detection in aerial imagery has attracted considerable research effort. A significant advance in achieving reliable change detection is offered by the application of spatial context derived from a 3D site model. The use of 3D model context is a key approach of the RADIUS program, a five-year ARPA project to develop tools to assist an image analyst in extracting intelligence information from aerial images. This paper introduces the concept of an observation event, which provides a uniform mechanism for coupling the linguistic framework of intelligence concepts to image observations and associated image feature extraction and analysis processes. A blackboard-style processing architecture is being developed to compute the state of observation events using the spatial context of a 3D site model. The observation event class hierarchy is described along with experimental results of event computation in aerial images.
Gibbs sampling, and other stochastic simulation methods, have recently received considerable attention in Bayesian statistics. Significant progress has been made in developing techniques for sampling from non-conjugate distributions and in analyzing theoretical and practical aspects of convergence. One of the strengths of Gibbs sampling is the way it can simplify the expression of data models by replacing the evaluation of the integrals needed to compute the relevant posterior quantities with sampling from multidimensional distributions. This has opened the way to solving complex Bayesian models that are not analytically tractable. In this paper we show how to separate variability in model parameters from variability due to the model extraction process by fitting hierarchical models to image sequences using Gibbs sampling. First, we review some of the recent developments in Gibbs sampling. Then we describe some of our experimental work using Gibbs sampling to extract geometric parameter distributions from industrial images.
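As a hedged illustration of Gibbs sampling on a hierarchical model (not the authors' actual model), the sketch below fits a normal-normal hierarchy: each image yields noisy measurements of a geometric parameter whose true value varies image-to-image around a common hyper-mean. The measurement and between-image variances are held fixed so both conditional distributions are conjugate normals:

```python
import random
import statistics

random.seed(0)

# Synthetic data: 8 "images", each giving 20 noisy measurements of a
# parameter theta_i drawn around a common hyper-mean.
true_mu, tau, sigma = 10.0, 1.0, 0.5     # hyper-mean, between-image sd, measurement sd
images = []
for _ in range(8):
    theta = random.gauss(true_mu, tau)
    images.append([random.gauss(theta, sigma) for _ in range(20)])

# Gibbs sweeps over the full conditionals:
#   theta_i | y_i, mu  ~  N(post_mean, post_var)      [conjugate normal]
#   mu      | theta    ~  N(mean(theta), tau^2 / 8)   [flat prior on mu]
mu = 0.0
mu_samples = []
for sweep in range(2000):
    thetas = []
    for y in images:
        n = len(y)
        post_var = 1.0 / (n / sigma**2 + 1.0 / tau**2)
        post_mean = post_var * (sum(y) / sigma**2 + mu / tau**2)
        thetas.append(random.gauss(post_mean, post_var**0.5))
    mu = random.gauss(statistics.mean(thetas), tau / len(images)**0.5)
    if sweep >= 500:                     # discard burn-in
        mu_samples.append(mu)

print(round(statistics.mean(mu_samples), 2))  # close to the true hyper-mean
```

The key move the abstract describes is visible here: the posterior for the hyper-mean `mu` (parameter variability) is obtained by sampling rather than by evaluating any integral, and it is cleanly separated from the per-image measurement noise `sigma` (extraction-process variability).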
The relationship between photogrammetry and computer vision is examined. This paper reviews the central issues for both computer vision and photogrammetry, identifying shared goals as well as distinct approaches. Past interaction has been limited by differences both in terminology and in the basic philosophy concerning the manipulation of projection equations. The application goals and mathematical techniques of the two fields have considerable overlap, so improved dialogue is essential.
RCDE is a software environment for the development of image understanding algorithms. The application focus of RCDE is on image exploitation where the exploitation tasks are supported by 2D and 3D models of the geographic site being analyzed. An initial prototype for RCDE is SRI's Cartographic Modeling Environment (CME). This paper reviews the CME design and illustrates the application of CME to site modeling scenarios.
Most current object recognition systems are based on a 3D model which is used to describe the image projection of an object over all viewpoints. We introduce a new technique which can predict the geometry of an object under projective transformation. The object geometry is represented by a set of corresponding features taken from two views. The projected geometry can be constructed in any third view, using a viewpoint invariant derived from the correspondences.
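The abstract does not give the invariant's algebraic form; one classical realization of two-view-to-third-view point transfer intersects the two epipolar lines that a correspondence induces in the third view. A sketch under that assumption, using three hypothetical translation-only cameras so the prediction can be checked against a direct projection:

```python
import numpy as np

def skew(v):
    """3x3 cross-product matrix: skew(v) @ w == np.cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def fundamental(P_src, P_dst):
    """Fundamental matrix mapping points in the source view to their
    epipolar lines in the destination view: l_dst = F @ x_src."""
    C = np.linalg.svd(P_src)[2][-1]      # source camera centre (null vector)
    e = P_dst @ C                        # epipole: image of that centre
    return skew(e) @ P_dst @ np.linalg.pinv(P_src)

def transfer(x1, x2, P1, P2, P3):
    """Predict the image of a point in view 3 from its images in views 1
    and 2 by intersecting the two epipolar lines it induces there."""
    l_from_1 = fundamental(P1, P3) @ x1
    l_from_2 = fundamental(P2, P3) @ x2
    x3 = np.cross(l_from_1, l_from_2)    # homogeneous line intersection
    return x3 / x3[2]

# Three hypothetical cameras (pure translations, chosen for checkability).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[1.0], [0.0], [0.0]])])
P3 = np.hstack([np.eye(3), np.array([[0.0], [1.0], [0.0]])])

X = np.array([0.5, 0.2, 3.0, 1.0])       # a 3D scene point
x1, x2, x3_true = P1 @ X, P2 @ X, P3 @ X
x3_pred = transfer(x1, x2, P1, P2, P3)
print(np.allclose(x3_pred, x3_true / x3_true[2]))  # → True
```

Note that the transfer itself never uses the 3D point: only the two image correspondences and the inter-view geometry are needed, which is the spirit of the viewpoint-invariant construction the abstract describes. (Epipolar transfer degenerates when the two lines coincide, e.g. for points on the trifocal plane.)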
KEYWORDS: 3D modeling, Visual process modeling, Image segmentation, Model-based design, Image processing, Systems modeling, Digital image processing, 3D image processing, Object recognition, Cameras
Currently there is rapid progress in the development of model-based object recognition techniques. Current techniques and capabilities are reviewed and an example of model-based recognition for aerial reconnaissance is described.