KEYWORDS: Machine learning, Computer vision technology, Computer graphics, Education and training, Data modeling, Field emission displays, Deep learning, Visualization, RGB color model, Performance modeling, Principal component analysis, Image classification
Synthetically generated imagery holds the promise of being a panacea for the challenges of real-world datasets, yet deep learning models trained on synthetic data are frequently observed to perform worse than those trained on real measured imagery. In this study we present and illustrate several statistical metrics, measures, and visualization tools based on the distance and similarity between the empirical distributions of real and synthetic data in the latent feature embedding space. These tools provide a quantitative understanding of the image-domain distribution discrepancies that hamper the generation of performant simulated datasets. We also demonstrate their practical application in a novel study comparing the latent-space embedding distributions of real imagery, pristine synthetic imagery, and synthetic imagery modified by physics-based degradation models. The results may assist deep learning practitioners and synthetic imagery modelers in evaluating distributional dissimilarity in the latent embedding space and improving model performance when using simulation tools to generate synthetic training data.
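As a concrete illustration of one such distributional distance, the sketch below computes the Fréchet distance between Gaussian fits of two sets of latent embedding vectors. The function name and the choice of this particular metric are illustrative assumptions, not the paper's exact tooling.

```python
import numpy as np
from scipy import linalg

def frechet_distance(real_emb, synth_emb):
    """Fréchet distance between Gaussian fits of two embedding sets.

    real_emb, synth_emb: (N, D) arrays of latent feature vectors,
    e.g. penultimate-layer activations of a trained classifier.
    """
    mu_r, mu_s = real_emb.mean(axis=0), synth_emb.mean(axis=0)
    cov_r = np.cov(real_emb, rowvar=False)
    cov_s = np.cov(synth_emb, rowvar=False)
    # Matrix square root of the covariance product; keep the real part
    # to discard small imaginary components from numerical error.
    covmean = linalg.sqrtm(cov_r @ cov_s).real
    diff = mu_r - mu_s
    return diff @ diff + np.trace(cov_r + cov_s - 2.0 * covmean)
```

A smaller distance between the real and synthetic embedding distributions would, under this metric, suggest less of the domain gap the abstract describes.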
Traditional motion-based trackers often fail in maritime environments due to a lack of image features to help stabilize video. In this paper, we describe a computationally efficient approach that automatically detects, tracks, and classifies different objects within aerial full motion video (FMV) sequences in the maritime domain. A multi-layered saliency detector is utilized to first remove any image regions likely belonging to background categories (i.e., calm water), followed by progressively pruning out distractor categories such as wake, debris, and reflection. This pruning stage combines features generated at the level of each individual pixel with 2D descriptors formulated around the outputs of prior stages grouped into connected components. Additional false positive reduction is performed by aggregating detector outputs across multiple frames, by formulating object tracks from these detections, and, lastly, by classifying the resultant tracks using machine learning techniques. As a by-product, our system also produces image descriptors specific to each individual object, which are useful in later pipeline elements for appearance-based indexing and matching.
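A minimal sketch of the kind of pixel-level background suppression and connected-component grouping the first pruning stage describes. The saliency feature used here (deviation from a smoothed background estimate) and all thresholds are hypothetical stand-ins for the paper's multi-layered detector.

```python
import numpy as np
from scipy import ndimage

def prune_background(frame_gray, saliency_thresh=3.0, min_area=25):
    """First cascade stage: suppress calm-water background, then group
    surviving pixels into connected components for later pruning stages."""
    # Hypothetical pixel-level saliency: residual against a smoothed
    # background estimate of the frame.
    background = ndimage.uniform_filter(frame_gray.astype(float), size=31)
    residual = np.abs(frame_gray - background)
    salient = residual > saliency_thresh * residual.std()

    labels, n = ndimage.label(salient)
    components = []
    for i in range(1, n + 1):
        mask = labels == i
        if mask.sum() < min_area:
            continue  # tiny blobs (noise, debris) pruned here
        ys, xs = np.nonzero(mask)
        # 2D descriptor around the component (bounding box + area),
        # feeding later distractor-pruning and track-classification stages.
        components.append({
            "bbox": (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())),
            "area": int(mask.sum()),
        })
    return components
```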
Image change detection has long been used to detect significant events in aerial imagery, such as the arrival or departure of vehicles. Usually only the underlying structural changes are of interest, particularly for movable objects, and the challenge is to differentiate changes of intelligence value (change detections) from incidental appearance changes (false detections). However, existing methods for automated change detection continue to be challenged by nuisance variations in operating conditions such as the sensor (camera exposure, camera viewpoint), the targets (occlusions, type), and the environment (illumination, shadows, weather, seasons). To overcome these problems, we propose a novel vehicle change detection method based on detection response maps (DRMs). The detector serves as an advanced filter that normalizes the images being compared specifically for object-level change detection (OLCD). In contrast to current methods that compare pixel intensities, the proposed DRM-OLCD method is more robust to nuisance changes and variations in image appearance. We demonstrate object-level change detection for vehicles appearing and disappearing in electro-optical (EO) visual imagery.
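A minimal sketch of the object-level comparison idea: run the same vehicle detector on two co-registered images and difference the resulting response maps rather than the raw pixel intensities. The thresholding rule and all names below are assumptions for illustration, not the paper's exact DRM-OLCD formulation.

```python
import numpy as np

def drm_change(drm_t0, drm_t1, appear_thresh=0.5):
    """Object-level change from two co-registered detection response maps.

    drm_t0, drm_t1: (H, W) detector confidence maps for the two epochs,
    e.g. sliding-window or CNN vehicle-detector scores. Because both maps
    come from the same detector, nuisance appearance variation is largely
    normalized away before the comparison.
    """
    diff = drm_t1 - drm_t0
    appeared = diff > appear_thresh      # strong response only at time t1
    disappeared = diff < -appear_thresh  # strong response only at time t0
    return appeared, disappeared
```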
KEYWORDS: Databases, Feature extraction, 3D modeling, Data modeling, Image understanding, Roads, Systems modeling, Data storage, 3D image processing, Stereoscopic cameras
A model-supported exploitation (MSE) system presents unique challenges to database implementation. An MSE database must be designed to store complex information such as geospatial coordinates, imagery, 3D geometric models, camera models, annotations, and support for image understanding (IU) algorithms. This paper presents concepts from the MSE database implemented within the research and development for image understanding systems (RADIUS) project. Previous papers detail the storage objectives of RADIUS as well as general discussions of MSE database requirements. This paper explores performance enhancements to the RADIUS testbed database (RTDB) within the scope of these objectives.
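To make the storage problem concrete, the sketch below lays out a hypothetical relational schema for the kinds of objects listed above. The table and column names are invented for illustration and do not reproduce the actual RTDB design.

```python
import sqlite3

# Hypothetical relational layout for MSE-style data: sites with
# geospatial anchors, imagery with camera models, and 3D site models.
conn = sqlite3.connect("mse.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS site (
    site_id  INTEGER PRIMARY KEY,
    name     TEXT,
    lat      REAL,   -- geospatial anchor of the site model
    lon      REAL
);
CREATE TABLE IF NOT EXISTS image (
    image_id INTEGER PRIMARY KEY,
    site_id  INTEGER REFERENCES site(site_id),
    path     TEXT,   -- imagery stored on disk, referenced by path
    camera   BLOB    -- serialized camera model parameters
);
CREATE TABLE IF NOT EXISTS model3d (
    model_id   INTEGER PRIMARY KEY,
    site_id    INTEGER REFERENCES site(site_id),
    geometry   BLOB,  -- serialized 3D geometric model
    annotation TEXT
);
""")
conn.commit()
```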
To enable the transition of exploitation image understanding (IU) technology into near-operational use in the intelligence community, user interfaces must be designed to allow simple, intuitive access to IU functionality and results. The complexity of IU systems, in both required inputs and processing, must be hidden from the user as much as possible to avoid heavy training costs. This paper describes some of the important user interface issues encountered when image understanding algorithms are introduced to an imagery analyst, and discusses some of the solutions that have evolved during the development of the RADIUS testbed. Significant issues we have encountered include algorithm and parameter selection, algorithm execution, visual representation of change, and the display of historical results.
KEYWORDS: Databases, Algorithm development, Data modeling, Data storage, Image understanding, Image processing, Detection and tracking algorithms, Human-machine interfaces, Systems modeling, Data processing
One of the primary functions of the research and development for image understanding systems (RADIUS) testbed is to support the development and use of image understanding (IU) technologies in a model-supported exploitation (MSE) workstation environment. This paper describes a suite of storage capabilities added to the RADIUS testbed as part of the foundation providing this support. We discuss the storage requirements of IU processes and describe a database solution to satisfy them. We present our design, which addresses how to represent the data, what precisely to store, and how to retrieve the stored data. This is followed by a critique of the design.
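Continuing the hypothetical schema sketched earlier, retrieval for an IU process might look like the following; again, the names are illustrative, not the RADIUS design.

```python
# Hypothetical retrieval against the sketch schema above: fetch all
# imagery and 3D models tied to a site, as an IU process would before
# running, e.g., change detection.
def load_site(conn, site_id):
    images = conn.execute(
        "SELECT image_id, path, camera FROM image WHERE site_id = ?",
        (site_id,),
    ).fetchall()
    models = conn.execute(
        "SELECT model_id, geometry FROM model3d WHERE site_id = ?",
        (site_id,),
    ).fetchall()
    return images, models
```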
ARPA is currently sponsoring five institutions to perform research related to RADIUS. The efforts primarily address the problems of semi-automatic and fully automatic site model construction and change detection. Brief descriptions of the work at each institution are presented.
KEYWORDS: 3D modeling, Image resolution, 3D image processing, Image segmentation, Data modeling, Systems modeling, Image registration, Optimization (mathematics), Image understanding, Computing systems
A new technique for finding object pose in 2D is presented. Given an object model and an image of the object, the algorithm uses a hierarchical approach to quickly prune out areas of the image that do not contain the object. At higher resolutions, hypothesized object positions are refined and pruned according to a match score based on the current resolution. Once the best position hypotheses are obtained at maximum resolution, a second image of the object is used to pinpoint its 3D position. The system was used to correct the positions of hundreds of 3D object models of buildings in outdoor scenes.
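A minimal sketch of the coarse-to-fine pruning idea, assuming OpenCV image pyramids and normalized cross-correlation as a stand-in match score; the paper's actual score and refinement procedure differ.

```python
import numpy as np
import cv2

def coarse_to_fine_search(image, template, levels=3, keep=10):
    """Hierarchical 2D pose search: prune hypotheses at coarse resolution,
    then refine survivors within a small window at each finer level."""
    pyr_img, pyr_tpl = [image], [template]
    for _ in range(levels - 1):
        pyr_img.append(cv2.pyrDown(pyr_img[-1]))
        pyr_tpl.append(cv2.pyrDown(pyr_tpl[-1]))

    # Coarsest level: score everywhere, keep only the best few hypotheses.
    score = cv2.matchTemplate(pyr_img[-1], pyr_tpl[-1], cv2.TM_CCOEFF_NORMED)
    flat = np.argsort(score.ravel())[::-1][:keep]
    hyps = [np.unravel_index(i, score.shape) for i in flat]

    # Refine each surviving hypothesis up the pyramid.
    for lvl in range(levels - 2, -1, -1):
        s = cv2.matchTemplate(pyr_img[lvl], pyr_tpl[lvl], cv2.TM_CCOEFF_NORMED)
        refined = []
        for (y, x) in hyps:
            # Map coordinates to the finer level and clamp to the score map.
            y = min(2 * y, s.shape[0] - 1)
            x = min(2 * x, s.shape[1] - 1)
            y0, y1 = max(0, y - 2), min(s.shape[0], y + 3)
            x0, x1 = max(0, x - 2), min(s.shape[1], x + 3)
            win = s[y0:y1, x0:x1]
            dy, dx = np.unravel_index(np.argmax(win), win.shape)
            refined.append((y0 + dy, x0 + dx))
        hyps = refined
    return hyps  # best 2D position hypotheses at full resolution
```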
RCDE is a software environment for the development of image understanding algorithms. The application focus of RCDE is on image exploitation, where the exploitation tasks are supported by 2D and 3D models of the geographic site being analyzed. An initial prototype for RCDE is SRI's Cartographic Modeling Environment (CME). This paper reviews the CME design and illustrates the application of CME to site modeling scenarios.