Sensor data fusion has significant potential for advancing the discovery, processing, and inspection of engineering materials. This paper reviews recent developments in data fusion with respect to materials inspection, highlights potential areas for growth in the materials domain, and presents results from an application of matching component analysis (MCA). The main contributions of the paper are an analysis of current fusion methods that uncovers challenges and opportunities with respect to two inspection modalities, scanning acoustic microscopy (SAM) and eddy current (EC) testing, and an extension of MCA, which was previously developed for other image modalities. The presentation of MCA highlights its benefits relative to a baseline method of SAM-EC fusion using the Multi-Scale Mixed Modality Microstructure Titanium Assessment Characterization (M4TAC) challenge dataset. Example results are presented along with motivations for ongoing enhancements.
As humans, we perceive the world in three dimensions. However, many militarily relevant sensing capabilities display only two-dimensional information to users in the form of imagery. In this work, we develop and analyze a technique for reconstructing objects in three dimensions from sparse synthetic aperture radar (SAR) data. We analyze the required sampling rates of the proposed techniques and conduct a thorough analysis of the accuracy of our methods.
Automatic target recognition (ATR) is an ongoing topic of research for the Air Force. In this effort, we develop, analyze, and compare template matching and deep learning algorithms for the task of classifying occluded targets in light detection and ranging (LiDAR) data. Specifically, we analyze convolutional sparse representations (CSR) and convolutional neural networks (CNN). We explore the strengths and weaknesses of each algorithm separately, then improve the algorithms, and finally provide a comprehensive comparison of the developed tools. To conduct this final comparison, we extend the functionality of current LiDAR simulators to include our occlusion creator and parallelize our data simulation tools for use on DoD high-performance computers. Our results demonstrate that, for this problem, a DenseNet trained with images containing representative clutter outperforms both a basic CNN and the CSR approach.
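As a rough illustration of the template-matching family of algorithms compared in this work, the following is a minimal sketch of normalized cross-correlation matching using scikit-image. The scene and template here are hypothetical random stand-ins, not the simulated occluded LiDAR data or the CSR/CNN tools described above.

```python
# Minimal sketch of template matching via normalized cross-correlation,
# one of the two algorithm families compared above. The scene and template
# are hypothetical stand-ins, not the paper's simulated LiDAR data.
import numpy as np
from skimage.feature import match_template

rng = np.random.default_rng(3)

template = rng.normal(size=(16, 16))     # stand-in target signature
scene = rng.normal(size=(128, 128))      # stand-in range/depth image
scene[50:66, 70:86] += 3.0 * template    # embed the target in clutter

response = match_template(scene, template)
row, col = np.unravel_index(response.argmax(), response.shape)
print(row, col)  # expected near (50, 70), the embedded target location
```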
Machine learning techniques such as convolutional neural networks have progressed rapidly in the past few years, propelled by their widespread success in many areas. Convolutional networks work by transforming input images into compact representations that cluster well with the representations of related images. However, these representations are often not human-interpretable, which limits the insight they provide. One field of research, image saliency, attempts to show where in an image a trained network looks to obtain its information. Applied to a well-trained network, saliency methods should reveal a focus on the object matching the label while the background and other objects are ignored. We train and test neural networks on synthetic SAR imagery and use image saliency techniques to investigate the areas of the image on which each network focuses. Doing so should reveal whether the network is using relevant information in the image, such as the shape of the target. We test various image saliency techniques and classification networks, then measure and comment on the resulting saliency maps to gain insight into what the networks learn from simulated SAR data. This investigation is designed to serve as a tool for evaluating future SAR target recognition machine learning algorithms.
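As a concrete illustration of one common saliency technique, the following is a minimal PyTorch sketch of gradient-based saliency: backpropagating the top class score to the input pixels and taking the gradient magnitude. The tiny classifier and random input are hypothetical placeholders, not the specific networks or SAR chips studied in the paper.

```python
# Minimal sketch of gradient-based image saliency, assuming a hypothetical
# classifier and input; not the specific networks evaluated in the paper.
import torch
import torch.nn as nn

model = nn.Sequential(                   # stand-in for a SAR-trained classifier
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(8), nn.Flatten(),
    nn.Linear(8 * 8 * 8, 10),
)
model.eval()

image = torch.randn(1, 1, 64, 64, requires_grad=True)  # stand-in SAR chip

scores = model(image)
top_class = scores.argmax(dim=1).item()
scores[0, top_class].backward()          # gradient of top score w.r.t. pixels

# Saliency map: per-pixel gradient magnitude.
saliency = image.grad.abs().squeeze()
print(saliency.shape)  # torch.Size([64, 64]); large values = influential pixels
```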
The publicly available Moving and Stationary Target Acquisition and Recognition (MSTAR) synthetic aperture radar (SAR) dataset has been a valuable tool in the development of SAR automatic target recognition (ATR) algorithms over the past two decades, leading to excellent target classification results. However, because of the large number of possible sensor parameters, target configurations, and environmental conditions, the SAR operating condition (OC) space is vast, making it infeasible to collect enough measured data to cover the entire OC space. Thus, synthetic data must be generated to augment measured datasets. Studying the fidelity of synthetic data with respect to classification tasks is non-trivial. To that end, we introduce the Synthetic and Measured Paired and Labeled Experiment (SAMPLE) dataset, which consists of SAR imagery from the MSTAR dataset and well-matched synthetic data. By matching target configurations and sensor parameters between the measured and synthetic data, the SAMPLE dataset is ideal for investigating the differences between measured and synthetic SAR imagery. In addition to the dataset, we propose four experimental designs challenging researchers to investigate the best ways to classify targets in measured SAR imagery given synthetic SAR training imagery.
Convolutional neural networks (CNN) are tremendously successful at classifying objects in electro-optical images. However, with synthetic aperture radar (SAR) data, off-the-shelf classifiers are insufficient because limited measured SAR data are available and SAR images are not invariant to object manipulations. In this paper, we use the Synthetic and Measured Paired and Labeled Experiment (SAMPLE) dataset to present an approach to the SAR measured-and-synthetic domain mismatch problem. We pre-process the synthetic and measured data using variance-based joint sparsity despeckling, quantization, and clutter transfer techniques. The t-SNE (t-distributed stochastic neighbor embedding) dimensionality reduction method is used to show that pre-processing the data in the proposed way brings the two-dimensional manifolds representing the measured and synthetic data closer together. A DenseNet classification network is trained with unprocessed and processed data, showing that when no measured data are available for training, it is beneficial to pre-process SAR data with the proposed technique.
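The following is a minimal sketch of how a t-SNE comparison of measured and synthetic image populations can be set up with scikit-learn. The random arrays are hypothetical stand-ins for (pre-processed) SAR chips; the despeckling, quantization, and clutter-transfer steps from the paper are not reproduced here.

```python
# Minimal sketch of a t-SNE comparison between measured and synthetic chips.
# Feature vectors here are random placeholders, not real SAMPLE data.
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
measured = rng.random((200, 64 * 64))   # 200 flattened measured chips (stand-in)
synthetic = rng.random((200, 64 * 64))  # 200 flattened synthetic chips (stand-in)

X = np.vstack([measured, synthetic])
emb = TSNE(n_components=2, perplexity=30, init="pca").fit_transform(X)

plt.scatter(emb[:200, 0], emb[:200, 1], s=5, label="measured")
plt.scatter(emb[200:, 0], emb[200:, 1], s=5, label="synthetic")
plt.legend()
plt.show()  # greater overlap of the two clouds suggests a smaller domain gap
```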
In observing multiple synthetic aperture radar (SAR) images of the same scene, it is apparent that the brightness distributions of the images are not smooth, but rather composed of complicated granular patterns of bright and dark spots, and that these distributions vary from image to image. This salt-and-pepper-like feature of SAR images, called speckle, reduces the contrast in the images and negatively affects texture-based image analysis. This investigation uses the variance-based joint sparsity reconstruction method to form a single SAR image from multiple SAR images. In addition to reducing speckle, the method has the advantage of being non-parametric and can therefore be used in a variety of autonomous applications. Numerical examples include reconstructions from simulated phase history data that yield speckled images, as well as images from the MSTAR T-72 database.
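For context, the following is a minimal sketch of how fully developed speckle is commonly simulated, as unit-mean multiplicative noise on the scene intensity, and of why combining multiple looks at the same scene reduces it. The scene is a hypothetical stand-in, and the variance-based joint sparsity reconstruction itself is not reproduced here.

```python
# Minimal sketch of single-look speckle (multiplicative exponential noise)
# and K-look averaging; illustrative only, not the VBJS method.
import numpy as np

rng = np.random.default_rng(0)

scene = np.ones((128, 128))              # hypothetical smooth scene intensity
scene[32:96, 32:96] = 4.0                # a bright square target

# Single-look speckle: unit-mean exponential multiplier per pixel.
single_look = scene * rng.exponential(scale=1.0, size=scene.shape)

# Averaging K independent looks is equivalent to a gamma(K, 1/K) multiplier,
# which is unit mean with variance 1/K: speckle shrinks as looks accumulate.
K = 8
multilook = scene * rng.gamma(shape=K, scale=1.0 / K, size=scene.shape)
print(single_look.std(), multilook.std())  # the multilook image is smoother
```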
Direct image formation in synthetic aperture radar (SAR) involves processing data modeled as Fourier coefficients along a polar grid. Often in such data acquisition processes, imperfections in the data cannot simply be modeled as additive or even multiplicative noise. In the case of SAR, errors in the data can arise from imprecise estimation of the round-trip wave propagation time, which manifests as phase errors in the Fourier domain. To correct for these errors, we propose a phase correction scheme that relies on both the smoothness characteristics of the image and the phase corrections associated with neighboring pulses, which are possibly highly correlated due to the nature of the data offsets. Our model takes advantage of these correlations and smoothness characteristics simultaneously, yielding a new autofocusing approach, and our algorithm alternates between approximately minimizing the model with respect to the image features and with respect to the phase corrections.
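The following one-dimensional sketch illustrates the phase-error model described above: Fourier data multiplied by a per-pulse phase term exp(i*phi_k), with phi correlated across neighboring samples. The scene, noise levels, and random-walk phase model are illustrative assumptions; the alternating-minimization autofocus algorithm itself is not reproduced.

```python
# Minimal 1D sketch of Fourier-domain phase errors: corrupted data is
# f_hat(k) * exp(1j * phi_k). Illustrative only; not the proposed algorithm.
import numpy as np

rng = np.random.default_rng(1)

x = np.zeros(256)                        # hypothetical piecewise-constant scene
x[100:140] = 1.0
f_hat = np.fft.fft(x)                    # stand-in for measured Fourier data

# Random-walk phase: errors for neighboring pulses are highly correlated,
# mimicking slowly drifting round-trip time estimates.
phi = np.cumsum(rng.normal(scale=0.05, size=f_hat.size))
corrupted = f_hat * np.exp(1j * phi)

blurred = np.fft.ifft(corrupted)
print(np.abs(x - np.abs(blurred)).max())  # phase errors smear the reconstruction
```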
Synthetic aperture radar (SAR) images are corrupted by speckle noise, which manifests as multiplicative gamma-distributed noise and reduces the contrast in imagery, making detection and classification using SAR images difficult. Many speckle reduction techniques aim to reduce this noise without using available prior knowledge about the speckle and the scene contents. In this investigation, we develop a new technique for speckle reduction that incorporates both the statistical model of speckle and a priori knowledge about the sparsity of edges present in the scene. Using the proposed technique, we despeckle a synthetic image, a SAR image from the MSTAR dataset, and a SAR image from the Gotcha dataset. Our results show that our method visually improves the quality of SAR images, and we show quantitatively that it reduces speckle in homogeneous areas beyond comparable methods while maintaining edge and target intensity information.
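To illustrate the two ingredients named above, a multiplicative speckle model and an edge-sparsity prior, the following minimal sketch applies a log transform (turning multiplicative noise additive) followed by total-variation denoising. This is a simplified stand-in under assumed parameters, not the paper's despeckling algorithm.

```python
# Minimal sketch: multiplicative gamma speckle model + edge-sparsity (TV)
# prior via a log transform. A simplified stand-in, not the proposed method.
import numpy as np
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(2)

scene = np.ones((128, 128))
scene[40:90, 40:90] = 5.0                            # hypothetical target region
speckled = scene * rng.gamma(1.0, 1.0, scene.shape)  # single-look speckle

log_img = np.log(speckled + 1e-6)        # multiplicative noise becomes additive
denoised_log = denoise_tv_chambolle(log_img, weight=0.5)  # edge-sparsity prior
despeckled = np.exp(denoised_log)

# Speckle suppression in a homogeneous region (top rows, away from the target).
print(speckled[:40].std(), despeckled[:40].std())
```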