In this paper we investigate the use of interferometric synthetic aperture radar (IFSAR) processing for the 3D reconstruction of radar targets. A major source of reconstruction error is induced by multiple scattering responses in a resolution cell, giving rise to height errors. We present a model for multiple scattering centers and analyze the errors that result using traditional IFSAR height estimation. We present a simple geometric model that characterizes the height error and suggests tests for detecting or reducing this error. We consider the use of image magnitude difference as a test statistic to detect multiple scattering responses in a resolution cell, and we analyze the resulting height error reduction and hypothesis test performance using this statistic. Finally, we consider phase linearity test statistics when three or more IFSAR images are available. Examples using synthetic Xpatch backhoe imagery are presented.
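For reference, the traditional single-baseline height estimate that the error analysis starts from can be sketched as follows. The small-angle scale factor and all names here are illustrative assumptions, not the paper's exact formulation; the magnitude-difference statistic mirrors the test discussed in the abstract:

```python
import numpy as np

def ifsar_height(s1, s2, wavelength, baseline, slant_range, look_angle):
    """Per-pixel height from an IFSAR image pair (s1, s2: complex SAR images
    from two vertically separated phase centers).  Uses the standard
    small-angle relation h ~ lambda * R * sin(theta) * phi / (2*pi*B);
    the exact scale factor depends on the collection geometry."""
    interferogram = s1 * np.conj(s2)
    phase = np.angle(interferogram)  # wrapped interferometric phase
    scale = wavelength * slant_range * np.sin(look_angle) / (2 * np.pi * baseline)
    return scale * phase

def magnitude_diff(s1, s2):
    """Image-magnitude difference, usable as a test statistic for flagging
    resolution cells that contain multiple scattering responses."""
    return np.abs(np.abs(s1) - np.abs(s2))
```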
We show how to apply the techniques of microlocal analysis to the Potter-Moses attributed scattering center model, which is based on the
Geometrical Theory of Diffraction (GTD). The microlocal methods enable us to determine how scattering centers will appear in the radar data. We show also how to extend the model to some multiple-scattering events, and we apply the microlocal techniques to the extended model.
For two-dimensional (2-D) spectral analysis, adaptive filtering based techniques such as CAPON and APES (Amplitude and Phase EStimation) were developed under the implicit assumption that the data sets are rectangular. In real SAR applications, however, especially in wide-angle cases, the collected data sets are typically non-rectangular. This raises the problem of how to extend the original adaptive filtering algorithms to such scenarios. In this paper, we propose an extended adaptive filtering (EAF) approach, comprising Extended APES (E-APES) and Extended CAPON (E-CAPON), for arbitrarily shaped 2-D data. The EAF algorithms adopt a missing-data formulation in which the unavailable samples adjacent to the collected data set are treated as missing. Using a group of filter banks of varying sizes, these algorithms are non-iterative and do not require estimation of the unavailable samples. The improved imaging results of the proposed algorithms are demonstrated by applying them to two different SAR data sets.
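As background, the rectangular-data baseline that E-CAPON generalizes is the classic 1-D Capon (MVDR) estimator. A minimal sketch, in which the diagonal loading and all parameter choices are assumptions added for numerical stability:

```python
import numpy as np

def capon_spectrum(snapshots, freqs):
    """Classic 1-D Capon (MVDR) spectral estimate.
    snapshots: (M, N) array of N length-M data snapshots.
    freqs: normalized frequencies (cycles/sample) at which to evaluate."""
    M, N = snapshots.shape
    R = snapshots @ snapshots.conj().T / N            # sample covariance
    R = R + 1e-6 * np.trace(R).real / M * np.eye(M)   # diagonal loading
    Rinv = np.linalg.inv(R)
    spectrum = []
    for f in freqs:
        a = np.exp(2j * np.pi * f * np.arange(M))     # steering vector
        spectrum.append(1.0 / np.real(a.conj() @ Rinv @ a))
    return np.array(spectrum)
```

The 2-D, non-rectangular extension is precisely what the paper contributes; this 1-D version only shows the filter-bank principle.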
We consider the problem of wide-angle SAR imaging from data with arbitrary frequency-band omissions. We propose an approach that involves composite image formation through combination of subaperture images, as well as point-enhanced, superresolution image reconstruction. This framework provides a number of desirable features including preservation of anisotropic scatterers that do not persist over the full wide-angle aperture; robustness to bandwidth limitations and frequency-band omissions; as well as a characterization of the aspect dependence of scatterers. We present experimental results based on the Air Force Research Laboratory (AFRL) "Backhoe Data Dome," demonstrating the effectiveness of the proposed approach.
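The composite-image step can be illustrated with a simplified stand-in: split the aperture, image each subaperture, and combine noncoherently. Plain 2-D FFT imaging and a pixelwise-maximum combination rule are assumed here for illustration; the paper's point-enhanced, superresolution reconstruction is not reproduced:

```python
import numpy as np

def composite_image(phase_history, n_subapertures):
    """Form subaperture images and combine them noncoherently.
    phase_history: (n_pulses, n_freq) complex data.  Image formation is a
    plain zero-padded 2-D FFT stand-in; the pixelwise maximum preserves
    anisotropic scatterers that appear in only some subapertures."""
    subs = np.array_split(phase_history, n_subapertures, axis=0)
    images = [np.abs(np.fft.fft2(s, s=phase_history.shape)) for s in subs]
    return np.maximum.reduce(images)
```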
The performance of ATR systems can potentially be improved by using three-dimensional (3-D) SAR images instead of the traditional two-dimensional SAR images or one-dimensional range profiles. 3-D SAR image formation of targets from radar backscattered data collected on wide angle, sparse apertures has been identified by AFRL as fundamental to building an object detection and recognition capability. A set of data has been released as a challenge problem. This paper describes a technique based on the concept of 3-D target grids aimed at the formation of 3-D SAR images of targets from sparse aperture data. The 3-D target grids capture the 3-D spatial and angular scattering properties of the target and serve as matched filters for SAR formation. The results of 3-D SAR formation using the backhoe public release data are presented.
Traditional high-resolution Synthetic Aperture Radar (SAR) systems operate by placing a narrow transmit beam on a given ground patch for an extended time period. An alternate method of achieving high resolution is to increase the beamwidth. This approach allows both high resolution and large coverage areas, but increases the Doppler spread of the clutter echoes and therefore the required Pulse Repetition Frequency (PRF). Unfortunately, the required PRF for avoiding Doppler aliasing can become impractical rather quickly. This paper begins by introducing a recently proposed technique for mitigating these Doppler aliasing artifacts using continuous transmission of Stepped-Frequency (SF) waveforms. By randomizing the subpulse frequency order on a waveform-to-waveform basis, matched filter processing has been shown to build a composite ambiguity function which places Doppler filter nulls on top of Doppler aliasing artifacts, effectively removing them from the imagery.
Using Xpatch radar scattering data of a backhoe, this technique is demonstrated for the first time using a non-point-scatterer target. The 2-D spatial frequency data are used to produce uncompressed target echoes of randomized SF waveforms. A Doppler mismatch is applied to simulate the return from an aliased target. By applying the appropriate filter to the SF echoes, the aliased target is eliminated, dropping the peak of the aliased energy by nearly 40 dB.
ViSUAl-D (VIsual Sar Using ALl Dimensions), a 2004 DARPA/IXO seedling effort, is developing a capability for reliable, high-confidence ID from standoff ranges. Recent conflicts have demonstrated that the warfighter would greatly benefit from the ability to ID targets beyond visual and electro-optical ranges [1]. Forming optical-quality SAR images while exploiting full polarization, wide angles, and large bandwidth would provide key evidence that such a capability is achievable. Using data generated by the Xpatch EM scattering code, ViSUAl-D investigates all degrees of freedom available to the radar designer, including 6 GHz bandwidth, full polarization, and angle sampling over 2π steradians (the upper hemisphere), in order to produce a "literal" image or representation of the target.
This effort includes the generation of a "Gold Standard" image that can be produced at X-band utilizing all available target data. This "Gold Standard" image of the backhoe will serve as a test bed for future, more relevant military targets and their image development. The seedling team produced a public-release data set, distributed at the 2004 SPIE conference, as well as a 3D "Gold Standard" backhoe image using a 3D image formation algorithm. This paper describes the full backhoe data set, the image formation algorithm, the visualization process, and the resulting image.
Methods of generating more literal, easily interpretable imagery from 3-D SAR data are being studied to provide all-weather, near-visual target identification and/or scene interpretation. One way of approaching this problem is to automatically generate shape-based geometric renderings from the SAR data. In this paper we describe the application of the Marching Tetrahedrons surface-finding algorithm to 3-D SAR data. The Marching Tetrahedrons algorithm finds a surface through the 3-D data cube that provides a recognizable representation of the target surface. This algorithm was applied to the public-release Xpatch simulations of a backhoe, which provided densely sampled 3-D SAR data sets. The robustness of the algorithm to noise and spatial resolution was explored. Surface renderings were readily recognizable over a range of spatial resolutions, and maintained their fidelity even under relatively low Signal-to-Noise Ratio (SNR) conditions.
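The core geometric step of Marching Tetrahedrons, finding where the iso-surface crosses the edges of one tetrahedron by linear interpolation, can be sketched as follows (a minimal illustration, not the full table-driven triangulation over a data cube):

```python
import numpy as np

def tet_surface(verts, values, iso):
    """Iso-surface crossing points for one tetrahedron.
    verts:  (4, 3) corner positions.
    values: (4,) scalar samples at the corners.
    Returns the linearly interpolated crossing points on every edge whose
    endpoint values straddle the iso level."""
    edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
    pts = []
    for i, j in edges:
        vi, vj = values[i] - iso, values[j] - iso
        if vi * vj < 0:                    # edge crosses the iso level
            t = vi / (vi - vj)             # interpolation parameter in [0, 1]
            pts.append(verts[i] + t * (verts[j] - verts[i]))
    return np.array(pts)
```

Three or four such points per tetrahedron form the triangle(s) that, accumulated over the cube, yield the rendered target surface.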
We consider the problem of developing three-dimensional (3D) spatial representations of objects by processing sparse, wide-angle radar measurements of that object. We propose an approach in which multiple interferometric SAR (IFSAR) image pairs are obtained, each using a modest angular aperture. Each IFSAR image pair is used to extract 3D scattering locations and attributes, and these points are noncoherently combined to form object reconstructions. Volume rendering methods are employed to represent these spatial points and their attributes. Reconstruction results are presented using synthetically-generated, wide-angle scattering data of a backhoe.
Our proposed research focuses and geolocates moving targets in synthetic aperture radar imagery. The first step is to estimate the target cross-range velocity using sequential sub-apertures: low-resolution images are formed and target position is estimated as a function of sub-aperture, yielding an estimate of the cross-range velocity. This cross-range estimate is then used to bound the search range for a bank of focusing filters. Determining the velocity that yields the best-focused target defines one equation for the target velocity; however, both components of the target's velocity cannot be determined from a single equation. Therefore, a second image with a slightly different heading is needed to yield a second focusing velocity; with a system of two equations in two unknowns, a solution can then be obtained. Once the target velocity is known, the proper position can be determined from the range velocity.
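The final two-equations-in-two-unknowns step can be sketched numerically. The heading vectors, the linear form of each focusing constraint, and all values below are hypothetical illustrations, not the paper's data:

```python
import numpy as np

def solve_velocity(h1, h2, vf1, vf2):
    """Solve h_i . v = vf_i for the 2-D target velocity v, where h_i is the
    unit vector of the velocity component constrained on pass i and vf_i is
    the best-focus velocity found by the filter bank on that pass."""
    A = np.array([h1, h2], dtype=float)
    b = np.array([vf1, vf2], dtype=float)
    return np.linalg.solve(A, b)
```

Note that when the two headings are nearly identical the system is ill-conditioned, which is why a distinctly different second heading is required.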
Among the most basic of antenna properties is angular resolution. Several methods exist for increasing radar resolution. Pulse compression is the classic method of producing increased resolution along the antenna boresight axis. For a moving platform, synthetic aperture techniques vastly increase the along track resolution. In this paper, an innovative method called scanned time/angle correlation (STAC) is presented which provides increased antenna angular resolution. This angular resolution is shown to be independent of antenna size and frequency and unlike synthetic aperture techniques requires motion of neither the target nor the platform.
This paper describes the results of a multi-baseline IFSAR study using a shooting and bouncing ray (SBR) based IFSAR simulator. The SBR technique has been used in the past for 2-D SAR and IFSAR simulations. This paper extends those approaches to the modeling of multi-baseline IFSAR images. IFSAR provides a height estimate for a target and hence leads to a 3-D image of the target. The 3-D reconstruction depends on the choice of IFSAR sensor parameters. We present a tradeoff study of sensor resolution versus the number of baselines using the SBR-based simulator.
The high-resolution imaging capability of Synthetic Aperture Radar (SAR) is largely unaffected by atmospheric conditions and has proven to be an indispensable asset in a variety of military and civilian applications. Applying SAR methodology to real-time imaging, however, carries with it the large computational complexity and storage requirements of the image-forming algorithms. Recently, the rapidly diminishing cost of computing hardware and the related ascent of cluster-based computing have made parallelization of these algorithms an appealing area of investigation. This paper describes a parallel SAR processor developed at MIT Lincoln Laboratory. Several novel technologies were employed in its implementation, including pMatlab, a parallel extension of standard Matlab that is also being developed at MIT Lincoln Laboratory. These technologies are described later in the document. We begin with a brief description of the basic SAR algorithm.
This paper addresses the problem of super-resolution imaging using regularized amplitude estimation. Using a Bayesian problem formulation, the regularization is applied through a prior distribution on the amplitudes. We investigate both a "super-Gaussian" and a Student-t prior distribution. We derive maximum a posteriori (MAP) estimators for the amplitudes, based on the Space-Alternating Generalized Expectation-Maximization (SAGE) method, that use only FFTs in each iteration. The behavior of the algorithms for different choices of regularization parameters is illustrated through simple one-dimensional examples, and SAR imaging is illustrated through an example using MSTAR data.
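The SAGE-based MAP estimators themselves are not reproduced here, but the key property, iterations that use only FFTs under a sparsity-promoting prior, can be illustrated with a simpler iterative soft-thresholding (ISTA) stand-in for a 1-D circular-convolution model y = h * x + n with a Laplacian (l1) prior:

```python
import numpy as np

def map_amplitudes(y, h, lam, n_iter=200, step=None):
    """MAP amplitude estimate for y = conv(h, x) + noise with a Laplacian
    prior, via iterative soft thresholding; each iteration uses only FFTs,
    mirroring the FFT-only property of the SAGE scheme in the paper."""
    H = np.fft.fft(h, len(y))                       # transfer function
    if step is None:
        step = 1.0 / np.max(np.abs(H)) ** 2         # step ensuring descent
    x = np.zeros(len(y), complex)
    for _ in range(n_iter):
        r = np.fft.ifft(H * np.fft.fft(x)) - y          # residual
        g = np.fft.ifft(np.conj(H) * np.fft.fft(r))     # gradient via FFTs
        z = x - step * g
        mag = np.abs(z)                                  # complex soft threshold
        x = z * np.maximum(mag - step * lam, 0) / np.maximum(mag, 1e-12)
    return x
```

Larger `lam` enforces sparser amplitude estimates, the same qualitative trade-off the paper explores for its regularization parameters.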
Conventional Synthetic Aperture Radar combines high range resolution waveforms collected from disparate directions/locations to form an image in range and cross-range. If the radar bandwidth is narrow, then range resolution will suffer and the overall image will be degraded. (This necessarily happens when the radar's carrier frequency is small, for instance.) There is, however, a complementary imaging mode in which very narrow frequency-domain pulses are collected by a platform in relative motion with the target and combined to form an image. Such systems rely on Doppler frequency shift measurements (instead of range information). For various practical reasons, this kind of imaging has not been well examined, but there are situations where the scheme is useful (in principle).
We develop the theory of radar imaging from data measured by a moving antenna emitting a single-frequency waveform. We show that, under a linearized (Born) scattering model, the signal at a given Doppler shift is due to a superposition of returns from stationary scatterers on a cone whose axis is the flight velocity vector. This cone reduces to a hyperbola when the scatterers are known to lie on a planar surface. In this case, reconstruction of the scatterer locations can be accomplished by a tomographic inversion in which the scattering density function is reconstructed from its integrals over hyperbolas. We give an approximate reconstruction formula and analyze the resolution of the resulting image. We provide a numerical shortcut and show results of numerical tests in a simple case.
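The iso-Doppler cone can be checked numerically. Under the start-stop model assumed here, a stationary scatterer in unit direction r_hat seen from a platform with velocity v produces the two-way shift f_d = 2 f0 (v . r_hat)/c, so equal-Doppler scatterers share the angle arccos(c f_d / (2 f0 |v|)) about v:

```python
import numpy as np

C = 3e8  # propagation speed (m/s)

def doppler_shift(f0, v, r_hat):
    """Two-way Doppler shift of a stationary scatterer along unit
    line-of-sight r_hat, for a platform with velocity v and a
    single-frequency transmit at f0 (start-stop approximation)."""
    return 2.0 * f0 * np.dot(v, r_hat) / C

def cone_angle(f0, v, fd):
    """Half-angle about the velocity vector of the iso-Doppler cone."""
    return np.arccos(C * fd / (2.0 * f0 * np.linalg.norm(v)))
```

Two directions at the same angle from v, but in different planes, give identical Doppler shifts, which is exactly the cone degeneracy that the planar-scene assumption reduces to a hyperbola.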
This paper develops a method for making an image of an object when there are extra point-like scatterers in the environment. Once the location of these scatterers is known, they can be exploited in the imaging process.
Here the extra point scatterers are assumed to lie between the sensor and the object of interest. A single-scattering model is used for the object itself. Detailed analysis is carried out for the case of a single extra scatterer in the foreground; the extension to the case of many scatterers is expected to be similar.
A unified way of detecting and tracking moving targets with SAR, called SAR-MTI, is presented. SAR-MTI differs from STAP or DPCA in that it is a generalization of SAR processing and can work with only a single phase center. SAR-MTI requires formation of a series of images assuming different sensor ground speeds, from vs-vtmax to vs+vtmax, where vs is the actual sensor ground speed and vtmax is the maximum target speed of interest. Each image captures a different set of target velocities, and the complete set of images focuses all target speeds below the desired maximum, regardless of direction and target location. Thus the 2-dimensional SAR image is generalized to a 3-dimensional cube, or stack, of images. All linearly moving targets below the desired speed will be focused somewhere in the cube. The third dimension represents the along-track velocity of the mover, a piece of information not available to standard airborne MTI. A mover remains focused at the same place within the cube as long as the motion of the mover and the sensor remains linear. Because stationary targets also focus within the detection cube, move-stop-move targets are handled smoothly, without changing waveforms or modes. Another consequence is that SAR-MTI has no minimum detectable velocity.
SAR-MTI has an inherent ambiguity because the four dimensions of target parameters (two dimensions each in velocity and position) are mapped into a three-dimensional detection space. This ambiguity is characterized, and methods for resolving it for geolocation are discussed. The point spread function in the detection cube is also described.
Inverse synthetic aperture radar (ISAR) is a powerful means of target identification, especially for airborne targets, because it can image moving targets. To date there has been little study in China of ISAR modeling and countermeasure techniques. This paper establishes a model of an ISAR system and then investigates several effective jamming techniques, providing a technical basis for future ISAR countermeasure equipment.
In this paper we discuss various methods for collecting and exploiting SAR imagery of urban areas. In particular we describe techniques that utilise both intensity and phase information to provide information beyond that of the standard image product. We demonstrate techniques to produce digital elevation maps and SAR movies, using airborne radar data. Examples showing the applications and limitations of these approaches to the task of urban surveillance are presented.
The Center for Advanced Communications (CAC) at Villanova University has conducted several preliminary through-the-wall imaging experiments and collected real data on different settings behind the wall using a newly-integrated RF instrumentation suite. The full-polarization, 2D aperture data measurements are taken using an Agilent network analyzer, Model ENA 5071B, implementing a step frequency waveform over a 2-3 GHz frequency range. The room imaged is a typical computer lab that has been lined with radar absorbing material. Three different arrangements of the room's contents are considered: empty scene, calibration scene, and populated scene. The empty scene allows measurement of the noise/clutter background and supports coherent subtraction with the other two scenes. The calibration scene contains isolated reflectors that may be used to determine a fully-polarimetric radiometric calibration solution for the experimental system. The populated scene contains a number of common objects such as a phone, computer, tables, chair and filing cabinet. In addition, a jug of saline solution has been added to crudely approximate a human. Each scene is imaged with and without a wall. The wall is composed of plywood and gypsum board on a wood frame. The antennas are mounted on a 2D scanner that moves the antennas along and adjacent to the wall and is controlled by the network analyzer. Two additional antennas are fixed to the scanner frame and act as bistatic receivers. The paper provides a detailed description of the RF system and experimental conditions and provides a quick look at collected data products. The data measurements, technical details on collection instrumentation, auxiliary measurements, and scene truth data will be made available starting in April 05 to download from the Villanova CAC website at http://www.engineering.villanova.edu/cac/TWRI-experiments.
This paper is concerned with the issues associated with the suppression of radio frequency interference (RFI) for a synchronous impulse reconstruction (SIRE) ultra-wideband (UWB) synthetic aperture radar (SAR) that is currently being designed and built at the Army Research Laboratory. In this effort, we are developing the next version of the UWB radar that can employ inexpensive A/D converters to digitize wideband signals using the equivalent time sampling technique. In this presentation, we provide an analytical model for the signature of the RFI sources that are measured via an equivalent time sampling scheme. This formulation reveals spectral as well as temporal properties of the measured RFI signals that would aid a user in developing sniff (passive) data collection strategies for constructing adaptive digital signal processing methods for suppressing RFI sources.
One of the major factors plaguing the performance of synthetic aperture radar (SAR) imagery is the presence of signal-dependent speckle noise. Grainy in appearance, speckle noise is primarily due to the phase fluctuations of the electromagnetic return signals. Since the inherent spatial-correlation characteristics of speckle in SAR images are not exploited in existing multiplicative models for speckle noise, a new approach is proposed here that provides a new mathematical framework for the modeling and reduction of speckle noise. The contribution of this paper is twofold. First, a novel model for speckled SAR imagery is introduced based on Markov random fields (MRFs) in conjunction with statistical optics. Second, utilizing the model, a global energy-minimization algorithm based on simulated annealing (SA) is introduced for speckle reduction. In particular, the joint conditional probability density function (cpdf) of the intensity of any two points in the speckled image and the associated correlation function are used to derive the cpdf of any center pixel intensity given its four neighbors. The Hammersley-Clifford theorem is then used to derive the energy function associated with the MRF. The SA, built on the Metropolis sampler, is employed for speckle reduction. Four metrics are used to assess the quality of the speckle reduction: the mean-square error, SNR, an edge-preservation parameter, and the equivalent number of looks. A comparative study using both simulated speckled images and real SAR images indicates that the proposed approach performs favorably in comparison to existing filtering techniques such as the modified Lee and enhanced Frost algorithms.
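Of the four metrics, the equivalent number of looks (ENL) is simple to state concretely: over a homogeneous intensity region it is the squared mean divided by the variance, and for ideal L-look averaging of exponential single-look speckle it equals L. A sketch under that standard multiplicative-speckle assumption:

```python
import numpy as np

def enl(region):
    """Equivalent number of looks of a homogeneous intensity region:
    mean^2 / variance.  Higher ENL means stronger speckle suppression."""
    m = region.mean()
    return m * m / region.var()

def multilook(intensity_looks):
    """Noncoherent average of single-look intensity images (axis 0)."""
    return np.mean(intensity_looks, axis=0)
```

A despeckling filter that raises ENL while preserving the edge-preservation metric is doing useful work; ENL alone can be inflated by simple oversmoothing.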
Modern Synthetic Aperture Radar (SAR) signal processing algorithms can retrieve accurate and subtle information about a scene interrogated by an airborne radar system. An important reconnaissance problem studied via SAR systems and their signal processing methods involves detecting changes in an imaged scene. In these problems, the user interrogates a scene with a SAR system at two different times (e.g., different days); the resulting two SAR databases, which we refer to as reference and test data, are used to determine where targets have entered or left the imaged scene between the two data acquisitions. For instance, X-band SAR systems have the potential to become a potent tool for determining whether mines have recently been placed in an area.
This paper describes an algorithm for detecting changes in averaged multi-look SAR imagery. Averaged multi-look SAR images are preferable to full-aperture SAR reconstructions when the imaging algorithm is approximation based (e.g., polar format processing) or when motion data are not accurate over a long full aperture. We study the application of a SAR change detection method, known as Signal Subspace Processing, that is based on the principles of 2D adaptive filtering. We pose change detection as a binary hypothesis-testing problem, and identify an error signal and its normalized version to determine whether i) there is no change in the imaged scene, or ii) a target has been added to the imaged scene. A statistical analysis of the error signal is provided to show its properties and merits. Results are presented for data collected by an X-band SAR platform and processed to form non-coherently look-averaged SAR images.
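The Signal Subspace Processing detector itself is more elaborate, but the idea of thresholding a normalized error signal can be illustrated with a simple stand-in: fit a scalar gain between the co-registered reference and test images, form the error image, and threshold its normalized energy. The gain model and the threshold below are illustrative assumptions:

```python
import numpy as np

def change_statistic(ref, test):
    """Normalized error energy between co-registered reference/test images.
    A least-squares scalar gain absorbs overall radiometric differences;
    under H0 (no change) the statistic is near zero, under H1 (target
    added) it grows toward one."""
    a = np.vdot(ref, test) / np.vdot(ref, ref)   # least-squares gain
    err = test - a * ref
    return np.real(np.vdot(err, err) / np.vdot(test, test))

def detect_change(ref, test, threshold=0.1):
    """Binary hypothesis test: True means 'change detected' (H1)."""
    return change_statistic(ref, test) > threshold
```

The paper's 2D adaptive filter replaces the scalar gain with a spatially varying prediction, which makes the error signal far more selective against clutter.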
This paper develops a unified mathematical theory of change detection for the multi-polarization, multi-pass synthetic aperture radar (SAR) change detection problem. A generalized likelihood ratio test (GLRT) for optimum processing of the measured data is developed; an evaluation of the performance of the optimum GLRT is also given in the form of ROC curves (PD versus PFA), quantifying the benefits of multi-polarization, multi-pass SAR change detection.
The focus of this paper is the classification of military vehicles in multi-polarimetric high-resolution spotlight SAR images in an ATR framework. Kernel machines, as robust classification methods, are the basis of our approach. A novel kernel machine, the Relevance Vector Machine with integrated Generator (RVMG), which controls the trade-off between classification quality and computational effort, is used. It combines the high classification quality of the Support Vector Machine, achieved by margin maximization, with the low computational effort of the Relevance Vector Machine, which stems from its statistical approach. Moreover, multi-class classification capability is provided by an efficient decision heuristic, an adaptive feature extraction based on Fourier coefficients allows the module to run in real time, and a parameterized reject criterion is proposed in this paper.
Investigations with a nine-class data set from QinetiQ deal with fully polarimetric SAR data. The objective is to assess polarimetric features in combination with several kernel machines. Tests confirm the high potential of RVMG. Moreover, it is shown that polarimetric features can improve the classification quality for hard targets. Among these, the simple energy-based features prove more favorable than complex ones. In particular, the two coplanar polarizations embody the essential information, but better generalizability is obtained by using all four channels.
An important property of a classifier used in the ATR framework is the capability to reject objects not belonging to any of the trained classes. Therefore, the QinetiQ data are divided into four training classes and five classes of confusion objects. The classification module with reject criterion is controlled by the reject parameter and the kernel parameter; both parameters are varied to determine ROC curves for different polarimetric features.
The 'curse of dimensionality' has limited the application of statistical modeling techniques to low-dimensional spaces, but typical data usually resides in high-dimensional spaces (at least initially, for instance images represented as arrays of pixel values). Indeed, approaches such as Principal Component Analysis and Independent Component Analysis attempt to extract a set of meaningful linear projections while minimizing interpoint distance distortions. The counterintuitive yet effective random projections approach of Johnson and Lindenstrauss defines a sample-based dimensionality reduction technique with probabilistically provable distortion bounds. We investigate and report on the relative efficacy of two random projection techniques for Synthetic Aperture Radar images in a classification setting.
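A minimal sketch of the Johnson-Lindenstrauss style random projection described above, assuming flattened images as row vectors and a Gaussian projection matrix (one of several admissible choices):

```python
import numpy as np

def random_projection(X, k, rng=None):
    """Project rows of X (n samples x d pixels) into k dimensions with a
    Gaussian random matrix, in the spirit of Johnson-Lindenstrauss."""
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    # Scaling by 1/sqrt(k) approximately preserves pairwise distances.
    R = rng.normal(0.0, 1.0 / np.sqrt(k), size=(d, k))
    return X @ R

# Flattened "images": 100 samples of 32x32 = 1024 pixels down to 64 dims.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 1024))
Y = random_projection(X, k=64, rng=2)

# Pairwise distance between two samples is approximately preserved.
d_orig = np.linalg.norm(X[0] - X[1])
d_proj = np.linalg.norm(Y[0] - Y[1])
```

The distortion bound of the JL lemma says such distances are preserved to within a factor of (1 ± ε) with high probability for k on the order of log(n)/ε².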
Addressing the challenge of robust ATR, this paper describes
the development and demonstration of Machine Learning for Robust ATR. The primary innovation of this work is an automated way of developing heuristic inference rules that can draw on multiple models and multiple feature types to make more robust ATR decisions. The key realization is that this meta-learning problem is one of structural learning, which can be conducted independently of the parameter learning associated with each model- and feature-based technique, and can thereby draw more effectively on the strengths of all such techniques, and even on information from unforeseen techniques. This is accomplished by using robust, genetics-based machine learning for the ill-conditioned combinatorial problem of structural rule learning, while using statistical and mathematical techniques for parameter learning. This paper describes a learning classifier system (LCS) approach, with evolutionary computation for structural learning, for robust ATR, and points to a promising solution to the structural learning problem across multiple feature types (which we will refer to as the meta-learning problem) for ATR with extended operating conditions (EOCs). This system was tested on MSTAR Public Release SAR data using nominal and extended operating conditions. These results were also compared against two baseline classifiers, a PCA-based distance classifier and an MSE classifier. The systems were evaluated for accuracy (via training-set classification) and robustness (via testing-set classification). In both cases, the LCS-based robust ATR system performed very well, with accuracy over 99% and robustness over 80%.
We propose a novel automatic target recognition (ATR) system for classification of three types of ground vehicles in the MSTAR public release database. First, each image chip is pre-processed by extracting fine and raw feature sets, where raw features compensate for the target pose estimation error that corrupts fine image features. Then, the chips are classified by using the adaptive boosting (AdaBoost) algorithm with the radial basis function (RBF) net as the base learner. Since the RBF net is a binary classifier, we decompose our multiclass problem into a set of binary ones through the
error-correcting output codes (ECOC) method, specifying a dictionary of code words for the set of three possible classes. AdaBoost combines the classification results of the RBF net for each binary problem into a code word, which is then "decoded" as one of the code words (i.e., ground-vehicle classes) in the specified dictionary. Along with classification, within the AdaBoost framework, we also conduct efficient fusion of the fine and raw image-feature vectors. The results of large-scale experiments demonstrate that our ATR scheme outperforms the state-of-the-art systems reported in the literature.
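The ECOC decode step can be illustrated with a minimum-Hamming-distance rule; the code words and class names below are hypothetical, not the dictionary used in the paper.

```python
import numpy as np

# Hypothetical 3-class ECOC dictionary; each row is the code word for one
# ground-vehicle class, with one bit per binary RBF-net problem.
CODEBOOK = {
    "BMP2":  np.array([1, 1, 0, 0, 1]),
    "BTR70": np.array([0, 1, 1, 0, 0]),
    "T72":   np.array([1, 0, 1, 1, 0]),
}

def decode(predicted_bits):
    """Assign the class whose code word has minimum Hamming distance to
    the bit string produced by the binary classifiers."""
    pred = np.asarray(predicted_bits)
    return min(CODEBOOK, key=lambda c: int(np.sum(CODEBOOK[c] != pred)))

# One binary classifier errs (T72's code word with its second bit flipped),
# yet the redundancy in the code words still recovers the class.
noisy = [1, 1, 1, 1, 0]
label = decode(noisy)
```

This error-correcting property is what lets AdaBoost's per-bit decisions be imperfect while the final class decision remains correct.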
The Adaptive SAR ATR Problem Set (AdaptSAPS) poses a typical "learning with a critic" problem, in which the system-under-test (SUT) is initially trained to characterize a subset of target objects (e.g., T72) and a subset of non-target objects (e.g., clutter), and is to be updated on-line using the Target Truth information. This work proposes an SUT for adaptive SAR imagery exploitation. The system is founded on a novel feature vector generation scheme and Linear Discriminant Analysis (LDA). The proposed scheme partitions SAR image chips into subimage blocks, and the distribution of the subimage blocks is fitted with a Gaussian Mixture Model (GMM). The feature vector of each SAR image is composed of the log-likelihoods of its subimage blocks on the pre-fitted GMM. Compared to the original SAR image chips, feature vectors generated from log-likelihoods display superior discriminative power. After feature generation, LDA is used to project the feature vectors into a one-dimensional subspace for classification. The performance of the proposed system is evaluated on AdaptSAPS.
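A minimal sketch of the log-likelihood feature generation, assuming scalar block-intensity features and a pre-fitted one-dimensional GMM (the paper's block features and fitted mixture are not specified in this abstract):

```python
import numpy as np

def gmm_loglik(x, weights, means, stds):
    """Log-likelihood of a scalar block feature under a 1-D Gaussian
    mixture (illustrative; the paper fits the GMM to training blocks)."""
    x = np.asarray(x)[..., None]
    comp = weights * np.exp(-0.5 * ((x - means) / stds) ** 2) \
           / (stds * np.sqrt(2 * np.pi))
    return np.log(comp.sum(axis=-1))

def feature_vector(chip, block=4, **gmm):
    """Partition a SAR chip into block x block subimages and collect the
    log-likelihood of each block's mean intensity on the pre-fitted GMM."""
    h, w = chip.shape
    feats = []
    for i in range(0, h, block):
        for j in range(0, w, block):
            feats.append(gmm_loglik(chip[i:i+block, j:j+block].mean(), **gmm))
    return np.array(feats)

# Hypothetical pre-fitted two-component mixture over block intensities.
gmm = dict(weights=np.array([0.7, 0.3]),
           means=np.array([0.2, 1.5]),
           stds=np.array([0.1, 0.4]))
chip = np.full((8, 8), 0.2)          # 8x8 chip -> four 4x4 blocks
fv = feature_vector(chip, block=4, **gmm)
```

The resulting vector (one log-likelihood per block) would then be projected by LDA for classification.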
The classification of ground vehicle targets from the MSTAR (Moving and Stationary Target Acquisition and Recognition) database is investigated using Gaussian-mixture hidden Markov models (gHMMs) and synthetic aperture radar images. The gHMMs employ features extracted from High Range Resolution (HRR) radar signal magnitude-versus-range profiles of the targets. Feature enhancement is performed using Cetin's point-based reconstruction technique. The impact of the number of hidden states and the sequence length on classification accuracy is explored using separate training and testing data. Multiple gHMM classifier outputs are fused according to various decision rules, across which classification performance is compared.
The performance of several combinations of feature extraction and target classification algorithms is analyzed for Synthetic Aperture Radar (SAR) imagery using the standard Moving and Stationary Target Acquisition and Recognition (MSTAR) evaluation method. For feature extraction, the 2D Fast Fourier Transform (FFT) is used to extract Fourier coefficients (frequency information) while 2D wavelet decomposition is used to extract wavelet coefficients (time-frequency information), from which subsets of characteristic in-class "invariant" coefficients are developed. Confusion matrices and Receiver Operating Characteristic (ROC) curves are used to evaluate and compare combinations of these characteristic coefficients with several classification methods, including Lp metric distances, a Multi-Layer Perceptron (MLP) Neural Network (NN), and AND Corporation's Holographic Neural Technology (HNeT) classifier. The evaluation method examines the trade-off between correct detection rate and false alarm rate for each combination of feature-classifier systems. It also measures correct classification, misclassification and rejection rates for a 90% detection rate. Our analysis demonstrates the importance of feature and classifier selection in accurately classifying new target images.
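Of the classifiers compared above, the Lp metric distance is the simplest to sketch; the templates and class names below are hypothetical mean feature vectors, not MSTAR data.

```python
import numpy as np

def lp_classify(feature_vec, templates, p=2):
    """Nearest-template classification under an Lp metric distance
    (one of the classifier options compared in the paper)."""
    dists = {c: np.sum(np.abs(feature_vec - t) ** p) ** (1.0 / p)
             for c, t in templates.items()}
    return min(dists, key=dists.get)

# Hypothetical per-class mean feature vectors (e.g., selected coefficients).
templates = {"tank":  np.array([1.0, 0.0, 0.5]),
             "truck": np.array([0.0, 1.0, 0.5])}
label = lp_classify(np.array([0.9, 0.1, 0.4]), templates, p=1)
```

Varying p (e.g., p=1 versus p=2) changes how heavily outlier coefficients weigh on the distance, which is one axis of the feature-classifier trade-off the paper examines.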
Automatic target recognition (ATR) performance is a function of image quality and of how that quality is represented in signature model generation and in the ATR training process. This paper reports ATR performance as a function of synthetic aperture radar (SAR) image quality parameters, including clutter-to-noise ratio (CNR) and multiplicative noise ratio (MNR). Images with specified image quality values were produced by introducing controlled degradations to the MSTAR public release data. Two different families of ATR algorithms, the statistical model-based classifier of DeVore et al. and the optimal tradeoff synthetic discriminant function (OTSDF), are applied to those data. Target classification accuracy was measured as a function of CNR/MNR for both the training and test data, indicating the sensitivity of performance to a priori knowledge of these particular image quality parameters. Confusion matrices are expanded to include target aspect bins, providing visibility into performance as a function of aspect angle.
Over the last several decades, exploitation has largely focused on single-look, single-modality platform collections exploited by a variety of classification algorithms. But multi-look, multi-modality exploitation is the evolving paradigm for threat classification, given the evolution and cost of near-term unmanned vehicle system sensor platforms. This paper presents an overview of machine learning algorithms that have previously been applied to automatic target recognition, discusses the evolution of target recognition to the 2010 paradigm of multi-look, multi-modality valid target identification, and explores how advanced machine learning algorithms can be applied to address the exploitation problems of the next decade.
SAIC, in support of the National Geospatial-Intelligence Agency (NGA) Synergistic Targeting Auto-Extraction and Registration (STAR) Program, is conducting an evaluation of several automated registration applications developed by different vendors. A common problem when attempting to compare multiple automated registration packages is interpreting the results from the different applications in a consistent manner. Different vendors use different matching and adjustment methods, as well as different output formats to store the results from automated registration. It is nearly impossible, due to the cost in labor and time, to convert the results from different vendors into a common format or framework for evaluation. The approach taken by the STAR Program is to separate the automated registration process into three components: automated data matching, data adjustment, and the output format used to store the adjusted results. The results of different vendors' automated matching processes will be fed into a single, common weighted least-squares package for adjustment. The use of a common physics-based adjustment process will allow the STAR Program Office to evaluate the results from several different automated registration applications in a consistent and fair manner. The metrics collected during the STAR Automated Registration Evaluation will include timing statistics, matching statistics, and geospatial accuracy statistics from comparisons to GPS survey sites. In this paper, we present the STAR automated registration evaluation architecture. Over the course of the next year, the STAR Program Office will develop and then use this architecture to evaluate several different automated registration algorithms.
Traditional detection system performance metrics, such as probability of detection and probability of false alarm, depend only on how the system responds to individual target-sized regions of interest (ROIs). The composition of the larger scene does not affect those metrics. There are circumstances, however, where a user of a detection system wants to know, "For a given cue, what is the probability that the cue is correct?" or where the detector is being used to determine a property of the overall scene. As an example of the latter case, suppose the detection system is looking for diseased cells in a tissue sample. Even if only one diseased cell exists, the whole "scene" represents a diseased individual. In both cases, the user perspective and the scene-based perspective, the natural performance metrics depend on the scene content, especially the numbers of target and confuser ROIs. This paper defines scene-content dependent (SCD) performance metrics for detection systems, develops a theory for computing them, and illustrates properties of the metrics with examples. The SCD performance theory enabled determination of the example metrics in about two hours of computation, whereas Monte Carlo methods would have taken almost a year and direct testing would have been almost impossible.
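A minimal illustration of why cue correctness is scene-content dependent, assuming n_targets target ROIs detected with probability pd and n_confusers confuser ROIs false-alarmed with probability pfa (a far simpler setting than the paper's full SCD theory):

```python
def prob_cue_correct(n_targets, n_confusers, pd, pfa):
    """Expected fraction of cues that are true targets, given scene
    content (minimal sketch; the paper's SCD theory is more general)."""
    expected_true = n_targets * pd
    expected_false = n_confusers * pfa
    return expected_true / (expected_true + expected_false)

# Same detector (same pd, pfa), two scenes: a confuser-rich scene
# yields far less trustworthy cues.
sparse = prob_cue_correct(n_targets=5, n_confusers=10, pd=0.9, pfa=0.05)
dense  = prob_cue_correct(n_targets=5, n_confusers=500, pd=0.9, pfa=0.05)
```

Even with fixed Pd and Pfa, the probability that a given cue is correct collapses as the number of confuser ROIs in the scene grows, which is exactly the dependence the SCD metrics capture.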
Probability densities for target recognition performance metrics are developed. These densities assist in evaluation of systems under test (SUTs), which are systems that predict the presence of a target after examination of an input. After such examination, a SUT assigns a score that indicates the predicted likelihood that a target is present. From scores for a series of many inputs, the suitability of a SUT can be evaluated through performance metrics such as the receiver operating characteristic (ROC) and the confidence error (CE) generation curve. The ROC is a metric that describes how well the probability densities of target and clutter scores are separated, where clutter refers to the absence of target. The CE generation curve and the corresponding scalar CE is a metric that evaluates the accuracy of the score. Since only a limited number of test scores (scores for which the truth state is known by the evaluator) is typically available to evaluate a SUT, it is critical to quantify uncertainty in the performance metric results. A process for estimating such uncertainty through probability densities for the performance metrics is examined here. Once the probability densities are developed, confidence intervals are also obtained. The process that develops the densities and related confidence intervals is implemented in a fully Bayesian manner. Two approaches are examined, one which makes initial assumptions regarding the form of the underlying target and clutter densities and a second approach which avoids such assumptions. The target and clutter density approach is applicable to additional performance metrics.
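The empirical ROC underlying these metrics can be sketched directly from test scores; the Bayesian density estimation and confidence-interval machinery of the paper is not reproduced here.

```python
import numpy as np

def roc_points(target_scores, clutter_scores):
    """Empirical ROC: sweep a threshold over all observed scores and
    record (Pfa, Pd) pairs (illustrative of the metric itself)."""
    thresholds = np.unique(np.concatenate([target_scores, clutter_scores]))
    pts = []
    for t in thresholds:
        pd = np.mean(target_scores >= t)    # fraction of targets passed
        pfa = np.mean(clutter_scores >= t)  # fraction of clutter passed
        pts.append((pfa, pd))
    return pts

# Toy SUT scores with well-separated target and clutter densities.
tgt = np.array([0.8, 0.9, 0.7, 0.95])
clt = np.array([0.1, 0.3, 0.2, 0.6])
pts = roc_points(tgt, clt)
```

With few test scores, each (Pfa, Pd) point carries substantial uncertainty, which is precisely what the paper's probability densities and confidence intervals quantify.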
Automatic target recognition (ATR) performance modeling is dependent on model complexity, training data, and test analysis. In order to compare different ATR algorithms, we develop a fidelity score that characterizes how well different algorithms meet real-world conditions. For instance, a higher-fidelity ATR performance model (PM) is robust over many operating conditions (sensors, targets, environments). An ATR model that is run for one terrain might not be applicable for all terrains, yet its operating manual clarifies its range of applicability. In this paper, we discuss a fidelity score that captures the performance application of ATR models and can be extended to different sensors over many operating conditions. The modeling quantification testing can be used as a fidelity score, a validation metric, or guidance for model improvements. The goal is to provide a framework to instantiate a high-fidelity model that captures theoretical, simulated, experimental, and real-world data performance for use in a dynamic sensor manager.
A common approach to the detection of objects in sensor data is to model the target, compare the input data to that model, and then, if the match is close enough, declare target-present. This is how many automatic target recognition (ATR) systems operate. An alternative approach is to also have confuser models (CMs) and to consider how close the input data is to all of the models in the library. The advantages of CMs can be increased by also modeling the match score likelihoods for targets and confusers. This paper considers several methods for using CMs and likelihood models (LMs) and demonstrates their relative merits with a mean-squared-error based ATR on the MSTAR synthetic-aperture-radar (SAR) public data set. Two benefits of CMs and LMs are demonstrated. They improve the ability of the ATR to discriminate targets and confusers, as one might expect, but they can also help the ATR estimate the confidence it should have in its decisions. In the demonstration, the area under the ROC curve was increased from 0.88 to 0.94 by CM use. For the important case of out-of-library confusers, if the probability of false alarm (Pfa) is set to 0.1, then CMs and LMs increase the probability of detection (Pd) from 0.40 to 0.65. On the other hand, if the Pd is set to 0.9, then the CMs and LMs decrease Pfa from 0.50 to 0.35. The posterior estimate (i.e., the ATR's confidence) had a reduction in RMS error from 0.27 to 0.09 through the use of CMs and LMs.
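The confidence-estimation benefit of LMs can be illustrated with a minimal Bayesian posterior over hypothetical Gaussian score likelihoods (the paper's actual likelihood models are not given in this abstract):

```python
import numpy as np

def posterior_target(score, target_lik, confuser_lik, prior_target=0.5):
    """Posterior probability that the input is a target, combining the
    match score with likelihood models of that score for targets and
    confusers (minimal sketch of the idea)."""
    pt = prior_target * target_lik(score)
    pc = (1 - prior_target) * confuser_lik(score)
    return pt / (pt + pc)

def gauss(mu, sigma):
    """Hypothetical Gaussian likelihood model of the match score."""
    return lambda s: (np.exp(-0.5 * ((s - mu) / sigma) ** 2)
                      / (sigma * np.sqrt(2 * np.pi)))

target_lik = gauss(0.8, 0.10)    # targets tend to match their model well
confuser_lik = gauss(0.4, 0.15)  # confusers match less well

p_hi = posterior_target(0.85, target_lik, confuser_lik)  # confident target
p_lo = posterior_target(0.40, target_lik, confuser_lik)  # confident confuser
```

The posterior serves as a calibrated confidence, which is the quantity whose RMS error the paper reports dropping from 0.27 to 0.09.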
We present a novel methodology for evaluating statistically predicted versus measured multi-modal imagery, such as Synthetic Aperture Radar (SAR), Electro-Optical (EO), Multi-Spectral (MS) and Hyper-Spectral (HS) modalities. While several scene modeling approaches have been proposed in the past for multi-modal image predictions, the problem of evaluating synthetic and measured images has remained an open issue. Although analytical prediction models would be appropriate for accuracy evaluations of man-made objects, for example, SAR target modeling based on Xpatch, the analytical models cannot be applied to prediction evaluation of natural scenes because of their randomness and high geometrical complexity as imaged by any of the aforementioned sensor modalities. Thus, statistical prediction models are frequently chosen as more appropriate scene modeling approaches, and there is a need to evaluate the accuracy of statistically predicted versus measured imagery. This problem poses challenges in terms of selecting quantitative and qualitative evaluation techniques, and establishing a methodology for systematic comparisons of synthetic and measured images. In this work, we demonstrate clutter accuracy evaluations for modified measured and predicted synthetic images with statistically modeled clutter. We show experimental results for color (red, green and blue) and HS imaging modalities, and for statistical clutter models using Johnson's family of probability distribution functions (PDFs). The methodology includes several evaluation techniques for comparing image samples and their similarity, image histograms, statistical central moments, and estimated PDFs. In particular, we quantitatively assess correlation-, histogram-, chi-squared-, pixel- and PDF-parameter-based error metrics, and relate them to human visual perception of predicted image quality.
The work is directly applicable to multi-sensor phenomenology modeling for exploitation, recognition and identification.
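As one example of the error metrics mentioned above, a chi-squared distance between intensity histograms can be sketched as follows; the bin count, value range, and clutter distributions are illustrative assumptions.

```python
import numpy as np

def chi_squared_distance(img_a, img_b, bins=32, value_range=(0.0, 1.0)):
    """Chi-squared distance between intensity histograms of a measured
    and a predicted image (one of several metrics in the methodology;
    the binning choices here are illustrative)."""
    h_a, _ = np.histogram(img_a, bins=bins, range=value_range, density=True)
    h_b, _ = np.histogram(img_b, bins=bins, range=value_range, density=True)
    denom = h_a + h_b
    mask = denom > 0  # skip empty bins to avoid division by zero
    return 0.5 * np.sum((h_a[mask] - h_b[mask]) ** 2 / denom[mask])

rng = np.random.default_rng(3)
measured  = rng.beta(2, 5, size=(64, 64))  # stand-in for measured clutter
predicted = rng.beta(2, 5, size=(64, 64))  # synthetic, same clutter model
mismatch  = rng.beta(5, 2, size=(64, 64))  # synthetic, wrong clutter model

d_same = chi_squared_distance(measured, predicted)
d_diff = chi_squared_distance(measured, mismatch)
```

A well-matched statistical clutter model yields a much smaller histogram distance than a mismatched one, which is the behavior such metrics are meant to quantify.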
Accurately associating sensor kinematic reports to known tracks, new tracks, or clutter is one of the greatest obstacles to effective track estimation. Feature-aiding is one technology that is emerging to address this problem, and it is expected that adding target features will aid report association by enhancing track accuracy and lengthening track life. The Sensors Directorate of the Air Force Research Laboratory is sponsoring a challenge problem called Feature-Aided Tracking of Stop-move Objects (FATSO). The long-range goal of this research is to provide a full suite of public data and software to encourage researchers from government, industry, and academia to participate in radar-based feature-aided tracking research. The FATSO program is currently releasing a vehicle database coupled to a radar signature generator. The completed FATSO system will incorporate this database/generator into a Monte Carlo simulation environment for evaluating multiplatform/multitarget tracking scenarios. The currently released data and software contain the following: eight target models, including a tank, an ammo hauler, and self-propelled artillery vehicles; and a radar signature generator capable of producing SAR and HRR signatures of all eight modeled targets in almost any configuration or articulation. In addition, the signature generator creates Z-buffer data, label map data, and radar cross-section predictions, and allows the user to add noise to an image while varying sensor-target geometry (roll, pitch, yaw, squint). Future capabilities of this signature generator, such as scene models and EO signatures, as well as details of the complete FATSO testbed, are outlined.
The HH- and VV-polarized backscattering behavior of homogeneous ground clutter has been investigated by measuring the radar cross section per unit area of rough surface terrain. The X-, Ka-, and W-band behavior was investigated by analyzing ISAR imagery of 1/16th-scale terrain collected in compact radar ranges operating at 160 GHz, 520 GHz, and 1.56 THz. An array of scale model ground planes was fabricated with the appropriate roughness to model relatively smooth to rough soil terrain. In addition to studying terrain backscatter as a function of surface roughness, the dependence on soil moisture content was also characterized by tailoring the dielectric constant of the scale models. The radar cross section per unit illuminated area (σ0) was calculated as a function of elevation angle between 15° and 75°. The results of this work have been used in the fabrication of scale model ground planes for collection of radar imagery from scaled threat targets situated in realistic environments. Backscattering data are presented and compared to clutter data found in the literature.
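The compact-range numbers above follow the basic electromagnetic scaling relation: a 1/N-scale model measured at frequency f reproduces the full-scale scattering at f/N.

```python
def full_scale_frequency_ghz(range_freq_ghz, scale_denominator):
    """Electromagnetic scaling: a 1/N-scale model measured at frequency f
    behaves like the full-scale target at f/N."""
    return range_freq_ghz / scale_denominator

# The 1/16th-scale compact ranges map back to the full-scale bands studied.
bands = {name: full_scale_frequency_ghz(f, 16)
         for name, f in [("X", 160.0), ("Ka", 520.0), ("W", 1560.0)]}
```

The 160 GHz, 520 GHz, and 1.56 THz ranges thus correspond to 10 GHz (X band), 32.5 GHz (Ka band), and 97.5 GHz (W band) at full scale.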
Radar detection and identification of ground targets in diverse environments is a subject of continuing interest. It has long been known that different radar bands have advantages for different environmental conditions. For example, it has been shown that detection of targets under foliage is more easily accomplished using longer-wavelength radars, since there is less attenuation at these frequencies. However, higher-frequency radars offer the greater resolution that is crucial in target identification. Because each radar band has its own unique strengths and weaknesses, one current approach is the use of dual-band radar platforms. With two radar bands working simultaneously, the strengths of each radar band can be used to complement the other. ERADS has constructed two fully polarimetric compact radar ranges to acquire X-band and UHF ISAR imagery data using 1/35th-scale models. The new compact ranges allow data to be taken that can simulate a multi-frequency radar platform, with frequencies low enough to detect obscured targets and high enough to provide useful resolution to aid in target identification once they have been detected. Since both compact ranges use the same scale factor, the same target can be measured in the two spectral regions simply by moving the target model from one compact range to the other. Data can thus be taken whose differences in scattering are due only to the difference in radar frequency, eliminating variations due to differences in target models as well as in the surrounding ground clutter. Detailed descriptions of the new compact ranges are presented, along with results from sample data sets.
Having relevant sensor data available during the early phases of ATR algorithm development and evaluation projects is paramount. This data comes primarily either from synthetic generation or from measured collections. These collections, in turn, can be either highly controlled or operational-like exercises. This paper presents a broad overview of the types of data housed within the Automatic Target Recognition Division of the Air Force Research Laboratory (AFRL/SNA) that are available to the ATR developer.
The adaptive noise radar algorithm allows computation of compressed pseudo-pulses from a received noise radar signal at the receiver ADC rate. This is accomplished through the use of LMS channel identification algorithms commonly exploited in wireless communications. This paper shows how access to compressed pseudo-pulses at the ADC rate may be exploited to implement SAR and GMTI modes simultaneously in two parallel Doppler-processing chains. Simultaneous SAR and GMTI will aid in tracking targets that alternately move and stop.
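The channel-identification idea can be sketched with a basic LMS adaptive filter: the converged FIR taps estimate the channel (here, two hypothetical point scatterers) from the transmitted and received noise waveforms. This illustrates the principle only, not the paper's receiver implementation.

```python
import numpy as np

def lms_identify(tx, rx, taps=8, mu=0.01, epochs=5):
    """LMS channel identification: adapt FIR taps h so that h * tx
    approximates rx; the converged taps act as the compressed
    pseudo-pulse (range profile) of the channel."""
    h = np.zeros(taps)
    for _ in range(epochs):
        for n in range(taps - 1, len(tx)):
            x = tx[n - taps + 1:n + 1][::-1]  # [tx[n], tx[n-1], ...]
            err = rx[n] - h @ x               # prediction error
            h += mu * err * x                 # LMS update
    return h

rng = np.random.default_rng(4)
noise_tx = rng.normal(size=4000)              # transmitted noise waveform
true_h = np.array([0.0, 1.0, 0.0, 0.5, 0.0, 0.0, 0.0, 0.0])  # two scatterers
rx = np.convolve(noise_tx, true_h)[:len(noise_tx)]           # received echo
h_est = lms_identify(noise_tx, rx)
```

Because the update runs once per received sample, the pseudo-pulse estimate is available at the ADC rate, which is what permits feeding two parallel Doppler-processing chains.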