This PDF file contains the front matter associated with SPIE Proceedings Volume 10658, including the Title Page, Copyright information, Table of Contents, and Conference Committee listing.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access. Shibboleth/OpenAthens users: please sign in to access your institution's subscriptions. To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
Three treatments are adopted: normal soil water (CK), continual submergence (T1), and alternating submergence and drought (T2). The reflectance spectra of Pinus elliottii Engelm. leaves, the red edge parameters, and the corresponding chlorophyll content are measured, and the relationships between the red edge parameters and total chlorophyll concentration are analyzed. The results show that: (1) The red edge position (λred) exhibits a "blue shift" before day 24 and a "red shift" after day 24, while the red edge amplitude (Dλred) shows the opposite behavior; the red edge area (Sred) exhibits a "blue shift" under T1 and T2. (2) The correlation between chlorophyll content and red edge position is extremely significant (correlation coefficient 0.762), the correlation with red edge area is significant, and the correlation with red edge amplitude is poor. (3) The red edge position, kurtosis, and skewness, computed from the spectral curve over 680-760 nm, are used as input variables for an artificial neural network. The resulting correlation coefficient is 0.928, which markedly improves the accuracy of the estimation of total chlorophyll concentration.
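The first-derivative definitions of the red edge parameters used above can be sketched as follows; the sigmoid "reflectance spectrum" here is synthetic and purely illustrative, not measurement data from the study.

```python
# Sketch: red edge position (lambda_red), amplitude (D_lambda_red), and area
# (S_red) from a leaf reflectance spectrum via the first-derivative method.
import numpy as np

def red_edge_parameters(wavelengths, reflectance):
    """Return (position, amplitude, area) of the red edge in the 680-760 nm window."""
    deriv = np.gradient(reflectance, wavelengths)        # dR/dlambda
    band = (wavelengths >= 680) & (wavelengths <= 760)   # red edge window
    idx = np.argmax(deriv[band])                         # steepest slope
    position = wavelengths[band][idx]                    # lambda_red
    amplitude = deriv[band][idx]                         # D_lambda_red
    area = np.trapz(deriv[band], wavelengths[band])      # S_red
    return position, amplitude, area

# Synthetic spectrum: a sigmoid rise centred at 715 nm mimics the red edge.
wl = np.arange(650.0, 801.0, 1.0)
refl = 0.05 + 0.45 / (1.0 + np.exp(-(wl - 715.0) / 8.0))
pos, amp, area = red_edge_parameters(wl, refl)
```

A "blue shift" then corresponds to `pos` moving toward shorter wavelengths between measurement dates.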
The coded aperture snapshot spectral imager (CASSI) uses a focal plane array (FPA) to capture a three-dimensional (3D) spectral scene in a single or a few two-dimensional (2D) snapshots. Current CASSI systems use a set of fixed coded apertures to modulate the spatio-spectral data cube before the compressive measurement. This paper proposes an adaptive projection method that improves the compressive efficiency of the CASSI system by designing the coded aperture according to a priori knowledge of the scene. The adaptive coded apertures are constructed by nonlinear thresholding of a grey-scale map of the scene captured by an auxiliary RGB camera. The 3D encoded spectral scene is then projected onto the 2D FPA. Under a sparsity assumption, the spectral images can be reconstructed from the FPA measurements by a compressive sensing algorithm. This paper studies and verifies the proposed adaptive coded aperture method on a spatial super-resolution CASSI system, in which the resolution of the coded aperture is higher than that of the FPA. It is shown that the adaptive coded apertures provide superior spectral image reconstruction performance over random coded apertures.
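The thresholding step can be sketched as below. The quantile-based rule, the open fraction, and the random "scene map" are illustrative assumptions; the paper's actual nonlinear thresholding may differ.

```python
# Sketch of the adaptive coded-aperture idea: build a binary aperture from a
# nonlinear (here, quantile-based) threshold of a grey-scale scene map that a
# side RGB camera would supply.
import numpy as np

def adaptive_aperture(gray, open_fraction=0.5):
    """Open aperture elements at the brightest `open_fraction` of scene pixels."""
    thresh = np.quantile(gray, 1.0 - open_fraction)
    return (gray >= thresh).astype(np.uint8)   # 1 = transmissive, 0 = blocked

rng = np.random.default_rng(0)
gray = rng.random((64, 64))                    # stand-in grey-scale scene map
code = adaptive_aperture(gray, open_fraction=0.5)
```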
A high frame-rate compressive spectral video system is developed. The four-dimensional targets (space, time, and spectrum) are captured under multi-spectral LED-modulated illumination. Light reflected from the targets passes through an objective lens and is modulated by a digital micro-mirror device (DMD) in the spatial domain, then collected by an RGB sensor through an imaging lens. The rapidly changing LED illumination patterns and DMD codes provide a unique modulation for each temporal frame. Mathematical modeling, simulations, and experimental results are presented.
In this paper, we propose and analyze a new compressive Fourier phase retrieval (PR) technique, based on differential measurements, that can provably recover sparse signals from O(s polylog N) phaseless measurements. This measurement scheme is easily implementable with interferometric techniques such as holography and frequency-domain optical coherence tomography. The proposed l1-minimization-based recovery achieves the same reconstruction quality as some popular PR algorithms with far fewer measurements.
Data Analysis and Learning with Faulty Measurements
Rank-1 L1-norm-based TUCKER2 (L1-TUCKER2) decomposition of 3-way tensors was recently solved exactly, for the first time, by Markopoulos et al. [1] The exact solution to general-rank L1-TUCKER2 remains unknown to date. In this work, we present a novel approximate algorithm for general-rank L1-TUCKER2 decomposition of 3-way tensors, accompanied by formal convergence and complexity analyses. Our numerical studies illustrate the strong corruption resistance of the proposed algorithm compared to state-of-the-art TUCKER2-decomposition counterparts such as GLRAM, HOSVD, and HOOI.
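For orientation, the classical L2 baseline the authors compare against (a GLRAM-style alternating TUCKER2) can be sketched as below: find factor matrices U and V that jointly capture the row and column subspaces of all tensor slices. This is the L2 counterpart only; the paper's L1-norm algorithm is not reproduced here, and the data are synthetic.

```python
# GLRAM-style (L2) TUCKER2: alternately update U (I x r1) and V (J x r2) to
# maximize sum_k ||U^T X_k V||_F^2 over the frontal slices X_k of a 3-way tensor.
import numpy as np

def tucker2_l2(slices, r1, r2, iters=20):
    I, J = slices[0].shape
    U = np.eye(I)[:, :r1]
    V = np.eye(J)[:, :r2]
    for _ in range(iters):
        M = sum(X @ V @ V.T @ X.T for X in slices)   # U-update matrix
        U = np.linalg.eigh(M)[1][:, -r1:]            # top-r1 eigenvectors
        N = sum(X.T @ U @ U.T @ X for X in slices)   # V-update matrix
        V = np.linalg.eigh(N)[1][:, -r2:]
    return U, V

rng = np.random.default_rng(9)
# Low-rank slices sharing common row/column subspaces, plus small noise.
A = np.linalg.qr(rng.standard_normal((8, 2)))[0]
B = np.linalg.qr(rng.standard_normal((7, 2)))[0]
slices = [A @ rng.standard_normal((2, 2)) @ B.T + 0.01 * rng.standard_normal((8, 7))
          for _ in range(5)]
U, V = tucker2_l2(slices, r1=2, r2=2)
```

The L1-TUCKER2 formulation replaces the Frobenius-norm objective with an L1 norm, which is what confers the corruption resistance reported above.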
This paper presents an online method to track a subspace U from severely corrupted and incomplete data. If we could identify the corrupted entries in a new observation x, then we could update U according to the uncorrupted entries in x using an incomplete-data rank-one update. The challenge is to identify the corrupted entries in x, which is in general NP-hard. To work around this, we propose an approach that iteratively removes the entries that most affect partial projections of x onto U. Our experiments show that this simple approach outperforms state-of-the-art methods, including ℓ1-optimization, especially when most entries in x are corrupted.
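The entry-removal idea can be sketched as follows, under toy assumptions (a fixed orthonormal basis U, sparse additive corruptions of known count); the paper's full method also updates U online, which is not shown.

```python
# Iteratively discard the entries of x whose residual against the partial
# least-squares projection onto U is largest, then fit with the survivors.
import numpy as np

def robust_coefficients(U, x, n_remove):
    keep = np.arange(U.shape[0])
    for _ in range(n_remove):
        a, *_ = np.linalg.lstsq(U[keep], x[keep], rcond=None)  # partial projection
        resid = np.abs(U[keep] @ a - x[keep])
        keep = np.delete(keep, np.argmax(resid))               # drop worst entry
    a, *_ = np.linalg.lstsq(U[keep], x[keep], rcond=None)
    return a, keep

rng = np.random.default_rng(1)
U = np.linalg.qr(rng.standard_normal((50, 3)))[0]   # orthonormal basis, ambient dim 50
a_true = rng.standard_normal(3)
x = U @ a_true
x[:5] += 10.0                                       # grossly corrupt five entries
a_hat, keep = robust_coefficients(U, x, n_remove=5)
```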
We describe an iterative procedure for soft characterization of outlier data in any given data set. In each iteration, data compliance to nominal data behavior is measured according to current L1-norm principal-component subspace representations of the data set. Successively refined L1-norm subspace data set representations lead to successively refined outlier data characterization. The effectiveness of the proposed theoretical scheme is experimentally studied and the results show significantly improved performance compared to L2-PCA schemes, standard L1-PCA, and state-of-the-art robust PCA methods.
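To make the L1-subspace notion concrete, here is the exact L1 principal component for a small data matrix, computed by exhaustive search over antipodal sign vectors (tractable only for few data points); the two-cluster toy data and the L2 comparison are illustrative assumptions, not the paper's experiments.

```python
# Exact L1-PC of a D x N matrix X: w* = X b* / ||X b*||_2, where
# b* = argmax over b in {-1,+1}^N of ||X b||_2 (exhaustive; small N only).
import itertools
import numpy as np

def l1_principal_component(X):
    best_b, best_val = None, -1.0
    for signs in itertools.product((-1.0, 1.0), repeat=X.shape[1]):
        b = np.asarray(signs)
        val = np.linalg.norm(X @ b)
        if val > best_val:
            best_val, best_b = val, b
    w = X @ best_b
    return w / np.linalg.norm(w)

# Eight nominal points on the x-axis plus one gross outlier on the y-axis.
x_vals = [1.0, 1.1, 0.9, 1.2, 0.8, 1.0, 1.05, 0.95]
X = np.array([x_vals + [0.0], [0.0] * 8 + [5.0]])
w_l1 = l1_principal_component(X)
w_l2 = np.linalg.svd(X)[0][:, 0]   # ordinary L2-PC for comparison
```

Here the L2 principal component locks onto the outlier direction, while the L1 component stays much closer to the nominal x-axis, which is the robustness property the iterative scheme above exploits.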
Compressive sensing theory states that a vector x that is sparse in a dictionary A can be recovered from measurements y = WAx. For recovery of x, the measurement matrix W is generally chosen at random, since a random W is sufficiently incoherent with a given basis A with high probability. Although Gaussian or Bernoulli random measurement matrices satisfy the recovery requirements, they do not necessarily yield the best performance in terms of minimal mutual coherence or parameter estimation. Several studies in the literature have focused on measurement matrix design, mainly minimizing some form of coherence between W and A so as to reduce the number of measurements while guaranteeing exact reconstruction. On the other hand, for enhanced parameter estimation, W can be designed to minimize the Cramér-Rao lower bound (CRLB). In this study, we propose direct and sequential measurement designs that minimize the CRLB for the application of direction-of-arrival (DoA) estimation. Based on these results, an adaptive target tracking procedure for single- and multiple-target scenarios is also proposed. Initial simulations show that the designed measurements provide better parameter estimation and target tracking performance than the random matrices widely used in compressive sensing.
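The mutual coherence that the design literature minimizes is cheap to evaluate; a sketch with illustrative random W and A follows (the CRLB-based designs proposed in the paper are not reproduced here).

```python
# Mutual coherence mu(WA) of the effective dictionary: the largest absolute
# inner product between distinct normalized columns.
import numpy as np

def mutual_coherence(D):
    Dn = D / np.linalg.norm(D, axis=0, keepdims=True)
    G = np.abs(Dn.T @ Dn)
    np.fill_diagonal(G, 0.0)
    return G.max()

rng = np.random.default_rng(3)
A = rng.standard_normal((128, 256))                 # sparsifying dictionary
W = rng.standard_normal((32, 128)) / np.sqrt(32)    # Gaussian measurement matrix
mu = mutual_coherence(W @ A)
```

A coherence-minimizing design would search over W to push `mu` down; a CRLB-minimizing design instead optimizes W for estimation accuracy at the DoAs of interest.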
Radar has emerged as a leading technology supporting large sectors of commerce, defense and security. Enabled by the advent of small, low-cost solid-state and software-defined radar technologies, new radar applications involving cognitive radar, medical and biometric radar, passive radar, and automotive radar have been made possible. In this paper, we examine redundancy in human motion signatures along the data and short-time Fourier transform (STFT) parameters. With an "eye" on a final product, we evaluate the effect of reduced sampling along slow-time on classification performance. The goal is to determine the degree of data down-sampling that can be tolerated without compromising feature extraction or significantly impeding motion classifications. We search for the optimum STFT parameters that provide the best classification performance for the given radar measurements and gain an understanding of their respective nominal range values.
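The slow-time down-sampling experiment described above can be sketched with a plain numpy STFT; the tone frequency, sampling rate, and window parameters below are synthetic stand-ins for real micro-Doppler data.

```python
# Numpy-only STFT magnitude of a slow-time return, before and after 2x
# slow-time down-sampling (window length halved to keep the same resolution).
import numpy as np

def stft_mag(x, win_len, hop):
    win = np.hanning(win_len)
    frames = [x[i:i + win_len] * win
              for i in range(0, len(x) - win_len + 1, hop)]
    return np.abs(np.fft.rfft(np.asarray(frames), axis=1))  # frames x freq bins

fs = 1000.0                                    # slow-time sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
x = np.cos(2 * np.pi * 100.0 * t)              # 100 Hz micro-Doppler tone
S_full = stft_mag(x, win_len=128, hop=32)
S_half = stft_mag(x[::2], win_len=64, hop=16)  # 2x slow-time down-sampling
```

As long as the micro-Doppler content stays below the reduced Nyquist rate, the down-sampled spectrogram localizes the same frequency, which is the intuition behind tolerable down-sampling for classification.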
The problem of sparse support recovery in Multiple Measurement Vector (MMV) models is considered, where the support size (K) can be larger than the dimension (M) of each measurement vector. Most results in the literature address the case K < M. We propose a sequential detector for the MMV problem which, despite its simplicity, can recover supports of size K = O(M²) for suitable measurement matrices, with a probability of error decaying to zero exponentially fast as the number of independent measurement vectors (L) goes to infinity. By exactly characterizing the distribution of the detection statistic, we derive explicit forms for the error exponent. We show that the required conditions on the measurement matrix can be met by equiangular tight frames. Although certain constructive methods show the existence of equiangular tight frames of size N > M (N being the number of columns), the existence of equiangular tight frames of size N = O(M²) for arbitrarily large M remains an open problem. We review some of the well-known results on this topic and make connections to the support recovery problem when K > M.
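Equiangularity is easy to verify numerically against the Welch bound, which equiangular tight frames meet with equality; a sketch using one well-known small ETF (six icosahedral directions in R³) follows.

```python
# Check that a known ETF meets the Welch bound
# mu >= sqrt((N - M) / (M * (N - 1))) with equality.
import numpy as np

def coherence(D):
    Dn = D / np.linalg.norm(D, axis=0, keepdims=True)
    G = np.abs(Dn.T @ Dn)
    np.fill_diagonal(G, 0.0)
    return G.max()

phi = (1 + np.sqrt(5)) / 2                      # golden ratio
# Six icosahedral directions: an equiangular tight frame with M = 3, N = 6.
F = np.array([[0, 1, phi], [0, -1, phi], [1, phi, 0],
              [-1, phi, 0], [phi, 0, 1], [phi, 0, -1]], dtype=float).T
M, N = F.shape
welch = np.sqrt((N - M) / (M * (N - 1)))        # = 1/sqrt(5) here
mu = coherence(F)
```

The open question noted above concerns whether such equality-achieving frames exist with N growing like M², the regime needed for K = O(M²) support recovery.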
Remote activity monitoring can support aging-in-place for the elderly, providing crucial capabilities such as fall detection. Falls are the leading cause of accidental death in people aged 65 and over in the United States. The chances of survival are high with low impact on quality of life when prompt assistance is provided after a fall. Radar is at the forefront of research on non-wearable technologies for fall detection and monitoring of activities of daily living for eldercare. Various features extracted from Doppler motion signatures have been proposed in the literature for radar-based fall detection. However, none of these features were specifically designed to provide the most discrimination between the fall and non-fall motion classes. In this paper, we perform linear discriminant analysis (LDA) of Doppler signatures as a first step towards identification of the most discriminative features. LDA performance is evaluated using real data measurements of various indoor human activities and compared with that of existing radar-based fall detection schemes.
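The LDA step can be sketched as below on synthetic two-class "feature vectors"; the Gaussian toy data stand in for features extracted from real Doppler signatures and are an illustrative assumption.

```python
# Two-class Fisher LDA: w = (S0 + S1)^{-1} (m1 - m0), the direction that
# maximizes between-class separation relative to within-class scatter.
import numpy as np

def lda_direction(X0, X1):
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    S0 = np.cov(X0, rowvar=False)     # within-class scatter, class 0
    S1 = np.cov(X1, rowvar=False)     # within-class scatter, class 1
    w = np.linalg.solve(S0 + S1, m1 - m0)
    return w / np.linalg.norm(w)

rng = np.random.default_rng(4)
X_fall = rng.standard_normal((200, 2)) + np.array([3.0, 0.0])  # "fall" class
X_other = rng.standard_normal((200, 2))                        # "non-fall" class
w = lda_direction(X_other, X_fall)
```

Classification then thresholds the scalar projection x·w, and the same projection scores can be used to rank candidate Doppler features by discriminability.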
With the rapid development of sensor and communication technology, both the volume and the resolution of acquired data have grown rapidly. Compressive sensing theory allows signals to be sampled at a rate much lower than the Nyquist rate, which is promising for big-data acquisition and transmission. Compressive sensing has been applied in a variety of fields such as clutter suppression, image/video reconstruction, and real-time processing. Most conventional algorithms for estimating the original signal, for instance Total Variation (TV), consist of a consistency-error term and a constraint term, the latter of which strongly influences the quality of the reconstructed image. The results under different constraints can vary greatly: the conventional TV constraint suffers from the staircase effect, while Higher Degree Total Variation (HDTV) may blur edges. Computational cost is a further consideration. In this paper, a constraint-refinement-based algorithm for compressive sensing image reconstruction is proposed. First, the construction of the constraint term is studied, and appropriate constraints are discussed for images with different characteristics (for example, richness of texture). Second, a modified constraint is introduced to overcome the defects of the aforementioned algorithms. Moreover, a fast approximation algorithm that enhances computational efficiency is proposed, based on introducing an auxiliary function for cross-updating. Visual and quantitative assessments both demonstrate the superiority of the proposed constraint refinement method in terms of SNR, SSIM, and PSNR.
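The role of the TV constraint term can be sketched in one dimension, where the staircase-friendly behavior is easiest to see; the smoothed absolute value, step sizes, and piecewise-constant test signal are illustrative assumptions, not the paper's algorithm.

```python
# Minimize 0.5 * ||x - y||^2 + lam * sum_i sqrt((x_{i+1} - x_i)^2 + eps)
# (a smoothed 1-D TV denoising objective) by plain gradient descent.
import numpy as np

def tv_denoise_1d(y, lam=0.5, step=0.05, iters=1000, eps=1e-2):
    x = y.copy()
    for _ in range(iters):
        d = np.diff(x)
        g = d / np.sqrt(d**2 + eps)        # smoothed sign of each jump
        grad_tv = np.concatenate(([-g[0]], g[:-1] - g[1:], [g[-1]]))
        x -= step * ((x - y) + lam * grad_tv)
    return x

rng = np.random.default_rng(5)
clean = np.concatenate([np.zeros(50), np.ones(50)])   # piecewise-constant signal
noisy = clean + 0.2 * rng.standard_normal(100)
denoised = tv_denoise_1d(noisy, lam=0.5)
```

The TV term flattens the noisy plateaus while keeping the step edge, which is exactly why TV favors piecewise-constant reconstructions (and why higher-degree variants trade that for smoother gradients).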
CS for Remote Sensing, Surveillance, and Radar Imaging
Distinguishing whether a signal corresponds to a single source or a limited number of highly overlapping point spread functions (PSFs) is a ubiquitous problem across all imaging scales, whether detecting receptor-ligand interactions in cells or detecting binary stars. Super-resolution imaging based upon compressed sensing exploits the relative sparseness of the point sources to successfully resolve sources which may be separated by much less than the Rayleigh criterion. However, as a solution to an underdetermined system of linear equations, compressive sensing requires the imposition of constraints which may not always be valid. One typical constraint is that the PSF is known. However, the PSF of the actual optical system may reflect aberrations not present in the theoretical ideal optical system. Even when the optics are well characterized, the actual PSF may reflect factors such as non-uniform emission of the point source (e.g. fluorophore dipole emission). As such, the actual PSF may differ from the PSF used as a constraint. Similarly, multiple different regularization constraints have been suggested including the l1-norm, l0-norm, and generalized Gaussian Markov random fields (GGMRFs), each of which imposes a different constraint. Other important factors include the signal-to-noise ratio of the point sources and whether the point sources vary in intensity. In this work, we explore how these factors influence super-resolution image recovery robustness, determining the sensitivity and specificity. As a result, we determine an approach that is more robust to the types of PSF errors present in actual optical systems.
A cognitive radar framework is being developed to dynamically detect changes in clutter characteristics and to adapt to these changes by identifying the new clutter distribution. In our previous work, we presented a sparse-recovery-based clutter identification technique in which each column of the dictionary represents a specific distribution: calibration radar clutter data corresponding to a given distribution are transformed into a density estimate through kernel density estimation. When a new batch of radar data arrives, it is transformed into a density estimate by the same method, and its distribution is identified through sparse recovery. In this paper, we extend our previous work by considering different kernels and kernel parameters for sparse-recovery-based clutter identification, and we present numerical results. The impact of different kernels and kernel parameters is analyzed by comparing the identification accuracy in each scenario.
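The dictionary construction and identification steps can be sketched as follows. The two candidate clutter models (Rayleigh-like and exponential amplitudes), the Gaussian kernel, and the 1-sparse nearest-column match are illustrative assumptions standing in for the paper's dictionary and sparse-recovery solver.

```python
# Gaussian KDE on a fixed grid turns calibration samples into dictionary
# columns; a new batch is matched to the closest column (1-sparse recovery).
import numpy as np

def kde(samples, grid, bw=0.3):
    """Gaussian kernel density estimate of `samples` evaluated on `grid`."""
    z = (grid[:, None] - samples[None, :]) / bw
    return np.exp(-0.5 * z**2).sum(axis=1) / (len(samples) * bw * np.sqrt(2 * np.pi))

rng = np.random.default_rng(8)
grid = np.linspace(0, 10, 200)
# Dictionary columns: Rayleigh-like and exponential clutter amplitude models.
d_rayleigh = kde(np.abs(rng.standard_normal(5000) + 1j * rng.standard_normal(5000)), grid)
d_expon = kde(rng.exponential(1.0, 5000), grid)
D = np.stack([d_rayleigh, d_expon], axis=1)

new_batch = np.abs(rng.standard_normal(500) + 1j * rng.standard_normal(500))
p_new = kde(new_batch, grid)
best = int(np.argmin(np.linalg.norm(D - p_new[:, None], axis=0)))  # 0 = Rayleigh
```

Swapping `bw` or the kernel shape changes the dictionary columns themselves, which is the sensitivity this paper investigates.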
In many scientific and defense surveillance missions, reducing the size, weight, and power (SWaP) of sensing systems is critical to accomplishing the intended objectives. At the back end, compressive sensing (CS) has been widely adopted to maintain signal fidelity with fewer measurements, thereby reducing hardware complexity. On the other hand, SWaP reduction can also be achieved through intelligent mechanical design. In this paper, we discuss a novel system concept, the Underwater Inflatable Co-prime Sonar Array (UICSA), which provides SWaP compression on two fronts. First, the sonar array is implemented as an inflatable, or deployable, structure: a folded package with a compact stowed dimension that can be detached from a carrying platform and morph into its final structural form at the destination. Second, a sparse array configuration, namely a co-prime array, is employed, which can resolve a much larger number of sources than a conventional uniform half-wavelength-spaced array with the same number of sensors. The integration of these two concepts leads to a simultaneous reduction in the stowed dimension of the sonar array and in the number of hydrophones employed. We describe the development of a UICSA prototype and provide underwater source direction-of-arrival estimates obtained from initial datasets acquired with the prototype.
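The co-prime geometry is easy to sketch: two interleaved uniform subarrays with co-prime spacings produce a difference coarray with many more distinct lags than physical sensors, which is the source of the extra resolvable sources. The common 2M + N element convention below is one standard choice.

```python
# Co-prime array positions (unit spacing) and their difference coarray.
import numpy as np

def coprime_positions(M, N):
    """N-spaced subarray of 2M elements interleaved with an M-spaced one of N."""
    sub1 = np.arange(2 * M) * N
    sub2 = np.arange(N) * M
    return np.unique(np.concatenate([sub1, sub2]))

def difference_coarray(pos):
    diffs = pos[:, None] - pos[None, :]
    return np.unique(diffs)

pos = coprime_positions(3, 5)     # co-prime pair (3, 5): 10 physical sensors
lags = difference_coarray(pos)    # many more unique lags than sensors
```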
Imaging of targets embedded in multilayered dielectric media has attracted growing interest in microwave remote sensing, nondestructive testing, ground penetrating radar, and urban sensing. Compressive sensing has been successfully applied in the aforementioned applications for efficient target imaging, leading to prompt actionable intelligence. Recently, a total variation minimization (TVM) based approach was proposed, which offers superior performance over standard L1-minimization based sparse reconstruction in terms of target shape reconstruction and distinguishing closely-spaced point targets from an extended target. Alternatively, group sparse reconstruction (GSR) schemes can also be employed to account for target extent. In this paper, we provide a performance comparison between TVM and GSR schemes for extended target imaging in multi-layered media using numerical electromagnetic data.
Although largely different concepts, echo state networks and compressed sensing models both rely on collections of random weights: the reservoir dynamics in echo state networks, and the sensing coefficients in compressed sensing. Several methods for generating the random matrices, and metrics that indicate desirable performance, are well studied in compressed sensing, but less so for echo state networks. This work explores the overlap of these compressed sensing methods and metrics with echo state networks. Several methods for generating the random reservoir weights are considered, and a new metric, inspired by the restricted isometry property of compressed sensing, is proposed for echo state networks. The methods and metrics are investigated theoretically and experimentally, with results suggesting that the same types of random matrices work well in both echo state network and compressed sensing scenarios, and that echo state network classification accuracy improves when the proposed restricted-isometry-like constants are close to 1.
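A restricted-isometry-like constant can be estimated empirically by sampling sparse unit vectors and recording how far ||Wx||² strays from 1; this sampling estimate only lower-bounds the true RIP constant, and the matrix size, sparsity, and trial count below are illustrative choices.

```python
# Empirical RIP-like constant: worst observed |  ||Wx||^2 - 1  | over random
# s-sparse unit vectors x (a lower bound on the true restricted isometry constant).
import numpy as np

def rip_constant_estimate(W, s, trials=1000, rng=None):
    rng = rng or np.random.default_rng(0)
    n = W.shape[1]
    worst = 0.0
    for _ in range(trials):
        idx = rng.choice(n, size=s, replace=False)
        x = np.zeros(n)
        x[idx] = rng.standard_normal(s)
        x /= np.linalg.norm(x)
        worst = max(worst, abs(np.linalg.norm(W @ x) ** 2 - 1.0))
    return worst

rng = np.random.default_rng(6)
W = rng.standard_normal((128, 512)) / np.sqrt(128)  # variance-1/m Gaussian matrix
delta = rip_constant_estimate(W, s=6, rng=rng)
```

The same estimator can be applied to a reservoir weight matrix, which is the spirit of the metric proposed in the paper (a near-zero constant meaning near-isometric action on sparse inputs).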
Many application areas, including signal and image processing, computer vision, radar and remote sensing, and bioinformatics, deal with high-dimensional data of various types. In these applications, the high-dimensional data are generally not distributed over the whole signal space; rather, they live in a union of low-dimensional subspaces. Hence, classical clustering techniques that rely on data distributions around centroids are not successful, and techniques that exploit the low-dimensional subspace structure of big data are required. The sparse subspace clustering (SSC) technique, which relies on the self-expressiveness of the data, has been shown to provably handle noiseless data from independent and disjoint subspaces. Self-expressiveness means that each data point in a union of subspaces can be efficiently represented as a linear or affine combination of the other data points in the set. SSC implementation involves solving an L1 minimization problem for each data point and applying spectral clustering to the affinity matrix constructed from the obtained coefficients. Despite its good properties, SSC suffers from a computational complexity that increases rapidly with the number of data points, and for noisy data the self-expressive representation no longer holds exactly. This paper proposes to use perturbed orthogonal matching pursuit (POMP) within the SSC framework for robust and computationally efficient estimation of the number of subspaces, their dimensions, and the segmentation of the data into the subspaces. POMP has been shown to successfully recover sparse signals under random basis perturbations, which is effectively the situation in corrupted-data clustering. Our initial results on simulated clustering datasets show that the proposed POMP-SSC technique provides both computational efficiency and high clustering performance compared to the classical SSC implementation.
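The self-expressiveness step can be sketched with a plain (unperturbed) OMP in place of the paper's POMP, on a toy union of two orthogonal 1-D subspaces; the greedy solver, data, and 1-sparse budget are illustrative assumptions.

```python
# Each point is approximated by an OMP combination of the *other* points;
# the coefficient magnitudes form the affinity fed to spectral clustering.
import numpy as np

def omp(D, y, k):
    """Greedy OMP: select up to k columns of D to approximate y."""
    resid, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ resid)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        resid = y - D[:, support] @ coef
    c = np.zeros(D.shape[1])
    c[support] = coef
    return c

rng = np.random.default_rng(7)
u1, u2 = np.array([1.0, 0, 0, 0]), np.array([0, 0, 1.0, 0])
X = np.concatenate([np.outer(u1, rng.uniform(1, 2, 5)),     # subspace 1 points
                    np.outer(u2, rng.uniform(1, 2, 5))], axis=1)
C = np.zeros((10, 10))
for i in range(10):
    D = X.copy()
    D[:, i] = 0.0                      # exclude self-representation
    C[i] = omp(D, X[:, i], k=1)
W_aff = np.abs(C) + np.abs(C).T        # symmetric affinity matrix
```

For these independent subspaces the affinity is exactly block-diagonal, so spectral clustering on `W_aff` trivially recovers the segmentation; POMP replaces the OMP step to keep that structure under noise and perturbations.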
Polarimetric SAR obtains rich target scattering information by alternately transmitting and receiving radar signals on different polarizations, and it has become an important tool for ground exploration. At present, PolSAR image classification still faces problems caused by the nonlinearity of the data: nonlinear features often make the data difficult to separate in the original feature space. The kernel method maps data to a high-dimensional space in which data that are linearly inseparable in the original space become linearly separable. Based on a study of the features of PolSAR data and sparse signal representation theory, this paper proposes a PolSAR image classification method based on kernel sparse representation. The method exploits the polarimetric and spatial information in PolSAR data and uses a kernel function to counteract the adverse effect of nonlinear features on the classification results, yielding more accurate classifications. The experiments use fully polarimetric SAR data over San Francisco, United States, acquired by the airborne AIRSAR system; the results demonstrate the advantages of kernel sparse representation for PolSAR image classification.
Large-aperture, wide-band antenna arrays are important in both scientific (e.g., radio astronomy) and surveillance (e.g., radar) applications. The practical constraints of signal processing can limit these systems to producing a single beam (or a few beams), each of which is vulnerable to sidelobe interference. Disambiguating legitimate (main lobe) signals from interference is of paramount importance in many applications. This paper considers time-modulated-array techniques for interference tagging as a mitigation approach and examines empirical statistics of interference detection versus the beam-compressive ratio of a multi-beam system.
The time-modulated approach described here expands on our previous work, in which we demonstrated that alternating a real-time beamformer between multiple sets of weights can multiplex several different beams (acquisition plus one or more sidelobe-suppression (SLS) beams) within the beamformer output at the same computational cost as a single beamformer. Alternating the weights according to pseudo-random codes implements a CDMA-like scheme that exploits the frequency-domain sparsity of the received signals. Further, using a single SLS beam minimizes the impact on the resulting detection SNR, but our previous work showed that this also introduces challenges when the underlying array is spatially sparse.
The present work describes the cost/performance tradeoffs of this technique as the number of auxiliary beams is varied: increasing this number raises the probability of detection (PD) of RFI but decreases the SNR (and PD) of weak target signals. The approach is compressive in the beam-space domain.
Modern radio telescopes commonly use antenna arrays, and high-resolution imaging techniques that exploit the radio astronomical signals collected at these arrays play a critical role in achieving their missions. For many years, beamforming techniques have been used in radio astronomy to generate dirty images of limited resolution. Because the manifold of a radio telescope array varies over time due to the Earth's rotation, beamformers are designed and applied separately at each time epoch, and the resulting images are averaged to form enhanced dirty images. Exploiting the fact that astronomical scenes are typically sparse, we present a new method that obtains clean astronomical images through sparse reconstruction. Sparse reconstruction methods that fuse data observed at multiple time epochs are examined and compared. Unlike beamforming techniques, which require an additional deconvolution procedure to form clean images, the proposed technique provides clean astronomical images with accurate estimation of source positions and a high dynamic range.
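The multi-epoch fusion idea can be sketched as follows: steering matrices from two epochs (the array manifold rotates with the Earth) are stacked into one linear model, and the sparse sky is recovered greedily. The linear array geometry, direction grid, rotation offset, and OMP solver are toy assumptions, not the paper's array or algorithm.

```python
# Stack epoch-wise steering matrices and recover a sparse scene by OMP.
import numpy as np

def omp(A, y, k):
    resid, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.conj().T @ resid))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        resid = y - A[:, support] @ coef
    x = np.zeros(A.shape[1], dtype=complex)
    x[support] = coef
    return x

def steering(positions, grid):
    """Steering matrix for positions (in wavelengths) and a direction grid (rad)."""
    return np.exp(2j * np.pi * positions[:, None] * np.sin(grid)[None, :])

pos = np.arange(10) * 0.5                         # half-wavelength ULA
grid = np.linspace(-np.pi / 2, np.pi / 2, 181)    # 1-degree direction grid
A1 = steering(pos, grid)
A2 = steering(pos, grid + 0.02)                   # epoch 2: slightly rotated manifold
x_true = np.zeros(181)
x_true[60], x_true[120] = 1.0, 0.7                # two point sources on the grid
y = np.concatenate([A1 @ x_true, A2 @ x_true])    # fused multi-epoch measurements
A = np.vstack([A1, A2])
x_hat = omp(A, y.astype(complex), k=2)
```

Because the sparse solver works directly on the fused forward model, no separate deconvolution of a dirty beam is needed, which is the contrast drawn above.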