Deep learning techniques have been leveraged in numerous applications and across different data modalities over the past few decades, more recently in the domain of remotely sensed imagery. Given the complexity and depth of convolutional neural network (CNN) architectures, it is difficult to fully evaluate performance, optimize the hyperparameters, and provide robust solutions to a specific machine learning problem that can be applied to nontraditional real-world feature extraction and automation tasks. Ursa Space Systems Inc. develops machine learning approaches to build custom solutions and extract answers from synthetic aperture radar satellite data fused with other remote sensing data sets. One application is identifying the orientation of nontexture linear features in imagery, such as an inlet pipe on top of a cylindrical oil storage tank. We propose a two-phase approach for determining this orientation: first an optimized CNN is used in a nontraditional way to probabilistically determine a coarse location and orientation of the inlet pipe, followed by a maximum likelihood voting scheme to automatically extract the orientation of the angular feature within 7.5 deg. We use a known hyperparameter optimization technique to determine the best deep learning CNN architecture for our specific problem under user-defined optimization and accuracy constraints, by optimizing model hyperparameters (number of layers, size of the input image, and data set preprocessing) using a manual and grid search approach. This systematic approach to hyperparameter optimization increases the accuracy of our angular feature extraction and orientation-finding algorithm from 86% to 94%. Additionally, this proposed algorithm shows how machine learning can be used to improve real-world remote sensing workflows.
Deep learning techniques have been leveraged in numerous applications and across different data modalities over the past few decades, more recently in the domain of remotely sensed imagery. Given the complexity and depth of Convolutional Neural Network (CNN) architectures, it is difficult to fully evaluate performance, optimize the hyperparameters, and provide robust solutions to a specific machine learning problem that can be easily extended to similar problems, e.g., via transfer learning. Ursa Space Systems Inc. (Ursa) develops novel machine learning approaches to build custom solutions and extract answers from Synthetic Aperture Radar (SAR) satellite data fused with other remote sensing datasets. One application is identifying the orientation, with respect to true north, of the inlet pipe, a common feature located on top of a cylindrical oil storage tank. In this paper, we propose a two-phase approach for determining this orientation: first an optimized CNN is used to probabilistically determine a coarse orientation of the inlet pipe, followed by a maximum likelihood voting scheme to automatically extract the orientation of the angular feature to within 7.5°. We present a systematic technique to determine the best deep learning CNN architecture for our specific problem under user-defined optimization and accuracy constraints, by optimizing model hyperparameters (number of layers, size of the input image, and dataset preprocessing) using a manual and grid search approach. This systematic approach to hyperparameter optimization increases the accuracy of our angular feature extraction algorithm from 86% to 94% and can be extended to similar applications.
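The abstracts above do not give implementation details for the second, voting phase. The sketch below is a minimal numpy illustration of how a maximum-likelihood vote over 7.5° orientation bins could be assembled from coarse CNN outputs; the number of coarse bins, the independence assumption across image chips, and all names (`ml_orientation`, `cnn_outputs`) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

FINE_BIN_DEG = 7.5
N_FINE = int(360 / FINE_BIN_DEG)   # 48 fine orientation bins

def ml_orientation(coarse_probs, coarse_edges_deg):
    """Maximum-likelihood vote over 7.5-degree orientation bins.

    coarse_probs : (n_chips, n_coarse) array of CNN softmax outputs, one row
                   per image chip (e.g., per rotated crop of the tank top).
    coarse_edges_deg : (n_coarse + 1,) increasing coarse-bin edges over 0-360.
    Returns the center of the most likely 7.5-degree bin, in degrees.
    """
    fine_centers = (np.arange(N_FINE) + 0.5) * FINE_BIN_DEG
    # Map each fine bin to the coarse bin that contains it.
    coarse_idx = np.searchsorted(coarse_edges_deg, fine_centers, side="right") - 1
    # Treating chips as independent, the log-likelihood of a fine bin is the
    # sum of the log-probabilities its enclosing coarse bin received.
    log_like = np.log(coarse_probs[:, coarse_idx] + 1e-9).sum(axis=0)
    return fine_centers[np.argmax(log_like)]

# Illustrative call with 24 coarse bins of 15 degrees each (an assumption):
# estimate_deg = ml_orientation(cnn_outputs, np.arange(0, 361, 15))
```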
The recent computing performance revolution has driven improvements in sensor, communication, and storage technology. Multi-decadal remote sensing datasets at the petabyte scale are now available in commercial clouds, with new satellite constellations generating petabytes/year of daily high-resolution global coverage imagery. Cloud computing and storage, combined with recent advances in machine learning, are enabling understanding of the world at a scale and at a level of detail never before feasible. We present results from an ongoing effort to develop satellite imagery analysis tools that aggregate temporal, spatial, and spectral information and that can scale with the high-rate and dimensionality of imagery being collected. We focus on the problem of monitoring food crop productivity across the Middle East and North Africa, and show how an analysis-ready, multi-sensor data platform enables quick prototyping of satellite imagery analysis algorithms, from land use/land cover classification and natural resource mapping, to yearly and monthly vegetative health change trends at the structural field level.
The increase in performance, availability, and coverage of multispectral satellite sensor constellations has led to a drastic increase in data volume and data rate. Multi-decadal remote sensing datasets at the petabyte scale are now available in commercial clouds, with new satellite constellations generating petabytes/year of daily high-resolution global coverage imagery. The data analysis capability, however, has lagged behind storage and compute developments, and has traditionally focused on individual scene processing. We present results from an ongoing effort to develop satellite imagery analysis tools that aggregate temporal, spatial, and spectral information and can scale with the high-rate and dimensionality of imagery being collected. We investigate and compare the performance of pixel-level crop identification using tree-based classifiers and its dependence on both temporal and spectral features. Classification performance is assessed using as ground truth the Cropland Data Layer (CDL) crop masks generated by the US Department of Agriculture (USDA). The CDL maps provide pixel-level labels at 30 m spatial resolution for around 200 categories of land cover, but are only available after the growing season. The analysis focuses on McCook County in South Dakota and shows crop classification using a temporal stack of Landsat 8 (L8) imagery over the growing season, from April through October. Specifically, we consider the temporal L8 stack depth, as well as different normalized band difference indices, and evaluate their contribution to crop identification. We also show an extension of our algorithm to map corn and soy crops in the state of Mato Grosso, Brazil.
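As a rough sketch of the pixel-level workflow described above, the snippet below builds per-date band and normalized band difference features from a temporal Landsat 8 stack and fits a random forest (one common tree-based classifier) against CDL labels. The array layout, band ordering, and names (`stack`, `cdl`, `pixel_features`) are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def ndvi(nir, red, eps=1e-6):
    # Normalized band difference index; here NDVI from L8 band 5 (NIR) and band 4 (red).
    return (nir - red) / (nir + red + eps)

def pixel_features(stack):
    """stack: (n_dates, n_bands, H, W) temporal Landsat 8 stack over the season.
    Returns an (H*W, n_dates * (n_bands + 1)) feature matrix: the raw band
    values plus one NDVI value per acquisition date."""
    n_dates, n_bands, H, W = stack.shape
    per_date = []
    for t in range(n_dates):
        bands = stack[t].reshape(n_bands, -1)                 # (n_bands, H*W)
        nd = ndvi(stack[t, 4], stack[t, 3]).reshape(1, -1)    # assumes bands 1-7 stored at indices 0-6
        per_date.append(np.vstack([bands, nd]))
    return np.vstack(per_date).T

# Assumed inputs: `stack` as above, `cdl` as an (H, W) array of CDL class codes.
# X, y = pixel_features(stack), cdl.ravel()
# clf = RandomForestClassifier(n_estimators=200, n_jobs=-1).fit(X, y)
# crop_map = clf.predict(X).reshape(cdl.shape)
```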
Advanced pattern recognition and computer vision algorithms are of great interest for landscape characterization, change detection, and change monitoring in satellite imagery, in support of global climate change science and modeling. We present results from an ongoing effort to extend neuroscience-inspired models for feature extraction to the environmental sciences, and we demonstrate our work using WorldView-2 multispectral satellite imagery. We use a Hebbian learning rule to derive multispectral, multiresolution dictionaries directly from regional satellite normalized band difference index data. These feature dictionaries are used to build sparse scene representations, from which we automatically generate land cover labels via our CoSA algorithm: Clustering of Sparse Approximations. These data adaptive feature dictionaries use joint spectral and spatial textural characteristics to help separate geologic, vegetative, and hydrologic features. Land cover labels are estimated in example WorldView-2 satellite images of Barrow, Alaska, taken at two different times, and are used to detect and discuss seasonal surface changes. Our results suggest that an approach that learns from both spectral and spatial features is promising for practical pattern recognition problems in high resolution satellite imagery.
For over two decades, Los Alamos National Laboratory programs have included an active research effort utilizing satellite observations of terrestrial lightning to learn more about the Earth’s RF background. The FORTE satellite provided a rich satellite lightning database, which has been previously used for some event classification, and remains relevant for advancing lightning research. Lightning impulses are dispersed as they travel through the ionosphere, appearing as nonlinear chirps at the receiver on orbit. The data processing challenge arises from the combined complexity of the lightning source model, the propagation medium nonlinearities, and the sensor artifacts. We continue to develop modern event classification capability on the FORTE database using adaptive signal processing combined with compressive sensing techniques. The focus of our work is improved feature extraction using sparse representations in overcomplete analytical dictionaries. We explore two possible techniques for detecting lightning events, and showcase the algorithms on a few representative data examples. We present preliminary results of our work and discuss future development.
We present results from an ongoing effort to extend neuromimetic machine vision algorithms to multispectral data using adaptive signal processing combined with compressive sensing and machine learning techniques. Our goal is to develop a robust classification methodology that will allow for automated discretization of the landscape into distinct units based on attributes such as vegetation, surface hydrological properties, and topographic/geomorphic characteristics. We use a Hebbian learning rule to build spectral-textural dictionaries that are tailored for classification. We learn our dictionaries from millions of overlapping multispectral image patches and then use a pursuit search to generate classification features. Land cover labels are automatically generated using unsupervised clustering of sparse approximations (CoSA). We demonstrate our method on multispectral WorldView-2 data from a coastal plain ecosystem in Barrow, Alaska. We explore learning from both raw multispectral imagery and normalized band difference indices. We also investigate a quantitative metric to evaluate the spectral properties of the clusters in order to potentially aid in assigning land cover categories to the cluster labels. Our results suggest CoSA is a promising approach to unsupervised land cover classification in high-resolution satellite imagery.
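The abstract above describes learning dictionaries with a Hebbian rule and coding patches with a pursuit search, but gives no implementation detail. The following is a minimal numpy sketch, assuming a common residual-driven, outer-product ("Hebbian-style") dictionary update paired with simple matching pursuit; hyperparameters and function names are illustrative and the actual CoSA implementation may differ.

```python
import numpy as np

def matching_pursuit(x, D, n_nonzero=5):
    """Greedy pursuit: repeatedly pick the atom most correlated with the residual."""
    a, r = np.zeros(D.shape[1]), x.astype(float).copy()
    for _ in range(n_nonzero):
        k = np.argmax(np.abs(D.T @ r))
        c = D[:, k] @ r
        a[k] += c
        r -= c * D[:, k]
    return a

def hebbian_dictionary(patches, n_atoms=256, n_epochs=3, eta=0.05, n_nonzero=5):
    """patches: (n_patches, patch_dim) rows of vectorized multispectral patches."""
    rng = np.random.default_rng(0)
    D = rng.standard_normal((patches.shape[1], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_epochs):
        for x in patches[rng.permutation(len(patches))]:
            a = matching_pursuit(x, D, n_nonzero)
            r = x - D @ a
            D += eta * np.outer(r, a)          # residual-driven, outer-product update
            D /= np.linalg.norm(D, axis=0)     # keep atoms at unit norm
    return D
```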
Ongoing research at Los Alamos National Laboratory studies the Earth’s radio frequency (RF) background utilizing satellite-based RF observations of terrestrial lightning. Such impulsive events occur in the presence of additive noise and structured clutter and appear as broadband nonlinear chirps at a receiver on-orbit due to ionospheric dispersion. The Fast On-orbit Recording of Transient Events (FORTE) satellite provided a rich RF lightning database. Application of modern pattern recognition techniques to this database may further lightning research and potentially improve event discrimination capabilities for future satellite payloads. We extend two established dictionary learning algorithms, K-SVD and Hebbian, for use in classification of satellite RF data. Both algorithms allow us to learn features without relying on analytical constraints or additional knowledge about the expected signal characteristics. We use a pursuit search over the learned dictionaries to generate sparse classification features and discuss performance in terms of event classification using a nearest subspace classifier. We also show a mixed implementation that uses both dictionary types, highlighting how the two algorithms differ in the discriminative information they extract. We use principal component analysis to analyze and compare the learned dictionary spaces to the real data space, and we discuss some aspects of computational complexity and implementation.
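For reference, a nearest subspace decision rule of the kind mentioned above can be sketched as follows: each class is represented by its learned dictionary, and a test signal is assigned to the class whose atoms reconstruct it with the smallest residual. This is a generic formulation under assumed interfaces (`class_dicts`, `nearest_subspace_label`), not the paper's exact classifier.

```python
import numpy as np

def nearest_subspace_label(x, class_dicts):
    """x: (d,) test signal or feature vector.
    class_dicts: list of (d, k_c) dictionaries, one per event class, learned
    offline (e.g., with K-SVD or the Hebbian rule). The signal is assigned to
    the class whose dictionary span reconstructs it with the smallest
    least-squares residual."""
    residuals = []
    for D in class_dicts:
        coeffs, *_ = np.linalg.lstsq(D, x, rcond=None)
        residuals.append(np.linalg.norm(x - D @ coeffs))
    return int(np.argmin(residuals))
```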
Automatic classification of transitory or pulsed radio frequency (RF) signals is of particular interest in persistent surveillance and remote sensing applications. Such transients are often acquired in noisy, cluttered environments, and may be characterized by complex or unknown analytical models. Conventional representations using orthogonal bases, e.g., Short Time Fourier and Wavelet Transforms, can be suboptimal for classification of transients, as they provide a rigid tiling of the time-frequency space, and are not specifically designed for a particular target signal. They do not usually lead to sparse decompositions, and require separate feature selection algorithms, creating additional computational overhead. We propose a fast, adaptive classification approach based on non-analytical dictionaries learned from data. Our goal is to detect chirped pulses from a model target emitter under poor signal-to-noise conditions and varying levels of simulated background clutter. This paper builds on our previous RF classification work, and extends it to more complex target and background scenarios. We use a Hebbian rule to learn discriminative RF dictionaries directly from data, without relying on analytical constraints or additional knowledge about the signal characteristics. A pursuit search is used over the learned dictionaries to generate sparse classification features in order to identify time windows containing a target pulse. We demonstrate that learned dictionary techniques are highly suitable for pulsed RF analysis and present results with varying background clutter and noise levels. The target detection decision is obtained in almost real-time via a parallel, vectorized implementation.
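One way the windowed, vectorized detection described above could be organized is sketched below: every time window of the record is scored in a single matrix product by its best normalized correlation with any learned target atom (a one-atom pursuit). The window length, hop, detection threshold, and names (`window_scores`, `D_target`, `rf_record`) are placeholders, and the paper's actual pursuit uses more atoms per window.

```python
import numpy as np

def window_scores(signal, D_target, win_len=1024, hop=512):
    """Slide a window along the RF record and score each position by its best
    normalized correlation with any learned target atom (a one-atom pursuit).
    D_target is assumed to hold unit-norm atoms of length win_len as columns."""
    starts = range(0, len(signal) - win_len + 1, hop)
    # Stack all windows as columns so every correlation is one matrix product.
    W = np.stack([signal[s:s + win_len] for s in starts], axis=1).astype(float)
    W /= np.linalg.norm(W, axis=0, keepdims=True) + 1e-12    # unit-norm windows
    corr = np.abs(D_target.T @ W)      # (n_atoms, n_windows) correlation magnitudes
    return corr.max(axis=0)            # best-matching atom per window

# windows_with_pulse = window_scores(rf_record, D_target) > 0.5   # assumed threshold
```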
Ongoing research at Los Alamos National Laboratory studies the Earth’s radio frequency (RF) background utilizing satellite-based RF observations of terrestrial lightning. Such impulsive events are dispersed through the ionosphere and appear as broadband nonlinear chirps at a receiver on-orbit. They occur in the presence of additive noise and structured clutter, making their classification challenging. The Fast On-orbit Recording of Transient Events (FORTE) satellite provided a rich RF lightning database. Application of modern pattern recognition techniques to this database may further lightning research in the scientific community, and potentially improve on-orbit processing and event discrimination capabilities for future satellite payloads. Conventional feature extraction techniques using analytical dictionaries, such as a short-time Fourier basis or wavelets, are not comprehensively suitable for analyzing the broadband RF pulses under consideration here. We explore an alternative approach based on non-analytical dictionaries learned directly from data, and extend two dictionary learning algorithms, K-SVD and Hebbian, for use with satellite RF data. Both algorithms allow us to learn features without relying on analytical constraints or additional knowledge about the expected signal characteristics. We then use a pursuit search over the learned dictionaries to generate sparse classification features, and discuss their performance in terms of event classification. We also use principal component analysis to analyze and compare the respective learned dictionary spaces to the real data space.
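The principal component comparison mentioned above could be carried out in many ways; the snippet below is one minimal illustration, assuming the data records and the learned dictionary atoms are available as row-stacked matrices (`X`, `D_ksvd`, `D_hebb` are assumed names).

```python
import numpy as np
from sklearn.decomposition import PCA

def explained_variance_profile(samples, n_components=20):
    """samples: rows are either raw RF records or learned dictionary atoms.
    Returns the cumulative fraction of variance captured by the leading
    principal components, which can then be compared across the data space
    and the K-SVD and Hebbian dictionary spaces."""
    pca = PCA(n_components=n_components).fit(samples)
    return np.cumsum(pca.explained_variance_ratio_)

# Illustrative comparison (variable names assumed):
# for name, M in [("data", X), ("K-SVD atoms", D_ksvd.T), ("Hebbian atoms", D_hebb.T)]:
#     print(name, explained_variance_profile(M)[:5])
```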
Techniques for automated feature extraction, including neuroscience-inspired machine vision, are of great interest for landscape characterization and change detection in support of global climate change science and modeling. We present results from an ongoing effort to extend machine vision methodologies to the environmental sciences, using state-of-the-art adaptive signal processing, combined with compressive sensing and machine learning techniques. We use a modified Hebbian learning rule to build spectral-textural dictionaries that are tailored for classification. We learn our dictionaries from millions of overlapping multispectral image patches and then use a pursuit search to generate classification features. Land cover labels are automatically generated using CoSA: unsupervised Clustering of Sparse Approximations. We demonstrate our method on multispectral WorldView-2 data from a coastal plain ecosystem in Barrow, Alaska (USA). Our goal is to develop a robust classification methodology that will allow for automated discretization of the landscape into distinct units based on attributes such as vegetation, surface hydrological properties (e.g., soil moisture and inundation), and topographic/geomorphic characteristics. In this paper, we explore learning from both raw multispectral imagery and normalized band difference indices. We also investigate a quantitative metric to evaluate the spectral properties of the clusters, in order to potentially aid in assigning land cover categories to the cluster labels.
For the past two decades, there has been an ongoing research effort at Los Alamos National Laboratory to learn more about the Earth’s radio frequency (RF) background utilizing satellite-based RF observations of terrestrial lightning. The Fast On-orbit Recording of Transient Events (FORTE) satellite provided a rich RF lightning database, comprising five years of data recorded from its two RF payloads. While some classification work has been done previously on the FORTE RF database, application of modern pattern recognition techniques may advance lightning research in the scientific community and potentially improve on-orbit processing and event discrimination capabilities for future satellite payloads. We now develop and implement new event classification capability on the FORTE database using state-of-the-art adaptive signal processing combined with compressive sensing and machine learning techniques. The focus of our work is improved feature extraction using sparse representations in learned dictionaries. Conventional localized data representations for RF transients using analytical dictionaries, such as a short-time Fourier basis or wavelets, can be suitable for analyzing some types of signals, but not others. Instead, we learn RF dictionaries directly from data, without relying on analytical constraints or additional knowledge about the signal characteristics, using several established machine learning algorithms. Sparse classification features are extracted via matching pursuit search over the learned dictionaries, and used in conjunction with a statistical classifier to distinguish between lightning types. We present preliminary results of our work and discuss classification scenarios and future development.
Techniques for automated feature extraction, including neuroscience-inspired machine vision, are of great interest for landscape characterization and change detection in support of global climate change science and modeling. We present results from an ongoing effort to extend machine vision methodologies to the environmental sciences, using state-of-the-art adaptive signal processing, combined with compressive sensing and machine learning techniques. We use a Hebbian learning rule to build undercomplete spectral-textural dictionaries that are adapted to the data. We learn our dictionaries from millions of overlapping multispectral image patches and then use a pursuit search to generate classification features. Land cover labels are automatically generated using our CoSA algorithm: unsupervised Clustering of Sparse Approximations. We demonstrate our method using multispectral WorldView-2 data from three Arctic study areas: Barrow, Alaska; the Selawik River, Alaska; and a watershed near the Mackenzie River delta in northwest Canada. Our goal is to develop a robust classification methodology that will allow for the automated discretization of the landscape into distinct units based on attributes such as vegetation, surface hydrological properties, and geomorphic characteristics. To interpret and assign land cover categories to the clusters, we evaluate their spectral properties and compare them to both field- and remote sensing-derived classifications of landscape attributes. Our work suggests that neuroscience-based models are a promising approach to practical pattern recognition problems in remote sensing.
Neuroscience-inspired machine vision algorithms are of current interest in the areas of detection and monitoring of climate change impacts, and general Land Use/Land Cover classification using satellite image data. We describe an approach for automatic classification of land cover in multispectral satellite imagery of the Arctic using sparse representations over learned dictionaries. We demonstrate our method using DigitalGlobe WorldView-2 8-band visible/near infrared high spatial resolution imagery of the Mackenzie River basin. We use an on-line batch Hebbian learning rule to build spectral-textural dictionaries that are adapted to this multispectral data. We learn our dictionaries from millions of overlapping image patches and then use a pursuit search to generate sparse classification features. We explore unsupervised clustering in the sparse representation space to produce land-cover category labels. This approach combines spectral and spatial textural characteristics to detect geologic, vegetative, and hydrologic features. We compare our technique to standard remote sensing algorithms. Our results suggest that neuroscience-based models are a promising approach to practical pattern recognition problems in remote sensing, even for datasets using spectral bands not found in natural visual systems.
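The labeling step described above, clustering in the sparse representation space, could look like the minimal sketch below. The abstracts do not name the specific clustering algorithm, so k-means stands in, and the inputs (`sparse_codes`, `cosa_labels`, magnitude-only coefficients) are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def cosa_labels(sparse_codes, image_shape, n_clusters=12, seed=0):
    """sparse_codes: (H*W, n_atoms) sparse approximation coefficients of the
    patch centered on each pixel (e.g., pursuit coefficients over the learned
    dictionary). Returns an (H, W) map of unsupervised cluster labels that can
    then be interpreted as land-cover categories."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    labels = km.fit_predict(np.abs(sparse_codes))   # magnitude-only codes, an assumed choice
    return labels.reshape(image_shape)
```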
Automatic classification of transitory or pulsed radio frequency (RF) signals is of particular interest in persistent surveillance and remote sensing applications. Such transients are often acquired in noisy, cluttered environments, and may be characterized by complex or unknown analytical models, making feature extraction and classification difficult. We propose a fast, adaptive classification approach based on non-analytical dictionaries learned from data. We compare two dictionary learning methods from the image analysis literature, the K-SVD algorithm and Hebbian learning, and extend them for use with RF data. Both methods allow us to learn discriminative RF dictionaries directly from data without relying on analytical constraints or additional knowledge about the expected signal characteristics. We then use a pursuit search over the learned dictionaries to generate sparse classification features in order to identify time windows that contain a target pulse. In this paper we compare the two dictionary learning methods and discuss how their performance changes as a function of dictionary training parameters. We demonstrate that learned dictionary techniques are suitable for pulsed RF analysis and present results with varying background clutter and noise levels.
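For comparison with the Hebbian update sketched earlier, the block below shows a compact numpy version of a single K-SVD dictionary refinement pass in the standard Aharon et al. formulation; it assumes the sparse coding step has already been run (e.g., with a pursuit), and is not specific to the RF extension described above.

```python
import numpy as np

def ksvd_update(X, D, A):
    """One K-SVD dictionary refinement pass.
    X: (d, n) training signals as columns; D: (d, K) dictionary with unit-norm
    atoms; A: (K, n) current sparse codes from the pursuit step.
    Atoms and their coefficients are updated in place, one atom at a time."""
    for k in range(D.shape[1]):
        users = np.nonzero(A[k, :])[0]           # signals that currently use atom k
        if users.size == 0:
            continue
        # Reconstruction error with atom k removed, restricted to those signals.
        E_k = X[:, users] - D @ A[:, users] + np.outer(D[:, k], A[k, users])
        U, s, Vt = np.linalg.svd(E_k, full_matrices=False)
        D[:, k] = U[:, 0]                        # new atom: leading left singular vector
        A[k, users] = s[0] * Vt[0, :]            # matching coefficient update
    return D, A
```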
Automatic classification of broadband transient radio frequency (RF) signals is of particular interest in persistent surveillance applications. Because such transients are often acquired in noisy, cluttered environments, and are characterized by complex or unknown analytical models, feature extraction and classification can be difficult. We propose a fast, adaptive classification approach based on non-analytical dictionaries learned from data. Conventional representations using fixed (or analytical) orthogonal dictionaries, e.g., Short Time Fourier and Wavelet Transforms, can be suboptimal for classification of transients, as they provide a rigid tiling of the time-frequency space, and are not specifically designed for a particular signal class. They do not usually lead to sparse decompositions, and require separate feature selection algorithms, creating additional computational overhead. Pursuit-type decompositions over analytical, redundant dictionaries yield sparse representations by design, and work well for target signals in the same function class as the dictionary atoms. The pursuit search however has a high computational cost, and the method can perform poorly in the presence of realistic noise and clutter. Our approach builds on the image analysis work of Mairal et al. (2008) to learn a discriminative dictionary for RF transients directly from data without relying on analytical constraints or additional knowledge about the signal characteristics. We then use a pursuit search over this dictionary to generate sparse classification features. We demonstrate that our learned dictionary is robust to unexpected changes in background content and noise levels. The target classification decision is obtained in almost real-time via a parallel, vectorized implementation.