This PDF file contains the front matter associated with SPIE Proceedings Volume 9121 including the Title Page, Copyright information, Table of Contents, Introduction, and Conference Committee listing.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access.* (*Shibboleth/Open Athens users: please sign in to access your institution's subscriptions.) To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
The Mahalanobis Taguchi System (MTS) is a relatively new tool in the vehicle health maintenance domain, but it has some distinct advantages in current multi-sensor implementations. The use of Mahalanobis Spaces (MS) allows the algorithm to use characteristics of sensor signals to identify machine behaviors. MTS is extremely powerful, with the caveat that the correct variables must be selected to form the MS. In this research work, 56 sensors monitor various aspects of the vehicles. Typically, in the MTS process, identification of useful variables is preceded by validation of the measurement scale. However, the MTS approach does not directly include any mitigating steps should the measurement scale fail validation. Existing work has performed outlier removal in construction of the MS, which can lead to better validation. In our approach, we modify the outlier removal process with more liberal definitions of outliers to better identify variables' impact prior to identification of useful variables. This subtle change substantially lowered the false positive rate because additional variables were retained. Traditional MTS approaches identify useful variables only to the extent that they help identify the positive (abnormal) condition; the impact of removing false negatives is not included. Initial results show our approach can reduce false positives while still maintaining complete fault identification for this vehicle data set.
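The core quantity behind an MS is the Mahalanobis distance of an observation from the group of normal observations. As a hedged illustration (the two-sensor data below are hypothetical, not the paper's 56-sensor vehicle data), a two-variable version can be sketched as:

```python
# Sketch: squared Mahalanobis distance of a new observation from a
# "normal" group defining the Mahalanobis Space. Illustrative only.

def mean(xs):
    return sum(xs) / len(xs)

def covariance(xs, ys):
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)

def mahalanobis2(obs, xs, ys):
    """Squared Mahalanobis distance for a 2-variable Mahalanobis Space."""
    mx, my = mean(xs), mean(ys)
    sxx, syy = covariance(xs, xs), covariance(ys, ys)
    sxy = covariance(xs, ys)
    det = sxx * syy - sxy * sxy          # determinant of 2x2 covariance
    dx, dy = obs[0] - mx, obs[1] - my
    # d' C^-1 d written out for the 2x2 inverse:
    return (syy * dx * dx - 2 * sxy * dx * dy + sxx * dy * dy) / det

# Hypothetical "normal" sensor readings defining the MS:
xs = [1.0, 1.2, 0.9, 1.1, 1.0]
ys = [10.0, 9.5, 10.3, 9.8, 10.4]
print(mahalanobis2((1.05, 10.1), xs, ys))   # small: near the normal group
print(mahalanobis2((2.0, 12.0), xs, ys))    # large: candidate abnormal point
```

Observations with large distances are the outliers whose definition the paper loosens before variable selection.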
Bayesian statistics offers a well-founded and powerful fusion methodology, including for the fusion of heterogeneous information sources. However, except in special cases, the needed posterior distribution is not analytically derivable. As a consequence, Bayesian fusion may cause unacceptably high computational and storage costs in practice. Local Bayesian fusion approaches aim at reducing the complexity of the Bayesian fusion methodology significantly. This is done by concentrating the actual Bayesian fusion on the potentially most task-relevant parts of the domain of the Properties of Interest. Our research on these approaches is motivated by an analogy to criminal investigations, where criminalists likewise pursue clues only locally. This publication follows previous publications on a special local Bayesian fusion technique called focussed Bayesian fusion, in which the actual calculation of the posterior distribution is completely restricted to a suitably chosen local context. As a result, the global posterior distribution is not completely determined, and strategies for appropriately using the results of a focussed Bayesian analysis are needed. In this publication, we primarily contrast different ways of embedding the results of focussed Bayesian fusion explicitly into a global context. To obtain a unique global posterior distribution, we analyze the application of the Maximum Entropy Principle, which has been shown to be successfully applicable in metrology and in several other areas. To address the need for making further decisions subsequent to the actual fusion task, we also analyze criteria for decision making under partial information.
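As an illustrative sketch only (not the authors' formulation), a focussed posterior restricted to a local context, followed by a maximum-entropy embedding into a global distribution, might look like this; the hypothesis names, the numbers, and the assumed bound on the out-of-context mass are all hypothetical:

```python
# Sketch: Bayes' rule evaluated only on a local context C, then embedded
# into a global posterior by spreading the remaining probability mass over
# the complement with maximum entropy (i.e. uniformly), given an assumed
# bound `mass_outside` on that mass. Hypothetical numbers throughout.

def focussed_posterior(prior, likelihood, context, mass_outside=0.05):
    # Bayes' rule restricted to the local context C:
    unnorm = {h: prior[h] * likelihood[h] for h in context}
    z = sum(unnorm.values())
    inside = {h: (1.0 - mass_outside) * unnorm[h] / z for h in context}
    # Maximum-entropy embedding: the reserved mass is uniform elsewhere.
    outside_hyps = [h for h in prior if h not in context]
    outside = {h: mass_outside / len(outside_hyps) for h in outside_hyps}
    return {**inside, **outside}

prior      = {"h1": 0.25, "h2": 0.25, "h3": 0.25, "h4": 0.25}
likelihood = {"h1": 0.9, "h2": 0.4, "h3": 0.1, "h4": 0.1}   # sensor report
post = focussed_posterior(prior, likelihood, context={"h1", "h2"})
print(post)   # h1 dominates; h3 and h4 share the reserved 5% mass equally
```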
In typical classification problems, the data used to train a model for each class is correctly labeled, so that fully supervised learning can be utilized. For example, many illustrative labeled data sets can be found at sources such as the UCI Repository for Machine Learning (http://archive.ics.uci.edu/ml/) or the Keel Data Set Repository (http://www.keel.es). However, increasingly many real-world classification problems involve data that contain both labeled and unlabeled samples. In the latter case, the data samples are assumed to be missing all class label information, and when used as training data these samples are considered to be of unknown origin (i.e., to the learning system, actual class membership is completely unknown). Typically, when presented with a classification problem containing both labeled and unlabeled training samples, a common approach is simply to discard the unlabeled data. In other words, the unlabeled data are not included with the existing labeled data for learning, which can result in a poorly trained classifier that does not reach its full performance potential. In most cases, the primary reason that unlabeled data are not used for training is that, depending on the classifier, the correct optimal model for semi-supervised classification (i.e., a classifier that learns class membership using both labeled and unlabeled samples) can be far too complicated to develop.
In previous work, results were shown based on the fusion of binary classifiers to improve performance in multiclass classification problems. In that case, Bayesian methods were used to fuse binary classifier outputs while selecting the most relevant classifier pairs to improve the overall classifier decision space. Here, this work is extended by developing new algorithms for improving semi-supervised classification performance. Results are demonstrated with real data from the UCI and Keel Repositories.
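A minimal way to see how unlabeled samples can sharpen a classifier is generic self-training (not the Bayesian fusion method developed here): fit on the labeled data, pseudo-label the unlabeled data, and refit. All data below are hypothetical 1-D features for a nearest-centroid model:

```python
# Sketch: one round of self-training with a nearest-centroid classifier.
# labeled: dict class -> list of 1-D feature values; unlabeled: list.

def centroid(xs):
    return sum(xs) / len(xs)

def self_train(labeled, unlabeled):
    c0, c1 = centroid(labeled[0]), centroid(labeled[1])
    aug = {0: list(labeled[0]), 1: list(labeled[1])}
    for x in unlabeled:
        # Pseudo-label each unlabeled sample by its nearest centroid.
        aug[0 if abs(x - c0) <= abs(x - c1) else 1].append(x)
    # Retrain: centroids now reflect labeled AND pseudo-labeled data.
    return centroid(aug[0]), centroid(aug[1])

labeled = {0: [1.0, 1.5], 1: [4.0, 4.5]}
unlabeled = [0.8, 1.2, 3.9, 4.8]
c0, c1 = self_train(labeled, unlabeled)
print(c0, c1)   # centroids refined by the pseudo-labeled samples
```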
The Process for Probabilistic Multi-Source Multi-INT Intel Fusion Benefit Analysis (PIFBA) provides a methodology for statistically computing the probability of detecting, locating, and tracking objects, in order to assess current capabilities or the benefit of adding or removing ISR assets to obtain the statistically optimal result. The PIFBA process defines the approach to calculate the probabilistic benefits or rewards associated with integrating or fusing multi-source multi-INT products across a wide range of platforms, sensors, environmental conditions and target objects. This process applies to analyzing ISR capabilities, effectiveness, and gaps, as well as the benefits of applying existing or new technology and tactics.
The PIFBA process was designed to answer the following questions:
Based on the defined ISR assets, what is the probability that we know a piece of Intel with sufficient accuracy and timeliness to be of value to the analyst or warfighter?
What is the benefit of adding new data via additional platforms, sensors, or processing?
What is the benefit of adding new systems or technology, and what degree of performance must they exhibit in order to affect the statistical outcome?
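In its very simplest form, and assuming independent assets, the first question reduces to the complement rule. The sketch below uses hypothetical per-asset detection probabilities and is far simpler than the full PIFBA computation:

```python
# Probability that at least one of a set of independent ISR assets
# detects the target: 1 minus the probability that every asset misses.

def p_at_least_one(probs):
    miss = 1.0
    for p in probs:
        miss *= (1.0 - p)      # probability that every asset misses
    return 1.0 - miss

current = [0.6, 0.5]                    # hypothetical existing assets
print(p_at_least_one(current))          # 0.8
print(p_at_least_one(current + [0.4]))  # benefit of one added sensor: 0.88
```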
In this work, simple, generic models of chemical sensing are used to simulate sensor array data and to illustrate the
impact on overall system performance that specific design choices impart. The ability of multisensor systems to perform
multianalyte detection (i.e., distinguish multiple targets) is explored by examining the distinction between fundamental
design-related limitations stemming from mismatching of mixture composition to fused sensor measurement spaces, and
limitations that arise from measurement uncertainty. Insight into the limits and potential of sensor fusion to robustly
address detection tasks in realistic field conditions can be gained through an examination of a) the underlying geometry
of both the composition space of sources one hopes to elucidate and the measurement space a fused sensor system is
capable of generating, and b) the informational impact of uncertainty on both of these spaces. For instance, what is the
potential impact on sensor fusion in an open-world scenario where unknown interferents may contaminate target signals?
Under complex and dynamic backgrounds, decision rules may implicitly become non-optimal and adding sensors may
increase the amount of conflicting information observed. This suggests that the manner in which a decision rule handles
sensor conflict can be critical in leveraging sensor fusion for effective open world sensing, and becomes exponentially
more important as more sensors are added. Results and design considerations for handling conflicting evidence in Bayes
and Dempster-Shafer fusion frameworks are presented. Bayesian decision theory is used to provide an upper limit on
detector performance of simulated sensor systems.
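The conflict-handling behavior flagged above is visible directly in Dempster's rule of combination, where the mass assigned to the empty set (the conflict K) is renormalized away. A two-sensor sketch over a frame of {T, N} (target / no-target), with hypothetical mass assignments:

```python
# Sketch of Dempster's rule for two sensors. Focal elements are frozensets;
# the normalization by (1 - K) is where inter-sensor conflict is absorbed.

def dempster_combine(m1, m2):
    combined, conflict = {}, 0.0
    for a, pa in m1.items():
        for b, pb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + pa * pb
            else:
                conflict += pa * pb            # mass on the empty set
    return {a: v / (1.0 - conflict) for a, v in combined.items()}, conflict

T, N = frozenset("T"), frozenset("N")
theta = T | N                                  # total ignorance
s1 = {T: 0.7, N: 0.2, theta: 0.1}              # sensor 1 leans "target"
s2 = {T: 0.1, N: 0.8, theta: 0.1}              # sensor 2 conflicts with it
fused, K = dempster_combine(s1, s2)
print(K)       # conflict between the two sensors is high here
print(fused)
```

When K approaches 1, the normalization amplifies whatever little agreement remains, which is one reason conflict handling becomes critical as sensors are added.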
As computer network security threats increase, many organizations implement multiple Network Intrusion Detection Systems (NIDS) to maximize the likelihood of intrusion detection and provide a comprehensive understanding of intrusion activities. However, NIDS trigger a massive number of alerts on a daily basis. This can be overwhelming for computer network security analysts since it is a slow and tedious process to manually analyse each alert produced. Thus, automated and intelligent clustering of alerts is important to reveal the structural correlation of events by grouping alerts with common features. As the nature of computer network attacks, and therefore alerts, is not known in advance, unsupervised alert clustering is a promising approach to achieve this goal. We propose a joint optimization technique for feature selection and clustering to aggregate similar alerts and to reduce the number of alerts that analysts have to handle individually. More precisely, each identified feature is assigned a binary value, which reflects the feature's saliency. This value is treated as a hidden variable and incorporated into a likelihood function for clustering. Since computing the optimal solution of the likelihood function directly is analytically intractable, we use the Expectation-Maximisation (EM) algorithm to iteratively update the hidden variable and use it to maximize the expected likelihood. Our empirical results, using a labelled Defense Advanced Research Projects Agency (DARPA) 2000 reference dataset, show that the proposed method gives better results than the EM clustering without feature selection in terms of the clustering accuracy.
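The clustering step can be illustrated with a stripped-down 1-D, two-component EM for a Gaussian mixture (without the feature-saliency hidden variable that the paper adds on top); the alert-feature values and initializations below are hypothetical:

```python
# Minimal EM for a two-component 1-D Gaussian mixture: alternate the
# E-step (responsibilities) and M-step (weights, means, variances).
import math

def em_gmm(xs, mu, iters=50):
    w, var = [0.5, 0.5], [1.0, 1.0]
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        resp = []
        for x in xs:
            p = [w[k] * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) /
                 math.sqrt(2 * math.pi * var[k]) for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate parameters from the responsibilities.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, xs)) / nk
            var[k] = max(var[k], 1e-6)        # guard against collapse
    return w, mu, var

xs = [0.1, 0.3, 0.2, 5.0, 5.2, 4.9, 5.1]     # two obvious alert groups
w, mu, var = em_gmm(xs, mu=[0.0, 4.0])
print(sorted(mu))   # means converge near the two group centers
```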
The Automatic Identification System (AIS) is a self-reporting system based on VHF radio to transmit a vessel’s identity,
position, speed, heading and other parameters to improve maritime domain awareness. However, AIS information can be
programmatically spoofed by terrorists or other criminals, who often choose to masquerade as innocent civilians and
exploit the vulnerabilities of military and civilian infrastructures for their purposes. Therefore, detecting and localizing a spoofing AIS ship becomes a critical and challenging issue for maritime security. This paper presents an algorithm to detect and geolocalize a spoofing AIS emitter using the Doppler frequency of space-based AIS signals. With an AIS signal sensor on a fast orbiting satellite, the measured Doppler frequency of an AIS emitter defines a double-napped cone with the satellite at its vertex and the satellite velocity along its axis, such that the theoretical Doppler frequency derived from the radial velocity to the AIS emitter matches the measured Doppler frequency. All such matches lie on this cone extending out from the satellite, which cuts the Earth's surface in two curves, so the AIS emitter must lie somewhere on these curves. Two such AIS Doppler frequency measurements for the same stationary AIS emitter produce two valid curves which intersect at the position of the AIS emitter. Multiple Doppler frequency measurements can be used to better estimate the position fix of an AIS emitter, and hence to flag a spoofing AIS ship if the estimated position fix unreasonably differs from the position carried in its AIS message. A set of formulas is derived which relates an AIS emitter position to its Doppler frequency measurements.
This paper reports a metamaterial-inspired combined inductive-capacitive sensing method for detecting and distinguishing metallic and non-metallic objects. Metallic and non-metallic objects can be distinguished by measuring both their inductive and capacitive responses, based on the fact that they respond differently to inductive and capacitive sensing. The proposed method is inspired by metamaterial structures: both inductive and capacitive sensing are realized simultaneously when the sensor is operated at off-resonant frequencies. The proposed method is demonstrated with typical printed circuit board (PCB) technology. The designed sensor can distinguish metallic and dielectric objects with a sensing range of about 10 mm, showing competitive performance compared with commercially available proximity sensors.
The acquisition of synchronous EO imagery and gamma radiation data in aerial overflights of an unmanned aircraft
can provide valuable spatial context for radioactive source mapping. Using image-based 3D reconstruction methods,
a terrain map can be generated and used to reason about more likely radiation source locations. For instance,
vehicles may be likely hiding places for nuclear materials, so a source model with assigned probability is used at the
vehicle to reduce the overall uncertainty in position estimation. Environment reconstructions based on EO imagery
with a mapped gamma radiation overlay provide intrinsic correlations between the datasets. Using radioactive
material dispersion models or point source models, the derived correlations serve to enhance coarse gamma
radiation data. The use of autonomous unmanned aircraft provides a valuable tool for acquiring these data, as such aircraft are capable of accurate and repeatable position control while eliminating exposure danger to the operators. In this experiment, two sources (0.084 Ci 137Cs and 0.00048 Ci 133Ba) were distributed in a field with varying terrain, and a scan was conducted using the Virginia Tech Yamaha RMAX autonomous helicopter equipped with a two-camera imaging system and a NaI scintillation-type spectrometer. Terrain reconstruction was conducted using both structure from motion (SfM) and stereo vision techniques, and radiation data synchronized to the imagery was overlaid.
Intelligent transportation system (ITS) applications are expected to provide a more efficient, effective, reliable, and
safe driving experience, which can minimize road traffic congestion resulting in a better traffic flow management. To
efficiently manage traffic flows, in this paper, we compare the effectiveness of two well-known vehicle routing
algorithms: the Dijkstra's shortest path algorithm and the A* (Astar) algorithm in terms of the total travel time and the
travel distance. To this end, we built a generic ITS test-bed and created several real-world driving scenarios using field
and simulation data to evaluate the performance of these two routing algorithms. The dataset used in our simulation is six weeks of traffic volume data, from 08/01/2012 to 09/27/2012, in the Maryland (MD)/Washington DC and Virginia (VA) area. Our simulation data show that an increase in network size results in scalability problems, as the efficiency and effectiveness of these algorithms diminish in larger road networks with greater traffic volume densities, flow rates, and congested conditions. In addition, the imprecision of the road network increases as the network size and the traffic volume density increase. Our study shows that the ability of these vehicular routing algorithms to adaptively route traffic depends on the size and type of road networks, and the current roadway conditions.
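The contrast between the two algorithms can be sketched on a toy grid road network with unit edge costs (not the paper's test-bed): Dijkstra is A* with a zero heuristic, so a single search routine suffices, and the Manhattan-distance heuristic lets A* expand fewer nodes while finding the same shortest-path cost.

```python
# Dijkstra vs. A* on an n x n grid with unit edge costs. heuristic=0
# gives Dijkstra; heuristic=1 weights the admissible Manhattan distance.
import heapq

def search(grid_size, start, goal, heuristic):
    def h(n):
        return heuristic * (abs(n[0] - goal[0]) + abs(n[1] - goal[1]))
    dist, frontier, expanded = {start: 0}, [(h(start), start)], 0
    while frontier:
        _, node = heapq.heappop(frontier)
        expanded += 1
        if node == goal:
            return dist[node], expanded
        x, y = node
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < grid_size and 0 <= ny < grid_size:
                nd = dist[node] + 1                 # unit edge cost
                if nd < dist.get((nx, ny), float("inf")):
                    dist[(nx, ny)] = nd
                    heapq.heappush(frontier, (nd + h((nx, ny)), (nx, ny)))

cost_d, exp_d = search(20, (0, 0), (19, 0), heuristic=0)  # Dijkstra
cost_a, exp_a = search(20, (0, 0), (19, 0), heuristic=1)  # A*
print(cost_d == cost_a, exp_d, exp_a)   # same cost, A* expands fewer nodes
```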
As new imaging modalities arise, the problem of inpainting becomes increasing important. Typical techniques for inpainting are completely determined by the penalization term used in the optimization scheme. These methods range from minimizing over total variation to finding a sparsest solution in a given basis to minimizing the Ginzburg-Landau energy. In this paper, we propose a novel approach to inpainting of remote sensing images, which uses previous measurements taken from heterogeneous image soures in conjunction with these well studied penalization methods. These previous measurements could be images with different illumination or weather conditions, images with spatio-temporal changes, or even all together different imaging modalities. Our approach utilizes manifold learning techniques such as diffusion maps or Laplacian eigenmaps that are applied to each image. This is followed by learning a rotation between the two feature spaces in an effort to place data points from both images in a common feature space. Then, we apply a novel preimage algorithm to the fused data in conjunction with an inpainting penalization method to recreate the missing pixels.
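The penalization methods listed above all fill missing pixels by propagating information from known ones. The simplest instance is harmonic inpainting (Laplace smoothing over the hole), sketched here on a toy single-channel image; the paper's contribution layers fused heterogeneous measurements on top of such a scheme:

```python
# Sketch: iterative harmonic inpainting. Each missing pixel is repeatedly
# replaced by the average of its four neighbors until the hole smooths in.

def inpaint(img, mask, iters=500):
    # img: 2-D list of floats; mask[i][j] True where the pixel is missing.
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for _ in range(iters):
        for i in range(1, h - 1):
            for j in range(1, w - 1):
                if mask[i][j]:
                    out[i][j] = (out[i - 1][j] + out[i + 1][j] +
                                 out[i][j - 1] + out[i][j + 1]) / 4.0
    return out

# A smooth horizontal ramp with one "missing" pixel in the middle:
img = [[float(j) for j in range(5)] for _ in range(5)]
mask = [[False] * 5 for _ in range(5)]
mask[2][2] = True
img[2][2] = 0.0                       # missing pixel zeroed out
print(inpaint(img, mask)[2][2])       # recovered close to the true value 2.0
```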
The study was conducted in an oil producing environment dominated by mangrove and swamp vegetation in Niger
Delta, Nigeria. Ancillary data including oil pipeline map and GPS of spill points were used in selecting sample sites to
identify and detect polluted locations. A number of polluted and non-polluted sites were selected and vegetation spectral
reflectance and indices for these sample sites were extracted from TM data of January and December 1986. A statistical
T-test was used to test for significant differences between vegetation spectral reflectance and indices from polluted and
non-polluted sites. The initial results from the analysis of spectral reflectance between polluted and non-polluted sites did not show a significant difference in any of the six spectral bands at the p < 0.005 level. In the analysis of the various vegetation indices, some (e.g., SRI, SAVI and EVI2) did not show significant differences between the polluted and non-polluted sites, while others (NDVI, MSAVI2 and ARVI2) did. From these preliminary results we can conclude that pollution from oil spills may result in changes in the leaf biochemistry of the mangroves in the Niger Delta that are detectable from remote sensing data. Future work will focus on further temporal analysis of additional spill sites to determine what quantity of spilt oil gives rise to spectral changes in vegetation.
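The two-sample comparison used here can be sketched as a Welch's t statistic; the reflectance-index values below are hypothetical, and in practice a library routine such as scipy.stats.ttest_ind would be used and would also supply the p-value:

```python
# Welch's t statistic for two independent samples with unequal variances.
import math

def welch_t(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

polluted     = [0.31, 0.29, 0.33, 0.30]   # hypothetical NDVI-like values
non_polluted = [0.52, 0.49, 0.55, 0.50]
print(abs(welch_t(polluted, non_polluted)))  # large |t| suggests significance
```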
We present a system to display nighttime imagery with natural colors using a public database of images. We initially combined two spectral bands of imagery, thermal and visible, to enhance night vision imagery; however, the fused image gave an unnatural color appearance. Therefore, a color transfer based on a look-up table (LUT) was used to replace the
false color appearance with a colormap derived from a daytime reference image obtained from a public database using
the GPS coordinates of the vehicle. Because of the computational demand in deriving the colormap from the reference
image, we created an additional local database of colormaps. Reference images from the public database were compared
to a compact local database to retrieve one of a limited number of colormaps that represented several driving
environments. Each colormap in the local database was stored with an image from which it was derived. To retrieve a
colormap, we compared the histogram of the fused image with histograms of images in the local database. The
colormap of the best match was then used for the fused image. Continuously selecting and applying colormaps using
this approach offered a convenient way to color night vision imagery.
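The retrieval step, comparing the fused image's histogram against the histograms stored with each colormap in the local database, can be sketched as follows; histogram intersection is used here as the similarity measure and all pixel values and colormap names are hypothetical:

```python
# Sketch: pick the colormap whose stored image histogram best matches the
# histogram of the fused night-vision image.

def histogram(pixels, bins=8, max_val=256):
    h = [0] * bins
    for p in pixels:
        h[p * bins // max_val] += 1
    n = len(pixels)
    return [c / n for c in h]                 # normalized histogram

def intersection(h1, h2):
    return sum(min(a, b) for a, b in zip(h1, h2))

def best_colormap(fused_pixels, database):
    # database: list of (stored_histogram, colormap_name) pairs
    h = histogram(fused_pixels)
    return max(database, key=lambda entry: intersection(h, entry[0]))[1]

urban  = histogram([30, 40, 50, 200, 210, 220])     # hypothetical entries
forest = histogram([90, 100, 110, 120, 100, 95])
db = [(urban, "urban_colormap"), (forest, "forest_colormap")]
print(best_colormap([35, 45, 55, 205, 215, 225], db))   # best match: urban
```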
Researchers at the University of Alaska Anchorage and University of Colorado Boulder have built a low-cost, high-performance, energy-efficient drop-in-place Computational Photometer (CP) to test in field applications ranging from port
security and safety monitoring to environmental compliance monitoring and surveying. The CP integrates off-the-shelf
visible spectrum cameras with near to long wavelength infrared detectors and high resolution digital snapshots in a
single device. The proof of concept combines three or more detectors into a single multichannel imaging system that can
time correlate read-out, capture, and image process all of the channels concurrently with high performance and energy
efficiency. The dual-channel continuous read-out is combined with a third high definition digital snapshot capability and
has been designed using an FPGA (Field Programmable Gate Array) to capture, decimate, down-convert, re-encode, and
transform images from two standard definition CCD (Charge Coupled Device) cameras at 30Hz. The continuous stereo
vision can be time correlated to megapixel high definition snapshots. This proof of concept has been fabricated as a four-layer PCB (Printed Circuit Board) suitable for use in education and research for low-cost, high-efficiency field
monitoring applications that need multispectral and three dimensional imaging capabilities. Initial testing is in progress
and includes field testing in ports, potential test flights in un-manned aerial systems, and future planned missions to
image harsh environments in the arctic including volcanic plumes, ice formation, and arctic marine life.
We present a camera system for instantaneous, non-destructive capture of spectral signatures for forensic analysis. Our system detects highly probative samples mixed among multiple target objects in the forensic scene by combining a coded aperture snapshot spectral imager with a multi-spectral detection algorithm. An Adaptive Cosine Estimator (ACE) is
used to quantitatively detect and classify the probative samples from the decoded spectral datacube. In this paper, we
demonstrate selected results using our system for luminescence characteristics and spectral classification of a number of
samples.
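The ACE score measures how well a pixel spectrum x matches a target signature s after whitening by the background covariance C: ACE(x) = (s'C⁻¹x)² / ((s'C⁻¹s)(x'C⁻¹x)), with values in [0, 1]. A two-band toy sketch (the signature, covariance, and pixels are hypothetical, not the paper's measurements):

```python
# Sketch: Adaptive Cosine Estimator for 2-band spectra, with the 2x2
# covariance inverted in closed form.

def ace(s, x, cov):
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]   # 2x2 inverse
    def quad(u, v):                                    # u' C^-1 v
        return sum(u[i] * inv[i][j] * v[j]
                   for i in range(2) for j in range(2))
    return quad(s, x) ** 2 / (quad(s, s) * quad(x, x))

signature = [0.8, 0.3]                          # target spectral signature
background_cov = [[0.04, 0.01], [0.01, 0.09]]
print(ace(signature, [1.6, 0.6], background_cov))   # scaled target: 1.0
print(ace(signature, [0.2, 0.9], background_cov))   # dissimilar: much lower
```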
A plane wave is scattered by a potential of bounded support, q0. Translation, rotation and reflection of the potential induce transformations of the scattered wave. The latter can be represented by means of Born sequences, where q0 appears under the integral sign: non-local formulas are thus derived, the properties of which are discussed. Next, the symmetries induced by the first Born approximation are addressed. Invariance of the squared modulus of the scattering amplitude holds for translation and reflection. The transformation $T_\varepsilon := \mathbf{1}_3 + \sum_{\ell=1}^{3} \varepsilon_\ell A_\ell$, with $\{\varepsilon_\ell\}$ real and $\{A_\ell\}$ the generators of rotations in $\mathbb{R}^3$, is investigated. Conditions on the $\{\varepsilon_\ell\}$ are derived under which the scattering amplitude coming from the first Born approximation is invariant to $T_\varepsilon$. As an application, these "false symmetries" are compared to those induced by the limited angular resolution of a detector in light scattering experiments. Namely, scattering patterns are made available by the TAOS (Two-dimensional Angle-resolved Optical Scattering) method, which consists of detecting single airborne aerosol particles and collecting the intensity of the light they scatter from a pulsed, monochromatic laser beam. The optics and the detector properties determine the resolution at which a pattern is saved. The implications for the performance of TAOS pattern analysis are briefly discussed.
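Written out explicitly, with the standard basis of rotation generators (the paper's sign conventions may differ), the transformation reads:

```latex
T_{\varepsilon} \;=\; \mathbf{1}_{3} + \sum_{\ell=1}^{3} \varepsilon_{\ell} A_{\ell},
\qquad
A_{1} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & 1 & 0 \end{pmatrix},\quad
A_{2} = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ -1 & 0 & 0 \end{pmatrix},\quad
A_{3} = \begin{pmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}
```

For small real $\{\varepsilon_\ell\}$, $T_\varepsilon$ is a first-order approximation to a rotation about the axis $(\varepsilon_1, \varepsilon_2, \varepsilon_3)$.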
In this paper, we propose a unified framework for color filter array (CFA) interpolation and visible/NIR image
combination. The proposed method aims to reconstruct a high quality image from raw CFA data and the corresponding
NIR image, similar to a multi-spectral fusion of color and NIR images. Based on the NIR image, we impose a sparse constraint on gradient differences to modify the traditional color interpolation. The experiments indicate the effectiveness
of our hybrid scheme to acquire joint color and NIR information in real-time, and show that this hybrid process can
generate a better color image when compared with treating interpolation and fusion separately.
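The baseline that the sparse NIR-guided constraint modifies is plain CFA interpolation. A minimal sketch: bilinear recovery of the green value at a non-green site of a Bayer mosaic (the 4x4 raw values are hypothetical):

```python
# Sketch: bilinear green-channel interpolation on a Bayer CFA. At a red or
# blue site, all four direct neighbors are green sites, so their average
# estimates the missing green value.

BAYER = [["R", "G"], ["G", "B"]]          # 2x2 Bayer tile, row-major

def green_at(raw, i, j):
    if BAYER[i % 2][j % 2] == "G":
        return raw[i][j]                   # green is sampled directly here
    nbrs = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
    vals = [raw[y][x] for y, x in nbrs
            if 0 <= y < len(raw) and 0 <= x < len(raw[0])]
    return sum(vals) / len(vals)

raw = [[10, 20, 12, 22],
       [18, 30, 19, 31],
       [11, 21, 13, 23],
       [17, 29, 16, 28]]
print(green_at(raw, 1, 1))   # blue site: green estimated from 4 neighbors
```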
A mobile robot operating in a netcentric environment can utilize offboard resources on the network to improve its
local perception. One such offboard resource is a world model built and maintained by other sensor systems. In this
paper we present results from research into improving the performance of Deformable Parts Model object detection
algorithms by using an offboard 3D world model. Experiments were run for detecting both people and cars in 2D
photographs taken in an urban environment. After generating candidate object detections, a 3D world model built from
airborne Light Detection and Ranging (LIDAR) and aerial photographs was used to filter out false alarms using several
types of geometric reasoning. Comparison of the baseline detection performance to the performance after false alarm
filtering showed a significant decrease in false alarms for a given probability of detection.
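One simple form such geometric reasoning can take: given the range to a candidate detection from the 3D world model and the camera focal length, reject "person" detections whose implied physical height is implausible. The flat-ground pinhole geometry and all parameter values below are hypothetical simplifications, not the paper's filters:

```python
# Sketch: false-alarm filtering by implied object height.

def implied_height(bbox_height_px, distance_m, focal_px):
    return bbox_height_px * distance_m / focal_px     # pinhole projection

def filter_detections(detections, focal_px=800.0, lo=1.2, hi=2.2):
    # detections: list of (bbox_height_px, distance_m_from_world_model);
    # keep only candidates whose implied height fits a person.
    return [d for d in detections
            if lo <= implied_height(d[0], d[1], focal_px) <= hi]

dets = [(160.0, 9.0), (160.0, 40.0)]   # same pixel size, different ranges
print(filter_detections(dets))         # the 40 m candidate implies an 8 m
                                       # tall "person" and is rejected
```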
We address the problem of fusing laser and RGB-D data from multiple robots operating in close proximity to one another. By having a team of robots working together, a large area can be scanned quickly, or a smaller area scanned in greater detail. However, a key aspect of this problem is the elimination of spurious readings caused by the robots operating in close proximity. While there is an extensive literature on the mapping and localization aspect of this problem, our problem differs from the dynamic map problem in that it involves only one kind of transient map feature, robots viewing other robots, and we know that we wish to completely eliminate all such mutual views. In prior work, we investigated the problem of fusing laser data from multiple robots in such a manner as to reject this spurious data from other robots. This work showed that a combination of local robot-based direction filtering and global map-based visibility filtering at a central map server removed 91% of the spurious data and resulted in a 98% quality improvement. In this paper we additionally consider the problem of fusing RGB-D data generated by a stereo-camera sensor. An approach based on a model of human visual attention is presented and compared with our prior work and with other related work. This approach is an order of magnitude faster than the prior work and still rejects 73% of the spurious data, producing a 55% quality improvement. Results are shown for this approach in two experiments with a two-robot team operating in a confined indoor environment (4 m x 4 m).
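The local direction-filtering idea can be sketched with plain geometry: a robot discards laser returns whose bearing and range are consistent with the known pose of a teammate before forwarding the scan to the central map server. The tolerances below are hypothetical:

```python
# Sketch: drop laser returns that likely hit a teammate robot.
import math

def filter_scan(scan, own_pose, teammates, ang_tol=0.2, range_tol=0.5):
    # scan: list of (bearing_rad, range_m) in the robot's own frame;
    # own_pose: (x, y, yaw); teammates: list of (x, y) positions.
    ox, oy, oyaw = own_pose
    kept = []
    for bearing, rng in scan:
        spurious = False
        for tx, ty in teammates:
            dx, dy = tx - ox, ty - oy
            t_bearing = math.atan2(dy, dx) - oyaw
            t_range = math.hypot(dx, dy)
            # Wrap the bearing difference into (-pi, pi] before comparing.
            diff = math.atan2(math.sin(bearing - t_bearing),
                              math.cos(bearing - t_bearing))
            if abs(diff) < ang_tol and abs(rng - t_range) < range_tol:
                spurious = True          # return likely hit the teammate
                break
        if not spurious:
            kept.append((bearing, rng))
    return kept

scan = [(0.0, 2.0), (1.0, 3.0)]          # two returns
kept = filter_scan(scan, (0.0, 0.0, 0.0), teammates=[(2.0, 0.0)])
print(kept)   # the return pointing straight at the teammate is dropped
```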
This paper discusses selected aspects of our effort to implement rudimentary emulations of brain regions and their
synaptic connectivity (a paradigm we refer to as neurobiomimetic), including in particular the Neurobiomimetic
Cognitive Architecture Foundation Framework (NCAFF) that we developed. Different instances of neurobiomimetic
emulation are possible, and we refer to those instances as animats. NCAFF addresses the overwhelming developmental
challenge of implementing animats by separating the animat-development process from the lower-level details. Approaches
such as NCAFF make it feasible to build animats which, by emulating aspects of brain structure and processes, may
constitute a particularly promising path to cognitive systems.
An integrated multi-sensor fusion framework, based on extended Kalman filters (EKF), for localization and mapping for autonomous navigation in unstructured outdoor environments is presented. The sensors for localization include an inertial measurement unit, a GPS receiver, a fiber-optic gyroscope, and wheel odometry. Additionally, a 3D LIDAR is used for simultaneous localization and mapping (SLAM). A 3D map is built while, concurrently, the pose within the 2D map established so far is estimated from the current LIDAR scan. Despite the longer run-time of the SLAM algorithm compared with the EKF update, a high update rate is still guaranteed by carefully joining and synchronizing two parallel localization estimators.
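The core predict-with-odometry, correct-with-GPS cycle can be sketched in its simplest form. This is a linear, one-dimensional special case with made-up noise values, intended only to show the structure of the filter, not the authors' multi-sensor EKF.

```python
# Minimal Kalman filter sketch: dead-reckon with odometry increments
# (predict) and blend in GPS fixes (update). 1-D linear special case.

class KF1D:
    def __init__(self, x0, p0, q, r):
        self.x, self.p = x0, p0   # state estimate and its variance
        self.q, self.r = q, r     # process (odometry) and GPS noise variances

    def predict(self, dx):
        """Dead-reckoning step: apply the odometry increment dx."""
        self.x += dx
        self.p += self.q          # uncertainty grows between fixes

    def update(self, z):
        """GPS correction: blend the fix z weighted by the Kalman gain."""
        k = self.p / (self.p + self.r)
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)       # uncertainty shrinks after a fix

kf = KF1D(x0=0.0, p0=1.0, q=0.01, r=0.25)
for dx, gps in [(1.0, 1.1), (1.0, 2.0), (1.0, 2.9)]:
    kf.predict(dx)
    kf.update(gps)
```

The two-estimator scheme the abstract describes amounts to running a fast loop like this at sensor rate while the slower LIDAR SLAM result is folded in as an additional, delayed correction.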
Future smart manufacturing systems will include more complex coordination of mobile manipulators (i.e., robot arms
mounted on mobile bases). The National Institute of Standards and Technology (NIST) conducts research on the safety
and performance of multiple collaborating robots using a mobile platform, an automatic guided vehicle (AGV) with an
onboard manipulator. Safety standards for robots and industrial vehicles each mandate their failsafe control, but there is
little overlap between the standards that can be relied on when the two systems are combined and their independent
controllers make collaborative decisions for safe movement. This paper briefly discusses previously uncovered gaps
between AGV and manipulator standards and details how decisions are shared when a manipulator and an AGV are combined
into a collaborative, mobile manipulator system. Tests using the NIST mobile manipulator with various control methods
were performed and are described along with test results and plans for further, more complex tests of implicit and
explicit coordination control of the mobile manipulator.
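The explicit-coordination idea, in which the two independent controllers must agree before either moves, can be sketched as a shared motion token. This is purely illustrative: the arbiter class, subsystem names, and lock semantics are assumptions for exposition, not NIST's control implementation or anything mandated by the standards discussed.

```python
# Illustrative "explicit" decision sharing: the manipulator may only move
# while it holds a motion token, and the AGV will not drive while the
# token is held by the arm (and vice versa).

class MotionArbiter:
    """Single motion token shared by the AGV and manipulator controllers."""
    def __init__(self):
        self.holder = None

    def request(self, subsystem):
        """Grant the token if it is free or already held by the requester."""
        if self.holder in (None, subsystem):
            self.holder = subsystem
            return True
        return False   # the other subsystem is moving: wait

    def release(self, subsystem):
        if self.holder == subsystem:
            self.holder = None

arb = MotionArbiter()
assert arb.request("agv")        # AGV starts driving
assert not arb.request("arm")    # arm must wait while the base moves
arb.release("agv")
assert arb.request("arm")        # base stopped: arm may now move
```

Implicit coordination, by contrast, would rely on each controller inferring the other's state from sensing rather than on an agreed token, which is where the standards gaps the paper identifies become safety-relevant.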