Object-based approaches to the segmentation and classification of remotely sensed images yield more promising results than pixel-based approaches. However, the development of an object-based approach presents challenges in terms of algorithm selection and parameter tuning. Subjective methods are often used, but yield less than optimal results. Objective methods are warranted, especially for rapid deployment in time-sensitive applications, such as earthquake damage assessment. Herein, we used a systematic approach to evaluating object-based image segmentation and machine learning algorithms for the classification of earthquake damage in remotely sensed imagery. We tested a variety of algorithms and parameters on post-event aerial imagery for the 2011 earthquake in Christchurch, New Zealand. Results were compared against manually selected test cases representing different classes. In doing so, we can evaluate the effectiveness of the segmentation and classification of different classes and compare different levels of multistep image segmentation. Our classifier is compared against recent pixel-based and object-based classification studies of post-event imagery of earthquake damage. Our results show an improvement over both pixel-based and object-based methods for classifying earthquake damage in high-resolution, post-event imagery.
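To make the workflow concrete, the following is a minimal sketch of an object-based classification pipeline of the kind evaluated here, assuming a post-event RGB aerial image and a handful of manually labelled training segments. The segmentation algorithm (Felzenszwalb's graph-based method), its parameters, the spectral feature set, and all file and class names are illustrative assumptions, not the specific configuration used in the study.

```python
# Sketch: object-based segmentation followed by supervised classification.
import numpy as np
from skimage import io, segmentation
from sklearn.ensemble import RandomForestClassifier

image = io.imread("post_event_rgb.tif")            # hypothetical file name

# Step 1: object-based segmentation (graph-based Felzenszwalb method here).
segments = segmentation.felzenszwalb(image, scale=200, sigma=0.8, min_size=50)

# Step 2: per-segment spectral features (mean and std of each band).
def segment_features(img, segs):
    feats = []
    for seg_id in np.unique(segs):
        pixels = img[segs == seg_id].reshape(-1, img.shape[-1])
        feats.append(np.hstack([pixels.mean(axis=0), pixels.std(axis=0)]))
    return np.array(feats)

X = segment_features(image, segments)

# Step 3: supervised classification of segments.
# train_idx / y_train stand in for manually selected test cases
# (e.g. "collapsed", "debris", "intact"); values are placeholders.
train_idx = np.array([0, 5, 12])
y_train = np.array(["collapsed", "intact", "debris"])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X[train_idx], y_train)
segment_labels = clf.predict(X)                    # class label per segment
```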
The main goal of a cyberspace environment is to support decision makers with relevant and timely information for
operational use. Cyberspace environments depend on geospatial data including terrestrial, aerial/UAV, satellite and
other multi-sensor data obtained in electro-optical and other imaging domains. Despite advances in automated
geospatial image processing, the "human in the loop" is still necessary because current applications depend upon
complex algorithms and adequate classification rules that can only be provided by skilled geospatial professionals.
Signals extracted from humans may become an element of a cyberspace system. This paper describes research experiments on integrating an EEG device within geospatial technology.
Laser communication systems operate in the presence of strong atmospheric turbulence, which affects the communication platform by broadening the laser footprint, causing random jitter of the laser beam, and producing high-spatial-frequency intensity fluctuations referred to as scintillation. Predicting the effects induced by atmospheric turbulence is a crucial task for reliable data transmission. Equipping the lasercom platform with an adaptive optics system capable of probing the atmospheric turbulence and generating data on wavefront errors in real time improves performance and extends the range of optical communication systems. Most adaptive optics systems implement wavefront sensors to measure the errors induced by atmospheric turbulence. Real-time analysis of the data received from the wavefront sensor, used to compensate the outgoing laser beam, significantly improves lasercom performance. To obtain reliable data, the wavefront sensor needs to be accurately aligned and calibrated. To model the performance of a laser communication system operating in the real world, we have developed an outdoor 3.2 km, partially over-water, turbulence measurement and monitoring communication link. The developed techniques of wavefront sensor alignment and calibration that led to successful data collection and analysis are discussed in this paper.
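As an illustration of the turbulence statistics monitored on such a link, the sketch below estimates the scintillation index from received-intensity samples. The data source, sampling scheme, and synthetic log-normal fading model are assumptions; the formula sigma_I^2 = <I^2>/<I>^2 - 1 is the standard normalized intensity variance.

```python
# Sketch: scintillation index of a received optical signal.
import numpy as np

def scintillation_index(intensity):
    """Normalized intensity variance: var(I) / mean(I)^2."""
    intensity = np.asarray(intensity, dtype=float)
    return intensity.var() / intensity.mean() ** 2

# Example with synthetic log-normal fading, a common weak-turbulence model.
rng = np.random.default_rng(0)
samples = rng.lognormal(mean=0.0, sigma=0.3, size=100_000)
print(f"sigma_I^2 = {scintillation_index(samples):.3f}")
```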
KEYWORDS: Cameras, 3D modeling, Control systems, Unmanned aerial vehicles, Calibration, Data modeling, 3D image processing, Sensors, Algorithm development, Surveillance
This paper outlines research experiments performed on quantitative evaluation of 3D geospatial data obtained by means of the Photogrammetric Small UAV (PSUAV) developed at Michigan Tech. The PSUAV platform is equipped with an autopilot and is capable of accommodating a payload of up to 11 pounds. Experiments were performed deploying a 12 MP Canon EOS Rebel camera, which was the subject of calibration procedures. Surveying-grade GPS equipment was used to prepare ground calibration sites. Work on processing the obtained datasets encompasses sensor modeling, single-photo resections with image co-registration, mosaicking, and finally 3D terrain model generation. One of the most important results achieved at the current stage of PSUAV development is a method and algorithms for comparing UAV-derived DEMs with models obtained from other geospatial sources.
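A minimal sketch of the DEM-comparison step is shown below, assuming the UAV-derived DEM and a reference DEM have already been co-registered to the same grid; the array names and the particular error metrics reported are illustrative rather than the authors' exact method.

```python
# Sketch: elevation-difference statistics between two co-registered DEMs.
import numpy as np

def dem_difference_stats(dem_uav, dem_ref):
    diff = np.asarray(dem_uav, float) - np.asarray(dem_ref, float)
    diff = diff[np.isfinite(diff)]                 # drop nodata cells
    return {
        "mean_error": diff.mean(),                 # systematic bias
        "rmse": np.sqrt((diff ** 2).mean()),       # overall accuracy
        "std": diff.std(),                         # random error
        "max_abs": np.abs(diff).max(),
    }
```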
The geometrical quality of small unmanned aerial vehicle (SUAV) imagery is affected by the fact that cameras installed in SUAVs are usually not calibrated due to platform size and cost constraints. To this end, image enhancement and camera calibration processes are crucial elements of the remote sensing system architecture. In this work we present experimental research involving an SUAV platform equipped with an autopilot and able to accommodate a payload of up to 11 pounds. The platform is currently fitted with a 12 MP EOS camera, which is the subject of calibration procedures. The preliminary results presented demonstrate the feasibility of SUAV remote sensing.
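The sketch below illustrates one common way to perform the camera calibration step with OpenCV, using chessboard target photos; the board size, file locations, and the use of OpenCV itself are assumptions for illustration, not a description of the authors' setup.

```python
# Sketch: intrinsic camera calibration from chessboard images.
import glob
import cv2
import numpy as np

board = (9, 6)                                      # inner corners per row/column
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2)

obj_points, img_points, image_size = [], [], None
for path in glob.glob("calib/*.jpg"):               # hypothetical image folder
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, board)
    if found:
        obj_points.append(objp)
        img_points.append(corners)
        image_size = gray.shape[::-1]

# Intrinsic matrix and distortion coefficients for later image enhancement.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("RMS reprojection error:", rms)
```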
In this paper we describe multidisciplinary experimental research concentrated on stereoscopic presentation of geospatial
imagery data obtained from various sensors. The source data differed in scale, texture, geometry, and content, and no single image processing technique allows such data to be processed simultaneously. However, an augmented reality system allows subjects to fuse multi-sensor, multi-temporal data and terrain reality into a single model. An augmented reality experimental setup, based on a head-mounted display, was designed to efficiently superimpose LIDAR point clouds for comfortable stereoscopic perception. The practical experiment performed indicates the feasibility of stereoscopic perception of data obtained on the fly. One of the most interesting findings is that the source LIDAR point clouds do not have to be preprocessed or enhanced to be used in the experiments described.
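The sketch below illustrates why raw point clouds can be usable directly: generating left-eye and right-eye screen coordinates for each LIDAR point is a simple per-point perspective projection. The eye separation, focal length, and axis convention are assumptions; the actual rendering pipeline of the head-mounted display is not reproduced here.

```python
# Sketch: stereo-pair projection of raw LIDAR points
# (x: across, y: depth/forward, z: up).
import numpy as np

def stereo_project(points, eye_sep=0.065, focal=800.0):
    """Perspective-project points for left and right eye positions."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    views = {}
    for name, offset in (("left", -eye_sep / 2), ("right", eye_sep / 2)):
        u = focal * (x - offset) / y                # horizontal screen coordinate
        v = focal * z / y                           # vertical screen coordinate
        views[name] = np.column_stack([u, v])
    return views

# Raw points can be fed in as-is, e.g. straight from a LAS/LAZ reader.
cloud = np.random.rand(1000, 3) * [10.0, 50.0, 5.0] + [0.0, 5.0, 0.0]
pair = stereo_project(cloud)
```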
Situational awareness is a critical issue for modern battle and security systems, and improving it will increase human performance efficiency. There are multiple research and development efforts based on omni-directional (fish-eye) electro-optical and other-frequency sensor fusion systems implementing head-mounted visualization systems. However, the efficiency of these systems is limited by the perception limitations of the human eye-brain system. Humans are capable of naturally perceiving the situation in front of them, but interpreting omni-directional visual scenes increases the user's mental workload, increasing fatigue and disorientation and requiring more effort for object recognition. It is especially important to reduce this workload by making perception of rear scenes intuitive in battlefield situations, where a combatant can be attacked from both directions.
This paper describes an experimental model of the system fusion architecture of Visual Acoustic Seeing (VAS) for representing a spatial geometric 3D model in the form of 3D volumetric sound. Current research in the area of auralization points to the possibility of identifying sound direction. However, for complete spatial perception it is necessary to identify both the direction and the distance to an object through an expression of volumetric sound; we initially assume that the distance can be encoded by the sound frequency. The chain object features -> sensor -> 3D geometric model -> auralization constitutes Volumetric Acoustic Seeing (VAS). This paper describes the VAS experimental research on representing and perceiving spatial information by means of human hearing cues in more detail.
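The sketch below illustrates the encoding assumption stated above: an object's distance is mapped to tone frequency and its azimuth to a simple left/right level difference. The mapping constants, ranges, and panning law are illustrative choices, not the authors' parameters.

```python
# Sketch: encoding one object's position as a stereo tone.
import numpy as np

def object_to_stereo_tone(azimuth_deg, distance_m,
                          duration=0.5, sample_rate=44_100,
                          f_near=2000.0, f_far=200.0, max_range=50.0):
    """Return (left, right) audio samples encoding one object's position."""
    # Distance encoded by frequency: near objects sound high, far objects low.
    d = min(distance_m, max_range) / max_range
    freq = f_near + (f_far - f_near) * d
    t = np.linspace(0.0, duration, int(duration * sample_rate), endpoint=False)
    tone = np.sin(2 * np.pi * freq * t)
    # Direction encoded by simple left/right panning of the tone.
    pan = (np.sin(np.radians(azimuth_deg)) + 1.0) / 2.0   # 0 = left, 1 = right
    return tone * (1.0 - pan), tone * pan

left, right = object_to_stereo_tone(azimuth_deg=30.0, distance_m=12.0)
```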
Spatial and temporal data derived from eye movements, compiled while the human eye observes geospatial
imagery, retain meaningful and usable information. When a human perceives the stereo effect, a virtual three-dimensional (3D) model resulting from eye-brain interaction is generated in the mind. If the eye movements are recorded while the virtual model is observed, it is possible to reconstruct a 3D geometrical model almost identical to the one generated in the human brain. Information obtained from eye movements can be utilized in many ways for remote sensing applications such as geospatial image analysis and interpretation. There are various eye-tracking systems available on the market; however, none of them is designed to work with stereoscopic imagery. We explore different approaches and designs of the most suitable and non-intrusive scheme for stereoscopic image viewing in eye-tracking systems to observe and analyze 3D visual models. The design of the proposed system is based on the optical separation method, which provides a visually comfortable environment for perception of stereoscopic imagery. A proof-of-concept solution is based on a multiple mirror-lens assembly that provides a significant reduction of geometrical constraints in eye-frame capturing. Two proposed solutions, for wide-angle viewing and for a helmet-integrated eye tracker, are also discussed here.
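The geometric core of reconstructing the perceived 3D model is triangulating a fixation point from the two eyes' gaze rays. The sketch below shows this computation under the assumption that a calibrated eye tracker supplies eye positions and gaze directions in a common coordinate frame; all numeric values are hypothetical.

```python
# Sketch: 3D fixation point as the midpoint of the shortest segment
# between two gaze rays p + t*d (one ray per eye).
import numpy as np

def fixation_point(p_left, d_left, p_right, d_right):
    d_left = d_left / np.linalg.norm(d_left)
    d_right = d_right / np.linalg.norm(d_right)
    w0 = p_left - p_right
    a, b, c = d_left @ d_left, d_left @ d_right, d_right @ d_right
    d, e = d_left @ w0, d_right @ w0
    denom = a * c - b * b                      # near zero if rays are parallel
    t_l = (b * e - c * d) / denom
    t_r = (a * e - b * d) / denom
    return ((p_left + t_l * d_left) + (p_right + t_r * d_right)) / 2.0

# Example: eyes 65 mm apart, both converging on a point ~0.5 m ahead.
point = fixation_point(np.array([-0.0325, 0, 0]), np.array([0.0325, 0, 0.5]),
                       np.array([0.0325, 0, 0]), np.array([-0.0325, 0, 0.5]))
```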
Many modern technologies widely deploy semi-autonomous robotic platforms, remotely controlled by a human
operator. Such tasks usually require rapid fusion of multisensor imagery and auxiliary geospatial data.
Operational-control units in particular can be considered displays of decision-support systems, and the complexity of automated multi-domain geospatial data fusion leads to human-in-the-loop technology that widely deploys visual analytics. While a number of research studies have investigated eye movements and attention
on casual scenes, there has been a lack of investigations concerning the expert's eye movements and visual
attention, specifically when an operator is engaged in real-time visual data fusion to control and maneuver a
remote unmanned robotic vehicle which acquires visual data using CCTV cameras in visible, IR or other spectral
zones, and transmits this data through telemetric channels to a human operator. In this paper we investigate
the applicability of eye-tracking technology for the numerical assessment of efficiency of an operator in fusion of
multi-sensor and multi-geometry visual data in real-time robotic control tasks.
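One simple numerical measure of operator attention allocation, of the kind such an assessment could build on, is dwell time per area of interest (AOI) computed from the gaze-sample stream. The sketch below assumes a 60 Hz eye tracker and rectangular AOIs over the operator display; the AOI names and sample points are hypothetical.

```python
# Sketch: dwell time per area of interest from a gaze-sample stream.
from collections import defaultdict

def dwell_time_per_aoi(gaze_samples, aois, sample_dt=1.0 / 60.0):
    """gaze_samples: iterable of (x, y); aois: {name: (xmin, ymin, xmax, ymax)}."""
    dwell = defaultdict(float)
    for x, y in gaze_samples:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                dwell[name] += sample_dt
    return dict(dwell)

aois = {"uav_video": (0, 0, 640, 480), "map_overlay": (640, 0, 1280, 480)}
samples = [(100, 200), (700, 300), (650, 100)]      # hypothetical gaze points
print(dwell_time_per_aoi(samples, aois))
```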
KEYWORDS: Photodetectors, Sensors, Medical imaging, Electrons, Signal detection, Optical amplifiers, Optical instrument design, Solid state photomultipliers, Semiconductors, Medical imaging applications
We describe a new type of sensitive semiconductor photodetector that could be used in medical imaging applications.
The photodetector, based on the mechanism of discrete amplification, has performance parameters comparable to, and
for some applications exceeding, those of the vacuum photomultiplier tubes. High amplification gain achieved at very
low levels of excess noise is accompanied by the fast speed and high dynamic range of the photodetector. A comparison of the technology with classic Geiger-mode APD arrays is also performed.
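For readers comparing amplifying photodetectors, the sketch below computes the standard excess noise factor F = <G^2>/<G>^2 from a sample of per-event gains; the synthetic gain distribution is illustrative only and does not represent measured device data.

```python
# Sketch: excess noise factor of an amplifying photodetector.
import numpy as np

def excess_noise_factor(gains):
    """Excess noise factor from a sample of single-photoelectron gains."""
    g = np.asarray(gains, dtype=float)
    return (g ** 2).mean() / g.mean() ** 2

rng = np.random.default_rng(1)
gains = rng.normal(loc=1e6, scale=5e4, size=50_000)   # narrow gain spread -> F close to 1
print(f"F = {excess_noise_factor(gains):.4f}")
```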
Topographic maps, both paper and computerized, require specific qualifications in the end users - planners,
designers, geodesists, etc. These maps offer the user some simplified information about reality, described in
accordance with a meaningful set of cartographic conventions. Overlaid with an orthophoto, such cartographic
information becomes a photomap, representing the surface in a more realistic way. A non-expert, used to perceiving
environmental reality through landscape images taken from the Earth's surface, faces certain difficulties interpreting
images collected from an aircraft or satellite. In fact, modern technologies do not provide the mass user with full-value visual information about the real environment.
The mass user is generally not concerned with using maps for measurements, but rather uses them to search for semantic information. Thus, a new, mass-user-oriented branch of GIS should be based on a new concept - geo-information reality, i.e., mass-user-oriented modeling of the environment. The key to this concept is an object-graphic
basis for GIS, bringing to bear modern methods of acquiring, storing, and representing visual and textual
information in digital form. This paper presents the proposed concept in detail.
KEYWORDS: 3D image processing, Image processing, Image fusion, Visualization, 3D modeling, Feature extraction, Geographic information systems, Computer simulations, 3D visualizations, Algorithm development
New ideas and solutions never come alone. Although automated feature extraction is not sufficiently mature to move from the realm of scientific investigation into the category of production technology, a new goal has arisen: 3D simulation of real-world objects extracted from images. This task, which evolved from feature extraction and is not an easy task itself, becomes even more complex, multi-leveled, and often uncertain and fuzzy when one exploits time-sequenced, multi-source remotely sensed visual data. The basic components of the process are familiar image processing tasks: fusion of various types of imagery, automatic recognition of objects, removing those objects from the source images, and replacing them in the images with realistic renderings of their simulated "twin" objects. This paper discusses how to aggregate the most appropriate approach to each task into one technological process in order to develop a Manipulator for Visual Simulation of 3D objects (ManVIS) that is independent of imagery, format, and media. The technology could be made general by combining a number of competent special-purpose algorithms under appropriate contextual, geometric, spatial, and temporal constraints derived from a-priori knowledge. This could be achieved by planning the simulation in an Open Structure Simulation Strategy Manager (O3SM), a distinct component of ManVIS that builds the simulation strategy before actual image manipulation begins.
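Two of the building blocks named above, removing a recognized object from a source image and compositing a rendered "twin" back in, can be sketched with standard tools as shown below. The use of OpenCV inpainting and alpha blending, along with all file names, are assumptions for illustration; the recognition and rendering steps are outside this sketch.

```python
# Sketch: object removal by inpainting, then compositing a simulated twin.
import cv2
import numpy as np

scene = cv2.imread("source_scene.png")                       # hypothetical source image
mask = cv2.imread("object_mask.png", cv2.IMREAD_GRAYSCALE)   # 255 where object is

# Remove the object: fill the masked region from surrounding pixels.
background = cv2.inpaint(scene, mask, inpaintRadius=5, flags=cv2.INPAINT_TELEA)

# Re-insert a simulated twin rendered with an alpha channel (BGRA, same size as scene).
twin = cv2.imread("twin_render.png", cv2.IMREAD_UNCHANGED)
alpha = twin[:, :, 3:4].astype(float) / 255.0
composite = (alpha * twin[:, :, :3] + (1.0 - alpha) * background).astype(np.uint8)
cv2.imwrite("simulated_scene.png", composite)
```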
KEYWORDS: Navigation systems, Geographic information systems, Sensors, Unmanned aerial vehicles, Global Positioning System, Commercial off the shelf technology, Image segmentation, Tin, Image processing, Data processing
Catastrophe-theory-based Autonomous Terrain-Feature UAV Relative (CATFUR) navigation is geolocation without the Global Positioning System (GPS). As fully autonomous navigation based only on recognition of terrain features, it can be integrated with GPS or other state-of-the-art navigation systems, or can be independent. CATFUR navigation is based on integration/comparison and sensor fusion of DEM (digital elevation model) 3-D data, processed by commercial off-the-shelf geographic information system (COTS GIS) environments into a vectorial graph. CATFUR obtains data from the vertical takeoff unmanned air vehicle (VTUAV) COTS inertial and visual sensors, and from components of an azimuth-elevation local positioning system (LPS). Real-time data processing could be performed on highly parallel 2 in. x 3 in. application-specific hardware. Typical point or line catastrophic singularities on surfaces are edges, ridges, wrinkles, and surface cracks. Such singularities have a fixed location on the surface. In contrast, catastrophes have the unexpected property of not being fixed to a surface. Catastrophes can be the basis of GPS-independent relative navigation, based only on the existence of folded terrain, even without landmarks. Since mountains do not move, we can use mountain guidance, much as star guidance has been used for centuries to navigate the oceans.
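As a rough stand-in for the terrain-singularity extraction described above, the sketch below flags ridge-like DEM cells using a simple curvature test on the local Hessian; the threshold and the curvature criterion are illustrative assumptions rather than the catastrophe-theory formulation itself.

```python
# Sketch: flag ridge-like cells of a DEM via the smaller Hessian eigenvalue.
import numpy as np

def ridge_mask(dem, threshold=-0.5):
    """Boolean mask of DEM cells with strongly negative cross-ridge curvature."""
    zy, zx = np.gradient(dem.astype(float))        # first derivatives (rows, cols)
    zyy, zyx = np.gradient(zy)
    zxy, zxx = np.gradient(zx)
    trace = zxx + zyy
    det = zxx * zyy - zxy * zyx
    # Smaller eigenvalue of the 2x2 Hessian; strongly negative values mark ridges.
    lam_min = trace / 2.0 - np.sqrt(np.maximum((trace / 2.0) ** 2 - det, 0.0))
    return lam_min < threshold
```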
The tremendous increase in satellite imagery resolution and accuracy has made it reasonable to exploit such imagery in GIS-based command and control, fusing data streams from UAV/UGV and satellite sensors for C4ISR systems. To fuse imagery from satellite and tactical sensors in real time, the authors apply probabilistic analysis, polyquadrically interpolating evidential reasoning about non-numeric tactical factors and superimposing them on Digital Terrain Models and satellite imagery. Doing this in real time for C4ISR requires highly parallel computation, implemented on Video/Imagery Super-Parallel and Supercomputer-Path (VISP) hardware. Another approach is integrated view analysis for purposes of command decision making missions, making use of GIS environments, Dynamic Time Multiplexed Holographic (DTMH) screens, and 3-D projection for precise visualization of accurate GIS-based imagery. This technology incorporates non-numeric real-time information into the command and control processes with simultaneous integrated 3-D views. Theory and experiment both indicate that the methods and algorithms described here are feasible.
Presently, there are many technological and industrial efforts to develop virtual flight simulators, usually based on networked technologies. In order to solve the problems of real-time availability and realistic simulator quality, source data images and digital terrain models (DTM) should have a generalized structure, which supposes different imagery resolutions and different amounts of detail at each level of 3D simulation. One of the central problems is geotruthing of satellite imagery with realistic accuracy requirements with respect to the DTM. Traditionally, such geotruthing is achieved by means of ground control point measurements. This process is labor intensive and requires special photogrammetric operator skills. To avoid this process, an algorithm for recognizing terrain and image model singularities based on Catastrophe theory is investigated in this paper. This approach does not require training but operates by direct comparison of the analytical manifolds from the DTM with those actually extracted from the image. The technology described in this paper, the Catastrophe Approach, and the algorithms of satellite imagery treatment may be implemented in multi-level image pyramid flight simulators. Theoretical approaches and practical realization indicate that the Catastrophe Approach is easy to use for a final customer and can be implemented on-line in networked flight simulators.