This PDF file contains the front matter associated with SPIE Proceedings Volume 8399, including the Title Page, Copyright information, Table of Contents, and the Conference Committee listing.
Many applications in computer vision require image registration at subpixel accuracy; this is especially true of super-resolution (SR), where the estimated subpixel shifts are crucial to the reconstruction and restoration of SR images. In our work, an optical sensor mounted on an unmanned airborne vehicle (UAV) captures a set of images with sufficient overlap to reconstruct an SR image. Due to wind, the UAV may undergo rotational motions such as yaw, pitch, and roll, which can corrupt both the acquired and the processed images with shear, tilt, or perspective distortions. In this paper we propose a hybrid algorithm that registers these UAV images to subpixel accuracy before feeding them into an SR reconstruction step. Our algorithm
consists of two steps. The first step uses scale invariant feature transform (SIFT) to correct the distorted images.
Because the resultant images are still not registered to subpixel precision, the second step registers the images
using a fast Fourier transform (FFT) based method that is both efficient and robust to moderate noise and lens
optical blur. Our FFT based method reduces the dimensionality of the Fourier matrix of the cross correlation
and uses a forward and backward search in order to obtain an accurate estimation of the subpixel shifts. We
discuss the relation between the dimensionality reduction factors and the image shifts as well as propose criteria
that can be used to optimally select these factors. Finally, we compare the results of our approach to other
subpixel techniques in terms of their efficiency and computational speed.
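As background to the FFT-based step, the sketch below shows plain phase correlation with parabolic subpixel refinement in Python/NumPy. It illustrates the general idea only, not the authors' dimensionality-reduced forward-backward search; the function name and the refinement scheme are our own.

```python
import numpy as np

def phase_correlation_shift(ref, img):
    """Estimate the (row, col) shift of `img` relative to `ref` from the
    normalized cross-power spectrum, with parabolic subpixel refinement."""
    cross = np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)
    cross /= np.abs(cross) + 1e-12                 # keep only the phase
    corr = np.real(np.fft.ifft2(cross))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    shift = []
    for axis, n in enumerate(corr.shape):
        idx = list(peak)
        c0 = corr[tuple(idx)]
        idx[axis] = (peak[axis] - 1) % n
        cm = corr[tuple(idx)]
        idx[axis] = (peak[axis] + 1) % n
        cp = corr[tuple(idx)]
        denom = cm - 2.0 * c0 + cp                 # fit a parabola to the peak
        offset = 0.0 if denom == 0 else 0.5 * (cm - cp) / denom
        s = peak[axis] + offset
        shift.append(s - n if s > n / 2 else s)    # wrap to a signed shift
    return tuple(shift)
```

For a pure circular shift the correlation surface is a delta function and the estimate is exact; for real images the parabolic fit supplies the fractional part of the shift.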
Multi-frame super-resolution of images containing complex motion fields remains an elusive target, requiring precise estimation of the motion between frames. Although the accuracy of optical flow algorithms has been increasing steadily, it is not yet sufficient to provide confident subpixel resolution enhancement during super-resolution reconstruction. One way to approach the problem is to augment the optical flow input to the super-resolution procedure with a reliable confidence measure. Numerous optical flow confidence (or, conversely, uncertainty) measures have been developed recently. However, their performance has been studied and evaluated with respect to error metrics involving the optical flow vectors themselves, namely angular and endpoint errors averaged over the entire image. Super-resolution processing instead requires consideration of the interpolation error, i.e., the difference between the reference image and the motion-compensated image. This work presents a comparative evaluation of optical flow confidence measures and draws conclusions about their usefulness for applications that rely on motion compensation.
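The distinction between the two notions of error can be made concrete. The sketch below (our own illustration in Python/NumPy, not code from the study) contrasts the flow-vector endpoint error with the interpolation error that matters for super-resolution:

```python
import numpy as np

def endpoint_error(flow_est, flow_gt):
    """Mean Euclidean distance between estimated and ground-truth flow
    vectors (H x W x 2 arrays) -- the usual optical flow benchmark metric."""
    return float(np.mean(np.linalg.norm(flow_est - flow_gt, axis=-1)))

def interpolation_error(reference, warped):
    """RMS intensity difference between the reference frame and the
    motion-compensated frame -- the quantity super-resolution cares about."""
    diff = reference.astype(float) - warped.astype(float)
    return float(np.sqrt(np.mean(diff ** 2)))
```

A flow field can have low endpoint error yet still warp texture badly where the confidence is low, which is why the two metrics can rank confidence measures differently.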
Blur estimation is an important technique for super-resolution, image restoration, turbulence mitigation, deblurring, and autofocus. Low-cost methods have been proposed for blur estimation, but they can exhibit large stochastic errors when computed close to the edge location and biased estimates at other locations. In this paper, we define an efficient, accurate, and precise estimate that can be computed at the edge location from the first-order derivative. Our method is benchmarked against the previous state of the art; the results show that the proposed method is fast, unbiased, and has low stochastic error.
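The paper's estimator is not reproduced here, but the underlying idea, that the first derivative at a blurred step edge encodes the blur width, can be sketched under a Gaussian blur assumption (the function and its tolerance are our own illustration):

```python
import numpy as np

def edge_blur_sigma(profile):
    """Estimate a Gaussian blur sigma from a 1-D profile across a blurred
    step edge. For a step of amplitude A convolved with a Gaussian of
    standard deviation sigma, the peak of the first derivative equals
    A / (sigma * sqrt(2 * pi)), so sigma can be read off that peak."""
    grad = np.gradient(profile.astype(float))
    amplitude = profile.max() - profile.min()
    gmax = np.abs(grad).max()
    return amplitude / (np.sqrt(2 * np.pi) * gmax)
```

This first-order-derivative reading is cheap but, as the abstract notes for low-cost methods generally, finite differencing and noise bias the peak value, which is what a careful estimator must correct.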
We look at the design of projective measurements based upon image priors. Assuming that image patches from natural imagery can be modeled as a low-rank manifold, we develop an optimality criterion for a measurement matrix based upon separating the canonical elements of the manifold prior. Any sparse image reconstruction algorithm performs better with the developed measurement matrix than with random projections. Some insights into the empirical estimation of the image-patch manifold are developed and several results are presented.
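As a simple stand-in for the manifold-based design (the sketch below uses a PCA/low-rank prior rather than the paper's criterion, and all names are illustrative), one can compare data-adapted measurement rows against random projections by the signal energy they capture:

```python
import numpy as np

def pca_measurement_matrix(patches, k):
    """Top-k principal directions of a patch set, used as measurement rows.
    An illustrative low-rank-prior stand-in for prior-driven design."""
    X = patches - patches.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:k]                        # k x d, orthonormal rows

def captured_energy(patches, P):
    """Energy of the (centered) patches preserved by projection matrix P."""
    X = patches - patches.mean(axis=0)
    return float(np.sum((X @ P.T) ** 2))
```

By construction, the top-k principal directions capture at least as much patch energy as any k random orthonormal projections, which is the intuition behind designing measurements from an image prior rather than drawing them at random.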
Super-resolution (SR) refers to reconstructing a single high resolution (HR) image from a set of subsampled,
blurred and noisy low resolution (LR) images. The reconstructed image suffers from degradations such as
blur, aliasing, photo-detector noise, and registration and fusion errors. A Wiener filter can be used to remove artifacts and enhance the visual quality of the reconstructed images. In this paper, we introduce a new fast stochastic Wiener filter for SR reconstruction and restoration that can be implemented efficiently in the frequency domain. Our derivation depends on the continuous-discrete-continuous (CDC) model, which represents most of the degradations encountered during the image-gathering and image-display processes. We incorporate a new parameter that accounts for registration and fusion errors among the LR images, and we speed up the filter by constraining it to work on small patches of the images. In addition, we introduce two figures of merit, information rate and maximum realizable fidelity, which can be used to assess the visual quality of the resultant images. Simulations and experimental results demonstrate that the derived filter reduces aliasing, blurring, and noise and yields a sharper reconstructed image. Quantitative assessment using the proposed figures of merit coincides with the qualitative visual assessment. Finally, we evaluate our filter against other SR techniques; its results are very competitive.
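The stochastic CDC-model filter itself is beyond a short sketch, but the frequency-domain Wiener mechanics it builds on can be illustrated. The version below assumes a scalar noise-to-signal ratio and circular (FFT) blur, which is our own simplification, not the paper's derivation:

```python
import numpy as np

def wiener_deconvolve(degraded, psf, nsr):
    """Frequency-domain Wiener filter W = H* / (|H|^2 + NSR), where `psf`
    is the blur kernel as a full-size array centered at index [0, 0]
    (circular convolution) and `nsr` a scalar noise-to-signal power ratio."""
    H = np.fft.fft2(psf)
    G = np.fft.fft2(degraded)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(W * G))
```

Because the filter is a pure per-frequency multiplication, it costs only two FFTs and one inverse FFT, which is what makes patch-wise frequency-domain implementations fast.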
An unsupervised adaptive segmentation technique based on an active contour model for automatic object region and boundary extraction in video sequences is proposed. The active contour model identifies each region using certain region descriptors that guide the motion of the initial contour towards the actual region of interest. The region descriptors depend on the statistical information inside and outside the initial contour, which controls the growth or contraction of the contour. A selective binary and Gaussian filtering regularized level set smooths the level-set function in order to maintain the level of expansion of the active contour. The adaptivity of the proposed method comes from automatic selection of an optimal Gaussian filter parameter, which depends on the characteristics of the input video frames. More precisely, the algorithm utilizes the statistical information of each video frame to identify the optimal value of the Gaussian parameter, which should be large enough to cover most of the level-set function, but not so large that it overlaps multiple neighbouring objects. The optimal value of the Gaussian parameter affects the segmentation performance over the entire video sequence. The performance of the proposed method was tested and evaluated on the Weizmann and KTH databases, which contain object regions under different illuminations and scenarios. The proposed method was also compared to several state-of-the-art image segmentation methods and showed improved performance in terms of accuracy and efficiency. The study demonstrates the robustness of the method to non-uniform illumination and background noise.
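The statistics-driven region descriptor can be sketched with Chan-Vese-style inside/outside means (a simplified illustration of the force that expands or contracts the contour, not the full adaptive level-set scheme):

```python
import numpy as np

def region_force(image, mask):
    """Chan-Vese-style region descriptor: each pixel is scored by how much
    better it fits the mean intensity inside the contour (c1) than the mean
    outside it (c2). `mask` is 1 inside the current contour, 0 outside."""
    c1 = image[mask > 0].mean()     # statistics inside the contour
    c2 = image[mask == 0].mean()    # statistics outside the contour
    # positive where the pixel fits "inside" better -> contour grows there
    return (image - c2) ** 2 - (image - c1) ** 2
```

Pixels of the object not yet covered by the contour receive a positive force (the contour grows to absorb them), while background pixels receive a negative force (the contour contracts away from them).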
Solar images taken at different wavelengths enable scientists to visualize and analyze the sun's activities. The Solar Dynamics Observatory (SDO) provides high-resolution images of the sun at varying wavelengths, with cadence measured in seconds, resulting in finely detailed, almost continuous data for researchers to examine. We propose an approach for finding active regions and coronal holes that combines shifted-means-based segmentation with voting-based edge linking to connect fragments, and Moore's neighbor tracing algorithm to highlight the regions of interest. The approach is illustrated on images taken by the AIA telescopes onboard the SDO spacecraft. We obtain a segmented image that clearly isolates the active regions. Moreover, this method is faster than the commonly used fuzzy-logic-based methods, and it can form a foundation for the analysis of various other solar features, such as the detection of prominences.
Video cameras are widely used for monitoring public areas, such as train stations, airports and shopping centers. When
crowds are dense, automatically tracking individuals becomes a challenging task. We propose a new tracker which
employs a particle filter tracking framework, where the state transition model is estimated by an optical-flow algorithm.
In this way, the state transition model directly uses the motion dynamics across the scene, rather than a traditional pre-defined dynamic model. Our results show that the proposed tracker handles a range of tracking challenges better than state-of-the-art trackers while also improving the quality of the results.
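The flow-driven state transition can be sketched as follows (names and the position-only state are illustrative; a real tracker's state typically carries more than position):

```python
import numpy as np

def propagate_particles(particles, flow, noise_std, rng):
    """State transition driven by optical flow: each particle (x, y) is
    moved by the flow vector at its current position plus Gaussian
    diffusion, instead of by a fixed constant-velocity model."""
    xi = np.clip(particles[:, 0].round().astype(int), 0, flow.shape[1] - 1)
    yi = np.clip(particles[:, 1].round().astype(int), 0, flow.shape[0] - 1)
    drift = flow[yi, xi]            # per-particle (dx, dy) from the flow field
    return particles + drift + rng.normal(0.0, noise_std, particles.shape)
```

Sampling the flow field at each particle lets the transition follow scene-specific motion (e.g. lanes of pedestrian traffic) that a global dynamic model cannot represent.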
Many techniques have recently been developed for the classification of hyperspectral images (HSI), including support vector machines (SVMs), neural networks, and graph-based methods. To achieve good classification performance, a good feature representation of the HSI is essential. Many feature extraction algorithms have been developed, such as principal component analysis (PCA) and independent component analysis (ICA). Sparse coding has recently shown state-of-the-art performance in many applications, including image classification. In this paper, we present a feature extraction method for HSI data motivated by a recently developed sparse-coding-based image representation technique. Sparse coding consists of a dictionary learning step and an encoding step. In the learning step, we compared two different methods, L1-penalized sparse coding and random selection, for dictionary learning. In the encoding step, we utilized a soft-threshold activation function to obtain feature representations for the HSI. We applied the proposed algorithm to an HSI dataset collected at the Kennedy Space Center (KSC) and compared our results with those obtained by a recently proposed method, the supervised locally linear embedding weighted k-nearest-neighbor (SLLE-WkNN) classifier. We achieved better performance on this dataset in terms of overall accuracy with a random dictionary. We conclude that this simple feature extraction framework might lead to more efficient HSI classification systems.
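The encoding step can be sketched as follows. The split into positive and negative parts is one common convention in soft-threshold sparse-coding encoders; the threshold `alpha` and the names are illustrative, not the paper's exact formulation:

```python
import numpy as np

def soft_threshold_encode(X, D, alpha):
    """Encoding step: project samples onto the dictionary atoms and apply
    a soft-threshold activation, keeping positive and negative responses
    as separate feature channels.
    X: n_samples x d spectra, D: n_atoms x d dictionary rows."""
    Z = X @ D.T                                    # n_samples x n_atoms
    return np.concatenate([np.maximum(0.0, Z - alpha),
                           np.maximum(0.0, -Z - alpha)], axis=1)
```

The threshold zeroes out weak atom responses, so each pixel is described by only its few strongest dictionary correlations, regardless of whether the dictionary was learned by L1-penalized sparse coding or filled by random selection.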
An OpenCL implementation of the Active Contours Without Edges algorithm is presented. The proposed algorithm uses general-purpose computing on graphics processing units (GPGPU) to accelerate the original model by parallelizing the two main steps of the segmentation process: the computation of the Signed Distance Function (SDF) and the evolution of the segmented curve. The proposed scheme for the computation of the SDF is based on the iterative construction of partial Voronoi diagrams of reduced dimension and obtains the exact Euclidean distance in time O(N/p), where N is the number of pixels and p the number of processors. On high-resolution images the segmentation algorithm runs 10 times faster than its equivalent sequential implementation. This work is being released as open-source software that, being written in OpenCL, runs on different platforms, reaching a broad number of final users, and can be applied in different areas of computer vision, such as medical imaging, tracking, and robotics. The implementation uses OpenGL to visualize the results in real time.
In general, edges are considered to be abrupt changes or discontinuities in the intensity distribution of a two-dimensional image signal. The accuracy of front-end edge detection methods impacts the eventual success of higher-level pattern analysis downstream. To generalize edge detectors designed from a simple ideal step-function model to the real distortions found in natural images, this research analyzes one-dimensional edge patterns and proposes an edge detection algorithm built from three basic edge patterns: ramp, impulse, and step. From this mathematical analysis, general rules for edge representation based on the classification of edge types into the three categories, ramp, impulse, and step (RIS), are developed to reduce detection and localization errors, in particular the "double edge" effect that is an important drawback of derivative methods.

When applying one-dimensional edge patterns to two-dimensional image processing, however, a new issue arises: the edge detector should correctly mark inflections or junctions of edges. Research on human visual perception of objects and on information theory has pointed out that a pattern lexicon of "inflection micro-patterns" carries more information than a straight line. Research on scene perception likewise suggests that contours carrying more information are a more important factor in the success of scene categorization. Inflections and junctions are therefore extremely useful features, whose accurate description and reconstruction are significant for solving correspondence problems in computer vision. Consequently, in addition to the edge pattern analysis, inflection and junction characterization is used to extend the traditional derivative edge detection algorithm. Experiments were conducted to test these propositions, and the results support the idea that the proposed improvements enhance the accuracy of edge detection and localization.
The road network is one of the most important types of information in the Geographic Information System (GIS).
However, automatic extraction of roads is still considered a challenging problem. In this paper, we focus on robust
extraction of main roads. In the proposed algorithm, we first determine the roadness of each pixel using the eigenvalues
of its Hessian matrix. The roadness represents the degree to which a pixel belongs to a road, and it is determined on a multi-scale basis so that it is robust to various road widths. We then perform directional grouping on the initial road map and remove outliers in each group via directional morphological filtering. Finally, we determine the roads by combining the results from all groups. Experimental results show that the proposed algorithm can automatically extract most main roads in various remote sensing images.
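A single-scale version of the Hessian-eigenvalue roadness can be sketched as follows. The paper's measure is multi-scale and presumably includes smoothing; this minimal NumPy version only scores bright ridge (line-like) pixels, where one Hessian eigenvalue is strongly negative and the other near zero:

```python
import numpy as np

def roadness(image):
    """Single-scale ridge score from the eigenvalues of the Hessian.
    A bright line on a dark background has one large negative eigenvalue
    across the line and one near-zero eigenvalue along it."""
    Iy, Ix = np.gradient(image.astype(float))
    Iyy, Iyx = np.gradient(Iy)
    Ixy, Ixx = np.gradient(Ix)
    # closed-form eigenvalues of the symmetric 2x2 Hessian per pixel
    tr = Ixx + Iyy
    det = Ixx * Iyy - Ixy ** 2
    disc = np.sqrt(np.maximum((tr / 2.0) ** 2 - det, 0.0))
    lam_small = tr / 2.0 - disc          # the more-negative eigenvalue
    return np.maximum(0.0, -lam_small)   # high only on bright ridges
```

Evaluating this score over several smoothing scales and taking the maximum is the usual way to make such a measure robust to varying road widths.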
In this paper, we present a novel high dynamic range (HDR) imaging method using a single input image. Conventional
multiple-image-based HDR methods are successful only on the condition that there is no motion in the scene during the acquisition of the multiple, differently exposed low dynamic range (LDR) images. If this constraint is not satisfied, ghost artifacts appear in the resulting HDR image. To overcome this limitation, we generate multiple, differently exposed LDR images from a single input image; we call these a set of layered exposed (LE) images. To generate an appropriate set of LE images, the proposed method divides the input image into nine subregions and computes the local mean in each subregion to estimate the minimum and maximum local means, which then become the ranges for histogram equalization (HE). The HDR image is generated by fusing the differently exposed LE images. More specifically, given a set of LE images, we perform weighted fusion to produce the resulting HDR image, which is inherently free from ghost artifacts since all LE images are geometrically identical. Experimental results show that the proposed method outperforms existing algorithms in terms of both ghost artifact removal and contrast enhancement.
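A toy version of the generate-then-fuse pipeline can illustrate why the result is ghost-free. Here gamma curves replace the paper's HE-range mechanism and the well-exposedness weight is our own choice, so this is an analogy rather than the proposed method:

```python
import numpy as np

def layered_exposures(image, gammas=(0.5, 1.0, 2.0)):
    """Generate pseudo-exposures from one normalized ([0, 1]) image.
    Gamma curves stand in for the paper's histogram-equalization ranges."""
    return [image ** g for g in gammas]

def fuse(les, sigma=0.2):
    """Weighted fusion favoring well-exposed (mid-gray) pixels. Ghost-free
    by construction: every layer comes from the same geometry."""
    ws = [np.exp(-((le - 0.5) ** 2) / (2 * sigma ** 2)) + 1e-8 for le in les]
    wsum = np.sum(ws, axis=0)
    return np.sum([w * le for w, le in zip(ws, les)], axis=0) / wsum
```

Dark regions borrow from the brightened layer and highlights from the darkened layer, compressing both ends toward the displayable range without any cross-frame alignment step.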
Recent infrared (IR) sensors are mostly based on a focal-plane array (FPA) structure. However, IR images suffer from fixed pattern noise (FPN) due to the non-uniform response of the FPA. Various nonuniformity correction (NUC) techniques have been developed to alleviate the FPN; they can be categorized into reference-based and scene-based approaches. To deal with temporal drift, however, a scene-based approach is needed. Among scene-based algorithms, conventional methods compensate only for the offset non-uniformity of the IR camera detectors, based on global motion information. Local motions in a video, however, can introduce inaccurate motion information for NUC. Considering global and local motions simultaneously, we propose an algorithm that corrects both gain and offset. Experimental results using simulated and real IR videos show that the proposed algorithm improves FPN reduction.
The recently introduced non-local means (NLM) denoising technique broke the traditional paradigm according to which image pixels are processed using only their immediate surroundings. NLM was demonstrated to outperform state-of-the-art denoising techniques when applied to visible-band images, and it is even more powerful when applied to low-contrast images, which makes it attractive for denoising infrared (IR) images. In this work we investigate the performance of NLM applied to IR images. The main drawback of NLM is the large computational time required by the search for similar patches, and several techniques have been developed over the last years to reduce this burden. We present a new technique designed to reduce the computational cost while sustaining near-optimal filtering results. We show that the new technique, which we call Multi-Resolution Search NLM (MRS-NLM), significantly reduces the computational cost of the filtering process, and we present a study of its performance on IR images.
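For reference, here is a direct (exhaustive-search) NLM filter for a single pixel. It is this O(search window squared) inner loop, run for every pixel, that motivates accelerations such as MRS-NLM; the parameter values and names are illustrative:

```python
import numpy as np

def nlm_pixel(image, y, x, half=1, search=5, h=0.1):
    """Filter one pixel by averaging over every pixel in the search window,
    weighted by the similarity of the surrounding patches (exhaustive
    search -- the costly step a multi-resolution search avoids).
    Assumes the search window and patches lie inside the image."""
    p = image[y - half:y + half + 1, x - half:x + half + 1]
    num = den = 0.0
    for j in range(y - search, y + search + 1):
        for i in range(x - search, x + search + 1):
            q = image[j - half:j + half + 1, i - half:i + half + 1]
            w = np.exp(-np.mean((p - q) ** 2) / h ** 2)   # patch similarity
            num += w * image[j, i]
            den += w
    return num / den
```

Shrinking or pruning the candidate set, e.g. by first matching patches at a coarse resolution, cuts the number of similarity evaluations while keeping the most similar patches in the average.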
When a conventional fixed smoothing factor is used to display stabilized video, large undefined black border regions (BBR) appear when the camera pans or zooms quickly. To minimize the size of the BBR while still providing a smooth display, this paper discusses several novel methods that have been demonstrated on a real-time platform: an IIR filter, a single Kalman filter, and an interacting multiple-model filter. The common principle of these methods is to adapt the smoothing factor to the motion change over time to ensure a small BBR and minimal jitter. To further remove the residual BBR, the pixels inside it are composited from previous frames: we first store the previous images and their corresponding frame-to-frame (F2F) motions in a FIFO queue and then fill the black pixels from valid pixels in the nearest-neighbor frame based on the F2F motion. If a match is found, the search stops and moves on to the next pixel; if the search is exhausted, the pixel remains black. These algorithms have been implemented and tested on a TI DM6437 processor.
This paper examines the performance of the local level set method on the surface reconstruction problem for
unorganized point clouds in three dimensions. Many laser-ranging, stereo, and structured light devices produce three
dimensional information in the form of unorganized point clouds. The point clouds are sampled from surfaces
embedded in R3 from the viewpoint of a camera focal plane or laser receiver. The reconstruction of these objects in the
form of a triangulated geometric surface is an important step in computer vision and image processing. The local level
set method uses a Hamilton-Jacobi partial differential equation to describe the motion of an implicit surface in three-space.
An initial surface which encloses the data is allowed to move until it becomes a smooth fit of the unorganized
point data. A 3D point cloud test suite was assembled from publicly available laser-scanned object databases. The test
suite exhibits nonuniform sampling rates and various noise characteristics to challenge the surface reconstruction
algorithm. Quantitative metrics are introduced to capture the accuracy and efficiency of surface reconstruction on the
degraded data. The results characterize the robustness of the level set method for surface reconstruction as applied to 3D
remote sensing.
We present an enhanced 3D reconstruction algorithm designed to support an autonomously navigated unmanned aerial
system (UAS). The algorithm presented focuses on the 3D reconstruction of a scene using only a single moving camera.
In this way, the system can be used to construct a point cloud model of its unknown surroundings. The original reconstruction process, resulting in a point cloud, was based on feature matching and depth triangulation analysis. Although dense, this original model was hindered by its low disparity resolution: as feature points were matched from frame to frame, the resolution of the input images and the discrete nature of disparities limited the depth computations within a scene. With the recent addition of nonlinear super-resolution preprocessing steps, the accuracy of the point cloud, which relies on precise disparity measurement, has increased significantly. Using a pixel-by-pixel approach, the super-resolution technique computes the phase congruency of each pixel's neighborhood and produces nonlinearly interpolated high-resolution input frames. Thus, a feature point travels through more precise discrete disparities. The quantity of points within the 3D point cloud model also increases significantly, since the number of features is directly proportional to the resolution and high-frequency content of the input image. The contribution of the newly
added preprocessing steps is measured by evaluating the density and accuracy of the reconstructed point cloud for
autonomous navigation and mapping tasks within unknown environments.
This work aims to quantify and improve the sensitivity of monocular passive ranging techniques in sequences that are degraded by the quasi-periodic inter-frame motion and localised blurring associated with atmospheric scintillation. Ranges from tens of meters up to 1 km in uncontrolled outdoor conditions were investigated. This work has applications in surveillance, border control, artillery, and the autonomous navigation of ground vehicles. A field trial with long-focal-length, large-aperture cameras was held. The targets were placed at known distances, and a scintillometer was used to measure the turbulence between the cameras and each target. A variety of depth-from-defocus and depth-from-focus algorithms were applied to the video sequences to determine the range to the targets. Increasing levels of atmospheric turbulence were observed, ranging from no turbulence to mild, moderate, and severe turbulence. Algorithms for the mitigation of atmospheric turbulence were applied to the recorded video sequences; the passive ranging techniques were then applied to the turbulence-mitigated sequences and the results compared to those of the raw sequences. Finally, findings regarding the best-performing combination of ranging and mitigation techniques under different scintillation conditions are presented.
The ability to passively reconstruct a scene in 3D provides significant benefit to Situational Awareness systems
employed in security and surveillance applications. Traditionally, passive 3D scene modelling techniques, such as Shape
from Silhouette, require images from multiple sensor viewpoints, acquired either through the motion of a single sensor or
from multiple sensors. As a result, the application of these techniques often attracts high costs, and presents numerous
practical challenges. This paper presents a 3D scene reconstruction approach based on exploiting scene shadows, which
only requires information from a single static sensor. This paper demonstrates that a large amount of 3D information
about a scene can be interpreted from shadows; shadows reveal the shape of objects as viewed from a solar perspective
and additional perspectives are gained as the sun arcs across the sky. The approach has been tested on synthetic and real
data and is shown to be capable of reconstructing 3D scene objects where traditional 3D imaging methods fail. Provided
the shadows within a scene are discernible, the proposed technique is able to reconstruct 3D objects that are
camouflaged, obscured or even outside of the sensor's Field of View. The proposed approach can be applied in a range
of applications, for example urban surveillance, checkpoint and border control, critical infrastructure protection and for
identifying concealed or suspicious objects or persons which would normally be hidden from the sensor viewpoint.
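The core carving idea can be illustrated in a simplified 2-D vertical slice: a voxel survives only if every observed sun direction projects it onto ground cells that are in shadow. The grid, ray model, and function name below are illustrative assumptions; the paper's full 3-D method is more involved.

```python
import numpy as np

def carve_with_shadows(nx, nz, shadow_masks, sun_dirs, cell=1.0):
    """Shadow carving in a 2-D vertical slice (x = ground axis, z = height).

    shadow_masks: list of boolean arrays over ground cells, True = in shadow.
    sun_dirs: matching list of (dx, dz) direction vectors with dz < 0
              (pointing from the sun toward the ground).
    A voxel is kept only if every sun direction projects it into shadow.
    """
    occupied = np.ones((nx, nz), dtype=bool)
    for mask, (dx, dz) in zip(shadow_masks, sun_dirs):
        for ix in range(nx):
            for iz in range(nz):
                x, z = (ix + 0.5) * cell, (iz + 0.5) * cell
                xg = x + z * dx / -dz          # where the sun ray hits the ground
                ig = int(xg // cell)
                if not (0 <= ig < nx and mask[ig]):
                    occupied[ix, iz] = False   # projects onto lit ground: carve
    return occupied
```

As the sun arcs across the sky, each new direction adds a mask and tightens the carved shape, which is how the extra "solar perspectives" accumulate into 3D structure.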
Terrestrial imaging over very long horizontal paths is increasingly common in surveillance and defense systems.
All optical systems that operate in or through the atmosphere suffer from turbulence induced image blur. This
paper explores the Mean-Square-Error (MSE) performance of a multi-frame-blind-deconvolution-based reconstruction
technique that uses a non-linear optimization strategy to recover an estimate of the original object. Three sets of
70 images representing low, moderate and severe turbulence degraded images were simulated from a diffraction
limited image taken with a professional digital camera. Reconstructed objects showed significant improvements in mean
squared error of 54, 22, and 14 percent for the low, moderate, and severe turbulence cases, respectively.
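The percent-improvement figures can be reproduced for any reconstruction given a ground-truth reference. The helper below shows one plausible reading of "percent improvement in mean squared error" (relative MSE reduction); the paper's exact metric may differ.

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images."""
    return float(np.mean((np.asarray(a, float) - np.asarray(b, float)) ** 2))

def mse_improvement_pct(truth, degraded, reconstructed):
    """Percent reduction in MSE achieved by the reconstruction,
    relative to the turbulence-degraded input."""
    before, after = mse(truth, degraded), mse(truth, reconstructed)
    return 100.0 * (before - after) / before
```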
In high-resolution remote sensing satellite imaging systems, image restoration is an important step to visualize
fine details and mitigate noise. The raw image data often exhibits poor imaging quality for various reasons, and the
Point Spread Function (PSF) characterizes this blurring as the system's response to a point source. Imagery from the
Korea Multi-purpose Satellite 2 (KOMPSAT-2) also requires a Modulation Transfer Function (MTF) compensation process
to achieve a more realistic image, which entails removing ringing artifacts at the edges and restraining excessive
denoising. This paper focuses on the deconvolution of KOMPSAT-2 imagery using the PSF obtained from the Korea
Aerospace Research Institute, compared with deconvolution using an estimated PSF blur kernel. The deconvolution
algorithms considered are the Richardson-Lucy, damped Richardson-Lucy, bilateral Richardson-Lucy, and sparse-prior
deconvolution algorithms.
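As a point of reference, the basic Richardson-Lucy iteration that these variants build on can be sketched in a few lines of NumPy. This is the textbook algorithm with a circular-convolution PSF, not the KOMPSAT-2 processing chain or any of the damped, bilateral, or sparse-prior variants.

```python
import numpy as np

def richardson_lucy(blurred, psf, iters=30, eps=1e-12):
    """Plain Richardson-Lucy deconvolution (circular boundary conditions).

    blurred: 2-D float image; psf: non-negative kernel (any smaller shape).
    """
    psf = psf / psf.sum()
    otf = np.fft.rfft2(psf, blurred.shape)       # zero-pad PSF to image size
    otf_flip = np.conj(otf)                      # correlation = flipped-PSF convolution

    def conv(img, kernel_ft):
        return np.fft.irfft2(np.fft.rfft2(img) * kernel_ft, blurred.shape)

    est = np.full_like(blurred, blurred.mean())  # flat initial estimate
    for _ in range(iters):
        ratio = blurred / (conv(est, otf) + eps)
        est *= conv(ratio, otf_flip)             # multiplicative RL update
    return est
```

The damped and bilateral variants modify the `ratio` term to suppress noise amplification and ringing, which is exactly the trade-off the abstract describes for KOMPSAT-2 imagery.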
Previously we introduced the concept of continuous quantification of uniqueness, a general-purpose technique
applicable to any situation in which one must decide which of several equally effective objects to choose for a task
that requires recognizing the chosen object, in a variety of contexts, by comparing attributes that contain a
non-trivial amount of context-dependent variability. We defined that uniqueness assessment as an algorithm that
computes a fuzzy set membership function, one that measures some, but not all, aspects of the probability that the
sought-after object will not be confused with other objects in the space being searched. We evaluated the usefulness
of that concept by experimentally assessing the extent to which the uniqueness of the SAD global minimum of locally
computed image subset dissimilarity predicted both bidirectional match compliance with the epipolar constraint and
bidirectional match disparity correctness, for the classical stereoscopic correspondence problem of computer vision;
in that context we found the uniqueness of the global minimum to be a useful but imperfect predictor of success. In
this paper we compare the usefulness of that uniqueness measure against several alternatives: the magnitude of the
same global minimum, the magnitude of variability across contributors to that global minimum, the uniqueness of that
variability, and the co-occurrence of the global minimum of local image subset dissimilarity with the global minimum
of variability across contributors to local image subset dissimilarity.
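For readers unfamiliar with the setup, a minimal sketch of SAD block matching with a simple uniqueness score follows. The particular score used here, the relative margin between the best and second-best matching costs, is only a crude stand-in for the fuzzy membership function the abstract describes.

```python
import numpy as np

def sad_disparity_with_uniqueness(left, right, x, y, patch=3, max_disp=8):
    """SAD block matching at pixel (x, y) of the left image, plus a
    uniqueness score in [0, 1] for the winning disparity.

    Assumes rectified images; left pixel (x, y) matches right pixel (x - d, y).
    """
    h = patch // 2
    win = left[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    costs = []
    for d in range(max_disp + 1):
        cand = right[y - h:y + h + 1, x - d - h:x - d + h + 1].astype(float)
        costs.append(np.abs(win - cand).sum())   # sum of absolute differences
    costs = np.array(costs)
    best = int(np.argmin(costs))
    second = float(np.min(np.delete(costs, best)))
    uniq = (second - costs[best]) / second if second > 0 else 0.0
    return best, uniq
```

A match whose global SAD minimum barely undercuts the runner-up gets a score near 0 and is exactly the kind of ambiguous candidate the uniqueness measure is meant to flag.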
We present an innovative computerized benchmarking approach (US patent pending, September 2011) based on extensive
application of photometry, geometrical optics, and digital media. Using a randomized target, it allows a standard
observer to assess the image quality of video imaging systems at different daytime and low-light luminance levels.
It takes into account the target's contrast and color characteristics, as well as the observer's visual acuity and
dynamic response. This approach includes human vision as part of the "extended video imaging system" (EVIS), and
allows image quality assessment by several standard observers simultaneously.
We propose a novel multiplexed holographic storage technique in a coaxial alignment based on a dually modulated spatial
light modulator (SLM). In this paper, a spatial light modulator based on a high-resolution twisted nematic liquid crystal
display is used to record both reference and object beams. We have programmed the active region of the SLM, so that
some part will work in the phase modulation mode and some part will work in the amplitude modulation mode. In our
coaxial holography design, the central ring area is reserved for amplitude modulation of the object beam while the outer
rings/annuli will cater for various phase modulation of the reference beams for multiplexed storage. A number of objects
can then be stored/encoded in the same location of the hologram and later be reconstructed using the appropriate
reference beams. Three different methods of phase-modulating the reference beam are investigated, viz. diffraction from
a blazed grating, diffusion from random speckle, and beam shaping. The coaxial multiplexed holographic data encoding
and reconstruction are carried out experimentally in a single-beam 4-f setup using a He-Ne laser with a wavelength of
632.8nm. A 2-D holographic medium is used for recording and the reconstructed images are captured by a camera on the
image plane. From the results of simulation and experiment, it can be seen that all the images are reconstructed clearly
and separately, demonstrating the feasibility of our proposed novel technique of coaxial multiplexed storage/encoding.
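The partitioning of the SLM into a central amplitude zone and phase-modulated reference annuli can be sketched as a pixel-labeling step. The radii, labels, and function name below are illustrative assumptions about the layout, not the authors' actual device programming.

```python
import numpy as np

def coaxial_slm_layout(n, r_object, ring_edges):
    """Label the pixels of an n x n SLM for coaxial dual modulation.

    0 = unused; 1 = central amplitude-modulated (object) zone;
    2, 3, ... = successive phase-modulated reference annuli.
    r_object and ring_edges are radii in pixels; ring_edges must be sorted
    and start above r_object.
    """
    yy, xx = np.mgrid[:n, :n]
    r = np.hypot(xx - (n - 1) / 2, yy - (n - 1) / 2)   # distance from center
    layout = np.zeros((n, n), dtype=int)
    layout[r <= r_object] = 1
    inner = r_object
    for k, outer in enumerate(ring_edges, start=2):
        layout[(r > inner) & (r <= outer)] = k
        inner = outer
    return layout
```

Each annulus label would then be driven with a different reference-beam phase pattern, so multiple objects recorded at the same location can be read out selectively.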
We present a spatially adaptive defogging algorithm for enhancement of color and visibility of unmanned aerial vehicle
(UAV) images. It is hard to identify an object of interest in an airborne image acquired by a satellite, an airplane, or a
UAV because of various atmospheric distortions. To overcome this problem, the proposed algorithm decomposes the
input foggy image into the original fog-free component and the atmospherically distorted component, and then estimates
the original image based on the image degradation model. We first generate a normalized image using the maximum
value among the RGB color channels of the foggy image. We then estimate the atmospheric light in the labeled image. We also
generate a modified transmission map using the labeled image and a guided filter. A major contribution of the proposed
work is the enhancement of details using a guided filter as well as defogging. We can significantly enhance the visibility
of a foggy image by using the estimated atmospheric light and the transmission map. The proposed algorithm can
remove foggy components better than existing defogging techniques because the specular component and the labeled
image are used.
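The decomposition described above rests on the standard fog degradation model I = J·t + A·(1 − t), where J is the fog-free scene, A the atmospheric light, and t the transmission map. Inverting that model is the final step; the sketch below shows the inversion only, and omits the paper's labeling and guided-filter stages.

```python
import numpy as np

def defog(image, airlight, transmission, t_min=0.1):
    """Invert the fog model I = J*t + A*(1 - t) to recover the scene J.

    image: H x W x 3 float array in [0, 1]; airlight: length-3 RGB vector;
    transmission: H x W map in (0, 1]. Clamping t at t_min avoids
    amplifying noise where the fog is densest.
    """
    t = np.clip(transmission, t_min, 1.0)[..., None]
    J = (image - airlight) / t + airlight
    return np.clip(J, 0.0, 1.0)
```

In the full algorithm, the estimated atmospheric light plays the role of `airlight`, and the guided-filter-refined map plays the role of `transmission`.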
To obtain high speed and accuracy, passive autofocusing systems and a two-stage searching strategy have been widely
applied in microscopy. The core of a passive autofocusing system is the selection of the focus criterion function,
which significantly affects the efficiency and accuracy of the autofocusing system. In order to help choose the best
algorithm, this paper proposes a comparison method for the focus criterion functions in the two searching stages. The
ranking methodology for the first stage is proposed for the first time, and the one for the second stage is improved based
on previous works. An overall score of quantifying evaluation is also introduced in both stages. The proposed ranking
methodology for different focus algorithms is tested on a few synthetic defocused images, which are first simulated from
high definition images by Gaussian filter according to the defocus imaging principle, and then added with noise of
different ranges. Finally, to verify the effectiveness of the new ranking methodology, real defocused images are captured
to evaluate all of these algorithms; the results match those obtained on the synthetic image sets.
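A toy version of this evaluation can be assembled from two classic focus criterion functions and a simple monotonicity score over a sharp-to-blurred synthetic stack. The specific measures, the plus-shaped box blur, and the scoring rule below are illustrative choices, not the paper's ranking methodology.

```python
import numpy as np

def tenengrad(img):
    """Sum of squared central-difference gradients (a classic focus score)."""
    gx = img[:, 2:] - img[:, :-2]
    gy = img[2:, :] - img[:-2, :]
    return float((gx ** 2).sum() + (gy ** 2).sum())

def normalized_variance(img):
    """Intensity variance normalized by the mean (illumination-robust)."""
    m = img.mean()
    return float(((img - m) ** 2).mean() / m) if m > 0 else 0.0

def score_monotonicity(measure, stack):
    """Fraction of adjacent pairs in a sharp-to-blurred stack where the
    score strictly decreases; 1.0 means a perfectly ordered response."""
    s = [measure(f) for f in stack]
    return sum(a > b for a, b in zip(s, s[1:])) / (len(s) - 1)

def blur(img, times=1):
    """Crude blur by repeated circular 5-point averaging (for simulation)."""
    out = img.copy()
    for _ in range(times):
        out = (out + np.roll(out, 1, 0) + np.roll(out, -1, 0)
               + np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 5
    return out
```

A real benchmark would also perturb the stack with noise of different ranges, as the abstract describes, and aggregate such per-criterion scores into an overall ranking.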