This PDF file contains the front matter associated with SPIE
Proceedings Volume 7094, including the Title Page, Copyright
information, Table of Contents, Introduction (if any), and the
Conference Committee listing.
The lateral resolution of an imaging system is limited by its numerical aperture and the wavelength. Structured illumination incident on the object heterodynes the higher spatial frequencies of the object with the spatial frequency of the sinusoidal illumination into the passband of the imaging system, providing lateral superresolution. This idea has been implemented in microscopy. Multiple images of an object are taken, with distinct phase shifts in the sinusoidally patterned illumination. They are processed to separate the conventional, unaliased object spatial frequencies from the aliased ones, which contain the superresolution information. The separated aliased terms are de-aliased (i.e., the spatial frequencies in them are moved to their correct locations in Fourier space), giving superresolution along the direction perpendicular to the orientation of the sinusoidal fringe pattern. This process is repeated with, say, 60° and 120° rotations of the sinusoidal fringe illumination to obtain superresolution in all directions. The final reconstructed image is obtained by an appropriate combination of the de-aliased superresolution components with the conventional, unaliased components. We discuss the signal-to-noise ratio (SNR) and optical transfer function (OTF) compensation in the combination of all these components to obtain an image with lateral superresolution.
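The separation step described in this abstract amounts to a small linear inversion in Fourier space. The sketch below assumes three raw images per orientation with 2π/3 phase steps, a common choice that the abstract does not fix:

```python
import numpy as np

# Three phase shifts of the sinusoidal illumination (an assumed common
# choice; the abstract does not state the values).
phases = np.array([0.0, 2*np.pi/3, 4*np.pi/3])

# Each raw spectrum mixes three bands:
#   D_n = C0 + C_plus*exp(i*phi_n) + C_minus*exp(-i*phi_n),
# where C0 is the conventional (unaliased) band and C_plus/C_minus are the
# aliased bands carrying the superresolution information.
M = np.stack([np.ones(3), np.exp(1j*phases), np.exp(-1j*phases)], axis=1)
Minv = np.linalg.inv(M)

def separate_components(raw_images):
    """Separate conventional and aliased bands from three phase-shifted
    structured-illumination images."""
    spectra = np.stack([np.fft.fft2(im) for im in raw_images])  # (3, Ny, Nx)
    c0, c_plus, c_minus = np.tensordot(Minv, spectra, axes=([1], [0]))
    return c0, c_plus, c_minus
```

De-aliasing then shifts `c_plus` and `c_minus` by plus or minus the fringe spatial frequency in Fourier space before the components are recombined.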
We have developed a computer simulation to demonstrate the use of a periodic grating structure in the near field of a
telescope to super-resolve objects in the far field. The separation between the telescope pupil and the grating provides a
periodic anisoplanatism for the telescope, modulating the measured brightness of a point source as it moves across the
field normal to the grating. The resulting periodic modulation of an extended source can thus produce a spatial frequency
heterodyning effect, where frequency components outside the diffraction passband are aliased inside the passband and
measured. Using the simulation, we have quantitatively validated the analytically predicted periodic blur function for the case of a single near-field grating. Further, we have shown that the heterodyning effect is observed when the distance between the grating and pupil corresponds to the Talbot distance, where the grating forms a "self-image" in the plane of the pupil.
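The Talbot condition referenced here is simple to state numerically. The grating period and wavelength below are illustrative values, not taken from the paper:

```python
def talbot_distance(period_m, wavelength_m):
    """Paraxial Talbot self-imaging distance z_T = 2*d**2/wavelength,
    at which an ideal grating of period d reproduces itself."""
    return 2.0 * period_m**2 / wavelength_m

# Example: a 100-micron grating period at 500 nm self-images every 4 cm.
z_t = talbot_distance(100e-6, 500e-9)
```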
Phase errors introduced in the object beam of a digital hologram degrade the image quality of the
object. We present computer simulations showing the effect of multiple planes of phase errors in the
propagation path. By using a nonlinear optimization technique to maximize sharpness metrics, we show
results that account for aberrations in multiple planes and correct anisoplanatic blur. This paper demonstrates
this technique for two and three phase screens.
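A minimal 1-D sketch of the sharpness-maximization idea, assuming a single quadratic (defocus-like) phase error and the classic sum-of-intensity-squared metric; the paper's multi-plane, multi-screen optimization is far more general:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy 1-D setup: a pupil field carries an unknown defocus-like phase error,
# which is recovered by maximizing an image-sharpness metric.
n = 256
x = np.linspace(-1.0, 1.0, n)
pupil = (np.abs(x) < 0.5).astype(float)
true_defocus = 3.0                                # hypothetical aberration (rad)
field = pupil * np.exp(1j * true_defocus * x**2)  # aberrated pupil field

def sharpness(c):
    """Sharpness (sum of intensity squared) after a trial correction -c*x**2."""
    img = np.abs(np.fft.fft(field * np.exp(-1j * c * x**2)))**2
    return np.sum(img**2)

# Maximizing sharpness (minimizing its negative) recovers the phase coefficient.
res = minimize_scalar(lambda c: -sharpness(c), bounds=(0.0, 6.0), method='bounded')
```

The paper's nonlinear optimization works analogously but solves for many coefficients across two or three phase screens at once.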
Space imaging has many uses, including military intelligence, commercial map making, and natural resource
management. The desire for higher resolution and its natural limit due to diffraction has increasingly pushed up the size
and expense of space platforms. An alternative to larger telescopes is to use advanced image reconstruction and modern
small pixel detectors to build extremely compact satellites that can produce sub-diffraction resolution images. Because
of the Pixon® method's use of minimum complexity image modeling, it is especially good at recovering spatial
frequencies beyond the diffraction cutoff. This is because the simplest model that fits the spatial frequencies present in
the image, i.e., those below the diffraction cutoff, must necessarily contain correct spatial frequencies beyond the
diffraction cutoff. In our work with the Pixon method we have routinely obtained images with 1/4 diffraction resolution
for well sampled data (~4 pixels per diffraction FWHM) with good signal-to-noise ratio (~a few hundred per resolution
element). With such capabilities, a 3.0 meter focal length, 0.8 meter diameter telescope, launched on a Pegasus launch vehicle,
would have an effective 0.1 meter ground sampling distance, i.e., NIIRS 9 performance, from a 400 km orbit (675 nm
operating wavelength). We present a number of examples demonstrating the practical capabilities of the Pixon method
for achieving these goals.
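The abstract's numbers can be sanity-checked with a back-of-envelope calculation, assuming the Rayleigh criterion (1.22λ/D) for the diffraction-limited angle; the abstract does not state which criterion it uses:

```python
# Back-of-envelope check of the quoted ground sampling distance.
wavelength = 675e-9   # operating wavelength, m
aperture   = 0.8      # telescope diameter, m
altitude   = 400e3    # orbit altitude, m

rayleigh_angle  = 1.22 * wavelength / aperture   # diffraction-limited angle, rad
diffraction_gsd = rayleigh_angle * altitude      # ~0.41 m on the ground
pixon_gsd       = diffraction_gsd / 4.0          # ~0.10 m at 1/4-diffraction recovery
```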
Successful fusion combines salient features of each image to produce a new fused image with more "information". Although different sensors have different spatial resolutions, they tend to be detector-limited, and this usually does not significantly affect fusion. The biggest problem with sensor fusion is that the number of detectors on each array is different. If the number of pixels on target is made equal (the desired design), then the fields of view are different. This may affect operational effectiveness. If the fields of view are equal, then the numbers of pixels on target are different. This accentuates phasing effects, increases target edge ambiguity, and overall makes fusion more difficult. Sampling artifacts are most noticeable with man-made objects and are pronounced with periodic targets (bar targets).
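A toy calculation with made-up sensor parameters illustrates the trade-off: forcing equal pixels-on-target across two arrays with different detector counts necessarily yields unequal fields of view.

```python
# Hypothetical numbers chosen purely for illustration.
n_pixels_a, n_pixels_b = 640, 1024   # detector counts of the two arrays
pixels_on_target = 16                # desired pixels across the target
target_angle = 2.0e-3                # target angular size, rad

ifov = target_angle / pixels_on_target   # required IFOV, identical for both
fov_a = n_pixels_a * ifov                # 0.080 rad
fov_b = n_pixels_b * ifov                # 0.128 rad -- unequal fields of view
```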
In any image post-processing system, an imperfect knowledge of the system point-spread function (PSF) leads to
errors in the estimation of the truth image from the image data. Here we treat the problem of PSF uncertainty
resulting from either uncorrected or partially corrected atmospheric turbulence related phase errors and its
impact on image estimation in the presence of Poisson count statistics.
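The abstract does not name its estimator; as one concrete example, the standard maximum-likelihood iteration for Poisson data (Richardson-Lucy) makes the role of the assumed PSF explicit, since whatever PSF is supplied enters every iteration:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(data, psf, n_iter=50):
    """Maximum-likelihood deconvolution under Poisson count statistics.
    The supplied `psf` -- right or wrong -- drives the estimate, which is
    exactly how PSF uncertainty propagates into the truth-image estimate."""
    psf = psf / psf.sum()
    psf_flip = psf[::-1, ::-1]
    est = np.full(data.shape, data.mean(), dtype=float)
    for _ in range(n_iter):
        blurred = fftconvolve(est, psf, mode='same')
        ratio = data / np.maximum(blurred, 1e-12)
        est = est * fftconvolve(ratio, psf_flip, mode='same')
    return est
```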
In recent work, the resolution limits of two variations of chromo-tomographic hyperspectral imaging sensors were discussed.
In this paper, we examine another variation of this type of hyperspectral sensor, then present a reconstruction method
using computed tomography to estimate a hyperspectral data cube and examine the limits of resolution. The resolution
analysis is then compared with a simulation to determine the validity of the calculated resolution bounds. The simulation
setup is discussed and results are presented.
Remote Ultra-Low Light Imaging (RULLI) detectors are photon-limited detectors developed at Los Alamos National Laboratory. RULLI detectors provide a very high degree of temporal resolution for the arrival times of detected photo-events, but saturate at a photo-detection rate of about 10^6 photo-events per second. Rather than recording a conventional image, such as that output by a charge-coupled device (CCD) camera, the RULLI detector outputs a data stream consisting of the two-dimensional location and time of arrival of each detected photo-event. Hence, there is no need to select a specific exposure time to accumulate photo-events prior to data collection with a RULLI detector; this quantity can be optimized in post-processing. RULLI detectors have lower peak quantum efficiency (from as low as 5% to perhaps as much as 40% with modern photocathode technology) than back-illuminated CCDs (80% or higher). As a result of these factors, and the associated analyses of signal and noise, we have found that RULLI detectors can play two key new roles in space situational awareness (SSA): passive imaging of exceedingly dim objects, and three-dimensional imaging of objects illuminated with an appropriate pulsed laser. In this paper we describe the RULLI detection model, compare it to a conventional CCD detection model, and present analytic and simulation results to show the limits of performance of RULLI detectors used for SSA applications at the AMOS field site.
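The "choose the exposure time in post-processing" property follows directly from the event-list data format. A minimal sketch (the event-list layout below is an assumption, not the actual RULLI data format):

```python
import numpy as np

def bin_photon_events(events, t_exp, shape):
    """Bin a photon-event list into frames of exposure time t_exp,
    chosen after the fact. `events` is an (N, 3) array of (x, y, t)."""
    x = events[:, 0].astype(int)
    y = events[:, 1].astype(int)
    t = events[:, 2]
    n_frames = int(np.ceil((t.max() + 1e-12) / t_exp))
    frames = np.zeros((n_frames,) + shape, dtype=np.int64)
    idx = np.minimum((t / t_exp).astype(int), n_frames - 1)
    np.add.at(frames, (idx, y, x), 1)   # unbuffered accumulation per event
    return frames
```

The same event list can be re-binned at any exposure time, trading temporal resolution against per-frame signal, which is exactly the post-processing optimization the abstract describes.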
Airglow due to upper-atmospheric molecular emissions has been known since the late 1960s. Recent interest in
extending high altitude low light imaging capability has created an interest in exploiting the airglow phenomenon. We
discuss the feasibility of developing a sensor that operates above the atmospheric layer responsible for airglow. We
examine the implications to the design of the focal plane array and electronics caused by viewing the ground through
the radiant layer.
We are engaged in the development of an FPGA-based real-time video Deblurring Anisoplanaticism Corrector (DAC)
under a U.S. Army SBIR Phase II program. This DAC processor features the ability to measure and correct for the
inherent geometric image warp (e.g., as caused by anisoplanatism) for each incoming video field by comparison with a
running-average reference image compiled by the processor from recent accumulated image data. The incoming video
field is accordingly dewarped and passed to an adaptive higher-order temporal filter, which appropriately combines the
new video field into the reference image. This filter moderates the contribution of each new pixel, based on temporal
spectrum, goodness of fit, and related information, in order to differentiate between the effects of anisoplanatically
induced warp and the effects of true changes in the image such as those caused by motion of objects. The DAC
processor also includes other real-time PixonVision image enhancement capabilities such as dehazing, spatial deblurring
for removal of additional high-order image blur, spatial denoising, and spatially adaptive contrast enhancement. A
hardware proof-of-concept system is being implemented for standard NTSC video operation, and development progress
and related preliminary results for that system are presented here.
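The running-average reference with per-pixel moderation can be sketched as a weighted exponential blend. The weighting rule below is a deliberately simplified stand-in; the actual DAC filter uses temporal spectrum and goodness-of-fit information, and the threshold and blend rates here are hypothetical:

```python
import numpy as np

def update_reference(ref, new_frame, alpha=0.05, change_thresh=30.0):
    """Blend a dewarped video field into the running-average reference.
    Pixels that differ strongly from the reference (treated here as likely
    true scene change rather than residual warp) are blended in faster."""
    diff = np.abs(new_frame - ref)
    # Per-pixel weight: slow update normally, fast where real change is likely.
    w = np.where(diff > change_thresh, 0.5, alpha)
    return (1.0 - w) * ref + w * new_frame
```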
It is shown that, although divergent upon exiting an optical solid and typically color-dispersive over short propagation distances, controllably generated ultra-broadband continua can be propagated as a pulsed white-light source with a near-ideal Gaussian intensity distribution. In contrast to active 3-D imagery with monochromatic illumination, pulsed ultra-broadband continuum illumination is shown to enable active 3-D multi/hyperspectral imaging and sensing for standoff characterization of materials and enhanced object identification.
For synthetic aperture ladar (SAL) imaging, there are difficulties in the space domain because the optical antenna of the telescope is up to six orders of magnitude larger than the wavelength. In this paper, we suggest a defocused, spatially phase-masked telescope for reception to compensate the diffraction aberration from the target and match the directivity of heterodyne detection; a defocused, phase-masked transmission telescope to send out a wavefront that adds a controllable spatial quadratic phase to the phase history; and a circulator duplexer to compensate the aberration for reception and to produce the spatial phase bias for transmission concurrently in the same telescope. On this basis, the point-target radar equation in a full space-and-time treatment is derived. Correspondingly, the complete collection equations for the 2-D data acquired in the range and azimuth directions for 2-D SAL imaging in the strip-map and spotlight modes are developed. The azimuth and range imaging resolutions are then redefined in terms of imaging by a lens, and the requirement for azimuth sampling is given. The paper systematically presents all the details.
A detailed laboratory experiment has been completed which models a simultaneous multiple beam Fourier telescopy
(FT) technique capable of imaging rapidly changing objects. Fourier telescopy uses multiple beams that illuminate the
target with a complex fringe pattern that sweeps across it due to frequency differences between beams. Using this
method, the target spatial frequency components are encoded in the temporal signal that is reflected from the target.
Previous work has concentrated on system designs in which the target is illuminated with three individual beams in order to use
a standard phase closure process. Data processing and image reconstruction for this laboratory experiment invoked a
novel reconstruction algorithm that has been previously developed. The algorithm compensates for atmospheric phase
fluctuations affecting the large number of beams transmitted simultaneously and includes a new type of global phase
closure which allows image reconstruction from the time history of measured total reflected intensity from the target.
The reconstruction algorithm also solves for hundreds of image Fourier components simultaneously, permitting rapid
reconstruction of the image. This multiple beam laboratory experiment includes effects from realistic photon and speckle noise. Additional effects have been expanded to include uplink turbulence, piston jitter, and beam scintillation on the target, which will be encountered in an actual FT imaging system. Experimental results have obtained reconstructed image Strehl values which are greater than 0.9 under scaled system conditions.