We describe a solution for image restoration in a computational
camera known as an extended depth of field
(EDOF) system. The specially-designed optics produce
point spread functions that are approximately invariant over a range of object distances. However, this invariance comes
at the cost of reduced peak sharpness of the lens. The lens blur
is a function of lens field-height, and the imaging sensor introduces signal-dependent noise. In this context, the principal contributions
of this paper are: a) the modeling of the EDOF focus recovery
problem; and b) the adaptive EDOF focus recovery approach, operating in signal-dependent noise.
The focus recovery solution adapts to the complexities of an EDOF imaging system,
performing joint deblurring and noise
suppression. It also adapts to imaging conditions by accounting for the state of the sensor (e.g., low-light conditions).
A softcopy quality ruler method was implemented for the International Imaging Industry Association (I3A) Camera
Phone Image Quality (CPIQ) Initiative. This work extends ISO 20462 Part 3 by virtue of creating reference digital
images of known subjective image quality, complementing the hardcopy Standard Reference Stimuli (SRS). The
softcopy ruler method was developed using images from a Canon EOS 1Ds Mark II D-SLR digital still camera (DSC)
and a Kodak P880 point-and-shoot DSC. Images were viewed on an Apple 30-inch Cinema Display at a viewing distance of
34 inches. Ruler images were made for 16 scenes. Thirty ruler images were generated for each scene, representing ISO
20462 Standard Quality Scale (SQS) values of approximately 2 to 31 at an increment of one just noticeable difference
(JND) by adjusting the system modulation transfer function (MTF). A Matlab GUI was developed to display the ruler
and test images side-by-side with a user-adjustable ruler level controlled by a slider. A validation study was performed at
Kodak, Vista Point Technology, and Aptina Imaging in which all three companies set up a similar viewing lab to run the
softcopy ruler method. The results show that the three sets of data are in reasonable agreement with each other, with the
differences within the range expected from observer variability. Compared to previous implementations of the quality
ruler, the slider-based user interface allows approximately 2x faster assessments with 21.6% better precision.
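The abstract states that ruler images were generated by adjusting the system MTF in one-JND steps. The specific MTF manipulation used by CPIQ is not given here; purely as an illustration, a sharpness series can be produced by attenuating an image's spectrum with progressively narrower (hypothetical) Gaussian MTFs.

```python
import numpy as np

def apply_mtf(img, sigma_f):
    """Attenuate the image spectrum with a Gaussian MTF.

    sigma_f is the MTF half-width in cycles/pixel; smaller values yield a
    softer image. This Gaussian family is an assumption for illustration,
    not the MTF model used by the CPIQ softcopy ruler.
    """
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    mtf = np.exp(-(fx ** 2 + fy ** 2) / (2 * sigma_f ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * mtf))

# a ruler-like series: each step attenuates high frequencies further
rng = np.random.default_rng(1)
scene = rng.random((64, 64))
series = [apply_mtf(scene, s) for s in (0.4, 0.2, 0.1)]
```

In the actual method, the MTF adjustments are calibrated so that adjacent ruler images differ by approximately one JND on the SQS scale; here the steps are arbitrary.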
We consider the problem of segmenting images of natural scenes based
on color and texture. A recently proposed algorithm combines
knowledge of human perception with an understanding of signal
characteristics in order to segment natural scenes into
perceptually/semantically uniform regions. We conduct subjective
tests to determine key parameters of this algorithm, which include
thresholds for texture classification and feature similarity, as well
as the window size for texture estimation. The goal of the tests is
to relate human perception of isolated (context-free) texture patches
to image statistics obtained by the segmentation procedure. The texture patches correspond to homogeneous texture and color
distributions and were carefully selected to cover the entire
parameter space. The parameter estimation is based on fitting
statistical models to the texture data. Experimental results
demonstrate that this perceptual tuning of the algorithm leads to
significant improvements in segmentation performance.
We develop spatially adaptive, low-level, color and spatial texture features based on perceptual principles about the processing of texture and color information. We then propose an algorithm that combines these features to obtain image segmentations that convey semantic information that can be used for content-based retrieval. Our focus is images of natural scenes. The color texture features are based on the estimation of spatially adaptive dominant colors, which on one hand, reflect the fact that the human visual system cannot simultaneously perceive a large number of colors, and on the other, the fact that image colors are spatially varying. The spatially adaptive dominant colors are obtained using a previously developed adaptive clustering algorithm for color segmentation. The spatial texture features are based on a steerable filter decomposition, which offers an efficient and flexible approximation of early processing in the human visual system. We use the local energy of the subband coefficients as a simple but effective characterization of spatial texture. A median filter is used to distinguish the energy due to region boundaries from the energy of the textures themselves. Texture feature estimation requires a finite neighborhood that limits spatial resolution, while color segmentation provides accurate and precise edge localization. By combining texture with color information, the proposed algorithm can obtain robust segmentations that are accurate and precise. The performance of the proposed algorithm is demonstrated in the domain of photographic images, including low resolution, degraded, and compressed images.
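The spatial texture feature described above — the local energy of subband coefficients, with a median filter separating boundary energy from texture energy — can be sketched as follows. A difference-of-Gaussians band-pass stands in for the steerable filter decomposition, and the filter sizes are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def gaussian_lowpass(img, sigma):
    """Frequency-domain Gaussian low-pass (sigma in pixels)."""
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    H = np.exp(-2.0 * (np.pi * sigma) ** 2 * (fx ** 2 + fy ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * H))

def texture_energy(img, med=5):
    """Median-filtered local energy of a band-pass response.

    A difference-of-Gaussians band stands in for one steerable subband
    (an assumption for this sketch); the median filter suppresses the
    strong responses produced by region boundaries rather than textures.
    """
    band = gaussian_lowpass(img, 1.0) - gaussian_lowpass(img, 2.0)
    energy = np.abs(band)
    pad = med // 2
    padded = np.pad(energy, pad, mode="edge")
    out = np.empty_like(energy)
    for i in range(energy.shape[0]):
        for j in range(energy.shape[1]):
            out[i, j] = np.median(padded[i:i + med, j:j + med])
    return out

# demo: noise texture on the left half, flat region on the right half
rng = np.random.default_rng(2)
img = np.zeros((32, 64))
img[:, :32] = rng.random((32, 32))
energy = texture_energy(img)
```

The median, unlike a mean, discards the isolated high-energy responses along the texture/flat boundary, so the feature stays low inside the flat region right up to a few pixels of the edge — the behavior the abstract attributes to the median filter.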
We examine perceptual metrics and use them to evaluate the quality of still image coders. We show that mean-squared-error-based metrics fail to predict image quality when one compares artifacts generated by different types of image coders. We consider three different types of coders: JPEG, the Safranek-Johnston perceptual subband coder (PIC), and the Said-Pearlman SPIHT algorithm with perceptually weighted subband quantization, based on the Watson et al. visual thresholds. We show that incorporating perceptual weighting in the SPIHT algorithm results in significant improvement in visual quality. The metrics we consider are based on the same image decompositions as the corresponding compression algorithms. Such metrics are computationally efficient and considerably simpler than more elaborate metrics. However, since each of the metrics is used for the optimization of a coder, one expects that they would be biased towards that coder. We use the metrics to evaluate the performance of the compression techniques for a wide range of bit rates. Our experiments indicate that the PIC metric provides the best correlation with subjective evaluations. It predicts that at very low bit rates the SPIHT algorithm and the 8 by 8 PIC coder perform the best, while at high bit rates the 4 by 4 PIC coder is the best. More importantly, we show that the relative algorithm performance depends on image content, with the subband and DCT coders performing best for images with a lot of high frequency content, and the wavelet coders performing best for smoother images.
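The contrast the abstract draws can be made concrete: a plain MSE-based metric assigns the same score to two distortions of equal power regardless of where in frequency they lie, whereas a perceptually weighted metric does not. The sketch below is an illustrative stand-in only — the actual PIC and Watson-threshold metrics operate on the coders' own subband/DCT decompositions and are more elaborate.

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Plain MSE-based metric: blind to where in frequency the error lies."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def weighted_error(ref, test, weights):
    """Error energy weighted per frequency.

    `weights` plays the role of per-band visual sensitivities; this flat
    FFT version is a simplified stand-in for decomposition-matched metrics.
    """
    E = np.fft.fft2(ref.astype(float) - test.astype(float))
    return float(np.sum(weights * np.abs(E) ** 2) / E.size ** 2)

# demo: two distortions with identical MSE, one low-frequency, one high-frequency
n = 32
x = np.arange(n)
ref = np.zeros((n, n))
err_low = 0.5 * np.cos(2 * np.pi * 1 * x / n)[None, :] * np.ones((n, 1))
err_high = 0.5 * np.cos(2 * np.pi * 15 * x / n)[None, :] * np.ones((n, 1))
# weights falling off with radial frequency (a crude CSF-like assumption)
fy = np.fft.fftfreq(n)[:, None]
fx = np.fft.fftfreq(n)[None, :]
w = 1.0 / (1.0 + 20.0 * np.sqrt(fx ** 2 + fy ** 2))
```

PSNR rates the two distortions identically, while the weighted metric penalizes the low-frequency error more — the kind of frequency-dependent distinction that lets a perceptual metric rank coders with different artifact types.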