We present in this paper a novel capacitive device that stimulates the touchscreen interface of a smartphone (or of any
imaging device equipped with a capacitive touchscreen) and synchronizes triggering with the DxO LED Universal Timer
to measure shooting time lag and shutter lag according to ISO 15781:2013. The device and protocol extend the time lag
measurement beyond the standard by including negative shutter lag, a phenomenon that is increasingly common
in smartphones.
The device is computer-controlled; combined with the measurement algorithms, this makes it possible to
automate large series of captures and thus provide more refined statistical analyses, for example when the shutter lag
of “zero shutter lag” devices is limited by the frame time, as our measurements confirm.
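As a rough illustration of that frame-time limit, the following Python sketch simulates a "zero shutter lag" pipeline that returns the most recent buffered frame when the trigger fires. The frame rate and trial count are made-up values, and this is a toy model, not the measurement algorithm used by the device.

```python
import random
import statistics

# Hypothetical illustration: in a zero-shutter-lag pipeline the camera keeps
# a ring buffer of frames and, on trigger, picks a frame that predates the
# touch event. The measured lag is then quantized by the frame period, so
# over many trials its spread is bounded by the frame time.
FRAME_TIME_MS = 33.3   # assumed 30 fps sensor stream
N_TRIALS = 1000

lags = []
for _ in range(N_TRIALS):
    # The trigger arrives at a random phase within the frame period.
    phase = random.uniform(0.0, FRAME_TIME_MS)
    # The selected buffered frame started at the beginning of the period,
    # so the lag is negative: the image predates the trigger.
    lags.append(-phase)

print(f"mean lag: {statistics.mean(lags):7.2f} ms")
print(f"spread  : {max(lags) - min(lags):7.2f} ms (~ one frame time)")
```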
This paper presents a novel device and algorithms for measuring the different timings of digital cameras shooting both still images and videos. These timings include exposure (or shutter) time, electronic rolling shutter (ERS), frame rate, vertical blanking, time lags, missing frames, and duplicated frames. The device, the DxO LED Universal Timer (or “timer”), is designed to allow remotely controlled automated timing measurements using five synchronized lines of one hundred LEDs each to provide accurate results; each line can be independently controlled if needed. The device meets the requirements of ISO 15781 [1]. Camera timings are measured by automatically counting the number of lit LEDs on each line in still and video images of the device and finding the positions of the LEDs within a single frame or between different frames. Measurement algorithms are completely automated: positional markers on the device facilitate automatic detection of the timer as well as the positions of lit LEDs in the images. No manual computation or positioning is required. We used this system to measure the timings of several smartphones under different lighting conditions and camera settings.
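The counting principle lends itself to a simple back-of-the-envelope computation. The sketch below assumes each timer line advances one LED per fixed clock tick; the tick period, helper names, and sample counts are illustrative, not the actual firmware or algorithms of the DxO LED Universal Timer.

```python
# Minimal sketch of the LED-counting principle: if a timer line advances one
# LED per clock tick, the number of LEDs lit in a photograph equals the
# exposure time expressed in ticks. All values are illustrative.
LED_PERIOD_MS = 0.1          # assumed time between consecutive LEDs
LEDS_PER_LINE = 100          # the timer has five lines of 100 LEDs

def exposure_time_ms(n_lit_leds: int) -> float:
    """Exposure time estimated from the count of lit LEDs on one line."""
    return n_lit_leds * LED_PERIOD_MS

def rolling_shutter_ms(top_row_offset: int, bottom_row_offset: int) -> float:
    """ERS estimate from the shift (in LEDs) of the lit segment between the
    top and bottom rows of the image."""
    return abs(bottom_row_offset - top_row_offset) * LED_PERIOD_MS

print(exposure_time_ms(83))          # e.g. 83 lit LEDs -> 8.3 ms exposure
print(rolling_shutter_ms(12, 95))    # segment shifted by 83 LEDs -> 8.3 ms ERS
```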
This article presents a system and a protocol for characterizing image stabilization systems for both still images and videos.
It uses a six-axis platform, three axes being used for camera rotation and three for camera positioning. The platform is
programmable and can reproduce complex motions that have typically been recorded by a gyroscope mounted on
different types of cameras in different use cases. The measurement uses a single chart for still images and videos, the
texture dead leaves chart. Although the proposed implementation of the protocol uses a motion platform, the
measurement itself does not rely on any specific hardware. For still images, a modulation transfer function is measured
in different directions and weighted by a contrast sensitivity function (simulating the sensitivity of the human visual system) to
obtain an acutance. The sharpness improvement due to the image stabilization system is a good measure of
performance, as recommended by a CIPA standard draft. For video, four markers on the chart are detected with sub-pixel
accuracy to determine a homographic deformation between the current frame and a reference position. This model
describes the apparent global motion well, covering translations but also rotations about the optical axis and the distortion due to
the electronic rolling shutter found in most CMOS sensors. The protocol is applied to all types of cameras, such as
DSCs, DSLRs, and smartphones.
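The four-marker homography estimation for video can be sketched with a standard direct linear transform. In the snippet below, the marker coordinates are invented and a plain least-squares solve stands in for the sub-pixel detection pipeline.

```python
import numpy as np

def homography_from_points(src, dst):
    """Direct linear transform: estimate the 3x3 homography mapping the four
    reference marker positions (src) to their detected positions in the
    current frame (dst). Points are (x, y) pairs."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)          # null-space vector of the 8x9 system
    return H / H[2, 2]

# Illustrative marker positions (pixels): reference chart vs. a frame that is
# shifted, slightly rotated and sheared (as rolling shutter would produce).
ref = [(100, 100), (900, 100), (900, 700), (100, 700)]
cur = [(112, 95), (908, 118), (895, 712), (105, 688)]
H = homography_from_points(ref, cur)
print("estimated translation:", H[0, 2], H[1, 2])
```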
The I3A Camera Phone Image Quality (CPIQ) initiative aims to provide a consumer-oriented
overall image quality metric for mobile phone cameras. In order to achieve this
goal, a set of subjectively correlated image quality metrics has been developed. This paper
describes the development of a specific group within this set of metrics, the spatial metrics.
Contained in this group are the edge acutance, visual noise and texture acutance metrics.
A common feature is that they are all dependent on the spatial content of the specific
scene being analyzed. Therefore, the measurement results of the metrics are weighted by
a contrast sensitivity function (CSF) and, thus, the conditions under which a particular
image is viewed must be specified. This leads to the establishment of a common framework
consisting of three components shared by all spatial metrics. First, the RGB image is transformed
to a color opponent space, separating the luminance channel from two chrominance
channels. Second, associated with this color space are three contrast sensitivity functions,
one for each opponent channel. Finally, the specific viewing conditions, comprising
both digital displays and printouts, are supported through two distinct MTFs.
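The shared framework can be illustrated by the weighting step common to all spatial metrics: integrating a measured MTF against a luminance CSF for the assumed viewing conditions to obtain an acutance. The CSF model and camera MTF below are illustrative stand-ins, not the exact CPIQ formulas.

```python
import numpy as np

def csf_luminance(f_cpd):
    """Simple band-pass luminance CSF (cycles/degree); the constants are
    illustrative, not the CPIQ parameterization."""
    return 75.0 * f_cpd * np.exp(-0.2 * f_cpd)

def acutance(freqs_cpd, mtf):
    """CSF-weighted integral of the system MTF, normalized so that a perfect
    system (MTF = 1 everywhere) scores 1.0."""
    csf = csf_luminance(freqs_cpd)
    return np.trapz(mtf * csf, freqs_cpd) / np.trapz(csf, freqs_cpd)

f = np.linspace(0.1, 60, 600)        # spatial frequency in cycles/degree
mtf = np.exp(-(f / 25.0) ** 2)       # made-up camera-plus-display MTF
print(f"acutance = {acutance(f, mtf):.3f}")
```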
Digital sensors have clearly taken over the photography mass market. However, some photographers with very high
expectations still use silver halide film. Are they merely nostalgics reluctant to embrace technology, or is there more than meets the
eye? The answer is not so simple if we note that, at the end of the golden age, films were actually scanned before
development. Nowadays, film users have adopted digital technology and scan their film to take advantage of digital
processing afterwards. Therefore, it is legitimate to evaluate silver halide film "with a digital eye", under the assumption
that processing can be applied as for a digital camera. The article describes in detail the operations needed to
treat the film as a RAW digital sensor. In particular, we have to account for the film characteristic curve, the
autocorrelation of the noise (related to film grain), and the sampling of the digital sensor (related to the Bayer filter array).
We also describe the protocol that was set up, from shooting to scanning. We then present and interpret the results for sensor
response, signal-to-noise ratio, and dynamic range.
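The reported quantities can be sketched from per-patch statistics. The snippet below assumes raw scan values for a series of uniformly exposed gray patches; the synthetic patch generator merely stands in for the real shooting-and-scanning protocol, and all constants are invented.

```python
import numpy as np

# Synthetic gray patches: mean response grows with exposure, noise combines
# a fixed floor with a grain-like signal-dependent term (made-up model).
rng = np.random.default_rng(0)
exposures = np.logspace(-3, 0, 12)                 # relative exposures
patches = [1000.0 * e + rng.normal(0, 4 + 40 * e, 10000) for e in exposures]

means = np.array([p.mean() for p in patches])      # characteristic curve
stds = np.array([p.std() for p in patches])        # grain-driven noise
snr_db = 20 * np.log10(means / stds)

# Dynamic range: ratio of the full-scale response to the level where SNR
# crosses 0 dB (a common convention, assumed here).
noise_floor = means[np.argmin(np.abs(snr_db))]
print("SNR at full scale: %.1f dB" % snr_db[-1])
print("dynamic range: %.1f stops" % np.log2(means[-1] / noise_floor))
```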
Extended depth of field (EDOF) cameras have recently emerged as a low-cost alternative to autofocus lenses. Different
methods, based either on longitudinal chromatic aberration or on wavefront coding, have been proposed and have reached
the market. The purpose of this article is to study the theoretical performance and limitations of wavefront coding
approaches. The idea of these methods is to introduce a phase element that trades off sharpness at the
optimal focus position against the variation of the blur spot with respect to object distance. We show that there are
theoretical bounds to this trade-off: given the aperture and the minimal MTF value required for suitable image quality, the
pixel pitch imposes the maximal depth of field. We analyze the limits of the depth-of-field extension for pixel
pitches from 1.75 μm to 1.1 μm, particularly with regard to the increasing influence of diffraction.
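The diffraction argument can be checked with an order-of-magnitude computation: at f/2.8, compare the Airy disk diameter (about 2.44λN) with the pixel pitch. The wavelength choice below is a common convention, not a value taken from the paper.

```python
import math

# Once the diffraction spot covers several pixels, a phase mask has little
# blur budget left to trade for depth of field. Numbers are illustrative.
WAVELENGTH_UM = 0.55   # green light, a conventional reference wavelength
F_NUMBER = 2.8

airy_um = 2.44 * WAVELENGTH_UM * F_NUMBER
for pitch_um in (1.75, 1.4, 1.1):
    print(f"pitch {pitch_um:4.2f} um: Airy disk = {airy_um:.2f} um "
          f"({airy_um / pitch_um:.1f} pixels)")
```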
The aim of this paper is to define an objective measurement for evaluating the performance of a digital camera. The
challenge is to combine very different flaws involving geometry (such as distortion or lateral chromatic aberration), light (such as
luminance and color shading), and statistical phenomena (such as noise). We introduce the concept of information capacity, which
accounts for all the main defects that can be observed in digital images, whether due to the optics or to the
sensor. The information capacity describes the potential of the camera to produce good images. In particular, digital
processing can correct some flaws (such as distortion). Our definition of information takes such possible corrections into account,
along with the fact that processing can neither retrieve lost information nor create it. This paper extends some of our
previous work, in which information capacity was defined only for RAW sensors. The concept is extended to cameras
with optical defects such as distortion, lateral and longitudinal chromatic aberration, and lens shading.
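A Shannon-style reading of the concept can be sketched as follows: integrate log2(1 + signal-to-noise power) over spatial frequency, with the signal shaped by the system MTF. This is a simplified stand-in for the paper's definition, which additionally distinguishes correctable from non-correctable defects; all numbers below are invented.

```python
import numpy as np

def capacity_bits(freqs, mtf, snr):
    """Shannon-style per-pixel capacity: freqs are normalized to Nyquist,
    snr is the full-scale amplitude SNR, and the MTF shapes the signal."""
    return np.trapz(np.log2(1.0 + (mtf * snr) ** 2), freqs)

f = np.linspace(0, 1, 256)
sharp = np.exp(-(f / 0.6) ** 2)      # made-up MTF of a good lens
soft = np.exp(-(f / 0.3) ** 2)       # same sensor behind a softer lens
print("sharp lens: %.1f bits" % capacity_bits(f, sharp, snr=40))
print("soft lens : %.1f bits" % capacity_bits(f, soft, snr=40))
```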
We describe the procedure for evaluating the image quality of a camera in terms of texture preservation. We use a
stochastic model from stochastic geometry known as the dead leaves model. It intrinsically reproduces
occlusion phenomena, producing edges at every scale and orientation, possibly with a low level of contrast. An
advantage of this synthetic model is that it provides a ground truth in terms of image statistics. In particular, its power
spectrum follows a power law, as do many natural textures. We can therefore define a texture MTF as the ratio of the Fourier
transform of the camera picture to the Fourier transform of the original target, and we fully describe the procedure to
compute it. We compare the results with the traditional MTF (computed on a slanted edge, as defined in the ISO
12233 standard) and show that the texture MTF is indeed more appropriate for describing fine-detail rendering.
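The computation can be sketched as the ratio of radially averaged power spectra between the captured dead-leaves image and the known target. In the snippet below, synthetic arrays and a made-up Gaussian OTF stand in for real captures.

```python
import numpy as np

def radial_power_spectrum(img, n_bins=64):
    """Radially averaged 2D power spectrum, binned by integer radius."""
    F = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(x - w / 2, y - h / 2).astype(int)
    power = np.bincount(r.ravel(), F.ravel())[:n_bins]
    counts = np.bincount(r.ravel())[:n_bins]
    return power / np.maximum(counts, 1)

rng = np.random.default_rng(1)
target = rng.normal(size=(256, 256))             # stand-in for the chart
fy = np.fft.fftfreq(256)[:, None]
fx = np.fft.fftfreq(256)[None, :]
otf = np.exp(-(fx ** 2 + fy ** 2) / (2 * 0.15 ** 2))  # made-up camera OTF
capture = np.fft.ifft2(np.fft.fft2(target) * otf).real

ratio = radial_power_spectrum(capture) / radial_power_spectrum(target)
texture_mtf = np.sqrt(ratio)         # amplitude ratio from power ratio
print("texture MTF near DC / near Nyquist: %.2f / %.2f"
      % (texture_mtf[1], texture_mtf[-1]))
```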
KEYWORDS: Modulation transfer functions, Cameras, Optical signal processing, Lens design, Chromatic aberrations, Imaging systems, Point spread functions, Image processing, Digital signal processing, Computational imaging
In this paper we present an approach to extending the depth of field (DoF) of cell-phone miniature cameras by concurrently
optimizing the optical system and post-capture digital processing. Our lens design deliberately increases the
longitudinal chromatic aberration such that, for a given object distance, at least one color plane of the
RGB image contains the in-focus scene information. Typically, red is made sharp for objects at infinity, green for
intermediate distances, and blue for close distances. Comparing sharpness across colors gives an estimate of the object
distance and therefore allows choosing the right set of digital filters as a function of that distance. Then, by
copying the high frequencies of the sharpest color onto the other colors, we show theoretically and experimentally that it
is possible to achieve a sharp image in all colors over a larger DoF. We compare our technique with other
approaches that also aim to increase the DoF, such as wavefront coding.
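The sharpness-transport step can be sketched as follows: select the color plane with the highest gradient energy, extract its high frequencies, and add them to the other planes. The filter choice, the scipy-based implementation, and the synthetic scene are illustrative, not the production pipeline.

```python
import numpy as np
from scipy import ndimage

def sharpest_channel(rgb):
    """Index of the color plane with the highest gradient energy."""
    energies = [np.mean(np.hypot(*np.gradient(rgb[..., c]))) for c in range(3)]
    return int(np.argmax(energies))

def transport_high_frequencies(rgb, sigma=2.0):
    """Copy the high frequencies of the sharpest plane onto the others."""
    s = sharpest_channel(rgb)
    high = rgb[..., s] - ndimage.gaussian_filter(rgb[..., s], sigma)
    out = rgb.copy()
    for c in range(3):
        if c != s:
            out[..., c] = rgb[..., c] + high   # add the sharp details back
    return out

# Synthetic scene where the "red" plane is the sharpest, as for a distant
# object under the chromatic lens design described above.
rng = np.random.default_rng(2)
scene = rng.normal(size=(128, 128))
rgb = np.stack([ndimage.gaussian_filter(scene, s) for s in (0.5, 2.0, 4.0)],
               axis=-1)
restored = transport_high_frequencies(rgb)
print("sharpest plane:", sharpest_channel(rgb))   # expect 0, the red-like one
```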
In this paper, we numerically quantify the information capacity of a sensor by examining the different factors that can
limit this capacity, namely sensor spectral response, noise, and sensor blur (due to fill factor, crosstalk, and diffraction,
for a given aperture). In particular, we compare the effectiveness of the raw color space for different kinds of sensors. We also
define an intrinsic notion of color sensitivity that generalizes some of our previous work, and we discuss
how metamerism can be represented for a sensor.
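One rough way to read an intrinsic color sensitivity is as a count of distinguishable colors: the volume of the raw response range divided by the volume of the noise ellipsoid. The bit depth and noise values below are made up, and this counting rule is an assumption for illustration, not the paper's exact definition.

```python
import numpy as np

full_scale = 4095.0                       # assumed 12-bit raw range
noise_sigma = np.array([8.0, 6.0, 9.0])   # per-channel raw noise (made up)

# One "distinguishable color" occupies roughly a noise ellipsoid; the raw
# response space is approximated as a cube of side full_scale.
ellipsoid_volume = (4.0 / 3.0) * np.pi * np.prod(noise_sigma)
n_colors = full_scale ** 3 / ellipsoid_volume
print(f"color sensitivity ~ {np.log2(n_colors):.1f} bits")
```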
A method for evaluating texture quality as captured by a camera is presented. It is shown that the usual sharpness measurements
are not fully satisfactory for this task. A new target based on random geometry is proposed, using the so-called
dead leaves model. It contains objects of every size and orientation and shares common statistics with natural
images. Experiments show that the correlation between objective measurements derived from this target and
subjective measurements conducted in the Camera Phone Image Quality initiative is excellent.
A general trend in the CMOS image sensor market is toward increasing resolution (a larger number of pixels)
while keeping a small form factor by shrinking photosite size. This article discusses the impact of this trend on some of
the main attributes of image quality. The first example is image sharpness. A smaller pitch theoretically allows a higher
limiting resolution, which is derived from the Modulation Transfer Function (MTF). But recent sensor technologies
(1.75 μm, and soon 1.45 μm) with a typical aperture of f/2.8 are clearly reaching the size of the diffraction blur spot. A second
example is the impact on pixel light sensitivity and image sensor noise. For photonic noise, the Signal-to-Noise Ratio
(SNR) is typically a decreasing function of resolution. To evaluate whether shrinking pixel size could be beneficial to
image quality, the tradeoff between spatial resolution and light sensitivity is examined by comparing the image information capacity of sensors with varying pixel size. A theoretical analysis that takes into account measured and predictive models of the pixel performance degradation and improvement associated with CMOS imager technology scaling is presented. This analysis is complemented by a benchmark of recent commercial sensors with different pixel
technologies.
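The photon-noise side of the tradeoff follows from full-well capacity scaling roughly with pixel area and photon-limited SNR scaling as the square root of the electron count. The electron density below is an assumed round number, not a figure from the benchmark.

```python
import math

# Photon-noise scaling sketch: full-well electrons grow roughly with pixel
# area, and photon-limited SNR is sqrt(N). Density is an assumed constant.
ELECTRONS_PER_UM2 = 1500.0   # assumed full-well density

for pitch_um in (2.2, 1.75, 1.45, 1.1):
    n_e = ELECTRONS_PER_UM2 * pitch_um ** 2
    snr_db = 20 * math.log10(math.sqrt(n_e))
    print(f"pitch {pitch_um:4.2f} um: full well ~{n_e:6.0f} e-, "
          f"peak photon SNR ~{snr_db:4.1f} dB")
```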
KEYWORDS: Signal to noise ratio, Sensors, Cameras, Image quality, RGB color model, Colorimetry, Calibration, Space sensors, 3D modeling, Digital cameras
This article proposes new measurements for evaluating the image quality of a camera, particularly its reproduction of colors. The concept of gamut is usually a topic of interest, but it is much better adapted to output devices than to capture devices (sensors). Moreover, it does not take other important characteristics of the camera into account, such as noise. Color sensitivity, by contrast, is a global measurement relating the raw noise to the spectral sensitivities of the sensor, and it provides an easy ranking of cameras. For an in-depth analysis of noise versus color rendering, the concept of gamut SNR is introduced, describing the set of colors achievable at a given SNR (Signal-to-Noise Ratio). This representation provides a convenient visualization of which part of the gamut is most affected by noise and can be useful for camera tuning as well.
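The gamut-SNR idea can be sketched by propagating a raw noise model through a 3x3 color matrix and evaluating the resulting SNR for different colors. The matrix, the noise constants, and the luminance weighting below are illustrative choices, not the paper's calibration.

```python
import numpy as np

# Illustrative 3x3 color correction matrix (strong off-diagonal terms, as
# typical for small sensors; values are made up).
COLOR_MATRIX = np.array([[1.8, -0.6, -0.2],
                         [-0.4, 1.6, -0.2],
                         [-0.1, -0.7, 1.8]])

def raw_noise_var(raw):
    """Simple photon + read noise model in raw units (assumed constants)."""
    return 0.5 * raw + 4.0

def color_snr(raw_rgb):
    """SNR of the luminance of a color after the 3x3 color correction.
    Per-channel variances propagate through the squared matrix entries."""
    out = COLOR_MATRIX @ raw_rgb
    var = (COLOR_MATRIX ** 2) @ raw_noise_var(raw_rgb)
    lum = np.array([0.2126, 0.7152, 0.0722])
    return (lum @ out) / np.sqrt(lum ** 2 @ var)

# Saturated colors pay a noise penalty through the matrix off-diagonals.
print("gray:", round(float(color_snr(np.array([500., 500., 500.]))), 1))
print("blue:", round(float(color_snr(np.array([80., 120., 600.]))), 1))
```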
This article explains the cause of the color fringing phenomenon that can be noticed in photographs, particularly on the edges of backlit objects. The nature of color fringing is optical, related in particular to the difference in blur spots at different wavelengths. Color fringing can therefore be observed in both digital and silver halide photography. The hypothesis that lateral chromatic aberration is the only cause of color fringing is discarded. The factors that can influence the intensity of color fringing are carefully studied, some of them being specific to digital photography. A protocol for measuring color fringing with very good repeatability is described, as well as a means of predicting color fringing from optical designs.
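A simplified version of such a measurement: since each wavelength has a different blur spot, the fringe appears where the color planes disagree around an edge. The sigmoid edge profiles and the hue-shift threshold below are synthetic illustrations, not the paper's protocol.

```python
import numpy as np

# Synthetic backlit edge: the same step seen through different per-channel
# blur widths (made-up values mimicking wavelength-dependent blur spots).
x = np.arange(200, dtype=float)
blur = {"R": 1.5, "G": 2.5, "B": 4.0}      # assumed edge widths in pixels
edges = {c: 1.0 / (1.0 + np.exp(-(x - 100.0) / w)) for c, w in blur.items()}

# Fringe width: the region where red and blue differ enough to produce a
# visible hue shift (the threshold is an illustrative choice).
mismatch = np.abs(edges["R"] - edges["B"]) > 0.10
print("fringe width ~ %d px" % int(mismatch.sum()))
```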
We present a technique for region tracking using a novel registration method. This method represents the image as a tree containing its topological and geometrical information. The tree is computed using the so-called Fast Level Set Transform (FLST) and has the desirable property of contrast invariance. We explain the main reasons why the shapes present in the image correspond to nodes of the tree, and we briefly describe the basic principles of the FLST. We then present a registration algorithm, based on a voting procedure, that finds correspondences between the nodes of the trees of two different images. We also propose an implementation of this algorithm for a real-time application and conclude with some numerical experiments.
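The voting step can be loosely sketched (without the FLST itself): every pair of nodes with similar contrast-invariant attributes votes for the translation that would align them, and the most-voted translation registers the two images. The node attributes below are made up; real nodes would come from the FLST trees, and the full method handles more than pure translations.

```python
import collections

# Made-up node lists: (x, y, area). The same three shapes appear in both
# images, shifted by a constant translation.
nodes_a = [(50, 40, 120), (80, 90, 300), (20, 30, 45)]
nodes_b = [(58, 35, 118), (88, 85, 305), (28, 25, 47)]

votes = collections.Counter()
for xa, ya, area_a in nodes_a:
    for xb, yb, area_b in nodes_b:
        # Area is contrast-invariant, so similar areas suggest matching nodes.
        if abs(area_a - area_b) / max(area_a, area_b) < 0.1:
            votes[(xb - xa, yb - ya)] += 1

(dx, dy), n = votes.most_common(1)[0]
print(f"estimated translation: ({dx}, {dy}) with {n} votes")
```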