A method for designing a signal-adapted, two-channel biorthogonal wavelet filter bank that maximizes coding gain is presented using the technique of Pseudoframes for Subspaces (PFFS). At the same time, the PFFS model is able to incorporate the requirements of regularity and perfect reconstruction into the design. The coding gain achieved for AR(2) signals can result in a reduction in bit rate of more than 0.66 bits per sample compared with traditional biorthogonal wavelet filter banks of the same length. The ability of PFFS to combine all of these design elements through an unconstrained optimization parameter makes this technique worth pursuing.
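A minimal sketch of the figure of merit being maximized (not the PFFS design itself): it simulates an AR(2) signal, filters it with a generic two-channel biorthogonal analysis pair (the LeGall 5/3 filters stand in for a designed bank, and the AR(2) coefficients are illustrative), and estimates the classical transform coding gain as the arithmetic-to-geometric-mean ratio of the subband variances.

```python
import numpy as np

rng = np.random.default_rng(0)

# AR(2) signal: x[n] = a1*x[n-1] + a2*x[n-2] + w[n]  (coefficients are illustrative)
a1, a2, N = 1.5, -0.81, 2**16
w = rng.standard_normal(N)
x = np.zeros(N)
for n in range(2, N):
    x[n] = a1 * x[n - 1] + a2 * x[n - 2] + w[n]

# Two-channel biorthogonal analysis filters (LeGall 5/3 lowpass / highpass)
h0 = np.array([-1, 2, 6, 2, -1], dtype=float) / 8.0
h1 = np.array([-1, 2, -1], dtype=float) / 2.0

# Analysis filtering followed by decimation by 2
y0 = np.convolve(x, h0)[::2]
y1 = np.convolve(x, h1)[::2]

v = np.array([y0.var(), y1.var()])
coding_gain = v.mean() / np.sqrt(v.prod())          # arithmetic / geometric mean
print("estimated coding gain: %.2f (%.2f dB)" % (coding_gain, 10 * np.log10(coding_gain)))
```

A gain difference of 6 dB between two banks corresponds to roughly one bit per sample at the same distortion, which is the scale on which the 0.66 bit/sample figure above should be read.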
This paper presents a new prediction-based compression technique for dynamic 3D meshes with constant connectivity and time-varying geometry. The core of the proposed algorithm is a skinning model used for motion compensation. The mesh is first partitioned into vertex clusters that can each be described by a single affine motion model. The proposed segmentation technique automatically determines the number of clusters and relies on a decimation strategy that privileges the simplification of vertices exhibiting the same affine motion over the whole animation sequence. The residual prediction errors are finally compressed using a temporal-DCT representation.
The performance of our encoder is objectively evaluated on a data set of eight animation sequences with various sizes, geometries and topologies, exhibiting both rigid and elastic motions. The experimental evaluation shows that the proposed compression scheme outperforms state-of-the-art techniques such as the MPEG-4/AFX, Dynapack, RT, GV, MCGV, TDCT and PCA compression schemes.
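A minimal sketch of the motion-compensation idea (not the full codec): for one vertex cluster, a single affine model A, t mapping the rest-pose vertices onto a given frame is fitted by linear least squares; the residuals are what a temporal-DCT stage would subsequently encode. All mesh data below are synthetic.

```python
import numpy as np

def fit_affine(rest, frame):
    """rest, frame: (n, 3) vertex arrays for one cluster; returns A (3x3), t (3,)."""
    X = np.hstack([rest, np.ones((rest.shape[0], 1))])    # homogeneous coordinates
    P, *_ = np.linalg.lstsq(X, frame, rcond=None)          # (4, 3) affine parameters
    return P[:3].T, P[3]

rng = np.random.default_rng(1)
rest = rng.standard_normal((50, 3))
A_true = np.eye(3) + 0.1 * rng.standard_normal((3, 3))
t_true = np.array([0.2, -0.1, 0.05])
frame = rest @ A_true.T + t_true + 0.001 * rng.standard_normal((50, 3))

A, t = fit_affine(rest, frame)
residual = frame - (rest @ A.T + t)                       # fed to the residual coder
print("max residual:", np.abs(residual).max())
```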
The compact representation of region boundary contours is key to efficient representation and compression of digital images using object-based compression (OBC). In OBC, regions are coded in terms of their texture, color, and shape. Given the appropriate representation scheme, high compression ratios (e.g., 500:1 ≤ CR ≤ 2,500:1) have been reported for selected images. Because a region boundary is often represented with more parameters than the region contents, it is crucial to maximize the boundary compression ratio by reducing these parameters. Researchers have elsewhere shown that established boundary encoding techniques such as chain coding, simplicial complexes, or quadtrees, to name but a few, are inadequate to support OBC within the aforementioned CR range. Several existing compression standards such as MPEG support efficient boundary representation, but do not necessarily support OBC at CR ≥ 500:1. Siddiqui et al. exploited concepts from fractal geometry to encode and compress region boundaries based on fractal dimension, reporting CR = 286.6:1 in one test. However, Siddiqui's algorithm is costly and appears to contain ambiguities. In this paper, we first discuss fractal dimension and OBC compression ratio, then enhance Siddiqui's algorithm, achieving significantly higher CR for a wide variety of boundary types. In particular, our algorithm smooths a region boundary B, then extracts its inflection or control points P, which are compactly represented. The fractal dimension D is computed locally for the detrended B. By appropriate subsampling, one efficiently segments disjoint clusters of D values subject to a preselected tolerance, thereby partitioning B into a multifractal. This is accomplished using four possible compression modes. In contrast, previous researchers have characterized boundary variance with one fractal dimension, thereby producing a monofractal. At its most complex, the compressed representation contains P, a spatial marker, and a D value for each monofractal boundary segment, with slight additional overhead indicating an encoding mode. The simplest representation contains P and a pointer into a database of region patterns. Each of these patterns has an associated fractal dimension, thus alleviating storage of segment-specific D values. Contour reconstruction during decompression is guided by the smoothed contour. Analysis of this procedure over a database of 73 images reveals that 622:1 ≤ CR ≤ 1,720:1 is typical for natural scenes, demonstrating the utility of our approach.
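A minimal sketch of one ingredient above, estimating a contour's fractal dimension by box counting (not the full multifractal codec): count occupied boxes N(s) at several box sizes s and fit the slope of log N(s) versus log(1/s). The jittered circle is a synthetic stand-in for a region boundary.

```python
import numpy as np

def box_counting_dimension(points, sizes):
    """points: (n, 2) boundary samples scaled to [0, 1]^2; sizes: box edge lengths."""
    counts = []
    for s in sizes:
        boxes = np.unique(np.floor(points / s).astype(int), axis=0)   # occupied boxes
        counts.append(len(boxes))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# Example contour: a slightly jittered circle (smooth, so D should be close to 1)
t = np.linspace(0, 2 * np.pi, 4000, endpoint=False)
r = 0.4 + 0.01 * np.sin(40 * t)
pts = 0.5 + np.column_stack([r * np.cos(t), r * np.sin(t)])
print("estimated D:", box_counting_dimension(pts, [1/8, 1/16, 1/32, 1/64, 1/128]))
```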
Data compression methods have mostly focused on achieving a desired perceptual quality for multimedia data at a given number of bits. However, there has also been interest over the last several decades in compression for communicating data to a remote location where the data are used to compute estimates. This paper traces the perspectives in the research literature on compression-for-estimation. We discuss how these perspectives can all be cast in the following form: the source emits a signal, possibly dependent on some unknown parameter(s); the ith sensor receives the signal and compresses it for transmission to a central processing center, where it is used to make the estimate(s). The previous perspectives can be grouped as optimizing compression for the purpose of estimating either (i) the source signal or (ii) the source parameter(s). Early results focused on restricting the encoder to a scalar quantizer designed according to some optimization criterion. Later results focused on more general compression structures, although most of those focus on establishing information-theoretic results and bounds. Recent results by the authors use operational rate-distortion methods to develop task-driven compression algorithms that allow trade-offs between the multiple estimation tasks at a given rate.
Data compression ideas can be extended to assess data quality across multiple sensors and to manage the sensor network so that location accuracy is optimized subject to communication constraints. From an unconstrained-resources viewpoint it is desirable to use the complete set of deployed sensors; however, that generally results in an excessive data volume. Selecting a subset of sensors to participate in a sensing task is crucial to satisfying trade-offs between accuracy and timeline requirements. For emitter location it is well known that the geometry between the sensors and the target plays a key role in determining location accuracy. Furthermore, the deployed sensors have different data quality. Given these two factors, it is no trivial matter to select the optimal subset of sensors. We attack this problem through a data quality measure based on Fisher information for a set of sensors, and optimize it via sensor selection and data compression.
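A minimal sketch of the sensor-selection idea under an assumed bearing-only measurement model (the paper's exact measurement and compression model is not reproduced here): each sensor contributes a rank-one Fisher information term, and sensors are picked greedily to maximize the determinant of the accumulated Fisher information matrix.

```python
import numpy as np

def bearing_fim(sensor, target, sigma):
    d = target - sensor
    r2 = d @ d
    u_perp = np.array([-d[1], d[0]]) / np.sqrt(r2)        # unit vector normal to line of sight
    return np.outer(u_perp, u_perp) / (sigma**2 * r2)     # Fisher info of one bearing measurement

def greedy_select(sensors, sigmas, target, k):
    chosen, fim = [], 1e-9 * np.eye(2)                    # tiny prior keeps the determinant defined
    for _ in range(k):
        best = max((i for i in range(len(sensors)) if i not in chosen),
                   key=lambda i: np.linalg.det(fim + bearing_fim(sensors[i], target, sigmas[i])))
        chosen.append(best)
        fim = fim + bearing_fim(sensors[best], target, sigmas[best])
    return chosen, fim

rng = np.random.default_rng(2)
sensors = rng.uniform(-10, 10, size=(12, 2))
sigmas = rng.uniform(0.01, 0.05, size=12)                 # per-sensor bearing noise (rad)
target = np.array([3.0, 4.0])
idx, fim = greedy_select(sensors, sigmas, target, k=4)
print("selected sensors:", idx, " error-ellipse size ~ 1/sqrt(det FIM):", 1 / np.sqrt(np.linalg.det(fim)))
```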
The demand for an accurate objective image quality assessment tool is growing in modern multimedia systems. Image coding algorithms introduce highly structured coding artifacts and distortions. In this paper, we present a novel approach to predicting perceived image quality. Properties of the Human Visual System (HVS) were exploited to select a set of suitable metrics. These metrics are extracted by comparing the reference and distorted images. Mutual Information (MI) and Principal Component Analysis (PCA) were used to obtain an optimal set of objective features that best describe the perceived image quality with respect to subjective scores from human observers. The impairment feature vector is forwarded to an Artificial Neural Network (ANN), where the features are combined and the predicted quality score is computed. Parameters of the ANN are adjusted using Mean Opinion Scores (MOS) obtained from a group of assessors. It is shown that the proposed image quality assessment model can achieve high correlation with subjective image quality ratings. A possible incorporation of the model into a perceptual image-coding algorithm is proposed. Such a system can ensure that only visually important information is encoded and, consequently, that the required communication bandwidth is minimized.
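A minimal sketch of the feature-to-score mapping described above, with synthetic data standing in for the real impairment features and MOS values: PCA reduces the feature vector and a small neural network regresses the quality score.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
features = rng.standard_normal((200, 12))                 # 12 HVS-motivated metrics per image (synthetic)
mos = 3 + features[:, 0] - 0.5 * features[:, 1] + 0.1 * rng.standard_normal(200)

model = make_pipeline(StandardScaler(),
                      PCA(n_components=5),                # keep the dominant feature directions
                      MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0))
model.fit(features[:150], mos[:150])
pred = model.predict(features[150:])
print("correlation with held-out MOS:", np.corrcoef(pred, mos[150:])[0, 1])
```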
Many transform-domain techniques have been developed for watermarking. Most of these techniques have proven more robust than spatial-domain methods after common image processing operations are applied to the watermarked images. In this research, adaptive models are used to help identify watermarks in the discrete cosine transform (DCT) domain. The adaptive models can be used to enhance the watermark detected after an attack. The watermark can thus be semi-blindly identified or estimated, further allowing estimation of the original image. In this paper, the susceptibility of the proposed DCT-based adaptive models to attacks is demonstrated on multiple digital images. The LMS-based correlation detection is shown to be more robust than simple correlation-based detection.
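A minimal sketch of DCT-domain spread-spectrum embedding with plain correlation detection, the baseline that the adaptive LMS enhancement above improves on (the adaptive models themselves are not reproduced here; the band, strength and "attack" are illustrative choices).

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(4)
image = rng.uniform(0, 255, size=(256, 256))              # stand-in for a real image

C = dctn(image, norm='ortho')
mask = np.zeros_like(C, dtype=bool)
mask[8:40, 8:40] = True                                   # mid-frequency band (illustrative)
w = rng.choice([-1.0, 1.0], size=mask.sum())              # pseudo-random watermark sequence

alpha = 2.0                                               # embedding strength
C_marked = C.copy()
C_marked[mask] += alpha * w
marked = idctn(C_marked, norm='ortho')

# Simple correlation detector on the (possibly attacked) image
received = marked + rng.normal(0, 2.0, size=marked.shape) # additive-noise "attack"
corr = np.dot(dctn(received, norm='ortho')[mask], w) / mask.sum()
print("detector response:", corr, "(compare against a threshold, e.g. alpha/2)")
```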
Watermarking aims at enforcing property rights for digital video: a mark is imperceptibly (transparently) embedded into the original data. The true owner is identified by detecting this mark. Robust watermarking techniques allow the mark to be detected even when the protected video is attacked. Transparency and robustness constraints restrict the mark size: the better the transparency and robustness, the smaller the data payload. The paper presents a method to evaluate the maximum quantity of information that can theoretically be inserted into the 2D-DCT coefficient hierarchy for a prescribed transparency and robustness. This approach relies on the noisy-channel model for watermarking. Within this mathematical framework, the maximal data payload is expressed by the channel capacity. As any capacity evaluation procedure requires an intimate knowledge of the noise sources, the paper first describes the developed statistical approach, which enables (1) proper handling of the inner dependency existing among successive frames in a video sequence, and (2) accurate verification of the Gaussian behaviour of each noise source. The experiments were carried out in partnership with the SFR mobile service provider in France (Vodafone group).
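A minimal sketch of the capacity viewpoint: if each 2D-DCT coefficient is modelled as a parallel additive Gaussian channel with watermark power P_i (set by the transparency constraint) and noise power N_i (set by the robustness requirement), the theoretical payload is the summed Shannon capacity. The P_i and N_i below are synthetic placeholders, not measured values.

```python
import numpy as np

rng = np.random.default_rng(5)
n_coeff = 1024
P = rng.uniform(0.5, 2.0, n_coeff)        # allowed watermark power per coefficient
N = rng.uniform(1.0, 8.0, n_coeff)        # estimated attack-noise power per coefficient

capacity_bits = 0.5 * np.log2(1.0 + P / N).sum()          # sum of per-channel Gaussian capacities
print("theoretical payload for this coefficient hierarchy: %.1f bits" % capacity_bits)
```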
In this paper, we propose a blind watermarking method in which the watermark is chosen as the hologram of the signal to be embedded. In the proposed approach, the quantized phase of the hologram is embedded into an image using quantization index modulation (QIM). The robustness of the proposed technique is tested against several attacks such as filtering, compression, occlusion and cropping. The effects of quantization of the hologram on the reconstruction quality are also demonstrated.
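A minimal sketch of binary quantization index modulation, the embedding primitive named above (the hologram computation is not reproduced; the host samples, step size and noise level are synthetic): each host sample is quantized with one of two dither-shifted quantizers depending on the message bit, and the decoder picks the bit whose quantizer reconstruction lies closest to the received sample.

```python
import numpy as np

def qim_embed(host, bits, delta):
    shift = (bits - 0.5) * delta / 2.0                    # -delta/4 for bit 0, +delta/4 for bit 1
    return np.round((host - shift) / delta) * delta + shift

def qim_decode(received, delta):
    d0 = np.abs(received - qim_embed(received, np.zeros_like(received), delta))
    d1 = np.abs(received - qim_embed(received, np.ones_like(received), delta))
    return (d1 < d0).astype(int)                          # minimum-distance decoding

rng = np.random.default_rng(6)
host = rng.normal(0, 10, 1000)                            # e.g. carriers for quantized hologram phase
bits = rng.integers(0, 2, 1000)
marked = qim_embed(host, bits, delta=2.0)
noisy = marked + rng.normal(0, 0.3, 1000)                 # mild attack
print("bit error rate:", np.mean(qim_decode(noisy, 2.0) != bits))
```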
In recent years, several researchers have constructed novel neural network models based on lattice algebra. Because of computational similarities to operations in mathematical (image) morphology, these models are often called morphological neural networks. One neural model that has been successfully applied to many pattern recognition problems is the single-layer morphological perceptron with dendritic structure (SLMP). In this model, the fundamental computations are performed at dendrites connected to the body of a single neuron. Current training algorithms for the SLMP work by enclosing the target patterns in a set of hyperboxes orthogonal to the axes of the data space. This work introduces an alternate model of the SLMP, dubbed the synaptic morphological perceptron (SMP). In this model, each dendrite has one or more synapses that receive connections from the inputs. The SMP can learn any region of space determined by an arbitrary configuration of hyperplanes and is not restricted to forming hyperboxes during training. Thus, it represents a more general form of the morphological perceptron than previous architectures.
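A simplified illustration of the lattice (min/max) arithmetic performed at a dendrite in the hyperbox-based SLMP case; the SMP described above generalizes this to arbitrary hyperplane arrangements and is not reproduced here.

```python
import numpy as np

def dendrite_response(x, low, high):
    """>= 0 iff x lies inside the hyperbox [low, high] (componentwise).

    The response is a lattice meet (min) of affine terms x_i - low_i and high_i - x_i,
    which is the characteristic min/max arithmetic of morphological neural models.
    """
    return np.minimum(x - low, high - x).min()

def slmp_output(x, boxes):
    # Fire (+1) if any dendrite encloses x, otherwise -1 (a max over dendrite responses).
    return 1 if max(dendrite_response(x, lo, hi) for lo, hi in boxes) >= 0 else -1

boxes = [(np.array([0.0, 0.0]), np.array([1.0, 1.0])),
         (np.array([2.0, 2.0]), np.array([3.0, 4.0]))]
print(slmp_output(np.array([0.5, 0.2]), boxes),    # inside the first box -> +1
      slmp_output(np.array([1.5, 1.5]), boxes))    # outside both boxes -> -1
```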
Extended interpolatory approximation is discussed for some classes of n-dimensional statistical signals. First, we present two sufficient conditions for the optimum approximation. Then, as an example of this optimum approximation, we consider the approximation of n-dimensional statistical signals expressed as linear combinations of a finite set of base signals in an n-dimensional space. We assume that these signals have a generalized mutual moment smaller than a given positive number. A related topic was discussed in a previous paper; however, discrete running approximation along the time axis using shift-invariant interpolation functions with finite support was not treated there. In the final part of this paper, we discuss the best running approximation of n-dimensional signals expressed as linear combinations of a finite set of sinusoidal signals in an n-dimensional space. The presented methods have the minimum measure of approximation error among all linear and nonlinear approximations using the same measure of error and the same generalized sample values.
After a comprehensive overview of camera calibration algorithms, this paper introduces a robust calibration method based on contour matching of a pattern object. In contrast with state-of-the-art methods, no manual, tedious selection of particular pattern points is required. In order to evaluate the accuracy of the proposed approach, an objective comparison with three well-known methods (Faugeras and Toscani, Tsai, and Zhang) is presented and discussed in detail. Experiments show that the proposed robust approach outperforms existing techniques and yields accurate results even when calibration is performed from a single image.
Accurate and robust techniques for automated feature extraction (AFE) from remotely sensed imagery are an important area of research, with many applications in the civilian and military/intelligence arenas. Much work has been undertaken in developing sophisticated tools for performing these tasks. However, while many of these tools have been shown to perform quite well (such as the GENIE and Genie Pro software developed at LANL), they are not perfect. The classification algorithms produced often have significant errors, such as false alarms and missed detections. We describe some efforts at improving this situation in which we add a clutter-mitigation layer to our existing AFE software (Genie Pro). This clutter-mitigation layer takes as input the output of the preceding feature extraction (classification) layer and, using the same training data (pixels providing examples of the classes of interest) and machine-learning techniques similar to those used in the AFE layer, optimizes an image-processing pipeline aimed at correcting errors in the AFE output. While the AFE layer optimizes an image-processing pipeline that can combine spectral, logical, textural, morphological and other spatial operators, the clutter-mitigation layer is limited to a pool of morphological operators. The resulting clutter-mitigation algorithm is not only optimized for the particular feature of interest but is also co-optimized with the preceding feature extraction algorithm. We demonstrate these techniques on several feature extraction problems in various multi-spectral, remotely sensed images.
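A minimal sketch of the kind of morphological clean-up a clutter-mitigation layer might apply to a binary AFE classification mask: an opening removes small false alarms and a closing fills small missed-detection holes. This illustrates the operator pool only, not the evolved Genie Pro pipeline; the mask and structuring element are synthetic choices.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(7)
mask = np.zeros((128, 128), dtype=bool)
mask[40:90, 30:100] = True                                 # the "true" feature
mask |= rng.random(mask.shape) < 0.02                      # speckle false alarms
mask &= rng.random(mask.shape) > 0.02                      # pinhole missed detections

structure = np.ones((3, 3), dtype=bool)
cleaned = ndimage.binary_opening(mask, structure=structure, iterations=1)
cleaned = ndimage.binary_closing(cleaned, structure=structure, iterations=2)
print("pixels flagged before/after clean-up:", mask.sum(), cleaned.sum())
```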
The effectiveness of many hyperspectral feature extraction algorithms involving classification (and linear spectral unmixing) is dependent on the use of spectral signature libraries. If two or more signatures are roughly similar to each other, methods that use algorithms such as singular value decomposition (SVD) or least squares to identify the object will not work well. This is especially true for procedures combined with three-dimensional discrete wavelet transforms, which replace the signature libraries with their corresponding lowpass wavelet transform coefficients. In order to address this issue, alternate ways of transforming these signature libraries using bandpass or highpass wavelet transform coefficients from either wavelet or Walsh (Haar wavelet packet) transforms in the spectral direction will be described. These alternate representations of the data emphasize differences between the signatures, which leads to improved classification performance compared with existing procedures.
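A minimal sketch of the motivation above: compare the condition number of a signature-library matrix built from (a) the raw signatures and (b) their one-level Haar highpass (detail) coefficients taken along the spectral direction. The signatures here are synthetic, nearly collinear curves standing in for a real library.

```python
import numpy as np

rng = np.random.default_rng(8)
bands = 128
base = np.cumsum(rng.standard_normal(bands))              # a smooth-ish reference spectrum
library = np.stack([base + 0.05 * np.cumsum(rng.standard_normal(bands)) for _ in range(4)], axis=1)

def haar_detail(sig):
    # One-level Haar highpass along the spectral axis: scaled pairwise differences.
    return (sig[0::2] - sig[1::2]) / np.sqrt(2.0)

detail_library = np.apply_along_axis(haar_detail, 0, library)

print("condition number, raw signatures: %.1f" % np.linalg.cond(library))
print("condition number, Haar details:   %.1f" % np.linalg.cond(detail_library))
```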
Abundances of material components in objects are usually computed using techniques such as linear spectral unmixing applied to individual pixels captured by hyperspectral imaging devices. The effectiveness of these algorithms usually depends on how distinct the spectral signatures in the libraries they use are. This can be measured by SVD- or least-squares-based figures of merit, such as the condition number of the matrix consisting of the library signatures. However, each library signature is usually the mean of a number of signatures representing that material or class of objects. This variation of individual library spectral signatures in real-world situations needs to be addressed in order to more accurately assess linear unmixing techniques. The same considerations also apply to signature libraries transformed into new ones by wavelet or other transforms. Figures of merit incorporating variations within each library signature (which more accurately reflect real measurements) will be implemented and compared with figures of merit that do not take these variations into account.
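A minimal sketch of a variation-aware figure of merit in the spirit described above (not the paper's exact definition): instead of the condition number of the mean-signature library alone, sample one realization per class from an assumed within-class scatter many times and look at the spread of condition numbers. All data are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(9)
bands, classes, samples = 64, 3, 200
means = rng.standard_normal((bands, classes))              # mean library signatures
within_std = 0.2                                           # assumed within-class variability

cond_mean_library = np.linalg.cond(means)
conds = [np.linalg.cond(means + within_std * rng.standard_normal(means.shape))
         for _ in range(samples)]

print("condition number of mean library: %.1f" % cond_mean_library)
print("with within-class variation: median %.1f, 95th percentile %.1f"
      % (np.median(conds), np.percentile(conds, 95)))
```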
Hybrid variational image segmentation techniques, involving energy functionals which combine contour- and
region-based terms, have been actively investigated due to their ability to jointly integrate shape and texture cues
about scene objects. Minimizing these functionals can be efficiently achieved using curve evolution techniques,
yielding region competition models along the deforming segmentation boundaries. Within this framework, this
paper presents a novel region-based statistical active contour approach to segmentation, referred to as info-snakes.
Here, the segmentation problem is expressed as the maximization of an information-theoretic similarity measure
between the image luminance distribution, and the label distribution of a regional template defining a multi-object
geometric prior model, subject to regularization constraints on region boundaries. The probability densities
associated with luminance distributions within each template region are estimated using a nonparametric Parzen
technique, which avoids resorting to prior assumptions on image statistics or to a training phase. We shall
focus our attention on the Ali-Silvey class of information measures, and derive the corresponding gradient flows
over nonparametric smooth curve spaces. As expected, the evolution equations for the template boundaries
can be interpreted as a statistical region competition model, promoting statistically consistent regions relative to the
chosen information metrics. An efficient implementation using a multiphase level set technique is finally provided.
Experiments on a cardiac perfusion MRI dataset are presented, demonstrating the relevance of info-snakes for
implementing computer-assisted diagnosis tools in cardiology.
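A minimal sketch of the nonparametric (Parzen) density estimation step: the luminance distribution inside one template region is estimated with a Gaussian kernel, with no parametric assumption on the image statistics. Synthetic samples stand in for real cardiac MRI luminances.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(10)
region_luminance = np.concatenate([rng.normal(90, 8, 400),    # dominant tissue-like mode
                                   rng.normal(140, 12, 100)]) # partial-volume contamination

pdf = gaussian_kde(region_luminance)                       # Parzen window with Gaussian kernel
grid = np.linspace(0, 255, 256)
p = pdf(grid)
print("density integrates to ~1:", p.sum() * (grid[1] - grid[0]))
```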
The aim of this work is to model the apparent motion in image sequences depicting natural dynamic scenes
(rivers, sea waves, smoke, fire, grass, etc.) where some sort of stationarity and homogeneity of motion is present. We adopt the mixed-state Markov Random Field models recently introduced to represent so-called motion textures. The approach consists of describing the distribution of motion measurements that exhibit a mixed nature: a discrete component related to the absence of motion and a continuous part for measurements different from zero. We propose several extensions of the spatial schemes. In this context, Gibbs distributions are analyzed, and an in-depth study of the associated partition functions is carried out. Our approach is valid for
general Gibbs distributions. Some particular cases of interest for motion texture modeling are analyzed. This
is crucial for problems of segmentation, detection and classification. Then, we propose an original approach for
image motion segmentation based on these models, where normalization factors are properly handled. Results
for motion textures on real natural sequences demonstrate the accuracy and efficiency of our method.
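A minimal sketch of the mixed-state idea (not the full Markov random field model): a motion measurement is exactly zero with some probability (the discrete state) and otherwise follows a continuous density, here a Gaussian for illustration. The snippet fits both parts from synthetic measurements and evaluates the resulting mixed density.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 10000
is_null = rng.random(n) < 0.35                             # true probability of "no motion"
values = np.where(is_null, 0.0, rng.normal(1.2, 0.6, n))   # continuous part when moving

rho_hat = np.mean(values == 0.0)                           # mass of the discrete (null) state
moving = values[values != 0.0]
mu_hat, sigma_hat = moving.mean(), moving.std()

def mixed_density(v, rho, mu, sigma):
    # Density w.r.t. the mixed reference measure (Dirac mass at 0 + Lebesgue elsewhere).
    cont = np.exp(-0.5 * ((v - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return np.where(v == 0.0, rho, (1 - rho) * cont)

print("estimated (rho, mu, sigma):", round(rho_hat, 3), round(mu_hat, 3), round(sigma_hat, 3))
print("density at v=0 and v=mu:", mixed_density(np.array([0.0, mu_hat]), rho_hat, mu_hat, sigma_hat))
```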
The problem of inferring the two-dimensional location of an illuminant in an image, such as a digital photograph, has been studied as part of general scene analysis. One application area for analytical techniques and algorithms to this end is forensic image analysis, particularly the detection of tampering and composites. For instance, if algorithms applied to two figures or two sections of an image reveal that these regions were captured under different lighting directions, it is highly likely that the regions were captured separately and later digitally integrated into a composite image, that is, that the photograph has been tampered with. Recently some of these techniques have been applied to the study of realist master paintings and drawings in order to answer questions about the studio working methods of artists. We review several sources of visual information about the location of the illuminant, stressing those most relevant to problems in the analysis of art, specifically paintings and drawings. We then present the foundations of a Bayesian method for integrating estimates from different sources, and explore the application of such methods to the problem of inferring the location of the illuminant in realist paintings.
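A minimal sketch of one classical cue used in forensic lighting analysis, the 2D occluding-contour method, under a Lambertian assumption (not the Bayesian fusion itself): along an occluding contour the surface normal lies in the image plane, so the intensity I = n_x L_x + n_y L_y + A can be solved for the in-plane light direction (L_x, L_y) and ambient term A by linear least squares. The contour and intensities below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(12)
theta = np.linspace(-0.5, 1.5, 60)                         # contour sample positions (lit side only)
normals = np.column_stack([np.cos(theta), np.sin(theta)])  # known in-plane surface normals
L_true, ambient_true = np.array([0.8, 0.6]), 20.0
intensity = normals @ L_true * 150 + ambient_true + rng.normal(0, 2.0, theta.size)

A = np.column_stack([normals, np.ones_like(theta)])        # unknowns: scaled Lx, Ly and ambient
(Lx, Ly, ambient), *_ = np.linalg.lstsq(A, intensity, rcond=None)
angle = np.degrees(np.arctan2(Ly, Lx))
print("estimated light direction: %.1f deg, ambient: %.1f" % (angle, ambient))
```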
Follow-up of bronchial wall remodeling in asthmatics based on multi-detector computed tomography (MDCT) imaging is emerging as the modality of choice for therapy assessment. While providing statistically significant quantification of global variation before/after treatment, the employed cross-section-area estimation techniques cannot guarantee the absolute accuracy of point-by-point estimation. Such uncertainty comes from the impossibility of defining an accurate cross-sectional plane of a bronchus at locations where the notion of a central axis is questionable. In order to overcome
such limitation, this paper develops an original automated volumetric approach for bronchial wall quantification
involving a successive model-based 3D reconstruction of the inner and outer bronchial wall surfaces. The inner surface
is segmented by means of strong 3D morphological filtering and model-fitting. An optimal geometrico-topological
model is generated by using a restricted Delaunay triangulation approach. The model is then dynamically deformed in
the surface normal direction, under the constraint of local energy minimization acting at each evolving vertex. The energy balances a mesh-derived elastic component, which combines topological and geometric features to preserve shape regularity, against an expansion potential that exploits image characteristics. The deformation process both
adapts the mesh resolution and handles topology changes and auto-collisions. The developed 3D modeling stabilizes the
deformation at the level of the outer surface of the bronchial wall and provides robustness with respect to bronchus-blood
vessel contacts, where image data is irrelevant. The accuracy of the volumetric segmentation approach was
evaluated with respect to 3D mathematically-simulated phantoms of bronchial subdivisions. Comparisons with recent
2D techniques, carried out on simulated and real MDCT data, showed similar performance in cross-section wall area
quantification. The benefit of using volumetric versus cross-section area quantification is finally argued in the context
of bronchial reactivity and wall remodeling follow-up.
This paper documents the application of a new image processing algorithm, two-dimensional non-linear additive decomposition (NLAD), which is used to identify regions in a digital image whose gray-scale (or color) intensity is different from that of the surrounding background. Standard image segmentation algorithms exist that allow users to segment images based on gray-scale intensity and/or shape. However, these processing techniques do not adequately account for the image noise and lighting variation that typically occurs across an image. NLAD is designed to separate image noise and background from artifacts, thereby providing the ability to consistently evaluate images. The decomposition techniques used in this algorithm are based on the concepts of mathematical morphology. NLAD emulates the human capability of visually separating an image into different levels of resolution components, denoted as 'coarse', 'fine', and 'intermediate.' Very little resolution information overlaps any two of the component images. This method can easily determine and/or remove trends and noise from an image. NLAD has several additional advantages over conventional image processing algorithms, including no need for a transformation from one space to another, such as is done with Fourier transforms, and since only finite summations are required, the computational effort is neither extensive nor complicated.
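A minimal sketch of a morphology-based additive decomposition in the spirit of the coarse / intermediate / fine split described above (not the exact NLAD algorithm): grey-scale open-close smoothing at two scales yields three components that sum back to the original image by construction. The test image and structuring-element sizes are illustrative.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(13)
x, y = np.meshgrid(np.arange(256), np.arange(256))
image = 50 * np.sin(x / 40.0) + 30 * ((x - 128)**2 + (y - 128)**2 < 30**2) \
        + 5 * rng.standard_normal((256, 256))              # lighting trend + object + noise

def smooth(img, size):
    opened = ndimage.grey_opening(img, size=(size, size))
    return ndimage.grey_closing(opened, size=(size, size))

small, large = smooth(image, 3), smooth(image, 31)
fine = image - small                                       # mostly noise
intermediate = small - large                               # object-scale detail
coarse = large                                             # background / lighting trend
print("max reconstruction error:", np.abs(coarse + intermediate + fine - image).max())
```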
Nowadays multichannel (multi- and hyperspectral) remote sensing (RS) is widely used in different areas. One of the basic factors that can deteriorate original image quality and prevent retrieval of useful information from RS data is noise. Thus, image filtering is a typical stage of multichannel image pre-processing. Among known filters, the most efficient ones commonly require a priori information concerning the noise type and its statistical characteristics. This explains the great need for automatic (blind) methods for determining the noise type and its characteristics. Several such methods already exist, but the majority of them do not perform well if the analyzed images contain a large percentage of texture regions, details and edges. Besides, many blind methods are multistage and require a preliminary, sufficiently accurate estimate of noise variance for later stages. To get around the aforementioned shortcomings, we propose below a new method based on minimizing an inter-quantile distance to obtain sufficiently accurate estimates of noise variance. It is shown that, mathematically, this task can be formulated as finding the mode of a contaminated asymmetric distribution, a task that also arises in other applications. The efficiency of the proposed method is studied for a wide set of model distribution parameters. Numerical simulation results that confirm the applicability of the proposed approach are presented; they also allow evaluating the accuracy of the designed method. Recommendations on method parameter selection are given.
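A minimal sketch of the inter-quantile-distance idea (not the paper's tuned estimator): local variance estimates from a noisy textured image form a contaminated, right-skewed sample; the narrowest interval containing a fixed fraction of the sorted sample localizes the mode, which is taken as the noise-variance estimate. The data are simulated and the 20% fraction is an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(14)
true_var = 4.0
noise_only = true_var * rng.chisquare(30, 4000) / 30       # block variances in flat areas
textured = true_var + rng.gamma(2.0, 6.0, 1000)            # inflated estimates in texture/edges
local_vars = np.sort(np.concatenate([noise_only, textured]))

frac = 0.20
k = int(frac * local_vars.size)
widths = local_vars[k:] - local_vars[:-k]                  # inter-quantile distances
i = np.argmin(widths)                                      # narrowest window -> mode region
var_hat = local_vars[i:i + k].mean()
print("true noise variance %.2f, estimated %.2f" % (true_var, var_hat))
```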
Automatic inspection of small components on a loaded Printed Circuit Board (PCB) is difficult due to the requirements of precision and high speed. In this paper, an automatic inspection method is presented based on Singular Value Decomposition (SVD) and Support Vector Machine (SVM). For the image of a loaded PCB, we use the prior location of the component to obtain the approximate region of the small component. Then the accurate numeral region of the small component can be segmented using the projection data of this region. Next, the Singular Values (SVs) of the numeral region are obtained through SVD of the gray-scale image. These SVs are used as features of the small component to train an SVM classifier, and the automatic inspection is then completed using the trained SVM classifier. The method based on projection data can overcome some difficulties of the traditional method based on connected domains and reduces the complexity of template matching. The SVD avoids using a binary image to analyze the numerals, so the numeral information is retained as much as possible. Finally, the experimental results show that the proposed method is effective and feasible.
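A minimal sketch of the SV-feature idea: the leading singular values of a grey-level component patch serve as the feature vector for an SVM classifier separating "good" from "defective" components. The patches here are synthetic stand-ins, and the defect model and SVM settings are illustrative.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(15)

def make_patch(defective):
    patch = np.tile(np.linspace(0, 1, 32), (32, 1))        # a structured "numeral region"
    if defective:
        patch = patch + 0.5 * rng.random((32, 32))         # a defect disturbs the low-rank structure
    return patch + 0.05 * rng.standard_normal((32, 32))

def sv_features(patch, k=8):
    return np.linalg.svd(patch, compute_uv=False)[:k]      # leading singular values as features

X = np.array([sv_features(make_patch(i % 2 == 1)) for i in range(200)])
y = np.array([i % 2 for i in range(200)])

clf = SVC(kernel='rbf', C=10.0, gamma='scale').fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```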
As people read texts, their points of gaze can be described either as a sequence or as a pattern of dots. If reading fixations are visualized as a pattern and their duration is graphically attributed to the third dimension, image processing techniques can be employed to describe individual reading styles. Two reader groups, text editors and university students, were matched according to parametric tests; yet they showed marked inter-subject variability of fixation distribution when individual cases were considered. To illustrate this, we applied a simple "Coulomb law"-like model that takes both fixation duration and spacing into account. Further, the image entropy filter was shown to distinguish the readers by their extreme value points. Statistical averaging can level out differences in reading patterns that become apparent through image processing techniques.
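A minimal sketch of the entropy-filter step: fixations are rendered as a 2D map whose pixel values carry fixation duration (the "third dimension"), and a local entropy filter highlights where the fixation pattern is most irregular. The fixation data are simulated, and the 9x9 window and bin settings are illustrative choices.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(16)
fix_map = np.zeros((120, 200))
rows = rng.integers(0, 120, 300)                           # simulated fixation positions
cols = rng.integers(0, 200, 300)
fix_map[rows, cols] += rng.uniform(80, 400, 300)           # fixation durations in ms

def local_entropy(window, bins=8):
    hist, _ = np.histogram(window, bins=bins, range=(0, 400))
    p = hist[hist > 0] / hist.sum()
    return -(p * np.log2(p)).sum()

entropy_map = ndimage.generic_filter(fix_map, local_entropy, size=9)
peak = np.unravel_index(np.argmax(entropy_map), entropy_map.shape)
print("entropy extremum at (row, col):", peak, "value: %.2f bits" % entropy_map[peak])
```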