A well-known problem associated with super-resolution of imagery is the introduction of oscillatory artifacts into the super-resolved object estimate. In this paper, we derive a Maximum A-Posteriori (MAP) object estimator subject to a constraint on the norm of the first differences of adjacent object pixels. The posterior density is derived from a Poisson observation model and a Poisson prior for the object. It is shown that this optimization problem is identical to a MAP estimator with a Markov Random Field (MRF) prior using zero- and first-order neighborhood cliques. While most MRF priors used for image restoration neglect the singleton clique, our model incorporates this additional a-priori object knowledge into the estimator. An iterative solution to the optimization problem is developed using the Picard iteration technique. Results are presented which demonstrate substantial artifact reduction while achieving the bandwidth extension necessary to accomplish super-resolution. Test cases include extended objects and natural scenes.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks.
You are receiving this notice because your organization may not have SPIE eBooks access.*
*Shibboleth/Open Athens users: please sign in to access your institution's subscriptions.
To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
A new method for classification of multi-spectral data is proposed. This method is based on fitting mixtures of multivariate Gaussian components to training and unlabeled samples by using the EM algorithm. Through a backtracking search strategy with appropriate depth bounds, a series of mixture models are compared. The validity of the candidate models is evaluated by considering their description lengths and allocation rates. The most suitable model is selected and the multi-spectral data are classified accordingly. The EM algorithm is mapped onto a massively parallel computer system to reduce the computational cost. Experimental results show that the proposed algorithm is more robust against variations in training samples than the conventional supervised Gaussian maximum likelihood classifier.
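The core EM iteration the abstract relies on can be sketched in a few lines. This is a minimal illustration, not the paper's method: it fits a two-component, one-dimensional Gaussian mixture to synthetic data (standing in for one band of multi-spectral samples); the data, initial values, and component count are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 1-D data from two Gaussian components (illustrative stand-in
# for one spectral band of training/unlabeled samples).
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 1.0, 300)])

# Initial guesses for mixing weights, means, and variances.
w = np.array([0.5, 0.5])
mu = np.array([-1.0, 1.0])
var = np.array([1.0, 1.0])

def normal_pdf(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

for _ in range(50):
    # E-step: posterior responsibility of each component for each sample.
    dens = w * normal_pdf(x[:, None], mu, var)          # shape (n, 2)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from the responsibilities.
    nk = resp.sum(axis=0)
    w = nk / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
```

A model-selection loop in the spirit of the abstract would repeat this fit for several candidate component counts and compare their description lengths.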
We examine methods to assess the convergence of Markov chain Monte Carlo (MCMC) algorithms and to accelerate their execution via parallel computing. We propose a convergence measure based on the deviations between simultaneously running MCMC algorithms. We also examine the acceleration of MCMC algorithms when independent parallel samplers are used and report on some experiments with coupled samplers. As applications we use small Ising model simulations and a larger medical image processing algorithm.
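A toy version of a between-chain convergence measure can be sketched as follows. This is an illustrative stand-in, not the measure proposed in the paper: it runs a few independent random-walk Metropolis chains on a standard normal target (far simpler than an Ising model) and takes the spread of the chain means as the deviation statistic; step size, chain count, and starting points are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def metropolis_chain(n_steps, start):
    """Random-walk Metropolis sampler for a standard normal target."""
    x, out = start, []
    for _ in range(n_steps):
        prop = x + rng.normal(0, 1.0)
        # Accept with probability min(1, p(prop)/p(x)) for p = N(0, 1).
        if np.log(rng.random()) < 0.5 * (x * x - prop * prop):
            x = prop
        out.append(x)
    return np.array(out)

# Several simultaneously running chains from dispersed starting points.
chains = [metropolis_chain(5000, s) for s in (-10.0, 0.0, 10.0)]

def between_chain_deviation(chains):
    """Spread of the chain means: large before mixing, small after."""
    means = [c.mean() for c in chains]
    return max(means) - min(means)

early = between_chain_deviation([c[:50] for c in chains])
late = between_chain_deviation([c[-2000:] for c in chains])
```

Once the chains have forgotten their starting points, the deviation shrinks toward the Monte Carlo error, which is the behavior such a measure exploits.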
In this paper, we propose knowledge representation and evidence propagation schemes based on multivariate belief functions and present a medical image recognition system as an example to demonstrate their effectiveness in evidential reasoning. The multivariate belief functions, defined in a product space, are employed to represent domain-specific knowledge such as rules or propositions. The product space and its sub-spaces (margins) are composed of a set of compatible frames. The logical relationships among these margins can be easily defined by using multivariate belief functions. Propagation of evidence is executed by extending or marginalizing the associated multivariate belief function to those margins characterized by their logical relationships. By using the blackboard-based architecture and the profound features of D-S theory, the proposed image recognition system is capable of mimicking the reasoning process of a human expert in efficiently recognizing anatomical entities in a set of correlated x-ray computed tomography, proton density weighted, and T2-weighted magnetic resonance images. Additionally, the proposed schemes can also be applied to other problem domains by employing the appropriate knowledge base. Several experimental results are given to illustrate the performance of the proposed system.
This paper examines the influence of mutation on the behavior of Genetic Algorithms through a series of examples and experiments. From a traditional perspective, surprising results are obtained. Implications are discussed which point to the importance of open questions concerning Genetic Algorithms.
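The kind of experiment the paper describes can be reproduced in miniature. The sketch below is a generic bit-string GA on a ones-counting objective, assumed for illustration only (population size, tournament selection, and the 1/L mutation rate are conventional choices, not taken from the paper); it lets one compare runs with and without mutation.

```python
import random

random.seed(0)
N_BITS, POP, GENS = 32, 40, 60

def fitness(ind):
    return sum(ind)                 # "ones-counting" toy objective

def evolve(mutation_rate):
    pop = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP)]
    for _ in range(GENS):
        # Tournament selection of parents.
        parents = [max(random.sample(pop, 3), key=fitness) for _ in range(POP)]
        nxt = []
        for i in range(0, POP, 2):
            a, b = parents[i][:], parents[i + 1][:]
            cut = random.randrange(1, N_BITS)            # one-point crossover
            a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
            for child in (a, b):
                for j in range(N_BITS):                  # bit-flip mutation
                    if random.random() < mutation_rate:
                        child[j] ^= 1
                nxt.append(child)
        pop = nxt
    return max(fitness(ind) for ind in pop)

best_with = evolve(1.0 / N_BITS)    # typical "one over string length" rate
best_without = evolve(0.0)          # selection and crossover only
```

Comparing `best_with` and `best_without` over repeated runs illustrates how mutation restores alleles that selection and crossover alone can lose.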
A new stochastic optimization algorithm is introduced in which a pipeline of many biased stochastic procedures cooperates to concurrently sample the usual Boltzmann distribution at different temperatures. The convergence and efficiency of the pipeline algorithm are proved under certain conditions. Experimental confirmation is provided using seven standard test problems in nonlinear optimization.
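A minimal sketch of the idea of coupled samplers at several temperatures is given below. This follows the generic replica-exchange pattern rather than the specific pipeline algorithm of the paper; the test function, temperature ladder, and swap schedule are all assumptions made for the example.

```python
import math, random

random.seed(2)

def energy(x):
    # Multimodal test function with two wells near x = -2 and x = +2;
    # the 0.3*x tilt makes the well near -2 the global minimum.
    return (x * x - 4) ** 2 + 0.3 * x

temps = [4.0, 1.0, 0.25]                 # one Metropolis sampler per temperature
xs = [random.uniform(-4, 4) for _ in temps]
best_x = min(xs, key=energy)

for step in range(4000):
    for i, T in enumerate(temps):
        prop = xs[i] + random.gauss(0, 0.5)
        # Metropolis acceptance at temperature T.
        if random.random() < math.exp(min(0.0, -(energy(prop) - energy(xs[i])) / T)):
            xs[i] = prop
            if energy(xs[i]) < energy(best_x):
                best_x = xs[i]
    # Coupling: adjacent temperatures occasionally exchange states, so basins
    # found by the hot sampler propagate down to the cold one.
    if step % 10 == 0:
        i = random.randrange(len(temps) - 1)
        d = (1 / temps[i] - 1 / temps[i + 1]) * (energy(xs[i]) - energy(xs[i + 1]))
        if random.random() < math.exp(min(0.0, d)):
            xs[i], xs[i + 1] = xs[i + 1], xs[i]
```

The hot sampler crosses the barrier between wells; the exchanges let the cold sampler refine whichever basin is handed down.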
A disadvantage of using discrete-state Markov random field models of images is that optimal estimators for reconstruction problems require excessive and typically random amounts of computation. In one approach the key task is the computation of the conditional mean of the field given the data or equivalently the unconditional mean of the a posteriori field. In this paper we describe a hierarchy of deterministic parallelizable methods for such computations.
In this study, the principle of competitive learning is used to develop an iterative algorithm for image recovery and segmentation. Within the framework of Markov Random Fields, the image recovery problem is transformed into the problem of minimizing an energy function. A local update rule for each pixel is then developed in a stepwise fashion and is shown to be a gradient descent rule for an associated global energy function. The relationship of this update rule to Kohonen's update rule is shown. Quantitative measures of edge preservation and edge enhancement for synthetic images are introduced. Simulation experiments using this algorithm on real and synthetic images show promising results on smoothing within regions and also on enhancing the boundaries. Restoration results compare favorably with recently published results using Markov Random Fields and mean field approximation.
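The pixel-wise gradient descent on an MRF-style energy can be illustrated as follows. This is a generic quadratic-smoothness sketch, not the competitive-learning rule of the paper: the energy is a data term plus lambda times squared differences over 4-neighbor pairs, and every pixel moves along the negative gradient (lambda, the step size, and the toroidal boundary handling via np.roll are illustrative choices).

```python
import numpy as np

rng = np.random.default_rng(3)

# Piecewise-constant test image plus Gaussian noise.
clean = np.zeros((32, 32))
clean[:, 16:] = 1.0
noisy = clean + rng.normal(0, 0.3, clean.shape)

lam, step = 1.0, 0.05
f = noisy.copy()
for _ in range(200):
    # Gradient of E(f) = sum (f-g)^2 + lam * sum_{4-neighbor pairs} (f_i - f_j)^2.
    lap = (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
           np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f)
    grad = 2 * (f - noisy) - 2 * lam * lap
    f -= step * grad            # simultaneous local update at every pixel

mse_before = ((noisy - clean) ** 2).mean()
mse_after = ((f - clean) ** 2).mean()
```

A quadratic prior like this smooths across edges as well as noise, which is exactly the behavior that edge-preserving rules such as the one in the paper are designed to avoid.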
As we reported previously, learning in a multi-layered hard-limited perceptron can be formulated as a set of simultaneous linear inequalities. Solving these inequalities under a given training set then achieves the goal of learning in this system. If the dimension N of the input vector is much larger than the number M of different patterns to be learned, there is considerable freedom for the system to select a proper solution of the connection matrix. In most cases, even a single-layer perceptron will do the learning satisfactorily. This paper reports the results of some theoretical and experimental studies of this one-layered, hard-limited perceptron trained under the novel, one-step, noniterative learning scheme. In particular, some important properties of this novel learning system, such as automatic feature competition, domain of convergence, and robustness of recognition, are analyzed and discussed in detail.
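When N is much larger than M, a one-step, noniterative solution can be obtained by solving the pattern equations directly, for example with a minimum-norm pseudoinverse, as in this sketch (the dimensions, random patterns, and the particular choice of unit-margin targets are assumptions for illustration, not the paper's scheme):

```python
import numpy as np

rng = np.random.default_rng(4)

N, M = 64, 8                            # input dimension >> number of patterns
X = rng.normal(size=(M, N))             # M training patterns (rows)
labels = np.array([1, -1] * (M // 2))   # desired hard-limited outputs

# One-step, noniterative learning: pick unit targets with the desired signs
# and solve the underdetermined system X w = labels by the minimum-norm
# pseudoinverse solution; every inequality labels_m * (w . x_m) > 0 is then
# satisfied exactly.
w = np.linalg.pinv(X) @ labels

out = np.sign(X @ w)                    # hard-limited single-layer perceptron
```

Because the system is underdetermined, many other connection vectors also satisfy the inequalities; the pseudoinverse simply picks the minimum-norm one.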
Artificial neural net models have been studied for many years in the hope of achieving human-like performance in the fields of speech, image, and pattern recognition. For high performance and for controlling the size of the network, the input information must be preprocessed before being fed into the neural network. In this paper, a probabilistic spectral feature extraction technique (PSFET) with multiview spectral representations and its applications are described. During training and testing, the PSFET allows efficient extraction of useful information, in addition to producing an input vector size that gives the best classification performance for the following neural network. Experimental results indicate that the classification accuracy of the neural network increases when the PSFET is used at the input. The network also generalizes better.
In this paper, the idea of image compression using Boltzmann machines is developed. We first introduce the θ-transformation and show that it is complete for a certain class of images. We then show that the Boltzmann machine can implement the first few terms of the θ-transformation; therefore, Boltzmann machines can be used as image encoders. The compression ratio and distortion of this new image compression method are discussed.
This paper presents neural network models for storing terminating and cyclic temporal sequences of patterns under synchronous, sequential, and asynchronous dynamics. We use fully interconnected neural networks with asymmetric weight connections for synchronous and sequential dynamics, and a layered neural network with feedback for asynchronous dynamics. The networks were successfully implemented, and the number of patterns that could be stored and recalled was approximately 12% of the size of the patterns in the network.
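The standard construction for storing a cyclic temporal sequence with asymmetric Hebbian weights can be sketched as follows; the network size, sequence length, and loading level are illustrative, and this generic rule is not claimed to be the exact model of the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

n, T = 100, 5
# Random bipolar patterns forming a cyclic sequence p0 -> p1 -> ... -> p0.
seq = rng.choice([-1, 1], size=(T, n))

# Asymmetric Hebbian weights mapping each pattern to its successor:
# W = (1/n) * sum_t outer(p_{t+1}, p_t).
W = sum(np.outer(seq[(t + 1) % T], seq[t]) for t in range(T)) / n

x = seq[0].copy()
recalled = [x.copy()]
for _ in range(T):
    x = np.sign(W @ x)          # synchronous (parallel) update
    recalled.append(x.copy())

# At this low loading the network steps cleanly through the stored cycle.
ok = all((recalled[t] == seq[t % T]).all() for t in range(T + 1))
```

With 5 patterns over 100 neurons the crosstalk noise is small; pushing the loading toward the quoted ~12% figure is where recall starts to break down.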
In this paper, we study the global dynamics of winner-take-all (WTA) networks. These networks generalize Hopfield's networks to the case where competitive behavior is enforced within clusters of neurons while the interaction between clusters is modeled by cluster-to-cluster connectivity matrices. Under the assumption of intracluster and intercluster symmetric connectivity, we show the existence of Lyapunov functions that allow us to draw rigorous results about the long-term behavior for both the iterated-map and continuous-time dynamics of the WTA network. Specifically, we show that the attractors of the synchronous, iterated-map dynamics are either fixed points or limit cycles of period 2. Moreover, if the network connectivity matrix satisfies a weakened form of positive definiteness, limit cycles can be ruled out. Furthermore, we show that the attractors of the continuous-time dynamics are only fixed points for any connectivity matrix. Finally, we generalize the WTA dynamics to distributed networks of clustered neurons where the only requirement is that the input-output mapping of each cluster be the gradient map of a convex potential.
This paper deals with the classification of pen gestures using the learning vector quantization algorithm, a supervised learning technique. Both single-stroke and multi-stroke gestures are considered. The slope information from the strokes is extensively preprocessed before classification. The preprocessing and classification algorithms chosen help to obtain very high rates of gesture classification. This is especially true in the multi-stroke case. The recognition of the pen gestures is independent of their position, orientation, and size.
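The supervised update at the heart of learning vector quantization (the LVQ1 rule) can be sketched on synthetic two-dimensional features. Everything here (the two Gaussian "gesture" classes, one prototype per class, the learning rate) is invented for illustration; real gesture features would come from the preprocessed stroke slopes.

```python
import numpy as np

rng = np.random.default_rng(6)

# Two synthetic gesture-feature classes (illustrative 2-D features).
X0 = rng.normal([-2, 0], 0.5, size=(100, 2))
X1 = rng.normal([2, 0], 0.5, size=(100, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)

# One codebook vector per class, initialized roughly.
codebook = np.array([[-0.5, 0.0], [0.5, 0.0]])
cls = np.array([0, 1])

alpha = 0.05
for epoch in range(20):
    for i in rng.permutation(len(X)):
        j = np.argmin(((codebook - X[i]) ** 2).sum(axis=1))  # nearest prototype
        # LVQ1 rule: attract the prototype on a correct label, repel otherwise.
        sign = 1.0 if cls[j] == y[i] else -1.0
        codebook[j] += sign * alpha * (X[i] - codebook[j])

# Classify by nearest codebook vector.
pred = cls[np.argmin(((X[:, None] - codebook[None]) ** 2).sum(-1), axis=1)]
accuracy = (pred == y).mean()
```

Position/orientation/size invariance, as in the paper, would come from the preprocessing stage rather than from LVQ itself.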
A spatio-temporal method for identifying objects contained in an image sequence is presented. The Hidden Markov Model (HMM) technique is used as the classification algorithm, making classification decisions based on a spatio-temporal sequence of observed object features. A five-class problem is considered. Classification accuracies of 100% and 99.7% are obtained for sequences of images generated over two separate regions of viewing positions. HMMs trained on image sequences of the objects moving in opposite directions showed a 98.1% successful classification rate by class and direction of movement. The HMM technique proved robust to image corruption with additive correlated noise and had a higher accuracy than a single-look nearest-neighbor method. A real image sequence of one of the objects used was successfully recognized with the HMMs trained on synthetic data. This study shows that the temporal changes that observed feature vectors undergo due to object motion hold information that can yield superior classification accuracy compared with single-frame techniques.
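Classification by HMM likelihood, as described above, amounts to scoring an observation sequence under each class model with the forward algorithm and picking the largest log-likelihood. The sketch below uses two invented two-state discrete HMMs; the transition and emission values and the observation sequence are illustrative only.

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

# Two toy object classes with different temporal feature dynamics.
pi = np.array([0.5, 0.5])
A1 = np.array([[0.9, 0.1], [0.1, 0.9]])   # slowly changing features
A2 = np.array([[0.2, 0.8], [0.8, 0.2]])   # rapidly alternating features
B = np.array([[0.9, 0.1], [0.1, 0.9]])

obs = [0, 0, 0, 1, 1, 1, 1, 0, 0, 0]      # slowly varying observation sequence
scores = [forward_loglik(obs, pi, A, B) for A in (A1, A2)]
pred = int(np.argmax(scores))
```

The slowly varying sequence scores higher under the slow-dynamics model, which is the temporal information a single-frame classifier cannot use.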
The most successful methods to stabilize inverse ill-posed problems in visual reconstruction use a priori information on the local regularity of the image as well as constraints on the geometry of the discontinuities. A commonly used method to incorporate prior knowledge into the problem is to adopt a Bayesian approach in which the image is modelled by a parametric Gibbs prior and the solution is obtained by minimizing the resulting posterior energy function (MAP estimate). However, this approach presents two major difficulties: the first is related to the non-convexity of the function to be optimized; the second to the choice of the model parameters that best fit the available prior knowledge. Since these parameters strongly affect the quality of the reconstructions, their selection is a critical task. They are usually determined empirically by trial and error. The paper proposes a generalized Boltzmann Machine which makes it possible to learn the most appropriate parameters for a given class of images from a series of examples. The trained Boltzmann Machine is then used to implement an annealing scheme for the minimization of the non-convex posterior energy. The method is applied to the restoration of piecewise smooth images.
Within the framework of pattern recognition via Markov random field modelling, we propose three methods for estimating the topological and statistical parameters characterizing the model, namely clique orders, anisotropy indices, weighting coefficients between cliques of various orders, coefficients of polynomial potential functions, and temperature. The developed approaches successively exploit local information associated with conditional probability distributions, a similarity criterion expressed as a distance in variations between appropriate probability distributions, standard least-squares estimation, and renormalization theory. Extensive experiments performed on a variety of synthetic images have established the relevance and accuracy of the proposed methods. Their performance is further demonstrated within the framework of urban-area segmentation in SPOT images.
We define a methodology for aligning multiple, three-dimensional, magnetic-resonance observations of the human brain over six degrees of freedom. The observations may be taken with disparate resolutions, pulse sequences, and orientations. The alignment method is a practical combination of off-line and interactive computation. An off-line computation first automatically performs a robust surface extraction from each observation. Second, an operator works interactively on a graphics workstation to produce the alignment. For our experiments, we were able to complete both alignment tasks interactively, due to the quick execution of our implementation of the off-line computation on a highly-parallel supercomputer. To assess the accuracy of an alignment, we also propose a consistency measure.
This work investigates the application of evolutionary programming for automatically configuring neural network architectures for pattern classification tasks. The evolutionary programming search procedure implements a parallel nonlinear regression technique and represents a powerful method for evaluating a multitude of neural network model hypotheses. The evolutionary programming search is augmented with the Solis & Wets random optimization method thereby maintaining the integrity of the stochastic search while taking into account empirical information about the response surface. A network architecture is proposed which is motivated by the structures generated in projection pursuit regression and the cascade-correlation learning architecture. Results are given for the 3-bit parity, normally distributed data, and the T-C classifier problems.
Fractal-based image processing has been applied in many areas in recent years. Many researchers have discussed its application to feature detection, texture segmentation, obtaining 3D information, etc. The fractional Brownian random field (FBR) is the basic fractal image model, but it presents some problems. First, FBR is isotropic, whereas natural images are generally anisotropic. Second, FBR is nonstationary, which makes it difficult to process. To address these problems, we present a new fractal image model, the Discrete Fractional Brownian Wavelet Random Field (DFBWR), which is the wavelet transform of FBR. After giving the definition of DFBWR, we discuss some of its important properties. The parameter H can be estimated from the DFBWR, and textures can be segmented according to the estimated H values. Finally, we present an experimental texture segmentation result.
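One reason the wavelet domain makes H estimation convenient is that, for fractional Brownian motion, detail-coefficient variance scales as 2^(j(2H+1)) across octaves j. The sketch below checks this on ordinary Brownian motion (H = 0.5), using a plain Haar transform and a log2-variance regression; it is a generic estimator, not the DFBWR construction of the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

# Ordinary Brownian motion (H = 0.5) as an fBm test signal.
x = np.cumsum(rng.normal(size=2 ** 14))

detail_vars, a = [], x
for j in range(6):
    d = (a[0::2] - a[1::2]) / np.sqrt(2)   # Haar detail coefficients, level j+1
    a = (a[0::2] + a[1::2]) / np.sqrt(2)   # Haar approximation, next level
    detail_vars.append(d.var())

# For fBm, Var(d_j) ~ 2^{j(2H+1)}: the slope of log2-variance vs. level
# gives an estimate of 2H + 1.
levels = np.arange(1, 7)
slope = np.polyfit(levels, np.log2(detail_vars), 1)[0]
H = (slope - 1) / 2
```

A texture-segmentation scheme in the spirit of the abstract would run such an estimate in local windows and group pixels by their estimated H.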
A signal classifier is presented whose features consist of characteristic frequencies/resonances of the signal time series over the observation window. Signals of interest here include those which can be well approximated by a relatively small number of sinusoids, i.e., harmonic signals, impulse responses from various types of objects, etc. The relevant features of these signals are their principal frequencies and the corresponding decay coefficients and amplitudes (triplets). These are extracted from the time series by an application of the CTLS method. A nearest-neighbor association is implemented using a maximum entropy distance measure between the extracted feature triplet vectors and a set of characteristic triplets for each signal type. This assigns the extracted triplets to the characteristic triplets of each signal type.
This paper expands on previous research efforts for superresolving unresolved closely-spaced objects (CSOs) present in IR focal plane data via model-based signal processing techniques. It has been shown that a model-based maximum likelihood estimation technique attains the Cramer-Rao theoretical lower bound on source position and intensity, and it is used here for resolving unresolved targets. We present a Neyman-Pearson log-likelihood ratio receiver structure for detecting the presence of a single unresolved target (non-CSO) versus the presence of two CSOs. We derive analytical expressions for the receiver operating characteristic (ROC) curves of the proposed receiver structure. For a given false alarm rate (i.e., declaring the presence of a two-source CSO scenario when a single-source non-CSO is present), the Neyman-Pearson receiver maximizes the probability of detection. With simulated two-source CSO data, we present a partial verification of the ROC curves.
Image differencing based inspection allows the comparison of a prior reference image with a subsequent inspection image to detect changes that can be attributed to flaw-induced damage between inspection periods. In our application, the inspection system is required to find new flaws in orbiting space structures that might arise as a result of micro-meteorites, space debris, atomic oxygen damage, etc. Direct differencing of these `before' and `after' signals for the flaw detection problem is complicated by changes that are not the result of flaws. These benign changes result from image noise, mis-registration effects induced by non-repeatability in the imaging viewpoint at each inspection period, and variability of ambient illumination resulting from the change of solar angle induced by orbital motion. In this paper we discuss the methods by which some of these effects are handled and suggest a modeling framework using statistical mechanics techniques.
Fractal characterization of signals is well suited in analysis of some time series data and in classification of natural shapes and textures. A maximum likelihood estimator is used to measure the parameter H which is directly related to the fractal dimension. The robustness of the estimator and the performance of the method are demonstrated on datasets generated using a variety of techniques. Finally the characterization is used in segmentation of composite images of natural textures.
Noise is typically present in the input signal for perception problems. Noise arises in speech recognition due to both background sounds and unintentional deviations from the intended utterance on the part of the speaker. The task of speech recognition is to correctly identify the words (or meaning) carried by the speech signal. Thus the speech recognizer must be able to successfully handle noise. We describe here a method of explicitly identifying and labeling noise elements in a speech signal. NOISE hypotheses are generated, and considered for acceptance, as part of an abductive inference strategy for speech processing. An abductive problem solver is able to treat noise within a unified inferential framework, treating noise hypotheses similarly to other hypotheses, weighing the explanatory alternatives in a context-sensitive manner, with no need to resort to indirect methods to achieve noise tolerance.
An adaptive image coding scheme is introduced in this paper. The proposed method explores the relationship between the wavelet transform and statistical texture analysis, decomposing an image into octave signal sub-bands and utilizing statistical texture information as a guiding tool to control the levels of signal decomposition needed in coding, in order to achieve a high compression ratio and image reconstruction of superior quality.
This paper presents an applied, practical comparison of Bayesian and Dempster-Shafer techniques useful for managing uncertainty in sensing. Three formulations of the same example are presented: a Bayesian, a naive Dempster-Shafer, and a Dempster-Shafer approach using a refined frame of discernment. Both the Bayesian and the Dempster-Shafer (with a refined frame of discernment) formulations yield similar results; however, information content and representations differ between the two methods. Bayesian theory requires a more explicit formulation of conditioning and of the prior probabilities of events. Dempster-Shafer theory embeds conditioning information into its belief function and does not rely on prior knowledge, making it appropriate for situations where it is difficult to collect or posit such probabilities, or to isolate their contribution.
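The Dempster-Shafer side of such a comparison rests on Dempster's rule of combination, which can be written compactly over set-valued focal elements. The sketch below combines two invented sensor mass functions on a two-element frame; the numbers and the "flaw / no_flaw" frame are illustrative, not the paper's example.

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination for mass functions whose focal
    elements are frozensets; mass on the empty set is renormalized away."""
    out, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            out[inter] = out.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    return {k: v / (1 - conflict) for k, v in out.items()}

# Frame of discernment {flaw, no_flaw}; two sensor reports as mass functions.
F, N = frozenset({"flaw"}), frozenset({"no_flaw"})
Theta = F | N
m1 = {F: 0.6, Theta: 0.4}            # sensor 1: partial support for "flaw"
m2 = {F: 0.7, N: 0.1, Theta: 0.2}    # sensor 2

m = combine(m1, m2)
belief_flaw = m[F]                   # belief in the singleton "flaw"
```

Note that, unlike a Bayesian prior, the mass on Theta expresses ignorance directly: it commits belief to neither alternative until further evidence arrives.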
Gibbs sampling, and other stochastic simulation methods, have recently received considerable attention in Bayesian statistics. Significant progress has been made in the areas of developing techniques for sampling from non-conjugate distributions, and analyzing theoretical and practical aspects relating to convergence. One of the strengths of Gibbs sampling is the way it can simplify the expression of data models by replacing the evaluation of the integrals needed to compute the relevant posterior quantities with sampling from multidimensional distributions. This has opened up the way to solving complex Bayesian models that are not analytically tractable. In this paper we show how to separate variability in model parameters from variability due to the model extraction process by fitting hierarchical models to image sequences using Gibbs sampling. First, we review some of the recent developments in Gibbs sampling. Then we describe some of our experimental work using Gibbs sampling to extract geometric parameter distributions from industrial images.
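The mechanics of Gibbs sampling are easy to show on a case where the full conditionals are known in closed form. The sketch below samples a correlated bivariate normal by alternating the two univariate conditional draws (the correlation, chain length, and burn-in are arbitrary illustrative choices) and checks that the empirical covariance approaches the target correlation.

```python
import math, random

random.seed(8)

rho = 0.8   # target: zero-mean, unit-variance bivariate normal, correlation rho

x, y = 0.0, 0.0
xs, ys = [], []
for i in range(20000):
    # Full conditionals of the bivariate normal are univariate normals:
    # x | y ~ N(rho*y, 1 - rho^2) and symmetrically for y | x.
    x = random.gauss(rho * y, math.sqrt(1 - rho * rho))
    y = random.gauss(rho * x, math.sqrt(1 - rho * rho))
    if i >= 1000:               # discard burn-in
        xs.append(x)
        ys.append(y)

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / n
```

In hierarchical models, as in the paper, the same alternation runs over each parameter's full conditional given all the others, with no integral evaluated explicitly.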
Numerical computation with Bayesian posterior densities has recently received much attention both in the statistics and computer vision communities. This paper explores the computation of marginal distributions for models that have been widely considered in computer vision. These computations can be used to assess homogeneity for segmentation, or for model selection. In particular, we discuss computation methods that apply to a Markov random field formulation, implicit polynomial surface models, and parametric polynomial surface models, and present some demonstrative experiments.