Building detection has been a prominent topic in image classification. Most research efforts are tailored to the specific application requirements and available datasets. Our dataset includes aerial orthophotos (20 cm spatial resolution), a DSM generated from LiDAR (1 m spatial resolution and 20 cm elevation resolution) and a DTM (2 m spatial resolution) covering an area of Athens, Greece. Our aim is to classify these data by means of Markov Random Fields (MRFs) in a Bayesian framework for building block extraction, and to perform a comparative analysis with other supervised classification techniques, namely Feed-Forward Neural Networks (FFNN), Cascade-Correlation Neural Networks (CCNN), Learning Vector Quantization (LVQ) and Support Vector Machines (SVM). We evaluated the performance of each method on a subset of the test area. We present the classified images and statistical measures (confusion matrix, kappa coefficient and overall accuracy). Our results demonstrate that MRFs and the FFNN perform better than the other methods.
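The abstract does not detail how the MRF labelling is optimised; as a purely illustrative sketch (all function names and parameters below are ours, not the paper's), an ICM relaxation with a Potts smoothness prior over per-pixel class costs is one standard way such an MRF classification can be run:

```python
import numpy as np

def icm_mrf_classify(unary, beta=1.5, n_iter=10):
    """Minimal ICM relaxation for MRF-based pixel classification.

    unary : (H, W, K) array of per-pixel, per-class negative log-likelihoods
            (e.g. from a model fitted on orthophoto + DSM/DTM bands).
    beta  : Potts smoothness weight penalising disagreement with the
            4-connected neighbourhood.
    """
    labels = unary.argmin(axis=2)                 # start from the ML labelling
    H, W, K = unary.shape
    for _ in range(n_iter):
        for i in range(H):
            for j in range(W):
                cost = unary[i, j].astype(float)
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W:
                        # Potts prior: pay beta for each disagreeing neighbour
                        cost += beta * (np.arange(K) != labels[ni, nj])
                labels[i, j] = cost.argmin()
    return labels
```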
We investigate in this paper the combination of DBN (Dynamic Bayesian Network) classifiers, either
independent or coupled, for the recognition of degraded characters. The independent classifiers are a
vertical HMM and a horizontal HMM whose observable outputs are the image columns and the image
rows respectively. The coupled classifiers, presented in a previous study, associate the vertical and
horizontal observation streams into single DBNs. The scores of the independent and coupled classifiers
are then combined linearly at the decision level. We compare the different classifiers (independent,
coupled or linearly combined) on two tasks: the recognition of artificially degraded handwritten digits
and the recognition of real degraded old printed characters. Our results show that coupled DBNs
perform better on degraded characters than the linear combination of independent HMM scores. Our
results also show that the best classifier is obtained by linearly combining the scores of the best coupled
DBN and the best independent HMM.
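As an illustration of the decision-level fusion described above, a minimal sketch assuming each classifier returns per-class scores such as log-likelihoods; the mixing weight and names are illustrative, not taken from the study:

```python
import numpy as np

def combine_scores(scores_a, scores_b, alpha=0.5):
    """Linear decision-level fusion of two classifiers.

    scores_a, scores_b : (n_classes,) arrays of per-class scores
                         (e.g. log-likelihoods from two DBN/HMM classifiers).
    alpha              : mixing weight, typically tuned on a validation set.
    Returns the index of the winning class.
    """
    combined = alpha * np.asarray(scores_a) + (1.0 - alpha) * np.asarray(scores_b)
    return int(np.argmax(combined))
```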
It is well-known that Total Variation (TV) minimization with L2 data fidelity terms (which corresponds to white
Gaussian additive noise) yields a restored image which presents some loss of contrast. The same behavior occurs
for TV models with non-convex data fidelity terms that represent speckle noise. In this note we propose a new
approach to cope with the restoration of Synthetic Aperture Radar images while preserving the contrast.
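For reference, the TV model with an L2 data fidelity term mentioned above is the classical Rudin-Osher-Fatemi formulation; the speckle fidelity shown alongside it is one common non-convex choice (of Aubert-Aujol type), given purely for illustration and not necessarily the exact term used in this note:

```latex
\min_{u}\; \int_{\Omega} |\nabla u|\,dx \;+\; \frac{\lambda}{2}\int_{\Omega} (u-f)^2\,dx
\qquad \text{(Gaussian fidelity)}
```
```latex
\min_{u>0}\; \int_{\Omega} |\nabla u|\,dx \;+\; \lambda \int_{\Omega} \Big(\log u + \frac{f}{u}\Big)\,dx
\qquad \text{(speckle fidelity)}
```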
We investigate in this paper the application of dynamic Bayesian networks (DBNs) to the recognition
of handwritten digits. The main idea is to couple two separate HMMs into various architectures. First,
a vertical HMM and a horizontal HMM are built observing the evolving streams of image columns and
image rows respectively. Then, two coupled architectures are proposed to model interactions between
these two streams and to capture the 2D nature of character images. Experiments performed on the
MNIST handwritten digit database show that coupled architectures yield better recognition performance
than non-coupled ones. Additional experiments conducted on artificially degraded (broken)
characters demonstrate that coupled architectures cope better with such degradation than non-coupled
ones and than discriminative methods such as SVMs.
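To make the two observation streams concrete, a minimal sketch that turns a character bitmap into the column sequence seen by the vertical HMM and the row sequence seen by the horizontal HMM; the raw-pixel features are an assumption of ours, since the abstract does not specify the observations:

```python
import numpy as np

def column_and_row_streams(image):
    """Build the two observation streams for a character image.

    image : (H, W) grayscale array, e.g. a 28x28 MNIST digit.
    Returns (columns, rows): the vertical HMM observes the W column vectors
    left to right, the horizontal HMM observes the H row vectors top to bottom.
    """
    image = np.asarray(image, dtype=float)
    columns = [image[:, j] for j in range(image.shape[1])]   # length-H vectors
    rows = [image[i, :] for i in range(image.shape[0])]      # length-W vectors
    return columns, rows
```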
A method for speckle reduction and segmentation of synthetic aperture radar (SAR) images is presented here. As a first step we consider a preclassification into a specific number of labels (classes). The second step (final classification) is a relaxation labeling process. The method can be considered a fast unsupervised technique. We have worked with 3-look simulated and real ERS-1 amplitude images.
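The relaxation labeling step is only named above; a minimal, illustrative sketch of a standard probabilistic relaxation update, with the compatibility term reduced to neighbour agreement (a simplification of ours, not the paper's scheme):

```python
import numpy as np

def relaxation_labeling(prob, n_iter=10, eps=1e-12):
    """Simple probabilistic relaxation on a (H, W, K) label-probability map.

    Each pixel's class probabilities are reinforced by the averaged
    probabilities of its 4-connected neighbours, then renormalised.
    """
    p = np.asarray(prob, dtype=float).copy()
    for _ in range(n_iter):
        q = np.zeros_like(p)                    # neighbour support per class
        q[1:, :, :] += p[:-1, :, :]
        q[:-1, :, :] += p[1:, :, :]
        q[:, 1:, :] += p[:, :-1, :]
        q[:, :-1, :] += p[:, 1:, :]
        q /= 4.0
        p = p * (1.0 + q)                       # reinforce compatible labels
        p /= p.sum(axis=2, keepdims=True) + eps # renormalise per pixel
    return p.argmax(axis=2)
```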
A generalized Gaussian model that can be used for speckle reduction and restoration of synthetic aperture radar (SAR) images is presented here. We have worked with 3-look simulated and real ERS-1 amplitude images. A MAP approximation of the a posteriori log-likelihood distribution is given, and the results of a local deterministic estimator are presented.
An iterative optimization algorithm which can be used for speckle reduction and segmentation of synthetic aperture radar (SAR) images is presented here. The method comprises a fast restoration as a first step and the segmentation as a second one.
We have worked with 3-look simulated and real ERS-1 amplitude images. The iterative filter is based on a membrane-model Markov random field (MRF) approximation optimized by a synchronous local iterative method (SLIM). The final form of the restoration gives a total sum preserving regularization (TSPR).
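The synchronous local iterative method is not spelled out in the abstract; a minimal sketch of one possible membrane-model update, in which every pixel simultaneously moves to the minimiser of its local quadratic (data plus smoothness) cost. The parameter names and neighbourhood choice are illustrative:

```python
import numpy as np

def membrane_filter(y, lam=1.0, n_iter=50):
    """Synchronous sweeps for a membrane-model MRF approximation.

    y   : observed (H, W) amplitude image (e.g. 3-look SAR).
    lam : smoothness weight of the membrane (quadratic) prior.
    Each sweep solves the local quadratic problem at every pixel at once.
    """
    y = np.asarray(y, dtype=float)
    x = y.copy()
    for _ in range(n_iter):
        s = np.zeros_like(x)   # sum of 4-connected neighbours
        n = np.zeros_like(x)   # number of neighbours (fewer at the borders)
        s[1:, :] += x[:-1, :]; n[1:, :] += 1
        s[:-1, :] += x[1:, :]; n[:-1, :] += 1
        s[:, 1:] += x[:, :-1]; n[:, 1:] += 1
        s[:, :-1] += x[:, 1:]; n[:, :-1] += 1
        # closed-form minimiser of (x - y)^2 + lam * sum_neighbours (x - x_n)^2
        x = (y + lam * s) / (1.0 + lam * n)
    return x
```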
The success of the Richardson-Lucy (RL) algorithm is that it forces the restored image to be non-negative and to conserve the global flux at each iteration. The problem with the RL algorithm is that it produces solutions that are highly unstable, with high peaks and deep valleys. Our aim is to modify the RL algorithm in order to regularize it while preserving positivity and total photometry as far as possible. Data instances that are not compatible with the others can cause singularities in the restoration solution. We therefore have an ill-posed problem, and a regularization method is needed to replace it with a well-posed one. The regularization approach overcomes this difficulty by choosing, among the possible objects, a 'smooth' one that approximates the data. The basic underlying idea in most regularization approaches is the incorporation of a priori knowledge into the restoration. In this article we present a simple method of spatial regularization derived from the RL algorithm in order to overcome the problem of noise amplification during the image reconstruction process. It is very important in astronomy and remote sensing to regularize images while keeping their photometric behavior under control. We propose a new reconstruction method preserving both the global photometry and local photometric aspects.
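For reference, the unregularised RL iteration that this article sets out to modify can be sketched as follows; the FFT-based convolution and the flat initial guess are implementation choices of ours, not taken from the text:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(data, psf, n_iter=30, eps=1e-12):
    """Plain Richardson-Lucy deconvolution.

    Each multiplicative update keeps the estimate non-negative and, for a
    normalised PSF, approximately conserves the total flux; without
    regularisation the iterates eventually amplify noise.
    """
    data = np.asarray(data, dtype=float)
    psf = np.asarray(psf, dtype=float)
    psf = psf / psf.sum()
    psf_mirror = psf[::-1, ::-1]
    x = np.full_like(data, data.mean())          # flat initial guess
    for _ in range(n_iter):
        blurred = fftconvolve(x, psf, mode="same")
        ratio = data / (blurred + eps)
        x = x * fftconvolve(ratio, psf_mirror, mode="same")
    return x
```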
Roger Fjortoft, Jean-Marc Boucher, Yves Delignon, Rene Garello, Jean-Marc Le Caillec, Henri Maitre, Jean-Marie Nicolas, Wojciech Pieczynski, Marc Sigelle, Florence Tupin
Due to the enormous quantity of radar images acquired by satellites and through shuttle missions, there is an evident need for efficient automatic analysis tools. This article describes unsupervised classification of radar images in the framework of hidden Markov models and generalised mixture estimation. In particular, we show that hidden Markov chains, based on a Hilbert-Peano scan of the radar image, are a fast and efficient alternative to hidden Markov random fields for parameter estimation and unsupervised classification. We also describe how the distribution families and parameters of classes with homogeneous or textured radar reflectivity can be determined through generalised mixture estimation. Sample results obtained on real and simulated radar images are presented.
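The Hilbert-Peano scan referred to above maps the 2-D image onto a 1-D pixel chain on which the hidden Markov chain is defined; a minimal sketch for square images whose side is a power of two (a restriction assumed here for simplicity):

```python
import numpy as np

def hilbert_d2xy(n, d):
    """Convert position d along the Hilbert curve to (x, y) on an n x n grid,
    with n a power of two (standard bit-twiddling construction)."""
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                       # rotate/flip the quadrant if needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def hilbert_scan(image):
    """Flatten a square image (side = power of two) along the Hilbert-Peano curve."""
    image = np.asarray(image)
    n = image.shape[0]
    return [image[hilbert_d2xy(n, d)] for d in range(n * n)]
```

The resulting 1-D sequence preserves spatial locality, which is what makes the hidden Markov chain a reasonable fast substitute for a hidden Markov random field.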
In 1996, the French research group ISIS* proposed a research initiative in the field of radar imaging. One of its purposes is to study the specificities of multitemporal SAR (Synthetic Aperture Radar) images. This paper presents some results of research undertaken in the multitemporal workgroup. Two different kinds of multitemporal SAR data are distinguished: temporal sequences of spatial images, and images containing temporal information such as interferograms. Different types of image processing correspond to each of them. The multitemporal information may be used either to enhance static information (multitemporal filtering) or to study temporal evolution (change detection, temporal tracking of structures). Two different restoration methods are proposed to enhance amplitude data using sequences of SAR images. A model for detecting and tracking non-rigid environmental structures (such as the coastline) is then explained. Lastly, a segmentation method for phasimetric effects on interferograms is described.
Hyperparameter estimation for incomplete data in Markov random field image restoration is investigated. Assuming a framework in which the energies depend linearly on the hyperparameters, we use a classical cumulant expansion technique for Maximum Likelihood estimation of the hyperparameters of the prior (pixel regularization) probability density function. The particular case where the prior potential is a homogeneous function of the pixels is fully analyzed. This approach is then extended to an explicit joint boundary-pixel process aimed at preserving discontinuities. A generalized stochastic gradient (GSG) algorithm with a fast sampling technique is devised to achieve simultaneous hyperparameter estimation, pixel and boundary restoration. Image restoration performances of the posterior mean computed during GSG convergence and of simulated annealing performed after GSG convergence are compared experimentally. Results and perspectives are given.
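As a reminder of the generic identity such maximum-likelihood schemes rely on (given here for orientation, not as the paper's specific GSG derivation): for a prior p(x | θ) ∝ exp(−θ U(x)) and incomplete data y,

```latex
\frac{\partial}{\partial\theta}\,\log p(y\mid\theta)
   \;=\; \mathbb{E}\!\left[\,U(X)\mid\theta\,\right]
   \;-\; \mathbb{E}\!\left[\,U(X)\mid y,\theta\,\right],
\qquad
\theta^{(k+1)} \;=\; \theta^{(k)} + \eta_k
   \Big(\mathbb{E}\!\left[U(X)\mid\theta^{(k)}\right]
      - \mathbb{E}\!\left[U(X)\mid y,\theta^{(k)}\right]\Big),
```

with both expectations approximated by sampling, which is what a fast sampling technique makes practical inside a stochastic gradient loop.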
The need for hierarchical statistical tools for modeling and processing image data, as well as the success of Markov random fields (MRFs) in image processing, have recently given rise to a significant research activity on hierarchical MRFs and their application to image analysis problems. Important contributions, relying on different models and optimization procedures, have thus been recorded in the literature. This paper presents a synthetic overview of available models and algorithms, as well as an attempt to clarify the vocabulary in this field. We propose to classify hierarchical MRF-based approaches as explicit and implicit methods, with appropriate subclasses. Each of these major classes is defined in the paper, and several specific examples of each class of approach are described.
In order to provide a better quantitative and morphologic description of complex vascular lesions, we propose an approach for 3D reconstruction of the vessel internal wall, based on data fusion from two different imaging sources: two X-ray digital angiography projections and a stack of endovascular echography slices. After extraction of the echographic and angiographic information to be fused, a geometric model leads to the determination of the unknown parameters which allow the alignment of all data in a common reference frame. Both types of data are then directly included in a probabilistic reconstruction process based on Markov random fields. The Markovian model consists of cost functions reflecting X-ray and ultrasonic data consistency and regularization elements that control the anatomic reality of the reconstruction. The optimal solution according to the definition criteria is obtained by minimizing the model energy with an algorithm based on simulated annealing. Preliminary results have been obtained with data acquired on a dog aorta. The accuracy of reconstruction by data fusion is significantly improved compared with results obtained with separate reconstruction from angiographic or echographic data. By taking into account all available information about the problem, the method avoids the uncertainties and ambiguities of a reconstruction based on only one modality, and the probabilistic fusion resolves possible contradictions between the two acquisitions.
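The energy minimisation by simulated annealing mentioned above follows the usual Metropolis acceptance scheme; a generic sketch in which the energy, proposal and cooling schedule are placeholders rather than the paper's actual cost terms:

```python
import math
import random

def simulated_annealing(x0, energy, propose, t0=1.0, alpha=0.99, n_iter=10000):
    """Generic simulated annealing loop.

    x0      : initial configuration (e.g. the reconstruction volume)
    energy  : callable returning the MRF energy of a configuration
    propose : callable returning a random local modification of a configuration
    """
    x, e = x0, energy(x0)
    t = t0
    for _ in range(n_iter):
        cand = propose(x)
        de = energy(cand) - e
        # accept downhill moves always, uphill moves with Boltzmann probability
        if de <= 0 or random.random() < math.exp(-de / t):
            x, e = cand, e + de
        t *= alpha                       # geometric cooling schedule
    return x
```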
KEYWORDS: 3D modeling, Reconstruction algorithms, Angiography, Medical imaging, Chemical elements, Data acquisition, Data modeling, Binary data, Signal attenuation, Distance measurement
A method to reconstruct vessel lumens, based on constrained reconstruction of serial cross-sections from two digital angiographic projections, is proposed. Each cross-section is reconstructed as a binary matrix from its two densitometric data projections, with ambiguities in the reconstruction removed by a priori knowledge. A probabilistic approach in which the properties of the expected solution are described through a Markov Random Field (MRF) model was chosen to facilitate the incorporation of a priori information on the vessel segment to be reconstructed. The best solution amongst all possible ones is obtained by an optimization algorithm based on Simulated Annealing. An initial configuration consisting of the ellipse of best fit is constructed and imposed to guarantee rapid convergence to the optimal solution. This initial configuration is then deformed to be made consistent with the projection data while being constrained to a connected, realistic shape. The MRF model parameters have been estimated on 2D synthetic slices, from systematic reconstruction quality measurements. The method provides a good reconstruction of complex shapes, and can be applied to single pathologic vessels as well as to branchings. The method has thus far only been validated on peripheral arteries and bifurcations.
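Purely for illustration, one possible form of an energy balancing consistency with the two densitometric projections against an Ising-type smoothness prior on the binary cross-section (this is not the paper's exact cost function):

```latex
E(x) \;=\; \sum_{i}\Big(\sum_{j} x_{ij} - p^{(1)}_{i}\Big)^{2}
      \;+\; \sum_{j}\Big(\sum_{i} x_{ij} - p^{(2)}_{j}\Big)^{2}
      \;+\; \beta \sum_{\langle (i,j),(k,l)\rangle} \mathbf{1}\!\left[x_{ij} \neq x_{kl}\right],
\qquad x_{ij}\in\{0,1\},
```

where the first two terms enforce agreement with the row and column projection data and the last term favours connected, compact shapes.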