In this paper, we investigate the use of the non-local means (NLM) denoising approach in the context of image
deblurring and restoration. We propose a novel deblurring approach that utilizes a non-local regularization
constraint. Our interest in the NLM principle lies in its potential to suppress noise while effectively preserving edges
and texture detail. Our approach leads to an iterative cost function minimization algorithm, similar to common
deblurring methods, but incorporating update terms due to the non-local regularization constraint. The data-adaptive
noise suppression weights in the regularization term are updated and improved at each iteration, based
on the partially denoised and deblurred result. We compare our proposed algorithm to conventional deblurring
methods, including deblurring with total variation (TV) regularization. We also compare our algorithm to
combinations of the NLM-based filter followed by conventional deblurring methods. Our initial experimental
results indicate that NLM-based filtering and regularization are beneficial in the context of image deblurring,
suppressing noise while reducing the risk of over-smoothing or loss of texture detail.
Furthermore, the proposed deblurring algorithm with non-local regularization outperforms other methods, such
as deblurring with TV regularization or separate NLM-based denoising followed by deblurring.
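The iterative minimization described above can be sketched as gradient descent on a data-fidelity term plus a non-local quadratic penalty whose NLM weights are recomputed from the current estimate at each iteration. The abstract does not give the exact cost function, so the penalty form, the Gaussian blur model for H, and all parameter values below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def nlm_regularizer_grad(x, search=2, patch=1, h=0.15):
    # Gradient of the non-local penalty sum_ij w_ij * (x_i - x_j)^2,
    # with data-adaptive weights w_ij from (simplified) patch similarity.
    H, W = x.shape
    grad = np.zeros_like(x)
    pad = patch + search
    xp = np.pad(x, pad, mode="reflect")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = xp[pad + dy:pad + dy + H, pad + dx:pad + dx + W]
            # patch distance approximated by a smoothed squared difference
            d2 = gaussian_filter((x - shifted) ** 2, sigma=patch)
            w = np.exp(-d2 / (h * h))
            grad += w * (x - shifted)
    return 2.0 * grad

def deblur_nlm(y, blur_sigma, lam=0.05, step=0.5, iters=30):
    # Gradient descent on ||H x - y||^2 + lam * NLM penalty, with the
    # blur H modeled as a Gaussian filter (self-adjoint, so H^T = H).
    x = y.copy()
    for _ in range(iters):
        resid = gaussian_filter(x, blur_sigma) - y
        data_grad = 2.0 * gaussian_filter(resid, blur_sigma)  # H^T (Hx - y)
        x = x - step * (data_grad + lam * nlm_regularizer_grad(x))
    return x
```

Because the weights are recomputed from the partially deblurred estimate inside the loop, the regularizer adapts as detail is restored, which is the mechanism the abstract credits for avoiding over-smoothing.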
KEYWORDS: Cameras, Video, Point spread functions, Optical transfer functions, Phase transfer function, Image analysis, Computer simulations, Signal to noise ratio, Image processing, Error analysis
This paper proposes a novel technique for estimating focused video frames captured by an out-of-focus moving
camera. It relies on the idea of Depth from Defocus (DFD), but overcomes the limitations of DFD by
reformulating the problem in a computer vision framework. It introduces a moving-camera scenario and explores
the relationship between the camera motion and the resulting blur characteristics in captured images. This
knowledge enables accurate blur estimation and, in turn, focused image estimation. The performance of this algorithm
is demonstrated through error analysis and computer simulated experiments.
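The abstract does not state the paper's specific blur model, but the usual starting point for relating defocus to scene geometry in DFD is the thin-lens blur-circle relation, sketched below as an illustration (the function name and parameters are my own).

```python
def blur_diameter(aperture, focal_len, sensor_dist, obj_dist):
    # Diameter of the defocus blur circle for a point at obj_dist, when the
    # sensor sits sensor_dist behind a thin lens of the given focal length:
    #   b = A * s * |1/f - 1/u - 1/s|
    # b vanishes exactly when the thin-lens equation 1/f = 1/u + 1/s holds.
    return aperture * sensor_dist * abs(
        1.0 / focal_len - 1.0 / obj_dist - 1.0 / sensor_dist
    )
```

In a moving-camera scenario, obj_dist varies with the camera trajectory, so the blur diameter changes from frame to frame; that dependence is the kind of motion/blur relationship the paper exploits.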
In this paper, we introduce a method to jointly track the object motion and estimate pose within the framework of particle
filtering. We focus on direct estimation of the 3D pose from a 2D image sequence. Scale-Invariant Feature Transform
(SIFT) is used to extract feature points in the images. We show that pose estimation from the corresponding feature points
can be formulated as the solution of a Sylvester equation, which we solve via the Kronecker product method to determine
the pose state. We demonstrate that the classical Singular Value
Decomposition (SVD) approach to pose estimation provides a solution to Sylvester's equation in 3D-3D pose estimation.
The proposed approach to the solution of Sylvester's equation is therefore equivalent to the classical SVD method for
3D-3D pose estimation, yet it can also be used for pose estimation from 2D image sequences. Finally, we rely on computer
simulation experiments to demonstrate the performance of our algorithm on video sequences.