Image denoising is a fundamental image processing step for improving the overall quality of images. It is particularly important for remote sensing images, which require significantly higher visual quality than ordinary images. Conventional denoising methods, however, tend to over-suppress high-frequency details. To overcome this problem, we present a novel compressive sensing (CS)-based noise removal algorithm using adaptive multiple samplings and reconstruction error control. We first decompose an input noisy image into flat and edge regions, and then generate 8×8 block-based measurement matrices with Gaussian probability distributions. The measurement matrix is applied to the first three levels of wavelet transform coefficients of the input image for compressive sampling. Orthogonal matching pursuit (OMP) is applied to reconstruct each block. In the reconstruction process, we use different error threshold values according to both the decomposed region and the level of the wavelet transform, based on the fact that the first-level wavelet coefficients in the edge region have the lowest error threshold, whereas the third-level wavelet coefficients in the flat region have the highest error threshold. By applying adaptive threshold values, we can reconstruct the image while suppressing noise. Experimental results demonstrate that the proposed method removes noise better than existing state-of-the-art methods in terms of both objective (PSNR/MSSIM) and subjective measures. We also implement the proposed denoising algorithm for remote sensing images while minimizing the computational load.
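A minimal sketch of the reconstruction step is given below, assuming the flat/edge segmentation and the three-level wavelet decomposition are computed elsewhere. The function names, the number of measurements m, and the numerical threshold values are illustrative assumptions rather than the parameters used in the paper.

import numpy as np

def omp(A, y, max_atoms, err_thresh):
    """Orthogonal matching pursuit: approximate y ~ A @ x with a sparse x,
    stopping once the residual norm falls below err_thresh."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(max_atoms):
        if np.linalg.norm(residual) <= err_thresh:
            break
        k = int(np.argmax(np.abs(A.T @ residual)))  # most correlated atom
        if k not in support:
            support.append(k)
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coeffs
        residual = y - A[:, support] @ coeffs
    return x

# Hypothetical threshold table keyed by (region, wavelet level): first-level edge
# coefficients get the lowest threshold (preserve detail), third-level flat
# coefficients the highest (suppress noise).
ERROR_THRESHOLDS = {
    ('edge', 1): 0.5, ('edge', 2): 1.0, ('edge', 3): 2.0,
    ('flat', 1): 1.5, ('flat', 2): 3.0, ('flat', 3): 5.0,
}

def reconstruct_block(coeff_block, region, level, m=32, seed=0):
    """Compressively sample an 8x8 wavelet-coefficient block with a Gaussian
    measurement matrix and reconstruct it by OMP with an adaptive threshold."""
    rng = np.random.default_rng(seed)
    x_true = coeff_block.reshape(-1)                    # 64-dimensional block
    A = rng.normal(size=(m, x_true.size)) / np.sqrt(m)  # Gaussian measurements
    y = A @ x_true                                      # compressive samples
    x_hat = omp(A, y, max_atoms=m // 2,
                err_thresh=ERROR_THRESHOLDS[(region, level)])
    return x_hat.reshape(coeff_block.shape)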
A multiple color-filter aperture (MCA) can provide a single camera with depth information and multifocusing capability. However, the original version of the MCA system exhibits inherent limitations, such as manually and empirically tuned parameters for the color channel registration and fusion (CRF) process. Furthermore, a CRF output image still contains undesired out-of-focus blur because of the finite-sized apertures and the lateral displacement of each color-filter aperture, which results in low exposure, color mixing, deviation of color convergence, and divergence of light rays. To overcome these problems, we present a real-time image processing solution for digital multifocusing in an MCA system.
Tracking non-rigid objects such as people in video sequences is a daunting task due to computational complexity and unpredictable environments. The analysis and interpretation of video sequences containing moving, deformable objects has been an active research area spanning video tracking, computer vision, and pattern recognition. In this paper we propose a robust, model-based, real-time system that copes with background clutter and occlusion. The proposed algorithm consists of the following four steps: (i) localization of an object of interest by analyzing four directional motions, (ii) a region tracker for tracking the moving region detected by the motion detector, (iii) update of the training sets using the Smart Snake Algorithm (SSA) without preprocessing, and (iv) active shape model-based tracking constrained by the region information. The major contribution of this work lies in the integration of a complete system that covers everything from image processing to tracking algorithms. Combining multiple algorithms overcomes fundamental limitations of tracking while at the same time enabling a real-time implementation. Experimental results show that the proposed algorithm can track people in various environments in real time. The proposed system has potential uses in the areas of surveillance, shape analysis, and model-based coding, to name a few.
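The four steps suggest a top-level loop of the following form. This is only an orchestration sketch under assumed interfaces, since the abstract does not specify how the motion detector, region tracker, Smart Snake Algorithm, or ASM modules are parameterized.

def track_sequence(frames, motion_detector, region_tracker, smart_snake, asm):
    """Illustrative pipeline: the four component objects are placeholders whose
    interfaces (detect/update/fit/update_training_set) are assumptions made for
    this sketch, not interfaces defined in the paper."""
    shapes = []
    for frame in frames:
        # (i) Localize the object of interest from four directional motion cues.
        motion_region = motion_detector.detect(frame)
        # (ii) Track the moving region found by the motion detector.
        region = region_tracker.update(frame, motion_region)
        # (iii) Refine the contour with the Smart Snake Algorithm and use it to
        #       update the ASM training set without manual preprocessing.
        contour = smart_snake.fit(frame, region)
        asm.update_training_set(contour)
        # (iv) ASM-based tracking constrained to the tracked region.
        shapes.append(asm.fit(frame, region))
    return shapes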
Monitoring of large sites requires coordination of multiple cameras, and methods for relating events between distributed cameras. This paper presents a planar trajectory estimation method using multiple cameras, and the use of estimated trajectories to facilitate camera calibration. The algorithm addresses the problem of recovering the relative pose of several stationary cameras that observe one or more objects in motion. Each camera tracks several objects to produce a set of trajectories in the image. Using a simple calibration procedure, we recover the relative orientation of each camera to the local ground plane. We also present experimental results on both indoor and outdoor sequences containing persons and vehicles.
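One standard way to relate trajectories of the same moving objects seen by two cameras of a common ground plane is a point-correspondence homography; the sketch below uses the direct linear transform for this purpose. It is a generic illustration, not necessarily the paper's exact calibration procedure.

import numpy as np

def estimate_homography(pts_a, pts_b):
    """Estimate H such that pts_b ~ H @ pts_a (homogeneous), given N >= 4
    corresponding image points (N x 2 arrays) from synchronized trajectories."""
    pts_a = np.asarray(pts_a, dtype=float)
    pts_b = np.asarray(pts_b, dtype=float)
    rows = []
    for (x, y), (u, v) in zip(pts_a, pts_b):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows)
    # The homography is the null vector of A, taken from the SVD.
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def map_point(H, p):
    """Transfer an image point from camera A to camera B via the homography."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]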
Recently, intelligent surveillance systems have attracted more attention than simple CCTV systems, especially in complicated security environments. The major purpose of the proposed system is to monitor and track intruders. More specifically, accurately identifying each intruder is more important than simply recording what they are doing. Most existing surveillance systems simply keep recording a fixed viewing area, while some others adopt tracking techniques for wider coverage.
Although panning and tilting the camera can extend the viewing area, only a few automatic zoom control techniques for acquiring the optimum region of interest (ROI) have been proposed. This paper describes a system for tracking multiple faces in input video sequences using convex hull-based facial segmentation and a robust Hausdorff distance. The proposed algorithm adopts a skin-color reference map in the YCbCr color space and a hair-color reference map in the RGB color space to classify face regions. We then obtain an initial face model using preprocessing and a convex hull. For tracking, the algorithm computes the displacement of the point set between frames using a robust Hausdorff distance, and the best possible displacement is selected. Finally, the initial face model is updated using this displacement. We provide experimental results to demonstrate the performance of the proposed tracking algorithm, which efficiently tracks rotating and zooming faces as well as multiple faces in video sequences obtained from a CCD camera.
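The two distinctive ingredients, a chrominance-based skin mask and a robust (partial) Hausdorff distance used to score candidate displacements, can be sketched as follows. The Cb/Cr thresholds and the 80% quantile are common illustrative choices, not the reference maps or parameters reported in the paper.

import numpy as np

def skin_mask_ycbcr(ycbcr):
    """Binary skin mask from an 8-bit YCbCr image (H x W x 3, channel order
    Y, Cb, Cr assumed). The Cb/Cr box below is a widely used approximation."""
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)

def robust_hausdorff(model_pts, image_pts, frac=0.8):
    """Directed partial Hausdorff distance: the frac-quantile (instead of the
    maximum) of nearest-neighbour distances, which tolerates outliers."""
    d = np.linalg.norm(model_pts[:, None, :] - image_pts[None, :, :], axis=2)
    return np.quantile(d.min(axis=1), frac)

def best_displacement(model_pts, image_pts, search=8):
    """Exhaustively test integer displacements in [-search, search]^2 and keep
    the one minimizing the robust Hausdorff distance."""
    model_pts = np.asarray(model_pts, dtype=float)
    image_pts = np.asarray(image_pts, dtype=float)
    best, best_d = (0, 0), np.inf
    for dx in range(-search, search + 1):
        for dy in range(-search, search + 1):
            dist = robust_hausdorff(model_pts + np.array([dx, dy]), image_pts)
            if dist < best_d:
                best, best_d = (dx, dy), dist
    return best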
Tracking non-rigid objects such as people in video sequences is a daunting task due to computational complexity and unstable performance. Special considerations for digital image processing are required when an object of interest changes its shape between consecutive frames. Traditionally, active shape models (ASMs) have not included color information in their formulation. We present several extensions of the ASM to color images using different color-adapted objective functions. We also analyze the performance of the color ASMs in the RGB, YUV, and HSI color spaces.
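As an illustration of what a color-adapted objective function can look like, the sketch below extends the classical grey-level profile cost to concatenated per-channel profiles of the chosen color space (RGB, YUV, or HSI). The names and the Mahalanobis form are assumptions for this sketch rather than the paper's exact formulation.

import numpy as np

def color_profile_cost(sampled_profile, mean_profile, inv_cov):
    """Mahalanobis distance between a sampled color profile and the trained mean.

    sampled_profile : (k * c,) profile sampled along a landmark normal, with the
                      c color channels (e.g. R, G, B or Y, U, V) concatenated.
    mean_profile    : (k * c,) mean profile learned from the training set.
    inv_cov         : (k * c, k * c) inverse covariance of the training profiles.
    """
    diff = np.asarray(sampled_profile, float) - np.asarray(mean_profile, float)
    return float(diff @ inv_cov @ diff)

def best_landmark_shift(candidate_profiles, mean_profile, inv_cov):
    """Pick the offset along the normal whose color profile best matches the model."""
    costs = [color_profile_cost(p, mean_profile, inv_cov) for p in candidate_profiles]
    return int(np.argmin(costs))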