Spatial-temporal filters are widely used in video denoising. These filters are commonly designed for monochromatic images, yet most digital video cameras use a color filter array (CFA) to capture color sequences. We propose a recursive spatial-temporal filter that applies motion estimation (ME) and motion-compensated prediction (MCP) directly to CFA sequences. In the proposed ME method, candidate motion vectors are obtained from the CFA sequence through hypothetical luminance maps. With the estimated motion vectors, an accurate MCP is computed from the CFA sequence by weighted averaging, with the weights determined by a spatial-temporal linear minimum mean-square-error (LMMSE) estimator. The temporal filter then blends the estimated MCP with the current pixel, with the blend controlled by a motion detection value. After temporal filtering, a spatial filter is applied to the filtered current frame as post-processing. Experimental results show that the proposed method achieves good denoising performance without motion blur and yields high visual quality.
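The core temporal step described above can be illustrated with a minimal sketch. This is not the paper's implementation; it only assumes that the motion-detection value acts as a per-pixel mixing weight in [0, 1] (the function name, the weight formula, and the parameter `alpha` are all hypothetical):

```python
import numpy as np

def temporal_filter(current, mcp, motion_val, alpha=0.8):
    """Recursive temporal blend: where detected motion is low, trust the
    motion-compensated prediction (strong temporal filtering); where motion
    is high, fall back to the noisy current pixel to avoid motion blur.

    current    -- noisy pixels of the current frame
    mcp        -- motion-compensated prediction from the previous (filtered) frame
    motion_val -- hypothetical per-pixel motion-detection value in [0, 1]
                  (1 = strong motion)
    alpha      -- maximum temporal weight given to the prediction
    """
    w = alpha * (1.0 - motion_val)      # temporal weight shrinks with motion
    return w * mcp + (1.0 - w) * current

# Toy example: a static pixel (motion_val = 0) vs. a moving one (motion_val = 1)
cur = np.array([100.0, 100.0])          # noisy current-frame pixels
pred = np.array([90.0, 90.0])           # motion-compensated predictions
motion = np.array([0.0, 1.0])           # per-pixel motion-detection values
out = temporal_filter(cur, pred, motion)
# the static pixel is pulled toward the prediction; the moving pixel is left
# unchanged, which is how the scheme avoids motion blurring
```

In the actual method, the weights come from the spatial-temporal LMMSE estimate rather than a fixed `alpha`, and the filtering is applied on the CFA mosaic rather than on full-color pixels.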