Depth image-based rendering (DIBR) has recently received significant attention as an efficient approach to generating virtual views. One of the main challenges in the view synthesis process is that holes appear in the background (BG) of the virtual views because of occlusion. Existing methods fill these holes using either spatial or temporal information from the original views and depth maps. However, their performance depends heavily on the quality of the depth maps: when the depth maps are not accurate enough, foreground (FG) texture is wrongly introduced into the BG. We develop a hole-filling approach for DIBR based on a convolutional neural network that is less dependent on depth-map quality. More specifically, FG objects are detected on the RGB image by combining the Laplacian operator with a graph cut algorithm, avoiding any reliance on the depth map. Furthermore, we learn an end-to-end, FG-aware mapping between the warped virtual view and the ground-truth view image. The learning-based approach also offers a clear advantage in computation time. Experimental results show that the proposed approach outperforms current state-of-the-art techniques both quantitatively and qualitatively.
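The abstract describes FG detection on the RGB image by combining a Laplacian edge response with a graph cut segmentation. The following is a minimal sketch of that general idea, not the authors' exact pipeline: it uses OpenCV's GrabCut as a stand-in graph-cut segmenter, and the threshold, dilation size, and file names are illustrative assumptions.

```python
import cv2
import numpy as np


def detect_foreground(bgr):
    """Rough FG mask from an RGB/BGR frame: a Laplacian edge response seeds a
    graph-cut segmentation (here OpenCV's GrabCut, a graph-cut-based method)."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)

    # Laplacian highlights strong intensity transitions (likely object boundaries).
    lap = cv2.Laplacian(gray, cv2.CV_64F, ksize=3)
    edge_strength = cv2.convertScaleAbs(lap)

    # Seed mask: default everything to "probable background", then mark pixels with
    # strong edge response (illustrative threshold) as "probable foreground".
    mask = np.full(gray.shape, cv2.GC_PR_BGD, dtype=np.uint8)
    seeds = (edge_strength > edge_strength.mean() + 2 * edge_strength.std()).astype(np.uint8)
    seeds = cv2.dilate(seeds, np.ones((9, 9), np.uint8))  # thicken thin edges into region seeds
    mask[seeds == 1] = cv2.GC_PR_FGD

    # Graph-cut refinement of the seeded mask (mask-initialized GrabCut).
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(bgr, mask, None, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_MASK)

    # Binary FG mask: definite + probable foreground labels.
    return np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype(np.uint8)


if __name__ == "__main__":
    frame = cv2.imread("reference_view.png")  # hypothetical input path
    fg_mask = detect_foreground(frame)
    cv2.imwrite("fg_mask.png", fg_mask * 255)
```

Because the mask is computed purely from the RGB image, it can then be used to keep FG texture out of the BG hole regions during CNN-based inpainting, which is the FG awareness the abstract refers to.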
Keywords: Convolutional neural networks, RGB color model, Image segmentation, Image quality, Video, 3D image processing, Cameras