The new video coding standard, versatile video coding (VVC), is expected to be finalized by the end of 2020. It mainly targets 8K video and emerging applications such as 360-deg and high-dynamic-range content. Intraprediction is the part of the prediction step in video coding that exploits spatial redundancy. Compared to high-efficiency video coding (HEVC), this module has been improved by increasing the number of angular intraprediction modes (IPMs) from 33 to 65 to model directional textures more accurately. Moreover, a quadtree plus binary tree (QTBT) structure replaces the quadtree of HEVC. These improvements, aimed at enhancing coding efficiency, come at the cost of significantly higher coding complexity, especially in terms of encoding time. This paper fits into this context: it optimizes the intramode and coding-unit size decisions using statistical fast-decision methods and deep learning. A fast intramode decision algorithm is proposed for the different binary depths of the QTBT structure, and a deep-learning-based optimization for square blocks is also included. Results show that combining these two approaches significantly reduces the complexity of the VVC encoder. Under the all-intra (AI) configuration, the intraencoding time is reduced by about 61.04% while maintaining acceptable rate-distortion performance.
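To make the context concrete, the following is a minimal Python sketch of the generic two-stage rough-mode-decision idea used in HEVC/VVC reference encoders: a cheap SATD-style cost ranks all candidate intraprediction modes, and full rate-distortion (RD) evaluation runs only on a short list. This is an illustrative assumption about the general technique, not the authors' proposed algorithm; the cost functions are placeholders.

```python
# Illustrative sketch (NOT the paper's algorithm): two-stage fast
# intra-mode decision. Stage 1 ranks all modes by a cheap surrogate
# cost; Stage 2 runs the expensive RD evaluation on a shortlist only.

def rough_cost(block, mode):
    # Placeholder for a Hadamard-transform (SATD) prediction cost;
    # here just a deterministic toy function of the samples and mode.
    return sum((b ^ mode) & 0x1F for b in block)

def rd_cost(block, mode):
    # Placeholder for the full rate-distortion cost (real encoders
    # actually encode the block and measure rate and distortion).
    return rough_cost(block, mode) * 1.1

def fast_intra_mode_decision(block, num_modes=67, shortlist=3):
    """Pick an intra mode: cheap pre-selection, then RD on a shortlist.

    num_modes=67 reflects VVC's planar + DC + 65 angular modes.
    """
    # Stage 1: rank all modes by the cheap cost.
    ranked = sorted(range(num_modes), key=lambda m: rough_cost(block, m))
    candidates = ranked[:shortlist]
    # Stage 2: full RD search restricted to the shortlist.
    return min(candidates, key=lambda m: rd_cost(block, m))

block = bytes(range(16))  # toy 4x4 block of luma samples
best_mode = fast_intra_mode_decision(block)
print(0 <= best_mode < 67)
```

Fast-decision schemes like the one the abstract describes typically shrink the shortlist further, or skip whole mode ranges, based on statistics or a learned classifier.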