Image and Video Compression

Learning-based Image and Video Compression


Data compression and generative modeling are two fundamentally related tasks. Intuitively, the essence of compression is to find the “patterns” in the data and assign fewer bits to more frequent patterns. To know exactly how frequently each pattern occurs, one needs a good probabilistic model of the data distribution, which coincides with the objective of (likelihood-based) generative modeling. Motivated by this, we study image and video compression from the perspective of probabilistic generative modeling.
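
To make this connection concrete, here is a minimal sketch of our own (not code from the publications below) of the Shannon code-length rule: a symbol with model probability p(x) costs -log2 p(x) bits, so frequent symbols get short codes, and a better probabilistic model yields a shorter total code.

```python
import math
from collections import Counter

def optimal_code_lengths(data):
    """Fit a simple empirical probability model to the data and return the
    Shannon-optimal code length, -log2 p(x), for each symbol. A model that
    assigns higher probability to the data implies shorter codes."""
    counts = Counter(data)
    total = len(data)
    return {sym: -math.log2(n / total) for sym, n in counts.items()}

data = "aaaaabbbc"  # 'a' is most frequent, so it should get the fewest bits
for sym, bits in sorted(optimal_code_lengths(data).items()):
    print(f"{sym!r}: {bits:.2f} bits")
# 'a': 0.85 bits, 'b': 1.58 bits, 'c': 3.17 bits
```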

Publications:

  1. Z. Duan, M. Lu, J. Ma, Y. Huang, Z. Ma, F. Zhu, “QARV: Quantization-Aware ResNet VAE for Lossy Image Compression,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 46, no. 1, pp. 436-450, Oct 2023. Code

  2. Z. Duan, J. Ma, J. He, F. Zhu, “An Improved Upper Bound on the Rate-Distortion Function of Images,” Proceedings of the IEEE International Conference on Image Processing (ICIP), Kuala Lumpur, Malaysia, Oct 2023. Code

  3. M.A. Hossain, Z. Duan, Y. Huang, F. Zhu, “Flexible Variable-Rate Image Feature Compression for Edge-Cloud Systems,” Proceedings of the IEEE International Conference on Multimedia and Expo Workshop (ICME-W), Brisbane, Australia, Jul 2023.

  4. [Best Algorithms Paper] Z. Duan, M. Lu, Z. Ma, and F. Zhu, “Lossy Image Compression with Quantized Hierarchical VAEs,” Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Hawaii, USA, Jan 2023. Code

  5. Z. Duan, M. Lu, Z. Ma, F. Zhu, “Opening the Black Box of Learned Image Coders,” Proceedings of the Picture Coding Symposium (PCS), San Jose, California, USA, Dec 2022.

Compression for Machine Vision and Processing


Visual data have traditionally been meant for human viewing, and compression techniques have accordingly been designed to reconstruct the original data. A recent paradigm, Coding for Machines, has grown rapidly to embrace the era of AI and deep learning. In many modern applications involving autonomous visual analysis, visual data must be compressed and stored or transmitted, but are then processed only by an AI algorithm (instead of human eyes) and never reconstructed in their original form. Traditional methods, designed mostly to reconstruct the visual signal, are inefficient in this new paradigm, and new techniques must be developed to address its challenges and requirements.
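
As a rough sketch of this paradigm (a toy illustration, not the systems in the publications below; the random-projection “feature extractor” and argmax “classifier” are placeholders), the edge device quantizes features for transmission and the cloud analyzes them without ever rebuilding the image:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical feature extractor: a fixed random projection standing in
# for the early layers of a neural network.
PROJ = rng.standard_normal((64, 16))

def edge_encode(image, step=0.5):
    """Edge side: extract a feature vector and uniformly quantize it.
    In a real system the integer symbols would then be entropy-coded."""
    features = image.reshape(-1) @ PROJ
    return np.round(features / step).astype(np.int32)

def cloud_infer(symbols, step=0.5):
    """Cloud side: dequantize and analyze the features directly;
    the original image is never reconstructed."""
    features = symbols * step
    return int(np.argmax(features))  # stand-in for a task network head

image = rng.random((8, 8))  # toy 8x8 "image" = 64 pixels
print("predicted class:", cloud_infer(edge_encode(image)))
```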

Publications:

  1. Y. Huang, Z. Duan, F. Zhu, “NARV: An Efficient Noise-Adaptive ResNet VAE for Joint Image Compression and Denoising,” Proceedings of the IEEE International Conference on Multimedia and Expo Workshop (ICME-W), Brisbane, Australia, Jul 2023.

  2. Z. Duan, Z. Ma, F. Zhu, “Unified Architecture Adaptation for Compressed Domain Semantic Inference,” IEEE Transactions on Circuits and Systems for Video Technology, Jan 2023.

  3. [Best Paper Award Finalist] Z. Duan, F. Zhu, “Efficient Feature Compression for Edge-Cloud Systems,” Proceedings of the Picture Coding Symposium (PCS), San Jose, California, USA, Dec 2022. Code

Texture-Based Video Coding


In recent years, there has been growing interest in novel techniques for increasing the coding efficiency of video compression methods. One approach is to use texture and motion models of the content in a scene. Based on these models, parts of the video frame are not coded, i.e., “skipped,” by a classical motion-compensated coder; the models are then used at the decoder to reconstruct the missing regions. We propose several spatial texture models for video coding and investigate several texture features, in combination with two segmentation strategies, for detecting texture regions in a video sequence. The detected areas are not encoded with motion-compensated coding; instead, the model parameters are sent to the decoder as side information. After decoding, each frame is reconstructed by inserting the skipped texture areas into the decoded frame.
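
The following is a simplified sketch of the skip-and-synthesize idea, assuming block variance as a stand-in texture feature and reference-frame copying as a stand-in for texture synthesis; the systems in the papers below use richer texture features, segmentation strategies, and synthesis models:

```python
import numpy as np

def detect_texture_blocks(frame, block=16, var_min=0.002, var_max=0.02):
    """Toy detector: flag a block as 'texture' when its variance falls in
    a mid range (flat areas and strong structure stay with the regular
    motion-compensated coder)."""
    h, w = frame.shape
    mask = np.zeros((h // block, w // block), dtype=bool)
    for by in range(h // block):
        for bx in range(w // block):
            v = frame[by * block:(by + 1) * block,
                      bx * block:(bx + 1) * block].var()
            mask[by, bx] = var_min < v < var_max
    return mask  # transmitted to the decoder as side information

def fill_skipped_blocks(decoded, reference, mask, block=16):
    """Decoder side: skipped texture blocks are filled from a reference
    frame, a stand-in for the texture synthesis model in the text."""
    out = decoded.copy()
    for by, bx in zip(*np.nonzero(mask)):
        ys, xs = by * block, bx * block
        out[ys:ys + block, xs:xs + block] = reference[ys:ys + block,
                                                      xs:xs + block]
    return out
```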

Publications:

  1. M. Bosch, F. Zhu and E.J. Delp, “Segmentation Based Video Compression Using Texture and Motion Models,” IEEE Journal of Selected Topics in Signal Processing, vol. 5, no. 7, pp. 1366-1377, Nov 2011.

  2. M. Bosch, F. Zhu, and E. J. Delp, “Perceptual Quality Evaluation for Texture and Motion Based Video Coding,” Proceedings of the IEEE International Conference on Image Processing (ICIP), pp. 2285-2288, Cairo, Egypt, Nov 2009.

  3. M. Bosch, F. Zhu, and E. J. Delp, “An Overview of Texture and Motion based Video Coding at Purdue University,” Proceedings of the 27th Picture Coding Symposium (PCS), pp. 1-4, Chicago, USA, May 2009.

  4. M. Bosch, F. Zhu, and E.J. Delp, “Models for Texture Based Video Coding,” Proceedings of the International Workshop on Local and Non-Local Approximation in Image Processing (LNLA), Lausanne, Switzerland, Aug 2008.

  5. M. Bosch, F. Zhu, and E. J. Delp, “Spatial Texture Models for Video Compression,” Proceedings of IEEE International Conference on Image Processing (ICIP), pp. I-93-I-96, San Antonio, USA, Sep 2007.

  6. F. Zhu, K. Ng, G. Abdollahian, and E.J. Delp, “Spatial and Temporal Models for Texture-Based Video Coding,” Proceedings of SPIE 6508, Visual Communications and Image Processing 2007 (VCIP), pp. 650806-650806-10, San Jose, USA, Jan 2007.

Motion-Based Video Coding


Using an approach similar to texture-based video coding, we consider motion models based on human visual motion perception. We describe a motion classification model that separates foreground objects containing noticeable motion from the background. This model is then used in the encoder to allow regions to be skipped rather than coded by a motion-compensated encoder. Our results indicate a significant increase in coding efficiency compared with the spatial texture-based methods.
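
A much-simplified sketch of the block-level foreground/background decision, using a plain frame-difference threshold in place of the perceptually motivated model described above:

```python
import numpy as np

def classify_motion(prev_frame, curr_frame, block=16, thresh=0.01):
    """Toy motion classifier: a block whose mean squared frame difference
    exceeds a threshold is labeled foreground (noticeable motion) and coded
    normally; the remaining background blocks may be skipped by the encoder.
    The published model is perceptual, not a plain threshold."""
    diff = (curr_frame - prev_frame) ** 2
    h, w = diff.shape
    fg = np.zeros((h // block, w // block), dtype=bool)
    for by in range(h // block):
        for bx in range(w // block):
            energy = diff[by * block:(by + 1) * block,
                          bx * block:(bx + 1) * block].mean()
            fg[by, bx] = energy > thresh
    return fg
```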

Publications:

  1. M. Bosch, F. Zhu, and E.J. Delp, “Video Coding Using Motion Classification,” Proceedings of the IEEE International Conference on Image Processing (ICIP), pp. 1588-1591, San Diego, USA, Oct 2008.

Deep Learning Based Approaches


There has been growing interest in approaches to improving the coding efficiency of modern video codecs as demand for web-based video consumption increases. We propose a model-based approach that uses texture analysis/synthesis to reconstruct blocks in texture regions of a video, achieving potential coding gains with the AV1 codec developed by the Alliance for Open Media (AOM). The proposed method uses convolutional neural networks to extract texture regions in a frame, which are then reconstructed using a global motion model. Our preliminary results show an increase in coding efficiency while maintaining satisfactory visual quality.
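
A toy sketch of the decoder-side reconstruction step, assuming the texture mask comes from a CNN segmentation and approximating the global motion model with a single translation vector (the actual AV1-based systems in the papers below use full global motion parameters):

```python
import numpy as np

def synthesize_texture_regions(decoded, reference, texture_mask, motion=(2, 1)):
    """Pixels flagged as texture (here assumed to come from a CNN
    segmentation) are filled by displacing a reference frame with one
    global motion vector, a much-simplified stand-in for the global
    motion model used with AV1."""
    dy, dx = motion
    # Displace the reference frame by the global motion vector.
    warped = np.roll(np.roll(reference, dy, axis=0), dx, axis=1)
    out = decoded.copy()
    out[texture_mask] = warped[texture_mask]
    return out
```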

Publications:

  1. C. Fu, D. Chen, E. J. Delp, Z. Liu, F. Zhu, “Texture Segmentation Based Video Compression Using Convolutional Neural Networks,” Electronic Imaging, Burlingame, CA, USA, Jan 2018.

  2. D. Chen, C. Fu, Z. Liu, F. Zhu, “AV1 Video Coding Using Texture Analysis With Convolutional Neural Networks,” arXiv:1804.09291, 2018.

  3. D. Chen, Q. Chen, F. Zhu, “Pixel-Level Texture Segmentation Based AV1 Video Compression,” Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Brighton, UK, May 2019.

  4. D. Ding, Z. Ma, D. Chen, Q. Chen, Z. Liu, F. Zhu, “Advances in Video Compression System Using Deep Neural Network: A Review and Case Studies,” Proceedings of the IEEE, pp. 1-27, Mar 2021. Project Page