Fu Xiao 1,2,3, Jinkai Liu 1, Jian Guo 1,3,4, and Linfeng Liu 1,3,4
Recommended by Ruchuan Wang
1 School of Computer, Nanjing University of Posts and Telecommunications, Nanjing 210003, China
2 Provincial Key Laboratory for Computer Information Processing Technology, Soochow University, Suzhou 215006, China
3 Jiangsu High Technology Research Key Laboratory for Wireless Sensor Networks, Nanjing 210003, China
4 Key Laboratory of Broadband Wireless Communication and Sensor Network Technology, Ministry of Education, Nanjing 210003, China
Received 24 August 2012; Accepted 15 October 2012
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1. Introduction
In recent years, with the rapid development of wireless multimedia communication technology [ 1], the demand for digital video has grown steadily. Viewers expect sharper and more realistic representations of natural scenes, but the traditional single-view video network provides only a two-dimensional view and cannot deliver a convincing three-dimensional visual experience; this has motivated the multiview video network. In a multiview video network, nodes are constrained in power, storage capacity, and computational and communication ability, so the system requires not only low-complexity encoding but also real-time video encoding and transmission. Traditional video coding standards, such as MPEG-x or H.26x, rely on a hybrid architecture in which the encoder uses motion estimation to fully exploit the temporal and spatial correlation of video sequences. Because of the heavy computational burden of motion estimation and compensation in these standards, the encoder is 5 to 10 times more complex than the decoder [ 2, 3]. Such coding systems are therefore unsuitable for these networks, and novel coding methods are required. A new video codec framework, distributed video coding (DVC), which uses intraframe encoding and interframe decoding, has attracted wide attention from researchers. The decoder exploits the correlation among video signals to perform interframe prediction during decoding, so DVC removes the complexity of interframe prediction coding from the encoder. With its low-complexity encoding and good robustness, DVC meets the needs of these new video applications very well.
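To make the complexity shift concrete, the following minimal sketch (not from the paper; all names, the quantization step, and the simple averaging predictor are illustrative assumptions) mimics the DVC split: the encoder only performs cheap intraframe quantization, while the decoder builds side information from neighbouring frames and uses it to refine the coarse reconstruction, standing in for the interframe prediction that a conventional encoder would perform.

```python
# Toy illustration of the DVC encoder/decoder complexity split.
# Assumptions: frames are grayscale numpy arrays; the "refinement" is a simple
# blend, whereas a real DVC decoder would correct the frame with channel-coded
# parity bits (e.g., turbo or LDPC syndromes).
import numpy as np

def encode_intra(frame, step=16):
    """Encoder side: per-frame quantization only; no motion estimation."""
    return np.round(frame / step).astype(np.int16)

def decode_with_side_info(q_frame, prev_frame, next_frame, step=16):
    """Decoder side: generate side information from neighbouring frames and
    refine the coarse intraframe reconstruction with it."""
    coarse = q_frame.astype(np.float64) * step       # dequantized frame
    side_info = 0.5 * (prev_frame + next_frame)      # interframe prediction
    return 0.5 * (coarse + side_info)                # stand-in for parity correction

if __name__ == "__main__":
    rng = rng = np.random.default_rng(0)
    prev_f = rng.integers(0, 256, (64, 64)).astype(np.float64)
    next_f = prev_f + rng.normal(0, 2, (64, 64))     # temporally correlated frame
    cur_f = 0.5 * (prev_f + next_f) + rng.normal(0, 2, (64, 64))

    q = encode_intra(cur_f)                          # lightweight encoder step
    rec = decode_with_side_info(q, prev_f, next_f)   # heavier decoder step
    print("reconstruction MSE:", np.mean((rec - cur_f) ** 2))
```

The point of the sketch is only the asymmetry: all interframe work (side-information generation and refinement) happens at the decoder, which is why DVC suits encoders on resource-constrained multiview nodes.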
Several DVC frameworks have been proposed,...





