H.261

| Video codec for audiovisual services at p x 64 kbit/s | |
| --- | --- |
| Status | Published |
| Year started | 1988 |
| Latest version | (03/93) |
| Organization | ITU-T, Hitachi, PictureTel, NTT, BT, Toshiba, etc. |
| Committee | ITU-T Study Group 16 VCEG (then: Specialists Group on Coding for Visual Telephony) |
| Related standards | H.262, H.263, H.264, H.265, H.266, H.320 |
| Domain | video compression |
| Website | https://www.itu.int/rec/T-REC-H.261 |
H.261 is an ITU-T video compression standard, first ratified in November 1988.[1][2] It is the first member of the H.26x family of video coding standards in the domain of the ITU-T Study Group 16 Video Coding Experts Group (VCEG, then Specialists Group on Coding for Visual Telephony), and was developed with a number of companies, including Hitachi, PictureTel, NTT, BT and Toshiba. It was the first video coding standard that was useful in practical terms.
H.261 was originally designed for transmission over ISDN lines on which data rates are multiples of 64 kbit/s. The coding algorithm was designed to be able to operate at video bit rates between 40 kbit/s and 2 Mbit/s. The standard supports two video frame sizes: CIF (352×288 luma with 176×144 chroma) and QCIF (176×144 with 88×72 chroma) using a 4:2:0 sampling scheme. It also has a backward-compatible trick for sending still images with 704×576 luma resolution and 352×288 chroma resolution (which was added in a later revision in 1993).
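As a rough worked example (not taken from the Recommendation), the following Python sketch computes the raw data volume of the two picture formats, assuming 8-bit samples and the maximum frame rate of about 29.97 Hz, to illustrate why compression is needed to fit within multiples of 64 kbit/s:

```python
# Rough worked example (not from the standard): raw data volume of the two
# H.261 picture formats, illustrating why compression is needed at p x 64 kbit/s.

FORMATS = {
    # name: (luma_width, luma_height); chroma planes are half size in each
    # dimension under 4:2:0 sampling
    "CIF": (352, 288),
    "QCIF": (176, 144),
}

BITS_PER_SAMPLE = 8      # assumption: 8-bit samples
FRAME_RATE = 29.97       # approximate maximum H.261 frame rate

for name, (w, h) in FORMATS.items():
    luma = w * h
    chroma = 2 * (w // 2) * (h // 2)          # Cb + Cr at quarter resolution
    bits_per_frame = (luma + chroma) * BITS_PER_SAMPLE
    raw_mbps = bits_per_frame * FRAME_RATE / 1e6
    print(f"{name}: {luma + chroma} samples/frame, "
          f"~{raw_mbps:.1f} Mbit/s uncompressed at {FRAME_RATE} Hz")
```

For CIF this works out to roughly 36 Mbit/s uncompressed, far above the 40 kbit/s to 2 Mbit/s operating range of the codec.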
History
The discrete cosine transform (DCT), a form of lossy compression, was first proposed by Nasir Ahmed in 1972.[3] Ahmed developed a working algorithm with T. Natarajan and K. R. Rao in 1973,[3] and published it in 1974.[4][5] DCT would later become the basis for H.261.[6]
The first digital video coding standard was H.120, created by the CCITT (now ITU-T) in 1984.[7] H.120 was not usable in practice, as its performance was too poor.[7] H.120 was based on differential pulse-code modulation (DPCM), which had inefficient compression. During the late 1980s, a number of companies began experimenting with the much more efficient DCT compression for video coding, with the CCITT receiving 14 proposals for DCT-based video compression formats, in contrast to a single proposal based on vector quantization (VQ) compression. The H.261 standard was subsequently developed based on DCT compression.[6]
H.261 was developed by the CCITT Study Group XV Specialists Group on Coding for Visual Telephony (which later became part of ITU-T SG16), chaired by Sakae Okubo of NTT.[8] A number of companies were involved in its development, including Hitachi, PictureTel, NTT, BT, and Toshiba, among others.[9] Since H.261, DCT compression has been adopted by all the major video coding standards that followed.[6]
Whilst H.261 was preceded in 1984 by H.120 (which also underwent a revision in 1988 of some historic importance) as a digital video coding standard, H.261 was the first truly practical digital video coding standard (in terms of product support in significant quantities). In fact, all subsequent international video coding standards (MPEG-1 Part 2, H.262/MPEG-2 Part 2, H.263, MPEG-4 Part 2, H.264/MPEG-4 Part 10, and HEVC) have been based closely on the H.261 design. Additionally, the methods used by the H.261 development committee to collaboratively develop the standard have remained the basic operating process for subsequent standardization work in the field.[8]
Although H.261 was first approved as a standard in 1988, the first version was missing some significant elements necessary to make it a complete interoperability specification. Various parts of it were marked as "Under Study".[2] It was later revised in 1990 to add the remaining necessary aspects,[10] and was then revised again in 1993.[11] The 1993 revision added an Annex D entitled "Still image transmission", which provided a backward-compatible way to send still images with 704×576 luma resolution and 352×288 chroma resolution by using a staggered 2:1 subsampling horizontally and vertically to separate the picture into four sub-pictures that were sent sequentially.[11]
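The following Python sketch illustrates one plausible reading of that Annex D scheme: a 704×576 picture split into four 352×288 sub-pictures by 2:1 subsampling with the four possible phase offsets. The exact sampling phases are defined in Annex D of the Recommendation; this is only an illustration.

```python
import numpy as np

# Illustrative sketch only: one plausible reading of the Annex D "staggered
# 2:1 subsampling" that splits a 704x576 still image into four 352x288
# sub-pictures. The exact sampling phases are defined in Annex D itself.

def split_into_subpictures(image: np.ndarray) -> list[np.ndarray]:
    """Split a 704x576 plane into four 352x288 sub-pictures by taking
    every second sample with the four possible (row, column) phase offsets."""
    assert image.shape == (576, 704)
    return [image[dy::2, dx::2] for dy in (0, 1) for dx in (0, 1)]

def merge_subpictures(subs: list[np.ndarray]) -> np.ndarray:
    """Reassemble the full-resolution picture from the four sub-pictures."""
    full = np.empty((576, 704), dtype=subs[0].dtype)
    for sub, (dy, dx) in zip(subs, [(0, 0), (0, 1), (1, 0), (1, 1)]):
        full[dy::2, dx::2] = sub
    return full

still = np.random.randint(0, 256, (576, 704), dtype=np.uint8)
assert np.array_equal(merge_subpictures(split_into_subpictures(still)), still)
```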
H.261 design
The basic processing unit of the design is called a macroblock, and H.261 was the first standard in which the macroblock concept appeared. Each macroblock consists of a 16×16 array of luma samples and two corresponding 8×8 arrays of chroma samples, using 4:2:0 sampling and a YCbCr color space. The coding algorithm uses a hybrid of motion-compensated inter-picture prediction and spatial transform coding with scalar quantization, zig-zag scanning and entropy encoding.
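As a rough illustration (the names and structure are this example's, not the Recommendation's bitstream syntax), a macroblock under 4:2:0 sampling can be modelled as follows:

```python
import numpy as np
from dataclasses import dataclass, field

# Illustrative sketch only: the sample layout of one H.261 macroblock under
# 4:2:0 YCbCr sampling.

@dataclass
class Macroblock:
    y:  np.ndarray = field(default_factory=lambda: np.zeros((16, 16), dtype=np.uint8))
    cb: np.ndarray = field(default_factory=lambda: np.zeros((8, 8), dtype=np.uint8))
    cr: np.ndarray = field(default_factory=lambda: np.zeros((8, 8), dtype=np.uint8))

    def blocks(self) -> list[np.ndarray]:
        """The six 8x8 transform blocks: four luma, one Cb, one Cr."""
        return [self.y[r:r + 8, c:c + 8] for r in (0, 8) for c in (0, 8)] + [self.cb, self.cr]

mb = Macroblock()
assert len(mb.blocks()) == 6   # 4 x (8x8 luma) + Cb + Cr
```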
The inter-picture prediction reduces temporal redundancy, with motion vectors used to compensate for motion. Whilst only integer-valued motion vectors are supported in H.261, a blurring filter can be applied to the prediction signal – partially mitigating the lack of fractional-sample motion vector precision. Transform coding using an 8×8 discrete cosine transform (DCT) reduces the spatial redundancy. The DCT that is widely used in this regard was introduced by N. Ahmed, T. Natarajan and K. R. Rao in 1974.[12] Scalar quantization is then applied to round the transform coefficients to the appropriate precision determined by a step size control parameter, and the quantized transform coefficients are zig-zag scanned and entropy-coded (using a "run-level" variable-length code) to remove statistical redundancy.
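A minimal sketch of that transform path is shown below. It uses a uniform quantizer and a hand-rolled zig-zag scan purely to illustrate the structure of the pipeline; the normative H.261 quantizer and variable-length code tables differ.

```python
import numpy as np

# Minimal sketch of the transform/quantization path: 8x8 DCT, uniform scalar
# quantization with a step-size parameter, zig-zag scan, and run-level pairing.
# The actual H.261 quantizer and VLC tables differ; this only illustrates
# the structure of the pipeline.

def dct_matrix(n: int = 8) -> np.ndarray:
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

D = dct_matrix()

def forward_dct(block: np.ndarray) -> np.ndarray:
    return D @ block @ D.T

def quantize(coeffs: np.ndarray, step: int) -> np.ndarray:
    return np.round(coeffs / step).astype(int)   # uniform quantizer (illustrative)

def zigzag(coeffs: np.ndarray) -> list:
    """Scan an 8x8 array in zig-zag order (low to high frequency)."""
    order = sorted(((r, c) for r in range(8) for c in range(8)),
                   key=lambda rc: (rc[0] + rc[1],
                                   rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))
    return [coeffs[r, c] for r, c in order]

def run_level(scan: list) -> list:
    """Convert a scanned coefficient list into (run-of-zeros, level) pairs."""
    pairs, run = [], 0
    for v in scan:
        if v == 0:
            run += 1
        else:
            pairs.append((run, int(v)))
            run = 0
    return pairs

block = np.random.randint(0, 256, (8, 8)).astype(float) - 128
pairs = run_level(zigzag(quantize(forward_dct(block), step=16)))
```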
The H.261 standard actually only specifies how to decode the video. Encoder designers were left free to design their own encoding algorithms (such as their own motion estimation algorithms), as long as their output was constrained properly to allow it to be decoded by any decoder made according to the standard. Encoders are also left free to perform any pre-processing they want to their input video, and decoders are allowed to perform any post-processing they want to their decoded video prior to display. One effective post-processing technique that became a key element of the best H.261-based systems is called deblocking filtering. This reduces the appearance of block-shaped artifacts caused by the block-based motion compensation and spatial transform parts of the design. Indeed, blocking artifacts are probably a familiar phenomenon to almost everyone who has watched digital video. Deblocking filtering has since become an integral part of the more recent standards H.264 and HEVC (although even when using these newer standards, additional post-processing is still allowed and can enhance visual quality if performed well).
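To make the idea concrete, the toy filter below smooths samples on either side of each 8×8 block boundary. It is purely illustrative: it is not the deblocking filter specified in H.264 or HEVC, and H.261 itself leaves post-processing entirely to the decoder implementer.

```python
import numpy as np

# Toy post-processing sketch: soften blocking artifacts by smoothing the two
# samples that straddle every 8x8 block boundary. Purely illustrative.

def deblock(frame: np.ndarray, block_size: int = 8) -> np.ndarray:
    out = frame.astype(float)
    h, w = out.shape
    # Smooth across vertical block edges
    for x in range(block_size, w, block_size):
        a, b = out[:, x - 1].copy(), out[:, x].copy()
        out[:, x - 1] = 0.75 * a + 0.25 * b
        out[:, x]     = 0.25 * a + 0.75 * b
    # Smooth across horizontal block edges
    for y in range(block_size, h, block_size):
        a, b = out[y - 1, :].copy(), out[y, :].copy()
        out[y - 1, :] = 0.75 * a + 0.25 * b
        out[y, :]     = 0.25 * a + 0.75 * b
    return np.clip(np.round(out), 0, 255).astype(np.uint8)
```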
Design refinements introduced in later standardization efforts have resulted in significant improvements in compression capability relative to the H.261 design. This has resulted in H.261 becoming essentially obsolete, although it is still used as a backward-compatibility mode in some video-conferencing systems (such as H.323) and for some types of internet video. However, H.261 remains a major historical milestone in the field of video coding development.
Software implementations
The LGPL-licensed libavcodec library includes an H.261 encoder and decoder. It is supported by the free VLC media player and MPlayer multimedia players, and by the ffdshow and FFmpeg projects.
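As a quick usage sketch (the file names are placeholders), FFmpeg's H.261 encoder can be invoked from Python. H.261 only accepts CIF or QCIF picture sizes, so the input is scaled to 352×288 here:

```python
import subprocess

# Usage sketch (file names are placeholders): encode a clip with FFmpeg's
# H.261 encoder from libavcodec.
subprocess.run([
    "ffmpeg", "-y",
    "-i", "input.mp4",          # placeholder input file
    "-vf", "scale=352:288",     # H.261 requires CIF or QCIF dimensions
    "-c:v", "h261",             # libavcodec's H.261 encoder
    "-b:v", "384k",             # a bit rate in the p x 64 kbit/s range
    "output.avi",               # placeholder output container
], check=True)
```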
Patent holders
The following companies contributed patents towards the development of the H.261 format:[13]
- Hitachi
- PictureTel Corp.
- Graphics Communication Technologies, Ltd.[14]
- Nippon Telegraph and Telephone (NTT)
- BT Group
- Toshiba
- KDDI
- Alcatel
- Compression Labs, Inc.
- AT&T Corporation
- GPT Data Systems (GEC)
- Philips
- Sony
- Sharp Corporation
- Oki Electric Industry
- Matsushita Communication Industrial Co., Ltd.
- Mitsubishi Electric
- Fujitsu
- Orange S.A.
- NEC
- Electronics and Telecommunications Research Institute
References
- ^ "(Nokia position paper) Web Architecture and Codec Considerations for Audio-Visual Services" (PDF).
H.261, which (in its first version) was ratified in November 1988.
- ^ a b ITU-T (1988). "H.261 : Video codec for audiovisual services at p x 384 kbit/s - Recommendation H.261 (11/88)". Retrieved 2010-10-21.
- ^ a b Ahmed, Nasir (January 1991). "How I Came Up With the Discrete Cosine Transform". Digital Signal Processing. 1 (1): 4–5. doi:10.1016/1051-2004(91)90086-Z.
- ^ Ahmed, Nasir; Natarajan, T.; Rao, K. R. (January 1974), "Discrete Cosine Transform", IEEE Transactions on Computers, C-23 (1): 90–93, doi:10.1109/T-C.1974.223784
- ^ Rao, K. R.; Yip, P. (1990), Discrete Cosine Transform: Algorithms, Advantages, Applications, Boston: Academic Press, ISBN 978-0-12-580203-1
- ^ a b c Ghanbari, Mohammed (2003). Standard Codecs: Image Compression to Advanced Video Coding. Institution of Engineering and Technology. pp. 1–2. ISBN 9780852967102.
- ^ a b "The History of Video File Formats Infographic". RealNetworks. 22 April 2012. Retrieved 5 August 2019.
- ^ a b S. Okubo, "Reference model methodology – A tool for the collaborative creation of video coding standards", Proceedings of the IEEE, vol. 83, no. 2, Feb. 1995, pp. 139–150
- ^ "ITU-T Recommendation declared patent(s)". ITU. Retrieved 12 July 2019.
- ^ ITU-T (1990). "H.261 : Video codec for audiovisual services at p x 64 kbit/s - Recommendation H.261 (12/90)". Retrieved 2015-12-10.
- ^ a b ITU-T (1993). "H.261 : Video codec for audiovisual services at p x 64 kbit/s - Recommendation H.261 (03/93)". Retrieved 2015-12-10.
- ^ N. Ahmed, T. Natarajan and K. R. Rao, "Discrete Cosine Transform", IEEE Transactions on Computers, Jan. 1974, pp. 90–93.
- ^ "ITU-T Recommendation declared patent(s)". ITU. Retrieved 12 July 2019.
- ^ "Patent statement declaration registered as H261-07". ITU. Retrieved 11 July 2019.