- Description
- Characteristics of video stream
- Video formats
- See also
Description
The term "video" commonly refers to several types of carrier formats, which can be either digital (DVD, QuickTime, MPEG-4) or analog videotape (VHS, Betamax). Video can be created with mechanical cameras (which capture on celluloid film), video cameras (which capture an electric signal in the PAL or NTSC format) or digital cameras (which commonly capture in the digital MPEG-4 or DV format). Video quality depends essentially on the capture method and the storage used. Digital television (DTV) offers higher quality than earlier formats and has now become a mass standard (see List of digital television deployments by country).
A new generation of video appeared at the end of the 20th century: 3D video, digital video in three true dimensions. It became possible with the first cameras capable of real-time depth measurement. Six or eight such cameras can capture a 3D video stream. The format for this type of video is specified in MPEG-4 Part 16, Animation Framework eXtension (AFX).
Characteristics of video stream
Number of frames per second
The number of frames per second (frame rate) ranges from 6–8 frame/s for old mechanical cameras to 120 frame/s or more for new professional cameras. The PAL (Europe) and SECAM (France) standards specify 25 frame/s, while NTSC (North America) specifies 30 frame/s (more precisely, 29.97). The minimum frame rate needed for the illusion of a moving picture is about 10 frame/s.
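Frame rate also fixes the mapping between a frame's index and its position in time. As a small sketch (the function name is my own, and it assumes an integer frame rate such as PAL's 25 frame/s; NTSC's 29.97 frame/s would require drop-frame timecode, which is not handled here):

```python
def frame_to_timecode(frame, fps=25):
    """Convert a zero-based frame index to HH:MM:SS:FF timecode.

    Assumes an integer frame rate; drop-frame timecode for NTSC's
    29.97 frame/s is deliberately out of scope for this sketch.
    """
    ff = frame % fps                 # frames within the current second
    total_seconds = frame // fps
    ss = total_seconds % 60
    mm = (total_seconds // 60) % 60
    hh = total_seconds // 3600
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

print(frame_to_timecode(0))        # → 00:00:00:00
print(frame_to_timecode(25 * 90))  # 90 seconds at 25 frame/s → 00:01:30:00
```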
Interlacing
Video can be interlaced or progressive. Interlacing was originally conceived as a way to achieve good visual quality within the limitations of a narrow bandwidth. Each frame of interlaced video has two fields: one contains only the odd-numbered scan lines (the odd field) and the next contains only the even-numbered lines (the even field). NTSC, PAL and SECAM are interlaced formats. Interlacing is commonly denoted by the letter i after the resolution (e.g. 576i50).
In progressive scan systems, each frame is complete (i.e. includes all scan lines). The result is a much higher perceived resolution.
A procedure known as deinterlacing can be used to convert an interlaced stream (analog, DVD, satellite) for display on progressive devices (LCD TV sets, projectors, plasma panels). Any deinterlacing inevitably decreases video quality.
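The field structure and the quality loss of the simplest deinterlacing method can be sketched in a few lines. This is only an illustration (the function names are mine): a frame is modelled as a list of scan lines, and "bob" deinterlacing rebuilds a full frame from one field by line doubling, discarding the vertical detail held by the other field.

```python
def split_fields(frame):
    """Split an interlaced frame into its two fields.
    `frame` is a list of scan lines; the top line is line 1."""
    odd = frame[0::2]   # lines 1, 3, 5, ... (the odd field)
    even = frame[1::2]  # lines 2, 4, 6, ... (the even field)
    return odd, even

def bob_deinterlace(field):
    """'Bob' deinterlacing: double every line of one field to make a
    full frame. Half the vertical detail (the other field) is lost,
    which is why deinterlacing degrades quality."""
    out = []
    for line in field:
        out.extend([line, line])
    return out

frame = ["a1", "b1", "a2", "b2"]   # a 4-line interlaced frame
odd, even = split_fields(frame)
print(odd)                          # → ['a1', 'a2']
print(bob_deinterlace(odd))         # → ['a1', 'a1', 'a2', 'a2']
```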
Video resolution
The size of a video image is measured in pixels (for digital video) or lines (for analog video). Standard-definition television (SDTV) refers to 640×480i60 (NTSC) and 720×576i50 (PAL, SECAM) resolutions. New high-definition television (HDTV) defines resolutions up to 1920×1080i50.
The resolution of 3D video is measured in voxels (volume elements, each representing a value in three-dimensional space). For example, a resolution of 512×512×512 voxels, now used for simple 3D video, can be displayed even on PDAs.
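The storage cost of a voxel grid grows with the cube of its side, which is worth making concrete. A minimal sketch, assuming one byte per voxel (the byte depth is my assumption, not from the text):

```python
def voxel_frame_bytes(x, y, z, bytes_per_voxel=1):
    """Uncompressed size of one 3D-video frame: one value per voxel.
    bytes_per_voxel=1 is an illustrative assumption."""
    return x * y * z * bytes_per_voxel

size = voxel_frame_bytes(512, 512, 512)
print(size, "bytes =", size // 2**20, "MiB")  # → 134217728 bytes = 128 MiB
```

Even this "simple" resolution costs 128 MiB per uncompressed frame, which is why compressed formats such as MPEG-4 AFX matter for 3D video.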
Aspect ratio
Aspect ratio describes the proportions of video screens and video picture elements. The screen aspect ratio of a traditional television is 4:3, or about 1.33:1. High-definition televisions use an aspect ratio of 16:9, or about 1.78:1. The aspect ratio of a full 35 mm film frame with soundtrack (also known as "Academy standard") is around 1.37:1.
Pixels on computer monitors are usually square, but pixels used in digital video often have non-square aspect ratios, such as those used in the PAL and NTSC variants of the CCIR 601 digital video standard and the corresponding anamorphic widescreen formats.
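The relationship behind non-square pixels is: display aspect ratio = (stored width / stored height) × pixel aspect ratio. A small sketch deriving the pixel aspect ratio for a CCIR 601 PAL frame (the function name is mine, and the 16/15 result is the generic value; broadcast practice uses slightly different exact ratios):

```python
from fractions import Fraction

def pixel_aspect_ratio(width, height, display_aspect):
    """Pixel aspect ratio that makes a width×height stored image fill
    a screen of the given display aspect ratio:
    DAR = (width / height) * PAR  =>  PAR = DAR / (width / height)."""
    storage_aspect = Fraction(width, height)
    return Fraction(*display_aspect) / storage_aspect

# CCIR 601 PAL: 720×576 stored pixels shown on a 4:3 screen
print(pixel_aspect_ratio(720, 576, (4, 3)))  # → 16/15, slightly wide pixels
```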
Color space and Bits per pixel
The number of distinct colours that a pixel can represent depends on the number of bits per pixel (bpp). A very common way to reduce the average number of bits per pixel in digital video is chroma subsampling (for example 4:4:4, 4:2:2, 4:2:0).
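The savings from chroma subsampling can be computed from the J:a:b notation: in a J-pixel-wide, 2-row reference block there are 2·J luma samples, plus (a + b) samples each for the two chroma planes. A sketch of that arithmetic, assuming equal bit depth for all planes:

```python
def bits_per_pixel(j, a, b, bit_depth=8):
    """Average bits per pixel under J:a:b chroma subsampling.

    Per J×2 reference block: 2*j luma samples and (a + b) samples
    for each of Cb and Cr. Assumes the same bit depth per sample
    on every plane (an assumption of this sketch).
    """
    samples_per_block = 2 * j + 2 * (a + b)
    return bit_depth * samples_per_block / (2 * j)

print(bits_per_pixel(4, 4, 4))  # 4:4:4 → 24.0 bpp (no subsampling)
print(bits_per_pixel(4, 2, 2))  # 4:2:2 → 16.0 bpp
print(bits_per_pixel(4, 2, 0))  # 4:2:0 → 12.0 bpp
```

So 4:2:0, used by most consumer formats, halves the raw data rate relative to 4:4:4.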
Video quality
Suppose you want to evaluate the subjective video quality of a video processing system. To do so, go through the following steps:
- Choose the video sequences for testing (they are often named SRC).
- Choose the settings of the system that you want to evaluate (often named HRC).
- Choose a test method (how sequences are presented to experts and how their opinion is collected).
- Invite a sufficient number of experts (at least 15 is recommended).
- Carry out testing.
- Calculate the average mark for each HRC based on the experts' opinions.
Many subjective video quality methods are described in ITU-R recommendation BT.500. One standardized method is DSIS (Double Stimulus Impairment Scale): an expert is presented with an unimpaired reference video, then with the same video impaired, and is then asked to rate the second video on an impairment scale (from "impairments are imperceptible" to "impairments are very annoying").
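The final averaging step above can be sketched in a few lines. This is illustrative only (the function name and the HRC labels are mine); it assumes votes are collected on a five-grade scale such as DSIS's, with 5 meaning "impairments are imperceptible" and 1 "very annoying":

```python
def mean_opinion_scores(votes):
    """Average the experts' marks for each HRC (system setting).

    `votes` maps an HRC name to the list of scores collected from
    the experts. Returns the mean opinion score per HRC.
    """
    return {hrc: sum(scores) / len(scores) for hrc, scores in votes.items()}

votes = {
    "codec_A_2Mbit": [5, 4, 5, 4, 4],   # hypothetical expert panel
    "codec_A_1Mbit": [3, 3, 2, 4, 3],
}
print(mean_opinion_scores(votes))  # → {'codec_A_2Mbit': 4.4, 'codec_A_1Mbit': 3.0}
```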
Video compression method (for digital only)
The compression method (standard) used for the video. Video data contains spatial and temporal redundancy. Similarities can thus be encoded by registering only differences within a frame (spatial redundancy; known as intraframe compression, and very close to image compression) and/or between frames (temporal redundancy; known as interframe compression, which uses motion compensation and other techniques). The most common standards are MPEG-2, used for DVD and satellite television, and MPEG-4, used for home video.
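The idea of exploiting temporal redundancy can be shown in miniature: store only per-pixel differences from the previous frame, which are mostly zero in static scenes and therefore compress well. This toy sketch (names mine) omits everything a real codec adds, such as motion compensation, transforms and entropy coding:

```python
def encode_delta(prev_frame, frame):
    """Interframe coding in miniature: keep only per-pixel
    differences from the previous (reference) frame."""
    return [cur - ref for cur, ref in zip(frame, prev_frame)]

def decode_delta(prev_frame, delta):
    """Reconstruct a frame from the reference frame plus its delta."""
    return [ref + d for ref, d in zip(prev_frame, delta)]

f0 = [10, 10, 10, 10]          # reference (intra-coded) frame
f1 = [10, 11, 10, 10]          # next frame: only one pixel changed
delta = encode_delta(f0, f1)
print(delta)                    # → [0, 1, 0, 0] — mostly zeros, compresses well
print(decode_delta(f0, delta))  # → [10, 11, 10, 10], lossless round trip
```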
Bitrate (for digital only)
Bitrate is a measure of a video stream's information rate, quantified in bits per second (bit/s). A higher bitrate allows better video quality: for example, roughly 1 Mbit/s for VHS quality, 5 Mbit/s for DVD quality and 10 Mbit/s for HDTV quality. Several bitrate strategies can be used. Variable bitrate (VBR) maximizes visual quality by spending more bits on fast-motion scenes and fewer on slow ones, giving approximately constant quality. For streaming over channels of limited capacity (e.g. videoconferencing), constant bitrate (CBR) is used.
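For a CBR stream, bitrate translates directly into storage size: size = bitrate × duration. A sketch of that arithmetic (the function name is mine; container overhead and audio are ignored):

```python
def stream_size_bytes(bitrate_bit_s, duration_s):
    """Size of a constant-bitrate video stream in bytes:
    bitrate times duration, divided by 8 bits per byte.
    Ignores container overhead and audio in this sketch."""
    return bitrate_bit_s * duration_s // 8

# A 2-hour movie at a DVD-like 5 Mbit/s:
size = stream_size_bytes(5_000_000, 2 * 3600)
print(size / 1e9, "GB")  # → 4.5 GB, which fits a single-layer DVD
```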
Video formats
- Video display standards
- Video connection standards
- Analog tape formats (see Analog television)
- Digital tape formats (see Digital video)
- Optical disc storage formats
- Digital encoding formats
See also
- Video format
- Video usage