data compression



data compression

[′dad·ə kəm‚presh·ən]
(computer science)
The technique of reducing the number of binary digits required to represent data.

Data compression

The process of transforming information from one representation to another, smaller representation from which the original, or a close approximation to it, can be recovered. The compression and decompression processes are often referred to as encoding and decoding. Data compression has important applications in the areas of data storage and data transmission. Besides compression savings, other parameters of concern include encoding and decoding speeds and workspace requirements, the ability to access and decode partial files, and error generation and propagation.

The data compression process is said to be lossless if the recovered data are assured to be identical to the source; otherwise the compression process is said to be lossy. Lossless compression techniques are requisite for applications involving textual data. Other applications, such as those involving voice and image data, may be sufficiently flexible to allow controlled degradation in the data.

Data compression techniques are characterized by the use of an appropriate data model, which selects the elements of the source on which to focus; data coding, which maps source elements to output elements; and data structures, which enable efficient implementation.

Information theory dictates that, for efficiency, fewer bits be used for common events than for rare events. Compression techniques are based on using an appropriate model for the source data in which defined elements are not all equally likely. The encoder and the decoder must agree on an identical model. See Information theory.
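As an illustrative sketch (in Python, not part of the original article), the following builds a Huffman code over a toy string whose symbol frequencies are invented for the example; the point is only that higher-frequency symbols receive shorter codes:

```python
import heapq
from collections import Counter

def huffman_code_lengths(text):
    """Return the Huffman code length (in bits) assigned to each symbol.

    Common symbols end up near the root of the tree and thus get
    shorter codes, exactly as information theory prescribes.
    """
    freq = Counter(text)
    # Heap entries: (total weight, tiebreaker, {symbol: depth-so-far})
    heap = [(w, i, {s: 0}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        w1, _, left = heapq.heappop(heap)
        w2, _, right = heapq.heappop(heap)
        # Merging two subtrees pushes every symbol one level deeper.
        merged = {s: d + 1 for s, d in {**left, **right}.items()}
        heapq.heappush(heap, (w1 + w2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

lengths = huffman_code_lengths("aaaaaaaabbbbccd")  # a:8, b:4, c:2, d:1
# 'a' gets a 1-bit code; the rare 'c' and 'd' get 3-bit codes.
```

With these frequencies the eight-occurrence "a" is coded in a single bit, while "c" and "d" each take three bits.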

A static model is one in which the choice of elements and their assumed distribution is invariant. For example, the letter “e” might always be assumed to be the most likely character to occur. A static model can be predetermined, with correspondingly unpredictable compression results, or it can be built by the encoder by previewing the entire source data and determining element frequencies. The benefits of using a static model include the ability to decode without necessarily starting at the beginning of the compressed data.

An alternative dynamic or adaptive model assumes an initial choice of elements and distribution and, based on the beginning part of the source stream that has been processed prior to the datum presently under consideration, progressively modifies the model so that the encoding is optimal for data distributed similarly to recent observations. Some techniques may weight recently encountered data more heavily. Dynamic algorithms have the benefit of being able to adapt to changes in the ensemble characteristics. Most important, however, is the fact that the source is considered serially and output is produced directly without the necessity of previewing the entire source.
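A minimal sketch of such an adaptive model (Python, with an invented three-letter alphabet) keeps running symbol counts that are updated after each processed symbol, so probability estimates track the stream as it goes by:

```python
from collections import Counter

class AdaptiveModel:
    """Sketch of a dynamic (adaptive) model: every symbol starts with a
    count of 1 (so nothing has zero probability), and counts are updated
    as the stream is processed, so estimates adapt to recent input."""

    def __init__(self, alphabet):
        self.counts = Counter({s: 1 for s in alphabet})
        self.total = len(alphabet)

    def probability(self, symbol):
        return self.counts[symbol] / self.total

    def update(self, symbol):
        self.counts[symbol] += 1
        self.total += 1

model = AdaptiveModel("abc")
before = model.probability("a")   # 1/3 before any input is seen
for ch in "aaab":
    model.update(ch)
after = model.probability("a")    # 4/7 after a stream dominated by 'a'
```

Because encoder and decoder update identically from the same decoded history, no model has to be transmitted, and no preview of the whole source is needed.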

In a simple statistical model, frequencies of values (characters, strings, or pixels) determine the mapping. In the more general context model, the mapping is determined by the occurrence of elements, each consisting of a value together with particular adjacent values. For example, in English text, although “u” is generally only moderately likely to appear as the “next” character, if the immediately preceding character is a “q” then “u” is overwhelmingly likely to appear next.
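The q-then-u effect can be illustrated with a tiny order-1 context model sketch in Python (the sample string is invented for the example):

```python
from collections import defaultdict, Counter

def order1_model(text):
    """Sketch of an order-1 context model: for each character, count
    which characters follow it, giving conditional frequencies
    P(next | previous) instead of plain symbol frequencies."""
    contexts = defaultdict(Counter)
    for prev, nxt in zip(text, text[1:]):
        contexts[prev][nxt] += 1
    return contexts

sample = "quick quiet quote queue"
ctx = order1_model(sample)
# In this sample every 'q' is followed by 'u', so conditioned on a
# preceding 'q' the model predicts 'u' with certainty, even though 'u'
# is only moderately common in the text overall.
p_u_after_q = ctx["q"]["u"] / sum(ctx["q"].values())
```

A coder driven by these conditional counts can assign "u" a very short code after "q", which an unconditional frequency model cannot do.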

The use of a model determines the intended sequence of values. An additional mapping via one coding technique or a combination of coding techniques is used to determine the actual output. Several data coding techniques are in common use.

Digitized audio and video signals

The information content of speech, music, and television signals can be preserved by periodically sampling at a rate equal to twice the highest frequency to be preserved. This is referred to as Nyquist sampling. However, speech, music, and television signals are highly redundant, and use of simple Nyquist sampling to code them is inefficient. Reduction of redundancy and application of more efficient sampling result in compression of the information rate needed to represent the signal without serious impairment to the quality of the reconstructed source signal at a receiver. For speech signals, redundancy evident in pitch periodicity and in the formant (energy-peak) structure of the signal's spectrum, along with aural masking of quantizing noise, is used to compress the information rate. In music, which has much wider bandwidth than speech and far less redundancy, time-domain masking and frequency-domain masking are principally used to achieve compression. For television, redundancy evident in the horizontal and vertical correlation of the pixels of individual frames and in the frame-to-frame correlation of a moving picture, combined with visual masking that obscures quantizing noise resulting from the coding at low numbers of bits per sample, is used to achieve compression. See Television.

Compression techniques may be classified into two types: waveform coders and parametric coders. Waveform coders replicate a facsimile of a source-signal waveform at the receiver with a level of distortion that is judged acceptable. Parametric coders use a synthesizer at the receiver that is controlled by signal parameters extracted at the transmitter to remake the signal. The latter may achieve greater compression because of the information content added by the synthesizer model at the receiver.

Waveform compression methods include adaptive differential pulse-code modulation (ADPCM) for speech and music signals, audio masking for music, and differential encoding and sub-Nyquist sampling of television signals. Parametric encoders include vocoders for speech signals and encoders using orthogonal transform techniques for television.
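Of the waveform methods above, plain differential encoding is simple enough to sketch in Python: each sample is replaced by its difference from its predecessor, so a correlated signal yields small numbers that can then be coded in fewer bits (the sample values here are invented):

```python
def delta_encode(samples):
    """Sketch of differential encoding: store each sample as its
    difference from the previous one. Correlated signals produce
    small differences, which need fewer bits to code."""
    prev = 0
    out = []
    for s in samples:
        out.append(s - prev)
        prev = s
    return out

def delta_decode(deltas):
    """Invert delta_encode by accumulating the differences."""
    prev = 0
    out = []
    for d in deltas:
        prev += d
        out.append(prev)
    return out

signal = [100, 102, 104, 103, 101, 100]   # slowly varying, i.e. correlated
deltas = delta_encode(signal)             # [100, 2, 2, -1, -2, -1]
assert delta_decode(deltas) == signal     # exactly invertible
```

ADPCM builds on this idea by predicting each sample adaptively and quantizing only the prediction error.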

data compression

(algorithm)
See compression. The name was probably chosen to distinguish it from (electronic) signal compression.

data compression

There are two categories of data compression. The first reduces the size of a single file to save storage space and allow faster transmission. The second bundles and compresses a group of files for convenient storage and transmission.

#1 - Compressing a Single File
The JPEG image, MPEG video, MP3 audio and G.7xx voice formats are widely used "lossy" methods that analyze which pixels, video frames or sound waves can be removed forever without the average person noticing (see lossy compression). GIF images have no loss of pixels but may have a loss of colors (see GIF).

JPEG files can be reduced by as much as 80%; MPEG enables a two-hour HD movie to fit on a single disc, and MP3 sparked a revolution by reducing CD music by 90%. For a list of compression methods, see codec examples. See JPEG, GIF, MPEG, MP3, G.7xx and interframe coding.

#2 - Compressing a Group of Files (Archiving)
The second, "lossless" category compresses and restores data without the loss of a single bit. Although widely used for documents, the method is not aware of the content's purpose. It merely looks for repeatable patterns of 0s and 1s; the more patterns there are, the higher the compression ratio. Text documents compress the most, while binary and already-compressed files (JPEG, MPEG, etc.) compress the least.
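The contrast between pattern-rich and pattern-free data can be demonstrated with Python's standard zlib module, a DEFLATE implementation (the test data are invented):

```python
import os
import zlib

text = b"the quick brown fox " * 200   # highly repetitive "document"
random_data = os.urandom(len(text))    # patternless by construction

# Ratio of compressed size to original size (smaller is better).
text_ratio = len(zlib.compress(text)) / len(text)
random_ratio = len(zlib.compress(random_data)) / len(random_data)
# The repetitive text shrinks dramatically; the random bytes barely
# shrink at all (framing overhead can even make them slightly larger).
```

The same effect explains why re-zipping a JPEG or MPEG file gains almost nothing: the earlier compression already removed the repeatable patterns.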

Although lossless methods such as the ZIP format are used to reduce the size of a single, huge file, they are widely used to compress several files into one "archive." It is convenient to store and considerably more convenient to transmit a single file than to keep track of multiple files. See lossless compression, archive, archive formats and capacity optimization.
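The archive workflow can be sketched with Python's standard zipfile module; the file names and contents below are invented for the example:

```python
import os
import tempfile
import zipfile

# Bundle several files into one ZIP archive, then restore them.
with tempfile.TemporaryDirectory() as workdir:
    files = {"notes.txt": b"first file", "data.csv": b"a,b\n1,2\n"}
    for name, content in files.items():
        with open(os.path.join(workdir, name), "wb") as f:
            f.write(content)

    archive = os.path.join(workdir, "bundle.zip")
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        for name in files:
            zf.write(os.path.join(workdir, name), arcname=name)

    with zipfile.ZipFile(archive) as zf:
        restored = {name: zf.read(name) for name in zf.namelist()}

assert restored == files   # lossless: every bit is recovered
```

One archive file is both a unit of storage and a single attachment to transmit, which is the convenience the paragraph above describes.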

Lossless Methods (Dictionary and Statistical)
The widely used dictionary method builds a list of repeated phrases and replaces later occurrences with references to the list. For example, GIF images are compressed with the LZW dictionary method, while ZIP and JAR archives use DEFLATE, which combines a dictionary method (LZ77) with a statistical one (see LZW). The statistical method converts characters into variable-length strings of bits based on frequency of use (see Huffman coding).
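A bare-bones sketch of the LZW dictionary method in Python (decompression and the packing of codes into bits are omitted for brevity):

```python
def lzw_compress(data):
    """Minimal LZW sketch: grow a dictionary of phrases seen so far and
    emit the code of the longest known phrase at each step."""
    dictionary = {bytes([i]): i for i in range(256)}  # all single bytes
    phrase = b""
    codes = []
    for byte in data:
        candidate = phrase + bytes([byte])
        if candidate in dictionary:
            phrase = candidate            # keep extending a known phrase
        else:
            codes.append(dictionary[phrase])
            dictionary[candidate] = len(dictionary)  # learn the new phrase
            phrase = bytes([byte])
    if phrase:
        codes.append(dictionary[phrase])
    return codes

codes = lzw_compress(b"abababab")
# Repeated "ab" phrases collapse: 8 input bytes become 5 output codes.
```

As the input grows, ever-longer repeated phrases enter the dictionary, so each emitted code stands for more and more input bytes.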

