(information channel). (1) The set of devices, connected by communications lines, for receiving, transmitting, converting, and recording information. The initial and terminal devices may be telephones or telegraphs, tape recorders, punchers, computers, lasers, or acoustical devices. In communications, use is ordinarily made of radio channels, acoustical and optical communications lines, signal cable, wires, and telephone, telegraph, and radio relay lines. The technical characteristics of a channel are determined by the operating principle of the devices included in it, the type of signal, the properties and composition of the physical media in which the electrical, acoustic, and light signals are propagated, and the properties of the code or language being used. The effectiveness of channels is characterized by the speed and reliability of information transmission, the reliability of operation of the devices, and the time delay of signals.
(2) The aggregate of digital computer devices directly involved in the reception, storage, processing, and readout of information.
REFERENCES
Goldman, S. Teoriia informatsii. Moscow, 1957. (Translated from English.)
Shannon, C. Raboty po teorii informatsii i kibernetiki. Moscow, 1963. (Translated from English.)
E. IA. DASHEVSKII
in information theory, any device for transmitting information. Unlike engineering, information theory abstracts from the concrete nature of these devices, much as geometry studies the volumes of bodies in abstraction from the material of which they are made. In information theory specific communications systems are considered only from the point of view of the amount of information that can be transmitted reliably using them.
The concept of the channel is approached in the following way: the channel is defined by the set of “permissible” messages (or signals) x at the input, the set of messages (signals) y at the output, and the set of conditional probabilities p(y|x) of receiving signal y at the output when the input signal is x. The conditional probabilities p(y|x) describe the statistical properties of the “noise” (interference) that distorts signals during the transmission process. If p(y|x) = 1 for y = x and p(y|x) = 0 for y ≠ x, the channel is called a channel without noise.
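This triple of input set, output set, and conditional probabilities can be sketched in code. The representation below (a dictionary of dictionaries, with hypothetical example channels) is an illustration, not part of the article:

```python
# Sketch (not from the article): a discrete channel given by its
# conditional probabilities p(y|x), stored as a dict of dicts, with a
# check of the noiseless condition p(y|x) = 1 for y = x, 0 for y != x.

def is_noiseless(channel):
    """channel: dict mapping each input x to a dict {y: p(y|x)}."""
    return all(
        p == (1.0 if y == x else 0.0)
        for x, outputs in channel.items()
        for y, p in outputs.items()
    )

identity = {"0": {"0": 1.0}, "1": {"1": 1.0}}   # hypothetical noiseless channel
noisy = {"0": {"0": 0.9, "1": 0.1},
         "1": {"0": 0.1, "1": 0.9}}             # hypothetical noisy channel

print(is_noiseless(identity))  # True
print(is_noiseless(noisy))     # False
```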
A distinction is made between discrete and continuous channels in accordance with the structure of input and output signals. In discrete channels the signals at the input and output are sequences of “letters” from one and the same or different “alphabets” (codes). In continuous channels the input and output signals are functions of the continuous parameter t (time). Mixed cases are also possible, but it is usually preferred to consider one of the two cases as an idealization.
The ability of a channel to transmit information is characterized by a certain number—the carrying capacity, or simply capacity, of the channel. It is defined as the maximum amount of information relative to a signal at the input contained in a signal at the output (calculated per unit of time).
To be more precise, suppose that the input signal ξ assumes several values x with probabilities p(x). Then according to probability theory, the probabilities q(y) that the signal η will assume the value y at the output can be calculated by the formula

q(y) = Σx p(x)p(y|x),

just as the probabilities p(x, y) that the events ξ = x and η = y will occur jointly can be determined by

p(x, y) = p(x)p(y|x)

The last formula is used to compute the amount of information (in binary units) I(ξ, η) = I(η, ξ) and its average value per unit of time

R = I(ξ, η)/T,
where T is the duration of ξ. The least upper bound C of the quantities R, taken over all permissible signals at the input, is called the channel capacity. Computing the capacity, like computing entropy, is easier in the discrete case and significantly more complex in the continuous case, where it is based on the theory of stationary random processes.
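The quantities q(y), p(x, y), and the amount of information I(ξ, η) defined above can be computed directly. The sketch below (function and variable names are illustrative) evaluates them for a binary channel in which each symbol is distorted with probability 0.1 and the input is uniform:

```python
from math import log2

def mutual_information(p_x, p_y_given_x):
    """I(xi, eta) = sum over x, y of p(x, y) * log2[p(x, y) / (p(x) q(y))]."""
    # q(y) = sum over x of p(x) p(y|x)
    q_y = {}
    for x, px in p_x.items():
        for y, pyx in p_y_given_x[x].items():
            q_y[y] = q_y.get(y, 0.0) + px * pyx
    info = 0.0
    for x, px in p_x.items():
        for y, pyx in p_y_given_x[x].items():
            p_xy = px * pyx          # p(x, y) = p(x) p(y|x)
            if p_xy > 0:
                info += p_xy * log2(p_xy / (px * q_y[y]))
    return info

# Hypothetical binary channel: each symbol is flipped with probability 0.1.
p_x = {0: 0.5, 1: 0.5}
channel = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.1, 1: 0.9}}
print(round(mutual_information(p_x, channel), 3))  # 0.531 binary units per symbol
```

For this symmetric channel with a uniform input, the value 1 − H(0.1) ≈ 0.531 is also the capacity per symbol.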
Simplest of all is the case of a discrete channel without noise. Information theory establishes that in this case the general definition of the capacity C is equivalent to the following:

C = lim(T→∞) log₂N(T)/T,

where N(T) is the number of permissible signals of duration T.
Example 1. Suppose the “alphabet” of a channel without noise consists of two “letters,” 0 and 1, each of duration τ seconds. The permissible signals of duration T = nτ are the sequences of n symbols 0 and 1, and their number is N(T) = 2ⁿ. Accordingly,

C = (1/(nτ)) log₂2ⁿ = 1/τ binary units per second.
Example 2. Suppose that the symbols 0 and 1 have durations of τ and 2τ seconds, respectively. In this case there are fewer permissible signals of duration T = nτ than in Example 1; for n = 3, for example, there are only three (000, 01, and 10) instead of eight. Now we may calculate

C = lim(T→∞) log₂N(T)/T = (1/τ) log₂[(1 + √5)/2] ≈ 0.69/τ.
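The count N(T) in Example 2 satisfies the recurrence N(n) = N(n − 1) + N(n − 2) (measuring duration in units of τ), since a permissible signal ends either in a 0 of duration τ or in a 1 of duration 2τ. A sketch, with τ taken as 1 and names illustrative:

```python
from math import log2
from functools import lru_cache

# Count the signals of total duration n (in units of tau) built from a
# symbol of duration 1 ("0") and a symbol of duration 2 ("1").
@lru_cache(maxsize=None)
def n_signals(n):
    if n < 0:
        return 0
    if n == 0:
        return 1                    # the empty signal
    return n_signals(n - 1) + n_signals(n - 2)

print(n_signals(3))                 # 3 permissible signals: 000, 01, 10
# log2 N(T)/T approaches log2((1 + sqrt(5))/2), about 0.694 per unit time:
print(log2(n_signals(60)) / 60)
```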
When it is necessary to transmit on a given channel messages written using a certain code, these messages must be converted into permissible signals in the channel, that is, the appropriate encoding must be carried out. After transmission the decoding operation must be performed, that is, the inverse operation of converting the signal back into the message. Naturally it is advisable to do the encoding so that the average time spent on transmission is minimal. Where the duration of symbols at the channel input is identical, this means that one must select the most economical code with an “alphabet” that coincides with the input “alphabet” of the channel.
When the procedure for matching the source with the channel as described above is used, the phenomenon of delay occurs. This may be clarified by Example 3.
Example 3. Suppose that a message source sends independent symbols that assume the values x1, x2, x3, and x4 with probabilities equal to, respectively, 1/2, 1/4, 1/8, and 1/8, at time intervals of 1/v (that is, at a speed of v). Assume that the channel is without noise, as in Example 1, and that coding is done instantaneously. The coded signal is transmitted on the channel if the channel is free; otherwise it waits (is placed in “memory”) until the channel is free. Now if, for example, we have selected the code x1 = 00, x2 = 01, x3 = 10, x4 = 11 and v ≤ 1/(2τ) (that is, 1/v ≥ 2τ), then in the time between the appearance of two successive values of x it will be possible to transmit the coded notation, and the channel will be free. Thus, in this case the time interval 2τ passes between the appearance of a message “letter” and the transmission of its coded notation along the channel. When v > 1/(2τ), a different result is observed: the nth “letter” of the message appears at the moment (n − 1)/v, but its coded notation is transmitted along the channel only by the moment 2nτ. Therefore, the time interval between the appearance of the nth “letter” of the message and the moment of its recovery after decoding of the transmitted signal will be greater than n(2τ − 1/v), which approaches infinity as n → ∞. Thus, in this case the transmission will be carried on with unlimited delay. Therefore, to be able to transmit without unlimited delay with the given code, satisfaction of the inequality v ≤ 1/(2τ) is necessary and sufficient. Selection of a better code can increase the transmission speed, making it as close as one wants to the capacity of the channel, but this limit cannot be exceeded (needless to say, while preserving the requirement that delay is limited). This statement is completely general and is called the fundamental theorem for channels without noise.
It is relevant to add the following note in special relation to Example 3. For the messages considered, the binary code x1 = 0, x2 = 10, x3 = 110, x4 = 111 is optimal. Because of the different lengths of the coded notations, the delay time wn for the nth “letter” of the initial message will be a random variable. When v < 1/τ̄, where τ̄ = 1.75τ is the average duration of the coded notation of one “letter” (so that 1/τ̄ is the maximum transmission speed for this source), and n → ∞, the average value of wn approaches a certain limit m(v), which depends on v. As v approaches the critical value 1/τ̄, the value of m(v) increases in proportion to (τ̄⁻¹ − v)⁻¹. This once again reflects the general proposition that the endeavor to make the transmission speed as close as possible to its maximum involves an increase in the delay time and in the necessary size of the “memory” of the coding device.
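The optimality of the code x1 = 0, x2 = 10, x3 = 110, x4 = 111 for the source of Example 3 can be checked by comparing its average length with the entropy of the source; a brief sketch:

```python
from math import log2

# Sketch: the average length of the code x1=0, x2=10, x3=110, x4=111
# equals the entropy of the source of Example 3, so no binary code
# encoding letters independently does better.
probs = [1/2, 1/4, 1/8, 1/8]
lengths = [1, 2, 3, 3]          # lengths of the coded notations above

avg_length = sum(p * l for p, l in zip(probs, lengths))
entropy = -sum(p * log2(p) for p in probs)
print(avg_length, entropy)      # 1.75 1.75 (binary units per "letter")
```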
The assertion of the fundamental theorem (with “almost error-free” substituted for error-free transmission) also applies to channels with noise. This fact, which is truly fundamental for the entire theory of information transmission, is called Shannon’s theorem. The probability of erroneous transmission through channels with noise can be reduced by using so-called noise-combating (error-correcting) codes.
Example 4. Suppose that the input “alphabet” of the channel consists of the two symbols 0 and 1 and that the noise acts as follows: during transmission each symbol may, with a small probability p (for example, p = 1/10), change into the other or, with probability q = 1 − p, remain undistorted. The use of a noise-combating code essentially amounts to selecting a new “alphabet” at the input of the channel. Its “letters” are n-element chains of the symbols 0 and 1 that differ from one another in a sufficiently large number of positions D. Thus, for n = 5 and D = 3 the new “letters” can be 00000, 01110, 10101, and 11011. If the probability of more than one error in a group of five characters is small, then even when distorted these new “letters” can hardly be confused. For example, if the signal 10001 is received, it almost certainly came from 10101.
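Decoding in Example 4 amounts to choosing the codeword nearest in the number of differing positions (the Hamming distance) to the received chain; a sketch, with helper names that are illustrative rather than from the text:

```python
# Sketch of minimum-distance decoding for the code of Example 4.
CODEWORDS = ["00000", "01110", "10101", "11011"]

def hamming(a, b):
    """Number of positions in which two equal-length chains differ."""
    return sum(c1 != c2 for c1, c2 in zip(a, b))

def decode(received):
    """Return the codeword nearest to the received five-character signal."""
    return min(CODEWORDS, key=lambda w: hamming(w, received))

print(decode("10001"))  # 10101: the single distorted character is corrected
```

Since the codewords differ pairwise in at least D = 3 positions, any single error leaves the received chain strictly closer to the original codeword than to any other.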
It turns out that with a proper selection of sufficiently large n and D this method is significantly more effective than simple repetition (that is, using “alphabets” of the 000, 111 type). However, improving the transmission process in this way inevitably involves greatly increasing the complexity of the coding and decoding devices. For example, it has been calculated that if p = 10⁻² initially and this value must be decreased to p₁ = 10⁻⁴, then the length of the code chain n must be selected no less than 25 (or 380), depending on whether 53 percent (or 80 percent) of the channel capacity is to be used.
IU. V. PROKHOROV
(1) A short natural passageway that links such water basins as lakes or a lake with a river. Less frequently, it links two rivers or a river with a lake.
(2) A river’s secondary watercourse when it is divided by islands into several branches.
the part of a valley floor where a body of water flows. The channels of large rivers range in width from a few meters to tens of kilometers, for example, in the lower courses of the Ob’, Lena, and Amazon. As river size increases, the increase in channel depth is less than the increase in width, according approximately to the relation √b/h = A, where h is the average depth, b is the average width, and A is a coefficient dependent on the character of the soils. Along the length of a channel, deep spots (pools) alternate with shallow stretches (shoals). The channels of lowland rivers are usually winding or divided into arms and may be formed in muddy, sandy, or gravelly deposits. The channels of mountain rivers are straighter, often have rapids and waterfalls, and usually contain large boulders.
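Assuming the width-depth relation has the form √b/h = A (a reconstruction; the coefficient and widths below are hypothetical), the slower growth of depth can be illustrated numerically: a 100-fold increase in width gives only a 10-fold increase in depth.

```python
from math import sqrt

# Illustration with hypothetical numbers: under sqrt(b)/h = A, the
# average depth h grows only as the square root of the average width b.
def average_depth(b, A):
    return sqrt(b) / A

A = 2.5  # hypothetical coefficient for a given soil
print(average_depth(10.0, A))    # about 1.26
print(average_depth(1000.0, A))  # about 12.65, i.e. 10 times deeper
```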
a beam or strip, usually made of metal, with a squared-off, U-shaped cross section, with a height of 50–400 mm and a wall thickness of 4–15 mm. Steel channels are produced primarily by hot rolling billets in section mills (see ROLLED SECTION). Channels with thin flanges are produced on section-bending machines; those made of nonferrous metals are usually manufactured by extrusion through a shaped slot. Several dozen shapes and sizes of channels are produced. They are used primarily in construction.
Some notable channels are "#initgame", "#hottub" and "#report". At times of international crisis, "#report" has hundreds of members, some of whom take turns listening to various news services and typing in summaries of the news, or in some cases, giving first-hand accounts of the action (e.g. Scud missile attacks in Tel Aviv during the Gulf War in 1991).
(1) The distribution of IT products through independent sales organizations. The manufacturer sells its products either directly to IT resellers (the dealers), which are the point of contact with the customer, or to an IT distributor organization that sells to the dealers. Manufacturers that sell in the channel rely on the sales ability of the dealers and the customer relationships they have built up over the years. Sometimes, manufacturers also compete with the channel by selling directly to the customer via catalogs and the Web.
(2) A high-speed mainframe subsystem that provides a pathway between the CPU and the control units of peripheral devices. Each channel is an independent unit that transfers data concurrently with other channels. In contrast, the PCI bus in a desktop computer is shared among all attached devices. See mainframe and PCI.
(3) The physical connecting medium between devices in a network; for example, twisted wire pairs, coaxial cable and optical fibers.
(4) A frequency assigned to a TV or radio station, which allows it to transmit over the air simultaneously with other broadcasters. See carrier.
(5) See alpha channel.