The following article is from The Great Soviet Encyclopedia (1979). It might be outdated or ideologically biased.



P-vector, a tensor that is skew-symmetric with respect to any two of its indices. It is thus a tensor with either only covariant indices (subscripts) or only contravariant indices (superscripts), where each index can take on values from 1 to n, and a component of a p-vector changes sign when any two of its indices are interchanged.
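As an illustration (not part of the original article), a minimal NumPy sketch of the defining property for the 2-index case: a skew-symmetric array negates when its two indices are interchanged.

```python
import numpy as np

# Build a skew-symmetric 2-index tensor by antisymmetrizing a random array.
rng = np.random.default_rng(0)
m = rng.standard_normal((4, 4))
a = m - m.T  # a[i, j] = -a[j, i]

# Interchanging the two indices (here, transposing) changes the sign
# of every component.
assert np.allclose(a, -a.T)
```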

If the degree (that is, the number of indices) of a p-vector is equal to 2, 3, …, m, we speak of a 2-vector, 3-vector, …, m-vector, respectively. For example, a_{ij} is a covariant 2-vector if a_{ij} = −a_{ji}, and b^{ijk} is a contravariant 3-vector if b^{ijk} = −b^{jik} = b^{jki} = −b^{ikj} = b^{kij} = −b^{kji}.
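A small sketch (NumPy; the helper names are mine, not from the article) that antisymmetrizes an arbitrary 3-index array and verifies the six sign relations for a 3-vector: even permutations of (i, j, k) leave a component unchanged, odd permutations negate it.

```python
import itertools
import math

import numpy as np

def perm_sign(perm):
    """Sign of a permutation, computed by counting inversions."""
    sign = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                sign = -sign
    return sign

def antisymmetrize(t):
    """Average sign(p) * (t with axes permuted by p) over all permutations p."""
    out = np.zeros_like(t)
    for p in itertools.permutations(range(t.ndim)):
        out += perm_sign(p) * np.transpose(t, p)
    return out / math.factorial(t.ndim)

rng = np.random.default_rng(0)
b = antisymmetrize(rng.standard_normal((3, 3, 3)))

# Even permutations of (i, j, k) leave b unchanged; odd ones negate it.
assert np.allclose(b, -np.transpose(b, (1, 0, 2)))  # b^ijk = -b^jik
assert np.allclose(b,  np.transpose(b, (1, 2, 0)))  # b^ijk =  b^jki
assert np.allclose(b, -np.transpose(b, (0, 2, 1)))  # b^ijk = -b^ikj
```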

If only those components of the m-vector ω_{i₁i₂…i_m} are retained for which i₁ < i₂ < … < i_m, then C(n, m) = n!/(m!(n − m)!) essential components will remain.
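A quick check of this count using only the Python standard library (the dimensions n and m are illustrative): the strictly increasing index tuples are exactly the m-element subsets of {1, …, n}, of which there are C(n, m).

```python
from itertools import combinations
from math import comb

n, m = 5, 3  # illustrative dimensions, not from the article

# Essential components of an m-vector: index tuples with i1 < i2 < ... < im.
essential = list(combinations(range(1, n + 1), m))
assert len(essential) == comb(n, m)  # C(5, 3) = 10
```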

The components of a p-vector can be arranged in a certain way in the form of a rectangular matrix of n rows and n^{p−1} columns, whose rank is called the rank of the p-vector. If a p-vector’s rank is equal to its degree (valence), the p-vector is the exterior product of p tensors of degree one and is said to be simple.
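A sketch of the simplest case (NumPy; for a 2-vector the component matrix is just the n × n array itself): the exterior product of two degree-one tensors gives a simple 2-vector, and its matrix rank equals its degree, 2.

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.standard_normal(5)
v = rng.standard_normal(5)

# Exterior product of two degree-one tensors: (u ^ v)_ij = u_i v_j - u_j v_i.
wedge = np.outer(u, v) - np.outer(v, u)

assert np.allclose(wedge, -wedge.T)       # skew-symmetric: a 2-vector
assert np.linalg.matrix_rank(wedge) == 2  # rank equals degree => simple
```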

The Great Soviet Encyclopedia, 3rd Edition (1970-1979). © 2010 The Gale Group, Inc. All rights reserved.