# linear transformation


## linear transformation

[′lin·ē·ər ‚tranz·fər′mā·shən] A transformation *T* defined in a vector space *E* and having its values in another vector space over the same field, such that if ƒ and *g* are vectors in *E* and *c* is a scalar, then *T*(ƒ + *g*) = *T*ƒ + *T*g and *T*(*c*ƒ) = *c*(*T*ƒ). Also known as homogeneous transformation; linear function; linear operator.
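The two defining identities can be checked numerically for a concrete transformation. The sketch below (an illustrative Python example, not part of the original entry; the helper names are ours) tests additivity and homogeneity for a rotation of the plane:

```python
import math

def rotate(v, alpha):
    """Rotate a 2-D vector v = (x, y) through the angle alpha about the origin."""
    x, y = v
    return (x * math.cos(alpha) - y * math.sin(alpha),
            x * math.sin(alpha) + y * math.cos(alpha))

def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

def scale(c, v):
    return (c * v[0], c * v[1])

alpha = 0.7
f, g, c = (1.0, 2.0), (-3.0, 0.5), 4.2

# T(f + g) == T f + T g  (additivity)
lhs = rotate(add(f, g), alpha)
rhs = add(rotate(f, alpha), rotate(g, alpha))
assert all(abs(a - b) < 1e-12 for a, b in zip(lhs, rhs))

# T(c f) == c (T f)  (homogeneity)
assert all(abs(a - b) < 1e-12
           for a, b in zip(rotate(scale(c, f), alpha),
                           scale(c, rotate(f, alpha))))
```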

## Linear Transformation

A linear transformation of the variables *x*_{1}, *x*_{2}, ..., *x _{n}* is the replacement of these variables by new variables *x*′_{1}, *x*′_{2}, ..., *x*′_{n} in terms of which the initial variables are expressed linearly, that is, by expressions of the form

*x*_{1} = *a*_{11}*x*′_{1} + *a*_{12}*x*′_{2} + ⋯ + *a*_{1n}*x*′_{n} + *b*_{1}

*x*_{2} = *a*_{21}*x*′_{1} + *a*_{22}*x*′_{2} + ⋯ + *a*_{2n}*x*′_{n} + *b*_{2}

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

*x _{n}* = *a*_{n1}*x*′_{1} + *a*_{n2}*x*′_{2} + ⋯ + *a*_{nn}*x*′_{n} + *b*_{n}

Here *a _{ij}* and *b _{i}*, for *i*, *j* = 1, 2, ..., *n*, are arbitrary numerical coefficients. If *b*_{1}, *b*_{2}, ..., *b*_{n} are all equal to zero, then the linear transformation of the variables is homogeneous.

As a simple example of a linear transformation of variables we can take the formulas for the transformation of rectangular coordinates in the plane

*x* = *x*′ cos *α* - *y*′ sin *α* + *a*

*y* = *x*′ sin *α* + *y*′ cos *α* + *b*
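These coordinate formulas translate directly into code. A small illustrative Python sketch (the function name is ours):

```python
import math

def old_coords(xp, yp, alpha, a, b):
    """Old coordinates (x, y) from new coordinates (x', y') after
    rotating the axes through alpha and translating by (a, b)."""
    x = xp * math.cos(alpha) - yp * math.sin(alpha) + a
    y = xp * math.sin(alpha) + yp * math.cos(alpha) + b
    return x, y

# A 90-degree rotation with no translation sends (1, 0) to (0, 1).
x, y = old_coords(1.0, 0.0, math.pi / 2, 0.0, 0.0)
assert abs(x) < 1e-12 and abs(y - 1.0) < 1e-12
```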

If the determinant *D* = ǀ*a _{ij}*ǀ formed by the coefficients of the variables is not equal to zero, then the new variables *x*′_{1}, *x*′_{2}, ..., *x*′_{n} can also be linearly expressed in terms of the old variables. Thus, in our example,

*D* = cos² *α* + sin² *α* = 1 ≠ 0, and

*x*′ = *x* cos *α* + *y* sin *α* + *a*_{1}

*y*′ = −*x* sin *α* + *y* cos *α* + *b*_{1}

where *a*_{1} = −*a* cos *α* − *b* sin *α* and *b*_{1} = *a* sin *α* − *b* cos *α*. Other examples of linear transformations of variables are transformations of affine and (homogeneous) projective coordinates and the substitutions of variables involved in the reduction of quadratic forms.
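The inverse formulas can be verified by composing the two transformations: a round trip should return the original coordinates. An illustrative Python check (function names are ours):

```python
import math

def forward(xp, yp, alpha, a, b):
    # x = x' cos(alpha) - y' sin(alpha) + a,  y = x' sin(alpha) + y' cos(alpha) + b
    return (xp * math.cos(alpha) - yp * math.sin(alpha) + a,
            xp * math.sin(alpha) + yp * math.cos(alpha) + b)

def inverse(x, y, alpha, a, b):
    # x' = x cos(alpha) + y sin(alpha) + a1,  y' = -x sin(alpha) + y cos(alpha) + b1
    a1 = -a * math.cos(alpha) - b * math.sin(alpha)
    b1 = a * math.sin(alpha) - b * math.cos(alpha)
    return (x * math.cos(alpha) + y * math.sin(alpha) + a1,
            -x * math.sin(alpha) + y * math.cos(alpha) + b1)

alpha, a, b = 0.3, 2.0, -1.0
xp, yp = 1.5, -0.7
x, y = forward(xp, yp, alpha, a, b)
xp2, yp2 = inverse(x, y, alpha, a, b)
assert abs(xp2 - xp) < 1e-12 and abs(yp2 - yp) < 1e-12
```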

A linear transformation of vectors (or a linear transformation on a vector space) is a rule that associates to a vector x in *n*-dimensional space a vector x′ whose coordinates are linear and homogeneous functions of the coordinates of x:

*x*_{1}′ = *a*_{11}*x*_{1} + *a*_{12}*x*_{2} + ⋯ + *a*_{1n}*x _{n}*

*x*_{2}′ = *a*_{21}*x*_{1} + *a*_{22}*x*_{2} + ⋯ + *a*_{2n}*x _{n}*

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

*x _{n}*′ = *a*_{n1}*x*_{1} + *a*_{n2}*x*_{2} + ⋯ + *a*_{nn}*x _{n}*

or, briefly,

x′ = *Ax*

For example, projection to a coordinate plane, say the plane *xOy*, is a linear transformation of three-dimensional vector space. This transformation associates to each vector a with coordinates *x, y, z* a vector b, whose coordinates *x′, y′, z′* are expressed in terms of *x, y, z* by *x*′ = *x, y*′ = *y*, and *z*′ = 0. An example of a linear transformation of the plane is its rotation through an angle *α* about the origin. The matrix

formed by the coefficients of a linear transformation *A* is the matrix of *A*. The matrices of the linear transformations of projection and rotation given above are, respectively, the 3 × 3 matrix with rows (1, 0, 0), (0, 1, 0), (0, 0, 0) and the 2 × 2 matrix with rows (cos *α*, −sin *α*), (sin *α*, cos *α*).
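In coordinates, applying a linear transformation is multiplication by its matrix. An illustrative Python sketch of x′ = *A*x using the projection and rotation matrices (helper name is ours):

```python
import math

def apply(A, x):
    """Multiply the matrix A of a linear transformation by the vector x."""
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

# Projection of three-dimensional space onto the xOy plane: z' = 0.
P = [[1, 0, 0],
     [0, 1, 0],
     [0, 0, 0]]
assert apply(P, [3.0, 4.0, 5.0]) == [3.0, 4.0, 0.0]

# Rotation of the plane through an angle alpha about the origin.
alpha = math.pi / 2
R = [[math.cos(alpha), -math.sin(alpha)],
     [math.sin(alpha),  math.cos(alpha)]]
x_new = apply(R, [1.0, 0.0])
assert abs(x_new[0]) < 1e-12 and abs(x_new[1] - 1.0) < 1e-12  # (1,0) -> (0,1)
```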

A linear transformation on a vector space can be defined (as is generally done) without using a coordinate system. Thus, a correspondence x → y = *A*x is a linear transformation if *A*(x + y) = *A*x + *A*y and *A*(αx) = α*A*(x) for any vectors x and y and for any number α. In different coordinate systems, different matrices (and, consequently, different formulas for the transformation of coordinates) correspond to the same linear transformation.

Linear transformations include, in particular, the zero transformation *O* that sends all vectors to the zero vector 0 (*O*x = 0) and the identity transformation *E* that leaves all vectors invariant (*E*x = x). In any coordinate system, these linear transformations are represented by the zero and identity matrices, respectively.

There are natural definitions of addition and multiplication of linear transformations on a vector space. Thus, the sum of two linear transformations *A* and *B* is the linear transformation *C*, which sends any vector x to the vector *C*x = *A*x + *B*x, and the product of the linear transformations *A* and *B* is the result of their successive application: *C* = *AB* if *C*x = *A*(*B*x).

By virtue of these definitions, the set of all linear transformations of a vector space forms a ring. The matrix of a sum (product) of linear transformations is equal to the sum (product) of the matrices of the linear transformations. Since the product of linear transformations, like that of matrices, is not commutative, the order of the transformations in a product is important. A linear transformation can be multiplied by numbers. If the linear transformation *A* sends the vector x to the vector *y* = *A*x, then α*A* sends x to α*y*. We illustrate these definitions by the following examples.

(1) If *A* and *B* denote the operation of projection to the *Ox* and *Oy* axes in three-dimensional space, then *A* + *B* will be the projection to the *xOy* plane, and *AB* = 0.

(2) If *A* and *B* are rotations of a plane about the origin through angles Φ and ψ, respectively, then *AB* is the rotation through the angle Φ + ψ.

(3) The product of the identity transformation *E* and the number α is a dilation with ratio *α*.
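These three examples, together with the rule that sums and products of linear transformations correspond to sums and products of their matrices, can be checked directly. A minimal Python sketch (illustrative; the matrix helpers are hand-written):

```python
import math

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_mul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

# (1) Projections to the Ox and Oy axes in three-dimensional space.
A = [[1, 0, 0], [0, 0, 0], [0, 0, 0]]
B = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
assert mat_add(A, B) == [[1, 0, 0], [0, 1, 0], [0, 0, 0]]   # projection to xOy
assert mat_mul(A, B) == [[0, 0, 0], [0, 0, 0], [0, 0, 0]]   # AB = O

# (2) Rotations through phi and psi compose to a rotation through phi + psi.
def rot(t):
    return [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]

phi, psi = 0.4, 1.1
C, D = mat_mul(rot(phi), rot(psi)), rot(phi + psi)
assert all(abs(C[i][j] - D[i][j]) < 1e-12 for i in range(2) for j in range(2))

# (3) alpha * E is a dilation with ratio alpha.
alpha, E = 2.5, [[1, 0], [0, 1]]
assert [[alpha * e for e in row] for row in E] == [[2.5, 0], [0, 2.5]]
```

Note that the product is generally non-commutative: reversing the order of two matrices usually changes the result.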

The linear transformation *B* is the inverse of the linear transformation *A* (and is denoted by *A*^{-1}) if *BA* = *E* (or *AB* = *E*). If *A* sends the vector x to the vector y, then *A*^{-1} sends y back to x. A linear transformation that possesses an inverse is called nonsingular. Such linear transformations are characterized by the fact that the determinants of their matrices are never equal to zero.
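For a 2 × 2 matrix, the criterion "determinant not equal to zero" and the inverse itself can be written out explicitly. An illustrative Python sketch (helper names are ours):

```python
def det2(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def inv2(A):
    """Inverse of a 2x2 matrix; it exists exactly when det(A) != 0."""
    d = det2(A)
    if d == 0:
        raise ValueError("singular transformation: no inverse")
    return [[ A[1][1] / d, -A[0][1] / d],
            [-A[1][0] / d,  A[0][0] / d]]

A = [[2.0, 1.0], [5.0, 3.0]]   # det = 1, so A is nonsingular
B = inv2(A)
# Check B A = E.
prod = [[sum(B[i][k] * A[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
assert prod == [[1.0, 0.0], [0.0, 1.0]]
```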

Certain classes of linear transformations deserve particular mention. Orthogonal (or, in complex spaces, unitary) linear transformations are a generalization of rotations of two-dimensional and three-dimensional Euclidean spaces. Orthogonal linear transformations do not alter the lengths of vectors (and, consequently, the angles between them). The matrices of these linear transformations relative to an orthonormal coordinate system are also called orthogonal (unitary). The product of an orthogonal matrix and its transpose is the identity matrix: *AA*^{T} = *E*. In a complex space, the condition for a unitary matrix is *AĀ*^{T} = *E*, where the bar denotes complex conjugation.
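Both the orthogonality condition and the preservation of lengths can be checked for a rotation matrix. An illustrative Python sketch:

```python
import math

def transpose(A):
    return [list(row) for row in zip(*A)]

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

alpha = 0.8
R = [[math.cos(alpha), -math.sin(alpha)],
     [math.sin(alpha),  math.cos(alpha)]]

# R R^T = E (orthogonality).
P = mat_mul(R, transpose(R))
E = [[1.0, 0.0], [0.0, 1.0]]
assert all(abs(P[i][j] - E[i][j]) < 1e-12 for i in range(2) for j in range(2))

# Orthogonal transformations preserve the lengths of vectors.
x = [3.0, 4.0]
Rx = [sum(R[i][j] * x[j] for j in range(2)) for i in range(2)]
assert abs(math.hypot(*Rx) - math.hypot(*x)) < 1e-12
```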

A symmetric (in a complex space, Hermitian, or self-adjoint) linear transformation is a linear transformation whose matrix is symmetric, that is, *a _{ij}* = *a _{ji}* (or, in the complex case, *a _{ij}* = *ā _{ji}*). Symmetric linear transformations expand spaces by different coefficients in several mutually orthogonal directions. There is a close connection between the theory of quadratic forms (or Hermitian forms in a complex space) and symmetric linear transformations.

Our coordinate-free definition of a linear transformation on a vector space can be extended without any modifications to infinite-dimensional (in particular, function) spaces. Linear transformations on infinite-dimensional spaces are called linear operators.
