# vector space


## vector space

[′vek·tər ‚spās]

## Vector Space

a mathematical concept that generalizes the concept of a set of all (free) vectors of ordinary three-dimensional space.

**Definition.** The rules for adding vectors and multiplying them by real numbers are specified for vectors of three-dimensional space. As applied to any vectors **x**, **y**, and **z** and any numbers *α* and *β*, these rules satisfy the following conditions (conditions A):

(1) **x** + **y** = **y** + **x** (commutation in addition);

(2) **(x + y)** + **z** = **x** + **(y + z)** (association in addition);

(3) there is a zero vector, **0**, which satisfies the condition **x + 0** = **x** for any vector **x**;

(4) for any vector **x** there is a vector **y** inverse to it such that **x** + **y** = **0**;

(5) **1 · x** = **x**;

(6) *α*(*β***x**) = (*αβ*)**x** (association in multiplication);

(7) *(α + β)***x** = *α***x** + *β***x** (distributive property with respect to a numerical multiplier); and

(8) α(**x** + **y**) = α**x** + α**y** (distributive property with respect to a vector multiplier).

A vector (or linear) space is a set *R* consisting of elements of any type (called vectors) in which the operations of addition and multiplication of elements by real numbers satisfy conditions A. (Conditions (1)-(4) express the fact that the operation of addition defined in a vector space transforms it into a commutative group.)
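Conditions A can be exercised mechanically for concrete vectors. The following sketch (the helper names `add` and `scale` are illustrative, not from the article) spot-checks all eight conditions for sample triples in three-dimensional arithmetic space; a finite check of samples is not a proof, but it shows how the axioms read as code.

```python
# Spot-check conditions A for R^3 with componentwise operations.
def add(x, y):
    return tuple(a + b for a, b in zip(x, y))

def scale(alpha, x):
    return tuple(alpha * a for a in x)

x, y, z = (1.0, 2.0, 3.0), (-4.0, 0.5, 2.0), (0.0, 1.0, -1.0)
alpha, beta = 2.5, -3.0
zero = (0.0, 0.0, 0.0)

assert add(x, y) == add(y, x)                                 # (1) commutation
assert add(add(x, y), z) == add(x, add(y, z))                 # (2) association
assert add(x, zero) == x                                      # (3) zero vector
assert add(x, scale(-1, x)) == zero                           # (4) inverse
assert scale(1, x) == x                                       # (5) identity
assert scale(alpha, scale(beta, x)) == scale(alpha * beta, x)          # (6)
assert add(scale(alpha, x), scale(beta, x)) == scale(alpha + beta, x)  # (7)
assert scale(alpha, add(x, y)) == add(scale(alpha, x), scale(alpha, y))  # (8)
```

The sample values are chosen so that all products are exactly representable in floating point, which is why exact equality comparisons are safe here.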

The expression

(1) α_{1}**e**_{1} + α_{2}**e**_{2} + … + α_{n}**e**_{n}

is called a linear combination of the vectors **e**_{1}, **e**_{2}, …, **e**_{n} with coefficients α_{1}, α_{2}, …, α_{n}. Linear combination (1) is called nontrivial if at least one of the coefficients α_{1}, α_{2}, …, α_{n} differs from zero. The vectors **e**_{1}, **e**_{2}, …, **e**_{n} are called linearly dependent if there exists a nontrivial combination (1) representing the zero vector. In the opposite case, that is, if only a trivial combination of the vectors **e**_{1}, **e**_{2}, …, **e**_{n} is equal to the zero vector, the vectors **e**_{1}, **e**_{2}, …, **e**_{n} are called linearly independent.
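For concrete vectors of arithmetic space, linear independence can be tested by row reduction: the vectors are independent exactly when the matrix they form has rank equal to their number. A minimal sketch (function names are my own, not from the article) using exact rational arithmetic:

```python
from fractions import Fraction

def rank(vectors):
    """Rank of a list of vectors, via exact Gaussian elimination."""
    rows = [[Fraction(a) for a in v] for v in vectors]
    r = 0
    for col in range(len(rows[0]) if rows else 0):
        pivot = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(r + 1, len(rows)):
            factor = rows[i][col] / rows[r][col]
            rows[i] = [a - factor * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def linearly_independent(vectors):
    return rank(vectors) == len(vectors)

# e1, e2, e3 are independent; (1,1,2) = (1,0,1) + (0,1,1) is a dependence.
assert linearly_independent([(1, 0, 0), (0, 1, 0), (0, 0, 1)])
assert not linearly_independent([(1, 0, 1), (0, 1, 1), (1, 1, 2)])
```

Exact `Fraction` arithmetic avoids the false verdicts that floating-point round-off can produce in a rank computation.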

The (free) vectors of three-dimensional space satisfy the following condition (condition B): there exist three linearly independent vectors, and any four vectors are linearly dependent (any three nonzero vectors that do not lie in the same plane are linearly independent).

A vector space is called *n*-dimensional (or is said to have “dimension *n*”) if it contains *n* linearly independent elements **e**_{1}, **e**_{2}, …, **e**_{n} and if any *n* + 1 elements are linearly dependent (generalized condition B). A vector space is called infinite-dimensional if for any natural number *n* it contains *n* linearly independent vectors. Any *n* linearly independent vectors of an *n*-dimensional vector space form a basis of this space. If **e**_{1}, **e**_{2}, …, **e**_{n} form a basis of a vector space, then any vector **x** of this space can be represented uniquely as a linear combination of the basis vectors:

**x** = α_{1}**e**_{1} + α_{2}**e**_{2} + … + α_{n}**e**_{n}

Here, the numbers α_{1}, α_{2}, …, α_{n} are called the coordinates of the vector **x** in the given basis.
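Finding the coordinates of a vector in a given basis amounts to solving a linear system whose columns are the basis vectors. The following sketch (names are my own; it assumes the given vectors really do form a basis) does this by Gauss-Jordan elimination in exact arithmetic:

```python
from fractions import Fraction

def coordinates(x, basis):
    """Solve x = a1*e1 + ... + an*en for the coefficients a1, ..., an."""
    n = len(basis)
    # Row i of the augmented matrix: i-th components of the basis
    # vectors, then the i-th component of x.
    m = [[Fraction(basis[j][i]) for j in range(n)] + [Fraction(x[i])]
         for i in range(n)]
    for col in range(n):
        pivot = next(i for i in range(col, n) if m[i][col] != 0)
        m[col], m[pivot] = m[pivot], m[col]
        m[col] = [a / m[col][col] for a in m[col]]
        for i in range(n):
            if i != col and m[i][col] != 0:
                m[i] = [a - m[i][col] * b for a, b in zip(m[i], m[col])]
    return [m[i][n] for i in range(n)]

# x = 1*(1,1,0) + 2*(0,1,1) + 1*(1,0,1) = (2,3,3)
coords = coordinates((2, 3, 3), [(1, 1, 0), (0, 1, 1), (1, 0, 1)])
assert coords == [1, 2, 1]
```

The uniqueness claimed in the text corresponds to the system having exactly one solution when the basis vectors are linearly independent.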

**Examples.** The set of all vectors of three-dimensional space obviously forms a vector space. A more complex example is the so-called *n*-dimensional arithmetic space. The vectors of this space are ordered systems of *n* real numbers: (λ_{1}, λ_{2}, …, λ_{n}). The sum of two vectors and the product of a vector and a number are defined by the relations

(λ_{1}, λ_{2}, …, λ_{n}) + (μ_{1}, μ_{2}, …, μ_{n}) = (λ_{1} + μ_{1}, λ_{2} + μ_{2}, …, λ_{n} + μ_{n})

α(λ_{1}, λ_{2}, …, λ_{n}) = (αλ_{1}, αλ_{2}, …, αλ_{n})

As the basis of this space, one can use, for example, the following system of *n* vectors: **e**_{1} = (1, 0, … , 0), **e**_{2} = (0, 1, … , 0),… , **e**_{n} = (0, 0, … , 1).
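The two relations above, together with the standard basis, can be played out directly on tuples. In this sketch (the helper names `vadd` and `vscale` are my own), a vector of 4-dimensional arithmetic space is rebuilt from the standard basis, with its own components as the coefficients:

```python
# Componentwise operations of n-dimensional arithmetic space.
def vadd(u, v):
    return tuple(a + b for a, b in zip(u, v))

def vscale(alpha, u):
    return tuple(alpha * a for a in u)

# The standard basis e1, ..., en for n = 4.
n = 4
e = [tuple(1 if j == i else 0 for j in range(n)) for i in range(n)]

# A vector equals the combination of the basis vectors whose
# coefficients are its own components.
x = (3, -1, 0, 5)
combo = (0,) * n
for coeff, basis_vec in zip(x, e):
    combo = vadd(combo, vscale(coeff, basis_vec))
assert combo == x
```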

The set *R* of all polynomials α_{0} + α_{1}γ + … + α_{n}γ^{n} (of any degree *n*) of a single variable with real coefficients α_{0}, α_{1}, …, α_{n} and with the ordinary algebraic rules for adding polynomials and multiplying them by real numbers forms a vector space. The polynomials 1, γ, γ^{2}, …, γ^{n} (for any *n*) are linearly independent in *R*, and therefore *R* is an infinite-dimensional vector space. Polynomials of degree not higher than *n* form a vector space of dimension *n* + 1; the polynomials 1, γ, γ^{2}, …, γ^{n} can serve as its basis.

**Subspaces.** The vector space *R′* is called a subspace of *R* if (1) *R′* ⊂ *R* (that is, every vector of space *R′* is also a vector of space *R*) and (2) for every vector **v** ∊ *R′* and for every two vectors **v**_{1} and **v**_{2} ∊ *R′*, the vector α**v** (for any α) and the vector **v**_{1} + **v**_{2} are one and the same, regardless of whether one considers the vectors **v**, **v**_{1}, and **v**_{2} as elements of space *R′* or *R*.

The linear manifold of the vectors **x**_{1}, **x**_{2}, …, **x**_{p} is the set of all possible linear combinations of these vectors, that is, of vectors of the form α_{1}**x**_{1} + α_{2}**x**_{2} + … + α_{p}**x**_{p}. In three-dimensional space, the linear manifold of a single nonzero vector **x**_{1} will obviously be the set of all vectors lying on the straight line determined by the vector **x**_{1}. The linear manifold of two vectors **x**_{1} and **x**_{2} not lying on the same straight line will be the set of all vectors located in the plane determined by the vectors **x**_{1} and **x**_{2}.

In the general case of an arbitrary vector space *R*, the linear manifold of the vectors **x**_{1}, **x**_{2}, …, **x**_{p} of this space is a subspace of *R* of dimension *k* ≤ *p*. In an *n*-dimensional vector space there are subspaces of all dimensions less than *n*. Each finite-dimensional subspace *R′* (of given dimension *k*) of a vector space *R* is the linear manifold of any *k* linearly independent vectors lying within *R′*. The space consisting of all polynomials of degree not higher than *n* (the linear manifold of the polynomials 1, γ, γ^{2}, …, γ^{n}) is an (*n* + 1)-dimensional subspace of the space *R* of all polynomials.
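The polynomial subspace example can be made concrete by representing a polynomial by its tuple of coefficients, so that polynomial addition and scaling become the componentwise operations of an arithmetic space. A sketch (helper names `padd`/`pscale` are my own):

```python
# Polynomials as coefficient tuples (a0, a1, ..., an); entry i is the
# coefficient of gamma^i.
def padd(p, q):
    n = max(len(p), len(q))
    p = p + (0,) * (n - len(p))
    q = q + (0,) * (n - len(q))
    return tuple(a + b for a, b in zip(p, q))

def pscale(alpha, p):
    return tuple(alpha * a for a in p)

# 2 + 3*gamma - gamma^2 as a combination of the basis 1, gamma, gamma^2
# of the 3-dimensional subspace of polynomials of degree <= 2.
one, g, g2 = (1,), (0, 1), (0, 0, 1)
p = padd(padd(pscale(2, one), pscale(3, g)), pscale(-1, g2))
assert p == (2, 3, -1)
```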

**Euclidean spaces.** In order to develop geometric methods in the theory of vector spaces, it has been necessary to generalize such concepts as the length of a vector and the angle between vectors. One possible method is to set any two vectors **x** and **y** of *R* in correspondence with a number, designated (**x**, **y**) and called the scalar product of the vectors **x** and **y**. The scalar product must satisfy the following axioms:

(1) (**x**, **y**) = (**y**, **x**) (commutativity);

(2) (**x**_{1} + **x**_{2}, **y**) = (**x**_{1}, **y**) + (**x**_{2}, **y**) (distributive property);

(3) (α**x**, **y**) = α(**x**, **y**); and

(4) (**x**, **x**) ≥ 0 for any **x**, while (**x**, **x**) = 0 only for **x** = **0**.

The ordinary scalar product in three-dimensional space satisfies these axioms. A vector space in which a scalar product satisfying the above axioms is defined is called a Euclidean space; it can be either finite-dimensional (*n*-dimensional) or infinite-dimensional. An infinite-dimensional Euclidean space is usually called a Hilbert space. The length |**x**| of a vector **x** and the angle between vectors **x** and **y** of a Euclidean space are defined in terms of the scalar product by the formulas

|**x**| = √(**x**, **x**),  cos ∠(**x**, **y**) = (**x**, **y**) / (|**x**| |**y**|)

An example of a Euclidean space is ordinary three-dimensional space with the scalar product defined in vector calculus. Euclidean *n*-dimensional (arithmetic) space *E*^{n} is obtained by defining, in *n*-dimensional arithmetic vector space, the scalar product of the vectors **x** = (λ_{1}, …, λ_{n}) and **y** = (μ_{1}, …, μ_{n}) by the relation

(2) (**x**, **y**) = λ_{1}μ_{1} + λ_{2}μ_{2} + … + λ_{n}μ_{n}

Requirements (1)-(4) are clearly fulfilled here.
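These definitions are easy to exercise numerically. The following sketch (helper names are my own) computes the scalar product (2), the length |**x**| = √(**x**, **x**), and the angle from cos ∠(**x**, **y**) = (**x**, **y**) / (|**x**| |**y**|) for small arithmetic vectors:

```python
import math

# Scalar product (2) on n-dimensional arithmetic space, plus the
# derived length and angle.
def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def length(x):
    return math.sqrt(dot(x, x))          # |x| = sqrt((x, x))

def angle(x, y):
    """Angle between nonzero vectors, from cos = (x, y) / (|x| |y|)."""
    return math.acos(dot(x, y) / (length(x) * length(y)))

x, y = (1.0, 0.0, 0.0), (1.0, 1.0, 0.0)
assert abs(length(y) - math.sqrt(2.0)) < 1e-12
assert abs(angle(x, y) - math.pi / 4) < 1e-12
```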

In Euclidean spaces the concept of orthogonal (perpendicular) vectors is introduced. Specifically, vectors **x** and **y** are called orthogonal if their scalar product is equal to zero: (**x**, **y**) = 0. In the space *E*^{n} considered here, the condition of orthogonality of the vectors **x** = (λ_{1}, …, λ_{n}) and **y** = (μ_{1}, …, μ_{n}), as follows from relation (2), has the form

(3) λ_{1}μ_{1} + λ_{2}μ_{2} + … + λ_{n}μ_{n} = 0
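Condition (3) gives an immediate computational test for orthogonality of arithmetic vectors; a minimal sketch (function names are illustrative):

```python
# Orthogonality test via relation (3): the componentwise products
# sum to zero.
def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def orthogonal(x, y):
    return dot(x, y) == 0

assert orthogonal((1, 2, 0), (-2, 1, 5))      # -2 + 2 + 0 = 0
assert not orthogonal((1, 2, 0), (1, 0, 0))
```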

**Applications.** The concept of a vector space (and various generalizations) is widely used in mathematics and has applications in the natural sciences. For example, let *R* be the set of all solutions of the linear homogeneous differential equation *y*^{(*n*)} + *a*_{1}(*x*)*y*^{(*n*−1)} + … + *a*_{n}(*x*)*y* = 0. Clearly, the sum of two solutions and the product of a solution by a number are also solutions of this equation. Thus, *R* satisfies conditions A. It has been proved that *R* also satisfies the generalized condition B. Consequently, *R* is a vector space. Any basis of the vector space under consideration is called a fundamental system of solutions, knowledge of which permits one to find all the solutions of the equation under consideration. The concept of a Euclidean space permits the complete geometrization of the theory of systems of homogeneous linear equations:

(4) α_{i1}μ_{1} + α_{i2}μ_{2} + … + α_{in}μ_{n} = 0, *i* = 1, 2, …, *m*

Let us consider, in the Euclidean space *E*^{n}, the vectors **a**_{i} = (α_{i1}, α_{i2}, …, α_{in}), where *i* = 1, 2, …, *m*, and the solution vector **u** = (μ_{1}, μ_{2}, …, μ_{n}). Using relation (2) for the scalar product of the vectors of *E*^{n}, we give equations (4) the form

(5) (**a**_{i}, **u**) = 0, *i* = 1, 2, …, *m*

From relations (5) and equation (3), it follows that the solution vector **u** is orthogonal to all vectors **a**_{i}. In other words, this vector is orthogonal to the linear manifold of the vectors **a**_{i}; that is, the solution **u** is any vector of the orthogonal complement of the linear manifold of the vectors **a**_{i}. Infinite-dimensional linear spaces play an important role in mathematics and physics. An example of such a space is the space *C* of continuous functions on a segment with the usual operations of addition and multiplication by real numbers. The space of all polynomials mentioned previously is a subspace of the space *C*.
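The orthogonality picture can be checked on a toy system. In this sketch the coefficient vectors **a**_{i} and the solution are chosen by hand for illustration:

```python
def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

# A small homogeneous system (4) with m = 2 equations in n = 3 unknowns,
# written via its coefficient vectors a_i (values chosen for illustration).
a = [(1, 1, 0), (0, 1, 1)]

# One nontrivial solution, found by hand: u = (1, -1, 1).
u = (1, -1, 1)

# Relation (5): the solution vector is orthogonal to every a_i.
assert all(dot(ai, u) == 0 for ai in a)

# The solution set is itself a vector space: scalar multiples of a
# solution are again solutions (and hence also orthogonal to every a_i).
for t in (2, -3, 0):
    assert all(dot(ai, tuple(t * c for c in u)) == 0 for ai in a)
```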

### REFERENCES

Aleksandrov, P. S. *Lektsii po analiticheskoi geometrii.* Moscow, 1968.

Gel’fand, I. M. *Lektsii po lineinoi algebre.* Moscow-Leningrad, 1948.

E. G. POZNIAK