Branch of mathematics concerned with operations on sets of numbers or other elements that are often represented by symbols. Algebra is a generalization of arithmetic and gains much of its power from dealing symbolically with elements and operations (such as addition and multiplication) and relationships (such as equality) connecting the elements. Thus, a + a = 2a and a + b = b + a no matter what numbers a and b represent.

Principles of Classical Algebra

In elementary algebra letters are used to stand for numbers. For example, in the equation ax² + bx + c = 0, the letters a, b, and c stand for known constant numbers called coefficients, and the letter x is an unknown variable whose value depends on the values of a, b, and c and may be determined by solving the equation. Much of classical algebra is concerned with finding solutions to equations or systems of equations, i.e., finding the roots, or values of the unknowns, that upon substitution into the original equation make it a numerical identity. For example, x = −2 is a root of x² − 2x − 8 = 0 because (−2)² − 2(−2) − 8 = 4 + 4 − 8 = 0; substitution will verify that x = 4 is also a root of this equation.
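
This substitution check is easy to mechanize. The following minimal Python sketch (the function name f is ours, mirroring the notation above) confirms both roots:

    # Verify by substitution that x = -2 and x = 4 are roots of x^2 - 2x - 8 = 0.
    def f(x):
        return x**2 - 2*x - 8

    print(f(-2), f(4))  # 0 0, so both substitutions reduce the equation to an identity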

The equations of elementary algebra usually involve polynomial functions of one or more variables (see function). The equation in the preceding example involves a polynomial of second degree in the single variable x (see quadratic). One method of finding the zeros of the polynomial function f(x), i.e., the roots of the equation f(x) = 0, is to factor the polynomial, if possible. The polynomial x² − 2x − 8 has factors (x + 2) and (x − 4), since (x + 2)(x − 4) = x² − 2x − 8, so that setting either of these factors equal to zero will make the polynomial zero. In general, if (x − r) is a factor of a polynomial f(x), then r is a zero of the polynomial and a root of the equation f(x) = 0. To determine whether (x − r) is a factor, divide it into f(x); according to the Factor Theorem, if the remainder f(r)—found by substituting r for x in the original polynomial—is zero, then (x − r) is a factor of f(x). Even if a polynomial has real coefficients, its roots may not be real numbers; e.g., x² − 9 factors into (x + 3)(x − 3), which yields the two zeros x = −3 and x = +3, but the zeros of x² + 9 are imaginary numbers.
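
As an illustration (ours, not the encyclopedia's), the division test can be carried out numerically: evaluating f(r) by Horner's rule gives exactly the remainder on division by (x − r), so a zero remainder signals a factor.

    # Factor test: (x - r) divides f(x) exactly when the remainder f(r) is 0.
    # Coefficients are listed from the highest power down: x^2 - 2x - 8 -> [1, -2, -8].
    def remainder(coeffs, r):
        acc = 0
        for c in coeffs:          # Horner's rule: acc accumulates f(r)
            acc = acc * r + c
        return acc

    f = [1, -2, -8]
    print(remainder(f, 4))        # 0, so (x - 4) is a factor
    print(remainder(f, 3))        # -5, so (x - 3) is not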

The Fundamental Theorem of Algebra states that every polynomial f(x) = aₙxⁿ + aₙ₋₁xⁿ⁻¹ + … + a₁x + a₀, with aₙ ≠ 0 and n ≥ 1, has at least one complex root, from which it follows that the equation f(x) = 0 has exactly n roots, which may be real or complex and may not all be distinct. For example, the equation x⁴ + 4x³ + 5x² + 4x + 4 = 0 has four roots, but two are identical and the other two are complex; the factors of the polynomial are (x + 2)(x + 2)(x + i)(x − i), as can be verified by multiplication.
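
A quick numerical illustration of this count (our sketch; numpy's root finder is one standard tool, not part of the original article):

    # The four roots of x^4 + 4x^3 + 5x^2 + 4x + 4 = 0:
    # a double real root and a complex-conjugate pair.
    import numpy as np

    print(np.roots([1, 4, 5, 4, 4]))  # approximately [-2, -2, 1j, -1j]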

Principles of Modern Algebra

Modern algebra carries the generalization of arithmetic still further than classical algebra does. It deals with operations that are not necessarily those of arithmetic and that apply to elements that are not necessarily numbers. The elements are members of a set and are classed as a group, a ring, or a field according to the axioms that are satisfied under the particular operations defined for the elements. Among the important concepts of modern algebra are those of a matrix and of a vector.


See M. Artin, Algebra (1991).



One of the major branches of mathematics and, along with arithmetic and geometry, one of its oldest. The problems as well as the methods which distinguish algebra from other branches of mathematics were developed gradually, starting in antiquity. Algebra arose in response to the needs of social practice and is the result of attempts to find general methods to solve similar arithmetic problems. These methods usually consist of the formulation and solution of equations.

The problems of solving and analyzing equations greatly influenced the development of the initial arithmetic concept of a number. As negative, irrational, and complex numbers were introduced into mathematics, the general analysis of the properties of these different number systems also devolved upon algebra. In this way the letter notation characteristic of algebra took shape, permitting the properties of operations on numbers to be written in a compact form suitable for calculation with letter expressions. The apparatus of classical algebra is the letter calculus of identity transformations, which makes it possible to transform, according to definite rules reflecting the properties of the operations, the letter representations of the results of operations. This also distinguishes algebra from arithmetic: using letter designations, algebra studies the general properties of number systems and general methods for solving problems through equations, whereas arithmetic employs computational methods with definitely specified numbers and, in its higher branches, deals with more refined individual properties of numbers.

The development of algebra—its methods and symbolic language—has had a great influence on the development of newer areas of mathematics; in particular, it prepared the way for the appearance of mathematical analysis. Writing out the simplest fundamental concepts of analysis, such as the variable or the function, is impossible without letter symbolism; analysis—in particular, differential and integral calculus—employs the apparatus of classical algebra in its entirety.

The apparatus of classical algebra can be applied wherever operations analogous to the addition and multiplication of numbers are involved; such operations can be carried out on items of the most diverse nature other than numbers. The best-known example of this expanded application of algebraic methods is vector algebra. Vectors can be added, multiplied by numbers, and multiplied by each other in two different ways. The properties of these operations on vectors are in many respects similar to the properties of addition and multiplication of numbers, but in certain respects they differ. For example, the vector product of two vectors A and B is not commutative: the vector C = [A, B] need not equal the vector D = [B, A]; on the contrary, in vector calculus the rule [A, B] = −[B, A] applies.
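
The anticommutativity of the vector product is easy to verify numerically; a small illustration (ours) with numpy:

    # The cross product is anticommutative: [A, B] = -[B, A].
    import numpy as np

    A = np.array([1.0, 2.0, 3.0])
    B = np.array([4.0, 5.0, 6.0])
    print(np.cross(A, B))   # [-3.  6. -3.]
    print(np.cross(B, A))   # [ 3. -6.  3.], the negation of the first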

Following vector algebra, the algebra of tensors arose, becoming one of the basic auxiliary tools of modern physics. Matrix algebra and many other algebraic systems developed with classical algebra.

Thus, in a broader, more modern sense, algebra may be defined as the science of systems of objects of varied nature in which operations more or less similar to the addition and multiplication of numbers are introduced. These operations are called algebraic. Algebra classifies systems with their given algebraic operations according to their properties and studies different problems naturally arising in these systems, including the problem of solving and analyzing equations; this takes on a new meaning in new systems of objects (the solution of an equation may be a vector, a matrix, an operator, and so forth). This new view of algebra, which has fully taken shape only in the 20th century, has expanded the applicability of algebraic methods, extending to areas outside mathematics (in particular, physics). At the same time, it has strengthened algebra’s links with other branches of mathematics and increased the influence of algebra on their further development.

Initial development. Algebra was preceded by arithmetic, a compilation of gradually accumulated practical rules for solving everyday problems. These rules of arithmetic covered the addition, subtraction, multiplication, and division of numbers—at first only of whole numbers and later, after a gradual and very slow development, of fractions as well. The feature that distinguishes algebra from arithmetic is the introduction of an unknown quantity: the operations performed on it, dictated by the conditions of the problem, lead to an equation from which the unknown is found. A hint of such a treatment of arithmetical problems appears as early as the ancient Egyptian papyrus of Ahmes (2000–1700 B.C.), where the unknown quantity is called “heap” and is indicated by a corresponding hieroglyph. The ancient Egyptians also solved much more complicated problems, for example, problems on arithmetic and geometric progressions. Both the formulation of a problem and its solution were given in words, accompanied by definite numerical examples. Nevertheless, behind these examples one senses the presence of accumulated general methods which, if not in form then in essence, are equivalent to the solution of equations of the first and sometimes the second degree. The Egyptians also used the first mathematical symbols—for example, a special sign for fractions.

In the early 20th century, numerous mathematical cuneiform texts from Babylonia, another ancient culture, were deciphered. These showed the world the high level of mathematical culture that existed as early as 4,000 years before our time. With the aid of extensive special tables, the Babylonians were able to solve various problems, some of them equivalent to the solution of quadratic equations and even of one form of third-degree equation. A controversy arose among historians of mathematics over the extent to which Babylonian mathematics can be considered algebra. It must not be forgotten, however, that ancient mathematics was a unified whole; its division into separate branches came much later.

In ancient Greece, geometry came to the fore. The Greek geometers were the first to consciously construct mathematics as a science in which each step is justified by logical proof. The power of this method was so great that both purely arithmetic and algebraic problems were translated into the language of geometry: quantities were treated as lengths, the product of two quantities as the area of a rectangle, and so forth. Modern mathematical language retains, for example, the term “square” for the product of a quantity multiplied by itself. The unity of scientific knowledge and practical application characteristic of the older cultures was broken in the mathematics of the ancient Greeks. Geometry was regarded as a logical discipline, an indispensable school for the philosophical intellect, while the idealistic philosophy of Plato did not consider any type of calculation—that is, the problems of arithmetic and algebra—a subject worthy of science. Doubtless these branches continued to develop (on the basis of Babylonian and Egyptian traditions), but the treatise Arithmetic by Diophantus of Alexandria (probably third century) is the only such work to reach us. In it Diophantus already deals quite freely with equations of the first and second degree; a rudimentary use of negative numbers can also be found in his work.

The heritage of ancient Greek science was taken up by the scholars of the medieval East—Central Asia, Mesopotamia, and North Africa. Arabic served them as an international scientific language, as Latin did for the scholars of the medieval West; accordingly, this period is sometimes called the Arabic period in the history of mathematics. In fact, one of the greatest scientific centers of this era (ninth–15th centuries) was Central Asia. Among many examples, it suffices to cite the work of the ninth-century Uzbek mathematician and astronomer Muhammad al-Khwarizmi, a native of Khorezm, and of the great encyclopedic scholar Biruni; the observatory of Ulug Beg was established at Samarkand in the 15th century. The scholars of the medieval East transmitted the mathematics of the Greeks and Hindus to the West in their own original adaptations, and they devoted particular attention to algebra. The word “algebra” itself comes from the Arabic al-jabr, which opened the title of one of al-Khwarizmi’s works (al-jabr designated one of the methods of transforming equations). Since the time of al-Khwarizmi, algebra can be regarded as a distinct branch of mathematics.

The mathematicians of the medieval East expressed all operations in words. Algebra could develop further only with the general adoption of convenient symbols for the operations. This process proceeded slowly and by zigzags. The fraction sign of the ancient Egyptians was mentioned above; Diophantus used the letter i (the beginning of the word isos, “equal”) as an equals sign, and the Hindus had similar abbreviations (fifth–seventh centuries), but this developing symbolic language was lost once again.

Credit for the further development of algebra goes to the Italians, who in the 12th century took over the mathematics of the medieval East. Leonardo Fibonacci (13th century), the most prominent mathematician of this era, studied algebraic problems. Gradually algebraic methods entered computational practice, at first competing fiercely with the methods of arithmetic. Adapting to practice, Italian scholars moved once more toward convenient abbreviations: for example, instead of the words “plus” and “minus” they began to use the letters p and m with a special bar above them. The now accepted signs + and − appeared in mathematical treatises at the end of the 15th century; there are indications that long before this time these signs were used in commercial practice to indicate a surplus or a deficit in weight.

The introduction and general acceptance of the other signs (powers, radicals, parentheses, and others) followed quickly. By the mid-17th century the apparatus of symbols of contemporary algebra was complete: the use of letters to designate not only the unknown but all the quantities involved in a problem. Before this reform, definitively established by F. Viète at the end of the 16th century, algebra and arithmetic had, as it were, no general rules and proofs: only numerical examples were considered, and it was almost impossible to express general conclusions. Even the elementary texts of this time were extremely difficult, since they gave dozens of special rules instead of one general one. Viète was the first to write his problems in general form, designating unknown quantities by the vowels A, E, I, . . . and known ones by the consonants B, C, D, . . . . He joined these letters by the signs for mathematical operations that had been introduced by this time. Thus the letter formulas so characteristic of contemporary algebra appeared for the first time. Beginning with R. Descartes (17th century), the last letters of the alphabet, x, y, and z, have been used predominantly for unknowns.

The introduction of symbolic designations and of operations on letters standing in place of definite numbers was of extraordinary importance. Without this tool—the language of formulas—the brilliant development of higher mathematics that began in the 17th century, the creation of mathematical analysis, the mathematical expression of the laws of mechanics and physics, and so forth would have been inconceivable.

At the time of Diophantus, algebra consisted of equations of the first and second degree. Apparently, ancient Greek mathematicians arrived at second-degree, or quadratic, equations through geometry, since the problems leading to these equations arise naturally in determining areas and circumferences from different data. However, the solution of equations by the ancient mathematicians differed from modern solutions in one very essential respect: they did not use negative numbers. Thus, from the point of view of the ancients, even equations of the first degree did not always have solutions, and in considering equations of the second degree it was necessary to distinguish many individual cases (according to the signs of the coefficients). The crucial step—the use of negative numbers—was made by Hindu mathematicians in the tenth century, but the scholars of the medieval East did not follow this path. People became accustomed to negative numbers gradually; particular help in this process came from commercial computation, in which negative numbers have the obvious meaning of a loss, an expenditure, a deficiency, and so forth. Negative numbers were fully accepted only in the 17th century, after Descartes used their graphic geometric representation in founding analytic geometry.

The rise of analytic geometry was also a triumph for algebra. If earlier, with the ancient Greeks, purely algebraic problems had assumed a geometric form, now, on the contrary, algebraic means of expression proved so convenient and graphic that geometrical problems were translated into the language of algebraic formulas. It should be noted that the need to introduce negative, irrational, and imaginary numbers was felt with particular urgency precisely in algebra: for example, quadratic irrationalities (roots) arise in the solution of second-degree equations. Of course, even ancient Greek and Central Asian mathematicians could not bypass the extraction of roots and devised means for their approximate computation, but the view of an irrationality as a number was established considerably later. The introduction of complex, or imaginary, numbers took place in the succeeding era (the 18th century).

Thus, leaving aside imaginary numbers, by the 18th century algebra had developed approximately to the level at which it is taught now in secondary schools. This algebra includes the operations of addition and multiplication, with their inverse operations of subtraction and division, and also raising to a power (a special case of multiplication) and its inverse, the extraction of roots. These operations are performed on numbers or on letters that may represent positive or negative, rational or irrational, numbers, and they are used to solve problems equivalent to first- and second-degree equations. Today every educated person masters algebra to this degree. This “elementary” algebra is employed daily in technology, physics, and other areas of science and practice. But it by no means exhausts the content and applications of algebra. Only the first steps were difficult and slow: since the 16th, and especially since the 18th, century, algebra has developed rapidly, and in the 20th century it has flourished anew.

The first exposition in Russian of elementary algebra in the form it had assumed by the early 18th century was given in L. F. Magnitskii’s famous Arithmetic, which appeared in 1703.

Algebra in the 18th–19th centuries. At the turn of the 18th century, there was a breakthrough of the greatest importance in the history of mathematics and natural science: the analysis of infinitesimals (differential and integral calculus) was developed and rapidly spread. This turning point resulted from the development of productive forces and the demands of contemporary technology and natural science, and its way was paved by the preceding development of algebra. In particular, during the 16th–17th centuries letter symbolism and the operations performed on letters facilitated the view of mathematical quantities as variables, a view characteristic of the analysis of infinitesimals, where a continuous change in one variable usually corresponds to a continuous change in another, its function.

Algebra and analysis developed in the 17th–18th centuries in close contact. Functional concepts penetrated algebra, a trend enriched by I. Newton; in turn, algebra brought to analysis its rich collection of formulas and transformations, which played an important part in the initial period of integral calculus and the theory of differential equations. An important event in algebra during this period was the publication of the algebra course of L. Euler, who was then working at the St. Petersburg Academy of Sciences; the course was published first in Russian (1768–69) and then repeatedly in foreign languages. What distinguished algebra from analysis in the 18th–19th centuries was that the basic subject of algebra was the discontinuous, the finite. This feature of algebra was underlined in the first half of the 19th century by N. I. Lobachevskii, who titled one of his books Algebra, or the Computation of Finites (1834). Algebra is concerned with the fundamental operations (addition and multiplication) performed a finite number of times.

The simplest result of multiplication is the monomial—for example, 5a³bx²y. The sum of a finite number of such monomials (with integral powers) is called a polynomial, also known as an integral rational function. Collecting terms in one of the letters of the polynomial—for example, x—gives it the form a₀xⁿ + a₁xⁿ⁻¹ + … + aₙ, where the coefficients a₀, a₁, …, aₙ no longer depend on x. This is a polynomial of nth degree. Eighteenth- and 19th-century algebra is, above all, the algebra of polynomials.
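
For illustration (ours, not the encyclopedia's), a polynomial in one variable is fully determined by its list of coefficients, and the usual product of polynomials becomes a convolution of such lists:

    # A polynomial a0*x^n + a1*x^(n-1) + ... + an stored as [a0, a1, ..., an];
    # multiplying two polynomials is a convolution of their coefficient lists.
    def poly_mul(p, q):
        out = [0] * (len(p) + len(q) - 1)
        for i, a in enumerate(p):
            for j, b in enumerate(q):
                out[i + j] += a * b
        return out

    # (x + 2)(x - 4) = x^2 - 2x - 8
    print(poly_mul([1, 2], [1, -4]))  # [1, -2, -8]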

Thus, the scope of algebra is considerably narrower than that of analysis; on the other hand, the simplest operations and objects, which make up the subject of algebra, are studied in great depth and detail. And precisely because they are the simplest, their study is of fundamental importance for mathematics as a whole. Furthermore, algebra and analysis continue to have many points of contact, and the boundary between them is not a rigid one. Analysis takes from algebra its symbolic language, without which it could not have developed. In many instances the study of polynomials, the simpler functions, paved the way for the general theory of functions. Throughout the subsequent history of mathematics there has been a tendency to reduce the study of more complicated functions to polynomials or to series of polynomials, the simplest example of which is the Taylor series. On the other hand, algebra has frequently used the idea of continuity, and the notion of infinite sets of objects has become dominant in recent algebra, although in a new, special form.

If a polynomial is set equal to zero (or, in general, to any particular number), we obtain an algebraic equation. Historically, the first task of algebra was the solution of such equations—that is, finding their roots, the values of the unknown quantity x which make the polynomial equal to zero. Since ancient times the solution of the quadratic equation x² + px + q = 0 has been known in the form of the formula

x = −p/2 ± √(p²/4 − q)

The algebraic solution of third- and fourth-degree equations was discovered in the 16th century. For equations of the form x³ + px + q = 0, to which all third-degree equations can be reduced, the solution is given by the formula

x = ∛(−q/2 + √(q²/4 + p³/27)) + ∛(−q/2 − √(q²/4 + p³/27))

This is called Cardano’s formula, although the question of whether G. Cardano discovered it himself or borrowed it from other mathematicians cannot be considered fully resolved. A method of solving algebraic equations of the fourth degree was found by L. Ferrari. After this, persistent searches began for formulas that would solve equations of higher degrees by similar methods—that is, that would reduce their solution to the extraction of roots (“solution by radicals”). These efforts continued for about three centuries. Only in the early 19th century did N. Abel and E. Galois prove that equations of degree higher than 4 are, in general, not solvable by radicals: it turned out that for every n equal to or greater than 5 there exist equations of nth degree that are insoluble by radicals. An example is the equation x⁵ − 4x − 2 = 0. This discovery was of great importance, since it showed that the roots of algebraic equations are a much more complicated subject than radicals. Galois did not limit himself to this, so to speak, negative result; he laid the basis for a more profound theory of equations, associating with every equation the group of permutations of its roots. The solution of an equation by radicals is equivalent to the reduction of the initial equation to a chain of equations of the form yᵐ = a (that is, to successive extractions of roots). Reduction to such equations proved to be impossible in the general case, but the question arose: to what chain of simpler equations can the solution of a given equation be reduced? For example, through the roots of what equations can the roots of a given equation be expressed rationally—that is, by means of the four operations of addition, subtraction, multiplication, and division? In this broader sense, Galois theory has continued to develop up to the present.
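
A sketch of Cardano's formula in code (ours; it assumes the radicand q²/4 + p³/27 is non-negative, the case in which the formula directly yields a real root):

    # Cardano's formula for the depressed cubic x^3 + p*x + q = 0
    # (real-root case: radicand q^2/4 + p^3/27 assumed non-negative).
    import math

    def cardano_real_root(p, q):
        d = math.sqrt(q**2 / 4 + p**3 / 27)
        cbrt = lambda v: math.copysign(abs(v) ** (1 / 3), v)  # real cube root
        return cbrt(-q / 2 + d) + cbrt(-q / 2 - d)

    # x^3 - 6x - 9 = 0 has the root x = 3.
    print(cardano_real_root(-6, -9))  # 3.0 (up to rounding)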

From the purely practical side, there was no special need for general formulas for computing the roots of equations of higher degrees with given coefficients, since such formulas were of little practical use even for equations of degrees 3 and 4. The numerical solution of equations proceeded along another path, that of approximate calculation—all the more appropriate since in practice (for example, in astronomy and technology) the coefficients themselves are usually the results of measurement, that is, they are known only approximately, to a certain degree of precision.

Approximate calculation of the roots of algebraic equations is an important problem of computational mathematics. In recent years a very large number of methods for solving this problem have been worked out—in particular, methods involving the use of modern computer technology. But mathematics does not consist solely of describing methods of calculation. No less important, even for applications, is the other side of mathematics: the ability to solve problems purely theoretically, without calculation. In the theory of algebraic equations, such a question is that of the number and nature of the roots. If positive and negative numbers are permitted, an equation of the first degree always has one, and only one, solution. But a quadratic equation may have no solution among the so-called real numbers. For example, no positive or negative x can satisfy the equation x² + 2 = 0, since the left side will always be a positive number and not zero. A solution represented in the form x = ±√−2 has no meaning until the square root of a negative number is explained. Precisely this sort of problem directed mathematicians to the so-called imaginary numbers. Bold and isolated investigators had made use of them earlier, but they were finally introduced into science only in the 19th century. These numbers proved an extremely important tool not only in algebra but in almost all branches and applications of mathematics. To the extent that people became accustomed to imaginary numbers, these numbers lost all mystery and “imaginariness”; this is why they are now most frequently called not imaginary but complex numbers.

If complex numbers (as well as positive and negative numbers) are allowed, it turns out that any equation of nth degree has roots. This is true for equations with complex coefficients as well. This important theorem, called the fundamental theorem of algebra, was first stated in the 17th century by the French mathematician A. Girard, but its first rigorous proof was given at the very end of the 18th century by K. Gauss; since then numerous different proofs have been published. All these proofs had to resort, in one form or another, to the concept of continuity; thus, the proof of the fundamental theorem of algebra itself goes beyond the limits of algebra, demonstrating once again the inseparability of mathematical science as a whole.

If x₁ is one of the roots of the algebraic equation

a₀xⁿ + a₁xⁿ⁻¹ + … + aₙ = 0,

then it is easily shown that the polynomial on the left side of the equation is divisible by x − x₁ without remainder. The fundamental theorem of algebra readily leads to the conclusion that every polynomial of nth degree resolves into n such factors of the first degree—that is, the identity

a₀xⁿ + a₁xⁿ⁻¹ + … + aₙ = a₀(x − x₁)(x − x₂) … (x − xₙ)

holds, and the polynomial admits only one resolution of this type.

Thus, an equation of nth degree has n roots. In special cases it may turn out that some of the factors are equal, that is, certain roots are repeated a number of times (multiple roots); consequently, the number of distinct roots may be less than n. It is often less important to calculate the roots than to analyze their nature. As an example, we may cite Descartes’ rule of signs: an equation cannot have more positive roots than the number of changes of sign in the sequence of its coefficients (and can differ from that number only by an even amount). For example, in the equation considered above, x⁵ − 4x − 2 = 0, there is one change of sign: the first coefficient is positive, the rest are negative. Thus, without solving the equation, it can be asserted that it has one, and only one, positive root. The general problem of the number of real roots within given limits is solved by Sturm’s theorem. It is very important that in an equation with real coefficients complex roots can appear only in pairs: along with the root a + bi, the same equation always has the root a − bi. Applications sometimes pose more complicated problems of this type; thus, in mechanics it is shown that a motion is stable if a certain algebraic equation has only roots (even complex ones) whose real part is negative. This forces a search for conditions under which the roots of an equation have this property (for example, the Routh-Hurwitz problem).
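
The rule of signs is simple enough to state in a few lines of code (an illustrative sketch, ours):

    # Descartes' rule of signs: the number of positive roots equals the number
    # of sign changes among the nonzero coefficients, or is smaller by an even number.
    def sign_changes(coeffs):
        signs = [c > 0 for c in coeffs if c != 0]
        return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

    # x^5 - 4x - 2 has coefficients 1, 0, 0, 0, -4, -2: one sign change,
    # hence exactly one positive root.
    print(sign_changes([1, 0, 0, 0, -4, -2]))  # 1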

Many theoretical and practical problems lead not to one equation but to a whole system of equations in several unknowns. An especially important case is that of systems of linear equations—that is, systems of m first-degree equations in n unknowns:

a₁₁x₁ + … + a₁ₙxₙ = b₁
a₂₁x₁ + … + a₂ₙxₙ = b₂
. . . . . . . . . . . . . . .
aₘ₁x₁ + … + aₘₙxₙ = bₘ

Here x₁, …, xₙ are the unknowns, and the coefficients are written with double subscripts indicating the number of the equation and the number of the unknown. The importance of systems of first-degree equations is not limited to the fact that they are the simplest. In practice (for example, in finding corrections in astronomical computations, or in estimating the error of approximate computations) one often deals with quantities known to be small, whose higher powers can be ignored because of their extremely small magnitude; to a first approximation, equations in such quantities reduce to linear ones. No less important is the fact that solving systems of linear equations is an essential part of the numerical solution of various applied problems. G. Leibniz (1700) had already called attention to the fact that in studying systems of linear equations it is most important to consider the table made up of the coefficients aᵢₖ. He showed how the so-called determinant, with whose aid systems of linear equations are studied, is constructed from these coefficients (in the case m = n). Later these tables, or matrices, became the subject of independent study, since it turned out that their role is not exhausted by applications to the theory of systems of linear equations. Today the theory of systems of linear equations and the theory of matrices form part of an important branch of science—linear algebra.
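
A small numerical example (ours) of solving such a system and computing the determinant of its coefficient matrix with numpy:

    # Solve the system  x + 2y = 5,  3x - y = 1.
    import numpy as np

    A = np.array([[1.0, 2.0],
                  [3.0, -1.0]])
    b = np.array([5.0, 1.0])
    print(np.linalg.solve(A, b))  # [1. 2.], that is, x = 1, y = 2
    print(np.linalg.det(A))       # about -7.0; nonzero, so the solution is unique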

Based on material from the article in the Great Soviet Encyclopedia, 2nd ed.

The applications of mathematics have expanded over time, and the rate of this expansion has increased. If in the 18th century mathematics became the basis for mechanics and astronomy, in the 19th century it became indispensable for different areas of physics, and now mathematical methods have penetrated areas of knowledge apparently remote from mathematics, such as biology, linguistics, and sociology. Every new application entails the creation of new chapters within mathematics itself. This tendency has led to the development of a considerable number of separate mathematical disciplines distinguished by their areas of research: the theory of functions of a complex variable, the theory of probability, the theory of equations of mathematical physics, and others; among the newer areas are information theory, the theory of automatic control, and so forth. Despite such differentiation, mathematics remains a unified science. This unity is preserved thanks to the development and refinement of a number of unifying ideas and points of view. The tendency toward unification lies at the heart of mathematics, a science that works by the method of abstraction and, in addition, is often stimulated by the necessity of applying one and the same mathematical apparatus to problems arising in different areas.

Modern algebra can be understood as the study of operations on arbitrary mathematical objects; it is one of the branches of mathematics that form general concepts and methods for mathematics as a whole. It shares this role with topology, which studies the most general properties of continuous extension. Despite the difference in their objects of study, algebra and topology are so related that it is difficult to draw a firm boundary between them. It is characteristic of modern algebra to focus attention on the properties of operations and not on the objects on which these operations are performed. Let us try to explain this with a simple example. Everyone knows the formula (a + b)² = a² + 2ab + b². Its derivation is a chain of equalities:

(a + b)² = (a + b)(a + b) = (a + b)a + (a + b)b = (a² + ba) + (ab + b²) = a² + (ba + ab) + b² = a² + 2ab + b².

In establishing this we used, twice, the distributive law c(a + b) = ca + cb (the role of c is played by a + b) and (a + b)c = ac + bc (the role of c is played first by a, then by b); the associative law of addition, which allows the terms being added to be regrouped; and finally the commutative law ba = ab. It is immaterial what objects are designated by the letters a and b; what matters is that they belong to a system of objects in which two operations—addition and multiplication—are defined and satisfy the above requirements, which concern not the properties of the objects but the properties of the operations. Thus, the formula remains true if a and b designate vectors in a plane or in space, where addition is understood first as vector addition and then as addition of numbers, and multiplication as the scalar multiplication of vectors. One can substitute for a and b commuting matrices—that is, matrices for which ab = ba (this is not true of all matrices)—or operators of differentiation with respect to two independent variables, and so forth.
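
The caveat about commuting matrices can be checked directly; in the following sketch (ours) the identity fails for a non-commuting pair and holds for a commuting one:

    # (A + B)^2 = A^2 + 2AB + B^2 holds when AB = BA but can fail otherwise.
    import numpy as np

    A = np.array([[0, 1],
                  [0, 0]])
    B = np.array([[0, 0],
                  [1, 0]])

    lhs = (A + B) @ (A + B)
    rhs = A @ A + 2 * (A @ B) + B @ B
    print(np.array_equal(lhs, rhs))    # False: A and B do not commute

    C = 2 * np.eye(2)                  # C commutes with A
    lhs = (A + C) @ (A + C)
    rhs = A @ A + 2 * (A @ C) + C @ C
    print(np.array_equal(lhs, rhs))    # True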

The properties of operations on mathematical objects may differ in various situations or may be identical despite differences in the objects. Ignoring the nature of the objects and focusing on certain properties of the operations carried out on them, we come to the concept of a set endowed with an algebraic structure, or the concept of an algebraic system. The needs arising in the development of science have brought to life a whole host of interesting algebraic systems: groups, linear spaces, fields, rings, and so forth. Modern algebra basically studies the established algebraic systems, as well as the properties of algebraic systems in general, on the basis of still more general concepts (Ω-algebras, models). In addition to this orientation, which is called general algebra, applications of algebraic methods in other branches of mathematics and beyond its limits are studied (topology, functional analysis, number theory, algebraic geometry, computational mathematics, theoretical physics, crystallography, and so forth).

The most important algebraic systems with one operation are groups. The operation in a group is associative—that is, (a * b) * c = a * (b * c) for any a, b, and c in the group, where the asterisk * denotes the operation, which in different situations may have different names—and uniquely invertible—that is, for any a and b in the group there exist a unique x and a unique y such that a * x = b and y * a = b. Examples of groups are the set of all integers under addition and the set of all positive rational numbers (integers and fractions) under multiplication. In these examples the operation—addition in the first, multiplication in the second—is commutative; such groups are called Abelian groups. The set of motions that superpose a given figure or body on itself forms a group if the consecutive performance of two such motions is taken as the operation. Such groups (symmetry groups of a figure) may be non-Abelian. The motions superposing the atomic lattice of a crystal on itself form a so-called Fedorov group, which plays a fundamental role in crystallography and, through it, in solid-state physics. Groups may be finite (the symmetry group of a cube) or infinite (the group of integers under addition), discrete (the same example) or continuous (the group of rotations of a sphere). The theory of groups has become a rich mathematical theory with many branches and extensive applications.
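
For small finite systems the group axioms stated above, associativity and the unique solvability of a * x = b and y * a = b, can be checked by brute force; a sketch (ours) for the integers under addition modulo n:

    # Check the group axioms for Z_n under addition modulo n (a finite Abelian group).
    def is_group(elements, op):
        # closure
        if any(op(a, b) not in elements for a in elements for b in elements):
            return False
        # associativity
        if any(op(op(a, b), c) != op(a, op(b, c))
               for a in elements for b in elements for c in elements):
            return False
        # unique solvability of a * x = b and y * a = b
        for a in elements:
            for b in elements:
                xs = [x for x in elements if op(a, x) == b]
                ys = [y for y in elements if op(y, a) == b]
                if len(xs) != 1 or len(ys) != 1:
                    return False
        return True

    n = 6
    print(is_group(set(range(n)), lambda a, b: (a + b) % n))  # True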

No less rich in applications is linear algebra, which studies linear spaces. This designation refers to algebraic systems with two operations: addition and multiplication by numbers (real or complex). Under addition the objects, called vectors, form an Abelian group; multiplication by numbers satisfies the requirements

a(x + y) = ax + ay,  (a + b)x = ax + bx,  1·x = x,  a(bx) = (ab)x,

where a and b designate numbers and x and y, vectors. Vectors in the ordinary sense, in a plane or in space, form linear spaces under this definition. However, the problems confronting mathematics have forced the examination of multidimensional and even infinite-dimensional linear spaces. The latter (whose elements are most frequently functions) are the subject matter of functional analysis. The ideas and methods of linear algebra are applied in most branches of mathematics, beginning with analytic geometry and the theory of systems of linear equations. The theory of matrices and determinants forms the computational apparatus of linear algebra.
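
The four requirements can be spot-checked numerically for ordinary vectors; a small illustration (ours) with numpy:

    # Spot-check the linear-space axioms for 3-dimensional vectors.
    import numpy as np

    rng = np.random.default_rng(0)
    x, y = rng.random(3), rng.random(3)
    a, b = 2.5, -1.5

    print(np.allclose(a * (x + y), a * x + a * y))  # a(x + y) = ax + ay
    print(np.allclose((a + b) * x, a * x + b * x))  # (a + b)x = ax + bx
    print(np.allclose(1 * x, x))                    # 1x = x
    print(np.allclose(a * (b * x), (a * b) * x))    # a(bx) = (ab)x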



History of algebra
Vygodskii, M. Ia. Arifmetika i algebra v drevnem mire, 2nd ed. Moscow, 1967.
Iushkevich, A. P. Istoriia matematiki v srednie veka. Moscow, 1961.
Vileitner, G. Istoriia matematiki ot Dekarta do serediny XIX stoletiia, 2nd ed. Moscow, 1966. (Translated from German.)
Classics of science
Descartes, R. Geometriia. Moscow-Leningrad, 1938. (Translated from Latin.)
Newton, I. Vseobshchaia arifmetika, ili kniga ob arifmeticheskikh sinteze i analize. Moscow, 1948. (Translated from Latin.)
Euler, L. Universal’naia arifmetika, vols. 1–2. St. Petersburg, 1768–69. (Translated from German.)
Lobachevskii, N. I. Polnoe sobranie sochinenii, vol. 4: Sochineniia po algebre. Moscow-Leningrad, 1948.
Galois, E. Sochineniia. Moscow-Leningrad, 1936. (Translated from French.)
University courses
Kurosh, A. G. Kurs vysshei algebry, 9th ed. Moscow, 1968.
Gel’fand, I. M. Lektsii po lineinoi algebre, 3rd ed. Moscow, 1966.
Mal’tsev, A. I. Osnovy lineinoi algebry. Moscow-Leningrad, 1948.
Monographs on general problems in algebra
Van der Waerden, B. L. Sovremennaia algebra. 2nd ed., parts 1–2. Moscow-Leningrad, 1947. (Translated from German.)
Bourbaki, N. Algebra. Moscow, 1962–66. [Chapters 1–9] (Translated from French.)
Kurosh, A. G. Lektsii po obshchei algebre. Moscow, 1962.
Monographs on specialized divisions of algebra
Shmidt, O. Abstraktnaia teoriia grupp, 2nd ed. Moscow-Leningrad, 1933.
Kurosh, A. G. Teoriia grupp, 3rd ed. Moscow, 1967.
Pontriagin, L. S. Nepreryvnye gruppy, 2nd ed. Moscow, 1954.
Chebotarev, N. G. Osnovy teorii Galua, parts 1–2. Moscow-Leningrad, 1934–37.
Jacobson, N. Teoriia kolets. Moscow, 1947. (Translated from English.)


A method of solving practical problems by using symbols, usually letters, for unknown quantities.
The study of the formal manipulations of equations involving symbols and numbers.
An abstract mathematical system consisting of a vector space together with a multiplication by which two vectors may be combined to yield a third, and some axioms relating this multiplication to vector addition and scalar multiplication. Also known as hypercomplex system.


a branch of mathematics in which arithmetical operations and relationships are generalized by using alphabetic symbols to represent unknown numbers or members of specified sets of numbers


(mathematics, logic)
1. A loose term for an algebraic structure.

2. A vector space that is also a ring, where the vector space and the ring share the same addition operation and are related in certain other ways.

An example algebra is the set of 2x2 matrices with real numbers as entries, with the usual operations of addition and matrix multiplication, and the usual scalar multiplication. Another example is the set of all polynomials with real coefficients, with the usual operations.

In more detail, we have:

(1) an underlying set,

(2) a field of scalars,

(3) an operation of scalar multiplication, whose input is a scalar and a member of the underlying set and whose output is a member of the underlying set, just as in a vector space,

(4) an operation of addition of members of the underlying set, whose input is an ordered pair of such members and whose output is one such member, just as in a vector space or a ring,

(5) an operation of multiplication of members of the underlying set, whose input is an ordered pair of such members and whose output is one such member, just as in a ring.

This whole thing constitutes an `algebra' iff:

(1) it is a vector space if you discard item (5) and

(2) it is a ring if you discard (2) and (3) and

(3) for any scalar r and any two members A, B of the underlying set we have r(AB) = (rA)B = A(rB). In other words it doesn't matter whether you multiply members of the algebra first and then multiply by the scalar, or multiply one of them by the scalar first and then multiply the two members of the algebra. Note that the A comes before the B because the multiplication is in some cases not commutative, e.g. the matrix example.
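
A quick numerical spot-check of this condition for the 2x2 matrix example (our sketch, using numpy):

    # r(AB) = (rA)B = A(rB) for 2x2 real matrices.
    import numpy as np

    r = 3.0
    A = np.array([[1.0, 2.0], [3.0, 4.0]])
    B = np.array([[0.0, 1.0], [1.0, 0.0]])

    print(np.allclose(r * (A @ B), (r * A) @ B))  # True
    print(np.allclose(r * (A @ B), A @ (r * B)))  # True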

Another example (an example of a Banach algebra) is the set of all bounded linear operators on a Hilbert space, with the usual norm. The multiplication is the operation of composition of operators, and the addition and scalar multiplication are just what you would expect.

Two other examples are tensor algebras and Clifford algebras.

[I. N. Herstein, "Topics in Algebra"].