number
number, entity describing the magnitude or position of a mathematical object, or extensions of these concepts.
The Natural Numbers
Cardinal numbers describe the size of a collection of objects; two such collections have the same (cardinal) number of objects if their members can be matched in a one-to-one correspondence. Ordinal numbers refer to position relative to an ordering, as first, second, third, etc. The finite cardinal and ordinal numbers are called the natural numbers and are represented by the symbols 1, 2, 3, 4, etc. Both types can be generalized to infinite collections, but in this case an essential distinction occurs that requires a different notation for the two types (see transfinite number).
The Integers and Rational Numbers
To the natural numbers one adjoins their negatives and zero to form the integers. The ratios a/b of the integers, where a and b are integers and b ≠ 0, constitute the rational numbers; the integers are those rational numbers for which b = 1. The rational numbers may also be represented by repeating decimals; e.g., 1/2 = 0.5000 … , 2/3 = 0.6666 … , 2/7 = 0.285714285714 … (see decimal system).
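The fact that every rational number has a repeating (or terminating) decimal expansion follows from the long-division algorithm: only finitely many remainders modulo the denominator are possible, so some remainder must eventually recur, and from that point the digits cycle. A minimal sketch in Python (the function name and return format are illustrative, not from the source):

```python
def decimal_expansion(a, b):
    """Decimal expansion of a/b for integers a >= 0, b > 0, by long division.

    Returns (whole_part, non_repeating_digits, repeating_digits).
    A remainder can take only b distinct values, so it must recur,
    which is why every rational expansion eventually repeats.
    """
    whole = str(a // b)
    r = a % b
    digits, seen = [], {}          # seen maps remainder -> digit position
    while r and r not in seen:
        seen[r] = len(digits)
        r *= 10
        digits.append(str(r // b))
        r %= b
    if r == 0:                     # terminating expansion: repeating part is "0"
        return whole, "".join(digits), "0"
    i = seen[r]                    # cycle starts where this remainder first appeared
    return whole, "".join(digits[:i]), "".join(digits[i:])

print(decimal_expansion(1, 2))    # ('0', '5', '0')       i.e. 0.5000...
print(decimal_expansion(2, 7))    # ('0', '', '285714')   i.e. 0.285714285714...
```

The three examples in the text (1/2, 2/3, 2/7) all come out as expected; the length of the repeating block always divides into fewer than b positions.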
The Real Numbers
The real numbers are those representable by an infinite decimal expansion, which may be repeating or nonrepeating; they are in a one-to-one correspondence with the points on a straight line and are sometimes referred to as the continuum. Real numbers that have a nonrepeating decimal expansion are called irrational, i.e., they cannot be represented by any ratio of integers. The Greeks knew of the existence of irrational numbers through geometry; e.g., √2 is the length of the diagonal of a unit square. The proof that √2 cannot be represented by such a ratio was the first proof of the existence of irrational numbers, and it caused tremendous upheaval in the mathematical thinking of that time.
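The classical argument that √2 admits no ratio of integers is short enough to sketch in full (this is the standard parity proof, not taken verbatim from the source):

```latex
% Classical proof sketch that \sqrt{2} is irrational.
Suppose $\sqrt{2} = a/b$ with $a, b$ integers, $b \neq 0$, and the
fraction in lowest terms. Squaring gives $a^2 = 2b^2$, so $a^2$ is
even, hence $a$ is even: $a = 2k$. Substituting, $4k^2 = 2b^2$,
i.e., $b^2 = 2k^2$, so $b$ is even as well. Then $a$ and $b$ share
the factor $2$, contradicting the assumption of lowest terms;
therefore no such ratio exists.
```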
The Complex Numbers
Numbers of the form z = x + yi, where x and y are real and i = √−1, such as 8 + 7i (or 8 + 7√−1), are called complex numbers; x is called the real part of z and yi the imaginary part. The real numbers are thus complex numbers with y = 0; e.g., the real number 4 can be expressed as the complex number 4 + 0i. The complex numbers are in a one-to-one correspondence with the points of a plane, with one axis defining the real parts of the numbers and one axis defining the imaginary parts. Mathematicians have extended this concept even further, as in quaternions (see quaternion).
The Algebraic and Transcendental Numbers
A real or complex number z is called algebraic if it is a root of a polynomial equation zⁿ + aₙ₋₁zⁿ⁻¹ + … + a₁z + a₀ = 0, where the coefficients a₀, a₁, … , aₙ₋₁ are all rational; if z is not a root of any such equation, it is said to be transcendental. The number √2 is algebraic because it is a root of the equation z² − 2 = 0; similarly, i, a root of z² + 1 = 0, is also algebraic. However, F. Lindemann showed (1882) that π is transcendental, and using this fact he proved the impossibility of "squaring the circle" by straightedge and compass alone (see geometric problems of antiquity). The number e has also been shown to be transcendental, although it remains unknown whether e + π is transcendental.
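Verifying that a given number is algebraic amounts to exhibiting a rational-coefficient polynomial it satisfies; a numerical check of the two examples above can be sketched in a few lines (the helper `eval_poly` is an illustrative name, not from the source). Note that transcendence cannot be demonstrated this way: it is the nonexistence of any such polynomial, a deep theorem in each case.

```python
import cmath

def eval_poly(coeffs, z):
    """Evaluate a polynomial at z, given coefficients [a0, a1, ..., an]."""
    return sum(a * z**k for k, a in enumerate(coeffs))

# sqrt(2) is algebraic: it satisfies z^2 - 2 = 0 (coefficients -2, 0, 1).
assert abs(eval_poly([-2, 0, 1], cmath.sqrt(2))) < 1e-12

# i is algebraic: it satisfies z^2 + 1 = 0 (coefficients 1, 0, 1).
assert abs(eval_poly([1, 0, 1], 1j)) < 1e-12

print("both roots check out")
```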
See G. Ifrah, The Universal History of Numbers (1999).
in linguistics, the grammatical category indicating, through morphological means, the number of participants in an action (subjects and objects). The primary distinction in the category of number is between singular and plural. Some languages also have a dual number and, more rarely, a triple number. In the historical development of a language, the dual may weaken and be absorbed by the plural, as happened in the Slavic languages; for example, in the second person personal pronoun Old Church Slavonic distinguished singular (ty, “thou”), dual (va, “you two”), and plural (vy, “you”).
The forms and meanings of the plural include the distributive plural, which indicates that plurality is thought of as consisting of individual objects (listy, “leaves”), and the collective plural, which indicates that plurality is thought of as a single aggregate (list’ia, “leaves”). Collective meaning can also be expressed by a singular form (triap’e, “rags,” voron’e, “crows”). Plural forms can also indicate the concept of class (generic plural) (v etoi mestnosti vodiatsia volki, “there are wolves in this area”). Plural forms may sometimes be used with the meaning of the singular, such as the polite, or honorific, form of the second person personal pronoun (vy, “you,” in addressing one person) and the plural form in the first person (my, “we”) as used in the speech of sovereigns.
Number is an independent category in nouns and personal pronouns. Other parts of speech, including verbs, adjectives, and the other types of pronouns, acquire marking for number through agreement (syntactic number). Number agreement is obligatory in the Indo-European languages (on rabotaet, “he works,” oni rabotaiut, “they work”). However, as morphology becomes simpler, agreement may also disappear. For example, in English there is no number agreement between adjective and noun (“clever child,” “clever children”). There are various ways of expressing plural number: affixation (stol, “table,” stoly, “tables”), suppletion (chelovek, “person,” liudi, “people”), internal inflection (Arabic radžulun, “man,” ridžālun, “men,”), in which the root vowel changes, and reduplication (Indonesian [Malay] orang, “person,” orang-orang, “people”). In Indo-European languages the plural form is required if a noun is modified by a word denoting quantity (desiat’ knig, “ten books,” mnogo knig, “many books”); in other languages the noun may have the form of the singular in such constructions (Hungarian könyv, “book,” tu könyv, “ten books,” sok könyv, “many books”). In many Asian and American languages the plural of nouns used in constructions containing a numeral is expressed by means of special classifiers, which differ according to the lexical group to which the noun belongs. In such instances the nouns do not change their form (Vietnamese hai con meo, “two cats,” where con is the classifier).
REFERENCES
Sapir, E. Iazyk. Moscow-Leningrad, 1934. (Translated from English.)
Jespersen, O. Filosofiia grammatiki. Moscow, 1958. (Translated from English.)
Reformatskii, A. A. “Chislo i grammatika.” In the collection Voprosy grammatiki. Moscow-Leningrad, 1960.
Vinogradov, V. V. Russkii iazyk, 2nd ed. Moscow, 1972.
V. A. VINOGRADOV
the most important mathematical concept. The concept of number arose in simplest form in primitive society and over the ages has undergone changes, gradually growing richer in content with the expansion of the range of human activities and the range of problems requiring quantitative description and investigation. In the first stages of development, the concept of number was determined by the requirements of counting and measurement in man’s daily activities. Subsequently, number became the basic concept of mathematics, and the further development of the concept has been determined by the needs of mathematical science.
The concept of natural number, or positive integer, arose as far back as prehistoric times in connection with the need to count objects. In general, its formation and development proceeded as follows. At the lower stage of primitive society, the concept of abstract number was nonexistent. This did not mean that primitive man was unable to ascertain the number of objects in a given set, such as the number of people involved in a hunt or the number of lakes in which fish could be caught. However, the consciousness of primitive man still could not perceive that which was common to various groups of objects, such as “three people” or “three lakes.” Analyses of the languages of primitive peoples have shown that different phrases were used in counting different objects. The word “three” was conveyed differently in the contexts of “three people” and “three boats.” Of course, such named number series were very short and terminated in a nonindividualized concept (“many”) of a large number of some object, which was also named, that is, expressed by different words for different kinds of objects, such as “crowd,” “herd,” and “pile.”
The concept of abstract number emerged out of the primitive way of counting objects, consisting in comparing the objects of a given specific set with objects of some defined set that acts as a standard. Among most peoples, fingers served as the first such standard (“finger counting”), confirmed by linguistic analyses of the names of the first numbers. At this stage, number becomes abstract, independent of the nature of the objects being counted, and at the same time acts as a fully concrete embodiment associated with the nature of the reference set. The expanding needs of counting forced people to use other counting standards, such as notches on sticks. For recording comparatively large numbers, a new idea came to be used—the designation of some specific number (ten among most peoples) by a new symbol, for example, a notch on a different stick.
With the development of writing, the possibilities of reproducing numbers expanded significantly. At first, numbers were denoted by lines on the writing material, such as papyrus or clay tablet. Later, other symbols were introduced to denote large numbers. The Babylonian cuneiform notations for numbers, like the Roman numerals that have been preserved to this day, clearly show that it was precisely in this manner that number notation developed. A major step forward was the Hindu positional numeration system, which made it possible to write any natural number by means of ten symbols—digits (see NUMERATION SYSTEM). Thus, as writing developed, the concept of natural number assumed increasingly abstract form. Also, the abstract concept of number, expressed by special words in speech and denoted by special symbols in writing, became increasingly entrenched.
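The essence of positional numeration is that each position in the row of digits stands for a power of the base, so any natural number can be written with a fixed, finite set of symbols. A short sketch (function names are illustrative; the same two routines work for base 10, for the Babylonian base 60, or for any other base):

```python
def to_digits(n, base=10):
    """Positional notation: peel off digits with divmod.

    Each position in the result corresponds to a power of the base,
    most significant digit first."""
    if n == 0:
        return [0]
    digits = []
    while n:
        n, d = divmod(n, base)
        digits.append(d)
    return digits[::-1]

def from_digits(digits, base=10):
    """Reassemble the number as a sum of digit * base**position."""
    value = 0
    for d in digits:
        value = value * base + d
    return value

assert to_digits(1874) == [1, 8, 7, 4]          # 1*1000 + 8*100 + 7*10 + 4
assert from_digits(to_digits(3661, base=60), base=60) == 3661  # base-60 round trip
```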
An important step in the development of the concept of natural number was the recognition of the infiniteness of the sequence of natural numbers, that is, the possibility of its unlimited continuation. A clear understanding of the infiniteness of the sequence of natural numbers is reflected in Greek mathematical works (third century B.C.), specifically in Euclid’s and Archimedes’ works. The unlimited continuability of the sequence of prime numbers is established as early as Euclid’s Elements, and the principles for the construction of names and symbols for any large number, particularly numbers larger than the “number of grains of sand in the world,” are given in Archimedes’ book the Sand Reckoner.
Operations on numbers came into use with the development of the concept of natural number in connection with the counting of objects. The operations of addition and subtraction originally arose as operations on sets themselves, in the form of the joining of two sets into one and the separation of part of a set. Multiplication, apparently, arose as a result of counting in equal parts (by twos or threes, for example), and division arose as the division of a set into equal parts (see MULTIPLICATION and DIVISION). The abstract nature of these operations became evident after centuries of experience, as did the independence of the quantitative result of an operation from the nature of the objects forming the set, for example, that two objects and three objects will add up to five objects, regardless of the nature of the objects. Only then did mathematicians begin to develop the rules of operations, to study the operations, and to devise methods of solving problems; in other words, only then did the development of the science of numbers—arithmetic—begin (see ARITHMETIC). Arithmetic developed above all as a system of knowledge with an overt practical orientation. However, in the very process of its development, it became evident that there was a need to study the properties of numbers as such and to elucidate the increasingly complex regularities in their interrelations brought about by the very existence of the operations. The refinement of the concept of natural number began, and various classes, such as even and odd numbers and prime and composite numbers, became distinguished. The study of the deep-seated regularities of the natural numbers continues and constitutes the branch of mathematics known as number theory (see NUMBERS, THEORY OF).
Natural numbers, in addition to their primary function of characterizing the number of objects, have another function, that of characterizing the order of objects in a sequence. The concept of ordinal number (first, second, and so on), which arises in connection with this function, is closely linked with the concept of cardinal number (one, two, and so on). In particular, the most frequently used method of counting objects since time immemorial has been the placement of the objects being counted in a sequence and then counting them using ordinal numbers (for example, if the last object being counted is seventh, then the total number of objects is seven).
The question of substantiating the concept of natural number is rather recent. The concept is so familiar and simple that the need for its definition in terms of some simpler concepts never arose. It was only in the mid–19th century, under the influence of the development of the axiomatic method in mathematics, on the one hand (see AXIOMATIC METHOD), and the critical reassessment of the foundations of mathematical analysis, on the other, that the time became ripe for substantiating the concept of cardinal natural number. A clear definition of the concept of natural number based on the concept of set (an aggregate of objects) was provided in the 1870’s by G. Cantor. First, Cantor defines the concept of the equivalence of sets. Specifically, two sets are said to be equivalent if the objects of the sets can be put into one-to-one correspondence. Then the number of objects within a given set is defined as that which the given set and any other set of objects equivalent to it have in common. The definition reflects the essence of the natural number as that which results from counting the objects composing the given set. Indeed, at all historical levels, counting has consisted in comparing one by one the objects being counted with the objects constituting a “reference” set (in the early stages, the fingers of the hands and notches on sticks; today, words and symbols representing numbers). Cantor’s definition was the starting point for the extension of the concept of cardinal number from finite to infinite sets.
Another substantiation of the concept of natural number is based on an analysis of the relation of succession, which, it turns out, can be axiomatized. A system of axioms constructed on the basis of this principle was formulated by G. Peano.
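Peano's axiomatization builds the natural numbers from just two primitives, zero and the successor relation, with the arithmetic operations defined by recursion on succession. A toy rendering in Python (the class and function names are illustrative; this is a sketch of the idea, not a formal axiom system):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Nat:
    """A Peano natural: zero (pred is None) or the successor of another Nat."""
    pred: Optional["Nat"] = None

ZERO = Nat()

def succ(n: Nat) -> Nat:
    """The successor operation: the sole way of building a new natural."""
    return Nat(n)

def add(m: Nat, n: Nat) -> Nat:
    """Addition defined by recursion on succession: m + 0 = m, m + S(n) = S(m + n)."""
    if n.pred is None:
        return m
    return succ(add(m, n.pred))

def to_int(n: Nat) -> int:
    """Count the successor steps back to zero (for display only)."""
    return 0 if n.pred is None else 1 + to_int(n.pred)

two = succ(succ(ZERO))
three = succ(two)
assert to_int(add(two, three)) == 5
```

Multiplication can be layered on the same way (m * S(n) = m * n + m), which is exactly how the operations are recovered inside the axiomatic framework.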
It should be noted that the extension of the concept of ordinal number to infinite sets (transfinite ordinal numbers and, more generally, ordinal types) diverges sharply from the generalized concept of cardinal number, since quantitatively identical (equivalent) sets can be ordered by different methods. (See TRANSFINITE ORDINAL NUMBER and SET THEORY.)
Historically, the first extension of the concept of number was the introduction of fractions. The first use of fractions was connected with the need to carry out measurements. The measurement of some quantity consists in comparing it with another qualitatively similar quantity, which is taken as the unit of measurement. This comparison is performed by means of the operation—specific to the method of measurement—of “applying” the unit of measurement to the quantity being measured and counting the number of such applications. Length is measured in this way by applying a segment that is taken as the unit of measurement, and the amount of a liquid is measured by means of a measuring vessel. However, the unit of measurement does not always fit the quantity being measured a whole number of times, a fact that cannot always be ignored, even in the most primitive practical activity. Herein lies the source of the simplest and most “convenient” fractions, such as one-half, one-third, and one-fourth. It was only with the development of arithmetic as the science of numbers that the idea emerged of considering fractions with any natural denominator, as well as the concept of fractional number as a quotient in the division of two natural numbers, of which the dividend is not divisible by the divisor (see FRACTION).
The further extension of the concept of number was now no longer connected with the direct needs of counting and measurement but was a direct consequence of the development of mathematics.
The introduction of negative numbers was necessitated by the development of algebra as the science providing general methods for the solution of arithmetic problems, regardless of content or given numerical data. The need for negative numbers in algebra arose in the solution of problems that reduce to linear equations with one unknown. A possible negative answer in problems of this kind may be interpreted by using as examples very simple directed quantities (oppositely directed segments, motion in the direction opposite to the direction chosen, property versus debt). However, in problems that involve the repeated application of the operations of addition and subtraction, a great many cases must be considered in the course of a solution if negative numbers are not used, which may prove to be so burdensome a task that the advantage of an algebraic solution over an arithmetic solution is lost. Thus, the extensive use of algebraic methods in solving problems is extremely difficult unless negative numbers are used. In India negative numbers were used systematically as early as the sixth to 11th centuries in problem solving and were interpreted basically as they are today.
In European science the use of negative numbers did not become firmly established until the time of R. Descartes, who provided a geometrical interpretation of negative numbers as directed line segments. Descartes’s creation of analytic geometry, which made it possible to consider the roots of an equation as the coordinates of the points of intersection of some curve with the axis of abscissas, at long last eliminated the fundamental difference between the positive and negative roots of an equation, since their interpretation proved to be essentially the same.
Integers and fractions, both positive and negative, as well as the number zero, were grouped under the general term “rational numbers.” The set of rational numbers is said to be closed with respect to the four arithmetic operations. This means that the sum, difference, product, and quotient (except the quotient in division by zero, which is meaningless) of any two rational numbers is also a rational number. The set of rational numbers is ordered with respect to the concepts of greater than and less than. Furthermore, it has the property of density: there are infinitely many rational numbers between any two different rational numbers. This makes it possible to carry out various measurements, for example, of the length of a line segment using a selected unit of measurement, to any degree of accuracy by means of rational numbers. Thus, the set of rational numbers turns out to be sufficient to satisfy many practical needs. The formal substantiation of the concepts of fraction and negative number was accomplished in the 19th century and, in contrast to the substantiation of the concept of natural number, posed no fundamental difficulties.
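Both properties named above (closure under the four operations and density) can be exhibited directly with exact rational arithmetic; Python's standard `fractions.Fraction` serves as the model here. For density, repeatedly taking midpoints shows there is no end to the rationals between two given ones:

```python
from fractions import Fraction

a, b = Fraction(2, 3), Fraction(-5, 7)

# Closure: each of the four operations on two rationals yields a rational
# (division only by a nonzero rational, as the text notes).
for result in (a + b, a - b, a * b, a / b):
    assert isinstance(result, Fraction)

# Density: the midpoint of two distinct rationals is a rational strictly
# between them, and the step can be repeated without end.
lo, hi = b, a
for _ in range(5):
    mid = (lo + hi) / 2
    assert lo < mid < hi
    hi = mid          # shrink the interval; a new rational always fits inside

print("closure and density checks passed")
```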
The set of rational numbers proved insufficient for the study of continuously changing variables, which necessitated a further extension of the concept of number, consisting in the transition from the set of rational numbers to the set of real numbers. This transition involved adjoining the irrational numbers to the rational numbers. A discovery of vast fundamental importance was made by the ancient Greeks: not all precisely defined line segments (the term “precisely defined” itself is an idealization inherent in geometry) are commensurable; that is, the length of a line segment cannot always be expressed by rational numbers if another line segment is taken as the unit. A classic example of incommensurable line segments is the side of a square and its diagonal. That incommensurable line segments exist was not an impediment to the development of geometry. The Greeks worked out a theory of the ratios of segments, presented in Euclid’s Elements, that takes into account the possibility of incommensurability. The Greeks knew how to compare the magnitude of such ratios and to perform arithmetic operations on them (in purely geometrical form); that is, they treated them as numbers. However, they did not fully perceive the idea that the ratio of the lengths of incommensurable line segments may be considered as numbers. This may be attributed to the idealist separation of theoretical mathematics from practical problems prevalent in the school to which Euclid belonged. In Archimedes’ works we find greater interest in practical problems, particularly approximate calculations of the ratios of incommensurable line segments, but even Archimedes did not develop the concept of irrational number as a number that expresses the ratio of the lengths of incommensurable line segments.
In the 17th century, the era of the birth of modern science, particularly modern mathematics, a number of methods of studying continuous processes and methods of approximate calculations were developed. A clear definition of the concept of real number is given by I. Newton, one of the founders of mathematical analysis, in Arithmetica universalis: “By number we mean not so much a set of units as the abstract ratio of some quantity to another quantity of the same kind that we have taken as unity.” This formulation gives a unified definition of a real number, whether rational or irrational. Later, in the 1870’s, the concept of real number was refined on the basis of a detailed analysis of the concept of continuity in the works of R. Dedekind, G. Cantor, and K. Weierstrass.
According to Dedekind, the property of continuity of a straight line consists in the fact that if all the points that make up a straight line are divided into two classes such that every point of the first class lies to the left of every point of the second class (that is, if the straight line is “broken” into two parts), then either the first class contains a rightmost point or the second class contains a leftmost point. In either case, the “extreme” point is the point at which the “break” of the straight line occurred.
The set of all rational numbers does not possess the property of continuity. If the set of all rational numbers is divided into two classes such that every number of the first class is smaller than every number of the second, then upon such a subdivision (a Dedekind cut) it may turn out that there will be no largest number in the first class and no smallest number in the second. This will be the case, for example, if all negative rational numbers, zero, and all positive (rational) numbers whose square is less than 2 are placed in the first class and all positive (rational) numbers whose square is greater than 2 are placed in the second class. Such a cut is said to be irrational. Then the following definition of an irrational number is given: with every irrational cut in the set of rational numbers we associate an irrational number assumed to be larger than any number of the first class and smaller than any number of the second class. The set of all real numbers, both rational and irrational, already has the property of continuity.
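The cut described above can be written down directly: a single membership test splits every rational into the lower or upper class, and the lower class can be climbed indefinitely without reaching a largest element. A sketch in exact rational arithmetic (the function name is illustrative):

```python
from fractions import Fraction

def lower_class(q: Fraction) -> bool:
    """The irrational cut for sqrt(2): negatives, zero, and positives with
    square < 2 form the first (lower) class; the rest form the second."""
    return q <= 0 or q * q < 2

# No rational squares to exactly 2, so every rational lands in one class.
# Yet the lower class has no largest member: from any member, a slightly
# larger rational still has square < 2.
q = Fraction(1)
for _ in range(10):
    assert lower_class(q)
    step = Fraction(1, 2)
    while not lower_class(q + step):   # halve the step until it fits below the cut
        step /= 2
    q += step                          # strictly larger, still in the lower class

assert lower_class(q)
assert not lower_class(Fraction(3, 2))  # (3/2)^2 = 9/4 > 2: upper class
```

The climbing loop produces rationals converging on √2 from below, which is exactly the gap the cut identifies with a new, irrational number.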
Cantor’s substantiation of the concept of real number differs from Dedekind’s, although it too is based on an analysis of the concept of continuity. Both Dedekind’s and Cantor’s definitions use the abstraction of the actual infinite. For example, in Dedekind’s theory an irrational number is defined by means of a cut in the set of all rational numbers, which is conceived as being given in its entirety.
Recent years have seen the development of the concept of computable numbers, that is, numbers approximations to which can be given by means of some algorithm. This concept is defined on the basis of a refined concept of algorithm and without resorting to the abstraction of the actual infinite.
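A computable number, in this sense, is one for which an algorithm produces a rational approximation to any requested accuracy. √2 is the standard example: Newton's iteration in exact rational arithmetic serves as such an algorithm (the function name and error criterion |x² − 2| < ε are illustrative choices, not from the source):

```python
from fractions import Fraction

def sqrt2_approx(eps: Fraction) -> Fraction:
    """An algorithm witnessing that sqrt(2) is computable: Newton's
    iteration x -> (x + 2/x) / 2 in exact rational arithmetic, run
    until the requested error bound |x^2 - 2| < eps is met."""
    x = Fraction(3, 2)
    while abs(x * x - 2) >= eps:
        x = (x + 2 / x) / 2
    return x

approx = sqrt2_approx(Fraction(1, 10**12))
assert abs(approx * approx - 2) < Fraction(1, 10**12)
print(approx)   # a rational whose square is within 10^-12 of 2
```

Every rational produced along the way is exact; only finitely many iterations are needed for any ε > 0, which is what distinguishes this constructive notion from definitions that invoke the actual infinite.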
The final stage in the development of the concept of number was the introduction of complex numbers (see COMPLEX NUMBERS). The concept of complex number emerged in the course of algebra’s development. Apparently, it first arose among 16th-century Italian mathematicians (G. Cardano, R. Bombelli) in connection with the discovery of the algebraic solution of third- and fourth-degree equations. It is known that even the solution of a quadratic equation can sometimes lead to the operation of extracting the square root of a negative number, which cannot be performed in the domain of real numbers. This occurs only in cases when the equation does not have real roots. A practical problem that reduces to the solution of such a quadratic equation turns out to have no solution. The following fact was observed in connection with the discovery of the algebraic solution of third-degree equations. If all three roots of the equation are real numbers, then it proves necessary, in the course of calculations, to extract the square root of a negative number. The “imaginariness” that arises in this case disappears only when all subsequent operations are carried out. This fact was the first stimulus to the study of complex numbers. However, mathematicians were slow to accept the use of complex numbers and operations performed on them. The remnants of disbelief in the legitimacy of complex numbers are reflected in the term “imaginary,” preserved to this day. This disbelief was dissipated only after the establishment, in the late 18th century, of the geometrical interpretation of complex numbers as points in a plane and the establishment of the indisputable benefit derived from the introduction of complex numbers in the theory of algebraic equations, especially after the celebrated work of C. F. Gauss. Even before Gauss, in the works of L. Euler, complex numbers played a significant role not only in algebra but also in mathematical analysis.
They acquired particular importance in the 19th century, in connection with the development of the theory of functions of a complex variable.
The set of all complex numbers, like the set of real numbers and the set of rational numbers, is closed with respect to the operations of addition, subtraction, multiplication, and division. Moreover, the set of all complex numbers has the property of algebraic closure, which means that every algebraic equation with complex coefficients has roots that are also in the domain of all complex numbers. The set of all real numbers, and likewise the set of rational numbers, does not have the property of algebraic closure. For example, the equation x² + 1 = 0, with real coefficients, does not have real roots. As established by Weierstrass, the set of all complex numbers cannot be enlarged by the inclusion of new numbers in such a way that all laws of operation that are valid in the set of complex numbers are preserved in the enlarged set.
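Algebraic closure can be illustrated for the quadratic case: the familiar root formula never fails over the complex numbers, because the square root of a negative (or complex) discriminant exists there. A sketch using Python's built-in complex arithmetic (the function name is illustrative):

```python
import cmath

def quadratic_roots(a, b, c):
    """Roots of a*z^2 + b*z + c = 0 over the complex numbers.

    cmath.sqrt accepts a negative or complex discriminant, so the
    formula always produces two (possibly coincident) complex roots."""
    d = cmath.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

# x^2 + 1 = 0 has no real roots, but over C its roots are i and -i.
r1, r2 = quadratic_roots(1, 0, 1)
assert abs(r1 - 1j) < 1e-12 and abs(r2 + 1j) < 1e-12

# Closure: even with complex coefficients, the roots stay complex.
for r in quadratic_roots(1, -2j, -1):       # z^2 - 2iz - 1 = (z - i)^2
    assert abs(r * r - 2j * r - 1) < 1e-12
```

The full statement, that this holds for every polynomial degree, is the fundamental theorem of algebra; the real numbers fail already at degree two, as the first example shows.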
In addition to the main line of development of the concept of number (natural numbers → rational numbers → real numbers → complex numbers), the specific requirements of certain branches of mathematics have engendered various generalizations of the concept of number in essentially different directions. For example, in branches of mathematics associated with set theory, the aforementioned concepts of cardinal and ordinal transfinite numbers are of major importance. The p-adic numbers, systems of which are obtained from systems of rational numbers by the inclusion of new entities different from irrational numbers, have assumed major importance in modern number theory. Various systems of entities possessing properties that are more or less close to those of the set of integers or rational numbers—groups, rings, fields, and algebras (see GROUP; RING, ALGEBRAIC; and FIELD)—are being studied in algebra. (See also HYPERCOMPLEX NUMBERS.)
REFERENCES
Istoriia matematiki, vols. 1–3. Moscow, 1970–72.
Van der Waerden, B. L. Probuzhdaiushchaiasia nauka. Moscow, 1959. (Translated from Dutch.)
Entsiklopediia elementarnoi matematiki, book 1: Arifmetika. Moscow-Leningrad, 1951.
Nechaev, V. I. Chislovye sistemy. Moscow, 1972.
D. K. FADDEEV