physics, branch of science traditionally defined as the study of matter, energy, and the relation between them; it was called natural philosophy until the late 19th cent. and is still known by this name at a few universities. Physics is in some senses the oldest and most basic pure science; its discoveries find applications throughout the natural sciences, since matter and energy are the basic constituents of the natural world. The other sciences are generally more limited in their scope and may be considered branches that have split off from physics to become sciences in their own right. Physics today may be divided loosely into classical physics and modern physics.
Classical physics includes the traditional branches and topics that were recognized and fairly well developed before the beginning of the 20th cent.: mechanics, sound, light, heat, and electricity and magnetism. Mechanics is concerned with bodies acted on by forces and bodies in motion and may be divided into statics (study of the forces on a body or bodies at rest), kinematics (study of motion without regard to its causes), and dynamics (study of motion and the forces that affect it); mechanics may also be divided into solid mechanics and fluid mechanics, the latter including such branches as hydrostatics, hydrodynamics, aerodynamics, and pneumatics. Acoustics, the study of sound, is often considered a branch of mechanics because sound is due to the motions of the particles of air or other medium through which sound waves can travel and thus can be explained in terms of the laws of mechanics. Among the important modern branches of acoustics is ultrasonics, the study of sound waves of very high frequency, beyond the range of human hearing. Optics, the study of light, is concerned not only with visible light but also with infrared and ultraviolet radiation, which exhibit all of the phenomena of visible light except visibility, e.g., reflection, refraction, interference, diffraction, dispersion (see spectrum), and polarization of light. Heat is a form of energy, the internal energy possessed by the particles of which a substance is composed; thermodynamics deals with the relationships between heat and other forms of energy. Electricity and magnetism have been studied as a single branch of physics since the intimate connection between them was discovered in the early 19th cent.; an electric current gives rise to a magnetic field, and a changing magnetic field induces an electric current. Electrostatics deals with electric charges at rest, electrodynamics with moving charges, and magnetostatics with magnetic poles at rest.
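Several of the optical phenomena listed above admit compact quantitative statements. As one illustrative sketch, the refraction of light passing obliquely between media is governed by Snell's law (not stated explicitly in the text); the function name and the sample index of refraction for glass (about 1.5) are illustrative assumptions:

```python
import math

# Snell's law: n1 * sin(theta1) = n2 * sin(theta2) governs how a light ray
# is deflected when it passes obliquely from one medium into another.
def refraction_angle(theta1_deg, n1, n2):
    """Return the refracted angle in degrees, or None for total internal reflection."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1.0:
        return None  # no refracted ray: total internal reflection
    return math.degrees(math.asin(s))

# A ray entering glass (n ~ 1.5) from air at 30 degrees bends toward the
# normal, emerging at roughly 19.5 degrees.
angle = refraction_angle(30.0, 1.0, 1.5)
```

Note that when the ray travels from the denser medium toward the less dense one at a steep enough angle, no refracted ray exists and the light is entirely reflected.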
Most of classical physics is concerned with matter and energy on the normal scale of observation; by contrast, much of modern physics is concerned with the behavior of matter and energy under extreme conditions or on the very large or very small scale. For example, atomic and nuclear physics studies matter on the smallest scale at which chemical elements can be identified. The physics of elementary particles is on an even smaller scale, being concerned with the most basic units of matter; this branch of physics is also known as high-energy physics because of the extremely high energies necessary to produce many types of particles in large particle accelerators. On this scale, ordinary, commonsense notions of space, time, matter, and energy are no longer valid.
The two chief theories of modern physics present a different picture of the concepts of space, time, and matter from that presented by classical physics. The quantum theory is concerned with the discrete, rather than continuous, nature of many phenomena at the atomic and subatomic level, and with the complementary aspects of particles and waves in the description of such phenomena. The theory of relativity is concerned with the description of phenomena that take place in a frame of reference that is in motion with respect to an observer; the special theory of relativity is concerned with relative uniform motion in a straight line and the general theory of relativity with accelerated motion and its connection with gravitation. Both the quantum theory and the theory of relativity find applications in all areas of modern physics.
Evolution of Physics
The earliest history of physics is interrelated with that of the other sciences. A number of contributions were made during the period of Greek civilization, dating from Thales and the early Ionian natural philosophers in the Greek colonies of Asia Minor (6th and 5th cent. B.C.). Democritus (c.460–370 B.C.) proposed an atomic theory of matter and extended it to other phenomena as well, but the dominant theories of matter held that it was formed of a few basic elements, usually earth, air, fire, and water. In the school founded by Pythagoras of Samos the principal concept was that of number; it was applied to all aspects of the universe, from planetary orbits to the lengths of strings used to sound musical notes.
The most important philosophy of the Greek period was produced by two men at Athens, Plato (427–347 B.C.) and his student Aristotle (384–322 B.C.); Aristotle especially had a critical influence on the development of science in general and physics in particular. The Greek approach to physics was largely geometrical and reached its peak with Archimedes (287–212 B.C.), who studied a wide range of problems and anticipated the methods of the calculus. Another important scientist of the early Hellenistic period, centered in Alexandria, Egypt, was the astronomer Aristarchus (c.310–220 B.C.), who proposed a heliocentric, or sun-centered, system of the universe. However, just as the earlier atomic theory had not become generally accepted, so too the astronomical system that eventually prevailed was the geocentric system proposed by Hipparchus (190–120 B.C.) and developed in detail by Ptolemy (A.D. 85–165).
Preservation of Learning
With the passing of the Greek civilization and the Roman civilization that followed it, Greek learning passed into the hands of the Muslim world that spread its influence from the E Mediterranean eastward into Asia, where it picked up contributions from the Chinese (papermaking, gunpowder) and the Hindus (the place-value decimal number system with a zero), and westward as far as Spain, where Islamic culture flourished in Córdoba, Toledo, and other cities. Little specific advance was made in physics during this period, but the preservation and study of Greek science by the Muslim world made possible the revival of learning in the West beginning in the 12th and 13th cent.
The Scientific Revolution
The first areas of physics to receive close attention were mechanics and the study of planetary motions. Modern mechanics dates from the work of Galileo and Simon Stevin in the late 16th and early 17th cent. The great breakthrough in astronomy was made by Nicolaus Copernicus, who proposed (1543) the heliocentric model of the solar system that was later modified by Johannes Kepler (using observations by Tycho Brahe) into the description of planetary motions that is still accepted today. Galileo gave his support to this new system and applied his discoveries in mechanics to its explanation.
The full explanation of both celestial and terrestrial motions was not given until 1687, when Isaac Newton published his Principia [Mathematical Principles of Natural Philosophy]. This work, the most important document of the Scientific Revolution of the 16th and 17th cent., contained Newton's famous three laws of motion and showed how the principle of universal gravitation could be used to explain the behavior not only of falling bodies on the earth but also of planets and other celestial bodies. To arrive at his results, Newton invented one form of an entirely new branch of mathematics, the calculus (also invented independently by G. W. Leibniz), which was to become an essential tool in much of the later development in most branches of physics.
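The principle of universal gravitation can be made concrete with a short calculation. The sketch below uses approximate modern values for the constants (not given in the text) and recovers the familiar acceleration of a freely falling body at the earth's surface:

```python
# Newton's law of universal gravitation: F = G * m1 * m2 / r**2.
# Constants are approximate modern values.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of the earth, kg
R_EARTH = 6.371e6    # mean radius of the earth, m

def gravitational_force(m1, m2, r):
    """Attractive force between two point masses separated by distance r."""
    return G * m1 * m2 / r**2

# The force on a 1 kg body at the earth's surface numerically equals its
# acceleration g, about 9.8 m/s^2, linking falling bodies to planetary motion.
g = gravitational_force(1.0, M_EARTH, R_EARTH)
```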
Other branches of physics also received attention during this period. William Gilbert, court physician to Queen Elizabeth I, published (1600) an important work on magnetism, describing how the earth itself behaves like a giant magnet. Robert Boyle (1627–91) studied the behavior of gases enclosed in a chamber and formulated the gas law named for him; he also contributed to physiology and to the founding of modern chemistry.
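Boyle's law states that, for a fixed quantity of gas held at constant temperature, the product of pressure and volume is constant. A minimal sketch (the function name and sample figures are illustrative):

```python
# Boyle's law: p1 * v1 = p2 * v2 at constant temperature for a fixed
# quantity of gas.
def boyle_pressure(p1, v1, v2):
    """Pressure after an isothermal volume change, from p1*v1 = p2*v2."""
    return p1 * v1 / v2

# Halving the volume of a gas at 100 kPa doubles its pressure to 200 kPa.
p2 = boyle_pressure(100.0, 2.0, 1.0)
```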
Newton himself discovered the separation of white light into a spectrum of colors and published an important work on optics, in which he proposed the theory that light is composed of tiny particles, or corpuscles. This corpuscular theory was related to the mechanistic philosophy presented early in the 17th cent. by René Descartes, according to which the universe functioned like a mechanical system describable in terms of mathematics. A rival theory of light, explaining its behavior in terms of waves, was presented in 1690 by Christian Huygens, but the belief in the mechanistic philosophy together with the great weight of Newton's reputation was such that the wave theory gained relatively little support until the 19th cent.
Development of Mechanics and Thermodynamics
During the 18th cent. the mechanics founded by Newton was developed by several scientists and received brilliant exposition in the Analytical Mechanics (1788) of J. L. Lagrange and the Celestial Mechanics (1799–1825) of P. S. Laplace. Daniel Bernoulli made important mathematical studies (1738) of the behavior of gases, anticipating the kinetic theory of gases developed more than a century later, and has been referred to as the first mathematical physicist.
The accepted theory of heat in the 18th cent. viewed heat as a kind of fluid, called caloric; although this theory was later shown to be erroneous, a number of scientists adhering to it nevertheless made important discoveries useful in developing the modern theory, including Joseph Black (1728–99) and Henry Cavendish (1731–1810). Opposed to this caloric theory, which had been developed mainly by the chemists, was the less accepted theory dating from Newton's time that heat is due to the motions of the particles of a substance. This mechanical theory gained support in 1798 from the cannon-boring experiments of Count Rumford (Benjamin Thompson), who found a direct relationship between heat and mechanical energy.
In the 19th cent. this connection was established quantitatively by J. R. von Mayer and J. P. Joule, who measured the mechanical equivalent of heat in the 1840s. This experimental work and the theoretical work of Sadi Carnot, published in 1824 but not widely known until later, together provided a basis for the formulation of the first two laws of thermodynamics in the 1850s by William Thomson (later Lord Kelvin) and R. J. E. Clausius. The first law is a form of the law of conservation of energy, stated earlier by Mayer and Hermann Helmholtz on the basis of biological considerations; the second law describes the tendency of energy to be converted from more useful to less useful forms.
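The quantitative content of these results can be sketched briefly. Joule's measurements give a mechanical equivalent of heat of about 4.186 joules per calorie, and the first law, in one common modern sign convention, reads delta-U = Q - W (the helper function below is an illustrative assumption, not a formula from the text):

```python
# Joule's mechanical equivalent of heat: about 4.186 joules of mechanical
# work deliver 1 calorie of heat (enough to warm 1 g of water by 1 degree C).
J_PER_CAL = 4.186

def internal_energy_change(heat_in_cal, work_out_joules):
    """First law of thermodynamics, delta-U = Q - W, with the result in joules."""
    return heat_in_cal * J_PER_CAL - work_out_joules

# A system absorbing 10 cal of heat while doing 41.86 J of work has no
# net change in internal energy: the two energy forms are interconvertible.
delta_u = internal_energy_change(10.0, 41.86)
```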
The atomic theory of matter had been proposed again in the early 19th cent. by the chemist John Dalton and became one of the hypotheses of the kinetic-molecular theory of gases developed by Clausius and James Clerk Maxwell to explain the laws of thermodynamics. The kinetic theory in turn led to the statistical mechanics of Ludwig Boltzmann and J. W. Gibbs.
Advances in Electricity, Magnetism, and Thermodynamics
The study of electricity and magnetism also came into its own during the 18th and 19th cents. C. A. Coulomb had discovered the inverse-square laws of electrostatics and magnetostatics in the late 18th cent., and Alessandro Volta had invented the electric battery, so that electric currents could also be studied. In 1820, H. C. Oersted found that a current-carrying conductor gives rise to a magnetic force surrounding it, and in 1831 Michael Faraday (and independently Joseph Henry) discovered the reverse effect, the production of an electric potential or current through magnetism (see induction); these two discoveries are the basis of the electric motor and the electric generator, respectively.
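Both of these laws have compact modern statements. The sketch below gives Coulomb's inverse-square law of electrostatics and Faraday's law of induction; the constant is approximate and the function names are illustrative:

```python
# Coulomb's law: F = k * q1 * q2 / r**2, the electrical analogue of
# Newton's inverse-square law of gravitation.
K_E = 8.988e9  # Coulomb constant, N m^2 C^-2 (approximate)

def coulomb_force(q1, q2, r):
    """Force between two point charges; a positive result means repulsion."""
    return K_E * q1 * q2 / r**2

# Faraday's law of induction: the emf around a circuit equals the negative
# rate of change of the magnetic flux through it, emf = -dPhi/dt.
def induced_emf(delta_flux, delta_t):
    """Average emf (volts) for a flux change delta_flux (webers) over delta_t (s)."""
    return -delta_flux / delta_t
```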
Faraday invented the concept of the field of force to explain these phenomena, and Maxwell, from c.1856, developed these ideas mathematically in his theory of electromagnetic radiation. He showed that electric and magnetic fields are propagated outward from their source at a speed equal to that of light and that light is one of several kinds of electromagnetic radiation, differing only in frequency and wavelength from the others. Experimental confirmation of Maxwell's theory was provided by Heinrich Hertz, who generated and detected electric waves in 1886 and verified their properties, at the same time foreshadowing their application in radio, television, and other devices. The wave theory of light had been revived in 1801 by Thomas Young and received strong experimental support from the work of A. J. Fresnel and others; the theory was widely accepted by the time of Maxwell's work on the electromagnetic field, and afterward the study of light and that of electricity and magnetism were closely related.
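Maxwell's identification of light with electromagnetic waves rests on a calculation that can be reproduced in a few lines: the propagation speed predicted by his equations, 1/sqrt(mu0 * eps0), comes out numerically equal to the measured speed of light. The constants below are approximate:

```python
import math

# Maxwell's theory predicts that electromagnetic disturbances propagate at
# c = 1 / sqrt(mu0 * eps0), built entirely from electrical and magnetic
# constants measurable in the laboratory.
MU_0 = 4 * math.pi * 1e-7   # permeability of free space, H/m (classical value)
EPS_0 = 8.854e-12           # permittivity of free space, F/m (approximate)

# The result, about 3.0e8 m/s, matches the measured speed of light,
# identifying light as one kind of electromagnetic radiation.
c = 1.0 / math.sqrt(MU_0 * EPS_0)
```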
Birth of Modern Physics
By the late 19th cent. most of classical physics was complete, and optimistic physicists turned their attention to what they considered minor details in the complete elucidation of their subject. Several problems, however, provided the cracks that eventually led to the shattering of this optimism and the birth of modern physics. On the experimental side, the discoveries of X rays by Wilhelm Roentgen (1895), radioactivity by A. H. Becquerel (1896), the electron by J. J. Thomson (1897), and new radioactive elements by Marie and Pierre Curie raised questions about the supposedly indestructible atom and the nature of matter. Ernest Rutherford identified and named two types of radioactivity and in 1911 interpreted experimental evidence as showing that the atom consists of a dense, positively charged nucleus surrounded by negatively charged electrons. Classical theory, however, predicted that this structure should be unstable. Classical theory had also failed to explain successfully two other experimental results that appeared in the late 19th cent. One of these was the demonstration by A. A. Michelson and E. W. Morley that there did not seem to be a preferred frame of reference, at rest with respect to the hypothetical luminiferous ether, for describing electromagnetic phenomena.
Relativity and Quantum Mechanics
In 1905, Albert Einstein showed that the result of the Michelson-Morley experiment could be interpreted by assuming the equivalence of all inertial (unaccelerated) frames of reference and the constancy of the speed of light in all frames; Einstein's special theory of relativity eliminated the need for the ether and implied, among other things, that mass and energy are equivalent and that the speed of light is the limiting speed for all bodies having mass. Hermann Minkowski provided (1908) a mathematical formulation of the theory in which space and time were united in a four-dimensional geometry of space-time. Einstein extended his theory to accelerated frames of reference in his general theory (1916), showing the connection between acceleration and gravitation. Newton's mechanics was interpreted as a special case of Einstein's, valid as an approximation for small speeds compared to that of light.
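Two of these consequences can be illustrated numerically: the Lorentz factor, which grows without bound as a body's speed approaches that of light, and the mass-energy relation E = mc^2. A sketch with approximate constants (the function names are illustrative):

```python
import math

# Special relativity in two formulas: the Lorentz factor
# gamma = 1 / sqrt(1 - v^2/c^2), and the rest energy E = m * c**2.
C = 2.998e8  # speed of light, m/s (approximate)

def lorentz_factor(v):
    """Time-dilation / mass-increase factor; diverges as v approaches c."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def rest_energy(m):
    """Energy equivalent (joules) of a mass m (kilograms)."""
    return m * C**2

# At everyday speeds gamma is essentially 1, so Newton's mechanics is
# recovered as an excellent approximation; at 99% of c it is about 7.1.
gamma_slow = lorentz_factor(300.0)
gamma_fast = lorentz_factor(0.99 * C)
```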
Although relativity resolved the conflict over electromagnetic phenomena demonstrated by Michelson and Morley, a second theoretical problem was the explanation of the distribution of electromagnetic radiation emitted by a blackbody; experiment showed that at shorter wavelengths, toward the ultraviolet end of the spectrum, the energy approached zero, but classical theory predicted it should become infinite. This glaring discrepancy, known as the ultraviolet catastrophe, was resolved by Max Planck's quantum theory (1900). In 1905, Einstein used the quantum theory to explain the photoelectric effect, and in 1913 Niels Bohr used it again to explain the stability of Rutherford's nuclear atom. In the 1920s the theory was extensively developed by Louis de Broglie, Werner Heisenberg, Wolfgang Pauli, Erwin Schrödinger, P. A. M. Dirac, and others; the new quantum mechanics became an indispensable tool in the investigation and explanation of phenomena at the atomic level.
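The discrepancy can be exhibited numerically by comparing the classical prediction (the Rayleigh-Jeans formula, not named in the text) with Planck's: at an ultraviolet wavelength the classical result exceeds Planck's, and experiment, by many orders of magnitude. A sketch with approximate constants:

```python
import math

# The ultraviolet catastrophe in numbers: the classical Rayleigh-Jeans
# spectral radiance grows without bound at short wavelengths, while
# Planck's quantum formula (1900) turns over and approaches zero there.
H = 6.626e-34    # Planck's constant, J s
C = 2.998e8      # speed of light, m/s
K_B = 1.381e-23  # Boltzmann's constant, J/K

def rayleigh_jeans(wavelength, T):
    """Classical spectral radiance; diverges as wavelength -> 0."""
    return 2.0 * C * K_B * T / wavelength**4

def planck(wavelength, T):
    """Planck's spectral radiance; finite at all wavelengths."""
    a = 2.0 * H * C**2 / wavelength**5
    return a / (math.exp(H * C / (wavelength * K_B * T)) - 1.0)

# At 100 nm and 5000 K the classical prediction is wrong by a huge factor.
ratio = rayleigh_jeans(1e-7, 5000.0) / planck(1e-7, 5000.0)
```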
Particles, Energy, and Contemporary Physics
Dirac's theory, which combined quantum mechanics with the theory of relativity, also predicted the existence of antiparticles. During the 1930s the first antiparticles were discovered, as well as other particles. Among those contributing to this new area of physics were James Chadwick, C. D. Anderson, E. O. Lawrence, J. D. Cockcroft, E. T. S. Walton, Enrico Fermi, and Hideki Yukawa.
The discovery of nuclear fission by Otto Hahn and Fritz Strassmann (1938) and its explanation by Lise Meitner and Otto Frisch provided a means for the large-scale conversion of mass into energy, in accordance with the theory of relativity, and triggered as well the massive governmental involvement in physics that is one of the fundamental facts of contemporary science. The growth of physics since the 1930s has been so great that it is impossible in a survey article to name even its most important individual contributors.
Among the areas where fundamental discoveries have been made more recently are solid-state physics, plasma physics, and cryogenics, or low-temperature physics. Out of solid-state physics, for example, have come many of the developments in electronics (e.g., the transistor and microcircuitry) that have revolutionized much of modern technology. Another development is the maser and laser (in principle the same device), with applications ranging from communication and controlled nuclear fusion experiments to atomic clocks and other measurement standards.
See I. M. Freeman, Physics Made Simple (1990); R. P. Feynman, The Character of Physical Law (1994); K. F. Kuhn, Basic Physics (2d ed. 1996); J. D. Bernal, A History of Classical Physics (1997); R. L. Lehrman, Physics the Easy Way (3d ed. 1998); C. Suplee, Physics in the 20th Century (1999); A. Pais, The Genius of Science: A Portrait Gallery of Twentieth Century Physicists (2000); J. L. Heilbron, Physics: A Short History (2016); C. Rovelli, Seven Brief Lessons on Physics (tr. 2016).
Formerly called natural philosophy, physics is concerned with those aspects of nature which can be understood in a fundamental way in terms of elementary principles and laws. In the course of time, various specialized sciences broke away from physics to form autonomous fields of investigation. In this process physics retained its original aim of understanding the structure of the natural world and explaining natural phenomena.
The most basic parts of physics are mechanics and field theory. Mechanics is concerned with the motion of particles or bodies under the action of given forces. The physics of fields is concerned with the origin, nature, and properties of gravitational, electromagnetic, nuclear, and other force fields. Taken together, mechanics and field theory constitute the most fundamental approach to an understanding of natural phenomena which science offers. The ultimate aim is to understand all natural phenomena in these terms. See Classical field theory, Mechanics, Quantum field theory
The older, or classical, divisions of physics were based on certain general classes of natural phenomena to which the methods of physics had been found particularly applicable. The divisions are all still current, but many of them tend more and more to designate branches of applied physics or technology, and less and less inherent divisions in physics itself. The divisions or branches of modern physics are made in accordance with particular types of structures in nature with which each branch is concerned.
In every area physics is characterized not so much by its subject-matter content as by the precision and depth of understanding which it seeks. The aim of physics is the construction of a unified theoretical scheme in mathematical terms whose structure and behavior duplicates that of the whole natural world in the most comprehensive manner possible. Where other sciences are content to describe and relate phenomena in terms of restricted concepts peculiar to their own disciplines, physics always seeks to understand the same phenomena as a special manifestation of the underlying uniform structure of nature as a whole. In line with this objective, physics is characterized by accurate instrumentation, precision of measurement, and the expression of its results in mathematical terms.
For the major areas of physics and for additional listings of articles in physics See Acoustics, Atomic physics, Classical mechanics, Electricity, Electromagnetism, Elementary particle, Fluid mechanics, Heat, Low-temperature physics, Molecular physics, Nuclear physics, Optics, Solid-state physics, Statistical mechanics, Theoretical physics
Physics is the science that studies the simplest and most general regularities underlying natural phenomena, the properties and structure of matter, and the laws governing the motion of matter. Its concepts and laws therefore constitute the basis of all the natural sciences. Physics is an exact science that studies the quantitative regularities of phenomena.
The word “physics” derives from the Greek physis, “nature.” At first, during the classical period, science was not broken up into branches and encompassed the entire body of knowledge about natural phenomena. As knowledge and methods of investigation became differentiated, some sciences, including physics, separated out from the general science of nature into independent disciplines. The boundaries that delineate physics from other natural sciences are largely arbitrary and have changed over the course of time.
By its very nature, physics is an experimental science, its laws being based on facts that are established experimentally. These laws represent quantitative relations and are stated in mathematical language. A distinction is made between experimental physics, in which experiments are conducted to elucidate new facts and test known physical laws, and theoretical physics, whose objective is to formulate laws of nature, to explain various phenomena on the basis of these laws, and to predict new phenomena. In the study of any phenomenon, both experiment and theory are equally necessary and consequently are interrelated.
Because of the diversity of the objects and forms of motion of physical matter that are studied, physics is subdivided into a number of disciplines (branches), which are related to one another in varying degrees. The division of physics into individual disciplines is not unambiguous and can be made according to different criteria. Thus, for example, according to the objects studied, physics is subdivided into elementary particle physics, nuclear physics, atomic and molecular physics, the physics of gases and liquids, solid-state physics, and plasma physics. The processes or forms of motion of matter can also serve as criteria; scientists distinguish mechanical motion, thermal processes, electromagnetic phenomena, and gravitational, strong, and weak interactions. Accordingly, physics has been subdivided into the mechanics of material particles and rigid bodies, continuum mechanics (including acoustics), thermodynamics and statistical mechanics, electrodynamics (including optics), the theory of gravitation, quantum mechanics, and quantum field theory. The subdivisions of physics often overlap because of the profound intrinsic relationship between the objects of the material world and the processes in which the objects are involved. Applied physics—for example, applied optics—is also sometimes singled out according to the purposes of investigation.
The study of oscillations and waves is especially important in physics. This is due to the similarity of the regularities underlying all vibrational processes, regardless of their physical nature, as well as the similarity of the methods used to study these processes. Mechanical, acoustical, electrical, and optical oscillations and waves are considered here from a single viewpoint.
Modern physics contains a small number of fundamental physical theories that encompass all branches of physics. These theories are the quintessence of our knowledge of the character of physical processes and phenomena and give an approximate but comprehensive picture of the various forms of motion of matter in nature.
Rise of physics (to the 17th century). Man has long been interested in the physical phenomena of the surrounding world. Attempts to provide a causal explanation of these phenomena preceded the creation of physics in the modern sense of the word. During the Greco-Roman period (sixth century B.C. to second century A.D.), the idea of the atomic structure of matter was conceived (Democritus, Epicurus, Lucretius), the geocentric system of the world was developed (Ptolemy), the simplest laws of statics (for example, the law of the lever) were established, the law of the rectilinear propagation of light and the law of light reflection were discovered, the principles of hydrostatics were formulated (Archimedes’ principle), and the simplest manifestations of electricity and magnetism were observed.
Aristotle summed up the body of knowledge accumulated by the fourth century B.C. Although his physics included certain accurate premises, many progressive ideas of his predecessors, in particular the atomic hypothesis, were absent from it. While acknowledging the importance of experiment, Aristotle did not consider it the main criterion of the validity of knowledge, preferring speculative concepts. During the Middle Ages, the Aristotelian doctrine, which was sanctioned by the church, long hindered the development of science.
In the 15th and 16th centuries science emerged reborn after a struggle with the scholasticized form of the Aristotelian doctrine. In the mid-16th century, N. Copernicus advanced the theory of the heliocentric system of the world, which initiated the liberation of natural science from theology. The requirements of production and the development of trades, navigation, and artillery stimulated experimentally based scientific research. However, experimental research in the 15th and 16th centuries was rather haphazard, and it was only in the 17th century that the experimental method finally became systematically used in physics, which led to the creation of the first fundamental physical theory—classical Newtonian mechanics.
Formation as a science (early 17th to late 18th centuries). The development of physics as a science in the modern sense of the word dates to the work of Galileo (first half of the 17th century), who perceived the need for a mathematical description of motion. Galileo demonstrated that the effect of surrounding bodies on a given body determines not the velocity, as was believed in Aristotelian mechanics, but the acceleration of the body. This represented the first formulation of the law of inertia. Galileo also discovered the principle of relativity in mechanics (see GALILEAN PRINCIPLE OF RELATIVITY), proved that the acceleration of a body in free fall is independent of its density and mass, and tried to substantiate the Copernican theory. He obtained significant results in other fields of physics as well. He constructed a telescope, with which he made several astronomical discoveries, such as the mountains on the moon and the satellites of Jupiter. Galileo’s invention of the thermometer initiated the quantitative study of thermal phenomena.
The first successful studies of gases date to the first half of the 17th century. E. Torricelli, a student of Galileo, established the existence of atmospheric pressure and constructed the first barometer. R. Boyle and E. Mariotte investigated the pressure of gases and formulated the first gas law, which now bears their name. W. Snell and R. Descartes discovered the law of the refraction of light. The microscope was invented during this period. Significant advances in the study of magnetic phenomena were made in the early 17th century by W. Gilbert, who proved that the earth is a large magnet and was the first to draw a rigorous distinction between electric and magnetic phenomena.
The main achievement of 17th-century physics was the creation of classical mechanics. Developing the ideas of Galileo, C. Huygens, and others, I. Newton formulated all the fundamental laws of mechanics in his Philosophiae naturalis principia mathematica (1687). The ideal of scientific theory, which is still valid today, was embodied for the first time in the construction of classical mechanics. With the appearance of Newtonian mechanics, it was finally understood that the task of science consists in seeking out the most general natural laws that can be stated quantitatively.
Newtonian mechanics achieved its greatest successes in explaining the motion of heavenly bodies. On the basis of the laws of planetary motion established by J. Kepler from the observations of Tycho Brahe, Newton discovered the law of universal gravitation (seeNEWTON’S LAW OF GRAVITATION). This enabled scientists to calculate with remarkable accuracy the motions of the moon, planets, and comets of the solar system and to explain the oceanic tides. Newton subscribed to the concept of long-range action, according to which the interaction of bodies (particles) occurs instantaneously directly through the void; the forces of interaction must be determined experimentally. Newton was the first to formulate clearly the classical concept of absolute space as a repository of matter that is independent of its properties and motion, as well as the concept of absolute, uniform time. These concepts remained unchanged until the formulation of the theory of relativity.
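In modern notation, the law of universal gravitation states that two point masses attract each other with a force proportional to the product of their masses and inversely proportional to the square of the distance between them:

```latex
F = G \frac{m_1 m_2}{r^2}
```

where G is the gravitational constant, determined experimentally (first measured by Cavendish in 1798). The same law governs the fall of a stone and the orbit of the moon, which is what unified terrestrial and celestial mechanics.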
At the same time, Huygens and G. W. von Leibniz formulated the law of the conservation of momentum; Huygens conceived the theory of the physical pendulum and built a pendulum clock.
The 17th century also marks the development of physical acoustics. M. Mersenne measured the vibration frequency of a string and determined the speed of sound in air for the first time, while Newton theoretically derived the formula for the speed of sound.
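Newton's derivation, which in effect assumed isothermal compressions of the air, gave a speed somewhat below the measured value; in modern notation his result, together with the later adiabatic correction due to Laplace, reads:

```latex
v_{\text{Newton}} = \sqrt{\frac{p}{\rho}}, \qquad v_{\text{Laplace}} = \sqrt{\frac{\gamma p}{\rho}}
```

where p is the pressure, ρ the density, and γ the ratio of specific heats (γ ≈ 1.4 for air); the corrected formula agrees with experiment.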
In the second half of the 17th century, geometrical optics as applied to the design of telescopes and other optical instruments developed rapidly, and the foundations of physical optics were laid. F. Grimaldi discovered the diffraction of light, and Newton conducted fundamental research in the dispersion of light (see DIFFRACTION OF LIGHT and DISPERSION OF LIGHT). Optical spectroscopy dates to Newton’s studies. In 1676, O. C. Roemer first measured the speed of light. Two different theories of the physical nature of light—the corpuscular theory and the wave theory—emerged at about the same time (see OPTICS). According to Newton’s corpuscular theory, light is a flux of particles radiating from a source in all directions. Huygens developed the wave theory of light, according to which light is a flux of waves that propagate in a special hypothetical medium—the ether—which fills all of space and permeates all bodies.
Classical mechanics thus was basically constructed in the 17th century, and research was begun in optics, electric and magnetic phenomena, heat, and acoustics.
The development of classical mechanics, in particular celestial mechanics, continued in the 18th century. The existence of a new planet—Neptune—was predicted from a small perturbation in the motion of the planet Uranus (Neptune was discovered in 1846). Confidence in the validity of Newtonian mechanics became universal. A unified mechanical view of the world was developed on the basis of mechanics. According to this view, all the wealth and qualitative diversity of the universe result from differences in the motion of the particles (atoms) that compose bodies, a motion that obeys Newton’s laws. For many years this view exerted a strong influence on the development of physics. An explanation of a physical phenomenon was considered scientific and complete if it could be reduced to the action of the laws of mechanics.
The needs of developing industry served as an important stimulus to the development of mechanics. The dynamics of rigid bodies were developed in the work of L. Euler and other scientists. The development of fluid mechanics paralleled that of the mechanics of particles and solids. The work of D. Bernoulli, Euler, J. Lagrange, and others in the first half of the 18th century laid the foundation for the hydrodynamics of an ideal fluid—an incompressible fluid lacking viscosity and the ability to conduct heat. In Lagrange’s Mécanique analytique (1788), the equations of mechanics were presented in such generalized form that they subsequently could also be applied to nonmechanical processes, particularly electromagnetic ones.
Experimental data were amassed in other fields of physics, and the simplest experimental laws were formulated. C. F. Dufay discovered the existence of two types of electricity and determined that similarly charged bodies repel one another and oppositely charged bodies attract one another. B. Franklin established the law of the conservation of electric charge. H. Cavendish and C. de Coulomb independently discovered the fundamental law of electrostatics, which defines the force of interaction between electric charges at rest (see COULOMB’S LAW). The study of atmospheric electricity arose. Franklin, M. V. Lomonosov, and G. V. Rikhman proved the electrical nature of lightning. The improvement of telescope objectives continued in optics. The work of P. Bouguer and J. Lambert led to the development of photometry. Infrared rays were discovered by the British scientists W. Herschel and W. Wollaston, and ultraviolet rays were discovered by the German scientist J. W. Ritter and Wollaston.
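In modern notation, Coulomb's law gives the force between two point charges q₁ and q₂ at rest, separated by a distance r, as:

```latex
F = k \frac{q_1 q_2}{r^2}
```

where k = 1 in CGS electrostatic units and k = 1/4πε₀ in SI units; the force is repulsive for like charges and attractive for unlike charges. The formal identity with the law of gravitation is striking, but electric forces can both attract and repel.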
There was considerable progress in the investigation of thermal phenomena. After the discovery of latent heat of fusion by J. Black and the experimental proof of the conservation of heat in calorimetric experiments, scientists began making a distinction between temperature and the quantity of heat. The concept of heat capacity was formulated, and the study of heat conduction and thermal emission was begun. Along with this, erroneous ideas regarding the nature of heat emerged: heat was viewed as a special indestructible weightless fluid—the caloric—capable of flowing from hot bodies to cold ones. The theory of heat according to which heat is a type of intrinsic motion of particles suffered a temporary setback, despite its support and development by such outstanding scientists as Newton, Hooke, Boyle, Bernoulli, and Lomonosov.
Classical physics (19th century). In the early 19th century the long-standing controversy between the corpuscular and wave theories of light culminated in the apparent conclusive triumph of the wave theory. A contributing factor was the successful explanation by T. Young and A. J. Fresnel of the phenomena of the interference and diffraction of light by means of the wave theory. These phenomena are characteristic of wave motion only, and it seemed impossible to explain them by means of the corpuscular theory. At the same time, decisive proof was obtained of the transverse nature of light waves (Fresnel, D.-F. Arago, Young), which had been discovered in the 18th century (see POLARIZATION OF LIGHT). Considering light as transverse waves in an elastic medium (the ether), Fresnel discovered the quantitative law that determines the intensity of refracted and reflected light waves upon the passage of light from one medium to another and created the theory of double refraction (see DOUBLE REFRACTION).
The discovery of the electric current by L. Galvani and A. Volta was of major importance for all of physics. The construction of powerful sources of direct current—galvanic batteries—made it possible to detect and study the diverse effects of a current. The chemical action of a current was investigated by H. Davy and M. Faraday. V. V. Petrov produced an electric arc. The discovery of the effect of an electric current on a magnetic needle by H. C. Oersted in 1820 established the relationship between electricity and magnetism. Proceeding from the concept of the unity of electrical and magnetic phenomena, A. Ampere concluded that all magnetic phenomena are due to moving charged particles—the electric current. He then experimentally established the law that determines the force of interaction between electric currents (see AMPÈRE’S LAW).
In 1831, Faraday discovered the phenomenon of electromagnetic induction (see ELECTROMAGNETIC INDUCTION). Major difficulties were encountered in attempts to explain this phenomenon by means of the concept of long-range action. Even before the discovery of electromagnetic induction, Faraday advanced the hypothesis that electromagnetic interactions are accomplished through an intermediate medium—the electromagnetic field (the concept of short-range action). This served as the starting point for the formation of a new science, one concerned with the study of the properties and laws of the behavior of a special form of matter—the electromagnetic field.
In the early 19th century, J. Dalton introduced into science (1803) the concept of atoms as the smallest (indivisible) particles of matter—the carriers of the chemical individuality of the elements.
The foundations of solid-state physics were laid by the first quarter of the 19th century. Data on the macroscopic properties of solids, such as metals, industrial materials, and minerals, were accumulated throughout the 17th, 18th, and early 19th centuries, and the empirical laws governing the behavior of a solid under the influence of external forces, such as mechanical forces, heat, electric and magnetic fields, and light, were established. The investigation of elastic properties led to the discovery of Hooke’s law (see HOOKE’S LAW) in 1660, and the study of the electrical conductivity of metals led to the discovery of Ohm’s law in 1826 (see OHM’S LAW). The study of thermal properties led to the formulation of Dulong and Petit’s law of specific heat in 1819 (see DULONG AND PETIT’S LAW). The principal magnetic properties of solids were discovered. At the same time, a general theory of the elastic properties of solids was advanced by L. M. H. Navier (1819–26) and A. L. Cauchy (1830). The interpretation of the solid state as a continuous medium was characteristic of nearly all these findings, although a significant number of scientists acknowledged that solids, most of which are in the crystalline state, have an internal microscopic structure.
The discovery of the law of the conservation of energy, which brought together all natural phenomena, was of major importance to physics and all natural science. In the mid-19th century the equivalence of the amount of heat and work was proved experimentally, and it thus was established that heat is not some hypothetical weightless substrate—the caloric—but a special form of energy. In the 1840’s, J. R. von Mayer, J. Joule, and H. von Helmholtz independently discovered the law of the conservation and transformation of energy. This law, the first law of thermodynamics, became the basic law of thermal phenomena (thermodynamics). (See THERMODYNAMICS, FIRST LAW OF.)
Even before the discovery of this law, in the work Reflections on the Motive Power of Fire and on the Machines Capable of Developing This Power (1824), N. L. S. Carnot obtained results that served as the basis for another fundamental law of the theory of heat—the second law of thermodynamics (see THERMODYNAMICS, SECOND LAW OF)—which was formulated in the work of R. Clausius (1850) and Lord Kelvin (W. Thomson; 1851). A generalization of experimental data that indicate the irreversibility of thermal processes in nature, the law defines the direction of possible energy processes. The research of J. L. Gay-Lussac, on the basis of which B. Clapeyron derived the equation of state of an ideal gas (later generalized by D. I. Mendeleev), played a significant role in the formation of thermodynamics as a science.
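The equation of state referred to here, in the form Mendeleev gave it for an arbitrary mass of gas, is now usually written:

```latex
pV = \frac{m}{M} RT
```

where p, V, and T are the pressure, volume, and absolute temperature of the gas, m its mass, M its molar mass, and R the universal gas constant. The Boyle-Mariotte and Gay-Lussac laws follow from it as special cases (T constant and p constant, respectively).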
The molecular-kinetic theory of thermal processes developed simultaneously with thermodynamics. This made it possible to include thermal processes within the framework of the mechanistic picture of the world and led to the discovery of new types of laws—statistical laws in which all relations between physical quantities are probabilistic.
In the first stage of development of the kinetic theory of the simplest medium—a gas—Joule, Clausius, and others calculated the average values of various physical quantities, such as the speeds of molecules, the number of molecular collisions per second, and free path lengths. The dependence of the pressure of a gas on the number of molecules per unit volume was discovered, and the average kinetic energy of the translational motion of molecules was obtained. This elucidated the physical meaning of temperature as a measure of the average kinetic energy of molecules.
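These results are summarized by the basic equation of the kinetic theory of gases and by the identification of temperature with the mean kinetic energy of translational motion:

```latex
p = \frac{1}{3}\, n m \overline{v^2}, \qquad \frac{m \overline{v^2}}{2} = \frac{3}{2}\, kT
```

where n is the number of molecules per unit volume, m the mass of a molecule, v̄² the mean square speed, and k Boltzmann's constant. Eliminating the molecular quantities recovers the ideal gas law, tying the microscopic picture to the macroscopic one.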
The second stage of the development of the molecular-kinetic theory began with the work of J. C. Maxwell. In 1859, Maxwell, introducing the concept of probability into physics for the first time, formulated the law of the velocity distribution of molecules (see MAXWELLIAN DISTRIBUTION). With this, the capabilities of the molecular-kinetic theory expanded considerably and subsequently led to the creation of statistical mechanics. L. Boltzmann constructed the kinetic theory of gases and statistically substantiated the laws of thermodynamics (see KINETIC THEORY OF GASES). The fundamental problem, which Boltzmann largely succeeded in solving, consisted in reconciling the time-reversible character of the motion of individual molecules with the obvious irreversibility of macroscopic processes. According to Boltzmann, the maximum probability of a given state corresponds to the thermodynamic equilibrium of a system. The irreversibility of processes is related to the tendency of systems toward the most probable state. The theorem of equipartition of mean kinetic energy, which Boltzmann proved, was of great importance.
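In modern notation, the Maxwellian distribution gives the probability f(v) dv that a molecule of a gas at temperature T has a speed between v and v + dv:

```latex
f(v) = 4\pi \left( \frac{m}{2\pi kT} \right)^{3/2} v^2\, e^{-mv^2/2kT}
```

where m is the molecular mass and k Boltzmann's constant. The law is statistical: it says nothing about any individual molecule, only about the distribution of speeds over an enormous number of them, which is precisely the new kind of regularity Maxwell introduced into physics.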
Classical statistical mechanics culminated in the work of J. W. Gibbs (1902), who created a method for calculating the distribution functions for any system (not merely a gas) in a state of thermodynamic equilibrium. Statistical mechanics won general recognition in the 20th century, after the creation of the quantitative theory of Brownian motion (see BROWNIAN MOVEMENT) by A. Einstein and M. Smoluchowski (1905–06) on the basis of the molecular-kinetic theory; the theory of Brownian motion was experimentally confirmed by J. B. Perrin.
In the second half of the 19th century, Maxwell completed his extensive study of electromagnetic phenomena. In the fundamental work Treatise on Electricity and Magnetism (1873), he established the equations for the electromagnetic field (which now bear his name), which explained all the facts then known from a unified standpoint and made possible the prediction of new phenomena. Maxwell interpreted electromagnetic induction as the process of the generation of a circuital electric field by a variable magnetic field. Soon afterward, he predicted the reverse effect—the generation of a magnetic field by a variable electric field. The most important result of Maxwell’s theory was the conclusion that the rate of propagation of electromagnetic interactions is finite and equal to the speed of light. The experimental detection of electromagnetic waves by H. R. Hertz (1886–89) confirmed the validity of this conclusion. It followed from Maxwell’s theory that light has an electromagnetic nature. Optics thus became a branch of electrodynamics. At the very end of the 19th century, P. N. Lebedev detected and measured the pressure of light that had been predicted by Maxwell’s theory, and A. S. Popov made the first use of electromagnetic waves for wireless communication.
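In modern vector notation (Gaussian units), Maxwell's equations for the field in a vacuum take the form:

```latex
\nabla \cdot \mathbf{E} = 4\pi \rho, \qquad \nabla \cdot \mathbf{B} = 0, \qquad
\nabla \times \mathbf{E} = -\frac{1}{c} \frac{\partial \mathbf{B}}{\partial t}, \qquad
\nabla \times \mathbf{B} = \frac{4\pi}{c} \mathbf{j} + \frac{1}{c} \frac{\partial \mathbf{E}}{\partial t}
```

The third equation expresses electromagnetic induction (a variable magnetic field generates a circuital electric field); the last term of the fourth is Maxwell's predicted reverse effect, the generation of a magnetic field by a variable electric field. In empty space (ρ = 0, j = 0) the equations admit wave solutions propagating with the speed c, which is the origin of the electromagnetic theory of light.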
In the 19th century, G. Kirchhoff and R. W. von Bunsen laid the foundations for spectral analysis (1859). Continuum mechanics developed further. The theory of elastic vibrations and waves was worked out in acoustics by Helmholtz, Lord Rayleigh (J. W. Strutt), and others. Techniques of obtaining low temperatures were developed. All gases were obtained in the liquid state except helium, which was finally obtained in liquid form in the early 20th century (1908) by H. Kamerlingh Onnes.
By the end of the 19th century, physics appeared nearly complete to contemporaries. It had seemed that all physical phenomena could be reduced to the mechanics of molecules (or atoms) and the ether, a mechanical medium in which electromagnetic phenomena were played out. One of the major physicists of the 19th century—Lord Kelvin—pointed out only two unexplained facts: the failure of the Michelson experiment (see MICHELSON EXPERIMENT) to detect the motion of the earth relative to the ether, and the temperature dependence of the specific heat of gases, which could not be explained from the standpoint of the molecular-kinetic theory. However, precisely these facts indicated the need to reexamine the basic concepts of physics prevalent in the 19th century. The creation of the theory of relativity and quantum mechanics was required to explain these and many other phenomena that were discovered later.
Relativistic and quantum physics; physics of the atomic nucleus and elementary particles (late 19th and 20th centuries). A new era in physics was heralded by J. J. Thomson’s discovery of the electron in 1897. It was determined that atoms were not elementary particles but were actually complex systems that included electrons, a discovery in which the investigation of electric discharges in gases played an important part.
In the late 19th and early 20th centuries, H. Lorentz laid the foundations for the electron theory.
It became clear in the early 20th century that electrodynamics required a fundamental reevaluation of the concepts of space and time in classical Newtonian mechanics. In 1905, Einstein formulated the special theory of relativity—a new doctrine of space and time anticipated by the work of Lorentz and H. Poincaré.
Experiments showed that the principle of relativity formulated by Galileo, according to which mechanical phenomena occur identically in all inertial frames of reference, was also valid for electromagnetic phenomena (see INERTIAL FRAME OF REFERENCE). Maxwell’s equations therefore should remain invariant, that is, should not change their form, upon transition from one inertial frame of reference to another. It turned out, however, that this was valid only if the transformations of coordinates and time upon such a transition are different from the Galilean transformations that are valid in Newtonian mechanics. Lorentz found these transformations (see LORENTZ TRANSFORMATIONS) but was unable to interpret them correctly. This was done by Einstein in the special theory of relativity.
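For two inertial frames in relative motion with velocity v along the common x-axis, the Lorentz transformations read:

```latex
x' = \frac{x - vt}{\sqrt{1 - v^2/c^2}}, \qquad y' = y, \qquad z' = z, \qquad
t' = \frac{t - vx/c^2}{\sqrt{1 - v^2/c^2}}
```

where c is the speed of light. For v much smaller than c they reduce to the Galilean transformations x′ = x − vt, t′ = t, which is why the departures from Newtonian mechanics escaped notice until electromagnetic phenomena, propagating at the speed c itself, were examined.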
The discovery of the special theory of relativity demonstrated the limited nature of the mechanistic picture of the world. Attempts to reduce electromagnetic processes to mechanical processes in a hypothetical medium—the ether—turned out to be unfounded, and it became clear that the electromagnetic field was a special form of matter, whose behavior did not obey the laws of mechanics.
In 1916, Einstein constructed the general theory of relativity—a physical theory of space, time, and gravitation that heralded a new stage in the development of the theory of gravitation.
Even before the creation of the special theory of relativity, the foundations were laid at the turn of the 20th century for quantum theory, which revolutionized physics.
It was found in the late 19th century that the spectral distribution of the energy of thermal radiation, derived from the law of classical statistical mechanics of equipartition of energy, was inconsistent with experiments. It followed from theory that matter should emit electromagnetic waves at any temperature, lose energy, and cool to absolute zero; that is, thermal equilibrium between matter and radiation would be impossible, a conclusion contradicted by everyday experience. An answer was provided in 1900 by M. Planck, who showed that theoretical results agreed with experiment if it were assumed that atoms emitted electromagnetic radiation in individual bundles—quanta—rather than continuously as posited in classical electrodynamics. The energy of each quantum is directly proportional to the frequency, and the proportionality factor is the quantum of action h = 6.6 × 10⁻²⁷ erg · sec, subsequently named Planck’s constant.
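Planck's hypothesis is thus expressed by the relation:

```latex
E = h\nu
```

where ν is the frequency of the radiation and h is Planck's constant (h ≈ 6.6 × 10⁻²⁷ erg · sec ≈ 6.6 × 10⁻³⁴ J · sec). The smallness of h explains why the discreteness of energy exchange goes unnoticed on the macroscopic scale and quantum effects appear only in atomic phenomena.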
In 1905, Einstein extended Planck’s hypothesis by assuming that the radiated portion of electromagnetic energy was also propagated and absorbed only as a whole; that is, it behaved like a particle, which was later called the photon. On the basis of this hypothesis, Einstein explained the regularities of the photoelectric effect that did not fit within the framework of classical electrodynamics.
The corpuscular theory of light thus was resurrected at a new qualitative level. Light behaves like a flux of particles (corpuscles), but at the same time, inherent in it are wave properties, manifested in particular in the diffraction and interference of light. Consequently, the wave and corpuscular properties that are incompatible from the standpoint of classical physics are also characteristic of light to an equal extent (the dual character of light). The “quantization” of radiation led to the conclusion, drawn by N. Bohr (1913), that the energy of intraatomic motions can also vary, but only discontinuously.
By this time, E. Rutherford had discovered (1911) the atomic nucleus on the basis of experiments on the scattering of alpha particles by matter and had constructed a planetary model of the atom, in which electrons revolve around the nucleus much as the planets revolve around the sun. According to Maxwellian electrodynamics, however, such an atom is unstable: while moving in circular or elliptical orbits, electrons experience acceleration and consequently should continuously emit electromagnetic waves, lose energy, and, gradually approaching the nucleus, ultimately fall into it (as calculations showed, in a time of the order of 10⁻⁸ sec). The stability of atoms and their line spectra thus could not be explained by the laws of classical physics. Bohr explained this phenomenon by postulating that atoms contain discrete stationary states in which electrons do not radiate energy. Radiation occurs upon the transition from one stationary state to another. The discreteness of the energy of an atom was confirmed by J. Franck and G. Hertz (1913–14) during the study of collisions between atoms and electrons accelerated by an electric field. For the simplest atom—the hydrogen atom—Bohr constructed a quantitative theory of the emission spectrum that agreed with experiment.
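In modern notation, Bohr's theory gives the energies of the stationary states of the hydrogen atom as:

```latex
E_n = -\frac{2\pi^2 m e^4}{h^2 n^2}, \qquad n = 1, 2, 3, \ldots
```

where m and e are the mass and charge of the electron (Gaussian units) and h is Planck's constant. The frequency of a spectral line follows from the condition hν = Eₙ − Eₘ for a transition between two stationary states, which reproduces the observed Balmer series and the other hydrogen series.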
Solid-state physics, in its modern meaning as the physics of condensed systems of an enormous number of particles (~10²² per cm³), took shape in the late 19th and early 20th centuries. Until 1925 its development proceeded along two lines: the physics of the crystal lattice and the physics of electrons in crystals, chiefly metals. These two lines subsequently merged on the basis of quantum theory.
The concept of the crystal as an aggregation of atoms orderly arranged in space and maintained in equilibrium by forces of interaction was finally formulated in the early 20th century, after a long period of development. This model has its beginnings in Newton’s work (1686) on calculating the speed of sound in a chain of elastically coupled particles, and its development was continued by other scientists, including Daniel Bernoulli and Johann Bernoulli (1727), Cauchy (1830), and Lord Kelvin (1881).
In the late 19th century, E. S. Fedorov, in his work on the structure and symmetry of crystals, laid the foundations of theoretical crystallography: in 1890–91 he proved the possibility of the existence of 230 space groups of crystal symmetry (the Fedorov groups)—types of ordered arrangement of particles in the crystal lattice (seeSYMMETRY OF CRYSTALS). In 1912, M. von Laue and his co-workers discovered the diffraction of X rays by crystals, conclusively establishing the conception of the crystal as an ordered atomic structure. This discovery underlay X-ray diffraction analysis, developed by W. L. Bragg and W. H. Bragg (1913) and by G. V. Vul’f (also Wulff; 1913)—a method of experimentally determining the arrangement of atoms in crystals and measuring interatomic distances. During the same period (1907–14), a dynamic theory of crystal lattices that made significant use of quantum concepts was developed. In 1907, Einstein, using a model of the crystal as an aggregation of quantum harmonic oscillators of identical frequency, explained the observed decrease in the specific heat of solids with decreasing temperature—a fact in sharp contradiction to Dulong and Petit’s law. A more advanced dynamic theory of the crystal lattice, as an aggregation of coupled quantum oscillators of different frequencies, was constructed by P. Debye (1912), M. Born and T. von Kármán (1913), and E. Schrödinger (1914) in a form close to its present one. An important new era was heralded by the creation of quantum mechanics.
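The Braggs’ method rests on the condition nλ = 2d sin θ for constructive reflection from lattice planes a distance d apart. A minimal Python sketch (the NaCl plane spacing and Cu Kα wavelength are illustrative literature values, not from the article):

```python
import math

# Bragg condition n*lambda = 2*d*sin(theta): given the X-ray wavelength and
# the interplanar spacing d, find the glancing angle of the n-th reflection.

def bragg_angle_deg(wavelength_nm, d_nm, order=1):
    """Glancing angle (degrees) at which the order-th reflection appears."""
    s = order * wavelength_nm / (2.0 * d_nm)
    if s > 1.0:
        raise ValueError("no reflection: n*lambda exceeds 2d")
    return math.degrees(math.asin(s))

# Cu K-alpha radiation (0.154 nm) on NaCl planes spaced 0.282 nm apart.
print(bragg_angle_deg(0.154, 0.282))  # ~15.8 degrees
```

In practice the measurement is inverted: observed angles yield d, and hence the interatomic distances mentioned above.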
The second line of development of solid-state physics—the physics of a system of electrons in a crystal—arose after the discovery of the electron to form the electron theory of metals and other solids. In this theory, electrons in a metal are treated as a free-electron gas filling the crystal lattice, similar to an ordinary rarefied molecular gas, which obeys Boltzmann’s classical statistics. The electron theory made it possible to explain Ohm’s law and the Wiedemann-Franz law (P. Drude) and lay the foundations for the theory of the dispersion of light in crystals. However, not all observable facts fit within the framework of the classical electron theory. For example, the temperature dependence of the specific resistivity of metals was not explained, and it remained unclear why the electron gas did not make an appreciable contribution to the specific heat of metals. A resolution was found only after the formulation of quantum mechanics.
The first version of the quantum theory that was created by Bohr was contradictory: while using the Newtonian laws of mechanics for the motion of electrons, Bohr artificially imposed quantum restrictions on the possible motions of electrons, restrictions that are alien to classical physics.
The reliably established discreteness of action and its quantitative measure—Planck’s constant h, a universal constant that acts as a natural measure of quantum phenomena—required the radical adjustment of both the laws of mechanics and the laws of electrodynamics. Classical laws are valid only for the motion of objects of sufficiently large mass, when quantities with the dimensions of action take values that are great in comparison with h and the discreteness of action may be disregarded.
The 1920’s saw the creation of the most profound and all-encompassing modern physical theory—quantum, or wave, mechanics—a consistent, logically complete nonrelativistic theory of the motion of microparticles that was also able to explain many properties of macroscopic bodies and the phenomena that occur in them (seeQUANTUM MECHANICS). The Planck-Einstein-Bohr idea of quantization became the basis of quantum mechanics, as did the hypothesis advanced by L. de Broglie in 1924 that a dualistic wave-particle nature is inherent in electromagnetic radiation (photons) and all other types of matter. All microparticles, such as electrons, protons, and atoms, have wave properties in addition to corpuscular properties: each particle can be placed in correspondence with a wave, whose wavelength is equal to the ratio of Planck’s constant h to the momentum of the particle and whose frequency is equal to the ratio of the particle’s energy to h. De Broglie waves describe free particles (seeDE BROGLIE WAVES). The diffraction of electrons, which experimentally confirmed that electrons have wave properties, was observed for the first time in 1927. Diffraction later was observed in other microparticles, including molecules (seeDIFFRACTION OF PARTICLES).
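The de Broglie relation λ = h/p can be evaluated directly; a minimal Python sketch (the electron speed of 10⁶ m/s is an illustrative choice, and the constants are standard CODATA values, not from the article):

```python
# De Broglie wavelength lambda = h / p for a nonrelativistic particle.

H = 6.62607e-34           # Planck's constant, J*s
M_ELECTRON = 9.10938e-31  # electron mass, kg

def de_broglie_wavelength(mass_kg, speed_m_s):
    """Wavelength in meters, lambda = h / (m*v)."""
    return H / (mass_kg * speed_m_s)

lam = de_broglie_wavelength(M_ELECTRON, 1.0e6)
print(lam)  # ~7.3e-10 m, comparable to interatomic spacings
```

Because this wavelength is of the order of crystal lattice spacings, crystals act as diffraction gratings for electrons, which is why the 1927 diffraction experiments succeeded.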
In 1926, Schrödinger, attempting to obtain discrete values for the energy of an atom from a wave-type equation, formulated the fundamental equation of quantum mechanics, which now bears his name. W. Heisenberg and Born constructed (1925) quantum mechanics in a different mathematical form—matrix mechanics.
In 1925, G. E. Uhlenbeck and S. A. Goudsmit discovered, on the basis of experimental (spectroscopic) data, the existence of the electron’s intrinsic angular momentum—spin—and consequently of the associated intrinsic (spin) magnetic moment; the spin is equal to ½ћ. (The value of spin is usually expressed in units of ћ = h/2π, which, like h, is called Planck’s constant; in these units the spin of the electron is equal to ½.) W. Pauli wrote the equation of motion of a nonrelativistic electron in an external electromagnetic field with allowance for the interaction of the spin magnetic moment of the electron with the magnetic field. In 1925 he formulated the exclusion principle, according to which no more than one electron can be in a single quantum state (seePAULI EXCLUSION PRINCIPLE). This principle played a very important role in the formulation of the quantum theory of many-particle systems; in particular, it explained the regularities in the filling of shells and layers by electrons in multielectron atoms and thus theoretically substantiated Mendeleev’s periodic system of the elements.
In 1928, P. A. M. Dirac derived the quantum relativistic equation of motion of the electron, from which the existence of spin in the electron naturally followed (seeDIRAC EQUATION). On the basis of this equation, Dirac predicted (1931) the existence of the first antiparticle—the positron—which was discovered in 1932 by C. D. Anderson in cosmic rays (seeCOSMIC RAYS). (The antiproton and antineutron—the antiparticles of the other structural units of matter, the proton and neutron—were experimentally discovered in 1955 and 1956, respectively.)
Quantum statistics—the quantum theory of the behavior of physical systems (particularly macroscopic bodies) consisting of a vast number of microparticles—developed along with quantum mechanics. In 1924, S. Bose applied the principles of quantum statistics to photons, which are particles with integral spin, and derived Planck’s formula for the energy distribution in the spectrum of equilibrium radiation; Einstein applied the same statistics to obtain the energy distribution of an ideal molecular gas (Bose-Einstein statistics). In 1926, Dirac and E. Fermi showed that an aggregation of electrons, or of other particles with half-integral spin for which the Pauli exclusion principle is valid, obeys a different statistics—Fermi-Dirac statistics. In 1940, Pauli established the relationship between spin and statistics.
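The difference between the two statistics lies in the mean occupation number of a state of energy E: 1/(e^((E−μ)/kT) − 1) for bosons and 1/(e^((E−μ)/kT) + 1) for fermions. A minimal Python sketch (energies in units of kT, with μ = 0 chosen purely for illustration):

```python
import math

# Mean occupation numbers for Bose-Einstein (integral spin), Fermi-Dirac
# (half-integral spin), and the classical Boltzmann limit.
# Energies are measured in units of kT; mu = 0 is an illustrative choice.

def bose_einstein(e, mu=0.0):
    return 1.0 / (math.exp(e - mu) - 1.0)

def fermi_dirac(e, mu=0.0):
    return 1.0 / (math.exp(e - mu) + 1.0)

def boltzmann(e, mu=0.0):
    return math.exp(-(e - mu))

for e in (1.0, 3.0, 6.0):
    print(e, bose_einstein(e), fermi_dirac(e), boltzmann(e))
# At energies well above mu, all three converge to the classical exponential tail.
```

The Pauli principle appears in the fermion formula directly: the occupation never exceeds one, whereas the boson occupation can grow without bound as E approaches μ.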
Quantum statistics played a major role in the development of the physics of condensed media and, above all, in the formation of solid-state physics. In quantum language, the thermal vibrations of atoms in a crystal may be considered as a sort of aggregation of “particles,” or, more accurately, quasiparticles (seeQUASIPARTICLES)—phonons, which were introduced by I. E. Tamm in 1929. This approach explained, in particular, the decrease in the specific heat of solids (obeying the T³ law) with decreasing temperature T in the low-temperature region and showed that the cause of the electrical resistance of metals is the scattering of electrons mainly by phonons rather than by ions. Other quasiparticles were introduced later. The quasiparticle method proved extremely effective in the study of the properties of complex macroscopic systems in the condensed state.
In 1928, A. Sommerfeld used the Fermi-Dirac distribution function to describe transport processes in metals. This resolved a number of difficulties in the classical theory and created the foundation for the further development of the quantum theory of kinetic phenomena (electrical and thermal conduction and thermoelectric, galvanomagnetic, and other effects) in solids, especially metals and semiconductors (seeSEMICONDUCTORS).
According to the Pauli exclusion principle, the free electrons of a metal cannot all occupy the lowest energy level, so their energy is nonzero even at absolute zero. In the unexcited state, all energy levels, beginning with the zero-energy level and ending with some maximum level (the Fermi level), are occupied by electrons. This picture enabled Sommerfeld to explain the minor contribution of electrons to the specific heat of metals: upon heating, only the electrons near the Fermi level are excited.
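In the free-electron picture the Fermi level is fixed by the electron density n through E_F = (ћ²/2m)(3π²n)^(2/3). A minimal Python sketch (the electron density of copper is an illustrative literature value, not from the article):

```python
import math

# Free-electron Fermi energy E_F = (hbar^2 / 2m) * (3*pi^2*n)^(2/3),
# the highest occupied level of the Sommerfeld picture at absolute zero.

HBAR = 1.054572e-34    # reduced Planck constant, J*s
M_E = 9.10938e-31      # electron mass, kg
EV = 1.602177e-19      # joules per electron volt

def fermi_energy_ev(n_per_m3):
    """Fermi energy in eV for electron density n (m^-3)."""
    e_f = (HBAR**2 / (2.0 * M_E)) * (3.0 * math.pi**2 * n_per_m3) ** (2.0 / 3.0)
    return e_f / EV

print(fermi_energy_ev(8.47e28))  # copper: ~7 eV
```

Since kT at room temperature is only about 0.025 eV, the thin shell of electrons within ~kT of the 7-eV Fermi level is a tiny fraction of the total, which is Sommerfeld’s explanation of the small electronic specific heat.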
The theory of the band energy structure of crystals, which provided a natural explanation of the differences in the electrical properties of dielectrics and metals, was developed by F. Bloch, H. A. Bethe, and L. Brillouin (1928–34). The approach described, which was called the one-electron approximation, was developed further and found broad application, especially in semiconductor physics.
In 1928, Ia. I. Frenkel’ and Heisenberg showed that the quantum exchange interaction (seeEXCHANGE INTERACTION), which was investigated by Heisenberg in 1926 using the helium atom as an example, underlies ferromagnetism (seeFERROMAGNETISM). In 1932–33, L. Néel and L. D. Landau independently predicted antiferromagnetism (seeANTIFERROMAGNETISM).
The discovery of superconductivity by Kamerlingh Onnes (1911) and that of the superfluidity of liquid helium by P. L. Kapitsa (1938) stimulated the development of new techniques in quantum statistics. A phenomenological theory of superfluidity was constructed by Landau in 1941. The next step was the phenomenological theory of superconductivity created by Landau and V. L. Ginzburg (1950). (SeeSUPERCONDUCTIVITY and SUPERFLUIDITY.)
Powerful new methods of calculation were developed in the 1950’s in the statistical quantum theory of many-particle systems. One of the major achievements was the creation of the microscopic theory of superconductivity by J. Bardeen, L. Cooper, and J. Schrieffer (United States) and N. N. Bogoliubov (USSR). Attempts to construct a consistent quantum theory of the emission of light by atoms led to a new stage in the development of quantum theory—the creation (1929) of quantum electrodynamics by Dirac (seeQUANTUM ELECTRODYNAMICS).
Physics has undergone a revolutionary transformation in the second half of the 20th century connected with the elucidation of the structure of and the processes in the nucleus of the atom and with the creation of elementary particle physics. The discovery of the atomic nucleus by Rutherford, mentioned above, was prepared for by the discovery of radioactivity and radioactive transmutations of heavy atoms as early as the late 19th century (A. H. Becquerel and P. Curie and M. Curie). Isotopes were discovered in the early 20th century. The first attempts to investigate the structure of the atomic nucleus date directly to 1919, when Rutherford, by bombarding stable nitrogen nuclei with alpha particles, achieved the artificial transformation of the nitrogen nuclei into oxygen nuclei. The discovery of the neutron by J. Chadwick in 1932 led to the creation of the modern proton-neutron model of the nucleus (D. D. Ivanenko, Heisenberg). In 1934, J. F. Joliot-Curie and his wife, I. Joliot-Curie, discovered a way to induce artificial radioactivity.
The construction of charged-particle accelerators made it possible to study various nuclear reactions. The discovery of the fission of the atomic nucleus was the most important result of this area of physics.
The release of nuclear energy through the fission chain reaction was achieved in the period 1939–45 using ²³⁵U, and the atomic bomb was built. The USSR was the first country to use the controlled fission reaction of ²³⁵U for peaceful, industrial purposes: the first atomic power plant was built in the USSR in 1954 near the city of Obninsk. Working atomic power plants were later built in many countries.
The thermonuclear fusion reaction was accomplished in 1952 (a nuclear device was detonated), and the hydrogen bomb was built in 1953.
Elementary particle physics has developed rapidly in the 20th century concurrently with nuclear physics. The first major advances in this area were made in the study of cosmic rays. Muons, pi-mesons (pions), K-mesons, and the first hyperons were discovered. The systematic study of elementary particles and their properties and interactions began after the development of high-energy charged-particle accelerators. The existence of two types of neutrinos was proved experimentally, and many new elementary particles were discovered, including the highly unstable particles known as resonances, whose average lifetime is a mere 10⁻²²–10⁻²⁴ sec. The observed universal mutual transformations of elementary particles indicated that these particles were not elementary in the absolute sense of the word but had a complex internal structure that remained to be elucidated. The theory of elementary particles and their interactions—strong, electromagnetic, and weak interactions—constitutes the subject of quantum field theory, a theory that is still far from complete (seeQUANTUM FIELD THEORY).
Classical Newtonian mechanics. Newton’s introduction of the concept of state was of fundamental importance to physics. The concept was originally formulated for the simplest mechanical system—a system of mass points—and it was for mass points that Newton’s laws were directly valid (seeMASS POINT). In all subsequent physical theories, the concept of state has proved to be one of the basic concepts. The state of a mechanical system is completely defined by the coordinates and momenta of all bodies that form the system. If the forces of interaction of bodies that determine the accelerations of the bodies are known, then the equations of motion of Newtonian mechanics (Newton’s second law) make it possible to establish unambiguously from the values of the coordinates and momenta at the initial moment of time the values of the coordinates and momenta at any subsequent moment. Coordinates and momenta are fundamental quantities in classical mechanics. If they are known, the value of any other mechanical quantity, such as energy or angular momentum, can be calculated. Although it was later found that Newtonian mechanics has limited applications, it was and still remains the foundation without which the construction of the entire superstructure of modern physics would have been impossible.
Continuum mechanics. Gases, liquids, and solids are viewed as continuous, homogeneous media in continuum mechanics. Instead of the coordinates and momenta of particles, the state of a system is characterized unambiguously by the following functions of the coordinates (x, y, z) and time (t): the density ρ(x, y, z, t), the pressure P(x, y, z, t), and the hydrodynamic velocity v(x, y, z, t) with which the mass is transferred. The equations of continuum mechanics make it possible to establish the values of these functions at any moment of time if their values at the initial moment of time and the boundary conditions are known.
The Euler momentum equation, which relates the rate of flow of a fluid to the pressure, together with the continuity equation, which expresses the conservation of matter, makes it possible to solve any problem of the dynamics of an ideal fluid. The action of frictional forces and the influence of thermal conduction, which lead to the dissipation of mechanical energy, are taken into account in the hydrodynamics of a viscous fluid; here continuum mechanics ceases to be “pure mechanics,” since thermal processes become significant. The complete system of equations that describe mechanical processes in real gaseous, liquid, and solid bodies was formulated only after the creation of thermodynamics. The motion of electrically conductive liquids and gases is studied in magnetohydrodynamics (seeMAGNETOHYDRODYNAMICS). The vibrations of an elastic medium and the propagation of waves in it are studied in acoustics (seeACOUSTICS).
Thermodynamics. The entire content of thermodynamics is basically a consequence of two principles: the law of the conservation of energy and the law from which the irreversibility of macroscopic processes follows. These principles make it possible to introduce single-valued functions of state: the internal energy and entropy. In closed systems the internal energy remains constant, while entropy is conserved only in equilibrium (reversible) processes. Entropy increases in irreversible processes, and its increase reflects quite completely the definite direction of macroscopic processes in nature. In the simplest case, the pressure, volume, and temperature are the main quantities in thermodynamics that define the state of a system; these are called the thermodynamic degrees of freedom. The relationship between them is given by the thermal equation of state, and the dependence of energy on volume and temperature is given by the caloric equation of state. The simplest thermal equation of state is the equation of state of an ideal gas (seeCLAPEYRON EQUATION).
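The ideal-gas (Clapeyron) equation of state mentioned above, PV = nRT, can be checked numerically; a minimal Python sketch (the standard molar volume used below is a textbook value, not from the article):

```python
# Ideal-gas (Clapeyron) equation of state P*V = n*R*T, the simplest
# thermal equation of state. R is the molar gas constant.

R = 8.314462  # J/(mol*K)

def pressure_pa(n_mol, volume_m3, temp_k):
    """Pressure of an ideal gas from amount, volume, and temperature."""
    return n_mol * R * temp_k / volume_m3

# One mole at 273.15 K in 22.414 liters gives roughly one standard atmosphere.
p = pressure_pa(1.0, 0.022414, 273.15)
print(p)  # ~101,300 Pa
```

Here P, V, and T play exactly the role described in the text: three thermodynamic degrees of freedom linked by the thermal equation of state, so fixing any two determines the third.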
Classical thermodynamics studies the states of thermal equilibrium and equilibrium processes (the latter occur infinitely slowly). Time does not enter into the basic equations. The thermodynamics of nonequilibrium processes was initiated in the 1930’s. In this theory the state is defined in terms of density, pressure, temperature, entropy, and other quantities (local thermodynamic degrees of freedom), which are treated as functions of the coordinates and time. The equations of mass, energy, and momentum transfer, which describe the evolution of the state of a system with time (the equations of diffusion and thermal conduction, the Navier-Stokes equations), are written for these quantities. These equations express local laws (that is, laws valid for each infinitesimal element) of the conservation of the physical quantities indicated. (See alsoTHERMODYNAMICS and THERMODYNAMICS, NONEQUILIBRIUM.)
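As an illustration of the local transfer equations described above, here is a minimal explicit finite-difference sketch of the one-dimensional heat-conduction equation ∂T/∂t = α ∂²T/∂x² (the grid size, step ratio, and initial hot spot are illustrative choices, not from the article):

```python
# 1-D heat-conduction (diffusion) equation dT/dt = alpha * d2T/dx2,
# integrated with an explicit finite-difference (Euler) step.

def heat_step(temps, alpha_dt_dx2):
    """One explicit time step; the endpoints are held fixed (Dirichlet)."""
    new = temps[:]
    for i in range(1, len(temps) - 1):
        new[i] = temps[i] + alpha_dt_dx2 * (temps[i-1] - 2*temps[i] + temps[i+1])
    return new

t = [0.0] * 21
t[10] = 100.0                # initial hot spot in the middle
for _ in range(200):
    t = heat_step(t, 0.25)   # stable while alpha*dt/dx^2 <= 0.5
print(max(t))                # the peak spreads out and decays irreversibly
```

The irreversibility discussed in the text is visible here: the initial spike only ever flattens; no choice of later state evolves back into the spike.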
Statistical mechanics. In classical statistical mechanics, the particle coordinate and momentum distribution function, f(r1, p1, . . ., rN, pN, t), which is interpreted as the probability density of detecting the observed values of coordinates and momenta in certain small intervals at a given moment t (where N is the number of particles in the system), is specified instead of the coordinates ri and momenta pi of the particles of a system. The distribution function f satisfies the equation of motion (the Liouville equation), which has the form of the continuity equation in the space of all ri and pi (that is, in phase space). The Liouville equation unambiguously defines f at any moment of time from the given value at the initial moment if the energy of interaction between particles of the system is known. The distribution function makes it possible to calculate the average values of the densities of matter, energy, and momentum and their fluxes, as well as fluctuations, that is, deviations from the average values. The equation that describes the evolution of the distribution function for a gas was first obtained by Boltzmann in 1872 and is called the Boltzmann (kinetic) equation.
Gibbs obtained an expression for the distribution function of an arbitrary system in equilibrium with a thermostat (heat bath)—the canonical Gibbs distribution. This function makes possible the calculation of all thermodynamic potentials from the known expression for the energy as a function of the coordinates and momenta of the particles (the Hamiltonian function). It is the calculation of these potentials that is the subject of statistical thermodynamics.
The processes that occur in systems that are not in thermodynamic equilibrium are irreversible and are studied in the statistical theory of nonequilibrium processes. (Together with the thermodynamics of nonequilibrium processes, this theory forms the subject of physical kinetics; seeKINETICS, PHYSICAL.) In principle, if the distribution function is known, any macroscopic quantity characterizing a system in a nonequilibrium state can be determined, and its change in space with time can be traced.
To calculate the physical quantities that characterize a system, that is, the average density of the number of particles, energy, and momentum, it is not necessary to know the complete distribution function. Simpler distribution functions suffice—the single-particle distribution functions, which give the average number of particles with given values of coordinates and momenta, and the two-particle, or pair, distribution functions, which define the mutual influence (correlation) of two particles. A general method of deriving the equations for such functions was developed in the 1940’s by Bogoliubov, Born, and the British physicist H. S. Green, among others. The equations for a single-particle distribution function, which can be constructed for low-density gases, are called the kinetic equations, which include the Boltzmann (kinetic) equation. The kinetic equations of Landau and A. A. Vlasov (1930’s–1940’s) are varieties of the Boltzmann (kinetic) equation for an ionized gas (plasma).
In recent decades the study of plasma (seePLASMA) has gained increasing importance. In this medium, the electromagnetic interactions of charged particles play the main part, and generally only a statistical theory can answer various questions associated with the behavior of a plasma. In particular, such a theory makes it possible to investigate the stability of a high-temperature plasma in an external electromagnetic field. This problem is of particular importance because of its connection with the problem of controlled thermonuclear fusion. (See alsoSTATISTICAL MECHANICS.)
Electrodynamics. The state of an electromagnetic field in Maxwell’s theory is characterized by two basic vectors: the electric field strength E and the magnetic induction B, both functions of the coordinates and time. The electromagnetic properties of a substance are given by three quantities: the dielectric constant ε, the magnetic permeability μ, and the specific electrical conductivity σ, which must be determined experimentally. A system of linear partial differential equations—Maxwell’s equations—is written for the vectors E and B and the associated auxiliary vectors of electric induction D and magnetic field strength H (seeMAXWELL’S EQUATIONS). These equations describe the evolution of an electromagnetic field. The values of the field characteristics at the initial moment of time within some volume and the boundary conditions on the surface of this volume can be used to find E and B at any subsequent moment. These vectors define the force that acts on a charged particle moving with a certain velocity in an electromagnetic field (seeLORENTZ FORCE).
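The force law that closes the field equations is the Lorentz force F = q(E + v × B); a minimal Python sketch (the field and velocity values are illustrative, not from the article):

```python
# Lorentz force F = q*(E + v x B) on a charged particle.
# Vectors are plain 3-tuples; all values are SI units.

def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def lorentz_force(q, e_field, velocity, b_field):
    vxb = cross(velocity, b_field)
    return tuple(q * (e + w) for e, w in zip(e_field, vxb))

# An electron moving along x through a magnetic field along z
# is deflected along y: the basis of magnetic focusing and cyclotrons.
Q_E = -1.602e-19  # electron charge, C
f = lorentz_force(Q_E, (0.0, 0.0, 0.0), (1.0e6, 0.0, 0.0), (0.0, 0.0, 1.0e-2))
print(f)
```

Because v × B is always perpendicular to v, the magnetic part of the force does no work; only the electric field changes the particle’s energy.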
Lorentz, the founder of the electron theory, formulated the equations that describe elementary electromagnetic processes. These equations, which are called the Lorentz-Maxwell equations (seeLORENTZ-MAXWELL EQUATIONS), relate the motion of individual charged particles to the electromagnetic field that they generate.
By proceeding from the concepts of the discreteness of electric charges and the equations for elementary electromagnetic processes, it is possible to extend the methods of statistical mechanics to electromagnetic processes in matter. The electron theory made possible the determination of the physical meaning of the electromagnetic characteristics of matter ε, μ, and σ and the calculation of the values of these quantities as a function of frequency, temperature, pressure, and the like.
Special theory of relativity; relativistic mechanics. Two postulates underlie the special theory of relativity, the physical theory of space and time in the absence of gravitational fields: the principle of relativity and the independence of the speed of light from the motion of the source.
According to Einstein’s principle of relativity, any physical phenomenon, be it a mechanical, optical, or thermal one, occurs identically under identical conditions in all inertial frames of reference. This means that the uniform and rectilinear motion of a system does not influence the course of processes within the system. All inertial frames of reference are equivalent (there exists no special frame of reference that is at “absolute rest,” just as neither absolute space nor absolute time exists). The speed of light in a vacuum therefore is the same in all inertial frames of reference. The transformations of space coordinates and time upon transition from one inertial frame to another—the Lorentz transformations—follow from the aforementioned two postulates.
The following main consequences of the special theory of relativity are deduced from the Lorentz transformations: (1) the existence of a limiting speed that coincides with the speed of light c in a vacuum (no body can move with a speed exceeding c, and c is the maximum rate of transfer of any interaction); (2) the relativity of simultaneity (generally, events that are simultaneous with respect to one inertial frame of reference are not simultaneous with respect to another); and (3) the slowing of time (time dilation) and the contraction of the longitudinal dimensions of a body in the direction of motion (all physical processes within a body moving with a velocity v with respect to some inertial frame of reference occur more slowly by a factor of 1/√(1 − v²/c²) than the same processes in that frame of reference, and the longitudinal dimensions of the body are reduced by the same factor). It follows from the equivalence of all inertial frames of reference that the effects of time dilation and the contraction of bodies are not absolute but relative and depend on the frame of reference.
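The factor γ = 1/√(1 − v²/c²) governing both effects is easy to evaluate; a minimal Python sketch (the 0.99c speed and the 2.2-microsecond muon lifetime are illustrative textbook values, not from the article):

```python
import math

# Lorentz factor gamma = 1 / sqrt(1 - v^2/c^2), governing time dilation
# and length contraction in the special theory of relativity.

C = 299792458.0  # speed of light in a vacuum, m/s

def gamma(v):
    """Lorentz factor for speed v (m/s), v < c."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def dilated_time(proper_time_s, v):
    """Interval measured in the frame where the clock moves with speed v."""
    return proper_time_s * gamma(v)

def contracted_length(rest_length_m, v):
    """Longitudinal length measured in the frame where the body moves."""
    return rest_length_m / gamma(v)

v = 0.99 * C
print(gamma(v))                  # ~7.09
print(dilated_time(2.2e-6, v))   # a muon's 2.2-microsecond lifetime stretches ~7x
```

The stretched lifetime of fast cosmic-ray muons is one of the direct experimental confirmations of the time dilation described above.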
Newton’s laws of mechanics cease to be valid at rates of motion comparable with the speed of light. Immediately after the creation of the theory of relativity, relativistic equations of motion that generalized the equations of motion of Newtonian mechanics were constructed. The equations are suitable for describing the motion of particles with velocities close to the speed of light. Two consequences of relativistic mechanics took on exceptional importance in physics: the variation of a particle’s mass with velocity and the universal relationship between energy and mass (seeRELATIVITY, THEORY OF).
At high velocities, a physical theory must satisfy the requirements of the theory of relativity; that is, it must be relativistically invariant. The laws of the theory of relativity define the transformations upon transition from one inertial frame of reference to another not only for the space coordinates and time but also for any physical quantity. The theory of relativity thus rests on the principles of invariance, or symmetry, in physics.
General theory of relativity (theory of gravitation). Of the four types of fundamental interactions—gravitational, electromagnetic, strong, and weak—the gravitational interactions, or forces of gravity, were discovered first. For more than 200 years the foundations of the theory of gravitation formulated by Newton remained unchanged, since nearly all consequences of the theory agreed totally with experiment.
The classical theory of gravitation was revolutionized by Einstein in the second decade of the 20th century. Einstein’s theory of gravitation, in contrast to all other theories, was created solely through the logical development of the principle of relativity as applied to gravitational interactions; it was called the general theory of relativity. Einstein interpreted in a new way the equality, established by Galileo, of gravitational and inertial mass. This equality signifies that gravitation identically curves the trajectories of all bodies, and gravitation therefore may be considered as a curvature of space-time itself. Einstein’s theory revealed the profound connection between the geometry of space-time and the distribution and motion of masses. The components of the metric tensor, which characterize the space-time metric, are at the same time the gravitational field potentials; that is, they define the state of the gravitational field (seeSPACE-TIME METRIC). The gravitational field is described by Einstein’s nonlinear equations. In the approximation of a weak field, these equations imply the existence of gravitational waves, which have not yet been detected experimentally (seeGRAVITATIONAL RADIATION).
Gravitational forces are the weakest of the fundamental forces in nature. For protons they are weaker by a factor of approximately 10³⁶ than electromagnetic forces. In the modern theory of elementary particles, gravitational forces are disregarded, since it is assumed that their role is insignificant. However, their role becomes decisive in the interactions of bodies of cosmic dimensions. Gravitational forces also determine the structure and evolution of the universe.
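The ~10³⁶ figure can be checked directly: since both forces fall off as 1/r², the ratio of the Coulomb to the gravitational attraction between two protons is independent of distance. A minimal Python sketch using standard constants (not quoted in the article):

```python
# Ratio of electrostatic to gravitational force between two protons.
# Both forces scale as 1/r^2, so the ratio needs no distance.

G = 6.674e-11      # gravitational constant, N*m^2/kg^2
K = 8.988e9        # Coulomb constant, N*m^2/C^2
M_P = 1.6726e-27   # proton mass, kg
Q_P = 1.6022e-19   # proton charge, C

ratio = (K * Q_P**2) / (G * M_P**2)
print(ratio)  # ~1.2e36
```

This is why gravitation is ignored in elementary particle theory yet dominates for electrically neutral bodies of cosmic mass.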
Einstein’s theory of gravitation led to new concepts of the evolution of the universe. In the mid-1920’s A. A. Fridman found a nonstationary solution of the gravitational field equations that corresponded to an expanding universe. This conclusion was confirmed by the observations of E. Hubble, who established the law of the red shifts (seeRED SHIFT) for galaxies, according to which the distances between galaxies increase with time. The general theory of relativity also predicts the possibility of the unlimited compression of stars of sufficiently large mass (greater than two or three solar masses), resulting in the formation of black holes (seeBLACK HOLE). There are definite indications from observations of binary stars, which are discrete sources of X rays, that such objects do indeed exist.
The general theory of relativity and quantum mechanics are the two most important theories of the 20th century. All previous theories, including the special theory of relativity, are usually placed under classical physics (nonquantum physics is sometimes called classical physics).
Quantum mechanics. The state of a microscopic object in quantum mechanics is described by the wave function ψ (seeWAVE FUNCTION). The wave function has a statistical meaning (Born, 1926): it is the probability amplitude, and the square of its modulus, |ψ|², is the probability density of finding a particle in a given state. In coordinate representation, ψ = ψ(x, y, z, t), and the quantity |ψ|² Δx Δy Δz defines the probability that the coordinates of the particle at a time t will lie within the small volume Δx Δy Δz about the point with coordinates x, y, z. The evolution of the state of a quantum system is determined unambiguously by the Schrödinger equation.
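The statistical interpretation of |ψ|² can be checked numerically for the standard particle-in-a-box state ψₙ(x) = √(2/L) sin(nπx/L); a minimal Python sketch (L = 1 and n = 1 are illustrative choices, not from the article):

```python
import math

# |psi|^2 as a probability density: numerical check for the
# particle-in-a-box stationary state psi_n(x) = sqrt(2/L)*sin(n*pi*x/L).

def psi(x, n=1, L=1.0):
    return math.sqrt(2.0 / L) * math.sin(n * math.pi * x / L)

def probability(a, b, n=1, L=1.0, steps=10000):
    """Integral of |psi|^2 over [a, b] by the midpoint rule."""
    dx = (b - a) / steps
    return sum(psi(a + (i + 0.5) * dx, n, L) ** 2 for i in range(steps)) * dx

print(probability(0.0, 1.0))  # total probability over the box: ~1.0
print(probability(0.0, 0.5))  # chance of finding the particle in the left half: ~0.5
```

The first integral is the normalization condition; the second shows how |ψ|² assigns probabilities to regions of space rather than a definite trajectory.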
The wave function completely describes any given state. If ψ is known, it is possible to calculate the probability of a specific value of any physical quantity that pertains to a particle (or system of particles) and the average values of all the physical quantities. Statistical distributions with respect to coordinates and momenta are not independent, from which it follows that the coordinates and momentum of a particle cannot assume exact values simultaneously (the Heisenberg uncertainty principle); their spreads are related by the uncertainty relation. The uncertainty relation is also valid for energy and time.
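For a Gaussian packet the two spreads can be computed directly and their product compared with the lower bound ħ/2. The following sketch (illustrative; units with ħ = 1, and a finite-difference formula for ⟨p²⟩ that holds for a real wave function) does this numerically:

```python
import math

hbar = 1.0                       # work in units where hbar = 1
sigma = 0.7                      # packet width (illustrative)
dx = 0.001
xs = [i * dx for i in range(-8000, 8001)]

def psi(x):
    # real, normalized Gaussian wave packet centered at x = 0
    return (2 * math.pi * sigma**2) ** -0.25 * math.exp(-x**2 / (4 * sigma**2))

# spread in position: <x^2> (the packet is centered, so <x> = 0)
var_x = sum(x**2 * psi(x)**2 * dx for x in xs)

# spread in momentum for a real wave function: <p^2> = hbar^2 * int (dpsi/dx)^2 dx,
# evaluated here with a central finite difference
var_p = sum(hbar**2 * ((psi(x + dx) - psi(x - dx)) / (2 * dx))**2 * dx for x in xs)

product = math.sqrt(var_x) * math.sqrt(var_p)   # equals hbar/2 for a Gaussian
```

The Gaussian packet saturates the uncertainty relation: the product of the spreads equals ħ/2, the minimum the relation allows.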
In quantum mechanics, physical quantities such as the angular momentum, its projection, and the energy of motion in a bounded region of space can assume only a series of discrete values. The possible values of physical quantities are the eigenvalues of the operators that quantum mechanics places in correspondence with each physical quantity. A physical quantity assumes a certain value with a probability of unity only if the system is in the state represented by the eigenfunction of the corresponding operator.
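A standard textbook case of motion in a bounded region is a particle in a one-dimensional box of width L, whose energy spectrum E_n = n²π²ħ²/2mL² is discrete. A brief illustration in arbitrary units:

```python
import math

# Energy levels of a particle in a one-dimensional box of width L
# (illustrative units with hbar = m = L = 1):
# E_n = n^2 * pi^2 * hbar^2 / (2 m L^2), n = 1, 2, 3, ...
hbar = 1.0
m = 1.0
L = 1.0

def energy(n):
    return n**2 * math.pi**2 * hbar**2 / (2 * m * L**2)

levels = [energy(n) for n in range(1, 5)]
# the spectrum is discrete, and the spacing between adjacent
# levels grows with n: E_(n+1) - E_n = (2n + 1) * E_1
gaps = [levels[i + 1] - levels[i] for i in range(3)]
```

Between the allowed values no intermediate energies occur, in sharp contrast to the continuous energy of classical bounded motion.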
The Schrödinger-Heisenberg quantum mechanics does not satisfy the requirements of the theory of relativity; that is, it is nonrelativistic. It is applicable for describing the motion of elementary particles, and of systems composed of them, at speeds much less than the speed of light.
Quantum mechanics has been used to construct the atomic theory and to explain the chemical bond, in particular, the nature of the covalent chemical bond. Here, the existence of the specific exchange interaction—a purely quantum effect that has no analog in classical physics—was discovered. The exchange energy plays a major role in the formation of the covalent bond in both molecules and crystals and also in the phenomena of ferromagnetism and antiferromagnetism. It is also of great importance in intranuclear interactions.
Some nuclear processes, such as alpha decay, can be explained only by the quantum effect of the passage of particles through the potential barrier (see POTENTIAL BARRIER and TUNNEL EFFECT).
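The barrier penetration can be estimated with the well-known exponential (WKB-type) formula T ~ exp(−2κa), where κ = √(2m(V₀ − E))/ħ. The numbers below (an electron and a barrier of atomic dimensions, CGS units) are purely illustrative:

```python
import math

# WKB-style sketch (not the article's derivation): probability that a
# particle of energy E tunnels through a rectangular barrier of height
# V0 > E and width a.
hbar = 1.054e-27     # erg*s (CGS)
m = 9.109e-28        # electron mass, g
eV = 1.602e-12       # erg per electron volt
E = 1.0 * eV         # particle energy (illustrative)
V0 = 5.0 * eV        # barrier height (illustrative)
a = 1.0e-8           # barrier width, cm (about 1 angstrom)

kappa = math.sqrt(2 * m * (V0 - E)) / hbar   # decay constant inside the barrier
T = math.exp(-2 * kappa * a)                 # transmission probability
```

Classically the particle could never cross such a barrier (E < V₀); quantum mechanically the transmission probability is small but nonzero, and it falls off exponentially with barrier width and height.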
The quantum theory of scattering has led to results essentially different from the results of the classical theory of scattering. In particular, it has turned out that in collisions of slow neutrons with nuclei, the interaction cross section is hundreds of times greater than the transverse dimensions of the colliding particles, a fact of exceptional importance in nuclear power engineering (see SCATTERING OF MICROPARTICLES).
The band theory of solids has been constructed on the basis of quantum mechanics (see BAND THEORY OF SOLIDS).
In the 1950’s a new branch of radio physics grew out of the quantum theory of stimulated emission created by Einstein as early as 1917: electromagnetic waves were generated and amplified through the use of quantum systems. N. G. Basov and A. M. Prokhorov, and C. Townes, working independently, created microwave quantum generators (masers), which make use of the stimulated emission of excited molecules. The laser—a quantum generator of electromagnetic waves in the optical region—was developed in the 1960’s. (See QUANTUM ELECTRONICS and LASER.)
Quantum statistics. Just as the theory of the behavior of a large aggregation of individual particles was constructed on the basis of the classical laws of motion of such particles, quantum statistics was built on the basis of the quantum laws of particle motion. Quantum statistics describes the behavior of macroscopic objects in the case where classical mechanics is inapplicable for describing the motion of the component particles. In this case, the quantum properties of microscopic objects are distinctly manifested in the properties of macroscopic bodies.
The mathematical apparatus of quantum statistics differs considerably from that of classical statistics, since, as stated above, certain physical quantities in quantum mechanics can assume discrete values. However, the content of the statistical theory of equilibrium states itself has not undergone profound changes. In quantum statistics, as in the quantum theory of many-particle systems in general, the principle of the indistinguishability of identical particles plays an important role (see INDISTINGUISHABILITY OF IDENTICAL PARTICLES). In classical statistics it is assumed that the interchange of two identical particles changes the state. In quantum statistics the state of a system is invariant to such an interchange. If the particles (or quasiparticles) have integral spin (such particles are called bosons), then any number of particles can exist in the same quantum state; systems of such particles are described by the Bose-Einstein statistics (see BOSE-EINSTEIN STATISTICS). The Pauli principle is valid for all particles (quasiparticles) with half-integral spin (such particles are called fermions), and systems of such particles are described by the Fermi-Dirac statistics (see FERMI-DIRAC STATISTICS).
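The difference between the two statistics shows up already in the mean occupation number of a single energy level, where the two distributions differ only in the sign in the denominator. A sketch in units with kT = 1 and chemical potential μ = 0 (illustrative values):

```python
import math

# Mean occupation numbers of a single-particle level of energy eps:
# Bose-Einstein has -1 in the denominator, Fermi-Dirac has +1,
# so fermion occupation can never exceed 1 (the Pauli principle).
def bose_einstein(eps, mu=0.0, kT=1.0):
    return 1.0 / (math.exp((eps - mu) / kT) - 1.0)

def fermi_dirac(eps, mu=0.0, kT=1.0):
    return 1.0 / (math.exp((eps - mu) / kT) + 1.0)

n_be = bose_einstein(0.5)    # can exceed 1: bosons pile into one state
n_fd = fermi_dirac(0.5)      # always between 0 and 1
```

At low energies the boson occupation grows without bound (the tendency underlying Bose condensation and stimulated emission), while the fermion occupation saturates at one particle per state.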
Quantum statistics has made it possible to substantiate the Nernst heat theorem (see THERMODYNAMICS, THIRD LAW OF), according to which entropy tends toward zero as the temperature approaches absolute zero.
The quantum statistical theory of equilibrium processes has been constructed in a form as complete as the classical theory, and the foundations of the quantum statistical theory of nonequilibrium processes have also been laid. The equation that describes nonequilibrium processes in a quantum system, called the basic kinetic equation, in principle makes it possible to trace the change in time of the probability distribution over the system’s quantum states.
Quantum field theory. The next step in the development of quantum theory was the extension of quantum principles to systems with an infinite number of degrees of freedom (physical fields) and the description of the processes of particle production and transformation. This led to quantum field theory, which reflects quite completely a fundamental property of nature—the wave-particle duality.
In quantum field theory, particles are described by means of quantized fields, which are aggregates of production and annihilation operators of particles in different quantum states. The interaction of quantized fields leads to various processes of particle emission, absorption, and transformation. In quantum field theory any process is considered as the annihilation of some particles in certain states and the creation of others in new states.
Quantum field theory was initially constructed to explain the interaction of electrons, positrons, and photons (quantum electrodynamics). The interaction between charged particles, according to quantum electrodynamics, is accomplished through the exchange of photons. In this case, the electric charge e of a particle is a constant that characterizes the coupling of the field of the charged particles to the electromagnetic field (photon field).
The ideas that formed the foundations for quantum electrodynamics were used by E. Fermi in 1934 to describe the processes of beta decay (see BETA DECAY) of radioactive atomic nuclei by way of a new type of interaction, which, as was subsequently determined, is a particular case of the weak interactions (see WEAK INTERACTION). In processes of electron beta decay, one neutron of the nucleus is converted into a proton with the simultaneous emission of an electron and an electron antineutrino. According to quantum field theory, this process may be represented as the result of the contact interaction (interaction at a single point) of quantized fields corresponding to four particles with half-integral spin—the proton, neutron, electron, and antineutrino—that is, a four-fermion interaction.
H. Yukawa’s hypothesis (1935) that an interaction exists between the field of nucleons (protons and neutrons) and the field of mesons (which at that time had not been detected as yet experimentally) was a further fruitful application of the ideas of quantum field theory. According to the hypothesis, the nuclear force between nucleons is due to the exchange of mesons between the nucleons, and the short range of the nuclear force is explained by the comparatively large rest mass of the mesons. Mesons with the predicted properties (pi-mesons, or pions) were discovered in 1947, and their interaction with nucleons turned out to be a particular manifestation of the strong interactions (see STRONG INTERACTION).
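The connection between the range of a force and the mass of its carrier can be sketched as an order-of-magnitude estimate, r ~ ħ/mc. With the rough nuclear-force range assumed below (about 1.4 × 10⁻¹³ cm, an illustrative value), the implied rest energy comes out near the pion mass:

```python
# Order-of-magnitude version of Yukawa's argument (illustrative, CGS units):
# a force carried by particles of mass m has range r ~ hbar / (m c), so the
# short range of the nuclear force implies a carrier rest energy of roughly
# m c^2 ~ hbar c / r.
hbar = 1.054e-27          # erg*s
c = 2.998e10              # cm/s
r = 1.4e-13               # assumed range of the nuclear force, cm
MeV = 1.602e-6            # erg per MeV

rest_energy_MeV = hbar * c / r / MeV   # roughly the pion rest energy
```

The estimate lands near 140 MeV, close to the measured pion mass; conversely, the photon's zero rest mass corresponds to the unlimited range of the electromagnetic force.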
Quantum field theory thus is the basis for describing the fundamental interactions in nature—electromagnetic, strong, and weak interactions. In addition, its methods have found extensive applications in solid-state theory, plasma theory, and nuclear theory, since many processes studied in these theories are related to the emission and absorption of various types of elementary excitations—quasiparticles, such as phonons and spin waves.
Because a field has an infinite number of degrees of freedom, the interaction between its particles—the quanta of the field—leads to mathematical difficulties that have not yet been completely surmounted. In the theory of electromagnetic interactions, however, any problem can be solved approximately, since the interaction may be regarded as a small perturbation of the free state of the particles (because of the smallness of the dimensionless constant α = e²/ℏc ≈ 1/137, which characterizes the intensity of electromagnetic interactions). Although the theory of all effects in quantum electrodynamics is in complete agreement with experiment, it is still somewhat unsatisfactory, since infinite expressions (divergences) are obtained for some physical quantities (mass, electric charge) in perturbation-theoretic calculations. These divergences are eliminated by using the renormalization technique, which consists in replacing the infinitely large values of a particle’s mass and charge by the observed values. S. Tomonaga, R. Feynman, and J. Schwinger made major contributions to the development of quantum electrodynamics in the late 1940’s.
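The constant α can be evaluated directly from CGS values of the fundamental constants (the numerical values below are standard, rounded):

```python
# The dimensionless coupling alpha = e^2 / (hbar * c) in CGS units, whose
# smallness (~1/137) is what makes perturbation theory work in
# quantum electrodynamics.
e = 4.803e-10        # electron charge, esu
hbar = 1.054e-27     # erg*s
c = 2.998e10         # cm/s

alpha = e**2 / (hbar * c)
inverse = 1.0 / alpha    # close to 137
```

Each additional photon exchanged in a process contributes a further factor of order α, so successive terms of the perturbation series shrink rapidly; no comparably small parameter exists for the strong interactions.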
Attempts were made later to apply the methods developed in quantum electrodynamics to calculations of the weak and strong (nuclear) interactions, but a number of difficulties were encountered.
The weak interactions, inherent in all elementary particles except the photon, are manifested in the decay of most elementary particles and a number of other transformations of particles. The weak-interaction, or Fermi, constant, which defines the rate of the processes generated by weak interactions, increases with increasing particle energy.
The universal theory of weak interactions, which closely resembles Fermi’s theory of beta decay, was proposed in 1956, after the experimental establishment of the nonconservation of space parity in the weak interactions. In contrast to quantum electrodynamics, however, this theory did not make it possible to introduce corrections in the higher orders of perturbation theory; that is, it proved to be nonrenormalizable. In the late 1960’s, attempts were made to construct a renormalizable theory of the weak interactions, and success was finally achieved on the basis of gauge theories. A unified model of the weak and electromagnetic interactions was created, in which carriers of the weak interactions—intermediate vector bosons—must exist in addition to the photon—the carrier of electromagnetic interactions between charged particles. It is believed that the intensity of the interactions between intermediate bosons and other particles is the same as that between photons and other particles. Since the radius of the weak interactions is very small (less than 10⁻¹⁵ cm), it follows from the laws of quantum theory that the mass of intermediate bosons must be very large—several tens of proton masses. Such particles have yet to be detected in experiments. Both charged vector bosons (W⁻ and W⁺) and the neutral vector boson (Z⁰) should exist. Processes that apparently can be explained by the existence of neutral intermediate bosons were observed in experiments in 1973. However, the validity of the new unified theory of the weak and electromagnetic interactions cannot be considered proved.
The difficulties in creating a theory of the strong interactions are related to the fact that, because of the large coupling constant, the methods of perturbation theory turn out to be inapplicable. Consequently, and also because of the enormous amount of experimental material that still requires theoretical generalization, scientists are developing methods based on the general principles of quantum field theory, such as relativistic invariance and locality of interaction (which indicates that the causality condition is satisfied), in the theory of the strong interactions. These methods include the use of dispersion relations and the axiomatic method. The latter is the most fundamental, but it has not yet provided an adequate number of concrete results that would permit experimental verification. The greatest practical advances in the theory of the strong interactions have been obtained by applying symmetry principles.
Attempts are being made to construct a unified theory of the weak, electromagnetic, and strong interactions, patterned after the gauge theories. (See also QUANTUM FIELD THEORY.)
Symmetry principles and laws of conservation. Physical theories make it possible to determine the future behavior of an object from its initial state. Symmetry (or invariance) principles are general in character and are obeyed by all physical theories. The symmetry of the laws of physics with respect to some transformation means that these laws are invariant under a given transformation, and therefore symmetry principles can be established on the basis of known physical laws. On the other hand, if a theory of some physical phenomenon has not yet been created, the symmetries discovered in experiments have a heuristic role in the construction of the theory. From this follows the special importance of the experimentally established symmetries of the strongly interacting elementary particles—the hadrons—the theory of which has not yet been constructed.
A distinction is made between general symmetries, which are valid for all physical laws and all kinds of interactions, and approximate symmetries, which are valid only for a certain range of interactions or even for one type of interaction. Thus there exists a hierarchy of symmetry principles. Symmetries are divided into space-time, or geometric, symmetries, and internal symmetries, which describe the specific properties of elementary particles. The laws of conservation are related to symmetries. For continuous transformations this relationship was established in 1918 by E. Noether on the basis of the most general assumptions regarding the mathematical apparatus of the theory (see NOETHER’S THEOREM and CONSERVATION LAW).
The symmetries of physical laws relative to the following space-time continuous transformations are valid for all types of interactions: translation and rotation of a physical system as a whole in space and translation of time (change in the origin of the time coordinate). The invariance of all physical laws to these transformations reflects the homogeneity and isotropy of space and the homogeneity of time, respectively. The laws of the conservation of momentum, angular momentum, and energy, respectively, result from these symmetries. The general symmetries also include invariance to the Lorentz transformations and the gauge transformations (of the first kind)—the multiplication of the wave function by a phase factor, which does not change the square of the wave function’s modulus (the last symmetry implies the laws of the conservation of electric, baryonic, and leptonic charges).
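The conservation laws named here can be checked in any concrete mechanical model. A minimal example (not from the article) is a head-on elastic collision of two point masses, for which total momentum and total kinetic energy are the same before and after:

```python
# Conservation check in a simple model: one-dimensional elastic collision
# of two point masses (illustrative values in arbitrary units).
def elastic_collision(m1, v1, m2, v2):
    # standard 1-D elastic-collision formulas for the outgoing velocities
    u1 = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    u2 = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return u1, u2

m1, v1, m2, v2 = 2.0, 3.0, 1.0, -1.0
u1, u2 = elastic_collision(m1, v1, m2, v2)

p_before = m1 * v1 + m2 * v2                       # total momentum before
p_after = m1 * u1 + m2 * u2                        # ... and after
e_before = 0.5 * m1 * v1**2 + 0.5 * m2 * v2**2     # total kinetic energy before
e_after = 0.5 * m1 * u1**2 + 0.5 * m2 * u2**2      # ... and after
```

By Noether's theorem the momentum balance here reflects the translational invariance of the interaction law, and the energy balance its invariance under time translation.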
Symmetries also exist that correspond to discrete transformations: time reversal, space inversion (the mirror-image symmetry of nature), and charge conjugation (see TIME REVERSAL; SPACE INVERSION; and CHARGE CONJUGATION). On the basis of the approximate SU(3) symmetry, M. Gell-Mann created (1962) a systematization of the hadrons that made it possible to predict the existence of several elementary particles, which were later discovered experimentally.
The systematization of the hadrons can be explained if it is assumed that all hadrons are “constructed” from a small number of fundamental particles (three in the most widely held version) called quarks (see QUARKS) and their corresponding antiparticles—antiquarks. There are various quark models of the hadrons, but free quarks have not yet been detected experimentally. In 1975–76 two new strongly interacting particles—ψ₁ and ψ₂—were discovered, with masses more than three times greater than the mass of the proton and lifetimes of 10⁻²⁰–10⁻²¹ sec. An explanation of the peculiarities of the creation and decay of these particles apparently requires the introduction of an additional, fourth, quark to which the quantum number charm has been assigned. Furthermore, according to present concepts, every quark exists in three varieties that differ in a special characteristic—color.
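The charge assignments of the three-quark model can be verified by simple addition: up-type quarks carry charge +2/3 and down-type −1/3 (in units of the proton charge), so the combination uud gives +1 and udd gives 0. A sketch:

```python
from fractions import Fraction

# Electric charges in the three-quark model, in units of the proton charge:
# the u quark carries +2/3; the d and s quarks carry -1/3.
CHARGE = {"u": Fraction(2, 3), "d": Fraction(-1, 3), "s": Fraction(-1, 3)}

def baryon_charge(quarks):
    # a baryon's charge is the sum over its three constituent quarks
    return sum(CHARGE[q] for q in quarks)

proton = baryon_charge("uud")    # uud combination
neutron = baryon_charge("udd")   # udd combination
```

Exact fractions make the point cleanly: although the constituents carry fractional charge, every observed hadron built from them carries an integral charge.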
The advances in the classification of the hadrons on the basis of symmetry principles have been considerable, but the reasons for the occurrence of these symmetries still remain obscure, although it is conjectured that they stem from the existence and properties of quarks.
As late as the early 20th century, such epoch-making discoveries as Rutherford’s discovery of the atomic nucleus could be made with comparatively simple equipment. Subsequent experimentation, however, rapidly grew more complex, and experimental facilities acquired an industrial character. The role of measuring and computing equipment grew immeasurably. Modern experimental research in nuclear and elementary particle physics, radio astronomy, quantum electronics, and solid-state physics requires an unprecedented scale and unprecedented financial expenditures, often accessible only to large countries or even to groups of countries with advanced economies.
A major role in the development of nuclear physics and elementary particle physics can be attributed to the development of methods of observing and recording individual events of transmutations of elementary particles (caused by collisions of particles with one another and with atomic nuclei) and the creation of charged-particle accelerators, which engendered the study of high-energy physics. The independent discovery of the phase stability principle by V. I. Veksler (1944) and E. M. McMillan (1945) increased the limit of attainable particle energies by a factor of thousands. Colliding-beam accelerators have significantly increased the effective particle collision energy. Different types of highly efficient particle detectors have been developed, including gas, scintillation, and Cherenkov (Čerenkov) counters. Photomultipliers make it possible to detect individual photons. Complete and exact information on events in the microworld is obtained by means of bubble and spark chambers and thick-layer photographic emulsions, in which the trails left by charged particles that have passed through can be observed directly. Some detectors make it possible to record such extremely rare events as collisions between neutrinos and atomic nuclei.
The use of electronic computers to process information obtained by recording devices has revolutionized the experimental investigation of the interactions of elementary particles. Tens of thousands of photographs of particle tracks must be analyzed to pinpoint low-probability processes. Manually this would require so much time that it would be virtually impossible to obtain the information needed. Images of the tracks therefore are converted into a series of electrical impulses by special devices, and further analysis is carried out by computer. This greatly reduces the time between the actual experiment and the receipt of processed information. In spark chambers particle tracks are recorded and analyzed automatically by a computer within the experimental setup itself.
The importance of charged-particle accelerators is determined by the following facts. The greater the energy (momentum) of a particle, the smaller (according to the uncertainty principle) the dimensions of the objects or details thereof that can be distinguished in collisions of the particle with an object. As of 1977 these minimum dimensions were 10⁻¹⁵ cm. By studying the scattering of high-energy electrons by nucleons, it is possible to detect certain aspects of the internal structure of nucleons, namely, the distribution of the electric charge and the magnetic moment within these particles (so-called form factors). The scattering of ultrahigh-energy electrons by nucleons indicates the existence within nucleons of several ultrasmall individual formations called partons, which may be the hypothetical quarks.
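The connection between energy and resolvable distance is an order-of-magnitude estimate, d ~ ħc/E for an ultrarelativistic particle. The sketch below (CGS units, illustrative) reproduces the quoted scale of 10⁻¹⁵ cm at an energy of a few tens of GeV:

```python
# Rough uncertainty-principle estimate (illustrative, CGS units): a particle
# of energy E probes structure down to distances d ~ hbar * c / E, so
# resolving smaller details requires higher energies.
hbar = 1.054e-27     # erg*s
c = 2.998e10         # cm/s
GeV = 1.602e-3       # erg per GeV

def resolvable_distance_cm(energy_GeV):
    return hbar * c / (energy_GeV * GeV)

d_20GeV = resolvable_distance_cm(20.0)   # of the order of 1e-15 cm
```

Doubling the beam energy halves the accessible distance scale, which is why ever larger accelerators are needed to probe ever finer structure.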
Another reason for the interest in high-energy particles is the creation of new particles of ever greater mass in collisions of such particles. A total of 34 stable and quasi-stable particles (quasi-stable particles are particles that do not decay through the strong interactions) and their antiparticles and more than 200 resonances are known. The overwhelming majority were discovered in accelerators. The investigation of the scattering of ultrahigh-energy particles should help elucidate the nature of the strong and weak interactions.
The most disparate types of nuclear reactions have been studied. A collision of relativistic nuclei was accomplished for the first time in the accelerator of the Joint Institute for Nuclear Research in the city of Dubna. The synthesis of the transuranium elements is proceeding successfully. The nuclei of antideuterium, antitritium, and antihelium have been produced. A new regularity of the strong interactions—the increase in the total interaction cross section of very high-energy hadrons with increasing collision energy (the Serpukhov effect)—has been discovered in the accelerator at Serpukhov.
The development of radio physics acquired a new direction after the creation of radar stations during World War II (1939–45). Radars have found extensive applications in aviation, maritime transport, and astronautics. Radar observations of celestial bodies, such as the moon, Venus and the other planets, and the sun, have been conducted (see RADAR ASTRONOMY). Gigantic radio telescopes that trap the radiation of cosmic bodies with a spectral radiant flux density of 10⁻²⁶ erg/(cm²·sec·Hz) have been built. Information on cosmic objects has grown immeasurably. Radio stars and radio galaxies with powerful emissions of radio waves have been discovered. Quasars, the quasi-stellar objects most distant from us, were discovered in 1963 (see QUASARS). The luminosity of quasars is hundreds of times greater than that of the brightest galaxies. The resolving power of modern radio telescopes, equipped with computer-controlled movable antennas, reaches one second of arc (for radiation with a wavelength of a few centimeters). When the antennas are spread out over large distances (of the order of 10,000 km), even higher resolution is obtained (hundredths of a second of arc).
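The quoted resolving powers follow from the diffraction estimate θ ~ λ/D, where D is the effective aperture or baseline. The apertures below (a few kilometers for an antenna array, 10,000 km for an intercontinental baseline) are illustrative assumptions:

```python
import math

# Diffraction-limited angular resolution theta ~ lambda / D, converted to
# seconds of arc (illustrative numbers, not from the article).
arcsec = math.pi / (180 * 3600)     # one second of arc, in radians

wavelength = 3.0                    # observing wavelength, cm
aperture = 6.0e5                    # assumed array aperture of 6 km, in cm
baseline = 1.0e9                    # 10,000 km baseline, in cm

theta_array = (wavelength / aperture) / arcsec    # about one second of arc
theta_vlbi = (wavelength / baseline) / arcsec     # far below a hundredth
```

Lengthening the baseline rather than enlarging any single dish is what makes very-long-baseline interferometry the route to the highest angular resolution.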
The investigation of the radio-frequency radiation of celestial bodies has made it possible to determine the sources of primary cosmic rays (protons, heavier atomic nuclei, electrons). These sources have turned out to be supernova explosions (see SUPERNOVA). Radio background radiation—thermal radiation corresponding to a temperature of 2.7°K—was discovered (see RADIO BACKGROUND RADIATION). Pulsars—rapidly rotating neutron stars—were discovered in 1967 (see PULSAR and NEUTRON STAR). Pulsars generate directional radiation in the radio-frequency, visible, and X-ray regions of the spectrum, a radiation whose intensity varies periodically because of the rotation of the stars.
The launching of space stations has played a major part in the study of near-earth space and outer space; the earth’s radiation belts were discovered (see RADIATION BELTS OF THE EARTH), and cosmic sources of X radiation and bursts of gamma radiation were detected (these types of radiation are absorbed by the earth’s atmosphere and do not reach its surface).
Modern radio-physical methods make it possible to communicate through space over distances of tens or hundreds of millions of kilometers. The necessity of transmitting large volumes of information has stimulated the development of fundamentally new optical communication lines using optical fibers.
Fine precision has been attained in the measurement of the amplitude of the oscillations of macroscopic bodies. Mechanical oscillations with an amplitude of the order of 10⁻¹⁵ cm can be recorded by means of electronic and optical detectors (this limit could be extended to 10⁻¹⁶–10⁻¹⁹ cm).
High-precision automatic X-ray and neutron diffractometers, which have reduced the decoding time for structures by a factor of hundreds of thousands, are being used to study the structure of crystals and organic molecules. High-resolution electron microscopes are also used in structural analysis. Neutron diffraction analysis makes it possible to study the magnetic structure of solids (see NEUTRON DIFFRACTION ANALYSIS).
Electron paramagnetic resonance (discovered by E. K. Zavoiskii in 1944) is being used successfully to study the structure and distribution of the electron density in matter, as are nuclear magnetic resonance (discovered by E. Purcell and F. Bloch in 1946) and the Mössbauer effect (discovered by R. L. Mössbauer in 1958). The investigation of the structure of the atoms and molecules of organic and inorganic substances on the basis of their emission and absorption spectra over a broad range of frequencies is being improved and includes the use of laser radiation (see SPECTROSCOPY, LASER).
The phenomenon of ultralong underwater sound propagation in seas and oceans over distances of thousands of kilometers was discovered and investigated in hydroacoustics by the American scientists W. M. Ewing and J. Worzel (1944) and, independently, by the Soviet physicists L. M. Brekhovskikh, L. D. Rozenberg, and others (1946). (See HYDROACOUSTICS.)
Acoustical methods of studying solids based on the use of ultrasound and hypersound waves and acoustical surface waves were developed in the 1970’s.
The rapid development of semiconductor physics has revolutionized radio engineering and electronics. Semiconductor devices have supplanted vacuum-tube devices. Electronic devices and computers have grown markedly smaller and more reliable, and their power consumption has been reduced significantly. Integrated circuits, which combine thousands of electronic elements on a single small crystal measuring only a few square millimeters in area, have appeared. The ever-greater miniaturization of radio-electronic instruments and devices has led to the creation of microprocessors on several crystals, which perform the operational functions of computers. Tiny computers are fabricated on a single crystal.
Electronic computers, which have become an integral part of physics research, are used both to process experimental data and to make theoretical calculations, especially calculations that previously could not be performed because of their tremendous labor intensiveness.
Of great importance both for science and for practical applications is the investigation of matter under extreme conditions, such as very low or very high temperatures, ultrahigh pressures, high vacuums, and ultrastrong magnetic fields.
High and ultrahigh vacuums are created in electronic devices and accelerators in order to prevent collisions between the particles being accelerated and gas molecules. The study of the properties of surface and thin layers of matter in an ultrahigh vacuum has engendered a new branch of solid-state physics. This research is particularly important in space exploration.
Elementary particle physics. The most fundamental problem of physics has long been the study of matter at its deepest level—the level of elementary particles. An enormous amount of experimental material has been amassed on the interactions and conversions of elementary particles, but as yet it has not been possible to theoretically generalize this material from a unified standpoint. This is because essential facts may still be missing, or simply because no idea capable of shedding light on the problem of the structure and interaction of elementary particles has as yet been advanced.
The problem of the theoretical determination of the mass spectrum of elementary particles remains unsolved. The introduction of some fundamental length (l) that would limit the applicability of conventional concepts of space-time as a continuous entity may be needed to solve this problem and eliminate the infinities in quantum field theory. The conventional space-time relations apparently are valid for distances of the order of 10⁻¹⁵ cm and times of the order of t ≈ l/c ≈ 10⁻²⁵ sec, respectively, but may be violated at shorter distances. Attempts have been made to introduce the fundamental length in unified field theory (by Heisenberg and others) and in various versions of space-time quantization (see UNIFIED FIELD THEORY and QUANTIZATION, SPACE-TIME). As yet, however, these attempts have produced no tangible results. The problem of constructing a quantum theory of gravitation is also unresolved. A remote possibility exists of bringing together the four fundamental interactions.
Astrophysics. The development of elementary particle and nuclear physics has enabled scientists to gain more insight into such complex problems as the early stages of the evolution of the universe, the evolution of the stars, and the formation of the chemical elements. Despite important advances, however, modern astrophysics is also faced with unsolved problems. Still unclear is the state of matter at the extremely high densities and pressures inside stars and black holes. The physical nature of quasars and radio galaxies and the causes of supernova outbursts and of gamma radiation bursts have not been established. It is not yet understood why scientists have been unable to detect solar neutrinos, which should be produced in the sun’s interior in thermonuclear reactions (see NEUTRINO ASTRONOMY). The mechanism of acceleration of charged particles (cosmic rays) in supernova explosions and the mechanism of the emission of electromagnetic waves by pulsars have not been completely determined. Finally, scientists have only begun to unravel the mystery of the evolution of the universe as a whole. Still to be resolved are the questions of what was present in the early stages of evolution, the future fate of the universe, and whether sometime in the future the observed expansion of the universe will be followed by compression.
The most fundamental problems of modern physics undoubtedly are connected with elementary particles and the problem of the structure and development of the universe. Still to be discovered in this connection are new laws governing the behavior of matter under extraordinary conditions—at ultrasmall space-time distances, such as exist in the microworld, and at ultrahigh densities, such as those at the early stages of the universe’s expansion. All other problems are more specific and are connected with the search for ways of making effective use of the fundamental laws to explain observed phenomena and predict new ones.
Nuclear physics. Since the creation of the proton-neutron model of the nucleus, major progress has been achieved in understanding the structure of atomic nuclei, and various approximate nuclear models have been constructed. However, no consistent theory of the atomic nucleus has yet been advanced that, like the theory of atomic shells, would make it possible to calculate the binding energy of nucleons in the nucleus and the energy levels of the nucleus. A successful theory can be achieved only after the construction of a theory of the strong interactions.
The experimental investigation of the interaction of nucleons in the nucleus—the nuclear forces—entails great difficulties owing to the enormous complexity of the forces, which depend on the distance between nucleons, the velocities of the nucleons, and the orientations of the spins.
Of considerable interest is the possibility of experimentally detecting long-lived elements with atomic numbers of around 114 and 126 (the islands of stability), predicted by theory.
One of the most important problems that remains to be solved is that of controlled thermonuclear fusion. Intensive experimental and theoretical research is being conducted toward creating a hot deuterium-tritium plasma, which is essential for a thermonuclear reaction. To this end, Soviet Tokamak-type machines appear to be the most promising, although other possibilities also exist. In particular, laser radiation and electron or ion beams produced in powerful pulsed accelerators can be used to heat deuterium-tritium pellets.
Quantum electronics. Quantum-mechanical oscillators produce electromagnetic radiation with unique properties. Laser radiation is coherent and can reach a tremendous power—10¹²–10¹³ W—in a narrow spectral range. The divergence of the light beam is only about 10⁻⁴ rad. The electric field strength of the laser radiation may exceed the strength of the intra-atomic field.
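The last claim can be checked with a rough order-of-magnitude estimate: compare the peak electric field of a focused laser pulse with the characteristic intra-atomic field e/(4πε₀a₀²) that binds the electron in a hydrogen atom. The pulse power and spot size below are illustrative assumptions chosen within the range quoted above, not figures from the article.

```python
import math

EPS0 = 8.854e-12      # vacuum permittivity, F/m
C = 2.998e8           # speed of light, m/s
E_CHARGE = 1.602e-19  # elementary charge, C
A0 = 5.292e-11        # Bohr radius, m

def peak_field(power_w, spot_radius_m):
    """Peak electric field (V/m) of a beam of the given power focused to a
    circular spot, using the plane-wave relation I = eps0 * c * E^2 / 2."""
    intensity = power_w / (math.pi * spot_radius_m ** 2)
    return math.sqrt(2 * intensity / (EPS0 * C))

# Characteristic field at the Bohr radius in hydrogen (~5.1e11 V/m).
atomic_field = E_CHARGE / (4 * math.pi * EPS0 * A0 ** 2)

# Assumed example: a 1e12 W pulse focused to a 10-micrometer spot radius.
laser_field = peak_field(1e12, 10e-6)

print(f"intra-atomic field ~ {atomic_field:.2e} V/m")
print(f"focused laser field ~ {laser_field:.2e} V/m")
```

Under these assumptions the focused laser field comes out roughly a factor of a few above the intra-atomic field, consistent with the statement in the text.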
The creation of lasers has stimulated the rapid development of a new branch of optics—nonlinear optics (see NONLINEAR OPTICS). The nonlinear effects of the interaction of an electromagnetic wave with the medium become significant in strong laser radiation. These effects, including conversion of the radiation frequency and self-focusing of light beams, are of great theoretical and practical interest.
The nearly absolute monochromaticity of laser radiation has made it possible to obtain three-dimensional images of objects (see HOLOGRAPHY) through wave interference.
Laser radiation is used in isotope separation, in particular to enrich uranium with the isotope 235U, in the vaporization and welding of metals in a vacuum, and in medicine. The use of lasers to heat matter to temperatures at which thermonuclear reactions are possible appears to be promising. Still ahead is the search for new applications of laser radiation, for example, communications in space. The main problem in laser research is the development of ways to increase laser power and to broaden the range of wavelengths of the laser beam with smooth frequency adjustment. Research is also under way to develop X-ray and gamma-ray lasers. (See also QUANTUM ELECTRONICS.)
Solid-state physics. Solid-state physics occupies a leading position in developing materials with extreme properties in terms of mechanical strength, heat resistance, and electrical, magnetic, and optical characteristics.
An active search for nonphonon mechanisms of superconductivity has been under way since the 1970’s. The solution of this problem would make it possible to construct high-temperature superconductors, which would be of tremendous importance for experimental physics and technology and, in particular, would solve the problem of transmitting electric power over great distances essentially without losses.
The investigation of the physical properties of solid and liquid helium-3 at ultralow temperatures (below 3 × 10⁻³ °K) is an extremely interesting problem. Solid helium-3 apparently should be the only nuclear exchange antiferromagnetic substance. Liquid helium-3 is the simplest Fermi fluid, the theory of which is an important subject of quantum statistics.
The production of metallic hydrogen and the study of its physical properties are of great scientific and practical interest. When produced, metallic hydrogen would be a unique physical object, since its lattice would consist of protons, and it would presumably have a number of unusual properties, the study of which may lead to fundamentally new discoveries in physics. The first steps in this direction have been taken at the Institute of High-pressure Physics of the Academy of Sciences of the USSR, where the transition of thin films of solid hydrogen to the metallic state has been detected at a temperature of 4.2°K and a pressure of approximately 1 megabar.
New areas of investigation of solids using acoustic methods have emerged: acoustic electronics (the interaction of acoustic waves with electrons in semiconductors, metals, and superconductors), acoustic nuclear and paramagnetic resonance, and the determination of the phonon spectrum and dispersion curves.
It should be noted that the development of the traditional fields of solid-state physics has often led to unexpected discoveries of new physical phenomena or new materials, such as the Josephson effect, heterojunctions, type II superconductors, and quantum and whisker crystals.
Despite the advances achieved thus far, fundamentally new methods of producing more reliable miniature semiconductor devices must be developed, as well as new methods of producing higher pressures and ultralow temperatures (see MICROELECTRONICS).
Of great importance is the study of the physics of polymers, with their unusual mechanical and thermodynamic properties, and, in particular, the physics of biopolymers, which include all proteins.
Plasma physics. The importance of the study of plasma is connected with two facts. First, the overwhelming majority of matter in the universe is in the plasma state, for example, the stars and their atmospheres, the interstellar medium, and the earth’s radiation belts and ionosphere. Second, it is in high-temperature plasma that there is a real possibility of carrying out controlled thermonuclear fusion.
The basic equations that describe plasma are well known. The processes in plasma are so complex, however, that it is extremely difficult to predict the plasma’s behavior under various conditions. The main problem is to develop effective methods of heating the plasma to a temperature of the order of 1 billion degrees and then maintaining it in this state (despite the various types of instabilities inherent in a high-temperature plasma) long enough for a thermonuclear reaction to occur in most of the working volume. The solution of the problem of plasma stability also plays an important role in supporting the operation of colliding-beam accelerators and in developing collective methods of particle acceleration.
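The confinement requirement described above is conventionally summarized by the Lawson criterion: the product of the plasma density n and the energy confinement time τ must exceed a threshold, commonly quoted as roughly 10²⁰ s·m⁻³ for a deuterium-tritium plasma near its optimal temperature. The threshold used in this sketch is that round order-of-magnitude figure, not a value taken from the article.

```python
# Minimal sketch of a Lawson-criterion check for a D-T plasma.
# The threshold ~1e20 s*m^-3 is an order-of-magnitude figure that assumes
# the plasma is already near the optimal reaction temperature.
LAWSON_DT = 1e20  # s * m^-3 (assumed round value)

def lawson_satisfied(density_m3, confinement_time_s, threshold=LAWSON_DT):
    """True if the product n * tau meets the assumed Lawson threshold."""
    return density_m3 * confinement_time_s >= threshold

# Hypothetical tokamak-like plasma: n ~ 1e20 m^-3 confined for 2 s passes;
# the same density confined for only a millisecond does not.
ok = lawson_satisfied(1e20, 2.0)
too_short = lawson_satisfied(1e20, 1e-3)
print(ok, too_short)
```

The criterion makes explicit why density and confinement time can be traded off against each other: a denser plasma needs to be held together for less time.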
The investigation of the electromagnetic and corpuscular radiation of plasma is of decisive importance for explaining the acceleration of charged particles in supernova explosions, the radiation of pulsars, and other phenomena.
Needless to say, the problems of modern physics do not reduce to those mentioned above. Each branch of physics has its own problems, and the problems are so numerous that it would be impossible to list them all here.
Physics and philosophy. Because of the generality and breadth of its laws, physics has always had an impact on the development of philosophy and, in turn, has itself been influenced by philosophy. With each new discovery in the natural sciences, in the words of F. Engels, materialism inevitably must change form.
The highest form of materialism—dialectical materialism—is further confirmed and concretized in the achievements of modern physics. The law of the dialectic known as the unity of opposites is manifested especially clearly in the study of the microworld. The unity of the discontinuous and continuous is reflected in the wave-particle duality of microparticles. Necessity and chance appear in an inseparable bond, which is expressed in the probabilistic, statistical character of the laws of the motion of microparticles. The unity of the material world advocated by materialism is clearly manifested in the mutual conversions of elementary particles—the possible forms of existence of physical matter. Correct philosophical analysis is especially important in the revolutionary periods in the development of physics, when old concepts are subjected to fundamental reexamination. The classical example of such an analysis was given by V. I. Lenin in the book Materialism and Empiriocriticism. Only an understanding of the relationship between absolute and relative truth makes it possible to assess properly the essence of the revolutionary transformations in physics and to see in them the enrichment and deepening of our understanding of matter and the further development of materialism.
Physics and mathematics. Physics is a quantitative science, and its fundamental laws are formulated in mathematical language, chiefly through differential equations. On the other hand, new ideas and methods in mathematics often have arisen under the influence of physics. Infinitesimal analysis was devised by Newton (contemporaneously with G. W. von Leibniz) in his formulation of the fundamental laws of mechanics. The creation of the theory of the electromagnetic field led to the development of vector analysis. The development of such branches of mathematics as tensor calculus, Riemannian geometry, and group theory was stimulated by new physical theories, namely, the general theory of relativity and quantum mechanics. The development of quantum field theory has raised new problems in functional analysis. These are but a few examples of how physics has influenced mathematics.
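A concrete illustration of "laws formulated as differential equations": Newton's second law for a harmonic oscillator, ẍ = −ω²x, integrated numerically. The choice of the semi-implicit Euler scheme is my own for this sketch; the article names no particular method.

```python
import math

def integrate_oscillator(omega=1.0, x0=1.0, v0=0.0, t_end=2 * math.pi, dt=1e-3):
    """Integrate x'' = -omega^2 * x with the semi-implicit Euler method."""
    x, v = x0, v0
    for _ in range(round(t_end / dt)):
        v -= omega ** 2 * x * dt  # update velocity from the force law
        x += v * dt               # then position from the new velocity
    return x

# After one full period (t = 2*pi with omega = 1) the oscillator should
# return close to its starting position x0 = 1.
x_final = integrate_oscillator()
print(x_final)
```

The exact solution here is x(t) = cos(t), so the numerical result after one period should differ from 1 only by the small discretization error.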
Physics and other natural sciences. Because of the close relationship between physics and the other branches of natural science, physics, in the words of S. I. Vavilov, sank its deepest roots into astronomy, geology, chemistry, biology, and the other natural sciences. A number of overlapping disciplines emerged, such as astrophysics, geophysics, biophysics, and physical chemistry. Physical methods of research gained importance in all of the natural sciences.
The electron microscope has increased by several orders of magnitude the ability to discern the details of objects, making it possible to observe individual molecules. X-ray diffraction analysis is used to study not only crystals but the most complex biological structures as well. The determination of the structure of the DNA molecules, which are found in the chromosomes of the cell nuclei of all living organisms and which are the carriers of the genetic code, has been one of the genuine triumphs of physics. The revolution in biology associated with the rise of molecular biology and genetics would have been impossible without physics.
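The geometry behind X-ray diffraction analysis is Bragg's law, nλ = 2d sin θ: from a measured diffraction angle one recovers the spacing d of the atomic planes in the crystal. The wavelength in the example below (Cu Kα, about 1.54 Å) is a standard laboratory value, not one quoted in the article.

```python
import math

def plane_spacing(wavelength_angstrom, theta_deg, order=1):
    """Interplanar spacing d (angstroms) from Bragg's law n*lambda = 2*d*sin(theta)."""
    return order * wavelength_angstrom / (2 * math.sin(math.radians(theta_deg)))

# With Cu K-alpha radiation (~1.54 angstroms), a first-order reflection at
# theta = 30 degrees corresponds to d = 1.54 angstroms, since sin(30) = 1/2.
d = plane_spacing(1.54, 30.0)
print(d)
```

Measuring many such reflections at different angles is what allows the full three-dimensional arrangement of atoms, in crystals and in biological macromolecules alike, to be reconstructed.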
The use of tagged atoms plays a major role in the study of metabolism in living organisms, contributing to the solution of many problems of biology, physiology, and medicine. Ultrasound is used in medicine for diagnosis and therapy.
As stated elsewhere in this article, the laws of quantum mechanics underlie the theory of the chemical bond. The kinetics of chemical reactions can be traced by means of tagged atoms. Physical techniques using, for example, muon beams from accelerators can be used to effect chemical reactions that do not occur under ordinary conditions. The structural analogs of the hydrogen atom—positronium and muonium, whose existence and properties were determined by physicists—are also being used; in particular, muonium is used to measure the rates of fast chemical reactions.
The development of electronics is making it possible to observe processes that occur in a time less than 10⁻¹² sec. This development has revolutionized astronomy, leading to the creation of radio astronomy.
The findings and techniques of nuclear physics are used in geology, in particular, to measure the absolute age of rocks and of the earth as a whole (seeGEOCHRONOLOGY).
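Radiometric age determination rests on the exponential decay law: if a rock now contains D daughter atoms for every P surviving parent atoms, its age is t = ln(1 + D/P)/λ, where λ = ln 2 / T½. The uranium-238 half-life below is the standard accepted value; the sample ratio is an invented example, and the sketch assumes no daughter atoms were present when the rock formed.

```python
import math

def radiometric_age(daughter_parent_ratio, half_life_years):
    """Age in years from the accumulated daughter/parent atom ratio,
    assuming the rock initially contained no daughter atoms."""
    decay_const = math.log(2) / half_life_years  # lambda = ln 2 / T_half
    return math.log(1 + daughter_parent_ratio) / decay_const

U238_HALF_LIFE = 4.47e9  # years (U-238 -> Pb-206 decay chain)

# A sample with equal numbers of daughter and parent atoms (ratio = 1)
# is exactly one half-life old.
age = radiometric_age(1.0, U238_HALF_LIFE)
print(f"{age:.3e} years")
```

Applying the same relation to the oldest terrestrial and meteoritic samples is what yields the estimate of the age of the earth itself.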
Physics and technology. Physics forms the foundation for the most important areas of technology, including electrical engineering, power engineering, radio engineering, electronics, light engineering, civil engineering, hydraulic engineering, and a large part of military engineering. Through the conscious utilization of physical laws, engineering has emerged from the domain of chance finds onto the path of goal-directed development. Whereas in the 19th century it took decades before a physical discovery had its first engineering application, it now takes only a few years.
The development of technology, in turn, exerts no less a significant influence on experimental physics. The creation of such devices as charged-particle accelerators, huge bubble and spark chambers, and semiconductor devices would have been impossible without the development of electrical engineering, electronics, and the manufacturing processes for strong and impurity-free materials.
The rise of nuclear power engineering has been connected with major achievements in nuclear physics. Fast nuclear breeder reactors can use natural uranium and thorium, the reserves of which are vast. The successful realization of controlled thermonuclear fusion will free mankind virtually forever from the threat of energy shortages.
The technology of the future will no longer be based on natural materials but on synthetic materials with predetermined properties, and to this end the investigation of the structure of matter plays a leading role.
The development of electronics and advanced electronic computers, based on the achievements of solid-state physics, has expanded immeasurably the creative opportunities for mankind and has led to the construction of “thinking” automatons capable of making quick decisions while processing large volumes of data.
Considerable increases in labor productivity are being achieved through the use of electronic computers, which are used in the automation of production and control. As the national economy grows more complex, the volume of information to be processed becomes exceedingly large. Therefore, of great importance is the further improvement of computers, directed at increasing their speed, memory capabilities, and reliability and reducing their size and cost. These improvements are possible only through new achievements in physics.
Modern physics, the source of revolutionary changes in all areas of technology, is making a decisive contribution to the scientific and technological revolution.
REFERENCES
History and methodology of science
Engels, F. Dialektika prirody. Moscow, 1975.
Lenin, V. I. Materializm i empiriokrititsizm. Poln. sobr. soch., 5th ed., vol. 18.
Lenin, V. I. “Filosofskie tetradi.” Ibid., vol. 29.
Dorfman, Ia. G. Vsemirnaia istoriia fiziki s drevneishikh vremen do kontsa XVIII veka. Moscow, 1974.
Kudriavtsev, P. S. Istoriia fiziki, vols. 1–3. Moscow, 1956–71.
Laue, M. von. Istoriia fiziki. Moscow, 1956. (Translated from German.)
Gliozzi, M. Istoriia fiziki. Moscow, 1970. (Translated from Italian.)
Markov, M. A. O prirode materii. Moscow, 1976.
Khaikin, S. E. Fizicheskie osnovy mekhaniki, 2nd ed. Moscow, 1971.
Strelkov, S. P. Mekhanika, 3rd ed. Moscow, 1975.
Landsberg, G. S. Optika, 5th ed. Moscow, 1976.
Kikoin, A. K., and I. K. Kikoin. Molekuliarnaia fizika, 2nd ed. Moscow, 1976.
Kalashnikov, S. G. Elektrichestvo, 3rd ed. Moscow, 1970.
Shirokov, Iu. M., and A. P. Iudin. Iadernaia fizika. Moscow, 1972.
Gorelik, G. S. Kolebaniia i volny: Vvedenie v akustiku, radiofiziku i optiku, 2nd ed. Moscow, 1959.
Born, M. Atomnaia fizika, 3rd ed. Moscow, 1970. (Translated from English.)
Shpol’skii, E. V. Atomnaia fizika, vol. 1, 6th ed.; vol. 2, 4th ed. Moscow, 1974.
Feynman, R., R. Leighton, and M. Sands. Feinmanovskie lektsii po fizike, vols. 1–9. Moscow, 1965–67. (Translated from English.)
Berkleevskii kurs fiziki, vols. 1–5. Moscow, 1971–74. (Translated from English.)
Landau, L. D., and E. M. Lifshits. Kurs teoreticheskoi fiziki, vol. 1: Mekhanika, 3rd ed. Moscow, 1973. Vol. 2: Teoriia polia, 6th ed. Moscow, 1973. Vol. 3: Kvantovaia mekhanika: Nereliativistskaia teoriia, 3rd ed. Moscow, 1974.
Berestetskii, V. B., E. M. Lifshits, and L. P. Pitaevskii. Kurs teoreticheskoi fiziki, vol. 4, part 1: Reliativistskaia kvantovaia teoriia. Moscow, 1968.
Lifshits, E. M., and L. P. Pitaevskii. Kurs teoreticheskoi fiziki, vol. 4, part 2: Reliativistskaia kvantovaia teoriia. Moscow, 1971.
Landau, L. D., and E. M. Lifshits. Kurs teoreticheskoi fiziki, vol. 5, part 1: Statisticheskaia fizika, 3rd ed. Moscow, 1976.
Landau, L. D., and E. M. Lifshits. Mekhanika sploshnykh sred, 2nd ed. Moscow, 1954.
Landau, L. D., and E. M. Lifshits. Elektrodinamika sploshnykh sred. Moscow, 1959.
Goldstein, H. Klassicheskaia mekhanika, 2nd ed. Moscow, 1975. (Translated from English.)
Leontovich, M. A. Vvedenie v termodinamiku, 2nd ed. Moscow-Leningrad, 1952.
Leontovich, M. A. Statisticheskaia fizika. Moscow-Leningrad, 1944.
Kubo, R. Termodinamika. Moscow, 1970. (Translated from English.)
Kubo, R. Statisticheskaia mekhanika. Moscow, 1967. (Translated from English.)
Tamm, I. E. Osnovy teorii elektrichestva, 9th ed. Moscow, 1976.
Born, M., and E. Wolf. Osnovy optiki, 2nd ed. Moscow, 1973. (Translated from English.)
Davydov, A. S. Kvantovaia mekhanika, 2nd ed. Moscow, 1973.
Blokhintsev, D. I. Osnovy kvantovoi mekhaniki, 5th ed. Moscow, 1976.
Dirac, P. A. M. Printsipy kvantovoi mekhaniki. Moscow, 1960. (Translated from English.)
Abrikosov, A. A. Vvedenie v teoriiu normal’nykh metallov. Moscow, 1972.
Andronov, A. A., A. A. Vitt, and S. E. Khaikin. Teoriia kolebanii. Moscow, 1959.
Artsimovich, L. A. Upravliaemye termoiadernye reaktsii, 2nd ed. Moscow, 1963.
Akhiezer, A. I., and V. B. Berestetskii. Kvantovaia elektrodinamika, 3rd ed. Moscow, 1969.
Bethe, H., and A. Sommerfeld. Elektronnaia teoriia metallov. Leningrad-Moscow, 1938. (Translated from German.)
Blokhin, M. A. Fizika rentgenovskikh luchei, 2nd ed. Moscow, 1957.
Bogoliubov, N. N. Problemy dinamicheskoi teorii v statisticheskoi fizike. Moscow-Leningrad, 1946.
Bogoliubov, N. N., and D. V. Shirkov. Vvedenie v teoriiu kvantovannykh polei, 3rd ed. Moscow, 1976.
Brillouin, L. Nauka i teoriia informatsii. Moscow, 1960. (Translated from English.)
Vonsovskii, S. V. Magnetizm. Moscow, 1971.
Gibbs, J. W. Termodinamicheskie raboty. Moscow-Leningrad, 1950. (Translated from English.)
Gibbs, J. W. Osnovnye printsipy statisticheskoi mekhaniki. Moscow-Leningrad, 1946. (Translated from English.)
Ginzburg, V. L. O fizike i astrofizike, 2nd ed. Moscow, 1974.
Ansel’m, A. I. Vvedenie v teoriiu poluprovodnikov. Moscow-Leningrad, 1962.
El’iashevich, M. A. Atomnaia i molekuliarnaia spektroskopiia. Moscow, 1962.
Zel’dovich, Ia. B., and I. D. Novikov. Teoriia tiagoteniia i evoliutsiia zvezd. Moscow, 1971.
Zel’dovich, Ia. B., and Iu. P. Raizer. Fizika udarnykh voln i vysokotemperaturnykh gidrodinamicheskikh iavlenii, 2nd ed. Moscow, 1966.
Sommerfeld, A. Stroenie atoma i spektry, vols. 1–2. Moscow, 1956. (Translated from German.)
Zubarev, D. N. Neravnovesnaia statisticheskaia termodinamika. Moscow, 1971.
Kapitsa, P. L. Eksperiment, teoriia, praktika. Moscow, 1974.
Carslaw, H., and J. C. Jaeger. Teploprovodnost’ tverdykh tel. Moscow, 1964. (Translated from English.)
Kittel, C. Vvedenie v fiziku tverdogo tela, 2nd ed. Moscow, 1962. (Translated from English.)
Lorentz, H. A. Teoriia elektronov i ee primenenie k iavleniiam sveta i teplovogo izlucheniia, 2nd ed. Moscow, 1956. (Translated from English.)
Luk’ianov, S. Iu. Goriachaia plazma i upravliaemyi iadernyi sintez. Moscow, 1975.
Neumann, J. von. Matematicheskie osnovy kvantovoi mekhaniki. Moscow, 1964. (Translated from German.)
Okun’, L. B. Slaboe vzaimodeistvie elementarnykh chastits. Moscow, 1963.
Skudrzyk, E. Osnovy akustiki, vols. 1–2. Moscow, 1976. (Translated from English.)
Strutt, J. W. (Lord Rayleigh). Teoriia zvuka, vols. 1–2, 2nd ed. Moscow, 1955.
Fok, V. A. Teoriia prostranstva, vremeni i tiagoteniia, 2nd ed. Moscow, 1961.
Frenkel’, Ia. I. Vvedenie v teoriiu metallov, 3rd ed. Moscow, 1958.
Einstein, A., and L. Infeld. Evoliutsiia fiziki, 3rd ed. Moscow, 1965. (Translated from English.)
Encyclopedias and handbooks
Fizicheskii entsiklopedicheskii slovar’, vols. 1–5. Moscow, 1960–66.
Encyclopaedic Dictionary of Physics. Edited by J. Thewlis, vols. 1–9. Oxford-New York, 1961–64.
Spravochnik po fizike dlia inzhenerov i studentov vuzov, 6th ed. Moscow, 1974.
A. M. PROKHOROV