aerial and satellite photography
aerial and satellite photography, technology and science of taking still or moving-picture photographs from a camera mounted on a balloon, airplane, satellite, rocket, or spacecraft. In the 19th cent., photographers such as Thaddeus Lowe and George R. Lawrence took impressive pictures with cameras suspended in hot-air balloons or hung from kites, demonstrating both the scenic and military value of aerial photography. With the development of aviation, photogrammetry (the science of making measurements and maps from photographs) became an important tool. During World War I and subsequent conflicts, aerial photographs provided vital intelligence. Military aerial photography has now advanced to the point that the rank of a foot soldier can be determined from photographs taken from high-flying planes and satellites. Because of its military importance, much of the most sophisticated surveillance technology remains classified.
Aerial photography and satellite photography work in similar fashion. Course and speed are set before entering the area to be photographed, to ensure uniformity of speed and altitude. The result is an image of a narrow strip, which can be combined with overlapping images of neighboring strips to produce a panoramic view, commonly called a mosaic. Commercially available aerial and satellite photographs are capable of resolving objects of about 10 sq ft (1 sq m), which means that a satellite can distinguish between a car and a truck. Aerial photographs may be high oblique (including the horizon), low oblique (below the horizon), or vertical (perpendicular to the earth). Only the vertical may be accurately scaled for mapmaking purposes. Often a multilens camera is used to photograph one section vertically and the adjacent areas obliquely. The individual oblique exposures are then corrected, scaled, and joined to the vertical section to form one continuous photograph. By viewing two overlapping photographs through a stereoscope, a three-dimensional image of a region, or topographic map, can be obtained.
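The scale arithmetic behind vertical photographs is straightforward: the photo scale is the camera's focal length divided by the flying height, and the ground footprint of a frame follows directly. A minimal sketch, in which the focal length, flying height, and frame size are illustrative values rather than figures from this article:

```python
# Scale and ground coverage of a vertical aerial photograph.
# All numbers are assumed for illustration: a 150 mm lens flown at
# 3,000 m above ground, exposing a 230 mm square film frame.
f = 0.150        # focal length, metres
H = 3000.0       # flying height above ground, metres
frame = 0.230    # film frame side, metres

scale = f / H                 # photo scale (here 1 : 20,000)
ground_side = frame / scale   # ground distance covered by one frame side

print(f"scale 1:{H / f:.0f}")                              # scale 1:20000
print(f"frame covers {ground_side:.0f} m on the ground")   # 4600 m
```

Mosaicking then reduces to flying parallel strips whose footprints overlap by a planned fraction.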
Images can also be produced at other wavelengths, such as microwave or infrared, by using a technique known as remote scanning, which measures variations in spectral reflectance rather than patterns of light and shadow. Remote scanning aids such disparate fields as archaeology, geology, forestry, highway construction, and land conservation. The best-known remote scanners are the Landsat series of satellites, which have mapped vegetation and geological formations on the earth's surface since 1972; the French SPOT series, first launched in 1986; Magellan, which used radar to map the planet Venus (1990); Lunar Prospector, which mapped the moon's surface composition and its magnetic and gravity fields (1998); Mars Global Surveyor, which engaged in a systematic mapping of Mars (1999); and Galileo, which returned pictures of Jupiter and its major moons (1995–2003).
See P. R. Wolf, Elements of Photogrammetry (1983); H. Lloyd, Aerial Photography (1990); R. H. Arnold, Interpretation of Airphotos and Remotely Sensed Imagery (1995); N. Henbest, The Planets: Portraits of New Worlds (1995); E. D. Conway, An Introduction to Satellite Image Interpretation (1997); P. Taubman, Secret Empire: Eisenhower, the CIA and the Hidden Story of America's Space Espionage (2003).
the scientific and technical discipline concerned with determining the dimensions, shape, and position of objects by examining their images in photographs. The photographs are obtained either directly, by conventional, moving-film aerial, or panoramic cameras, or by using radar, television, infrared-heat-sensing, or laser systems. Photographs produced by conventional cameras are the most widely used, especially in aerial photographic surveys. In the theory of photogrammetry these photographs are considered the central projection of the object. Deviations from the central projection caused by lens distortion, deformation of the film and print paper, and other sources of error are taken into account in the calibration data for the aerial camera and the photographs. Photogrammetry uses both individual photographs and stereoscopic pairs; the use of stereopairs makes it possible to obtain a stereoscopic model of an object. The branch of photogrammetry that studies objects by means of stereopairs is called stereophotogrammetry.
The position of a photograph at the moment of exposure (see Figure 1) is determined by three elements of relative orientation—the camera focal distance f and the coordinates x0 and y0 of the principal point o—and by six elements of absolute orientation—the coordinates Xs, Ys, and Zs of the center of projection S, the longitudinal (α) and transverse (ω) angles of inclination of the photograph, and the angle of rotation (κ).
The following relationship exists between the coordinates of a point of an object and those of its image on the photograph:

(1) X = Xs + (Z – Zs)X′/Z′   Y = Ys + (Z – Zs)Y′/Z′
where X, Y, and Z and Xs, Ys, and Zs are the coordinates of the points M and S in the system OXYZ, and X′, Y′, and Z′ are the coordinates of the point m in the system SXYZ, parallel to OXYZ, as computed from the plane coordinates x and y of the image:

(2) X′ = a1x + a2y – a3f
Y′ = b1x + b2y – b3f
Z′ = c1x + c2y – c3f
In this case

(3) a1 = cos α cos κ – sin α sin ω sin κ
a2 = –cos α sin κ – sin α sin ω cos κ
a3 = –sin α cos ω
b1 = cos ω sin κ
b2 = cos ω cos κ
b3 = –sin ω
c1 = sin α cos κ + cos α sin ω sin κ
c2 = –sin α sin κ + cos α sin ω cos κ
c3 = cos α cos ω

are the direction cosines.
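The direction cosines of formula (3) are the entries of a rotation matrix, which must be orthonormal with determinant 1. A short numerical check (the function name and the small tilt angles are assumptions of this sketch, not part of the article):

```python
import numpy as np

def direction_cosines(alpha, omega, kappa):
    """Rotation matrix assembled from the direction cosines a1..c3 of formula (3)."""
    sa, ca = np.sin(alpha), np.cos(alpha)
    so, co = np.sin(omega), np.cos(omega)
    sk, ck = np.sin(kappa), np.cos(kappa)
    a1 = ca * ck - sa * so * sk
    a2 = -ca * sk - sa * so * ck
    a3 = -sa * co
    b1 = co * sk
    b2 = co * ck
    b3 = -so
    c1 = sa * ck + ca * so * sk
    c2 = -sa * sk + ca * so * ck
    c3 = ca * co
    return np.array([[a1, a2, a3], [b1, b2, b3], [c1, c2, c3]])

# Illustrative small tilt angles, in radians.
A = direction_cosines(0.02, -0.01, 0.03)

# A proper rotation satisfies A·Aᵀ = I and det A = 1.
print(np.allclose(A @ A.T, np.eye(3)))   # True
print(np.isclose(np.linalg.det(A), 1.0)) # True
```

With all three angles zero the matrix reduces to the identity, as formula (3) requires.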
The relationship between the coordinates of a point M of the object (see Figure 2) and the coordinates of its images m1 and m2 in the stereopair P1–P2 has the following form:

(4) X = Xs1 + NX′1   Y = Ys1 + NY′1   Z = Zs1 + NZ′1

where

(5) N = (BXZ′2 – BZX′2)/(X′1Z′2 – X′2Z′1)
Here BX, BY, and BZ are the projections of the base B on the coordinate axes. If the elements of absolute orientation of the stereopair are known, the coordinates of a point of the object can be determined from formula (4); this is known as the direct intersection method. The position of a point of an object may be found from a single photograph in the special case where the object is flat, for example, a plain (Z = const). The coordinates x and y of the points of the photograph are measured on a monocular comparator or a stereocomparator. The elements of relative orientation are known from the results of camera calibration; the elements of absolute orientation can be determined when photographing the object or during the process of aerial triangulation.
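Direct intersection recovers an object point as the point common to the two rays from the camera stations. The sketch below uses a generic least-squares closest-point computation rather than the article's closed-form expressions, and all coordinates are illustrative values:

```python
import numpy as np

def intersect(S1, d1, S2, d2):
    """Least-squares intersection of the rays S1 + t1*d1 and S2 + t2*d2."""
    # Solve [d1, -d2] [t1, t2]^T ≈ S2 - S1 in the least-squares sense,
    # then take the midpoint of the two closest points on the rays.
    A = np.column_stack([d1, -d2])
    t1, t2 = np.linalg.lstsq(A, S2 - S1, rcond=None)[0]
    return 0.5 * ((S1 + t1 * d1) + (S2 + t2 * d2))

# Two camera stations separated by a base along X; both rays aim at the
# (assumed) object point M.
S1 = np.array([0.0, 0.0, 0.0])
S2 = np.array([40.0, 0.0, 0.0])     # BX = 40, BY = BZ = 0
M  = np.array([50.0, 20.0, -100.0])
d1 = M - S1                          # ray direction from station 1
d2 = M - S2                          # ray direction from station 2

print(intersect(S1, d1, S2, d2))     # ≈ [ 50.  20. -100.]
```

With noisy image measurements the two rays no longer meet exactly, and the midpoint of closest approach is the natural least-squares estimate of M.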
If the elements of absolute orientation of the photographs are unknown, the coordinates of a point of an object are found by using control points; this is called the reverse intersection method. A control point is a contour point of the object, recognized on the photograph, whose coordinates are obtained by geodetic measurement or from aerial triangulation. When reverse intersection is used, it is first necessary to determine the elements of mutual orientation of the photographs P1–P2 (see Figure 3)—α1′, κ1′, α2′, ω2′, and κ2′—in the system S1X′Y′Z′, whose X axis coincides with the base and whose Z axis lies in the principal base plane S1o1S2 of photograph P1. The coordinates of the points of the model are then computed in the same system. Finally, by using the control points, the transition is made from the coordinates of the points of the model to the coordinates of the points of the object.
The elements of mutual orientation make it possible to place the photographs in the same positions relative to one another that they occupied when the object was photographed. In this case each pair of corresponding rays, for example S1m1 and S2m2, intersects and forms a point m of the model. The set of rays belonging to one photograph is called a bundle, and the center of projection S1 or S2 is called the vertex of the bundle. The scale of the model remains unknown, because the distance S1S2 between the vertices of the bundles is chosen arbitrarily. The corresponding points m1 and m2 of the stereopair lie in a single plane passing through the base S1S2. Therefore,

(6)
| BX    BY    BZ  |
| X′1   Y′1   Z′1 | = 0
| X′2   Y′2   Z′2 |
Assuming that the approximate values of the elements of mutual orientation are known, equation (6) can be put in the linear form
(7) aδα1’ + bδα2’ + cδω2’ + dδκ1’ + eδκ2’ + l = V
where δα1′, . . ., δκ2′ are corrections to the approximate values of the unknowns; a, . . ., e are the partial derivatives of function (6) with respect to the variables α1′, . . ., κ2′; and l is the value of function (6) computed for the approximate values of the unknowns. To determine the elements of mutual orientation, the coordinates of at least five points of the stereopair are measured; equations (7) are then compiled and solved by the method of successive approximations. The coordinates of the points of the model are computed according to formula (4), with an arbitrary length selected for the base B and with the following assumptions: Xs1 = Ys1 = Zs1 = 0, BX = B, and BY = BZ = 0. In this case the spatial coordinates of the points m1 and m2 are found from formula (2) and the direction cosines from formula (3). For photograph P1 they are found from the elements α1′, ω1′ = 0, and κ1′; for photograph P2, from the elements α2′, ω2′, and κ2′.
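The coplanarity condition behind this procedure can be checked numerically: for correctly oriented bundles, the scalar triple product of the base and the two corresponding rays (the determinant in condition (6)) vanishes, while an error in an orientation element leaves exactly the kind of residual that the corrections in equation (7) drive to zero. A sketch with illustrative numbers:

```python
import numpy as np

# One pair of corresponding rays: the base B = S1S2 and the ray vectors
# r1 = S1->m1 and r2 = S2->m2 must lie in a single plane, so their scalar
# triple product is zero.  All values below are assumed for illustration.
S1 = np.array([0.0, 0.0, 0.0])
S2 = np.array([1.0, 0.0, 0.0])       # base of arbitrary length along X′
M  = np.array([3.0, 1.0, -10.0])     # a model point both rays pass through
B  = S2 - S1
r1 = M - S1
r2 = M - S2

residual = np.linalg.det(np.array([B, r1, r2]))
print(abs(residual) < 1e-12)         # True: the rays intersect

# A wrong transverse tilt ω2′ of the second bundle destroys coplanarity;
# the resulting nonzero determinant is the residual l of equation (7).
w = 0.05                             # small erroneous rotation about X′
Rx = np.array([[1, 0, 0],
               [0, np.cos(w), -np.sin(w)],
               [0, np.sin(w),  np.cos(w)]])
bad = np.linalg.det(np.array([B, r1, Rx @ r2]))
print(abs(bad) > 1e-6)               # True: residual to be corrected
```

Measuring five or more such pairs gives enough equations of form (7) to solve for the five orientation corrections iteratively.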
The coordinates of a point of the object are determined from the coordinates X′, Y′, and Z′ of the corresponding point of the model, as follows:

(8) X = Xs1 + t(a1X′ + a2Y′ + a3Z′)
Y = Ys1 + t(b1X′ + b2Y′ + b3Z′)
Z = Zs1 + t(c1X′ + c2Y′ + c3Z′)
where t is the denominator of the scale of the model. The direction cosines are obtained from formula (3) by substituting the model’s longitudinal angle of inclination ξ, transverse angle of inclination η, and angle of rotation θ for the angles α, ω, and κ, respectively.
In order to determine the seven elements of absolute orientation of the model (Xs1, Ys1, Zs1, ξ, η, θ, and t), equation (8) is written for three or more control points and then solved. The coordinates of the control points are found by geodetic methods or by means of aerial triangulation. The set of points of an object whose coordinates are known forms a numerical model of the object, which may be used for drawing a map and solving various engineering problems, for example, finding the optimum route for a road. In addition to analytic methods of processing photographs, analogue methods based on the use of various photogrammetric instruments, such as rectifiers, stereographs, and stereoprojectors, also find application.
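Applying the seven elements of absolute orientation to a model point is a three-dimensional similarity transformation: rotate by the matrix built from ξ, η, and θ, scale by t, and translate by Xs1, Ys1, Zs1. A minimal sketch with assumed values (here a pure rotation about the Z axis stands in for the full three-angle matrix):

```python
import numpy as np

def similarity(points, t, R, T):
    """Map model coordinates to object coordinates: X = T + t * R @ X′."""
    return T + t * (points @ R.T)

# Illustrative seven-parameter example: scale t = 2, rotation θ = 90°
# about Z, translation (100, 50, 10).  Values are not from the article.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
model = np.array([[1.0, 0.0, 0.0]])          # one model point
T = np.array([100.0, 50.0, 10.0])

print(similarity(model, 2.0, R, T))          # ≈ [[100. 52. 10.]]
```

Writing this relation for three or more control points and solving for the seven unknowns is the absolute orientation step described above.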
Panoramic photographs and photographs from moving-film aerial cameras, as well as radar, television, infrared-heat-sensing, and other photographic systems, greatly expand the possibilities of photogrammetry, particularly in space studies. However, such systems do not have a single center of projection, and their elements of absolute orientation change continuously as the image is constructed, which makes it difficult to use such photographs for measurement purposes.
Photogrammetric techniques possess several fundamental advantages: high productivity, since the images of objects, rather than the objects themselves, are measured; high accuracy, due to the use of precision equipment to obtain and measure the photographs and the use of strict procedures for processing the results of measurements; the possibility of studying both fixed and moving objects; and completely objective measurement results. In addition, measurements can be made at some distance removed from the object, which is particularly important where objects are inaccessible (such as an airplane or projectile in flight) or when there would be danger to an observer in the area of the object (as with an active volcano or nuclear explosion).
Photogrammetry is widely used to make maps of the earth, other planets, and the moon; to measure the geologic elements of rock beds and to document mining excavations; to study the movement of glaciers and the dynamics of the thawing of the snow cover; to determine forest valuation characteristics; and to investigate soil erosion and observe changes in vegetation cover. It is also used in the study of sea waves and currents and in underwater surveys and prospecting; in the design, construction, and operation of engineering structures; in the monitoring of the condition of architectural ensembles, buildings, and monuments; and in the determination of coordinates of fire positions and targets in military operations.
REFERENCES
Bobir, N. Ia., A. N. Lobanov, and G. D. Fedoruk. Fotogrammetriia. Moscow, 1974.
Drobyshev, F. V. Osnovy aerofotos” emki i fotogrammetrii, 3rd ed. Moscow, 1973.
Konshin, M. D. Aerofotogrammetriia. Moscow, 1967.
Lobanov, A. N. Aerofototopografiia. Moscow, 1971.
Lobanov, A. N. Fototopografiia, 3rd ed. Moscow, 1968.
Deineko, V. F. Aerofotogeodeziia. Moscow, 1968.
Sokolova, N. A. Tekhnologiia krupnomasshtabnykh aerotopograficheskikh s”emok. Moscow, 1973.
Rusinov, M. M. Inzhenernaia fotogrammetriia. Moscow, 1966.
Rüger, W., and A. Buchholtz. Photogrammetrie, 3rd ed. Berlin, 1973.
Manual of Photogrammetry, vols. 1–2. Menasha, Wis., 1966.
Bonneval, H. Photogrammétrie générale, vols. 1–4. Paris, 1972.
Piasecki, M. B. Fotogrametria, 3rd ed. Warsaw, 1973.
A. N. LOBANOV