# Approximate Solution of Differential Equations

The obtaining of analytic expressions (formulas) or numerical values that approximate the desired solution of a differential equation to some degree of accuracy.

An approximate solution to a differential equation in the form of an analytic expression can be found by the method of series (power series, trigonometric series, and so on), the method of small parameters, the method of successive approximations, the Ritz and Galerkin methods, and the Chaplygin method. Each of these methods defines one or more infinite processes that under certain conditions can be used to obtain an exact solution to a problem. Termination of the process after a finite number of steps yields an approximate solution.

If a solution is represented by means of an infinite series, a finite portion of the series can be taken as the approximate solution. For example, suppose we wish to find a solution to the differential equation *y*ʹ = *f*(*x*, *y*) that satisfies the initial condition *y*(*x*_{0}) = *y*_{0}. Suppose, moreover, that *f*(*x*, *y*) is an analytic function of *x* and *y* in some neighborhood of the point (*x*_{0}, *y*_{0}). The solution can then be sought in the form of a power series:

*y* = *A*_{0} + *A*_{1}(*x* – *x*_{0}) + *A*_{2}(*x* – *x*_{0})^{2} + … + *A _{k}*(*x* – *x*_{0})^{k} + …

The coefficients *A _{k}* of the series can be found either from the formulas

*A _{k}* = *y*^{(k)}(*x*_{0})/*k*!

or by means of the method of undetermined coefficients. The series method permits a solution to be found only for small values of the quantity *x* – *x*_{0}.
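The series method can be sketched on a concrete case. The equation *y*ʹ = *x* + *y*, *y*(0) = 1 is an illustrative example chosen here (it is not from the article): differentiating the equation gives *y*ʺ = 1 + *y*ʹ and *y*^{(n)} = *y*^{(n – 1)} for *n* ≥ 3, so the Taylor coefficients *A _{k}* = *y*^{(k)}(*x*_{0})/*k*! are easy to generate, and a partial sum of the series can be compared with the closed-form solution 2*e*^{x} – *x* – 1.

```python
import math

# Series method sketch for the illustrative equation y' = x + y, y(0) = 1
# (a made-up example, not from the article).  Differentiating the equation:
# y'' = 1 + y', and y^(n) = y^(n-1) for n >= 3, so the derivatives at x0
# (and hence the coefficients A_k = y^(k)(x0)/k!) follow by recurrence.

def taylor_coefficients(n_terms, x0=0.0, y0=1.0):
    """Return A_0 .. A_{n_terms-1} for y' = x + y about x0."""
    derivs = [y0]                    # y(x0)
    derivs.append(x0 + y0)           # y'(x0) = x0 + y0
    derivs.append(1.0 + derivs[1])   # y''(x0) = 1 + y'(x0)
    while len(derivs) < n_terms:
        derivs.append(derivs[-1])    # y^(n) = y^(n-1) for n >= 3
    return [d / math.factorial(k) for k, d in enumerate(derivs)]

def partial_sum(coeffs, x, x0=0.0):
    """Evaluate the truncated power series at x."""
    return sum(a * (x - x0) ** k for k, a in enumerate(coeffs))

coeffs = taylor_coefficients(12)
approx = partial_sum(coeffs, 0.5)
exact = 2 * math.exp(0.5) - 0.5 - 1   # closed-form solution 2e^x - x - 1
print(abs(approx - exact))            # tiny for x near x0
```

As the article notes, the accuracy of such a partial sum degrades quickly as *x* – *x*_{0} grows.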

Cases are often encountered—for example, in the study of periodic motions in celestial mechanics and in the theory of oscillations—where an equation consists of two kinds of terms: principal and secondary, the secondary terms being characterized by the presence of small parameters, or small constant factors. If the secondary terms are dropped, an equation that admits of an exact solution is usually obtained. The solution of the original equation can then be sought in the form of a series whose first term is the solution of the equation without the secondary terms and whose remaining terms are arranged according to the powers of the small parameters. Since the equations for the coefficients of the powers of the small parameters are linear, they are rather easy to solve. Initial values are sometimes used as small parameters—for example, in the study of oscillations about an equilibrium position. The method of small parameters was used by L. Euler and P. Laplace in solving problems of perturbed motion in celestial mechanics. The theoretical foundation for the method was provided by A. M. Liapunov and J. H. Poincaré.
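The method of small parameters can be sketched on a hypothetical example (not from the article): *y*ʹ = –*y* + ε*y*^{2}, *y*(0) = 1, where the term ε*y*^{2} is secondary. Writing *y* = *y*_{0} + ε*y*_{1} + … and collecting powers of ε gives linear equations for each coefficient function, and the truncated expansion can be checked against the exact solution of this Bernoulli equation.

```python
import math

# Small-parameter sketch for the illustrative equation
#   y' = -y + eps*y**2,  y(0) = 1   (a made-up example).
# Substituting y = y0 + eps*y1 + ... and matching powers of eps gives
# linear equations:
#   eps^0:  y0' = -y0,          y0(0) = 1  ->  y0 = exp(-x)
#   eps^1:  y1' = -y1 + y0**2,  y1(0) = 0  ->  y1 = exp(-x) - exp(-2x)

def perturbation_solution(x, eps):
    y0 = math.exp(-x)
    y1 = math.exp(-x) - math.exp(-2 * x)
    return y0 + eps * y1

def exact_solution(x, eps):
    # Bernoulli equation: the substitution u = 1/y gives u' = u - eps,
    # hence u = eps + (1 - eps)*exp(x).
    return 1.0 / (eps + (1.0 - eps) * math.exp(x))

eps = 0.01
err = max(abs(perturbation_solution(k / 10, eps) - exact_solution(k / 10, eps))
          for k in range(11))
print(err)   # residual error is of order eps**2
```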

Numerical methods permit approximate solutions to be found for certain values of the argument (that is, they yield a table of approximate values of the desired solution) by using known values of the solution at one or more points. Examples of such methods are the Euler method, the Runge-Kutta method, and a number of difference methods.

These methods can be illustrated with the equation

*y*^{ʹ} = *f(x, y*)

with the initial condition *y* (x_{0}) = *y*_{0}. Let the exact solution of this equation be represented in a certain neighborhood of the point *x*_{0} by a power series in *h* = *x* – *x*_{0}. A basic characteristic of the accuracy of formulas for the approximate solution of differential equations is the requirement that the first *k* terms of the power series in *h* of the approximate solution coincide with the first *k* terms in the power series in *h* of the exact solution.

The Euler method is based on the use of the series method to calculate approximate values of the solution *y*(*x*) at the points *x*_{1}, *x*_{2}, …, *x _{n}* of the fixed closed interval [*x*_{0}, *b*]. Thus, to compute *y*(*x*_{1}), where *x*_{1} = *x*_{0} + *h* and *h* = (*b* – *x*_{0})/*n*, the value *y*(*x*_{1}) is represented by a finite number of terms in the power series in *h* = *x*_{1} – *x*_{0}. For example, keeping only the first two terms of the series, we obtain the following formulas for computing *y*(*x _{k}*):

*y*(*x _{k}*) = *y*(*x*_{k – 1}) + *hf*(*x*_{k – 1}, *y*_{k – 1}), *x _{k}* = *x*_{0} + *kh*

This method is called the Euler method; for each interval [*x _{k}*, *x*_{k + 1}] the integral curve is replaced by a line segment. The error in this method is proportional to *h*^{2}.
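The tabulation described above can be sketched directly; the test equation *y*ʹ = *y*, *y*(0) = 1 is an illustrative choice, not from the article.

```python
import math

# Minimal sketch of the Euler method for y' = f(x, y), y(x0) = y0 on [x0, b].
def euler(f, x0, y0, b, n):
    """Tabulate y at x_k = x0 + k*h, h = (b - x0)/n, via
    y_k = y_{k-1} + h*f(x_{k-1}, y_{k-1})."""
    h = (b - x0) / n
    xs, ys = [x0], [y0]
    for k in range(1, n + 1):
        ys.append(ys[-1] + h * f(xs[-1], ys[-1]))
        xs.append(x0 + k * h)
    return xs, ys

# Illustration on y' = y, y(0) = 1, whose exact solution is e^x.
xs, ys = euler(lambda x, y: y, 0.0, 1.0, 1.0, 1000)
print(abs(ys[-1] - math.e))   # shrinks proportionally to h as n grows
```

Each step replaces the integral curve on [*x*_{k}, *x*_{k + 1}] by the tangent-line segment of slope *f*(*x _{k}*, *y _{k}*), which is exactly the geometric picture given above.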

In the Runge-Kutta method, instead of finding derivatives we form a combination of values of *f*(*x*, *y*) at several points that gives with a certain accuracy the first few terms of the power series for the exact solution of the equation. For example, the right-hand side of the Runge formula

*y*_{k + 1} = *y _{k}* + (1/6)(η_{1} + 2η_{2} + 2η_{3} + η_{4})

where

η_{1} = *hf*(*x _{k}*, *y _{k}*), η_{2} = *hf*(*x _{k}* + *h*/2, *y _{k}* + η_{1}/2), η_{3} = *hf*(*x _{k}* + *h*/2, *y _{k}* + η_{2}/2), η_{4} = *hf*(*x _{k}* + *h*, *y _{k}* + η_{3})

gives the first five terms of the power series with accuracy up to terms of order *h*^{5}.
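A minimal sketch of a classical fourth-order Runge-Kutta step, again illustrated on the test equation *y*ʹ = *y*, *y*(0) = 1 (an example chosen here, not from the article):

```python
import math

# Sketch of the classical fourth-order Runge-Kutta method: each step forms
# four evaluations of f(x, y) and combines them with weights 1, 2, 2, 1.
def rk4(f, x0, y0, b, n):
    h = (b - x0) / n
    x, y = x0, y0
    for _ in range(n):
        k1 = h * f(x, y)
        k2 = h * f(x + h / 2, y + k1 / 2)
        k3 = h * f(x + h / 2, y + k2 / 2)
        k4 = h * f(x + h, y + k3)
        y += (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return y

# y' = y, y(0) = 1: compare with e at x = 1.
print(abs(rk4(lambda x, y: y, 0.0, 1.0, 1.0, 100) - math.e))
```

Because the local step error is of order *h*^{5}, the error over the whole interval falls off as *h*^{4}, far faster than for the Euler method at the same step size.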

In difference formulas, already computed values of the right-hand side are used several times. The solution is sought as a linear combination of the *y*(*x _{i}*), η_{i}, and the differences Δ^{i}η_{j}, where η_{j} = *hf*(*x _{j}*, *y _{j}*), Δη_{j} = η_{j + 1} – η_{j}, and Δ^{i}η_{j} = Δ^{i – 1}η_{j + 1} – Δ^{i – 1}η_{j}. An example of a difference formula is the Adams extrapolation formula. If we use differences up to third order, then the Adams formula

*y _{k}* = *y*_{k – 1} + η_{k – 1} + (1/2)Δη_{k – 2} + (5/12)Δ^{2}η_{k – 3} + (3/8)Δ^{3}η_{k – 4}

gives the solution *y*(*x*) at the point *x _{k}* with accuracy up to terms of order *h*^{4}.
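The reuse of already computed right-hand sides can be sketched as follows. The test problem *y*ʹ = *y*, *y*(0) = 1 and the use of exact values for the three starting points are assumptions made for this illustration; in practice the starting values are obtained by another method, such as Runge-Kutta.

```python
import math

# Sketch of the Adams extrapolation formula with differences up to third
# order: y_{k+1} = y_k + eta_k + (1/2)*d1 + (5/12)*d2 + (3/8)*d3, where
# eta_j = h*f(x_j, y_j) and d1, d2, d3 are the differences of the eta's
# built from already computed values.  Starting values are taken exact here.

def adams4(f, x0, y0, b, n, exact):
    h = (b - x0) / n
    xs = [x0 + k * h for k in range(n + 1)]
    ys = [exact(xs[k]) for k in range(4)]            # starting values
    eta = [h * f(xs[k], ys[k]) for k in range(4)]
    for k in range(3, n):
        d1 = eta[k] - eta[k - 1]
        d2 = eta[k] - 2 * eta[k - 1] + eta[k - 2]
        d3 = eta[k] - 3 * eta[k - 1] + 3 * eta[k - 2] - eta[k - 3]
        ys.append(ys[k] + eta[k] + d1 / 2 + 5 * d2 / 12 + 3 * d3 / 8)
        eta.append(h * f(xs[k + 1], ys[k + 1]))      # reused on later steps
    return ys

ys = adams4(lambda x, y: y, 0.0, 1.0, 1.0, 100, math.exp)
print(abs(ys[-1] - math.e))   # error falls off as h**4
```

Note that each step costs only one new evaluation of *f*, whereas a Runge-Kutta step of the same order costs four; this economy is the main attraction of difference formulas.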

Formulas for the numerical integration of second-order equations can be obtained by applying the Adams formula twice. The Norwegian mathematician F. Størmer obtained the formula

Δ^{2}*y*_{n – 1} = *h*^{2}[*f*(*x _{n}*, *y _{n}*) + (1/12)Δ^{2}*f*_{n – 2} + …]

which is especially convenient for solving equations of the form *y*ʺ = *f*(*x*, *y*). With this formula, Δ^{2}*y*_{n – 1} is found, and then *y*_{n + 1} = *y _{n}* + Δ*y*_{n – 1} + Δ^{2}*y*_{n – 1}. With *y*_{n + 1} known, *y*ʺ_{n + 1} = *f*(*x*_{n + 1}, *y*_{n + 1}) is computed, the differences are found, and the process is continued.
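Keeping only the leading term of the Størmer formula, Δ^{2}*y*_{n – 1} = *h*^{2}*f*(*x _{n}*, *y _{n}*), gives the simplest version of this scheme, which can be sketched directly. The test problem *y*ʺ = –*y*, *y*(0) = 0, *y*ʹ(0) = 1 (exact solution sin *x*) is an illustrative choice, not from the article.

```python
import math

# Sketch of Stormer-type integration of y'' = f(x, y), keeping only the
# leading term of the formula: Delta^2 y_{n-1} = h**2 * f(x_n, y_n), so that
#   y_{n+1} = 2*y_n - y_{n-1} + h**2 * f(x_n, y_n).
# Note that no values of y' are ever computed.

def stormer(f, x0, y0, y1, h, n):
    """y0 = y(x0), y1 = y(x0 + h); returns the approximation to y(x0 + n*h)."""
    prev, cur = y0, y1
    for k in range(1, n):
        x = x0 + k * h
        prev, cur = cur, 2 * cur - prev + h * h * f(x, cur)
    return cur

# y'' = -y, y(0) = 0, y'(0) = 1: exact solution is sin(x).
h, n = 0.001, 1000
y = stormer(lambda x, y: -y, 0.0, 0.0, math.sin(h), h, n)
print(abs(y - math.sin(n * h)))   # small; error falls off as h**2
```

Retaining the difference corrections of the full formula raises the order of accuracy, at the cost of the startup values needed to form those differences.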

The numerical methods mentioned above can be extended to systems of differential equations.

The importance of numerical methods for solving differential equations has grown considerably with the advent of computers.

In addition to analytic and numerical methods, graphic methods are also used for the approximate solution of differential equations. In the simplest of these, a set of directions, or direction field, determined by the differential equation is constructed; that is, at certain points the directions of the tangents to the integral curves passing through those points are drawn. A curve having these directions as tangents is then drawn.
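The construction underlying the graphic method is simple to state in code: at each chosen point (*x*, *y*), the slope of the tangent to the integral curve of *y*ʹ = *f*(*x*, *y*) is just *f*(*x*, *y*). The equation *y*ʹ = –*x*/*y*, whose integral curves are the circles *x*^{2} + *y*^{2} = const, is an illustrative choice made here.

```python
import math

# Sketch of the direction-field construction: at grid points (x, y), the
# slope of the integral curve of y' = f(x, y) is f(x, y); the tangent
# direction is stored as an angle for plotting by hand or by machine.

def direction_field(f, xs, ys):
    """Map each grid point to the tangent angle (radians) of its direction."""
    return {(x, y): math.atan2(f(x, y), 1.0) for x in xs for y in ys}

# Illustration on y' = -x/y (integral curves: circles x**2 + y**2 = const).
field = direction_field(lambda x, y: -x / y, [-1.0, 0.0, 1.0], [1.0, 2.0])
angle = field[(1.0, 1.0)]
print(round(math.degrees(angle)))   # slope -1 -> tangent at -45 degrees
```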

### REFERENCES

Berezin, I. S., and N. P. Zhidkov. *Metody vychislenii*, 2nd ed., vol. 2. Moscow, 1962.

Bakhvalov, N. S. *Chislennye metody*. Moscow, 1973.

Collatz, L. *Chislennye metody resheniia differentsial’nykh uravnenii*. Moscow, 1953. (Translated from German.)

Milne, W. E. *Chislennoe reshenie differentsial’nykh uravnenii*. Moscow, 1955. (Translated from English.)