Dynamic Programming

dynamic programming

[dī¦nam·ik ′prō·grə·miŋ]
(mathematics)
A mathematical technique, more sophisticated than linear programming, for solving a multidimensional optimization problem, which transforms the problem into a sequence of single-stage problems having only one variable each.
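The stage-by-stage decomposition can be illustrated with a small resource-allocation problem (all numbers below are made up for illustration): distribute 5 indivisible units of a resource among 3 activities to maximize total return. Solved jointly, the problem has three interacting variables; dynamic programming handles one activity, and hence one variable, per stage:

```python
# Hypothetical return tables: returns[k][a] is the (made-up) payoff of
# giving a units of the resource to activity k, for a = 0..5.
returns = [
    [0, 4, 6, 7, 7, 7],
    [0, 2, 5, 8, 9, 9],
    [0, 3, 5, 6, 8, 10],
]
B = 5  # total units available

# best[b] = maximum total return from the activities processed so far
# when exactly b units have been spent on them.
best = [0] * (B + 1)
for r in returns:  # one single-variable stage per activity
    best = [max(r[a] + best[b - a] for a in range(b + 1)) for b in range(B + 1)]

print(best[B])  # maximum total return over all allocations of B units
```

With these invented numbers the optimum is 15, achieved by the split (1, 3, 1); each stage optimizes over a single allocation variable instead of searching all three jointly.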

Dynamic Programming

a branch of mathematics devoted to the theory and methods of solving multistage optimal control problems.

Dynamic programming attempts to find, for a control process, the control from among all admissible controls that gives the extreme (largest or smallest) value to a criterion function representing some numerical characteristic of the process. The term “multistage” refers either to the multistage structure of the process or to the breakdown of the control into a number of sequential stages (steps) corresponding, as a rule, to different moments in time. Thus, in so-called “dynamic programming,” the word “programming” is understood to mean “decision-making” or “planning,” and the word “dynamic” indicates the essential role of time and of the sequential performance of operations in the processes and methods under investigation.

The methods of dynamic programming are an integral part of the methods used in studying operations (operations research) and are applied both in problems of optimal planning and in solving various engineering problems (for example, in determining the optimal dimensions of multistage rockets, in the optimal routing of roads, and so on).

Suppose, for example, that the control process of a certain system consists of m steps (stages) and that at step i the control yi transforms the system from the state xi - 1 into a new state xi, which depends on xi - 1 and yi:

xi = xi(yi, xi - 1)

Thus, the control sequence y1, y2, … , ym transforms the system from an initial state x0 into a final state xm. It is necessary to select x0 and y1, … , ym such that the function

F = F(xm)

attains its maximum value F*. The basic method of dynamic programming is to reduce this general problem to a sequence of simpler extremal problems. By using the so-called principle of optimality formulated by the American mathematician R. Bellman, it is easy to obtain the basic recurrence relations:

ωm(xm - 1) = max over ym of F(xm(ym, xm - 1))
ωi(xi - 1) = max over yi of ωi + 1(xi(yi, xi - 1)),   i = m - 1, … , 1
F* = max over x0 of ω1(x0)

Thus, the method of dynamic programming requires solving this recurrence relation. In the solution process, the sequence of stages is traversed twice: the first time from the end to the beginning (the optimal values F* and x0* are found) and the second time from the beginning to the end (the optimal control sequence y1*, … , ym* is found).
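The two-pass procedure described above can be sketched in code for a finite toy problem. This is a minimal illustration, not the general method: the state set, control set, transition xi = xi(yi, xi - 1), and criterion F(xm) below are all invented for the example.

```python
def solve(states, controls, transition, criterion, m):
    # Backward pass (from the end to the beginning): value[x] holds the best
    # criterion value attainable from state x over the remaining steps;
    # initially value is just F evaluated at the final state xm.
    value = {x: criterion(x) for x in states}
    policy = []  # policy[i - 1][x] = optimal control at step i from state x
    for _ in range(m):
        step_policy = {
            x: max(controls, key=lambda y, x=x: value[transition(y, x)])
            for x in states
        }
        value = {x: value[transition(step_policy[x], x)] for x in states}
        policy.insert(0, step_policy)
    # Optimal initial state x0* and optimal value F*.
    x0 = max(states, key=lambda x: value[x])
    F_star = value[x0]
    # Forward pass (from the beginning to the end): recover y1*, ..., ym*.
    ys, x = [], x0
    for step_policy in policy:
        ys.append(step_policy[x])
        x = transition(step_policy[x], x)
    return F_star, x0, ys

# Toy instance (invented data): states 0..5, controls -1/0/+1, transition
# clamps x + y into [0, 5], and the criterion rewards a large final state.
F_star, x0, ys = solve(
    states=range(6),
    controls=[-1, 0, 1],
    transition=lambda y, x: min(5, max(0, x + y)),
    criterion=lambda x: x,
    m=2,
)
```

The backward pass tabulates, for each state, the best achievable criterion value and the control that achieves it; the forward pass then reads off the optimal control sequence starting from the optimal initial state.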

The methods of dynamic programming are used not only in discrete but also in continuous control processes, for example, in processes where decisions must be made at every moment of a certain interval of time. Thus dynamic programming has offered a new approach to problems in the calculus of variations.

Although the method of dynamic programming allows a radical initial simplification of many important problems, its direct application still entails cumbersome computations. Approximate methods of dynamic programming are being worked out to overcome these difficulties.

REFERENCES

Bellman, R. Dinamicheskoe programmirovanie (Dynamic Programming). Moscow, 1960. (Translated from English.)
Hadley, G. Nelineinoe i dinamicheskoe programmirovanie (Nonlinear and Dynamic Programming). Moscow, 1967. (Translated from English.)

V. G. KARMANOV

