principle of optimality

(redirected from Bellman equation)

principle of optimality

[′prin·sə·pəl əv ‚äp·tə′mal·əd·ē]
(control systems)
A principle which states that for optimal systems, any portion of the optimal state trajectory is optimal between the states it joins.
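A minimal sketch of this principle (not part of the entry) using shortest paths, where it is easiest to see: every portion of a shortest path is itself a shortest path between the states it joins. The graph, its weights, and the `shortest_path` helper below are hypothetical, for illustration only.

```python
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra's algorithm; returns (cost, path) from src to dst."""
    pq = [(0, src, [src])]
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in graph.get(node, {}).items():
            if nbr not in seen:
                heapq.heappush(pq, (cost + w, nbr, path + [nbr]))
    return float("inf"), []

# Hypothetical weighted directed graph.
graph = {
    "A": {"B": 1, "C": 4},
    "B": {"C": 2, "D": 5},
    "C": {"D": 1},
    "D": {},
}

cost, path = shortest_path(graph, "A", "D")

# Principle of optimality: the portion of the optimal A->D trajectory
# between any two of its states is optimal between those states.
for i in range(len(path)):
    for j in range(i + 1, len(path)):
        _, sub_path = shortest_path(graph, path[i], path[j])
        assert sub_path == path[i:j + 1]
```

The inner check is exactly the statement in the definition: cutting the optimal trajectory at any two intermediate states leaves an optimal sub-trajectory.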
References in periodicals archive
We will refer to this condition as the Bellman equation for the value function V.
The Bellman equation associated with the problem defined in Eq.
When the exponential decay, Poisson event and information cost are jointly considered, the Bellman equation becomes:
The somewhat unusual Bellman equation for the dual problem can be written
David Romer, an economist at the University of California at Berkeley, used it while looking at the first quarters of some 700 NFL games and has written a paper, "It's Fourth Down and What Does the Bellman Equation Say?"
Defining the value function V(m,n) in the obvious way, the Bellman equation is
A Bellman equation instructs the decisionmaker to vary the policy instrument in order to generate information about unknown parameters and model probabilities.
The Bellman equation implies the following law of motion for μ:
Unemployed and employed workers' Bellman equations are given by:
In the first four chapters of this section, the authors provide the tools, namely, they describe Markov chains, linear stochastic difference equations, dynamic programming, linear quadratic dynamic programming and different methods to compute and numerically approximate (through discretisation of the state space and polynomials) Bellman equations.
They satisfy the following flow versions of the Bellman equations from dynamic programming,
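The Bellman equations referenced in these excerpts are typically solved numerically by value iteration, which applies the recursion V(s) = max_a [r(s, a) + β Σ_{s'} P(s'|s, a) V(s')] until it reaches a fixed point. A minimal sketch on a hypothetical two-state, two-action decision problem (states, rewards, transition probabilities, and the discount factor are illustrative, not drawn from any of the cited papers):

```python
beta = 0.95  # discount factor

rewards = {  # r(s, a)
    (0, "stay"): 1.0, (0, "switch"): 0.0,
    (1, "stay"): 2.0, (1, "switch"): 0.5,
}
trans = {  # P(s' | s, a)
    (0, "stay"): {0: 0.9, 1: 0.1}, (0, "switch"): {0: 0.1, 1: 0.9},
    (1, "stay"): {0: 0.2, 1: 0.8}, (1, "switch"): {0: 0.8, 1: 0.2},
}

# Iterate the Bellman operator until the value function converges.
V = {0: 0.0, 1: 0.0}
while True:
    V_new = {
        s: max(
            rewards[s, a] + beta * sum(p * V[s2] for s2, p in trans[s, a].items())
            for a in ("stay", "switch")
        )
        for s in V
    }
    if max(abs(V_new[s] - V[s]) for s in V) < 1e-10:
        break
    V = V_new

# The policy implied by the fixed point: act greedily with respect to V.
policy = {
    s: max(("stay", "switch"),
           key=lambda a: rewards[s, a]
           + beta * sum(p * V[s2] for s2, p in trans[s, a].items()))
    for s in V
}
```

Because the Bellman operator is a contraction (with modulus β here), the loop converges to the unique value function regardless of the starting guess.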