In (Paulavicius and Zilinskas 2006), we have shown that, for dimension n = 2, a combination of bounds based on the two extreme (infinity and first) norms requires 22% fewer function evaluations than the bound based on the Euclidean norm, and that, for dimension n = 3, combination (7) requires 39% fewer function evaluations than the Euclidean norm used alone.

In (Paulavicius and Zilinskas 2006, 2007), a combination of bounds based on the extreme (infinity and first) and Euclidean norms over the multidimensional simplex I was proposed:

For both GB and BB, they introduced an error matrix and showed the reduction of its Euclidean norm.

In the case m = μ < n, the algorithm finds only some solution x among the many solutions of Ax = b, usually not the particular solution of smallest Euclidean norm, which is the one of interest.
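As a minimal illustration (not code from the cited work), the Moore-Penrose pseudoinverse singles out the smallest-Euclidean-norm solution of an underdetermined system, whereas a generic solution of the same system can have a larger norm:

```python
import numpy as np

# Underdetermined system: m = 1 equation, n = 2 unknowns.
A = np.array([[1.0, 1.0]])
b = np.array([2.0])

# One generic solution of Ax = b (an arbitrary illustrative choice).
x_generic = np.array([2.0, 0.0])

# The minimum Euclidean-norm solution via the pseudoinverse.
x_min = np.linalg.pinv(A) @ b

assert np.allclose(A @ x_generic, b)
assert np.allclose(A @ x_min, b)
# The pseudoinverse solution has the smaller Euclidean norm.
assert np.linalg.norm(x_min) < np.linalg.norm(x_generic)
```

Here both vectors satisfy Ax = b, but only `x_min` = (1, 1) minimizes the Euclidean norm over the solution set.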

Although we concentrate mostly on the lower bound for the A-norm of the error, we also describe an estimate for the Euclidean norm of the error based on [24, Theorem 6.3].

7) describes the decrease of the Euclidean norm of the error in terms of the A-norm of the error at the given steps.

They emphasized, however, that while the A-norm and the Euclidean norm of the error must decrease monotonically at each step, the residual norm oscillates and may even increase at every step but the last.
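This behavior can be observed with a plain CG sketch (an illustrative implementation, not the code analyzed in the cited works): on a symmetric positive definite test matrix, the A-norm and the Euclidean norm of the error decrease monotonically at every step, while the recorded residual norms are free to oscillate:

```python
import numpy as np

def cg_error_history(A, b, x_star, iters):
    """Plain CG on SPD A; records per-step A-norm and Euclidean norm
    of the error and the residual norm."""
    x = np.zeros(len(b))
    r = b - A @ x
    p = r.copy()
    hist = []
    for _ in range(iters):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        e = x_star - x
        hist.append((np.sqrt(e @ (A @ e)),    # A-norm of the error
                     np.linalg.norm(e),       # Euclidean norm of the error
                     np.linalg.norm(r_new)))  # residual norm (may oscillate)
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return hist

rng = np.random.default_rng(0)
M = rng.standard_normal((30, 30))
A = M @ M.T + 30 * np.eye(30)          # SPD test matrix (illustrative)
x_star = rng.standard_normal(30)
b = A @ x_star                         # so x_star is the exact solution
hist = cg_error_history(A, b, x_star, 12)
a_norms = [h[0] for h in hist]
e_norms = [h[1] for h in hist]
# Both error norms decrease monotonically at every step.
assert all(a_norms[k + 1] <= a_norms[k] for k in range(len(a_norms) - 1))
assert all(e_norms[k + 1] <= e_norms[k] for k in range(len(e_norms) - 1))
```

The third entry of each history record, the residual norm, carries no such monotonicity guarantee, which is exactly the contrast the excerpt describes.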

The monotonicity of the A-norm and of the Euclidean norm of the error in CG is preserved (with a small additional inaccuracy) in finite precision computations as well (see [19], [22]).

Finally, using the fact that the monotonicity of the A-norm and of the Euclidean norm of the error is preserved in finite precision CG computations as well (with a small additional inaccuracy; see [19], [22]), we obtain the finite precision analogue of (4.