However, normal matrix multiplication is computed element by element on the matrix.
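Reading "normal" here as ordinary (standard) matrix multiplication, the entry-by-entry computation $C_{ij} = \sum_k A_{ik} B_{kj}$ might be sketched as follows (an illustrative example, not code from the source):

```python
import numpy as np

def matmul_elementwise(A, B):
    """Standard matrix product, computed one entry at a time:
    C[i, j] = sum_k A[i, k] * B[k, j]."""
    n, m = A.shape
    m2, p = B.shape
    assert m == m2, "inner dimensions must agree"
    C = np.zeros((n, p), dtype=np.result_type(A, B))
    for i in range(n):
        for j in range(p):
            for k in range(m):
                C[i, j] += A[i, k] * B[k, j]
    return C

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])
print(np.allclose(matmul_elementwise(A, B), A @ B))  # True
```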

Let $A \in \mathbb{C}^{n \times n}$ be a normal matrix, let $\lambda_1, \lambda_2, \ldots, \lambda_n$ be the eigenvalues of $A$, let $\Delta \in \mathbb{C}^{n \times n}$ be an arbitrary matrix, and let $\mu$ denote an eigenvalue of $A + \Delta$; then there exists an eigenvalue $\lambda_i$ of $A$ such that
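The source truncates before the inequality; by the Bauer–Fike theorem for normal matrices, the conclusion is $\min_i |\mu - \lambda_i| \le \|\Delta\|_2$. A quick numerical check of this bound (illustrative code, not from the source):

```python
import numpy as np

rng = np.random.default_rng(0)

# A Hermitian (hence normal) test matrix and a small perturbation Delta.
X = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
A = (X + X.conj().T) / 2          # Hermitian => normal
Delta = 1e-3 * rng.standard_normal((5, 5))

lam = np.linalg.eigvals(A)
bound = np.linalg.norm(Delta, 2)  # spectral norm of the perturbation

# Every eigenvalue mu of A + Delta lies within `bound` of some lambda_i.
for mu in np.linalg.eigvals(A + Delta):
    assert np.min(np.abs(mu - lam)) <= bound + 1e-12
print("Bauer-Fike bound holds for all perturbed eigenvalues")
```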

Let $\mu$ be an arbitrary eigenvalue of the matrix $-I + (I - \rho M)\Delta(t)$; since $-I$ is a normal matrix and all of the eigenvalues of $-I$ are $-1$, Lemma 7 yields that

After calculating the normal matrix, the full-relations matrix is obtained according to the following steps.

Unless otherwise stated, $A$ is a real normal matrix in this paper.

Matrix transformations of $\lambda$-boundedness fields of normal matrix methods.

For a normal matrix, and assuming that we know the coefficients $\beta_0^{(k)}, \ldots, \beta_{k-1}^{(k)}$, we obtain a $(k + 1) \times n$ linear system for the moduli squared, $\omega_i = |c_i|^2$.

When $A$ is a normal matrix, its $\epsilon$-pseudospectrum is the union of closed disks of radius $\epsilon$ centered at the eigenvalues.
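This disk characterization follows because, for normal $A$, the smallest singular value of $zI - A$ equals the distance from $z$ to the spectrum, so $\sigma_{\min}(zI - A) \le \epsilon$ exactly when $z$ is within $\epsilon$ of some eigenvalue. A small numerical sketch of this equality (illustrative, not from the source):

```python
import numpy as np

rng = np.random.default_rng(1)

# Build a normal matrix A = Q diag(lam) Q^* with unitary Q.
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4)))
lam = rng.standard_normal(4) + 1j * rng.standard_normal(4)
A = Q @ np.diag(lam) @ Q.conj().T

# For normal A: sigma_min(zI - A) == dist(z, spectrum of A),
# so z lies in the eps-pseudospectrum iff dist(z, spectrum) <= eps.
for z in (0.3 + 0.1j, -1.2 + 0.5j, 2.0 - 1.0j):
    smin = np.linalg.svd(z * np.eye(4) - A, compute_uv=False)[-1]
    dist = np.min(np.abs(z - lam))
    assert np.isclose(smin, dist)
print("sigma_min(zI - A) matches distance to spectrum for normal A")
```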

The simplicity of our formulas contrasts with the rather cumbersome task of determining the closest normal matrix to a general square matrix; a Jacobi-type algorithm for the latter endeavor has been described by Ruhe [20].

When the parameter $\epsilon_{V\Sigma W^*}$ is small, the SVD can be expressed as the sum of a normal matrix and a matrix whose norm is small.

The smaller $\chi(W_\alpha)$, the closer $B_\alpha$ is to a normal matrix. Table 3.1 and Figure 3.1 show the computed factors $\chi(W_\alpha)$, $\kappa_b$, and $\kappa$.

For any matrix $A$, $F_k(A) \subset H_k(A)$, and it was argued in [8], based on results in [5], that if $A$ is a normal matrix or a triangular Toeplitz matrix, then these two sets are identical.