vector multiplication

[¦vek·tər ‚məl·tə·plə′kā·shən]
(mathematics)
The operation which associates to each ordered pair of vectors the cross product of these two vectors.
McGraw-Hill Dictionary of Scientific & Technical Terms, 6E, Copyright © 2003 by The McGraw-Hill Companies, Inc.
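
For concreteness, a minimal worked example of this definition (the input vectors below are illustrative only; NumPy's np.cross is used as a cross-check):

```python
import numpy as np

# Two ordered 3-vectors (illustrative values).
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# Cross product via the component formula:
# (a2*b3 - a3*b2, a3*b1 - a1*b3, a1*b2 - a2*b1)
c = np.array([
    a[1] * b[2] - a[2] * b[1],
    a[2] * b[0] - a[0] * b[2],
    a[0] * b[1] - a[1] * b[0],
])

print(c)               # [-3.  6. -3.]
print(np.cross(a, b))  # same result from NumPy's built-in
```

Because the pair is ordered, swapping the two vectors negates the result: the operation is anticommutative.
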
References in periodicals archive
The use of these orthogonal matrices permits the use of fast transform techniques, allowing the matrix-vector multiplication to be performed in order N log2(N) operations instead of the order N^2 operations otherwise required to obtain the transformed symbol vector X^n(k).
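
As a hedged illustration of this kind of complexity reduction (a sketch only, assuming the fast transform is the DFT computed by an FFT; the excerpt's orthogonal matrices may be a different transform):

```python
import numpy as np

N = 8
rng = np.random.default_rng(0)
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# Explicit N x N DFT matrix: applying it costs order N^2 operations.
F = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N)
y_dense = F @ x

# Fast transform: the same product in order N*log2(N) operations.
y_fft = np.fft.fft(x)

print(np.allclose(y_dense, y_fft))  # True
```
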
Currently, researchers have proposed many optimized methods for the scale-free network model: using sparse matrix-vector multiplication to construct scale-free networks [16], using the internal weighted average method to calculate the configuration parameters of scale-free networks [17], and using boosting regression and Bayesian algorithms to construct prior information and establish scale-free networks based on that prior information [18].
Thus, the upper half of the vector multiplication F_2K μ_2K gives the matrix-vector multiplication T_K μ_K.
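
A sketch of the embedding behind this kind of identity, under the assumption that F_2K denotes the 2K x 2K circulant extension of the K x K Toeplitz matrix T_K and that μ_2K is μ_K padded with K zeros; the excerpt's own definitions may differ:

```python
import numpy as np
from scipy.linalg import toeplitz

K = 4
rng = np.random.default_rng(1)
c = rng.standard_normal(K)                                # first column of T_K
r = np.concatenate(([c[0]], rng.standard_normal(K - 1)))  # first row of T_K
T_K = toeplitz(c, r)
mu_K = rng.standard_normal(K)

# First column of the 2K x 2K circulant that embeds T_K:
# the column of T_K, one free entry (0 here), then the reversed row tail.
col = np.concatenate([c, [0.0], r[1:][::-1]])
mu_2K = np.concatenate([mu_K, np.zeros(K)])

# Circulant-times-vector via FFT, i.e. the length-2K product F_2K @ mu_2K.
y_2K = np.real(np.fft.ifft(np.fft.fft(col) * np.fft.fft(mu_2K)))

# Its upper half reproduces the Toeplitz matrix-vector product T_K @ mu_K.
print(np.allclose(y_2K[:K], T_K @ mu_K))  # True
```
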
The right-sided computation consists mainly of one tridiagonal matrix-vector multiplication, many constant-vector multiplications, and many vector-vector additions.
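
A minimal sketch of a tridiagonal matrix-vector multiplication that stores only the three diagonals, so the product costs order-n work (the function name and test values are illustrative):

```python
import numpy as np

def tridiag_matvec(lower, diag, upper, x):
    """y = T @ x for a tridiagonal T stored as its three diagonals.

    lower: sub-diagonal, length n-1
    diag:  main diagonal, length n
    upper: super-diagonal, length n-1
    """
    y = diag * x                 # main-diagonal contribution
    y[:-1] += upper * x[1:]      # super-diagonal contribution
    y[1:] += lower * x[:-1]      # sub-diagonal contribution
    return y

# Check against the dense product on a small example.
n = 5
rng = np.random.default_rng(2)
lower, diag, upper = rng.standard_normal(n - 1), rng.standard_normal(n), rng.standard_normal(n - 1)
x = rng.standard_normal(n)
T = np.diag(diag) + np.diag(upper, 1) + np.diag(lower, -1)
print(np.allclose(tridiag_matvec(lower, diag, upper, x), T @ x))  # True
```
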
White, III and P. Sadayappan, "On Improving the Performance of Sparse Matrix-Matrix Vector Multiplication," IEEE Conference, 1997.
Other topics include scheduling intervals for reconfigurable computing, simultaneous retiming and placement for pipelined netlists, optical flow calculations on FPGA and GPU architectures, and sparse matrix-vector multiplication on a reconfigurable supercomputer.
Algorithm: Vector Multiplication with a Generalized Tensor Product
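
The line above is only an algorithm title; as one common reading of "tensor product" (an assumption, since the cited algorithm's generalized tensor product may be defined differently), the sketch below multiplies a vector by a Kronecker product A ⊗ B without forming the full matrix, using the identity (A ⊗ B) vec(X) = vec(B X A^T):

```python
import numpy as np

def kron_matvec(A, B, x):
    """Compute (A ⊗ B) @ x without forming the Kronecker product,
    via the identity (A ⊗ B) vec(X) = vec(B @ X @ A.T),
    where vec stacks columns and x = vec(X)."""
    p, q = A.shape
    m, n = B.shape
    X = x.reshape(n, q, order="F")       # unstack x into an n x q matrix
    Y = B @ X @ A.T                      # m x p result
    return Y.reshape(m * p, order="F")   # restack columns into a vector

# Check against the explicit Kronecker product on a small example.
rng = np.random.default_rng(3)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((2, 5))
x = rng.standard_normal(4 * 5)
print(np.allclose(kron_matvec(A, B, x), np.kron(A, B) @ x))  # True
```
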
Bordawekar, "Optimizing Sparse Matrix-Vector Multiplication on GPUs," Tech.
The method is based on simple but very powerful matrix and vector multiplication approaches that ensure all patterns can be discovered quickly.
Fiore and Gennaro [2] proposed schemes to securely outsource the evaluation of multivariate polynomials and matrix-vector multiplication and to verify the corresponding result publicly.
where each r_j is a vector of length (m + n)/l, and the K^T r vector multiplication result becomes:
On the other hand, the proposed stream acceleration approach directly updates the RC or BRC matrix structure, which allows for very efficient matrix-vector multiplication as well as Gauss-Seidel relaxation, and the approach is always faster than the SuiteSparse library.
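
The RC and BRC layouts are specific to the cited work, but a hedged sketch of plain CSR (compressed sparse row) storage shows why a compressed row structure makes the matrix-vector product efficient: only the stored nonzeros are touched.

```python
import numpy as np

def csr_matvec(data, indices, indptr, x):
    """y = A @ x for A stored in CSR form:
    data    - nonzero values, row by row
    indices - column index of each stored value
    indptr  - indptr[i]:indptr[i+1] slices row i out of data/indices
    Work is proportional to the number of nonzeros, not to n*n."""
    n_rows = len(indptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):
        start, end = indptr[i], indptr[i + 1]
        y[i] = np.dot(data[start:end], x[indices[start:end]])
    return y

# Small check against a dense product (values are illustrative).
A = np.array([[4.0, 0.0, 1.0],
              [0.0, 3.0, 0.0],
              [2.0, 0.0, 5.0]])
data    = np.array([4.0, 1.0, 3.0, 2.0, 5.0])
indices = np.array([0, 2, 1, 0, 2])
indptr  = np.array([0, 2, 3, 5])
x = np.array([1.0, 2.0, 3.0])
print(np.allclose(csr_matvec(data, indices, indptr, x), A @ x))  # True
```

Production libraries such as SuiteSparse or scipy.sparse implement the same access pattern in compiled code; the Python loop here is only meant to make that pattern explicit.
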