hyperplane

hyperplane

[¦hī·pər‚plān]
(mathematics)
A hyperplane is an (n - 1)-dimensional subspace of an n-dimensional vector space.
McGraw-Hill Dictionary of Scientific & Technical Terms, 6E, Copyright © 2003 by The McGraw-Hill Companies, Inc.
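For concreteness, a brief illustration (my addition, not part of the McGraw-Hill entry): a hyperplane through the origin is the solution set of one homogeneous linear equation.

```latex
% The subspace case of the definition above: w is a fixed nonzero
% normal vector, and H has dimension n - 1.
H = \{\, x \in \mathbb{R}^n : \langle w, x \rangle = 0 \,\}
% Example in R^3: the plane x_1 + 2x_2 - x_3 = 0 is a 2-dimensional
% subspace, hence a hyperplane.
```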
References in periodicals archive
The rows of A correspond to (random) hyperplanes, and thus Q_{i,j} simply captures on which side of the i-th hyperplane the j-th data point lies.
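A minimal numpy sketch of that construction (sizes and names are illustrative, not from the cited paper): each row of A is a random hyperplane normal, and the sign of A times the data matrix records on which side each point falls.

```python
import numpy as np

rng = np.random.default_rng(0)
n_planes, n_points, dim = 4, 6, 3          # illustrative sizes
A = rng.standard_normal((n_planes, dim))   # rows of A = random hyperplane normals
X = rng.standard_normal((n_points, dim))   # one data point per row

# Q[i, j] in {-1, 0, +1}: which side of the i-th hyperplane
# the j-th data point lies on (0 only if the point is exactly on it)
Q = np.sign(A @ X.T)
print(Q.shape)  # (4, 6)
```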
Consequently, classification is conducted on this representation by identifying a hyperplane that separates the data according to the class labels.
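As a toy sketch of that idea (hypothetical weights and data, mine): once a hyperplane is identified by a normal vector w and bias b, classification reduces to the sign of w·x + b.

```python
import numpy as np

w = np.array([1.0, -2.0])            # assumed hyperplane normal
b = 0.5                              # assumed bias
X = np.array([[3.0, 1.0],
              [-1.0, 2.0]])          # two samples to classify

labels = np.sign(X @ w + b)          # +1 on one side of the hyperplane, -1 on the other
print(labels)                        # -> [ 1. -1.]
```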
To improve the generalization ability of the learning machine, nonlinearly separable samples are mapped into a higher-dimensional space by a kernel function, and the optimal hyperplane is constructed there to make them linearly separable [17, 18].
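A hedged sketch of that approach using scikit-learn (the library choice is mine, not [17, 18]'s): an RBF kernel implicitly maps the samples into a higher-dimensional feature space, where the optimal separating hyperplane is constructed.

```python
import numpy as np
from sklearn.svm import SVC

# XOR-style data: not linearly separable in the original 2-D space
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])

# The RBF kernel maps samples to a higher-dimensional space in which
# a separating hyperplane exists; SVC then fits the optimal one.
clf = SVC(kernel="rbf", gamma=2.0, C=10.0).fit(X, y)
print(clf.predict(X))  # -> [0 1 1 0], the XOR pattern is separated
```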
Note that the step/staircase activation function makes it possible to precisely locate possible discriminative hyperplanes.
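A small illustration of why (my own sketch): a step-activated unit changes its output exactly at the hyperplane w·x + b = 0, so the 0/1 transition pinpoints the discriminative hyperplane.

```python
import numpy as np

def step_unit(x, w, b):
    """Step activation: fires iff x lies on the positive side
    of the hyperplane w·x + b = 0."""
    return np.heaviside(np.dot(w, x) + b, 0.0)

w, b = np.array([2.0, -1.0]), -1.0            # assumed weights and bias
print(step_unit(np.array([1.0, 0.0]), w, b))  # 2 - 1 = 1 > 0   -> 1.0
print(step_unit(np.array([0.0, 1.0]), w, b))  # -1 - 1 = -2 < 0 -> 0.0
```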
The primary difference between RF and oRF is that oRF splits the feature space by using multivariate hyperplanes that are oblique [18, 19].
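A sketch of the distinction (illustrative parameters, not from [18, 19]): a standard RF node tests one feature against a threshold, i.e. an axis-aligned hyperplane, while an oRF node tests a linear combination of features, i.e. an oblique hyperplane.

```python
import numpy as np

x = np.array([0.8, 0.3])               # a sample reaching the split node

# Axis-aligned split (RF): hyperplane x[0] = 0.5
go_left_axis = x[0] < 0.5

# Oblique split (oRF): multivariate hyperplane w·x = t
w, t = np.array([0.6, -1.2]), 0.1      # assumed split parameters
go_left_oblique = (w @ x) < t          # 0.48 - 0.36 = 0.12, not < 0.1

print(go_left_axis, go_left_oblique)   # -> False False
```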
Caption: Figure 2. The maximum-margin hyperplanes. Positive/negative examples are marked as +/-, test examples as dots; the dashed line is the solution of the inductive SVM, and the solid line shows the transductive classification.
SVM can be defined as a method for constructing an optimal hyperplane in a multidimensional space such that the hyperplane separates the two categories with the lowest possible misclassification error (Burges, 1998).
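A minimal scikit-learn sketch of that definition (the toolkit is my choice; Burges (1998) is method-level, not tool-specific): fit the optimal hyperplane on linearly separable toy data and read off w and b.

```python
import numpy as np
from sklearn.svm import SVC

# Two linearly separable clusters (toy data)
X = np.array([[1, 1], [2, 1], [1, 2],
              [5, 5], [6, 5], [5, 6]], dtype=float)
y = np.array([-1, -1, -1, 1, 1, 1])

clf = SVC(kernel="linear", C=1e3).fit(X, y)   # large C approximates a hard margin
w, b = clf.coef_[0], clf.intercept_[0]
print(w, b)                                   # optimal hyperplane: w·x + b = 0
print(clf.predict([[2, 2], [5, 4]]))          # -> [-1  1]
```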
That is, at every iteration, we update the volume with a weighted sum of the orthogonal projections of the current solution onto the set of hyperplanes defined by the experimental measurements.
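That update is in the spirit of Kaczmarz/Cimmino-style projection methods; a hedged sketch (weights and naming are mine, with the weighted sum normalized to a weighted average):

```python
import numpy as np

def projection_step(x, A, b, weights):
    """One iteration: weighted average of the orthogonal projections
    of x onto the hyperplanes a_i . x = b_i (rows of A)."""
    projections = [
        x - ((a @ x - bi) / (a @ a)) * a   # orthogonal projection onto a.x = bi
        for a, bi in zip(A, b)
    ]
    return np.average(projections, axis=0, weights=weights)

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # assumed measurement hyperplanes
b = np.array([1.0, 2.0, 3.0])
x = np.zeros(2)
for _ in range(50):
    x = projection_step(x, A, b, weights=[1.0, 1.0, 1.0])
print(x)  # approaches the consistent solution (1, 2)
```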
The slack vector ζ = (ζ_1, ζ_2, ..., ζ_N)^T measures the extent to which samples violate the support hyperplanes. The primal problem can be converted to the dual one by solving the Karush-Kuhn-Tucker optimality conditions derived from the Lagrange function [35].
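For context, the textbook form of that primal problem (a standard statement, not quoted from [35]):

```latex
% Soft-margin SVM primal: slack \zeta_i measures how far sample i
% violates its supporting hyperplane; C trades margin width against violations.
\begin{aligned}
  \min_{w,\,b,\,\zeta}\quad & \tfrac{1}{2}\lVert w \rVert^{2} + C \sum_{i=1}^{N} \zeta_{i} \\
  \text{s.t.}\quad & y_{i}\,(w \cdot \Phi(x_{i}) + b) \ge 1 - \zeta_{i}, \qquad \zeta_{i} \ge 0, \quad i = 1, \dots, N.
\end{aligned}
```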
Define two standard hyperplanes H_1 : w · Φ(x) + b = +1 and H_2 : w · Φ(x) + b = -1, where w is the weight vector and b the bias.
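From those two canonical hyperplanes the margin follows directly (a standard derivation, added for context):

```latex
% H_1 and H_2 share the normal w and differ by 2 in offset, so their
% separation along w is:
\text{margin} = \frac{|(+1) - (-1)|}{\lVert w \rVert} = \frac{2}{\lVert w \rVert}.
```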
Boser et al. (1992) presented a way to produce nonlinear classifiers by applying the kernel trick to maximum-margin hyperplanes. The current standard formulation (soft margin) was suggested by Cortes and Vapnik in 1993 and published in 1995.