The linear classifier is the hyperplane H (w·x + b = 0) of maximum width, that is, the hyperplane that maximizes the distance between the bounding hyperplanes H1 and H2.
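As a quick numeric illustration (the weight vector and offset below are made-up values, not taken from the excerpt): for the canonical margin boundaries w·x + b = ±1, the width between H1 and H2 works out to 2/||w||.

```python
import numpy as np

# Illustrative sketch: for a separating hyperplane H: w.x + b = 0 with
# margin boundaries H1: w.x + b = +1 and H2: w.x + b = -1, the width
# between H1 and H2 is 2 / ||w||.
w = np.array([3.0, 4.0])  # made-up weight vector, ||w|| = 5
b = -2.0                  # made-up offset (does not affect the width)

width = 2.0 / np.linalg.norm(w)
print(width)  # 0.4
```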
[5] defined by the Plücker coordinates of the lines in L, and the hyperplanes defined by the Plücker coefficients of the lines in ⟨?_i | i ≤ n⟩ bijectively onto the hyperplanes of V that contain the intersection.
Under some mild conditions, Shao and Wu show that the proposed Criterion LS-C selects the true number of regression hyperplanes with probability one among all class-growing sequences of classifications, as the number of observations n drawn from the population increases to infinity.
[7] have used supporting hyperplanes and efficient surfaces to calculate marginal rates of substitution in DEA (data envelopment analysis).
If two adjacent facets of P are incident to υ, then the Euclidean dihedral angle between them is equal to the hyperbolic dihedral angle between their supporting hyperplanes.
Let ?_e be the random variable that counts how many of the N hyperplanes separate the end-vertices of e.
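A small simulation may make that count concrete; the distribution of the random hyperplanes below is an assumption for illustration, not something the excerpt specifies.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hedged sketch: draw N random hyperplanes w.x + b = 0 and count how
# many of them separate the end-vertices u and v of an edge e, i.e.
# place u and v on opposite sides. The Gaussian/uniform choices below
# are illustrative assumptions.
N = 1000
u = np.array([0.0, 0.0])  # end-vertices of the edge e (made up)
v = np.array([1.0, 1.0])

W = rng.normal(size=(N, 2))          # random normal vectors
B = rng.uniform(-2.0, 2.0, size=N)   # random offsets

separating = np.sign(W @ u + B) != np.sign(W @ v + B)
print(int(separating.sum()), "of", N, "hyperplanes separate u and v")
```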
The support vectors are the training samples that define the optimal separating hyperplane and are the most difficult patterns to classify.
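For instance, a fitted linear SVM exposes exactly these samples; the toy data below are invented, and scikit-learn is assumed to be available.

```python
import numpy as np
from sklearn.svm import SVC

# Sketch: fit a linear SVM on toy 2-D data and inspect which training
# samples become support vectors: the points that pin down the
# optimal separating hyperplane.
X = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0],
              [3.0, 3.0], [4.0, 4.0], [5.0, 3.0]])
y = np.array([0, 0, 0, 1, 1, 1])

clf = SVC(kernel="linear", C=1.0).fit(X, y)
print(clf.support_vectors_)       # the borderline training samples
print(clf.coef_, clf.intercept_)  # w and b of the hyperplane
```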
Thus, a way to reduce this task to SVM learning is to construct a new instance for each pair of ranked instances in the training data and to learn a hyperplane on this new training data.
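A minimal sketch of that pairwise construction, under the usual RankSVM-style reduction (the feature vectors and ranks below are made up):

```python
import numpy as np
from itertools import combinations
from sklearn.svm import LinearSVC

# Sketch: for each pair of differently-ranked training instances, build
# a difference vector labeled by which instance ranks higher, then learn
# an ordinary separating hyperplane on these new instances.
X = np.array([[1.0, 0.2], [0.5, 0.9], [0.1, 0.4], [0.9, 0.8]])
ranks = np.array([3, 2, 1, 4])  # higher = preferred (made-up data)

pairs, labels = [], []
for i, j in combinations(range(len(X)), 2):
    if ranks[i] == ranks[j]:
        continue  # ties carry no ordering information
    pairs.append(X[i] - X[j])
    labels.append(1 if ranks[i] > ranks[j] else -1)

clf = LinearSVC().fit(np.array(pairs), np.array(labels))
print(clf.coef_)  # w scores instances: higher w.x means higher rank
```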
The proposed approach consists in dissecting the extended disease region set with a pair of hyperplanes perpendicular to the direction vector of one dissecting class among the others.
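One way to picture such a dissection (the direction vector, offsets, and point cloud below are illustrative assumptions, not the paper's data):

```python
import numpy as np

# Sketch: cut a point set with a pair of parallel hyperplanes
# perpendicular to a direction vector d, keeping the slab of points
# whose projections onto d fall between the two offsets.
rng = np.random.default_rng(1)
points = rng.normal(size=(200, 3))   # made-up point cloud

d = np.array([1.0, 0.0, 0.0])
d = d / np.linalg.norm(d)            # unit direction vector
t_lo, t_hi = -0.5, 0.5               # offsets of the two hyperplanes

proj = points @ d
slab = points[(proj >= t_lo) & (proj <= t_hi)]
print(len(slab), "points lie between the two hyperplanes")
```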
It then employs hyperplanes to separate the positive data from the negative data.
We want to choose the parameters w and b to maximize the margin, that is, the distance between the parallel hyperplanes, making them as far apart as possible while still separating the data.
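In a fitted hard-margin linear SVM this shows up directly: minimizing ||w|| subject to the separation constraints maximizes the margin 2/||w||. A sketch with invented, linearly separable data (a large C approximates the hard margin):

```python
import numpy as np
from sklearn.svm import SVC

# Sketch: maximizing the margin is equivalent to minimizing ||w||^2
# subject to y_i (w.x_i + b) >= 1; after fitting, the achieved margin
# can be read off as 2 / ||w||.
X = np.array([[0.0, 0.0], [0.0, 1.0], [2.0, 0.0], [2.0, 1.0]])
y = np.array([-1, -1, 1, 1])

clf = SVC(kernel="linear", C=1e6).fit(X, y)  # large C ~ hard margin
w, b = clf.coef_[0], clf.intercept_[0]
print(2.0 / np.linalg.norm(w))  # margin width, here approximately 2.0
```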