In , the combination of a sparse prior and a nonsparse SAR prior was proposed, but the combination coefficient had to be determined by hand empirically.
and $\alpha = (\alpha_1, \alpha_2, \ldots, \alpha_d)$ is the prior combination coefficient vector, which makes this model more adaptable than the TV prior and the existing $\ell_1$-norm prior.
We have not compared our method with the one in , because the procedure for determining the combination coefficients is not given in .
where $a_{k,l}$ denotes the combination coefficients, which play an important role in the performance of the diffusion network.
We specified the combination coefficients $a_{k,l}$ according to the number of neighboring nodes.
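One simple neighbor-count rule, sketched below under the assumption (not stated in the text) that uniform averaging is meant: each node $k$ assigns $a_{k,l} = 1/|N_k|$ to every node $l$ in its neighborhood $N_k$, so each row of weights sums to one as diffusion strategies require.

```python
# Hypothetical sketch of combination coefficients set by neighbor counts
# (uniform averaging rule); the paper's exact rule is not given in the text.

def uniform_combination_weights(neighbors):
    """neighbors: dict mapping node k -> list of nodes in N_k (including k itself)."""
    weights = {}
    for k, nbrs in neighbors.items():
        a = 1.0 / len(nbrs)  # a_{k,l} = 1/|N_k| for every l in N_k
        for l in nbrs:
            weights[(k, l)] = a
    return weights

# Example: a 3-node path graph; node 1 has neighborhood {0, 1, 2}
w = uniform_combination_weights({0: [0, 1], 1: [0, 1, 2], 2: [1, 2]})
```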
In order to avoid the effects of feature similarity, we used the linear discriminant analysis (LDA) method to project the highly correlated feature dimensions onto the most principal dimension based on linear combination coefficients. The principal dimension of the MDVP:Jitter(%), MDVP:Jitter(Abs), MDVP:RAP, MDVP:PPQ, Jitter:DDP, and NHR features, denoted as MDVP-LDA, was projected using the linear combination coefficients $0.0062$, $4.4 \times 10^5$, $0.0033$, $0.0034$, $0.0099$, and $0.0248$, respectively.
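The projection above reduces to a dot product between the stated coefficients and the six feature values. A minimal sketch, assuming the coefficients are applied in the feature order listed (the sample feature values in the test are illustrative, not from the dataset):

```python
import numpy as np

# Linear combination coefficients from the text, in the order
# [Jitter(%), Jitter(Abs), RAP, PPQ, DDP, NHR].
COEFFS = np.array([0.0062, 4.4e5, 0.0033, 0.0034, 0.0099, 0.0248])

def project_mdvp_lda(features):
    """Collapse the six correlated jitter/NHR features to the single
    MDVP-LDA dimension via the stated linear combination."""
    return float(np.dot(COEFFS, features))
```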
Using linear combinations of the EEFs and optimizing the energy error functions with respect to the linear combination coefficients, the optimal combination of EEFs for model reduction of nonlinear PDEs is obtained; the first 3 initial EEFs and the 3 optimal combined EEFs are shown in Figures 1 and 2.
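The optimization step can be sketched as follows, under the simplifying assumption (the paper's exact energy error function is not given here) that the error is the squared residual between a snapshot and its reconstruction from the initial EEFs; the combination coefficients then follow from least squares. `Phi` and `u` below are illustrative data, not from the paper.

```python
import numpy as np

# Minimal sketch: columns of Phi are the 3 initial EEFs; choose combination
# coefficients c minimizing ||u - Phi @ c||^2 for a snapshot u, then form the
# combined EEF as Phi @ c. Assumed formulation, not the paper's exact procedure.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((100, 3))       # 3 initial EEFs (illustrative)
u = Phi @ np.array([1.0, -0.5, 2.0])      # snapshot lying in their span
c, *_ = np.linalg.lstsq(Phi, u, rcond=None)  # optimal combination coefficients
combined_eef = Phi @ c                    # optimal combined EEF for this snapshot
```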
3) The discriminant fusion strategy is only a linear combination of multiple features with different combination coefficients, so other fusion strategies based on discriminant analysis could be considered.
Then, we introduce two types of linear combinations of base HLK-ELMs, named sparse MHLK-ELM and non-sparse MHLK-ELM, based on different constraints on the combination coefficients $\gamma$.
As shown in Eq. (13), the objective function optimizes the parameter $\beta$ and the kernel combination coefficients $\gamma$ jointly.
Moreover, the $\ell_1$-norm constraint on $\gamma_p$ induces sparse combination coefficients; accordingly, we name this method the sparse MHLK-ELM.
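The sparsifying effect of the $\ell_1$ constraint can be illustrated in isolation: the proximal operator of the $\ell_1$ norm is soft-thresholding, which sets small combination coefficients exactly to zero, whereas an $\ell_2$ (ridge-style) penalty only shrinks them. This is a generic illustration, not the MHLK-ELM solver itself; the example $\gamma$ values are made up.

```python
import numpy as np

def soft_threshold(gamma, lam):
    """Proximal operator of lam * ||gamma||_1: zeroes coefficients
    with magnitude below lam, shrinks the rest toward zero."""
    return np.sign(gamma) * np.maximum(np.abs(gamma) - lam, 0.0)

gamma = np.array([0.8, 0.05, -0.3, 0.02])   # illustrative coefficients
sparse_gamma = soft_threshold(gamma, 0.1)   # l1: small entries become exactly 0
ridge_gamma = gamma / (1 + 2 * 0.1)         # l2 shrinkage: all entries stay nonzero
```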