Article: Generalized multiple kernel learning with data-dependent priors

Title: Generalized multiple kernel learning with data-dependent priors
Authors: Mao, Qi; Tsang, Ivor W.; Gao, Shenghua; Wang, Li
Keywords: Data fusion; Dirty data; Missing views; Multiple kernel learning; Partial correspondence; Semisupervised learning
Issue Date: 2015
Citation: IEEE Transactions on Neural Networks and Learning Systems, 2015, v. 26, n. 6, p. 1134-1148
Abstract: Multiple kernel learning (MKL) and classifier ensemble are two mainstream methods for solving learning problems in which some sets of features/views are more informative than others, or the features/views within a given set are inconsistent. In this paper, we first present a novel probabilistic interpretation of MKL such that maximum entropy discrimination with a noninformative prior over multiple views is equivalent to the formulation of MKL. Instead of using the noninformative prior, we introduce a novel data-dependent prior based on an ensemble of kernel predictors, which enhances the prediction performance of MKL by leveraging the merits of the classifier ensemble. With the proposed probabilistic framework of MKL, we propose a hierarchical Bayesian model to learn the proposed data-dependent prior and classification model simultaneously. The resultant problem is convex and other information (e.g., instances with either missing views or missing labels) can be seamlessly incorporated into the data-dependent priors. Furthermore, a variety of existing MKL models can be recovered under the proposed MKL framework and can be readily extended to incorporate these priors. Extensive experiments demonstrate the benefits of our proposed framework in supervised and semisupervised settings, as well as in tasks with partial correspondence among multiple views.
Persistent Identifier: http://hdl.handle.net/10722/345089
ISSN: 2162-237X
2023 Impact Factor: 10.2
2023 SCImago Journal Rankings: 4.170
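
For orientation, the following is a minimal, hypothetical Python sketch of the plain multiple kernel learning setup the abstract builds on: several base kernels are combined with data-dependent weights (here a simple kernel-target-alignment heuristic) and the combined kernel is passed to an SVM. It is only an illustration of the general MKL idea; it does not implement the paper's maximum entropy discrimination formulation, data-dependent priors, or hierarchical Bayesian model, and the alignment heuristic and all names are assumptions, not taken from the paper.

# Illustrative sketch only: a heuristic form of multiple kernel learning
# (combine several base kernels with data-dependent weights), NOT the
# hierarchical Bayesian model described in the article above.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import linear_kernel, rbf_kernel, polynomial_kernel
from sklearn.svm import SVC

# Toy data standing in for multiple feature views.
X, y = make_classification(n_samples=200, n_features=20, random_state=0)
y_pm = 2 * y - 1  # labels in {-1, +1}

# Base kernels, one per "view" (here: different kernel functions on the same X).
kernels = [linear_kernel(X), rbf_kernel(X, gamma=0.1), polynomial_kernel(X, degree=2)]

def alignment(K, y_pm):
    # Kernel-target alignment: similarity between K and the ideal kernel y y^T.
    Y = np.outer(y_pm, y_pm)
    return np.sum(K * Y) / (np.linalg.norm(K) * np.linalg.norm(Y))

# Data-dependent kernel weights from alignment scores (a common MKL heuristic).
scores = np.array([alignment(K, y_pm) for K in kernels])
weights = np.clip(scores, 1e-12, None)
weights /= weights.sum()

# Combined kernel and an SVM trained on the precomputed Gram matrix.
K_combined = sum(w * K for w, K in zip(weights, kernels))
clf = SVC(kernel="precomputed").fit(K_combined, y)
print("kernel weights:", np.round(weights, 3))
print("training accuracy:", clf.score(K_combined, y))

In the article's framework the kernel weights are not fixed by a heuristic like this but are learned jointly with the classifier, with the data-dependent prior supplying extra information such as instances with missing views or missing labels.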

 

DC Field: Value
dc.contributor.author: Mao, Qi
dc.contributor.author: Tsang, Ivor W.
dc.contributor.author: Gao, Shenghua
dc.contributor.author: Wang, Li
dc.date.accessioned: 2024-08-15T09:25:09Z
dc.date.available: 2024-08-15T09:25:09Z
dc.date.issued: 2015
dc.identifier.citation: IEEE Transactions on Neural Networks and Learning Systems, 2015, v. 26, n. 6, p. 1134-1148
dc.identifier.issn: 2162-237X
dc.identifier.uri: http://hdl.handle.net/10722/345089
dc.description.abstract: Multiple kernel learning (MKL) and classifier ensemble are two mainstream methods for solving learning problems in which some sets of features/views are more informative than others, or the features/views within a given set are inconsistent. In this paper, we first present a novel probabilistic interpretation of MKL such that maximum entropy discrimination with a noninformative prior over multiple views is equivalent to the formulation of MKL. Instead of using the noninformative prior, we introduce a novel data-dependent prior based on an ensemble of kernel predictors, which enhances the prediction performance of MKL by leveraging the merits of the classifier ensemble. With the proposed probabilistic framework of MKL, we propose a hierarchical Bayesian model to learn the proposed data-dependent prior and classification model simultaneously. The resultant problem is convex and other information (e.g., instances with either missing views or missing labels) can be seamlessly incorporated into the data-dependent priors. Furthermore, a variety of existing MKL models can be recovered under the proposed MKL framework and can be readily extended to incorporate these priors. Extensive experiments demonstrate the benefits of our proposed framework in supervised and semisupervised settings, as well as in tasks with partial correspondence among multiple views.
dc.language: eng
dc.relation.ispartof: IEEE Transactions on Neural Networks and Learning Systems
dc.subject: Data fusion
dc.subject: Dirty data
dc.subject: Missing views
dc.subject: Multiple kernel learning
dc.subject: Partial correspondence
dc.subject: Semisupervised learning
dc.title: Generalized multiple kernel learning with data-dependent priors
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/TNNLS.2014.2334137
dc.identifier.scopus: eid_2-s2.0-85027945338
dc.identifier.volume: 26
dc.identifier.issue: 6
dc.identifier.spage: 1134
dc.identifier.epage: 1148
dc.identifier.eissn: 2162-2388
