
Article: Feature weight estimation for gene selection: A local hyperlinear learning approach

Title: Feature weight estimation for gene selection: A local hyperlinear learning approach
Authors: Cai, Hongmin; Ruan, Peiying; Ng, Michael; Akutsu, Tatsuya
Keywords: Feature weighting; Classification; RELIEF; KNN; Local hyperplane
Issue Date: 2014
Citation: BMC Bioinformatics, 2014, v. 15, n. 1, article no. 70
Abstract: Background: Modeling high-dimensional data involving thousands of variables is particularly important for gene expression profiling experiments; nevertheless, it remains a challenging task. One of the challenges is to implement an effective method for selecting a small set of relevant genes buried in high-dimensional, irrelevant noise. RELIEF is a popular and widely used approach for feature selection owing to its low computational cost and high accuracy. However, RELIEF-based methods suffer from instability, especially in the presence of noisy and/or high-dimensional outliers.
Results: We propose an innovative feature weighting algorithm, called LHR, to select informative genes from highly noisy data. LHR is based on RELIEF for feature weighting using classical margin maximization. The key idea of LHR is to estimate the feature weights through local approximation rather than the global measurement typically used in existing methods. The weights obtained by our method are very robust against degradation from noisy features, even in very high dimensions. To demonstrate the performance of our method, extensive classification experiments were carried out on both synthetic and real microarray benchmark datasets by combining the proposed technique with standard classifiers, including the support vector machine (SVM), k-nearest neighbor (KNN), hyperplane k-nearest neighbor (HKNN), linear discriminant analysis (LDA) and naive Bayes (NB).
Conclusion: Experiments on both synthetic and real-world datasets demonstrate the superior performance of the proposed feature selection method combined with supervised learning in three aspects: 1) high classification accuracy, 2) excellent robustness to noise and 3) good stability across various classification algorithms. © 2014 Cai et al.; licensee BioMed Central Ltd.
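For orientation only: the abstract describes LHR as building on the classical RELIEF weighting scheme, so a minimal sketch of a standard two-class RELIEF update is given below. This is not the authors' LHR algorithm (which, per the abstract, replaces the single nearest hit/miss with local hyperplane approximations); the function name, parameters, and the L1 distance choice are illustrative assumptions.

import numpy as np

def relief_weights(X, y, n_iter=100, seed=None):
    """Classical two-class RELIEF feature weighting (Kira & Rendell).

    X : (n_samples, n_features) float array; y : (n_samples,) binary labels.
    Returns one weight per feature; larger weights suggest more relevant features.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape

    # Scale each feature to [0, 1] so per-feature differences are comparable.
    lo, hi = X.min(axis=0), X.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)
    Xs = (X - lo) / span

    w = np.zeros(d)
    for _ in range(n_iter):
        i = rng.integers(n)                       # pick a random reference sample
        xi, yi = Xs[i], y[i]
        dist = np.abs(Xs - xi).sum(axis=1)        # L1 distance to every sample
        dist[i] = np.inf                          # never pick the sample itself
        same = (y == yi)
        hit = np.argmin(np.where(same, dist, np.inf))    # nearest same-class sample
        miss = np.argmin(np.where(~same, dist, np.inf))  # nearest other-class sample
        # Features that separate the nearest miss but not the nearest hit gain weight.
        w += np.abs(xi - Xs[miss]) - np.abs(xi - Xs[hit])
    return w / n_iter

In a gene-selection setting, features would then be ranked by w and the top-k kept (e.g. np.argsort(w)[::-1][:k]) before training any of the classifiers listed in the abstract; the LHR variant itself should be taken from the article (DOI 10.1186/1471-2105-15-70).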
Persistent Identifier: http://hdl.handle.net/10722/276988
ISSN: 1471-2105
2023 Impact Factor: 2.9
2023 SCImago Journal Rankings: 1.005
PubMed Central ID: PMC4007530
ISI Accession Number ID: WOS:000334543400001

 

DC Field: Value
dc.contributor.author: Cai, Hongmin
dc.contributor.author: Ruan, Peiying
dc.contributor.author: Ng, Michael
dc.contributor.author: Akutsu, Tatsuya
dc.date.accessioned: 2019-09-18T08:35:16Z
dc.date.available: 2019-09-18T08:35:16Z
dc.date.issued: 2014
dc.identifier.citation: BMC Bioinformatics, 2014, v. 15, n. 1, article no. 70
dc.identifier.issn: 1471-2105
dc.identifier.uri: http://hdl.handle.net/10722/276988
dc.description.abstract: Background: Modeling high-dimensional data involving thousands of variables is particularly important for gene expression profiling experiments; nevertheless, it remains a challenging task. One of the challenges is to implement an effective method for selecting a small set of relevant genes buried in high-dimensional, irrelevant noise. RELIEF is a popular and widely used approach for feature selection owing to its low computational cost and high accuracy. However, RELIEF-based methods suffer from instability, especially in the presence of noisy and/or high-dimensional outliers. Results: We propose an innovative feature weighting algorithm, called LHR, to select informative genes from highly noisy data. LHR is based on RELIEF for feature weighting using classical margin maximization. The key idea of LHR is to estimate the feature weights through local approximation rather than the global measurement typically used in existing methods. The weights obtained by our method are very robust against degradation from noisy features, even in very high dimensions. To demonstrate the performance of our method, extensive classification experiments were carried out on both synthetic and real microarray benchmark datasets by combining the proposed technique with standard classifiers, including the support vector machine (SVM), k-nearest neighbor (KNN), hyperplane k-nearest neighbor (HKNN), linear discriminant analysis (LDA) and naive Bayes (NB). Conclusion: Experiments on both synthetic and real-world datasets demonstrate the superior performance of the proposed feature selection method combined with supervised learning in three aspects: 1) high classification accuracy, 2) excellent robustness to noise and 3) good stability across various classification algorithms. © 2014 Cai et al.; licensee BioMed Central Ltd.
dc.language: eng
dc.relation.ispartof: BMC Bioinformatics
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.subject: Feature weighting
dc.subject: Classification
dc.subject: RELIEF
dc.subject: KNN
dc.subject: Local hyperplane
dc.title: Feature weight estimation for gene selection: A local hyperlinear learning approach
dc.type: Article
dc.description.nature: published_or_final_version
dc.identifier.doi: 10.1186/1471-2105-15-70
dc.identifier.pmid: 24625071
dc.identifier.pmcid: PMC4007530
dc.identifier.scopus: eid_2-s2.0-84899071680
dc.identifier.volume: 15
dc.identifier.issue: 1
dc.identifier.spage: article no. 70
dc.identifier.epage: article no. 70
dc.identifier.isi: WOS:000334543400001
dc.identifier.issnl: 1471-2105
