Article: Deciphering Feature Effects on Decision-Making in Ordinal Regression Problems: An Explainable Ordinal Factorization Model

Title: Deciphering Feature Effects on Decision-Making in Ordinal Regression Problems: An Explainable Ordinal Factorization Model
Authors: Guo, Mengzhuo; Xu, Zhongzhi; Zhang, Qingpeng; Liao, Xiuwu; Liu, Jiapeng
Keywords: decision support; explainable machine learning; factorization machines; ordinal regression
Issue Date: 2021
Citation: ACM Transactions on Knowledge Discovery from Data, 2021, v. 16, n. 3, article no. 59
Abstract: Ordinal regression predicts labels that exhibit a natural ordering, which is vital in decision-making problems such as credit scoring and clinical diagnosis. In these problems, the ability to explain how individual features and their interactions affect decisions is as critical as model performance. Unfortunately, existing ordinal regression models in the machine learning community aim at improving prediction accuracy rather than exploring explainability. To achieve high accuracy while explaining the relationships between the features and the predictions, we propose a new method for ordinal regression problems, namely the Explainable Ordinal Factorization Model (XOFM). XOFM uses piecewise linear functions to approximate the shape functions of individual features, and renders pairwise feature interaction effects as heat-maps. XOFM captures the nonlinearity in the main effects and gives the interaction effects the same flexibility. The underlying model therefore yields comparable performance while remaining explainable, explicitly describing both the main and the interaction effects. To address the potential sparsity caused by discretizing each feature's scale into several sub-intervals, XOFM integrates Factorization Machines (FMs) to factorize the model parameters. Comprehensive experiments with benchmark real-world and synthetic datasets demonstrate that XOFM achieves state-of-the-art prediction performance while preserving easy-to-understand explainability.
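The mechanics described in the abstract — piecewise-linear shape functions over feature sub-intervals, FM-style factorized pairwise weights to handle sparse bins, and ordered cut-points to turn a score into an ordinal label — can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' implementation: the function names, bin thresholds, and cut-points are all assumptions for the sake of the example.

```python
# Illustrative sketch only (not the paper's code): piecewise-linear feature
# encoding plus a Factorization-Machine score with ordered thresholds.
import numpy as np

def piecewise_bins(x, thresholds):
    """Encode a scalar feature with piecewise-linear basis values:
    each sub-interval contributes the fraction of it covered by x."""
    z = np.zeros(len(thresholds) - 1)
    for i, (lo, hi) in enumerate(zip(thresholds[:-1], thresholds[1:])):
        z[i] = np.clip((x - lo) / (hi - lo), 0.0, 1.0)
    return z

def fm_score(z, w0, w, V):
    """FM score: bias + linear terms + factorized pairwise interactions.
    Pairwise weight for bins i, j is <V[i], V[j]>, so even rarely
    activated sub-intervals borrow strength through the latent factors."""
    linear = w @ z
    # Standard O(k*n) FM identity for sum_{i<j} <V_i, V_j> z_i z_j
    inter = 0.5 * np.sum((V.T @ z) ** 2 - (V ** 2).T @ (z ** 2))
    return w0 + linear + inter

rng = np.random.default_rng(0)
z = piecewise_bins(0.7, thresholds=np.array([0.0, 0.25, 0.5, 0.75, 1.0]))
n, k = len(z), 3  # n sub-intervals, k latent factors (assumed sizes)
score = fm_score(z, w0=0.1, w=rng.normal(size=n), V=rng.normal(size=(n, k)))

# Ordinal prediction: ordered cut-points partition the score axis.
cutpoints = np.array([-1.0, 0.0, 1.0])  # illustrative, normally learned
label = int(np.searchsorted(cutpoints, score))
```

In this sketch, explainability comes from the same place the abstract describes: the learned weight per sub-interval traces out a feature's shape function, and the factorized pairwise weights can be evaluated for every bin pair to render an interaction heat-map.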
Persistent Identifier: http://hdl.handle.net/10722/330835
ISSN: 1556-4681
2021 Impact Factor: 4.157
2020 SCImago Journal Rankings: 0.728
ISI Accession Number ID: WOS:000804983600019

 

DC Field: Value
dc.contributor.author: Guo, Mengzhuo
dc.contributor.author: Xu, Zhongzhi
dc.contributor.author: Zhang, Qingpeng
dc.contributor.author: Liao, Xiuwu
dc.contributor.author: Liu, Jiapeng
dc.date.accessioned: 2023-09-05T12:15:04Z
dc.date.available: 2023-09-05T12:15:04Z
dc.date.issued: 2021
dc.identifier.citation: ACM Transactions on Knowledge Discovery from Data, 2021, v. 16, n. 3, article no. 59
dc.identifier.issn: 1556-4681
dc.identifier.uri: http://hdl.handle.net/10722/330835
dc.language: eng
dc.relation.ispartof: ACM Transactions on Knowledge Discovery from Data
dc.subject: decision support
dc.subject: explainable machine learning
dc.subject: factorization machines
dc.subject: ordinal regression
dc.title: Deciphering Feature Effects on Decision-Making in Ordinal Regression Problems: An Explainable Ordinal Factorization Model
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1145/3487048
dc.identifier.scopus: eid_2-s2.0-85134091965
dc.identifier.volume: 16
dc.identifier.issue: 3
dc.identifier.spage: article no. 59
dc.identifier.epage: article no. 59
dc.identifier.eissn: 1556-472X
dc.identifier.isi: WOS:000804983600019
