
Conference Paper: Distributionally Robust Fair Principal Components via Geodesic Descents

Title: Distributionally Robust Fair Principal Components via Geodesic Descents
Authors: Vu, Hieu; Tran, Toan; Yue, Man-Chung; Nguyen, Viet Anh
Issue Date: 25-Apr-2022
Abstract

Principal component analysis is a simple yet useful dimensionality reduction technique in modern machine learning pipelines. In consequential domains such as college admission, healthcare and credit approval, it is imperative to take into account emerging criteria such as the fairness and the robustness of the learned projection. In this paper, we propose a distributionally robust optimization problem for principal component analysis which internalizes a fairness criterion in the objective function. The learned projection thus balances the trade-off between the total reconstruction error and the reconstruction error gap between subgroups, taken in the min-max sense over all distributions in a moment-based ambiguity set. The resulting optimization problem over the Stiefel manifold can be efficiently solved by a Riemannian subgradient descent algorithm with a sub-linear convergence rate. Our experimental results on real-world datasets show the merits of our proposed method over state-of-the-art baselines.
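To illustrate the kind of procedure the abstract describes, the sketch below implements Riemannian subgradient descent on the Stiefel manifold for a simplified fair-PCA objective: total reconstruction error plus a penalty on the absolute error gap between two subgroups. This is an assumption-laden toy version, not the paper's exact distributionally robust (moment-based ambiguity set) formulation; the function name, the penalty weight `lam`, and the diminishing step size are illustrative choices.

```python
import numpy as np

def fair_pca_riemannian(Sigma_a, Sigma_b, k, lam=1.0, steps=500, lr=0.1, seed=0):
    """Riemannian subgradient descent on the Stiefel manifold St(d, k) for a
    simplified fair-PCA surrogate (NOT the paper's DRO objective):
        min_V  total_error(V) + lam * |error_a(V) - error_b(V)|,
    where error_g(V) = tr(Sigma_g) - tr(V^T Sigma_g V).
    """
    d = Sigma_a.shape[0]
    rng = np.random.default_rng(seed)
    # Random orthonormal initialization via reduced QR: V^T V = I_k.
    V, _ = np.linalg.qr(rng.standard_normal((d, k)))
    Sigma = Sigma_a + Sigma_b  # pooled second-moment matrix

    def err(S, V):
        # Reconstruction error of projecting onto span(V) under covariance S.
        return np.trace(S) - np.trace(V.T @ S @ V)

    for t in range(steps):
        gap = err(Sigma_a, V) - err(Sigma_b, V)
        # Euclidean subgradient of the objective with respect to V.
        G = -2 * Sigma @ V + lam * np.sign(gap) * (-2 * Sigma_a @ V + 2 * Sigma_b @ V)
        # Project G onto the tangent space of the Stiefel manifold at V:
        # xi = G - V * sym(V^T G).
        sym = 0.5 * (V.T @ G + G.T @ V)
        xi = G - V @ sym
        # Diminishing step, then retract back onto the manifold via QR.
        V, _ = np.linalg.qr(V - (lr / np.sqrt(t + 1)) * xi)
    return V
```

The QR retraction keeps the iterate orthonormal after each step, which is what makes the update a valid move on the Stiefel manifold rather than plain Euclidean descent.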


Persistent Identifier: http://hdl.handle.net/10722/337790

DC Field: Value
dc.contributor.author: Vu, Hieu
dc.contributor.author: Tran, Toan
dc.contributor.author: Yue, Man-Chung
dc.contributor.author: Nguyen, Viet Anh
dc.date.accessioned: 2024-03-11T10:23:54Z
dc.date.available: 2024-03-11T10:23:54Z
dc.date.issued: 2022-04-25
dc.identifier.uri: http://hdl.handle.net/10722/337790
dc.description.abstract: Principal component analysis is a simple yet useful dimensionality reduction technique in modern machine learning pipelines. In consequential domains such as college admission, healthcare and credit approval, it is imperative to take into account emerging criteria such as the fairness and the robustness of the learned projection. In this paper, we propose a distributionally robust optimization problem for principal component analysis which internalizes a fairness criterion in the objective function. The learned projection thus balances the trade-off between the total reconstruction error and the reconstruction error gap between subgroups, taken in the min-max sense over all distributions in a moment-based ambiguity set. The resulting optimization problem over the Stiefel manifold can be efficiently solved by a Riemannian subgradient descent algorithm with a sub-linear convergence rate. Our experimental results on real-world datasets show the merits of our proposed method over state-of-the-art baselines.
dc.language: eng
dc.relation.ispartof: 10th International Conference on Learning Representations - ICLR 2022 (25/04/2022-25/04/2022)
dc.title: Distributionally Robust Fair Principal Components via Geodesic Descents
dc.type: Conference_Paper
