Article: Adaptive False Discovery Rate Control with Privacy Guarantee

Title: Adaptive False Discovery Rate Control with Privacy Guarantee
Authors: Xia, Xintao; Cai, Zhanrui
Issue Date: 1-Jul-2023
Publisher: Journal of Machine Learning Research
Citation: Journal of Machine Learning Research, 2023, v. 24, p. 1-35
Abstract

Differentially private multiple testing procedures can protect the information of individuals used in hypothesis tests while guaranteeing a small fraction of false discoveries. In this paper, we propose a differentially private adaptive FDR control method that can control the classic FDR metric exactly at a user-specified level α with a privacy guarantee, which is a non-trivial improvement over the differentially private Benjamini-Hochberg method proposed in Dwork et al. (2021). Our analysis is based on two key insights: 1) a novel p-value transformation that preserves both privacy and the mirror conservative property, and 2) a mirror peeling algorithm that allows the construction of the filtration and application of the optimal stopping technique. Numerical studies demonstrate that the proposed DP-AdaPT performs better than existing differentially private FDR control methods. Compared to the non-private AdaPT, it incurs a small accuracy loss but significantly reduces the computation cost.
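The abstract contrasts the proposed method with the classic Benjamini-Hochberg (BH) step-up procedure, the non-private baseline behind Dwork et al.'s DP variant. As background, here is a minimal sketch of that standard BH rule; it is not the paper's DP-AdaPT algorithm, and the function name and example p-values are illustrative:

```python
def benjamini_hochberg(p_values, alpha=0.05):
    """Return sorted indices of hypotheses rejected by the BH step-up rule.

    Controls the FDR at level alpha (for independent p-values).
    """
    m = len(p_values)
    # Sort p-values in ascending order, remembering original indices.
    order = sorted(range(m), key=lambda i: p_values[i])
    # Find the largest rank k (1-based) with p_(k) <= alpha * k / m.
    k_max = 0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= alpha * rank / m:
            k_max = rank
    # Reject the hypotheses with the k_max smallest p-values.
    return sorted(order[:k_max])

# Example: only the two smallest p-values clear their step-up thresholds.
rejected = benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.6, 0.9], alpha=0.05)
```

A differentially private version must make each rejection decision from noised statistics, which is why the paper's p-value transformation and mirror peeling construction are needed.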


Persistent Identifier: http://hdl.handle.net/10722/337713
ISSN: 1532-4435
2023 Impact Factor: 4.3
2023 SCImago Journal Rankings: 2.796


DC Field | Value | Language
dc.contributor.author | Xia, Xintao | -
dc.contributor.author | Cai, Zhanrui | -
dc.date.accessioned | 2024-03-11T10:23:18Z | -
dc.date.available | 2024-03-11T10:23:18Z | -
dc.date.issued | 2023-07-01 | -
dc.identifier.citation | Journal of Machine Learning Research, 2023, v. 24, p. 1-35 | -
dc.identifier.issn | 1532-4435 | -
dc.identifier.uri | http://hdl.handle.net/10722/337713 | -
dc.description.abstract | Differentially private multiple testing procedures can protect the information of individuals used in hypothesis tests while guaranteeing a small fraction of false discoveries. In this paper, we propose a differentially private adaptive FDR control method that can control the classic FDR metric exactly at a user-specified level α with a privacy guarantee, which is a non-trivial improvement compared to the differentially private Benjamini-Hochberg method proposed in Dwork et al. (2021). Our analysis is based on two key insights: 1) a novel p-value transformation that preserves both privacy and the mirror conservative property, and 2) a mirror peeling algorithm that allows the construction of the filtration and application of the optimal stopping technique. Numerical studies demonstrate that the proposed DP-AdaPT performs better compared to the existing differentially private FDR control methods. Compared to the non-private AdaPT, it incurs a small accuracy loss but significantly reduces the computation cost. | -
dc.language | eng | -
dc.publisher | Journal of Machine Learning Research | -
dc.relation.ispartof | Journal of Machine Learning Research | -
dc.title | Adaptive False Discovery Rate Control with Privacy Guarantee | -
dc.type | Article | -
dc.identifier.volume | 24 | -
dc.identifier.spage | 1 | -
dc.identifier.epage | 35 | -
dc.identifier.eissn | 1533-7928 | -
dc.identifier.issnl | 1532-4435 | -
