Article: Differentially private SGD with non-smooth losses

Title: Differentially private SGD with non-smooth losses
Authors: Wang, Puyu; Lei, Yunwen; Ying, Yiming; Zhang, Hai
Keywords: Algorithmic stability; Differential privacy; Generalization; Stochastic gradient descent
Issue Date: 2022
Citation: Applied and Computational Harmonic Analysis, 2022, v. 56, p. 306-336
Abstract: In this paper, we are concerned with differentially private stochastic gradient descent (SGD) algorithms in the setting of stochastic convex optimization (SCO). Most existing work requires the loss to be Lipschitz continuous and strongly smooth, and the model parameter to be uniformly bounded. However, these assumptions are restrictive, as many popular losses violate them, including the hinge loss for SVM, the absolute loss in robust regression, and even the least squares loss in an unbounded domain. We significantly relax these assumptions and establish privacy and generalization (utility) guarantees for private SGD algorithms using output and gradient perturbations with non-smooth convex losses. Specifically, the loss function is only required to have an α-Hölder continuous gradient (referred to as α-Hölder smoothness), which interpolates between Lipschitz continuity (α = 0) and strong smoothness (α = 1). We prove that noisy SGD with α-Hölder smooth losses using gradient perturbation can guarantee (ϵ,δ)-differential privacy (DP) and attain the optimal excess population risk O(√(d log(1/δ))/(nϵ) + 1/√n), up to logarithmic terms, with gradient complexity O(n^((2−α)/(1+α)) + n). This exhibits an important trade-off between the α-Hölder smoothness of the loss and the computational complexity of private SGD with statistically optimal performance. In particular, our results show that α-Hölder smoothness with α ≥ 1/2 is sufficient to guarantee (ϵ,δ)-DP of noisy SGD algorithms while achieving the optimal excess risk with linear gradient complexity O(n).
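
The final claim follows directly from the exponent in the gradient-complexity bound: O(n^((2−α)/(1+α)) + n) is linear in n exactly when the exponent is at most 1. In LaTeX form:

\[
  \frac{2-\alpha}{1+\alpha} \le 1
  \iff 2 - \alpha \le 1 + \alpha
  \iff \alpha \ge \tfrac{1}{2},
\]

so for α ≥ 1/2 the bound collapses to O(n), while at the non-smooth extreme α = 0 it degrades to O(n²).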
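
To make the gradient-perturbation mechanism concrete, here is a minimal Python sketch in the spirit of the abstract: clip a per-example (sub)gradient to bound sensitivity, add Gaussian noise, and take an ordinary SGD step. This is an illustration, not the paper's exact algorithm; the function names, hyperparameters, and noise scale sigma are placeholder assumptions, and the calibration of sigma to a target (ϵ,δ) budget is omitted.

import numpy as np

def noisy_sgd(grad_fn, data, dim, steps=1000, lr=0.1, sigma=1.0, clip=1.0):
    """Gradient-perturbed SGD sketch: clip a per-example (sub)gradient to
    bound sensitivity, add Gaussian noise, then take a plain SGD step.
    sigma must be calibrated to the desired (eps, delta) budget (omitted)."""
    rng = np.random.default_rng(0)
    w = np.zeros(dim)
    for _ in range(steps):
        x, y = data[rng.integers(len(data))]   # sample one example uniformly
        g = grad_fn(w, x, y)                   # (sub)gradient of the loss
        norm = np.linalg.norm(g)
        if norm > clip:                        # norm clipping bounds sensitivity
            g = g * (clip / norm)
        g = g + rng.normal(0.0, sigma, dim)    # Gaussian gradient perturbation
        w = w - lr * g                         # ordinary SGD step
    return w

def hinge_grad(w, x, y):
    # Hinge loss: non-smooth (alpha = 0 on the Hoelder scale in the abstract)
    return -y * x if y * np.dot(w, x) < 1 else np.zeros_like(w)

# Illustrative call on toy data (placeholder values throughout)
data = [(np.array([1.0, 2.0]), 1), (np.array([-1.5, 0.5]), -1)]
w = noisy_sgd(hinge_grad, data, dim=2, steps=200, lr=0.05, sigma=0.1, clip=1.0)

The hinge-loss subgradient is used here because the abstract names it as a non-smooth loss that violates the usual strong-smoothness assumption.
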
Persistent Identifier: http://hdl.handle.net/10722/329744
ISSN: 1063-5203
2023 Impact Factor: 2.6
2023 SCImago Journal Rankings: 2.231
ISI Accession Number ID: WOS:000707855000001

 

DC Field | Value | Language
dc.contributor.author | Wang, Puyu | -
dc.contributor.author | Lei, Yunwen | -
dc.contributor.author | Ying, Yiming | -
dc.contributor.author | Zhang, Hai | -
dc.date.accessioned | 2023-08-09T03:35:01Z | -
dc.date.available | 2023-08-09T03:35:01Z | -
dc.date.issued | 2022 | -
dc.identifier.citation | Applied and Computational Harmonic Analysis, 2022, v. 56, p. 306-336 | -
dc.identifier.issn | 1063-5203 | -
dc.identifier.uri | http://hdl.handle.net/10722/329744 | -
dc.description.abstract | In this paper, we are concerned with differentially private stochastic gradient descent (SGD) algorithms in the setting of stochastic convex optimization (SCO). Most existing work requires the loss to be Lipschitz continuous and strongly smooth, and the model parameter to be uniformly bounded. However, these assumptions are restrictive, as many popular losses violate them, including the hinge loss for SVM, the absolute loss in robust regression, and even the least squares loss in an unbounded domain. We significantly relax these assumptions and establish privacy and generalization (utility) guarantees for private SGD algorithms using output and gradient perturbations with non-smooth convex losses. Specifically, the loss function is only required to have an α-Hölder continuous gradient (referred to as α-Hölder smoothness), which interpolates between Lipschitz continuity (α = 0) and strong smoothness (α = 1). We prove that noisy SGD with α-Hölder smooth losses using gradient perturbation can guarantee (ϵ,δ)-differential privacy (DP) and attain the optimal excess population risk O(√(d log(1/δ))/(nϵ) + 1/√n), up to logarithmic terms, with gradient complexity O(n^((2−α)/(1+α)) + n). This exhibits an important trade-off between the α-Hölder smoothness of the loss and the computational complexity of private SGD with statistically optimal performance. In particular, our results show that α-Hölder smoothness with α ≥ 1/2 is sufficient to guarantee (ϵ,δ)-DP of noisy SGD algorithms while achieving the optimal excess risk with linear gradient complexity O(n). | -
dc.language | eng | -
dc.relation.ispartof | Applied and Computational Harmonic Analysis | -
dc.subject | Algorithmic stability | -
dc.subject | Differential privacy | -
dc.subject | Generalization | -
dc.subject | Stochastic gradient descent | -
dc.title | Differentially private SGD with non-smooth losses | -
dc.type | Article | -
dc.description.nature | link_to_subscribed_fulltext | -
dc.identifier.doi | 10.1016/j.acha.2021.09.001 | -
dc.identifier.scopus | eid_2-s2.0-85116046185 | -
dc.identifier.volume | 56 | -
dc.identifier.spage | 306 | -
dc.identifier.epage | 336 | -
dc.identifier.eissn | 1096-603X | -
dc.identifier.isi | WOS:000707855000001 | -
