Links for fulltext (may require subscription):
- Publisher Website: 10.1109/CNS56114.2022.9947254
- Scopus: eid_2-s2.0-85143412319
Citations:
- Scopus: 0

Appears in Collections:
Conference Paper: Membership Inference Attack in Face of Data Transformations
Title | Membership Inference Attack in Face of Data Transformations |
---|---|
Authors | Chen, Jiyu; Guo, Yiwen; Chen, Hao; Gong, Neil |
Keywords | Data Privacy; Data Transformation; Membership Inference |
Issue Date | 2022 |
Citation | 2022 IEEE Conference on Communications and Network Security, CNS 2022, 2022, p. 299-307 |
Abstract | Membership inference attacks (MIAs) on machine learning models, which try to infer whether a sample is in the training dataset of a target model, have been widely studied over recent years as data privacy attracts increasing attention. One unignorable problem in the current MIA threat model is that it assumes the attacker always obtains exactly the same samples as in the training set. In reality, however, the attacker is more likely to gather only a transformed version of the training samples. For instance, portraits downloadable from a social networking website usually are re-scaled and compressed, while the website owner can train models with RAW images. We believe a transformed training sample still causes privacy leakage if the transformation is semantic-preserving. Therefore, we broaden the concept of membership inference into more realistic scenarios by considering data transformations. We introduce two strategies for designing MIAs in face of data transformations: one adapts current MIAs to transformations, and the other tries to reverse the transformations approximately. We demonstrated the effectiveness of our strategies and the significance of considering data transformations by extensive evaluations of multiple datasets with several common data transformations and by comparisons with six state-of-the-art attacks. Moreover, we conduct evaluations on data-augmented and privacy-preserving models protected by three state-of-the-art defenses. |
Persistent Identifier | http://hdl.handle.net/10722/346552 |
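The two strategies in the abstract can be illustrated with a minimal, hypothetical sketch of a confidence-threshold MIA in which the attacker holds only a transformed copy of a training sample. This is not the paper's actual algorithm; the model, transformation, threshold, and all names below are toy assumptions chosen purely for illustration.

```python
# Hypothetical sketch (not the paper's method): a confidence-threshold MIA
# where the attacker predicts "member" when the target model's confidence
# on a queried sample exceeds a threshold. All values below are toy
# placeholders used only to contrast the abstract's two strategies.

def infer_membership(confidence, threshold=0.9):
    """Predict 'member' when the target model is highly confident."""
    return confidence >= threshold

def adapted_attack(model, transformed_sample, threshold=0.9):
    """Strategy 1: adapt the MIA by querying the transformed sample as-is."""
    return infer_membership(model(transformed_sample), threshold)

def reversed_attack(model, transformed_sample, approx_inverse, threshold=0.9):
    """Strategy 2: approximately reverse the transformation, then query."""
    return infer_membership(model(approx_inverse(transformed_sample)), threshold)

# Toy target model: overconfident on its (RAW) training samples.
train_set = {4.0, 10.0}
model = lambda x: 0.95 if x in train_set else 0.20

halve = lambda x: x / 2.0    # the transformation applied before release
double = lambda x: x * 2.0   # its approximate inverse

transformed = halve(10.0)    # attacker only obtains the transformed member
print(adapted_attack(model, transformed))           # prints False
print(reversed_attack(model, transformed, double))  # prints True
```

In this toy setup, querying the transformed sample directly misses the member, while approximately reversing the transformation recovers it, mirroring the motivation the abstract gives for considering both strategies.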
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Chen, Jiyu | - |
dc.contributor.author | Guo, Yiwen | - |
dc.contributor.author | Chen, Hao | - |
dc.contributor.author | Gong, Neil | - |
dc.date.accessioned | 2024-09-17T04:11:41Z | - |
dc.date.available | 2024-09-17T04:11:41Z | - |
dc.date.issued | 2022 | - |
dc.identifier.citation | 2022 IEEE Conference on Communications and Network Security, CNS 2022, 2022, p. 299-307 | - |
dc.identifier.uri | http://hdl.handle.net/10722/346552 | - |
dc.description.abstract | Membership inference attacks (MIAs) on machine learning models, which try to infer whether a sample is in the training dataset of a target model, have been widely studied over recent years as data privacy attracts increasing attention. One unignorable problem in the current MIA threat model is that it assumes the attacker always obtains exactly the same samples as in the training set. In reality, however, the attacker is more likely to gather only a transformed version of the training samples. For instance, portraits downloadable from a social networking website usually are re-scaled and compressed, while the website owner can train models with RAW images. We believe a transformed training sample still causes privacy leakage if the transformation is semantic-preserving. Therefore, we broaden the concept of membership inference into more realistic scenarios by considering data transformations. We introduce two strategies for designing MIAs in face of data transformations: one adapts current MIAs to transformations, and the other tries to reverse the transformations approximately. We demonstrated the effectiveness of our strategies and the significance of considering data transformations by extensive evaluations of multiple datasets with several common data transformations and by comparisons with six state-of-the-art attacks. Moreover, we conduct evaluations on data-augmented and privacy-preserving models protected by three state-of-the-art defenses. | - |
dc.language | eng | - |
dc.relation.ispartof | 2022 IEEE Conference on Communications and Network Security, CNS 2022 | - |
dc.subject | Data Privacy | - |
dc.subject | Data Transformation | - |
dc.subject | Membership Inference | - |
dc.title | Membership Inference Attack in Face of Data Transformations | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/CNS56114.2022.9947254 | - |
dc.identifier.scopus | eid_2-s2.0-85143412319 | - |
dc.identifier.spage | 299 | - |
dc.identifier.epage | 307 | - |