| Year | Venue | Authors | Title |
| ---- | ----- | ------- | ----- |
| 2019 | NDSS | Hagestedt et al. | MBeacon: Privacy-Preserving Beacons for DNA Methylation Data |
| 2019 | SP | Zheng et al. | Helen: Maliciously Secure Coopetitive Learning for Linear Models |
| 2019 | SP | Melis et al. | Exploiting Unintended Feature Leakage in Collaborative Learning |
| 2019 | SEC | Mirsky et al. | CT-GAN: Malicious Tampering of 3D Medical Imagery using Deep Learning |
| 2019 | CCS | Agrawal et al. | QUOTIENT: Two-Party Secure Neural Network Training and Prediction |
| 2019 | CCS | Baluta et al. | Quantitative Verification of Neural Networks and Its Security Applications |
| 2019 | CCS | Du et al. | Lifelong Anomaly Detection Through Unlearning |
| 2020 | NDSS | Böhler et al. | Secure Sublinear Time Differentially Private Median Computation |
| 2020 | NDSS | Patra et al. | BLAZE: Blazing Fast Privacy-Preserving Machine Learning |
| 2020 | NDSS | Chaudhari et al. | Trident: Efficient 4PC Framework for Privacy Preserving Machine Learning |
| 2020 | SEC | Shan et al. | Fawkes: Protecting Privacy against Unauthorized Deep Learning Models |
| 2020 | SEC | Fang et al. | Local Model Poisoning Attacks to Byzantine-Robust Federated Learning |
| 2020 | SEC | Pan et al. | Justinian’s GAAvernor: Robust Distributed Learning with Gradient Aggregation Agent |
| 2020 | SP | Pan et al. | Privacy Risks of General-Purpose Language Models |
| 2020 | SP | Wu et al. | The Value of Collaboration in Convex Machine Learning with Differential Privacy |
| 2020 | SP | Hasan et al. | Automatically Detecting Bystanders in Photos to Reduce Privacy Risks |
| 2020 | SP | Kumar et al. | CrypTFlow: Secure TensorFlow Inference |
| 2020 | CCS | Rathee et al. | CrypTFlow2: Practical 2-Party Secure Inference |
| 2021 | NDSS | Ma et al. | Let’s Stride Blindfolded in a Forest: Sublinear Multi-Client Decision Trees Evaluation |
| 2021 | NDSS | Zhang et al. | GALA: Greedy ComputAtion for Linear Algebra in Privacy-Preserved Neural Networks |
| 2021 | NDSS | Liang et al. | FARE: Enabling Fine-grained Attack Categorization under Low-quality Labeled Data |
| 2021 | NDSS | Sav et al. | POSEIDON: Privacy-Preserving Federated Neural Network Learning |
| 2021 | NDSS | Cao et al. | FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping |
| 2021 | NDSS | Shejwalkar and Houmansadr | Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses for Federated Learning |
| 2021 | NDSS | Le et al. | CV-INSPECTOR: Towards Automating Detection of Adblock Circumvention |
| 2021 | NDSS | Zimmeck et al. | PrivacyFlash Pro: Automating Privacy Policy Generation for Mobile Apps |
| 2021 | NDSS | Vishwamitra et al. | Towards Understanding and Detecting Cyberbullying in Real-world Images |
| 2021 | SP | Bourtoule et al. | Machine Unlearning |
| 2021 | SP | Abdullah et al. | SoK: The Faults in our ASRs: An Overview of Attacks against Automatic Speech Recognition and Speaker Identification Systems |
| 2021 | SP | Roy et al. | Learning Differentially Private Mechanisms |
| 2021 | SP | Tan et al. | CryptGPU: Fast Privacy-Preserving Machine Learning on the GPU |
| 2021 | SP | Cheu et al. | Manipulation Attacks in Local Differential Privacy |
| 2021 | SP | Rathee et al. | SiRnn: A Math Library for Secure RNN Inference |
| 2021 | SP | Jia et al. | Proof-of-Learning: Definitions and Practice |
| 2021 | SEC | Zheng et al. | Cerebro: A Platform for Multi-Party Cryptographic Collaborative Learning |
| 2021 | SEC | Zhang et al. | Leakage of Dataset Properties in Multi-Party Machine Learning |
| 2021 | SEC | Koti et al. | SWIFT: Super-fast and Robust Privacy-Preserving Machine Learning |
| 2021 | SEC | Chen et al. | Cost-Aware Robust Tree Ensembles for Security Applications |
| 2021 | SEC | Yang et al. | CADE: Detecting and Explaining Concept Drift Samples for Security Applications |
| 2021 | SEC | Sun et al. | Mind Your Weight(s): A Large-scale Study on Insufficient Machine Learning Model Protection in Mobile Apps |
| 2021 | CCS | Han et al. | DeepAID: Interpreting and Improving Deep Learning-based Anomaly Detection in Security Applications |
| 2021 | CCS | Zhao et al. | AI-Lancet: Locating Error-inducing Neurons to Optimize Neural Networks |
| 2021 | CCS | Chen et al. | Learning Security Classifiers with Verified Global Robustness Properties |
| 2021 | CCS | Malekzadeh et al. | Honest-but-Curious Nets: Sensitive Attributes of Private Inputs Can Be Secretly Coded into the Classifiers’ Outputs |