“Real Attackers Don’t Compute Gradients”: Supplementary Resources
Welcome to the website accompanying the paper “Real Attackers Don’t Compute Gradients”: Bridging the Gap between Adversarial ML Research and Practice, accepted at IEEE SaTML’23. A preprint is available here.
We presented the paper in Raleigh (NC, USA) in February 2023! [Slides] [Poster]
You can watch the presentation in the video below:
Summary: what is our (position) paper about?
We aim to spearhead more impactful research in the context of Adversarial Machine Learning (ML). In the last decade, real-world deployments of ML have skyrocketed; however, despite thousands of papers showing the vulnerability of ML models to various security violations, practitioners still view this research domain with skepticism.
We believe that a stronger connection between adversarial ML research and practice would greatly benefit our society, as it would lead to improved security of operational ML systems. To strengthen this connection, we:
- present three real-world case studies, featuring contributions from large companies, which elucidate aspects of ML systems’ security that researchers may overlook;
- review all recent papers published at top conferences, highlighting positive trends as well as some confusing inconsistencies;
- state five positions that, if embraced, would build a bridge between adversarial ML research and practice.
Our paper is enriched with discussions, considerations, and thorough analyses of all the above-mentioned points, provided in a (lengthy!) Appendix culminating in a comprehensive table that summarizes the state of the art.
Acknowledgement
Our work is the result of a joint effort of researchers and practitioners. The idea for our paper was born shortly after the Dagstuhl Seminar on “Security of Machine Learning”, held in July 2022. During this event (which most of the paper’s authors attended), many discussions revolved around the topics tackled by the paper. The authors would therefore like to thank all participants (and organizers!) of this Dagstuhl Seminar, without which our paper would never have come to be.
Extra Resources
Alongside our main paper, we provide the following additional resources:
- Screenshots of the 100 “evasive” phishing webpages – described in Section III.B [Archive (~17MB) and SHA256] (a checksum-verification sketch follows this list)
- Source Material of our analysis of the 2021 MLSEC anti-phishing challenge – described in Section III.C [Notebook and Data] [Submissions (~500MB) and SHA256]
- List of Excluded Papers that involve ML and cybersecurity but fell outside our scope and were not included in our literature review – refer to Appendix B-A [List]
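The archives above are distributed together with SHA256 checksums. As a minimal sketch (the file name and digest below are placeholders, not the actual values published on this page), the following Python snippet shows one way to verify a downloaded archive before unpacking it:

```python
# Minimal sketch for checking a downloaded archive against its published
# SHA256 digest. The file name and digest below are placeholders.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the hex SHA256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    archive_path = "downloaded_archive.zip"              # placeholder file name
    published_sha256 = "<SHA256 value from this page>"   # placeholder digest
    computed = sha256_of(archive_path)
    print("computed:", computed)
    print("matches :", computed == published_sha256.lower())
```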
If you use any of these resources, we kindly ask you to cite our paper using the following BibTeX entry:
@inproceedings{apruzzese2023realgradients,
title={"Real Attackers Don't Compute Gradients": Bridging the Gap between Adversarial ML Research and Practice},
author={Apruzzese, Giovanni and Anderson, Hyrum S. and Dambra, Savino and Freeman, David and Pierazzi, Fabio and Roundy, Kevin A.},
booktitle={Proceedings of the 1st IEEE Conference on Secure and Trustworthy Machine Learning (SaTML)},
year={2023},
}
Contact
Feel free to reach out to us! You can contact the primary author, Giovanni Apruzzese, or any of the other authors (contact details are at the bottom of this webpage). You can also post a comment on the discussion page of this GitHub Repository.
Follow-up
Four of the authors (Giovanni Apruzzese, Hyrum Anderson, David Freeman, Fabio Pierazzi) held a 1-hour webinar (organized by Robust Intelligence) in which they discussed some of the paper’s main takeaways. The webinar can be watched at the following link: YouTube