[04/2024 - Award] Evading Black-box Classifiers Without Breaking Eggs was selected as a Distinguished Paper Award Runner-up at IEEE SaTML 2024!

[04/2024 - New paper: JailbreakBench: An Open Robustness Benchmark for Jailbreaking Large Language Models] We have a new paper on benchmarking LLM jailbreak attacks and defenses, with a focus on transparency and reproducibility. Take a look here.

[12/2023 - SaTML 2024 news] I will be presenting Evading Black-box Classifiers Without Breaking Eggs at IEEE SaTML 2024 and co-organizing the LLMs CTF.

[06/2023 - New paper: Privacy Side Channels in Machine Learning Systems] We have a new paper about side channels in ML systems, i.e., attacks that exploit components of the system other than the model itself. Spoiler alert: some of those components are, on paper, meant to improve privacy! Take a look here.

[06/2023 - New paper: Evading Black-box Classifiers Without Breaking Eggs] We uploaded a new paper to arXiv, in which we propose a real-world-oriented metric for decision-based black-box attacks on security-critical systems. Take a look here!

[11/2022 - A Light Recipe to Train Robust Vision Transformers accepted at SaTML] The paper derived from my Master's thesis was accepted at the IEEE Conference on Secure and Trustworthy Machine Learning (SaTML 2023).

[09/2022 - New paper: A Light Recipe to Train Robust Vision Transformers] We uploaded the paper derived from my Master's thesis to arXiv, with additional experiments and insights. Take a look here.

[08/2022 - I started my PhD] On August 1st, 2022, I started my PhD at ETH Zürich, in the Privacy and Security Lab of Prof. Florian Tramèr.

[05/2022 - I earned my MSc at EPFL!] On April 27th, I successfully defended my MSc thesis on Adversarially Robust Vision Transformers! You can read it here. Feel free to contact me if you have any questions about it!