Seminar: ceClub: The Technion Computer Engineering Club


Neural Network Robustness Verification of Few-Pixel Attacks

Date: December 3, 2025 Time: 14:30 - 15:30
Location: Taub 301
Lecturer: Yuval Shapira
Despite their success, neural networks have been shown to be vulnerable to adversarial attacks. In l_0 adversarial attacks, also known as few-pixel attacks, the attacker picks t pixels from the image and arbitrarily perturbs them. While many verifiers prove robustness against l_p attacks for a positive integer p, very little work deals with robustness verification for l_0 attacks. This verification introduces a combinatorial challenge, because the space of pixels to perturb is discrete and of exponential size. In this talk, we present a series of papers tackling this challenging problem. We first show that l_∞ verifiers can be used for l_0 verification, and that by relying on covering designs we can significantly reduce the number of l_∞ tasks that need to be submitted to the underlying l_∞ verifier. This idea is implemented in Calzone, the first sound and complete l_0 verifier. We then present CoVerD, which improves upon Calzone by tailoring effective but analysis-incompatible coverings to l_0 robustness verification. Lastly, we characterize the convex hull of the non-convex l_0 perturbation space. Equipped with this geometric perspective, we examine different approaches to linear bound propagation for l_0 verification, which enables us to improve the precision of CoVerD's underlying verifier in our setting and boost its performance.
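To illustrate the covering-design idea in the abstract: a (v, k, t)-covering is a family of k-element blocks of a v-element set such that every t-element subset lies inside some block. If an l_∞ verifier can certify robustness when all k pixels of a block are perturbed, then checking one l_∞ task per block covers every possible choice of t attacked pixels with far fewer queries than enumerating all C(v, t) subsets. The sketch below uses a simple greedy construction; it is only a toy illustration of the reduction, not the covering construction actually used in Calzone or CoVerD.

```python
from itertools import combinations
from math import comb

def greedy_covering(v, k, t):
    """Greedily build a (v, k, t)-covering design: a list of k-subsets
    of {0, ..., v-1} such that every t-subset is contained in some block.
    Each block corresponds to one l_inf verification task over k pixels."""
    uncovered = set(combinations(range(v), t))
    blocks = []
    while uncovered:
        # Pick the k-subset that covers the most still-uncovered t-subsets.
        best = max(
            combinations(range(v), k),
            key=lambda b: sum(s in uncovered for s in combinations(b, t)),
        )
        blocks.append(best)
        for s in combinations(best, t):
            uncovered.discard(s)
    return blocks

# Toy setting: a 9-pixel image, attacker perturbs t = 2 pixels,
# l_inf tasks perturb blocks of k = 3 pixels at a time.
blocks = greedy_covering(v=9, k=3, t=2)
print(f"{len(blocks)} l_inf tasks instead of {comb(9, 2)} pixel pairs")
```

Every pair of pixels is contained in some block, so certifying all blocks certifies all 2-pixel attacks, while the number of verifier calls drops well below the naive C(9, 2) = 36 enumeration.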

M.Sc. student under the supervision of Dr. Dana Drachsler-Cohen.


*The lecture will be given in Hebrew*

