Seminar: Pixel Club

Low-Level Processing: Provable Benefits for High-Level Tasks and a Zero-Shot Restoration Framework That Works with as Few as 4 Iterations

Date: December 30, 2025   Time: 11:30–12:00
Location: Room 1061, Meyer Building
Lecturer: Tom Tirer
The data processing inequality is an information-theoretic principle stating that the information content of a signal cannot be increased by processing the observations. In particular, it suggests that there is no benefit in enhancing the signal or encoding it before addressing a classification problem. Indeed, this assertion can be proven for the optimal Bayes classifier. However, in practice, it is common to perform “low-level” tasks before “high-level” downstream tasks, despite the overwhelming capabilities of modern deep neural networks. With the aim of understanding when and why low-level processing can be beneficial for classification, I will present a comprehensive theoretical study of a binary classification setup, where we consider a classifier that is tightly connected to the optimal Bayes classifier and converges to it as the number of training samples increases. We prove that for any finite number of training samples, there exists a pre-classification processing that improves the classification accuracy. We explore the factors that affect this gain and discover a non-intuitive relation between the maximal gain and a notion of signal-to-noise ratio. Importantly, we conduct an empirical study investigating the effect of denoising and encoding on the performance of practical deep classifiers on benchmark datasets, and demonstrate trends that are consistent with our theoretical results.
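
As a concrete, hedged illustration of the first result, the following minimal NumPy simulation mimics the flavor of the setup above. It is a sketch, not the talk's exact construction: the known-subspace signal prior, the dimensions, the noise level, and the sample sizes are all illustrative assumptions. A plug-in classifier built from finitely many training samples gains accuracy from an idealized "denoising" pre-processing (projection onto the signal subspace), while the optimal Bayes classifier, in line with the data processing inequality, does not.

```python
# Toy simulation (illustrative assumptions, not the talk's exact setup):
# binary classification of Gaussian-noisy signals that live in a known
# low-dimensional subspace. A plug-in classifier estimated from finitely
# many training samples benefits from "denoising" (projection onto the
# subspace), whereas the optimal Bayes classifier does not.
import numpy as np

rng = np.random.default_rng(0)
d, k = 100, 5                    # ambient / signal-subspace dimensions (assumed)
sigma = 2.0                      # noise level (assumed)
n_train, n_test = 10, 20000      # training samples per class, test samples

# Orthonormal basis of the signal subspace; the two class means are +/- mu.
U, _ = np.linalg.qr(rng.standard_normal((d, k)))
mu = U @ rng.standard_normal(k)
mu *= 3.0 / np.linalg.norm(mu)   # fix the clean-signal energy
P = U @ U.T                      # subspace projection: an idealized denoiser

def sample(mean, n):
    """Draw n noisy observations of a clean signal."""
    return mean + sigma * rng.standard_normal((n, d))

def accuracy(m_pos, m_neg, X_pos, X_neg):
    """Nearest-class-mean classifier with class means (m_pos, m_neg)."""
    w, b = m_pos - m_neg, 0.5 * (m_pos + m_neg)
    return 0.5 * (((X_pos - b) @ w > 0).mean() + ((X_neg - b) @ w < 0).mean())

# Plug-in classifier: class means estimated from n_train samples per class.
m_pos = sample(mu, n_train).mean(axis=0)
m_neg = sample(-mu, n_train).mean(axis=0)
Xp, Xn = sample(mu, n_test), sample(-mu, n_test)

print("plug-in:           %.3f" % accuracy(m_pos, m_neg, Xp, Xn))
print("plug-in + denoise: %.3f" % accuracy(P @ m_pos, P @ m_neg, Xp @ P, Xn @ P))
print("Bayes:             %.3f" % accuracy(mu, -mu, Xp, Xn))
print("Bayes + denoise:   %.3f" % accuracy(mu, -mu, Xp @ P, Xn @ P))  # unchanged
```

The projection here plays the role of an idealized denoiser; the point of the toy experiment is that its benefit shrinks as the classifier approaches the Bayes-optimal one, for which the projection changes nothing.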

Motivated by the first part of the talk, which shows the usefulness of low-level tasks, in the second part I will focus on “zero-shot” (training-free) image restoration guided by Diffusion Models (DMs) and their variants. We hypothesize that current methods require many Neural Function Evaluations (NFEs) to perform well because the original generative functionality of DMs itself requires many NFEs. On the other hand, existing restoration methods that use faster generative models, such as Consistency Models (CMs), still require tens of NFEs or per-task fine-tuning of the model. To fill this gap, I will present a zero-shot restoration scheme that uses CMs and operates well with as few as 4 NFEs. It is based on a judicious combination of several ingredients: a better initialization, back-projection guidance, and, above all, a novel noise-injection approach that acts as momentum. We demonstrate the advantages of our approach for image super-resolution, deblurring and inpainting. Interestingly, we show that the usefulness of our noise-injection strategy goes beyond CMs: it can also mitigate the performance degradation of existing guided DM methods when their NFE count is reduced.
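
To make the back-projection guidance ingredient concrete, here is a minimal sketch of the classical back-projection step for a linear degradation model y = Hx. The random operator H, its sizes, and the noiseless setting are illustrative assumptions chosen for clarity; this shows one ingredient only, not the authors' full 4-NFE scheme, whose initialization and momentum-like noise injection are detailed in the talk.

```python
# Hedged sketch of back-projection guidance for a linear degradation
# y = H x (noiseless here for clarity; H is a stand-in random operator
# playing the role of blurring, downsampling, or masking).
# The classical step x <- x + H^+ (y - H x) projects the current estimate
# onto the set {x : H x = y}, enforcing consistency with the measurements.
import numpy as np

rng = np.random.default_rng(1)
d, m = 64, 16                    # signal / measurement sizes (toy values)
H = rng.standard_normal((m, d))  # stand-in linear degradation operator
H_pinv = np.linalg.pinv(H)       # Moore-Penrose pseudo-inverse

x_true = rng.standard_normal(d)
y = H @ x_true                   # observed degraded measurements

def back_project(x_est, y):
    """Enforce data consistency while moving x_est as little as possible."""
    return x_est + H_pinv @ (y - H @ x_est)

x_est = rng.standard_normal(d)   # e.g., a generative model's current estimate
x_bp = back_project(x_est, y)

print("residual before: %.2e" % np.linalg.norm(H @ x_est - y))
print("residual after:  %.2e" % np.linalg.norm(H @ x_bp - y))  # ~ 0
```

For inpainting, where H is a binary selection of pixels, this step reduces to simply pasting the observed pixels back into the estimate.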

Tom Tirer is a Senior Lecturer (Assistant Professor) in the Faculty of Engineering at Bar-Ilan University. He received his BSc degree (summa cum laude) in electrical engineering from Ben-Gurion University of the Negev in 2010, and his MSc (summa cum laude) and PhD degrees in electrical engineering from Tel Aviv University in 2016 and 2020, respectively, where he was also a postdoctoral researcher during 2021. In 2022 he was a postdoctoral researcher at the NYU Center for Data Science. His research interests are in the (often intersecting) fields of signal and image processing, machine learning and optimization. Alongside his academic endeavors, he also worked in industry for several years in various engineering, algorithms and research roles.
