Seminar: Machine Learning Seminar

Women in Electrical and Computer Engineering Community

Implicit Biases of Gradient Descent in Offline System Identification and Optimal Control

Date: January 22, 2025    Time: 11:30 - 12:30
Location: Room 506, Zisapel Building
Lecturer: Nadav Cohen

When learning to control a critical system (e.g., in healthcare or manufacturing), trial and error is often prohibitively costly or dangerous. A natural alternative is offline system identification and optimal control: using pre-recorded data to learn a system model offline, and then using that model to learn an optimal controller offline. When implemented with overparameterized models (e.g., neural networks) trained via gradient descent (GD), this approach achieves notable success; for instance, it enables reducing CO2 emissions of industrial manufacturing plants by up to 20%. This success is driven by implicit biases of GD, which yield not only in-distribution generalization but also out-of-distribution generalization. Towards elucidating this phenomenon, I will present a series of works that theoretically analyze the implicit biases of GD when applied to overparameterized linear models in offline system identification and optimal control. The results offer theoretical explanations for the success of GD in controlling critical systems, and suggest potential avenues for enhancing this success.
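
To make the two-stage pipeline described above concrete, the following is a minimal, illustrative sketch in Python/NumPy, not code from the talk or from the works it covers: stage one identifies linear dynamics from pre-recorded trajectories by gradient descent on an overparameterized (depth-2 factored) linear model, and stage two learns a linear state-feedback controller by gradient descent against the identified model. All dimensions, the quadratic cost, the factorization A_hat = W2 @ W1, and the finite-difference gradient for the controller gain are assumptions made here only for concreteness.

    import numpy as np

    rng = np.random.default_rng(0)
    d, m, T, n_traj = 4, 2, 20, 50            # state dim, input dim, horizon, #trajectories

    # Ground-truth linear system x_{t+1} = A x_t + B u_t, unknown to the learner.
    A_true = 0.9 * np.eye(d) + 0.02 * rng.standard_normal((d, d))
    B_true = rng.standard_normal((d, m))

    # Pre-recorded (offline) data: random exploratory inputs, no online interaction.
    X, U, Y = [], [], []
    for _ in range(n_traj):
        x = rng.standard_normal(d)
        for _ in range(T):
            u = rng.standard_normal(m)
            x_next = A_true @ x + B_true @ u
            X.append(x); U.append(u); Y.append(x_next)
            x = x_next
    X, U, Y = np.array(X), np.array(U), np.array(Y)

    # Stage 1: offline system identification. A_hat is factored as W2 @ W1 (a depth-2
    # linear network), one simple overparameterized linear model on which GD's
    # trajectory differs from directly fitting A_hat, even though the end-to-end map
    # is linear.
    k = 8                                     # hidden width > d
    W1 = 0.1 * rng.standard_normal((k, d))
    W2 = 0.1 * rng.standard_normal((d, k))
    B_hat = np.zeros((d, m))
    lr = 3e-3
    for _ in range(20000):
        A_hat = W2 @ W1
        resid = X @ A_hat.T + U @ B_hat.T - Y              # one-step prediction error
        grad_A = resid.T @ X / len(X)                      # gradient w.r.t. end-to-end A_hat
        grad_W1, grad_W2 = W2.T @ grad_A, grad_A @ W1.T    # chain rule through the factors
        W1 -= lr * grad_W1
        W2 -= lr * grad_W2
        B_hat -= lr * resid.T @ U / len(U)
    A_hat = W2 @ W1

    # Stage 2: offline optimal control. Gradient descent on a state-feedback gain K,
    # minimizing a finite-horizon quadratic cost rolled out through the *learned* model
    # (certainty equivalence); the gradient is approximated by finite differences.
    def model_cost(K, x0s):
        cost = 0.0
        for x0 in x0s:
            x = x0
            for _ in range(T):
                u = -K @ x
                cost += x @ x + u @ u
                x = A_hat @ x + B_hat @ u
        return cost / len(x0s)

    x0s = rng.standard_normal((16, d))
    K = np.zeros((m, d))
    eps, lr_K = 1e-4, 1e-3
    for _ in range(200):
        base = model_cost(K, x0s)
        grad_K = np.zeros_like(K)
        for i in range(m):
            for j in range(d):
                K_pert = K.copy()
                K_pert[i, j] += eps
                grad_K[i, j] = (model_cost(K_pert, x0s) - base) / eps
        K -= lr_K * grad_K

    print("identification error ||A_hat - A_true||:", np.linalg.norm(A_hat - A_true))
    print("closed-loop spectral radius on the true system:",
          max(abs(np.linalg.eigvals(A_true - B_true @ K))))

Running the sketch prints the identification error and the closed-loop spectral radius on the true system (stabilizing if below 1); its only purpose is to fix notation for the two offline stages, not to reproduce any result from the talk.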

 

Nadav Cohen is an Associate Professor of Computer Science at Tel Aviv University, studying the theoretical and algorithmic foundations of neural networks. He is also the CTO, President, and a Co-Founder of Imubit, a company that applies neural networks to control and optimization of industrial manufacturing lines, thereby reducing CO2 emissions while improving yield. Nadav earned a BSc in electrical engineering and a BSc in mathematics (both summa cum laude) at the Technion. He obtained his PhD (direct track, summa cum laude) at the School of Computer Science and Engineering at the Hebrew University of Jerusalem. Subsequently, he was a postdoctoral research scholar at the Institute for Advanced Study in Princeton. For his contributions to the foundations of neural networks, Nadav has received several honors and awards, including an ERC Starting Grant, a Google Research Scholar Award, the Google Doctoral Fellowship in Machine Learning, the Rothschild Postdoctoral Fellowship, and the Zuckerman Postdoctoral Fellowship.

 

 
