Seminar: Pixel Club

Women in Electrical and Computer Engineering Community

Advancing Automatic Machine Learning (AutoML)

Date: November 12, 2024    Time: 11:30 - 12:30
Location: ZOOM
Lecturer: Niv Nayman
Machine learning (ML) has achieved considerable successes in recent years and an ever-growing number of disciplines rely on it. However, this success crucially relies on human experts manually processing data to build, select and train models. In this work we focus on advancing two different aspects of Automated Machine Learning (AutoML) and provide automatic methods and processes to mitigate some of the reliance on ML experts.
First, we introduce an interpretable neural architecture search method to efficiently optimize a deep neural network under latency constraints on different devices:
Making neural networks practical often requires adhering to resource constraints such as latency, energy and memory. We address this by introducing a Bilinear Interpretable approach for constrained Neural Architecture Search (BINAS). Our method is based on an accurate yet simple bilinear formulation of both an accuracy estimator and the expected resource requirement, combined with a scalable search method that comes with theoretical guarantees. A major advantage of BINAS is its interpretability: it provides insights into the contribution of different design choices. For example, we find that in the examined search space, adding depth and width is more effective at deeper stages of the network and at the beginning of each resolution stage. BINAS differs from previous methods, which typically rely on complicated accuracy predictors that are hard to interpret, sensitive to many hyper-parameters, and therefore compromise final accuracy. Our experiments show that BINAS generates architectures comparable to or better than the state of the art, while reducing the marginal search cost and strictly satisfying the resource constraints.

Secondly, we present an extensive study identifying intrinsic properties of pre-trained models for model selection in the context of transfer learning to different downstream tasks. We question the commonly accepted hypothesis that models with higher accuracy on ImageNet perform better on other downstream tasks, a hypothesis that has driven much research dedicated to optimizing ImageNet accuracy. Recently this hypothesis has been challenged by evidence showing that self-supervised models transfer better than their supervised counterparts, despite their inferior ImageNet accuracy. This calls for identifying the additional factors, on top of ImageNet accuracy, that make models transferable. In this work we show that high diversity of the filters learnt by the model promotes transferability jointly with ImageNet accuracy. Encouraged by the recent transferability results of self-supervised models, we use a simple procedure that combines self-supervised and supervised pretraining to generate models with both high diversity and high accuracy, and as a result high transferability. We experiment with several architectures and multiple downstream tasks, including both single-label and multi-label classification.
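To make the bilinear constrained formulation concrete, the following is a minimal, hypothetical sketch (all symbols are illustrative and not taken from the talk): architecture choices are encoded as binary decision variables, the accuracy estimator is bilinear in those variables, and the expected latency is linear in them, so the search becomes a constrained integer program.

\[
\max_{x \in \{0,1\}^n} \; \widehat{\mathrm{acc}}(x) = a^{\top} x + x^{\top} B x
\quad \text{s.t.} \quad
\widehat{\mathrm{lat}}(x) = c^{\top} x \le T, \qquad
\sum_{i \in S_k} x_i = 1 \;\; \forall k,
\]

where a, B and c are fitted coefficients, T is the latency budget, and each group S_k enforces exactly one choice per design decision.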
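As an illustration of measuring filter diversity of a pretrained backbone, here is a small Python sketch (assuming PyTorch and torchvision; the specific metric, mean pairwise cosine distance between first-layer convolutional filters, is an illustrative choice rather than necessarily the exact measure used in the work):

import torch
import torch.nn.functional as F
from torchvision import models

# Assumption: an ImageNet-pretrained torchvision ResNet-50 stands in for the pretrained model.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)

# Flatten the first convolutional layer's filters: (64, 3, 7, 7) -> (64, 147).
filters = model.conv1.weight.detach().flatten(start_dim=1)
filters = F.normalize(filters, dim=1)

# Pairwise cosine similarities between filters, excluding the diagonal.
sim = filters @ filters.T
off_diag = ~torch.eye(sim.shape[0], dtype=torch.bool)

# One possible diversity score: mean pairwise cosine distance (higher means more diverse filters).
diversity = (1.0 - sim[off_diag]).mean().item()
print(f"first-layer filter diversity: {diversity:.3f}")

A higher score indicates that the learnt filters span more distinct directions; the study described above pairs such a diversity signal with ImageNet accuracy when assessing how well candidate models transfer.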
Niv Nayman is an applied scientist at Amazon Web Services AI Labs and a PhD candidate at the Technion, working primarily on AutoML research with applications in computer vision and document analysis. His work has been integrated into products at scale and published at top venues (NeurIPS, ICLR, ICML, ECCV, etc.). Before joining Amazon, Niv was a senior research scientist at Alibaba DAMO Academy, after completing a long service as an officer in the intelligence technological unit 81, where he held a variety of roles over the years, from hardware design to algorithmic research and cyber security. Niv holds BSc degrees in both Electrical Engineering and Physics (Cum Laude, ‘Psagot’ program) and an MSc in Optimization and Machine Learning, all from the Technion.

Ph.D. student under the supervision of Prof. Lihi Zelnik Manor.

 
