The scope of the areas traditionally known as “computer architecture” and “computer systems” has broadened tremendously over the last decade, and the field now faces new challenges that require a broad toolset and overarching insights. In terms of technology, the stagnation of Moore’s Law challenges the ubiquitous von Neumann computing model and calls for new computer systems built around novel processing models and compute accelerators. Furthermore, the advent of new memory technologies has introduced new opportunities to improve compute performance and escape traditional capacity bottlenecks.
Collectively, our computer networks and computer engineering groups cover a broad spectrum of topics, ranging from low-level hardware, storage architectures, and accelerators, through operating systems and computer and storage networks, to security, formal methods, and high-level software such as blockchains, distributed-systems protocols, and algorithms.
The main research directions include the following.
Learning Theory: The current success of Machine Learning (ML) applications raises fundamental theoretical questions about learning that would not have been considered in the past. Research in the group aims to contribute to the understanding of this unexpected success, which often seems to contradict classic statistical principles. Work by Mannor and Meir focuses on basic statistical aspects, as well as on the interaction between approximation and estimation. Weinberger’s work focuses on connections between information theory and statistical learning. He studies fundamental performance bounds on the ability to learn channel and source encoders and decoders, and the use of information-theoretic tools in statistical and machine learning. Weinberger also studies fundamental statistical inference problems in high-dimensional models.
Reinforcement Learning (RL): RL deals with the development of efficient algorithms for learning effective behaviors (or control policies) in dynamic, uncertain environments. Our research spans algorithm development, theoretical analysis, and foundational aspects, as well as the interactions between RL and biological systems. The multi-armed bandit (MAB) problem is a fundamental special case that has been analyzed extensively, alongside many richer setups. Recent work focuses on deep RL, temporal abstractions, meta-learning, improved exploration, lifelong and curriculum learning, and multi-agent systems. RL research is led by Mannor and Meir, as well as Tamar (with a focus on robotics) and Shimkin.
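As a concrete illustration of the MAB model, the following minimal sketch plays a Bernoulli-reward bandit with a simple epsilon-greedy rule. The arm probabilities, exploration rate, and horizon are illustrative assumptions, not parameters from any specific study in the group.

```python
import numpy as np

def epsilon_greedy_bandit(arm_probs, horizon=10_000, epsilon=0.1, seed=0):
    """Epsilon-greedy play of a Bernoulli multi-armed bandit.

    arm_probs : true success probability of each arm (unknown to the learner).
    Returns the empirical mean-reward estimates and the total reward collected.
    """
    rng = np.random.default_rng(seed)
    n_arms = len(arm_probs)
    counts = np.zeros(n_arms)      # number of pulls per arm
    estimates = np.zeros(n_arms)   # running mean reward per arm
    total_reward = 0.0

    for _ in range(horizon):
        # Explore with probability epsilon, otherwise exploit the current best estimate.
        if rng.random() < epsilon:
            arm = int(rng.integers(n_arms))
        else:
            arm = int(np.argmax(estimates))

        reward = float(rng.random() < arm_probs[arm])   # Bernoulli reward draw
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total_reward += reward

    return estimates, total_reward

# Example: three arms with hidden success probabilities.
estimates, reward = epsilon_greedy_bandit([0.2, 0.5, 0.8])
```

The same exploration-exploitation trade-off reappears, in much richer form, in the full RL settings described above.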
Deep Learning and Optimization: Most machine learning approaches, and in particular deep learning algorithms, are trained on data using an iterative optimization procedure that minimizes some loss function. An important goal is to analyze and design fast and efficient optimization algorithms, preferably with performance guarantees. Theoretically, it is important to understand the convergence rate of these algorithms, why optimization seems to work well in many non-convex models, and which specific solution the algorithm selects in the (common) case where multiple equivalent solutions exist. Soudry’s work examines, from the theoretical side, why optimization typically works so well in deep learning, e.g., why it converges to solutions that perfectly fit the training data yet generalize well to unseen data. On the practical side, he develops methods to train neural networks quickly and efficiently, for example using techniques such as parallelization and quantization.
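To make the "iterative optimization procedure with some loss function" concrete, here is a minimal sketch of mini-batch stochastic gradient descent on the squared loss of a linear model. The synthetic data, step size, and batch size are illustrative assumptions; deep learning training follows the same loop with a neural network in place of the linear predictor.

```python
import numpy as np

def sgd_linear_regression(X, y, lr=0.05, batch_size=32, epochs=50, seed=0):
    """Mini-batch SGD minimizing 0.5 * mean squared error of y ~ X @ w."""
    rng = np.random.default_rng(seed)
    n_samples, n_features = X.shape
    w = np.zeros(n_features)

    for _ in range(epochs):
        order = rng.permutation(n_samples)          # reshuffle data each epoch
        for start in range(0, n_samples, batch_size):
            idx = order[start:start + batch_size]
            residual = X[idx] @ w - y[idx]
            grad = X[idx].T @ residual / len(idx)   # gradient of the mini-batch loss
            w -= lr * grad                          # gradient step
    return w

# Synthetic example: recover a known weight vector from noisy observations.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=500)
w_hat = sgd_linear_regression(X, y)
```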
Online Learning: Here, data samples are presented sequentially and processed online by the learning algorithm. Of interest is the development of basic bounds and improved schemes for this common scenario. Related research addresses the so-called expert learning problem, which provides a powerful framework for dynamic decision making in adversarial and arbitrarily changing environments. Work in this area is carried out by Crammer, Levy, Mannor and Shimkin. Levy’s research lies at the intersection of machine learning and mathematical optimization. His goal is to understand when standard training methods fail or degrade, and to design alternative, efficient methods with provable guarantees.
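A standard textbook instance of the expert learning framework is the multiplicative-weights (Hedge) algorithm, sketched below under illustrative assumptions (losses in [0, 1], a fixed learning rate eta, and randomly generated losses in the example); it is shown here only to illustrate the setting, not as a description of the group's algorithms.

```python
import numpy as np

def hedge(loss_matrix, eta=0.5):
    """Multiplicative-weights (Hedge) algorithm for prediction with expert advice.

    loss_matrix : array of shape (T, K), where loss_matrix[t, k] in [0, 1] is the
                  loss of expert k at round t (revealed after the learner commits).
    Returns the learner's cumulative expected loss and the final expert distribution.
    """
    T, K = loss_matrix.shape
    weights = np.ones(K)
    learner_loss = 0.0

    for t in range(T):
        probs = weights / weights.sum()            # distribution over experts
        learner_loss += probs @ loss_matrix[t]     # expected loss this round
        weights *= np.exp(-eta * loss_matrix[t])   # exponentially down-weight bad experts
        weights /= weights.sum()                   # renormalize for numerical stability

    return learner_loss, weights

# Example: 1000 rounds, 5 experts with random losses.
rng = np.random.default_rng(0)
losses = rng.random((1000, 5))
cum_loss, final_probs = hedge(losses)
```

The regret of such schemes relative to the best expert in hindsight is exactly the kind of basic bound studied in this line of work.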
Robotics: Current robots excel at performing repetitive tasks in controlled environments. However, bringing robots into unstructured domains, such as assisting in homes, hospitals, or disaster areas, requires a paradigm shift in the design of robotic autonomy. Recently, machine learning has emerged as a promising direction for this task: the robot learns to perform tasks in a data-driven manner. Within this framework, fundamental questions must be answered, such as how machines can learn models of their environments that are useful for performing tasks, and how to learn behavior from interaction in an interpretable and safe manner. These issues are addressed in Tamar’s research, which mostly falls under the framework of reinforcement learning and its connections to representation learning, planning, and risk-averse optimization.
Statistical Data Analysis: Learning algorithms are increasingly prevalent within consequential real-world systems, where reliability is an essential consideration: confidently deploying learning algorithms requires more than high prediction accuracy in controlled testbeds. Work in Romano’s group focuses on developing machine learning systems that can be safely deployed in high-stakes applications. These tools can be wrapped around any predictive model, guaranteeing reliable data-driven inferences under practical, testable, and realistic assumptions.
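One well-known example of such a model-agnostic wrapper is split conformal prediction, sketched minimally below for regression. The base model, coverage level, and data split are illustrative assumptions, and the sketch is not a description of the group's specific methods.

```python
import numpy as np

def split_conformal_intervals(fit_predict, X_train, y_train, X_cal, y_cal, X_test, alpha=0.1):
    """Split conformal prediction intervals around an arbitrary regression model.

    fit_predict : callable that trains on (X_train, y_train) and returns a function
                  mapping inputs to point predictions.
    Returns lower/upper bounds with roughly (1 - alpha) marginal coverage,
    assuming calibration and test points are exchangeable.
    """
    predict = fit_predict(X_train, y_train)
    cal_scores = np.abs(y_cal - predict(X_cal))        # absolute residuals on held-out data
    n = len(cal_scores)
    k = int(np.ceil((n + 1) * (1 - alpha)))            # conformal quantile index
    q = np.sort(cal_scores)[min(k, n) - 1]             # clip to the largest score if k > n
    preds = predict(X_test)
    return preds - q, preds + q

# Example: wrap a least-squares linear model as the base predictor.
def fit_linear(X, y):
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return lambda Z: Z @ w

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))
y = X @ np.array([1.0, -1.0]) + 0.2 * rng.normal(size=300)
lo, hi = split_conformal_intervals(fit_linear, X[:150], y[:150], X[150:250], y[150:250], X[250:])
```

The key point is that the wrapper makes no assumption about the inner model; the coverage guarantee comes from the held-out calibration set.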
Lifelong Learning: Intelligent agents are expected to continually learn and transfer knowledge across domains and tasks. Work in Meir’s group focuses on basic principles of lifelong learning and on the acquisition of reliable prior knowledge in a continually evolving setting, thereby enabling agents to develop a broad set of skills across multiple domains and to generalize across both data and tasks.
Applications: Applications to medical treatment, autonomous driving, energy systems, and flood prediction have been studied in the Mannor, Meir, and Soudry groups. The field of medical applications, in particular, is characterized by large amounts of complex, heterogeneous data in multiple formats, and in recent years an extensive worldwide effort has been made to use modern data analysis tools to help medical professionals better diagnose and predict medical conditions. Within the Technion’s collaborative center of ML and Medicine, we are working on combining model-based techniques with black-box deep learning methods in order to benefit from both solid prior knowledge and the open-ended flexibility of deep learning.
Additional areas: A substantial amount of machine learning research is carried out in other areas of the faculty, such as computer vision. Applications of learning to these areas are described under the corresponding research areas.