Seminar: Graduate Seminar

Learning Control by Iterative Inversion

Date: February 2, 2023
Time: 10:30-11:30
Lecturer: Or Avner
Affiliations: The Andrew and Erna Viterbi Faculty of Electrical & Computer Engineering

We formulate learning for control as an inverse problem — inverting a dynamical system to find the actions that yield desired behavior. The key challenge in this formulation is a distribution shift in the inputs to the function being inverted — the learning agent can only observe the forward mapping (the consequences of its own actions) on trajectories it can execute, yet must learn the inverse mapping for input-output pairs that correspond to a different, desired behavior.
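
To make the shift concrete (the notation below is ours, added for illustration; the abstract states this only in words): writing f for the forward map from inputs (actions) to behavior, and g for the learned inverse, training pairs are observed only under the current policy's input distribution p, while accuracy is required under the desired output distribution q:

```latex
% Schematic statement of the distribution shift (notation is ours):
\begin{aligned}
  &\text{observe:} && (x,\, f(x)), && x \sim p \quad \text{(current policy)} \\
  &\text{require:} && f\bigl(g(y^{*})\bigr) \approx y^{*}, && y^{*} \sim q \quad \text{(desired behavior)}
\end{aligned}
```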

We propose a general recipe for inverse problems with a distribution shift, which we term iterative inversion: learn the inverse mapping under the current input distribution (policy), apply it to the desired output samples to obtain a new input distribution, and repeat. As we show, iterative inversion can converge to the desired inverse mapping, but only under rather strict conditions on the mapping itself.
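
As a concrete illustration of the recipe, here is a self-contained toy sketch in which the unknown forward map is linear and the learned inverse is a least-squares fit. The linear map and Gaussian distributions are our illustrative assumptions, not the paper's setting; this benign case satisfies the convergence conditions trivially.

```python
import numpy as np

rng = np.random.default_rng(0)

A = rng.normal(size=(3, 3))                # the unknown forward mapping: y = A x
forward = lambda X: X @ A.T                # we may only query it on inputs we execute

y_desired = rng.normal(size=(100, 3))      # samples of the desired output behavior
X = rng.normal(size=(100, 3))              # initial input distribution ("initial policy")

for _ in range(20):
    Y = forward(X)                               # observe consequences of current inputs
    B, *_ = np.linalg.lstsq(Y, X, rcond=None)    # fit inverse mapping y -> x on current data
    X = y_desired @ B                            # apply it to the desired outputs; repeat

print(np.linalg.norm(forward(X) - y_desired))    # near zero if the iteration converged
```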

We next apply iterative inversion to learning control. Our input is a set of demonstrations of the desired behavior, given as video embeddings of trajectories (without actions), and our method iteratively learns to imitate trajectories generated by the current policy, perturbed by random exploration noise. We find that consistently feeding the demonstrated trajectory embeddings as input to the policy when generating trajectories to imitate, à la iterative inversion, effectively steers learning towards the desired trajectory distribution.
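
Schematically, one outer iteration of the control variant could be organized as below. This is a structural sketch only: the helpers rollout, embed, and behavior_clone are hypothetical names standing for trajectory collection, trajectory embedding (e.g., a VQ-VAE encoder), and a supervised imitation update; none of them is the authors' actual API.

```python
# Structural sketch of the control variant of iterative inversion.
# rollout, embed, and behavior_clone are hypothetical placeholders.
def iterative_inversion_control(policy, env, demo_embeddings, n_iters=100):
    for _ in range(n_iters):
        dataset = []
        for z in demo_embeddings:
            # Condition the policy on a desired-trajectory embedding and roll
            # out in the environment with random exploration noise.
            states, actions = rollout(env, policy, condition=z, explore=True)
            # Relabel: pair the executed trajectory's *own* embedding with the
            # actions that produced it -- supervised data for the inverse map.
            dataset.append((embed(states), actions))
        # Supervised learning step: imitate what the current policy generated.
        policy = behavior_clone(policy, dataset)
    return policy
```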

To the best of our knowledge, this is the first exploration of learning control from the viewpoint of inverse problems, and the main advantage of our approach is its simplicity — it does not require rewards and employs only supervised learning, which can easily be scaled to use state-of-the-art trajectory embedding techniques and policy representations. Indeed, with a VQ-VAE embedding and a transformer-based policy, we demonstrate non-trivial continuous control on several tasks. Further, we report improved performance on imitating diverse behaviors compared to reward-based methods.

 

* M.Sc. student under the supervision of Prof. Aviv Tamar.

 
