Seminar: Graduate Seminar
Robust Autoencoders with Built-in Noise Invariance
Sparse autoencoders are useful for extracting low-dimensional representations from high-dimensional data. However, their performance degrades sharply when the noise in the input at test time differs from the noise employed during training. In this talk, we formalize single-layer sparse autoencoders as a transform learning problem. Leveraging this transform modeling interpretation, we propose an optimization problem whose solution yields a predictive model that is invariant to the noise level at test time; in other words, a single model generalizes across different noise levels. The optimization algorithm is translated into a new, computationally efficient autoencoder architecture. After proving that the proposed approach is invariant to the noise level, we train networks based on this architecture on denoising problems. The trained models are significantly more stable across different types of noise than commonly used architectures.
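To make the transform modeling interpretation concrete, here is a minimal sketch of a single-layer sparse autoencoder written as a learned transform followed by soft-thresholding. The names (`W`, `lam`), the tied-weight decoder, and the use of the l1 proximal operator are illustrative assumptions for this sketch, not necessarily the exact formulation presented in the talk.

```python
import numpy as np

def soft_threshold(v, lam):
    """Elementwise soft-thresholding: the proximal operator of the l1 norm.
    Shrinks each entry toward zero by lam, zeroing small entries (sparsity)."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def sparse_autoencode(x, W, lam):
    """Single-layer sparse autoencoder in the transform learning view:
    encode the input with a learned transform W and a sparsifying
    nonlinearity, then decode with the transpose (tied weights)."""
    z = soft_threshold(W @ x, lam)  # sparse code of the input
    x_hat = W.T @ z                 # linear reconstruction
    return z, x_hat
```

In this view, the threshold `lam` is the knob tied to the noise level: a model whose effective threshold adapts to the input noise, rather than being fixed at training time, is what the invariance property targets.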
M.Sc. student under the supervision of Dr. Yaniv Romano.