- Image Communication
- Computer Graphics
- Pattern Recognition
- Speech and Audio Processing
- Medical Imaging
The research in this area is focused on developing algorithms for the representation, transmission, and processing of signals and information. The research encompasses theory, applications, and implementations with contributions in both pure and applied science. Research directions related to this area span a wide range of disciplines, including image processing, image synthesis / computer graphics, low-level vision, bio-inspired vision, physics-based vision, computational photography as well as mid and high-level vision, including search-by-content, pattern recognition and classification. It also includes multimedia and multidimensional signal processing with applications to speech, audio, image and video; statistical and probabilistic analysis of random processes; denoising and robust filter design; analysis and modeling of acoustic signals; system identification and adaptive filtering; geometry-based data analysis and modeling; biomedical signal processing; and computational neuroscience.
Beginning with computer graphics, the research deals with understanding and analyzing shapes embedded in 3D (Prof. A. Tal). Related topics are mesh completion, mesh reconstruction, shape-based similarity and retrieval, detection of feature curves on meshes, and visibility and saliency detection. Prof. Tal and her research group collaborate with archaeologists, since shape is a major characteristic in archaeology. They seek algorithms that can handle the special (and difficult) models that archaeologists are interested in. In particular, they have developed state-of-the-art algorithms for reconstructing an object from its fragments (puzzle solving), as well as for completing broken artifacts, re-colorizing them, and sketching them for documentation. Computation of injective simplicial maps with low distortion has a wide range of applications in computer vision and computer graphics. Prof. Y. Zeevi adopts this geometric approach to two- and three-dimensional mappings for the purpose of change detection and quantification. The mappings with minimal distortion are interpreted in the context of variational calculus. This approach has been applied in medical imaging and in the analysis of one-dimensional signals embedded in spectrogram surfaces and manifolds.
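To make the low-distortion criterion concrete, one standard per-simplex measure from the literature (stated here as an illustration, not necessarily the exact functional used in this research) exploits the fact that a simplicial map is affine on each simplex. With Jacobian $A$ whose singular values satisfy $\sigma_1(A) \ge \sigma_2(A) > 0$, injectivity requires orientation preservation, $\det A > 0$, and a common isometric distortion measure is

```latex
D(A) \;=\; \max\!\left( \sigma_1(A),\; \frac{1}{\sigma_2(A)} \right),
\qquad \det A > 0 ,
```

so that $D(A) = 1$ exactly when the simplex undergoes a rotation; minimizing the maximal (or aggregate) distortion over all simplices yields a low-distortion injective map.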
In the context of image analysis and processing, both theoretical and applied aspects of noise removal from corrupted images and multi-dimensional data sets are investigated, in particular for medical imaging (Prof. M. Porat). The scope of the work ranges from establishing fundamental performance limits, through studying universality and algorithmic aspects, to experimentation with simulated and real data. Fundamental theoretical research was initiated by Prof. G. Gilboa, formulating nonlinear systems and processes through nonlinear eigenvalue analysis. A rich theory has been developed, based on convex-analysis principles. Notions common in the linear framework, such as eigenspaces, transforms, Parseval's theorem, and Rayleigh quotients, were generalized to the nonlinear convex setting. This enhances the understanding of nonlinear signal and image processing and opens new design approaches, especially for signals containing discontinuities and phase transitions. In image analysis and processing it is often advantageous to embed images in surfaces and higher-dimensional manifolds. Hence, the geometrical richness of manifolds of dimension higher than two is exploited to develop a geometrical approach to sampling and reconstruction of surfaces and higher-dimensional manifolds. Additional research topics, studied by Prof. T. Michaeli and his group, include efficient deep neural network architectures for image restoration and classification, new deep generative models with applications in image editing and manipulation, and the use of deep learning for optimal design of optical imaging systems (particularly in microscopy).
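The nonlinear eigenvalue framework mentioned above can be illustrated by its standard formulation for convex, absolutely one-homogeneous functionals, with total variation as the canonical example (the specific functionals studied may differ). A nonlinear eigenfunction $u$ of a convex functional $J$ is defined through the subdifferential inclusion

```latex
\lambda u \in \partial J(u), \qquad
J(u) = \mathrm{TV}(u) = \int_\Omega |\nabla u|\,dx .
```

For one-homogeneous $J$, taking the inner product with $u$ and using $J(u) = \langle p, u \rangle$ for any $p \in \partial J(u)$ yields the generalized Rayleigh quotient $\lambda = J(u)/\|u\|_2^2$, mirroring the linear case.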
In bio-inspired vision, i.e., research motivated by neurobiology, the goal has been to implement in technology the principles of organization, architecture, and processing encountered in biological vision. An example is the spatio-temporal AGC (automatic gain control) algorithm for acquisition and enhancement of visual data over a wide dynamic range. This research originally resulted in the development of the adaptive-sensitivity camera (Prof. Y. Zeevi and Prof. R. Ginosar). The fundamental model of AGC in vision was further developed in recent years to enhance additional visual dimensions such as curvature and depth. Motivated by neurobiological experimental research conducted by Prof. S. Marom and his group at the Center for Biological Networks, using cortical tissue culture interfaced with the computer by means of microelectronics, Y. Zeevi and his group developed new paradigms for search by content. Autonomous AI technology has emerged from this research (Cortica Ltd.).
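The core idea behind such gain control can be sketched as local normalization: each pixel is divided by a local average, so bright regions are attenuated and dim regions amplified, compressing a wide dynamic range. The following is a minimal illustrative sketch only, not the actual adaptive-sensitivity camera algorithm; the window size and stabilizing constant are arbitrary choices.

```python
import numpy as np

def local_agc(image, window=15, eps=1e-3):
    """Toy spatial AGC: divide each pixel by its local mean.

    Illustrative sketch of adaptive gain control; bright regions are
    attenuated and dim regions amplified (dynamic-range compression).
    """
    pad = window // 2
    padded = np.pad(image, pad, mode="edge")
    # Box-filter local mean via a cumulative sum (integral image).
    c = np.cumsum(np.cumsum(padded, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))
    h, w = image.shape
    local_sum = (c[window:window + h, window:window + w]
                 - c[:h, window:window + w]
                 - c[window:window + h, :w]
                 + c[:h, :w])
    local_mean = local_sum / (window * window)
    return image / (local_mean + eps)

# A scene with a 1000:1 dynamic range between its two halves.
scene = np.ones((64, 64))
scene[:, 32:] *= 1000.0
out = local_agc(scene)
```

After normalization, the interior of both halves maps close to 1, and the overall dynamic range of the output is smaller than that of the input.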
A mid-level vision research program is concerned with the problem of visual tracking. This is a fundamental issue in controlled active vision, for which many different approaches have been proposed. The tracking problem is also treated at the level of acquisition tools (Prof. M. Porat). In high-level vision, the understanding of images and video is investigated. In particular, neural networks are used to enhance the beamforming process, achieving improved resolution and a reduced side-lobe effect (Prof. A. Feuer). The same technology is being applied to motion estimation via the Doppler effect in both ultrasound and radar applications, with similar results (Prof. A. Feuer). A further application of deep-learning technology is investigated in the context of seismic signals, with the purpose of detecting underground spaces (tunnels) and optimizing the seismic-wave excitation signal for that purpose (Prof. A. Feuer).
Physics-based vision comprises studies that account for physical models of image formation, and even affect image formation, coupled with computational image analysis (Prof. Y. Schechner). In recent years, a major focus of this approach has been the analysis of heterogeneous multiply-scattering media such as tissue (in X-ray) and the atmosphere. Towards this, his group derived scattering-based computed tomography (CT). The results are about to be tested from low Earth orbit using new, dedicated space missions. If successful, these imaging and analysis methods should help answer climate questions. An additional noteworthy use of the physics-based approach is a way to sense and analyze the AC electric grid through wide-field imaging of bulb flicker.
Computer Vision: Image and video understanding and synthesis (Prof. L. Zelnik-Manor).
The research of this group revolves around understanding visual content, which is at the core of computer vision. The understanding, analysis, and efficient representation of images and videos are key enablers of numerous real-world applications. In recent years the group has focused its research on two main areas. The first is the study of neural network design, with the goal of obtaining models with high performance both in terms of accuracy and in terms of resource efficiency. This includes studying and proposing efficient methods for searching and training high-performance backbone architectures, focusing on the key computer vision tasks of single- and multi-label image classification. The second area aims at making images and videos accessible to people with visual impairment by converting them into tactile models. This required research into both theory and hardware design, to open up a new world of “seeing” through touch for the visually impaired.
Multimedia and Multidimensional Signal Processing (Prof. D. Malah).
Processing of multimedia signals such as speech, audio, image and video is very important for efficient storage and transmission over bit-rate-limited channels. This involves signal analysis and representation, signal enhancement and error concealment, efficient delivery of high-rate encoded video to low bit-rate destinations, and reversed-complexity coding. Coding of this type is useful in applications where the encoder has limited power and computational resources, e.g., in cellular phones, aerial video, and sensor-level implementations. Multidimensional signal processing includes 3D point-cloud processing for registration and segmentation of 3D scenes.
Speech and Audio Processing (Prof. D. Malah, Prof. I. Cohen, Prof. R. Talmon).
Research activity focuses on various aspects of speech and audio processing. This includes statistical modeling of speech signals, designing speech enhancement systems, low-footprint text-to-speech (TTS) synthesis for computer voice response, speech bandwidth extension, voice conversion, keyword spotting, deep-learning methods for speech and audio processing, analysis and modeling of acoustic signals, microphone arrays, source localization, blind source separation, system identification, and adaptive filtering. The research in this area is motivated by numerous applications, including hands-free communication and narrowband telephony transmission, voice over IP (VoIP), hearing aids, speech recognition, teleconferencing systems, and mobile phones. The research involves both traditional signal processing algorithms adapted to this setting, such as beamforming, denoising, and statistical models, as well as new deep-learning methods that serve as a basis for designated audio processing algorithms.
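Among the traditional techniques mentioned above, delay-and-sum beamforming is the simplest microphone-array method: phase-align the sensors toward a look direction and average, so a source in that direction adds coherently while noise and interferers do not. The sketch below is a generic narrowband illustration with a uniform linear array (array size, spacing, and noise level are arbitrary assumptions, not parameters from this research).

```python
import numpy as np

rng = np.random.default_rng(0)

def steering(theta, n_mics, spacing=0.5):
    """Narrowband steering vector of a uniform linear array.

    `spacing` is the inter-mic distance in wavelengths
    (half-wavelength assumed here).
    """
    n = np.arange(n_mics)
    return np.exp(-2j * np.pi * spacing * n * np.sin(theta))

n_mics, n_snap = 8, 200
theta_src = np.deg2rad(20.0)  # true source direction
s = rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap)
noise = 0.1 * (rng.standard_normal((n_mics, n_snap))
               + 1j * rng.standard_normal((n_mics, n_snap)))
x = np.outer(steering(theta_src, n_mics), s) + noise  # mic snapshots

def das_power(theta):
    """Average output power of a delay-and-sum beamformer at `theta`."""
    w = steering(theta, n_mics) / n_mics  # align phases, then average
    y = w.conj() @ x
    return np.mean(np.abs(y) ** 2)

# Scan the beamformer over all directions; the peak localizes the source.
angles = np.deg2rad(np.linspace(-90, 90, 181))
powers = np.array([das_power(a) for a in angles])
est = np.rad2deg(angles[np.argmax(powers)])
```

Scanning the output power over look directions doubles as a simple source-localization method; the side lobes visible away from the peak are exactly what the neural-network-enhanced beamformers mentioned earlier aim to suppress.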
Image and Video Processing (Prof. Tomer Michaeli, Prof. D. Malah)
Various aspects of image and video processing are being investigated. In addition to video coding, these include image and video interpolation methods, denoising and deblurring techniques, super-resolution, and analysis of three-dimensional fMRI brain images based on new geometrical tools. Other areas include hyperspectral image analysis for anomaly detection, 2D/3D object recognition, and bandwidth control, including video transcoding and resampling with minimal effect on image quality. Part of the research in this area is motivated by the role of biological vision in image perception and analysis. Fundamental limitations in restoration and compression of signals and images are also studied. One example is the observation and theoretical analysis of the perception-distortion tradeoff in signal restoration, proving that higher perceptual quality necessarily comes at the cost of higher distortion. This phenomenon also arises in lossy compression as a rate-distortion-perception tradeoff. Theoretical research on these topics has a direct impact on the way researchers compare image restoration and compression methods.
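Following the published formulation, the perception-distortion tradeoff can be stated via the perception-distortion function of a restoration problem, where $X$ is the source signal, $Y$ the degraded measurement, $\hat{X}$ the estimate, $\Delta$ a distortion measure, and $d$ a divergence between distributions:

```latex
D(P) \;=\; \min_{p_{\hat{X} \mid Y}} \; \mathbb{E}\!\left[ \Delta(X, \hat{X}) \right]
\quad \text{s.t.} \quad d\!\left( p_X,\, p_{\hat{X}} \right) \le P .
```

$D(P)$ is non-increasing and, for suitable divergences, convex; hence demanding better perceptual quality (a smaller bound $P$ on the deviation from the natural-image distribution) forces a strictly higher minimal distortion.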
Statistical Signal Processing and Random Processes (Prof. I. Cohen, Prof. R. Talmon)
Besides the detailed applications outlined above, various more theoretical aspects are being investigated and developed. These include estimation and detection methods, systematic design techniques for biased and robust estimators, noise removal from corrupted signals and corrupted data sets, performance bounds in general parameter estimation problems, and study of the geometrical structure of random processes defined over regions in space. The latter is important in many signal processing applications, for example the processing of two- and three-dimensional images. The work in this area is mainly theoretical, combining statistics, optimization, probability, and geometry, but it also has an applied side. The scope ranges from establishing fundamental performance limits, through studying universality and algorithmic aspects, to data experimentation. Part of the work relies on various convex optimization methods, which are also studied within our group.
Data Analysis and Modeling (Prof. R. Talmon, Prof. G. Gilboa, Prof. I. Cohen)
Research in this area covers data analysis and modeling, including the development of intrinsic representations, multimodal data analysis and fusion, and manifold learning for network analysis. Intrinsic representations are useful for building models from observations that describe the observed phenomena in terms of their physical attributes. Multimodal data analysis facilitates the construction of efficient low-dimensional representations of data, which characterize the common structures and the differences between the different modalities. Network analysis is addressed from the perspective of diffusion operators, leading to new dynamic connectivity maps between data sets. In the emerging field of signal processing on graphs, connectivity maps assume a central role, where typically prior knowledge is used for their construction. In this context, the capability to extract dynamic connectivity maps from data observations circumvents the need for such prior knowledge and the bias it inherently embodies.
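The diffusion-operator viewpoint can be sketched via classical diffusion maps (a textbook construction, not this group's specific algorithms): build a Gaussian affinity kernel over the data, row-normalize it into a Markov (diffusion) operator, and use its leading non-trivial eigenvectors as a low-dimensional intrinsic embedding. The kernel bandwidth and the test data below are arbitrary choices for illustration.

```python
import numpy as np

def diffusion_map(points, epsilon=1.0, dim=2, t=1):
    """Classical diffusion-maps embedding (illustrative sketch)."""
    # Pairwise squared distances and Gaussian affinity kernel.
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    k = np.exp(-d2 / epsilon)
    # Row-normalize into a Markov (diffusion) operator P = D^{-1} K.
    d = k.sum(axis=1)
    p = k / d[:, None]
    # Symmetric conjugate A = D^{1/2} P D^{-1/2} for a stable eigensolve.
    a = np.sqrt(d)[:, None] * p / np.sqrt(d)[None, :]
    vals, vecs = np.linalg.eigh(a)
    order = np.argsort(vals)[::-1]
    vals, vecs = vals[order], vecs[:, order]
    # Recover right eigenvectors of P; drop the trivial constant one.
    psi = vecs / np.sqrt(d)[:, None]
    return psi[:, 1:dim + 1] * (vals[1:dim + 1] ** t)

rng = np.random.default_rng(1)
# Noisy circle: intrinsically one-dimensional data observed in 2D.
angles = rng.uniform(0, 2 * np.pi, 200)
pts = np.c_[np.cos(angles), np.sin(angles)] + 0.01 * rng.standard_normal((200, 2))
emb = diffusion_map(pts, epsilon=0.5, dim=2)
```

The diffusion time `t` controls the scale of the connectivity structure captured by the embedding, which is what makes the operator viewpoint natural for dynamic connectivity maps.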
Statistical Data Analysis (Prof. Y. Romano)
Work is focused on designing machine learning systems that can be safely deployed in high-stakes applications. The tools being developed can be viewed as statistical wrappers and protective layers that can be integrated seamlessly with any predictive model to guarantee that data-driven decisions and inferences are valid under practical, testable, and realistic assumptions. Concretely, we focus on the following challenging problems in modern data analysis: (i) Enhancing the interpretability of complex predictive models (such as deep neural nets) by casting this task as a multiple hypothesis testing problem. (ii) Improving the reliability of black-box predictions by constructing valid uncertainty estimates, pushing flexible statistical tools such as conformal prediction and cross-validation to new heights. (iii) Designing models that are robust to nuisance parameters, by learning representations that are invariant to such perturbations. Furthermore, to improve predictive performance, novel learning schemes are invented, designed to interact optimally with the proposed statistical wrappers, leading to highly efficient and responsible data-driven solutions.
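Split conformal prediction, one of the statistical wrappers mentioned above, can be sketched in a few lines (a generic textbook sketch, not the group's specific methods): hold out a calibration set, score it with the absolute residuals of any fitted predictor, and use a finite-sample-corrected quantile of those scores to widen point predictions into intervals with a distribution-free coverage guarantee, assuming only exchangeability. The data, model, and miscoverage level below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic regression data: y = 2x + noise.
n = 2000
x = rng.uniform(-1, 1, n)
y = 2 * x + 0.3 * rng.standard_normal(n)

# Split: proper training set / calibration set / test set.
x_tr, y_tr = x[:800], y[:800]
x_cal, y_cal = x[800:1600], y[800:1600]
x_te, y_te = x[1600:], y[1600:]

# Any predictive model works; here, least squares on (1, x).
coef, *_ = np.linalg.lstsq(np.c_[np.ones_like(x_tr), x_tr], y_tr, rcond=None)
predict = lambda xs: coef[0] + coef[1] * xs

# Conformity scores: absolute residuals on the calibration set.
scores = np.abs(y_cal - predict(x_cal))
alpha = 0.1  # target miscoverage (90% intervals)
n_cal = len(scores)
# Finite-sample-corrected empirical quantile of the scores.
q = np.quantile(scores,
                min(1.0, np.ceil((n_cal + 1) * (1 - alpha)) / n_cal),
                method="higher")

# Prediction intervals: point prediction widened by q on both sides.
lower, upper = predict(x_te) - q, predict(x_te) + q
coverage = np.mean((y_te >= lower) & (y_te <= upper))
```

The guarantee holds regardless of which predictor is wrapped, which is exactly the "integrates seamlessly with any predictive model" property described above; only the interval width reflects the model's quality.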