Publications

  1. S. Vedula, O. Senouf, A. M. Bronstein, O. V. Michailovich, M. Zibulevsky, Towards CT-quality ultrasound imaging using deep learning, arXiv:1710.06304, 2017

    The cost-effectiveness and practical harmlessness of ultrasound imaging have made it one of the most widespread tools for medical diagnosis. Unfortunately, the beamforming-based image formation produces granular speckle noise, blurring, shading and other artifacts. To overcome these effects, the ultimate goal would be to reconstruct the tissue acoustic properties by solving a full wave propagation inverse problem. In this work, we take a step towards this goal, using Multi-Resolution Convolutional Neural Networks (CNN). As a result, we are able to reconstruct CT-quality images from the reflected ultrasound radio-frequency (RF) data obtained by simulation from real CT scans of a human body. We also show that the CNN is able to imitate existing computationally heavy despeckling methods, thereby saving orders of magnitude in computations and making them amenable to real-time applications.

  2. O. Litany, T. Remez, E. Rodolà, A. M. Bronstein, M. M. Bronstein, Deep Functional Maps: Structured prediction for dense shape correspondence, Proc. Int'l Conf. on Computer Vision (ICCV), 2017

    We introduce a new framework for learning dense correspondence between deformable 3D shapes. Existing learning based approaches model shape correspondence as a labelling problem, where each point of a query shape receives a label identifying a point on some reference domain; the correspondence is then constructed a posteriori by composing the label predictions of two input shapes. We propose a paradigm shift and design a structured prediction model in the space of functional maps, linear operators that provide a compact representation of the correspondence. We model the learning process via a deep residual network which takes dense descriptor fields defined on two shapes as input, and outputs a soft map between the two given objects. The resulting correspondence is shown to be accurate on several challenging benchmarks comprising multiple categories, synthetic models, real scans with acquisition artifacts, topological noise, and partiality.
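
    For readers unfamiliar with the functional-map representation the paper builds on, the following is a minimal NumPy sketch of the classical (non-learned) pipeline: a functional map C is fitted by least squares from descriptor coefficients, and a point-to-point map is then recovered by nearest-neighbour search in the aligned spectral embeddings. The deep network in the paper replaces the least-squares fit with a learned residual model; the function names and the basis size k below are only illustrative.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def functional_map_from_descriptors(Phi_X, Phi_Y, F, G, k=60):
        """Least-squares functional map from shape X to shape Y.

        Phi_X, Phi_Y: (n, k) truncated Laplace-Beltrami eigenbases of the two shapes
        F, G:         (n, d) corresponding dense descriptor fields on X and Y
        """
        A = Phi_X[:, :k].T @ F          # descriptor coefficients on X, shape (k, d)
        B = Phi_Y[:, :k].T @ G          # descriptor coefficients on Y, shape (k, d)
        # Fit C such that C @ A ~ B, i.e. solve A.T @ C.T ~ B.T in the least-squares sense
        Ct, *_ = np.linalg.lstsq(A.T, B.T, rcond=None)
        return Ct.T                     # (k, k) functional map

    def pointwise_map(C, Phi_X, Phi_Y, k=60):
        """Recover a point-to-point map by nearest neighbours in the C-aligned spectral embedding."""
        tree = cKDTree(Phi_Y[:, :k])
        _, idx = tree.query(Phi_X[:, :k] @ C.T)
        return idx                      # idx[i] is the vertex of Y matched to vertex i of X
    ```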

  3. Z. Laehner, M. Vestner, A. Boyarski, O. Litany, R. Slossberg, T. Remez, E. Rodolà, A. M. Bronstein, M. M. Bronstein, R. Kimmel, D. Cremers, Efficient deformable shape correspondence via kernel matching, Proc. 3D Vision (3DV), 2017

    We present a method to match three dimensional shapes under non-isometric deformations, topology changes and partiality. We formulate the problem as matching between a set of pair-wise and point-wise descriptors, imposing a continuity prior on the mapping, and propose a projected descent optimization procedure inspired by difference of convex functions (DC) programming. Surprisingly, in spite of the highly non-convex nature of the resulting quadratic assignment problem, our method converges to a semantically meaningful and continuous mapping in most of our experiments, and scales well. We provide preliminary theoretical analysis and several interpretations of the method.

  4. G. Alexandroni, Y. Podolsky, H. Greenspan, T. Remez, O. Litany, A. M. Bronstein, R. Giryes, White matter fiber representation using continuous dictionary learning, Proc. Int'l Conf. Medical Image Computing & Computer Assisted Intervention (MICCAI), 2017

    With increasingly sophisticated Diffusion Weighted MRI acquisition methods and modelling techniques, very large sets of streamlines (fibers) are presently generated per imaged brain. These reconstructions of white matter architecture, which are important for human brain research and pre-surgical planning, require a large amount of storage and are often unwieldy and difficult to manipulate and analyze. This work proposes a novel continuous parsimonious framework in which signals are sparsely represented in a dictionary with continuous atoms. The significant innovation in our new methodology is the ability to train such continuous dictionaries, unlike previous approaches that either used pre-fixed continuous transforms or training with finite atoms. This leads to an innovative fiber representation method, which uses Continuous Dictionary Learning to sparsely code each fiber with high accuracy. This method is tested on numerous tractograms produced from the Human Connectome Project data and achieves state-of-the-art performance in compression ratio and reconstruction error.

  5. M. Vestner, R. Litman, E. Rodolà, A. M. Bronstein, D. Cremers, Product Manifold Filter: Non-rigid shape correspondence via kernel density estimation in the product space, Proc. Computer Vision and Pattern Recognition (CVPR), 2017

    Many algorithms for the computation of correspondences between deformable shapes rely on some variant of nearest neighbor matching in a descriptor space. Examples include various point-wise correspondence recovery algorithms used as a post-processing stage in the functional correspondence framework. These frequently used techniques implicitly make restrictive assumptions (e.g., near-isometry) on the considered shapes and in practice suffer from a lack of accuracy and result in poor surjectivity. We propose an alternative recovery technique capable of guaranteeing a bijective correspondence and producing significantly higher accuracy and smoothness. Unlike other methods, our approach does not depend on the assumption that the analyzed shapes are isometric. We derive the proposed method from the statistical framework of kernel density estimation and demonstrate its performance on several challenging deformable 3D shape matching datasets.
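
    As a rough illustration of this kernel-density-estimation view (a sketch, not the authors' exact implementation), the recovery can be seen as repeatedly solving a linear assignment problem whose scores couple Gaussian kernels built from the two shapes' intrinsic distances. The kernel width, annealing factor and iteration count below are placeholder assumptions.

    ```python
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def gaussian_kernel(D, sigma):
        # Gaussian kernel computed from a matrix of intrinsic (e.g. geodesic) distances
        return np.exp(-D ** 2 / (2.0 * sigma ** 2))

    def kernel_matching(D_X, D_Y, init, sigma=0.2, iters=10, anneal=0.9):
        """Sharpen a noisy initial correspondence into a bijection (illustrative sketch)."""
        n = D_X.shape[0]
        Pi = np.zeros((n, n))
        Pi[np.arange(n), init] = 1.0      # init[i] is the initial match on Y of vertex i on X
        for _ in range(iters):
            K_X, K_Y = gaussian_kernel(D_X, sigma), gaussian_kernel(D_Y, sigma)
            score = K_X @ Pi @ K_Y        # score[i, j]: agreement of the match i -> j with the current map
            row, col = linear_sum_assignment(score, maximize=True)   # bijective update
            Pi = np.zeros((n, n))
            Pi[row, col] = 1.0
            sigma *= anneal               # gradually sharpen the kernels
        return Pi.argmax(axis=1)
    ```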

  6. O. Litany, E. Rodolà, A. M. Bronstein, M. M. Bronstein, Fully spectral partial shape matching, Computer Graphics Forum, Vol. 36(2), 2017

    We propose an efficient procedure for calculating partial dense intrinsic correspondence between deformable shapes performed entirely in the spectral domain. Our technique relies on the recently introduced partial functional maps formalism and on the joint approximate diagonalization (JAD) of the Laplace-Beltrami operators previously introduced for matching non-isometric shapes. We show that a variant of the JAD problem with an appropriately modified coupling term (surprisingly) makes it possible to construct quasi-harmonic bases localized on the latent corresponding parts. This circumvents the need to explicitly compute the unknown parts by means of the cumbersome alternating minimization used in previous approaches, and allows all calculations to be performed in the spectral domain with constant complexity independent of the number of shape vertices. We provide an extensive evaluation of the proposed technique on standard non-rigid correspondence benchmarks and show state-of-the-art performance in various settings, including partiality and the presence of topological noise.

  7. A. Boyarski, A. M. Bronstein, M. M. Bronstein, Subspace least squares multidimensional scaling, Proc. Scale Space and Variational Methods (SSVM), 2017

    Multidimensional Scaling (MDS) is one of the most popular methods for dimensionality reduction and visualization of high-dimensional data. Apart from these tasks, it has also found applications in the field of geometry processing for the analysis and reconstruction of non-rigid shapes. In this regard, MDS can be thought of as a shape-from-metric algorithm, consisting of finding a configuration of points in the Euclidean space that realizes, as isometrically as possible, some given distance structure. In the present work we cast the least squares variant of MDS (LS-MDS) in the spectral domain. This uncovers a multiresolution property of distance scaling which speeds up the optimization by a significant amount, while producing comparable, and sometimes even better, embeddings.
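
    To make the idea concrete, here is a small NumPy sketch of least-squares MDS (the classical SMACOF/Guttman iteration) with the embedding constrained to a given subspace, e.g. one spanned by the first Laplace-Beltrami eigenvectors. The paper's actual multiresolution scheme and stopping criteria are not reproduced; the function below is only an illustration of the subspace idea.

    ```python
    import numpy as np
    from scipy.spatial.distance import pdist, squareform

    def subspace_lsmds(D, Phi, alpha0, iters=100):
        """LS-MDS (SMACOF) with the embedding X = Phi @ alpha restricted to the span of Phi.

        D:      (n, n) matrix of target (e.g. geodesic) distances
        Phi:    (n, k) subspace basis, e.g. the first Laplace-Beltrami eigenvectors
        alpha0: (k, m) initial subspace coordinates of the m-dimensional embedding
        """
        n = D.shape[0]
        Phi_pinv = np.linalg.pinv(Phi)
        alpha = alpha0.copy()
        for _ in range(iters):
            X = Phi @ alpha
            E = np.maximum(squareform(pdist(X)), 1e-12)   # current pairwise Euclidean distances
            B = -D / E
            np.fill_diagonal(B, 0.0)
            np.fill_diagonal(B, -B.sum(axis=1))           # SMACOF majorization matrix B(X)
            alpha = Phi_pinv @ (B @ X) / n                # Guttman transform, projected back to the subspace
        return Phi @ alpha
    ```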

  8. T. Remez, O. Litany, R. Giryes, A. M. Bronstein, Deep class-aware image denoising, Proc. Int'l Conf. on Image Processing (ICIP), 2017

    The increasing demand for high image quality in mobile devices brings forth the need for better computational enhancement techniques, and image denoising in particular. To this end, we propose a new fully convolutional deep neural network architecture which is simple yet powerful and achieves state-of-the-art performance for additive Gaussian noise removal. Furthermore, we claim that personal photo collections can usually be categorized into a small set of semantic classes. However simple, this observation has not been exploited in image denoising until now. We show that a significant boost in performance of up to 0.4 dB PSNR can be achieved by making our network class-aware, namely, by fine-tuning it for images belonging to a specific semantic class. Relying on the hugely successful existing image classifiers, this research advocates for using a class-aware approach in all image enhancement tasks.
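
    As an illustration of the class-aware fine-tuning step (the paper's actual architecture and training schedule are not reproduced here), the PyTorch sketch below defines a generic fully convolutional residual denoiser and fine-tunes it on clean grayscale images of a single semantic class corrupted with additive Gaussian noise. The noise level and hyperparameters are placeholders.

    ```python
    import torch
    import torch.nn as nn

    class SimpleDenoiser(nn.Module):
        """Generic fully convolutional denoiser predicting a noise residual (illustrative only)."""
        def __init__(self, channels=64, depth=8):
            super().__init__()
            layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
            for _ in range(depth - 2):
                layers += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True)]
            layers.append(nn.Conv2d(channels, 1, 3, padding=1))
            self.net = nn.Sequential(*layers)

        def forward(self, x):
            return x - self.net(x)      # subtract the estimated noise from the input

    def finetune_for_class(model, class_loader, sigma=25.0 / 255.0, epochs=5, lr=1e-4):
        """Fine-tune a pretrained generic denoiser on clean images of one semantic class."""
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        mse = nn.MSELoss()
        for _ in range(epochs):
            for clean in class_loader:                          # batches of shape (B, 1, H, W)
                noisy = clean + sigma * torch.randn_like(clean) # additive white Gaussian noise
                opt.zero_grad()
                loss = mse(model(noisy), clean)
                loss.backward()
                opt.step()
        return model
    ```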

  9. O. Litany, T. Remez, A. M. Bronstein, Cloud Dictionary: Sparse coding and modeling for point clouds, arXiv:1612.04956, 2017

    With the development of range sensors such as LIDAR and time-of-flight cameras, 3D point cloud scans have become ubiquitous in computer vision applications, the most prominent ones being gesture recognition and autonomous driving. Parsimony-based algorithms have shown great success on images and videos where data points are sampled on a regular Cartesian grid. We propose an adaptation of these techniques to irregularly sampled signals by using continuous dictionaries. We present an example application in the form of point cloud denoising.
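
    One simple way to realize the idea of a continuous dictionary is to define atoms as closed-form functions that can be evaluated at arbitrary, irregular sample locations, and then sparse-code the sampled signal. The abstract does not specify the atoms or the solver, so the Gaussian radial atoms and the ISTA iterations below are assumptions made purely for illustration.

    ```python
    import numpy as np

    def sample_continuous_dictionary(centers, widths, x):
        """Evaluate K Gaussian radial atoms at irregular sample positions x (n, d); returns (n, K)."""
        sq = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-sq / (2.0 * widths[None, :] ** 2))

    def ista_sparse_code(D, y, lam=0.1, iters=200):
        """Sparse-code the sampled signal y in the dictionary D with an l1 penalty (ISTA)."""
        L = np.linalg.norm(D, 2) ** 2       # Lipschitz constant of the gradient of the data term
        z = np.zeros(D.shape[1])
        for _ in range(iters):
            g = z - D.T @ (D @ z - y) / L                          # gradient step
            z = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft thresholding
        return z
    ```

    A denoised version of the signal is then obtained by re-evaluating the sparse combination D @ z at the original (or any other) sample locations.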

  10. T. Remez, O. Litany, R. Giryes, A. M. Bronstein, Deep class-aware denoising, arXiv:1701.01698, 2017

    The increasing demand for high image quality in mobile devices brings forth the need for better computational enhancement techniques, and image denoising in particular. At the same time, the images captured by these devices can be categorized into a small set of semantic classes. However simple, this observation has not been exploited in image denoising until now. In this paper, we demonstrate how the reconstruction quality improves when a denoiser is aware of the type of content in the image. To this end, we first propose a new fully convolutional deep neural network architecture which is simple yet powerful as it achieves state-of-the-art performance even without being class-aware. We further show that a significant boost in performance of up to 0.4 dB PSNR can be achieved by making our network class-aware, namely, by fine-tuning it for images belonging to a specific semantic class. Relying on the hugely successful existing image classifiers, this research advocates for using a class-aware approach in all image enhancement tasks.

  11. T. Remez, O. Litany, R. Giryes, A. M. Bronstein, Deep convolutional denoising of low-light images, arXiv:1701.01687, 2017

    The Poisson distribution is used for modeling noise in photon-limited imaging. While canonical examples include relatively exotic types of sensing like spectral imaging or astronomy, the problem is relevant to regular photography now more than ever due to the booming market for mobile cameras. The restricted form factor limits the amount of absorbed light, so computational post-processing is called for. In this paper, we make use of the powerful framework of deep convolutional neural networks for Poisson denoising. We demonstrate how, by training the same network with images having a specific peak value, our denoiser outperforms the previous state-of-the-art by a large margin both visually and quantitatively. Being flexible and data-driven, our solution resolves the heavy ad hoc engineering used in previous methods and is an order of magnitude faster. We further show that by adding a reasonable prior on the class of the image being processed, another significant boost in performance is achieved.
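
    The "peak value" mentioned above refers to the common protocol for simulating photon-limited data: the clean image in [0, 1] is scaled so that its maximum expected photon count equals the peak, per-pixel Poisson counts are drawn, and the result is rescaled before being passed to the denoiser. A minimal sketch of this simulation step (the network itself is omitted):

    ```python
    import numpy as np

    def simulate_photon_limited(clean, peak, seed=0):
        """Simulate photon-limited acquisition of a clean image with values in [0, 1]."""
        rng = np.random.default_rng(seed)
        counts = rng.poisson(clean * peak)        # per-pixel photon counts; lower peak = stronger noise
        return counts.astype(np.float32) / peak   # rescale back to [0, 1] for the denoising network
    ```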

  12. O. Litany, T. Remez, D. Freedman, L. Shapira, A. M. Bronstein, R. Gal, ASIST: Automatic Semantically Invariant Scene Transformation, Computer Vision and Image Understanding, Vol. 157, 2017

    We present ASIST, a technique for transforming point clouds by replacing objects with their semantically equivalent counterparts. Transformations of this kind have applications in virtual reality, repair of fused scans, and robotics. ASIST is based on a unified formulation of semantic labeling and object replacement; both result from minimizing a single objective. We present numerical tools for the efficient solution of this optimization problem. The method is experimentally assessed on new datasets of both synthetic and real point clouds, and is additionally compared to two recent works on object replacement on data from the corresponding papers.

  13. M. Ovsjanikov, E. Corman, M. M. Bronstein, E. Rodolà, M. Ben-Chen, L. Guibas, F. Chazal, A. M. Bronstein, Computing and processing correspondences with functional maps, SIGGRAPH Courses, 2017

    Notions of similarity and correspondence between geometric shapes and images are central to many tasks in geometry processing, computer vision, and computer graphics. The goal of this course is to familiarize the audience with a set of recent techniques that greatly facilitate the computation of mappings or correspondences between geometric datasets, such as 3D shapes or 2D images, by formulating them as mappings between functions rather than points or triangles. Methods based on the functional map framework have recently led to state-of-the-art results in problems as diverse as non-rigid shape matching, image co-segmentation and even some aspects of tangent vector field design. One challenge in adopting these methods in practice, however, is that their exposition often assumes a significant amount of background in geometry processing, spectral methods and functional analysis, which can make it difficult to gain an intuition about their performance or about their applicability to real-life problems. In this course, we try to provide all the tools necessary to appreciate and use these techniques, while assuming very little background knowledge. We also give a unifying treatment of these techniques, which may be difficult to extract from the individual publications and, at the same time, hint at the generality of this point of view, which can help tackle many problems in the analysis and creation of visual content. This course is structured as a half-day course. We will assume that the participants have knowledge of basic linear algebra and some knowledge of differential geometry, to the extent of being familiar with the concepts of a manifold and a tangent vector space. We will discuss in detail the functional approach to finding correspondences between non-rigid shapes, the design and analysis of tangent vector fields on surfaces, consistent map estimation in networks of shapes, and applications to shape and image segmentation, shape variability analysis, and other areas.