Publications

    O. Litany, T. Remez, A. M. Bronstein, Image reconstruction from dense binary pixels, arXiv:1512.01774
    D. Eynard, A. Kovnatsky, M. M. Bronstein, K. Glashoff, A. M. Bronstein, Multimodal manifold analysis using simultaneous diagonalization of Laplacians, IEEE Trans. Pattern Analysis and Machine Intelligence (PAMI), Vol. 37(12), 2015

    We construct an extension of spectral and diffusion geometry to multiple modalities through simultaneous diagonalization of Laplacian matrices. This naturally extends classical data analysis tools based on spectral geometry, such as diffusion maps and spectral clustering. We provide several synthetic and real examples of manifold learning, retrieval, and clustering demonstrating that the joint spectral geometry frequently better captures the inherent structure of multi-modal data. We also show the relation of many previous approaches to multimodal manifold analysis to our framework, of which they can be seen as particular cases.
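A minimal NumPy sketch of the idea (not the paper's actual joint-diagonalization optimization, which only diagonalizes approximately): in the special case where the joint basis is taken as the eigenbasis of the sum of two Laplacians, the sum is diagonalized exactly, and the residual off-diagonal energy on each individual Laplacian measures how far the two modalities are from sharing a spectral geometry. The graphs and weights below are hypothetical toy data.

```python
import numpy as np

def graph_laplacian(W):
    # Unnormalized Laplacian L = D - W of a symmetric weight matrix.
    return np.diag(W.sum(axis=1)) - W

# Two toy "modalities": the same 4-node path graph with different edge weights.
W1 = np.array([[0., 1., 0., 0.],
               [1., 0., 1., 0.],
               [0., 1., 0., 1.],
               [0., 0., 1., 0.]])
W2 = np.array([[0., 2., 0., 0.],
               [2., 0., 1., 0.],
               [0., 1., 0., 2.],
               [0., 0., 2., 0.]])
L1, L2 = graph_laplacian(W1), graph_laplacian(W2)

# Crude joint basis: the eigenbasis of L1 + L2.
eigvals, V = np.linalg.eigh(L1 + L2)

def offdiag_energy(L, V):
    # Frobenius norm of the off-diagonal part of V^T L V.
    M = V.T @ L @ V
    return np.linalg.norm(M - np.diag(np.diag(M)))
```

Since V^T L1 V + V^T L2 V is diagonal by construction, the two off-diagonal residuals are equal in norm; a genuinely coupled optimization (as in the paper) would trade them off against each other.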

    T. Remez, O. Litany, A. M. Bronstein, A Picture is Worth a Billion Bits: Real-time image reconstruction from dense binary pixels, arXiv:1510.04601

    The pursuit of smaller pixel sizes at ever-increasing resolution in digital image sensors is mainly driven by the stringent price and form-factor requirements of sensors and optics in the cellular phone market. Recently, Eric Fossum proposed a novel concept of an image sensor with dense sub-diffraction limit one-bit pixels (jots), which can be considered a digital emulation of silver halide photographic film. This idea has been recently embodied as the EPFL Gigavision camera. A major bottleneck in the design of such sensors is the image reconstruction process, producing a continuous high dynamic range image from oversampled binary measurements. The extreme quantization of the Poisson statistics is incompatible with the assumptions of most standard image processing and enhancement frameworks. The recently proposed maximum-likelihood (ML) approach addresses this difficulty, but suffers from image artifacts and has impractically high computational complexity. In this work, we study a variant of a sensor with binary threshold pixels and propose a reconstruction algorithm combining an ML data fitting term with a sparse synthesis prior. We also show an efficient hardware-friendly real-time approximation of this inverse operator. Promising results are shown on synthetic data as well as on HDR data emulated using multiple exposures of a regular CMOS sensor.
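In the flat-field case the ML data term has a closed form, which the following sketch illustrates (the paper handles spatially varying images with a sparse prior on top of this likelihood; the intensity and jot count below are hypothetical): each threshold-1 jot fires with probability 1 - exp(-lam), so inverting the empirical firing rate gives the ML intensity estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
lam_true = 0.8            # hypothetical mean photon count per jot
n = 200_000               # one-bit jots assumed to observe the same intensity

photons = rng.poisson(lam_true, size=n)
bits = (photons >= 1).astype(float)     # threshold-1 binary measurements

# Invert P(bit = 1) = 1 - exp(-lam) at the empirical firing rate:
# the closed-form ML estimate of the light intensity.
p_hat = bits.mean()
lam_hat = -np.log1p(-p_hat)
```

Note how the estimate saturates as p_hat approaches 1, reflecting the extreme quantization discussed in the abstract.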

    A. M. Bronstein, New dimensions of media, Universidad La Salle, Revista de ciencias de la computación, Vol. 3(1), 2015
    H. Haim, A. M. Bronstein, E. Marom, Computational all-in-focus imaging using an optical phase mask, OSA Optics Express, Vol. 23(19), 2015

    A method for extended depth of field imaging based on image acquisition through a thin binary phase plate followed by fast automatic computational post-processing is presented. By placing a wavelength dependent optical mask inside the pupil of a conventional camera lens, one acquires a unique response for each of the three main color channels, which adds valuable information that allows blind reconstruction of blurred images without the need of an iterative search process for estimating the blurring kernel. The presented simulation, as well as the capture of a real-life scene, shows how acquiring a one-shot image focused at a single plane enables generating a de-blurred scene over an extended range in space.
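The paper's reconstruction exploits per-channel PSF differences induced by the phase mask; as a simple stand-in, here is single-channel non-blind Wiener deconvolution with a known Gaussian PSF, the most basic instance of the computational deblurring step (scene, PSF width, and regularization level are all hypothetical):

```python
import numpy as np

def wiener_deblur(blurred, psf, nsr=1e-3):
    # Frequency-domain Wiener filter: H* / (|H|^2 + NSR).
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * G))

# Toy scene: a bright square on a dark background.
n = 64
img = np.zeros((n, n))
img[24:40, 24:40] = 1.0

# Gaussian PSF centered in the frame (stand-in for one channel's blur).
y, x = np.mgrid[:n, :n]
psf = np.exp(-((x - n // 2) ** 2 + (y - n // 2) ** 2) / (2 * 2.0 ** 2))
psf /= psf.sum()

# Simulated acquisition (circular convolution) and restoration.
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_deblur(blurred, psf)
```

In the paper the kernel is not assumed known; the color-dependent mask response is what removes the need for the blind kernel search.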

    R. Litman, S. Korman, A. M. Bronstein, S. Avidan, GMD: Global model detection via inlier rate estimation, Proc. Computer Vision and Pattern Recognition (CVPR), 2015

    This work presents a novel approach for detecting inliers in a given set of correspondences (matches). It does so without explicitly identifying any consensus set, based on a method for inlier rate estimation (IRE). Given such an estimator for the inlier rate, we also present an algorithm that detects a globally optimal transformation. We provide a theoretical analysis of the IRE method using a stochastic generative model on the continuous spaces of matches and transformations. This model allows rigorous investigation of the limits of our IRE method for the case of 2D translation, further giving bounds and insights for the more general case. Our theoretical analysis is validated empirically and is shown to hold in practice for the more general case of 2D affinities. In addition, we show that the combined framework works on challenging cases of 2D homography estimation, with very few and possibly noisy inliers, where RANSAC generally fails.
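For the 2D-translation case analyzed in the paper, the idea can be sketched as follows (this is a naive grid search, not the paper's IRE estimator or its optimality machinery; all data below are synthetic and hypothetical): the candidate transformation explaining the largest fraction of matches simultaneously estimates the inlier rate and detects the global model.

```python
import numpy as np

rng = np.random.default_rng(1)
t_true = np.array([3.0, -2.0])   # hypothetical ground-truth translation
n_in, n_out = 70, 30             # 70% inlier rate

src = rng.uniform(0, 100, size=(n_in + n_out, 2))
dst = np.empty_like(src)
dst[:n_in] = src[:n_in] + t_true + rng.normal(0, 0.1, size=(n_in, 2))
dst[n_in:] = rng.uniform(0, 100, size=(n_out, 2))   # spurious matches

eps = 1.0
def inlier_rate(t):
    # Fraction of matches explained by translation t up to tolerance eps.
    return np.mean(np.linalg.norm(dst - (src + t), axis=1) < eps)

# Global search over a coarse candidate grid of translations.
grid = [np.array([tx, ty]) for tx in range(-5, 6) for ty in range(-5, 6)]
t_best = max(grid, key=inlier_rate)
```

Unlike RANSAC, no consensus set is ever explicitly identified: only the rate estimate is maximized over the transformation space.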

    I. Sipiran, B. Bustos, T. Schreck, A. M. Bronstein, M. M. Bronstein, U. Castellani, S. Choi, L. Lai, H. Li, R. Litman, L. Sun, SHREC'15 Track: Scalability of non-rigid 3D shape retrieval, Proc. EUROGRAPHICS Workshop on 3D Object Retrieval (3DOR), 2015

    Due to recent advances in 3D acquisition and modeling, increasingly large amounts of 3D shape data become available in many application domains. This raises the need not only for effective methods for 3D shape retrieval, but also for efficient retrieval and robust implementations. Previous 3D retrieval challenges have mainly considered data sets in the range of a few thousand queries. In the 2015 SHREC track on Scalability of 3D Shape Retrieval we provide a benchmark with more than 96 thousand shapes. The data set is based on a non-rigid retrieval benchmark enhanced by other existing shape benchmarks. From the baseline models, a large set of partial objects were automatically created by simulating a range-image acquisition process. Four teams have participated in the track, with most methods providing very good to near-perfect retrieval results, and one less complex baseline method providing fair performance. Timing results indicate that three of the methods including the latter baseline one provide near-interactive time query execution. Generally, the cost of data pre-processing varies depending on the method.

    X. Bian, H. Krim, A. M. Bronstein, L. Dai, Sparse null space basis pursuit and analysis dictionary learning for high-dimensional data analysis, Proc. Int'l Conf. on Acoustics, Speech, and Signal Processing (ICASSP), 2015

    Sparse models in dictionary learning have been successfully applied in a wide variety of machine learning and computer vision problems, and have recently been of increasing research interest. Another interesting related problem based on a linear equality constraint, namely the sparse null space (SNS) problem, first appeared in 1986, and has since inspired results on sparse basis pursuit. In this paper, we investigate the relation between the SNS problem and the analysis dictionary learning problem, and show that the SNS problem plays a central role and may be utilized to solve dictionary learning problems. Moreover, we propose an efficient algorithm for sparse null space basis pursuit, and extend it to a solution of analysis dictionary learning. Experimental results on numerical synthetic data and real-world data are further presented to validate the performance of our method.
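The SNS objective (finding the sparsest nonzero vectors in the null space of a matrix) is combinatorial in general; the paper pursues it efficiently, but a brute-force scan over supports makes the objective concrete on a hypothetical toy matrix (exponential cost, illustration only):

```python
import numpy as np
from itertools import combinations

def sparsest_null_vector(A, tol=1e-10):
    """Brute-force the sparsest nonzero null vector of A by scanning
    supports in order of increasing size (exponential; toy illustration
    of the SNS objective, not the paper's pursuit algorithm)."""
    m, n = A.shape
    for k in range(1, n + 1):
        for sup in combinations(range(n), k):
            cols = list(sup)
            sub = A[:, cols]
            s = np.linalg.svd(sub, compute_uv=False)
            if s[-1] < tol:               # rank-deficient on this support
                # right singular vector of the smallest singular value
                v = np.linalg.svd(sub)[2][-1]
                x = np.zeros(n)
                x[cols] = v
                return x
    return None

# Toy matrix whose sparsest null vector is the 2-sparse (1, 0, 0, -1).
A = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 0.]])
x = sparsest_null_vector(A)
```

A standard SVD-based null space basis of this A would generically be dense; the sparse basis is what the analysis-dictionary connection in the paper exploits.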

    Y. Aflalo, A. M. Bronstein, R. Kimmel, On convex relaxation of graph isomorphism, Proc. US National Academy of Sciences (PNAS), 2015

    We consider the problem of exact and inexact matching of weighted undirected graphs, in which a bijective correspondence is sought to minimize a quadratic weight disagreement. This computationally challenging problem is often relaxed as a convex quadratic program, in which the space of permutations is replaced by the space of doubly stochastic matrices. However, the applicability of such a relaxation is poorly understood. We define a broad class of friendly graphs characterized by an easily verifiable spectral property. We prove that for friendly graphs, the convex relaxation is guaranteed to find the exact isomorphism or certify its inexistence. This result is further extended to approximately isomorphic graphs, for which we develop an explicit bound on the amount of weight disagreement under which the relaxation is guaranteed to find the globally optimal approximate isomorphism. We also show that in many cases, the graph matching problem can be further harmlessly relaxed to a convex quadratic program with only n separable linear equality constraints, which is substantially more efficient than the standard relaxation involving 2n equality and n² inequality constraints. Finally, we show that our results are still valid for unfriendly graphs if additional information in the form of seeds or attributes is allowed, with the latter satisfying an easy to verify spectral characteristic.
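The "easily verifiable spectral property" can be checked numerically; a sketch of such a check (a plausible reading of the friendliness conditions — all eigenvalues simple and no eigenvector orthogonal to the constant vector — applied to hypothetical toy weight matrices):

```python
import numpy as np

def is_friendly(A, tol=1e-8):
    """Check a friendliness-style spectral property of a symmetric
    weight matrix: all eigenvalues simple, and no eigenvector
    orthogonal to the constant vector."""
    w, V = np.linalg.eigh(A)
    simple = np.all(np.diff(w) > tol)          # eigh returns sorted eigenvalues
    ones = np.ones(A.shape[0])
    non_orthogonal = np.all(np.abs(V.T @ ones) > tol)
    return bool(simple and non_orthogonal)

# A weighted 3-node path with unequal weights passes the check ...
A_weighted = np.array([[0., 1., 0.],
                       [1., 0., 2.],
                       [0., 2., 0.]])

# ... while the unweighted 3-node path fails: its zero-eigenvalue
# eigenvector (1, 0, -1) is orthogonal to the constant vector.
A_path = np.array([[0., 1., 0.],
                   [1., 0., 1.],
                   [0., 1., 0.]])
```

Symmetric graphs tend to fail such a check, which matches the intuition that automorphisms are exactly what makes the doubly stochastic relaxation lose tightness.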

    P. Sprechmann, A. M. Bronstein, G. Sapiro, Learning efficient sparse and low-rank models, IEEE Trans. Pattern Analysis and Machine Intelligence (PAMI), Vol. 37(9), 2015

    Parsimony, including sparsity and low rank, has been shown to successfully model data in numerous machine learning and signal processing tasks. Traditionally, parsimonious modeling approaches rely on an iterative algorithm that minimizes an objective function with parsimony-promoting terms. The inherently sequential structure and data-dependent complexity and latency of iterative optimization constitute a major limitation in many applications requiring real-time performance or involving large-scale data. Another limitation encountered by these models is the difficulty of their inclusion in supervised learning scenarios, where the higher-level training objective would depend on the solution of the lower-level pursuit problem. The resulting bilevel optimization problems are in general notoriously difficult to solve. In this paper, we propose to move the emphasis from the model to the pursuit algorithm, and develop a process-centric view of parsimonious modeling, in which a deterministic fixed-complexity pursuit process is used in lieu of iterative optimization. We show a principled way to construct learnable pursuit process architectures for structured sparse and robust low rank models from the iteration of proximal descent algorithms. These architectures approximate the exact parsimonious representation with a fraction of the complexity of the standard optimization methods. We also show that carefully chosen training regimes allow us to naturally extend parsimonious models to discriminative settings. State-of-the-art results are demonstrated on several challenging problems in image and audio processing with several orders of magnitude speedup compared to the exact optimization algorithms.
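A minimal sketch of the process-centric idea: truncating ISTA to a fixed number of identical "layers" (affine map plus soft-thresholding) yields a fixed-complexity pursuit encoder. Here the layer weights are taken directly from the dictionary rather than learned, and the dictionary and signal are hypothetical; in the paper these weights would be trained end-to-end.

```python
import numpy as np

def soft(x, theta):
    # Soft-thresholding: the proximal operator of the l1 norm.
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def unrolled_ista(D, y, lam, n_layers=500):
    """Fixed-depth pursuit: every 'layer' applies the same affine map
    followed by soft-thresholding, i.e. one ISTA step
    x <- soft(x + (1/L) D^T (y - D x), lam / L)."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    W = np.eye(D.shape[1]) - (D.T @ D) / L
    b = (D.T @ y) / L
    x = np.zeros(D.shape[1])
    for _ in range(n_layers):
        x = soft(W @ x + b, lam / L)
    return x

rng = np.random.default_rng(0)
D = rng.normal(size=(30, 60))
D /= np.linalg.norm(D, axis=0)             # unit-norm atoms
x_true = np.zeros(60)
x_true[[5, 17, 42]] = [1.5, -2.0, 1.0]
y = D @ x_true
lam = 0.05
x_hat = unrolled_ista(D, y, lam)
```

The depth is fixed in advance, so the latency is data-independent; making W and b free trainable parameters is what turns this truncated solver into the learnable architecture of the paper.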

    P. Sprechmann, A. M. Bronstein, G. Sapiro, Supervised non-negative matrix factorization for audio source separation, Chapter in Excursions in Harmonic Analysis (R. Balan, M. Begue, J. J. Benedetto, W. Czaja, K. Okoudjou Eds.), Birkhaeuser, 2015

    Source separation is a widely studied problem in signal processing. Despite the permanent progress reported in the literature, it is still considered a significant challenge. This chapter first reviews the use of non-negative matrix factorization (NMF) algorithms for solving source separation problems, and proposes a new way for the supervised training in NMF. Matrix factorization methods have received a lot of attention in recent years in the audio processing community, producing particularly good results in source separation. Traditionally, NMF algorithms consist of two separate stages: a training stage, in which a generative model is learned; and a testing stage in which the pre-learned model is used in a high level task such as enhancement, separation, or classification. As an alternative, we propose a task-supervised NMF method for the adaptation of the basis spectra learned in the first stage to enhance the performance on the specific task used in the second stage. We cast this problem as a bilevel optimization program efficiently solved via stochastic gradient descent. The proposed approach is general enough to handle sparsity priors of the activations, and to allow non-Euclidean data terms such as beta-divergences. The framework is evaluated on speech enhancement.
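The testing stage described above can be sketched in a few lines (Euclidean NMF with multiplicative updates and hypothetical one-atom bases; the chapter's contribution is the supervised adaptation of these bases, not shown here): with the pre-learned spectra W held fixed, only the activations H are inferred from the mixture, and each source is reconstructed from its own atoms.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pre-learned spectral bases for two sources (one atom each, hypothetical).
w1 = np.array([1.0, 0.5, 0.1, 0.0])
w2 = np.array([0.0, 0.2, 1.0, 0.7])
W = np.column_stack([w1, w2])

# Ground-truth activations over 10 frames and the mixture "spectrogram".
H_true = rng.uniform(0.5, 2.0, size=(2, 10))
V = W @ H_true

# Testing stage: multiplicative updates for H with W held fixed
# (Euclidean data term; the chapter also allows beta-divergences).
H = np.ones_like(H_true)
for _ in range(500):
    H *= (W.T @ V) / (W.T @ W @ H + 1e-12)

# Per-source reconstructions from the factorized mixture.
S1 = np.outer(w1, H[0])
S2 = np.outer(w2, H[1])
```

The multiplicative form keeps H non-negative throughout, and the two partial reconstructions sum back to the fitted mixture by construction.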