Matthieu Simeoni

matthieu.simeoni@epfl.ch +41 21 693 74 58 https://matthieumeo.github.io/
Citizenship: French
Birth date: 26.08.1991
EPFL IMAGING
BM 4143 (Bâtiment BM)
Station 17
CH-1015 Lausanne
Office: BM 4143
Unit: EPFL > VPA > VPA-AVP-CP > IMAGING > IMAGING-GE

Unit: EPFL > IC > IC-SSC > SSC-ENS
Web site: https://ssc.epfl.ch

EPFL IC IINFCOM LCAV
BC 332 (Bâtiment BC)
Station 14
CH-1015 Lausanne
Web site: https://lcav.epfl.ch/

Unit: EPFL > IC > IC-SIN > SIN-ENS
Web site: https://sin.epfl.ch
Education
Docteur ès Sciences (PhD)
"Functional Inverse Problems on Spheres: Theory, Algorithms and Applications" Advisors: Prof. Martin Vetterli, Prof. Victor Panaretos and Prof. Paul Hurley.
EPFL
2015-2019
Master of Science
Applied Mathematics
EPFL
2013-2015
Bachelor of Science
Mathematics
EPFL
2011-2013
Classes Préparatoires aux Grandes Écoles
Mathematics and Physics
CIV, Sophia Antipolis, France
2009-2011
Baccalauréat Scientifique
Specializing in Mathematics and Physics
Lycée Thierry-Maulnier
2009
Awards
IBM Research Prize in Computational Science
This prize promotes research in computational sciences and recognizes outstanding master theses focused on advanced modelling and simulation methods.
2015
Publications
Infoscience publications
[1] Functional estimation of anisotropic covariance and autocovariance operators on the sphere
We propose nonparametric estimators for the second-order central moments of possibly anisotropic spherical random fields, within a functional data analysis context. We consider a measurement framework where each random field among an identically distributed collection of spherical random fields is sampled at a few random directions, possibly subject to measurement error. The collection of random fields could be i.i.d. or serially dependent. Though similar setups have already been explored for random functions defined on the unit interval, the nonparametric estimators proposed in the literature often rely on local polynomials, which do not readily extend to the (product) spherical setting. We therefore formulate our estimation procedure as a variational problem involving a generalized Tikhonov regularization term. The latter favours smooth covariance/autocovariance functions, where the smoothness is specified by means of suitable Sobolev-like pseudo-differential operators. Using the machinery of reproducing kernel Hilbert spaces, we establish representer theorems that fully characterize the form of our estimators. We determine their uniform rates of convergence as the number of random fields diverges, both for the dense (increasing number of spatial samples) and sparse (bounded number of spatial samples) regimes. We moreover demonstrate the computational feasibility and practical merits of our estimation procedure in a simulation setting, assuming a fixed number of samples per random field. Our numerical estimation procedure leverages the sparsity and second-order Kronecker structure of our setup to reduce the computational and memory requirements by approximately three orders of magnitude compared to what a naive implementation would require.
Electronic Journal of Statistics
2022
DOI: 10.1214/22-EJS2064
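In generic form, the variational formulation sketched in this abstract can be written as below; the symbols (sampled fields X_i, sampling directions x_ij, smoothing operator D, regularization parameter lambda) are illustrative notation rather than the paper's own:

    % Generic Tikhonov-regularized covariance estimator (illustrative notation)
    \widehat{\Sigma} \;=\; \operatorname*{arg\,min}_{\Sigma}\;
      \sum_{i=1}^{N} \sum_{j \neq k}
      \Big( X_i(x_{ij})\, X_i(x_{ik}) - \Sigma(x_{ij}, x_{ik}) \Big)^{2}
      \;+\; \lambda \,\big\| D \Sigma \big\|_{L^{2}(\mathbb{S}^{2}\times\mathbb{S}^{2})}^{2}

The representer theorems mentioned above then reduce this infinite-dimensional problem to a finite-dimensional one over kernel sections, which is what makes the estimator computable.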
[2] Privacy-Enhancing Optical Embeddings for Lensless Classification
Lensless imaging can provide visual privacy due to the highly multiplexed characteristic of its measurements. However, this alone is a weak form of security, as various adversarial attacks can be designed to invert the one-to-many scene mapping of such cameras. In this work, we enhance the privacy provided by lensless imaging by (1) downsampling at the sensor and (2) using a programmable mask with variable patterns as our optical encoder. We build a prototype from a low-cost LCD and Raspberry Pi components, for a total cost of around 100 USD. This very low price point allows our system to be deployed and leveraged in a broad range of applications. In our experiments, we first demonstrate the viability and reconfigurability of our system by applying it to various classification tasks: MNIST, CelebA (face attributes), and CIFAR10. By jointly optimizing the mask pattern and a digital classifier in an end-to-end fashion, low-dimensional, privacy-enhancing embeddings are learned directly at the sensor. Secondly, we show how the proposed system, through variable mask patterns, can thwart adversaries that attempt to invert the system (1) via plaintext attacks or (2) in the event of camera parameter leaks. We demonstrate our system's defense against both risks, with 55% and 26% drops in image quality metrics for attacks based on model-based convex optimization and generative neural networks respectively. We open-source a wave propagation and camera simulator needed for end-to-end optimization, the training software, and a library for interfacing with the camera.
2022
[3] pyFFS: A Python Library for Fast Fourier Series Computation and Interpolation with GPU Acceleration
Fourier transforms are an often necessary component in many computational tasks, and can be computed efficiently through the fast Fourier transform (FFT) algorithm. However, many applications involve an underlying continuous signal, and a more natural choice would be to work with e.g. the Fourier series (FS) coefficients in order to avoid the additional overhead of translating between the analog and discrete domains. Unfortunately, there exist very few tools and little literature for the manipulation of FS coefficients from discrete samples. This paper introduces a Python library called pyFFS for efficient FS coefficient computation, convolution, and interpolation. While the libraries SciPy and NumPy provide efficient functionality for discrete Fourier transform coefficients via the FFT algorithm, pyFFS addresses the computation of FS coefficients through what we call the fast Fourier series (FFS). Moreover, pyFFS includes an FS interpolation method based on the chirp Z-transform that can make it more than an order of magnitude faster than the SciPy equivalent when one wishes to perform interpolation. GPU support through the CuPy library allows for further acceleration, e.g. an order of magnitude faster for computing the 2-D FS coefficients of 1000 x 1000 samples and nearly two orders of magnitude faster for 2-D interpolation. As an application, we discuss the use of pyFFS in Fourier optics. pyFFS is available as an open source package at https://github.com/imagingofthings/pyFFS, with documentation at https://pyffs.readthedocs.io.
SIAM Journal on Scientific Computing
2022-08-18
DOI: 10.1137/21M1448641
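The core idea behind the fast Fourier series is easy to demonstrate: for a T-periodic signal bandlimited to |k| <= N, the FS coefficients follow exactly from an FFT of M = 2N + 1 uniform samples. A minimal NumPy sketch of that idea (pyFFS itself additionally handles arbitrary sample offsets, multidimensional grids, chirp Z-transform interpolation, and GPU backends; the signal below is illustrative):

    import numpy as np

    # T-periodic signal bandlimited to |k| <= N: its Fourier series (FS)
    # coefficients follow exactly from an FFT of M = 2N + 1 uniform samples.
    T, N = 1.0, 8
    M = 2 * N + 1
    t = np.arange(M) * T / M                   # uniform samples of one period

    f = 1.0 + 2.0 * np.cos(2 * np.pi * t / T)  # ground truth: c_0 = c_{+/-1} = 1

    c = np.fft.fftshift(np.fft.fft(f)) / M     # FS coefficients for k = -N..N
    k = np.arange(-N, N + 1)
    print({int(kk): round(cc.real, 6) for kk, cc in zip(k, c) if abs(cc) > 1e-9})
    # -> {-1: 1.0, 0: 1.0, 1: 1.0}, matching the analytic coefficients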
[4] Learning rich optical embeddings for privacy-preserving lensless image classification
By replacing the lens with a thin optical element, lensless imaging enables new applications and solutions beyond those supported by traditional camera design and post-processing, e.g. compact and lightweight form factors and visual privacy. The latter arises from the highly multiplexed measurements of lensless cameras, which require knowledge of the imaging system to recover a recognizable image. In this work, we exploit this unique multiplexing property: casting the optics as an encoder that produces learned embeddings directly at the camera sensor. We do so in the context of image classification, where we jointly optimize the encoder's parameters and those of an image classifier in an end-to-end fashion. Our experiments show that jointly learning the lensless optical encoder and the digital processing allows for lower resolution embeddings at the sensor, and hence better privacy as it is much harder to recover meaningful images from these measurements. Additional experiments show that such an optimization allows for lensless measurements that are more robust to typical real-world image transformations. While this work focuses on classification, the proposed programmable lensless camera and end-to-end optimization can be applied to other computational imaging tasks.
2022
[5] LenslessPiCam: A Hardware and Software Platform for Lensless Computational Imaging with a Raspberry Pi
Lensless imaging seeks to replace/remove the lens in a conventional imaging system. The earliest cameras were in fact lensless, relying on long exposure times to form images on the other end of a small aperture in a darkened room/container (camera obscura). The introduction of a lens allowed for more light throughput and therefore shorter exposure times, while retaining sharp focus. The incorporation of digital sensors readily enabled the use of computational imaging techniques to post-process and enhance raw images (e.g. via deblurring, inpainting, denoising, sharpening). Recently, imaging scientists have started leveraging computational imaging as an integral part of lensless imaging systems, allowing them to form viewable images from the highly multiplexed raw measurements of lensless cameras. This represents a real paradigm shift in camera system design as there is more flexibility to cater the hardware to the application at hand (e.g. lightweight or flat designs). This increased flexibility comes however at the price of a more demanding post-processing of the raw digital recordings and a tighter integration of sensing and computation, often difficult to achieve in practice due to inefficient interactions between the various communities of scientists involved. With LenslessPiCam, we provide an easily accessible hardware and software framework to enable researchers, hobbyists, and students to implement and explore practical and computational aspects of lensless imaging. We also provide detailed guides and exercises so that LenslessPiCam can be used as an educational resource, and point to results from our graduate-level signal processing course.
2022
[6] Une version polyatomique de l'algorithme Frank-Wolfe pour résoudre le problème LASSO en grandes dimensions
We consider the sparse reconstruction of images by means of the LASSO regularized optimization problem. In many practical applications, the large dimensions of the objects to be reconstructed limit, or even preclude, the use of classical proximal solvers. This is the case, for instance, in radio astronomy. In this article, we describe the workings of the Polyatomic Frank-Wolfe algorithm, designed specifically to solve the LASSO problem in such demanding contexts. We demonstrate its superiority over proximal methods in high-dimensional settings with Fourier measurements, by solving simulated problems inspired by radio interferometry.
2022. 28ème Colloque Gretsi'22, Nancy, France, September 5-9, 2022.
[7] A Fast and Scalable Polyatomic Frank-Wolfe Algorithm for the LASSO
We propose a fast and scalable Polyatomic Frank-Wolfe (P-FW) algorithm for the resolution of high-dimensional LASSO regression problems. The latter improves upon traditional Frank-Wolfe methods by considering generalized greedy steps with polyatomic (i.e. linear combinations of multiple atoms) update directions, hence allowing for a more efficient exploration of the search space. To preserve sparsity of the intermediate iterates, we moreover re-optimize the LASSO problem over the set of selected atoms at each iteration. For efficiency reasons, the accuracy of this re-optimization step is relatively low for early iterations and gradually increases with the iteration count. We provide convergence guarantees for our algorithm and validate it in simulated compressed sensing setups. Our experiments reveal that P-FW outperforms state-of-the-art methods in terms of runtime, both for FW methods and optimal first-order proximal gradient methods such as the Fast Iterative Soft-Thresholding Algorithm (FISTA).
IEEE Signal Processing Letters
2022-02-08
DOI: 10.1109/LSP.2022.3149377
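For reference, the classic single-atom Frank-Wolfe iteration that P-FW generalizes can be written in a few lines for the constrained form of the LASSO, min_x ||y - Ax||^2 subject to ||x||_1 <= tau. This sketch keeps only the textbook one-atom step; the polyatomic variant described above instead selects several atoms per iteration and re-optimizes over the active set. The problem dimensions and data below are illustrative:

    import numpy as np

    # Classic single-atom Frank-Wolfe for the constrained LASSO,
    #     min_x ||y - A x||^2   subject to   ||x||_1 <= tau.
    def frank_wolfe_lasso(A, y, tau, n_iter=200):
        x = np.zeros(A.shape[1])
        for it in range(n_iter):
            grad = A.T @ (A @ x - y)        # gradient of 0.5 * ||y - Ax||^2
            i = np.argmax(np.abs(grad))     # linear minimization oracle:
            s = np.zeros_like(x)            # best vertex of the l1-ball
            s[i] = -tau * np.sign(grad[i])
            gamma = 2.0 / (it + 2.0)        # standard Frank-Wolfe step size
            x = (1 - gamma) * x + gamma * s
        return x

    rng = np.random.default_rng(0)
    A = rng.standard_normal((100, 400))
    x_true = np.zeros(400)
    x_true[[7, 42]] = [1.5, -2.0]
    y = A @ x_true
    x_hat = frank_wolfe_lasso(A, y, tau=np.abs(x_true).sum())
    print(np.sort(np.argsort(np.abs(x_hat))[-2:]).tolist())  # should recover [7, 42]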
[9] Lecture Slides: Mathematical Foundations of Signal Processing
Signal processing tools are presented from an intuitive geometric point of view which is at the heart of all modern signal processing techniques. The student will develop the mathematical depth and rigour needed for the study of advanced topics in signal processing and approximation theory.
2021-03-17
[10] Impact Environnemental du Numérique à l’EPFL
EPFL, an academic institution of international standing, has committed to honouring the Paris climate agreement, ratified by Switzerland on 6 October 2017, and is now preparing to deploy its know-how and expertise to drastically cut its greenhouse gas (GHG) emissions. The targets, set by the Swiss Confederation, are the following: to bring GHG emissions down to 50% of their 2006 level by 2030, and to reach carbon neutrality by 2050. To meet these requirements, the first step is an accurate assessment of the institution's environmental footprint. In this regard, digital technology, although a spearhead of innovation at EPFL, is very often omitted from environmental impact studies even though it currently accounts for about 4% of global GHG emissions, a share expected to approach 8% by 2025 ((The Shift Project, 2018) and (Andrae, A., & Edler, T., 2015)). Digital technology also puts considerable pressure on fresh water, primary energy, and abiotic resources (rare minerals). The depletion of these resources directly threatens the viability of several industries, including the IT industry. In this report, we therefore propose an assessment of the overall environmental footprint of digital technology use at EPFL by staff and students. Our analysis is not limited to GHG emissions, but also considers the pressure on fresh water as well as energy and abiotic resource depletion. This document was peer-reviewed by a committee of international experts. We estimate the gross carbon impact of digital technology at EPFL at the substantial figure of 7,900 t CO2-eq per year, the equivalent of about 3,500 round-trip Paris-New York flights in economy class, or of 400 cars over their full life cycle. By way of comparison, this represents 25% of EPFL's overall carbon emissions in 2017 (32,000 t CO2-eq), as well as 103% and 68%, respectively, of the emissions associated with commuting (7,700 t CO2-eq) and business travel (11,700 t CO2-eq) that same year. In addition, digital technology at EPFL consumes annually about 121,000 m3 of fresh water, 240,300 GJ of primary energy, and 541 kg Sb-eq of abiotic resources. The Swiss Ecofactors 2013 method published by the Swiss Federal Office for the Environment (FOEN) allows these different impacts to be compared by converting them to environmental load points (UBP). The cumulative impact of digital technology at EPFL across these four metrics then amounts to 5,061,701 kUBP, with the pressure on water accounting for 0.9% of the total (44,723 kUBP), abiotic resources 11.8% (595,061 kUBP), energy depletion 16.1% (817,019 kUBP), and GHG emissions 71.2% (3,604,898 kUBP). It further appears that 40 to 50% of EPFL's environmental impact is due to consumer electronic equipment (desktop computers, laptops, mobile phones, tablets, etc.). It is also estimated that more than half of the impact of electronic devices is attributable to their production. In this respect, our institution suffers from an over-equipment problem. According to the EPFL inventory, an average staff member at EPFL is equipped by the school with 3.3 computers (desktops and laptops combined). We estimate the financial loss due to this over-equipment at nearly 7.4 million Swiss francs per year, and the loss due to excess electricity consumption at an additional 1 million francs annually.
By limiting the renewal of electronic hardware, bringing the equipment rate down to a more reasonable value, and promoting recycling, EPFL could therefore significantly reduce its digital environmental footprint and secure substantial means to drive the transition towards a sustainable digital model. To this end, we recommend the creation of a centralized management service. Finally, on the basis of the available data, we roughly estimated that EPFL's digital footprint would need to be reduced by nearly 18% per year until 2030 to meet the federal obligations. Even if this figure could be refined by a more thorough study, the target seems so ambitious that it will probably be necessary to offset the excess digital impact with additional reductions in other areas. Integrating digital technology into EPFL's annual environmental assessment therefore seems indispensable to us.
2021-03-01
[11] SiML: Sieved Maximum Likelihood for Array Signal Processing
Stochastic Maximum Likelihood (SML) is a popular direction of arrival (DOA) estimation technique in array signal processing. It is a parametric method that jointly estimates signal and instrument noise by maximum likelihood, achieving excellent statistical performance. Some drawbacks are the computational overhead as well as the limitation to a point-source data model with fewer sources than sensors. In this work, we propose a Sieved Maximum Likelihood (SiML) method. It uses a general functional data model, allowing an unrestricted number of arbitrarily-shaped sources to be recovered. To this end, we leverage functional analysis tools and express the data in terms of an infinite-dimensional sampling operator acting on a Gaussian random function. We show that SiML is computationally more efficient than traditional SML, resilient to noise, and results in much better accuracy than spectral-based methods.
2021-02-03. 2021 IEEE 46th International Conference on Acoustics, Speech and Signal Processing (ICASSP), Online conference, June 6-11, 2021. p. 4535-4539. DOI: 10.1109/ICASSP39728.2021.9414991.
[12] TV-based reconstruction of periodic functions
We introduce a general framework for the reconstruction of periodic multivariate functions from finitely many and possibly noisy linear measurements. The reconstruction task is formulated as a penalized convex optimization problem, taking the form of a sum between a convex data fidelity functional and a sparsity-promoting total variation based penalty involving a suitable spline-admissible regularizing operator L. In this context, we establish a periodic representer theorem, showing that the extreme-point solutions are periodic L-splines with fewer knots than the number of measurements. The main results are specified for the broadest classes of measurement functionals, spline-admissible operators, and convex data fidelity functionals. We exemplify our results for various regularization operators and measurement types (e.g., spatial sampling, Fourier sampling, or square-integrable functions). We also consider the reconstruction of both univariate and multivariate periodic functions.
Inverse Problems
2020-11-01
DOI: 10.1088/1361-6420/abbd7e
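In generic notation, the class of problems covered by this periodic representer theorem reads as follows, where F is a convex data-fidelity functional, nu(f) collects the M linear measurements of f, L is the spline-admissible regularizing operator, and the M-norm is the total variation of a measure (symbols illustrative, not the paper's exact notation):

    \min_{f \,\text{periodic}} \; F\big(y, \boldsymbol{\nu}(f)\big)
      \;+\; \lambda\, \|\mathrm{L} f\|_{\mathcal{M}}

The theorem then states that the extreme points of the solution set are periodic L-splines with fewer knots than measurements.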
[13] Fourier Tools
The FFT algorithm is a key pillar of modern numerical computing. This document is a collection of working notes on FFT-based algorithms. Efficient implementations of these algorithms are made available through the pyFFS package.
2020-09-30
[14] Functional penalised basis pursuit on spheres
In this work, we propose a unified theoretical and practical spherical approximation framework for functional inverse problems on the hypersphere. More specifically, we consider recovering spherical fields directly in the continuous domain using functional penalised basis pursuit problems with gTV regularisation terms. Our framework is compatible with various measurement types as well as non-differentiable convex cost functionals. Via a novel representer theorem, we characterise their solution sets in terms of spherical splines with sparse innovations. We use this result to derive an approximate canonical spline-based discretisation scheme, with vanishing approximation error. To solve the resulting finite-dimensional optimisation problem, we propose an efficient and provably convergent primal-dual splitting algorithm. We illustrate the versatility of our framework on real-life examples from the field of environmental sciences.
Applied and Computational Harmonic Analysis
2020
DOI: 10.1016/j.acha.2020.12.004
[15] CPGD: Cadzow Plug-and-Play Gradient Descent for Generalised FRI
Finite rate of innovation (FRI) is a powerful reconstruction framework enabling the recovery of sparse Dirac streams from uniform low-pass filtered samples. An extension of this framework, called generalised FRI (genFRI), has been recently proposed for handling cases with arbitrary linear measurement models. In this context, signal reconstruction amounts to solving a joint constrained optimisation problem, yielding estimates of both the Fourier series coefficients of the Dirac stream and its so-called annihilating filter, involved in the regularisation term. This optimisation problem is however highly non-convex and non-linear in the data. Moreover, the proposed numerical solver is computationally intensive and without convergence guarantee. In this work, we propose an implicit formulation of the genFRI problem. To this end, we leverage a novel regularisation term which does not depend explicitly on the unknown annihilating filter yet enforces sufficient structure in the solution for stable recovery. The resulting optimisation problem is still non-convex, but simpler, since it is linear in the data and has fewer unknowns. We solve it by means of a provably convergent proximal gradient descent (PGD) method. Since the proximal step does not admit a simple closed-form expression, we propose an inexact PGD method, coined as Cadzow plug-and-play gradient descent (CPGD). The latter approximates the proximal steps by means of Cadzow denoising, a well-known denoising algorithm in FRI. We provide local fixed-point convergence guarantees for CPGD. Through extensive numerical simulations, we demonstrate the superiority of CPGD against the state-of-the-art in the case of non-uniform time samples.
IEEE Transactions on Signal Processing
2020-05-01
DOI: 10.1109/TSP.2020.3041089
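The structure of the algorithm, a gradient step on the data-fidelity term followed by Cadzow denoising in place of an exact proximal step, can be sketched as follows. Everything below (forward matrix G, step size, iteration counts) is an illustrative assumption, not the paper's exact setup:

    import numpy as np

    def _lift(x, H):
        """Toeplitz lifting of the coefficient vector x."""
        M = len(x)
        return np.array([[x[H - 1 + i - j] for j in range(H)]
                         for i in range(M - H + 1)])

    def cadzow_denoise(x, K, n_iter=20):
        """Alternate rank-K and Toeplitz projections (Cadzow denoising)."""
        x = x.astype(complex)
        M = len(x); H = M // 2 + 1
        for _ in range(n_iter):
            T = _lift(x, H)
            U, s, Vh = np.linalg.svd(T, full_matrices=False)
            T = (U[:, :K] * s[:K]) @ Vh[:K]          # project onto rank K
            rows, cols = np.indices(T.shape)
            for k in range(M):                       # project back onto the
                x[k] = T[(H - 1 + rows - cols) == k].mean()   # Toeplitz set
        return x

    def cpgd_like(G, y, K, step, n_iter=50):
        """Gradient step on ||y - Gx||^2, then Cadzow as an inexact prox."""
        x = np.zeros(G.shape[1], dtype=complex)
        for _ in range(n_iter):
            x = x - step * (G.conj().T @ (G @ x - y))
            x = cadzow_denoise(x, K)
        return x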
[16] Functional Inverse Problems on Spheres: Theory, Algorithms and Applications
Many scientific inquiries in natural sciences involve approximating a spherical field (namely, a scalar quantity defined over a continuum of directions) from generalised samples of the latter (e.g. directional samples, local averages, etc). Such an approximation task is often carried out by means of a convex optimisation problem, assessing an optimal trade-off between a data-fidelity and regularisation term. To solve this problem numerically, scientists typically discretise the spherical domain by means of quasi-uniform spherical point sets. Finite-difference methods for approximating (pseudo-)differential operators on such discrete domains are however unavailable in general, making it difficult to work with generalised Tikhonov (gTikhonov) or Total Variation (gTV) regularisers, favouring physically admissible spherical fields with smooth and sharp variations respectively. To overcome such limitations, canonical spline-based discretisation schemes have been proposed. In the case of gTikhonov regularisation, the optimality of such schemes has been proven for spherical scattered data interpolation problems with quadratic cost functionals. This result is however too restrictive for most practical purposes, since it is restricted to directional samples and Gaussian noise models. Moreover, a similar optimality result for gTV regularisation is still lacking. In this thesis, we propose a unified theoretical and practical spherical approximation framework for functional inverse problems on the hypersphere. More specifically, we consider recovering spherical fields directly in the continuous domain using penalised convex optimisation problems with gTikhonov or gTV regularisation terms. Our framework is compatible with various measurement types as well as non-differentiable convex cost functionals. Via novel representer theorems, we characterise the solutions of the reconstruction problem for both regularisation strategies. For gTikhonov regularisation, we show that the solution is unique and can be expressed as a linear combination of the sampling linear functionals (modelling the acquisition process) primitived twice with respect to the gTikhonov pseudo-differential operator. For gTV regularisation, we show that the solutions are convex combinations of spherical splines with fewer innovations than available measurements. We use both results to design canonical spline-based discretisation schemes, exact for gTikhonov regularisation and with vanishing approximation error for gTV regularisation. We propose efficient and provably convergent proximal algorithms to solve the discrete optimisation problems resulting from both discretisation schemes. We illustrate the superiority of our continuous-domain spherical approximation framework over traditional methods on a variety of real and simulated datasets in the fields of meteorology, forestry, radio astronomy and planetary sciences. The sampling functionals, cost functions and regularisation strategies considered in each case are diverse, showing the versatility of both our theoretical framework and algorithmic solutions. Finally, in the last part of this thesis, we design an efficient and locally convergent algorithm for recovering the spatial innovations of periodic Dirac streams with finite rates of innovation, and propose a recurrent neural network for boosting spherical approximation methods in the context of real-time acoustic imaging.
Lausanne, EPFL, 2020. DOI: 10.5075/epfl-thesis-7174.
[17] DeepWave: A Recurrent Neural-Network for Real-Time Acoustic Imaging
We propose a recurrent neural-network for real-time reconstruction of acoustic camera spherical maps. The network, dubbed DeepWave, is both physically and algorithmically motivated: its recurrent architecture mimics iterative solvers from convex optimisation, and its parsimonious parametrisation is based on the natural structure of acoustic imaging problems. Each network layer applies successive filtering, biasing and activation steps to its input, which can be interpreted as generalised deblurring and sparsification steps. To comply with the irregular geometry of spherical maps, filtering operations are implemented efficiently by means of graph signal processing techniques. Unlike commonly-used imaging network architectures, DeepWave is moreover capable of directly processing the complex-valued raw microphone correlations, learning how to optimally back-project these into a spherical map. We propose moreover a smart physically-inspired initialisation scheme that attains much faster training and higher performance than random initialisation. Our real-data experiments show DeepWave has similar computational speed to the state-of-the-art delay-and-sum imager with vastly superior resolution. While developed primarily for acoustic cameras, DeepWave could easily be adapted to neighbouring signal processing fields, such as radio astronomy, radar and sonar.
2019-12-16. Thirty-third Conference on Neural Information Processing Systems (NeurIPS), Vancouver, British Columbia, Canada, December 9-14, 2019.
[18] Sparse Spline Approximation on the Hypersphere by Generalised Total Variation Basis Pursuit
Many scientific inquiries in natural sciences involve approximating a spherical field –namely a scalar quantity defined over a continuum of directions– from generalised samples of the latter. Typically, a convex optimisation problem is formulated in terms of a data-fidelity and regularisation trade-off. To solve this optimisation problem numerically, scientists resort to discretisation via spherical pixelisation schemes called tessellations. Finite-difference methods for approximating (pseudo-)differential operators on spherical tessellations are however unavailable in general, making it hard to work with generalised Tikhonov or Total Variation (gTV) regularisers. To overcome such limitations, canonical spline-based discretisation schemes have been proposed. In the case of Tikhonov regularisation, optimality has been proven for spherical interpolation. A similar result for gTV regularisation is however still lacking. In this work, we propose a spline approximation framework for a generic class of reconstruction problems on the hypersphere. Such problems are formulated over infinite-dimensional Banach spaces, seeking spherical fields with minimal gTV and verifying a convex data-fidelity constraint. The data itself can be acquired by generalised sampling strategies. Via a novel representer theorem, we characterise their solution sets in terms of spherical splines with sparse innovations, Green functions of the gTV pseudo-differential operator. We use this result to derive an approximate canonical spline-based discretisation scheme, with controlled approximation error. To solve the resulting finite-dimensional optimisation problem, we propose an efficient primal-dual splitting method. We illustrate the versatility of our framework on numerous real-life examples from the field of environmental sciences and radio astronomy.
2019-09-11
[19] Graph Spectral Clustering of Convolution Artefacts in Radio Interferometric Images
The starting point for deconvolution methods in radioastronomy is an estimate of the sky intensity called a dirty image. These methods rely on the telescope point-spread function so as to remove artefacts which pollute it. In this work, we show that the intensity field is only a partial summary statistic of the matched filtered interferometric data, which we prove is spatially correlated on the celestial sphere. This allows us to define a sky covariance function. This previously unexplored quantity brings us additional information that can be leveraged in the process of removing dirty image artefacts. We demonstrate this using a novel unsupervised learning method. The problem is formulated on a graph: each pixel interpreted as a node, linked by edges weighted according to their spatial correlation. We then use spectral clustering to separate the artefacts in groups, and identify physical sources within them.
2019-05-12. 44th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 12-17 May 2019. p. 4260-4264. DOI: 10.1109/ICASSP.2019.8683841.
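A minimal sketch of the graph pipeline described in this abstract, with a synthetic covariance matrix standing in for the sky covariance estimated from matched-filtered interferometric data (sizes and the number of clusters are illustrative):

    import numpy as np
    from scipy.sparse.csgraph import laplacian
    from sklearn.cluster import KMeans

    # Pixels as nodes, covariance as edge weights, spectral clustering to
    # group each artefact with its parent source.
    n_pix, n_clusters = 256, 4
    rng = np.random.default_rng(1)
    F = rng.standard_normal((n_pix, 16))
    C = F @ F.T                                  # synthetic sky covariance

    W = np.abs(C)
    np.fill_diagonal(W, 0.0)                     # weighted adjacency matrix
    L = laplacian(W, normed=True)                # normalized graph Laplacian
    eigval, eigvec = np.linalg.eigh(L)           # spectrum, ascending order
    embedding = eigvec[:, :n_clusters]           # low-frequency embedding
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(embedding)
    print(np.bincount(labels))                   # pixels per artefact group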
[20] A Physical Model of Non-stationary Blur in Ultrasound Imaging
Conventional ultrasound (US) imaging relies on delay-and-sum (DAS) beamforming which retrieves a radio-frequency (RF) image, a blurred estimate of the tissue reflectivity function (TRF). Despite the non-stationarity of the blur induced by propagation effects, most state-of-the-art US restoration approaches exploit shift-invariant models and are inaccurate in realistic situations. Recent techniques approximate the shift-variant blur using sectional methods, resulting in improved accuracy. But such methods assume shift-invariance of the blur in the lateral dimension, which is not valid in many US imaging configurations. In this work, we propose a physical model of the non-stationary blur, which accounts for the diffraction effects related to the propagation. We show that its evaluation results in the sequential application of forward and adjoint propagation operators under some specific assumptions that we define. Taking into account this sequential structure, we exploit efficient formulations of the operators in the discrete domain and provide an evaluation strategy which exhibits linear complexity with respect to the grid size. We also show that the proposed model can be interpreted in terms of common simplification strategies used to model non-stationary blur. Through simulations and in vivo experimental data, we demonstrate that using the proposed model in the context of maximum-a-posteriori image restoration results in higher image quality than using state-of-the-art shift-invariant models. The supporting code is available on github: https://github.com/LTS5/us-non-stationary-deconv.
IEEE Transactions on Computational Imaging
2019
DOI: 10.1109/TCI.2019.2897951
[21] Towards More Accurate Radio Telescope Images
Radio interferometry usually compensates for high levels of noise in sensor/antenna electronics by throwing data and energy at the problem: observe longer, then store and process it all. We propose instead a method to remove the noise explicitly before imaging. To this end, we developed an algorithm that first decomposes instances of the antenna correlation matrix, the so-called visibility matrix, into additive components using Singular Spectrum Analysis, and then clusters these components using the graph Laplacian matrix. We show through simulation the potential for radio astronomy, in particular illustrating the benefit for LOFAR, the low-frequency array in the Netherlands. Least-squares images are estimated with far higher accuracy at low computation cost, without the need for long observation times.
2018-01-01. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, Jun 18-22, 2018. p. 1983-1985. DOI: 10.1109/CVPRW.2018.00254.
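The Singular Spectrum Analysis step at the heart of this method decomposes a series into additive components via a Hankel embedding and an SVD. A minimal sketch of that decomposition (the subsequent graph-Laplacian clustering of components is omitted; the test signal, window length, and component count are illustrative):

    import numpy as np

    def ssa_components(x, window, n_comp):
        """Return the first n_comp SSA components of the series x."""
        N = len(x); Kc = N - window + 1
        X = np.array([x[i:i + window] for i in range(Kc)]).T  # Hankel embedding
        U, s, Vh = np.linalg.svd(X, full_matrices=False)
        comps = []
        for r in range(n_comp):
            Xr = s[r] * np.outer(U[:, r], Vh[r])              # rank-1 term
            xr = np.array([np.mean(Xr[::-1].diagonal(k - window + 1))
                           for k in range(N)])                # diagonal averaging
            comps.append(xr)
        return np.array(comps)

    t = np.linspace(0, 1, 500)
    rng = np.random.default_rng(2)
    x = np.sin(2 * np.pi * 5 * t) + 0.5 * rng.standard_normal(500)
    comps = ssa_components(x, window=50, n_comp=4)
    print(np.round(np.std(comps, axis=1), 3))  # leading pair carries the sine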
[22] A Functional Framework For Ultrasound Imaging
Delay-And-Sum (DAS), the state-of-the-art in ultrasound imaging, is known to be sub-optimal, resulting in low resolution and contrast. Most proposed improvements involve ad-hoc re-weighting, or hit computational bottlenecks given real-time requirements. This paper takes a fresh perspective on the problem, leveraging a functional framework to obtain a regularized least-squares estimate of the tissue reflectivity function. An explicit solution is derived which, for specific cases, can be efficiently implemented, making it suitable for real-time imaging. In our formulation, DAS appears as a back-projection without any optimal properties. We illustrate the framework through first a one-dimensional set-up, and then a two-dimensional extension with Synthetic Aperture Focusing Technique (SAFT). The one-dimensional simulations show a 77% resolution improvement with respect to DAS, which artificially limits the available bandwidth. On a standard performance-assessment phantom, simulations show that SAFT depth resolution can be improved by 71%.
2018-01-01. 25th IEEE International Conference on Image Processing (ICIP), Athens, Greece, Oct 07-10, 2018. p. 1837-1841. DOI: 10.1109/ICIP.2018.8451283.
[23] Hardware And Software For Reproducible Research In Audio Array Signal Processing
In our demo, we present two hardware platforms for prototyping audio array signal processing. Pyramic is a 48-channel microphone array fitted on an FPGA and Compact Six is a portable microphone array with six microphones, closer to the technical constraints of consumer electronics. A browser-based interface was developed that allows the user to interact with the audio stream from the arrays in real time. The software component of this demo is a Python module with implementations of basic audio signal processing blocks and popular techniques like STFT, beamforming, and DoA. Both the hardware design files and the software are open source and freely shared. As part of a collaboration with IBM Research, their beamforming and imaging technologies will also be portrayed. The hardware will be demonstrated through an installation processing the microphone signals into light patterns on a circular LED array. The demo will be interactive and let visitors play with different algorithms for DoA (SRP, FRIDA [1], Bluebild) and beamforming (MVDR, Flexibeam [2]). The availability of an open platform with reference implementations encourages reproducible research and minimizes setup-time when testing and benchmarking new audio array signal processing algorithms. It can also serve as a useful educational tool, providing a means to work with real-life signals.
2017. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), New Orleans, LA, USA, March 5-9, 2017. p. 6591-6592. DOI: 10.1109/ICASSP.2017.8005297.
[24] A Functional Framework for Ultrasound Imaging
Delay-And-Sum (DAS), the state-of-the-art in ultrasound imaging, is known to be sub-optimal, resulting in low resolution and contrast. Most proposed improvements involve ad-hoc re-weighting, or hit computational bottlenecks given real-time requirements. This paper takes a fresh perspective on the problem, leveraging a functional framework to obtain a regularized least-squares estimate of the tissue reflectivity function. An explicit solution is derived, which – for specific cases – can be efficiently implemented, making it suitable for real-time imaging. In our formulation, DAS appears as a back-projection without any optimal properties. We illustrate the framework through first a one-dimensional set-up, and then a two-dimensional extension with Synthetic Aperture Focusing Technique (SAFT). The one-dimensional simulations show a 77% resolution improvement with respect to DAS, which artificially limits the available bandwidth. On a standard performance-assessment phantom, simulations show that SAFT depth resolution can be improved by 71%.
2017. 2018 IEEE International Conference on Acoustics, Speech and Signal Processing, Calgary, Alberta, Canada, 15-20 April 2018.
[25] LEAP: Looking beyond pixels with continuous-space EstimAtion of Point sources
Context. Two main classes of imaging algorithms have emerged in radio interferometry: the CLEAN algorithm and its multiple variants, and compressed-sensing inspired methods. They are both discrete in nature, and estimate source locations and intensities on a regular grid. For the traditional CLEAN-based imaging pipeline, the resolution power of the tool is limited by the width of the synthesized beam, which is inversely proportional to the largest baseline. The finite rate of innovation (FRI) framework is a robust method to find the locations of point-sources in a continuum without grid imposition. The continuous formulation makes the FRI recovery performance only dependent on the number of measurements and the number of sources in the sky. FRI can theoretically find sources below the perceived tool resolution. To date, FRI had never been tested in the extreme conditions inherent to radio astronomy: weak signal / high noise, huge data sets, large numbers of sources. Aims. The aims were (i) to adapt FRI to radio astronomy, (ii) verify it can recover sources in radio astronomy conditions with more accurate positioning than CLEAN, and possibly resolve some sources that would otherwise be missed, (iii) show that sources can be found using less data than would otherwise be required to find them, and (iv) show that FRI does not lead to an augmented rate of false positives. Methods. We implemented a continuous domain sparse reconstruction algorithm in Python. The angular resolution performance of the new algorithm was assessed under simulation, and with visibility measurements from the LOFAR telescope. Existing catalogs were used to confirm the existence of sources. Results. We adapted the FRI framework to radio interferometry, and showed that it is possible to determine accurate off-grid point source locations and their corresponding intensities. In addition, FRI-based sparse reconstruction required less integration time and smaller baselines to reach a comparable reconstruction quality compared to a conventional method. The achieved angular resolution is higher than the perceived instrument resolution, and very close sources can be reliably distinguished. The proposed approach has cubic complexity in the total number (typically around a few thousand) of uniform Fourier data of the sky image estimated from the reconstruction. It is also demonstrated that the method is robust to the presence of extended-sources, and that false-positives can be addressed by choosing an adequate model order to match the noise level.
Astronomy & Astrophysics
2017
DOI: 10.1051/0004-6361/201731828
[26] Randomized Projections for Improved Sensing and Imaging in Positron Emission Tomography
Positron Emission Tomography (PET) aims at recovering the metabolic activity of an organ of interest. Established algorithms implemented in contemporary PET scans are based on an approximation of the inverse Radon transform, resulting in a suboptimal estimate. In this context, the Bluebild algorithm is proposed to recover the metabolic activity through a mathematical model that reformulates the inversion problem in a continuous framework. The procedure involves computing the inverse of a very large and dense Gram matrix, increasing significantly the computational cost and numerical instability of the algorithm. In this project, we investigate the use of Gaussian random projections as means of reducing the high dimensionality of the PET scan data, and consequently of the Gram matrix. We show that the conditioning of the Gram matrix is improved in expectation, making the recovery more stable and resilient to noise. Simulations are used to assess the accuracy of the estimate as well as the conditioning of the Gram matrix. Finally, we show that the results with the Bluebild framework are more accurate than those of state-of-the-art algorithms.
2017. Advisor(s): M. M. J.-A. Simeoni; P. Hurley; H. Bolcskei
[27] Determining Positions of Transducers for Receiving and/or Transmitting Wave Signals
The invention is notably directed to a method for determining positions p_i, i = 1, ..., N, of transducers A_i, i = 1, ..., N, of an apparatus. The transducers are assumed to be configured for receiving wave signals from and/or transmitting wave signals to one or more regions R_m, m = 1, ..., M, of interest in an n-dimensional space, with n = 2 or 3. The method first comprises determining an n-dimensional spatial filter function, which matches projections P_m, m = 1, ..., M, of the one or more regions of interest onto an (n-1)-dimensional sphere centered on the apparatus. Then, a density function is obtained, based on a Fourier transform of the determined spatial filter function. Finally, a position p_i is determined, within said n-dimensional space, for each of the N transducers, based on the obtained density function and a prescribed number N of transducers. The invention is further directed to related devices, apparatuses and systems, as well as computer program products.
2017.
US10379198
US2018292509
US2018292508
[28] Method and System to Reduce Noise in Phased-Array Signals from Receivers Located at Different Locations
The present invention is notably directed to a computerized method to reduce noise in phased-array signals from a set of receivers at different locations. Time-series are received from the receivers, which time-series form phased-array signals. The time-series are ordered based on the different locations of the receivers and spatially phased series are obtained from the ordered time-series. Each of the spatially phased series obtained comprises a series of signal values that are spatially phased. A noise component is identified in each of the spatially phased series obtained and removed from the spatially phased series to obtain denoised series. The invention is further directed to related receiver systems and computer program products.
2017.
US2020014106
US10396456
US2018062259
[29] Methods and apparatuses for versatile beamforming
The present invention is directed to methods and apparatuses for beamforming signals or computing beamformed signals. The present approach is to determine a series of beams from or for a set of devices configured for receiving signals from and/or transmitting signals to one or more regions of interest in an n-dimensional space, with n = 2 or 3. Each of the devices has a known position p_i within said n-dimensional space. Signals are to be respectively transmitted or received non-uniformly in this space, i.e., according to the particular regions of interest. During a first phase, operations are performed in order to successively obtain a spatial filter function ω̂(r), a beamforming function ω(p), and beamforming weights ω(p_i). The spatial filter function ω̂(r) matches projections of the regions of interest onto an (n-1)-dimensional sphere centered on said set of devices. The function ω̂(r), however, extends over the n-dimensional space and does not restrict to the (n-1)-dimensional sphere. During a second phase, delays and gains are suitably introduced in the signals, by weighting time-series according to the beamforming weights ω(p_i) obtained during the first phase.
2017.
US2017250468
[30] Blind calibration of sensors of sensor arrays
Embodiments include methods for calibrating sensors of one or more sensor arrays. Aspects include accessing one or more beamforming matrices respectively associated to the one or more sensor arrays. Source intensity estimates are obtained for a set of points in a region of interest, based on measurement values as obtained after beamforming signals from the one or more sensor arrays based on the one or more beamforming matrices, assuming fixed amplitude and phase of gains of sensors of the one or more sensor arrays. Estimates of amplitude and phase of the sensor gains are obtained based on: measurement values as obtained before beamforming; and the previously obtained source intensity estimates. The obtained estimates of amplitude and phase can be used for calibrating said sensors.
2017.
US2017149132
[31] Reconstruction using approximate message passing methods
The present invention is notably directed to a computer-implemented method for image reconstruction. The method comprises: accessing elements that respectively correspond to measurement values, which can be respectively mapped to measurement nodes; and performing message passing estimator operations to obtain estimates of random variables associated with variable nodes, according to a message passing method in a bipartite factor graph. In this message passing method: the measurement values are, each, expressed as a term that comprises linear combinations of the random variables; each message exchanged between any of the measurement nodes and any of the variable nodes is parameterized by parameters of a distribution of the random variables; and performing the message passing estimator operations further comprises randomly mapping measurement values to the measurement nodes, at one or more iterations of the message passing method. Finally, image data are obtained from the obtained estimates of the random variables, which image data are adapted to reconstruct an image. The present invention is further directed to related systems and methods using the above image reconstruction method.
2017.
US2017103550
[32] Iterative image subset processing for image reconstruction
The present invention is notably directed to computer-implemented methods and systems for recovering an image. Present methods comprise: accessing signal data representing signals; identifying subsets of points arranged so as to span a region of interest as current subsets of points; reconstructing an image based on current subsets of points, by combining signal data associated to the current subsets of points; detecting one or more signal features in a last image reconstructed; for each of the detected one or more signal features, modifying one or more subsets of the current subsets, so as to increase, for each of the modified one or more subsets, a relative number of points at a location of said each of the detected one or more signal features. The relative number of points of a given subset at a given location may be defined as the number of points of said given subset at the given location divided by the total number of points of said given subset, whereby new current subsets of points are obtained; and repeating the above steps of reconstructing, detecting and modifying, as necessary to obtain a reconstructed image that satisfies a given condition.
2017.
US2017103549
[33] Denoising radio interferometric images by subspace clustering
Radio interferometry usually compensates for high levels of noise in sensor/antenna electronics by throwing data and energy at the problem: observe longer, then store and process it all. Furthermore, only the end image is cleaned, reducing flexibility substantially. We propose instead a method to remove the noise explicitly before imaging. To this end, we developed an algorithm that first decomposes the sensor signals into components using Singular Spectrum Analysis and then clusters these components using the graph Laplacian matrix. We show through simulation the potential for radio astronomy, in particular illustrating the benefit for LOFAR, the low-frequency array in the Netherlands. From telescope data to least-squares image estimates, far higher accuracy results at low computation cost, without the need for long observation times.
2017. IEEE International Conference on Image Processing, Beijing, China, September 17-20, 2017. p. 2134-2138. DOI: 10.1109/ICIP.2017.8296659.
[34] On Denoising Crosstalk in Radio Interferometry
Noise reduction in radio interferometers is a formidable task due to the relatively weak signals under observation. The usual strategy is to compensate with a large number of antennas and extend the observation time. We showed recently that one could denoise antenna time-series directly, under the assumption of uncorrelated noise. This work is a first step toward extending that approach to the case where crosstalk is present. We first propose a subspace-based algorithm to estimate the noise covariance, and then demonstrate that the noise covariance can be accurately estimated and the image denoised. We show sky images generated using a LOFAR core station, and that even with one tenth the observation time (and thus one tenth the data) the estimate can still be enhanced.
International BASP Frontiers workshop 2017, Villars-sur-Ollon, Switzerland, January 29 - February 3, 2017.
[35] Sinobeam: Focused Beamforming for PET Scanners
Focused beamformers have been extensively used in phased-array signal processing, leading to simple and efficient imaging procedures, with high sensitivity and resolution. The beamshape acts as a spatial filter, scanning the intensity of the incoming signal for particular locations. We introduce beamforming in the context of Positron Emission Tomography (PET), and propose a new beamformer called Sinobeam. Inspired by the Flexibeam framework, we sample the beamforming weights from an analytically-specified beamforming function. Since the weights are data-independent, the resulting imaging algorithm is extremely efficient, while presenting better resolution and contrast than state-of-the-art methods, as demonstrated by simulation.
2017. IEEE International Symposium on Biomedical Imaging (ISBI), Melbourne, Australia, April 18-21, 2017.
[36] On an Analytical, Spatially-Varying, Point-Spread-Function
The point spread function (PSF), namely the response of an ultrasound system to a point source, is a powerful measure of the quality of an imaging system. The lack of an analytical formulation inhibits many applications, ranging from apodization optimization and array design to deconvolution algorithms. We propose to fill this gap through a general PSF derivation that is flexible with respect to the type of transmission (synthetic aperture, plane-wave, diverging-wave etc.), while faithfully capturing the spatially-variant blurring of the Tissue Reflectivity Function as caused by Delay-And-Sum reconstruction. We validate the derived PSF against simulation using Field II, and show that accounting for PSF spatial-variability in sparse-based deconvolution improves reconstruction.
2017. 2017 IEEE International Ultrasonics Symposium (IUS), Washington, D.C., USA, September 6-9, 2017. DOI: 10.1109/ULTSYM.2017.8092301.
[37] Flexarray: Random phased array layouts for analytical spatial filtering
We propose a method for designing phased-arrays according to a given, analytically-specified, target beamshape. Building on the flexibeam framework, antenna locations are sampled from a probabilistic density function. Naturally scalable with the number of antennas, it is also computationally efficient and numerically stable, as it relies on analytical derivation. We prove that, under mild conditions, the achieved beamshapes converge uniformly to the target beamshapes as the number of antennas increases. We illustrate the technique through a number of examples. For instance, by use of the Laplace filter, beams with extremely fast decay away from the centre of focus are achieved. Some macroscopic observations result. We observe that matched beamforming weights may, for a given layout, achieve beamshapes targeting regions, rather than isolated directions as commonly believed. Additionally, the convergence analysis can be used to forecast the growth of future large phased arrays such as the Square Kilometre Array (SKA).
2017. IEEE 2017 International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2017), New Orleans, Louisiana, USA, 5-9 March 2017. p. 3380-3384. DOI: 10.1109/ICASSP.2017.7952783.
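The layout-design recipe is direct to illustrate: antenna positions are drawn from a probability density derived from the target beamshape, and the achieved beamshape approaches the target as the number of antennas grows. A sketch with a Gaussian target, chosen because the corresponding sampling density is again Gaussian; all parameter values are illustrative:

    import numpy as np

    # Antenna positions sampled from a Gaussian density: the expected array
    # response E[exp(j p.theta)] is then the Gaussian target beamshape
    # exp(-sigma^2 |theta|^2 / 2), approached as the array grows.
    rng = np.random.default_rng(3)
    sigma, n_ant = 2.0, 512
    pos = rng.normal(0.0, sigma, size=(n_ant, 2))   # random layout (the design)

    def beamshape(theta):
        """Achieved beamshape of the random layout at direction offset theta."""
        return np.abs(np.exp(1j * pos @ theta).mean())

    for th in [0.0, 0.5, 1.0, 2.0]:
        target = np.exp(-0.5 * sigma**2 * th**2)    # analytic target beamshape
        print(th, round(beamshape(np.array([th, 0.0])), 3), round(target, 3))
    # Achieved and target values agree up to O(1/sqrt(n_ant)) fluctuations.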
[38] Imaging in radio interferometry by iterative subset scanning using a modified amp algorithm
Imaging techniques in radio interferometry often face a significant challenge posed by the large number of antenna signals received, from which the image information needs to be extracted. Beamforming is envisaged to reduce the rate required for transporting data from groups of antennas to a central site for further processing. We propose a novel method for image reconstruction based on the iterative scanning of a region of interest, combined with randomized beamforming. A modified approximate message-passing algorithm is adopted to extract relevant image information from beamformed signals received at the antenna stations. The method is illustrated by simulations, with reference to the LOFAR radio interferometer, and compared with the CLEAN algorithm.
2016. 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Shanghai, China, 20-25 March 2016. DOI: 10.1109/ICASSP.2016.7472293.
[39] On Flexibeam for radio interferometry
Beamforming in radio astronomy focuses at and around a direction using matched beamforming or a derivative, to both maximise the energy coming from this point and reduce the data rate to the central processor. Such beamformers often result in large side-lobes, with influence from undesired directions. Moreover, there is a fundamental lack of flexibility when, for example, targeting extended regions or tracking objects with uncertainty as to their location. We show how the analytic framework Flexibeam can be leveraged to achieve beamshapes that cover general spatial areas with substantially more energy concentration within the region-of-interest. The method is numerically stable, and scalable in the number of antennas, and does not magnify noise.
International BASP Frontiers workshop 2017, Villars-sur-Ollon, Switzerland, January 29 - February 3, 2017.
[40] Laplace Beamshapes for Phased-Array Imaging
The imaging capabilities of phased-array systems are governed by the properties of their array beamshape, directly linked to the instrument impulse response. To ensure good spatial resolution, beamshapes are designed with a very narrow main lobe, at the cost of a complex sidelobe structure, potentially leading to severe image artifacts. We propose the use of a new beamshape, called the Laplace beamshape, built with the Flexibeam framework. This beamshape trades spatial resolution for smoother sidelobes, resulting in an artifact-free image that is much easier to process. This tradeoff can be optimally assessed through a single parameter of the beamshape, allowing the analyst to perform a multi-scale analysis.
International BASP Frontiers workshop 2017, Villars-sur-Ollon, Switzerland, January 29 - February 3, 2017.
[41] Beamforming towards regions of interest for multi-site mobile networks
We show how a beamforming technique for analytical spatial filtering, called Flexibeam, can be applied to mobile phone mast broadcasting so as to result in concentrations of power where most devices are. To that end, Flexibeam is interpreted as transmission beamforming. An analytically described radiation pattern is extended from a sphere to Euclidean space. A continuous beamforming function is then obtained by the Fourier transform of the extended radiation pattern. We then show how a Gaussian filter can be approximately achieved using beamforming. The method is then expanded by means of an example of a collection of mobile phone masts covering an area of Zurich city so as to concentrate energy where devices are concentrated.
2016. International Zurich Seminar on Communications (IZS 2016), Sorell Hotel Zürichberg, Zurich, Switzerland, March 2-4, 2016. DOI: 10.3929/ethz-a-010602015.
[42] Flexibeam: Analytic Spatial Filtering By Beamforming
We propose a new, general method for spatial filtering by beamforming. The desired filter, specified analytically on an n-dimensional sphere, is extended to (n+1)-dimensional Euclidean space. A continuous beamforming function is then obtained by the (n+1)-dimensional Fourier transform of the extended filter; the beamforming weight at a given array element is a sample of this function at the element's location. The scheme is a generalisation of focused beamforming on a single point by phase-difference alignment. The analytic framework allows tractable, stable determination of beamforming weights and clear filter specification. By avoiding the approximation of a Dirac delta, desired areas can be covered with reduced side lobes, and multiple areas may be targeted simultaneously. In communications applications, channel-information updates can be reduced and movement accounted for. A WiFi demonstration shows that more flexible beamshapes can be beneficial in real-life settings, factoring in attenuation.
2016. 2016 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Shanghai, China, 20-25 March 2016. p. 2877-2880. DOI: 10.1109/ICASSP.2016.7472203.
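The construction lends itself to a short numerical sketch. Assuming the radial extension of the filter is concentrated at the carrier wavenumber, the Fourier-transform recipe reduces to superposing matched beamformers over the desired angular window; the Gaussian window, the half-wavelength linear array and the helper names below are illustrative choices, not the paper's implementation.

import numpy as np

def flexibeam_weights(antenna_xy, phi0, sigma, wavelength, n_quad=512):
    """Weights for a Gaussian angular window centred at phi0 (radians).

    Approximates the Flexibeam recipe by superposing matched beamformers
    over the window, which is what the Fourier-transform construction
    reduces to when the radial extension concentrates at the carrier
    wavenumber -- a simplifying assumption made here.
    """
    kappa = 2 * np.pi / wavelength
    phi = np.linspace(-np.pi, np.pi, n_quad, endpoint=False)
    window = np.exp(-0.5 * ((phi - phi0) / sigma) ** 2)   # desired beamshape
    dirs = np.stack([np.cos(phi), np.sin(phi)], axis=1)   # unit directions
    phases = np.exp(1j * kappa * antenna_xy @ dirs.T)     # (n_ant, n_quad)
    w = phases @ window * (2 * np.pi / n_quad)
    return w / np.linalg.norm(w)

def beampattern(w, antenna_xy, wavelength, phi_grid):
    """Power response of weight vector w over a grid of directions."""
    kappa = 2 * np.pi / wavelength
    dirs = np.stack([np.cos(phi_grid), np.sin(phi_grid)], axis=1)
    steering = np.exp(1j * kappa * antenna_xy @ dirs.T)
    return np.abs(w.conj() @ steering) ** 2

# Uniform linear array: 32 antennas at half-wavelength spacing.
lam = 1.0
xy = np.stack([0.5 * lam * np.arange(32), np.zeros(32)], axis=1)
w = flexibeam_weights(xy, phi0=np.pi / 3, sigma=0.15, wavelength=lam)
phi = np.linspace(0.0, np.pi, 721)
response = beampattern(w, xy, lam, phi)
print("peak response at phi =", phi[np.argmax(response)])

Swapping the Gaussian window for another profile (a Laplace-shaped one, say) yields alternative beamshapes in the spirit of entries [40] and [41] above.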
[43] Deconvolution of Gaussian Random Fields using the Graph Fourier Transform
Real-life acquisition systems are fundamentally limited in their ability to reproduce point sources. For example, a point-source object, say a star, observed with an optical telescope is blurred by the imperfect lenses composing the system. Mathematically, the point sources are convolved with a device-specific kernel. This kernel, which depends only on the characteristics of the acquisition system, can to a certain extent be designed so as to minimise the effect of the convolution. But in many applications careful design of the sensing device is not good enough, and a proper deconvolution step is needed. In this paper, we propose an efficient deconvolution algorithm for point-source Gaussian random fields as sensed by phased arrays. The algorithm first obtains a continuous least-squares estimate of the random field's second-order moment. The procedure then proceeds in two steps, decoupling source localisation from intensity recovery. First, we sample the continuous estimate at a sufficiently high resolution and use the covariance function to construct a weighted graph. We then define a signal on this graph by assigning to each sample location its corresponding intensity (the variance of the field at that location). The Graph Fourier Transform (GFT) is used to filter out the convolution artifacts within the estimate, and candidate source locations are identified with local maxima of the filtered estimate. From these locations, a deconvolution problem is solved by means of weighted linear regression, and the intensities of the sources within the field are recovered. Finally, a multi-scale approach based on filtering the leading eigenvalues of the covariance operator is discussed, and its benefits in terms of efficiency and accuracy are highlighted.
2016. Advisor(s): M. Vetterli; V. Panaretos; P. Hurley.
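The graph-filtering step admits a compact sketch: build a weighted graph from a covariance-like kernel over the sample locations, diagonalise its Laplacian to obtain the Graph Fourier Transform, keep only the smooth modes, and read off local maxima as candidate sources. The 1-D grid, the kernel width and the mode count below are stand-ins for the thesis's phased-array setting.

import numpy as np

def graph_lowpass(weights, signal, n_modes):
    """Low-pass a graph signal by truncating its Graph Fourier Transform."""
    L = np.diag(weights.sum(axis=1)) - weights     # combinatorial Laplacian
    eigval, eigvec = np.linalg.eigh(L)             # GFT basis, ascending frequency
    coeffs = eigvec.T @ signal                     # forward GFT
    coeffs[n_modes:] = 0.0                         # keep the smooth modes only
    return eigvec @ coeffs                         # inverse GFT

# Toy demo: 1-D grid, covariance-based edge weights, noisy two-source field.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 200)
weights = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.05**2))
field = np.exp(-((x - 0.3) ** 2) / 2e-4) + 0.6 * np.exp(-((x - 0.7) ** 2) / 2e-4)
noisy = field + 0.05 * rng.standard_normal(x.size)
smooth = graph_lowpass(weights, noisy, n_modes=25)
peaks = [i for i in range(1, x.size - 1)
         if smooth[i] > max(smooth[i - 1], smooth[i + 1])
         and smooth[i] > 0.3 * smooth.max()]
print("candidate source locations:", x[peaks])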
[44] Attributing Aliasing Artifacts in Dirty Intensity Fields to their Parent Source
Estimating intensity fields of stochastic phenomena is of crucial interest in many scientific applications. Typical experimental setups involve an acquisition system that filters and samples the probed intensity field. This equivalently defines a sampling operator, fully specified by the characteristics of the measuring device. Reconstruction of the intensity field can then be achieved by interpolating the collected samples, with an interpolation operator ideally matched to the sampling operator. Sampling followed by interpolation can be shown to act as an orthogonal projection of the unknown intensity field. For this reason, the reconstructed intensity field can differ quite substantially from the actual one, polluted by aliasing artifacts that complicate and often prevent the identification of features within it. Nonlinear algorithms such as CLEAN (acclaimed in radio astronomy) have been proposed to recover the actual features from the aliased intensity field, but their convergence properties have not yet been fully assessed. In this work, we propose a novel method to locate sources within the recovered intensity field and to attribute aliasing artifacts to their parent source, in the specific case of point-source intensity fields. Rather than directly estimating the intensity field, we first reconstruct the underlying random field it characterises. We then estimate the second moment of this reconstructed random field, including its covariance function, and use the covariance function as a measure of resemblance between different parts of the intensity field. The problem is then formulated as a clustering problem on a graph: the intensity field is sampled on a fine grid, each grid point is interpreted as a node, and edges between nodes are weighted proportionally to their covariance. We then use spectral clustering to separate the artifacts into groups, and identify the actual sources in the field as the nodes with maximum intensity within each cluster. We conclude with an application of our method to the field of radio astronomy.
2016. Advisor(s): M. Vetterli; V. Panaretos; P. Hurley.
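The clustering step can be sketched with an off-the-shelf spectral clustering routine standing in for whatever implementation the thesis used; the covariance-based affinity matrix and the two-source toy field are illustrative.

import numpy as np
from sklearn.cluster import SpectralClustering

def attribute_artifacts(affinity, intensity, n_sources):
    """Cluster grid points of a dirty intensity field; keep, per cluster,
    the point of maximum intensity as the parent source.

    affinity : (n, n) non-negative similarity matrix (covariance-based here).
    """
    labels = SpectralClustering(
        n_clusters=n_sources, affinity="precomputed", random_state=0
    ).fit_predict(affinity)
    sources = {}
    for c in range(n_sources):
        members = np.flatnonzero(labels == c)
        sources[c] = members[np.argmax(intensity[members])]
    return labels, sources

# Toy demo: two sources whose surroundings correlate with their parent.
x = np.linspace(0.0, 1.0, 120)
affinity = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.08**2))
intensity = np.exp(-((x - 0.25) ** 2) / 5e-3) + np.exp(-((x - 0.75) ** 2) / 5e-3)
labels, sources = attribute_artifacts(affinity, intensity, n_sources=2)
print("estimated source locations:", [x[i] for i in sources.values()])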
[45] Iterative image subset scanning for image reconstruction from sensor signals
Image reconstruction techniques from signals received by sensors find application in several fields, including radio interferometry for astronomical investigations and magnetic resonance imaging for medical applications. This paper presents a novel method for image reconstruction based on the iterative scanning of a region of interest. A modified approximate message passing (AMP) algorithm is adopted to extract relevant image information with low computational complexity from the signals received by the sensors. The method is illustrated by simulations, with reference to the LOFAR radio interferometer, and compared, in the radio-astronomy case, with the CLEAN algorithm.
2015. 2015 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT), Abu Dhabi, United Arab Emirates, 7-10 December 2015. DOI: 10.1109/ISSPIT.2015.7394413.
[46] Towards More Accurate and Efficient Beamformed Radio Interferometry Imaging
The Square Kilometre Array (SKA) will form the largest radio telescope ever built, generating on the order of one terabyte of data per second. To reduce the data flow sent to the central processor, hierarchical designs have been proposed: the data is first collected in groups of antennas and summed coherently by beamforming. Historically, Fourier analysis has played a prominent role in radio interferometry, legitimated by the celebrated van Cittert-Zernike theorem. We show that, in the case of modern hierarchical designs, beamformed data has a less intimate, and thus more complicated, relationship to the Fourier domain. The compensation schemes proposed so far are unsatisfactory: they implicitly retain the Fourier framework and are limited to directive beamforming. We show that, when stepping away from Fourier, we can embed the data in a more natural domain originating from the telescope configuration and the specific beamforming technique. This leads to a new, more accurate imaging pipeline. Standard techniques such as w-projection and gridding are no longer needed, as the reconstruction is performed on the celestial sphere. The proposed imager operates in two steps. First, a preconditioning based on the Gram-Schmidt orthogonalization procedure is performed, in order to facilitate the computation of the pseudoinverse sky estimate. From this, the LASSO estimate is then approximated very efficiently; the quality of this approximation is linked directly to the effective support of the instrument point spread function. Due to the greater flexibility of this framework, information-maximising beamforming techniques such as randomised beamforming can be readily incorporated. Moreover, we use the Bonferroni method to construct global confidence intervals on the Gram-Schmidt least-squares estimate, and use them to test the statistical significance of each pixel. The complexity of the proposed technique is assessed and compared to the state-of-the-art combined CLEAN and A-projection algorithm. In the case of LOFAR, we show that our algorithm can be 2 to 34 times faster. The accuracy and sensitivity of the new technique are also shown, on simulated data, to be superior.
2015. Advisor(s): V. Panaretos; M. Vetterli; P. Hurley.
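Two of the statistical ingredients, the Gram-Schmidt (QR) least-squares estimate and the Bonferroni-corrected per-pixel significance tests, can be sketched on a generic linear model. The random steering matrix and the known noise level are simplifying assumptions; the thesis's actual instrument operator is not reproduced.

import numpy as np
from scipy import stats

def gs_least_squares(A, y, sigma, alpha=0.05):
    """Least-squares sky estimate via QR (Gram-Schmidt), with Bonferroni
    per-pixel significance tests.

    A     : (m, n) instrument/steering matrix, m >= n.
    sigma : known noise standard deviation on y.
    Returns the estimate and a boolean mask of significant pixels.
    """
    Q, R = np.linalg.qr(A)                     # Gram-Schmidt orthogonalisation
    x = np.linalg.solve(R, Q.T @ y)            # pseudoinverse estimate
    Rinv = np.linalg.inv(R)
    var = sigma**2 * np.sum(Rinv**2, axis=1)   # diag of sigma^2 (A^T A)^{-1}
    z = x / np.sqrt(var)
    thresh = stats.norm.ppf(1 - alpha / (2 * x.size))  # Bonferroni correction
    return x, np.abs(z) > thresh

rng = np.random.default_rng(3)
m, n = 300, 50
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[[7, 23, 41]] = [5.0, 3.0, 4.0]
sigma = 1.0
y = A @ x_true + sigma * rng.standard_normal(m)
x_hat, significant = gs_least_squares(A, y, sigma)
print("pixels declared significant:", np.flatnonzero(significant))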
[47] Statistical Inference in Positron Emission Tomography
In this report, we investigate mathematical algorithms for image reconstruction in the context of positron emission tomography (PET), a medical diagnosis technique. We first take inspiration from the physics of PET to design a mathematical model tailored to the problem: we think of positron emissions as the output of an indirectly observed Poisson process, and formulate the link between the emissions and the scanner records through the Radon transform. This model allows us to express image reconstruction as a standard problem of statistical estimation from incomplete data. We then investigate different algorithms as well as stopping criteria, and compare their relative efficiency.
2014. Advisor(s): M. Kuusela; V. Panaretos.
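The classic instance of statistical estimation from incomplete data in PET is the Shepp-Vardi MLEM algorithm, whose multiplicative update is easy to sketch; the random non-negative system matrix below merely stands in for a discretised Radon transform.

import numpy as np

def mlem(A, counts, n_iter=50):
    """Maximum-likelihood EM for Poisson data: counts ~ Poisson(A @ lam).

    A      : (n_bins, n_pixels) non-negative system matrix.
    counts : observed sinogram counts.
    Implements the classic multiplicative Shepp-Vardi update.
    """
    lam = np.ones(A.shape[1])                 # flat initial image
    sens = A.sum(axis=0)                      # sensitivity: back-projected ones
    for _ in range(n_iter):
        expected = A @ lam                    # forward projection
        ratio = np.where(expected > 0, counts / expected, 0.0)
        lam = lam * (A.T @ ratio) / np.maximum(sens, 1e-12)
    return lam

# Toy demo with a random non-negative system matrix.
rng = np.random.default_rng(4)
A = rng.random((200, 60))
lam_true = rng.random(60) * 5
counts = rng.poisson(A @ lam_true)
lam_hat = mlem(A, counts)
print("relative error:", np.linalg.norm(lam_hat - lam_true) / np.linalg.norm(lam_true))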
[48] Semi-Automatic Transcription Tool for Ancient Manuscripts
In this work, we investigate various techniques from the fields of shape analysis and image processing in order to construct a semi-automatic transcription tool for ancient manuscripts. First, we design a shape matching procedure using shape contexts, introduced in [1], and exploit this procedure to compute different distances between two arbitrary shapes/words. Then, we use Fisher discrimination to combine these distances into a single similarity measure and use it to represent the words naturally on a similarity graph. Finally, we investigate an unsupervised clustering analysis on this graph to create groups of semantically similar words, and propose an uncertainty measure associated with the attribution of a word to a group. The clusters, together with the uncertainty measure, form the core of the semi-automatic transcription tool, which we test on a dataset of 42 words. The average classification accuracy achieved with this technique on this dataset is 86%, which is quite satisfactory. The tool reduces the number of words that must be typed to transcribe a document by 70%.
IC Research Day 2014: Challenges in Big Data, SwissTech Convention Center, Lausanne, Switzerland, June 12, 2014.
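The distance-combination step can be illustrated with a plain Fisher linear discriminant: given several distances per word pair and labels indicating which pairs match, project onto the direction that best separates matching from non-matching pairs. The three synthetic distance measures below are placeholders for the shape-context distances of the report.

import numpy as np

def fisher_combination(D_pairs, same_label):
    """Combine several per-pair distances into one score via Fisher LDA.

    D_pairs    : (n_pairs, n_distances) matrix, one row per word pair.
    same_label : boolean array, True when the pair is known to match.
    Returns the projection vector w; D_pairs @ w is the combined score.
    """
    X0, X1 = D_pairs[~same_label], D_pairs[same_label]
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)  # within-class scatter
    w = np.linalg.solve(Sw, X1.mean(axis=0) - X0.mean(axis=0))
    return w / np.linalg.norm(w)

# Toy demo: three noisy distance measures; matching pairs score lower.
rng = np.random.default_rng(5)
n = 300
same = rng.random(n) < 0.4
base = np.where(same, 0.3, 1.0)[:, None]
D = base + 0.2 * rng.standard_normal((n, 3))
w = fisher_combination(D, same)
scores = D @ w
print("mean score (same / different):", scores[same].mean(), scores[~same].mean())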
[49] Shape Optimization of an Hydrofoil by Isogeometric Analysis
We use Isogeometric Analysis as a framework for NURBS-based shape optimization of hydrofoils. We present geometrical representations by NURBS and some of their properties, and use them to design a hydrofoil. Then, we consider an irrotational flow around the hydrofoil and solve the Laplace equation in the stream-function formulation. Finally, we perform the shape optimization of the hydrofoil by taking the stream-function formulation as the state problem and considering different objective functionals.
2014. Advisor(s): L. Dede'; A. Quarteroni.
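A small sketch of the NURBS machinery the project builds on: a rational B-spline is evaluated by running an ordinary B-spline in homogeneous coordinates and dividing by the weight component, shown here on the classic quarter-circle test case (the hydrofoil parametrisation itself is not reproduced).

import numpy as np
from scipy.interpolate import BSpline

def nurbs_curve(knots, ctrl_pts, weights, degree, u):
    """Evaluate a NURBS curve: B-spline in homogeneous coordinates,
    followed by projective division by the weight component.
    """
    cw = np.column_stack([ctrl_pts * weights[:, None], weights])  # homogeneous
    vals = BSpline(knots, cw, degree)(u)
    return vals[:, :-1] / vals[:, -1:]

# Quadratic NURBS quarter circle, exact with weights [1, 1/sqrt(2), 1].
degree = 2
knots = np.array([0, 0, 0, 1, 1, 1], dtype=float)   # clamped knot vector
ctrl = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
w = np.array([1.0, 1.0 / np.sqrt(2.0), 1.0])
u = np.linspace(0.0, 1.0, 50)
pts = nurbs_curve(knots, ctrl, w, degree, u)
print("max |radius - 1|:", np.abs(np.linalg.norm(pts, axis=1) - 1.0).max())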
[50] Discussion of the behaviour of the LIBOR during the financial crisis
In this paper, we examine the behaviour of the LIBOR during the 2007 financial crisis. Guided by past observations and economic reasoning, we fit a linear model explaining this rate in terms of the US treasury-bill rate over the period January 2000 - August 2007. We then construct confidence intervals for the LIBOR values from 2000 onwards and examine the excursions of the observed rate beyond the predicted bounds. The results are clear-cut: the LIBOR leaves its confidence interval only between June 15, 2007 and June 1, 2009, that is, during the crisis. We nevertheless observe autocorrelation in the residuals, which we remove by fitting an ARMA(1,1) model.
2013. Advisor(s): A. C. Davison.
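The statistical pipeline of the paper, an OLS fit on a calm training window, out-of-sample prediction intervals, and an ARMA(1,1) model for the residual autocorrelation, can be sketched with statsmodels; the synthetic rate series below merely stands in for the LIBOR and T-bill data.

import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.arima.model import ARIMA

# Toy stand-in data: the paper regresses the LIBOR on US T-bill rates.
rng = np.random.default_rng(6)
n = 400
tbill = 2.0 + np.cumsum(0.02 * rng.standard_normal(n))     # proxy regressor
libor = 0.3 + 1.05 * tbill + 0.1 * rng.standard_normal(n)  # proxy response

# Linear model fitted on a "calm" training window.
X_train = sm.add_constant(tbill[:300])
fit = sm.OLS(libor[:300], X_train).fit()

# Out-of-sample prediction intervals; flag days where the rate escapes them.
pred = fit.get_prediction(sm.add_constant(tbill[300:]))
lo, hi = pred.conf_int(obs=True, alpha=0.05).T
breaks = np.flatnonzero((libor[300:] < lo) | (libor[300:] > hi))
print("days outside the 95% band:", breaks.size)

# Residual autocorrelation handled by an ARMA(1,1) fit, as in the paper.
arma = ARIMA(fit.resid, order=(1, 0, 1)).fit()
print(arma.summary().tables[1])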
[51] Statistics on Manifolds applied to Shape Theory
In this report, we use a variety of tools from differential geometry to propose a nonlinear extension of principal component analysis (PCA) to the manifold setting. This extension, which we call principal geodesic analysis (PGA), seeks analogues of the principal components by introducing principal geodesic components. We then construct the shape space of triangles Σ^3_2 and find a convenient parametrization of it. Finally, we apply the PGA procedure to analyze the variability of a sample of shapes randomly chosen from the shape space of triangles.
2013. Advisor(s): V. Panaretos.
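The tangent-space version of the procedure is easy to sketch on the ordinary unit sphere: compute the Fréchet mean, log-map the sample to the tangent plane at the mean, and run a PCA there. This linearised variant, and the sphere itself, are illustrative stand-ins for the report's shape space of triangles.

import numpy as np

def sphere_log(p, q):
    """Log map on the unit sphere: tangent vector at p pointing to q."""
    w = q - np.dot(p, q) * p
    nw = np.linalg.norm(w)
    if nw < 1e-12:
        return np.zeros_like(p)
    return np.arccos(np.clip(np.dot(p, q), -1.0, 1.0)) * w / nw

def sphere_exp(p, v):
    """Exp map on the unit sphere."""
    nv = np.linalg.norm(v)
    return p if nv < 1e-12 else np.cos(nv) * p + np.sin(nv) * v / nv

def frechet_mean(pts, n_iter=50):
    """Gradient descent for the intrinsic (Fréchet) mean."""
    mu = pts[0]
    for _ in range(n_iter):
        mu = sphere_exp(mu, np.mean([sphere_log(mu, q) for q in pts], axis=0))
    return mu

def pga(pts):
    """Tangent-space PGA: PCA of the log-mapped sample at the Fréchet mean."""
    mu = frechet_mean(pts)
    V = np.array([sphere_log(mu, q) for q in pts])
    _, s, vt = np.linalg.svd(V, full_matrices=False)
    return mu, vt, s**2 / len(pts)   # mean, principal directions, variances

# Toy demo: points scattered along a great circle through the north pole.
rng = np.random.default_rng(7)
angles = 0.4 * rng.standard_normal(100)
pts = np.stack([np.sin(angles),
                0.05 * rng.standard_normal(100),
                np.cos(angles)], axis=1)
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
mu, dirs, var = pga(pts)
print("Frechet mean:", np.round(mu, 3), "leading variance share:", var[0] / var.sum())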
[52] Numerical modelling of the equilibrium shapes of a red blood cell
In this study, we set out to validate the model introduced by Canham and Helfrich for describing the static equilibrium shapes of a red blood cell. According to this model, the shape of the red blood cell solves a constrained optimization problem: minimization of the Canham-Helfrich energy at fixed volume and area. After formalizing the problem mathematically, we derive the optimality condition, which leads to a nonlinear ordinary differential equation satisfied by the shape of the red blood cell. We treat both the two-dimensional case and the axisymmetric three-dimensional case. We then present the methodology employed to solve the problem numerically and discuss the simulation results.
2013. Advisor(s): A. Laadhari; A. Quarteroni.
[53] Determination of an orbit around L2 for the CHEOPS mission
In order to determine the orbit that an exoplanet-observation satellite should adopt, it was necessary to consider the Lagrange points, in particular the second one. In the following pages, we locate the first three of these points (L1, L2 and L3), explain why they are important, and prove the quasi-stability of L1 and L2. We then analyse the various types of quasi-periodic orbits that exist around L2 and construct a Lyapunov orbit using the STK/Astrogator software, which allows us to model the mission and highlight its advantages and drawbacks.
2012. Advisor(s): A. Ivanov.
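Locating the collinear points reduces to one-dimensional root-finding on the axis of the circular restricted three-body problem; a sketch for the Sun-Earth system follows (the quasi-periodic orbit construction in STK/Astrogator is, of course, not reproduced). The bracketing intervals are chosen by hand.

import numpy as np
from scipy.optimize import brentq

def collinear_lagrange(mu):
    """Locate L1, L2, L3 on the x-axis of the circular restricted three-body
    problem (rotating frame, primaries at -mu and 1-mu, unit separation).
    Roots of the x-derivative of the effective potential.
    """
    def fx(x):
        return (x
                - (1 - mu) * (x + mu) / abs(x + mu) ** 3
                - mu * (x - 1 + mu) / abs(x - 1 + mu) ** 3)
    L1 = brentq(fx, -mu + 1e-6, 1 - mu - 1e-6)   # between the primaries
    L2 = brentq(fx, 1 - mu + 1e-6, 2.0)          # beyond the small primary
    L3 = brentq(fx, -2.0, -mu - 1e-6)            # beyond the large primary
    return L1, L2, L3

# Sun-Earth system: mu = m_earth / (m_sun + m_earth).
mu = 3.0035e-6
L1, L2, L3 = collinear_lagrange(mu)
print("L2 distance from Earth (AU):", L2 - (1 - mu))   # ~0.01 AU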
Teaching & PhD
Teaching
Communication Systems
Computer Science