Evangelos Alexiou

evangelos.alexiou@epfl.ch +41 21 693 46 20
Web site: https://go.epfl.ch/edee
EPFL STI IEL GR-EB
ELD 225 (Building ELD)
Station 11
CH-1015 Lausanne
Web site: https://mmspl.epfl.ch/
Biography
Evangelos Alexiou has been working as a Doctoral Assistant in the Multimedia Signal Processing Group (MMSPG) at EPFL since July 2016, under the supervision of Prof. Touradj Ebrahimi. His research interests include multimedia, video compression and transmission, image processing, and communication systems. He is currently working on quality assessment and compression of point cloud representations.
Evangelos Alexiou was born in Thessaloniki, Greece, on September 18th, 1986. He received his diploma in Electronic and Computer Engineering from the Technical University of Crete (TUC) in 2011, and his M.Sc. degree in Signal Processing for Communications and Multimedia from the National and Kapodistrian University of Athens (DI) in 2013. From November 2012 to September 2015, he was involved in the MusiNet research program, dealing with the comprehensive design and implementation of a networked music performance system, under the supervision of Professor Alexandros Eleftheriadis.
Publications
Infoscience publications
2020
Quality Evaluation Of Static Point Clouds Encoded Using MPEG Codecs
2020. 2020 IEEE International Conference on Image Processing (ICIP), Abu Dhabi, United Arab Emirates, October 25-28, 2020. p. 3428-3432. DOI: 10.1109/ICIP40778.2020.9191308.
Towards neural network approaches for point cloud compression
2020. SPIE Optical Engineering + Applications, Online, August 24-28, 2020. p. 1151008. DOI: 10.1117/12.2569115.
Benchmarking of the plane-to-plane metric
2020.
Towards a point cloud structural similarity metric
2020. 2020 IEEE International Conference on Multimedia and Expo Workshops (ICMEW), London, United Kingdom, July 6-10, 2020.
PointXR: A toolbox for visualization and subjective evaluation of point clouds in virtual reality
2020. QoMEX 2020 International Conference on Quality of Multimedia Experience, Athlone, Ireland, May 26-28, 2020.
2019
A comprehensive study of the rate-distortion performance in MPEG point cloud compression
2019-11-12. DOI: 10.1017/ATSIP.2019.20.
Towards Modelling of Visual Saliency in Point Clouds for Immersive Applications
2019. IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan, September 22-25, 2019. DOI: 10.1109/ICIP.2019.8803479.
Exploiting user interactivity in quality assessment of point cloud imaging
2019. Eleventh International Conference on Quality of Multimedia Experience (QoMEX), Berlin, Germany, June 5-7, 2019. DOI: 10.1109/QoMEX.2019.8743277.
Point cloud quality evaluation: Towards a definition for test conditions
2019. Eleventh International Conference on Quality of Multimedia Experience (QoMEX), Berlin, Germany, June 5-7, 2019. DOI: 10.1109/QoMEX.2019.8743258.
2018
A novel methodology for quality assessment of voxelized point clouds
2018. SPIE Optical Engineering + Applications, San Diego, California, USA, August 19-23, 2018. p. 107520I. DOI: 10.1117/12.2322741.
Point cloud subjective evaluation methodology based on reconstructed surfaces
2018. SPIE Optical Engineering + Applications, San Diego, California, USA, August 19-23, 2018. p. 107520H. DOI: 10.1117/12.2321518.
Impact of Visualisation Strategy for Subjective Quality Assessment of Point Clouds
2018. IEEE International Conference on Multimedia & Expo Workshops (ICMEW), San Diego, California, USA, July 23-27, 2018. DOI: 10.1109/ICMEW.2018.8551498.
Benchmarking of Objective Quality Metrics for Colorless Point Clouds
2018. Picture Coding Symposium (PCS), San Francisco, California, USA, June 24-27, 2018. DOI: 10.1109/PCS.2018.8456252.
Point Cloud Subjective Evaluation Methodology based on 2D Rendering
2018. Tenth International Conference on Quality of Multimedia Experience (QoMEX), Sardinia, Italy, May 29 - June 1, 2018. DOI: 10.1109/QoMEX.2018.8463406.
Point Cloud Quality Assessment Metric Based on Angular Similarity
2018. IEEE International Conference on Multimedia and Expo (ICME), San Diego, California, USA, July 23-27, 2018. DOI: 10.1109/ICME.2018.8486512.
2017
On the performance of metrics to predict quality in point cloud representations
2017. SPIE Optical Engineering + Applications, San Diego, California, USA, August 6-10, 2017. p. 103961H. DOI: 10.1117/12.2275142.
Towards subjective quality assessment of point cloud imaging in augmented reality
2017. IEEE 19th International Workshop on Multimedia Signal Processing (MMSP), Luton, Bedfordshire, United Kingdom, October 16-18, 2017. DOI: 10.1109/MMSP.2017.8122237.
On subjective and objective quality evaluation of point cloud geometry
2017. Ninth International Conference on Quality of Multimedia Experience (QoMEX), Erfurt, Germany, May 31 - June 2, 2017. DOI: 10.1109/QoMEX.2017.7965681.
Other publications
Point cloud structural similarity metric
In this repository, scripts for the computation of structural similarity scores and the voxelization of point clouds are provided. A structural similarity score is based on the comparison of feature maps that reflect local properties of a point cloud attribute. Voxelization can optionally be enabled prior to feature extraction, similarly to downsampling in 2D imaging, to simulate inspection from farther distances. In essence, we explore the applicability of the well-known SSIM in a higher-dimensional, irregular space (volumetric content), incorporating geometric and textural information.
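For intuition, the following is a minimal Python sketch of the general idea of comparing local feature maps between two point clouds. The chosen feature (dispersion of nearest-neighbour distances), the error formula, and the mean pooling are simplified placeholders and do not reproduce the metric released in the repository.

```python
# Hypothetical sketch of a structural-similarity-style comparison between two
# point clouds. The local feature, the error formula and the mean pooling are
# simplified choices made for illustration, not the released metric.
import numpy as np
from scipy.spatial import cKDTree


def local_feature(points: np.ndarray, k: int = 12) -> np.ndarray:
    """Per-point feature map: dispersion of the k-nearest-neighbour distances."""
    dists, _ = cKDTree(points).query(points, k=k + 1)  # first neighbour is the point itself
    return dists[:, 1:].std(axis=1)


def structural_similarity(ref: np.ndarray, deg: np.ndarray, k: int = 12) -> float:
    """Compare feature maps over associated points and pool them into one score."""
    f_ref = local_feature(ref, k)
    f_deg = local_feature(deg, k)
    # Associate each point of the degraded cloud with its nearest reference point.
    _, idx = cKDTree(ref).query(deg, k=1)
    # Relative feature error, mapped to a similarity in [0, 1] (1 = identical).
    eps = 1e-12
    rel_err = np.abs(f_deg - f_ref[idx]) / (np.maximum(f_deg, f_ref[idx]) + eps)
    return float(1.0 - rel_err.mean())


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.random((5000, 3))
    degraded = reference + rng.normal(scale=0.01, size=reference.shape)
    print(f"similarity: {structural_similarity(reference, degraded):.3f}")
```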
PointXR
We explore the use of virtual reality for the subjective quality evaluation of point cloud contents. For this purpose, we develop the PointXR toolbox, software built in Unity for the rendering and visualization of 3D point clouds in virtual environments. Moreover, we generate and release the PointXR dataset, a point cloud repository that consists of 20 high-quality cultural heritage models. Finally, we publish the PointXR experimental data: stimuli and quality scores collected from a subjective evaluation campaign in virtual reality, conducted with our software and dataset under two evaluation protocols, to assess the performance of the state-of-the-art MPEG Geometry-based Point Cloud Compression (G-PCC) color encoders.
On this webpage, we make the relevant material publicly available.
M-PCCD: MPEG Point Cloud Compression Dataset
The emerging MPEG point cloud codecs (V-PCC and G-PCC variants) are assessed, and best practices for rate allocation are investigated. For this purpose, three experiments are conducted. In the first experiment, a rigorous evaluation of the codecs is performed, adopting test conditions dictated by experts of the group on a carefully selected set of models, using both subjective and objective quality assessment methodologies. In the other two experiments, different rate allocation schemes for geometry-only and geometry-plus-color encoding are subjectively evaluated, in order to draw conclusions on the best-performing approaches in terms of perceived quality for a given bit rate.
On this webpage, we make publicly available the quality scores associated with the stimuli under assessment in each experiment. For reproducibility purposes, content that was used in the study but is not part of the established point cloud repositories adopted by standardisation bodies is re-distributed. Moreover, scripts are provided to generate the reference models and the rendering-related metadata used in this study.
Point cloud web renderer
In this repository, an open source web-based point cloud renderer is made publicly available. The renderer is developed on top of the well-established three.js library, ensuring compatibility across different devices and operating systems. It supports visualization of point clouds with real-time interaction, while viewing conditions can be easily configured. The user can choose between an adaptive and a fixed splat size rendering mode to display the models. The renderer supports both the PLY and PCD point cloud file formats. The current settings have been optimized for voxelized contents, although this does not limit its usage, since any point cloud can be displayed regardless of its geometric structure (i.e., regular or irregular).
ViAtPCVR: Visual Attention for Point Clouds in VR
An eye-tracking experiment is conducted in an immersive virtual reality environment with 6 degrees of freedom, using a head-mounted display. The users interact with 3D point cloud models following a task-dependent protocol, while their gaze and head trajectories are recorded.
On this webpage, we make publicly available a dataset consisting of the tracked behavioural data, post-processing results, saliency maps in the form of importance weights, a re-distributed subset of the contents, scripts to generate the exact versions of the point clouds that were used in the study, and usage examples.
Point cloud angular similarity metric
In this repository, a script for the computation of the angular similarity between two point clouds is provided. The metric relies on the normal vectors carried by associated points of the two models and quantifies the difference in orientation between the corresponding tangent planes. An angular similarity score for a point cloud under evaluation is obtained by pooling the individual angular similarity values computed for the pairs of associated points. In this implementation, points are associated as nearest neighbors. This metric is also known as plane-to-plane.
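As an illustration, the snippet below is a minimal Python sketch of this plane-to-plane idea: points are associated via nearest neighbours, the angle between tangent planes is derived from the absolute dot product of unit normals, and the per-point values are averaged. The mapping of the angle to a similarity value and the pooling choice are assumptions and may differ from the released script.

```python
# Minimal sketch of a plane-to-plane style computation. The similarity mapping
# (1 - 2*theta/pi) and the mean pooling are assumptions for illustration and
# may differ from the released implementation.
import numpy as np
from scipy.spatial import cKDTree


def angular_similarity(points_ref, normals_ref, points_deg, normals_deg) -> float:
    """Pool per-point angular similarities between tangent planes of two clouds.

    Each point of the degraded cloud is associated with its nearest neighbour
    in the reference cloud; normals are assumed to be unit length.
    """
    _, idx = cKDTree(points_ref).query(points_deg, k=1)
    # Absolute dot product: tangent-plane orientation is insensitive to the normal's sign.
    cos_theta = np.clip(np.abs(np.sum(normals_deg * normals_ref[idx], axis=1)), 0.0, 1.0)
    theta = np.arccos(cos_theta)            # angle between tangent planes, in [0, pi/2]
    similarity = 1.0 - 2.0 * theta / np.pi  # 1 for parallel planes, 0 for orthogonal planes
    return float(similarity.mean())


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.random((2000, 3))
    nrm = np.tile([0.0, 0.0, 1.0], (2000, 1))             # synthetic reference normals
    noisy = nrm + rng.normal(scale=0.05, size=nrm.shape)  # perturbed normals
    noisy /= np.linalg.norm(noisy, axis=1, keepdims=True)
    print(f"plane-to-plane similarity: {angular_similarity(pts, nrm, pts, noisy):.3f}")
```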
RG-PCD: Reconstructed Geometry Point Cloud Dataset
This study involves five independent laboratories and examines the alternative of applying a surface reconstruction algorithm to render point clouds. For this purpose, a representative set of geometry-only point clouds is assembled and degraded using octree pruning. The screened Poisson surface reconstruction algorithm is then used to convert the point clouds to meshes for display.
On this webpage, we make publicly available a dataset consisting of the reference point clouds, the degraded point cloud and mesh stimuli, the degradation levels, and the subjective quality scores obtained from the participating laboratories. Moreover, results from inter-laboratory comparisons, benchmarking of objective quality metrics, and the correlation between the two point cloud visualization approaches are provided.
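For orientation, the following is a rough Python sketch of such a degrade-and-reconstruct preparation pipeline using Open3D; the library choice is an assumption on our side and not necessarily the toolchain used to build the dataset, voxel downsampling stands in for octree pruning, and all parameter values are illustrative only.

```python
# Rough sketch of a degrade-and-reconstruct pipeline with Open3D (an assumption,
# not necessarily the toolchain used for RG-PCD). Voxel downsampling stands in
# for octree pruning, and all parameter values are illustrative only.
import open3d as o3d


def degrade_and_reconstruct(input_path: str, voxel_size: float, poisson_depth: int = 9):
    pcd = o3d.io.read_point_cloud(input_path)

    # Geometry degradation: coarser spatial resolution (stand-in for octree pruning).
    degraded = pcd.voxel_down_sample(voxel_size=voxel_size)

    # Screened Poisson reconstruction requires consistently oriented normals.
    degraded.estimate_normals(search_param=o3d.geometry.KDTreeSearchParamKNN(knn=30))
    degraded.orient_normals_consistent_tangent_plane(30)

    mesh, _densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        degraded, depth=poisson_depth)
    return degraded, mesh


if __name__ == "__main__":
    # File names are placeholders.
    cloud, mesh = degrade_and_reconstruct("reference.ply", voxel_size=0.02)
    o3d.io.write_point_cloud("degraded.ply", cloud)
    o3d.io.write_triangle_mesh("reconstructed.ply", mesh)
```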
G-PCD: Geometry Point Cloud Dataset
A series of studies is conducted in order to evaluate the performance of existing objective quality metrics and to propose new quality assessment methodologies. For this purpose, a representative set of geometry-only point clouds is assembled and degraded using two different types of distortions.
On this webpage, we make publicly available a dataset consisting of the reference point clouds, the degraded stimuli, the degradation levels, and the subjective quality scores obtained under two experimental setups: one carried out on a desktop setup and a second performed in augmented reality using a head-mounted display.
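To illustrate how an objective metric is typically benchmarked against such subjective scores, here is a minimal Python sketch computing the Pearson and Spearman correlation coefficients; the score arrays are placeholders, and the published benchmarks may additionally fit a regression between metric values and mean opinion scores before computing correlations.

```python
# Minimal sketch of benchmarking an objective metric against subjective scores.
# The score arrays are placeholders; published benchmarks may additionally fit a
# regression between metric values and MOS before computing the correlations.
import numpy as np
from scipy import stats


def benchmark(objective_scores, mean_opinion_scores):
    """Return (PLCC, SROCC) between objective metric values and MOS."""
    obj = np.asarray(objective_scores, dtype=float)
    mos = np.asarray(mean_opinion_scores, dtype=float)
    plcc, _ = stats.pearsonr(obj, mos)    # linear correlation (prediction accuracy)
    srocc, _ = stats.spearmanr(obj, mos)  # rank correlation (prediction monotonicity)
    return plcc, srocc


if __name__ == "__main__":
    # Placeholder values for a handful of stimuli.
    metric_values = [0.91, 0.74, 0.55, 0.38, 0.21]
    mos_values = [4.6, 3.9, 3.1, 2.2, 1.5]
    plcc, srocc = benchmark(metric_values, mos_values)
    print(f"PLCC={plcc:.3f}  SROCC={srocc:.3f}")
```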