Evangelos Alexiou was born in Thessaloniki, Greece, on September 18th, 1986. He received his diploma in Electronic and Computer Engineering from the Technical University of Crete (TUC) in 2011, and his M.Sc. degree in Signal Processing for Communications and Multimedia from the National and Kapodistrian University of Athens (DI) in 2013. From November 2012 to September 2015, he worked on the MusiNet research project, which focused on the comprehensive design and implementation of a networked music performance system, under the supervision of Professor Alexandros Eleftheriadis.
Geometry-only point cloud data set
A series of studies is conducted to investigate the performance of state-of-the-art objective quality metrics and propose new subjective and objective evaluation methodologies. For this purpose, a representative set of geometry-only point clouds is assembled and degraded using two different types of distortions.
In this webpage, we make publicly available a data set consisting of reference point cloud models, degraded stimuli, and subjective quality scores that were obtained in two experimental setups.
Reconstructed geometry-only point cloud data set
Subjective quality assessment of geometry-only point clouds is conducted in five independent laboratories, using the Screened Poisson surface reconstruction algorithm to display the models. The contents are degraded using octree-based compression at different quality levels, and are evaluated by subjects following a passive protocol.
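Geometrically, octree-based compression at a given depth amounts to quantizing point coordinates onto a voxel grid over the bounding box and keeping one representative point per occupied cell, so lower depths yield coarser geometry. A minimal sketch of this degradation (an illustrative helper, not the exact codec used in the study):

```javascript
// Sketch: octree-style geometry degradation via coordinate quantization.
// An octree of depth d partitions each axis of the bounding box into 2^d
// cells; one point is kept per occupied cell. Illustrative only.
function octreeDegrade(points, depth) {
  const n = 1 << depth; // cells per axis
  // Compute the axis-aligned bounding box of the input.
  const min = [Infinity, Infinity, Infinity];
  const max = [-Infinity, -Infinity, -Infinity];
  for (const p of points) {
    for (let i = 0; i < 3; i++) {
      min[i] = Math.min(min[i], p[i]);
      max[i] = Math.max(max[i], p[i]);
    }
  }
  const seen = new Set();
  const out = [];
  for (const p of points) {
    // Map each coordinate to its cell index, clamped to the last cell.
    const key = p.map((v, i) => {
      const extent = max[i] - min[i] || 1;
      return Math.min(n - 1, Math.floor(((v - min[i]) / extent) * n));
    }).join(',');
    if (!seen.has(key)) { // keep the first point per occupied cell
      seen.add(key);
      out.push(p);
    }
  }
  return out;
}
```

At depth 1 the grid has only two cells per axis, so nearby points collapse into a single representative; increasing the depth recovers more of the original geometry.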
In this webpage, we make publicly available a data set consisting of point clouds and their corresponding reconstructed meshes that are used in our experiments. Moreover, results from subjective evaluations, inter-laboratory comparisons, benchmarking of objective quality metrics, and correlation between two types of point cloud visualization approaches are provided.
Visual attention for point clouds in VR
An eye-tracking experiment is conducted in an immersive virtual reality environment with 6 degrees of freedom, using a head-mounted display. The users interact with 3D point cloud models following a task-dependent protocol, while their gaze and head trajectories are recorded.
In this webpage, we make publicly available a data set consisting of the tracked behavioural data, post-processing results, saliency maps in the form of importance weights, a re-distributed sub-set of contents, scripts to generate the exact versions of the point clouds that were used in the study, and usage examples.
Quality assessment for point cloud compression
The emerging MPEG point cloud codecs (V-PCC and G-PCC variants) are assessed, and best practices for rate allocation are investigated. For this purpose, three experiments are conducted. In the first experiment, a rigorous evaluation of the codecs is performed, adopting test conditions defined by experts of the group on a carefully selected set of models, using both subjective and objective quality assessment methodologies. In the other two experiments, different rate allocation schemes for geometry-only and geometry-plus-color encoding are subjectively evaluated, in order to draw conclusions on the best-performing approaches in terms of perceived quality for a given bit rate.
In this webpage, we make publicly available quality scores associated with the stimuli under assessment for each experiment. For purposes of reproducibility, one content that is used in the study, but is not part of the established point cloud repositories adopted by standardisation bodies, is re-distributed. Moreover, scripts are provided to generate the reference models and the rendering-related meta-data that are used in this study.
Point cloud web renderer
In this repository, an open source web-based point cloud renderer is made publicly available. The renderer is developed on top of the well-established three.js library, ensuring compatibility across different devices and operating systems. The renderer supports visualization of point clouds with real-time interaction, while viewing conditions can be easily configured. The user can choose between an adaptive and a fixed splat-size rendering mode to display the models. The renderer supports both PLY and PCD point cloud file formats. The default settings are optimized for voxelized contents, but this does not limit its usage, since any point cloud can be displayed independently of its geometric structure (i.e., regular or irregular).
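An adaptive splat size is commonly derived from local point density, for instance by scaling each splat's radius with the distance to its nearest neighbour, so that sparse regions receive larger splats and dense regions smaller ones. A brute-force sketch of this idea (illustrative only; the renderer's actual heuristic may differ):

```javascript
// Sketch: adaptive splat sizing from local density (brute-force O(n^2)).
// Each point's splat radius is k times the distance to its nearest
// neighbour. A production renderer would use a spatial index instead.
function adaptiveSplatRadii(points, k = 1.5) {
  return points.map((p, i) => {
    let nearest = Infinity;
    points.forEach((q, j) => {
      if (i === j) return;
      const d = Math.hypot(p[0] - q[0], p[1] - q[1], p[2] - q[2]);
      nearest = Math.min(nearest, d);
    });
    return k * nearest; // radius grows where points are sparse
  });
}
```

A fixed splat-size mode, by contrast, simply assigns the same radius to every point, which is cheaper but can leave holes in sparse regions or cause overdraw in dense ones.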