I am a Postdoctoral Research Assistant with the Centre for Intelligent Sensing and the School of Electronic Engineering and Computer Science at Queen Mary University of London (QMUL), UK. I currently investigate machine learning models based on Graph Neural Networks for explainable AI, specifically for the use case of privacy protection in multimedia data (images, videos, and audio-visual data), under the project GraphNEx.
I obtained my Ph.D. in Electronic Engineering from Queen Mary University of London, UK, in September 2020, my Master's degree in Telecommunications Engineering from the University of Trento in March 2015, and my Bachelor's degree in Electronic and Telecommunications Engineering from the same university in February 2012. I was previously a Research Assistant and then a Postdoctoral Research Assistant at Queen Mary University of London, where I investigated novel, robust, deployable multi-modal (audio-visual) models for human-to-robot handovers under the project "Collaborative Object Recognition, Shared Manipulation and Learning (CORSMAL)".
I am an IEEE Member and a member of the IEEE Signal Processing Society and the Computer Vision Foundation. I serve as a reviewer for international conferences and journals, including IEEE Transactions on Multimedia, IEEE Robotics and Automation Letters, IEEE Sensors Journal, IET Computer Vision, the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), the European Conference on Computer Vision (ECCV), the IEEE International Conference on Image Processing (ICIP), the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), the International Conference on Pattern Recognition (ICPR), the British Machine Vision Conference (BMVC), and the ACM/IEEE International Conference on Human-Robot Interaction (HRI). I received Outstanding Reviewer recognition at five international conferences: ECCV 2024, CVPR 2024, IEEE ICASSP 2022 and 2023, and IEEE ICIP 2020.
My Ph.D. was funded from 2015 to 2019 under the Audio-Visual Intelligent Sensing Project, a joint collaboration between QMUL and Fondazione Bruno Kessler (FBK), Italy. I investigated the problem of matching local spatio-temporal features between moving monocular cameras under the supervision of Prof. Andrea Cavallaro (QMUL) and Dr. Oswald Lanz (FBK). Within the same project, I also investigated the problem of 3D audio-visual tracking of a talking person from a camera co-located with a circular microphone array, and I contributed to the annotation and calibration of the audio-visual recording system for a novel dataset (CAV3D) in conjunction with the tracking task. During the Ph.D. programme, I was also a demonstrator for the module Introduction to Computer Vision with Prof. Andrea Cavallaro in 2018 and 2019: I prepared tutorials, assisted students during the lab sessions, and marked the module coursework (software and report).
For my Master's thesis, I took a six-month internship with the PERCEPTION team at INRIA Grenoble Rhone-Alpes (March-August 2014), where I investigated the problem of multiple object tracking under the supervision of Dr. Radu Horaud (INRIA) and Dr. Nicola Conci (MMLAB, UNITN), co-advised by Xavier Alameda-Pineda and Sileye Ba.
I obtained my Bachelor's degree in Electronic and Telecommunications Engineering (2008-2012) and my two-year Master's degree in Telecommunications Engineering (2012-2015) at the University of Trento (UNITN), Italy. From March to September 2015, before starting my Ph.D., I also collaborated with MMLAB (UNITN) on the development of a client-server application within the LifeGate project and on the collection and annotation of datasets for the Synchronization of Multi-User Event Media task.