Dr. Adriano Pastore
Monday May 13th 2 PM
Short bio: Adriano Pastore received a Diplôme de l’École Centrale Paris (ECP) in 2006 as well as a Dipl.-Ing. degree in electrical engineering from the Associate Institute for Signal Processing Methods at TUM in 2009. He obtained his PhD from the Department of Signal Theory and Communications at the Universitat Politècnica de Catalunya in 2014. From 2014 to 2016 he was a postdoctoral researcher at École Polytechnique Fédérale de Lausanne (EPFL) in the Laboratory for Information in Networked Systems. He is currently a senior researcher at the Research Unit on Signal and Information Processing for Intelligent Communications at the Centre Tecnològic de Telecomunicacions de Catalunya (CTTC).
He is Principal Investigator of the 6G-AINA project (2022-2025) on developing an AI-native air interface for 6G systems, and served until recently as a supervisor within the H2020 Marie Skłodowska-Curie Action International Training Network WINDMILL (2019-2023), which aimed at developing cutting-edge machine learning algorithms for wireless communications. He has also participated in several projects funded by the European Commission and in direct industrial contracts, e.g., with Software Radio Systems (SRS) and Nokia Networks France. He was awarded a Leonardo Fellowship 2023 from the BBVA Foundation for fundamental research on coded over-the-air computation.
His main research interests lie in information theory and wireless communications, machine learning for communications, physical-layer network coding, quantum key distribution, and privacy-utility tradeoffs.
Title: Machine Learning for Signal Processing and Coding in Communications: An Overview
Abstract: Over the last decade, data-driven approaches have firmly entered the realm of communications engineering and have reshaped this research field. In this talk we give an overview of several topics that involve the application of state-of-the-art machine learning and artificial intelligence algorithms at the L1 and L2 layers of wireless communication systems. A presentation of selected research problems and recent results is complemented by a short practical introduction to Nvidia's increasingly popular open-source Python library for differentiable simulation of PHY-layer communication systems.
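As a hedged illustration of what "differentiable simulation of PHY-layer communication systems" means in practice (the library alluded to is presumably NVIDIA's Sionna, but the sketch below deliberately avoids assuming its API), here is a minimal PyTorch example of an end-to-end differentiable link: a neural mapper, an AWGN channel written with differentiable tensor operations, and a neural demapper trained jointly through the channel. Block length, Eb/N0 and architecture are arbitrary illustrative choices, not the speaker's setup.

# Minimal sketch (not Sionna's API): an end-to-end differentiable link where a
# neural encoder maps k information bits to n real channel uses, an AWGN channel
# adds noise, and a neural decoder recovers the bits. Because every step is a
# differentiable tensor operation, gradients flow from the bit-wise loss back
# through the channel into both encoder and decoder.
import torch
import torch.nn as nn

k, n, ebno_db = 4, 8, 5.0                       # bits per block, channel uses, Eb/N0 (dB)
noise_std = (10 ** (-ebno_db / 10) * n / (2 * k)) ** 0.5

encoder = nn.Sequential(nn.Linear(k, 64), nn.ReLU(), nn.Linear(64, n))
decoder = nn.Sequential(nn.Linear(n, 64), nn.ReLU(), nn.Linear(64, k))
optim = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    bits = torch.randint(0, 2, (256, k)).float()     # random payload bits
    x = encoder(bits)
    x = x / x.pow(2).mean().sqrt()                   # average transmit-power normalization
    y = x + noise_std * torch.randn_like(x)          # differentiable AWGN channel
    loss = bce(decoder(y), bits)                     # end-to-end bit-wise loss
    optim.zero_grad()
    loss.backward()
    optim.step()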
Prof. Jordi Pérez-Romero
Monday May 13th 2 PM
Short bio: Jordi Pérez-Romero is a Professor with the Department of Signal Theory and Communications of the Universitat Politècnica de Catalunya (UPC), where he received his degree in telecommunications engineering in 1997 and his PhD in 2001. He works in the field of wireless communication systems, with a particular focus on 5G and beyond cellular systems, including radio resource management and network optimization, and on the applicability of AI/ML tools to these problems. He has been involved in several European projects in different roles (researcher, work package leader, and project leader), has participated in projects for private companies, and has contributed to the 3GPP and ETSI standardization bodies. He has coauthored more than 300 papers in international journals and conferences, as well as three books, and has contributed to seven book chapters.
Title: Towards Trustworthy Reinforcement Learning for beyond 5G radio access networks
Abstract: Artificial Intelligence (AI) has emerged as one of the key technologies for beyond 5G (B5G) networks. Among the different techniques, Reinforcement Learning (RL) is expected to play a particularly relevant role in optimizing different decision-making problems in the Radio Access Network (RAN). Moreover, ensuring the trustworthiness of AI solutions is key for their successful introduction in B5G, as this will guarantee their robustness to errors and ensure that they do not have unsafe consequences. This talk will review the basic operation of RL and highlight its applicability in the RAN. It will also go over the different stages required from the design to the implementation of RL solutions and will discuss guidelines on how to train and evaluate them so as to obtain trustworthy RL policies.
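To make the RL-in-the-RAN idea concrete, the following toy sketch (a hypothetical environment, not a 3GPP-grade simulator and not the speaker's setup) uses tabular Q-learning to learn a transmit-power policy from a discretized channel-quality state, with a reward that trades throughput against energy.

# Illustrative sketch only: tabular Q-learning for a simplified RAN decision,
# e.g. picking one of three power levels given a channel-quality bin.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 5, 3          # channel-quality bins, candidate power levels
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1   # learning rate, discount, exploration rate

def step(state, action):
    """Hypothetical environment: reward trades off throughput vs. power."""
    throughput = min(state + action, n_states - 1)      # better channel/power -> more bits
    reward = throughput - 0.5 * action                  # penalize energy use
    next_state = rng.integers(n_states)                 # channel evolves randomly
    return next_state, reward

state = rng.integers(n_states)
for t in range(20000):
    action = rng.integers(n_actions) if rng.random() < eps else int(Q[state].argmax())
    next_state, reward = step(state, action)
    # Standard Q-learning update: move Q(s, a) toward the bootstrapped target.
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print(Q.argmax(axis=1))   # learned power level per channel-quality bin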
Dr. Lorenzo Valerio
Tuesday May 14th 11 AM
Short Bio: Dr. Lorenzo Valerio is a researcher at IIT-CNR. He holds a Ph.D. in Mathematics and Statistics for Computational Sciences and an MSc in Information and Communication Technologies, both from the University of Milano, Italy. His main research activity focuses on decentralised and resource-constrained machine learning, mainly targeting Edge/IoT environments. His main research interests include machine learning, decentralised learning, deep learning, causal learning, generative models, and opportunistic and mobile networking. He has published more than 50 papers in journals and conference proceedings. He served as workshop co-chair for IEEE AOC’15 and is a co-organiser of the IEEE PeRConAI Workshop. He has been a guest editor for Elsevier Computer Communications, Elsevier Pervasive and Mobile Computing, and Springer Evolving Systems. He received three Best Paper Awards from international conferences and research institutes and one Best Paper Nomination. He is currently a member of the editorial board of the Elsevier Computer Communications journal and serves on the PC of several international conferences such as IEEE IJCNN, AAAI, and IEEE MSN. He is (or has been) active in several European and national projects, such as PNRR FAIR, PNRR RESTART, HE RE4DY, HE TRANTOR, H2020 MARVEL, CHIST-ERA SAI, H2020 SoBigData++, H2020 SoBigData, H2020 AutoWare, H2020 Replicate, PON-MIUR OK-INSAID, FP7 Moto, and FP7 FET Recognition.
Title: Federated Learning: coping with heterogeneity and limited resources at the Edge.
Abstract: The explosion in the number of IoT and Edge devices is boosting the generation of massive amounts of data at the Edge of the Internet. In parallel, the knowledge extraction process from these data for training AI models is facing a paradigm shift, from centralized solutions run in remote Cloud facilities to more decentralized and lightweight ones executed at the Edge of the Internet. Performing collaborative training in an Edge environment poses several challenges, all connected to the extreme heterogeneity of the context: data patterns might be represented unevenly across devices; devices have limited resources that might prevent or limit their contribution to the process; and the locality experienced by each device in training its local AI model might affect the overall process. Federated Learning is a training framework that implements this Cloud-to-Edge paradigm shift for decentralized collaborative training, addressing the above-mentioned challenges. The course will introduce the Federated Learning framework, exploring the most recent advancements proposed to address some of the challenges connected to the constraints posed by the Edge environment.
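As a minimal sketch of the collaborative-training loop described above, the FedAvg-style round below has each device train locally on its own (possibly non-IID) data, after which a server averages the resulting weights, weighted by local dataset size. The model, data loaders and hyperparameters are hypothetical placeholders, not the course's reference implementation.

# Minimal FedAvg-style sketch: local SGD on each client, size-weighted averaging
# of the resulting model parameters on the server.
import copy
import torch
import torch.nn as nn

def local_update(global_model, loader, epochs=1, lr=0.01):
    model = copy.deepcopy(global_model)                 # start from the global weights
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict()

def fedavg_round(global_model, client_loaders):
    updates, sizes = [], []
    for loader in client_loaders:                       # in practice, a sampled subset of clients
        updates.append(local_update(global_model, loader))
        sizes.append(len(loader.dataset))
    total = sum(sizes)
    avg = {key: sum(w * u[key] for w, u in zip(sizes, updates)) / total
           for key in updates[0]}                       # size-weighted parameter average
    global_model.load_state_dict(avg)
    return global_model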
Dr. David Kappel
Thursday May 16th 9 AM
Short Bio: David Kappel is a researcher at the Ruhr-Universität Bochum, where he has led the Sustainable Machine Learning group since 2023. Previously, he worked as a postdoctoral researcher at TU Dresden and the University of Göttingen. David Kappel received his Ph.D. in computer science from Graz University of Technology in 2018. His research focuses on efficient algorithms and models for synaptic plasticity, neural dynamics, Bayesian inference, and hierarchical learning.
Title: Biological principles of efficient signal processing and learning
Abstract: Recent advances in machine learning (ML) have demonstrated impressive performance on complex tasks such as human-level image understanding and natural language processing. These ML models rely on artificial neural networks that, like biological brains, use billions of neurons and synapses to process complex stimuli. However, unlike biological brains, these neural networks consume vast amounts of energy, with a single training session often exceeding the energy and carbon footprint of a car over its lifetime. In this lecture, I will highlight the mechanisms that enable the amazing energy efficiency of biological brains and compare them to their artificial counterparts. Based on these findings, I will present some examples of new approaches to significantly reduce the energy footprint using hybrid ML/bio-inspired models.
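One mechanism commonly cited in this context is event-driven, sparse spiking: a neuron only communicates when its membrane potential crosses a threshold, so downstream computation scales with the number of spikes rather than with every time step. The toy leaky integrate-and-fire simulation below (arbitrary parameters, not the speaker's models) illustrates this sparsity.

# Toy leaky integrate-and-fire (LIF) neuron: it integrates a noisy input current,
# emits a binary spike only when the membrane potential crosses a threshold, and
# then resets. The fraction of active time steps shows how sparse the output is.
import numpy as np

rng = np.random.default_rng(0)
T, dt = 1000, 1.0                    # simulation steps, step size
tau, v_th, v_reset = 20.0, 1.0, 0.0  # membrane time constant, threshold, reset value
v, spikes = 0.0, []

input_current = 0.06 + 0.02 * rng.standard_normal(T)   # noisy input drive

for t in range(T):
    v += dt * (-v / tau + input_current[t])   # leaky integration of the input
    if v >= v_th:                              # threshold crossing -> emit a spike
        spikes.append(t)
        v = v_reset                            # reset after spiking

print(f"{len(spikes)} spikes in {T} steps ({100 * len(spikes) / T:.1f}% active)")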
Prof. Patrick Gallinari
Thursday May 16th 11 AM
Short Bio: Patrick Gallinari is a professor at Sorbonne University, affiliated with the ISIR laboratory (Institute of Intelligent Systems and Robotics), and a distinguished researcher at Criteo AI Lab in Paris. He is a pioneer in the field of neural networks. His research focuses on statistical learning and deep learning, with applications in various domains such as semantic data processing and complex data analysis. A few years ago, he spearheaded research on physics-aware machine learning and contributed to seminal works in this field. Additionally, he holds a national AI chair titled "Deep Learning for Physical Processes with applications to Earth System Science".
Title: AI4Science: Physics-Aware Deep Learning for Modeling Dynamical Systems
Abstract: Deep learning has recently gained traction in modeling complex physical processes across industrial and scientific fields such as environment, climate, health, biology, and engineering. This rapidly evolving interdisciplinary topic presents new challenges for machine learning. In this tutorial, I will introduce deep learning approaches for physics-aware machine learning, as part of the broader topic AI4Science, focusing on modeling dynamic physical systems ubiquitous in science. I will illustrate some of the main challenges of this topic, including incorporating physical priors in machine learning models, generalization issues for modeling physical processes, neural operators, and perspectives on foundation models for science. This will be illustrated with example applications from different domains.
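As a concrete, hedged example of "incorporating physical priors in machine learning models", the sketch below trains a small network u(t) with a physics-informed loss that penalizes the residual of the toy ODE du/dt + u = 0 (exact solution exp(-t)) together with the initial condition u(0) = 1. The equation, architecture and loss weights are illustrative choices, not the models discussed in the talk.

# Physics-informed training sketch: the loss combines the ODE residual at random
# collocation points with the initial-condition error; automatic differentiation
# supplies du/dt.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(5000):
    t = 3.0 * torch.rand(128, 1)                 # collocation points on [0, 3]
    t.requires_grad_(True)
    u = net(t)
    du_dt = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    residual = du_dt + u                         # physics prior: du/dt + u = 0
    ic = net(torch.zeros(1, 1)) - 1.0            # initial condition u(0) = 1
    loss = residual.pow(2).mean() + ic.pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Compare the learned solution with exp(-t) at t = 1.
print(net(torch.tensor([[1.0]])).item(), torch.exp(torch.tensor(-1.0)).item())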
Prof. Claudio Gallicchio
Friday May 17th 9 AM
Short bio: Claudio Gallicchio is an Assistant Professor of Machine Learning at the Department of Computer Science of the University of Pisa, Italy. His research involves merging concepts from Deep Learning, Dynamical Systems, and Randomized Neural Systems, and he has co-authored over 100 scientific publications on the subject. He is the founder of the IEEE CIS Task Force on Reservoir Computing, and the co-founder and chair of the IEEE Task Force on Randomization-based Neural Networks and Learning Systems. He is an associate editor of IEEE Transactions on Neural Networks and Learning Systems (TNNLS). He is currently coordinating the project “NEURONE: extremely efficient NEUromorphic Reservoir cOmputing in Nanowire network hardwarE”, funded by the Italian Ministry of Research.
Title: Reservoir Computing at the Edge: Empowering IoT and Distributed AI Systems
Abstract: As the use of Deep Learning (DL) models becomes more pervasive in Internet of Things (IoT) and distributed Artificial Intelligence (AI) systems, the need for sustainable and efficient computational paradigms becomes more critical. This seminar addresses the challenge of developing DL models that can operate within the constraints of edge computing, focusing on the need for models that are not only resource-efficient but also capable of learning from streaming data in real time. We introduce Reservoir Computing (RC) as a promising solution to these challenges, exploring its basic principles and distinct advantages over traditional recurrent neural network approaches. The discussion includes Echo State Networks and Deep Echo State Networks, among other advanced methods within RC, and highlights their potential in online, unsupervised, federated, and continual learning scenarios. Using examples from pervasive AI applications, we demonstrate the practical impact of RC in enhancing the capabilities of IoT and distributed systems. Finally, the seminar briefly touches on the intersection of RC with neuromorphic computing, highlighting the synergistic potential to innovate edge-based AI.
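A minimal Echo State Network sketch helps fix ideas about why RC is attractive at the edge: the input and recurrent weights of the reservoir are random and never trained, and only a linear readout is fit, here with ridge regression on a toy one-step-ahead prediction task. All sizes and constants below are illustrative, not taken from the seminar.

# Minimal ESN: fixed random reservoir with leaky-tanh updates, linear readout
# trained in closed form by ridge regression on the collected reservoir states.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, leak, rho = 1, 200, 0.3, 0.9

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= rho / max(abs(np.linalg.eigvals(W)))        # rescale to spectral radius rho

u = np.sin(0.1 * np.arange(2000))[:, None]       # input signal
target = u[1:]                                   # task: predict the next sample

x = np.zeros(n_res)
states = []
for t in range(len(u) - 1):
    x = (1 - leak) * x + leak * np.tanh(W_in @ u[t] + W @ x)   # leaky reservoir update
    states.append(x.copy())
X = np.array(states)[100:]                       # drop the initial transient (washout)
Y = target[100:]

beta = 1e-6                                      # ridge regularization
W_out = np.linalg.solve(X.T @ X + beta * np.eye(n_res), X.T @ Y).T
print("train MSE:", np.mean((X @ W_out.T - Y) ** 2))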
Prof. Vincenzo Lomonaco
Friday May 17th 11 AM
Bio: Vincenzo Lomonaco is a Researcher at the University of Pisa, Italy, where he teaches the Artificial Intelligence and Continual Learning courses. Currently, he also serves as Co-Founding President and Lab Director of ContinualAI, a non-profit research organization and the largest open community on Continual Learning for AI, as Co-Founding Board Member of AI for People, and as a proud member of the European Lab for Learning and Intelligent Systems (ELLIS). In Pisa, he works within the Pervasive AI Lab and the Computational Intelligence and Machine Learning Group, which is also part of the Confederation of Laboratories for Artificial Intelligence Research in Europe (CLAIRE). Vincenzo is a Task Leader of two main European projects and a Principal Investigator of several industrial research contracts with companies such as Meta, Intel, Leonardo S.p.A. and SeaVision S.r.l. Previously, he was a postdoctoral researcher at the University of Bologna (with Davide Maltoni), where he obtained his PhD in early 2019 with a dissertation titled “Continual Learning with Deep Architectures” (a topic he has been working on for more than 7 years), which was recognized as one of the top-5 AI dissertations of 2019 by the Italian Association for Artificial Intelligence. For more than 5 years he worked as a teaching assistant for the Machine Learning and Computer Architectures courses in the Department of Computer Science and Engineering (DISI) at UniBo. He has been a Visiting Research Scientist at AI Labs in 2020, at Numenta (with Jeff Hawkins and Subutai Ahmad) in 2019, at ENSTA ParisTech (with David Filliat) in 2018, and at Purdue University (with Eugenio Culurciello) in 2017. Before that, he was a Machine Learning Software Engineer at iDL in-line Devices and a Master’s student at UniBo. His main research interest and passion is Continual Learning in all its facets; in particular, he studies it through the lenses of Deep Learning, Distributed Learning, and Practical Applications, all within an AI Sustainability framework.
Title: Deep Continual Learning for Efficient Pervasive AI Systems
Abstract: Learning continually from non-stationary data streams is a long-standing goal and a challenging problem in machine learning research. Naively fine-tuning prediction models only on the newly available data often incurs Catastrophic Forgetting or Interference: a sudden erasure of all the previously acquired knowledge. On the other hand, re-training prediction models from scratch on the accumulated data is not only inefficient but possibly unsustainable in the long term, especially where fast, frequent model updates are necessary. In the first part of the lecture we will discuss recent progress and trends in making machines learn continually through architectural, regularization, and replay approaches. In the second part, we will present Avalanche, an open-source end-to-end library for continual learning R&D based on PyTorch, and discuss possible applications of the notion of Pervasive AI in the real world, as well as future challenges in continual learning research and applications that Avalanche may help us tackle.
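As a hedged illustration of the replay family of approaches mentioned above (this is plain PyTorch, not Avalanche's API; the model, data loaders and buffer size are hypothetical placeholders), the sketch below keeps a small reservoir-sampled memory of past examples and mixes it into every new batch so that learning a new experience does not overwrite earlier ones.

# Bare-bones experience replay for continual learning: each training batch from
# the current experience is augmented with samples drawn from a fixed-size
# memory buffer, which is maintained with reservoir sampling over the stream.
import random
import torch
import torch.nn as nn

def train_stream(model, experiences, buffer_size=200, lr=0.01):
    buffer, seen = [], 0                                 # memory of past (x, y) pairs
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for loader in experiences:                           # one loader per task/experience
        for x, y in loader:
            batch_x, batch_y = x, y                      # keep the new samples for the buffer
            if buffer:                                   # replay: mix in stored old samples
                bx, by = zip(*random.sample(buffer, min(len(buffer), len(x))))
                x = torch.cat([x, torch.stack(bx)])
                y = torch.cat([y, torch.stack(by)])
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
            for xi, yi in zip(batch_x, batch_y):         # reservoir sampling of new samples
                seen += 1
                if len(buffer) < buffer_size:
                    buffer.append((xi.detach(), yi.detach()))
                elif random.random() < buffer_size / seen:
                    buffer[random.randrange(buffer_size)] = (xi.detach(), yi.detach())
    return model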