There are four tutorials available: two parallel sessions on the morning of Sunday, 8 July 2018, and another two parallel sessions in the afternoon:
Morning tutorials:
1. Massive MIMO – Fundamentals, Trends and Recent Developments
2. Recent advances in multisensory intelligent signal analysis
Afternoon tutorials:
3. Large-System Analysis: Random Matrices, Free Probability & Replica Method
4. Graph Sampling for Signal and Covariance Estimation
1. Massive MIMO – Fundamentals, Trends and Recent Developments
Presenter: Luca Sanguinetti
Abstract: Multiuser MIMO (MU-MIMO) technology consists of using multiple jointly processed antennas at the infrastructure side to separate interference in the spatial domain, allowing multiple users to send (uplink) or receive (downlink) data simultaneously on the same time-frequency slot. While MU-MIMO is theoretically well understood and has been around for decades, only relatively recently has it overcome practical implementation skepticism and become a mainstream technology. An important step forward that pushed industry to widely embrace MU-MIMO was the introduction of the concept of “Massive MIMO”. This is a particular regime of MU-MIMO in which the number of base station antennas is much larger than the number of simultaneously transmitted data streams. Everybody talks about Massive MIMO, but do they all mean the same thing? What is the canonical definition of Massive MIMO? What are the differences from the classical MU-MIMO technology of the nineties? How does the channel model impact the spectral efficiency? How can Massive MIMO be deployed, and what is the impact of hardware impairments? Is pilot contamination a problem in practice? The first half of this tutorial aims to answer all the above questions and to explain why Massive MIMO is a promising solution for handling several orders of magnitude more wireless data traffic than today’s technologies. The second half reviews the most significant trends that are pushing the original Massive MIMO ideas in different directions, including: the key role of Massive MIMO in designing cellular networks that are highly energy efficient; how Massive MIMO makes more efficient use of the hardware, which opens the door to using components with lower resolution; and an overview of important practical aspects, such as power allocation, pilot assignment, scheduling, load balancing, and the role of Massive MIMO in heterogeneous networks.
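As a rough numerical illustration of this "many more antennas than data streams" regime (not part of the tutorial material), the minimal Python sketch below estimates the uplink SINR of simple maximum-ratio combining under an assumed i.i.d. Rayleigh-fading channel; the values of M, K, and the SNR are illustrative choices only, and show the SINR growing with the number of antennas as user channels become nearly orthogonal.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 10          # simultaneously served users (data streams) - illustrative choice
snr = 10.0      # per-user transmit SNR (linear) - illustrative choice

for M in (16, 64, 256, 1024):   # number of base-station antennas
    sinrs = []
    for _ in range(200):        # Monte Carlo channel realisations
        # i.i.d. Rayleigh-fading channel matrix, unit-variance entries
        H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
        h0 = H[:, 0]                                   # channel of the user of interest
        signal = snr * np.abs(h0.conj() @ h0) ** 2     # desired power after MRC
        interf = snr * sum(np.abs(h0.conj() @ H[:, k]) ** 2 for k in range(1, K))
        noise = np.linalg.norm(h0) ** 2                # noise power after MRC
        sinrs.append(signal / (interf + noise))
    print(f"M = {M:5d}: mean MRC SINR = {10 * np.log10(np.mean(sinrs)):.1f} dB")
```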
Short Bio: Dr. L. Sanguinetti is an Associate Professor in the Dipartimento di Ingegneria dell’Informazione of the University of Pisa. He received the Telecommunications Engineer degree (cum laude) and the Ph.D. degree in information engineering from the University of Pisa, Italy, in 2002 and 2005, respectively. In 2004, he was a visiting Ph.D. student at the German Aerospace Center (DLR), Oberpfaffenhofen, Germany. From June 2007 to 2008, he was a postdoctoral associate in the Department of Electrical Engineering at Princeton University. Since July 2013, he has also been with CentraleSupelec, Paris, France. He serves as an Associate Editor for IEEE Trans. Wireless Commun. and IEEE Signal Process. Lett., and as the Lead Guest Associate Editor for IEEE JSAC – Game Theory for Networks. From June 2015 to June 2016, he was on the editorial board of IEEE JSAC – Series on Green Commun. and Networking. Dr. Sanguinetti served as Exhibit Chair of ICASSP 2014 and as the general co-chair of the 2016 Tyrrhenian Workshop on 5G&Beyond. His expertise and general interests span the areas of communications and signal processing, with special emphasis on multiuser MIMO, game theory, and random matrix theory for wireless communications. He was the co-recipient of two best paper awards, at the IEEE Wireless Communications and Networking Conference (WCNC) in 2013 and 2014, and the recipient of the FP7 Marie Curie IEF 2013 fellowship “Dense deployments for green cellular networks”. Dr. Sanguinetti is a Senior Member of the IEEE.
2. Recent advances in multisensory intelligent signal analysis
Presenters: Nicholas Cummins and Björn Schuller
Abstract: A combination of ubiquitous sensing technologies and advances in machine learning provides a myriad of opportunities to intelligently exploit multimodal sensory input data and foster a new generation of truly smart devices. This tutorial is an introduction to recent advances in such intelligent and “in the wild” sensing and will comprise two main sections. First, it will give an overview of the topic, covering data collection, feature extraction, and relevant recent machine learning principles, with a core focus on recent deep learning paradigms such as end-to-end and spiking networks. Second, attendees will receive training on running established open-source feature extraction and machine learning toolkits, including the state-of-the-art openSMILE and the novel openXBOW and auDeep multisensorial feature extraction toolkits, the recently released End2You toolkit for end-to-end deep learning, as well as CAS2T and iHEARu-PLAY for rapid learning-data acquisition and annotation by intelligent gamified crowdsourcing.
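For readers unfamiliar with the general feature-extraction-plus-classifier workflow the hands-on session builds on, the following minimal Python sketch illustrates the idea using librosa and scikit-learn as generic stand-ins (the session itself uses openSMILE, openXBOW, auDeep and End2You); the file names, labels, and feature choices are hypothetical and for illustration only.

```python
import numpy as np
import librosa                         # generic audio features, stand-in for openSMILE
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def clip_features(path):
    """Simple functionals (mean/std) of frame-level MFCCs for one audio clip."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)      # shape (13, n_frames)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Hypothetical file lists and labels, e.g. for a binary health-state task.
train_files, train_labels = ["clip_001.wav", "clip_002.wav"], [0, 1]
test_files = ["clip_003.wav"]

X_train = np.stack([clip_features(f) for f in train_files])
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
clf.fit(X_train, train_labels)

X_test = np.stack([clip_features(f) for f in test_files])
print(clf.predict(X_test))
```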
Short Bio: Dr Nicholas Cummins received his Ph.D. in Electrical Engineering from UNSW Australia in February 2016. He completed his undergraduate BE degree at UNSW, graduating with first-class honours in 2011. Currently, he is a postdoctoral researcher at the Chair of Embedded Intelligence for Health Care and Wellbeing at the University of Augsburg, Germany, where he is involved in the Horizon 2020 projects DE-ENIGMA, RADAR-CNS and TAPAS. His current research interests include multisensory signal analysis, affective computing, and computer audition, with a particular focus on the understanding and analysis of different health states. He has (co-)authored over 40 conference and journal papers (over 400 citations, h-index 10). Dr Cummins is a reviewer for IEEE, ACM and ISCA journals and conferences, as well as serving on program and organisational committees. He is a member of ISCA, IEEE and the IET.
Prof. Dr.-Ing. habil. Björn Schuller is a Full Professor and head of the ZD.B Chair of Embedded Intelligence for Health Care and Wellbeing at the University of Augsburg, Germany, and is also a Reader in Machine Learning in the Department of Computing, Imperial College London, London, U.K. He has strong experience in PhD supervision (6 completed and 16 ongoing theses). He has (co-)authored five books and more than 600 publications in peer-reviewed books, journals, and conference proceedings, leading to more than 17,000 citations (h-index = 63), and has been coordinator/PI in more than 10 European projects. He was honoured as one of 40 extraordinary scientists under the age of 40 by the World Economic Forum (WEF) in 2015, and is a Fellow of the IEEE.
3. Large-System Analysis: Random Matrices, Free Probability & Replica Method
Presenter: Ralf Müller
Abstract: The applications of random matrices are widespread. Random matrices occur in array processing, covariance estimation, compressive sensing, wireless communications, machine learning, nuclear physics, statistical finance, number theory, and many more fields. At the same time, random matrices are not a mature field of mathematics, but a subject of intense ongoing research. This makes it difficult for newcomers to enter the field: overview literature is often not up to date, while recent literature typically targets experts. This tutorial aims to provide the audience with an intuitive understanding of random matrices. In contrast to the mathematicians who drive the progress in random matrix theory, an engineer’s primary task is not to prove theorems, but to understand the principles that govern the systems they aim to design. Systems governed by large random matrices may show behavior that is counterintuitive to engineers who are not familiar with them. This may misguide their approaches to circuit design and leave ingenious system architectures and potential complexity savings undiscovered. While scattered results of random matrix theory have been around for close to a century, it was the discovery of free probability theory that gave birth to a comprehensive, elegant, and well-suited framework for dealing with random matrices in engineering practice. Freeness replaces the classical notion of statistical independence, which is circumstantial for random matrices. The free analogs of the Gaussian and Poisson distributions turn out to be the semicircle and the quarter circle law, respectively. Free convolution and deconvolution can be defined in terms of free cumulant generating functions. Free probability theory is fully sufficient if random matrices interact with Gaussian signals. For more general settings, a dive into statistical mechanics may be necessary. Here, the replica method turns out to be particularly helpful. It was invented in the 1960s to analyze unstructured magnetic materials (spin glasses) and offers the formidable possibility of analyzing NP-hard algorithms such as exhaustive search. The replica method can be happily married with free probability theory to provide a powerful toolbox for the analysis of large non-linear systems governed by random matrices with non-trivial statistics. It is presumed that the replica method will play a key role in the future analysis of deep learning algorithms.
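As a small numerical taster of the semicircle law mentioned above (not part of the tutorial material), the following minimal Python sketch builds a real symmetric Wigner matrix with i.i.d. N(0,1) entries, normalised by the square root of its dimension, and compares the empirical eigenvalue histogram with the semicircle density on [-2, 2]; the matrix size and bin count are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
# Real symmetric Wigner matrix: i.i.d. N(0,1) entries, symmetrised and scaled by 1/sqrt(n).
A = rng.standard_normal((n, n))
W = (A + A.T) / np.sqrt(2 * n)
eigvals = np.linalg.eigvalsh(W)

# Empirical spectrum versus the semicircle density sqrt(4 - x^2) / (2*pi) on [-2, 2].
hist, edges = np.histogram(eigvals, bins=40, range=(-2, 2), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
semicircle = np.sqrt(np.maximum(4 - centers**2, 0)) / (2 * np.pi)
print("max deviation from semicircle density:", np.max(np.abs(hist - semicircle)))
```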
Short Bio: Ralf R. Müller is Professor for Information Transmission at the Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany. He received the Dipl.-Ing. and Dr.-Ing. degrees (summa cum laude) from FAU in 1996 and 1999, respectively. From 2000 to 2004, he directed a research group at the Telecommunications Research Center Vienna in Austria and taught as an adjunct professor at TU Wien. In 2005, he was appointed full professor at the Department of Electronics and Telecommunications at the Norwegian University of Science and Technology in Trondheim, Norway. In 2013, he joined the Institute for Digital Communications at FAU. He has held visiting appointments at Princeton University, US; Institut Eurécom, France; the University of Melbourne, Australia; the University of Oulu, Finland; the National University of Singapore; and Babeș-Bolyai University. Dr. Müller received the Leonard G. Abraham Prize (jointly with Sergio Verdú) from the IEEE Communications Society for the paper “Design and analysis of low-complexity interference mitigation on vector channels”. He was awarded by both the Vodafone Foundation for Mobile Communications and the German Information Technology Society (ITG) for his dissertation “Power and bandwidth efficiency of multiuser systems with random spreading”. Moreover, he received the ITG award for the paper “A random matrix model for communication via antenna arrays” as well as the Philipp-Reis Award (jointly with Robert Fischer). Dr. Müller served as an associate editor for the IEEE TRANSACTIONS ON INFORMATION THEORY from 2003 to 2006 and as an executive editor for the IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS from 2014 to 2016.
4. Graph Sampling for Signal and Covariance Estimation
Presenter: Sundeep Prabhakar Chepuri
Abstract: Ubiquitous sensors generate prohibitively large datasets. The large volumes of data we collect nowadays are complex in nature, as they are collected on manifolds, irregular domains, networks, or point clouds. Extending classical signal processing concepts and tools to represent, interpret, and analyse signals defined on irregular domains is an emerging area of research. Graphs can be used to describe and explain relationships in complex datasets, and they are also useful for approximating complicated surfaces. In this tutorial, the main focus will be on sampling as well as reconstructing (A) signals and (B) second-order statistics of signals residing on the nodes of arbitrary graphs. We will discuss how the underlying geometrical structure of the domain on which the data is defined can be exploited for sampling. To recover signals defined on graphs from a given set of samples, algorithms that incorporate prior knowledge about the signal, such as smoothness or subspace priors related to the underlying graph, are presented. Techniques to sample diffusion processes, e.g., processes obeying the heat equation, on complicated manifolds will also be presented. Next, the concepts of graph stationarity and the graph power spectrum are introduced, which facilitate the analysis and processing of random graph signals; connections to the field of compressive covariance sensing will be presented as well. We will see that, by sampling a significantly smaller subset of vertices, we can reconstruct the second-order statistics of the graph signal from the subsampled observations and, more importantly, without any spectral priors. Near-optimal greedy methods for sparsely sensing signals and second-order statistics will be presented. Throughout this tutorial, significant attention will be given to illustrating the developed theory with a number of diverse synthetic and real-world examples.
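To give a concrete flavour of subspace-prior reconstruction from samples on a subset of nodes (not part of the tutorial material), the minimal Python sketch below generates a random undirected graph, draws a signal bandlimited to the first few Laplacian eigenvectors, samples it on a handful of nodes, and recovers it by least squares; the graph model, bandwidth, and sample size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 60, 5                      # number of nodes, signal bandwidth (subspace dimension)

# Random undirected graph (symmetric adjacency) and its combinatorial Laplacian.
A = (rng.random((n, n)) < 0.1).astype(float)
A = np.triu(A, 1)
A = A + A.T
L = np.diag(A.sum(axis=1)) - A

# Graph Fourier basis: Laplacian eigenvectors; a bandlimited signal lives in span(U[:, :k]).
_, U = np.linalg.eigh(L)
Uk = U[:, :k]
x = Uk @ rng.standard_normal(k)   # ground-truth bandlimited graph signal

# Sample m >= k nodes (here chosen at random) and reconstruct by least squares.
m = 12
S = rng.choice(n, size=m, replace=False)
y = x[S]                                          # noiseless samples on the chosen nodes
alpha = np.linalg.lstsq(Uk[S, :], y, rcond=None)[0]
x_hat = Uk @ alpha
print("relative reconstruction error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```

In the noiseless case the recovery is exact whenever the sampled rows of the bandlimited basis have full column rank; the greedy samplers covered in the tutorial target choosing such sample sets near-optimally rather than at random.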
Short Bio: Dr. Chepuri, IEEE Member since 2016, received the M.Sc. degree (cum laude) in electrical engineering and the Ph.D. degree (cum laude) from the Delft University of Technology, Delft, The Netherlands, in 2011 and 2016, respectively. He has held positions at Robert Bosch, India, during 2007–2009, and Holst Centre/imec-nl, The Netherlands, during 2010–2011. He is currently with the Circuits and Systems Group at the Faculty of Electrical Engineering, Mathematics and Computer Science, Delft University of Technology. His research interests include mathematical signal processing, statistical inference, sensor networks, and wireless communications. Dr. Chepuri received the Best Student Paper Award for his publication at the ICASSP 2015 conference in Australia.