University of Amsterdam, the Netherlands
► Personal homepage 1
► Personal homepage 2
Biography: Prof. Dr. Max Welling is a research chair in Machine Learning at the University of Amsterdam and a VP of Technologies at Qualcomm. He has a secondary appointment as a senior fellow at the Canadian Institute for Advanced Research (CIFAR). He is a co-founder of Scyfer BV, a university spin-off in deep learning that was acquired by Qualcomm in the summer of 2017. In the past he held postdoctoral positions at Caltech (1998-2000), UCL (2000-2001) and the University of Toronto (2001-2003). He received his PhD in 1998 under the supervision of Nobel laureate Prof. G. 't Hooft. Max Welling served as associate editor-in-chief of IEEE TPAMI from 2011 to 2015 (impact factor 4.8). He has served on the board of the NIPS Foundation (the largest conference in machine learning) since 2015, and was program chair and general chair of NIPS in 2013 and 2014 respectively. He was also program chair of AISTATS in 2009 and ECCV in 2016, and general chair of MIDL 2018. He has served on the editorial boards of JMLR and JML, and was an associate editor for Neurocomputing, JCGS and TPAMI. He has received multiple grants from Google, Facebook, Yahoo, NSF, NIH, NWO and ONR-MURI, among them an NSF CAREER grant in 2005. He is the recipient of the ECCV Koenderink Prize in 2010. Welling is on the board of the Data Science Research Center in Amsterdam, directs the Amsterdam Machine Learning Lab (AMLAB), and co-directs the Qualcomm-UvA deep learning lab (QUVA) and the Bosch-UvA Deep Learning lab (DELTA). Max Welling has over 200 scientific publications in machine learning, computer vision, statistics and physics.
Abstract: Deep learning is all too often treated as a pure optimization problem rather than a statistical estimation problem. Yet statistical concepts such as overfitting and the bias-variance tradeoff are key to any machine learning algorithm. Bayesian statistics provides a beautiful, consistent statistical framework that can be combined with deep learning, resulting in a research field called Bayesian Deep Learning (BDL). In this talk I will discuss a number of advantages of BDL, among them natural protection against overfitting, confidence estimation, better robustness against adversarial attacks, better privacy preservation, and a framework for compressing and quantizing deep architectures. In the second part of the talk I will focus on making deep learning more power- and memory-efficient. There are a number of reasons why I believe this is an important direction for research, among them the economic feasibility of large-scale applications of AI and the thermal ceiling of AI on the edge. I will give some examples of how BDL has guided us in developing these efficient deep learning implementations.
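As one concrete illustration of the confidence estimation the abstract mentions (a sketch, not code from the talk): Monte Carlo dropout is a common approximate-BDL technique that keeps dropout active at prediction time and uses the spread of repeated stochastic forward passes as an uncertainty estimate. The tiny fixed-weight network and input below are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer regression network with fixed random weights
# (illustrative only; a real model would be trained).
W1 = rng.normal(size=(1, 64))
W2 = rng.normal(size=(64, 1)) / 8.0

def stochastic_forward(x, p_drop=0.5):
    """One forward pass with dropout kept ON at test time (MC dropout)."""
    h = np.maximum(0.0, x @ W1)          # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop  # random dropout mask
    h = h * mask / (1.0 - p_drop)        # inverted-dropout scaling
    return h @ W2

def mc_predict(x, n_samples=200):
    """Predictive mean and standard deviation over stochastic passes."""
    samples = np.stack([stochastic_forward(x) for _ in range(n_samples)])
    return samples.mean(axis=0), samples.std(axis=0)

x = np.array([[0.3]])
mean, std = mc_predict(x)  # std quantifies the model's predictive uncertainty
```

The key design point is that the dropout mask is resampled on every pass, so the prediction becomes a distribution rather than a point estimate; inputs the model is unsure about yield a larger spread.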
CNRS, Toulouse, France
► Personal homepage
Biography: Cédric Févotte is a CNRS senior researcher at the Institut de Recherche en Informatique de Toulouse (IRIT). Previously, he was a CNRS researcher at Laboratoire Lagrange (Nice, 2013-2016) and Télécom ParisTech (2007-2013), a research engineer at Mist-Technologies (the startup that became Audionamix, 2006-2007), and a postdoc at the University of Cambridge (2003-2006). He holds MEng and PhD degrees in EECS from École Centrale de Nantes. His research interests concern statistical signal processing and machine learning, in particular for source separation and inverse problems. He was a member of the IEEE Machine Learning for Signal Processing technical committee (2012-2018) and has been a member of the SPARS steering committee since 2018. He has been an associate editor for the IEEE Transactions on Signal Processing since 2014. In 2014, he was the co-recipient of an IEEE Signal Processing Society Best Paper Award for his work on audio source separation using multichannel nonnegative matrix factorisation. He is the principal investigator of the European Research Council (ERC) project FACTORY (New paradigms for latent factor estimation, 2016-2021).
Tencent AI Lab, Seattle, USA
► Personal homepage
Biography: Dr. Dong Yu is a distinguished scientist and vice general manager at Tencent AI Lab, an IEEE Fellow and an ACM Distinguished Scientist. Before joining Tencent in 2017, he was a principal researcher at Microsoft Research, which he joined in 1998. His research focuses on speech recognition and other applications of machine learning techniques. He has published two monographs and over 160 papers. His works have been cited over 17,000 times according to Google Scholar and have been recognized with the prestigious IEEE Signal Processing Society best paper awards in 2013 and 2016.
Dr. Dong Yu is currently serving as a member of the IEEE Speech and Language Processing Technical Committee (2013-2018) and as a distinguished lecturer of APSIPA (2017-2018). He has served as an associate editor of the IEEE/ACM Transactions on Audio, Speech, and Language Processing (2011-2015), an associate editor of the IEEE Signal Processing Magazine (2008-2011), and as a member of the organizing and technical committees of many conferences and workshops.
Abstract: In this talk, I will introduce and compare the most promising end-to-end speech recognition systems, such as Connectionist Temporal Classification (CTC), the RNN Transducer, the RNN aligner, and the sequence-to-sequence model with attention. I will discuss the advantages and shortcomings of each setup, present key observations we have made while exploring these models, and discuss possible further developments.
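To make the CTC variant concrete, here is a minimal plain-Python sketch of the standard CTC forward (alpha) recursion, which sums the probability of a label sequence over all frame-level alignments; the function name and toy probabilities are illustrative, not from the talk.

```python
def ctc_prob(probs, labels, blank=0):
    """P(labels | per-frame posteriors), summed over all CTC alignments.

    probs:  T x V list of per-frame probability vectors.
    labels: target label sequence without blanks.
    """
    # Extended label sequence with blanks between and around labels.
    ext = [blank]
    for label in labels:
        ext += [label, blank]
    S, T = len(ext), len(probs)

    # alpha[s] = total probability of alignment prefixes ending at ext[s].
    alpha = [0.0] * S
    alpha[0] = probs[0][ext[0]]
    if S > 1:
        alpha[1] = probs[0][ext[1]]

    for t in range(1, T):
        new = [0.0] * S
        for s in range(S):
            a = alpha[s]                      # stay on the same symbol
            if s >= 1:
                a += alpha[s - 1]             # advance by one symbol
            # Skip a blank unless the symbol repeats two positions back.
            if s >= 2 and ext[s] != blank and ext[s] != ext[s - 2]:
                a += alpha[s - 2]
            new[s] = a * probs[t][ext[s]]
        alpha = new

    # Valid alignments end on the last label or the trailing blank.
    return alpha[-1] + (alpha[-2] if S > 1 else 0.0)

# Toy check: 2 frames, vocabulary {blank=0, "a"=1}, uniform posteriors.
# Three of the four length-2 paths collapse to "a", so p == 0.75.
p = ctc_prob([[0.5, 0.5], [0.5, 0.5]], labels=[1], blank=0)
```

The blank symbol and the skip rule in the recursion are what let CTC distinguish a repeated label ("aa") from a single stretched one, which is the core idea behind CTC-based end-to-end recognizers.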
Implement Consulting Group, Copenhagen, Denmark
► Personal homepage