MLSP2018
Bayesian Filtering and Smoothing Methods for Machine Learning
Professor Simo Särkkä
Department of Electrical Engineering and Automation
Aalto University, Helsinki, Finland

Personal homepage

Biography:

Currently, Dr. Särkkä is an Associate Professor with Aalto University, Technical Advisor of IndoorAtlas Ltd., and an Adjunct Professor with Tampere University of Technology and Lappeenranta University of Technology. In 2013 he was a Visiting Professor with the Department of Statistics of Oxford University, and in 2011 he was a Visiting Scholar with the Department of Engineering at the University of Cambridge, UK. His research interests are in multi-sensor data processing systems with applications in location sensing, health technology, machine learning, inverse problems, and brain imaging. He has authored or coauthored about 100 peer-reviewed scientific articles, and his book "Bayesian Filtering and Smoothing", along with its Chinese translation, was recently published by Cambridge University Press. His latest book, "Applied Stochastic Differential Equations", was published by Cambridge University Press in 2018. He is a Senior Member of the IEEE, serves as an Associate Editor of IEEE Signal Processing Letters, and is a member of the IEEE Machine Learning for Signal Processing Technical Committee.

Abstract:

Machine learning methods that can learn continuously from large streams of data are becoming increasingly important in applications such as ubiquitous sensor systems, self-driving cars, smartphone apps, and artificial intelligence (AI) systems. In these applications a separate training phase is not available; instead, the methods must learn from the data in real time. Under this constraint it is beneficial to use recursive Bayesian estimation methodology, also called Bayesian filtering and smoothing, both to enable online learning and to speed it up. The aim of this tutorial is to give an overview of the state of the art in these methods.
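As a concrete illustration of the recursive principle in the abstract, the following is a minimal sketch (in Python, not from the tutorial material itself) of one predict/update step of the Kalman filter, the basic building block of Bayesian filtering for linear Gaussian state-space models; all model matrices are hypothetical placeholders.

```python
# Minimal sketch: one recursion of the Kalman filter for a linear Gaussian
# state-space model x_k = A x_{k-1} + q_k,  y_k = H x_k + r_k.
# The matrices A, Q, H, R are illustrative placeholders.
import numpy as np

def kalman_step(m, P, y, A, Q, H, R):
    """Predict with the dynamics, then update with the new measurement y."""
    # Prediction step: propagate the posterior of the previous time step
    m_pred = A @ m
    P_pred = A @ P @ A.T + Q
    # Update step: condition on the new measurement
    v = y - H @ m_pred                      # innovation
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = np.linalg.solve(S, H @ P_pred).T    # Kalman gain (S, P_pred symmetric)
    m_new = m_pred + K @ v
    P_new = P_pred - K @ S @ K.T
    return m_new, P_new
```

Processing a data stream then amounts to calling this step once per incoming measurement, so the cost per sample stays constant regardless of how much data has already been seen.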

Tentative outline:

  • From linear regression to Kalman filtering and beyond
  • Recursive Bayesian estimation and Bayesian filtering and smoothing
  • State-space representation of Gaussian process regression (see the sketch after this list)
  • Spatiotemporal learning with recursive Bayesian estimation
  • Hyper-parameter learning methods
  • Applications will be presented alongside the methods
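To make the state-space Gaussian process item above concrete, here is a minimal sketch (in Python, under the standard Matérn-3/2 state-space form from the Kalman-filtering literature; parameter names are illustrative) of how that covariance function maps to a two-dimensional linear state-space model, so GP regression over n inputs can be run in O(n) with a Kalman filter instead of O(n^3).

```python
# Minimal sketch: Matern-3/2 GP prior written as a 2-dimensional linear
# state-space model (state = [f, df/dt]); hyperparameters are placeholders.
import numpy as np
from scipy.linalg import expm

def matern32_state_space(lengthscale, variance):
    lam = np.sqrt(3.0) / lengthscale
    F = np.array([[0.0, 1.0],
                  [-lam**2, -2.0 * lam]])                 # continuous-time drift
    H = np.array([[1.0, 0.0]])                            # measurement model
    Pinf = np.array([[variance, 0.0],
                     [0.0, lam**2 * variance]])           # stationary covariance
    return F, H, Pinf

def discretize(F, Pinf, dt):
    """Discrete-time transition and process noise for a step of length dt."""
    A = expm(F * dt)
    Q = Pinf - A @ Pinf @ A.T
    return A, Q
```

Feeding the resulting A, Q, and H into a Kalman filter (and smoother) over the sorted input locations reproduces the GP posterior at those points while touching each data point only once.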

Opening the Black Box - How to Interpret Machine Learning Functions and Their Decisions
Professor Lars Kai Hansen and
PhD Student Laura Rieger

Section for Cognitive Systems, DTU Compute
Technical University of Denmark

Personal homepage Lars Kai Hansen
Personal homepage Laura Rieger

Biography:

Lars Kai Hansen has MSc and PhD degrees in physics from the University of Copenhagen. Since 1990 he has been with the Technical University of Denmark, where he heads the Section for Cognitive Systems. He has published more than 300 contributions on machine learning, signal processing, and applications in AI and cognitive systems. His research has been generously funded by the Danish Research Councils and private foundations, the European Union, and the US National Institutes of Health. He has made seminal contributions to machine learning, including the introduction of ensemble methods ('90), and to functional neuroimaging, including the first brain-state decoding work based on PET ('94) and fMRI ('97). In the context of neuroimaging he has developed a suite of methods for visualizing machine learning models and quantifying their uncertainty. In 2011 he was elected "Catedra de Excelencia" at UC3M Madrid, Spain.

Laura Rieger holds dual MSc degrees in Computer Science from the Technical University of Berlin and the Korea Advanced Institute of Science and Technology. Since fall 2017 she has been a PhD student at DTU Compute, working with Prof. Lars Kai Hansen in the Cognitive Systems section. Her research interests include interpretability and uncertainty of neural networks, deep learning, and safety in machine learning.

Abstract:

To "let the data speak" machine learning is often based on weak model assumptions - leading to the general notion of machine learning as a black box approach. Indeed, much machine learning research has been devoted to developing expressive representations and algorithms with high statistical and computational efficiency. However, in certain domains - such as systems neuroscience - interpretability and accountability are key to successful application. We will give an introduction to classic and modern tools for understanding machine learning representations and inference with a specific focus on uncertainty quantification. The tutorial is illustrated by applications in bio-medicine, computer vision and natural language processing.