Joachim M. Buhmann: Catalogue data in Spring Semester 2015

Name: Prof. Dr. Joachim M. Buhmann
Field of specialisation: Computer Science (Information Science and Engineering)
Address:
Institut für Maschinelles Lernen
ETH Zürich, OAT Y 13.2
Andreasstrasse 5
8092 Zürich
SWITZERLAND
Phone: +41 44 632 31 24
Fax: +41 44 632 15 62
E-mail: jbuhmann@inf.ethz.ch
URL: http://www.ml.inf.ethz.ch/
Department: Computer Science
Relationship: Full Professor

Number: 252-0526-00L
Title: Statistical Learning Theory
ECTS: 4 credits
Hours: 2V + 1U
Lecturers: J. M. Buhmann
Abstract: The course covers advanced methods of statistical learning:
PAC learning and statistical learning theory; variational methods and optimization, e.g., maximum entropy techniques, information bottleneck, deterministic and simulated annealing; clustering for vectorial, histogram and relational data; model selection; graphical models.
Objective: The course surveys recent methods of statistical learning. The fundamentals of machine learning, as presented in the course "Introduction to Machine Learning", are expanded, and in particular the theory of statistical learning is discussed.
Content:
# Boosting: A state-of-the-art classification approach that is sometimes used as an alternative to SVMs in non-linear classification (see the AdaBoost sketch after this list).
# Theory of estimators: How can we measure the quality of a statistical estimator? We have already discussed the bias and variance of estimators briefly, but the interesting part is yet to come (the underlying decomposition is restated after this list).
# Statistical learning theory: How can we measure the quality of a classifier? Can we give any guarantees for the prediction error? (A bound of this kind is sketched after this list.)
# Variational methods and optimization: We consider optimization approaches for problems where the optimizer is a probability distribution (the resulting Gibbs form is stated after this list). Concepts we will discuss in this context include:

* Maximum Entropy
* Information Bottleneck
* Deterministic Annealing

# Clustering: The problem of sorting data into groups without using training samples. This requires a definition of "similarity" between data points and adequate optimization procedures (see the k-means sketch after this list).
# Model selection: We have already discussed in ML I how to fit a model to a data set, which usually involves adjusting the parameters of a given type of model. Model selection refers to the question of how complex the chosen model should be. As we know, simple and complex models each have their own advantages and drawbacks (a classical selection criterion is given after this list).
# Reinforcement learning: The problem of learning through interaction with a changing environment. To achieve optimal behavior, we must base decisions not only on the current state of the environment but also on how we expect it to develop in the future (see the Q-learning update after this list).
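
To make the boosting item concrete, here is a minimal AdaBoost sketch over decision stumps, assuming NumPy and labels in {-1, +1}; the toy data, the number of rounds T, and all function names are illustrative placeholders, not course material.

```python
import numpy as np

def fit_stump(X, y, w):
    """Pick the stump (feature, threshold, polarity) with minimal weighted error."""
    best = (np.inf, 0, 0.0, 1)
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = pol * np.where(X[:, j] <= thr, 1, -1)
                err = w[pred != y].sum()
                if err < best[0]:
                    best = (err, j, thr, pol)
    return best

def adaboost(X, y, T=20):
    """Return a list of (alpha, feature, threshold, polarity) weak learners."""
    n = len(y)
    w = np.full(n, 1.0 / n)                 # start with uniform sample weights
    ensemble = []
    for _ in range(T):
        err, j, thr, pol = fit_stump(X, y, w)
        err = max(err, 1e-12)               # guard against a perfect stump
        alpha = 0.5 * np.log((1 - err) / err)
        pred = pol * np.where(X[:, j] <= thr, 1, -1)
        w *= np.exp(-alpha * y * pred)      # up-weight misclassified points
        w /= w.sum()
        ensemble.append((alpha, j, thr, pol))
    return ensemble

def predict(ensemble, X):
    score = sum(a * pol * np.where(X[:, j] <= thr, 1, -1)
                for a, j, thr, pol in ensemble)
    return np.sign(score)

# toy usage: two Gaussian blobs labelled -1 and +1
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 1.0, (50, 2)), rng.normal(1.0, 1.0, (50, 2))])
y = np.r_[-np.ones(50), np.ones(50)]
print("training accuracy:", (predict(adaboost(X, y), X) == y).mean())
```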
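
For the estimator item, the quality measure alluded to above is usually the mean squared error, whose standard bias-variance decomposition (textbook form, not taken from the course slides) reads

\mathbb{E}\bigl[(\hat{\theta} - \theta)^2\bigr] = \underbrace{\bigl(\mathbb{E}[\hat{\theta}] - \theta\bigr)^2}_{\text{bias}^2} + \underbrace{\mathbb{E}\bigl[(\hat{\theta} - \mathbb{E}[\hat{\theta}])^2\bigr]}_{\text{variance}}.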
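
The guarantees asked for under statistical learning theory typically take the form of PAC-style generalization bounds; a standard Hoeffding-based bound for a finite hypothesis class (textbook form, assumptions ours) states that with probability at least 1 - \delta, for every h \in \mathcal{H},

R(h) \le \hat{R}_n(h) + \sqrt{\frac{\ln|\mathcal{H}| + \ln(1/\delta)}{2n}},

where R is the true risk and \hat{R}_n the empirical risk on n i.i.d. samples.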
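
For the variational-methods item: maximizing entropy subject to a constraint on the expected cost R(c) yields the Gibbs distribution (standard form, not course-specific notation)

p^{*}(c) = \frac{\exp(-\beta R(c))}{\sum_{c'} \exp(-\beta R(c'))},

where the inverse temperature \beta trades off entropy against expected cost; deterministic annealing tracks the minimizer of the corresponding free energy while \beta is gradually increased.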
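
To illustrate the clustering item for vectorial data, a minimal sketch of Lloyd's k-means algorithm follows; the squared Euclidean notion of "similarity", the random initialization, and k = 3 are illustrative choices, not the course's prescribed method.

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Lloyd's algorithm: alternate nearest-center assignment and centroid update."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]  # random initial centers
    for _ in range(iters):
        # squared Euclidean distances, shape (n_points, k)
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j]             # keep empty clusters in place
                                for j in range(k)])
        if np.allclose(new_centers, centers):               # converged
            break
        centers = new_centers
    return labels, centers

# toy usage: three well-separated Gaussian blobs in 2D
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.3, (40, 2)) for m in (-2.0, 0.0, 2.0)])
labels, centers = kmeans(X, k=3)
print(centers.round(2))
```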
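
For the model-selection item, one classical criterion that penalizes model complexity is the Bayesian information criterion (one common choice among several; the course description does not prescribe it):

\mathrm{BIC} = -2 \ln \hat{L} + p \ln n,

where \hat{L} is the maximized likelihood, p the number of free parameters, and n the sample size; among candidate models, the one with the smallest BIC is preferred.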
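
Finally, the reinforcement-learning item's need to account for future development is captured by the standard Q-learning update (generic symbols: learning rate \alpha, discount factor \gamma; not course notation):

Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \bigl( r_{t+1} + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \bigr).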
Lecture notes: No script; the transparencies of the lectures will be made available.
Literature: Duda, Hart, Stork: Pattern Classification, Wiley Interscience, 2000.

Hastie, Tibshirani, Friedman: The Elements of Statistical Learning, Springer, 2001.

Devroye, Györfi, Lugosi: A Probabilistic Theory of Pattern Recognition, Springer, 1996.
Prerequisites / Notice: Requirements:

Basic knowledge of statistics and an interest in statistical methods.

It is recommended to take Introduction to Machine Learning (ML I) first, but with a little extra effort Statistical Learning Theory can also be followed without the introductory course.