Niao He: Catalogue data in Spring Semester 2021

Name: Prof. Dr. Niao He
Field of specialization: Computer Science
Address:
Professorship of Computer Science
ETH Zürich, OAT Y 21.1
Andreasstrasse 5
8092 Zürich
SWITZERLAND
E-mail: niao.he@inf.ethz.ch
URL: https://odi.inf.ethz.ch/
Department: Computer Science
Relationship: Assistant Professor (Tenure Track)

Number | Title | ECTS | Hours | Lecturers
252-0945-12L | Doctoral Seminar Machine Learning (FS21) | 2 ECTS | 1S | N. He, M. Sachan, J. M. Buhmann, T. Hofmann, A. Krause, G. Rätsch
Only for Computer Science Ph.D. students.

This doctoral seminar is intended for PhD students affiliated with the Institute for Machine Learning. Other PhD students who work on machine learning projects or related topics need approval from at least one of the organizers to register for the seminar.
Abstract: An essential aspect of any research project is the dissemination of the findings arising from the study. Here we focus on oral communication, which includes: appropriate selection of material, preparation of visual aids (slides and/or posters), and presentation skills.
Learning objective: The seminar participants should learn how to prepare and deliver scientific talks and how to handle technical questions. Participants are also expected to contribute actively to discussions during others' presentations, thus learning and practicing critical-thinking skills.
Prerequisites / Notice: This doctoral seminar of the Machine Learning Laboratory of ETH is intended for PhD students who work on a machine learning project, i.e., the PhD students of the ML lab.
261-5110-00L | Optimization for Data Science | 10 ECTS | 3V + 2U + 4A | B. Gärtner, D. Steurer, N. He
Abstract: This course provides an in-depth theoretical treatment of optimization methods that are particularly relevant in data science.
Learning objective: Understanding the theoretical guarantees (and their limits) of relevant optimization methods used in data science. Learning general paradigms to deal with optimization problems arising in data science.
Content: This course provides an in-depth theoretical treatment of optimization methods that are particularly relevant in machine learning and data science.

In the first part of the course, we give a brief introduction to convex optimization, with basic motivating examples from machine learning. We then analyse classical and more recent first- and second-order methods for convex optimization: gradient descent, Nesterov's accelerated method, proximal and splitting algorithms, subgradient descent, stochastic gradient descent, variance-reduced methods, Newton's method, and quasi-Newton methods. The emphasis is on analysis techniques that recur in convergence analyses for various classes of convex functions. We also discuss some classical and recent theoretical results for nonconvex optimization.
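As an illustrative aside, and not part of the official course materials: a minimal Python sketch contrasting plain gradient descent with Nesterov's accelerated method on a smooth convex quadratic. All names, step sizes, and problem data below are assumptions chosen for illustration.

import numpy as np

def gradient_descent(grad, x0, step, iters):
    # Plain gradient descent: x_{k+1} = x_k - step * grad(x_k).
    x = x0.copy()
    for _ in range(iters):
        x = x - step * grad(x)
    return x

def nesterov(grad, x0, step, iters):
    # Nesterov's accelerated method: take the gradient step at an
    # extrapolated point y_k, with momentum weight (k - 1) / (k + 2).
    x, x_prev = x0.copy(), x0.copy()
    for k in range(1, iters + 1):
        y = x + (k - 1) / (k + 2) * (x - x_prev)
        x_prev, x = x, y - step * grad(y)
    return x

# Illustrative problem: f(x) = 0.5 x^T A x - b^T x with A positive definite,
# so grad f(x) = A x - b and the minimizer solves A x = b.
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 20))
A = M.T @ M + np.eye(20)
b = rng.standard_normal(20)
grad = lambda x: A @ x - b
L = np.linalg.eigvalsh(A).max()   # smoothness constant; step size 1/L is standard
x_star = np.linalg.solve(A, b)

for method in (gradient_descent, nesterov):
    x = method(grad, np.zeros(20), 1.0 / L, 200)
    print(method.__name__, np.linalg.norm(x - x_star))

With the same step size 1/L, the accelerated iterates typically approach x_star markedly faster than plain gradient descent, previewing the kind of convergence-rate separation that the analyses in the course make precise.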

In the second part, we discuss convex programming relaxations as a powerful and versatile paradigm for designing efficient algorithms for computational problems arising in data science. We develop a unified perspective on this paradigm through the lens of the sum-of-squares semidefinite programming hierarchy. As applications, we discuss non-negative matrix factorization, compressed sensing and sparse linear regression, matrix completion and phase retrieval, as well as robust estimation.
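As a hedged illustration of the sum-of-squares paradigm mentioned above (a sketch, not the course's own formulation): the basic relaxation for minimizing a polynomial p over \mathbb{R}^n replaces the minimization by a search for the best lower bound certified by a sum-of-squares decomposition,

\min_{x \in \mathbb{R}^n} p(x) \;\ge\; \max\bigl\{\, \lambda \in \mathbb{R} \;:\; p(x) - \lambda \text{ is a sum of squares of polynomials} \,\bigr\}.

Checking the sum-of-squares condition at a fixed degree amounts to finding a positive semidefinite matrix Q with p(x) - \lambda = z(x)^\top Q z(x), where z(x) is a vector of monomials; this is a semidefinite program, which is what makes the hierarchy an algorithmic tool.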
Prerequisites / Notice: As background, we require the material taught in the course "252-0209-00L Algorithms, Probability, and Computing". It is not necessary that participants have actually taken that course, but they should be prepared to catch up if necessary.