Catalogue data in Spring Semester 2023
Computer Science Master
Majors
Major in Machine Intelligence
Core Courses
Number | Title | Type | ECTS | Hours | Lecturers
---|---|---|---|---|---
261-5110-00L | Optimization for Data Science | W | 10 credits | 3V + 2U + 4A | B. Gärtner, N. He
Abstract | This course provides an in-depth theoretical treatment of optimization methods that are relevant in data science.
Learning objective | Understanding the guarantees and limits of relevant optimization methods used in data science. Learning theoretical paradigms and techniques to deal with optimization problems arising in data science.
Content | This course provides an in-depth theoretical treatment of classical and modern optimization methods that are relevant in data science. After a general discussion of the role optimization plays in learning from data, we give an introduction to the theory of (convex) optimization. Based on this, we present and analyze algorithms in the following four categories: first-order methods (gradient and coordinate descent, Frank-Wolfe, subgradient and mirror descent, stochastic and incremental gradient methods); second-order methods (Newton and quasi-Newton methods); non-convexity (local convergence, provable global convergence, cone programming, convex relaxations); and min-max optimization (extragradient methods). The emphasis is on the motivations and design principles behind the algorithms, on provable performance bounds, and on the mathematical tools and techniques to prove them. The goal is to equip students with a fundamental understanding of why optimization algorithms work and what their limits are. This understanding helps in selecting suitable algorithms for a given application, but providing concrete practical guidance is not our focus.
Prerequisites / Notice | A solid background in analysis and linear algebra; some background in theoretical computer science (computational complexity, analysis of algorithms); the ability to understand and write mathematical proofs.
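As an illustration of the first-order methods named in the course content above (not part of the official catalogue entry), the following is a minimal sketch of gradient descent with the standard 1/L step size on a least-squares objective; the matrix, vector, and iteration count are hypothetical.

```python
import numpy as np

# Minimal sketch: gradient descent on f(x) = 0.5 * ||Ax - b||^2.
# The fixed step size 1/L uses the smoothness constant
# L = lambda_max(A^T A). All data below is hypothetical.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))  # hypothetical data matrix
b = rng.standard_normal(50)        # hypothetical targets

L = np.linalg.eigvalsh(A.T @ A).max()  # smoothness constant of f
x = np.zeros(10)                       # starting point

for _ in range(500):
    grad = A.T @ (A @ x - b)  # gradient of the least-squares objective
    x -= grad / L             # one gradient-descent step

print("final objective:", 0.5 * np.linalg.norm(A @ x - b) ** 2)
```

For an L-smooth convex objective, this fixed 1/L step size is the textbook choice whose O(1/t) bound on the objective gap is among the performance guarantees the course analyzes.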
Number | Title | Type | ECTS | Hours | Lecturers
---|---|---|---|---|---
263-3710-00L | Machine Perception | W | 8 credits | 3V + 2U + 2A | O. Hilliges, J. Song
Abstract | Recent developments in neural networks have drastically advanced the performance of machine perception systems in a variety of areas including computer vision, robotics, and human shape modeling. This course is a deep dive into deep learning algorithms and architectures with applications to a variety of perceptual and generative tasks.
Learning objective | Students will learn about fundamental aspects of modern deep learning approaches for perception and generation. Students will learn to implement, train, and debug their own neural networks and gain a detailed understanding of cutting-edge research in learning-based computer vision, robotics, and shape modeling. The optional final project assignment will involve training a complex neural network architecture and applying it to a real-world dataset. The core competency acquired through this course is a solid foundation in deep-learning algorithms to process and interpret human-centric signals. In particular, students should be able to develop systems that deal with recognizing people in images, detecting and describing body parts, inferring their spatial configuration, and performing action/gesture recognition from still images or image sequences, also considering multi-modal data, among others.
Content | We will focus on teaching: how to set up the problem of machine perception, the learning algorithms, network architectures, and advanced deep learning concepts, in particular probabilistic deep learning models. The course covers the following main areas: I) Foundations of deep learning. II) Advanced topics such as probabilistic generative modeling of data (latent variable models, generative adversarial networks, auto-regressive models, invertible neural networks, diffusion models). III) Deep learning in computer vision, human-computer interaction, and robotics. Specific topics include:
I) Introduction to Deep Learning: a) Neural networks and training (backpropagation) b) Feedforward networks c) Time-series modelling (RNN, GRU, LSTM) d) Convolutional neural networks
II) Advanced topics: a) Latent variable models (VAEs) b) Generative adversarial networks (GANs) c) Autoregressive models (PixelCNN, PixelRNN, TCN, Transformer) d) Invertible neural networks / normalizing flows e) Coordinate-based networks (neural implicit surfaces, NeRF) f) Diffusion models
III) Applications in machine perception and computer vision: a) Fully convolutional architectures for dense per-pixel tasks (e.g., instance segmentation) b) Pose estimation and other tasks involving human activity c) Neural shape modeling (implicit surfaces, neural radiance fields) d) Deep reinforcement learning and applications in physics-based behavior modeling
Literature | Deep Learning by Ian Goodfellow, Yoshua Bengio, and Aaron Courville (MIT Press)
Prerequisites / Notice | This is an advanced graduate-level course that requires a background in machine learning. Students are expected to have a solid mathematical foundation, in particular in linear algebra, multivariate calculus, and probability. The course will focus on state-of-the-art research in deep learning and will not repeat the basics of machine learning. Please take note of the following conditions:
1) Students must have taken the exam in Machine Learning (252-0535-00) or have acquired equivalent knowledge.
2) All practical exercises will require basic knowledge of Python and will use libraries such as PyTorch, scikit-learn, and scikit-image. We will provide introductions to PyTorch and the other libraries that are needed, but will not provide introductions to basic programming or Python.
The following courses are strongly recommended as prerequisites:
* "Visual Computing" or "Computer Vision"
The course will be assessed by a final written examination in English. No course materials or electronic devices may be used during the examination. Note that the examination will be based on the contents of the lectures, the associated reading materials, and the exercises. The exam will be a 3-hour end-of-term exam and take place at the end of the teaching period.
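Since the exercises assume basic Python and PyTorch knowledge, here is a minimal sketch, not taken from the course, of the kind of building block listed under the deep-learning foundations above: a small feedforward network trained via backpropagation on random data. The layer sizes, loss, and hyperparameters are made up for illustration.

```python
import torch
import torch.nn as nn

# Minimal sketch: a small feedforward network trained with
# backpropagation on random data. Purely illustrative; shapes,
# loss, and hyperparameters are hypothetical.
model = nn.Sequential(
    nn.Linear(16, 32),  # input features -> hidden layer
    nn.ReLU(),
    nn.Linear(32, 1),   # hidden layer -> scalar prediction
)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

x = torch.randn(128, 16)  # hypothetical inputs
y = torch.randn(128, 1)   # hypothetical targets

for step in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)  # forward pass and loss
    loss.backward()              # backpropagation
    optimizer.step()             # gradient update

print("final training loss:", loss.item())
```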