Jie Song: Catalogue data in Spring Semester 2023

Name Dr. Jie Song
Address
Professorship of Computer Science
ETH Zürich, STD G 23
Stampfenbachstrasse 48
8092 Zürich
SWITZERLAND
E-mail jsong@inf.ethz.ch
Department Computer Science
Relationship Lecturer

Number: 263-3710-00L
Title: Machine Perception (restricted registration)
ECTS: 8 credits
Hours: 3V + 2U + 2A
Lecturers: O. Hilliges, J. Song
Abstract
Recent developments in neural networks have drastically advanced the performance of machine perception systems in a variety of areas, including computer vision, robotics, and human shape modeling. This course is a deep dive into deep learning algorithms and architectures with applications to a variety of perceptual and generative tasks.
Learning objective
Students will learn about fundamental aspects of modern deep learning approaches for perception and generation. Students will learn to implement, train and debug their own neural networks and gain a detailed understanding of cutting-edge research in learning-based computer vision, robotics, and shape modeling. The optional final project assignment will involve training a complex neural network architecture and applying it to a real-world dataset.

The core competency acquired through this course is a solid foundation in deep-learning algorithms for processing and interpreting human-centric signals. In particular, students should be able to develop systems that recognize people in images, detect and describe body parts, infer their spatial configuration, and perform action/gesture recognition from still images or image sequences, possibly using multi-modal data, among other tasks.
Content
We will focus on teaching how to set up the problem of machine perception, learning algorithms, network architectures, and advanced deep learning concepts, in particular probabilistic deep learning models. A short illustrative code sketch follows the topic list below.

The course covers the following main areas:
I) Foundations of deep learning.
II) Advanced topics like probabilistic generative modeling of data (latent variable models, generative adversarial networks, auto-regressive models, invertible neural networks, diffusion models).
III) Deep learning in computer vision, human-computer interaction, and robotics.

Specific topics include:
I) Introduction to Deep Learning:
a) Neural Networks and training (i.e., backpropagation)
b) Feedforward Networks
c) Time-series modelling (RNN, GRU, LSTM)
d) Convolutional Neural Networks
II) Advanced topics:
a) Latent variable models (VAEs)
b) Generative adversarial networks (GANs)
c) Autoregressive models (PixelCNN, PixelRNN, TCN, Transformer)
d) Invertible Neural Networks / Normalizing Flows
e) Coordinate-based networks (neural implicit surfaces, NeRF)
f) Diffusion models
III) Applications in machine perception and computer vision:
a) Fully Convolutional architectures for dense per-pixel tasks (e.g., instance segmentation)
b) Pose estimation and other tasks involving human activity
c) Neural shape modeling (implicit surfaces, neural radiance fields)
d) Deep Reinforcement Learning and Applications in Physics-Based Behavior Modeling
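
To give a concrete flavour of the material in part I, the following minimal sketch trains a small feedforward network with backpropagation in PyTorch, the library used in the exercises. It is not taken from the course; all data, layer sizes, and hyperparameters are invented for illustration.

    # Minimal illustrative sketch (not course material): a small feedforward network
    # trained with backpropagation in PyTorch. Data and hyperparameters are made up.
    import torch
    import torch.nn as nn

    x = torch.randn(64, 16)            # toy inputs: 64 samples, 16 features
    y = torch.randint(0, 2, (64,))     # toy binary labels

    model = nn.Sequential(             # two-layer feedforward (fully connected) network
        nn.Linear(16, 32),
        nn.ReLU(),
        nn.Linear(32, 2),
    )
    loss_fn = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    for step in range(100):
        optimizer.zero_grad()          # clear gradients from the previous step
        loss = loss_fn(model(x), y)    # forward pass and loss computation
        loss.backward()                # backpropagation: compute gradients
        optimizer.step()               # gradient-descent parameter update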
Literature
Deep Learning, by Ian Goodfellow, Yoshua Bengio, and Aaron Courville (MIT Press, 2016).
Prerequisites / Notice
This is an advanced graduate-level course that requires a background in machine learning. Students are expected to have a solid mathematical foundation, in particular in linear algebra, multivariate calculus, and probability. The course will focus on state-of-the-art research in deep learning and will not repeat the basics of machine learning.

Please take note of the following conditions:
1) Students must have taken the exam in Machine Learning (252-0535-00) or have acquired equivalent knowledge.
2) All practical exercises will require basic knowledge of Python and will use libraries such as PyTorch, scikit-learn, and scikit-image. We will provide introductions to PyTorch and other libraries that are needed but will not provide introductions to basic programming or Python. A rough illustration of this library stack follows below.
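
As a hypothetical illustration of how these libraries fit together (this is not an actual course exercise), one might load a small dataset with scikit-learn and convert it to PyTorch tensors like so:

    # Hypothetical example only: combining scikit-learn data utilities with PyTorch tensors.
    import torch
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split

    digits = load_digits()                       # small 8x8 digit images shipped with scikit-learn
    x_train, x_test, y_train, y_test = train_test_split(
        digits.data, digits.target, test_size=0.2, random_state=0
    )

    # Convert the NumPy arrays to PyTorch tensors before feeding them to a network.
    x_train = torch.tensor(x_train, dtype=torch.float32)
    y_train = torch.tensor(y_train, dtype=torch.long)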

The following courses are strongly recommended as prerequisites:
* "Visual Computing" or "Computer Vision"

The course will be assessed by a final written examination in English. No course materials or electronic devices may be used during the examination. Note that the examination will be based on the contents of the lectures, the associated reading materials, and the exercises.

The exam will be a 3-hour end-of-term exam and take place at the end of the teaching period.
Competencies
Subject-specific Competencies
  Concepts and Theories: assessed
  Techniques and Technologies: assessed
Method-specific Competencies
  Analytical Competencies: assessed
  Problem-solving: assessed
  Project Management: assessed
Social Competencies
  Communication: fostered
  Cooperation and Teamwork: fostered
  Leadership and Responsibility: fostered
  Self-presentation and Social Influence: fostered
Personal Competencies
  Adaptability and Flexibility: fostered
  Creative Thinking: fostered
  Critical Thinking: fostered
  Integrity and Work Ethics: fostered
  Self-direction and Self-management: fostered