Otmar Hilliges: Catalogue data in Spring Semester 2021

Name: Prof. Dr. Otmar Hilliges
Name variants: Otmar Hilliges
Field: Computer Science
Address: Professur für Informatik
ETH Zürich, STD H 24
Stampfenbachstrasse 48
8092 Zürich
Telephone: +41 44 632 39 56
Department: Computer Science
Relationship: Full Professor

263-3710-00L Machine Perception (registration restricted)
Number of participants limited to 200.
8 credits; 3V + 2U + 2A; O. Hilliges, S. Tang
Abstract: Recent developments in neural networks (aka "deep learning") have drastically advanced the performance of machine perception systems in a variety of areas including computer vision, robotics, and intelligent UIs. This course is a deep dive into deep learning algorithms and architectures with applications to a variety of perceptual tasks.
Objective: Students will learn about fundamental aspects of modern deep learning approaches for perception. Students will learn to implement, train, and debug their own neural networks and gain a detailed understanding of cutting-edge research in learning-based computer vision, robotics, and HCI. The final project assignment will involve training a complex neural network architecture and applying it to a real-world dataset of human activity.

The core competency acquired through this course is a solid foundation in deep-learning algorithms to process and interpret human input into computing systems. In particular, students should be able to develop systems that deal with the problem of recognizing people in images, detecting and describing body parts, inferring their spatial configuration, performing action/gesture recognition from still images or image sequences, also considering multi-modal data, among others.
Content: We will focus on how to formulate machine perception problems, the relevant learning algorithms, network architectures, and advanced deep learning concepts, in particular probabilistic deep learning models.

The course covers the following main areas:
I) Foundations of deep-learning.
II) Probabilistic deep-learning for generative modelling of data (latent variable models, generative adversarial networks and auto-regressive models).
III) Deep learning in computer vision, human-computer interaction and robotics.

Specific topics include: 
I) Deep learning basics:
a) Neural Networks and training (i.e., backpropagation)
b) Feedforward Networks
c) Timeseries modelling (RNN, GRU, LSTM)
d) Convolutional Neural Networks for classification
II) Probabilistic Deep Learning:
a) Latent variable models (VAEs)
b) Generative adversarial networks (GANs)
c) Autoregressive models (PixelCNN, PixelRNN, TCNs)
III) Deep Learning techniques for machine perception:
a) Fully convolutional architectures for dense per-pixel tasks (e.g., instance segmentation)
b) Pose estimation and other tasks involving human activity
c) Deep reinforcement learning
IV) Case studies from research in computer vision, HCI, robotics and signal processing
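As a flavour of topic I a), backpropagation can be sketched at its smallest scale. The following is a toy, stdlib-only illustration (not course material; all names and values are invented): gradient descent on a single sigmoid neuron, applying the chain rule through loss, activation, and linear layer by hand.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_neuron(data, w=0.0, b=0.0, lr=0.5, epochs=200):
    """Fit y ~ sigmoid(w*x + b) by minimizing squared error with SGD."""
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(w * x + b)
            # Backpropagation: chain rule through loss -> sigmoid -> linear
            grad_p = 2.0 * (p - y)           # dL/dp for L = (p - y)^2
            grad_z = grad_p * p * (1.0 - p)  # multiply by dp/dz of the sigmoid
            w -= lr * grad_z * x             # dz/dw = x
            b -= lr * grad_z                 # dz/db = 1
    return w, b

# Toy dataset: label 1 for positive inputs, 0 for negative ones
data = [(-2.0, 0.0), (-1.0, 0.0), (1.0, 1.0), (2.0, 1.0)]
w, b = train_neuron(data)
```

In the course itself, frameworks such as PyTorch compute these gradients automatically; the exercises build the same machinery up to full network architectures.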
Literature: Deep Learning. Ian Goodfellow, Yoshua Bengio, and Aaron Courville. MIT Press, 2016.
Prerequisites / Notice: ***
In accordance with the ETH Covid-19 master plan, the lecture will be fully virtual. Details on the course website.

This is an advanced grad-level course that requires a background in machine learning. Students are expected to have a solid mathematical foundation, in particular in linear algebra, multivariate calculus, and probability. The course will focus on state-of-the-art research in deep learning and will not repeat the basics of machine learning.

Please take note of the following conditions:
1) The number of participants is limited to 200 students (MSc and PhDs).
2) Students must have taken the exam in Machine Learning (252-0535-00) or have acquired equivalent knowledge.
3) All practical exercises will require basic knowledge of Python and will use libraries such as PyTorch, scikit-learn, and scikit-image. We will provide introductions to PyTorch and other required libraries, but will not provide introductions to basic programming or Python.

The following courses are strongly recommended as prerequisite:
* "Visual Computing" or "Computer Vision"

The course will be assessed by a final written examination in English. No course materials or electronic devices may be used during the examination. Note that the examination will be based on the contents of the lectures, the associated reading materials, and the exercises.
263-3712-00L Advanced Seminar on Computational Haptics (registration restricted)
Number of participants limited to 14.

The deadline for deregistering expires at the end of the second week of the semester. Students who are still registered after that date, but do not attend the seminar, will officially fail the seminar.
2 credits; 2S; O. Hilliges
Abstract: Haptic rendering technologies stimulate the user's senses of touch and motion just as felt when interacting with physical objects. Actuation techniques need to address three questions: 1) what to actuate, 2) how to actuate it, and 3) when to actuate it. We will approach each of these questions from a strongly technical perspective, with a focus on optimization and machine learning.
Objective: The goal of the seminar is to familiarize students with exciting new research topics in this important area, and also to teach basic scientific writing and oral presentation skills.
Content: Haptic rendering uses technology to stimulate the senses of touch and motion that a user would feel when interacting directly with physical objects. This usually involves hardware capable of delivering these sensations. Three questions arise here: 1) what to actuate, 2) how to actuate it, and 3) when to actuate it. We will approach these questions from a technical perspective, usually with an optimization or machine learning focus. Papers from scientific venues such as CHI, UIST, and SIGGRAPH that (partially) answer these questions will be examined in depth. Students present and discuss the papers to extract techniques and insights that can be applied to software and hardware projects. Topics revolve around computational design, sensor placement, user state inference (through machine learning), and actuation as an optimization problem.
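To make "actuation as an optimization problem" concrete, here is a toy, hypothetical sketch (not from the seminar's reading list; the actuator model and all parameters are invented for illustration): choose an actuator intensity a so that the rendered stimulus s(a) matches a target, by minimizing squared error with plain gradient descent.

```python
def rendered_stimulus(a, gain=0.8, offset=0.1):
    # Toy linear actuator model; a real model would come from
    # simulation or a learned forward model of the device and skin.
    return gain * a + offset

def solve_actuation(target, a=0.0, lr=0.1, steps=500, gain=0.8, offset=0.1):
    """Minimize (s(a) - target)^2 over the actuation parameter a."""
    for _ in range(steps):
        error = rendered_stimulus(a, gain, offset) - target
        # Gradient of (s(a) - target)^2 with respect to a is 2 * error * gain
        a -= lr * 2.0 * error * gain
    return a

a_opt = solve_actuation(target=0.5)
```

The papers discussed in the seminar replace this one-dimensional toy with high-dimensional design spaces and physically based or learned forward models, but the framing is the same.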

The seminar will have a different structure from regular seminars to encourage more discussion and a deeper learning experience. We will use a case-study format where all students read the same paper each week but fulfill different roles and hence prepare with different viewpoints in mind ("Presenter", "Historian", "PhD", and "Journalist").

The final deliverables include:
* a 20-minute presentation as the Presenter
* a 5-minute presentation as the Historian
* a 1-page (A4) research proposal as the PhD
* a 1-page (A4) summary of the discussion as the Journalist

Example papers are:
* Tactile Rendering Based on Skin Stress Optimization (http://mslab.es/projects/TactileRenderingSkinStress/), SIGGRAPH 2020
* SimuLearn: Fast and Accurate Simulator to Support Morphing Materials Design and Workflows (https://dl.acm.org/doi/10.1145/3379337.3415867), UIST 2020
* Fabrication-in-the-Loop Co-Optimization of Surfaces and Styli for Drawing Haptics (https://www.pdf.inf.usi.ch/projects/SurfaceStylusCoOpt/index.html), SIGGRAPH 2020

For each topic, a paper will be chosen that represents the state of the art of research or seminal work that inspired and fostered future work. Students will learn how to incorporate computational methods into systems that involve software, hardware, and, very importantly, users.
Literature: Computational Interaction, edited by Antti Oulasvirta, Per Ola Kristensson, Xiaojun Bi, and Andrew Howes, 2018. PDF freely available through the ETH network.