Search result: Catalogue data in Spring Semester 2024

Computer Science Master
Minors
Minor in Computer Graphics
Number | Title | Type | ECTS | Hours | Lecturers
252-0538-00L | Shape Modeling and Geometry Processing | W | 8 credits | 2V + 1U + 4A | O. Sorkine Hornung
Abstract: This course covers the fundamentals and developments in geometric modeling and geometry processing. Topics include surface modeling based on point clouds and polygonal meshes, mesh generation, surface reconstruction, mesh fairing and parameterization, discrete differential geometry, interactive shape editing, and topics in digital shape fabrication.
Learning objective: The students will learn how to design, program and analyze algorithms and systems for interactive 3D shape modeling and geometry processing.
Content: Recent advances in 3D geometry processing have created a plenitude of novel concepts for the mathematical representation and interactive manipulation of geometric models. This course covers the fundamentals and some of the developments in geometric modeling and geometry processing. Topics include surface modeling based on point clouds and triangle meshes, mesh generation, surface reconstruction, mesh fairing and parameterization, discrete differential geometry, interactive shape editing and digital shape fabrication.
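For orientation, one staple of the discrete differential geometry listed above is the cotangent Laplacian on a triangle mesh; this is a standard textbook formulation, not necessarily the exact convention used in the course. For a function f defined at the vertices,

\[
(\Delta f)_i \;=\; \frac{1}{2A_i} \sum_{j \in N(i)} \big(\cot \alpha_{ij} + \cot \beta_{ij}\big)\,(f_j - f_i),
\]

where N(i) are the vertices adjacent to vertex i, \alpha_{ij} and \beta_{ij} are the two angles opposite edge (i,j), and A_i is a local (e.g., mixed Voronoi) area. This operator underlies mesh fairing and parameterization.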
Lecture notes: Slides and course notes
Prerequisites / Notice: Visual Computing, Computer Graphics, or an equivalent class. Experience with C++ programming. Solid background in linear algebra and analysis. Some knowledge of differential geometry, computational geometry, and numerical methods is helpful but not a strict requirement.
Competencies
Subject-specific Competencies: Concepts and Theories (assessed); Techniques and Technologies (assessed)
Method-specific Competencies: Problem-solving (assessed)
263-5704-00L | Artificial Intelligence for Digital Characters | W | 4 credits | 2V + 1A | R. Wampfler
Abstract: This lecture provides an overview of techniques to build conversational digital characters.
Learning objective: This lecture provides an overview of techniques to build conversational digital characters. Three main components of conversational digital characters are introduced: chatbots, animation synthesis, and speech synthesis. Real-life applications of such digital characters are demonstrated on different use cases (e.g., digital Einstein).
Content: This lecture opens with the basics of digital characters. Afterwards, the main components used to build a conversational digital character are introduced. These include the basics of natural language processing used to build a chatbot, animating a digital character based on motion capture and deep learning, and synthesizing speech. Further, autonomous agents based on knowledge graphs are covered. The lecture ends with real-life applications of digital characters.
Literature: Lecture slides will be made available at the course website.
Prerequisites / Notice: There are no prerequisites for this class. However, it will help if the student has taken an undergraduate or graduate level class in statistics or machine learning.
Competencies
Subject-specific Competencies: Concepts and Theories (assessed); Techniques and Technologies (assessed)
252-5706-00L | Mathematical Foundations of Computer Graphics and Vision | W | 5 credits | 2V + 1U + 1A | T. Aydin, A. Djelouah, B. Gözcü
Abstract: This course presents the fundamental mathematical tools and concepts used in computer graphics and vision. Each theoretical topic is introduced in the context of practical vision or graphics problems, showcasing its importance in real-world applications.
Learning objective: The main goal is to equip the students with the key mathematical tools necessary to understand state-of-the-art algorithms in vision and graphics. In addition to the theoretical part, the students will learn how to use these mathematical tools to solve a wide range of practical problems in visual computing. After successfully completing this course, the students will be able to apply these mathematical concepts and tools to practical industrial and academic projects in visual computing.
Content: The theory behind various mathematical concepts and tools will be introduced, and their practical utility will be showcased in diverse applications in computer graphics and vision. The course will cover topics in sampling, reconstruction, approximation, optimization, robust fitting, differentiation, quadrature and spectral methods. Applications will include 3D surface reconstruction, camera pose estimation, image editing, data projection, character animation, structure-aware geometry processing, and rendering.
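As a small worked example of the optimization and robust-fitting toolkit above, consider linear least squares; this is a standard formulation, not material quoted from the course. Given A in R^{m x n} and b in R^m,

\[
\hat{x} \;=\; \arg\min_{x} \|Ax - b\|_2^2,
\qquad A^\top A\,\hat{x} = A^\top b,
\]

so \hat{x} = (A^\top A)^{-1} A^\top b whenever A has full column rank; robust fitting replaces the squared loss with one that downweights outliers.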
263-5806-00L | Digital Humans | W | 8 credits | 3V + 2U + 2A | S. Tang
Abstract: This course covers the core technologies required to model and simulate motions for digital humans. The curriculum includes human body modeling, human motion capture, data-driven human motion synthesis, and ML-based generative models. Each topic will be extensively illustrated with examples to provide a comprehensive understanding of the subject matter.
Learning objective: Students will learn how to estimate human pose, shape, and motion from videos and create basic human avatars from various visual inputs. Students will also learn how to represent and algorithmically generate motions for digital characters. The lectures are accompanied by exercise sessions and a capstone project.
Content:
- Basic concepts of 3D representations
- Human body/hand models
- Human motion capture
- Neural rendering
- Transformers
- Generative models for digital humans
Prerequisites / Notice: Experience with Python and C++ programming, numerical linear algebra, multivariate calculus, and probability theory. A solid background in deep learning, computer vision, physics-based modeling, kinematics, and dynamics is preferred.
Competencies
Techniques and Technologies (assessed)
Minor in Computer Vision
Number | Title | Type | ECTS | Hours | Lecturers
252-0579-00L | 3D Vision | W | 5 credits | 3G + 1A | M. Pollefeys, D. B. Baráth
Abstract: The course covers camera models and calibration, feature tracking and matching, camera motion estimation via simultaneous localization and mapping (SLAM) and visual odometry (VO), epipolar and multi-view geometry, structure-from-motion, (multi-view) stereo, augmented reality, and image-based (re-)localization.
Learning objective: After attending this course, students will:
1. understand the core concepts for recovering 3D shape of objects and scenes from images and video.
2. be able to implement basic systems for vision-based robotics and simple virtual/augmented reality applications.
3. have a good overview of the current state of the art in 3D vision.
4. be able to critically analyze and assess current research in this area.
Content: The goal of this course is to teach the core techniques required for robotic and augmented reality applications: how to determine the motion of a camera and how to estimate the absolute position and orientation of a camera in the real world. This course will introduce the basic concepts of 3D vision in the form of short lectures, followed by student presentations discussing the current state of the art. The main focus of this course is student projects on 3D vision topics, with an emphasis on robotic vision and virtual and augmented reality applications.
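For reference, the two-view geometry underlying camera motion estimation can be summarized by the epipolar constraint; the convention below is the standard one and may differ from the course's notation. For corresponding points \hat{x}_1, \hat{x}_2 in normalized camera coordinates,

\[
\hat{x}_2^\top E\, \hat{x}_1 = 0, \qquad E = [t]_\times R,
\]

where R and t are the relative rotation and translation between the two views and [t]_\times is the skew-symmetric matrix of t. Estimating E from point matches and decomposing it is one classical route to relative camera pose.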
Competencies
Subject-specific Competencies: Concepts and Theories (fostered); Techniques and Technologies (fostered)
Method-specific Competencies: Analytical Competencies (fostered); Decision-making (fostered); Media and Digital Technologies (fostered); Problem-solving (fostered); Project Management (fostered)
Social Competencies: Communication (assessed); Cooperation and Teamwork (assessed)
Personal Competencies: Creative Thinking (fostered); Self-direction and Self-management (fostered)
263-3710-00L | Machine Perception (restricted registration) | W | 8 credits | 3V + 2U + 2A | J. Song, X. Chen, S. Coros, O. Hilliges, M. Kaufmann
Abstract: Recent developments in neural networks have drastically advanced the performance of machine perception systems in a variety of areas including computer vision, robotics, and human shape modeling.
This course is a deep dive into deep learning algorithms and architectures with applications to a variety of perceptual and generative tasks.
Learning objective: Students will learn about fundamental aspects of modern deep learning approaches for perception and generation. Students will learn to implement, train and debug their own neural networks and gain a detailed understanding of cutting-edge research in learning-based computer vision, robotics, and shape modeling. The optional final project assignment will involve training a complex neural network architecture and applying it to a real-world dataset.

The core competency acquired through this course is a solid foundation in deep-learning algorithms to process and interpret human-centric signals. In particular, students should be able to develop systems that deal with the problem of recognizing people in images, detecting and describing body parts, inferring their spatial configuration, performing action/gesture recognition from still images or image sequences, also considering multi-modal data, among others.
Content: We will focus on teaching how to set up the problem of machine perception, learning algorithms, network architectures, and advanced deep learning concepts, in particular probabilistic deep learning models.

The course covers the following main areas:
I) Foundations of deep learning.
II) Advanced topics like probabilistic generative modeling of data (latent variable models, generative adversarial networks, auto-regressive models, invertible neural networks, diffusion models).
III) Deep learning in computer vision, human-computer interaction, and robotics.

Specific topics include:
I) Introduction to Deep Learning:
a) Neural Networks and training (i.e., backpropagation); see the sketch after this topic list
b) Feedforward Networks
c) Timeseries modelling (RNN, GRU, LSTM)
d) Convolutional Neural Networks
II) Advanced topics:
a) Latent variable models (VAEs)
b) Generative adversarial networks (GANs)
c) Autoregressive models (PixelCNN, PixelRNN, TCN, Transformer)
d) Invertible Neural Networks / Normalizing Flows
e) Coordinate-based networks (neural implicit surfaces, NeRF)
f) Diffusion models
III) Applications in machine perception and computer vision:
a) Fully Convolutional architectures for dense per-pixel tasks (e.g., instance segmentation)
b) Pose estimation and other tasks involving human activity
c) Neural shape modeling (implicit surfaces, neural radiance fields)
d) Deep Reinforcement Learning and Applications in Physics-Based Behavior Modeling
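The following is a minimal, illustrative PyTorch sketch of topics I.a and I.b above (a feedforward network trained by backpropagation); the data, layer sizes, and hyperparameters are made up for illustration and are not taken from the course material.

import torch
import torch.nn as nn

# Toy regression data: 64 samples with 10 features (illustrative only).
x = torch.randn(64, 10)
y = torch.randn(64, 1)

# A small feedforward network (topic I.b).
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

for step in range(100):
    optimizer.zero_grad()        # clear accumulated gradients
    loss = loss_fn(model(x), y)  # forward pass
    loss.backward()              # backpropagation (topic I.a)
    optimizer.step()             # gradient-descent update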
Literature: Deep Learning, by Ian Goodfellow, Yoshua Bengio, and Aaron Courville (MIT Press, 2016).
Prerequisites / Notice: This is an advanced graduate-level course that requires a background in machine learning. Students are expected to have a solid mathematical foundation, in particular in linear algebra, multivariate calculus, and probability. The course will focus on state-of-the-art research in deep learning and will not repeat the basics of machine learning.

Please take note of the following conditions:
1) Students must have taken the exam in Machine Learning (252-0535-00) or have acquired equivalent knowledge.
2) All practical exercises will require basic knowledge of Python and will use libraries such as PyTorch, scikit-learn, and scikit-image. We will provide introductions to PyTorch and other libraries that are needed but will not provide introductions to basic programming or Python.

The following courses are strongly recommended as prerequisites:
* "Visual Computing" or "Computer Vision"

The course will be assessed by a final written examination in English. No course materials or electronic devices can be used during the examination. Note that the examination will be based on the contents of the lectures, the associated reading materials, and the exercises.

The exam will be a 3-hour end-of-term exam and take place at the end of the teaching period.
Competencies
Subject-specific Competencies: Concepts and Theories (assessed); Techniques and Technologies (assessed)
Method-specific Competencies: Analytical Competencies (assessed); Problem-solving (assessed); Project Management (fostered)
Social Competencies: Communication (fostered); Cooperation and Teamwork (fostered); Leadership and Responsibility (fostered); Self-presentation and Social Influence (fostered)
Personal Competencies: Adaptability and Flexibility (fostered); Creative Thinking (fostered); Critical Thinking (fostered); Integrity and Work Ethics (fostered); Self-direction and Self-management (fostered)
263-5052-00L | Interactive Machine Learning: Visualization & Explainability (restricted registration) | W | 5 credits | 3G + 1A | M. El-Assady
Abstract: Visual analytics supports the design of human-in-the-loop interfaces that enable human-machine collaboration. In this course, we will go through the fundamentals of designing interactive visualizations and later apply them to explain and interact with machine learning models.
Learning objective: The goal of the course is to introduce techniques for interactive information visualization and to apply them to understanding, diagnosing, and refining machine learning models.
Content: Interactive, mixed-initiative machine learning promises to combine the efficiency of automation with the effectiveness of humans for a collaborative decision-making and problem-solving process. This can be facilitated through co-adaptive visual interfaces.

This course will first introduce the foundations of information visualization design based on data characteristics, e.g., high-dimensional, geo-spatial, relational, temporal, and textual data.

Second, we will discuss interaction techniques and explanation strategies to enable explainable machine learning with the tasks of understanding, diagnosing, and refining machine learning models.

Tentative list of topics:
1. Visualization and Perception
2. Interaction and Explanation
3. Systems Overview
Lecture notes: Course material will be provided in the form of slides.
Literature: Will be provided during the course.
Prerequisites / Notice: Basic understanding of machine learning as taught at the Bachelor's level.
Minor in Data Management
Number | Title | Type | ECTS | Hours | Lecturers
227-0558-00L | Principles of Distributed Computing | W | 7 credits | 2V + 2U + 2A | R. Wattenhofer
Abstract: We study the fundamental issues underlying the design of distributed systems: communication, coordination, fault-tolerance, locality, parallelism, self-organization, symmetry breaking, synchronization, uncertainty. We explore essential algorithmic ideas and lower bound techniques.
Learning objective: Distributed computing is essential in modern computing and communications systems. Examples are on the one hand large-scale networks such as the Internet, and on the other hand multiprocessors such as your new multi-core laptop. This course introduces the principles of distributed computing, emphasizing the fundamental issues underlying the design of distributed systems and networks: communication, coordination, fault-tolerance, locality, parallelism, self-organization, symmetry breaking, synchronization, uncertainty. We explore essential algorithmic ideas and lower bound techniques, basically the "pearls" of distributed computing. We will cover a fresh topic every week.
Content: Distributed computing models and paradigms, e.g., message passing, shared memory, synchronous vs. asynchronous systems, time and message complexity, peer-to-peer systems, small-world networks, social networks, sorting networks, wireless communication, and self-organizing systems.

Distributed algorithms, e.g., leader election, coloring, covering, packing, decomposition, spanning trees, mutual exclusion, store and collect, arrow, ivy, synchronizers, diameter, all-pairs-shortest-path, wake-up, and lower bounds.
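To give a flavor of the algorithms listed above, here is a toy simulation of leader election on a synchronous ring: each node forwards the largest identifier it has seen to its clockwise neighbour, and after n rounds the node holding the ring-wide maximum declares itself leader. This is an illustrative sketch under simplified assumptions, not code from the course.

# Toy synchronous-ring leader election by flooding the maximum identifier.
def ring_leader_election(ids):
    n = len(ids)
    known = list(ids)  # largest id each node has seen so far
    for _ in range(n):  # n synchronous rounds
        incoming = [known[(i - 1) % n] for i in range(n)]  # message from the left neighbour
        known = [max(known[i], incoming[i]) for i in range(n)]
    # a node is leader if its own id equals the maximum it has learned
    return [i for i in range(n) if ids[i] == known[i]]

print(ring_leader_election([3, 7, 2, 9, 4]))  # -> [3], the node holding id 9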
Lecture notes: Available.
Literature: Lecture notes by Roger Wattenhofer. These lecture notes are used at about a dozen different universities throughout the world.

Mastering Distributed Algorithms
Roger Wattenhofer
Inverted Forest Publishing, 2020. ISBN 979-8628688267

Distributed Computing: Fundamentals, Simulations and Advanced Topics
Hagit Attiya, Jennifer Welch.
McGraw-Hill Publishing, 1998, ISBN 0-07-709352-6

Introduction to Algorithms
Thomas Cormen, Charles Leiserson, Ronald Rivest.
The MIT Press, 1998, ISBN 0-262-53091-0 or 0-262-03141-8

Dissemination of Information in Communication Networks
Juraj Hromkovic, Ralf Klasing, Andrzej Pelc, Peter Ruzicka, Walter Unger.
Springer-Verlag, Berlin Heidelberg, 2005, ISBN 3-540-00846-2

Introduction to Parallel Algorithms and Architectures: Arrays, Trees, Hypercubes
Frank Thomson Leighton.
Morgan Kaufmann Publishers Inc., San Francisco, CA, 1991, ISBN 1-55860-117-1

Distributed Computing: A Locality-Sensitive Approach
David Peleg.
Society for Industrial and Applied Mathematics (SIAM), 2000, ISBN 0-89871-464-8
Prerequisites / Notice: Course prerequisites: Interest in algorithmic problems. (No particular course needed.)
Competencies
Subject-specific Competencies: Concepts and Theories (assessed); Techniques and Technologies (assessed)
Method-specific Competencies: Analytical Competencies (assessed); Decision-making (assessed); Problem-solving (assessed)
263-3855-00L | Cloud Computing Architecture | W | 9 credits | 3V + 2U + 3A | A. Klimovic
Abstract: Cloud computing hosts a wide variety of online services that we use on a daily basis, including web search, social networks, and video streaming. This course will cover how datacenter hardware, systems software, and application frameworks are designed for the cloud.
Learning objective: After successful completion of this course, students will be able to: 1) reason about performance, energy efficiency, and availability tradeoffs in the design of cloud system software, 2) describe how datacenter hardware is organized and explain why it is organized as such, 3) implement cloud applications as well as analyze and optimize their performance.
Content: In this course, we study how datacenter hardware, systems software, and applications are designed at large scale for the cloud. The course covers topics including server design, cluster management, large-scale storage systems, serverless computing, data analytics frameworks, and performance analysis.
Lecture notes: Lecture slides will be available on the course website.
Prerequisites / Notice: Undergraduate courses in 1) computer architecture and 2) operating systems, distributed systems, and/or database systems are strongly recommended.
263-5354-00L | Large Language Models | W | 8 credits | 3V + 2U + 2A | R. Cotterell, M. Sachan, F. Tramèr
Abstract: Large language models have become one of the most commonly deployed NLP inventions. In the past half-decade, their integration into core natural language processing tools has dramatically increased the performance of such tools, and they have entered the public discourse surrounding artificial intelligence.
Learning objective: To understand the mathematical foundations of large language models as well as how to implement them.
Content: We start with the probabilistic foundations of language models, i.e., covering what constitutes a language model from a formal, theoretical perspective. We then discuss how to construct and curate training corpora, and introduce many of the neural-network architectures often used to instantiate language models at scale. The course covers aspects of systems programming, discussion of privacy and harms, as well as applications of language models in NLP and beyond.
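As one concrete instance of those probabilistic foundations, autoregressive language models factorize the probability of a token sequence as follows; this is the standard formulation, and the course's formal treatment may use different notation:

\[
p(w_1, \dots, w_T) \;=\; \prod_{t=1}^{T} p\big(w_t \mid w_1, \dots, w_{t-1}\big),
\]

where each conditional distribution over the vocabulary is typically parameterized by a neural network such as a Transformer.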
Literature: The lecture notes will be supplemented with various readings from the literature.
Minor in Information Security
Number | Title | Type | ECTS | Hours | Lecturers
252-0408-00L | Cryptographic Protocols | W | 6 credits | 2V + 2U + 1A | M. Hirt
Abstract: In a cryptographic protocol, a set of parties wants to achieve some common goal while some of the parties are dishonest. The most prominent example of a cryptographic protocol is multi-party computation, where the parties compute an arbitrary (but fixed) function of their inputs while maintaining the secrecy of the inputs and the correctness of the outputs even if some of the parties try to cheat.
Learning objective: To know and understand a selection of cryptographic protocols and to be able to analyze and prove their security and efficiency.
Content: The selection of considered protocols varies. Currently, we consider multi-party computation, secret-sharing, broadcast and Byzantine agreement. We look at both the synchronous and the asynchronous communication model, and focus on simple protocols as well as on highly efficient protocols.
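To give a small taste of the secret-sharing building block mentioned above, here is a minimal Shamir-style sketch over a prime field. The prime, threshold, and helper names are illustrative assumptions rather than the course's formalization, and a real implementation would use a cryptographically secure random source.

import random

P = 2**61 - 1  # illustrative prime modulus

def share(secret, t, n):
    # Degree-(t-1) polynomial with constant term `secret`; shares are evaluations at x = 1..n.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 using t shares.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(123456789, t=3, n=5)
print(reconstruct(shares[:3]))  # -> 123456789; any 3 of the 5 shares suffice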
Lecture notes: We provide handouts of the slides. For some of the topics, we also provide papers and/or lecture notes.
Prerequisites / Notice: A basic understanding of fundamental cryptographic concepts (as taught, for example, in the course Information Security) is useful, but not required.
Competencies
Subject-specific Competencies: Concepts and Theories (assessed); Techniques and Technologies (assessed)
Method-specific Competencies: Analytical Competencies (assessed); Decision-making (fostered)
Personal Competencies: Creative Thinking (fostered); Critical Thinking (fostered)
263-4600-00L | Formal Methods for Information Security | W | 5 credits | 2V + 1U + 1A | S. Krstic, R. Sasse, C. Sprenger
Abstract: The course focuses on formal methods for the modeling and analysis of security protocols for critical systems, ranging from authentication protocols for network security to electronic voting protocols and online banking. In addition, we will also introduce the notions of non-interference and runtime monitoring.
Learning objective: The students will learn the key ideas and theoretical foundations of formal modeling and analysis of security protocols. The students will complement their theoretical knowledge by solving practical exercises, completing a small project, and using state-of-the-art tools. The students also learn the fundamentals of non-interference and runtime monitoring.
Content: The course treats formal methods mainly for the modeling and analysis of security protocols. Cryptographic protocols (such as SSL/TLS, SSH, Kerberos, SAML single sign-on, and IPSec) form the basis for secure communication and business processes. Numerous attacks on published protocols show that the design of cryptographic protocols is extremely error-prone. A rigorous analysis of these protocols is therefore indispensable, and manual analysis is insufficient. The lectures cover the theoretical basis for the (tool-supported) formal modeling and analysis of such protocols. Specifically, we discuss their operational semantics, the formalization of security properties, and techniques and algorithms for their verification.

The second part of this course will cover a selection of advanced topics in security protocols such as abstraction techniques for efficient verification, secure communication with humans, the link between symbolic protocol models and cryptographic models as well as RFID protocols (a staple of the Internet of Things) and electronic voting protocols, including the relevant privacy properties.

Moreover, we will give an introduction to two additional topics: non-interference as a general notion of secure systems, both from a semantic and a programming language perspective (type system), and runtime verification/monitoring to detect violations of security policies expressed as trace properties.
Competencies
Subject-specific Competencies: Concepts and Theories (assessed)
Method-specific Competencies: Analytical Competencies (assessed); Problem-solving (assessed); Project Management (fostered)
Social Competencies: Cooperation and Teamwork (fostered)
Personal Competencies: Self-direction and Self-management (fostered)
263-4656-00L | Digital Signatures | W | 5 credits | 2V + 2A | D. Hofheinz
Abstract: Digital signatures as one central cryptographic building block. Different security goals and security definitions for digital signatures, followed by a variety of popular and fundamental signature schemes with their security analyses.
Learning objective: The student knows a variety of techniques to construct and analyze the security of digital signature schemes. This includes modularity as a central tool of constructing secure schemes, and reductions as a central tool to proving the security of schemes.
Content: We will start with several definitions of security for signature schemes, and investigate the relations among them. We will proceed to generic (but inefficient) constructions of secure signatures, and then move on to a number of efficient schemes based on concrete computational hardness assumptions. On the way, we will get to know paradigms such as hash-then-sign, one-time signatures, and chameleon hashing as central tools to construct secure signatures.
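To illustrate the one-time-signature paradigm mentioned above, here is a minimal Lamport-style sketch built from a hash function. It is an illustrative toy (each key pair may sign only one message, and key sizes are large), not the formalization used in the course.

import os, hashlib

H = lambda x: hashlib.sha256(x).digest()
BITS = 256  # we sign the 256-bit SHA-256 digest of the message

def keygen():
    # Two random preimages per digest bit; the public key is their hashes.
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(BITS)]
    pk = [(H(s0), H(s1)) for s0, s1 in sk]
    return sk, pk

def digest_bits(msg):
    d = H(msg)
    return [(d[i // 8] >> (i % 8)) & 1 for i in range(BITS)]

def sign(sk, msg):
    return [sk[i][b] for i, b in enumerate(digest_bits(msg))]  # reveal one preimage per bit

def verify(pk, msg, sig):
    return all(H(sig[i]) == pk[i][b] for i, b in enumerate(digest_bits(msg)))

sk, pk = keygen()
sig = sign(sk, b"hello")
print(verify(pk, b"hello", sig))  # -> True; the key pair must not be reused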
Literature: Jonathan Katz, "Digital Signatures."
Prerequisites / Notice: Ideally, students will have taken the D-INFK Bachelor's course "Information Security" or an equivalent course at Bachelor's level.
263-4660-00L | Applied Cryptography | W | 8 credits | 3V + 2U + 2P | K. Paterson, F. Günther, F. Tramèr
Abstract: This course will introduce the basic primitives of cryptography, using rigorous syntax and game-based security definitions. The course will show how these primitives can be combined to build cryptographic protocols and systems.
Learning objective: The goal of the course is to put students' understanding of cryptography on sound foundations, to enable them to start to build well-designed cryptographic systems, and to expose them to some of the pitfalls that arise when doing so.
Content: Basic symmetric primitives (block ciphers, modes, hash functions); generic composition; AEAD; basic secure channels; basic public-key primitives (encryption, signature, DH key exchange); ECC; randomness; applications.
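To make the AEAD item above concrete, here is a minimal sketch using the AES-GCM interface of the pyca/cryptography package; the choice of this particular library and the sample values are assumptions for illustration, not the course's tooling.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # random 256-bit key
aesgcm = AESGCM(key)
nonce = os.urandom(12)                     # 96-bit nonce; must never repeat under the same key
aad = b"header-v1"                         # associated data: authenticated but not encrypted

ct = aesgcm.encrypt(nonce, b"attack at dawn", aad)  # ciphertext with appended authentication tag
pt = aesgcm.decrypt(nonce, ct, aad)                 # raises InvalidTag if ciphertext or AAD was tampered with
assert pt == b"attack at dawn"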
Literature: Textbook: Boneh and Shoup, "A Graduate Course in Applied Cryptography", http://toc.cryptobook.us/book.pdf.
Prerequisites / Notice: Students should have taken the D-INFK Bachelor's course "Information Security" (252-0211-00) or an alternative first course covering cryptography at a similar level. In this course, we will use Moodle for content delivery: https://moodle-app2.let.ethz.ch/course/view.php?id=22001.
Competencies
Subject-specific Competencies: Concepts and Theories (assessed); Techniques and Technologies (assessed)
Method-specific Competencies: Analytical Competencies (assessed); Media and Digital Technologies (assessed)
Personal Competencies: Creative Thinking (fostered); Critical Thinking (fostered); Integrity and Work Ethics (fostered)
Minor in Machine Learning
Number | Title | Type | ECTS | Hours | Lecturers
252-0526-00L | Statistical Learning Theory (takes place for the last time!) | W | 8 credits | 3V + 2U + 2A | J. M. Buhmann
Abstract: The course covers advanced methods of statistical learning:

- Variational methods and optimization.
- Deterministic annealing.
- Clustering for diverse types of data.
- Model validation by information theory.
Learning objective: The course surveys recent methods of statistical learning. The fundamentals of machine learning, as presented in the courses "Introduction to Machine Learning" and "Advanced Machine Learning", are expanded from the perspective of statistical learning.
Content:
- Variational methods and optimization. We consider optimization approaches for problems where the optimizer is a probability distribution. We will discuss concepts like maximum entropy, information bottleneck, and deterministic annealing (a worked form of the underlying free-energy objective is sketched after this topic list).

- Clustering. This is the problem of sorting data into groups without using training samples. We discuss alternative notions of "similarity" between data points and adequate optimization procedures.

- Model selection and validation. This refers to the question of how complex the chosen model should be. In particular, we present an information theoretic approach for model validation.

- Statistical physics models. We discuss approaches for approximately optimizing large systems, which originate in statistical physics (free energy minimization applied to spin glasses and other models). We also study sampling methods based on these models.
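Sketch of the free-energy objective referred to in the topic list (a standard formulation; the course's notation may differ): for an energy function E(x) and temperature T, one minimizes over distributions p

\[
F(p) \;=\; \mathbb{E}_{p}\,[E(x)] \;-\; T\,H(p),
\]

whose minimizer is the Gibbs distribution p(x) \propto \exp(-E(x)/T); deterministic annealing tracks this minimizer while T is gradually lowered.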
Lecture notes: A draft of a script will be provided. Lecture slides will be made available.
Literature: Hastie, Tibshirani, Friedman: The Elements of Statistical Learning, Springer, 2001.

L. Devroye, L. Györfi, and G. Lugosi: A Probabilistic Theory of Pattern Recognition. Springer, New York, 1996.
Prerequisites / Notice: Knowledge of machine learning (Introduction to Machine Learning and/or Advanced Machine Learning).
Basic knowledge of statistics.
261-5120-00L | Machine Learning for Health Care (restricted registration) | W | 5 credits | 2V + 2A | J. Vogt, V. Boeva, G. Rätsch
Abstract: The course will review the most relevant methods and applications of machine learning in biomedicine, and discuss the main challenges they present and their current technical solutions.
Learning objective: In recent years, we have observed rapid growth in the field of Machine Learning (ML), mainly due to improvements in ML algorithms, the increase in data availability, and a reduction in computing costs. This growth is having a profound impact on biomedical applications, where the great variety of tasks and data types enables us to benefit from ML algorithms in many different ways. In this course we will review the most relevant methods and applications of ML in biomedicine, and discuss the main challenges they present and their current technical solutions.
Content: The course will consist of different topic clusters that will cover the most relevant applications of ML in health care, e.g.:
1) Structured time series: Temporal time series of structured data often appear in biomedical datasets, presenting challenges such as containing variables with different periodicities or being influenced by static data.
2) Medical notes: Many medical observations are stored as free text. We will analyze strategies for extracting knowledge from these textual records.
3) Medical images: Images are a fundamental piece of information in many medical disciplines. We will study how to train ML algorithms to analyze and interpret medical images effectively.
4) Genomics data: ML in genomics is still an emerging subfield. However, given that genomics data are arguably the most extensive and complex datasets in biomedicine, we expect many relevant ML applications to arise in the near future. We will review and discuss current applications and challenges in genomics data analysis.
5) Explainable/Interpretable ML: Interpretable and explainable machine learning focuses on the design of human-understandable models and algorithms that allow for black-box model introspection after training, i.e., post hoc. We will explore methods to make ML models more transparent, particularly in the context of healthcare.
6) Representation Learning: Representation learning is a crucial aspect of ML in healthcare. It involves learning meaningful representations from data, which can be especially valuable in tasks like medical image analysis, feature extraction from biomedical time series data, or dimensionality reduction. We will delve into techniques for efficient representation learning to enhance the performance of healthcare ML models.
Prerequisites / Notice: Data Structures & Algorithms, Introduction to Machine Learning, Statistics/Probability, Programming in Python, Unix Command Line.

Prior attendance of additional machine learning courses is highly recommended but not formally required. Students should have a basic understanding of the current machine learning landscape, as some concepts will be briefly mentioned, with the expectation that students can delve deeper into them using current literature. This course primarily emphasizes the practical application of existing concepts and methods in healthcare data.
Competencies
Subject-specific Competencies: Concepts and Theories (fostered); Techniques and Technologies (fostered)
263-3710-00L | Machine Perception (restricted registration) | W | 8 credits | 3V + 2U + 2A | J. Song, X. Chen, S. Coros, O. Hilliges, M. Kaufmann
Abstract: Recent developments in neural networks have drastically advanced the performance of machine perception systems in a variety of areas including computer vision, robotics, and human shape modeling.
This course is a deep dive into deep learning algorithms and architectures with applications to a variety of perceptual and generative tasks.
Learning objective: Students will learn about fundamental aspects of modern deep learning approaches for perception and generation. Students will learn to implement, train and debug their own neural networks and gain a detailed understanding of cutting-edge research in learning-based computer vision, robotics, and shape modeling. The optional final project assignment will involve training a complex neural network architecture and applying it to a real-world dataset.

The core competency acquired through this course is a solid foundation in deep-learning algorithms to process and interpret human-centric signals. In particular, students should be able to develop systems that deal with the problem of recognizing people in images, detecting and describing body parts, inferring their spatial configuration, performing action/gesture recognition from still images or image sequences, also considering multi-modal data, among others.
Content: We will focus on teaching how to set up the problem of machine perception, learning algorithms, network architectures, and advanced deep learning concepts, in particular probabilistic deep learning models.

The course covers the following main areas:
I) Foundations of deep learning.
II) Advanced topics like probabilistic generative modeling of data (latent variable models, generative adversarial networks, auto-regressive models, invertible neural networks, diffusion models).
III) Deep learning in computer vision, human-computer interaction, and robotics.

Specific topics include:
I) Introduction to Deep Learning:
a) Neural Networks and training (i.e., backpropagation)
b) Feedforward Networks
c) Timeseries modelling (RNN, GRU, LSTM)
d) Convolutional Neural Networks
II) Advanced topics:
a) Latent variable models (VAEs)
b) Generative adversarial networks (GANs)
c) Autoregressive models (PixelCNN, PixelRNN, TCN, Transformer)
d) Invertible Neural Networks / Normalizing Flows
e) Coordinate-based networks (neural implicit surfaces, NeRF)
f) Diffusion models
III) Applications in machine perception and computer vision:
a) Fully Convolutional architectures for dense per-pixel tasks (e.g., instance segmentation)
b) Pose estimation and other tasks involving human activity
c) Neural shape modeling (implicit surfaces, neural radiance fields)
d) Deep Reinforcement Learning and Applications in Physics-Based Behavior Modeling
Literature: Deep Learning, by Ian Goodfellow, Yoshua Bengio, and Aaron Courville (MIT Press, 2016).
Prerequisites / Notice: This is an advanced graduate-level course that requires a background in machine learning. Students are expected to have a solid mathematical foundation, in particular in linear algebra, multivariate calculus, and probability. The course will focus on state-of-the-art research in deep learning and will not repeat the basics of machine learning.

Please take note of the following conditions:
1) Students must have taken the exam in Machine Learning (252-0535-00) or have acquired equivalent knowledge.
2) All practical exercises will require basic knowledge of Python and will use libraries such as PyTorch, scikit-learn, and scikit-image. We will provide introductions to PyTorch and other libraries that are needed but will not provide introductions to basic programming or Python.

The following courses are strongly recommended as prerequisites:
* "Visual Computing" or "Computer Vision"

The course will be assessed by a final written examination in English. No course materials or electronic devices can be used during the examination. Note that the examination will be based on the contents of the lectures, the associated reading materials, and the exercises.

The exam will be a 3-hour end-of-term exam and take place at the end of the teaching period.
Competencies
Subject-specific Competencies: Concepts and Theories (assessed); Techniques and Technologies (assessed)
Method-specific Competencies: Analytical Competencies (assessed); Problem-solving (assessed); Project Management (fostered)
Social Competencies: Communication (fostered); Cooperation and Teamwork (fostered); Leadership and Responsibility (fostered); Self-presentation and Social Influence (fostered)
Personal Competencies: Adaptability and Flexibility (fostered); Creative Thinking (fostered); Critical Thinking (fostered); Integrity and Work Ethics (fostered); Self-direction and Self-management (fostered)
263-5000-00L | Computational Semantics for Natural Language Processing | W | 6 credits | 2V + 1U + 2A | M. Sachan
Abstract: This course presents an introduction to natural language processing (NLP) with an emphasis on computational semantics, i.e., the process of constructing and reasoning with meaning representations of natural language text.
Learning objective: The objective of the course is to learn about various topics in computational semantics and their importance in natural language processing methodology and research. Exercises and the project will be key parts of the course, so the students will be able to gain hands-on experience with state-of-the-art techniques in the field.
Content: We will take a modern view of the topic and focus on various statistical and deep learning approaches to computational semantics. We will also overview the primary areas of research in language processing and discuss how the computational semantics view can help us make advances in NLP.
Lecture notes: Lecture slides will be made available on the course website.
Literature: No textbook is required, but there will be regularly assigned readings from the research literature, linked to the course website.
Prerequisites / Notice: The student should have successfully completed a graduate-level class in machine learning (252-0220-00L), deep learning (263-3210-00L), or natural language processing (252-3005-00L). Similar courses from other universities are also acceptable.
263-5052-00L | Interactive Machine Learning: Visualization & Explainability (restricted registration) | W | 5 credits | 3G + 1A | M. El-Assady
Abstract: Visual analytics supports the design of human-in-the-loop interfaces that enable human-machine collaboration. In this course, we will go through the fundamentals of designing interactive visualizations and later apply them to explain and interact with machine learning models.
Learning objective: The goal of the course is to introduce techniques for interactive information visualization and to apply them to understanding, diagnosing, and refining machine learning models.
Content: Interactive, mixed-initiative machine learning promises to combine the efficiency of automation with the effectiveness of humans for a collaborative decision-making and problem-solving process. This can be facilitated through co-adaptive visual interfaces.

This course will first introduce the foundations of information visualization design based on data characteristics, e.g., high-dimensional, geo-spatial, relational, temporal, and textual data.

Second, we will discuss interaction techniques and explanation strategies to enable explainable machine learning with the tasks of understanding, diagnosing, and refining machine learning models.

Tentative list of topics:
1. Visualization and Perception
2. Interaction and Explanation
3. Systems Overview
Lecture notes: Course material will be provided in the form of slides.
Literature: Will be provided during the course.
Prerequisites / Notice: Basic understanding of machine learning as taught at the Bachelor's level.
263-5255-00L | Foundations of Reinforcement Learning (restricted registration) | W | 7 credits | 3V + 3A | N. He
Abstract: Reinforcement learning (RL) has been at the heart of many recent breakthroughs in artificial intelligence. This course focuses on the theoretical and algorithmic foundations of reinforcement learning, through the lens of optimization, modern approximation, and learning theory. The course targets M.S. students with strong research interests in reinforcement learning, optimization, and control.
Learning objective: This course aims to provide students with an advanced introduction to RL theory and algorithms, as well as to bring them near the frontier of this active research field.

By the end of the course, students will be able to
- Identify the strengths and limitations of various reinforcement learning algorithms;
- Formulate and solve sequential decision-making problems by applying relevant reinforcement learning tools;
- Generalize or discover “new” applications, algorithms, or theories of reinforcement learning towards conducting independent research on the topic.
Content: Topics include the fundamentals of Markov decision processes, approximate dynamic programming, linear programming and primal-dual perspectives of RL, model-based and model-free RL, policy gradient and actor-critic algorithms, and Markov games and multi-agent RL. If time allows, we will also discuss advanced topics such as batch RL, inverse RL, and causal RL. The course places strong emphasis on an in-depth understanding of the mathematical modeling and theoretical properties of RL algorithms.
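As an anchor for the MDP fundamentals above, the Bellman optimality equation in the standard discounted setting (the course's notation may differ) reads

\[
V^{*}(s) \;=\; \max_{a}\Big[\, r(s,a) \;+\; \gamma \sum_{s'} P(s' \mid s, a)\, V^{*}(s') \Big],
\qquad \gamma \in [0,1),
\]

and approximate dynamic programming, linear-programming formulations, and many model-free methods can be read as different ways of (approximately) solving this fixed-point equation.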
Lecture notes: Lecture slides will be posted on Moodle.
Literature: Dynamic Programming and Optimal Control, Vol. I & II, Dimitri Bertsekas.
Reinforcement Learning: An Introduction, Second Edition, Richard Sutton and Andrew Barto.
Algorithms for Reinforcement Learning, Csaba Szepesvári.
Reinforcement Learning: Theory and Algorithms, Alekh Agarwal, Nan Jiang, Sham M. Kakade, Wen Sun.
Prerequisites / Notice: Students are expected to have a strong mathematical background in linear algebra, probability theory, optimization, and machine learning.
Competencies
Subject-specific Competencies: Concepts and Theories (assessed); Techniques and Technologies (fostered)
Method-specific Competencies: Analytical Competencies (assessed); Decision-making (fostered); Problem-solving (assessed); Project Management (assessed)
Social Competencies: Communication (assessed); Cooperation and Teamwork (assessed); Leadership and Responsibility (fostered); Self-presentation and Social Influence (fostered)
Personal Competencies: Adaptability and Flexibility (fostered); Creative Thinking (assessed); Critical Thinking (assessed); Integrity and Work Ethics (fostered); Self-awareness and Self-reflection (fostered); Self-direction and Self-management (fostered)