Catalogue data in Autumn Semester 2020

Cyber Security Master
Minor
Computational Science
Core Courses
Number | Title | Type | ECTS | Hours | Lecturers
252-0535-00L | Advanced Machine Learning | W | 10 credits | 3V + 2U + 4A | J. M. Buhmann, C. Cotrini Jimenez
AbstractMachine learning algorithms provide analytical methods to search data sets for characteristic patterns. Typical tasks include the classification of data, function fitting and clustering, with applications in image and speech analysis, bioinformatics and exploratory data analysis. This course is accompanied by practical machine learning projects.
ObjectiveStudents will be familiarized with advanced concepts and algorithms for supervised and unsupervised learning, and will reinforce the statistics knowledge that is indispensable for solving modeling problems under uncertainty. Key concepts are the generalization ability of algorithms and systematic approaches to modeling and regularization. Machine learning projects will provide an opportunity to test the machine learning algorithms on real-world data.
ContentThe theory of fundamental machine learning concepts is presented in the lecture, and illustrated with relevant applications. Students can deepen their understanding by solving both pen-and-paper and programming exercises, where they implement and apply famous algorithms to real-world data.

Topics covered in the lecture include:

Fundamentals:
What is data?
Bayesian Learning
Computational learning theory

Supervised learning:
Ensembles: Bagging and Boosting
Max Margin methods
Neural networks

Unsupervised learning:
Dimensionality reduction techniques
Clustering
Mixture Models
Non-parametric density estimation
Learning Dynamical Systems
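To make the ensemble topic listed under supervised learning above concrete, here is a minimal, hypothetical sketch of bagging in Python/numpy (not part of the course material): bootstrap-resample the training set, fit a weak learner (a decision stump here) on each resample, and predict by majority vote. The toy data and the stump learner are invented purely for illustration.

```python
import numpy as np

def fit_stump(X, y):
    """Pick the (feature, threshold, sign) decision stump with the lowest training error."""
    best = (0, 0.0, 1, np.inf)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for s in (1, -1):
                pred = np.where(s * (X[:, j] - t) > 0, 1, -1)
                err = np.mean(pred != y)
                if err < best[3]:
                    best = (j, t, s, err)
    return best[:3]

def predict_stump(stump, X):
    j, t, s = stump
    return np.where(s * (X[:, j] - t) > 0, 1, -1)

def bagging(X, y, n_estimators=25, seed=0):
    """Bootstrap aggregation: train each stump on a resample drawn with replacement."""
    rng = np.random.default_rng(seed)
    stumps = []
    for _ in range(n_estimators):
        idx = rng.integers(0, len(y), len(y))   # bootstrap sample indices
        stumps.append(fit_stump(X[idx], y[idx]))
    return stumps

def predict_bagging(stumps, X):
    votes = np.sum([predict_stump(s, X) for s in stumps], axis=0)
    return np.where(votes >= 0, 1, -1)          # majority vote over the ensemble

# toy data: two Gaussian blobs labelled -1 / +1
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1.0, 1.0, (50, 2)), rng.normal(1.0, 1.0, (50, 2))])
y = np.hstack([-np.ones(50), np.ones(50)])
stumps = bagging(X, y)
print("training accuracy:", np.mean(predict_bagging(stumps, X) == y))
```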
Lecture notesNo lecture notes, but slides will be made available on the course webpage.
LiteratureC. Bishop. Pattern Recognition and Machine Learning. Springer 2007.

R. Duda, P. Hart, and D. Stork. Pattern Classification. John Wiley &
Sons, second edition, 2001.

T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical
Learning: Data Mining, Inference and Prediction. Springer, 2001.

L. Wasserman. All of Statistics: A Concise Course in Statistical
Inference. Springer, 2004.
Prerequisites / NoticeThe course requires solid basic knowledge in analysis, statistics and numerical methods for CSE as well as practical programming experience for solving assignments.
Students should have followed at least "Introduction to Machine Learning" or an equivalent course offered by another institution.

PhD students are required to obtain a passing grade in the course (4.0 or higher based on project and exam) to gain credit points.
636-0007-00L | Computational Systems Biology | W | 6 credits | 3V + 2U | J. Stelling
AbstractStudy of fundamental concepts, models and computational methods for the analysis of complex biological networks. Topics: Systems approaches in biology, biology and reaction network fundamentals, modeling and simulation approaches (topological, probabilistic, stoichiometric, qualitative, linear / nonlinear ODEs, stochastic), and systems analysis (complexity reduction, stability, identification).
ObjectiveThe aim of this course is to provide an introductory overview of mathematical and computational methods for the modeling, simulation and analysis of biological networks.
ContentBiology has witnessed an unprecedented increase in experimental data and, correspondingly, an increased need for computational methods to analyze this data. The explosion of sequenced genomes, and subsequently, of bioinformatics methods for the storage, analysis and comparison of genetic sequences provides a prominent example. Recently, however, an additional area of research, captured by the label "Systems Biology", focuses on how networks, which are more than the mere sum of their parts' properties, establish biological functions. This is essentially a task of reverse engineering. The aim of this course is to provide an introductory overview of corresponding computational methods for the modeling, simulation and analysis of biological networks. We will start with an introduction into the basic units, functions and design principles that are relevant for biology at the level of individual cells. Making extensive use of example systems, the course will then focus on methods and algorithms that allow for the investigation of biological networks with increasing detail. These include (i) graph theoretical approaches for revealing large-scale network organization, (ii) probabilistic (Bayesian) network representations, (iii) structural network analysis based on reaction stoichiometries, (iv) qualitative methods for dynamic modeling and simulation (Boolean and piece-wise linear approaches), (v) mechanistic modeling using ordinary differential equations (ODEs) and finally (vi) stochastic simulation methods.
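As a small illustration of the mechanistic ODE modeling mentioned above, the following sketch (Python with scipy, not course material) integrates a two-variable gene-expression model; the rate constants are made up for the example.

```python
import numpy as np
from scipy.integrate import solve_ivp

def gene_expression(t, x, k_tx=2.0, d_m=0.5, k_tl=5.0, d_p=0.1):
    """dm/dt = k_tx - d_m*m (transcription and decay), dp/dt = k_tl*m - d_p*p (translation and decay)."""
    m, p = x
    return [k_tx - d_m * m, k_tl * m - d_p * p]

sol = solve_ivp(gene_expression, (0.0, 50.0), [0.0, 0.0], t_eval=np.linspace(0.0, 50.0, 200))
print("mRNA level at t=50:", sol.y[0, -1], " protein level at t=50:", sol.y[1, -1])
```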
Lecture notesLink
LiteratureU. Alon, An introduction to systems biology. Chapman & Hall / CRC, 2006.

Z. Szallasi et al. (eds.), System modeling in cellular biology. MIT Press, 2010.

B. Ingalls, Mathematical modeling in systems biology: an introduction. MIT Press, 2013
Electives
Number | Title | Type | ECTS | Hours | Lecturers
252-0543-01L | Computer Graphics | W | 8 credits | 3V + 2U + 2A | M. Gross, M. Papas
AbstractThis course covers some of the fundamental concepts of computer graphics: the generation of photorealistic images from digital representations of 3D scenes, and image-based methods for recovering digital scene representations from captured images.
ObjectiveAt the end of the course the students will be able to build a rendering system. The students will study the basic principles of rendering and image synthesis. In addition, the course is intended to stimulate the students' curiosity to explore the field of computer graphics in subsequent courses or on their own.
ContentThis course covers fundamental concepts of modern computer graphics. Students will learn about 3D object representations and the details of how to generate photorealistic images from digital representations of 3D scenes. Starting with an introduction to 3D shape modeling, geometry representation and texture mapping, we will move on to the physics of light transport, acceleration structures, appearance modeling and Monte Carlo integration. We will apply these principles for computing light transport of direct and global illumination due to surfaces and participating media. We will end with an overview of modern image-based capture and image synthesis methods, covering topics such as geometry and material capture, light-fields and depth-image based rendering.
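As a toy illustration of the Monte Carlo integration principle mentioned above (not course material), the following sketch estimates the cosine-weighted hemisphere integral that appears in direct-illumination computations; with constant incoming radiance the analytic value is pi.

```python
import numpy as np

def sample_hemisphere(rng, n):
    """Uniformly sample n directions on the unit hemisphere around +z (pdf = 1/(2*pi))."""
    u1, u2 = rng.random(n), rng.random(n)
    z = u1                              # cos(theta) is uniform on [0, 1]
    r = np.sqrt(1.0 - z * z)
    phi = 2.0 * np.pi * u2
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

rng = np.random.default_rng(0)
dirs = sample_hemisphere(rng, 100_000)
pdf = 1.0 / (2.0 * np.pi)
estimate = np.mean(dirs[:, 2] / pdf)    # E[cos(theta)/pdf] = integral of cos(theta) over the hemisphere
print("Monte Carlo estimate:", estimate, " analytic value:", np.pi)
```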
Lecture notesno
LiteratureBooks:
High Dynamic Range Imaging: Acquisition, Display, and Image-Based Lighting
Multiple view geometry in computer vision
Physically Based Rendering: From Theory to Implementation
Prerequisites / NoticePrerequisites:
Fundamentals of calculus and linear algebra, basic concepts of algorithms and data structures, programming skills in C++, Visual Computing course recommended.
The programming assignments will be in C++. This will not be taught in the class.
261-5100-00L | Computational Biomedicine (restricted registration) | W | 5 credits | 2V + 1U + 1A | G. Rätsch, V. Boeva, N. Davidson
Number of participants limited to 60.
AbstractThe course critically reviews central problems in Biomedicine and discusses the technical foundations and solutions for these problems.
ObjectiveOver the past years, rapid technological advancements have transformed classical disciplines such as biology and medicine into fields of applied data science. While the sheer amount of collected data often makes computational approaches inevitable for analysis, it is the domain-specific structure and the close relation to research and the clinic that call for accurate, robust and efficient algorithms. In this course we will critically review central problems in Biomedicine and will discuss the technical foundations and solutions for these problems.
ContentThe course will consist of three topic clusters that will cover different aspects of data science problems in Biomedicine:
1) String algorithms for the efficient representation, search, comparison, composition and compression of large sets of strings, mostly originating from DNA or RNA Sequencing. This includes genome assembly, efficient index data structures for strings and graphs, alignment techniques as well as quantitative approaches.
2) Statistical models and algorithms for the assessment and functional analysis of individual genomic variations. This includes the identification of variants, prediction of functional effects, imputation and integration problems as well as the association with clinical phenotypes.
3) Models for organization and representation of large-scale biomedical data. This includes ontology concepts, biomedical databases, sequence annotation and data compression.
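To illustrate the alignment techniques listed under point 1 above, here is a minimal sketch (Python, not course material) of the classic dynamic-programming edit distance that underlies many alignment algorithms; the toy sequences are made up.

```python
def edit_distance(a: str, b: str) -> int:
    """Dynamic-programming edit distance with unit costs for insertion, deletion and substitution."""
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i
    for j in range(m + 1):
        dp[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # match or substitution
    return dp[n][m]

print(edit_distance("GATTACA", "GCATGCA"))   # distance between two toy DNA strings
```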
Prerequisites / NoticeData Structures & Algorithms, Introduction to Machine Learning, Statistics/Probability, Programming in Python, Unix Command Line
636-0017-00L | Computational Biology | W | 6 credits | 3G + 2A | T. Stadler, T. Vaughan
AbstractThe aim of the course is to provide up-to-date knowledge on how we can study biological processes using genetic sequencing data. Computational algorithms extracting biological information from genetic sequence data are discussed, and statistical tools to understand this information in detail are introduced.
ObjectiveAttendees will learn which information is contained in genetic sequencing data and how to extract information from this data using computational tools. The main concepts introduced are:
* stochastic models in molecular evolution
* phylogenetic & phylodynamic inference
* maximum likelihood and Bayesian statistics
Attendees will apply these concepts to a number of applications yielding biological insight into:
* epidemiology
* pathogen evolution
* macroevolution of species
ContentThe course consists of four parts. We first introduce modern genetic sequencing technology, and algorithms to obtain sequence alignments from the output of the sequencers. We then present methods for direct alignment analysis using approaches such as BLAST and GWAS. Second, we introduce mechanisms and concepts of molecular evolution, i.e. we discuss how genetic sequences change over time. Third, we employ evolutionary concepts to infer ancestral relationships between organisms based on their genetic sequences, i.e. we discuss methods to infer genealogies and phylogenies. Lastly, we introduce the field of phylodynamics, the aim of which is to understand and quantify population dynamic processes (such as transmission in epidemiology or speciation & extinction in macroevolution) based on a phylogeny. Throughout the class, the models and methods are illustrated on different datasets giving insight into the epidemiology and evolution of a range of infectious diseases (e.g. HIV, HCV, influenza, Ebola). Applications of the methods to the field of macroevolution provide insight into the evolution and ecology of different species clades. Students will be trained in the algorithms and their application both on paper and in silico as part of the exercises.
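As a small taste of the stochastic models of molecular evolution covered in the course, the following sketch (Python, not course material) computes the Jukes-Cantor (JC69) distance between two aligned toy sequences from their fraction of differing sites.

```python
import math

def jukes_cantor_distance(seq1: str, seq2: str) -> float:
    """JC69 distance d = -3/4 * ln(1 - 4p/3), where p is the proportion of differing sites."""
    assert len(seq1) == len(seq2), "sequences must be aligned and of equal length"
    p = sum(a != b for a, b in zip(seq1, seq2)) / len(seq1)
    return -0.75 * math.log(1.0 - 4.0 * p / 3.0)

print(jukes_cantor_distance("ACGTACGTAC", "ACGTACGTTC"))   # one differing site out of ten
```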
Lecture notesLecture slides will be available on moodle.
LiteratureThe course is not based on any of the textbooks below, but they are excellent choices as accompanying material:
* Yang, Z. 2006. Computational Molecular Evolution.
* Felsenstein, J. 2004. Inferring Phylogenies.
* Semple, C. & Steel, M. 2003. Phylogenetics.
* Drummond, A. & Bouckaert, R. 2015. Bayesian evolutionary analysis with BEAST.
Prerequisites / NoticeBasic knowledge in linear algebra, analysis, and statistics will be helpful. Programming in R will be required for the project work (compulsory continuous performance assessments). We provide an R tutorial and help sessions during the first two weeks of class to learn the required skills. However, if you do not have any previous experience with R, we strongly recommend getting familiar with R before the semester starts. For the D-BSSE students, we highly recommend the voluntary course "Introduction to Programming", which takes place at D-BSSE from Wednesday, September 12 to Friday, September 14, i.e. BEFORE the official semester start date Link
For the Zurich-based students without R experience, we recommend the R course Link, or working through the script provided as part of this R course.
Distributed Systems
Core Courses
Number | Title | Type | ECTS | Hours | Lecturers
252-1414-00L | System Security | W | 7 credits | 2V + 2U + 2A | S. Capkun, A. Perrig
AbstractThe first part of the lecture covers individual system aspects, ranging from tamper-proof or tamper-resistant hardware, through operating-system security mechanisms, to application software systems such as host-based intrusion detection systems. In the second part, the focus is on system design and methodologies for building secure systems.
ObjectiveIn this lecture, students learn about the security requirements and capabilities that are expected from modern hardware, operating systems, and other software environments. An overview of available technologies, algorithms and standards is given, with which these requirements can be met.
ContentThe first part of the lecture covers individual system aspects, ranging from tamper-proof or tamper-resistant hardware, through operating-system security mechanisms, to application software systems such as host-based intrusion detection systems. The main topics covered are: tamper-resistant hardware, CPU support for security, protection mechanisms in the kernel, file system security (permissions / ACLs / network filesystem issues), IPC security, mechanisms in more modern OSs such as Capabilities and Zones, libraries and software tools for security assurance, etc.

In the second part, the focus is on system design and methodologies for building secure systems. Topics include: patch management, common software faults (buffer overflows, etc.), writing secure software (design, architecture, QA, testing), compiler-supported security, language-supported security, logging and auditing (BSM audit, dtrace, ...), cryptographic support, and trustworthy computing (TCG, SGX).

Alongside the lectures, model cases will be elaborated and evaluated in the exercises.
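As one hypothetical illustration of the logging-and-auditing topic above (a sketch in Python, not part of the lecture material), the following hash-chains log entries so that later tampering with any entry is detectable.

```python
import hashlib, json, time

def append_entry(log, message):
    """Append an entry whose hash covers its content and the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {"ts": time.time(), "msg": message, "prev": prev_hash}
    entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)

def verify(log):
    """Recompute the chain; any modified, removed or reordered entry breaks verification."""
    prev = "0" * 64
    for e in log:
        body = {"ts": e["ts"], "msg": e["msg"], "prev": e["prev"]}
        if e["prev"] != prev or e["hash"] != hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest():
            return False
        prev = e["hash"]
    return True

log = []
append_entry(log, "user alice logged in")
append_entry(log, "user alice read /etc/passwd")
print(verify(log))                  # True: the chain is intact
log[0]["msg"] = "nothing happened"
print(verify(log))                  # False: the tampered entry no longer matches its hash
```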
263-3845-00L | Data Management Systems | W | 8 credits | 3V + 1U + 3A | G. Alonso
AbstractThe course will cover the implementation aspects of data management systems using relational database engines as a starting point to cover the basic concepts of efficient data processing and then expanding those concepts to modern implementations in data centers and the cloud.
ObjectiveThe goal of the course is to convey the fundamental aspects of efficient data management from a systems implementation perspective: storage, access, organization, indexing, consistency, concurrency, transactions, distribution, query compilation vs interpretation, data representations, etc. Using conventional relational engines as a starting point, the course will aim at providing an in depth coverage of the latest technologies used in data centers and the cloud to implement large scale data processing in various forms.
ContentThe course will first cover fundamental concepts in data management: storage, locality, query optimization, declarative interfaces, concurrency control and recovery, buffer managers, management of the memory hierarchy, presenting them in a system-independent manner. The course will place a special emphasis on understanding these basic principles, as they are key to understanding what problems existing systems try to address. It will then proceed to explore their implementation in modern relational engines supporting SQL, and then expand to the range of systems used in the cloud: key value stores, geo-replication, query as a service, serverless, large scale analytics engines, etc.
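To make the buffer-manager concept above concrete, here is a deliberately simplified sketch (Python, not course material) of an LRU buffer pool; the "disk" is just a dictionary and all names are invented for illustration.

```python
from collections import OrderedDict

class BufferPool:
    """A fixed number of page frames in memory, backed by a slower 'disk', with LRU eviction."""
    def __init__(self, capacity, disk):
        self.capacity, self.disk = capacity, disk
        self.frames = OrderedDict()                 # page_id -> page contents, in LRU order

    def get_page(self, page_id):
        if page_id in self.frames:                  # buffer hit: mark page as most recently used
            self.frames.move_to_end(page_id)
            return self.frames[page_id]
        if len(self.frames) >= self.capacity:       # buffer full: evict the least recently used page
            self.frames.popitem(last=False)
        self.frames[page_id] = self.disk[page_id]   # buffer miss: "read" the page from disk
        return self.frames[page_id]

disk = {i: f"page-{i} contents" for i in range(10)}
pool = BufferPool(capacity=3, disk=disk)
for pid in [0, 1, 2, 0, 3, 4, 0]:
    pool.get_page(pid)
print(list(pool.frames))    # the three most recently used pages remain buffered
```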
LiteratureThe main source of information for the course will be articles and research papers describing the architecture of the systems discussed. The list of papers will be provided at the beginning of the course.
Elective Courses
Number | Title | Type | ECTS | Hours | Lecturers
252-0817-00L | Distributed Systems Laboratory | W | 10 credits | 9P | G. Alonso, T. Hoefler, A. Klimovic, T. Roscoe, A. Singla, R. Wattenhofer, C. Zhang
This only applies to Study Regulations 09: in the Master Programme, a maximum of 10 credits can be accounted for by Labs on top of the Interfocus Courses. These Labs will only count towards the Master Programme. Additional Labs will be listed on the Addendum.
AbstractThis course involves the participation in a substantial development and/or evaluation project involving distributed systems technology. There are projects available in a wide range of areas: from web services to ubiquitous computing including wireless networks, ad-hoc networks, RFID, and distributed applications on smartphones.
ObjectiveGain hands-on experience with real products and the latest technology in distributed systems.
ContentThis course involves the participation in a substantial development and/or evaluation project involving distributed systems technology. There are projects available in a wide range of areas: from web services to ubiquitous computing, including wireless networks, ad-hoc networks, and distributed applications on smartphones. The goal of the project is for the students to gain hands-on experience with real products and the latest technology in distributed systems. There is no lecture associated with the course.
227-2210-00L | Computer Architecture | W | 8 credits | 6G + 1A | O. Mutlu
AbstractComputer architecture is the science & art of designing and optimizing hardware components and the hardware/software interface to create a computer that meets design goals. This course covers basic components of a modern computing system (processors, memory, interconnects, accelerators). The course takes a hardware/software cooperative approach to understanding and designing computing systems.
ObjectiveWe will learn the fundamental concepts of the different parts of modern computing systems, as well as the latest trends by exploring the recent research in Industry and Academia. We will extensively cover memory technologies (including DRAM and new Non-Volatile Memory technologies), memory scheduling, parallel computing systems (including multicore processors and GPUs), heterogeneous computing, processing-in-memory, interconnection networks, specialized systems for major data-intensive workloads (e.g. graph processing, bioinformatics, machine learning), etc.
ContentThe principles presented in the lecture are reinforced in the laboratory through 1) the design and implementation of a cycle-accurate simulator, where we will explore different components of a modern computing system (e.g., pipeline, memory hierarchy, branch prediction, prefetching, caches, multithreading), and 2) the extension of state-of-the-art research simulators (e.g., Ramulator) for more in-depth understanding of specific system components (e.g., memory scheduling, prefetching).
Lecture notesAll the materials (including lecture slides) will be provided on the course website: Link
The video recordings of the lectures are expected to be made available after lectures.
LiteratureWe will provide required and recommended readings in every lecture. They will mainly consist of research papers presented in major Computer Architecture and related conferences and journals.
Prerequisites / NoticeDigital Design and Computer Architecture.
263-3850-00L | Informal Methods | W | 5 credits | 2G + 2A | D. Cock
AbstractFormal methods are increasingly a key part of the methodological toolkit of systems programmers - those writing operating systems, databases, and distributed systems. This course is about how to apply concepts, techniques, and principles from formal methods to such software systems, and how to get into the habit of thinking formally about systems design even when writing low-level C code.
ObjectiveThis course is about equipping students whose focus is systems with the insights and conceptual tools provided by formal methods, and thereby enabling them to become better systems programmers.
By the end of the course, students should be able to seamlessly integrate basic concepts from formal methods into how they conceive, design, implement, reason about, and debug computer systems.

The goal is not to provide a comprehensive introduction to formal methods - this is well covered by other courses in the department. Instead, it is intended to provide students in computer systems (who may or may not have existing background knowledge of formal methods) with a basis for applying formal methods in their work.
ContentThis course does not assume prior knowledge of formal methods, and will start with a quick review of topics such as static vs. dynamic reasoning, variants and invariants, program algebra and refinement, etc. However, it is strongly recommended that students have already taken one of the introductory formal methods courses at ETH (or an equivalent elsewhere) before taking this course - the emphasis is on reinforcing these concepts by applying them, not on teaching them from scratch.

Instead, the majority of the course will be about how to apply these techniques to actual, practical code in real systems. We will work with real systems code written by students taking the course, as well as with practical systems developed using formal techniques; in particular, the verified seL4 microkernel will be a key case study. We will also focus on informal, pen-and-paper arguments for the correctness of programs and systems rather than using theorem provers or automated verification tools; again, these latter techniques are well covered in other courses (and recommended as a complement to this one).
Information Systems
Core Courses
Number | Title | Type | ECTS | Hours | Lecturers
252-0535-00L | Advanced Machine Learning | W | 10 credits | 3V + 2U + 4A | J. M. Buhmann, C. Cotrini Jimenez
AbstractMachine learning algorithms provide analytical methods to search data sets for characteristic patterns. Typical tasks include the classification of data, function fitting and clustering, with applications in image and speech analysis, bioinformatics and exploratory data analysis. This course is accompanied by practical machine learning projects.
ObjectiveStudents will be familiarized with advanced concepts and algorithms for supervised and unsupervised learning, and will reinforce the statistics knowledge that is indispensable for solving modeling problems under uncertainty. Key concepts are the generalization ability of algorithms and systematic approaches to modeling and regularization. Machine learning projects will provide an opportunity to test the machine learning algorithms on real-world data.
ContentThe theory of fundamental machine learning concepts is presented in the lecture, and illustrated with relevant applications. Students can deepen their understanding by solving both pen-and-paper and programming exercises, where they implement and apply famous algorithms to real-world data.

Topics covered in the lecture include:

Fundamentals:
What is data?
Bayesian Learning
Computational learning theory

Supervised learning:
Ensembles: Bagging and Boosting
Max Margin methods
Neural networks

Unsupervised learning:
Dimensionality reduction techniques
Clustering
Mixture Models
Non-parametric density estimation
Learning Dynamical Systems
Lecture notesNo lecture notes, but slides will be made available on the course webpage.
LiteratureC. Bishop. Pattern Recognition and Machine Learning. Springer 2007.

R. Duda, P. Hart, and D. Stork. Pattern Classification. John Wiley &
Sons, second edition, 2001.

T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical
Learning: Data Mining, Inference and Prediction. Springer, 2001.

L. Wasserman. All of Statistics: A Concise Course in Statistical
Inference. Springer, 2004.
Prerequisites / NoticeThe course requires solid basic knowledge in analysis, statistics and numerical methods for CSE as well as practical programming experience for solving assignments.
Students should have followed at least "Introduction to Machine Learning" or an equivalent course offered by another institution.

PhD students are required to obtain a passing grade in the course (4.0 or higher based on project and exam) to gain credit points.
263-3010-00L | Big Data (restricted registration) | W | 10 credits | 3V + 2U + 4A | G. Fourny
AbstractThe key challenge of the information society is to turn data into information, information into knowledge, knowledge into value. This has become increasingly complex. Data comes in larger volumes, diverse shapes, from different sources. Data is more heterogeneous and less structured than forty years ago. Nevertheless, it still needs to be processed fast, with support for complex operations.
ObjectiveThis combination of requirements, together with the technologies that have emerged in order to address them, is typically referred to as "Big Data." This revolution has led to a completely new way to do business, e.g., develop new products and business models, but also to do science -- which is sometimes referred to as data-driven science or the "fourth paradigm".

Unfortunately, the quantity of data produced and available -- now in the Zettabyte range (that's 21 zeros) per year -- keeps growing faster than our ability to process it. Hence, new architectures and approaches for processing it were and are still needed. Harnessing them must involve a deep understanding of data not only in the large, but also in the small.

The field of databases evolves at a fast pace. In order to be prepared, to the extent possible, for the (r)evolutions that will take place in the next few decades, the emphasis of the lecture will be on the paradigms and core design ideas, while today's technologies will serve as supporting illustrations thereof.

After attending this lecture, you should have gained an overview and understanding of the Big Data landscape, which is the basis on which one can make informed decisions, i.e., pick and orchestrate the relevant technologies together for addressing each business use case efficiently and consistently.
ContentThis course gives an overview of database technologies and of the most important database design principles that lay the foundations of the Big Data universe. We take the monolithic, one-machine relational stack from the 1970s, smash it down and rebuild it on top of large clusters: starting with distributed storage, and all the way up to syntax, models, validation, processing, indexing, and querying. A broad range of aspects is covered with a focus on how they fit all together in the big picture of the Big Data ecosystem.

No data is harmed during this course, however, please be psychologically prepared that our data may not always be in third normal form.

- physical storage: distributed file systems (HDFS), object storage (S3), key-value stores

- logical storage: document stores (MongoDB), column stores (HBase), graph databases (neo4j), data warehouses (ROLAP)

- data formats and syntaxes (XML, JSON, RDF, Turtle, CSV, XBRL, YAML, protocol buffers, Avro)

- data shapes and models (tables, trees, graphs, cubes)

- type systems and schemas: atomic types, structured types (arrays, maps), set-based type systems (?, *, +)

- an overview of functional, declarative programming languages across data shapes (SQL, XQuery, JSONiq, Cypher, MDX)

- the most important query paradigms (selection, projection, joining, grouping, ordering, windowing)

- paradigms for parallel processing, two-stage (MapReduce) and DAG-based (Spark)

- resource management (YARN)

- what a data center is made of and why it matters (racks, nodes, ...)

- underlying architectures (internal machinery of HDFS, HBase, Spark, neo4j)

- optimization techniques (functional and declarative paradigms, query plans, rewrites, indexing)

- applications.

Large scale analytics and machine learning are outside of the scope of this course.
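As a minimal illustration of the two-stage MapReduce paradigm listed above (a sketch in plain Python, not course material), here is word count expressed as a map phase, a shuffle, and a reduce phase.

```python
from collections import defaultdict

def map_phase(documents):
    """Map: emit a (word, 1) pair for every word occurrence."""
    for doc in documents:
        for word in doc.split():
            yield word.lower(), 1

def shuffle(pairs):
    """Shuffle: group all emitted values by key, as the framework does between the two stages."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups.items()

def reduce_phase(groups):
    """Reduce: aggregate the values of each key (here: sum the counts)."""
    return {key: sum(values) for key, values in groups}

docs = ["big data is big", "data systems process big data"]
print(reduce_phase(shuffle(map_phase(docs))))
```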
LiteraturePapers from scientific conferences and journals. References will be given as part of the course material during the semester.
Prerequisites / NoticeThis course, in the autumn semester, is only intended for:
- Computer Science students
- Data Science students
- CBB students with a Computer Science background

Mobility students in CS are also welcome and encouraged to attend. If you experience any issue while registering, please contact the study administration and you will be gladly added.

For students of all other departments interested in this fascinating topic: I would love to have you visit my lectures as well! So there is a series of two courses specially designed for you:
- "Information Systems for Engineers" (SQL, relational databases): this Fall
- "Big Data for Engineers" (similar to Big Data, but adapted for non Computer Scientists): Spring 2021
There is no hard dependency between the two, so you can take them in either order, but it may be more enjoyable to start with Information Systems for Engineers.

Students who successfully completed Big Data for Engineers are not allowed to enrol in the course Big Data.
263-3845-00L | Data Management Systems | W | 8 credits | 3V + 1U + 3A | G. Alonso
AbstractThe course will cover the implementation aspects of data management systems using relational database engines as a starting point to cover the basic concepts of efficient data processing and then expanding those concepts to modern implementations in data centers and the cloud.
ObjectiveThe goal of the course is to convey the fundamental aspects of efficient data management from a systems implementation perspective: storage, access, organization, indexing, consistency, concurrency, transactions, distribution, query compilation vs interpretation, data representations, etc. Using conventional relational engines as a starting point, the course will aim at providing an in depth coverage of the latest technologies used in data centers and the cloud to implement large scale data processing in various forms.
ContentThe course will first cover fundamental concepts in data management: storage, locality, query optimization, declarative interfaces, concurrency control and recovery, buffer managers, management of the memory hierarchy, presenting them in a system-independent manner. The course will place a special emphasis on understanding these basic principles, as they are key to understanding what problems existing systems try to address. It will then proceed to explore their implementation in modern relational engines supporting SQL, and then expand to the range of systems used in the cloud: key value stores, geo-replication, query as a service, serverless, large scale analytics engines, etc.
LiteratureThe main source of information for the course will be articles and research papers describing the architecture of the systems discussed. The list of papers will be provided at the beginning of the course.
Elective Courses
Number | Title | Type | ECTS | Hours | Lecturers
263-2400-00L | Reliable and Interpretable Artificial Intelligence | W | 6 credits | 2V + 2U + 1A | M. Vechev
AbstractCreating reliable and explainable probabilistic models is a fundamental challenge to solving the artificial intelligence problem. This course covers some of the latest and most exciting advances that bring us closer to constructing such models.
ObjectiveThe main objective of this course is to expose students to the latest and most exciting research in the area of explainable and interpretable artificial intelligence, a topic of fundamental and increasing importance. Upon completion of the course, the students should have mastered the underlying methods and be able to apply them to a variety of problems.

To facilitate deeper understanding, an important part of the course will be a group hands-on programming project where students will build a system based on the learned material.
ContentThe course covers some of the latest research (over the last 2-3 years) underlying the creation of safe, trustworthy, and reliable AI (more information here: Link):

* Adversarial Attacks on Deep Learning (noise-based, geometry attacks, sound attacks, physical attacks, autonomous driving, out-of-distribution)
* Defenses against attacks
* Combining gradient-based optimization with logic for encoding background knowledge
* Complete Certification of deep neural networks via automated reasoning (e.g., via numerical abstractions, mixed-integer solvers).
* Probabilistic certification of deep neural networks
* Training deep neural networks to be provably robust via automated reasoning
* Understanding and Interpreting Deep Networks
* Probabilistic Programming
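As a toy illustration of the adversarial-attack topic listed above (a sketch, not course material), the following applies the fast gradient sign method to a linear logistic model in numpy; a deep network would be attacked the same way, only with the gradient obtained by backpropagation. The weights and inputs are random placeholders.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, eps):
    """One FGSM step: move x in the sign direction of the loss gradient, within an L-inf budget eps."""
    margin = y * np.dot(w, x)
    grad_x = -y * (1.0 - sigmoid(margin)) * w    # gradient of log(1 + exp(-y * w.x)) w.r.t. x
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=20)                          # stand-in for a trained linear classifier
x = rng.normal(size=20)
y = 1.0 if np.dot(w, x) > 0 else -1.0            # label the point with the model's own prediction
x_adv = fgsm(x, y, w, eps=0.3)
print("clean score:", np.dot(w, x), " adversarial score:", np.dot(w, x_adv))
```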
Prerequisites / NoticeWhile not a formal requirement, the course assumes familiarity with basics of machine learning (especially probability theory, linear algebra, gradient descent, and neural networks). These topics are usually covered in “Intro to ML” classes at most institutions (e.g., “Introduction to Machine Learning” at ETH).

For solving assignments, some programming experience in Python is expected.
263-3210-00L | Deep Learning (restricted registration) | W | 8 credits | 3V + 2U + 2A | T. Hofmann
AbstractDeep learning is an area within machine learning that deals with algorithms and models that automatically induce multi-level data representations.
ObjectiveIn recent years, deep learning and deep networks have significantly improved the state-of-the-art in many application domains such as computer vision, speech recognition, and natural language processing. This class will cover the mathematical foundations of deep learning and provide insights into model design, training, and validation. The main objective is a profound understanding of why these methods work and how. There will also be a rich set of hands-on tasks and practical projects to familiarize students with this emerging technology.
Prerequisites / NoticeThis is an advanced level course that requires some basic background in machine learning. More importantly, students are expected to have a very solid mathematical foundation, including linear algebra, multivariate calculus, and probability. The course will make heavy use of mathematics and is not (!) meant to be an extended tutorial of how to train deep networks with tools like Torch or Tensorflow, although that may be a side benefit.

The participation in the course is subject to the following condition:
- Students must have taken the exam in Advanced Machine Learning (252-0535-00) or have acquired equivalent knowledge, see exhaustive list below:

Advanced Machine Learning
Link

Computational Intelligence Lab
Link

Introduction to Machine Learning
Link

Statistical Learning Theory
Link

Computational Statistics
Link

Probabilistic Artificial Intelligence
Link
263-5210-00L | Probabilistic Artificial Intelligence (restricted registration) | W | 8 credits | 3V + 2U + 2A | A. Krause
AbstractThis course introduces core modeling techniques and algorithms from machine learning, optimization and control for reasoning and decision making under uncertainty, and studies applications in areas such as robotics and the Internet.
ObjectiveHow can we build systems that perform well in uncertain environments and unforeseen situations? How can we develop systems that exhibit "intelligent" behavior, without prescribing explicit rules? How can we build systems that learn from experience in order to improve their performance? We will study core modeling techniques and algorithms from statistics, optimization, planning, and control and study applications in areas such as sensor networks, robotics, and the Internet. The course is designed for graduate students.
ContentTopics covered:
- Probability
- Probabilistic inference (variational inference, MCMC)
- Bayesian learning (Gaussian processes, Bayesian deep learning)
- Probabilistic planning (MDPs, POMDPs)
- Multi-armed bandits and Bayesian optimization
- Reinforcement learning
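To give a feel for the multi-armed-bandit topic listed above, here is a minimal UCB1 sketch on Bernoulli arms (Python/numpy, not course material); the arm means are invented for the example.

```python
import numpy as np

def ucb1(means, horizon, rng):
    """UCB1: pull the arm with the highest empirical mean plus exploration bonus."""
    k = len(means)
    counts, sums, regret = np.zeros(k), np.zeros(k), 0.0
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1                                           # initialise: pull each arm once
        else:
            arm = int(np.argmax(sums / counts + np.sqrt(2.0 * np.log(t) / counts)))
        reward = float(rng.random() < means[arm])                 # Bernoulli reward
        counts[arm] += 1
        sums[arm] += reward
        regret += max(means) - means[arm]                         # pseudo-regret of this pull
    return regret

rng = np.random.default_rng(0)
print("cumulative regret over 5000 pulls:", ucb1([0.3, 0.5, 0.7], horizon=5000, rng=rng))
```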
Prerequisites / NoticeSolid basic knowledge in statistics, algorithms and programming.
The material covered in the course "Introduction to Machine Learning" is considered as a prerequisite.
Software Engineering
Core Courses
Number | Title | Type | ECTS | Hours | Lecturers
252-0237-00L | Concepts of Object-Oriented Programming | W | 8 credits | 3V + 2U + 2A | P. Müller
AbstractCourse that focuses on an in-depth understanding of object-oriented programming and compares designs of object-oriented programming languages. Topics include different flavors of type systems, inheritance models, encapsulation in the presence of aliasing, object and class initialization, program correctness, reflection
ObjectiveAfter this course, students will:
Have a deep understanding of advanced concepts of object-oriented programming and their support through various language features. Be able to understand language concepts on a semantic level and be able to compare and evaluate language designs.
Be able to learn new languages more rapidly.
Be aware of many subtle problems of object-oriented programming and know how to avoid them.
ContentThe main goal of this course is to convey a deep understanding of the key concepts of sequential object-oriented programming and their support in different programming languages. This is achieved by studying how important challenges are addressed through language features and programming idioms. In particular, the course discusses alternative language designs by contrasting solutions in languages such as C++, C#, Eiffel, Java, Python, and Scala. The course also introduces novel ideas from research languages that may influence the design of future mainstream languages.

The topics discussed in the course include among others:
The pros and cons of different flavors of type systems (for instance, static vs. dynamic typing, nominal vs. structural, syntactic vs. behavioral typing)
The key problems of single and multiple inheritance and how different languages address them
Generic type systems, in particular, Java generics, C# generics, and C++ templates
The situations in which object-oriented programming does not provide encapsulation, and how to avoid them
The pitfalls of object initialization, exemplified by a research type system that prevents null pointer dereferencing
How to maintain the consistency of data structures
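As a small illustration of the nominal-vs-structural distinction listed above (a sketch in Python, not course material), typing.Protocol gives structural conformance: any class with the right method shape is accepted, without naming the interface.

```python
from typing import Protocol

class Drawable(Protocol):
    """Structural interface: anything with a draw() -> str method conforms."""
    def draw(self) -> str: ...

class Circle:                       # never declares that it implements Drawable
    def draw(self) -> str:
        return "circle"

def render(shape: Drawable) -> None:
    print(shape.draw())

render(Circle())                    # accepted structurally; a nominal type system (e.g. Java
                                    # interfaces) would require an explicit 'implements' clause
```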
LiteratureWill be announced in the lecture.
Prerequisites / NoticePrerequisites:
Mastering at least one object-oriented programming language (this course will NOT provide an introduction to object-oriented programming); programming experience
263-2800-00L | Design of Parallel and High-Performance Computing | W | 9 credits | 3V + 2U + 3A | T. Hoefler, M. Püschel
AbstractAdvanced topics in parallel and high-performance computing.
ObjectiveUnderstand concurrency paradigms and models from a higher perspective and acquire skills for designing, structuring and developing possibly large parallel high-performance software systems. Become able to distinguish parallelism in problem space and in machine space. Become familiar with important technical concepts and with concurrency folklore.
ContentWe will cover all aspects of high-performance computing, ranging from architecture through programming up to algorithms. We will start with a discussion of caches and cache coherence in practical computer systems. We will dive into parallel programming concepts such as memory models, locks, and lock-free programming. We will cover performance modeling and parallel design principles as well as basic parallel algorithms.
Prerequisites / NoticeThis class is intended for the Computer Science Masters curriculum. Students must have basic knowledge in programming in C as well as computer science theory. Students should be familiar with the material covered in the ETH computer science first-year courses "Parallele Programmierung (parallel programming)" and "Algorithmen und Datenstrukturen (algorithm and data structures)" or equivalent courses.
Elective Courses
Number | Title | Type | ECTS | Hours | Lecturers
263-2400-00L | Reliable and Interpretable Artificial Intelligence | W | 6 credits | 2V + 2U + 1A | M. Vechev
AbstractCreating reliable and explainable probabilistic models is a fundamental challenge to solving the artificial intelligence problem. This course covers some of the latest and most exciting advances that bring us closer to constructing such models.
ObjectiveThe main objective of this course is to expose students to the latest and most exciting research in the area of explainable and interpretable artificial intelligence, a topic of fundamental and increasing importance. Upon completion of the course, the students should have mastered the underlying methods and be able to apply them to a variety of problems.

To facilitate deeper understanding, an important part of the course will be a group hands-on programming project where students will build a system based on the learned material.
ContentThe course covers some of the latest research (over the last 2-3 years) underlying the creation of safe, trustworthy, and reliable AI (more information here: Link):

* Adversarial Attacks on Deep Learning (noise-based, geometry attacks, sound attacks, physical attacks, autonomous driving, out-of-distribution)
* Defenses against attacks
* Combining gradient-based optimization with logic for encoding background knowledge
* Complete Certification of deep neural networks via automated reasoning (e.g., via numerical abstractions, mixed-integer solvers).
* Probabilistic certification of deep neural networks
* Training deep neural networks to be provably robust via automated reasoning
* Understanding and Interpreting Deep Networks
* Probabilistic Programming
Prerequisites / NoticeWhile not a formal requirement, the course assumes familiarity with basics of machine learning (especially probability theory, linear algebra, gradient descent, and neural networks). These topics are usually covered in “Intro to ML” classes at most institutions (e.g., “Introduction to Machine Learning” at ETH).

For solving assignments, some programming experience in Python is expected.
Theoretical Computer Science
Core Courses
Number | Title | Type | ECTS | Hours | Lecturers
252-0417-00L | Randomized Algorithms and Probabilistic Methods | W | 10 credits | 3V + 2U + 4A | A. Steger
Does not take place this semester.
AbstractLas Vegas & Monte Carlo algorithms; inequalities of Markov, Chebyshev, Chernoff; negative correlation; Markov chains: convergence, rapidly mixing; generating functions; Examples include: min cut, median, balls and bins, routing in hypercubes, 3SAT, card shuffling, random walks
ObjectiveAfter this course students will know fundamental techniques from probabilistic combinatorics for designing randomized algorithms and will be able to apply them to solve typical problems in these areas.
ContentRandomized Algorithms are algorithms that "flip coins" to take certain decisions. This concept extends the classical model of deterministic algorithms and has become very popular and useful within the last twenty years. In many cases, randomized algorithms are faster, simpler or just more elegant than deterministic ones. In the course, we will discuss basic principles and techniques and derive from them a number of randomized methods for problems in different areas.
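As a small taste of the min cut example mentioned in the abstract, here is a simplified sketch of Karger's randomized contraction algorithm (Python, not course material); repeating the contraction many times makes finding a minimum cut likely. The toy graph is invented.

```python
import random

def contract_once(edges, n_vertices, rng):
    """One run of randomized contraction: merge endpoints of random edges until two groups remain."""
    parent = list(range(n_vertices))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    remaining = n_vertices
    while remaining > 2:
        u, v = rng.choice(edges)
        ru, rv = find(u), find(v)
        if ru != rv:                 # skip self-loops, contract otherwise
            parent[ru] = rv
            remaining -= 1
    return sum(1 for u, v in edges if find(u) != find(v))   # edges crossing the 2-way partition

# two triangles {0,1,2} and {3,4,5} joined by the edges (1,3) and (2,3): the minimum cut is 2
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4), (3, 5), (4, 5)]
rng = random.Random(0)
print("smallest cut found:", min(contract_once(edges, 6, rng) for _ in range(50)))
```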
Lecture notesYes.
Literature- Randomized Algorithms, Rajeev Motwani and Prabhakar Raghavan, Cambridge University Press (1995)
- Probability and Computing, Michael Mitzenmacher and Eli Upfal, Cambridge University Press (2005)
Elective Courses
Number | Title | Type | ECTS | Hours | Lecturers
252-0535-00L | Advanced Machine Learning | W | 10 credits | 3V + 2U + 4A | J. M. Buhmann, C. Cotrini Jimenez
AbstractMachine learning algorithms provide analytical methods to search data sets for characteristic patterns. Typical tasks include the classification of data, function fitting and clustering, with applications in image and speech analysis, bioinformatics and exploratory data analysis. This course is accompanied by practical machine learning projects.
ObjectiveStudents will be familiarized with advanced concepts and algorithms for supervised and unsupervised learning, and will reinforce the statistics knowledge that is indispensable for solving modeling problems under uncertainty. Key concepts are the generalization ability of algorithms and systematic approaches to modeling and regularization. Machine learning projects will provide an opportunity to test the machine learning algorithms on real-world data.
ContentThe theory of fundamental machine learning concepts is presented in the lecture, and illustrated with relevant applications. Students can deepen their understanding by solving both pen-and-paper and programming exercises, where they implement and apply famous algorithms to real-world data.

Topics covered in the lecture include:

Fundamentals:
What is data?
Bayesian Learning
Computational learning theory

Supervised learning:
Ensembles: Bagging and Boosting
Max Margin methods
Neural networks

Unsupervised learning:
Dimensionality reduction techniques
Clustering
Mixture Models
Non-parametric density estimation
Learning Dynamical Systems
Lecture notesNo lecture notes, but slides will be made available on the course webpage.
LiteratureC. Bishop. Pattern Recognition and Machine Learning. Springer 2007.

R. Duda, P. Hart, and D. Stork. Pattern Classification. John Wiley &
Sons, second edition, 2001.

T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical
Learning: Data Mining, Inference and Prediction. Springer, 2001.

L. Wasserman. All of Statistics: A Concise Course in Statistical
Inference. Springer, 2004.
Prerequisites / NoticeThe course requires solid basic knowledge in analysis, statistics and numerical methods for CSE as well as practical programming experience for solving assignments.
Students should have followed at least "Introduction to Machine Learning" or an equivalent course offered by another institution.

PhD students are required to obtain a passing grade in the course (4.0 or higher based on project and exam) to gain credit points.
252-1425-00L | Geometry: Combinatorics and Algorithms | W | 8 credits | 3V + 2U + 2A | B. Gärtner, E. Welzl, M. Hoffmann, M. Wettstein
AbstractGeometric structures are useful in many areas, and there is a need to understand their structural properties, and to work with them algorithmically. The lecture addresses theoretical foundations concerning geometric structures. Central objects of interest are triangulations. We study combinatorial (Does a certain object exist?) and algorithmic questions (Can we find a certain object efficiently?)
ObjectiveThe goal is to make students familiar with fundamental concepts, techniques and results in combinatorial and computational geometry, so as to enable them to model, analyze, and solve theoretical and practical problems in the area and in various application domains.
In particular, we want to prepare students for conducting independent research, for instance, within the scope of a thesis project.
ContentPlanar and geometric graphs, embeddings and their representation (Whitney's Theorem, canonical orderings, DCEL), polygon triangulations and the art gallery theorem, convexity in R^d, planar convex hull algorithms (Jarvis Wrap, Graham Scan, Chan's Algorithm), point set triangulations, Delaunay triangulations (Lawson flips, lifting map, randomized incremental construction), Voronoi diagrams, the Crossing Lemma and incidence bounds, line arrangements (duality, Zone Theorem, ham-sandwich cuts), 3-SUM hardness, counting planar triangulations.
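As a concrete example of the planar convex hull algorithms listed above, here is a monotone-chain sketch (a close relative of Graham Scan; Python, not course material) on a hand-made point set.

```python
def cross(o, a, b):
    """2D cross product of vectors OA and OB; positive for a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain: sort points, then build lower and upper hulls with a stack."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def half_hull(ps):
        hull = []
        for p in ps:
            while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
                hull.pop()               # pop points that would create a non-left turn
            hull.append(p)
        return hull
    lower, upper = half_hull(pts), half_hull(reversed(pts))
    return lower[:-1] + upper[:-1]       # counter-clockwise hull without duplicated endpoints

print(convex_hull([(0, 0), (2, 0), (1, 1), (2, 2), (0, 2), (1, 3)]))
```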
Lecture notesyes
LiteratureMark de Berg, Marc van Kreveld, Mark Overmars, Otfried Cheong, Computational Geometry: Algorithms and Applications, Springer, 3rd ed., 2008.
Satyan Devadoss, Joseph O'Rourke, Discrete and Computational Geometry, Princeton University Press, 2011.
Stefan Felsner, Geometric Graphs and Arrangements: Some Chapters from Combinatorial Geometry, Teubner, 2004.
Jiri Matousek, Lectures on Discrete Geometry, Springer, 2002.
Takao Nishizeki, Md. Saidur Rahman, Planar Graph Drawing, World Scientific, 2004.
Prerequisites / NoticePrerequisites: The course assumes basic knowledge of discrete mathematics and algorithms, as supplied in the first semesters of Bachelor Studies at ETH.
Outlook: In the following spring semester there is a seminar "Geometry: Combinatorics and Algorithms" that builds on this course. There are ample possibilities for Semester-, Bachelor- and Master Thesis projects in the area.
263-4500-00L | Advanced Algorithms | W | 9 credits | 3V + 2U + 3A | M. Ghaffari
AbstractThis is a graduate-level course on algorithm design (and analysis). It covers a range of topics and techniques in approximation algorithms, sketching and streaming algorithms, and online algorithms.
ObjectiveThis course familiarizes the students with some of the main tools and techniques in modern subareas of algorithm design.
ContentThe lectures will cover a range of topics, tentatively including the following: graph sparsifications while preserving cuts or distances, various approximation algorithms techniques and concepts, metric embeddings and probabilistic tree embeddings, online algorithms, multiplicative weight updates, streaming algorithms, sketching algorithms, and derandomization.
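To illustrate the multiplicative weight updates mentioned above, here is a short Hedge-style sketch (Python/numpy, not course material) on randomly generated expert losses; the loss matrix and learning rate are made up.

```python
import numpy as np

def multiplicative_weights(losses, eta=0.1):
    """Hedge: keep a weight per expert and shrink it exponentially in the expert's observed loss."""
    n_rounds, n_experts = losses.shape
    w = np.ones(n_experts)
    total = 0.0
    for t in range(n_rounds):
        p = w / w.sum()                    # play the normalized weight distribution
        total += p @ losses[t]             # expected loss suffered in this round
        w *= np.exp(-eta * losses[t])      # multiplicative update
    return total, w / w.sum()

rng = np.random.default_rng(0)
losses = rng.random((1000, 5))
losses[:, 2] *= 0.3                        # expert 2 is consistently better than the rest
total, final_p = multiplicative_weights(losses)
print("algorithm loss:", round(total, 1), " best single expert:", round(losses.sum(axis=0).min(), 1))
print("final distribution over experts:", np.round(final_p, 2))
```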
Lecture notesLink
Prerequisites / NoticeThis course is designed for masters and doctoral students and it especially targets those interested in theoretical computer science, but it should also be accessible to last-year bachelor students.

Sufficient comfort with both (A) Algorithm Design & Analysis and (B) Probability & Concentrations. E.g., having passed the course Algorithms, Probability, and Computing (APC) is highly recommended, though not required formally. If you are not sure whether you're ready for this class or not, please consult the instructor.
401-3901-00L | Mathematical Optimization | W | 11 credits | 4V + 2U | R. Zenklusen
AbstractMathematical treatment of diverse optimization techniques.
ObjectiveThe goal of this course is to get a thorough understanding of various classical mathematical optimization techniques with an emphasis on polyhedral approaches. In particular, we want students to develop a good understanding of some important problem classes in the field, of structural mathematical results linked to these problems, and of solution approaches based on this structural understanding.
ContentKey topics include:
- Linear programming and polyhedra;
- Flows and cuts;
- Combinatorial optimization problems and techniques;
- Equivalence between optimization and separation;
- Brief introduction to Integer Programming.
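To connect the linear programming topic above to something executable, here is a tiny sketch using scipy.optimize.linprog (assuming scipy is available; the toy LP is made up and not course material).

```python
from scipy.optimize import linprog

# maximize 3x + 2y  subject to  x + y <= 4,  x + 3y <= 6,  x >= 0, y >= 0
# linprog minimizes, so we negate the objective vector
res = linprog(c=[-3, -2], A_ub=[[1, 1], [1, 3]], b_ub=[4, 6], bounds=[(0, None), (0, None)])
print("optimal value:", -res.fun, " attained at (x, y) =", res.x)
```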
Literature- Bernhard Korte, Jens Vygen: Combinatorial Optimization. 6th edition, Springer, 2018.
- Alexander Schrijver: Combinatorial Optimization: Polyhedra and Efficiency. Springer, 2003. This work has 3 volumes.
- Ravindra K. Ahuja, Thomas L. Magnanti, James B. Orlin. Network Flows: Theory, Algorithms, and Applications. Prentice Hall, 1993.
- Alexander Schrijver: Theory of Linear and Integer Programming. John Wiley, 1986.
Prerequisites / NoticeSolid background in linear algebra.
Visual Computing
Core Courses
Number | Title | Type | ECTS | Hours | Lecturers
252-0535-00L | Advanced Machine Learning | W | 10 credits | 3V + 2U + 4A | J. M. Buhmann, C. Cotrini Jimenez
AbstractMachine learning algorithms provide analytical methods to search data sets for characteristic patterns. Typical tasks include the classification of data, function fitting and clustering, with applications in image and speech analysis, bioinformatics and exploratory data analysis. This course is accompanied by practical machine learning projects.
ObjectiveStudents will be familiarized with advanced concepts and algorithms for supervised and unsupervised learning, and will reinforce the statistics knowledge that is indispensable for solving modeling problems under uncertainty. Key concepts are the generalization ability of algorithms and systematic approaches to modeling and regularization. Machine learning projects will provide an opportunity to test the machine learning algorithms on real-world data.
ContentThe theory of fundamental machine learning concepts is presented in the lecture, and illustrated with relevant applications. Students can deepen their understanding by solving both pen-and-paper and programming exercises, where they implement and apply famous algorithms to real-world data.

Topics covered in the lecture include:

Fundamentals:
What is data?
Bayesian Learning
Computational learning theory

Supervised learning:
Ensembles: Bagging and Boosting
Max Margin methods
Neural networks

Unsupervised learning:
Dimensionality reduction techniques
Clustering
Mixture Models
Non-parametric density estimation
Learning Dynamical Systems
Lecture notesNo lecture notes, but slides will be made available on the course webpage.
LiteratureC. Bishop. Pattern Recognition and Machine Learning. Springer 2007.

R. Duda, P. Hart, and D. Stork. Pattern Classification. John Wiley &
Sons, second edition, 2001.

T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical
Learning: Data Mining, Inference and Prediction. Springer, 2001.

L. Wasserman. All of Statistics: A Concise Course in Statistical
Inference. Springer, 2004.
Prerequisites / NoticeThe course requires solid basic knowledge in analysis, statistics and numerical methods for CSE as well as practical programming experience for solving assignments.
Students should have followed at least "Introduction to Machine Learning" or an equivalent course offered by another institution.

PhD students are required to obtain a passing grade in the course (4.0 or higher based on project and exam) to gain credit points.
263-5902-00L | Computer Vision | W | 8 credits | 3V + 1U + 3A | M. Pollefeys, S. Tang, V. Ferrari
AbstractThe goal of this course is to provide students with a good understanding of computer vision and image analysis techniques. The main concepts and techniques will be studied in depth and practical algorithms and approaches will be discussed and explored through the exercises.
ObjectiveThe objectives of this course are:
1. To introduce the fundamental problems of computer vision.
2. To introduce the main concepts and techniques used to solve those.
3. To enable participants to implement solutions for reasonably complex problems.
4. To enable participants to make sense of the computer vision literature.
ContentCamera models and calibration, invariant features, multiple-view geometry, model fitting, stereo matching, segmentation, 2D shape matching, shape from silhouettes, optical flow, structure from motion, tracking, object recognition, object category recognition.
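As a small illustration of the model-fitting topic above, here is a RANSAC sketch for robust 2D line fitting (Python/numpy, not course material); the synthetic inliers and outliers are generated for the example.

```python
import numpy as np

def ransac_line(points, n_iters=200, threshold=0.1, seed=0):
    """RANSAC: repeatedly fit a line to a random minimal sample (2 points) and keep the
    hypothesis that agrees with the largest number of inliers."""
    rng = np.random.default_rng(seed)
    best_inliers, best_line = 0, None
    for _ in range(n_iters):
        p1, p2 = points[rng.choice(len(points), size=2, replace=False)]
        d = p2 - p1
        if np.hypot(*d) == 0:
            continue
        normal = np.array([-d[1], d[0]]) / np.hypot(*d)   # unit normal of the candidate line
        distances = np.abs((points - p1) @ normal)        # point-to-line distances
        inliers = int(np.sum(distances < threshold))
        if inliers > best_inliers:
            best_inliers, best_line = inliers, (p1, p2)
    return best_line, best_inliers

rng = np.random.default_rng(1)
xs = rng.uniform(0, 10, 100)
inlier_pts = np.stack([xs, 2 * xs + 1 + rng.normal(0, 0.05, 100)], axis=1)   # points near y = 2x + 1
outlier_pts = rng.uniform(0, 20, (30, 2))                                    # gross outliers
line, n_in = ransac_line(np.vstack([inlier_pts, outlier_pts]))
print("inliers supporting the best line:", n_in)
```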
Prerequisites / NoticeIt is recommended that students have taken the Visual Computing lecture or a similar course introducing basic image processing concepts before taking this course.
Elective Courses
Number | Title | Type | ECTS | Hours | Lecturers
252-0543-01L | Computer Graphics | W | 8 credits | 3V + 2U + 2A | M. Gross, M. Papas
AbstractThis course covers some of the fundamental concepts of computer graphics: the generation of photorealistic images from digital representations of 3D scenes, and image-based methods for recovering digital scene representations from captured images.
ObjectiveAt the end of the course the students will be able to build a rendering system. The students will study the basic principles of rendering and image synthesis. In addition, the course is intended to stimulate the students' curiosity to explore the field of computer graphics in subsequent courses or on their own.
ContentThis course covers fundamental concepts of modern computer graphics. Students will learn about 3D object representations and the details of how to generate photorealistic images from digital representations of 3D scenes. Starting with an introduction to 3D shape modeling, geometry representation and texture mapping, we will move on to the physics of light transport, acceleration structures, appearance modeling and Monte Carlo integration. We will apply these principles for computing light transport of direct and global illumination due to surfaces and participating media. We will end with an overview of modern image-based capture and image synthesis methods, covering topics such as geometry and material capture, light-fields and depth-image based rendering.
Lecture notesno
LiteratureBooks:
High Dynamic Range Imaging: Acquisition, Display, and Image-Based Lighting
Multiple view geometry in computer vision
Physically Based Rendering: From Theory to Implementation
Prerequisites / NoticePrerequisites:
Fundamentals of calculus and linear algebra, basic concepts of algorithms and data structures, programming skills in C++, Visual Computing course recommended.
The programming assignments will be in C++. This will not be taught in the class.
252-0546-00L | Physically-Based Simulation in Computer Graphics | W | 5 credits | 2V + 1U + 1A | V. da Costa de Azevedo, B. Solenthaler
AbstractThis lecture provides an introduction to physically-based animation in computer graphics and gives an overview of fundamental methods and algorithms. The practical exercises include three assignments which are to be solved in small groups. In an additional course project, topics from the lecture will be implemented into a 3D game or a comparable application.
ObjectiveThis lecture provides an introduction to physically-based animation in computer graphics and gives an overview of fundamental methods and algorithms. The practical exercises include three assignments which are to be solved in small groups. In an additional course project, topics from the lecture will be implemented into a 3D game or a comparable application.
ContentThe lecture covers topics in physically-based modeling, such as particle systems, mass-spring models, finite difference and finite element methods. These approaches are used to represent and simulate deformable objects or fluids with applications in animated movies, 3D games and medical systems. Furthermore, the lecture covers topics such as rigid body dynamics, collision detection, and character animation.
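As a minimal taste of the mass-spring models listed above, the following sketch (Python/numpy, not course material) integrates a single mass hanging from a spring with symplectic Euler; all physical constants are made up.

```python
import numpy as np

k, m, rest_len, g, dt = 50.0, 1.0, 1.0, 9.81, 1e-3      # spring stiffness, mass, rest length, gravity, time step
x = np.array([0.0, -1.5])                                # mass starts 1.5 below the anchor at the origin
v = np.array([0.0, 0.0])

for step in range(5000):
    length = np.linalg.norm(x)
    stretch = length - rest_len
    force = -k * stretch * x / length + m * np.array([0.0, -g])   # Hooke's law + gravity
    v += dt * force / m                                  # symplectic Euler: update velocity first,
    x += dt * v                                          # then position with the new velocity

print("position after 5 simulated seconds:", x)
```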
Prerequisites / NoticeFundamentals of calculus and physics, basic concepts of algorithms and data structures, basic programming skills in C++. Knowledge on numerical mathematics as well as ordinary and partial differential equations is an asset, but not required.
263-2400-00L | Reliable and Interpretable Artificial Intelligence | W | 6 credits | 2V + 2U + 1A | M. Vechev
AbstractCreating reliable and explainable probabilistic models is a fundamental challenge to solving the artificial intelligence problem. This course covers some of the latest and most exciting advances that bring us closer to constructing such models.
ObjectiveThe main objective of this course is to expose students to the latest and most exciting research in the area of explainable and interpretable artificial intelligence, a topic of fundamental and increasing importance. Upon completion of the course, the students should have mastered the underlying methods and be able to apply them to a variety of problems.

To facilitate deeper understanding, an important part of the course will be a group hands-on programming project where students will build a system based on the learned material.
ContentThe course covers some of the latest research (over the last 2-3 years) underlying the creation of safe, trustworthy, and reliable AI (more information here: Link):

* Adversarial Attacks on Deep Learning (noise-based, geometry attacks, sound attacks, physical attacks, autonomous driving, out-of-distribution)
* Defenses against attacks
* Combining gradient-based optimization with logic for encoding background knowledge
* Complete Certification of deep neural networks via automated reasoning (e.g., via numerical abstractions, mixed-integer solvers).
* Probabilistic certification of deep neural networks
* Training deep neural networks to be provably robust via automated reasoning
* Understanding and Interpreting Deep Networks
* Probabilistic Programming
Prerequisites / NoticeWhile not a formal requirement, the course assumes familiarity with basics of machine learning (especially probability theory, linear algebra, gradient descent, and neural networks). These topics are usually covered in “Intro to ML” classes at most institutions (e.g., “Introduction to Machine Learning” at ETH).

For solving assignments, some programming experience in Python is expected.
263-5210-00L | Probabilistic Artificial Intelligence (restricted registration) | W | 8 credits | 3V + 2U + 2A | A. Krause
AbstractThis course introduces core modeling techniques and algorithms from machine learning, optimization and control for reasoning and decision making under uncertainty, and studies applications in areas such as robotics and the Internet.
ObjectiveHow can we build systems that perform well in uncertain environments and unforeseen situations? How can we develop systems that exhibit "intelligent" behavior, without prescribing explicit rules? How can we build systems that learn from experience in order to improve their performance? We will study core modeling techniques and algorithms from statistics, optimization, planning, and control and study applications in areas such as sensor networks, robotics, and the Internet. The course is designed for graduate students.
ContentTopics covered:
- Probability
- Probabilistic inference (variational inference, MCMC)
- Bayesian learning (Gaussian processes, Bayesian deep learning)
- Probabilistic planning (MDPs, POMDPs)
- Multi-armed bandits and Bayesian optimization
- Reinforcement learning
Prerequisites / NoticeSolid basic knowledge in statistics, algorithms and programming.
The material covered in the course "Introduction to Machine Learning" is considered as a prerequisite.