Search result: Catalogue data in Spring Semester 2021
|Data Science Master|
|Information and Learning|
|227-0434-10L||Mathematics of Information||W||8 credits||3V + 2U + 2A||H. Bölcskei|
|Abstract||The class focuses on mathematical aspects of |
1. Information science: Sampling theorems, frame theory, compressed sensing, sparsity, super-resolution, spectrum-blind sampling, subspace algorithms, dimensionality reduction
2. Learning theory: Approximation theory, greedy algorithms, uniform laws of large numbers, Rademacher complexity, Vapnik-Chervonenkis dimension
|Objective||The aim of the class is to familiarize the students with the most commonly used mathematical theories in data science, high-dimensional data analysis, and learning theory. The class consists of the lecture, exercise sessions with homework problems, and of a research project, which can be carried out either individually or in groups. The research project consists of either 1. software development for the solution of a practical signal processing or machine learning problem or 2. the analysis of a research paper or 3. a theoretical research problem of suitable complexity. Students are welcome to propose their own project at the beginning of the semester. The outcomes of all projects have to be presented to the entire class at the end of the semester.|
|Content||Mathematics of Information|
1. Signal representations: Frame theory, wavelets, Gabor expansions, sampling theorems, density theorems
2. Sparsity and compressed sensing: Sparse linear models, uncertainty relations in sparse signal recovery, super-resolution, spectrum-blind sampling, subspace algorithms (ESPRIT), estimation in the high-dimensional noisy case, Lasso
3. Dimensionality reduction: Random projections, the Johnson-Lindenstrauss Lemma
Mathematics of Learning
4. Approximation theory: Nonlinear approximation theory, best M-term approximation, greedy algorithms, fundamental limits on compressibility of signal classes, Kolmogorov-Tikhomirov epsilon-entropy of signal classes, optimal compression of signal classes
5. Uniform laws of large numbers: Rademacher complexity, Vapnik-Chervonenkis dimension, classes with polynomial discrimination
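The Johnson-Lindenstrauss Lemma listed under topic 3 can be illustrated in a few lines of NumPy (an illustrative sketch of ours, not course material; the dimensions and seed are arbitrary): a random Gaussian projection approximately preserves pairwise distances.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 50, 1000, 300  # 50 points in R^1000, projected down to R^300
X = rng.standard_normal((n, d))

# Random projection scaled so that squared norms are preserved in expectation
P = rng.standard_normal((k, d)) / np.sqrt(k)
Y = X @ P.T

# Compare a pairwise distance before and after projection
orig = np.linalg.norm(X[0] - X[1])
proj = np.linalg.norm(Y[0] - Y[1])
distortion = proj / orig
print(distortion)  # close to 1
```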
|Lecture notes||Detailed lecture notes will be provided at the beginning of the semester.|
|Prerequisites / Notice||This course is aimed at students with a background in basic linear algebra, analysis, statistics, and probability. |
We encourage students who are interested in mathematical data science to take both this course and "401-4944-20L Mathematics of Data Science" by Prof. A. Bandeira. The two courses are designed to be complementary.
H. Bölcskei and A. Bandeira
|401-3632-00L||Computational Statistics||W||8 credits||3V + 1U||M. Mächler|
|Abstract||We discuss modern statistical methods for data analysis, including methods for data exploration, prediction and inference. We pay attention to algorithmic aspects, theoretical properties and practical considerations. The class is hands-on and methods are applied using the statistical programming language R.|
|Objective||The student obtains an overview of modern statistical methods for data analysis, including their algorithmic aspects and theoretical properties. The methods are applied using the statistical programming language R.|
|Content||See the class website|
|Prerequisites / Notice||At least one semester of (basic) probability and statistics.|
Programming experience is helpful but not required.
|261-5110-00L||Optimization for Data Science||W||10 credits||3V + 2U + 4A||B. Gärtner, D. Steurer, N. He|
|Abstract||This course provides an in-depth theoretical treatment of optimization methods that are particularly relevant in data science.|
|Objective||Understanding the theoretical guarantees (and their limits) of relevant optimization methods used in data science. Learning general paradigms to deal with optimization problems arising in data science.|
|Content||This course provides an in-depth theoretical treatment of optimization methods that are particularly relevant in machine learning and data science.|
In the first part of the course, we give a brief introduction to convex optimization, with some basic motivating examples from machine learning. Then we will analyse classical and more recent first- and second-order methods for convex optimization: gradient descent, Nesterov's accelerated method, proximal and splitting algorithms, subgradient descent, stochastic gradient descent, variance-reduced methods, Newton's method, and quasi-Newton methods. The emphasis will be on analysis techniques that occur repeatedly in convergence analyses for various classes of convex functions. We will also discuss some classical and recent theoretical results for nonconvex optimization.
In the second part, we discuss convex programming relaxations as a powerful and versatile paradigm for designing efficient algorithms to solve computational problems arising in data science. We will learn about this paradigm and develop a unified perspective on it through the lens of the sum-of-squares semidefinite programming hierarchy. As applications, we discuss non-negative matrix factorization, compressed sensing and sparse linear regression, matrix completion and phase retrieval, as well as robust estimation.
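As a toy illustration of the first-order methods analysed in the first part (a sketch of ours, not course material), plain gradient descent with step size 1/L on a convex quadratic converges to the minimizer:

```python
import numpy as np

# Gradient descent on the convex quadratic f(x) = 0.5 x^T A x - b^T x,
# whose minimizer solves A x = b.  Step size 1/L, with L the largest
# eigenvalue of A (the Lipschitz constant of the gradient).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
L = np.linalg.eigvalsh(A).max()

x = np.zeros(2)
for _ in range(500):
    grad = A @ x - b
    x = x - grad / L

x_star = np.linalg.solve(A, b)
print(np.allclose(x, x_star, atol=1e-6))  # True
```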
|Prerequisites / Notice||As background, we require material taught in the course "252-0209-00L Algorithms, Probability, and Computing". It is not necessary that participants have actually taken the course, but they should be prepared to catch up if necessary.|
|151-0566-00L||Recursive Estimation||W||4 credits||2V + 1U||R. D'Andrea|
|Abstract||Estimation of the state of a dynamic system based on a model and observations in a computationally efficient way.|
|Objective||Learn the basic recursive estimation methods and their underlying principles.|
|Content||Introduction to state estimation; probability review; Bayes' theorem; Bayesian tracking; extracting estimates from probability distributions; Kalman filter; extended Kalman filter; particle filter; observer-based control and the separation principle.|
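A minimal scalar Kalman filter, estimating a constant from noisy measurements, illustrates the recursive structure covered in the course (an illustrative sketch of ours, not course material; the noise levels and seed are arbitrary):

```python
import numpy as np

# Scalar Kalman filter estimating a constant x from noisy measurements
# z_k = x + v_k (measurement noise variance R); no process noise.
rng = np.random.default_rng(1)
x_true, R = 5.0, 0.5

x_hat, P = 0.0, 100.0  # prior mean and variance
for _ in range(200):
    z = x_true + rng.normal(0.0, np.sqrt(R))
    K = P / (P + R)          # Kalman gain
    x_hat = x_hat + K * (z - x_hat)
    P = (1 - K) * P          # posterior variance shrinks each step

print(abs(x_hat - x_true) < 0.2)  # estimate close to the true value
```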
|Lecture notes||Lecture notes available on course website: http://www.idsc.ethz.ch/education/lectures/recursive-estimation.html|
|Prerequisites / Notice||Requirements: Introductory probability theory and matrix-vector algebra.|
|227-0150-00L||Systems-on-Chip for Data Analytics and Machine Learning|
Previously "Energy-Efficient Parallel Computing Systems for Data Analytics"
|W||6 credits||4G||L. Benini|
|Abstract||Systems-on-chip architecture and related design issues with a focus on machine learning and data analytics applications. It will cover multi-cores, many-cores, vector engines, GP-GPUs, application-specific processors and heterogeneous compute accelerators. Special emphasis given to energy-efficiency issues and hardware-software techniques for power and energy minimization.|
|Objective||Give an in-depth understanding of the links and dependencies between architectures and their energy-efficient implementation, and provide comprehensive exposure to state-of-the-art systems-on-chip platforms for machine learning and data analytics. Practical experience will also be gained through exercises and mini-projects (hardware and software) assigned on specific topics.|
|Content||The course will cover advanced system-on-chip architectures, with an in-depth view on design challenges related to advanced silicon technology and state-of-the-art system integration options (nanometer silicon technology, novel storage devices, three-dimensional integration, advanced system packaging). The emphasis will be on programmable parallel architectures with application focus on machine learning and data analytics. The main SoC architectural families will be covered: namely, multi- and many-cores, GPUs, vector accelerators, application-specific processors, heterogeneous platforms. The course will cover the complex design choices required to achieve scalability and energy proportionality. The course will also delve into system design, touching on hardware-software tradeoffs and full-system analysis and optimization taking into account non-functional constraints and quality metrics, such as power consumption, thermal dissipation, reliability and variability. The application focus will be on machine learning both in the cloud and at the edges (near-sensor analytics).|
|Lecture notes||Slides will be provided to accompany lectures. Pointers to scientific literature will be given. Exercise scripts and tutorials will be provided.|
|Literature||John L. Hennessy, David A. Patterson, Computer Architecture: A Quantitative Approach (The Morgan Kaufmann Series in Computer Architecture and Design) 6th Edition, 2017.|
|Prerequisites / Notice||Knowledge of digital design at the level of "Design of Digital Circuits SS12" is required.|
Knowledge of basic VLSI design at the level of "VLSI I: Architectures of VLSI Circuits" is required.
|227-0155-00L||Machine Learning on Microcontrollers |
Number of participants limited to 40.
Registration in this class requires the permission of the instructors.
|W||6 credits||3G||M. Magno, L. Benini|
|Abstract||Machine Learning (ML) and artificial intelligence are pervading the digital society. Today, even low-power embedded systems incorporate ML, becoming increasingly “smart”. This lecture gives an overview of ML methods and algorithms to process and extract useful near-sensor information in end-nodes of the “internet-of-things”, using low-power microcontrollers (ARM Cortex-M; RISC-V).|
|Objective||Learn how to process data from sensors and extract useful information with low-power microprocessors using ML techniques. We will analyze data coming from real low-power sensors (accelerometers, microphones, ExG bio-signals, cameras…). The main objective is to study in detail how machine learning algorithms can be adapted to the performance constraints and limited resources of low-power microcontrollers, becoming tiny machine learning (TinyML) algorithms.|
|Content||The final goal of the course is a deep understanding of machine learning and its practical implementation on single- and multi-core microcontrollers, coupled with performance and energy efficiency analysis and optimization. The main topics of the course include:|
- Sensors and sensor data acquisition with low power embedded systems
- Machine Learning: Overview of supervised and unsupervised learning, with a focus on supervised learning (Decision Trees, Random Forests, Support Vector Machines, Artificial Neural Networks, Deep Learning, and Convolutional Networks)
- Low-power embedded systems and their architecture. Low Power microcontrollers (ARM-Cortex M) and RISC-V-based Parallel Ultra Low Power (PULP) systems-on-chip.
- Low power smart sensor system design: hardware-software tradeoffs, analysis, and optimization. Implementation and performance evaluation of ML in battery-operated embedded systems.
The laboratory exercises will show how to address concrete design problems, such as motion and gesture recognition, emotion detection, and image and sound classification, using real sensor data and real MCU boards.
Presentations from Ph.D. students and a visit to the Digital Circuits and Systems Group will introduce current research topics and international research projects.
|Lecture notes||Script and exercise sheets. Books will be suggested during the course.|
|Prerequisites / Notice||Prerequisites: Good experience in C language programming. Microprocessors and computer architecture. Basics of Digital Signal Processing. Some exposure to machine learning concepts is also desirable.|
Does not take place this semester.
|W||4 credits||2V + 1U||to be announced|
|Abstract||Probability. Stochastic processes. Stochastic differential equations. Itô calculus. Kalman filters. Stochastic optimal control. Applications in financial engineering.|
|Objective||Stochastic dynamic systems. Optimal control and filtering of stochastic systems. Examples in technology and finance.|
|Content||- Stochastic processes|
- Stochastic calculus (Ito)
- Stochastic differential equations
- Discrete time stochastic difference equations
- Stochastic processes AR, MA, ARMA, ARMAX, GARCH
- Kalman filter
- Stochastic optimal control
- Applications in finance and engineering
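The stochastic-differential-equation topics above can be sketched numerically. The following toy Euler-Maruyama simulation of an Ornstein-Uhlenbeck process (a standard example SDE; the parameters and seed are our own illustrative choices, not course material) shows mean reversion:

```python
import numpy as np

# Euler-Maruyama discretization of the Ornstein-Uhlenbeck SDE
#   dX_t = theta * (mu - X_t) dt + sigma dW_t
rng = np.random.default_rng(2)
theta, mu, sigma = 2.0, 1.0, 0.3
dt, n_steps, n_paths = 0.01, 1000, 2000

X = np.zeros(n_paths)  # all paths start at 0
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
    X = X + theta * (mu - X) * dt + sigma * dW

# After t = 10, the paths have relaxed to the stationary mean mu
print(abs(X.mean() - mu) < 0.05)  # True
```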
|Lecture notes||H. P. Geering et al., Stochastic Systems, Measurement and Control Laboratory, 2007 and handouts|
|227-0420-00L||Information Theory II||W||6 credits||4G||A. Lapidoth, S. M. Moser|
|Abstract||This course builds on Information Theory I. It introduces additional topics in single-user communication, connections between Information Theory and Statistics, and Network Information Theory.|
|Objective||The course's objective is to introduce the students to additional information measures and to equip them with the tools that are needed to conduct research in Information Theory as it relates to Communication Networks and to Statistics.|
|Content||Sanov's Theorem, Rényi entropy and guessing, differential entropy, maximum entropy, the Gaussian channel, the entropy-power inequality, the broadcast channel, the multiple-access channel, Slepian-Wolf coding, the Gelfand-Pinsker problem, and Fisher information.|
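One closed form underlying the differential-entropy topic, h(X) = ½ ln(2πeσ²) for a Gaussian, can be checked numerically (an illustrative sketch of ours, not course material):

```python
import numpy as np

# Differential entropy of a Gaussian: h(X) = 0.5 * ln(2 * pi * e * var),
# checked against the Monte Carlo estimate E[-ln p(X)].
rng = np.random.default_rng(5)
sigma = 2.0
samples = rng.normal(0.0, sigma, size=200_000)

closed_form = 0.5 * np.log(2 * np.pi * np.e * sigma ** 2)
log_density = -0.5 * np.log(2 * np.pi * sigma ** 2) - samples ** 2 / (2 * sigma ** 2)
monte_carlo = -log_density.mean()

print(abs(closed_form - monte_carlo) < 0.01)  # True: the two estimates agree
```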
|Literature||T.M. Cover and J.A. Thomas, Elements of Information Theory, second edition, Wiley 2006|
|Prerequisites / Notice||Basic introductory course on Information Theory.|
|227-0424-00L||Model- and Learning-Based Inverse Problems in Imaging||W||4 credits||2V + 1P||V. Vishnevskiy|
|Abstract||Reconstruction is an inverse problem which estimates images from noisy measurements. Model-based reconstructions use analytical models of the imaging process and priors. Data-based methods directly approximate inversion using training data. Combining these two approaches yields physics-aware neural nets and state-of-the-art imaging accuracy (MRI, US, CT, microscopy, non-destructive imaging).|
|Objective||The goal of this course is to introduce the mathematical models of imaging experiments and practice implementation of numerical methods to solve the corresponding inverse problem. Students will learn how to improve reconstruction accuracy by introducing prior knowledge in the form of regularization models and training data. Furthermore, students will practice incorporating imaging model knowledge into deep neural networks.|
|Content||The course is based on following fundamental fields: (i) numerical linear algebra, (ii) mathematical statistics and learning theory, (iii) convex optimization and (iv) signal processing. The first part of the course introduces classical linear and nonlinear methods for image reconstruction. The second part considers data-based regularization and covers modern deep learning approaches to inverse problems in imaging. Finally, we introduce advances in the actively developing field of experimental design in biomedical imaging (i.e. how to conduct an experiment in a way to enable the most accurate reconstruction).|
1. Introduction: Examples of inverse problems, general introduction. Refresh prerequisites.
2. Linear algebra in imaging: Refresh prerequisites. Demonstrate properties of operators employed in imaging.
3. Linear inverse problems and regularization: Classical theory of inverse problems. Introduce notion of ill-posedness and regularization.
4. Compressed sensing: Sparsity, basis-CS, TV-CS. Notion of analysis and synthesis forms of reconstruction problems. Application of PGD and ADMM to reconstruction.
5. Advanced priors and model selection: Total generalized variation, GMM priors, vectorial TV, low-rank, and tensor models. Stein's unbiased risk estimator.
6. Dictionary and prior learning: Classical dictionary learning. Gentle introduction to machine learning. Technical details of patch-based models.
7. Deep learning in image reconstruction: Generic convolutional-NN models (AUTOMAP, residual filtering, U-nets). The data generation process. Characterize the difference between model- and data-based reconstruction methods. Mode averaging.
8. Loop unrolling and physics-aware networks for reconstruction: Autograd, variational networks, with many examples and intuition. How to use them efficiently, e.g. adding preconditioners, attention, etc.
9. Generative models and uncertainty quantification: Amortized posterior, variational autoencoders, adversarial learning. Estimation uncertainty quantification.
10. Invertible networks for estimation: Gradient flows in networks, invertible neural networks for estimation problems.
11. Experimental design in imaging: Acquisition optimization for continuous models. How far can we exploit autograd?
12. Signal sampling optimization in MRI. Reinforcement learning: Acquisition optimization for discrete models. REINFORCE and policy gradients, variance minimization for discrete variables (RELAX, REBAR). Cartesian under-sampling pattern design.
13. Summary and exam preparation.
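The application of proximal gradient descent (PGD) to sparse reconstruction mentioned above can be sketched as ISTA, i.e. a gradient step on the data term followed by soft thresholding. This toy example (our own illustrative choices of sizes, seed, and regularization weight, not course material) recovers a sparse signal from few noiseless measurements:

```python
import numpy as np

# ISTA (proximal gradient descent) for the Lasso problem
#   min_x 0.5 * ||A x - y||^2 + lam * ||x||_1
rng = np.random.default_rng(3)
m, n = 60, 200

A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = np.array([1.5, -2.0, 1.0, -1.0, 2.0])
y = A @ x_true  # noiseless measurements of a 5-sparse signal

lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L with L = ||A||_2^2

x = np.zeros(n)
for _ in range(5000):
    z = x - step * (A.T @ (A @ x - y))                      # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0)  # soft threshold

rel = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(rel)  # small: the sparse signal is approximately recovered
```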
|Lecture notes||Lecture slides with references will be provided during the course.|
|Prerequisites / Notice||Students are expected to know the basics of (i) numerical linear algebra, (ii) applied methods of convex optimization, (iii) computational statistics, (iv) Matlab and Python.|
|227-0427-10L||Advanced Signal Analysis, Modeling, and Machine Learning||W||6 credits||4G||H.‑A. Loeliger|
|Abstract||The course develops a selection of topics pivoting around graphical models (factor graphs), state space methods, sparsity, and pertinent algorithms.|
|Objective||The course develops a selection of topics pivoting around factor graphs, state space methods, and pertinent algorithms:|
- factor graphs and message passing algorithms
- hidden-Markov models
- linear state space models, Kalman filtering, and recursive least squares
- Gaussian message passing
- Gibbs sampling, particle filter
- recursive local polynomial fitting & applications
- parameter learning by expectation maximization
- sparsity and spikes
- binary control and digital-to-analog conversion
- duality and factor graph transforms
|Lecture notes||Lecture notes|
|Prerequisites / Notice||Solid mathematical foundations (especially in probability, estimation, and linear algebra) as provided by the course "Introduction to Estimation and Machine Learning".|
|227-0432-00L||Learning, Classification and Compression||W||4 credits||2V + 1U||E. Riegler|
|Abstract||The course takes a theoretical approach to learning theory and classification, and gives an introduction to lossy and lossless compression for general sets and measures. We mainly focus on a probabilistic setting, where an underlying distribution must be learned/compressed. The concepts acquired in the course are of broad and general interest in the data sciences.|
|Objective||After attending this lecture and participating in the exercise sessions, students will have acquired a working knowledge of learning theory, classification, and compression.|
|Content||1. Learning Theory|
(a) Framework of Learning
(b) Hypothesis Spaces and Target Functions
(c) Reproducing Kernel Hilbert Spaces
(d) Bias-Variance Tradeoff
(e) Estimation of Sample and Approximation Error
2. Classification
(a) Binary Classifiers
(b) Support Vector Machines (separable case)
(c) Support Vector Machines (nonseparable case)
(d) Kernel Trick
3. Lossy and Lossless Compression
(a) Basics of Compression
(b) Compressed Sensing for General Sets and Measures
(c) Quantization and Rate Distortion Theory for General Sets and Measures
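The kernel trick listed under the classification topics can be demonstrated in a few lines (an illustrative sketch of ours, not course material): a polynomial kernel evaluation equals an inner product after an explicit feature map.

```python
import numpy as np

# The kernel trick: the polynomial kernel k(x, y) = (x . y)^2 equals an
# inner product in the explicit feature space
#   phi(x) = (x1^2, sqrt(2) x1 x2, x2^2).
def phi(v):
    return np.array([v[0] ** 2, np.sqrt(2) * v[0] * v[1], v[1] ** 2])

x = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])

k_implicit = (x @ y) ** 2          # kernel evaluation: cheap
k_explicit = phi(x) @ phi(y)       # inner product after the feature map

print(np.isclose(k_implicit, k_explicit))  # True
```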
|Lecture notes||Detailed lecture notes will be provided.|
|Prerequisites / Notice||This course is aimed at students with a solid background in measure theory and linear algebra and basic knowledge in functional analysis.|
|227-0558-00L||Principles of Distributed Computing||W||7 credits||2V + 2U + 2A||R. Wattenhofer, M. Ghaffari|
|Abstract||We study the fundamental issues underlying the design of distributed systems: communication, coordination, fault-tolerance, locality, parallelism, self-organization, symmetry breaking, synchronization, uncertainty. We explore essential algorithmic ideas and lower bound techniques.|
|Objective||Distributed computing is essential in modern computing and communications systems. Examples are on the one hand large-scale networks such as the Internet, and on the other hand multiprocessors such as your new multi-core laptop. This course introduces the principles of distributed computing, emphasizing the fundamental issues underlying the design of distributed systems and networks: communication, coordination, fault-tolerance, locality, parallelism, self-organization, symmetry breaking, synchronization, uncertainty. We explore essential algorithmic ideas and lower bound techniques, basically the "pearls" of distributed computing. We will cover a fresh topic every week.|
|Content||Distributed computing models and paradigms, e.g. message passing, shared memory, synchronous vs. asynchronous systems, time and message complexity, peer-to-peer systems, small-world networks, social networks, sorting networks, wireless communication, and self-organizing systems.|
Distributed algorithms, e.g. leader election, coloring, covering, packing, decomposition, spanning trees, mutual exclusion, store and collect, arrow, ivy, synchronizers, diameter, all-pairs-shortest-path, wake-up, and lower bounds
|Lecture notes||Available. Our course script is used at dozens of other universities around the world.|
|Literature||Lecture notes by Roger Wattenhofer. These lecture notes are used at about a dozen universities throughout the world.|
Distributed Computing: Fundamentals, Simulations and Advanced Topics
Hagit Attiya, Jennifer Welch.
McGraw-Hill Publishing, 1998, ISBN 0-07-709352-6
Introduction to Algorithms
Thomas Cormen, Charles Leiserson, Ronald Rivest.
The MIT Press, 1998, ISBN 0-262-53091-0 or 0-262-03141-8
Dissemination of Information in Communication Networks
Juraj Hromkovic, Ralf Klasing, Andrzej Pelc, Peter Ruzicka, Walter Unger.
Springer-Verlag, Berlin Heidelberg, 2005, ISBN 3-540-00846-2
Introduction to Parallel Algorithms and Architectures: Arrays, Trees, Hypercubes
Frank Thomson Leighton.
Morgan Kaufmann Publishers Inc., San Francisco, CA, 1991, ISBN 1-55860-117-1
Distributed Computing: A Locality-Sensitive Approach
David Peleg.
Society for Industrial and Applied Mathematics (SIAM), 2000, ISBN 0-89871-464-8
|Prerequisites / Notice||Course pre-requisites: Interest in algorithmic problems. (No particular course needed.)|
|227-0560-00L||Deep Learning for Autonomous Driving |
Registration in this class requires the permission of the instructors.
Class size will be limited to 80 students.
Please send an email to Dengxin Dai <email@example.com> about your courses/projects that are related to machine learning, computer vision, and Robotics.
|W||6 credits||3V + 2P||D. Dai, A. Liniger|
|Abstract||Autonomous driving has moved from the realm of science fiction to a very real possibility during the past twenty years, largely due to rapid developments of deep learning approaches, automotive sensors, and microprocessor capacity. This course covers the core techniques required for building a self-driving car, especially the practical use of deep learning through this theme.|
|Objective||Students will learn about the fundamental aspects of a self-driving car. They will also learn to use modern automotive sensors and HD navigational maps, and to implement, train and debug their own deep neural networks in order to gain a deep understanding of cutting-edge research in autonomous driving tasks, including perception, localization and control.|
After attending this course, students will:
1) understand the core technologies of building a self-driving car;
2) have a good overview over the current state of the art in self-driving cars;
3) be able to critically analyze and evaluate current research in this area;
4) be able to implement basic systems for multiple autonomous driving tasks.
|Content||We will focus on teaching the following topics centered on autonomous driving: deep learning, automotive sensors, multimodal driving datasets, road scene perception, ego-vehicle localization, path planning, and control.|
The course covers the following main areas:
I) Fundamentals
a) Fundamentals of a self-driving car
b) Fundamentals of deep-learning
II) Perception
a) Semantic segmentation and lane detection
b) Depth estimation with images and sparse LiDAR data
c) 3D object detection with images and LiDAR data
d) Object tracking and Lane Detection
III) Localization
a) GPS-based and Vision-based Localization
b) Visual Odometry and Lidar Odometry
IV) Path Planning and Control
a) Path planning for autonomous driving
b) Motion planning and vehicle control
c) Imitation learning and reinforcement learning for self driving cars
The exercise projects will involve training complex neural networks and applying them on real-world, multimodal driving datasets. In particular, students should be able to develop systems that deal with the following problems:
- Sensor calibration and synchronization to obtain multimodal driving data;
- Semantic segmentation and depth estimation with deep neural networks;
- 3D object detection and tracking in LiDAR point clouds
|Lecture notes||The lecture slides will be provided as a PDF.|
|Prerequisites / Notice||This is an advanced grad-level course. Students must have taken courses on machine learning and computer vision or have acquired equivalent knowledge. Students are expected to have a solid mathematical foundation, in particular in linear algebra, multivariate calculus, and probability. All practical exercises will require basic knowledge of Python and will use libraries such as PyTorch, scikit-learn and scikit-image.|
|252-0211-00L||Information Security||W||8 credits||4V + 3U||D. Basin, S. Capkun|
|Abstract||This course provides an introduction to Information Security. The focus|
is on fundamental concepts and models, basic cryptography, protocols and system security, and privacy and data protection. While the emphasis is on foundations, case studies will be given that examine different realizations of these ideas in practice.
|Objective||Master fundamental concepts in Information Security and their|
application to system building. (See objectives listed below for more details).
|Content||1. Introduction and Motivation (OBJECTIVE: broad conceptual overview of information security): implications of IT on society/economy, classical security problems, approaches to defining security and security goals, abstractions, assumptions, and trust, risk management and the human factor, course overview.|
2. Foundations of Cryptography (OBJECTIVE: understand basic cryptographic mechanisms and applications): introduction, basic concepts in cryptography (overview, types of security, computational hardness), abstraction of channel security properties, symmetric encryption, hash functions, message authentication codes, public-key distribution, public-key cryptosystems, digital signatures; application case studies: comparison of encryption at different layers, VPN, SSL, digital payment systems, blind signatures, e-cash, time stamping.
3. Key Management and Public-Key Infrastructures (OBJECTIVE: understand the basic mechanisms relevant in an Internet context): key management in distributed systems, exact characterization of requirements, the role of trust, public-key certificates, public-key infrastructures, digital evidence and non-repudiation; application case studies: Kerberos, X.509, PGP.
4. Security Protocols (OBJECTIVE: understand network-oriented security, i.e., how to employ building blocks to secure applications in (open) networks): introduction, requirements/properties, establishing shared secrets, principal and message origin authentication, environmental assumptions, Dolev-Yao intruder model and variants, illustrative examples, formal models and reasoning, trace-based interleaving semantics, inductive verification or model-checking for falsification, techniques for protocol design; application case study 1: from Needham-Schroeder Shared-Key to Kerberos; application case study 2: from DH to IKE.
5. Access Control and Security Policies (OBJECTIVE: study system-oriented security, i.e., policies, models, and mechanisms): motivation (relationship to CIA, relationship to crypto) and examples; concepts: policies versus models versus mechanisms, DAC and MAC; modeling formalisms: Access Control Matrix model, Role-Based Access Control, Bell-LaPadula, Harrison-Ruzzo-Ullman, information flow, Chinese Wall, Biba, Clark-Wilson; system mechanisms: operating systems, hardware security features, reference monitors, file-system protection; application case studies.
6. Anonymity and Privacy (OBJECTIVE: examine protection goals beyond standard CIA and corresponding mechanisms): motivation and definitions; privacy: policies and policy languages, mechanisms, problems; anonymity: simple mechanisms (pseudonyms, proxies); application case studies: mix networks and crowds.
7. Larger application case study: GSM, mobility.|
|252-0526-00L||Statistical Learning Theory||W||8 credits||3V + 2U + 2A||J. M. Buhmann, C. Cotrini Jimenez|
|Abstract||The course covers advanced methods of statistical learning: |
- Variational methods and optimization.
- Deterministic annealing.
- Clustering for diverse types of data.
- Model validation by information theory.
|Objective||The course surveys recent methods of statistical learning. The fundamentals of machine learning, as presented in the courses "Introduction to Machine Learning" and "Advanced Machine Learning", are expanded from the perspective of statistical learning.|
|Content||- Variational methods and optimization. We consider optimization approaches for problems where the optimizer is a probability distribution. We will discuss concepts like maximum entropy, information bottleneck, and deterministic annealing.|
- Clustering. This is the problem of sorting data into groups without using training samples. We discuss alternative notions of "similarity" between data points and adequate optimization procedures.
- Model selection and validation. This refers to the question of how complex the chosen model should be. In particular, we present an information theoretic approach for model validation.
- Statistical physics models. We discuss approaches for approximately optimizing large systems, which originate in statistical physics (free energy minimization applied to spin glasses and other models). We also study sampling methods based on these models.
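For the clustering topic, a plain k-means iteration (which can be viewed as the zero-temperature limit of the deterministic annealing scheme treated in the course) serves as a minimal sketch; the data and seed in this toy example are our own illustrative choices, not course material:

```python
import numpy as np

# Minimal k-means: alternate nearest-center assignment and center updates.
rng = np.random.default_rng(4)
a = rng.normal(0.0, 0.5, size=(50, 2))       # cluster around (0, 0)
b = rng.normal(5.0, 0.5, size=(50, 2))       # cluster around (5, 5)
X = np.vstack([a, b])

centers = np.array([[1.0, 1.0], [4.0, 4.0]])  # rough initial guesses
for _ in range(20):
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    centers = np.array([X[labels == j].mean(axis=0) for j in range(2)])

print(np.allclose(centers, [[0, 0], [5, 5]], atol=0.3))  # True
```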
|Lecture notes||A draft of a script will be provided. Lecture slides will be made available.|
|Literature||Hastie, Tibshirani, Friedman: The Elements of Statistical Learning, Springer, 2001.|
L. Devroye, L. Gyorfi, and G. Lugosi: A probabilistic theory of pattern recognition. Springer, New York, 1996
|Prerequisites / Notice||Knowledge of machine learning (introduction to machine learning and/or advanced machine learning)|
Basic knowledge of statistics.
|252-0538-00L||Shape Modeling and Geometry Processing||W||8 credits||2V + 1U + 4A||O. Sorkine Hornung|
|Abstract||This course covers the fundamentals and some of the latest developments in geometric modeling and geometry processing. Topics include surface modeling based on point clouds and polygonal meshes, mesh generation, surface reconstruction, mesh fairing and parameterization, discrete differential geometry, interactive shape editing, topics in digital shape fabrication.|
|Objective||The students will learn how to design, program and analyze algorithms and systems for interactive 3D shape modeling and geometry processing.|
|Content||Recent advances in 3D geometry processing have created a plenitude of novel concepts for the mathematical representation and interactive manipulation of geometric models. This course covers the fundamentals and some of the latest developments in geometric modeling and geometry processing. Topics include surface modeling based on point clouds and triangle meshes, mesh generation, surface reconstruction, mesh fairing and parameterization, discrete differential geometry, interactive shape editing and digital shape fabrication.|
|Lecture notes||Slides and course notes|
|Prerequisites / Notice||Prerequisites:|
Visual Computing, Computer Graphics or an equivalent class. Experience with C++ programming. Solid background in linear algebra and analysis. Some knowledge of differential geometry, computational geometry and numerical methods is helpful but not a strict requirement.
|252-0579-00L||3D Vision||W||5 credits||3G + 1A||M. Pollefeys, V. Larsson|
|Abstract||The course covers camera models and calibration, feature tracking and matching, camera motion estimation via simultaneous localization and mapping (SLAM) and visual odometry (VO), epipolar and multi-view geometry, structure-from-motion, (multi-view) stereo, augmented reality, and image-based (re-)localization.|
|Objective||After attending this course, students will:|
1. understand the core concepts for recovering 3D shape of objects and scenes from images and video.
2. be able to implement basic systems for vision-based robotics and simple virtual/augmented reality applications.
3. have a good overview of the current state of the art in 3D vision.
4. be able to critically analyze and assess current research in this area.
|Content||The goal of this course is to teach the core techniques required for robotic and augmented reality applications: How to determine the motion of a camera and how to estimate the absolute position and orientation of a camera in the real world. This course will introduce the basic concepts of 3D Vision in the form of short lectures, followed by student presentations discussing the current state-of-the-art. The main focus of this course are student projects on 3D Vision topics, with an emphasis on robotic vision and virtual and augmented reality applications.|
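Estimating the position and orientation of a camera, as described above, builds on the pinhole projection model. The sketch below projects a 3D point into an image with a toy camera; the focal length, principal point, and pose values are made-up illustrative numbers, and the rotation is restricted to the z-axis to keep the example short:

```python
import math

def project(point, f, cx, cy, rz, t):
    """Pinhole projection of a 3D point (toy example).

    f: focal length in pixels, (cx, cy): principal point,
    rz: camera rotation about the z-axis in radians,
    t: (tx, ty, tz) translation into the camera frame.
    """
    x, y, z = point
    # Rotate about z, then translate into the camera coordinate frame.
    c, s = math.cos(rz), math.sin(rz)
    xc = c * x - s * y + t[0]
    yc = s * x + c * y + t[1]
    zc = z + t[2]
    # Perspective division followed by the intrinsic mapping.
    u = f * xc / zc + cx
    v = f * yc / zc + cy
    return u, v

# A point on the optical axis projects to the principal point.
u, v = project((0.0, 0.0, 1.0), f=500.0, cx=320.0, cy=240.0,
               rz=0.0, t=(0.0, 0.0, 1.0))
```

Camera motion estimation, SLAM, and structure-from-motion all invert this mapping: given many observed (u, v) measurements, they solve for the rotation, translation, and 3D points.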
|252-3005-00L||Natural Language Processing |
Number of participants limited to 400.
|W||5 credits||2V + 1U + 1A||R. Cotterell|
|Abstract||This course presents topics in natural language processing with an emphasis on modern techniques, primarily focusing on statistical and deep learning approaches. The course provides an overview of the primary areas of research in language processing as well as a detailed exploration of the models and techniques used both in research and in commercial natural language systems.|
|Objective||The objective of the course is to learn the basic concepts in the statistical processing of natural languages. The course will be project-oriented so that the students can also gain hands-on experience with state-of-the-art tools and techniques.|
|Content||This course presents an introduction to general topics and techniques used in natural language processing today, primarily focusing on statistical approaches. The course provides an overview of the primary areas of research in language processing as well as a detailed exploration of the models and techniques used both in research and in commercial natural language systems.|
|Literature||Jacob Eisenstein: Introduction to Natural Language Processing (Adaptive Computation and Machine Learning series)|
|261-5130-00L||Research in Data Science |
Only for Data Science MSc.
|Abstract||Independent work under the supervision of a core or adjunct faculty of data science.|
|Objective||Independent work under the supervision of a core or adjunct faculty of data science. |
Approval of the director of studies is required for a non-DS professor.
|Content||Project done under supervision of an approved professor.|
|Prerequisites / Notice||Only students who have passed at least one core course in Data Management and Processing, and one core course in Data Analysis can start with a research project.|
A project description must be submitted at the start of the project to the studies administration.
|263-0007-00L||Advanced Systems Lab |
Only for master's students; otherwise, special permission from the study administration of D-INFK is required.
|W||8 credits||3V + 2U + 2A||M. Püschel, C. Zhang|
|Abstract||This course introduces the student to the foundations and state-of-the-art techniques in developing high performance software for mathematical functionality occurring in various fields in computer science. The focus is on optimizing for a single core and includes optimizing for the memory hierarchy, for special instruction sets, and the possible use of automatic performance tuning.|
|Objective||Software performance (i.e., runtime) arises through the complex interaction of an algorithm, its implementation, the compiler used, and the microarchitecture the program is run on. The first goal of the course is to provide the student with an understanding of this "vertical" interaction, and hence of software performance, for mathematical functionality. The second goal is to teach a systematic strategy for using this knowledge to write fast software for numerical problems. This strategy will be practiced in several homeworks and a semester-long group project.|
|Content||The fast evolution and increasing complexity of computing platforms pose a major challenge for developers of high performance software for engineering, science, and consumer applications: it becomes increasingly harder to harness the available computing power. Straightforward implementations may lose as much as one or two orders of magnitude in performance. On the other hand, creating optimal implementations requires the developer to have an understanding of algorithms, capabilities and limitations of compilers, and the target platform's architecture and microarchitecture.|
This interdisciplinary course introduces the student to the foundations and state-of-the-art techniques in high performance mathematical software development, using important functionality such as matrix operations, transforms, and filters as examples. The course will explain how to optimize for the memory hierarchy, take advantage of special instruction sets, and exploit other features of current processors that affect performance. The concept of automatic performance tuning is introduced. The focus is on optimization for a single core; thus, the course complements others on parallel and distributed computing.
Finally, a general strategy for performance analysis and optimization is introduced, which the students will apply in group projects that accompany the course.
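A classic example of the memory-hierarchy optimizations the course covers is loop blocking (tiling) for matrix multiplication. The sketch below shows the transformation conceptually in Python; in practice one would write it in C and tune the block size so that the working tiles fit in cache, neither of which this illustrative snippet attempts:

```python
def matmul_blocked(A, B, n, bs):
    """Blocked (tiled) n x n matrix multiplication.

    A and B are lists of row lists; bs is the block size. The tiling
    reorders the iteration space so that each bs x bs tile of A, B,
    and C is reused many times while it is still cache-resident.
    """
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, bs):
        for jj in range(0, n, bs):
            for kk in range(0, n, bs):
                # Multiply one pair of bs x bs tiles into a tile of C.
                for i in range(ii, min(ii + bs, n)):
                    for k in range(kk, min(kk + bs, n)):
                        a = A[i][k]
                        for j in range(jj, min(jj + bs, n)):
                            C[i][j] += a * B[k][j]
    return C

# Small sanity check: multiplying by the identity returns B unchanged.
A = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
B = [[float(i * 4 + j) for j in range(4)] for i in range(4)]
C = matmul_blocked(A, B, n=4, bs=2)
```

The arithmetic is identical to the naive triple loop; only the order of the iterations changes, which is exactly why such transformations can recover the orders of magnitude of performance mentioned above without altering the computed result.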
|Prerequisites / Notice||Solid knowledge of the C programming language and matrix algebra.|