Search result: Catalogue data in Spring Semester 2020
|Electrical Engineering and Information Technology Master|
|Master Studies (Programme Regulations 2018)|
| Signal Processing and Machine Learning|
The core courses and specialization courses below are a selection for students who wish to specialize in the area of "Signal Processing and Machine Learning", see https://www.ee.ethz.ch/studies/main-master/areas-of-specialisation.html.
The individual study plan is subject to the tutor's approval.
| Specialization Courses|
These specialization courses are particularly recommended for the area of "Signal Processing and Machine Learning", but you are free to choose courses from any other field in agreement with your tutor.
A minimum of 40 credits must be obtained from specialization courses during the MSc EEIT.
|227-0120-00L||Communication Networks||W||6 credits||4G||L. Vanbever|
|Abstract||At the end of this course, you will understand the fundamental concepts behind communication networks and the Internet. Specifically, you will be able to:|
- understand how the Internet works;
- build and operate Internet-like infrastructures;
- identify the right set of metrics to evaluate the performance of a network and propose ways to improve it.
|Objective||At the end of the course, the students will understand the fundamental concepts of communication networks and Internet-based communications. Specifically, students will be able to:|
- understand how the Internet works;
- build and operate Internet-like network infrastructures;
- identify the right set of metrics to evaluate the performance or the adequacy of a network and propose ways to improve it (if any).
The course will introduce the relevant mechanisms used in today's networks both from an abstract perspective but also from a practical one by presenting many real-world examples and through multiple hands-on projects.
For more information about the lecture, please visit: https://comm-net.ethz.ch
|Lecture notes||Lecture notes and material for the course will be available before each course on: https://comm-net.ethz.ch|
|Literature||Most of the course follows the textbook "Computer Networking: A Top-Down Approach (6th Edition)" by Kurose and Ross.|
|Prerequisites / Notice||No prior networking background is needed. The course will include some programming assignments (in Python) for which the material covered in Technische Informatik 1 (227-0013-00L) and Technische Informatik 2 (227-0014-00L) will be useful.|
|227-0147-00L||VLSI II: Design of Very Large Scale Integration Circuits||W||6 credits||5G||F. K. Gürkaynak, L. Benini|
|Abstract||This second course in our VLSI series is concerned with how to turn digital circuit netlists into safe, testable and manufacturable mask layout, taking into account various parasitic effects. Low-power circuit design is another important topic. Economic aspects and management issues of VLSI projects round off the course.|
|Objective||Know how to design digital VLSI circuits that are safe, testable, durable, and make economic sense.|
|Content||The second course begins with a thorough discussion of various technical aspects at the circuit and layout level before moving on to economic issues of VLSI. Topics include: |
- The difficulties of finding fabrication defects in large VLSI chips.
- How to make integrated circuits testable (design for test).
- Synchronous clocking disciplines compared, clock skew, clock distribution, input/output timing.
- Synchronization and metastability.
- CMOS transistor-level circuits of gates, flip-flops and random access memories.
- Sinks of energy in CMOS circuits.
- Power estimation and low-power design.
- Current research in low-energy computing.
- Layout parasitics, interconnect delay, static timing analysis.
- Switching currents, ground bounce, IR-drop, power distribution.
- Floorplanning, chip assembly, packaging.
- Layout design at the mask level, physical design verification.
- Electromigration, electrostatic discharge, and latch-up.
- Models of industrial cooperation in microelectronics.
- The caveats of virtual components.
- The cost structures of ASIC development and manufacturing.
- Market requirements, decision criteria, and case studies.
- Yield models.
- Avenues to low-volume fabrication.
- Marketing considerations and case studies.
- Management of VLSI projects.
Exercises are concerned with back-end design (floorplanning, placement, routing, clock and power distribution, layout verification). Industrial CAD tools are used.
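The static timing analysis mentioned among the topics above can be sketched as a longest-path computation over a gate network. The following is an illustrative sketch only, not course material; the netlist and gate delays are invented:

```python
# Static timing analysis in a nutshell: the arrival time at every node of
# a combinational gate network is the longest path from the inputs. Gate
# delays and the netlist below are invented for illustration.
from functools import lru_cache

# netlist: node -> (gate delay in ns, list of fan-in nodes);
# primary inputs have zero delay and no fan-in.
netlist = {
    "a": (0.0, []), "b": (0.0, []), "c": (0.0, []),
    "g1": (1.2, ["a", "b"]),        # e.g. a NAND gate
    "g2": (0.8, ["b", "c"]),
    "g3": (1.5, ["g1", "g2"]),      # output gate
}

@lru_cache(maxsize=None)
def arrival(node):
    delay, fanin = netlist[node]
    return delay + max((arrival(f) for f in fanin), default=0.0)

# Critical path: a/b -> g1 (1.2 ns) -> g3 (1.5 ns) = 2.7 ns.
assert abs(arrival("g3") - 2.7) < 1e-12
```

Real STA tools additionally handle clock constraints, setup/hold checks and interconnect delay, but the longest-path core is the same.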
|Lecture notes||H. Kaeslin: "Top-Down Digital VLSI Design, from Gate-Level Circuits to CMOS Fabrication", Lecture Notes Vol. 2, 2015.|
All written documents in English.
|Literature||H. Kaeslin: "Top-Down Digital VLSI Design, from Architectures to Gate-Level Circuits and FPGAs", Elsevier, 2014, ISBN 9780128007303.|
|Prerequisites / Notice||Highlight:|
Students are offered the opportunity to design a circuit of their own, which is then actually fabricated as a microchip! Students who elect to participate in this program register for a term project at the Integrated Systems Laboratory in parallel to attending the VLSI II course.
"VLSI I: from Architectures to Very Large Scale Integration Circuits and FPGAs" or equivalent knowledge.
|227-0418-00L||Algebra and Error Correcting Codes||W||6 credits||4G||H.‑A. Loeliger|
|Abstract||The course is an introduction to error correcting codes covering both classical algebraic codes and modern iterative decoding. The course includes a self-contained introduction of the pertinent basics of "abstract" algebra.|
|Objective||The course is an introduction to error correcting codes covering both classical algebraic codes and modern iterative decoding. The course includes a self-contained introduction of the pertinent basics of "abstract" algebra.|
|Content||Error correcting codes: coding and modulation, linear codes, Hamming space codes, Euclidean space codes, trellises and Viterbi decoding, convolutional codes, factor graphs and message passing algorithms, low-density parity check codes, turbo codes, polar codes, Reed-Solomon codes.|
Algebra: groups, rings, homomorphisms, quotient groups, ideals, finite fields, vector spaces, polynomials.
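To make the coding concepts above concrete, here is a hedged illustrative sketch (not course material) of a Hamming(7,4) code, the classic example of a linear block code that corrects any single bit error:

```python
# Minimal Hamming(7,4) code: encodes 4 data bits into 7 bits and
# corrects any single-bit error. Illustrative sketch, not course code.

def encode(d):
    """d: list of 4 data bits -> 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # parity over positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4          # parity over positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4          # parity over positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def decode(c):
    """c: 7-bit word -> corrected 4 data bits via the syndrome."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # re-check positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # re-check positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # re-check positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-based index of the flipped bit
    c = list(c)
    if syndrome:
        c[syndrome - 1] ^= 1         # correct the single-bit error
    return [c[2], c[4], c[5], c[6]]

msg = [1, 0, 1, 1]
cw = encode(msg)
cw[5] ^= 1                  # channel flips one bit
assert decode(cw) == msg    # decoder recovers the original data
```

The syndrome directly spells out the error position in binary, which is the defining trick of the Hamming construction.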
|Lecture notes||Lecture Notes (english)|
|227-0150-00L||Systems-on-chip for Data Analytics and Machine Learning|
Previously "Energy-Efficient Parallel Computing Systems for Data Analytics"
|W||6 credits||4G||L. Benini|
|Abstract||Systems-on-chip architecture and related design issues with a focus on machine learning and data analytics applications. It will cover multi-cores, many-cores, vector engines, GP-GPUs, application-specific processors and heterogeneous compute accelerators. Special emphasis given to energy-efficiency issues and hardware-software techniques for power and energy minimization.|
|Objective||To give an in-depth understanding of the links and dependencies between architectures and their energy-efficient implementation, and to provide comprehensive exposure to state-of-the-art systems-on-chip platforms for machine learning and data analytics. Practical experience will also be gained through exercises and mini-projects (hardware and software) assigned on specific topics.|
|Content||The course will cover advanced system-on-chip architectures, with an in-depth view on design challenges related to advanced silicon technology and state-of-the-art system integration options (nanometer silicon technology, novel storage devices, three-dimensional integration, advanced system packaging). The emphasis will be on programmable parallel architectures with application focus on machine learning and data analytics. The main SoC architectural families will be covered: namely, multi- and many-cores, GPUs, vector accelerators, application-specific processors, heterogeneous platforms. The course will cover the complex design choices required to achieve scalability and energy proportionality. It will also delve into system design, touching on hardware-software tradeoffs and full-system analysis and optimization, taking into account non-functional constraints and quality metrics such as power consumption, thermal dissipation, reliability and variability. The application focus will be on machine learning both in the cloud and at the edges (near-sensor analytics).|
|Lecture notes||Slides will be provided to accompany lectures. Pointers to scientific literature will be given. Exercise scripts and tutorials will be provided.|
|Literature||John L. Hennessy, David A. Patterson, Computer Architecture: A Quantitative Approach (The Morgan Kaufmann Series in Computer Architecture and Design) 6th Edition, 2017.|
|Prerequisites / Notice||Knowledge of digital design at the level of "Design of Digital Circuits SS12" is required.|
Knowledge of basic VLSI design at the level of "VLSI I: Architectures of VLSI Circuits" is required.
|227-0155-00L||Machine Learning on Microcontrollers |
Registration in this class requires the permission of the instructors. Class size will be limited to 30.
Preference is given to students in the MSc EEIT.
|W||6 credits||3G + 2A||M. Magno, L. Benini|
|Abstract||Machine Learning (ML) and artificial intelligence are pervading the digital society. Today, even low-power embedded systems are incorporating ML, becoming increasingly “smart”. This lecture gives an overview of ML methods and algorithms to process and extract useful near-sensor information in end-nodes of the “internet-of-things”, using low-power microcontrollers/processors (ARM Cortex-M; RISC-V).|
|Objective||Learn how to process data from sensors and how to extract useful information with low-power microprocessors using ML techniques. We will analyze data coming from real low-power sensors (accelerometers, microphones, ExG bio-signals, cameras…). The main objective is to study in detail how machine learning algorithms can be adapted to the performance constraints and limited resources of low-power microcontrollers.|
|Content||The final goal of the course is a deep understanding of machine learning and its practical implementation on single- and multi-core microcontrollers, coupled with performance and energy efficiency analysis and optimization. The main topics of the course include:|
- Sensors and sensor data acquisition with low power embedded systems
- Machine Learning: Overview of supervised and unsupervised learning, with a focus on supervised methods (Bayes decision theory, decision trees, random forests, kNN methods, support vector machines, convolutional networks and deep learning)
- Low-power embedded systems and their architecture. Low Power microcontrollers (ARM-Cortex M) and RISC-V-based Parallel Ultra Low Power (PULP) systems-on-chip.
- Low power smart sensor system design: hardware-software tradeoffs, analysis, and optimization. Implementation and performance evaluation of ML in battery-operated embedded systems.
The laboratory exercises will show how to address concrete design problems, such as motion and gesture recognition, emotion detection, and image and sound classification, using real sensor data and real MCU boards.
Presentations from Ph.D. students and the visit to the Digital Circuits and Systems Group will introduce current research topics and international research projects.
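As a hedged sketch of one of the supervised methods listed above (not course code), the following shows a k-nearest-neighbours classifier of the kind small enough to fit on a microcontroller; the 2-D "accelerometer" features and labels are made up for illustration:

```python
# Minimal k-nearest-neighbours classifier on invented 2-D sensor features
# (e.g. signal energy vs. zero-crossing rate from an accelerometer).
import math
from collections import Counter

train = [((0.10, 0.20), "rest"), ((0.20, 0.10), "rest"), ((0.15, 0.25), "rest"),
         ((0.90, 0.80), "walk"), ((1.00, 0.70), "walk"), ((0.80, 0.90), "walk")]

def knn_predict(x, k=3):
    # Sort training samples by Euclidean distance, then take a majority vote.
    nearest = sorted(train, key=lambda s: math.dist(x, s[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

assert knn_predict((0.12, 0.18)) == "rest"
assert knn_predict((0.95, 0.85)) == "walk"
```

On a real MCU the same logic would run in fixed-point C over stored feature vectors, but the algorithm is identical.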
|Lecture notes||Script and exercise sheets. Books will be suggested during the course.|
|Prerequisites / Notice||Prerequisites: Good experience in C language programming. Microprocessors and computer architecture. Basics of Digital Signal Processing. Some exposure to machine learning concepts is also desirable.|
|227-0384-00L||Ultrasound Fundamentals, Imaging, and Medical Applications|
Course is offered for the last time in Spring Semester 2020.
|W||4 credits||3G||O. Göksel|
|Abstract||Ultrasound is the only imaging modality that is nonionizing (safe), real-time, cost-effective, and portable, with many medical uses in diagnosis, intervention guidance, surgical navigation, and as a therapeutic option. In this course, we introduce conventional and prospective applications of ultrasound, starting with the fundamentals of ultrasound physics and imaging.|
|Objective||Students can use the fundamentals of ultrasound to analyze and evaluate ultrasound imaging techniques and applications, in particular in the field of medicine, as well as to design and implement basic applications.|
|Content||Ultrasound is used in a wide range of products, from car parking sensors to assessing fault lines in tram wheels. Medical imaging is the eye of the doctor into the body; and ultrasound is the only imaging modality that is nonionizing (safe), real-time, cheap, and portable. Some of its medical uses include diagnosing breast and prostate cancer, guiding needle insertions/biopsies, screening for fetal anomalies, and monitoring cardiac arrhythmias. Ultrasound physically interacts with the tissue, and thus can also be used therapeutically, e.g., to deliver heat to treat tumors, break kidney stones, or deliver drugs in a targeted way. Recent years have seen several novel ultrasound techniques and applications – with many more waiting on the horizon to be discovered.|
This course covers ultrasonic equipment, physics of wave propagation, numerical methods for its simulation, image generation, beamforming (basic delay-and-sum and advanced methods), transducers (phased-, linear-, convex-arrays), near- and far-field effect, imaging modes (e.g., A-, M-, B-mode), Doppler and harmonic imaging, ultrasound signal processing techniques (e.g., filtering, time-gain-compensation, displacement tracking), image analysis techniques (deconvolution, real-time processing, tracking, segmentation, computer-assisted interventions), acoustic-radiation force, plane-wave imaging, contrast agents, micro-bubbles, elastography, biomechanical characterization, high-intensity focused ultrasound and therapy, lithotripsy, histotripsy, photo-acoustics phenomenon and opto-acoustic imaging, as well as sample non-medical applications such as the basics of non-destructive testing (NDT).
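The delay-and-sum beamforming listed above can be sketched in a few lines for the narrowband case. This is an illustrative toy only; the array geometry, frequency and angles are invented:

```python
# Narrowband delay-and-sum beamforming with a uniform linear array.
# All parameters (frequency, spacing, angles) are illustrative only.
import cmath, math

c = 1540.0          # speed of sound in tissue, m/s
f = 1e6             # 1 MHz narrowband source
d = c / f / 2       # half-wavelength element spacing
N = 16              # number of array elements
true_angle = math.radians(20)

# Complex element signals for a plane wave arriving from true_angle.
x = [cmath.exp(-2j * math.pi * f * n * d * math.sin(true_angle) / c)
     for n in range(N)]

def beam_power(steer):
    """Sum the element signals after compensating the steering delays."""
    y = sum(xn * cmath.exp(2j * math.pi * f * n * d * math.sin(steer) / c)
            for n, xn in enumerate(x))
    return abs(y) ** 2

# Scan steering angles: the output power peaks at the true direction.
angles = [math.radians(a) for a in range(-60, 61)]
best = max(angles, key=beam_power)
assert abs(math.degrees(best) - 20) <= 1
```

Broadband imaging replaces the phase shifts with true time delays per sample, but the sum-after-alignment principle is the same.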
Hands-on exercises: These will help to apply the concepts learned in the course, using simulation environments (such as Matlab k-Wave and FieldII toolboxes). The exercises will involve a mix of design, implementation, and evaluation examples commonly encountered in practical applications.
Project: Current and relevant applications in the field of ultrasound are offered as project topics. Projects will be carried out throughout the course, where the project reporting and presentations will be due towards the end of the semester. These will be part of the assessment in grading.
|Prerequisites / Notice||Prerequisites: Familiarity with basic numerical methods.|
Basic programming skills in Matlab.
|227-0436-00L||Digital Communication and Signal Processing||W||6 credits||2V + 2U||A. Wittneben|
|Abstract||A comprehensive presentation of modern digital modulation, detection and synchronization schemes and relevant aspects of signal processing enables the student to analyze, simulate, implement and research the physical layer of advanced digital communication schemes. The course both covers the underlying theory and provides problem solving and hands-on experience.|
|Objective||Digital communication systems are characterized by ever increasing requirements on data rate, spectral efficiency and reliability. Due to the huge advances in very large scale integration (VLSI) we are now able to implement extremely complex digital signal processing algorithms to meet these challenges. As a result, the physical layer (PHY) of digital communication systems has become the dominant function in most state-of-the-art system designs. In this course we discuss the major elements of PHY implementations in a rigorous theoretical fashion and present important practical examples to illustrate the application of the theory. In Part I we treat discrete time linear adaptive filters, which are a core component to handle multiuser and intersymbol interference in time-variant channels. Part II is a seminar block, in which the students develop their analytical and experimental (simulation) problem solving skills. After a review of major aspects of wireless communication we discuss, simulate and present the performance of novel cooperative and adaptive multiuser wireless communication systems. As part of this seminar each student has to give a 15-minute presentation and actively attend the presentations of their classmates. In Part III we cover parameter estimation and synchronization. Based on classical discrete detection and estimation theory we develop maximum likelihood inspired digital algorithms for symbol timing and frequency synchronization.|
|Content||Part I: Linear adaptive filters for digital communication|
• Finite impulse response (FIR) filter for temporal and spectral shaping
• Wiener filters
• Method of steepest descent
• Least mean square adaptive filters
Part II: Seminar block on cooperative wireless communication
• review of the basic concepts of wireless communication
• multiuser amplify&forward relaying
• performance evaluation of adaptive A&F relaying schemes and student presentations
Part III: Parameter estimation and synchronization
• Discrete detection theory
• Discrete estimation theory
• Synthesis of synchronization algorithms
• Frequency estimation
• Timing adjustment by interpolation
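The least-mean-square adaptive filter of Part I can be sketched in a few lines. This is a hedged toy example, not course material; the channel taps and step size are invented:

```python
# Least-mean-square (LMS) adaptive filter identifying an unknown 2-tap
# FIR channel from its input/output. Taps and step size are illustrative.
import random

random.seed(0)
h = [0.7, -0.3]                 # unknown channel to identify
w = [0.0, 0.0]                  # adaptive filter weights
mu = 0.05                       # LMS step size

x_hist = [0.0, 0.0]
for _ in range(2000):
    x = random.uniform(-1, 1)           # white training input
    x_hist = [x, x_hist[0]]             # filter delay line
    desired = sum(hi * xi for hi, xi in zip(h, x_hist))
    y = sum(wi * xi for wi, xi in zip(w, x_hist))
    e = desired - y                     # a-priori error
    w = [wi + mu * e * xi for wi, xi in zip(w, x_hist)]  # LMS update

assert all(abs(wi - hi) < 0.01 for wi, hi in zip(w, h))
```

In equalization the same update runs with the roles of channel and filter reversed; the Wiener solution is the fixed point the iteration approaches.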
|Lecture notes||Lecture notes.|
|Literature|| Oppenheim, A. V., Schafer, R. W., "Discrete-time signal processing", Prentice-Hall, ISBN 0-13-754920-2.|
Haykin, S., "Adaptive Filter Theory", Prentice-Hall, ISBN 0-13-090126-1.
Van Trees, H. L., "Detection, Estimation and Modulation Theory", John Wiley & Sons, ISBN 0-471-09517-6.
Meyr, H., Moeneclaey, M., Fechtel, S. A., "Digital Communication Receivers: Synchronization, Channel Estimation and Signal Processing", John Wiley & Sons, ISBN 0-471-50275-8.
|Prerequisites / Notice||Formal prerequisites: none|
Recommended: Communication Systems or equivalent
|227-0478-00L||Acoustics II||W||6 credits||4G||K. Heutschi|
|Abstract||Advanced knowledge of the functioning and application of electro-acoustic transducers.|
|Objective||Advanced knowledge of the functioning and application of electro-acoustic transducers.|
|Content||Electrical, mechanical and acoustical analogies. Transducers, microphones and loudspeakers, acoustics of musical instruments, sound recording, sound reproduction, digital audio.|
|227-0558-00L||Principles of Distributed Computing||W||7 credits||2V + 2U + 2A||R. Wattenhofer, M. Ghaffari|
|Abstract||We study the fundamental issues underlying the design of distributed systems: communication, coordination, fault-tolerance, locality, parallelism, self-organization, symmetry breaking, synchronization, uncertainty. We explore essential algorithmic ideas and lower bound techniques.|
|Objective||Distributed computing is essential in modern computing and communications systems. Examples are on the one hand large-scale networks such as the Internet, and on the other hand multiprocessors such as your new multi-core laptop. This course introduces the principles of distributed computing, emphasizing the fundamental issues underlying the design of distributed systems and networks: communication, coordination, fault-tolerance, locality, parallelism, self-organization, symmetry breaking, synchronization, uncertainty. We explore essential algorithmic ideas and lower bound techniques, basically the "pearls" of distributed computing. We will cover a fresh topic every week.|
|Content||Distributed computing models and paradigms, e.g. message passing, shared memory, synchronous vs. asynchronous systems, time and message complexity, peer-to-peer systems, small-world networks, social networks, sorting networks, wireless communication, and self-organizing systems.|
Distributed algorithms, e.g. leader election, coloring, covering, packing, decomposition, spanning trees, mutual exclusion, store and collect, arrow, ivy, synchronizers, diameter, all-pairs-shortest-path, wake-up, and lower bounds
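A flavour of the message-passing algorithms above can be given by a toy synchronous simulation of leader election on a ring, where each node repeatedly forwards the largest identifier it has seen (a simplified variant; the classic ring algorithms in the course are more message-efficient):

```python
# Synchronous simulation of leader election on a unidirectional ring:
# in each round every node learns the value held by its predecessor and
# keeps the maximum; after n rounds all nodes agree on the largest id,
# which becomes the leader. Illustrative sketch only.

def elect_leader(ids):
    n = len(ids)
    known = list(ids)                       # each node's current maximum
    for _ in range(n):                      # n synchronous rounds
        # Node i receives the current value of its ring predecessor i-1.
        incoming = [known[(i - 1) % n] for i in range(n)]
        known = [max(k, m) for k, m in zip(known, incoming)]
    return known

ids = [7, 3, 42, 19, 5]
result = elect_leader(ids)
assert all(r == 42 for r in result)   # everyone agrees on the leader
```

This naive variant uses O(n^2) messages; algorithms such as Hirschberg-Sinclair reduce this to O(n log n), which is the kind of gap the lower-bound techniques in the course make precise.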
|Lecture notes||Available. Our course script is used at dozens of other universities around the world.|
|Literature||Lecture notes by Roger Wattenhofer. These lecture notes are used at about a dozen universities throughout the world.
Distributed Computing: Fundamentals, Simulations and Advanced Topics
Hagit Attiya, Jennifer Welch.
McGraw-Hill Publishing, 1998, ISBN 0-07-709352-6
Introduction to Algorithms
Thomas Cormen, Charles Leiserson, Ronald Rivest.
The MIT Press, 1998, ISBN 0-262-53091-0 or 0-262-03141-8
Dissemination of Information in Communication Networks
Juraj Hromkovic, Ralf Klasing, Andrzej Pelc, Peter Ruzicka, Walter Unger.
Springer-Verlag, Berlin Heidelberg, 2005, ISBN 3-540-00846-2
Introduction to Parallel Algorithms and Architectures: Arrays, Trees, Hypercubes
Frank Thomson Leighton.
Morgan Kaufmann Publishers Inc., San Francisco, CA, 1991, ISBN 1-55860-117-1
Distributed Computing: A Locality-Sensitive Approach
David Peleg.
Society for Industrial and Applied Mathematics (SIAM), 2000, ISBN 0-89871-464-8
|Prerequisites / Notice||Course pre-requisites: Interest in algorithmic problems. (No particular course needed.)|
|227-0560-00L||Deep Learning for Autonomous Driving |
Registration in this class requires the permission of the instructors. Class size will be limited to 80 students.
Preference is given to EEIT, INF and RSC students.
|W||6 credits||3V + 2P||D. Dai, A. Liniger|
|Abstract||Autonomous driving has moved from the realm of science fiction to a very real possibility during the past twenty years, largely due to rapid developments of deep learning approaches, automotive sensors, and microprocessor capacity. This course covers the core techniques required for building a self-driving car, especially the practical use of deep learning through this theme.|
|Objective||Students will learn about the fundamental aspects of a self-driving car. They will also learn to use modern automotive sensors and HD navigational maps, and to implement, train and debug their own deep neural networks in order to gain a deep understanding of cutting-edge research in autonomous driving tasks, including perception, localization and control.|
After attending this course, students will:
1) understand the core technologies of building a self-driving car;
2) have a good overview over the current state of the art in self-driving cars;
3) be able to critically analyze and evaluate current research in this area;
4) be able to implement basic systems for multiple autonomous driving tasks.
|Content||We will focus on teaching the following topics centered on autonomous driving: deep learning, automotive sensors, multimodal driving datasets, road scene perception, ego-vehicle localization, path planning, and control.|
The course covers the following main areas:
I) Fundamentals
a) Fundamentals of a self-driving car
b) Fundamentals of deep learning
II) Road Scene Perception
a) Semantic segmentation and lane detection
b) Depth estimation with images and sparse LiDAR data
c) 3D object detection with images and LiDAR data
d) Object tracking and motion prediction
III) Ego-Vehicle Localization
a) GPS-based and vision-based localization
b) Visual odometry and LiDAR odometry
IV) Path Planning and Control
a) Path planning for autonomous driving
b) Motion planning and vehicle control
c) Imitation learning and reinforcement learning for self-driving cars
c) Imitation learning and reinforcement learning for self driving cars
The exercise projects will involve training complex neural networks and applying them on real-world, multimodal driving datasets. In particular, students should be able to develop systems that deal with the following problems:
- Sensor calibration and synchronization to obtain multimodal driving data;
- Semantic segmentation and depth estimation with deep neural networks;
- Learning to drive directly from images and map data (a.k.a. end-to-end driving).
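To illustrate the vehicle-control side of the topics above, here is a hedged sketch of pure-pursuit steering, a common geometric baseline controller; the vehicle geometry and target coordinates are invented:

```python
# Pure-pursuit steering: drive an arc through a lookahead target point.
# Wheelbase and target values are illustrative only.
import math

def pure_pursuit_steer(target_x, target_y, wheelbase=2.7):
    """Target point (x forward, y left) in the vehicle frame -> steering angle.

    The arc through the target has curvature kappa = 2*y / L^2, where L is
    the lookahead distance; a kinematic bicycle model converts curvature
    into a front-wheel steering angle.
    """
    L2 = target_x ** 2 + target_y ** 2      # squared lookahead distance
    kappa = 2.0 * target_y / L2             # arc curvature through target
    return math.atan(wheelbase * kappa)     # bicycle-model steering angle

left = pure_pursuit_steer(10.0, 2.0)    # target to the left -> steer left
right = pure_pursuit_steer(10.0, -2.0)  # mirrored target -> steer right
assert left > 0 and math.isclose(left, -right)
assert pure_pursuit_steer(10.0, 0.0) == 0.0   # straight ahead
```

Learned controllers (imitation or reinforcement learning) are often benchmarked against simple geometric laws like this one.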
|Lecture notes||The lecture slides will be provided as a PDF.|
|Prerequisites / Notice||This is an advanced grad-level course. Students must have taken courses on machine learning and computer vision or have acquired equivalent knowledge. Students are expected to have a solid mathematical foundation, in particular in linear algebra, multivariate calculus, and probability. All practical exercises will require basic knowledge of Python and will use libraries such as PyTorch, scikit-learn and scikit-image.|
|227-0707-00L||Optimization Methods for Engineers||W||3 credits||2G||P. Leuchtmann|
|Abstract||First half of the semester: Introduction to the main methods of numerical optimization with focus on stochastic methods such as genetic algorithms, evolutionary strategies, etc.|
Second half of the semester: Each participant implements a selected optimizer and applies it on a problem of practical interest.
|Objective||Numerical optimization is of increasing importance for the development of devices and for the design of numerical methods. The students shall learn to select, improve, and combine appropriate procedures for efficiently solving practical problems.|
|Content||Typical optimization problems and their difficulties are outlined. Well-known deterministic search strategies, combinatorial minimization, and evolutionary algorithms are presented and compared. In engineering, optimization problems are often very complex. Therefore, new techniques based on the generalization and combination of known methods are discussed. To illustrate the procedure, various problems of practical interest are presented and solved with different optimization codes.|
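A minimal instance of the evolutionary strategies discussed above is the (1+1)-ES with step-size adaptation. This sketch is illustrative only; the objective and all constants are invented:

```python
# A minimal (1+1) evolution strategy with 1/5-success-rule style step
# adaptation, minimizing a toy sphere objective. Illustrative sketch.
import random

random.seed(1)

def sphere(x):
    return sum(v * v for v in x)

x = [5.0, -3.0, 4.0]        # start far from the optimum at the origin
sigma = 1.0                 # mutation step size
for _ in range(2000):
    child = [v + random.gauss(0, sigma) for v in x]
    if sphere(child) <= sphere(x):
        x = child
        sigma *= 1.1        # success: widen the search
    else:
        sigma *= 0.98       # failure: narrow it (≈ 1/5 rule)

assert sphere(x) < 1e-2     # converged close to the optimum
```

Population-based strategies and genetic algorithms extend this scheme with recombination and selection over many candidates.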
|Lecture notes||PDF of a short script (39 pages) plus the lecture slides are provided.|
|Prerequisites / Notice||Lecture only in the first half of the semester, exercises in form of small projects in the second half, presentation of the results in the last week of the semester.|
|227-0948-00L||Magnetic Resonance Imaging in Medicine||W||4 credits||3G||S. Kozerke, M. Weiger Senften|
|Abstract||Introduction to magnetic resonance imaging and spectroscopy, encoding and contrast mechanisms and their application in medicine.|
|Objective||Understand the basic principles of signal generation, image encoding and decoding, contrast manipulation and the application thereof to assess anatomical and functional information in-vivo.|
|Content||Introduction to magnetic resonance imaging including basic phenomena of nuclear magnetic resonance; 2- and 3-dimensional imaging procedures; fast and parallel imaging techniques; image reconstruction; pulse sequences and image contrast manipulation; equipment; advanced techniques for identifying activated brain areas; perfusion and flow; diffusion tensor imaging and fiber tracking; contrast agents; localized magnetic resonance spectroscopy and spectroscopic imaging; diagnostic applications and applications in research.|
|Lecture notes||D. Meier, P. Boesiger, S. Kozerke|
Magnetic Resonance Imaging and Spectroscopy
|227-1032-00L||Neuromorphic Engineering II |
Information for UZH students:
Enrolment to this course unit only possible at ETH. No enrolment to module INI405 at UZH.
Please mind the ETH enrolment deadlines for UZH students: Link
|W||6 credits||5G||S.‑C. Liu, T. Delbrück, G. Indiveri|
|Abstract||This course teaches the basics of analog chip design and layout with an emphasis on neuromorphic circuits, which are introduced in the fall semester course "Neuromorphic Engineering I".|
|Objective||Design of a neuromorphic circuit for implementation with CMOS technology.|
|Content||This course teaches the basics of analog chip design and layout with an emphasis on neuromorphic circuits, which are introduced in the autumn semester course "Neuromorphic Engineering I".|
The principles of CMOS processing technology are presented. Using a set of inexpensive software tools for simulation, layout and verification, suitable for neuromorphic circuits, participants learn to simulate circuits on the transistor level and to make their layouts on the mask level. Important issues in the layout of neuromorphic circuits will be explained and illustrated with examples. In the latter part of the semester students simulate and layout a neuromorphic chip. Schematics of basic building blocks will be provided. The layout will then be fabricated and will be tested by students during the following fall semester.
|Literature||S.-C. Liu et al.: Analog VLSI Circuits and Principles; software documentation.|
|Prerequisites / Notice||Prerequisites: Neuromorphic Engineering I strongly recommended|
|151-0566-00L||Recursive Estimation||W||4 credits||2V + 1U||R. D'Andrea|
|Abstract||Estimation of the state of a dynamic system based on a model and observations in a computationally efficient way.|
|Objective||Learn the basic recursive estimation methods and their underlying principles.|
|Content||Introduction to state estimation; probability review; Bayes' theorem; Bayesian tracking; extracting estimates from probability distributions; Kalman filter; extended Kalman filter; particle filter; observer-based control and the separation principle.|
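The simplest instance of the recursive estimators above is a scalar Kalman filter estimating a constant from noisy measurements. A hedged sketch with invented noise levels:

```python
# Scalar Kalman filter estimating a constant state from noisy
# measurements. True value and noise variance are illustrative.
import random

random.seed(2)
true_x = 4.0
R = 0.5 ** 2            # measurement noise variance

x_hat, P = 0.0, 100.0   # prior mean and (large) prior variance
variances = []
for _ in range(200):
    z = true_x + random.gauss(0, 0.5)     # noisy measurement
    K = P / (P + R)                       # Kalman gain
    x_hat = x_hat + K * (z - x_hat)       # measurement update
    P = (1 - K) * P                       # posterior variance
    variances.append(P)

assert abs(x_hat - true_x) < 0.2
assert all(a >= b for a, b in zip(variances, variances[1:]))  # uncertainty shrinks
```

Adding a process model between measurement updates turns this into the full predict/update cycle used for tracking dynamic systems.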
|Lecture notes||Lecture notes available on course website: http://www.idsc.ethz.ch/education/lectures/recursive-estimation.html|
|Prerequisites / Notice||Requirements: Introductory probability theory and matrix-vector algebra.|
|252-0526-00L||Statistical Learning Theory||W||7 credits||3V + 2U + 1A||J. M. Buhmann, C. Cotrini Jimenez|
|Abstract||The course covers advanced methods of statistical learning: |
- Variational methods and optimization.
- Deterministic annealing.
- Clustering for diverse types of data.
- Model validation by information theory.
|Objective||The course surveys recent methods of statistical learning. The fundamentals of machine learning, as presented in the courses "Introduction to Machine Learning" and "Advanced Machine Learning", are expanded from the perspective of statistical learning.|
|Content||- Variational methods and optimization. We consider optimization approaches for problems where the optimizer is a probability distribution. We will discuss concepts like maximum entropy, information bottleneck, and deterministic annealing.|
- Clustering. This is the problem of sorting data into groups without using training samples. We discuss alternative notions of "similarity" between data points and adequate optimization procedures.
- Model selection and validation. This refers to the question of how complex the chosen model should be. In particular, we present an information theoretic approach for model validation.
- Statistical physics models. We discuss approaches for approximately optimizing large systems, which originate in statistical physics (free energy minimization applied to spin glasses and other models). We also study sampling methods based on these models.
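The deterministic-annealing idea above can be shown on a toy clustering problem: soft assignments at temperature T are gradually hardened as T decreases. A hedged sketch; the data, schedule and initial centers are invented:

```python
# Toy deterministic-annealing clustering in one dimension: Gibbs
# responsibilities at temperature T, centers as weighted means, with a
# geometric cooling schedule. Illustrative sketch only.
import math

data = [0.0, 1.0, 2.0, 10.0, 11.0, 12.0]
centers = [4.0, 8.0]

T = 50.0
while T > 0.01:
    # E-step: responsibilities p(cluster | point) at temperature T.
    resp = []
    for x in data:
        w = [math.exp(-(x - c) ** 2 / T) for c in centers]
        s = sum(w)
        resp.append([wi / s for wi in w])
    # M-step: centers move to responsibility-weighted means.
    centers = [sum(r[j] * x for r, x in zip(resp, data)) /
               sum(r[j] for r in resp)
               for j in range(len(centers))]
    T *= 0.9            # annealing schedule

centers.sort()
assert abs(centers[0] - 1.0) < 0.1 and abs(centers[1] - 11.0) < 0.1
```

At high temperature the centers coincide near the global mean; as T drops below a critical value the solution splits, which is the phase-transition behaviour studied in the course.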
|Lecture notes||A draft of a script will be provided. Lecture slides will be made available.|
|Literature||Hastie, Tibshirani, Friedman: The Elements of Statistical Learning, Springer, 2001.|
L. Devroye, L. Gyorfi, and G. Lugosi: A probabilistic theory of pattern recognition. Springer, New York, 1996
|Prerequisites / Notice||Knowledge of machine learning (introduction to machine learning and/or advanced machine learning)|
Basic knowledge of statistics.
|252-0579-00L||3D Vision||W||5 credits||3G + 1A||M. Pollefeys, V. Larsson|
|Abstract||The course covers camera models and calibration, feature tracking and matching, camera motion estimation via simultaneous localization and mapping (SLAM) and visual odometry (VO), epipolar and multi-view geometry, structure-from-motion, (multi-view) stereo, augmented reality, and image-based (re-)localization.|
|Objective||After attending this course, students will:|
1. understand the core concepts for recovering 3D shape of objects and scenes from images and video.
2. be able to implement basic systems for vision-based robotics and simple virtual/augmented reality applications.
3. have a good overview over the current state-of-the art in 3D vision.
4. be able to critically analyze and assess current research in this area.
|Content||The goal of this course is to teach the core techniques required for robotic and augmented reality applications: How to determine the motion of a camera and how to estimate the absolute position and orientation of a camera in the real world. This course will introduce the basic concepts of 3D Vision in the form of short lectures, followed by student presentations discussing the current state-of-the-art. The main focus of this course are student projects on 3D Vision topics, with an emphasis on robotic vision and virtual and augmented reality applications.|
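One of the basic building blocks above, recovering 3D structure from two views, can be sketched as two-view triangulation by the direct linear transform (DLT). The camera poses and the 3D point are invented for illustration, and NumPy is assumed to be available:

```python
# Two-view triangulation by the direct linear transform (DLT).
# Camera poses and the 3-D point are made up for illustration.
import numpy as np

# Two calibrated projection matrices P = K [R | t], with K = I for simplicity.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                   # reference camera
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])   # translated camera

X_true = np.array([0.3, -0.2, 5.0])                 # 3-D point to recover
Xh = np.append(X_true, 1.0)
x1 = P1 @ Xh; x1 = x1 / x1[2]                       # projection in view 1
x2 = P2 @ Xh; x2 = x2 / x2[2]                       # projection in view 2

def triangulate(x1, x2, P1, P2):
    """Solve A X = 0 in the least-squares sense via SVD."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    X = np.linalg.svd(A)[2][-1]       # right singular vector of smallest sigma
    return X[:3] / X[3]

assert np.allclose(triangulate(x1, x2, P1, P2), X_true, atol=1e-6)
```

With noisy correspondences the same linear system gives an initial estimate that bundle adjustment then refines.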
|227-0973-00L||Translational Neuromodeling||W||8 credits||3V + 2U + 1A||K. Stephan|
|Abstract||This course provides a systematic introduction to Translational Neuromodeling (the development of mathematical models for diagnostics of brain diseases) and their application to concrete clinical questions (Computational Psychiatry/Psychosomatics). It focuses on a generative modeling strategy and teaches (hierarchical) Bayesian models of neuroimaging data and behaviour, incl. exercises.|
|Objective||To obtain an understanding of the goals, concepts and methods of Translational Neuromodeling and Computational Psychiatry/Psychosomatics, particularly with regard to Bayesian models of neuroimaging (fMRI, EEG) and behavioural data.|
|Content||This course provides a systematic introduction to Translational Neuromodeling (the development of mathematical models for diagnostics of brain diseases) and their application to concrete clinical questions (Computational Psychiatry/Psychosomatics). The first part of the course will introduce disease concepts from psychiatry and psychosomatics, their history, and clinical priority problems. The second part of the course concerns computational modeling of neuronal and cognitive processes for clinical applications. A particular focus is on Bayesian methods and generative models, for example, dynamic causal models for inferring neuronal processes from neuroimaging data, and hierarchical Bayesian models for inference on cognitive processes from behavioural data. The course discusses the mathematical and statistical principles behind these models, illustrates their application to various psychiatric diseases, and outlines a general research strategy based on generative models. |
Lecture topics include:
1. Introduction to Translational Neuromodeling and Computational Psychiatry/Psychosomatics
2. Psychiatric nosology
3. Pathophysiology of psychiatric disease mechanisms
4. Principles of Bayesian inference and generative modeling
5. Variational Bayes (VB)
6. Bayesian model selection
7. Markov Chain Monte Carlo techniques (MCMC)
8. Bayesian frameworks for understanding psychiatric and psychosomatic diseases
9. Generative models of fMRI data
10. Generative models of electrophysiological data
11. Generative models of behavioural data
12. Computational concepts of schizophrenia, depression and autism
13. Model-based predictions about individual patients
Practical exercises include mathematical derivations and the implementation of specific models and inference methods. In additional project work, students are required to use one of the examples discussed in the course as a basis for developing their own generative model and use it for simulations and/or inference in application to a clinical question. Group work (up to 3 students) is permitted.
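As a toy illustration of generative modeling and Bayesian model selection (lecture topics 4 and 6), the following sketch compares two priors on a coin's bias via their marginal likelihoods; the beta-binomial model, numbers, and names are illustrative, not course material:

```python
import math

def log_beta(a, b):
    """Log of the Beta function B(a, b)."""
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def log_evidence(k, n, a, b):
    """Log marginal likelihood of k heads in n flips under a Beta(a, b) prior."""
    return (math.log(math.comb(n, k))
            + log_beta(a + k, b + n - k) - log_beta(a, b))

k, n = 8, 10
log_ev_any_bias = log_evidence(k, n, 1, 1)    # flat prior: bias unknown
log_ev_fair     = log_evidence(k, n, 50, 50)  # prior sharply peaked at 0.5
# A positive log Bayes factor means the data favour the "biased coin" model.
log_bayes_factor = log_ev_any_bias - log_ev_fair
```

The same evidence-comparison logic underlies Bayesian model selection for the far richer dynamic causal models discussed in the course.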
|Literature||See TNU website:|
|Prerequisites / Notice||Good knowledge of principles of statistics, good programming skills (MATLAB or Python)|
|263-5904-00L||Deep Learning for Computer Vision: Seminal Work |
Number of participants limited to 24.
The deadline for deregistering expires at the end of the second week of the semester. Students who are still registered after that date, but do not attend the seminar, will officially fail the seminar.
|W||2 credits||2S||M. R. Oswald, Z. Cui|
|Abstract||This seminar covers seminal papers on the topic of deep learning for computer vision. The students will present and discuss the papers and gain an understanding of the most influential research in this area - both past and present.|
|Objective||The objectives of this seminar are two-fold. Firstly, the aim is to provide a solid understanding of key contributions to the field of deep learning for vision (including a historical perspective as well as recent work). Secondly, the students will learn to critically read and analyse original research papers and judge their impact, as well as how to give a scientific presentation and lead a discussion on their topic.|
|Content||The seminar will start with introductory lectures to provide (1) a compact overview of challenges and relevant machine learning and deep learning research, and (2) a tutorial on critical analysis and presentation of research papers. Each student then chooses one paper from the provided collection to present during the remainder of the seminar. The students will be supported in the preparation of their presentation by the seminar assistants.|
|Lecture notes||The selection of research papers will be presented at the beginning of the semester.|
|Literature||The course "Machine Learning" is recommended.|
|252-3900-00L||Big Data for Engineers |
This course is not intended for Computer Science and Data Science MSc students!
|W||6 credits||2V + 2U + 1A||G. Fourny|
|Abstract||This course is part of the series of database lectures offered to all ETH departments, together with Information Systems for Engineers. It introduces the most recent advances in the database field: how do we scale storage and querying to Petabytes of data, with trillions of records? How do we deal with heterogeneous data sets? How do we deal with alternate data shapes like trees and graphs?|
|Objective||This course is complementary to Information Systems for Engineers, as the two cover different periods of database history and practice -- you can even take both lectures at the same time.|
The key challenge of the information society is to turn data into information, information into knowledge, knowledge into value. This has become increasingly complex. Data comes in larger volumes, diverse shapes, from different sources. Data is more heterogeneous and less structured than forty years ago. Nevertheless, it still needs to be processed fast, with support for complex operations.
This combination of requirements, together with the technologies that have emerged in order to address them, is typically referred to as "Big Data." This revolution has led to a completely new way to do business, e.g., develop new products and business models, but also to do science -- which is sometimes referred to as data-driven science or the "fourth paradigm".
Unfortunately, the quantity of data produced and available -- now in the Zettabyte range (that's 21 zeros) per year -- keeps growing faster than our ability to process it. Hence, new architectures and approaches for processing it were and are still needed. Harnessing them must involve a deep understanding of data not only in the large, but also in the small.
The field of databases evolves at a fast pace. In order to be prepared, to the extent possible, to the (r)evolutions that will take place in the next few decades, the emphasis of the lecture will be on the paradigms and core design ideas, while today's technologies will serve as supporting illustrations thereof.
After visiting this lecture, you should have gained an overview and understanding of the Big Data landscape, which is the basis on which one can make informed decisions, i.e., pick and orchestrate the relevant technologies together for addressing each business use case efficiently and consistently.
|Content||This course gives an overview of database technologies and of the most important database design principles that lay the foundations of the Big Data universe. |
It specifically targets students with a scientific or engineering background, rather than a Computer Science background.
We take the monolithic, one-machine relational stack from the 1970s, smash it down and rebuild it on top of large clusters: starting with distributed storage, and all the way up to syntax, models, validation, processing, indexing, and querying. A broad range of aspects is covered with a focus on how they fit all together in the big picture of the Big Data ecosystem.
No data is harmed during this course, however, please be psychologically prepared that our data may not always be in normal form.
- physical storage: distributed file systems (HDFS), object storage (S3), key-value stores
- logical storage: document stores (MongoDB), column stores (HBase)
- data formats and syntaxes (XML, JSON, RDF, CSV, YAML, protocol buffers, Avro)
- data shapes and models (tables, trees)
- type systems and schemas: atomic types, structured types (arrays, maps), set-based type systems (?, *, +)
- an overview of functional, declarative programming languages across data shapes (SQL, JSONiq)
- the most important query paradigms (selection, projection, joining, grouping, ordering, windowing)
- paradigms for parallel processing, two-stage (MapReduce) and DAG-based (Spark)
- resource management (YARN)
- what a data center is made of and why it matters (racks, nodes, ...)
- underlying architectures (internal machinery of HDFS, HBase, Spark)
- optimization techniques (functional and declarative paradigms, query plans, rewrites, indexing)
Large scale analytics and machine learning are outside of the scope of this course.
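The two-stage MapReduce paradigm listed above can be sketched in plain Python: a didactic one-machine simulation of the map, shuffle, and reduce phases, not an actual distributed run with any framework:

```python
from collections import defaultdict

def map_phase(documents):
    """Map: emit a (word, 1) pair for every word in every document."""
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def shuffle(pairs):
    """Shuffle: group all emitted values by key, as the framework would
    between the map and reduce stages."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: aggregate the value list of each key."""
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["big data for engineers", "big data is big"]
counts = reduce_phase(shuffle(map_phase(docs)))
```

In a real cluster the map and reduce calls run in parallel on different nodes and the shuffle moves data over the network, which is exactly why the two-stage shape matters.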
|Literature||Papers from scientific conferences and journals. References will be given as part of the course material during the semester.|
|Prerequisites / Notice||This course is not intended for Computer Science and Data Science students. Computer Science and Data Science students interested in Big Data MUST attend the Master's level Big Data lecture, offered in Fall.|
Requirements: programming knowledge (Java, C++, Python, PHP, ...) as well as basic knowledge on databases (SQL). If you have already built your own website with a backend SQL database, this is perfect.
Attendance is especially recommended to those who attended Information Systems for Engineers last Fall, which introduced the "good old databases of the 1970s" (SQL, tables and cubes). However, this is not a strict requirement, and it is also possible to take the lectures in reverse order.
|263-5300-00L||Guarantees for Machine Learning||W||5 credits||2V + 2A||F. Yang|
|Abstract||This course teaches classical and recent methods in statistics and optimization commonly used to prove theoretical guarantees for machine learning algorithms. The knowledge is then applied in project work that focuses on understanding phenomena in modern machine learning.|
|Objective||This course is aimed at advanced master and doctorate students who want to understand and/or conduct independent research on theory for modern machine learning. For this purpose, students will learn common mathematical techniques from statistical learning theory. In independent project work, they then apply their knowledge and go through the process of critically questioning recently published work, finding relevant research questions and learning how to effectively present research ideas to a professional audience.|
|Content||This course teaches some classical and recent methods in statistical learning theory aimed at proving theoretical guarantees for machine learning algorithms, including topics in|
- concentration bounds, uniform convergence
- high-dimensional statistics (e.g. Lasso)
- prediction error bounds for non-parametric statistics (e.g. in kernel spaces)
- minimax lower bounds
- regularization via optimization
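The first topic above, concentration bounds, can be previewed with Hoeffding's inequality, which bounds P(|empirical mean - p| >= t) by 2 exp(-2 n t^2) for n i.i.d. samples in [0, 1]. A small simulation (function names and parameters are illustrative) checks that the bound indeed dominates the observed deviation frequency:

```python
import math
import random

def hoeffding_bound(n, t):
    """Hoeffding upper bound on P(|empirical mean - p| >= t)
    for n i.i.d. samples taking values in [0, 1]."""
    return 2 * math.exp(-2 * n * t * t)

def deviation_frequency(n, t, p=0.5, trials=2000, seed=0):
    """Estimate the deviation probability by repeated Bernoulli(p) sampling."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        mean = sum(rng.random() < p for _ in range(n)) / n
        if abs(mean - p) >= t:
            hits += 1
    return hits / trials

n, t = 100, 0.1
freq = deviation_frequency(n, t)   # roughly 0.05 for a fair coin
bound = hoeffding_bound(n, t)      # 2 * exp(-2), about 0.27
```

The gap between the observed frequency and the bound shows that Hoeffding is distribution-free and therefore conservative, a recurring theme in learning-theoretic guarantees.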
The project work focuses on active theoretical ML research that aims to understand modern phenomena in machine learning, including but not limited to
- how overparameterization could help generalization (interpolating models, linearized NN)
- how overparameterization could help optimization (non-convex optimization, loss landscape)
- complexity measures and approximation-theoretic properties of randomly initialized and trained neural networks
- generalization of robust learning (adversarial robustness, standard and robust error tradeoff)
- prediction with calibrated confidence (conformal prediction, calibration)
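As a concrete taste of the last topic: split conformal prediction wraps any point predictor in intervals with a finite-sample marginal coverage guarantee, by computing absolute residuals on a held-out calibration set and using their ceil((n+1)(1-alpha))-th smallest value as the interval half-width. A minimal sketch on synthetic data, with all names and numbers illustrative:

```python
import math
import random

def conformal_quantile(cal_residuals, alpha=0.1):
    """The ceil((n+1)(1-alpha))-th smallest calibration residual."""
    n = len(cal_residuals)
    k = math.ceil((n + 1) * (1 - alpha))
    return sorted(cal_residuals)[min(k, n) - 1]

def predict(x):
    """A fixed point predictor; here simply the true regression function."""
    return 2 * x

rng = random.Random(0)
# Calibration set: y = 2x + bounded noise on [-1, 1].
xs_cal = [rng.random() for _ in range(200)]
cal = [(x, 2 * x + rng.uniform(-1, 1)) for x in xs_cal]
residuals = [abs(y - predict(x)) for x, y in cal]
q = conformal_quantile(residuals, alpha=0.1)

# On fresh points, the interval [predict(x) - q, predict(x) + q]
# covers y at least ~90% of the time (marginal coverage).
xs_test = [rng.random() for _ in range(1000)]
test = [(x, 2 * x + rng.uniform(-1, 1)) for x in xs_test]
coverage = sum(abs(y - predict(x)) <= q for x, y in test) / len(test)
```

The guarantee is distribution-free and requires only exchangeability of calibration and test points, which is what makes it a theoretical-guarantee topic rather than a heuristic.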
|Prerequisites / Notice||Students must have a strong mathematical background (basic real analysis, probability theory, linear algebra) and good knowledge of core concepts in machine learning taught in courses such as “Introduction to Machine Learning”, “Regression”/“Statistical Modelling”. It is also helpful to have taken a course in optimization or approximation theory. In addition to these prerequisites, this class requires a certain degree of mathematical maturity, including abstract thinking and the ability to understand and write proofs.|