Search result: Catalogue data in Autumn Semester 2021

Doctoral Department of Information Technology and Electrical Engineering
Doctoral and Post-Doctoral Courses
A minimum of 12 ECTS credit points must be obtained during doctoral studies.

The courses on offer below are only a small selection out of a much larger available number of courses. Please discuss your course selection with your PhD supervisor.
Number | Title | Type | ECTS | Hours | Lecturers
151-0371-00L  Advanced Model Predictive Control (Restricted registration)
Number of participants limited to 60.
W, 4 credits, 2V + 1U; M. Zeilinger, A. Carron, L. Hewing, J. Köhler
Abstract: Model predictive control (MPC) has established itself as a powerful control technique for complex systems under state and input constraints. This course discusses the theory and application of recent advanced MPC concepts, focusing on system uncertainties and safety, as well as data-driven formulations and learning-based control.
Learning objective: Design, implement and analyze advanced MPC formulations for robust and stochastic uncertainty descriptions, in particular with data-driven formulations.
Content: Topics include
- Review of Bayesian statistics, stochastic systems and Stochastic Optimal Control
- Nominal MPC for uncertain systems (nominal robustness)
- Robust MPC
- Stochastic MPC
- Set-membership Identification and robust data-driven MPC
- Bayesian regression and stochastic data-driven MPC
- MPC as safety filter for reinforcement learning
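The nominal, unconstrained core of the formulations listed above can be sketched in a few lines of numpy: a finite-horizon Riccati recursion yields the first-move feedback law, which is re-applied at every step in receding-horizon fashion. This is only an illustrative sketch, not course material; the double-integrator model and cost weights are invented, and the robust, stochastic and constrained variants treated in the course build on this skeleton.

```python
import numpy as np

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])   # discretized double integrator (invented example)
B = np.array([[0.005],
              [0.1]])
Q = np.eye(2)                # state cost weight (invented)
R = np.array([[0.1]])        # input cost weight (invented)

def mpc_gain(A, B, Q, R, N=20):
    """First-input feedback gain of the horizon-N problem,
    via a backward Riccati recursion with terminal cost Q."""
    P = Q.copy()
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ A - A.T @ P @ B @ K
    return K

K = mpc_gain(A, B, Q, R)
x = np.array([[1.0], [0.0]])
for _ in range(300):         # receding horizon: apply only the first input
    x = A @ x + B @ (-K @ x)
print(np.linalg.norm(x))     # state regulated toward the origin
```

Since the model and horizon are fixed here, the first-move gain is the same in every step; with constraints or uncertainty, an optimization problem would be re-solved online instead.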
Lecture notes: Lecture notes will be provided.
Prerequisites / Notice: Basic courses in control, an advanced course in optimal control, and a basic MPC course (e.g. 151-0660-00L Model Predictive Control) are strongly recommended.
A background in linear algebra and stochastic systems is recommended.
227-0105-00L  Introduction to Estimation and Machine Learning (Restricted registration)
W, 6 credits, 4G; H.‑A. Loeliger
Abstract: Mathematical basics of estimation and machine learning, with a view towards applications in signal processing.
Learning objective: Students master the basic mathematical concepts and algorithms of estimation and machine learning.
Content: Review of probability theory;
basics of statistical estimation;
least squares and linear learning;
Hilbert spaces;
Gaussian random variables;
singular-value decomposition;
kernel methods, neural networks, and more
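As a small illustration of how two of the topics above connect, the sketch below solves a least-squares linear estimation problem explicitly through the singular-value decomposition. It is not course material; the data, noise level and "true" parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 3
X = rng.standard_normal((n, d))          # invented regressors
w_true = np.array([1.0, -2.0, 0.5])      # invented ground truth
y = X @ w_true + 0.01 * rng.standard_normal(n)

# Least-squares estimate w_hat = pinv(X) y, computed from the SVD:
U, s, Vt = np.linalg.svd(X, full_matrices=False)
w_hat = Vt.T @ ((U.T @ y) / s)

print(np.max(np.abs(w_hat - w_true)))    # small estimation error
```

The same estimate is returned by `np.linalg.pinv(X) @ y`; writing it out via the SVD makes the role of the singular values explicit.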
Lecture notes: Lecture notes will be handed out as the course progresses.
Prerequisites / Notice: Solid basics in linear algebra and probability theory.
227-0146-00L  Analog-to-Digital Converters
Does not take place this semester.
W, 6 credits, 2V + 2U
Abstract: This course provides a thorough treatment of integrated data-conversion systems, from system-level specifications and trade-offs through architecture choice down to circuit implementation.
Learning objective: Data conversion systems are substantial sub-parts of many electronic systems, e.g. the audio conversion system of a home-cinema system or the base-band front-end of a wireless modem. Data conversion systems usually determine the performance of the overall system in terms of dynamic range and linearity. The student will learn to understand the basic principles behind data conversion and be introduced to the different methods and circuit architectures used to implement such a conversion. Conversion methods such as successive approximation or algorithmic conversion are explained with their principle of operation, accompanied by the appropriate mathematical calculations, including the effects of non-idealities in some cases. After successful completion of the course the student should understand the concept of an ideal ADC and know all major converter architectures, their principle of operation, and what governs their performance.
Content: - Introduction: information representation and communication; abstraction, categorization and symbolic representation; basic conversion algorithms; data converter application; tradeoffs among key parameters; ADC taxonomy.
- Dual-slope & successive approximation register (SAR) converters: dual slope principle & converter; SAR ADC operating principle; SAR implementation with a capacitive array; range extension with segmented array.
- Algorithmic & pipelined A/D converters: algorithmic conversion principle; sample & hold stage; pipelined converter; multiplying DAC; flash sub-ADC and n-bit MDAC; redundancy for correction of non-idealities, error correction.
- Performance metrics and non-linearity: ideal ADC; offset, gain error, differential and integral non-linearities; capacitor mismatch; impact of capacitor mismatch on SAR ADC's performance.
- Flash, folding and interpolating analog-to-digital converters: flash ADC principle, thermometer to binary coding, sparkle correction; limitations of flash converters; the folding principle, residue extraction; folding amplifiers; cascaded folding; interpolation for folding converters; cascaded folding and interpolation.
- Noise in analog-to-digital converters: types of noise; noise calculation in electronic circuits, kT/C noise, sampled noise; noise analysis in switched-capacitor circuits; aperture time uncertainty and sampling jitter.
- Delta-sigma A/D converters: linearity and resolution; from delta modulation to delta-sigma modulation; first-order delta-sigma modulation, circuit-level implementation; clock jitter & SNR in delta-sigma modulators; second-order delta-sigma modulation, higher-order modulation, design procedure for a single-loop modulator.
- Digital-to-analog converters: introduction; current scaling D/A converter, current steering DAC, calibration for improved performance.
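The SAR operating principle in the content list above can be sketched as an idealized binary search: each step tentatively sets one bit, from MSB to LSB, and keeps it if the corresponding DAC level does not exceed the input. This toy model ignores the non-idealities (mismatch, noise, jitter) that the course analyzes; resolution and reference voltage are arbitrary choices.

```python
def sar_adc(vin, vref=1.0, bits=8):
    """Idealized SAR conversion: return the digital code for 0 <= vin < vref."""
    code = 0
    for b in reversed(range(bits)):
        trial = code | (1 << b)                # tentatively set this bit
        if vin >= trial * vref / (1 << bits):  # compare input with trial DAC level
            code = trial                       # keep the bit
    return code

print(sar_adc(0.5))   # → 128, i.e. mid-scale for 8 bits
```

The comparison runs exactly `bits` times, which is what makes SAR converters attractive for moderate speed at low power.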
Lecture notes: Slides are available online at https://iis-students.ee.ethz.ch/lectures/analog-to-digital-converters/
Literature:
- B. Razavi, Principles of Data Conversion System Design, IEEE Press, 1994
- M. Gustavsson et al., CMOS Data Converters for Communications, Springer, 2010
- R.J. van de Plassche, CMOS Integrated Analog-to-Digital and Digital-to-Analog Converters, Springer, 2010
Prerequisites / Notice: It is highly recommended to attend the course "Analog Integrated Circuits" of Prof. T. Jang as preparation for this course.
227-0225-00L  Linear System Theory
W, 6 credits, 5G; A. Iannelli
Abstract: The class is intended to provide a comprehensive overview of the theory of linear dynamical systems, stability analysis, and their use in control and estimation. The focus is on the mathematics behind the physical properties of these systems and on understanding and constructing proofs of properties of linear control systems.
Learning objective: Students should be able to apply the fundamental results in linear system theory to analyze and control linear dynamical systems.
Content: - Proof techniques and practices.
- Linear spaces, normed linear spaces and Hilbert spaces.
- Ordinary differential equations, existence and uniqueness of solutions.
- Continuous and discrete-time, time-varying linear systems. Time domain solutions. Time invariant systems treated as a special case.
- Controllability and observability, duality. Time invariant systems treated as a special case.
- Stability and stabilization, observers, state and output feedback, separation principle.
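The controllability topic above admits a compact numerical check: the pair (A, B) is controllable iff the controllability matrix [B, AB, …, A^(n-1)B] has full row rank. The sketch below is a toy illustration only; the example matrices (a double integrator) are invented.

```python
import numpy as np

def controllability_matrix(A, B):
    """Stack [B, AB, ..., A^(n-1)B] column-wise."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])     # double integrator (invented example)
B = np.array([[0.0],
              [1.0]])          # force input: controllable
C = controllability_matrix(A, B)
print(np.linalg.matrix_rank(C))   # → 2, full rank: controllable
```

Replacing B by [[1.0], [0.0]] (an input that only shifts position) drops the rank to 1, i.e. the pair becomes uncontrollable.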
Lecture notes: Available on the course Moodle platform.
Prerequisites / Notice: Sufficient mathematical maturity, in particular in linear algebra and analysis.
Competencies:
- Subject-specific Competencies: Concepts and Theories (assessed); Techniques and Technologies (assessed)
- Method-specific Competencies: Analytical Competencies (assessed); Problem-solving (assessed)
- Personal Competencies: Creative Thinking (fostered); Critical Thinking (fostered); Integrity and Work Ethics (fostered)
227-0377-10L  Physics of Failure and Reliability of Electronic Devices and Systems
W, 3 credits, 2V; I. Shorubalko, M. Held
Abstract: Understanding the physics of failures and failure mechanisms enables reliability analysis and serves as a practical guide for electronic device design, integration, systems development and manufacturing. The field gains additional importance in the context of managing safety, sustainability and environmental impact, given the continuously increasing complexity and scaling-down trends in electronics.
Learning objective: Provide an understanding of the physics of failure and reliability. Introduce the degradation and failure mechanisms, basics of failure analysis, and methods and tools of reliability testing.
Content: Summary of reliability and failure analysis terminology; physics of failure: materials properties, physical processes and failure mechanisms; failure analysis; basics and properties of instruments; quality assurance of technical systems (introduction); introduction to stochastic processes; reliability analysis; component selection and qualification; maintainability analysis (introduction); design rules for reliability, maintainability, reliability tests (introduction).
Lecture notes: Comprehensive copy of transparencies.
LiteratureReliability Engineering: Theory and Practice, 8th Edition, Springer 2017, DOI 10.1007/978-3-662-54209-5
Reliability Engineering: Theory and Practice, 8th Edition (2017), DOI 10.1007/978-3-662-54209-5
227-0417-00L  Information Theory I
W, 6 credits, 4G; A. Lapidoth
Abstract: This course covers the basic concepts of information theory and of communication theory. Topics covered include the entropy rate of a source, mutual information, typical sequences, the asymptotic equipartition property, Huffman coding, channel capacity, the channel coding theorem, the source-channel separation theorem, and feedback capacity.
Learning objective: The fundamentals of information theory, including Shannon's source coding and channel coding theorems.
Content: The entropy rate of a source, typical sequences, the asymptotic equipartition property, the source coding theorem, Huffman coding, arithmetic coding, channel capacity, the channel coding theorem, the source-channel separation theorem, feedback capacity.
Literature: T.M. Cover and J. Thomas, Elements of Information Theory (second edition).
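One content item above, Huffman coding, can be sketched compactly: repeatedly merge the two least-probable symbols and prepend a bit to each side's codewords. This is an illustrative sketch, not course material; the symbol probabilities are invented.

```python
import heapq

def huffman(freqs):
    """freqs: dict symbol -> weight. Returns dict symbol -> codeword string."""
    # Heap entries carry a unique counter so dicts are never compared.
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)   # two least-probable subtrees
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (w1 + w2, count, merged))
        count += 1
    return heap[0][2]

code = huffman({"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125})
print(sorted(len(c) for c in code.values()))   # → [1, 2, 3, 3]
```

For these dyadic probabilities the codeword lengths equal -log2(p), so the expected length meets the entropy bound exactly.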
227-0427-00L  Signal Analysis, Models, and Machine Learning
Does not take place this semester.
This course was replaced by "Introduction to Estimation and Machine Learning" and "Advanced Signal Analysis, Modeling, and Machine Learning".
W, 6 credits, 4G; H.‑A. Loeliger
Abstract: Mathematical methods in signal processing and machine learning.
I. Linear signal representation and approximation: Hilbert spaces, LMMSE estimation, regularization and sparsity.
II. Learning linear and nonlinear functions and filters: neural networks, kernel methods.
III. Structured statistical models: hidden Markov models, factor graphs, Kalman filter, Gaussian models with sparse events.
Learning objective: The course is an introduction to some basic topics in signal processing and machine learning.
Content: Part I - Linear Signal Representation and Approximation: Hilbert spaces, least squares and LMMSE estimation, projection and estimation by linear filtering, learning linear functions and filters, L2 regularization, L1 regularization and sparsity, singular-value decomposition and pseudo-inverse, principal-components analysis.
Part II - Learning Nonlinear Functions: fundamentals of learning, neural networks, kernel methods.
Part III - Structured Statistical Models and Message Passing Algorithms: hidden Markov models, factor graphs, Gaussian message passing, Kalman filter and recursive least squares, Monte Carlo methods, parameter estimation, expectation maximization, linear Gaussian models with sparse events.
Lecture notes: Yes.
Prerequisites / Notice: Prerequisites:
- local bachelors: course "Discrete-Time and Statistical Signal Processing" (5th semester)
- others: solid basics in linear algebra and probability theory
227-0689-00L  System Identification
W, 4 credits, 2V + 1U; R. Smith
Abstract: Theory and techniques for the identification of dynamic models from experimentally obtained system input-output data.
Learning objective: To provide a series of practical techniques for the development of dynamical models from experimental data, with the emphasis on models suitable for feedback control design. To provide sufficient theory to enable the practitioner to understand the trade-offs between model accuracy, data quality and data quantity.
Content: Introduction to modeling: black-box and grey-box models; parametric and non-parametric models; ARX, ARMAX (etc.) models.

Predictive, open-loop, black-box identification methods. Time and frequency domain methods. Subspace identification methods.

Optimal experimental design, Cramer-Rao bounds, input signal design.

Parametric identification methods. On-line and batch approaches.

Closed-loop identification strategies. Trade-off between controller performance and information available for identification.
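A toy version of the parametric, open-loop methods above: fit a first-order ARX model y[k] = a·y[k-1] + b·u[k-1] + e[k] by least squares from simulated input-output data. This sketch is illustrative only; the "unknown" plant parameters, noise level and input signal are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
a_true, b_true = 0.8, 0.5          # invented plant parameters to be recovered
N = 500
u = rng.standard_normal(N)         # white-noise input: persistently exciting
y = np.zeros(N)
for k in range(1, N):
    y[k] = a_true * y[k-1] + b_true * u[k-1] + 0.01 * rng.standard_normal()

# One regressor row per sample: [y[k-1], u[k-1]] explains y[k]
Phi = np.column_stack([y[:-1], u[:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
print(theta)                       # estimates close to [0.8, 0.5]
```

With correlated or closed-loop data, the same least-squares estimate would be biased, which is exactly the kind of trade-off the closed-loop identification topic addresses.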
Literature: "System Identification: Theory for the User", Lennart Ljung, Prentice Hall (2nd Ed.), 1999.

Additional papers will be available via the course Moodle.
Prerequisites / Notice: Control Systems (227-0216-00L) or equivalent.
227-0955-00L  Seminar in Electromagnetics, Photonics and Terahertz
W, 3 credits, 2S; J. Leuthold
Abstract: Selected topics of the current research activities at the IEF and closely related institutions are discussed.
Learning objective: Gain an overview of the research activities of the IEF institute.
227-0974-00L  TNU Colloquium (Restricted registration)
W, 0 credits, 2K; K. Stephan
Abstract: This colloquium for MSc/PhD students at D-ITET discusses research in Translational Neuromodeling (the development of mathematical models for diagnostics of brain diseases) and its application to Computational Psychiatry/Psychosomatics. The range of topics is broad, incl. computational (generative) modeling, experimental paradigms (fMRI, EEG, behaviour), and clinical questions.
Learning objective: See abstract.
Content: See abstract.
252-0535-00L  Advanced Machine Learning
W, 10 credits, 3V + 2U + 4A; J. M. Buhmann, C. Cotrini Jimenez
Abstract: Machine learning algorithms provide analytical methods to search data sets for characteristic patterns. Typical tasks include the classification of data, function fitting and clustering, with applications in image and speech analysis, bioinformatics and exploratory data analysis. This course is accompanied by practical machine learning projects.
Learning objective: Students will be familiarized with advanced concepts and algorithms for supervised and unsupervised learning, and will reinforce the statistics knowledge that is indispensable for solving modeling problems under uncertainty. Key concepts are the generalization ability of algorithms and systematic approaches to modeling and regularization. Machine learning projects will provide an opportunity to test the algorithms on real-world data.
Content: The theory of fundamental machine learning concepts is presented in the lecture, and illustrated with relevant applications. Students can deepen their understanding by solving both pen-and-paper and programming exercises, where they implement and apply famous algorithms to real-world data.

Topics covered in the lecture include:

Fundamentals:
What is data?
Bayesian Learning
Computational learning theory

Supervised learning:
Ensembles: Bagging and Boosting
Max Margin methods
Neural networks

Unsupervised learning:
Dimensionality reduction techniques
Clustering
Mixture Models
Non-parametric density estimation
Learning Dynamical Systems
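As one concrete instance of the ensemble topic above, the sketch below bags bootstrap-resampled least-squares fits and aggregates them by averaging. This is an illustrative sketch only, not course material; the base learner is deliberately simple and the data are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
x = np.linspace(0, 1, n)
y = 2.0 * x + 0.2 * rng.standard_normal(n)   # invented noisy line, slope 2
X = np.column_stack([np.ones(n), x])          # intercept + slope features

B = 50
coefs = []
for _ in range(B):
    idx = rng.integers(0, n, size=n)          # bootstrap resample with replacement
    c, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
    coefs.append(c)
bagged = np.mean(coefs, axis=0)               # aggregate: average the B fits
print(bagged)                                 # roughly [0, 2]
```

For a linear base learner bagging mainly smooths the estimate; its variance-reduction benefit is larger for unstable learners such as deep decision trees.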
Lecture notes: No lecture notes, but slides will be made available on the course webpage.
Literature:
C. Bishop, Pattern Recognition and Machine Learning, Springer, 2007.

R. Duda, P. Hart, and D. Stork, Pattern Classification, John Wiley & Sons, second edition, 2001.

T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning: Data Mining, Inference and Prediction, Springer, 2001.

L. Wasserman, All of Statistics: A Concise Course in Statistical Inference, Springer, 2004.
Prerequisites / Notice: The course requires solid basic knowledge in analysis, statistics and numerical methods for CSE, as well as practical programming experience for solving assignments.
Students should have followed at least "Introduction to Machine Learning" or an equivalent course offered by another institution.

PhD students are required to obtain a passing grade in the course (4.0 or higher based on project and exam) to gain credit points.
252-0417-00L  Randomized Algorithms and Probabilistic Methods
W, 10 credits, 3V + 2U + 4A; A. Steger
Abstract: Las Vegas & Monte Carlo algorithms; inequalities of Markov, Chebyshev, Chernoff; negative correlation; Markov chains: convergence, rapidly mixing; generating functions. Examples include: min cut, median, balls and bins, routing in hypercubes, 3SAT, card shuffling, random walks.
Learning objective: After this course students will know fundamental techniques from probabilistic combinatorics for designing randomized algorithms and will be able to apply them to solve typical problems in these areas.
Content: Randomized algorithms are algorithms that "flip coins" to make certain decisions. This concept extends the classical model of deterministic algorithms and has become very popular and useful within the last twenty years. In many cases, randomized algorithms are faster, simpler or just more elegant than deterministic ones. In the course, we will discuss basic principles and techniques and derive from them a number of randomized methods for problems in different areas.
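One of the example problems in the abstract, min cut, illustrates the coin-flipping idea well: Karger's contraction algorithm repeatedly contracts a uniformly random edge until two super-vertices remain, and the best cut over many independent trials is returned. The sketch below is a simplified illustration (union-find bookkeeping over the original edge list); the example graph is invented.

```python
import random

def karger_cut_trial(edges, rng):
    """One contraction trial: merge random edge endpoints until two
    super-vertices remain; return the number of crossing edges."""
    parent = {v: v for e in edges for v in e}
    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v
    remaining = len(parent)
    while remaining > 2:
        u, v = rng.choice(edges)          # pick a uniformly random edge
        ru, rv = find(u), find(v)
        if ru != rv:                      # contract it if not already merged
            parent[ru] = rv
            remaining -= 1
    return sum(1 for u, v in edges if find(u) != find(v))

def karger_min_cut(edges, trials=300, seed=0):
    rng = random.Random(seed)
    return min(karger_cut_trial(edges, rng) for _ in range(trials))

# Two triangles joined by one bridge edge; the global min cut is 1.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(karger_min_cut(edges))
```

A single trial succeeds with probability at least 2/(n(n-1)), so repeating it O(n² log n) times makes the failure probability polynomially small, which is the analysis done in the course.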
Lecture notes: Yes.
Literature:
- Randomized Algorithms, Rajeev Motwani and Prabhakar Raghavan, Cambridge University Press (1995)
- Probability and Computing, Michael Mitzenmacher and Eli Upfal, Cambridge University Press (2005)
263-4500-00L  Advanced Algorithms
Takes place for the last time.
W, 9 credits, 3V + 2U + 3A; M. Ghaffari, G. Zuzic
Abstract: This is a graduate-level course on algorithm design (and analysis). It covers a range of topics and techniques in approximation algorithms, sketching and streaming algorithms, and online algorithms.
Learning objective: This course familiarizes the students with some of the main tools and techniques in modern subareas of algorithm design.
Content: The lectures will cover a range of topics, tentatively including the following: graph sparsifications while preserving cuts or distances, various approximation algorithms techniques and concepts, metric embeddings and probabilistic tree embeddings, online algorithms, multiplicative weight updates, streaming algorithms, sketching algorithms, and derandomization.
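Among the listed topics, multiplicative weight updates is easy to sketch: maintain a weight per "expert" and decay each weight exponentially in that expert's loss every round, so that nearly all weight concentrates on the experts with smallest cumulative loss. The sketch below is illustrative only; the loss sequence and learning rate are invented.

```python
import math

def mwu(losses, eta=0.5):
    """losses: list of rounds, each a list of per-expert losses in [0, 1].
    Returns the final normalized weight vector."""
    n = len(losses[0])
    w = [1.0] * n
    for round_losses in losses:
        # Multiplicative update: weight shrinks by exp(-eta * loss)
        w = [wi * math.exp(-eta * li) for wi, li in zip(w, round_losses)]
    total = sum(w)
    return [wi / total for wi in w]

# Expert 0 is always right (loss 0), expert 1 always wrong (loss 1):
weights = mwu([[0.0, 1.0]] * 10)
print(weights)   # nearly all weight on expert 0
```

The standard regret bound says the weighted loss exceeds the best expert's loss by at most O(sqrt(T log n)) for a suitable eta, which is the form in which the technique is usually applied.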
Lecture notes: https://people.inf.ethz.ch/gmohsen/AA21/
Prerequisites / Notice: This course is designed for master's and doctoral students, and it especially targets those interested in theoretical computer science, but it should also be accessible to last-year bachelor students.

Sufficient comfort with both (A) Algorithm Design & Analysis and (B) Probability & Concentrations is expected. E.g., having passed the course Algorithms, Probability, and Computing (APC) is highly recommended, though not formally required. If you are not sure whether you're ready for this class, please consult the instructor.
327-2132-00L  Multifunctional Ferroic Materials: Growth and Characterisation
W, 2 credits, 2G; M. Trassin
Abstract: The course will explore the growth of (multi-)ferroic oxide thin films. The structural characterization and the investigation of the ferroic state by force microscopy and by laser-optical techniques will be addressed. Oxide-electronics device concepts will be discussed.
Learning objective: Oxide films with a thickness of just a few atoms can now be grown with a precision matching that of semiconductors. This opens up a whole world of functional device concepts and fascinating phenomena that would not occur in the expanded bulk crystal. Particularly interesting phenomena occur in films showing magnetic or electric order or, even better, both of these ("multiferroics").

In this course students will obtain an overarching view of the design of epitaxial oxide thin films and heterostructures, reaching from their growth by pulsed laser deposition to an understanding of their magnetoelectric functionality through advanced characterization techniques. Students will therefore understand how to fabricate and characterize highly oriented films with magnetic and electric properties not found in nature.
Content: Types of ferroic order; multiferroics; oxide materials; thin-film growth by pulsed laser deposition, molecular beam epitaxy and RF sputtering; structural characterization (reciprocal-space basics, XRD for thin films, RHEED); epitaxial-strain-related effects; scanning-probe microscopy techniques; laser-optical characterization; oxide-thin-film-based devices and examples.
401-3055-64L  Algebraic Methods in Combinatorics
W, 6 credits, 2V + 1U; B. Sudakov
Abstract: Combinatorics is a fundamental mathematical discipline as well as an essential component of many mathematical areas, and its study has experienced an impressive growth in recent years. This course provides a gentle introduction to algebraic methods, illustrated by examples and focusing on basic ideas and connections to other areas.
Learning objective: The students will get an overview of various algebraic methods for solving combinatorial problems. We expect them to understand the proof techniques and to use them autonomously on related problems.
Content: Combinatorics is a fundamental mathematical discipline as well as an essential component of many mathematical areas, and its study has experienced an impressive growth in recent years. While in the past many of the basic combinatorial results were obtained mainly by ingenuity and detailed reasoning, the modern theory has grown out of this early stage and often relies on deep, well-developed tools.

One of the main general techniques that played a crucial role in the development of Combinatorics was the application of algebraic methods. The most fruitful such tool is the dimension argument. Roughly speaking, the method can be described as follows. In order to bound the cardinality of a discrete structure A, one maps its elements to vectors in a linear space and shows that the set A is mapped to linearly independent vectors. It then follows that the cardinality of A is bounded by the dimension of the corresponding linear space. This simple idea is surprisingly powerful and has many famous applications.
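A classical instance of this dimension argument is the "oddtown" theorem: any family of odd-size subsets of an n-element set with pairwise even intersections has at most n members, because the characteristic vectors of such sets are linearly independent over GF(2). The sketch below checks such independence by computing rank over GF(2); the example family is invented and the code is illustrative only.

```python
import numpy as np

def rank_gf2(rows):
    """Rank of a 0/1 matrix over GF(2) via Gaussian elimination."""
    M = np.array(rows, dtype=np.int64) % 2
    n_rows, n_cols = M.shape
    rank = 0
    for c in range(n_cols):
        pivot = next((r for r in range(rank, n_rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]   # move pivot row into place
        for r in range(n_rows):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]               # eliminate column c mod 2
        rank += 1
    return rank

# Oddtown family on a 3-element ground set: the three singletons
# (odd sizes; all pairwise intersections are empty, hence even).
clubs = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(rank_gf2(clubs))   # → 3: independent, so the family has ≤ n = 3 members
```

By contrast, the even-size sets {1,2}, {2,3}, {1,3} have characteristic vectors summing to zero mod 2, so their GF(2) rank is only 2.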

This course provides a gentle introduction to Algebraic methods, illustrated by examples and focusing on basic ideas and connections to other areas. The topics covered in the class will include (but are not limited to):

Basic dimension arguments, Spaces of polynomials and tensor product methods, Eigenvalues of graphs and their application, the Combinatorial Nullstellensatz and the Chevalley-Warning theorem. Applications such as: Solution of Kakeya problem in finite fields, counterexample to Borsuk's conjecture, chromatic number of the unit distance graph of Euclidean space, explicit constructions of Ramsey graphs and many others.

The course website can be found at https://moodle-app2.let.ethz.ch/course/view.php?id=15757
Lecture notes: Lectures will be on the blackboard only, but there will be a set of typeset lecture notes which follow the class closely.
Prerequisites / Notice: Students are expected to have a mathematical background and should be able to write rigorous proofs.
401-5680-00L  Foundations of Data Science Seminar
Z, 0 credits; P. L. Bühlmann, H. Bölcskei, A. Sousa Bandeira, F. Yang
Abstract: Research colloquium
Learning objective: