Search result: Catalogue data in Autumn Semester 2021
Doctoral Department of Information Technology and Electrical Engineering. More information at: https://www.ethz.ch/en/doctorate.html
A minimum of 12 ECTS credit points must be obtained during doctoral studies. The courses offered below are only a small selection from a much larger number of available courses. Please discuss your course selection with your PhD supervisor.
Number | Title | Type | ECTS | Hours | Lecturers
---|---|---|---|---|---
151-0371-00L | Advanced Model Predictive Control (number of participants limited to 60) | W | 4 credits | 2V + 1U | M. Zeilinger, A. Carron, L. Hewing, J. Köhler
Abstract | Model predictive control (MPC) has established itself as a powerful control technique for complex systems under state and input constraints. This course discusses the theory and application of recent advanced MPC concepts, focusing on system uncertainties and safety, as well as data-driven formulations and learning-based control.
Learning objective | Design, implement and analyze advanced MPC formulations for robust and stochastic uncertainty descriptions, in particular with data-driven formulations.
Content | Topics include: review of Bayesian statistics, stochastic systems and stochastic optimal control; nominal MPC for uncertain systems (nominal robustness); robust MPC; stochastic MPC; set-membership identification and robust data-driven MPC; Bayesian regression and stochastic data-driven MPC; MPC as a safety filter for reinforcement learning.
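The receding-horizon principle behind all of these formulations can be sketched in a few lines. The scalar model, weights, and exhaustive input search below are illustrative stand-ins (a real MPC controller would solve a constrained quadratic program instead):

```python
import itertools

def mpc_step(x, horizon=4, inputs=(-1.0, 0.0, 1.0),
             a=1.0, b=0.5, q=1.0, r=0.1):
    """Return the first input of the lowest-cost input sequence over the horizon.

    Exhaustive search over a small discrete input set stands in for a QP
    solver; the model x+ = a*x + b*u and the weights q, r are made up.
    """
    best_cost, best_u0 = float("inf"), 0.0
    for seq in itertools.product(inputs, repeat=horizon):
        xk, cost = x, 0.0
        for u in seq:
            xk = a * xk + b * u              # predict one step ahead
            cost += q * xk * xk + r * u * u  # stage cost
        if cost < best_cost:
            best_cost, best_u0 = cost, seq[0]
    return best_u0

# Receding horizon: apply only the first input, then re-plan from the new state.
x = 3.0
for _ in range(10):
    u = mpc_step(x)
    x = 1.0 * x + 0.5 * u  # the "plant" (same model, no disturbance)
```

With the input constrained to [-1, 1], the controller drives the state to the origin in bounded steps, which is exactly the constraint-handling behavior that distinguishes MPC from unconstrained optimal control.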
Lecture notes | Lecture notes will be provided.
Prerequisites / Notice | Basic courses in control, an advanced course in optimal control, and a basic MPC course (e.g. 151-0660-00L Model Predictive Control) are strongly recommended. Background in linear algebra and stochastic systems is recommended.
227-0105-00L | Introduction to Estimation and Machine Learning | W | 6 credits | 4G | H.‑A. Loeliger
Abstract | Mathematical basics of estimation and machine learning, with a view towards applications in signal processing.
Learning objective | Students master the basic mathematical concepts and algorithms of estimation and machine learning.
Content | Review of probability theory; basics of statistical estimation; least squares and linear learning; Hilbert spaces; Gaussian random variables; singular-value decomposition; kernel methods, neural networks, and more.
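As a minimal, self-contained illustration of the least-squares estimation listed above (a made-up straight-line fit, not course material), the normal equations for the model y ≈ w·x + c can be solved in closed form:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y ≈ w*x + c via the 2x2 normal equations."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    det = n * sxx - sx * sx   # nonzero as long as the xs are not all equal
    w = (n * sxy - sx * sy) / det
    c = (sxx * sy - sx * sxy) / det
    return w, c

# noisy samples of y = 2x + 1 (numbers invented for illustration)
w, c = fit_line([0.0, 1.0, 2.0, 3.0], [1.0, 3.1, 4.9, 7.0])
```

On noise-free data the fit is exact; with noise, the estimate minimizes the sum of squared residuals, the same criterion that generalizes to the Hilbert-space view taken in the course.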
Lecture notes | Lecture notes will be handed out as the course progresses.
Prerequisites / Notice | Solid basics in linear algebra and probability theory.
227-0146-00L | Analog-to-Digital Converters (does not take place this semester) | W | 6 credits | 2V + 2U |
Abstract | This course provides a thorough treatment of integrated data conversion systems, from system-level specifications and trade-offs over architecture choice down to circuit implementation.
Learning objective | Data conversion systems are substantial sub-parts of many electronic systems, e.g. the audio conversion system of a home-cinema system or the base-band front-end of a wireless modem. Data conversion systems usually determine the performance of the overall system in terms of dynamic range and linearity. The student will learn to understand the basic principles behind data conversion and be introduced to the different methods and circuit architectures used to implement such a conversion. Conversion methods such as successive approximation or algorithmic conversion are explained with their principle of operation, accompanied by the appropriate mathematical calculations, including the effects of non-idealities in some cases. After successful completion of the course the student should understand the concept of an ideal ADC and know all major converter architectures, their principle of operation and what governs their performance.
Content | - Introduction: information representation and communication; abstraction, categorization and symbolic representation; basic conversion algorithms; data converter applications; trade-offs among key parameters; ADC taxonomy. - Dual-slope & successive approximation register (SAR) converters: dual-slope principle & converter; SAR ADC operating principle; SAR implementation with a capacitive array; range extension with a segmented array. - Algorithmic & pipelined A/D converters: algorithmic conversion principle; sample & hold stage; pipelined converter; multiplying DAC; flash sub-ADC and n-bit MDAC; redundancy for correction of non-idealities, error correction. - Performance metrics and non-linearity: ideal ADC; offset, gain error, differential and integral non-linearities; capacitor mismatch; impact of capacitor mismatch on SAR ADC performance. - Flash, folding and interpolating analog-to-digital converters: flash ADC principle, thermometer-to-binary coding, sparkle correction; limitations of flash converters; the folding principle, residue extraction; folding amplifiers; cascaded folding; interpolation for folding converters; cascaded folding and interpolation. - Noise in analog-to-digital converters: types of noise; noise calculation in electronic circuits, kT/C noise, sampled noise; noise analysis in switched-capacitor circuits; aperture time uncertainty and sampling jitter. - Delta-sigma A/D converters: linearity and resolution; from delta modulation to delta-sigma modulation; first-order delta-sigma modulation, circuit-level implementation; clock jitter & SNR in delta-sigma modulators; second-order delta-sigma modulation, higher-order modulation, design procedure for a single-loop modulator. - Digital-to-analog converters: introduction; current-scaling D/A converter, current-steering DAC, calibration for improved performance.
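The SAR operating principle listed above amounts to a binary search against an internal DAC. The idealized behavioral model below (which ignores the capacitor-mismatch and noise effects treated later in the course) illustrates it:

```python
def sar_adc(vin, vref=1.0, bits=8):
    """Idealized SAR conversion: test bits from MSB to LSB, keeping each
    bit whose trial DAC voltage does not exceed the input."""
    code = 0
    for i in reversed(range(bits)):
        trial = code | (1 << i)
        v_dac = trial * vref / (1 << bits)  # ideal capacitive-array DAC output
        if v_dac <= vin:                    # ideal comparator decision
            code = trial
    return code

# 0.4 V into an 8-bit converter with 1 V full scale
code = sar_adc(0.4)
```

After `bits` comparator decisions the code equals the largest value whose DAC voltage is at or below the input, i.e. quantization to within one LSB, which is why an n-bit SAR converter needs exactly n clock cycles per sample.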
Lecture notes | Slides are available online at https://iis-students.ee.ethz.ch/lectures/analog-to-digital-converters/
Literature | - B. Razavi, Principles of Data Conversion System Design, IEEE Press, 1994. - M. Gustavsson et al., CMOS Data Converters for Communications, Springer, 2010. - R. J. van de Plassche, CMOS Integrated Analog-to-Digital and Digital-to-Analog Converters, Springer, 2010.
Prerequisites / Notice | It is highly recommended to attend the course "Analog Integrated Circuits" of Prof. T. Jang as preparation for this course.
227-0225-00L | Linear System Theory | W | 6 credits | 5G | A. Iannelli
Abstract | The class is intended to provide a comprehensive overview of the theory of linear dynamical systems, stability analysis, and their use in control and estimation. The focus is on the mathematics behind the physical properties of these systems and on understanding and constructing proofs of properties of linear control systems.
Learning objective | Students should be able to apply the fundamental results in linear system theory to analyze and control linear dynamical systems.
Content | - Proof techniques and practices. - Linear spaces, normed linear spaces and Hilbert spaces. - Ordinary differential equations, existence and uniqueness of solutions. - Continuous and discrete-time, time-varying linear systems; time-domain solutions; time-invariant systems treated as a special case. - Controllability and observability, duality; time-invariant systems treated as a special case. - Stability and stabilization, observers, state and output feedback, separation principle.
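For instance, the Kalman rank condition for controllability of a two-state, single-input system x' = Ax + Bu can be checked directly; the double-integrator example below is a toy illustration, not part of the course material:

```python
def is_controllable_2x2(A, B):
    """Kalman rank test for n = 2: the controllability matrix [B, AB]
    must have full rank, i.e. a nonzero determinant."""
    ab0 = A[0][0] * B[0] + A[0][1] * B[1]  # first entry of A @ B
    ab1 = A[1][0] * B[0] + A[1][1] * B[1]  # second entry of A @ B
    det = B[0] * ab1 - B[1] * ab0          # det([B, AB])
    return det != 0

A = [[0.0, 1.0], [0.0, 0.0]]               # double integrator
force_input = is_controllable_2x2(A, [0.0, 1.0])     # actuate acceleration
position_input = is_controllable_2x2(A, [1.0, 0.0])  # "actuate" position only
```

Actuating the acceleration makes the system controllable, while injecting the input only into the position state does not, matching the intuition that velocity can then never be influenced.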
Lecture notes | Available on the course Moodle platform.
Prerequisites / Notice | Sufficient mathematical maturity, in particular in linear algebra and analysis.
227-0377-10L | Physics of Failure and Reliability of Electronic Devices and Systems | W | 3 credits | 2V | I. Shorubalko, M. Held
Abstract | Understanding the physics of failure and failure mechanisms enables reliability analysis and serves as a practical guide for electronic device design, integration, systems development and manufacturing. The field gains additional importance as electronics become ever more complex and scaled down, with growing demands on safety, sustainability and environmental impact.
Learning objective | Provide an understanding of the physics of failure and reliability. Introduce degradation and failure mechanisms, the basics of failure analysis, and the methods and tools of reliability testing.
Content | Summary of reliability and failure analysis terminology; physics of failure: materials properties, physical processes and failure mechanisms; failure analysis; basics and properties of instruments; quality assurance of technical systems (introduction); introduction to stochastic processes; reliability analysis; component selection and qualification; maintainability analysis (introduction); design rules for reliability, maintainability, reliability tests (introduction).
Lecture notes | Comprehensive copy of the transparencies.
Literature | Reliability Engineering: Theory and Practice, 8th Edition, Springer, 2017, DOI 10.1007/978-3-662-54209-5
227-0417-00L | Information Theory I | W | 6 credits | 4G | A. Lapidoth
Abstract | This course covers the basic concepts of information theory and of communication theory. Topics covered include the entropy rate of a source, mutual information, typical sequences, the asymptotic equipartition property, Huffman coding, channel capacity, the channel coding theorem, the source-channel separation theorem, and feedback capacity.
Learning objective | The fundamentals of information theory, including Shannon's source coding and channel coding theorems.
Content | The entropy rate of a source, typical sequences, the asymptotic equipartition property, the source coding theorem, Huffman coding, arithmetic coding, channel capacity, the channel coding theorem, the source-channel separation theorem, feedback capacity.
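Huffman coding, listed above, can be implemented with a binary heap in a few lines. This sketch (with an invented toy frequency table) returns a prefix-free code:

```python
import heapq

def huffman_codes(freqs):
    """Build a Huffman code for a symbol->weight dict; returns symbol->bitstring."""
    heap = [(w, i, {sym: ""}) for i, (sym, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)                 # keeps tuple comparison well-defined
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)  # merge the two least-frequent subtrees
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (w1 + w2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

codes = huffman_codes({"a": 5, "b": 2, "c": 1, "d": 1})
```

By the source coding theorem, the resulting expected codeword length lies within one bit of the source entropy; frequent symbols receive short codewords and rare ones long codewords.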
Literature | T. M. Cover and J. A. Thomas, Elements of Information Theory, 2nd edition, Wiley, 2006.
227-0427-00L | Signal Analysis, Models, and Machine Learning (does not take place this semester; replaced by "Introduction to Estimation and Machine Learning" and "Advanced Signal Analysis, Modeling, and Machine Learning") | W | 6 credits | 4G | H.‑A. Loeliger
Abstract | Mathematical methods in signal processing and machine learning. I. Linear signal representation and approximation: Hilbert spaces, LMMSE estimation, regularization and sparsity. II. Learning linear and nonlinear functions and filters: neural networks, kernel methods. III. Structured statistical models: hidden Markov models, factor graphs, Kalman filter, Gaussian models with sparse events.
Learning objective | The course is an introduction to some basic topics in signal processing and machine learning.
Content | Part I - Linear Signal Representation and Approximation: Hilbert spaces, least squares and LMMSE estimation, projection and estimation by linear filtering, learning linear functions and filters, L2 regularization, L1 regularization and sparsity, singular-value decomposition and pseudo-inverse, principal-components analysis. Part II - Learning Nonlinear Functions: fundamentals of learning, neural networks, kernel methods. Part III - Structured Statistical Models and Message Passing Algorithms: hidden Markov models, factor graphs, Gaussian message passing, Kalman filter and recursive least squares, Monte Carlo methods, parameter estimation, expectation maximization, linear Gaussian models with sparse events.
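The Kalman filter mentioned in Part III reduces, for a scalar random-walk model, to a few lines. This is an illustrative sketch with made-up noise variances, not course material:

```python
def kalman_1d(measurements, q=0.01, r=0.1, x0=0.0, p0=1.0):
    """Scalar Kalman filter for the random-walk model x[k+1] = x[k] + w[k],
    with process-noise variance q and measurement-noise variance r."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q            # predict: variance grows by the process noise
        k = p / (p + r)      # Kalman gain
        x = x + k * (z - x)  # update with the measurement residual
        p = (1.0 - k) * p    # posterior variance
        estimates.append(x)
    return estimates

est = kalman_1d([1.0] * 50)  # constant measurements pull the estimate to 1
```

The gain k balances trust in the model against trust in the data: large r slows the update, large q speeds it up, which is the scalar shadow of the matrix recursions treated in the course.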
Lecture notes | Lecture notes.
Prerequisites / Notice | Prerequisites: for local Bachelor students, the course "Discrete-Time and Statistical Signal Processing" (5th semester); for others, solid basics in linear algebra and probability theory.
227-0689-00L | System Identification | W | 4 credits | 2V + 1U | R. Smith
Abstract | Theory and techniques for the identification of dynamic models from experimentally obtained system input-output data.
Learning objective | To provide a series of practical techniques for the development of dynamical models from experimental data, with the emphasis on models suitable for feedback control design. To provide sufficient theory to enable the practitioner to understand the trade-offs between model accuracy, data quality and data quantity.
Content | Introduction to modeling: black-box and grey-box models; parametric and non-parametric models; ARX, ARMAX (etc.) models. Predictive, open-loop, black-box identification methods. Time- and frequency-domain methods. Subspace identification methods. Optimal experimental design, Cramér-Rao bounds, input signal design. Parametric identification methods. On-line and batch approaches. Closed-loop identification strategies. Trade-off between controller performance and information available for identification.
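For a first-order ARX model y[k] = a·y[k-1] + b·u[k-1] + e[k], the least-squares estimate reduces to a pair of 2x2 normal equations. A self-contained sketch on synthetic, noise-free data (all numbers invented for illustration):

```python
def identify_arx1(u, y):
    """Least-squares fit of y[k] = a*y[k-1] + b*u[k-1] (2x2 normal equations)."""
    s_yy = s_yu = s_uu = t_y = t_u = 0.0
    for k in range(1, len(y)):
        phi_y, phi_u = y[k - 1], u[k - 1]  # regressors
        s_yy += phi_y * phi_y
        s_yu += phi_y * phi_u
        s_uu += phi_u * phi_u
        t_y += phi_y * y[k]
        t_u += phi_u * y[k]
    det = s_yy * s_uu - s_yu * s_yu        # needs a persistently exciting input
    a = (t_y * s_uu - t_u * s_yu) / det
    b = (s_yy * t_u - s_yu * t_y) / det
    return a, b

# synthetic input-output data from a known system: a = 0.8, b = 0.5
u = [1.0 if k % 3 == 0 else -0.5 for k in range(50)]
y = [0.0]
for k in range(1, 50):
    y.append(0.8 * y[k - 1] + 0.5 * u[k - 1])

a_hat, b_hat = identify_arx1(u, y)
```

With noise-free data and a sufficiently rich input the true parameters are recovered exactly; with noise, the estimate's variance depends on the excitation, which is the data-quality trade-off the course formalizes.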
Literature | L. Ljung, System Identification: Theory for the User, 2nd edition, Prentice Hall, 1999. Additional papers will be available via the course Moodle.
Prerequisites / Notice | Control Systems (227-0216-00L) or equivalent.
227-0955-00L | Seminar in Electromagnetics, Photonics and Terahertz | W | 3 credits | 2S | J. Leuthold
Abstract | Selected topics of the current research activities at the IEF and closely related institutions are discussed.
Learning objective | Gain an overview of the research activities of the IEF institute.
227-0974-00L | TNU Colloquium | W | 0 credits | 2K | K. Stephan
Abstract | This colloquium for MSc/PhD students at D-ITET discusses research in Translational Neuromodeling (the development of mathematical models for the diagnostics of brain diseases) and its application to Computational Psychiatry/Psychosomatics. The range of topics is broad, including computational (generative) modeling, experimental paradigms (fMRI, EEG, behaviour), and clinical questions.
Learning objective | See abstract.
Content | See abstract.
252-0535-00L | Advanced Machine Learning | W | 10 credits | 3V + 2U + 4A | J. M. Buhmann, C. Cotrini Jimenez
Abstract | Machine learning algorithms provide analytical methods to search data sets for characteristic patterns. Typical tasks include the classification of data, function fitting and clustering, with applications in image and speech analysis, bioinformatics and exploratory data analysis. This course is accompanied by practical machine learning projects.
Learning objective | Students will be familiarized with advanced concepts and algorithms for supervised and unsupervised learning, and will reinforce the statistics knowledge that is indispensable for solving modeling problems under uncertainty. Key concepts are the generalization ability of algorithms and systematic approaches to modeling and regularization. Machine learning projects will provide an opportunity to test the algorithms on real-world data.
Content | The theory of fundamental machine learning concepts is presented in the lecture and illustrated with relevant applications. Students can deepen their understanding by solving both pen-and-paper and programming exercises, where they implement and apply famous algorithms to real-world data. Topics covered in the lecture include: Fundamentals: what is data?, Bayesian learning, computational learning theory. Supervised learning: ensembles (bagging and boosting), max-margin methods, neural networks. Unsupervised learning: dimensionality reduction techniques, clustering, mixture models, non-parametric density estimation, learning dynamical systems.
Lecture notes | No lecture notes, but slides will be made available on the course webpage.
Literature | C. Bishop, Pattern Recognition and Machine Learning, Springer, 2007. R. Duda, P. Hart, and D. Stork, Pattern Classification, John Wiley & Sons, 2nd edition, 2001. T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning: Data Mining, Inference and Prediction, Springer, 2001. L. Wasserman, All of Statistics: A Concise Course in Statistical Inference, Springer, 2004.
Prerequisites / Notice | The course requires solid basic knowledge in analysis, statistics and numerical methods for CSE, as well as practical programming experience for solving assignments. Students should have followed at least "Introduction to Machine Learning" or an equivalent course offered by another institution. PhD students are required to obtain a passing grade in the course (4.0 or higher, based on project and exam) to gain credit points.
252-0417-00L | Randomized Algorithms and Probabilistic Methods | W | 10 credits | 3V + 2U + 4A | A. Steger
Abstract | Las Vegas & Monte Carlo algorithms; inequalities of Markov, Chebyshev and Chernoff; negative correlation; Markov chains: convergence, rapid mixing; generating functions. Examples include: min cut, median, balls and bins, routing in hypercubes, 3SAT, card shuffling, random walks.
Learning objective | After this course students will know fundamental techniques from probabilistic combinatorics for designing randomized algorithms and will be able to apply them to solve typical problems in these areas.
Content | Randomized algorithms are algorithms that "flip coins" to take certain decisions. This concept extends the classical model of deterministic algorithms and has become very popular and useful within the last twenty years. In many cases, randomized algorithms are faster, simpler or just more elegant than deterministic ones. In the course, we will discuss basic principles and techniques and derive from them a number of randomized methods for problems in different areas.
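A classic example of such a coin-flipping (Las Vegas) algorithm is randomized quickselect for the median: the answer is always correct, only the running time is random. A minimal sketch:

```python
import random

def quickselect(items, k):
    """Return the k-th smallest element (0-indexed) of items.

    Las Vegas randomization: the random pivot affects only the running
    time (expected O(n)), never the correctness of the answer."""
    a = list(items)
    while True:
        pivot = random.choice(a)
        lows = [x for x in a if x < pivot]
        pivots = [x for x in a if x == pivot]
        if k < len(lows):
            a = lows                        # answer lies below the pivot
        elif k < len(lows) + len(pivots):
            return pivot                    # the pivot is the answer
        else:
            k -= len(lows) + len(pivots)    # answer lies above the pivot
            a = [x for x in a if x > pivot]

data = [3, 1, 4, 1, 5, 9, 2, 6]
median = quickselect(data, len(data) // 2)
```

Compare with a Monte Carlo algorithm, where the running time is fixed but the answer is only correct with high probability; the course covers both flavors.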
Lecture notes | Yes.
Literature | - R. Motwani and P. Raghavan, Randomized Algorithms, Cambridge University Press, 1995. - M. Mitzenmacher and E. Upfal, Probability and Computing, Cambridge University Press, 2005.
263-4500-00L | Advanced Algorithms (takes place for the last time) | W | 9 credits | 3V + 2U + 3A | M. Ghaffari, G. Zuzic
Abstract | This is a graduate-level course on algorithm design (and analysis). It covers a range of topics and techniques in approximation algorithms, sketching and streaming algorithms, and online algorithms.
Learning objective | This course familiarizes the students with some of the main tools and techniques in modern subareas of algorithm design.
Content | The lectures will cover a range of topics, tentatively including the following: graph sparsification preserving cuts or distances, various approximation algorithm techniques and concepts, metric embeddings and probabilistic tree embeddings, online algorithms, multiplicative weight updates, streaming algorithms, sketching algorithms, and derandomization.
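As a taste of the streaming-algorithms topic, reservoir sampling keeps a uniform sample of k items from a stream of unknown length in a single pass (a standard textbook technique, sketched here with invented stream sizes):

```python
import random

def reservoir_sample(stream, k):
    """One-pass uniform sample of k items from an iterable of unknown length."""
    sample = []
    for i, x in enumerate(stream):
        if i < k:
            sample.append(x)          # fill the reservoir first
        else:
            j = random.randrange(i + 1)
            if j < k:                 # keep x with probability k / (i + 1)
                sample[j] = x
    return sample

sample = reservoir_sample(range(10**5), 5)
```

The memory footprint is O(k) regardless of the stream length, which is the defining constraint of the streaming model covered in the lectures.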
Lecture notes | https://people.inf.ethz.ch/gmohsen/AA21/
Prerequisites / Notice | This course is designed for Master's and doctoral students, and it especially targets those interested in theoretical computer science, but it should also be accessible to last-year Bachelor students. Sufficient comfort with both (A) algorithm design & analysis and (B) probability & concentration inequalities is expected. E.g., having passed the course Algorithms, Probability, and Computing (APC) is highly recommended, though not formally required. If you are not sure whether you are ready for this class, please consult the instructor.
327-2132-00L | Multifunctional Ferroic Materials: Growth and Characterisation | W | 2 credits | 2G | M. Trassin
Abstract | The course will explore the growth of (multi-)ferroic oxide thin films. Structural characterization and investigation of the ferroic state by force microscopy and by laser-optical techniques will be addressed. Oxide electronics device concepts will be discussed.
Learning objective | Oxide films with a thickness of just a few atoms can now be grown with a precision matching that of semiconductors. This opens up a whole world of functional device concepts and fascinating phenomena that would not occur in the expanded bulk crystal. Particularly interesting phenomena occur in films showing magnetic or electric order or, even better, both ("multiferroics"). In this course students will obtain an overarching view of epitaxial oxide thin films and heterostructure design, reaching from their growth by pulsed laser deposition to an understanding of their magnetoelectric functionality gained from advanced characterization techniques. Students will therefore understand how to fabricate and characterize highly oriented films with magnetic and electric properties not found in nature.
Content | Types of ferroic order, multiferroics, oxide materials; thin-film growth by pulsed laser deposition, molecular beam epitaxy, RF sputtering; structural characterization (reciprocal-space basics, XRD for thin films, RHEED); epitaxial-strain-related effects; scanning probe microscopy techniques; laser-optical characterization; oxide thin-film based devices and examples.
401-3055-64L | Algebraic Methods in Combinatorics | W | 6 credits | 2V + 1U | B. Sudakov
Abstract | Combinatorics is a fundamental mathematical discipline as well as an essential component of many mathematical areas, and its study has experienced an impressive growth in recent years. This course provides a gentle introduction to algebraic methods, illustrated by examples and focusing on basic ideas and connections to other areas.
Learning objective | The students will get an overview of various algebraic methods for solving combinatorial problems. We expect them to understand the proof techniques and to use them autonomously on related problems.
Content | While in the past many of the basic combinatorial results were obtained mainly by ingenuity and detailed reasoning, the modern theory has grown out of this early stage and often relies on deep, well-developed tools. One of the main general techniques that played a crucial role in the development of combinatorics is the application of algebraic methods, the most fruitful of which is the dimension argument. Roughly speaking, the method can be described as follows. In order to bound the cardinality of a discrete structure A, one maps its elements to vectors in a linear space and shows that the elements of A are mapped to linearly independent vectors. It then follows that the cardinality of A is bounded by the dimension of the corresponding linear space. This simple idea is surprisingly powerful and has many famous applications. The topics covered in the class will include (but are not limited to): basic dimension arguments, spaces of polynomials and tensor product methods, eigenvalues of graphs and their applications, the Combinatorial Nullstellensatz and the Chevalley-Warning theorem. Applications such as: the solution of the Kakeya problem in finite fields, a counterexample to Borsuk's conjecture, the chromatic number of the unit-distance graph of Euclidean space, explicit constructions of Ramsey graphs, and many others. The course website can be found at https://moodle-app2.let.ethz.ch/course/view.php?id=15757
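The dimension argument described above can be made concrete in a few lines: encode each set as a characteristic vector over the two-element field F_2 and compute the rank. This "oddtown"-style toy example is illustrative, not part of the course:

```python
def gf2_rank(vectors):
    """Rank of a list of bitmask vectors over the two-element field F_2."""
    basis = []                 # kept sorted descending, distinct leading bits
    for v in vectors:
        for b in basis:        # reduce v by each basis element in turn
            v = min(v, v ^ b)  # clears b's leading bit from v if it is set
        if v:
            basis.append(v)
            basis.sort(reverse=True)
    return len(basis)

# Characteristic vectors of {0}, {1}, {2}: an "oddtown" family
# (odd sizes, even pairwise intersections) -> linearly independent over F_2.
family = [0b001, 0b010, 0b100]

# Adding {0,1,2} breaks the even-intersection condition, and indeed the
# vectors become dependent: 0b111 = 0b001 ^ 0b010 ^ 0b100.
not_oddtown = [0b001, 0b010, 0b100, 0b111]
```

Since independent vectors in F_2^n number at most n, an oddtown family over an n-element ground set has at most n sets, which is the bound the dimension argument delivers.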
Lecture notes | Lectures will be on the blackboard only, but there will be a set of typeset lecture notes that follow the class closely.
Prerequisites / Notice | Students are expected to have a mathematical background and should be able to write rigorous proofs.
401-5680-00L | Foundations of Data Science Seminar | Z | 0 credits | P. L. Bühlmann, H. Bölcskei, A. Sousa Bandeira, F. Yang
Abstract | Research colloquium
Learning objective |