Search result: Catalogue data in Autumn Semester 2017
|Computational Science and Engineering Master|
|151-0113-00L||Applied Fluid Dynamics||W||4 credits||2V + 1U||J.‑P. Kunsch|
|Abstract||Applied Fluid Dynamics|
The methods of fluid dynamics play an important role in the description of a chain of events, involving the release, spreading and dilution of dangerous fluids in the environment.
Tunnel ventilation systems and strategies are studied, which must meet severe requirements during normal operation and in emergency situations (tunnel fires etc.).
|Objective||Generally applicable methods in fluid dynamics and gas dynamics are illustrated and practiced using selected current examples.|
|Content||Often experts fall back on the methodology of fluid dynamics when involved in the construction of environmentally friendly processing and incineration facilities, as well as when choosing safe transport and storage options for dangerous materials. As a result of accidents, but also in normal operations, dangerous gases and liquids may escape and be transported further by wind or flowing water.|
There are many possible forms that the resulting damage may take, including fire and explosion when flammable substances are mixed. The topics covered include: emissions of liquids and gases from containers and pipelines, evaporation from pools and vaporization of gases kept under pressure, the spread and dilution of waste-gas plumes in the wind, deflagration and detonation of flammable gases, fireballs from gases held under pressure, and pollution and exhaust gases in tunnels (tunnel fires etc.).
|Lecture notes||not available|
|Prerequisites / Notice||Requirements: successful attendance at lectures "Fluiddynamik I und II", "Thermodynamik I und II"|
|151-0709-00L||Stochastic Methods for Engineers and Natural Scientists |
Number of participants limited to 30.
|W||4 credits||3G||D. W. Meyer-Massetti|
|Abstract||The course provides an introduction to stochastic methods applicable, for example, to the description and modeling of turbulent and subsurface flows. Moreover, mathematical techniques are presented that are used to quantify uncertainty in various engineering applications.|
|Objective||By the end of the course you should be able to mathematically describe random quantities and their effect on physical systems. Moreover, you should be able to develop basic stochastic models of such systems.|
|Content||- Probability theory, single and multiple random variables, mappings of random variables|
- Estimation of statistical moments and probability densities based on data
- Stochastic differential equations, Ito calculus, PDF evolution equations
- Polynomial chaos and other expansion methods
All topics are illustrated with engineering applications.
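As an illustration of the stochastic-differential-equation and moment-estimation topics above, the following sketch (not course material; all parameter values are invented for the example) applies the Euler-Maruyama scheme to an Ornstein-Uhlenbeck process and estimates moments by Monte Carlo:

```python
import math
import random

def simulate_ou(theta=1.0, mu=0.0, sigma=0.5, x0=2.0, dt=0.01, steps=500, seed=0):
    """One Euler-Maruyama path of the Ornstein-Uhlenbeck SDE
    dX = theta*(mu - X) dt + sigma dW, returning X at the final time T = steps*dt."""
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        x += theta * (mu - x) * dt + sigma * rng.gauss(0.0, math.sqrt(dt))
    return x

def estimate_moments(n_paths=2000):
    """Monte Carlo estimate of the mean and variance of X(T) over many paths."""
    samples = [simulate_ou(seed=i) for i in range(n_paths)]
    mean = sum(samples) / n_paths
    var = sum((s - mean) ** 2 for s in samples) / (n_paths - 1)
    return mean, var
```

With these parameters the exact mean is 2·e^(−5) ≈ 0.013 and the variance approaches the stationary value σ²/(2θ) = 0.125, which the estimates reproduce up to sampling error.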
|Lecture notes||Detailed lecture notes will be provided.|
|Literature||Some textbooks related to the material covered in the course:|
Stochastic Methods: A Handbook for the Natural and Social Sciences, Crispin Gardiner, Springer, 2010
The Fokker-Planck Equation: Methods of Solutions and Applications, Hannes Risken, Springer, 1996
Turbulent Flows, S.B. Pope, Cambridge University Press, 2000
Spectral Methods for Uncertainty Quantification, O.P. Le Maitre and O.M. Knio, Springer, 2010
|151-0317-00L||Visualization, Simulation and Interaction - Virtual Reality II||W||4 credits||3G||A. Kunz|
|Abstract||This lecture provides deeper knowledge of the possible applications of virtual reality, its underlying technology, and future research fields. The goal is to provide solid knowledge of Virtual Reality for possible future use in business processes.|
|Objective||Virtual Reality can not only be used for the visualization of 3D objects, but also offers a wide application field for small and medium enterprises (SMEs). It can serve, for instance, as an enabling technology for net-based collaboration, the transmission of images and other data, the interaction of the human user with the digital environment, or the use of augmented reality systems.|
The goal of the lecture is to provide a deeper knowledge of today's VR environments that are used in business processes. The technical background, the algorithms, and the applied methods are explained more in detail. Finally, future tasks of VR will be discussed and an outlook on ongoing international research is given.
|Content||Introduction to Virtual Reality; basics of augmented reality; interaction with digital data; tangible user interfaces (TUI); basics of simulation; compression of image, audio, and video signals; new materials for force-feedback devices; introduction to data security; cryptography; definition of free-form surfaces; digital factory; new research fields of virtual reality|
|Lecture notes||The handout is available in German and English.|
|Prerequisites / Notice||Prerequisites:|
"Visualization, Simulation and Interaction - Virtual Reality I" is recommended.
The course consists of lectures and exercises.
|151-0833-00L||Principles of Nonlinear Finite-Element-Methods||W||5 credits||2V + 2U||N. Manopulo, B. Berisha|
|Abstract||Most problems in engineering are of a nonlinear nature. The nonlinearities arise mainly from nonlinear material behavior, contact conditions, and structural instability. The principles of the nonlinear Finite-Element-Method (FEM) for treating such problems are introduced in this lecture.|
|Objective||The goal of the lecture is to provide the students with the fundamentals of the nonlinear Finite Element Method (FEM). The lecture focuses on the principles of the nonlinear Finite-Element-Method based on explicit and implicit formulations. Typical applications of the nonlinear Finite-Element-Method are simulations of:|
- Collapse of structures
- Materials in Biomechanics (soft materials)
- General forming processes
Special attention will be paid to the modeling of nonlinear material behavior, thermo-mechanical processes, and processes with large plastic deformations. The ability to independently create a virtual model describing such complex nonlinear systems will be acquired through accompanying exercises. These include the Matlab programming of important model components such as constitutive equations.
|Content||- Fundamentals of continuum mechanics to characterize large plastic deformations|
- Elasto-plastic material models
- Updated-Lagrange (UL), Euler and combined Euler-Lagrange (ALE) approaches
- FEM implementation of constitutive equations
- Element formulations
- Implicit and explicit FEM methods
- FEM formulations of coupled thermo-mechanical problems
- Modeling of tool contact and the influence of friction
- Solvers and convergence
- Modeling of crack propagation
- Introduction of advanced FE-Methods
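To make the implicit solution procedure concrete, here is a minimal sketch (not from the lecture; the stiffening material law f_int(u) = k1·u + k2·u³ and all constants are invented for illustration) of a Newton-Raphson iteration with a consistent tangent for a single-DOF nonlinear spring:

```python
def newton_solve(f_ext, k1=100.0, k2=50.0, tol=1e-10, max_iter=50):
    """Newton-Raphson iteration for a single-DOF nonlinear spring with
    internal force f_int(u) = k1*u + k2*u**3.  Solves f_int(u) = f_ext
    using the consistent tangent stiffness dr/du at each iterate."""
    u = 0.0
    for _ in range(max_iter):
        residual = f_ext - (k1 * u + k2 * u ** 3)   # out-of-balance force
        if abs(residual) < tol:
            break
        k_tangent = k1 + 3.0 * k2 * u ** 2          # consistent tangent stiffness
        u += residual / k_tangent                   # Newton update
    return u
```

For f_ext = 150 the exact solution is u = 1 (since 100·1 + 50·1³ = 150), which the iteration reaches quadratically.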
|Literature||Bathe, K. J., Finite-Element-Procedures, Prentice-Hall, 1996|
|Prerequisites / Notice||If there is a large number of students, two dates for the exercises will be offered.|
|263-5001-00L||Introduction to Finite Elements and Sparse Linear System Solving||W||4 credits||2V + 1U||P. Arbenz|
|Abstract||The finite element (FE) method is the method of choice for (approximately) solving partial differential equations on complicated domains. In the first third of the lecture, we give an introduction to the method. The rest of the lecture is devoted to methods for solving the large sparse linear systems of equations that are typical for the FE method. We will consider direct and iterative methods.|
|Objective||Students will know the most important direct and iterative solvers for sparse linear systems. They will be able to determine which solver to choose in particular situations.|
|Content||I. THE FINITE ELEMENT METHOD|
(1) Introduction, model problems.
(2) 1D problems. Piecewise polynomials in 1D.
(3) 2D problems. Triangulations. Piecewise polynomials in 2D.
(4) Variational formulations. Galerkin finite element method.
(5) Implementation aspects.
II. DIRECT SOLUTION METHODS
(6) LU and Cholesky decomposition.
(7) Sparse matrices.
(8) Fill-reducing orderings.
III. ITERATIVE SOLUTION METHODS
(9) Stationary iterative methods, preconditioning.
(10) Preconditioned conjugate gradient method (PCG).
(11) Incomplete factorization preconditioning.
(12) Multigrid preconditioning.
(13) Nonsymmetric problems (GMRES, BiCGstab).
(14) Indefinite problems (SYMMLQ, MINRES).
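As a toy illustration tying parts I and III together (a sketch, not course material), the following assembles the 1D model problem −u'' = 1 on (0,1) with homogeneous Dirichlet conditions using piecewise-linear elements and solves the resulting tridiagonal system with an unpreconditioned conjugate-gradient iteration:

```python
def solve_poisson_1d(n=8):
    """Linear finite elements for -u'' = 1 on (0,1), u(0) = u(1) = 0.
    The element stiffness matrix is (1/h)*[[1,-1],[-1,1]] and the load
    per interior node is h, so the assembled system is
    (1/h)*tridiag(-1, 2, -1) * u = h.  Solved with conjugate gradients."""
    h = 1.0 / n
    m = n - 1                          # number of interior nodes

    def matvec(v):                     # tridiagonal A*v without storing A
        out = []
        for i in range(m):
            s = 2.0 * v[i]
            if i > 0:
                s -= v[i - 1]
            if i < m - 1:
                s -= v[i + 1]
            out.append(s / h)
        return out

    b = [h] * m                        # load vector for f = 1
    x = [0.0] * m
    r = b[:]                           # initial residual (x = 0)
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(m):                 # CG converges in at most m steps
        ap = matvec(p)
        alpha = rs / sum(pi * ai for pi, ai in zip(p, ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < 1e-20:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x
```

For this problem linear elements are nodally exact, so the computed value at x = 0.5 matches u(x) = x(1−x)/2, i.e. 0.125.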
|Literature|| M. G. Larson, F. Bengzon: The Finite Element Method: Theory, Implementation, and Applications. Springer, Heidelberg, 2013.|
 H. Elman, D. Silvester, A. Wathen: Finite Elements and Fast Iterative Solvers. OUP, Oxford, 2005.
 Y. Saad: Iterative methods for sparse linear systems (2nd ed.). SIAM, Philadelphia, 2003.
 T. Davis: Direct Methods for Sparse Linear Systems. SIAM, Philadelphia, 2006.
 H.R. Schwarz: Die Methode der finiten Elemente (3rd ed.). Teubner, Stuttgart, 1991.
|Prerequisites / Notice||Prerequisites: Linear Algebra, Analysis, Computational Science.|
The exercises are made with Matlab.
|263-5200-00L||Data Mining: Learning from Large Data Sets||W||4 credits||2V + 1U||A. Krause, Y. Levy|
|Abstract||Many scientific and commercial applications require insights from massive, high-dimensional data sets. This course introduces principled, state-of-the-art techniques from statistics, algorithms, and discrete and convex optimization for learning from such large data sets. The course covers both theoretical foundations and practical applications.|
|Objective||Many scientific and commercial applications require us to obtain insights from massive, high-dimensional data sets. In this graduate-level course, we will study principled, state-of-the-art techniques from statistics, algorithms, and discrete and convex optimization for learning from such large data sets. The course will cover both theoretical foundations and practical applications.|
- Dealing with large data (Data centers; Map-Reduce/Hadoop; Amazon Mechanical Turk)
- Fast nearest neighbor methods (Shingling, locality sensitive hashing)
- Online learning (Online optimization and regret minimization, online convex programming, applications to large-scale Support Vector Machines)
- Multi-armed bandits (exploration-exploitation tradeoffs, applications to online advertising and relevance feedback)
- Active learning (uncertainty sampling, pool-based methods, label complexity)
- Dimension reduction (random projections, nonlinear methods)
- Data streams (Sketches, coresets, applications to online clustering)
- Recommender systems
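To make the locality-sensitive-hashing bullet concrete, here is a small MinHash sketch (an illustration, not the course's reference implementation): signature agreement estimates Jaccard similarity, which LSH then exploits by banding signatures into hash buckets so that similar sets collide with high probability.

```python
import random

def minhash_signature(items, num_hashes=64, seed=0):
    """MinHash sketch of a set: for each random hash function keep the
    minimum hash value over the set's elements.  The fraction of
    positions where two signatures agree is an unbiased estimate of
    the Jaccard similarity of the underlying sets.
    Note: use integer items (or a stable hash) so that results are
    reproducible across Python processes."""
    rng = random.Random(seed)
    p = (1 << 61) - 1                 # a large Mersenne prime for h(x) = (a*x + b) mod p
    coeffs = [(rng.randrange(1, p), rng.randrange(p)) for _ in range(num_hashes)]
    return [min((a * hash(x) + b) % p for x in items) for a, b in coeffs]

def estimate_jaccard(sig_a, sig_b):
    """Estimate Jaccard similarity from two MinHash signatures."""
    return sum(s == t for s, t in zip(sig_a, sig_b)) / len(sig_a)
```

Both sets must be sketched with the same seed (i.e. the same family of hash functions) for the comparison to be meaningful.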
|Prerequisites / Notice||Prerequisites: Solid basic knowledge in statistics, algorithms and programming. Background in machine learning is helpful but not required.|
|263-2800-00L||Design of Parallel and High-Performance Computing||W||7 credits||3V + 2U + 1A||T. Hoefler, M. Püschel|
|Abstract||Advanced topics in parallel / concurrent programming.|
|Objective||Understand concurrency paradigms and models from a higher perspective and acquire skills for designing, structuring and developing possibly large concurrent software systems. Become able to distinguish parallelism in problem space and in machine space. Become familiar with important technical concepts and with concurrency folklore.|
|227-0102-00L||Discrete Event Systems||W||6 credits||4G||L. Thiele, L. Vanbever, R. Wattenhofer|
|Abstract||Introduction to discrete event systems. We start out by studying popular models of discrete event systems. In the second part of the course we analyze discrete event systems from an average-case and from a worst-case perspective. Topics include: Automata and Languages, Specification Models, Stochastic Discrete Event Systems, Worst-Case Event Systems, Verification, Network Calculus.|
|Objective||Over the past few decades the rapid evolution of computing, communication, and information technologies has brought about the proliferation of new dynamic systems. A significant part of activity in these systems is governed by operational rules designed by humans. The dynamics of these systems are characterized by asynchronous occurrences of discrete events, some controlled (e.g. hitting a keyboard key, sending a message), some not (e.g. spontaneous failure, packet loss). |
The mathematical arsenal centered around differential equations that has been employed in systems engineering to model and study processes governed by the laws of nature is often inadequate or inappropriate for discrete event systems. The challenge is to develop new modeling frameworks, analysis techniques, design tools, testing methods, and optimization processes for this new generation of systems.
In this lecture we give an introduction to discrete event systems. We start out the course by studying popular models of discrete event systems, such as automata and Petri nets. In the second part of the course we analyze discrete event systems. We first examine discrete event systems from an average-case perspective: we model discrete events as stochastic processes, and then apply Markov chains and queuing theory for an understanding of the typical behavior of a system. In the last part of the course we analyze discrete event systems from a worst-case perspective using the theory of online algorithms and adversarial queuing.
2. Automata and Languages
3. Smarter Automata
4. Specification Models
5. Stochastic Discrete Event Systems
6. Worst-Case Event Systems
7. Network Calculus
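For the stochastic part of the course, a minimal sketch (not course code) of computing the stationary distribution of a finite Markov chain by power iteration; the 2-state transition matrix used below is an invented example:

```python
def stationary_distribution(P, iters=500):
    """Stationary distribution of a finite Markov chain by power
    iteration: repeatedly apply pi <- pi * P, where the rows of P sum
    to one.  Converges for ergodic chains; total probability mass is
    preserved at every step."""
    n = len(P)
    pi = [1.0 / n] * n                # start from the uniform distribution
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi
```

For P = [[0.9, 0.1], [0.5, 0.5]] the balance equation 0.1·π₀ = 0.5·π₁ gives π = (5/6, 1/6), which the iteration reproduces.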
|Literature||[bertsekas] Data Networks |
Dimitri Bertsekas, Robert Gallager
Prentice Hall, 1991, ISBN: 0132009161
[borodin] Online Computation and Competitive Analysis
Allan Borodin, Ran El-Yaniv.
Cambridge University Press, 1998
[boudec] Network Calculus
J.-Y. Le Boudec, P. Thiran
[cassandras] Introduction to Discrete Event Systems
Christos Cassandras, Stéphane Lafortune.
Kluwer Academic Publishers, 1999, ISBN 0-7923-8609-4
[fiat] Online Algorithms: The State of the Art
A. Fiat and G. Woeginger
[hochbaum] Approximation Algorithms for NP-hard Problems (Chapter 13 by S. Irani, A. Karlin)
[schickinger] Diskrete Strukturen (Band 2: Wahrscheinlichkeitstheorie und Statistik)
T. Schickinger, A. Steger
Springer, Berlin, 2001
[sipser] Introduction to the Theory of Computation
Michael Sipser
PWS Publishing Company, 1996, ISBN 053494728X
|227-0116-00L||VLSI I: From Architectures to VLSI Circuits and FPGAs||W||6 credits||5G||F. K. Gürkaynak, L. Benini|
|Abstract||This first course in a series that extends over three consecutive terms is concerned with tailoring algorithms and with devising high performance hardware architectures for their implementation as ASIC or with FPGAs. The focus is on front end design using HDLs and automatic synthesis for producing industrial-quality circuits.|
|Objective||Understand Very-Large-Scale Integrated Circuits (VLSI chips), Application-Specific Integrated Circuits (ASIC), and Field-Programmable Gate-Arrays (FPGA). Know their organization and be able to identify suitable application areas. Become fluent in front-end design from architectural conception to gate-level netlists. How to model digital circuits with VHDL or SystemVerilog. How to ensure they behave as expected with the aid of simulation, testbenches, and assertions. How to take advantage of automatic synthesis tools to produce industrial-quality VLSI and FPGA circuits. Gain practical experience with the hardware description language VHDL and with industrial Electronic Design Automation (EDA) tools.|
|Content||This course is concerned with system-level issues of VLSI design and FPGA implementations. Topics include:|
- Overview on design methodologies and fabrication depths.
- Levels of abstraction for circuit modeling.
- Organization and configuration of commercial field-programmable components.
- VLSI and FPGA design flows.
- Dedicated and general purpose architectures compared.
- How to obtain an architecture for a given processing algorithm.
- Meeting throughput, area, and power goals by way of architectural transformations.
- Hardware Description Languages (HDL) and the underlying concepts.
- VHDL and SystemVerilog compared.
- VHDL (IEEE standard 1076) for simulation and synthesis.
- A suitable nine-valued logic system (IEEE standard 1164).
- Register Transfer Level (RTL) synthesis and its limitations.
- Building blocks of digital VLSI circuits.
- Functional verification techniques and their limitations.
- Modular and largely reusable testbenches.
- Assertion-based verification.
- Synchronous versus asynchronous circuits.
- The case for synchronous circuits.
- Periodic events and the Anceau diagram.
- Case studies, ASICs compared to microprocessors, DSPs, and FPGAs.
During the exercises, students learn how to model digital ICs with VHDL. They write testbenches for simulation purposes and synthesize gate-level netlists for VLSI chips and FPGAs. Commercial EDA software by leading vendors is used throughout.
|Lecture notes||Textbook and all further documents in English.|
|Literature||H. Kaeslin: "Top-Down Digital VLSI Design, from Architectures to Gate-Level Circuits and FPGAs", Elsevier, 2014, ISBN 9780128007303.|
|Prerequisites / Notice||Prerequisites: |
Basics of digital circuits.
Examination: in written form following the course semester (spring term). Problems are given in English; answers will be accepted in either English or German.
|227-0148-00L||VLSI III: Test and Fabrication of VLSI Circuits |
Does not take place this semester.
|W||6 credits||4G||L. Benini|
|Abstract||In this course, we will cover how modern microchips are fabricated, and we will focus on methods and tools to uncover fabrication defects, if any, in these microchips. As part of the exercises, students will get to work on industrial automated test equipment worth one million dollars.|
|Objective||Learn about modern IC manufacturing methodologies, understand the problem of IC testing. Cover the basic methods, algorithms and techniques to test circuits in an efficient way. Learn about practical aspects of IC testing and apply what you learn in class using a state-of-the art tester.|
|Content||In this course we will deal with modern integrated circuit (IC) manufacturing technology and cover topics such as:|
- Today's nanometer CMOS fabrication processes (HKMG).
- Optical and post-optical photolithography.
- Potential alternatives to CMOS technology and MOSFET devices.
- Evolution paths for design methodology.
- Industrial roadmaps for the future evolution of semiconductor technology (ITRS).
If you want to earn money by selling ICs, you will have to deliver a product that functions properly with a very high probability. The main emphasis of the lecture will be discussing how this can be achieved. We will discuss fault models and practical techniques to improve the testability of VLSI circuits. At the IIS we have state-of-the-art automated test equipment (Advantest SoC V93000) that we will make available for in-class exercises and projects. At the end of the lecture you will be able to design state-of-the-art digital integrated circuits so that they are testable, and to use automatic test equipment (ATE) to carry out the actual testing.
During the first weeks of the course there will be weekly practical exercises where you will work in groups of two. For the last 5 weeks of the class students will be able to choose a class project that can be:
- The test of their own chip developed during a previous semester thesis
- Developing new setups and measurement methods in C++ on the tester
- Helping to debug problems encountered in previous microchips by IIS.
Half of the oral exam will consist of a short presentation on this class project.
|Lecture notes||Main course book: "Essentials of Electronic Testing for Digital, Memory and Mixed-Signal VLSI Circuits" by Michael L. Bushnell and Vishwani D. Agrawal, Springer, 2004. This book is available online within ETH through |
|Prerequisites / Notice||Although this is the third part in a series of lectures on VLSI design, you can follow this course even if you have not attended the VLSI I and VLSI II lectures. An interest in integrated circuit design and basic digital circuit knowledge are required, though.|
|227-0197-00L||Wearable Systems I||W||6 credits||4G||G. Tröster, U. Blanke|
|Abstract||Context recognition in mobile communication systems like mobile phones, smart watches, and wearable computers will be studied using advanced methods from sensor data fusion, pattern recognition, statistics, data mining, and machine learning.|
Context comprises the behavior of individuals and of groups, their activities as well as their local and social environment.
|Objective||Using internal sensors and sensors in our environment, including data from a wristwatch, bracelet, or the internet (crowdsourcing), our smartphone detects our context continuously, e.g. where we are, what we are doing, with whom we are, how we are doing, and what our needs are. Based on this information, our smartphone offers us the appropriate services, like a personal assistant. Context comprises the user's behavior, his activities, and his local and social environment.|
In the data path from the sensor level through signal segmentation to the classification of the context, advanced methods of signal processing, pattern recognition, and machine learning are applied. Sensor data generated by crowdsourcing methods are integrated. Validation using MATLAB is followed by implementation and testing on a smartphone.
Context recognition as the crucial function of mobile systems is the main focus of the course. Using MATLAB, the participants implement and verify the discussed methods, also on a smartphone.
|Content||Context recognition - the user's situation, activity, environment, condition, and needs - as the central functionality of mobile systems constitutes the focus of the course.|
The main topics of the course include
Sensor networks, sensor signal processing, data fusion, time series (segmentation, similarity measures), supervised learning (Bayes decision theory, decision trees, random forests, kNN methods, support vector machines, AdaBoost, deep learning), clustering (k-means, DBSCAN, topic models), recommender systems, collaborative filtering, and crowdsourcing.
The exercises address concrete design problems such as motion and gesture recognition using distributed sensors, detection of activity patterns, and identification of the local environment.
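One of the listed classifiers, kNN, can be sketched in a few lines (a toy illustration with invented 2D feature vectors, not the course's MATLAB code):

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """k-nearest-neighbour classification: train is a list of
    (feature_vector, label) pairs; the query is labelled by majority
    vote among its k closest training samples (Euclidean distance)."""
    dists = sorted((math.dist(x, query), label) for x, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]
```

In a motion-recognition exercise the feature vectors would come from windowed sensor signals (e.g. mean and variance of acceleration); here they are just points in the plane.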
Presentations by PhD students and a visit to the Wearable Computing Lab introduce current research topics and international research projects.
Language: German/English (depending on the participants)
|Lecture notes||Lecture notes for all lessons, assignments and solutions. |
|Literature||Literature will be announced during the lessons.|
|Prerequisites / Notice||No special prerequisites|
|227-0447-00L||Image Analysis and Computer Vision||W||6 credits||3V + 1U||L. Van Gool, O. Göksel, E. Konukoglu|
|Abstract||Light and perception. Digital image formation. Image enhancement and feature extraction. Unitary transformations. Color and texture. Image segmentation and deformable shape matching. Motion extraction and tracking. 3D data extraction. Invariant features. Specific object recognition and object class recognition.|
|Objective||Overview of the most important concepts of image formation, perception and analysis, and Computer Vision. Gaining own experience through practical computer and programming exercises.|
|Content||The first part of the course starts off with an overview of existing and emerging applications that need computer vision. It shows that the realm of image processing is no longer restricted to the factory floor, but is entering several fields of our daily life. First, it is investigated how the parameters of electromagnetic waves relate to our perception. The interaction of light with matter is also considered. The most important hardware components of technical vision systems, such as cameras, optical devices, and illumination sources, are discussed. The course then turns to the steps that are necessary to arrive at the discrete images that serve as input to algorithms. The next part describes the preprocessing steps of image analysis that enhance image quality and/or detect specific features. Linear and nonlinear filters are introduced for this purpose. The course continues by analyzing procedures that extract additional types of basic information from multiple images, with motion and depth as two important examples. The estimation of image velocities (optical flow) receives due attention, and methods for object tracking are presented. Several techniques are discussed to extract three-dimensional information about objects and scenes. Finally, approaches for the recognition of specific objects as well as object classes will be discussed and analyzed.|
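As a small illustration of the linear filtering mentioned above (a sketch, not course material), a direct 2D filter in cross-correlation form, which coincides with convolution for symmetric kernels such as the box filter:

```python
def filter2d(image, kernel):
    """Direct 2D linear filtering (cross-correlation form, valid region
    only): each output pixel is a weighted sum of the kernel-sized
    neighbourhood in the input.  For symmetric kernels (box, Gaussian)
    this is identical to convolution."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + u][j + v] * kernel[u][v]
                           for u in range(kh) for v in range(kw)))
        out.append(row)
    return out
```

A 3x3 box filter (all weights 1/9) leaves a constant image unchanged in the valid region, the simplest sanity check for a smoothing filter.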
|Lecture notes||Course material: script, computer demonstrations, exercises, and problem solutions|
|Prerequisites / Notice||Prerequisites: |
Basic concepts of mathematical analysis and linear algebra. The computer exercises are based on Linux and C.
The course language is English.
|227-0417-00L||Information Theory I||W||6 credits||4G||A. Lapidoth|
|Abstract||This course covers the basic concepts of information theory and of communication theory. Topics covered include the entropy rate of a source, mutual information, typical sequences, the asymptotic equi-partition property, Huffman coding, channel capacity, the channel coding theorem, the source-channel separation theorem, and feedback capacity.|
|Objective||The fundamentals of Information Theory including Shannon's source coding and channel coding theorems|
|Content||The entropy rate of a source, typical sequences, the asymptotic equi-partition property, the source coding theorem, Huffman coding, arithmetic coding, channel capacity, the channel coding theorem, the source-channel separation theorem, and feedback capacity|
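As an illustration of two of the listed topics (not part of the course itself), Shannon entropy and Huffman codeword lengths for a dyadic source, where the Huffman code meets the entropy bound exactly:

```python
import heapq
import math

def entropy(probs):
    """Shannon entropy H(X) = -sum p*log2(p) in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def huffman_lengths(probs):
    """Codeword lengths of a Huffman code for the given probabilities.
    Repeatedly merge the two least likely nodes; each merge adds one
    bit to every symbol inside the merged subtree."""
    heap = [(p, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    while len(heap) > 1:
        p1, s1 = heapq.heappop(heap)
        p2, s2 = heapq.heappop(heap)
        for i in s1 + s2:
            lengths[i] += 1          # symbols in the merged subtree get one more bit
        heapq.heappush(heap, (p1 + p2, s1 + s2))
    return lengths
```

For probabilities (1/2, 1/4, 1/8, 1/8) both the entropy and the expected Huffman code length are exactly 1.75 bits, since all probabilities are powers of two.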
|Literature||T.M. Cover and J. Thomas, Elements of Information Theory (second edition)|
|227-0427-00L||Signal and Information Processing: Modeling, Filtering, Learning||W||6 credits||4G||H.‑A. Loeliger|
|Abstract||Fundamentals in signal processing, detection/estimation, and machine learning. |
I. Linear signal representation and approximation: Hilbert spaces, LMMSE estimation, regularization and sparsity.
II. Learning linear and nonlinear functions and filters: kernel methods, neural networks.
III. Structured statistical models: hidden Markov models, factor graphs, Kalman filter, parameter estimation.
|Objective||The course is an introduction to some basic topics in signal processing, detection/estimation theory, and machine learning.|
|Content||Part I - Linear Signal Representation and Approximation: Hilbert spaces, least squares and LMMSE estimation, projection and estimation by linear filtering, learning linear functions and filters, L2 regularization, L1 regularization and sparsity, singular-value decomposition and pseudo-inverse, principal-components analysis.|
Part II - Learning Nonlinear Functions: fundamentals of learning, neural networks, kernel methods.
Part III - Structured Statistical Models and Message Passing Algorithms: hidden Markov models, factor graphs, Gaussian message passing, Kalman filter and recursive least squares, Monte Carlo methods, parameter estimation, expectation maximization, sparse Bayesian learning.
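A scalar sketch (illustrative only, with invented noise parameters) of the Kalman filter from part III; with zero process noise it reduces to recursive least squares for estimating a constant:

```python
def kalman_1d(measurements, process_var=0.0, meas_var=1.0, x0=0.0, p0=1e6):
    """Scalar Kalman filter for a (nearly) constant state observed in
    Gaussian noise: a predict step with random-walk dynamics followed
    by a correct step with the Kalman gain.  With process_var = 0 this
    reduces to recursive least squares (the running sample mean)."""
    x, p = x0, p0                   # state estimate and its variance
    for z in measurements:
        p = p + process_var         # predict: variance grows with process noise
        k = p / (p + meas_var)      # Kalman gain
        x = x + k * (z - x)         # correct with the innovation z - x
        p = (1.0 - k) * p
    return x
```

With a large initial variance p0, the estimate after n measurements is (up to numerical precision) their sample mean.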
|Lecture notes||Lecture notes.|
|Prerequisites / Notice||Prerequisites: |
- local bachelors: course "Discrete-Time and Statistical Signal Processing" (5. Sem.)
- others: solid basics in linear algebra and probability theory
|227-0627-00L||Applied Computer Architecture||W||6 credits||4G||A. Gunzinger|
|Abstract||This lecture gives an overview of the requirements and the architecture of parallel computer systems, performance, reliability and costs.|
|Objective||Understand the function, the design and the performance modeling of parallel computer systems.|
|Content||The lecture "Applied Computer Architecture" gives technical and corporate insights into innovative computer systems/architectures (CPU, GPU, FPGA, special processors) and their real implementations and applications. Often the designs have to deal with technical limits.|
Which computer architecture allows the control of the over 1000 magnets at the Swiss Light Source (SLS)?
Which architecture is behind the alarm center of the Swiss Railway (SBB)?
Which computer architectures are applied for driver assistance systems?
Which computer architecture is hidden behind a professional digital audio mixing desk?
How can data streams of about 30 TB/s, produced by a proton accelerator, be processed in real time?
Can the weather forecast also be processed with GPUs?
How can a good computer architecture be found?
What are the driving factors in successful computer architecture design?
|Lecture notes||Script and exercise sheets.|
|Prerequisites / Notice||Prerequisites: |
Basics of computer architecture.
|252-0237-00L||Concepts of Object-Oriented Programming||W||6 credits||3V + 2U||P. Müller|
|Abstract||Course that focuses on an in-depth understanding of object-oriented programming and compares designs of object-oriented programming languages. Topics include different flavors of type systems, inheritance models, encapsulation in the presence of aliasing, object and class initialization, program correctness, and reflection.|
|Objective||After this course, students will: |
Have a deep understanding of advanced concepts of object-oriented programming and their support through various language features. Be able to understand language concepts on a semantic level and be able to compare and evaluate language designs.
Be able to learn new languages more rapidly.
Be aware of many subtle problems of object-oriented programming and know how to avoid them.
|Content||The main goal of this course is to convey a deep understanding of the key concepts of sequential object-oriented programming and their support in different programming languages. This is achieved by studying how important challenges are addressed through language features and programming idioms. In particular, the course discusses alternative language designs by contrasting solutions in languages such as C++, C#, Eiffel, Java, Python, and Scala. The course also introduces novel ideas from research languages that may influence the design of future mainstream languages.|
The topics discussed in the course include among others:
The pros and cons of different flavors of type systems (for instance, static vs. dynamic typing, nominal vs. structural, syntactic vs. behavioral typing)
The key problems of single and multiple inheritance and how different languages address them
Generic type systems, in particular, Java generics, C# generics, and C++ templates
The situations in which object-oriented programming does not provide encapsulation, and how to avoid them
The pitfalls of object initialization, exemplified by a research type system that prevents null pointer dereferencing
How to maintain the consistency of data structures
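As a small illustration of the nominal-versus-structural distinction above (a Python sketch, not tied to the lecture's example languages): `typing.Protocol` gives structural conformance, so a class satisfies the protocol by having matching methods, without declaring any inheritance.

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class Drawable(Protocol):
    """Structural type: any object with a matching draw() method
    conforms; no nominal (declared) inheritance is needed."""
    def draw(self) -> str: ...

class Circle:                       # never mentions Drawable
    def draw(self) -> str:
        return "circle"

def render(shape: Drawable) -> str:
    return shape.draw()
```

Under nominal typing (as in Java interfaces), `Circle` would have to declare `implements Drawable`; structurally, the matching method signature suffices.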
|Literature||Will be announced in the lecture.|
|Prerequisites / Notice||Prerequisites:|
Mastering at least one object-oriented programming language (this course will NOT provide an introduction to object-oriented programming); programming experience
|252-0417-00L||Randomized Algorithms and Probabilistic Methods||W||8 credits||3V + 2U + 2A||A. Steger, E. Welzl|
|Abstract||Las Vegas & Monte Carlo algorithms; inequalities of Markov, Chebyshev, Chernoff; negative correlation; Markov chains: convergence, rapidly mixing; generating functions; Examples include: min cut, median, balls and bins, routing in hypercubes, 3SAT, card shuffling, random walks|
|Objective||After this course students will know fundamental techniques from probabilistic combinatorics for designing randomized algorithms and will be able to apply them to solve typical problems in these areas.|
|Content||Randomized algorithms are algorithms that "flip coins" to make certain decisions. This concept extends the classical model of deterministic algorithms and has become very popular and useful over the last twenty years. In many cases, randomized algorithms are faster, simpler, or simply more elegant than deterministic ones. In the course, we will discuss basic principles and techniques and derive from them a number of randomized methods for problems in different areas.|
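The "median" example from the abstract can be sketched as a Las Vegas algorithm: randomized quickselect always returns the correct answer, and only its running time is random (linear in expectation). A minimal Python sketch with illustrative function names, not the course's specific presentation:

```python
import random

def quickselect(xs, k):
    """Return the k-th smallest element of xs (0-based).

    Las Vegas flavor: the output is always correct; only the
    running time depends on the random pivot choices.
    """
    pivot = random.choice(xs)
    lo = [x for x in xs if x < pivot]
    eq = [x for x in xs if x == pivot]
    hi = [x for x in xs if x > pivot]
    if k < len(lo):
        return quickselect(lo, k)
    if k < len(lo) + len(eq):
        return pivot
    return quickselect(hi, k - len(lo) - len(eq))

def median(xs):
    # Lower median for even-length inputs.
    return quickselect(xs, (len(xs) - 1) // 2)
```

A Chernoff-style analysis of the recursion depth is one way to show the expected linear running time discussed in the course.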
|Literature||- Randomized Algorithms, Rajeev Motwani and Prabhakar Raghavan, Cambridge University Press (1995)|
- Probability and Computing, Michael Mitzenmacher and Eli Upfal, Cambridge University Press (2005)
|252-0546-00L||Physically-Based Simulation in Computer Graphics||W||4 credits||2V + 1U||M. Bächer, V. da Costa de Azevedo|
|Abstract||This lecture provides an introduction to physically-based animation in computer graphics and gives an overview of fundamental methods and algorithms. The practical exercises include three assignments which are to be solved in small groups. In an additional course project, topics from the lecture will be implemented into a 3D game or a comparable application.|
|Objective||Students obtain an overview of fundamental methods and algorithms of physically-based animation and gain practical experience by applying them in small-group assignments and an additional course project, in which topics from the lecture are implemented into a 3D game or a comparable application.|
|Content||The lecture covers topics in physically-based modeling,|
such as particle systems, mass-spring models, finite difference and finite element methods. These approaches are used to represent and simulate deformable objects or fluids with applications in animated movies, 3D games and medical systems. Furthermore, the lecture covers topics such as rigid body dynamics, collision detection, and character animation.
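As a minimal illustration of the mass-spring models mentioned above, a one-dimensional mass on a spring integrated with semi-implicit (symplectic) Euler; parameters and names are illustrative, not from the lecture:

```python
def simulate(steps, dt=0.01, k=50.0, m=1.0, x0=1.2, rest=1.0):
    """Single mass on a spring anchored at the origin.

    Semi-implicit (symplectic) Euler: update velocity from the force
    first, then position from the *new* velocity. This stays stable
    and approximately energy-conserving for reasonable time steps.
    """
    x, v = x0, 0.0
    for _ in range(steps):
        f = -k * (x - rest)   # Hooke's law spring force
        v += dt * f / m       # velocity update (symplectic ordering)
        x += dt * v           # position update with the new velocity
    return x
```

With plain explicit Euler the same oscillator gains energy every step and eventually blows up, which is why the update ordering above is a common default in practice.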
|Prerequisites / Notice||Fundamentals of calculus and physics, basic concepts of algorithms and data structures, basic programming skills in C++. Knowledge on numerical mathematics as well as ordinary and partial differential equations is an asset, but not required.|
|261-5100-00L||Computational Biomedicine |
Number of participants limited to 60.
|W||4 credits||2V + 1U||G. Rätsch|
|Abstract||The course critically reviews central problems in Biomedicine and discusses the technical foundations and solutions for these problems.|
|Objective||Over the past years, rapid technological advancements have transformed classical disciplines such as biology and medicine into fields of applied data science. While the sheer amount of collected data often makes computational approaches inevitable for analysis, it is the domain-specific structure and the close relation to research and clinical practice that call for accurate, robust and efficient algorithms. In this course we will critically review central problems in Biomedicine and will discuss the technical foundations and solutions for these problems.|
|Content||The course will consist of three topic clusters that will cover different aspects of data science problems in Biomedicine: |
1) String algorithms for the efficient representation, search, comparison, composition and compression of large sets of strings, mostly originating from DNA or RNA Sequencing. This includes genome assembly, efficient index data structures for strings and graphs, alignment techniques as well as quantitative approaches.
2) Statistical models and algorithms for the assessment and functional analysis of individual genomic variations. This includes the identification of variants, prediction of functional effects, imputation and integration problems as well as the association with clinical phenotypes.
3) Models for the organization and representation of large-scale biomedical data. This includes ontology concepts, biomedical databases, sequence annotation and data compression.
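As a small illustration of the alignment techniques in topic cluster 1, a standard dynamic-programming edit distance (Levenshtein) with a rolling array; this is a generic textbook sketch, not the course's specific material:

```python
def edit_distance(a, b):
    """Levenshtein distance between strings a and b.

    Classic O(len(a) * len(b)) dynamic program, kept to O(len(b))
    memory with a single rolling row - the core recurrence behind
    global sequence-alignment methods.
    """
    dp = list(range(len(b) + 1))          # row for the empty prefix of a
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i            # prev holds the diagonal cell
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(
                dp[j] + 1,                # deletion from a
                dp[j - 1] + 1,            # insertion into a
                prev + (ca != cb),        # match or substitution
            )
    return dp[-1]
```

Alignment algorithms used on real DNA/RNA data add scoring matrices, affine gap penalties and banded or index-based speedups on top of this recurrence.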
|Prerequisites / Notice||Data Structures & Algorithms, Introduction to Machine Learning, Statistics/Probability, Programming in Python, Unix Command Line|
|401-4619-67L||Advanced Topics in Computational Statistics||W||4 credits||2V||N. Meinshausen|
|Abstract||This lecture covers selected advanced topics in computational statistics. This year the focus will be on graphical modelling.|
|Objective||Students learn the theoretical foundations of the selected methods, as well as practical skills to apply these methods and to interpret their outcomes.|
|Content||The main focus will be on graphical models in various forms: |
Markov properties of undirected graphs; belief propagation; hidden Markov models; structure estimation and parameter estimation; inference for high-dimensional data; causal graphical models
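As a minimal illustration of inference in hidden Markov models, the forward algorithm computes the probability of an observation sequence by dynamic programming; the matrix conventions below are assumptions for this sketch:

```python
def forward(obs, pi, A, B):
    """Forward algorithm for a discrete HMM.

    obs: list of observation indices
    pi:  initial state distribution, pi[i] = P(state_0 = i)
    A:   transitions, A[i][j] = P(state_{t+1} = j | state_t = i)
    B:   emissions,  B[i][o] = P(obs = o | state = i)
    Returns P(obs) summed over all hidden state paths.
    """
    n = len(pi)
    # alpha[j] = P(obs_0..obs_t, state_t = j)
    alpha = [pi[j] * B[j][obs[0]] for j in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                 for j in range(n)]
    return sum(alpha)
```

The same recursion underlies belief propagation on chain-structured graphs; in practice one works in log space or rescales alpha to avoid underflow on long sequences.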
|Prerequisites / Notice||We assume a solid background in mathematics, an introductory lecture in probability and statistics, and at least one more advanced course in statistics.|