Search result: Catalogue data in Autumn Semester 2016
|Computational Science and Engineering Master|
|151-0113-00L||Applied Fluid Dynamics||W||4 credits||2V + 1U||J.‑P. Kunsch|
|Abstract||Applied Fluid Dynamics|
The methods of fluid dynamics play an important role in the description of a chain of events, involving the release, spreading and dilution of dangerous fluids in the environment.
Tunnel ventilation systems and strategies are studied, which must meet severe requirements during normal operation and in emergency situations (tunnel fires etc.).
|Objective||Generally applicable methods in fluid dynamics and gas dynamics are illustrated and practiced using selected current examples.|
|Content||Often experts fall back on the methodology of fluid dynamics when involved in the construction of environmentally friendly processing and incineration facilities, as well as when choosing safe transport and storage options for dangerous materials. As a result of accidents, but also in normal operations, dangerous gases and liquids may escape and be transported further by wind or flowing water.|
There are many possible forms that the resulting damage may take, including fire and explosion when flammable substances are mixed. The topics covered include: emissions of liquids and gases from containers and pipelines, evaporation from pools and vaporization of gases kept under pressure, the spread and dilution of waste-gas plumes in the wind, deflagration and detonation of flammable gases, fireballs from gases kept under pressure, and pollution and exhaust gases in tunnels (tunnel fires etc.).
|Lecture notes||not available|
|Prerequisites / Notice||Requirements: successful attendance at lectures "Fluiddynamik I und II", "Thermodynamik I und II"|
|151-0709-00L||Stochastic Methods for Engineers and Natural Scientists||W||4 credits||3G||D. W. Meyer-Massetti, N. Noiray|
|Abstract||The course provides an introduction into stochastic methods that are applicable for example for the description and modeling of turbulent and subsurface flows. Moreover, mathematical techniques are presented that are used to quantify uncertainty in various engineering applications.|
|Objective||By the end of the course you should be able to mathematically describe random quantities and their effect on physical systems. Moreover, you should be able to develop basic stochastic models of such systems.|
|Content||- Probability theory, single and multiple random variables, mappings of random variables|
- Stochastic differential equations, Ito calculus, PDF evolution equations
- Polynomial chaos and other expansion methods
All topics are illustrated with application examples from engineering.
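As an illustration of the stochastic differential equations listed above (not part of the course material), a minimal Euler-Maruyama sketch for an Ornstein-Uhlenbeck process; all parameter values are arbitrary:

```python
import numpy as np

def euler_maruyama(theta, mu, sigma, x0, dt, n_steps, rng):
    """Simulate dX = theta*(mu - X) dt + sigma dW (Ornstein-Uhlenbeck)."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))   # Brownian increment over dt
        x[k + 1] = x[k] + theta * (mu - x[k]) * dt + sigma * dw
    return x

rng = np.random.default_rng(0)
path = euler_maruyama(theta=1.0, mu=0.0, sigma=0.5, x0=2.0,
                      dt=0.01, n_steps=1000, rng=rng)
```

The path starts at 2.0 and reverts stochastically toward the mean 0.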
|Lecture notes||Detailed lecture notes will be provided.|
|Literature||Some textbooks related to the material covered in the course:|
Stochastic Methods: A Handbook for the Natural and Social Sciences, Crispin Gardiner, Springer, 2010
The Fokker-Planck Equation: Methods of Solution and Applications, Hannes Risken, Springer, 1996
Turbulent Flows, S.B. Pope, Cambridge University Press, 2000
Spectral Methods for Uncertainty Quantification, O.P. Le Maitre and O.M. Knio, Springer, 2010
|151-0317-00L||Visualization, Simulation and Interaction - Virtual Reality II||W||4 credits||3G||A. Kunz|
|Abstract||This lecture provides deeper knowledge on the possible applications of virtual reality, its basic technology, and future research fields. The goal is to provide solid knowledge of Virtual Reality for possible future use in business processes.|
|Objective||Virtual Reality can not only be used for the visualization of 3D objects, but also offers a wide application field for small and medium enterprises (SME). This could be for instance an enabling technology for net-based collaboration, the transmission of images and other data, the interaction of the human user with the digital environment, or the use of augmented reality systems.|
The goal of the lecture is to provide a deeper knowledge of today's VR environments that are used in business processes. The technical background, the algorithms, and the applied methods are explained more in detail. Finally, future tasks of VR will be discussed and an outlook on ongoing international research is given.
|Content||Introduction into Virtual Reality; basics of augmented reality; interaction with digital data; tangible user interfaces (TUI); basics of simulation; compression procedures for image, audio, and video signals; new materials for force-feedback devices; introduction into data security; cryptography; definition of free-form surfaces; digital factory; new research fields of virtual reality|
|Lecture notes||The handout is available in German and English.|
|Prerequisites / Notice||Prerequisites:|
"Visualization, Simulation and Interaction - Virtual Reality I" is recommended.
The course consists of lectures and exercises.
|151-0833-00L||Principles of Nonlinear Finite-Element-Methods||W||5 credits||2V + 2U||N. Manopulo, B. Berisha, P. Hora|
|Abstract||Most problems in engineering are of nonlinear nature. The nonlinearities are caused basically due to the nonlinear material behavior, contact conditions and instability of structures. The principles of the nonlinear Finite-Element-Method (FEM) will be introduced in the scope of this lecture for treating such problems.|
|Objective||The goal of the lecture is to provide the students with the fundamentals of the nonlinear Finite Element Method (FEM). The lecture focuses on the principles of the nonlinear Finite-Element-Method based on explicit and implicit formulations. Typical applications of the nonlinear Finite-Element-Method are simulations of:|
- Collapse of structures
- Materials in Biomechanics (soft materials)
- General forming processes
Special attention will be paid to the modeling of nonlinear material behavior, thermo-mechanical processes and processes with large plastic deformations. The ability to independently create a virtual model which describes such complex nonlinear systems will be acquired through accompanying exercises. These will include the Matlab programming of important model components such as constitutive equations.
|Content||- Fundamentals of continuum mechanics to characterize large plastic deformations|
- Elasto-plastic material models
- Updated-Lagrange (UL), Euler and combined Euler-Lagrange (ALE) approaches
- FEM implementation of constitutive equations
- Element formulations
- Implicit and explicit FEM methods
- FEM formulations of coupled thermo-mechanical problems
- Modeling of tool contact and the influence of friction
- Solvers and convergence
- Modeling of crack propagation
- Introduction of advanced FE-Methods
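In implicit nonlinear FEM, each load step reduces to Newton-Raphson iterations on the residual between internal and external forces. As a hedged illustration (not course material), a sketch for a hypothetical single-DOF hardening spring with internal force k*u + beta*u^3:

```python
def newton_solve(f_ext, k=100.0, beta=50.0, tol=1e-10, max_iter=50):
    """Solve f_int(u) = f_ext for a 1-DOF hardening spring by
    Newton-Raphson, as in one implicit nonlinear FE load step."""
    u = 0.0
    for _ in range(max_iter):
        residual = k * u + beta * u**3 - f_ext   # out-of-balance force
        if abs(residual) < tol:
            break
        tangent = k + 3.0 * beta * u**2          # consistent tangent stiffness
        u -= residual / tangent                  # Newton update
    return u

u = newton_solve(f_ext=25.0)
```

In a real FE code, `residual` and `tangent` become the assembled global residual vector and tangent stiffness matrix.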
|Literature||Bathe, K. J., Finite-Element-Procedures, Prentice-Hall, 1996|
|Prerequisites / Notice||If the number of students is large, two dates for the exercises will be offered.|
|263-5001-00L||Introduction to Finite Elements and Sparse Linear System Solving||W||4 credits||2V + 1U||P. Arbenz|
|Abstract||The finite element (FE) method is the method of choice for (approximately) solving partial differential equations on complicated domains. In the first third of the lecture, we give an introduction to the method. The rest of the lecture will be devoted to methods for solving the large sparse linear systems of equations that are typical for the FE method. We will consider direct and iterative methods.|
|Objective||Students will know the most important direct and iterative solvers for sparse linear systems. They will be able to determine which solver to choose in particular situations.|
|Content||I. THE FINITE ELEMENT METHOD|
(1) Introduction, model problems.
(2) 1D problems. Piecewise polynomials in 1D.
(3) 2D problems. Triangulations. Piecewise polynomials in 2D.
(4) Variational formulations. Galerkin finite element method.
(5) Implementation aspects.
II. DIRECT SOLUTION METHODS
(6) LU and Cholesky decomposition.
(7) Sparse matrices.
(8) Fill-reducing orderings.
III. ITERATIVE SOLUTION METHODS
(9) Stationary iterative methods, preconditioning.
(10) Preconditioned conjugate gradient method (PCG).
(11) Incomplete factorization preconditioning.
(12) Multigrid preconditioning.
(13) Nonsymmetric problems (GMRES, BiCGstab).
(14) Indefinite problems (SYMMLQ, MINRES).
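As a hedged sketch of item (10) above (not course material), a preconditioned conjugate gradient solver with a Jacobi (diagonal) preconditioner, applied to the tridiagonal matrix arising from linear finite elements for the 1D Poisson problem:

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradients for SPD A;
    M_inv holds the inverse diagonal (Jacobi preconditioner)."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv * r                  # apply preconditioner
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p  # new search direction
        rz = rz_new
    return x

# 1D Poisson stiffness matrix (tridiagonal [-1, 2, -1])
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = pcg(A, b, M_inv=1.0 / np.diag(A))
```

A production code would store A in a sparse format rather than as a dense array.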
|Literature|| M. G. Larson, F. Bengzon: The Finite Element Method: Theory, Implementation, and Applications. Springer, Heidelberg, 2013.|
H. Elman, D. Silvester, A. Wathen: Finite Elements and Fast Iterative Solvers. OUP, Oxford, 2005.
 Y. Saad: Iterative methods for sparse linear systems (2nd ed.). SIAM, Philadelphia, 2003.
 T. Davis: Direct Methods for Sparse Linear Systems. SIAM, Philadelphia, 2006.
 H.R. Schwarz: Die Methode der finiten Elemente (3rd ed.). Teubner, Stuttgart, 1991.
|Prerequisites / Notice||Prerequisites: Linear Algebra, Analysis, Computational Science.|
The exercises are made with Matlab.
|263-3010-00L||Big Data||W||6 credits||2V + 2U + 1A||G. Fourny|
|Abstract||The key challenge of the information society is to turn data into information, information into knowledge, knowledge into value. This has become increasingly complex. Data comes in larger volumes, diverse shapes, from different sources. Data is more heterogeneous and less structured than forty years ago. Nevertheless, it still needs to be processed fast, with support for complex operations.|
|Objective||This combination of requirements, together with the technologies that have emerged in order to address them, is typically referred to as "Big Data." This revolution has led to a completely new way to do business, e.g., develop new products and business models, but also to do science -- which is sometimes referred to as data-driven science or the "fourth paradigm".|
Unfortunately, the quantity of data produced and available -- now in the Zettabyte range (that's 21 zeros) per year -- keeps growing faster than our ability to process it. Hence, new architectures and approaches for processing it were and are still needed. Harnessing them must involve a deep understanding of data not only in the large, but also in the small.
The field of databases evolves at a fast pace. In order to be prepared, to the extent possible, for the (r)evolutions that will take place in the next few decades, the emphasis of the lecture will be on paradigms and core design ideas, while today's technologies will serve as supporting illustrations thereof.
After attending this lecture, you should have gained an overview and understanding of the Big Data landscape, which is the basis on which one can make informed decisions, i.e., pick and orchestrate the relevant technologies for addressing each business use case efficiently and consistently.
|Content||This course gives an overview of database technologies and of the most important database design principles that lay the foundations of the Big Data universe. The material is organized along three axes: data in the large, data in the small, data in the very small. A broad range of aspects is covered with a focus on how they fit all together in the big picture of the Big Data ecosystem.|
- physical storage (HDFS, S3)
- logical storage (key-value stores, document stores, column stores, data warehouses)
- data formats and syntaxes (XML, JSON, CSV, XBRL)
- data shapes and models (tables, trees, graphs, cubes)
- an overview of programming languages with a focus on their type systems (SQL, XQuery, MDX)
- the most important query paradigms (selection, projection, joining, grouping, ordering, windowing)
- paradigms for parallel processing (MapReduce) and technologies (Hadoop, Spark)
- optimization techniques (functional and declarative paradigms, query plans, rewrites, indexing)
We will also host two guest lectures to get insights from the industry: UBS and Google.
Large scale analytics and machine learning are outside of the scope of this course.
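The MapReduce paradigm listed above can be sketched in a few lines of plain Python (an illustration only; real frameworks such as Hadoop or Spark distribute these phases across machines):

```python
from collections import defaultdict
from itertools import chain

def map_phase(document):
    # Mapper: emit (word, 1) for every word in one input split
    return [(word, 1) for word in document.lower().split()]

def shuffle(pairs):
    # Shuffle: group all values by key, as the framework does between phases
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reducer: aggregate the value list of each key
    return {key: sum(values) for key, values in groups.items()}

splits = ["big data is big", "data needs processing"]
mapped = chain.from_iterable(map_phase(s) for s in splits)
counts = reduce_phase(shuffle(mapped))
# counts == {'big': 2, 'data': 2, 'is': 1, 'needs': 1, 'processing': 1}
```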
|Literature||Papers from scientific conferences and journals. References will be given as part of the course material during the semester.|
|263-5200-00L||Data Mining: Learning from Large Data Sets||W||4 credits||2V + 1U||A. Krause|
|Abstract||Many scientific and commercial applications require insights from massive, high-dimensional data sets. This course introduces principled, state-of-the-art techniques from statistics, algorithms and discrete and convex optimization for learning from such large data sets. The course covers both theoretical foundations and practical applications.|
|Objective||Many scientific and commercial applications require us to obtain insights from massive, high-dimensional data sets. In this graduate-level course, we will study principled, state-of-the-art techniques from statistics, algorithms and discrete and convex optimization for learning from such large data sets. The course will both cover theoretical foundations and practical applications.|
- Dealing with large data (Data centers; Map-Reduce/Hadoop; Amazon Mechanical Turk)
- Fast nearest neighbor methods (Shingling, locality sensitive hashing)
- Online learning (Online optimization and regret minimization, online convex programming, applications to large-scale Support Vector Machines)
- Multi-armed bandits (exploration-exploitation tradeoffs, applications to online advertising and relevance feedback)
- Active learning (uncertainty sampling, pool-based methods, label complexity)
- Dimension reduction (random projections, nonlinear methods)
- Data streams (Sketches, coresets, applications to online clustering)
- Recommender systems
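The shingling and min-hashing ideas from the fast nearest-neighbor topic above can be sketched as follows (an illustration only; the seeded XOR "hash family" is a simplification of the hash functions used in practice):

```python
import random

def shingles(text, k=3):
    """Set of character k-shingles of a document."""
    return {text[i:i + k] for i in range(len(text) - k + 1)}

def minhash_signature(shingle_set, hash_seeds):
    """One min-hash per seeded hash function; two signatures agree in a
    position with probability equal to the Jaccard similarity of the sets."""
    sig = []
    for seed in hash_seeds:
        mask = random.Random(seed).getrandbits(64)  # per-function mixing mask
        sig.append(min(hash(s) ^ mask for s in shingle_set))
    return sig

seeds = range(100)
a = minhash_signature(shingles("the quick brown fox jumps"), seeds)
b = minhash_signature(shingles("the quick brown fox leaps"), seeds)
# fraction of agreeing positions estimates the Jaccard similarity
est = sum(x == y for x, y in zip(a, b)) / len(a)
```

Locality-sensitive hashing then bands such signatures so that similar documents collide in at least one band.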
|Prerequisites / Notice||Prerequisites: Solid basic knowledge in statistics, algorithms and programming. Background in machine learning is helpful but not required.|
|263-2800-00L||Design of Parallel and High-Performance Computing||W||7 credits||3V + 2U + 1A||T. Hoefler, M. Püschel|
|Abstract||Advanced topics in parallel / concurrent programming.|
|Objective||Understand concurrency paradigms and models from a higher perspective and acquire skills for designing, structuring and developing possibly large concurrent software systems. Become able to distinguish parallelism in problem space and in machine space. Become familiar with important technical concepts and with concurrency folklore.|
|263-3210-00L||Deep Learning |
Number of participants limited to 120.
|W||4 credits||2V + 1U||T. Hofmann|
|Abstract||Deep learning is an area within machine learning that deals with algorithms and models that automatically induce multi-level data representations.|
|Objective||In recent years, deep learning and deep networks have significantly improved the state-of-the-art in many application domains such as computer vision, speech recognition, and natural language processing. This class will cover the fundamentals of deep learning and provide a rich set of hands-on tasks and practical projects to familiarize students with this emerging technology.|
|Prerequisites / Notice||The participation in the course is subject to the following conditions:|
1) The number of participants is limited to 120 students (MSc and PhDs).
2) Students must have taken the exam in Machine Learning (252-0535-00) or have acquired equivalent knowledge.
|227-0102-00L||Discrete Event Systems||W||6 credits||4G||L. Thiele, L. Vanbever, R. Wattenhofer|
|Abstract||Introduction to discrete event systems. We start out by studying popular models of discrete event systems. In the second part of the course we analyze discrete event systems from an average-case and from a worst-case perspective. Topics include: Automata and Languages, Specification Models, Stochastic Discrete Event Systems, Worst-Case Event Systems, Verification, Network Calculus.|
|Objective||Over the past few decades the rapid evolution of computing, communication, and information technologies has brought about the proliferation of new dynamic systems. A significant part of activity in these systems is governed by operational rules designed by humans. The dynamics of these systems are characterized by asynchronous occurrences of discrete events, some controlled (e.g. hitting a keyboard key, sending a message), some not (e.g. spontaneous failure, packet loss). |
The mathematical arsenal centered around differential equations that has been employed in systems engineering to model and study processes governed by the laws of nature is often inadequate or inappropriate for discrete event systems. The challenge is to develop new modeling frameworks, analysis techniques, design tools, testing methods, and optimization processes for this new generation of systems.
In this lecture we give an introduction to discrete event systems. We start out the course by studying popular models of discrete event systems, such as automata and Petri nets. In the second part of the course we analyze discrete event systems. We first examine discrete event systems from an average-case perspective: we model discrete events as stochastic processes, and then apply Markov chains and queuing theory for an understanding of the typical behavior of a system. In the last part of the course we analyze discrete event systems from a worst-case perspective using the theory of online algorithms and adversarial queuing.
2. Automata and Languages
3. Smarter Automata
4. Specification Models
5. Stochastic Discrete Event Systems
6. Worst-Case Event Systems
7. Network Calculus
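Automata, the first model studied in the course, are simple to simulate. As an illustration (not course material), a deterministic finite automaton accepting binary words with an even number of 1s:

```python
def run_dfa(transitions, start, accepting, word):
    """Simulate a deterministic finite automaton on an input word."""
    state = start
    for symbol in word:
        state = transitions[(state, symbol)]
    return state in accepting

# DFA over {0, 1} that accepts words containing an even number of 1s
transitions = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd", "0"): "odd",   ("odd", "1"): "even",
}
accepted = run_dfa(transitions, "even", {"even"}, "0110")  # two 1s
```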
|Literature||[bertsekas] Data Networks |
Dimitri Bertsekas, Robert Gallager
Prentice Hall, 1991, ISBN: 0132009161
[borodin] Online Computation and Competitive Analysis
Allan Borodin, Ran El-Yaniv.
Cambridge University Press, 1998
[boudec] Network Calculus
J.-Y. Le Boudec, P. Thiran
[cassandras] Introduction to Discrete Event Systems
Christos Cassandras, Stéphane Lafortune.
Kluwer Academic Publishers, 1999, ISBN 0-7923-8609-4
[fiat] Online Algorithms: The State of the Art
A. Fiat and G. Woeginger
[hochbaum] Approximation Algorithms for NP-hard Problems (Chapter 13 by S. Irani, A. Karlin)
[schickinger] Diskrete Strukturen (Band 2: Wahrscheinlichkeitstheorie und Statistik)
T. Schickinger, A. Steger
Springer, Berlin, 2001
[sipser] Introduction to the Theory of Computation
Michael Sipser
PWS Publishing Company, 1996, ISBN 053494728X
|227-0197-00L||Wearable Systems I||W||6 credits||4G||G. Tröster, U. Blanke|
|Abstract||Context recognition in mobile communication systems such as mobile phones, smart watches and wearable computers will be studied using advanced methods from sensor data fusion, pattern recognition, statistics, data mining and machine learning.|
Context comprises the behavior of individuals and of groups, their activities as well as the local and social environment.
|Objective||Using internal sensors and sensors in our environment, including data from the wristwatch, bracelet or internet (crowdsourcing), our 'smart phone' detects our context continuously, e.g. where we are, what we are doing, with whom we are together, how we are doing and what our needs are. Based on this information our 'smart phone' offers us the appropriate services, like a personal assistant. Context comprises the user's behavior, his activities, and his local and social environment.|
In the data path from the sensor level through signal segmentation to the classification of the context, advanced methods of signal processing, pattern recognition and machine learning will be applied. Sensor data generated by crowdsourcing methods are integrated. The validation using MATLAB is followed by implementation and testing on a smart phone.
Context recognition as the crucial function of mobile systems is the main focus of the course. Using MATLAB, the participants implement and verify the discussed methods, also using a smart phone.
|Content||Using internal sensors and sensors in our environment, including data from the wristwatch, bracelet or internet (crowdsourcing), our 'smart phone' detects our context continuously, e.g. where we are, what we are doing, with whom we are together, how we are doing and what our needs are. Based on this information our 'smart phone' offers us the appropriate services, like a personal assistant. Context recognition - what is the situation of the user, his activity, his environment, how is he doing, what are his needs - as the central functionality of mobile systems constitutes the focus of the course.|
The main topics of the course include
Sensor nets, sensor signal processing, data fusion, time series (segmentation, similarity measures), supervised learning (Bayes decision theory, decision trees, random forests, kNN methods, support vector machines, AdaBoost, deep learning), clustering (k-means, DBSCAN, topic models), recommender systems, collaborative filtering, crowdsourcing.
The exercises show concrete design problems like motion and gesture recognition using distributed sensors, detection of activity patterns and identification of the local environment.
Presentations by the PhD students and a visit to the Wearable Computing Lab introduce current research topics and international research projects.
Language: German/English (depending on the participants)
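The course validates its classifiers in MATLAB; as an illustrative sketch only (in Python, with invented toy features), a kNN activity classifier of the kind used in the supervised-learning part:

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Classify one feature vector by majority vote among the k nearest
    training samples (Euclidean distance), as in a basic activity recognizer."""
    dist = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(dist)[:k]
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# hypothetical features: (mean acceleration, variance) per window
X = np.array([[1.0, 0.8], [1.1, 0.9], [0.1, 0.05], [0.2, 0.04]])
y = np.array(["walk", "walk", "sit", "sit"])
label = knn_predict(X, y, np.array([1.05, 0.85]))
```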
|Lecture notes||Lecture notes for all lessons, assignments and solutions. |
|Literature||Literature will be announced during the lessons.|
|Prerequisites / Notice||No special prerequisites|
|227-0447-00L||Image Analysis and Computer Vision||W||6 credits||3V + 1U||L. Van Gool, O. Göksel, E. Konukoglu|
|Abstract||Light and perception. Digital image formation. Image enhancement and feature extraction. Unitary transformations. Color and texture. Image segmentation and deformable shape matching. Motion extraction and tracking. 3D data extraction. Invariant features. Specific object recognition and object class recognition.|
|Objective||Overview of the most important concepts of image formation, perception and analysis, and Computer Vision. Gaining own experience through practical computer and programming exercises.|
|Content||The first part of the course starts off from an overview of existing and emerging applications that need computer vision. It shows that the realm of image processing is no longer restricted to the factory floor, but is entering several fields of our daily life. First it is investigated how the parameters of the electromagnetic waves are related to our perception. Also the interaction of light with matter is considered. The most important hardware components of technical vision systems, such as cameras, optical devices and illumination sources are discussed. The course then turns to the steps that are necessary to arrive at the discrete images that serve as input to algorithms. The next part describes necessary preprocessing steps of image analysis, that enhance image quality and/or detect specific features. Linear and non-linear filters are introduced for that purpose. The course continues with procedures that extract additional types of basic information from multiple images, with motion and depth as two important examples. The estimation of image velocities (optical flow) will get due attention and methods for object tracking will be presented. Several techniques are discussed to extract three-dimensional information about objects and scenes. Finally, approaches for the recognition of specific objects as well as object classes will be discussed and analyzed.|
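The linear filters mentioned above amount to 2D convolution. A minimal sketch (illustrative only, using a box-blur kernel; the exercises use Linux and C):

```python
import numpy as np

def filter2d(image, kernel):
    """Apply a linear filter by direct sliding-window correlation ('valid'
    region only); identical to convolution for symmetric kernels."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

box = np.ones((3, 3)) / 9.0                  # smoothing (blur) kernel
image = np.zeros((5, 5)); image[2, 2] = 9.0  # one bright pixel
smoothed = filter2d(image, box)              # energy spread over 3x3 outputs
```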
|Lecture notes||Course material: script, computer demonstrations, exercises and problem solutions|
|Prerequisites / Notice||Prerequisites: |
Basic concepts of mathematical analysis and linear algebra. The computer exercises are based on Linux and C.
The course language is English.
|227-0417-00L||Information Theory I||W||6 credits||4G||A. Lapidoth|
|Abstract||This course covers the basic concepts of information theory and of communication theory. Topics covered include the entropy rate of a source, mutual information, typical sequences, the asymptotic equi-partition property, Huffman coding, channel capacity, the channel coding theorem, the source-channel separation theorem, and feedback capacity.|
|Objective||The fundamentals of Information Theory including Shannon's source coding and channel coding theorems|
|Content||The entropy rate of a source, Typical sequences, the asymptotic equi-partition property, the source coding theorem, Huffman coding, Arithmetic coding, channel capacity, the channel coding theorem, the source-channel separation theorem, feedback capacity|
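The entropy that the above topics build on is directly computable from a probability mass function. A small sketch (not course material):

```python
from math import log2

def entropy(pmf):
    """Entropy H(X) = -sum p*log2(p) in bits; 0*log(0) is taken as 0."""
    return -sum(p * log2(p) for p in pmf if p > 0)

fair = entropy([0.5, 0.5])    # a fair coin carries 1 bit per toss
biased = entropy([0.9, 0.1])  # a biased coin carries strictly less
```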
|Literature||T.M. Cover and J. Thomas, Elements of Information Theory (second edition)|
|227-0427-00L||Signal and Information Processing: Modeling, Filtering, Learning||W||6 credits||4G||H.‑A. Loeliger|
|Abstract||Fundamentals in signal processing, detection/estimation, and machine learning. |
I. Linear signal representation and approximation: Hilbert spaces, LMMSE estimation, regularization and sparsity.
II. Learning linear and nonlinear functions and filters: kernel methods, neural networks.
III. Structured statistical models: hidden Markov models, factor graphs, Kalman filter, parameter estimation.
|Objective||The course is an introduction to some basic topics in signal processing, detection/estimation theory, and machine learning.|
|Content||Part I - Linear Signal Representation and Approximation: Hilbert spaces, least squares and LMMSE estimation, projection and estimation by linear filtering, learning linear functions and filters, L2 regularization, L1 regularization and sparsity, singular-value decomposition and pseudo-inverse, principal-components analysis.|
Part II - Learning Nonlinear Functions: fundamentals of learning, neural networks, kernel methods.
Part III - Structured Statistical Models and Message Passing Algorithms: hidden Markov models, factor graphs, Gaussian message passing, Kalman filter and recursive least squares, Monte Carlo methods, parameter estimation, expectation maximisation, sparse Bayesian learning.
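The Kalman filter from Part III can be sketched in its scalar form (an illustration only; state model and noise variances are invented):

```python
def kalman_1d(measurements, q=1e-3, r=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a random-walk state observed in noise:
    x_k = x_{k-1} + w_k (variance q),  y_k = x_k + v_k (variance r)."""
    x, p = x0, p0
    estimates = []
    for y in measurements:
        p = p + q                  # predict: variance grows by process noise
        k = p / (p + r)            # Kalman gain
        x = x + k * (y - x)        # update with the innovation y - x
        p = (1.0 - k) * p          # posterior variance
        estimates.append(x)
    return estimates

est = kalman_1d([1.2, 0.9, 1.1, 1.0, 0.95])
```

The same recursion, written with matrices, is the filter discussed in the lecture; it is also a special case of Gaussian message passing on a factor graph.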
|Lecture notes||Lecture notes.|
|Prerequisites / Notice||Prerequisites: |
- local bachelors: course "Discrete-Time and Statistical Signal Processing" (5. Sem.)
- others: solid basics in linear algebra and probability theory
|227-0627-00L||Applied Computer Architecture||W||6 credits||4G||A. Gunzinger|
|Abstract||This lecture gives an overview of the requirements and the architecture of parallel computer systems, performance, reliability and costs.|
|Objective||Understand the function, the design and the performance modeling of parallel computer systems.|
|Content||The lecture "Applied Computer Architecture" gives technical and corporate insights into innovative computer systems/architectures (CPU, GPU, FPGA, special processors) and their real implementations and applications. Often the designs have to deal with technical limits.|
Which computer architecture allows the control of the over 1000 magnets at the Swiss Light Source (SLS)?
Which architecture is behind the alarm center of the Swiss Railway (SBB)?
Which computer architectures are applied for driver assistance systems?
Which computer architecture is hidden behind a professional digital audio mixing desk?
How can data streams of about 30 TB/s, produced by a proton accelerator, be processed in real time?
Can the weather forecast also be processed with GPUs?
How can a good computer architecture be found?
Which are the driving factors in successful computer architecture design?
|Lecture notes||Script and exercise sheets.|
|Prerequisites / Notice||Prerequisites: |
Basics of computer architecture.
|252-0237-00L||Concepts of Object-Oriented Programming||W||6 credits||3V + 2U||P. Müller|
|Abstract||Course that focuses on an in-depth understanding of object-oriented programming and compares designs of object-oriented programming languages. Topics include different flavors of type systems, inheritance models, encapsulation in the presence of aliasing, object and class initialization, program correctness, reflection|
|Objective||After this course, students will: |
Have a deep understanding of advanced concepts of object-oriented programming and their support through various language features. Be able to understand language concepts on a semantic level and be able to compare and evaluate language designs.
Be able to learn new languages more rapidly.
Be aware of many subtle problems of object-oriented programming and know how to avoid them.
|Content||The main goal of this course is to convey a deep understanding of the key concepts of sequential object-oriented programming and their support in different programming languages. This is achieved by studying how important challenges are addressed through language features and programming idioms. In particular, the course discusses alternative language designs by contrasting solutions in languages such as C++, C#, Eiffel, Java, Python, and Scala. The course also introduces novel ideas from research languages that may influence the design of future mainstream languages.|
The topics discussed in the course include among others:
The pros and cons of different flavors of type systems (for instance, static vs. dynamic typing, nominal vs. structural, syntactic vs. behavioral typing)
The key problems of single and multiple inheritance and how different languages address them
Generic type systems, in particular, Java generics, C# generics, and C++ templates
The situations in which object-oriented programming does not provide encapsulation, and how to avoid them
The pitfalls of object initialization, exemplified by a research type system that prevents null pointer dereferencing
How to maintain the consistency of data structures
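As one concrete instance of the multiple-inheritance problems mentioned above: Python resolves the classic diamond via C3 linearization, so cooperative `super()` calls traverse each base exactly once. A small sketch (class names are invented):

```python
class Device:
    def describe(self):
        return "device"

class Scanner(Device):
    def describe(self):
        return "scanner(" + super().describe() + ")"

class Printer(Device):
    def describe(self):
        return "printer(" + super().describe() + ")"

class Copier(Scanner, Printer):
    pass

# C3 linearization orders the diamond: Copier -> Scanner -> Printer -> Device
mro_names = [c.__name__ for c in Copier.__mro__]
desc = Copier().describe()
```

Other languages take different routes: C++ requires virtual bases to merge the diamond, while Scala linearizes traits in a similar cooperative fashion.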
|Literature||Will be announced in the lecture.|
|Prerequisites / Notice||Prerequisites:|
Mastering at least one object-oriented programming language (this course will NOT provide an introduction to object-oriented programming); programming experience
|252-0417-00L||Randomized Algorithms and Probabilistic Methods||W||7 credits||3V + 2U + 1A||A. Steger, E. Welzl|
|Abstract||Las Vegas & Monte Carlo algorithms; inequalities of Markov, Chebyshev, Chernoff; negative correlation; Markov chains: convergence, rapidly mixing; generating functions; Examples include: min cut, median, balls and bins, routing in hypercubes, 3SAT, card shuffling, random walks|
|Objective||After this course students will know fundamental techniques from probabilistic combinatorics for designing randomized algorithms and will be able to apply them to solve typical problems in these areas.|
|Content||Randomized Algorithms are algorithms that "flip coins" to take certain decisions. This concept extends the classical model of deterministic algorithms and has become very popular and useful within the last twenty years. In many cases, randomized algorithms are faster, simpler or just more elegant than deterministic ones. In the course, we will discuss basic principles and techniques and derive from them a number of randomized methods for problems in different areas.|
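The min cut example from the abstract illustrates the "coin flipping" idea well. A compact sketch of Karger's contraction algorithm (illustrative; edges are contracted in a uniformly random order, with repeated trials boosting the success probability):

```python
import random

def karger_min_cut(edges, n_vertices, trials=200, seed=1):
    """Karger's algorithm: contract random edges until two super-vertices
    remain; repeat several trials and keep the smallest cut found."""
    rng = random.Random(seed)
    best = len(edges)
    for _ in range(trials):
        parent = list(range(n_vertices))  # union-find over super-vertices
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v
        remaining = n_vertices
        pool = edges[:]
        rng.shuffle(pool)                 # random contraction order
        for u, v in pool:
            if remaining == 2:
                break
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv           # contract edge (u, v)
                remaining -= 1
        cut = sum(1 for u, v in pool if find(u) != find(v))
        best = min(best, cut)
    return best

# two triangles joined by a single bridge edge: the min cut is 1
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
cut = karger_min_cut(edges, 6)
```

A single trial succeeds with probability at least 2/(n(n-1)); repetition makes failure exponentially unlikely.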
|Literature||- Randomized Algorithms, Rajeev Motwani and Prabhakar Raghavan, Cambridge University Press (1995)|
- Probability and Computing, Michael Mitzenmacher and Eli Upfal, Cambridge University Press (2005)
|252-0546-00L||Physically-Based Simulation in Computer Graphics||W||4 credits||2V + 1U||B. Solenthaler, B. Thomaszewski|
|Abstract||This lecture provides an introduction to physically-based animation in computer graphics and gives an overview of fundamental methods and algorithms. The practical exercises include three assignments which are to be solved in small groups. In an additional course project, topics from the lecture will be implemented into a 3D game or a comparable application.|
|Objective||This lecture provides an introduction to physically-based animation in computer graphics and gives an overview of fundamental methods and algorithms. The practical exercises include three assignments which are to be solved in small groups. In an additional course project, topics from the lecture will be implemented into a 3D game or a comparable application.|
|Content||The lecture covers topics in physically-based modeling,|
such as particle systems, mass-spring models, finite difference and finite element methods. These approaches are used to represent and simulate deformable objects or fluids with applications in animated movies, 3D games and medical systems. Furthermore, the lecture covers topics such as rigid body dynamics, collision detection, and character animation.
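The mass-spring models above boil down to integrating Hooke's-law forces through time. A hedged sketch of one symplectic Euler step for a single damped spring (parameters invented; the exercises use C++):

```python
import numpy as np

def step_mass_spring(x, v, dt, k=10.0, m=1.0, rest=1.0, damping=0.1):
    """One symplectic Euler step for two particles joined by a spring."""
    d = x[1] - x[0]
    length = np.linalg.norm(d)
    dirn = d / length
    f = -k * (length - rest) * dirn          # Hooke force on particle 1
    forces = np.array([-f, f]) - damping * v  # equal and opposite, plus drag
    v = v + dt * forces / m
    x = x + dt * v                            # symplectic: uses updated v
    return x, v

x = np.array([[0.0, 0.0], [2.0, 0.0]])   # spring stretched to twice rest length
v = np.zeros((2, 2))
for _ in range(2000):                    # simulate 20 seconds at dt = 0.01
    x, v = step_mass_spring(x, v, dt=0.01)
length = np.linalg.norm(x[1] - x[0])     # oscillation decays toward rest = 1
```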
|Prerequisites / Notice||Fundamentals of calculus and physics, basic concepts of algorithms and data structures, basic programming skills in C++. Knowledge on numerical mathematics as well as ordinary and partial differential equations is an asset, but not required.|
|401-3611-00L||Advanced Topics in Computational Statistics|
Does not take place this semester.
|W||4 credits||2V||M. H. Maathuis|
|Abstract||This lecture covers selected advanced topics in computational statistics, including various classification methods, the EM algorithm, clustering, handling missing data, and graphical modelling.|
|Objective||Students learn the theoretical foundations of the selected methods, as well as practical skills to apply these methods and to interpret their outcomes.|
|Content||The course is roughly divided in three parts: (1) Supervised learning via (variations of) nearest neighbor methods, (2) the EM algorithm and clustering, (3) handling missing data and graphical models.|
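Part (2) above centers on the EM algorithm; a minimal sketch for a two-component 1D Gaussian mixture (illustrative only, with synthetic data and a crude min/max initialization):

```python
import numpy as np

def em_gmm_1d(x, n_iter=100):
    """EM for a two-component 1D Gaussian mixture model."""
    mu = np.array([x.min(), x.max()])   # crude initialization
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted parameter updates
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        pi = nk / len(x)
    return mu, sigma, pi

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-3, 1, 300), rng.normal(3, 1, 300)])
mu, sigma, pi = em_gmm_1d(x)   # recovers means near -3 and 3
```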
|Lecture notes||Lecture notes.|
|Prerequisites / Notice||We assume a solid background in mathematics, an introductory lecture in probability and statistics, and at least one more advanced course in statistics.|
Does not take place this semester.
|W||4 credits||2V||P. L. Bühlmann|
|Abstract||"High-Dimensional Statistics" deals with modern methods and theory for statistical inference when the number of unknown parameters is of much larger order than sample size. Statistical estimation and algorithms for complex models and aspects of multiple testing will be discussed.|
|Objective||Knowledge of methods and basic theory for high-dimensional statistical inference|
|Content||Lasso and Group Lasso for high-dimensional linear and generalized linear models; Additive models and many smooth univariate functions; Non-convex loss functions and l1-regularization; Stability selection, multiple testing and construction of p-values; Undirected graphical modeling|
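The Lasso listed above is commonly fitted by cyclic coordinate descent with soft-thresholding; a hedged sketch on synthetic sparse data (illustrative only, not course material):

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Lasso by cyclic coordinate descent:
    minimize (1/2n)||y - X b||^2 + lam * ||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]       # partial residual
            rho = X[:, j] @ r / n
            # soft-thresholding sets small coordinates exactly to zero
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return beta

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
beta_true = np.zeros(10); beta_true[:2] = [3.0, -2.0]  # sparse truth
y = X @ beta_true + 0.1 * rng.normal(size=100)
beta = lasso_cd(X, y, lam=0.1)   # recovers the two active coefficients
```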
|Literature||Peter Bühlmann and Sara van de Geer (2011). Statistics for High-Dimensional Data: Methods, Theory and Applications. Springer Verlag. |
|Prerequisites / Notice||Knowledge of basic concepts in probability theory, and intermediate knowledge of statistics (e.g. a course in linear models or computational statistics).|