Search result: Catalogue data in Spring Semester 2023
Computer Science Master
Minors
Minor in Data Management
Number | Title | Type | ECTS | Hours | Lecturers
---|---|---|---|---|---
227-0558-00L | Principles of Distributed Computing | W | 7 credits | 2V + 2U + 2A | R. Wattenhofer
Abstract | We study the fundamental issues underlying the design of distributed systems: communication, coordination, fault-tolerance, locality, parallelism, self-organization, symmetry breaking, synchronization, uncertainty. We explore essential algorithmic ideas and lower bound techniques.
Learning objective | Distributed computing is essential in modern computing and communications systems. Examples are on the one hand large-scale networks such as the Internet, and on the other hand multiprocessors such as your new multi-core laptop. This course introduces the principles of distributed computing, emphasizing the fundamental issues underlying the design of distributed systems and networks: communication, coordination, fault-tolerance, locality, parallelism, self-organization, symmetry breaking, synchronization, uncertainty. We explore essential algorithmic ideas and lower bound techniques, basically the "pearls" of distributed computing. We will cover a fresh topic every week.
Content | Distributed computing models and paradigms, e.g. message passing, shared memory, synchronous vs. asynchronous systems, time and message complexity, peer-to-peer systems, small-world networks, social networks, sorting networks, wireless communication, and self-organizing systems. Distributed algorithms, e.g. leader election, coloring, covering, packing, decomposition, spanning trees, mutual exclusion, store and collect, arrow, ivy, synchronizers, diameter, all-pairs-shortest-path, wake-up, and lower bounds. (A small illustrative sketch of the synchronous message-passing model follows this entry.)
Lecture notes | Available.
Literature | Lecture Notes by Roger Wattenhofer; these lecture notes are used for teaching at about a dozen different universities throughout the world. Mastering Distributed Algorithms, Roger Wattenhofer, Inverted Forest Publishing, 2020, ISBN 979-8628688267. Distributed Computing: Fundamentals, Simulations and Advanced Topics, Hagit Attiya, Jennifer Welch, McGraw-Hill Publishing, 1998, ISBN 0-07-709352-6. Introduction to Algorithms, Thomas Cormen, Charles Leiserson, Ronald Rivest, The MIT Press, 1998, ISBN 0-262-53091-0 or 0-262-03141-8. Dissemination of Information in Communication Networks, Juraj Hromkovic, Ralf Klasing, Andrzej Pelc, Peter Ruzicka, Walter Unger, Springer-Verlag, Berlin Heidelberg, 2005, ISBN 3-540-00846-2. Introduction to Parallel Algorithms and Architectures: Arrays, Trees, Hypercubes, Frank Thomson Leighton, Morgan Kaufmann Publishers Inc., San Francisco, CA, 1991, ISBN 1-55860-117-1. Distributed Computing: A Locality-Sensitive Approach, David Peleg, Society for Industrial and Applied Mathematics (SIAM), 2000, ISBN 0-89871-464-8.
Prerequisites / Notice | Course prerequisites: Interest in algorithmic problems. (No particular course needed.)
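To make the message-passing setting named in the Content field above concrete, here is a minimal sketch, not course material: a synchronous leader election on a ring in which every node forwards the largest ID it has seen. The ring topology, the node IDs, and the round-by-round simulation loop are assumptions chosen purely for illustration.

```python
# Minimal sketch (illustrative, not from the course) of the synchronous
# message-passing model: leader election on a ring by flooding the maximum ID.
def elect_leader_on_ring(ids):
    """Each node sends the largest ID it has seen to its clockwise neighbour;
    after len(ids) synchronous rounds every node knows the global maximum,
    and the node owning that ID considers itself the leader."""
    n = len(ids)
    known_max = list(ids)                 # node i's current best guess
    for _ in range(n):                    # n synchronous rounds
        # compute all incoming messages before updating: one round happens "at once"
        incoming = [known_max[(i - 1) % n] for i in range(n)]
        known_max = [max(known_max[i], incoming[i]) for i in range(n)]
    leader = known_max[0]
    return [ids[i] == leader for i in range(n)]   # True exactly at the elected node

if __name__ == "__main__":
    print(elect_leader_on_ring([5, 3, 9, 1, 7]))  # only the node with ID 9 wins
```

In this toy run the algorithm uses n synchronous rounds with n messages per round, i.e. O(n) time and O(n^2) messages, which is the kind of time and message complexity accounting the course description refers to.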
263-3855-00L | Cloud Computing Architecture | W | 9 credits | 3V + 2U + 3A | G. Alonso, A. Klimovic
Abstract | Cloud computing hosts a wide variety of online services that we use on a daily basis, including web search, social networks, and video streaming. This course will cover how datacenter hardware, systems software, and application frameworks are designed for the cloud.
Learning objective | After successful completion of this course, students will be able to: 1) reason about performance, energy efficiency, and availability tradeoffs in the design of cloud system software, 2) describe how datacenter hardware is organized and explain why it is organized as such, 3) implement cloud applications as well as analyze and optimize their performance.
Content | In this course, we study how datacenter hardware, systems software, and applications are designed at large scale for the cloud. The course covers topics including server design, cluster management, large-scale storage systems, serverless computing, data analytics frameworks, and performance analysis. (A small illustrative sketch of latency measurement follows this entry.)
Lecture notes | Lecture slides will be available on the course website.
Prerequisites / Notice | Undergraduate courses in 1) computer architecture and 2) operating systems, distributed systems, and/or database systems are strongly recommended.
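The Content field above lists performance analysis among the topics. As one hedged illustration (the request handler below is a stand-in, not a course API), the sketch measures the median and 99th-percentile latency of a simulated service, since cloud systems are commonly judged on tail latency rather than on the average.

```python
# Minimal sketch (illustrative only) of tail-latency measurement for a service.
import random
import statistics
import time

def handle_request():
    # Stand-in for a real cloud request handler; occasionally slow to mimic
    # stragglers such as a cache miss or a garbage-collection pause.
    time.sleep(random.expovariate(1000) + (0.01 if random.random() < 0.01 else 0))

def measure(n_requests=2000):
    latencies = []
    for _ in range(n_requests):
        start = time.perf_counter()
        handle_request()
        latencies.append((time.perf_counter() - start) * 1000)  # milliseconds
    percentiles = statistics.quantiles(latencies, n=100)        # cut points 1..99
    return statistics.median(latencies), percentiles[98]        # p50, p99

if __name__ == "__main__":
    p50, p99 = measure()
    print(f"p50 = {p50:.2f} ms, p99 = {p99:.2f} ms")  # the tail sits far above the median
```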
263-5354-00L | Large Language Models | W | 8 credits | 3V + 2U + 2A | R. Cotterell, M. Sachan, F. Tramèr, C. Zhang
Abstract | Large language models have become one of the most commonly deployed NLP inventions. In the past half-decade, their integration into core natural language processing tools has dramatically increased the performance of such tools, and they have entered the public discourse surrounding artificial intelligence.
Learning objective | To understand the mathematical foundations of large language models as well as how to implement them.
Content | We start with the probabilistic foundations of language models, i.e., covering what constitutes a language model from a formal, theoretical perspective. We then discuss how to construct and curate training corpora, and introduce many of the neural-network architectures often used to instantiate language models at scale. The course covers aspects of systems programming, discussion of privacy and harms, as well as applications of language models in NLP and beyond. (A small illustrative sketch of a language model as a distribution over strings follows this entry.)
Literature | The lecture notes will be supplemented with various readings from the literature.
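As a deliberately tiny illustration of the "probabilistic foundations" mentioned in the Content field above, here is a hedged sketch of a language model in the formal sense: a probability distribution over strings, factored autoregressively. The toy corpus, whitespace tokenisation, bigram context, and add-one smoothing are all assumptions made for illustration and are not taken from the course.

```python
# Minimal sketch (illustrative, not course material) of a language model as a
# distribution over strings: p(w_1 ... w_n) = prod_i p(w_i | w_{i-1}).
from collections import Counter

corpus = ["the cat sat", "the dog sat", "the cat ran"]   # toy corpus (assumption)
BOS, EOS = "<s>", "</s>"

bigrams, contexts = Counter(), Counter()
vocab = {BOS, EOS}
for line in corpus:
    toks = [BOS] + line.split() + [EOS]
    vocab.update(toks)
    for prev, cur in zip(toks, toks[1:]):
        bigrams[(prev, cur)] += 1
        contexts[prev] += 1

def prob(prev, cur):
    """p(cur | prev) with add-one smoothing, so every string gets mass > 0."""
    return (bigrams[(prev, cur)] + 1) / (contexts[prev] + len(vocab))

def sentence_prob(sentence):
    """Probability of a whole string under the autoregressive factorisation."""
    toks = [BOS] + sentence.split() + [EOS]
    p = 1.0
    for prev, cur in zip(toks, toks[1:]):
        p *= prob(prev, cur)
    return p

if __name__ == "__main__":
    print(sentence_prob("the cat sat"))   # seen sentence: relatively high probability
    print(sentence_prob("sat the cat"))   # unseen word order: much lower probability
```

A large language model replaces the bigram count table with a neural network conditioned on the full prefix, but the object being defined, a distribution over strings, is the same.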