Search result: Catalogue data in Spring Semester 2018
Data Science Master
Core Courses
Data Analysis
Information and Learning
| Number | Title | Type | ECTS | Hours | Lecturers |
|---|---|---|---|---|---|
| 227-0434-10L | Mathematics of Information | W | 8 credits | 3V + 2U + 2A | H. Bölcskei |
**Abstract**: The class focuses on fundamental mathematical aspects of data science: information theory (lossless and lossy compression), sampling theory, compressed sensing, dimensionality reduction (Johnson-Lindenstrauss lemma), randomized algorithms for large-scale numerical linear algebra, approximation theory, neural networks as function approximators, and mathematical foundations of deep learning.
**Learning objective**: After attending this lecture, participating in the exercise sessions, and working on the homework problem sets, students will have acquired a working knowledge of the most commonly used mathematical theories in data science. Students will also carry out a research project, either individually or in groups, with presentations at the end of the semester.
**Content**:
1. Information theory: entropy, mutual information, lossy compression, rate-distortion theory, lossless compression, arithmetic coding, Lempel-Ziv compression
2. Signal representations: frames in finite-dimensional spaces, frames in Hilbert spaces, wavelets, Gabor expansions
3. Sampling theorems: the sampling theorem as a frame expansion, irregular sampling, multi-band sampling, density theorems, spectrum-blind sampling
4. Sparsity and compressed sensing: uncertainty principles, recovery algorithms, Lasso, matching pursuits, compressed sensing, non-linear approximation, best k-term approximation, super-resolution
5. High-dimensional data and dimensionality reduction: random projections, the Johnson-Lindenstrauss lemma, sketching
6. Randomized algorithms for large-scale numerical linear algebra: large-scale matrix computations, randomized algorithms for approximate matrix factorizations, matrix sketching, fast algorithms for large-scale FFTs
7. Mathematics of (deep) neural networks: universal function approximation with single- and multi-layer networks, fundamental limits on compressibility of signal classes, Kolmogorov epsilon-entropy of signal classes, geometry of decision surfaces, convolutional neural networks, scattering networks
**Lecture notes**: Detailed lecture notes will be provided as we go along.
**Prerequisites / notice**: This course is aimed at students with a background in basic linear algebra, analysis, and probability. We will, however, review required mathematical basics throughout the semester in the exercise sessions.
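As a small illustration of one topic listed above — dimensionality reduction via random projections (the Johnson-Lindenstrauss lemma) — the sketch below projects high-dimensional points through a random Gaussian matrix and checks that pairwise distance is approximately preserved. This is not course material; it is a minimal standalone demo, and all names in it are ours.

```python
import math
import random

random.seed(0)

def random_projection(points, k):
    """Project d-dimensional points to k dimensions using a random
    Gaussian matrix scaled by 1/sqrt(k), as in the JL lemma."""
    d = len(points[0])
    A = [[random.gauss(0.0, 1.0) / math.sqrt(k) for _ in range(d)]
         for _ in range(k)]
    return [[sum(row[j] * p[j] for j in range(d)) for row in A]
            for p in points]

def dist(u, v):
    """Euclidean distance between two vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# Two random points in dimension d, projected down to dimension k.
d, k = 2000, 400
p = [random.gauss(0.0, 1.0) for _ in range(d)]
q = [random.gauss(0.0, 1.0) for _ in range(d)]
pp, qq = random_projection([p, q], k)

# The JL lemma says this ratio concentrates around 1 (distortion
# roughly on the order of 1/sqrt(k)).
ratio = dist(pp, qq) / dist(p, q)
print(f"distance ratio after projection: {ratio:.3f}")
```

The concentration sharpens as the target dimension k grows; crucially, k needed for a given distortion depends only logarithmically on the number of points, not on the ambient dimension d.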