Name | Prof. Dr. Helmut Bölcskei |
Field | Mathematical Information Science |
Address | Professur Math. Informationswiss., ETH Zürich, ETF E 122, Sternwartstrasse 7, 8092 Zürich, SWITZERLAND |
Telephone | +41 44 632 34 33 |
E-Mail | hboelcskei@ethz.ch |
URL | https://www.mins.ee.ethz.ch/people/show/boelcskei |
Department | Information Technology and Electrical Engineering |
Relationship | Full Professor |
Number | Title | ECTS | Hours | Lecturers |
---|---|---|---|---|
227-0434-10L | Mathematics of Information | 8 credits | 3V + 2U + 2A | H. Bölcskei |
Abstract | The class focuses on fundamental mathematical aspects of data science: information theory (lossless and lossy compression), sampling theory, compressed sensing, dimensionality reduction (Johnson-Lindenstrauss Lemma), randomized algorithms for large-scale numerical linear algebra, approximation theory, neural networks as function approximators, and the mathematical foundations of deep learning. |
Objective | After attending this lecture, participating in the exercise sessions, and working on the homework problem sets, students will have acquired a working knowledge of the most commonly used mathematical theories in data science. Students will also have to carry out a research project, either individually or in groups, with presentations at the end of the semester. |
Content | 1. Information theory: Entropy, mutual information, lossy compression, rate-distortion theory, lossless compression, arithmetic coding, Lempel-Ziv compression
2. Signal representations: Frames in finite-dimensional spaces, frames in Hilbert spaces, wavelets, Gabor expansions
3. Sampling theorems: The sampling theorem as a frame expansion, irregular sampling, multi-band sampling, density theorems, spectrum-blind sampling
4. Sparsity and compressed sensing: Uncertainty principles, recovery algorithms, Lasso, matching pursuits, compressed sensing, non-linear approximation, best k-term approximation, super-resolution
5. High-dimensional data and dimensionality reduction: Random projections, the Johnson-Lindenstrauss Lemma, sketching
6. Randomized algorithms for large-scale numerical linear algebra: Large-scale matrix computations, randomized algorithms for approximate matrix factorizations, matrix sketching, fast algorithms for large-scale FFTs
7. Mathematics of (deep) neural networks: Universal function approximation with single- and multi-layer networks, fundamental limits on compressibility of signal classes, Kolmogorov epsilon-entropy of signal classes, geometry of decision surfaces, convolutional neural networks, scattering networks |
Lecture notes | Detailed lecture notes will be provided as we go along. | ||||
Prerequisites / Notice | This course is aimed at students with a background in basic linear algebra, analysis, and probability. We will, however, review required mathematical basics throughout the semester in the exercise sessions. |
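As a small taste of topic 5 in the Content list (random projections and the Johnson-Lindenstrauss Lemma), the following is a minimal NumPy sketch, not part of the course material itself: the point counts, dimensions, and seed are arbitrary demonstration values. It projects points from a high-dimensional space to a lower-dimensional one with a scaled Gaussian matrix and checks that pairwise distances are roughly preserved.

```python
import numpy as np

# Illustrative Johnson-Lindenstrauss-style random projection (demo values only):
# project n points from R^d down to R^k and compare pairwise distances.
rng = np.random.default_rng(0)
n, d, k = 50, 1000, 300

X = rng.standard_normal((n, d))               # n points in R^d
P = rng.standard_normal((k, d)) / np.sqrt(k)  # scaled Gaussian projection matrix
Y = X @ P.T                                   # projected points in R^k

def pairwise_distances(Z):
    # Euclidean distance between every pair of rows of Z.
    diff = Z[:, None, :] - Z[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

iu = np.triu_indices(n, k=1)                  # indices of the distinct point pairs
ratios = pairwise_distances(Y)[iu] / pairwise_distances(X)[iu]
print(f"distance ratios in [{ratios.min():.3f}, {ratios.max():.3f}]")
```

The lemma guarantees that for a target dimension k on the order of log(n)/eps^2, all distance ratios lie in [1 - eps, 1 + eps] with high probability; running the sketch, the printed ratios should cluster near 1.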