Martin Vechev: Catalogue Data in Autumn Semester 2018
|Name||Prof. Dr. Martin Vechev|
Institute for Programming Languages and Systems
ETH Zürich, CAB H 69.1
|Phone||+41 44 632 98 48|
|252-2600-05L||Software Engineering Seminar|
Number of participants limited to 22.
The deadline for deregistering expires at the end of the second week of the semester. Students who are still registered after that date, but do not attend the seminar, will officially fail the seminar.
|2 credits||2S||M. Vechev, D. Drachsler-Cohen|
|Abstract||The course is an introduction to research in software engineering, based on reading and presenting high-quality research papers in the field. The instructor may choose a variety of topics or one topic that is explored through several papers.|
|Objective||The main goals of this seminar are 1) learning how to read and understand a recent research paper in computer science; and 2) learning how to present a technical topic in computer science to an audience of peers.|
|Content||The technical content of this course falls into the general area of software engineering but will vary from semester to semester.|
|263-2400-00L||Reliable and Interpretable Artificial Intelligence||4 credits||2V + 1U||M. Vechev|
|Abstract||Creating reliable and explainable probabilistic models is a fundamental challenge to solving the artificial intelligence problem. This course covers some of the latest and most exciting advances that bring us closer to constructing such models.|
|Objective||The main objective of this course is to expose students to the latest and most exciting research in the area of explainable and interpretable artificial intelligence, a topic of fundamental and increasing importance. Upon completion of the course, the students should have mastered the underlying methods and be able to apply them to a variety of problems.|
To facilitate deeper understanding, an important part of the course will be a group hands-on programming project where students will build a system based on the learned material.
|Content||The course covers the following interconnected directions.|
Part I: Robust and Explainable Deep Learning
Deep learning technology has made impressive advances in recent years. Despite this progress, however, the fundamental challenge with deep learning remains understanding what a trained neural network has actually learned, and how stable that solution is. For example: is the network stable to slight perturbations of the input (e.g., an image)? How easy is it to fool the network into mis-classifying obvious inputs? Can we guide the network in a manner beyond simple labeled data?
- Attacks: Finding adversarial examples via state-of-the-art attacks (e.g., FGSM, PGD).
- Defenses: Automated methods and tools which guarantee robustness of deep nets (e.g., using abstract domains, mixed-integer solvers).
- Combining differentiable logic with gradient-based methods to train networks to satisfy richer properties.
- Frameworks: AI2, DiffAI, Reluplex, DQL, DeepPoly, etc.
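To make the attack side above concrete, here is a minimal one-step FGSM sketch. It uses an illustrative logistic-regression "network" rather than a deep net, and all weights and inputs are made up for the example; the course's actual attacks target real neural networks.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One-step Fast Gradient Sign Method on a logistic model.
    Perturbs x in the direction that increases the cross-entropy loss."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w            # d(BCE)/dx for logistic regression
    return x + eps * np.sign(grad_x)

# Illustrative model and input (not from the course material).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.2, 0.1])            # w @ x = 0.3 > 0: classified positive
x_adv = fgsm(x, y=1.0, w=w, b=b, eps=0.3)

# The small perturbation flips the classification of the input.
print(sigmoid(w @ x + b) > 0.5, sigmoid(w @ x_adv + b) > 0.5)  # True False
```

PGD attacks iterate this step with a projection back into the allowed perturbation ball; FGSM is the single-step special case.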
Part II: Program Synthesis/Induction
Synthesis is a new frontier in AI where the computer programs itself from user-provided examples. Synthesis has significant applications for non-programmers as well as for programmers, where it can provide a massive productivity increase (e.g., data wrangling for data scientists). Modern synthesis techniques excel at learning functions over discrete spaces from (partial) intent. There have been a number of recent, exciting breakthroughs in techniques that discover complex, interpretable/explainable functions from few examples, partial sketches, and other forms of supervision.
- Theory of program synthesis: version spaces, counter-example guided inductive synthesis (CEGIS) with SAT/SMT, lower bounds on learning.
- Applications of techniques: synthesis for end users (e.g., spreadsheets) and data analytics.
- Combining synthesis with learning: application to learning from code.
- Frameworks: PHOG, DeepCode.
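The CEGIS loop mentioned above can be sketched in a few lines. This toy version enumerates a finite candidate space and uses a bounded-domain checker as the "verifier"; real CEGIS uses a SAT/SMT solver for both steps, and the spec, candidate space, and bound here are invented for illustration.

```python
def spec(x):
    """Hidden specification the synthesizer must match: f(x) = x + 3."""
    return x + 3

def verify(prog):
    """Toy verifier: exhaustively check a bounded domain.
    Returns a counterexample input, or None if the program is correct."""
    for x in range(-10, 11):
        if prog(x) != spec(x):
            return x
    return None

def cegis(candidates):
    """Counter-example guided inductive synthesis, brute-force flavor:
    only candidates consistent with all counterexamples seen so far
    are handed to the (expensive) verifier."""
    examples = []
    for prog in candidates:
        if all(prog(x) == spec(x) for x in examples):
            cex = verify(prog)
            if cex is None:
                return prog
            examples.append(cex)
    return None

# Candidate space: the programs f(x) = x + c for c in 0..9.
candidates = [(lambda x, c=c: x + c) for c in range(10)]
f = cegis(candidates)
print(f(5))  # prints 8
```

The counterexample set grows monotonically, so each failed verification prunes the remaining candidates without re-running the verifier on them.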
Part III: Probabilistic Programming
Probabilistic programming is an emerging direction, recently also pushed by companies such as Facebook, Uber, and Google, whose goal is to democratize the construction of probabilistic models. In probabilistic programming, the user specifies a model while inference is left to the underlying solver. The idea is that the higher level of abstraction makes it easier to express, understand, and reason about probabilistic models.
- Probabilistic inference: sampling-based and exact symbolic inference, semantics.
- Applications of probabilistic programming: bias in deep learning, differential privacy (connects to Part I).
- Frameworks: PSI, Edward2, Venture.
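The split between "the user writes the model, the solver does inference" can be illustrated with a tiny generative model and generic rejection sampling. This is a hand-rolled sketch, not the API of PSI, Edward2, or Venture; the coin model and sample count are invented for the example.

```python
import random

def model():
    """Generative story written by the user: draw a coin bias from a
    Uniform(0, 1) prior, then flip the coin twice."""
    bias = random.random()
    flips = [random.random() < bias for _ in range(2)]
    return bias, flips

def infer(observed, n=100_000):
    """Generic rejection-sampling inference: run the model repeatedly and
    keep only the prior draws whose simulated data match the observations.
    Returns the posterior mean of the bias."""
    kept = [bias for bias, flips in (model() for _ in range(n))
            if flips == observed]
    return sum(kept) / len(kept)

random.seed(0)
post = infer([True, True])
print(post)  # close to the analytic posterior mean 3/4 (Beta(3, 1) posterior)
```

Sampling-based inference like this is fully generic but can be slow; exact symbolic engines such as PSI instead manipulate the posterior density in closed form.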
|Prerequisites / Notice||The course material is self-contained: the needed background is covered in the lectures and exercises, with additional pointers provided where useful.|
|264-5810-00L||Programming Languages Seminar|
Does not take place this semester.
|2 credits||2S||P. Müller, M. Vechev|
|Abstract||This graduate seminar provides doctoral students in computer science with a chance to read and discuss current research papers. Enrollment requires permission of the instructors. Credit units are granted only to active participants.|
|Objective||Learn about current research results in the areas of programming languages, static program analysis, program verification, and related fields; practice giving scientific presentations.|
|Content||The seminar will explore different topics from a research perspective.|
|Lecture notes||Supporting material will be distributed during the seminar.|
|Prerequisites / Notice||The seminar is open to assistants of the Chair of Programming Methodology and the Software Reliability Lab (Department of Computer Science). Others should contact the instructors.|