Martin Vechev: Catalogue data in Autumn Semester 2017

Name: Prof. Dr. Martin Vechev
Field: Computer Science
Address:
Inst. Programmiersprachen u. -syst
ETH Zürich, CAB H 69.1
Universitätstrasse 6
8092 Zürich
SWITZERLAND
Telephone: +41 44 632 98 48
E-mail: martin.vechev@inf.ethz.ch
URL: http://www.srl.inf.ethz.ch/
Department: Computer Science
Relationship: Full Professor

263-2100-00L Research Topics in Software Engineering
Restricted registration; number of participants limited to 22.
ECTS: 2 credits. Hours: 2S. Lecturers: P. Müller, T. Gross, M. Püschel, M. Vechev
Abstract: This seminar is an opportunity to become familiar with current research in software engineering and more generally with the methods and challenges of scientific research.
Objective: Each student will be asked to study some papers from the recent software engineering literature and review them. This is an exercise in critical review and analysis. Active participation is required (a presentation of a paper as well as participation in discussions).
Content: The aim of this seminar is to introduce students to recent research results in the area of programming languages and software engineering. To accomplish that, students will study and present research papers in the area as well as participate in paper discussions. The papers will span topics in both theory and practice, including papers on program verification, program analysis, testing, programming language design, and development tools. A particular focus will be on domain-specific languages.
Literature: The publications to be presented will be announced on the seminar home page at least one week before the first session.
Prerequisites / Notice: Organizational note: the seminar will meet only when there is a scheduled presentation. Please consult the seminar's home page for information.

263-2400-00L Reliable and Interpretable Artificial Intelligence
ECTS: 4 credits. Hours: 2V + 1U. Lecturer: M. Vechev
Abstract: Creating reliable and explainable probabilistic models is a major challenge on the way to solving the artificial intelligence problem. This course covers some of the latest advances that bring us closer to constructing such models. These advances span the areas of program synthesis/induction, programming languages, machine learning, and probabilistic programming.
Objective: The main objective of this course is to expose students to the latest and most exciting research in the area of explainable and interpretable artificial intelligence, a topic of fundamental and increasing importance. Upon completion of the course, students should have mastered the underlying methods and be able to apply them to a variety of problems.
Content: The material draws on some of the latest research advances in several areas of computer science: program synthesis/induction, programming languages, deep learning, and probabilistic programming.

The material consists of three interconnected parts:

Part I: Program Synthesis/Induction
----------------------------------------

Synthesis is a new frontier in AI where the computer programs itself from user-provided examples. Synthesis has significant applications for non-programmers as well as for programmers, where it can provide a massive productivity increase (e.g., data wrangling for data scientists). Modern synthesis techniques excel at learning functions over discrete spaces from (partial) intent. There have been a number of recent, exciting breakthroughs in techniques that discover complex, interpretable/explainable functions from a few examples, partial sketches, and other forms of supervision.

Topics covered:

- Theory of program synthesis: version spaces, counter-example guided inductive synthesis (CEGIS) with SAT/SMT, synthesis from noisy examples, learning with few examples, compositional synthesis, lower bounds on learning (a small CEGIS sketch follows this list).

- Applications of techniques: synthesis for end users (e.g., spreadsheets), data analytics and financial computing, interpretable machine learning models for structured data.

- Combining neural networks and synthesis
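
To make the CEGIS loop concrete, here is a minimal sketch using the Z3 SMT solver (the z3-solver Python package). The linear template and the toy specification are illustrative assumptions, not course material: a synthesizer fits template parameters to a growing set of examples, and a verifier searches for counter-examples.

# A minimal CEGIS sketch (illustrative, not course material).
# Requires: pip install z3-solver
# We synthesize the coefficients (a, b) of a hypothetical linear
# template f(x) = a*x + b so that f matches a toy spec on all integers.
from z3 import Int, Solver, sat

a, b, x = Int('a'), Int('b'), Int('x')

def spec(v):
    # Toy specification the synthesized function must match everywhere.
    return 2 * v + 3

examples = [0]  # finite set of inputs, grown by counter-examples
while True:
    # Synthesis step: find template parameters consistent with all examples.
    synth = Solver()
    for e in examples:
        synth.add(a * e + b == spec(e))
    assert synth.check() == sat, "template cannot fit the examples"
    m = synth.model()
    ca = m.eval(a, model_completion=True).as_long()
    cb = m.eval(b, model_completion=True).as_long()

    # Verification step: search for an input where the candidate disagrees.
    verify = Solver()
    verify.add(ca * x + cb != spec(x))
    if verify.check() == sat:
        examples.append(verify.model()[x].as_long())  # add counter-example
    else:
        print("synthesized f(x) = %d*x + %d" % (ca, cb))
        break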

Part II: Robustness of Deep Learning
-----------------------------------------

Deep learning methods based on neural networks have made impressive advances in recent years. A fundamental challenge with these models is understanding what the trained neural network has actually learned: for example, how stable/robust the network is to slight variations of the input (e.g., an image or a video), and how easy it is to fool the network into misclassifying obvious inputs.

Topics covered:

- Basics of neural networks: fully connected, convolutional networks, residual networks, activation functions

- Finding adversarial examples in deep learning with SMT

- Methods and tools to guarantee robustness of deep nets (e.g., via affine arithmetic, SMT solvers); synthesis of robustness specifications (a small interval-arithmetic sketch follows this list).
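
As a flavor of such certification methods, here is a minimal sketch that uses plain interval arithmetic, a simpler relative of the affine arithmetic mentioned above, to certify a tiny fully connected ReLU network against L-infinity perturbations. The network weights are random placeholders, not from the course.

# Interval-arithmetic robustness sketch (weights are made-up placeholders).
import numpy as np

def interval_affine(lo, hi, W, b):
    # Propagate the box [lo, hi] through the affine map x -> W @ x + b.
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def certify(x, eps, layers):
    # Sound but incomplete check: True means every input within
    # L-infinity distance eps of x provably gets the same top class.
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(layers):
        lo, hi = interval_affine(lo, hi, W, b)
        if i < len(layers) - 1:  # ReLU on hidden layers (monotone)
            lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)
    c = int(np.argmax((lo + hi) / 2))  # candidate: best output-bound midpoint
    # Certified if class c's lower bound beats every other class's upper bound.
    return all(lo[c] > hi[j] for j in range(len(lo)) if j != c)

rng = np.random.default_rng(0)
layers = [(rng.normal(size=(4, 3)), np.zeros(4)),   # hidden layer
          (rng.normal(size=(2, 4)), np.zeros(2))]   # output layer
print(certify(np.array([0.5, -0.2, 0.1]), 0.01, layers))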


Part III: Probabilistic Programming
--------------------------------------

Probabilistic programming is an emerging direction whose goal is to democratize the construction of probabilistic models. In probabilistic programming, the user specifies a model while inference is left to the underlying solver. The idea is that the higher level of abstraction makes it easier to express, understand, and reason about probabilistic models (a minimal sketch of this separation follows the topic list below).

Topics covered:

- Inference: MCMC samplers and tactics (approximate), symbolic inference (exact).

- Semantics: basic measure theoretic semantics of probability; bridging measure theory and symbolic inference.

- Frameworks and languages: WebPPL (MIT/Stanford), PSI (ETH), Picture/Venture (MIT), Anglican (Oxford).

- Synthesis for probabilistic programs: this connects to Part I

- Applications of probabilistic programming: using the above solvers for reasoning about bias in machine learning models (connects to Part II), reasoning about computer networks, security protocols, approximate computing, cognitive models, rational agents.
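
As a minimal illustration of the model/inference separation above, the following sketch writes a model as a log-density and hands it to a generic Metropolis-Hastings sampler (one of the approximate MCMC methods listed). The coin-flip model and data are made up for illustration.

# Model/inference separation sketch: a toy model plus generic MCMC inference.
import math, random

data = [1, 1, 0, 1, 1, 0, 1, 1]  # hypothetical observed coin flips

def log_joint(p):
    # Model: Uniform(0, 1) prior on the bias p, Bernoulli(p) likelihood.
    if not 0.0 < p < 1.0:
        return -math.inf
    return sum(math.log(p) if f else math.log(1 - p) for f in data)

def metropolis_hastings(log_density, x0, steps=20000, scale=0.1):
    # Generic inference: random-walk MH with a symmetric Gaussian proposal.
    x, lx, samples = x0, log_density(x0), []
    for _ in range(steps):
        y = x + random.gauss(0, scale)
        ly = log_density(y)
        if random.random() < math.exp(min(0.0, ly - lx)):  # accept prob min(1, e^(ly-lx))
            x, lx = y, ly
        samples.append(x)
    return samples

samples = metropolis_hastings(log_joint, 0.5)[5000:]  # drop burn-in
print("posterior mean of the bias ~", sum(samples) / len(samples))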

263-2920-00L Machine Learning for Interactive Systems and Advanced Programming Tools
Does not take place this semester.
ECTS: 2 credits. Hours: 2S. Lecturers: O. Hilliges, M. Vechev
Abstract: Seminar on the intersection of machine learning, interactive systems, and advanced concepts in programming and programming tools.
Objective: The seminar will cover a variety of machine learning models and algorithms (including deep neural networks) and will discuss their applications in a diverse set of domains. Furthermore, the seminar will discuss how domain knowledge is integrated into vanilla ML models.
Content: Seminars often suffer from poor attention retention and low student engagement. This is often due to a format in which only one student reads papers in depth and then prepares a long presentation about one or sometimes several papers; the other students have little reason to really pay attention or engage in the discussion.

To improve on this, the seminar will use a case-study format where all students read the same paper each week but fulfill different roles and hence prepare with different viewpoints in mind.

Student roles/instructions

The seminar is organized with each student taking one of the following roles on a rotating basis:

Conference Reviewer (e.g., reviewer of UIST/ICML/PLDI): Complete a full critical review of the paper. Use the original review form and come to a recommendation on whether the paper should be accepted or not.

Historian: Find out how this paper sits in the context of the related work. Use bibliography tools to find the most influential papers cited by this work and at least one paper influenced by the work (and summarize the two papers).

PhD student: Propose a follow-up project for your own research based on this paper; importantly, the project should be directly inspired by the paper or even use/extend the proposed method.

Hacker: Implement a (simplified) version of the core aspect of the paper and prepare a demo for the seminar. If the complexity is too high, perform an in-depth analysis of the paper's reproducibility instead.

Detective: Find out background information about the authors. Where did they work when the paper was published; what was their role; who else have they published with; which prior work of the authors may have inspired the current paper? Students may contact the authors (but must be polite and courteous and stay on topic in their conversations).

All students (every week): Come up with an alternative title and find a missing result that the paper should have included.
Prerequisites / Notice: Participation will be limited subject to available topics.

264-5810-00L Programming Languages Seminar
ECTS: 2 credits. Hours: 2S. Lecturers: P. Müller, M. Vechev
Abstract: This graduate seminar offers doctoral students in computer science a chance to read and discuss current research papers. Enrollment requires permission of the instructors. Credit units are granted only to active participants.
Objective: Learn about current research results in the areas of programming languages, static program analysis, program verification, and related fields; practice giving scientific presentations.
Content: The seminar will explore different topics from a research perspective.
Lecture notes: Supporting material will be distributed during the seminar.
Prerequisites / Notice: The seminar is open to assistants of the Chair of Programming Methodology and the Software Reliability Lab (Department of Computer Science). Others should contact the instructors.