263-2400-00L  Reliable and Interpretable Artificial Intelligence

Semester: Autumn Semester 2017
Lecturers: M. Vechev
Periodicity: yearly recurring course
Language of instruction: English



Courses

Number         Title                                                Hours  Lecturers
263-2400-00 V  Reliable and Interpretable Artificial Intelligence   2 hrs  M. Vechev
               Tue 10:15-12:00, HG E 3
               19.09. 10:15-12:00, CAB G 59
               26.09. 10:15-12:00, CAB G 59
263-2400-00 U  Reliable and Interpretable Artificial Intelligence   1 hr   M. Vechev
               Tue 14:15-15:00, HG F 26.3
               Wed 11:15-12:00, CAB G 59

Catalogue data

Abstract: Creating reliable and explainable probabilistic models is a major challenge in solving the artificial intelligence problem. This course covers some of the latest advances that bring us closer to constructing such models. These advances span the areas of program synthesis/induction, programming languages, machine learning, and probabilistic programming.
Objective: The main objective of this course is to expose students to the latest and most exciting research in the area of explainable and interpretable artificial intelligence, a topic of fundamental and increasing importance. Upon completion of the course, students should have mastered the underlying methods and be able to apply them to a variety of problems.
Content: The material draws on some of the latest research advances in several areas of computer science: program synthesis/induction, programming languages, deep learning, and probabilistic programming.

The material consists of three interconnected parts:

Part I: Program Synthesis/Induction
----------------------------------------

Synthesis is a new frontier in AI where the computer programs itself from user-provided examples. Synthesis has significant applications for non-programmers as well as for programmers, where it can provide a massive productivity increase (e.g., data wrangling for data scientists). Modern synthesis techniques excel at learning functions over discrete spaces from (partial) intent. There have been a number of recent, exciting breakthroughs in techniques that discover complex, interpretable/explainable functions from a few examples, partial sketches, and other forms of supervision.

Topics covered:

- Theory of program synthesis: version spaces, counterexample-guided inductive synthesis (CEGIS) with SAT/SMT (see the sketch after this list), synthesis from noisy examples, learning with few examples, compositional synthesis, lower bounds on learning.

- Applications of techniques: synthesis for end users (e.g., spreadsheets), data analytics and financial computing, interpretable machine learning models for structured data.

- Combining neural networks and synthesis.
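
The CEGIS loop mentioned above can be made concrete in a few lines. Below is a minimal sketch against the z3 SMT solver's Python bindings; the linear template f(x) = a*x + b, the specification f(0) == 1 and f(x + 1) == f(x) + 2, and the input bounds are illustrative assumptions, not course material:

    # CEGIS sketch: alternate between synthesizing template parameters from a
    # finite example set and verifying the candidate on the whole input range.
    from z3 import Int, Solver, And, Not, sat

    a, b = Int('a'), Int('b')          # unknown parameters of f(x) = a*x + b

    def f(av, bv, x):
        return av * x + bv             # the linear template

    def spec(av, bv, x):
        # Property the synthesized function must satisfy at input x
        # (illustrative): f(0) == 1 and f(x + 1) == f(x) + 2.
        return And(f(av, bv, 0) == 1, f(av, bv, x + 1) == f(av, bv, x) + 2)

    examples = [0]                     # initial example inputs
    while True:
        # Synthesis step: find parameters consistent with all examples so far.
        synth = Solver()
        for e in examples:
            synth.add(spec(a, b, e))
        assert synth.check() == sat, "template cannot realize the spec"
        m = synth.model()
        ca, cb = m[a], m[b]            # candidate constants (z3 numerals)

        # Verification step: search for an input where the candidate fails.
        x = Int('x')
        verify = Solver()
        verify.add(x >= -100, x <= 100, Not(spec(ca, cb, x)))
        if verify.check() != sat:      # no counterexample: candidate is correct
            print('synthesized f(x) = %s*x + %s' % (ca, cb))
            break
        examples.append(verify.model()[x].as_long())   # refine and retry

The same two-solver structure scales to richer templates: only spec and the template change, while the synthesize/verify alternation stays the same.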

Part II: Robustness of Deep Learning
-----------------------------------------

Deep learning methods based on neural networks have made impressive advances in recent years. A fundamental challenge with these models is understanding what a trained neural network has actually learned: for example, how stable/robust the network is to slight variations of the input (e.g., an image or a video), or how easy it is to fool the network into misclassifying obvious inputs.

Topics covered:

- Basics of neural networks: fully connected networks, convolutional networks, residual networks, activation functions.

- Finding adversarial examples in deep learning with SMT (see the sketch after this list).

- Methods and tools to guarantee robustness of deep nets (e.g., via affine arithmetic, SMT solvers); synthesis of robustness specifications.
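
As a concrete illustration of the SMT-based search for adversarial examples, the sketch below encodes a tiny fully connected ReLU network in z3 and asks for an input inside an L-infinity ball around a reference point that flips the predicted class. The 2-2-2 architecture, the hand-picked weights, and the budget eps are made-up assumptions for the example:

    # Encode a 2-2-2 ReLU network symbolically and query z3 for a
    # misclassified point within distance eps of the reference input.
    from z3 import Real, Solver, If, sat

    W1 = [[1.0, -1.0], [-1.0, 1.0]]    # hidden = relu(W1 @ x)
    W2 = [[1.0, 0.0], [0.0, 1.0]]      # out    = W2 @ hidden
    x_ref = [1.0, 0.0]                 # reference input, classified as class 0
    eps = 0.6                          # L-infinity perturbation budget

    def relu(v):
        return If(v > 0, v, 0)         # ReLU as a symbolic if-then-else

    x = [Real('x0'), Real('x1')]       # symbolic perturbed input
    s = Solver()
    for xi, ri in zip(x, x_ref):       # stay inside the L-inf ball around x_ref
        s.add(xi >= ri - eps, xi <= ri + eps)

    hidden = [relu(sum(W1[i][j] * x[j] for j in range(2))) for i in range(2)]
    out = [sum(W2[i][j] * hidden[j] for j in range(2)) for i in range(2)]

    s.add(out[1] > out[0])             # ask the solver to flip the class
    if s.check() == sat:
        m = s.model()
        print('adversarial input:', [m.eval(xi) for xi in x])
    else:
        print('no adversarial example within eps; robust around x_ref')

Certification methods mentioned in the last bullet invert this query: instead of searching for one failing input, they try to prove that no input in the ball flips the class, e.g., by propagating intervals or affine forms through the layers.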


Part III: Probabilistic Programming
--------------------------------------

Probabilistic programming is an emerging direction whose goal is to democratize the construction of probabilistic models. In probabilistic programming, the user specifies a model while inference is left to the underlying solver. The idea is that the higher level of abstraction makes it easier to express, understand, and reason about probabilistic models.

Topics covered:

- Inference: MCMC samplers and tactics (approximate), symbolic inference (exact); see the sampler sketch after this list.

- Semantics: basic measure theoretic semantics of probability; bridging measure theory and symbolic inference.

- Frameworks and languages: WebPPL (MIT/Stanford), PSI (ETH), Picture/Venture (MIT), Anglican (Oxford).

- Synthesis for probabilistic programs: this connects to Part I.

- Applications of probabilistic programming: using the above solvers for reasoning about bias in machine learning models (connects to Part II), reasoning about computer networks, security protocols, approximate computing, cognitive models, rational agents.
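
To make the approximate-inference topic concrete, here is a minimal Metropolis-Hastings sampler of the kind that probabilistic-programming backends build on, written in plain Python; the coin-bias model and all numbers are illustrative assumptions, independent of the frameworks listed above:

    # Metropolis-Hastings: infer a coin's bias p from 8 heads in 10 flips,
    # under a uniform prior on [0, 1].
    import math
    import random

    HEADS, FLIPS = 8, 10

    def log_posterior(p):
        # Uniform prior contributes a constant; binomial likelihood up to a
        # constant factor: p^HEADS * (1 - p)^(FLIPS - HEADS).
        if not 0.0 < p < 1.0:
            return -math.inf           # zero posterior mass outside (0, 1)
        return HEADS * math.log(p) + (FLIPS - HEADS) * math.log(1.0 - p)

    samples, p = [], 0.5               # start the chain at p = 0.5
    for _ in range(20000):
        proposal = p + random.gauss(0.0, 0.1)   # symmetric random-walk proposal
        # Accept with probability min(1, posterior(proposal) / posterior(p)).
        if math.log(random.random()) < log_posterior(proposal) - log_posterior(p):
            p = proposal
        samples.append(p)

    posterior = samples[5000:]         # discard burn-in
    print('posterior mean bias:', sum(posterior) / len(posterior))   # ~0.75

In a probabilistic language such as WebPPL or PSI the user writes only the model; a generic sampler (approximate) or a symbolic engine (exact: here the posterior is Beta(9, 3), with mean 0.75) plays the role of this loop.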

Performance assessment

Performance assessment information (valid until the course unit is held again)
Performance assessment as a semester course
ECTS credits: 4 credits
Examiners: M. Vechev
Type: session examination
Language of examination: English
Repetition: The performance assessment is only offered in the session after the course unit. Repetition only possible after re-enrolling.
Mode of examination: written, 90 minutes
Written aids: None
This information can be updated until the beginning of the semester; information on the examination timetable is binding.

Learning materials

 
Main link: Information
Only public learning materials are listed.

Groups

No information on groups available.

Restrictions

There are no additional restrictions for the registration.

Offered in

Programme                Section                                      Type
CAS in Computer Science  Focus Courses and Electives                  W
Data Science Master      Core Electives                               W
Computer Science Master  Focus Elective Courses Information Systems   W
Computer Science Master  Focus Elective Courses General Studies       W
Computer Science Master  Focus Elective Courses Visual Computing      W
Computer Science Master  Focus Elective Courses Software Engineering  W