Search result: Catalogue data in Autumn Semester 2017

Computer Science Master Information
Focus Courses
Focus Courses in Information Systems
Focus Elective Courses Information Systems
Number | Title | Type | ECTS | Hours | Lecturers
252-0373-00L | Mobile and Personal Information Systems
The course will be offered for the last time.
W | 4 credits | 2V + 1U | M. Norrie
Abstract: The course examines how traditional information system architectures and technologies have been adapted to support various forms of mobile and personal information systems. Topics to be covered include: databases of mobile objects; context-aware services; opportunistic information sharing; ambient information; pervasive display systems.
Objective: Students will be introduced to a variety of novel information services and architectures developed for mobile environments in order to gain insight into the requirements and processes involved in designing and developing such systems, and to learn to think beyond traditional information systems.
Content: Advances in mobile devices and communication technologies have led to a rapid increase in demands for various forms of mobile information systems where the users, the applications and the databases themselves may be mobile. Based on both lectures and breakout sessions, this course examines the impact of the different forms of mobility and collaboration that systems require nowadays and how these influence the design of systems at the database, the application and the user interface level. For example, traditional data management techniques have to be adapted to meet the requirements of such systems and cope with new connection, access and synchronisation issues.

As mobile devices have increasingly become integrated into the users' lives and are expected to support a range of activities in different environments, applications should be context-aware, adapting functionality, information delivery and the user interfaces to the current environment and task. Various forms of software and hardware sensors may be used to determine the current context, raising interesting issues for discussion.

Finally, user mobility, and the varying and intermittent connectivity that it implies, gives rise to new forms of dynamic collaboration that require lightweight, but flexible, mechanisms for information synchronisation and consistency maintenance. Here, the interplay of mobile, personal and social context will receive special attention.
263-3010-00L | Big Data
Restricted registration.
W | 8 credits | 3V + 2U + 2A | G. Fourny
Abstract: The key challenge of the information society is to turn data into information, information into knowledge, knowledge into value. This has become increasingly complex. Data comes in larger volumes, diverse shapes, from different sources. Data is more heterogeneous and less structured than forty years ago. Nevertheless, it still needs to be processed fast, with support for complex operations.
Objective: This combination of requirements, together with the technologies that have emerged in order to address them, is typically referred to as "Big Data." This revolution has led to a completely new way to do business, e.g., develop new products and business models, but also to do science -- which is sometimes referred to as data-driven science or the "fourth paradigm".

Unfortunately, the quantity of data produced and available -- now in the Zettabyte range (that's 21 zeros) per year -- keeps growing faster than our ability to process it. Hence, new architectures and approaches for processing it were, and still are, needed. Harnessing them requires a deep understanding of data not only in the large, but also in the small.

The field of databases evolves at a fast pace. In order to be prepared, to the extent possible, for the (r)evolutions that will take place in the next few decades, the emphasis of the lecture will be on the paradigms and core design ideas, while today's technologies will serve as supporting illustrations thereof.

After attending this lecture, you should have gained an overview and understanding of the Big Data landscape, which is the basis on which one can make informed decisions, i.e., pick and orchestrate the relevant technologies for addressing each business use case efficiently and consistently.
Content: This course gives an overview of database technologies and of the most important database design principles that lay the foundations of the Big Data universe. The material is organized along three axes: data in the large, data in the small, data in the very small. A broad range of aspects is covered with a focus on how they fit all together in the big picture of the Big Data ecosystem.

- physical storage: distributed file systems (HDFS), object storage (S3), key-value stores

- logical storage: document stores (MongoDB), column stores (HBase), graph databases (neo4j), data warehouses (ROLAP)

- data formats and syntaxes (XML, JSON, RDF, Turtle, CSV, XBRL, YAML, protocol buffers, Avro)

- data shapes and models (tables, trees, graphs, cubes)

- type systems and schemas: atomic types, structured types (arrays, maps), set-based type systems (?, *, +)

- an overview of functional, declarative programming languages across data shapes (SQL, XQuery, JSONiq, Cypher, MDX)

- the most important query paradigms (selection, projection, joining, grouping, ordering, windowing)

- paradigms for parallel processing, two-stage (MapReduce) and DAG-based (Spark); a minimal MapReduce sketch follows this topic list

- resource management (YARN)

- what a data center is made of and why it matters (racks, nodes, ...)

- underlying architectures (internal machinery of HDFS, HBase, Spark, neo4j)

- optimization techniques (functional and declarative paradigms, query plans, rewrites, indexing)

- applications.

Large scale analytics and machine learning are outside of the scope of this course.
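For a concrete feel of the two-stage MapReduce paradigm listed above, here is a minimal, self-contained Python sketch. It is an illustration only, not course material: a real Hadoop or Spark job would distribute these phases across a cluster, whereas everything here runs in one process.

    # Minimal sketch of the two-stage MapReduce paradigm in plain Python.
    from collections import defaultdict

    def map_phase(document):
        # Map: emit one (word, 1) pair per token.
        for word in document.split():
            yield (word.lower(), 1)

    def shuffle_phase(pairs):
        # Shuffle: group all values by key, as the framework would
        # between the map and reduce stages.
        groups = defaultdict(list)
        for key, value in pairs:
            groups[key].append(value)
        return groups

    def reduce_phase(key, values):
        # Reduce: aggregate all counts for one word.
        return key, sum(values)

    documents = ["big data is big", "data centers store big data"]
    pairs = [pair for doc in documents for pair in map_phase(doc)]
    counts = dict(reduce_phase(k, v) for k, v in shuffle_phase(pairs).items())
    print(counts)  # {'big': 3, 'data': 3, 'is': 1, 'centers': 1, 'store': 1}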

Guest lectures planned so far:
- Bart Samwel (Google) on F1, Spanner
Literature: Papers from scientific conferences and journals. References will be given as part of the course material during the semester.
Prerequisites / Notice: In the autumn semester, this course is intended only for:
- Computer Science students
- Data Science students
- CBB students with a Computer Science background

Another version of this course will be offered in the spring semester for students of other departments. If you would like to start learning about databases already now, a good preparation for (and prequel to) the spring edition of Big Data is the "Information Systems for Engineers" course, also offered this autumn for other departments, which introduces relational databases and SQL.
263-3210-00L | Deep Learning
Restricted registration. Number of participants limited to 300.
W | 4 credits | 2V + 1U | T. Hofmann
Abstract: Deep learning is an area within machine learning that deals with algorithms and models that automatically induce multi-level data representations.
Objective: In recent years, deep learning and deep networks have significantly improved the state-of-the-art in many application domains such as computer vision, speech recognition, and natural language processing. This class will cover the mathematical foundations of deep learning and provide insights into model design, training, and validation. The main objective is a profound understanding of why these methods work and how. There will also be a rich set of hands-on tasks and practical projects to familiarize students with this emerging technology.
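For orientation only (not part of the syllabus), the following minimal numpy sketch trains a two-layer network on XOR by gradient descent; the hidden layer is a simple instance of a learned multi-level representation, and the hidden size, learning rate, and iteration count are illustrative choices.

    # Minimal sketch: a two-layer network learning XOR with plain numpy.
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
    W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 1.0
    for _ in range(2000):
        # Forward pass: the hidden layer h is the learned representation.
        h = np.tanh(X @ W1 + b1)
        p = sigmoid(h @ W2 + b2)
        # Backward pass: gradient of the mean cross-entropy loss
        # w.r.t. the pre-sigmoid output is (p - y) / n.
        dp = (p - y) / len(X)
        dW2 = h.T @ dp; db2 = dp.sum(0)
        dh = dp @ W2.T * (1 - h ** 2)   # tanh'(z) = 1 - tanh(z)^2
        dW1 = X.T @ dh; db1 = dh.sum(0)
        W2 -= lr * dW2; b2 -= lr * db2
        W1 -= lr * dW1; b1 -= lr * db1

    print(np.round(p, 2))  # typically approaches [[0], [1], [1], [0]]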
Prerequisites / Notice: This is an advanced-level course that requires some basic background in machine learning. More importantly, students are expected to have a very solid mathematical foundation, including linear algebra, multivariate calculus, and probability. The course will make heavy use of mathematics and is not (!) meant to be an extended tutorial on how to train deep networks with tools like Torch or TensorFlow, although that may be a side benefit.

Participation in the course is subject to the following conditions:
1) The number of participants is limited to 300 students (MSc and PhDs).
2) Students must have taken the exam in Machine Learning (252-0535-00) or have acquired equivalent knowledge; see the exhaustive list below:

- Machine Learning (Link)
- Computational Intelligence Lab (Link)
- Learning and Intelligent Systems (Link)
- Statistical Learning Theory (Link)
- Computational Statistics (Link)
- Probabilistic Artificial Intelligence (Link)
- Data Mining: Learning from Large Data Sets (Link)
263-3300-00L | Data Science Lab
Restricted registration. Number of participants limited to 30.
In the Master's programme, a maximum of 10 credits can be accounted for by Labs on top of the Interfocus Courses. Additional Labs will be listed on the Addendum.
W | 10 credits | 9P | C. Zhang, K. Schawinski
Abstract: In this class, we bring together data science applications provided by ETH researchers outside computer science and teams of computer science master's students. Two to three students will form a team working on data science/machine learning-related research topics provided by scientists in a diverse range of domains such as astronomy, biology, and the social sciences.
Objective: The goal of this class is for students to gain experience of dealing with data science and machine learning applications "in the wild". Students are expected to go through the full process, from data cleaning and modeling through execution, debugging, and error analysis to quality/performance refinement.

Website: Link
Prerequisites / Notice: Each student is required to send the lecturer their CV and transcript, and the lecturer will decide on enrollment on a per-student basis. Moreover, students are expected to have experience with machine learning and deep learning.

EMAIL to send CV: Link
263-5200-00L | Data Mining: Learning from Large Data Sets
W | 4 credits | 2V + 1U | A. Krause, Y. Levy
Abstract: Many scientific and commercial applications require insights from massive, high-dimensional data sets. This course introduces principled, state-of-the-art techniques from statistics, algorithms, and discrete and convex optimization for learning from such large data sets. The course covers both theoretical foundations and practical applications.
Objective: Many scientific and commercial applications require us to obtain insights from massive, high-dimensional data sets. In this graduate-level course, we will study principled, state-of-the-art techniques from statistics, algorithms, and discrete and convex optimization for learning from such large data sets. The course will cover both theoretical foundations and practical applications.
Content: Topics covered:
- Dealing with large data (Data centers; Map-Reduce/Hadoop; Amazon Mechanical Turk)
- Fast nearest neighbor methods (shingling, locality-sensitive hashing); a minimal LSH sketch follows this topic list
- Online learning (Online optimization and regret minimization, online convex programming, applications to large-scale Support Vector Machines)
- Multi-armed bandits (exploration-exploitation tradeoffs, applications to online advertising and relevance feedback)
- Active learning (uncertainty sampling, pool-based methods, label complexity)
- Dimension reduction (random projections, nonlinear methods)
- Data streams (Sketches, coresets, applications to online clustering)
- Recommender systems
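To make the locality-sensitive-hashing item above concrete, here is a minimal Python sketch of LSH for cosine similarity via random hyperplanes (signed random projections). It is an illustration only; the dimension, number of hyperplanes, and data are invented.

    # Minimal sketch: locality-sensitive hashing for cosine similarity
    # using random hyperplanes (signed random projections).
    import numpy as np

    rng = np.random.default_rng(0)
    dim, n_bits = 50, 16                      # illustrative choices
    planes = rng.normal(size=(n_bits, dim))

    def lsh_key(x):
        # Each random hyperplane contributes one sign bit; vectors with
        # a small angle agree on most bits and tend to share a bucket.
        return tuple((planes @ x > 0).astype(int))

    # Index a dataset into hash buckets.
    data = rng.normal(size=(1000, dim))
    buckets = {}
    for i, x in enumerate(data):
        buckets.setdefault(lsh_key(x), []).append(i)

    # Query: only candidates in the same bucket are checked exactly.
    q = data[42] + 0.01 * rng.normal(size=dim)   # near-duplicate of item 42
    candidates = buckets.get(lsh_key(q), [])
    print(42 in candidates)  # True with high probability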
Prerequisites / Notice: Solid basic knowledge in statistics, algorithms, and programming. Background in machine learning is helpful but not required.
263-5210-00L | Probabilistic Artificial Intelligence
W | 4 credits | 2V + 1U | A. Krause
Abstract: This course introduces core modeling techniques and algorithms from statistics, optimization, planning, and control, and studies applications in areas such as sensor networks, robotics, and the Internet.
Objective: How can we build systems that perform well in uncertain environments and unforeseen situations? How can we develop systems that exhibit "intelligent" behavior, without prescribing explicit rules? How can we build systems that learn from experience in order to improve their performance? We will study core modeling techniques and algorithms from statistics, optimization, planning, and control, and study applications in areas such as sensor networks, robotics, and the Internet. The course is designed for upper-level undergraduate and graduate students.
Content: Topics covered:
- Search (BFS, DFS, A*), constraint satisfaction and optimization
- Tutorial in logic (propositional, first-order)
- Probability
- Bayesian networks (models, exact and approximate inference, learning)
- Temporal models (hidden Markov models, dynamic Bayesian networks)
- Probabilistic planning (MDPs, POMDPs); a minimal value-iteration sketch follows this topic list
- Reinforcement learning
- Combining logic and probability
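As a pointer to what the planning item above covers, the following minimal Python sketch runs value iteration on a toy two-state MDP; the states, transitions, rewards, and discount factor are invented for illustration.

    # Minimal sketch: value iteration on a toy 2-state, 2-action MDP.
    import numpy as np

    gamma = 0.9
    # P[s, a, s'] = transition probability, R[s, a] = expected reward
    # (all values invented for illustration).
    P = np.array([[[0.8, 0.2], [0.1, 0.9]],
                  [[0.5, 0.5], [0.0, 1.0]]])
    R = np.array([[1.0, 0.0],
                  [0.0, 2.0]])

    V = np.zeros(2)
    for _ in range(1000):
        # Bellman optimality backup:
        # Q(s,a) = R(s,a) + gamma * sum_s' P(s'|s,a) V(s')
        Q = R + gamma * (P @ V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < 1e-8:
            break
        V = V_new

    print(V, Q.argmax(axis=1))  # optimal state values and greedy policy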
Prerequisites / Notice: Solid basic knowledge in statistics, algorithms, and programming.
252-0341-01L | Information Retrieval
Does not take place this semester.
W | 4 credits | 2V + 1U | T. Hofmann
Abstract: Introduction to information retrieval with a focus on text documents and images. Main topics comprise extraction of characteristic features from documents, index structures, retrieval models, search algorithms, benchmarking, and feedback mechanisms. Searching the web, images, and XML collections demonstrates recent applications of information retrieval and their implementation.
Objective: In-depth understanding of managing, indexing, and retrieving documents with text, image, and XML content. Knowledge about basic search algorithms on the web, benchmarking of search algorithms, and relevance feedback methods.
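To hint at what index structures and retrieval models mean in code, here is a minimal Python sketch of an inverted index with TF-IDF ranking; the tokenizer, the tiny document set, and the scoring are deliberately simplified and not course material.

    # Minimal sketch: inverted index with TF-IDF ranking.
    import math
    from collections import Counter, defaultdict

    docs = ["mobile information systems",
            "information retrieval with text and images",
            "retrieval models and search algorithms"]

    # Inverted index: term -> {doc_id: term frequency}.
    index = defaultdict(dict)
    for doc_id, text in enumerate(docs):
        for term, tf in Counter(text.split()).items():
            index[term][doc_id] = tf

    def search(query):
        scores = defaultdict(float)
        for term in query.split():
            postings = index.get(term, {})
            if not postings:
                continue
            # Rarer terms get a higher inverse document frequency.
            idf = math.log(len(docs) / len(postings))
            for doc_id, tf in postings.items():
                scores[doc_id] += tf * idf
        return sorted(scores.items(), key=lambda kv: -kv[1])

    print(search("information retrieval"))  # ranked (doc_id, score) pairs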
263-2400-00L | Reliable and Interpretable Artificial Intelligence
W | 4 credits | 2V + 1U | M. Vechev
Abstract: Creating reliable and explainable probabilistic models is a major challenge to solving the artificial intelligence problem. This course covers some of the latest advances that bring us closer to constructing such models. These advances span the areas of program synthesis/induction, programming languages, machine learning, and probabilistic programming.
Objective: The main objective of this course is to expose students to the latest and most exciting research in the area of explainable and interpretable artificial intelligence, a topic of fundamental and increasing importance. Upon completion of the course, the students should have mastered the underlying methods and be able to apply them to a variety of problems.
Content: The material draws on some of the latest research advances in several areas of computer science: program synthesis/induction, programming languages, deep learning, and probabilistic programming.

The material consists of three interconnected parts:

Part I: Program Synthesis/Induction
----------------------------------------

Synthesis is a new frontier in AI where the computer programs itself from user-provided examples. Synthesis has significant applications for non-programmers as well as for programmers, where it can provide a massive productivity increase (e.g., data wrangling for data scientists). Modern synthesis techniques excel at learning functions over discrete spaces from (partial) intent. There have been a number of recent, exciting breakthroughs in techniques that discover complex, interpretable/explainable functions from few examples, partial sketches, and other forms of supervision.
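To give a feel for programming-by-example, here is a minimal Python sketch of enumerative synthesis: it searches a tiny expression grammar, smallest expressions first, for a function consistent with input/output examples. The grammar and the examples are invented for illustration, and real synthesizers (e.g., CEGIS with SAT/SMT) are far more sophisticated.

    # Minimal sketch: enumerative program synthesis from examples.
    import itertools

    # Tiny grammar: expressions over one integer variable x.
    LEAVES = ["x", "0", "1", "2"]
    OPS = ["+", "-", "*"]

    def expressions(depth):
        # Enumerate expressions of bounded depth, smaller ones first.
        if depth == 0:
            yield from LEAVES
            return
        yield from expressions(depth - 1)
        for op, a, b in itertools.product(OPS, expressions(depth - 1),
                                          expressions(depth - 1)):
            yield f"({a} {op} {b})"

    def synthesize(examples, max_depth=2):
        # Return the first (hence a smallest) expression consistent
        # with every input/output example.
        for expr in expressions(max_depth):
            if all(eval(expr, {"x": x}) == y for x, y in examples):
                return expr
        return None

    # Find a program consistent with f(1)=3, f(2)=5, f(3)=7.
    print(synthesize([(1, 3), (2, 5), (3, 7)]))  # e.g. (x + (x + 1))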

Topics covered:

- Theory of program synthesis: version spaces, counter-example guided inductive synthesis (CEGIS) with SAT/SMT, synthesis from noisy examples, learning with few examples, compositional synthesis, lower bounds on learning.

- Applications of techniques: synthesis for end users (e.g., spreadsheets), data analytics and financial computing, interpretable machine learning models for structured data.

- Combining neural networks and synthesis

Part II: Robustness of Deep Learning
-----------------------------------------

Deep learning methods based on neural networks have made impressive advances in recent years. A fundamental challenge with these models is understanding what the trained neural network has actually learned: for example, how stable/robust the network is to slight variations of the input (e.g., an image or a video), and how easy it is to fool the network into misclassifying obvious inputs.
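As a concrete instance of fooling a model, here is a minimal numpy sketch of the fast gradient sign method (FGSM) against a toy logistic classifier; the weights, input, and step size are invented for illustration, and the course material itself focuses on SMT-based techniques rather than this gradient attack.

    # Minimal sketch: an adversarial example via the fast gradient
    # sign method (FGSM) against a toy logistic classifier.
    import numpy as np

    w = np.array([1.5, -2.0, 0.5])   # toy "trained" weights (invented)
    b = 0.1

    def predict(x):
        return 1.0 / (1.0 + np.exp(-(w @ x + b)))  # P(class = 1)

    x = np.array([0.4, -0.3, 0.2])   # correctly classified as class 1
    y = 1.0

    # Gradient of the cross-entropy loss w.r.t. the *input* x.
    grad_x = (predict(x) - y) * w

    # FGSM: a small step that maximally increases the loss.
    eps = 0.4
    x_adv = x + eps * np.sign(grad_x)

    # The prediction flips from class 1 (approx. 0.80) to class 0 (approx. 0.45).
    print(predict(x), predict(x_adv))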

Topics covered:

- Basics of neural networks: fully connected, convolutional networks, residual networks, activation functions

- Finding adversarial examples in deep learning with SMT

- Methods and tools to guarantee robustness of deep nets (e.g., via affine arithmetic, SMT solvers); synthesis of robustness specs


Part III: Probabilistic Programming
--------------------------------------

Probabilistic programming is an emerging direction whose goal is to democratize the construction of probabilistic models. In probabilistic programming, the user specifies a model while inference is left to the underlying solver. The idea is that the higher level of abstraction makes it easier to express, understand and reason about probabilistic models.
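To illustrate the "specify a model, leave inference to the solver" idea in a language-agnostic way, here is a minimal Python sketch of rejection-sampling inference for a biased-coin model; the model and observation are invented, and a real system such as WebPPL or PSI supplies far more efficient inference engines.

    # Minimal sketch of the probabilistic-programming idea in plain
    # Python: the user writes the generative model; inference (here,
    # naive rejection sampling) is left to a generic routine.
    import random

    def model():
        # Prior: unknown coin bias, uniform on [0, 1].
        bias = random.random()
        # Likelihood: 10 flips of the coin.
        flips = [random.random() < bias for _ in range(10)]
        return bias, flips

    observed_heads = 8   # invented observation

    def rejection_sample(n_samples=100_000):
        accepted = []
        for _ in range(n_samples):
            bias, flips = model()
            if sum(flips) == observed_heads:   # condition on the data
                accepted.append(bias)
        return accepted

    posterior = rejection_sample()
    # Posterior mean of Beta(9, 3), i.e. approx. 9/12 = 0.75.
    print(sum(posterior) / len(posterior))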

Topics covered:

- Inference: MCMC samplers and tactics (approximate), symbolic inference (exact).

- Semantics: basic measure-theoretic semantics of probability; bridging measure theory and symbolic inference.

- Frameworks and languages: WebPPL (MIT/Stanford), PSI (ETH), Picture/Venture (MIT), Anglican (Oxford).

- Synthesis for probabilistic programs: this connects to Part I

- Applications of probabilistic programming: using the above solvers for reasoning about bias in machine learning models (connects to Part II), reasoning about computer networks, security protocols, approximate computing, cognitive models, rational agents.