Search result: Catalogue data in Autumn Semester 2024
Space Systems Master
Deep Track Courses: At least 20 credits must be completed within the deep track courses. Surplus credit points can be counted towards the electives.
Robotics
Deep Track Robotics: These courses can be credited either as a specialization subject or as an elective subject.
Number | Title | Type | ECTS | Hours | Lecturers
---|---|---|---|---|---
151-0323-00L | Hands-on Self-Driving Cars with Duckietown. Number of participants limited to 30. Note: until HS20 this course was titled "Autonomous Mobility on Demand: From Car to Fleet". | W+ | 4 credits | 4G | M. Di Cicco
Abstract | This course is a hands-on introduction to self-driving cars using the Duckietown platform. Each student is given a mobile wheeled robot, which they must configure and program throughout the class.
Learning objective | This course covers the basics of modeling, perception, planning, control, and learning for autonomous systems. The focus is on learning the foundational elements of a robotics platform and understanding how these components integrate and interact. The objective of the class is to provide students with a practical understanding of what it takes to design and operate an autonomous mobile system, from a single unit up to a full fleet of robotic systems. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Content | Perception, planning, modeling, and control, relying primarily on vision data.
Lecture notes | Lecture notes, primarily in the form of slides and tutorials, will be accessible from Moodle. Additional materials can also be accessed from the EdX MOOC called "Self-driving cars with Duckietown". | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Literature | Course notes will be provided in an electronic form. These are some books that can be used to provide background information or consulted as references: (1) Siegwart, Nourbakhsh, Scaramuzza - Introduction to Autonomous Mobile Robots; (2) Russell, Norvig - Artificial Intelligence: A Modern Approach; (3) Corke - Robotics, Vision and Control; (4) Siciliano, Khatib - Springer Handbook of Robotics.
Prerequisites / Notice | Students should have taken a basic course in probability theory, computer vision, and control systems. It is crucial that they are not only familiar but also comfortable with programming (Python), Linux, GIT utilization, and the Robot Operating System (ROS), as these tools will be fundamental throughout the course. A shared space will be available to work with the robots. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
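For readers unfamiliar with the Duckietown stack, a minimal sketch of the kind of lane-following controller such a robot runs is shown below. It is illustrative only: the function name, gains, and the (d, phi) error convention are assumptions, not taken from the course materials, which use Python and ROS on the actual robots.

```python
# Illustrative only: a minimal proportional lane-following controller of the
# kind a Duckiebot pipeline might use. Gains, names, and the (d, phi) error
# convention are hypothetical, not taken from the course materials.

def lane_controller(d, phi, v_nominal=0.2, k_d=-3.0, k_phi=-1.5):
    """Map lateral offset d [m] and heading error phi [rad] to (v, omega).

    d   > 0 : robot is to the left of the lane center
    phi > 0 : robot is rotated counter-clockwise w.r.t. the lane direction
    """
    v = v_nominal                  # keep a constant forward speed
    omega = k_d * d + k_phi * phi  # steer back towards the lane center
    return v, omega

if __name__ == "__main__":
    # Example: robot 5 cm left of center, heading 0.1 rad off
    print(lane_controller(0.05, 0.1))
```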
151-0325-00L | Planning and Decision Making for Autonomous Robots | W+ | 4 credits | 2V + 1U | E. Frazzoli | |||||||||||||||||||||||||||||||||||||||||||||||||||||||
Abstract | Planning safe and efficient motions for robots in complex environments, often shared with humans and other robots, is a difficult problem combining discrete and continuous mathematics, as well as probabilistic, game-theoretic, and ethical/regulatory aspects. This course will cover the algorithmic foundations of motion planning, with an eye to real-world implementation issues. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Learning objective | The students will learn how to design and implement state-of-the-art algorithms for planning the motion of robots executing challenging tasks in complex environments. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Content | Discrete planning, shortest path problems. Planning under uncertainty. Game-theoretic planning. Geometric Representations. Steering methods. Configuration space and collision checking. Potential and Navigation functions. Grids, lattices, visibility graphs. Mathematical Programming. Sampling-based methods. Planning with limited information. Multi-agent Planning. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Lecture notes | Course notes and other education material will be provided for free in an electronic form. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Literature | There is no required textbook, but an excellent reference is Steve Lavalle's book on "Planning Algorithms." | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Prerequisites / Notice | Students should have taken basic courses in optimization, control systems, probability theory, and should be familiar with modern programming languages and practices (e.g., Python, and/or C/C++). Previous exposure to robotic systems is a definite advantage. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
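As a concrete instance of the "discrete planning, shortest path problems" topic listed above, here is a minimal Dijkstra search on a 4-connected grid. It is a generic textbook sketch, not course material; the grid encoding and function name are assumptions.

```python
# Illustrative only: discrete planning via Dijkstra's algorithm on a
# 4-connected grid, one of the shortest-path building blocks of motion planning.
import heapq

def dijkstra_grid(grid, start, goal):
    """grid: 2D list, 0 = free, 1 = obstacle; start/goal: (row, col) tuples.
    Returns the cost of the shortest 4-connected path, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    pq = [(0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return None

if __name__ == "__main__":
    grid = [[0, 0, 0],
            [1, 1, 0],
            [0, 0, 0]]
    print(dijkstra_grid(grid, (0, 0), (2, 0)))  # -> 6
```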
151-0371-00L | Advanced Model Predictive Control Number of participants limited to 60. | W+ | 4 credits | 2V + 1U | M. Zeilinger, A. Carron, L. Hewing, J. Köhler | |||||||||||||||||||||||||||||||||||||||||||||||||||||||
Abstract | Model predictive control (MPC) has established itself as a powerful control technique for complex systems under state and input constraints. This course discusses the theory and application of recent advanced MPC concepts, focusing on system uncertainties and safety, as well as data-driven formulations and learning-based control. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Learning objective | Design, implement and analyze advanced MPC formulations for robust and stochastic uncertainty descriptions, in particular with data-driven formulations. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Content | Topics include - Nominal MPC for uncertain systems (nominal robustness) - Robust MPC - Stochastic MPC - Review of regression methods - Set-membership Identification and robust data-driven MPC - Bayesian regression and stochastic data-driven MPC - MPC as safety filter for reinforcement learning | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Lecture notes | Lecture notes will be provided. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Prerequisites / Notice | Basic courses in control, advanced course in optimal control, basic MPC course (e.g. 151-0660-00L Model Predictive Control in Spring Semester) strongly recommended. Background in linear algebra and stochastic systems recommended. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
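For orientation, the nominal finite-horizon MPC problem that the advanced formulations above build on can be written as follows; the notation is generic, and the course's robust and stochastic variants modify the cost and constraints.

```latex
\[
\begin{aligned}
\min_{u_0,\dots,u_{N-1}} \quad & \sum_{k=0}^{N-1} \ell(x_k,u_k) \;+\; V_f(x_N) \\
\text{s.t.} \quad & x_{k+1} = f(x_k,u_k), \quad k = 0,\dots,N-1, \\
& x_k \in \mathcal{X}, \quad u_k \in \mathcal{U}, \quad x_N \in \mathcal{X}_f, \qquad x_0 = x(t).
\end{aligned}
\]
```

Robust MPC, for example, requires the state and input constraints to hold for all admissible disturbances, typically via constraint tightening.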
151-0563-01L | Dynamic Programming and Optimal Control | W+ | 4 credits | 2V + 1U | R. D'Andrea | |||||||||||||||||||||||||||||||||||||||||||||||||||||||
Abstract | Introduction to Dynamic Programming and Optimal Control. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Learning objective | Covers the fundamental concepts of Dynamic Programming & Optimal Control. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Content | Dynamic Programming Algorithm; Deterministic Systems and Shortest Path Problems; Infinite Horizon Problems, Bellman Equation; Deterministic Continuous-Time Optimal Control. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Literature | Dynamic Programming and Optimal Control by Dimitri P. Bertsekas, Vol. I, 3rd edition, 2005, 558 pages, hardcover. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Prerequisites / Notice | Requirements: Knowledge of advanced calculus, introductory probability theory, and matrix-vector algebra. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
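As a reference point for the "Infinite Horizon Problems, Bellman Equation" item above, the discounted infinite-horizon Bellman equation in Bertsekas-style notation reads as follows; the symbols g, f, and alpha denote the standard stage cost, dynamics, and discount factor and are not copied from the lecture.

```latex
\[
J^{*}(x) \;=\; \min_{u \in U(x)} \; \mathbb{E}_{w}\!\left[\, g(x,u,w) \;+\; \alpha\, J^{*}\!\big(f(x,u,w)\big) \right], \qquad 0 < \alpha < 1 .
\]
```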
151-0615-00L | Real-World Robotics - A Hands-On Project Class Registration is only possible up to 18.09.2024. Students must also complete a Google Form (https://forms.gle/pnMHTCdZwgdawb519) by 18.09.2024 to be considered for the course. Registered students and students on the waiting list will all be considered based on their submitted Google Forms. | W+ | 4 credits | 9A | R. Katzschmann | |||||||||||||||||||||||||||||||||||||||||||||||||||||||
Abstract | During this course, the students will develop an articulated robotic hand to solve a real-world robotic challenge: the robot must autonomously grasp and place objects. The students will learn the key theoretical concepts required to model, manufacture, control, and test their robot, alongside developing machine learning, programming, hardware, and engineering skills through the hands-on project. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Learning objective | * Learning Objective 1: High-Level System Design System and product design combined with requirement generation and verification are essential for this robotics project. The students will apply previously acquired system design knowledge and methods to a hands-on challenge. * Learning Objective 2: Robot Design and Simulation Students will gain experience implementing and simulating robotic systems using modern design, modeling, and simulation techniques such as CAD and Isaac Gym. These techniques are essential in any design process to understand the expected system behavior. This requires a thorough understanding of the system’s kinematics, dynamics, material, actuation principle, and physical limitations. Students will learn the theory and limitations behind modeling and simulation software. * Learning Objective 3: Robot Fabrication Students will learn to use the previously designed CAD models for successful robot fabrication. Additionally, the iterative nature of the process will allow them to develop their critical thinking skills in assessing the limitations of their design and possible sources for improvements. Building the robot will equip students with essential skills for using robots in the real world. * Learning Objective 4: Control, Integration, and Testing Students can apply the knowledge acquired in their control and machine learning courses. They will gain theoretical knowledge on modeling and developing intelligent control algorithms. They will be taught perception methods and state-of-the-art machine-learning techniques. They will gain experience testing their robots’ performance in both hard and software to enhance their design and suggest future improvements. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Content | During this course, the students will be divided into teams, and each group will independently develop an articulated robotic arm to solve a real-world robotic challenge. The students will learn the key theoretical concepts required to model, manufacture, control, and test a robot and develop programming, machine learning, hardware, and engineering skills through hands-on workshops. The students will compete in a real-world robotic challenge that takes place at the end of the course. This course is composed of tutorials, which will be available on the course website, where the lecturer will provide all the necessary theoretical input, focus talks where robotic experts will present a particular aspect of the manipulator in detail, and workshops where the students will have the possibility to hands-on learn how to implement the solutions required to solve their challenge. Finally, there will be time slots to autonomously work on the manufacturing and developing the team's robot. An online forum will be available to help the students throughout the course. This course is divided into six parts: Part 1: Challenge introduction - Identify the functional requirements necessary for the final challenge - Evaluate the existing manipulator designs to optimize them for the specific task Part 2: Robot Design - Develop a CAD model based on the high-level system design. - Integrate motors, pneumatics components, and other required materials in the design Part 3: Robot Fabrication - Come up with a fabrication method and plan using the presented fabrication skills. - Fabricate the robot and its actuators based on the CAD model. - Evaluate, modify, and enhance the fabrication approach. Part 4: Soft Robot Simulation - Simulate the soft manipulator through a simulation framework - Optimize the simulation parameters to reflect the experimental setup Part 5: Control, Integration, and Testing - Formulate the dynamic skills needed for real-life application. - Develop traditional and learning-based control algorithms and test them in simulation. - Integrate controller design into the fabricated robot. - Build, test, fail, and repeat until the soft robot works as desired in simple tasks. - Upgrade and validate the robot for performance in real-world conditions and verify requirements. Part 6: Product development - Understand the challenges associated with the manufacturing process to bring the robot from a prototype to the final product - Optimize the robot for production | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Lecture notes | All class materials, including slides, video tutorials, and supporting literature, can be found on the class webpage (https://rwr.ethz.ch) and Moodle, supported by discussion and Q&A forums. Focus talks, Q&A sessions, and workshops will happen on Mondays between 14:00 and 16:00. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Literature | 1) Toshimitsu, Y., Forrai, B., Cangan, B. G., Steger, U., Knecht, M., Weirich, S., & Katzschmann, R. K. (2023, December). Getting the Ball Rolling: Learning a Dexterous Policy for a Biomimetic Tendon-Driven Hand with Rolling Contact Joints. In 2023 IEEE-RAS 22nd International Conference on Humanoid Robots (Humanoids) (pp. 1-7). IEEE. 2) Liconti, D., Toshimitsu, Y., & Katzschmann, R. (2024). Leveraging Pretrained Latent Representations for Few-Shot Imitation Learning on a Dexterous Robotic Hand. arXiv preprint arXiv:2404.16483. 3) Egli, J., Forrai, B., Buchner, T., Su, J., Chen, X., & Katzschmann, R. K. (2024). Sensorized Soft Skin for Dexterous Robotic Hands. arXiv preprint arXiv:2404.19448. 4) Yasa, O., Toshimitsu, Y., Michelis, M. Y., Jones, L. S., Filippi, M., Buchner, T., & Katzschmann, R. K. (2023). An overview of soft robotics. Annual Review of Control, Robotics, and Autonomous Systems, 6, 1-29. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Prerequisites / Notice | Students are expected to have attended introductory courses in dynamics, control systems, and robotics. Registration for this course is limited due to the amount of resources needed to make this course happen. For this reason, it is required to apply through the following module: https://forms.gle/pnMHTCdZwgdawb519 The course's graded semester performance consists of the final team performance in the class challenge, a final team presentation and report, weekly Moodle quizzes, and attendance at the focus talks and workshops. Focus talks, Q&A, and workshops are Mondays from 14:00 to 16:00. Focus talks + Q&A will be from 14:00 to 15:00 in CLA E4. Workshops will be from 15:00 to 16:00 in CLA E32. The student teams can work every day at any time on their projects in the course room CLA E32. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
151-0632-00L | Vision Algorithms for Mobile Robotics (University of Zurich) No enrolment to this course at ETH Zurich. Book the corresponding module directly at UZH as an incoming student. UZH Module Code: DINF2039 Mind the enrolment deadlines at UZH: https://www.uzh.ch/cmsssl/en/studies/application/deadlines.html | W+ | 6 credits | 2V + 2U | D. Scaramuzza | |||||||||||||||||||||||||||||||||||||||||||||||||||||||
Abstract | For a robot to be autonomous, it has to perceive and understand the world around it. This course introduces you to the key computer vision algorithms used in mobile robotics, such as feature extraction, structure from motion, dense reconstruction, tracking, image retrieval, event-based vision, and visual-inertial odometry (the algorithms behind Hololens, Oculus Quest, and the NASA Mars rovers). | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Learning objective | Learn the fundamental computer vision algorithms used in mobile robotics, in particular: filtering, feature extraction, structure from motion, multiple view geometry, dense reconstruction, tracking, image retrieval, event-based vision, and visual-inertial odometry and Simultaneous Localization And Mapping (SLAM) (the algorithms behind Hololens, Facebook-Oculus Quest, and the NASA Mars rovers). | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Content | Each lecture will be followed by a lab session where you will learn to implement a building block of a visual odometry algorithm in Matlab. By the end of the course, you will integrate all these building blocks into a working visual odometry algorithm. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Lecture notes | Lecture slides will be made available on the course official website: http://rpg.ifi.uzh.ch/teaching.html | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Literature | [1] Computer Vision: Algorithms and Applications, by Richard Szeliski, Springer, 2010. [2] Robotics Vision and Control: Fundamental Algorithms, by Peter Corke 2011. [3] An Invitation to 3D Vision, by Y. Ma, S. Soatto, J. Kosecka, S.S. Sastry. [4] Multiple view Geometry, by R. Hartley and A. Zisserman. [5] Introduction to autonomous mobile robots 2nd Edition, by R. Siegwart, I.R. Nourbakhsh, and D. Scaramuzza, February, 2011 | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Prerequisites / Notice | Fundamentals of algebra, geometry, matrix calculus, and Matlab programming. Note: If you are interested in taking UZH courses, you must register as an incoming mobility student at UZH. For details, see: UZH course enrollment for ETH students at the University of Zurich (UZH) > Mobility within Switzerland – Incoming > Module Mobility: The easiest way to take individual modules/courses to supplement your studies at your home university is with module mobility. This option is not available to students who have dropped out of their home university or have been definitively excluded or banned from the relevant program. > Application and Deadlines: Applications are submitted via the UZH application portal (https://www.uzh.ch/cmsssl/en/studies/application/chmobilityin.html). Step-by-step guidelines on how ETH students can register for this course are given on the official course website: https://rpg.ifi.uzh.ch/teaching.html ATTENTION: When you book the course at UZH, you are automatically registered for the exam at UZH; you can unregister until the October deadline. After registering for the course, you as an ETH student need to check your **UZH email account** to receive the related information from the lecturer.
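The course labs build a visual odometry pipeline in Matlab; purely as an illustration of one such building block, the sketch below triangulates a 3D point from two views with the linear (DLT) method, written here in Python/numpy. The function name and the synthetic camera setup are assumptions.

```python
# Illustrative only: linear (DLT) triangulation of one 3D point from two views,
# a typical building block of a visual odometry pipeline.
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """P1, P2: 3x4 projection matrices; x1, x2: matched pixel coordinates (u, v).
    Returns the 3D point X in the frame the projection matrices refer to."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X_h = Vt[-1]              # homogeneous solution = last right singular vector
    return X_h[:3] / X_h[3]   # dehomogenize

if __name__ == "__main__":
    K = np.array([[500.0, 0, 320], [0, 500, 240], [0, 0, 1]])
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])               # camera at origin
    P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])   # 1 m baseline
    X = np.array([0.2, 0.1, 4.0, 1.0])                              # ground-truth point
    x1 = (P1 @ X)[:2] / (P1 @ X)[2]
    x2 = (P2 @ X)[:2] / (P2 @ X)[2]
    print(triangulate_point(P1, P2, x1, x2))                        # ~ [0.2, 0.1, 4.0]
```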
151-0851-00L | Robot Dynamics | W+ | 4 credits | 2V + 2U | M. Hutter, R. Siegwart | |||||||||||||||||||||||||||||||||||||||||||||||||||||||
Abstract | We will provide an overview of how to kinematically and dynamically model typical robotic systems such as robot arms, legged robots, rotary-wing systems, and fixed-wing aircraft.
Learning objective | The primary objective of this course is for students to deepen their applied understanding of how to model the most common robotic systems. Students receive a solid background in kinematics, dynamics, and rotations of multi-body systems. On the basis of state-of-the-art applications, they will learn all the tools necessary to work in the field of design or control of robotic systems.
Content | The course consists of three parts: First, we will refresh and deepen the student's knowledge in kinematics, dynamics, and rotations of multi-body systems. In this context, the learning material will build upon the courses for mechanics and dynamics available at ETH, with the particular focus on their application to robotic systems. The goal is to foster the conceptual understanding of similarities and differences among the various types of robots. In the second part, we will apply the learned material to classical robotic arms as well as legged systems and discuss kinematic constraints and interaction forces. In the third part, focus is put on modeling fixed wing aircraft, along with related design and control concepts. In this context, we also touch aerodynamics and flight mechanics to an extent typically required in robotics. The last part finally covers different helicopter types, with a focus on quadrotors and the coaxial configuration which we see today in many UAV applications. Case studies on all main topics provide the link to real applications and to the state of the art in robotics. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Prerequisites / Notice | The contents of the following ETH Bachelor lectures or equivalent are assumed to be known: Mechanics and Dynamics, Control, Basics in Fluid Dynamics. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
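For context, the kinematic and dynamic models covered above commonly lead to equations of motion of the following form; the notation is the generic multi-body form and may differ from the lecture notes.

```latex
\[
M(q)\,\ddot{q} + b(q,\dot{q}) + g(q) \;=\; S^{\top}\tau + J_{c}^{\top}(q)\,F_{c},
\]
```

with mass matrix M, Coriolis and centrifugal terms b, gravity terms g, actuator selection matrix S, joint torques tau, contact Jacobian J_c, and contact forces F_c.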
151-1116-00L | Introduction to Aircraft and Car Aerodynamics | W+ | 4 credits | 3G | M. Immer, F. Schröder-Pernet | |||||||||||||||||||||||||||||||||||||||||||||||||||||||
Abstract | Aircraft aerodynamics: Atmosphere; aerodynamic forces (lift, drag); thrust. Vehicle aerodynamics: Aerodynamic and mass forces, drag, lift, car aerodynamics and performance. Passenger cars, trucks, racing cars.
Learning objective | An introduction to the basic principles and interrelationships of aircraft and automotive aerodynamics. To understand the basic relations behind the origin of aerodynamic forces (i.e., lift and drag). To quantify the aerodynamic forces for basic configurations of aircraft and car components. Illustration of the intrinsic problems and results using examples. Using experimental and theoretical methods to illustrate possibilities and limits.
Content | Aircraft aerodynamics: atmosphere; aerodynamic forces (lift: airfoils and wings; drag: parasitic and induced drag); thrust (overview of the propulsion system, aerodynamics of propellers); introduction to static longitudinal stability. Automobile aerodynamics: basic principles: aerodynamic and inertial forces, drag, propulsion, aerodynamic and driving performance. Passenger cars, commercial vehicles, racing cars.
Lecture notes | Preparation materials & slides are provided prior to each class | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Literature | Aircraft Aerodynamics: - Anderson Jr., John D.: Introduction to Flight, McGraw-Hill, 6th ed., 2007; ISBN: 9780073529394 - McCormick, B.W.: Aerodynamics, Aeronautics and Flight Mechanics, John Wiley and Sons, 1979 - Wilcox, David C.: Basic Fluid Mechanics, DCW Industries, Inc., 1997 - Schlichting, H. und Truckenbrodt, E.: Aerodynamik des Flugzeuges (Bd. I und II), Springer Verlag, 1960 - Abbott, I. and van Doenhoff, A.: Theory of Wing Sections, McGraw-Hill Book Company, Inc., 1949 - Hoerner, S.F.: Fluid Dynamic Drag, Hoerner Fluid Dynamics, 1951/1965 - Hoerner, S.F.: Fluid Dynamic Lift, Hoerner Fluid Dynamics, 1975 - Perkins, C.D. and Hage, R.E.: Airplane Performance, Stability and Control, John Wiley and Sons, 1949. Vehicle Aerodynamics: - Hucho, Wolf-Heinrich: Aerodynamics of Road Vehicles, SAE International, 1998 - Gillespie, Thomas D.: Fundamentals of Vehicle Dynamics, SAE, 1992 - Katz, Joseph: New Directions in Race Car Aerodynamics, Robert Bentley Publishers, 1995
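As a reminder of the basic relations behind the lift and drag discussed above (standard definitions, not specific to this course):

```latex
\[
L = \tfrac{1}{2}\rho V^{2} S\, C_{L}, \qquad
D = \tfrac{1}{2}\rho V^{2} S\, C_{D}, \qquad
C_{D} = C_{D,0} + \frac{C_{L}^{2}}{\pi e\, AR},
\]
```

with air density rho, airspeed V, reference area S, lift and drag coefficients C_L and C_D, zero-lift (parasitic) drag coefficient C_{D,0}, Oswald efficiency factor e, and wing aspect ratio AR.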
151-0623-00L | ETH Zurich Distinguished Seminar in Robotics, Systems and Controls | W+ | 1 credit | 1S | B. Nelson, M. Hutter, R. Katzschmann, C. Menon, R. Riener, R. Siegwart | |||||||||||||||||||||||||||||||||||||||||||||||||||||||
Abstract | This course consists of a series of seven lectures given by researchers who have distinguished themselves in the area of Robotics, Systems, and Controls. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Learning objective | Obtain an overview of various topics in Robotics, Systems, and Controls from leaders in the field. Please see Link for a list of upcoming lectures. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Content | This course consists of a series of seven lectures given by researchers who have distinguished themselves in the area of Robotics, Systems, and Controls. MSc students in Robotics, Systems, and Controls are required to attend every lecture. Attendance will be monitored. If for some reason a student cannot attend one of the lectures, the student must select another ETH or University of Zurich seminar related to the field and submit a one page description of the seminar topic. Please see Link for a suggestion of other lectures. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Prerequisites / Notice | Students are required to attend all seven lectures to obtain credit. If a student must miss a lecture then attendance at a related special lecture will be accepted that is reported in a one page summary of the attended lecture. No exceptions to this rule are allowed. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
227-0124-00L | Embedded Systems | W+ | 6 credits | 4G | M. Magno | |||||||||||||||||||||||||||||||||||||||||||||||||||||||
Abstract | An embedded system is a combination of hardware and software, either fixed in function or programmable, that is designed for a specific application scenario or for a specific task within a larger system. They are part of industrial machines such as agricultural and manufacturing equipment, automotive systems, medical equipment, household appliances, sensor networks, and the Internet of Things. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Learning objective | Understanding the specific requirements and problems that arise in embedded system applications. Understanding the hardware structure of a microcontroller and an embedded system; memory architecture and memory map, internal and external peripherals, low-power and low-energy design as well as instruction sets and computational accelerators. Understanding the firmware structure of a microcontroller and an embedded system; low-level instruction set, hardware-software interfaces, communication between components, embedded real-time operating systems, real-time scheduling, shared resources, low-power and low-energy programming as well as computational accelerators. Using formal models and methods for designing and optimizing embedded systems. Gaining experience with practical applications of the C programming language, embedded real-time operating systems, and debug functionalities of the associated design environment to design, implement, and verify embedded firmware. Through project-based activities, students will gain substantial experience in applying the C programming language in the context of embedded systems. Projects will involve developing and implementing firmware, utilizing embedded real-time operating systems, and exploring the debugging functionalities within design environments. This hands-on approach aims to bridge the gap between theoretical knowledge and practical application, allowing students to experience the full lifecycle of embedded system development from design to implementation and verification. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Content | This lecture focuses on the design of embedded systems using formal models and methods. Besides the theoretical lecture, the course contains laboratory sessions where students put the theory into practice by programming a microcontroller and interfacing it with sensors and actuators. Students will work with a commercial microcontroller and a development board extended with a custom-designed embedded-systems educational platform. Specifically, the following topics will be covered in the course: hardware and software structures of embedded systems, low-level instruction sets, memory architecture and memory map, peripherals, hardware-software interfaces, communication between components, firmware design methodologies, firmware design using the C programming language, embedded real-time operating systems, real-time scheduling, shared resources, and low-power and low-energy design as well as computational accelerators.
Lecture notes | Lecture material, publications, exercise sheets, and laboratory documentation will be available on the course's Moodle page. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Literature | Yifeng Zhu: Embedded Systems with Arm Cortex-M Microcontrollers in Assembly Language and C - Fourth Edition, E-Man Press LLC, ISBN: 978-0982692677, 2023 Giorgio C. Butazzo: Hard Real-Time Computing Systems. Predictable Scheduling Algorithms and Applications, Springer, ISBN 978-1-4614-3019-3, 2011 | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Prerequisites / Notice | Prerequisites: C programming, circuit theory, digital logic, binary number representations. Recommended: basic knowledge of assembly programming and computer architecture. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
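One classical result behind the "real-time scheduling" topic listed above is the Liu and Layland utilization bound for rate-monotonic scheduling of n independent periodic tasks; it is a sufficient, not necessary, schedulability test, and whether the course treats exactly this test is not stated here.

```latex
\[
U \;=\; \sum_{i=1}^{n} \frac{C_i}{T_i} \;\le\; n\left(2^{1/n} - 1\right),
\]
```

where C_i is the worst-case execution time and T_i the period of task i; the bound tends to ln 2, about 0.693, as n grows.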
227-0447-00L | Image Analysis and Computer Vision | W+ | 6 credits | 3V + 1U | E. Konukoglu, E. Erdil, F. Yu | |||||||||||||||||||||||||||||||||||||||||||||||||||||||
Abstract | Light and perception. Digital image formation. Image enhancement and feature extraction. Unitary transformations. Color and texture. Image segmentation. Motion extraction and tracking. 3D data extraction. Invariant features. Specific object recognition and object class recognition. Deep learning and Convolutional Neural Networks. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Learning objective | Overview of the most important concepts of image formation, perception and analysis, and Computer Vision. Gaining own experience through practical computer and programming exercises. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Content | This course aims at offering a self-contained account of computer vision and its underlying concepts, including the recent use of deep learning. The first part starts with an overview of existing and emerging applications that need computer vision. It shows that the realm of image processing is no longer restricted to the factory floor, but is entering several fields of our daily life. First the interaction of light with matter is considered. The most important hardware components such as cameras and illumination sources are also discussed. The course then turns to image discretization, necessary to process images by computer. The next part describes necessary pre-processing steps, that enhance image quality and/or detect specific features. Linear and non-linear filters are introduced for that purpose. The course will continue by analyzing procedures allowing to extract additional types of basic information from multiple images, with motion and 3D shape as two important examples. Finally, approaches for the recognition of specific objects as well as object classes will be discussed and analyzed. A major part at the end is devoted to deep learning and AI-based approaches to image analysis. Its main focus is on object recognition, but also other examples of image processing using deep neural nets are given. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Lecture notes | Course material: script, computer demonstrations, exercises and problem solutions.
Prerequisites / Notice | Prerequisites: Basic concepts of mathematical analysis and linear algebra. The computer exercises are based on Python and Linux. The course language is English. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
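As a small illustration of the linear filters introduced in the pre-processing part of the course (the exercises themselves are Python-based), the sketch below implements a "same"-size 2D convolution directly in numpy; the function name and the zero-padding choice are assumptions.

```python
# Illustrative only: a direct implementation of 2D convolution with a small
# kernel, the kind of linear filter used for image pre-processing.
import numpy as np

def convolve2d(image, kernel):
    """'Same'-size 2D convolution with zero padding (single-channel image)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    flipped = kernel[::-1, ::-1]                    # convolution flips the kernel
    padded = np.pad(image, ((ph, ph), (pw, pw)), mode="constant")
    out = np.zeros_like(image, dtype=float)
    for r in range(image.shape[0]):
        for c in range(image.shape[1]):
            out[r, c] = np.sum(padded[r:r + kh, c:c + kw] * flipped)
    return out

if __name__ == "__main__":
    img = np.zeros((5, 5))
    img[2, 2] = 1.0                                 # an impulse image
    box = np.ones((3, 3)) / 9.0                     # 3x3 box (mean) filter
    print(convolve2d(img, box))                     # impulse response = the kernel
```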
227-0560-00L | Computer Vision and Artificial Intelligence for Autonomous Cars Up until FS2022 offered as Deep Learning for Autonomous Driving | W+ | 6 credits | 3V + 2P | C. Sakaridis | |||||||||||||||||||||||||||||||||||||||||||||||||||||||
Abstract | This course introduces the core computer vision techniques and algorithms that autonomous cars use to perceive the semantics and geometry of their driving environment, localize themselves in it, and predict its dynamic evolution. Emphasis is placed on techniques tailored for real-world settings, such as multi-modal fusion, domain-adaptive and outlier-aware architectures, and multi-agent methods. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Learning objective | Students will learn about the fundamentals of autonomous cars and of the computer vision models and methods these cars use to analyze their environment and navigate themselves in it. Students will be presented with state-of-the-art representations and algorithms for semantic, geometric and temporal visual reasoning in automated driving and will gain hands-on experience in developing computer vision algorithms and architectures for solving such tasks. After completing this course, students will be able to: 1. understand the operating principles of visual sensors in autonomous cars 2. differentiate between the core architectural paradigms and components of modern visual perception models and describe their logic and the role of their parameters 3. systematically categorize the main visual tasks related to automated driving and understand the primary representations and algorithms which are used for solving them 4. critically analyze and evaluate current research in the area of computer vision for autonomous cars 5. practically reproduce state-of-the-art computer vision methods in automated driving 6. independently develop new models for visual perception | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Content | The content of the lectures consists in the following topics: 1. Fundamentals (a) Fundamentals of autonomous cars and their visual sensors (b) Fundamental computer vision architectures and algorithms for autonomous cars 2. Semantic perception (a) Semantic segmentation (b) Object detection (c) Instance segmentation and panoptic segmentation 3. Geometric perception and localization (a) Depth estimation (b) 3D reconstruction (c) Visual localization (d) Unimodal visual/lidar 3D object detection 4. Robust perception: multi-modal, multi-domain and multi-agent methods (a) Multi-modal 2D and 3D object detection (b) Visual grounding and verbo-visual fusion (c) Domain-adaptive and outlier-aware semantic perception (d) Vehicle-to-vehicle communication for perception 5. Temporal perception (a) Multiple object tracking (b) Motion prediction The practical projects involve implementing complex computer vision architectures and algorithms and applying them to real-world, multi-modal driving datasets. In particular, students will develop models and algorithms for: 1. Semantic segmentation and depth estimation 2. Sensor calibration for multi-modal 3D driving datasets 3. 3D object detection using lidars | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Lecture notes | Lecture slides are provided in PDF format. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Prerequisites / Notice | Students are expected to have a solid basic knowledge of linear algebra, multivariate calculus, and probability theory, and a basic background in computer vision and machine learning. All practical projects will require solid background in programming and will be based on Python and libraries of it such as PyTorch. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
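Purely as an illustration of how 2D object detectors such as those listed above are typically evaluated, the sketch below computes the intersection over union (IoU) of two axis-aligned boxes; it is a generic metric, not code from the course projects.

```python
# Illustrative only: intersection over union (IoU) of two axis-aligned 2D boxes,
# the standard matching criterion when evaluating object detectors.
def iou(box_a, box_b):
    """Boxes as (x_min, y_min, x_max, y_max). Returns IoU in [0, 1]."""
    ix_min, iy_min = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix_max, iy_max = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    iw, ih = max(0.0, ix_max - ix_min), max(0.0, iy_max - iy_min)
    inter = iw * ih
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

if __name__ == "__main__":
    print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # -> 1/7, about 0.143
```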
227-0689-00L | System Identification | W+ | 4 credits | 2V + 1U | R. Smith | |||||||||||||||||||||||||||||||||||||||||||||||||||||||
Abstract | Theory and techniques for the identification of dynamic models from experimentally obtained system input-output data. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Learning objective | To provide a series of practical techniques for the development of dynamical models from experimental data, with the emphasis being on the development of models suitable for feedback control design purposes. To provide sufficient theory to enable the practitioner to understand the trade-offs between model accuracy, data quality and data quantity. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Content | Introduction to modeling: Black-box and grey-box models; Parametric and non-parametric models; ARX, ARMAX (etc.) models. Predictive, open-loop, black-box identification methods. Time and frequency domain methods. Subspace identification methods. Optimal experimental design, Cramer-Rao bounds, input signal design. Parametric identification methods. On-line and batch approaches. Closed-loop identification strategies. Trade-off between controller performance and information available for identification. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Literature | "System Identification; Theory for the User" Lennart Ljung, Prentice Hall (2nd Ed), 1999. Additional papers will be available via the course Moodle. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Prerequisites / Notice | Control systems (227-0216-00L) or equivalent. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
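As a minimal illustration of the parametric identification methods listed above, the sketch below fits a first-order ARX model by least squares; the sign convention follows the usual ARX definition, and the function name and simulated data are assumptions, not course material.

```python
# Illustrative only: least-squares estimation of a first-order ARX model
#   y[k] = -a1*y[k-1] + b1*u[k-1] + e[k]
# from input-output data.
import numpy as np

def fit_arx_1(u, y):
    """Return (a1, b1) minimizing the one-step-ahead prediction error."""
    Phi = np.column_stack([-y[:-1], u[:-1]])   # regressors [-y[k-1], u[k-1]]
    theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
    return theta[0], theta[1]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a1_true, b1_true = -0.8, 0.5               # y[k] = 0.8*y[k-1] + 0.5*u[k-1] + e[k]
    u = rng.standard_normal(500)
    y = np.zeros(500)
    for k in range(1, 500):
        y[k] = -a1_true * y[k - 1] + b1_true * u[k - 1] + 0.01 * rng.standard_normal()
    print(fit_arx_1(u, y))                     # ~ (-0.8, 0.5)
```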
252-0535-00L | Advanced Machine Learning | W+ | 10 credits | 3V + 2U + 4A | J. M. Buhmann, C. Cotrini Jimenez | |||||||||||||||||||||||||||||||||||||||||||||||||||||||
Abstract | Machine learning algorithms provide analytical methods to search data sets for characteristic patterns. Typical tasks include the classification of data, function fitting and clustering, with applications in image and speech analysis, bioinformatics and exploratory data analysis. This course is accompanied by practical machine learning projects. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Learning objective | Students will be familiarized with advanced concepts and algorithms for supervised and unsupervised learning; reinforce the statistics knowledge which is indispensible to solve modeling problems under uncertainty. Key concepts are the generalization ability of algorithms and systematic approaches to modeling and regularization. Machine learning projects will provide an opportunity to test the machine learning algorithms on real world data. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Content | The theory of fundamental machine learning concepts is presented in the lecture, and illustrated with relevant applications. Students can deepen their understanding by solving both pen-and-paper and programming exercises, where they implement and apply famous algorithms to real-world data. Topics covered in the lecture include: Fundamentals (what is data?, Bayesian learning, computational learning theory); Supervised learning (ensembles: bagging and boosting; max-margin methods; neural networks); Unsupervised learning (dimensionality reduction techniques, clustering, mixture models, non-parametric density estimation, learning dynamical systems).
Lecture notes | No lecture notes, but slides will be made available on the course webpage. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Literature | C. Bishop. Pattern Recognition and Machine Learning. Springer 2007. R. Duda, P. Hart, and D. Stork. Pattern Classification. John Wiley & Sons, second edition, 2001. T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning: Data Mining, Inference and Prediction. Springer, 2001. L. Wasserman. All of Statistics: A Concise Course in Statistical Inference. Springer, 2004. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Prerequisites / Notice | The course requires solid basic knowledge in analysis, statistics and numerical methods for CSE as well as practical programming experience for solving assignments. Students should have followed at least "Introduction to Machine Learning" or an equivalent course offered by another institution. PhD students are required to obtain a passing grade in the course (4.0 or higher based on project and exam) to gain credit points. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
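As a small, generic illustration of the regularization theme mentioned in the learning objective above, the sketch below solves ridge regression in closed form; it is not taken from the course projects, and all names are assumptions.

```python
# Illustrative only: ridge regression, a basic example of regularization,
# solved in closed form with numpy.
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Return w minimizing ||X w - y||^2 + lam * ||w||^2."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 3))
    w_true = np.array([1.0, -2.0, 0.5])
    y = X @ w_true + 0.1 * rng.standard_normal(100)
    print(ridge_fit(X, y, lam=0.1))   # close to w_true; shrinks towards 0 as lam grows
```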
252-3110-00L | Human Computer Interaction | W+ | 8 credits | 3V + 2U + 2A | C. Holz, A. Wang | |||||||||||||||||||||||||||||||||||||||||||||||||||||||
Abstract | The course provides an introduction to the field of human-computer interaction and focuses on the role of the user in system design. Methods used to analyze the user experience will be introduced to show how they inform the design of new interfaces, systems, and technologies. Emerging methods and tools in computational interaction and optimization for UI design will also be introduced.
Learning objective | The goal of the course is for students to understand the principles of user-centered design and be able to apply these in practice. Another goal is to understand the basic notions of Computational Design in a HCI context. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Content | The course will introduce students to several methods of analysing the user experience, showing how these can be used at different stages of system development from requirements analysis through to usability testing. Students will get experience of designing and carrying out user studies as well as analysing results. The course will also cover the basic principles of interaction design. Practical exercises related to touch and gesture-based interaction will be used to reinforce the concepts introduced in the lecture. To get students to further think beyond traditional system design, we will discuss issues related to ambient information and awareness. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Lecture notes | All materials and details accessible through https://siplab.ethz.ch/courses/human_computer_interaction/2024 | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Literature | Will be provided as part of the course | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
263-5210-00L | Probabilistic Artificial Intelligence | W+ | 8 credits | 3V + 2U + 2A | A. Krause | |||||||||||||||||||||||||||||||||||||||||||||||||||||||
Abstract | This course introduces core modeling techniques and algorithms from machine learning, optimization and control for reasoning and decision making under uncertainty, and studies applications in areas such as robotics.
Learning objective | How can we build systems that perform well in uncertain environments? How can we develop systems that exhibit "intelligent" behavior, without prescribing explicit rules? How can we build systems that learn from experience in order to improve their performance? We will study core modeling techniques and algorithms from statistics, optimization, planning, and control and study applications in areas such as robotics. The course is designed for graduate students. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Content | Topics covered: - Probability - Probabilistic inference (variational inference, MCMC) - Bayesian learning (Gaussian processes, Bayesian deep learning) - Probabilistic planning (MDPs, POMDPs) - Multi-armed bandits and Bayesian optimization - Reinforcement learning
Prerequisites / Notice | Solid basic knowledge in statistics, algorithms and programming. The material covered in the course "Introduction to Machine Learning" is considered as a prerequisite. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
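As one concrete instance of the "multi-armed bandits" topic above, the sketch below implements the classical UCB1 index policy on simulated Bernoulli arms; the arm probabilities, horizon, and names are arbitrary choices, not course material.

```python
# Illustrative only: the UCB1 index policy for a multi-armed bandit.
import numpy as np

def ucb1(pull, n_arms, horizon):
    """pull(arm) -> reward in [0, 1]. Returns per-arm pull counts."""
    counts = np.zeros(n_arms)
    means = np.zeros(n_arms)
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1                                   # play each arm once
        else:
            bonus = np.sqrt(2.0 * np.log(t) / counts)     # optimism bonus
            arm = int(np.argmax(means + bonus))
        r = pull(arm)
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]      # running mean update
    return counts

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    p = [0.2, 0.5, 0.8]                                   # Bernoulli arm means
    counts = ucb1(lambda a: float(rng.random() < p[a]), 3, 2000)
    print(counts)                                         # most pulls on arm 2
```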
263-5902-00L | Computer Vision | W+ | 8 credits | 3V + 1U + 3A | M. Pollefeys, S. Tang | |||||||||||||||||||||||||||||||||||||||||||||||||||||||
Abstract | The goal of this course is to provide students with a good understanding of computer vision and image analysis techniques. The main concepts and techniques will be studied in depth and practical algorithms and approaches will be discussed and explored through the exercises. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Learning objective | The objectives of this course are: 1. To introduce the fundamental problems of computer vision. 2. To introduce the main concepts and techniques used to solve those. 3. To enable participants to implement solutions for reasonably complex problems. 4. To enable participants to make sense of the computer vision literature. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Content | Camera models and calibration, invariant features, Multiple-view geometry, Model fitting, Stereo Matching, Segmentation, 2D Shape matching, Shape from Silhouettes, Optical flow, Structure from motion, Tracking, Object recognition, Object category recognition | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Prerequisites / Notice | It is recommended that students have taken the Visual Computing lecture or a similar course introducing basic image processing concepts before taking this course. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
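As a minimal instance of the "model fitting" topic above, the sketch below fits a 2D line robustly with RANSAC; thresholds, iteration counts, and names are arbitrary assumptions, not course code.

```python
# Illustrative only: RANSAC for robust 2D line fitting.
import numpy as np

def ransac_line(points, n_iters=200, threshold=0.05, rng=None):
    """points: (N, 2) array. Returns (a, b, c) of the best line a*x + b*y + c = 0."""
    rng = np.random.default_rng() if rng is None else rng
    best_model, best_inliers = None, -1
    for _ in range(n_iters):
        p1, p2 = points[rng.choice(len(points), size=2, replace=False)]
        d = p2 - p1
        n = np.array([-d[1], d[0]])                  # line normal
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue                                 # degenerate sample
        n = n / norm
        c = -n @ p1
        dist = np.abs(points @ n + c)                # point-to-line distances
        inliers = int(np.sum(dist < threshold))
        if inliers > best_inliers:
            best_model, best_inliers = (n[0], n[1], c), inliers
    return best_model

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 100)
    line_pts = np.stack([x, 2 * x + 0.5 + 0.01 * rng.standard_normal(100)], axis=1)
    outliers = rng.uniform(-2, 2, size=(30, 2))
    print(ransac_line(np.vstack([line_pts, outliers]), rng=rng))
```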
376-1504-00L | Physical Human Robot Interaction (pHRI) | W+ | 4 credits | 2V + 2U | O. Lambercy, P. Wolf | |||||||||||||||||||||||||||||||||||||||||||||||||||||||
Abstract | This course focuses on the emerging, interdisciplinary field of physical human-robot interaction, bringing together themes from robotics, real-time control, human factors, haptics, virtual environments, interaction design and other fields to enable the development of human-oriented robotic systems. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Learning objective | The objective of this course is to give an introduction to the fundamentals of physical human robot interaction, through lectures on the underlying theoretical/mechatronics aspects and application fields, in combination with a hands-on lab tutorial. The course will guide students through the design and evaluation process of such systems. By the end of this course, you should understand the critical elements in human-robot interactions - both in terms of engineering and human factors - and use these to evaluate and de- sign safe and efficient assistive and rehabilitative robotic systems. Specifically, you should be able to: 1) identify critical human factors in physical human-robot interaction and use these to derive design requirements; 2) compare and select mechatronic components that optimally fulfill the defined design requirements; 3) derive a model of the device dynamics to guide and optimize the selection and integration of selected components into a functional system; 4) design control hardware and software and implement and test human-interactive control strategies on the physical setup; 5) characterize and optimize such systems using both engineering and psychophysical evaluation metrics; 6) investigate and optimize one aspect of the physical setup and convey and defend the gained insights in a technical presentation. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Content | This course provides an introduction to fundamental aspects of physical human-robot interaction. After an overview of human haptic, visual and auditory sensing, neurophysiology and psychophysics, principles of human-robot interaction systems (kinematics, mechanical transmissions, robot sensors and actuators used in these systems) will be introduced. Throughout the course, students will gain knowledge of interaction control strategies including impedance/admittance and force control, haptic rendering basics and issues in device design for humans such as transparency and stability analysis, safety hardware and procedures. The course is organized into lectures that aim to bring students up to speed with the basics of these systems, readings on classical and current topics in physical human-robot interaction, laboratory sessions and lab visits. Students will attend periodic laboratory sessions where they will implement the theoretical aspects learned during the lectures. Here the salient features of haptic device design will be identified and theoretical aspects will be implemented in a haptic system based on the haptic paddle (https://relab.ethz.ch/downloads/open-hardware/haptic-paddle.html), by creating simple dynamic haptic virtual environments and understanding the performance limitations and causes of instabilities (direct/virtual coupling, friction, damping, time delays, sampling rate, sensor quantization, etc.) during rendering of different mechanical properties. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Lecture notes | Will be distributed on Moodle before the lectures. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Literature | Abbott, J. and Okamura, A. (2005). Effects of position quantization and sampling rate on virtual-wall passivity. Robotics, IEEE Transactions on, 21(5):952 - 964. Adams, R. and Hannaford, B. (1999). Stable haptic interaction with virtual environments. Robotics and Automation, IEEE Transactions on, 15(3):465 - 474. Buerger, S. and Hogan, N. (2007). Complementary stability and loop shaping for improved human-robot interaction. Robotics, IEEE Transactions on, 23(2):232 - 244. Burdea, G. and Brooks, F. (1996). Force and touch feedback for virtual reality. John Wiley & Sons New York NY. Colgate, J. and Brown, J. (1994). Factors affecting the z-width of a haptic display. In Robotics and Automation, 1994. Proceedings., 1994 IEEE International Conference on, pages 3205 -3210 vol. 4. Diolaiti, N., Niemeyer, G., Barbagli, F., and Salisbury, J. (2006). Stability of haptic rendering: Discretization, quantization, time delay, and coulomb effects. Robotics, IEEE Transactions on, 22(2):256 - 268. Gillespie, R. and Cutkosky, M. (1996). Stable user-specific haptic rendering of the virtual wall. In Proceedings of the ASME International Mechanical Engineering Congress and Exhibition, volume 58, pages 397 - 406. Hannaford, B. and Ryu, J.-H. (2002). Time-domain passivity control of haptic interfaces. Robotics and Automation, IEEE Transactions on, 18(1):1 - 10. Hashtrudi-Zaad, K. and Salcudean, S. (2001). Analysis of control architectures for teleoperation systems with impedance/admittance master and slave manipulators. The International Journal of Robotics Research, 20(6):419. Hayward, V. and Astley, O. (1996). Performance measures for haptic interfaces. In ROBOTICS RESEARCH-INTERNATIONAL SYMPOSIUM, volume 7, pages 195-206. Citeseer. Hayward, V. and Maclean, K. (2007). Do it yourself haptics: part i. Robotics Automation Magazine, IEEE, 14(4):88 - 104. Leskovsky, P., Harders, M., and Szeekely, G. (2006). Assessing the fidelity of haptically rendered deformable objects. In Haptic Interfaces for Virtual Environment and Teleoperator Systems, 2006 14th Symposium on, pages 19 - 25. MacLean, K. and Hayward, V. (2008). Do it yourself haptics: Part ii [tutorial]. Robotics Automation Magazine, IEEE, 15(1):104 - 119. Mahvash, M. and Hayward, V. (2003). Passivity-based high-fidelity haptic rendering of contact. In Robotics and Automation, 2003. Proceedings. ICRA '03. IEEE International Conference on, volume 3, pages 3722 - 3728. Mehling, J., Colgate, J., and Peshkin, M. (2005). Increasing the impedance range of a haptic display by adding electrical damping. In Eurohaptics Conference, 2005 and Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, 2005. World Haptics 2005. First Joint, pages 257 - 262. Okamura, A., Richard, C., and Cutkosky, M. (2002). Feeling is believing: Using a force-feedback joystick to teach dynamic systems. JOURNAL OF ENGINEERING EDUCATION-WASHINGTON, 91(3):345 - 350. O'Malley, M. and Goldfarb, M. (2004). The effect of virtual surface stiffness on the haptic perception of detail. Mechatronics, IEEE/ASME Transactions on, 9(2):448 - 454. Richard, C. and Cutkosky, M. (2000). The effects of real and computer generated friction on human performance in a targeting task. In Proceedings of the ASME Dynamic Systems and Control Division, volume 69, page 2. Salisbury, K., Conti, F., and Barbagli, F. (2004). Haptic rendering: Introductory concepts. Computer Graphics and Applications, IEEE, 24(2):24 - 32. Weir, D., Colgate, J., and Peshkin, M. (2008). 
Measuring and increasing z-width with active electrical damping. In Haptic interfaces for virtual environment and teleoperator systems, 2008. haptics 2008. symposium on, pages 169 - 175. Yasrebi, N. and Constantinescu, D. (2008). Extending the z-width of a haptic device using acceleration feedback. Haptics: Perception, Devices and Scenarios, pages 157-162. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Prerequisites / Notice | Notice: registration is limited to 26 students. There are 4 credit points for this lecture. The lecture will be held in English. Students are expected to have basic control knowledge from previous classes. http://www.relab.ethz.ch/education/courses/phri.html
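As an illustration of the haptic rendering ideas covered in the content above, the sketch below simulates the classic spring-damper "virtual wall" at a fixed servo rate; on real hardware such as the haptic paddle the position would come from a sensor and the force would be commanded to the motor, and all gains and rates here are hypothetical.

```python
# Illustrative only: the classic "virtual wall" used in haptic rendering,
# simulated offline at a fixed sample rate with made-up gains.
def virtual_wall_force(x, v, x_wall=0.0, k=500.0, b=2.0):
    """Spring-damper wall: push back only while penetrating (x > x_wall)."""
    if x <= x_wall:
        return 0.0
    return -k * (x - x_wall) - b * v

if __name__ == "__main__":
    dt, m = 1e-3, 0.1                         # 1 kHz servo rate, 0.1 kg handle mass
    x, v = -0.02, 0.5                         # start 2 cm outside, moving into the wall
    for _ in range(300):
        f = virtual_wall_force(x, v)          # rendered force at this sample
        v += (f / m) * dt                     # simple Euler-type integration
        x += v * dt
    print(f"x = {x:.4f} m, v = {v:.3f} m/s")  # handle pushed back out of the wall
```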