Event-based Vision for Autonomous Driving
Collaborate with Volkswagen's research division to create a high-quality driving dataset that includes event-camera data
Keywords: event cameras, autonomous driving
Billions of dollars are spent each year to bring autonomous vehicles closer to reality. One of the remaining challenges is the design of reliable algorithms that work across a diverse set of environments and scenarios. At the core of this problem is the choice of sensor setup. Ideally, there is a certain redundancy in the setup, while each sensor also excels at a certain task. Sampling-based sensors (e.g. LIDAR and standard cameras) are today's essential building blocks of autonomous vehicles. However, they typically oversample far-away structure (e.g. a building 200 meters away) and undersample close structure (e.g. a fast bike crossing in front of the car), enforcing a trade-off between sampling frequency and computational budget. Unlike sampling-based sensors, event cameras capture changes in their field of view with precise timing and do not record redundant information. As a result, they are well suited to highly dynamic scenarios such as driving on roads. They also offer other benefits, such as a very high dynamic range unmatched by standard cameras.
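To illustrate the difference in data format: instead of fixed-rate frames, an event camera emits a sparse, asynchronous stream of per-pixel brightness changes, each with a pixel location, a fine-grained timestamp, and a polarity. The following is a minimal sketch; the field names and sample values are illustrative assumptions, not the output of any particular camera driver.

    import numpy as np

    # Illustrative sketch only: the dtype and sample values are assumptions for
    # exposition, not the output format of a specific event camera SDK.
    event_dtype = np.dtype([("x", np.uint16), ("y", np.uint16),
                            ("t", np.float64), ("polarity", np.int8)])

    # Hypothetical events triggered by a fast-moving object; timestamps are in
    # seconds with microsecond-level precision.
    events = np.array([(120, 45, 0.000100, 1),
                       (121, 45, 0.000105, 1),
                       (121, 46, 0.000112, -1)], dtype=event_dtype)

    # Static parts of the scene generate no events, so the data volume scales
    # with scene dynamics rather than with a fixed frame rate.
    duration_us = (events["t"][-1] - events["t"][0]) * 1e6
    print(f"{len(events)} events within {duration_us:.0f} microseconds")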
Event-based vision is a fast-growing field in need of high-quality datasets. In this project, we explore the utility of event cameras in an autonomous-driving scenario. To this end, a high-quality driving dataset will be created that incorporates not only common sensors such as standard cameras, GPS, IMU, and possibly LIDAR, but also state-of-the-art event cameras. You will collaborate with a research division of Volkswagen, combining expertise in event-based vision and autonomous driving to produce high-quality results (see the brief data-inspection sketch after the qualification lists). We are looking for a highly motivated student with the following minimum qualifications:
- Experience with programming microcontrollers or motivation to acquire it quickly
- Good coding skills in Python and C++
- At least one course in computer vision
- Strong work ethic
Preferred qualifications:
- Background in robotics and experience with ROS
- Experience with deep learning
- Experience with event-based vision
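For candidates unfamiliar with ROS: multi-sensor recordings of the kind described above are commonly stored as rosbag files. The sketch below shows how the messages of each sensor stream in a recorded drive could be counted; the topic names and bag filename are assumptions made up for illustration, not the project's actual recording layout.

    import rosbag  # part of a ROS 1 installation (e.g. ros-noetic)

    # Hypothetical topic names for the sensors mentioned above; the real
    # setup may differ.
    TOPICS = ["/camera/image_raw", "/dvs/events", "/imu/data", "/gps/fix"]

    counts = {topic: 0 for topic in TOPICS}
    with rosbag.Bag("driving_sequence.bag") as bag:  # placeholder filename
        for topic, msg, stamp in bag.read_messages(topics=TOPICS):
            counts[topic] += 1

    for topic, n in counts.items():
        print(f"{topic}: {n} messages")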
Mathias Gehrig (mgehrig at ifi.uzh.ch); Daniel Gehrig (dgehrig at ifi.uzh.ch)
Please include a CV and transcripts (Bachelor and Master) with your application.