Wide-Baseline Place Recognition for SLAM
Keywords: Aerial Systems: Perception and Autonomy; Visual-Based Navigation; SLAM; Localization; Place Recognition
Simultaneous Localisation And Mapping (SLAM) methods aim to automate robot navigation. Place recognition (or loop closure detection) is a key element, complementary to the sequential motion estimation performed in visual odometry/SLAM: it enables globally accurate maps, relocalization, and even collaboration between different robots performing SLAM (see image above).
With the aim of developing general and practical systems, the use of external tracking or positioning systems (e.g. GPS) is typically avoided. However, due to the inherent inaccuracy of real-world sensors, any odometry estimate inevitably accumulates drift and therefore introduces inconsistency into the estimated robot trajectory and map. Detecting when a robot returns to a previously visited place has long been known to provide useful cues for diminishing the effects of drift. Following such a loop detection in the robot's trajectory, non-linear optimization is typically employed over all established pose-to-pose and pose-to-feature SLAM constraints to propagate the accumulated error back through the rest of the SLAM graph.
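Conceptually, this drift correction can be sketched as a small pose-graph optimization. The toy example below is purely illustrative (2D positions only, equal weights, a linear least-squares solve); real SLAM back-ends such as the one targeted in this project optimize full 6-DoF poses and landmark constraints with non-linear solvers. Here, a single loop-closure edge redistributes the odometry error accumulated around a square trajectory:

```python
import numpy as np

# Hypothetical 2D example: a robot drives a unit square and returns to start.
# Each odometry edge is biased by +0.1 m in x, so dead reckoning drifts.
edges = [
    (0, 1, np.array([1.1, 0.0])),   # odometry: p1 - p0
    (1, 2, np.array([0.1, 1.0])),   # odometry: p2 - p1
    (2, 3, np.array([-0.9, 0.0])),  # odometry: p3 - p2
    (3, 4, np.array([0.1, -1.0])),  # odometry: p4 - p3
    (4, 0, np.array([0.0, 0.0])),   # loop closure: p0 - p4 (back at start)
]

# Dead reckoning (integrating odometry only) puts p4 at (0.4, 0.0), i.e.
# 0.4 m away from the true start. The pose graph fixes p0 at the origin and
# solves for p1..p4 so that all edges p_j - p_i = z_ij hold in a
# least-squares sense.
n = 4                      # unknown poses p1..p4, 2 coordinates each
A = np.zeros((2 * len(edges), 2 * n))
b = np.zeros(2 * len(edges))
for row, (i, j, z) in enumerate(edges):
    for c in range(2):     # x and y components
        r = 2 * row + c
        if i > 0:
            A[r, 2 * (i - 1) + c] = -1.0
        if j > 0:
            A[r, 2 * (j - 1) + c] = 1.0
        b[r] = z[c]
x, *_ = np.linalg.lstsq(A, b, rcond=None)
poses = x.reshape(n, 2)

print(poses[-1])  # optimized p4: ~[0.08, 0.0], vs. [0.4, 0.0] dead-reckoned
```

With the loop-closure edge included, the 0.4 m of accumulated error is spread evenly over the five edges instead of piling up at the trajectory's end, which is exactly the effect the optimization over SLAM constraints aims for at full scale.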
The goal of this project is to integrate an existing place recognition system, already proven to work well for images captured from very wide baselines (e.g. air-ground images), into a state-of-the-art keyframe-based visual-inertial odometry system to effectively correct the drift that accumulates over time during the estimation process. The resulting pipeline aims to enable robust relocalization and more accurate maps in applications where the robot revisits the scene from very different viewpoints or where ground and aerial robots work in collaboration.
- WP1: Familiarization with our existing visual-inertial odometry system and place recognition system implementation.
- WP2: Research into state-of-the-art bundle adjustment techniques.
- WP3: Integration of the place recognition system into a state-of-the-art visual-inertial odometry system.
- WP4: Implementation of bundle adjustment to effectively close the loop and correct the drift in the robot's trajectory and map.
- WP5: Experimentation and evaluation of this method in terms of runtime and accuracy of estimation.
- WP6: Evaluation of the method against state-of-the-art approaches and report writing.
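To illustrate how a place recognition module plugs into such a pipeline (WP3), the sketch below uses hypothetical global image descriptors compared by cosine similarity; the names and parameters are illustrative, and actual wide-baseline systems rely on learned or bag-of-words descriptors and add geometric verification before accepting a loop:

```python
import numpy as np

def detect_loop(descriptors, query_idx, min_gap=3, threshold=0.9):
    """Return the index of the best-matching earlier keyframe, or None.

    descriptors: one global image descriptor per keyframe.
    min_gap: skip the most recent keyframes to avoid trivial self-matches.
    threshold: minimum cosine similarity to accept a loop candidate.
    """
    query = descriptors[query_idx]
    best_idx, best_sim = None, threshold
    for i in range(query_idx - min_gap):
        sim = np.dot(query, descriptors[i]) / (
            np.linalg.norm(query) * np.linalg.norm(descriptors[i]))
        if sim > best_sim:
            best_idx, best_sim = i, sim
    return best_idx

# Toy descriptors: keyframe 5 looks like keyframe 0 -> loop detected.
D = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.7, 0.7, 0.0],
              [0.0, 0.7, 0.7],
              [0.99, 0.05, 0.0]])
print(detect_loop(D, 5))  # -> 0
```

In the integrated system, such a detection would trigger relative-pose estimation between the matched keyframes, adding the loop-closure constraint that the subsequent bundle adjustment (WP4) uses to correct the accumulated drift.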
The student taking this project needs to be highly motivated. The following skills are desirable:
- Strong interest in active perception and vision-based navigation.
- Strong analytical skills.
- Background in C++.
Your application must contain your most recent transcripts from your Bachelor's and Master's studies.
- Fabiola Maffra, fmaffra@mavt.ethz.ch
- Marco Karrer, marco.karrer@mavt.ethz.ch