Simultaneous Localization And Mapping (SLAM) is the task of moving through a previously unknown environment while mapping the robot's workspace and simultaneously estimating the robot's position within this map. By providing the robot with an understanding of its current environment, SLAM is a cornerstone of autonomous task execution. While state-of-the-art SLAM systems have reached considerable robustness and accuracy, a common assumption in SLAM is that the environment is completely static, leading to **serious estimation errors in dynamic environments**.
Since dynamic objects (e.g. cars, trams, or humans) are present in many real-world environments, such as streets or public places, the Vision for Robotics Lab (V4RL) has investigated dynamic SLAM over the past year, showing how an existing SLAM system [1] can be extended to deal with dynamic environments. This project should continue this research with the goal of further **improving the performance of SLAM in dynamic environments**, and of demonstrating the effectiveness of the proposed approach in different SLAM or Visual Odometry (VO) systems, e.g. [2].
[1] Qin et al.: Vins-mono: _A robust and versatile monocular visual-inertial state estimator_, T-RO 2018
[2] Leutenegger et al.: _Keyframe-based visual--inertial odometry using nonlinear optimization_, IJRR 2015
- WP1: Research into existing algorithms and systems for SLAM, Visual Odometry, and the handling of dynamic scenes in state estimation processes.
- WP2: Analysis of problems of existing approaches in handling dynamic environments in SLAM to identify problem cases.
- WP3: Adaptation of the current approach for dynamic environments with the goal of improving the performance of SLAM (accuracy and robustness).
- WP4: Performance analysis of the proposed approach.
- C++ programming experience
- Background knowledge in computer vision, non-linear optimization and 3D geometry is beneficial
Patrik Schmuck (pschmuck@ethz.ch)
Marco Karrer (marco.karrer@mavt.ethz.ch)