Towards Collaborative SLAM using Smartphones
The goal of this project is to investigate and improve the performance of visual-inertial odometry running on smartphones in order to boost the performance of collaborative SLAM with portable devices.
Simultaneous Localization And Mapping (**SLAM**) is the task of mapping a previously unknown environment while simultaneously estimating the robot's position within that map as it moves. By providing the robot with an understanding of its surroundings, SLAM is a cornerstone of autonomous task execution. While this challenge is one of the most studied in mobile robotics, recent research aims to take SLAM one step further, towards multiple robots performing the estimation process collaboratively as a team, opening up a wide range of possible applications such as industrial inspection or search-and-rescue missions. As a result of this research at V4RL, systems have been developed that allow **teams of multiple robots to perform SLAM in a collaborative manner** [1], [2].
While [1] and [2] were designed for mobile robots equipped with high-quality visual(-inertial) sensor suites, collaborative scene estimation also offers the opportunity for **collaboration among multiple human users equipped with smartphones**, with great potential impact on applications ranging from collaborative scene reconstruction of an area to collaborative AR/VR gaming. Recent research at the lab has shown promising results in this direction. However, smartphone platforms exhibit different constraints than mobile robots, such as less onboard computation power and sensors of lesser quality (e.g. rolling-shutter cameras), which have to be taken into account. Therefore, the goal of this project is to **investigate and improve the efficiency of the current visual-inertial odometry front-end for smartphones**, in order to boost the performance and applicability of the collaborative SLAM system on those phones.
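To make the front-end workload concrete, below is a minimal sketch of the kind of per-frame feature-tracking loop a visual(-inertial) odometry front-end typically runs. This is an illustrative assumption, not the lab's actual implementation: the use of OpenCV, the KLT tracker, and all parameter values are hypothetical.

```cpp
// Illustrative sketch of a typical VIO front-end tracking step.
// Assumption: an OpenCV-based KLT pipeline; the actual front-end may differ.
#include <opencv2/imgproc.hpp>
#include <opencv2/video/tracking.hpp>
#include <vector>

struct TrackResult {
    std::vector<cv::Point2f> points;  // feature locations in the current frame
    std::vector<uchar> status;        // 1 where the corresponding track survived
};

TrackResult trackFrame(const cv::Mat& prevGray, const cv::Mat& currGray,
                       const std::vector<cv::Point2f>& prevPoints) {
    TrackResult result;
    std::vector<float> err;
    // KLT optical flow: typically a dominant per-frame cost on smartphones,
    // and sensitive to the motion blur common with handheld capture.
    cv::calcOpticalFlowPyrLK(prevGray, currGray, prevPoints,
                             result.points, result.status, err);

    // Replenish features when too many tracks are lost (threshold is arbitrary).
    if (cv::countNonZero(result.status) < 100) {
        std::vector<cv::Point2f> fresh;
        cv::goodFeaturesToTrack(currGray, fresh, /*maxCorners=*/200,
                                /*qualityLevel=*/0.01, /*minDistance=*/10);
        result.points.insert(result.points.end(), fresh.begin(), fresh.end());
        result.status.insert(result.status.end(), fresh.size(), 1);
    }
    return result;
}
```

Each stage of such a loop (detection, tracking, outlier rejection) is a candidate bottleneck on a phone, which is precisely what WP2 below sets out to measure.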
[1] Schmuck and Chli: CCM-SLAM: Robust and efficient centralized collaborative monocular simultaneous localization and mapping for robotic teams, JFR 2018
[2] Karrer et al.: CVI-SLAM: Collaborative Visual-Inertial SLAM, RA-L 2018
- WP1: Research into existing algorithms and systems, e.g. [1],[2]
- WP2: Familiarization with the existing VIO front-end and the peculiarities of smartphone usage (such as increased motion blur), and investigation of performance bottlenecks (see the timing sketch after this list).
- WP3: Adaptation of the existing VIO system to the characteristics of smartphone platforms.
- WP4: Testing and evaluation of the proposed method.
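As a concrete starting point for the bottleneck analysis in WP2, a simple scoped timer such as the hypothetical `StageTimer` below can break per-frame cost down by front-end stage before any adaptation is attempted. The stage names and the `trackFrame` call in the usage comment refer to the illustrative sketch above, not to the existing system.

```cpp
// Hypothetical RAII timer for per-stage profiling of the front-end (WP2).
#include <chrono>
#include <cstdio>
#include <map>
#include <string>

class StageTimer {
public:
    explicit StageTimer(std::string name)
        : name_(std::move(name)), start_(std::chrono::steady_clock::now()) {}
    ~StageTimer() {
        const auto us = std::chrono::duration_cast<std::chrono::microseconds>(
                            std::chrono::steady_clock::now() - start_).count();
        totals()[name_] += us;  // accumulate across frames for a session summary
        std::printf("%s: %lld us this frame, %lld us total\n", name_.c_str(),
                    static_cast<long long>(us),
                    static_cast<long long>(totals()[name_]));
    }
    // Accumulated time per stage over the whole run.
    static std::map<std::string, long long>& totals() {
        static std::map<std::string, long long> t;
        return t;
    }
private:
    std::string name_;
    std::chrono::steady_clock::time_point start_;
};

// Usage inside the per-frame loop, one scope per stage:
//   { StageTimer t("feature_tracking"); trackFrame(prev, curr, prevPts); }
//   { StageTimer t("outlier_rejection"); /* e.g. a RANSAC step */ }
```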
- C++ programming experience
- Background knowledge in computer vision and/or 3D geometry is beneficial