Semantic segmentation: Scene segmentation and classification for vision-based autonomous landing site selection and hazard avoidance onboard a fixed-wing UAV
Design and train a semantic segmentation architecture that jointly performs segmentation and multi-class classification using an RGB and a thermal camera
Keywords: Computer vision, deep learning, semantic segmentation, segmentation, classification, scene understanding
ASL has been working on fixed-wing UAVs since 2007, more recently shifting its research focus from solar-powered UAVs to autonomous planes that can also operate beyond visual line-of-sight to the (safety) pilot. In the case of a motor failure, for example, it is important that the UAV is able to autonomously find a suitable landing spot and safely execute the landing approach. For this purpose, ASL has developed an autonomous landing site selection framework [1]. However, in this framework the segmentation and binary classification (grass, not grass) are based on classical methods (e.g. Canny edge detection, random forests). The goal of this project is therefore to develop and train a deep-learning-based semantic segmentation architecture that implicitly performs segmentation and multi-class classification (e.g. water, road, house, forest, grass) using imagery from an RGB-thermal camera rig.
Related Literature
[1] Hinzmann et al., “Free LSD: Prior-Free Visual Landing Site Detection for Autonomous Planes”, 2018
[2] Long et al., “Fully Convolutional Networks for Semantic Segmentation”, 2015
[3] Garcia-Garcia et al., “A Review on Deep Learning Techniques Applied to Semantic Segmentation”, 2017
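To make the task concrete, below is a minimal sketch of one possible fusion architecture in the spirit of fully convolutional networks [2]: two small encoders for the RGB and thermal streams whose features are concatenated and decoded into per-pixel multi-class logits. The framework (PyTorch), the layer sizes, the class count, and the name RGBThermalSegNet are illustrative assumptions, not a prescribed design.

```python
# Illustrative sketch only, not the project's actual architecture:
# a two-branch encoder-decoder that fuses RGB and thermal features
# and predicts per-pixel multi-class logits.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions; the first downsamples by a factor of 2."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1),
        nn.ReLU(inplace=True),
    )


class RGBThermalSegNet(nn.Module):
    def __init__(self, num_classes=5):  # e.g. water, road, house, forest, grass
        super().__init__()
        self.rgb_encoder = nn.Sequential(conv_block(3, 32), conv_block(32, 64))
        self.thermal_encoder = nn.Sequential(conv_block(1, 32), conv_block(32, 64))
        # Fuse by channel concatenation, then decode back to input resolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, num_classes, 1),  # per-pixel class logits
        )

    def forward(self, rgb, thermal):
        # Assumes the thermal frame is registered to the RGB frame.
        fused = torch.cat([self.rgb_encoder(rgb), self.thermal_encoder(thermal)], dim=1)
        return self.decoder(fused)


if __name__ == "__main__":
    model = RGBThermalSegNet(num_classes=5)
    rgb = torch.randn(1, 3, 256, 256)      # dummy RGB frame
    thermal = torch.randn(1, 1, 256, 256)  # dummy thermal frame
    logits = model(rgb, thermal)           # (1, 5, 256, 256)
    labels = logits.argmax(dim=1)          # per-pixel class map
    print(labels.shape)
```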
Work Packages
- Literature review on semantic segmentation
- Development of a ground truth data collection strategy: from simulation and/or real-world datasets
- Development of a deep learning architecture that takes advantage of both the thermal and visual (RGB) spectra
- Mode A: Semantic segmentation during high-altitude flights for landing site selection. Classification into e.g. water, road, house, forest, grass.
- Mode B: Semantic segmentation during low-altitude flight, when the landing site has already been selected but unmapped (e.g. smaller stones) or dynamic obstacles are detected in the live image stream. Based on these hazards, an existing local path planner recomputes a safer landing spot.
- Real-time deployment on the fixed-wing UAV (see the export sketch below)
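For the real-time deployment work package, one common route (an assumption here, not a project requirement) is to trace the trained network with TorchScript so it can be loaded from C++ (libtorch) on the onboard computer. The module name rgbt_segnet and the output file name are hypothetical.

```python
# Hedged deployment sketch: serialize the (hypothetical) RGBThermalSegNet
# from the architecture sketch above for loading from C++ via libtorch.
import torch

from rgbt_segnet import RGBThermalSegNet  # hypothetical module holding the sketch above

model = RGBThermalSegNet(num_classes=5).eval()
example = (torch.randn(1, 3, 256, 256),   # dummy RGB frame
           torch.randn(1, 1, 256, 256))   # dummy thermal frame
with torch.no_grad():
    traced = torch.jit.trace(model, example)  # records the forward pass
traced.save("rgbt_segnet.pt")  # loadable onboard via torch::jit::load(...)
```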
Requirements
- Courses and experience in computer vision and (deep) learning
- C++, Python
For more information, visit: https://docs.google.com/presentation/d/1DVAy-Jl4dDyL4uEeAYm6aD9b2YS_-oTauESIQ7nvhxE/edit?usp=sharing
Please send your CV and transcript of records to hitimo@ethz.ch and tstastny@ethz.ch