Opening doors ain’t easy; let’s use deep learning!
This project aims to tackle mobile manipulation with legged robots using deep reinforcement learning techniques and to develop combined locomotion and manipulation policies.
Keywords: Robotics, Mobile manipulation, Deep reinforcement learning, Deep learning, Computer Vision
We seek to extend the capabilities of our quadrupedal robot ANYmal to incorporate mobile manipulation. To this end, we have integrated a lightweight robotic manipulator with a three-finger gripper on the robot, and we are now pursuing ways of designing joint locomotion-manipulation policies based on neural networks.
Such capabilities would enable the robot to manipulate objects in its workspace and perform tasks such as opening doors and grasping objects. Classical approaches usually consist of a sophisticated computational pipeline that separates door/handle detection, arm control, and base locomotion into modules that operate independently of each other.
However, recent advances in deep reinforcement learning and computer vision offer very promising methods that combine perception and control. This ambitious project aims to apply such methods and show that it is possible to train ANYmal, equipped with an arm, to open doors end-to-end, from camera images to actuator commands.
Relevant publications:
- https://arxiv.org/abs/1502.05477
- https://arxiv.org/abs/1504.00702
- https://ieeexplore.ieee.org/document/7758092/
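The core idea behind the publications above can be illustrated with a toy REINFORCE policy gradient (a simpler relative of the trust-region methods cited above): a linear Gaussian policy learns to output an "actuator command" purely from a reward signal. Everything here (the one-dimensional task, the target value, all hyperparameters) is an illustrative assumption and is nowhere near the actual ANYmal setup.

```python
import random

# Toy REINFORCE sketch (illustrative assumptions only, not the real system):
# a linear Gaussian policy maps a constant observation to a 1-D "actuator
# command", and the reward is higher the closer the command is to a hidden target.
random.seed(0)

w, b = 0.0, 0.0       # policy parameters: mean action = w * obs + b
SIGMA = 0.3           # fixed exploration noise (std. dev. of the Gaussian policy)
ALPHA = 0.02          # learning rate
TARGET = 1.0          # hidden goal, discovered from reward alone
baseline = 0.0        # running-average reward baseline, reduces gradient variance

for episode in range(2000):
    obs = 1.0
    mean = w * obs + b
    action = random.gauss(mean, SIGMA)
    reward = -(action - TARGET) ** 2
    # REINFORCE: gradient of log N(action; mean, SIGMA) with respect to the mean
    g = (action - mean) / SIGMA ** 2
    w += ALPHA * (reward - baseline) * g * obs
    b += ALPHA * (reward - baseline) * g
    baseline += 0.05 * (reward - baseline)

print(round(w + b, 2))  # learned mean action for obs = 1.0; tends toward TARGET
```

In the project itself, the linear policy would be replaced by a deep network consuming camera images, and the scalar command by the robot's full actuator vector, but the gradient estimator has the same shape.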
1. Review the relevant literature on deep reinforcement learning and mobile manipulation
2. Set up the simulation environment for offline training of visuomotor policies
3. Develop a door-opening test-bed consisting of a real door and frame
4. Experimentally verify the methods developed in simulation
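For step 2, the simulation environment could expose a gym-style reset/step interface. The sketch below is purely hypothetical: `DoorEnv`, its one-degree-of-freedom dynamics, and the hand-coded controller standing in for a learned policy are all invented for illustration.

```python
# Hypothetical minimal environment interface for offline policy training.
# The "door" is a single damped rotational joint; names and dynamics are
# illustrative assumptions, not an existing simulator API.
class DoorEnv:
    """Toy 1-DoF door: the agent applies a torque and is rewarded for
    swinging the door toward the open position (angle = 1.0 rad)."""

    OPEN_ANGLE = 1.0
    DT = 0.05

    def reset(self):
        self.angle = 0.0        # door starts closed
        self.velocity = 0.0
        return self.angle

    def step(self, torque):
        # Euler-integrated dynamics with unit inertia and viscous friction.
        self.velocity += self.DT * (torque - 0.5 * self.velocity)
        self.angle += self.DT * self.velocity
        reward = -abs(self.angle - self.OPEN_ANGLE)
        done = abs(self.angle - self.OPEN_ANGLE) < 0.05
        return self.angle, reward, done

env = DoorEnv()
obs = env.reset()
done = False
for _ in range(200):
    # Hand-coded bang-bang controller standing in for a learned policy.
    obs, reward, done = env.step(1.0 if obs < DoorEnv.OPEN_ANGLE else -1.0)
    if done:
        break
print(done, round(obs, 2))
```

In the real project this interface would wrap a physics simulation of ANYmal, the arm, and the door, with camera images as observations instead of the joint angle, so that policies trained offline transfer to the test-bed of step 3.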
- Hands-on experience in deep learning and robotics
- Excellent working knowledge of C++
- Experience with TensorFlow is a plus
- Experience with ROS is a plus