Robotic Systems Lab
Open Opportunities

The remarkable agility of animals, characterized by rapid, fluid movements and precise interaction with their environment, serves as an inspiration for advances in legged robotics. Recent progress in the field has underscored the potential of learning-based methods for robot control. These methods streamline the development process by optimizing the control pipeline directly from sensory inputs to actuator outputs, often employing deep reinforcement learning (RL) algorithms. By training in simulated environments, these algorithms can develop locomotion skills that are subsequently transferred to physical robots. Although this approach has produced remarkably robust locomotion, replicating the wide range of agile capabilities observed in animals remains a major challenge. Manually crafted controllers have succeeded in reproducing complex behaviors, but their development is labor-intensive and demands deep expertise in each specific skill. Reinforcement learning offers a promising alternative by reducing the manual labor involved in controller development; however, crafting learning objectives that elicit the desired behaviors also requires considerable skill-specific expertise.
- Information, Computing and Communication Sciences
- Master Thesis
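The skill-specific expertise described above typically shows up in the design of the learning objective. As a minimal sketch, the reward for a velocity-tracking locomotion task is often a hand-weighted sum of a task term and regularization terms; all function names, terms, and weights below are illustrative, not the lab's actual formulation.

```python
import math

def locomotion_reward(lin_vel, cmd_vel, joint_torques, base_height,
                      w_track=1.0, w_torque=1e-4, w_height=0.5,
                      target_height=0.45):
    # Exponential tracking term: equals 1 when velocity matches the command.
    track = math.exp(-((lin_vel - cmd_vel) ** 2) / 0.25)
    # Torque penalty discourages energetically wasteful gaits.
    torque_pen = sum(t * t for t in joint_torques)
    # Height penalty keeps the base near a nominal standing height.
    height_pen = (base_height - target_height) ** 2
    return w_track * track - w_torque * torque_pen - w_height * height_pen

# Perfect tracking with zero penalties yields the full tracking weight.
r = locomotion_reward(lin_vel=1.0, cmd_vel=1.0,
                      joint_torques=[0.0] * 12, base_height=0.45)
```

Each weight and each term must be tuned per skill, which is exactly the manual effort the project descriptions aim to reduce.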
| Humanoid robots, designed to mimic the structure and behavior of humans, have seen significant advancements in kinematics, dynamics, and control systems. Teleoperation of humanoid robots involves complex control strategies to manage bipedal locomotion, balance, and interaction with environments. Research in this area has focused on developing robots that can perform tasks in environments designed for humans, from simple object manipulation to navigating complex terrains. Reinforcement learning has emerged as a powerful method for enabling robots to learn from interactions with their environment, improving their performance over time without explicit programming for every possible scenario. In the context of humanoid robotics and teleoperation, RL can be used to optimize control policies, adapt to new tasks, and improve the efficiency and safety of human-robot interactions. Key challenges include the high dimensionality of the action space, the need for safe exploration, and the transfer of learned skills across different tasks and environments. Integrating human motion tracking with reinforcement learning on humanoid robots represents a cutting-edge area of research. This approach involves using human motion data as input to train RL models, enabling the robot to learn more natural and human-like movements. The goal is to develop systems that can not only replicate human actions in real-time but also adapt and improve their responses over time through learning. Challenges in this area include ensuring real-time performance, dealing with the variability of human motion, and maintaining stability and safety of the humanoid robot.
- Information, Computing and Communication Sciences
- Master Thesis
| In recent years, advancements in reinforcement learning have achieved remarkable success in teaching robots discrete motor skills. However, this process often involves intricate reward structuring and extensive hyperparameter adjustments for each new skill, making it a time-consuming and complex endeavor. This project proposes the development of a skill generator operating within a continuous latent space. This innovative approach contrasts with the discrete skill learning methods currently prevalent in the field. By leveraging a continuous latent space, the skill generator aims to produce a diverse range of skills without the need for individualized reward designs and hyperparameter configurations for each skill. This method not only simplifies the skill generation process but also promises to enhance the adaptability and efficiency of skill learning in robotics. - Engineering and Technology, Information, Computing and Communication Sciences
- Master Thesis
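The contrast with discrete skill learning can be sketched as a latent-conditioned policy: one network receives a continuous latent vector z alongside the observation, so a new skill is a new point in latent space rather than a newly trained controller. The network here is an untrained random linear map standing in for a trained policy; all dimensions and names are illustrative.

```python
import random
random.seed(0)

LATENT_DIM, OBS_DIM, ACT_DIM = 4, 8, 3

# A tiny random linear "policy" standing in for a trained network.
W = [[random.uniform(-0.1, 0.1) for _ in range(OBS_DIM + LATENT_DIM)]
     for _ in range(ACT_DIM)]

def policy(obs, z):
    x = obs + z  # concatenate observation and skill latent
    return [sum(w_i * x_i for w_i, x_i in zip(row, x)) for row in W]

obs = [0.5] * OBS_DIM
skill_a = [random.gauss(0, 1) for _ in range(LATENT_DIM)]
skill_b = [random.gauss(0, 1) for _ in range(LATENT_DIM)]
# Different latents produce different actions from the same observation.
act_a, act_b = policy(obs, skill_a), policy(obs, skill_b)
```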
| In recent years, deep Reinforcement Learning (RL) for robotic motion policies has demonstrated impressive performance, yielding unprecedented robustness on real hardware. Current sim2real approaches rely on large-scale pre-training with domain randomization to make policies robust, but they struggle with high-dimensional spaces and are primarily limited by low sample efficiency. Leveraging first-order gradient information from differentiable simulators has shown great promise for improving sample efficiency. However, although promising simulation results exist, deployment on real hardware is rarely demonstrated. The goal of this thesis is to train quadrupedal locomotion policies in a differentiable simulation framework, and then enable real-world deployment by modifying the simulation, the policy training, or the learning algorithm. Ideally, we can leverage the properties of differentiable simulators in this process to improve sim2real transfer by fitting real data. - Intelligent Robotics, Robotics and Mechatronics
- Master Thesis, Semester Project
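The core idea behind the thesis above is that a differentiable simulator exposes gradients of a rollout loss with respect to policy parameters, so the policy can be improved by plain gradient descent instead of zeroth-order RL. As a toy stand-in for a real differentiable simulator, the sketch below backpropagates through a 1-D rollout using forward-mode dual numbers; the system, the linear policy, and all numbers are illustrative.

```python
class Dual:
    """Forward-mode dual number: value plus derivative w.r.t. the gain k."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        return Dual(self.val + o.val, self.dot + o.dot)
    def __mul__(self, o):
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)

def rollout_loss(k, steps=10, dt=0.1):
    """Differentiable rollout of x' = -k*x; returns (loss, dloss/dk)."""
    gain = Dual(k, 1.0)               # seed the derivative d/dk = 1
    x = Dual(1.0)                     # start one unit from the goal
    for _ in range(steps):
        a = Dual(-dt) * gain * x      # linear feedback action, scaled by dt
        x = x + a                     # one integration step of the "simulator"
    loss = x * x                      # squared final distance to the goal
    return loss.val, loss.dot

k, lr = 0.5, 0.5
loss0, _ = rollout_loss(k)
for _ in range(50):                   # first-order policy improvement
    loss, grad = rollout_loss(k)
    k -= lr * grad                    # gradient step through the simulator
```

Fifty gradient steps steadily increase the gain and shrink the final-distance loss, which is the sample-efficiency advantage the project hopes to carry onto hardware.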
| To navigate unpredictable off-road environments, autonomous robots require a tailored approach to overcoming obstacles and optimizing pathfinding. Our proposed terrain cost mapping system goes beyond traditional processing by factoring in each robot's specific kinematic abilities. We introduce a novel simulation-based Roll-Out technique to predict a robot's stability over varied terrain, thereby calculating a precise terrain cost. This strategy promises to enhance autonomous navigation by ensuring safe and efficient traversal tailored to the capabilities of the individual robot. - Intelligent Robotics
- Master Thesis, Semester Project
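The roll-out idea above can be sketched in a few lines: for each terrain cell, run several short simulated traversals and convert the failure rate into a cost. The "simulation" here is a deliberately crude stand-in (failure when a noisily perceived slope exceeds a robot-specific limit); all names, thresholds, and noise levels are illustrative assumptions.

```python
import random
random.seed(0)

def rollout_cost(cell_slope, max_slope, n_rollouts=20, noise=0.1):
    """Fraction of noisy roll-outs in which the robot loses stability."""
    failures = 0
    for _ in range(n_rollouts):
        perceived = cell_slope + random.gauss(0.0, noise)
        if perceived > max_slope:   # stand-in for a full stability simulation
            failures += 1
    return failures / n_rollouts

# Cost map over a 1-D strip of terrain for a robot limited to 0.4 rad slope.
terrain = [0.0, 0.1, 0.35, 0.6]
cost_map = [rollout_cost(s, max_slope=0.4) for s in terrain]
# Flat cells cost near 0; cells steeper than the limit cost near 1.
```

Swapping `max_slope` per platform is what makes the resulting cost map robot-specific, mirroring the kinematics-aware goal of the project.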
| This project aims to answer the unsolved question of how to guarantee (in a computationally efficient way) hard convex constraints on the output of a network when the parameters that define the constraints change. - Geometry, Intelligent Robotics, Optimisation
- Collaboration, Master Thesis, Semester Project
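One common way to guarantee hard convex constraints on a network output, sketched below for intuition, is to compose the network with a Euclidean projection onto the constraint set. For a box [lo, hi] (a simple convex set whose parameters may change at runtime) the projection reduces to a clip; general polytopes require solving a small QP, and doing that efficiently is the open question the project targets. All names here are illustrative.

```python
def project_to_box(y, lo, hi):
    """Euclidean projection of y onto the box defined by lo and hi."""
    return [min(max(yi, l), h) for yi, l, h in zip(y, lo, hi)]

def constrained_output(network_output, lo, hi):
    # The raw network output may violate the (changing) constraints;
    # the projection layer never does, regardless of the parameters.
    return project_to_box(network_output, lo, hi)

y = constrained_output([-2.0, 0.3, 5.0],
                       lo=[-1.0, -1.0, -1.0], hi=[1.0, 1.0, 1.0])
# y == [-1.0, 0.3, 1.0]: every component satisfies the box constraint.
```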
| This project aims to build an intuitive mixed reality (MR) interface for robot operators using Apple's ARKit capabilities. Building on an existing iOS application, we want to provide the operators with a more immersive and informative interface. This involves showing the robot's sensory readings, sketching a global path for it, and playing back the robot's states in the application.
- Intelligent Robotics, Virtual Reality and Related Simulation
- Bachelor Thesis, Semester Project
| Model-based state estimation for locomotion has significant drawbacks, especially in complex contact scenarios. At the same time, locomotion controllers are evolving and now purposely use knee contacts or wheel slippage for advanced motions. Current model-based state estimation techniques often cannot supply sufficiently accurate observations for these controllers, leading to major estimation drift and thus potential failures. In this project, we aim to leverage learning-based methods not only for locomotion control but also for state estimation. Preliminary work shows that training a state estimator by supervised learning on recorded simulation data can produce a viable solution. Furthermore, fusing such approaches with classical filtering theory opens a promising realm of research. The project should also compare the developed methods with existing approaches on real hardware. If time permits, we are interested in learning state estimation and locomotion jointly. - Intelligent Robotics, Robotics and Mechatronics
- Master Thesis, Semester Project
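The fusion of learned estimation with classical filtering mentioned above can be sketched as a constant-gain 1-D filter that blends a motion-model prediction with a "learned" measurement. The learned part here is a fixed linear map standing in for a regressor trained on simulated (sensor, state) pairs; the gain, the sensor model, and all names are illustrative assumptions, not the project's actual design.

```python
def learned_measurement(raw_sensor):
    """Stand-in for a network trained on simulated (sensor, state) pairs."""
    return 0.5 * raw_sensor          # pretend regression: state = 0.5 * raw

def fuse(estimate, velocity, raw_sensor, dt=0.1, gain=0.3):
    predicted = estimate + velocity * dt        # model-based prediction step
    measured = learned_measurement(raw_sensor)  # learning-based observation
    # Constant-gain correction, as in a steady-state Kalman filter.
    return predicted + gain * (measured - predicted)

x = 0.0
for raw in [2.2, 2.4, 2.1, 2.3]:     # raw readings; learned estimate is ~1.1
    x = fuse(x, velocity=0.0, raw_sensor=raw)
# x moves from 0 toward the learned estimate with each fusion step.
```

Replacing the stand-in regressor with a trained network, and the constant gain with a full covariance update, is roughly the design space the project proposes to explore.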
| This project addresses the task of 6D pose estimation for general-purpose objects, particularly when dealing with occlusion. We aim to leverage recent deep learning methods and synthetic data generation schemes to enable robust object manipulation. - Intelligent Robotics
- Master Thesis, Semester Project
|
Transport of packages of various dimensions is often cited as one of the most viable use cases for autonomous mobile robots. The ability to autonomously pick up and self-load a package is, however, a functionality that many systems still lack. Preliminary work showed that quadrupedal robots have the potential to execute this skill by manipulating payloads with their legs or main body. In this project, we aim to investigate how legged and wheeled-legged robots can achieve autonomous package pick-and-load tasks through practical design modifications and clever maneuvers.
- Intelligent Robotics, Mechanical Engineering, Robotics and Mechatronics
- Master Thesis, Semester Project