Egocentric Hand-Tool Pose Estimation during Surgical Tool Usage
The goal of this thesis is to create a system to perform Hand-Tool Pose Estimation during Surgical Tool Usage. We are looking for a motivated student who is passionate about applying machine learning and computer vision methods and working with a multi-camera setup in a surgical setting.
To improve the support offered by mixed reality applications during surgical procedures, process monitoring systems can be used to recognize distinct activities performed by the surgeon. Among the most critical interactions to recognize are those with surgical tools – a key element of any surgery that must be detected correctly and robustly to infer the surgical workflow being performed. The goal of this thesis is to develop hand-tool pose estimation for surgical tools during spinal fusion surgery from egocentric videos.
The project will utilize multiple ZED RGB-D cameras and consist of the following tasks:
System Integration:
- Synchronized recording of external ZED 2i RGB-D cameras
- Point cloud fusion from multiple depth-images
- Color image and point cloud segmentation
- Hand and tool pose ground truth extraction
Benchmarking SOTA Egocentric Hand-Tool Pose Methods
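To give a feel for the point cloud fusion task above, here is a minimal NumPy sketch of back-projecting two depth images into a common world frame and merging them into one cloud. The intrinsics, image size, and extrinsics are placeholder values for illustration, not actual ZED 2i calibration; in practice these would come from the ZED SDK and multi-camera calibration, and a library such as Open3D would handle the back-projection and downsampling.

```python
import numpy as np

# Assumed pinhole intrinsics for illustration only (not real ZED 2i calibration).
W, H = 8, 6
fx = fy = 5.0
cx, cy = W / 2.0, H / 2.0

def depth_to_points(depth, T_cam_to_world):
    """Back-project a metric depth image into world coordinates (N x 3)."""
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    z = depth.ravel()
    x = (u.ravel() - cx) * z / fx
    y = (v.ravel() - cy) * z / fy
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=1)  # homogeneous coords
    return (pts_cam @ T_cam_to_world.T)[:, :3]

# Two synthetic depth maps standing in for two synchronized ZED frames.
depth_a = np.full((H, W), 1.0, dtype=np.float32)
depth_b = np.full((H, W), 1.0, dtype=np.float32)

# Second camera translated 0.1 m along x; real extrinsics come from calibration.
T_a = np.eye(4)
T_b = np.eye(4)
T_b[0, 3] = 0.1

# Fuse both views into a single cloud in the shared world frame.
fused = np.vstack([depth_to_points(depth_a, T_a), depth_to_points(depth_b, T_b)])
print(fused.shape)  # one merged cloud: 2 * H * W points -> (96, 3)
```

In a real pipeline the fused cloud would additionally be voxel-downsampled to merge overlapping points from the different cameras before segmentation and ground-truth extraction.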
... a very autonomous and methodical way of working. You know how to structure a project, how to derive meaningful work packages and how to systematically develop solutions. … Previous project experience with computer vision and machine learning (PyTorch, OpenCV, Open3D, ...)
As part of our research at the AR Lab within the Human Behavior Group, we work on automatically analyzing a user's interaction with their environment in scenarios such as surgery or industrial machine operation. By collecting real-world datasets in these scenarios and using them for machine learning tasks such as activity recognition, object pose estimation, or image segmentation, we can gain an understanding of how a user performed a given task. We can then use this information to provide the user with real-time feedback via mixed reality devices, such as the Microsoft HoloLens, that guide them through the task and help prevent mistakes.