In this project, we want to learn human-like robotic grasping skills in a reinforcement learning setting. The goal is to leverage visual information about human-object interactions to guide the learning process.
Goal and motivation for robotic grasping In this project, we want to learn a policy for dexterous robotic grasping using human-object contact supervision. Learning to grasp and manipulate objects is an essential skill for robots assisting in daily life, with tasks such as organizing household items, passing items to people, and handling kitchen appliances. Reinforcement learning offers a potential route to grasping skills that are robust and run in real time. However, learned policies rarely resemble human grasping unless trained with human demonstrations, and such demonstrations are costly to obtain and typically ignore the contact information between hands and objects.
Limitation of existing work Learning from sparse rewards in robotic grasping tasks is challenging due to the large continuous exploration space. Since a reward is provided only upon task completion, the robot must stumble upon the full solution through random exploration before it can learn anything. In a pick-and-place setting, for example, the robot must find and grasp the object and then move it to a target location by randomly trying all possible action sequences.
In this project, we would like to leverage human hand-grasp contact supervision to learn a reward function that guides the robot toward learning a grasping task more efficiently. Moreover, this guidance should also yield more natural grasping behavior.
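One way such guidance could be realized (a minimal sketch, assuming a vision model provides predicted human contact points on the object surface; the function name and parameters are illustrative, not part of the project specification) is to add a dense shaping term to the sparse task reward that rewards fingertip placement near human-preferred contact regions:

```python
import numpy as np

def shaped_reward(task_success, fingertip_positions, contact_points,
                  sigma=0.02, w=0.1):
    """Sparse task reward plus a dense contact-shaping term.

    task_success: bool, True once the object is grasped/placed (sparse signal).
    fingertip_positions: (F, 3) array of robot fingertip positions.
    contact_points: (C, 3) array of contact locations predicted from
        human-object interaction data (assumed to come from a vision model).
    sigma: length scale (m) of the contact-proximity kernel.
    w: weight of the shaping term relative to the sparse reward.
    """
    sparse = 1.0 if task_success else 0.0
    # Distance from each fingertip to its nearest predicted contact point.
    dists = np.linalg.norm(
        fingertip_positions[:, None, :] - contact_points[None, :, :], axis=-1
    )
    nearest = dists.min(axis=1)
    # Gaussian kernel: the shaping term approaches w * F as the fingertips
    # approach human-preferred contact regions, giving a dense signal
    # long before the sparse task reward is triggered.
    shaping = w * np.exp(-nearest**2 / (2 * sigma**2)).sum()
    return sparse + shaping
```

Because the shaping term is bounded and small relative to the sparse reward, it biases exploration toward human-like contacts without dominating the task objective.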
Tasks In particular, the student will 1) Implement a reinforcement learning environment for robotic grasping in a physics engine (MuJoCo); 2) Explore a method for integrating vision-based grasp information into a reward function; 3) Extend the learning setup with the explored data-driven, vision-based guidance and analyze whether more efficient learning and natural grasps of objects can be achieved.
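For task 1, the environment would typically follow the standard Gymnasium-style reset/step interface. The sketch below uses placeholder dynamics and state layout (both are assumptions, not a given design); a real implementation would load a MuJoCo model and advance it with `mujoco.mj_step`:

```python
import numpy as np

class GraspEnvSketch:
    """Skeleton of a grasping environment with a Gymnasium-style API.

    The physics here is a placeholder; a real implementation would load a
    MuJoCo model (mujoco.MjModel.from_xml_path(...)) and step it with
    mujoco.mj_step, reading fingertip and object poses from mujoco.MjData.
    """

    def __init__(self, horizon=200):
        self.horizon = horizon
        self.t = 0
        # Placeholder state: 7 joint angles + object height (layout assumed).
        self.state = np.zeros(8)

    def reset(self, seed=None):
        rng = np.random.default_rng(seed)
        self.t = 0
        self.state = rng.uniform(-0.1, 0.1, size=8)
        return self.state.copy(), {}

    def step(self, action):
        # Placeholder dynamics: integrate joint commands. Real code would
        # write the action to data.ctrl and call mujoco.mj_step.
        self.state[:7] += 0.01 * np.asarray(action)[:7]
        self.t += 1
        lifted = self.state[7] > 0.1          # sparse success criterion
        reward = 1.0 if lifted else 0.0       # task 3 would add shaping here
        terminated = lifted
        truncated = self.t >= self.horizon
        return self.state.copy(), reward, terminated, truncated, {}
```

Task 3 would then replace the sparse reward in `step` with one augmented by the vision-based contact guidance explored in task 2.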
Requirements We are looking for independent and highly motivated students who 1) Have taken a recognized deep learning or modern computer vision course (preferably Machine Perception); 2) Are skilled in Python and PyTorch.
Optional Prior experience with physics engines is beneficial.
The projects are research-oriented, and we encourage students to submit their results to top-tier computer vision conferences. We work closely with students during their projects, and a master thesis is a great introduction to PhD positions in our lab.
Reference Dexterous Robotic Grasping with Object-Centric Visual Affordances: https://arxiv.org/pdf/2009.01439.pdf