Learning to Pour Using Warped Features

If robots are to perform everyday tasks, they have to work with a wide variety of objects. A robot can be trained on a single object using demonstration and reinforcement learning, but training it from scratch for every object would take far too much time. Robots therefore have to adapt and generalise their knowledge so that they can perform the same tasks with other objects. This thesis introduces an approach to generalising such knowledge for a pouring task using high-level features. The high-level features depend on the shapes and parts of objects and are computed directly from those parts, so the parts of an object first have to be identified. In this thesis, object meshes were created using a 3D scanner and the marching cubes algorithm. By labelling the vertices of a mesh to obtain the object parts, the high-level features can be computed directly from the labelled vertices. A source mesh was first labelled manually to avoid an initial error; other objects were then labelled by warping the already labelled mesh onto them. To test the high-level features, experiments were run in a simulator based on the Bullet physics engine, which integrates a fluid simulation from another project. In the first experiment, a classifier was trained on one object to identify fluid particles that will increase the fluid volume in a target container. In the second experiment, another classifier was trained on one object to identify when a container starts to pour. Both classifiers were then applied to other objects to see how well the high-level features generalise. In both experiments, the learned classifiers were able to generalise between objects and produced good results.
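
As a rough illustration of the idea of computing high-level features directly from labelled vertices, the following minimal sketch (hypothetical, not the thesis code) computes one simple part-based feature, the height of the lowest vertex of the container opening, assuming the mesh vertices are given as a NumPy array and each vertex carries a part label such as the made-up label "rim":

    # Hypothetical sketch: a part-based high-level feature from a labelled mesh.
    import numpy as np

    def lowest_rim_height(vertices: np.ndarray, labels: np.ndarray) -> float:
        """Return the z-coordinate of the lowest vertex labelled as 'rim'.

        vertices: (N, 3) array of mesh vertex positions in world coordinates.
        labels:   (N,) array of part labels, one label per vertex.
        """
        rim_vertices = vertices[labels == "rim"]
        return float(rim_vertices[:, 2].min())

    # Example usage with toy data (three vertices, two of them on the rim).
    vertices = np.array([[0.0, 0.0, 0.10],
                         [0.0, 0.1, 0.12],
                         [0.0, 0.0, 0.00]])
    labels = np.array(["rim", "rim", "body"])
    print(lowest_rim_height(vertices, labels))  # -> 0.1

Because such a feature is defined relative to a labelled part rather than to a specific mesh, it can be evaluated in the same way on any object whose vertices have been labelled, which is what allows the learned classifiers to transfer between objects.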