Carolina Bianchi

Extracting contact surfaces from point-cloud data for autonomous placing of rigid objects.

Abstract

Nowadays, thousands of human workers engage daily in non-creative and physically demanding tasks such as order picking and placing. This task consists of collecting different goods from warehouse shelves and placing them in a container to fulfill an order. The robotics research community has put much effort into automating the picking problem for robotic manipulators. The placing problem, however, has received comparatively little attention.

A robot tasked with placing a grasped object has to choose a suitable pose and motion to release the item inside a container that may be partially filled with other goods. The aim of this thesis is to develop and evaluate a system that automates the placing of objects in a container whose contents are perceived through an RGB-D camera. To accomplish this goal, we developed a perception module that takes the RGB-D data as input, estimates the volume of the objects inside the container, and extracts horizontal surfaces as potential supporting regions for the object. We integrated this module with a state-of-the-art placement planner to compute placement poses for the grasped object that are stable and reachable by the robot amid the perceived obstacles.
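To make the surface-extraction step concrete, the following is a minimal sketch (not the thesis implementation) of how near-horizontal supporting patches could be recovered from a point cloud with a simple RANSAC plane fit. It assumes the cloud is expressed in a frame whose z-axis points up; all names and thresholds are illustrative.

```python
import numpy as np

def fit_plane(p1, p2, p3):
    """Plane (n, d) through three points, with n . x + d = 0."""
    n = np.cross(p2 - p1, p3 - p1)
    norm = np.linalg.norm(n)
    if norm < 1e-9:
        return None  # degenerate (collinear) sample
    n = n / norm
    return n, -np.dot(n, p1)

def extract_horizontal_surface(points, dist_thresh=0.005,
                               max_tilt_deg=10.0, iterations=500, rng=None):
    """Return indices of the largest near-horizontal planar patch found."""
    rng = rng or np.random.default_rng(0)
    cos_tilt = np.cos(np.radians(max_tilt_deg))
    best_inliers = np.array([], dtype=int)
    for _ in range(iterations):
        idx = rng.choice(len(points), size=3, replace=False)
        plane = fit_plane(*points[idx])
        if plane is None:
            continue
        n, d = plane
        # keep only planes whose normal is close to the vertical axis
        if abs(n[2]) < cos_tilt:
            continue
        dist = np.abs(points @ n + d)
        inliers = np.nonzero(dist < dist_thresh)[0]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers

if __name__ == "__main__":
    # Synthetic example: a flat box top at z = 0.10 m plus scattered clutter.
    rng = np.random.default_rng(1)
    top = np.column_stack([rng.uniform(0.0, 0.2, 2000),
                           rng.uniform(0.0, 0.3, 2000),
                           0.10 + rng.normal(0.0, 0.001, 2000)])
    clutter = rng.uniform([0, 0, 0], [0.2, 0.3, 0.2], size=(1000, 3))
    cloud = np.vstack([top, clutter])
    surface = extract_horizontal_surface(cloud)
    print(f"found horizontal patch with {len(surface)} points "
          f"at mean height {cloud[surface, 2].mean():.3f} m")
```

In practice a perception pipeline of this kind would typically run such a segmentation on the obstacle cloud after removing the container walls, keeping only patches large enough to support the footprint of the grasped object.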

We evaluated the system by manually reproducing the computed placements in different test scenarios. Our experiments confirm that the developed pipeline can automatically compute feasible and stable placement poses for different objects in containers holding various goods, perceived through an RGB-D camera.