Recent work by Christian Dondrup and Marc Hanheide on how to convert qualitative descriptions of a human and a robot moving in a confined shared space into movement commands for a mobile robot has been accepted to the next RO-MAN conference in New York City.
This publication builds on previous work introducing a qualitative description of Human-Robot Spatial Interactions and is part of ongoing research in the STRANDS project (http://www.strands-project.eu). This particular work aims to shape Human-Robot Interaction with a mobile robot according to the preferences of its users. Being able to generate low-level movement commands for a mobile robot from a high-level qualitative description of the scene is an important step towards this final goal and presents a first approach to using Qualitative Spatial Relations for robot navigation.
In this work, we propose the combination of a state-of-the-art sampling-based local planner with so-called Velocity Costmaps to achieve human-aware robot navigation. Instead of introducing humans as “special obstacles” into the representation of the environment, we restrict the sample space of a “Dynamic Window Approach” local planner to only allow trajectories based on a qualitative description of the future unfolding of the encounter. To achieve this, we use a Bayesian temporal model based on a Qualitative Trajectory Calculus to represent the mutual navigation intent of human and robot, and translate these descriptors into sample space constraints for trajectory generation. We show how to learn these models from demonstration and evaluate our approach against standard Gaussian cost models in simulation and in the real world using a non-holonomic mobile robot. Our experiments show that our approach exceeds the performance and safety of the Gaussian models in pass-by and path-crossing situations.
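To illustrate the core idea of restricting a Dynamic Window Approach sample space by a qualitative relation, here is a minimal sketch, not the paper's actual implementation. It assumes a unicycle motion model and uses a single hypothetical QTC-style symbol for the robot ('-' = move towards the human, '0' = keep distance, '+' = move away); all function and variable names are illustrative:

```python
import math

def qtc_filter_samples(robot_pose, human_xy, samples, qtc_symbol, dt=0.5, eps=0.02):
    """Keep only the (v, w) velocity samples whose short-horizon rollout
    satisfies the desired QTC-style symbol for the robot relative to the
    human: '-' = approach, '0' = hold distance, '+' = recede.
    robot_pose = (x, y, theta); human_xy = (hx, hy)."""
    x, y, th = robot_pose
    hx, hy = human_xy
    d0 = math.hypot(hx - x, hy - y)  # current robot-human distance
    allowed = []
    for v, w in samples:
        # Forward-simulate the unicycle model over one horizon dt.
        if abs(w) < 1e-6:  # straight-line motion
            nx = x + v * dt * math.cos(th)
            ny = y + v * dt * math.sin(th)
        else:  # arc motion
            nx = x + v / w * (math.sin(th + w * dt) - math.sin(th))
            ny = y - v / w * (math.cos(th + w * dt) - math.cos(th))
        d1 = math.hypot(hx - nx, hy - ny)  # distance after rollout
        # Admit the sample only if the distance change matches the symbol.
        if qtc_symbol == '-' and d1 < d0 - eps:
            allowed.append((v, w))
        elif qtc_symbol == '0' and abs(d1 - d0) <= eps:
            allowed.append((v, w))
        elif qtc_symbol == '+' and d1 > d0 + eps:
            allowed.append((v, w))
    return allowed
```

For a robot at the origin facing a human two metres ahead, a forward velocity sample survives the '-' (approach) constraint, a zero sample survives '0', and a reversing sample survives '+'. The paper's approach additionally learns which QTC state to demand from demonstration data; this sketch only shows the final sample-space restriction step.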