“Are You Still With Me?” Discover our latest research published in Frontiers in Robotics and AI

We are pleased to announce that our paper titled “Are You Still With Me? Continuous Engagement Assessment From a Robot’s Point of View” by Francesco Del Duchetto, Dr. Paul Baxter, and Prof. Marc Hanheide has been published in the journal Frontiers in Robotics and AI. This work builds on the research produced during the long-term deployment of our robot Lindsey at The Collection museum as part of the “Lindsey – A Robot Tour Guide” project.

The goal of this work is to give robots the ability to understand users’ engagement levels during human-robot interactions. This capability is important for social robots because it allows them to plan their behavior so as to maintain a high level of engagement throughout the interaction. We provide a ready-to-use model that can predict, in real time from an egocentric camera, the engagement level of people interacting with the robot.
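To illustrate how such a scalar engagement estimate could drive behavior planning, here is a minimal sketch of a control-loop step. The threshold, function names, and recovery action are illustrative assumptions, not part of the published system:

```python
# Hypothetical sketch: adapting robot behaviour from a scalar engagement
# estimate. Names and the threshold value are assumptions for illustration.

ENGAGEMENT_THRESHOLD = 0.5  # assumed level below which we try to re-engage

def behaviour_step(predict_engagement, camera_frames, robot):
    """One control-loop step: estimate engagement, react if it drops."""
    engagement = predict_engagement(camera_frames)  # scalar, e.g. in [0, 1]
    if engagement < ENGAGEMENT_THRESHOLD:
        # Example recovery behaviour: address the visitors directly.
        robot.say("Are you still with me?")
    return engagement
```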

Lindsey, the tour guide robot deployed at The Collection museum.

The characterization of engagement is multi-faceted and has not yet been clearly specified. To overcome this inherent difficulty, we gathered a large number of videos from our robot’s head camera during interactions and asked a number of human coders to report their intuitive assessment of the engagement level while watching these videos. The data and annotations have been collected in the TOur GUide RObot (TOGURO) dataset. We then trained a deep regression model on the TOGURO dataset to predict a single scalar engagement value from a short sequence of video frames.
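To make the regression setup concrete, the following is a minimal sketch in PyTorch of a model mapping a short clip to one scalar value. The backbone choice, hidden size, and overall layout are assumptions for illustration; the paper’s actual architecture and hyperparameters may differ:

```python
import torch
import torch.nn as nn
from torchvision import models

class EngagementRegressor(nn.Module):
    """Sketch: per-frame CNN features pooled by an LSTM into one scalar."""

    def __init__(self, hidden_size=256):
        super().__init__()
        backbone = models.resnet18(weights=None)  # assumed backbone choice
        backbone.fc = nn.Identity()               # keep 512-d frame features
        self.backbone = backbone
        self.lstm = nn.LSTM(512, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, frames):
        # frames: (batch, time, 3, H, W), a short clip from the head camera
        b, t = frames.shape[:2]
        feats = self.backbone(frames.flatten(0, 1))  # (b*t, 512)
        feats = feats.view(b, t, -1)                 # (b, t, 512)
        _, (h, _) = self.lstm(feats)                 # final hidden state
        return self.head(h[-1]).squeeze(-1)          # (b,) engagement values
```

A model like this would be trained against the annotators’ scalar labels with a standard regression loss such as mean squared error.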

Engagement annotated values and our model’s predictions over a guided tour interaction sequence recorded from our robot’s head camera.

In our experiments, we show that the engagement model captures the annotators’ intuitive interpretation of engagement, and that it is generic enough to be applied successfully in scenarios completely different from our museum project: different environments, a different robot with a different camera, and different tasks and people.

This article is openly accessible at: https://doi.org/10.3389/frobt.2020.00116

Our ready-to-use engagement model is available at: https://github.com/LCAS/engagement_detector.
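As a rough illustration of how such a detector could be consumed in a ROS system, here is a small subscriber sketch. The topic name and message type are assumptions for illustration only; see the repository’s README for the actual interface:

```python
#!/usr/bin/env python
# Hypothetical ROS subscriber for a published engagement value. The topic
# name "/engagement_detector/value" is an assumption, not the confirmed
# interface of the LCAS/engagement_detector package.
import rospy
from std_msgs.msg import Float64

def on_engagement(msg):
    rospy.loginfo("Current engagement estimate: %.2f", msg.data)

if __name__ == "__main__":
    rospy.init_node("engagement_listener")
    rospy.Subscriber("/engagement_detector/value", Float64, on_engagement)
    rospy.spin()
```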