Collected with a Kinect and an Optris PI-450 thermal imager
This dataset contains RGB-D(epth)-T(hermal) images of different people acquired in the Lincoln Centre for Autonomous Systems (L-CAS) at the University of Lincoln, UK. Data were recorded into separate rosbag files, each containing RGB, depth, and thermal images of a person, together with 2D laser data. The dataset consists of four parts: i) a person standing still and turning around at 2m, 3.5m, and 5m; ii) a person walking freely; iii) a person sitting on a sofa while the robot observes the person from the front, the left side, and the right side; and iv) a person turning around while wearing a hat, glasses, and a scarf occluding different parts of his/her face.
This dataset provides:
- Robot Operating System (ROS) rosbags. Each rosbag contains about 1,000 continuous thermal, color, and depth images.
- Ground truth
- Calibration of RGB-D and thermal cameras
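Each rosbag interleaves thermal, colour, depth, and laser messages, so frames from the different sensors usually need to be paired by timestamp before use. The sketch below shows one simple way to do this with nearest-timestamp matching; it is illustrative only, and in practice the timestamp lists would be filled from `rosbag.Bag.read_messages()` (e.g. `msg.header.stamp.to_sec()`), with topic names taken from the actual bags.

```python
from bisect import bisect_left

def pair_by_timestamp(ref_stamps, other_stamps, max_dt=0.05):
    """Pair each reference timestamp with the nearest timestamp from
    another sensor stream, discarding pairs further apart than max_dt
    seconds. Both input lists must be sorted in ascending order."""
    pairs = []
    for t in ref_stamps:
        i = bisect_left(other_stamps, t)
        # The nearest candidate is at index i or i - 1.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(other_stamps)]
        if not candidates:
            continue
        j = min(candidates, key=lambda k: abs(other_stamps[k] - t))
        if abs(other_stamps[j] - t) <= max_dt:
            pairs.append((t, other_stamps[j]))
    return pairs

# Hypothetical RGB and thermal timestamps in seconds; real values would
# come from the message headers in the bag.
rgb_stamps = [0.000, 0.033, 0.066, 0.100]
thermal_stamps = [0.010, 0.045, 0.120]
print(pair_by_timestamp(rgb_stamps, thermal_stamps))
```

Because the streams run at different rates, one thermal frame may be paired with more than one RGB frame; tighten `max_dt` if unique pairings are required.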
Note: the RGB-D images and calibration files can be provided for research purposes only, in accordance with privacy regulations; please send us your request by email.
If you are considering using this data, please reference the following:
S. Cosar, C. Coppola, N. Bellotto, "Volume-based Human Re-identification with RGB-D Cameras", 12th International Conference on Computer Vision Theory and Applications (VISAPP), Feb 2017, Porto, Portugal.
The Optris PI-450 thermal imager was mounted at 1.3m from the floor, on top of a Kompaï robot. The Kinect was mounted at 1.1m. The distance between the robot and the face was about 1.3m. Thermal images were recorded using the official Optris driver; temperature data is encoded in the raw image values.
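A common convention in the Optris/ROS tooling is to store temperature in 16-bit raw values as tenths of a degree with a fixed offset, i.e. T[°C] = (raw − 1000) / 10. The sketch below decodes raw values under that assumption; verify the exact encoding against the files provided with the dataset before relying on it.

```python
def raw_to_celsius(raw):
    """Decode a 16-bit thermal raw value to degrees Celsius.

    ASSUMPTION: uses the common Optris convention
    T[degC] = (raw - 1000) / 10; verify against the dataset files.
    """
    return (raw - 1000) / 10.0

def celsius_to_raw(t_celsius):
    """Inverse mapping, useful for thresholding raw images directly."""
    return int(round(t_celsius * 10.0 + 1000))

# Example: a raw value of 1365 would correspond to 36.5 degC, i.e. the
# human skin temperature range relevant for person detection.
print(raw_to_celsius(1365))
```

The inverse mapping lets you pick a temperature threshold once and apply it to the raw 16-bit images without converting every pixel.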
Since the full RGB-D-T dataset is large, please send an email to email@example.com for instructions on downloading it. We also provide subsets of the dataset below.
The thermal images can be downloaded via the following links:
Sample data (one person only) from the thermal images can be downloaded below:
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.
Copyright (c) 2016 Serhan Cosar and Nicola Bellotto.
This work was funded in part by the EU Horizon 2020 project ENRICHME, H2020-ICT-2014-1, Grant agreement no.: 643691.