One-day workshop at IEEE/RSJ IROS, 3 October 2019, Macau, China

For an up-to-date webpage of this workshop please visit: http://www.ai.rug.nl/oel/

This workshop is supported by the following IEEE/RAS technical committees:

  • Robotic Hand Grasping and Manipulation
  • Robot Learning
  • Cognitive Robotics

To obtain the support letters, please contact aghalamzanesfahani@lincoln.ac.uk

Open-Ended Learning for Object Perception and Grasping:

Current Successes and Future Challenges

In order for robots to operate in human-centric environments, robotic perception and grasping must be properly addressed. Although many related problems have been understood and solved successfully, many research questions remain open. Open-ended learning is one such challenge, with much room for improvement. Cognitive science has revealed that humans learn to recognize object categories and grasp points ceaselessly over time. This ability allows them to adapt to new environments by enhancing their knowledge through the accumulation of experiences and the conceptualization of new object categories. Taking this theory as inspiration, an autonomous robot must be able to process visual information and conduct learning and recognition tasks in a concurrent and open-ended fashion. This cognitive robotic approach allows us to better integrate robots into our societies.

Deep convolutional neural networks (CNNs) have advanced the state of the art in object perception and grasping. Given a fixed set of object categories and a large number of examples per category, CNN-based approaches yield good results for both object perception and grasping. These approaches are incremental by nature but not open-ended, since the inclusion of novel categories forces a restructuring of the network topology; the CNN must then be retrained, which implies a long training time. In fact, it is impractical to pre-program all necessary object categories and anticipate all the exceptions a robot may encounter. For a robot to operate in human-centric environments, it should learn autonomously about novel concepts online from only a few examples. The robot's competence then increases over the course of its actions, making it robust and, to some extent, fault-tolerant. In this workshop we will discuss the current and future challenges and opportunities for open-ended approaches by considering the following questions:

  1. What can be transferred from human cognition to cognitive robotics?
  2. How does human cognition translate to open-ended learning approaches in robotics?
  3. What challenges and opportunities do open-ended approaches provide for incremental robot learning and learning from observations?
  4. What would be the target for open-ended approaches? To what extent must open-ended approaches generalise?
  5. How should a robot (supervised or unsupervised) incrementally collect data from its own experiences during interaction with objects?
  6. How can object perception (cognition) be extended to provide a generative model of a scene (probabilistic scene descriptions)? And how can this be used for task-informed grasping?
  7. How should the performance of open-ended learning approaches be evaluated? What are the right metrics to do so?
  8. What would be the right benchmarks and datasets to help evaluate approaches and compare progress in this field?

Topics of interest
Topics of interest include, but are not limited to, the following:

  • Incremental learning
  • Transfer learning from one type of robot hand to another
  • Open-ended grasping of deformable objects
  • Architectures for open-ended learning
  • Lifelong learning and adaptation for autonomous robots
  • Cognitive robotics
  • Deep learning for task-informed grasping
  • Deep transfer learning for object perception
  • Knowledge transfer and avoidance of catastrophic forgetting
  • Affordance learning and task-informed grasping
  • Challenges of human-robot collaborative manipulation
  • Grasping and object manipulation
  • 3D object category learning and recognition
  • Active perception and scene interpretation
  • Coupling between object perception and manipulation
  • Learning from demonstrations

Great line-up of speakers:

  • Henrik Iskov Christensen – KUKA Chair of Robotics (confirmed) — Topic: 3D Pose Estimation of Daily Objects Using an RGB-D Camera; and Robust grasp preimages under unknown mass and friction distributions.
  • Tamim Asfour – Institute for Anthropomatics and Robotics, Germany (tentatively confirmed) — Topic: Current Successes and Future Challenges in Humanoid Grasping and Manipulation in the Real World.
  • Serena Ivaldi – INRIA, France (confirmed) — Topic: Teaching a Robot to Grasp Irregular Objects with Machine Learning and Human-in-the-Loop Approaches.
  • Jens Kober & Carlos Celemin Paez – Delft University, The Netherlands (confirmed) — Topic: Teaching Robots Interactively; and Enabling Robots to Learn “How to Perform Manipulation Tasks from Few Human Demonstrations”.
  • Shuran Song – Columbia University (confirmed) — Topic: Multi-View Self-Supervised Deep Learning for 6D Pose Estimation; and Robotic Pick-and-Place of Novel Objects in Clutter with Multi-Affordance Grasping and Cross-Domain Image Matching.
  • Vincent Vanhoucke & Julian Ibarz – Google (confirmed) — Topic: Using Simulation and Domain Adaptation to Improve Efficiency of Deep Robotic Grasping; and Grasp2Vec: Learning Object Representations from Self-Supervised Grasping.
  • Luis Seabra Lopes – University of Aveiro, Portugal (confirmed) — Topic: Interactive Open-Ended Learning Approaches for 3D Object Recognition.
  • Hao Su – UC San Diego, USA (confirmed) — Topic: Semantic Scene Segmentation using PartNet Models.
  • Yukie Nagai – Osaka University, Japan (confirmed) — Topic: Cognitive Development Based on Sensorimotor Predictive Learning.
  • Chelsea Finn – Stanford University & Google Brain (tentatively confirmed) — Topic: Learn to Learn Many Other Tasks from One Demonstration.