A full-day workshop at Robotics: Science and Systems (RSS), 22 June 2019 – Messe Freiburg, Germany
This workshop was sponsored by Robert Bosch GmbH. Bosch also kindly funded the best paper award.
This workshop is supported by the IEEE/RAS Technical Committees on: (1) Robotic Hands, Grasping and Manipulation, (2) Cognitive Robotics, and (3) Robot Learning.
If you require further information on support/endorsement for this workshop, please email: email@example.com
The TIG-I workshop at IEEE/RAS IROS 2018 can be found at https://www.birmingham.ac.uk/iros18
Robots need to physically interact with their environment to perform manipulative tasks and thus function effectively in our society. Such interactions, e.g. grasping, require complex cognitive processes: perceiving, planning, predicting and acting. In robotics research, each of these sub-problems of grasping is typically considered in isolation. This contrasts with findings in cognitive science research on primates, and robots consequently remain far from matching primates' grasping ability. For instance, current robotic approaches separately detect an object in an image, segment it, synthesise a grasp configuration from geometric features of the object's perceived point cloud, plan the manipulative movements, and execute the actions to deliver the object at the desired pose. This sequential open-loop pipeline is not robust: a task constraint influences neither grasping, segmentation, nor object detection, even though each of these components directly constrains the solution space of the subsequent ones. In this workshop, we would like to discuss task-informed perception, grasping and manipulation, and the critical role of cognition. Specifically, the robot performs active perception to gain information sufficient for synthesising grasps that facilitate the desired task. In such scenarios, the robot actively perceives the state of the environment and evaluates its actions to guarantee its own success across all components of the manipulation task. This allows the robot to find a solution to each component of the manipulation pipeline that provides sufficient initial conditions for successfully performing the successive stages; otherwise, for example, a chosen grasp may not permit the required manipulative movements.
This workshop brings together researchers working in the areas of robotic grasping/manipulation, planning, robot learning, and cognitive robotics, and discusses possible solutions to the corresponding problems.
Topics of interest include (but are not limited to):
- Deep learning for task-informed grasping
- Affordance-informed grasping
- Grasping and manipulative movements in cluttered environments
- Task-driven robotic perception
- Active perception
- Joint planning of grasping pose and manipulative movements
- Benchmarking and datasets for grasping and manipulation
- Challenges of soft manipulation
Best paper award:
Bosch sponsored the best paper award of the TIG-II workshop (Prize: €500)
Karthik Desingh, Jana Pavlasek, Cigdem Kokenoz, Odest Chadwicke Jenkins “Tracking Large Scale Articulated Models with Belief Propagation for Task Informed Grasping and Manipulation”.
Deadline: 8 June 2019 (no extension)
Notification of acceptance: 16 June 2019
Camera ready: 18 June 2019
All papers must be original and not simultaneously submitted to another journal or conference. The following paper categories are welcome:
- Extended abstracts (maximum 2 pages) will be accepted for poster presentation at the workshop.
- Contributed full papers (maximum 6 pages) will be accepted based on their quality, originality, and relevance to the workshop. Submitted papers must not be under consideration for publication elsewhere. Submissions should follow the IEEE/RAS format.
Please submit your paper by emailing it to firstname.lastname@example.org
|Amir Ghalamzan||09:00 — 09:05||Welcome|
|Hao Su||09:05 — 09:35||Understanding the 3D Environments for Interactions|
|Serena Ivaldi||09:35 — 10:10||Improving object grasping and manipulation by exploiting uncertainty and human interaction|
|Chelsea Finn||10:10 — 10:45||Learning Compound Tasks through Interaction and Observation|
|—||10:45 — 11:00||Coffee break|
|Ken Goldberg||11:00 — 11:35||DexNet, Perception and grasp synthesis, Deep learning for robust grasping of generic objects|
|Júlia Borràs||11:35 — 12:10||An analysis of grippers and grasps for cloth manipulation|
|Andras Kupcsik||12:10 — 12:20||Robot Learning at the Bosch Center for Artificial Intelligence|
|Lightning talks of posters||12:20 — 12:45||3-minute presentations — Slides|
|—||12:45 — 13:45||Lunch|
|Posters||13:45 — 14:30||Interactive poster session and coffee break|
|—||14:30 — 14:45||Coffee break|
|Matteo Bianchi||14:45 — 15:15||Human-inspired strategies for grasping with SoftHands|
|Uwe Zimmermann||15:15 — 15:45||Mobile Manipulation – new chances and challenges in industrial automation|
|Cynthia Matuszek||15:45 — 16:15||Learning Models of Language, Action and Perception for Human-Robot Collaboration|
|Oliver Brock||16:15 — 16:45||A Critical Review of the State of Affairs in Grasping and Manipulation|
|—||16:45 — 17:30||Panel discussion — Closing remarks|
- M. Mathew, A. Hayashi, J. Wyatt, Online Learning of Feed-Forward Models for Variable Impedance Control in Manipulation Tasks
- C. Zito, T. Deregowski and R. Stolkin, 2D Linear Time-Variant Controller for Human’s Intention Detection
- A. Cosgun, S. D’Lima and T. Drummond, Embracing Contact: Pushing Multiple Objects with Robot’s Forearm
- T. Pardi, R. Stolkin and A. Ghalamzan, A Grasping Configurations manifold: A framework based on dual-quaternions
- K. Desingh, J. Pavlasek, C. Kokenoz, O. Chadwicke Jenkins, Tracking Large Scale Articulated Models with Belief Propagation for Task Informed Grasping (Best Paper Award: €500 sponsored by Bosch)
- R. Holladay, T. Lozano-Pérez and A. Rodriguez, Force-and-Motion Constrained Grasp Planning for Tool Use
- C. Zito, M. Hansard and Rustam Stolkin, Metrics and Benchmarks for Remote Shared Controllers in Industrial Applications
- B. Denoun, L. Jamone and M. Hansard, Robust and fast generation of top and side grasps for unknown objects
- E. Arruda, M. Kopicki and Jeremy L. Wyatt, Generative grasp synthesis from demonstration using parametric mixtures
- G. Lee, T. Bhattacharjee and S. S. Srinivasa, Bite Acquisition of Soft Food Items via Reconfiguration
- L. Rozo, A. Kupcsik and M. Burger, Fast Learning and Sequencing of Object-centric Manipulation Skills
- Y. Li, M. Rios-Munoz, L. Schomaker and S. H. Kasaei, Learning to Detect Grasp Affordances of 3D Objects using Deep Convolutional Neural Networks
Title: Understanding the 3D Environments for Interactions
Abstract: Being able to understand the surroundings in terms of both geometric and physical attributes, as we humans do, is a key step towards building intelligent autonomous agents. This talk will cover a series of research results from my lab in this direction, focusing on how machine learning, especially deep learning, can be used to address challenging problems in 3D reconstruction, semantic recognition, and mobility structure induction.
Bio: Hao Su has been an Assistant Professor of Computer Science and Engineering at UC San Diego since July 2017. He is affiliated with the Contextual Robotics Institute and the Center for Visual Computing. He has served on the program committees of multiple conferences and workshops on computer vision, computer graphics, and machine learning. Professor Su is interested in fundamental problems in broad disciplines related to artificial intelligence, including machine learning, computer vision, computer graphics, robotics, and smart manufacturing. His work on ShapeNet, the PointNet series, and graph CNNs has significantly shaped the emergence and growth of a new field, 3D deep learning. He previously worked on ImageNet, a large-scale 2D image database that was instrumental in the recent breakthroughs in computer vision. Applications of Su’s research include robotics, autonomous driving, virtual/augmented reality, and smart manufacturing. Hao Su obtained his Ph.D. in Computer Science from Stanford in 2018.
Title: Learning Compound Tasks through Interaction and Observation
Bio: Chelsea Finn is a research scientist at Google Brain and a post-doctoral scholar at UC Berkeley. In September 2019, Finn will join Stanford’s computer science department as an assistant professor. Finn’s research interests lie in enabling robots and other agents to develop broadly intelligent behavior through learning and interaction. To this end, Finn has developed deep learning algorithms for concurrently learning visual perception and control in robotic manipulation skills, inverse reinforcement learning methods for scalable acquisition of nonlinear reward functions, and meta-learning algorithms that enable fast, few-shot adaptation in both visual perception and deep reinforcement learning. Finn received her Bachelor’s degree in Electrical Engineering and Computer Science from MIT and her PhD in Computer Science from UC Berkeley. Her research has been recognized through an NSF graduate fellowship, a Facebook fellowship, the C.V. Ramamoorthy Distinguished Research Award, and the MIT Technology Review 35 Under 35 Award, and her work has been covered by various media outlets, including the New York Times, Wired, and Bloomberg. With Sergey Levine and John Schulman, Finn also designed and taught a course on deep reinforcement learning with thousands of followers online. Throughout her career, she has sought to increase the representation of underrepresented minorities within CS and AI by developing an AI outreach camp at Berkeley for underprivileged high school students and a mentoring program for underrepresented undergraduates across three universities, and by leading efforts within the WiML and Berkeley WiCSE communities of women researchers.
Title: DexNet, Perception and grasp synthesis, Deep learning for robust grasping of generic objects
Abstract: Despite 50 years of research, robots remain remarkably clumsy, limiting their reliability for warehouse order fulfillment, robot-assisted surgery, and home decluttering. The First Wave of grasping research is purely analytical, applying variations of screw theory to exact knowledge of pose, shape, and contact mechanics. The Second Wave is purely empirical: end-to-end hyperparametric function approximation (aka Deep Learning) based on human demonstrations or time-consuming self-exploration. A “New Wave” of research considers hybrid methods that combine analytic models with stochastic sampling and Deep Learning models. I’ll present this history with new results from our lab on grasping diverse and previously-unknown objects and discuss exciting future research including cloud and fog robotics.
Bio: Jingyi Xu (presenting on behalf of Ken Goldberg) is a PhD student at the Technical University of Munich (TUM) with Prof. Eckehard Steinbach. She studied Electrical Engineering and Information Technology at TUM for both her Bachelor’s and Master’s degrees. Currently, she is visiting the AutoLab at UC Berkeley with Prof. Ken Goldberg. Her main research focus is grasping deformable fragile objects and contact modeling for soft fingers.
Ken Goldberg is an artist, inventor, and roboticist. He is William S. Floyd Jr Distinguished Chair in Engineering at UC Berkeley and Chief Scientist at Ambidextrous Robotics. Ken is on the Editorial Board of the journal Science Robotics, served as Chair of the Industrial Engineering and Operations Research Department, and co-founded the IEEE Transactions on Automation Science and Engineering. Short documentary films he co-wrote were selected for Sundance and one was nominated for an Emmy Award. Ken and his students have published 300 peer-reviewed papers, 9 US patents, and created award-winning artworks featured in 70 exhibits worldwide.
Title: Improving object grasping and manipulation by exploiting uncertainty and human interaction
Abstract: In this talk I will present how we approached the problem of grasping different objects with the iCub robot. In absence of tactile feedback, we explicitly considered the uncertainty in the object’s localisation, induced by the noisy cameras, and the manipulator’s kinematics to choose the best among a set of ranked grasps.
Bio: Serena Ivaldi is a tenured research scientist at Inria, leading the humanoid and human-robot interaction activities of Team Larsen at Inria Nancy, France. She earned her Ph.D. in Humanoid Technologies in 2011 at the Italian Institute of Technology. Prior to joining Inria, she was a post-doctoral researcher at UPMC in Paris, France, then at the University of Darmstadt, Germany. She was PI of the EU project CoDyCo (FP7) and is currently PI of the EU projects AnDy (H2020) and Heap (CHIST-ERA). She is also involved in the French ANR project Flying CoWorker. Her research focuses on humanoid robotics and human-robot collaboration, using machine learning to improve the control, prediction and interaction skills of robots. She strongly believes in user evaluation, i.e., having potential end-users evaluate robotic technologies to improve usability, trust and acceptance.
Title: Human-inspired strategies for grasping with SoftHands
Abstract: The advent of soft robotic end effectors has brought about a paradigm shift in grasp planning and execution. The classic approach to grasping with rigid robotic hands generally favored object-centric analytical solutions: a set of available contact points is hypothesized, and their positions and contact forces are computed from knowledge of the object. In soft artificial hands, by contrast, part of the control intelligence is embedded directly in the mechanism, through the purposeful introduction of elastic elements and under-actuation patterns. Thanks to their intrinsic compliance, soft hands can mold around external items and exploit their environment, thus multiplying their grasping capabilities, much as humans do. For these reasons, an approximate estimate of the relative hand-object pose is sufficient to ensure a successful grasp. In this talk, I will discuss how human inspiration, machine learning, minimal sensing and mechatronic design can be combined to allow soft grippers to autonomously grasp items, capitalizing on interaction with the environment.
Bio: Matteo Bianchi is currently an Assistant Professor at the Research Centre “E. Piaggio” and the Department of Information Engineering (DII) of the Università di Pisa. He is also a clinical research affiliate at Mayo Clinic (Rochester, USA) and serves as co-Chair of the RAS Technical Committee on Robot Hands, Grasping and Manipulation and Vice-Chair for Information and Dissemination of the RAS Technical Committee on Haptics. He is the Principal Investigator of the EU Project SoftPro (No. 688857) for the Research Centre “E. Piaggio”. His research interests include human and robotic hands, optimal sensing and control, and haptics. He is an author of more than 100 peer-reviewed contributions and serves as a member of the editorial/organizing boards of international conferences and journals. He is editor of the book “Human and Robot Hands”, Springer International Publishing. He is the recipient of several national and international awards, including the JCTF novel technology paper award at the IEEE/RSJ IROS Conference in Villamoura, Portugal (2012) and the Best Paper Award at the IEEE-RAS Haptics Symposium in Philadelphia, USA (2016). He is a member of the IEEE.
Title: A Critical Review of the State of Affairs in Grasping and Manipulation
Abstract: The field of robotics is rediscovering manipulation. After many years of focusing on robots that do not manipulate the world, for example in research on SLAM, autonomous driving, or drones, there is an increasing amount of research on manipulation. This rekindled interest is fueled by many new and promising ideas. In this talk I would like to give my personal perspective on what the new insights, or maybe even “schools of thought” are, what their advantages and challenges are, and how they might enable us to build highly competent manipulation systems. Where appropriate, I will illustrate this discussion with recent work from my group on soft hands, sensorization, grasping with environmental constraints, interactive perception, and insights from studying human grasping.
Bio: Oliver Brock is the Alexander-von-Humboldt Professor of Robotics in the School of Electrical Engineering and Computer Science at the Technische Universität Berlin in Germany. He received his Ph.D. from Stanford University in the year 2000 and held post-doctoral positions at Rice University and Stanford University. He was an Assistant Professor and Associate Professor in the Department of Computer Science at the University of Massachusetts Amherst before moving back to the Technische Universität Berlin in 2009. The research of Brock’s lab, the Robotics and Biology Laboratory, focuses on robot intelligence, mobile manipulation, interactive perception, grasping, manipulation, soft material robotics, interactive machine learning, deep learning, motion generation, and the application of algorithms and concepts from robotics to computational problems in structural molecular biology. He is an IEEE Fellow and was president of the Robotics: Science and Systems Foundation from 2012 until 2019.
Bio: Cynthia Matuszek is an assistant professor of computer science and electrical engineering at the University of Maryland, Baltimore County. Her research focuses on robots’ acquisition of grounded language and includes work in human-robot interfaces, natural language, machine learning, and collaborative robot learning. She has developed a number of algorithms and approaches that make it possible for robots to learn about their environment and how to follow instructions from interactions with non-technical end users. She received her Ph.D. in computer science and engineering from the University of Washington in 2014. Dr. Matuszek was recently named one of IEEE’s biennial “10 to Watch in AI.”
Title: An analysis of grippers and grasps for cloth manipulation
Abstract: Compliant and soft hands have gained a lot of attention in the past decade because of their ability to adapt to the shape of objects, increasing their effectiveness for grasping. However, when it comes to grasping highly flexible objects such as textiles, we face the dual problem: it is the object that adapts to the shape of the hand or gripper. The talk will summarize the challenging requirements of cloth manipulation and explain a novel definition of textile object grasps that we propose, which abstracts from the robotic embodiment or hand shape and recovers concepts from the early neuroscience literature on hand prehension skills. The analysis is generic, provides a classification of cloth manipulation primitives, and can inspire gripper design and benchmark construction for cloth manipulation.
Bio: Júlia Borràs (who will deliver the talk on behalf of Carme Torras) holds degrees in Mathematics (2004) and Computer Science (2006), and obtained her European Ph.D. in 2011 working on kinematics and reconfiguration designs for the Stewart-Gough parallel platform. She worked abroad for six years as a postdoc: two years in Prof. Aaron Dollar’s GrabLab at Yale University and four years in Prof. Tamim Asfour’s H2T group at the Karlsruhe Institute of Technology (KIT). She has worked on parallel robots, underactuated robot hands, grasping, dexterous manipulation, whole-body motion analysis, humanoid robot locomotion, novel designs for robotic hands and grippers, and robotic cloth manipulation. In 2017 she was awarded a Ramón y Cajal fellowship, one of the most prestigious senior postdoctoral fellowships in Spain. Recently, she earned a tenured position at the Spanish Scientific Research Council (CSIC).
Title: Mobile Manipulation – new chances and challenges in industrial automation
Abstract: Collaborative robots are increasingly used in industry. The next generation of robots now leaving research labs and finding their way into industrial production are mobile manipulators. Mobile manipulators offer greater flexibility and allow automation of industrial processes that stationary robots cannot handle. However, introducing this novel technology to industrial use cases raises many challenges that are not covered by the typical solutions found in robotic automation for high-volume production in static environments. In this talk I will present the major challenges we have faced in our past and current research projects, focusing in particular on the gaps between existing solutions and industrial needs based on our experience. One major topic will be flexible grasping of objects, but the talk will also cover challenges and solutions from other robotic areas related to mobile manipulation in general.
Bio: Uwe Zimmermann received his “Diplom-Ingenieur” degree in electrical engineering in 1998 and his doctorate in Information Technology in 2005, both from the University of Karlsruhe (TH), Germany. From 1998 to 2005 he was a member of the research group at the Institute for Process Control and Robotics, University of Karlsruhe (TH), working on robotics- and automation-related topics, in particular software engineering and software architectures, including real-time systems, component technologies and embedded systems. In 2005, he joined KUKA Corporate Research as a project manager for cooperative research projects. Since 2011 he has additionally been leading a team of six people dealing with environment models, motion planning, whole-body control and mobile manipulation, and has been in charge of setting up a new team dealing with communication infrastructure, machine learning, semantics and data analytics.
Title: Robot Learning at the Bosch Center for Artificial Intelligence
Abstract: Industry 4.0 offers many opportunities for robot learning. As Bosch has global expertise in manufacturing at all scales, AI and robotics are focus topics at Bosch. At the Bosch Center for Artificial Intelligence (BCAI) we are investigating whether AI can be the next disruption in industrial robotics, especially for challenging manipulation tasks. In our view, robots are not as widely usable as they could be in industrial settings: they are still expensive, lack human-level dexterity, and are far from being as easy to use as smartphones. In this talk we will give a brief overview of research topics at BCAI and present an industrial view on robot learning for manipulation.
Bio: Andras Kupcsik is a research scientist at the Bosch Center for Artificial Intelligence; his research focuses on machine learning solutions for manipulator robotics. He completed his PhD at the National University of Singapore (NUS), where he worked on data-efficient robot skill learning using model-based reinforcement learning. After finishing his PhD, he worked as a postdoc with David Hsu at the School of Computing (NUS) on dexterous human-robot collaboration and with Sylvain Calinon (Idiap Research Institute) on tele-operated underwater robot manipulation in the DEXROV H2020 project. At BCAI he now focuses on geometry-aware robot skill learning and control.
|Amir Masoud Ghalamzan Esfahani, Ph.D.
College of Science
The University of Lincoln, The United Kingdom
|S. Hamidreza Kasaei, Ph.D.
Faculty of Science and Engineering
The University of Groningen, The Netherlands
|Gerhard Neumann, Ph.D.
Professor of Robotics and Autonomous Systems
Director of Computational Learning for Autonomous Systems (CLAS)
College of Science
The University of Lincoln, The United Kingdom