Task-Informed Grasping: Agri-Food manipulation (TIG-III)

A half-day workshop at the International Conference on Robotics and Automation (ICRA) 2021
30 May – 5 June 2021
Xi’an – China

Previous workshops from this series:

TIG-II at RSS 2019 (https://lcas.lincoln.ac.uk/wp/tig-ii/)

TIG-I at IROS 2018 (https://www.birmingham.ac.uk/iros18)

If you have any problems accessing MS Teams, please use the MS Teams web browser version. You will have two options for posting your questions and ideas: (i) Slido with the code TIGstorm, or (ii) the meeting workspace.

This workshop is supported by the IEEE RAS Technical Committees on (1) Mobile Manipulation, (2) Agricultural Robotics and Automation, (3) Algorithms for Planning and Control of Robot Motion, (4) Soft Robotics and (5) Robot Learning.

If you require further information on this workshop, please email:

smghames@lincoln.ac.uk or aghalamzanesfahani@lincoln.ac.uk

The workshop will be held on 31 May 2021, 14:00–18:00 CEST (12:00–16:00 GMT, 05:00–09:00 PDT).

First, second and third prizes for the best papers/posters will be sponsored by the Advanced Intelligent Systems (AISY) journal at Wiley.

Please register your attendance: REGISTER

Join the Microsoft Teams Link for the Live Event on May 31: Join Microsoft Teams Meeting

You can post questions or ideas before or during the meeting on Slido using the code TIGstorm (also accessible through the Teams meeting workspace if you have the app version).

Pre-recorded talks can be watched on the workshop's YouTube channel: Watch Full Pre-recorded Talks

The TIG-I (IROS 2018) and TIG-II (RSS 2019) workshops can be found at

https://www.birmingham.ac.uk/iros18 and https://lcas.lincoln.ac.uk/wp/tig-ii/

Abstract

Agri-food robotics is a key element of precision agriculture (and precision food production), which has been identified as a solution to the increasing demand for agri-food products driven by population growth and growing labour shortages. One of the critical challenges of agri-food robotics is the grasping and manipulation of agri-food objects. This workshop aims to bring together world-leading experts in robotic grasping and manipulation applied to the agri-food sector to discuss the challenges and possible approaches to agri-food manipulation, ranging from agriculture (e.g. robotic harvesting) to food production (e.g. sandwich making). The challenges involved in agri-food tasks, such as differentiating between items (e.g. toppings) when making a sandwich or picking ripe fruits from clusters, create many interesting research questions for the robotics community. In TIG-I and TIG-II, we brought together leading researchers in control and perception to discuss the general robotic manipulation problem.
This workshop, in contrast, will focus on the research challenges specific to the agri-food sector, e.g. motion planning and control, robot perception, task scheduling, soft robotics and human-robot interaction. The workshop will therefore survey the state of the art in agri-food robotics through invited research talks and paper and poster presentations, raising awareness of the scientific and societal challenges in the community.

Topics of interest include:

  • Agri-food manipulation
  • Mobile manipulation
  • Data-driven manipulation for agri-food applications
  • Grasping and manipulative movements in cluttered environments
  • Motion planning and control challenges for agri-food robotics
  • Tactile sensing and enhanced manipulation
  • Agri-food sensing technologies
  • Robotic vision challenges in agri-food applications
  • Soft robotics for agri-food applications
  • Task-oriented end-effectors: requirements and specifications
  • Benchmarks and datasets for grasping and manipulation in cluttered environments

 

Call for Papers

Deadline: 7 May 2021, extended to 10 May 2021

Notification of acceptance: 16 May 2021, extended to 19 May 2021

We invite researchers to submit their recent work on or around the topics of interest listed above. All papers must be original and not simultaneously submitted to another journal or conference. The organisers will ensure that reviewers have no affiliation with the authors of the papers they review. Accepted papers will be posted on our website.

The following paper categories are welcome:

  • Extended abstracts (maximum 2 pages) will be accepted for poster presentation at this workshop.
  • Full papers (maximum 6 pages) will be accepted based on their quality, originality, and relevance to the workshop. Submitted papers should not be under consideration for publication anywhere else. Submissions should follow the IEEE RAS format.

Please submit your paper by emailing it to workshop.tig@gmail.com

The prizes for the best papers/posters will be sponsored by AISY in the form of $400 in vouchers, distributed as $150 for 1st place, $150 for 2nd place and $100 for 3rd place.

Programme 

Schedule of the virtual event

A short, four-hour event will be hosted on the Zoom cloud meetings application. In the first three hours, the speakers will give short talks of 10 minutes, each followed by a 5-minute Q&A session. Then, spotlight paper and poster presentations will last for 12 minutes, followed by 3 minutes of interactive Q&A between the young researchers and the audience. Following this, the workshop sponsor AISY-Wiley will present their journal family for 5 minutes, before a live panel discussion of 35 minutes. The length of the live event will allow a large number of researchers from different time zones to attend. Alongside the live event, the speakers (as listed in the timetable below) will record longer presentations of 25 minutes, which we will release on a YouTube channel 2 days before the workshop. Researchers whose extended abstract or full paper is accepted will record a 5-minute presentation of their work, which we will upload alongside the speakers' talks. They will also have a 1-minute spotlight during the live event. The audience will have the chance to watch the full talks on the workshop's YouTube channel before listening to the shorter live talks.
 

Live Virtual Event Timetable (Download Timetable)

14:00 – 14:05 (CEST)  Organisers: Opening
14:05 – 14:15  Matthew Howard: Empowering Growers to Exploit Adaptive Robots
14:20 – 14:30  Pieter Abbeel: Toward More Effective Reinforcement Learning
14:35 – 14:55  Robert Katzschmann: Soft robotics tackling object manipulation in kitchens and beyond
15:00 – 15:10  Salah Sukkarieh: Autonomous Soft Fruit Harvesting: Challenges and Lessons
15:15 – 15:25  Manoj Karkee: AI, Machine Vision and Robotics for Fruit Harvesting and Fruit Tree Pruning
15:30 – 15:40  Fei Chen: Towards Vineyard Automation: Robot Pruning and Harvesting
15:45 – 16:05  Maximo Roa: Compliant end-effectors for food handling: the CLASH hand
16:10 – 16:20  Lorenzo Jamone: Tactile sensing for robotic manipulation and fruit quality control
Pre-recorded talk (released on the YouTube channel)  Peter Allen: Generative Attention Learning – A "GenerAL" Framework for High-Performance Multi-fingered Grasping in Clutter Using Vision and Touch
Pre-recorded talk (released on the YouTube channel)  Marc Hanheide: Mobile robotics and HRI in the fruit harvesting chain
16:25 – 16:35  Zhuoling Huang: Planning the Harvesting Sequence in Complex Environment for Fruit-picking Robot
16:40 – 16:50  Lewis Anderson: Feeding the world with 8-armed outdoor robots
16:55 – 17:07  Lightning talks of posters and papers
17:07 – 17:10  Spotlight talks Q&A
17:10 – 17:15  AISY-Wiley presentation
17:15 – 17:50  Panel discussion and closing remarks

Accepted papers:

  1. Learning to pick apples using an apple proxy; Alejandro Velasquez, Joseph R. Davidson, Cindy Grimm; Collaborative Robotics and Intelligent Systems (CoRIS) Institute, Oregon State University. [16:55 – 16:56 CEST]
  2. Comparison of Manual and Robotic Picking Actions for Strawberry Harvesting: A Discussion; Rajendran S Vishnu, Simon Parsons, Amir Ghalamzan Esfahani; Lincoln Agri-Robotics, University of Lincoln. [16:56 – 16:57 CEST]
  3. Robotic harvesting from peppers to berries; Marsela Polic, Jelena Tabak and Matko Orsag; University of Zagreb. [16:57 – 16:58 CEST]
  4. Modelling Soft Fruit Clusters for Controlled Harvesting; Willow Mandil and Amir Ghalamzan E.; University of Lincoln. [16:58 – 16:59 CEST]
  5. Execution Monitoring for Robotic Pruning using Force Feedback; Alexander You, Nidhi Parayil, Cindy Grimm, Joseph R. Davidson; Collaborative Robotics and Intelligent Systems (CoRIS) Institute, Oregon State University. [16:59 – 17:00 CEST]
  6. An End-to-end Deep Learning Stereo Matching Method for Robotic Leaf Sampling; Qingyu Wang, Mingchuan Zhou, Huanyu Jiang, Yibin Ying; College of Biosystems Engineering and Food Science, Zhejiang University. [17:00 – 17:01 CEST]
  7. Teat Pose Estimation via RGBD Segmentation for Automated Milking; Nicolas Borla, Fabian Kuster, Jonas Langenegger, Juan Ribera, Marcel Honegger, Giovanni Toffetti; Zurich University of Applied Sciences. [17:01 – 17:02 CEST]
  8. Perception in Agri-food Manipulation: A Review; Jack Foster, Mazvydas Gudelis, Amir Ghalamzan E.; University of Lincoln. [17:02 – 17:03 CEST]
  9. Robotics and agriculture: from the labs to the fields; A. Di Fava, J. Pages, L. Marchionni, F. Ferro; PAL Robotics. [17:03 – 17:04 CEST]
  10. State Estimation of an Underactuated Gripper during Robotic Fruit Picking; Miranda Cravetz, Cindy Grimm, Joseph R. Davidson; Collaborative Robotics and Intelligent Systems (CoRIS) Institute, Oregon State University. [17:04 – 17:05 CEST]
  11. Current Advances in Hardware for Agri-Food Manipulation – A Review; Joshua Davy, Amie Owen, Bradley Hurst, Amir Ghalamzan E.; University of Lincoln. [17:05 – 17:06 CEST]
  12. Robotic Manipulators in Agriculture: A Brief Review; Harry Rogers, Haihui Yan, Onyinyechi Chukwuma, Owen Would, Elijah Elmanzor, William Rohde, Ayandeep Kundu, Ni Wang, and Amir Ghalamzan E.; University of Lincoln. [17:06 – 17:07 CEST]

 

Invited Speakers:

Matthew Howard

Title: Empowering Growers to Exploit Adaptive Robots

Abstract: One of the key challenges faced by horticultural producers in the UK is labour shortage, driven by a combination of factors, most recently travel restrictions due to the Covid-19 pandemic. Collaborative and learning robots are a potential solution where off-the-shelf automation is unavailable, but this raises the question of how best to equip growers to make use of this technology in their production settings. This talk will outline some of the efforts of the Robot Learning Lab at King's in this area, ranging from training growers to become effective teachers of cobots, to instrumenting clothing to capture motor-skill data, to the direct automation of challenging fresh-herb manipulation problems.

Bio: Dr Matthew Howard is a senior lecturer at the Centre for Robotics Research, Department of Engineering, King’s College London. Prior to joining King’s in 2013, he held a Japan Society for Promotion of Science fellowship at the Department of Mechanoinformatics at the University of Tokyo and was a research fellow at the University of Edinburgh from 2009-2012. He also obtained his PhD in 2009 at Edinburgh with an EPSRC CASE award sponsored by Honda Research. His research interests span the fields of robotics and autonomous systems, statistical machine learning and adaptive control. His current work focuses on teaching and learning of robotic motor skills by demonstration, especially for soft robotic devices, based on human musculoskeletal control. He works with a number of large companies in protected ornamental and food crop production, looking at automation through collaborative robotics in horticulture.

Pieter Abbeel

Title: Toward More Effective Reinforcement Learning

Abstract: Grasping is one of the most fundamental robotics problems, and it becomes an even bigger challenge in a task context. One of the big recent drivers towards better grasp policies has been data-driven approaches, especially reinforcement learning. Interestingly, it's often only the data that is grasping-specific; the reinforcement learning algorithms used are quite general and applicable across many other tasks. I believe that general advances in reinforcement learning will play a key role in the future of grasping, and I will present two major recent advances: (i) how to effectively combine representation learning (specifically contrastive learning) with reinforcement learning, resulting in learning almost as efficiently from image input as when given access to the underlying state, evaluated in the DMControl Suite as well as on a real robot; (ii) how to incorporate human feedback to avoid issues with reward misspecification. We showcase effective human-in-the-loop reinforcement learning, including teaching a range of behaviors in less than 1 hour of human time.

Bio: Professor Pieter Abbeel is Director of the Berkeley Robot Learning Lab and Co-Director of the Berkeley Artificial Intelligence Research (BAIR) Lab. Abbeel's research strives to build ever more intelligent systems, with a main emphasis on deep reinforcement learning and deep unsupervised learning. His lab also investigates how AI could advance other science and engineering disciplines. Abbeel has founded several companies, including Gradescope (AI to help instructors with grading homework and exams) and Covariant (AI for robotic automation of warehouses and factories). Abbeel has received many awards and honors, including the PECASE, NSF CAREER, ONR-YIP, DARPA-YFA and TR35 awards. His work is frequently featured in the press, including the New York Times, Wall Street Journal, BBC, Rolling Stone, Wired, and Tech Review.

Robert Katzschmann

Title: Soft robotics tackling object manipulation in kitchens and beyond

Abstract: Soft robotics focuses on the deformable and non-rigid character of our world. Soft robots made of deformable materials surpass the limited degrees of freedom of rigid robots and therefore potentially offer adaptable and inherently safe ways of achieving versatile forms of locomotion and manipulation. Soft robots can homogeneously combine actuation, sensing, and structural complexity within the same component. Two core challenges must be overcome to achieve capable soft robots: first, reproducibly constructing their bodies with fully integrated actuation and sensing; second, developing robotic “brains” that fully understand and exploit the unique properties of soft robots and the deformable world we live in. In the first part of this talk, we will discuss some of the challenges faced when one tries to make rigid robots handle soft and highly deformable food items for the preparation of meals in industrial kitchens. In the second part, we will cover some of our lab's recent developments in soft robotics that overcome the challenges in the creation, modeling, and control of soft robotic systems.

Bio: Robert Katzschmann is an Assistant Professor of Robotics at ETH Zurich. Robert earned his Diplom-Ingenieur in 2013 from the Karlsruhe Institute of Technology (KIT) and his Ph.D. in Mechanical Engineering in 2018 from the Computer Science and Artificial Intelligence Lab (CSAIL) at the Massachusetts Institute of Technology (MIT). Robert worked on robotic manipulation technologies as an Applied Scientist at Amazon Robotics and as CTO at Dexai Robotics. In July 2020, Robert founded the Soft Robotics Lab at ETH Zurich to push robots' abilities in real-life applications by making them more compliant and better able to adapt to their environment to solve challenging tasks. His group develops soft robots whose compliant properties resemble living organisms and advances modeling, control, and learning techniques tailored to the needs of soft robots. His work has appeared in leading academic journals, including Science Robotics, and has been featured in major news outlets, including the New York Times. Robert is a member of the ETH AI Center, the Max Planck ETH Center for Learning Systems (CLS), and the ETH Competence Center for Materials and Processes (MaP). Robert is an Area Chair for Robotics: Science and Systems (RSS), a Guest Editor for the International Journal of Robotics Research (IJRR), and a reviewer for leading peer-reviewed journals, including Science and Nature.

Salah Sukkarieh

Title: Autonomous Soft Fruit Harvesting: Challenges and Lessons

Abstract: Autonomous harvesting has been demonstrated for many tree crop varieties, with some even at the point of commercialisation. However, less common crops, such as plums, remain unaddressed. Plums are a type of soft fruit and as such require careful motion planning to avoid scratches and stem pull-out, which is complicated by mixed hard and soft obstacles. This results in a range of robotics challenges when adapting existing harvesting techniques to soft fruit. We present some of the lessons learned when building and testing a robotic plum harvesting development platform that must overcome these challenges.

Bio: Salah Sukkarieh is Professor of Robotics and Intelligent Systems at the University of Sydney and CEO of Agerris, a new Agtech start-up company from the ACFR developing autonomous robotic solutions to improve agricultural productivity and environmental sustainability.

Second speaker bio: Jasper Brown received a Bachelor's degree in Mechanical Engineering from the University of Sydney, Australia. He is currently a PhD student at the Australian Centre for Field Robotics, University of Sydney. His research interests include robotic crop interaction and grasping, active perception, and machine learning.

Manoj Karkee

Title: AI, Machine Vision and Robotics for Fruit Harvesting and Fruit Tree Pruning

Abstract: The decreasing availability and increasing cost of farm labor is a critical challenge faced by the agricultural industry around the world. Robotics has played a key role in reducing labor use and increasing productivity in farming. Modular automation and robotics technologies developed in recent years, the decreasing cost and increasing capabilities of sensing, control and automation technologies such as UAVs, and increasing emphasis by governments around the world on mechanizing and automating agriculture have created a conducive environment for developing and adopting smart farming technologies, to the benefit of agricultural industries around the world, including the smaller, subsistence farming operations common in developing and underdeveloped countries. In this presentation, the author will first discuss the importance of precision and automated/robotic systems for the future of farming (Smart Farming, Ag 4.0). He will then summarize past efforts and the current status of agricultural automation and robotics, particularly for fruit harvesting and fruit tree pruning, followed by an introduction to the novel systems being developed in his program. The technologies to be introduced include robotic fruit harvesting, targeted shake-and-catch harvesting, and fruit tree and berry bush pruning. Finally, major challenges and opportunities in agricultural robotics and related areas, as well as potential future directions in research and development, will be discussed.

Bio: Dr. Manoj Karkee is an Associate Professor in the Biological Systems Engineering Department at Washington State University (WSU). He received his undergraduate degree in Computer Engineering from Tribhuvan University, Nepal, and his MS in Remote Sensing and GIS from the Asian Institute of Technology, Thailand. His PhD was in Agricultural Engineering and Human Computer Interaction from Iowa State University. Dr. Karkee leads a strong research program in the area of sensing, machine vision and agricultural robotics at the WSU Center for Precision and Automated Agricultural Systems. He has published widely in journals such as the 'Journal of Field Robotics', 'Computers and Electronics in Agriculture', and 'Transactions of the ASABE', and has been an invited speaker at numerous national and international conferences and universities. Dr. Karkee is currently serving as the elected chair of the International Federation of Automatic Control (IFAC) Technical Committee 8.1 (Control in Agriculture), as an associate editor for 'Transactions of the ASABE', as a guest editor for 'Sensors', and as an editorial committee member of 'Information Processing in Agriculture'. Dr. Karkee was awarded the '2020 Rain Bird Engineering Concept of the Year' by the American Society of Agricultural and Biological Engineers, and was recognized as a '2019 Pioneer in Artificial Intelligence and IoT' by Connected World magazine.

 
Fei Chen

Title:  Towards Vineyard Automation: Robot Pruning and Harvesting

Abstract: Grapevine winter pruning is a complex task that requires skilled workers to execute correctly, and its complexity is also what makes it time consuming. Given that this operation takes about 80-120 hours/ha to complete, and is therefore even more critical in large vineyards, an automated system can help speed up the process. To this end, this talk presents a novel multidisciplinary approach that tackles this challenging task by performing object segmentation on grapevine images, which is used to create a representative model of the grapevine plants. A set of potential pruning points is then generated from this plant representation. The technologies will be applied to various mobile platforms, including a wheeled platform and a quadruped robot able to adapt to challenging terrain. A similar pipeline is also implemented for the table grape harvesting application.

Bio: Dr. Fei Chen is an assistant professor with the T-Stone Robotics Institute (CURI), The Chinese University of Hong Kong (CUHK), as well as the Hong Kong Center for Logistics Robotics (HKCLR). Before joining CUHK in 2020, he led the Active Perception and Robot Interactive Learning (APRIL) laboratory in the Department of Advanced Robotics at the Italian Institute of Technology (IIT), Italy, where he was the IIT PI of several EU and Italian projects. His research interests lie in robot perception, learning, planning and control of mobile manipulation for various applications, such as manufacturing, healthcare and agri-food. He is co-chair of the IEEE Robotics and Automation Society Technical Committee on Neuro-Robotics Systems and has chaired several international conferences and workshops.

Zhuoling Huang 

Title: Planning the Harvesting Sequence in Complex Environment for Fruit-picking Robot

Abstract: This talk will introduce work on planning the sequence of tasks for automated strawberry harvesting. In this research, we created a video game in order to collect data for an application of learning from demonstration during the Covid-19 pandemic. The game simulates the arrangement of strawberries on a farm and requires players to make decisions about harvesting. The data we collected from players indicates that there is a pattern to human behaviour when planning the sequence in which strawberries are harvested.

Bio: Zhuoling Huang is a PhD student in Computer Science at King's College London, supervised by Professor Simon Parsons and Professor Elizabeth Sklar. Her research interests focus on the design of automated fruit-harvesting robots, especially strawberry-picking robots that can work in complex environments.

Lorenzo Jamone

Title: Tactile sensing for robotic manipulation and fruit quality control

Abstract: Most of today's robots are employed in heavy manufacturing industries (e.g. automotive); however, future robots will be more and more employed in other applications, e.g. light manufacturing industries, logistics, healthcare and agriculture. To achieve this, several improvements in robotic dexterity will be needed. Explicit insights from biology and psychology, well-established control and engineering principles, modern AI techniques and novel sensing devices have to be combined and properly integrated. In the talk I will discuss some of our recent activities in this area, focusing in particular on new soft tactile sensors and their use for robotic object manipulation and fruit quality control.

Bio: Lorenzo Jamone is a Senior Lecturer in Robotics at Queen Mary University of London (QMUL), where he leads the CRISP Team. He received his BSc and MSc degrees in computer engineering from the University of Genoa in 2003 and 2006 (with honors), and his PhD in humanoid technologies from the University of Genoa and the Italian Institute of Technology (IIT) in 2010. Before joining QMUL in 2016, he was a Research Fellow at the RBCS Department of the IIT in 2010, an Associate Researcher at the Takanishi Lab (Waseda University, Tokyo, Japan) from 2010 to 2012, and an Associate Researcher at VisLab (the computer vision laboratory of the Instituto de Sistemas e Robotica, Instituto Superior Tecnico, Lisbon, Portugal) from 2013 to 2016. Lorenzo's main research interests include sensorimotor learning and control in humans and robots; robotic reaching, grasping, manipulation and tool use; force and tactile sensing; and intelligent systems and cognitive developmental robotics.

Peter Allen

Title: Generative Attention Learning – A “GenerAL” Framework for High-Performance Multi-fingered Grasping in Clutter Using Vision and Touch

Abstract: Generative Attention Learning (GenerAL) is a framework for high-DOF multi-fingered grasping that is not only robust to dense clutter and novel objects but also effective with a variety of different parallel-jaw and multi-fingered robot hands. This framework introduces a novel attention mechanism that substantially improves the grasp success rate in clutter. Its generative nature allows the learning of full-DOF grasps with flexible end-effector positions and orientations, as well as all finger joint angles of the hand. Trained purely in simulation, this framework skillfully closes the sim-to-real gap. To close the visual sim-to-real gap, this framework uses a single depth image as input. To close the dynamics sim-to-real gap, this framework circumvents continuous motor control with a direct mapping from pixel to Cartesian space inferred from the same depth image. This framework also demonstrates inter-robot generality by achieving over 92% real-world grasp success rates in cluttered scenes with novel objects using two multi-fingered robotic hand-arm systems with different degrees of freedom. We also show how we can close the remaining failure cases with the use of tactile sensing. When this tactile methodology is used to realize grasps from coarse initial positions provided by a vision-only planner, the system is made dramatically more robust to calibration errors in the camera-robot transform.

Bio: Peter Allen is Professor of Computer Science at Columbia University, and Director of the Columbia Robotics Lab.  His work includes building the GraspIt! grasping simulator, the IREP surgical robot, and the Visibot disposable laparoscope.  He is the recipient of the CBS Foundation Fellowship, Army Research Office fellowship, the Rubinoff Award for innovative uses of computers, and the NSF PYI award.  His current research interests include robotic grasping, medical robotics and Brain-Computer Interfaces for Human-Robot interaction.

Marc Hanheide

Title: Mobile robotics and HRI in the fruit harvesting chain 

Abstract: As part of the overall harvesting challenge, product flow post-picking and in-field logistics pose additional challenges and opportunities for automation in agriculture. What happens to a crop once it has been successfully removed from the plant? How can we ensure logistics can keep up with harvesting speed? And how can we facilitate a gradual transition from current manual harvesting arrangements towards further automation, to offer the productivity gains the sector needs? In this talk I'll reflect on some of the recent R&D developments in the context of strawberry production, ranging from fleet coordination (offering scalable solutions in robot harvesting) to interactions with human workers in the farm environment.

Bio: Marc Hanheide is a Professor of Intelligent Robotics & Interactive Systems in the School of Computer Science at the University of Lincoln, UK, and the director of the University’s cross-disciplinary research centre in Robotics, the Lincoln Centre for Autonomous Systems (L-CAS).

Maximo Roa

Title: Compliant end-effectors for food handling: the CLASH hand

Abstract: This talk presents the main features of the CLASH hand, a compliant end effector developed especially for food handling. The main feature of the hand is the variable stiffness, which allows its adaptation to the object weight and properties, while being able to withstand collisions with the environment. Embedded ToF and tactile sensors, among others, provide the information required to quickly recover from potential failures, and to continue operation even in the case of problems in the motors or tendons. The hand can thus pick very delicate and light objects, such as a strawberry, or heavy and rigid objects, such as a melon. The performance of the hand is evaluated using standard robotic manipulation benchmarks.

Bio: Maximo Roa is a Senior Scientific Researcher at the Institute of Robotics and Mechatronics at the German Aerospace Center (DLR). He is Group Leader for Robotic Planning and Manipulation, focused on the development and implementation of locomotion and manipulation skills at different levels for industrial, service and humanoid robots. He received his doctoral degree in 2009 from Universitat Politecnica de Catalunya and has held the Project Management Professional (PMP) certification since 2016. Dr. Roa is an IEEE Senior Member and served as co-chair of the IEEE RAS Technical Committee on Mobile Manipulation from 2013 until 2019. He is also a member of ASME, IAU, and PMI.

Lewis Anderson

Title: Feeding the world with 8-armed outdoor robots

Abstract: Our world's agricultural system faces an urgent need to automate most aspects of fruit and vegetable production. Automated growing and harvesting requires a robotics platform of unprecedented scope and capability, many aspects of which have yet to be solved. Successful grasping and manipulation requires perceiving a complex scene, accurately determining the position of an object in 3D, placing an end effector at that position using a robot arm mounted on an unstable platform, and planning routes that avoid obstacles and optimize performance. This must all be done quickly in order to meet demanding economic requirements. Finally, it must be reliable enough to pick hundreds of thousands of berries per day, 6 days/week, 300+ days/year, for years on end. We rely on recent advances in machine learning and other areas of robotics, combined with novel improvements in certain critical areas, to deploy a robotics platform that solves an urgent need.

Bio: Lewis Anderson is Co-founder and CEO of Traptic, which he started to solve a critical labor shortage in the world’s food production system using giant robots. Currently, $200B worth of produce is picked by hand, but a large percentage is wasted because not enough people are available to pick it. Traptic’s giant robots are saving the world’s food production system by doing the work people don’t want to do. Before starting Traptic, Lewis built software for PowerPoint at Microsoft which has been used by hundreds of millions of users. He started his career building autonomous airplanes and studying Computer Science at UC San Diego.

Papers Awarded:

1st Prize: Learning to pick apples using an apple proxy; Alejandro Velasquez, Joseph R. Davidson, Cindy Grimm; Collaborative Robotics and Intelligent Systems (CoRIS) Institute, Oregon State University

2nd Prize: Execution Monitoring for Robotic Pruning using Force Feedback; Alexander You, Nidhi Parayil, Cindy Grimm, Joseph R. Davidson; Collaborative Robotics and Intelligent Systems (CoRIS) Institute, Oregon State University.

3rd Prize: An End-to-end Deep Learning Stereo Matching Method for Robotic Leaf Sampling; Qingyu Wang, Mingchuan Zhou, Huanyu Jiang, Yibin Ying; College of Biosystems Engineering and Food Science, Zhejiang University

 

Recordings from the Live Event

Please note that the talk of Prof. Robert Katzschmann (between Part I and Part II) was not recorded, due to material privacy restrictions associated with Dexai Robotics, Inc.

Recording Part-I

 

Recording Part-II

 

Live Talk of Prof. Salah Sukkarieh

 

Organisers

Sariah Mghames, PhD
Postdoctoral Research Fellow
School of Computer Science
University of Lincoln, United Kingdom

Tommaso Pardi
PhD Candidate
School of Metallurgy and Materials
University of Birmingham, United Kingdom

Fumiya Iida, PhD
Reader in Robotics
Department of Engineering
University of Cambridge, United Kingdom

Mehmet Dogar, PhD
Associate Professor
School of Computing
University of Leeds, United Kingdom

Amir Masoud Ghalamzan Esfahani, PhD
Associate Professor
Lincoln Institute for Agri-Food Technology – Lincoln Agri Robotics
University of Lincoln, United Kingdom

Sponsors