Our current projects are listed below, ordered approximately by funding source: EU consortium awards, UK Research Councils, Innovate UK, industry funding and internal funding.
ILIAD – Intra-Logistics with Integrated Automatic Deployment: safe and scalable fleets in shared spaces
Horizon 2020 Research and Innovation Action, 2017-20.
The ILIAD project is driven by the application needs of fleets of robots that operate in intralogistics applications with a high demand on flexibility in environments shared with humans. In particular, the project aims to enable automatic deployment of a fleet of autonomous forklift trucks, which will continuously optimise its performance over time by learning from collected data. The University of Lincoln’s contributions to the project will include ensuring long-term operation of the ILIAD system, maintaining its environment representations over time while learning and predicting activity patterns of human co-workers; developing qualitative models for human-robot spatial interaction; systems architecture development and systems integration; and managing experimental work at test facilities, including the University’s National Centre for Food Manufacturing. ILIAD is a collaborative project involving four academic institutes and five industrial partners across four European countries.
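As a flavour of how activity patterns might be learned for long-term map maintenance, the sketch below fits a periodic occupancy model to a cell's observation history using a Fourier decomposition; this is our illustration of one plausible frequency-based approach, not the project's actual implementation, and all parameters are assumptions.

```python
import numpy as np

def fit_periodic_model(observations, dt, n_components=2):
    """Fit a simple periodic occupancy model to a binary time series.

    observations: 1-D array of 0/1 cell states, sampled every dt seconds.
    Returns the mean plus the strongest spectral components
    (amplitude, angular frequency, phase).
    """
    x = np.asarray(observations, dtype=float)
    mean = x.mean()
    spectrum = np.fft.rfft(x - mean)
    freqs = 2 * np.pi * np.fft.rfftfreq(len(x), d=dt)   # rad/s
    idx = np.argsort(np.abs(spectrum))[::-1][:n_components]
    comps = [(2 * np.abs(spectrum[i]) / len(x), freqs[i], np.angle(spectrum[i]))
             for i in idx if freqs[i] > 0]
    return mean, comps

def predict_occupancy(t, mean, comps):
    """Predict the probability that the cell is occupied at time t (seconds)."""
    p = mean + sum(a * np.cos(w * t + phi) for a, w, phi in comps)
    return float(np.clip(p, 0.0, 1.0))

# Example: a corridor cell occupied during a daily "rush hour".
day = 24 * 3600
t_obs = np.arange(0, 14 * day, 600)           # two weeks, one sample per 10 min
obs = ((t_obs % day) > 8 * 3600) & ((t_obs % day) < 9 * 3600)
mean, comps = fit_periodic_model(obs.astype(int), dt=600)
print(predict_occupancy(15 * day + 8.5 * 3600, mean, comps))  # elevated
print(predict_occupancy(15 * day + 3.0 * 3600, mean, comps))  # near zero
```

Such predictions let a robot anticipate when shared spaces are busy and plan its routes and map updates accordingly.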
STEP2DYNA: Spatial-temporal information processing for collision detection in dynamic environments
Horizon 2020 EU.1.3.3. – Stimulating innovation by means of cross-fertilisation of knowledge
In the real world, collisions happen every second, often resulting in serious accidents and fatalities. Autonomous unmanned aerial vehicles (UAVs) have demonstrated great potential in serving human society, such as delivering goods to households and supporting precision farming, but their use is restricted by a lack of collision detection capability. Current approaches to collision detection, such as radar, laser-based LIDAR and GPS, are far from acceptable in terms of reliability, energy consumption and size. A new type of low-cost, low-power, miniaturised collision detection sensor is badly needed, not only to save lives but also to make autonomous UAVs and robots safe enough to serve human society. The STEP2DYNA consortium proposes an innovative bio-inspired solution for collision detection in dynamic environments. It takes advantage of the low-cost spatial-temporal and parallel computing capacity of visual neural systems and will realise a new chip specifically for collision detection in dynamic environments.
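As a flavour of the kind of bio-inspired processing involved, below is a minimal sketch of a looming detector in the style of the locust's lobula giant movement detector (LGMD) family of models; the structure, parameters and thresholds are our own illustrative assumptions, not the project's chip design.

```python
import numpy as np
from scipy.ndimage import uniform_filter

class LoomingDetector:
    """Minimal LGMD-style looming detector (illustrative only).

    Excitation comes from luminance change between frames; delayed,
    spatially spread inhibition cancels responses to whole-field motion,
    so the summed output grows mainly for expanding (approaching) edges.
    """
    def __init__(self, shape, inhibition_size=5, threshold=0.05):
        self.prev_frame = np.zeros(shape)
        self.prev_excitation = np.zeros(shape)
        self.inhibition_size = inhibition_size
        self.threshold = threshold

    def step(self, frame):
        """frame: 2-D array of luminance values in [0, 1]."""
        frame = frame.astype(float)
        excitation = np.abs(frame - self.prev_frame)
        # Inhibition: the previous frame's excitation, spread to neighbours.
        inhibition = uniform_filter(self.prev_excitation, size=self.inhibition_size)
        s = np.maximum(excitation - 2.0 * inhibition, 0.0)
        self.prev_frame, self.prev_excitation = frame, excitation
        activity = s.sum() / s.size   # normalised output "membrane potential"
        return activity, activity > self.threshold

# The output activity rises as a synthetic object expands in the image:
det = LoomingDetector(shape=(64, 64))
for r in range(2, 30, 2):
    img = np.zeros((64, 64)); img[32 - r:32 + r, 32 - r:32 + r] = 1.0
    activity, alarm = det.step(img)
    print(round(activity, 4))
```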
Contact: Shigang Yue (PI)
FLOBOT – Floor Washing Robot
EU Horizon 2020, Innovation Action, 645376
FLOBOT is a collaborative project involving academic institutes and industrial partners across five European countries. The project will develop a floor washing robot for industrial, commercial, civil and service premises, such as supermarkets and airports. Floor washing tasks have many demanding aspects, including autonomy of operation, navigation and path optimisation, safety with regard to humans and goods, interaction with human personnel, and easy set-up and reprogramming. FLOBOT addresses these problems by integrating existing and new solutions to produce a professional floor washing robot for wide areas. Our research contribution to this project focuses on robot perception, based on laser range-finder and RGB-D sensors, for human detection, tracking and motion analysis in dynamic environments. Primary tasks include developing novel algorithms and approaches for the acquisition, maintenance and refinement of multiple human motion trajectories for collision avoidance and path optimisation, as well as integration of the algorithms with the robot navigation and on-board floor inspection system.
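To illustrate one standard building block for maintaining human motion trajectories, here is a minimal constant-velocity Kalman filter for a single tracked person; the state layout, noise values and look-ahead horizon are illustrative assumptions rather than the FLOBOT pipeline itself.

```python
import numpy as np

class PersonTrack:
    """Constant-velocity Kalman filter for one detected person (generic sketch)."""
    def __init__(self, xy, dt=0.1):
        self.x = np.array([xy[0], xy[1], 0.0, 0.0])   # px, py, vx, vy
        self.P = np.diag([0.5, 0.5, 1.0, 1.0])        # state covariance
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.eye(2, 4)                         # we observe position only
        self.Q = 0.05 * np.eye(4)                     # process noise
        self.R = 0.1 * np.eye(2)                      # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, xy):
        y = np.asarray(xy) - self.H @ self.x          # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)      # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

# Predicted positions a few steps ahead can feed collision avoidance:
track = PersonTrack(xy=(2.0, 1.0))
track.update((2.1, 1.05))
future = [track.predict() for _ in range(5)]   # 0.5 s look-ahead
print(future[-1])
```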
NCNR – National Centre for Nuclear Robotics
EPSRC RAI Hub, robotics for extreme environments, 2017-2021.
The National Centre for Nuclear Robotics (NCNR) is a multi-disciplinary EPSRC RAI (Robotics and Artificial Intelligence) Hub bringing together most of the UK's leading nuclear robotics experts, including the Universities of Birmingham, Queen Mary, Essex, Bristol, Edinburgh, Lancaster and Lincoln. Under this project, more than 40 postdoctoral and PhD researchers form a team developing cutting-edge scientific solutions to all aspects of nuclear robotics, such as sensor and manipulator design, computer vision, robotic grasping and manipulation, mobile robotics, intuitive user interfaces and shared autonomy.
At the University of Lincoln, we will develop new machine learning algorithms for several crucial applications in nuclear robotics, such as waste handling, cell decommissioning and site monitoring with mobile robots. Clean-up and decommissioning of nuclear waste is one of the biggest challenges for current and future generations, with enormous predicted costs (up to £200Bn over the next hundred years). Moreover, recent disasters such as Fukushima have shown the crucial importance of robotics technology for monitoring and intervention, which to date has been largely missing. Our team will focus on algorithms for vision-guided robot grasping and manipulation, cutting, shared control and semi-autonomy, mobile robot navigation, and outdoor mapping and navigation, with a strong focus on machine learning and adaptation techniques. A dedicated bimanual robot arm platform, mounted on a mobile base, is being developed, to be operated using shared autonomy, tele-operation and augmented reality concepts developed in the project.
Synthesis of remote sensing and novel ground truth sensors to develop high-resolution soil moisture forecasts in China and the UK
Science and Technology Facilities Council (STFC) – Newton Fund: Earth Observation, Modelling and Autonomous Systems For Agri-Tech In China, 2016-19.
The availability of water is a key driver of agricultural productivity, and the impact of water availability on global food production is seen as a key global risk and challenge. This project seeks to develop agri-tech solutions to help alleviate the issue of water availability in agriculture, and ultimately to help producers drive water use efficiency. Currently, there is no system to measure soil moisture distribution accurately across a field, and the resolution of remote sensing has not been sufficient for agricultural applications or for local water management to reduce flood risk. The project will deploy two new sensors (one static, one mobile) within China that measure soil moisture content as a function of the albedo of cosmically generated fast neutrons (Cosmos sensor, designed by Hydroinnova, US). The project coordinates the expertise of four key groups: the University of Lincoln (robotics, mapping and deployment of autonomous vehicles); the Institute of Ecology and Agrometeorology (IEAM) of the Chinese Academy of Meteorological Sciences and the University of Information Science & Technology; the Centre for Ecology and Hydrology (Wallingford); and the School of Geography and Earth Sciences, Aberystwyth University.
RASBerry – Robotics and Autonomous Systems for Berry Production
Innovate UK, Innovation in health and life sciences round 1, 2017-19
The RASBerry project (Robotics and Autonomous Systems for Berry Production) will develop autonomous fleets of robots for the horticultural industry. In particular, the project will consider strawberry production in polytunnels. The first major objective is to support in-field transportation operations to aid and complement human fruit pickers. A dedicated mobile platform is being developed, together with software components for fleet management, in-field navigation and mapping, long-term operation, and safe human-robot collaboration. A solution for autonomous in-field transportation will significantly decrease strawberry production costs, address labour shortages and be the first step towards fully autonomous robotic systems for berry production.
RASBerry is a collaboration between the Norwegian University of Life Sciences (NMBU), University of Lincoln, Saga Robotics, CBS Ltd., Berry Gardens Growers and Ekeberg Myhrene. Further funds are provided by Innovasjon Norge, NFR Forny and Innovate UK.
Robo-Pick: Robots for Autonomous Mushroom Picking
This project aims to develop a new robotic picking system to harvest fresh mushrooms reducing labour demands by ca. 66%. The work will be carried out by a consortium comprising: Littleport Mushroom Farms, a major UK mushroom supplier; ABB, a major UK-based robotic supplier; Stelram, a small specialist UK developer of robotic solutions; and the University of Lincoln, a leading research group focusing on robotic application in the food industry.
The project will integrate novel soft robotic actuators, vision systems and data analysis with autonomous robots, delivering an end-to-end solution to a problem that has challenged the industry for many years. It will greatly increase the competitiveness of UK production, and the outcomes are directly transferable to many sectors of the UK food and manufacturing industries.
Contact: Gerhard Neumann (PI)
3D Vision-based Crop-Weed Discrimination for Automated Weeding Operations
BBSRC and Innovate UK (Agri-Tech Catalyst, Early Stage Feasibility), 2016-18
This project will investigate the technical foundations for the next generation of robotic weeding machinery, enabling selective and accurate treatment of specific weeds. The proposed technology is a novel combination of low-cost 3D sensing and learning software together with a suitable weed destruction technique. The proposed developments will lead to more efficient cultural weeding equipment, resulting in better management of weeds and reduced input use, bringing several benefits to food producers, sellers and society. The technical objectives of the project include: detection of plants using a low-cost 3D camera vision system; discrimination of target crop plants from weeds at different growth stages; provision of an intuitive system training interface for rapid deployment; development of a proof-of-concept weed destruction technique; and, finally, integration and evaluation of the developed technology in automated weeding products.
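As a simple illustration of how 3D data might feed a trainable crop/weed discriminator, the sketch below computes a few geometric descriptors per segmented plant cluster and trains an off-the-shelf classifier; the features, synthetic data and classifier choice are our own assumptions, not the project's design.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def plant_features(points):
    """Simple descriptors from a segmented 3D point cluster (N x 3, metres)."""
    height = points[:, 2].max() - points[:, 2].min()
    footprint = np.ptp(points[:, 0]) * np.ptp(points[:, 1])
    density = len(points) / max(footprint, 1e-6)
    return [height, footprint, density]

# Synthetic stand-ins for labelled clusters that would come from the
# system-training interface mentioned above.
rng = np.random.default_rng(0)
def synth_cluster(height, spread, n=200):
    pts = rng.normal(scale=spread, size=(n, 3))
    pts[:, 2] = rng.uniform(0, height, n)
    return pts

crops = [synth_cluster(0.30, 0.08) for _ in range(20)]   # taller, wider
weeds = [synth_cluster(0.10, 0.03) for _ in range(20)]   # shorter, thinner
X = [plant_features(p) for p in crops + weeds]
y = [1] * 20 + [0] * 20                                  # crop = 1, weed = 0

clf = make_pipeline(StandardScaler(), SVC(probability=True))
clf.fit(X, y)
print(clf.predict([plant_features(synth_cluster(0.28, 0.07))]))  # -> [1]
```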
Learn-Cars: Structured Deep Learning for Autonomous Driving
Funded by Toyota Europe
We will follow a data-driven approach to achieve human-like driving styles with human-level adaptability and personalization to the human driver/passenger. We will estimate driving controllers from collected experience and extract a library of different maneuvers from demonstrated data. We will use the maneuver library to plan the trajectory of the car by switching between different maneuvers. We will also use optimal control and reinforcement learning techniques to improve individual maneuvers, such that they generalize to unseen situations and possibly even outperform human drivers. In particular, we will concentrate on learning to resolve dangerous situations, such as avoiding an unexpected obstacle. An important research question for using maneuver libraries is how to switch between maneuvers: the system should produce as few switches as possible in order to generate smooth driving behaviour. Moreover, we need to incorporate high-dimensional sensory input from the environment into the switching decision. To do so, we will investigate the use of deep learning techniques.
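The toy sketch below shows the switching idea: each maneuver in the library scores the current situation, and a fixed switching penalty discourages unnecessary changes so driving stays smooth. The maneuvers, state variables and costs are purely illustrative assumptions, not the project's learned controllers.

```python
# Each entry maps a maneuver name to a score for the current state.
MANEUVERS = {
    "lane_keep":       lambda s: -abs(s["lane_offset"]),
    "overtake":        lambda s: 1.0 if s["lead_gap"] < 15.0 and s["side_free"] else -1.0,
    "emergency_brake": lambda s: 2.0 if s["obstacle_ttc"] < 1.5 else -2.0,
}

def select_maneuver(state, current, switch_cost=0.5):
    """Pick the best-scoring maneuver, penalising a change of maneuver."""
    scores = {name: score(state) - (switch_cost if name != current else 0.0)
              for name, score in MANEUVERS.items()}
    return max(scores, key=scores.get)

state = {"lane_offset": 0.1, "lead_gap": 40.0, "side_free": True, "obstacle_ttc": 8.0}
print(select_maneuver(state, current="lane_keep"))   # stays in lane_keep
state["obstacle_ttc"] = 1.0
print(select_maneuver(state, current="lane_keep"))   # switches to emergency_brake
```

In the project, hand-written scoring rules like these would be replaced by learned value functions over high-dimensional sensory input.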
Contact: Gerhard Neumann (PI)
Active Vision with Human-in-the-Loop for the Visually Impaired
Google Faculty Research Award, Winter 2015
The research proposed in this project is driven by the need of independent mobility for the visually impaired. It addresses the fundamental problem of active vision with human-in-the-loop, which allows for improved navigation experience, including real-time categorization of indoor environments with a handheld RGB-D camera. This is particularly challenging due to the unpredictability of human motion and sensor uncertainty. While visual-inertial systems can be used to estimate the position of a handheld camera, often the latter must also be pointed towards observable objects and features to facilitate particular navigation tasks, e.g. to enable place categorization. An attention mechanism for purposeful perception, which drives human actions to focus on surrounding points of interest, is therefore needed. This project proposes a novel active vision system with human-in-the-loop that anticipates, guides and adapts to the actions of a moving user, implemented and validated on a mobile device to aid the indoor navigation of the visually impaired.
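One simple way to realise such an attention mechanism, shown below as an illustrative assumption rather than the funded system's actual criterion, is to guide the user towards the viewing direction whose place-category prediction is currently most uncertain, measured by the entropy of the classifier's output.

```python
import numpy as np

def view_utility(class_probs):
    """Usefulness of a candidate viewing direction: the entropy of the place
    classifier's prediction from that direction (a hypothetical scoring rule)."""
    p = np.clip(np.asarray(class_probs, dtype=float), 1e-9, 1.0)
    return float(-(p * np.log(p)).sum())

# Guide the user towards the most informative (most ambiguous) direction.
candidates = {
    "ahead": [0.90, 0.05, 0.05],   # already confidently classified
    "left":  [0.40, 0.35, 0.25],   # ambiguous, so worth looking at
}
target = max(candidates, key=lambda d: view_utility(candidates[d]))
print(target)  # "left"
```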
Application of machine learning and high speed 3D vision algorithms for real time detection of fruit
Collaborative Training Partnership for Fruit Crop Research funded by BBSRC and Industry, 2017-21
The main objective of this project is to deploy novel machine learning technologies to detect, locate and measure (size and colour) fruit in real time. This work fundamentally underpins the development of all crop-picking robots. The project will develop advanced machine learning algorithms to measure, identify and detect fruit in real time and in 3D. The algorithms will be trainable (so that a range of fruit types can be identified) and will provide a world x,y,z co-ordinate for each fruit. A similar system was developed for broccoli (Kusumam, 2017), which showed that 3D cameras could be deployed in field environments. The new challenge for this project will be to minimise the processing required to identify fruit whilst maximising processing speed and recognition fidelity. The project will initially focus on strawberry and is anticipated to extend to apple.
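For illustration, converting a detected fruit's pixel position and depth into a world x,y,z co-ordinate typically uses the pinhole camera model, as in the sketch below; the intrinsic parameters and camera pose shown are placeholder values of the kind obtained from calibration, not the project's.

```python
import numpy as np

def fruit_world_position(u, v, depth, fx, fy, cx, cy, T_cam_to_world):
    """Back-project a detected fruit centre (pixel u, v with depth in metres)
    into world x, y, z. Intrinsics fx, fy, cx, cy and the 4x4 camera pose
    T_cam_to_world come from calibration; the names here are illustrative."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    p_cam = np.array([x, y, depth, 1.0])       # homogeneous camera-frame point
    return (T_cam_to_world @ p_cam)[:3]

# Example with an identity camera pose:
print(fruit_world_position(400, 260, 0.8, fx=600, fy=600, cx=320, cy=240,
                           T_cam_to_world=np.eye(4)))
```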
Contact: Grzegorz Cielniak
Development and Demonstration of an Automated, Selective Broccoli Harvester
Agriculture and Horticulture Development Board, 2017-21
Broccoli is one of the world’s largest vegetable crops, and almost all of it is currently harvested by hand. Development of an automated harvester would help to increase productivity and improve growers’ ability to control production costs. In this project, we will develop 3D imaging technologies to accurately identify broccoli plants in the field in all light conditions, including at night; accurately measure the size of each plant head and compare it against pre-agreed criteria in order to establish whether or not it is suitable for cutting; and establish the precise location of each broccoli head selected for cutting. Working together with industry partners, we will integrate the developed technologies into a prototype robotic system for automatic selective harvesting of broccoli, which can work for long periods to cut, lift and collect heads of the preferred size.
Autonomous Field Rover for Agricultural Research
Research Investment Fund (RIF), University of Lincoln, 2016-18
BBSRC Seeding Catalyst, 2017-18
This project targets the challenge of developing technology for autonomous soil sampling, extending the state of the art for soil quality assessment and correlating measured soil variability with crop yield. We therefore propose to develop a novel autonomous vehicle for the agri-industry which can amalgamate multiple sensors to develop highly precise soil fertility maps. This cross-disciplinary project is based within the newly formed Lincoln Institute for Agri-Food Technology and includes contributions from the Schools of Computer Science, Engineering, Life Sciences and the National Centre for Food Manufacturing at the University of Lincoln.
FInCoR: Facilitating Individualised Collaboration with Robots
Research Investment Fund (RIF), University of Lincoln
The FInCoR project will investigate novel ways to facilitate individualised human-robot collaboration through long-term adaptation on the level of joint tasks. This will enable robots to work with humans more effectively in scenarios such as high-value manufacturing and assistive care. Imagine a robot helping to assemble a car’s dashboard more effectively because it knows it is working with a left-handed person; or a robot assisting an elderly employee in a car factory who is skilled in fitting a speedometer but requires a third hand to hold the heavy mounting frame in place. Despite significant progress in human-robot collaboration, today’s robotic systems still lack the ability to adjust to an individual’s needs. FInCoR will overcome this limitation by developing online, in-situ adaptation, putting the “human in the loop”. It will bring together flexible task representations (e.g. Markov Decision Processes), machine learning (e.g. reinforcement learning), advanced robot perception (e.g. body tracking), and robot control (e.g. reactive planning) to make progress from pre-scripted tasks to individualised models. These models account for the preferences, abilities and limitations of each individual human through long-term adaptation. Hence, FInCoR will enable processes known from human-human collaboration, such as two colleagues working together and learning more about each other’s strengths, preferences and strategies, to take place in human-robot teams.
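As a toy illustration of in-situ adaptation to an individual, the sketch below lets a robot learn which handover side each person accepts fastest, using a simple epsilon-greedy bandit; the scenario, names and update rule are our own assumptions, far simpler than the MDP-based models the project targets.

```python
import random
from collections import defaultdict

class HandoverPreferenceModel:
    """Per-user adaptation toy: learn the handover side each person
    accepts fastest from observed acceptance times (illustrative only)."""
    def __init__(self, epsilon=0.1):
        self.stats = defaultdict(lambda: {"left": [], "right": []})
        self.epsilon = epsilon

    def choose_side(self, user):
        s = self.stats[user]
        # Explore randomly until both sides have been tried at least once.
        if random.random() < self.epsilon or not (s["left"] and s["right"]):
            return random.choice(["left", "right"])
        # Otherwise prefer the side with the lower mean acceptance time.
        return min(("left", "right"), key=lambda a: sum(s[a]) / len(s[a]))

    def record(self, user, side, acceptance_time):
        self.stats[user][side].append(acceptance_time)

model = HandoverPreferenceModel()
for _ in range(50):                      # a left-handed worker responds faster
    side = model.choose_side("worker_7")
    model.record("worker_7", side, 1.0 if side == "left" else 2.5)
print(model.choose_side("worker_7"))     # usually "left"
```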
Mobile Robotics for Ambient Assisted Living
Research Investment Fund (RIF), University of Lincoln
The life span of ordinary people is increasing steadily, and many developed countries, including the UK, face the big challenge of dealing with an ageing population at greater risk of impairments and cognitive disorders, which hinder their quality of life. Early detection and monitoring of human activities of daily living (ADLs) is important in order to identify potential health problems and apply corrective strategies as soon as possible. In this context, the main aim of this research is to monitor human activities in an ambient assisted living (AAL) environment, using a mobile robot for 3D perception, high-level reasoning and representation of such activities. The robot will enable constant but discreet monitoring of people in need of home care, complementing other fixed monitoring systems and proactively engaging in case of emergency. The goal of this research will be achieved by developing novel qualitative models of ADLs, including new techniques for 3D sensing of human motion and RFID-based object recognition. This research will be further extended by new solutions in long-term human monitoring for anomaly detection.
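A minimal sketch of the qualitative-modelling idea: metric human-object distances are discretised into qualitative relations, whose sequences can then characterise ADLs. The thresholds and labels below are illustrative assumptions.

```python
def qualitative_relation(d):
    """Map a metric human-object distance (metres) to a qualitative label,
    the kind of discretisation used in qualitative spatial representations."""
    if d < 0.5:
        return "touching"
    if d < 1.5:
        return "near"
    return "far"

# A stream of distances to, say, a kettle becomes a qualitative sequence;
# recurring "touching" episodes at typical times of day can indicate a
# "making a hot drink" ADL, and their absence may flag an anomaly.
readings = [2.3, 1.2, 0.4, 0.3, 0.4, 1.8]
print([qualitative_relation(d) for d in readings])
# ['far', 'near', 'touching', 'touching', 'touching', 'far']
```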
Autonomous Conversational Systems
We investigate interactive machines that perceive, act, communicate and learn. This is motivated by the fact that communicating with multiple modalities, such as speaking, touching and pointing, is a natural and efficient form of interaction among humans. Our approach is three-fold. First, we treat (multimodal) perception, action and communication jointly rather than independently. Second, we aim to increase the autonomy of intelligent conversational systems by reducing the amount of human intervention (development-wise), in order to train machines from example interactions rather than from interactions contrived for the purpose of system training. Third, we use conversational systems as a test bed to challenge machine learning algorithms in their application to real-world systems that can operate in realistic scenarios, beyond lab environments. Example applications include robots playing games and personal assistants providing information in hands-free scenarios.
Contact: Heriberto Cuayahuitl (PI)
Cognitive Social Robotics
Internally funded, University of Lincoln & NVidia hardware grant
To be useful in the real world, robots will need to be social in order to interact with and help people effectively. Robots should take into account the cognitive abilities and limitations of their human partners, just as people do when interacting with others. Doing so involves the combined application of artificial intelligence, cognitive science, psychology and robotics. This research develops autonomous, cognitively capable and social robots that interact with people in real environments to achieve measurable benefits. Evaluation work is conducted in the real world wherever possible, in the domains of education, healthcare and other collaborative applications.
Contact: Paul Baxter (PI)
ENRICHME: Enabling Robot and assisted living environment for Independent Care and Health Monitoring of the Elderly
EU Horizon 2020, Research & Innovation Action, 643691
ENRICHME is a collaborative project involving academic institutes, industrial partners and charity organizations across six European countries. It tackles the progressive decline of cognitive capacity in the ageing population by proposing an integrated platform for Ambient Assisted Living (AAL) with a mobile robot for long-term human monitoring and interaction, which helps the elderly to remain independent and active for longer. The system will contribute to and build on recent advances in mobile robotics and AAL, exploiting new non-invasive techniques for physiological and activity monitoring, as well as adaptive human-robot interaction, to provide services that support mental fitness and social inclusion. Our research contribution to this project focuses on robot perception and ambient intelligence for human tracking and identity verification, as well as physiological and long-term activity monitoring of the elderly at home. Primary tasks include developing novel algorithms and approaches for the acquisition, maintenance and refinement of models describing human motion behaviours over extended periods, as well as integration of the algorithms with the AAL system.
STRANDS: Spatio-Temporal Representations and Activities for Cognitive Control in Long-Term Scenarios
EU FP7 Integrating Project, 600623
STRANDS will produce intelligent mobile robots that are able to run for months in dynamic human environments. We will provide robots with the longevity and behavioural robustness necessary to make them truly useful assistants in a wide range of domains. Such long-lived robots will be able to learn from a wider range of experiences than has previously been possible, creating a whole new generation of autonomous systems able to extract and exploit the structure in their worlds.
L-CAS scientists in STRANDS are part of a European team of researchers and companies, contributing their unique expertise in long-term mapping and behaviour generation in integrated robotic systems. Our robot developed in STRANDS is called “Linda”.
HAZCEPT: Towards zero road accidents – nature inspired hazard perception
The number of road traffic fatalities worldwide has recently reached 1.3 million each year, with between 20 and 50 million injuries caused by road accidents. In theory, all accidents can be avoided: studies have shown that more than 90% of road accidents are caused by, or related to, human error. Developing an efficient system that can robustly detect hazardous situations is the key to reducing road accidents. The HAZCEPT consortium will focus on automatic hazard scene recognition for safe driving, addressing hazard recognition at three levels: the low-level visual system, the cognitive level, and driver factors in the safe-driving loop.
Contact: Shigang Yue (PI)
LIVCODE: Life like information processing for robust collision detection
EU FP7-PEOPLE-2011-IRSES, Coordinator
Animals are especially good at collision avoidance, even in a dense swarm. In the future, every kind of man-made moving machine, such as ground vehicles, robots, UAVs, aeroplanes, boats and even moving toys, should have the same ability to avoid collisions with other things, provided a robust collision detection sensor is available. The six partners of this EU FP7 project, from the UK, Germany, Japan and China, will look further into insects' visual pathways and take inspiration from animal vision systems to explore robust embedded solutions for vision-based collision detection for future intelligent machines.
Contact: Shigang Yue (PI)
3D Vision Assisted Robotic Harvesting of Broccoli
BBSRC and Innovate UK (Agri-Tech Catalyst, Early Stage Feasibility)
There is an urgent need to reduce the costs of production of field brassica crops, in particular broccoli. Labour costs are a significant proportion of overall production costs, and high labour usage also drives complex management and potentially social issues. In this project, we will test whether 3D camera technology can be used to identify and select broccoli which is ready to harvest within commercial crops. This will provide a key underpinning step towards the development of a fully automatic, camera-guided robotic harvesting system for broccoli. The commercial benefits are highly significant, as the broccoli crop is one of the world's largest vegetable crops, and almost all of it is manually harvested.
Trainable Vision-based Anomaly Detection and Diagnosis (TADD)
Technology Strategy Board funded Technology Inspired CR&D – ICT Project
Market demand for automation of food processing and packaging is increasing, leading to a demand for increased automation of industrial quality control (QC) procedures. This project is developing a multi-purpose intelligent software technology using computer vision and machine learning for automatic detection and diagnosis of faulty products, including raw, processed and packaged food products. The developed vision systems are user-trainable, requiring minimal set-up to work with a wide variety of products and processes. The technology will be refined and evaluated by testing in automated QC equipment in the food industry.
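A minimal sketch of the user-trainable idea: fit a statistical model to feature vectors of good products only, then flag items that deviate too far. The Gaussian model, features and threshold below are our own illustrative assumptions, not the TADD algorithms.

```python
import numpy as np

class TrainableAnomalyDetector:
    """Fit a Gaussian to features of good products; flag items whose
    Mahalanobis distance exceeds a threshold (illustrative only)."""
    def fit(self, X, threshold=3.0):
        X = np.asarray(X, dtype=float)
        self.mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])
        self.cov_inv = np.linalg.inv(cov)
        self.threshold = threshold

    def is_faulty(self, x):
        d = np.asarray(x, dtype=float) - self.mu
        return float(np.sqrt(d @ self.cov_inv @ d)) > self.threshold

# Train on good examples only, e.g. (size, colour-index) per product:
rng = np.random.default_rng(1)
good = rng.normal([10.0, 0.2], [0.5, 0.02], size=(200, 2))
det = TrainableAnomalyDetector()
det.fit(good)
print(det.is_faulty([10.1, 0.21]))  # False: within normal variation
print(det.is_faulty([7.0, 0.40]))   # True: far from the trained model
```

Because only examples of good products are needed, re-training for a new product line reduces to collecting a fresh batch of normal samples, which matches the "minimal set-up" goal described above.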
Contact: Tom Duckett
EYE2E: Building visual brains for fast human machine interaction
In the real world, many animals possess almost perfect sensory systems for fast and efficient interactions within dynamic environments. Vision, as an evolved organ, plays a significant role in the survival of many animal species, and the mechanisms in biological visual pathways provide excellent models for developing artificial vision systems. The four partners of this consortium will work together to explore biological visual systems at both lower and higher levels, through modelling, simulation, integration and realisation in chips, to investigate fast image processing methodologies for human-machine interaction through VLSI chip design and robotic experiments.
Contact: Shigang Yue
Autonomous Control of Agricultural Sprayers
Technology Strategy Board funded CR&D Feasibility Study
This project aims to improve the efficiency of agricultural spraying vehicles by developing a robust control system for the height of the spraying booms, using laser sensing to model the 3D structure of the crop canopy and terrain ahead of the vehicle. This new technology will enable greater autonomy in agricultural sprayers, enhance and simplify interaction between the driver and the vehicle, and result in an optimised spraying process.
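As a basic illustration of the control problem, the sketch below closes a PID loop on the boom-to-canopy distance; the gains, target height and interface names are our own assumptions, and the real system would add feed-forward from the laser model of the terrain ahead.

```python
class BoomHeightPID:
    """Simple PID loop for one spray-boom section (generic control sketch)."""
    def __init__(self, kp=1.2, ki=0.1, kd=0.4, target=0.5):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.target = target          # desired height above canopy (m)
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measured_height, dt):
        """measured_height: boom-to-canopy distance from the laser scanner."""
        error = self.target - measured_height
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        # Positive output raises the boom, negative lowers it.
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = BoomHeightPID()
print(pid.update(0.65, dt=0.05))   # boom too high: negative command lowers it
```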
MultiDS: Multi-Domain Dialogue System Using Deep Reinforcement Learning
Funded by Samsung Electronics
Dialogue systems that interact with humans are still confined to restricted vocabularies, restricted tasks and rigid interactions. The latter is of particular interest to multi-domain dialogue systems, because users at times deviate from the system's expected behaviour (e.g. asking for restaurant information while searching for a hotel). The MultiDS project aims to overcome these limitations through the following research objectives: (1) create a novel learning algorithm for multi-domain (spoken) dialogue management; (2) create a multi-domain dialogue system with support for flexible navigation across domains; and (3) evaluate the proposed dialogue system using spoken interaction.
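To illustrate the reinforcement learning framing, the toy sketch below rewards a policy for switching domains when the user jumps topics; it uses a tabular Q-learning update standing in for the deep networks the project targets, and all states, actions and rewards are illustrative assumptions.

```python
import random
from collections import defaultdict

ACTIONS = ["ask_domain", "request_slot", "offer_result", "switch_domain"]
Q = defaultdict(float)                      # Q[(state, action)] -> value
alpha, gamma, epsilon = 0.1, 0.95, 0.2

def choose_action(state):
    """Epsilon-greedy dialogue action selection."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def learn(state, action, reward, next_state):
    """Standard one-step Q-learning update."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# One simulated turn: the user jumps from hotels to restaurants, and the
# policy is rewarded for switching domains instead of re-prompting.
learn(("hotel", "user_asked_restaurant"), "switch_domain", reward=1.0,
      next_state=("restaurant", "slots_empty"))
print(choose_action(("hotel", "user_asked_restaurant")))  # usually "switch_domain"
```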
Contact: Heriberto Cuayahuitl