Publications

Download the BibTeX file of all L-CAS publications

2020

  • S. Mghames, M. Hanheide, and A. G. Esfahani, “Interactive movement primitives: planning to push occluding pieces for fruit picking,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020.
    [BibTeX] [Abstract] [Download PDF]

    Robotic technology is increasingly considered the major means of fruit picking. However, picking fruits in a dense cluster imposes a challenging research question in terms of motion/path planning, as conventional planning approaches may not find collision-free movements for the robot to reach-and-pick a ripe fruit within a dense cluster. In such cases, the robot needs to safely push unripe fruits to reach a ripe one. Nonetheless, existing approaches to planning pushing movements in cluttered environments either are computationally expensive or only deal with 2-D cases, and are not suitable for fruit picking, where 3-D pushing movements must be computed in a short time. In this work, we present a path planning algorithm for pushing occluding fruits to reach-and-pick a ripe one. Our proposed approach, called Interactive Probabilistic Movement Primitives (I-ProMP), is not computationally expensive (its computation time is in the order of 100 milliseconds) and is readily used for 3-D problems. We demonstrate the efficiency of our approach with pushing unripe strawberries in a simulated polytunnel. Our experimental results confirm I-ProMP successfully pushes table-top grown strawberries and reaches a ripe one.

    @inproceedings{lincoln42217,
    booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
    month = {October},
    title = {Interactive Movement Primitives: Planning to Push Occluding Pieces for Fruit Picking},
    author = {Sariah Mghames and Marc Hanheide and Amir Ghalamzan Esfahani},
    year = {2020},
    note = {{\copyright} 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.},
    url = {http://eprints.lincoln.ac.uk/id/eprint/42217/},
    abstract = {Robotic technology is increasingly considered the major means of fruit picking. However, picking fruits in a dense cluster imposes a challenging research question in terms of motion/path planning, as conventional planning approaches may not find collision-free movements for the robot to reach-and-pick a ripe fruit within a dense cluster. In such cases, the robot needs to safely push unripe fruits to reach a ripe one. Nonetheless, existing approaches to planning pushing movements in cluttered environments either are computationally expensive or only deal with 2-D cases, and are not suitable for fruit picking, where 3-D pushing movements must be computed in a short time. In this work, we present a path planning algorithm for pushing occluding fruits to reach-and-pick a ripe one. Our proposed approach, called Interactive Probabilistic Movement Primitives (I-ProMP), is not computationally expensive (its computation time is in the order of 100 milliseconds) and is readily used for 3-D problems. We demonstrate the efficiency of our approach with pushing unripe strawberries in a simulated polytunnel. Our experimental results confirm I-ProMP successfully pushes table-top grown strawberries and reaches a ripe one.}
    }
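
    I-ProMP builds on Probabilistic Movement Primitives, whose core operation is conditioning a Gaussian distribution over basis-function weights so that the trajectory passes through a via-point (here, a contact that pushes an occluding fruit aside). The following is a minimal 1-D sketch of that standard ProMP conditioning step, with illustrative parameters rather than the authors' implementation:

    import numpy as np

    def gaussian_basis(t, n_basis=10, width=0.02):
        """Normalised Gaussian basis functions evaluated at phase t in [0, 1]."""
        centres = np.linspace(0.0, 1.0, n_basis)
        phi = np.exp(-(t - centres) ** 2 / (2.0 * width))
        return phi / phi.sum()

    def condition_promp(mu_w, sigma_w, t_star, y_star, noise=1e-4):
        """Condition a 1-D ProMP on passing through y_star at phase t_star."""
        phi = gaussian_basis(t_star)
        s = phi @ sigma_w @ phi + noise            # innovation variance (scalar)
        k = (sigma_w @ phi) / s                    # Kalman-style gain
        mu_new = mu_w + k * (y_star - phi @ mu_w)
        sigma_new = sigma_w - np.outer(k, phi @ sigma_w)
        return mu_new, sigma_new

    # Stand-in prior over weights (in practice learned from demonstrations)
    mu_w, sigma_w = np.zeros(10), 0.1 * np.eye(10)
    # Condition the primitive on a pushing via-point at mid-phase
    mu_c, sigma_c = condition_promp(mu_w, sigma_w, t_star=0.5, y_star=0.3)
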
  • W. Khan, G. Das, M. Hanheide, and G. Cielniak, “Incorporating spatial constraints into a Bayesian tracking framework for improved localisation in agricultural environments,” in 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2020.
    [BibTeX] [Abstract] [Download PDF]

    The global navigation satellite system (GNSS) has been considered a panacea for positioning and tracking over the last decade. However, it suffers from severe limitations in terms of accuracy, particularly in highly cluttered and indoor environments. Though real-time kinematics (RTK) supported GNSS promises extremely accurate localisation, such services are expensive, fail in occluded environments and are unavailable in areas where cellular base stations are not accessible. It is, therefore, necessary that the GNSS data be filtered if high accuracy is required. Thus, this article presents a GNSS-based particle filter that exploits the spatial constraints imposed by the environment. In the proposed setup, the state prediction of the sample set follows a restricted motion according to the topological map of the environment. This results in the transition of the samples being confined between specific discrete points, called the topological nodes, defined by a topological map. This is followed by a refinement stage where the full set of predicted samples goes through weighting and resampling, where the weight is proportional to the predicted particle's proximity to the GNSS measurement. Thus, a discrete-space, continuous-time Bayesian filter is proposed, called the Topological Particle Filter (TPF). The proposed TPF is put to the test by localising and tracking fruit pickers inside polytunnels. Fruit pickers inside polytunnels can only follow specific paths according to the topology of the tunnel. These paths are defined in the topological map of the polytunnels and are fed to the TPF to track fruit pickers. Extensive datasets are collected to demonstrate the improved discrete tracking of strawberry pickers inside polytunnels thanks to the exploitation of the environmental constraints.

    @inproceedings{lincoln42419,
    booktitle = {2020 IEEE/RSJ International Conference on Intelligent Robots and Systems},
    month = {October},
    title = {Incorporating Spatial Constraints into a Bayesian Tracking Framework for Improved Localisation in Agricultural Environments},
    author = {Waqas Khan and Gautham Das and Marc Hanheide and Grzegorz Cielniak},
    publisher = {IEEE},
    year = {2020},
    url = {http://eprints.lincoln.ac.uk/id/eprint/42419/},
    abstract = {The global navigation satellite system (GNSS) has been considered a panacea for positioning and tracking over the last decade. However, it suffers from severe limitations in terms of accuracy, particularly in highly cluttered and indoor environments. Though real-time kinematics (RTK) supported GNSS promises extremely accurate localisation, such services are expensive, fail in occluded environments and are unavailable in areas where cellular base stations are not accessible. It is, therefore, necessary that the GNSS data be filtered if high accuracy is required. Thus, this article presents a GNSS-based particle filter that exploits the spatial constraints imposed by the environment. In the proposed setup, the state prediction of the sample set follows a restricted motion according to the topological map of the environment. This results in the transition of the samples being confined between specific discrete points, called the topological nodes, defined by a topological map. This is followed by a refinement stage where the full set of predicted samples goes through weighting and resampling, where the weight is proportional to the predicted particle's proximity to the GNSS measurement. Thus, a discrete-space, continuous-time Bayesian filter is proposed, called the Topological Particle Filter (TPF).
    The proposed TPF is put to the test by localising and tracking fruit pickers inside polytunnels. Fruit pickers inside polytunnels can only follow specific paths according to the topology of the tunnel. These paths are defined in the topological map of the polytunnels and are fed to the TPF to track fruit pickers. Extensive datasets are collected to demonstrate the improved discrete tracking of strawberry pickers inside polytunnels thanks to the exploitation of the environmental constraints.}
    }
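
    The TPF's novelty is that the prediction step is confined to a graph of discrete nodes rather than free space. A toy Python sketch of the predict/weight/resample cycle on an invented four-node map (motion and noise models are illustrative only):

    import numpy as np

    # Invented topological map: node -> 2-D position, plus node adjacency
    nodes = {0: (0.0, 0.0), 1: (0.0, 5.0), 2: (4.0, 5.0), 3: (4.0, 0.0)}
    edges = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}

    def predict(particles, rng):
        """Transitions are restricted to neighbouring nodes of the map."""
        return np.array([rng.choice(edges[int(p)]) for p in particles])

    def weight(particles, gnss_xy, sigma=2.0):
        """Weights are proportional to each particle's proximity to the GNSS fix."""
        pos = np.array([nodes[int(p)] for p in particles])
        d2 = ((pos - np.asarray(gnss_xy)) ** 2).sum(axis=1)
        w = np.exp(-d2 / (2.0 * sigma ** 2))
        return w / w.sum()

    rng = np.random.default_rng(0)
    particles = rng.integers(0, 4, size=200)             # uniform prior over nodes
    for gnss in [(0.5, 4.2), (3.1, 5.3), (4.2, 1.0)]:    # noisy fixes along a path
        particles = predict(particles, rng)
        w = weight(particles, gnss)
        particles = particles[rng.choice(len(particles), len(particles), p=w)]
    print(np.bincount(particles, minlength=4) / len(particles))   # node posterior
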
  • M. T. Fountain, A. Badiee, S. Hemer, A. Delgado, M. Mangan, C. Dowding, F. Davis, and S. Pearson, “The use of light spectrum blocking films to reduce populations of Drosophila suzukii Matsumura in fruit crops,” Scientific Reports, vol. 10, iss. 1, 2020. doi:10.1038/s41598-020-72074-8
    [BibTeX] [Abstract] [Download PDF]

    Spotted wing drosophila, Drosophila suzukii, is a serious invasive pest impacting the production of multiple fruit crops, including soft and stone fruits such as strawberries, raspberries and cherries. Effective control is challenging and reliant on integrated pest management which includes the use of an ever decreasing number of approved insecticides. New means to reduce the impact of this pest that can be integrated into control strategies are urgently required. In many production regions, including the UK, soft fruit are typically grown inside tunnels clad with polyethylene based materials. These can be modified to filter specific wavebands of light. We investigated whether targeted spectral modifications to cladding materials that disrupt insect vision could reduce the incidence of D. suzukii. We present a novel approach that starts from a neuroscientific investigation of insect sensory systems and ends with in-field testing of new cladding materials inspired by the biological data. We show D. suzukii are predominantly sensitive to wavelengths below 405 nm (ultraviolet) and above 565 nm (orange & red) and that targeted blocking of lower wavebands (up to 430 nm) using light restricting materials reduces pest populations up to 73% in field trials.

    @article{lincoln42446,
    volume = {10},
    number = {1},
    month = {September},
    author = {Michelle T. Fountain and Amir Badiee and Sebastian Hemer and Alvaro Delgado and Michael Mangan and Colin Dowding and Frederick Davis and Simon Pearson},
    title = {The use of light spectrum blocking films to reduce populations of Drosophila suzukii Matsumura in fruit crops},
    publisher = {Nature Publishing Group},
    year = {2020},
    journal = {Scientific Reports},
    doi = {10.1038/s41598-020-72074-8},
    url = {http://eprints.lincoln.ac.uk/id/eprint/42446/},
    abstract = {Spotted wing drosophila, Drosophila suzukii, is a serious invasive pest impacting the production of
    multiple fruit crops, including soft and stone fruits such as strawberries, raspberries and cherries.
    Effective control is challenging and reliant on integrated pest management which includes the use
    of an ever decreasing number of approved insecticides. New means to reduce the impact of this pest
    that can be integrated into control strategies are urgently required. In many production regions,
    including the UK, soft fruit are typically grown inside tunnels clad with polyethylene based materials.
    These can be modified to filter specific wavebands of light. We investigated whether targeted spectral
    modifications to cladding materials that disrupt insect vision could reduce the incidence of D. suzukii.
    We present a novel approach that starts from a neuroscientific investigation of insect sensory systems
    and ends with in-field testing of new cladding materials inspired by the biological data. We show D.
    suzukii are predominantly sensitive to wavelengths below 405 nm (ultraviolet) and above 565 nm
    (orange \& red) and that targeted blocking of lower wavebands (up to 430 nm) using light restricting
    materials reduces pest populations up to 73\% in field trials.}
    }
  • F. Del Duchetto, P. Baxter, and M. Hanheide, “Are you still with me? Continuous engagement assessment from a robot’s point of view,” Frontiers in Robotics and AI, vol. 7, iss. 116, 2020. doi:10.3389/frobt.2020.00116
    [BibTeX] [Abstract] [Download PDF]

    Continuously measuring the engagement of users with a robot in a Human-Robot Interaction (HRI) setting paves the way toward in-situ reinforcement learning, improves metrics of interaction quality, and can guide interaction design and behavior optimization. However, engagement is often considered very multi-faceted and difficult to capture in a workable and generic computational model that can serve as an overall measure of engagement. Building upon the intuitive way humans can successfully assess a situation's degree of engagement when they see it, we propose a novel regression model (utilizing CNN and LSTM networks) enabling robots to compute a single scalar engagement during interactions with humans from standard video streams, obtained from the point of view of an interacting robot. The model is based on a long-term dataset from an autonomous tour guide robot deployed in a public museum, with continuous annotation of a numeric engagement assessment by three independent coders. We show that this model not only predicts engagement very well in our own application domain, but also transfers successfully to an entirely different dataset (with different tasks, environment, camera, robot and people). The trained model and the software are available to the HRI community, at https://github.com/LCAS/engagement_detector, as a tool to measure engagement in a variety of settings.

    @article{lincoln42433,
    volume = {7},
    number = {116},
    month = {September},
    author = {Francesco Del Duchetto and Paul Baxter and Marc Hanheide},
    title = {Are You Still With Me? Continuous Engagement Assessment From a Robot's Point of View},
    publisher = {Frontiers Media S.A.},
    year = {2020},
    journal = {Frontiers in Robotics and AI},
    doi = {10.3389/frobt.2020.00116},
    url = {http://eprints.lincoln.ac.uk/id/eprint/42433/},
    abstract = {Continuously measuring the engagement of users with a robot in a Human-Robot Interaction (HRI) setting paves the way toward in-situ reinforcement learning, improves metrics of interaction quality, and can guide interaction design and behavior optimization. However, engagement is often considered very multi-faceted and difficult to capture in a workable and generic computational model that can serve as an overall measure of engagement. Building upon the intuitive way humans can successfully assess a situation's degree of engagement when they see it, we propose a novel regression model (utilizing CNN and LSTM networks) enabling robots to compute a single scalar engagement during interactions with humans from standard video streams, obtained from the point of view of an interacting robot. The model is based on a long-term dataset from an autonomous tour guide robot deployed in a public museum, with continuous annotation of a numeric engagement assessment by three independent coders. We show that this model not only predicts engagement very well in our own application domain, but also transfers successfully to an entirely different dataset (with different tasks, environment, camera, robot and people). The trained model and the software are available to the HRI community, at https://github.com/LCAS/engagement\_detector, as a tool to measure engagement in a variety of settings.}
    }
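
    The released tool wraps a trained model; the underlying regression idea (per-frame CNN features pooled over time by an LSTM into one scalar) can be sketched in a few lines of Keras. Layer sizes below are illustrative, not the published architecture:

    from tensorflow.keras import layers, models

    def build_engagement_regressor(seq_len=10, h=64, w=64):
        frames = layers.Input(shape=(seq_len, h, w, 3))       # short video clip
        per_frame = models.Sequential([                       # CNN feature extractor
            layers.Conv2D(16, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Conv2D(32, 3, activation="relu"),
            layers.GlobalAveragePooling2D(),
        ])
        x = layers.TimeDistributed(per_frame)(frames)         # features per frame
        x = layers.LSTM(32)(x)                                # temporal integration
        score = layers.Dense(1, activation="sigmoid")(x)      # scalar engagement
        return models.Model(frames, score)

    model = build_engagement_regressor()
    model.compile(optimizer="adam", loss="mse")   # regress on the coders' labels
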
  • S. Kottayil, P. Tsoleridis, K. Rossa, R. Connors, and C. Fox, “Investigation of driver route choice behaviour using Bluetooth data,” in 15th World Conference on Transport Research, 2020, p. 632–645. doi:10.1016/j.trpro.2020.08.065
    [BibTeX] [Abstract] [Download PDF]

    Many local authorities use small-scale transport models to manage their transportation networks. These may assume drivers' behaviour to be rational in choosing the fastest route, and thus that all drivers behave the same given an origin and destination, leading to simplified aggregate flow models, fitted to anonymous traffic flow measurements. Recent price falls in traffic sensors, data storage, and compute power now enable Data Science to empirically test such assumptions, by using per-driver data to infer route selection from sensor observations and compare with optimal route selection. A methodology is presented using per-driver data to analyse driver route choice behaviour in transportation networks. Traffic flows on multiple measurable routes for origin destination pairs are compared based on the length of each route. A driver rationality index is defined by considering the shortest physical route between an origin-destination pair. The proposed method is intended to aid calibration of parameters used in traffic assignment models e.g. weights in generalized cost formulations or dispersion within stochastic user equilibrium models. The method is demonstrated using raw sensor datasets collected through Bluetooth sensors in the area of Chesterfield, Derbyshire, UK. The results for this region show that routes with a significant difference in lengths of their paths have the majority (71%) of drivers using the optimal path but as the difference in length decreases, the probability of suboptimal route choice decreases (27%). The methodology can be used for extended research considering the impact on route choice of other factors including travel time and road specific conditions.

    @inproceedings{lincoln34791,
    volume = {48},
    month = {September},
    author = {Sreedevi Kottayil and Panagiotis Tsoleridis and Kacper Rossa and Richard Connors and Charles Fox},
    booktitle = {15th World Conference on Transport Research},
    title = {Investigation of Driver Route Choice Behaviour using Bluetooth Data},
    publisher = {Elsevier},
    year = {2020},
    journal = {Transportation Research Procedia},
    doi = {10.1016/j.trpro.2020.08.065},
    pages = {632--645},
    url = {http://eprints.lincoln.ac.uk/id/eprint/34791/},
    abstract = {Many local authorities use small-scale transport models to manage their transportation networks. These may assume drivers' behaviour to be rational in choosing the fastest route, and thus that all drivers behave the same given an origin and destination, leading to simplified aggregate flow models, fitted to anonymous traffic flow measurements. Recent price falls in traffic sensors, data storage, and compute power now enable Data Science to empirically test such assumptions, by using per-driver data to infer route selection from sensor observations and compare with optimal route selection. A methodology is presented using per-driver data to analyse driver route choice behaviour in transportation networks. Traffic flows on multiple measurable routes for origin destination pairs are compared based on the length of each route. A driver rationality index is defined by considering the shortest physical route between an origin-destination pair. The proposed method is intended to aid calibration of parameters used in traffic assignment models e.g. weights in generalized cost formulations or dispersion within stochastic user equilibrium models. The method is demonstrated using raw sensor datasets collected through Bluetooth sensors in the area of Chesterfield, Derbyshire, UK. The results for this region show that routes with a significant difference in lengths of their paths have the majority (71\%) of drivers using the optimal path but as the difference in length decreases, the probability of suboptimal route choice decreases (27\%). The methodology can be used for extended research considering the impact on route choice of other factors including travel time and road specific conditions.}
    }
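
    One plausible reading of the paper's rationality index is the share of observed trips that use the physically shortest route for an origin-destination pair. A tiny sketch with made-up Bluetooth trip counts that echo the 71% headline figure:

    import numpy as np

    def rationality_share(route_lengths, route_flows):
        """Fraction of trips on the shortest route for one OD pair (illustrative)."""
        lengths = np.asarray(route_lengths, dtype=float)
        flows = np.asarray(route_flows, dtype=float)
        return flows[np.argmin(lengths)] / flows.sum()

    # Routes with very different lengths: most drivers take the shortest path
    print(rationality_share([3.2, 5.9], [710, 290]))    # -> 0.71
    # Near-equal lengths: observed choice looks closer to indifferent
    print(rationality_share([4.0, 4.1], [540, 460]))    # -> 0.54
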
  • A. G. Esfahani, S. Parsa, K. Nazari, and M. Hanheide, “Haptic-guided shared control grasping: collision-free manipulation,” in CASE 2020 - International Conference on Automation Science and Engineering, 2020.
    [BibTeX] [Abstract] [Download PDF]

    We propose a haptic-guided shared control system that provides an operator with force cues during the reach-to-grasp phase of tele-manipulation. The force cues inform the operator of the grasping configuration, which allows collision-free autonomous post-grasp movements. Previous studies showed haptic-guided shared control significantly reduces the complexities of teleoperation. We propose two architectures of shared control in which the operator is informed about (1) the local gradient of the collision cost, and (2) the grasping configuration suitable for collision-free movements of an aimed pick-and-place task. We demonstrate the efficiency of our proposed shared control systems by a series of experiments with a Franka Emika robot. Our experimental results illustrate that our shared control systems successfully inform the operator of predicted collisions between the robot and an obstacle in the robot's workspace. We learned that informing the operator of the global information about the grasping configuration associated with the minimum collision cost of post-grasp movements results in a reach-to-grasp time much shorter than the case in which the operator is informed about the local-gradient information of the collision cost.

    @inproceedings{lincoln41283,
    booktitle = {CASE 2020 - International Conference on Automation Science and Engineering},
    month = {August},
    title = {Haptic-guided shared control grasping: collision-free manipulation},
    author = {Amir Ghalamzan Esfahani and Soran Parsa and Kiyanoush Nazari and Marc Hanheide},
    publisher = {IEEE},
    year = {2020},
    journal = {International Conference on Automation Science and Engineering (CASE)},
    url = {http://eprints.lincoln.ac.uk/id/eprint/41283/},
    abstract = {We propose a haptic-guided shared control system that provides an operator with force cues during the reach-to-grasp phase of tele-manipulation. The force cues inform the operator of the grasping configuration, which allows collision-free autonomous post-grasp movements. Previous studies showed haptic-guided shared control significantly reduces the complexities of teleoperation. We propose two architectures of shared control in which the operator is informed about (1) the local gradient of the collision cost, and (2) the grasping configuration suitable for collision-free movements of an aimed pick-and-place task. We demonstrate the efficiency of our proposed shared control systems by a series of experiments with a Franka Emika robot. Our experimental results illustrate that our shared control systems successfully inform the operator of predicted collisions between the robot and an obstacle in the robot's workspace. We learned that informing the operator of the global information about the grasping configuration associated with the minimum collision cost of post-grasp movements results in a reach-to-grasp time much shorter than the case in which the operator is informed about the local-gradient information of the collision cost.}
    }
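
    Architecture (1) above renders the local gradient of a collision cost as a force cue. A toy sketch using a numerical gradient of an invented quadratic cost (gain, margin and geometry are all illustrative):

    import numpy as np

    def collision_cost(x, obstacle, margin=0.25):
        """Toy cost that grows as the end-effector position x nears the obstacle."""
        d = np.linalg.norm(x - obstacle)
        return 0.0 if d > margin else (margin - d) ** 2

    def haptic_cue(x, obstacle, gain=50.0, eps=1e-4):
        """Force cue along the negative local gradient of the collision cost."""
        grad = np.zeros(3)
        for i in range(3):
            dx = np.zeros(3)
            dx[i] = eps
            grad[i] = (collision_cost(x + dx, obstacle)
                       - collision_cost(x - dx, obstacle)) / (2.0 * eps)
        return -gain * grad              # pushes the operator away from rising cost

    x = np.array([0.40, 0.05, 0.20])         # commanded end-effector position
    obstacle = np.array([0.45, 0.00, 0.20])  # obstacle centre (invented)
    print(haptic_cue(x, obstacle))
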
  • G. Bosworth, C. Fox, L. Price, and M. Collison, “The future of rural mobility study (FoRMS),” Midlands Connect, Project Report, 2020.
    [BibTeX] [Abstract] [Download PDF]

    Recognising the urban focus of many national and regional transport strategies, the purpose of this project is to explore how emerging technologies could support rural economies across the Midlands. Fundamentally, the rationale for the study is to begin with an assessment of rural needs and then to explore a range of mobility innovations, including social innovations as well as technologies, that can provide place-based solutions designed for more rural areas. This avoids the National Transport Strategy assumption that new mobility innovations will inevitably occur in urban areas and then be rolled out across more rural places. While economic realities mean that many private sector transport innovations can start out in urban centres, their rural impacts may be quite different and require alternative responses from rural planners and policy-makers.

    @techreport{lincoln42273,
    month = {August},
    type = {Project Report},
    title = {The Future of Rural Mobility Study (FoRMS)},
    author = {Gary Bosworth and Charles Fox and Liz Price and Martin Collison},
    publisher = {Midlands Connect},
    year = {2020},
    institution = {Midlands Connect},
    url = {http://eprints.lincoln.ac.uk/id/eprint/42273/},
    abstract = {Recognising the urban focus of many national and regional transport strategies, the purpose of this
    project is to explore how emerging technologies could support rural economies across the Midlands.
    Fundamentally, the rationale for the study is to begin with an assessment of rural needs and then
    to explore a range of mobility innovations, including social innovations as well as technologies, that
    can provide place-based solutions designed for more rural areas. This avoids the National Transport
    Strategy assumption that new mobility innovations will inevitably occur in urban areas and then be
    rolled out across more rural places. While economic realities mean that many private sector
    transport innovations can start out in urban centres, their rural impacts may be quite different and
    require alternative responses from rural planners and policy-makers.}
    }
  • I. J. Gould, I. Wright, M. Collison, E. Ruto, G. Bosworth, and S. Pearson, “The impact of coastal flooding on agriculture: a case study of Lincolnshire, United Kingdom,” Land Degradation & Development, vol. 31, iss. 12, p. 1545–1559, 2020. doi:10.1002/ldr.3551
    [BibTeX] [Abstract] [Download PDF]

    Under future climate predictions, the incidence of coastal flooding is set to rise. Many coastal regions at risk, such as those surrounding the North Sea, comprise large areas of low-lying and productive agricultural land. Flood risk assessments typically emphasise the economic consequences of coastal flooding on urban areas and national infrastructure. Impacts on agricultural land have seen less attention, and considerations tend to omit the long-term effects of soil salinity. The aim of this study is to develop a universal framework to evaluate the economic impact of coastal flooding on agriculture. We incorporated existing flood models, satellite-acquired crop data, soil salinity and crop sensitivity to give a novel and detailed assessment of salt damage to agricultural productivity over time. We focussed our case study on low-lying, highly productive agricultural land with a history of flooding in Lincolnshire, UK. The potential impact of agricultural flood damage varied across our study region. Assuming typical cropping does not change post-flood, financial losses range from £1,366/ha to £5,526/ha per inundation; these losses would be reduced by between 35% and 85% in the likely event that an alternative, more salt-tolerant cropping regime is implemented post-flood. These losses are substantially higher than losses calculated on the same areas using the established flood risk assessment framework conventionally used for freshwater flood assessments, with the differences attributed to our longer-term salt damage projections impacting over several years. This suggests flood protection policy needs to consider the local and long-term impacts of flooding on agricultural land.

    @article{lincoln40049,
    volume = {31},
    number = {12},
    month = {July},
    author = {Iain J Gould and Isobel Wright and Martin Collison and Eric Ruto and Gary Bosworth and Simon Pearson},
    title = {The impact of coastal flooding on agriculture: a case study of Lincolnshire, United Kingdom},
    publisher = {Wiley},
    year = {2020},
    journal = {Land Degradation \& Development},
    doi = {10.1002/ldr.3551},
    pages = {1545--1559},
    url = {http://eprints.lincoln.ac.uk/id/eprint/40049/},
    abstract = {Under future climate predictions, the incidence of coastal flooding is set to rise. Many coastal regions at risk, such as those surrounding the North Sea, comprise large areas of low-lying and productive agricultural land. Flood risk assessments typically emphasise the economic consequences of coastal flooding on urban areas and national infrastructure. Impacts on agricultural land have seen less attention, and considerations tend to omit the long-term effects of soil salinity. The aim of this study is to develop a universal framework to evaluate the economic impact of coastal flooding on agriculture. We incorporated existing flood models, satellite-acquired crop data, soil salinity and crop sensitivity to give a novel and detailed assessment of salt damage to agricultural productivity over time. We focussed our case study on low-lying, highly productive agricultural land with a history of flooding in Lincolnshire, UK. The potential impact of agricultural flood damage varied across our study region. Assuming typical cropping does not change post-flood, financial losses range from {\pounds}1,366/ha to {\pounds}5,526/ha per inundation; these losses would be reduced by between 35\% and 85\% in the likely event that an alternative, more salt-tolerant cropping regime is implemented post-flood. These losses are substantially higher than losses calculated on the same areas using the established flood risk assessment framework conventionally used for freshwater flood assessments, with the differences attributed to our longer-term salt damage projections impacting over several years. This suggests flood protection policy needs to consider the local and long-term impacts of flooding on agricultural land.}
    }
  • F. Camara, N. Bellotto, S. Cosar, D. Nathanael, M. Althoff, J. Wu, J. Ruenz, A. Dietrich, and C. Fox, “Pedestrian models for autonomous driving Part I: low-level models, from sensing to tracking,” IEEE Transactions on Intelligent Transport Systems, 2020. doi:10.1109/TITS.2020.3006768
    [BibTeX] [Abstract] [Download PDF]

    Autonomous vehicles (AVs) must share space with pedestrians, both in carriageway cases such as cars at pedestrian crossings and off-carriageway cases such as delivery vehicles navigating through crowds on pedestrianized high-streets. Unlike static obstacles, pedestrians are active agents with complex, interactive motions. Planning AV actions in the presence of pedestrians thus requires modelling of their probable future behaviour as well as detecting and tracking them. This narrative review article is Part I of a pair, together surveying the current technology stack involved in this process, organising recent research into a hierarchical taxonomy ranging from low-level image detection to high-level psychology models, from the perspective of an AV designer. This self-contained Part I covers the lower levels of this stack, from sensing, through detection and recognition, up to tracking of pedestrians. Technologies at these levels are found to be mature and available as foundations for use in high-level systems, such as behaviour modelling, prediction and interaction control.

    @article{lincoln41705,
    month = {July},
    title = {Pedestrian Models for Autonomous Driving Part I: Low-Level Models, from Sensing to Tracking},
    author = {Fanta Camara and Nicola Bellotto and Serhan Cosar and Dimitris Nathanael and Mathias Althoff and Jingyuan Wu and Johannes Ruenz and Andre Dietrich and Charles Fox},
    publisher = {IEEE},
    year = {2020},
    doi = {10.1109/TITS.2020.3006768},
    journal = {IEEE Transactions on Intelligent Transport Systems},
    url = {http://eprints.lincoln.ac.uk/id/eprint/41705/},
    abstract = {Autonomous vehicles (AVs) must share space with pedestrians, both in carriageway cases such as cars at pedestrian crossings and off-carriageway cases such as delivery vehicles navigating through crowds on pedestrianized high-streets. Unlike static obstacles, pedestrians are active agents with complex, interactive motions. Planning AV actions in the presence of pedestrians thus requires modelling of their probable future behaviour as well as detecting and tracking them. This narrative review article is Part I of a pair, together surveying the current technology stack involved in this process, organising recent research into a hierarchical taxonomy ranging from low-level image detection to high-level psychology models, from the perspective of an AV designer. This self-contained Part I covers the lower levels of this stack, from sensing, through detection and recognition, up to tracking of pedestrians. Technologies at these levels are found to be mature and available as foundations for use in high-level systems, such as behaviour modelling, prediction and interaction control.}
    }
  • F. Camara, N. Bellotto, S. Cosar, F. Weber, D. Nathanael, M. Althoff, J. Wu, J. Ruenz, A. Dietrich, G. Markkula, A. Schieben, F. Tango, N. Merat, and C. Fox, “Pedestrian models for autonomous driving Part II: high-level models of human behavior,” IEEE Transactions on Intelligent Transport Systems, 2020. doi:10.1109/TITS.2020.3006767
    [BibTeX] [Abstract] [Download PDF]

    Autonomous vehicles (AVs) must share space with pedestrians, both in carriageway cases such as cars at pedestrian crossings and off-carriageway cases such as delivery vehicles navigating through crowds on pedestrianized high-streets. Unlike static obstacles, pedestrians are active agents with complex, interactive motions. Planning AV actions in the presence of pedestrians thus requires modelling of their probable future behaviour as well as detecting and tracking them. This narrative review article is Part II of a pair, together surveying the current technology stack involved in this process, organising recent research into a hierarchical taxonomy ranging from low-level image detection to high-level psychological models, from the perspective of an AV designer. This self-contained Part II covers the higher levels of this stack, consisting of models of pedestrian behaviour, from prediction of individual pedestrians' likely destinations and paths, to game-theoretic models of interactions between pedestrians and autonomous vehicles. This survey clearly shows that, although there are good models for optimal walking behaviour, high-level psychological and social modelling of pedestrian behaviour still remains an open research question that requires many conceptual issues to be clarified. Early work has been done on descriptive and qualitative models of behaviour, but much work is still needed to translate them into quantitative algorithms for practical AV control.

    @article{lincoln41706,
    month = {July},
    title = {Pedestrian Models for Autonomous Driving Part II: High-Level Models of Human Behavior},
    author = {Fanta Camara and Nicola Bellotto and Serhan Cosar and Florian Weber and Dimitris Nathanael and Matthias Althoff and Jingyuan Wu and Johannes Ruenz and Andre Dietrich and Gustav Markkula and Anna Schieben and Fabio Tango and Natasha Merat and Charles Fox},
    publisher = {IEEE},
    year = {2020},
    doi = {10.1109/TITS.2020.3006767},
    journal = {IEEE Transactions on Intelligent Transport Systems},
    url = {http://eprints.lincoln.ac.uk/id/eprint/41706/},
    abstract = {Autonomous vehicles (AVs) must share space with pedestrians, both in carriageway cases such as cars at pedestrian crossings and off-carriageway cases such as delivery vehicles navigating through crowds on pedestrianized high-streets. Unlike static obstacles, pedestrians are active agents with complex, interactive motions. Planning AV actions in the presence of pedestrians thus requires modelling of their probable future behaviour as well as detecting and tracking them. This narrative review article is Part II of a pair, together surveying the current technology stack involved in this process, organising recent research into a hierarchical taxonomy ranging from low-level image detection to high-level psychological models, from the perspective of an AV designer. This self-contained Part II covers the higher levels of this stack, consisting of models of pedestrian behaviour, from prediction of individual pedestrians' likely destinations and paths, to game-theoretic models of interactions between pedestrians and autonomous vehicles. This survey clearly shows that, although there are good models for optimal walking behaviour, high-level psychological and social modelling of pedestrian behaviour still remains an open research question that requires many conceptual issues to be clarified. Early work has been done on descriptive and qualitative models of behaviour, but much work is still needed to translate them into quantitative algorithms for practical AV control.}
    }
  • L. Roberts-Elliott, M. Fernandez-Carmona, and M. Hanheide, “Towards safer robot motion: using a qualitative motion model to classify human-robot spatial interaction,” in 21st Towards Autonomous Robotic Systems Conference, 2020.
    [BibTeX] [Abstract] [Download PDF]

    For Autonomous Mobile Robots (AMRs) to be adopted across a breadth of industries, they must navigate around humans in a way which is safe and which humans perceive as safe, but without greatly compromising efficiency. This work aims to classify the Human-Robot Spatial Interaction (HRSI) situation of an interacting human and robot, to be applied in Human-Aware Navigation (HAN) to account for situational context. We develop qualitative probabilistic models of relative human and robot movements in various HRSI situations to classify those situations, and explain our plan to develop per-situation probabilistic models of socially legible HRSI to predict human and robot movement. In future work we aim to use these predictions to generate qualitative constraints in the form of metric cost-maps for local robot motion planners, enforcing more efficient and socially legible trajectories which are both physically safe and perceived as safe.

    @inproceedings{lincoln40186,
    booktitle = {21st Towards Autonomous Robotic Systems Conference},
    month = {July},
    title = {Towards Safer Robot Motion: Using a Qualitative Motion Model to Classify Human-Robot Spatial Interaction},
    author = {Laurence Roberts-Elliott and Manuel Fernandez-Carmona and Marc Hanheide},
    publisher = {Springer},
    year = {2020},
    url = {http://eprints.lincoln.ac.uk/id/eprint/40186/},
    abstract = {For Autonomous Mobile Robots (AMRs) to be adopted across a breadth of industries, they must navigate around humans in a way which is safe and which humans perceive as safe, but without greatly compromising efficiency. This work aims to classify the Human-Robot Spatial Interaction (HRSI) situation of an interacting human and robot, to be applied in Human-Aware Navigation (HAN) to account for situational context. We develop qualitative probabilistic models of relative human and robot movements in various HRSI situations to classify those situations, and explain our plan to develop per-situation probabilistic models of socially legible HRSI to predict human and robot movement. In future work we aim to use these predictions to generate qualitative constraints in the form of metric cost-maps for local robot motion planners, enforcing more efficient and socially legible trajectories which are both physically safe and perceived as safe.}
    }
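
    The qualitative models referred to above are in the spirit of the Qualitative Trajectory Calculus, which reduces a pair of trajectories to symbols such as "approaching", "stable" and "receding". A toy distance-based reduction (threshold and poses invented):

    import numpy as np

    def qtc_symbol(p_now, p_next, other, eps=1e-3):
        """-1 if the agent moves towards the other, +1 if away, 0 if stable."""
        d_now = np.linalg.norm(p_now - other)
        d_next = np.linalg.norm(p_next - other)
        if abs(d_next - d_now) < eps:
            return 0
        return -1 if d_next < d_now else 1

    # Two consecutive 2-D poses for a human H and a robot R (invented)
    h0, h1 = np.array([0.0, 0.0]), np.array([0.2, 0.0])
    r0, r1 = np.array([2.0, 0.0]), np.array([1.8, 0.0])
    state = (qtc_symbol(h0, h1, r0), qtc_symbol(r0, r1, h0))
    print(state)   # (-1, -1): both approach, e.g. a head-on situation
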
  • X. Sun, S. Yue, and M. Mangan, “A decentralised neural model explaining optimal integration of navigational strategies in insects,” eLife, vol. 9, 2020. doi:10.7554/eLife.54026
    [BibTeX] [Abstract] [Download PDF]

    Insect navigation arises from the coordinated action of concurrent guidance systems, but the neural mechanisms through which each functions, and are then coordinated, remain unknown. We propose that insects require distinct strategies to retrace familiar routes (route-following) and directly return from novel to familiar terrain (homing) using different aspects of frequency encoded views that are processed in different neural pathways. We also demonstrate how the Central Complex and Mushroom Bodies regions of the insect brain may work in tandem to coordinate the directional output of different guidance cues through a contextually switched ring-attractor inspired by neural recordings. The resultant unified model of insect navigation reproduces behavioural data from a series of cue conflict experiments in realistic animal environments and offers testable hypotheses of where and how insects process visual cues, utilise the different information that they provide and coordinate their outputs to achieve the adaptive behaviours observed in the wild.

    @article{lincoln41703,
    volume = {9},
    month = {July},
    author = {Xuelong Sun and Shigang Yue and Michael Mangan},
    title = {A decentralised neural model explaining optimal integration of navigational strategies in insects},
    publisher = {eLife Sciences Publications},
    journal = {eLife},
    doi = {10.7554/eLife.54026},
    year = {2020},
    url = {http://eprints.lincoln.ac.uk/id/eprint/41703/},
    abstract = {Insect navigation arises from the coordinated action of concurrent guidance systems, but the neural mechanisms through which each functions, and are then coordinated, remain unknown. We propose that insects require distinct strategies to retrace familiar routes (route-following) and directly return from novel to familiar terrain (homing) using different aspects of frequency encoded views that are processed in different neural pathways. We also demonstrate how the Central Complex and Mushroom Bodies regions of the insect brain may work in tandem to coordinate the directional output of different guidance cues through a contextually switched ring-attractor inspired by neural recordings. The resultant unified model of insect navigation reproduces behavioural data from a series of cue conflict experiments in realistic animal environments and offers testable hypotheses of where and how insects process visual cues, utilise the different information that they provide and coordinate their outputs to achieve the adaptive behaviours observed in the wild.}
    }
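
    In the model, the ring attractor weights each guidance system's directional output by its certainty, so the integrated heading approximates an optimal, vector-sum combination of cues. A compact sketch of that cue-integration idea with invented headings and weights:

    import numpy as np

    def integrate_cues(headings, weights):
        """Sum weighted unit vectors; the resultant angle is the integrated
        heading and its length a crude measure of combined confidence."""
        v = np.sum([w * np.array([np.cos(a), np.sin(a)])
                    for a, w in zip(headings, weights)], axis=0)
        return np.arctan2(v[1], v[0]), np.linalg.norm(v)

    # Path integration says home is at 0.0 rad (high certainty after a long
    # outbound run); visual homing says 0.8 rad (lower certainty): the result
    # falls between the two, closer to the more certain cue.
    theta, conf = integrate_cues([0.0, 0.8], [0.7, 0.3])
    print(theta, conf)
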
  • Q. Fu and S. Yue, “Modelling Drosophila motion vision pathways for decoding the direction of translating objects against cluttered moving backgrounds,” Biological Cybernetics, 2020. doi:10.1007/s00422-020-00841-x
    [BibTeX] [Abstract] [Download PDF]

    Decoding the direction of translating objects in front of cluttered moving backgrounds, accurately and efficiently, is still a challenging problem. In nature, lightweight and low-powered flying insects apply motion vision to detect a moving target in highly variable environments during flight, which are excellent paradigms to learn motion perception strategies. This paper investigates the fruit fly Drosophila motion vision pathways and presents computational modelling based on cutting-edge physiological research. The proposed visual system model features bio-plausible ON and OFF pathways, and wide-field horizontal-sensitive (HS) and vertical-sensitive (VS) systems. The main contributions of this research are on two aspects: (1) the proposed model articulates the forming of both direction-selective and direction-opponent responses, revealed as principal features of motion perception neural circuits, in a feed-forward manner; (2) it also shows robust direction selectivity to translating objects in front of cluttered moving backgrounds, via the modelling of spatiotemporal dynamics, including a combination of motion pre-filtering mechanisms and ensembles of local correlators inside both the ON and OFF pathways, which works effectively to suppress irrelevant background motion or distractors, and to improve the dynamic response. Accordingly, the direction of translating objects is decoded as global responses of both the HS and VS systems, with positive or negative output indicating preferred-direction or null-direction translation. The experiments have verified the effectiveness of the proposed neural system model, and demonstrated its responsive preference to faster-moving, higher-contrast and larger-size targets embedded in cluttered moving backgrounds.

    @article{lincoln42133,
    month = {July},
    title = {Modelling Drosophila motion vision pathways for decoding the direction of translating objects against cluttered moving backgrounds},
    author = {Qinbing Fu and Shigang Yue},
    publisher = {Springer},
    year = {2020},
    doi = {10.1007/s00422-020-00841-x},
    journal = {Biological Cybernetics},
    url = {http://eprints.lincoln.ac.uk/id/eprint/42133/},
    abstract = {Decoding the direction of translating objects in front of cluttered moving backgrounds, accurately and efficiently, is still a challenging problem. In nature, lightweight and low-powered flying insects apply motion vision to detect a moving target in highly variable environments during flight, which are excellent paradigms to learn motion perception strategies. This paper investigates the fruit fly Drosophila motion vision pathways and presents computational modelling based on cutting-edge physiological research. The proposed visual system model features bio-plausible ON and OFF pathways, and wide-field horizontal-sensitive (HS) and vertical-sensitive (VS) systems. The main contributions of this research are on two aspects: (1) the proposed model articulates the forming of both direction-selective and direction-opponent responses, revealed as principal features of motion perception neural circuits, in a feed-forward manner; (2) it also shows robust direction selectivity to translating objects in front of cluttered moving backgrounds, via the modelling of spatiotemporal dynamics, including a combination of motion pre-filtering mechanisms and ensembles of local correlators inside both the ON and OFF pathways, which works effectively to suppress irrelevant background motion or distractors, and to improve the dynamic response. Accordingly, the direction of translating objects is decoded as global responses of both the HS and VS systems, with positive or negative output indicating preferred-direction or null-direction translation. The experiments have verified the effectiveness of the proposed neural system model, and demonstrated its responsive preference to faster-moving, higher-contrast and larger-size targets embedded in cluttered moving backgrounds.}
    }
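
    At the heart of such HS/VS models are correlation-type local motion detectors. A minimal Hassenstein-Reichardt-style sketch over a 1-D photoreceptor row (the paper's model is far richer, with separate ON and OFF pathways, pre-filtering and temporal dynamics):

    import numpy as np

    def hassenstein_reichardt(frames, delay=1):
        """Correlate each input with a delayed neighbour in both directions and
        subtract the two half-detectors (direction opponency); summing gives a
        wide-field, HS-like output per time step."""
        delayed = np.roll(frames, delay, axis=0)        # crude temporal delay
        right = delayed[:, :-1] * frames[:, 1:]         # preferred direction
        left = frames[:, :-1] * delayed[:, 1:]          # null direction
        return (right - left).sum(axis=1)

    # A bright bar drifting rightwards across 8 photoreceptors over 6 steps
    frames = np.array([np.roll([1, 1, 0, 0, 0, 0, 0, 0], t) for t in range(6)],
                      dtype=float)
    print(hassenstein_reichardt(frames))   # mostly positive -> rightward motion
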
  • D. Liu, N. Bellotto, and S. Yue, “Deep spiking neural network for video-based disguise face recognition based on dynamic facial movements,” IEEE Transactions on Neural Networks and Learning Systems, vol. 31, iss. 6, p. 1843–1855, 2020. doi:10.1109/TNNLS.2019.2927274
    [BibTeX] [Abstract] [Download PDF]

    With the increasing popularity of social media and smart devices, the face as one of the key biometrics becomes vital for person identification. Amongst face recognition algorithms, video-based face recognition methods could make use of both temporal and spatial information, just as humans do, to achieve better classification performance. However, they cannot identify individuals when certain key facial areas, like the eyes or nose, are disguised by heavy makeup or rubber/digital masks. To this end, we propose a novel deep spiking neural network architecture in this study. It takes dynamic facial movements, the facial muscle changes induced by speaking or other activities, as the sole input. An event-driven continuous spike-timing dependent plasticity learning rule with adaptive thresholding is applied to train the synaptic weights. The experiments on our proposed video-based disguise face database (MakeFace DB) demonstrate that the proposed learning method performs very well – it achieves from 95% to 100% correct classification rates under various realistic experimental scenarios.

    @article{lincoln41718,
    volume = {31},
    number = {6},
    month = {June},
    author = {Daqi Liu and Nicola Bellotto and Shigang Yue},
    title = {Deep Spiking Neural Network for Video-based Disguise Face Recognition Based on Dynamic Facial Movements},
    publisher = {IEEE},
    year = {2020},
    journal = {IEEE Transactions on Neural Networks and Learning Systems},
    doi = {10.1109/TNNLS.2019.2927274},
    pages = {1843--1855},
    url = {http://eprints.lincoln.ac.uk/id/eprint/41718/},
    abstract = {With the increasing popularity of social media and smart devices, the face as one of the key biometrics becomes vital for person identification. Amongst face recognition algorithms, video-based face recognition methods could make use of both temporal and spatial information, just as humans do, to achieve better classification performance. However, they cannot identify individuals when certain key facial areas, like the eyes or nose, are disguised by heavy makeup or rubber/digital masks. To this end, we propose a novel deep spiking neural network architecture in this study. It takes dynamic facial movements, the facial muscle changes induced by speaking or other activities, as the sole input. An event-driven continuous spike-timing dependent plasticity learning rule with adaptive thresholding is applied to train the synaptic weights. The experiments on our proposed video-based disguise face database (MakeFace DB) demonstrate that the proposed learning method performs very well - it achieves from 95\% to 100\% correct classification rates under various realistic experimental scenarios.}
    }
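
    The network is trained with spike-timing dependent plasticity; the paper's rule is event-driven and continuous with adaptive thresholding, but the classic pair-based update it builds on looks like this (constants illustrative):

    import numpy as np

    def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
        """Potentiate when the presynaptic spike precedes the postsynaptic one,
        depress otherwise, exponentially in the spike-time gap (in ms)."""
        dt = t_post - t_pre
        dw = a_plus * np.exp(-dt / tau) if dt > 0 else -a_minus * np.exp(dt / tau)
        return float(np.clip(w + dw, 0.0, 1.0))   # keep the weight bounded

    w = 0.5
    w = stdp_update(w, t_pre=10.0, t_post=15.0)   # causal pairing -> strengthen
    w = stdp_update(w, t_pre=30.0, t_post=22.0)   # anti-causal -> weaken
    print(w)
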
  • K. Elgeneidy and K. Goher, “Structural optimization of adaptive soft Fin Ray fingers with variable stiffening capability,” in IEEE RoboSoft 2020, 2020. doi:10.1109/RoboSoft48309.2020.9115969
    [BibTeX] [Abstract] [Download PDF]

    Soft and adaptable grippers are desired for their ability to operate effectively in unstructured or dynamically changing environments, especially when interacting with delicate or deformable targets. However, utilizing soft bodies often comes at the expense of reduced carrying payload and limited performance in high-force applications. Hence, methods for achieving variable stiffness soft actuators are being investigated to broaden the applications of soft grippers. This paper investigates the structural optimization of adaptive soft fingers based on the Fin Ray effect (Soft Fin Ray), featuring a passive stiffening mechanism that is enabled via layer jamming between deforming flexible ribs. A finite element model of the proposed Soft Fin Ray structure is developed and experimentally validated, with the aim of enhancing the layer jamming behavior for better grasping performance. The results showed that through structural optimization, initial contact forces before jamming can be minimized and final contact forces after jamming can be significantly enhanced, without downgrading the desired passive adaptation to objects. Thus, applications for Soft Fin Ray fingers can range from adaptive delicate grasping to high-force manipulation tasks.

    @inproceedings{lincoln40182,
    booktitle = {IEEE RoboSoft 2020},
    month = {June},
    title = {Structural Optimization of Adaptive Soft Fin Ray Fingers with Variable Stiffening Capability},
    author = {Khaled Elgeneidy and Khaled Goher},
    publisher = {IEEE},
    year = {2020},
    doi = {10.1109/RoboSoft48309.2020.9115969},
    url = {http://eprints.lincoln.ac.uk/id/eprint/40182/},
    abstract = {Soft and adaptable grippers are desired for their ability to operate effectively in unstructured or dynamically changing environments, especially when interacting with delicate or deformable targets. However, utilizing soft bodies often comes at the expense of reduced carrying payload and limited performance in high-force applications. Hence, methods for achieving variable stiffness soft actuators are being investigated to broaden the applications of soft grippers. This paper investigates the structural optimization of adaptive soft fingers based on the Fin Ray effect (Soft Fin Ray), featuring a passive stiffening mechanism that is enabled via layer jamming between deforming flexible ribs. A finite element model of the proposed Soft Fin Ray structure is developed and experimentally validated, with the aim of enhancing the layer jamming behavior for better grasping performance. The results showed that through structural optimization, initial contact forces before jamming can be minimized and final contact forces after jamming can be significantly enhanced, without downgrading the desired passive adaptation to objects. Thus, applications for Soft Fin Ray fingers can range from adaptive delicate grasping to high-force manipulation tasks.}
    }
  • R. Polvara, M. Fernandez-Carmona, M. Hanheide, and G. Neumann, “Next-best-sense: a multi-criteria robotic exploration strategy for RFID tags discovery,” IEEE Robotics and Automation Letters, vol. 5, iss. 3, p. 4477–4484, 2020. doi:10.1109/LRA.2020.3001539
    [BibTeX] [Abstract] [Download PDF]

    Automated exploration is one of the most relevant applications of autonomous robots. In this paper, we suggest a novel online coverage algorithm called Next-Best-Sense (NBS), an extension of the Next-Best-View class of exploration algorithms that optimizes the exploration task balancing multiple criteria. This novel algorithm is applied to the problem of localizing all Radio Frequency Identification (RFID) tags with a mobile robotic platform that is equipped with an RFID reader. We cast this problem as a coverage planning problem by defining a basic sensing operation – a scan with the RFID reader – as the field of “view” of the sensor. NBS evaluates candidate locations with a global utility function which combines utility values for travel distance, information gain, sensing time, battery status and RFID information gain, generalizing the use of Multi-Criteria Decision Making. We developed an RFID reader and tag model in the Gazebo simulator for validation. Experiments performed both in simulation and with a real robot suggest that our NBS approach can successfully localize all the RFID tags while minimizing navigation metrics such as sensing operations, total traveling distance and battery consumption. The code developed is publicly available on the authors’ repository.

    @article{lincoln41120,
    volume = {5},
    number = {3},
    month = {June},
    author = {Riccardo Polvara and Manuel Fernandez-Carmona and Marc Hanheide and Gerhard Neumann},
    title = {Next-Best-Sense: a multi-criteria robotic exploration strategy for RFID tags discovery},
    publisher = {IEEE},
    year = {2020},
    journal = {IEEE Robotics and Automation Letters},
    doi = {10.1109/LRA.2020.3001539},
    pages = {4477--4484},
    url = {http://eprints.lincoln.ac.uk/id/eprint/41120/},
    abstract = {Automated exploration is one of the most relevant applications of autonomous robots. In this paper, we suggest a novel online coverage algorithm called Next-Best-Sense (NBS), an extension of the Next-Best-View class of exploration algorithms that optimizes the exploration task balancing multiple criteria. This novel algorithm is applied to the problem of localizing all Radio Frequency Identification (RFID) tags with a mobile robotic platform that is equipped with an RFID reader. We cast this problem as a coverage planning problem by defining a basic sensing operation -- a scan with the RFID reader -- as the field of "view" of the sensor. NBS evaluates candidate locations with a global utility function which combines utility values for travel distance, information gain, sensing time, battery status and RFID information gain, generalizing the use of Multi-Criteria Decision Making. We developed an RFID reader and tag model in the Gazebo simulator for validation. Experiments performed both in simulation and with a real robot suggest that our NBS approach can successfully localize all the RFID tags while minimizing navigation metrics such as sensing operations, total traveling distance and battery consumption. The code developed is publicly available on the authors' repository.}
    }
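
    NBS scores candidate sensing locations with a global utility over several normalised criteria. A sketch of one such Multi-Criteria Decision Making combination; criterion names, normalisation and weights are illustrative, not the paper's exact formulation:

    import numpy as np

    def nbs_utility(candidates, weights):
        """Weighted sum of min-max-normalised criteria (costs enter negated)."""
        crit = np.array([[c["info_gain"], -c["distance"], -c["sensing_time"],
                          c["battery"], c["rfid_gain"]] for c in candidates])
        lo, hi = crit.min(axis=0), crit.max(axis=0)
        norm = (crit - lo) / np.where(hi > lo, hi - lo, 1.0)
        return norm @ np.asarray(weights)

    candidates = [
        {"info_gain": 0.8, "distance": 4.0, "sensing_time": 10.0, "battery": 0.9, "rfid_gain": 0.2},
        {"info_gain": 0.5, "distance": 1.0, "sensing_time": 8.0, "battery": 0.9, "rfid_gain": 0.6},
    ]
    scores = nbs_utility(candidates, weights=[0.3, 0.2, 0.1, 0.1, 0.3])
    print(int(np.argmax(scores)))   # index of the next-best sensing location
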
  • Q. Fu, H. Wang, J. Peng, and S. Yue, “Improved collision perception neuronal system model with adaptive inhibition mechanism and evolutionary learning,” IEEE Access, vol. 8, p. 108896–108912, 2020. doi:10.1109/ACCESS.2020.3001396
    [BibTeX] [Abstract] [Download PDF]

    Accurate and timely perception of collision in highly variable environments is still a challenging problem for artificial visual systems. As a source of inspiration, the lobula giant movement detectors (LGMDs) in the locust's visual pathways have been studied intensively, and modelled as quick collision detectors against challenges from various scenarios including vehicles and robots. However, the state-of-the-art LGMD models have not achieved acceptable robustness to deal with more challenging scenarios like the various vehicle driving scenes, due to the lack of adaptive signal processing mechanisms. To address this problem, we propose an improved neuronal system model, called LGMD+, that is featured by novel modelling of spatiotemporal inhibition dynamics with biological plausibilities, including 1) lateral inhibitions with global biases defined by a variant of Gaussian distribution, spatially, and 2) an adaptive feedforward inhibition mediation pathway, temporally. Accordingly, the LGMD+ performs more effectively to detect merely approaching objects threatening head-on collision risks by appropriately suppressing motion distractors caused by vibrations, near-miss or approaching stimuli with deviations from the centre view. Through evolutionary learning with a systematic dataset of various crash and non-collision driving scenarios, the LGMD+ shows improved robustness outperforming the previous related methods. After evolution, its computational simplicity, flexibility and robustness have also been well demonstrated by real-time experiments of autonomous micro-mobile robots.

    @article{lincoln42131,
    volume = {8},
    month = {June},
    author = {Qinbing Fu and Huatian Wang and Jigen Peng and Shigang Yue},
    title = {Improved Collision Perception Neuronal System Model with Adaptive Inhibition Mechanism and Evolutionary Learning},
    publisher = {IEEE},
    year = {2020},
    journal = {IEEE Access},
    doi = {10.1109/ACCESS.2020.3001396},
    pages = {108896--108912},
    url = {http://eprints.lincoln.ac.uk/id/eprint/42131/},
    abstract = {Accurate and timely perception of collision in highly variable environments is still a challenging problem for artificial visual systems. As a source of inspiration, the lobula giant movement detectors (LGMDs) in locust's visual pathways have been studied intensively, and modelled as quick collision detectors against challenges from various scenarios including vehicles and robots. However, the state-of-the-art LGMD models have not achieved acceptable robustness to deal with more challenging scenarios like the various vehicle driving scenes, due to the lack of adaptive signal processing mechanisms. To address this problem, we propose an improved neuronal system model, called LGMD+, that is featured by novel modelling of spatiotemporal inhibition dynamics with biological plausibilities including 1) lateral inhibitions with global biases defined by a variant of Gaussian distribution, spatially, and 2) an adaptive feedforward inhibition mediation pathway, temporally. Accordingly, the LGMD+ performs more effectively to detect merely approaching objects threatening head-on collision risks by appropriately suppressing motion distractors caused by vibrations, near-miss or approaching stimuli with deviations from the centre view. Through evolutionary learning with a systematic dataset of various crash and non-collision driving scenarios, the LGMD+ shows improved robustness outperforming the previous related methods. After evolution, its computational simplicity, flexibility and robustness have also been well demonstrated by real-time experiments of autonomous micro-mobile robots.}
    }
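    The spatially biased lateral inhibition described above can be pictured with a short numerical sketch: excitation at each cell is suppressed by a Gaussian-weighted average of its neighbourhood. The kernel size, sigma and subtraction rule are assumptions for illustration, not the published LGMD+ equations.

    # Illustrative Gaussian lateral inhibition (Python); parameters are assumed.
    import numpy as np
    from scipy.signal import convolve2d

    def gaussian_kernel(size=5, sigma=1.5):
        """Normalised 2-D Gaussian used as a lateral-inhibition kernel."""
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
        return k / k.sum()

    def laterally_inhibit(excitation, strength=0.8):
        """Suppress each cell by its Gaussian-weighted neighbourhood."""
        inhibition = convolve2d(excitation, gaussian_kernel(),
                                mode="same", boundary="symm")
        return np.maximum(excitation - strength * inhibition, 0.0)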
  • Y. M. Lee, R. Madigan, O. Giles, L. Garach-Morcillo, G. Markkula, C. Fox, F. Camara, M. Rothmueller, S. A. Vendelbo-Larsen, P. H. Rasmussen, A. Dietrich, D. Nathanael, V. Portouli, A. Schieben, and N. Merat, “Road users rarely use explicit communication when interacting in today’s traffic: implications for automated vehicles,” Cognition, technology & work, 2020. doi:10.1007/s10111-020-00635-y
    [BibTeX] [Abstract] [Download PDF]

    To be successful, automated vehicles (AVs) need to be able to manoeuvre in mixed traffic in a way that will be accepted by road users, and maximises traffic safety and efficiency. A likely prerequisite for this success is for AVs to be able to communicate effectively with other road users in a complex traffic environment. The current study, conducted as part of the European project interACT, investigates the communication strategies used by drivers and pedestrians while crossing the road at six observed locations, across three European countries. In total, 701 road user interactions were observed and annotated, using an observation protocol developed for this purpose. The observation protocols identified 20 event categories, observed from the approaching vehicles/drivers and pedestrians. These included information about movement, looking behaviour, hand gestures, and signals used, as well as some demographic data. These observations illustrated that explicit communication techniques, such as honking, flashing headlights by drivers, or hand gestures by drivers and pedestrians, rarely occurred. This observation was consistent across sites. In addition, a follow-on questionnaire, administered to a sub-set of the observed pedestrians after crossing the road, found that when contemplating a crossing, pedestrians were more likely to use vehicle-based behaviour, rather than communication cues from the driver. Overall, the findings suggest that vehicle-based movement information such as yielding cues are more likely to be used by pedestrians while crossing the road, compared to explicit communication cues from drivers, although some cultural differences were observed. The implications of these findings are discussed with respect to design of suitable external interfaces and communication of intent by future automated vehicles.

    @article{lincoln41217,
    month = {June},
    title = {Road users rarely use explicit communication when interacting in today's traffic: implications for automated vehicles},
    author = {Yee Mun Lee and Ruth Madigan and Oscar Giles and Laura Garach-Morcillo and Gustav Markkula and Charles Fox and Fanta Camara and Markus Rothmueller and Signe Alexandra Vendelbo-Larsen and Pernille Holm Rasmussen and Andre Dietrich and Dimitris Nathanael and Villy Portouli and Anna Schieben and Natasha Merat},
    publisher = {Springer},
    year = {2020},
    doi = {10.1007/s10111-020-00635-y},
    journal = {Cognition, Technology \& Work},
    url = {http://eprints.lincoln.ac.uk/id/eprint/41217/},
    abstract = {To be successful, automated vehicles (AVs) need to be able to manoeuvre in mixed traffic in a way that will be accepted by road users, and maximises traffic safety and efficiency. A likely prerequisite for this success is for AVs to be able to communicate effectively with other road users in a complex traffic environment. The current study, conducted as part of the European project interACT, investigates the communication strategies used by drivers and pedestrians while crossing the road at six observed locations, across three European countries. In total, 701 road user interactions were observed and annotated, using an observation protocol developed for this purpose. The observation protocols identified 20 event categories, observed from the approaching vehicles/drivers and pedestrians. These included information about movement, looking behaviour, hand gestures, and signals used, as well as some demographic data. These observations illustrated that explicit communication techniques, such as honking, flashing headlights by drivers, or hand gestures by drivers and pedestrians, rarely occurred. This observation was consistent across sites. In addition, a follow-on questionnaire, administered to a sub-set of the observed pedestrians after crossing the road, found that when contemplating a crossing, pedestrians were more likely to use vehicle-based behaviour, rather than communication cues from the driver. Overall, the findings suggest that vehicle-based movement information such as yielding cues are more likely to be used by pedestrians while crossing the road, compared to explicit communication cues from drivers, although some cultural differences were observed. The implications of these findings are discussed with respect to design of suitable external interfaces and communication of intent by future automated vehicles.}
    }
  • Z. Yan, S. Schreiberhuber, G. Halmetschlager, T. Duckett, M. Vincze, and N. Bellotto, “Robot perception of static and dynamic objects with an autonomous floor scrubber,” Intelligent service robotics, 2020. doi:10.1007/s11370-020-00324-9
    [BibTeX] [Abstract] [Download PDF]

    This paper presents the perception system of a new professional cleaning robot for large public places. The proposed system is based on multiple sensors including 3D and 2D lidar, two RGB-D cameras and a stereo camera. The two lidars together with an RGB-D camera are used for dynamic object (human) detection and tracking, while the second RGB-D and stereo camera are used for detection of static objects (dirt and ground objects). A learning and reasoning module for spatial-temporal representation of the environment based on the perception pipeline is also introduced. Furthermore, a new dataset collected with the robot in several public places, including a supermarket, a warehouse and an airport, is released. Baseline results on this dataset for further research and comparison are provided. The proposed system has been fully implemented into the Robot Operating System (ROS) with high modularity, also publicly available to the community.

    @article{lincoln40882,
    month = {June},
    title = {Robot Perception of Static and Dynamic Objects with an Autonomous Floor Scrubber},
    author = {Zhi Yan and Simon Schreiberhuber and Georg Halmetschlager and Tom Duckett and Markus Vincze and Nicola Bellotto},
    publisher = {Springer},
    year = {2020},
    doi = {10.1007/s11370-020-00324-9},
    journal = {Intelligent Service Robotics},
    url = {http://eprints.lincoln.ac.uk/id/eprint/40882/},
    abstract = {This paper presents the perception system of a new professional cleaning robot for large public places. The proposed system is based on multiple sensors including 3D and 2D lidar, two RGB-D cameras and a stereo camera. The two lidars together with an RGB-D camera are used for dynamic object (human) detection and tracking, while the second RGB-D and stereo camera are used for detection of static objects (dirt and ground objects). A learning and reasoning module for spatial-temporal representation of the environment based on the perception pipeline is also introduced. Furthermore, a new dataset collected with the robot in several public places, including a supermarket, a warehouse and an airport, is released. Baseline results on this dataset for further research and comparison are provided. The proposed system has been fully implemented into the Robot Operating System (ROS) with high modularity, also publicly available to the community.}
    }
  • I. Albayati, A. Postnikov, S. Pearson, R. Bickerton, A. Zolotas, and C. Bingham, “Power and energy analysis for a commercial retail refrigeration system responding to a static demand side response,” International journal of electrical power & energy systems, vol. 117, p. 105645, 2020. doi:10.1016/j.ijepes.2019.105645
    [BibTeX] [Abstract] [Download PDF]

    The paper considers the impact of Demand Side Response events on supply power profile and energy efficiency of widely distributed aggregated loads applied across commercial refrigeration systems. Responses to secondary grid frequency static DSR events are investigated. Experimental trials are conducted on a system of refrigerators representing a small retail store, and subsequently on the refrigerators of an operational superstore in the UK. Energy consumption and energy savings during 3 hours of operation, pre and post-secondary DSR, are discussed. In addition, a simultaneous secondary DSR event is realised across three operational retail stores located in different geographical regions of the UK. A Simulink model for a 3Φ power network is used to investigate the impact of a synchronised return to normal operation of the aggregated refrigeration systems post DSR on the local power network. Results show ~1% drop in line voltage due to the synchronised return to operation. An analysis of energy consumption shows that DSR events can facilitate energy savings of between 3.8% and 9.3% compared to normal operation. This is a result of the refrigerators operating more efficiently during and shortly after the DSR. The use of aggregated refrigeration loads can contribute to the necessary load-shed by 97.3% at the beginning of DSR and 27% during 30 minutes DSR, based on a simultaneous DSR event carried out on three retail stores.

    @article{lincoln38163,
    volume = {117},
    month = {May},
    author = {Ibrahim Albayati and Andrey Postnikov and Simon Pearson and Ronald Bickerton and Argyrios Zolotas and Chris Bingham},
    title = {Power and Energy Analysis for a Commercial Retail Refrigeration System Responding to a Static Demand Side Response},
    publisher = {Elsevier},
    year = {2020},
    journal = {International Journal of Electrical Power \& Energy Systems},
    doi = {10.1016/j.ijepes.2019.105645},
    pages = {105645},
    url = {http://eprints.lincoln.ac.uk/id/eprint/38163/},
    abstract = {The paper considers the impact of Demand Side Response events on supply power profile and energy efficiency of widely distributed aggregated loads applied across commercial refrigeration systems. Responses to secondary grid frequency static DSR events are investigated. Experimental trials are conducted on a system of refrigerators representing a small retail store, and subsequently on the refrigerators of an operational superstore in the UK. Energy consumption and energy savings during 3 hours of operation, pre and post-secondary DSR, are discussed. In addition, a simultaneous secondary DSR event is realised across three operational retail stores located in different geographical regions of the UK. A Simulink model for a 3{\ensuremath{\Phi}} power network is used to investigate the impact of a synchronised return to normal operation of the aggregated refrigeration systems post DSR on the local power network. Results show {\texttt{\char126}}1\% drop in line voltage due to the synchronised return to operation. An analysis of energy consumption shows that DSR events can facilitate energy savings of between 3.8\% and 9.3\% compared to normal operation. This is a result of the refrigerators operating more efficiently during and shortly after the DSR. The use of aggregated refrigeration loads can contribute to the necessary load-shed by 97.3\% at the beginning of DSR and 27\% during 30 minutes DSR, based on a simultaneous DSR event carried out on three retail stores.}
    }
  • F. Camara, P. Dickenson, N. Merat, and C. Fox, “Examining pedestrian-autonomous vehicle interactions in virtual reality,” in 8th transport research arena tra 2020, 2020.
    [BibTeX] [Abstract] [Download PDF]

    Autonomous vehicles now have well developed algorithms and open source software for localisation and navigation in static environments but their future interactions with other road users in mixed traffic environments, especially with pedestrians, raise some concerns. Pedestrian behaviour is complex to model and unpredictable, thus creating a big challenge for self-driving cars. This paper examines pedestrian behaviour during crossing scenarios with a game theoretic autonomous vehicle in virtual reality. In a first experiment, we recorded participants’ trajectories and found that they were crossing more cautiously in VR than in previous laboratory experiments. In two other experiments, we used a gradient descent approach to investigate participants’ preference for a certain AV driving style. We found that the majority of them were not expecting the car to stop in these scenarios. These results suggest that VR is an interesting tool for testing autonomous vehicle algorithms and for finding out about pedestrian preferences.

    @inproceedings{lincoln40029,
    booktitle = {8th Transport Research Arena TRA 2020},
    month = {April},
    title = {Examining Pedestrian-Autonomous Vehicle Interactions in Virtual Reality},
    author = {Fanta Camara and Patrick Dickenson and Natasha Merat and Charles Fox},
    year = {2020},
    url = {http://eprints.lincoln.ac.uk/id/eprint/40029/},
    abstract = {Autonomous vehicles now have well developed algorithms and open source software for localisation and navigation in static environments but their future interactions with other road users in mixed traffic environments, especially with pedestrians, raise some concerns. Pedestrian behaviour is complex to model and unpredictable, thus creating a big challenge for self-driving cars. This paper examines pedestrian behaviour during crossing scenarios with a game theoretic autonomous vehicle in virtual reality. In a first experiment, we recorded participants' trajectories and found that they were crossing more cautiously in VR than in previous laboratory experiments. In two other experiments, we used a gradient descent approach to investigate participants' preference for a certain AV driving style. We found that the majority of them were not expecting the car to stop in these scenarios. These results suggest that VR is an interesting tool for testing autonomous vehicle algorithms and for finding out about pedestrian preferences.}
    }
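    The gradient descent procedure mentioned in this abstract can be pictured with a toy update rule in which a scalar driving-style parameter is nudged in the direction of each participant's response. The parameter, step size and feedback encoding are hypothetical reconstructions, not the study's protocol.

    # Toy preference search (Python); all names and values are assumptions.
    def update_driving_style(theta, feedback, step=0.1):
        """Nudge a scalar driving-style parameter towards the participant's
        preference: feedback is +1 for 'more assertive', -1 for 'more cautious'."""
        return theta + step * feedback

    theta = 0.5
    for feedback in (-1, -1, -1):  # participant keeps asking for caution
        theta = update_driving_style(theta, feedback)
    print(theta)  # 0.2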
  • V. R. Ponnambalam, J. P. Fentanes, G. Das, G. Cielniak, J. G. O. Gjevestad, and P. From, “Agri-cost-maps - integration of environmental constraints into navigation systems for agricultural robot,” in 6th international conference on control, automation and robotics (iccar), 2020. doi:10.1109/ICCAR49639.2020.9108030
    [BibTeX] [Abstract] [Download PDF]

    Robust navigation is a key ability for agricultural robots. Such robots must operate safely minimizing their impact on the soil and avoiding crop damage. This paper proposes a method for unified incorporation of the application-specific constraints into the navigation system of robots deployed in different agricultural environments. The constraints are incorporated as an additional cost-map layer into the ROS navigation stack. These so-called Agri-Cost-Maps facilitate the transition from the tailored navigation systems typical for the current generation of agricultural robots to a more flexible ROS-based navigation framework that can be easily deployed for different agricultural applications. We demonstrate the applicability of this framework in three different agricultural scenarios, evaluate its benefits in simulation and demonstrate its validity in a real-world setting.

    @inproceedings{lincoln42418,
    booktitle = {6th International Conference on Control, Automation and Robotics (ICCAR)},
    month = {April},
    title = {Agri-Cost-Maps - Integration of Environmental Constraints into Navigation Systems for Agricultural Robot},
    author = {Vignesh Raja Ponnambalam and Jaime Pulido Fentanes and Gautham Das and Grzegorz Cielniak and Jon Glenn Omholt Gjevestad and Pal From},
    publisher = {IEEE},
    year = {2020},
    doi = {10.1109/ICCAR49639.2020.9108030},
    url = {http://eprints.lincoln.ac.uk/id/eprint/42418/},
    abstract = {Robust navigation is a key ability for agricultural robots. Such robots must operate safely minimizing their impact on the soil and avoiding crop damage. This paper proposes a method for unified incorporation of the application-specific constraints into the navigation system of robots deployed in different agricultural environments. The constraints are incorporated as an additional cost-map layer into the ROS navigation stack. These so-called Agri-Cost-Maps facilitate the transition from the tailored navigation systems typical for the current generation of agricultural robots to a more flexible ROS-based navigation framework that can be easily deployed for different agricultural applications. We demonstrate the applicability of this framework in three different agricultural scenarios, evaluate its benefits in simulation and demonstrate its validity in a real-world setting.}
    }
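    The cost-map layering idea translates into a small grid operation: an application-specific constraint layer is merged into the navigation cost-map, here by a cell-wise maximum. The merge rule and grid encoding are assumptions for illustration; the paper realises this as an additional cost-map layer in the ROS navigation stack.

    # Illustrative Agri-Cost-Maps-style layer merge (Python); encoding assumed.
    import numpy as np

    def add_agri_layer(costmap, constraint_layer):
        """Merge an environmental-constraint layer into the cost-map.
        Cells: 0 = free, 100 = lethal, values in between = traversal cost."""
        return np.maximum(costmap, constraint_layer)

    base = np.zeros((200, 200), dtype=np.uint8)
    crop_rows = np.zeros_like(base)
    crop_rows[:, 80:90] = 90          # mark a crop row as high-cost
    combined = add_agri_layer(base, crop_rows)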
  • T. Pardi, V. Ortenzi, C. Fairbairn, T. Pipe, A. G. Esfahani, and R. Stolkin, “Planning maximum-manipulability cutting paths,” Ieee robotics and automation letters, vol. 5, iss. 2, p. 1999–2006, 2020. doi:10.1109/LRA.2020.2970949
    [BibTeX] [Abstract] [Download PDF]

    This paper presents a method for constrained motion planning from vision, which enables a robot to move its end-effector over an observed surface, given start and destination points. The robot has no prior knowledge of the surface shape but observes it from a noisy point cloud. We consider the multi-objective optimisation problem of finding robot trajectories which maximise the robot’s manipulability throughout the motion, while also minimising surface-distance travelled between the two points. This work has application in industrial problems of rough robotic cutting, e.g., demolition of the legacy nuclear plant, where the cut path need not be precise as long as it achieves dismantling. We show how detours in the path can be leveraged to increase the manipulability of the robot at all points along the path. This helps to avoid singularities while maximising the robot’s capability to make small deviations during task execution. We show how a sampling-based planner can be projected onto the Riemannian manifold of a curved surface, and extended to include a term which maximises manipulability. We present the results of empirical experiments, with both simulated and real robots, which are tasked with moving over a variety of different surface shapes. Our planner enables successful task completion while ensuring significantly greater manipulability when compared against a conventional RRT* planner.

    @article{lincoln41285,
    volume = {5},
    number = {2},
    month = {April},
    author = {Tommaso Pardi and Valerio Ortenzi and Colin Fairbairn and Tony Pipe and Amir Ghalamzan Esfahani and Rustam Stolkin},
    title = {Planning maximum-manipulability cutting paths},
    publisher = {IEEE},
    year = {2020},
    journal = {IEEE Robotics and Automation Letters},
    doi = {10.1109/LRA.2020.2970949},
    pages = {1999--2006},
    url = {http://eprints.lincoln.ac.uk/id/eprint/41285/},
    abstract = {This paper presents a method for constrained motion planning from vision, which enables a robot to move its end-effector over an observed surface, given start and destination points. The robot has no prior knowledge of the surface shape but observes it from a noisy point cloud. We consider the multi-objective optimisation problem of finding robot trajectories which maximise the robot's manipulability throughout the motion, while also minimising surface-distance travelled between the two points. This work has application in industrial problems of rough robotic cutting, e.g., demolition of the legacy nuclear plant, where the cut path need not be precise as long as it achieves dismantling. We show how detours in the path can be leveraged to increase the manipulability of the robot at all points along the path. This helps to avoid singularities while maximising the robot's capability to make small deviations during task execution. We show how a sampling-based planner can be projected onto the Riemannian manifold of a curved surface, and extended to include a term which maximises manipulability. We present the results of empirical experiments, with both simulated and real robots, which are tasked with moving over a variety of different surface shapes. Our planner enables successful task completion while ensuring significantly greater manipulability when compared against a conventional RRT* planner.}
    }
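    The manipulability the planner maximises is conventionally measured with Yoshikawa's index, w = sqrt(det(J J^T)) for the manipulator Jacobian J. The path score below, trading surface distance against the worst manipulability on the path, is an illustrative assumption rather than the paper's exact objective.

    # Yoshikawa manipulability and an assumed path objective (Python).
    import numpy as np

    def manipulability(jacobian):
        """Yoshikawa's measure w = sqrt(det(J J^T))."""
        return np.sqrt(np.linalg.det(jacobian @ jacobian.T))

    def path_score(path_jacobians, path_length, alpha=1.0):
        """Trade off surface distance against the worst manipulability
        along the path; the weighting alpha is a free assumption."""
        worst_w = min(manipulability(J) for J in path_jacobians)
        return alpha * worst_w - path_length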
  • W. Martindale, S. Pearson, M. Swainson, L. Korir, I. Wright, A. M. Opiyo, B. Karanja, S. Nyalala, and M. Kumar, “Framing food security and food loss statistics for incisive supply chain improvement and knowledge transfer between kenyan, indian and united kingdom food manufacturers,” Emerald open research, vol. 2, iss. 12, 2020. doi:10.35241/emeraldopenres.13414.1
    [BibTeX] [Abstract] [Download PDF]

    The application of global indices of nutrition and food sustainability in public health and the improvement of product profiles has facilitated effective actions that increase food security. In the research reported here we develop index measurements further so that they can be applied to food categories and be used by food processors and manufacturers for specific food supply chains. This research considers how they can be used to assess the sustainability of supply chain operations by stimulating more incisive food loss and waste reduction planning. The research demonstrates how an index driven approach focussed on improving both nutritional delivery and reducing food waste will result in improved food security and sustainability. Nutritional improvements are focussed on protein supply and reduction of food waste on supply chain losses and the methods are tested using the food systems of Kenya and India where the current research is being deployed. Innovative practices will emerge when nutritional improvement and waste reduction actions demonstrate market success, and this will result in the co-development of food manufacturing infrastructure and innovation programmes. The use of established indices of sustainability and security enable comparisons that encourage knowledge transfer and the establishment of cross-functional indices that quantify national food nutrition, security and sustainability. The research presented in this initial study is focussed on applying these indices to specific food supply chains for food processors and manufacturers.

    @article{lincoln40529,
    volume = {2},
    number = {12},
    month = {April},
    author = {Wayne Martindale and Simon Pearson and Mark Swainson and Lilian Korir and Isobel Wright and Arnold M. Opiyo and Benard Karanja and Samuel Nyalala and Mahesh Kumar},
    title = {Framing food security and food loss statistics for incisive supply chain improvement and knowledge transfer between Kenyan, Indian and United Kingdom food manufacturers},
    publisher = {Emerald},
    year = {2020},
    journal = {Emerald Open Research},
    doi = {10.35241/emeraldopenres.13414.1},
    url = {http://eprints.lincoln.ac.uk/id/eprint/40529/},
    abstract = {The application of global indices of nutrition and food sustainability in public health and the improvement of product profiles has facilitated effective actions that increase food security. In the research reported here we develop index measurements further so that they can be applied to food categories and be used by food processors and manufacturers for specific food supply chains. This research considers how they can be used to assess the sustainability of supply chain operations by stimulating more incisive food loss and waste reduction planning. The research demonstrates how an index driven approach focussed on improving both nutritional delivery and reducing food waste will result in improved food security and sustainability. Nutritional improvements are focussed on protein supply and reduction of food waste on supply chain losses and the methods are tested using the food systems of Kenya and India where the current research is being deployed. Innovative practices will emerge when nutritional improvement and waste reduction actions demonstrate market success, and this will result in the co-development of food manufacturing infrastructure and innovation programmes. The use of established indices of sustainability and security enable comparisons that encourage knowledge transfer and the establishment of cross-functional indices that quantify national food nutrition, security and sustainability. The research presented in this initial study is focussed on applying these indices to specific food supply chains for food processors and manufacturers.}
    }
  • S. Cosar and N. Bellotto, “Human re-identification with a robot thermal camera using entropy-based sampling,” Journal of intelligent and robotic systems, vol. 98, iss. 1, p. 85–102, 2020. doi:10.1007/s10846-019-01026-w
    [BibTeX] [Abstract] [Download PDF]

    Human re-identification is an important feature of domestic service robots, in particular for elderly monitoring and assistance, because it allows them to perform personalized tasks and human-robot interactions. However, vision-based re-identification systems are subject to limitations due to human pose and poor lighting conditions. This paper presents a new re-identification method for service robots using thermal images. In robotic applications, as the number and size of thermal datasets is limited, it is hard to use approaches that require a huge amount of training samples. We propose a re-identification system that can work using only a small amount of data. During training, we perform entropy-based sampling to obtain a thermal dictionary for each person. Then, a symbolic representation is produced by converting each video into sequences of dictionary elements. Finally, we train a classifier using this symbolic representation and geometric distribution within the new representation domain. The experiments are performed on a new thermal dataset for human re-identification, which includes various situations of human motion, poses and occlusion, and which is made publicly available for research purposes. The proposed approach has been tested on this dataset and its improvements over standard approaches have been demonstrated.

    @article{lincoln35778,
    volume = {98},
    number = {1},
    month = {April},
    author = {Serhan Cosar and Nicola Bellotto},
    title = {Human Re-Identification with a Robot Thermal Camera using Entropy-based Sampling},
    publisher = {Springer},
    year = {2020},
    journal = {Journal of Intelligent and Robotic Systems},
    doi = {10.1007/s10846-019-01026-w},
    pages = {85--102},
    url = {http://eprints.lincoln.ac.uk/id/eprint/35778/},
    abstract = {Human re-identification is an important feature of domestic service robots, in particular for elderly monitoring and assistance, because it allows them to perform personalized tasks and human-robot interactions. However, vision-based re-identification systems are subject to limitations due to human pose and poor lighting conditions. This paper presents a new re-identification method for service robots using thermal images. In robotic applications, as the number and size of thermal datasets is limited, it is hard to use approaches that require a huge amount of training samples. We propose a re-identification system that can work using only a small amount of data. During training, we perform entropy-based sampling to obtain a thermal dictionary for each person. Then, a symbolic representation is produced by converting each video into sequences of dictionary elements. Finally, we train a classifier using this symbolic representation and geometric distribution within the new representation domain. The experiments are performed on a new thermal dataset for human re-identification, which includes various situations of human motion, poses and occlusion, and which is made publicly available for research purposes. The proposed approach has been tested on this dataset and its improvements over standard approaches have been demonstrated.}
    }
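    Entropy-based sampling of the kind used here can be sketched in a few lines: score each thermal frame by the Shannon entropy of its intensity histogram, then keep the highest-entropy frames as the per-person dictionary. The histogram size and top-k rule are assumptions for illustration.

    # Illustrative entropy-based dictionary sampling (Python).
    import numpy as np

    def frame_entropy(frame, bins=64):
        """Shannon entropy of a frame's intensity histogram."""
        hist, _ = np.histogram(frame, bins=bins)
        p = hist[hist > 0] / hist.sum()
        return -np.sum(p * np.log2(p))

    def build_dictionary(frames, k=10):
        """Keep the k most informative (highest-entropy) frames."""
        return sorted(frames, key=frame_entropy, reverse=True)[:k]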
  • X. Li, C. Fox, and S. Coutts, “Deep learning for robotic strawberry harvesting,” in Ukras20, 2020, p. 80–82. doi:10.31256/Bj3Kl5B
    [BibTeX] [Abstract] [Download PDF]

    We develop a novel machine learning based robotic strawberry harvesting system for fruit counting, sizing/weighting, and yield prediction.

    @inproceedings{lincoln41273,
    month = {April},
    author = {Xiaodong Li and Charles Fox and Shaun Coutts},
    booktitle = {UKRAS20},
    title = {Deep learning for robotic strawberry harvesting},
    publisher = {UK-RAS},
    doi = {10.31256/Bj3Kl5B},
    pages = {80--82},
    year = {2020},
    url = {http://eprints.lincoln.ac.uk/id/eprint/41273/},
    abstract = {We develop a novel machine learning based robotic strawberry harvesting system for fruit counting, sizing/weighting, and yield prediction.}
    }
  • F. D. Duchetto, P. Baxter, and M. Hanheide, “Automatic assessment and learning of robot social abilities,” in Companion of the 2020 acm/ieee international conference on human-robot interaction, 2020, p. 561–563. doi:10.1145/3371382.3377430
    [BibTeX] [Abstract] [Download PDF]

    One of the key challenges of current state-of-the-art robotic deployments in public spaces, where the robot is supposed to interact with humans, is the generation of behaviors that are engaging for the users. Eliciting engagement during an interaction, and maintaining it after the initial phase of the interaction, is still an issue to be overcome. There is evidence that engagement in learning activities is higher in the presence of a robot, particularly if novel [1], but after the initial engagement state, long and non-interactive behaviors are detrimental to the continued engagement of the users [5, 16]. Overcoming this limitation requires to design robots with enhanced social abilities that go past monolithic behaviours and introduces in-situ learning and adaptation to the specific users and situations. To do so, the robot must have the ability to perceive the state of the humans participating in the interaction and use this feedback for the selection of its own actions over time [27].

    @inproceedings{lincoln40509,
    booktitle = {Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction},
    month = {March},
    title = {Automatic Assessment and Learning of Robot Social Abilities},
    author = {Francesco Del Duchetto and Paul Baxter and Marc Hanheide},
    year = {2020},
    pages = {561--563},
    doi = {10.1145/3371382.3377430},
    url = {http://eprints.lincoln.ac.uk/id/eprint/40509/},
    abstract = {One of the key challenges of current state-of-the-art robotic deployments in public spaces, where the robot is supposed to interact with humans, is the generation of behaviors that are engaging for the users. Eliciting engagement during an interaction, and maintaining it after the initial phase of the interaction, is still an issue to be overcome. There is evidence that engagement in learning activities is higher in the presence of a robot, particularly if novel [1], but after the initial engagement state, long and non-interactive behaviors are detrimental to the continued engagement of the users [5, 16]. Overcoming this limitation requires to design robots with enhanced social abilities that go past monolithic behaviours and introduces in-situ learning and adaptation to the specific users and situations. To do so, the robot must have the ability to perceive the state of the humans participating in the interaction and use this feedback for the selection of its own actions over time [27].}
    }
  • H. Wang, J. Peng, X. Zheng, and S. Yue, “A robust visual system for small target motion detection against cluttered moving backgrounds,” Ieee transactions on neural networks and learning systems, vol. 31, iss. 3, p. 839–853, 2020. doi:10.1109/TNNLS.2019.2910418
    [BibTeX] [Abstract] [Download PDF]

    Monitoring small objects against cluttered moving backgrounds is a huge challenge to future robotic vision systems. As a source of inspiration, insects are quite apt at searching for mates and tracking prey, which always appear as small dim speckles in the visual field. The exquisite sensitivity of insects for small target motion, as revealed recently, is coming from a class of specific neurons called small target motion detectors (STMDs). Although a few STMD-based models have been proposed, these existing models only use motion information for small target detection and cannot discriminate small targets from small-target-like background features (named fake features). To address this problem, this paper proposes a novel visual system model (STMD+) for small target motion detection, which is composed of four subsystems–ommatidia, motion pathway, contrast pathway, and mushroom body. Compared with the existing STMD-based models, the additional contrast pathway extracts directional contrast from luminance signals to eliminate false positive background motion. The directional contrast and the extracted motion information by the motion pathway are integrated into the mushroom body for small target discrimination. Extensive experiments showed the significant and consistent improvements of the proposed visual system model over the existing STMD-based models against fake features.

    @article{lincoln36114,
    volume = {31},
    number = {3},
    month = {March},
    author = {Hongxin Wang and Jigen Peng and Xuqiang Zheng and Shigang Yue},
    title = {A Robust Visual System for Small Target Motion Detection Against Cluttered Moving Backgrounds},
    publisher = {Institute of Electrical and Electronics Engineers (IEEE)},
    year = {2020},
    journal = {IEEE Transactions on Neural Networks and Learning Systems},
    doi = {10.1109/TNNLS.2019.2910418},
    pages = {839--853},
    url = {http://eprints.lincoln.ac.uk/id/eprint/36114/},
    abstract = {Monitoring small objects against cluttered moving backgrounds is a huge challenge to future robotic vision systems. As a source of inspiration, insects are quite apt at searching for mates and tracking prey, which always appear as small dim speckles in the visual field. The exquisite sensitivity of insects for small target motion, as revealed recently, is coming from a class of specific neurons called small target motion detectors (STMDs). Although a few STMD-based models have been proposed, these existing models only use motion information for small target detection and cannot discriminate small targets from small-target-like background features (named fake features). To address this problem, this paper proposes a novel visual system model (STMD+) for small target motion detection, which is composed of four subsystems--ommatidia, motion pathway, contrast pathway, and mushroom body. Compared with the existing STMD-based models, the additional contrast pathway extracts directional contrast from luminance signals to eliminate false positive background motion. The directional contrast and the extracted motion information by the motion pathway are integrated into the mushroom body for small target discrimination. Extensive experiments showed the significant and consistent improvements of the proposed visual system model over the existing STMD-based models against fake features.}
    }
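    The integration of the motion and contrast pathways can be pictured as a gating operation: motion responses survive only where directional contrast supports a genuine small target. The elementwise gate below is an illustrative stand-in for the model's mushroom-body integration, not the published STMD+ formulation.

    # Illustrative motion-contrast gating (Python); the threshold is assumed.
    import numpy as np

    def mushroom_body(motion_response, directional_contrast, threshold=0.2):
        """Suppress 'fake features': keep motion responses only where
        directional contrast exceeds a threshold."""
        gate = (directional_contrast > threshold).astype(float)
        return motion_response * gate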
  • J. L. Louedec, B. Li, and G. Cielniak, “Evaluation of 3d vision systems for detection of small objects in agricultural environments,” in The 15th international joint conference on computer vision, imaging and computer graphics theory and applications, 2020. doi:10.5220/0009182806820689
    [BibTeX] [Abstract] [Download PDF]

    3D information provides unique information about shape, localisation and relations between objects, not found in standard 2D images. This information would be very beneficial in a large number of applications in agriculture such as fruit picking, yield monitoring, forecasting and phenotyping. In this paper, we conducted a study on the application of modern 3D sensing technology together with the state-of-the-art machine learning algorithms for segmentation and detection of strawberries growing in real farms. We evaluate the performance of two state-of-the-art 3D sensing technologies and showcase the differences between 2D and 3D networks trained on the images and point clouds of strawberry plants and fruit. Our study highlights limitations of the current 3D vision systems for the detection of small objects in outdoor applications and sets out foundations for future work on 3D perception for challenging outdoor applications such as agriculture.

    @inproceedings{lincoln40456,
    booktitle = {The 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications},
    month = {February},
    title = {Evaluation of 3D Vision Systems for Detection of Small Objects in Agricultural Environments},
    author = {Justin Le Louedec and Bo Li and Grzegorz Cielniak},
    publisher = {SciTePress},
    year = {2020},
    doi = {10.5220/0009182806820689},
    url = {http://eprints.lincoln.ac.uk/id/eprint/40456/},
    abstract = {3D information provides unique information about shape, localisation and relations between objects, not found in standard 2D images. This information would be very beneficial in a large number of applications in agriculture such as fruit picking, yield monitoring, forecasting and phenotyping. In this paper, we conducted a study on the application of modern 3D sensing technology together with the state-of-the-art machine learning algorithms for segmentation and detection of strawberries growing in real farms. We evaluate the performance of two state-of-the-art 3D sensing technologies and showcase the differences between 2D and 3D networks trained on the images and point clouds of strawberry plants and fruit. Our study highlights limitations of the current 3D vision systems for the detection of small objects in outdoor applications and sets out foundations for future work on 3D perception for challenging outdoor applications such as agriculture.}
    }
  • R. Polvara, M. Patacchiola, M. Hanheide, and G. Neumann, “Sim-to-real quadrotor landing via sequential deep q-networks and domain randomization,” Robotics, vol. 9, iss. 1, 2020. doi:10.3390/robotics9010008
    [BibTeX] [Abstract] [Download PDF]

    The autonomous landing of an Unmanned Aerial Vehicle (UAV) on a marker is one of the most challenging problems in robotics. Many solutions have been proposed, with the best results achieved via customized geometric features and external sensors. This paper discusses for the first time the use of deep reinforcement learning as an end-to-end learning paradigm to find a policy for UAVs autonomous landing. Our method is based on a divide-and-conquer paradigm that splits a task into sequential sub-tasks, each one assigned to a Deep Q-Network (DQN), hence the name Sequential Deep Q-Network (SDQN). Each DQN in an SDQN is activated by an internal trigger, and it represents a component of a high-level control policy, which can navigate the UAV towards the marker. Different technical solutions have been implemented, for example combining vanilla and double DQNs, and the introduction of a partitioned buffer replay to address the problem of sample efficiency. One of the main contributions of this work consists in showing how an SDQN trained in a simulator via domain randomization, can effectively generalize to real-world scenarios of increasing complexity. The performance of SDQNs is comparable with a state-of-the-art algorithm and human pilots while being quantitatively better in noisy conditions.

    @article{lincoln40216,
    volume = {9},
    number = {1},
    month = {February},
    author = {Riccardo Polvara and Massimiliano Patacchiola and Marc Hanheide and Gerhard Neumann},
    title = {Sim-to-Real Quadrotor Landing via Sequential Deep Q-Networks and Domain Randomization},
    publisher = {MDPI},
    year = {2020},
    journal = {Robotics},
    doi = {10.3390/robotics9010008},
    url = {http://eprints.lincoln.ac.uk/id/eprint/40216/},
    abstract = {The autonomous landing of an Unmanned Aerial Vehicle (UAV) on a marker is one of the most challenging problems in robotics. Many solutions have been proposed, with the best results achieved via customized geometric features and external sensors. This paper discusses for the first time the use of deep reinforcement learning as an end-to-end learning paradigm to find a policy for UAVs autonomous landing. Our method is based on a divide-and-conquer paradigm that splits a task into sequential sub-tasks, each one assigned to a Deep Q-Network (DQN), hence the name Sequential Deep Q-Network (SDQN). Each DQN in an SDQN is activated by an internal trigger, and it represents a component of a high-level control policy, which can navigate the UAV towards the marker. Different technical solutions have been implemented, for example combining vanilla and double DQNs, and the introduction of a partitioned buffer replay to address the problem of sample efficiency. One of the main contributions of this work consists in showing how an SDQN trained in a simulator via domain randomization, can effectively generalize to real-world scenarios of increasing complexity. The performance of SDQNs is comparable with a state-of-the-art algorithm and human pilots while being quantitatively better in noisy conditions.}
    }
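    The sequential structure of an SDQN can be sketched as a chain of sub-policies, each owning an internal trigger that hands control to the next one. The trigger predicate and the stub policies below are hypothetical; in the paper each stage is a trained Deep Q-Network.

    # Illustrative SDQN control flow (Python); stages and triggers are assumed.
    class SDQN:
        """Chain of sub-policies; each trigger advances to the next stage."""

        def __init__(self, policies, triggers):
            self.policies = policies   # one callable per sub-task
            self.triggers = triggers   # trigger(obs) -> True when sub-task done
            self.stage = 0

        def act(self, obs):
            if self.stage < len(self.triggers) and self.triggers[self.stage](obs):
                self.stage += 1        # internal trigger: switch to the next DQN
            return self.policies[min(self.stage, len(self.policies) - 1)](obs)

    # Toy usage: descend until low, then align with the marker.
    controller = SDQN(
        policies=[lambda o: "descend", lambda o: "align"],
        triggers=[lambda o: o["altitude"] < 1.0],
    )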
  • M. Bartlett, C. Costescu, P. Baxter, and S. Thill, “Requirements for robotic interpretation of social signals ‘in the wild’: insights from diagnostic criteria of autism spectrum disorder,” Mdpi information, vol. 11, iss. 81, p. 1–20, 2020. doi:10.3390/info11020081
    [BibTeX] [Abstract] [Download PDF]

    The last few decades have seen widespread advances in technological means to characterise observable aspects of human behaviour such as gaze or posture. Among others, these developments have also led to significant advances in social robotics. At the same time, however, social robots are still largely evaluated in idealised or laboratory conditions, and it remains unclear whether the technological progress is sufficient to let such robots move ‘into the wild’. In this paper, we characterise the problems that a social robot in the real world may face, and review the technological state of the art in terms of addressing these. We do this by considering what it would entail to automate the diagnosis of Autism Spectrum Disorder (ASD). Just as for social robotics, ASD diagnosis fundamentally requires the ability to characterise human behaviour from observable aspects. However, therapists provide clear criteria regarding what to look for. As such, ASD diagnosis is a situation that is both relevant to real-world social robotics and comes with clear metrics. Overall, we demonstrate that even with relatively clear therapist-provided criteria and current technological progress, the need to interpret covert behaviour cannot yet be fully addressed. Our discussions have clear implications for ASD diagnosis, but also for social robotics more generally. For ASD diagnosis, we provide a classification of criteria based on whether or not they depend on covert information and highlight present-day possibilities for supporting therapists in diagnosis through technological means. For social robotics, we highlight the fundamental role of covert behaviour, show that the current state-of-the-art is unable to characterise this, and emphasise that future research should tackle this explicitly in realistic settings.

    @article{lincoln40108,
    volume = {11},
    number = {81},
    month = {February},
    author = {M Bartlett and C Costescu and Paul Baxter and S Thill},
    title = {Requirements for Robotic Interpretation of Social Signals 'in the Wild': Insights from Diagnostic Criteria of Autism Spectrum Disorder},
    publisher = {MDPI},
    year = {2020},
    journal = {MDPI Information},
    doi = {10.3390/info11020081},
    pages = {1--20},
    url = {http://eprints.lincoln.ac.uk/id/eprint/40108/},
    abstract = {The last few decades have seen widespread advances in technological means to characterise observable aspects of human behaviour such as gaze or posture. Among others, these developments have also led to significant advances in social robotics. At the same time, however, social robots are still largely evaluated in idealised or laboratory conditions, and it remains unclear whether the technological progress is sufficient to let such robots move 'into the wild'. In this paper, we characterise the problems that a social robot in the real world may face, and review the technological state of the art in terms of addressing these. We do this by considering what it would entail to automate the diagnosis of Autism Spectrum Disorder (ASD). Just as for social robotics, ASD diagnosis fundamentally requires the ability to characterise human behaviour from observable aspects. However, therapists provide clear criteria regarding what to look for. As such, ASD diagnosis is a situation that is both relevant to real-world social robotics and comes with clear metrics. Overall, we demonstrate that even with relatively clear therapist-provided criteria and current technological progress, the need to interpret covert behaviour cannot yet be fully addressed. Our discussions have clear implications for ASD diagnosis, but also for social robotics more generally. For ASD diagnosis, we provide a classification of criteria based on whether or not they depend on covert information and highlight present-day possibilities for supporting therapists in diagnosis through technological means. For social robotics, we highlight the fundamental role of covert behaviour, show that the current state-of-the-art is unable to characterise this, and emphasise that future research should tackle this explicitly in realistic settings.}
    }
  • B. Chen, J. Huang, Y. Huang, S. Kollias, and S. Yue, “Combining guaranteed and spot markets in display advertising: selling guaranteed page views with stochastic demand,” European journal of operational research, vol. 280, iss. 3, p. 1144–1159, 2020. doi:10.1016/j.ejor.2019.07.067
    [BibTeX] [Abstract] [Download PDF]

    While page views are often sold instantly through real-time auctions when users visit Web pages, they can also be sold in advance via guaranteed contracts. In this paper, we combine guaranteed and spot markets in display advertising, and present a dynamic programming model to study how a media seller should optimally allocate and price page views between guaranteed contracts and advertising auctions. This optimisation problem is challenging because the allocation and pricing of guaranteed contracts endogenously affects the expected revenue from advertising auctions in the future. We take into consideration several distinct characteristics regarding the media buyers’ purchasing behaviour, such as risk aversion, stochastic demand arrivals, and devise a scalable and efficient algorithm to solve the optimisation problem. Our work is one of a few studies that investigate the auction-based posted price guaranteed contracts for display advertising. The proposed model is further empirically validated with a display advertising data set from a UK supply-side platform. The results show that the optimal pricing and allocation strategies from our model can significantly increase the media seller’s expected total revenue, and the model suggests different optimal strategies based on the level of competition in advertising auctions.

    @article{lincoln39575,
    volume = {280},
    number = {3},
    month = {February},
    author = {Bowei Chen and Jingmin Huang and Yufei Huang and Stefanos Kollias and Shigang Yue},
    title = {Combining guaranteed and spot markets in display advertising: Selling guaranteed page views with stochastic demand},
    publisher = {Elsevier},
    year = {2020},
    journal = {European Journal of Operational Research},
    doi = {10.1016/j.ejor.2019.07.067},
    pages = {1144--1159},
    url = {http://eprints.lincoln.ac.uk/id/eprint/39575/},
    abstract = {While page views are often sold instantly through real-time auctions when users visit Web pages, they can also be sold in advance via guaranteed contracts. In this paper, we combine guaranteed and spot markets in display advertising, and present a dynamic programming model to study how a media seller should optimally allocate and price page views between guaranteed contracts and advertising auctions. This optimisation problem is challenging because the allocation and pricing of guaranteed contracts endogenously affects the expected revenue from advertising auctions in the future. We take into consideration several distinct characteristics regarding the media buyers' purchasing behaviour, such as risk aversion, stochastic demand arrivals, and devise a scalable and efficient algorithm to solve the optimisation problem. Our work is one of a few studies that investigate the auction-based posted price guaranteed contracts for display advertising. The proposed model is further empirically validated with a display advertising data set from a UK supply-side platform. The results show that the optimal pricing and allocation strategies from our model can significantly increase the media seller's expected total revenue, and the model suggests different optimal strategies based on the level of competition in advertising auctions.}
    }
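    The dynamic program in this abstract can be caricatured as a finite-horizon Bellman recursion over remaining inventory: at each step the seller either pre-sells a page view at the guaranteed price or keeps it for the future auction. The prices and the binary action space are assumptions; the paper's model is considerably richer (risk aversion, stochastic demand arrivals, posted-price curves).

    # Caricature of the allocation dynamic program (Python); prices assumed.
    from functools import lru_cache

    GUARANTEED_PRICE = 1.0   # assumed posted price per pre-sold view
    EXPECTED_AUCTION = 0.8   # assumed expected auction revenue per view

    @lru_cache(maxsize=None)
    def value(t, inventory):
        """Max expected revenue with t selling periods and views left."""
        if t == 0 or inventory == 0:
            return inventory * EXPECTED_AUCTION   # leftovers go to auction
        keep = value(t - 1, inventory)
        sell = GUARANTEED_PRICE + value(t - 1, inventory - 1)
        return max(keep, sell)

    print(value(5, 3))   # 3.0 with these assumed prices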
  • J. P. Fentanes, A. Badiee, T. Duckett, J. Evans, S. Pearson, and G. Cielniak, “Kriging-based robotic exploration for soil moisture mapping using a cosmic-ray sensor,” Journal of field robotics, vol. 37, iss. 1, p. 122–136, 2020. doi:10.1002/rob.21914
    [BibTeX] [Abstract] [Download PDF]

    Soil moisture monitoring is a fundamental process to enhance agricultural outcomes and to protect the environment. The traditional methods for measuring moisture content in the soil are laborious and expensive, and therefore there is a growing interest in developing sensors and technologies which can reduce the effort and costs. In this work, we propose to use an autonomous mobile robot equipped with a state-of-the-art noncontact soil moisture sensor building moisture maps on the fly and automatically selecting the most optimal sampling locations. We introduce an autonomous exploration strategy driven by the quality of the soil moisture model indicating areas of the field where the information is less precise. The sensor model follows the Poisson distribution and we demonstrate how to integrate such measurements into the kriging framework. We also investigate a range of different exploration strategies and assess their usefulness through a set of evaluation experiments based on real soil moisture data collected from two different fields. We demonstrate the benefits of using the adaptive measurement interval and adaptive sampling strategies for building better quality soil moisture models. The presented method is general and can be applied to other scenarios where the measured phenomena directly affect the acquisition time and need to be spatially mapped.

    @article{lincoln37350,
    volume = {37},
    number = {1},
    month = {January},
    author = {Jaime Pulido Fentanes and Amir Badiee and Tom Duckett and Jonathan Evans and Simon Pearson and Grzegorz Cielniak},
    title = {Kriging-based robotic exploration for soil moisture mapping using a cosmic-ray sensor},
    publisher = {Wiley Periodicals, Inc.},
    year = {2020},
    journal = {Journal of Field Robotics},
    doi = {10.1002/rob.21914},
    pages = {122--136},
    url = {http://eprints.lincoln.ac.uk/id/eprint/37350/},
    abstract = {Soil moisture monitoring is a fundamental process to enhance agricultural outcomes and to protect the environment. The traditional methods for measuring moisture content in the soil are laborious and expensive, and therefore there is a growing interest in developing sensors and technologies which can reduce the effort and costs. In this work, we propose to use an autonomous mobile robot equipped with a state-of-the-art noncontact soil moisture sensor building moisture maps on the fly and automatically selecting the most optimal sampling locations. We introduce an autonomous exploration strategy driven by the quality of the soil moisture model indicating areas of the field where the information is less precise. The sensor model follows the Poisson distribution and we demonstrate how to integrate such measurements into the kriging framework. We also investigate a range of different exploration strategies and assess their usefulness through a set of evaluation experiments based on real soil moisture data collected from two different fields. We demonstrate the benefits of using the adaptive measurement interval and adaptive sampling strategies for building better quality soil moisture models. The presented method is general and can be applied to other scenarios where the measured phenomena directly affect the acquisition time and need to be spatially mapped.}
    }
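    The exploration rule described above (sample where the model is least certain) can be sketched with a simple-kriging variance criterion: pick the candidate location whose predictive variance is largest. The squared-exponential covariance, unit prior variance and nugget are assumptions; the paper additionally integrates Poisson-distributed cosmic-ray measurements into the kriging framework.

    # Illustrative variance-driven next-sample selection (Python).
    import numpy as np

    def sq_exp_cov(a, b, length=10.0):
        """Squared-exponential covariance between two (n, 2) point sets."""
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length ** 2)

    def next_sample(measured_xy, candidates_xy, nugget=1e-6):
        """Choose the candidate with the largest simple-kriging variance."""
        K = sq_exp_cov(measured_xy, measured_xy) + nugget * np.eye(len(measured_xy))
        k = sq_exp_cov(measured_xy, candidates_xy)
        var = 1.0 - np.einsum("im,im->m", k, np.linalg.solve(K, k))
        return candidates_xy[np.argmax(var)]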
  • P. Chudzik, A. Mitchell, M. Alkaseem, Y. Wu, S. Fang, T. Hudaib, S. Pearson, and B. Al-Diri, “Mobile real-time grasshopper detection and data aggregation framework,” Scientific reports, vol. 10, iss. 1150, 2020. doi:10.1038/s41598-020-57674-8
    [BibTeX] [Abstract] [Download PDF]

    Insects of the family Orthoptera: Acrididae, including grasshoppers and locusts, devastate crops and ecosystems around the globe. The effective control of these insects requires large numbers of trained extension agents who try to spot concentrations of the insects on the ground so that they can be destroyed before they take flight. This is a challenging and difficult task. No automatic detection system is yet available to increase scouting productivity, data scale and fidelity. Here we demonstrate MAESTRO, a novel grasshopper detection framework that deploys deep learning within RGB images to detect insects. MAESTRO uses a state-of-the-art two-stage training deep learning approach. The framework can be deployed not only on desktop computers but also on edge devices without internet connection such as smartphones. MAESTRO can gather data using cloud storage for further research and in-depth analysis. In addition, we provide a challenging new open dataset (GHCID) of highly variable grasshopper populations imaged in Inner Mongolia. The detection performance of the stationary method and the mobile app are 78 and 49 percent respectively; the stationary method requires around 1000 ms to analyze a single image, whereas the mobile app uses only around 400 ms per image. The algorithms are purely data-driven and can be used for other detection tasks in agriculture (e.g. plant disease detection) and beyond. This system can play a crucial role in the collection and analysis of data to enable more effective control of this critical global pest.

    @article{lincoln39125,
    volume = {10},
    number = {1150},
    month = {January},
    author = {Piotr Chudzik and Arthur Mitchell and Mohammad Alkaseem and Yingie Wu and Shibo Fang and Taghread Hudaib and Simon Pearson and Bashir Al-Diri},
    title = {Mobile Real-Time Grasshopper Detection and Data Aggregation Framework},
    publisher = {Springer},
    year = {2020},
    journal = {Scientific Reports},
    doi = {10.1038/s41598-020-57674-8},
    url = {http://eprints.lincoln.ac.uk/id/eprint/39125/},
    abstract = {Insects of the family Orthoptera: Acrididae, including grasshoppers and locusts, devastate crops and ecosystems around the globe. The effective control of these insects requires large numbers of trained extension agents who try to spot concentrations of the insects on the ground so that they can be destroyed before they take flight. This is a challenging and difficult task. No automatic detection system is yet available to increase scouting productivity, data scale and fidelity. Here we demonstrate MAESTRO, a novel grasshopper detection framework that deploys deep learning within RGB images to detect insects. MAESTRO uses a state-of-the-art two-stage training deep learning approach. The framework can be deployed not only on desktop computers but also on edge devices without an internet connection, such as smartphones. MAESTRO can gather data using cloud storage for further research and in-depth analysis. In addition, we provide a challenging new open dataset (GHCID) of highly variable grasshopper populations imaged in Inner Mongolia. The detection performance of the stationary method and the mobile app is 78 and 49 percent, respectively; the stationary method requires around 1000 ms to analyze a single image, whereas the mobile app uses only around 400 ms per image. The algorithms are purely data-driven and can be used for other detection tasks in agriculture (e.g. plant disease detection) and beyond. This system can play a crucial role in the collection and analysis of data to enable more effective control of this critical global pest.}
    }
  • D. D. Barrie, R. Margetts, and K. Goher, “Simpa: soft-grasp infant myoelectric prosthetic arm,” Ieee robotics and automation letters, 2020. doi:10.1109/LRA.2019.2963820
    [BibTeX] [Abstract] [Download PDF]

    Myoelectric prosthetic arms have primarily focused on adults, despite evidence showing the benefits of early adoption. This work presents SIMPA, a low-cost 3D-printed prosthetic arm with soft grippers. The arm has been designed using CAD and 3D-scanning, and manufactured using predominantly 3D-printing techniques. A voluntary opening control system utilizing an armband-based sEMG has been developed concurrently. Grasp tests have resulted in an average effectiveness of 87%, with objects in excess of 400 g being securely grasped. The results highlight the effectiveness of soft grippers as an end device in prosthetics, as well as the viability of toddler-scale myoelectric devices.

    @article{lincoln39383,
    month = {January},
    title = {SIMPA: Soft-Grasp Infant Myoelectric Prosthetic Arm},
    author = {Daniel De Barrie and Rebecca Margetts and Khaled Goher},
    publisher = {IEEE},
    year = {2020},
    doi = {10.1109/LRA.2019.2963820},
    journal = {IEEE Robotics and Automation Letters},
    url = {http://eprints.lincoln.ac.uk/id/eprint/39383/},
    abstract = {Myoelectric prosthetic arms have primarily focused on adults, despite evidence showing the benefits of early adoption. This work presents SIMPA, a low-cost 3D-printed prosthetic arm with soft grippers. The arm has been designed using CAD and 3D-scanning, and manufactured using predominantly 3D-printing techniques. A voluntary opening control system utilizing an armband-based sEMG has been developed concurrently. Grasp tests have resulted in an average effectiveness of 87\%, with objects in excess of 400 g being securely grasped. The results highlight the effectiveness of soft grippers as an end device in prosthetics, as well as the viability of toddler-scale myoelectric devices.}
    }
  • R. Kirk, G. Cielniak, and M. Mangan, “L*a*b*fruits: a rapid and robust outdoor fruit detection system combining bio-inspired features with one-stage deep learning networks,” Sensors, vol. 20, iss. 1, p. 275, 2020. doi:10.3390/s20010275
    [BibTeX] [Abstract] [Download PDF]

    Automation of agricultural processes requires systems that can accurately detect and classify produce in real industrial environments that include variation in fruit appearance due to illumination, occlusion, seasons, weather conditions, etc. In this paper, we combine a visual processing approach inspired by colour-opponent theory in humans with recent advancements in one-stage deep learning networks to accurately, rapidly and robustly detect ripe soft fruits (strawberries) in real industrial settings and using standard (RGB) camera input. The resultant system was tested on an existing dataset captured in controlled conditions as well as our new real-world dataset captured on a real strawberry farm over two months. We utilise the F1 score, the harmonic mean of precision and recall, to show our system matches the state-of-the-art detection accuracy (F1: 0.793 vs. 0.799) in controlled conditions; has greater generalisation and robustness to variation of spatial parameters (camera viewpoint) in the real-world dataset (F1: 0.744); and at a fraction of the computational cost, allowing classification at almost 30 fps. We propose that the L*a*b*Fruits system addresses some of the most pressing limitations of current fruit detection systems and is well-suited to application in areas such as yield forecasting and harvesting. Beyond the target application in agriculture, this work also provides a proof-of-principle whereby increased performance is achieved through analysis of the domain data, capturing features at the input level rather than simply increasing model complexity.

    @article{lincoln39423,
    volume = {20},
    number = {1},
    month = {January},
    author = {Raymond Kirk and Grzegorz Cielniak and Michael Mangan},
    title = {L*a*b*Fruits: A Rapid and Robust Outdoor Fruit Detection System Combining Bio-Inspired Features with One-Stage Deep Learning Networks},
    publisher = {MDPI},
    year = {2020},
    journal = {Sensors},
    doi = {10.3390/s20010275},
    pages = {275},
    url = {http://eprints.lincoln.ac.uk/id/eprint/39423/},
    abstract = {Automation of agricultural processes requires systems that can accurately detect and classify produce in real industrial environments that include variation in fruit appearance due to illumination, occlusion, seasons, weather conditions, etc. In this paper, we combine a visual processing approach inspired by colour-opponent theory in humans with recent advancements in one-stage deep learning networks to accurately, rapidly and robustly detect ripe soft fruits (strawberries) in real industrial settings and using standard (RGB) camera input. The resultant system was tested on an existing dataset captured in controlled conditions as well as our new real-world dataset captured on a real strawberry farm over two months. We utilise the F1 score, the harmonic mean of precision and recall, to show our system matches the state-of-the-art detection accuracy (F1: 0.793 vs. 0.799) in controlled conditions; has greater generalisation and robustness to variation of spatial parameters (camera viewpoint) in the real-world dataset (F1: 0.744); and at a fraction of the computational cost, allowing classification at almost 30 fps. We propose that the L*a*b*Fruits system addresses some of the most pressing limitations of current fruit detection systems and is well-suited to application in areas such as yield forecasting and harvesting. Beyond the target application in agriculture, this work also provides a proof-of-principle whereby increased performance is achieved through analysis of the domain data, capturing features at the input level rather than simply increasing model complexity.}
    }
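    As the abstract notes, the F1 score is the harmonic mean of precision and recall, so figures in the reported range are easy to sanity-check:

        def f1(precision, recall):
            # Harmonic mean of precision and recall.
            return 2 * precision * recall / (precision + recall)

        # A detector with precision 0.80 and recall 0.79 scores about 0.795,
        # in the same range as the controlled-conditions results above.
        print(f1(0.80, 0.79))  # 0.79497...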
  • P. Bosilj, E. Aptoula, T. Duckett, and G. Cielniak, “Transfer learning between crop types for semantic segmentation of crops versus weeds in precision agriculture,” Journal of field robotics, vol. 37, iss. 1, p. 7–19, 2020. doi:10.1002/rob.21869
    [BibTeX] [Abstract] [Download PDF]

    Agricultural robots rely on semantic segmentation for distinguishing between crops and weeds in order to perform selective treatments, increase yield and crop health while reducing the amount of chemicals used. Deep learning approaches have recently achieved both excellent classification performance and real-time execution. However, these techniques also rely on a large amount of training data, requiring a substantial labelling effort, both of which are scarce in precision agriculture. Additional design efforts are required to achieve commercially viable performance levels under varying environmental conditions and crop growth stages. In this paper, we explore the role of knowledge transfer between deep-learning-based classifiers for different crop types, with the goal of reducing the retraining time and labelling efforts required for a new crop. We examine the classification performance on three datasets with different crop types and containing a variety of weeds, and compare the performance and retraining efforts required when using data labelled at pixel level with partially labelled data obtained through a less time-consuming procedure of annotating the segmentation output. We show that transfer learning between different crop types is possible, and reduces training times by up to 80%. Furthermore, we show that even when the data used for re-training is imperfectly annotated, the classification performance is within 2% of that of networks trained with laboriously annotated pixel-precision data.

    @article{lincoln35535,
    volume = {37},
    number = {1},
    month = {January},
    author = {Petra Bosilj and Erchan Aptoula and Tom Duckett and Grzegorz Cielniak},
    title = {Transfer learning between crop types for semantic segmentation of crops versus weeds in precision agriculture},
    publisher = {Wiley},
    year = {2020},
    journal = {Journal of Field Robotics},
    doi = {10.1002/rob.21869},
    pages = {7--19},
    url = {http://eprints.lincoln.ac.uk/id/eprint/35535/},
    abstract = {Agricultural robots rely on semantic segmentation for distinguishing between crops and weeds in order to perform selective treatments, increase yield and crop health while reducing the amount of chemicals used. Deep learning approaches have recently achieved both excellent classification performance and real-time execution. However, these techniques also rely on a large amount of training data, requiring a substantial labelling effort, both of which are scarce in precision agriculture. Additional design efforts are required to achieve commercially viable performance levels under varying environmental conditions and crop growth stages. In this paper, we explore the role of knowledge transfer between deep-learning-based classifiers for different crop types, with the goal of reducing the retraining time and labelling efforts required for a new crop. We examine the classification performance on three datasets with different crop types and containing a variety of weeds, and compare the performance and retraining efforts required when using data labelled at pixel level with partially labelled data obtained through a less time-consuming procedure of annotating the segmentation output. We show that transfer learning between different crop types is possible, and reduces training times by up to 80\%. Furthermore, we show that even when the data used for re-training is imperfectly annotated, the classification performance is within 2\% of that of networks trained with laboriously annotated pixel-precision data.}
    }
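    One common way to realise the crop-to-crop transfer described above is to keep the feature extractor trained on the source crop and retrain only the final layers on the new crop's (possibly imperfectly annotated) data. A minimal PyTorch-style sketch; the head name "classifier" is an assumption for the example, not the paper's architecture:

        import torch.nn as nn
        import torch.optim as optim

        def prepare_for_new_crop(model: nn.Module, head_prefix: str = "classifier"):
            # Freeze everything except the classification head, then return an
            # optimiser over the remaining trainable parameters for fine-tuning
            # on the target-crop data.
            for name, param in model.named_parameters():
                param.requires_grad = name.startswith(head_prefix)
            trainable = [p for p in model.parameters() if p.requires_grad]
            return optim.Adam(trainable, lr=1e-4)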
  • C. Coppola, S. Cosar, D. R. Faria, and N. Bellotto, “Social activity recognition on continuous rgb-d video sequences,” International journal of social robotics, p. 1–15, 2020. doi:10.1007/s12369-019-00541-y
    [BibTeX] [Abstract] [Download PDF]

    Modern service robots are provided with one or more sensors, often including RGB-D cameras, to perceive objects and humans in the environment. This paper proposes a new system for the recognition of human social activities from a continuous stream of RGB-D data. Many of the works until now have succeeded in recognising activities from clipped videos in datasets, but for robotic applications it is important to be able to move to more realistic scenarios in which such activities are not manually selected. For this reason, it is useful to detect the time intervals when humans are performing social activities, the recognition of which can help to trigger human-robot interactions or to detect situations of potential danger. The main contributions of this research work include a novel system for the recognition of social activities from continuous RGB-D data, combining temporal segmentation and classification, as well as a model for learning the proximity-based priors of the social activities. A new public dataset with RGB-D videos of social and individual activities is also provided and used for evaluating the proposed solutions. The results show the good performance of the system in recognising social activities from continuous RGB-D data.

    @article{lincoln35151,
    month = {January},
    author = {Claudio Coppola and Serhan Cosar and Diego R. Faria and Nicola Bellotto},
    title = {Social Activity Recognition on Continuous RGB-D Video Sequences},
    publisher = {Springer},
    journal = {International Journal of Social Robotics},
    doi = {10.1007/s12369-019-00541-y},
    pages = {1--15},
    year = {2020},
    url = {http://eprints.lincoln.ac.uk/id/eprint/35151/},
    abstract = {Modern service robots are provided with one or more sensors, often including RGB-D cameras, to perceive objects and humans in the environment. This paper proposes a new system for the recognition of human social activities from a continuous stream of RGB-D data. Many of the works until now have succeeded in recognising activities from clipped videos in datasets, but for robotic applications it is important to be able to move to more realistic scenarios in which such activities are not manually selected. For this reason, it is useful to detect the time intervals when humans are performing social activities, the recognition of which can help to trigger human-robot interactions or to detect situations of potential danger. The main contributions of this research work include a novel system for the recognition of social activities from continuous RGB-D data, combining temporal segmentation and classification, as well as a model for learning the proximity-based priors of the social activities. A new public dataset with RGB-D videos of social and individual activities is also provided and used for evaluating the proposed solutions. The results show the good performance of the system in recognising social activities from continuous RGB-D data.}
    }
  • Z. Yan, T. Duckett, and N. Bellotto, “Online learning for 3d lidar-based human detection: experimental analysis of point cloud clustering and classification methods,” Autonomous robots, vol. 44, iss. 2, p. 147–164, 2020. doi:10.1007/s10514-019-09883-y
    [BibTeX] [Abstract] [Download PDF]

    This paper presents a system for online learning of human classifiers by mobile service robots using 3D LiDAR sensors, and its experimental evaluation in a large indoor public space. The learning framework requires a minimal set of labelled samples (e.g. one or several samples) to initialise a classifier. The classifier is then retrained iteratively during operation of the robot. New training samples are generated automatically using multi-target tracking and a pair of “experts” to estimate false negatives and false positives. Both classification and tracking utilise an efficient real-time clustering algorithm for segmentation of 3D point cloud data. We also introduce a new feature to improve human classification in sparse, long-range point clouds. We provide an extensive evaluation of our framework using a 3D LiDAR dataset of people moving in a large indoor public space, which is made available to the research community. The experiments demonstrate the influence of the system components and improved classification of humans compared to the state-of-the-art.

    @article{lincoln36535,
    volume = {44},
    number = {2},
    month = {January},
    author = {Zhi Yan and Tom Duckett and Nicola Bellotto},
    title = {Online Learning for 3D LiDAR-based Human Detection: Experimental Analysis of Point Cloud Clustering and Classification Methods},
    publisher = {Springer},
    year = {2020},
    journal = {Autonomous Robots},
    doi = {10.1007/s10514-019-09883-y},
    pages = {147--164},
    url = {http://eprints.lincoln.ac.uk/id/eprint/36535/},
    abstract = {This paper presents a system for online learning of human classifiers by mobile service robots using 3D~LiDAR sensors, and its experimental evaluation in a large indoor public space. The learning framework requires a minimal set of labelled samples (e.g. one or several samples) to initialise a classifier. The classifier is then retrained iteratively during operation of the robot. New training samples are generated automatically using multi-target tracking and a pair of "experts" to estimate false negatives and false positives. Both classification and tracking utilise an efficient real-time clustering algorithm for segmentation of 3D point cloud data. We also introduce a new feature to improve human classification in sparse, long-range point clouds. We provide an extensive evaluation of our framework using a 3D LiDAR dataset of people moving in a large indoor public space, which is made available to the research community. The experiments demonstrate the influence of the system components and improved classification of humans compared to the state-of-the-art.}
    }
  • J. Lock, I. Gilchrist, G. Cielniak, and N. Bellotto, “Experimental analysis of a spatialised audio interface for people with visual impairments,” Acm transactions on accessible computing, 2020.
    [BibTeX] [Abstract] [Download PDF]

    Sound perception is a fundamental skill for many people with severe sight impairments. The research presented in this paper is part of an ongoing project with the aim to create a mobile guidance aid to help people with vision impairments find objects within an unknown indoor environment. This system requires an effective non-visual interface and uses bone-conduction headphones to transmit audio instructions to the user. It has been implemented and tested with spatialised audio cues, which convey the direction of a predefined target in 3D space. We present an in-depth evaluation of the audio interface with several experiments that involve a large number of participants, both blindfolded and with actual visual impairments, and analyse the pros and cons of our design choices. In addition to producing results comparable to the state-of-the-art, we found that Fitts's Law (a predictive model for human movement) provides a suitable metric that can be used to improve and refine the quality of the audio interface in future mobile navigation aids.

    @article{lincoln41544,
    title = {Experimental Analysis of a Spatialised Audio Interface for People with Visual Impairments},
    author = {Jacobus Lock and Iain Gilchrist and Grzegorz Cielniak and Nicola Bellotto},
    publisher = {Association for Computing Machinery},
    year = {2020},
    journal = {ACM Transactions on Accessible Computing},
    url = {http://eprints.lincoln.ac.uk/id/eprint/41544/},
    abstract = {Sound perception is a fundamental skill for many people with severe sight impairments. The research presented in this paper is part of an ongoing project with the aim to create a mobile guidance aid to help people with vision impairments find objects within an unknown indoor environment. This system requires an effective non-visual interface and uses bone-conduction headphones to transmit audio instructions to the user. It has been implemented and tested with spatialised audio cues, which convey the direction of a predefined target in 3D space. We present an in-depth evaluation of the audio interface with several experiments that involve a large number of participants, both blindfolded and with actual visual impairments, and analyse the pros and cons of our design choices. In addition to producing results comparable to the state-of-the-art, we found that Fitts's Law (a predictive model for human movement) provides a suitable metric that can be used to improve and refine the quality of the audio interface in future mobile navigation aids.}
    }
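    Fitts's Law, referenced above, predicts movement time as a linear function of an index of difficulty. A worked example using the common Shannon formulation MT = a + b log2(D/W + 1), with illustrative constants rather than values from the paper:

        import math

        def fitts_movement_time(a, b, distance, width):
            # Shannon formulation of Fitts's Law; the log term is the
            # index of difficulty (ID) in bits.
            return a + b * math.log2(distance / width + 1)

        # Intercept 0.2 s, slope 0.1 s/bit, target 0.6 m away, effective width 0.1 m:
        print(fitts_movement_time(0.2, 0.1, 0.6, 0.1))  # ~0.48 s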
  • J. Singh, A. R. Srinivasan, G. Neumann, and A. Kucukyilmaz, “Haptic-guided teleoperation of a 7-dof collaborative robot arm with an identical twin master,” Ieee transactions on haptics, p. 1–1, 2020. doi:10.1109/TOH.2020.2971485
    [BibTeX] [Abstract] [Download PDF]

    In this study, we describe two techniques to enable haptic-guided teleoperation using 7-DoF cobot arms as master and slave devices. A shortcoming of using cobots as master-slave systems is the lack of force feedback at the master side. However, recent developments in cobot technologies have brought in affordable, flexible, and safe torque-controlled robot arms, which can be programmed to generate force feedback to mimic the operation of a haptic device. In this study, we use two Franka Emika Panda robot arms as a twin master-slave system to enable haptic-guided teleoperation. We propose a two-layer mechanism to implement force feedback due to 1) object interactions in the slave workspace, and 2) virtual forces, e.g. those that can repel from static obstacles in the remote environment or provide task-related guidance forces. We present two different approaches for force rendering and conduct an experimental study to evaluate the performance and usability of these approaches in comparison to teleoperation without haptic guidance. Our results indicate that the proposed joint torque coupling method for rendering task forces improves energy requirements during haptic guided telemanipulation, providing realistic force feedback by accurately matching the slave torque readings at the master side.

    @article{lincoln40137,
    title = {Haptic-Guided Teleoperation of a 7-DoF Collaborative Robot Arm with an Identical Twin Master},
    author = {Jayant Singh and Aravinda Ramakrishnan Srinivasan and Gerhard Neumann and Ayse Kucukyilmaz},
    publisher = {IEEE},
    year = {2020},
    pages = {1--1},
    doi = {10.1109/TOH.2020.2971485},
    journal = {IEEE Transactions on Haptics},
    url = {http://eprints.lincoln.ac.uk/id/eprint/40137/},
    abstract = {In this study, we describe two techniques to enable haptic-guided teleoperation using 7-DoF cobot arms as master and slave devices. A shortcoming of using cobots as master-slave systems is the lack of force feedback at the master side. However, recent developments in cobot technologies have brought in affordable, flexible, and safe torque-controlled robot arms, which can be programmed to generate force feedback to mimic the operation of a haptic device. In this study, we use two Franka Emika Panda robot arms as a twin master-slave system to enable haptic-guided teleoperation. We propose a two-layer mechanism to implement force feedback due to 1) object interactions in the slave workspace, and 2) virtual forces, e.g. those that can repel from static obstacles in the remote environment or provide task-related guidance forces. We present two different approaches for force rendering and conduct an experimental study to evaluate the performance and usability of these approaches in comparison to teleoperation without haptic guidance. Our results indicate that the proposed joint torque coupling method for rendering task forces improves energy requirements during haptic guided telemanipulation, providing realistic force feedback by accurately matching the slave torque readings at the master side.}
    }
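    The two-layer feedback described above can be pictured as summing reflected slave interaction torques with virtual guidance forces mapped through the master's Jacobian transpose. A schematic numpy sketch under that reading; all quantities and the combination rule are illustrative, not the paper's controller:

        import numpy as np

        def master_torque(tau_slave_ext, jacobian_master, virtual_wrench):
            # Layer 1: reflect measured slave interaction torques onto the master.
            # Layer 2: map a virtual wrench (obstacle repulsion, task guidance)
            # through the master Jacobian transpose.
            return tau_slave_ext + jacobian_master.T @ virtual_wrench

        # 7-DoF arm: 7 joint torques plus a 6-D virtual wrench at the end effector.
        tau_ext = np.zeros(7)
        J = np.zeros((6, 7))
        wrench = np.array([0.0, 0.0, 5.0, 0.0, 0.0, 0.0])  # push away from an obstacle
        print(master_torque(tau_ext, J, wrench))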

2019

  • C. Achillas, D. Bochtis, D. Aidonis, V. Marinoudi, and D. Folinas, “Voice-driven fleet management system for agricultural operations,” Information processing in agriculture, vol. 6, iss. 4, p. 471–478, 2019. doi:10.1016/j.inpa.2019.03.001
    [BibTeX] [Abstract] [Download PDF]

    Food consumption is constantly increasing at a global scale. In this light, agricultural production also needs to increase in order to satisfy the relevant demand for agricultural products. However, due to environmental and biological factors (e.g. soil compaction) the weight and size of the machinery cannot be further physically optimized. Thus, only marginal improvements are possible to increase equipment effectiveness. In contrast, recent technological advances in ICT provide the ground for significant improvements in agri-production efficiency. In this work, the V-Agrifleet tool is presented and demonstrated. V-Agrifleet is developed to provide a “hands-free” interface for information exchange and an “Olympic view” to all coordinated users, giving them the ability for decentralized decision-making. The proposed tool can be used by the end-users (e.g. farmers, contractors, farm associations, agri-products storage and processing facilities, etc.) in order to optimize task and time management. The visualized documentation of the fleet performance provides valuable information for management-level evaluation, giving the opportunity for improvements in the planning of subsequent operations. Its vendor-independent architecture, voice-driven interaction, context-awareness functionalities and operation planning support make the V-Agrifleet application a highly innovative operational aiding system for agricultural machinery.

    @article{lincoln39226,
    volume = {6},
    number = {4},
    month = {December},
    author = {Ch. Achillas and Dionysis Bochtis and D. Aidonis and V. Marinoudi and D. Folinas},
    title = {Voice-driven fleet management system for agricultural operations},
    publisher = {Elsevier},
    year = {2019},
    journal = {Information Processing in Agriculture},
    doi = {10.1016/j.inpa.2019.03.001},
    pages = {471--478},
    url = {http://eprints.lincoln.ac.uk/id/eprint/39226/},
    abstract = {Food consumption is constantly increasing at a global scale. In this light, agricultural production also needs to increase in order to satisfy the relevant demand for agricultural products. However, due to environmental and biological factors (e.g. soil compaction) the weight and size of the machinery cannot be further physically optimized. Thus, only marginal improvements are possible to increase equipment effectiveness. In contrast, recent technological advances in ICT provide the ground for significant improvements in agri-production efficiency. In this work, the V-Agrifleet tool is presented and demonstrated. V-Agrifleet is developed to provide a 'hands-free' interface for information exchange and an 'Olympic view' to all coordinated users, giving them the ability for decentralized decision-making. The proposed tool can be used by the end-users (e.g. farmers, contractors, farm associations, agri-products storage and processing facilities, etc.) in order to optimize task and time management. The visualized documentation of the fleet performance provides valuable information for management-level evaluation, giving the opportunity for improvements in the planning of subsequent operations. Its vendor-independent architecture, voice-driven interaction, context-awareness functionalities and operation planning support make the V-Agrifleet application a highly innovative operational aiding system for agricultural machinery.}
    }
  • G. Onoufriou, R. Bickerton, S. Pearson, and G. Leontidis, “Nemesyst: a hybrid parallelism deep learning-based framework applied for internet of things enabled food retailing refrigeration systems,” Computers in industry, vol. 114, p. 103133, 2019. doi:10.1016/j.compind.2019.103133
    [BibTeX] [Abstract] [Download PDF]

    Deep Learning has attracted considerable attention across multiple application domains, including computer vision, signal processing and natural language processing. Although quite a few single-node deep learning frameworks exist, such as TensorFlow, PyTorch and Keras, we still lack a complete processing structure that can accommodate large-scale data processing, version control, and deployment, all while staying agnostic of any specific single-node framework. To bridge this gap, this paper proposes a new, higher-level framework, i.e. Nemesyst, which uses databases along with model sequentialisation to allow processes to be fed unique and transformed data at the point of need. This facilitates near real-time application and makes models available for further training or use at any node that has access to the database simultaneously. Nemesyst is well suited as an application framework for internet of things aggregated control systems, deploying deep learning techniques to optimise individual machines in massive networks. To demonstrate this framework, we adopted a case study in a novel domain; deploying deep learning to optimise the high-speed control of electrical power consumed by a massive internet of things network of retail refrigeration systems in proportion to load available on the UK National Grid (a demand side response). The case study demonstrated for the first time in such a setting how deep learning models, such as Recurrent Neural Networks (vanilla and Long-Short-Term Memory) and Generative Adversarial Networks paired with Nemesyst, achieve compelling performance, whilst still being malleable to future adjustments as both the data and requirements inevitably change over time.

    @article{lincoln37181,
    volume = {114},
    month = {December},
    author = {George Onoufriou and Ronald Bickerton and Simon Pearson and Georgios Leontidis},
    note = {Partners included: Tesco and IMS-Evolve},
    title = {Nemesyst: A Hybrid Parallelism Deep Learning-Based Framework Applied for Internet of Things Enabled Food Retailing Refrigeration Systems},
    publisher = {Elsevier},
    year = {2019},
    journal = {Computers in Industry},
    doi = {10.1016/j.compind.2019.103133},
    pages = {103133},
    url = {http://eprints.lincoln.ac.uk/id/eprint/37181/},
    abstract = {Deep Learning has attracted considerable attention across multiple application domains, including computer vision, signal processing and natural language processing. Although quite a few single-node deep learning frameworks exist, such as TensorFlow, PyTorch and Keras, we still lack a complete processing structure that can accommodate large-scale data processing, version control, and deployment, all while staying agnostic of any specific single-node framework. To bridge this gap, this paper proposes a new, higher-level framework, i.e. Nemesyst, which uses databases along with model sequentialisation to allow processes to be fed unique and transformed data at the point of need. This facilitates near real-time application and makes models available for further training or use at any node that has access to the database simultaneously. Nemesyst is well suited as an application framework for internet of things aggregated control systems, deploying deep learning techniques to optimise individual machines in massive networks. To demonstrate this framework, we adopted a case study in a novel domain; deploying deep learning to optimise the high-speed control of electrical power consumed by a massive internet of things network of retail refrigeration systems in proportion to load available on the UK National Grid (a demand side response). The case study demonstrated for the first time in such a setting how deep learning models, such as Recurrent Neural Networks (vanilla and Long-Short-Term Memory) and Generative Adversarial Networks paired with Nemesyst, achieve compelling performance, whilst still being malleable to future adjustments as both the data and requirements inevitably change over time.}
    }
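    The database-centric idea above, that workers pull already-transformed examples from a shared store at the point of need so any node with access can train or serve, can be sketched as a simple batch generator. The URI, database, collection and field names below are placeholders, not Nemesyst's actual schema:

        from pymongo import MongoClient

        def batches(uri="mongodb://localhost:27017", size=32):
            # Pull (already transformed) training examples straight from a shared
            # collection, yielding fixed-size batches as they are needed.
            coll = MongoClient(uri)["demo_db"]["examples"]
            batch = []
            for doc in coll.find({}, {"features": 1, "label": 1}):
                batch.append((doc["features"], doc["label"]))
                if len(batch) == size:
                    yield batch
                    batch = []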
  • P. Baxter, F. D. Duchetto, and M. Hanheide, “Engaging learners in dialogue interactivity development for mobile robots,” in Edurobotics 2018, 2019. doi:10.1007/978-3-030-18141-3_12
    [BibTeX] [Abstract] [Download PDF]

    The use of robots in educational and STEM engagement activities is widespread. In this paper we describe a system developed for engaging learners with the design of dialogue-based interactivity for mobile robots. With an emphasis on a web-based solution that is grounded in both a real robot system and a real application domain (a museum guide robot) our intent is to enhance the benefits to both driving research through potential user-group engagement, and enhancing motivation by providing a real application context for the learners involved. The proposed system is designed to be highly scalable to both many simultaneous users and to users of different age groups, and specifically enables direct deployment of implemented systems onto both real and simulated robots. Our observations from preliminary events, involving both children and adults, support the view that the system is both usable and successful in supporting engagement with the dialogue interactivity problem presented to the participants, with indications that this engagement can persist over an extended period of time.

    @inproceedings{lincoln40135,
    booktitle = {EDUROBOTICS 2018},
    month = {December},
    title = {Engaging Learners in Dialogue Interactivity Development for Mobile Robots},
    author = {Paul Baxter and Francesco Del Duchetto and Marc Hanheide},
    publisher = {Springer, Cham},
    year = {2019},
    doi = {10.1007/978-3-030-18141-3\_12},
    url = {http://eprints.lincoln.ac.uk/id/eprint/40135/},
    abstract = {The use of robots in educational and STEM engagement activities is widespread. In this paper we describe a system developed for engaging learners with the design of dialogue-based interactivity for mobile robots. With an emphasis on a web-based solution that is grounded in both a real robot system and a real application domain (a museum guide robot) our intent is to enhance the benefits to both driving research through potential user-group engagement, and enhancing motivation by providing a real application context for the learners involved. The proposed system is designed to be highly scalable to both many simultaneous users and to users of different age groups, and specifically enables direct deployment of implemented systems onto both real and simulated robots. Our observations from preliminary events, involving both children and adults, support the view that the system is both usable and successful in supporting engagement with the dialogue interactivity problem presented to the participants, with indications that this engagement can persist over an extended period of time.}
    }
  • Q. Fu, C. Hu, J. Peng, C. Rind, and S. Yue, “A robust collision perception visual neural network with specific selectivity to darker objects,” Ieee transactions on cybernetics, p. 1–15, 2019. doi:10.1109/TCYB.2019.2946090
    [BibTeX] [Abstract] [Download PDF]

    Building an efficient and reliable collision perception visual system is a challenging problem for future robots and autonomous vehicles. The biological visual neural networks, which have evolved over millions of years in nature and are working perfectly in the real world, could be ideal models for designing artificial vision systems. In the locust's visual pathways, a lobula giant movement detector (LGMD), that is, the LGMD2, has been identified as a looming perception neuron that responds most strongly to darker approaching objects relative to their backgrounds; similar situations which many ground vehicles and robots are often faced with. However, little has been done on modeling the LGMD2 and investigating its potential in robotics and vehicles. In this article, we build an LGMD2 visual neural network which possesses the similar collision selectivity of an LGMD2 neuron in locust via the modeling of biased-ON and -OFF pathways splitting visual signals into parallel ON/OFF channels. With stronger inhibition (bias) in the ON pathway, this model responds selectively to darker looming objects. The proposed model has been tested systematically with a range of stimuli including real-world scenarios. It has also been implemented in a micro-mobile robot and tested with real-time experiments. The experimental results have verified the effectiveness and robustness of the proposed model for detecting darker looming objects against various dynamic and cluttered backgrounds.

    @article{lincoln39137,
    month = {December},
    author = {Qinbing Fu and Cheng Hu and Jigen Peng and Claire Rind and Shigang Yue},
    title = {A Robust Collision Perception Visual Neural Network with Specific Selectivity to Darker Objects},
    publisher = {IEEE},
    journal = {IEEE Transactions on Cybernetics},
    doi = {10.1109/TCYB.2019.2946090},
    pages = {1--15},
    year = {2019},
    url = {http://eprints.lincoln.ac.uk/id/eprint/39137/},
    abstract = {Building an efficient and reliable collision perception visual system is a challenging problem for future robots and autonomous vehicles. The biological visual neural networks, which have evolved over millions of years in nature and are working perfectly in the real world, could be ideal models for designing artificial vision systems. In the locust's visual pathways, a lobula giant movement detector (LGMD), that is, the LGMD2, has been identified as a looming perception neuron that responds most strongly to darker approaching objects relative to their backgrounds; similar situations which many ground vehicles and robots are often faced with. However, little has been done on modeling the LGMD2 and investigating its potential in robotics and vehicles. In this article, we build an LGMD2 visual neural network which possesses the similar collision selectivity of an LGMD2 neuron in locust via the modeling of biased-ON and -OFF pathways splitting visual signals into parallel ON/OFF channels. With stronger inhibition (bias) in the ON pathway, this model responds selectively to darker looming objects. The proposed model has been tested systematically with a range of stimuli including real-world scenarios. It has also been implemented in a micro-mobile robot and tested with real-time experiments. The experimental results have verified the effectiveness and robustness of the proposed model for detecting darker looming objects against various dynamic and cluttered backgrounds.}
    }
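    The biased ON/OFF split described above can be illustrated in a few lines: luminance increments feed an ON channel, decrements an OFF channel, and stronger inhibition (bias) on ON makes darkening, i.e. dark approaching, objects dominate the response. A toy sketch, not the published model:

        import numpy as np

        def on_off_split(prev_frame, frame, on_bias=0.6):
            # Split the frame difference into rectified ON/OFF channels,
            # attenuating ON so darker looming objects dominate.
            diff = frame.astype(float) - prev_frame.astype(float)
            on = np.maximum(diff, 0.0) * on_bias   # biased (inhibited) ON pathway
            off = np.maximum(-diff, 0.0)           # OFF pathway
            return on, off

        # A dark patch expanding over a bright background drives only OFF:
        prev = np.full((8, 8), 200.0)
        cur = prev.copy()
        cur[2:6, 2:6] = 50.0
        on, off = on_off_split(prev, cur)
        print(on.sum(), off.sum())  # 0.0 2400.0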
  • B. Grieve, T. Duckett, M. Collison, L. Boyd, J. West, Y. Hujun, F. Arvin, and S. Pearson, “The challenges posed by global broadacre crops in delivering smart agri-robotic solutions: a fundamental rethink is required.,” Global food security, vol. 23, p. 116–124, 2019. doi:10.1016/j.gfs.2019.04.011
    [BibTeX] [Abstract] [Download PDF]

    Threats to global food security from multiple sources, such as population growth, ageing farming populations, meat consumption trends, climate-change effects on abiotic and biotic stresses, and the environmental impacts of agriculture, are well publicised. In addition, with ever-increasing tolerance of pests, diseases and weeds there is growing pressure on the traditional crop genetic and protective chemistry technologies of the “Green Revolution”. To ease the burden of these challenges, there has been a move to automate and robotise aspects of the farming process. This drive has focussed typically on higher-value sectors, such as horticulture and viticulture, that have relied on seasonal manual labour to maintain produce supply. In developed economies, and increasingly developing nations, pressure on labour supply has become unsustainable and forced the need for greater mechanisation and higher labour productivity. This paper makes the case that for broadacre crops, such as cereals, a wholly new approach is necessary, requiring the establishment of an integrated biology & physical engineering infrastructure, which can work in harmony with current breeding, chemistry and agronomic solutions. For broadacre crops the driving pressure is to sustainably intensify production; increase yields and/or productivity whilst reducing environmental impact. Additionally, our limited understanding of the complex interactions between the variations in pests, weeds, pathogens, soils, water, environment and crops is inhibiting growth in resource productivity and creating yield gaps. We argue that for agriculture to deliver knowledge-based sustainable intensification, a new generation of Smart Technologies is required, combining sensors and robotics with localised and/or cloud-based Artificial Intelligence (AI).

    @article{lincoln35842,
    volume = {23},
    month = {December},
    author = {Bruce Grieve and Tom Duckett and Martin Collison and Lesley Boyd and Jon West and Yin Hujun and Farshad Arvin and Simon Pearson},
    title = {The challenges posed by global broadacre crops in delivering smart agri-robotic solutions: A fundamental rethink is required.},
    publisher = {Elsevier},
    year = {2019},
    journal = {Global Food Security},
    doi = {10.1016/j.gfs.2019.04.011},
    pages = {116--124},
    url = {http://eprints.lincoln.ac.uk/id/eprint/35842/},
    abstract = {Threats to global food security from multiple sources, such as population growth, ageing farming populations, meat consumption trends, climate-change effects on abiotic and biotic stresses, and the environmental impacts of agriculture, are well publicised. In addition, with ever-increasing tolerance of pests, diseases and weeds there is growing pressure on the traditional crop genetic and protective chemistry technologies of the 'Green Revolution'. To ease the burden of these challenges, there has been a move to automate and robotise aspects of the farming process. This drive has focussed typically on higher-value sectors, such as horticulture and viticulture, that have relied on seasonal manual labour to maintain produce supply. In developed economies, and increasingly developing nations, pressure on labour supply has become unsustainable and forced the need for greater mechanisation and higher labour productivity. This paper makes the case that for broadacre crops, such as cereals, a wholly new approach is necessary, requiring the establishment of an integrated biology \& physical engineering infrastructure, which can work in harmony with current breeding, chemistry and agronomic solutions. For broadacre crops the driving pressure is to sustainably intensify production; increase yields and/or productivity whilst reducing environmental impact. Additionally, our limited understanding of the complex interactions between the variations in pests, weeds, pathogens, soils, water, environment and crops is inhibiting growth in resource productivity and creating yield gaps. We argue that for agriculture to deliver knowledge-based sustainable intensification, a new generation of Smart Technologies is required, combining sensors and robotics with localised and/or cloud-based Artificial Intelligence (AI).}
    }
  • H. Cuayahuitl, D. Lee, S. Ryu, Y. Cho, S. Choi, S. Indurthi, S. Yu, H. Choi, I. Hwang, and J. Kim, “Ensemble-based deep reinforcement learning for chatbots,” Neurocomputing, vol. 366, p. 118–130, 2019. doi:10.1016/j.neucom.2019.08.007
    [BibTeX] [Abstract] [Download PDF]

    Trainable chatbots that exhibit fluent and human-like conversations remain a big challenge in artificial intelligence. Deep Reinforcement Learning (DRL) is promising for addressing this challenge, but its successful application remains an open question. This article describes a novel ensemble-based approach applied to value-based DRL chatbots, which use finite action sets as a form of meaning representation. In our approach, while dialogue actions are derived from sentence clustering, the training datasets in our ensemble are derived from dialogue clustering. The latter aim to induce specialised agents that learn to interact in a particular style. In order to facilitate neural chatbot training using our proposed approach, we assume dialogue data in raw text only, without any manually-labelled data. Experimental results using chitchat data reveal that (1) near human-like dialogue policies can be induced, (2) generalisation to unseen data is a difficult problem, and (3) training an ensemble of chatbot agents is essential for improved performance over using a single agent. In addition to evaluations using held-out data, our results are further supported by a human evaluation that rated dialogues in terms of fluency, engagingness and consistency, which revealed that our proposed dialogue rewards strongly correlate with human judgements.

    @article{lincoln36668,
    volume = {366},
    month = {November},
    author = {Heriberto Cuayahuitl and Donghyeon Lee and Seonghan Ryu and Yongjin Cho and Sungja Choi and Satish Indurthi and Seunghak Yu and Hyungtak Choi and Inchul Hwang and Jihie Kim},
    title = {Ensemble-Based Deep Reinforcement Learning for Chatbots},
    publisher = {Elsevier},
    year = {2019},
    journal = {Neurocomputing},
    doi = {10.1016/j.neucom.2019.08.007},
    pages = {118--130},
    url = {http://eprints.lincoln.ac.uk/id/eprint/36668/},
    abstract = {Trainable chatbots that exhibit fluent and human-like conversations remain a big challenge in artificial intelligence. Deep Reinforcement Learning (DRL) is promising for addressing this challenge, but its successful application remains an open question. This article describes a novel ensemble-based approach applied to value-based DRL chatbots, which use finite action sets as a form of meaning representation. In our approach, while dialogue actions are derived from sentence clustering, the training datasets in our ensemble are derived from dialogue clustering. The latter aim to induce specialised agents that learn to interact in a particular style. In order to facilitate neural chatbot training using our proposed approach, we assume dialogue data in raw text only, without any manually-labelled data. Experimental results using chitchat data reveal that (1) near human-like dialogue policies can be induced, (2) generalisation to unseen data is a difficult problem, and (3) training an ensemble of chatbot agents is essential for improved performance over using a single agent. In addition to evaluations using held-out data, our results are further supported by a human evaluation that rated dialogues in terms of fluency, engagingness and consistency, which revealed that our proposed dialogue rewards strongly correlate with human judgements.}
    }
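    At decision time, one simple way to combine such an ensemble is a majority vote over each specialised agent's highest-valued action. The combination rule below is an illustrative assumption, not necessarily the one used in the paper:

        from collections import Counter

        def ensemble_action(q_values_per_agent):
            # Each agent votes for its best-scoring action; the majority wins.
            votes = [max(q, key=q.get) for q in q_values_per_agent]
            return Counter(votes).most_common(1)[0][0]

        # Three agents scoring the same three candidate dialogue actions:
        print(ensemble_action([{"greet": 0.9, "ask": 0.4, "joke": 0.1},
                               {"greet": 0.2, "ask": 0.7, "joke": 0.3},
                               {"greet": 0.8, "ask": 0.5, "joke": 0.6}]))  # greet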
  • M. Sorour, K. Elgeneidy, A. Srinivasan, and M. Hanheide, “Grasping unknown objects based on gripper workspace spheres,” in 2019 ieee/rsj international conference on intelligent robots and systems (iros), 2019. doi:10.1109/IROS40897.2019.8967989
    [BibTeX] [Abstract] [Download PDF]

    In this paper, we present a novel grasp planning algorithm for unknown objects given a registered point cloud of the target from different views. The proposed methodology requires no prior knowledge of the object, nor offline learning. In our approach, the gripper kinematic model is used to generate a point cloud of each finger workspace, which is then filled with spheres. At run-time, the object is first segmented and its major axis is computed; the main grasping action is constrained to a plane perpendicular to this axis. The object is then uniformly sampled and scanned for various gripper poses that assure at least one object point is located in the workspace of each finger. In addition, collision checks with the object or the table are performed using a computationally inexpensive gripper shape approximation. Our methodology is both time efficient (taking less than 1.5 seconds on average) and versatile. Successful experiments have been conducted on a simple jaw gripper (Franka Panda gripper) as well as a complex, high Degree of Freedom (DoF) hand (Allegro hand).

    @inproceedings{lincoln36370,
    month = {November},
    author = {Mohamed Sorour and Khaled Elgeneidy and Aravinda Srinivasan and Marc Hanheide},
    booktitle = {2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
    title = {Grasping Unknown Objects Based on Gripper Workspace Spheres},
    publisher = {IEEE},
    journal = {Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2019)},
    doi = {10.1109/IROS40897.2019.8967989},
    year = {2019},
    url = {http://eprints.lincoln.ac.uk/id/eprint/36370/},
    abstract = {In this paper, we present a novel grasp planning algorithm for unknown objects given a registered point cloud of the target from different views. The proposed methodology requires no prior knowledge of the object, nor offline learning. In our approach, the gripper kinematic model is used to generate a point cloud of each finger workspace, which is then filled with spheres. At run-time, the object is first segmented and its major axis is computed; the main grasping action is constrained to a plane perpendicular to this axis. The object is then uniformly sampled and scanned for various gripper poses that assure at least one object point is located in the workspace of each finger. In addition, collision checks with the object or the table are performed using a computationally inexpensive gripper shape approximation. Our methodology is both time efficient (taking less than 1.5 seconds on average) and versatile. Successful experiments have been conducted on a simple jaw gripper (Franka Panda gripper) as well as a complex, high Degree of Freedom (DoF) hand (Allegro hand).}
    }
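    The core feasibility test described above, that every finger's sphere-filled workspace must contain at least one object point, reduces to a point-in-sphere check per finger. A toy version with placeholder sphere centres and radius:

        import numpy as np

        def pose_is_reachable(object_points, finger_workspaces, radius):
            # Keep a candidate gripper pose only if at least one object point
            # lies inside the sphere-filled workspace of every finger.
            for centers in finger_workspaces:
                d = np.linalg.norm(object_points[:, None] - centers[None, :], axis=-1)
                if not (d <= radius).any():
                    return False
            return True

        pts = np.array([[0.00, 0.02, 0.10], [0.00, -0.02, 0.10]])
        fingers = [np.array([[0.0, 0.03, 0.10]]), np.array([[0.0, -0.03, 0.10]])]
        print(pose_is_reachable(pts, fingers, radius=0.02))  # True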
  • L. Baronti, M. Alston, N. Mavrakis, A. M. G. Esfahani, and M. Castellani, “Primitive shape fitting in point clouds using the bees algorithm,” Advances in automation and robotics, vol. 9, iss. 23, 2019. doi:10.3390/app9235198
    [BibTeX] [Abstract] [Download PDF]

    In this study, the problem of fitting shape primitives to point cloud scenes was tackled as a parameter optimisation procedure and solved using the popular Bees Algorithm. Tested on three sets of clean and differently blurred point cloud models, the Bees Algorithm obtained performances comparable to those obtained using the state-of-the-art RANSAC method, and superior to those obtained by an evolutionary algorithm. Shape fitting times were compatible with real-time application. The main advantage of the Bees Algorithm over standard methods is that it doesn't rely on ad hoc assumptions about the nature of the point cloud model, such as the RANSAC approximation tolerance.

    @article{lincoln39027,
    volume = {9},
    number = {23},
    month = {November},
    author = {Luca Baronti and Mark Alston and Nikos Mavrakis and Amir Masoud Ghalamzan Esfahani and Marco Castellani},
    title = {Primitive Shape Fitting in Point Clouds Using the Bees Algorithm},
    publisher = {MDPI},
    year = {2019},
    journal = {Advances in Automation and Robotics},
    doi = {10.3390/app9235198},
    url = {http://eprints.lincoln.ac.uk/id/eprint/39027/},
    abstract = {In this study, the problem of fitting shape primitives to point cloud scenes was tackled as a parameter optimisation procedure and solved using the popular Bees Algorithm. Tested on three sets of clean and differently blurred point cloud models, the Bees Algorithm obtained performances comparable to those obtained using the state-of-the-art RANSAC method, and superior to those obtained by an evolutionary algorithm. Shape fitting times were compatible with real-time application. The main advantage of the Bees Algorithm over standard methods is that it doesn't rely on ad hoc assumptions about the nature of the point cloud model, such as the RANSAC approximation tolerance.}
    }
  • F. Camara, N. Merat, and C. Fox, “A heuristic model for pedestrian intention estimation,” in Ieee intelligent transportation systems conference, 2019. doi:10.1109/ITSC.2019.8917195
    [BibTeX] [Abstract] [Download PDF]

    Understanding pedestrian behaviour and controlling interactions with pedestrians is of critical importance for autonomous vehicles, but remains a complex and challenging problem. This study infers pedestrian intent during possible road-crossing interactions, to assist autonomous vehicle decisions to yield or not yield when approaching them, and tests a simple heuristic model of intent on pedestrian-vehicle trajectory data for the first time. It relies on a heuristic approach based on the observed positions of the agents over time. The method can predict pedestrian crossing intent, crossing or stopping, with 96% accuracy by the time the pedestrian reaches the curbside, on the standard Daimler pedestrian dataset. This result is important in demarcating scenarios which have a clear winner and can be predicted easily with the simple heuristic, from those which may require more complex game-theoretic models to predict and control.

    @inproceedings{lincoln36758,
    booktitle = {IEEE Intelligent Transportation Systems Conference},
    month = {November},
    title = {A heuristic model for pedestrian intention estimation},
    author = {Fanta Camara and Natasha Merat and Charles Fox},
    publisher = {IEEE},
    year = {2019},
    doi = {10.1109/ITSC.2019.8917195},
    url = {http://eprints.lincoln.ac.uk/id/eprint/36758/},
    abstract = {Understanding pedestrian behaviour and controlling interactions with pedestrians is of critical importance for autonomous vehicles, but remains a complex and challenging problem. This study infers pedestrian intent during possible road-crossing interactions, to assist autonomous vehicle decisions to yield or not yield when approaching them, and tests a simple heuristic model of intent on pedestrian-vehicle trajectory data for the first time. It relies on a heuristic approach based on the observed positions of the agents over time. The method can predict pedestrian crossing intent, crossing or stopping, with 96\% accuracy by the time the pedestrian reaches the curbside, on the standard Daimler pedestrian dataset. This result is important in demarcating scenarios which have a clear winner and can be predicted easily with the simple heuristic, from those which may require more complex game-theoretic models to predict and control.}
    }
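    A position-based heuristic of the kind evaluated above can be as simple as constant-velocity extrapolation towards the curb line. The sketch below is illustrative and not the paper's exact rule; the curb threshold, time step and horizon are assumptions:

        import numpy as np

        def crossing_intent(track, curb_y, dt=0.1, horizon=1.0):
            # Extrapolate the pedestrian's most recent velocity and predict
            # 'crossing' if the forecast carries them past the curb line.
            track = np.asarray(track, dtype=float)   # sequence of (x, y) positions
            velocity = (track[-1] - track[-2]) / dt
            predicted = track[-1] + horizon * velocity
            return "crossing" if predicted[1] > curb_y else "stopping"

        print(crossing_intent([(0.0, -1.2), (0.0, -0.9)], curb_y=0.0))  # crossing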
  • F. Camara, P. Dickinson, N. Merat, and C. Fox, “Towards game theoretic av controllers: measuring pedestrian behaviour in virtual reality,” in Ieee/rsj international conference on intelligent robots and systems (iros 2019) workshops, 2019.
    [BibTeX] [Abstract] [Download PDF]

    Understanding pedestrian interaction is of great importance for autonomous vehicles (AVs). The present study investigates pedestrian behaviour during crossing scenarios with an autonomous vehicle using Virtual Reality. The self-driving car is driven by a game theoretic controller which adapts its driving style to pedestrian crossing behaviour. We found that subjects value collision avoidance about 8 times more than saving 0.02 seconds. A previous lab study found time saving to be more important than collision avoidance in a highly unrealistic board-game-style version of the game. The present result suggests that the VR simulation reproduces real world road-crossings better than the lab study and provides a reliable test-bed for the development of game theoretic models for AVs.

    @inproceedings{lincoln37261,
    booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2019) Workshops},
    month = {November},
    title = {Towards game theoretic AV controllers: measuring pedestrian behaviour in Virtual Reality},
    author = {Fanta Camara and Patrick Dickinson and Natasha Merat and Charles Fox},
    publisher = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2019) Workshops},
    year = {2019},
    url = {http://eprints.lincoln.ac.uk/id/eprint/37261/},
    abstract = {Understanding pedestrian interaction is of great importance for autonomous vehicles (AVs). The present study investigates pedestrian behaviour during crossing scenarios with an autonomous vehicle using Virtual Reality. The self-driving car is driven by a game theoretic controller which adapts its driving style to pedestrian crossing behaviour. We found that subjects value collision avoidance about 8 times more than saving 0.02 seconds. A previous lab study found time saving to be more important than collision avoidance in a highly unrealistic board game style version of the game. The present result suggests that the VR simulation reproduces real world road-crossings better than the lab study and provides a reliable test-bed for the development of game theoretic models for AVs.}
    }
  • A. Zaganidis, A. Zerntev, T. Duckett, and G. Cielniak, “Semantically assisted loop closure in slam using ndt histograms,” in International conference on intelligent robots and systems (iros), 2019.
    [BibTeX] [Abstract] [Download PDF]

    Precise knowledge of pose is of great importance for reliable operation of mobile robots in outdoor environments. Simultaneous localization and mapping (SLAM) is the online construction of a map during exploration of an environment. One of the components of SLAM is loop closure detection, identifying that the same location has been visited and is present on the existing map, and localizing against it. We have shown in previous work that using semantics from a deep segmentation network in conjunction with the Normal Distributions Transform point cloud registration improves the robustness, speed and accuracy of lidar odometry. In this work we extend the method for loop closure detection, using the labels already available from local registration into NDT Histograms, and we present a SLAM pipeline based on Semantic assisted NDT and PointNet++. We experimentally demonstrate on sequences from the KITTI benchmark that the map descriptor we propose outperforms NDT Histograms without semantics, and we validate its use on a SLAM task.

    @inproceedings{lincoln37750,
    booktitle = {International Conference on Intelligent Robots and Systems (IROS)},
    month = {November},
    title = {Semantically Assisted Loop Closure in SLAM Using NDT Histograms},
    author = {Anestis Zaganidis and Alexandros Zerntev and Tom Duckett and Grzegorz Cielniak},
    year = {2019},
    url = {http://eprints.lincoln.ac.uk/id/eprint/37750/},
    abstract = {Precise knowledge of pose is of great importance for reliable operation of mobile robots in outdoor environments. Simultaneous localization and mapping (SLAM) is the online construction of a map during exploration of an environment. One of the components of SLAM is loop closure detection, identifying that the same location has been visited and is present on the existing map, and localizing against it. We have shown in previous work that using semantics from a deep segmentation network in conjunction with the Normal Distributions Transform point cloud registration improves the robustness, speed and accuracy of lidar odometry. In this work we extend the method for loop closure detection, using the labels already available from local registration into NDT Histograms, and we present a SLAM pipeline based on Semantic assisted NDT and PointNet++. We experimentally demonstrate on sequences from the KITTI benchmark that the map descriptor we propose outperforms NDT Histograms without semantics, and we validate its use on a SLAM task.}
    }
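    The descriptor idea is straightforward to sketch: histogram NDT cell shape classes jointly with their semantic labels to give a per-scan signature for loop-closure candidates. The shape thresholds, label layout and L1 matching rule below are assumptions (the paper's full descriptor also bins by range); covariances are assumed positive-definite:

    import numpy as np

    def ndt_shape_class(cov):
        """Classify an NDT cell as spherical/planar/linear from the eigenvalue
        ratios of its 3x3 covariance (a common NDT-histogram convention)."""
        w = np.sort(np.linalg.eigvalsh(cov))   # ascending eigenvalues
        if w[0] / w[2] > 0.5:
            return 0                           # spherical
        if w[1] / w[2] > 0.5:
            return 1                           # planar
        return 2                               # linear

    def semantic_ndt_histogram(cells, num_labels=4):
        """Per-scan descriptor: joint histogram of (NDT shape class, semantic
        label). `cells` holds (covariance, label) pairs."""
        h = np.zeros((3, num_labels))
        for cov, label in cells:
            h[ndt_shape_class(cov), label] += 1.0
        return (h / max(h.sum(), 1.0)).ravel()   # normalise across scan sizes

    def is_loop_candidate(h_query, h_map, threshold=0.2):
        """Two places match when their descriptors are close (L1 distance)."""
        return float(np.abs(h_query - h_map).sum()) < threshold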
  • J. Lock, I. Gilchrist, G. Cielniak, and N. Bellotto, “Bone-conduction audio interface to guide people with visual impairments,” in International workshop on assistive engineering and information technology (aeit 2019), 2019.
    [BibTeX] [Abstract] [Download PDF]

    The ActiVis project’s aim is to build a mobile guidance aid to help people with limited vision find objects in an unknown environment. This system uses bone-conduction headphones to transmit audio signals to the user and requires an effective non-visual interface. To this end, we propose a new audio-based interface that uses a spatialised signal to convey a target’s position on the horizontal plane. The vertical position on the median plane is given by adjusting the tone’s pitch to overcome the audio localisation limitations of bone-conduction headphones. This interface is validated through a set of experiments with blindfolded and visually impaired participants.

    @inproceedings{lincoln36793,
    booktitle = {International Workshop on Assistive Engineering and Information Technology (AEIT 2019)},
    month = {November},
    title = {Bone-Conduction Audio Interface to Guide People with Visual Impairments},
    author = {Jacobus Lock and Iain Gilchrist and Grzegorz Cielniak and Nicola Bellotto},
    year = {2019},
    url = {http://eprints.lincoln.ac.uk/id/eprint/36793/},
    abstract = {The ActiVis project's aim is to build a mobile guidance aid to help people with limited vision find objects in an unknown environment. This system uses bone-conduction headphones to transmit audio signals to the user and requires an effective non-visual interface. To this end, we propose a new audio-based interface that uses a spatialised signal to convey a target's position on the horizontal plane. The vertical position on the median plane is given by adjusting the tone's pitch to overcome the audio localisation limitations of bone-conduction headphones. This interface is validated through a set of experiments with blindfolded and visually impaired participants.}
    }
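    The interface maps the two target coordinates onto two independent audio parameters; a small sketch of one plausible mapping (the pan law, centre frequency and ranges are assumptions, not the project's tuned values):

    import numpy as np

    def audio_cue(azimuth_deg, elevation_deg,
                  f_centre=500.0, f_range=400.0, max_az=90.0):
        """Map a target direction to (left_gain, right_gain, tone_hz):
        horizontal position -> stereo panning (standing in for the
        spatialised signal); vertical position -> pitch, since
        bone-conduction headphones localise elevation poorly."""
        pan = np.clip(azimuth_deg / max_az, -1.0, 1.0)    # -1 left .. +1 right
        left = np.sqrt(0.5 * (1.0 - pan))                 # equal-power pan law
        right = np.sqrt(0.5 * (1.0 + pan))
        tone_hz = f_centre + f_range * np.clip(elevation_deg / 45.0, -1.0, 1.0)
        return left, right, tone_hz

    print(audio_cue(30.0, 15.0))   # target right of centre and slightly above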
  • F. Del Duchetto, P. Baxter, and M. Hanheide, “Lindsey the tour guide robot – usage patterns in a museum long-term deployment,” in International conference on robot & human interactive communication (ro-man), New Delhi, 2019. doi:10.1109/RO-MAN46459.2019.8956329
    [BibTeX] [Abstract] [Download PDF]

    The long-term deployment of autonomous robots co-located with humans in real-world scenarios remains a challenging problem. In this paper, we present the “Lindsey” tour guide robot system in which we attempt to increase the social capability of current state-of-the-art robotic technologies. The robot is currently deployed at a museum displaying local archaeology where it is providing guided tours and information to visitors. The robot is operating autonomously daily, navigating around the museum and engaging with the public, with on-site assistance from roboticists only in cases of hardware/software malfunctions. In a deployment lasting seven months up to now, it has travelled nearly 300km and has delivered more than 2300 guided tours. First, we describe the robot framework and the management interfaces implemented. We then analyse the data collected up to now with the goal of understanding and modelling the visitors’ behavior in terms of their engagement with the technology. These data suggest that while short-term engagement is readily gained, continued engagement with the robot tour guide is likely to require more refined and robust socially interactive behaviours. The deployed system presents us with an opportunity to empirically address these issues.

    @inproceedings{lincoln37348,
    month = {October},
    author = {Francesco Del Duchetto and Paul Baxter and Marc Hanheide},
    booktitle = {International Conference on Robot \& Human Interactive Communication (RO-MAN)},
    address = {New Delhi},
    title = {Lindsey the Tour Guide Robot - Usage Patterns in a Museum Long-Term Deployment},
    publisher = {IEEE},
    doi = {10.1109/RO-MAN46459.2019.8956329},
    year = {2019},
    url = {http://eprints.lincoln.ac.uk/id/eprint/37348/},
    abstract = {The long-term deployment of autonomous robots co-located with humans in real-world scenarios remains a challenging problem. In this paper, we present the ``Lindsey'' tour guide robot system in which we attempt to increase the social capability of current state-of-the-art robotic technologies. The robot is currently deployed at a museum displaying local archaeology where it is providing guided tours and information to visitors. The robot is operating autonomously daily, navigating around the museum and engaging with the public, with on-site assistance from roboticists only in cases of hardware/software malfunctions. In a deployment lasting seven months up to now, it has travelled nearly 300km and has delivered more than 2300 guided tours. First, we describe the robot framework and the management interfaces implemented. We then analyse the data collected up to now with the goal of understanding and modelling the visitors' behavior in terms of their engagement with the technology. These data suggest that while short-term engagement is readily gained, continued engagement with the robot tour guide is likely to require more refined and robust socially interactive behaviours. The deployed system presents us with an opportunity to empirically address these issues.}
    }
  • T. Krajnik, T. Vintr, S. M. Mellado, J. P. Fentanes, G. Cielniak, O. M. Mozos, G. Broughton, and T. Duckett, “Warped hypertime representations for long-term autonomy of mobile robots,” Ieee robotics and automation letters, vol. 4, iss. 4, p. 3310–3317, 2019. doi:10.1109/LRA.2019.2926682
    [BibTeX] [Abstract] [Download PDF]

    This letter presents a novel method for introducing time into discrete and continuous spatial representations used in mobile robotics, by modeling long-term, pseudo-periodic variations caused by human activities or natural processes. Unlike previous approaches, the proposed method does not treat time and space separately, and its continuous nature respects both the temporal and spatial continuity of the modeled phenomena. The key idea is to extend the spatial model with a set of wrapped time dimensions that represent the periodicities of the observed events. By performing clustering over this extended representation, we obtain a model that allows the prediction of probabilistic distributions of future states and events in both discrete and continuous spatial representations. We apply the proposed algorithm to several long-term datasets acquired by mobile robots and show that the method enables a robot to predict future states of representations with different dimensions. The experiments further show that the method achieves more accurate predictions than the previous state of the art.

    @article{lincoln36962,
    volume = {4},
    number = {4},
    month = {October},
    author = {Tomas Krajnik and Tomas Vintr and Sergi Molina Mellado and Jaime Pulido Fentanes and Grzegorz Cielniak and Oscar Martinez Mozos and George Broughton and Tom Duckett},
    title = {Warped Hypertime Representations for Long-Term Autonomy of Mobile Robots},
    publisher = {IEEE},
    year = {2019},
    journal = {IEEE Robotics and Automation Letters},
    doi = {10.1109/LRA.2019.2926682},
    pages = {3310--3317},
    url = {http://eprints.lincoln.ac.uk/id/eprint/36962/},
    abstract = {This letter presents a novel method for introducing time into discrete and continuous spatial representations used in mobile robotics, by modeling long-term, pseudo-periodic variations caused by human activities or natural processes. Unlike previous approaches, the proposed method does not treat time and space separately, and its continuous nature respects both the temporal and spatial continuity of the modeled phenomena. The key idea is to extend the spatial model with a set of wrapped time dimensions that represent the periodicities of the observed events. By performing clustering over this extended representation, we obtain a model that allows the prediction of probabilistic distributions of future states and events in both discrete and continuous spatial representations. We apply the proposed algorithm to several long-term datasets acquired by mobile robots and show that the method enables a robot to predict future states of representations with different dimensions. The experiments further show that the method achieves more accurate predictions than the previous state of the art.}
    }
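    The core construction is easy to reproduce: each modelled periodicity adds a pair of wrapped time dimensions (cos, sin) to the spatial state before clustering. A sketch under assumed daily/weekly periods and synthetic data; the paper's own clustering and evaluation details differ:

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def hypertime(xy, t, periods=(86400.0, 7 * 86400.0)):
        """Append wrapped time dimensions (cos, sin of 2*pi*t/T) for each
        modelled period T (here a day and a week, example values) to the
        2-D positions, following the warped-hypertime idea."""
        out = [np.asarray(xy, dtype=float)]
        t = np.asarray(t, dtype=float)
        for T in periods:
            phase = 2.0 * np.pi * t / T
            out.append(np.stack([np.cos(phase), np.sin(phase)], axis=1))
        return np.hstack(out)

    # Cluster events in the extended space; the fitted density then predicts
    # how likely an event is at a given place *and* time of day/week.
    rng = np.random.default_rng(0)
    xy = rng.uniform(0.0, 10.0, size=(500, 2))       # observed event positions
    t = rng.uniform(0.0, 14 * 86400.0, size=500)     # their timestamps (seconds)
    model = GaussianMixture(n_components=5, random_state=0).fit(hypertime(xy, t))
    query = hypertime([[5.0, 5.0]], [15 * 86400.0 + 8 * 3600.0])
    print(model.score_samples(query))                # log-density at that state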
  • R. Madigan, S. Nordhoff, C. Fox, R. E. Amina, T. Louw, M. Wilbrink, A. Schieben, and N. Merat, “Understanding interactions between automated road transport systems and other road users: a video analysis,” Transportation research part f, vol. 66, p. 196–213, 2019. doi:10.1016/j.trf.2019.09.006
    [BibTeX] [Abstract] [Download PDF]

    If automated vehicles (AVs) are to move efficiently through the traffic environment, there is a need for them to interact and communicate with other road users in a comprehensible and predictable manner. For this reason, an understanding of the interaction requirements of other road users is needed. The current study investigated these requirements through an analysis of 22 hours of video footage of the CityMobil2 AV demonstrations in La Rochelle (France) and Trikala (Greece). Manual and automated video-analysis techniques were used to identify typical interaction patterns between AVs and other road users. Results indicate that road infrastructure and road user factors had a major impact on the type of interactions that arose between AVs and other road users. Road infrastructure features such as road width, and the presence or absence of zebra crossings had an impact on road users’ trajectory decisions while approaching an AV. Where possible, pedestrians and cyclists appeared to leave as much space as possible between their trajectories and that of the AV. However, in situations where the infrastructure did not allow for the separation of traffic, risky behaviours were more likely to emerge, with cyclists, in particular, travelling closely alongside the AVs on narrow paths of the road, rather than waiting for the AV to pass. In addition, the types of interaction varied considerably across socio-demographic groups, with females and older users more likely to show cautionary behaviour around the AVs than males, or younger road users. Overall, the results highlight the importance of implementing the correct infrastructure to support the safe introduction of AVs, while also ensuring that the behaviour of the AV matches other road users’ expectations as closely as possible in order to avoid traffic conflicts.

    @article{lincoln36914,
    volume = {66},
    month = {October},
    author = {Ruth Madigan and Sina Nordhoff and Charles Fox and Roja Ezzati Amina and Tyron Louw and Marc Wilbrink and Anna Schieben and Natasha Merat},
    title = {Understanding interactions between Automated Road Transport Systems and other road users: A video analysis},
    publisher = {Elsevier},
    year = {2019},
    journal = {Transportation Research Part F},
    doi = {10.1016/j.trf.2019.09.006},
    pages = {196--213},
    url = {http://eprints.lincoln.ac.uk/id/eprint/36914/},
    abstract = {If automated vehicles (AVs) are to move efficiently through the traffic environment, there is a need for them to interact and communicate with other road users in a comprehensible and predictable manner. For this reason, an understanding of the interaction requirements of other road users is needed. The current study investigated these requirements through an analysis of 22 hours of video footage of the CityMobil2 AV demonstrations in La Rochelle (France) and Trikala (Greece). Manual and automated video-analysis techniques were used to identify typical interaction patterns between AVs and other road users. Results indicate that road infrastructure and road user factors had a major impact on the type of interactions that arose between AVs and other road users. Road infrastructure features such as road width, and the presence or absence of zebra crossings had an impact on road users' trajectory decisions while approaching an AV. Where possible, pedestrians and cyclists appeared to leave as much space as possible between their trajectories and that of the AV. However, in situations where the infrastructure did not allow for the separation of traffic, risky behaviours were more likely to emerge, with cyclists, in particular, travelling closely alongside the AVs on narrow paths of the road, rather than waiting for the AV to pass. In addition, the types of interaction varied considerably across socio-demographic groups, with females and older users more likely to show cautionary behaviour around the AVs than males, or younger road users. Overall, the results highlight the importance of implementing the correct infrastructure to support the safe introduction of AVs, while also ensuring that the behaviour of the AV matches other road users' expectations as closely as possible in order to avoid traffic conflicts.}
    }
  • E. Senft, S. Lemaignan, P. Baxter, M. Bartlett, and T. Belpaeme, “Teaching robots social autonomy from in situ human guidance,” Science robotics, vol. 4, iss. 35, p. eaat1186, 2019. doi:10.1126/scirobotics.aat1186
    [BibTeX] [Abstract] [Download PDF]

    Striking the right balance between robot autonomy and human control is a core challenge in social robotics, in both technical and ethical terms. On the one hand, extended robot autonomy offers the potential for increased human productivity and for the off-loading of physical and cognitive tasks. On the other hand, making the most of human technical and social expertise, as well as maintaining accountability, is highly desirable. This is particularly relevant in domains such as medical therapy and education, where social robots hold substantial promise, but where there is a high cost to poorly performing autonomous systems, compounded by ethical concerns. We present a field study in which we evaluate SPARC (supervised progressively autonomous robot competencies), an innovative approach addressing this challenge whereby a robot progressively learns appropriate autonomous behavior from in situ human demonstrations and guidance. Using online machine learning techniques, we demonstrate that the robot could effectively acquire legible and congruent social policies in a high-dimensional child-tutoring situation needing only a limited number of demonstrations while preserving human supervision whenever desirable. By exploiting human expertise, our technique enables rapid learning of autonomous social and domain-specific policies in complex and nondeterministic environments. Last, we underline the generic properties of SPARC and discuss how this paradigm is relevant to a broad range of difficult human-robot interaction scenarios.

    @article{lincoln38234,
    volume = {4},
    number = {35},
    month = {October},
    author = {Emmanuel Senft and S{\'e}verin Lemaignan and Paul Baxter and Madeleine Bartlett and Tony Belpaeme},
    title = {Teaching robots social autonomy from in situ human guidance},
    publisher = {American Association for the Advancement of Science},
    year = {2019},
    journal = {Science Robotics},
    doi = {10.1126/scirobotics.aat1186},
    pages = {eaat1186},
    url = {http://eprints.lincoln.ac.uk/id/eprint/38234/},
    abstract = {Striking the right balance between robot autonomy and human control is a core challenge in social robotics, in both technical and ethical terms. On the one hand, extended robot autonomy offers the potential for increased human productivity and for the off-loading of physical and cognitive tasks. On the other hand, making the most of human technical and social expertise, as well as maintaining accountability, is highly desirable. This is particularly relevant in domains such as medical therapy and education, where social robots hold substantial promise, but where there is a high cost to poorly performing autonomous systems, compounded by ethical concerns. We present a field study in which we evaluate SPARC (supervised progressively autonomous robot competencies), an innovative approach addressing this challenge whereby a robot progressively learns appropriate autonomous behavior from in situ human demonstrations and guidance. Using online machine learning techniques, we demonstrate that the robot could effectively acquire legible and congruent social policies in a high-dimensional child-tutoring situation needing only a limited number of demonstrations while preserving human supervision whenever desirable. By exploiting human expertise, our technique enables rapid learning of autonomous social and domain-specific policies in complex and nondeterministic environments. Last, we underline the generic properties of SPARC and discuss how this paradigm is relevant to a broad range of difficult human-robot interaction scenarios.}
    }
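    A toy sketch of the supervision loop as we read it from the abstract (the action set, features and online learner are assumptions, not the authors' implementation): the robot proposes an action, the human may override it before execution, and whatever is executed becomes an online training label.

    import numpy as np
    from sklearn.linear_model import SGDClassifier

    ACTIONS = [0, 1, 2]   # e.g. wait / encourage / give hint (an assumed set)

    class SparcLikePolicy:
        """SPARC-style loop sketch: propose, allow human override, learn."""

        def __init__(self, n_features):
            self.clf = SGDClassifier()
            # Prime the online learner so partial_fit knows every class.
            self.clf.partial_fit(np.zeros((1, n_features)), [ACTIONS[0]],
                                 classes=np.array(ACTIONS))

        def propose(self, state):
            return int(self.clf.predict(state.reshape(1, -1))[0])

        def execute(self, state, human_override=None):
            action = self.propose(state) if human_override is None else human_override
            # Learn in situ from the action that was actually executed/approved.
            self.clf.partial_fit(state.reshape(1, -1), [action])
            return action

    policy = SparcLikePolicy(n_features=4)
    state = np.array([0.2, 0.1, 0.9, 0.0])
    policy.execute(state, human_override=2)   # supervisor corrects early on
    print(policy.propose(state))              # the policy's suggestion afterwards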
  • A. Kucukyilmaz and I. Issak, “Online identification of interaction behaviors from haptic data during collaborative object transfer,” Ieee robotics and automation letters, p. 1–1, 2019. doi:10.1109/LRA.2019.2945261
    [BibTeX] [Abstract] [Download PDF]

    Joint object transfer is a complex task, which is less structured and less specific than what exists in several industrial settings. When two humans are involved in such a task, they cooperate through different modalities to understand the interaction states during operation and mutually adapt to one another’s actions. Mutual adaptation implies that both partners can identify how well they collaborate (i.e. infer about the interaction state) and act accordingly. These interaction states can define whether the partners work in harmony, face conflicts, or remain passive during interaction. Understanding how two humans work together during physical interactions is important when exploring the ways a robotic assistant should operate under similar settings. This study acts as a first step to implement an automatic classification mechanism during ongoing collaboration to identify the interaction state during object co-manipulation. The classification is done on a dataset consisting of data from 40 subjects, who are partnered to form 20 dyads. The dyads experiment in a physical human-human interaction (pHHI) scenario to move an object in a haptics-enabled virtual environment to reach predefined goal configurations. In this study, we propose a sliding-window approach for feature extraction and demonstrate the online classification methodology to identify interaction patterns. We evaluate our approach using 1) a support vector machine classifier (SVMc) and 2) a Gaussian Process classifier (GPc) for multi-class classification, and achieve over 80% accuracy with both classifiers when identifying general interaction types.

    @article{lincoln37631,
    month = {October},
    author = {Ayse Kucukyilmaz and Illimar Issak},
    title = {Online Identification of Interaction Behaviors from Haptic Data during Collaborative Object Transfer},
    publisher = {IEEE},
    journal = {IEEE Robotics and Automation Letters},
    doi = {10.1109/LRA.2019.2945261},
    pages = {1--1},
    year = {2019},
    url = {http://eprints.lincoln.ac.uk/id/eprint/37631/},
    abstract = {Joint object transfer is a complex task, which is less structured and less specific than what exists in several industrial settings. When two humans are involved in such a task, they cooperate through different modalities to understand the interaction states during operation and mutually adapt to one another's actions. Mutual adaptation implies that both partners can identify how well they collaborate (i.e. infer about the interaction state) and act accordingly. These interaction states can define whether the partners work in harmony, face conflicts, or remain passive during interaction. Understanding how two humans work together during physical interactions is important when exploring the ways a robotic assistant should operate under similar settings. This study acts as a first step to implement an automatic classification mechanism during ongoing collaboration to identify the interaction state during object co-manipulation. The classification is done on a dataset consisting of data from 40 subjects, who are partnered to form 20 dyads. The dyads experiment in a physical human-human interaction (pHHI) scenario to move an object in a haptics-enabled virtual environment to reach predefined goal configurations. In this study, we propose a sliding-window approach for feature extraction and demonstrate the online classification methodology to identify interaction patterns. We evaluate our approach using 1) a support vector machine classifier (SVMc) and 2) a Gaussian Process classifier (GPc) for multi-class classification, and achieve over 80\% accuracy with both classifiers when identifying general interaction types.}
    }
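    The pipeline is straightforward to sketch: sliding-window features over the haptic stream feeding a multi-class classifier. The window statistics and toy data below are assumptions for illustration; the paper uses richer features and real dyad recordings:

    import numpy as np
    from sklearn.svm import SVC

    def window_features(stream, width=50, step=10):
        """Sliding-window features over a (T, channels) haptic stream:
        per-channel mean and standard deviation of each window."""
        feats = [np.concatenate([w.mean(axis=0), w.std(axis=0)])
                 for w in (stream[s:s + width]
                           for s in range(0, len(stream) - width + 1, step))]
        return np.array(feats)

    # Toy stand-in for two interaction states with different force signatures.
    rng = np.random.default_rng(1)
    harmony = rng.normal(0.0, 0.2, size=(500, 3))    # partners moving in accord
    conflict = rng.normal(0.0, 1.5, size=(500, 3))   # partners pushing against each other
    Xh, Xc = window_features(harmony), window_features(conflict)
    X = np.vstack([Xh, Xc])
    y = np.concatenate([np.zeros(len(Xh), dtype=int), np.ones(len(Xc), dtype=int)])
    clf = SVC().fit(X, y)

    # Online use: classify the newest window as the data streams in.
    print(clf.predict(window_features(conflict[-50:])))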
  • A. Postnikov, I. Albayati, S. Pearson, C. Bingham, R. Bickerton, and A. Zolotas, “Facilitating static firm frequency response with aggregated networks of commercial food refrigeration systems,” Applied energy, vol. 251, 2019. doi:10.1016/j.apenergy.2019.113357
    [BibTeX] [Abstract] [Download PDF]

    Aggregated electrical loads from massive numbers of distributed retail refrigeration systems could have a significant role in frequency balancing services. To date, no study has realised effective engineering applications of static firm frequency response to these aggregated networks. Here, the authors present a novel and validated approach that enables large scale control of distributed retail refrigeration assets. The authors show a validated model that simulates the operation of retail refrigerators comprising centralised compressor packs feeding multiple in-store display cases. The model was used to determine an optimal control strategy that both minimised the engineering risk to the pack during shut down and potential impacts to food safety. The authors show that following a load shedding frequency response trigger the pack should be allowed to maintain operation but with increased suction pressure set-point. This reduces compressor load whilst enabling a continuous flow of refrigerant to food cases. In addition, the authors simulated an aggregated response of up to three hundred compressor packs (over 2 MW capacity), with refrigeration cases on hysteresis and modulation control. Hysteresis control, compared to modulation, led to undesired load oscillations when the system recovers after a frequency balancing event. Transient responses of the system during the event showed significant fluctuations of active power when the compressor network responds to both primary and secondary parts of a frequency balancing event. Enabling frequency response within this system is demonstrated by linking the aggregated refrigeration loads with a simplified power grid model that simulates a power loss incident.

    @article{lincoln36072,
    volume = {251},
    month = {October},
    author = {Andrey Postnikov and Ibrahim Albayati and Simon Pearson and Chris Bingham and Ronald Bickerton and Argyrios Zolotas},
    title = {Facilitating static firm frequency response with aggregated networks of commercial food refrigeration systems},
    publisher = {Elsevier},
    journal = {Applied Energy},
    doi = {10.1016/j.apenergy.2019.113357},
    year = {2019},
    url = {http://eprints.lincoln.ac.uk/id/eprint/36072/},
    abstract = {Aggregated electrical loads from massive numbers of distributed retail refrigeration systems could have a significant role in frequency balancing services. To date, no study has realised effective engineering applications of static firm frequency response to these aggregated networks. Here, the authors present a novel and validated approach that enables large scale control of distributed retail refrigeration assets. The authors show a validated model that simulates the operation of retail refrigerators comprising centralised compressor packs feeding multiple in-store display cases. The model was used to determine an optimal control strategy that both minimised the engineering risk to the pack during shut down and potential impacts to food safety. The authors show that following a load shedding frequency response trigger the pack should be allowed to maintain operation but with increased suction pressure set-point. This reduces compressor load whilst enabling a continuous flow of refrigerant to food cases. In addition, the authors simulated an aggregated response of up to three hundred compressor packs (over 2 MW capacity), with refrigeration cases on hysteresis and modulation control. Hysteresis control, compared to modulation, led to undesired load oscillations when the system recovers after a frequency balancing event. Transient responses of the system during the event showed significant fluctuations of active power when the compressor network responds to both primary and secondary parts of a frequency balancing event. Enabling frequency response within this system is demonstrated by linking the aggregated refrigeration loads with a simplified power grid model that simulates a power loss incident.}
    }
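    The control conclusion reduces to a simple rule: on a low-frequency event, raise the suction pressure set-point rather than shutting the pack down. A sketch with illustrative numbers only (the set-points and the 49.7 Hz trigger are assumptions, not the paper's values):

    def ffr_setpoint(grid_hz, normal_setpoint_bar=2.0, shed_offset_bar=0.5,
                     trigger_hz=49.7):
        """Static firm frequency response rule sketched from the paper's
        conclusion: on a low-frequency event, keep the compressor pack
        running but raise its suction pressure set-point, shedding load
        while refrigerant keeps flowing to the food cases."""
        if grid_hz < trigger_hz:
            return normal_setpoint_bar + shed_offset_bar  # reduced compressor duty
        return normal_setpoint_bar

    print(ffr_setpoint(50.0))   # 2.0 bar: normal operation
    print(ffr_setpoint(49.6))   # 2.5 bar: load-shed response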
  • A. Nanjangud, C. I. Underwood, C. M. Saaj, A. Young, P. C. Blacker, S. Eckersley, M. Sweeting, and P. Bianco, “Towards on-orbit assembly of large space telescopes: mission architectures, concepts, and analyses,” in 70th international astronautical congress, 2019.
    [BibTeX] [Download PDF]
    @inproceedings{lincoln39415,
    booktitle = {70th International Astronautical Congress},
    month = {October},
    title = {Towards On-Orbit Assembly of Large Space Telescopes: Mission Architectures, Concepts, and Analyses},
    author = {Angadh Nanjangud and Craig I. Underwood and Chakravarthini M. Saaj and Alex Young and Peter C. Blacker and Steve Eckersley and Martin Sweeting and Paolo Bianco},
    year = {2019},
    url = {http://eprints.lincoln.ac.uk/id/eprint/39415/}
    }
  • M. Selvaggio, A. G. Esfahani, R. Moccia, F. Ficuciello, and B. Siciliano, “Haptic-guided shared control for needle grasping optimization in minimally invasive robotic surgery,” Ieee/rsj international conference intelligent robotic system, 2019.
    [BibTeX] [Abstract] [Download PDF]

    During suturing tasks performed with minimally invasive surgical robots, configuration singularities and joint limits often force surgeons to interrupt the task and re-grasp the needle using dual-arm movements. This yields an increase in the operator’s cognitive load, time-to-completion, fatigue and performance degradation. In this paper, we propose a haptic-guided shared control method for grasping the needle with the Patient Side Manipulator (PSM) of the da Vinci robot avoiding such issues. We suggest a cost function consisting of (i) the distance from robot joint limits and (ii) the task-oriented manipulability over the suturing trajectory. We evaluate the cost and its gradient on the needle grasping manifold that allows us to obtain the optimal grasping pose for joint-limit and singularity free movements of the needle during suturing. Then, we compute force cues that are applied to the Master Tool Manipulator (MTM) of the da Vinci to guide the operator towards the optimal grasp. As such, our system helps the operator to choose a grasping configuration allowing the robot to avoid joint limits and singularities during post-grasp suturing movements. We show the effectiveness of our proposed haptic-guided shared control method during suturing using both simulated and real experiments. The results illustrate that our approach significantly improves the performance in terms of needle re-grasping.

    @article{lincoln36571,
    month = {October},
    title = {Haptic-guided shared control for needle grasping optimization in minimally invasive robotic surgery},
    author = {Mario Selvaggio and Amir Ghalamzan Esfahani and Rocco Moccia and Fanny Ficuciello and Bruno Siciliano},
    year = {2019},
    journal = {IEEE/RSJ International Conference Intelligent Robotic System},
    url = {http://eprints.lincoln.ac.uk/id/eprint/36571/},
    abstract = {During suturing tasks performed with minimally invasive surgical robots, configuration singularities and joint limits often force surgeons to interrupt the task and re-grasp the needle using dual-arm movements. This yields an increase in the operator's cognitive load, time-to-completion, fatigue and performance degradation. In this paper, we propose a haptic-guided shared control method for grasping the needle with the Patient Side Manipulator (PSM) of the da Vinci robot avoiding such issues. We suggest a cost function consisting of (i) the distance from robot joint limits and (ii) the task-oriented manipulability over the suturing trajectory. We evaluate the cost and its gradient on the needle grasping manifold that allows us to obtain the optimal grasping pose for joint-limit and singularity free movements of the needle during suturing. Then, we compute force cues that are applied to the Master Tool Manipulator (MTM) of the da Vinci to guide the operator towards the optimal grasp. As such, our system helps the operator to choose a grasping configuration allowing the robot to avoid joint limits and singularities during post-grasp suturing movements. We show the effectiveness of our proposed haptic-guided shared control method during suturing using both simulated and real experiments. The results illustrate that our approach significantly improves the performance in terms of needle re-grasping.}
    }
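    A compact sketch of the cost-plus-gradient idea: a joint-limit penalty combined with Yoshikawa manipulability, differentiated numerically to give a guiding force. The penalty form, toy 2-link Jacobian and gains are assumptions standing in for the PSM model:

    import numpy as np

    def joint_limit_cost(q, q_min, q_max):
        """Grows as any joint approaches its limits (a standard penalty
        form; the paper's exact expression may differ)."""
        mid, span = 0.5 * (q_min + q_max), q_max - q_min
        return float(np.sum(((q - mid) / span) ** 2))

    def manipulability(J):
        """Yoshikawa's measure sqrt(det(J J^T))."""
        return float(np.sqrt(max(np.linalg.det(J @ J.T), 0.0)))

    def grasp_cost(q, q_min, q_max, jac, w=1.0):
        """Stay clear of joint limits while keeping manipulability high."""
        return joint_limit_cost(q, q_min, q_max) - w * manipulability(jac(q))

    def guidance_force(q, cost_fn, eps=1e-4, gain=5.0):
        """Haptic cue: negative numerical gradient of the cost, scaled into
        a force that nudges the operator towards a better grasp pose."""
        g = np.zeros_like(q)
        for i in range(len(q)):
            dq = np.zeros_like(q)
            dq[i] = eps
            g[i] = (cost_fn(q + dq) - cost_fn(q - dq)) / (2.0 * eps)
        return -gain * g

    def jac_2link(q, l1=1.0, l2=1.0):
        """Toy planar 2-link Jacobian standing in for the PSM kinematics."""
        s1, c1 = np.sin(q[0]), np.cos(q[0])
        s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
        return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                         [ l1 * c1 + l2 * c12,  l2 * c12]])

    q_min, q_max = np.array([-2.0, -2.0]), np.array([2.0, 2.0])
    q = np.array([0.1, 0.2])
    print(guidance_force(q, lambda qq: grasp_cost(qq, q_min, q_max, jac_2link)))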
  • M. G. Lampridi, C. G. Sørensen, and D. Bochtis, “Agricultural sustainability: a review of concepts and methods,” Sustainability, vol. 11, iss. 18, p. 5120, 2019. doi:10.3390/su11185120
    [BibTeX] [Abstract] [Download PDF]

    This paper presents a methodological framework for the systematic literature review of agricultural sustainability studies. The framework synthesizes all the available literature review criteria and introduces a two-level analysis facilitating systematization, data mining, and methodology analysis. The framework was implemented for the systematic literature review of 38 crop agricultural sustainability assessment studies at farm-level for the last decade. The investigation of the methodologies used is of particular importance since there are no standards or norms for the sustainability assessment of farming practices. The chronological analysis revealed that the scientific community’s interest in agricultural sustainability has been increasing over the last three years. The most used methods include indicator-based tools, frameworks, and indexes, followed by multicriteria methods. In the reviewed studies, stakeholder participation proved crucial in the determination of the level of sustainability. It should also be mentioned that combinational use of methodologies is often observed, thus a clear distinction of methodologies is not always possible.

    @article{lincoln39231,
    volume = {11},
    number = {18},
    month = {September},
    author = {Maria G. Lampridi and Claus G. S{\o}rensen and Dionysis Bochtis},
    title = {Agricultural Sustainability: A Review of Concepts and Methods},
    year = {2019},
    journal = {Sustainability},
    doi = {10.3390/su11185120},
    pages = {5120},
    url = {http://eprints.lincoln.ac.uk/id/eprint/39231/},
    abstract = {This paper presents a methodological framework for the systematic literature review of agricultural sustainability studies. The framework synthesizes all the available literature review criteria and introduces a two-level analysis facilitating systematization, data mining, and methodology analysis. The framework was implemented for the systematic literature review of 38 crop agricultural sustainability assessment studies at farm-level for the last decade. The investigation of the methodologies used is of particular importance since there are no standards or norms for the sustainability assessment of farming practices. The chronological analysis revealed that the scientific community's interest in agricultural sustainability has been increasing over the last three years. The most used methods include indicator-based tools, frameworks, and indexes, followed by multicriteria methods. In the reviewed studies, stakeholder participation proved crucial in the determination of the level of sustainability. It should also be mentioned that combinational use of methodologies is often observed, thus a clear distinction of methodologies is not always possible.}
    }
  • V. Marinoudi, C. Sorensen, S. Pearson, and D. Bochtis, “Robotics and labour in agriculture. a context consideration,” Biosystems engineering, vol. 184, p. 111–121, 2019. doi:10.1016/j.biosystemseng.2019.06.013
    [BibTeX] [Abstract] [Download PDF]

    Over the last century, agriculture transformed from a labour-intensive industry towards mechanisation and power-intensive production systems, while over the last 15 years the agricultural industry has started to digitise. Through this transformation there was a continuous labour outflow from agriculture, mainly from standardized tasks within the production process. Robots and artificial intelligence can now be used to conduct non-standardised tasks (e.g. fruit picking, selective weeding, crop sensing) previously reserved for human workers and at economically feasible costs. As a consequence, automation is no longer restricted to standardized tasks within agricultural production (e.g. ploughing, combine harvesting). In addition, many job roles in agriculture may be augmented but not replaced by robots. Robots in many instances will work collaboratively with humans. This new robotic ecosystem creates complex ethical, legislative and social impacts. A key question, we consider here, is what are the short and mid-term effects of robotised agriculture on sector jobs and employment? The presented work outlines the conditions, constraints, and inherent relationships between labour input and technology input in bio-production, as well as provides the procedural framework and research design to be followed in order to evaluate the effect of adopting automation and robotics in agriculture.

    @article{lincoln36279,
    volume = {184},
    month = {August},
    author = {Vasso Marinoudi and Claus Sorensen and Simon Pearson and Dionysis Bochtis},
    title = {Robotics and labour in agriculture. A context consideration},
    publisher = {Elsevier},
    year = {2019},
    journal = {Biosystems Engineering},
    doi = {10.1016/j.biosystemseng.2019.06.013},
    pages = {111--121},
    url = {http://eprints.lincoln.ac.uk/id/eprint/36279/},
    abstract = {Over the last century, agriculture transformed from a labour-intensive industry towards mechanisation and power-intensive production systems, while over the last 15 years the agricultural industry has started to digitise. Through this transformation there was a continuous labour outflow from agriculture, mainly from standardized tasks within the production process. Robots and artificial intelligence can now be used to conduct non-standardised tasks (e.g. fruit picking, selective weeding, crop sensing) previously reserved for human workers and at economically feasible costs. As a consequence, automation is no longer restricted to standardized tasks within agricultural production (e.g. ploughing, combine harvesting). In addition, many job roles in agriculture may be augmented but not replaced by robots. Robots in many instances will work collaboratively with humans. This new robotic ecosystem creates complex ethical, legislative and social impacts. A key question, we consider here, is what are the short and mid-term effects of robotised agriculture on sector jobs and employment? The presented work outlines the conditions, constraints, and inherent relationships between labour input and technology input in bio-production, as well as provides the procedural framework and research design to be followed in order to evaluate the effect of adopting automation and robotics in agriculture.}
    }
  • A. Seddaoui and C. M. Saaj, “Combined nonlinear H∞ controller for a controlled-floating space robot,” Journal of guidance, control, and dynamics, vol. 42, iss. 8, p. 1878–1885, 2019. doi:10.2514/1.G003811
    [BibTeX] [Download PDF]
    @article{lincoln39389,
    volume = {42},
    number = {8},
    month = {August},
    author = {Asma Seddaoui and Chakravarthini M. Saaj},
    title = {Combined Nonlinear H$\infty$ Controller for a Controlled-Floating Space Robot},
    publisher = {Aerospace Research Central},
    year = {2019},
    journal = {Journal of Guidance, Control, and Dynamics},
    doi = {10.2514/1.G003811},
    pages = {1878--1885},
    url = {http://eprints.lincoln.ac.uk/id/eprint/39389/}
    }
  • S. Molina, G. Cielniak, and T. Duckett, “Go with the flow: exploration and mapping of pedestrian flow patterns from partial observations,” in International conference on robotics and automation (icra), 2019. doi:10.1109/ICRA.2019.8794434
    [BibTeX] [Abstract] [Download PDF]

    Understanding how people are likely to behave in an environment is a key requirement for efficient and safe robot navigation. However, mobile platforms are subject to spatial and temporal constraints, meaning that only partial observations of human activities are typically available to a robot, while the activity patterns of people in a given environment may also change at different times. To address these issues we present as the main contribution an exploration strategy for acquiring models of pedestrian flows, which decides not only the locations to explore but also the times when to explore them. The approach is driven by the uncertainty from multiple Poisson processes built from past observations. The approach is evaluated using two long-term pedestrian datasets, comparing its performance against uninformed exploration strategies. The results show that when using the uncertainty in the exploration policy, model accuracy increases, enabling faster learning of human motion patterns.

    @inproceedings{lincoln36396,
    booktitle = {International Conference on Robotics and Automation (ICRA)},
    month = {August},
    title = {Go with the Flow: Exploration and Mapping of Pedestrian Flow Patterns from Partial Observations},
    author = {Sergi Molina and Grzegorz Cielniak and Tom Duckett},
    publisher = {IEEE},
    year = {2019},
    doi = {10.1109/ICRA.2019.8794434},
    url = {http://eprints.lincoln.ac.uk/id/eprint/36396/},
    abstract = {Understanding how people are likely to behave in an environment is a key requirement for efficient and safe robot navigation. However, mobile platforms are subject to spatial and temporal constraints, meaning that only partial observations of human activities are typically available to a robot, while the activity patterns of people in a given environment may also change at different times. To address these issues we present as the main contribution an exploration strategy for acquiring models of pedestrian flows, which decides not only the locations to explore but also the times when to explore them. The approach is driven by the uncertainty from multiple Poisson processes built from past observations. The approach is evaluated using two long-term pedestrian datasets, comparing its performance against uninformed exploration strategies. The results show that when using the uncertainty in the exploration policy, model accuracy increases, enabling faster learning of human motion patterns.}
    }
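    One way to realise uncertainty-driven exploration over Poisson models is a conjugate Gamma posterior per (location, time) cell, visiting whichever cell's rate is currently most uncertain; the variance criterion and priors below are assumptions in the spirit of the paper:

    import numpy as np

    class PoissonExplorer:
        """One Poisson rate per (location, time-of-day bin), with a
        Gamma(alpha, beta) posterior; explore the cell whose rate estimate
        has the largest posterior variance alpha/beta^2."""

        def __init__(self, n_locations, n_time_bins, alpha0=1.0, beta0=1.0):
            self.alpha = np.full((n_locations, n_time_bins), alpha0)
            self.beta = np.full((n_locations, n_time_bins), beta0)

        def next_target(self):
            var = self.alpha / self.beta ** 2
            return np.unravel_index(np.argmax(var), var.shape)

        def update(self, location, time_bin, event_count, duration=1.0):
            # Conjugate update for Poisson-distributed observations.
            self.alpha[location, time_bin] += event_count
            self.beta[location, time_bin] += duration

    explorer = PoissonExplorer(n_locations=10, n_time_bins=24)
    loc, t = explorer.next_target()          # where and when to observe next
    explorer.update(loc, t, event_count=3)   # incorporate the counted pedestrians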
  • A. Seddaoui, C. Saaj, and S. Eckersley, “Adaptive H∞ controller for precise manoeuvring of a space robot,” in 2019 international conference on robotics and automation (icra), 2019, p. 4746–4752. doi:10.1109/ICRA.2019.8794374
    [BibTeX] [Abstract] [Download PDF]

    A space robot working in a controlled-floating mode can be used for performing in-orbit telescope assembly through simultaneously controlling the motion of the spacecraft base and its robotic arm. Handling and assembling optical mirrors requires the space robot to achieve slow and precise manoeuvres regardless of the disturbances and errors in the trajectory. The robustness offered by the nonlinear H∞ controller, in the presence of environmental disturbances and parametric uncertainties, makes it a viable solution. However, using fixed tuning parameters for this controller does not always result in the desired performance as the arm’s trajectory is not known a priori for orbital assembly missions. In this paper, a complete study on the impact of the different tuning parameters is performed and a new adaptive H∞ controller is developed based on bounded functions. The simulation results presented show that the proposed adaptive H∞ controller guarantees robustness and precise tracking using a minimal amount of forces and torques for assembly operations using a small space robot.

    @inproceedings{lincoln37413,
    volume = {2019-M},
    month = {August},
    author = {A. Seddaoui and C. Saaj and S. Eckersley},
    note = {cited By 0},
    booktitle = {2019 International Conference on Robotics and Automation (ICRA)},
    title = {Adaptive H$\infty$ Controller for Precise Manoeuvring of a Space Robot},
    publisher = {IEEE},
    year = {2019},
    journal = {Proceedings - IEEE International Conference on Robotics and Automation},
    doi = {10.1109/ICRA.2019.8794374},
    pages = {4746--4752},
    url = {http://eprints.lincoln.ac.uk/id/eprint/37413/},
    abstract = {A space robot working in a controlled-floating mode can be used for performing in-orbit telescope assembly through simultaneously controlling the motion of the spacecraft base and its robotic arm. Handling and assembling optical mirrors requires the space robot to achieve slow and precise manoeuvres regardless of the disturbances and errors in the trajectory. The robustness offered by the nonlinear H$\infty$ controller, in the presence of environmental disturbances and parametric uncertainties, makes it a viable solution. However, using fixed tuning parameters for this controller does not always result in the desired performance as the arm's trajectory is not known a priori for orbital assembly missions. In this paper, a complete study on the impact of the different tuning parameters is performed and a new adaptive H$\infty$ controller is developed based on bounded functions. The simulation results presented show that the proposed adaptive H$\infty$ controller guarantees robustness and precise tracking using a minimal amount of forces and torques for assembly operations using a small space robot.}
    }
  • T. Vintr, Z. Yan, T. Duckett, and T. Krajnik, “Spatio-temporal representation for long-term anticipation of human presence in service robotics,” in 2019 international conference on robotics and automation (icra), 2019, p. 2620–2626. doi:10.1109/ICRA.2019.8793534
    [BibTeX] [Abstract] [Download PDF]

    We propose an efficient spatio-temporal model for mobile autonomous robots operating in human populated environments. Our method aims to model periodic temporal patterns of people presence, which are based on people’s routines and habits. The core idea is to project the time onto a set of wrapped dimensions that represent the periodicities of people presence. Extending a 2D spatial model with this multi-dimensional representation of time results in a memory efficient spatio-temporal model. This model is capable of long-term predictions of human presence, allowing mobile robots to schedule their services better and to plan their paths. The experimental evaluation, performed over datasets gathered by a robot over a period of several weeks, indicates that the proposed method achieves more accurate predictions than the previous state of the art used in robotics.

    @inproceedings{lincoln38253,
    month = {August},
    author = {Tomas Vintr and Zhi Yan and Tom Duckett and Tomas Krajnik},
    booktitle = {2019 International Conference on Robotics and Automation (ICRA)},
    title = {Spatio-temporal representation for long-term anticipation of human presence in service robotics},
    publisher = {IEEE},
    doi = {10.1109/ICRA.2019.8793534},
    pages = {2620--2626},
    year = {2019},
    url = {http://eprints.lincoln.ac.uk/id/eprint/38253/},
    abstract = {We propose an efficient spatio-temporal model for mobile autonomous robots operating in human populated environments. Our method aims to model periodic temporal patterns of people presence, which are based on people's routines and habits. The core idea is to project the time onto a set of wrapped dimensions that represent the periodicities of people presence. Extending a 2D spatial model with this multi-dimensional representation of time results in a memory efficient spatio-temporal model. This model is capable of long-term predictions of human presence, allowing mobile robots to schedule their services better and to plan their paths. The experimental evaluation, performed over datasets gathered by a robot over a period of several weeks, indicates that the proposed method achieves more accurate predictions than the previous state of the art used in robotics.}
    }
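    Complementing the clustering sketch a few entries above, the same wrapped-time projection can feed a simple supervised predictor of human presence for service scheduling; the daily period, synthetic occupancy pattern and logistic model below are illustrative assumptions, not the paper's method:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def wrapped_time(t, period=86400.0):
        """Project timestamps onto one wrapped (circular) time dimension."""
        phase = 2.0 * np.pi * np.asarray(t, dtype=float) / period
        return np.stack([np.cos(phase), np.sin(phase)], axis=1)

    # Toy data: people present roughly 9:00-17:00 each day (assumed pattern).
    rng = np.random.default_rng(2)
    t = rng.uniform(0.0, 7 * 86400.0, size=2000)
    hour = (t % 86400.0) / 3600.0
    present = ((hour > 9) & (hour < 17)).astype(int)

    model = LogisticRegression().fit(wrapped_time(t), present)
    # Anticipate presence tomorrow at 10:00 vs 03:00 to schedule services.
    print(model.predict_proba(wrapped_time([8 * 86400 + 10 * 3600,
                                            8 * 86400 + 3 * 3600]))[:, 1])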
  • E. Rodias, R. Berruto, D. Bochtis, A. Sopegno, and P. Busato, “Green, yellow, and woody biomass supply-chain management: a review,” Energies, vol. 12, iss. 15, p. 3020, 2019. doi:10.3390/en12153020
    [BibTeX] [Abstract] [Download PDF]

    Various sources of biomass contribute significantly to energy production globally, given a series of constraints in its primary production. Green biomass sources (such as perennial grasses), yellow biomass sources (such as crop residues), and woody biomass sources (such as willow) represent the three pillars in biomass production by crops. In this paper, we conducted a comprehensive review of research studies targeted at advancements in biomass supply-chain management in connection to these three types of biomass sources. A framework that classifies the works in problem-based and methodology-based approaches was followed. Results show the use of modern technological means and tools in current management-related problems. From the review, it is evident that the presented up-to-date trends on biomass supply-chain management and the potential for future advanced approach applications play a crucial role in the business and sustainability efficiency of the biomass supply chain.

    @article{lincoln39230,
    volume = {12},
    number = {15},
    month = {August},
    author = {Efthymios Rodias and Remigio Berruto and Dionysis Bochtis and Alessandro Sopegno and Patrizia Busato},
    title = {Green, Yellow, and Woody Biomass Supply-Chain Management: A Review},
    year = {2019},
    journal = {Energies},
    doi = {10.3390/en12153020},
    pages = {3020},
    url = {http://eprints.lincoln.ac.uk/id/eprint/39230/},
    abstract = {Various sources of biomass contribute significantly to energy production globally, given a series of constraints in its primary production. Green biomass sources (such as perennial grasses), yellow biomass sources (such as crop residues), and woody biomass sources (such as willow) represent the three pillars in biomass production by crops. In this paper, we conducted a comprehensive review of research studies targeted at advancements in biomass supply-chain management in connection to these three types of biomass sources. A framework that classifies the works in problem-based and methodology-based approaches was followed. Results show the use of modern technological means and tools in current management-related problems. From the review, it is evident that the presented up-to-date trends on biomass supply-chain management and the potential for future advanced approach applications play a crucial role in the business and sustainability efficiency of the biomass supply chain.}
    }
  • Q. Fu, H. Wang, C. Hu, and S. Yue, “Towards computational models and applications of insect visual systems for motion perception: a review,” Artificial life, vol. 25, iss. 3, p. 263–311, 2019. doi:10.1162/artl_a_00297
    [BibTeX] [Abstract] [Download PDF]

    Motion perception is a critical capability determining a variety of aspects of insects’ life, including avoiding predators, foraging and so forth. A good number of motion detectors have been identified in the insects’ visual pathways. Computational modelling of these motion detectors has not only been providing effective solutions to artificial intelligence, but also benefiting the understanding of complicated biological visual systems. These biological mechanisms through millions of years of evolutionary development will have formed solid modules for constructing dynamic vision systems for future intelligent machines. This article reviews the computational motion perception models originating from biological research of insects’ visual systems in the literature. These motion perception models or neural networks comprise the looming sensitive neuronal models of lobula giant movement detectors (LGMDs) in locusts, the translation sensitive neural systems of direction selective neurons (DSNs) in fruit flies, bees and locusts, as well as the small target motion detectors (STMDs) in dragonflies and hover flies. We also review the applications of these models to robots and vehicles. Through these modelling studies, we summarise the methodologies that generate different direction and size selectivity in motion perception. Finally, we discuss multiple systems integration and hardware realisation of these bio-inspired motion perception models.

    @article{lincoln35584,
    volume = {25},
    number = {3},
    month = {August},
    author = {Qinbing Fu and Hongxin Wang and Cheng Hu and Shigang Yue},
    title = {Towards Computational Models and Applications of Insect Visual Systems for Motion Perception: A Review},
    publisher = {MIT Press},
    year = {2019},
    journal = {Artificial life},
    doi = {10.1162/artl\_a\_00297},
    pages = {263--311},
    url = {http://eprints.lincoln.ac.uk/id/eprint/35584/},
    abstract = {Motion perception is a critical capability determining a variety of aspects of insects' life, including avoiding predators, foraging and so forth. A good number of motion detectors have been identified in the insects' visual pathways. Computational modelling of these motion detectors has not only been providing effective solutions to artificial intelligence, but also benefiting the understanding of complicated biological visual systems. These biological mechanisms through millions of years of evolutionary development will have formed solid modules for constructing dynamic vision systems for future intelligent machines. This article reviews the computational motion perception models originating from biological research of insects' visual systems in the literature. These motion perception models or neural networks comprise the looming sensitive neuronal models of lobula giant movement detectors (LGMDs) in locusts, the translation sensitive neural systems of direction selective neurons (DSNs) in fruit flies, bees and locusts, as well as the small target motion detectors (STMDs) in dragonflies and hover flies. We also review the applications of these models to robots and vehicles. Through these modelling studies, we summarise the methodologies that generate different direction and size selectivity in motion perception. Finally, we discuss multiple systems integration and hardware realisation of these bio-inspired motion perception models.}
    }
  • K. Goher and S. Fadlallah, “Control of a two-wheeled machine with two-directions handling mechanism using pid and pd-flc algorithms,” International journal of automation and computing, vol. 16, iss. 4, p. 511–533, 2019. doi:10.1007/s11633-019-1172-0
    [BibTeX] [Abstract] [Download PDF]

    This paper presents a novel five degrees of freedom (DOF) two-wheeled robotic machine (TWRM) that delivers solutions for both industrial and service robotic applications by enlarging the vehicle's workspace and increasing its flexibility. Designing a two-wheeled robot with five degrees of freedom poses a high challenge for the control; therefore, the modelling and design of such a robot should be precise, with a uniform distribution of mass over the robot and the actuators. By employing the Lagrangian modelling approach, the TWRM's mathematical model is derived and simulated in Matlab/Simulink. For stabilizing the system's highly nonlinear model, two control approaches were developed and implemented: proportional-integral-derivative (PID) and fuzzy logic control (FLC) strategies. Considering multiple scenarios with different initial conditions, the proposed control strategies' performance has been assessed.

    @article{lincoln35606,
    volume = {16},
    number = {4},
    month = {August},
    author = {Khaled Goher and Sulaiman Fadlallah},
    title = {Control of a Two-wheeled Machine with Two-directions Handling Mechanism Using PID and PD-FLC Algorithms},
    publisher = {Springer},
    year = {2019},
    journal = {International Journal of Automation and Computing},
    doi = {10.1007/s11633-019-1172-0},
    pages = {511--533},
    url = {http://eprints.lincoln.ac.uk/id/eprint/35606/},
    abstract = {This paper presents a novel five degrees of freedom (DOF) two-wheeled robotic machine (TWRM) that delivers solutions for both industrial and service robotic applications by enlarging the vehicle's workspace and increasing its flexibility. Designing a two-wheeled robot with five degrees of freedom poses a high challenge for the control; therefore, the modelling and design of such a robot should be precise, with a uniform distribution of mass over the robot and the actuators. By employing the Lagrangian modelling approach, the TWRM's mathematical model is derived and simulated in Matlab/Simulink. For stabilizing the system's highly nonlinear model, two control approaches were developed and implemented: proportional-integral-derivative (PID) and fuzzy logic control (FLC) strategies. Considering multiple scenarios with different initial conditions, the proposed control strategies' performance has been assessed.}
    }
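    As a quick reference for the PID strategy compared in the entry above, a minimal discrete-time PID loop can be sketched as follows (an illustrative sketch only; the class name, gains and time step are hypothetical, not the tuned values from the paper):

    class PID:
        """Minimal discrete-time PID controller (illustrative sketch)."""

        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, setpoint, measurement):
            # e = r - y; accumulate the integral and differentiate the error
            error = setpoint - measurement
            self.integral += error * self.dt
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    # e.g. regulating a body angle towards upright (values hypothetical)
    pid = PID(kp=12.0, ki=0.5, kd=1.2, dt=0.01)
    command = pid.update(setpoint=0.0, measurement=0.05)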
  • B. Ugurlu, M. Acer, D. E. Barkana, I. Gocek, A. Kucukyilmaz, Y. Z. Arslan, H. Basturk, E. Samur, E. Ugur, R. Unal, and O. Bebek, “A soft+rigid hybrid exoskeleton concept in scissors-pendulum mode: a suit for human state sensing and an exoskeleton for assistance,” in 2019 ieee 16th international conference on rehabilitation robotics (icorr), 2019, p. 518–523. doi:10.1109/ICORR.2019.8779394
    [BibTeX] [Abstract] [Download PDF]

    In this paper, we present a novel concept that can enable the human aware control of exoskeletons through the integration of a soft suit and a robotic exoskeleton. Unlike the state-of-the-art exoskeleton controllers which mostly rely on lumped human-robot models, the proposed concept makes use of the independent state measurements concerning the human user and the robot. The ability to observe the human state independently is the key factor in this approach. In order to realize such a system from the hardware point of view, we propose a system integration frame that combines a soft suit for human state measurement and a rigid exoskeleton for human assistance. We identify the technological requirements that are necessary for the realization of such a system with a particular emphasis on soft suit integration. We also propose a template model, named scissor pendulum, that may encapsulate the dominant dynamics of the human-robot combined model to synthesize a controller for human state regulation. A series of simulation experiments were conducted to check the controller performance. As a result, satisfactory human state regulation was attained, adequately confirming that the proposed system could potentially improve exoskeleton-aided applications.

    @inproceedings{lincoln36661,
    month = {July},
    author = {Barkan Ugurlu and Merve Acer and Duygun E. Barkana and Ikilem Gocek and Ayse Kucukyilmaz and Yunus Z. Arslan and Halil Basturk and Evren Samur and Emre Ugur and Ramazan Unal and Ozkan Bebek},
    booktitle = {2019 IEEE 16th International Conference on Rehabilitation Robotics (ICORR)},
    title = {A Soft+Rigid Hybrid Exoskeleton Concept in Scissors-Pendulum Mode: A Suit for Human State Sensing and an Exoskeleton for Assistance},
    publisher = {IEEE},
    doi = {10.1109/ICORR.2019.8779394},
    pages = {518--523},
    year = {2019},
    url = {http://eprints.lincoln.ac.uk/id/eprint/36661/},
    abstract = {In this paper, we present a novel concept that can enable the human aware control of exoskeletons through the integration of a soft suit and a robotic exoskeleton. Unlike the state-of-the-art exoskeleton controllers which mostly rely on lumped human-robot models, the proposed concept makes use of the independent state measurements concerning the human user and the robot. The ability to observe the human state independently is the key factor in this approach. In order to realize such a system from the hardware point of view, we propose a system integration frame that combines a soft suit for human state measurement and a rigid exoskeleton for human assistance. We identify the technological requirements that are necessary for the realization of such a system with a particular emphasis on soft suit integration. We also propose a template model, named scissor pendulum, that may encapsulate the dominant dynamics of the human-robot combined model to synthesize a controller for human state regulation. A series of simulation experiments were conducted to check the controller performance. As a result, satisfactory human state regulation was attained, adequately confirming that the proposed system could potentially improve exoskeleton-aided applications.}
    }
  • S. Sari and A. Kucukyilmaz, “Vr-fit: walking-in-place locomotion with real time step detection for vr-enabled exercise,” in Mobile web and intelligent information systems, 2019, p. 255–266. doi:10.1007/978-3-030-27192-3_20
    [BibTeX] [Abstract] [Download PDF]

    With recent advances in mobile and wearable technologies, virtual reality (VR) found many applications in daily use. Today, a mobile device can be converted into a low-cost immersive VR kit thanks to the availability of do-it-yourself viewers in the shape of simple cardboards and compatible software for 3D rendering. These applications involve interacting with stationary scenes or moving in between spaces within a VR environment. VR locomotion can be enabled through a variety of methods, such as head movement tracking, joystick-triggered motion and through mapping natural movements to translate to virtual locomotion. In this study, we implemented a walk-in-place (WIP) locomotion method for a VR-enabled exercise application. We investigate the utility of WIP for exercise purposes, and compare it with joystick-based locomotion in terms of step performance and subjective qualities of the activity, such as enjoyment, encouragement for exercise and ease of use. Our technique uses vertical accelerometer data to estimate steps taken during walking or running, and locomotes the user's avatar accordingly in virtual space. We evaluated our technique in a controlled experimental study with 12 people. Results indicate that the way users control the simulated locomotion affects how they interact with the VR simulation, and influence the subjective sense of immersion and the perceived quality of the interaction. In particular, WIP encourages users to move further, and creates a more enjoyable and interesting experience in comparison to joystick-based navigation.

    @inproceedings{lincoln36870,
    volume = {11673},
    month = {July},
    author = {Sercan Sari and Ayse Kucukyilmaz},
    booktitle = {Mobile Web and Intelligent Information Systems},
    title = {VR-Fit: Walking-in-Place Locomotion with Real Time Step Detection for VR-Enabled Exercise},
    publisher = {Springer},
    year = {2019},
    doi = {10.1007/978-3-030-27192-3\_20},
    pages = {255--266},
    url = {http://eprints.lincoln.ac.uk/id/eprint/36870/},
    abstract = {With recent advances in mobile and wearable technologies, virtual reality (VR) found many applications in daily use. Today, a mobile device can be converted into a low-cost immersive VR kit thanks to the availability of do-it-yourself viewers in the shape of simple cardboards and compatible software for 3D rendering. These applications involve interacting with stationary scenes or moving in between spaces within a VR environment. VR locomotion can be enabled through a variety of methods, such as head movement tracking, joystick-triggered motion and through mapping natural movements to translate to virtual locomotion. In this study, we implemented a walk-in-place (WIP) locomotion method for a VR-enabled exercise application. We investigate the utility of WIP for exercise purposes, and compare it with joystick-based locomotion in terms of step performance and subjective qualities of the activity, such as enjoyment, encouragement for exercise and ease of use. Our technique uses vertical accelerometer data to estimate steps taken during walking or running, and locomotes the user's avatar accordingly in virtual space. We evaluated our technique in a controlled experimental study with 12 people. Results indicate that the way users control the simulated locomotion affects how they interact with the VR simulation, and influence the subjective sense of immersion and the perceived quality of the interaction. In particular, WIP encourages users to move further, and creates a more enjoyable and interesting experience in comparison to joystick-based navigation.}
    }
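    The VR-Fit entry above estimates steps from vertical accelerometer data. A minimal threshold-crossing detector in the same spirit might look like the sketch below (assumptions mine; the sampling rate, threshold and refractory period are not the paper's values):

    def count_steps(acc_z, fs=50.0, threshold=1.5, min_interval=0.3):
        """Count steps as upward threshold crossings of vertical acceleration.

        acc_z: vertical accelerometer samples (gravity removed), fs in Hz;
        threshold and min_interval are hypothetical tuning values.
        """
        min_gap = int(min_interval * fs)  # refractory period in samples
        steps, last = 0, -min_gap
        for i in range(1, len(acc_z)):
            rising = acc_z[i - 1] < threshold <= acc_z[i]
            if rising and i - last >= min_gap:
                steps += 1
                last = i
        return steps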
  • N. Tsolakis, D. Bechtsis, and D. Bochtis, “Agros: a robot operating system based emulation tool for agricultural robotics,” Agronomy, vol. 9, iss. 7, p. 403, 2019. doi:10.3390/agronomy9070403
    [BibTeX] [Abstract] [Download PDF]

    This research aims to develop a farm management emulation tool that enables agrifood producers to effectively introduce advanced digital technologies, like intelligent and autonomous unmanned ground vehicles (UGVs), in real-world field operations. To that end, we first provide a critical taxonomy of studies investigating agricultural robotic systems with regard to: (i) the analysis approach, i.e., simulation, emulation, real-world implementation; (ii) farming operations; and (iii) the farming type. Our analysis demonstrates that simulation and emulation modelling have been extensively applied to study advanced agricultural machinery while the majority of the extant research efforts focuses on harvesting/picking/mowing and fertilizing/spraying activities; most studies consider a generic agricultural layout. Thereafter, we developed AgROS, an emulation tool based on the Robot Operating System, which could be used for assessing the efficiency of real-world robot systems in customized fields. The AgROS allows farmers to select their actual field from a map layout, import the landscape of the field, add characteristics of the actual agricultural layout (e.g., trees, static objects), select an agricultural robot from a predefined list of commercial systems, import the selected UGV into the emulation environment, and test the robot's performance in a quasi-real-world environment. AgROS supports farmers in the ex-ante analysis and performance evaluation of robotized precision farming operations while laying the foundations for realizing 'digital twins' in agriculture.

    @article{lincoln39229,
    volume = {9},
    number = {7},
    month = {July},
    author = {Naoum Tsolakis and Dimitrios Bechtsis and Dionysis Bochtis},
    title = {AgROS: A Robot Operating System Based Emulation Tool for Agricultural Robotics},
    year = {2019},
    journal = {Agronomy},
    doi = {10.3390/agronomy9070403},
    pages = {403},
    url = {http://eprints.lincoln.ac.uk/id/eprint/39229/},
    abstract = {This research aims to develop a farm management emulation tool that enables agrifood producers to effectively introduce advanced digital technologies, like intelligent and autonomous unmanned ground vehicles (UGVs), in real-world field operations. To that end, we first provide a critical taxonomy of studies investigating agricultural robotic systems with regard to: (i) the analysis approach, i.e., simulation, emulation, real-world implementation; (ii) farming operations; and (iii) the farming type. Our analysis demonstrates that simulation and emulation modelling have been extensively applied to study advanced agricultural machinery while the majority of the extant research efforts focuses on harvesting/picking/mowing and fertilizing/spraying activities; most studies consider a generic agricultural layout. Thereafter, we developed AgROS, an emulation tool based on the Robot Operating System, which could be used for assessing the efficiency of real-world robot systems in customized fields. The AgROS allows farmers to select their actual field from a map layout, import the landscape of the field, add characteristics of the actual agricultural layout (e.g., trees, static objects), select an agricultural robot from a predefined list of commercial systems, import the selected UGV into the emulation environment, and test the robot's performance in a quasi-real-world environment. AgROS supports farmers in the ex-ante analysis and performance evaluation of robotized precision farming operations while laying the foundations for realizing 'digital twins' in agriculture.}
    }
  • H. Wang, J. Peng, Q. Fu, H. Wang, and S. Yue, “Visual cue integration for small target motion detection in natural cluttered backgrounds,” in The 2019 international joint conference on neural networks (ijcnn), 2019.
    [BibTeX] [Abstract] [Download PDF]

    The robust detection of small targets against cluttered background is important for future artificial visual systems in searching and tracking applications. The insects' visual systems have demonstrated excellent ability to avoid predators, find prey or identify conspecifics – which always appear as small dim speckles in the visual field. Building a computational model of the insects' visual pathways could provide effective solutions to detect small moving targets. Although a few visual system models have been proposed, they only make use of small-field visual features for motion detection and their detection results often contain a number of false positives. To address this issue, we develop a new visual system model for small target motion detection against cluttered moving backgrounds. Compared to the existing models, the small-field and wide-field visual features are separately extracted by two motion-sensitive neurons to detect small target motion and background motion. These two types of motion information are further integrated to filter out false positives. Extensive experiments showed that the proposed model can outperform the existing models in terms of detection rates.

    @inproceedings{lincoln35684,
    booktitle = {The 2019 International Joint Conference on Neural Networks (IJCNN)},
    month = {July},
    title = {Visual Cue Integration for Small Target Motion Detection in Natural Cluttered Backgrounds},
    author = {Hongxin Wang and Jigen Peng and Qinbing Fu and Huatian Wang and Shigang Yue},
    publisher = {IEEE},
    year = {2019},
    url = {http://eprints.lincoln.ac.uk/id/eprint/35684/},
    abstract = {The robust detection of small targets against cluttered background is important for future artificial visual systems in searching and tracking applications. The insects' visual systems have demonstrated excellent ability to avoid predators, find prey or identify conspecifics – which always appear as small dim speckles in the visual field. Building a computational model of the insects' visual pathways could provide effective solutions to detect small moving targets. Although a few visual system models have been proposed, they only make use of small-field visual features for motion detection and their detection results often contain a number of false positives. To address this issue, we develop a new visual system model for small target motion detection against cluttered moving backgrounds. Compared to the existing models, the small-field and wide-field visual features are separately extracted by two motion-sensitive neurons to detect small target motion and background motion. These two types of motion information are further integrated to filter out false positives. Extensive experiments showed that the proposed model can outperform the existing models in terms of detection rates.}
    }
  • H. Wang, Q. Fu, H. Wang, J. Peng, P. Baxter, C. Hu, and S. Yue, “Angular velocity estimation of image motion mimicking the honeybee tunnel centring behaviour,” in The 2019 international joint conference on neural networks, 2019.
    [BibTeX] [Abstract] [Download PDF]

    Insects use visual information to estimate angular velocity of retinal image motion, which determines a variety of flight behaviours including speed regulation, tunnel centring and visual navigation. For angular velocity estimation, honeybees show large spatial-independence against visual stimuli, whereas the previous models have not fulfilled such an ability. To address this issue, we propose a bio-plausible model for estimating the image motion velocity based on behavioural experiments of the honeybee flying through patterned tunnels. The proposed model contains mainly three parts: the texture estimation layer for spatial information extraction, the delay-and-correlate layer for temporal information extraction and the decoding layer for angular velocity estimation. This model produces responses that are largely independent of the spatial frequency in grating experiments. The model has also been implemented in a virtual bee for tunnel centring simulations. The results coincide with both electro-physiological neuron spike and behavioural path recordings, which indicates our proposed method provides a better explanation of the honeybee's image motion detection mechanism guiding the tunnel centring behaviour.

    @inproceedings{lincoln35685,
    booktitle = {The 2019 International Joint Conference on Neural Networks},
    month = {July},
    title = {Angular Velocity Estimation of Image Motion Mimicking the Honeybee Tunnel Centring Behaviour},
    author = {Huatian Wang and Qinbing Fu and Hongxin Wang and Jigen Peng and Paul Baxter and Cheng Hu and Shigang Yue},
    publisher = {IEEE},
    year = {2019},
    url = {http://eprints.lincoln.ac.uk/id/eprint/35685/},
    abstract = {Insects use visual information to estimate angular velocity of retinal image motion, which determines a variety of flight behaviours including speed regulation, tunnel centring and visual navigation. For angular velocity estimation, honeybees show large spatial-independence against visual stimuli, whereas the previous models have not fulfilled such an ability. To address this issue, we propose a bio-plausible model for estimating the image motion velocity based on behavioural experiments of the honeybee flying through patterned tunnels. The proposed model contains mainly three parts: the texture estimation layer for spatial information extraction, the delay-and-correlate layer for temporal information extraction and the decoding layer for angular velocity estimation. This model produces responses that are largely independent of the spatial frequency in grating experiments. The model has also been implemented in a virtual bee for tunnel centring simulations. The results coincide with both electro-physiological neuron spike and behavioural path recordings, which indicates our proposed method provides a better explanation of the honeybee's image motion detection mechanism guiding the tunnel centring behaviour.}
    }
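    The delay-and-correlate layer named in the entry above follows the classic Hassenstein-Reichardt scheme. A toy correlator over a one-dimensional photoreceptor array is sketched below (a generic illustration, not the authors' model, which adds a texture estimation and a decoding layer):

    def reichardt_response(frame_prev, frame_curr):
        """Toy delay-and-correlate (Hassenstein-Reichardt) motion detector.

        frame_prev, frame_curr: photoreceptor intensities at times t-1 and t.
        A positive sum indicates net rightward image motion.
        """
        response = 0.0
        for i in range(len(frame_curr) - 1):
            # each arm correlates a delayed receptor signal with the
            # instantaneous signal of its neighbour; subtracting the
            # mirrored arm yields direction selectivity
            response += (frame_prev[i] * frame_curr[i + 1]
                         - frame_prev[i + 1] * frame_curr[i])
        return response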
  • M. F. Carmona, T. Parekh, and M. Hanheide, “Making the case for human-aware navigation in warehouses,” in Taros 2019: towards autonomous robotic systems, 2019, p. 449–453. doi:10.1007/978-3-030-25332-5_38
    [BibTeX] [Abstract] [Download PDF]

    This work addresses the performance of several local planners for navigation of autonomous pallet trucks in the presence of humans in a simulated warehouse, as well as a complementary approach developed within the ILIAD project. Our focus is to stress the open problem of safe manoeuvrability of pallet trucks in the presence of moving humans. We propose a variation of the ROS navigation stack that includes a model of human-robot interaction in the planning process.

    @inproceedings{lincoln37347,
    month = {July},
    author = {Manuel Fernandez Carmona and Tejas Parekh and Marc Hanheide},
    booktitle = {TAROS 2019: Towards Autonomous Robotic Systems},
    title = {Making the Case for Human-Aware Navigation in Warehouses},
    publisher = {Springer, Cham},
    doi = {10.1007/978-3-030-25332-5\_38},
    pages = {449--453},
    year = {2019},
    url = {http://eprints.lincoln.ac.uk/id/eprint/37347/},
    abstract = {This work addresses the performance of several local planners for navigation of autonomous pallet trucks in the presence of humans in a simulated warehouse, as well as a complementary approach developed within the ILIAD project. Our focus is to stress the open problem of safe manoeuvrability of pallet trucks in the presence of moving humans. We propose a variation of the ROS navigation stack that includes a model of human-robot interaction in the planning process.}
    }
  • H. Cuayahuitl, D. Lee, S. Ryu, S. Choi, I. Hwang, and J. Kim, “Deep reinforcement learning for chatbots using clustered actions and human-likeness rewards,” in International joint conference on neural networks (ijcnn), 2019. doi:10.1109/IJCNN.2019.8852376
    [BibTeX] [Abstract] [Download PDF]

    Training chatbots using the reinforcement learning paradigm is challenging due to high-dimensional states, infinite action spaces and the difficulty in specifying the reward function. We address such problems using clustered actions instead of infinite actions, and a simple but promising reward function based on human-likeness scores derived from human-human dialogue data. We train Deep Reinforcement Learning (DRL) agents using chitchat data in raw text – without any manual annotations. Experimental results using different splits of training data report the following. First, that our agents learn reasonable policies in the environments they get familiarised with, but their performance drops substantially when they are exposed to a test set of unseen dialogues. Second, that the choice of sentence embedding size between 100 and 300 dimensions is not significantly different on test data. Third, that our proposed human-likeness rewards are reasonable for training chatbots as long as they use lengthy dialogue histories of ≥10 sentences.

    @inproceedings{lincoln35954,
    booktitle = {International Joint Conference on Neural Networks (IJCNN)},
    month = {July},
    title = {Deep Reinforcement Learning for Chatbots Using Clustered Actions and Human-Likeness Rewards},
    author = {Heriberto Cuayahuitl and Donghyeon Lee and Seonghan Ryu and Sungja Choi and Inchul Hwang and Jihie Kim},
    publisher = {IEEE},
    year = {2019},
    doi = {10.1109/IJCNN.2019.8852376},
    url = {http://eprints.lincoln.ac.uk/id/eprint/35954/},
    abstract = {Training chatbots using the reinforcement learning paradigm is challenging due to high-dimensional states, infinite action spaces and the difficulty in specifying the reward function. We address such problems using clustered actions instead of infinite actions, and a simple but promising reward function based on human-likeness scores derived from human-human dialogue data. We train Deep Reinforcement Learning (DRL) agents using chitchat data in raw text{--}without any manual annotations. Experimental results using different splits of training data report the following. First, that our agents learn reasonable policies in the environments they get familiarised with, but their performance drops substantially when they are exposed to a test set of unseen dialogues. Second, that the choice of sentence embedding size between 100 and 300 dimensions is not significantly different on test data. Third, that our proposed human-likeness rewards are reasonable for training chatbots as long as they use lengthy dialogue histories of {\ensuremath{\geq}}10 sentences.}
    }
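    To make the clustered-actions idea above concrete: an unbounded space of candidate responses can be reduced to a small discrete action set by clustering sentence embeddings, roughly as in this hedged sketch (the embedding source, dimensionality and cluster count are assumptions, not the paper's configuration):

    import numpy as np
    from sklearn.cluster import KMeans

    # Stand-in embeddings; in practice these would come from a sentence
    # encoder (the paper compares 100- and 300-dimensional embeddings).
    rng = np.random.default_rng(0)
    response_embeddings = rng.normal(size=(5000, 100))

    # Cluster responses so a DRL agent chooses among k cluster ids
    # instead of an infinite set of sentences.
    kmeans = KMeans(n_clusters=50, n_init=10, random_state=0)
    action_ids = kmeans.fit_predict(response_embeddings)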
  • X. Sun, T. Liu, C. Hu, Q. Fu, and S. Yue, “ColCOSΦ: a multiple pheromone communication system for swarm robotics and social insects research,” in The 2019 ieee international conference on advanced robotics and mechatronics (icarm), 2019.
    [BibTeX] [Abstract] [Download PDF]

    In the last few decades we have witnessed how the pheromone of social insect has become a rich inspiration source of swarm robotics. By utilising the virtual pheromone in physical swarm robot system to coordinate individuals and realise direct/indirect inter-robot communications like the social insect, stigmergic behaviour has emerged. However, many studies only take one single pheromone into account in solving swarm problems, which is not the case in real insects. In the real social insect world, diverse behaviours, complex collective performances and flexible transition from one state to another are guided by different kinds of pheromones and their interactions. Therefore, whether multiple pheromone based strategy can inspire swarm robotics research, and inversely how the performances of swarm robots controlled by multiple pheromones bring inspirations to explain the social insects' behaviours will become an interesting question. Thus, to provide a reliable system to undertake the multiple pheromone study, in this paper, we specifically proposed and realised a multiple pheromone communication system called ColCOSΦ. This system consists of a virtual pheromone sub-system wherein the multiple pheromone is represented by a colour image displayed on a screen, and the micro-robots platform designed for swarm robotics applications. Two case studies are undertaken to verify the effectiveness of this system: one is the multiple pheromone based on an ant's forage and another is the interactions of aggregation and alarm pheromones. The experimental results demonstrate the feasibility of ColCOSΦ and its great potential in directing swarm robotics and social insects research.

    @inproceedings{lincoln36187,
    booktitle = {The 2019 IEEE International Conference on Advanced Robotics and Mechatronics (ICARM)},
    month = {July},
    title = {ColCOS{\ensuremath{\Phi}}: A Multiple Pheromone Communication System for Swarm Robotics and Social Insects Research},
    author = {Xuelong Sun and Tian Liu and Cheng Hu and Qinbing Fu and Shigang Yue},
    publisher = {IEEE},
    year = {2019},
    url = {http://eprints.lincoln.ac.uk/id/eprint/36187/},
    abstract = {In the last few decades we have witnessed how the pheromone of social insect has become a rich inspiration source of swarm robotics. By utilising the virtual pheromone in physical swarm robot system to coordinate individuals and realise direct/indirect inter-robot communications like the social insect, stigmergic behaviour has emerged. However, many studies only take one single pheromone into account in solving swarm problems, which is not the case in real insects. In the real social insect world, diverse behaviours, complex collective performances and flexible transition from one state to another are guided by different kinds of pheromones and their interactions. Therefore, whether multiple pheromone based strategy can inspire swarm robotics research, and inversely how the performances of swarm robots controlled by multiple pheromones bring inspirations to explain the social insects' behaviours will become an interesting question. Thus, to provide a reliable system to undertake the multiple pheromone study, in this paper, we specifically proposed and realised a multiple pheromone communication system called ColCOS{\ensuremath{\Phi}}. This system consists of a virtual pheromone sub-system wherein the multiple pheromone is represented by a colour image displayed on a screen, and the micro-robots platform designed for swarm robotics applications. Two case studies are undertaken to verify the effectiveness of this system: one is the multiple pheromone based on an ant's forage and another is the interactions of aggregation and alarm pheromones. The experimental results demonstrate the feasibility of ColCOS{\ensuremath{\Phi}} and its great potential in directing swarm robotics and social insects research.}
    }
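    A screen-rendered pheromone field like the one described above is commonly modelled as a grid with deposition and evaporation. The toy sketch below illustrates only that general pattern; the grid size and decay rate are assumptions, not ColCOSΦ parameters:

    import numpy as np

    class PheromoneField:
        """Toy discretised pheromone field with deposition and evaporation."""

        def __init__(self, shape=(120, 120), retain=0.98):
            self.grid = np.zeros(shape)
            self.retain = retain            # fraction surviving each tick

        def deposit(self, x, y, amount=1.0):
            self.grid[y, x] += amount       # robot drops pheromone at (x, y)

        def step(self):
            self.grid *= self.retain        # evaporation over time

    field = PheromoneField()
    field.deposit(60, 60)
    for _ in range(100):
        field.step()                        # trail fades unless reinforced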
  • J. Koleosho and C. Saaj, “System design and control of a di-wheel rover,” in 20th annual conference, taros 2019, 2019.
    [BibTeX] [Download PDF]
    @inproceedings{lincoln39422,
    booktitle = {20th Annual Conference, TAROS 2019},
    month = {July},
    title = {System Design and Control of a Di-Wheel Rover},
    author = {John Koleosho and Chakravarthini Saaj},
    publisher = {Springer},
    year = {2019},
    url = {http://eprints.lincoln.ac.uk/id/eprint/39422/}
    }
  • H. Cao, P. G. Esteban, M. Bartlett, P. Baxter, T. Belpaeme, E. Billing, H. Cai, M. Coeckelbergh, C. Costescu, D. David, A. D. Beir, D. Hernandez, J. Kennedy, H. Liu, S. Matu, A. Mazel, A. Pandey, K. Richardson, E. Senft, S. Thill, G. V. de Perre, B. Vanderborght, D. Vernon, K. Wakanuma, H. Yu, X. Zhou, and T. Ziemke, “Robot-enhanced therapy: development and validation of supervised autonomous robotic system for autism spectrum disorders therapy,” Ieee robotics & automation magazine, vol. 26, iss. 2, p. 49–58, 2019. doi:10.1109/MRA.2019.2904121
    [BibTeX] [Abstract] [Download PDF]

    Robot-assisted therapy (RAT) offers potential advantages for improving the social skills of children with autism spectrum disorders (ASDs). This article provides an overview of the developed technology and clinical results of the EC-FP7-funded Development of Robot-Enhanced therapy for children with AutisM spectrum disorders (DREAM) project, which aims to develop the next level of RAT in both clinical and technological perspectives, commonly referred to as robot-enhanced therapy (RET). Within this project, a supervised autonomous robotic system is collaboratively developed by an interdisciplinary consortium including psychotherapists, cognitive scientists, roboticists, computer scientists, and ethicists, which allows robot control to exceed classical remote control methods, e.g., Wizard of Oz (WoZ), while ensuring safe and ethical robot behavior. Rigorous clinical studies are conducted to validate the efficacy of RET. Current results indicate that RET can obtain an equivalent performance compared to that of human standard therapy for children with ASDs. We also discuss the next steps of developing RET robotic systems.

    @article{lincoln36203,
    volume = {26},
    number = {2},
    month = {June},
    author = {Hoang-Long Cao and Pablo G. Esteban and Madeleine Bartlett and Paul Baxter and Tony Belpaeme and Erik Billing and Haibin Cai and Mark Coeckelbergh and Cristina Costescu and Daniel David and Albert De Beir and Daniel Hernandez and James Kennedy and Honghai Liu and Silviu Matu and Alexandre Mazel and Amit Pandey and Kathleen Richardson and Emmanuel Senft and Serge Thill and Greet Van de Perre and Bram Vanderborght and David Vernon and Kutoma Wakanuma and Hui Yu and Xiaolong Zhou and Tom Ziemke},
    title = {Robot-Enhanced Therapy: Development and Validation of Supervised Autonomous Robotic System for Autism Spectrum Disorders Therapy},
    publisher = {IEEE},
    year = {2019},
    journal = {IEEE Robotics \& Automation Magazine},
    doi = {10.1109/MRA.2019.2904121},
    pages = {49--58},
    url = {http://eprints.lincoln.ac.uk/id/eprint/36203/},
    abstract = {Robot-assisted therapy (RAT) offers potential advantages for improving the social skills of children with autism spectrum disorders (ASDs). This article provides an overview of the developed technology and clinical results of the EC-FP7-funded Development of Robot-Enhanced therapy for children with AutisM spectrum disorders (DREAM) project, which aims to develop the next level of RAT in both clinical and technological perspectives, commonly referred to as robot-enhanced therapy (RET). Within this project, a supervised autonomous robotic system is collaboratively developed by an interdisciplinary consortium including psychotherapists, cognitive scientists, roboticists, computer scientists, and ethicists, which allows robot control to exceed classical remote control methods, e.g., Wizard of Oz (WoZ), while ensuring safe and ethical robot behavior. Rigorous clinical studies are conducted to validate the efficacy of RET. Current results indicate that RET can obtain an equivalent performance compared to that of human standard therapy for children with ASDs. We also discuss the next steps of developing RET robotic systems.}
    }
  • A. Gabriel, S. Cosar, N. Bellotto, and P. Baxter, “A dataset for action recognition in the wild,” in Towards autonomous robotic systems, 2019, p. 362–374. doi:10.1007/978-3-030-23807-0_30
    [BibTeX] [Abstract] [Download PDF]

    The development of autonomous robots for agriculture depends on a successful approach to recognize user needs as well as datasets reflecting the characteristics of the domain. Available datasets for 3D Action Recognition generally feature controlled lighting and framing while recording subjects from the front. They mostly reflect good recording conditions and therefore fail to account for the highly variable conditions the robot would have to work with in the field, e.g. when providing in-field logistic support for human fruit pickers as in our scenario. Existing work on Intention Recognition mostly labels plans or actions as intentions, but neither of those fully capture the extent of human intent. In this work, we argue for a holistic view on human Intention Recognition and propose a set of recording conditions, gestures and behaviors that better reflect the environment and conditions an agricultural robot might find itself in. We demonstrate the utility of the dataset by means of evaluating two human detection methods: bounding boxes and skeleton extraction.

    @inproceedings{lincoln36395,
    volume = {11649},
    month = {June},
    author = {Alexander Gabriel and Serhan Cosar and Nicola Bellotto and Paul Baxter},
    booktitle = {Towards Autonomous Robotic Systems},
    title = {A Dataset for Action Recognition in the Wild},
    publisher = {Springer},
    year = {2019},
    doi = {10.1007/978-3-030-23807-0\_30},
    pages = {362--374},
    url = {http://eprints.lincoln.ac.uk/id/eprint/36395/},
    abstract = {The development of autonomous robots for agriculture depends on a successful approach to recognize user needs as well as datasets reflecting the characteristics of the domain. Available datasets for 3D Action Recognition generally feature controlled lighting and framing while recording subjects from the front. They mostly reflect good recording conditions and therefore fail to account for the highly variable conditions the robot would have to work with in the field, e.g. when providing in-field logistic support for human fruit pickers as in our scenario. Existing work on Intention Recognition mostly labels plans or actions as intentions, but neither of those fully capture the extent of human intent. In this work, we argue for a holistic view on human Intention Recognition and propose a set of recording conditions, gestures and behaviors that better reflect the environment and conditions an agricultural robot might find itself in. We demonstrate the utility of the dataset by means of evaluating two human detection methods: bounding boxes and skeleton extraction.}
    }
  • R. Akrour, J. Pajarinen, G. Neumann, and J. Peters, “Projections for approximate policy iteration algorithms,” in Proceedings of the international conference on machine learning (icml), 2019, p. 181–190.
    [BibTeX] [Abstract] [Download PDF]

    Approximate policy iteration is a class of reinforcement learning (RL) algorithms where the policy is encoded using a function approximator and which has been especially prominent in RL with continuous action spaces. In this class of RL algorithms, ensuring an increase of the policy return during the policy update often requires constraining the change in action distribution. Several approximations exist in the literature to solve this constrained policy update problem. In this paper, we propose to improve over such solutions by introducing a set of projections that transform the constrained problem into an unconstrained one, which is then solved by standard gradient descent. Using these projections, we empirically demonstrate that our approach can improve the policy update solution and the control over exploration of existing approximate policy iteration algorithms.

    @inproceedings{lincoln36285,
    booktitle = {Proceedings of the International Conference on Machine Learning (ICML)},
    month = {June},
    title = {Projections for Approximate Policy Iteration Algorithms},
    author = {R. Akrour and J. Pajarinen and Gerhard Neumann and J. Peters},
    publisher = {Proceedings of Machine Learning Research},
    year = {2019},
    pages = {181--190},
    url = {http://eprints.lincoln.ac.uk/id/eprint/36285/},
    abstract = {Approximate policy iteration is a class of reinforcement learning (RL) algorithms where the policy is encoded using a function approximator and which has been especially prominent in RL with continuous action spaces. In this class of RL algorithms, ensuring an increase of the policy return during the policy update often requires constraining the change in action distribution. Several approximations exist in the literature to solve this constrained policy update problem. In this paper, we propose to improve over such solutions by introducing a set of projections that transform the constrained problem into an unconstrained one, which is then solved by standard gradient descent. Using these projections, we empirically demonstrate that our approach can improve the policy update solution and the control over exploration of existing approximate policy iteration algorithms.}
    }
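    As a simplified analogue of the projections discussed in the entry above, one can take an unconstrained gradient step on the policy parameters and then project the step back onto a trust region. The sketch uses a plain Euclidean ball as a stand-in for the paper's constraints (my simplification, not the authors' algorithm):

    import numpy as np

    def projected_policy_update(theta, grad, lr=0.1, radius=0.05):
        """Gradient ascent step followed by projection onto an L2 ball.

        Turns the constrained update into an unconstrained step plus a
        projection, mirroring the structure (not the details) of the paper.
        """
        step = lr * grad
        norm = np.linalg.norm(step)
        if norm > radius:                   # step left the trust region:
            step *= radius / norm           # project onto its boundary
        return theta + step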
  • P. Becker, H. Pandya, G. Gebhardt, C. Zhao, J. C. Taylor, and G. Neumann, “Recurrent kalman networks: factorized inference in high-dimensional deep feature spaces,” in Proceedings of the 36th international conference on machine learning, Long Beach, California, USA, 2019, p. 544–552.
    [BibTeX] [Abstract] [Download PDF]

    In order to integrate uncertainty estimates into deep time-series modelling, Kalman Filters (KFs) (Kalman et al., 1960) have been integrated with deep learning models; however, such approaches typically rely on approximate inference techniques such as variational inference, which makes learning more complex and often less scalable due to approximation errors. We propose a new deep approach to Kalman filtering which can be learned directly in an end-to-end manner using backpropagation without additional approximations. Our approach uses a high-dimensional factorized latent state representation for which the Kalman updates simplify to scalar operations and thus avoids hard-to-backpropagate, computationally heavy and potentially unstable matrix inversions. Moreover, we use locally linear dynamic models to efficiently propagate the latent state to the next time step. The resulting network architecture, which we call Recurrent Kalman Network (RKN), can be used for any time-series data, similar to an LSTM (Hochreiter & Schmidhuber, 1997) but uses an explicit representation of uncertainty. As shown by our experiments, the RKN obtains much more accurate uncertainty estimates than an LSTM or Gated Recurrent Units (GRUs) (Cho et al., 2014) while also showing a slightly improved prediction performance and outperforms various recent generative models on an image imputation task.

    @inproceedings{lincoln36286,
    volume = {97},
    month = {June},
    author = {Philipp Becker and Harit Pandya and Gregor Gebhardt and Cheng Zhao and C. James Taylor and Gerhard Neumann},
    series = {Proceedings of Machine Learning Research},
    booktitle = {Proceedings of the 36th International Conference on Machine Learning},
    address = {Long Beach, California, USA},
    title = {Recurrent Kalman Networks: Factorized Inference in High-Dimensional Deep Feature Spaces},
    publisher = {Proceedings of Machine Learning Research},
    year = {2019},
    pages = {544--552},
    url = {http://eprints.lincoln.ac.uk/id/eprint/36286/},
    abstract = {In order to integrate uncertainty estimates into deep time-series modelling, Kalman Filters (KFs) (Kalman et al., 1960) have been integrated with deep learning models, however, such approaches typically rely on approximate inference techniques such as variational inference which makes learning more complex and often less scalable due to approximation errors. We propose a new deep approach to Kalman filtering which can be learned directly in an end-to-end manner using backpropagation without additional approximations. Our approach uses a high-dimensional factorized latent state representation for which the Kalman updates simplify to scalar operations and thus avoids hard to backpropagate, computationally heavy and potentially unstable matrix inversions. Moreover, we use locally linear dynamic models to efficiently propagate the latent state to the next time step. The resulting network architecture, which we call Recurrent Kalman Network (RKN), can be used for any time-series data, similar to a LSTM (Hochreiter \& Schmidhuber, 1997) but uses an explicit representation of uncertainty. As shown by our experiments, the RKN obtains much more accurate uncertainty estimates than an LSTM or Gated Recurrent Units (GRUs) (Cho et al., 2014) while also showing a slightly improved prediction performance and outperforms various recent generative models on an image imputation task.}
    }
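    The factorization the RKN exploits, a diagonal latent covariance under which Kalman updates reduce to scalar operations, can be illustrated with the scalar measurement update below (a generic Kalman filter sketch in my own notation, not the RKN architecture itself):

    def scalar_kalman_update(mu, var, z, r):
        """Scalar Kalman measurement update for one latent dimension.

        mu, var: prior mean and variance; z, r: observation and its noise
        variance. With a factorized covariance this runs independently for
        every dimension, so no matrix inversion is needed.
        """
        gain = var / (var + r)        # scalar Kalman gain
        mu_post = mu + gain * (z - mu)
        var_post = (1.0 - gain) * var
        return mu_post, var_post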
  • S. M. Mustaza, Y. Elsayed, C. Lekakou, C. Saaj, and J. Fras, “Dynamic modeling of fiber-reinforced soft manipulator: a visco-hyperelastic material-based continuum mechanics approach,” Soft robotics, vol. 6, iss. 3, p. 305–317, 2019. doi:10.1089/soro.2018.0032
    [BibTeX] [Abstract] [Download PDF]

    Robot-assisted surgery is gaining popularity worldwide and there is increasing scientific interest to explore the potential of soft continuum robots for minimally invasive surgery. However, the remote control of soft robots is much more challenging compared with their rigid counterparts. Accurate modeling of manipulator dynamics is vital to remotely control the diverse movement configurations and is particularly important for safe interaction with the operating environment. However, current dynamic models applied to soft manipulator systems are simplistic and empirical, which restricts the full potential of the new soft robots technology. Therefore, this article provides a new insight into the development of a nonlinear dynamic model for a soft continuum manipulator based on a material model. The continuum manipulator used in this study is treated as a composite material and a modified nonlinear Kelvin–Voigt material model is utilized to embody the visco-hyperelastic dynamics of soft silicone. The Lagrangian approach is applied to derive the equation of motion of the manipulator. Simulation and experimental results prove that this material modeling approach sufficiently captures the nonlinear time- and rate-dependent behavior of a soft manipulator. Material model-based closed-loop trajectory control was implemented to further validate the feasibility of the derived model and increase the performance of the overall system.

    @article{lincoln37436,
    volume = {6},
    number = {3},
    month = {June},
    author = {S.M. Mustaza and Y. Elsayed and C. Lekakou and C. Saaj and J. Fras},
    title = {Dynamic modeling of fiber-reinforced soft manipulator: A visco-hyperelastic material-based continuum mechanics approach},
    publisher = {Mary Ann Liebert},
    year = {2019},
    journal = {Soft Robotics},
    doi = {10.1089/soro.2018.0032},
    pages = {305--317},
    url = {http://eprints.lincoln.ac.uk/id/eprint/37436/},
    abstract = {Robot-assisted surgery is gaining popularity worldwide and there is increasing scientific interest to explore the potential of soft continuum robots for minimally invasive surgery. However, the remote control of soft robots is much more challenging compared with their rigid counterparts. Accurate modeling of manipulator dynamics is vital to remotely control the diverse movement configurations and is particularly important for safe interaction with the operating environment. However, current dynamic models applied to soft manipulator systems are simplistic and empirical, which restricts the full potential of the new soft robots technology. Therefore, this article provides a new insight into the development of a nonlinear dynamic model for a soft continuum manipulator based on a material model. The continuum manipulator used in this study is treated as a composite material and a modified nonlinear Kelvin–Voigt material model is utilized to embody the visco-hyperelastic dynamics of soft silicone. The Lagrangian approach is applied to derive the equation of motion of the manipulator. Simulation and experimental results prove that this material modeling approach sufficiently captures the nonlinear time- and rate-dependent behavior of a soft manipulator. Material model-based closed-loop trajectory control was implemented to further validate the feasibility of the derived model and increase the performance of the overall system.}
    }
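    For reference, the linear Kelvin–Voigt element underlying the modified model above places an elastic spring and a viscous damper in parallel, giving the standard constitutive law (the paper's nonlinear modification is not reproduced here):

        \sigma(t) = E\,\varepsilon(t) + \eta\,\dot{\varepsilon}(t)

    where \sigma is the stress, \varepsilon the strain, E the elastic modulus and \eta the viscosity.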
  • K. Elgeneidy, P. Lightbody, S. Pearson, and G. Neumann, “Characterising 3d-printed soft fin ray robotic fingers with layer jamming capability for delicate grasping,” in Robosoft 2019, 2019.
    [BibTeX] [Abstract] [Download PDF]

    Motivated by the growing need within the agrifood industry to automate the handling of delicate produce, this paper presents soft robotic fingers utilising the Fin Ray effect to passively and gently adapt to delicate targets. The proposed Soft Fin Ray fingers feature thin ribs and are entirely 3D printed from a flexible material (NinjaFlex) to enhance their shape adaptation, compared to the original Fin Ray fingers. To overcome their reduced force generation, the effects of the angle and spacing of the flexible ribs were experimentally characterised. The results showed that at large displacements, layer jamming between tilted flexible ribs can significantly enhance the force generation, while minimal contact forces can still be maintained at small displacements for delicate grasping.

    @inproceedings{lincoln34950,
    booktitle = {RoboSoft 2019},
    month = {June},
    title = {Characterising 3D-printed Soft Fin Ray Robotic Fingers with Layer Jamming Capability for Delicate Grasping},
    author = {Khaled Elgeneidy and Peter Lightbody and Simon Pearson and Gerhard Neumann},
    year = {2019},
    url = {http://eprints.lincoln.ac.uk/id/eprint/34950/},
    abstract = {Motivated by the growing need within the agrifood industry to automate the handling of delicate produce, this paper presents soft robotic fingers utilising the Fin Ray effect to passively and gently adapt to delicate targets. The proposed Soft Fin Ray fingers feature thin ribs and are entirely 3D printed from a flexible material (NinjaFlex) to enhance their shape adaptation, compared to the original Fin Ray fingers. To overcome their reduced force generation, the effects of the angle and spacing of the flexible ribs were experimentally characterised. The results showed that at large displacements, layer jamming between tilted flexible ribs can significantly enhance the force generation, while minimal contact forces can still be maintained at small displacements for delicate grasping.}
    }
  • P. Bosilj, I. Gould, T. Duckett, and G. Cielniak, “Pattern spectra from different component trees for estimating soil size distribution,” in 14th international symposium on mathematical morphology, 2019, p. 415–427.
    [BibTeX] [Abstract] [Download PDF]

    We study the pattern spectra in the context of soil structure analysis. Good soil structure is vital for sustainable crop growth. Accurate and fast measuring methods can contribute greatly to soil management decisions. However, the current in-field approaches contain a degree of subjectivity, while obtaining quantifiable results through laboratory techniques typically involves sieving the soil, which is labour- and time-intensive. We aim to replace this physical sieving process through image analysis, and investigate the effectiveness of pattern spectra to capture the size distribution of the soil aggregates. We calculate the pattern spectra from partitioning hierarchies in addition to the traditional max-tree. The study is posed as an image retrieval problem, and confirms the ability of pattern spectra and suitability of different partitioning trees to re-identify soil samples in different arrangements and scales.

    @inproceedings{lincoln35548,
    month = {May},
    author = {Petra Bosilj and Iain Gould and Tom Duckett and Grzegorz Cielniak},
    booktitle = {14th International Symposium on Mathematical Morphology},
    title = {Pattern Spectra from Different Component Trees for Estimating Soil Size Distribution},
    publisher = {Springer},
    pages = {415--427},
    year = {2019},
    url = {http://eprints.lincoln.ac.uk/id/eprint/35548/},
    abstract = {We study the pattern spectra in the context of soil structure analysis. Good soil structure is vital for sustainable crop growth. Accurate and fast measuring methods can contribute greatly to soil management decisions. However, the current in-field approaches contain a degree of subjectivity, while obtaining quantifiable results through laboratory techniques typically involves sieving the soil, which is labour- and time-intensive. We aim to replace this physical sieving process through image analysis, and investigate the effectiveness of pattern spectra to capture the size distribution of the soil aggregates. We calculate the pattern spectra from partitioning hierarchies in addition to the traditional max-tree. The study is posed as an image retrieval problem, and confirms the ability of pattern spectra and suitability of different partitioning trees to re-identify soil samples in different arrangements and scales.}
    }
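    A pattern spectrum is essentially a granulometry: it records how much image mass each scale of morphological filtering removes. The sketch below shows the classic opening-based variant (the paper computes spectra from component trees, max-trees and partitioning hierarchies, which this simple version does not reproduce):

    import numpy as np
    from scipy.ndimage import grey_opening

    def pattern_spectrum(image, max_size=10):
        """Opening-based granulometry of a greyscale image (illustrative)."""
        volumes = [float(image.sum())]
        for s in range(2, max_size + 1):
            opened = grey_opening(image, size=(s, s))
            volumes.append(float(opened.sum()))
        # mass removed at each successively coarser scale
        return -np.diff(volumes)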
  • J. Zhao, X. Ma, Q. Fu, C. Hu, and S. Yue, “An lgmd based competitive collision avoidance strategy for uav,” in The 15th international conference on artificial intelligence applications and innovations, 2019. doi:10.1007/978-3-030-19823-7_6
    [BibTeX] [Abstract] [Download PDF]

    Building a reliable and efficient collision avoidance system for unmanned aerial vehicles (UAVs) is still a challenging problem. This research takes inspiration from locusts, which can fly in dense swarms for hundreds of miles without collision. In the locust's brain, a visual pathway of LGMD-DCMD (lobula giant movement detector and descending contra-lateral motion detector) has been identified as a collision perception system guiding fast collision avoidance for locusts, which is ideal for designing artificial vision systems. However, there are very few works investigating its potential in real-world UAV applications. In this paper, we present an LGMD based competitive collision avoidance method for UAV indoor navigation. Compared to previous works, we divided the UAV's field of view into four subfields each handled by an LGMD neuron. Therefore, four individual competitive LGMDs (C-LGMD) compete for guiding the directional collision avoidance of UAV. With more degrees of freedom compared to ground robots and vehicles, the UAV can escape from collision along four cardinal directions (e.g. the object approaching from the left-side triggers a rightward shifting of the UAV). Our proposed method has been validated by both simulations and real-time quadcopter arena experiments.

    @inproceedings{lincoln35691,
    booktitle = {The 15th International Conference on Artificial Intelligence Applications and Innovations},
    month = {May},
    title = {An LGMD Based Competitive Collision Avoidance Strategy for UAV},
    author = {Jiannan Zhao and Xingzao Ma and Qinbing Fu and Cheng Hu and Shigang Yue},
    publisher = {Springer},
    year = {2019},
    doi = {10.1007/978-3-030-19823-7\_6},
    url = {http://eprints.lincoln.ac.uk/id/eprint/35691/},
    abstract = {Building a reliable and efficient collision avoidance system for unmanned aerial vehicles (UAVs) is still a challenging problem. This research takes inspiration from locusts, which can fly in dense swarms for hundreds of miles without collision. In the locust's brain, a visual pathway of LGMD-DCMD (lobula giant movement detector and descending contra-lateral motion detector) has been identified as a collision perception system guiding fast collision avoidance for locusts, which is ideal for designing artificial vision systems. However, there are very few works investigating its potential in real-world UAV applications. In this paper, we present an LGMD based competitive collision avoidance method for UAV indoor navigation. Compared to previous works, we divided the UAV's field of view into four subfields each handled by an LGMD neuron. Therefore, four individual competitive LGMDs (C-LGMD) compete for guiding the directional collision avoidance of UAV. With more degrees of freedom compared to ground robots and vehicles, the UAV can escape from collision along four cardinal directions (e.g. the object approaching from the left-side triggers a rightward shifting of the UAV). Our proposed method has been validated by both simulations and real-time quadcopter arena experiments.}
    }
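    The competitive scheme in the entry above reduces to a simple rule: the sub-field LGMD with the strongest looming response wins, and the UAV shifts the opposite way. A minimal sketch (the threshold and interface are hypothetical):

    def escape_direction(responses, threshold=0.5):
        """Choose an avoidance direction from four competing LGMD outputs.

        responses: dict such as {'left': 0.1, 'right': 0.8, 'up': 0.0, 'down': 0.2};
        the winning sub-field triggers a shift in the opposite direction.
        """
        opposite = {"left": "right", "right": "left",
                    "up": "down", "down": "up"}
        winner = max(responses, key=responses.get)
        return opposite[winner] if responses[winner] > threshold else "hold"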
  • Q. Fu, N. Bellotto, H. Wang, C. F. Rind, H. Wang, and S. Yue, “A visual neural network for robust collision perception in vehicle driving scenarios,” in 15th international conference on artificial intelligence applications and innovations, 2019. doi:10.1007/978-3-030-19823-7_5
    [BibTeX] [Abstract] [Download PDF]

    This research addresses the challenging problem of visual collision detection in very complex and dynamic real physical scenes, specifically, the vehicle driving scenarios. This research takes inspiration from a large-field looming sensitive neuron, i.e., the lobula giant movement detector (LGMD) in the locust’s visual pathways, which represents high spike frequency to rapid approaching objects. Building upon our previous models, in this paper we propose a novel inhibition mechanism that is capable of adapting to different levels of background complexity. This adaptive mechanism works effectively to mediate the local inhibition strength and tune the temporal latency of local excitation reaching the LGMD neuron. As a result, the proposed model is effective to extract colliding cues from complex dynamic visual scenes. We tested the proposed method using a range of stimuli including simulated movements in grating backgrounds and shifting of a natural panoramic scene, as well as vehicle crash video sequences. The experimental results demonstrate the proposed method is feasible for fast collision perception in real-world situations with potential applications in future autonomous vehicles.

    @inproceedings{lincoln35586,
    booktitle = {15th International Conference on Artificial Intelligence Applications and Innovations},
    month = {May},
    title = {A Visual Neural Network for Robust Collision Perception in Vehicle Driving Scenarios},
    author = {Qinbing Fu and Nicola Bellotto and Huatian Wang and F. Claire Rind and Hongxin Wang and Shigang Yue},
    publisher = {Springer},
    year = {2019},
    doi = {10.1007/978-3-030-19823-7\_5},
    url = {http://eprints.lincoln.ac.uk/id/eprint/35586/},
    abstract = {This research addresses the challenging problem of visual collision detection in very complex and dynamic real physical scenes, specifically vehicle driving scenarios. This research takes inspiration from a large-field looming sensitive neuron, i.e., the lobula giant movement detector (LGMD) in the locust's visual pathways, which signals rapidly approaching objects with a high spike frequency. Building upon our previous models, in this paper we propose a novel inhibition mechanism that is capable of adapting to different levels of background complexity. This adaptive mechanism works effectively to mediate the local inhibition strength and tune the temporal latency of local excitation reaching the LGMD neuron. As a result, the proposed model is effective at extracting colliding cues from complex dynamic visual scenes. We tested the proposed method using a range of stimuli including simulated movements in grating backgrounds and shifting of a natural panoramic scene, as well as vehicle crash video sequences. The experimental results demonstrate that the proposed method is feasible for fast collision perception in real-world situations with potential applications in future autonomous vehicles.}
    }
  • H. Wang, Q. Fu, H. Wang, J. Peng, and S. Yue, “Constant angular velocity regulation for visually guided terrain following,” in 15th international conference on artificial intelligence applications and innovations, 2019, p. 597–608. doi:10.1007/978-3-030-19823-7_50
    [BibTeX] [Abstract] [Download PDF]

    Insects use visual cues to control their flight behaviours. By estimating the angular velocity of the visual stimuli and regulating it to a constant value, honeybees can perform a terrain following task which keeps a certain height above the undulating ground. To mimic this behaviour in a bio-plausible computation structure, this paper presents a new angular velocity decoding model based on the honeybee's behavioural experiments. The model consists of three parts: the texture estimation layer for spatial information extraction, the motion detection layer for temporal information extraction, and the decoding layer combining information from previous layers to estimate the angular velocity. Compared to previous methods in this field, the proposed model produces responses largely independent of the spatial frequency and contrast in grating experiments. An angular velocity based control scheme is proposed to implement the model in a bee simulated by the game engine Unity. The perfect terrain following above patterned ground and successful flight over irregularly textured terrain show its potential for terrain following by micro unmanned aerial vehicles.

    @inproceedings{lincoln35595,
    month = {May},
    author = {Huatian Wang and Qinbing Fu and Hongxin Wang and Jigen Peng and Shigang Yue},
    booktitle = {15th International Conference on Artificial Intelligence Applications and Innovations},
    title = {Constant Angular Velocity Regulation for Visually Guided Terrain Following},
    publisher = {Springer},
    doi = {10.1007/978-3-030-19823-7\_50},
    pages = {597--608},
    year = {2019},
    url = {http://eprints.lincoln.ac.uk/id/eprint/35595/},
    abstract = {Insects use visual cues to control their flight behaviours. By estimating the angular velocity of the visual stimuli and regulating it to a constant value, honeybees can perform a terrain following task which keeps a certain height above the undulating ground. To mimic this behaviour in a bio-plausible computation structure, this paper presents a new angular velocity decoding model based on the honeybee's behavioural experiments. The model consists of three parts: the texture estimation layer for spatial information extraction, the motion detection layer for temporal information extraction, and the decoding layer combining information from previous layers to estimate the angular velocity. Compared to previous methods in this field, the proposed model produces responses largely independent of the spatial frequency and contrast in grating experiments. An angular velocity based control scheme is proposed to implement the model in a bee simulated by the game engine Unity. The perfect terrain following above patterned ground and successful flight over irregularly textured terrain show its potential for terrain following by micro unmanned aerial vehicles.}
    }
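
    The regulation idea above admits a very small control sketch. Assuming level flight at forward speed v and height h, the ventral image angular velocity is roughly omega = v/h, so a reading above the set-point means the ground is too close and the vehicle should climb. The function, set-point and gain below are illustrative assumptions, not the paper's model.

        def terrain_following_climb_rate(omega_est, omega_set=2.0, gain=0.5):
            """Map the optic-flow angular velocity estimate (rad/s) to a climb-rate command.

            omega_set and gain are assumed tuning values, not taken from the paper.
            """
            # omega_est > omega_set implies the ground is too close: climb away.
            return gain * (omega_est - omega_set)  # positive output -> climb
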
  • L. Sun, C. Zhao, Z. Yan, P. Liu, T. Duckett, and R. Stolkin, “A novel weakly-supervised approach for rgb-d-based nuclear waste object detection and categorization,” Ieee sensors journal, vol. 19, iss. 9, p. 3487–3500, 2019. doi:10.1109/JSEN.2018.2888815
    [BibTeX] [Abstract] [Download PDF]

    This paper addresses the problem of RGBD-based detection and categorization of waste objects for nuclear decommissioning. To enable autonomous robotic manipulation for nuclear decommissioning, nuclear waste objects must be detected and categorized. However, as a novel industrial application, large amounts of annotated waste object data are currently unavailable. To overcome this problem, we propose a weakly-supervised learning approach which is able to learn a deep convolutional neural network (DCNN) from unlabelled RGBD videos while requiring very few annotations. The proposed method also has the potential to be applied to other household or industrial applications. We evaluate our approach on the Washington RGB-D object recognition benchmark, achieving state-of-the-art performance among semi-supervised methods. More importantly, we introduce a novel dataset, i.e. the Birmingham nuclear waste simulants dataset, and evaluate our proposed approach on this novel industrial object recognition challenge. We further propose a complete real-time pipeline for RGBD-based detection and categorization of nuclear waste simulants. Our weakly-supervised approach has been demonstrated to be highly effective in solving a novel RGB-D object detection and recognition application with limited human annotations.

    @article{lincoln35699,
    volume = {19},
    number = {9},
    month = {May},
    author = {Li Sun and Cheng Zhao and Zhi Yan and Pengcheng Liu and Tom Duckett and Rustam Stolkin},
    title = {A Novel Weakly-supervised approach for RGB-D-based Nuclear Waste Object Detection and Categorization},
    publisher = {IEEE},
    year = {2019},
    journal = {IEEE Sensors Journal},
    doi = {10.1109/JSEN.2018.2888815},
    pages = {3487--3500},
    url = {http://eprints.lincoln.ac.uk/id/eprint/35699/},
    abstract = {This paper addresses the problem of RGBD-based detection and categorization of waste objects for nuclear decommissioning. To enable autonomous robotic manipulation for nuclear decommissioning, nuclear waste objects must be detected and categorized. However, as a novel industrial application, large amounts of annotated waste object data are currently unavailable. To overcome this problem, we propose a weakly-supervised learning approach which is able to learn a deep convolutional neural network (DCNN) from unlabelled RGBD videos while requiring very few annotations. The proposed method also has the potential to be applied to other household or industrial applications. We evaluate our approach on the Washington RGB-D object recognition benchmark, achieving state-of-the-art performance among semi-supervised methods. More importantly, we introduce a novel dataset, i.e. the Birmingham nuclear waste simulants dataset, and evaluate our proposed approach on this novel industrial object recognition challenge. We further propose a complete real-time pipeline for RGBD-based detection and categorization of nuclear waste simulants. Our weakly-supervised approach has been demonstrated to be highly effective in solving a novel RGB-D object detection and recognition application with limited human annotations.}
    }
  • A. Nanjangud, C. M. Saaj, P. C. Blacker, A. Young, C. I. Underwood, S. Eckersley, M. Sweeting, and P. Bianco, “Robotic architectures for the on-orbit assembly of large space telescopes,” in 15th esa symposium on advanced space technologies in robotics and automation, 2019.
    [BibTeX] [Download PDF]
    @inproceedings{lincoln39623,
    booktitle = {15th ESA Symposium on Advanced Space Technologies in Robotics and Automation},
    month = {May},
    title = {Robotic Architectures for the On-Orbit Assembly of Large Space Telescopes},
    author = {Angadh Nanjangud and Chakravarthini M Saaj and Peter C. Blacker and Alex Young and Craig I. Underwood and Steve Eckersley and Martin Sweeting and Paolo Bianco},
    year = {2019},
    url = {http://eprints.lincoln.ac.uk/id/eprint/39623/}
    }
  • E. C. Rodias, M. Lampridi, A. Sopegno, R. Berruto, G. Banias, D. Bochtis, and P. Busato, “Optimal energy performance on allocating energy crops,” Biosystems engineering, vol. 181, p. 11–27, 2019. doi:10.1016/j.biosystemseng.2019.02.007
    [BibTeX] [Abstract] [Download PDF]

    There is a variety of crops that may be considered as potential biomass production crops. In order to select the crop best suited for cultivation in a given area, several factors should be taken into account. During the crop selection process, a common framework should be followed, focussing on financial or energy performance. Combining multiple crops and multiple fields for the extraction of the best allocation requires a model to evaluate various and complex factors given a specific objective. This paper studies the maximisation of the total energy gained from biomass production by energy crops, reduced by the energy costs of the production process. The tool calculates the energy balance using multiple crops allocated to multiple fields. Both binary programming and linear programming methods are employed to solve the allocation problem. Each crop is assigned to a field (or a combination of crops is allocated to each field) with the aim of maximising the energy balance provided by the production system. For the demonstration of the tool, a hypothetical case study of three different crops cultivated for a decade (Miscanthus x giganteus, Arundo donax, and Panicum virgatum) and allocated to 40 dispersed fields around a biogas plant in Italy is presented. The objective of the best allocation is the maximisation of the energy balance; the results show that the linear solution is slightly better than the binary one in the basic scenario, while also suggesting alternative scenarios that would have an optimal energy balance.

    @article{lincoln39225,
    volume = {181},
    month = {May},
    author = {Efthymios C. Rodias and Maria Lampridi and Alessandro Sopegno and Remigio Berruto and George Banias and Dionysis Bochtis and Patrizia Busato},
    title = {Optimal energy performance on allocating energy crops},
    journal = {Biosystems Engineering},
    doi = {10.1016/j.biosystemseng.2019.02.007},
    pages = {11--27},
    year = {2019},
    url = {http://eprints.lincoln.ac.uk/id/eprint/39225/},
    abstract = {There is a variety of crops that may be considered as potential biomass production crops. In order to select the crop best suited for cultivation in a given area, several factors should be taken into account. During the crop selection process, a common framework should be followed, focussing on financial or energy performance. Combining multiple crops and multiple fields for the extraction of the best allocation requires a model to evaluate various and complex factors given a specific objective. This paper studies the maximisation of the total energy gained from biomass production by energy crops, reduced by the energy costs of the production process. The tool calculates the energy balance using multiple crops allocated to multiple fields. Both binary programming and linear programming methods are employed to solve the allocation problem. Each crop is assigned to a field (or a combination of crops is allocated to each field) with the aim of maximising the energy balance provided by the production system. For the demonstration of the tool, a hypothetical case study of three different crops cultivated for a decade (Miscanthus x giganteus, Arundo donax, and Panicum virgatum) and allocated to 40 dispersed fields around a biogas plant in Italy is presented. The objective of the best allocation is the maximisation of the energy balance; the results show that the linear solution is slightly better than the binary one in the basic scenario, while also suggesting alternative scenarios that would have an optimal energy balance.}
    }
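
    The binary-versus-linear contrast in the entry above can be illustrated with the LP relaxation of the crop-to-field assignment, which off-the-shelf solvers such as scipy.optimize.linprog handle directly. The energy figures below are invented for illustration; this is a sketch of the problem shape, not the paper's tool.

        import numpy as np
        from scipy.optimize import linprog

        # Invented net energy balances (GJ): crop j grown on field i.
        energy = np.array([[40.0, 35.0, 30.0],
                           [25.0, 45.0, 20.0],
                           [30.0, 28.0, 42.0]])
        n_fields, n_crops = energy.shape

        c = -energy.flatten()  # linprog minimises, so negate to maximise
        # One equality constraint per field: its crop fractions sum to one.
        A_eq = np.zeros((n_fields, n_fields * n_crops))
        for i in range(n_fields):
            A_eq[i, i * n_crops:(i + 1) * n_crops] = 1.0
        res = linprog(c, A_eq=A_eq, b_eq=np.ones(n_fields), bounds=(0.0, 1.0))

        allocation = res.x.reshape(n_fields, n_crops)  # the "linear" solution;
        # restricting x to {0, 1} instead gives the binary-programming variant.
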
  • F. Brandherm, J. Peters, G. Neumann, and R. Akrour, “Learning replanning policies with direct policy search,” Ieee robotics and automation letters (ra-l), vol. 4, iss. 2, p. 2196–2203, 2019. doi:10.1109/LRA.2019.2901656
    [BibTeX] [Abstract] [Download PDF]

    Direct policy search has been successful in learning challenging real world robotic motor skills by learning open-loop movement primitives with high sample efficiency. These primitives can be generalized to different contexts with varying initial configurations and goals. Current state-of-the-art contextual policy search algorithms, however, cannot adapt to changing, noisy context measurements. Yet, these are common characteristics of real world robotic tasks. Planning a trajectory ahead based on an inaccurate context that may change during the motion often results in poor accuracy, especially with highly dynamical tasks. To adapt to updated contexts, it is sensible to learn trajectory replanning strategies. We propose a framework to learn trajectory replanning policies via contextual policy search and demonstrate that they are safe for the robot, that they can be learned efficiently, and that they outperform non-replanning policies for problems with partially observable or perturbed contexts.

    @article{lincoln36284,
    volume = {4},
    number = {2},
    month = {April},
    author = {F. Brandherm and J. Peters and Gerhard Neumann and R. Akrour},
    title = {Learning Replanning Policies with Direct Policy Search},
    year = {2019},
    journal = {IEEE Robotics and Automation Letters (RA-L)},
    doi = {10.1109/LRA.2019.2901656},
    pages = {2196--2203},
    url = {http://eprints.lincoln.ac.uk/id/eprint/36284/},
    abstract = {Direct policy search has been successful in learning challenging real world robotic motor skills by learning open-loop movement primitives with high sample efficiency. These primitives can be generalized to different contexts with varying initial configurations and goals. Current state-of-the-art contextual policy search algorithms, however, cannot adapt to changing, noisy context measurements. Yet, these are common characteristics of real world robotic tasks. Planning a trajectory ahead based on an inaccurate context that may change during the motion often results in poor accuracy, especially with highly dynamical tasks. To adapt to updated contexts, it is sensible to learn trajectory replanning strategies. We propose a framework to learn trajectory replanning policies via contextual policy search and demonstrate that they are safe for the robot, that they can be learned efficiently, and that they outperform non-replanning policies for problems with partially observable or perturbed contexts.}
    }
  • M. Lampridi, D. Kateris, G. Vasileiadis, S. Pearson, C. Sørensen, A. Balafoutis, and D. Bochtis, “A case-based economic assessment of robotics employment in precision arable farming,” Agronomy, vol. 9, iss. 4, p. 175, 2019. doi:10.3390/agronomy9040175
    [BibTeX] [Abstract] [Download PDF]

    The need to intensify agriculture to meet increasing nutritional needs, in combination with the evolution of unmanned autonomous systems, has led to the development of a series of “smart” farming technologies that are expected to replace or complement conventional machinery and human labor. This paper proposes a preliminary methodology for the economic analysis of the employment of robotic systems in arable farming. This methodology is based on the basic processes for estimating the use cost of agricultural machinery. However, for the case of robotic systems, no average norms for the majority of the operational parameters are available. Here, we propose a novel estimation process for these parameters in the case of robotic systems. As a case study, the operation of light cultivation has been selected due to the technological readiness for this type of operation.

    @article{lincoln35601,
    volume = {9},
    number = {4},
    month = {April},
    author = {Maria Lampridi and Dimitrios Kateris and Giorgos Vasileiadis and Simon Pearson and Claus S{\o}rensen and Athanasios Balafoutis and Dionysis Bochtis},
    title = {A Case-Based Economic Assessment of Robotics Employment in Precision Arable Farming},
    publisher = {MDPI},
    year = {2019},
    journal = {Agronomy},
    doi = {10.3390/agronomy9040175},
    pages = {175},
    url = {http://eprints.lincoln.ac.uk/id/eprint/35601/},
    abstract = {The need to intensify agriculture to meet increasing nutritional needs, in combination with the evolution of unmanned autonomous systems, has led to the development of a series of ``smart'' farming technologies that are expected to replace or complement conventional machinery and human labor. This paper proposes a preliminary methodology for the economic analysis of the employment of robotic systems in arable farming. This methodology is based on the basic processes for estimating the use cost of agricultural machinery. However, for the case of robotic systems, no average norms for the majority of the operational parameters are available. Here, we propose a novel estimation process for these parameters in the case of robotic systems. As a case study, the operation of light cultivation has been selected due to the technological readiness for this type of operation.}
    }
  • A. W. I. Mohamed, C. M. Saaj, A. Seddaoui, and S. Eckersley, “Controlling a non-linear space robot using linear controllers,” in 5th ceas conference on guidance, navigation and control (eurognc), 2019.
    [BibTeX] [Download PDF]
    @inproceedings{lincoln39625,
    booktitle = {5th CEAS Conference on Guidance, Navigation and Control (EuroGNC)},
    month = {April},
    title = {Controlling a Non-Linear Space Robot using Linear Controllers},
    author = {A.W.I Mohamed and C. M. Saaj and A. Seddaoui and S. Eckersley},
    year = {2019},
    url = {http://eprints.lincoln.ac.uk/id/eprint/39625/}
    }
  • T. Angelopoulou, N. Tziolas, A. Balafoutis, G. Zalidis, and D. Bochtis, “Remote sensing techniques for soil organic carbon estimation: a review,” Remote sensing, vol. 11, iss. 6, p. 676, 2019. doi:10.3390/rs11060676
    [BibTeX] [Abstract] [Download PDF]

    Towards the need for sustainable development, remote sensing (RS) techniques in the Visible-Near Infrared–Shortwave Infrared (VNIR–SWIR, 400–2500 nm) region could assist in a more direct, cost-effective and rapid manner to estimate important indicators for soil monitoring purposes. Soil reflectance spectroscopy has been applied in various domains apart from laboratory conditions, e.g., sensors mounted on satellites, aircraft and Unmanned Aerial Systems. The aim of this review is to illustrate the research conducted on soil organic carbon estimation with the use of RS techniques, reporting the methodology and results of each study. It also aims to provide a comprehensive introduction to soil spectroscopy for those who are less conversant with the subject. In total, 28 journal articles were selected and further analysed. It was observed that prediction accuracy reduces from Unmanned Aerial Systems (UASs) to satellite platforms, though advances in machine learning techniques could further assist in the generation of better calibration models. There are some challenges concerning atmospheric, radiometric and geometric corrections, vegetation cover, soil moisture and roughness that still need to be addressed. The advantages and disadvantages of each approach are highlighted and future considerations are also discussed at the end.

    @article{lincoln39227,
    volume = {11},
    number = {6},
    month = {March},
    author = {Theodora Angelopoulou and Nikolaos Tziolas and Athanasios Balafoutis and George Zalidis and Dionysis Bochtis},
    title = {Remote Sensing Techniques for Soil Organic Carbon Estimation: A Review},
    year = {2019},
    journal = {Remote Sensing},
    doi = {10.3390/rs11060676},
    pages = {676},
    url = {http://eprints.lincoln.ac.uk/id/eprint/39227/},
    abstract = {Towards the need for sustainable development, remote sensing (RS) techniques in the Visible-Near Infrared--Shortwave Infrared (VNIR--SWIR, 400--2500 nm) region could assist in a more direct, cost-effective and rapid manner to estimate important indicators for soil monitoring purposes. Soil reflectance spectroscopy has been applied in various domains apart from laboratory conditions, e.g., sensors mounted on satellites, aircraft and Unmanned Aerial Systems. The aim of this review is to illustrate the research conducted on soil organic carbon estimation with the use of RS techniques, reporting the methodology and results of each study. It also aims to provide a comprehensive introduction to soil spectroscopy for those who are less conversant with the subject. In total, 28 journal articles were selected and further analysed. It was observed that prediction accuracy reduces from Unmanned Aerial Systems (UASs) to satellite platforms, though advances in machine learning techniques could further assist in the generation of better calibration models. There are some challenges concerning atmospheric, radiometric and geometric corrections, vegetation cover, soil moisture and roughness that still need to be addressed. The advantages and disadvantages of each approach are highlighted and future considerations are also discussed at the end.}
    }
  • S. Pearson, D. May, G. Leontidis, M. Swainson, S. Brewer, L. Bidaut, J. Frey, G. Parr, R. Maull, and A. Zisman, “Are distributed ledger technologies the panacea for food traceability?,” Global food security, vol. 20, p. 145–149, 2019. doi:10.1016/j.gfs.2019.02.002
    [BibTeX] [Abstract] [Download PDF]

    Distributed Ledger Technology (DLT), such as blockchain, has the potential to transform supply chains. It can provide a cryptographically secure and immutable record of transactions and associated metadata (origin, contracts, process steps, environmental variations, microbial records, etc.) linked across whole supply chains. The ability to trace food items within and along a supply chain is legally required by all actors within the chain. It is critical to food safety, underpins trust and global food trade. However, current food traceability systems are not linked between all actors within the supply chain. Key metadata on the age and process history of a food is rarely transferred when a product is bought and sold through multiple steps within the chain. Herein, we examine the potential of massively scalable DLT to securely link the entire food supply chain, from producer to end user. Under such a paradigm, should a food safety or quality issue ever arise, authorized end users could instantly and accurately trace the origin and history of any particular food item. This novel and unparalleled technology could help underpin trust for the safety of all food, a critical component of global food security. In this paper, we investigate the (i) data requirements to develop DLT technology across whole supply chains, (ii) key challenges and barriers to optimizing the complete system, and (iii) potential impacts on production efficiency, legal compliance, access to global food markets and the safety of food. Our conclusion is that while DLT has the potential to transform food systems, this can only be fully realized through the global development and agreement on suitable data standards and governance. In addition, key technical issues need to be resolved including challenges with DLT scalability, privacy and data architectures.

    @article{lincoln35035,
    volume = {20},
    month = {March},
    author = {Simon Pearson and David May and Georgios Leontidis and Mark Swainson and Steve Brewer and Luc Bidaut and Jeremy Frey and Gerard Parr and Roger Maull and Andrea Zisman},
    title = {Are Distributed Ledger Technologies the Panacea for Food Traceability?},
    publisher = {Elsevier},
    year = {2019},
    journal = {Global Food Security},
    doi = {10.1016/j.gfs.2019.02.002},
    pages = {145--149},
    url = {http://eprints.lincoln.ac.uk/id/eprint/35035/},
    abstract = {Distributed Ledger Technology (DLT), such as blockchain, has the potential to transform supply chains. It can provide a cryptographically secure and immutable record of transactions and associated metadata (origin, contracts, process steps, environmental variations, microbial records, etc.) linked across whole supply chains. The ability to trace food items within and along a supply chain is legally required by all actors within the chain. It is critical to food safety, underpins trust and global food trade. However, current food traceability systems are not linked between all actors within the supply chain. Key metadata on the age and process history of a food is rarely transferred when a product is bought and sold through multiple steps within the chain. Herein, we examine the potential of massively scalable DLT to securely link the entire food supply chain, from producer to end user. Under such a paradigm, should a food safety or quality issue ever arise, authorized end users could instantly and accurately trace the origin and history of any particular food item. This novel and unparalleled technology could help underpin trust for the safety of all food, a critical component of global food security. In this paper, we investigate the (i) data requirements to develop DLT technology across whole supply chains, (ii) key challenges and barriers to optimizing the complete system, and (iii) potential impacts on production efficiency, legal compliance, access to global food markets and the safety of food. Our conclusion is that while DLT has the potential to transform food systems, this can only be fully realized through the global development and agreement on suitable data standards and governance. In addition, key technical issues need to be resolved including challenges with DLT scalability, privacy and data architectures.}
    }
  • M. Hüttenrauch, A. Šošić, and G. Neumann, “Deep reinforcement learning for swarm systems,” Journal of machine learning research, vol. 20, iss. 54, p. 1–31, 2019.
    [BibTeX] [Abstract] [Download PDF]

    Recently, deep reinforcement learning (RL) methods have been applied successfully to multi-agent scenarios. Typically, the observation vector for decentralized decision making is represented by a concatenation of the (local) information an agent gathers about other agents. However, concatenation scales poorly to swarm systems with a large number of homogeneous agents as it does not exploit the fundamental properties inherent to these systems: (i) the agents in the swarm are interchangeable and (ii) the exact number of agents in the swarm is irrelevant. Therefore, we propose a new state representation for deep multi-agent RL based on mean embeddings of distributions, where we treat the agents as samples and use the empirical mean embedding as input for a decentralized policy. We define different feature spaces of the mean embedding using histograms, radial basis functions and neural networks trained end-to-end. We evaluate the representation on two well-known problems from the swarm literature in a globally and locally observable setup. For the local setup we furthermore introduce simple communication protocols. Of all approaches, the mean embedding representation using neural network features enables the richest information exchange between neighboring agents, facilitating the development of complex collective strategies.

    @article{lincoln36281,
    volume = {20},
    number = {54},
    month = {February},
    author = {Maximilian H{\"u}ttenrauch and Adrian {\v S}o{\v s}i{\'c} and Gerhard Neumann},
    title = {Deep Reinforcement Learning for Swarm Systems},
    publisher = {Journal of Machine Learning Research},
    year = {2019},
    journal = {Journal of Machine Learning Research},
    pages = {1--31},
    url = {http://eprints.lincoln.ac.uk/id/eprint/36281/},
    abstract = {Recently, deep reinforcement learning (RL) methods have been applied successfully to multi-agent scenarios. Typically, the observation vector for decentralized decision making is represented by a concatenation of the (local) information an agent gathers about other agents. However, concatenation scales poorly to swarm systems with a large number of homogeneous agents as it does not exploit the fundamental properties inherent to these systems: (i) the agents in the swarm are interchangeable and (ii) the exact number of agents in the swarm is irrelevant. Therefore, we propose a new state representation for deep multi-agent RL based on mean embeddings of distributions, where we treat the agents as samples and use the empirical mean embedding as input for a decentralized policy. We define different feature spaces of the mean embedding using histograms, radial basis functions and neural networks trained end-to-end. We evaluate the representation on two well-known problems from the swarm literature in a globally and locally observable setup. For the local setup we furthermore introduce simple communication protocols. Of all approaches, the mean embedding representation using neural network features enables the richest information exchange between neighboring agents, facilitating the development of complex collective strategies.}
    }
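
    The mean-embedding representation above is straightforward to sketch: treat each perceived neighbour as a sample, map it through a fixed feature function, and average, so the policy input has a fixed size and is invariant to agent ordering. The sketch below uses RBF features with assumed shapes; it is illustrative, not the authors' code.

        import numpy as np

        def rbf_features(obs, centres, bandwidth=1.0):
            """RBF feature map for one neighbour observation (e.g. relative position)."""
            d2 = ((obs[None, :] - centres) ** 2).sum(axis=1)
            return np.exp(-d2 / (2.0 * bandwidth ** 2))

        def mean_embedding(neighbour_obs, centres):
            """Empirical mean of the features over all observed neighbours:
            a fixed-size, permutation-invariant input for a decentralised policy."""
            feats = np.stack([rbf_features(o, centres) for o in neighbour_obs])
            return feats.mean(axis=0)

        # Example: 5 neighbours with 2-D relative positions, 16 random RBF centres.
        rng = np.random.default_rng(0)
        emb = mean_embedding(rng.normal(size=(5, 2)), rng.normal(size=(16, 2)))
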
  • L. Jackson, C. Saaj, A. Seddaoui, C. Whiting, S. Eckersley, and M. Ferris, “Design of a small space robot for on-orbit assembly missions,” in 5th international conference on mechatronics and robotics engineering, 2019, p. 107–112. doi:10.1145/3314493.3314520
    [BibTeX] [Abstract] [Download PDF]

    Intelligent robots have revolutionised terrestrial assembly and servicing processes, while low-cost small satellites have transformed the economics of space. This paper dovetails both technologies and proposes an innovative design for a small space robot that is potentially capable of assembly operations in-orbit. The drive for such missions stems from the growing commercial interests and scientific benefits offered by massive structures in space, such as the future large aperture astronomical or Earth Observation telescopes. However, limitations in the lifting capacity of launch vehicles currently impose severe restrictions on the size of the self-deployable monolithic telescope structure that can be carried. As a result, there is a growing demand for advancing the capabilities of space robots to assemble modular components in-orbit. To assess the feasibility of a small space robot for future in-space assembly missions, a detailed design is outlined and analysed in this paper. The trade-off between the manipulator configuration and its base spacecraft sizing is presented. This coherent design exercise is driven by various mission requirements that consider the constraints of a small spacecraft as well as its extreme operating environment.

    @inproceedings{lincoln37442,
    volume = {Part F},
    month = {February},
    author = {L. Jackson and C. Saaj and A. Seddaoui and C. Whiting and S. Eckersley and M. Ferris},
    note = {cited By 0},
    booktitle = {5th International Conference on Mechatronics and Robotics Engineering},
    title = {Design of a small space robot for on-orbit assembly missions},
    publisher = {ACM},
    year = {2019},
    journal = {ACM International Conference Proceeding Series},
    doi = {10.1145/3314493.3314520},
    pages = {107--112},
    url = {http://eprints.lincoln.ac.uk/id/eprint/37442/},
    abstract = {Intelligent robots have revolutionised terrestrial assembly and servicing processes, while low-cost small satellites have transformed the economics of space. This paper dovetails both technologies and proposes an innovative design for a small space robot that is potentially capable of assembly operations in-orbit. The drive for such missions stems from the growing commercial interests and scientific benefits offered by massive structures in space, such as the future large aperture astronomical or Earth Observation telescopes. However, limitations in the lifting capacity of launch vehicles currently impose severe restrictions on the size of the self-deployable monolithic telescope structure that can be carried. As a result, there is a growing demand for advancing the capabilities of space robots to assemble modular components in-orbit. To assess the feasibility of a small space robot for future in-space assembly missions, a detailed design is outlined and analysed in this paper. The trade-off between the manipulator configuration and its base spacecraft sizing is presented. This coherent design exercise is driven by various mission requirements that consider the constraints of a small spacecraft as well as its extreme operating environment.}
    }
  • D. Bechtsis, V. Moisiadis, N. Tsolakis, D. Vlachos, and D. Bochtis, “Unmanned ground vehicles in precision farming services: an integrated emulation modelling approach,” in Information and communication technologies in modern agricultural development, Springer, 2019, vol. 953, p. 177–190. doi:10.1007/978-3-030-12998-9_13
    [BibTeX] [Abstract] [Download PDF]

    Autonomous systems are a promising alternative for safely executing precision farming activities on a 24/7 basis. In this context, Unmanned Ground Vehicles (UGVs) are used in custom agricultural fields, with sophisticated sensors and data fusion techniques for real-time mapping and navigation. The aim of this study is to present a simulation software tool for providing effective and efficient farming activities in orchard fields and demonstrating the applicability of simulation to routing algorithms, hence increasing productivity, while dynamically addressing operational and tactical level uncertainties. The three-dimensional virtual world includes the field layout and the static objects (orchard trees, obstacles, physical boundaries) and is constructed in the open source Gazebo simulation software, while the Robot Operating System (ROS) and the implemented algorithms are tested using a custom vehicle. As a result, a routing algorithm is executed and enables the UGV to pass through all the orchard trees while dynamically avoiding static and dynamic obstacles. Unlike existing sophisticated tools, the developed mechanism could accommodate an extensive variety of agricultural activities and could be transparently transferred from the simulation environment to real-world ROS-compatible UGVs, providing user-friendly and highly customizable navigation.

    @incollection{lincoln39234,
    volume = {953},
    month = {February},
    author = {Dimitrios Bechtsis and Vasileios Moisiadis and Naoum Tsolakis and Dimitrios Vlachos and Dionysis Bochtis},
    booktitle = {Information and Communication Technologies in Modern Agricultural Development},
    title = {Unmanned Ground Vehicles in Precision Farming Services: An Integrated Emulation Modelling Approach},
    publisher = {Springer},
    year = {2019},
    doi = {10.1007/978-3-030-12998-9\_13},
    pages = {177--190},
    url = {http://eprints.lincoln.ac.uk/id/eprint/39234/},
    abstract = {Autonomous systems are a promising alternative for safely executing precision farming activities on a 24/7 basis. In this context, Unmanned Ground Vehicles (UGVs) are used in custom agricultural fields, with sophisticated sensors and data fusion techniques for real-time mapping and navigation. The aim of this study is to present a simulation software tool for providing effective and efficient farming activities in orchard fields and demonstrating the applicability of simulation to routing algorithms, hence increasing productivity, while dynamically addressing operational and tactical level uncertainties. The three-dimensional virtual world includes the field layout and the static objects (orchard trees, obstacles, physical boundaries) and is constructed in the open source Gazebo simulation software, while the Robot Operating System (ROS) and the implemented algorithms are tested using a custom vehicle. As a result, a routing algorithm is executed and enables the UGV to pass through all the orchard trees while dynamically avoiding static and dynamic obstacles. Unlike existing sophisticated tools, the developed mechanism could accommodate an extensive variety of agricultural activities and could be transparently transferred from the simulation environment to real-world ROS-compatible UGVs, providing user-friendly and highly customizable navigation.}
    }
  • C. A. G. Sørensen, D. Kateris, and D. Bochtis, “Ict innovations and smart farming,” in Information and communication technologies in modern agricultural development, Springer, 2019, vol. 953, p. 1–19. doi:10.1007/978-3-030-12998-9_1
    [BibTeX] [Abstract] [Download PDF]

    Agriculture plays a vital role in the global economy, with the majority of the rural population in developing countries depending on it. The depletion of natural resources makes the improvement of agricultural production more important but also more difficult than ever. For this reason, even as demand constantly grows, Information and Communication Technology (ICT) offers producers a route to sustainability and to improving their daily living conditions. ICT offers timely and updated relevant information such as weather forecasts, market prices, the occurrence of new diseases and varieties, etc. This new knowledge offers a unique opportunity to bring production-enhancing technologies to farmers, empowering them with modern agricultural technology so they can act accordingly to increase agricultural production in a cost-effective and profitable manner. The use of ICT by itself or combined with other ICT systems results in productivity improvement and better resource use, and reduces the time needed for farm management, marketing, logistics and quality assurance.

    @incollection{lincoln39235,
    volume = {953},
    month = {February},
    author = {Claus Aage Gr{\o}n S{\o}rensen and Dimitrios Kateris and Dionysis Bochtis},
    booktitle = {Information and Communication Technologies in Modern Agricultural Development},
    title = {ICT Innovations and Smart Farming},
    publisher = {Springer},
    year = {2019},
    doi = {10.1007/978-3-030-12998-9\_1},
    pages = {1--19},
    url = {http://eprints.lincoln.ac.uk/id/eprint/39235/},
    abstract = {Agriculture plays a vital role in the global economy, with the majority of the rural population in developing countries depending on it. The depletion of natural resources makes the improvement of agricultural production more important but also more difficult than ever. For this reason, even as demand constantly grows, Information and Communication Technology (ICT) offers producers a route to sustainability and to improving their daily living conditions. ICT offers timely and updated relevant information such as weather forecasts, market prices, the occurrence of new diseases and varieties, etc. This new knowledge offers a unique opportunity to bring production-enhancing technologies to farmers, empowering them with modern agricultural technology so they can act accordingly to increase agricultural production in a cost-effective and profitable manner. The use of ICT by itself or combined with other ICT systems results in productivity improvement and better resource use, and reduces the time needed for farm management, marketing, logistics and quality assurance.}
    }
  • E. C. Rodias, A. Sopegno, R. Berruto, D. Bochtis, E. Cavallo, and P. Busato, “A combined simulation and linear programming method for scheduling organic fertiliser application,” Biosystems engineering, vol. 178, p. 233–243, 2019. doi:10.1016/j.biosystemseng.2018.11.002
    [BibTeX] [Abstract] [Download PDF]

    Logistics have been used to analyse agricultural operations, such as chemical application, mineral or organic fertilisation and harvesting-handling operations. Recently, due to national or European commitments concerning livestock waste management, this waste is being applied to many crops instead of other mineral fertilisers. The organic fertiliser produced is highly available, although most of the crops it is applied to have strict timeliness issues concerning its application. Here, an organic fertiliser (liquid manure) distribution logistics system is modelled using a combined simulation and linear programming method. The method is applied to certain crops and field areas, taking into account specific agronomic, legislative and other constraints, with the objective of minimising the total annual cost. Given their direct connection with the organic fertiliser distribution, the operations of cultivation and seeding were included. In a basic scenario, the optimal cost was assessed for both crops over a total cultivated area of 120 ha. Three modified scenarios are presented. The first considers one more tractor being available and provides a reduction of 3.8% in the total annual cost in comparison with the basic scenario. The second and third modified scenarios consider fields with high nitrogen demand next to the farm, with one or two tractors, implying savings of 2.5% and 6.1%, respectively, compared to the basic scenario. Finally, it was concluded that the effect of distance from the manure production site to the location of the fields could reduce costs by 6.5%.

    @article{lincoln39224,
    volume = {178},
    month = {February},
    author = {Efthymios C. Rodias and Alessandro Sopegno and Remigio Berruto and Dionysis Bochtis and Eugenio Cavallo and Patrizia Busato},
    title = {A combined simulation and linear programming method for scheduling organic fertiliser application},
    publisher = {Elsevier},
    year = {2019},
    journal = {Biosystems Engineering},
    doi = {10.1016/j.biosystemseng.2018.11.002},
    pages = {233--243},
    url = {http://eprints.lincoln.ac.uk/id/eprint/39224/},
    abstract = {Logistics have been used to analyse agricultural operations, such as chemical application, mineral or organic fertilisation and harvesting-handling operations. Recently, due to national or European commitments concerning livestock waste management, this waste is being applied to many crops instead of other mineral fertilisers. The organic fertiliser produced is highly available, although most of the crops it is applied to have strict timeliness issues concerning its application. Here, an organic fertiliser (liquid manure) distribution logistics system is modelled using a combined simulation and linear programming method. The method is applied to certain crops and field areas, taking into account specific agronomic, legislative and other constraints, with the objective of minimising the total annual cost. Given their direct connection with the organic fertiliser distribution, the operations of cultivation and seeding were included. In a basic scenario, the optimal cost was assessed for both crops over a total cultivated area of 120 ha. Three modified scenarios are presented. The first considers one more tractor being available and provides a reduction of 3.8\% in the total annual cost in comparison with the basic scenario. The second and third modified scenarios consider fields with high nitrogen demand next to the farm, with one or two tractors, implying savings of 2.5\% and 6.1\%, respectively, compared to the basic scenario. Finally, it was concluded that the effect of distance from the manure production site to the location of the fields could reduce costs by 6.5\%.}
    }
  • Y. Zhu, X. Li, S. Pearson, D. Wu, R. Sun, S. Johnson, J. Wheeler, and S. Fang, “Evaluation of fengyun-3c soil moisture products using in-situ data from the chinese automatic soil moisture observation stations: a case study in henan province, china,” Water, vol. 11, iss. 2, p. 248, 2019. doi:10.3390/w11020248
    [BibTeX] [Abstract] [Download PDF]

    Soil moisture (SM) products derived from passive satellite missions are playing an increasingly important role in agricultural applications, especially crop monitoring and disaster warning. Evaluating the dependability of satellite-derived soil moisture products on a large scale is crucial. In this study, we assessed the level 2 (L2) SM product from the Chinese Fengyun-3C (FY-3C) radiometer against in-situ measurements collected from the Chinese Automatic Soil Moisture Observation Stations (CASMOS) during a one-year period from 1 January 2016 to 31 December 2016 across Henan in China. For comparison, we also investigated the skill of the Advanced Microwave Scanning Radiometer 2 (AMSR2) and Soil Moisture Active/Passive (SMAP) SM products. Four statistical parameters were used to evaluate these products' reliability: mean difference, root-mean-square error (RMSE), unbiased RMSE (ubRMSE), and the correlation coefficient. Our assessment results revealed that the FY-3C L2 SM product generally showed a poor correlation with the in-situ SM data from CASMOS on both temporal and spatial scales. The AMSR2 L3 SM product of the JAXA (Japan Aerospace Exploration Agency) algorithm had a similar level of skill to FY-3C in the study area. The SMAP L3 SM product outperformed FY-3C temporally but showed lower performance in capturing the SM spatial variation. A time-series analysis indicated that the correlations and estimated error varied systematically through the growing periods of the key crops in our study area. FY-3C L2 SM data tended to overestimate soil moisture during May, August, and September, when the crops reached maximum vegetation density, and tended to underestimate the soil moisture content during the rest of the year. The comparison between the statistical parameters and the ground vegetation water content (VWC) further showed that the FY-3C SM product performed much better under a low VWC condition (<0.3 kg/m2), and the performance generally decreased with increased VWC. To improve the accuracy of the FY-3C SM product, an improved algorithm that can better characterize the variations of the ground VWC should be applied in the future.

    @article{lincoln35398,
    volume = {11},
    number = {2},
    month = {January},
    author = {Yongchao Zhu and Xuan Li and Simon Pearson and Dongli Wu and Ruijing Sun and Sarah Johnson and James Wheeler and Shibo Fang},
    title = {Evaluation of Fengyun-3C Soil Moisture Products Using In-Situ Data from the Chinese Automatic Soil Moisture Observation Stations: A Case Study in Henan Province, China},
    year = {2019},
    journal = {Water},
    doi = {10.3390/w11020248},
    pages = {248},
    url = {http://eprints.lincoln.ac.uk/id/eprint/35398/},
    abstract = {Soil moisture (SM) products derived from passive satellite missions are playing an increasingly important role in agricultural applications, especially crop monitoring and disaster warning. Evaluating the dependability of satellite-derived soil moisture products on a large scale is crucial. In this study, we assessed the level 2 (L2) SM product from the Chinese Fengyun-3C (FY-3C) radiometer against in-situ measurements collected from the Chinese Automatic Soil Moisture Observation Stations (CASMOS) during a one-year period from 1 January 2016 to 31 December 2016 across Henan in China. For comparison, we also investigated the skill of the Advanced Microwave Scanning Radiometer 2 (AMSR2) and Soil Moisture Active/Passive (SMAP) SM products. Four statistical parameters were used to evaluate these products' reliability: mean difference, root-mean-square error (RMSE), unbiased RMSE (ubRMSE), and the correlation coefficient. Our assessment results revealed that the FY-3C L2 SM product generally showed a poor correlation with the in-situ SM data from CASMOS on both temporal and spatial scales. The AMSR2 L3 SM product of the JAXA (Japan Aerospace Exploration Agency) algorithm had a similar level of skill to FY-3C in the study area. The SMAP L3 SM product outperformed FY-3C temporally but showed lower performance in capturing the SM spatial variation. A time-series analysis indicated that the correlations and estimated error varied systematically through the growing periods of the key crops in our study area. FY-3C L2 SM data tended to overestimate soil moisture during May, August, and September, when the crops reached maximum vegetation density, and tended to underestimate the soil moisture content during the rest of the year. The comparison between the statistical parameters and the ground vegetation water content (VWC) further showed that the FY-3C SM product performed much better under a low VWC condition ({\ensuremath{<}}0.3 kg/m2), and the performance generally decreased with increased VWC. To improve the accuracy of the FY-3C SM product, an improved algorithm that can better characterize the variations of the ground VWC should be applied in the future.}
    }
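
    The four evaluation statistics named in the abstract above have standard definitions, sketched below for concreteness (the function name is an assumption; this is not code from the study). Note the identity ubRMSE = sqrt(RMSE^2 - bias^2).

        import numpy as np

        def sm_metrics(sat, insitu):
            """Bias, RMSE, ubRMSE and correlation of satellite vs in-situ soil moisture."""
            sat, insitu = np.asarray(sat, float), np.asarray(insitu, float)
            bias = (sat - insitu).mean()                    # mean difference
            rmse = np.sqrt(((sat - insitu) ** 2).mean())    # root-mean-square error
            ubrmse = np.sqrt(max(rmse**2 - bias**2, 0.0))   # unbiased RMSE
            r = np.corrcoef(sat, insitu)[0, 1]              # correlation coefficient
            return {"bias": bias, "RMSE": rmse, "ubRMSE": ubrmse, "R": r}
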
  • A. Gabriel, N. Bellotto, and P. Baxter, “Towards a dataset of activities for action recognition in open fields,” in 2nd uk-ras robotics and autonomous systems conference, 2019, p. 64–67.
    [BibTeX] [Abstract] [Download PDF]

    In an agricultural context, having autonomous robots that can work side-by-side with human workers provides a range of productivity benefits. In order for this to be achieved safely and effectively, these autonomous robots require the ability to understand a range of human behaviors in order to facilitate task communication and coordination. The recognition of human actions is a key part of this, and is the focus of this paper. Available datasets for Action Recognition generally feature controlled lighting and framing while recording subjects from the front. They mostly reflect good recording conditions but fail to model the data a robot will have to work with in the field, such as varying distance and lighting conditions. In this work, we propose a set of recording conditions, gestures and behaviors that better reflect the environment an agricultural robot might find itself in, and record a dataset with a range of sensors that demonstrates these conditions.

    @inproceedings{lincoln36201,
    booktitle = {2nd UK-RAS Robotics and Autonomous Systems Conference},
    month = {January},
    title = {Towards a Dataset of Activities for Action Recognition in Open Fields},
    author = {Alexander Gabriel and Nicola Bellotto and Paul Baxter},
    publisher = {UK-RAS},
    year = {2019},
    pages = {64--67},
    url = {http://eprints.lincoln.ac.uk/id/eprint/36201/},
    abstract = {In an agricultural context, having autonomous robots that can work side-by-side with human workers provides a range of productivity benefits. In order for this to be achieved safely and effectively, these autonomous robots require the ability to understand a range of human behaviors in order to facilitate task communication and coordination. The recognition of human actions is a key part of this, and is the focus of this paper. Available datasets for Action Recognition generally feature controlled lighting and framing while recording subjects from the front. They mostly reflect good recording conditions but fail to model the data a robot will have to work with in the field, such as varying distance and lighting conditions. In this work, we propose a set of recording conditions, gestures and behaviors that better reflect the environment an agricultural robot might find itself in, and record a dataset with a range of sensors that demonstrates these conditions.}
    }
  • T. Pardi, R. Stolkin, and A. G. Esfahani, “Choosing grasps to enable collision-free post-grasp manipulations,” Ieee-ras 18th international conference on humanoid robots (humanoids), 2019. doi:10.1109/HUMANOIDS.2018.8625027
    [BibTeX] [Abstract] [Download PDF]

    Consider the task of grasping the handle of a door, and then pushing it until the door opens. These two fundamental robotics problems (selecting secure grasps of a hand on an object, e.g. the door handle, and planning collision-free trajectories of a robot arm that will move that object along a desired path) have predominantly been studied separately from one another. Thus, much of the grasping literature overlooks the fundamental purpose of grasping objects, which is typically to make them move in desirable ways. Given a desired post-grasp trajectory of the object, different choices of grasp will often determine whether or not collision-free post-grasp motions of the arm can be found, which will deliver that trajectory. We address this problem by examining a number of possible stable grasping configurations on an object. For each stable grasp, we explore the motion space of the manipulator which would be needed for post-grasp motions, to deliver the object along the desired trajectory. A criterion, based on potential fields in the post-grasp motion space, is used to assign a collision-cost to each grasp. A grasping configuration is then selected which enables the desired post-grasp object motion while minimising the proximity of all robot parts to obstacles during motion. We demonstrate our method with peg-in-hole and pick-and-place experiments in cluttered scenes, using a Franka Panda robot. Our approach is effective in selecting appropriate grasps, which enable both a stable grasp and the desired post-grasp movements without collisions. We also show that, when grasps are selected based on grasp stability alone, without consideration for desired post-grasp manipulations, the corresponding post-grasp movements of the manipulator may result in collisions.

    @article{lincoln35570,
    month = {January},
    title = {Choosing grasps to enable collision-free post-grasp manipulations},
    author = {Tommaso Pardi and Rustam Stolkin and Amir Ghalamzan Esfahani},
    publisher = {IEEE},
    year = {2019},
    doi = {10.1109/HUMANOIDS.2018.8625027},
    journal = {IEEE-RAS 18th International Conference on Humanoid Robots (Humanoids)},
    url = {http://eprints.lincoln.ac.uk/id/eprint/35570/},
    abstract = {Consider the task of grasping the handle of a door, and then pushing it until the door opens. These two fundamental robotics problems (selecting secure grasps of a hand on an object, e.g. the door handle, and planning collision-free trajectories of a robot arm that will move that object along a desired path) have predominantly been studied separately from one another. Thus, much of the grasping literature overlooks the fundamental purpose of grasping objects, which is typically to make them move in desirable ways. Given a desired post-grasp trajectory of the object, different choices of grasp will often determine whether or not collision-free post-grasp motions of the arm can be found, which will deliver that trajectory. We address this problem by examining a number of possible stable grasping configurations on an object. For each stable grasp, we explore the motion space of the manipulator which would be needed for post-grasp motions, to deliver the object along the desired trajectory. A criterion, based on potential fields in the post-grasp motion space, is used to assign a collision-cost to each grasp. A grasping configuration is then selected which enables the desired post-grasp object motion while minimising the proximity of all robot parts to obstacles during motion. We demonstrate our method with peg-in-hole and pick-and-place experiments in cluttered scenes, using a Franka Panda robot. Our approach is effective in selecting appropriate grasps, which enable both a stable grasp and the desired post-grasp movements without collisions. We also show that, when grasps are selected based on grasp stability alone, without consideration for desired post-grasp manipulations, the corresponding post-grasp movements of the manipulator may result in collisions.}
    }
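    To make the grasp-ranking criterion above concrete, here is a minimal numpy sketch of a potential-field collision cost, under our own assumptions (candidate_grasps, per-grasp rollout_points sampled on the robot surface along the post-grasp motion, and an (N, 3) obstacles array are all hypothetical inputs, not the authors' implementation):

        import numpy as np

        def collision_cost(arm_points, obstacles, d0=0.15):
            """Khatib-style potential field: penalise robot-surface points that
            come closer than the influence distance d0 to any obstacle."""
            cost = 0.0
            for p in arm_points:
                d = np.min(np.linalg.norm(obstacles - p, axis=1))  # nearest obstacle
                if d < d0:
                    cost += 0.5 * (1.0 / max(d, 1e-6) - 1.0 / d0) ** 2
            return cost

        def select_grasp(candidate_grasps, rollout_points, obstacles):
            """Among stable grasps, pick the one whose post-grasp rollout keeps
            the whole arm furthest from obstacles (lowest accumulated cost)."""
            costs = [sum(collision_cost(pts, obstacles) for pts in rollout_points[g])
                     for g in range(len(candidate_grasps))]
            return candidate_grasps[int(np.argmin(costs))]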
  • H. Montes, T. Duckett, and G. Cielniak, “Model based 3d point cloud segmentation for automated selective broccoli harvesting,” in Smart industry workshop 2019, 2019.
    [BibTeX] [Abstract] [Download PDF]

    Segmentation of 3D objects in cluttered scenes is a highly relevant problem. Given a 3D point cloud produced by a depth sensor, the goal is to separate objects of interest in the foreground from other elements in the background. We research 3D imaging methods to accurately segment and identify broccoli plants in the field. The ability to separate parts into different sets of sensor readings is an important task towards this goal. Our research is focused on the broccoli head segmentation problem as a first step towards size estimation of each broccoli crop in order to establish whether or not it is suitable for cutting.

    @inproceedings{lincoln39207,
    booktitle = {Smart Industry Workshop 2019},
    month = {January},
    title = {MODEL BASED 3D POINT CLOUD SEGMENTATION FOR AUTOMATED SELECTIVE BROCCOLI HARVESTING},
    author = {Hector Montes and Tom Duckett and Grzegorz Cielniak},
    year = {2019},
    url = {http://eprints.lincoln.ac.uk/id/eprint/39207/},
    abstract = {Segmentation of 3D objects in cluttered scenes is a highly relevant problem. Given a 3D point cloud produced by a depth sensor, the goal is to separate objects of interest in the foreground from other elements in the background. We research 3D imaging methods to accurately segment and identify broccoli plants in the field. The ability to separate parts into different sets of sensor readings is an important task towards this goal. Our research is focused on the broccoli head segmentation problem as a first step towards size estimation of each broccoli crop in order to establish whether or not it is suitable for cutting.}
    }
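    As a rough illustration of the foreground/background separation step described above (our sketch, not the authors' pipeline), the following numpy code fits a ground plane with a naive RANSAC and keeps the points rising above it as crop candidates; the tolerance and height threshold are invented:

        import numpy as np

        def fit_plane_ransac(points, iters=200, tol=0.01, seed=0):
            """Naive RANSAC plane fit; returns (n, d) with n . x + d = 0, the
            normal oriented towards the sensor origin (assumed above the soil)."""
            rng = np.random.default_rng(seed)
            best_n, best_d, best_count = None, 0.0, -1
            for _ in range(iters):
                p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
                n = np.cross(p1 - p0, p2 - p0)
                if np.linalg.norm(n) < 1e-9:
                    continue                      # degenerate (collinear) sample
                n = n / np.linalg.norm(n)
                d = -float(n @ p0)
                count = int(np.sum(np.abs(points @ n + d) < tol))
                if count > best_count:
                    best_n, best_d, best_count = n, d, count
            return (-best_n, -best_d) if best_d < 0 else (best_n, best_d)

        def segment_foreground(points, min_height=0.03):
            """Keep points at least min_height metres above the fitted ground
            plane (broccoli-head candidates); the rest is soil/background."""
            n, d = fit_plane_ransac(points)
            return points[points @ n + d > min_height]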
  • K. Elgeneidy, G. Neumann, S. Pearson, M. Jackson, and N. Lohse, “Contact detection and size estimation using a modular soft gripper with embedded flex sensors,” in International conference on intelligent robots and systems (iros 2018), 2019.
    [BibTeX] [Abstract] [Download PDF]

    Grippers made from soft elastomers are able to passively and gently adapt to their targets allowing deformable objects to be grasped safely without causing bruise or damage. However, it is difficult to regulate the contact forces due to the lack of contact feedback for such grippers. In this paper, a modular soft gripper is presented utilizing interchangeable soft pneumatic actuators with embedded flex sensors as fingers of the gripper. The fingers can be assembled in different configurations using 3D printed connectors. The paper investigates the potential of utilizing the simple sensory feedback from the flex and pressure sensors to make additional meaningful inferences regarding the contact state and grasped object size. We study the effect of the grasped object size and contact type on the combined feedback from the embedded flex sensors of opposing fingers. Our results show that a simple linear relationship exists between the grasped object size and the final flex sensor reading at fixed input conditions, despite the variation in object weight and contact type. Additionally, by simply monitoring the time series response from the flex sensor, contact can be detected by comparing the response to the known free-bending response at the same input conditions. Furthermore, by utilizing the measured internal pressure supplied to the soft fingers, it is possible to distinguish between power and pinch grasps, as the contact type affects the rate of change in the flex sensor readings against the internal pressure.

    @inproceedings{lincoln34713,
    booktitle = {International Conference on Intelligent Robots and Systems (IROS 2018)},
    month = {January},
    title = {Contact Detection and Size Estimation Using a Modular Soft Gripper with Embedded Flex Sensors},
    author = {Khaled Elgeneidy and Gerhard Neumann and Simon Pearson and Michael Jackson and Niels Lohse},
    year = {2019},
    url = {http://eprints.lincoln.ac.uk/id/eprint/34713/},
    abstract = {Grippers made from soft elastomers are able to passively and gently adapt to their targets allowing deformable objects to be grasped safely without causing bruise or damage. However, it is difficult to regulate the contact forces due to the lack of contact feedback for such grippers. In this paper, a modular soft gripper is presented utilizing interchangeable soft pneumatic actuators with embedded flex sensors as fingers of the gripper. The fingers can be assembled in different configurations using 3D printed connectors. The paper investigates the potential of utilizing the simple sensory feedback from the flex and pressure sensors to make additional meaningful inferences regarding the contact state and grasped object size. We study the effect of the grasped object size and contact type on the combined feedback from the embedded flex sensors of opposing fingers. Our results show that a simple linear relationship exists between the grasped object size and the final flex sensor reading at fixed input conditions, despite the variation in object weight and contact type. Additionally, by simply monitoring the time series response from the flex sensor, contact can be detected by comparing the response to the known free-bending response at the same input conditions. Furthermore, by utilizing the measured internal pressure supplied to the soft fingers, it is possible to distinguish between power and pinch grasps, as the contact type affects the rate of change in the flex sensor readings against the internal pressure.}
    }
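    The reported linear size/flex relationship lends itself to a one-line calibration. Below is a hedged numpy sketch: fit size = a * flex + b from a few calibration grasps at fixed input conditions, then reuse the line online; all readings are invented for illustration:

        import numpy as np

        # Hypothetical calibration data at fixed input conditions: final
        # flex-sensor readings (ADC counts) vs known object diameters (mm).
        flex_final = np.array([310.0, 402.0, 498.0, 587.0])
        diameter_mm = np.array([70.0, 60.0, 50.0, 40.0])

        a, b = np.polyfit(flex_final, diameter_mm, deg=1)   # least-squares line

        def estimate_size(flex_reading):
            """Estimate grasped-object diameter (mm) from a final flex reading."""
            return a * flex_reading + b

        def contact_detected(flex_series, free_bending_series, threshold=15.0):
            """Flag contact when the time series deviates from the known
            free-bending response at the same input conditions (made-up margin)."""
            return bool(np.any(np.abs(np.asarray(flex_series)
                                      - np.asarray(free_bending_series)) > threshold))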
  • C. Zhao, L. Sun, P. Purkait, T. Duckett, and R. Stolkin, “Learning monocular visual odometry with dense 3d mapping from dense 3d flow,” in 2018 ieee/rsj international conference on intelligent robots and systems (iros), 2019. doi:10.1109/IROS.2018.8594151
    [BibTeX] [Abstract] [Download PDF]

    This paper introduces a fully deep learning approach to monocular SLAM, which can perform simultaneous localization using a neural network for learning visual odometry (L-VO) and dense 3D mapping. Dense 2D flow and a depth image are generated from monocular images by sub-networks, which are then used by a 3D flow associated layer in the L-VO network to generate dense 3D flow. Given this 3D flow, the dual-stream L-VO network can then predict the 6DOF relative pose and furthermore reconstruct the vehicle trajectory. In order to learn the correlation between motion directions, the Bivariate Gaussian modeling is employed in the loss function. The L-VO network achieves an overall performance of 2.68% for average translational error and 0.0143°/m for average rotational error on the KITTI odometry benchmark. Moreover, the learned depth is leveraged to generate a dense 3D map. As a result, an entire visual SLAM system, that is, learning monocular odometry combined with dense 3D mapping, is achieved.

    @inproceedings{lincoln36001,
    booktitle = {2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
    month = {January},
    title = {Learning Monocular Visual Odometry with Dense 3D Mapping from Dense 3D Flow},
    author = {Cheng Zhao and Li Sun and Pulak Purkait and Tom Duckett and Rustam Stolkin},
    publisher = {IEEE},
    year = {2019},
    doi = {10.1109/IROS.2018.8594151},
    url = {http://eprints.lincoln.ac.uk/id/eprint/36001/},
    abstract = {This paper introduces a fully deep learning approach to monocular SLAM, which can perform simultaneous localization using a neural network for learning visual odometry (L-VO) and dense 3D mapping. Dense 2D flow and a depth image are generated from monocular images by sub-networks, which are then used by a 3D flow associated layer in the L-VO network to generate dense 3D flow. Given this 3D flow, the dual-stream L-VO network can then predict the 6DOF relative pose and furthermore reconstruct the vehicle trajectory. In order to learn the correlation between motion directions, the Bivariate Gaussian modeling is employed in the loss function. The L-VO network achieves an overall performance of 2.68 \% for average translational error and 0.0143{$^\circ$}/m for average rotational error on the KITTI odometry benchmark. Moreover, the learned depth is leveraged to generate a dense 3D map. As a result, an entire visual SLAM system, that is, learning monocular odometry combined with dense 3D mapping, is achieved.}
    }
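    The 'Bivariate Gaussian modeling' in the loss can be written out as a worked equation. A plausible reading (our paraphrase; the paper's exact parameterisation may differ), with t_1 and t_2 denoting the two coupled translation components predicted for each frame pair:

        \mathcal{L}_{\mathrm{trans}}
          = -\log \mathcal{N}\!\left(
              \begin{bmatrix} t_1 \\ t_2 \end{bmatrix};
              \boldsymbol{\mu}, \Sigma \right),
        \qquad
        \Sigma = \begin{bmatrix}
            \sigma_1^2 & \rho\,\sigma_1\sigma_2 \\
            \rho\,\sigma_1\sigma_2 & \sigma_2^2
        \end{bmatrix},

    so the correlation \rho between motion directions is learned jointly with the means, rather than treating each translation axis as independent.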
  • R. C. Tieppo, T. L. Romanelli, M. Milan, C. A. G. Sørensen, and D. Bochtis, “Modeling cost and energy demand in agricultural machinery fleets for soybean and maize cultivated using a no-tillage system,” Computers and electronics in agriculture, vol. 156, p. 282–292, 2019. doi:10.1016/j.compag.2018.11.032
    [BibTeX] [Abstract] [Download PDF]

    Climate, area expansion and the possibility of growing soybean and maize within the same season using the no-tillage system and mechanized agriculture are factors that promoted agricultural growth in Mato Grosso State, Brazil. Mechanized operations represent around 23% of production costs for maize and soybean, demanding considerably powerful machinery. Energy balance is a tool to verify the sustainability level of a mechanized system. Regarding the sustainability components of profit and environment, this study aims to develop a deterministic model of agricultural machinery costs and energy demand for no-tillage production of soybean and maize crops. In addition, scenario simulation helps to analyze the influence of fleet sizing on cost and energy demand. The deterministic model consists of equations and data retrieved from the literature. A simulation was developed for a no-tillage soybean production system in Brazil, considering three basic mechanized operations (sowing, spraying and harvesting). For those operations, three sizes of commercially available and regularly used machinery (small, medium, large) and seven levels of cropping area (500, 1000, 2000, 4000, 6000, 8000 and 10,000 ha) were used. The developed model was consistent for predictions of power demand, fuel consumption and costs. We noticed that an increase in area size implies more working time for the machinery, which decreases the cost difference among the combinations. The greatest difference for the smallest area (500 ha) was 22.1% and 94.8% for sowing and harvesting operations, respectively. For 4000 and 10,000 ha, the difference decreased to 1.30% and 0.20%. Simulated scenarios showed the importance of determining operational cost and energy demand when energy efficiency is desired.

    @article{lincoln34502,
    volume = {156},
    month = {January},
    author = {Rafael Ceasar Tieppo and Thiago Lib{\'o}rio Romanelli and Marcos Milan and Claus Aage Gr{\o}n S{\o}rensen and Dionysis Bochtis},
    title = {Modeling cost and energy demand in agricultural machinery fleets for soybean and maize cultivated using a no-tillage system},
    publisher = {Elsevier},
    year = {2019},
    journal = {Computers and Electronics in Agriculture},
    doi = {10.1016/j.compag.2018.11.032},
    pages = {282--292},
    url = {http://eprints.lincoln.ac.uk/id/eprint/34502/},
    abstract = {Climate, area expansion and the possibility of growing soybean and maize within the same season using the no-tillage system and mechanized agriculture are factors that promoted agricultural growth in Mato Grosso State, Brazil. Mechanized operations represent around 23\% of production costs for maize and soybean, demanding considerably powerful machinery. Energy balance is a tool to verify the sustainability level of a mechanized system. Regarding the sustainability components of profit and environment, this study aims to develop a deterministic model of agricultural machinery costs and energy demand for no-tillage production of soybean and maize crops. In addition, scenario simulation helps to analyze the influence of fleet sizing on cost and energy demand. The deterministic model consists of equations and data retrieved from the literature. A simulation was developed for a no-tillage soybean production system in Brazil, considering three basic mechanized operations (sowing, spraying and harvesting). For those operations, three sizes of commercially available and regularly used machinery (small, medium, large) and seven levels of cropping area (500, 1000, 2000, 4000, 6000, 8000 and 10,000 ha) were used. The developed model was consistent for predictions of power demand, fuel consumption and costs. We noticed that an increase in area size implies more working time for the machinery, which decreases the cost difference among the combinations. The greatest difference for the smallest area (500 ha) was 22.1\% and 94.8\% for sowing and harvesting operations, respectively. For 4000 and 10,000 ha, the difference decreased to 1.30\% and 0.20\%. Simulated scenarios showed the importance of determining operational cost and energy demand when energy efficiency is desired.}
    }
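    A toy version of the kind of deterministic relations such a model chains together (all coefficients and names invented; the paper derives its equations from the agricultural-engineering literature): field capacity from speed, width and efficiency; fuel from power demand; and cost per hectare from annual use. It also makes the reported effect visible: the fixed-cost term shrinks as the worked area grows, so cost differences between machinery sizes fade on large farms.

        def field_capacity_ha_h(speed_kmh, width_m, efficiency=0.8):
            """Effective field capacity (ha/h) = speed * width * efficiency / 10."""
            return speed_kmh * width_m * efficiency / 10.0

        def fuel_l_per_ha(power_kw, specific_fuel_l_kwh, capacity_ha_h):
            """Fuel use per hectare from engine power and field capacity."""
            return power_kw * specific_fuel_l_kwh / capacity_ha_h

        def operation_cost_per_ha(purchase_price, depreciation_rate,
                                  annual_area_ha, fuel_l_ha, fuel_price_l):
            """Straight-line depreciation spread over the worked area, plus the
            variable fuel cost; labour and repairs omitted for brevity."""
            fixed_per_ha = purchase_price * depreciation_rate / annual_area_ha
            return fixed_per_ha + fuel_l_ha * fuel_price_l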
  • A. Babu, P. Lightbody, G. Das, P. Liu, S. Gomez-Gonzalez, and G. Neumann, “Improving local trajectory optimisation using probabilistic movement primitives,” in 2019 ieee/rsj international conference on intelligent robots and systems (iros), 2019, p. 2666–2671. doi:10.1109/IROS40897.2019.8967980
    [BibTeX] [Abstract] [Download PDF]

    Local trajectory optimisation techniques are a powerful tool for motion planning. However, they often get stuck in local optima depending on the quality of the initial solution and consequently, often do not find a valid (i.e. collision free) trajectory. Moreover, they often require fine tuning of a cost function to obtain the desired motions. In this paper, we address both problems by combining local trajectory optimisation with learning from demonstrations. The human expert demonstrates how to reach different target end-effector locations in different ways. From these demonstrations, we estimate a trajectory distribution, represented by a Probabilistic Movement Primitive (ProMP). For a new target location, we sample different trajectories from the ProMP and use these trajectories as initial solutions for the local optimisation. As the ProMP generates versatile initial solutions for the optimisation, the chance of finding poor local minima is significantly reduced. Moreover, the learned trajectory distribution is used to specify the smoothness costs for the optimisation, resulting in solutions of similar shape as the demonstrations. We demonstrate the effectiveness of our approach in several complex obstacle avoidance scenarios.

    @inproceedings{lincoln40837,
    booktitle = {2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
    title = {Improving Local Trajectory Optimisation using Probabilistic Movement Primitives},
    author = {Ashith Babu and Peter Lightbody and Gautham Das and Pengcheng Liu and Sebastian Gomez-Gonzalez and Gerhard Neumann},
    publisher = {IEEE},
    year = {2019},
    pages = {2666--2671},
    doi = {10.1109/IROS40897.2019.8967980},
    url = {http://eprints.lincoln.ac.uk/id/eprint/40837/},
    abstract = {Local trajectory optimisation techniques are a powerful tool for motion planning. However, they often get stuck in local optima depending on the quality of the initial solution and consequently, often do not find a valid (i.e. collision free) trajectory. Moreover, they often require fine tuning of a cost function to obtain the desired motions. In this paper, we address both problems by combining local trajectory optimisation with learning from demonstrations. The human expert demonstrates how to reach different target end-effector locations in different ways. From these demonstrations, we estimate a trajectory distribution, represented by a Probabilistic Movement Primitive (ProMP). For a new target location, we sample different trajectories from the ProMP and use these trajectories as initial solutions for the local optimisation. As the ProMP generates versatile initial solutions for the optimisation, the chance of finding poor local minima is significantly reduced. Moreover, the learned trajectory distribution is used to specify the smoothness costs for the optimisation, resulting in solutions of similar shape as the demonstrations. We demonstrate the effectiveness of our approach in several complex obstacle avoidance scenarios.}
    }
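    A minimal numpy sketch of the initialisation idea (one degree of freedom, illustrative names; the local optimiser is stubbed out and this is not the authors' code): represent the demonstrated trajectory distribution as a Gaussian over basis-function weights, sample several candidate trajectories, and keep the best locally optimised result.

        import numpy as np

        def rbf_features(T=100, K=15, width=0.002):
            """Normalised radial basis functions over phase z in [0, 1]."""
            z = np.linspace(0, 1, T)[:, None]
            c = np.linspace(0, 1, K)[None, :]
            phi = np.exp(-(z - c) ** 2 / (2 * width))
            return phi / phi.sum(axis=1, keepdims=True)         # shape (T, K)

        def promp_samples(mu_w, Sigma_w, n=10, seed=0):
            """Sample n candidate trajectories tau = Phi w from the ProMP."""
            rng = np.random.default_rng(seed)
            Phi = rbf_features(K=len(mu_w))
            W = rng.multivariate_normal(mu_w, Sigma_w, size=n)  # (n, K)
            return W @ Phi.T                                    # (n, T)

        def best_initialisation(mu_w, Sigma_w, local_optimise, cost):
            """Run the local optimiser from every ProMP sample; keep the cheapest."""
            results = [local_optimise(tau) for tau in promp_samples(mu_w, Sigma_w)]
            return min(results, key=cost)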
  • S. Cosar, M. Fernandez-Carmona, R. Agrigoroaie, J. Pages, F. Ferland, F. Zhao, S. Yue, N. Bellotto, and A. Tapus, “ENRICHME: perception and interaction of an assistive robot for the elderly at home,” International journal of social robotics, 2019.
    [BibTeX] [Abstract] [Download PDF]

    Recent technological advances enabled modern robots to become part of our daily life. In particular, assistive robotics emerged as an exciting research topic that can provide solutions to improve the quality of life of elderly and vulnerable people. This paper introduces the robotic platform developed in the ENRICHME project, with particular focus on its innovative perception and interaction capabilities. The project's main goal is to enrich the day-to-day experience of elderly people at home with technologies that enable health monitoring, complementary care, and social support. The paper presents several modules created to provide cognitive stimulation services for elderly users with mild cognitive impairments. The ENRICHME robot was tested in three pilot sites around Europe (Poland, Greece, and UK) and proven to be an effective assistant for the elderly at home.

    @article{lincoln39037,
    title = {ENRICHME: Perception and Interaction of an Assistive Robot for the Elderly at Home},
    author = {Serhan Cosar and Manuel Fernandez-Carmona and Roxana Agrigoroaie and Jordi Pages and Francois Ferland and Feng Zhao and Shigang Yue and Nicola Bellotto and Adriana Tapus},
    publisher = {Springer},
    year = {2019},
    journal = {International Journal of Social Robotics},
    url = {http://eprints.lincoln.ac.uk/id/eprint/39037/},
    abstract = {Recent technological advances enabled modern robots to become part of our daily life. In particular, assistive robotics emerged as an exciting research topic that can provide solutions to improve the quality of life of elderly and vulnerable people. This paper introduces the robotic platform developed in the ENRICHME project, with particular focus on its innovative perception and interaction capabilities. The project's main goal is to enrich the day-to-day experience of elderly people at home with technologies that enable health monitoring, complementary care, and social support. The paper presents several modules created to provide cognitive stimulation services for elderly users with mild cognitive impairments. The ENRICHME robot was tested in three pilot sites around Europe (Poland, Greece, and UK) and proven to be an effective assistant for the elderly at home.}
    }
  • H. Cuayahuitl, “A data-efficient deep learning approach for deployable multimodal social robots,” Neurocomputing, 2019. doi:10.1016/j.neucom.2018.09.104
    [BibTeX] [Abstract] [Download PDF]

    The deep supervised and reinforcement learning paradigms (among others) have the potential to endow interactive multimodal social robots with the ability of acquiring skills autonomously. But it is still not very clear yet how they can be best deployed in real world applications. As a step in this direction, we propose a deep learning-based approach for efficiently training a humanoid robot to play multimodal games, and use the game of 'Noughts & Crosses' with two variants as a case study. Its minimum requirements for learning to perceive and interact are based on a few hundred example images, a few example multimodal dialogues and physical demonstrations of robot manipulation, and automatic simulations. In addition, we propose novel algorithms for robust visual game tracking and for competitive policy learning with high winning rates, which substantially outperform DQN-based baselines. While an automatic evaluation shows evidence that the proposed approach can be easily extended to new games with competitive robot behaviours, a human evaluation with 130 humans playing with the Pepper robot confirms that highly accurate visual perception is required for successful game play.

    @article{lincoln33533,
    title = {A Data-Efficient Deep Learning Approach for Deployable Multimodal Social Robots},
    author = {Heriberto Cuayahuitl},
    publisher = {Elsevier},
    year = {2019},
    doi = {10.1016/j.neucom.2018.09.104},
    note = {The final published version of this article can be accessed online at https://www.journals.elsevier.com/neurocomputing/},
    journal = {Neurocomputing},
    url = {http://eprints.lincoln.ac.uk/id/eprint/33533/},
    abstract = {The deep supervised and reinforcement learning paradigms (among others) have the potential to endow interactive multimodal social robots with the ability of acquiring skills autonomously. But it is still not very clear yet how they can be best deployed in real world applications. As a step in this direction, we propose a deep learning-based approach for efficiently training a humanoid robot to play multimodal games---and use the game of `Noughts \& Crosses' with two variants as a case study. Its minimum requirements for learning to perceive and interact are based on a few hundred example images, a few example multimodal dialogues and physical demonstrations of robot manipulation, and automatic simulations. In addition, we propose novel algorithms for robust visual game tracking and for competitive policy learning with high winning rates, which substantially outperform DQN-based baselines. While an automatic evaluation shows evidence that the proposed approach can be easily extended to new games with competitive robot behaviours, a human evaluation with 130 humans playing with the {\it Pepper} robot confirms that highly accurate visual perception is required for successful game play.}
    }
  • T. Flyr and S. Parsons, “Towards adversarial training for mobile robots,” Lecture notes in computer science (including subseries lecture notes in artificial intelligence and lecture notes in bioinformatics), vol. 11649, p. 197–208, 2019. doi:10.1007/978-3-030-23807-0_17
    [BibTeX] [Download PDF]
    @article{lincoln38396,
    volume = {11649},
    author = {T. Flyr and Simon Parsons},
    note = {cited By 0},
    title = {Towards Adversarial Training for Mobile Robots},
    journal = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
    doi = {10.1007/978-3-030-23807-0\_17},
    pages = {197--208},
    year = {2019},
    url = {http://eprints.lincoln.ac.uk/id/eprint/38396/}
    }
  • J. Ganzer-Ripoll, N. Criado, M. Lopez-Sanchez, S. Parsons, and J. A. Rodriguez-Aguilar, “Combining social choice theory and argumentation: enabling collective decision making,” Group decision and negotiation, vol. 28, iss. 1, p. 127–173, 2019. doi:10.1007/s10726-018-9594-6
    [BibTeX] [Download PDF]
    @article{lincoln38395,
    volume = {28},
    number = {1},
    author = {J. Ganzer-Ripoll and N. Criado and M. Lopez-Sanchez and Simon Parsons and J.A. Rodriguez-Aguilar},
    note = {cited By 0},
    title = {Combining Social Choice Theory and Argumentation: Enabling Collective Decision Making},
    year = {2019},
    journal = {Group Decision and Negotiation},
    doi = {10.1007/s10726-018-9594-6},
    pages = {127--173},
    url = {http://eprints.lincoln.ac.uk/id/eprint/38395/}
    }
  • L. Jackson, C. M. Saaj, A. Seddaoui, C. Whiting, S. Eckersley, and M. Ferris, “The downsizing of a free-flying space robot,” in 20th annual conference, taros 2019, 2019, p. 480–483. doi:10.1007/978-3-030-25332-5
    [BibTeX] [Download PDF]
    @inproceedings{lincoln39418,
    booktitle = {20th Annual Conference, TAROS 2019},
    title = {The Downsizing of a Free-Flying Space Robot},
    author = {Lucy Jackson and Chakravarthini M. Saaj and Asma Seddaoui and Calem Whiting and Steve Eckersley and Mark Ferris},
    publisher = {Springer},
    year = {2019},
    pages = {480--483},
    doi = {10.1007/978-3-030-25332-5},
    url = {http://eprints.lincoln.ac.uk/id/eprint/39418/}
    }
  • J. Koleosho and C. M. Saaj, “System design and control of a di-wheel rover,” in Towards autonomous robotic systems, 2019, p. 409–421. doi:10.1007/978-3-030-25332-5_35
    [BibTeX] [Abstract] [Download PDF]

    Traditionally, wheeled rovers are used for planetary surface exploration and six-wheeled chassis designs based on the Rocker-Bogie suspension system have been tested successfully on Mars. However, it is difficult to explore craters and crevasses using large six or four-wheeled rovers. Innovative designs based on smaller Di-Wheel Rovers might be better suited for such challenging terrains. A Di-Wheel Rover is a self-balancing two-wheeled mobile robot that can move in all directions within a two-dimensional plane, as well as stand upright by balancing on two wheels. This paper presents the outcomes of a feasibility study on a Di-Wheel Rover for planetary exploration missions. This includes developing its chassis design based on the hardware and software requirements, prototyping, and subsequent testing. The main contribution of this paper is the design of a self-balancing control system for the Di-Wheel Rover. This challenging design exercise was successfully completed through extensive experimentation thereby validating the performance of the Di-Wheel Rover. The details on the structural design, tuning controller gains based on an inverted pendulum model, and testing on different ground surfaces are described in this paper. The results presented in this paper give a new insight into designing low-cost Di-Wheel Rovers and clearly, there is a potential to use Di-Wheel Rovers for future planetary exploration.

    @inproceedings{lincoln39621,
    volume = {11650},
    author = {John Koleosho and Chakravarthini M. Saaj},
    booktitle = {Towards Autonomous Robotic Systems},
    title = {System Design and Control of a Di-Wheel Rover},
    publisher = {Springer},
    doi = {10.1007/978-3-030-25332-5\_35},
    pages = {409--421},
    year = {2019},
    url = {http://eprints.lincoln.ac.uk/id/eprint/39621/},
    abstract = {Traditionally, wheeled rovers are used for planetary surface exploration and six-wheeled chassis designs based on the Rocker-Bogie suspension system have been tested successfully on Mars. However, it is difficult to explore craters and crevasses using large six or four-wheeled rovers. Innovative designs based on smaller Di-Wheel Rovers might be better suited for such challenging terrains. A Di-Wheel Rover is a self-balancing two-wheeled mobile robot that can move in all directions within a two-dimensional plane, as well as stand upright by balancing on two wheels.
    This paper presents the outcomes of a feasibility study on a Di-Wheel Rover for planetary exploration missions. This includes developing its chassis design based on the hardware and software requirements, prototyping, and subsequent testing. The main contribution of this paper is the design of a self-balancing control system for the Di-Wheel Rover. This challenging design exercise was successfully completed through extensive experimentation thereby validating the performance of the Di-Wheel Rover. The details on the structural design, tuning controller gains based on an inverted pendulum model, and testing on different ground surfaces are described in this paper. The results presented in this paper give a new insight into designing low-cost Di-Wheel Rovers and clearly, there is a potential to use Di-Wheel Rovers for future planetary exploration.}
    }
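    The self-balancing loop described above is, at heart, feedback on the body tilt of an inverted pendulum. A generic PID sketch under stated assumptions (the gains and the read/command functions are placeholders, not the paper's tuned values or interfaces):

        import time

        KP, KI, KD = 28.0, 0.8, 1.4   # illustrative gains, not the paper's
        DT = 0.01                     # 100 Hz control loop (assumed rate)

        def balance_loop(read_tilt_rad, command_wheel_torque):
            """PID on tilt angle: drive the wheels to keep the body upright."""
            integral, prev_err = 0.0, 0.0
            while True:
                err = 0.0 - read_tilt_rad()        # setpoint: upright (0 rad)
                integral += err * DT
                derivative = (err - prev_err) / DT
                command_wheel_torque(KP * err + KI * integral + KD * derivative)
                prev_err = err
                time.sleep(DT)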
  • J. Lock, G. Cielniak, and N. Bellotto, “Active object search with a mobile device for people with visual impairments,” in 14th international conference on computer vision theory and applications (visapp), 2019, p. 476–485. doi:10.5220/0007582304760485
    [BibTeX] [Abstract] [Download PDF]

    Modern smartphones can provide a multitude of services to assist people with visual impairments, and their cameras in particular can be useful for assisting with tasks, such as reading signs or searching for objects in unknown environments. Previous research has looked at ways to solve these problems by processing the camera's video feed, but very little work has been done in actively guiding the user towards specific points of interest, maximising the effectiveness of the underlying visual algorithms. In this paper, we propose a control algorithm based on a Markov Decision Process that uses a smartphone's camera to generate real-time instructions to guide a user towards a target object. The solution is part of a more general active vision application for people with visual impairments. An initial implementation of the system on a smartphone was experimentally evaluated with participants with healthy eyesight to determine the performance of the control algorithm. The results show the effectiveness of our solution and its potential application to help people with visual impairments find objects in unknown environments.

    @inproceedings{lincoln34596,
    booktitle = {14th International Conference on Computer Vision Theory and Applications (VISAPP)},
    title = {Active Object Search with a Mobile Device for People with Visual Impairments},
    author = {Jacobus Lock and Grzegorz Cielniak and Nicola Bellotto},
    publisher = {VISIGRAPP},
    year = {2019},
    pages = {476--485},
    doi = {10.5220/0007582304760485},
    url = {http://eprints.lincoln.ac.uk/id/eprint/34596/},
    abstract = {Modern smartphones can provide a multitude of services to assist people with visual impairments, and their cameras in particular can be useful for assisting with tasks, such as reading signs or searching for objects in unknown environments. Previous research has looked at ways to solve these problems by processing the camera's video feed, but very little work has been done in actively guiding the user towards specific points of interest, maximising the effectiveness of the underlying visual algorithms. In this paper, we propose a control algorithm based on a Markov Decision Process that uses a smartphone's camera to generate real-time instructions to guide a user towards a target object. The solution is part of a more general active vision application for people with visual impairments. An initial implementation of the system on a smartphone was experimentally evaluated with participants with healthy eyesight to determine the performance of the control algorithm. The results show the effectiveness of our solution and its potential application to help people with visual impairments find objects in unknown environments.}
    }
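    A stripped-down sketch of MDP-style instruction selection (our illustration of the general idea, not the paper's model): states discretise how far the target sits from the image centre, actions are spoken directions, and value iteration over an invented transition model picks the instruction that centres the target.

        import numpy as np

        # States: horizontal offset of the target from the image centre, in bins
        # (-2 = far left ... +2 = far right). Actions: instructions to the user.
        STATES = [-2, -1, 0, 1, 2]
        ACTIONS = ["turn left", "hold still", "turn right"]

        def transition(s, a):
            """Toy deterministic model of how an instruction shifts the offset."""
            shift = {"turn left": 1, "hold still": 0, "turn right": -1}[a]
            return int(np.clip(s + shift, -2, 2))

        def reward(s):
            return 1.0 if s == 0 else -0.1 * abs(s)   # centred target is the goal

        def plan(gamma=0.9, sweeps=50):
            V = {s: 0.0 for s in STATES}
            for _ in range(sweeps):                   # value iteration
                V = {s: max(reward(transition(s, a)) + gamma * V[transition(s, a)]
                            for a in ACTIONS) for s in STATES}
            # Greedy policy: best instruction for each offset state.
            return {s: max(ACTIONS, key=lambda a: reward(transition(s, a))
                           + gamma * V[transition(s, a)]) for s in STATES}

        print(plan())   # e.g. offset -2 -> 'turn left', 0 -> 'hold still'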
  • J. Lock, A. G. Tramontano, S. Ghidoni, and N. Bellotto, “ActiVis: mobile object detection and active guidance for people with visual impairments,” in Proc. of the int. conf. on image analysis and processing (iciap), 2019.
    [BibTeX] [Abstract] [Download PDF]

    The ActiVis project aims to deliver a mobile system that is able to guide a person with visual impairments towards a target object or area in an unknown indoor environment. For this, it uses new developments in object detection, mobile computing, action generation and human-computer interfacing to interpret the user’s surroundings and present effective guidance directions. Our approach to direction generation uses a Partially Observable Markov Decision Process (POMDP) to track the system’s state and output the optimal location to be investigated. This system includes an object detector and an audio-based guidance interface to provide a complete active search pipeline. The ActiVis system was evaluated in a set of experiments showing better performance than a simpler unguided case.

    @inproceedings{lincoln36413,
    booktitle = {Proc. of the Int. Conf. on Image Analysis and Processing (ICIAP)},
    title = {ActiVis: Mobile Object Detection and Active Guidance for People with Visual Impairments},
    author = {Jacobus Lock and A. G. Tramontano and S. Ghidoni and Nicola Bellotto},
    year = {2019},
    url = {http://eprints.lincoln.ac.uk/id/eprint/36413/},
    abstract = {The ActiVis project aims to deliver a mobile system that is able to guide a person with visual impairments towards a target object or area in an unknown indoor environment. For this, it uses new developments in object detection, mobile computing, action generation and human-computer interfacing to interpret the user's surroundings and present effective guidance directions. Our approach to direction generation uses a Partially Observable Markov Decision Process (POMDP) to track the system's state and output the optimal location to be investigated. This system includes an object detector and an audio-based guidance interface to provide a complete active search pipeline. The ActiVis system was evaluated in a set of experiments showing better performance than a simpler unguided case.}
    }
  • S. Lucarotti, C. M. Saaj, E. Allouis, and P. Bianco, “A self-reconfigurable undulating grasper for asteroid mining,” in 15th esa symposium on advanced space technologies in robotics and automation, 2019.
    [BibTeX] [Download PDF]
    @inproceedings{lincoln39624,
    booktitle = {15th ESA Symposium on Advanced Space Technologies in Robotics and Automation},
    title = {A Self-Reconfigurable Undulating Grasper for Asteroid Mining},
    author = {Suzanna Lucarotti and Chakravarthini M. Saaj and Elie Allouis and Paolo Bianco},
    year = {2019},
    url = {http://eprints.lincoln.ac.uk/id/eprint/39624/}
    }
  • J. Pajarinen, H. L. Thai, R. Akrour, J. Peters, and G. Neumann, “Compatible natural gradient policy search,” Machine learning, 2019. doi:10.1007/s10994-019-05807-0
    [BibTeX] [Abstract] [Download PDF]

    Trust-region methods have yielded state-of-the-art results in policy search. A common approach is to use KL-divergence to bound the region of trust resulting in a natural gradient policy update. We show that the natural gradient and trust region optimization are equivalent if we use the natural parameterization of a standard exponential policy distribution in combination with compatible value function approximation. Moreover, we show that standard natural gradient updates may reduce the entropy of the policy according to a wrong schedule leading to premature convergence. To control entropy reduction we introduce a new policy search method called compatible policy search (COPOS) which bounds entropy loss. The experimental results show that COPOS yields state-of-the-art results in challenging continuous control tasks and in discrete partially observable tasks.

    @article{lincoln36283,
    title = {Compatible natural gradient policy search},
    author = {J. Pajarinen and H.L. Thai and R. Akrour and J. Peters and Gerhard Neumann},
    publisher = {Springer},
    year = {2019},
    doi = {10.1007/s10994-019-05807-0},
    journal = {Machine Learning},
    url = {http://eprints.lincoln.ac.uk/id/eprint/36283/},
    abstract = {Trust-region methods have yielded state-of-the-art results in policy search. A common approach is to use KL-divergence to bound the region of trust resulting in a natural gradient policy update. We show that the natural gradient and trust region optimization are equivalent if we use the natural parameterization of a standard exponential policy distribution in combination with compatible value function approximation. Moreover, we show that standard natural gradient updates may reduce the entropy of the policy according to a wrong schedule leading to premature convergence. To control entropy reduction we introduce a
    new policy search method called compatible policy search (COPOS) which bounds entropy loss. The experimental results show that COPOS yields state-of-the-art results in challenging continuous control tasks and in discrete partially observable tasks.}
    }
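    The 'bounds entropy loss' idea can be stated as a constrained update. In standard trust-region notation (our paraphrase of the abstract, not the paper's exact formulation), the new policy \pi solves

        \max_{\pi}\;
          \mathbb{E}_{s,a \sim \pi_{\mathrm{old}}}\!\left[
            \frac{\pi(a \mid s)}{\pi_{\mathrm{old}}(a \mid s)}\,
            A^{\pi_{\mathrm{old}}}(s,a) \right]
        \quad \text{s.t.} \quad
        \mathbb{E}_{s}\!\left[ \mathrm{KL}\!\left(
          \pi_{\mathrm{old}}(\cdot \mid s) \,\Vert\, \pi(\cdot \mid s)
        \right) \right] \le \epsilon,
        \qquad
        H(\pi_{\mathrm{old}}) - H(\pi) \le \beta,

    where the KL bound gives the natural-gradient step and the extra entropy constraint guards against the premature convergence described above.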
  • A. R. Panisson, Ş. Sarkadi, P. McBurney, S. Parsons, and R. H. Bordini, “On the formal semantics of theory of mind in agent communication,” Lecture notes in computer science (including subseries lecture notes in artificial intelligence and lecture notes in bioinformatics), vol. 11327, p. 18–32, 2019. doi:10.1007/978-3-030-17294-7_2
    [BibTeX] [Download PDF]
    @article{lincoln38400,
    volume = {11327},
    author = {A.R. Panisson and {\c S}. Sarkadi and P. McBurney and Simon Parsons and R.H. Bordini},
    note = {cited By 0},
    title = {On the Formal Semantics of Theory of Mind in Agent Communication},
    journal = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
    doi = {10.1007/978-3-030-17294-7\_2},
    pages = {18--32},
    year = {2019},
    url = {http://eprints.lincoln.ac.uk/id/eprint/38400/}
    }
  • Ş. Sarkadi, A. R. Panisson, R. H. Bordini, P. McBurney, and S. Parsons, “Towards an approach for modelling uncertain theory of mind in multi-agent systems,” Lecture notes in computer science (including subseries lecture notes in artificial intelligence and lecture notes in bioinformatics), vol. 11327, p. 3–17, 2019. doi:10.1007/978-3-030-17294-7_1
    [BibTeX] [Abstract] [Download PDF]

    Applying Theory of Mind to multi-agent systems enables agents to model and reason about other agents' minds. Recent work shows that this ability could increase the performance of agents, making them more efficient than agents that lack this ability. However, modelling other agents' minds is a difficult task, given that it involves many factors of uncertainty, e.g., the uncertainty of the communication channel, the uncertainty of reading other agents correctly, and the uncertainty of trust in other agents. In this paper, we explore how agents acquire and update Theory of Mind under conditions of uncertainty. To represent uncertain Theory of Mind, we add probability estimation on a formal semantics model for agent communication based on the BDI architecture and agent communication languages.

    @article{lincoln38399,
    volume = {11327},
    author = {{\c S}. Sarkadi and A.R. Panisson and R.H. Bordini and P. McBurney and S. Parsons},
    note = {cited By 0},
    title = {Towards an Approach for Modelling Uncertain Theory of Mind in Multi-Agent Systems},
    journal = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
    doi = {10.1007/978-3-030-17294-7\_1},
    pages = {3--17},
    year = {2019},
    url = {http://eprints.lincoln.ac.uk/id/eprint/38399/},
    abstract = {Applying Theory of Mind to multi-agent systems enables agents to model and reason about other agents' minds. Recent work shows that this ability could increase the performance of agents, making them more efficient than agents that lack this ability. However, modelling other agents' minds is a difficult task, given that it involves many factors of uncertainty, e.g., the uncertainty of the communication channel, the uncertainty of reading other agents correctly, and the uncertainty of trust in other agents. In this paper, we explore how agents acquire and update Theory of Mind under conditions of uncertainty. To represent uncertain Theory of Mind, we add probability estimation on a formal semantics model for agent communication based on the BDI architecture and agent communication languages.}
    }
  • Ş. Sarkadi, A. R. Panisson, R. H. Bordini, P. McBurney, S. Parsons, and M. Chapman, “Modelling deception using theory of mind in multi-agent systems,” Ai communications, vol. 32, iss. 4, p. 287–302, 2019. doi:10.3233/AIC-190615
    [BibTeX] [Abstract] [Download PDF]

    Agreement, cooperation and trust would be straightforward if deception did not ever occur in communicative interactions. Humans have deceived one another since the species began. Do machines deceive one another or indeed humans? If they do, how may we detect this? Detecting machine deception arguably requires a model of how machines may deceive, and of how such deception may be identified. Theory of Mind (ToM) provides the opportunity to create intelligent machines that are able to model the minds of other agents. The future implications of a machine that has the capability to understand other minds (human or artificial) and that also has the reasons and intentions to deceive others are dark from an ethical perspective. Being able to understand the dishonest and unethical behaviour of such machines is crucial to current research in AI. In this paper, we present a high-level approach for modelling machine deception using ToM under factors of uncertainty and we propose an implementation of this model in an Agent-Oriented Programming Language (AOPL). We show that the Multi-Agent Systems (MAS) paradigm can be used to integrate concepts from two major theories of deception, namely Information Manipulation Theory 2 (IMT2) and Interpersonal Deception Theory (IDT), and how to apply these concepts in order to build a model of computational deception that takes into account ToM. To show how agents use ToM in order to deceive, we define an epistemic agent mechanism using BDI-like architectures to analyse deceptive interactions between deceivers and their potential targets and we also explain the steps in which the model can be implemented in an AOPL. To the best of our knowledge, this work is one of the first attempts in AI that (i) uses ToM along with components of IMT2 and IDT in order to analyse deceptive interactions and (ii) implements such a model.

    @article{lincoln38401,
    volume = {32},
    number = {4},
    author = {{\c S}. Sarkadi and A.R. Panisson and R.H. Bordini and P. McBurney and S. Parsons and M. Chapman},
    note = {cited By 0},
    title = {Modelling deception using theory of mind in multi-agent systems},
    year = {2019},
    journal = {AI Communications},
    doi = {10.3233/AIC-190615},
    pages = {287--302},
    url = {http://eprints.lincoln.ac.uk/id/eprint/38401/},
    abstract = {Agreement, cooperation and trust would be straightforward if deception did not ever occur in communicative interactions. Humans have deceived one another since the species began. Do machines deceive one another or indeed humans? If they do, how may we detect this? Detecting machine deception arguably requires a model of how machines may deceive, and of how such deception may be identified. Theory of Mind (ToM) provides the opportunity to create intelligent machines that are able to model the minds of other agents. The future implications of a machine that has the capability to understand other minds (human or artificial) and that also has the reasons and intentions to deceive others are dark from an ethical perspective. Being able to understand the dishonest and unethical behaviour of such machines is crucial to current research in AI. In this paper, we present a high-level approach for modelling machine deception using ToM under factors of uncertainty and we propose an implementation of this model in an Agent-Oriented Programming Language (AOPL). We show that the Multi-Agent Systems (MAS) paradigm can be used to integrate concepts from two major theories of deception, namely Information Manipulation Theory 2 (IMT2) and Interpersonal Deception Theory (IDT), and how to apply these concepts in order to build a model of computational deception that takes into account ToM. To show how agents use ToM in order to deceive, we define an epistemic agent mechanism using BDI-like architectures to analyse deceptive interactions between deceivers and their potential targets and we also explain the steps in which the model can be implemented in an AOPL. To the best of our knowledge, this work is one of the first attempts in AI that (i) uses ToM along with components of IMT2 and IDT in order to analyse deceptive interactions and (ii) implements such a model.}
    }
  • I. Sassoon, N. Kökciyan, E. Sklar, and S. Parsons, “Explainable argumentation for wellness consultation,” Lecture notes in computer science (including subseries lecture notes in artificial intelligence and lecture notes in bioinformatics), vol. 11763, p. 186–202, 2019. doi:10.1007/978-3-030-30391-4_11
    [BibTeX] [Download PDF]
    @article{lincoln38398,
    volume = {11763},
    author = {I. Sassoon and N. K{\"o}kciyan and E. Sklar and Simon Parsons},
    note = {cited By 0},
    title = {Explainable argumentation for wellness consultation},
    journal = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
    doi = {10.1007/978-3-030-30391-4\_11},
    pages = {186--202},
    year = {2019},
    url = {http://eprints.lincoln.ac.uk/id/eprint/38398/}
    }
  • A. Seddaoui, C. M. Saaj, and S. Eckersley, “Collision-free optimal trajectory generator for a controlled floating space robot,” in Towards autonomous robotic systems conference, 2019.
    [BibTeX] [Download PDF]
    @inproceedings{lincoln39420,
    booktitle = {Towards Autonomous Robotic Systems Conference},
    title = {Collision-Free Optimal Trajectory Generator for a Controlled Floating Space Robot},
    author = {Asma Seddaoui and Chakravarthini M. Saaj and Steve Eckersley},
    year = {2019},
    url = {http://eprints.lincoln.ac.uk/id/eprint/39420/}
    }
  • D. Zhang, E. Schneider, and E. Sklar, “A cross-landscape evaluation of multi-robot team performance in static task-allocation domains,” Lecture notes in computer science (including subseries lecture notes in artificial intelligence and lecture notes in bioinformatics), vol. 11650, p. 261–272, 2019. doi:10.1007/978-3-030-25332-5
    [BibTeX] [Download PDF]
    @article{lincoln38537,
    volume = {11650},
    author = {D. Zhang and E. Schneider and Elizabeth Sklar},
    note = {cited By 0},
    title = {A cross-landscape evaluation of multi-robot team performance in static task-allocation domains},
    journal = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
    doi = {10.1007/978-3-030-25332-5},
    pages = {261--272},
    year = {2019},
    url = {http://eprints.lincoln.ac.uk/id/eprint/38537/}
    }
  • T. Zhivkov, E. Schneider, and E. Sklar, “MRComm: multi-robot communication testbed,” Lecture notes in computer science (including subseries lecture notes in artificial intelligence and lecture notes in bioinformatics), vol. 11650, p. 346–357, 2019. doi:10.1007/978-3-030-25332-5
    [BibTeX] [Download PDF]
    @article{lincoln38538,
    volume = {11650},
    author = {T. Zhivkov and E. Schneider and Elizabeth Sklar},
    note = {cited By 0},
    title = {MRComm: Multi-robot communication testbed},
    journal = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
    doi = {10.1007/978-3-030-25332-5},
    pages = {346--357},
    year = {2019},
    url = {http://eprints.lincoln.ac.uk/id/eprint/38538/}
    }

2018

  • F. J. Comin, C. Saaj, S. M. Mustaza, and R. Saaj, “Safe testing of electrical diathermy cutting using a new generation soft manipulator,” Ieee transactions on robotics, vol. 34, iss. 6, p. 1659–1666, 2018. doi:10.1109/TRO.2018.2861898
    [BibTeX] [Abstract] [Download PDF]

    This paper presents the first demonstration of a pneumatic soft continuum robot integrated in series with a rigid robot arm, safely performing teleoperated diathermic tissue-cutting. The rigid arm autonomously maintains a safe tool contact force, while the soft arm manually follows the desired cutting path. Ex-vivo experimentation demonstrates submillimetric deviations from target paths.

    @article{lincoln37426,
    volume = {34},
    number = {6},
    month = {December},
    author = {F.J. Comin and C. Saaj and S.M. Mustaza and R. Saaj},
    note = {cited By 0},
    title = {Safe Testing of Electrical Diathermy Cutting Using a New Generation Soft Manipulator},
    publisher = {IEEE},
    year = {2018},
    journal = {IEEE Transactions on Robotics},
    doi = {10.1109/TRO.2018.2861898},
    pages = {1659--1666},
    url = {http://eprints.lincoln.ac.uk/id/eprint/37426/},
    abstract = {This paper presents the first demonstration of a pneumatic soft continuum robot integrated in series with a rigid robot arm, safely performing teleoperated diathermic tissue-cutting. The rigid arm autonomously maintains a safe tool contact force, while the soft arm manually follows the desired cutting path. Ex-vivo experimentation demonstrates submillimetric deviations from target paths.}
    }
  • H. Cuayahuitl, S. Ryu, D. Lee, and J. Kim, “A study on dialogue reward prediction for open-ended conversational agents,” in Neurips workshop on conversational ai, 2018.
    [BibTeX] [Abstract] [Download PDF]

    The amount of dialogue history to include in a conversational agent is often underestimated and/or set in an empirical and thus possibly naive way. This suggests that principled investigations into optimal context windows are urgently needed given that the amount of dialogue history and corresponding representations can play an important role in the overall performance of a conversational system. This paper studies the amount of history required by conversational agents for reliably predicting dialogue rewards. The task of dialogue reward prediction is chosen for investigating the effects of varying amounts of dialogue history and their impact on system performance. Experimental results using a dataset of 18K human-human dialogues report that lengthy dialogue histories of at least 10 sentences are preferred (25 sentences being the best in our experiments) over short ones, and that lengthy histories are useful for training dialogue reward predictors with strong positive correlations between target dialogue rewards and predicted ones.

    @inproceedings{lincoln34433,
    booktitle = {NeurIPS Workshop on Conversational AI},
    month = {December},
    title = {A Study on Dialogue Reward Prediction for Open-Ended Conversational Agents},
    author = {Heriberto Cuayahuitl and Seonghan Ryu and Donghyeon Lee and Jihie Kim},
    publisher = {arXiv},
    year = {2018},
    url = {http://eprints.lincoln.ac.uk/id/eprint/34433/},
    abstract = {The amount of dialogue history to include in a conversational agent is often underestimated and/or set in an empirical and thus possibly naive way. This suggests that principled investigations into optimal context windows are urgently needed given that the amount of dialogue history and corresponding representations can play an important role in the overall performance of a conversational system. This paper studies the amount of history required by conversational agents for reliably predicting dialogue rewards. The task of dialogue reward prediction is chosen for investigating the effects of varying amounts of dialogue history and their impact on system performance. Experimental results using a dataset of 18K human-human dialogues report that lengthy dialogue histories of at least 10 sentences are preferred (25 sentences being the best in our experiments) over short ones, and that lengthy histories are useful for training dialogue reward predictors with strong positive correlations between target dialogue rewards and predicted ones.}
    }
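    The experimental variable in this study, the amount of history, comes down to how input windows are built. A small sketch with invented data structures: pair each reward label with the N sentences that precede it, left-padding short prefixes, so N can be swept (e.g. 1, 5, 10, 25) as in the comparison described above.

        def history_windows(sentences, rewards, n=25, pad="<empty>"):
            """Pair each dialogue reward with the n sentences preceding it.

            sentences: sentence strings in dialogue order (hypothetical).
            rewards:   rewards[i] is the target for the dialogue state after
                       sentences[i] (hypothetical alignment).
            """
            examples = []
            for i, r in enumerate(rewards):
                window = sentences[max(0, i - n + 1): i + 1]
                window = [pad] * (n - len(window)) + window  # left-pad prefixes
                examples.append((window, r))
            return examples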
  • H. Wang, S. Yue, J. Peng, P. Baxter, C. Zhang, and Z. Wang, “A model for detection of angular velocity of image motion based on the temporal tuning of the drosophila,” in Icann 2018, 2018, p. 37–46. doi:10.1007/978-3-030-01421-6_4
    [BibTeX] [Abstract] [Download PDF]

    We propose a new bio-plausible model based on the visual systems of Drosophila for estimating the angular velocity of image motion in insects' eyes. The model implements both preferred-direction motion enhancement and non-preferred-direction motion suppression, which was recently discovered in Drosophila's visual neural circuits, to give a stronger directional selectivity. In addition, the angular velocity detecting model (AVDM) produces a response largely independent of the spatial frequency in grating experiments, which enables insects to estimate flight speed in cluttered environments. This also coincides with behavioural experiments of honeybees flying through tunnels with stripes of different spatial frequencies.

    @inproceedings{lincoln33104,
    month = {December},
    author = {Huatian Wang and Shigang Yue and Jigen Peng and Paul Baxter and Chun Zhang and Zhihua Wang},
    booktitle = {ICANN 2018},
    title = {A Model for Detection of Angular Velocity of Image Motion Based on the Temporal Tuning of the Drosophila},
    publisher = {Springer, Cham},
    doi = {10.1007/978-3-030-01421-6\_4},
    pages = {37--46},
    year = {2018},
    url = {http://eprints.lincoln.ac.uk/id/eprint/33104/},
    abstract = {We propose a new bio-plausible model based on the visual systems of Drosophila for estimating the angular velocity of image motion in insects' eyes. The model implements both preferred-direction motion enhancement and non-preferred-direction motion suppression, which was recently discovered in Drosophila's visual neural circuits, to give a stronger directional selectivity. In addition, the angular velocity detecting model (AVDM) produces a response largely independent of the spatial frequency in grating experiments, which enables insects to estimate flight speed in cluttered environments. This also coincides with behavioural experiments of honeybees flying through tunnels with stripes of different spatial frequencies.}
    }
  • Q. Fu, N. Bellotto, C. Hu, and S. Yue, “Performance of a visual fixation model in an autonomous micro robot inspired by drosophila physiology,” in Ieee international conference on robotics and biomimetics, 2018.
    [BibTeX] [Abstract] [Download PDF]

    In nature, lightweight and low-powered insects are ideal model systems to study motion perception strategies. Understanding the underlying characteristics and functionality of insects' visual systems is not only attractive to neural system modellers, but also critical in providing effective solutions to future robotics. This paper presents a novel model of a dynamic vision system inspired by Drosophila physiology for mimicking fast motion tracking and a closed-loop behavioural response to fixation. The proposed model was realised on an embedded system in an autonomous micro robot which has limited computational resources. A monocular camera was applied as the only motion sensing modality. Systematic experiments including open-loop and closed-loop bio-robotic tests validated the proposed visual fixation model: the robot showed motion tracking and fixation behaviours similar to insects; the image processing frequency was maintained at 25-45 Hz. Arena tests also demonstrated a successful following behaviour aroused by fixation in navigation.

    @inproceedings{lincoln33846,
    booktitle = {IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND BIOMIMETICS},
    month = {December},
    title = {Performance of a Visual Fixation Model in an Autonomous Micro Robot Inspired by Drosophila Physiology},
    author = {Qinbing Fu and Nicola Bellotto and Cheng Hu and Shigang Yue},
    year = {2018},
    url = {http://eprints.lincoln.ac.uk/id/eprint/33846/},
    abstract = {In nature, lightweight and low-powered insects are ideal model systems to study motion perception strategies. Understanding the underlying characteristics and functionality of insects' visual systems is not only attractive to neural system modellers, but also critical in providing effective solutions to future robotics. This paper presents a novel modelling of a dynamic vision system inspired by Drosophila physiology for mimicking fast motion tracking and a closed-loop behavioural response to fixation. The proposed model was realised on an embedded system in an autonomous micro robot which has limited computational resources. A monocular camera was applied as the only motion sensing modality. Systematic experiments including open-loop and closed-loop bio-robotic tests validated the proposed visual fixation model: the robot showed motion tracking and fixation behaviours similar to insects; the image processing frequency can be maintained at 25 {$\sim$} 45Hz. Arena tests also demonstrated a successful following behaviour aroused by fixation in navigation.}
    }
  • W. Lewinger, F. Comin, M. Matthews, and C. Saaj, “Earth analogue testing and analysis of martian duricrust properties,” Acta astronautica, vol. 152, p. 567–579, 2018. doi:10.1016/j.actaastro.2018.05.025
    [BibTeX] [Abstract] [Download PDF]

    Previous and current Mars rover missions have noted a nearly ubiquitous presence of duricrusts on the planet surface. Duricrusts are thin, brittle layers of cemented regolith that cover the underlying terrain. In some cases, the duricrust hides safe or relatively safe terrain underneath the top soil. However, as was observed by both Mars exploration rovers, Spirit and Opportunity, such crusts can also hide loose, untrafficable terrain, leading to Spirit becoming permanently incapacitated in 2009. Whilst several reports of the Martian surface have indicated the presence of duricrusts, none have been able to provide details on the physical properties of the material, which may indicate the level of safe traversability of duricrust terrains. This paper presents the findings of testing terrestrially-created duricrusts with simulated Martian soil properties, in order to determine the properties of such duricrusts and to discover what level of hazard they may represent (e.g. can vehicles traverse the duricrust surface without penetration to lower sub-surface soils?). Combinations of elements that have been observed in the Martian soil were used as the basis for forming the laboratory-created duricrusts. Variations in duricrust thickness, water content, and the iron oxide compound were investigated. As was observed throughout the testing process, duricrusts behave in a rather brittle fashion and are easily destroyed by low surface pressures. This indicates that duricrusts are not safe for traversing and they present a definite hazard for travelling on the Martian landscape when utilising only visual terrain classification, as the surface appearance is not necessarily representative of what may be lying beneath.

    @article{lincoln37427,
    volume = {152},
    month = {November},
    author = {W. Lewinger and F. Comin and M. Matthews and C. Saaj},
    title = {Earth analogue testing and analysis of Martian duricrust properties},
    year = {2018},
    journal = {Acta Astronautica},
    doi = {10.1016/j.actaastro.2018.05.025},
    pages = {567--579},
    url = {http://eprints.lincoln.ac.uk/id/eprint/37427/},
    abstract = {Previous and current Mars rover missions have noted a nearly ubiquitous presence of duricrusts on the planet surface. Duricrusts are thin, brittle layers of cemented regolith that cover the underlying terrain. In some cases, the duricrust hides safe or relatively safe terrain underneath the top soil. However, as was observed by both Mars exploration rovers, Spirit and Opportunity, such crusts can also hide loose, untrafficable terrain, leading to Spirit becoming permanently incapacitated in 2009. Whilst several reports of the Martian surface have indicated the presence of duricrusts, none have been able to provide details on the physical properties of the material, which may indicate the level of safe traversability of duricrust terrains. This paper presents the findings of testing terrestrially-created duricrusts with simulated Martian soil properties, in order to determine the properties of such duricrusts and to discover what level of hazard they may represent (e.g. can vehicles traverse the duricrust surface without penetration to lower sub-surface soils?). Combinations of elements that have been observed in the Martian soil were used as the basis for forming the laboratory-created duricrusts. Variations in duricrust thickness, water content, and the iron oxide compound were investigated. As was observed throughout the testing process, duricrusts behave in a rather brittle fashion and are easily destroyed by low surface pressures. This indicates that duricrusts are not safe for traversing and they present a definite hazard for travelling on the Martian landscape when utilising only visual terrain classification, as the surface appearance is not necessarily representative of what may be lying beneath.}
    }
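
    The traversability question the abstract poses (can a vehicle cross the crust without breaking through to the sub-surface soil?) reduces, at first order, to comparing the vehicle's ground pressure with the crust's breaking strength. A back-of-envelope sketch; all numbers and the safety factor below are hypothetical, not values from the paper:

    def ground_pressure(wheel_load_n, contact_area_m2):
        """Mean pressure a wheel exerts on the surface, in pascals."""
        return wheel_load_n / contact_area_m2

    def crust_is_trafficable(pressure_pa, crust_strength_pa, safety_factor=2.0):
        """Crude go/no-go check: the crust must bear the load with margin."""
        return pressure_pa * safety_factor <= crust_strength_pa

    # Hypothetical rover wheel: 200 N load over a 0.01 m^2 contact patch.
    p = ground_pressure(200.0, 0.01)                      # 20 kPa
    print(crust_is_trafficable(p, crust_strength_pa=10e3))  # brittle crust: False
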
  • F. Camara, O. Giles, M. Rothmuller, P. Rasmussen, A. Vendelbo-Larsen, G. Markkula, Y-M. Lee, N. Merat, and C. Fox, “Predicting pedestrian road-crossing assertiveness for autonomous vehicle control,” in 21st ieee international conference on intelligent transportation systems, 2018.
    [BibTeX] [Abstract] [Download PDF]

    Autonomous vehicles (AVs) must interact with other road users including pedestrians. Unlike passive environments, pedestrians are active agents having their own utilities and decisions, which must be inferred and predicted by AVs in order to control interactions with them and navigation around them. In particular, when a pedestrian wishes to cross the road in front of the vehicle at an unmarked crossing, the pedestrian and AV must compete for the space, which may be considered as a game-theoretic interaction in which one agent must yield to the other. To inform AV controllers in this setting, this study collects and analyses data from real-world human road crossings to determine what features of crossing behaviours are predictive about the level of assertiveness of pedestrians and of the eventual winner of the interactions. It presents the largest and most detailed data set of its kind known to us, and new methods to analyze and predict pedestrian-vehicle interactions based upon it. Pedestrian-vehicle interactions are decomposed into sequences of independent discrete events. We use probabilistic methods (regression and decision tree regression) and sequence analysis to analyze sets and sub-sequences of actions used by both pedestrians and human drivers while crossing at an intersection, to find common patterns of behaviour and to predict the winner of each interaction. We report on the particular features found to be predictive and which can thus be integrated into game-theoretic AV controllers to inform real-time interactions.

    @inproceedings{lincoln33089,
    booktitle = {21st IEEE International Conference on Intelligent Transportation Systems},
    month = {November},
    title = {Predicting pedestrian road-crossing assertiveness for autonomous vehicle control},
    author = {F Camara and O Giles and M Rothmuller and PH Rasmussen and A Vendelbo-Larsen and G Markkula and Y-M Lee and N Merat and Charles Fox},
    publisher = {IEEE},
    year = {2018},
    url = {http://eprints.lincoln.ac.uk/id/eprint/33089/},
    abstract = {Autonomous vehicles (AVs) must interact with other road users including pedestrians. Unlike passive environments, pedestrians are active agents having their own utilities and decisions, which must be inferred and predicted by AVs in order to control interactions with them and navigation around them. In particular, when a pedestrian wishes to cross the road in front of the vehicle at an unmarked crossing, the pedestrian and AV must compete for the space, which may be considered as a game-theoretic interaction in which one agent must yield to the other. To inform AV controllers in this setting, this study collects and analyses data from real-world human road crossings to determine what features of crossing behaviours are predictive about the level of assertiveness of pedestrians and of the eventual winner of the interactions. It presents the largest and most detailed data set of its kind known to us, and new methods to analyze and predict pedestrian-vehicle interactions based upon it. Pedestrian-vehicle interactions are decomposed into sequences of independent discrete events. We use probabilistic methods (regression and decision tree regression) and sequence analysis to analyze sets and sub-sequences of actions used by both pedestrians and human drivers while crossing at an intersection, to find common patterns of behaviour and to predict the winner of each interaction. We report on the particular features found to be predictive and which can thus be integrated into game-theoretic AV controllers to inform real-time interactions.}
    }
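
    Feature-based prediction of the interaction winner, as described above, maps naturally onto off-the-shelf tree learners. A toy scikit-learn sketch; the feature names and data are invented placeholders, not the study's dataset, and the sequence-analysis side of the method is not shown:

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    # Invented placeholder features per interaction:
    # [pedestrian approach speed (m/s), vehicle deceleration time (s), gaze events]
    X = np.array([[1.4, 0.8, 2], [0.9, 0.1, 0], [1.6, 1.2, 3], [1.0, 0.0, 1]])
    y = np.array([1, 0, 1, 0])  # 1 = pedestrian crossed first, 0 = vehicle won

    clf = DecisionTreeClassifier(max_depth=2).fit(X, y)
    print(clf.predict([[1.3, 0.9, 2]]))  # predicted winner for a new interaction
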
  • K. Goher and S. Fadlallah, “Pid, bfo-optimized pid, and pd-flc control of a two-wheeled machine with two-direction handling mechanism: a comparative study,” Robotics and biomimetics, vol. 5, iss. 6, 2018. doi:10.1186/s40638-018-0089-3
    [BibTeX] [Abstract] [Download PDF]

    In this paper, three control approaches are utilized in order to control the stability of a novel five-degrees-of-freedom two-wheeled robotic machine designed for industrial applications that demand a limited-space working environment. A proportional-integral-derivative (PID) control scheme, a bacterial-foraging-optimized PID control method, and a fuzzy logic control method are applied to the wheeled machine to obtain the optimum control strategy that provides the best system stabilization performance. According to simulation results, considering multiple motion scenarios, the PID controller optimized by the bacterial foraging optimization method outperformed the other two control methods in terms of minimum overshoot, rise time, and applied input forces.

    @article{lincoln34106,
    volume = {5},
    number = {6},
    month = {November},
    author = {Khaled Goher and Sulaiman Fadlallah},
    title = {PID, BFO-optimized PID, and PD-FLC control of a two-wheeled machine with two-direction handling mechanism: a comparative study},
    publisher = {SpringerOpen},
    year = {2018},
    journal = {Robotics and Biomimetics},
    doi = {10.1186/s40638-018-0089-3},
    url = {http://eprints.lincoln.ac.uk/id/eprint/34106/},
    abstract = {In this paper, three control approaches are utilized in order to control the stability of a novel five-degrees-of-freedom two-wheeled robotic machine designed for industrial applications that demand a limited-space working environment. A proportional-integral-derivative (PID) control scheme, a bacterial-foraging-optimized PID control method, and a fuzzy logic control method are applied to the wheeled machine to obtain the optimum control strategy that provides the best system stabilization performance. According to simulation results, considering multiple motion scenarios, the PID controller optimized by the bacterial foraging optimization method outperformed the other two control methods in terms of minimum overshoot, rise time, and applied input forces.}
    }
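
    For reference, the discrete PID law underlying all three schemes compared in the paper. A minimal sketch with arbitrary gains and an arbitrary first-order plant; the bacterial foraging optimization and the fuzzy logic components are not reproduced:

    class PID:
        """Textbook discrete PID controller."""
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def step(self, setpoint, measurement):
            error = setpoint - measurement
            self.integral += error * self.dt
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    # Regulate a toy first-order plant x' = -x + u towards a setpoint of 0:
    pid, x, dt = PID(kp=4.0, ki=1.0, kd=0.2, dt=0.01), 1.0, 0.01
    for _ in range(500):
        x += (-x + pid.step(0.0, x)) * dt
    print(abs(x) < 1e-2)  # True: the controller has regulated x near zero
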
  • Q. Fu, C. Hu, J. Peng, and S. Yue, “Shaping the collision selectivity in a looming sensitive neuron model with parallel on and off pathways and spike frequency adaptation,” Neural networks, vol. 106, p. 127–143, 2018. doi:10.1016/j.neunet.2018.04.001
    [BibTeX] [Abstract] [Download PDF]

    Shaping the collision selectivity in vision-based artificial collision-detecting systems is still an open challenge. This paper presents a novel neuron model of a locust looming detector, i.e. the lobula giant movement detector (LGMD1), in order to provide effective solutions to enhance the collision selectivity of looming objects over other visual challenges. We propose an approach to model the biologically plausible mechanisms of ON and OFF pathways and a biophysical mechanism of spike frequency adaptation (SFA) in the proposed LGMD1 visual neural network. The ON and OFF pathways can separate both dark and light looming features for parallel spatiotemporal computations. This works effectively on perceiving a potential collision from dark or light objects that approach; such a bio-plausible structure can also separate LGMD1’s collision selectivity to its neighbouring looming detector – the LGMD2. The SFA mechanism can enhance the LGMD1’s collision selectivity to approaching objects rather than receding and translating stimuli, which is a significant improvement compared with similar LGMD1 neuron models. The proposed framework has been tested using off-line tests of synthetic and real-world stimuli, as well as on-line bio-robotic tests. The enhanced collision selectivity of the proposed model has been validated in systematic experiments. The computational simplicity and robustness of this work have also been verified by the bio-robotic tests, which demonstrates potential in building neuromorphic sensors for collision detection in both a fast and reliable manner.

    @article{lincoln31536,
    volume = {106},
    month = {October},
    author = {Qinbing Fu and Cheng Hu and Jigen Peng and Shigang Yue},
    title = {Shaping the collision selectivity in a looming sensitive neuron model with parallel ON and OFF pathways and spike frequency adaptation},
    publisher = {Elsevier for European Neural Network Society (ENNS)},
    year = {2018},
    journal = {Neural Networks},
    doi = {10.1016/j.neunet.2018.04.001},
    pages = {127--143},
    url = {http://eprints.lincoln.ac.uk/id/eprint/31536/},
    abstract = {Shaping the collision selectivity in vision-based artificial collision-detecting systems is still an open challenge. This paper presents a novel neuron model of a locust looming detector, i.e. the lobula giant movement detector (LGMD1), in order to provide effective solutions to enhance the collision selectivity of looming objects over other visual challenges. We propose an approach to model the biologically plausible mechanisms of ON and OFF pathways and a biophysical mechanism of spike frequency adaptation (SFA) in the proposed LGMD1 visual neural network. The ON and OFF pathways can separate both dark and light looming features for parallel spatiotemporal computations. This works effectively on perceiving a potential collision from dark or light objects that approach; such a bio-plausible structure can also separate LGMD1's collision selectivity to its neighbouring looming detector -- the LGMD2. The SFA mechanism can enhance the LGMD1's collision selectivity to approaching objects rather than receding and translating stimuli, which is a significant improvement compared with similar LGMD1 neuron models. The proposed framework has been tested using off-line tests of synthetic and real-world stimuli, as well as on-line bio-robotic tests. The enhanced collision selectivity of the proposed model has been validated in systematic experiments. The computational simplicity and robustness of this work have also been verified by the bio-robotic tests, which demonstrates potential in building neuromorphic sensors for collision detection in both a fast and reliable manner.}
    }
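
    Two of the mechanisms named in the abstract are easy to isolate: half-wave rectification splits the temporal luminance change into ON and OFF channels, and spike frequency adaptation damps sustained responses. A toy sketch of both, with invented time constants, not the paper's full LGMD1 network:

    import numpy as np

    def on_off_split(frame_prev, frame_curr):
        """Split the temporal luminance change into ON (brightening)
        and OFF (darkening) channels by half-wave rectification."""
        diff = frame_curr.astype(float) - frame_prev.astype(float)
        return np.maximum(diff, 0.0), np.maximum(-diff, 0.0)

    def adapt(response, state, tau=0.9, gain=0.5):
        """Toy spike frequency adaptation: a leaky trace of past activity
        subtractively inhibits the current response, so sustained (e.g.
        translating) stimuli fade while accelerating looming edges persist."""
        state = tau * state + (1.0 - tau) * response
        return max(response - gain * state, 0.0), state

    on, off = on_off_split(np.zeros((2, 2)), np.full((2, 2), 5.0))
    print(on.max(), off.max())        # 5.0 0.0 : pure brightening is ON only

    state = 0.0
    for r in [1.0, 1.0, 1.0, 1.0]:    # constant drive: output decays
        out, state = adapt(r, state)
        print(round(out, 3))
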
  • S. Indurthi, S. Yu, S. Back, and H. Cuayahuitl, “Cut to the chase: a context zoom-in network for reading comprehension,” in Empirical methods in natural language processing (emnlp), 2018.
    [BibTeX] [Abstract] [Download PDF]

    In recent years many deep neural networks have been proposed to solve Reading Comprehension (RC) tasks. Most of these models struggle to reason over long documents and do not trivially generalize to cases where the answer is not present as a span in a given document. We present a novel neural-based architecture that is capable of extracting relevant regions based on a given question-document pair and generating a well-formed answer. To show the effectiveness of our architecture, we conducted several experiments on the recently proposed and challenging RC dataset "NarrativeQA". The proposed architecture outperforms state-of-the-art results (Tay et al., 2018) by 12.62% (ROUGE-L) relative improvement.

    @inproceedings{lincoln34105,
    booktitle = {Empirical Methods in Natural Language Processing (EMNLP)},
    month = {October},
    title = {Cut to the Chase: A Context Zoom-in Network for Reading Comprehension},
    author = {Satish Indurthi and Seunghak Yu and Seohyun Back and Heriberto Cuayahuitl},
    publisher = {Association for Computational Linguistics},
    year = {2018},
    url = {http://eprints.lincoln.ac.uk/id/eprint/34105/},
    abstract = {In recent years many deep neural networks have been proposed to solve Reading Comprehension (RC) tasks. Most of these models struggle to reason over long documents and do not trivially generalize to cases where the answer is not present as a span in a given document. We present a novel neural-based architecture that is capable of extracting relevant regions based on a given question-document pair and generating a well-formed answer. To show the effectiveness of our architecture, we conducted several experiments on the recently proposed and challenging RC dataset 'NarrativeQA'. The proposed architecture outperforms state-of-the-art results (Tay et al., 2018) by 12.62\% (ROUGE-L) relative improvement.}
    }
  • L. Sun, Z. Yan, A. Zaganidis, C. Zhao, and T. Duckett, “Recurrent-octomap: learning state-based map refinement for long-term semantic mapping with 3d-lidar data,” Ieee robotics and automation letters, vol. 3, iss. 4, p. 3749–3756, 2018. doi:10.1109/LRA.2018.2856268
    [BibTeX] [Abstract] [Download PDF]

    This paper presents a novel semantic mapping approach, Recurrent-OctoMap, learned from long-term 3D Lidar data. Most existing semantic mapping approaches focus on improving semantic understanding of single frames, rather than 3D refinement of semantic maps (i.e. fusing semantic observations). The most widely-used approach for 3D semantic map refinement is a Bayes update, which fuses the consecutive predictive probabilities following a Markov-Chain model. Instead, we propose a learning approach to fuse the semantic features, rather than simply fusing predictions from a classifier. In our approach, we represent and maintain our 3D map as an OctoMap, and model each cell as a recurrent neural network (RNN), to obtain a Recurrent-OctoMap. In this case, the semantic mapping process can be formulated as a sequence-to-sequence encoding-decoding problem. Moreover, in order to extend the duration of observations in our Recurrent-OctoMap, we developed a robust 3D localization and mapping system for successively mapping a dynamic environment using more than two weeks of data, and the system can be trained and deployed with arbitrary memory length. We validate our approach on the ETH long-term 3D Lidar dataset [1]. The experimental results show that our proposed approach outperforms the conventional 'Bayes update' approach.

    @article{lincoln32558,
    volume = {3},
    number = {4},
    month = {October},
    author = {Li Sun and Zhi Yan and Anestis Zaganidis and Cheng Zhao and Tom Duckett},
    title = {Recurrent-OctoMap: Learning State-based Map Refinement for Long-Term Semantic Mapping with 3D-Lidar Data},
    publisher = {IEEE},
    year = {2018},
    journal = {IEEE Robotics and Automation Letters},
    doi = {10.1109/LRA.2018.2856268},
    pages = {3749--3756},
    url = {http://eprints.lincoln.ac.uk/id/eprint/32558/},
    abstract = {This paper presents a novel semantic mapping approach, Recurrent-OctoMap, learned from long-term 3D Lidar data. Most existing semantic mapping approaches focus on improving semantic understanding of single frames, rather than 3D refinement of semantic maps (i.e. fusing semantic observations). The most widely-used approach for 3D semantic map refinement is a Bayes update, which fuses the consecutive predictive probabilities following a Markov-Chain model. Instead, we propose a learning approach to fuse the semantic features, rather than simply fusing predictions from a classifier. In our approach, we represent and maintain our 3D map as an OctoMap, and model each cell as a recurrent neural network (RNN), to obtain a Recurrent-OctoMap. In this case, the semantic mapping process can be formulated as a sequence-to-sequence encoding-decoding problem. Moreover, in order to extend the duration of observations in our Recurrent-OctoMap, we developed a robust 3D localization and mapping system for successively mapping a dynamic environment using more than two weeks of data, and the system can be trained and deployed with arbitrary memory length. We validate our approach on the ETH long-term 3D Lidar dataset [1]. The experimental results show that our proposed approach outperforms the conventional 'Bayes update' approach.}
    }
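
    The 'Bayes update' baseline that Recurrent-OctoMap is compared against fuses consecutive per-cell class probabilities recursively. A minimal sketch of that baseline only (the Recurrent-OctoMap itself replaces this rule with a learned per-cell RNN):

    import numpy as np

    def bayes_fuse(prior, likelihood):
        """Recursively fuse a new per-class probability vector into a
        cell's belief: posterior is prior * likelihood, renormalised."""
        posterior = prior * likelihood
        return posterior / posterior.sum()

    belief = np.full(3, 1.0 / 3.0)            # uniform prior over 3 classes
    for obs in ([0.7, 0.2, 0.1], [0.6, 0.3, 0.1], [0.2, 0.5, 0.3]):
        belief = bayes_fuse(belief, np.asarray(obs))
    print(belief.argmax(), belief.round(3))   # class 0 dominates
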
  • A. Zaganidis, L. Sun, T. Duckett, and G. Cielniak, “Integrating deep semantic segmentation into 3d point cloud registration,” Robotics and automation letters ieee, vol. 3, iss. 4, p. 2942–2949, 2018. doi:10.1109/LRA.2018.2848308
    [BibTeX] [Abstract] [Download PDF]

    Point cloud registration is the task of aligning 3D scans of the same environment captured from different poses. When semantic information is available for the points, it can be used as a prior in the search for correspondences to improve registration. Semantic-assisted Normal Distributions Transform (SE-NDT) is a new registration algorithm that reduces the complexity of the problem by using the semantic information to partition the point cloud into a set of normal distributions, which are then registered separately. In this paper we extend the NDT registration pipeline by using PointNet, a deep neural network for segmentation and classification of point clouds, to learn and predict per-point semantic labels. We also present the Iterative Closest Point (ICP) equivalent of the algorithm, a special case of Multichannel Generalized ICP. We evaluate the performance of SE-NDT against the state of the art in point cloud registration on the publicly available classification data set Semantic3d.net. We also test the trained classifier and algorithms on dynamic scenes, using a sequence from the public dataset KITTI. The experiments demonstrate the improvement of the registration in terms of robustness, precision and speed, across a range of initial registration errors, thanks to the inclusion of semantic information.

    @article{lincoln32390,
    volume = {3},
    number = {4},
    month = {October},
    author = {Anestis Zaganidis and Li Sun and Tom Duckett and Grzegorz Cielniak},
    title = {Integrating Deep Semantic Segmentation into 3D Point Cloud Registration},
    publisher = {IEEE},
    year = {2018},
    journal = {Robotics and Automation Letters IEEE},
    doi = {10.1109/LRA.2018.2848308},
    pages = {2942--2949},
    url = {http://eprints.lincoln.ac.uk/id/eprint/32390/},
    abstract = {Point cloud registration is the task of aligning 3D scans of the same environment captured from different poses. When semantic information is available for the points, it can be used as a prior in the search for correspondences to improve registration. Semantic-assisted Normal Distributions Transform (SE-NDT) is a new registration algorithm that reduces the complexity of the problem by using the semantic information to partition the point cloud into a set of normal distributions, which are then registered separately. In this paper we extend the NDT registration pipeline by using PointNet, a deep neural network for segmentation and classification of point clouds, to learn and predict per-point semantic labels. We also present the Iterative Closest Point (ICP) equivalent of the algorithm, a special case of Multichannel Generalized ICP. We evaluate the performance of SE-NDT against the state of the art in point cloud registration on the publicly available classification data set Semantic3d.net. We also test the trained classifier and algorithms on dynamic scenes, using a sequence from the public dataset KITTI. The experiments demonstrate the improvement of the registration in terms of robustness, precision and speed, across a range of initial registration errors, thanks to the inclusion of semantic information.}
    }
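
    The core idea of semantic-assisted registration, partitioning the cloud by label and registering each class separately, can be shown in a few lines. The sketch below substitutes a per-class centroid alignment for the paper's NDT/ICP optimisation, so it recovers only a translation:

    import numpy as np

    def partition_by_label(points, labels):
        """Split an (N,3) cloud into per-class sub-clouds, as SE-NDT does
        before fitting normal distributions to each semantic class."""
        return {c: points[labels == c] for c in np.unique(labels)}

    def per_class_translation(source, target, labels_s, labels_t):
        """Toy 'registration': average the per-class centroid offsets.
        (A stand-in for the per-class NDT/ICP optimisation in the paper.)"""
        src = partition_by_label(source, labels_s)
        tgt = partition_by_label(target, labels_t)
        offsets = [tgt[c].mean(0) - src[c].mean(0) for c in src if c in tgt]
        return np.mean(offsets, axis=0)

    rng = np.random.default_rng(0)
    pts = rng.normal(size=(100, 3))
    lbl = rng.integers(0, 3, size=100)
    shift = np.array([0.5, -0.2, 0.1])
    print(per_class_translation(pts, pts + shift, lbl, lbl).round(3))  # = shift
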
  • J. Zhao, C. Hu, C. Zhang, Z. Wang, and S. Yue, “A bio-inspired collision detector for small quadcopter,” in 2018 international joint conference on neural networks (ijcnn), 2018, p. 1–7. doi:10.1109/IJCNN.2018.8489298
    [BibTeX] [Abstract] [Download PDF]

    The sense and avoid capability enables insects to fly versatilely and robustly in dynamic and complex environments. Their biological principles are so practical and efficient that they have inspired humans to imitate them in flying machines. In this paper, we studied a novel bio-inspired collision detector and its application on a quadcopter. The detector is inspired by the lobula giant movement detector (LGMD) neurons in locusts, and modelled into an STM32F407 Microcontroller Unit (MCU). Compared to other collision detecting methods applied on quadcopters, we focused on enhancing the collision detection accuracy in a bio-inspired way that can considerably increase the computing efficiency during an obstacle detecting task even in complex and dynamic environments. We designed the quadcopter's response to imminent collisions and tested this bio-inspired system in an indoor arena. The results observed from the experiments demonstrated that the LGMD collision detector is feasible as a vision module for the quadcopter's collision avoidance task.

    @inproceedings{lincoln34847,
    month = {October},
    author = {Jiannan Zhao and Cheng Hu and Chun Zhang and Zhihua Wang and Shigang Yue},
    booktitle = {2018 International Joint Conference on Neural Networks (IJCNN)},
    title = {A Bio-inspired Collision Detector for Small Quadcopter},
    publisher = {IEEE},
    doi = {10.1109/IJCNN.2018.8489298},
    pages = {1--7},
    year = {2018},
    url = {http://eprints.lincoln.ac.uk/id/eprint/34847/},
    abstract = {The sense and avoid capability enables insects to fly versatilely and robustly in dynamic and complex environments. Their biological principles are so practical and efficient that they have inspired humans to imitate them in flying machines. In this paper, we studied a novel bio-inspired collision detector and its application on a quadcopter. The detector is inspired by the lobula giant movement detector (LGMD) neurons in locusts, and modelled into an STM32F407 Microcontroller Unit (MCU). Compared to other collision detecting methods applied on quadcopters, we focused on enhancing the collision detection accuracy in a bio-inspired way that can considerably increase the computing efficiency during an obstacle detecting task even in complex and dynamic environments. We designed the quadcopter's response to imminent collisions and tested this bio-inspired system in an indoor arena. The results observed from the experiments demonstrated that the LGMD collision detector is feasible as a vision module for the quadcopter's collision avoidance task.}
    }
  • H. Wang, J. Peng, and S. Yue, “A directionally selective small target motion detecting visual neural network in cluttered backgrounds,” Ieee transactions on cybernetics, p. 1–15, 2018. doi:10.1109/TCYB.2018.2869384
    [BibTeX] [Abstract] [Download PDF]

    Discriminating targets moving against a cluttered background is a huge challenge, let alone detecting a target as small as one or a few pixels and tracking it in flight. In the insect’s visual system, a class of specific neurons, called small target motion detectors (STMDs), have been identified as showing exquisite selectivity for small target motion. Some of the STMDs have also demonstrated direction selectivity, which means these STMDs respond strongly only to their preferred motion direction. Direction selectivity is an important property of these STMD neurons which could contribute to tracking small targets such as mates in flight. However, little has been done on systematically modeling these directionally selective STMD neurons. In this paper, we propose a directionally selective STMD-based neural network for small target detection in a cluttered background. In the proposed neural network, a new correlation mechanism is introduced for direction selectivity via correlating signals relayed from two pixels. Then, a lateral inhibition mechanism is implemented on the spatial field for size selectivity of the STMD neurons. Finally, a population vector algorithm is used to encode the motion direction of small targets. Extensive experiments showed that the proposed neural network not only accords with current biological findings, i.e., showing directional preferences, but also works reliably in detecting small targets against cluttered backgrounds.

    @article{lincoln33420,
    month = {October},
    author = {Hongxin Wang and Jigen Peng and Shigang Yue},
    note = {The final published version of this article can be accessed online at https://ieeexplore.ieee.org/document/8485659},
    title = {A Directionally Selective Small Target Motion Detecting Visual Neural Network in Cluttered Backgrounds},
    publisher = {IEEE},
    year = {2018},
    journal = {IEEE Transactions on Cybernetics},
    doi = {10.1109/TCYB.2018.2869384},
    pages = {1--15},
    url = {http://eprints.lincoln.ac.uk/id/eprint/33420/},
    abstract = {Discriminating targets moving against a cluttered background is a huge challenge, let alone detecting a target as small as one or a few pixels and tracking it in flight. In the insect's visual system, a class of specific neurons, called small target motion detectors (STMDs), have been identified as showing exquisite selectivity for small target motion. Some of the STMDs have also demonstrated direction selectivity, which means these STMDs respond strongly only to their preferred motion direction. Direction selectivity is an important property of these STMD neurons which could contribute to tracking small targets such as mates in flight. However, little has been done on systematically modeling these directionally selective STMD neurons. In this paper, we propose a directionally selective STMD-based neural network for small target detection in a cluttered background. In the proposed neural network, a new correlation mechanism is introduced for direction selectivity via correlating signals relayed from two pixels. Then, a lateral inhibition mechanism is implemented on the spatial field for size selectivity of the STMD neurons. Finally, a population vector algorithm is used to encode the motion direction of small targets. Extensive experiments showed that the proposed neural network not only accords with current biological findings, i.e., showing directional preferences, but also works reliably in detecting small targets against cluttered backgrounds.}
    }
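
    The final decoding step mentioned above, the population vector algorithm, has a compact closed form: the estimated direction is the response-weighted circular mean of the detectors' preferred directions. A sketch with made-up responses:

    import numpy as np

    def population_vector(responses, preferred_dirs_rad):
        """Decode motion direction as the response-weighted vector sum of
        the detectors' preferred directions (population vector algorithm)."""
        x = np.sum(responses * np.cos(preferred_dirs_rad))
        y = np.sum(responses * np.sin(preferred_dirs_rad))
        return np.arctan2(y, x)

    dirs = np.deg2rad([0, 45, 90, 135, 180, 225, 270, 315])
    resp = np.array([0.1, 0.9, 1.0, 0.8, 0.1, 0.0, 0.0, 0.1])  # peak near 90 deg
    print(np.rad2deg(population_vector(resp, dirs)).round(1))   # ~86.2 degrees
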
  • P. Bosilj, T. Duckett, and G. Cielniak, “Analysis of morphology-based features for classification of crop and weeds in precision agriculture,” Ieee robotics and automation letters, vol. 3, iss. 4, p. 2950–2956, 2018. doi:10.1109/LRA.2018.2848305
    [BibTeX] [Abstract] [Download PDF]

    Determining the types of vegetation present in an image is a core step in many precision agriculture tasks. In this paper, we focus on pixel-based approaches for classification of crops versus weeds, especially for complex cases involving overlapping plants and partial occlusion. We examine the benefits of multi-scale and content-driven morphology-based descriptors called Attribute Profiles. These are compared to state-of-the-art keypoint descriptors with a fixed neighbourhood previously used in precision agriculture, namely Histograms of Oriented Gradients and Local Binary Patterns. The proposed classification technique is especially advantageous when coupled with morphology-based segmentation on a max-tree structure, as the same representation can be re-used for feature extraction. The robustness of the approach is demonstrated by an experimental evaluation on two datasets with different crop types. The proposed approach compared favourably to state-of-the-art approaches without an increase in computational complexity, while being able to provide descriptors at a higher resolution.

    @article{lincoln32371,
    volume = {3},
    number = {4},
    month = {October},
    author = {Petra Bosilj and Tom Duckett and Grzegorz Cielniak},
    title = {Analysis of morphology-based features for classification of crop and weeds in precision agriculture},
    publisher = {IEEE},
    year = {2018},
    journal = {IEEE Robotics and Automation Letters},
    doi = {10.1109/LRA.2018.2848305},
    pages = {2950--2956},
    url = {http://eprints.lincoln.ac.uk/id/eprint/32371/},
    abstract = {Determining the types of vegetation present in an image is a core step in many precision agriculture tasks. In this paper, we focus on pixel-based approaches for classification of crops versus weeds, especially for complex cases involving overlapping plants and partial occlusion. We examine the benefits of multi-scale and content-driven morphology-based descriptors called Attribute Profiles. These are compared to state-of-the-art keypoint descriptors with a fixed neighbourhood previously used in precision agriculture, namely Histograms of Oriented Gradients and Local Binary Patterns. The proposed classification technique is especially advantageous when coupled with morphology-based segmentation on a max-tree structure, as the same representation can be re-used for feature extraction. The robustness of the approach is demonstrated by an experimental evaluation on two datasets with different crop types. The proposed approach compared favourably to state-of-the-art approaches without an increase in computational complexity, while being able to provide descriptors at a higher resolution.}
    }
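
    Attribute Profiles of the kind examined here are built by applying a morphological attribute filter (computed on a max-tree) at a sequence of increasing thresholds and stacking the results. A minimal sketch using scikit-image's area opening; the thresholds, and the use of area as the attribute, are illustrative choices rather than the paper's configuration:

    import numpy as np
    from skimage.morphology import area_opening

    def attribute_profile(image, area_thresholds=(16, 64, 256)):
        """Stack the image with its area openings at increasing thresholds;
        differences between consecutive levels localise bright structures
        by size, giving a multi-scale, content-driven descriptor."""
        levels = [image] + [area_opening(image, area_threshold=t)
                            for t in area_thresholds]
        return np.stack(levels, axis=-1)

    img = np.zeros((32, 32), dtype=np.uint8)
    img[2:6, 2:6] = 200      # small bright blob (area 16)
    img[10:30, 10:30] = 120  # large bright region (area 400)
    print(attribute_profile(img).shape)  # (32, 32, 4): original + 3 levels
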
  • E. Senft, S. Lemaignan, P. Baxter, and T. Belpaeme, “From evaluating to teaching: rewards and challenges of human control for learning robots,” in Iros 2018 workshop on human/robot in the loop machine learning, 2018.
    [BibTeX] [Abstract] [Download PDF]

    Keeping a human in a robot learning cycle can provide many advantages to improve the learning process. However, most of these improvements are only available when the human teacher is in complete control of the robot's behaviour, and not just providing feedback. This human control can make the learning process safer, allowing the robot to learn in high-stakes interaction scenarios, especially social ones. Furthermore, it allows faster learning as the human guides the robot to the relevant parts of the state space and can provide additional information to the learner. This information can also enable the learning algorithms to learn wider world representations, thus increasing the generalisability of a deployed system. Additionally, learning from end users improves the precision of the final policy as it can be specifically tailored to many situations. Finally, this progressive teaching might create trust between the learner and the teacher, easing the deployment of the autonomous robot. However, with such control comes a range of challenges. Firstly, the rich communication between the robot and the teacher needs to be handled by an interface, which may require complex features. Secondly, the teacher needs to be embedded within the robot action selection cycle, imposing time constraints, which increases the cognitive load on the teacher. Finally, given a cycle of interaction between the robot and the teacher, any mistakes made by the teacher can be propagated to the robot's policy. Nevertheless, we are able to show that empowering the teacher with ways to control a robot's behaviour has the potential to drastically improve both the learning process (allowing robots to learn in a wider range of environments) and the experience of the teacher.

    @inproceedings{lincoln36200,
    booktitle = {IROS 2018 Workshop on Human/Robot in the Loop Machine Learning},
    month = {October},
    title = {From Evaluating to Teaching: Rewards and Challenges of Human Control for Learning Robots},
    author = {Emmanuel Senft and Severin Lemaignan and Paul Baxter and Tony Belpaeme},
    publisher = {Imperial College London},
    year = {2018},
    url = {http://eprints.lincoln.ac.uk/id/eprint/36200/},
    abstract = {Keeping a human in a robot learning cycle can provide many advantages to improve the learning process. However, most of these improvements are only available when the human teacher is in complete control of the robot's behaviour, and not just providing feedback. This human control can make the learning process safer, allowing the robot to learn in high-stakes interaction scenarios, especially social ones. Furthermore, it allows faster learning as the human guides the robot to the relevant parts of the state space and can provide additional information to the learner. This information can also enable the learning algorithms to learn wider world representations, thus increasing the generalisability of a deployed system. Additionally, learning from end users improves the precision of the final policy as it can be specifically tailored to many situations. Finally, this progressive teaching might create trust between the learner and the teacher, easing the deployment of the autonomous robot. However, with such control comes a range of challenges. Firstly, the rich communication between the robot and the teacher needs to be handled by an interface, which may require complex features. Secondly, the teacher needs to be embedded within the robot action selection cycle, imposing time constraints, which increases the cognitive load on the teacher. Finally, given a cycle of interaction between the robot and the teacher, any mistakes made by the teacher can be propagated to the robot's policy. Nevertheless, we are able to show that empowering the teacher with ways to control a robot's behaviour has the potential to drastically improve both the learning process (allowing robots to learn in a wider range of environments) and the experience of the teacher.}
    }
  • Z. Yan, L. Sun, T. Duckett, and N. Bellotto, “Multisensor online transfer learning for 3d lidar-based human detection with a mobile robot,” in 2018 ieee/rsj international conference on intelligent robots and systems (iros), 2018.
    [BibTeX] [Abstract] [Download PDF]

    Human detection and tracking is an essential task for service robots, where the combined use of multiple sensors has potential advantages that are yet to be fully exploited. In this paper, we introduce a framework allowing a robot to learn a new 3D LiDAR-based human classifier from other sensors over time, taking advantage of a multisensor tracking system. The main innovation is the use of different detectors for existing sensors (i.e. RGB-D camera, 2D LiDAR) to train, online, a new 3D LiDAR-based human classifier based on a new 'trajectory probability'. Our framework uses this probability to check whether new detections belong to a human trajectory, estimated by different sensors and/or detectors, and to learn a human classifier in a semi-supervised fashion. The framework has been implemented and tested on a real-world dataset collected by a mobile robot. We present experiments illustrating that our system is able to effectively learn from different sensors and from the environment, and that the performance of the 3D LiDAR-based human classification improves with the number of sensors/detectors used.

    @inproceedings{lincoln32541,
    booktitle = {2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
    month = {October},
    title = {Multisensor Online Transfer Learning for 3D LiDAR-based Human Detection with a Mobile Robot},
    author = {Zhi Yan and Li Sun and Tom Duckett and Nicola Bellotto},
    publisher = {IEEE},
    year = {2018},
    url = {http://eprints.lincoln.ac.uk/id/eprint/32541/},
    abstract = {Human detection and tracking is an essential task for service robots, where the combined use of multiple sensors has potential advantages that are yet to be fully exploited. In this paper, we introduce a framework allowing a robot to learn a new 3D LiDAR-based human classifier from other sensors over time, taking advantage of a multisensor tracking system. The main innovation is the use of different detectors for existing sensors (i.e. RGB-D camera, 2D LiDAR) to train, online, a new 3D LiDAR-based human classifier based on a new 'trajectory probability'. Our framework uses this probability to check whether new detections belong to a human trajectory, estimated by different sensors and/or detectors, and to learn a human classifier in a semi-supervised fashion. The framework has been implemented and tested on a real-world dataset collected by a mobile robot. We present experiments illustrating that our system is able to effectively learn from different sensors and from the environment, and that the performance of the 3D LiDAR-based human classification improves with the number of sensors/detectors used.}
    }
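
    The semi-supervised loop described here can be caricatured in a few lines: a 3D LiDAR detection becomes a positive training example only when the multisensor tracker assigns its trajectory a high probability. Everything below (the Gaussian agreement score, the 2-D toy features, the classifier choice) is an assumption for illustration, not the paper's implementation:

    import numpy as np
    from sklearn.linear_model import SGDClassifier

    def trajectory_probability(detection_xy, track_xy, sigma=0.5):
        """Toy stand-in for the paper's 'trajectory probability': how well a
        3D LiDAR detection agrees with a track estimated by other sensors."""
        d2 = np.sum((np.asarray(detection_xy) - np.asarray(track_xy)) ** 2)
        return float(np.exp(-d2 / (2 * sigma ** 2)))

    clf = SGDClassifier()
    clf.partial_fit([[0.0, 0.0], [5.0, 5.0]], [1, 0], classes=[0, 1])  # seed

    # Online phase: accept a detection as a positive example only when the
    # multisensor track supports it, then update the classifier incrementally.
    detection, track = (1.0, 1.1), (1.0, 1.0)
    if trajectory_probability(detection, track) >= 0.8:
        clf.partial_fit([list(detection)], [1])
    print(clf.predict([list(detection)]))  # classify the new detection
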
  • S. Papadaki, G. Banias, C. Achillas, D. Aidonis, D. Folinas, D. Bochtis, and S. Papangelou, “A humanitarian logistics case study for the intermediary phase accommodation center for refugees and other humanitarian disaster victims,” in Dynamics of disasters, Springer, 2018, vol. 140, p. 157–202. doi:10.1007/978-3-319-97442-2_8
    [BibTeX] [Abstract] [Download PDF]

    The growing and uncontrollable stream of refugees from the Middle East and North Africa has created considerable pressure on governments and societies all over Europe. To establish the theoretical framework, the concept of humanitarian logistics is briefly examined in this paper. Historical data from the nineteenth century onwards illuminates the fact that this influx is not a novelty in the European continent, and the interpretation of statistical data highlights the characteristics and particularities of the current refugee wave, as well as the possible repercussions these could inflict both on hosting societies and on displaced populations. Finally, a review of European and national legislation and policies shows that measures taken so far are disjointed and that no complete but at the same time fair and humanitarian management strategy exists. Within this context, the paper elaborates on the development of a compact accommodation center made of shipping containers, to function as one of the initial stages in adaptation before full social integration of the displaced populations. It aims at maximizing the respect for human rights and values while minimizing the impact on society and on the environment. Some of the humanitarian and ecological issues discussed are: integration of medical, educational, religious and social functions within the unit, optimal land utilization, renewable energy use, and waste management infrastructures. Creating added value for the 'raw' material (shipping containers) and prolonging the unit's life span by enabling transformation and change of use, transportation and reuse, and finally end-of-life dismantlement and recycling also lie within the scope of the project. The overall goal is not only to address the current needs stemming from the refugee crisis, but also to develop a project versatile enough to be adapted for implementation on further social groups in need of support. The paper's results could serve as a useful tool for governments and organizations to better plan ahead and respond fast and efficiently not only in regard to the present humanitarian emergency, but also in any possible similar major disaster situation, including the potential consequences of climate change.

    @incollection{lincoln39233,
    volume = {140},
    month = {September},
    author = {Sofia Papadaki and Georgios Banias and Charisios Achillas and Dimitris Aidonis and Dimitris Folinas and Dionysis Bochtis and Stamatis Papangelou},
    booktitle = {Dynamics of Disasters},
    title = {A Humanitarian Logistics Case Study for the Intermediary Phase Accommodation Center for Refugees and Other Humanitarian Disaster Victims},
    publisher = {Springer},
    year = {2018},
    doi = {10.1007/978-3-319-97442-2\_8},
    pages = {157--202},
    url = {http://eprints.lincoln.ac.uk/id/eprint/39233/},
    abstract = {The growing and uncontrollable stream of refugees from the Middle East and North Africa has created considerable pressure on governments and societies all over Europe. To establish the theoretical framework, the concept of humanitarian logistics is briefly examined in this paper. Historical data from the nineteenth century onwards illuminates the fact that this influx is not a novelty in the European continent, and the interpretation of statistical data highlights the characteristics and particularities of the current refugee wave, as well as the possible repercussions these could inflict both on hosting societies and on displaced populations. Finally, a review of European and national legislation and policies shows that measures taken so far are disjointed and that no complete but at the same time fair and humanitarian management strategy exists.
    Within this context, the paper elaborates on the development of a compact accommodation center made of shipping containers, to function as one of the initial stages in adaptation before full social integration of the displaced populations. It aims at maximizing the respect for human rights and values while minimizing the impact on society and on the environment. Some of the humanitarian and ecological issues discussed are: integration of medical, educational, religious and social functions within the unit, optimal land utilization, renewable energy use, and waste management infrastructures. Creating added value for the 'raw' material (shipping containers) and prolonging the unit's life span by enabling transformation and change of use, transportation and reuse, and finally end-of-life dismantlement and recycling also lie within the scope of the project.
    The overall goal is not only to address the current needs stemming from the refugee crisis, but also to develop a project versatile enough to be adapted for implementation on further social groups in need of support. The paper's results could serve as a useful tool for governments and organizations to better plan ahead and respond fast and efficiently not only in regard to the present humanitarian emergency, but also in any possible similar major disaster situation, including the potential consequences of climate change.}
    }
  • H. Wang, J. Peng, and S. Yue, “A feedback neural network for small target motion detection in cluttered backgrounds,” in The 27th international conference on artificial neural networks, 2018.
    [BibTeX] [Abstract] [Download PDF]

    Small target motion detection is critical for insects to search for and track mates or prey, which always appear as small dim speckles in the visual field. A class of specific neurons, called small target motion detectors (STMDs), has been characterized by exquisite sensitivity for small target motion. Understanding and analyzing the visual pathway of STMD neurons is beneficial for designing artificial visual systems for small target motion detection. Feedback loops have been widely identified in visual neural circuits and play an important role in target detection. However, whether there exists a feedback loop in the STMD visual pathway, and whether a feedback loop could significantly improve the detection performance of STMD neurons, remain unclear. In this paper, we propose a feedback neural network for small target motion detection against naturally cluttered backgrounds. In order to form a feedback loop, the model output is temporally delayed and relayed to a previous neural layer as a feedback signal. Extensive experiments showed a significant improvement of the proposed feedback neural network over the existing STMD-based models for small target motion detection.

    @inproceedings{lincoln33422,
    booktitle = {The 27th International Conference on Artificial Neural Networks},
    month = {September},
    title = {A Feedback Neural Network for Small Target Motion Detection in Cluttered Backgrounds},
    author = {Hongxin Wang and Jigen Peng and Shigang Yue},
    publisher = {IEEE},
    year = {2018},
    url = {http://eprints.lincoln.ac.uk/id/eprint/33422/},
    abstract = {Small target motion detection is critical for insects to search for and track mates or prey, which always appear as small dim speckles in the visual field. A class of specific neurons, called small target motion detectors (STMDs), has been characterized by exquisite sensitivity for small target motion. Understanding and analyzing the visual pathway of STMD neurons is beneficial for designing artificial visual systems for small target motion detection. Feedback loops have been widely identified in visual neural circuits and play an important role in target detection. However, whether there exists a feedback loop in the STMD visual pathway, and whether a feedback loop could significantly improve the detection performance of STMD neurons, remain unclear. In this paper, we propose a feedback neural network for small target motion detection against naturally cluttered backgrounds. In order to form a feedback loop, the model output is temporally delayed and relayed to a previous neural layer as a feedback signal. Extensive experiments showed a significant improvement of the proposed feedback neural network over the existing STMD-based models for small target motion detection.}
    }
  • S. Fadlallah and K. Goher, “System identification and hsdbc-optimized pid control of a portable lower-limb rehabilitation device,” in Robotics, World Scientific, 2018.
    [BibTeX] [Abstract] [Download PDF]

    The present paper introduces a novel portable leg rehabilitation system (PLRS) that is developed to provide the user with the necessary rehabilitation exercises for both the knee and ankle, in addition to a portability feature that overcomes the hardships, in both effort and cost, associated with regular sessions at hospitals and rehabilitation clinics. Prior to realizing the actual prototype, the proposed configuration was visualized using SolidWorks, including its main components. Aiming to control the developed system, and given the fact that tuning controller parameters is not an easy task, the Hybrid Spiral-Dynamics Bacteria-Chemotaxis (HSDBC) algorithm has been applied to the proposed control strategy in order to obtain a satisfactory performance. The obtained system performance was satisfactory in terms of desired elevation and settling time.

    @incollection{lincoln34108,
    booktitle = {Robotics},
    month = {September},
    title = {System Identification and HSDBC-Optimized PID Control of a Portable Lower-Limb Rehabilitation Device},
    author = {Sulaiman Fadlallah and Khaled Goher},
    publisher = {World Scientific},
    year = {2018},
    url = {http://eprints.lincoln.ac.uk/id/eprint/34108/},
    abstract = {The present paper introduces a novel portable leg rehabilitation system (PLRS) that is developed to provide the user with the necessary rehabilitation exercises for both the knee and ankle, in addition to a portability feature that overcomes the hardships, in both effort and cost, associated with regular sessions at hospitals and rehabilitation clinics. Prior to realizing the actual prototype, the proposed configuration was visualized using SolidWorks, including its main components. Aiming to control the developed system, and given the fact that tuning controller parameters is not an easy task, the Hybrid Spiral-Dynamics Bacteria-Chemotaxis (HSDBC) algorithm has been applied to the proposed control strategy in order to obtain a satisfactory performance. The obtained system performance was satisfactory in terms of desired elevation and settling time.}
    }
  • C. Zhao, L. Sun, P. Purkait, T. Duckett, and R. Stolkin, “Dense rgb-d semantic mapping with pixel-voxel neural network,” Sensors, vol. 18, iss. 9, p. 3099, 2018. doi:10.3390/s18093099
    [BibTeX] [Abstract] [Download PDF]

    In this paper, a novel Pixel-Voxel network is proposed for dense 3D semantic mapping, which can perform dense 3D mapping while simultaneously recognizing and labelling the semantic category of each point in the 3D map. In our approach, we fully leverage the advantages of different modalities. That is, the PixelNet can learn high-level contextual information from 2D RGB images, and the VoxelNet can learn 3D geometrical shapes from the 3D point cloud. Unlike existing architectures that fuse score maps from different modalities with equal weights, we propose a softmax weighted fusion stack that adaptively learns the varying contributions of PixelNet and VoxelNet and fuses the score maps according to their respective confidence levels. Our approach achieved competitive results on both the SUN RGB-D and NYU V2 benchmarks, while the runtime of the proposed system is boosted to around 13 Hz, enabling near-real-time performance using an eight-core i7 PC with a single Titan X GPU.

    @article{lincoln34138,
    volume = {18},
    number = {9},
    month = {September},
    author = {Cheng Zhao and Li Sun and Pulak Purkait and Tom Duckett and Rustam Stolkin},
    title = {Dense RGB-D Semantic Mapping with Pixel-Voxel Neural Network},
    publisher = {Multidisciplinary Digital Publishing Institute},
    year = {2018},
    journal = {Sensors},
    doi = {10.3390/s18093099},
    pages = {3099},
    url = {http://eprints.lincoln.ac.uk/id/eprint/34138/},
    abstract = {In this paper, a novel Pixel-Voxel network is proposed for dense 3D semantic mapping, which can perform dense 3D mapping while simultaneously recognizing and labelling the semantic category of each point in the 3D map. In our approach, we fully leverage the advantages of different modalities. That is, the PixelNet can learn high-level contextual information from 2D RGB images, and the VoxelNet can learn 3D geometrical shapes from the 3D point cloud. Unlike existing architectures that fuse score maps from different modalities with equal weights, we propose a softmax weighted fusion stack that adaptively learns the varying contributions of PixelNet and VoxelNet and fuses the score maps according to their respective confidence levels. Our approach achieved competitive results on both the SUN RGB-D and NYU V2 benchmarks, while the runtime of the proposed system is boosted to around 13 Hz, enabling near-real-time performance using an eight-core i7 PC with a single Titan X GPU.}
    }
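
    The softmax weighted fusion stack can be sketched independently of the two networks: a pair of scalars (learned end-to-end in the paper, fixed here for illustration) is passed through a softmax so that the two modalities' score maps are mixed with weights that sum to one:

    import numpy as np

    def softmax(v):
        e = np.exp(v - np.max(v))
        return e / e.sum()

    def fuse_scores(pixel_scores, voxel_scores, logits):
        """Softmax-weighted fusion of two per-class score maps; higher
        logit means higher trust in that modality."""
        w = softmax(np.asarray(logits))
        return w[0] * pixel_scores + w[1] * voxel_scores

    px = np.array([[0.8, 0.2], [0.4, 0.6]])  # per-class scores from PixelNet
    vx = np.array([[0.6, 0.4], [0.1, 0.9]])  # per-class scores from VoxelNet
    print(fuse_scores(px, vx, logits=[1.0, 0.0]).round(3))
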
  • R. Pinsler, R. Akrour, T. Osa, J. Peters, and G. Neumann, “Sample and feedback efficient hierarchical reinforcement learning from human preferences,” in Ieee international conference on robotics and automation (icra), 2018. doi:10.1109/ICRA.2018.8460907
    [BibTeX] [Abstract] [Download PDF]

    While reinforcement learning has led to promising results in robotics, defining an informative reward function can sometimes prove to be challenging. Prior work considered including the human in the loop to jointly learn the reward function and the optimal policy. Generating samples from a physical robot and requesting human feedback are both taxing efforts for which efficiency is critical. In contrast to prior work, in this paper we propose to learn reward functions from both the robot and the human perspectives in order to improve on both efficiency metrics. On one side, learning a reward function from the human perspective increases feedback efficiency by assuming that humans rank trajectories according to an outcome space of reduced dimensionality. On the other side, learning a reward function from the robot perspective circumvents the need for learning a dynamics model while retaining the sample efficiency of model-based approaches. We provide an algorithm that incorporates bi-perspective reward learning into a general hierarchical reinforcement learning framework and demonstrate the merits of our approach on a toy task and a simulated robot grasping task.

    @inproceedings{lincoln31675,
    booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
    month = {September},
    title = {Sample and feedback efficient hierarchical reinforcement learning from human preferences},
    author = {R. Pinsler and R. Akrour and T. Osa and J. Peters and G. Neumann},
    publisher = {IEEE},
    year = {2018},
    doi = {10.1109/ICRA.2018.8460907},
    url = {http://eprints.lincoln.ac.uk/id/eprint/31675/},
    abstract = {While reinforcement learning has led to promising results in robotics, defining an informative reward function can sometimes prove to be challenging. Prior work considered including the human in the loop to jointly learn the reward function and the optimal policy. Generating samples from a physical robot and requesting human feedback are both taxing efforts for which efficiency is critical. In contrast to prior work, in this paper we propose to learn reward functions from both the robot and the human perspectives in order to improve on both efficiency metrics. On the one hand, learning a reward function from the human perspective increases feedback efficiency by assuming that humans rank trajectories according to an outcome space of reduced dimensionality. On the other hand, learning a reward function from the robot perspective circumvents the need for learning a dynamics model while retaining the sample efficiency of model-based approaches. We provide an algorithm that incorporates bi-perspective reward learning into a general hierarchical reinforcement learning framework and demonstrate the merits of our approach on a toy task and a simulated robot grasping task.}
    }
  • L. Sun, Z. Yan, S. M. Mellado, M. Hanheide, and T. Duckett, “3dof pedestrian trajectory prediction learned from long-term autonomous mobile robot deployment data,” International conference on robotics and automation (icra) 2018, 2018. doi:10.1109/icra.2018.8461228
    [BibTeX] [Abstract] [Download PDF]

    This paper presents a novel 3DOF pedestrian trajectory prediction approach for autonomous mobile service robots. While most previously reported methods are based on learning of 2D positions in monocular camera images, our approach uses range-finder sensors to learn and predict 3DOF pose trajectories (i.e. 2D position plus 1D rotation within the world coordinate system). Our approach, T-Pose-LSTM (Temporal 3DOF-Pose Long-Short-Term Memory), is trained using long-term data from real-world robot deployments and aims to learn context-dependent (environment- and time-specific) human activities. Our approach incorporates long-term temporal information (i.e. date and time) with short-term pose observations as input. A sequence-to-sequence LSTM encoder-decoder is trained, which encodes observations into LSTM and then decodes the resulting predictions. On deployment, the approach can perform on-the-fly prediction in real-time. Instead of using manually annotated data, we rely on a robust human detection, tracking and SLAM system, providing us with examples in a global coordinate system. We validate the approach using more than 15 km of pedestrian trajectories recorded in a care home environment over a period of three months. The experiments show that the proposed T-Pose-LSTM model outperforms the state-of-the-art 2D-based method for human trajectory prediction in long-term mobile robot deployments.

    @article{lincoln31956,
    month = {September},
    title = {3DOF Pedestrian Trajectory Prediction Learned from Long-Term Autonomous Mobile Robot Deployment Data},
    author = {Li Sun and Zhi Yan and Sergi Molina Mellado and Marc Hanheide and Tom Duckett},
    publisher = {IEEE},
    year = {2018},
    doi = {10.1109/icra.2018.8461228},
    journal = {International Conference on Robotics and Automation (ICRA) 2018},
    url = {http://eprints.lincoln.ac.uk/id/eprint/31956/},
    abstract = {This paper presents a novel 3DOF pedestrian trajectory prediction approach for autonomous mobile service robots. While most previously reported methods are based on learning of 2D positions in monocular camera images, our approach uses range-finder sensors to learn and predict 3DOF pose trajectories (i.e. 2D position plus 1D rotation within the world coordinate system). Our approach, T-Pose-LSTM (Temporal 3DOF-Pose Long-Short-Term Memory), is trained using long-term data from real-world robot deployments and aims to learn context-dependent (environment- and time-specific) human activities. Our approach incorporates long-term temporal information (i.e. date and time) with short-term pose observations as input. A sequence-to-sequence LSTM encoder-decoder is trained, which encodes observations into LSTM and then decodes the resulting predictions. On deployment, the approach can perform on-the-fly prediction in real-time. Instead of using manually annotated data, we rely on a robust human detection, tracking and SLAM system, providing us with examples in a global coordinate system. We validate the approach using more than 15 km of pedestrian trajectories recorded in a care home environment over a period of three months. The experiments show that the proposed T-Pose-LSTM model outperforms the state-of-the-art 2D-based method for human trajectory prediction in long-term mobile robot deployments.}
    }
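
    The encoder-decoder pattern the abstract refers to can be sketched in a few lines of PyTorch. This is a hedged toy of the general sequence-to-sequence idea, not the T-Pose-LSTM architecture itself: the hidden size, the autoregressive decoding loop and the omission of the paper's date/time inputs are all simplifying assumptions.

    # Sketch: seq2seq LSTM for 3DOF pose (x, y, yaw) trajectory prediction.
    import torch
    import torch.nn as nn

    class Seq2SeqPose(nn.Module):
        def __init__(self, pose_dim=3, hidden=64):
            super().__init__()
            self.encoder = nn.LSTM(pose_dim, hidden, batch_first=True)
            self.decoder = nn.LSTM(pose_dim, hidden, batch_first=True)
            self.head = nn.Linear(hidden, pose_dim)

        def forward(self, observed, horizon):
            _, state = self.encoder(observed)   # encode observed poses
            step = observed[:, -1:, :]          # seed with the last pose
            preds = []
            for _ in range(horizon):            # autoregressive decoding
                out, state = self.decoder(step, state)
                step = self.head(out)
                preds.append(step)
            return torch.cat(preds, dim=1)

    # Toy usage: 2 trajectories, 8 observed poses, predict 12 steps ahead.
    model = Seq2SeqPose()
    future = model(torch.randn(2, 8, 3), horizon=12)
    print(future.shape)  # torch.Size([2, 12, 3])
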
  • A. Kucukyilmaz and Y. Demiris, “Learning shared control by demonstration for personalized wheelchair assistance,” Ieee transactions on haptics, vol. 11, iss. 3, p. 431–442, 2018. doi:10.1109/TOH.2018.2804911
    [BibTeX] [Abstract] [Download PDF]

    An emerging research problem in assistive robotics is the design of methodologies that allow robots to provide personalized assistance to users. For this purpose, we present a method to learn shared control policies from demonstrations offered by a human assistant. We train a Gaussian process (GP) regression model to continuously regulate the level of assistance between the user and the robot, given the user’s previous and current actions and the state of the environment. The assistance policy is learned after only a single human demonstration, i.e. in one-shot. Our technique is evaluated in a one-of-a-kind experimental study, where the machine-learned shared control policy is compared to human assistance. Our analyses show that our technique is successful in emulating human shared control, by matching the location and amount of offered assistance on different trajectories. We observed that the effort requirements of the users were comparable between human-robot and human-human settings. Under the learned policy, the jerkiness of the user’s joystick movements dropped significantly, despite a significant increase in the jerkiness of the robot assistant’s commands. In terms of performance, even though the robotic assistance increased task completion time, the average distance to obstacles stayed in similar ranges to human assistance.

    @article{lincoln31131,
    volume = {11},
    number = {3},
    month = {September},
    author = {Ayse Kucukyilmaz and Yiannis Demiris},
    title = {Learning shared control by demonstration for personalized wheelchair assistance},
    publisher = {Institute of Electrical and Electronics Engineers (IEEE)},
    year = {2018},
    journal = {IEEE Transactions on Haptics},
    doi = {10.1109/TOH.2018.2804911},
    pages = {431--442},
    url = {http://eprints.lincoln.ac.uk/id/eprint/31131/},
    abstract = {An emerging research problem in assistive robotics is the design of methodologies that allow robots to provide personalized assistance to users. For this purpose, we present a method to learn shared control policies from demonstrations offered by a human assistant. We train a Gaussian process (GP) regression model to continuously regulate the level of assistance between the user and the robot, given the user's previous and current actions and the state of the environment. The assistance policy is learned after only a single human demonstration, i.e. in one-shot. Our technique is evaluated in a one-of-a-kind experimental study, where the machine-learned shared control policy is compared to human assistance. Our analyses show that our technique is successful in emulating human shared control, by matching the location and amount of offered assistance on different trajectories. We observed that the effort requirements of the users were comparable between human-robot and human-human settings. Under the learned policy, the jerkiness of the user's joystick movements dropped significantly, despite a significant increase in the jerkiness of the robot assistant's commands. In terms of performance, even though the robotic assistance increased task completion time, the average distance to obstacles stayed in similar ranges to human assistance.}
    }
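
    The core regression step, mapping state features to a continuous assistance level learned from a single demonstration, can be sketched with an off-the-shelf GP. The features, training data and blending rule below are invented placeholders, not the paper's state representation.

    # Sketch: GP regression of an assistance level alpha in [0, 1].
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # Hypothetical demonstration data: (distance to obstacle, joystick
    # magnitude) -> assistance level recorded from a human assistant.
    X_demo = np.array([[2.0, 0.8], [1.0, 0.6], [0.4, 0.5], [0.2, 0.2]])
    y_demo = np.array([0.1, 0.3, 0.7, 0.9])  # more help near obstacles

    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(),
                                  normalize_y=True)
    gp.fit(X_demo, y_demo)

    # At runtime, blend user and robot commands by the predicted level.
    alpha = float(np.clip(gp.predict(np.array([[0.5, 0.4]])), 0.0, 1.0)[0])
    user_cmd, robot_cmd = np.array([1.0, 0.0]), np.array([0.6, 0.4])
    blended = (1 - alpha) * user_cmd + alpha * robot_cmd
    print(alpha, blended)
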
  • W. Jeon, G. Cielniak, and R. Sang-Yong, “Semantic segmentation using trade-off and internal ensemble,” International journal of fuzzy logic and intelligent systems, vol. 18, iss. 3, p. 196–203, 2018. doi:10.5391/IJFIS.2018.18.3.196
    [BibTeX] [Abstract] [Download PDF]

    Computer vision comprises image classification, image segmentation, object detection, tracking, and related tasks. Among these, image segmentation is the most fundamental, dividing an image into foreground and background. This paper proposes an ensemble model using a concept of physical perception for image segmentation. In practice, two connected models, DeepLab and a modified VGG model, provide feedback to each other during training. At inference, we combine the results of the two parallel models and apply atrous spatial pyramid pooling (ASPP) and post-processing with a conditional random field (CRF). The proposed model shows better performance than DeepLab in local areas and about a 1% average improvement in pixel-by-pixel comparison.

    @article{lincoln34496,
    volume = {18},
    number = {3},
    month = {September},
    author = {Wang-Su Jeon and Grzegorz Cielniak and Rhee Sang-Yong},
    title = {Semantic Segmentation Using Trade-Off and Internal Ensemble},
    year = {2018},
    journal = {International Journal of Fuzzy Logic and Intelligent Systems},
    doi = {10.5391/IJFIS.2018.18.3.196},
    pages = {196--203},
    url = {http://eprints.lincoln.ac.uk/id/eprint/34496/},
    abstract = {Computer vision comprises image classification, image segmentation, object detection, tracking, and related tasks. Among these, image segmentation is the most fundamental, dividing an image into foreground and background. This paper proposes an ensemble model using a concept of physical perception for image segmentation. In practice, two connected models, DeepLab and a modified VGG model, provide feedback to each other during training. At inference, we combine the results of the two parallel models and apply atrous spatial pyramid pooling (ASPP) and post-processing with a conditional random field (CRF). The proposed model shows better performance than DeepLab in local areas and about a 1\% average improvement in pixel-by-pixel comparison.}
    }
  • T. Osa, J. Peters, and G. Neumann, “Hierarchical reinforcement learning of multiple grasping strategies with human instructions,” Advanced robotics, vol. 32, iss. 18, p. 955–968, 2018. doi:10.1080/01691864.2018.1509018
    [BibTeX] [Abstract] [Download PDF]

    Grasping is an essential component for robotic manipulation and has been investigated for decades. Prior work on grasping often assumes that a sufficient amount of training data is available for learning and planning robotic grasps. However, since constructing such an exhaustive training dataset is very challenging in practice, it is desirable that a robotic system can autonomously learn and improve its grasping strategy. In this paper, we address this problem using reinforcement learning. Although recent work has presented autonomous data collection through trial and error, such methods are often limited to a single grasp type, e.g., vertical pinch grasp. We present a hierarchical policy search approach for learning multiple grasping strategies. Our framework autonomously constructs a database of grasping motions and point clouds of objects to learn multiple grasping types. We formulate the problem of selecting the grasp location and grasp policy as a bandit problem, which can be interpreted as a variant of active learning. We applied our reinforcement learning to grasping both rigid and deformable objects. The experimental results show that our framework autonomously learns and improves its performance through trial and error and can grasp previously unseen objects with high accuracy.

    @article{lincoln32981,
    volume = {32},
    number = {18},
    month = {September},
    author = {T. Osa and J. Peters and Gerhard Neumann},
    title = {Hierarchical Reinforcement Learning of Multiple Grasping Strategies with Human Instructions},
    publisher = {Taylor \& Francis},
    year = {2018},
    journal = {Advanced Robotics},
    doi = {10.1080/01691864.2018.1509018},
    pages = {955--968},
    url = {http://eprints.lincoln.ac.uk/id/eprint/32981/},
    abstract = {Grasping is an essential component for robotic manipulation and has been investigated for decades. Prior work on grasping often assumes that a sufficient amount of training data is available for learning and planning robotic grasps. However, since constructing such an exhaustive training dataset is very challenging in practice, it is desirable that a robotic system can autonomously learn and improve its grasping strategy. In this paper, we address this problem using reinforcement learning. Although recent work has presented autonomous data collection through trial and error, such methods are often limited to a single grasp type, e.g., vertical pinch grasp. We present a hierarchical policy search approach for learning multiple grasping strategies. Our framework autonomously constructs a database of grasping motions and point clouds of objects to learn multiple grasping types. We formulate the problem of selecting the grasp location and grasp policy as a bandit problem, which can be interpreted as a variant of active learning. We applied our reinforcement learning to grasping both rigid and deformable objects. The experimental results show that our framework autonomously learns and improves its performance through trial and error and can grasp previously unseen objects with high accuracy.}
    }
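
    The abstract frames grasp selection as a bandit problem. Purely as an illustration of that framing (the paper also selects grasp locations and uses learned policies), the sketch below runs a plain UCB1 rule over a few hypothetical grasp types with synthetic success rates.

    # Sketch: UCB1 over grasp types; all numbers are synthetic.
    import math
    import random

    def ucb1_select(successes, trials, t):
        """Pick the arm (grasp type) maximising the UCB1 index."""
        best, best_i = -float("inf"), 0
        for i, (s, n) in enumerate(zip(successes, trials)):
            idx = float("inf") if n == 0 else s / n + math.sqrt(2 * math.log(t) / n)
            if idx > best:
                best, best_i = idx, i
        return best_i

    true_rates = [0.3, 0.7, 0.5]          # unknown success rate per type
    successes, trials = [0, 0, 0], [0, 0, 0]
    for t in range(1, 501):
        g = ucb1_select(successes, trials, t)
        trials[g] += 1
        successes[g] += random.random() < true_rates[g]
    print(trials)  # the best grasp type should accumulate most trials
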
  • S. M. Mustaza, C. Saaj, F. J. Comin, W. A. Albukhanajer, D. Mahdi, and C. Lekakou, “Stiffness control for soft surgical manipulators,” International journal of humanoid robotics, vol. 15, iss. 5, 2018. doi:10.1142/S0219843618500214
    [BibTeX] [Abstract] [Download PDF]

    Tunable stiffness control is critical for undertaking surgical procedures using soft manipulators. However, active stiffness control in soft continuum manipulators is very challenging and has been rarely realized for real-time surgical applications. Low stiffness at the tip is much preferred for safe navigation of the robot in restricted spaces inside the human body. On the other hand, high stiffness at the tip is demanded for efficiently operating surgical instruments. In this paper, the manipulability and characteristics of a class of soft hyper-redundant manipulators, fabricated using Ecoflex-0050TM silicone, are discussed and a new methodology is introduced to actively tune the stiffness matrix, in real-time, for disturbance rejection and stiffness control. Experimental results are used to derive a more accurate description of the characteristics of the soft manipulator, capture the varying stiffness effects of the actuated arm and consequently offer a more accurate response using closed loop feedback control in real-time. The novel results presented in this paper advance the state of the art of tunable stiffness control in soft continuum manipulators for real-time applications.

    @article{lincoln37443,
    volume = {15},
    number = {5},
    month = {August},
    author = {S.M. Mustaza and C Saaj and F.J. Comin and W.A. Albukhanajer and D. Mahdi and C. Lekakou},
    note = {cited By 1},
    title = {Stiffness Control for Soft Surgical Manipulators},
    publisher = {World Scientific},
    year = {2018},
    journal = {International Journal of Humanoid Robotics},
    doi = {10.1142/S0219843618500214},
    url = {http://eprints.lincoln.ac.uk/id/eprint/37443/},
    abstract = {Tunable stiffness control is critical for undertaking surgical procedures using soft manipulators. However, active stiffness control in soft continuum manipulators is very challenging and has been rarely realized for real-time surgical applications. Low stiffness at the tip is much preferred for safe navigation of the robot in restricted spaces inside the human body. On the other hand, high stiffness at the tip is demanded for efficiently operating surgical instruments. In this paper, the manipulability and characteristics of a class of soft hyper-redundant manipulators, fabricated using Ecoflex-0050TM silicone, are discussed and a new methodology is introduced to actively tune the stiffness matrix, in real-time, for disturbance rejection and stiffness control. Experimental results are used to derive a more accurate description of the characteristics of the soft manipulator, capture the varying stiffness effects of the actuated arm and consequently offer a more accurate response using closed loop feedback control in real-time. The novel results presented in this paper advance the state of the art of tunable stiffness control in soft continuum manipulators for real-time applications.}
    }
  • C. Hu, Q. Fu, T. Liu, and S. Yue, “A hybrid visual-model based robot control strategy for micro ground robots,” Sab 2018: from animals to animats 15, vol. 10994, p. 162–174, 2018. doi:10.1007/978-3-319-97628-0_14
    [BibTeX] [Abstract] [Download PDF]

    This paper proposes a hybrid vision-based robot control strategy for micro ground robots by mediating two vision models from mixed categories: a bio-inspired collision avoidance model and a segmentation based target following model. The implemented model coordination strategy is described as a probabilistic model using a finite state machine (FSM) that allows the robot to switch behaviours adapting to the acquired visual information. Experiments on real robots demonstrated the stability and convergence of the embedded hybrid system, including a study of collective behaviour by a swarm of such robots with environment mediation. This research enables micro robots to run visual models with more complexity. Moreover, it showed the possibility of realizing aggregation behaviour on micro robots by utilizing vision as the only sensing modality from non-omnidirectional cameras.

    @article{lincoln32842,
    volume = {10994},
    month = {August},
    author = {Cheng Hu and Qinbing Fu and Tian Liu and Shigang Yue},
    booktitle = {Manoonpong P., Larsen J., Xiong X., Hallam J., Triesch J. (eds) From Animals to Animats 15. SAB 2018. Lecture Notes in Computer Science},
    title = {A Hybrid Visual-Model Based Robot Control Strategy for Micro Ground Robots},
    publisher = {Springer, Cham},
    year = {2018},
    journal = {SAB 2018: From Animals to Animats 15},
    doi = {10.1007/978-3-319-97628-0\_14},
    pages = {162--174},
    url = {http://eprints.lincoln.ac.uk/id/eprint/32842/},
    abstract = {This paper proposes a hybrid vision-based robot control strategy for micro ground robots by mediating two vision models from mixed categories: a bio-inspired collision avoidance model and a segmentation based target following model. The implemented model coordination strategy is described as a probabilistic model using a finite state machine (FSM) that allows the robot to switch behaviours adapting to the acquired visual information. Experiments on real robots demonstrated the stability and convergence of the embedded hybrid system, including a study of collective behaviour by a swarm of such robots with environment mediation. This research enables micro robots to run visual models with more complexity. Moreover, it showed the possibility of realizing aggregation behaviour on micro robots by utilizing vision as the only sensing modality from non-omnidirectional cameras.}
    }
  • K. Elgeneidy, P. Liu, S. Pearson, N. Lohse, and G. Neumann, “Printable soft grippers with integrated bend sensing for handling of crops,” Towards autonomous robotic systems (taros) conference, vol. 2018, iss. 10965, p. 479–480, 2018. doi:10.1007/978-3-319-96728-8
    [BibTeX] [Abstract] [Download PDF]

    Handling delicate crops without damaging or bruising is a challenge facing the automation of tasks within the agri-food sector, which encourages the utilization of soft grippers that are inherently safe and passively compliant. In this paper we present a brief overview of the development of a printable soft gripper integrated with printable bend sensors. The softness of the gripper fingers allows delicate crops to be grasped gently, while the bend sensors are calibrated to measure bending and detect contact. This way the soft gripper not only benefits from the passive compliance of its soft fingers, but also demonstrates a sensor-guided approach for improved grasp control.

    @article{lincoln32296,
    volume = {2018},
    number = {10965},
    month = {August},
    author = {Khaled Elgeneidy and Pengcheng Liu and Simon Pearson and Niels Lohse and Gerhard Neumann},
    title = {Printable Soft Grippers with Integrated Bend Sensing for Handling of Crops},
    publisher = {Springer},
    year = {2018},
    journal = {Towards Autonomous Robotic Systems (TAROS) Conference},
    doi = {10.1007/978-3-319-96728-8},
    pages = {479--480},
    url = {http://eprints.lincoln.ac.uk/id/eprint/32296/},
    abstract = {Handling delicate crops without damaging or bruising is a challenge facing the automation of tasks within the agri-food sector, which encourages the utilization of soft grippers that are inherently safe and passively compliant. In this paper we present a brief overview of the development of a printable soft gripper integrated with printable bend sensors. The softness of the gripper fingers allows delicate crops to be grasped gently, while the bend sensors are calibrated to measure bending and detect contact. This way the soft gripper not only benefits from the passive compliance of its soft fingers, but also demonstrates a sensor-guided approach for improved grasp control.}
    }
  • F. Del Duchetto, A. Kucukyilmaz, L. Iocchi, and M. Hanheide, “Don’t make the same mistakes again and again: learning local recovery policies for navigation from human demonstrations,” Ieee robotics and automation letters, vol. 3, iss. 4, p. 4084–4091, 2018. doi:10.1109/LRA.2018.2861080
    [BibTeX] [Abstract] [Download PDF]

    In this paper, we present a human-in-the-loop learning framework for mobile robots to generate effective local policies in order to recover from navigation failures in long-term autonomy. We present an analysis of failure and recovery cases derived from long-term autonomous operation of a mobile robot, and propose a two-layer learning framework that allows the robot to detect and recover from such navigation failures. Employing a learning by demonstration (LbD) approach, our framework can incrementally learn to autonomously recover from situations it initially needs humans to help with. The learning framework allows for both real-time failure detection and regression using Gaussian processes (GPs). Our empirical results on two different failure scenarios indicate that given 40 failure state observations, the true positive rate of the failure detection model exceeds 90%, ending with successful recovery actions in more than 90% of all detected cases.

    @article{lincoln32850,
    volume = {3},
    number = {4},
    month = {July},
    author = {Francesco Del Duchetto and Ayse Kucukyilmaz and Luca Iocchi and Marc Hanheide},
    note = {{\copyright} 2018 IEEE},
    title = {Don't Make the Same Mistakes Again and Again: Learning Local Recovery Policies for Navigation from Human Demonstrations},
    publisher = {IEEE},
    year = {2018},
    journal = {IEEE Robotics and Automation Letters},
    doi = {10.1109/LRA.2018.2861080},
    pages = {4084--4091},
    url = {http://eprints.lincoln.ac.uk/id/eprint/32850/},
    abstract = {In this paper, we present a human-in-the-loop learning framework for mobile robots to generate effective local policies in order to recover from navigation failures in long-term autonomy. We present an analysis of failure and recovery cases derived from long-term autonomous operation of a mobile robot, and propose a two-layer learning framework that allows the robot to detect and recover from such navigation failures. Employing a learning by demonstration (LbD) approach, our framework can incrementally learn to autonomously recover from situations it initially needs humans to help with. The learning framework allows for both real-time failure detection and regression using Gaussian processes (GPs). Our empirical results on two different failure scenarios indicate that given 40 failure state observations, the true positive rate of the failure detection model exceeds 90\%, ending with successful recovery actions in more than 90\% of all detected cases.}
    }
  • Q. Fu, C. Hu, P. Liu, and S. Yue, “Towards computational models of insect motion detectors for robot vision,” in M. giuliani et al. (eds.): taros 2018, lnai, Springer international publishing ag, part of springer nature 2018, 2018, vol. 10965, p. 465–467.
    [BibTeX] [Abstract] [Download PDF]

    In this essay, we provide a brief survey of computational models of insect motion detectors, and bio-robotic solutions to build fast and reliable motion-sensing systems for robot vision. Vision is an important sensing modality for autonomous robots, since it can extract abundant useful features from visually cluttered and dynamic environments. Fast development of computer vision technology facilitates the modeling of dynamic vision systems for mobile robots.

    @incollection{lincoln31671,
    volume = {10965},
    month = {July},
    author = {Qinbing Fu and Cheng Hu and Pengcheng Liu and Shigang Yue},
    booktitle = {M. Giuliani et al. (Eds.): TAROS 2018, LNAI},
    title = {Towards computational models of insect motion detectors for robot vision},
    publisher = {Springer International Publishing AG, part of Springer Nature 2018},
    pages = {465--467},
    year = {2018},
    url = {http://eprints.lincoln.ac.uk/id/eprint/31671/},
    abstract = {In this essay, we provide a brief survey of computational models of insect motion detectors, and bio-robotic solutions to build fast and reliable motion-sensing systems for robot vision. Vision is an important sensing modality for autonomous robots, since it can extract abundant useful features from visually cluttered and dynamic environments. Fast development of computer vision technology facilitates the modeling of dynamic vision systems for mobile robots.}
    }
  • P. Liu, K. Elgeneidy, S. Pearson, N. Huda, and G. Neumann, “Towards real-time robotic motion planning for grasping in cluttered and uncertain environments,” in 19th towards autonomous robotic systems (taros) conference, 2018.
    [BibTeX] [Abstract] [Download PDF]

    Adaptation to unorganized, congested and uncertain environments is a desirable capability but a challenging task in the development of robotic motion planning algorithms for object grasping. We have to make a trade-off between coping with environmental complexity using computationally expensive approaches and achieving practical manipulation and grasping in real time. In this paper, we present a brief overview of, and research objectives towards, real-time motion planning for grasping in cluttered and uncertain environments. We present feasible ways of approaching this goal, in which key challenges and plausible solutions are discussed.

    @inproceedings{lincoln31679,
    booktitle = {19th Towards Autonomous Robotic Systems (TAROS) Conference},
    month = {July},
    title = {Towards real-time robotic motion planning for grasping in cluttered and uncertain environments},
    author = {Pengcheng Liu and Khaled Elgeneidy and Simon Pearson and Nazmul Huda and Gerhard Neumann},
    publisher = {Springer},
    year = {2018},
    url = {http://eprints.lincoln.ac.uk/id/eprint/31679/},
    abstract = {Adaptation to unorganized, congested and uncertain environments is a desirable capability but a challenging task in the development of robotic motion planning algorithms for object grasping. We have to make a trade-off between coping with environmental complexity using computationally expensive approaches and achieving practical manipulation and grasping in real time. In this paper, we present a brief overview of, and research objectives towards, real-time motion planning for grasping in cluttered and uncertain environments. We present feasible ways of approaching this goal, in which key challenges and plausible solutions are discussed.}
    }
  • C. Hu, Q. Fu, and S. Yue, “Colias iv: the affordable micro robot platform with bio-inspired vision,” in Giuliani m., assaf t., giannaccini m. (eds) towards autonomous robotic systems. taros 2018. lecture notes in computer science, M. Giuliani, T. Assaf, and M. E. Giannaccini, Eds., Springer, 2018, vol. 10965. doi:10.1007/978-3-319-96728-8_17
    [BibTeX] [Abstract] [Download PDF]

    Vision is one of the most important sensing modalities for robots and has been realized mostly on large platforms. However, for micro robots, which are commonly utilized in swarm robotics studies, visual abilities are seldom applied, or only with limited functions and resolution, due to the challenging requirements on computation power and the high data volume to deal with. This research proposes the low-cost micro ground robot Colias IV, which is specifically designed to allow embedded vision-based tasks onboard, such as bio-inspired collision detection neural networks. Numerous successful applications have demonstrated the proposed micro robot Colias IV to be a feasible platform for introducing vision-based algorithms into swarm robotics.

    @incollection{lincoln31672,
    volume = {10965},
    month = {July},
    author = {Cheng Hu and Qinbing Fu and Shigang Yue},
    booktitle = {Giuliani M., Assaf T., Giannaccini M. (eds) Towards Autonomous Robotic Systems. TAROS 2018. Lecture Notes in Computer Science},
    editor = {Manuel Giuliani and Tareq Assaf and Maria Elena Giannaccini},
    title = {Colias IV: The Affordable Micro Robot Platform with Bio-inspired Vision},
    publisher = {Springer},
    year = {2018},
    doi = {10.1007/978-3-319-96728-8\_17},
    url = {http://eprints.lincoln.ac.uk/id/eprint/31672/},
    abstract = {Vision is one of the most important sensing modalities for robots and has been realized mostly on large platforms. However, for micro robots, which are commonly utilized in swarm robotics studies, visual abilities are seldom applied, or only with limited functions and resolution, due to the challenging requirements on computation power and the high data volume to deal with. This research proposes the low-cost micro ground robot Colias IV, which is specifically designed to allow embedded vision-based tasks onboard, such as bio-inspired collision detection neural networks. Numerous successful applications have demonstrated the proposed micro robot Colias IV to be a feasible platform for introducing vision-based algorithms into swarm robotics.}
    }
  • S. Cosar, Z. Yan, F. Zhao, T. Lambrou, S. Yue, and N. Bellotto, “Thermal camera based physiological monitoring with an assistive robot,” in Ieee international engineering in medicine and biology conference, 2018.
    [BibTeX] [Abstract] [Download PDF]

    This paper presents a physiological monitoring system for assistive robots using a thermal camera. It is based on the detection of subtle changes in temperature observed on different parts of the face. First, we segment and estimate these face regions on thermal images. Then, by applying Fourier analysis on temperature data, we estimate respiration and heartbeat rate. This physiological monitoring system has been integrated in an assistive robot for elderly people at home, as part of the ENRICHME project. Its performance has been evaluated on a new thermal dataset for physiological monitoring, which is made publicly available for research purposes.

    @inproceedings{lincoln31779,
    booktitle = {IEEE International Engineering in Medicine and Biology Conference},
    month = {July},
    title = {Thermal camera based physiological monitoring with an assistive robot},
    author = {Serhan Cosar and Zhi Yan and Feng Zhao and Tryphon Lambrou and Shigang Yue and Nicola Bellotto},
    publisher = {IEEE},
    year = {2018},
    url = {http://eprints.lincoln.ac.uk/id/eprint/31779/},
    abstract = {This paper presents a physiological monitoring system for assistive robots using a thermal camera. It is based on the detection of subtle changes in temperature observed on different parts of the face. First, we segment and estimate these face regions on thermal images. Then, by applying Fourier analysis on temperature data, we estimate respiration and heartbeat rate. This physiological monitoring system has been integrated in an assistive robot for elderly people at home, as part of the ENRICHME project. Its performance has been evaluated on a new thermal dataset for physiological monitoring, which is made publicly available for research purposes.}
    }
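
    The Fourier-analysis step can be illustrated directly: given a temperature time series from a face region, the dominant frequency in a plausible respiration band gives the breathing rate. The sampling rate, band limits and synthetic signal below are assumptions, not values from the paper.

    # Sketch: respiration rate from a (synthetic) thermal signal via FFT.
    import numpy as np

    fs = 10.0                                  # sampling rate (Hz), assumed
    t = np.arange(0, 60, 1 / fs)               # one minute of data
    breathing = 0.3 * np.sin(2 * np.pi * 0.25 * t)  # 0.25 Hz = 15 breaths/min
    signal = 34.0 + breathing + 0.05 * np.random.randn(t.size)

    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(t.size, d=1 / fs)
    band = (freqs > 0.1) & (freqs < 0.5)       # plausible respiration band
    rate_hz = freqs[band][np.argmax(spectrum[band])]
    print(f"estimated respiration: {rate_hz * 60:.1f} breaths/min")
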
  • J. P. Fentanes, I. Gould, T. Duckett, S. Pearson, and G. Cielniak, “3d soil compaction mapping through kriging-based exploration with a mobile robot,” Ieee robotics and automation letters, vol. 3, iss. 4, p. 3066–3072, 2018. doi:10.1109/LRA.2018.2849567
    [BibTeX] [Abstract] [Download PDF]

    This paper presents an automated method for creating spatial maps of soil condition with an outdoor mobile robot. Effective soil mapping on farms can enhance yields, reduce inputs and help protect the environment. Traditionally, data are collected manually at an arbitrary set of locations, then soil maps are constructed offline using kriging, a form of Gaussian process regression. This process is laborious and costly, limiting the quality and resolution of the resulting information. Instead, we propose to use an outdoor mobile robot for automatic collection of soil condition data, building soil maps online and also adapting the robot’s exploration strategy on-the-fly based on the current quality of the map. We show how using kriging variance as a reward function for robotic exploration allows for both more efficient data collection and better soil models. This work presents the theoretical foundations for our proposal and an experimental comparison of exploration strategies using soil compaction data from a field generated with a mobile robot.

    @article{lincoln32172,
    volume = {3},
    number = {4},
    month = {July},
    author = {Jaime Pulido Fentanes and Iain Gould and Tom Duckett and Simon Pearson and Grzegorz Cielniak},
    title = {3D Soil Compaction Mapping through Kriging-based Exploration with a Mobile Robot},
    publisher = {IEEE},
    year = {2018},
    journal = {IEEE Robotics and Automation Letters},
    doi = {10.1109/LRA.2018.2849567},
    pages = {3066--3072},
    url = {http://eprints.lincoln.ac.uk/id/eprint/32172/},
    abstract = {This paper presents an automated method for creating spatial maps of soil condition with an outdoor mobile robot. Effective soil mapping on farms can enhance yields, reduce inputs and help protect the environment. Traditionally, data are collected manually at an arbitrary set of locations, then soil maps are constructed offline using kriging, a form of Gaussian process regression. This process is laborious and costly, limiting the quality and resolution of the resulting information. Instead, we propose to use an outdoor mobile robot for automatic collection of soil condition data, building soil maps online and also adapting the robot's exploration strategy on-the-fly based on the current quality of the map. We show how using kriging variance as a reward function for robotic exploration allows for both more efficient data collection and better soil models. This work presents the theoretical foundations for our proposal and an experimental comparison of exploration strategies using soil compaction data from a field generated with a mobile robot.}
    }
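
    A simplified sketch of the kriging-variance reward, under placeholder assumptions about the field and kernel: fit a GP to the soil samples gathered so far, then take the next measurement where the predictive standard deviation is largest. The paper's exploration strategies are richer, but this is the signal they optimise.

    # Sketch: choose the next sampling location by maximum GP variance.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(1)
    X_sampled = rng.uniform(0, 50, size=(10, 2))   # (x, y) positions in m
    y_sampled = rng.normal(2.0, 0.3, size=10)      # compaction readings

    gp = GaussianProcessRegressor(kernel=RBF(10.0) + WhiteKernel(0.01))
    gp.fit(X_sampled, y_sampled)

    # Candidate grid over the field; go where the model is least certain.
    gx, gy = np.meshgrid(np.linspace(0, 50, 25), np.linspace(0, 50, 25))
    candidates = np.column_stack([gx.ravel(), gy.ravel()])
    _, std = gp.predict(candidates, return_std=True)
    print("next measurement at", candidates[np.argmax(std)])
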
  • X. Sun, M. Mangan, and S. Yue, “An analysis of a ring attractor model for cue integration,” in Biomimetic and biohybrid systems, Springer, 2018, p. 459–470. doi:10.1007/978-3-319-95972-6_49
    [BibTeX] [Abstract] [Download PDF]

    Animals and robots must constantly combine multiple streams of noisy information from their senses to guide their actions. Recently, it has been proposed that animals may combine cues optimally using a ring attractor neural network architecture inspired by the head direction system of rats augmented with a dynamic re-weighting mechanism. In this work we report that an older and simpler ring attractor network architecture, requiring no re-weighting property, combines cues according to their certainty for moderate cue conflicts but converges on the most certain cue for larger conflicts. These results are consistent with observations in animal experiments that show sub-optimal cue integration and switching from cue integration to cue selection strategies. This work therefore demonstrates an alternative architecture for those seeking neural correlates of sensory integration in animals. In addition, performance is shown to be robust to noise and miniaturization and thus provides an efficient solution for artificial systems.

    @incollection{lincoln33007,
    month = {July},
    author = {Xuelong Sun and Michael Mangan and Shigang Yue},
    note = {This publication can be purchased online at https://www.springer.com/us/book/9783319959719},
    booktitle = {Biomimetic and Biohybrid Systems},
    title = {An Analysis of a Ring Attractor Model for Cue Integration},
    publisher = {Springer},
    year = {2018},
    doi = {10.1007/978-3-319-95972-6\_49},
    pages = {459--470},
    url = {http://eprints.lincoln.ac.uk/id/eprint/33007/},
    abstract = {Animals and robots must constantly combine multiple streams of noisy information from their senses to guide their actions. Recently, it has been proposed that animals may combine cues optimally using a ring attractor neural network architecture inspired by the head direction system of rats augmented with a dynamic re-weighting mechanism. In this work we report that an older and simpler ring attractor network architecture, requiring no re-weighting property, combines cues according to their certainty for moderate cue conflicts but converges on the most certain cue for larger conflicts. These results are consistent with observations in animal experiments that show sub-optimal cue integration and switching from cue integration to cue selection strategies. This work therefore demonstrates an alternative architecture for those seeking neural correlates of sensory integration in animals. In addition, performance is shown to be robust to noise and miniaturization and thus provides an efficient solution for artificial systems.}
    }
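
    A toy version of the behaviour discussed above, with illustrative parameters only: rate neurons on a ring receive two conflicting directional cue bumps of different strengths, and the recurrent dynamics settle on a heading pulled towards the stronger (more certain) cue.

    # Sketch: ring attractor integrating two conflicting directional cues.
    import numpy as np

    N = 64
    theta = np.linspace(0, 2 * np.pi, N, endpoint=False)

    def bump(centre, strength, width=0.5):
        """Directional cue as a bump of activity on the ring."""
        return strength * np.exp(np.cos(theta - centre) / width)

    # Recurrent weights: local excitation, broad inhibition.
    d = np.cos(theta[:, None] - theta[None, :])
    W = 0.08 * np.exp(d / 0.3) - 0.05

    cue_a = bump(np.pi / 2, strength=1.0)        # more certain cue
    cue_b = bump(np.pi / 2 + 0.6, strength=0.5)  # weaker conflicting cue
    r = np.zeros(N)
    for _ in range(200):                          # simple rate dynamics
        r += 0.1 * (-r + np.maximum(0, W @ r / N + cue_a + cue_b))

    heading = np.angle(np.sum(r * np.exp(1j * theta)))
    print(f"decoded heading: {heading:.2f} rad (cues at 1.57 and 2.17)")
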
  • P. Bosilj, T. Duckett, and G. Cielniak, “Connected attribute morphology for unified vegetation segmentation and classification in precision agriculture,” Computers in industry, vol. 98, p. 226–240, 2018. doi:10.1016/j.compind.2018.02.003
    [BibTeX] [Abstract] [Download PDF]

    Discriminating value crops from weeds is an important task in precision agriculture. In this paper, we propose a novel image processing pipeline based on attribute morphology for both the segmentation and classification tasks. The commonly used approaches for vegetation segmentation often rely on thresholding techniques which reach their decisions globally. By contrast, the proposed method works with connected components obtained by image threshold decomposition, which are naturally nested in a hierarchical structure called the max-tree, and various attributes calculated from these regions. Image segmentation is performed by attribute filtering, preserving or discarding the regions based on their attribute value and allowing for the decision to be reached locally. This segmentation method naturally selects a collection of foreground regions rather than pixels, and the same data structure used for segmentation can be further reused to provide the features for classification, which is realised in our experiments by a support vector machine (SVM). We apply our methods to normalised difference vegetation index (NDVI) images, and demonstrate the performance of the pipeline on a dataset collected by the authors in an onion field, as well as a publicly available dataset for sugar beets. The results show that the proposed segmentation approach can segment the fine details of plant regions locally, in contrast to the state-of-the-art thresholding methods, while providing discriminative features which enable efficient and competitive classification rates for crop/weed discrimination.

    @article{lincoln31634,
    volume = {98},
    month = {June},
    author = {Petra Bosilj and Tom Duckett and Grzegorz Cielniak},
    title = {Connected attribute morphology for unified vegetation segmentation and classification in precision agriculture},
    publisher = {Elsevier},
    year = {2018},
    journal = {Computers in Industry},
    doi = {10.1016/j.compind.2018.02.003},
    pages = {226--240},
    url = {http://eprints.lincoln.ac.uk/id/eprint/31634/},
    abstract = {Discriminating value crops from weeds is an important task in precision agriculture. In this paper, we propose a novel image processing pipeline based on attribute morphology for both the segmentation and classification tasks. The commonly used approaches for vegetation segmentation often rely on thresholding techniques which reach their decisions globally. By contrast, the proposed method works with connected components obtained by image threshold decomposition, which are naturally nested in a hierarchical structure called the max-tree, and various attributes calculated from these regions. Image segmentation is performed by attribute filtering, preserving or discarding the regions based on their attribute value and allowing for the decision to be reached locally. This segmentation method naturally selects a collection of foreground regions rather than pixels, and the same data structure used for segmentation can be further reused to provide the features for classification, which is realised in our experiments by a support vector machine (SVM). We apply our methods to normalised difference vegetation index (NDVI) images, and demonstrate the performance of the pipeline on a dataset collected by the authors in an onion field, as well as a publicly available dataset for sugar beets. The results show that the proposed segmentation approach can segment the fine details of plant regions locally, in contrast to the state-of-the-art thresholding methods, while providing discriminative features which enable efficient and competitive classification rates for crop/weed discrimination.}
    }
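
    The segmentation idea, filtering connected components by an attribute rather than thresholding globally, can be sketched with scikit-image's grey-scale area opening (a simple attribute opening). The NDVI data below is synthetic; the paper's full max-tree pipeline with per-region features and an SVM classifier is considerably richer.

    # Sketch: keep plant-sized bright NDVI components, drop small specks.
    import numpy as np
    from skimage.morphology import area_opening

    rng = np.random.default_rng(2)
    ndvi = rng.random((100, 100)) * 0.2        # background clutter
    ndvi[20:40, 20:40] += 0.6                  # plant-sized bright region
    ndvi[70:72, 70:72] += 0.6                  # tiny bright speck (noise)

    # Area opening flattens bright components smaller than 50 px, so the
    # speck disappears while the plant region survives.
    filtered = area_opening(ndvi, area_threshold=50)
    mask = filtered > 0.5
    print(mask[30, 30], mask[70, 70])  # True False
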
  • A. Binch, N. Cooke, and C. Fox, “Rumex and urtica detection in grassland by uav,” in 14th international conference on precision agriculture, 2018.
    [BibTeX] [Abstract] [Download PDF]

    Previous work (Binch & Fox, 2017) used autonomous ground robotic platforms to successfully detect Urtica (nettle) and Rumex (dock) weeds in grassland, to improve farm productivity and the environment through precision herbicide spraying. It assumed that ground robots swathe entire fields to both detect and spray weeds, but this is a slow process as the slow ground platform must drive over every square meter of the field even where there are no weeds. The present study examines a complementary approach, using unmanned aerial vehicles (UAVs) to perform faster detections, in order to inform slower ground robots of weed locations and direct them to spray the weeds from the ground. In a controlled study, it finds that the existing state-of-the-art (Binch & Fox, 2017) ground detection algorithm based on local binary patterns and support vector machines is easily re-usable from a UAV with a 4K camera despite large differences in camera type, distance, perspective and motion, without retraining. The algorithm achieves 83-95% accuracy on ground platform data with 1-3 independent views, and improves to 90% from single views on aerial data. However, this is only attainable at low altitudes up to 8 feet, speeds below 0.3 m/s, and a vertical view angle, suggesting that autonomous or manual UAV swathing is required to cover fields, rather than use of a single high-altitude photograph. This demonstrates for the first time that a combined aerial detection and ground spraying system is feasible for Rumex and Urtica in grassland, using UAVs to replace the swathing and detection of weeds then dispatching ground platforms to spray them at the detection sites (as spraying by UAV is illegal in EU countries). This reduces the total time required to spray, as the UAV performs the survey stage faster than a ground platform.

    @inproceedings{lincoln31363,
    booktitle = {14th International Conference on Precision Agriculture},
    month = {June},
    title = {Rumex and Urtica detection in grassland by UAV},
    author = {Adam Binch and Nigel Cooke and Charles Fox},
    publisher = {14th International Conference on Precision Agriculture},
    year = {2018},
    url = {http://eprints.lincoln.ac.uk/id/eprint/31363/},
    abstract = {Previous work (Binch \& Fox, 2017) used autonomous ground robotic platforms to successfully detect Urtica (nettle) and Rumex (dock) weeds in grassland, to improve farm productivity and the environment through precision herbicide spraying. It assumed that ground robots swathe entire fields to both detect and spray weeds, but this is a slow process as the slow ground platform must drive over every square meter of the field even where there are no weeds. The present study examines a complementary approach, using unmanned aerial vehicles (UAVs) to perform faster detections, in order to inform slower ground robots of weed locations and direct them to spray the weeds from the ground. In a controlled study, it finds that the existing state-of-the-art (Binch \& Fox, 2017) ground detection algorithm based on local binary patterns and support vector machines is easily re-usable from a UAV with a 4K camera despite large differences in camera type, distance, perspective and motion, without retraining. The algorithm achieves 83-95\% accuracy on ground platform data with 1-3 independent views, and improves to 90\% from single views on aerial data. However, this is only attainable at low altitudes up to 8 feet, speeds below 0.3 m/s, and a vertical view angle, suggesting that autonomous or manual UAV swathing is required to cover fields, rather than use of a single high-altitude photograph. This demonstrates for the first time that a combined aerial detection and ground spraying system is feasible for Rumex and Urtica in grassland, using UAVs to replace the swathing and detection of weeds then dispatching ground platforms to spray them at the detection sites (as spraying by UAV is illegal in EU countries). This reduces the total time required to spray, as the UAV performs the survey stage faster than a ground platform.}
    }
  • T. Duckett, S. Pearson, S. Blackmore, B. Grieve, W. Chen, G. Cielniak, J. Cleaversmith, J. Dai, S. Davis, C. Fox, P. From, I. Georgilas, R. Gill, I. Gould, M. Hanheide, F. Iida, L. Mihalyova, S. Nefti-Meziani, G. Neumann, P. Paoletti, T. Pridmore, D. Ross, M. Smith, M. Stoelen, M. Swainson, S. Wane, P. Wilson, I. Wright, and G. Yang, “Agricultural robotics: the future of robotic agriculture,” UK-RAS Network White Papers, Other, 2018.
    [BibTeX] [Abstract] [Download PDF]

    Agri-Food is the largest manufacturing sector in the UK. It supports a food chain that generates over £108bn p.a., with 3.9m employees in a truly international industry and exports £20bn of UK manufactured goods. However, the global food chain is under pressure from population growth, climate change, political pressures affecting migration, population drift from rural to urban regions and the demographics of an aging global population. These challenges are recognised in the UK Industrial Strategy white paper and backed by significant investment via a Wave 2 Industrial Challenge Fund Investment (“Transforming Food Production: from Farm to Fork”). Robotics and Autonomous Systems (RAS) and associated digital technologies are now seen as enablers of this critical food chain transformation. To meet these challenges, this white paper reviews the state of the art in the application of RAS in Agri-Food production and explores research and innovation needs to ensure these technologies reach their full potential and deliver the necessary impacts in the Agri-Food sector.

    @techreport{lincoln32517,
    month = {June},
    type = {Other},
    title = {Agricultural Robotics: The Future of Robotic Agriculture},
    author = {Tom Duckett and Simon Pearson and Simon Blackmore and Bruce Grieve and Wen-Hua Chen and Grzegorz Cielniak and Jason Cleaversmith and Jian Dai and Steve Davis and Charles Fox and Pal From and Ioannis Georgilas and Richie Gill and Iain Gould and Marc Hanheide and Fumiya Iida and Lyudmila Mihalyova and Samia Nefti-Meziani and Gerhard Neumann and Paolo Paoletti and Tony Pridmore and Dave Ross and Melvyn Smith and Martin Stoelen and Mark Swainson and Sam Wane and Peter Wilson and Isobel Wright and Guang-Zhong Yang},
    publisher = {UK-RAS Network White Papers},
    year = {2018},
    institution = {UK-RAS Network White Papers},
    url = {http://eprints.lincoln.ac.uk/id/eprint/32517/},
    abstract = {Agri-Food is the largest manufacturing sector in the UK. It supports a food chain that generates over {\pounds}108bn p.a., with 3.9m employees in a truly international industry and exports {\pounds}20bn of UK manufactured goods. However, the global food chain is under pressure from population growth, climate change, political pressures affecting migration, population drift from rural to urban regions and the demographics of an aging global population. These challenges are recognised in the UK Industrial Strategy white paper and backed by significant investment via a Wave 2 Industrial Challenge Fund Investment ("Transforming Food Production: from Farm to Fork"). Robotics and Autonomous Systems (RAS) and associated digital technologies are now seen as enablers of this critical food chain transformation. To meet these challenges, this white paper reviews the state of the art in the application of RAS in Agri-Food production and explores research and innovation needs to ensure these technologies reach their full potential and deliver the necessary impacts in the Agri-Food sector.}
    }
  • F. Camara and C. Fox, “Filtration analysis of pedestrian-vehicle interactions for autonomous vehicle control,” in Proceedings of the 15th international conference on intelligent autonomous systems, 2018.
    [BibTeX] [Abstract] [Download PDF]

    Interacting with humans remains a challenge for autonomous vehicles (AVs). When a pedestrian wishes to cross the road in front of the vehicle at an unmarked crossing, the pedestrian and AV must compete for the space, which may be considered as a game-theoretic interaction in which one agent must yield to the other. To inform development of new real-time AV controllers in this setting, this study collects and analyses detailed, manually-annotated, temporal data from real-world human road crossings as they interact with manual drive vehicles. It studies the temporal orderings (filtrations) in which features are revealed to the vehicle and their informativeness over time. It presents a new framework suggesting how optimal stopping controllers may then use such data to enable an AV to decide when to act (by speeding up, slowing down, or otherwise signalling intent to the pedestrian) or alternatively, to continue at its current speed in order to gather additional information from new features, including signals from that pedestrian, before acting itself.

    @inproceedings{lincoln32484,
    booktitle = {Proceedings of the 15th International Conference on Intelligent Autonomous Systems},
    month = {June},
    title = {Filtration analysis of pedestrian-vehicle interactions for autonomous vehicle control},
    author = {Fanta Camara and Charles Fox},
    publisher = {15th International Conference on Intelligent Autonomous Systems},
    year = {2018},
    url = {http://eprints.lincoln.ac.uk/id/eprint/32484/},
    abstract = {Interacting with humans remains a challenge for autonomous vehicles (AVs). When a pedestrian wishes to cross the road in front of the vehicle at an unmarked crossing, the pedestrian and AV must compete for the space, which may be considered as a game-theoretic interaction in which one agent must yield to the other. To inform development of new real-time AV controllers in this setting, this study collects and analyses detailed, manually-annotated, temporal data from real-world human road crossings as they interact with manual drive vehicles. It studies the temporal orderings (filtrations) in which features are revealed to the vehicle and their informativeness over time. It presents a new framework suggesting how optimal stopping controllers may then use such data to enable an AV to decide when to act (by speeding up, slowing down, or otherwise signalling intent to the pedestrian) or alternatively, to continue at its current speed in order to gather additional information from new features, including signals from that pedestrian, before acting itself.}
    }
  • P. From, L. Grimstad, M. Hanheide, S. Pearson, and G. Cielniak, “Rasberry – robotic and autonomous systems for berry production,” Mechanical engineering magazine select articles, vol. 140, iss. 6, 2018. doi:10.1115/1.2018-JUN-6
    [BibTeX] [Abstract] [Download PDF]

    The soft fruit industry is facing unprecedented challenges due to its reliance on manual labour. We are presenting a newly launched robotics initiative which will help to address the issues faced by the industry and enable automation of the main processes involved in soft fruit production. The RASberry project (Robotics and Autonomous Systems for Berry Production) aims to develop autonomous fleets of robots for the horticultural industry. To achieve this goal, the project will bridge several current technological gaps, including the development of a mobile platform suitable for the strawberry fields, software components for fleet management, in-field navigation and mapping, long-term operation, and safe human-robot collaboration. In this paper, we provide a general overview of the project, describe the main system components, highlight interesting challenges from a control point of view and then present three specific applications of the robotic fleets in soft fruit production. The applications demonstrate how robotic fleets can benefit the soft fruit industry by significantly decreasing production costs, addressing labour shortages and being the first step towards fully autonomous robotic systems for agriculture.

    @article{lincoln32874,
    volume = {140},
    number = {6},
    month = {June},
    author = {Pal From and Lars Grimstad and Marc Hanheide and Simon Pearson and Grzegorz Cielniak},
    title = {RASberry - Robotic and Autonomous Systems for Berry Production},
    publisher = {ASME},
    year = {2018},
    journal = {Mechanical Engineering Magazine Select Articles},
    doi = {10.1115/1.2018-JUN-6},
    url = {http://eprints.lincoln.ac.uk/id/eprint/32874/},
    abstract = {The soft fruit industry is facing unprecedented challenges due to its reliance on manual labour. We present a newly launched robotics initiative which will help to address the issues faced by the industry and enable automation of the main processes involved in soft fruit production. The RASberry project (Robotics and Autonomous Systems for Berry Production) aims to develop autonomous fleets of robots for the horticultural industry. To achieve this goal, the project will bridge several current technological gaps including the development of a mobile platform suitable for the strawberry fields, software components for fleet management, in-field navigation and mapping, long-term operation, and safe human-robot collaboration.
    In this paper, we provide a general overview of the project, describe the main system components, highlight interesting challenges from a control point of view and then present three specific applications of the robotic fleets in soft fruit production. The applications demonstrate how robotic fleets can benefit the soft fruit industry by significantly decreasing production costs, addressing labour shortages and being the first step towards fully autonomous robotic systems for agriculture.}
    }
  • A. Schofield, I. Gilchrist, M. Bloj, A. Leonardis, and N. Bellotto, “Understanding images in biological and computer vision,” Interface focus, vol. 8, iss. 4, p. 1–3, 2018. doi:10.1098/rsfs.2018.0027
    [BibTeX] [Abstract] [Download PDF]

    This issue of Interface Focus is a collection of papers arising out of a Royal Society Discussion meeting entitled 'Understanding images in biological and computer vision' held at Carlton Terrace on the 19th and 20th February, 2018. There is a strong tradition of inter-disciplinarity in the study of visual perception and visual cognition. Many of the great natural scientists including Newton [1], Young [2] and Maxwell (see [3]) were intrigued by the relationship between light, surfaces and perceived colour, considering both physical and perceptual processes. Brewster [4] invented both the lenticular stereoscope and the binocular camera but also studied the perception of shape-from-shading. More recently, Marr's [5] description of visual perception as an information processing problem led to great advances in our understanding of both biological and computer vision: both the computer vision and biological vision communities have a Marr medal. The recent successes of deep neural networks in classifying the images that we see and the fMRI images that reveal the activity in our brains during the act of seeing are both intriguing. The links between machine vision systems and biology may at times be weak, but the similarity of some of the operations is nonetheless striking [6]. This two-day meeting brought together researchers from the fields of biological and computer vision, robotics, neuroscience, computer science and psychology to discuss the most recent developments in the field. The meeting was divided into four themes: vision for action, visual appearance, vision for recognition and machine learning.

    @article{lincoln32403,
    volume = {8},
    number = {4},
    month = {June},
    author = {Andrew Schofield and Iain Gilchrist and Marina Bloj and Ales Leonardis and Nicola Bellotto},
    title = {Understanding images in biological and computer vision},
    publisher = {The Royal Society},
    year = {2018},
    journal = {Interface Focus},
    doi = {10.1098/rsfs.2018.0027},
    pages = {1--3},
    url = {http://eprints.lincoln.ac.uk/id/eprint/32403/},
    abstract = {This issue of Interface Focus is a collection of papers arising out of a Royal Society Discussion meeting entitled 'Understanding images in biological and computer vision' held at Carlton Terrace on the 19th and 20th February, 2018. There is a strong tradition of inter-disciplinarity in the study of visual perception and visual cognition. Many of the great natural scientists including Newton [1], Young [2] and Maxwell (see [3]) were intrigued by the relationship between light, surfaces and perceived colour, considering both physical and perceptual processes. Brewster [4] invented both the lenticular stereoscope and the binocular camera but also studied the perception of shape-from-shading. More recently, Marr's [5] description of visual perception as an information processing problem led to great advances in our understanding of both biological and computer vision: both the computer vision and biological vision communities have a Marr medal. The recent successes of deep neural networks in classifying the images that we see and the fMRI images that reveal the activity in our brains during the act of seeing are both intriguing. The links between machine vision systems and biology may at times be weak, but the similarity of some of the operations is nonetheless striking [6]. This two-day meeting brought together researchers from the fields of biological and computer vision, robotics, neuroscience, computer science and psychology to discuss the most recent developments in the field. The meeting was divided into four themes: vision for action, visual appearance, vision for recognition and machine learning.}
    }
  • G. Das, G. Cielniak, P. From, and M. Hanheide, “Discrete event simulations for scalability analysis of robotic in-field logistics in agriculture - a case study,” in Ieee international conference on robotics and automation, workshop on robotic vision and action in agriculture, 2018.
    [BibTeX] [Abstract] [Download PDF]

    Agriculture lends itself to automation due to its labour-intensive processes and the strain posed on workers in the domain. This paper presents a discrete event simulation (DES) framework that allows rapid assessment of different processes and layouts for in-field logistics operations employing a fleet of autonomous transportation robots supporting soft-fruit pickers. The proposed framework can help to answer pressing questions regarding the economic viability and scalability of such fleet operations, which we illustrate and discuss in the context of a specific case study considering strawberry picking operations. In particular, this paper looks into the effect of a robotic fleet in scenarios with different transportation requirements, as well as the effect of allocation algorithms, all without requiring resource demanding field trials. The presented framework demonstrates great potential for the future development and optimisation of efficient robotic fleet operations in agriculture.

    @inproceedings{lincoln32170,
    booktitle = {IEEE International Conference on Robotics and Automation, Workshop on Robotic Vision and Action in Agriculture},
    month = {May},
    title = {Discrete Event Simulations for Scalability Analysis of Robotic In-Field Logistics in Agriculture - A Case Study},
    author = {Gautham Das and Grzegorz Cielniak and Pal From and Marc Hanheide},
    year = {2018},
    url = {http://eprints.lincoln.ac.uk/id/eprint/32170/},
    abstract = {Agriculture lends itself to automation due to its labour-intensive processes and the strain posed on workers in the domain. This paper presents a discrete event simulation (DES) framework that allows rapid assessment of different processes and layouts for in-field logistics operations employing a fleet of autonomous transportation robots supporting soft-fruit pickers. The proposed framework can help to answer pressing questions regarding the economic viability and scalability of such fleet operations, which we illustrate and discuss in the context of a specific case study considering strawberry picking operations. In particular, this paper looks into the effect of a robotic fleet in scenarios with different transportation requirements, as well as the effect of allocation algorithms, all without requiring resource demanding field trials. The presented framework demonstrates great potential for the future development and optimisation of efficient robotic fleet operations in agriculture.}
    }
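
    The entry above describes a DES framework for in-field logistics; below is a minimal, generic discrete event simulation in the same spirit, not the paper's framework. Pickers raise transport requests served by a small robot pool, and the event queue is a heap keyed on time; all rates and durations are invented.

    import heapq
    import random

    def simulate(n_pickers=5, n_robots=2, sim_time=480.0, service=10.0, seed=1):
        """Event queue keyed on time: pickers raise 'tray_full' requests, a small
        robot pool serves them; total picker waiting time is the output metric."""
        random.seed(seed)
        events = []  # (time in minutes, event kind, picker id)
        for p in range(n_pickers):
            heapq.heappush(events, (random.expovariate(1 / 30.0), "tray_full", p))
        free_robots, queue, waiting = n_robots, [], 0.0
        while events:
            t, kind, p = heapq.heappop(events)
            if t > sim_time:
                break
            if kind == "tray_full":
                if free_robots > 0:                      # robot available: serve now
                    free_robots -= 1
                    heapq.heappush(events, (t + service, "robot_done", p))
                else:                                    # otherwise the picker waits
                    queue.append((t, p))
            else:  # robot_done: picker p resumes picking
                heapq.heappush(events, (t + random.expovariate(1 / 30.0), "tray_full", p))
                if queue:                                # robot goes straight to next job
                    t_req, q = queue.pop(0)
                    waiting += t - t_req
                    heapq.heappush(events, (t + service, "robot_done", q))
                else:
                    free_robots += 1
        return waiting

    print(f"total picker waiting time: {simulate():.1f} min")
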
  • J. P. Fentanes, I. Gould, T. Duckett, S. Pearson, and G. Cielniak, “Soil compaction mapping through robot exploration: a study into kriging parameters,” in Icra 2018 workshop on robotic vision and action in agriculture, 2018.
    [BibTeX] [Abstract] [Download PDF]

    Soil condition mapping is a manual, laborious and costly process which requires soil measurements to be taken at fixed, pre-defined locations, limiting the quality of the resulting information maps. For these reasons, we propose the use of an outdoor mobile robot equipped with an actuated soil probe for automatic mapping of soil condition, allowing for both more efficient data collection and better soil models. The robot builds soil models on-line using standard geo-statistical methods such as kriging, and uses the quality of the model to drive the exploration. In this work, we take a closer look at the kriging process itself and how its parameters affect the exploration outcome. For this purpose, we employ soil compaction datasets collected from two real fields of varying characteristics and analyse how the parameters vary between fields and how they change during the exploration process. We particularly focus on the stability of the kriging parameters, their evolution over the exploration process and influence on the resulting soil maps.

    @inproceedings{lincoln32171,
    booktitle = {ICRA 2018 Workshop on Robotic Vision and Action in Agriculture},
    month = {May},
    title = {Soil Compaction Mapping Through Robot Exploration: A Study into Kriging Parameters},
    author = {Jaime Pulido Fentanes and Iain Gould and Tom Duckett and Simon Pearson and Grzegorz Cielniak},
    publisher = {IEEE},
    year = {2018},
    url = {http://eprints.lincoln.ac.uk/id/eprint/32171/},
    abstract = {Soil condition mapping is a manual, laborious and costly process which requires soil measurements to be taken at fixed, pre-defined locations, limiting the quality of the resulting information maps. For these reasons, we propose the use of an outdoor mobile robot equipped with an actuated soil probe for automatic mapping of soil condition, allowing for both more efficient data collection and better soil models. The robot builds soil models on-line using standard geo-statistical methods such as kriging, and uses the quality of the model to drive the exploration. In this work, we take a closer look at the kriging process itself and how its parameters affect the exploration outcome. For this purpose, we employ soil compaction datasets collected from two real fields of varying characteristics and analyse how the parameters vary between fields and how they change during the exploration process. We particularly focus on the stability of the kriging parameters, their evolution over the exploration process and influence on the resulting soil maps.}
    }
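
    A compact sketch of the ordinary kriging step at the heart of the exploration loop described above. The exponential variogram and its nugget/sill/range values are placeholders; the paper is concerned precisely with how such parameters behave, so treat these numbers as arbitrary.

    import numpy as np

    def variogram(h, nugget=0.01, sill=1.0, rng_m=15.0):
        """Exponential variogram; nugget/sill/range are the kriging parameters
        whose stability the paper studies (values here are arbitrary)."""
        return nugget + (sill - nugget) * (1.0 - np.exp(-h / rng_m))

    def ordinary_kriging(xy, z, x0):
        """Predict the soil property at x0 from samples (xy, z); the kriging
        variance is what an exploring robot can use to pick its next probe site."""
        n = len(z)
        d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
        A = np.ones((n + 1, n + 1))
        A[:n, :n] = variogram(d)
        A[-1, -1] = 0.0                              # Lagrange multiplier row/column
        b = np.ones(n + 1)
        b[:n] = variogram(np.linalg.norm(xy - x0, axis=-1))
        sol = np.linalg.solve(A, b)
        w, mu = sol[:n], sol[-1]
        return w @ z, b[:n] @ w + mu                 # prediction, kriging variance

    xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
    z = np.array([1.2, 0.8, 1.0, 1.4])               # made-up compaction readings
    pred, var = ordinary_kriging(xy, z, np.array([5.0, 5.0]))
    print(f"prediction {pred:.2f}, kriging variance {var:.3f}")
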
  • G. H. W. Gebhardt, K. Daun, M. Schnaubelt, and G. Neumann, “Learning robust policies for object manipulation with robot swarms,” in Ieee international conference on robotics and automation, 2018.
    [BibTeX] [Abstract] [Download PDF]

    Swarm robotics investigates how a large population of robots with simple actuation and limited sensors can collectively solve complex tasks. One particularly interesting application with robot swarms is autonomous object assembly. Such tasks have been solved successfully with robot swarms that are controlled by a human operator using a light source. In this paper, we present a method to solve such assembly tasks autonomously based on policy search methods. We split the assembly process in two subtasks: generating a high-level assembly plan and learning a low-level object movement policy. The assembly policy plans the trajectories for each object and the object movement policy controls the trajectory execution. Learning the object movement policy is challenging as it depends on the complex state of the swarm which consists of an individual state for each agent. To approach this problem, we introduce a representation of the swarm which is based on Hilbert space embeddings of distributions. This representation is invariant to the number of agents in the swarm as well as to the allocation of an agent to its position in the swarm. These invariances make the learned policy robust to changes in the swarm and also reduce the search space for the policy search method significantly. We show that the resulting system is able to solve assembly tasks with varying object shapes in multiple simulation scenarios and evaluate the robustness of our representation to changes in the swarm size. Furthermore, we demonstrate that the policies learned in simulation are robust enough to be transferred to real robots.

    @inproceedings{lincoln31674,
    booktitle = {IEEE International Conference on Robotics and Automation},
    month = {May},
    title = {Learning robust policies for object manipulation with robot swarms},
    author = {G. H. W. Gebhardt and K. Daun and M. Schnaubelt and G. Neumann},
    year = {2018},
    url = {http://eprints.lincoln.ac.uk/id/eprint/31674/},
    abstract = {Swarm robotics investigates how a large population of robots with simple actuation and limited sensors can collectively solve complex tasks. One particularly interesting application with robot swarms is autonomous object assembly.
    Such tasks have been solved successfully with robot swarms that are controlled by a human operator using a light source.
    In this paper, we present a method to solve such assembly tasks autonomously based on policy search methods. We split the assembly process in two subtasks: generating a high-level assembly plan and learning a low-level object movement policy. The assembly policy plans the trajectories for each object and the object movement policy controls the trajectory execution. Learning the object movement policy is challenging as it depends on the complex state of the swarm which consists of an individual state for each agent. To approach this problem, we introduce a representation of the swarm which is based on Hilbert space embeddings of distributions. This representation is invariant to the number of agents in the swarm as well as to the allocation of an agent to its position in the swarm. These invariances make the learned policy robust to changes in the swarm and also reduce the search space for the policy search method significantly. We show that the resulting system is able to solve assembly tasks with varying object shapes in multiple simulation scenarios and evaluate the robustness of our representation to changes in the swarm size. Furthermore, we demonstrate that the policies learned in simulation are robust enough to be transferred to real robots.}
    }
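
    The swarm representation in this paper is based on Hilbert space embeddings of distributions; the sketch below shows the empirical kernel mean embedding idea on invented 2-D agent positions. Evaluating the embedding at a fixed grid of inducing points yields a feature vector whose size is independent of the swarm size and invariant to agent ordering, which is the property the paper exploits.

    import numpy as np

    def rbf_kernel(a, b, bandwidth=0.25):
        """Gaussian RBF kernel between two sets of 2-D agent positions."""
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-d2 / (2.0 * bandwidth ** 2))

    def swarm_embedding(agents, inducing_points):
        """Empirical kernel mean embedding of the swarm, evaluated at fixed
        inducing points: a fixed-length feature vector, invariant to agent
        ordering and to the number of agents."""
        return rbf_kernel(inducing_points, agents).mean(axis=1)

    rng = np.random.default_rng(0)
    grid = np.stack(np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 5)), axis=-1).reshape(-1, 2)
    small_swarm = rng.uniform(0, 1, size=(20, 2))
    large_swarm = rng.uniform(0, 1, size=(200, 2))
    # both embeddings have shape (25,), regardless of swarm size or agent order
    print(swarm_embedding(small_swarm, grid).shape, swarm_embedding(large_swarm, grid).shape)
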
  • G. H. W. Gebhardt, K. Daun, M. Schnaubelt, and G. Neumann, “Robust learning of object assembly tasks with an invariant representation of robot swarms,” in International conference on robotics and automation (icra), 2018.
    [BibTeX] [Abstract] [Download PDF]

    Swarm robotics investigates how a large population of robots with simple actuation and limited sensors can collectively solve complex tasks. One particularly interesting application with robot swarms is autonomous object assembly. Such tasks have been solved successfully with robot swarms that are controlled by a human operator using a light source. In this paper, we present a method to solve such assembly tasks autonomously based on policy search methods. We split the assembly process in two subtasks: generating a high-level assembly plan and learning a low-level object movement policy. The assembly policy plans the trajectories for each object and the object movement policy controls the trajectory execution. Learning the object movement policy is challenging as it depends on the complex state of the swarm which consists of an individual state for each agent. To approach this problem, we introduce a representation of the swarm which is based on Hilbert space embeddings of distributions. This representation is invariant to the number of agents in the swarm as well as to the allocation of an agent to its position in the swarm. These invariances make the learned policy robust to changes in the swarm and also reduce the search space for the policy search method significantly. We show that the resulting system is able to solve assembly tasks with varying object shapes in multiple simulation scenarios and evaluate the robustness of our representation to changes in the swarm size. Furthermore, we demonstrate that the policies learned in simulation are robust enough to be transferred to real robots.

    @inproceedings{lincoln30920,
    booktitle = {International Conference on Robotics and Automation (ICRA)},
    month = {May},
    title = {Robust learning of object assembly tasks with an invariant representation of robot swarms},
    author = {G. H. W. Gebhardt and K. Daun and M. Schnaubelt and G. Neumann},
    year = {2018},
    url = {http://eprints.lincoln.ac.uk/id/eprint/30920/},
    abstract = {Swarm robotics investigates how a large population of robots with simple actuation and limited sensors can collectively solve complex tasks. One particularly interesting application with robot swarms is autonomous object assembly. Such tasks have been solved successfully with robot swarms that are controlled by a human operator using a light source. In this paper, we present a method to solve such assembly tasks autonomously based on policy search methods. We split the assembly process in two subtasks: generating a high-level assembly plan and learning a low-level object movement policy. The assembly policy plans the trajectories for each object and the object movement policy controls the trajectory execution.
    Learning the object movement policy is challenging as it depends on the complex state of the swarm which consists of an individual state for each agent. To approach this problem, we introduce a representation of the swarm which is based on Hilbert space embeddings of distributions. This representation is invariant to the number of agents in the swarm as well as to the allocation of an agent to its position in the swarm. These invariances make the learned policy robust to changes in the swarm and also reduce the search space for the policy search method significantly. We show that the resulting system is able to solve assembly tasks with varying object shapes in multiple simulation scenarios and evaluate the robustness of our representation to changes in the swarm size. Furthermore, we demonstrate that the policies learned in simulation are robust enough to be transferred to real robots.}
    }
  • D. Koert, G. Maeda, G. Neumann, and J. Peters, “Learning coupled forward-inverse models with combined prediction errors,” in International conference on robotics and automation (icra), 2018.
    [BibTeX] [Abstract] [Download PDF]

    Challenging tasks in unstructured environments require robots to learn complex models. Given a large amount of information, learning multiple simple models can offer an efficient alternative to a monolithic complex network. Training multiple models, that is, learning their parameters and their responsibilities, has been shown to be prohibitively hard as optimization is prone to local minima. To efficiently learn multiple models for different contexts, we thus develop a new algorithm based on expectation maximization (EM). In contrast to comparable concepts, this algorithm trains multiple modules of paired forward-inverse models by using the prediction errors of both forward and inverse models simultaneously. In particular, we show that our method yields a substantial improvement over only considering the errors of the forward models on tasks where the inverse space contains multiple solutions.

    @inproceedings{lincoln31686,
    booktitle = {International Conference on Robotics and Automation (ICRA)},
    month = {May},
    title = {Learning coupled forward-inverse models with combined prediction errors},
    author = {D. Koert and G. Maeda and G. Neumann and J. Peters},
    year = {2018},
    url = {http://eprints.lincoln.ac.uk/id/eprint/31686/},
    abstract = {Challenging tasks in unstructured environments require robots to learn complex models. Given a large amount of information, learning multiple simple models can offer an efficient alternative to a monolithic complex network. Training multiple models, that is, learning their parameters and their responsibilities, has been shown to be prohibitively hard as optimization is prone to local minima. To efficiently learn multiple models for different contexts, we thus develop a new algorithm based on expectation maximization (EM). In contrast to comparable concepts, this algorithm trains multiple modules of paired forward-inverse models by using the prediction errors of both forward and inverse models simultaneously. In particular, we show that our method yields a substantial improvement over only considering the errors of the forward models on tasks where the inverse space contains multiple solutions.}
    }
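
    A toy numerical illustration of the paper's key idea, using 1-D linear modules purely for simplicity: the E-step responsibilities combine forward and inverse prediction errors, so modules can specialise on branches of a redundant inverse mapping (here y = x^2, which has two x solutions per y). Everything below is invented for illustration, not the paper's implementation.

    import numpy as np

    def em_forward_inverse(x, y, n_modules=2, iters=20, seed=0):
        """Toy EM for paired linear forward (x -> y) and inverse (y -> x) models,
        with responsibilities driven by the combined prediction errors."""
        rng = np.random.default_rng(seed)
        fw = rng.normal(size=(n_modules, 2))   # forward: y ~ a*x + b
        inv = rng.normal(size=(n_modules, 2))  # inverse: x ~ c*y + d
        X = np.stack([x, np.ones_like(x)], axis=1)
        Y = np.stack([y, np.ones_like(y)], axis=1)
        for _ in range(iters):
            err_f = (y[None, :] - fw @ X.T) ** 2   # forward prediction errors
            err_i = (x[None, :] - inv @ Y.T) ** 2  # inverse prediction errors
            r = np.exp(-0.5 * (err_f + err_i))     # combined-error responsibilities
            r = r / (r.sum(axis=0, keepdims=True) + 1e-12)
            for k in range(n_modules):
                sw = np.sqrt(r[k])[:, None]        # weighted least squares per module
                fw[k] = np.linalg.lstsq(sw * X, sw[:, 0] * y, rcond=None)[0]
                inv[k] = np.linalg.lstsq(sw * Y, sw[:, 0] * x, rcond=None)[0]
        return fw, inv

    x = np.linspace(-1.0, 1.0, 200)
    fw, inv = em_forward_inverse(x, x ** 2)
    print("inverse modules (slope, intercept):\n", np.round(inv, 2))
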
  • N. Bellotto, S. Cosar, and Z. Yan, “Human detection and tracking,” in Encyclopedia of robotics, M. H. Ang, O. Khatib, and B. Siciliano, Eds., Springer, 2018. doi:10.1007/978-3-642-41610-1_34-1
    [BibTeX] [Abstract] [Download PDF]

    In robotics, detecting and tracking moving objects is key to implementing useful and safe robot behaviours. Identifying which of the detected objects are humans is particularly important for domestic and public environments. Typically the robot is required to collect environmental data of the surrounding area using its on-board sensors, estimating where humans are and where they are going. Moreover, robots should detect and track humans accurately and as early as possible in order to have enough time to react accordingly.

    @incollection{lincoln30916,
    month = {May},
    author = {Nicola Bellotto and Serhan Cosar and Zhi Yan},
    booktitle = {Encyclopedia of Robotics},
    editor = {M. H. Ang and O. Khatib and B. Siciliano},
    title = {Human detection and tracking},
    publisher = {Springer},
    doi = {10.1007/978-3-642-41610-1\_34-1},
    year = {2018},
    url = {http://eprints.lincoln.ac.uk/id/eprint/30916/},
    abstract = {In robotics, detecting and tracking moving objects is key to implementing useful and safe robot behaviours. Identifying which of the detected objects are humans is particularly important for domestic and public environments. Typically the robot is required to collect environmental data of the surrounding area using its on-board sensors, estimating where humans are and where they are going. Moreover, robots should detect and track humans accurately and as early as possible in order to have enough time to react accordingly.}
    }
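
    As a concrete example of the estimation step discussed in this entry, here is a standard constant-velocity Kalman filter, a common baseline for people tracking. The motion and measurement models and all noise values are generic textbook choices, not taken from the chapter.

    import numpy as np

    def make_cv_kalman(dt=0.1, q=0.05, r=0.04):
        """Constant-velocity model: state [x, y, vx, vy], position measurements."""
        F = np.eye(4); F[0, 2] = F[1, 3] = dt
        H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1.0
        return F, H, q * np.eye(4), r * np.eye(2)

    def kf_step(x, P, z, F, H, Q, R):
        # predict where the person will be, then correct with the new detection
        x, P = F @ x, F @ P @ F.T + Q
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
        return x, P

    F, H, Q, R = make_cv_kalman()
    x, P = np.zeros(4), np.eye(4)
    rng = np.random.default_rng(0)
    for t in range(50):
        true_pos = np.array([1.0 * t * 0.1, 0.5 * t * 0.1])  # person walking diagonally
        z = true_pos + rng.normal(0.0, 0.2, size=2)          # noisy detector output
        x, P = kf_step(x, P, z, F, H, Q, R)
    print("estimated position:", np.round(x[:2], 2), "velocity:", np.round(x[2:], 2))
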
  • S. Basu, A. Omotubora, and C. Fox, “Legal framework for small autonomous agricultural robots,” Ai and society, p. 1–22, 2018. doi:10.1007/s00146-018-0846-4
    [BibTeX] [Abstract] [Download PDF]

    Legal structures may form barriers to, or enablers of, adoption of precision agriculture management with small autonomous agricultural robots. This article develops a conceptual regulatory framework for small autonomous agricultural robots, from a practical, self-contained engineering guide perspective, sufficient to get working research and commercial agricultural roboticists quickly and easily up and running within the law. The article examines the liability framework, or rather lack of it, for agricultural robotics in the EU, and its transposition to UK law, as a case study illustrating general international legal concepts and issues. It examines how the law may provide mitigating effects on the liability regime, and how contracts can be developed between agents within it to enable smooth operation. It covers other legal aspects of operation such as the use of shared communications resources and privacy in the reuse of robot-collected data. Where there are some grey areas in current law, it argues that new proposals could be developed to reform these to promote further innovation and investment in agricultural robots.

    @article{lincoln32026,
    month = {May},
    author = {Subhajit Basu and Adekemi Omotubora and Charles Fox},
    title = {Legal framework for small autonomous agricultural robots},
    publisher = {Springer},
    journal = {AI and Society},
    doi = {10.1007/s00146-018-0846-4},
    pages = {1--22},
    year = {2018},
    url = {http://eprints.lincoln.ac.uk/id/eprint/32026/},
    abstract = {Legal structures may form barriers to, or enablers of, adoption of precision agriculture management with small autonomous agricultural robots. This article develops a conceptual regulatory framework for small autonomous agricultural robots, from a practical, self-contained engineering guide perspective, sufficient to get working research and commercial agricultural roboticists quickly and easily up and running within the law. The article examines the liability framework, or rather lack of it, for agricultural robotics in the EU, and its transposition to UK law, as a case study illustrating general international legal concepts and issues. It examines how the law may provide mitigating effects on the liability regime, and how contracts can be developed between agents within it to enable smooth operation. It covers other legal aspects of operation such as the use of shared communications resources and privacy in the reuse of robot-collected data. Where there are some grey areas in current law, it argues that new proposals could be developed to reform these to promote further innovation and investment in agricultural robots.}
    }
  • A. Postnikov, A. Zolotas, C. Bingham, I. Saleh, C. Arsene, S. Pearson, and R. Bickerton, “Modelling of thermostatically controlled loads to analyse the potential of delivering ffr dsr with a large network of compressor packs,” in 2017 european modelling symposium (ems), 2018, p. 163–167. doi:10.1109/EMS.2017.37
    [BibTeX] [Abstract] [Download PDF]

    This paper presents preliminary work from a current study on a large refrigeration pack network. In particular, the simulation model of a typical refrigeration system with a single pack of 6 compressor units operating as fixed volume displacement machines is presented, and the potential of delivering static FFR with a large population of such packs is studied. Tuning of the model is performed using experimental data collected at the Refrigeration Research Centre in Riseholme, Lincoln. The purpose of the modelling is to monitor the essential dynamics of what resembles a typical supermarket convenience-type store and to measure the capacity of a massive refrigeration network to hold off a considerable amount of load in response to an FFR DSR event. This study focuses on investigating the aggregated response of 150 packs (approx. 1 MW capacity) with refrigeration cases on hysteresis and modulation control. The presented model captures interconnected dynamics (refrigerant flow in the system linked to temperature control, the system's refrigerant demand and the compressors' power consumption). The refrigerant used for simulation is R407F. Refrigerant properties such as specific enthalpy, pressure and temperature at different state points are computed at each time step of the simulation with REFPROP.

    @inproceedings{lincoln32195,
    month = {May},
    author = {Andrey Postnikov and Argyrios Zolotas and Chris Bingham and Ibrahim Saleh and Corneliu Arsene and Simon Pearson and Ronald Bickerton},
    booktitle = {2017 European Modelling Symposium (EMS)},
    title = {Modelling of Thermostatically Controlled Loads to Analyse the Potential of Delivering FFR DSR with a Large Network of Compressor Packs},
    publisher = {IEEE},
    doi = {10.1109/EMS.2017.37},
    pages = {163--167},
    year = {2018},
    url = {http://eprints.lincoln.ac.uk/id/eprint/32195/},
    abstract = {This paper presents preliminary work from a current study on a large refrigeration pack network. In particular, the simulation model of a typical refrigeration system with a single pack of 6 compressor units operating as fixed volume displacement machines is presented, and the potential of delivering static FFR with a large population of such packs is studied. Tuning of the model is performed using experimental data collected at the Refrigeration Research Centre in Riseholme, Lincoln. The purpose of the modelling is to monitor the essential dynamics of what resembles a typical supermarket convenience-type store and to measure the capacity of a massive refrigeration network to hold off a considerable amount of load in response to an FFR DSR event. This study focuses on investigating the aggregated response of 150 packs (approx. 1 MW capacity) with refrigeration cases on hysteresis and modulation control. The presented model captures interconnected dynamics (refrigerant flow in the system linked to temperature control, the system's refrigerant demand and the compressors' power consumption). The refrigerant used for simulation is R407F. Refrigerant properties such as specific enthalpy, pressure and temperature at different state points are computed at each time step of the simulation with REFPROP.}
    }
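
    A toy aggregate model in the spirit of the study above: many refrigeration packs under hysteresis control, with a forced compressor hold-off window standing in for an FFR DSR event. All thermal constants, ratings, and timings are invented; the structure (a switching band plus aggregate power) is the point, not the numbers.

    import numpy as np

    def simulate_packs(n_packs=150, steps=3600, dsr_at=1800, dsr_len=600, seed=0):
        """Aggregate power of many packs on hysteresis control, with a forced
        hold-off window standing in for an FFR DSR event."""
        rng = np.random.default_rng(seed)
        T_low, T_high, P_rated = -2.0, 2.0, 6.7  # degC switch band; kW per pack (~1 MW total)
        T = rng.uniform(T_low, T_high, n_packs)  # case temperatures
        on = rng.random(n_packs) < 0.5           # compressor states
        power = np.zeros(steps)
        for t in range(steps):
            forced_off = dsr_at <= t < dsr_at + dsr_len
            cooling = on & (not forced_off)
            # first-order thermal stand-in: cool when running, warm up otherwise
            T += np.where(cooling, -0.01, 0.004) + rng.normal(0.0, 1e-3, n_packs)
            on = np.where(T > T_high, True, np.where(T < T_low, False, on))
            power[t] = P_rated * cooling.sum()
        return power

    p = simulate_packs()
    print(f"mean power before DSR: {p[:1800].mean():.0f} kW, "
          f"during hold-off: {p[1800:2400].mean():.0f} kW")
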
  • D. Liu and S. Yue, “Event-driven continuous stdp learning with deep structure for visual pattern recognition,” Ieee transactions on cybernetics, vol. 49, iss. 4, 2018. doi:10.1109/tcyb.2018.2801476
    [BibTeX] [Abstract] [Download PDF]

    Human beings can achieve reliable and fast visual pattern recognition with limited time and learning samples. Underlying this capability, the ventral stream plays an important role in object representation and form recognition. Modeling the ventral stream may shed light on further understanding the visual brain in humans and on building artificial vision systems for pattern recognition. Current methods to model the mechanism of the ventral stream are far from exhibiting fast, continuous and event-driven learning like the human brain. To create a visual system similar to the human ventral stream with fast learning capability, in this study we propose a new spiking neural system with an event-driven continuous spike timing dependent plasticity (STDP) learning method using specific spiking timing sequences. Two novel continuous input mechanisms have been used to obtain the continuous input spiking pattern sequence. With the event-driven STDP learning rule, the proposed learning procedure is activated whenever the neuron receives a pre- or post-synaptic spike event. Experimental results on the MNIST database show that the proposed method outperforms all other methods in fast learning scenarios and most current models in exhaustive learning experiments.

    @article{lincoln31010,
    volume = {49},
    number = {4},
    month = {April},
    author = {Daqi Liu and Shigang Yue},
    title = {Event-driven continuous STDP learning with deep structure for visual pattern recognition},
    publisher = {Institute of Electrical and Electronics Engineers (IEEE)},
    year = {2018},
    journal = {IEEE Transactions on Cybernetics},
    doi = {10.1109/tcyb.2018.2801476},
    url = {http://eprints.lincoln.ac.uk/id/eprint/31010/},
    abstract = {Human beings can achieve reliable and fast visual pattern recognition with limited time and learning samples. Underlying this capability, the ventral stream plays an important role in object representation and form recognition. Modeling the ventral stream may shed light on further understanding the visual brain in humans and on building artificial vision systems for pattern recognition. Current methods to model the mechanism of the ventral stream are far from exhibiting fast, continuous and event-driven learning like the human brain. To create a visual system similar to the human ventral stream with fast learning capability, in this study we propose a new spiking neural system with an event-driven continuous spike timing dependent plasticity (STDP) learning method using specific spiking timing sequences. Two novel continuous input mechanisms have been used to obtain the continuous input spiking pattern sequence. With the event-driven STDP learning rule, the proposed learning procedure is activated whenever the neuron receives a pre- or post-synaptic spike event. Experimental results on the MNIST database show that the proposed method outperforms all other methods in fast learning scenarios and most current models in exhaustive learning experiments.}
    }
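
    For orientation, the sketch below implements the generic pair-based STDP rule that event-driven schemes like this paper's build on (the paper's exact continuous, event-driven variant differs): each pre/post spike pairing triggers an immediate weight update whose sign and magnitude depend on the spike timing difference. All constants are textbook-style placeholders.

    import numpy as np

    def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                    tau_plus=20.0, tau_minus=20.0, w_min=0.0, w_max=1.0):
        """Pair-based STDP: potentiate if the presynaptic spike precedes the
        postsynaptic one, depress otherwise; magnitude decays with the gap."""
        dt = t_post - t_pre  # ms
        if dt >= 0:
            w += a_plus * np.exp(-dt / tau_plus)   # long-term potentiation
        else:
            w -= a_minus * np.exp(dt / tau_minus)  # long-term depression
        return float(np.clip(w, w_min, w_max))

    # each pre/post spike pairing is an event that triggers an immediate update
    w = 0.5
    for t_pre, t_post in [(10.0, 15.0), (40.0, 38.0), (60.0, 61.0)]:
        w = stdp_update(w, t_pre, t_post)
        print(f"dt = {t_post - t_pre:+.0f} ms -> w = {w:.4f}")
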
  • H. Wang, J. Peng, and S. Yue, “An improved lptc neural model for background motion direction estimation,” in 2017 joint ieee international conference on development and learning and epigenetic robotics (icdl-epirob), 2018. doi:10.1109/DEVLRN.2017.8329786
    [BibTeX] [Abstract] [Download PDF]

    A class of specialized neurons, called lobula plate tangential cells (LPTCs), has been shown to respond strongly to wide-field motion. The classic model, the elementary motion detector (EMD), and its improved model, the two-quadrant detector (TQD), have been proposed to simulate LPTCs. Although the EMD and TQD can perceive background motion, their outputs are so cluttered that it is difficult to discriminate the actual motion direction of the background. In this paper, we propose a max operation mechanism to model a newly-found transmedullary neuron, Tm9, whose physiological properties do not map onto the EMD or TQD. The proposed max operation mechanism is able to improve the detection performance of the TQD in cluttered backgrounds by filtering out irrelevant motion signals. We demonstrate the functionality of this mechanism in wide-field motion perception.

    @inproceedings{lincoln33421,
    booktitle = {2017 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)},
    month = {April},
    title = {An Improved LPTC Neural Model for Background Motion Direction Estimation},
    author = {Hongxin Wang and Jigen Peng and Shigang Yue},
    publisher = {IEEE},
    year = {2018},
    doi = {10.1109/DEVLRN.2017.8329786},
    url = {http://eprints.lincoln.ac.uk/id/eprint/33421/},
    abstract = {A class of specialized neurons, called lobula plate tangential cells (LPTCs), has been shown to respond strongly to wide-field motion. The classic model, the elementary motion detector (EMD), and its improved model, the two-quadrant detector (TQD), have been proposed to simulate LPTCs. Although the EMD and TQD can perceive background motion, their outputs are so cluttered that it is difficult to discriminate the actual motion direction of the background. In this paper, we propose a max operation mechanism to model a newly-found transmedullary neuron, Tm9, whose physiological properties do not map onto the EMD or TQD. The proposed max operation mechanism is able to improve the detection performance of the TQD in cluttered backgrounds by filtering out irrelevant motion signals. We demonstrate the functionality of this mechanism in wide-field motion perception.}
    }
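
    To make the models named above concrete, here is a classic correlation-type EMD over a 1-D image row, plus a hedged stand-in for the proposed max operation that keeps only the strongest local response per frame before summing. The stimulus and the masking details are invented for illustration and are not the paper's Tm9 model.

    import numpy as np

    def emd_output(frames, delay=1):
        """Correlation-type elementary motion detector over a 1-D image row
        sequence: multiply a delayed sample with the neighbouring current
        sample; the difference of the mirrored half-detectors signals direction."""
        f = np.asarray(frames, dtype=float)
        delayed = np.roll(f, delay, axis=0)
        delayed[:delay] = 0.0
        rightward = delayed[:, :-1] * f[:, 1:]  # A(t - dt) * B(t)
        leftward = f[:, :-1] * delayed[:, 1:]   # A(t) * B(t - dt)
        return rightward - leftward

    def direction_with_max(frames):
        """Keep only the strongest local response per frame before summing,
        suppressing weak, cluttered motion signals."""
        out = emd_output(frames)
        strongest = np.max(np.abs(out), axis=1, keepdims=True)
        masked = np.where(np.abs(out) == strongest, out, 0.0)
        return np.sign(masked.sum())

    # a bright bar drifting rightwards across a 1-D retina
    frames = [np.roll([0, 0, 1, 1, 0, 0, 0, 0], t) for t in range(5)]
    print("estimated direction:", direction_with_max(frames))  # +1.0 -> rightward
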
  • K. Elgeneidy, N. Lohse, and M. Jackson, “Bending angle prediction and control of soft pneumatic actuators with embedded flex sensors: a data-driven approach,” Mechatronics, vol. 50, p. 234–247, 2018. doi:10.1016/j.mechatronics.2017.10.005
    [BibTeX] [Abstract] [Download PDF]

    In this paper, a purely data-driven modelling approach is presented for predicting and controlling the free bending angle response of a typical soft pneumatic actuator (SPA), embedded with a resistive flex sensor. An experimental setup was constructed to test the SPA at different input pressure values and orientations, while recording the resulting feedback from the embedded flex sensor and on-board pressure sensor. A calibrated high speed camera captures image frames during the actuation, which are then analysed using an image processing program to calculate the actual bending angle and synchronise it with the recorded sensory feedback. Empirical models were derived based on the generated experimental data using two common data-driven modelling techniques; regression analysis and artificial neural networks. Both techniques were validated using a new dataset at untrained operating conditions to evaluate their prediction accuracy. Furthermore, the derived empirical model was used as part of a closed-loop PID controller to estimate and control the bending angle of the tested SPA based on the real-time sensory feedback generated. The tuned PID controller allowed the bending SPA to accurately follow stepped and sinusoidal reference signals, even in the presence of pressure leaks in the pneumatic supply. This work demonstrates how purely data-driven models can be effectively used in controlling the bending of SPAs under different operating conditions, avoiding the need for complex analytical modelling and material characterisation. Ultimately, the aim is to create more controllable soft grippers based on such SPAs with embedded sensing capabilities, to be used in applications requiring both a 'soft touch' as well as more controllable object manipulation.

    @article{lincoln30386,
    volume = {50},
    month = {April},
    author = {Khaled Elgeneidy and Niels Lohse and Michael Jackson},
    title = {Bending angle prediction and control of soft pneumatic actuators with embedded flex sensors: a data-driven approach},
    publisher = {Elsevier for International Federation of Automatic Control (IFAC)},
    year = {2018},
    journal = {Mechatronics},
    doi = {10.1016/j.mechatronics.2017.10.005},
    pages = {234--247},
    url = {http://eprints.lincoln.ac.uk/id/eprint/30386/},
    abstract = {In this paper, a purely data-driven modelling approach is presented for predicting and controlling the free bending angle response of a typical soft pneumatic actuator (SPA), embedded with a resistive flex sensor. An experimental setup was constructed to test the SPA at different input pressure values and orientations, while recording the resulting feedback from the embedded flex sensor and on-board pressure sensor. A calibrated high speed camera captures image frames during the actuation, which are then analysed using an image processing program to calculate the actual bending angle and synchronise it with the recorded sensory feedback. Empirical models were derived based on the generated experimental data using two common data-driven modelling techniques; regression analysis and artificial neural networks. Both techniques were validated using a new dataset at untrained operating conditions to evaluate their prediction accuracy. Furthermore, the derived empirical model was used as part of a closed-loop PID controller to estimate and control the bending angle of the tested SPA based on the real-time sensory feedback generated. The tuned PID controller allowed the bending SPA to accurately follow stepped and sinusoidal reference signals, even in the presence of pressure leaks in the pneumatic supply. This work demonstrates how purely data-driven models can be effectively used in controlling the bending of SPAs under different operating conditions, avoiding the need for complex analytical modelling and material characterisation. Ultimately, the aim is to create more controllable soft grippers based on such SPAs with embedded sensing capabilities, to be used in applications requiring both a 'soft touch' as well as more controllable object manipulation.}
    }
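
    The control loop described above combines a data-driven angle model with PID feedback; below is a self-contained toy version. The regression coefficients in flex_to_angle, the first-order plant stand-in, and the PID gains are all made up; in the paper the empirical model is fitted to camera-annotated sensor data.

    class PID:
        """Textbook PID controller; the gains below are illustrative, not the paper's."""
        def __init__(self, kp=0.05, ki=0.02, kd=0.02, dt=1.0):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral, self.prev_err = 0.0, 0.0

        def step(self, target, measured):
            err = target - measured
            self.integral += err * self.dt
            deriv = (err - self.prev_err) / self.dt
            self.prev_err = err
            return self.kp * err + self.ki * self.integral + self.kd * deriv

    def flex_to_angle(flex_reading, pressure):
        """Hypothetical empirical model: bending angle (deg) from the embedded flex
        sensor and pressure feedback. Coefficients are made up; the real ones come
        from regression on the camera-annotated training data."""
        return 0.9 * flex_reading + 4.2 * pressure - 3.0

    pid, pressure, angle = PID(), 0.0, 0.0
    for _ in range(500):
        # first-order plant stand-in: the actuator lags the model's steady state
        angle += 0.2 * (flex_to_angle(10.0 * pressure, pressure) - angle)
        pressure = max(0.0, pid.step(target=45.0, measured=angle))
    print(f"settled bending angle: {angle:.1f} deg")
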
  • A. G. Esfahani and M. Ragaglia, “Robot learning from demonstrations: emulation learning in environments with moving obstacles,” Robotics and autonomous systems, vol. 101, p. 45–56, 2018. doi:10.1016/j.robot.2017.12.001
    [BibTeX] [Abstract] [Download PDF]

    In this paper, we present an approach to the problem of Robot Learning from Demonstration (RLfD) in a dynamic environment, i.e. an environment whose state changes throughout the course of performing a task. RLfD has mostly been successfully exploited only in non-varying environments, e.g. fixed manufacturing workspaces, to reduce programming time and cost. Non-conventional production lines necessitate Human-Robot Collaboration (HRC), implying robots and humans must work in shared workspaces. In such conditions, the robot needs to avoid colliding with the objects that are moved by humans in the workspace. Therefore, the robot is (i) required to learn a task model from demonstrations; (ii) must learn a control policy to avoid a stationary obstacle; and (iii) needs to build a control policy from demonstration to avoid moving obstacles. Here, we present an incremental approach to RLfD addressing all three of these problems. We demonstrate the effectiveness of the proposed RLfD approach by a series of pick-and-place experiments with an ABB YuMi robot. The experimental results show that a person can work in a workspace shared with a robot where the robot successfully avoids colliding with them.

    @article{lincoln34519,
    volume = {101},
    month = {March},
    author = {Amir Ghalamzan Esfahani and Matteo Ragaglia},
    title = {Robot learning from demonstrations: Emulation learning in environments with moving obstacles},
    publisher = {Elsevier},
    year = {2018},
    journal = {Robotics and autonomous systems},
    doi = {10.1016/j.robot.2017.12.001},
    pages = {45--56},
    url = {http://eprints.lincoln.ac.uk/id/eprint/34519/},
    abstract = {In this paper, we present an approach to the problem of Robot Learning from Demonstration (RLfD) in a dynamic environment, i.e. an environment whose state changes throughout the course of performing a task. RLfD has mostly been successfully exploited only in non-varying environments, e.g. fixed manufacturing workspaces, to reduce programming time and cost. Non-conventional production lines necessitate Human-Robot Collaboration (HRC), implying robots and humans must work in shared workspaces. In such conditions, the robot needs to avoid colliding with the objects that are moved by humans in the workspace. Therefore, the robot is (i) required to learn a task model from demonstrations; (ii) must learn a control policy to avoid a stationary obstacle; and (iii) needs to build a control policy from demonstration to avoid moving obstacles. Here, we present an incremental approach to RLfD addressing all three of these problems. We demonstrate the effectiveness of the proposed RLfD approach by a series of pick-and-place experiments with an ABB YuMi robot. The experimental results show that a person can work in a workspace shared with a robot where the robot successfully avoids colliding with them.}
    }
  • R. Shang, B. Du, K. Dai, L. Jiao, A. G. Esfahani, and R. Stolkin, “Quantum-inspired immune clonal algorithm for solving large-scale capacitated arc routing problems,” Memetic computing, vol. 10, iss. 1, p. 81–102, 2018. doi:10.1007/s12293-017-0224-7
    [BibTeX] [Abstract] [Download PDF]

    In this paper, we present an approach to large-scale CARP called the Quantum-Inspired Immune Clonal Algorithm (QICA-CARP). This algorithm combines the features of an artificial immune system with quantum computation, grounded on the qubit and quantum superposition. In QICA-CARP, each antibody of the population is represented by a quantum bit encoding. For this encoding, we use information from the current optimal antibody to steer the population, with high probability, towards a good schema. The quantum rotation gate mutation strategy accelerates the convergence of the original clone operator. Moreover, the quantum crossover operator enhances the exchange of information and increases the diversity of the population, avoiding premature convergence to a local optimum. We also use a repair operator to amend infeasible solutions and ensure the diversity of solutions. Together these allow QICA-CARP to approximate the optimal solution. We demonstrate the effectiveness of our approach with a set of experiments, comparing its results with those obtained by RDG-MAENS and RAM on different test sets. Experimental results show that QICA-CARP outperforms the other algorithms in terms of convergence rate and the quality of the obtained solutions. In particular, QICA-CARP converges to a better lower bound at a faster rate, illustrating that it is suitable for solving large-scale CARP.

    @article{lincoln34759,
    volume = {10},
    number = {1},
    month = {March},
    author = {Ronghua Shang and Bingqi Du and Kaiyun Dai and Licheng Jiao and Amir Ghalamzan Esfahani and Rustam Stolkin},
    title = {Quantum-Inspired Immune Clonal Algorithm for solving large-scale capacitated arc routing problems},
    publisher = {Springer},
    year = {2018},
    journal = {Memetic Computing},
    doi = {10.1007/s12293-017-0224-7},
    pages = {81--102},
    url = {http://eprints.lincoln.ac.uk/id/eprint/34759/},
    abstract = {In this paper, we present an approach to large-scale CARP called the Quantum-Inspired Immune Clonal Algorithm (QICA-CARP). This algorithm combines the features of an artificial immune system with quantum computation, grounded on the qubit and quantum superposition. In QICA-CARP, each antibody of the population is represented by a quantum bit encoding. For this encoding, we use information from the current optimal antibody to steer the population, with high probability, towards a good schema. The quantum rotation gate mutation strategy accelerates the convergence of the original clone operator. Moreover, the quantum crossover operator enhances the exchange of information and increases the diversity of the population, avoiding premature convergence to a local optimum. We also use a repair operator to amend infeasible solutions and ensure the diversity of solutions. Together these allow QICA-CARP to approximate the optimal solution. We demonstrate the effectiveness of our approach with a set of experiments, comparing its results with those obtained by RDG-MAENS and RAM on different test sets. Experimental results show that QICA-CARP outperforms the other algorithms in terms of convergence rate and the quality of the obtained solutions. In particular, QICA-CARP converges to a better lower bound at a faster rate, illustrating that it is suitable for solving large-scale CARP.}
    }
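
    The quantum rotation gate at the core of this family of algorithms admits a short sketch: each gene is a qubit (alpha, beta) observed into a bit with probability beta^2, and amplitudes are rotated towards the current best antibody. This is the generic quantum-inspired EA update on a toy bit-matching fitness, not QICA-CARP's full operator set (no clone, crossover, or repair operators).

    import numpy as np

    def observe(pop, rng):
        """Collapse each qubit (alpha, beta) into a bit: 1 with probability beta^2."""
        return (rng.random(pop.shape[:2]) < pop[..., 1] ** 2).astype(int)

    def rotate_towards(pop, best, sols, delta=0.05 * np.pi):
        """Quantum rotation gate: turn each antibody's amplitudes towards the bits
        of the current best antibody (no rotation where they already agree)."""
        sign = np.where(best[None, :] > sols, 1.0,
                        np.where(best[None, :] < sols, -1.0, 0.0))
        theta = sign * delta
        c, s = np.cos(theta), np.sin(theta)
        a, b = pop[..., 0].copy(), pop[..., 1].copy()
        pop[..., 0] = c * a - s * b
        pop[..., 1] = s * a + c * b
        return pop

    rng = np.random.default_rng(0)
    n_pop, n_bits = 8, 12
    pop = np.full((n_pop, n_bits, 2), 1 / np.sqrt(2))  # all genes in equal superposition
    target = rng.integers(0, 2, n_bits)                # toy fitness: bits matching target
    for _ in range(60):
        sols = observe(pop, rng)
        best = sols[np.argmax((sols == target).sum(axis=1))]
        pop = rotate_towards(pop, best, sols)
    print("best match:", (observe(pop, rng) == target).sum(axis=1).max(), "of", n_bits)
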
  • A. Paraschos, C. Daniel, J. Peters, and G. Neumann, “Using probabilistic movement primitives in robotics,” Autonomous robots, vol. 42, iss. 3, p. 529–551, 2018. doi:10.1007/s10514-017-9648-7
    [BibTeX] [Abstract] [Download PDF]

    Movement Primitives are a well-established paradigm for modular movement representation and generation. They provide a data-driven representation of movements and support generalization to novel situations, temporal modulation, sequencing of primitives and controllers for executing the primitive on physical systems. However, while many MP frameworks exhibit some of these properties, there is a need for a unified framework that implements all of them in a principled way. In this paper, we show that this goal can be achieved by using a probabilistic representation. Our approach models trajectory distributions learned from stochastic movements. Probabilistic operations, such as conditioning can be used to achieve generalization to novel situations or to combine and blend movements in a principled way. We derive a stochastic feedback controller that reproduces the encoded variability of the movement and the coupling of the degrees of freedom of the robot. We evaluate and compare our approach on several simulated and real robot scenarios.

    @article{lincoln27883,
    volume = {42},
    number = {3},
    month = {March},
    author = {Alexandros Paraschos and Christian Daniel and Jan Peters and Gerhard Neumann},
    title = {Using probabilistic movement primitives in robotics},
    publisher = {Springer Verlag},
    year = {2018},
    journal = {Autonomous Robots},
    doi = {10.1007/s10514-017-9648-7},
    pages = {529--551},
    url = {http://eprints.lincoln.ac.uk/id/eprint/27883/},
    abstract = {Movement Primitives are a well-established paradigm for modular movement representation and generation. They provide a data-driven representation of movements and support generalization to novel situations, temporal modulation, sequencing of primitives and controllers for executing the primitive on physical systems. However, while many MP frameworks exhibit some of these properties, there is a need for a unified framework that implements all of them in a principled way. In this paper, we show that this goal can be achieved by using a probabilistic representation. Our approach models trajectory distributions learned from stochastic movements. Probabilistic operations, such as conditioning can be used to achieve generalization to novel situations or to combine and blend movements in a principled way. We derive a stochastic feedback controller that reproduces the encoded variability of the movement and the coupling of the degrees of freedom of the robot. We evaluate and compare our approach on several simulated and real robot scenarios.}
    }
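
    The probabilistic operations this paper builds on are plain Gaussian conditioning; the sketch below conditions a ProMP weight distribution on a via-point, following the standard ProMP formulation. The basis functions, prior, and noise values are invented toy choices.

    import numpy as np

    def rbf_features(t, n_basis=10, width=0.02):
        """Normalised Gaussian basis functions over phase t in [0, 1]."""
        centers = np.linspace(0, 1, n_basis)
        phi = np.exp(-(t - centers) ** 2 / (2 * width))
        return phi / phi.sum()

    def condition_promp(mu_w, Sigma_w, t_star, y_star, sigma_y=1e-4):
        """Condition the weight distribution on passing through y_star at phase
        t_star (Gaussian conditioning); returns the new weight mean/covariance."""
        phi = rbf_features(t_star)                            # feature vector
        k = Sigma_w @ phi / (sigma_y + phi @ Sigma_w @ phi)   # gain vector
        mu_new = mu_w + k * (y_star - phi @ mu_w)
        Sigma_new = Sigma_w - np.outer(k, phi) @ Sigma_w
        return mu_new, Sigma_new

    # prior: weights around a straight-line motion, with moderate variability
    mu_w = np.linspace(0.0, 1.0, 10)
    Sigma_w = 0.05 * np.eye(10)
    mu_c, Sigma_c = condition_promp(mu_w, Sigma_w, t_star=0.5, y_star=0.9)
    ts = np.linspace(0, 1, 5)
    print("conditioned mean trajectory:",
          [round(float(rbf_features(t) @ mu_c), 2) for t in ts])
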
  • J. Guo, K. Elgeneidy, C. Xiang, N. Lohse, L. Justham, and J. Rossiter, “Soft pneumatic grippers embedded with stretchable electroadhesion,” Smart materials and structures, vol. 27, iss. 5, p. 55006, 2018. doi:10.1088/1361-665X/aab579
    [BibTeX] [Abstract] [Download PDF]

    Current soft pneumatic grippers cannot robustly grasp flat materials and flexible objects on curved surfaces without distorting them. Current electroadhesive grippers, on the other hand, are difficult to actively deform to complex shapes to pick up free-form surfaces or objects. An easy-to-implement PneuEA gripper is proposed by the integration of an electroadhesive gripper and a two-fingered soft pneumatic gripper. The electroadhesive gripper was fabricated by segmenting a soft conductive silicone sheet into a two-part electrode design and embedding it in a soft dielectric elastomer. The two-fingered soft pneumatic gripper was manufactured using a standard soft lithography approach. This novel integration has combined the benefits of both the electroadhesive and soft pneumatic grippers. As a result, the proposed PneuEA gripper was not only able to pick-and-place flat and flexible materials such as a porous cloth but also delicate objects such as a light bulb. By combining two soft touch sensors with the electroadhesive, an intelligent and shape-adaptive PneuEA material handling system has been developed. This work is expected to widen the applications of both soft gripper and electroadhesion technologies.

    @article{lincoln32297,
    volume = {27},
    number = {5},
    month = {March},
    author = {Jianglong Guo and Khaled Elgeneidy and C Xiang and Niels Lohse and Laura Justham and Jonathan Rossiter},
    title = {Soft pneumatic grippers embedded with stretchable electroadhesion},
    publisher = {IOP Publishing},
    year = {2018},
    journal = {Smart Materials and Structures},
    doi = {10.1088/1361-665X/aab579},
    pages = {055006},
    url = {http://eprints.lincoln.ac.uk/id/eprint/32297/},
    abstract = {Current soft pneumatic grippers cannot robustly grasp flat materials and flexible objects on curved surfaces without distorting them. Current electroadhesive grippers, on the other hand, are difficult to actively deform to complex shapes to pick up free-form surfaces or objects. An easy-to-implement PneuEA gripper is proposed by the integration of an electroadhesive gripper and a two-fingered soft pneumatic gripper. The electroadhesive gripper was fabricated by segmenting a soft conductive silicone sheet into a two-part electrode design and embedding it in a soft dielectric elastomer. The two-fingered soft pneumatic gripper was manufactured using a standard soft lithography approach. This novel integration has combined the benefits of both the electroadhesive and soft pneumatic grippers. As a result, the proposed PneuEA gripper was not only able to pick-and-place flat and flexible materials such as a porous cloth but also delicate objects such as a light bulb. By combining two soft touch sensors with the electroadhesive, an intelligent and shape-adaptive PneuEA material handling system has been developed. This work is expected to widen the applications of both soft gripper and electroadhesion technologies.}
    }
  • T. Osa, J. Pajarinen, G. Neumann, A. J. Bagnell, P. Abbeel, and J. Peters, “An algorithmic perspective on imitation learning,” Foundations and trends in robotics, vol. 7, iss. 1-2, p. 1–179, 2018. doi:10.1561/2300000053
    [BibTeX] [Abstract] [Download PDF]

    As robots and other intelligent agents move from simple environments and problems to more complex, unstructured settings, manually programming their behavior has become increasingly challenging and expensive. Often, it is easier for a teacher to demonstrate a desired behavior rather than attempt to manually engineer it. This process of learning from demonstrations, and the study of algorithms to do so, is called imitation learning. This work provides an introduction to imitation learning. It covers the underlying assumptions, approaches, and how they relate; the rich set of algorithms developed to tackle the problem; and advice on effective tools and implementation. We intend this paper to serve two audiences. First, we want to familiarize machine learning experts with the challenges of imitation learning, particularly those arising in robotics, and the interesting theoretical and practical distinctions between it and more familiar frameworks like statistical supervised learning theory and reinforcement learning. Second, we want to give roboticists and experts in applied artificial intelligence a broader appreciation for the frameworks and tools available for imitation learning. We pay particular attention to the intimate connection between imitation learning approaches and those of structured prediction Daumé III et al. [2009]. To structure this discussion, we categorize imitation learning techniques based on the following key criteria which drive algorithmic decisions: 1) The structure of the policy space. Is the learned policy a time-index trajectory (trajectory learning), a mapping from observations to actions (so called behavioral cloning [Bain and Sammut, 1996]), or the result of a complex optimization or planning problem at each execution as is common in inverse optimal control methods [Kalman, 1964, Moylan and Anderson, 1973]. 2) The information available during training and testing. In particular, is the learning algorithm privy to the full state that the teacher possesses? Is the learner able to interact with the teacher and gather corrections or more data? Does the learner have a (typically a priori) model of the system with which it interacts? Does the learner have access to the reward (cost) function that the teacher is attempting to optimize? 3) The notion of success. Different algorithmic approaches provide varying guarantees on the resulting learned behavior. These guarantees range from weaker (e.g., measuring disagreement with the agent's decision) to stronger (e.g., providing guarantees on the performance of the learner with respect to a true cost function, either known or unknown). We organize our work by paying particular attention to distinction (1): dividing imitation learning into directly replicating desired behavior (sometimes called behavioral cloning) and learning the hidden objectives of the desired behavior from demonstrations (called inverse optimal control or inverse reinforcement learning [Russell, 1998]). In the latter case, behavior arises as the result of an optimization problem solved for each new instance that the learner faces. In addition to method analysis, we discuss the design decisions a practitioner must make when selecting an imitation learning approach. Moreover, application examples, such as robots that play table tennis [Kober and Peters, 2009], programs that play the game of Go [Silver et al., 2016], and systems that understand natural language [Wen et al., 2015], illustrate the properties and motivations behind different forms of imitation learning. We conclude by presenting a set of open questions and point towards possible future research directions for machine learning.

    @article{lincoln31687,
    volume = {7},
    number = {1-2},
    month = {March},
    author = {Takayuki Osa and Joni Pajarinen and Gerhard Neumann and J. Andrew Bagnell and Pieter Abbeel and Jan Peters},
    title = {An algorithmic perspective on imitation learning},
    publisher = {Now publishers},
    year = {2018},
    journal = {Foundations and Trends in Robotics},
    doi = {10.1561/2300000053},
    pages = {1--179},
    url = {http://eprints.lincoln.ac.uk/id/eprint/31687/},
    abstract = {As robots and other intelligent agents move from simple environments and problems to more complex, unstructured settings, manually programming their behavior has become increasingly challenging and expensive. Often, it is easier for a teacher to demonstrate a desired behavior rather than attempt to manually engineer it. This process of learning from demonstrations, and the study of algorithms to do so, is called imitation learning. This work provides an introduction to imitation learning. It covers the underlying assumptions, approaches, and how they relate; the rich set of algorithms developed to tackle the problem; and advice on effective tools and implementation. We intend this paper to serve two audiences. First, we want to familiarize machine learning experts with the challenges of imitation learning, particularly those arising in robotics, and the interesting theoretical and practical distinctions between it and more familiar frameworks like statistical supervised learning theory and reinforcement learning. Second, we want to give roboticists and experts in applied artificial intelligence a broader appreciation for the frameworks and tools available for imitation learning. We pay particular attention to the intimate connection between imitation learning approaches and those of structured prediction Daum{\'e} III et al. [2009]. To structure this discussion, we categorize imitation learning techniques based on the following key criteria which drive algorithmic decisions:
    1) The structure of the policy space. Is the learned policy a time-index trajectory (trajectory learning), a mapping from observations to actions (so called behavioral cloning [Bain and Sammut, 1996]), or the result of a complex optimization or planning problem at each execution as is common in inverse optimal control methods [Kalman, 1964, Moylan and Anderson, 1973].
    2) The information available during training and testing. In particular, is the learning algorithm privy to the full state that the teacher possess? Is the learner able to interact with the teacher and gather corrections or more data? Does the learner have a (typically a priori) model of the system with which it interacts? Does the learner have access to the reward (cost) function that the teacher is attempting to optimize?
    3) The notion of success. Different algorithmic approaches provide varying guarantees on the resulting learned behavior. These guarantees range from weaker (e.g., measuring disagreement with the agent's decision) to stronger (e.g., providing guarantees on the performance of the learner with respect to a true cost function, either known or unknown). We organize our work by paying particular attention to distinction (1): dividing imitation learning into directly replicating desired behavior (sometimes called behavioral cloning) and learning the hidden objectives of the desired behavior from demonstrations (called inverse optimal control or inverse reinforcement learning [Russell, 1998]). In the latter case, behavior arises as the result of an optimization problem solved for each new instance that the learner faces. In addition to method analysis, we discuss the design decisions a practitioner must make when selecting an imitation learning approach. Moreover, application examples{--}such as robots that play table tennis [Kober and Peters, 2009], programs that play the game of Go [Silver et al., 2016], and systems that understand natural language [Wen et al., 2015]{--} illustrate the properties and motivations behind different forms of imitation learning. We conclude by presenting a set of open questions and point towards possible future research directions for machine learning.}
    }
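    The survey's first criterion can be made concrete in a few lines. The sketch below is a minimal, purely illustrative example (the synthetic "expert" and all numbers are assumptions, not from the paper) of behavioural cloning in its simplest form: fitting a policy as a direct mapping from observations to actions by supervised regression.

        import numpy as np

        # Synthetic demonstrations: 3-D observations and noisy expert actions.
        rng = np.random.default_rng(0)
        obs = rng.uniform(-1.0, 1.0, size=(500, 3))
        expert = lambda s: np.tanh(s @ np.array([0.5, -1.0, 0.3]))   # hypothetical teacher
        actions = expert(obs) + 0.01 * rng.normal(size=500)

        # Behavioural cloning: least-squares fit of a linear policy pi(s) = s @ w.
        w, *_ = np.linalg.lstsq(obs, actions, rcond=None)

        # At execution time the learned policy maps new observations to actions.
        s_new = np.array([0.2, -0.4, 0.1])
        print("expert action:", expert(s_new), "cloned action:", s_new @ w)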
  • E. Senft, S. Lemaignan, M. Bartlett, P. Baxter, and T. Belpaeme, “Robots in the classroom: learning to be a good tutor,” in R4l @ hri2018, 2018.
    [BibTeX] [Abstract] [Download PDF]

    To broaden the adoption and be more inclusive, robotic tutors need to tailor their behaviours to their audience. Traditional approaches, such as Bayesian Knowledge Tracing, try to adapt the content of lessons or the difficulty of tasks to the current estimated knowledge of the student. However, these variations only happen in a limited domain, predefined in advance, and are not able to tackle unexpected variation in a student’s behaviours. We argue that robot adaptation needs to go beyond variations in preprogrammed behaviours and that robots should in effect learn online how to become better tutors. A study is currently being carried out to evaluate how human supervision can teach a robot to support child learning during an educational game using one implementation of this approach.

    @inproceedings{lincoln31959,
    booktitle = {R4L @ HRI2018},
    month = {March},
    title = {Robots in the classroom: Learning to be a Good Tutor},
    author = {Emmanuel Senft and Severin Lemaignan and Madeleine Bartlett and Paul Baxter and Tony Belpaeme},
    year = {2018},
    url = {http://eprints.lincoln.ac.uk/id/eprint/31959/},
    abstract = {To broaden the adoption and be more inclusive, robotic tutors need to tailor their
    behaviours to their audience. Traditional approaches, such as Bayesian Knowledge
    Tracing, try to adapt the content of lessons or the difficulty of tasks to the current
    estimated knowledge of the student. However, these variations only happen in a limited
    domain, predefined in advance, and are not able to tackle unexpected variation in a
    student's behaviours. We argue that robot adaptation needs to go beyond variations in
    preprogrammed behaviours and that robots should in effect learn online how to become
    better tutors. A study is currently being carried out to evaluate how human supervision
    can teach a robot to support child learning during an educational game using one
    implementation of this approach.}
    }
  • P. Lightbody, P. Baxter, and M. Hanheide, “Studying table-top manipulation tasks: a robust framework for object tracking in collaboration,” in The 13th annual acm/ieee international conference on human robot interaction, 2018. doi:10.1145/3173386.3177045
    [BibTeX] [Abstract] [Download PDF]

    Table-top object manipulation is a well-established test bed on which to study both basic foundations of general human-robot interaction and more specific collaborative tasks. A prerequisite, both for studies and for actual collaborative or assistive tasks, is the robust perception of any objects involved. This paper presents a real-time capable and ROS-integrated approach, bringing together state-of-the-art detection and tracking algorithms, integrating perceptual cues from multiple cameras and solving detection, sensor fusion and tracking in one framework. The highly scalable framework was tested in an HRI use-case scenario with 25 objects being reliably tracked under significant temporary occlusions. The use-case demonstrates the suitability of the approach when working with multiple objects in small table-top environments and highlights the versatility and range of analysis available with this framework.

    @inproceedings{lincoln31204,
    booktitle = {The 13th Annual ACM/IEEE International Conference on Human Robot Interaction},
    month = {March},
    title = {Studying table-top manipulation tasks: a robust framework for object tracking in collaboration},
    author = {Peter Lightbody and Paul Baxter and Marc Hanheide},
    publisher = {ACM/IEEE},
    year = {2018},
    doi = {10.1145/3173386.3177045},
    url = {http://eprints.lincoln.ac.uk/id/eprint/31204/},
    abstract = {Table-top object manipulation is a well-established test bed on which to study both basic foundations of general human-robot interaction and more specific collaborative tasks. A prerequisite, both for studies and for actual collaborative or assistive tasks, is the robust perception of any objects involved. This paper presents a real-time capable and ROS-integrated approach, bringing together state-of-the-art detection and tracking algorithms, integrating perceptual cues from multiple cameras and solving detection, sensor fusion and tracking in one framework. The highly scalable framework was tested in a HRI use-case scenario with 25 objects being reliably tracked under significant temporary occlusions. The use-case demonstrates the suitability of the approach when working with multiple objects in small table-top environments and highlights the versatility and range of analysis available with this framework.}
    }
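    A framework of this kind must fuse perceptual cues from multiple cameras into a single object estimate. As a rough, hypothetical illustration of that one step (the paper's ROS-integrated pipeline is considerably more elaborate), the sketch below combines two cameras' position estimates of one object by inverse-variance weighting; all numbers are invented.

        import numpy as np

        # Hypothetical detections of one object from two calibrated cameras:
        # (estimated 3-D position in metres, isotropic measurement variance).
        detections = [
            (np.array([0.52, 0.31, 0.02]), 0.004),   # camera 1
            (np.array([0.49, 0.33, 0.03]), 0.009),   # camera 2
        ]

        # Inverse-variance weighting: trust each camera according to its noise.
        weights = np.array([1.0 / var for _, var in detections])
        positions = np.stack([pos for pos, _ in detections])
        fused = (weights[:, None] * positions).sum(axis=0) / weights.sum()
        fused_var = 1.0 / weights.sum()
        print("fused position:", fused, "variance:", fused_var)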
  • I. Saleh, A. Postnikov, C. Arsene, A. Zolotas, C. Bingham, R. Bickerton, and S. Pearson, “Impact of demand side response on a commercial retail refrigeration system,” Energies, vol. 11, iss. 2, p. 371, 2018. doi:10.3390/en11020371
    [BibTeX] [Abstract] [Download PDF]

    The UK National Grid has placed increased emphasis on the development of Demand Side Response (DSR) tariff mechanisms to manage load at peak times. Refrigeration systems, along with HVAC, are estimated to consume 14% of the UK's electricity and could have a significant role in DSR applications. However, characterized by relatively low individual electrical loads and massive asset numbers, multiple low-power refrigerators need aggregation for inclusion in these tariffs. In this paper, the impact of Demand Side Response (DSR) control mechanisms on food retailing refrigeration systems is investigated. The experiments are conducted in a test-rig built to resemble a typical small supermarket store. The paper demonstrates how the temperature and pressure profiles of the system, the active power and the drawn current of the compressors are affected following a rapid shut down and subsequent return to normal operation as a response to a DSR event. Moreover, risks and challenges associated with primary and secondary Firm Frequency Response (FFR) mechanisms, where the load is rapidly shed at high speed in response to changes in grid frequency, are considered. For instance, measurements are included that show a significant increase of approx. 30% in peak inrush currents when the system returns to normal operation at the end of a DSR event. We also consider how high inrush currents after a DSR event can produce voltage fluctuations in the supply, and we assess the risks to the local power supply system.

    @article{lincoln31137,
    volume = {11},
    number = {2},
    month = {February},
    author = {Ibrahim Saleh and Andrey Postnikov and Corneliu Arsene and Argyrios Zolotas and Chris Bingham and Ronald Bickerton and Simon Pearson},
    title = {Impact of demand side response on a commercial retail refrigeration system},
    publisher = {MDPI},
    year = {2018},
    journal = {Energies},
    doi = {10.3390/en11020371},
    pages = {371},
    url = {http://eprints.lincoln.ac.uk/id/eprint/31137/},
    abstract = {The UK National Grid has placed increased emphasis on the development of Demand Side Response (DSR) tariff mechanisms to manage load at peak times. Refrigeration systems, along with HVAC, are estimated to consume 14\% of the UK's electricity and could have a significant role for DSR application. However, characterized by relatively low individual electrical loads and massive asset numbers, multiple low power refrigerators need aggregation for inclusion in these tariffs. In this paper, the impact of the Demand Side Response (DSR) control mechanisms on food retailing refrigeration systems is investigated. The experiments are conducted in a test-rig built to resemble a typical small supermarket store. The paper demonstrates how the temperature and pressure profiles of the system, the active power and the drawn current of the compressors are affected following a rapid shut down and subsequent return to normal operation as a response to a DSR event. Moreover, risks and challenges associated with primary and secondary Firm Frequency Response (FFR) mechanisms, where the load is rapidly shed at high speed in response to changes in grid frequency, is considered. For instance, measurements are included that show a significant increase in peak inrush currents of approx. 30\% when the system returns to normal operation at the end of a DSR event. Consideration of how high inrush currents after a DSR event can produce voltage fluctuations of the supply and we assess risks to the local power supply system.}
    }
  • R. Shang, Y. Yuan, L. Jiao, Y. Meng, and A. G. Esfahani, “A self-paced learning algorithm for change detection in synthetic aperture radar images,” Signal processing, vol. 142, p. 375–387, 2018. doi:10.1016/j.sigpro.2017.07.023
    [BibTeX] [Abstract] [Download PDF]

    Detecting changed regions between two given synthetic aperture radar images is very important for monitoring changes in landscapes, ecosystems and so on. This can be formulated as a classification problem and addressed by learning a classifier; however, traditional machine learning classification methods easily get stuck in local optima caused by noisy data. Hence, we propose an unsupervised algorithm that constructs a classifier based on self-paced learning. Self-paced learning is a recently developed supervised learning approach that has been proven capable of effectively overcoming this shortcoming. After applying a pre-classification to the difference image, we uniformly select samples using the initial result. Then, self-paced learning is utilized to train a classifier. Finally, a filter based on spatial contextual information is used to further smooth the classification result. In order to demonstrate the efficiency of the proposed algorithm, we apply it to five real synthetic aperture radar image datasets. The results obtained by our algorithm are compared with those of five state-of-the-art algorithms, demonstrating that our algorithm outperforms them in terms of accuracy and robustness.

    @article{lincoln34757,
    volume = {142},
    month = {January},
    author = {Ronghua Shang and Yijing Yuan and Licheng Jiao and Yang Meng and Amir Ghalamzan Esfahani},
    title = {A self-paced learning algorithm for change detection in synthetic aperture radar images},
    publisher = {Elsevier},
    year = {2018},
    journal = {Signal Processing},
    doi = {10.1016/j.sigpro.2017.07.023},
    pages = {375--387},
    url = {http://eprints.lincoln.ac.uk/id/eprint/34757/},
    abstract = {Detecting changed regions between two given synthetic aperture radar images is very important to monitor the change of landscapes, change of ecosystem and so on. This can be formulated as a classification problem and addressed by learning a classifier, traditional machine learning classification methods very easily stick to local optima which can be caused by noises of data. Hence, we propose an unsupervised algorithm aiming at constructing a classifier based on self-paced learning. Self-paced learning is a recently developed supervised learning approach and
    has been proven to be capable to overcome effectively this shortcoming. After applying a pre-classification to the difference image, we uniformly select samples using the initial result. Then, self-paced learning is utilized to train a classifier. Finally, a filter is used based on spatial contextual information to further smooth the classification result. In order to demonstrate the efficiency of the proposed algorithm, we apply our proposed algorithm on five real synthetic aperture radar images datasets. The results obtained by our algorithm are compared with five other state-of-the-art algorithms, which demonstrates that our algorithm outperforms those state-of-the-art algorithms in terms of accuracy and robustness.}
    }
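    The core loop the abstract describes, train on the currently "easy" (low-loss) samples and then relax the threshold to admit harder ones, is sketched below on synthetic two-class data. All parameter choices are illustrative; the pre-classification and spatial filtering stages of the actual method are omitted.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # Synthetic stand-in for changed/unchanged pixels of a difference image.
        rng = np.random.default_rng(1)
        X = np.vstack([rng.normal(0.0, 1.0, (200, 5)), rng.normal(1.5, 1.0, (200, 5))])
        y = np.array([0] * 200 + [1] * 200)

        clf = LogisticRegression().fit(X, y)      # warm start on all samples
        lam = 0.5                                 # self-paced "age" threshold
        for _ in range(5):
            # Per-sample loss under the current model (negative log-likelihood).
            p = clf.predict_proba(X)[np.arange(len(y)), y]
            loss = -np.log(np.clip(p, 1e-12, None))
            easy = loss < lam                     # keep only currently-easy samples
            if easy.sum() > 10 and len(np.unique(y[easy])) == 2:
                clf = LogisticRegression().fit(X[easy], y[easy])
            lam *= 1.5                            # admit harder samples next round
        print("final round trained on", int(easy.sum()), "of", len(y), "samples")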
  • G. Petropoulos, P. Srivastava, M. Piles, and S. Pearson, “Earth observation-based operational estimation of soil moisture and evapotranspiration for agricultural crops in support of sustainable water management,” Sustainability, vol. 10, iss. 1, p. 181, 2018. doi:10.3390/su10010181
    [BibTeX] [Abstract] [Download PDF]

    Global information on the spatio-temporal variation of parameters driving the Earth's terrestrial water and energy cycles, such as evapotranspiration (ET) rates and surface soil moisture (SSM), is of key significance. The water and energy cycles underpin global food and water security and need to be fully understood as the climate changes. In the last few decades, Earth Observation (EO) technology has played an increasingly important role in determining both ET and SSM. This paper reviews the state of the art specifically in the operational EO-based estimation of both ET and SSM. We discuss the key technical and operational considerations in deriving accurate estimates of those parameters from space. The review suggests that significant progress has been made in recent years in retrieving ET and SSM operationally; yet, further work is required to optimize parameter accuracy and to improve the operational capability of services developed using EO data. Emerging applications in which ET/SSM operational products may be used, specifically in relation to agriculture, are also highlighted; the operational use of those products in such applications remains to be seen.

    @article{lincoln30806,
    volume = {10},
    number = {1},
    month = {January},
    author = {George Petropoulos and Prashant Srivastava and Maria Piles and Simon Pearson},
    note = {This article belongs to the Special Issue Precision Agriculture Technologies for a Sustainable Future: Current Trends and Perspectives},
    title = {Earth observation-based operational estimation of soil moisture and evapotranspiration for agricultural crops in support of sustainable water management},
    publisher = {MDPI},
    year = {2018},
    journal = {Sustainability},
    doi = {10.3390/su10010181},
    pages = {181},
    url = {http://eprints.lincoln.ac.uk/id/eprint/30806/},
    abstract = {Global information on the spatio-temporal variation of parameters driving the Earth's terrestrial water and energy cycles, such as evapotranspiration (ET) rates and surface soil moisture (SSM), is of key significance. The water and energy cycles underpin global food and water security and need to be fully understood as the climate changes. In the last few decades, Earth Observation (EO) technology has played an increasingly important role in determining both ET and SSM. This paper reviews the state of the art in the use specifically of operational EO of both ET and SSM estimates. We discuss the key technical and operational considerations to derive accurate estimates of those parameters from space. The review suggests significant progress has been made in the recent years in retrieving ET and SSM operationally; yet, further work is required to optimize parameter accuracy and to improve the operational capability of services developed using EO data. Emerging applications on which ET/SSM operational products may be included in the context specifically in relation to agriculture are also highlighted; the operational use of those operational products in such applications remains to be seen.}
    }
  • G. Markkula, R. Romano, R. Madigan, C. Fox, O. Giles, and N. Merat, “Models of human decision-making as tools for estimating and optimising impacts of vehicle automation,” in Transportation research board, 2018.
    [BibTeX] [Abstract] [Download PDF]

    With the development of increasingly automated vehicles (AVs) comes the increasingly difficult challenge of comprehensively validating these for acceptable, and ideally beneficial, impacts on the transport system. There is a growing consensus that virtual testing, where simulated AVs are deployed in simulated traffic, will be key for cost-effective testing and optimisation. The least mature model components in such simulations are those generating the behaviour of human agents in or around the AVs. In this paper, human models and virtual testing applications are presented for two example scenarios: (i) a human pedestrian deciding whether to cross a street in front of an approaching automated vehicle, with or without external human-machine interface elements, and (ii) an AV handing over control to a human driver in a critical rear-end situation. These scenarios have received much recent research attention, yet simulation-ready human behaviour models are lacking. They are discussed here in the context of existing models of perceptual decision-making, situational awareness, and traffic interactions. It is argued that the human behaviour in question might be usefully conceptualised as a number of interrelated decision processes, not all of which are necessarily directly associated with externally observable behaviour. The results show that models based on this type of framework can reproduce qualitative patterns of behaviour reported in the literature for the two addressed scenarios, and it is demonstrated how computer simulations based on the models, once these have been properly validated, could allow prediction and optimisation of the AV.

    @inproceedings{lincoln33098,
    booktitle = {Transportation Research Board},
    month = {January},
    title = {Models of human decision-making as tools for estimating and optimising impacts of vehicle automation},
    author = {G Markkula and R Romano and R Madigan and Charles Fox and O Giles and N Merat},
    publisher = {Transportation Research Record},
    year = {2018},
    url = {http://eprints.lincoln.ac.uk/id/eprint/33098/},
    abstract = {With the development of increasingly automated vehicles (AVs) comes the increasingly difficult challenge of comprehensively validating these for acceptable, and ideally beneficial, impacts on the transport system. There is a growing consensus that virtual testing, where simulated AVs are deployed in simulated traffic, will be key for cost-effective testing and optimisation. The least mature model components in such simulations are those generating the behaviour of human agents in or around the AVs. In this paper, human models and virtual testing applications are presented for two example scenarios: (i) a human pedestrian deciding whether to cross a street in front of an approaching automated vehicle, with or without external human-machine interface elements, and (ii) an AV handing over control to a human driver in a critical rear-end situation. These scenarios have received much recent research attention, yet simulation-ready human behaviour models are lacking. They are discussed here in the context of existing models of perceptual decision-making, situational awareness, and traffic interactions. It is argued that the human behaviour in question might be usefully conceptualised as a number of interrelated decision processes, not all of which are necessarily directly associated with externally observable behaviour. The results show that models based on this type of framework can reproduce qualitative patterns of behaviour reported in the literature for the two addressed scenarios, and it is demonstrated how computer simulations based on the models, once these have been properly validated, could allow prediction and optimisation of the AV.}
    }
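    One common formalism for the perceptual decision processes referred to above is evidence accumulation (drift diffusion). The sketch below is a conceptual toy, not the authors' model: the mapping from time gap to drift rate and every parameter value are assumptions made for illustration.

        import numpy as np

        def crossing_decision(drift, threshold=1.0, noise=1.0, dt=0.01, t_max=10.0,
                              rng=np.random.default_rng(2)):
            """One drift-diffusion trial: returns (+1 cross / -1 yield / None, time)."""
            x, t = 0.0, 0.0
            while t < t_max:
                x += drift * dt + noise * np.sqrt(dt) * rng.normal()
                t += dt
                if x >= threshold:
                    return +1, t
                if x <= -threshold:
                    return -1, t
            return None, t

        # Hypothetical mapping: a larger time gap to the vehicle drifts towards "cross".
        for gap in (1.0, 3.0, 6.0):               # seconds
            decision, t = crossing_decision(drift=0.4 * (gap - 2.5))
            print(f"gap {gap}s -> decision {decision} after {t:.2f}s")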
  • R. Akrour, A. Abdolmaleki, H. Abdulsamad, J. Peters, and G. Neumann, “Model-free trajectory-based policy optimization with monotonic improvement,” Journal of machine learning research (jmlr), vol. 19, iss. 14, p. 1–25, 2018.
    [BibTeX] [Abstract] [Download PDF]

    Many of the recent trajectory optimization algorithms alternate between linear approximation of the system dynamics around the mean trajectory and conservative policy update. One way of constraining the policy change is by bounding the Kullback-Leibler (KL) divergence between successive policies. These approaches already demonstrated great experimental success in challenging problems such as end-to-end control of physical systems. However, these approaches lack any improvement guarantee as the linear approximation of the system dynamics can introduce a bias in the policy update and prevent convergence to the optimal policy. In this article, we propose a new model-free trajectory-based policy optimization algorithm with guaranteed monotonic improvement. The algorithm backpropagates a local, quadratic and time-dependent Q-Function learned from trajectory data instead of a model of the system dynamics. Our policy update ensures exact KL-constraint satisfaction without simplifying assumptions on the system dynamics. We experimentally demonstrate on highly non-linear control tasks the improvement in performance of our algorithm in comparison to approaches linearizing the system dynamics. In order to show the monotonic improvement of our algorithm, we additionally conduct a theoretical analysis of our policy update scheme to derive a lower bound of the change in policy return between successive iterations.

    @article{lincoln32457,
    volume = {19},
    number = {14},
    author = {R. Akrour and A. Abdolmaleki and H. Abdulsamad and J. Peters and Gerhard Neumann},
    title = {Model-Free Trajectory-based Policy Optimization with Monotonic Improvement},
    publisher = {Journal of Machine Learning Research},
    journal = {Journal of Machine Learning Research (JMLR)},
    pages = {1--25},
    year = {2018},
    url = {http://eprints.lincoln.ac.uk/id/eprint/32457/},
    abstract = {Many of the recent trajectory optimization algorithms alternate between linear approximation
    of the system dynamics around the mean trajectory and conservative policy update.
    One way of constraining the policy change is by bounding the Kullback-Leibler (KL)
    divergence between successive policies. These approaches already demonstrated great experimental
    success in challenging problems such as end-to-end control of physical systems.
    However, these approaches lack any improvement guarantee as the linear approximation of
    the system dynamics can introduce a bias in the policy update and prevent convergence
    to the optimal policy. In this article, we propose a new model-free trajectory-based policy
    optimization algorithm with guaranteed monotonic improvement. The algorithm backpropagates
    a local, quadratic and time-dependent Q-Function learned from trajectory data
    instead of a model of the system dynamics. Our policy update ensures exact KL-constraint
    satisfaction without simplifying assumptions on the system dynamics. We experimentally
    demonstrate on highly non-linear control tasks the improvement in performance of our algorithm
    in comparison to approaches linearizing the system dynamics. In order to show the
    monotonic improvement of our algorithm, we additionally conduct a theoretical analysis of
    our policy update scheme to derive a lower bound of the change in policy return between
    successive iterations.}
    }
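    The key ingredient of the abstract, a policy update that exactly satisfies a KL bound without linearising the dynamics, can be illustrated in one dimension. In the hedged sketch below, sampled actions are reweighted by a softmax of their returns and the temperature is bisected until the KL divergence between new and old Gaussian policies meets the bound; a toy quadratic return stands in for the paper's time-dependent quadratic Q-functions.

        import numpy as np

        rng = np.random.default_rng(3)

        def kl_gauss(m1, s1, m0, s0):
            # KL( N(m1, s1^2) || N(m0, s0^2) ) for scalar Gaussians.
            return np.log(s0 / s1) + (s1 ** 2 + (m1 - m0) ** 2) / (2 * s0 ** 2) - 0.5

        # Old Gaussian policy and sampled actions with a toy quadratic return.
        m0, s0 = 0.0, 1.0
        a = rng.normal(m0, s0, size=1000)
        q = -(a - 1.5) ** 2

        def reweighted(temp):
            # Softmax-weighted mean/std of the samples at a given temperature.
            w = np.exp((q - q.max()) / temp)
            w /= w.sum()
            m = np.sum(w * a)
            return m, np.sqrt(np.sum(w * (a - m) ** 2))

        # Bisect the temperature so the update lands exactly on the KL bound.
        eps, lo, hi = 0.1, 1e-3, 1e3
        for _ in range(60):
            temp = np.sqrt(lo * hi)                # geometric bisection
            m1, s1 = reweighted(temp)
            if kl_gauss(m1, s1, m0, s0) > eps:
                lo = temp                          # too greedy, soften the update
            else:
                hi = temp
        m1, s1 = reweighted(hi)
        print("new policy:", (m1, s1), "KL:", kl_gauss(m1, s1, m0, s0))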
  • O. Arenz, M. Zhong, and G. Neumann, “Efficient gradient-free variational inference using policy search,” in Proceedings of the international conference on machine learning, 2018.
    [BibTeX] [Abstract] [Download PDF]

    Inference from complex distributions is a common problem in machine learning needed for many Bayesian methods. We propose an efficient, gradient-free method for learning general GMM approximations of multimodal distributions based on recent insights from stochastic search methods. Our method establishes information-geometric trust regions to ensure efficient exploration of the sampling space and stability of the GMM updates, allowing for efficient estimation of multi-variate Gaussian variational distributions. For GMMs, we apply a variational lower bound to decompose the learning objective into sub-problems given by learning the individual mixture components and the coefficients. The number of mixture components is adapted online in order to allow for arbitrary exact approximations. We demonstrate on several domains that we can learn significantly better approximations than competing variational inference methods and that the quality of samples drawn from our approximations is on par with samples created by state-of-the-art MCMC samplers that require significantly more computational resources.

    @inproceedings{lincoln32456,
    booktitle = {Proceedings of the International Conference on Machine Learning},
    title = {Efficient Gradient-Free Variational Inference using Policy Search},
    author = {O. Arenz and M. Zhong and Gerhard Neumann},
    year = {2018},
    url = {http://eprints.lincoln.ac.uk/id/eprint/32456/},
    abstract = {Inference from complex distributions is a common
    problem in machine learning needed for
    many Bayesian methods. We propose an efficient,
    gradient-free method for learning general GMM
    approximations of multimodal distributions based
    on recent insights from stochastic search methods.
    Our method establishes information-geometric
    trust regions to ensure efficient exploration of the
    sampling space and stability of the GMM updates,
    allowing for efficient estimation of multi-variate
    Gaussian variational distributions. For GMMs,
    we apply a variational lower bound to decompose
    the learning objective into sub-problems given
    by learning the individual mixture components
    and the coefficients. The number of mixture components
    is adapted online in order to allow for
    arbitrary exact approximations. We demonstrate
    on several domains that we can learn significantly
    better approximations than competing variational
    inference methods and that the quality of samples
    drawn from our approximations is on par
    with samples created by state-of-the-art MCMC
    samplers that require significantly more computational
    resources.}
    }
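    A heavily simplified sketch of one piece of such a scheme, the gradient-free update of the mixture coefficients, follows: fixed Gaussian components compete for weight under a tempered, ELBO-style reward, with the temperature standing in for the information-geometric trust region. Component adaptation and the rest of the method are omitted, and all values are illustrative.

        import numpy as np

        rng = np.random.default_rng(4)

        def log_target(x):
            # Unnormalised log-density of a bimodal target we wish to approximate.
            return np.logaddexp(-0.5 * (x - 3.0) ** 2, -0.5 * (x + 3.0) ** 2)

        # Fixed unit-variance Gaussian components; only their weights are adapted.
        means = np.array([-4.0, -3.0, 0.0, 3.0, 4.0])
        w = np.full(len(means), 1.0 / len(means))
        eta = 1.0                                  # trust-region-like temperature

        for _ in range(50):
            rewards = []
            for m in means:
                xs = m + rng.normal(size=200)      # samples from this component
                log_model = np.log(np.sum(
                    w[:, None] * np.exp(-0.5 * (xs[None, :] - means[:, None]) ** 2)
                    / np.sqrt(2.0 * np.pi), axis=0))
                # ELBO-style component reward: log target minus log model.
                rewards.append(np.mean(log_target(xs) - log_model))
            # Tempered exponential update keeps each weight change bounded.
            w = w * np.exp(np.array(rewards) / eta)
            w /= w.sum()
        print({float(m): round(float(wi), 3) for m, wi in zip(means, w)})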
  • O. Arenz, G. Neumann, and M. Zhong, “Efficient gradient-free variational inference using policy search,” Proceedings of the 35th international conference on machine learning, vol. 80, p. 234–243, 2018.
    [BibTeX] [Abstract] [Download PDF]

    Inference from complex distributions is a common problem in machine learning needed for many Bayesian methods. We propose an efficient, gradient-free method for learning general GMM approximations of multimodal distributions based on recent insights from stochastic search methods. Our method establishes information-geometric trust regions to ensure efficient exploration of the sampling space and stability of the GMM updates, allowing for efficient estimation of multi-variate Gaussian variational distributions. For GMMs, we apply a variational lower bound to decompose the learning objective into sub-problems given by learning the individual mixture components and the coefficients. The number of mixture components is adapted online in order to allow for arbitrary exact approximations. We demonstrate on several domains that we can learn significantly better approximations than competing variational inference methods and that the quality of samples drawn from our approximations is on par with samples created by state-of-the-art MCMC samplers that require significantly more computational resources.

    @article{lincoln33871,
    volume = {80},
    title = {Efficient Gradient-Free Variational Inference using Policy Search},
    author = {Oleg Arenz and Gerhard Neumann and Mingjun Zhong},
    publisher = {Proceedings of Machine Learning Research},
    year = {2018},
    pages = {234--243},
    journal = {Proceedings of the 35th International Conference on Machine Learning},
    url = {http://eprints.lincoln.ac.uk/id/eprint/33871/},
    abstract = {Inference from complex distributions is a common problem in machine learning needed for many Bayesian methods. We propose an efficient, gradient-free method for learning general GMM approximations of multimodal distributions based on recent insights from stochastic search methods. Our method establishes information-geometric trust regions to ensure efficient exploration of the sampling space and stability of the GMM updates, allowing for efficient estimation of multi-variate Gaussian variational distributions. For GMMs, we apply a variational lower bound to decompose the learning objective into sub-problems given by learning the individual mixture components and the coefficients. The number of mixture components is adapted online in order to allow for arbitrary exact approximations. We demonstrate on several domains that we can learn significantly better approximations than competing variational inference methods and that the quality of samples drawn from our approximations is on par with samples created by state-of-the-art MCMC samplers that require significantly more computational resources.}
    }
  • P. Baxter, G. Cielniak, M. Hanheide, and P. From, “Safe human-robot interaction in agriculture,” in Companion of the 2018 acm/ieee international conference on human-robot interaction – hri ’18, 2018, p. 59–60. doi:10.1145/3173386.3177072
    [BibTeX] [Abstract] [Download PDF]

    Robots in agricultural contexts are finding increased numbers of applications with respect to (partial) automation for increased productivity. However, this presents complex technical problems to be overcome, which are magnified when these robots are intended to work side-by-side with human workers. In this contribution we present an exploratory pilot study to characterise interactions between a robot performing an in-field transportation task and human fruit pickers. Partly an effort to inform the development of a fully autonomous system, the emphasis is on involving the key stakeholders (i.e. the pickers themselves) in the process so as to maximise the potential impact of such an application.

    @inproceedings{lincoln33320,
    booktitle = {Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction - HRI '18},
    title = {Safe Human-Robot Interaction in Agriculture},
    author = {Paul Baxter and Grzegorz Cielniak and Marc Hanheide and Pal From},
    publisher = {ACM},
    year = {2018},
    pages = {59--60},
    doi = {10.1145/3173386.3177072},
    url = {http://eprints.lincoln.ac.uk/id/eprint/33320/},
    abstract = {Robots in agricultural contexts are finding increased numbers of applications with respect to (partial) automation for increased productivity. However, this presents complex technical problems to be overcome, which are magnified when these robots are intended to work side-by-side with human workers. In this contribution we present an exploratory pilot study to characterise interactions between a robot performing an in-field transportation task and human fruit pickers. Partly an effort to inform the development of a fully autonomous system, the emphasis is on involving the key stakeholders (i.e. the pickers themselves) in the process so as to maximise the potential impact of such an application.}
    }
  • P. Baxter, P. Lightbody, and M. Hanheide, “Robots providing cognitive assistance in shared workspaces,” in Companion of the 2018 acm/ieee international conference on human-robot interaction – hri ’18, 2018, p. 57–58. doi:10.1145/3173386.3177070
    [BibTeX] [Abstract] [Download PDF]

    Human-Robot Collaboration is an area of particular current interest, with the attempt to make robots more generally useful in contexts where they work side-by-side with humans. Currently, efforts typically focus on the sensory and motor aspects of the task on the part of the robot to enable them to function safely and effectively given an assigned task. In the present contribution, we rather focus on the cognitive faculties of the human worker by attempting to incorporate known (from psychology) properties of human cognition. In a proof-of-concept study, we demonstrate how applying characteristics of human categorical perception to the type of robot assistance impacts on task performance and experience of the participants. This lays the foundation for further developments in cognitive assistance and collaboration in side-by-side working for humans and robots.

    @inproceedings{lincoln33321,
    booktitle = {Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction - HRI '18},
    title = {Robots Providing Cognitive Assistance in Shared Workspaces},
    author = {Paul Baxter and Peter Lightbody and Marc Hanheide},
    publisher = {ACM},
    year = {2018},
    pages = {57--58},
    doi = {10.1145/3173386.3177070},
    url = {http://eprints.lincoln.ac.uk/id/eprint/33321/},
    abstract = {Human-Robot Collaboration is an area of particular current interest, with the attempt to make robots more generally useful in contexts where they work side-by-side with humans. Currently, efforts typically focus on the sensory and motor aspects of the task on the part of the robot to enable them to function safely and effectively given an assigned task. In the present contribution, we rather focus on the cognitive faculties of the human worker by attempting to incorporate known (from psychology) properties of human cognition. In a proof-of-concept study, we demonstrate how applying characteristics of human categorical perception to the type of robot assistance impacts on task performance and experience of the participants. This lays the foundation for further developments in cognitive assistance and collaboration in side-by-side working for humans and robots.}
    }
  • F. Camara, S. Cosar, N. Bellotto, N. Merat, and C. Fox, “Towards pedestrian-av interaction: method for elucidating pedestrian preferences,” in Ieee/rsj international conference on intelligent robots and systems (iros 2018) workshops, 2018.
    [BibTeX] [Abstract] [Download PDF]

    Autonomous vehicle navigation around human pedestrians remains a challenge due to the potential for complex interactions and feedback loops between the agents. As a small step towards better understanding of these interactions, this Methods Paper presents a new empirical protocol based on tracking real humans in a controlled lab environment, which is able to make inferences about the human's preferences for interaction (how they trade off the cost of their time against the cost of a collision). Knowledge of such preferences, if collected in more realistic environments, could then be used by future AVs to predict and control for pedestrian behaviour. This study is intended as a work-in-progress report on methods working towards real-time and less controlled experiments, demonstrating successful use of several key components required by such systems, but in a more controlled setting. This suggests that these components could be extended to more realistic situations and results in an ongoing research programme.

    @inproceedings{lincoln33565,
    booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2018) Workshops},
    title = {Towards pedestrian-AV interaction: method for elucidating pedestrian preferences},
    author = {Fanta Camara and Serhan Cosar and Nicola Bellotto and Natasha Merat and Charles Fox},
    year = {2018},
    url = {http://eprints.lincoln.ac.uk/id/eprint/33565/},
    abstract = {Autonomous vehicle navigation around human pedestrians remains a challenge due to the potential for complex interactions and feedback loops between the agents. As a small step towards better understanding of these interactions, this Methods Paper presents a new empirical protocol based on tracking real humans in a controlled lab environment, which is able to make inferences about the human's preferences for interaction (how they trade off the cost of their time against the cost of a collision). Knowledge of such preferences if collected in more realistic environments could then be used by future AVs to predict and control for pedestrian behaviour. This study is intended as a work-in-progress report on methods working towards real-time and less controlled experiments, demonstrating successful use of several key components required by such systems, but in its more controlled setting. This suggests that these components could be extended to more realistic situations and results in an ongoing research programme.}
    }
  • F. Camara, O. Giles, R. Madigan, M. Rothmüller, P. H. Rasmussen, S. A. Vendelbo-Larsen, G. Markkula, Y. M. Lee, L. Garach, N. Merat, and C. Fox, “Filtration analysis of pedestrian-vehicle interactions for autonomous vehicles control,” in 15th international conference on intelligent autonomous systems (ias-15) workshops, 2018.
    [BibTeX] [Abstract] [Download PDF]

    Interacting with humans remains a challenge for autonomous vehicles (AVs). When a pedestrian wishes to cross the road in front of the vehicle at an unmarked crossing, the pedestrian and AV must compete for the space, which may be considered as a game-theoretic interaction in which one agent must yield to the other. To inform development of new real-time AV controllers in this setting, this study collects and analyses detailed, manually-annotated, temporal data from real-world human road crossings as they interact with manual drive vehicles. It studies the temporal orderings (filtrations) in which features are revealed to the vehicle and their informativeness over time. It presents a new framework suggesting how optimal stopping controllers may then use such data to enable an AV to decide when to act (by speeding up, slowing down, or otherwise signalling intent to the pedestrian) or alternatively, to continue at its current speed in order to gather additional information from new features, including signals from that pedestrian, before acting itself.

    @inproceedings{lincoln33564,
    booktitle = {15th International Conference on Intelligent Autonomous Systems (IAS-15) workshops},
    title = {Filtration analysis of pedestrian-vehicle interactions for autonomous vehicles control},
    author = {Fanta Camara and Oscar Giles and Ruth Madigan and Markus Rothm{\"u}ller and Pernille Holm Rasmussen and Signe Alexandra Vendelbo-Larsen and Gustav Markkula and Yee Mun Lee and Laura Garach and Natasha Merat and Charles Fox},
    year = {2018},
    url = {http://eprints.lincoln.ac.uk/id/eprint/33564/},
    abstract = {Interacting with humans remains a challenge for autonomous
    vehicles (AVs). When a pedestrian wishes to cross the road in front of the
    vehicle at an unmarked crossing, the pedestrian and AV must compete
    for the space, which may be considered as a game-theoretic interaction in
    which one agent must yield to the other. To inform development of new
    real-time AV controllers in this setting, this study collects and analyses
    detailed, manually-annotated, temporal data from real-world human
    road crossings as they interact with manual drive vehicles. It studies the
    temporal orderings (filtrations) in which features are revealed to the
    vehicle and their informativeness over time. It presents a new framework
    suggesting how optimal stopping controllers may then use such data to
    enable an AV to decide when to act (by speeding up, slowing down, or
    otherwise signalling intent to the pedestrian) or alternatively, to continue
    at its current speed in order to gather additional information from new
    features, including signals from that pedestrian, before acting itself.}
    }
  • F. Camara, R. A. Romano, G. Markkula, R. Madigan, N. Merat, and C. W. Fox, “Empirical game theory of pedestrian interaction for autonomous vehicles,” in Proc. measuring behaviour 2018: international conference on methods and techniques in behavioral research, 2018.
    [BibTeX] [Abstract] [Download PDF]

    Autonomous vehicles (AVs) are appearing on roads, based on standard robotic mapping and navigation algorithms. However, their ability to interact with other road-users is much less well understood. If AVs are programmed to stop every time another road user obstructs them, then other road users simply learn that they can take priority at every interaction, and the AV will make little or no progress. This issue is especially important in the case of a pedestrian crossing the road in front of the AV. The present methods paper expands the sequential chicken model introduced in (Fox et al., 2018), using empirical data to measure the behavior of humans in a controlled plus-maze experiment, and showing how such data can be used to infer parameters of the model via a Gaussian Process. This provides a more realistic, empirical understanding of the human factors intelligence required by future autonomous vehicles.

    @inproceedings{lincoln32028,
    booktitle = {Proc. Measuring Behaviour 2018: International Conference on Methods and Techniques in Behavioral Research},
    title = {Empirical game theory of pedestrian interaction for autonomous vehicles},
    author = {Fanta Camara and Richard A. Romano and Gustav Markkula and Ruth Madigan and Natasha Merat and Charles W. Fox},
    year = {2018},
    journal = {Proceedings of Measuring Behavior 2018.},
    url = {http://eprints.lincoln.ac.uk/id/eprint/32028/},
    abstract = {Autonomous vehicles (AVs) are appearing on roads, based on standard robotic mapping and
    navigation algorithms. However their ability to interact with other road-users is much less well understood. If
    AVs are programmed to stop every time another road user obstructs them, then other road users simply learn that
    they can take priority at every interaction, and the AV will make little or no progress. This issue is especially
    important in the case of a pedestrian crossing the road in front of the AV. The present methods paper expands the
    sequential chicken model introduced in (Fox et al., 2018), using empirical data to measure behavior of humans in
    a controlled plus-maze experiment, and showing how such data can be used to infer parameters of the model via
    a Gaussian Process. This providing a more realistic, empirical understanding of the human factors intelligence
    required by future autonomous vehicles.}
    }
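    The quantity such methods trade off, the cost of lost time against the cost of a collision, already fixes behaviour in a one-shot chicken game. The sketch below computes the symmetric mixed equilibrium under hypothetical costs; the paper's sequential chicken model and its Gaussian Process inference are substantially richer.

        import numpy as np

        # Hypothetical utilities of the kind the method infers from data:
        # cost of lost time when yielding, and cost of a collision.
        U_time, U_crash = -1.0, -100.0

        # One-shot chicken payoffs for the row agent:
        #                  opponent proceeds   opponent yields
        payoff = np.array([[U_crash,           0.0   ],    # row proceeds
                           [U_time,            U_time]])   # row yields

        # Symmetric mixed equilibrium: the opponent's P(proceed) = p must make
        # the row agent indifferent between rows:  p * U_crash = U_time.
        p = U_time / U_crash
        print("equilibrium P(proceed) =", p)    # 0.01 under these costs
        # Expected payoff at equilibrium equals the yielding payoff U_time.
        print("expected payoff:", p * payoff[0, 0] + (1 - p) * payoff[0, 1])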
  • F. Camara, O. Giles, R. Madigan, M. Rothmueller, H. P. Rasmussen, S. Vendelbo-Larsen, G. Markkula, Y. Lee, L. Garach, N. Merat, and C. Fox, “Predicting pedestrian road-crossing assertiveness for autonomous vehicle control,” in The 21st ieee international conference on intelligent transportation systems, 2018.
    [BibTeX] [Abstract] [Download PDF]

    Autonomous vehicles (AVs) must interact with other road users including pedestrians. Unlike passive environments, pedestrians are active agents having their own utilities and decisions, which must be inferred and predicted by AVs in order to control interactions with them and navigation around them. In particular, when a pedestrian wishes to cross the road in front of the vehicle at an unmarked crossing, the pedestrian and AV must compete for the space, which may be considered as a game-theoretic interaction in which one agent must yield to the other. To inform AV controllers in this setting, this study collects and analyses data from real-world human road crossings to determine what features of crossing behaviours are predictive of the level of assertiveness of pedestrians and of the eventual winner of the interactions. It presents the largest and most detailed data set of its kind known to us, and new methods to analyze and predict pedestrian-vehicle interactions based upon it. Pedestrian-vehicle interactions are decomposed into sequences of independent discrete events. We use probabilistic methods – logistic regression and decision tree regression – and sequence analysis to analyze sets and sub-sequences of actions used by both pedestrians and human drivers while crossing at an intersection, to find common patterns of behaviour and to predict the winner of each interaction. We report on the particular features found to be predictive and which can thus be integrated into game-theoretic AV controllers to inform real-time interactions.

    @inproceedings{lincoln33126,
    booktitle = {The 21st IEEE International Conference on Intelligent Transportation Systems},
    title = {Predicting pedestrian road-crossing assertiveness for autonomous vehicle control},
    author = {Fanta Camara and O Giles and R Madigan and M Rothmueller and P Holm Rasmussen and SA Vendelbo-Larsen and G Markkula and YM Lee and L Garach and N Merat and CW Fox},
    publisher = {IEEE Xplore},
    year = {2018},
    url = {http://eprints.lincoln.ac.uk/id/eprint/33126/},
    abstract = {Autonomous vehicles (AVs) must interact with other road users including pedestrians. Unlike passive environments, pedestrians are active agents having their own utilities and decisions, which must be inferred and predicted by AVs in order to control interactions with them and navigation around them. In particular, when a pedestrian wishes to cross the road in front of the vehicle at an unmarked crossing, the pedestrian and AV must compete for the space, which may be considered as a game-theoretic interaction in which one agent must yield to the other. To inform AV controllers in this setting, this study collects and analyses data from real-world human road crossings to determine what features of crossing behaviours are predictive about the level of assertiveness of pedestrians and of the eventual winner of the interactions. It presents the largest and most detailed data set of its kind known to us, and new methods to analyze and predict pedestrian-vehicle interactions based upon it. Pedestrian-vehicle interactions are decomposed into sequences of independent discrete events. We use probabilistic methods -- logistic regression and decision tree regression -- and sequence analysis to analyze sets and sub-sequences of actions used by both pedestrians and human drivers while crossing at an intersection, to find common patterns of behaviour and to predict the winner of each interaction. We report on the particular features found to be predictive and which can thus be integrated into game-theoretic AV controllers to inform real-time interactions.}
    }
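    A minimal sketch of the kind of probabilistic prediction the paper describes, restricted here to logistic regression on invented crossing features (the feature set, coefficients, and data are all assumptions, not the study's):

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # Hypothetical per-interaction features and outcome (1 = pedestrian "won").
        rng = np.random.default_rng(5)
        n = 300
        speed = rng.uniform(0.5, 2.0, n)          # pedestrian approach speed, m/s
        head_turn = rng.integers(0, 2, n)         # looked at the vehicle? 0/1
        kerb_step = rng.integers(0, 2, n)         # stepped towards the kerb? 0/1
        X = np.column_stack([speed, head_turn, kerb_step]).astype(float)
        logits = -2.0 + 0.8 * speed + 1.2 * head_turn + 1.5 * kerb_step
        y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

        # Fit the predictor and query it for a new, unseen interaction.
        model = LogisticRegression().fit(X, y)
        new = np.array([[1.4, 1, 0]])             # fast approach, head turn, no kerb step
        print("P(pedestrian wins):", model.predict_proba(new)[0, 1])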
  • A. Cohen, S. Parsons, E. Sklar, and P. McBurney, “A characterization of types of support between structured arguments and their relationship with support in abstract argumentation,” International journal of approximate reasoning, vol. 94, p. 76–104, 2018. doi:10.1016/j.ijar.2017.12.008
    [BibTeX] [Download PDF]
    @article{lincoln38544,
    volume = {94},
    author = {A. Cohen and S. Parsons and Elizabeth Sklar and P. McBurney},
    note = {cited By 1},
    title = {A characterization of types of support between structured arguments and their relationship with support in abstract argumentation},
    journal = {International Journal of Approximate Reasoning},
    doi = {10.1016/j.ijar.2017.12.008},
    pages = {76--104},
    year = {2018},
    url = {http://eprints.lincoln.ac.uk/id/eprint/38544/}
    }
  • K. Elgeneidy, A. Al-Yacoub, Z. Usman, N. Lohse, M. Jackson, and I. Wright, “Towards an automated masking process: a model-based approach,” Journal of engineering manufacture, 2018. doi:10.1177/0954405418810058
    [BibTeX] [Abstract] [Download PDF]

    The masking of aircraft engine parts, such as turbine blades, is a major bottleneck for the aerospace industry. The process is often carried out manually in multiple stages of coating and curing, which requires extensive time and introduces variations in the masking quality. This article investigates the automation of the masking process utilising the well-established time-pressure dispensing process for controlled maskant dispensing and a robotic manipulator for accurate part handling. A mathematical model for the time-pressure dispensing process was derived, extending previous models from the literature by incorporating the robot velocity for controlled masking line width. An experiment was designed, based on the theoretical analysis of the dispensing process, to derive an empirical model from the generated data that incorporates the losses that are otherwise difficult to model mathematically. The model was validated under new input conditions to demonstrate the feasibility of the proposed approach and the masking accuracy using the derived model.

    @article{lincoln33938,
    title = {Towards an automated masking process: A model-based approach},
    author = {Khaled Elgeneidy and Ali Al-Yacoub and Zahid Usman and Niels Lohse and Michael Jackson and Iain Wright},
    publisher = {Sage},
    year = {2018},
    doi = {10.1177/0954405418810058},
    note = {The final published version of this article can be found online at http://www.uk.sagepub.com/journals/Journal202016/},
    journal = {Journal of Engineering Manufacture},
    url = {http://eprints.lincoln.ac.uk/id/eprint/33938/},
    abstract = {The masking of aircraft engine parts, such as turbine blades, is a major bottleneck for the aerospace industry. The process is often carried out manually in multiple stages of coating and curing, which requires extensive time and introduces variations in the masking quality. This article investigates the automation of the masking process utilising the well-established time-pressure dispensing process for controlled maskant dispensing and a robotic manipulator for accurate part handling. A mathematical model for the time-pressure dispensing process was derived, extending previous models from the literature by incorporating the robot velocity for controlled masking line width. An experiment was designed, based on the theoretical analysis of the dispensing process, to derive an empirical model from the generated data that incorporate the losses that are otherwise difficult to model mathematically. The model was validated under new input conditions to demonstrate the feasibility of the proposed approach and the masking accuracy using the derived model.}
    }
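    The control use of such an empirical model can be sketched in a few lines: fit a linear width model from (hypothetical) dispensing trials, then invert it for the robot velocity that yields a target line width at a given pressure. Data and coefficients below are illustrative, not the paper's.

        import numpy as np

        # Synthetic dispensing trials: supply pressure (kPa) and robot velocity
        # (mm/s) as inputs, resulting masking line width (mm) as output.
        rng = np.random.default_rng(6)
        P = rng.uniform(100.0, 400.0, 50)
        v = rng.uniform(5.0, 50.0, 50)
        width = 0.004 * P - 0.03 * v + 1.0 + 0.05 * rng.normal(size=50)

        # Empirical linear model width = a*P + b*v + c, fit by least squares.
        A = np.column_stack([P, v, np.ones_like(P)])
        (a, b, c), *_ = np.linalg.lstsq(A, width, rcond=None)

        # Invert for control: the velocity needed for a target width at pressure P0.
        target, P0 = 1.5, 250.0
        v_cmd = (target - a * P0 - c) / b
        print(f"commanded velocity: {v_cmd:.1f} mm/s")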
  • K. Elgeneidy, G. Neumann, M. Jackson, and N. Lohse, “Directly printable flexible strain sensors for bending and contact feedback of soft actuators,” Front. robot. ai, 2018. doi:10.3389/frobt.2018.00002
    [BibTeX] [Abstract] [Download PDF]

    This paper presents a fully printable sensorized bending actuator that can be calibrated to provide reliable bending feedback and simple contact detection. A soft bending actuator following a pleated morphology, as well as a flexible resistive strain sensor, were directly 3D printed using easily accessible FDM printer hardware with a dual-extrusion tool head. The flexible sensor was directly welded to the bending actuator's body and systematically tested to characterize and evaluate its response under variable input pressure. A signal conditioning circuit was developed to enhance the quality of the sensory feedback, and flexible conductive threads were used for wiring. The sensorized actuator's response was then calibrated using a vision system to convert the sensory readings to real bending angle values. The empirical relationship was derived using linear regression and validated at untrained input conditions to evaluate its accuracy. Furthermore, the sensorized actuator was tested in a constrained setup that prevents bending, to evaluate the potential of using the same sensor for simple contact detection by comparing the constrained and free-bending responses at the same input pressures. The results of this work demonstrated how a dual-extrusion FDM printing process can be tuned to directly print highly customizable flexible strain sensors that were able to provide reliable bending feedback and basic contact detection. The addition of such sensing capability to bending actuators enhances their functionality and reliability for applications such as controlled soft grasping, flexible wearables, and haptic devices.

    @article{lincoln32562,
    title = {Directly Printable Flexible Strain Sensors for Bending and Contact Feedback of Soft Actuators},
    author = {Khaled Elgeneidy and Gerhard Neumann and Michael Jackson and Niels Lohse},
    publisher = {Frontiers Media},
    year = {2018},
    doi = {10.3389/frobt.2018.00002},
    journal = {Front. Robot. AI},
    url = {http://eprints.lincoln.ac.uk/id/eprint/32562/},
abstract = {This paper presents a fully printable sensorized bending actuator that can be calibrated to provide reliable bending feedback and simple contact detection. A soft bending actuator following a pleated morphology, as well as a flexible resistive strain sensor, were directly 3D printed using easily accessible FDM printer hardware with a dual-extrusion tool head. The flexible sensor was directly welded to the bending actuator's body and systematically tested to characterize and evaluate its response under variable input pressure. A signal conditioning circuit was developed to enhance the quality of the sensory feedback, and flexible conductive threads were used for wiring. The sensorized actuator's response was then calibrated using a vision system to convert the sensory readings to real bending angle values. The empirical relationship was derived using linear regression and validated at untrained input conditions to evaluate its accuracy. Furthermore, the sensorized actuator was tested in a constrained setup that prevents bending, to evaluate the potential of using the same sensor for simple contact detection by comparing the constrained and free-bending responses at the same input pressures. The results of this work demonstrated how a dual-extrusion FDM printing process can be tuned to directly print highly customizable flexible strain sensors that were able to provide reliable bending feedback and basic contact detection. The addition of such sensing capability to bending actuators enhances their functionality and reliability for applications such as controlled soft grasping, flexible wearables, and haptic devices.}
    }
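    The calibration step described above, mapping raw sensor readings to bending angles via linear regression and validating at untrained input conditions, amounts to a one-variable least-squares fit. A minimal Python sketch, with made-up readings and vision-measured angles standing in for the paper's data:

    import numpy as np

    # Hypothetical calibration pairs: raw strain-sensor readings (ADC counts)
    # and bending angles measured by a vision system [degrees].
    readings = np.array([512, 540, 571, 603, 648, 690])
    angles = np.array([0.0, 8.5, 17.2, 26.1, 38.0, 49.4])

    # Fit the empirical linear relationship: angle = a * reading + b.
    a, b = np.polyfit(readings, angles, deg=1)

    def reading_to_angle(r):
        """Convert a raw sensor reading to an estimated bending angle."""
        return a * r + b

    # Validate at an input condition not used for fitting.
    print(round(reading_to_angle(620), 1))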
  • K. Elgeneidy, G. Neumann, S. Pearson, M. Jackson, and N. Lohse, “Contact detection and object size estimation using a modular soft gripper with embedded flex sensors,” in Iros 2018, 2018.
    [BibTeX] [Abstract] [Download PDF]

    Soft-grippers can grasp delicate and deformable objects without bruise or damage as the gripper can adapt to the object's shape. However, the contact forces are still hard to regulate due to missing contact feedback of such grippers. In this paper, a modular soft gripper design is presented utilizing interchangeable soft pneumatic actuators with embedded flex sensors as fingers of the gripper. The fingers can be assembled in different configurations using 3D printed connectors. The paper investigates the potential of utilizing the simple sensory feedback from the flex sensors to make additional meaningful inferences regarding the contact state and grasped object size. We study the effect of the grasped object size and contact type on the combined feedback from the embedded flex sensors of all fingers. Our results show that a simple linear relationship exists between the grasped object size and the final flex sensor reading at fixed input conditions, despite the variation in object weight and contact type. Additionally, by simply monitoring the time series response from the flex sensor, contact can be detected by comparing the response to the known free-bending response at the same input conditions. Furthermore, by utilizing the measured internal pressure supplied to the soft fingers, it is possible to distinguish between power and pinch grasps, as the nature of the contact affects the rate of change in the flex sensor readings against the internal pressure.

    @inproceedings{lincoln32544,
    booktitle = {IROS 2018},
    title = {Contact Detection and Object Size Estimation using a Modular Soft Gripper with Embedded Flex Sensors},
    author = {Khaled Elgeneidy and Gerhard Neumann and Simon Pearson and Michael Jackson and Niels Lohse},
    year = {2018},
    journal = {IROS 2018},
    url = {http://eprints.lincoln.ac.uk/id/eprint/32544/},
abstract = {Soft-grippers can grasp delicate and deformable objects without bruise or damage as the gripper can adapt to the object's shape. However, the contact forces are still hard to regulate due to missing contact feedback of such grippers. In this paper, a modular soft gripper design is presented utilizing interchangeable soft pneumatic actuators with embedded flex sensors as fingers of the gripper. The fingers can be assembled in different configurations using 3D printed connectors. The paper investigates the potential of utilizing the simple sensory feedback from the flex sensors to make additional meaningful inferences regarding the contact state and grasped object size. We study the effect of the grasped object size and contact type on the combined feedback from the embedded flex sensors of all fingers. Our results show that a simple linear relationship exists between the grasped object size and the final flex sensor reading at fixed input conditions, despite the variation in object weight and contact type. Additionally, by simply monitoring the time series response from the flex sensor, contact can be detected by comparing the response to the known free-bending response at the same input conditions. Furthermore, by utilizing the measured internal pressure supplied to the soft fingers, it is possible to distinguish between power and pinch grasps, as the nature of the contact affects the rate of change in the flex sensor readings against the internal pressure.}
    }
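    Both inferences in this abstract reduce to simple comparisons, sketched below under assumptions of our own: contact is flagged when the live flex reading falls a threshold short of the known free-bending response at the same pressure, and object size is read off an assumed linear fit to the final sensor value. The baseline data, threshold, and size-model coefficients are all hypothetical.

    import numpy as np

    # Hypothetical free-bending baseline: flex reading vs. input pressure [kPa],
    # recorded beforehand with nothing in the gripper.
    baseline_pressure = np.array([20, 40, 60, 80, 100])
    baseline_reading = np.array([310, 395, 470, 540, 600])

    def free_bend_reading(pressure):
        """Interpolated free-bending flex reading at the given pressure."""
        return np.interp(pressure, baseline_pressure, baseline_reading)

    def contact_detected(pressure, live_reading, threshold=25):
        """Contact stalls the finger, so its flex reading lags the
        free-bending response at the same pressure (threshold is ours)."""
        return free_bend_reading(pressure) - live_reading > threshold

    def estimate_object_size(final_reading, slope=-0.12, intercept=92.0):
        """Assumed linear size model at fixed input conditions: larger
        objects stop the fingers earlier, giving lower final readings."""
        return slope * final_reading + intercept

    print(contact_detected(pressure=80, live_reading=480))    # True: reading lags baseline
    print(f"{estimate_object_size(450):.1f} mm (illustrative)")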
  • K. Essers, M. Chapman, N. Kokciyan, I. Sassoon, T. Porat, P. Balatsoukas, P. Young, M. Ashworth, V. Curcin, S. Modgil, S. Parsons, and E. Sklar, “The consult system: demonstration.” 2018, p. 385–386. doi:10.1145/3284432.3287170
    [BibTeX] [Download PDF]
    @inproceedings{lincoln38402,
    title = {The CONSULT system: Demonstration},
    author = {K. Essers and M. Chapman and N. Kokciyan and I. Sassoon and T. Porat and P. Balatsoukas and P. Young and M. Ashworth and V. Curcin and S. Modgil and Simon Parsons and Elizabeth Sklar},
    year = {2018},
    pages = {385--386},
    doi = {10.1145/3284432.3287170},
    note = {cited By 0},
    journal = {HAI 2018 - Proceedings of the 6th International Conference on Human-Agent Interaction},
    url = {http://eprints.lincoln.ac.uk/id/eprint/38402/}
    }
  • K. Essers, R. Rogers, J. Sturt, E. Sklar, and E. Black, “Assessing the posture prototype: a late-breaking report on patient views.” 2018, p. 344–346. doi:10.1145/3284432.3287181
    [BibTeX] [Download PDF]
    @inproceedings{lincoln38542,
    title = {Assessing the POSTURE prototype: A late-breaking report on patient views},
    author = {K. Essers and R. Rogers and J. Sturt and Elizabeth Sklar and E. Black},
    year = {2018},
    pages = {344--346},
    doi = {10.1145/3284432.3287181},
    note = {cited By 0},
    journal = {HAI 2018 - Proceedings of the 6th International Conference on Human-Agent Interaction},
    url = {http://eprints.lincoln.ac.uk/id/eprint/38542/}
    }
  • C. Fox, F. Camara, G. Markkula, R. Romano, R. Madigan, and N. Merat, “When should the chicken cross the road?: game theory for autonomous vehicle-human interactions,” in Proc. 4th international conference on vehicle technology and intelligent transport systems (vehits), 2018.
    [BibTeX] [Abstract] [Download PDF]

    Autonomous vehicle control is well understood for localization, mapping and planning in un-reactive environments, but the human factors of complex interactions with other road users are not yet developed. This position paper presents an initial model for negotiation between an autonomous vehicle and another vehicle at an unsigned intersection or (equivalently) with a pedestrian at an unsigned road-crossing (jaywalking), using discrete sequential game theory. The model is intended as a basic framework for more realistic and data-driven future extensions. The model shows that when only vehicle position is used to signal intent, the optimal behaviors for both agents must include a non-zero probability of allowing a collision to occur. This suggests extensions to reduce this probability in future, such as other forms of signaling and control. Unlike most Game Theory applications in Economics, active vehicle control requires real-time selection from multiple equilibria with no history, and we present and argue for a novel solution concept, meta-strategy convergence, suited to this task.

    @inproceedings{lincoln32029,
    booktitle = {Proc. 4th International Conference on Vehicle Technology and Intelligent Transport Systems (VEHITS)},
    title = {When should the chicken cross the road?: Game theory for autonomous vehicle-human interactions},
    author = {Charles Fox and F. Camara and G. Markkula and R. Romano and R. Madigan and N. Merat},
    year = {2018},
    url = {http://eprints.lincoln.ac.uk/id/eprint/32029/},
abstract = {Autonomous vehicle control is well understood for localization, mapping and planning in un-reactive environments, but the human factors of complex interactions with other road users are not yet developed. This position paper presents an initial model for negotiation between an autonomous vehicle and another vehicle at an unsigned intersection or (equivalently) with a pedestrian at an unsigned road-crossing (jaywalking), using discrete sequential game theory. The model is intended as a basic framework for more realistic and data-driven future extensions. The model shows that when only vehicle position is used to signal intent, the optimal behaviors for both agents must include a non-zero probability of allowing a collision to occur. This suggests extensions to reduce this probability in future, such as other forms of signaling and control. Unlike most Game Theory applications in Economics, active vehicle control requires real-time selection from multiple equilibria with no history, and we present and argue for a novel solution concept, meta-strategy convergence, suited to this task.}
    }
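    The paper's key claim, that position-only signaling forces both optimal agents to accept a non-zero collision probability, is the classic mixed equilibrium of a chicken-style game. A toy Python computation with payoffs of our own choosing (not the paper's model):

    # Symmetric "chicken" at an unsigned crossing: each agent GOes or YIELDs.
    # Illustrative payoffs: winning the gap = +1, yielding = 0, collision = -10.
    # In the mixed Nash equilibrium each agent is indifferent between GO and
    # YIELD given the other's go-probability p:
    #     p * crash + (1 - p) * win = 0   =>   p = win / (win - crash)
    win, crash = 1.0, -10.0

    p_go = win / (win - crash)   # equilibrium probability of going
    p_collision = p_go ** 2      # both go at once

    print(p_go)          # ~0.091
    print(p_collision)   # ~0.008: non-zero, as the paper argues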
  • R. P. Herrero, J. P. Fentanes, and M. Hanheide, “Getting to know your robot customers: automated analysis of user identity and demographics for robots in the wild,” Ieee robotics and automation letters, vol. 3, iss. 4, p. 3733–3740, 2018. doi:10.1109/LRA.2018.2856264
    [BibTeX] [Abstract] [Download PDF]

    Long-term studies with autonomous robots 'in the wild' (deployed in real-world human-inhabited environments) are among the most laborious and resource-intensive endeavours in human-robot interaction. Even if a robot system itself is robust and well-working, the analysis of the vast amounts of user data one aims to collect and analyze poses a significant challenge. This letter proposes an automated processing pipeline, using state-of-the-art computer vision technology to estimate demographic factors from users' faces and reidentify them to establish usage patterns. It overcomes the problem of explicitly recruiting participants and having them fill questionnaires about their demographic background and allows one to study completely unsolicited and nonprimed interactions over long periods of time. This letter offers a comprehensive assessment of the performance of the automated analysis with data from 68 days of continuous deployment of a robot in a care home and also presents a set of findings obtained through the analysis, underpinning the viability of the approach.

    @article{lincoln33158,
    volume = {3},
    number = {4},
    author = {Roberto Pinillos Herrero and Jaime Pulido Fentanes and Marc Hanheide},
    note = {The final published version of this article can be accessed online at https://ieeexplore.ieee.org/document/8411093/},
    title = {Getting to Know Your Robot Customers: Automated Analysis of User Identity and Demographics for Robots in the Wild},
    publisher = {IEEE},
    year = {2018},
    journal = {IEEE Robotics and Automation Letters},
doi = {10.1109/LRA.2018.2856264},
    pages = {3733--3740},
    url = {http://eprints.lincoln.ac.uk/id/eprint/33158/},
abstract = {Long-term studies with autonomous robots 'in the wild' (deployed in real-world human-inhabited environments) are among the most laborious and resource-intensive endeavours in human-robot interaction. Even if a robot system itself is robust and well-working, the analysis of the vast amounts of user data one aims to collect and analyze poses a significant challenge. This letter proposes an automated processing pipeline, using state-of-the-art computer vision technology to estimate demographic factors from users' faces and reidentify them to establish usage patterns. It overcomes the problem of explicitly recruiting participants and having them fill questionnaires about their demographic background and allows one to study completely unsolicited and nonprimed interactions over long periods of time. This letter offers a comprehensive assessment of the performance of the automated analysis with data from 68 days of continuous deployment of a robot in a care home and also presents a set of findings obtained through the analysis, underpinning the viability of the approach.}
    }
  • M. Huttenrauch, A. Sosic, and G. Neumann, “Exploiting local communication protocols for learning complex swarm behaviors with deep reinforcement learning,” in International conference for swarm intelligence (ants), 2018.
    [BibTeX] [Abstract] [Download PDF]

    Swarm systems constitute a challenging problem for reinforcement learning (RL) as the algorithm needs to learn decentralized control policies that can cope with limited local sensing and communication abilities of the agents. While it is often difficult to directly define the behavior of the agents, simple communication protocols can be defined more easily using prior knowledge about the given task. In this paper, we propose a number of simple communication protocols that can be exploited by deep reinforcement learning to find decentralized control policies in a multi-robot swarm environment. The protocols are based on histograms that encode the local neighborhood relations of the agents and can also transmit task-specific information, such as the shortest distance and direction to a desired target. In our framework, we use an adaptation of Trust Region Policy Optimization to learn complex collaborative tasks, such as formation building and building a communication link. We evaluate our findings in a simulated 2D-physics environment, and compare the implications of different communication protocols.

    @inproceedings{lincoln32460,
    booktitle = {International Conference for Swarm Intelligence (ANTS)},
    title = {Exploiting Local Communication Protocols for Learning Complex Swarm Behaviors with Deep Reinforcement Learning},
    author = {Max Huttenrauch and Adrian Sosic and Gerhard Neumann},
    publisher = {Springer International Publishing},
    year = {2018},
    url = {http://eprints.lincoln.ac.uk/id/eprint/32460/},
abstract = {Swarm systems constitute a challenging problem for reinforcement learning (RL) as the algorithm needs to learn decentralized control policies that can cope with limited local sensing and communication abilities of the agents. While it is often difficult to directly define the behavior of the agents, simple communication protocols can be defined more easily using prior knowledge about the given task. In this paper, we propose a number of simple communication protocols that can be exploited by deep reinforcement learning to find decentralized control policies in a multi-robot swarm environment. The protocols are based on histograms that encode the local neighborhood relations of the agents and can also transmit task-specific information, such as the shortest distance and direction to a desired target. In our framework, we use an adaptation of Trust Region Policy Optimization to learn complex collaborative tasks, such as formation building and building a communication link. We evaluate our findings in a simulated 2D-physics environment, and compare the implications of different communication protocols.}
    }
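    The histogram observation model described above is easy to picture: each agent bins its neighbors' relative distances into a fixed-length vector that is invariant to the number and ordering of neighbors, which is what makes it usable as a policy input across swarm sizes. A Python sketch under our own choice of range and bins (the paper's exact features differ):

    import numpy as np

    def neighborhood_histogram(own_pos, neighbor_pos, r_max=2.0, n_bins=8):
        """Fixed-length, permutation-invariant observation for one agent:
        a normalized histogram of distances to neighbors within r_max.
        Bin edges and normalization are our choices, not the paper's."""
        if len(neighbor_pos) == 0:
            return np.zeros(n_bins)
        d = np.linalg.norm(np.asarray(neighbor_pos) - np.asarray(own_pos), axis=1)
        d = d[d <= r_max]                   # limited local sensing range
        hist, _ = np.histogram(d, bins=n_bins, range=(0.0, r_max))
        return hist / max(hist.sum(), 1)    # normalize so swarm size cancels

    # Example: one agent observing three neighbors in the plane.
    obs = neighborhood_histogram([0.0, 0.0], [[0.5, 0.0], [1.2, 0.3], [0.1, 1.6]])
    print(obs)  # same length regardless of how many neighbors are in range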
  • M. Imai, E. Sklar, T. J. Norman, and T. Komatsu, “Hai 2018 chairs' welcome.” 2018, p. III.
    [BibTeX] [Download PDF]
    @inproceedings{lincoln38541,
title = {HAI 2018 Chairs' Welcome},
    author = {M. Imai and Elizabeth Sklar and T.J. Norman and T. Komatsu},
    year = {2018},
    pages = {III},
    note = {cited By 0},
    journal = {HAI 2018 - Proceedings of the 6th International Conference on Human-Agent Interaction},
    url = {http://eprints.lincoln.ac.uk/id/eprint/38541/}
    }
  • N. Kokciyan, I. Sassoon, A. P. Young, S. Modgil, and S. Parsons, “Reasoning with metalevel argumentation frameworks in aspartix,” in Computational models of argument, Ios press, 2018, vol. 305, p. 463–464. doi:10.3233/978-1-61499-906-5-463
    [BibTeX] [Abstract] [Download PDF]

    In this demo paper, we propose an encoding for Metalevel Argumentation Frameworks (MAFs) to be used in Aspartix, an Answer Set Programming (ASP) approach to find the justified arguments of an AF [2]. MAFs provide a uniform encoding of object level Dung Frameworks and extensions thereof that include values, preferences and attacks on attacks (EAFs). The justification status of arguments in the object level AF can then be evaluated and explained through evaluation of the arguments in the MAF. The demo includes multiple examples from the literature to show the applicability of our proposed encoding for translating various object level AFs to the uniform language of MAFs.

    @incollection{lincoln38408,
    volume = {305},
    author = {N. Kokciyan and I. Sassoon and A.P. Young and S. Modgil and S. Parsons},
    series = {Frontiers in Artificial Intelligence and Applications},
    note = {cited By 0},
    booktitle = {Computational Models of Argument},
    title = {Reasoning with metalevel argumentation frameworks in aspartix},
    publisher = {IOS Press},
    year = {2018},
    journal = {Frontiers in Artificial Intelligence and Applications},
    doi = {10.3233/978-1-61499-906-5-463},
    pages = {463--464},
    url = {http://eprints.lincoln.ac.uk/id/eprint/38408/},
    abstract = {In this demo paper, we propose an encoding for Metalevel Argumentation Frameworks
    (MAFs) to be used in Aspartix, an Answer Set Programming (ASP) approach to find
    the justified arguments of an AF [2]. MAFs provide a uniform encoding of object level
    Dung Frameworks and extensions thereof that include values, preferences and attacks
    on attacks (EAFs). The justification status of arguments in the object level AF can then
    be evaluated and explained through evaluation of the arguments in the MAF. The demo
    includes multiple examples from the literature to show the applicability of our proposed
    encoding for translating various object level AFs to the uniform language of MAFs.}
    }
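    For readers unfamiliar with "finding the justified arguments of an AF": the object-level semantics that ASP solvers such as Aspartix compute can be illustrated with a small fixpoint computation. The Python sketch below computes the grounded extension of a Dung framework by iterating the characteristic function; it illustrates the underlying semantics only and is not the paper's metalevel encoding.

    def grounded_extension(arguments, attacks):
        """Grounded extension of a Dung AF by fixpoint iteration:
        repeatedly accept every argument all of whose attackers are
        already attacked by the accepted set (finite AFs)."""
        accepted = set()
        while True:
            defended = {
                a for a in arguments
                if all(any((d, b) in attacks for d in accepted)
                       for b in arguments if (b, a) in attacks)
            }
            if defended == accepted:
                return accepted
            accepted = defended

    # a attacks b, b attacks c: a is unattacked, and a defends c.
    print(sorted(grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")})))
    # -> ['a', 'c']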
  • L. Kunze, N. Hawes, T. Duckett, and M. Hanheide, “Introduction to the special issue on ai for long-term autonomy,” Ieee robotics and automation letters, vol. 3, iss. 4, p. 4431–4434, 2018. doi:10.1109/LRA.2018.2870466
    [BibTeX] [Abstract] [Download PDF]

    The papers in this special section focus on the use of artificial intelligence (AI) for long term autonomy. Autonomous systems have a long history in the fields of AI and robotics. However, only through recent advances in technology has it been possible to create autonomous systems capable of operating in long-term, real-world scenarios. Examples include autonomous robots that operate outdoors on land, in air, water, and space; and indoors in offices, care homes, and factories. Designing, developing, and maintaining intelligent autonomous systems that operate in real-world environments over long periods of time, i.e. weeks, months, or years, poses many challenges. This special issue focuses on such challenges and on ways to overcome them using methods from AI. Long-term autonomy can be viewed as both a challenge and an opportunity. The challenge of long-term autonomy requires system designers to ensure that an autonomous system can continue operating successfully according to its real-world application demands in unstructured and semi-structured environments. This means addressing issues related to hardware and software robustness (e.g., gluing in screws and profiling for memory leaks), as well as ensuring that all modules and functions of the system can deal with the variation in the environment and tasks that is expected to occur over its operating time.

    @article{lincoln34133,
    volume = {3},
    number = {4},
    author = {Lars Kunze and Nick Hawes and Tom Duckett and Marc Hanheide},
    title = {Introduction to the Special Issue on AI for Long-Term Autonomy},
    publisher = {IEEE},
    year = {2018},
    journal = {IEEE Robotics and Automation Letters},
    doi = {10.1109/LRA.2018.2870466},
    pages = {4431--4434},
    url = {http://eprints.lincoln.ac.uk/id/eprint/34133/},
    abstract = {The papers in this special section focus on the use of artificial intelligence (AI) for long term autonomy. Autonomous systems have a long history in the fields of AI and robotics. However, only through recent advances in technology has it been possible to create autonomous systems capable of operating in long-term, real-world scenarios. Examples include autonomous robots that operate outdoors on land, in air, water, and space; and indoors in offices, care homes, and factories. Designing, developing, and maintaining intelligent autonomous systems that operate in real-world environments over long periods of time, i.e. weeks, months, or years, poses many challenges. This special issue focuses on such challenges and on ways to overcome them using methods from AI. Long-term autonomy can be viewed as both a challenge and an opportunity. The challenge of long-term autonomy requires system designers to ensure that an autonomous system can continue operating successfully according to its real-world application demands in unstructured and semi-structured environments. This means addressing issues related to hardware and software robustness (e.g., gluing in screws and profiling for memory leaks), as well as ensuring that all modules and functions of the system can deal with the variation in the environment and tasks that is expected to occur over its operating time.}
    }
  • L. Kunze, N. Hawes, T. Duckett, M. Hanheide, and T. Krajnik, “Artificial intelligence for long-term robot autonomy: a survey,” Ieee robotics and automation letters, vol. 3, iss. 4, p. 4023–4030, 2018. doi:10.1109/LRA.2018.2860628
    [BibTeX] [Abstract] [Download PDF]

    Autonomous systems will play an essential role in many applications across diverse domains including space, marine, air, field, road, and service robotics. They will assist us in our daily routines and perform dangerous, dirty and dull tasks. However, enabling robotic systems to perform autonomously in complex, real-world scenarios over extended time periods (i.e. weeks, months, or years) poses many challenges. Some of these have been investigated by sub-disciplines of Artificial Intelligence (AI) including navigation & mapping, perception, knowledge representation & reasoning, planning, interaction, and learning. The different sub-disciplines have developed techniques that, when re-integrated within an autonomous system, can enable robots to operate effectively in complex, long-term scenarios. In this paper, we survey and discuss AI techniques as 'enablers' for long-term robot autonomy, current progress in integrating these techniques within long-running robotic systems, and the future challenges and opportunities for AI in long-term autonomy.

    @article{lincoln32829,
    volume = {3},
    number = {4},
    author = {Lars Kunze and Nick Hawes and Tom Duckett and Marc Hanheide and Tomas Krajnik},
    title = {Artificial Intelligence for Long-Term Robot Autonomy: A Survey},
    publisher = {IEEE},
    year = {2018},
    journal = {IEEE Robotics and Automation Letters},
    doi = {10.1109/LRA.2018.2860628},
    pages = {4023--4030},
    url = {http://eprints.lincoln.ac.uk/id/eprint/32829/},
abstract = {Autonomous systems will play an essential role in many applications across diverse domains including space, marine, air, field, road, and service robotics. They will assist us in our daily routines and perform dangerous, dirty and dull tasks. However, enabling robotic systems to perform autonomously in complex, real-world scenarios over extended time periods (i.e. weeks, months, or years) poses many challenges. Some of these have been investigated by sub-disciplines of Artificial Intelligence (AI) including navigation \& mapping, perception, knowledge representation \& reasoning, planning, interaction, and learning. The different sub-disciplines have developed techniques that, when re-integrated within an autonomous system, can enable robots to operate effectively in complex, long-term scenarios. In this paper, we survey and discuss AI techniques as 'enablers' for long-term robot autonomy, current progress in integrating these techniques within long-running robotic systems, and the future challenges and opportunities for AI in long-term autonomy.}
    }
  • W. Lewinger, F. Comin, M. Matthews, and C. Saaj, “Earth analogue testing and analysis of martian duricrust properties,” in 14th symposium on advanced space technologies in robotics and automation, 2018, p. 567–579. doi:10.1016/j.actaastro.2018.05.025
    [BibTeX] [Abstract] [Download PDF]

    Previous and current Mars rover missions have noted a nearly ubiquitous presence of duricrusts on the planet surface. Duricrusts are thin, brittle layers of cemented regolith that cover the underlying terrain. In some cases, the duricrust hides safe or relatively safe terrain underneath the top soil. However, as was observed by both Mars exploration rovers, Spirit and Opportunity, such crusts can also hide loose, untrafficable terrain, leading to Spirit becoming permanently incapacitated in 2009. Whilst several reports of the Martian surface have indicated the presence of duricrusts, none have been able to provide details on the physical properties of the material, which may indicate the level of safe traversability of duricrust terrains. This paper presents the findings of testing terrestrially-created duricrusts with simulated Martian soil properties, in order to determine the properties of such duricrusts and to discover what level of hazard that they may represent (e.g. can vehicles traverse the duricrust surface without penetration to lower sub-surface soils?). Combinations of elements that have been observed in the Martian soil were used as the basis for forming the laboratory-created duricrusts. Variations in duricrust thickness, water content, and the iron oxide compound were investigated. As was observed throughout the testing process, duricrusts behave in a rather brittle fashion and are easily destroyed by low surface pressures. This indicates that duricrusts are not safe for traversing and they present a definite hazard for travelling on the Martian landscape when utilising only visual terrain classification, as the surface appearance is not necessarily representative of what may be lying beneath.

    @inproceedings{lincoln39622,
    volume = {152},
    author = {William Lewinger and Francisco Comin and Marcus Matthews and Chakravarthini Saaj},
    booktitle = {14th Symposium on Advanced Space Technologies in Robotics and Automation},
    title = {Earth analogue testing and analysis of Martian duricrust properties},
    publisher = {Elsevier},
    year = {2018},
    journal = {Acta Astronautica},
    doi = {10.1016/j.actaastro.2018.05.025},
    pages = {567--579},
    url = {http://eprints.lincoln.ac.uk/id/eprint/39622/},
abstract = {Previous and current Mars rover missions have noted a nearly ubiquitous presence of duricrusts on the planet surface. Duricrusts are thin, brittle layers of cemented regolith that cover the underlying terrain. In some cases, the duricrust hides safe or relatively safe terrain underneath the top soil. However, as was observed by both Mars exploration rovers, Spirit and Opportunity, such crusts can also hide loose, untrafficable terrain, leading to Spirit becoming permanently incapacitated in 2009. Whilst several reports of the Martian surface have indicated the presence of duricrusts, none have been able to provide details on the physical properties of the material, which may indicate the level of safe traversability of duricrust terrains. This paper presents the findings of testing terrestrially-created duricrusts with simulated Martian soil properties, in order to determine the properties of such duricrusts and to discover what level of hazard that they may represent (e.g. can vehicles traverse the duricrust surface without penetration to lower sub-surface soils?). Combinations of elements that have been observed in the Martian soil were used as the basis for forming the laboratory-created duricrusts. Variations in duricrust thickness, water content, and the iron oxide compound were investigated. As was observed throughout the testing process, duricrusts behave in a rather brittle fashion and are easily destroyed by low surface pressures. This indicates that duricrusts are not safe for traversing and they present a definite hazard for travelling on the Martian landscape when utilising only visual terrain classification, as the surface appearance is not necessarily representative of what may be lying beneath.}
    }
  • Z. Li, A. Cohen, and S. Parsons, “Two forms of minimality in aspic+,” Lecture notes in computer science (including subseries lecture notes in artificial intelligence and lecture notes in bioinformatics), vol. 10767, p. 203–218, 2018. doi:10.1007/978-3-030-01713-2_15
    [BibTeX] [Download PDF]
    @article{lincoln38404,
    volume = {10767},
    author = {Z. Li and A. Cohen and Simon Parsons},
    note = {cited By 0},
    title = {Two forms of minimality in ASPIC+},
    journal = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
doi = {10.1007/978-3-030-01713-2\_15},
    pages = {203--218},
    year = {2018},
    url = {http://eprints.lincoln.ac.uk/id/eprint/38404/}
    }
  • Z. Li, N. Oren, and S. Parsons, “On the links between argumentation-based reasoning and nonmonotonic reasoning,” Lecture notes in computer science (including subseries lecture notes in artificial intelligence and lecture notes in bioinformatics), vol. 10757, p. 67–85, 2018. doi:10.1007/978-3-319-75553-3_5
    [BibTeX] [Abstract] [Download PDF]

    In this paper we investigate the links between instantiated argumentation systems and the axioms for non-monotonic reasoning described in [15] with the aim of characterising the nature of argument based reasoning. In doing so, we consider two possible interpretations of the consequence relation, and describe which axioms are met by ASPIC+ under each of these interpretations. We then consider the links between these axioms and the rationality postulates. Our results indicate that argument based reasoning as characterised by ASPIC+ is, according to the axioms of [15], non-cumulative and non-monotonic, and therefore weaker than the weakest non-monotonic reasoning systems considered in [15]. This weakness underpins ASPIC+'s success in modelling other reasoning systems. We conclude by considering the relationship between ASPIC+ and other weak logical systems.

    @article{lincoln38405,
    volume = {10757},
    author = {Z. Li and N. Oren and S. Parsons},
    note = {cited By 0},
    title = {On the links between argumentation-based reasoning and nonmonotonic reasoning},
    publisher = {Springer},
    year = {2018},
    journal = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
    doi = {10.1007/978-3-319-75553-3\_5},
    pages = {67--85},
    url = {http://eprints.lincoln.ac.uk/id/eprint/38405/},
abstract = {In this paper we investigate the links between instantiated argumentation systems and the axioms for non-monotonic reasoning described in [15] with the aim of characterising the nature of argument based reasoning. In doing so, we consider two possible interpretations of the consequence relation, and describe which axioms are met by ASPIC+ under each of these interpretations. We then consider the links between these axioms and the rationality postulates. Our results indicate that argument based reasoning as characterised by ASPIC+ is{--}according to the axioms of [15]{--}non-cumulative and non-monotonic, and therefore weaker than the weakest non-monotonic reasoning systems considered in [15]. This weakness underpins ASPIC+'s success in modelling other reasoning systems. We conclude by considering the relationship between ASPIC+ and other weak logical systems.}
    }
  • K. Liakos, P. Busato, D. Moshou, S. Pearson, and D. Bochtis, “Machine learning in agriculture: a review,” Sensors, vol. 18, iss. 8, p. 2674, 2018. doi:10.3390/s18082674
    [BibTeX] [Abstract] [Download PDF]

    Machine learning has emerged with big data technologies and high-performance computing to create new opportunities for data intensive science in the multi-disciplinary agri-technologies domain. In this paper, we present a comprehensive review of research dedicated to applications of machine learning in agricultural production systems. The works analyzed were categorized in (a) crop management, including applications on yield prediction, disease detection, weed detection, crop quality, and species recognition; (b) livestock management, including applications on animal welfare and livestock production; (c) water management; and (d) soil management. The filtering and classification of the presented articles demonstrate how agriculture will benefit from machine learning technologies. By applying machine learning to sensor data, farm management systems are evolving into real time artificial intelligence enabled programs that provide rich recommendations and insights for farmer decision support and action.

    @article{lincoln33015,
    volume = {18},
    number = {8},
    author = {Konstantinos Liakos and Patrizia Busato and Dimitrios Moshou and Simon Pearson and Dionysis Bochtis},
    note = {This is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. (CC BY 4.0).},
    title = {Machine Learning in Agriculture: A Review},
    publisher = {MDPI},
    year = {2018},
    journal = {Sensors},
    doi = {10.3390/s18082674},
    pages = {2674},
    url = {http://eprints.lincoln.ac.uk/id/eprint/33015/},
abstract = {Machine learning has emerged with big data technologies and high-performance computing to create new opportunities for data intensive science in the multi-disciplinary agri-technologies domain. In this paper, we present a comprehensive review of research dedicated to applications of machine learning in agricultural production systems. The works analyzed were categorized in (a) crop management, including applications on yield prediction, disease detection, weed detection, crop quality, and species recognition; (b) livestock management, including applications on animal welfare and livestock production; (c) water management; and (d) soil management. The filtering and classification of the presented articles demonstrate how agriculture will benefit from machine learning technologies. By applying machine learning to sensor data, farm management systems are evolving into real time artificial intelligence enabled programs that provide rich recommendations and insights for farmer decision support and action.}
    }
  • P. Liu, G. Neumann, Q. Fu, S. Pearson, and H. Yu, “Energy-efficient design and control of a vibro-driven robot,” in 2018 ieee/rsj international conference on intelligent robots and systems (iros), 2018.
    [BibTeX] [Abstract] [Download PDF]

    Vibro-driven robotic (VDR) systems use stick-slip motions for locomotion. Due to the underactuated nature of the system, efficient design and control are still open problems. We present a new energy preserving design based on a spring-augmented pendulum. We indirectly control the friction-induced stick-slip motions by exploiting the passive dynamics in order to achieve an improvement in overall travelling distance and energy efficacy. Both collocated and non-collocated constraint conditions are elaborately analysed and considered to obtain a desired trajectory generation profile. For tracking control, we develop a partial feedback controller for the pendulum which counteracts the dynamic contributions from the platform. Comparative simulation studies show the effectiveness and intriguing performance of the proposed approach, while its feasibility is experimentally verified through a physical robot. Our robot is, to the best of our knowledge, the first nonlinear-motion prototype in the literature for VDR systems.

    @inproceedings{lincoln32540,
    booktitle = {2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
    title = {Energy-efficient design and control of a vibro-driven robot},
    author = {Pengcheng Liu and Gerhard Neumann and Qinbing Fu and Simon Pearson and Hongnian Yu},
    publisher = {IEEE},
    year = {2018},
    url = {http://eprints.lincoln.ac.uk/id/eprint/32540/},
abstract = {Vibro-driven robotic (VDR) systems use stick-slip motions for locomotion. Due to the underactuated nature of the system, efficient design and control are still open problems. We present a new energy preserving design based on a spring-augmented pendulum. We indirectly control the friction-induced stick-slip motions by exploiting the passive dynamics in order to achieve an improvement in overall travelling distance and energy efficacy. Both collocated and non-collocated constraint conditions are elaborately analysed and considered to obtain a desired trajectory generation profile. For tracking control, we develop a partial feedback controller for the pendulum which counteracts the dynamic contributions from the platform. Comparative simulation studies show the effectiveness and intriguing performance of the proposed approach, while its feasibility is experimentally verified through a physical robot. Our robot is, to the best of our knowledge, the first nonlinear-motion prototype in the literature for VDR systems.}
    }
  • S. M. Mellado, G. Cielniak, T. Krajník, and T. Duckett, “Modelling and predicting rhythmic flow patterns in dynamic environments,” in Taros, 2018, p. 135–146.
    [BibTeX] [Abstract] [Download PDF]

    We present a time-dependent probabilistic map able to model and predict flow patterns of people in indoor environments. The proposed representation models the likelihood of motion direction on a grid-based map by a set of harmonic functions, which efficiently capture long-term (minutes to weeks) variations of crowd movements over time. The evaluation, performed on data from two real environments, shows that the proposed model enables prediction of human movement patterns in the future. Potential applications include human-aware motion planning, improving the efficiency and safety of robot navigation.

    @inproceedings{lincoln33448,
    booktitle = {TAROS},
    title = {Modelling and Predicting Rhythmic Flow Patterns in Dynamic Environments},
    author = {Sergi Molina Mellado and Grzegorz Cielniak and Tom{\'a}{\v s} Krajn{\'i}k and Tom Duckett},
    year = {2018},
    pages = {135--146},
    url = {http://eprints.lincoln.ac.uk/id/eprint/33448/},
    abstract = {We present a time-dependent probabilistic map able to model and predict flow patterns of people in indoor environments. The proposed representation models the likelihood of motion direction on a grid-based map by a set of harmonic functions, which efficiently capture long-term (minutes to weeks) variations of crowd movements over time. The evaluation, performed on data from two real environments, shows that the proposed model enables prediction of human movement patterns in the future. Potential applications include human-aware motion planning, improving the efficiency and safety of robot navigation.}
    }
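    The "set of harmonic functions" used here follows the frequency-map idea: per grid cell and motion direction, the probability of observing motion is modeled as a constant plus a few dominant periodic components, so predictions for a future time cost only a handful of cosine evaluations. A minimal Python sketch with an assumed single daily period (the real model identifies its dominant periods from data):

    import numpy as np

    def harmonic_model(t, p0, components):
        """Probability of a motion direction at time t [s], modeled as a
        constant p0 plus harmonic components (amplitude, period [s], phase).
        The single daily component below is an assumed example."""
        p = p0 + sum(a * np.cos(2 * np.pi * t / T + phi) for a, T, phi in components)
        return np.clip(p, 0.0, 1.0)   # keep it a valid probability

    DAY = 24 * 3600
    components = [(0.3, DAY, 0.0)]    # one dominant daily rhythm (assumed)

    # Predict how likely one flow direction is in this cell at 09:00 vs. 21:00.
    for hour in (9, 21):
        p = harmonic_model(hour * 3600, p0=0.4, components=components)
        print(f"{hour:02d}:00 -> p = {p:.2f}")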
  • A. R. Panisson, S. Parsons, P. McBurney, and R. H. Bordini, “Choosing appropriate arguments from trustworthy sources,” Frontiers in artificial intelligence and applications, vol. 305, p. 345–352, 2018. doi:10.3233/978-1-61499-906-5-345
    [BibTeX] [Download PDF]
    @article{lincoln38406,
    volume = {305},
    author = {A.R. Panisson and Simon Parsons and P. McBurney and R.H. Bordini},
    note = {cited By 0},
    title = {Choosing appropriate arguments from trustworthy sources},
    journal = {Frontiers in Artificial Intelligence and Applications},
    doi = {10.3233/978-1-61499-906-5-345},
    pages = {345--352},
    year = {2018},
    url = {http://eprints.lincoln.ac.uk/id/eprint/38406/}
    }
  • A. R. Panisson, S. Sarkadi, P. McBurney, S. Parsons, and R. H. Bordini, “Lies, bullshit, and deception in agent-oriented programming languages.” 2018, p. 50–61.
    [BibTeX] [Download PDF]
    @inproceedings{lincoln38409,
    volume = {2154},
    title = {Lies, bullshit, and deception in agent-oriented programming languages},
    author = {A.R. Panisson and S. Sarkadi and P. McBurney and Simon Parsons and R.H. Bordini},
    year = {2018},
    pages = {50--61},
    note = {cited By 2},
    journal = {CEUR Workshop Proceedings},
    url = {http://eprints.lincoln.ac.uk/id/eprint/38409/}
    }
  • J. Raphael and E. Sklar, “Towards dynamic coalition formation for intelligent traffic management,” Lecture notes in computer science (including subseries lecture notes in artificial intelligence and lecture notes in bioinformatics), vol. 10767, p. 400–414, 2018. doi:10.1007/978-3-030-01713-2
    [BibTeX] [Download PDF]
    @article{lincoln38545,
    volume = {10767},
    author = {J. Raphael and Elizabeth Sklar},
    note = {cited By 0},
    title = {Towards dynamic coalition formation for intelligent traffic management},
    journal = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
    doi = {10.1007/978-3-030-01713-2},
    pages = {400--414},
    year = {2018},
    url = {http://eprints.lincoln.ac.uk/id/eprint/38545/}
    }
  • I. Saleh, A. Postnikov, C. Bingham, R. Bickerton, A. Zolotas, and S. Pearson, “Aggregated power profile of a large network of refrigeration compressors following ffr dsr events,” in International conference on energy engineering and smart grids, 2018.
    [BibTeX] [Abstract] [Download PDF]

    Refrigeration systems and HVAC are estimated to consume approximately 14% of the UK's electricity and could make a significant contribution towards the application of DSR. In this paper, active power profiles of single and multi-pack refrigeration systems responding to DSR events are experimentally investigated. Further, a large population of 300 packs (approx. 1.5 MW capacity) is simulated to investigate the potential of delivering DSR using a network of refrigeration compressors, in common with commercial retail refrigeration systems. Two scenarios of responding to DSR are adopted for the studies viz. with and without applying a suction pressure offset after an initial 30 second shut-down of the compressors. The experiments are conducted at the Refrigeration Research Centre at the University of Lincoln. Simulations of the active power profile for the compressors following triggered DSR events are realized based on a previously reported model of the thermodynamic properties of the refrigeration system. A Simulink model of a three phase power supply system is used to determine the impact of compressor operation on the power system performance, and in particular, on the line voltage of the local power supply system. The authors demonstrate how the active power and the drawn current of the multi-pack refrigeration system are affected following a rapid shut down and subsequent return to operation. Specifically, it is shown that there is a significant increase in power consumption post DSR, approximately two times higher than during normal operation, particularly when many packs of compressors are synchronized post DSR event, which can have a significant effect on the line voltage of the power supply.

    @inproceedings{lincoln32931,
    booktitle = {International Conference on Energy Engineering and Smart Grids},
    title = {Aggregated power profile of a large network of refrigeration compressors following FFR DSR events},
    author = {Ibrahim Saleh and Andrey Postnikov and Chris Bingham and Ronald Bickerton and Argyrios Zolotas and Simon Pearson},
    publisher = {ESG2018},
    year = {2018},
    url = {http://eprints.lincoln.ac.uk/id/eprint/32931/},
abstract = {Refrigeration systems and HVAC are estimated to consume approximately 14\% of the UK's electricity and could make a significant contribution towards the application of DSR. In this paper, active power profiles of single and multi-pack refrigeration systems responding to DSR events are experimentally investigated. Further, a large population of 300 packs (approx. 1.5 MW capacity) is simulated to investigate the potential of delivering DSR using a network of refrigeration compressors, in common with commercial retail refrigeration systems. Two scenarios of responding to DSR are adopted for the studies viz. with and without applying a suction pressure offset after an initial 30 second shut-down of the compressors. The experiments are conducted at the Refrigeration Research Centre at the University of Lincoln. Simulations of the active power profile for the compressors following triggered DSR events are realized based on a previously reported model of the thermodynamic properties of the refrigeration system. A Simulink model of a three phase power supply system is used to determine the impact of compressor operation on the power system performance, and in particular, on the line voltage of the local power supply system. The authors demonstrate how the active power and the drawn current of the multi-pack refrigeration system are affected following a rapid shut down and subsequent return to operation. Specifically, it is shown that there is a significant increase in power consumption post DSR, approximately two times higher than during normal operation, particularly when many packs of compressors are synchronized post DSR event, which can have a significant effect on the line voltage of the power supply.}
    }
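    The synchronization effect behind the reported ~2x rebound can be illustrated with a toy fleet model: compressors normally cycle with random phases, so only a fraction draw power at any instant, but a forced shut-down followed by a simultaneous restart aligns their duty cycles. All numbers in this Python sketch are illustrative assumptions, not the paper's thermodynamic model.

    import numpy as np

    rng = np.random.default_rng(0)

    N_PACKS, P_PACK_KW = 300, 5.0    # fleet size and per-pack draw (assumed)
    PERIOD, DUTY = 600.0, 0.5        # 10 min on/off cycle, 50% duty (assumed)
    T_EVENT, T_HOLD = 1800.0, 30.0   # DSR event time and 30 s shut-down hold

    phase = rng.uniform(0, PERIOD, N_PACKS)

    def fleet_power(t):
        """Aggregate power [kW] at time t [s]."""
        if T_EVENT <= t < T_EVENT + T_HOLD:
            return 0.0                         # all packs shed for the event
        if t >= T_EVENT + T_HOLD:
            # restart re-synchronizes the fleet: all phases align
            local = (t - T_EVENT - T_HOLD) % PERIOD
            return N_PACKS * P_PACK_KW if local < DUTY * PERIOD else 0.0
        # normal operation: de-phased packs, so roughly DUTY of them run
        on = ((t + phase) % PERIOD) < DUTY * PERIOD
        return on.sum() * P_PACK_KW

    print(f"before event:  {fleet_power(1000):6.0f} kW")   # ~ N * P * DUTY
    print(f"after restart: {fleet_power(1840):6.0f} kW")   # ~ N * P, i.e. ~2x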
  • E. Sklar and M. Q. Azhar, “Explanation through argumentation.” 2018, p. 277–285. doi:10.1145/3284432.3284473
    [BibTeX] [Download PDF]
    @inproceedings{lincoln38540,
    title = {Explanation through argumentation},
    author = {Elizabeth Sklar and M.Q. Azhar},
    year = {2018},
    pages = {277--285},
    doi = {10.1145/3284432.3284473},
    note = {cited By 0},
    journal = {HAI 2018 - Proceedings of the 6th International Conference on Human-Agent Interaction},
    url = {http://eprints.lincoln.ac.uk/id/eprint/38540/}
    }
  • A. P. Young, N. Kokciyan, I. Sassoon, S. Modgil, and S. Parsons, “Instantiating metalevel argumentation frameworks,” in Computational models of argument, Ios press, 2018, vol. 305, p. 97–108. doi:10.3233/978-1-61499-906-5-97
    [BibTeX] [Abstract] [Download PDF]

    We directly instantiate metalevel argumentation frameworks (MAFs) to enable argumentation-based reasoning about information relevant to various applications. The advantage of this is that information that typically cannot be incorporated via the instantiation of object-level argumentation frameworks can now be incorporated, in particular information referencing (1) preferences over arguments, (2) the rationale for attacks, and (3) the dialectical effect of critical questions that shifts the burden of proof when posed. We achieve this by using a variant of ASPIC+ and a higher-order typed language that can reference object-level formulae and arguments. We illustrate these representational advantages with a running example from clinical decision support.

    @incollection{lincoln38407,
    volume = {305},
    author = {A.P. Young and N. Kokciyan and I. Sassoon and S. Modgil and S. Parsons},
    series = {Frontiers in Artificial Intelligence and Applications},
    note = {cited By 1},
    booktitle = {Computational Models of Argument},
    title = {Instantiating metalevel argumentation frameworks},
    publisher = {IOS Press},
    year = {2018},
    journal = {Frontiers in Artificial Intelligence and Applications},
    doi = {10.3233/978-1-61499-906-5-97},
    pages = {97--108},
    url = {http://eprints.lincoln.ac.uk/id/eprint/38407/},
    abstract = {We directly instantiate metalevel argumentation frameworks (MAFs) to enable argumentation-based reasoning about information relevant to various applications. The advantage of this is that information that typically cannot be incorporated via the instantiation of object-level argumentation frameworks can now be incorporated, in particular information referencing (1) preferences over arguments, (2) the rationale for attacks, and (3) the dialectical effect of critical questions that shifts the burden of proof when posed. We achieve this by using a variant of ASPIC+ and a higher-order typed language that can reference object-level formulae and arguments. We illustrate these representational advantages with a running example from clinical decision support.}
    }

2017

  • F. J. Comin and C. M. Saaj, “Models for slip estimation and soft terrain characterization with multilegged wheel-legs,” Ieee transactions on robotics, vol. 33, iss. 6, p. 1438–1452, 2017. doi:10.1109/TRO.2017.2723904
    [BibTeX] [Abstract] [Download PDF]

    Successful operation of off-road mobile robots faces the challenge of mobility hazards posed by soft, deformable terrain, e.g., sand traps. The slip caused by these hazards has a significant impact on tractive efficiency, leading to complete immobilization in extreme circumstances. This paper addresses the interaction between dry frictional soil and the multilegged wheel-leg concept, with the aim of exploiting its enhanced mobility for safe, in situ terrain sensing. The influence of multiple legs and different foot designs on wheel-leg-soil interaction is analyzed by incorporating these aspects to an existing terradynamics model. In addition, new theoretical models are proposed and experimentally validated to relate wheel-leg slip to both motor torque and stick-slip vibrations. These models, which are capable of estimating wheel-leg slip from purely proprioceptive sensors, are then applied in combination with detected wheel-leg sinkage to successfully characterize the load bearing and shear strength properties of different types of deformable soil. The main contribution of this paper enables nongeometric hazard detection based on detected wheel-leg slip and sinkage.

    @article{lincoln37397,
    volume = {33},
    number = {6},
    month = {December},
    author = {F.J. Comin and C. M. Saaj},
    note = {cited By 0},
    title = {Models for slip estimation and soft terrain characterization with multilegged wheel-legs},
    publisher = {IEEE},
    year = {2017},
    journal = {IEEE Transactions on Robotics},
    doi = {10.1109/TRO.2017.2723904},
    pages = {1438--1452},
    url = {http://eprints.lincoln.ac.uk/id/eprint/37397/},
    abstract = {Successful operation of off-road mobile robots faces the challenge of mobility hazards posed by soft, deformable terrain, e.g., sand traps. The slip caused by these hazards has a significant impact on tractive efficiency, leading to complete immobilization in extreme circumstances. This paper addresses the interaction between dry frictional soil and the multilegged wheel-leg concept, with the aim of exploiting its enhanced mobility for safe, in situ terrain sensing. The influence of multiple legs and different foot designs on wheel-leg-soil interaction is analyzed by incorporating these aspects to an existing terradynamics model. In addition, new theoretical models are proposed and experimentally validated to relate wheel-leg slip to both motor torque and stick-slip vibrations. These models, which are capable of estimating wheel-leg slip from purely proprioceptive sensors, are then applied in combination with detected wheel-leg sinkage to successfully characterize the load bearing and shear strength properties of different types of deformable soil. The main contribution of this paper enables nongeometric hazard detection based on detected wheel-leg slip and sinkage.}
    }
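    The slip models above are stated only at a high level in the abstract. As a minimal sketch of the quantity being estimated, the following Python snippet shows the standard longitudinal slip ratio together with a purely illustrative linear torque-to-slip regression; the data and the linear form are hypothetical stand-ins, not the paper's physically derived models.

    ```python
    import numpy as np

    def slip_ratio(v_actual, omega, r_eff):
        """Standard longitudinal slip: i = 1 - v / (omega * r_eff), where
        v_actual is the body velocity [m/s], omega the wheel-leg angular
        rate [rad/s] and r_eff the effective rolling radius [m]."""
        return 1.0 - v_actual / (omega * r_eff)

    # Hypothetical proprioceptive estimator: regress slip on measured motor
    # torque (the paper derives physically grounded models instead).
    torque = np.array([1.0, 1.5, 2.0, 2.5, 3.0])      # N*m, illustrative
    slip = np.array([0.05, 0.12, 0.22, 0.35, 0.50])   # matching slip values
    a, b = np.polyfit(torque, slip, 1)                # least-squares line
    print(f"predicted slip at 2.2 N*m: {a * 2.2 + b:.2f}")
    ```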
  • D. Wang, X. Hou, J. Xu, S. Yue, and C. Liu, “Traffic sign detection using a cascade method with fast feature extraction and saliency test,” Ieee transactions on intelligent transportation systems, vol. 18, iss. 12, p. 3290–3302, 2017. doi:10.1109/tits.2017.2682181
    [BibTeX] [Abstract] [Download PDF]

    Automatic traffic sign detection is challenging due to the complexity of scene images, and fast detection is required in real applications such as driver assistance systems. In this paper, we propose a fast traffic sign detection method based on a cascade method with saliency test and neighboring scale awareness. In the cascade method, feature maps of several channels are extracted efficiently using approximation techniques. Sliding windows are pruned hierarchically using coarse-to-fine classifiers and the correlation between neighboring scales. The cascade system has only one free parameter, while the multiple thresholds are selected by a data-driven approach. To further increase speed, we also use a novel saliency test based on mid-level features to pre-prune background windows. Experiments on two public traffic sign data sets show that the proposed method achieves competing performance and runs 27 times as fast as most of the state-of-the-art methods.

    @article{lincoln27022,
    volume = {18},
    number = {12},
    month = {December},
    author = {Dongdong Wang and Xinwen Hou and Jiawei Xu and Shigang Yue and Cheng-Lin Liu},
    title = {Traffic sign detection using a cascade method with fast feature extraction and saliency test},
    publisher = {IEEE},
    year = {2017},
    journal = {IEEE Transactions on Intelligent Transportation Systems},
    doi = {10.1109/tits.2017.2682181},
    pages = {3290--3302},
    url = {http://eprints.lincoln.ac.uk/id/eprint/27022/},
    abstract = {Automatic traffic sign detection is challenging due to the complexity of scene images, and fast detection is required in real applications such as driver assistance systems. In this paper, we propose a fast traffic sign detection method based on a cascade method with saliency test and neighboring scale awareness. In the cascade method, feature maps of several channels are extracted efficiently using approximation techniques. Sliding windows are pruned hierarchically using coarse-to-fine classifiers and the correlation between neighboring scales. The cascade system has only one free parameter, while the multiple thresholds are selected by a data-driven approach. To further increase speed, we also use a novel saliency test based on mid-level features to pre-prune background windows. Experiments on two public traffic sign data sets show that the proposed method achieves competing performance and runs 27 times as fast as most of the state-of-the-art methods.}
    }
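    To make the coarse-to-fine idea above concrete, here is a hedged toy sketch of a three-stage cascade in which a cheap saliency test pre-prunes windows before progressively more expensive stages reject the rest; the scoring functions and thresholds are placeholders, not the paper's channel features or data-driven thresholds.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Placeholder scores standing in for channel-feature classifier responses.
    def saliency(win):
        return win.mean()           # very cheap mid-level cue

    def coarse(win):
        return win.std()            # cheap early classifier stage

    def fine(win):
        return float(win.max())     # expensive final stage

    def cascade_detect(windows, t_sal=0.52, t_coarse=0.29, t_fine=0.99):
        """Keep only windows surviving all three stages; most background
        windows are rejected by the cheap early tests, which is where a
        cascade gets its speed."""
        kept = []
        for win in windows:
            if saliency(win) < t_sal:       # saliency pre-pruning
                continue
            if coarse(win) < t_coarse:      # coarse stage
                continue
            if fine(win) >= t_fine:         # fine stage accepts
                kept.append(win)
        return kept

    windows = [rng.random((8, 8)) for _ in range(1000)]
    print(f"{len(cascade_detect(windows))} of {len(windows)} windows survive")
    ```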
  • M. T. Lazaro, G. Grisetti, L. Iocchi, J. P. Fentanes, and M. Hanheide, “A lightweight navigation system for mobile robots,” in Iberian robotics conference, 2017, p. 295–306. doi:10.1007/978-3-319-70836-2_25
    [BibTeX] [Abstract] [Download PDF]

    © Springer International Publishing AG 2018. In this paper, we describe a navigation system requiring very few computational resources, but still providing performance comparable with commonly used tools in the ROS universe. This lightweight navigation system is thus suitable for robots with low computational resources and provides interfaces for both ROS and NAOqi middlewares. We have successfully evaluated the software on different robots and in different situations, including SoftBank Pepper robot for RoboCup@Home SSPL competitions and on small home-made robots for RoboCup@Home Education workshops. The developed software is well documented and easy to understand. It is released open-source and as Debian package to facilitate ease of use, in particular for the young researchers participating in robotic competitions and for educational activities.

    @inproceedings{lincoln37349,
    volume = {694},
    month = {December},
    author = {Maria Teresa Lazaro and G. Grisetti and Luca Iocchi and Jaime Pulido Fentanes and Marc Hanheide},
    booktitle = {Iberian Robotics conference},
    title = {A Lightweight Navigation System for Mobile Robots},
    doi = {10.1007/978-3-319-70836-2\_25},
    pages = {295--306},
    year = {2017},
    url = {http://eprints.lincoln.ac.uk/id/eprint/37349/},
    abstract = {{\copyright} Springer International Publishing AG 2018. In this paper, we describe a navigation system requiring very few computational resources, but still providing performance comparable with commonly used tools in the ROS universe. This lightweight navigation system is thus suitable for robots with low computational resources and provides interfaces for both ROS and NAOqi middlewares. We have successfully evaluated the software on different robots and in different situations, including SoftBank Pepper robot for RoboCup@Home SSPL competitions and on small home-made robots for RoboCup@Home Education workshops. The developed software is well documented and easy to understand. It is released open-source and as Debian package to facilitate ease of use, in particular for the young researchers participating in robotic competitions and for educational activities.}
    }
  • M. Heshmat, M. Fernandez-Carmona, Z. Yan, and N. Bellotto, “Active human detection with a mobile robot,” in Uk-ras conference on robotics and autonomous systems, 2017.
    [BibTeX] [Abstract] [Download PDF]

    The problem of active human detection with a mobile robot equipped with an RGB-D camera is considered in this work. Traditional human detection algorithms for indoor mobile robots face several challenges, including occlusions due to cluttered dynamic environments, changing backgrounds, and large variety of human movements. Active human detection aims to improve classic detection systems by actively selecting new and potentially better observation points of the person. In this preliminary work, we present a system that actively guides a mobile robot towards high-confidence human detections, including initial simulation tests that highlight pros and cons of the proposed approach.

    @inproceedings{lincoln29946,
    booktitle = {UK-RAS Conference on Robotics and Autonomous Systems},
    month = {December},
    title = {Active human detection with a mobile robot},
    author = {Mohamed Heshmat and Manuel Fernandez-Carmona and Zhi Yan and Nicola Bellotto},
    year = {2017},
    url = {http://eprints.lincoln.ac.uk/id/eprint/29946/},
    abstract = {The problem of active human detection with a mobile robot equipped with an RGB-D camera is considered in this work. Traditional human detection algorithms for indoor mobile robots face several challenges, including occlusions due to cluttered dynamic environments, changing backgrounds, and large variety of human movements. Active human detection aims to improve classic detection systems by actively selecting new and potentially better observation points of the person. In this preliminary work, we present a system that actively guides a mobile robot towards high-confidence human detections, including initial simulation tests that highlight pros and cons of the proposed approach.}
    }
  • S. M. Mellado, G. Cielniak, T. Krajnik, and T. Duckett, “Modelling and predicting rhythmic flow patterns in dynamic environments,” in Uk-ras network conference, 2017.
    [BibTeX] [Abstract] [Download PDF]

    In this paper, we introduce a time-dependent probabilistic map able to model and predict future flow patterns of people in indoor environments. The proposed representation models the likelihood of motion direction by a set of harmonic functions, which efficiently capture long-term (hours to months) variations of crowd movements over time, so from a robotics perspective, this model could be useful to add the predicted human behaviour into the control loop to influence the actions of the robot. Our approach is evaluated with data collected from a real environment and initial qualitative results are presented.

    @inproceedings{lincoln31053,
    booktitle = {UK-RAS Network Conference},
    month = {December},
    title = {Modelling and predicting rhythmic flow patterns in dynamic environments},
    author = {Sergi Molina Mellado and Grzegorz Cielniak and Tomas Krajnik and Tom Duckett},
    year = {2017},
    url = {http://eprints.lincoln.ac.uk/id/eprint/31053/},
    abstract = {In this paper, we introduce a time-dependent probabilistic map able to model and predict future flow patterns of people in indoor environments. The proposed representation models the likelihood of motion direction by a set of harmonic functions, which efficiently capture long-term (hours to months) variations of crowd movements over time, so from a robotics perspective, this model could be useful to add the predicted human behaviour into the control loop to influence the actions of the robot. Our approach is evaluated with data collected from a real environment and initial qualitative results are presented.}
    }
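    As a hedged illustration of representing periodic flow patterns with harmonic functions, the sketch below fits a small Fourier basis to time-stamped binary observations by least squares and evaluates the fitted likelihood at a query time; the basis choice and the synthetic data are assumptions, not the paper's exact spectral model.

    ```python
    import numpy as np

    def design_matrix(t, periods):
        """Columns: constant, then cos/sin pairs for each candidate period."""
        cols = [np.ones_like(t)]
        for T in periods:
            cols += [np.cos(2 * np.pi * t / T), np.sin(2 * np.pi * t / T)]
        return np.stack(cols, axis=1)

    def fit_harmonics(t, y, periods):
        """Least-squares fit of y(t) ~ c0 + sum_k a_k cos + b_k sin."""
        coef, *_ = np.linalg.lstsq(design_matrix(t, periods), y, rcond=None)
        return coef

    def predict(t, coef, periods):
        # Clip so the output stays a valid probability of observing motion.
        return np.clip(design_matrix(t, periods) @ coef, 0.0, 1.0)

    # Synthetic example: people mostly flow east around a daily rush hour.
    day = 24 * 3600
    t = np.arange(0, 14 * day, 600.0)
    y = (np.sin(2 * np.pi * t / day) > 0.8).astype(float)
    coef = fit_harmonics(t, y, periods=[day, day / 2])
    p = predict(np.array([8 * 3600.0]), coef, [day, day / 2])[0]
    print(f"P(eastward flow at 08:00) = {p:.2f}")
    ```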
  • J. P. Fentanes, C. Dondrup, and M. Hanheide, “Navigation testing for continuous integration in robotics,” in Uk-ras conference on robotics and autonomous systems, 2017.
    [BibTeX] [Abstract] [Download PDF]

    Robots working in real-world applications need to be robust and reliable. However, ensuring robust software in an academic development environment with dozens of developers poses a significant challenge. This work presents a testing framework, successfully employed in a large-scale integrated robotics project, based on continuous integration and the fork-and-pull model of software development, implementing automated system regression testing for robot navigation. It presents a framework suitable for both regression testing and also providing processes for parameter optimisation and benchmarking.

    @inproceedings{lincoln31547,
    booktitle = {UK-RAS Conference on Robotics and Autonomous Systems},
    month = {December},
    title = {Navigation testing for continuous integration in robotics},
    author = {Jaime Pulido Fentanes and Christian Dondrup and Marc Hanheide},
    publisher = {UK-RAS Conference on Robotics and Autonomous Systems (RAS 2017)},
    year = {2017},
    url = {http://eprints.lincoln.ac.uk/id/eprint/31547/},
    abstract = {Robots working in real-world applications need to be robust and reliable. However, ensuring robust software in an academic development environment with dozens of developers poses a significant challenge. This work presents a testing framework, successfully employed in a large-scale integrated robotics project, based on continuous integration and the fork-and-pull model of software development, implementing automated system regression testing for robot navigation. It presents a framework suitable for both regression testing and also providing processes for parameter optimisation and benchmarking.}
    }
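    A minimal, framework-agnostic sketch of such a regression test is given below; `run_navigation_trial` is a hypothetical stand-in for launching the robot in simulation, and the pass thresholds are illustrative rather than taken from the paper.

    ```python
    import random

    def run_navigation_trial(seed):
        """Hypothetical stand-in for one simulated navigation run; a real
        harness would launch the simulator, send the robot to a goal and
        report the outcome and the time taken."""
        rng = random.Random(seed)
        return {"success": seed % 10 != 0,            # deterministic toy outcome
                "duration_s": rng.uniform(30.0, 90.0)}

    def test_navigation_regression(trials=20, min_success_rate=0.85,
                                   max_mean_s=80.0):
        """Fails the build when navigation success rate or speed regresses."""
        results = [run_navigation_trial(s) for s in range(trials)]
        rate = sum(r["success"] for r in results) / trials
        mean_t = sum(r["duration_s"] for r in results) / trials
        assert rate >= min_success_rate, f"success rate regressed: {rate:.2f}"
        assert mean_t <= max_mean_s, f"navigation slowed down: {mean_t:.1f}s"

    test_navigation_regression()
    print("navigation regression test passed")
    ```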
  • Q. Fu and S. Yue, “Mimicking fly motion tracking and fixation behaviors with a hybrid visual neural network,” in 2017 ieee international conference on robotics and biomimetics (robio), Ieee, 2017, p. 1636–1641.
    [BibTeX] [Abstract] [Download PDF]

    How do animals, e.g. insects, detect meaningful visual motion cues involving directional and locational information of moving objects in visual clutter accurately and efficiently? This open question has been very attractive for decades. In this paper, with respect to latest biological research progress made on motion detection circuitry, we conduct a novel hybrid visual neural network, combining the functionality of two bio-plausible, namely motion and position pathways explored in fly visual system, for mimicking the tracking and fixation behaviors. This modeling study extends a former direction selective neurons model to the higher level of behavior. The motivated algorithms can be used to guide a system that extracts location information on moving objects in a scene regardless of background clutter, using entirely low-level visual processing. We tested it against translational movements in synthetic and real-world scenes. The results demonstrated the following contributions: (1) Compared to conventional computer vision techniques, it turns out the computational simplicity of this model may benefit the utility in small robots for real time fixating. (2) The hybrid neural network structure fulfills the characteristics of a putative signal tuning map in physiology. (3) It also satisfies with a profound implication proposed by biologists: visual fixation behaviors could be simply tuned via only the position pathway; nevertheless, the motion-detecting pathway enhances the tracking precision.

    @incollection{lincoln28879,
    month = {December},
    author = {Qinbing Fu and Shigang Yue},
    note = {{\copyright} 2017 IEEE},
    booktitle = {2017 IEEE International Conference on Robotics and Biomimetics (ROBIO)},
    title = {Mimicking fly motion tracking and fixation behaviors with a hybrid visual neural network},
    publisher = {IEEE},
    pages = {1636--1641},
    year = {2017},
    url = {http://eprints.lincoln.ac.uk/id/eprint/28879/},
    abstract = {How do animals, e.g. insects, detect meaningful visual motion cues involving directional and locational information of moving objects in visual clutter accurately and efficiently? This open question has been very attractive for decades. In this paper, with respect to latest biological research progress made on motion detection circuitry, we conduct a novel hybrid visual neural network, combining the functionality of two bio-plausible, namely motion and position pathways explored in fly visual system, for mimicking the tracking and fixation behaviors. This modeling study extends a former direction selective neurons model to the higher level of behavior. The motivated algorithms can be used to guide a system that extracts location information on moving objects in a scene regardless of background clutter, using entirely low-level visual processing. We tested it against translational movements in synthetic and real-world scenes. The results demonstrated the following contributions: (1) Compared to conventional computer vision techniques, it turns out the computational simplicity of this model may benefit the utility in small robots for real time fixating. (2) The hybrid neural network structure fulfills the characteristics of a putative signal tuning map in physiology. (3) It also satisfies with a profound implication proposed by biologists: visual fixation behaviors could be simply tuned via only the position pathway; nevertheless, the motion-detecting pathway enhances the tracking precision.}
    }
  • C. Keeble, P. A. Thwaites, S. Barber, G. R. Law, and P. D. Baxter, “Adaptation of chain event graphs for use with case-control studies in epidemiology,” The international journal of biostatistics, vol. 13, iss. 2, 2017. doi:10.1515/ijb-2016-0073
    [BibTeX] [Abstract] [Download PDF]

    Case-control studies are used in epidemiology to try to uncover the causes of diseases, but are a retrospective study design known to suffer from non-participation and recall bias, which may explain their decreased popularity in recent years. Traditional analyses report usually only the odds ratio for given exposures and the binary disease status. Chain event graphs are a graphical representation of a statistical model derived from event trees which have been developed in artificial intelligence and statistics, and only recently introduced to the epidemiology literature. They are a modern Bayesian technique which enable prior knowledge to be incorporated into the data analysis using the agglomerative hierarchical clustering algorithm, used to form a suitable chain event graph. Additionally, they can account for missing data and be used to explore missingness mechanisms. Here we adapt the chain event graph framework to suit scenarios often encountered in case-control studies, to strengthen this study design which is time and financially efficient. We demonstrate eight adaptations to the graphs, which consist of two suitable for full case-control study analysis, four which can be used in interim analyses to explore biases, and two which aim to improve the ease and accuracy of analyses. The adaptations are illustrated with complete, reproducible, fully-interpreted examples, including the event tree and chain event graph. Chain event graphs are used here for the first time to summarise non-participation, data collection techniques, data reliability, and disease severity in case-control studies. We demonstrate how these features of a case-control study can be incorporated into the analysis to provide further insight, which can help to identify potential biases and lead to more accurate study results.

    @article{lincoln29511,
    volume = {13},
    number = {2},
    month = {December},
    author = {Claire Keeble and Peter Adam Thwaites and Stuart Barber and Graham Richard Law and Paul David Baxter},
    title = {Adaptation of chain event graphs for use with case-Control studies in epidemiology},
    publisher = {De Gruyter},
    year = {2017},
    journal = {The International Journal of Biostatistics},
    doi = {10.1515/ijb-2016-0073},
    url = {http://eprints.lincoln.ac.uk/id/eprint/29511/},
    abstract = {Case-control studies are used in epidemiology to try to uncover the causes of diseases, but are a retrospective study design known to suffer from non-participation and recall bias, which may explain their decreased popularity in recent years. Traditional analyses report usually only the odds ratio for given exposures and the binary disease status. Chain event graphs are a graphical representation of a statistical model derived from event trees which have been developed in artificial intelligence and statistics, and only recently introduced to the epidemiology literature. They are a modern Bayesian technique which enable prior knowledge to be incorporated into the data analysis using the agglomerative hierarchical clustering algorithm, used to form a suitable chain event graph. Additionally, they can account for missing data and be used to explore missingness mechanisms. Here we adapt the chain event graph framework to suit scenarios often encountered in case-control studies, to strengthen this study design which is time and financially efficient. We demonstrate eight adaptations to the graphs, which consist of two suitable for full case-control study analysis, four which can be used in interim analyses to explore biases, and two which aim to improve the ease and accuracy of analyses. The adaptations are illustrated with complete, reproducible, fully-interpreted examples, including the event tree and chain event graph. Chain event graphs are used here for the first time to summarise non-participation, data collection techniques, data reliability, and disease severity in case-control studies. We demonstrate how these features of a case-control study can be incorporated into the analysis to provide further insight, which can help to identify potential biases and lead to more accurate study results.}
    }
  • G. Maeda, M. Ewerton, G. Neumann, R. Lioutikov, and J. Peters, “Phase estimation for fast action recognition and trajectory generation in human–robot collaboration,” The international journal of robotics research, vol. 36, iss. 13-14, p. 1579–1594, 2017. doi:10.1177/0278364917693927
    [BibTeX] [Abstract] [Download PDF]

    This paper proposes a method to achieve fast and fluid human–robot interaction by estimating the progress of the movement of the human. The method allows the progress, also referred to as the phase of the movement, to be estimated even when observations of the human are partial and occluded; a problem typically found when using motion capture systems in cluttered environments. By leveraging on the framework of Interaction Probabilistic Movement Primitives, phase estimation makes it possible to classify the human action, and to generate a corresponding robot trajectory before the human finishes his/her movement. The method is therefore suited for semi-autonomous robots acting as assistants and coworkers. Since observations may be sparse, our method is based on computing the probability of different phase candidates to find the phase that best aligns the Interaction Probabilistic Movement Primitives with the current observations. The method is fundamentally different from approaches based on Dynamic Time Warping that must rely on a consistent stream of measurements at runtime. The resulting framework can achieve phase estimation, action recognition and robot trajectory coordination using a single probabilistic representation. We evaluated the method using a seven-degree-of-freedom lightweight robot arm equipped with a five-finger hand in single and multi-task collaborative experiments. We compare the accuracy achieved by phase estimation with our previous method based on dynamic time warping.

    @article{lincoln26734,
    volume = {36},
    number = {13-14},
    month = {December},
    author = {Guilherme Maeda and Marco Ewerton and Gerhard Neumann and Rudolf Lioutikov and Jan Peters},
    title = {Phase estimation for fast action recognition and trajectory generation in human-robot collaboration},
    publisher = {SAGE},
    year = {2017},
    journal = {The International Journal of Robotics Research},
    doi = {10.1177/0278364917693927},
    pages = {1579--1594},
    url = {http://eprints.lincoln.ac.uk/id/eprint/26734/},
    abstract = {This paper proposes a method to achieve fast and fluid human-robot interaction by estimating the progress of the movement of the human. The method allows the progress, also referred to as the phase of the movement, to be estimated even when observations of the human are partial and occluded; a problem typically found when using motion capture systems in cluttered environments. By leveraging on the framework of Interaction Probabilistic Movement Primitives, phase estimation makes it possible to classify the human action, and to generate a corresponding robot trajectory before the human finishes his/her movement. The method is therefore suited for semi-autonomous robots acting as assistants and coworkers. Since observations may be sparse, our method is based on computing the probability of different phase candidates to find the phase that best aligns the Interaction Probabilistic Movement Primitives with the current observations. The method is fundamentally different from approaches based on Dynamic Time Warping that must rely on a consistent stream of measurements at runtime. The resulting framework can achieve phase estimation, action recognition and robot trajectory coordination using a single probabilistic representation. We evaluated the method using a seven-degree-of-freedom lightweight robot arm equipped with a five-finger hand in single and multi-task collaborative experiments. We compare the accuracy achieved by phase estimation with our previous method based on dynamic time warping.}
    }
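    The core of phase estimation can be illustrated independently of the ProMP machinery: score a set of phase candidates by how well the time-rescaled reference movement explains a partial observation, and keep the best. The sketch below does exactly that with a single mean trajectory and a Gaussian observation model; the basis functions, conditioning and multi-task recognition of the paper are omitted.

    ```python
    import numpy as np

    def estimate_phase(obs_t, obs_y, ref, sigma=0.05,
                       candidates=np.linspace(0.1, 1.0, 91)):
        """obs_t: observation times normalised to [0, 1] over the observed
        window; obs_y: observed positions; ref: reference trajectory sampled
        on [0, 1]. Returns the phase z maximising the Gaussian likelihood of
        the observations under the time-rescaled reference."""
        s = np.linspace(0.0, 1.0, len(ref))
        best_z, best_ll = None, -np.inf
        for z in candidates:
            # At phase z, observation time t maps to reference position t*z.
            pred = np.interp(obs_t * z, s, ref)
            ll = -0.5 * np.sum((obs_y - pred) ** 2) / sigma**2
            if ll > best_ll:
                best_z, best_ll = z, ll
        return best_z

    ref = np.sin(np.linspace(0, np.pi, 200))          # reference movement
    obs_t = np.linspace(0, 1, 30)                     # partial observation
    obs_y = np.sin(np.linspace(0, 0.6 * np.pi, 30))   # human 60% through
    print(f"estimated phase: {estimate_phase(obs_t, obs_y, ref):.2f}")
    ```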
  • C. Wirth, R. Akrour, G. Neumann, and J. Fürnkranz, “A survey of preference-based reinforcement learning methods,” Journal of machine learning research, vol. 18, iss. 136, p. 1–46, 2017.
    [BibTeX] [Abstract] [Download PDF]

    Reinforcement learning (RL) techniques optimize the accumulated long-term reward of a suitably chosen reward function. However, designing such a reward function often requires a lot of task-specific prior knowledge. The designer needs to consider different objectives that do not only influence the learned behavior but also the learning progress. To alleviate these issues, preference-based reinforcement learning algorithms (PbRL) have been proposed that can directly learn from an expert’s preferences instead of a hand-designed numeric reward. PbRL has gained traction in recent years due to its ability to resolve the reward shaping problem, its ability to learn from non numeric rewards and the possibility to reduce the dependence on expert knowledge. We provide a unified framework for PbRL that describes the task formally and points out the different design principles that affect the evaluation task for the human as well as the computational complexity. The design principles include the type of feedback that is assumed, the representation that is learned to capture the preferences, the optimization problem that has to be solved as well as how the exploration/exploitation problem is tackled. Furthermore, we point out shortcomings of current algorithms, propose open research questions and briefly survey practical tasks that have been solved using PbRL.

    @article{lincoln30636,
    volume = {18},
    number = {136},
    month = {December},
    author = {Christian Wirth and Riad Akrour and Gerhard Neumann and Johannes F{\"u}rnkranz},
    title = {A survey of preference-based reinforcement learning methods},
    publisher = {Journal of Machine Learning Research / Massachusetts Institute of Technology Press (MIT Press) / Microtome},
    year = {2017},
    journal = {Journal of Machine Learning Research},
    pages = {1--46},
    url = {http://eprints.lincoln.ac.uk/id/eprint/30636/},
    abstract = {Reinforcement learning (RL) techniques optimize the accumulated long-term reward of a suitably chosen reward function. However, designing such a reward function often requires a lot of task-specific prior knowledge. The designer needs to consider different objectives that do not only influence the learned behavior but also the learning progress. To alleviate these issues, preference-based reinforcement learning algorithms (PbRL) have been proposed that can directly learn from an expert's preferences instead of a hand-designed numeric reward. PbRL has gained traction in recent years due to its ability to resolve the reward shaping problem, its ability to learn from non numeric rewards and the possibility to reduce the dependence on expert knowledge. We provide a unified framework for PbRL that describes the task formally and points out the different design principles that affect the evaluation task for the human as well as the computational complexity. The design principles include the type of feedback that is assumed, the representation that is learned to capture the preferences, the optimization problem that has to be solved as well as how the exploration/exploitation problem is tackled. Furthermore, we point out shortcomings of current algorithms, propose open research questions and briefly survey practical tasks that have been solved using PbRL.}
    }
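    One building block covered by such surveys is learning a utility function from pairwise trajectory preferences. The hedged sketch below fits a linear utility under a Bradley-Terry (logistic) preference likelihood by gradient ascent; the feature vectors and preference data are synthetic, and real PbRL methods the survey covers differ in feedback type and representation.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def learn_utility(prefs, feats, lr=0.5, iters=500):
        """prefs: list of (i, j) meaning trajectory i is preferred over j.
        feats[i]: feature vector of trajectory i. Bradley-Terry model:
        P(i > j) = sigmoid(w . (f_i - f_j)); maximise the log-likelihood."""
        w = np.zeros(feats.shape[1])
        for _ in range(iters):
            grad = np.zeros_like(w)
            for i, j in prefs:
                d = feats[i] - feats[j]
                # d * (1 - sigmoid(w.d)) is the log-likelihood gradient.
                grad += d * (1.0 - 1.0 / (1.0 + np.exp(-w @ d)))
            w += lr * grad / len(prefs)
        return w

    # Illustrative data: the "expert" prefers a large first feature.
    feats = rng.random((20, 3))
    prefs = [(i, j) for i in range(20) for j in range(20)
             if feats[i, 0] > feats[j, 0] + 0.2]
    print("learned weights:", np.round(learn_utility(prefs, feats), 2))
    ```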
  • B. Hu, S. Yue, and Z. Zhang, “A rotational motion perception neural network based on asymmetric spatiotemporal visual information processing,” Ieee transactions on neural networks and learning systems, vol. 28, iss. 11, p. 2803–2821, 2017. doi:10.1109/TNNLS.2016.2592969
    [BibTeX] [Abstract] [Download PDF]

    All complex motion patterns can be decomposed into several elements, including translation, expansion/contraction, and rotational motion. In biological vision systems, scientists have found that specific types of visual neurons have specific preferences to each of the three motion elements. There are computational models on translation and expansion/contraction perceptions; however, little has been done in the past to create computational models for rotational motion perception. To fill this gap, we proposed a neural network that utilizes a specific spatiotemporal arrangement of asymmetric lateral inhibited direction selective neural networks (DSNNs) for rotational motion perception. The proposed neural network consists of two parts-presynaptic and postsynaptic parts. In the presynaptic part, there are a number of lateral inhibited DSNNs to extract directional visual cues. In the postsynaptic part, similar to the arrangement of the directional columns in the cerebral cortex, these direction selective neurons are arranged in a cyclic order to perceive rotational motion cues. In the postsynaptic network, the delayed excitation from each direction selective neuron is multiplied by the gathered excitation from this neuron and its unilateral counterparts depending on which rotation, clockwise (cw) or counter-cw (ccw), to perceive. Systematic experiments under various conditions and settings have been carried out and validated the robustness and reliability of the proposed neural network in detecting cw or ccw rotational motion. This research is a critical step further toward dynamic visual information processing.

    @article{lincoln24936,
    volume = {28},
    number = {11},
    month = {November},
    author = {Bin Hu and Shigang Yue and Zhuhong Zhang},
    title = {A rotational motion perception neural network based on asymmetric spatiotemporal visual information processing},
    publisher = {IEEE},
    year = {2017},
    journal = {IEEE Transactions on Neural Networks and Learning Systems},
    doi = {10.1109/TNNLS.2016.2592969},
    pages = {2803--2821},
    url = {http://eprints.lincoln.ac.uk/id/eprint/24936/},
    abstract = {All complex motion patterns can be decomposed into several elements, including translation, expansion/contraction, and rotational motion. In biological vision systems, scientists have found that specific types of visual neurons have specific preferences to each of the three motion elements. There are computational models on translation and expansion/contraction perceptions; however, little has been done in the past to create computational models for rotational motion perception. To fill this gap, we proposed a neural network that utilizes a specific spatiotemporal arrangement of asymmetric lateral inhibited direction selective neural networks (DSNNs) for rotational motion perception. The proposed neural network consists of two parts-presynaptic and postsynaptic parts. In the presynaptic part, there are a number of lateral inhibited DSNNs to extract directional visual cues. In the postsynaptic part, similar to the arrangement of the directional columns in the cerebral cortex, these direction selective neurons are arranged in a cyclic order to perceive rotational motion cues. In the postsynaptic network, the delayed excitation from each direction selective neuron is multiplied by the gathered excitation from this neuron and its unilateral counterparts depending on which rotation, clockwise (cw) or counter-cw (ccw), to perceive. Systematic experiments under various conditions and settings have been carried out and validated the robustness and reliability of the proposed neural network in detecting cw or ccw rotational motion. This research is a critical step further toward dynamic visual information processing.}
    }
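    The geometric intuition behind the cyclic arrangement can be sketched without the full DSNN model: rotation reads out as motion tangent to a ring of direction-selective units, so comparing the response pattern against clockwise and counter-clockwise tangent templates indicates the rotation direction. This is only an illustration of that idea, not the paper's network.

    ```python
    import numpy as np

    def rotation_direction(pref_dirs, responses):
        """pref_dirs: preferred motion direction (radians) of direction-
        selective units placed at angles theta around a ring; responses:
        their activations. Correlate the pattern with the two tangent
        templates and report the better match."""
        n = len(responses)
        theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
        match_ccw = np.sum(responses * np.cos(pref_dirs - (theta + np.pi / 2)))
        match_cw = np.sum(responses * np.cos(pref_dirs - (theta - np.pi / 2)))
        return "ccw" if match_ccw > match_cw else "cw"

    # Synthetic: eight units on a ring, each reporting ccw-tangential motion.
    n = 8
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    print(rotation_direction(theta + np.pi / 2, np.ones(n)))  # -> ccw
    ```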
  • C. G. Sørensen, E. Rodias, and D. Bochtis, “Auto-steering and controlled traffic farming – route planning and economics,” in Precision agriculture: technology and economic perspectives, Springer, 2017, p. 129–145. doi:10.1007/978-3-319-68715-5_6
    [BibTeX] [Abstract] [Download PDF]

    Agriculture nowadays includes automation systems that contribute significantly to many levels of the food production process. Such systems include GPS based systems like auto-steering and Controlled Traffic Farming (CTF). These systems have led to many innovations in agricultural field area coverage design. Integrating these advancements, two different route planning designs, a traditional and an optimised one, are outlined and explained in this chapter. Four different machinery scenarios were tested in four fields each, and the main aim was to compare the two different route planning systems under economic criteria and identify the best operational route coverage design criterion. The results show that there are significant reductions in operational costs varying from 9 to 20%, depending on the specific machinery and field configurations. Such results show the considerable potential of advanced route planning designs and further optimization measures. They indicate the need for research efforts that quantify the operational and economic benefits by optimising field coverage designs in the headlands, turnings or obstacles avoidance according to the actual configuration to minimize the non-working activities and, as a consequence, the overall operational cost.

    @incollection{lincoln39232,
    month = {November},
    author = {Claus G. S{\o}rensen and Efthymios Rodias and Dionysis Bochtis},
    booktitle = {Precision Agriculture: Technology and Economic Perspectives},
    title = {Auto-Steering and Controlled Traffic Farming - Route Planning and Economics},
    publisher = {Springer},
    doi = {10.1007/978-3-319-68715-5\_6},
    pages = {129--145},
    year = {2017},
    url = {http://eprints.lincoln.ac.uk/id/eprint/39232/},
    abstract = {Agriculture nowadays includes automation systems that contribute significantly to many levels of the food production process. Such systems include GPS based systems like auto-steering and Controlled Traffic Farming (CTF). These systems have led to many innovations in agricultural field area coverage design. Integrating these advancements, two different route planning designs, a traditional and an optimised one, are outlined and explained in this chapter. Four different machinery scenarios were tested in four fields each, and the main aim was to compare the two different route planning systems under economic criteria and identify the best operational route coverage design criterion. The results show that there are significant reductions in operational costs varying from 9 to 20\%, depending on the specific machinery and field configurations. Such results show the considerable potential of advanced route planning designs and further optimization measures. They indicate the need for research efforts that quantify the operational and economic benefits by optimising field coverage designs in the headlands, turnings or obstacles avoidance according to the actual configuration to minimize the non-working activities and, as a consequence, the overall operational cost.}
    }
  • H. Cuayahuitl, “Deep reinforcement learning for conversational robots playing games,” in Ieee ras international conference on humanoid robots, 2017.
    [BibTeX] [Abstract] [Download PDF]

    Deep reinforcement learning for interactive multimodal robots is attractive for endowing machines with trainable skill acquisition. But this form of learning still represents several challenges. The challenge that we focus in this paper is effective policy learning. To address that, in this paper we compare the Deep Q-Networks (DQN) method against a variant that aims for stronger decisions than the original method by avoiding decisions with the lowest negative rewards. We evaluated our baseline and proposed algorithms in agents playing the game of Noughts and Crosses with two grid sizes (3×3 and 5×5). Experimental results show evidence that our proposed method can lead to more effective policies than the baseline DQN method, which can be used for training interactive social robots.

    @inproceedings{lincoln29060,
    booktitle = {IEEE RAS International Conference on Humanoid Robots},
    month = {November},
    title = {Deep reinforcement learning for conversational robots playing games},
    author = {Heriberto Cuayahuitl},
    publisher = {IEEE},
    year = {2017},
    url = {http://eprints.lincoln.ac.uk/id/eprint/29060/},
    abstract = {Deep reinforcement learning for interactive multimodal robots is attractive for endowing machines with trainable skill acquisition. But this form of learning still represents several challenges. The challenge that we focus in this paper is effective policy learning. To address that, in this paper we compare the Deep Q-Networks (DQN) method against a variant that aims for stronger decisions than the original method by avoiding decisions with the lowest negative rewards. We evaluated our baseline and proposed algorithms in agents playing the game of Noughts and Crosses with two grid sizes (3x3 and 5x5). Experimental results show evidence that our proposed method can lead to more effective policies than the baseline DQN method, which can be used for training interactive social robots.}
    }
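    A hedged reading of the proposed variant is an action selector that masks out moves whose Q-estimates fall below a floor, thereby avoiding the lowest-valued decisions; the exact mechanism in the paper may differ. A minimal sketch:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def select_action(q_values, valid, epsilon=0.1, q_floor=-0.5):
        """Epsilon-greedy over the valid moves, but refuse actions whose Q
        estimate falls below q_floor, a hedged reading of 'avoiding the
        decisions with the lowest negative rewards'. Falls back to all
        valid moves if the floor would leave nothing to choose."""
        allowed = valid & (q_values > q_floor)
        if not allowed.any():
            allowed = valid
        if rng.random() < epsilon:
            return int(rng.choice(np.flatnonzero(allowed)))
        return int(np.argmax(np.where(allowed, q_values, -np.inf)))

    q = np.array([0.2, -0.9, 0.1, -0.1, 0.05])         # Q(s, a) for five moves
    valid = np.array([True, True, True, False, True])  # occupied cell masked
    print(select_action(q, valid))                     # action 1 never chosen
    ```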
  • C. Keeble, P. A. Thwaites, P. D. Baxter, S. Barber, R. C. Parslow, and G. R. Law, “Learning through chain event graphs: the role of maternal factors in childhood type 1 diabetes,” American journal of epidemiology, vol. 186, iss. 10, p. 1204–1208, 2017. doi:10.1093/aje/kwx171
    [BibTeX] [Abstract] [Download PDF]

    Chain event graphs (CEGs) are a graphical representation of a statistical model derived from event trees. They have previously been applied to cohort studies but not to case-control studies. In this paper, we apply the CEG framework to a Yorkshire, United Kingdom, case-control study of childhood type 1 diabetes (1993–1994) in order to examine 4 exposure variables associated with the mother, 3 of which are fully observed (her school-leaving-age, amniocenteses during pregnancy, and delivery type) and 1 with missing values (her rhesus factor), while incorporating previous type 1 diabetes knowledge. We conclude that the unknown rhesus factor values were likely to be missing not at random and were mainly rhesus-positive. The mother's school-leaving-age and rhesus factor were not associated with the diabetes status of the child, whereas having at least 1 amniocentesis procedure and, to a lesser extent, birth by cesarean delivery were associated; the combination of both procedures further increased the probability of diabetes. This application of CEGs to case-control data allows for the inclusion of missing data and prior knowledge, while investigating associations in the data. Communication of the analysis with the clinical expert is more straightforward than with traditional modeling, and this approach can be applied retrospectively or when assumptions for traditional analyses are not held.

    @article{lincoln26599,
    volume = {186},
    number = {10},
    month = {November},
    author = {C. Keeble and P. A. Thwaites and P. D. Baxter and S. Barber and R. C. Parslow and G. R. Law},
    title = {Learning Through Chain Event Graphs: The Role of Maternal Factors in Childhood Type 1 Diabetes},
    publisher = {Oxford University Press},
    year = {2017},
    journal = {American Journal of Epidemiology},
    doi = {10.1093/aje/kwx171},
    pages = {1204--1208},
    url = {http://eprints.lincoln.ac.uk/id/eprint/26599/},
    abstract = {Chain event graphs (CEGs) are a graphical representation of a statistical model derived from event trees. They have previously been applied to cohort studies but not to case-control studies. In this paper, we apply the CEG framework to a Yorkshire, United Kingdom, case-control study of childhood type 1 diabetes (1993--1994) in order to examine 4 exposure variables associated with the mother, 3 of which are fully observed (her school-leaving-age, amniocenteses during pregnancy, and delivery type) and 1 with missing values (her rhesus factor), while incorporating previous type 1 diabetes knowledge. We conclude that the unknown rhesus factor values were likely to be missing not at random and were mainly rhesus-positive. The mother's school-leaving-age and rhesus factor were not associated with the diabetes status of the child, whereas having at least 1 amniocentesis procedure and, to a lesser extent, birth by cesarean delivery were associated; the combination of both procedures further increased the probability of diabetes. This application of CEGs to case-control data allows for the inclusion of missing data and prior knowledge, while investigating associations in the data. Communication of the analysis with the clinical expert is more straightforward than with traditional modeling, and this approach can be applied retrospectively or when assumptions for traditional analyses are not held.}
    }
  • E. Senft, S. Lemaignan, P. Baxter, and T. Belpaeme, “Toward supervised reinforcement learning with partial states for social hri,” in 4th aaai fss on artificial intelligence for social human-robot interaction (ai-hri), Arlington, Virginia, U.S.A., 2017, p. 109–113.
    [BibTeX] [Abstract] [Download PDF]

    Social interacting is a complex task for which machine learning holds particular promise. However, as no sufficiently accurate simulator of human interactions exists today, the learning of social interaction strategies has to happen online in the real world. Actions executed by the robot impact on humans, and as such have to be carefully selected, making it impossible to rely on random exploration. Additionally, no clear reward function exists for social interactions. This implies that traditional approaches used for Reinforcement Learning cannot be directly applied for learning how to interact with the social world. As such we argue that robots will profit from human expertise and guidance to learn social interactions. However, as the quantity of input a human can provide is limited, new methods have to be designed to use human input more efficiently. In this paper we describe a setup in which we combine a framework called Supervised Progressively Autonomous Robot Competencies (SPARC), which allows safer online learning with Reinforcement Learning, with the use of partial states rather than full states to accelerate generalisation and obtain a usable action policy more quickly.

    @inproceedings{lincoln30193,
    month = {November},
    author = {Emmanuel Senft and Severin Lemaignan and Paul Baxter and Tony Belpaeme},
    booktitle = {4th AAAI FSS on Artificial Intelligence for Social Human-Robot Interaction (AI-HRI)},
    address = {Arlington, Virginia, U.S.A.},
    title = {Toward supervised reinforcement learning with partial states for social HRI},
    publisher = {AAAI Press},
    pages = {109--113},
    year = {2017},
    url = {http://eprints.lincoln.ac.uk/id/eprint/30193/},
    abstract = {Social interacting is a complex task for which machine learning holds particular promise. However, as no sufficiently accurate simulator of human interactions exists today, the learning of social interaction strategies has to happen online in the real world. Actions executed by the robot impact on humans, and as such have to be carefully selected, making it impossible to rely on random exploration. Additionally, no clear reward function exists for social interactions. This implies that traditional approaches used for Reinforcement Learning cannot be directly applied for learning how to interact with the social world. As such we argue that robots will profit from human expertise and guidance to learn social interactions. However, as the quantity of input a human can provide is limited, new methods have to be designed to use human input more efficiently. In this paper we describe a setup in which we combine a framework called Supervised Progressively Autonomous Robot Competencies (SPARC), which allows safer online learning with Reinforcement Learning, with the use of partial states rather than full states to accelerate generalisation and obtain a usable action policy more quickly.}
    }
  • K. Kusumam, T. Krajnik, S. Pearson, T. Duckett, and G. Cielniak, “3d-vision based detection, localization, and sizing of broccoli heads in the field,” Journal of field robotics, vol. 34, iss. 8, p. 1505–1518, 2017. doi:10.1002/rob.21726
    [BibTeX] [Abstract] [Download PDF]

    This paper describes a 3D vision system for robotic harvesting of broccoli using low-cost RGB-D sensors, which was developed and evaluated using sensory data collected under real-world field conditions in both the UK and Spain. The presented method addresses the tasks of detecting mature broccoli heads in the field and providing their 3D locations relative to the vehicle. The paper evaluates different 3D features, machine learning, and temporal filtering methods for detection of broccoli heads. Our experiments show that a combination of Viewpoint Feature Histograms, Support Vector Machine classifier, and a temporal filter to track the detected heads results in a system that detects broccoli heads with high precision. We also show that the temporal filtering can be used to generate a 3D map of the broccoli head positions in the field. Additionally, we present methods for automatically estimating the size of the broccoli heads, to determine when a head is ready for harvest. All of the methods were evaluated using ground-truth data from both the UK and Spain, which we also make available to the research community for subsequent algorithm development and result comparison. Cross-validation of the system trained on the UK dataset on the Spanish dataset, and vice versa, indicated good generalization capabilities of the system, confirming the strong potential of low-cost 3D imaging for commercial broccoli harvesting.

    @article{lincoln27782,
    volume = {34},
    number = {8},
    month = {November},
    author = {Keerthy Kusumam and Tomas Krajnik and Simon Pearson and Tom Duckett and Grzegorz Cielniak},
    title = {3D-vision based detection, localization, and sizing of broccoli heads in the field},
    publisher = {Wiley Periodicals, Inc.},
    year = {2017},
    journal = {Journal of Field Robotics},
    doi = {10.1002/rob.21726},
    pages = {1505--1518},
    url = {http://eprints.lincoln.ac.uk/id/eprint/27782/},
    abstract = {This paper describes a 3D vision system for robotic harvesting of broccoli using low-cost RGB-D sensors, which was developed and evaluated using sensory data collected under real-world field conditions in both the UK and Spain. The presented method addresses the tasks of detecting mature broccoli heads in the field and providing their 3D locations relative to the vehicle. The paper evaluates different 3D features, machine learning, and temporal filtering methods for detection of broccoli heads. Our experiments show that a combination of Viewpoint Feature Histograms, Support Vector Machine classifier, and a temporal filter to track the detected heads results in a system that detects broccoli heads with high precision. We also show that the temporal filtering can be used to generate a 3D map of the broccoli head positions in the field. Additionally, we present methods for automatically estimating the size of the broccoli heads, to determine when a head is ready for harvest. All of the methods were evaluated using ground-truth data from both the UK and Spain, which we also make available to the research community for subsequent algorithm development and result comparison. Cross-validation of the system trained on the UK dataset on the Spanish dataset, and vice versa, indicated good generalization capabilities of the system, confirming the strong potential of low-cost 3D imaging for commercial broccoli harvesting.}
    }
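    The detection pipeline (cluster, describe, classify) can be sketched compactly using scikit-learn's DBSCAN and SVC; the snippet below substitutes a toy size descriptor and synthetic point clouds for the paper's Viewpoint Feature Histograms and field data, and omits the temporal filter.

    ```python
    import numpy as np
    from sklearn.cluster import DBSCAN
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    def descriptor(points):
        """Placeholder for a Viewpoint Feature Histogram: here just the
        cluster's spatial extents and point count."""
        ext = points.max(axis=0) - points.min(axis=0)
        return np.concatenate([ext, [len(points)]])

    def make_cluster(broccoli):
        """Synthetic clusters: compact round blobs stand in for broccoli
        heads, flat wide patches for soil/leaves."""
        n = int(rng.integers(80, 120))
        scale = np.array([0.1, 0.1, 0.1]) if broccoli else np.array([0.3, 0.3, 0.02])
        return rng.normal(scale=scale, size=(n, 3))

    train = [make_cluster(b) for b in [True, False] * 50]
    labels = [1, 0] * 50
    clf = SVC(probability=True).fit([descriptor(c) for c in train], labels)

    # Detection on a new frame: Euclidean clustering, then per-cluster score.
    cloud = np.vstack([make_cluster(True) + [1, 2, 0],
                       make_cluster(False) + [3, 0, 0]])
    ids = DBSCAN(eps=0.5, min_samples=10).fit_predict(cloud)
    for k in set(ids) - {-1}:
        pts = cloud[ids == k]
        p = clf.predict_proba([descriptor(pts)])[0, 1]
        print(f"cluster {k}: P(broccoli) = {p:.2f}")
    ```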
  • E. Senft, P. Baxter, J. Kennedy, S. Lemaignan, and T. Belpaeme, “Supervised autonomy for online learning in human-robot interaction,” Pattern recognition letters, vol. 96, p. 77–86, 2017. doi:10.1016/j.patrec.2017.03.015
    [BibTeX] [Abstract] [Download PDF]

    When a robot is learning it needs to explore its environment and how its environment responds on its actions. When the environment is large and there are a large number of possible actions the robot can take, this exploration phase can take prohibitively long. However, exploration can often be optimised by letting a human expert guide the robot during its learning. Interactive machine learning, in which a human user interactively guides the robot as it learns, has been shown to be an effective way to teach a robot. It requires an intuitive control mechanism to allow the human expert to provide feedback on the robot's progress. This paper presents a novel method which combines Reinforcement Learning and Supervised Progressively Autonomous Robot Competencies (SPARC). By allowing the user to fully control the robot and by treating rewards as implicit, SPARC aims to learn an action policy while maintaining human supervisory oversight of the robot's behaviour. This method is evaluated and compared to Interactive Reinforcement Learning in a robot teaching task. Qualitative and quantitative results indicate that SPARC allows for safer and faster learning by the robot, whilst not placing a high workload on the human teacher.

    @article{lincoln26857,
    volume = {96},
    month = {November},
    author = {Emmanuel Senft and Paul Baxter and James Kennedy and Severin Lemaignan and Tony Belpaeme},
    title = {Supervised autonomy for online learning in human-robot interaction},
    publisher = {Elsevier / North Holland for International Association for Pattern Recognition},
    year = {2017},
    journal = {Pattern Recognition Letters},
    doi = {10.1016/j.patrec.2017.03.015},
    pages = {77--86},
    url = {http://eprints.lincoln.ac.uk/id/eprint/26857/},
    abstract = {When a robot is learning it needs to explore its environment and how its environment responds on its actions. When the environment is large and there are a large number of possible actions the robot can take, this exploration phase can take prohibitively long. However, exploration can often be optimised by letting a human expert guide the robot during its learning. Interactive machine learning, in which a human user interactively guides the robot as it learns, has been shown to be an effective way to teach a robot. It requires an intuitive control mechanism to allow the human expert to provide feedback on the robot's progress. This paper presents a novel method which combines Reinforcement Learning and Supervised Progressively Autonomous Robot Competencies (SPARC). By allowing the user to fully control the robot and by treating rewards as implicit, SPARC aims to learn an action policy while maintaining human supervisory oversight of the robot's behaviour. This method is evaluated and compared to Interactive Reinforcement Learning in a robot teaching task. Qualitative and quantitative results indicate that SPARC allows for safer and faster learning by the robot, whilst not placing a high workload on the human teacher.}
    }
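    The central SPARC mechanism, where the robot proposes an action, the teacher passively accepts or actively corrects it, and rewards stay implicit, can be shown as a toy interaction loop; everything below (the action set, the teacher model and the update rule) is an illustrative assumption rather than the published algorithm.

    ```python
    import random

    rng = random.Random(0)

    ACTIONS = ["greet", "wait", "encourage"]
    value = {a: 0.0 for a in ACTIONS}   # toy action preferences

    def teacher(proposed):
        """Stand-in for the human supervisor: usually lets the proposal
        through, occasionally overrides it (a real SPARC teacher sees the
        proposal before it is executed)."""
        return proposed if rng.random() < 0.7 else "encourage"

    for step in range(200):
        # Propose the currently best-looking action, with a little noise.
        proposed = max(ACTIONS, key=lambda a: value[a] + rng.gauss(0, 0.1))
        executed = teacher(proposed)
        # Implicit reward: non-correction reinforces the executed action,
        # while a correction also penalises the rejected proposal.
        value[executed] += 0.1 * (1.0 - value[executed])
        if executed != proposed:
            value[proposed] -= 0.05

    print({a: round(v, 2) for a, v in value.items()})
    ```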
  • E. Rodias, R. Berruto, P. Busato, D. Bochtis, C. Sørensen, and K. Zhou, “Energy savings from optimised in-field route planning for agricultural machinery,” Sustainability, vol. 9, iss. 11, p. 1956, 2017. doi:10.3390/su9111956
    [BibTeX] [Abstract] [Download PDF]

    Various types of sensor technologies, such as machine vision and the global positioning system (GPS), have been implemented in the navigation of agricultural vehicles. Automated navigation systems have proved the potential for the execution of optimised route plans for field area coverage. This paper presents an assessment of the reduction of the energy requirements derived from the implementation of optimised field area coverage planning. The assessment regards the analysis of the energy requirements and the comparison between the non-optimised and optimised plans for field area coverage in the whole sequence of operations required in two different cropping systems: Miscanthus and Switchgrass production. An algorithmic approach for the simulation of the executed field operations by following both non-optimised and optimised field-work patterns was developed. As a result, the corresponding time requirements were estimated as the basis of the subsequent energy cost analysis. Based on the results, the optimised routes reduce the fuel energy consumption by up to 8%, the embodied energy consumption by up to 7%, and the total energy consumption by 3% to 8%.

    @article{lincoln39222,
    volume = {9},
    number = {11},
    month = {October},
    author = {Efthymios Rodias and Remigio Berruto and Patrizia Busato and Dionysis Bochtis and Claus S{\o}rensen and Kun Zhou},
    title = {Energy Savings from Optimised In-Field Route Planning for Agricultural Machinery},
    year = {2017},
    journal = {Sustainability},
    doi = {10.3390/su9111956},
    pages = {1956},
    url = {http://eprints.lincoln.ac.uk/id/eprint/39222/},
    abstract = {Various types of sensors technologies, such as machine vision and global positioning system (GPS) have been implemented in navigation of agricultural vehicles. Automated navigation systems have proved the potential for the execution of optimised route plans for field area coverage. This paper presents an assessment of the reduction of the energy requirements derived from the implementation of optimised field area coverage planning. The assessment regards the analysis of the energy requirements and the comparison between the non-optimised and optimised plans for field area coverage in the whole sequence of operations required in two different cropping systems: Miscanthus and Switchgrass production. An algorithmic approach for the simulation of the executed field operations by following both non-optimised and optimised field-work patterns was developed. As a result, the corresponding time requirements were estimated as the basis of the subsequent energy cost analysis. Based on the results, the optimised routes reduce the fuel energy consumption up to 8\%, the embodied energy consumption up to 7\%, and the total energy consumption from 3\% up to 8\%}
    }
  • C. Hu, F. Arvin, C. Xiong, and S. Yue, “A bio-inspired embedded vision system for autonomous micro-robots: the lgmd case,” Ieee transactions on cognitive and developmental systems, vol. 9, iss. 3, p. 241–254, 2017. doi:10.1109/TCDS.2016.2574624
    [BibTeX] [Abstract] [Download PDF]

    In this paper, we present a new bio-inspired vision system embedded for micro-robots. The vision system takes inspiration from locusts in detecting fast approaching objects. Neurophysiological research suggested that locusts use a wide-field visual neuron called lobula giant movement detector (LGMD) to respond to imminent collisions. In this work, we present the implementation of the selected neuron model by a low-cost ARM processor as part of a composite vision module. As the first embedded LGMD vision module fits to a micro-robot, the developed system performs all image acquisition and processing independently. The vision module is placed on top of a microrobot to initiate obstacle avoidance behaviour autonomously. Both simulation and real-world experiments were carried out to test the reliability and robustness of the vision system. The results of the experiments with different scenarios demonstrated the potential of the bio-inspired vision system as a low-cost embedded module for autonomous robots.

    @article{lincoln25279,
    volume = {9},
    number = {3},
    month = {September},
    author = {Cheng Hu and Farshad Arvin and Caihua Xiong and Shigang Yue},
    title = {A bio-inspired embedded vision system for autonomous micro-robots: the LGMD case},
    publisher = {IEEE},
    year = {2017},
    journal = {IEEE Transactions on Cognitive and Developmental Systems},
    doi = {10.1109/TCDS.2016.2574624},
    pages = {241--254},
    url = {http://eprints.lincoln.ac.uk/id/eprint/25279/},
    abstract = {In this paper, we present a new bio-inspired vision system embedded for micro-robots. The vision system takes inspiration from locusts in detecting fast approaching objects. Neurophysiological research suggested that locusts use a wide-field visual neuron called lobula giant movement detector (LGMD) to respond to imminent collisions. In this work, we present the implementation of the selected neuron model by a low-cost ARM processor as part of a composite vision module. As the first embedded LGMD vision module fits to a micro-robot, the developed system performs all image acquisition and processing independently. The vision module is placed on top of a microrobot to initiate obstacle avoidance behaviour autonomously. Both simulation and real-world experiments were carried out to test the reliability and robustness of the vision system. The results of the experiments with different scenarios demonstrated the potential of the bio-inspired vision system as a low-cost embedded module for autonomous robots.}
    }
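    For readers who want a feel for the LGMD model described above, the following minimal Python sketch (not the authors' implementation) shows the usual excitation-inhibition-summation structure of an LGMD-style looming detector; the blur size, inhibition gain, and spiking threshold are illustrative assumptions.

    import numpy as np

    def lgmd_step(prev_frame, frame, inhibition_gain=0.6, threshold=0.12):
        """One frame of a simplified LGMD-style looming detector.

        Excitation is the absolute luminance change between frames;
        inhibition is a spatially blurred copy of it, so uniform motion
        largely cancels while fast expansion does not.
        """
        excitation = np.abs(frame.astype(float) - prev_frame.astype(float)) / 255.0
        # Lateral inhibition: 3x3 box blur of the excitation layer.
        pad = np.pad(excitation, 1, mode='edge')
        inhibition = np.mean(
            [pad[i:i + excitation.shape[0], j:j + excitation.shape[1]]
             for i in range(3) for j in range(3)], axis=0)
        # Summation layer: rectified excitation minus scaled inhibition.
        s_layer = np.maximum(excitation - inhibition_gain * inhibition, 0.0)
        # Membrane potential: normalised sum; a 'spike' flags a looming object.
        potential = s_layer.sum() / s_layer.size
        return potential, potential > threshold

    On a micro-robot, a spike from this per-frame loop would trigger the avoidance turn.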
  • Z. Yan, T. Duckett, and N. Bellotto, “Online learning for human classification in 3d lidar-based tracking,” in Ieee/rsj international conference on intelligent robots and systems (iros), 2017. doi:10.1109/IROS.2017.8202247
    [BibTeX] [Abstract] [Download PDF]

    Human detection and tracking is one of the most important aspects to be considered in service robotics, as the robot often shares its workspace and interacts closely with humans. This paper presents an online learning framework for human classification in 3D LiDAR scans, taking advantage of robust multi-target tracking to avoid the need for data annotation by a human expert. The system learns iteratively by retraining a classifier online with the samples collected by the robot over time. A novel aspect of our approach is that errors in training data can be corrected using the information provided by the 3D LiDAR-based tracking. In order to do this, an efficient 3D cluster detector of potential human targets has been implemented. We evaluate the framework using a new 3D LiDAR dataset of people moving in a large indoor public space, which is made available to the research community. The experiments analyse the real-time performance of the cluster detector and show that our online-trained human classifier matches and in some cases outperforms its offline version.

    @inproceedings{lincoln27675,
    booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
    month = {September},
    title = {Online learning for human classification in 3D LiDAR-based tracking},
    author = {Zhi Yan and Tom Duckett and Nicola Bellotto},
    publisher = {IEEE},
    year = {2017},
    doi = {10.1109/IROS.2017.8202247},
    url = {http://eprints.lincoln.ac.uk/id/eprint/27675/},
    abstract = {Human detection and tracking is one of the most important aspects to be considered in service robotics, as the robot often shares its workspace and interacts closely with humans. This paper presents an online learning framework for human classification in 3D LiDAR scans, taking advantage of robust multi-target tracking to avoid the need for data annotation by a human expert. The system learns iteratively by retraining a classifier online with the samples collected by the robot over time. A novel aspect of our approach is that errors in training data can be corrected using the information provided by the 3D LiDAR-based tracking. In order to do this, an efficient 3D cluster detector of potential human targets has been implemented. We evaluate the framework using a new 3D LiDAR dataset of people moving in a large indoor public space, which is made available to the research community. The experiments analyse the real-time performance of the cluster detector and show that our online-trained human classifier matches and in some cases outperforms its offline version.}
    }
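    The online learning loop in this paper can be pictured with a short sketch. Everything below is illustrative rather than the paper's code: the three-number cluster descriptor and scikit-learn's SGDClassifier stand in for the actual features and classifier; the key point is that labels come from the tracker, not a human annotator.

    import numpy as np
    from sklearn.linear_model import SGDClassifier

    def cluster_features(points):
        """Hypothetical fixed-length descriptor of a 3D point cluster
        (height, horizontal footprint, point count)."""
        return np.array([points[:, 2].max() - points[:, 2].min(),
                         np.linalg.norm(np.ptp(points[:, :2], axis=0)),
                         float(len(points))])

    clf = SGDClassifier(loss='log_loss')   # incremental logistic regression (scikit-learn >= 1.1)
    CLASSES = np.array([0, 1])             # 0 = non-human, 1 = human

    def online_update(clusters, tracker_labels):
        """Retrain incrementally with labels supplied by the multi-target
        tracker, so no manual annotation is required."""
        X = np.vstack([cluster_features(c) for c in clusters])
        clf.partial_fit(X, np.asarray(tracker_labels), classes=CLASSES)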
  • A. Zaganidis, M. Magnusson, T. Duckett, and G. Cielniak, “Semantic-assisted 3d normal distributions transform for scan registration in environments with limited structure,” in International conference on intelligent robots and systems (iros), 2017.
    [BibTeX] [Abstract] [Download PDF]

    Point cloud registration is a core problem of many robotic applications, including simultaneous localization and mapping. The Normal Distributions Transform (NDT) is a method that fits a number of Gaussian distributions to the data points, and then uses this transform as an approximation of the real data, registering a relatively small number of distributions as opposed to the full point cloud. This approach contributes to NDT's registration robustness and speed but leaves room for improvement in environments of limited structure. To address this limitation we propose a method for the introduction of semantic information extracted from the point clouds into the registration process. The paper presents a large scale experimental evaluation of the algorithm against NDT on two publicly available benchmark data sets. For the purpose of this test a measure of smoothness is used for the semantic partitioning of the point clouds. The results indicate that the proposed method improves the accuracy, robustness and speed of NDT registration, especially in unstructured environments, making NDT suitable for a wider range of applications.

    @inproceedings{lincoln28481,
    booktitle = {International Conference on Intelligent Robots and Systems (IROS)},
    month = {September},
    title = {Semantic-assisted 3D Normal Distributions Transform for scan registration in environments with limited structure},
    author = {Anestis Zaganidis and Martin Magnusson and Tom Duckett and Grzegorz Cielniak},
    publisher = {IEEE/RSJ},
    year = {2017},
    url = {http://eprints.lincoln.ac.uk/id/eprint/28481/},
    abstract = {Point cloud registration is a core problem of many robotic applications, including simultaneous localization and mapping. The Normal Distributions Transform (NDT) is a method that fits a number of Gaussian distributions to the data points, and then uses this transform as an approximation of the real data, registering a relatively small number of distributions as opposed to the full point cloud. This approach contributes to NDT's registration robustness and speed but leaves room for improvement in environments of limited structure.
    To address this limitation we propose a method for the introduction of semantic information extracted from the point clouds into the registration process. The paper presents a large scale experimental evaluation of the algorithm against NDT on two publicly available benchmark data sets. For the purpose of this test a measure of smoothness is used for the semantic partitioning of the point clouds. The results indicate that the proposed method improves the accuracy, robustness and speed of NDT registration, especially in unstructured environments, making NDT suitable for a wider range of applications.}
    }
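    The central idea, building one set of normal distributions per semantic class so that only like-labelled distributions are matched during registration, can be sketched in a few lines. This is an illustrative fragment, not the authors' code; the cell size and the minimum point count are assumptions, and the smoothness-based labels are assumed given.

    import numpy as np
    from collections import defaultdict

    def semantic_ndt_model(points, labels, cell_size=1.0, min_points=5):
        """Fit one Gaussian per (semantic class, voxel) pair. During
        registration, a point with label l is only scored against the
        Gaussians of class l, which is what adds the semantic constraint."""
        buckets = defaultdict(list)
        for p, l in zip(points, labels):
            key = (l, tuple(np.floor(p / cell_size).astype(int)))
            buckets[key].append(p)
        model = {}
        for key, pts in buckets.items():
            pts = np.asarray(pts)
            if len(pts) >= min_points:      # enough points for a stable covariance
                model[key] = (pts.mean(axis=0), np.cov(pts.T))
        return model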
  • Q. Fu, C. Hu, T. Liu, and S. Yue, “Collision selective lgmds neuron models research benefits from a vision-based autonomous micro robot,” 2017 ieee/rsj international conference on intelligent robots and systems (iros), p. 3996–4002, 2017. doi:10.1109/IROS.2017.8206254
    [BibTeX] [Abstract] [Download PDF]

    The developments of robotics inform research across a broad range of disciplines. In this paper, we will study and compare two collision selective neuron models via a vision-based autonomous micro robot. In the locusts’ visual brain, two Lobula Giant Movement Detectors (LGMDs), i.e. LGMD1 and LGMD2, have been identified as looming sensitive neurons responding to rapidly expanding objects, yet with different collision selectivity. Both neurons have been built for perceiving potential collisions in an efficient and reliable manner; a few modeling works have also demonstrated their effectiveness for robotic implementations. In this research, for the first time, we set up binocular neuronal models, combining the functionalities of LGMD1 and LGMD2 neurons, in the visual modality of a ground mobile robot. The results of systematic on-line experiments demonstrated three contributions: (1) The arena tests involving multiple robots verified the robustness and efficiency of a reactive motion control strategy via integrating a bilateral pair of LGMD1 and LGMD2 models for collision detection in dynamic scenarios. (2) We pinpointed the different collision selectivity between LGMD1 and LGMD2 neuron models, consistent with corresponding biological research results. (3) The low-cost robot may also shed light on similar bio-inspired embedded vision systems and swarm robotics applications.

    @article{lincoln27834,
    month = {September},
    author = {Qinbing Fu and Cheng Hu and Tian Liu and Shigang Yue},
    note = {{\copyright} 2017 IEEE},
    booktitle = {2017 IEEE/RSJ International Conference on Intelligent Robots and Systems},
    title = {Collision selective LGMDs neuron models research benefits from a vision-based autonomous micro robot},
    publisher = {IEEE},
    year = {2017},
    journal = {2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
    doi = {10.1109/IROS.2017.8206254},
    pages = {3996--4002},
    url = {http://eprints.lincoln.ac.uk/id/eprint/27834/},
    abstract = {The developments of robotics inform research across a broad range of disciplines. In this paper, we will study and compare two collision selective neuron models via a vision-based autonomous micro robot. In the locusts' visual brain, two Lobula Giant Movement Detectors (LGMDs), i.e. LGMD1 and LGMD2, have been identified as looming sensitive neurons responding to rapidly expanding objects, yet with different collision selectivity. Both neurons have been built for perceiving potential collisions in an efficient and reliable manner; a few modeling works have also demonstrated their effectiveness for robotic implementations. In this research, for the first time, we set up binocular neuronal models, combining the functionalities of LGMD1 and LGMD2 neurons, in the visual modality of a ground mobile robot. The results of systematic on-line experiments demonstrated three contributions: (1) The arena tests involving multiple robots verified the robustness and efficiency of a reactive motion control strategy via integrating a bilateral pair of LGMD1 and LGMD2 models for collision detection in dynamic scenarios. (2) We pinpointed the different collision selectivity between LGMD1 and LGMD2 neuron models, consistent with corresponding biological research results. (3) The low-cost robot may also shed light on similar bio-inspired embedded vision systems and swarm robotics applications.}
    }
  • J. Pajarinen, V. Kyrki, M. Koval, S. Srinivasa, J. Peters, and G. Neumann, “Hybrid control trajectory optimization under uncertainty,” in Ieee/rsj international conference on intelligent robots and systems (iros), 2017.
    [BibTeX] [Abstract] [Download PDF]

    Trajectory optimization is a fundamental problem in robotics. While optimization of continuous control trajectories is well developed, many applications require both discrete and continuous, i.e. hybrid controls. Finding an optimal sequence of hybrid controls is challenging due to the exponential explosion of discrete control combinations. Our method, based on Differential Dynamic Programming (DDP), circumvents this problem by incorporating discrete actions inside DDP: we first optimize continuous mixtures of discrete actions, and, subsequently force the mixtures into fully discrete actions. Moreover, we show how our approach can be extended to partially observable Markov decision processes (POMDPs) for trajectory planning under uncertainty. We validate the approach in a car driving problem where the robot has to switch discrete gears and in a box pushing application where the robot can switch the side of the box to push. The pose and the friction parameters of the pushed box are initially unknown and only indirectly observable.

    @inproceedings{lincoln28257,
    booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
    month = {September},
    title = {Hybrid control trajectory optimization under uncertainty},
    author = {J. Pajarinen and V. Kyrki and M. Koval and S. Srinivasa and J. Peters and G. Neumann},
    year = {2017},
    url = {http://eprints.lincoln.ac.uk/id/eprint/28257/},
    abstract = {Trajectory optimization is a fundamental problem in robotics. While optimization of continuous control trajectories is well developed, many applications require both discrete and continuous, i.e. hybrid controls. Finding an optimal sequence of hybrid controls is challenging due to the exponential explosion of discrete control combinations. Our method, based on Differential Dynamic Programming (DDP), circumvents this problem by incorporating discrete actions inside DDP: we first optimize continuous mixtures of discrete actions, and, subsequently force the mixtures into fully discrete actions. Moreover, we show how our approach can be extended to partially observable Markov decision processes (POMDPs) for trajectory planning under uncertainty. We validate the approach in a car driving problem where the robot has to switch discrete gears and in a box pushing application where the robot can switch the side of the box to push. The pose and the friction parameters of the pushed box are initially unknown and only indirectly observable.}
    }
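    The relax-then-sharpen trick at the heart of the method can be illustrated independently of DDP. In this hedged sketch, the paper's embedding inside Differential Dynamic Programming is replaced by a generic finite-difference optimiser (an assumption made purely for brevity): discrete actions become softmax mixtures, and an annealed temperature forces them back towards fully discrete choices.

    import numpy as np

    def softmax(logits, temp):
        z = np.exp(temp * (logits - logits.max(axis=1, keepdims=True)))
        return z / z.sum(axis=1, keepdims=True)

    def optimise_discrete_sequence(cost, n_steps, n_actions,
                                   iters=100, lr=0.5, eps=1e-4):
        """cost() maps an (n_steps, n_actions) mixture matrix to a scalar.
        Mixtures start uniform and are pushed towards one-hot rows as the
        softmax temperature grows."""
        logits = np.zeros((n_steps, n_actions))
        for it in range(iters):
            temp = 1.0 + 9.0 * it / iters        # annealing schedule
            base = cost(softmax(logits, temp))
            grad = np.zeros_like(logits)
            for idx in np.ndindex(*logits.shape):  # finite-difference gradient
                pert = logits.copy()
                pert[idx] += eps
                grad[idx] = (cost(softmax(pert, temp)) - base) / eps
            logits -= lr * grad
        return softmax(logits, 10.0).argmax(axis=1)  # fully discrete sequence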
  • K. Goher, N. Mansouri, and F. Sulaiman, “Assessment of personal care and medical robots from older adults’ perspective,” Robotics and biomimetics, vol. 4, iss. 5, 2017. doi:10.1186/s40638-017-0061-7
    [BibTeX] [Abstract] [Download PDF]

    Demographic reports indicate that the population of older adults is growing significantly across the world, particularly in developed nations. Consequently, there is a noticeable demand for certain services such as health-care systems and assistive medical robots and devices. In today's world, different types of robots play substantial roles, specifically in the medical sector, to facilitate human life, especially for older adults. Assistive medical robots and devices are created in various designs to fulfill the specific needs of older adults. Though medical robots are widely utilized by senior citizens, it is important to find out to what extent assistive robots satisfy their needs and expectations. This paper reviews various assessments of assistive medical robots from older adults' perspectives with the purpose of identifying senior citizens' needs, expectations, and preferences. These assessments also inform robot designers, developers, and programmers, helping them to produce robots that fulfill older adults' needs while improving their quality of life.

    @article{lincoln33039,
    volume = {4},
    number = {5},
    month = {September},
    author = {Khaled Goher and Nazanin Mansouri and Fadlallah Sulaiman},
    note = {This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.},
    title = {Assessment of personal care and medical robots from older adults' perspective},
    publisher = {Springer},
    year = {2017},
    journal = {Robotics and Biomimetics},
    doi = {10.1186/s40638-017-0061-7},
    url = {http://eprints.lincoln.ac.uk/id/eprint/33039/},
    abstract = {Demographic reports indicate that the population of older adults is growing significantly across the world, particularly in developed nations. Consequently, there is a noticeable demand for certain services such as health-care systems and assistive medical robots and devices. In today's world, different types of robots play substantial roles, specifically in the medical sector, to facilitate human life, especially for older adults. Assistive medical robots and devices are created in various designs to fulfill the specific needs of older adults. Though medical robots are widely utilized by senior citizens, it is important to find out to what extent assistive robots satisfy their needs and expectations. This paper reviews various assessments of assistive medical robots from older adults' perspectives with the purpose of identifying senior citizens' needs, expectations, and preferences. These assessments also inform robot designers, developers, and programmers, helping them to produce robots that fulfill older adults' needs while improving their quality of life.}
    }
  • T. Vintr, S. M. Mellado, G. Cielniak, T. Duckett, and T. Krajnik, “Spatiotemporal models for motion planning in human populated environments,” in Student conference on planning in artificial intelligence and robotics (pair), 2017.
    [BibTeX] [Abstract] [Download PDF]

    In this paper we present an effective spatio-temporal model for motion planning computed using a novel representation known as the temporary warp space-hypertime continuum. Such a model is suitable for robots that are expected to be helpful to humans in their natural environments. This method makes it possible to capture natural periodicities of human behavior by adding additional time dimensions. The model created thus represents the temporal structure of the human habits within a given space and can be analyzed using regular analytical methods. We visualize the results on a real-world dataset using heatmaps.

    @inproceedings{lincoln31052,
    booktitle = {Student Conference on Planning in Artificial Intelligence and Robotics (PAIR)},
    month = {September},
    title = {Spatiotemporal models for motion planning in human populated environments},
    author = {Tomas Vintr and Sergi Molina Mellado and Grzegorz Cielniak and Tom Duckett and Tomas Krajnik},
    publisher = {Czech Technical University in Prague, Faculty of Electrical Engineering},
    year = {2017},
    url = {http://eprints.lincoln.ac.uk/id/eprint/31052/},
    abstract = {In this paper we present an effective spatio-temporal model for motion planning computed using a novel representation known as the temporary warp space-hypertime continuum. Such a model is suitable for robots that are expected to be helpful to humans in their natural environments. This method makes it possible to capture natural periodicities of human behavior by adding additional time dimensions. The model created thus represents the temporal structure of the human habits within a given space and can be analyzed using regular analytical methods. We visualize the results on a real-world dataset using heatmaps.}
    }
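    The hypertime construction is simple enough to show directly: each assumed period of human activity adds a circle (a cosine/sine pair) to the feature space, after which any static density model captures the periodic structure. The daily and weekly periods below are illustrative defaults, not values taken from the paper.

    import numpy as np

    DAY, WEEK = 86400.0, 7 * 86400.0

    def hypertime(t, periods=(DAY, WEEK)):
        """Wrap linear time (seconds) onto one circle per period,
        producing the extra 'hypertime' dimensions."""
        t = np.asarray(t, dtype=float)[:, None]
        angles = 2.0 * np.pi * t / np.asarray(periods)
        return np.hstack([np.cos(angles), np.sin(angles)])

    def lift(xy, t):
        """Spatio-temporal samples (x, y, t) become points in a static
        space-hypertime continuum, ready for any density estimator
        (e.g. a Gaussian mixture)."""
        return np.hstack([np.asarray(xy, dtype=float), hypertime(t)])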
  • N. Hawes, C. Burbridge, F. Jovan, L. Kunze, B. Lacerda, L. Mudrova, J. Young, J. Wyatt, D. Hebesberger, T. Kortner, R. Ambrus, N. Bore, J. Folkesson, P. Jensfelt, L. Beyer, A. Hermans, B. Leibe, A. Aldoma, T. Faulhammer, M. Zillich, M. Vincze, E. Chinellato, M. Al-Omari, P. Duckworth, Y. Gatsoulis, D. C. Hogg, A. G. Cohn, C. Dondrup, J. P. Fentanes, T. Krajnik, J. M. Santos, T. Duckett, and M. Hanheide, “The strands project: long-term autonomy in everyday environments,” Ieee robotics & automation magazine, vol. 24, iss. 3, p. 146–156, 2017. doi:10.1109/MRA.2016.2636359
    [BibTeX] [Abstract] [Download PDF]

    Thanks to the efforts of the robotics and autonomous systems community, the myriad applications and capacities of robots are ever increasing. There is increasing demand from end users for autonomous service robots that can operate in real environments for extended periods. In the Spatiotemporal Representations and Activities for Cognitive Control in Long-Term Scenarios (STRANDS) project (http://strandsproject.eu), we are tackling this demand head-on by integrating state-of-the-art artificial intelligence and robotics research into mobile service robots and deploying these systems for long-term installations in security and care environments. Our robots have been operational for a combined duration of 104 days over four deployments, autonomously performing end-user-defined tasks and traversing 116 km in the process. In this article, we describe the approach we used to enable long-term autonomous operation in everyday environments and how our robots are able to use their long run times to improve their own performance.

    @article{lincoln40526,
    volume = {24},
    number = {3},
    month = {September},
    author = {Nick Hawes and Christopher Burbridge and Ferdian Jovan and Lars Kunze and Bruno Lacerda and Lenka Mudrova and Jay Young and Jeremy Wyatt and Denise Hebesberger and Tobias Kortner and Rares Ambrus and Nils Bore and John Folkesson and Patric Jensfelt and Lucas Beyer and Alexander Hermans and Bastian Leibe and Aitor Aldoma and Thomas Faulhammer and Michael Zillich and Markus Vincze and Eris Chinellato and Muhannad Al-Omari and Paul Duckworth and Yiannis Gatsoulis and David C. Hogg and Anthony G. Cohn and Christian Dondrup and Jaime Pulido Fentanes and Tomas Krajnik and Joao M. Santos and Tom Duckett and Marc Hanheide},
    title = {The STRANDS Project: Long-Term Autonomy in Everyday Environments},
    publisher = {IEEE},
    year = {2017},
    journal = {IEEE Robotics \& Automation Magazine},
    doi = {10.1109/MRA.2016.2636359},
    pages = {146--156},
    url = {http://eprints.lincoln.ac.uk/id/eprint/40526/},
    abstract = {Thanks to the efforts of the robotics and autonomous systems community, the myriad applications and capacities of robots are ever increasing. There is increasing demand from end users for autonomous service robots that can operate in real environments for extended periods. In the Spatiotemporal Representations and Activities for Cognitive Control in Long-Term Scenarios (STRANDS) project (http://strandsproject.eu), we are tackling this demand head-on by integrating state-of-the-art artificial intelligence and robotics research into mobile service robots and deploying these systems for long-term installations in security and care environments. Our robots have been operational for a combined duration of 104 days over four deployments, autonomously performing end-user-defined tasks and traversing 116 km in the process. In this article, we describe the approach we used to enable long-term autonomous operation in everyday environments and how our robots are able to use their long run times to improve their own performance.}
    }
  • P. Lightbody, M. Hanheide, and T. Krajnik, “An efficient visual fiducial localisation system,” Applied computing review, vol. 17, iss. 3, p. 28–37, 2017. doi:10.1145/3161534.3161537
    [BibTeX] [Abstract] [Download PDF]

    With use cases that range from external localisation of single robots or robotic swarms to self-localisation in marker-augmented environments and simplifying perception by tagging objects in a robot’s surrounding, fiducial markers have a wide field of application in the robotic world. We propose a new family of circular markers which allow for both computationally efficient detection, tracking and identification and full 6D position estimation. At the core of the proposed approach lies the separation of the detection and identification steps, with the former using computationally efficient circular marker detection and the latter utilising an open-ended `necklace encoding’, allowing scalability to a large number of individual markers. While the proposed algorithm achieves similar accuracy to other state-of-the-art methods, its experimental evaluation in realistic conditions demonstrates that it can detect markers from larger distances while being up to two orders of magnitude faster than other state-of-the-art fiducial marker detection methods. In addition, the entire system is available as an open-source package at https://github.com/LCAS/whycon.

    @article{lincoln29678,
    volume = {17},
    number = {3},
    month = {September},
    author = {Peter Lightbody and Marc Hanheide and Tomas Krajnik},
    note = {Copyright is held by the authors. This work is based on an earlier work: SAC'17 Proceedings of the 2017 ACM Symposium on Applied Computing, Copyright 2017 ACM 978-1-4503-4486-9. http://dx.doi.org/10.1145/3019612.3019709},
    title = {An efficient visual fiducial localisation system},
    publisher = {ACM},
    year = {2017},
    journal = {Applied Computing Review},
    doi = {10.1145/3161534.3161537},
    pages = {28--37},
    url = {http://eprints.lincoln.ac.uk/id/eprint/29678/},
    abstract = {With use cases that range from external localisation of single robots or robotic swarms to self-localisation in marker-augmented environments and simplifying perception by tagging objects in a robot's surrounding, fiducial markers have a wide field of application in the robotic world.
    We propose a new family of circular markers which allow for both computationally efficient detection, tracking and identification and full 6D position estimation.
    At the core of the proposed approach lies the separation of the detection and identification steps, with the former using computationally efficient circular marker detection and the latter utilising an open-ended `necklace encoding', allowing scalability to a large number of individual markers.
    While the proposed algorithm achieves similar accuracy to other state-of-the-art methods, its experimental evaluation in realistic conditions demonstrates that it can detect markers from larger distances while being up to two orders of magnitude faster than other state-of-the-art fiducial marker detection methods. In addition, the entire system is available as an open-source package at \url{https://github.com/LCAS/whycon}.}
    }
  • T. Krajnik, J. P. Fentanes, J. Santos, and T. Duckett, “Fremen: frequency map enhancement for long-term mobile robot autonomy in changing environments,” Ieee transactions on robotics, vol. 33, iss. 4, p. 964–977, 2017. doi:10.1109/TRO.2017.2665664
    [BibTeX] [Abstract] [Download PDF]

    We present a new approach to long-term mobile robot mapping in dynamic indoor environments. Unlike traditional world models that are tailored to represent static scenes, our approach explicitly models environmental dynamics. We assume that some of the hidden processes that influence the dynamic environment states are periodic and model the uncertainty of the estimated state variables by their frequency spectra. The spectral model can represent arbitrary timescales of environment dynamics with low memory requirements. Transformation of the spectral model to the time domain allows for the prediction of the future environment states, which improves the robot’s long-term performance in dynamic environments. Experiments performed over time periods of months to years demonstrate that the approach can efficiently represent large numbers of observations and reliably predict future environment states. The experiments indicate that the model’s predictive capabilities improve mobile robot localisation and navigation in changing environments.

    @article{lincoln26196,
    volume = {33},
    number = {4},
    month = {August},
    author = {Tomas Krajnik and Jaime Pulido Fentanes and Joao Santos and Tom Duckett},
    title = {FreMEn: Frequency map enhancement for long-term mobile robot autonomy in changing environments},
    publisher = {IEEE},
    year = {2017},
    journal = {IEEE Transactions on Robotics},
    doi = {10.1109/TRO.2017.2665664},
    pages = {964--977},
    url = {http://eprints.lincoln.ac.uk/id/eprint/26196/},
    abstract = {We present a new approach to long-term mobile robot mapping in dynamic indoor environments. Unlike traditional world models that are tailored to represent static scenes, our approach explicitly models environmental dynamics. We assume that some of the hidden processes that influence the dynamic environment states are periodic and model the uncertainty of the estimated state variables by their frequency spectra. The spectral model can represent arbitrary timescales of environment dynamics with low memory requirements. Transformation of the spectral model to the time domain allows for the prediction of the future environment states, which improves the robot's long-term performance in dynamic environments. Experiments performed over time periods of months to years demonstrate that the approach can efficiently represent large numbers of observations and reliably predict future environment states. The experiments indicate that the model's predictive capabilities improve mobile robot localisation and navigation in changing environments.}
    }
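    FreMEn's spectral model lends itself to a compact sketch. The version below is an approximation written for clarity rather than a copy of the authors' code: it estimates the complex amplitude of a few candidate periods from binary observations, keeps the strongest components, and uses them to predict the future state probability.

    import numpy as np

    def fremen_fit(times, states, candidate_periods, n_keep=2):
        """Keep the mean plus the n_keep strongest periodic components."""
        t = np.asarray(times, dtype=float)
        s = np.asarray(states, dtype=float)        # binary observations
        mean = s.mean()
        comps = []
        for T in candidate_periods:
            # Complex amplitude of the component with period T.
            gamma = np.mean((s - mean) * np.exp(-2j * np.pi * t / T))
            comps.append((np.abs(gamma), np.angle(gamma), T))
        comps.sort(reverse=True)                    # strongest first
        return mean, comps[:n_keep]

    def fremen_predict(t, mean, comps):
        """Predicted probability that the binary state equals 1 at time t."""
        p = mean + sum(2.0 * a * np.cos(2.0 * np.pi * t / T + phi)
                       for a, phi, T in comps)
        return float(np.clip(p, 0.0, 1.0))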
  • H. van Hoof, G. Neumann, and J. Peters, “Non-parametric policy search with limited information loss,” Journal of machine learning research, vol. 18, iss. 73, p. 1–46, 2017.
    [BibTeX] [Abstract] [Download PDF]

    Learning complex control policies from non-linear and redundant sensory input is an important challenge for reinforcement learning algorithms. Non-parametric methods that approximate value functions or transition models can address this problem, by adapting to the complexity of the dataset. Yet, many current non-parametric approaches rely on unstable greedy maximization of approximate value functions, which might lead to poor convergence or oscillations in the policy update. A more robust policy update can be obtained by limiting the information loss between successive state-action distributions. In this paper, we develop a policy search algorithm with policy updates that are both robust and non-parametric. Our method can learn non-parametric control policies for infinite horizon continuous Markov decision processes with non-linear and redundant sensory representations. We investigate how we can use approximations of the kernel function to reduce the time requirements of the demanding non-parametric computations. In our experiments, we show the strong performance of the proposed method, and how it can be approximated efficiently. Finally, we show that our algorithm can learn a real-robot underpowered swing-up task directly from image data.

    @article{lincoln28020,
    volume = {18},
    number = {73},
    month = {August},
    author = {Herke van Hoof and Gerhard Neumann and Jan Peters},
    title = {Non-parametric policy search with limited information loss},
    publisher = {Journal of Machine Learning Research},
    year = {2017},
    journal = {Journal of Machine Learning Research},
    pages = {1--46},
    url = {http://eprints.lincoln.ac.uk/id/eprint/28020/},
    abstract = {Learning complex control policies from non-linear and redundant sensory input is an important challenge for reinforcement learning algorithms. Non-parametric methods that approximate value functions or transition models can address this problem, by adapting to the complexity of the dataset. Yet, many current non-parametric approaches rely on unstable greedy maximization of approximate value functions, which might lead to poor convergence or oscillations in the policy update. A more robust policy update can be obtained by limiting the information loss between successive state-action distributions. In this paper, we develop a policy search algorithm with policy updates that are both robust and non-parametric. Our method can learn non-parametric control policies for infinite horizon continuous Markov decision processes with non-linear and redundant sensory representations. We investigate how we can use approximations of the kernel function to reduce the time requirements of the demanding non-parametric computations. In our experiments, we show the strong performance of the proposed method, and how it can be approximated efficiently. Finally, we show that our algorithm can learn a real-robot underpowered swing-up task directly from image data.}
    }
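    The "limited information loss" constraint has a convenient closed form for sample-based updates: samples are reweighted by exponentiated returns, with a temperature chosen so the reweighted distribution stays within a KL bound of the old one. The sketch below is a simplification under two stated assumptions: the reference distribution is taken as uniform over the samples, and the temperature is found by geometric bisection rather than by solving the paper's dual problem.

    import numpy as np

    def kl_bounded_weights(returns, epsilon=0.5, lo=1e-3, hi=1e3, iters=60):
        """Weights w_i ~ exp(R_i / eta), with eta tuned by bisection so that
        KL(weighted || uniform) is approximately epsilon."""
        R = np.asarray(returns, dtype=float)
        R = R - R.max()                        # numerical stability
        def kl(eta):
            w = np.exp(R / eta)
            p = w / w.sum()
            return float(np.sum(p * np.log(p * len(p) + 1e-12)))
        for _ in range(iters):
            eta = np.sqrt(lo * hi)             # geometric midpoint
            if kl(eta) > epsilon:
                lo = eta                       # too greedy: soften the weights
            else:
                hi = eta                       # within bound: sharpen
        w = np.exp(R / hi)
        return w / w.sum()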
  • H. Cuayahuitl and S. Yu, “Deep reinforcement learning of dialogue policies with less weight updates,” in International conference of the speech communication association (interspeech), 2017.
    [BibTeX] [Abstract] [Download PDF]

    Deep reinforcement learning dialogue systems are attractive because they can jointly learn their feature representations and policies without manual feature engineering. But their application is challenging due to slow learning. We propose a two-stage method for accelerating the induction of single or multi-domain dialogue policies. While the first stage reduces the amount of weight updates over time, the second stage uses very limited minibatches (of as much as two learning experiences) sampled from experience replay memories. The former frequently updates the weights of the neural nets at early stages of training, and decreases the amount of updates as training progresses by performing updates during exploration and by skipping updates during exploitation. The learning process is thus accelerated through less weight updates in both stages. An empirical evaluation in three domains (restaurants, hotels and tv guide) confirms that the proposed method trains policies 5 times faster than a baseline without the proposed method. Our findings are useful for training larger-scale neural-based spoken dialogue systems.

    @inproceedings{lincoln27676,
    booktitle = {International Conference of the Speech Communication Association (INTERSPEECH)},
    month = {August},
    title = {Deep reinforcement learning of dialogue policies with less weight updates},
    author = {Heriberto Cuayahuitl and Seunghak Yu},
    year = {2017},
    url = {http://eprints.lincoln.ac.uk/id/eprint/27676/},
    abstract = {Deep reinforcement learning dialogue systems are attractive because they can jointly learn their feature representations and policies without manual feature engineering. But their application is challenging due to slow learning. We propose a two-stage method for accelerating the induction of single or multi-domain dialogue policies. While the first stage reduces the amount of weight updates over time, the second stage uses very limited minibatches (of as much as two learning experiences) sampled from experience replay memories. The former frequently updates the weights of the neural nets at early stages of training, and decreases the amount of updates as training progresses by performing updates during exploration and by skipping updates during exploitation. The learning process is thus accelerated through less weight updates in both stages. An empirical evaluation in three domains (restaurants, hotels and tv guide) confirms that the proposed method trains policies 5 times faster than a baseline without the proposed method. Our findings are useful for training larger-scale neural-based spoken dialogue systems.}
    }
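    The schedule proposed in the paper, updating during exploration and skipping updates during exploitation with very small minibatches, reduces to a few lines of control flow. The agent, env, and replay interfaces below are hypothetical placeholders, not a real API; only the update-skipping logic is the point.

    import random

    def interaction_step(agent, env, state, epsilon, replay, batch_size=2):
        """One dialogue turn: weights are updated only on exploratory
        actions, so the update rate decays together with epsilon."""
        explore = random.random() < epsilon
        action = env.sample_action() if explore else agent.best_action(state)
        next_state, reward, done = env.step(action)
        replay.append((state, action, reward, next_state, done))
        if explore and len(replay) >= batch_size:   # skip updates when exploiting
            agent.update(random.sample(replay, batch_size))
        return next_state, done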
  • A. Abdolmaleki, B. Price, N. Lau, P. Reis, and G. Neumann, “Contextual cma-es,” in International joint conference on artificial intelligence (ijcai), 2017.
    [BibTeX] [Abstract] [Download PDF]

    Many stochastic search algorithms are designed to optimize a fixed objective function to learn a task, i.e., if the objective function changes slightly, for example, due to a change in the situation or context of the task, relearning is required to adapt to the new context. For instance, if we want to learn a kicking movement for a soccer robot, we have to relearn the movement for different ball locations. Such relearning is undesired as it is highly inefficient and many applications require a fast adaptation to a new context/situation. Therefore, we investigate contextual stochastic search algorithms that can learn multiple, similar tasks simultaneously. Current contextual stochastic search methods are based on policy search algorithms and suffer from premature convergence and the need for parameter tuning. In this paper, we extend the well-known CMA-ES algorithm to the contextual setting and illustrate its performance on several contextual tasks. Our new algorithm, called contextual CMA-ES, benefits from contextual learning while it preserves all the features of standard CMA-ES such as stability, avoidance of premature convergence, step size control and a minimal amount of parameter tuning.

    @inproceedings{lincoln28141,
    booktitle = {International Joint Conference on Artificial Intelligence (IJCAI)},
    month = {August},
    title = {Contextual CMA-ES},
    author = {A. Abdolmaleki and B. Price and N. Lau and P. Reis and G. Neumann},
    year = {2017},
    url = {http://eprints.lincoln.ac.uk/id/eprint/28141/},
    abstract = {Many stochastic search algorithms are designed to optimize a fixed objective function to learn a task, i.e., if the objective function changes slightly, for example, due to a change in the situation or context of the task, relearning is required to adapt to the new context. For instance, if we want to learn a kicking movement for a soccer robot, we have to relearn the movement for different ball locations. Such relearning is undesired as it is highly inefficient and many applications require a fast adaptation to a new context/situation. Therefore, we investigate contextual stochastic search algorithms that can learn multiple, similar tasks simultaneously. Current contextual stochastic search methods are based on policy search algorithms and suffer from premature convergence and the need for parameter tuning. In this paper, we extend the well-known CMA-ES algorithm to the contextual setting and illustrate its performance on several contextual tasks. Our new algorithm, called contextual CMA-ES, benefits from contextual learning while it preserves all the features of standard CMA-ES such as stability, avoidance of premature convergence, step size control and a minimal amount of parameter tuning.}
    }
  • A. Binch and C. Fox, “Controlled comparison of machine vision algorithms for rumex and urtica detection in grassland,” Computers and electronics in agriculture, vol. 140, p. 123–138, 2017. doi:10.1016/j.compag.2017.05.018
    [BibTeX] [Abstract] [Download PDF]

    Automated robotic weeding of grassland will improve the productivity of dairy and sheep farms while helping to conserve their environments. Previous studies have reported results of machine vision methods to separate grass from grassland weeds but each use their own datasets and report only performance of their own algorithm, making it impossible to compare them. A definitive, large-scale independent study is presented of all major known grassland weed detection methods evaluated on a new standardised data set under a wider range of environment conditions. This allows for a fair, unbiased, independent and statistically significant comparison of these and future methods for the first time. We test features including linear binary patterns, BRISK, Fourier and Watershed; and classifiers including support vector machines, linear discriminants, nearest neighbour, and meta-classifier combinations. The most accurate method is found to use linear binary patterns together with a support vector machine.

    @article{lincoln32031,
    volume = {140},
    month = {August},
    author = {Adam Binch and Charles Fox},
    title = {Controlled comparison of machine vision algorithms for Rumex and Urtica detection in grassland},
    publisher = {Elsevier},
    year = {2017},
    journal = {Computers and Electronics in Agriculture},
    doi = {10.1016/j.compag.2017.05.018},
    pages = {123--138},
    url = {http://eprints.lincoln.ac.uk/id/eprint/32031/},
    abstract = {Automated robotic weeding of grassland will improve the productivity of dairy and sheep farms while helping to conserve their environments. Previous studies have reported results of machine vision methods to separate grass from grassland weeds but each use their own datasets and report only performance of their own algorithm, making it impossible to compare them. A definitive, large-scale independent study is presented of all major known grassland weed detection methods evaluated on a new standardised data set under a wider range of environment conditions. This allows for a fair, unbiased, independent and statistically significant comparison of these and future methods for the first time. We test features including linear binary patterns, BRISK, Fourier and Watershed; and classifiers including support vector machines, linear discriminants, nearest neighbour, and meta-classifier combinations. The most accurate method is found to use linear binary patterns together with a support vector machine.}
    }
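    The winning pipeline, binary-pattern texture histograms fed to a support vector machine, can be sketched with scikit-image and scikit-learn. Patch handling, LBP parameters, and the kernel choice here are illustrative assumptions, not the paper's tuned settings.

    import numpy as np
    from skimage.feature import local_binary_pattern
    from sklearn.svm import SVC

    P, R = 8, 1   # 8 neighbours at radius 1, 'uniform' mapping

    def lbp_histogram(gray_patch):
        """Normalised histogram of uniform LBP codes for one image patch."""
        codes = local_binary_pattern(gray_patch, P, R, method='uniform')
        hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2),
                               density=True)
        return hist

    def train_weed_classifier(patches, labels):
        """patches: grayscale arrays; labels: 1 = Rumex/Urtica, 0 = grass."""
        X = np.vstack([lbp_histogram(p) for p in patches])
        return SVC(kernel='rbf').fit(X, np.asarray(labels))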
  • R. Akrour, D. Sorokin, J. Peters, and G. Neumann, “Local bayesian optimization of motor skills,” in International conference on machine learning (icml), 2017.
    [BibTeX] [Abstract] [Download PDF]

    Bayesian optimization is renowned for its sample efficiency but its application to higher dimensional tasks is impeded by its focus on global optimization. To scale to higher dimensional problems, we leverage the sample efficiency of Bayesian optimization in a local context. The optimization of the acquisition function is restricted to the vicinity of a Gaussian search distribution which is moved towards high value areas of the objective. The proposed information-theoretic update of the search distribution results in a Bayesian interpretation of local stochastic search: the search distribution encodes prior knowledge on the optimum's location and is weighted at each iteration by the likelihood of this location's optimality. We demonstrate the effectiveness of our algorithm on several benchmark objective functions as well as a continuous robotic task in which an informative prior is obtained by imitation learning.

    @inproceedings{lincoln27902,
    booktitle = {International Conference on Machine Learning (ICML)},
    month = {August},
    title = {Local Bayesian optimization of motor skills},
    author = {R. Akrour and D. Sorokin and J. Peters and G. Neumann},
    year = {2017},
    url = {http://eprints.lincoln.ac.uk/id/eprint/27902/},
    abstract = {Bayesian optimization is renowned for its sample efficiency but its application to higher dimensional tasks is impeded by its focus on global optimization. To scale to higher dimensional problems, we leverage the sample efficiency of Bayesian optimization in a local context. The optimization of the acquisition function is restricted to the vicinity of a Gaussian search distribution which is moved towards high value areas of the objective. The proposed information-theoretic update of the search distribution results in a Bayesian interpretation of local stochastic search: the search distribution encodes prior knowledge on the optimum's location and is weighted at each iteration by the likelihood of this location's optimality. We demonstrate the effectiveness of our algorithm on several benchmark objective functions as well as a continuous robotic task in which an informative prior is obtained by imitation learning.}
    }
  • D. Liu and S. Yue, “Fast unsupervised learning for visual pattern recognition using spike timing dependent plasticity,” Neurocomputing, vol. 249, p. 212–224, 2017. doi:10.1016/j.neucom.2017.04.003
    [BibTeX] [Abstract] [Download PDF]

    Real-time learning needs algorithms operating at a speed comparable to humans or animals; however, this is a huge challenge in processing visual inputs. Research shows a biological brain can process complicated real-life recognition scenarios at the millisecond scale. Inspired by biological systems, in this paper we propose a novel real-time learning method by combining the spike timing-based feed-forward spiking neural network (SNN) and the fast unsupervised spike timing dependent plasticity learning method with dynamic post-synaptic thresholds. Fast cross-validated experiments using the MNIST database showed the high efficiency of the proposed method at an acceptable accuracy.

    @article{lincoln26922,
    volume = {249},
    month = {August},
    author = {Daqi Liu and Shigang Yue},
    title = {Fast unsupervised learning for visual pattern recognition using spike timing dependent plasticity},
    publisher = {Elsevier},
    year = {2017},
    journal = {Neurocomputing},
    doi = {10.1016/j.neucom.2017.04.003},
    pages = {212--224},
    url = {http://eprints.lincoln.ac.uk/id/eprint/26922/},
    abstract = {Real-time learning needs algorithms operating at a speed comparable to humans or animals; however, this is a huge challenge in processing visual inputs. Research shows a biological brain can process complicated real-life recognition scenarios at the millisecond scale. Inspired by biological systems, in this paper we propose a novel real-time learning method by combining the spike timing-based feed-forward spiking neural network (SNN) and the fast unsupervised spike timing dependent plasticity learning method with dynamic post-synaptic thresholds. Fast cross-validated experiments using the MNIST database showed the high efficiency of the proposed method at an acceptable accuracy.}
    }
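    The plasticity rule at the core of such networks is concise: a pair-based STDP update strengthens a synapse when the presynaptic spike precedes the postsynaptic one and weakens it otherwise, exponentially in the spike-time difference. The constants below are typical textbook values, not those of the paper.

    import numpy as np

    def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
        """Pair-based STDP (spike times in ms). Returns the clipped new weight."""
        dt = t_post - t_pre
        if dt > 0:
            w = w + a_plus * np.exp(-dt / tau)    # pre before post: potentiate
        else:
            w = w - a_minus * np.exp(dt / tau)    # post before pre: depress
        return float(np.clip(w, 0.0, 1.0))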
  • A. Rahman, A. Ahmed, and S. Yue, “Classification of tongue – glossitis abnormality,” Lecture notes in engineering and computer science: proceedings of the world congress on engineering, p. 1–4, 2017.
    [BibTeX] [Abstract] [Download PDF]

    Glossitis abnormality is a tongue abnormality affecting patients suffering from Diabetes Mellitus (DM). The novelty of the proposed approach is attributed to utilising visual signs that appear on the tongue due to Glossitis abnormality caused by the high blood sugar level in the human body. The clinical test for the blood sugar level is inconvenient for some patients in rural and poor areas where medical services are minimal or may not be available at all. This paper presents an approach to classifying a tongue abnormality related to Diabetes Mellitus (DM) following Western Medicine. To screen and monitor human organs effectively, the proposed computer-aided model predicts and classifies abnormalities appearing on the tongue or tongue surface using visual signs caused by the Glossitis abnormality. The visual signs are extracted following a coherent diagnosis procedure complying with Western Medicine (WM) in practice. The experimental result has shown a promising accuracy of 95.8\% for the Glossitis abnormality by applying a Random Forest classifier to the extracted visual signs from 572 tongue samples of 166 patients.

    @article{lincoln35378,
    month = {July},
    author = {Ashiqur Rahman and Amr Ahmed and Shigang Yue},
    booktitle = {The 2017 International Conference of Data Mining and Knowledge Engineering},
    title = {Classification of Tongue - Glossitis Abnormality},
    publisher = {International Association of Engineers (IAENG)},
    journal = {Lecture Notes in Engineering and Computer Science: Proceedings of The World Congress on Engineering},
    pages = {1--4},
    year = {2017},
    url = {http://eprints.lincoln.ac.uk/id/eprint/35378/},
    abstract = {Glossitis abnormality is a tongue abnormality affecting patients suffering from Diabetes Mellitus (DM). The novelty of the proposed approach is attributed to utilising visual signs that appear on the tongue due to Glossitis abnormality caused by the high blood sugar level in the human body. The clinical test for the blood sugar level is inconvenient for some patients in rural and poor areas where medical services are minimal or may not be available at all.
    This paper presents an approach to classifying a tongue abnormality related to Diabetes Mellitus (DM) following Western Medicine. To screen and monitor human organs effectively, the proposed computer-aided model predicts and classifies abnormalities appearing on the tongue or tongue surface using visual signs caused by the Glossitis abnormality. The visual signs are extracted following a coherent diagnosis procedure complying with Western Medicine (WM) in practice. The experimental result has shown a promising accuracy of 95.8\% for the Glossitis abnormality by applying a Random Forest classifier to the extracted visual signs from 572 tongue samples of 166 patients.}
    }
  • C. Lekakou, S. M. Mustaza, T. Crisp, Y. Elsayed, and M. Saaj, “A material-based model for the simulation and control of soft robot actuator,” in Annual conference towards autonomous robotic systems, 2017, p. 557–569. doi:10.1007/978-3-319-64107-2_45
    [BibTeX] [Abstract] [Download PDF]

    An innovative material-based model is described for a three-pneumatic channel, soft robot actuator and implemented in simulations and control. Two types of material models are investigated: a soft, hyperelastic material model and a novel visco-hyperelastic material model are presented and evaluated in simulations of one-channel operation. The advanced visco-hyperelastic model is further demonstrated in control under multi-channel actuation. Finally, a soft linear elastic material model was used in finite element analysis of the soft three-pneumatic channel actuator within SOFA, moving inside a pipe and interacting with its rigid wall or with a soft hemispherical object attached to that wall. A collision model was used for these interactions and the simulations yielded 'virtual haptic' 3d-force profiles at monitored nodes at the free- and fixed-end of the actuator.

    @inproceedings{lincoln37437,
    volume = {10454},
    month = {July},
    author = {C. Lekakou and S.M. Mustaza and T. Crisp and Y. Elsayed and Mini Saaj},
    note = {cited By 1},
    booktitle = {Annual Conference Towards Autonomous Robotic Systems},
    title = {A material-based model for the simulation and control of soft robot actuator},
    publisher = {Springer},
    year = {2017},
    journal = {Proc. 18th Towards Autonomous Robotics Systems Conference},
    doi = {10.1007/978-3-319-64107-2\_45},
    pages = {557--569},
    url = {http://eprints.lincoln.ac.uk/id/eprint/37437/},
    abstract = {An innovative material-based model is described for a three-pneumatic channel, soft robot actuator and implemented in simulations and control. Two types of material models are investigated: a soft, hyperelastic material model and a novel visco-hyperelastic material model are presented and evaluated in simulations of one-channel operation. The advanced visco-hyperelastic model is further demonstrated in control under multi-channel actuation. Finally, a soft linear elastic material model was used in finite element analysis of the soft three-pneumatic channel actuator within SOFA, moving inside a pipe and interacting with its rigid wall or with a soft hemispherical object attached to that wall. A collision model was used for these interactions and the simulations yielded 'virtual haptic' 3d-force profiles at monitored nodes at the free- and fixed-end of the actuator.}
    }
  • A. Abdolmaleki, B. Price, N. Lau, L. P. Reis, and G. Neumann, “Deriving and improving cma-es with information geometric trust regions,” in The genetic and evolutionary computation conference (gecco 2017), 2017.
    [BibTeX] [Abstract] [Download PDF]

    CMA-ES is one of the most popular stochastic search algorithms. It performs favourably in many tasks without the need of extensive parameter tuning. The algorithm has many beneficial properties, including automatic step-size adaptation, efficient covariance updates that incorporate the current samples as well as the evolution path, and its invariance properties. Its update rules are composed of well-established heuristics where the theoretical foundations of some of these rules are also well understood. In this paper we will fully derive all CMA-ES update rules within the framework of expectation-maximisation-based stochastic search algorithms using information-geometric trust regions. We show that the use of the trust region results in similar updates to CMA-ES for the mean and the covariance matrix while it allows for the derivation of an improved update rule for the step-size. Our new algorithm, Trust-Region Covariance Matrix Adaptation Evolution Strategy (TR-CMA-ES), is fully derived from first order optimization principles and performs favourably compared to the standard CMA-ES algorithm.

    @inproceedings{lincoln27056,
    booktitle = {The Genetic and Evolutionary Computation Conference (GECCO 2017)},
    month = {July},
    title = {Deriving and improving CMA-ES with Information geometric trust regions},
    author = {Abbas Abdolmaleki and Bob Price and Nuno Lau and Luis Paulo Reis and Gerhard Neumann},
    year = {2017},
    url = {http://eprints.lincoln.ac.uk/id/eprint/27056/},
    abstract = {CMA-ES is one of the most popular stochastic search algorithms. It performs favourably in many tasks without the need of extensive parameter tuning. The algorithm has many beneficial properties, including automatic step-size adaptation, efficient covariance updates that incorporate the current samples as well as the evolution path, and its invariance properties. Its update rules are composed of well-established heuristics where the theoretical foundations of some of these rules are also well understood. In this paper we will fully derive all CMA-ES update rules within the framework of expectation-maximisation-based stochastic search algorithms using information-geometric trust regions. We show that the use of the trust region results in similar updates to CMA-ES for the mean and the covariance matrix while it allows for the derivation of an improved update rule for the step-size. Our new algorithm, Trust-Region Covariance Matrix Adaptation Evolution Strategy (TR-CMA-ES), is fully derived from first order optimization principles and performs favourably compared to the standard CMA-ES algorithm.}
    }
  • A. Paraschos, R. Lioutikov, J. Peters, and G. Neumann, “Probabilistic prioritization of movement primitives,” Ieee robotics and automation letters, vol. PP, iss. 99, 2017. doi:10.1109/LRA.2017.2725440
    [BibTeX] [Abstract] [Download PDF]

    Movement prioritization is a common approach to combine controllers of different tasks for redundant robots, where each task is assigned a priority. The priorities of the tasks are often hand-tuned or the result of an optimization, but seldom learned from data. This paper combines Bayesian task prioritization with probabilistic movement primitives to prioritize full motion sequences that are learned from demonstrations. Probabilistic movement primitives (ProMPs) can encode distributions of movements over full motion sequences and provide control laws to exactly follow these distributions. The probabilistic formulation allows for a natural application of Bayesian task prioritization. We extend the ProMP controllers with an additional feedback component that accounts for inaccuracies in following the distribution and allows for a more robust prioritization of primitives. We demonstrate how the task priorities can be obtained from imitation learning and how different primitives can be combined to solve even unseen task-combinations. Due to the prioritization, our approach can efficiently learn a combination of tasks without requiring individual models per task combination. Further, our approach can adapt an existing primitive library by prioritizing additional controllers, for example, for implementing obstacle avoidance. Hence, the need of retraining the whole library is avoided in many cases. We evaluate our approach on reaching movements under constraints with redundant simulated planar robots and two physical robot platforms, the humanoid robot 'iCub' and a KUKA LWR robot arm.

    @article{lincoln27901,
    volume = {PP},
    number = {99},
    month = {July},
    author = {Alexandros Paraschos and Rudolf Lioutikov and Jan Peters and Gerhard Neumann},
    booktitle = {Proceedings of the International Conference on Intelligent Robot Systems, and IEEE Robotics and Automation Letters (RA-L)},
    title = {Probabilistic prioritization of movement primitives},
    publisher = {IEEE},
    year = {2017},
    journal = {IEEE Robotics and Automation Letters},
    doi = {10.1109/LRA.2017.2725440},
    url = {http://eprints.lincoln.ac.uk/id/eprint/27901/},
    abstract = {Movement prioritization is a common approach to combine controllers of different tasks for redundant robots, where each task is assigned a priority. The priorities of the tasks are often hand-tuned or the result of an optimization, but seldom learned from data. This paper combines Bayesian task prioritization with probabilistic movement primitives to prioritize full motion sequences that are learned from demonstrations. Probabilistic movement primitives (ProMPs) can encode distributions of movements over full motion sequences and provide control laws to exactly follow these distributions. The probabilistic formulation allows for a natural application of Bayesian task prioritization. We extend the ProMP controllers with an additional feedback component that accounts for inaccuracies in following the distribution and allows for a more robust prioritization of primitives. We demonstrate how the task priorities can be obtained from imitation learning and how different primitives can be combined to solve even unseen task-combinations. Due to the prioritization, our approach can efficiently learn a combination of tasks without requiring individual models per task combination. Further, our approach can adapt an existing primitive library by prioritizing additional controllers, for example, for implementing obstacle avoidance. Hence, the need of retraining the whole library is avoided in many cases. We evaluate our approach on reaching movements under constraints with redundant simulated planar robots and two physical robot platforms, the humanoid robot 'iCub' and a KUKA LWR robot arm.}
    }
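    As a concrete reading of the Bayesian prioritization described above, the following minimal sketch (editor's illustration with hypothetical names, not the paper's exact control law) fuses per-task Gaussian control distributions by a priority-scaled product of Gaussians, so that higher-priority tasks dominate the blended command:

        # Sketch: prioritized fusion of per-task Gaussian controls N(mean_i, cov_i).
        # Priorities act as precision weights, so high-priority tasks dominate.
        import numpy as np

        def fuse_task_controls(means, covariances, priorities):
            dim = means[0].shape[0]
            precision_sum = np.zeros((dim, dim))
            weighted_mean = np.zeros(dim)
            for mu, cov, w in zip(means, covariances, priorities):
                precision = w * np.linalg.inv(cov)  # priority scales the task's precision
                precision_sum += precision
                weighted_mean += precision @ mu
            fused_cov = np.linalg.inv(precision_sum)
            return fused_cov @ weighted_mean, fused_cov

        # Two 2-D tasks: a confident reaching command and a low-priority posture command.
        u, _ = fuse_task_controls(
            means=[np.array([1.0, 0.0]), np.array([0.0, 1.0])],
            covariances=[np.eye(2) * 0.1, np.eye(2) * 1.0],
            priorities=[1.0, 0.2],
        )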
  • H. Cuayahuitl, S. Yu, A. Williamson, and J. Carse, “Scaling up deep reinforcement learning for multi-domain dialogue systems,” in International joint conference on neural networks (ijcnn), 2017. doi:10.1109/IJCNN.2017.7966275
    [BibTeX] [Abstract] [Download PDF]

    Standard deep reinforcement learning methods such as Deep Q-Networks (DQN) for multiple tasks (domains) face scalability problems due to large search spaces. This paper proposes a three-stage method for multi-domain dialogue policy learning, termed NDQN, and applies it to an information-seeking spoken dialogue system in the domains of restaurants and hotels. In this method, the first stage does multi-policy learning via a network of DQN agents; the second makes use of compact state representations by compressing raw inputs; and the third stage applies a pre-training phase for bootstrapping the behaviour of agents in the network. Experimental results comparing DQN (baseline) versus NDQN (proposed) using simulations report that the proposed method exhibits better scalability and is promising for optimising the behaviour of multi-domain dialogue systems. An additional evaluation reports that the NDQN agents outperformed a K-Nearest Neighbour baseline in task success and dialogue length, yielding more efficient and successful dialogues. (A schematic sketch of the agent network follows this entry.)

    @inproceedings{lincoln26622,
    booktitle = {International Joint Conference on Neural Networks (IJCNN)},
    month = {July},
    title = {Scaling up deep reinforcement learning for multi-domain dialogue systems},
    author = {Heriberto Cuayahuitl and Seunghak Yu and Ashley Williamson and Jacob Carse},
    publisher = {IEEE},
    year = {2017},
    doi = {10.1109/IJCNN.2017.7966275},
    url = {http://eprints.lincoln.ac.uk/id/eprint/26622/},
    abstract = {Standard deep reinforcement learning methods such as Deep Q-Networks (DQN) for multiple tasks (domains) face scalability problems due to large search spaces. This paper proposes a three-stage method for multi-domain dialogue policy learning{--}termed NDQN, and applies it to an information-seeking spoken dialogue system in the domains of restaurants and hotels. In this method, the first stage does multi-policy learning via a network of DQN agents; the second makes use of compact state representations by compressing raw inputs; and the third stage applies a pre-training phase for bootstrapping the behaviour of agents in the network. Experimental results comparing DQN (baseline) versus NDQN (proposed) using simulations report that the proposed method exhibits better scalability and is promising for optimising the behaviour of multi-domain dialogue systems. An additional evaluation reports that the NDQN agents outperformed a K-Nearest Neighbour baseline in task success and dialogue length, yielding more efficient and successful dialogues.}
    }
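    The three-stage NDQN method is only outlined above; this sketch (editor's toy with hypothetical classes, the trained DQNs abstracted away) shows just the "network of agents" skeleton, where a dispatcher routes each dialogue turn to the agent owning the current domain:

        # Toy skeleton of a network of per-domain dialogue agents.
        class DomainAgent:
            def __init__(self, domain):
                self.domain = domain

            def act(self, state):
                # A real NDQN agent would return the argmax action of a trained DQN
                # over a compact (compressed) state representation.
                return f"{self.domain}:greet" if not state else f"{self.domain}:inform"

        class DialogueNetwork:
            def __init__(self, domains):
                self.agents = {d: DomainAgent(d) for d in domains}

            def step(self, domain, state):
                # Route the turn to the agent that owns the active domain.
                return self.agents[domain].act(state)

        net = DialogueNetwork(["restaurants", "hotels"])
        print(net.step("restaurants", []))         # restaurants:greet
        print(net.step("hotels", ["area=north"]))  # hotels:inform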
  • R. Lioutikov, G. Neumann, G. Maeda, and J. Peters, “Learning movement primitive libraries through probabilistic segmentation,” International journal of robotics research (ijrr), vol. 36, iss. 8, p. 879–894, 2017. doi:10.1177/0278364917713116
    [BibTeX] [Abstract] [Download PDF]

    Movement primitives are a well-established approach for encoding and executing movements. While the primitives themselves have been extensively researched, the concept of movement primitive libraries has not received similar attention. Libraries of movement primitives represent the skill set of an agent. Primitives can be queried and sequenced in order to solve specific tasks. The goal of this work is to segment unlabeled demonstrations into a representative set of primitives. Our proposed method differs from current approaches by taking advantage of the often neglected mutual dependencies between the segments contained in the demonstrations and the primitives to be encoded. By exploiting this mutual dependency, we show that we can improve both the segmentation and the movement primitive library. Based on probabilistic inference, our novel approach segments the demonstrations while learning a probabilistic representation of movement primitives. We demonstrate our method on two real robot applications. First, the robot segments sequences of different letters into a library, explaining the observed trajectories. Second, the robot segments demonstrations of a chair assembly task into a movement primitive library. The library is subsequently used to assemble the chair in an order not present in the demonstrations. (A toy sketch of the alternating segmentation/library update follows this entry.)

    @article{lincoln28021,
    volume = {36},
    number = {8},
    month = {July},
    author = {Rudolf Lioutikov and Gerhard Neumann and Guilherme Maeda and Jan Peters},
    title = {Learning movement primitive libraries through probabilistic segmentation},
    publisher = {SAGE},
    year = {2017},
    journal = {International Journal of Robotics Research (IJRR)},
    doi = {10.1177/0278364917713116},
    pages = {879--894},
    url = {http://eprints.lincoln.ac.uk/id/eprint/28021/},
    abstract = {Movement primitives are a well-established approach for encoding and executing movements. While the primitives themselves have been extensively researched, the concept of movement primitive libraries has not received similar attention. Libraries of movement primitives represent the skill set of an agent. Primitives can be queried and sequenced in order to solve specific tasks. The goal of this work is to segment unlabeled demonstrations into a representative set of primitives. Our proposed method differs from current approaches by taking advantage of the often neglected mutual dependencies between the segments contained in the demonstrations and the primitives to be encoded. By exploiting this mutual dependency, we show that we can improve both the segmentation and the movement primitive library. Based on probabilistic inference, our novel approach segments the demonstrations while learning a probabilistic representation of movement primitives. We demonstrate our method on two real robot applications. First, the robot segments sequences of different letters into a library, explaining the observed trajectories. Second, the robot segments demonstrations of a chair assembly task into a movement primitive library. The library is subsequently used to assemble the chair in an order not present in the demonstrations.}
    }
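    The mutual dependency between segmentation and library learning can be pictured with a toy analogue (editor's sketch, far simpler than the paper's probabilistic inference): alternate between segmenting demonstrations under the current library and refitting the primitives on the induced segments, here reduced to k-means over fixed-length trajectory windows:

        # Toy analogue: windows stand in for segments, window means for primitives.
        import numpy as np

        rng = np.random.default_rng(0)
        demos = rng.standard_normal((100, 5))  # 100 trajectory windows of 5 samples
        library = demos[rng.choice(100, size=3, replace=False)].copy()

        for _ in range(10):
            # Segmentation step: assign each window to its best-explaining primitive.
            dists = ((demos[:, None, :] - library[None, :, :]) ** 2).sum(-1)
            labels = dists.argmin(1)
            # Library step: refit each primitive on the segments assigned to it.
            for k in range(len(library)):
                if (labels == k).any():
                    library[k] = demos[labels == k].mean(0)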
  • H. Abdulsamad, O. Arenz, J. Peters, and G. Neumann, “State-regularized policy search for linearized dynamical systems,” in Proceedings of the international conference on automated planning and scheduling (icaps), 2017.
    [BibTeX] [Abstract] [Download PDF]

    Trajectory-Centric Reinforcement Learning and Trajectory Optimization methods optimize a sequence of feedback controllers by taking advantage of local approximations of model dynamics and cost functions. Stability of the policy update is a major issue for these methods, rendering them hard to apply to highly nonlinear systems. Recent approaches combine classical Stochastic Optimal Control methods with information-theoretic bounds to control the step-size of the policy update and could even be used to train nonlinear deep control policies. These methods bound the relative entropy between the new and the old policy to ensure a stable policy update. However, despite the bound in policy space, the state distributions of two consecutive policies can still differ significantly, rendering the used local approximate models invalid. To alleviate this issue we propose enforcing a relative entropy constraint not only on the policy update, but also on the update of the state distribution, around which the dynamics and cost are being approximated. We present a derivation of the closed-form policy update and show that our approach outperforms related methods on two nonlinear and highly dynamic simulated systems. (The constraint structure is sketched after this entry.)

    @inproceedings{lincoln27055,
    booktitle = {Proceedings of the International Conference on Automated Planning and Scheduling (ICAPS)},
    month = {June},
    title = {State-regularized policy search for linearized dynamical systems},
    author = {Hany Abdulsamad and Oleg Arenz and Jan Peters and Gerhard Neumann},
    year = {2017},
    url = {http://eprints.lincoln.ac.uk/id/eprint/27055/},
    abstract = {Trajectory-Centric Reinforcement Learning and Trajectory Optimization methods optimize a sequence of feedback controllers by taking advantage of local approximations of model dynamics and cost functions. Stability of the policy update is a major issue for these methods, rendering them hard to apply to highly nonlinear systems. Recent approaches combine classical Stochastic Optimal Control methods with information-theoretic bounds to control the step-size of the policy update and could even be used to train nonlinear deep control policies. These methods bound the relative entropy between the new and the old policy to ensure a stable policy update. However, despite the bound in policy space, the state distributions of two consecutive policies can still differ significantly, rendering the used local approximate models invalid. To alleviate this issue we propose enforcing a relative entropy constraint not only on the policy update, but also on the update of the state distribution, around which the dynamics and cost are being approximated. We present a derivation of the closed-form policy update and show that our approach outperforms related methods on two nonlinear and highly dynamic simulated systems.}
    }
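    One way to write the constraint structure described above (editor's notation; q is the old policy and \mu_q its state distribution): the update is bounded both in policy space and in state-distribution space,

        \begin{align*}
        \max_{\pi}\ & \mathbb{E}_{\mu_{\pi}(s)\,\pi(a \mid s)}\!\left[ r(s,a) \right] \\
        \text{s.t.}\ & \mathrm{KL}\big( \pi(a \mid s) \,\|\, q(a \mid s) \big) \le \epsilon_{\pi}, \qquad
        \mathrm{KL}\big( \mu_{\pi}(s) \,\|\, \mu_{q}(s) \big) \le \epsilon_{\mu},
        \end{align*}

    which keeps the local approximations of dynamics and cost valid between consecutive updates.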
  • E. Rodias, R. Berruto, D. Bochtis, P. Busato, and A. Sopegno, “A computational tool for comparative energy cost analysis of multiple-crop production systems,” Energies, vol. 10, iss. 7, p. 831, 2017. doi:10.3390/en10070831
    [BibTeX] [Abstract] [Download PDF]

    Various crops can be considered as potential bioenergy and biofuel production feedstocks. The selection of the crops to be cultivated for that purpose is based on several factors. For an objective comparison between different crops, a common framework is required to assess their economic or energetic performance. In this paper, a computational tool for the energy cost evaluation of multiple-crop production systems is presented. All the in-field and transport operations are considered, providing a detailed analysis of the energy requirements of the components that contribute to the overall energy consumption. A demonstration scenario is also described. The scenario is based on three selected energy crops, namely Miscanthus, Arundo donax and Switchgrass. The tool can be used as a decision support system for the evaluation of different agronomical practices (such as fertilization and agrochemicals application), machinery systems, and management practices that can be applied in each one of the individual crops within the production system. (A toy aggregation sketch follows this entry.)

    @article{lincoln39221,
    volume = {10},
    number = {7},
    month = {June},
    author = {Efthymios Rodias and Remigio Berruto and Dionysis Bochtis and Patrizia Busato and Alessandro Sopegno},
    title = {A Computational Tool for Comparative Energy Cost Analysis of Multiple-Crop Production Systems},
    year = {2017},
    journal = {Energies},
    doi = {10.3390/en10070831},
    pages = {831},
    url = {http://eprints.lincoln.ac.uk/id/eprint/39221/},
    abstract = {Various crops can be considered as potential bioenergy and biofuel production feedstocks. The selection of the crops to be cultivated for that purpose is based on several factors. For an objective comparison between different crops, a common framework is required to assess their economic or energetic performance. In this paper, a computational tool for the energy cost evaluation of multiple-crop production systems is presented. All the in-field and transport operations are considered, providing a detailed analysis of the energy requirements of the components that contribute to the overall energy consumption. A demonstration scenario is also described. The scenario is based on three selected energy crops, namely Miscanthus, Arundo donax and Switchgrass. The tool can be used as a decision support system for the evaluation of different agronomical practices (such as fertilization and agrochemicals application), machinery systems, and management practices that can be applied in each one of the individual crops within the production system.}
    }
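    The core computation of such a tool can be pictured with a toy aggregation (editor's sketch; field names and values are hypothetical): total energy cost as the sum, over crops and operations, of machinery, fuel, labour and material-input energy:

        # Toy energy balance: sum per-operation energy components across crops.
        OPERATIONS = {
            "miscanthus": [
                {"name": "fertilization", "machinery_MJ": 120.0, "fuel_MJ": 310.0,
                 "labour_MJ": 4.0, "materials_MJ": 2200.0},
                {"name": "transport", "machinery_MJ": 90.0, "fuel_MJ": 540.0,
                 "labour_MJ": 6.0, "materials_MJ": 0.0},
            ],
            # ... entries for Arundo donax, switchgrass, etc.
        }

        def total_energy_MJ(operations):
            return sum(
                op["machinery_MJ"] + op["fuel_MJ"] + op["labour_MJ"] + op["materials_MJ"]
                for ops in operations.values()
                for op in ops
            )

        print(f"{total_energy_MJ(OPERATIONS):.0f} MJ")  # 3270 MJ for the toy inputs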
  • K. Goher, A. Almeshal, S. Agouri, A. Nasir, O. Tokhi, M. Alenizi, T. Alzanki, and S. Fadlallah, “Hybrid spiral-dynamic bacteria-chemotaxis algorithm with application to control two-wheeled machines,” Robotics and biomimetics, 2017. doi:10.1186/s40638-017-0059-1
    [BibTeX] [Abstract] [Download PDF]

    This paper presents the implementation of the hybrid spiral-dynamic bacteria-chemotaxis (HSDBC) approach to control two different configurations of a two-wheeled vehicle. The HSDBC is a combination of the bacterial chemotaxis used in the bacterial foraging algorithm (BFA) and the spiral-dynamic algorithm (SDA). BFA provides a good exploration strategy due to the chemotaxis approach. However, it endures an oscillation problem near the end of the search process when using a large step size. Conversely, for a small step size, it affords better exploitation and accuracy with slower convergence. SDA provides better stability when approaching an optimum point and has faster convergence speed. This may cause the search agents to get trapped in local optima, which results in low-accuracy solutions. HSDBC exploits the chemotactic strategy of BFA and the fitness accuracy and convergence speed of SDA so as to overcome the problems associated with both the SDA and BFA algorithms alone. The HSDBC thus developed is evaluated in optimizing the performance and energy consumption of two highly nonlinear platforms, namely single and double inverted pendulum-like vehicles with an extended rod. Comparative results with BFA and SDA show that the proposed algorithm is able to result in better performance of the highly nonlinear systems. (A minimal sketch of the hybrid step follows this entry.)

    @article{lincoln33057,
    month = {June},
    author = {Khaled Goher and Abdullah Almeshal and Saad Agouri and Ahmed Nasir and Osman Tokhi and Mohamed Alenizi and Talal Alzanki and Sulaiman Fadlallah},
    note = {This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.},
    title = {Hybrid spiral-dynamic bacteria-chemotaxis algorithm with application to control two-wheeled machines},
    publisher = {Springer},
    journal = {Robotics and Biomimetics},
    doi = {10.1186/s40638-017-0059-1},
    year = {2017},
    url = {http://eprints.lincoln.ac.uk/id/eprint/33057/},
    abstract = {This paper presents the implementation of the hybrid spiral-dynamic bacteria-chemotaxis (HSDBC) approach to control two different configurations of a two-wheeled vehicle. The HSDBC is a combination of the bacterial chemotaxis used in the bacterial foraging algorithm (BFA) and the spiral-dynamic algorithm (SDA). BFA provides a good exploration strategy due to the chemotaxis approach. However, it endures an oscillation problem near the end of the search process when using a large step size. Conversely, for a small step size, it affords better exploitation and accuracy with slower convergence. SDA provides better stability when approaching an optimum point and has faster convergence speed. This may cause the search agents to get trapped in local optima, which results in low-accuracy solutions. HSDBC exploits the chemotactic strategy of BFA and the fitness accuracy and convergence speed of SDA so as to overcome the problems associated with both the SDA and BFA algorithms alone. The HSDBC thus developed is evaluated in optimizing the performance and energy consumption of two highly nonlinear platforms, namely single and double inverted pendulum-like vehicles with an extended rod. Comparative results with BFA and SDA show that the proposed algorithm is able to result in better performance of the highly nonlinear systems.}
    }
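    A minimal sketch of the hybrid step (editor's reading with hypothetical parameters, not the authors' implementation): each agent makes a spiral-dynamic rotation about the current best point, followed by a chemotactic tumble that is kept only if it improves fitness:

        # Toy HSDBC-style optimizer on the 2-D Rosenbrock function.
        import numpy as np

        rng = np.random.default_rng(1)
        theta, r, tumble = np.pi / 4, 0.95, 0.05

        def fitness(x):
            return (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2

        R = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])

        agents = rng.uniform(-2, 2, size=(20, 2))
        for _ in range(200):
            best = agents[np.argmin([fitness(a) for a in agents])]
            for i, x in enumerate(agents):
                x_spiral = best + r * (R @ (x - best))                # spiral-dynamic move
                x_chemo = x_spiral + tumble * rng.standard_normal(2)  # chemotactic tumble
                agents[i] = x_chemo if fitness(x_chemo) < fitness(x_spiral) else x_spiral

        print(min(fitness(a) for a in agents))  # approaches 0 near the optimum (1, 1)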
  • M. Hanheide, M. Göbelbecker, G. S. Horn, A. Pronobis, K. Sjöö, A. Aydemir, P. Jensfelt, C. Gretton, R. Dearden, M. Janicek, H. Zender, G. Kruijff, N. Hawes, and J. L. Wyatt, “Robot task planning and explanation in open and uncertain worlds,” Artificial intelligence, vol. 247, p. 119–150, 2017. doi:10.1016/j.artint.2015.08.008
    [BibTeX] [Abstract] [Download PDF]

    A long-standing goal of AI is to enable robots to plan in the face of uncertain and incomplete information, and to handle task failure intelligently. This paper shows how to achieve this. There are two central ideas. The first idea is to organize the robot’s knowledge into three layers: instance knowledge at the bottom, commonsense knowledge above that, and diagnostic knowledge on top. Knowledge in a layer above can be used to modify knowledge in the layer(s) below. The second idea is that the robot should represent not just how its actions change the world, but also what it knows or believes. There are two types of knowledge effects the robot’s actions can have: epistemic effects (I believe X because I saw it) and assumptions (I’ll assume X to be true). By combining the knowledge layers with the models of knowledge effects, we can simultaneously solve several problems in robotics: (i) task planning and execution under uncertainty; (ii) task planning and execution in open worlds; (iii) explaining task failure; (iv) verifying those explanations. The paper describes how the ideas are implemented in a three-layer architecture on a mobile robot platform. The robot implementation was evaluated in five different experiments on object search, mapping, and room categorization. (A toy illustration of the knowledge layers follows this entry.)

    @article{lincoln18592,
    volume = {247},
    month = {June},
    author = {Marc Hanheide and Moritz G{\"o}belbecker and Graham S. Horn and Andrzej Pronobis and Kristoffer Sj{\"o}{\"o} and Alper Aydemir and Patric Jensfelt and Charles Gretton and Richard Dearden and Miroslav Janicek and Hendrik Zender and Geert-Jan Kruijff and Nick Hawes and Jeremy L. Wyatt},
    title = {Robot task planning and explanation in open and uncertain worlds},
    publisher = {Elsevier},
    year = {2017},
    journal = {Artificial Intelligence},
    doi = {10.1016/j.artint.2015.08.008},
    pages = {119--150},
    url = {http://eprints.lincoln.ac.uk/id/eprint/18592/},
    abstract = {A long-standing goal of AI is to enable robots to plan in the face of uncertain and incomplete information, and to handle task failure intelligently. This paper shows how to achieve this. There are two central ideas. The first idea is to organize the robot's knowledge into three layers: instance knowledge at the bottom, commonsense knowledge above that, and diagnostic knowledge on top. Knowledge in a layer above can be used to modify knowledge in the layer(s) below. The second idea is that the robot should represent not just how its actions change the world, but also what it knows or believes. There are two types of knowledge effects the robot's actions can have: epistemic effects (I believe X because I saw it) and assumptions (I'll assume X to be true). By combining the knowledge layers with the models of knowledge effects, we can simultaneously solve several problems in robotics: (i) task planning and execution under uncertainty; (ii) task planning and execution in open worlds; (iii) explaining task failure; (iv) verifying those explanations. The paper describes how the ideas are implemented in a three-layer architecture on a mobile robot platform. The robot implementation was evaluated in five different experiments on object search, mapping, and room categorization.}
    }
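    The layering idea can be illustrated with a toy lookup (editor's sketch, far simpler than the paper's architecture): diagnostic knowledge may revise what the robot believes, instance knowledge records what it has observed, and commonsense knowledge fills the remaining gaps with assumptions:

        # Toy three-layer belief lookup: a layer above can modify the layers below.
        diagnostic = {}                                   # top: explanations/overrides
        instance = {"kitchen.contains": "mug"}            # bottom: observed facts
        commonsense = {"kitchen.contains": "cups",
                       "office.contains": "desk"}         # default assumptions

        def believe(key):
            if key in diagnostic:
                return diagnostic[key]    # diagnostic knowledge revises lower layers
            if key in instance:
                return instance[key]      # epistemic effect: "I believe X, I saw it"
            return commonsense.get(key)   # assumption: "I'll assume X to be true"

        print(believe("kitchen.contains"))          # 'mug' (observed)
        print(believe("office.contains"))           # 'desk' (assumed by default)
        diagnostic["kitchen.contains"] = "unknown"  # e.g. after explaining a failure
        print(believe("kitchen.contains"))          # revised by the diagnostic layer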
  • A. Kupcsik, M. P. Deisenroth, J. Peters, A. P. Loh, P. Vadakkepat, and G. Neumann, “Model-based contextual policy search for data-efficient generalization of robot skills,” Artificial intelligence, vol. 247, p. 415–439, 2017. doi:10.1016/j.artint.2014.11.005
    [BibTeX] [Abstract] [Download PDF]

    In robotics, lower-level controllers are typically used to make the robot solve a specific task in a fixed context. For example, the lower-level controller can encode a hitting movement while the context defines the target coordinates to hit. However, in many learning problems the context may change between task executions. To adapt the policy to a new context, we utilize a hierarchical approach by learning an upper-level policy that generalizes the lower-level controllers to new contexts. A common approach to learn such upper-level policies is to use policy search. However, the majority of current contextual policy search approaches are model-free and require a high number of interactions with the robot and its environment. Model-based approaches are known to significantly reduce the amount of robot experiments; however, current model-based techniques cannot be applied straightforwardly to the problem of learning contextual upper-level policies. They rely on specific parametrizations of the policy and the reward function, which are often unrealistic in the contextual policy search formulation. In this paper, we propose a novel model-based contextual policy search algorithm that is able to generalize lower-level controllers, and is data-efficient. Our approach is based on learned probabilistic forward models and information theoretic policy search. Unlike current algorithms, our method does not require any assumption on the parametrization of the policy or the reward function. We show on complex simulated robotic tasks and in a real robot experiment that the proposed learning framework speeds up the learning process by up to two orders of magnitude in comparison to existing methods, while learning high quality policies. (The underlying optimization problem is sketched after this entry.)

    @article{lincoln25774,
    volume = {247},
    month = {June},
    author = {A. Kupcsik and M. P. Deisenroth and J. Peters and A. P. Loh and P. Vadakkepat and G. Neumann},
    title = {Model-based contextual policy search for data-efficient generalization of robot skills},
    publisher = {Elsevier},
    year = {2017},
    journal = {Artificial Intelligence},
    doi = {10.1016/j.artint.2014.11.005},
    pages = {415--439},
    url = {http://eprints.lincoln.ac.uk/id/eprint/25774/},
    abstract = {In robotics, lower-level controllers are typically used to make the robot solve a specific task in a fixed context. For example, the lower-level controller can encode a hitting movement while the context defines the target coordinates to hit. However, in many learning problems the context may change between task executions. To adapt the policy to a new context, we utilize a hierarchical approach by learning an upper-level policy that generalizes the lower-level controllers to new contexts. A common approach to learn such upper-level policies is to use policy search. However, the majority of current contextual policy search approaches are model-free and require a high number of interactions with the robot and its environment. Model-based approaches are known to significantly reduce the amount of robot experiments; however, current model-based techniques cannot be applied straightforwardly to the problem of learning contextual upper-level policies. They rely on specific parametrizations of the policy and the reward function, which are often unrealistic in the contextual policy search formulation. In this paper, we propose a novel model-based contextual policy search algorithm that is able to generalize lower-level controllers, and is data-efficient. Our approach is based on learned probabilistic forward models and information theoretic policy search. Unlike current algorithms, our method does not require any assumption on the parametrization of the policy or the reward function. We show on complex simulated robotic tasks and in a real robot experiment that the proposed learning framework speeds up the learning process by up to two orders of magnitude in comparison to existing methods, while learning high quality policies.}
    }
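    The contextual policy search problem the entry refers to is commonly written as follows (editor's notation: s is the context, \omega the parameters of the lower-level controller; the model-based twist is that the return \mathcal{R}(s,\omega) is estimated from learned probabilistic forward models rather than from additional robot rollouts):

        \begin{align*}
        \max_{\pi}\ & \int \mu(s) \int \pi(\omega \mid s)\, \mathcal{R}(s,\omega)\, d\omega\, ds \\
        \text{s.t.}\ & \mathbb{E}_{\mu(s)}\!\left[ \mathrm{KL}\big( \pi(\omega \mid s) \,\|\, q(\omega \mid s) \big) \right] \le \epsilon ,
        \end{align*}

    where q is the previously used upper-level policy and the KL bound keeps the update stable and data-efficient.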
  • S. Mustaza and C. M. Saaj, “Gynaecological endoscopic uterine elevator (gentler),” in 25th international congress of the european association of endoscopic surgeons, 2017.
    [BibTeX] [Download PDF]
    @inproceedings{lincoln39636,
    booktitle = {25th International Congress of the European Association of Endoscopic Surgeons},
    month = {June},
    title = {Gynaecological ENdoscopic uTerine eLEvatoR (GENTLER)},
    author = {S. Mustaza and C.M. Saaj},
    year = {2017},
    url = {http://eprints.lincoln.ac.uk/id/eprint/39636/}
    }
  • P. Baxter, E. Ashurst, R. Read, J. Kennedy, and T. Belpaeme, “Robot education peers in a situated primary school study: personalisation promotes child learning,” Plos one, 2017. doi:10.1371/journal.pone.0178126
    [BibTeX] [Abstract] [Download PDF]

    The benefit of social robots to support child learning in an educational context over an extended period of time is evaluated. Specifically, the effect of personalisation and adaptation of robot social behaviour is assessed. Two autonomous robots were embedded within two matched classrooms of a primary school for a continuous two week period without experimenter supervision to act as learning companions for the children for familiar and novel subjects. Results suggest that while children in both personalised and non-personalised conditions learned, there was increased child learning of a novel subject exhibited when interacting with a robot that personalised its behaviours, with indications that this benefit extended to other class-based performance. Additional evidence was obtained suggesting that there is increased acceptance of the personalised robot peer over a non-personalised version. These results provide the first evidence in support of peer-robot behavioural personalisation having a positive influence on learning when embedded in a learning environment for an extended period of time.

    @article{lincoln27582,
    month = {May},
    title = {Robot education peers in a situated primary school study: personalisation promotes child learning},
    author = {Paul Baxter and Emily Ashurst and Robin Read and James Kennedy and Tony Belpaeme},
    publisher = {Public Library of Science},
    year = {2017},
    doi = {10.1371/journal.pone.0178126},
    journal = {PLoS One},
    url = {http://eprints.lincoln.ac.uk/id/eprint/27582/},
    abstract = {The benefit of social robots to support child learning in an educational context over an extended period of time is evaluated. Specifically, the effect of personalisation and adaptation of robot social behaviour is assessed. Two autonomous robots were embedded within two matched classrooms of a primary school for a continuous two week period without experimenter supervision to act as learning companions for the children for familiar and novel subjects. Results suggest that while children in both personalised and non-personalised conditions learned, there was increased child learning of a novel subject exhibited when interacting with a robot that personalised its behaviours, with indications that this benefit extended to other class-based performance. Additional evidence was obtained suggesting that there is increased acceptance of the personalised robot peer over a non-personalised version. These results provide the first evidence in support of peer-robot behavioural personalisation having a positive influence on learning when embedded in a learning environment for an extended period of time.}
    }
  • Q. Fu and S. Yue, “Modeling direction selective visual neural network with on and off pathways for extracting motion cues from cluttered background,” in The 2017 international joint conference on neural networks (ijcnn 2017), 2017.
    [BibTeX] [Abstract] [Download PDF]

    Nature endows animals with robust vision systems for extracting and recognizing different motion cues, detecting predators, and chasing prey/mates in dynamic and cluttered environments. Direction selective neurons (DSNs), with a preference for visual stimuli of certain orientations, have been found in both vertebrates and invertebrates for decades. In this paper, with respect to recent biological research progress in motion-detecting circuitry, we propose a novel way to model DSNs for recognizing movements in four cardinal directions. It is based on an architecture of ON and OFF visual pathways that underlies a theory of splitting motion signals into parallel channels, encoding brightness increments and decrements separately. To enhance the edge selectivity and speed response to moving objects, we put forth a bio-plausible spatial-temporal network structure with multiple connections of same-polarity ON/OFF cells. Each pair-wise combination is filtered with a dynamic delay depending on sampling distance. The proposed vision system was challenged against image streams from both synthetic and cluttered real physical scenarios. The results demonstrated three major contributions: first, the neural network fulfilled the characteristics of a postulated physiological map of conveying visual information through different neuropile layers; second, the DSN model can extract useful directional motion cues from cluttered backgrounds robustly and in a timely manner, which hints at the potential of quick implementation in vision-based micro mobile robots; moreover, it also represents a better speed response compared to a state-of-the-art elementary motion detector. (A minimal ON/OFF correlator sketch follows this entry.)

    @inproceedings{lincoln26619,
    booktitle = {The 2017 International Joint Conference on Neural Networks (IJCNN 2017)},
    month = {May},
    title = {Modeling direction selective visual neural network with ON and OFF pathways for extracting motion cues from cluttered background},
    author = {Qinbing Fu and Shigang Yue},
    year = {2017},
    url = {http://eprints.lincoln.ac.uk/id/eprint/26619/},
    abstract = {Nature endows animals with robust vision systems for extracting and recognizing different motion cues, detecting predators, and chasing prey/mates in dynamic and cluttered environments. Direction selective neurons (DSNs), with a preference for visual stimuli of certain orientations, have been found in both vertebrates and invertebrates for decades. In this paper, with respect to recent biological research progress in motion-detecting circuitry, we propose a novel way to model DSNs for recognizing movements in four cardinal directions. It is based on an architecture of ON and OFF visual pathways that underlies a theory of splitting motion signals into parallel channels, encoding brightness increments and decrements separately. To enhance the edge selectivity and speed response to moving objects, we put forth a bio-plausible spatial-temporal network structure with multiple connections of same-polarity ON/OFF cells. Each pair-wise combination is filtered with a dynamic delay depending on sampling distance. The proposed vision system was challenged against image streams from both synthetic and cluttered real physical scenarios. The results demonstrated three major contributions: first, the neural network fulfilled the characteristics of a postulated physiological map of conveying visual information through different neuropile layers; second, the DSN model can extract useful directional motion cues from cluttered backgrounds robustly and in a timely manner, which hints at the potential of quick implementation in vision-based micro mobile robots; moreover, it also represents a better speed response compared to a state-of-the-art elementary motion detector.}
    }
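    The ON/OFF splitting with same-polarity pairings can be reduced to a minimal 1-D correlator (editor's sketch in the spirit of an elementary motion detector, not the paper's full network): brightness increments and decrements are separated into ON and OFF channels, and each channel is correlated with a delayed, spatially shifted copy of itself:

        # Minimal ON/OFF direction-selective correlator on a 1-D image row.
        import numpy as np

        def on_off(diff):
            # Split a temporal brightness change into increment/decrement channels.
            return np.maximum(diff, 0.0), np.maximum(-diff, 0.0)

        def rightward_response(f0, f1, f2, shift=1):
            on_d, off_d = on_off(f1 - f0)   # delayed signals
            on_c, off_c = on_off(f2 - f1)   # current signals
            resp = 0.0
            for d, c in ((on_d, on_c), (off_d, off_c)):  # same-polarity pairings
                resp += np.sum(d[:-shift] * c[shift:] - d[shift:] * c[:-shift])
            return resp  # positive for rightward motion, negative for leftward

        # A bright point stepping rightward across the row:
        frames = [np.zeros(10) for _ in range(3)]
        for t, f in enumerate(frames):
            f[2 + t] = 1.0
        print(rightward_response(*frames))  # > 0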
  • G. H. W. Gebhardt, K. Daun, M. Schnaubelt, A. Hendrich, D. Kauth, and G. Neumann, “Learning to assemble objects with a robot swarm,” in Proceedings of the 16th conference on autonomous agents and multiagent systems (aamas 17), 2017, p. 1547–1549.
    [BibTeX] [Abstract] [Download PDF]

    Large populations of simple robots can solve complex tasks, but controlling them is still a challenging problem due to limited communication and computation power. For the task of assembling objects, it has been shown that a human controller can solve such a task. Instead, we investigate how to learn the assembly of multiple objects with a single central controller. We propose splitting the assembly process into two sub-tasks: generating a top-level assembly policy and learning an object movement policy. The assembly policy plans the trajectories for each object and the object movement policy controls the trajectory execution. The resulting system is able to solve assembly tasks with varying object shapes being assembled, as shown in multiple simulation scenarios. (A schematic two-level sketch follows this entry.)

    @inproceedings{lincoln28089,
    month = {May},
    author = {Gregor H. W. Gebhardt and Kevin Daun and Marius Schnaubelt and Alexander Hendrich and Daniel Kauth and Gerhard Neumann},
    note = {Extended abstract},
    booktitle = {Proceedings of the 16th Conference on Autonomous Agents and MultiAgent Systems (AAMAS 17)},
    title = {Learning to assemble objects with a robot swarm},
    publisher = {International Foundation for Autonomous Agents and Multiagent Systems},
    pages = {1547--1549},
    year = {2017},
    url = {http://eprints.lincoln.ac.uk/id/eprint/28089/},
    abstract = {Large populations of simple robots can solve complex tasks, but controlling them is still a challenging problem due to limited communication and computation power. For the task of assembling objects, it has been shown that a human controller can solve such a task. Instead, we investigate how to learn the assembly of multiple objects with a single central controller. We propose splitting the assembly process into two sub-tasks: generating a top-level assembly policy and learning an object movement policy. The assembly policy plans the trajectories for each object and the object movement policy controls the trajectory execution. The resulting system is able to solve assembly tasks with varying object shapes being assembled, as shown in multiple simulation scenarios.}
    }
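    The split into the two sub-tasks can be pictured schematically (editor's sketch with hypothetical interfaces; the real system controls a swarm pushing physical objects): a top-level assembly policy plans per-object waypoints, and a lower-level movement policy realizes each waypoint:

        # Toy two-level controller for a single 1-D "object".
        def assembly_policy(objects, targets, steps=4):
            # Top level: straight-line waypoints from each object's position to its target.
            return {name: [pos + (targets[name] - pos) * k / steps
                           for k in range(1, steps + 1)]
                    for name, pos in objects.items()}

        def movement_policy(position, waypoint, gain=0.5):
            # Low level: proportional step of the (abstracted) swarm-pushed object.
            return position + gain * (waypoint - position)

        objects, targets = {"plank": 0.0}, {"plank": 2.0}
        position = objects["plank"]
        for waypoint in assembly_policy(objects, targets)["plank"]:
            while abs(waypoint - position) > 1e-3:
                position = movement_policy(position, waypoint)
        print(round(position, 3))  # ~2.0, the assembly target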
  • K. Goher and S. Fadlallah, “Design, modelling and control of a portable leg rehabilitation system,” Asme journal of dynamic systems, measurement and control, 2017. doi:10.1115/1.4035815
    [BibTeX] [Abstract] [Download PDF]

    In this work, a novel design of a portable leg rehabilitation system (PLRS) is presented. The main purpose of this paper is to provide a portable system which allows patients with lower limb disabilities to perform leg and foot rehabilitation exercises anywhere, without the embarrassment associated with devices that lack portability. The model of the system is identified by inverse kinematics and dynamics analysis. In the kinematics analysis, the pattern of motion of both leg and foot holders for different modes of operation has been investigated. The system is modeled by applying the Lagrangian dynamics approach. The derived mathematical model considers calf and foot masses and moments of inertia as important parameters. Therefore, a gait analysis study is conducted to calculate the parameters required to simulate the model. PD and PID controllers are applied to the model and compared. The PID controller optimized by the Hybrid Spiral-Dynamics Bacteria-Chemotaxis (HSDBC) algorithm provides the best response, with a reasonable settling time and minimal overshoot. The robustness of the HSDBC-PID controller is tested by applying disturbance forces with various amplitudes. A setup is built for the experimental validation of the system, where the mathematical model is compared with the model estimated using the System Identification Toolbox. A significant difference is observed between both models when applying the obtained HSDBC-PID controller to the mathematical model. The results of this experiment are used to update the controller parameters of the HSDBC-optimized PID. (A minimal PID loop sketch follows this entry.)

    @article{lincoln33056,
    month = {May},
    author = {Khaled Goher and Sulaiman Fadlallah},
    note = {The final published version of this article is available online at http://dynamicsystems.asmedigitalcollection.asme.org/article.aspx?articleid=2599257},
    title = {Design, Modelling and Control of a Portable Leg Rehabilitation System},
    publisher = {American Society of Mechanical Engineers},
    journal = {ASME Journal of Dynamic Systems, Measurement and Control},
    doi = {10.1115/1.4035815},
    year = {2017},
    url = {http://eprints.lincoln.ac.uk/id/eprint/33056/},
    abstract = {In this work, a novel design of a portable leg rehabilitation system (PLRS) is presented. The main purpose of this paper is to provide a portable system which allows patients with lower limb disabilities to perform leg and foot rehabilitation exercises anywhere, without the embarrassment associated with devices that lack portability. The model of the system is identified by inverse kinematics and dynamics analysis. In the kinematics analysis, the pattern of motion of both leg and foot holders for different modes of operation has been investigated. The system is modeled by applying the Lagrangian dynamics approach. The derived mathematical model considers calf and foot masses and moments of inertia as important parameters. Therefore, a gait analysis study is conducted to calculate the parameters required to simulate the model. PD and PID controllers are applied to the model and compared. The PID controller optimized by the Hybrid Spiral-Dynamics Bacteria-Chemotaxis (HSDBC) algorithm provides the best response, with a reasonable settling time and minimal overshoot. The robustness of the HSDBC-PID controller is tested by applying disturbance forces with various amplitudes. A setup is built for the experimental validation of the system, where the mathematical model is compared with the model estimated using the System Identification Toolbox. A significant difference is observed between both models when applying the obtained HSDBC-PID controller to the mathematical model. The results of this experiment are used to update the controller parameters of the HSDBC-optimized PID.}
    }
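    The control law being tuned can be shown with a minimal discrete PID loop (editor's sketch; the plant and gains are hypothetical stand-ins for the leg/foot dynamics and the HSDBC-optimized values):

        # Discrete PID loop driving a toy first-order plant to a setpoint.
        def simulate_pid(kp=2.0, ki=1.0, kd=0.1, setpoint=1.0, dt=0.01, steps=500):
            y, integral, prev_err = 0.0, 0.0, setpoint
            for _ in range(steps):
                err = setpoint - y
                integral += err * dt
                derivative = (err - prev_err) / dt
                u = kp * err + ki * integral + kd * derivative  # PID control law
                y += dt * (-y + u)  # toy plant: dy/dt = -y + u
                prev_err = err
            return y

        print(round(simulate_pid(), 3))  # settles toward the setpoint 1.0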
  • P. G. Esteban, P. Baxter, T. Belpaeme, E. Billing, H. Cai, H. Cao, M. Coeckelbergh, C. Costescu, D. David, A. D. Beir, Y. Fang, Z. Ju, J. Kennedy, H. Liu, A. Mazel, A. Pandey, K. Richardson, E. Senft, S. Thill, G. V. de Perre, B. Vanderborght, D. Vernon, H. Yu, and T. Ziemke, “How to build a supervised autonomous system for robot-enhanced therapy for children with autism spectrum disorder,” Paladyn, journal of behavioral robotics, vol. 8, iss. 1, 2017. doi:10.1515/pjbr-2017-0002
    [BibTeX] [Abstract] [Download PDF]

    Robot-Assisted Therapy (RAT) has successfully been used to improve social skills in children with autism spectrum disorders (ASD) through remote control of the robot in so-called Wizard of Oz (WoZ) paradigms. However, there is a need to increase the autonomy of the robot both to lighten the burden on human therapists (who have to remain in control and, importantly, supervise the robot) and to provide a consistent therapeutic experience. This paper seeks to provide insight into increasing the autonomy level of social robots in therapy to move beyond WoZ. With the final aim of improved human-human social interaction for the children, this multidisciplinary research seeks to facilitate the use of social robots as tools in clinical situations by addressing the challenge of increasing robot autonomy. We introduce the clinical framework in which the developments are tested, alongside initial data obtained from patients in a first phase of the project using a WoZ set-up mimicking the targeted supervised-autonomy behaviour. We further describe the implemented system architecture capable of providing the robot with supervised autonomy.

    @article{lincoln27519,
    volume = {8},
    number = {1},
    month = {May},
    author = {Pablo G. Esteban and Paul Baxter and Tony Belpaeme and Erik Billing and Haibin Cai and Hoang-Long Cao and Mark Coeckelbergh and Cristina Costescu and Daniel David and Albert De Beir and Yinfeng Fang and Zhaojie Ju and James Kennedy and Honghai Liu and Alexandre Mazel and Amit Pandey and Kathleen Richardson and Emmanuel Senft and Serge Thill and Greet Van de Perre and Bram Vanderborght and David Vernon and Hui Yu and Tom Ziemke},
    title = {How to build a supervised autonomous system for robot-enhanced therapy for children with autism spectrum disorder},
    publisher = {Springer/Versita with DeGruyter},
    year = {2017},
    journal = {Paladyn, Journal of Behavioral Robotics},
    doi = {10.1515/pjbr-2017-0002},
    url = {http://eprints.lincoln.ac.uk/id/eprint/27519/},
    abstract = {Robot-Assisted Therapy (RAT) has successfully been used to improve social skills in children with autism spectrum disorders (ASD) through remote control of the robot in so-called Wizard of Oz (WoZ) paradigms. However, there is a need to increase the autonomy of the robot both to lighten the burden on human therapists (who have to remain in control and, importantly, supervise the robot) and to provide a consistent therapeutic experience. This paper seeks to provide insight into increasing the autonomy level of social robots in therapy to move beyond WoZ. With the final aim of improved human-human social interaction for the children, this multidisciplinary research seeks to facilitate the use of social robots as tools in clinical situations by addressing the challenge of increasing robot autonomy. We introduce the clinical framework in which the developments are tested, alongside initial data obtained from patients in a first phase of the project using a WoZ set-up mimicking the targeted supervised-autonomy behaviour. We further describe the implemented system architecture capable of providing the robot with supervised autonomy.}
    }
  • P. Busato, A. Sopegno, R. Berruto, D. Bochtis, and A. Calvo, “A web-based tool for energy balance estimation in multiple-crops production systems,” Sustainability, vol. 9, iss. 5, p. 789, 2017. doi:10.3390/su9050789
    [BibTeX] [Abstract] [Download PDF]

    Biomass produ