Publications

Download the BibTeX file of all L-CAS publications

2021

  • S. D. Mohan, F. J. Davis, A. Badiee, P. Hadley, C. A. Twitchen, and S. Pearson, “Optical and thermal properties of commercial polymer film, modeling the albedo effect,” Journal of applied polymer science, vol. 138, iss. 24, p. 50581, 2021. doi:10.1002/app.50581
    [BibTeX] [Abstract] [Download PDF]

    Greenhouse cladding materials are an important part of greenhouse design. The cladding material controls the light transmission and distribution over the plants within the greenhouse, thereby exerting a major influence on the overall yield. Greenhouse claddings are typically translucent materials offering more diffusive transmission than reflection; however, the reflective properties of the films offer a potential route to increasing the surface albedo of the local environment. We model thermal properties by modeling the films based on their optical transmissions and reflections. We can use this data to estimate their albedo and determine the amount of short wave radiation that will be transmitted/reflected/blocked by the materials and how it can influence the local environment.

    @article{lincoln44141,
    volume = {138},
    number = {24},
    month = {June},
    author = {Saeed D Mohan and Fred J Davis and Amir Badiee and Paul Hadley and Carrie A Twitchen and Simon Pearson},
    title = {Optical and thermal properties of commercial polymer film, modeling the albedo effect},
    publisher = {Wiley},
    year = {2021},
    journal = {Journal of Applied Polymer Science},
    doi = {10.1002/app.50581},
    pages = {50581},
    url = {https://eprints.lincoln.ac.uk/id/eprint/44141/},
    abstract = {Greenhouse cladding materials are an important part of greenhouse design. The cladding material controls the light transmission and distribution over the plants within the greenhouse, thereby exerting a major influence on the overall yield. Greenhouse claddings are typically translucent materials offering more diffusive transmission than reflection; however, the reflective properties of the films offer a potential route to increasing the surface albedo of the local environment. We model thermal properties by modeling the films based on their optical transmissions and reflections. We can use this data to estimate their albedo and determine the amount of short wave radiation that will be transmitted/reflected/blocked by the materials and how it can influence the local environment.}
    }
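
    The energy bookkeeping this abstract describes can be made concrete with a minimal sketch (not the authors' model): weight a film's measured transmittance and reflectance spectra by solar irradiance to estimate the transmitted, reflected (albedo) and absorbed fractions of shortwave radiation. All spectra and weights below are illustrative made-up numbers.

    import numpy as np

    # Hypothetical measured spectra for a polymer film (wavelengths in nm).
    wavelengths = np.array([400, 500, 600, 700, 800, 900, 1000])
    transmittance = np.array([0.85, 0.87, 0.88, 0.88, 0.86, 0.84, 0.82])
    reflectance = np.array([0.10, 0.09, 0.08, 0.08, 0.09, 0.10, 0.11])
    # Illustrative relative solar irradiance at those wavelengths.
    irradiance = np.array([1.2, 1.5, 1.4, 1.3, 1.1, 0.9, 0.7])

    def shortwave_fractions(tau, rho, weight):
        """Irradiance-weighted fractions of shortwave radiation that the
        film transmits, reflects (its albedo contribution) and absorbs."""
        w = weight / weight.sum()
        transmitted = float(np.sum(w * tau))
        reflected = float(np.sum(w * rho))  # effective film albedo
        return transmitted, reflected, 1.0 - transmitted - reflected

    t, r, a = shortwave_fractions(transmittance, reflectance, irradiance)
    print(f"transmitted={t:.2f} reflected(albedo)={r:.2f} absorbed={a:.2f}")
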
  • J. C. Mayoral, L. Grimstad, P. J. From, and G. Cielniak, “Integration of a human-aware risk-based braking system into an open-field mobile robot,” in IEEE International Conference on Robotics and Automation (ICRA), 2021.
    [BibTeX] [Abstract] [Download PDF]

    Safety integration components for robotic applications are a mandatory feature for any autonomous mobile application, including human avoidance behaviors. This paper proposes a novel parametrizable scene risk evaluator for open-field applications that uses human motion predictions and pre-defined hazard zones to estimate a braking factor. Parameter optimization uses simulated data. The evaluation is carried out in simulated and real-time scenarios, showing the impact of human predictions in favor of risk reductions in agricultural applications.

    @inproceedings{lincoln44427,
    booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
    month = {May},
    title = {Integration of a Human-aware Risk-based Braking System into an Open-Field Mobile Robot},
    author = {Jose C. Mayoral and Lars Grimstad and P{\r a}l J. From and Grzegorz Cielniak},
    publisher = {IEEE},
    year = {2021},
    url = {https://eprints.lincoln.ac.uk/id/eprint/44427/},
    abstract = {Safety integration components for robotic applications are a mandatory feature for any autonomous mobile application, including human avoidance behaviors. This paper proposes a novel parametrizable scene risk evaluator for open-field applications that uses human motion predictions and pre-defined hazard zones to estimate a braking factor. Parameter optimization uses simulated data. The evaluation is carried out in simulated and real-time scenarios, showing the impact of human predictions in favor of risk reductions in agricultural applications.}
    }
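
    The braking-factor mapping described above can be sketched as follows; the zone radii, the linear slow-down rule and every name here are illustrative assumptions, not the parametrisation optimised in the paper.

    import numpy as np

    def braking_factor(predicted_positions, robot_pos,
                       hazard_radius=1.0, slow_radius=3.0):
        """Map predicted human positions to a braking factor in [0, 1]:
        1.0 = full stop inside the hazard zone, a linear slow-down in
        between, 0.0 = no braking beyond the slow-down zone."""
        dists = np.linalg.norm(predicted_positions - robot_pos, axis=1)
        d = dists.min()  # closest predicted approach over the horizon
        if d <= hazard_radius:
            return 1.0
        if d >= slow_radius:
            return 0.0
        return (slow_radius - d) / (slow_radius - hazard_radius)

    # A human predicted to pass 1.8 m from the robot within the horizon.
    prediction = np.array([[2.5, 0.5], [2.0, 0.3], [1.8, 0.0]])
    print(braking_factor(prediction, robot_pos=np.array([0.0, 0.0])))  # 0.6
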
  • N. Wagner, R. Kirk, M. Hanheide, and G. Cielniak, “Efficient and robust orientation estimation of strawberries for fruit picking applications,” in IEEE International Conference on Robotics and Automation (ICRA), 2021.
    [BibTeX] [Abstract] [Download PDF]

    Recent developments in agriculture have highlighted the potential of as well as the need for the use of robotics. Various processes in this field can benefit from the proper use of state of the art technology [1], in terms of efficiency as well as quality. One of these areas is the harvesting of ripe fruit. In order to be able to automate this process, a robotic harvester needs to be aware of the full poses of the crop/fruit to be collected in order to perform proper path- and collision planning. The current state of the art mainly considers problems of detection and segmentation of fruit with localisation limited to the 3D position only. The reliable and real-time estimation of the respective orientations remains a mostly unaddressed problem. In this paper, we present a compact and efficient network architecture for estimating the orientation of soft fruit such as strawberries from colour and, optionally, depth images. The proposed system can be automatically trained in a realistic simulation environment. We evaluate the system's performance on simulated datasets and validate its operation on publicly available images of strawberries to demonstrate its practical use. Depending on the amount of training data used, coverage of state space, as well as the availability of RGB-D or RGB data only, mean errors as low as 11° could be achieved.

    @inproceedings{lincoln44426,
    booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
    month = {May},
    title = {Efficient and Robust Orientation Estimation of Strawberries for Fruit Picking Applications},
    author = {Nikolaus Wagner and Raymond Kirk and Marc Hanheide and Grzegorz Cielniak},
    publisher = {IEEE},
    year = {2021},
    url = {https://eprints.lincoln.ac.uk/id/eprint/44426/},
    abstract = {Recent developments in agriculture have highlighted the potential of as well as the need for the use of robotics. Various processes in this field can benefit from the proper use of state of the art technology [1], in terms of efficiency as well as quality. One of these areas is the harvesting of ripe fruit. In order to be able to automate this process, a robotic harvester needs to be aware of the full poses of the crop/fruit to be collected in order to perform proper path- and collision planning. The current state of the art mainly considers problems of detection and segmentation of fruit with localisation limited to the 3D position only. The reliable and real-time estimation of the respective orientations remains a mostly unaddressed problem. In this paper, we present a compact and efficient network architecture for estimating the orientation of soft fruit such as strawberries from colour and, optionally, depth images. The proposed system can be automatically trained in a realistic simulation environment. We evaluate the system's performance on simulated datasets and validate its operation on publicly available images of strawberries to demonstrate its practical use. Depending on the amount of training data used, coverage of state space, as well as the availability of RGB-D or RGB data only, mean errors as low as 11° could be achieved.}
    }
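
    As a rough illustration of the kind of compact orientation-regression network the abstract describes, this PyTorch sketch predicts the (sin, cos) of an in-plane angle from an image crop, which sidesteps angle wraparound. The architecture and all sizes are invented, not the authors'.

    import torch
    import torch.nn as nn

    class OrientationNet(nn.Module):
        """Tiny CNN regressing a fruit's orientation from an RGB(-D) crop."""
        def __init__(self, in_channels=3):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(in_channels, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1))
            self.head = nn.Linear(64, 2)  # (sin, cos) of the angle

        def forward(self, x):
            out = self.head(self.features(x).flatten(1))
            return out / out.norm(dim=1, keepdim=True)  # project to unit circle

    sin_cos = OrientationNet()(torch.randn(1, 3, 64, 64))  # dummy crop
    print(torch.atan2(sin_cos[:, 0], sin_cos[:, 1]))       # angle in radians
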
  • N. Andreakos, S. Yue, and V. Cutsuridis, “Quantitative investigation of memory recall performance of a computational microcircuit model of the hippocampus,” Brain informatics, vol. 8, iss. 9, 2021. doi:10.1186/s40708-021-00131-7
    [BibTeX] [Abstract] [Download PDF]

    Memory, the process of encoding, storing, and maintaining information over time in order to influence future actions, is very important in our lives. Losing it comes at a great cost. Deciphering the biophysical mechanisms leading to recall improvement should thus be of utmost importance. In this study we embarked on the quest to improve computationally the recall performance of a bio-inspired microcircuit model of the mammalian hippocampus, a brain region responsible for the storage and recall of short-term declarative memories. The model consisted of excitatory and inhibitory cells. The cell properties followed closely what is currently known from the experimental neurosciences. Cells' firing was timed to a theta oscillation paced by two distinct neuronal populations exhibiting highly regular bursting activity, one tightly coupled to the trough and the other to the peak of theta. An excitatory input provided to excitatory cells context and timing information for retrieval of previously stored memory patterns. Inhibition to excitatory cells acted as a non-specific global threshold machine that removed spurious activity during recall. To systematically evaluate the model's recall performance against stored patterns, pattern overlap, network size and active cells per pattern, we selectively modulated feedforward and feedback excitatory and inhibitory pathways targeting specific excitatory and inhibitory cells. Of the different model variations (modulated pathways) tested, 'model 1' recall quality was excellent across all conditions. 'Model 2' recall was the worst. The number of 'active cells' representing a memory pattern was the determining factor in improving the model's recall performance regardless of the number of stored patterns and overlap between them. As 'active cells per pattern' decreased, the model's memory capacity increased, interference effects between stored patterns decreased, and recall quality improved.

    @article{lincoln44717,
    volume = {8},
    number = {9},
    month = {May},
    author = {Nikolas Andreakos and Shigang Yue and Vassilis Cutsuridis},
    title = {Quantitative Investigation of Memory Recall Performance of a Computational Microcircuit Model of the Hippocampus},
    publisher = {SpringerOpen},
    year = {2021},
    journal = {Brain Informatics},
    doi = {10.1186/s40708-021-00131-7},
    url = {https://eprints.lincoln.ac.uk/id/eprint/44717/},
    abstract = {Memory, the process of encoding, storing, and maintaining information over time in order to influence future actions, is very important in our lives. Losing it comes at a great cost. Deciphering the biophysical mechanisms leading to recall improvement should thus be of utmost importance. In this study we embarked on the quest to improve computationally the recall performance of a bio-inspired microcircuit model of the mammalian hippocampus, a brain region responsible for the storage and recall of short-term declarative memories. The model consisted of excitatory and inhibitory cells. The cell properties followed closely what is currently known from the experimental neurosciences. Cells' firing was timed to a theta oscillation paced by two distinct neuronal populations exhibiting highly regular bursting activity, one tightly coupled to the trough and the other to the peak of theta. An excitatory input provided to excitatory cells context and timing information for retrieval of previously stored memory patterns. Inhibition to excitatory cells acted as a non-specific global threshold machine that removed spurious activity during recall. To systematically evaluate the model's recall performance against stored patterns, pattern overlap, network size and active cells per pattern, we selectively modulated feedforward and feedback excitatory and inhibitory pathways targeting specific excitatory and inhibitory cells. Of the different model variations (modulated pathways) tested, 'model 1' recall quality was excellent across all conditions. 'Model 2' recall was the worst. The number of 'active cells' representing a memory pattern was the determining factor in improving the model's recall performance regardless of the number of stored patterns and overlap between them. As 'active cells per pattern' decreased, the model's memory capacity increased, interference effects between stored patterns decreased, and recall quality improved.}
    }
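
    Recall quality in this kind of study is typically scored by comparing the cells active at recall against the stored pattern; the sketch below uses a normalised correlation between the two binary vectors, which is one plausible metric rather than necessarily the paper's exact measure.

    import numpy as np

    def recall_quality(recalled, stored):
        """Correlation between the binary vector of cells active at
        recall and the stored pattern (1 = perfect recall)."""
        r = np.asarray(recalled, float) - np.mean(recalled)
        s = np.asarray(stored, float) - np.mean(stored)
        den = np.linalg.norm(r) * np.linalg.norm(s)
        return float(r @ s / den) if den else 0.0

    stored   = np.array([1, 0, 1, 0, 0, 1, 0, 0])
    recalled = np.array([1, 0, 1, 0, 1, 1, 0, 0])  # one spurious cell
    print(round(recall_quality(recalled, stored), 3))
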
  • F. Camara, P. Dickinson, and C. Fox, “Evaluating pedestrian interaction preferences with a game theoretic autonomous vehicle in virtual reality,” Transportation Research Part F, vol. 78, p. 410–423, 2021. doi:10.1016/j.trf.2021.02.017
    [BibTeX] [Abstract] [Download PDF]

    Localisation and navigation of autonomous vehicles (AVs) in static environments are now solved problems, but how to control their interactions with other road users in mixed traffic environments, especially with pedestrians, remains an open question. Recent work has begun to apply game theory to model and control AV-pedestrian interactions as they compete for space on the road whilst trying to avoid collisions. But this game theory model has been developed only in unrealistic lab environments. To improve their realism, this study empirically examines pedestrian behaviour during road crossing in the presence of approaching autonomous vehicles in more realistic virtual reality (VR) environments. The autonomous vehicles are controlled using game theory, and this study seeks to find the best parameters for these controls to produce comfortable interactions for the pedestrians. In a first experiment, participants' trajectories reveal a more cautious crossing behaviour in VR than in previous laboratory experiments. In two further experiments, a gradient descent approach is used to investigate participants' preference for AV driving style. The results show that the majority of participants were not expecting the AV to stop in some scenarios, and there was no change in their crossing behaviour in two environments and with different car models suggestive of car and last-mile style vehicles. These results provide some initial estimates for game theoretic parameters needed by future AVs in their pedestrian interactions and more generally show how such parameters can be inferred from virtual reality experiments.

    @article{lincoln44566,
    volume = {78},
    month = {April},
    author = {Fanta Camara and Patrick Dickinson and Charles Fox},
    title = {Evaluating Pedestrian Interaction Preferences with a Game Theoretic Autonomous Vehicle in Virtual Reality},
    publisher = {Elsevier},
    year = {2021},
    journal = {Transportation Research Part F},
    doi = {10.1016/j.trf.2021.02.017},
    pages = {410--423},
    url = {https://eprints.lincoln.ac.uk/id/eprint/44566/},
    abstract = {Localisation and navigation of autonomous vehicles (AVs) in static environments are now solved problems, but how to control their interactions with other road users in mixed traffic environments, especially with pedestrians, remains an open question. Recent work has begun to apply game theory to model and control AV-pedestrian interactions as they compete for space on the road whilst trying to avoid collisions. But this game theory model has been developed only in unrealistic lab environments. To improve their realism, this study empirically examines pedestrian behaviour during road crossing in the presence of approaching autonomous vehicles in more realistic virtual reality (VR) environments. The autonomous vehicles are controlled using game theory, and this study seeks to find the best parameters for these controls to produce comfortable interactions for the pedestrians. In a first experiment, participants' trajectories reveal a more cautious crossing behaviour in VR than in previous laboratory experiments. In two further experiments, a gradient descent approach is used to investigate participants' preference for AV driving style. The results show that the majority of participants were not expecting the AV to stop in some scenarios, and there was no change in their crossing behaviour in two environments and with different car models suggestive of car and last-mile style vehicles. These results provide some initial estimates for game theoretic parameters needed by future AVs in their pedestrian interactions and more generally show how such parameters can be inferred from virtual reality experiments.}
    }
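
    The gradient-descent loop used to home in on a preferred driving style can be sketched generically; the quadratic 'discomfort' stand-in for participant feedback, the optimum at 0.6 and the step sizes are all hypothetical.

    def discomfort(theta):
        """Hypothetical stand-in for participant feedback: discomfort
        is lowest at a preferred driving-style parameter theta = 0.6."""
        return (theta - 0.6) ** 2

    def tune_parameter(theta=0.0, lr=0.4, eps=1e-3, steps=25):
        # Finite-difference gradient descent, since real participant
        # feedback gives no analytic gradient.
        for _ in range(steps):
            grad = (discomfort(theta + eps) - discomfort(theta - eps)) / (2 * eps)
            theta -= lr * grad
        return theta

    print(round(tune_parameter(), 3))  # converges towards 0.6
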
  • A. S. Gomez, E. Aptoula, S. Parsons, and P. Bosilj, “Deep regression versus detection for counting in robotic phenotyping,” IEEE Robotics and Automation Letters, vol. 6, iss. 2, p. 2902–2907, 2021. doi:10.1109/LRA.2021.3062586
    [BibTeX] [Abstract] [Download PDF]

    Work in robotic phenotyping requires computer vision methods that estimate the number of fruit or grains in an image. To decide what to use, we compared three methods for counting fruit and grains, each method representative of a class of approaches from the literature. These are two methods based on density estimation and regression (single and multiple column), and one method based on object detection. We found that when the density of objects in an image is low, the approaches are comparable, but as the density increases, counting by regression becomes steadily more accurate than counting by detection. With more than a hundred objects per image, the error in the count predicted by detection-based methods is up to 5 times higher than when using regression-based ones.

    @article{lincoln44001,
    volume = {6},
    number = {2},
    month = {April},
    author = {Adrian Salazar Gomez and E Aptoula and Simon Parsons and Petra Bosilj},
    title = {Deep Regression versus Detection for Counting in Robotic Phenotyping},
    publisher = {IEEE},
    year = {2021},
    journal = {IEEE Robotics and Automation Letters},
    doi = {10.1109/LRA.2021.3062586},
    pages = {2902--2907},
    url = {https://eprints.lincoln.ac.uk/id/eprint/44001/},
    abstract = {Work in robotic phenotyping requires computer vision methods that estimate the number of fruit or grains in an image. To decide what to use, we compared three methods for counting fruit and grains, each method representative of a class of approaches from the literature. These are two methods based on density estimation and regression (single and multiple column), and one method based on object detection. We found that when the density of objects in an image is low, the approaches are comparable, but as the density increases, counting by regression becomes steadily more accurate than counting by detection. With more than a hundred objects per image, the error in the count predicted by detection-based methods is up to 5 times higher than when using regression-based ones.}
    }
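
    At inference time the two counting strategies compared here reduce to very different aggregations, which this sketch contrasts on fabricated outputs: the detector "misses" overlapping objects while the density map still integrates to roughly the right count.

    import numpy as np

    def count_by_regression(density_map):
        """Density-based count: integrate the predicted per-pixel density."""
        return float(np.sum(density_map))

    def count_by_detection(scores, threshold=0.5):
        """Detection-based count: detections above a confidence threshold
        (degrades when objects overlap at high densities)."""
        return int(np.sum(np.asarray(scores) > threshold))

    # Fabricated network outputs for an image containing 120 objects:
    density_map = np.full((64, 64), 120 / (64 * 64))  # integrates to ~120
    scores = np.concatenate([np.full(95, 0.9), np.full(25, 0.3)])  # 25 suppressed
    print(round(count_by_regression(density_map)), count_by_detection(scores))
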
  • N. Dethlefs, A. Schoene, and H. Cuayahuitl, “A divide-and-conquer approach to neural natural language generation from structured data,” Neurocomputing, vol. 433, p. 300–309, 2021. doi:10.1016/j.neucom.2020.12.083
    [BibTeX] [Abstract] [Download PDF]

    Current approaches that generate text from linked data for complex real-world domains can face problems including rich and sparse vocabularies as well as learning from examples of long varied sequences. In this article, we propose a novel divide-and-conquer approach that automatically induces a hierarchy of 'generation spaces' from a dataset of semantic concepts and texts. Generation spaces are based on a notion of similarity of partial knowledge graphs that represent the domain and feed into a hierarchy of sequence-to-sequence or memory-to-sequence learners for concept-to-text generation. An advantage of our approach is that learning models are exposed to the most relevant examples during training which can avoid bias towards majority samples. We evaluate our approach on two common benchmark datasets and compare our hierarchical approach against a flat learning setup. We also conduct a comparison between sequence-to-sequence and memory-to-sequence learning models. Experiments show that our hierarchical approach overcomes issues of data sparsity and learns robust lexico-syntactic patterns, consistently outperforming flat baselines and previous work by up to 30%. We also find that while memory-to-sequence models can outperform sequence-to-sequence models in some cases, the latter are generally more stable in their performance and represent a safer overall choice.

    @article{lincoln43748,
    volume = {433},
    month = {April},
    author = {Nina Dethlefs and Annika Schoene and Heriberto Cuayahuitl},
    title = {A Divide-and-Conquer Approach to Neural Natural Language Generation from Structured Data},
    publisher = {Elsevier},
    year = {2021},
    journal = {Neurocomputing},
    doi = {10.1016/j.neucom.2020.12.083},
    pages = {300--309},
    url = {https://eprints.lincoln.ac.uk/id/eprint/43748/},
    abstract = {Current approaches that generate text from linked data for complex real-world domains can face problems including rich and sparse vocabularies as well as learning from examples of long varied sequences. In this article, we propose a novel divide-and-conquer approach that automatically induces a hierarchy of 'generation spaces' from a dataset of semantic concepts and texts. Generation spaces are based on a notion of similarity of partial knowledge graphs that represent the domain and feed into a hierarchy of sequence-to-sequence or memory-to-sequence learners for concept-to-text generation. An advantage of our approach is that learning models are exposed to the most relevant examples during training which can avoid bias towards majority samples. We evaluate our approach on two common benchmark datasets and compare our hierarchical approach against a flat learning setup. We also conduct a comparison between sequence-to-sequence and memory-to-sequence learning models. Experiments show that our hierarchical approach overcomes issues of data sparsity and learns robust lexico-syntactic patterns, consistently outperforming flat baselines and previous work by up to 30\%. We also find that while memory-to-sequence models can outperform sequence-to-sequence models in some cases, the latter are generally more stable in their performance and represent a safer overall choice.}
    }
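
    One plausible reading of inducing 'generation spaces' from similar partial knowledge graphs is a greedy similarity grouping like the sketch below; Jaccard similarity over triple sets and the 0.5 threshold are assumptions, not necessarily the paper's notion of similarity.

    def jaccard(g1, g2):
        """Similarity of two partial knowledge graphs as sets of triples."""
        return len(g1 & g2) / len(g1 | g2)

    def induce_spaces(graphs, threshold=0.5):
        """Greedily group graphs into 'generation spaces': each graph joins
        the first space whose representative is similar enough, otherwise
        it starts a new space (one learner is then trained per space)."""
        spaces = []
        for g in graphs:
            for space in spaces:
                if jaccard(g, space[0]) >= threshold:
                    space.append(g)
                    break
            else:
                spaces.append([g])
        return spaces

    g1 = {("Lincoln", "type", "city"), ("Lincoln", "country", "UK")}
    g2 = {("Lincoln", "type", "city"), ("Lincoln", "country", "UK"),
          ("Lincoln", "population", "104k")}
    g3 = {("pasta", "type", "food")}
    print(len(induce_spaces([g1, g2, g3])))  # -> 2 generation spaces
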
  • F. Yang, L. Shu, Y. Yang, G. Han, S. Pearson, and K. Li, “Optimal deployment of solar insecticidal lamps over constrained locations in mixed-crop farmlands,” IEEE Internet of Things Journal, 2021. doi:10.1109/JIOT.2021.3064043
    [BibTeX] [Abstract] [Download PDF]

    Solar Insecticidal Lamps (SILs) play a vital role in green prevention and control of pests. By embedding SILs in Wireless Sensor Networks (WSNs), we establish a novel agricultural Internet of Things (IoT), referred to as the SILIoTs. In practice, the deployment of SIL nodes is determined by the geographical characteristics of an actual farmland, the constraints on the locations of SIL nodes, and the radio-wave propagation in complex agricultural environment. In this paper, we mainly focus on the constrained SIL Deployment Problem (cSILDP) in a mixed-crop farmland, where the locations used to deploy SIL nodes are a limited set of candidates located on the ridges. We formulate the cSILDP in this scenario as a Connected Set Cover (CSC) problem, and propose a Hole Aware Node Deployment Method (HANDM) based on the greedy algorithm to solve the constrained optimization problem. The HANDM is a two-phase method. In the first phase, a novel deployment strategy is utilised to guarantee only a single coverage hole in each iteration, based on which a set of suboptimal locations is found for the deployment of SIL nodes. In the second phase, according to the operations of deletion and fusion, the optimal locations are obtained to meet the requirements on complete coverage and connectivity. Experimental results show that our proposed method achieves better performance than the peer algorithms, specifically in terms of deployment cost.

    @article{lincoln44192,
    month = {March},
    title = {Optimal Deployment of Solar Insecticidal Lamps over Constrained Locations in Mixed-Crop Farmlands},
    author = {Fan Yang and Lei Shu and Yuli Yang and Guangjie Han and Simon Pearson and Kailiang Li},
    publisher = {IEEE},
    year = {2021},
    doi = {10.1109/JIOT.2021.3064043},
    journal = {IEEE Internet of Things Journal},
    url = {https://eprints.lincoln.ac.uk/id/eprint/44192/},
    abstract = {Solar Insecticidal Lamps (SILs) play a vital role in green prevention and control of pests. By embedding SILs in Wireless Sensor Networks (WSNs), we establish a novel agricultural Internet of Things (IoT), referred to as the SILIoTs. In practice, the deployment of SIL nodes is determined by the geographical characteristics of an actual farmland, the constraints on the locations of SIL nodes, and the radio-wave propagation in complex agricultural environment. In this paper, we mainly focus on the constrained SIL Deployment Problem (cSILDP) in a mixed-crop farmland, where the locations used to deploy SIL nodes are a limited set of candidates located on the ridges. We formulate the cSILDP in this scenario as a Connected Set Cover (CSC) problem, and propose a Hole Aware Node Deployment Method (HANDM) based on the greedy algorithm to solve the constrained optimization problem. The HANDM is a two-phase method. In the first phase, a novel deployment strategy is utilised to guarantee only a single coverage hole in each iteration, based on which a set of suboptimal locations is found for the deployment of SIL nodes. In the second phase, according to the operations of deletion and fusion, the optimal locations are obtained to meet the requirements on complete coverage and connectivity. Experimental results show that our proposed method achieves better performance than the peer algorithms, specifically in terms of deployment cost.}
    }
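
    HANDM itself is a two-phase method; the sketch below shows only the generic greedy connected-set-cover core that the problem formulation rests on, with invented candidate locations, coverage sets and communication range.

    import numpy as np

    def greedy_connected_cover(candidates, cover_sets, comm_range, targets):
        """Repeatedly add the candidate covering the most uncovered targets,
        restricted (after the first pick) to candidates within communication
        range of an already-chosen node, so the network stays connected."""
        uncovered, chosen = set(targets), []
        while uncovered:
            feasible = [i for i in range(len(candidates)) if i not in chosen
                        and (not chosen or any(
                            np.linalg.norm(candidates[i] - candidates[j]) <= comm_range
                            for j in chosen))]
            if not feasible:
                break
            best = max(feasible, key=lambda i: len(cover_sets[i] & uncovered))
            if not cover_sets[best] & uncovered:
                break  # no feasible candidate still helps
            chosen.append(best)
            uncovered -= cover_sets[best]
        return chosen

    candidates = [np.array([0, 0]), np.array([5, 0]), np.array([10, 0])]
    cover_sets = [{1, 2}, {3, 4}, {5}]  # pest hotspots each site covers
    print(greedy_connected_cover(candidates, cover_sets, 6.0, {1, 2, 3, 4, 5}))
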
  • P. McBurney and S. Parsons, “Argument schemes and dialogue protocols: Doug Walton’s legacy in artificial intelligence,” Journal of applied logics, vol. 8, iss. 1, p. 263–286, 2021.
    [BibTeX] [Abstract] [Download PDF]

    This paper is intended to honour the memory of Douglas Walton (1942–2020), a Canadian philosopher of argumentation who died in January 2020. Walton’s contributions to argumentation theory have had a very strong influence on Artificial Intelligence (AI), particularly in the design of autonomous software agents able to reason and argue with one another, and in the design of protocols to govern such interactions. In this paper, we explore two of these contributions – argumentation schemes and dialogue protocols – by discussing how they may be applied to a pressing current research challenge in AI: the automated assessment of explanations for automated decision-making systems.

    @article{lincoln43751,
    volume = {8},
    number = {1},
    month = {February},
    author = {Peter McBurney and Simon Parsons},
    title = {Argument Schemes and Dialogue Protocols: Doug Walton's legacy in artificial intelligence},
    publisher = {College Publications},
    year = {2021},
    journal = {Journal of Applied Logics},
    pages = {263--286},
    url = {https://eprints.lincoln.ac.uk/id/eprint/43751/},
    abstract = {This paper is intended to honour the memory of Douglas Walton (1942--2020), a Canadian philosopher of argumentation who died in January 2020. Walton's contributions to argumentation theory have had a very strong influence on Artificial Intelligence (AI), particularly in the design of autonomous software agents able to reason and argue with one another, and in the design of protocols to govern such interactions. In this paper, we explore two of these contributions --- argumentation schemes and dialogue protocols --- by discussing how they may be applied to a pressing current research challenge in AI: the automated assessment of explanations for automated decision-making systems.}
    }
  • A. Seddaoui and M. C. Saaj, “Collision-free optimal trajectory generation for a space robot using genetic algorithm,” Acta astronautica, vol. 179, p. 311–321, 2021. doi:10.1016/j.actaastro.2020.11.001
    [BibTeX] [Abstract] [Download PDF]

    Future on-orbit servicing and assembly missions will require space robots capable of manoeuvring safely around their target. Several challenges arise when modelling, controlling and planning the motion of such systems, therefore, new methodologies are required. A safe approach towards the grasping point implies that the space robot must be able to use the additional degrees of freedom offered by the spacecraft base to aid the arm attain the target and avoid collisions and singularities. The controlled-floating space robot possesses this particularity of motion and will be utilised in this paper to design an optimal path generator. The path generator, based on a Genetic Algorithm, takes advantage of the dynamic coupling effect and the controlled motion of the spacecraft base to safely attain the target. It aims to minimise several objectives whilst satisfying multiple constraints. The key feature of this new path generator is that it requires only the Cartesian position of the point to grasp as an input, without prior knowledge of a desired path. The results presented originate from the trajectory tracking using a nonlinear adaptive controller.

    @article{lincoln43074,
    volume = {179},
    month = {February},
    author = {Asma Seddaoui and Mini Chakravarthini Saaj},
    note = {The paper is the outcome of a PhD I supervised at University of Surrey.},
    title = {Collision-free optimal trajectory generation for a space robot using genetic algorithm},
    publisher = {Elsevier},
    year = {2021},
    journal = {Acta Astronautica},
    doi = {10.1016/j.actaastro.2020.11.001},
    pages = {311--321},
    url = {https://eprints.lincoln.ac.uk/id/eprint/43074/},
    abstract = {Future on-orbit servicing and assembly missions will require space robots capable of manoeuvring safely around their target. Several challenges arise when modelling, controlling and planning the motion of such systems, therefore, new methodologies are required. A safe approach towards the grasping point implies that the space robot must be able to use the additional degrees of freedom offered by the spacecraft base to aid the arm attain the target and avoid collisions and singularities. The controlled-floating space robot possesses this particularity of motion and will be utilised in this paper to design an optimal path generator. The path generator, based on a Genetic Algorithm, takes advantage of the dynamic coupling effect and the controlled motion of the spacecraft base to safely attain the target. It aims to minimise several objectives whilst satisfying multiple constraints. The key feature of this new path generator is that it requires only the Cartesian position of the point to grasp as an input, without prior knowledge of a desired path. The results presented originate from the trajectory tracking using a nonlinear adaptive controller.}
    }
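
    A genetic algorithm for this style of trajectory generation can be sketched in a few lines: individuals encode waypoints, and fitness penalises path length plus intrusion into a keep-out zone. Everything here (a 2-D point robot instead of a spacecraft-mounted arm, the penalty weight, the GA operators) is a deliberate simplification for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    OBSTACLE, R = np.array([0.5, 0.5]), 0.2         # spherical keep-out zone
    START, GOAL = np.array([0.0, 0.0]), np.array([1.0, 1.0])

    def cost(individual):
        """Path length plus a penalty for entering the keep-out zone."""
        pts = np.vstack([START, individual.reshape(-1, 2), GOAL])
        length = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
        depth = np.maximum(0, R - np.linalg.norm(pts - OBSTACLE, axis=1))
        return length + 50.0 * np.sum(depth)

    pop = rng.uniform(-0.5, 1.5, size=(40, 6))      # 3 waypoints per individual
    for _ in range(100):                            # select, cross over, mutate
        pop = pop[np.argsort([cost(p) for p in pop])]
        parents = pop[:20]
        children = (parents[rng.integers(0, 20, 20)]
                    + parents[rng.integers(0, 20, 20)]) / 2
        pop = np.vstack([parents, children + rng.normal(0, 0.05, children.shape)])
    print(round(cost(min(pop, key=cost)), 3))       # cost of the best path found
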
  • M. Lujak, E. I. Sklar, and F. Semet, “Agriculture fleet vehicle routing: a decentralised and dynamic problem,” AI Communications, vol. 34, iss. 1, p. 55–71, 2021. doi:10.3233/AIC-201581
    [BibTeX] [Abstract] [Download PDF]

    To date, the research on agriculture vehicles in general and Agriculture Mobile Robots (AMRs) in particular has focused on a single vehicle (robot) and its agriculture-specific capabilities. Very little work has explored the coordination of fleets of such vehicles in the daily execution of farming tasks. This is especially the case when considering overall fleet performance, its efficiency and scalability in the context of highly automated agriculture vehicles that perform tasks throughout multiple fields potentially owned by different farmers and/or enterprises. The potential impact of automating AMR fleet coordination on commercial agriculture is immense. Major conglomerates with large and heterogeneous fleets of agriculture vehicles could operate on huge land areas without human operators to effect precision farming. In this paper, we propose the Agriculture Fleet Vehicle Routing Problem (AF-VRP) which, to the best of our knowledge, differs from any other version of the Vehicle Routing Problem studied so far. We focus on the dynamic and decentralised version of this problem applicable in environments involving multiple agriculture machinery and farm owners where concepts of fairness and equity must be considered. Such a problem combines three related problems: the dynamic assignment problem, the dynamic 3-index assignment problem and the capacitated arc routing problem. We review the state-of-the-art and categorise solution approaches as centralised, distributed and decentralised, based on the underlying decision-making context. Finally, we discuss open challenges in applying distributed and decentralised coordination approaches to this problem.

    @article{lincoln43570,
    volume = {34},
    number = {1},
    month = {February},
    author = {Marin Lujak and Elizabeth I Sklar and Frederic Semet},
    title = {Agriculture fleet vehicle routing: A decentralised and dynamic problem},
    publisher = {IOS Press},
    year = {2021},
    journal = {AI Communications},
    doi = {10.3233/AIC-201581},
    pages = {55--71},
    url = {https://eprints.lincoln.ac.uk/id/eprint/43570/},
    abstract = {To date, the research on agriculture vehicles in general and Agriculture Mobile Robots (AMRs) in particular has focused on a single vehicle (robot) and its agriculture-specific capabilities. Very little work has explored the coordination of fleets of such vehicles in the daily execution of farming tasks. This is especially the case when considering overall fleet performance, its efficiency and scalability in the context of highly automated agriculture vehicles that perform tasks throughout multiple fields potentially owned by different farmers and/or enterprises. The potential impact of automating AMR fleet coordination on commercial agriculture is immense. Major conglomerates with large and heterogeneous fleets of agriculture vehicles could operate on huge land areas without human operators to effect precision farming. In this paper, we propose the Agriculture Fleet Vehicle Routing Problem (AF-VRP) which, to the best of our knowledge, differs from any other version of the Vehicle Routing Problem studied so far. We focus on the dynamic and decentralised version of this problem applicable in environments involving multiple agriculture machinery and farm owners where concepts of fairness and equity must be considered. Such a problem combines three related problems: the dynamic assignment problem, the dynamic 3-index assignment problem and the capacitated arc routing problem. We review the state-of-the-art and categorise solution approaches as centralised, distributed and decentralised, based on the underlying decision-making context. Finally, we discuss open challenges in applying distributed and decentralised coordination approaches to this problem.}
    }
  • S. Mghames, M. Hanheide, and A. G. Esfahani, “Interactive movement primitives: planning to push occluding pieces for fruit picking,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2021. doi:10.1109/IROS45743.2020.9341728
    [BibTeX] [Abstract] [Download PDF]

    Robotic technology is increasingly considered the major means for fruit picking. However, picking fruits in a dense cluster imposes a challenging research question in terms of motion/path planning as conventional planning approaches may not find collision-free movements for the robot to reach-and-pick a ripe fruit within a dense cluster. In such cases, the robot needs to safely push unripe fruits to reach a ripe one. Nonetheless, existing approaches to planning pushing movements in cluttered environments either are computationally expensive or only deal with 2-D cases and are not suitable for fruit picking, where it needs to compute 3-D pushing movements in a short time. In this work, we present a path planning algorithm for pushing occluding fruits to reach-and-pick a ripe one. Our proposed approach, called Interactive Probabilistic Movement Primitives (I-ProMP), is not computationally expensive (its computation time is in the order of 100 milliseconds) and is readily used for 3-D problems. We demonstrate the efficiency of our approach with pushing unripe strawberries in a simulated polytunnel. Our experimental results confirm I-ProMP successfully pushes table top grown strawberries and reaches a ripe one.

    @inproceedings{lincoln42217,
    booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
    month = {February},
    title = {Interactive Movement Primitives: Planning to Push Occluding Pieces for Fruit Picking},
    author = {Sariah Mghames and Marc Hanheide and Amir Ghalamzan Esfahani},
    year = {2021},
    doi = {10.1109/IROS45743.2020.9341728},
    note = {{\copyright} 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.},
    url = {https://eprints.lincoln.ac.uk/id/eprint/42217/},
    abstract = {Robotic technology is increasingly considered the major means for fruit picking. However, picking fruits in a dense cluster imposes a challenging research question in terms of motion/path planning as conventional planning approaches may not find collision-free movements for the robot to reach-and-pick a ripe fruit within a dense cluster. In such cases, the robot needs to safely push unripe fruits to reach a ripe one. Nonetheless, existing approaches to planning pushing movements in cluttered environments either are computationally expensive or only deal with 2-D cases and are not suitable for fruit picking, where it needs to compute 3-D pushing movements in a short time. In this work, we present a path planning algorithm for pushing occluding fruits to reach-and-pick a ripe one. Our proposed approach, called Interactive Probabilistic Movement Primitives (I-ProMP), is not computationally expensive (its computation time is in the order of 100 milliseconds) and is readily used for 3-D problems. We demonstrate the efficiency of our approach with pushing unripe strawberries in a simulated polytunnel. Our experimental results confirm I-ProMP successfully pushes table top grown strawberries and reaches a ripe one.}
    }
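
    I-ProMP builds on probabilistic movement primitives; the sketch below is the standard 1-D ProMP conditioning step (forcing the trajectory distribution through a via point), i.e. the generic machinery underneath, not the authors' full interactive planner.

    import numpy as np

    def condition_promp(mu_w, Sigma_w, phi, y_star, sigma_y=1e-4):
        """Condition a 1-D ProMP (weight mean and covariance) on passing
        through y_star at the phase with basis activations phi."""
        k = Sigma_w @ phi / (sigma_y + phi @ Sigma_w @ phi)  # Kalman-style gain
        mu_new = mu_w + k * (y_star - phi @ mu_w)
        Sigma_new = Sigma_w - np.outer(k, phi @ Sigma_w)
        return mu_new, Sigma_new

    n = 10                                         # radial basis functions
    centers = np.linspace(0, 1, n)
    phi_at = lambda t: np.exp(-0.5 * ((t - centers) / 0.1) ** 2)

    mu_w, Sigma_w = np.zeros(n), np.eye(n)         # prior over trajectories
    mu_w, Sigma_w = condition_promp(mu_w, Sigma_w, phi_at(0.5), y_star=0.3)
    print(round(float(phi_at(0.5) @ mu_w), 3))     # mean now passes near 0.3
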
  • Z. Al-saadi, D. Sirintuna, A. Kucukyilmaz, and C. Basdogan, “A novel haptic feature set for the classification of interactive motor behaviors in collaborative object transfer,” IEEE Transactions on Haptics, p. 1–1, 2021. doi:10.1109/TOH.2020.3034244
    [BibTeX] [Abstract] [Download PDF]

    Haptics provides a natural and intuitive channel of communication during the interaction of two humans in complex physical tasks, such as joint object transportation. However, despite the utmost importance of touch in physical interactions, the use of haptics is underrepresented when developing intelligent systems. This study explores the prominence of haptic data to extract information about underlying interaction patterns within human-human cooperation. For this purpose, we design salient haptic features describing the collaboration quality within a physical dyadic task and investigate the use of these features to classify the interaction patterns. We categorize the interaction into four discrete behavior classes. These classes describe whether the partners work in harmony or face conflicts while jointly transporting an object through translational or rotational movements. We test the proposed features on a physical human-human interaction (pHHI) dataset, consisting of data collected from 12 human dyads. Using these data, we verify the salience of haptic features by achieving a correct classification rate over 91% using a Random Forest classifier.

    @article{lincoln43742,
    title = {A Novel Haptic Feature Set for the Classification of Interactive Motor Behaviors in Collaborative Object Transfer},
    author = {Zaid Al-saadi and Doganay Sirintuna and Ayse Kucukyilmaz and Cagatay Basdogan},
    publisher = {IEEE},
    year = {2021},
    pages = {1--1},
    doi = {10.1109/TOH.2020.3034244},
    journal = {IEEE Transactions on Haptics},
    url = {https://eprints.lincoln.ac.uk/id/eprint/43742/},
    abstract = {Haptics provides a natural and intuitive channel of communication during the interaction of two humans in complex physical tasks, such as joint object transportation. However, despite the utmost importance of touch in physical interactions, the use of haptics is underrepresented when developing intelligent systems. This study explores the prominence of haptic data to extract information about underlying interaction patterns within human-human cooperation. For this purpose, we design salient haptic features describing the collaboration quality within a physical dyadic task and investigate the use of these features to classify the interaction patterns. We categorize the interaction into four discrete behavior classes. These classes describe whether the partners work in harmony or face conflicts while jointly transporting an object through translational or rotational movements. We test the proposed features on a physical human-human interaction (pHHI) dataset, consisting of data collected from 12 human dyads. Using these data, we verify the salience of haptic features by achieving a correct classification rate over 91\% using a Random Forest classifier.}
    }
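
    The classification pipeline the abstract describes maps per-window haptic feature vectors to four behaviour classes with a Random Forest; this sketch wires up the scikit-learn equivalent on random stand-in data (the real features and labels come from the pHHI dataset, and the feature names in the comment are assumptions).

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    # Stand-in haptic features per time window, e.g. mean interaction
    # force, force disagreement, net torque, power transfer.
    X = rng.normal(size=(400, 4))
    y = rng.integers(0, 4, size=400)  # 4 classes: harmony/conflict variants

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    # Near chance (~0.25) on random data; the paper reports over 91%
    # on real dyadic force/torque recordings.
    print(cross_val_score(clf, X, y, cv=5).mean())
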
  • N. Kokciyan, I. Sassoon, E. Sklar, S. Parsons, and S. Modgil, “Applying metalevel argumentation frameworks to support medical decision making,” IEEE Intelligent Systems, 2021. doi:10.1109/MIS.2021.3051420
    [BibTeX] [Abstract] [Download PDF]

    People are increasingly employing artificial intelligence as the basis for decision-support systems (DSSs) to assist them in making well-informed decisions. Adoption of DSS is challenging when such systems lack support, or evidence, for justifying their recommendations. DSSs are widely applied in the medical domain, due to the complexity of the domain and the sheer volume of data that render manual processing difficult. This paper proposes a metalevel argumentation-based decision-support system that can reason with heterogeneous data (e.g. body measurements, electronic health records, clinical guidelines), while incorporating the preferences of the human beneficiaries of those decisions. The system constructs template-based explanations for the recommendations that it makes. The proposed framework has been implemented in a system to support stroke patients and its functionality has been tested in a pilot study. User feedback shows that the system can run effectively over an extended period.

    @article{lincoln43690,
    title = {Applying Metalevel Argumentation Frameworks to Support Medical Decision Making},
    author = {Nadin Kokciyan and Isabel Sassoon and Elizabeth Sklar and Simon Parsons and Sanjay Modgil},
    publisher = {IEEE},
    year = {2021},
    doi = {10.1109/MIS.2021.3051420},
    journal = {IEEE Intelligent Systems},
    url = {https://eprints.lincoln.ac.uk/id/eprint/43690/},
    abstract = {People are increasingly employing artificial intelligence as the basis for decision-support systems (DSSs) to assist them in making well-informed decisions. Adoption of DSS is challenging when such systems lack support, or evidence, for justifying their recommendations. DSSs are widely applied in the medical domain, due to the complexity of the domain and the sheer volume of data that render manual processing difficult. This paper proposes a metalevel argumentation-based decision-support system that can reason with heterogeneous data (e.g. body measurements, electronic health records, clinical guidelines), while incorporating the preferences of the human beneficiaries of those decisions. The system constructs template-based explanations for the recommendations that it makes. The proposed framework has been implemented in a system to support stroke patients and its functionality has been tested in a pilot study. User feedback shows that the system can run effectively over an extended period.}
    }

2020

  • N. Andreakos, S. Yue, and V. Cutsuridis, “Recall performance improvement in a bio-inspired model of the mammalian hippocampus,” in Brain Informatics, 2020, p. 319–328. doi:10.1007/978-3-030-59277-6_29
    [BibTeX] [Abstract] [Download PDF]

    Mammalian hippocampus is involved in short-term formation of declarative memories. We employed a bio-inspired neural model of hippocampal CA1 region consisting of a zoo of excitatory and inhibitory cells. Cells' firing was timed to a theta oscillation paced by two distinct neuronal populations exhibiting highly regular bursting activity, one tightly coupled to the trough and the other to the peak of theta. To systematically evaluate the model's recall performance against number of stored patterns, overlaps and 'active cells per pattern', its cells were driven by a non-specific excitatory input to their dendrites. This excitatory input to model excitatory cells provided context and timing information for retrieval of previously stored memory patterns. Inhibition to excitatory cells' dendrites acted as a non-specific global threshold machine that removed spurious activity during recall. Out of the three models tested, 'model 1' recall quality was excellent across all conditions. 'Model 2' recall was the worst. The number of 'active cells per pattern' had a massive effect on network recall quality regardless of how many patterns were stored in it. As 'active cells per pattern' decreased, the network's memory capacity increased, interference effects between stored patterns decreased, and recall quality improved. A key finding was that an increased firing rate of an inhibitory cell inhibiting a network of excitatory cells has better success at removing spurious activity at the network level and improving recall quality than increasing the synaptic strength of the same inhibitory cell inhibiting the same network of excitatory cells, while keeping its firing rate fixed.

    @inproceedings{lincoln43364,
    booktitle = {Brain Informatics},
    month = {December},
    title = {Recall Performance Improvement in a Bio-Inspired Model of the Mammalian Hippocampus},
    author = {Nikolas Andreakos and Shigang Yue and Vassilis Cutsuridis},
    year = {2020},
    pages = {319--328},
    doi = {10.1007/978-3-030-59277-6\_29},
    url = {https://eprints.lincoln.ac.uk/id/eprint/43364/},
    abstract = {Mammalian hippocampus is involved in short-term formation of declarative memories. We employed a bio-inspired neural model of hippocampal CA1 region consisting of a zoo of excitatory and inhibitory cells. Cells' firing was timed to a theta oscillation paced by two distinct neuronal populations exhibiting highly regular bursting activity, one tightly coupled to the trough and the other to the peak of theta. To systematically evaluate the model's recall performance against number of stored patterns, overlaps and 'active cells per pattern', its cells were driven by a non-specific excitatory input to their dendrites. This excitatory input to model excitatory cells provided context and timing information for retrieval of previously stored memory patterns. Inhibition to excitatory cells' dendrites acted as a non-specific global threshold machine that removed spurious activity during recall. Out of the three models tested, 'model 1' recall quality was excellent across all conditions. 'Model 2' recall was the worst. The number of 'active cells per pattern' had a massive effect on network recall quality regardless of how many patterns were stored in it. As 'active cells per pattern' decreased, the network's memory capacity increased, interference effects between stored patterns decreased, and recall quality improved. A key finding was that an increased firing rate of an inhibitory cell inhibiting a network of excitatory cells has better success at removing spurious activity at the network level and improving recall quality than increasing the synaptic strength of the same inhibitory cell inhibiting the same network of excitatory cells, while keeping its firing rate fixed.}
    }
  • Q. Fu and S. Yue, “Complementary visual neuronal systems model for collision sensing,” in the IEEE International Conference on Advanced Robotics and Mechatronics (ARM), 2020. doi:10.1109/ICARM49381.2020.9195303
    [BibTeX] [Abstract] [Download PDF]

    Inspired by insects' visual brains, this paper presents original modelling of a complementary visual neuronal systems model for real-time and robust collision sensing. Two categories of wide-field motion sensitive neurons, i.e., the lobula giant movement detectors (LGMDs) in locusts and the lobula plate tangential cells (LPTCs) in flies, have been studied intensively. The LGMDs have specific selectivity to approaching objects in depth that threaten collision; whilst the LPTCs are only sensitive to translating objects in horizontal and vertical directions. Though each has been modelled and applied in various visual scenes including robot scenarios, little has been done on investigating their complementary functionality and selectivity when functioning together. To fill this vacancy, we introduce a hybrid model combining two LGMDs (LGMD1 and LGMD2) with horizontally (rightward and leftward) sensitive LPTCs (LPTC-R and LPTC-L) specialising in fast collision perception. With coordination and competition between different activated neurons, the proximity feature by frontal approaching stimuli can be largely sharpened up by suppressing translating and receding motions. The proposed method has been implemented in ground micro-mobile robots as embedded systems. The multi-robot experiments have demonstrated the effectiveness and robustness of the proposed model for frontal collision sensing, which outperforms previous single-type neuron computation methods against translating interference.

    @inproceedings{lincoln42134,
    booktitle = {The IEEE International Conference on Advanced Robotics and Mechatronics (ARM)},
    month = {December},
    title = {Complementary Visual Neuronal Systems Model for Collision Sensing},
    author = {Qinbing Fu and Shigang Yue},
    year = {2020},
    doi = {10.1109/ICARM49381.2020.9195303},
    url = {https://eprints.lincoln.ac.uk/id/eprint/42134/},
    abstract = {Inspired by insects' visual brains, this paper presents original modelling of a complementary visual neuronal systems model for real-time and robust collision sensing. Two categories of wide-field motion sensitive neurons, i.e., the lobula giant movement detectors (LGMDs) in locusts and the lobula plate tangential cells (LPTCs) in flies, have been studied intensively. The LGMDs have specific selectivity to approaching objects in depth that threaten collision; whilst the LPTCs are only sensitive to translating objects in horizontal and vertical directions. Though each has been modelled and applied in various visual scenes including robot scenarios, little has been done on investigating their complementary functionality and selectivity when functioning together. To fill this vacancy, we introduce a hybrid model combining two LGMDs (LGMD1 and LGMD2) with horizontally (rightward and leftward) sensitive LPTCs (LPTC-R and LPTC-L) specialising in fast collision perception. With coordination and competition between different activated neurons, the proximity feature by frontal approaching stimuli can be largely sharpened up by suppressing translating and receding motions. The proposed method has been implemented in ground micro-mobile robots as embedded systems. The multi-robot experiments have demonstrated the effectiveness and robustness of the proposed model for frontal collision sensing, which outperforms previous single-type neuron computation methods against translating interference.}
    }
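
    A toy LGMD-style stage shows why such models respond to looming rather than translation: excitation from luminance change is trimmed by spatially spread inhibition, so only fast-expanding edges survive summation. The layout and every constant below are simplifications, not the paper's hybrid LGMD/LPTC model.

    import numpy as np

    def lgmd_response(prev_frame, frame, w_inh=0.5, threshold=0.05):
        """Excitation = absolute luminance change; inhibition = a spread
        (4-neighbour averaged) copy of it; spike if the rectified sum of
        (excitation - inhibition) is large, as for a looming edge."""
        exc = np.abs(frame - prev_frame)
        inh = w_inh * (np.roll(exc, 1, 0) + np.roll(exc, -1, 0)
                       + np.roll(exc, 1, 1) + np.roll(exc, -1, 1)) / 4
        s = np.maximum(exc - inh, 0).mean()
        return s, bool(s > threshold)

    a = np.zeros((32, 32)); a[12:20, 12:20] = 1.0  # dark object...
    b = np.zeros((32, 32)); b[8:24, 8:24] = 1.0    # ...looms (expands)
    print(lgmd_response(a, b))                     # activity above threshold
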
  • M. Terreran, A. Tramontano, J. Lock, S. Ghidoni, and N. Bellotto, “Real-time object detection using deep learning for helping people with visual impairments,” in 4th IEEE International Conference on Image Processing, Applications and Systems (IPAS), 2020.
    [BibTeX] [Abstract] [Download PDF]

    Object detection plays a crucial role in the development of Electronic Travel Aids (ETAs), capable of guiding a person with visual impairments towards a target object in an unknown indoor environment. In such a scenario, the object detector runs on a mobile device (e.g. smartphone) and needs to be fast, accurate, and, most importantly, lightweight. Nowadays, Deep Neural Networks (DNN) have become the state-of-the-art solution for object detection tasks, with many works improving speed and accuracy by proposing new architectures or extending existing ones. A common strategy is to use deeper networks to get higher performance, but that leads to a higher computational cost which makes it impractical to integrate them on mobile devices with limited computational power. In this work we compare different object detectors to find a suitable candidate to be implemented on ETAs, focusing on lightweight models capable of working in real-time on mobile devices with good accuracy. In particular, we select two models: SSD Lite with MobileNet V2 and Tiny-DSOD. Both models have been tested on the popular OpenImage dataset and a new dataset, called Office dataset, collected to further test models' performance and robustness in a real scenario inspired by the actual perception challenges of a user with visual impairments.

    @inproceedings{lincoln42338,
    booktitle = {4th IEEE International Conference on Image Processing, Applications and Systems (IPAS)},
    month = {December},
    title = {Real-time Object Detection using Deep Learning for helping People with Visual Impairments},
    author = {Matteo Terreran and Andrea Tramontano and Jacobus Lock and Stefano Ghidoni and Nicola Bellotto},
    publisher = {IEEE},
    year = {2020},
    url = {https://eprints.lincoln.ac.uk/id/eprint/42338/},
    abstract = {Object detection plays a crucial role in the development of Electronic Travel Aids (ETAs), capable of guiding a person with visual impairments towards a target object in an unknown indoor environment. In such a scenario, the object detector runs on a mobile device (e.g. smartphone) and needs to be fast, accurate, and, most importantly, lightweight. Nowadays, Deep Neural Networks (DNN) have become the state-of-the-art solution for object detection tasks, with many works improving speed and accuracy by proposing new architectures or extending existing ones. A common strategy is to use deeper networks to get higher performance, but that leads to a higher computational cost which makes it impractical to integrate them on mobile devices with limited computational power. In this work we compare different object detectors to find a suitable candidate to be implemented on ETAs, focusing on lightweight models capable of working in real-time on mobile devices with good accuracy. In particular, we select two models: SSD Lite with MobileNet V2 and Tiny-DSOD. Both models have been tested on the popular OpenImage dataset and a new dataset, called Office dataset, collected to further test models' performance and robustness in a real scenario inspired by the actual perception challenges of a user with visual impairments.}
    }
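
    To get a feel for the lightweight-detector trade-off discussed above, torchvision ships a close off-the-shelf relative of the paper's SSD Lite + MobileNet V2 (its backbone is MobileNetV3-Large, not V2), which can be timed like this; the weights argument requires torchvision 0.13 or newer.

    import time
    import torch
    from torchvision.models.detection import ssdlite320_mobilenet_v3_large

    # Closest shipped relative of the paper's SSD Lite + MobileNet V2.
    model = ssdlite320_mobilenet_v3_large(weights="DEFAULT").eval()

    img = torch.rand(3, 320, 320)  # dummy image, values in [0, 1]
    with torch.no_grad():
        t0 = time.time()
        out = model([img])[0]      # dict with boxes / labels / scores
    print(f"{time.time() - t0:.3f}s,",
          int((out["scores"] > 0.5).sum()), "detections above 0.5")
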
  • L. Roberts-Elliott, M. Fernandez-Carmona, and M. Hanheide, “Towards safer robot motion: using a qualitative motion model to classify human-robot spatial interaction,” in 21st Towards Autonomous Robotic Systems Conference, 2020.
    [BibTeX] [Abstract] [Download PDF]

    For Autonomous Mobile Robots (AMRs) to be adopted across a breadth of industries, they must navigate around humans in a way which is safe and which humans perceive as safe, but without greatly compromising efficiency. This work aims to classify the Human-Robot Spatial Interaction (HRSI) situation of an interacting human and robot, to be applied in Human-Aware Navigation (HAN) to account for situational context. We develop qualitative probabilistic models of relative human and robot movements in various HRSI situations to classify situations, and explain our plan to develop per-situation probabilistic models of socially legible HRSI to predict human and robot movement. In future work we aim to use these predictions to generate qualitative constraints in the form of metric cost-maps for local robot motion planners, enforcing more efficient and socially legible trajectories which are both physically safe and perceived as safe.

    @inproceedings{lincoln40186,
    booktitle = {21st Towards Autonomous Robotic Systems Conference},
    month = {December},
    title = {Towards Safer Robot Motion: Using a Qualitative Motion Model to Classify Human-Robot Spatial Interaction},
    author = {Laurence Roberts-Elliott and Manuel Fernandez-Carmona and Marc Hanheide},
    publisher = {Springer},
    year = {2020},
    url = {https://eprints.lincoln.ac.uk/id/eprint/40186/},
    abstract = {For Autonomous Mobile Robots (AMRs) to be adopted across a breadth of industries, they must navigate around humans in a way which is safe and which humans perceive as safe, but without greatly compromising efficiency. This work aims to classify the Human-Robot Spatial Interaction (HRSI) situation of an interacting human and robot, to be applied in Human-Aware Navigation (HAN) to account for situational context. We develop qualitative probabilistic models of relative human and robot movements in various HRSI situations to classify situations, and explain our plan to develop per-situation probabilistic models of socially legible HRSI to predict human and robot movement. In future work we aim to use these predictions to generate qualitative constraints in the form of metric cost-maps for local robot motion planners, enforcing more efficient and socially legible trajectories which are both physically safe and perceived as safe.}
    }
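
    The qualitative motion models referenced in the entry above are not spelled out in the abstract; the following minimal sketch shows one common qualitative encoding in this literature, QTC-style symbols stating whether each agent is moving towards, away from, or neutrally with respect to the other, assuming 2D positions sampled at a fixed rate (the positions and the eps tolerance are invented for illustration).

      # QTC-style qualitative encoding: for each agent, is it moving towards
      # (-1), away from (+1), or holding distance (0) w.r.t. the other agent?
      import math

      def qtc_symbol(prev_pos, curr_pos, other_pos, eps=0.01):
          """Sign of the change in distance to the other agent."""
          d_prev = math.dist(prev_pos, other_pos)
          d_curr = math.dist(curr_pos, other_pos)
          if d_curr < d_prev - eps:
              return -1
          if d_curr > d_prev + eps:
              return +1
          return 0

      # Toy frames: the human approaches while the robot holds position.
      human = [(0.0, 0.0), (0.4, 0.0)]
      robot = [(2.0, 0.0), (2.0, 0.0)]
      state = (qtc_symbol(human[0], human[1], robot[0]),
               qtc_symbol(robot[0], robot[1], human[0]))
      print(state)  # (-1, 0): human closing in, robot stationary
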
  • G. Canal, R. Borgo, A. Coles, A. Drake, D. Huynh, P. Keller, S. Krivić, P. Luff, Q. Mahesar, L. Moreau, S. Parsons, M. Patel, and E. Sklar, “Building trust in human-machine partnerships,” Computer law & security review, vol. 39, p. 105489, 2020. doi:10.1016/j.clsr.2020.105489
    [BibTeX] [Abstract] [Download PDF]

    Artificial Intelligence (AI) is bringing radical change to our lives. Fostering trust in this technology requires the technology to be transparent, and one route to transparency is to make the decisions that are reached by AIs explainable to the humans that interact with them. This paper lays out an exploratory approach to developing explainability and trust, describing the specific technologies that we are adopting, the social and organizational context in which we are working, and some of the challenges that we are addressing.

    @article{lincoln43255,
    volume = {39},
    month = {November},
    author = {Gerard Canal and Rita Borgo and Andrew Coles and Archie Drake and Dong Huynh and Perry Keller and Senka Krivi{\'c} and Paul Luff and Quratul-ain Mahesar and Luc Moreau and Simon Parsons and Menisha Patel and Elizabeth Sklar},
    title = {Building Trust in Human-Machine Partnerships},
    journal = {Computer Law \& Security Review},
    doi = {10.1016/j.clsr.2020.105489},
    pages = {105489},
    year = {2020},
    url = {https://eprints.lincoln.ac.uk/id/eprint/43255/},
    abstract = {Artificial Intelligence (AI) is bringing radical change to our lives. Fostering trust in this technology requires the technology to be transparent, and one route to transparency is to make the decisions that are reached by AIs explainable to the humans that interact with them. This paper lays out an exploratory approach to developing explainability and trust, describing the specific technologies that we are adopting, the social and organizational context in which we are working, and some of the challenges that we are addressing.}
    }
  • P. Bosilj, I. Gould, T. Duckett, and G. Cielniak, “Estimating soil aggregate size distribution from images using pattern spectra,” Biosystems engineering, vol. 198, p. 63–77, 2020. doi:10.1016/j.biosystemseng.2020.07.012
    [BibTeX] [Abstract] [Download PDF]

    A method for quantifying aggregate size distribution from images of soil samples is introduced. Knowledge of soil aggregate size distribution can help to inform soil management practices for the sustainable growth of crops. While current in-field approaches are mostly subjective, obtaining quantifiable results in a laboratory is labour- and time-intensive. Our goal is to develop an imaging technique for quantitative analysis of soil aggregate size distribution, which could provide the basis of a tool for rapid assessment of soil structure. The prediction accuracy of pattern spectra descriptors based on hierarchical representations from attribute morphology is analysed, as well as the impact of using images of different quality and scales. The method is able to handle greater sample complexity than previous approaches, while working with smaller sample sizes that are easier to handle. The results show promise for size analysis of soils with larger structures, and minimal sample preparation, as typical of soil assessment in agriculture.

    @article{lincoln42179,
    volume = {198},
    month = {October},
    author = {Petra Bosilj and Iain Gould and Tom Duckett and Grzegorz Cielniak},
    title = {Estimating soil aggregate size distribution from images using pattern spectra},
    publisher = {Elsevier},
    year = {2020},
    journal = {Biosystems Engineering},
    doi = {10.1016/j.biosystemseng.2020.07.012},
    pages = {63--77},
    url = {https://eprints.lincoln.ac.uk/id/eprint/42179/},
    abstract = {A method for quantifying aggregate size distribution from images of soil samples is introduced. Knowledge of soil aggregate size distribution can help to inform soil management practices for the sustainable growth of crops. While current in-field approaches are mostly subjective, obtaining quantifiable results in a laboratory is labour- and time-intensive. Our goal is to develop an imaging technique for quantitative analysis of soil aggregate size distribution, which could provide the basis of a tool for rapid assessment of soil structure. The prediction accuracy of pattern spectra descriptors based on hierarchical representations from attribute morphology is analysed, as well as the impact of using images of different quality and scales. The method is able to handle greater sample complexity than previous approaches, while working with smaller sample sizes that are easier to handle. The results show promise for size analysis of soils with larger structures, and minimal sample preparation, as typical of soil assessment in agriculture.}
    }
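
    As a rough illustration of what a pattern spectrum measures, the sketch below computes a classical granulometry on a toy binary image: the image "mass" removed by morphological openings of increasing size, binned by scale. The paper's descriptors come from attribute morphology on hierarchical representations, which this opening-based version only approximates; the toy image and size range are invented.

      # Granulometry-style pattern spectrum on a toy binary image: mass
      # removed by openings with structuring elements of increasing size.
      import numpy as np
      from scipy import ndimage

      rng = np.random.default_rng(0)
      soil = (rng.random((128, 128)) > 0.6).astype(np.uint8)  # toy "aggregates"

      spectrum = {}
      prev_mass = soil.sum()
      for r in range(2, 8):
          opened = ndimage.binary_opening(soil, structure=np.ones((r, r)))
          mass = opened.sum()
          spectrum[r] = int(prev_mass - mass)  # mass removed at this scale
          prev_mass = mass

      print(spectrum)  # larger bins correspond to larger aggregates
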
  • F. Camara, S. Cosar, N. Bellotto, N. Merat, and C. Fox, “Continuous game theory pedestrian modelling method for autonomous vehicles,” in Human factors in intelligent vehicles, C. Olaverri-Monreal, F. García-Fernández, and R. J. F. Rossetti, Eds., River publishers, 2020.
    [BibTeX] [Abstract] [Download PDF]

    Autonomous Vehicles (AVs) must interact with other road users. They must understand and adapt to complex pedestrian behaviour, especially during crossings where priority is not clearly defined. This includes feedback effects such as modelling a pedestrian's likely behaviours resulting from changes in the AV's behaviour. For example, whether a pedestrian will yield if the AV accelerates, and vice versa. To enable such automated interactions, it is necessary for the AV to possess a statistical model of the pedestrian's responses to its own actions. A previous work demonstrated a proof-of-concept method to fit parameters to a simplified model based on data from a highly artificial discrete laboratory task with human subjects. The method was based on LIDAR-based person tracking, game theory, and Gaussian process analysis. The present study extends this method to enable analysis of more realistic continuous human experimental data. It shows for the first time how game-theoretic predictive parameters can be fit to pedestrians' natural and continuous motion during road-crossings, and how predictions can be made about their interactions with AV controllers in similar real-world settings.

    @incollection{lincoln42872,
    month = {October},
    author = {Fanta Camara and Serhan Cosar and Nicola Bellotto and Natasha Merat and Charles Fox},
    series = {River Publishers Series in Transport Technology},
    booktitle = {Human Factors in Intelligent Vehicles},
    editor = {Cristina Olaverri-Monreal and Fernando Garc{\'i}a-Fern{\'a}ndez and Rosaldo J. F. Rossetti},
    title = {Continuous Game Theory Pedestrian Modelling Method for Autonomous Vehicles},
    publisher = {River Publishers},
    year = {2020},
    url = {https://eprints.lincoln.ac.uk/id/eprint/42872/},
    abstract = {Autonomous Vehicles (AVs) must interact with other road users. They must understand and adapt to complex pedestrian behaviour, especially during crossings where priority is not clearly defined. This includes feedback effects such as modelling a pedestrian's likely behaviours resulting from changes in the AV's behaviour. For example, whether a pedestrian will yield if the AV accelerates, and vice versa. To enable such automated interactions, it is necessary for the AV to possess a statistical model of the pedestrian's responses to its own actions. A previous work demonstrated a proof-of-concept method to fit parameters to a simplified model based on data from a highly artificial discrete laboratory task with human subjects. The method was based on LIDAR-based person tracking, game theory, and Gaussian process analysis. The present study extends this method to enable analysis of more realistic continuous human experimental data. It shows for the first time how game-theoretic predictive parameters can be fit to pedestrians' natural and continuous motion during road-crossings, and how predictions can be made about their interactions with AV controllers in similar real-world settings.}
    }
  • J. Barber, H. Cuayahuitl, M. Zhong, and W. Luan, “Lightweight non-intrusive load monitoring employing pruned sequence-to-point learning,” in 5th international workshop on non-intrusive load monitoring, 2020. doi:10.1145/1122445.1122456
    [BibTeX] [Abstract] [Download PDF]

    Non-intrusive load monitoring (NILM) is the process in which a household's total power consumption is used to determine the power consumption of household appliances. Previous work has shown that sequence-to-point (seq2point) learning is one of the most promising methods for tackling NILM. This process uses a sequence of aggregate power data to map a target appliance's power consumption at the midpoint of that window of power data. However, models produced using this method contain upwards of thirty million weights, meaning that the models require large volumes of resources to perform disaggregation. This paper addresses this problem by pruning the weights learned by such a model, which results in a lightweight NILM algorithm suitable for deployment on mobile devices such as smart meters. The pruned seq2point learning algorithm was applied to the REFIT data, experimentally showing that performance was retained compared to the original seq2point learning whilst the number of weights was reduced by 87%. Code: https://github.com/JackBarber98/pruned-nilm

    @inproceedings{lincoln42806,
    booktitle = {5th International Workshop on Non-Intrusive Load Monitoring},
    month = {October},
    title = {Lightweight Non-Intrusive Load Monitoring Employing Pruned Sequence-to-Point Learning},
    author = {Jack Barber and Heriberto Cuayahuitl and Mingjun Zhong and Wenpeng Luan},
    publisher = {ACM Conference Proceedings},
    year = {2020},
    doi = {10.1145/1122445.1122456},
    url = {https://eprints.lincoln.ac.uk/id/eprint/42806/},
    abstract = {Non-intrusive load monitoring (NILM) is the process in which a household's total power consumption is used to determine the power consumption of household appliances. Previous work has shown that sequence-to-point (seq2point) learning is one of the most promising methods for tackling NILM. This process uses a sequence of aggregate power data to map a target appliance's power consumption at the midpoint of that window of power data. However, models produced using this method contain upwards of thirty million weights, meaning that the models require large volumes of resources to perform disaggregation. This paper addresses this problem by pruning the weights learned by such a model, which results in a lightweight NILM algorithm suitable for deployment on mobile devices such as smart meters. The pruned seq2point learning algorithm was applied to the REFIT data, experimentally showing that performance was retained compared to the original seq2point learning whilst the number of weights was reduced by 87\%. Code: https://github.com/JackBarber98/pruned-nilm}
    }
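
    The pruning step described above can be illustrated with a minimal global magnitude-pruning sketch: zero the smallest-magnitude weights until a target sparsity (here 87%, matching the reduction reported in the entry) is reached. This is a generic sketch, not the authors' pipeline, which is available at the linked repository.

      # Global magnitude pruning: zero the smallest 87% of weights by |w|.
      import numpy as np

      def prune_by_magnitude(weights, sparsity=0.87):
          """Return copies of the weight arrays with the smallest |w| zeroed."""
          flat = np.abs(np.concatenate([w.ravel() for w in weights]))
          threshold = np.quantile(flat, sparsity)  # one global cut-off
          return [np.where(np.abs(w) < threshold, 0.0, w) for w in weights]

      rng = np.random.default_rng(1)
      layers = [rng.normal(size=(30, 10)), rng.normal(size=(10, 1))]
      pruned = prune_by_magnitude(layers)
      kept = sum(int((w != 0).sum()) for w in pruned) / sum(w.size for w in layers)
      print(f"fraction of weights kept: {kept:.2f}")  # roughly 0.13
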
  • W. Khan, G. Das, M. Hanheide, and G. Cielniak, “Incorporating spatial constraints into a bayesian tracking framework for improved localisation in agricultural environments,” in 2020 ieee/rsj international conference on intelligent robots and systems, 2020.
    [BibTeX] [Abstract] [Download PDF]

    The Global Navigation Satellite System (GNSS) has been considered a panacea for positioning and tracking over the last decade. However, it suffers from severe limitations in terms of accuracy, particularly in highly cluttered and indoor environments. Though real-time kinematic (RTK) supported GNSS promises extremely accurate localisation, employing such services is expensive, fails in occluded environments and is unavailable in areas where cellular base stations are not accessible. It is, therefore, necessary that the GNSS data be filtered if high accuracy is required. Thus, this article presents a GNSS-based particle filter that exploits the spatial constraints imposed by the environment. In the proposed setup, the state prediction of the sample set follows a restricted motion according to the topological map of the environment. This results in the transition of the samples being confined between specific discrete points, called the topological nodes, defined by a topological map. This is followed by a refinement stage where the full set of predicted samples goes through weighting and resampling, where the weight is proportional to the predicted particle's proximity to the GNSS measurement. Thus, a discrete-space continuous-time Bayesian filter is proposed, called the Topological Particle Filter (TPF). The proposed TPF is put to the test by localising and tracking fruit pickers inside polytunnels. Fruit pickers inside polytunnels can only follow specific paths according to the topology of the tunnel. These paths are defined in the topological map of the polytunnels and are fed to the TPF to track fruit pickers. Extensive datasets were collected to demonstrate the improved discrete tracking of strawberry pickers inside polytunnels thanks to the exploitation of the environmental constraints.

    @inproceedings{lincoln42419,
    booktitle = {2020 IEEE/RSJ International Conference on Intelligent Robots and Systems},
    month = {October},
    title = {Incorporating Spatial Constraints into a Bayesian Tracking Framework for Improved Localisation in Agricultural Environments},
    author = {Waqas Khan and Gautham Das and Marc Hanheide and Grzegorz Cielniak},
    publisher = {IEEE},
    year = {2020},
    url = {https://eprints.lincoln.ac.uk/id/eprint/42419/},
    abstract = {The Global Navigation Satellite System (GNSS) has been considered a panacea for positioning and tracking over the last decade. However, it suffers from severe limitations in terms of accuracy, particularly in highly cluttered and indoor environments. Though real-time kinematic (RTK) supported GNSS promises extremely accurate localisation, employing such services is expensive, fails in occluded environments and is unavailable in areas where cellular base stations are not accessible. It is, therefore, necessary that the GNSS data be filtered if high accuracy is required. Thus, this article presents a GNSS-based particle filter that exploits the spatial constraints imposed by the environment. In the proposed setup, the state prediction of the sample set follows a restricted motion according to the topological map of the environment. This results in the transition of the samples being confined between specific discrete points, called the topological nodes, defined by a topological map. This is followed by a refinement stage where the full set of predicted samples goes through weighting and resampling, where the weight is proportional to the predicted particle's proximity to the GNSS measurement. Thus, a discrete-space continuous-time Bayesian filter is proposed, called the Topological Particle Filter (TPF). The proposed TPF is put to the test by localising and tracking fruit pickers inside polytunnels. Fruit pickers inside polytunnels can only follow specific paths according to the topology of the tunnel. These paths are defined in the topological map of the polytunnels and are fed to the TPF to track fruit pickers. Extensive datasets were collected to demonstrate the improved discrete tracking of strawberry pickers inside polytunnels thanks to the exploitation of the environmental constraints.}
    }
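
    A minimal sketch of the Topological Particle Filter idea described above: particles live on the nodes of a topological map, prediction moves them only along map edges, and weighting uses each node's proximity to the noisy GNSS fix. The node layout, noise level and particle count below are invented for illustration.

      # Topological particle filter sketch: particles are map nodes, motion
      # follows map edges, weights come from GNSS proximity. Map is invented.
      import math, random

      nodes = {"A": (0, 0), "B": (5, 0), "C": (10, 0)}          # node -> position
      edges = {"A": ["A", "B"], "B": ["A", "B", "C"], "C": ["B", "C"]}
      particles = [random.choice(list(nodes)) for _ in range(200)]

      def step(particles, gnss, sigma=2.0):
          moved = [random.choice(edges[p]) for p in particles]  # predict on edges
          weights = [math.exp(-math.dist(nodes[p], gnss) ** 2 / (2 * sigma ** 2))
                     for p in moved]                            # GNSS likelihood
          return random.choices(moved, weights=weights, k=len(moved))  # resample

      particles = step(particles, gnss=(4.2, 0.7))  # one noisy GNSS fix
      print(max(set(particles), key=particles.count))  # most likely node: B
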
  • N. Mavrakis, R. Stolkin, and A. G. Esfahani, “Estimating an object's inertial parameters by robotic pushing: a data-driven approach,” in The ieee/rsj international conference on intelligent robots and systems (iros), 2020, p. 9537–9544. doi:10.1109/IROS45743.2020.9341112
    [BibTeX] [Abstract] [Download PDF]

    Estimating the inertial properties of an object can make robotic manipulations more efficient, especially in extreme environments. This paper presents a novel method of estimating the 2D inertial parameters of an object, by having a robot apply a push to it. We draw inspiration from previous analyses of quasi-static pushing mechanics, and introduce a data-driven model that can accurately represent these mechanics and provide a prediction for the object's inertial parameters. We evaluate the model with two datasets. For the first dataset, we set up a V-REP simulation of seven robots pushing objects with a large range of inertial parameters, acquiring 48000 pushes in total. For the second dataset, we use the object pushes from the MIT M-Cube lab pushing dataset. We extract features from force, moment and velocity measurements of the pushes, and train a Multi-Output Regression Random Forest. The experimental results show that we can accurately predict the 2D inertial parameters from a single push, and that our method retains this robust performance under various surface types.

    @inproceedings{lincoln42213,
    month = {October},
    author = {Nikos Mavrakis and Rustam Stolkin and Amir Ghalamzan Esfahani},
    booktitle = {The IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
    title = {Estimating An Object's Inertial Parameters By Robotic Pushing: A Data-Driven Approach},
    journal = {The IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2020)},
    doi = {10.1109/IROS45743.2020.9341112},
    pages = {9537--9544},
    year = {2020},
    url = {https://eprints.lincoln.ac.uk/id/eprint/42213/},
    abstract = {Estimating the inertial properties of an object can make robotic manipulations more efficient, especially in extreme environments. This paper presents a novel method of estimating the 2D inertial parameters of an object, by having a robot apply a push to it. We draw inspiration from previous analyses of quasi-static pushing mechanics, and introduce a data-driven model that can accurately represent these mechanics and provide a prediction for the object's inertial parameters. We evaluate the model with two datasets. For the first dataset, we set up a V-REP simulation of seven robots pushing objects with a large range of inertial parameters, acquiring 48000 pushes in total. For the second dataset, we use the object pushes from the MIT M-Cube lab pushing dataset. We extract features from force, moment and velocity measurements of the pushes, and train a Multi-Output Regression Random Forest. The experimental results show that we can accurately predict the 2D inertial parameters from a single push, and that our method retains this robust performance under various surface types.}
    }
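
    The learning setup named in the entry above (a Multi-Output Regression Random Forest mapping push features to 2D inertial parameters) can be sketched as follows; the feature dimensionality and the synthetic stand-in data are assumptions, not the paper's simulated or MIT push datasets.

      # Push features in, 2D inertial parameters out, via a multi-output
      # random forest. Synthetic stand-in data; dimensions are assumptions.
      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.multioutput import MultiOutputRegressor

      rng = np.random.default_rng(2)
      X = rng.normal(size=(500, 6))                 # force/moment/velocity features
      W = rng.normal(size=(6, 3))                   # toy linear ground truth
      y = X @ W + 0.05 * rng.normal(size=(500, 3))  # mass, com_x, com_y targets

      model = MultiOutputRegressor(RandomForestRegressor(n_estimators=100))
      model.fit(X, y)
      print(model.predict(X[:1]))  # predicted [mass, com_x, com_y] for one push
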
  • D. Bochtis, L. Benos, M. Lampridi, V. Marinoudi, S. Pearson, and C. G. Sørensen, “Agricultural workforce crisis in light of the covid-19 pandemic,” Sustainability, vol. 12, iss. 19, p. 8212, 2020. doi:10.3390/su12198212
    [BibTeX] [Abstract] [Download PDF]

    COVID-19 and the restrictive measures towards containing the spread of its infections have seriously affected the agricultural workforce and jeopardized food security. The present study aims at assessing the COVID-19 pandemic impacts on agricultural labor and suggesting strategies to mitigate them. To this end, after an introduction to the pandemic background, the negative consequences on agriculture and the existing mitigation policies, risks to the agricultural workers were benchmarked across the United States' Standard Occupational Classification system. The individual tasks associated with each occupation in agricultural production were evaluated on the basis of potential COVID-19 infection risk. As criteria, the most prevalent virus transmission mechanisms were considered, namely the possibility of touching contaminated surfaces and the close proximity of workers. The higher risk occupations within the sector were identified, which facilitates the allocation of worker protection resources to the occupations where they are most needed. In particular, the results demonstrated that 50% of the agricultural workforce and 54% of the workers' annual income are at moderate to high risk. As a consequence, a series of control measures need to be adopted so as to enhance the resilience and sustainability of the sector as well as protect farmers, including physical distancing, hygiene practices, and personal protection equipment.

    @article{lincoln43697,
    volume = {12},
    number = {19},
    month = {October},
    author = {Dionysis Bochtis and Lefteris Benos and Maria Lampridi and Vasso Marinoudi and Simon Pearson and Claus G. S{\o}rensen},
    title = {Agricultural Workforce Crisis in Light of the COVID-19 Pandemic},
    year = {2020},
    journal = {Sustainability},
    doi = {10.3390/su12198212},
    pages = {8212},
    url = {https://eprints.lincoln.ac.uk/id/eprint/43697/},
    abstract = {COVID-19 and the restrictive measures towards containing the spread of its infections have seriously affected the agricultural workforce and jeopardized food security. The present study aims at assessing the COVID-19 pandemic impacts on agricultural labor and suggesting strategies to mitigate them. To this end, after an introduction to the pandemic background, the negative consequences on agriculture and the existing mitigation policies, risks to the agricultural workers were benchmarked across the United States' Standard Occupational Classification system. The individual tasks associated with each occupation in agricultural production were evaluated on the basis of potential COVID-19 infection risk. As criteria, the most prevalent virus transmission mechanisms were considered, namely the possibility of touching contaminated surfaces and the close proximity of workers. The higher risk occupations within the sector were identified, which facilitates the allocation of worker protection resources to the occupations where they are most needed. In particular, the results demonstrated that 50\% of the agricultural workforce and 54\% of the workers' annual income are at moderate to high risk. As a consequence, a series of control measures need to be adopted so as to enhance the resilience and sustainability of the sector as well as protect farmers, including physical distancing, hygiene practices, and personal protection equipment.}
    }
  • G. Bosworth, L. Price, M. Collison, and C. Fox, “Unequal futures of rural mobility: challenges for a 'smart countryside',” Local economy, vol. 35, iss. 6, p. 586–608, 2020. doi:10.1177/0269094220968231
    [BibTeX] [Abstract] [Download PDF]

    Current transport strategy in the UK is strongly urban-focused, with assumptions that technological advances in mobility will simply trickle down into rural areas. This paper challenges such a view and instead draws on rural development thinking aligned to a 'Smart Countryside' which emphasises the need for place-based approaches. Survey and interview methods are employed to develop a framework of rural needs associated with older people, younger people and businesses. This framework is employed to assess a range of mobility innovations that could most effectively address these needs in different rural contexts. In presenting visions of future rural mobility, the paper also identifies key infrastructure as well as institutional and financial changes that are required to facilitate the roll-out of new technologies across rural areas.

    @article{lincoln42612,
    volume = {35},
    number = {6},
    month = {September},
    author = {Gary Bosworth and Liz Price and Martin Collison and Charles Fox},
    title = {Unequal Futures of Rural Mobility: Challenges for a 'Smart Countryside'},
    publisher = {Sage},
    year = {2020},
    journal = {Local Economy},
    doi = {10.1177/0269094220968231},
    pages = {586--608},
    url = {https://eprints.lincoln.ac.uk/id/eprint/42612/},
    abstract = {Current transport strategy in the UK is strongly urban-focused, with assumptions that technological advances in mobility will simply trickle down into rural areas. This paper challenges such a view and instead draws on rural development thinking aligned to a 'Smart Countryside' which emphasises the need for place-based approaches. Survey and interview methods are employed to develop a framework of rural needs associated with older people, younger people and businesses. This framework is employed to assess a range of mobility innovations that could most effectively address these needs in different rural contexts. In presenting visions of future rural mobility, the paper also identifies key infrastructure as well as institutional and financial changes that are required to facilitate the roll-out of new technologies across rural areas.}
    }
  • M. T. Fountain, A. Badiee, S. Hemer, A. Delgado, M. Mangan, C. Dowding, F. Davis, and S. Pearson, “The use of light spectrum blocking films to reduce populations of drosophila suzukii matsumura in fruit crops,” Scientific reports, vol. 10, iss. 1, 2020. doi:10.1038/s41598-020-72074-8
    [BibTeX] [Abstract] [Download PDF]

    Spotted wing drosophila, Drosophila suzukii, is a serious invasive pest impacting the production of multiple fruit crops, including soft and stone fruits such as strawberries, raspberries and cherries. Effective control is challenging and reliant on integrated pest management which includes the use of an ever decreasing number of approved insecticides. New means to reduce the impact of this pest that can be integrated into control strategies are urgently required. In many production regions, including the UK, soft fruit are typically grown inside tunnels clad with polyethylene based materials. These can be modified to filter specific wavebands of light. We investigated whether targeted spectral modifications to cladding materials that disrupt insect vision could reduce the incidence of D. suzukii. We present a novel approach that starts from a neuroscientific investigation of insect sensory systems and ends with infield testing of new cladding materials inspired by the biological data. We show D. suzukii are predominantly sensitive to wavelengths below 405 nm (ultraviolet) and above 565 nm (orange & red) and that targeted blocking of lower wavebands (up to 430 nm) using light restricting materials reduces pest populations up to 73% in field trials.

    @article{lincoln42446,
    volume = {10},
    number = {1},
    month = {September},
    author = {Michelle T. Fountain and Amir Badiee and Sebastian Hemer and Alvaro Delgado and Michael Mangan and Colin Dowding and Frederick Davis and Simon Pearson},
    title = {The use of light spectrum blocking films to reduce populations of Drosophila suzukii Matsumura in fruit crops},
    publisher = {Nature Publishing Group},
    year = {2020},
    journal = {Scientific Reports},
    doi = {10.1038/s41598-020-72074-8},
    url = {https://eprints.lincoln.ac.uk/id/eprint/42446/},
    abstract = {Spotted wing drosophila, Drosophila suzukii, is a serious invasive pest impacting the production of
    multiple fruit crops, including soft and stone fruits such as strawberries, raspberries and cherries.
    Effective control is challenging and reliant on integrated pest management which includes the use
    of an ever decreasing number of approved insecticides. New means to reduce the impact of this pest
    that can be integrated into control strategies are urgently required. In many production regions,
    including the UK, soft fruit are typically grown inside tunnels clad with polyethylene based materials.
    These can be modified to filter specific wavebands of light. We investigated whether targeted spectral
    modifications to cladding materials that disrupt insect vision could reduce the incidence of D. suzukii.
    We present a novel approach that starts from a neuroscientific investigation of insect sensory systems
    and ends with infield testing of new cladding materials inspired by the biological data. We show D.
    suzukii are predominantly sensitive to wavelengths below 405 nm (ultraviolet) and above 565 nm
    (orange \& red) and that targeted blocking of lower wavebands (up to 430 nm) using light restricting
    materials reduces pest populations up to 73\% in field trials.}
    }
  • F. Del Duchetto, P. Baxter, and M. Hanheide, “Are you still with me? continuous engagement assessment from a robot's point of view,” Frontiers in robotics and ai, vol. 7, iss. 116, 2020. doi:10.3389/frobt.2020.00116
    [BibTeX] [Abstract] [Download PDF]

    Continuously measuring the engagement of users with a robot in a Human-Robot Interaction (HRI) setting paves the way toward in-situ reinforcement learning, improves metrics of interaction quality, and can guide interaction design and behavior optimization. However, engagement is often considered very multi-faceted and difficult to capture in a workable and generic computational model that can serve as an overall measure of engagement. Building upon the intuitive ways in which humans can successfully assess a situation for its degree of engagement when they see it, we propose a novel regression model (utilizing CNN and LSTM networks) enabling robots to compute a single scalar engagement value during interactions with humans from standard video streams, obtained from the point of view of an interacting robot. The model is based on a long-term dataset from an autonomous tour guide robot deployed in a public museum, with continuous annotation of a numeric engagement assessment by three independent coders. We show that this model not only predicts engagement very well in our own application domain but also transfers successfully to an entirely different dataset (with different tasks, environment, camera, robot and people). The trained model and the software are available to the HRI community, at https://github.com/LCAS/engagement_detector, as a tool to measure engagement in a variety of settings.

    @article{lincoln42433,
    volume = {7},
    number = {116},
    month = {September},
    author = {Francesco Del Duchetto and Paul Baxter and Marc Hanheide},
    title = {Are You Still With Me? Continuous Engagement Assessment From a Robot's Point of View},
    publisher = {Frontiers Media S.A.},
    year = {2020},
    journal = {Frontiers in Robotics and AI},
    doi = {10.3389/frobt.2020.00116},
    url = {https://eprints.lincoln.ac.uk/id/eprint/42433/},
    abstract = {Continuously measuring the engagement of users with a robot in a Human-Robot Interaction (HRI) setting paves the way toward in-situ reinforcement learning, improves metrics of interaction quality, and can guide interaction design and behavior optimization. However, engagement is often considered very multi-faceted and difficult to capture in a workable and generic computational model that can serve as an overall measure of engagement. Building upon the intuitive ways in which humans can successfully assess a situation for its degree of engagement when they see it, we propose a novel regression model (utilizing CNN and LSTM networks) enabling robots to compute a single scalar engagement value during interactions with humans from standard video streams, obtained from the point of view of an interacting robot. The model is based on a long-term dataset from an autonomous tour guide robot deployed in a public museum, with continuous annotation of a numeric engagement assessment by three independent coders. We show that this model not only predicts engagement very well in our own application domain but also transfers successfully to an entirely different dataset (with different tasks, environment, camera, robot and people). The trained model and the software are available to the HRI community, at https://github.com/LCAS/engagement\_detector, as a tool to measure engagement in a variety of settings.}
    }
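
    A minimal PyTorch sketch of the CNN+LSTM regression idea described above: a small per-frame CNN encoder, an LSTM over the frame sequence, and a sigmoid head producing a single engagement scalar in [0, 1]. Layer sizes and the dummy clip are illustrative assumptions; the released model differs and lives at the linked repository.

      # CNN features per frame, LSTM over time, one engagement scalar per clip.
      import torch
      import torch.nn as nn

      class EngagementRegressor(nn.Module):
          def __init__(self, feat_dim=64, hidden=32):
              super().__init__()
              self.cnn = nn.Sequential(               # tiny per-frame encoder
                  nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
                  nn.AdaptiveAvgPool2d(4), nn.Flatten(),
                  nn.Linear(16 * 4 * 4, feat_dim), nn.ReLU())
              self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
              self.head = nn.Linear(hidden, 1)

          def forward(self, clip):                    # clip: (B, T, 3, H, W)
              b, t = clip.shape[:2]
              feats = self.cnn(clip.flatten(0, 1)).view(b, t, -1)
              _, (h, _) = self.lstm(feats)            # last hidden state
              return torch.sigmoid(self.head(h[-1]))  # scalar in [0, 1]

      clip = torch.rand(1, 8, 3, 64, 64)              # one 8-frame dummy clip
      print(EngagementRegressor()(clip))              # e.g. tensor([[0.49]])
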
  • S. Kottayil, P. Tsoleridis, K. Rossa, R. Connors, and C. Fox, “Investigation of driver route choice behaviour using bluetooth data,” in 15th world conference on transport research, 2020, p. 632–645. doi:10.1016/j.trpro.2020.08.065
    [BibTeX] [Abstract] [Download PDF]

    Many local authorities use small-scale transport models to manage their transportation networks. These may assume drivers' behaviour to be rational in choosing the fastest route, and thus that all drivers behave the same given an origin and destination, leading to simplified aggregate flow models, fitted to anonymous traffic flow measurements. Recent price falls in traffic sensors, data storage, and compute power now enable Data Science to empirically test such assumptions, by using per-driver data to infer route selection from sensor observations and compare it with optimal route selection. A methodology is presented using per-driver data to analyse driver route choice behaviour in transportation networks. Traffic flows on multiple measurable routes for origin-destination pairs are compared based on the length of each route. A driver rationality index is defined by considering the shortest physical route between an origin-destination pair. The proposed method is intended to aid calibration of parameters used in traffic assignment models, e.g. weights in generalized cost formulations or dispersion within stochastic user equilibrium models. The method is demonstrated using raw sensor datasets collected through Bluetooth sensors in the area of Chesterfield, Derbyshire, UK. The results for this region show that routes with a significant difference in the lengths of their paths have the majority (71%) of drivers using the optimal path, but as the difference in length decreases, the probability of suboptimal route choice increases (27%). The methodology can be used for extended research considering the impact on route choice of other factors including travel time and road specific conditions.

    @inproceedings{lincoln34791,
    volume = {48},
    month = {September},
    author = {Sreedevi Kottayil and Panagiotis Tsoleridis and Kacper Rossa and Richard Connors and Charles Fox},
    booktitle = {15th World Conference on Transport Research},
    title = {Investigation of Driver Route Choice Behaviour using Bluetooth Data},
    publisher = {Elsevier},
    year = {2020},
    journal = {Transportation Research Procedia},
    doi = {10.1016/j.trpro.2020.08.065},
    pages = {632--645},
    url = {https://eprints.lincoln.ac.uk/id/eprint/34791/},
    abstract = {Many local authorities use small-scale transport models to manage their transportation networks. These may assume drivers' behaviour to be rational in choosing the fastest route, and thus that all drivers behave the same given an origin and destination, leading to simplified aggregate flow models, fitted to anonymous traffic flow measurements. Recent price falls in traffic sensors, data storage, and compute power now enable Data Science to empirically test such assumptions, by using per-driver data to infer route selection from sensor observations and compare it with optimal route selection. A methodology is presented using per-driver data to analyse driver route choice behaviour in transportation networks. Traffic flows on multiple measurable routes for origin-destination pairs are compared based on the length of each route. A driver rationality index is defined by considering the shortest physical route between an origin-destination pair. The proposed method is intended to aid calibration of parameters used in traffic assignment models, e.g. weights in generalized cost formulations or dispersion within stochastic user equilibrium models. The method is demonstrated using raw sensor datasets collected through Bluetooth sensors in the area of Chesterfield, Derbyshire, UK. The results for this region show that routes with a significant difference in the lengths of their paths have the majority (71\%) of drivers using the optimal path, but as the difference in length decreases, the probability of suboptimal route choice increases (27\%). The methodology can be used for extended research considering the impact on route choice of other factors including travel time and road specific conditions.}
    }
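
    The driver rationality index defined above reduces, for a single origin-destination pair, to the share of observed trips taking the physically shortest route; a minimal sketch follows, with route names, lengths and counts invented (the counts are chosen to mirror the 71% majority reported in the abstract).

      # Rationality index for one origin-destination pair: share of observed
      # trips using the shortest route. Names, lengths, counts are invented.
      route_length_km = {"via_A61": 11.2, "via_ring_road": 12.9}
      observed_trips = {"via_A61": 355, "via_ring_road": 145}

      shortest = min(route_length_km, key=route_length_km.get)
      rationality = observed_trips[shortest] / sum(observed_trips.values())
      print(f"{rationality:.0%} of drivers chose the shortest route ({shortest})")
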
  • H. Isakhani, S. Yue, C. Xiong, W. Chen, X. Sun, and T. Liu, “Fabrication and mechanical analysis of bioinspired gliding-optimized wing prototypes for micro aerial vehicles,” in 5th international conference on advanced robotics and mechatronics (icarm), 2020, p. 602–608. doi:10.1109/ICARM49381.2020.9195392
    [BibTeX] [Abstract] [Download PDF]

    Gliding is the most efficient flight mode that is explicitly appreciated by natural fliers. This is achieved by high-performance structures developed over millions of years of evolution. One such prehistoric insect, the locust (Schistocerca gregaria), is a perfect example of a natural glider capable of enduring transatlantic flights, which could potentially inspire numerous solutions to problems in aerospace engineering. However, biomimicry of such aerodynamic properties is hindered by the limitations of conventional as well as modern fabrication technologies in terms of precision and availability, respectively. Therefore, we explore and propose novel combinations of economical manufacturing methods to develop various locust-inspired tandem wing prototypes (i.e. fore and hindwings), for further wind tunnel based aerodynamic studies. Additionally, we determine the flexural stiffness and maximum deformation rate of our prototypes and compare them to their counterparts in nature and the literature, recommending the most suitable artificial bioinspired wing for gliding micro aerial vehicle applications.

    @inproceedings{lincoln43687,
    month = {September},
    author = {Hamid Isakhani and Shigang Yue and Caihua Xiong and Wenbin Chen and Xuelong Sun and Tian Liu},
    booktitle = {5th International Conference on Advanced Robotics and Mechatronics (ICARM)},
    title = {Fabrication and Mechanical Analysis of Bioinspired Gliding-optimized Wing Prototypes for Micro Aerial Vehicles},
    publisher = {IEEE},
    year = {2020},
    journal = {2020 5th International Conference on Advanced Robotics and Mechatronics (ICARM)},
    doi = {10.1109/ICARM49381.2020.9195392},
    pages = {602--608},
    url = {https://eprints.lincoln.ac.uk/id/eprint/43687/},
    abstract = {Gliding is the most efficient flight mode that is explicitly appreciated by natural fliers. This is achieved by high-performance structures developed over millions of years of evolution. One such prehistoric insect, the locust (Schistocerca gregaria), is a perfect example of a natural glider capable of enduring transatlantic flights, which could potentially inspire numerous solutions to problems in aerospace engineering. However, biomimicry of such aerodynamic properties is hindered by the limitations of conventional as well as modern fabrication technologies in terms of precision and availability, respectively. Therefore, we explore and propose novel combinations of economical manufacturing methods to develop various locust-inspired tandem wing prototypes (i.e. fore and hindwings), for further wind tunnel based aerodynamic studies. Additionally, we determine the flexural stiffness and maximum deformation rate of our prototypes and compare them to their counterparts in nature and the literature, recommending the most suitable artificial bioinspired wing for gliding micro aerial vehicle applications.}
    }
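
    The flexural stiffness measurements mentioned above are commonly obtained from a bending test; as one standard way to do this (not necessarily the paper's exact protocol), the sketch below applies the textbook three-point-bending relation EI = F·L³/(48·δ) with invented numbers.

      # Three-point bending estimate of flexural stiffness: EI = F L^3 / (48 d).
      def flexural_stiffness(force_n, span_m, deflection_m):
          """EI in N*m^2 for a simply supported beam with a centre-point load."""
          return force_n * span_m ** 3 / (48.0 * deflection_m)

      EI = flexural_stiffness(force_n=0.02, span_m=0.03, deflection_m=0.001)
      print(f"EI = {EI:.2e} N*m^2")  # about 1.1e-05 N*m^2 for these toy numbers
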
  • T. Liu, X. Sun, C. Hu, Q. Fu, H. Isakhani, and S. Yue, “Investigating multiple pheromones in swarm robots – a case study of multi-robot deployment,” in 2020 5th international conference on advanced robotics and mechatronics (icarm), 2020, p. 595–601. doi:10.1109/ICARM49381.2020.9195311
    [BibTeX] [Abstract] [Download PDF]

    Social insects are known as experts in handling complex tasks in a collectively smart way, although their small brains contain only limited computation resources and sensory information. It is believed that pheromones play a vital role in shaping social insects' collective behaviours. One of the key points underlying stigmergy is the combination of different pheromones in a specific task. In the swarm intelligence field, pheromone-inspired studies usually focus on a single pheromone at a time, so it is not clear how effectively multiple pheromones could be employed for a collective strategy in the real physical world. In this study, we investigate a multiple-pheromone-based deployment strategy for swarm robots inspired by social insects. The proposed deployment strategy uses two kinds of artificial pheromone, attractive and repellent, enabling micro robots to be distributed in desired positions with high efficiency. The strategy is assessed systematically by both simulation and real robot experiments using a novel artificial pheromone platform, ColCOSΦ. Results from the simulation and real robot experiments both demonstrate the effectiveness of the proposed strategy and reveal the role of multiple pheromones. The feasibility of the ColCOSΦ platform, and its potential for further robotic research on multiple pheromones, are also verified. Our study of using different pheromones for one collective swarm robotics task may help or inspire biologists in research on real insects.

    @inproceedings{lincoln43680,
    month = {September},
    author = {Tian Liu and Xuelong Sun and Cheng Hu and Qinbing Fu and Hamid Isakhani and Shigang Yue},
    booktitle = {2020 5th International Conference on Advanced Robotics and Mechatronics (ICARM)},
    title = {Investigating Multiple Pheromones in Swarm Robots - A Case Study of Multi-Robot Deployment},
    publisher = {IEEE},
    doi = {10.1109/ICARM49381.2020.9195311},
    pages = {595--601},
    year = {2020},
    url = {https://eprints.lincoln.ac.uk/id/eprint/43680/},
    abstract = {Social insects are known as experts in handling complex tasks in a collectively smart way, although their small brains contain only limited computation resources and sensory information. It is believed that pheromones play a vital role in shaping social insects' collective behaviours. One of the key points underlying stigmergy is the combination of different pheromones in a specific task. In the swarm intelligence field, pheromone-inspired studies usually focus on a single pheromone at a time, so it is not clear how effectively multiple pheromones could be employed for a collective strategy in the real physical world. In this study, we investigate a multiple-pheromone-based deployment strategy for swarm robots inspired by social insects. The proposed deployment strategy uses two kinds of artificial pheromone, attractive and repellent, enabling micro robots to be distributed in desired positions with high efficiency. The strategy is assessed systematically by both simulation and real robot experiments using a novel artificial pheromone platform ColCOS{\ensuremath{\Phi}}. Results from the simulation and real robot experiments both demonstrate the effectiveness of the proposed strategy and reveal the role of multiple pheromones. The feasibility of the ColCOS{\ensuremath{\Phi}} platform, and its potential for further robotic research on multiple pheromones, are also verified. Our study of using different pheromones for one collective swarm robotics task may help or inspire biologists in research on real insects.}
    }
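
    A minimal sketch of the two-pheromone idea described above: an attractive field diffused from a goal deposit and a repellent field deposited by robots, combined into a score that a robot greedily climbs. Grid size, diffusion rate, evaporation and deposit strengths are all invented for illustration.

      # Two artificial pheromone fields on a grid: attraction diffused from a
      # goal deposit, repulsion deposited by robots; a robot greedily moves to
      # the neighbouring cell with the best combined score.
      import numpy as np

      H = W = 20
      attractive = np.zeros((H, W))
      attractive[10, 15] = 10.0                       # goal deposit
      repellent = np.zeros((H, W))

      def diffuse(field, rate=0.2, evaporation=0.9):
          out = field.copy()                          # share with 4 neighbours
          out[1:, :] += rate * field[:-1, :]
          out[:-1, :] += rate * field[1:, :]
          out[:, 1:] += rate * field[:, :-1]
          out[:, :-1] += rate * field[:, 1:]
          return out * evaporation                    # then evaporate

      for _ in range(30):
          attractive = diffuse(attractive)

      robot = (10, 2)
      repellent[robot] += 5.0                         # robots repel each other
      score = attractive - repellent
      y, x = robot
      neighbours = [(y + dy, x + dx)
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                    if 0 <= y + dy < H and 0 <= x + dx < W]
      robot = max(neighbours, key=lambda c: score[c])
      print(robot)  # steps towards the attractive deposit: (10, 3)
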
  • C. Hu, C. Xiong, J. Peng, and S. Yue, “Coping with multiple visual motion cues under extremely constrained computation power of micro autonomous robots,” Ieee access, vol. 8, p. 159050–159066, 2020. doi:10.1109/ACCESS.2020.3016893
    [BibTeX] [Abstract] [Download PDF]

    The perception of different visual motion cues is crucial for autonomous mobile robots to react to or interact with the dynamic visual world. It is still a great challenge for a micro mobile robot to cope with dynamic environments due to the restricted computational resources and the limited functionalities of its visual systems. In this study, we propose a compound visual neural system to automatically extract and fuse different visual motion cues in real-time using the extremely constrained computation power of micro mobile robots. The proposed visual system contains multiple bio-inspired visual motion perceptive neurons each with a unique role, for example to extract collision visual cues, darker collision cues and directional motion cues. In the embedded system, these multiple visual neurons share a similar presynaptic network to minimise the consumption of computation resources. In the postsynaptic part of the system, visual cues pass results to corresponding action neurons using a lateral inhibition mechanism. The translational motion cues, which are identified by comparing pairs of directional cues, are given the highest priority, followed by the darker colliding cues and approaching cues. Systematic experiments with both virtual visual stimuli and real-world scenarios have been carried out to validate the system's functionality and reliability. The proposed methods have demonstrated that (1) with extremely limited computation power, it is still possible for a micro mobile robot to extract multiple visual motion cues robustly in a complex dynamic environment; (2) the cues extracted can be fused with a laterally inhibited postsynaptic network, thus enabling the micro robots to respond effectively with different actions, according to different states, in real-time. The proposed embedded visual system has been modularised and can be easily implemented in other autonomous mobile platforms for real-time applications. The system could also be used by neurophysiologists to test new hypotheses pertaining to biological visual neural systems.

    @article{lincoln43658,
    volume = {8},
    month = {September},
    author = {Cheng Hu and Caihua Xiong and Jigen Peng and Shigang Yue},
    title = {Coping With Multiple Visual Motion Cues Under Extremely Constrained Computation Power of Micro Autonomous Robots},
    publisher = {IEEE},
    year = {2020},
    journal = {IEEE Access},
    doi = {10.1109/ACCESS.2020.3016893},
    pages = {159050--159066},
    url = {https://eprints.lincoln.ac.uk/id/eprint/43658/},
    abstract = {The perception of different visual motion cues is crucial for autonomous mobile robots to react to or interact with the dynamic visual world. It is still a great challenge for a micro mobile robot to cope with dynamic environments due to the restricted computational resources and the limited functionalities of its visual systems. In this study, we propose a compound visual neural system to automatically extract and fuse different visual motion cues in real-time using the extremely constrained computation power of micro mobile robots. The proposed visual system contains multiple bio-inspired visual motion perceptive neurons each with a unique role, for example to extract collision visual cues, darker collision cues and directional motion cues. In the embedded system, these multiple visual neurons share a similar presynaptic network to minimise the consumption of computation resources. In the postsynaptic part of the system, visual cues pass results to corresponding action neurons using a lateral inhibition mechanism. The translational motion cues, which are identified by comparing pairs of directional cues, are given the highest priority, followed by the darker colliding cues and approaching cues. Systematic experiments with both virtual visual stimuli and real-world scenarios have been carried out to validate the system's functionality and reliability. The proposed methods have demonstrated that (1) with extremely limited computation power, it is still possible for a micro mobile robot to extract multiple visual motion cues robustly in a complex dynamic environment; (2) the cues extracted can be fused with a laterally inhibited postsynaptic network, thus enabling the micro robots to respond effectively with different actions, according to different states, in real-time. The proposed embedded visual system has been modularised and can be easily implemented in other autonomous mobile platforms for real-time applications. The system could also be used by neurophysiologists to test new hypotheses pertaining to biological visual neural systems.}
    }
  • N. Andreakos, S. Yue, and V. Cutsuridis, “Improving recall in an associative neural network model of the hippocampus,” in 9th international conference, living machines 2020, 2020.
    [BibTeX] [Abstract] [Download PDF]

    The mammalian hippocampus is involved in auto-association and hetero-association of declarative memories. We employed a bio-inspired neural model of the hippocampal CA1 region to systematically evaluate its mean recall quality against different numbers of stored patterns, overlaps and active cells per pattern. The model consisted of excitatory (pyramidal) cells and four types of inhibitory cells: axo-axonic, basket, bistratified, and oriens lacunosum-moleculare cells. Cells were simplified compartmental models with complex ion channel dynamics. Cells' firing was timed to a theta oscillation paced by two distinct neuronal populations exhibiting highly regular bursting activity, one tightly coupled to the trough and the other to the peak of theta. During recall, excitatory input to network excitatory cells provided context and timing information for retrieval of previously stored memory patterns. Dendritic inhibition acted as a nonspecific global threshold machine that removed spurious activity during recall. Simulations showed recall quality improved when the network's memory capacity increased as the number of active cells per pattern decreased. Furthermore, an increased firing rate of a presynaptic inhibitory threshold machine inhibiting a network of postsynaptic excitatory cells has better success at removing spurious activity at the network level and improving recall quality than increased synaptic efficacy of the same threshold machine on the same network of excitatory cells, while keeping its firing rate fixed.

    @inproceedings{lincoln43365,
    booktitle = {9th International Conference, Living Machines 2020},
    month = {September},
    title = {Improving Recall in an Associative Neural Network Model of the Hippocampus},
    author = {Nikolas Andreakos and Shigang Yue and Vassilis Cutsuridis},
    publisher = {Springer Nature},
    year = {2020},
    url = {https://eprints.lincoln.ac.uk/id/eprint/43365/},
    abstract = {The mammalian hippocampus is involved in auto-association and hetero-association of declarative memories. We employed a bio-inspired neural model of the hippocampal CA1 region to systematically evaluate its mean recall quality against different numbers of stored patterns, overlaps and active cells per pattern. The model consisted of excitatory (pyramidal) cells and four types of inhibitory cells: axo-axonic, basket, bistratified, and oriens lacunosum-moleculare cells. Cells were simplified compartmental models with complex ion channel dynamics. Cells' firing was timed to a theta oscillation paced by two distinct neuronal populations exhibiting highly regular bursting activity, one tightly coupled to the trough and the other to the peak of theta. During recall, excitatory input to network excitatory cells provided context and timing information for retrieval of previously stored memory patterns. Dendritic inhibition acted as a nonspecific global threshold machine that removed spurious activity during recall. Simulations showed recall quality improved when the network's memory capacity increased as the number of active cells per pattern decreased. Furthermore, an increased firing rate of a presynaptic inhibitory threshold machine inhibiting a network of postsynaptic excitatory cells has better success at removing spurious activity at the network level and improving recall quality than increased synaptic efficacy of the same threshold machine on the same network of excitatory cells, while keeping its firing rate fixed.}
    }
  • S. Parsa, D. Kamale, S. Mghames, K. Nazari, T. Pardi, A. Srinivasan, G. Neumann, M. Hanheide, and A. G. Esfahani, “Haptic-guided shared control grasping: collision-free manipulation,” in Case 2020- international conference on automation science and engineering, 2020. doi:10.1109/CASE48305.2020.9216789
    [BibTeX] [Abstract] [Download PDF]

    We propose a haptic-guided shared control system that provides an operator with force cues during the reach-to-grasp phase of tele-manipulation. The force cues inform the operator of a grasping configuration which allows collision-free autonomous post-grasp movements. Previous studies showed haptic-guided shared control significantly reduces the complexities of teleoperation. We propose two architectures of shared control in which the operator is informed about (1) the local gradient of the collision cost, and (2) the grasping configuration suitable for collision-free movements of an aimed pick-and-place task. We demonstrate the efficiency of our proposed shared control systems by a series of experiments with a Franka Emika robot. Our experimental results illustrate that our shared control systems successfully inform the operator of predicted collisions between the robot and an obstacle in the robot's workspace. We learned that informing the operator of the global information about the grasping configuration associated with minimum collision cost of post-grasp movements results in a reach-to-grasp time much shorter than the case in which the operator is informed about the local-gradient information of the collision cost.

    @inproceedings{lincoln41283,
    month = {August},
    author = {Soran Parsa and Disha Kamale and Sariah Mghames and Kiyanoush Nazari and Tommaso Pardi and Aravinda Srinivasan and Gerhard Neumann and Marc Hanheide and Amir Ghalamzan Esfahani},
    booktitle = {CASE 2020- International Conference on Automation Science and Engineering},
    title = {Haptic-guided shared control grasping: collision-free manipulation},
    publisher = {IEEE},
    journal = {International Conference on Automation Science and Engineering (CASE)},
    doi = {10.1109/CASE48305.2020.9216789},
    year = {2020},
    url = {https://eprints.lincoln.ac.uk/id/eprint/41283/},
    abstract = {We propose a haptic-guided shared control system that provides an operator with force cues during the reach-to-grasp phase of tele-manipulation. The force cues inform the operator of a grasping configuration that allows collision-free autonomous post-grasp movements. Previous studies showed haptic-guided shared control significantly reduces the complexity of teleoperation. We propose two architectures of shared control in which the operator is informed about (1) the local gradient of the collision cost, and (2) the grasping configuration suitable for collision-free movements of an aimed pick-and-place task. We demonstrate the efficiency of our proposed shared control systems by a series of experiments with a Franka Emika robot. Our experimental results illustrate that our shared control systems successfully inform the operator of predicted collisions between the robot and an obstacle in the robot's workspace. We learned that informing the operator of the global information about the grasping configuration associated with the minimum collision cost of post-grasp movements results in a reach-to-grasp time much shorter than the case in which the operator is informed about the local-gradient information of the collision cost.}
    }
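    A minimal sketch of the local-gradient force-cue idea from the abstract above, assuming a scalar collision-cost field over gripper positions; the cost function, gain and obstacle position are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def collision_cost(p, obstacle=np.array([0.5, 0.0, 0.3]), scale=0.1):
        """Illustrative collision cost, rising as the gripper nears an obstacle."""
        return np.exp(-np.linalg.norm(p - obstacle) ** 2 / scale)

    def force_cue(p, gain=5.0, eps=1e-4):
        """Haptic force cue as the negative numerical gradient of the cost."""
        grad = np.zeros(3)
        for i in range(3):
            dp = np.zeros(3)
            dp[i] = eps
            grad[i] = (collision_cost(p + dp) - collision_cost(p - dp)) / (2 * eps)
        return -gain * grad  # pushes the operator's hand away from rising cost

    print(force_cue(np.array([0.45, 0.0, 0.3])))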
  • G. Bosworth, C. Fox, L. Price, and M. Collison, “The future of rural mobility study (forms),” Midlands Connect, Project Report , 2020.
    [BibTeX] [Abstract] [Download PDF]

    Recognising the urban focus of many national and regional transport strategies, the purpose of this project is to explore how emerging technologies could support rural economies across the Midlands. Fundamentally, the rationale for the study is to begin with an assessment of rural needs and then to explore a range of mobility innovations, including social innovations as well as technologies, that can provide place-based solutions designed for more rural areas. This avoids the National Transport Strategy assumption that new mobility innovations will inevitably occur in urban areas and then be rolled out across more rural places. While economic realities mean that many private sector transport innovations can start out in urban centres, their rural impacts may be quite different and require alternative responses from rural planners and policy-makers.

    @techreport{lincoln42273,
    month = {August},
    type = {Project Report},
    title = {The Future of Rural Mobility Study (FoRMS)},
    author = {Gary Bosworth and Charles Fox and Liz Price and Martin Collison},
    publisher = {Midlands Connect},
    year = {2020},
    institution = {Midlands Connect},
    url = {https://eprints.lincoln.ac.uk/id/eprint/42273/},
    abstract = {Recognising the urban focus of many national and regional transport strategies, the purpose of this project is to explore how emerging technologies could support rural economies across the Midlands. Fundamentally, the rationale for the study is to begin with an assessment of rural needs and then to explore a range of mobility innovations, including social innovations as well as technologies, that can provide place-based solutions designed for more rural areas. This avoids the National Transport Strategy assumption that new mobility innovations will inevitably occur in urban areas and then be rolled out across more rural places. While economic realities mean that many private sector transport innovations can start out in urban centres, their rural impacts may be quite different and require alternative responses from rural planners and policy-makers.}
    }
  • S. Cosar, M. Fernandez-Carmona, R. Agrigoroaie, J. Pages, F. Ferland, F. Zhao, S. Yue, N. Bellotto, and A. Tapus, “Enrichme: perception and interaction of an assistive robot for the elderly at home,” International journal of social robotics, vol. 12, iss. 3, p. 779–805, 2020. doi:10.1007/s12369-019-00614-y
    [BibTeX] [Abstract] [Download PDF]

    Recent technological advances enabled modern robots to become part of our daily life. In particular, assistive robotics emerged as an exciting research topic that can provide solutions to improve the quality of life of elderly and vulnerable people. This paper introduces the robotic platform developed in the ENRICHME project, with particular focus on its innovative perception and interaction capabilities. The project's main goal is to enrich the day-to-day experience of elderly people at home with technologies that enable health monitoring, complementary care, and social support. The paper presents several modules created to provide cognitive stimulation services for elderly users with mild cognitive impairments. The ENRICHME robot was tested in three pilot sites around Europe (Poland, Greece, and UK) and proven to be an effective assistant for the elderly at home.

    @article{lincoln39037,
    volume = {12},
    number = {3},
    month = {July},
    author = {Serhan Cosar and Manuel Fernandez-Carmona and Roxana Agrigoroaie and Jordi Pages and Francois Ferland and Feng Zhao and Shigang Yue and Nicola Bellotto and Adriana Tapus},
    title = {ENRICHME: Perception and Interaction of an Assistive Robot for the Elderly at Home},
    publisher = {Springer},
    year = {2020},
    journal = {International Journal of Social Robotics},
    doi = {10.1007/s12369-019-00614-y},
    pages = {779--805},
    url = {https://eprints.lincoln.ac.uk/id/eprint/39037/},
    abstract = {Recent technological advances enabled modern robots to become part of our daily life. In particular, assistive robotics emerged as an exciting research topic that can provide solutions to improve the quality of life of elderly and vulnerable people. This paper introduces the robotic platform developed in the ENRICHME project, with particular focus on its innovative perception and interaction capabilities. The project's main goal is to enrich the day-to-day experience of elderly people at home with technologies that enable health monitoring, complementary care, and social support. The paper presents several modules created to provide cognitive stimulation services for elderly users with mild cognitive impairments. The ENRICHME robot was tested in three pilot sites around Europe (Poland, Greece, and UK) and proven to be an effective assistant for the elderly at home.}
    }
  • I. J. Gould, I. Wright, M. Collison, E. Ruto, G. Bosworth, and S. Pearson, “The impact of coastal flooding on agriculture: a case study of lincolnshire, united kingdom,” Land degradation & development, vol. 31, iss. 12, p. 1545–1559, 2020. doi:10.1002/ldr.3551
    [BibTeX] [Abstract] [Download PDF]

    Under future climate predictions the incidence of coastal flooding is set to rise. Many coastal regions at risk, such as those surrounding the North Sea, comprise large areas of low-lying and productive agricultural land. Flood risk assessments typically emphasise the economic consequences of coastal flooding on urban areas and national infrastructure. Impacts on agricultural land have seen less attention, and considerations tend to omit the long term effects of soil salinity. The aim of this study is to develop a universal framework to evaluate the economic impact of coastal flooding on agriculture. We incorporated existing flood models, satellite-acquired crop data, soil salinity and crop sensitivity to give a novel and detailed assessment of salt damage to agricultural productivity over time. We focussed our case study on low-lying, highly productive agricultural land with a history of flooding in Lincolnshire, UK. The potential impact of agricultural flood damage varied across our study region. Assuming typical cropping does not change post-flood, financial losses range from £1,366/ha to £5,526/ha per inundation; these losses would be reduced by between 35% and 85% in the likely event that an alternative, more salt-tolerant cropping regime is implemented post-flood. These losses are substantially higher than losses calculated on the same areas using the established flood risk assessment framework conventionally used for freshwater flood assessments, with differences attributed to our longer term salt damage projections impacting over several years. This suggests flood protection policy needs to consider local and long term impacts of flooding on agricultural land.

    @article{lincoln40049,
    volume = {31},
    number = {12},
    month = {July},
    author = {Iain J Gould and Isobel Wright and Martin Collison and Eric Ruto and Gary Bosworth and Simon Pearson},
    title = {The impact of coastal flooding on agriculture: a case study of Lincolnshire, United Kingdom},
    publisher = {Wiley},
    year = {2020},
    journal = {Land Degradation \& Development},
    doi = {10.1002/ldr.3551},
    pages = {1545--1559},
    url = {https://eprints.lincoln.ac.uk/id/eprint/40049/},
    abstract = {Under future climate predictions the incidence of coastal flooding is set to rise. Many coastal regions at risk, such as those surrounding the North Sea, comprise large areas of low-lying and productive agricultural land. Flood risk assessments typically emphasise the economic consequences of coastal flooding on urban areas and national infrastructure. Impacts on agricultural land have seen less attention, and considerations tend to omit the long term effects of soil salinity. The aim of this study is to develop a universal framework to evaluate the economic impact of coastal flooding on agriculture. We incorporated existing flood models, satellite-acquired crop data, soil salinity and crop sensitivity to give a novel and detailed assessment of salt damage to agricultural productivity over time. We focussed our case study on low-lying, highly productive agricultural land with a history of flooding in Lincolnshire, UK. The potential impact of agricultural flood damage varied across our study region. Assuming typical cropping does not change post-flood, financial losses range from {\pounds}1,366/ha to {\pounds}5,526/ha per inundation; these losses would be reduced by between 35\% and 85\% in the likely event that an alternative, more salt-tolerant cropping regime is implemented post-flood. These losses are substantially higher than losses calculated on the same areas using the established flood risk assessment framework conventionally used for freshwater flood assessments, with differences attributed to our longer term salt damage projections impacting over several years. This suggests flood protection policy needs to consider local and long term impacts of flooding on agricultural land.}
    }
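    As a quick reading of the loss figures quoted in the abstract above (illustrative arithmetic only, using no data beyond the numbers cited):

    # Per-inundation losses quoted in the abstract, and the 35-85% reduction
    # attributed to switching to a more salt-tolerant cropping regime post-flood.
    for loss in (1366, 5526):  # GBP/ha
        lo, hi = round(loss * (1 - 0.85)), round(loss * (1 - 0.35))
        print(f"£{loss}/ha -> £{lo} to £{hi}/ha with a salt-tolerant regime")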
  • F. Camara, N. Bellotto, S. Cosar, D. Nathanael, M. Althoff, J. Wu, J. Ruenz, A. Dietrich, and C. Fox, “Pedestrian models for autonomous driving part i: low-level models, from sensing to tracking,” Ieee transactions on intelligent transport systems, 2020. doi:10.1109/TITS.2020.3006768
    [BibTeX] [Abstract] [Download PDF]

    Autonomous vehicles (AVs) must share space with pedestrians, both in carriageway cases such as cars at pedestrian crossings and off-carriageway cases such as delivery vehicles navigating through crowds on pedestrianized high-streets. Unlike static obstacles, pedestrians are active agents with complex, interactive motions. Planning AV actions in the presence of pedestrians thus requires modelling of their probable future behaviour as well as detecting and tracking them. This narrative review article is Part I of a pair, together surveying the current technology stack involved in this process, organising recent research into a hierarchical taxonomy ranging from low-level image detection to high-level psychology models, from the perspective of an AV designer. This self-contained Part I covers the lower levels of this stack, from sensing, through detection and recognition, up to tracking of pedestrians. Technologies at these levels are found to be mature and available as foundations for use in high-level systems, such as behaviour modelling, prediction and interaction control.

    @article{lincoln41705,
    month = {July},
    title = {Pedestrian Models for Autonomous Driving Part I: Low-Level Models, from Sensing to Tracking},
    author = {Fanta Camara and Nicola Bellotto and Serhan Cosar and Dimitris Nathanael and Matthias Althoff and Jingyuan Wu and Johannes Ruenz and Andre Dietrich and Charles Fox},
    publisher = {IEEE},
    year = {2020},
    doi = {10.1109/TITS.2020.3006768},
    journal = {IEEE Transactions on Intelligent Transport Systems},
    url = {https://eprints.lincoln.ac.uk/id/eprint/41705/},
    abstract = {Autonomous vehicles (AVs) must share space with pedestrians, both in carriageway cases such as cars at pedestrian crossings and off-carriageway cases such as delivery vehicles navigating through crowds on pedestrianized high-streets. Unlike static obstacles, pedestrians are active agents with complex, interactive motions. Planning AV actions in the presence of pedestrians thus requires modelling of their probable future behaviour as well as detecting and tracking them. This narrative review article is Part I of a pair, together surveying the current technology stack involved in this process, organising recent research into a hierarchical taxonomy ranging from low-level image detection to high-level psychology models, from the perspective of an AV designer. This self-contained Part I covers the lower levels of this stack, from sensing, through detection and recognition, up to tracking of pedestrians. Technologies at these levels are found to be mature and available as foundations for use in high-level systems, such as behaviour modelling, prediction and interaction control.}
    }
  • F. Camara, N. Bellotto, S. Cosar, F. Weber, D. Nathanael, M. Althoff, J. Wu, J. Ruenz, A. Dietrich, G. Markkula, A. Schieben, F. Tango, N. Merat, and C. Fox, “Pedestrian models for autonomous driving part ii: high-level models of human behavior,” Ieee transactions on intelligent transport systems, 2020. doi:10.1109/TITS.2020.3006767
    [BibTeX] [Abstract] [Download PDF]

    Autonomous vehicles (AVs) must share space with pedestrians, both in carriageway cases such as cars at pedestrian crossings and off-carriageway cases such as delivery vehicles navigating through crowds on pedestrianized high-streets. Unlike static obstacles, pedestrians are active agents with complex, interactive motions. Planning AV actions in the presence of pedestrians thus requires modelling of their probable future behaviour as well as detecting and tracking them. This narrative review article is Part II of a pair, together surveying the current technology stack involved in this process, organising recent research into a hierarchical taxonomy ranging from low-level image detection to high-level psychological models, from the perspective of an AV designer. This self-contained Part II covers the higher levels of this stack, consisting of models of pedestrian behaviour, from prediction of individual pedestrians' likely destinations and paths, to game-theoretic models of interactions between pedestrians and autonomous vehicles. This survey clearly shows that, although there are good models for optimal walking behaviour, high-level psychological and social modelling of pedestrian behaviour still remains an open research question that requires many conceptual issues to be clarified. Early work has been done on descriptive and qualitative models of behaviour, but much work is still needed to translate them into quantitative algorithms for practical AV control.

    @article{lincoln41706,
    month = {July},
    title = {Pedestrian Models for Autonomous Driving Part II: High-Level Models of Human Behavior},
    author = {Fanta Camara and Nicola Bellotto and Serhan Cosar and Florian Weber and Dimitris Nathanael and Matthias Althoff and Jingyuan Wu and Johannes Ruenz and Andre Dietrich and Gustav Markkula and Anna Schieben and Fabio Tango and Natasha Merat and Charles Fox},
    publisher = {IEEE},
    year = {2020},
    doi = {10.1109/TITS.2020.3006767},
    journal = {IEEE Transactions on Intelligent Transport Systems},
    url = {https://eprints.lincoln.ac.uk/id/eprint/41706/},
    abstract = {Autonomous vehicles (AVs) must share space with pedestrians, both in carriageway cases such as cars at pedestrian crossings and off-carriageway cases such as delivery vehicles navigating through crowds on pedestrianized high-streets. Unlike static obstacles, pedestrians are active agents with complex, interactive motions. Planning AV actions in the presence of pedestrians thus requires modelling of their probable future behaviour as well as detecting and tracking them. This narrative review article is Part II of a pair, together surveying the current technology stack involved in this process, organising recent research into a hierarchical taxonomy ranging from low-level image detection to high-level psychological models, from the perspective of an AV designer. This self-contained Part II covers the higher levels of this stack, consisting of models of pedestrian behaviour, from prediction of individual pedestrians' likely destinations and paths, to game-theoretic models of interactions between pedestrians and autonomous vehicles. This survey clearly shows that, although there are good models for optimal walking behaviour, high-level psychological and social modelling of pedestrian behaviour still remains an open research question that requires many conceptual issues to be clarified. Early work has been done on descriptive and qualitative models of behaviour, but much work is still needed to translate them into quantitative algorithms for practical AV control.}
    }
  • H. Isakhani, C. Xiong, S. Yue, and W. Chen, “A bioinspired airfoil optimization technique using nash genetic algorithm,” in 2020 17th international conference on ubiquitous robots (ur), 2020, p. 506–513. doi:10.1109/UR49135.2020.9144868
    [BibTeX] [Abstract] [Download PDF]

    Natural fliers glide and minimize wing articulation to conserve energy for enduring, long-range flights. Elucidating the underlying physiology of such capability could potentially address numerous challenging problems in flight engineering. However, the primitive nature of bioinspired research impedes such achievements; hence, to bypass these limitations, this study introduces a bioinspired non-cooperative multiple-objective optimization methodology based on a novel fusion of PARSEC, Nash strategy, and genetic algorithms to achieve insect-level aerodynamic efficiencies. The proposed technique is validated on a conventional airfoil as well as the wing cross-section of a desert locust (Schistocerca gregaria) at low Reynolds number, and we have recorded a 77% improvement in its gliding ratio.

    @inproceedings{lincoln43819,
    month = {July},
    author = {Hamid Isakhani and Caihua Xiong and Shigang Yue and Wenbin Chen},
    booktitle = {2020 17th International Conference on Ubiquitous Robots (UR)},
    title = {A Bioinspired Airfoil Optimization Technique Using Nash Genetic Algorithm},
    publisher = {IEEE},
    doi = {10.1109/UR49135.2020.9144868},
    pages = {506--513},
    year = {2020},
    url = {https://eprints.lincoln.ac.uk/id/eprint/43819/},
    abstract = {Natural fliers glide and minimize wing articulation to conserve energy for enduring, long-range flights. Elucidating the underlying physiology of such capability could potentially address numerous challenging problems in flight engineering. However, the primitive nature of bioinspired research impedes such achievements; hence, to bypass these limitations, this study introduces a bioinspired non-cooperative multiple-objective optimization methodology based on a novel fusion of PARSEC, Nash strategy, and genetic algorithms to achieve insect-level aerodynamic efficiencies. The proposed technique is validated on a conventional airfoil as well as the wing cross-section of a desert locust (Schistocerca gregaria) at low Reynolds number, and we have recorded a 77\% improvement in its gliding ratio.}
    }
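    A minimal sketch of the Nash genetic algorithm idea described in the abstract above: two populations each evolve their own share of the design variables against the other's current best, iterating toward a Nash equilibrium. The objective is a toy stand-in for the aerodynamic (gliding-ratio) evaluation, and the one-variable-per-player split is an illustrative assumption.

    import random

    def objective(x, y):
        # Toy stand-in for an aerodynamic fitness such as a gliding ratio.
        return -((x - 0.3) ** 2 + (y - 0.7) ** 2)

    def evolve(pop, fitness, rate=0.1):
        # Keep the fitter half, then refill with mutated copies.
        pop = sorted(pop, key=fitness, reverse=True)[: len(pop) // 2]
        return pop + [p + random.gauss(0, rate) for p in pop]

    px = [random.random() for _ in range(20)]  # player 1 owns variable x
    py = [random.random() for _ in range(20)]  # player 2 owns variable y
    best_x, best_y = px[0], py[0]
    for _ in range(50):
        # Each player optimises its own variable with the other's best frozen.
        px = evolve(px, lambda x: objective(x, best_y))
        py = evolve(py, lambda y: objective(best_x, y))
        best_x = max(px, key=lambda x: objective(x, best_y))
        best_y = max(py, key=lambda y: objective(best_x, y))
    print(best_x, best_y)  # settles near the equilibrium point (0.3, 0.7)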
  • X. Sun, S. Yue, and M. Mangan, “A decentralised neural model explaining optimal integration of navigational strategies in insects,” Elife, vol. 9, 2020. doi:10.7554/eLife.54026
    [BibTeX] [Abstract] [Download PDF]

    Insect navigation arises from the coordinated action of concurrent guidance systems, but the neural mechanisms through which each functions, and are then coordinated, remain unknown. We propose that insects require distinct strategies to retrace familiar routes (route-following) and directly return from novel to familiar terrain (homing) using different aspects of frequency encoded views that are processed in different neural pathways. We also demonstrate how the Central Complex and Mushroom Bodies regions of the insect brain may work in tandem to coordinate the directional output of different guidance cues through a contextually switched ring-attractor inspired by neural recordings. The resultant unified model of insect navigation reproduces behavioural data from a series of cue conflict experiments in realistic animal environments and offers testable hypotheses of where and how insects process visual cues, utilise the different information that they provide and coordinate their outputs to achieve the adaptive behaviours observed in the wild.

    @article{lincoln41703,
    volume = {9},
    month = {July},
    author = {Xuelong Sun and Shigang Yue and Michael Mangan},
    title = {A decentralised neural model explaining optimal integration of navigational strategies in insects},
    publisher = {eLife Sciences Publications},
    journal = {eLife},
    doi = {10.7554/eLife.54026},
    year = {2020},
    url = {https://eprints.lincoln.ac.uk/id/eprint/41703/},
    abstract = {Insect navigation arises from the coordinated action of concurrent guidance systems, but the neural mechanisms through which each functions, and are then coordinated, remain unknown. We propose that insects require distinct strategies to retrace familiar routes (route-following) and directly return from novel to familiar terrain (homing) using different aspects of frequency encoded views that are processed in different neural pathways. We also demonstrate how the Central Complex and Mushroom Bodies regions of the insect brain may work in tandem to coordinate the directional output of different guidance cues through a contextually switched ring-attractor inspired by neural recordings. The resultant unified model of insect navigation reproduces behavioural data from a series of cue conflict experiments in realistic animal environments and offers testable hypotheses of where and how insects process visual cues, utilise the different information that they provide and coordinate their outputs to achieve the adaptive behaviours observed in the wild.}
    }
  • H. Cuayahuitl, “A data-efficient deep learning approach for deployable multimodal social robots,” Neurocomputing, vol. 396, p. 587–598, 2020. doi:10.1016/j.neucom.2018.09.104
    [BibTeX] [Abstract] [Download PDF]

    The deep supervised and reinforcement learning paradigms (among others) have the potential to endow interactive multimodal social robots with the ability of acquiring skills autonomously. But it is still not very clear how they can be best deployed in real world applications. As a step in this direction, we propose a deep learning-based approach for efficiently training a humanoid robot to play multimodal games, using the game of 'Noughts & Crosses' with two variants as a case study. Its minimum requirements for learning to perceive and interact are based on a few hundred example images, a few example multimodal dialogues and physical demonstrations of robot manipulation, and automatic simulations. In addition, we propose novel algorithms for robust visual game tracking and for competitive policy learning with high winning rates, which substantially outperform DQN-based baselines. While an automatic evaluation shows evidence that the proposed approach can be easily extended to new games with competitive robot behaviours, a human evaluation with 130 humans playing with the Pepper robot confirms that highly accurate visual perception is required for successful game play.

    @article{lincoln42805,
    volume = {396},
    month = {July},
    author = {Heriberto Cuayahuitl},
    note = {The final published version of this article can be accessed online at https://www.journals.elsevier.com/neurocomputing/},
    title = {A Data-Efficient Deep Learning Approach for Deployable Multimodal Social Robots},
    publisher = {Elsevier},
    year = {2020},
    journal = {Neurocomputing},
    doi = {10.1016/j.neucom.2018.09.104},
    pages = {587--598},
    url = {https://eprints.lincoln.ac.uk/id/eprint/42805/},
    abstract = {The deep supervised and reinforcement learning paradigms (among others) have the potential to endow interactive multimodal social robots with the ability of acquiring skills autonomously. But it is still not very clear how they can be best deployed in real world applications. As a step in this direction, we propose a deep learning-based approach for efficiently training a humanoid robot to play multimodal games, using the game of `Noughts \& Crosses' with two variants as a case study. Its minimum requirements for learning to perceive and interact are based on a few hundred example images, a few example multimodal dialogues and physical demonstrations of robot manipulation, and automatic simulations. In addition, we propose novel algorithms for robust visual game tracking and for competitive policy learning with high winning rates, which substantially outperform DQN-based baselines. While an automatic evaluation shows evidence that the proposed approach can be easily extended to new games with competitive robot behaviours, a human evaluation with 130 humans playing with the {\it Pepper} robot confirms that highly accurate visual perception is required for successful game play.}
    }
  • Q. Fu and S. Yue, “Modelling drosophila motion vision pathways for decoding the direction of translating objects against cluttered moving backgrounds,” Biological cybernetics, 2020. doi:10.1007/s00422-020-00841-x
    [BibTeX] [Abstract] [Download PDF]

    Decoding the direction of translating objects in front of cluttered moving backgrounds, accurately and efficiently, is still a challenging problem. In nature, lightweight and low-powered flying insects apply motion vision to detect a moving target in highly variable environments during flight, which are excellent paradigms to learn motion perception strategies. This paper investigates the fruit fly Drosophila motion vision pathways and presents computational modelling based on cutting-edge physiological research. The proposed visual system model features bio-plausible ON and OFF pathways, and wide-field horizontal-sensitive (HS) and vertical-sensitive (VS) systems. The main contributions of this research are on two aspects: (1) the proposed model articulates the forming of both direction-selective and direction-opponent responses, revealed as principal features of motion perception neural circuits, in a feed-forward manner; (2) it also shows robust direction selectivity to translating objects in front of cluttered moving backgrounds, via the modelling of spatiotemporal dynamics including a combination of motion pre-filtering mechanisms and ensembles of local correlators inside both the ON and OFF pathways, which works effectively to suppress irrelevant background motion or distractors, and to improve the dynamic response. Accordingly, the direction of translating objects is decoded as global responses of both the HS and VS systems, with positive or negative output indicating preferred-direction or null-direction translation. The experiments have verified the effectiveness of the proposed neural system model, and demonstrated its responsive preference to faster-moving, higher-contrast and larger-size targets embedded in cluttered moving backgrounds.

    @article{lincoln42133,
    month = {July},
    title = {Modelling Drosophila motion vision pathways for decoding the direction of translating objects against cluttered moving backgrounds},
    author = {Qinbing Fu and Shigang Yue},
    publisher = {Springer},
    year = {2020},
    doi = {10.1007/s00422-020-00841-x},
    journal = {Biological Cybernetics},
    url = {https://eprints.lincoln.ac.uk/id/eprint/42133/},
    abstract = {Decoding the direction of translating objects in front of cluttered moving backgrounds, accurately and efficiently, is still a challenging problem. In nature, lightweight and low-powered flying insects apply motion vision to detect a moving target in highly variable environments during flight, which are excellent paradigms to learn motion perception strategies. This paper investigates the fruit fly Drosophila motion vision pathways and presents computational modelling based on cutting-edge physiological research. The proposed visual system model features bio-plausible ON and OFF pathways, and wide-field horizontal-sensitive (HS) and vertical-sensitive (VS) systems. The main contributions of this research are on two aspects: (1) the proposed model articulates the forming of both direction-selective and direction-opponent responses, revealed as principal features of motion perception neural circuits, in a feed-forward manner; (2) it also shows robust direction selectivity to translating objects in front of cluttered moving backgrounds, via the modelling of spatiotemporal dynamics including a combination of motion pre-filtering mechanisms and ensembles of local correlators inside both the ON and OFF pathways, which works effectively to suppress irrelevant background motion or distractors, and to improve the dynamic response. Accordingly, the direction of translating objects is decoded as global responses of both the HS and VS systems, with positive or negative output indicating preferred-direction or null-direction translation. The experiments have verified the effectiveness of the proposed neural system model, and demonstrated its responsive preference to faster-moving, higher-contrast and larger-size targets embedded in cluttered moving backgrounds.}
    }
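    The "local correlators" in the abstract above are commonly modelled as Hassenstein-Reichardt elementary motion detectors; a minimal sketch of such a correlator on a drifting 1D pattern follows (the delay and stimulus are illustrative, not the paper's parameters).

    import numpy as np

    def hr_correlator(signal, delay=1):
        """Hassenstein-Reichardt correlator over an array of photoreceptor
        responses with shape (time, space); positive mean output indicates
        motion towards increasing x."""
        delayed = np.roll(signal, delay, axis=0)   # the delayed arm
        left, right = signal[:, :-1], signal[:, 1:]
        dleft, dright = delayed[:, :-1], delayed[:, 1:]
        return dleft * right - dright * left       # mirrored subtraction

    t, x = np.meshgrid(np.arange(50), np.arange(32), indexing='ij')
    moving = np.sin(0.4 * (x - 0.4 * t))           # pattern drifting in +x
    print(hr_correlator(moving).mean() > 0)        # True: rightward motion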
  • M. Sorour, K. Elgeneidy, M. Hanheide, and A. Srinivasan, “Enhancing grasp pose computation in gripper workspace spheres,” in Icra 2020, 2020.
    [BibTeX] [Abstract] [Download PDF]

    In this paper, an enhancement to the novel grasp planning algorithm based on gripper workspace spheres is presented. Our development requires a registered point cloud of the target from different views, assuming no prior knowledge of the object, nor any of its properties. This work features a new set of metrics for grasp pose candidate evaluation, as well as exploring the impact of high object sampling on grasp success rates. In addition to gripper position sampling, we now perform orientation sampling about the x, y, and z-axes, hence the grasping algorithm no longer requires object orientation estimation. Successful experiments have been conducted on a simple jaw gripper (Franka Panda gripper) as well as a complex, high Degree of Freedom (DoF) hand (Allegro hand) as a proof of its versatility. Higher grasp success rates of 76% and 85.5%, respectively, have been reported from real world experiments.

    @inproceedings{lincoln39957,
    booktitle = {ICRA 2020},
    month = {July},
    title = {Enhancing Grasp Pose Computation in Gripper Workspace Spheres},
    author = {Mohamed Sorour and Khaled Elgeneidy and Marc Hanheide and Aravinda Srinivasan},
    year = {2020},
    url = {https://eprints.lincoln.ac.uk/id/eprint/39957/},
    abstract = {In this paper, an enhancement to the novel grasp planning algorithm based on gripper workspace spheres is presented. Our development requires a registered point cloud of the target from different views, assuming no prior knowledge of the object, nor any of its properties. This work features a new set of metrics for grasp pose candidate evaluation, as well as exploring the impact of high object sampling on grasp success rates. In addition to gripper position sampling, we now perform orientation sampling about the x, y, and z-axes, hence the grasping algorithm no longer requires object orientation estimation. Successful experiments have been conducted on a simple jaw gripper (Franka Panda gripper) as well as a complex, high Degree of Freedom (DoF) hand (Allegro hand) as a proof of its versatility. Higher grasp success rates of 76\% and 85.5\%, respectively, have been reported from real world experiments.}
    }
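    A minimal sketch of the orientation sampling step mentioned in the abstract above, generating candidate gripper orientations by sweeping rotations about the x, y and z axes (the angle count and range are illustrative assumptions):

    import numpy as np
    from scipy.spatial.transform import Rotation as R

    def sample_orientations(n=8):
        """Yield candidate gripper orientations as 3x3 rotation matrices,
        sampling n evenly spaced angles about each of the x, y and z axes."""
        angles = np.linspace(-np.pi, np.pi, n, endpoint=False)
        for axis in 'xyz':
            for a in angles:
                yield R.from_euler(axis, a).as_matrix()

    candidates = list(sample_orientations())
    print(len(candidates), candidates[0].shape)  # 24 candidates, each 3x3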
  • D. Liu, N. Bellotto, and S. Yue, “Deep spiking neural network for video-based disguise face recognition based on dynamic facial movements,” Ieee transactions on neural networks and learning systems, vol. 31, iss. 6, p. 1843–1855, 2020. doi:10.1109/TNNLS.2019.2927274
    [BibTeX] [Abstract] [Download PDF]

    With the increasing popularity of social media and smart devices, the face as one of the key biometrics becomes vital for person identification. Amongst those face recognition algorithms, video-based face recognition methods could make use of both temporal and spatial information just as humans do to achieve better classification performance. However, they cannot identify individuals when certain key facial areas like eyes or nose are disguised by heavy makeup or rubber/digital masks. To this end, we propose a novel deep spiking neural network architecture in this study. It takes dynamic facial movements, the facial muscle changes induced by speaking or other activities, as the sole input. An event-driven continuous spike-timing dependent plasticity learning rule with adaptive thresholding is applied to train the synaptic weights. The experiments on our proposed video-based disguise face database (MakeFace DB) demonstrate that the proposed learning method performs very well: it achieves from 95% to 100% correct classification rates under various realistic experimental scenarios.

    @article{lincoln41718,
    volume = {31},
    number = {6},
    month = {June},
    author = {Daqi Liu and Nicola Bellotto and Shigang Yue},
    title = {Deep Spiking Neural Network for Video-based Disguise Face Recognition Based on Dynamic Facial Movements},
    publisher = {IEEE},
    year = {2020},
    journal = {IEEE Transactions on Neural Networks and Learning Systems},
    doi = {10.1109/TNNLS.2019.2927274},
    pages = {1843--1855},
    url = {https://eprints.lincoln.ac.uk/id/eprint/41718/},
    abstract = {With the increasing popularity of social media and smart devices, the face as one of the key biometrics becomes vital for person identification. Amongst those face recognition algorithms, video-based face recognition methods could make use of both temporal and spatial information just as humans do to achieve better classification performance. However, they cannot identify individuals when certain key facial areas like eyes or nose are disguised by heavy makeup or rubber/digital masks. To this end, we propose a novel deep spiking neural network architecture in this study. It takes dynamic facial movements, the facial muscle changes induced by speaking or other activities, as the sole input. An event-driven continuous spike-timing dependent plasticity learning rule with adaptive thresholding is applied to train the synaptic weights. The experiments on our proposed video-based disguise face database (MakeFace DB) demonstrate that the proposed learning method performs very well: it achieves from 95\% to 100\% correct classification rates under various realistic experimental scenarios.}
    }
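    The spike-timing dependent plasticity rule mentioned above is, in its simplest pair-based form, a pair of exponential windows over the pre/post spike-time difference; a minimal sketch follows (the time constants and learning rates are illustrative, and the paper's event-driven, adaptive-threshold rule is more elaborate).

    import math

    def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
        """Pair-based STDP: potentiate when pre fires before post, depress otherwise."""
        dt = t_post - t_pre
        if dt > 0:
            w += a_plus * math.exp(-dt / tau)    # causal pairing: strengthen
        else:
            w -= a_minus * math.exp(dt / tau)    # anti-causal pairing: weaken
        return min(max(w, 0.0), 1.0)             # keep the weight bounded

    print(stdp_update(0.5, t_pre=10.0, t_post=15.0))  # pre before post: w rises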
  • J. L. Louedec, H. A. Montes, T. Duckett, and G. Cielniak, “Segmentation and detection from organised 3d point clouds: a case study in broccoli head detection,” in 2020 ieee/cvf conference on computer vision and pattern recognition workshops (cvprw), 2020, p. 285–293. doi:10.1109/CVPRW50498.2020.00040
    [BibTeX] [Abstract] [Download PDF]

    Autonomous harvesting is becoming an important challenge and necessity in agriculture, because of the lack of labour and the growth of population needing to be fed. Perception is a key aspect of autonomous harvesting and is very challenging due to difficult lighting conditions, limited sensing technologies, occlusions, plant growth, etc. 3D vision approaches can bring several benefits addressing the aforementioned challenges such as localisation, size estimation, occlusion handling and shape analysis. In this paper, we propose a novel approach using 3D information for detecting broccoli heads based on Convolutional Neural Networks (CNNs), exploiting the organised nature of the point clouds originating from the RGBD sensors. The proposed algorithm, tested on real-world datasets, achieves better performances than the state-of-the-art, with better accuracy and generalisation in unseen scenarios, whilst significantly reducing inference time, making it better suited for real-time in-field applications.

    @inproceedings{lincoln43425,
    month = {June},
    author = {Justin Le Louedec and Hector A. Montes and Tom Duckett and Grzegorz Cielniak},
    booktitle = {2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)},
    title = {Segmentation and detection from organised 3D point clouds: a case study in broccoli head detection},
    publisher = {IEEE},
    doi = {10.1109/CVPRW50498.2020.00040},
    pages = {285--293},
    year = {2020},
    url = {https://eprints.lincoln.ac.uk/id/eprint/43425/},
    abstract = {Autonomous harvesting is becoming an important challenge and necessity in agriculture, because of the lack of labour and the growth of population needing to be fed. Perception is a key aspect of autonomous harvesting and is very challenging due to difficult lighting conditions, limited sensing technologies, occlusions, plant growth, etc. 3D vision approaches can bring several benefits addressing the aforementioned challenges such as localisation, size estimation, occlusion handling and shape analysis. In this paper, we propose a novel approach using 3D information for detecting broccoli heads based on Convolutional Neural Networks (CNNs), exploiting the organised nature of the point clouds originating from the RGBD sensors. The proposed algorithm, tested on real-world datasets, achieves better performances than the state-of-the-art, with better accuracy and generalisation in unseen scenarios, whilst significantly reducing inference time, making it better suited for real-time in-field applications.}
    }
  • K. Elgeneidy and K. Goher, “Structural optimization of adaptive soft fin ray fingers with variable stiffening capability,” in Ieee robosoft 2020, 2020. doi:10.1109/RoboSoft48309.2020.9115969
    [BibTeX] [Abstract] [Download PDF]

    Soft and adaptable grippers are desired for their ability to operate effectively in unstructured or dynamically changing environments, especially when interacting with delicate or deformable targets. However, utilizing soft bodies often comes at the expense of reduced carrying payload and limited performance in high-force applications. Hence, methods for achieving variable stiffness soft actuators are being investigated to broaden the applications of soft grippers. This paper investigates the structural optimization of adaptive soft fingers based on the Fin Ray® effect (Soft Fin Ray), featuring a passive stiffening mechanism that is enabled via layer jamming between deforming flexible ribs. A finite element model of the proposed Soft Fin Ray structure is developed and experimentally validated, with the aim of enhancing the layer jamming behavior for better grasping performance. The results showed that through structural optimization, initial contact forces before jamming can be minimized and final contact forces after jamming can be significantly enhanced, without downgrading the desired passive adaptation to objects. Thus, applications for Soft Fin Ray fingers can range from adaptive delicate grasping to high-force manipulation tasks.

    @inproceedings{lincoln40182,
    booktitle = {IEEE RoboSoft 2020},
    month = {June},
    title = {Structural Optimization of Adaptive Soft Fin Ray Fingers with Variable Stiffening Capability},
    author = {Khaled Elgeneidy and Khaled Goher},
    publisher = {IEEE},
    year = {2020},
    doi = {10.1109/RoboSoft48309.2020.9115969},
    url = {https://eprints.lincoln.ac.uk/id/eprint/40182/},
    abstract = {Soft and adaptable grippers are desired for their ability to operate effectively in unstructured or dynamically changing environments, especially when interacting with delicate or deformable targets. However, utilizing soft bodies often comes at the expense of reduced carrying payload and limited performance in high-force applications. Hence, methods for achieving variable stiffness soft actuators are being investigated to broaden the applications of soft grippers. This paper investigates the structural optimization of adaptive soft fingers based on the Fin Ray® effect (Soft Fin Ray), featuring a passive stiffening mechanism that is enabled via layer jamming between deforming flexible ribs. A finite element model of the proposed Soft Fin Ray structure is developed and experimentally validated, with the aim of enhancing the layer jamming behavior for better grasping performance. The results showed that through structural optimization, initial contact forces before jamming can be minimized and final contact forces after jamming can be significantly enhanced, without downgrading the desired passive adaptation to objects. Thus, applications for Soft Fin Ray fingers can range from adaptive delicate grasping to high-force manipulation tasks.}
    }
  • R. Polvara, M. Fernandez-Carmona, M. Hanheide, and G. Neumann, “Next-best-sense: a multi-criteria robotic exploration strategy for rfid tags discovery,” Ieee robotics and automation letters, vol. 5, iss. 3, p. 4477–4484, 2020. doi:10.1109/LRA.2020.3001539
    [BibTeX] [Abstract] [Download PDF]

    Automated exploration is one of the most relevant applications of autonomous robots. In this paper, we suggest a novel online coverage algorithm called Next-Best-Sense (NBS), an extension of the Next-Best-View class of exploration algorithms that optimizes the exploration task balancing multiple criteria. This novel algorithm is applied to the problem of localizing all Radio Frequency Identification (RFID) tags with a mobile robotic platform that is equipped with an RFID reader. We cast this problem as a coverage planning problem by defining a basic sensing operation, a scan with the RFID reader, as the field of view of the sensor. NBS evaluates candidate locations with a global utility function which combines utility values for travel distance, information gain, sensing time, battery status and RFID information gain, generalizing the use of Multi-Criteria Decision Making. We developed an RFID reader and tag model in the Gazebo simulator for validation. Experiments performed both in simulation and with a real robot suggest that our NBS approach can successfully localize all the RFID tags while minimizing navigation metrics such as sensing operations, total traveling distance and battery consumption. The code developed is publicly available on the authors' repository.

    @article{lincoln41120,
    volume = {5},
    number = {3},
    month = {June},
    author = {Riccardo Polvara and Manuel Fernandez-Carmona and Marc Hanheide and Gerhard Neumann},
    title = {Next-Best-Sense: a multi-criteria robotic exploration strategy for RFID tags discovery},
    publisher = {IEEE},
    year = {2020},
    journal = {IEEE Robotics and Automation Letters},
    doi = {10.1109/LRA.2020.3001539},
    pages = {4477--4484},
    url = {https://eprints.lincoln.ac.uk/id/eprint/41120/},
    abstract = {Automated exploration is one of the most relevant applications of autonomous robots. In this paper, we suggest a novel online coverage algorithm called Next-Best-Sense (NBS), an extension of the Next-Best-View class of exploration algorithms that optimizes the exploration task balancing multiple criteria. This novel algorithm is applied to the problem of localizing all Radio Frequency Identification (RFID) tags with a mobile robotic platform that is equipped with an RFID reader. We cast this problem as a coverage planning problem by defining a basic sensing operation, a scan with the RFID reader, as the field of view of the sensor. NBS evaluates candidate locations with a global utility function which combines utility values for travel distance, information gain, sensing time, battery status and RFID information gain, generalizing the use of Multi-Criteria Decision Making. We developed an RFID reader and tag model in the Gazebo simulator for validation. Experiments performed both in simulation and with a real robot suggest that our NBS approach can successfully localize all the RFID tags while minimizing navigation metrics such as sensing operations, total traveling distance and battery consumption. The code developed is publicly available on the authors' repository.}
    }
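    A minimal sketch of a global utility of the kind described in the abstract above, combining normalised per-criterion scores with fixed weights; the criterion names and weights are illustrative, and the paper's exact formulation may differ.

    def nbs_utility(candidate, weights):
        """Score a candidate sensing location from per-criterion values in [0, 1],
        each normalised so that higher is better (e.g. travel distance inverted)."""
        return sum(weights[c] * candidate[c] for c in weights)

    weights = {"travel": 0.2, "info_gain": 0.3, "sensing_time": 0.1,
               "battery": 0.1, "rfid_gain": 0.3}
    candidates = [
        {"travel": 0.8, "info_gain": 0.6, "sensing_time": 0.9,
         "battery": 0.7, "rfid_gain": 0.4},
        {"travel": 0.3, "info_gain": 0.9, "sensing_time": 0.5,
         "battery": 0.7, "rfid_gain": 0.8},
    ]
    print(max(candidates, key=lambda c: nbs_utility(c, weights)))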
  • Q. Fu, H. Wang, J. Peng, and S. Yue, “Improved collision perception neuronal system model with adaptive inhibition mechanism and evolutionary learning,” Ieee access, vol. 8, p. 108896–108912, 2020. doi:10.1109/ACCESS.2020.3001396
    [BibTeX] [Abstract] [Download PDF]

    Accurate and timely perception of collision in highly variable environments is still a challenging problem for artificial visual systems. As a source of inspiration, the lobula giant movement detectors (LGMDs) in the locust's visual pathways have been studied intensively, and modelled as quick collision detectors against challenges from various scenarios including vehicles and robots. However, the state-of-the-art LGMD models have not achieved acceptable robustness to deal with more challenging scenarios like the various vehicle driving scenes, due to the lack of adaptive signal processing mechanisms. To address this problem, we propose an improved neuronal system model, called LGMD+, that is featured by novel modelling of spatiotemporal inhibition dynamics with biological plausibilities including 1) lateral inhibitions with global biases defined by a variant of Gaussian distribution, spatially, and 2) an adaptive feed-forward inhibition mediation pathway, temporally. Accordingly, the LGMD+ performs more effectively to detect merely approaching objects threatening head-on collision risks by appropriately suppressing motion distractors caused by vibrations, near-miss or approaching stimuli with deviations from the centre view. Through evolutionary learning with a systematic dataset of various crash and non-collision driving scenarios, the LGMD+ shows improved robustness outperforming the previous related methods. After evolution, its computational simplicity, flexibility and robustness have also been well demonstrated by real-time experiments of autonomous micro-mobile robots.

    @article{lincoln42131,
    volume = {8},
    month = {June},
    author = {Qinbing Fu and Huatian Wang and Jigen Peng and Shigang Yue},
    title = {Improved Collision Perception Neuronal System Model with Adaptive Inhibition Mechanism and Evolutionary Learning},
    publisher = {IEEE},
    year = {2020},
    journal = {IEEE Access},
    doi = {10.1109/ACCESS.2020.3001396},
    pages = {108896--108912},
    url = {https://eprints.lincoln.ac.uk/id/eprint/42131/},
    abstract = {Accurate and timely perception of collision in highly variable environments is still a challenging problem for artificial visual systems. As a source of inspiration, the lobula giant movement detectors (LGMDs) in the locust's visual pathways have been studied intensively, and modelled as quick collision detectors against challenges from various scenarios including vehicles and robots. However, the state-of-the-art LGMD models have not achieved acceptable robustness to deal with more challenging scenarios like the various vehicle driving scenes, due to the lack of adaptive signal processing mechanisms. To address this problem, we propose an improved neuronal system model, called LGMD+, that is featured by novel modelling of spatiotemporal inhibition dynamics with biological plausibilities including 1) lateral inhibitions with global biases defined by a variant of Gaussian distribution, spatially, and 2) an adaptive feed-forward inhibition mediation pathway, temporally. Accordingly, the LGMD+ performs more effectively to detect merely approaching objects threatening head-on collision risks by appropriately suppressing motion distractors caused by vibrations, near-miss or approaching stimuli with deviations from the centre view. Through evolutionary learning with a systematic dataset of various crash and non-collision driving scenarios, the LGMD+ shows improved robustness outperforming the previous related methods. After evolution, its computational simplicity, flexibility and robustness have also been well demonstrated by real-time experiments of autonomous micro-mobile robots.}
    }
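    A minimal sketch of Gaussian-weighted lateral inhibition of the sort described in the abstract above, applied to a 2D excitation map; the kernel width and gain are illustrative assumptions, not the tuned LGMD+ values.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def lateral_inhibition(excitation, sigma=2.0, gain=0.8):
        """Subtract a Gaussian-blurred copy of the excitation map, so each
        cell is inhibited by a Gaussian-weighted sum of its neighbours."""
        inhibition = gaussian_filter(excitation, sigma=sigma)
        return np.maximum(excitation - gain * inhibition, 0.0)

    frame_diff = np.random.rand(64, 64)  # stand-in for a luminance-change map
    print(lateral_inhibition(frame_diff).shape)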
  • Y. M. Lee, R. Madigan, O. Giles, L. Garach-Morcillo, G. Markkula, C. Fox, F. Camara, M. Rothmueller, S. A. Vendelbo-Larsen, P. H. Rasmussen, A. Dietrich, D. Nathanael, V. Portouli, A. Schieben, and N. Merat, “Road users rarely use explicit communication when interacting in today's traffic: implications for automated vehicles,” Cognition, technology & work, 2020. doi:10.1007/s10111-020-00635-y
    [BibTeX] [Abstract] [Download PDF]

    To be successful, automated vehicles (AVs) need to be able to manoeuvre in mixed traffic in a way that will be accepted by road users, and maximises traffic safety and efficiency. A likely prerequisite for this success is for AVs to be able to communicate effectively with other road users in a complex traffic environment. The current study, conducted as part of the European project interACT, investigates the communication strategies used by drivers and pedestrians while crossing the road at six observed locations, across three European countries. In total, 701 road user interactions were observed and annotated, using an observation protocol developed for this purpose. The observation protocols identified 20 event categories, observed from the approaching vehicles/drivers and pedestrians. These included information about movement, looking behaviour, hand gestures, and signals used, as well as some demographic data. These observations illustrated that explicit communication techniques, such as honking, flashing headlights by drivers, or hand gestures by drivers and pedestrians, rarely occurred. This observation was consistent across sites. In addition, a follow-on questionnaire, administered to a sub-set of the observed pedestrians after crossing the road, found that when contemplating a crossing, pedestrians were more likely to use vehicle-based behaviour, rather than communication cues from the driver. Overall, the findings suggest that vehicle-based movement information such as yielding cues are more likely to be used by pedestrians while crossing the road, compared to explicit communication cues from drivers, although some cultural differences were observed. The implications of these findings are discussed with respect to design of suitable external interfaces and communication of intent by future automated vehicles.

    @article{lincoln41217,
    month = {June},
    title = {Road users rarely use explicit communication when interacting in today's traffic: implications for automated vehicles},
    author = {Yee Mun Lee and Ruth Madigan and Oscar Giles and Laura Garach-Morcillo and Gustav Markkula and Charles Fox and Fanta Camara and Markus Rothmueller and Signe Alexandra Vendelbo-Larsen and Pernille Holm Rasmussen and Andre Dietrich and Dimitris Nathanael and Villy Portouli and Anna Schieben and Natasha Merat},
    publisher = {Springer},
    year = {2020},
    doi = {10.1007/s10111-020-00635-y},
    journal = {Cognition, Technology \& Work},
    url = {https://eprints.lincoln.ac.uk/id/eprint/41217/},
    abstract = {To be successful, automated vehicles (AVs) need to be able to manoeuvre in mixed traffic in a way that will be accepted by road users, and maximises traffic safety and efficiency. A likely prerequisite for this success is for AVs to be able to communicate effectively with other road users in a complex traffic environment. The current study, conducted as part of the European project interACT, investigates the communication strategies used by drivers and pedestrians while crossing the road at six observed locations, across three European countries. In total, 701 road user interactions were observed and annotated, using an observation protocol developed for this purpose. The observation protocols identified 20 event categories, observed from the approaching vehicles/drivers and pedestrians. These included information about movement, looking behaviour, hand gestures, and signals used, as well as some demographic data. These observations illustrated that explicit communication techniques, such as honking, flashing headlights by drivers, or hand gestures by drivers and pedestrians, rarely occurred. This observation was consistent across sites. In addition, a follow-on questionnaire, administered to a sub-set of the observed pedestrians after crossing the road, found that when contemplating a crossing, pedestrians were more likely to use vehicle-based behaviour, rather than communication cues from the driver. Overall, the findings suggest that vehicle-based movement information such as yielding cues are more likely to be used by pedestrians while crossing the road, compared to explicit communication cues from drivers, although some cultural differences were observed. The implications of these findings are discussed with respect to design of suitable external interfaces and communication of intent by future automated vehicles.}
    }
  • Z. Yan, S. Schreiberhuber, G. Halmetschlager, T. Duckett, M. Vincze, and N. Bellotto, “Robot perception of static and dynamic objects with an autonomous floor scrubber,” Intelligent service robotics, 2020. doi:10.1007/s11370-020-00324-9
    [BibTeX] [Abstract] [Download PDF]

    This paper presents the perception system of a new professional cleaning robot for large public places. The proposed system is based on multiple sensors including 3D and 2D lidar, two RGB-D cameras and a stereo camera. The two lidars together with an RGB-D camera are used for dynamic object (human) detection and tracking, while the second RGB-D and stereo camera are used for detection of static objects (dirt and ground objects). A learning and reasoning module for spatial-temporal representation of the environment based on the perception pipeline is also introduced. Furthermore, a new dataset collected with the robot in several public places, including a supermarket, a warehouse and an airport, is released. Baseline results on this dataset for further research and comparison are provided. The proposed system has been fully implemented into the Robot Operating System (ROS) with high modularity, also publicly available to the community.

    @article{lincoln40882,
    month = {June},
    title = {Robot Perception of Static and Dynamic Objects with an Autonomous Floor Scrubber},
    author = {Zhi Yan and Simon Schreiberhuber and Georg Halmetschlager and Tom Duckett and Markus Vincze and Nicola Bellotto},
    publisher = {Springer},
    year = {2020},
    doi = {10.1007/s11370-020-00324-9},
    journal = {Intelligent Service Robotics},
    url = {https://eprints.lincoln.ac.uk/id/eprint/40882/},
    abstract = {This paper presents the perception system of a new professional cleaning robot for large public places. The proposed system is based on multiple sensors including 3D and 2D lidar, two RGB-D cameras and a stereo camera. The two lidars together with an RGB-D camera are used for dynamic object (human) detection and tracking, while the second RGB-D and stereo camera are used for detection of static objects (dirt and ground objects). A learning and reasoning module for spatial-temporal representation of the environment based on the perception pipeline is also introduced. Furthermore, a new dataset collected with the robot in several public places, including a supermarket, a warehouse and an airport, is released. Baseline results on this dataset for further research and comparison are provided. The proposed system has been fully implemented into the Robot Operating System (ROS) with high modularity, also publicly available to the community.}
    }
  • A. Binch, G. Das, J. P. Fentanes, and M. Hanheide, “Context dependant iterative parameter optimisation for robust robot navigation,” in 2020 ieee international conference on robotics and automation (icra), 2020, p. 3937–3943. doi:10.1109/ICRA40945.2020.9196550
    [BibTeX] [Abstract] [Download PDF]

    Progress in autonomous mobile robotics has seen significant advances in the development of many algorithms for motion control and path planning. However, robust performance from these algorithms can often only be expected if the parameters controlling them are tuned specifically for the respective robot model, and optimised for specific scenarios in the environment the robot is working in. Such parameter tuning can, depending on the underlying algorithm, amount to a substantial combinatorial challenge, often rendering extensive manual tuning of these parameters intractable. In this paper, we present a framework that permits the use of different navigation actions and/or parameters depending on the spatial context of the navigation task, while considering the respective navigation algorithms themselves mostly as a “black box”, and find suitable parameters by means of an iterative optimisation, improving for performance metrics in simulated environments. We present a genetic algorithm incorporated into the framework and empirically show that the resulting parameter sets lead to substantial performance improvements in both simulated and real-world environments in the domain of agricultural robots.

    @inproceedings{lincoln42389,
    month = {May},
    author = {Adam Binch and Gautham Das and Jaime Pulido Fentanes and Marc Hanheide},
    booktitle = {2020 IEEE International Conference on Robotics and Automation (ICRA)},
    title = {Context Dependant Iterative Parameter Optimisation for Robust Robot Navigation},
    publisher = {IEEE},
    doi = {10.1109/ICRA40945.2020.9196550},
    pages = {3937--3943},
    year = {2020},
    url = {https://eprints.lincoln.ac.uk/id/eprint/42389/},
    abstract = {Progress in autonomous mobile robotics has seen significant advances in the development of many algorithms for motion control and path planning. However, robust performance from these algorithms can often only be expected if the parameters controlling them are tuned specifically for the respective robot model, and optimised for specific scenarios in the environment the robot is working in. Such parameter tuning can, depending on the underlying algorithm, amount to a substantial combinatorial challenge, often rendering extensive manual tuning of these parameters intractable. In this paper, we present a framework that permits the use of different navigation actions and/or parameters depending on the spatial context of the navigation task, while considering the respective navigation algorithms themselves mostly as a "black box", and find suitable parameters by means of an iterative optimisation, improving for performance metrics in simulated environments. We present a genetic algorithm incorporated into the framework and empirically show that the resulting parameter sets lead to substantial performance improvements in both simulated and real-world environments in the domain of agricultural robots.}
    }
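
    A minimal sketch of the black-box optimisation loop described above, written as a simple genetic algorithm in Python. The parameter names, bounds, and fitness interface are illustrative assumptions rather than the authors' implementation; in practice the fitness function would launch a simulated navigation trial and score it against the chosen performance metrics.

        import random

        # Hypothetical navigation parameters and bounds (illustrative only).
        BOUNDS = {"max_vel_x": (0.2, 1.5), "inflation_radius": (0.1, 1.0)}

        def random_genome():
            return {k: random.uniform(lo, hi) for k, (lo, hi) in BOUNDS.items()}

        def mutate(genome, rate=0.2):
            # Gaussian perturbation of each parameter, clamped to its bounds.
            child = dict(genome)
            for k, (lo, hi) in BOUNDS.items():
                if random.random() < rate:
                    child[k] = min(hi, max(lo, child[k] + random.gauss(0.0, 0.1 * (hi - lo))))
            return child

        def crossover(a, b):
            # Uniform crossover: each parameter comes from either parent.
            return {k: random.choice((a[k], b[k])) for k in a}

        def evolve(fitness, pop_size=20, generations=30):
            # fitness: genome -> score, e.g. from a simulated navigation run.
            pop = [random_genome() for _ in range(pop_size)]
            for _ in range(generations):
                elite = sorted(pop, key=fitness, reverse=True)[:pop_size // 4]
                pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                               for _ in range(pop_size - len(elite))]
            return max(pop, key=fitness)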
  • I. Albayati, A. Postnikov, S. Pearson, R. Bickerton, A. Zolotas, and C. Bingham, “Power and energy analysis for a commercial retail refrigeration system responding to a static demand side response,” International journal of electrical power & energy systems, vol. 117, p. 105645, 2020. doi:10.1016/j.ijepes.2019.105645
    [BibTeX] [Abstract] [Download PDF]

    The paper considers the impact of Demand Side Response events on supply power profile and energy efficiency of widely distributed aggregated loads applied across commercial refrigeration systems. Responses to secondary grid frequency static DSR events are investigated. Experimental trials are conducted on a system of refrigerators representing a small retail store, and subsequently on the refrigerators of an operational superstore in the UK. Energy consumption and energy savings during 3 hours of operation, pre and post-secondary DSR, are discussed. In addition, a simultaneous secondary DSR event is realised across three operational retail stores located in different geographical regions of the UK. A Simulink model for a 3Φ power network is used to investigate the impact of a synchronised return to normal operation of the aggregated refrigeration systems post DSR on the local power network. Results show a ~1% drop in line voltage due to the synchronised return to operation. An analysis of energy consumption shows that DSR events can facilitate energy savings of between 3.8% and 9.3% compared to normal operation. This is a result of the refrigerators operating more efficiently during and shortly after the DSR. The use of aggregated refrigeration loads can contribute to the necessary load-shed by 97.3% at the beginning of DSR and 27% during a 30-minute DSR, based on a simultaneous DSR event carried out on three retail stores.

    @article{lincoln38163,
    volume = {117},
    month = {May},
    author = {Ibrahim Albayati and Andrey Postnikov and Simon Pearson and Ronald Bickerton and Argyrios Zolotas and Chris Bingham},
    title = {Power and Energy Analysis for a Commercial Retail Refrigeration System Responding to a Static Demand Side Response},
    publisher = {Elsevier},
    year = {2020},
    journal = {International Journal of Electrical Power \& Energy Systems},
    doi = {10.1016/j.ijepes.2019.105645},
    pages = {105645},
    url = {https://eprints.lincoln.ac.uk/id/eprint/38163/},
    abstract = {The paper considers the impact of Demand Side Response events on supply power profile and energy efficiency of widely distributed aggregated loads applied across commercial refrigeration systems. Responses to secondary grid frequency static DSR events are investigated. Experimental trials are conducted on a system of refrigerators representing a small retail store, and subsequently on the refrigerators of an operational superstore in the UK. Energy consumption and energy savings during 3 hours of operation, pre and post-secondary DSR, are discussed. In addition, a simultaneous secondary DSR event is realised across three operational retail stores located in different geographical regions of the UK. A Simulink model for a 3{\ensuremath{\Phi}} power network is used to investigate the impact of a synchronised return to normal operation of the aggregated refrigeration systems post DSR on the local power network. Results show {\texttt{\char126}}1\% drop in line voltage due to the synchronised return to operation. An analysis of energy consumption shows that DSR events can facilitate energy savings of between 3.8\% and 9.3\% compared to normal operation. This is a result of the refrigerators operating more efficiently during and shortly after the DSR. The use of aggregated refrigeration loads can contribute to the necessary load-shed by 97.3\% at the beginning of DSR and 27\% during 30 minutes DSR, based on a simultaneous DSR event carried out on three retail stores.}
    }
  • L. Sun, D. Adolfsson, M. Magnusson, H. Andreasson, I. Posner, and T. Duckett, “Localising faster: efficient and precise lidar-based robot localisation in large-scale environments,” in 2020 ieee international conference on robotics and automation (icra), 2020, p. 4386–4392. doi:10.1109/ICRA40945.2020.9196708
    [BibTeX] [Abstract] [Download PDF]

    This paper proposes a novel approach for global localisation of mobile robots in large-scale environments. Our method leverages learning-based localisation and filtering-based localisation, to localise the robot efficiently and precisely through seeding Monte Carlo Localisation (MCL) with a deep learned distribution. In particular, a fast localisation system rapidly estimates the 6-DOF pose through a deep-probabilistic model (Gaussian Process Regression with a deep kernel), then a precise recursive estimator refines the estimated robot pose according to the geometric alignment. More importantly, the Gaussian method (i.e. deep probabilistic localisation) and non-Gaussian method (i.e. MCL) can be integrated naturally via importance sampling. Consequently, the two systems can be integrated seamlessly and mutually benefit from each other. To verify the proposed framework, we provide a case study in large-scale localisation with a 3D lidar sensor. Our experiments on the Michigan NCLT long-term dataset show that the proposed method is able to localise the robot in 1.94 s on average (median of 0.8 s) with precision 0.75 m in a large-scale environment of approximately 0.5 km².

    @inproceedings{lincoln43349,
    month = {May},
    author = {Li Sun and Daniel Adolfsson and Martin Magnusson and Henrik Andreasson and Ingmar Posner and Tom Duckett},
    booktitle = {2020 IEEE International Conference on Robotics and Automation (ICRA)},
    title = {Localising Faster: Efficient and precise lidar-based robot localisation in large-scale environments},
    publisher = {IEEE},
    doi = {10.1109/ICRA40945.2020.9196708},
    pages = {4386--4392},
    year = {2020},
    url = {https://eprints.lincoln.ac.uk/id/eprint/43349/},
    abstract = {This paper proposes a novel approach for global localisation of mobile robots in large-scale environments. Our method leverages learning-based localisation and filtering-based localisation, to localise the robot efficiently and precisely through seeding Monte Carlo Localisation (MCL) with a deep learned distribution. In particular, a fast localisation system rapidly estimates the 6-DOF pose through a deep-probabilistic model (Gaussian Process Regression with a deep kernel), then a precise recursive estimator refines the estimated robot pose according to the geometric alignment. More importantly, the Gaussian method (i.e. deep probabilistic localisation) and non-Gaussian method (i.e. MCL) can be integrated naturally via importance sampling. Consequently, the two systems can be integrated seamlessly and mutually benefit from each other. To verify the proposed framework, we provide a case study in large-scale localisation with a 3D lidar sensor. Our experiments on the Michigan NCLT long-term dataset show that the proposed method is able to localise the robot in 1.94 s on average (median of 0.8 s) with precision 0.75 m in a large-scale environment of approximately 0.5 km$^2$.}
    }
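
    The seeding step described above lends itself to a compact illustration. This sketch draws pose particles from a Gaussian proposal, standing in for the predicted mean and covariance of the deep-probabilistic model, and re-weights them by a measurement likelihood via importance sampling; measurement_likelihood is an assumed stand-in for the lidar measurement model, not the paper's code.

        import numpy as np
        from scipy.stats import multivariate_normal

        def seed_mcl(mean, cov, measurement_likelihood, n=1000):
            # Proposal: Gaussian pose belief predicted by the learned model.
            particles = np.random.multivariate_normal(mean, cov, size=n)
            proposal = multivariate_normal(mean, cov).pdf(particles)
            # Target: likelihood of each pose under the lidar measurement model.
            target = np.array([measurement_likelihood(p) for p in particles])
            # Importance weights = target / proposal, normalised for the filter.
            w = target / np.maximum(proposal, 1e-300)
            return particles, w / w.sum()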
  • F. Camara, P. Dickenson, N. Merat, and C. Fox, “Examining pedestrian-autonomous vehicle interactions in virtual reality,” in 8th transport research arena tra 2020, 2020.
    [BibTeX] [Abstract] [Download PDF]

    Autonomous vehicles now have well developed algorithms and open source software for localisation and navigation in static environments but their future interactions with other road users in mixed traffic environments, especially with pedestrians, raise some concerns. Pedestrian behaviour is complex to model and unpredictable, thus creating a big challenge for self-driving cars. This paper examines pedestrian behaviour during crossing scenarios with a game theoretic autonomous vehicle in virtual reality. In a first experiment, we recorded participants’ trajectories and found that they were crossing more cautiously in VR than in previous laboratory experiments. In two other experiments, we used a gradient descent approach to investigate participants’ preference for a certain AV driving style. We found that the majority of them were not expecting the car to stop in these scenarios. These results suggest that VR is an interesting tool for testing autonomous vehicle algorithms and for finding out about pedestrian preferences.

    @inproceedings{lincoln40029,
    booktitle = {8th Transport Research Arena TRA 2020},
    month = {April},
    title = {Examining Pedestrian-Autonomous Vehicle Interactions in Virtual Reality},
    author = {Fanta Camara and Patrick Dickenson and Natasha Merat and Charles Fox},
    year = {2020},
    url = {https://eprints.lincoln.ac.uk/id/eprint/40029/},
    abstract = {Autonomous vehicles now have well developed algorithms and open source software for localisation and navigation in static environments but their future interactions with other road users in mixed traffic environments, especially with pedestrians, raise some concerns. Pedestrian behaviour is complex to model and unpredictable, thus creating a big challenge for self-driving cars. This paper examines pedestrian behaviour during crossing scenarios with a game theoretic autonomous vehicle in virtual reality. In a first experiment, we recorded participants' trajectories and found that they were crossing more cautiously in VR than in previous laboratory experiments. In two other experiments, we used a gradient descent approach to investigate participants' preference for a certain AV driving style. We found that the majority of them were not expecting the car to stop in these scenarios. These results suggest that VR is an interesting tool for testing autonomous vehicle algorithms and for finding out about pedestrian preferences.}
    }
  • D. D. Barrie, R. Margetts, and K. Goher, “Simpa: soft-grasp infant myoelectric prosthetic arm,” Ieee robotics and automation letters, vol. 5, iss. 2, p. 699–704, 2020. doi:10.1109/LRA.2019.2963820
    [BibTeX] [Abstract] [Download PDF]

    Myoelectric prosthetic arms have primarily focused on adults, despite evidence showing the benefits of early adoption. This work presents SIMPA, a low-cost 3D-printed prosthetic arm with soft grippers. The arm has been designed using CAD and 3D-scanning, and manufactured using predominantly 3D-printing techniques. A voluntary opening control system utilizing an armband-based sEMG has been developed concurrently. Grasp tests have resulted in an average effectiveness of 87%, with objects in excess of 400 g being securely grasped. The results highlight the effectiveness of soft grippers as an end device in prosthetics, as well as the viability of toddler scale myoelectric devices.

    @article{lincoln39383,
    volume = {5},
    number = {2},
    month = {April},
    author = {Daniel De Barrie and Rebecca Margetts and Khaled Goher},
    title = {SIMPA: Soft-Grasp Infant Myoelectric Prosthetic Arm},
    publisher = {IEEE},
    year = {2020},
    journal = {IEEE Robotics and Automation Letters},
    doi = {10.1109/LRA.2019.2963820},
    pages = {699--704},
    url = {https://eprints.lincoln.ac.uk/id/eprint/39383/},
    abstract = {Myoelectric prosthetic arms have primarily focused on adults, despite evidence showing the benefits of early adoption. This work presents SIMPA, a low-cost 3D-printed prosthetic arm with soft grippers. The arm has been designed using CAD and 3D-scanning, and manufactured using predominantly 3D-printing techniques. A voluntary opening control system utilizing an armband-based sEMG has been developed concurrently. Grasp tests have resulted in an average effectiveness of 87\%, with objects in excess of 400g being securely grasped. The results highlight the effectiveness of soft grippers as an end device in prosthetics, as well as the viability of toddler scale myoelectric devices.}
    }
  • H. Wang, J. Peng, and S. Yue, “A directionally selective small target motion detecting visual neural network in cluttered backgrounds,” Ieee transactions on cybernetics, vol. 50, iss. 4, p. 1541–1555, 2020. doi:10.1109/TCYB.2018.2869384
    [BibTeX] [Abstract] [Download PDF]

    Discriminating targets moving against a cluttered background is a huge challenge, let alone detecting a target as small as one or a few pixels and tracking it in flight. In the insect’s visual system, a class of specific neurons, called small target motion detectors (STMDs), have been identified as showing exquisite selectivity for small target motion. Some of the STMDs have also demonstrated direction selectivity which means these STMDs respond strongly only to their preferred motion direction. Direction selectivity is an important property of these STMD neurons which could contribute to tracking small targets such as mates in flight. However, little has been done on systematically modeling these directionally selective STMD neurons. In this paper, we propose a directionally selective STMD-based neural network for small target detection in a cluttered background. In the proposed neural network, a new correlation mechanism is introduced for direction selectivity via correlating signals relayed from two pixels. Then, a lateral inhibition mechanism is implemented on the spatial field for size selectivity of the STMD neurons. Finally, a population vector algorithm is used to encode motion direction of small targets. Extensive experiments showed that the proposed neural network not only is in accord with current biological findings, i.e., showing directional preferences, but also worked reliably in detecting small targets against cluttered backgrounds.

    @article{lincoln33420,
    volume = {50},
    number = {4},
    month = {April},
    author = {Hongxin Wang and Jigen Peng and Shigang Yue},
    note = {The final published version of this article can be accessed online at https://ieeexplore.ieee.org/document/8485659},
    title = {A Directionally Selective Small Target Motion Detecting Visual Neural Network in Cluttered Backgrounds},
    publisher = {IEEE},
    year = {2020},
    journal = {IEEE Transactions on Cybernetics},
    doi = {10.1109/TCYB.2018.2869384},
    pages = {1541--1555},
    url = {https://eprints.lincoln.ac.uk/id/eprint/33420/},
    abstract = {Discriminating targets moving against a cluttered background is a huge challenge, let alone detecting a target as small as one or a few pixels and tracking it in flight. In the insect's visual system, a class of specific neurons, called small target motion detectors (STMDs), have been identified as showing exquisite selectivity for small target motion. Some of the STMDs have also demonstrated direction selectivity which means these STMDs respond strongly only to their preferred motion direction. Direction selectivity is an important property of these STMD neurons which could contribute to tracking small targets such as mates in flight. However, little has been done on systematically modeling these directionally selective STMD neurons. In this paper, we propose a directionally selective STMD-based neural network for small target detection in a cluttered background. In the proposed neural network, a new correlation mechanism is introduced for direction selectivity via correlating signals relayed from two pixels. Then, a lateral inhibition mechanism is implemented on the spatial field for size selectivity of the STMD neurons. Finally, a population vector algorithm is used to encode motion direction of small targets. Extensive experiments showed that the proposed neural network not only is in accord with current biological findings, i.e., showing directional preferences, but also worked reliably in detecting small targets against cluttered backgrounds.}
    }
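
    The correlation mechanism for direction selectivity can be caricatured in a few lines: the delayed signal at one pixel is multiplied with the current signal at a neighbouring pixel along the preferred direction, in the spirit of a two-point motion correlator. This is a deliberately reduced sketch of the idea, not the published model; the frame layout and the delay and offset parameters are assumptions.

        import numpy as np

        def preferred_direction_response(frames, dx=1, delay=1):
            # frames: array of shape (time, height, width).
            f = np.asarray(frames, dtype=float)
            delayed = f[:-delay]                       # signal delayed by `delay` frames
            shifted = np.roll(f[delay:], -dx, axis=2)  # neighbour's current signal
            # Strong output only for motion along the preferred (+x) direction.
            return (delayed * shifted).mean(axis=0)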
  • V. R. Ponnambalam, J. P. Fentanes, G. Das, G. Cielniak, J. G. O. Gjevestad, and P. J. From, “Agri-cost-maps – integration of environmental constraints into navigation systems for agricultural robots,” in 6th international conference on control, automation and robotics (iccar), 2020. doi:10.1109/ICCAR49639.2020.9108030
    [BibTeX] [Abstract] [Download PDF]

    Robust navigation is a key ability for agricultural robots. Such robots must operate safely minimizing their impact on the soil and avoiding crop damage. This paper proposes a method for unified incorporation of the application-specific constraints into the navigation system of robots deployed in different agricultural environments. The constraints are incorporated as an additional cost-map layer into the ROS navigation stack. These so-called Agri-Cost-Maps facilitate the transition from the tailored navigation systems typical for the current generation of agricultural robots to a more flexible ROS-based navigation framework that can be easily deployed for different agricultural applications. We demonstrate the applicability of this framework in three different agricultural scenarios, evaluate its benefits in simulation and demonstrate its validity in a real-world setting.

    @inproceedings{lincoln42458,
    booktitle = {6th International Conference on Control, Automation and Robotics (ICCAR)},
    month = {April},
    title = {Agri-Cost-Maps - Integration of Environmental Constraints into Navigation Systems for Agricultural Robots},
    author = {Vignesh Raja Ponnambalam and Jaime Pulido Fentanes and Gautham Das and Grzegorz Cielniak and Jon Glenn Omholt Gjevestad and P{\r a}l Johan From},
    publisher = {IEEE},
    year = {2020},
    doi = {10.1109/ICCAR49639.2020.9108030},
    url = {https://eprints.lincoln.ac.uk/id/eprint/42458/},
    abstract = {Robust navigation is a key ability for agricultural robots. Such robots must operate safely minimizing their impact on the soil and avoiding crop damage. This paper proposes a method for unified incorporation of the application-specific constraints into the navigation system of robots deployed in different agricultural environments. The constraints are incorporated as an additional cost-map layer into the ROS navigation stack. These so-called Agri-Cost-Maps facilitate the transition from the tailored navigation systems typical for the current generation of agricultural robots to a more flexible ROS-based navigation framework that can be easily deployed for different agricultural applications. We demonstrate the applicability of this framework in three different agricultural scenarios, evaluate its benefits in simulation and demonstrate its validity in a real-world setting.}
    }
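
    In ROS the extra layer would normally be a costmap_2d plugin written in C++; the Python sketch below only illustrates how an Agri-Cost-Map layer might fold application-specific constraints into the master grid, combining layers by an elementwise maximum. The lethal-cost value follows the usual costmap_2d convention, while the mask semantics are an assumption made for illustration.

        import numpy as np

        LETHAL_COST = 254  # costmap_2d convention for a forbidden cell

        def apply_agri_layer(base_costmap, protected_mask):
            # Overlay constraints (e.g. crop rows, soft soil) onto the base costmap.
            layer = np.where(protected_mask, LETHAL_COST, 0).astype(base_costmap.dtype)
            # Layered costmaps keep the highest cost per cell.
            return np.maximum(base_costmap, layer)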
  • T. Pardi, V. Ortenzi, C. Fairbairn, T. Pipe, A. G. Esfahani, and R. Stolkin, “Planning maximum-manipulability cutting paths,” Ieee robotics and automation letters, vol. 5, iss. 2, p. 1999–2006, 2020. doi:10.1109/LRA.2020.2970949
    [BibTeX] [Abstract] [Download PDF]

    This paper presents a method for constrained motion planning from vision, which enables a robot to move its end-effector over an observed surface, given start and destination points. The robot has no prior knowledge of the surface shape but observes it from a noisy point cloud. We consider the multi-objective optimisation problem of finding robot trajectories which maximise the robot’s manipulability throughout the motion, while also minimising surface-distance travelled between the two points. This work has application in industrial problems of rough robotic cutting, e.g., demolition of the legacy nuclear plant, where the cut path need not be precise as long as it achieves dismantling. We show how detours in the path can be leveraged to increase the manipulability of the robot at all points along the path. This helps to avoid singularities while maximising the robot’s capability to make small deviations during task execution. We show how a sampling-based planner can be projected onto the Riemannian manifold of a curved surface, and extended to include a term which maximises manipulability. We present the results of empirical experiments, with both simulated and real robots, which are tasked with moving over a variety of different surface shapes. Our planner enables successful task completion while ensuring significantly greater manipulability when compared against a conventional RRT* planner.

    @article{lincoln41285,
    volume = {5},
    number = {2},
    month = {April},
    author = {Tommaso Pardi and Valerio Ortenzi and Colin Fairbairn and Tony Pipe and Amir Ghalamzan Esfahani and Rustam Stolkin},
    title = {Planning maximum-manipulability cutting paths},
    publisher = {IEEE},
    year = {2020},
    journal = {IEEE Robotics and Automation Letters},
    doi = {10.1109/LRA.2020.2970949},
    pages = {1999--2006},
    url = {https://eprints.lincoln.ac.uk/id/eprint/41285/},
    abstract = {This paper presents a method for constrained motion planning from vision, which enables a robot to move its end-effector over an observed surface, given start and destination points. The robot has no prior knowledge of the surface shape but observes it from a noisy point cloud. We consider the multi-objective optimisation problem of finding robot trajectories which maximise the robot's manipulability throughout the motion, while also minimising surface-distance travelled between the two points. This work has application in industrial problems of rough robotic cutting, e.g., demolition of the legacy nuclear plant, where the cut path need not be precise as long as it achieves dismantling. We show how detours in the path can be leveraged to increase the manipulability of the robot at all points along the path. This helps to avoid singularities while maximising the robot's capability to make small deviations during task execution. We show how a sampling-based planner can be projected onto the Riemannian manifold of a curved surface, and extended to include a term which maximises manipulability. We present the results of empirical experiments, with both simulated and real robots, which are tasked with moving over a variety of different surface shapes. Our planner enables successful task completion while ensuring significantly greater manipulability when compared against a conventional RRT* planner.}
    }
  • W. Martindale, S. Pearson, M. Swainson, L. Korir, I. Wright, A. M. Opiyo, B. Karanja, S. Nyalala, and M. Kumar, “Framing food security and food loss statistics for incisive supply chain improvement and knowledge transfer between kenyan, indian and united kingdom food manufacturers,” Emerald open research, vol. 2, iss. 12, 2020. doi:10.35241/emeraldopenres.13414.1
    [BibTeX] [Abstract] [Download PDF]

    The application of global indices of nutrition and food sustainability in public health and the improvement of product profiles has facilitated effective actions that increase food security. In the research reported here we develop index measurements further so that they can be applied to food categories and be used by food processors and manufacturers for specific food supply chains. This research considers how they can be used to assess the sustainability of supply chain operations by stimulating more incisive food loss and waste reduction planning. The research demonstrates how an index driven approach focussed on improving both nutritional delivery and reducing food waste will result in improved food security and sustainability. Nutritional improvements are focussed on protein supply and reduction of food waste on supply chain losses and the methods are tested using the food systems of Kenya and India where the current research is being deployed. Innovative practices will emerge when nutritional improvement and waste reduction actions demonstrate market success, and this will result in the co-development of food manufacturing infrastructure and innovation programmes. The use of established indices of sustainability and security enable comparisons that encourage knowledge transfer and the establishment of cross-functional indices that quantify national food nutrition, security and sustainability. The research presented in this initial study is focussed on applying these indices to specific food supply chains for food processors and manufacturers.

    @article{lincoln40529,
    volume = {2},
    number = {12},
    month = {April},
    author = {Wayne Martindale and Simon Pearson and Mark Swainson and Lilian Korir and Isobel Wright and Arnold M. Opiyo and Benard Karanja and Samuel Nyalala and Mahesh Kumar},
    title = {Framing food security and food loss statistics for incisive supply chain improvement and knowledge transfer between Kenyan, Indian and United Kingdom food manufacturers},
    publisher = {Emerald},
    year = {2020},
    journal = {Emerald Open Research},
    doi = {10.35241/emeraldopenres.13414.1},
    url = {https://eprints.lincoln.ac.uk/id/eprint/40529/},
    abstract = {The application of global indices of nutrition and food sustainability in public health and the improvement of product profiles has facilitated effective actions that increase food security. In the research reported here we develop index measurements further so that they can be applied to food categories and be used by food processors and manufacturers for specific food supply chains. This research considers how they can be used to assess the sustainability of supply chain operations by stimulating more incisive food loss and waste reduction planning. The research demonstrates how an index driven approach focussed on improving both nutritional delivery and reducing food waste will result in improved food security and sustainability. Nutritional improvements are focussed on protein supply and reduction of food waste on supply chain losses and the methods are tested using the food systems of Kenya and India where the current research is being deployed. Innovative practices will emerge when nutritional improvement and waste reduction actions demonstrate market success, and this will result in the co-development of food manufacturing infrastructure and innovation programmes. The use of established indices of sustainability and security enable comparisons that encourage knowledge transfer and the establishment of cross-functional indices that quantify national food nutrition, security and sustainability. The research presented in this initial study is focussed on applying these indices to specific food supply chains for food processors and manufacturers.}
    }
  • S. Cosar and N. Bellotto, “Human re-identification with a robot thermal camera using entropy-based sampling,” Journal of intelligent and robotic systems, vol. 98, iss. 1, p. 85–102, 2020. doi:10.1007/s10846-019-01026-w
    [BibTeX] [Abstract] [Download PDF]

    Human re-identification is an important feature of domestic service robots, in particular for elderly monitoring and assistance, because it allows them to perform personalized tasks and human-robot interactions. However, vision-based re-identification systems are subject to limitations due to human pose and poor lighting conditions. This paper presents a new re-identification method for service robots using thermal images. In robotic applications, as the number and size of thermal datasets is limited, it is hard to use approaches that require a huge amount of training samples. We propose a re-identification system that can work using only a small amount of data. During training, we perform entropy-based sampling to obtain a thermal dictionary for each person. Then, a symbolic representation is produced by converting each video into sequences of dictionary elements. Finally, we train a classifier using this symbolic representation and geometric distribution within the new representation domain. The experiments are performed on a new thermal dataset for human re-identification, which includes various situations of human motion, poses and occlusion, and which is made publicly available for research purposes. The proposed approach has been tested on this dataset and its improvements over standard approaches have been demonstrated.

    @article{lincoln35778,
    volume = {98},
    number = {1},
    month = {April},
    author = {Serhan Cosar and Nicola Bellotto},
    title = {Human Re-Identification with a Robot Thermal Camera using Entropy-based Sampling},
    publisher = {Springer},
    year = {2020},
    journal = {Journal of Intelligent and Robotic Systems},
    doi = {10.1007/s10846-019-01026-w},
    pages = {85--102},
    url = {https://eprints.lincoln.ac.uk/id/eprint/35778/},
    abstract = {Human re-identification is an important feature of domestic service robots, in particular for elderly monitoring and assistance, because it allows them to perform personalized tasks and human-robot interactions. However vision-based re-identification systems are subject to limitations due to human pose and poor lighting conditions. This paper presents a new re-identification method for service robots using thermal images. In robotic applications, as the number and size of thermal datasets is limited, it is hard to use approaches that require huge amount of training samples. We propose a re-identification system that can work using only a small amount of data. During training, we perform entropy-based sampling to obtain a thermal dictionary for each person. Then, a symbolic representation is produced by converting each video into sequences of dictionary elements. Finally, we train a classifier using this symbolic representation and geometric distribution within the new representation domain. The experiments are performed on a new thermal dataset for human re-identification, which includes various situations of human motion, poses and occlusion, and which is made publicly available for research purposes. The proposed approach has been tested on this dataset and its improvements over standard approaches have been demonstrated.}
    }
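
    As a rough illustration of the entropy-based sampling step, the sketch below scores thermal frames by the Shannon entropy of their grey-level histograms and keeps the most informative frames as a person's dictionary. The bin count and dictionary size are arbitrary choices here, and the paper's sampling and symbolic representation are richer than this.

        import numpy as np

        def image_entropy(img, bins=32):
            # Shannon entropy (in bits) of the grey-level histogram.
            counts, _ = np.histogram(img, bins=bins)
            p = counts[counts > 0] / counts.sum()
            return float(-np.sum(p * np.log2(p)))

        def build_dictionary(frames, k=10):
            # Keep the k highest-entropy frames as the thermal dictionary.
            return sorted(frames, key=image_entropy, reverse=True)[:k]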
  • X. Li, C. Fox, and S. Coutts, “Deep learning for robotic strawberry harvesting,” in Ukras20, 2020, p. 80–82. doi:10.31256/Bj3Kl5B
    [BibTeX] [Abstract] [Download PDF]

    We develop a novel machine learning based robotic strawberry harvesting system for fruit counting, sizing/weighting, and yield prediction.

    @inproceedings{lincoln41273,
    month = {April},
    author = {Xiaodong Li and Charles Fox and Shaun Coutts},
    booktitle = {UKRAS20},
    title = {Deep learning for robotic strawberry harvesting},
    publisher = {UK-RAS},
    doi = {10.31256/Bj3Kl5B},
    pages = {80--82},
    year = {2020},
    url = {https://eprints.lincoln.ac.uk/id/eprint/41273/},
    abstract = {We develop a novel machine learning based robotic strawberry harvesting system for fruit counting, sizing/weighting, and yield prediction.}
    }
  • F. D. Duchetto, P. Baxter, and M. Hanheide, “Automatic assessment and learning of robot social abilities,” in Companion of the 2020 acm/ieee international conference on human-robot interaction, 2020, p. 561–563. doi:10.1145/3371382.3377430
    [BibTeX] [Abstract] [Download PDF]

    One of the key challenges of current state-of-the-art robotic deployments in public spaces, where the robot is supposed to interact with humans, is the generation of behaviors that are engaging for the users. Eliciting engagement during an interaction, and maintaining it after the initial phase of the interaction, is still an issue to be overcome. There is evidence that engagement in learning activities is higher in the presence of a robot, particularly if novel [1], but after the initial engagement state, long and non-interactive behaviors are detrimental to the continued engagement of the users [5, 16]. Overcoming this limitation requires designing robots with enhanced social abilities that go past monolithic behaviours and introducing in-situ learning and adaptation to the specific users and situations. To do so, the robot must have the ability to perceive the state of the humans participating in the interaction and use this feedback for the selection of its own actions over time [27].

    @inproceedings{lincoln40509,
    booktitle = {Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction},
    month = {March},
    title = {Automatic Assessment and Learning of Robot Social Abilities},
    author = {Francesco Del Duchetto and Paul Baxter and Marc Hanheide},
    year = {2020},
    pages = {561--563},
    doi = {10.1145/3371382.3377430},
    url = {https://eprints.lincoln.ac.uk/id/eprint/40509/},
    abstract = {One of the key challenges of current state-of-the-art robotic deployments in public spaces, where the robot is supposed to interact with humans, is the generation of behaviors that are engaging for the users. Eliciting engagement during an interaction, and maintaining it after the initial phase of the interaction, is still an issue to be overcome. There is evidence that engagement in learning activities is higher in the presence of a robot, particularly if novel [1], but after the initial engagement state, long and non-interactive behaviors are detrimental to the continued engagement of the users [5, 16]. Overcoming this limitation requires to design robots with enhanced social abilities that go past monolithic behaviours and introduces in-situ learning and adaptation to the specific users and situations. To do so, the robot must have the ability to perceive the state of the humans participating in the interaction and use this feedback for the selection of its own actions over time [27].}
    }
  • H. Wang, J. Peng, X. Zheng, and S. Yue, “A robust visual system for small target motion detection against cluttered moving backgrounds,” Ieee transactions on neural networks and learning systems, vol. 31, iss. 3, p. 839–853, 2020. doi:10.1109/TNNLS.2019.2910418
    [BibTeX] [Abstract] [Download PDF]

    Monitoring small objects against cluttered moving backgrounds is a huge challenge to future robotic vision systems. As a source of inspiration, insects are quite apt at searching for mates and tracking prey, which always appear as small dim speckles in the visual field. The exquisite sensitivity of insects for small target motion, as revealed recently, is coming from a class of specific neurons called small target motion detectors (STMDs). Although a few STMD-based models have been proposed, these existing models only use motion information for small target detection and cannot discriminate small targets from small-target-like background features (named fake features). To address this problem, this paper proposes a novel visual system model (STMD+) for small target motion detection, which is composed of four subsystems–ommatidia, motion pathway, contrast pathway, and mushroom body. Compared with the existing STMD-based models, the additional contrast pathway extracts directional contrast from luminance signals to eliminate false positive background motion. The directional contrast and the extracted motion information by the motion pathway are integrated into the mushroom body for small target discrimination. Extensive experiments showed the significant and consistent improvements of the proposed visual system model over the existing STMD-based models against fake features.

    @article{lincoln36114,
    volume = {31},
    number = {3},
    month = {March},
    author = {Hongxin Wang and Jigen Peng and Xuqiang Zheng and Shigang Yue},
    title = {A Robust Visual System for Small Target Motion Detection Against Cluttered Moving Backgrounds},
    publisher = {Institute of Electrical and Electronics Engineers (IEEE)},
    year = {2020},
    journal = {IEEE Transactions on Neural Networks and Learning Systems},
    doi = {10.1109/TNNLS.2019.2910418},
    pages = {839--853},
    url = {https://eprints.lincoln.ac.uk/id/eprint/36114/},
    abstract = {Monitoring small objects against cluttered moving backgrounds is a huge challenge to future robotic vision systems. As a source of inspiration, insects are quite apt at searching for mates and tracking prey, which always appear as small dim speckles in the visual field. The exquisite sensitivity of insects for small target motion, as revealed recently, is coming from a class of specific neurons called small target motion detectors (STMDs). Although a few STMD-based models have been proposed, these existing models only use motion information for small target detection and cannot discriminate small targets from small-target-like background features (named fake features). To address this problem, this paper proposes a novel visual system model (STMD+) for small target motion detection, which is composed of four subsystems--ommatidia, motion pathway, contrast pathway, and mushroom body. Compared with the existing STMD-based models, the additional contrast pathway extracts directional contrast from luminance signals to eliminate false positive background motion. The directional contrast and the extracted motion information by the motion pathway are integrated into the mushroom body for small target discrimination. Extensive experiments showed the significant and consistent improvements of the proposed visual system model over the existing STMD-based models against fake features.}
    }
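
    One way to picture the role of the added contrast pathway is as a gate on the motion pathway's output: motion responses without supporting directional luminance contrast are treated as fake features and suppressed. The sketch below is a much-reduced caricature of that integration step; the gradient-based contrast measure and the threshold are assumptions, not the paper's formulation.

        import numpy as np

        def directional_contrast(frame):
            # Approximate directional contrast by luminance gradients per axis.
            gy, gx = np.gradient(np.asarray(frame, dtype=float))
            return np.abs(gx), np.abs(gy)

        def integrate_pathways(motion_map, frame, threshold=0.05):
            # Keep motion responses only where directional contrast supports
            # a genuine small target; suppress the rest as fake features.
            cx, cy = directional_contrast(frame)
            return np.where(np.maximum(cx, cy) > threshold, motion_map, 0.0)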
  • J. L. Louedec, B. Li, and G. Cielniak, “Evaluation of 3d vision systems for detection of small objects in agricultural environments,” in The 15th international joint conference on computer vision, imaging and computer graphics theory and applications, 2020. doi:10.5220/0009182806820689
    [BibTeX] [Abstract] [Download PDF]

    3D information provides unique information about shape, localisation and relations between objects, not found in standard 2D images. This information would be very beneficial in a large number of applications in agriculture such as fruit picking, yield monitoring, forecasting and phenotyping. In this paper, we conducted a study on the application of modern 3D sensing technology together with the state-of-the-art machine learning algorithms for segmentation and detection of strawberries growing in real farms. We evaluate the performance of two state-of-the-art 3D sensing technologies and showcase the differences between 2D and 3D networks trained on the images and point clouds of strawberry plants and fruit. Our study highlights limitations of the current 3D vision systems for the detection of small objects in outdoor applications and sets out foundations for future work on 3D perception for challenging outdoor applications such as agriculture.

    @inproceedings{lincoln40456,
    booktitle = {The 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications},
    month = {February},
    title = {Evaluation of 3D Vision Systems for Detection of Small Objects in Agricultural Environments},
    author = {Justin Le Louedec and Bo Li and Grzegorz Cielniak},
    publisher = {SciTePress},
    year = {2020},
    doi = {10.5220/0009182806820689},
    url = {https://eprints.lincoln.ac.uk/id/eprint/40456/},
    abstract = {3D information provides unique information about shape, localisation and relations between objects, not found in standard 2D images. This information would be very beneficial in a large number of applications in agriculture such as fruit picking, yield monitoring, forecasting and phenotyping. In this paper, we conducted a study on the application of modern 3D sensing technology together with the state-of-the-art machine learning algorithms for segmentation and detection of strawberries growing in real farms. We evaluate the performance of two state-of-the-art 3D sensing technologies and showcase the differences between 2D and 3D networks trained on the images and point clouds of strawberry plants and fruit. Our study highlights limitations of the current 3D vision systems for the detection of small objects in outdoor applications and sets out foundations for future work on 3D perception for challenging outdoor applications such as agriculture.}
    }
  • R. Polvara, M. Patacchiola, M. Hanheide, and G. Neumann, “Sim-to-real quadrotor landing via sequential deep q-networks and domain randomization,” Robotics, vol. 9, iss. 1, 2020. doi:10.3390/robotics9010008
    [BibTeX] [Abstract] [Download PDF]

    The autonomous landing of an Unmanned Aerial Vehicle (UAV) on a marker is one of the most challenging problems in robotics. Many solutions have been proposed, with the best results achieved via customized geometric features and external sensors. This paper discusses for the first time the use of deep reinforcement learning as an end-to-end learning paradigm to find a policy for autonomous UAV landing. Our method is based on a divide-and-conquer paradigm that splits a task into sequential sub-tasks, each one assigned to a Deep Q-Network (DQN), hence the name Sequential Deep Q-Network (SDQN). Each DQN in an SDQN is activated by an internal trigger, and it represents a component of a high-level control policy, which can navigate the UAV towards the marker. Different technical solutions have been implemented, for example combining vanilla and double DQNs, and the introduction of a partitioned buffer replay to address the problem of sample efficiency. One of the main contributions of this work consists in showing how an SDQN trained in a simulator via domain randomization, can effectively generalize to real-world scenarios of increasing complexity. The performance of SDQNs is comparable with a state-of-the-art algorithm and human pilots while being quantitatively better in noisy conditions.

    @article{lincoln40216,
    volume = {9},
    number = {1},
    month = {February},
    author = {Riccardo Polvara and Massimiliano Patacchiola and Marc Hanheide and Gerhard Neumann},
    title = {Sim-to-Real Quadrotor Landing via Sequential Deep Q-Networks and Domain Randomization},
    publisher = {MDPI},
    year = {2020},
    journal = {Robotics},
    doi = {10.3390/robotics9010008},
    url = {https://eprints.lincoln.ac.uk/id/eprint/40216/},
    abstract = {The autonomous landing of an Unmanned Aerial Vehicle (UAV) on a marker is one of the most challenging problems in robotics. Many solutions have been proposed, with the best results achieved via customized geometric features and external sensors. This paper discusses for the first time the use of deep reinforcement learning as an end-to-end learning paradigm to find a policy for UAVs autonomous landing. Our method is based on a divide-and-conquer paradigm that splits a task into sequential sub-tasks, each one assigned to a Deep Q-Network (DQN), hence the name Sequential Deep Q-Network (SDQN). Each DQN in an SDQN is activated by an internal trigger, and it represents a component of a high-level control policy, which can navigate the UAV towards the marker. Different technical solutions have been implemented, for example combining vanilla and double DQNs, and the introduction of a partitioned buffer replay to address the problem of sample efficiency. One of the main contributions of this work consists in showing how an SDQN trained in a simulator via domain randomization, can effectively generalize to real-world scenarios of increasing complexity. The performance of SDQNs is comparable with a state-of-the-art algorithm and human pilots while being quantitatively better in noisy conditions.}
    }
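
    The divide-and-conquer structure is easy to sketch: a chain of sub-policies, each responsible for one sub-task of the landing, with an internal trigger deciding when control passes to the next network. In the paper each sub-policy is a trained (vanilla or double) DQN; the plain-callable interface below is an assumption made for illustration.

        class SequentialDQN:
            """Chain of sub-task policies activated one after another by triggers."""

            def __init__(self, policies, triggers):
                self.policies = policies  # list of state -> action callables
                self.triggers = triggers  # triggers[i](state) is True when stage i is done
                self.stage = 0

            def act(self, state):
                # Hand control to the next sub-policy once the current sub-task ends.
                if self.stage < len(self.triggers) and self.triggers[self.stage](state):
                    self.stage += 1
                return self.policies[min(self.stage, len(self.policies) - 1)](state)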
  • R. Kirk, M. Mangan, and G. Cielniak, “Feasibility study of in-field phenotypic trait extraction for robotic soft-fruit operations,” in Ukras20 conference: ‘robots into the real world’ proceedings, 2020, p. 21–23. doi:10.31256/Uk4Td6I
    [BibTeX] [Abstract] [Download PDF]

    There are many agricultural applications that would benefit from robotic monitoring of soft-fruit; examples include harvesting and yield forecasting. Autonomous mobile robotic platforms enable digitisation of horticultural processes in-field reducing labour demand and increasing efficiency through continuous operation. It is critical for vision-based fruit detection methods to estimate traits such as size, mass and volume for quality assessment, maturity estimation and yield forecasting. Estimating these traits from a camera mounted on a mobile robot is a non-destructive/invasive approach to gathering qualitative fruit data in-field. We investigate the feasibility of using vision-based modalities for precise, cheap, and real time computation of phenotypic traits: mass and volume of strawberries from planar RGB slices and optionally point data. Our best method achieves a marginal error of 3.00 cm³ for volume estimation. The planar RGB slices can be computed manually or by using common object detection methods such as Mask R-CNN.

    @inproceedings{lincoln42101,
    month = {February},
    author = {Raymond Kirk and Michael Mangan and Grzegorz Cielniak},
    booktitle = {UKRAS20 Conference: 'Robots into the real world' Proceedings},
    title = {Feasibility Study of In-Field Phenotypic Trait Extraction for Robotic Soft-Fruit Operations},
    publisher = {UKRAS},
    doi = {10.31256/Uk4Td6I},
    pages = {21--23},
    year = {2020},
    url = {https://eprints.lincoln.ac.uk/id/eprint/42101/},
    abstract = {There are many agricultural applications that would benefit from robotic monitoring of soft-fruit; examples include harvesting and yield forecasting. Autonomous mobile robotic platforms enable digitisation of horticultural processes in-field reducing labour demand and increasing efficiency through continuous operation. It is critical for vision-based fruit detection methods to estimate traits such as size, mass and volume for quality assessment, maturity estimation and yield forecasting. Estimating these traits from a camera mounted on a mobile robot is a non-destructive/invasive approach to gathering qualitative fruit data in-field. We investigate the feasibility of using vision-based modalities for precise, cheap, and real time computation of phenotypic traits: mass and volume of strawberries from planar RGB slices and optionally point data. Our best method achieves a marginal error of 3.00 cm$^3$ for volume estimation. The planar RGB slices can be computed manually or by using common object detection methods such as Mask R-CNN.}
    }
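
    A worked example of how a planar slice can yield a volume estimate: if the berry is modelled as a solid of revolution about its vertical axis, each pixel row of the segmentation mask contributes a disc whose radius is half that row's width. Both the solid-of-revolution assumption and the pixel scale are illustrative here, not the paper's calibration.

        import numpy as np

        def volume_from_mask(mask, px_to_cm=0.05):
            # Volume (cm^3) of the silhouette revolved about its vertical axis.
            widths_cm = mask.astype(bool).sum(axis=1) * px_to_cm  # per-row widths
            radii = widths_cm / 2.0
            # Each row is a disc of thickness px_to_cm: V = sum(pi * r^2 * dz).
            return float(np.sum(np.pi * radii ** 2) * px_to_cm)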
  • M. Bartlett, C. Costescu, P. Baxter, and S. Thill, “Requirements for robotic interpretation of social signals ‘in the wild’: insights from diagnostic criteria of autism spectrum disorder,” Mdpi information, vol. 11, iss. 81, p. 1–20, 2020. doi:10.3390/info11020081
    [BibTeX] [Abstract] [Download PDF]

    The last few decades have seen widespread advances in technological means to characterise observable aspects of human behaviour such as gaze or posture. Among others, these developments have also led to significant advances in social robotics. At the same time, however, social robots are still largely evaluated in idealised or laboratory conditions, and it remains unclear whether the technological progress is sufficient to let such robots move ‘into the wild’. In this paper, we characterise the problems that a social robot in the real world may face, and review the technological state of the art in terms of addressing these. We do this by considering what it would entail to automate the diagnosis of Autism Spectrum Disorder (ASD). Just as for social robotics, ASD diagnosis fundamentally requires the ability to characterise human behaviour from observable aspects. However, therapists provide clear criteria regarding what to look for. As such, ASD diagnosis is a situation that is both relevant to real-world social robotics and comes with clear metrics. Overall, we demonstrate that even with relatively clear therapist-provided criteria and current technological progress, the need to interpret covert behaviour cannot yet be fully addressed. Our discussions have clear implications for ASD diagnosis, but also for social robotics more generally. For ASD diagnosis, we provide a classification of criteria based on whether or not they depend on covert information and highlight present-day possibilities for supporting therapists in diagnosis through technological means. For social robotics, we highlight the fundamental role of covert behaviour, show that the current state-of-the-art is unable to characterise this, and emphasise that future research should tackle this explicitly in realistic settings.

    @article{lincoln40108,
    volume = {11},
    number = {81},
    month = {February},
    author = {M Bartlett and C Costescu and Paul Baxter and S Thill},
    title = {Requirements for Robotic Interpretation of Social Signals 'in the Wild': Insights from Diagnostic Criteria of Autism Spectrum Disorder},
    publisher = {MDPI},
    year = {2020},
    journal = {MDPI Information},
    doi = {10.3390/info11020081},
    pages = {1--20},
    url = {https://eprints.lincoln.ac.uk/id/eprint/40108/},
    abstract = {The last few decades have seen widespread advances in technological means to characterise
    observable aspects of human behaviour such as gaze or posture. Among others, these developments
    have also led to significant advances in social robotics. At the same time, however, social robots
    are still largely evaluated in idealised or laboratory conditions, and it remains unclear whether
    the technological progress is sufficient to let such robots move 'into the wild'. In this paper, we
    characterise the problems that a social robot in the real world may face, and review the technological
    state of the art in terms of addressing these. We do this by considering what it would entail
    to automate the diagnosis of Autism Spectrum Disorder (ASD). Just as for social robotics, ASD
    diagnosis fundamentally requires the ability to characterise human behaviour from observable
    aspects. However, therapists provide clear criteria regarding what to look for. As such, ASD diagnosis
    is a situation that is both relevant to real-world social robotics and comes with clear metrics. Overall,
    we demonstrate that even with relatively clear therapist-provided criteria and current technological
    progress, the need to interpret covert behaviour cannot yet be fully addressed. Our discussions have
    clear implications for ASD diagnosis, but also for social robotics more generally. For ASD diagnosis,
    we provide a classification of criteria based on whether or not they depend on covert information
    and highlight present-day possibilities for supporting therapists in diagnosis through technological
    means. For social robotics, we highlight the fundamental role of covert behaviour, show that the
    current state-of-the-art is unable to characterise this, and emphasise that future research should tackle
    this explicitly in realistic settings.}
    }
  • B. Chen, J. Huang, Y. Huang, S. Kollias, and S. Yue, “Combining guaranteed and spot markets in display advertising: selling guaranteed page views with stochastic demand,” European journal of operational research, vol. 280, iss. 3, p. 1144–1159, 2020. doi:10.1016/j.ejor.2019.07.067
    [BibTeX] [Abstract] [Download PDF]

    While page views are often sold instantly through real-time auctions when users visit Web pages, they can also be sold in advance via guaranteed contracts. In this paper, we combine guaranteed and spot markets in display advertising, and present a dynamic programming model to study how a media seller should optimally allocate and price page views between guaranteed contracts and advertising auctions. This optimisation problem is challenging because the allocation and pricing of guaranteed contracts endogenously affects the expected revenue from advertising auctions in the future. We take into consideration several distinct characteristics regarding the media buyers' purchasing behaviour, such as risk aversion and stochastic demand arrivals, and devise a scalable and efficient algorithm to solve the optimisation problem. Our work is one of a few studies that investigate the auction-based posted price guaranteed contracts for display advertising. The proposed model is further empirically validated with a display advertising data set from a UK supply-side platform. The results show that the optimal pricing and allocation strategies from our model can significantly increase the media seller's expected total revenue, and the model suggests different optimal strategies based on the level of competition in advertising auctions.

    @article{lincoln39575,
    volume = {280},
    number = {3},
    month = {February},
    author = {Bowei Chen and Jingmin Huang and Yufei Huang and Stefanos Kollias and Shigang Yue},
    title = {Combining guaranteed and spot markets in display advertising: Selling guaranteed page views with stochastic demand},
    publisher = {Elsevier},
    year = {2020},
    journal = {European Journal of Operational Research},
    doi = {10.1016/j.ejor.2019.07.067},
    pages = {1144--1159},
    url = {https://eprints.lincoln.ac.uk/id/eprint/39575/},
    abstract = {While page views are often sold instantly through real-time auctions when users visit Web pages, they can also be sold in advance via guaranteed contracts. In this paper, we combine guaranteed and spot markets in display advertising, and present a dynamic programming model to study how a media seller should optimally allocate and price page
    views between guaranteed contracts and advertising auctions. This optimisation problem is challenging because the allocation and pricing of guaranteed contracts endogenously affects the expected revenue from advertising auctions in the future. We take into consideration several distinct characteristics regarding the media buyers' purchasing behaviour, such as risk aversion and stochastic demand arrivals, and devise a scalable and efficient algorithm to solve the optimisation problem. Our work is one of a few studies that investigate the auction-based posted price guaranteed contracts for display advertising. The proposed model is further empirically validated with a display advertising data set from a UK supply-side platform. The results show that the optimal pricing and allocation strategies from our model can significantly increase the media seller's expected total revenue, and the model suggests different optimal strategies based on the level of competition in advertising auctions.}
    }
  • J. P. Fentanes, A. Badiee, T. Duckett, J. Evans, S. Pearson, and G. Cielniak, "Kriging-based robotic exploration for soil moisture mapping using a cosmic-ray sensor," Journal of field robotics, vol. 37, iss. 1, p. 122–136, 2020. doi:10.1002/rob.21914
    [BibTeX] [Abstract] [Download PDF]

    Soil moisture monitoring is a fundamental process to enhance agricultural outcomes and to protect the environment. The traditional methods for measuring moisture content in the soil are laborious and expensive, and therefore there is a growing interest in developing sensors and technologies which can reduce the effort and costs. In this work, we propose to use an autonomous mobile robot equipped with a state-of-the-art noncontact soil moisture sensor building moisture maps on the fly and automatically selecting the most optimal sampling locations. We introduce an autonomous exploration strategy driven by the quality of the soil moisture model indicating areas of the field where the information is less precise. The sensor model follows the Poisson distribution and we demonstrate how to integrate such measurements into the kriging framework. We also investigate a range of different exploration strategies and assess their usefulness through a set of evaluation experiments based on real soil moisture data collected from two different fields. We demonstrate the benefits of using the adaptive measurement interval and adaptive sampling strategies for building better quality soil moisture models. The presented method is general and can be applied to other scenarios where the measured phenomena directly affect the acquisition time and need to be spatially mapped.

    @article{lincoln37350,
    volume = {37},
    number = {1},
    month = {January},
    author = {Jaime Pulido Fentanes and Amir Badiee and Tom Duckett and Jonathan Evans and Simon Pearson and Grzegorz Cielniak},
    title = {Kriging-based robotic exploration for soil moisture mapping using a cosmic-ray sensor},
    publisher = {Wiley Periodicals, Inc.},
    year = {2020},
    journal = {Journal of Field Robotics},
    doi = {10.1002/rob.21914},
    pages = {122--136},
    url = {https://eprints.lincoln.ac.uk/id/eprint/37350/},
    abstract = {Soil moisture monitoring is a fundamental process to enhance agricultural outcomes and to protect the environment. The traditional methods for measuring moisture content in the soil are laborious and expensive, and therefore there is a growing interest in developing sensors and technologies which can reduce the effort and costs. In this work, we propose to use an autonomous mobile robot equipped with a state-of-the-art noncontact soil moisture sensor building moisture maps on the fly and automatically selecting the most optimal sampling locations. We introduce an autonomous exploration strategy driven by the quality of the soil moisture model indicating areas of the field where the information is less precise. The sensor model follows the Poisson distribution and we demonstrate how to integrate such measurements into the kriging framework. We also investigate a range of different exploration strategies and assess their usefulness through a set of evaluation experiments based on real soil moisture data collected from two different fields. We demonstrate the benefits of using the adaptive measurement interval and adaptive sampling strategies for building better quality soil moisture models. The presented method is general and can be applied to other scenarios where the measured phenomena directly affect the acquisition time and need to be spatially mapped.}
    }
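
    The exploration strategy above picks new samples where the soil moisture model is least certain. As a rough, generic illustration of uncertainty-driven sampling (a Gaussian-process stand-in for the paper's kriging framework, without its Poisson sensor model; the field size, kernel and readings are invented), one might write:

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(0)
        X = rng.uniform(0, 100, size=(8, 2))          # visited locations (m), toy
        y = 0.2 + 0.01 * X[:, 0] + 0.02 * rng.standard_normal(8)  # toy readings

        gp = GaussianProcessRegressor(kernel=RBF(20.0) + WhiteKernel(1e-3),
                                      normalize_y=True).fit(X, y)

        # Evaluate predictive uncertainty on a grid; sample next where it peaks.
        xx, yy = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
        grid = np.column_stack([xx.ravel(), yy.ravel()])
        _, std = gp.predict(grid, return_std=True)
        print("next sampling location:", grid[np.argmax(std)])
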
  • P. Chudzik, A. Mitchell, M. Alkaseem, Y. Wu, S. Fang, T. Hudaib, S. Pearson, and B. Al-Diri, “Mobile real-time grasshopper detection and data aggregation framework,” Scientific reports, vol. 10, p. 1150, 2020. doi:10.1038/s41598-020-57674-8
    [BibTeX] [Abstract] [Download PDF]

    Insects of the family Orthoptera: Acrididae, including grasshoppers and locusts, devastate crops and ecosystems around the globe. The effective control of these insects requires large numbers of trained extension agents who try to spot concentrations of the insects on the ground so that they can be destroyed before they take flight. This is a challenging and difficult task. No automatic detection system is yet available to increase scouting productivity, data scale and fidelity. Here we demonstrate MAESTRO, a novel grasshopper detection framework that deploys deep learning within RGB images to detect insects. MAESTRO uses a state-of-the-art two-stage training deep learning approach. The framework can be deployed not only on desktop computers but also on edge devices without internet connection such as smartphones. MAESTRO can gather data using cloud storage for further research and in-depth analysis. In addition, we provide a challenging new open dataset (GHCID) of highly variable grasshopper populations imaged in Inner Mongolia. The detection performance of the stationary method and the mobile app are 78 and 49 percent respectively; the stationary method requires around 1000 ms to analyze a single image, whereas the mobile app uses only around 400 ms per image. The algorithms are purely data-driven and can be used for other detection tasks in agriculture (e.g. plant disease detection) and beyond. This system can play a crucial role in the collection and analysis of data to enable more effective control of this critical global pest.

    @article{lincoln39125,
    volume = {10},
    month = {January},
    author = {Piotr Chudzik and Arthur Mitchell and Mohammad Alkaseem and Yingie Wu and Shibo Fang and Taghread Hudaib and Simon Pearson and Bashir Al-Diri},
    title = {Mobile Real-Time Grasshopper Detection and Data Aggregation Framework},
    publisher = {Springer},
    year = {2020},
    journal = {Scientific Reports},
    doi = {10.1038/s41598-020-57674-8},
    pages = {1150},
    url = {https://eprints.lincoln.ac.uk/id/eprint/39125/},
    abstract = {Insects of the family Orthoptera: Acrididae, including grasshoppers and locusts, devastate crops and ecosystems around the globe. The effective control of these insects requires large numbers of trained extension agents who try to spot concentrations of the insects on the ground so that they can be destroyed before they take flight. This is a challenging and difficult task. No automatic detection system is yet available to increase scouting productivity, data scale and fidelity. Here we demonstrate MAESTRO, a novel grasshopper detection framework that deploys deep learning within RGB images to detect insects. MAESTRO uses a state-of-the-art two-stage training deep learning approach. The framework can be deployed not only on desktop computers but also on edge devices without internet connection such as smartphones. MAESTRO can gather data using cloud storage for further research and in-depth analysis. In addition, we provide a challenging new open dataset (GHCID) of highly variable grasshopper populations imaged in Inner Mongolia. The detection performance of the stationary method and the mobile app are 78 and 49 percent respectively; the stationary method requires around 1000 ms to analyze a single image, whereas the mobile app uses only around 400 ms per image. The algorithms are purely data-driven and can be used for other detection tasks in agriculture (e.g. plant disease detection) and beyond. This system can play a crucial role in the collection and analysis of data to enable more effective control of this critical global pest.}
    }
  • R. Kirk, G. Cielniak, and M. Mangan, “L*a*b*fruits: a rapid and robust outdoor fruit detection system combining bio-inspired features with one-stage deep learning networks,” Sensors, vol. 20, iss. 1, p. 275, 2020. doi:10.3390/s20010275
    [BibTeX] [Abstract] [Download PDF]

    Automation of agricultural processes requires systems that can accurately detect and classify produce in real industrial environments that include variation in fruit appearance due to illumination, occlusion, seasons, weather conditions, etc. In this paper, we combine a visual processing approach inspired by colour-opponent theory in humans with recent advancements in one-stage deep learning networks to accurately, rapidly and robustly detect ripe soft fruits (strawberries) in real industrial settings and using standard (RGB) camera input. The resultant system was tested on an existing data-set captured in controlled conditions as well as our new real-world data-set captured on a real strawberry farm over two months. We utilise F1 score, the harmonic mean of precision and recall, to show our system matches the state-of-the-art detection accuracy (F1: 0.793 vs. 0.799) in controlled conditions; has greater generalisation and robustness to variation of spatial parameters (camera viewpoint) in the real-world data-set (F1: 0.744); and at a fraction of the computational cost allowing classification at almost 30 fps. We propose that the L*a*b*Fruits system addresses some of the most pressing limitations of current fruit detection systems and is well-suited to application in areas such as yield forecasting and harvesting. Beyond the target application in agriculture, this work also provides a proof-of-principle whereby increased performance is achieved through analysis of the domain data, capturing features at the input level rather than simply increasing model complexity.

    @article{lincoln39423,
    volume = {20},
    number = {1},
    month = {January},
    author = {Raymond Kirk and Grzegorz Cielniak and Michael Mangan},
    title = {L*a*b*Fruits: A Rapid and Robust Outdoor Fruit Detection System Combining Bio-Inspired Features with One-Stage Deep Learning Networks},
    publisher = {MDPI},
    year = {2020},
    journal = {Sensors},
    doi = {10.3390/s20010275},
    pages = {275},
    url = {https://eprints.lincoln.ac.uk/id/eprint/39423/},
    abstract = {Automation of agricultural processes requires systems that can accurately detect and classify produce in real industrial environments that include variation in fruit appearance due to illumination, occlusion, seasons, weather conditions, etc. In this paper, we combine a visual processing approach inspired by colour-opponent theory in humans with recent advancements in one-stage deep learning networks to accurately, rapidly and robustly detect ripe soft fruits (strawberries) in real industrial settings and using standard (RGB) camera input. The resultant system was tested on an existing data-set captured in controlled conditions as well as our new real-world data-set captured on a real strawberry farm over two months. We utilise F1 score, the harmonic mean of precision and recall, to show our system matches the state-of-the-art detection accuracy (F1: 0.793 vs. 0.799) in controlled conditions; has greater generalisation and robustness to variation of spatial parameters (camera viewpoint) in the real-world data-set (F1: 0.744); and at a fraction of the computational cost allowing classification at almost 30 fps. We propose that the L*a*b*Fruits system addresses some of the most pressing limitations of current fruit detection systems and is well-suited to application in areas such as yield forecasting and harvesting. Beyond the target application in agriculture, this work also provides a proof-of-principle whereby increased performance is achieved through analysis of the domain data, capturing features at the input level rather than simply increasing model complexity.}
    }
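
    The F1 scores quoted above are the harmonic mean of precision and recall, so a detector only scores highly when it is strong on both. A minimal computation (the counts are hypothetical, not taken from the paper):

        def f1_score(tp, fp, fn):
            """F1 = harmonic mean of precision and recall."""
            precision = tp / (tp + fp)
            recall = tp / (tp + fn)
            return 2 * precision * recall / (precision + recall)

        print(round(f1_score(tp=152, fp=37, fn=42), 3))  # e.g. 0.794
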
  • P. Bosilj, E. Aptoula, T. Duckett, and G. Cielniak, “Transfer learning between crop types for semantic segmentation of crops versus weeds in precision agriculture,” Journal of field robotics, vol. 37, iss. 1, p. 7–19, 2020. doi:10.1002/rob.21869
    [BibTeX] [Abstract] [Download PDF]

    Agricultural robots rely on semantic segmentation for distinguishing between crops and weeds in order to perform selective treatments, increase yield and crop health while reducing the amount of chemicals used. Deep learning approaches have recently achieved both excellent classification performance and real-time execution. However, these techniques also rely on a large amount of training data, requiring a substantial labelling effort, both of which are scarce in precision agriculture. Additional design efforts are required to achieve commercially viable performance levels under varying environmental conditions and crop growth stages. In this paper, we explore the role of knowledge transfer between deep-learning-based classifiers for different crop types, with the goal of reducing the retraining time and labelling efforts required for a new crop. We examine the classification performance on three datasets with different crop types and containing a variety of weeds, and compare the performance and retraining efforts required when using data labelled at pixel level with partially labelled data obtained through a less time-consuming procedure of annotating the segmentation output. We show that transfer learning between different crop types is possible, and reduces training times by up to 80%. Furthermore, we show that even when the data used for re-training is imperfectly annotated, the classification performance is within 2% of that of networks trained with laboriously annotated pixel-precision data.

    @article{lincoln35535,
    volume = {37},
    number = {1},
    month = {January},
    author = {Petra Bosilj and Erchan Aptoula and Tom Duckett and Grzegorz Cielniak},
    title = {Transfer learning between crop types for semantic segmentation of crops versus weeds in precision agriculture},
    publisher = {Wiley},
    year = {2020},
    journal = {Journal of Field Robotics},
    doi = {10.1002/rob.21869},
    pages = {7--19},
    url = {https://eprints.lincoln.ac.uk/id/eprint/35535/},
    abstract = {Agricultural robots rely on semantic segmentation for distinguishing between crops and weeds in order to perform selective treatments, increase yield and crop health while reducing the amount of chemicals used. Deep learning approaches have recently achieved both excellent classification performance and real-time execution. However, these techniques also rely on a large amount of training data, requiring a substantial labelling effort, both of which are scarce in precision agriculture. Additional design efforts are required to achieve commercially viable performance levels under varying environmental conditions and crop growth stages. In this paper, we explore the role of knowledge transfer between deep-learning-based classifiers for different crop types, with the goal of reducing the retraining time and labelling efforts required for a new crop. We examine the classification performance on three datasets with different crop types and containing a variety of weeds, and compare the performance and retraining efforts required when using data labelled at pixel level with partially labelled data obtained through a less time-consuming procedure of annotating the segmentation output. We show that transfer learning between different crop types is possible, and reduces training times by up to 80\%. Furthermore, we show that even when the data used for re-training is imperfectly annotated, the classification performance is within 2\% of that of networks trained with laboriously annotated pixel-precision data.}
    }
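
    The transfer-learning idea above amounts to reusing weights learned on one crop and retraining only part of the network for the next. The following is a hedged PyTorch sketch of that general recipe, using an ImageNet-pretrained classifier as a stand-in for the paper's segmentation networks; the class count and learning rate are illustrative only.

        import torch
        import torchvision

        # Start from pretrained weights (here ImageNet; in the paper's setting
        # these would come from a network trained on the source crop).
        model = torchvision.models.resnet18(
            weights=torchvision.models.ResNet18_Weights.DEFAULT)
        for p in model.parameters():      # freeze the feature extractor
            p.requires_grad = False
        # New head for the target task, e.g. crop / weed / soil.
        model.fc = torch.nn.Linear(model.fc.in_features, 3)

        optimiser = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
        # ...then train only the head (or the last few blocks) on the new crop.
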
  • C. Coppola, S. Cosar, D. R. Faria, and N. Bellotto, “Social activity recognition on continuous rgb-d video sequences,” International journal of social robotics, p. 1–15, 2020. doi:10.1007/s12369-019-00541-y
    [BibTeX] [Abstract] [Download PDF]

    Modern service robots are provided with one or more sensors, often including RGB-D cameras, to perceive objects and humans in the environment. This paper proposes a new system for the recognition of human social activities from a continuous stream of RGB-D data. Many of the works until now have succeeded in recognising activities from clipped videos in datasets, but for robotic applications it is important to be able to move to more realistic scenarios in which such activities are not manually selected. For this reason, it is useful to detect the time intervals when humans are performing social activities, the recognition of which can help to trigger human-robot interactions or to detect situations of potential danger. The main contributions of this research work include a novel system for the recognition of social activities from continuous RGB-D data, combining temporal segmentation and classification, as well as a model for learning the proximity-based priors of the social activities. A new public dataset with RGB-D videos of social and individual activities is also provided and used for evaluating the proposed solutions. The results show the good performance of the system in recognising social activities from continuous RGB-D data.

    @article{lincoln35151,
    month = {January},
    author = {Claudio Coppola and Serhan Cosar and Diego R. Faria and Nicola Bellotto},
    title = {Social Activity Recognition on Continuous RGB-D Video Sequences},
    publisher = {Springer},
    journal = {International Journal of Social Robotics},
    doi = {10.1007/s12369-019-00541-y},
    pages = {1--15},
    year = {2020},
    url = {https://eprints.lincoln.ac.uk/id/eprint/35151/},
    abstract = {Modern service robots are provided with one or more sensors, often including RGB-D cameras, to perceive objects and humans in the environment. This paper proposes a new system for the recognition of human social activities from a continuous stream of RGB-D data. Many of the works until now have succeeded in recognising activities from clipped videos in datasets, but for robotic applications it is important to be able to move to more realistic scenarios in which such activities are not manually selected. For this reason, it is useful to detect the time intervals when humans are performing social activities, the recognition of which can help to trigger human-robot interactions or to detect situations of potential danger. The main contributions of this research work include a novel system for the recognition of social activities from continuous RGB-D data, combining temporal segmentation and classification, as well as a model for learning the proximity-based priors of the social activities. A new public dataset with RGB-D videos of social and individual activities is also provided and used for evaluating the proposed solutions. The results show the good performance of the system in recognising social activities from continuous RGB-D data.}
    }
  • Z. Yan, T. Duckett, and N. Bellotto, “Online learning for 3d lidar-based human detection: experimental analysis of point cloud clustering and classification methods,” Autonomous robots, vol. 44, iss. 2, p. 147–164, 2020. doi:10.1007/s10514-019-09883-y
    [BibTeX] [Abstract] [Download PDF]

    This paper presents a system for online learning of human classifiers by mobile service robots using 3D LiDAR sensors, and its experimental evaluation in a large indoor public space. The learning framework requires a minimal set of labelled samples (e.g. one or several samples) to initialise a classifier. The classifier is then retrained iteratively during operation of the robot. New training samples are generated automatically using multi-target tracking and a pair of “experts” to estimate false negatives and false positives. Both classification and tracking utilise an efficient real-time clustering algorithm for segmentation of 3D point cloud data. We also introduce a new feature to improve human classification in sparse, long-range point clouds. We provide an extensive evaluation of our framework using a 3D LiDAR dataset of people moving in a large indoor public space, which is made available to the research community. The experiments demonstrate the influence of the system components and improved classification of humans compared to the state-of-the-art.

    @article{lincoln36535,
    volume = {44},
    number = {2},
    month = {January},
    author = {Zhi Yan and Tom Duckett and Nicola Bellotto},
    title = {Online Learning for 3D LiDAR-based Human Detection: Experimental Analysis of Point Cloud Clustering and Classification Methods},
    publisher = {Springer},
    year = {2020},
    journal = {Autonomous Robots},
    doi = {10.1007/s10514-019-09883-y},
    pages = {147--164},
    url = {https://eprints.lincoln.ac.uk/id/eprint/36535/},
    abstract = {This paper presents a system for online learning of human classifiers by mobile service robots using 3D LiDAR sensors, and its experimental evaluation in a large indoor public space. The learning framework requires a minimal set of labelled samples (e.g. one or several samples) to initialise a classifier. The classifier is then retrained iteratively during operation of the robot. New training samples are generated automatically using multi-target tracking and a pair of "experts" to estimate false negatives and false positives. Both classification and tracking utilise an efficient real-time clustering algorithm for segmentation of 3D point cloud data. We also introduce a new feature to improve human classification in sparse, long-range point clouds. We provide an extensive evaluation of our framework using a 3D LiDAR dataset of people moving in a large indoor public space, which is made available to the research community. The experiments demonstrate the influence of the system components and improved classification of humans compared to the state-of-the-art.}
    }
  • F. Camara and C. Fox, “Space invaders: pedestrian proxemic utility functions and trust zones for autonomous vehicle interactions,” International journal of social robotics, 2020. doi:10.1007/s12369-020-00717-x
    [BibTeX] [Abstract] [Download PDF]

    Understanding pedestrian proxemic utility and trust will help autonomous vehicles to plan and control interactions with pedestrians more safely and efficiently. When pedestrians cross the road in front of human-driven vehicles, the two agents use knowledge of each other's preferences to negotiate and to determine who will yield to the other. Autonomous vehicles will require similar understandings, but previous work has shown a need for them to be provided in the form of continuous proxemic utility functions, which are not available from previous proxemics studies based on Hall's discrete zones. To fill this gap, a new Bayesian method to infer continuous pedestrian proxemic utility functions is proposed, and related to a new definition of "physical trust requirement" (PTR) for road-crossing scenarios. The method is validated on simulation data, then its parameters are inferred empirically from two public datasets. Results show that pedestrian proxemic utility is best described by a hyperbolic function, and that trust by the pedestrian is required in a discrete "trust zone" which emerges naturally from simple physics. The PTR concept is then shown to be capable of generating and explaining the empirically observed zone sizes of Hall's discrete theory of proxemics.

    @article{lincoln42876,
    title = {Space Invaders: Pedestrian Proxemic Utility Functions and Trust Zones for Autonomous Vehicle Interactions},
    author = {Fanta Camara and Charles Fox},
    publisher = {Springer},
    year = {2020},
    doi = {10.1007/s12369-020-00717-x},
    journal = {International Journal of Social Robotics},
    url = {https://eprints.lincoln.ac.uk/id/eprint/42876/},
    abstract = {Understanding pedestrian proxemic utility and trust will help autonomous vehicles to plan and control interactions with pedestrians more safely and efficiently. When pedestrians cross the road in front of human-driven vehicles, the two agents use knowledge of each other's preferences to negotiate and to determine who will yield to the other. Autonomous vehicles will require similar understandings, but previous work has shown a need for them to be provided in the form of continuous proxemic utility functions, which are not available from previous proxemics studies based on Hall's discrete zones. To fill this gap, a new Bayesian method to infer continuous pedestrian proxemic utility functions is proposed, and related to a new definition of "physical trust requirement" (PTR) for road-crossing scenarios. The method is validated on simulation data, then its parameters are inferred empirically from two public datasets. Results show that pedestrian proxemic utility is best described by a hyperbolic function, and that trust by the pedestrian is required in a discrete "trust zone" which emerges naturally from simple physics. The PTR concept is then shown to be capable of generating and explaining the empirically observed zone sizes of Hall's discrete theory of proxemics.}
    }
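
    The "trust zone" above falls out of simple physics: once a pedestrian stands inside the vehicle's minimum stopping distance, no braking decision taken after that point can avoid them, so crossing requires trust that the vehicle has already yielded. A toy rendering of that idea (the hyperbolic utility form follows the abstract; the gain and deceleration limit are invented):

        def braking_distance(v, a_max=7.0):
            """Minimum stopping distance (m) at speed v (m/s), deceleration a_max."""
            return v * v / (2.0 * a_max)

        def requires_trust(gap_m, v):
            """True once the vehicle can no longer stop short of the pedestrian."""
            return gap_m < braking_distance(v)

        def hyperbolic_utility(d, k=1.0):
            """Illustrative hyperbolic proxemic cost, growing as distance d shrinks."""
            return -k / d

        print(requires_trust(gap_m=10.0, v=13.4))  # ~30 mph, 10 m ahead -> True
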
  • F. Lei, Z. Peng, V. Cutsuridis, M. Liu, Y. Zhang, and S. Yue, “Competition between on and off neural pathways enhancing collision selectivity,” in Ieee wcci 2020-ijcnn regular session, 2020. doi:10.1109/IJCNN48605.2020.9207131
    [BibTeX] [Abstract] [Download PDF]

    The LGMD1 neuron of locusts shows a strong looming-sensitive property for both light and dark objects. Although a few LGMD1 models have been proposed, they are not reliable in inhibiting the translating motion under certain conditions compared to the biological LGMD1 in the locust. To address this issue, we propose a bio-plausible model to enhance the collision selectivity by inhibiting the translating motion. The proposed model contains three parts: the retina to lamina layer for receiving luminance change signals, the lamina to medulla layer for extracting motion cues via ON and OFF pathways separately, and the medulla to lobula layer for eliminating translational excitation with neural competition. We tested the model with synthetic stimuli and real physical stimuli. The experimental results demonstrate that the proposed LGMD1 model has a strong preference for objects on a direct collision course: it can detect looming objects in different conditions while completely ignoring translating objects.

    @inproceedings{lincoln41701,
    booktitle = {IEEE WCCI 2020-IJCNN regular session},
    title = {Competition between ON and OFF Neural Pathways Enhancing Collision Selectivity},
    author = {Fang Lei and Zhiping Peng and Vassilis Cutsuridis and Mei Liu and Yicheng Zhang and Shigang Yue},
    year = {2020},
    doi = {10.1109/IJCNN48605.2020.9207131},
    url = {https://eprints.lincoln.ac.uk/id/eprint/41701/},
    abstract = {The LGMD1 neuron of locusts shows a strong looming-sensitive property for both light and dark objects. Although a few LGMD1 models have been proposed, they are not reliable in inhibiting the translating motion under certain conditions compared to the biological LGMD1 in the locust. To address this issue, we propose a bio-plausible model to enhance the collision selectivity by inhibiting the translating motion. The proposed model contains three parts: the retina to lamina layer for receiving luminance change signals, the lamina to medulla layer for extracting motion cues via ON and OFF pathways separately, and the medulla to lobula layer for eliminating translational excitation with neural competition. We tested the model with synthetic stimuli and real physical stimuli. The experimental results demonstrate that the proposed LGMD1 model has a strong preference for objects on a direct collision course: it can detect looming objects in different conditions while completely ignoring translating objects.}
    }
  • J. Lock, I. Gilchrist, G. Cielniak, and N. Bellotto, “Experimental analysis of a spatialised audio interface for people with visual impairments,” Acm transactions on accessible computing, 2020.
    [BibTeX] [Abstract] [Download PDF]

    Sound perception is a fundamental skill for many people with severe sight impairments. The research presented in this paper is part of an ongoing project with the aim to create a mobile guidance aid to help people with vision impairments find objects within an unknown indoor environment. This system requires an effective non-visual interface and uses bone-conduction headphones to transmit audio instructions to the user. It has been implemented and tested with spatialised audio cues, which convey the direction of a predefined target in 3D space. We present an in-depth evaluation of the audio interface with several experiments that involve a large number of participants, both blindfolded and with actual visual impairments, and analyse the pros and cons of our design choices. In addition to producing results comparable to the state-of-the-art, we found that Fitts's Law (a predictive model for human movement) provides a suitable metric that can be used to improve and refine the quality of the audio interface in future mobile navigation aids.

    @article{lincoln41544,
    title = {Experimental Analysis of a Spatialised Audio Interface for People with Visual Impairments},
    author = {Jacobus Lock and Iain Gilchrist and Grzegorz Cielniak and Nicola Bellotto},
    publisher = {Association for Computing Machinery},
    year = {2020},
    journal = {ACM Transactions on Accessible Computing},
    url = {https://eprints.lincoln.ac.uk/id/eprint/41544/},
    abstract = {Sound perception is a fundamental skill for many people with severe sight impairments. The research presented in this paper is part of an ongoing project with the aim to create a mobile guidance aid to help people with vision impairments find objects within an unknown indoor environment. This system requires an effective non-visual interface and uses bone-conduction headphones to transmit audio instructions to the user. It has been implemented and tested with spatialised audio cues, which convey the direction of a predefined target in 3D space. We present an in-depth evaluation of the audio interface with several experiments that involve a large number of participants, both blindfolded and with actual visual impairments, and analyse the pros and cons of our design choices. In addition to producing results comparable to the state-of-the-art, we found that Fitts's Law (a predictive model for human movement) provides a suitable metric that can be used to improve and refine the quality of the audio interface in future mobile navigation aids.}
    }
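
    Fitts's Law, used above as a metric for the audio interface, predicts the time needed to acquire a target from its distance and size. A minimal sketch using the common Shannon formulation (the constants a and b are illustrative; in practice they are fitted to each user's data):

        import math

        def fitts_movement_time(d, w, a=0.2, b=0.1):
            """Shannon form of Fitts's Law: MT = a + b * log2(d / w + 1),
            with target distance d and target width w in the same units."""
            return a + b * math.log2(d / w + 1)

        print(fitts_movement_time(d=0.5, w=0.05))  # hypothetical pointing task
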
  • J. Singh, A. R. Srinivasan, G. Neumann, and A. Kucukyilmaz, “Haptic-guided teleoperation of a 7-dof collaborative robot arm with an identical twin master,” Ieee transactions on haptics, p. 1–1, 2020. doi:10.1109/TOH.2020.2971485
    [BibTeX] [Abstract] [Download PDF]

    In this study, we describe two techniques to enable haptic-guided teleoperation using 7-DoF cobot arms as master and slave devices. A shortcoming of using cobots as master-slave systems is the lack of force feedback at the master side. However, recent developments in cobot technologies have brought in affordable, flexible, and safe torque-controlled robot arms, which can be programmed to generate force feedback to mimic the operation of a haptic device. In this study, we use two Franka Emika Panda robot arms as a twin master-slave system to enable haptic-guided teleoperation. We propose a two-layer mechanism to implement force feedback due to 1) object interactions in the slave workspace, and 2) virtual forces, e.g. those that can repel from static obstacles in the remote environment or provide task-related guidance forces. We present two different approaches for force rendering and conduct an experimental study to evaluate the performance and usability of these approaches in comparison to teleoperation without haptic guidance. Our results indicate that the proposed joint torque coupling method for rendering task forces improves energy requirements during haptic-guided telemanipulation, providing realistic force feedback by accurately matching the slave torque readings at the master side.

    @article{lincoln40137,
    title = {Haptic-Guided Teleoperation of a 7-DoF Collaborative Robot Arm with an Identical Twin Master},
    author = {Jayant Singh and Aravinda Ramakrishnan Srinivasan and Gerhard Neumann and Ayse Kucukyilmaz},
    publisher = {IEEE},
    year = {2020},
    pages = {1--1},
    doi = {10.1109/TOH.2020.2971485},
    journal = {IEEE Transactions on Haptics},
    url = {https://eprints.lincoln.ac.uk/id/eprint/40137/},
    abstract = {In this study, we describe two techniques to enable haptic-guided teleoperation using 7-DoF cobot arms as master and slave devices. A shortcoming of using cobots as master-slave systems is the lack of force feedback at the master side. However, recent developments in cobot technologies have brought in affordable, flexible, and safe torque-controlled robot arms, which can be programmed to generate force feedback to mimic the operation of a haptic device. In this study, we use two Franka Emika Panda robot arms as a twin master-slave system to enable haptic-guided teleoperation. We propose a two-layer mechanism to implement force feedback due to 1) object interactions in the slave workspace, and 2) virtual forces, e.g. those that can repel from static obstacles in the remote environment or provide task-related guidance forces. We present two different approaches for force rendering and conduct an experimental study to evaluate the performance and usability of these approaches in comparison to teleoperation without haptic guidance. Our results indicate that the proposed joint torque coupling method for rendering task forces improves energy requirements during haptic-guided telemanipulation, providing realistic force feedback by accurately matching the slave torque readings at the master side.}
    }
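
    With identical twin arms, force feedback can be produced by coupling the two arms' joint states, for instance through a per-joint virtual spring-damper whose torque is applied to the slave and mirrored (negated) on the master. The sketch below is a generic baseline under that assumption, not the paper's two-layer scheme; the gains are invented.

        def coupling_torques(q_m, q_s, dq_m, dq_s, k=50.0, d=2.0):
            """Virtual spring-damper between master (m) and slave (s) joints.
            Apply the returned torques to the slave and their negation to the
            master so interaction forces are reflected back to the operator."""
            return [k * (qm - qs) + d * (dqm - dqs)
                    for qm, qs, dqm, dqs in zip(q_m, q_s, dq_m, dq_s)]

        # Toy 7-DoF joint states (rad, rad/s); values are hypothetical.
        tau = coupling_torques([0.1] * 7, [0.0] * 7, [0.0] * 7, [0.0] * 7)
        print(tau)  # 5.0 Nm on each joint in this toy case
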
  • H. Wang, Q. Fu, H. Wang, P. Baxter, J. Peng, and S. Yue, “A bioinspired angular velocity decoding neural network model for visually guided flights,” Neural networks, 2020. doi:10.1016/j.neunet.2020.12.008
    [BibTeX] [Abstract] [Download PDF]

    Efficient and robust motion perception systems are important pre-requisites for achieving visually guided flights in future micro air vehicles. As a source of inspiration, the visual neural networks of flying insects such as honeybee and Drosophila provide ideal examples on which to base artificial motion perception models. In this paper, we have used this approach to develop a novel method that solves the fundamental problem of estimating angular velocity for visually guided flights. Compared with previous models, our elementary motion detector (EMD) based model uses a separate texture estimation pathway to effectively decode angular velocity, and demonstrates considerable independence from the spatial frequency and contrast of the gratings. Using the Unity development platform the model is further tested for tunnel centering and terrain following paradigms in order to reproduce the visually guided flight behaviors of honeybees. In a series of controlled trials, the virtual bee utilizes the proposed angular velocity control schemes to accurately navigate through a patterned tunnel, maintaining a suitable distance from the undulating textured terrain. The results are consistent with both neuron spike recordings and behavioral path recordings of real honeybees, thereby demonstrating the model's potential for implementation in micro air vehicles which have only visual sensors.

    @article{lincoln43704,
    title = {A bioinspired angular velocity decoding neural network model for visually guided flights},
    author = {Huatian Wang and Qinbing Fu and Hongxin Wang and Paul Baxter and Jigen Peng and Shigang Yue},
    publisher = {Elsevier},
    year = {2020},
    doi = {10.1016/j.neunet.2020.12.008},
    journal = {Neural Networks},
    url = {https://eprints.lincoln.ac.uk/id/eprint/43704/},
    abstract = {Efficient and robust motion perception systems are important pre-requisites for achieving visually guided flights in future micro air vehicles. As a source of inspiration, the visual neural networks of flying insects such as honeybee and Drosophila provide ideal examples on which to base artificial motion perception models. In this paper, we have used this approach to develop a novel method that solves the fundamental problem of estimating angular velocity for visually guided flights. Compared with previous models, our elementary motion detector (EMD) based model uses a separate texture estimation pathway to effectively decode angular velocity, and demonstrates considerable independence from the spatial frequency and contrast of the gratings. Using the Unity development platform the model is further tested for tunnel centering and terrain following paradigms in order to reproduce the visually guided flight behaviors of honeybees. In a series of controlled trials, the virtual bee utilizes the proposed angular velocity control schemes to accurately navigate through a patterned tunnel, maintaining a suitable distance from the undulating textured terrain. The results are consistent with both neuron spike recordings and behavioral path recordings of real honeybees, thereby demonstrating the model's potential for implementation in micro air vehicles which have only visual sensors.}
    }
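
    The model above is built from elementary motion detectors (EMDs). At the core of an EMD is the classic Hassenstein-Reichardt correlator, which multiplies each photoreceptor signal by a delayed copy of its neighbour and subtracts the mirrored term to obtain a direction-selective response. A toy sketch (the sample delay and sinusoidal inputs are invented, and the paper's full model adds a separate texture-estimation pathway):

        import numpy as np

        def emd_response(left, right, delay=5):
            """Hassenstein-Reichardt correlator over two photoreceptor signals
            (1-D arrays). Positive mean output indicates left-to-right motion."""
            l_delayed = np.roll(left, delay)    # crude delay: shift by N samples
            r_delayed = np.roll(right, delay)
            return l_delayed * right - r_delayed * left

        t = np.linspace(0, 2 * np.pi, 200)
        resp = emd_response(np.sin(t), np.sin(t - 0.3))  # grating moving rightwards
        print(resp.mean() > 0)  # True: preferred-direction motion detected
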

2019

  • C. Achillas, D. Bochtis, D. Aidonis, V. Marinoudi, and D. Folinas, “Voice-driven fleet management system for agricultural operations,” Information processing in agriculture, vol. 6, iss. 4, p. 471–478, 2019. doi:10.1016/j.inpa.2019.03.001
    [BibTeX] [Abstract] [Download PDF]

    Food consumption is constantly increasing at global scale. In this light, agricultural production also needs to increase in order to satisfy the relevant demand for agricultural products. However, due to environmental and biological factors (e.g. soil compaction) the weight and size of the machinery cannot be further physically optimized. Thus, only marginal improvements are possible to increase equipment effectiveness. On the contrary, recent technological advances in ICT provide the ground for significant improvements in agri-production efficiency. In this work, the V-Agrifleet tool is presented and demonstrated. V-Agrifleet is developed to provide a "hands-free" interface for information exchange and an "Olympic view" to all coordinated users, giving them the ability for decentralized decision-making. The proposed tool can be used by the end-users (e.g. farmers, contractors, farm associations, agri-products storage and processing facilities, etc.) in order to optimize task and time management. The visualized documentation of the fleet performance provides valuable information at the evaluation and management level, giving the opportunity for improvements in the planning of next operations. Its vendor-independent architecture, voice-driven interaction, context awareness functionalities and operation planning support make the V-Agrifleet application a highly innovative agricultural machinery operational aiding system.

    @article{lincoln39226,
    volume = {6},
    number = {4},
    month = {December},
    author = {Ch. Achillas and Dionysis Bochtis and D. Aidonis and V. Marinoudi and D. Folinas},
    title = {Voice-driven fleet management system for agricultural operations},
    publisher = {Elsevier},
    year = {2019},
    journal = {Information Processing in Agriculture},
    doi = {10.1016/j.inpa.2019.03.001},
    pages = {471--478},
    url = {https://eprints.lincoln.ac.uk/id/eprint/39226/},
    abstract = {Food consumption is constantly increasing at global scale. In this light, agricultural production also needs to increase in order to satisfy the relevant demand for agricultural products. However, due to environmental and biological factors (e.g. soil compaction) the weight and size of the machinery cannot be further physically optimized. Thus, only marginal improvements are possible to increase equipment effectiveness. On the contrary, recent technological advances in ICT provide the ground for significant improvements in agri-production efficiency. In this work, the V-Agrifleet tool is presented and demonstrated. V-Agrifleet is developed to provide a "hands-free" interface for information exchange and an "Olympic view" to all coordinated users, giving them the ability for decentralized decision-making. The proposed tool can be used by the end-users (e.g. farmers, contractors, farm associations, agri-products storage and processing facilities, etc.) in order to optimize task and time management. The visualized documentation of the fleet performance provides valuable information at the evaluation and management level, giving the opportunity for improvements in the planning of next operations. Its vendor-independent architecture, voice-driven interaction, context awareness functionalities and operation planning support make the V-Agrifleet application a highly innovative agricultural machinery operational aiding system.}
    }
  • G. Onoufriou, R. Bickerton, S. Pearson, and G. Leontidis, “Nemesyst: a hybrid parallelism deep learning-based framework applied for internet of things enabled food retailing refrigeration systems,” Computers in industry, vol. 113, p. 103133, 2019. doi:10.1016/j.compind.2019.103133
    [BibTeX] [Abstract] [Download PDF]

    Deep Learning has attracted considerable attention across multiple application domains, including computer vision, signal processing and natural language processing. Although quite a few single node deep learning frameworks exist, such as tensorflow, pytorch and keras, we still lack a complete processing structure that can accommodate large scale data processing, version control, and deployment, all while staying agnostic of any specific single node framework. To bridge this gap, this paper proposes a new, higher level framework, i.e. Nemesyst, which uses databases along with model sequentialisation to allow processes to be fed unique and transformed data at the point of need. This facilitates near real-time application and makes models available for further training or use at any node that has access to the database simultaneously. Nemesyst is well suited as an application framework for internet of things aggregated control systems, deploying deep learning techniques to optimise individual machines in massive networks. To demonstrate this framework, we adopted a case study in a novel domain: deploying deep learning to optimise the high speed control of electrical power consumed by a massive internet of things network of retail refrigeration systems in proportion to load available on the UK National Grid (a demand side response). The case study demonstrated for the first time in such a setting how deep learning models, such as Recurrent Neural Networks (vanilla and Long Short-Term Memory) and Generative Adversarial Networks paired with Nemesyst, achieve compelling performance, whilst still being malleable to future adjustments as both the data and requirements inevitably change over time.

    @article{lincoln37181,
    volume = {113},
    month = {December},
    author = {George Onoufriou and Ronald Bickerton and Simon Pearson and Georgios Leontidis},
    note = {Partners included: Tesco and IMS-Evolve},
    title = {Nemesyst: A Hybrid Parallelism Deep Learning-Based Framework Applied for Internet of Things Enabled Food Retailing Refrigeration Systems},
    publisher = {Elsevier},
    year = {2019},
    journal = {Computers in Industry},
    doi = {10.1016/j.compind.2019.103133},
    pages = {103133},
    url = {https://eprints.lincoln.ac.uk/id/eprint/37181/},
    abstract = {Deep Learning has attracted considerable attention across multiple application domains, including computer vision, signal processing and natural language processing. Although quite a few single node deep learning frameworks exist, such as tensorflow, pytorch and keras, we still lack a complete processing structure that can accommodate large scale data processing, version control, and deployment, all while staying agnostic of any specific single node framework. To bridge this gap, this paper proposes a new, higher level framework, i.e. Nemesyst, which uses databases along with model sequentialisation to allow processes to be fed unique and transformed data at the point of need. This facilitates near real-time application and makes models available for further training or use at any node that has access to the database simultaneously. Nemesyst is well suited as an application framework for internet of things aggregated control systems, deploying deep learning techniques to optimise individual machines in massive networks. To demonstrate this framework, we adopted a case study in a novel domain: deploying deep learning to optimise the high speed control of electrical power consumed by a massive internet of things network of retail refrigeration systems in proportion to load available on the UK National Grid (a demand side response). The case study demonstrated for the first time in such a setting how deep learning models, such as Recurrent Neural Networks (vanilla and Long Short-Term Memory) and Generative Adversarial Networks paired with Nemesyst, achieve compelling performance, whilst still being malleable to future adjustments as both the data and requirements inevitably change over time.}
    }
  • P. Baxter, F. D. Duchetto, and M. Hanheide, “Engaging learners in dialogue interactivity development for mobile robots,” in Edurobotics 2018, 2019. doi:10.1007/978-3-030-18141-3_12
    [BibTeX] [Abstract] [Download PDF]

    The use of robots in educational and STEM engagement activities is widespread. In this paper we describe a system developed for engaging learners with the design of dialogue-based interactivity for mobile robots. With an emphasis on a web-based solution that is grounded in both a real robot system and a real application domain (a museum guide robot), our intent is to enhance the benefits to both driving research through potential user-group engagement, and enhancing motivation by providing a real application context for the learners involved. The proposed system is designed to be highly scalable to both many simultaneous users and to users of different age groups, and specifically enables direct deployment of implemented systems onto both real and simulated robots. Our observations from preliminary events, involving both children and adults, support the view that the system is both usable and successful in supporting engagement with the dialogue interactivity problem presented to the participants, with indications that this engagement can persist over an extended period of time.

    @inproceedings{lincoln40135,
    booktitle = {EDUROBOTICS 2018},
    month = {December},
    title = {Engaging Learners in Dialogue Interactivity Development for Mobile Robots},
    author = {Paul Baxter and Francesco Del Duchetto and Marc Hanheide},
    publisher = {Springer, Cham},
    year = {2019},
    doi = {10.1007/978-3-030-18141-3\_12},
    url = {https://eprints.lincoln.ac.uk/id/eprint/40135/},
    abstract = {The use of robots in educational and STEM engagement activities is widespread. In this paper we describe a system developed for engaging learners with the design of dialogue-based interactivity for mobile robots. With an emphasis on a web-based solution that is grounded in both a real robot system and a real application domain (a museum guide robot), our intent is to enhance the benefits to both driving research through potential user-group engagement, and enhancing motivation by providing a real application context for the learners involved. The proposed system is designed to be highly scalable to both many simultaneous users and to users of different age groups, and specifically enables direct deployment of implemented systems onto both real and simulated robots. Our observations from preliminary events, involving both children and adults, support the view that the system is both usable and successful in supporting engagement with the dialogue interactivity problem presented to the participants, with indications that this engagement can persist over an extended period of time.}
    }
  • Q. Fu, C. Hu, J. Peng, C. Rind, and S. Yue, “A robust collision perception visual neural network with specific selectivity to darker objects,” Ieee transactions on cybernetics, p. 1–15, 2019. doi:10.1109/TCYB.2019.2946090
    [BibTeX] [Abstract] [Download PDF]

    Building an efficient and reliable collision perception visual system is a challenging problem for future robots and autonomous vehicles. The biological visual neural networks, which have evolved over millions of years in nature and are working perfectly in the real world, could be ideal models for designing artificial vision systems. In the locust's visual pathways, a lobula giant movement detector (LGMD), that is, the LGMD2, has been identified as a looming perception neuron that responds most strongly to darker approaching objects relative to their backgrounds; similar situations which many ground vehicles and robots are often faced with. However, little has been done on modeling the LGMD2 and investigating its potential in robotics and vehicles. In this article, we build an LGMD2 visual neural network which possesses the similar collision selectivity of an LGMD2 neuron in locust via the modeling of biased-ON and -OFF pathways splitting visual signals into parallel ON/OFF channels. With stronger inhibition (bias) in the ON pathway, this model responds selectively to darker looming objects. The proposed model has been tested systematically with a range of stimuli including real-world scenarios. It has also been implemented in a micro-mobile robot and tested with real-time experiments. The experimental results have verified the effectiveness and robustness of the proposed model for detecting darker looming objects against various dynamic and cluttered backgrounds.

    @article{lincoln39137,
    month = {December},
    author = {Qinbing Fu and Cheng Hu and Jigen Peng and Claire Rind and Shigang Yue},
    title = {A Robust Collision Perception Visual Neural Network with Specific Selectivity to Darker Objects},
    publisher = {IEEE},
    journal = {IEEE Transactions on Cybernetics},
    doi = {10.1109/TCYB.2019.2946090},
    pages = {1--15},
    year = {2019},
    url = {https://eprints.lincoln.ac.uk/id/eprint/39137/},
    abstract = {Building an efficient and reliable collision perception visual system is a challenging problem for future robots and autonomous vehicles. The biological visual neural networks, which have evolved over millions of years in nature and are working perfectly in the real world, could be ideal models for designing artificial vision systems. In the locust's visual pathways, a lobula giant movement detector (LGMD), that is, the LGMD2, has been identified as a looming perception neuron that responds most strongly to darker approaching objects relative to their backgrounds; similar situations which many ground vehicles and robots are often faced with. However, little has been done on modeling the LGMD2 and investigating its potential in robotics and vehicles. In this article, we build an LGMD2 visual neural network which possesses the similar collision selectivity of an LGMD2 neuron in locust via the modeling of biased-ON and -OFF pathways splitting visual signals into parallel ON/OFF channels. With stronger inhibition (bias) in the ON pathway, this model responds selectively to darker looming objects. The proposed model has been tested systematically with a range of stimuli including real-world scenarios. It has also been implemented in a micro-mobile robot and tested with real-time experiments. The experimental results have verified the effectiveness and robustness of the proposed model for detecting darker looming objects against various dynamic and cluttered backgrounds.}
    }
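    A minimal Python sketch of the biased ON/OFF split described in the abstract above: luminance change is divided into parallel brightening (ON) and darkening (OFF) channels, and the ON channel is inhibited more strongly, so darker looming objects dominate the output. The bias value, frame sizes and scalar read-out are illustrative assumptions, not the published LGMD2 parameters.

    import numpy as np

    def biased_on_off_response(prev_frame, curr_frame, on_inhibition=0.6):
        """Scalar excitation proxy from one frame pair; higher values
        indicate a darkening (OFF-dominated) change in the scene."""
        diff = curr_frame.astype(float) - prev_frame.astype(float)
        on_channel = np.maximum(diff, 0.0)    # brightening
        off_channel = np.maximum(-diff, 0.0)  # darkening
        # Stronger inhibition (bias) in the ON pathway favours darker
        # looming objects, as sketched in the abstract.
        return (off_channel + (1.0 - on_inhibition) * on_channel).sum()

    # Toy usage: a dark patch appearing on a bright background.
    f0 = np.full((64, 64), 200.0)
    f1 = f0.copy()
    f1[24:40, 24:40] = 50.0
    print(biased_on_off_response(f0, f1))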
  • B. Grieve, T. Duckett, M. Collison, L. Boyd, J. West, Y. Hujun, F. Arvin, and S. Pearson, “The challenges posed by global broadacre crops in delivering smart agri-robotic solutions: a fundamental rethink is required.,” Global food security, vol. 23, p. 116–124, 2019. doi:10.1016/j.gfs.2019.04.011
    [BibTeX] [Abstract] [Download PDF]

    Threats to global food security from multiple sources, such as population growth, ageing farming populations, meat consumption trends, climate-change effects on abiotic and biotic stresses, and the environmental impacts of agriculture are well publicised. In addition, with ever increasing tolerance of pests, diseases and weeds there is growing pressure on traditional crop genetic and protective chemistry technologies of the 'Green Revolution'. To ease the burden of these challenges, there has been a move to automate and robotise aspects of the farming process. This drive has focussed typically on higher value sectors, such as horticulture and viticulture, that have relied on seasonal manual labour to maintain produce supply. In developed economies, and increasingly developing nations, pressure on labour supply has become unsustainable and forced the need for greater mechanisation and higher labour productivity. This paper makes the case that for broadacre crops, such as cereals, a wholly new approach is necessary, requiring the establishment of an integrated biology & physical engineering infrastructure, which can work in harmony with current breeding, chemistry and agronomic solutions. For broadacre crops the driving pressure is to sustainably intensify production; increase yields and/or productivity whilst reducing environmental impact. Additionally, our limited understanding of the complex interactions between the variations in pests, weeds, pathogens, soils, water, environment and crops is inhibiting growth in resource productivity and creating yield gaps. We argue that delivering knowledge-based sustainable intensification in agriculture requires a new generation of Smart Technologies, which combine sensors and robotics with localised and/or cloud-based Artificial Intelligence (AI).

    @article{lincoln35842,
    volume = {23},
    month = {December},
    author = {Bruce Grieve and Tom Duckett and Martin Collison and Lesley Boyd and Jon West and Yin Hujun and Farshad Arvin and Simon Pearson},
    title = {The challenges posed by global broadacre crops in delivering smart agri-robotic solutions: A fundamental rethink is required.},
    publisher = {Elsevier},
    year = {2019},
    journal = {Global Food Security},
    doi = {10.1016/j.gfs.2019.04.011},
    pages = {116--124},
    url = {https://eprints.lincoln.ac.uk/id/eprint/35842/},
    abstract = {Threats to global food security from multiple sources, such as population growth, ageing farming populations, meat consumption trends, climate-change effects on abiotic and biotic stresses, and the environmental impacts of agriculture are well publicised. In addition, with ever increasing tolerance of pests, diseases and weeds there is growing pressure on traditional crop genetic and protective chemistry technologies of the 'Green Revolution'. To ease the burden of these challenges, there has been a move to automate and robotise aspects of the farming process. This drive has focussed typically on higher value sectors, such as horticulture and viticulture, that have relied on seasonal manual labour to maintain produce supply. In developed economies, and increasingly developing nations, pressure on labour supply has become unsustainable and forced the need for greater mechanisation and higher labour productivity. This paper makes the case that for broadacre crops, such as cereals, a wholly new approach is necessary, requiring the establishment of an integrated biology \& physical engineering infrastructure, which can work in harmony with current breeding, chemistry and agronomic solutions. For broadacre crops the driving pressure is to sustainably intensify production; increase yields and/or productivity whilst reducing environmental impact. Additionally, our limited understanding of the complex interactions between the variations in pests, weeds, pathogens, soils, water, environment and crops is inhibiting growth in resource productivity and creating yield gaps. We argue that delivering knowledge-based sustainable intensification in agriculture requires a new generation of Smart Technologies, which combine sensors and robotics with localised and/or cloud-based Artificial Intelligence (AI).}
    }
  • H. Cuayahuitl, D. Lee, S. Ryu, Y. Cho, S. Choi, S. Indurthi, S. Yu, H. Choi, I. Hwang, and J. Kim, “Ensemble-based deep reinforcement learning for chatbots,” Neurocomputing, vol. 366, p. 118–130, 2019. doi:10.1016/j.neucom.2019.08.007
    [BibTeX] [Abstract] [Download PDF]

    Trainable chatbots that exhibit fluent and human-like conversations remain a big challenge in artificial intelligence. Deep Reinforcement Learning (DRL) is promising for addressing this challenge, but its successful application remains an open question. This article describes a novel ensemble-based approach applied to value-based DRL chatbots, which use finite action sets as a form of meaning representation. In our approach, while dialogue actions are derived from sentence clustering, the training datasets in our ensemble are derived from dialogue clustering. The latter aim to induce specialised agents that learn to interact in a particular style. In order to facilitate neural chatbot training using our proposed approach, we assume dialogue data in raw text only, without any manually-labelled data. Experimental results using chitchat data reveal that (1) near human-like dialogue policies can be induced, (2) generalisation to unseen data is a difficult problem, and (3) training an ensemble of chatbot agents is essential for improved performance over using a single agent. In addition to evaluations using held-out data, our results are further supported by a human evaluation that rated dialogues in terms of fluency, engagingness and consistency, which revealed that our proposed dialogue rewards strongly correlate with human judgements.

    @article{lincoln36668,
    volume = {366},
    month = {November},
    author = {Heriberto Cuayahuitl and Donghyeon Lee and Seonghan Ryu and Yongjin Cho and Sungja Choi and Satish Indurthi and Seunghak Yu and Hyungtak Choi and Inchul Hwang and Jihie Kim},
    title = {Ensemble-Based Deep Reinforcement Learning for Chatbots},
    publisher = {Elsevier},
    year = {2019},
    journal = {Neurocomputing},
    doi = {10.1016/j.neucom.2019.08.007},
    pages = {118--130},
    url = {https://eprints.lincoln.ac.uk/id/eprint/36668/},
    abstract = {Trainable chatbots that exhibit fluent and human-like conversations remain a big challenge in artificial intelligence. Deep Reinforcement Learning (DRL) is promising for addressing this challenge, but its successful application remains an open question. This article describes a novel ensemble-based approach applied to value-based DRL chatbots, which use finite action sets as a form of meaning representation. In our approach, while dialogue actions are derived from sentence clustering, the training datasets in our ensemble are derived from dialogue clustering. The latter aim to induce specialised agents that learn to interact in a particular style. In order to facilitate neural chatbot training using our proposed approach, we assume dialogue data in raw text only, without any manually-labelled data. Experimental results using chitchat data reveal that (1) near human-like dialogue policies can be induced, (2) generalisation to unseen data is a difficult problem, and (3) training an ensemble of chatbot agents is essential for improved performance over using a single agent. In addition to evaluations using held-out data, our results are further supported by a human evaluation that rated dialogues in terms of fluency, engagingness and consistency, which revealed that our proposed dialogue rewards strongly correlate with human judgements.}
    }
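    The dialogue-clustering step that derives the ensemble's specialised training sets can be pictured with a small sketch. TF-IDF vectors and k-means below are stand-ins for whatever representation and clustering the paper actually uses; each resulting subset would then train one DRL agent of the ensemble, which is not reproduced here.

    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer

    # Toy chitchat dialogues; each cluster's subset would train one
    # specialised agent in the ensemble (training omitted).
    dialogues = [
        "hi how are you today",
        "hello nice to meet you",
        "what movies do you like",
        "my favourite film is science fiction",
    ]
    vectors = TfidfVectorizer().fit_transform(dialogues)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
    training_sets = {k: [d for d, l in zip(dialogues, labels) if l == k]
                     for k in set(labels)}
    print(training_sets)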
  • M. Sorour, K. Elgeneidy, A. Srinivasan, and M. Hanheide, “Grasping unknown objects based on gripper workspace spheres,” in 2019 ieee/rsj international conference on intelligent robots and systems (iros), 2019, p. 1541–1547. doi:10.1109/IROS40897.2019.8967989
    [BibTeX] [Abstract] [Download PDF]

    In this paper, we present a novel grasp planning algorithm for unknown objects given a registered point cloud of the target from different views. The proposed methodology requires no prior knowledge of the object, nor offline learning. In our approach, the gripper kinematic model is used to generate a point cloud of each finger workspace, which is then filled with spheres. At run-time, the object is first segmented and its major axis is computed; the main grasping action is constrained to a plane perpendicular to this axis. The object is then uniformly sampled and scanned for various gripper poses that assure at least one object point is located in the workspace of each finger. In addition, collision checks with the object or the table are performed using computationally inexpensive gripper shape approximation. Our methodology is both time efficient (consuming less than 1.5 seconds on average) and versatile. Successful experiments have been conducted on a simple jaw gripper (Franka Panda gripper) as well as a complex, high Degree of Freedom (DoF) hand (Allegro hand).

    @inproceedings{lincoln36370,
    month = {November},
    author = {Mohamed Sorour and Khaled Elgeneidy and Aravinda Srinivasan and Marc Hanheide},
    booktitle = {2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
    title = {Grasping Unknown Objects Based on Gripper Workspace Spheres},
    publisher = {IEEE},
    year = {2019},
    journal = {Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2019)},
    doi = {10.1109/IROS40897.2019.8967989},
    pages = {1541--1547},
    url = {https://eprints.lincoln.ac.uk/id/eprint/36370/},
    abstract = {In this paper, we present a novel grasp planning algorithm for unknown objects given a registered point cloud of the target from different views. The proposed methodology requires no prior knowledge of the object, nor offline learning. In our approach, the gripper kinematic model is used to generate a point cloud of each finger workspace, which is then filled with spheres. At run-time, the object is first segmented and its major axis is computed; the main grasping action is constrained to a plane perpendicular to this axis. The object is then uniformly sampled and scanned for various gripper poses that assure at least one object point is located in the workspace of each finger. In addition, collision checks with the object or the table are performed using computationally inexpensive gripper shape approximation. Our methodology is both time efficient (consuming less than 1.5 seconds on average) and versatile. Successful experiments have been conducted on a simple jaw gripper (Franka Panda gripper) as well as a complex, high Degree of Freedom (DoF) hand (Allegro hand).}
    }
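    The core feasibility test from the abstract can be sketched directly: a candidate gripper pose is kept only if, for every finger, at least one object point lies inside one of that finger's workspace spheres. The sphere centres, radii and point cloud below are invented for illustration; pose sampling and collision checking are omitted.

    import numpy as np

    def pose_feasible(object_points, finger_sphere_sets):
        """True if each finger's workspace (a set of spheres, expressed
        in the candidate gripper frame) contains >= 1 object point."""
        for centres, radii in finger_sphere_sets:
            d = np.linalg.norm(object_points[:, None, :] - centres[None, :, :], axis=2)
            if not np.any(d <= radii[None, :]):
                return False
        return True

    pts = np.random.rand(200, 3)  # object points in the gripper frame
    finger1 = (np.array([[0.2, 0.2, 0.2]]), np.array([0.3]))
    finger2 = (np.array([[0.8, 0.8, 0.8]]), np.array([0.3]))
    print(pose_feasible(pts, [finger1, finger2]))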
  • L. Baronti, M. Alston, N. Mavrakis, A. M. G. Esfahani, and M. Castellani, “Primitive shape fitting in point clouds using the bees algorithm,” Advances in automation and robotics, vol. 9, iss. 23, 2019. doi:10.3390/app9235198
    [BibTeX] [Abstract] [Download PDF]

    In this study, the problem of fitting shape primitives to point cloud scenes was tackled as a parameter optimisation procedure and solved using the popular Bees Algorithm. Tested on three sets of clean and differently blurred point cloud models, the Bees Algorithm obtained performances comparable to those obtained using the state-of-the-art RANSAC method, and superior to those obtained by an evolutionary algorithm. Shape fitting times were compatible with real-time application. The main advantage of the Bees Algorithm over standard methods is that it doesn't rely on ad hoc assumptions about the nature of the point cloud model, such as the RANSAC approximation tolerance.

    @article{lincoln39027,
    volume = {9},
    number = {23},
    month = {November},
    author = {Luca Baronti and Mark Alston and Nikos Mavrakis and Amir Masoud Ghalamzan Esfahani and Marco Castellani},
    title = {Primitive Shape Fitting in Point Clouds Using the Bees Algorithm},
    publisher = {MDPI},
    year = {2019},
    journal = {Advances in Automation and Robotics},
    doi = {10.3390/app9235198},
    url = {https://eprints.lincoln.ac.uk/id/eprint/39027/},
    abstract = {In this study, the problem of fitting shape primitives to point cloud scenes was tackled as a parameter optimisation procedure and solved using the popular Bees Algorithm. Tested on three sets of clean and differently blurred point cloud models, the Bees Algorithm obtained performances comparable to those obtained using the state-of-the-art RANSAC method, and superior to those obtained by an evolutionary algorithm. Shape fitting times were compatible with real-time application. The main advantage of the Bees Algorithm over standard methods is that it doesn't rely on ad hoc assumptions about the nature of the point cloud model, such as the RANSAC approximation tolerance.}
    }
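    As a rough illustration of "shape fitting as parameter optimisation", the sketch below fits a sphere (centre plus radius) to a synthetic point cloud with a bare-bones Bees Algorithm: random scouts, local search around the best sites, and re-scouting. Population sizes, the shrinking neighbourhood and the misfit cost are invented, not the paper's settings.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic sphere (centre [1, 2, 3], radius 0.5) with small noise.
    true_c, true_r = np.array([1.0, 2.0, 3.0]), 0.5
    dirs = rng.normal(size=(300, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    cloud = true_c + true_r * dirs + rng.normal(scale=0.01, size=(300, 3))

    def cost(p):  # p = (cx, cy, cz, r): mean radial misfit
        return np.abs(np.linalg.norm(cloud - p[:3], axis=1) - p[3]).mean()

    lo, hi = np.array([-5, -5, -5, 0.05]), np.array([5, 5, 5, 2.0])
    scouts = rng.uniform(lo, hi, size=(30, 4))
    for it in range(100):
        scouts = scouts[np.argsort([cost(p) for p in scouts])]
        ngh = 0.5 * (hi - lo) * 0.95 ** it              # shrinking neighbourhood
        for i in range(5):                              # local search at best sites
            recruits = np.clip(scouts[i] + rng.uniform(-ngh, ngh, (10, 4)), lo, hi)
            best = min(recruits, key=cost)
            if cost(best) < cost(scouts[i]):
                scouts[i] = best
        scouts[5:] = rng.uniform(lo, hi, size=(25, 4))  # fresh scouts
    print(min(scouts, key=cost))  # approaches [1, 2, 3, 0.5]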
  • F. Camara, N. Merat, and C. Fox, “A heuristic model for pedestrian intention estimation,” in Ieee intelligent transportation systems conference, 2019. doi:10.1109/ITSC.2019.8917195
    [BibTeX] [Abstract] [Download PDF]

    Understanding pedestrian behaviour and controlling interactions with pedestrians is of critical importance for autonomous vehicles, but remains a complex and challenging problem. This study infers pedestrian intent during possible road-crossing interactions, to assist autonomous vehicle decisions to yield or not yield when approaching them, and tests a simple heuristic model of intent on pedestrian-vehicle trajectory data for the first time. It relies on a heuristic approach based on the observed positions of the agents over time. The method can predict pedestrian crossing intent, crossing or stopping, with 96% accuracy by the time the pedestrian reaches the curbside, on the standard Daimler pedestrian dataset. This result is important in demarcating scenarios which have a clear winner and can be predicted easily with the simple heuristic, from those which may require more complex game-theoretic models to predict and control.

    @inproceedings{lincoln36758,
    booktitle = {IEEE Intelligent Transportation Systems Conference},
    month = {November},
    title = {A heuristic model for pedestrian intention estimation},
    author = {Fanta Camara and Natasha Merat and Charles Fox},
    publisher = {IEEE},
    year = {2019},
    doi = {10.1109/ITSC.2019.8917195},
    url = {https://eprints.lincoln.ac.uk/id/eprint/36758/},
    abstract = {Understanding pedestrian behaviour and controlling interactions with pedestrians is of critical importance for autonomous vehicles, but remains a complex and challenging problem. This study infers pedestrian intent during possible road-crossing interactions, to assist autonomous vehicle decisions to yield or not yield when approaching them, and tests a simple heuristic model of intent on pedestrian-vehicle trajectory data for the first time. It relies on a heuristic approach based on the observed positions of the agents over time. The method can predict pedestrian crossing intent, crossing or stopping, with 96\% accuracy by the time the pedestrian reaches the curbside, on the standard Daimler pedestrian dataset. This result is important in demarcating scenarios which have a clear winner and can be predicted easily with the simple heuristic, from those which may require more complex game-theoretic models to predict and control.}
    }
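    The abstract's position-based heuristic can be caricatured in a few lines: if the pedestrian is still closing on the curbside at walking pace over the last few observations, predict crossing, otherwise stopping. The threshold, window and sampling period are invented placeholders, not the values validated on the Daimler dataset.

    import numpy as np

    def crossing_intent(dist_to_curb, dt=0.1, speed_thresh=0.5, window=3):
        """Classify intent from the recent approach speed (m/s) towards
        the curb, computed from a short history of distances."""
        recent = dist_to_curb[-(window + 1):]
        approach_speed = -np.diff(recent).mean() / dt
        return "crossing" if approach_speed > speed_thresh else "stopping"

    # A pedestrian decelerating to a halt at the curb:
    track = np.array([3.0, 2.4, 1.9, 1.55, 1.5, 1.48, 1.47])
    print(crossing_intent(track))  # -> stopping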
  • R. Jiang, X. Li, A. Gao, L. Li, H. Meng, S. Yue, and L. Zhang, “Learning spectral and spatial features based on generative adversarial network for hyperspectral image super-resolution,” in The 2019 ieee international geoscience and remote sensing symposium (igarss2019), 2019. doi:10.1109/IGARSS.2019.8900228
    [BibTeX] [Abstract] [Download PDF]

    Super-resolution (SR) of hyperspectral images (HSIs) aims to enhance the spatial/spectral resolution of hyperspectral imagery, and the super-resolved results will benefit many remote sensing applications. A generative adversarial network for HSI super-resolution (HSRGAN) is proposed in this paper. Specifically, HSRGAN constructs spectral and spatial blocks with residual networks in the generator to effectively learn spectral and spatial features from HSIs. Furthermore, a new loss function which combines the pixel-wise loss and adversarial loss together is designed to guide the generator to recover images approximating the original HSIs and with finer texture details. Quantitative and qualitative results demonstrate that the proposed HSRGAN is superior to state-of-the-art methods such as SRCNN and SRGAN for HSI spatial SR.

    @inproceedings{lincoln42331,
    booktitle = {The 2019 IEEE International Geoscience and Remote Sensing Symposium (IGARSS2019)},
    month = {November},
    title = {Learning spectral and spatial features based on generative adversarial network for hyperspectral image super-resolution},
    author = {Ruituo Jiang and Xu Li and Ang Gao and Lixin Li and Hongying Meng and Shigang Yue and Lei Zhang},
    year = {2019},
    doi = {10.1109/IGARSS.2019.8900228},
    url = {https://eprints.lincoln.ac.uk/id/eprint/42331/},
    abstract = {Super-resolution (SR) of hyperspectral images (HSIs) aims to enhance the spatial/spectral resolution of hyperspectral imagery, and the super-resolved results will benefit many remote sensing applications. A generative adversarial network for HSI super-resolution (HSRGAN) is proposed in this paper. Specifically, HSRGAN constructs spectral and spatial blocks with residual networks in the generator to effectively learn spectral and spatial features from HSIs. Furthermore, a new loss function which combines the pixel-wise loss and adversarial loss together is designed to guide the generator to recover images approximating the original HSIs and with finer texture details. Quantitative and qualitative results demonstrate that the proposed HSRGAN is superior to state-of-the-art methods such as SRCNN and SRGAN for HSI spatial SR.}
    }
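    The combined objective described above (pixel-wise loss plus adversarial loss) reduces to a weighted sum. The sketch below states it with NumPy for clarity; the weight lam and the log-based adversarial term are common conventions assumed here, not necessarily the paper's exact formulation.

    import numpy as np

    def generator_loss(sr, hr, d_fake, lam=1e-3):
        """Pixel-wise MSE between super-resolved (sr) and reference (hr)
        HSI cubes, plus an adversarial term that rewards fooling the
        discriminator (d_fake = its probabilities for the sr batch)."""
        pixel = np.mean((sr - hr) ** 2)
        adversarial = -np.mean(np.log(d_fake + 1e-8))
        return pixel + lam * adversarial

    sr = np.random.rand(2, 8, 8, 31)  # toy batch: 8x8 patches, 31 bands
    hr = np.random.rand(2, 8, 8, 31)
    print(generator_loss(sr, hr, d_fake=np.array([0.4, 0.6])))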
  • F. Camara, P. Dickinson, N. Merat, and C. Fox, “Towards game theoretic av controllers: measuring pedestrian behaviour in virtual reality,” in Ieee/rsj international conference on intelligent robots and systems (iros 2019) workshops, 2019.
    [BibTeX] [Abstract] [Download PDF]

    Understanding pedestrian interaction is of great importance for autonomous vehicles (AVs). The present study investigates pedestrian behaviour during crossing scenarios with an autonomous vehicle using Virtual Reality. The self-driving car is driven by a game theoretic controller which adapts its driving style to pedestrian crossing behaviour. We found that subjects value collision avoidance about 8 times more than saving 0.02 seconds. A previous lab study found time saving to be more important than collision avoidance in a highly unrealistic board game style version of the game. The present result suggests that the VR simulation reproduces real world road-crossings better than the lab study and provides a reliable test-bed for the development of game theoretic models for AVs.

    @inproceedings{lincoln37261,
    booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2019) Workshops},
    month = {November},
    title = {Towards game theoretic AV controllers: measuring pedestrian behaviour in Virtual Reality},
    author = {Fanta Camara and Patrick Dickinson and Natasha Merat and Charles Fox},
    publisher = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2019) Workshops},
    year = {2019},
    url = {https://eprints.lincoln.ac.uk/id/eprint/37261/},
    abstract = {Understanding pedestrian interaction is of great importance for autonomous vehicles (AVs). The present study investigates pedestrian behaviour during crossing scenarios with an autonomous vehicle using Virtual Reality. The self-driving car is driven by a game theoretic controller which adapts its driving style to pedestrian crossing behaviour. We found that subjects value collision avoidance about 8 times more than saving 0.02 seconds. A previous lab study found time saving to be more important than collision avoidance in a highly unrealistic board game style version of the game. The present result suggests that the VR simulation reproduces real world road-crossings better than the lab study and provides a reliable test-bed for the development of game theoretic models for AVs.}
    }
  • A. Zaganidis, A. Zerntev, T. Duckett, and G. Cielniak, “Semantically assisted loop closure in slam using ndt histograms,” in International conference on intelligent robots and systems (iros), 2019.
    [BibTeX] [Abstract] [Download PDF]

    Precise knowledge of pose is of great importance for reliable operation of mobile robots in outdoor environments. Simultaneous localization and mapping (SLAM) is the online construction of a map during exploration of an environment. One of the components of SLAM is loop closure detection, identifying that the same location has been visited and is present on the existing map, and localizing against it. We have shown in previous work that using semantics from a deep segmentation network in conjunction with the Normal Distributions Transform point cloud registration improves the robustness, speed and accuracy of lidar odometry. In this work we extend the method for loop closure detection, using the labels already available from local registration into NDT Histograms, and we present a SLAM pipeline based on Semantic assisted NDT and PointNet++. We experimentally demonstrate on sequences from the KITTI benchmark that the map descriptor we propose outperforms NDT Histograms without semantics, and we validate its use on a SLAM task.

    @inproceedings{lincoln37750,
    booktitle = {International Conference on Intelligent Robots and Systems (IROS)},
    month = {November},
    title = {Semantically Assisted Loop Closure in SLAM Using NDT Histograms},
    author = {Anestis Zaganidis and Alexandros Zerntev and Tom Duckett and Grzegorz Cielniak},
    year = {2019},
    url = {https://eprints.lincoln.ac.uk/id/eprint/37750/},
    abstract = {Precise knowledge of pose is of great importance for reliable operation of mobile robots in outdoor environments. Simultaneous localization and mapping (SLAM) is the online construction of a map during exploration of an environment. One of the components of SLAM is loop closure detection, identifying that the same location has been visited and is present on the existing map, and localizing against it. We have shown in previous work that using semantics from a deep segmentation network in conjunction with the Normal Distributions Transform point cloud registration improves the robustness, speed and accuracy of lidar odometry. In this work we extend the method for loop closure detection, using the labels already available from local registration into NDT Histograms, and we present a SLAM pipeline based on Semantic assisted NDT and PointNet++. We experimentally demonstrate on sequences from the KITTI benchmark that the map descriptor we propose outperforms NDT Histograms without semantics, and we validate its use on a SLAM task.}
    }
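    Loop-closure candidates in such a pipeline come from comparing global appearance descriptors. A toy version, assuming per-class NDT-style histograms and a chi-square similarity with an invented acceptance threshold (the actual binning and matching in the paper differ), looks like this:

    import numpy as np

    def histogram_similarity(h1, h2, eps=1e-9):
        """Chi-square similarity between two normalised histograms;
        1.0 means identical distributions."""
        p = h1.ravel() / (h1.sum() + eps)
        q = h2.ravel() / (h2.sum() + eps)
        return 1.0 - 0.5 * np.sum((p - q) ** 2 / (p + q + eps))

    # Rows = semantic class, columns = NDT shape bin (toy counts).
    scan_a = np.array([[30, 5, 2], [4, 20, 6], [1, 2, 25]])
    scan_b = np.array([[28, 6, 3], [5, 19, 7], [2, 2, 23]])
    if histogram_similarity(scan_a, scan_b) > 0.9:  # invented threshold
        print("loop closure candidate")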
  • J. Lock, I. Gilchrist, G. Cielniak, and N. Bellotto, “Bone-conduction audio interface to guide people with visual impairments,” in International workshop on assistive engineering and information technology (aeit 2019), 2019.
    [BibTeX] [Abstract] [Download PDF]

    The ActiVis project's aim is to build a mobile guidance aid to help people with limited vision find objects in an unknown environment. This system uses bone-conduction headphones to transmit audio signals to the user and requires an effective non-visual interface. To this end, we propose a new audio-based interface that uses a spatialised signal to convey a target's position on the horizontal plane. The vertical position on the median plane is given by adjusting the tone's pitch to overcome the audio localisation limitations of bone-conduction headphones. This interface is validated through a set of experiments with blindfolded and visually impaired participants.

    @inproceedings{lincoln36793,
    booktitle = {International Workshop on Assistive Engineering and Information Technology (AEIT 2019)},
    month = {November},
    title = {Bone-Conduction Audio Interface to Guide People with Visual Impairments},
    author = {Jacobus Lock and Iain Gilchrist and Grzegorz Cielniak and Nicola Bellotto},
    year = {2019},
    url = {https://eprints.lincoln.ac.uk/id/eprint/36793/},
    abstract = {The ActiVis project's aim is to build a mobile guidance aid to help people with limited vision find objects in an unknown environment. This system uses bone-conduction headphones to transmit audio signals to the user and requires an effective non-visual interface. To this end, we propose a new audio-based interface that uses a spatialised signal to convey a target's position on the horizontal plane. The vertical position on the median plane is given by adjusting the tone's pitch to overcome the audio localisation limitations of bone-conduction headphones. This interface is validated through a set of experiments with blindfolded and visually impaired participants.}
    }
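    The interface idea above (azimuth rendered as stereo pan, elevation as pitch) can be sketched as a tone generator. The frequency mapping, pan law and ranges are assumptions for illustration, not the ActiVis parameters.

    import numpy as np

    def guidance_tone(azimuth_deg, elevation_deg, sr=44100, dur=0.2):
        """Return a stereo buffer: pan encodes horizontal position,
        pitch encodes vertical position of the target."""
        t = np.arange(int(sr * dur)) / sr
        freq = 440.0 * 2 ** (elevation_deg / 30.0)  # pitch from elevation
        tone = np.sin(2 * np.pi * freq * t)
        pan = (azimuth_deg / 90.0 + 1.0) / 2.0      # 0 = left, 1 = right
        return np.stack([(1.0 - pan) * tone, pan * tone], axis=1)

    buf = guidance_tone(azimuth_deg=45.0, elevation_deg=-15.0)
    print(buf.shape)  # (8820, 2)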
  • C. Zhao, L. Sun, Z. Yan, G. Neumann, T. Duckett, and R. Stolkin, “Learning kalman network: a deep monocular visual odometry for on-road driving,” Robotics and autonomous systems, vol. 121, p. 103234, 2019. doi:10.1016/j.robot.2019.07.004
    [BibTeX] [Abstract] [Download PDF]

    This paper proposes a Learning Kalman Network (LKN) based monocular visual odometry (VO), i.e. LKN-VO, for on-road driving. Most existing learning-based VO methods focus on ego-motion estimation by comparing the two most recent consecutive frames. By contrast, LKN-VO incorporates a learning ego-motion estimation through the current measurement, and a discriminative state estimator through a sequence of previous measurements. Superior to model-based monocular VO, a more accurate absolute scale can be learned by LKN without any geometric constraints. In contrast to the model-based Kalman Filter (KF), the optimal model parameters of LKN can be obtained from dynamic and deterministic outputs of the neural network without elaborate human design. LKN is a hybrid approach where we achieve the non-linearity of the observation model and the transition model through deep neural networks, and update the state following the Kalman probabilistic mechanism. In contrast to the learning-based state estimator, a sparse representation is further proposed to learn the correlations within the states from the car's movement behaviour, thereby applying better filtering on the 6DOF trajectory for on-road driving. The experimental results show that the proposed LKN-VO outperforms both model-based and learning state-estimator-based monocular VO on the most well-cited on-road driving datasets, i.e. KITTI and Apolloscape. In addition, LKN-VO is integrated with dense 3D mapping, which can be deployed for simultaneous localization and mapping in urban environments.

    @article{lincoln43351,
    volume = {121},
    month = {November},
    author = {Cheng Zhao and Li Sun and Zhi Yan and Gerhard Neumann and Tom Duckett and Rustam Stolkin},
    title = {Learning Kalman Network: A deep monocular visual odometry for on-road driving},
    publisher = {Elsevier},
    year = {2019},
    journal = {Robotics and Autonomous Systems},
    doi = {10.1016/j.robot.2019.07.004},
    pages = {103234},
    url = {https://eprints.lincoln.ac.uk/id/eprint/43351/},
    abstract = {This paper proposes a Learning Kalman Network (LKN) based monocular visual odometry (VO), i.e. LKN-VO, for on-road driving. Most existing learning-based VO methods focus on ego-motion estimation by comparing the two most recent consecutive frames. By contrast, LKN-VO incorporates a learning ego-motion estimation through the current measurement, and a discriminative state estimator through a sequence of previous measurements. Superior to model-based monocular VO, a more accurate absolute scale can be learned by LKN without any geometric constraints. In contrast to the model-based Kalman Filter (KF), the optimal model parameters of LKN can be obtained from dynamic and deterministic outputs of the neural network without elaborate human design. LKN is a hybrid approach where we achieve the non-linearity of the observation model and the transition model through deep neural networks, and update the state following the Kalman probabilistic mechanism. In contrast to the learning-based state estimator, a sparse representation is further proposed to learn the correlations within the states from the car's movement behaviour, thereby applying better filtering on the 6DOF trajectory for on-road driving. The experimental results show that the proposed LKN-VO outperforms both model-based and learning state-estimator-based monocular VO on the most well-cited on-road driving datasets, i.e. KITTI and Apolloscape. In addition, LKN-VO is integrated with dense 3D mapping, which can be deployed for simultaneous localization and mapping in urban environments.}
    }
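    The filtering backbone the paper builds on is the standard Kalman correction; in an LKN-style system the measurement (ego-motion) and, implicitly, the model parameters would come from neural networks rather than the fixed toy matrices used in this sketch.

    import numpy as np

    def kalman_update(x, P, z, H, R):
        """One Kalman correction step: fuse measurement z into state x
        with covariance P, given observation model H and noise R."""
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
        return x, P

    # Toy 3-DoF position state updated from two 'network' measurements.
    x, P = np.zeros(3), np.eye(3)
    H, R = np.eye(3), 0.1 * np.eye(3)
    for z in [np.array([0.10, 0.00, 0.02]), np.array([0.22, 0.01, 0.03])]:
        x, P = kalman_update(x, P, z, H, R)
    print(x)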
  • F. D. Duchetto, P. Baxter, and M. Hanheide, “Lindsey the tour guide robot – usage patterns in a museum long-term deployment,” in International conference on robot & human interactive communication (ro-man), New Delhi, 2019. doi:10.1109/RO-MAN46459.2019.8956329
    [BibTeX] [Abstract] [Download PDF]

    The long-term deployment of autonomous robots co-located with humans in real-world scenarios remains a challenging problem. In this paper, we present the “Lindsey” tour guide robot system in which we attempt to increase the social capability of current state-of-the-art robotic technologies. The robot is currently deployed at a museum displaying local archaeology where it is providing guided tours and information to visitors. The robot is operating autonomously daily, navigating around the museum and engaging with the public, with on-site assistance from roboticists only in cases of hardware/software malfunctions. In a deployment lasting seven months up to now, it has travelled nearly 300km and has delivered more than 2300 guided tours. First, we describe the robot framework and the management interfaces implemented. We then analyse the data collected up to now with the goal of understanding and modelling the visitors’ behavior in terms of their engagement with the technology. These data suggest that while short-term engagement is readily gained, continued engagement with the robot tour guide is likely to require more refined and robust socially interactive behaviours. The deployed system presents us with an opportunity to empirically address these issues.

    @inproceedings{lincoln37348,
    month = {October},
    author = {Francesco Del Duchetto and Paul Baxter and Marc Hanheide},
    booktitle = {International Conference on Robot \& Human Interactive Communication (RO-MAN)},
    address = {New Delhi},
    title = {Lindsey the Tour Guide Robot - Usage Patterns in a Museum Long-Term Deployment},
    publisher = {IEEE},
    doi = {10.1109/RO-MAN46459.2019.8956329},
    year = {2019},
    url = {https://eprints.lincoln.ac.uk/id/eprint/37348/},
    abstract = {The long-term deployment of autonomous robots co-located with humans in real-world scenarios remains a challenging problem. In this paper, we present the ``Lindsey'' tour guide robot system in which we attempt to increase the social capability of current state-of-the-art robotic technologies. The robot is currently deployed at a museum displaying local archaeology where it is providing guided tours and information to visitors. The robot is operating autonomously daily, navigating around the museum and engaging with the public, with on-site assistance from roboticists only in cases of hardware/software malfunctions. In a deployment lasting seven months up to now, it has travelled nearly 300km and has delivered more than 2300 guided tours. First, we describe the robot framework and the management interfaces implemented. We then analyse the data collected up to now with the goal of understanding and modelling the visitors' behavior in terms of their engagement with the technology. These data suggest that while short-term engagement is readily gained, continued engagement with the robot tour guide is likely to require more refined and robust socially interactive behaviours. The deployed system presents us with an opportunity to empirically address these issues.}
    }
  • T. Krajnik, T. Vintr, S. M. Mellado, J. P. Fentanes, G. Cielniak, O. M. Mozos, G. Broughton, and T. Duckett, “Warped hypertime representations for long-term autonomy of mobile robots,” Ieee robotics and automation letters, vol. 4, iss. 4, p. 3310–3317, 2019. doi:10.1109/LRA.2019.2926682
    [BibTeX] [Abstract] [Download PDF]

    This letter presents a novel method for introducing time into discrete and continuous spatial representations used in mobile robotics, by modeling long-term, pseudo-periodic variations caused by human activities or natural processes. Unlike previous approaches, the proposed method does not treat time and space separately, and its continuous nature respects both the temporal and spatial continuity of the modeled phenomena. The key idea is to extend the spatial model with a set of wrapped time dimensions that represent the periodicities of the observed events. By performing clustering over this extended representation, we obtain a model that allows the prediction of probabilistic distributions of future states and events in both discrete and continuous spatial representations. We apply the proposed algorithm to several long-term datasets acquired by mobile robots and show that the method enables a robot to predict future states of representations with different dimensions. The experiments further show that the method achieves more accurate predictions than the previous state of the art.

    @article{lincoln36962,
    volume = {4},
    number = {4},
    month = {October},
    author = {Tomas Krajnik and Tomas Vintr and Sergi Molina Mellado and Jaime Pulido Fentanes and Grzegorz Cielniak and Oscar Martinez Mozos and George Broughton and Tom Duckett},
    title = {Warped Hypertime Representations for Long-Term Autonomy of Mobile Robots},
    publisher = {IEEE},
    year = {2019},
    journal = {IEEE Robotics and Automation Letters},
    doi = {10.1109/LRA.2019.2926682},
    pages = {3310--3317},
    url = {https://eprints.lincoln.ac.uk/id/eprint/36962/},
    abstract = {This letter presents a novel method for introducing time into discrete and continuous spatial representations used in mobile robotics, by modeling long-term, pseudo-periodic variations caused by human activities or natural processes. Unlike previous approaches, the proposed method does not treat time and space separately, and its continuous nature respects both the temporal and spatial continuity of the modeled phenomena. The key idea is to extend the spatial model with a set of wrapped time dimensions that represent the periodicities of the observed events. By performing clustering over this extended representation, we obtain a model that allows the prediction of probabilistic distributions of future states and events in both discrete and continuous spatial representations. We apply the proposed algorithm to several long-term datasets acquired by mobile robots and show that the method enables a robot to predict future states of representations with different dimensions. The experiments further show that the method achieves more accurate predictions than the previous state of the art.}
    }
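    The key idea, extending the spatial model with wrapped time dimensions, amounts to appending a (cos, sin) pair for each modelled period before clustering. The sketch assumes a single daily period and k-means; the letter's choice of periods and clustering model may differ.

    import numpy as np
    from sklearn.cluster import KMeans

    def hypertime_features(xy, t, periods=(86400.0,)):
        """Append one wrapped-time circle (cos, sin) per period to the
        spatial coordinates, so clustering sees space and time jointly."""
        cols = [xy]
        for T in periods:
            cols.append(np.cos(2 * np.pi * t[:, None] / T))
            cols.append(np.sin(2 * np.pi * t[:, None] / T))
        return np.hstack(cols)

    rng = np.random.default_rng(1)
    t = rng.uniform(0, 7 * 86400, size=500)   # one week of event times
    xy = rng.normal(size=(500, 2))            # event positions
    X = hypertime_features(xy, t)             # shape (500, 4)
    model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
    print(model.cluster_centers_.shape)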
  • R. Madigan, S. Nordhoff, C. Fox, R. E. Amina, T. Louw, M. Wilbrink, A. Schieben, and N. Merat, “Understanding interactions between automated road transport systems and other road users: a video analysis,” Transportation research part f, vol. 66, p. 196–213, 2019. doi:10.1016/j.trf.2019.09.006
    [BibTeX] [Abstract] [Download PDF]

    If automated vehicles (AVs) are to move efficiently through the traffic environment, there is a need for them to interact and communicate with other road users in a comprehensible and predictable manner. For this reason, an understanding of the interaction requirements of other road users is needed. The current study investigated these requirements through an analysis of 22 hours of video footage of the CityMobil2 AV demonstrations in La Rochelle (France) and Trikala (Greece). Manual and automated video-analysis techniques were used to identify typical interaction patterns between AVs and other road users. Results indicate that road infrastructure and road user factors had a major impact on the type of interactions that arose between AVs and other road users. Road infrastructure features such as road width, and the presence or absence of zebra crossings, had an impact on road users' trajectory decisions while approaching an AV. Where possible, pedestrians and cyclists appeared to leave as much space as possible between their trajectories and that of the AV. However, in situations where the infrastructure did not allow for the separation of traffic, risky behaviours were more likely to emerge, with cyclists, in particular, travelling closely alongside the AVs on narrow paths of the road, rather than waiting for the AV to pass. In addition, the types of interaction varied considerably across socio-demographic groups, with females and older users more likely to show cautionary behaviour around the AVs than males or younger road users. Overall, the results highlight the importance of implementing the correct infrastructure to support the safe introduction of AVs, while also ensuring that the behaviour of the AV matches other road users' expectations as closely as possible in order to avoid traffic conflicts.

    @article{lincoln36914,
    volume = {66},
    month = {October},
    author = {Ruth Madigan and Sina Nordhoff and Charles Fox and Roja Ezzati Amina and Tyron Louw and Marc Wilbrink and Anna Schieben and Natasha Merat},
    title = {Understanding interactions between Automated Road Transport Systems and other road users: A video analysis},
    publisher = {Elsevier},
    year = {2019},
    journal = {Transportation Research Part F},
    doi = {10.1016/j.trf.2019.09.006},
    pages = {196--213},
    url = {https://eprints.lincoln.ac.uk/id/eprint/36914/},
    abstract = {If automated vehicles (AVs) are to move efficiently through the traffic environment, there is a need for them to interact and communicate with other road users in a comprehensible and predictable manner. For this reason, an understanding of the interaction requirements of other road users is needed. The current study investigated these requirements through an analysis of 22 hours of video footage of the CityMobil2 AV demonstrations in La Rochelle (France) and Trikala (Greece). Manual and automated video-analysis techniques were used to identify typical interaction patterns between AVs and other road users. Results indicate that road infrastructure and road user factors had a major impact on the type of interactions that arose between AVs and other road users. Road infrastructure features such as road width, and the presence or absence of zebra crossings, had an impact on road users' trajectory decisions while approaching an AV. Where possible, pedestrians and cyclists appeared to leave as much space as possible between their trajectories and that of the AV. However, in situations where the infrastructure did not allow for the separation of traffic, risky behaviours were more likely to emerge, with cyclists, in particular, travelling closely alongside the AVs on narrow paths of the road, rather than waiting for the AV to pass. In addition, the types of interaction varied considerably across socio-demographic groups, with females and older users more likely to show cautionary behaviour around the AVs than males or younger road users. Overall, the results highlight the importance of implementing the correct infrastructure to support the safe introduction of AVs, while also ensuring that the behaviour of the AV matches other road users' expectations as closely as possible in order to avoid traffic conflicts.}
    }
  • E. Senft, S. Lemaignan, P. Baxter, M. Bartlett, and T. Belpaeme, “Teaching robots social autonomy from in situ human guidance,” Science robotics, vol. 4, iss. 35, p. eaat1186, 2019. doi:10.1126/scirobotics.aat1186
    [BibTeX] [Abstract] [Download PDF]

    Striking the right balance between robot autonomy and human control is a core challenge in social robotics, in both technical and ethical terms. On the one hand, extended robot autonomy offers the potential for increased human productivity and for the off-loading of physical and cognitive tasks. On the other hand, making the most of human technical and social expertise, as well as maintaining accountability, is highly desirable. This is particularly relevant in domains such as medical therapy and education, where social robots hold substantial promise, but where there is a high cost to poorly performing autonomous systems, compounded by ethical concerns. We present a field study in which we evaluate SPARC (supervised progressively autonomous robot competencies), an innovative approach addressing this challenge whereby a robot progressively learns appropriate autonomous behavior from in situ human demonstrations and guidance. Using online machine learning techniques, we demonstrate that the robot could effectively acquire legible and congruent social policies in a high-dimensional child-tutoring situation needing only a limited number of demonstrations while preserving human supervision whenever desirable. By exploiting human expertise, our technique enables rapid learning of autonomous social and domain-specific policies in complex and nondeterministic environments. Last, we underline the generic properties of SPARC and discuss how this paradigm is relevant to a broad range of difficult human-robot interaction scenarios.

    @article{lincoln38234,
    volume = {4},
    number = {35},
    month = {October},
    author = {Emmanuel Senft and S{\'e}verin Lemaignan and Paul Baxter and Madeleine Bartlett and Tony Belpaeme},
    title = {Teaching robots social autonomy from in situ human guidance},
    publisher = {American Association for the Advancement of Science},
    year = {2019},
    journal = {Science Robotics},
    doi = {10.1126/scirobotics.aat1186},
    pages = {eaat1186},
    url = {https://eprints.lincoln.ac.uk/id/eprint/38234/},
    abstract = {Striking the right balance between robot autonomy and human control is a core challenge in social robotics, in both technical and ethical terms. On the one hand, extended robot autonomy offers the potential for increased human productivity and for the off-loading of physical and cognitive tasks. On the other hand, making the most of human technical and social expertise, as well as maintaining accountability, is highly desirable. This is particularly relevant in domains such as medical therapy and education, where social robots hold substantial promise, but where there is a high cost to poorly performing autonomous systems, compounded by ethical concerns. We present a field study in which we evaluate SPARC (supervised progressively autonomous robot competencies), an innovative approach addressing this challenge whereby a robot progressively learns appropriate autonomous behavior from in situ human demonstrations and guidance. Using online machine learning techniques, we demonstrate that the robot could effectively acquire legible and congruent social policies in a high-dimensional child-tutoring situation needing only a limited number of demonstrations while preserving human supervision whenever desirable. By exploiting human expertise, our technique enables rapid learning of autonomous social and domain-specific policies in complex and nondeterministic environments. Last, we underline the generic properties of SPARC and discuss how this paradigm is relevant to a broad range of difficult human-robot interaction scenarios.}
    }
  • A. Kucukyilmaz and I. Issak, “Online identification of interaction behaviors from haptic data during collaborative object transfer,” Ieee robotics and automation letters, p. 1–1, 2019. doi:10.1109/LRA.2019.2945261
    [BibTeX] [Abstract] [Download PDF]

    Joint object transfer is a complex task, less structured and less specific than those found in typical industrial settings. When two humans are involved in such a task, they cooperate through different modalities to understand the interaction states during operation and mutually adapt to one another's actions. Mutual adaptation implies that both partners can identify how well they collaborate (i.e. infer the interaction state) and act accordingly. These interaction states can define whether the partners work in harmony, face conflicts, or remain passive during interaction. Understanding how two humans work together during physical interactions is important when exploring the ways a robotic assistant should operate under similar settings. This study acts as a first step to implement an automatic classification mechanism during ongoing collaboration to identify the interaction state during object co-manipulation. The classification is done on a dataset consisting of data from 40 subjects, who are partnered to form 20 dyads. The dyads experiment in a physical human-human interaction (pHHI) scenario to move an object in a haptics-enabled virtual environment to reach predefined goal configurations. In this study, we propose a sliding-window approach for feature extraction and demonstrate the online classification methodology to identify interaction patterns. We evaluate our approach using 1) a support vector machine classifier (SVMc) and 2) a Gaussian Process classifier (GPc) for multi-class classification, and achieve over 80% accuracy with both classifiers when identifying general interaction types.

    @article{lincoln37631,
    month = {October},
    author = {Ayse Kucukyilmaz and Illimar Issak},
    title = {Online Identification of Interaction Behaviors from Haptic Data during Collaborative Object Transfer},
    publisher = {IEEE},
    journal = {IEEE Robotics and Automation Letters},
    doi = {10.1109/LRA.2019.2945261},
    pages = {1--1},
    year = {2019},
    url = {https://eprints.lincoln.ac.uk/id/eprint/37631/},
    abstract = {Joint object transfer is a complex task, less structured and less specific than those found in typical industrial settings. When two humans are involved in such a task, they cooperate through different modalities to understand the interaction states during operation and mutually adapt to one another's actions. Mutual adaptation implies that both partners can identify how well they collaborate (i.e. infer the interaction state) and act accordingly. These interaction states can define whether the partners work in harmony, face conflicts, or remain passive during interaction. Understanding how two humans work together during physical interactions is important when exploring the ways a robotic assistant should operate under similar settings. This study acts as a first step to implement an automatic classification mechanism during ongoing collaboration to identify the interaction state during object co-manipulation. The classification is done on a dataset consisting of data from 40 subjects, who are partnered to form 20 dyads. The dyads experiment in a physical human-human interaction (pHHI) scenario to move an object in a haptics-enabled virtual environment to reach predefined goal configurations. In this study, we propose a sliding-window approach for feature extraction and demonstrate the online classification methodology to identify interaction patterns. We evaluate our approach using 1) a support vector machine classifier (SVMc) and 2) a Gaussian Process classifier (GPc) for multi-class classification, and achieve over 80\% accuracy with both classifiers when identifying general interaction types.}
    }
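    The sliding-window feature extraction plus classifier pipeline is easy to picture with synthetic force traces. The window length, step, summary statistics and the harmonious/conflicting toy signals below are all invented stand-ins for the study's haptic features.

    import numpy as np
    from sklearn.svm import SVC

    def window_features(force, window=50, step=25):
        """Per-window summary statistics over a 1-D force trace."""
        feats = []
        for s in range(0, len(force) - window + 1, step):
            w = force[s:s + window]
            feats.append([w.mean(), w.std(), np.abs(np.diff(w)).mean()])
        return np.array(feats)

    rng = np.random.default_rng(2)
    harmony = rng.normal(0.0, 0.2, 2000)    # low interaction forces
    conflict = rng.normal(0.0, 1.5, 2000)   # large opposing forces
    X = np.vstack([window_features(harmony), window_features(conflict)])
    y = np.array([0] * (len(X) // 2) + [1] * (len(X) // 2))
    clf = SVC(kernel="rbf").fit(X, y)
    print(clf.score(X, y))  # training accuracy on the toy data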
  • A. Postnikov, I. Albayati, S. Pearson, C. Bingham, R. Bickerton, and A. Zolotas, “Facilitating static firm frequency response with aggregated networks of commercial food refrigeration systems,” Applied energy, vol. 251, p. 113357, 2019. doi:10.1016/j.apenergy.2019.113357
    [BibTeX] [Abstract] [Download PDF]

    Aggregated electrical loads from massive numbers of distributed retail refrigeration systems could have a significant role in frequency balancing services. To date, no study has realised effective engineering applications of static firm frequency response to these aggregated networks. Here, the authors present a novel and validated approach that enables large-scale control of distributed retail refrigeration assets. The authors show a validated model that simulates the operation of retail refrigerators comprising centralised compressor packs feeding multiple in-store display cases. The model was used to determine an optimal control strategy that both minimised the engineering risk to the pack during shut-down and potential impacts to food safety. The authors show that, following a load-shedding frequency response trigger, the pack should be allowed to maintain operation but with an increased suction pressure set-point. This reduces compressor load whilst enabling a continuous flow of refrigerant to food cases. In addition, the authors simulated an aggregated response of up to three hundred compressor packs (over 2 MW capacity), with refrigeration cases on hysteresis and modulation control. Hysteresis control, compared to modulation, led to undesired load oscillations when the system recovers after a frequency balancing event. Transient responses of the system during the event showed significant fluctuations of active power when the compressor network responds to both primary and secondary parts of a frequency balancing event. Enabling frequency response within this system is demonstrated by linking the aggregated refrigeration loads with a simplified power grid model that simulates a power loss incident.

    @article{lincoln36072,
    volume = {251},
    month = {October},
    author = {Andrey Postnikov and Ibrahim Albayati and Simon Pearson and Chris Bingham and Ronald Bickerton and Argyrios Zolotas},
    title = {Facilitating static firm frequency response with aggregated networks of commercial food refrigeration systems},
    publisher = {Elsevier},
    year = {2019},
    journal = {Applied Energy},
    doi = {10.1016/j.apenergy.2019.113357},
    pages = {113357},
    url = {https://eprints.lincoln.ac.uk/id/eprint/36072/},
    abstract = {Aggregated electrical loads from massive numbers of distributed retail refrigeration systems could have a significant role in frequency balancing services. To date, no study has realised effective engineering applications of static firm frequency response to these aggregated networks. Here, the authors present a novel and validated approach that enables large-scale control of distributed retail refrigeration assets. The authors show a validated model that simulates the operation of retail refrigerators comprising centralised compressor packs feeding multiple in-store display cases. The model was used to determine an optimal control strategy that both minimised the engineering risk to the pack during shut-down and potential impacts to food safety. The authors show that, following a load-shedding frequency response trigger, the pack should be allowed to maintain operation but with an increased suction pressure set-point. This reduces compressor load whilst enabling a continuous flow of refrigerant to food cases. In addition, the authors simulated an aggregated response of up to three hundred compressor packs (over 2 MW capacity), with refrigeration cases on hysteresis and modulation control. Hysteresis control, compared to modulation, led to undesired load oscillations when the system recovers after a frequency balancing event. Transient responses of the system during the event showed significant fluctuations of active power when the compressor network responds to both primary and secondary parts of a frequency balancing event. Enabling frequency response within this system is demonstrated by linking the aggregated refrigeration loads with a simplified power grid model that simulates a power loss incident.}
    }
  • A. Nanjangud, C. I. Underwood, C. M. Saaj, A. Young, P. C. Blacker, S. Eckersley, M. Sweeting, and P. Bianco, “Towards on-orbit assembly of large space telescopes: mission architectures, concepts, and analyses,” in 70th international astronautical congress, 2019.
    [BibTeX] [Download PDF]
    @inproceedings{lincoln39415,
    booktitle = {70th International Astronautical Congress},
    month = {October},
    title = {Towards On-Orbit Assembly of Large Space Telescopes: Mission Architectures, Concepts, and Analyses},
    author = {Angadh Nanjangud and Craig I. Underwood and Chakravarthini M. Saaj and Alex Young and Peter C. Blacker and Steve Eckersley and Martin Sweeting and Paolo Bianco},
    year = {2019},
    url = {https://eprints.lincoln.ac.uk/id/eprint/39415/}
    }
  • M. Selvaggio, A. G. Esfahani, R. Moccia, F. Ficuciello, and B. Siciliano, “Haptic-guided shared control for needle grasping optimization in minimally invasive robotic surgery,” Ieee/rsj international conference intelligent robotic system, 2019.
    [BibTeX] [Abstract] [Download PDF]

    During suturing tasks performed with minimally invasive surgical robots, configuration singularities and joint limits often force surgeons to interrupt the task and re-grasp the needle using dual-arm movements. This yields an increased operator's cognitive load, time-to-completion, fatigue and performance degradation. In this paper, we propose a haptic-guided shared control method for grasping the needle with the Patient Side Manipulator (PSM) of the da Vinci robot avoiding such issues. We suggest a cost function consisting of (i) the distance from robot joint limits and (ii) the task-oriented manipulability over the suturing trajectory. We evaluate the cost and its gradient on the needle grasping manifold that allows us to obtain the optimal grasping pose for joint-limit and singularity free movements of the needle during suturing. Then, we compute force cues that are applied to the Master Tool Manipulator (MTM) of the da Vinci to guide the operator towards the optimal grasp. As such, our system helps the operator to choose a grasping configuration allowing the robot to avoid joint limits and singularities during post-grasp suturing movements. We show the effectiveness of our proposed haptic-guided shared control method during suturing using both simulated and real experiments. The results illustrate that our approach significantly improves the performance in terms of needle re-grasping.

    @article{lincoln36571,
    month = {October},
    title = {Haptic-guided shared control for needle grasping optimization in minimally invasive robotic surgery},
    author = {Mario Selvaggio and Amir Ghalamzan Esfahani and Rocco Moccia and Fanny Ficuciello and Bruno Siciliano},
    year = {2019},
    journal = {IEEE/RSJ International Conference Intelligent Robotic System},
    url = {https://eprints.lincoln.ac.uk/id/eprint/36571/},
    abstract = {During suturing tasks performed with minimally invasive surgical robots, configuration singularities and joint limits often force surgeons to interrupt the task and re-grasp the needle using dual-arm movements. This yields an increased operator's cognitive load, time-to-completion, fatigue and performance degradation. In this paper, we propose a haptic-guided shared control method for grasping the needle with the Patient Side Manipulator (PSM) of the da Vinci robot avoiding such issues. We suggest a cost function consisting of (i) the distance from robot joint limits and (ii) the task-oriented manipulability over the suturing trajectory. We evaluate the cost and its gradient on the needle grasping manifold that allows us to obtain the optimal grasping pose for joint-limit and singularity free movements of the needle during suturing. Then, we compute force cues that are applied to the Master Tool Manipulator (MTM) of the da Vinci to guide the operator towards the optimal grasp. As such, our system helps the operator to choose a grasping configuration allowing the robot to avoid joint limits and singularities during post-grasp suturing movements. We show the effectiveness of our proposed haptic-guided shared control method during suturing using both simulated and real experiments. The results illustrate that our approach significantly improves the performance in terms of needle re-grasping.}
    }
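
    The cost-and-gradient idea above can be made concrete with a small numerical sketch. This is an illustration only, not the authors' implementation: the joint limits, the Yoshikawa manipulability proxy, and all gains are assumed values.

    import numpy as np

    # Illustrative joint limits (rad) for a 6-DOF arm -- not da Vinci PSM values.
    Q_MIN = np.full(6, -2.5)
    Q_MAX = np.full(6, 2.5)

    def joint_limit_penalty(q):
        # Grows as any joint approaches a limit (higher cost = worse grasp).
        mid = 0.5 * (Q_MIN + Q_MAX)
        rng = Q_MAX - Q_MIN
        return np.sum(((q - mid) / rng) ** 2)

    def manipulability(jac):
        # Yoshikawa measure sqrt(det(J J^T)); larger = farther from singularity.
        return np.sqrt(max(np.linalg.det(jac @ jac.T), 0.0))

    def grasp_cost(q, jacobian, w_lim=1.0, w_man=1.0):
        # Combined cost of a candidate grasp: penalise joint-limit proximity,
        # reward manipulability at the suturing configuration.
        return w_lim * joint_limit_penalty(q) - w_man * manipulability(jacobian(q))

    def guidance_force(q, jacobian, k=1.0, eps=1e-5):
        # Numerical cost gradient; the haptic cue pushes the operator
        # "downhill" towards a grasp that avoids limits and singularities.
        g = np.zeros_like(q)
        for i in range(len(q)):
            dq = np.zeros_like(q)
            dq[i] = eps
            g[i] = (grasp_cost(q + dq, jacobian) - grasp_cost(q - dq, jacobian)) / (2 * eps)
        return -k * g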
  • M. G. Lampridi, C. G. Sørensen, and D. Bochtis, “Agricultural sustainability: a review of concepts and methods,” Sustainability, vol. 11, iss. 18, p. 5120, 2019. doi:10.3390/su11185120
    [BibTeX] [Abstract] [Download PDF]

    This paper presents a methodological framework for the systematic literature review of agricultural sustainability studies. The framework synthesizes all the available literature review criteria and introduces a two-level analysis facilitating systematization, data mining, and methodology analysis. The framework was implemented for the systematic literature review of 38 crop agricultural sustainability assessment studies at farm level for the last decade. The investigation of the methodologies used is of particular importance since there are no standards or norms for the sustainability assessment of farming practices. The chronological analysis revealed that the scientific community's interest in agricultural sustainability has been increasing over the last three years. The most used methods include indicator-based tools, frameworks, and indexes, followed by multicriteria methods. In the reviewed studies, stakeholder participation proved crucial in the determination of the level of sustainability. It should also be mentioned that combinational use of methodologies is often observed, thus a clear distinction of methodologies is not always possible.

    @article{lincoln39231,
    volume = {11},
    number = {18},
    month = {September},
    author = {Maria G. Lampridi and Claus G. S{\o}rensen and Dionysis Bochtis},
    title = {Agricultural Sustainability: A Review of Concepts and Methods},
    year = {2019},
    journal = {Sustainability},
    doi = {10.3390/su11185120},
    pages = {5120},
    url = {https://eprints.lincoln.ac.uk/id/eprint/39231/},
    abstract = {This paper presents a methodological framework for the systematic literature review of agricultural sustainability studies. The framework synthesizes all the available literature review criteria and introduces a two-level analysis facilitating systematization, data mining, and methodology analysis. The framework was implemented for the systematic literature review of 38 crop agricultural sustainability assessment studies at farm level for the last decade. The investigation of the methodologies used is of particular importance since there are no standards or norms for the sustainability assessment of farming practices. The chronological analysis revealed that the scientific community's interest in agricultural sustainability has been increasing over the last three years. The most used methods include indicator-based tools, frameworks, and indexes, followed by multicriteria methods. In the reviewed studies, stakeholder participation proved crucial in the determination of the level of sustainability. It should also be mentioned that combinational use of methodologies is often observed, thus a clear distinction of methodologies is not always possible.}
    }
  • V. Marinoudi, C. Sorensen, S. Pearson, and D. Bochtis, “Robotics and labour in agriculture. a context consideration,” Biosystems engineering, vol. 184, p. 111–121, 2019. doi:10.1016/j.biosystemseng.2019.06.013
    [BibTeX] [Abstract] [Download PDF]

    Over the last century, agriculture transformed from a labour-intensive industry towards mechanisation and power-intensive production systems, while over the last 15 years the agricultural industry has started to digitise. Through this transformation there was a continuous labour outflow from agriculture, mainly from standardized tasks within the production process. Robots and artificial intelligence can now be used to conduct non-standardised tasks (e.g. fruit picking, selective weeding, crop sensing) previously reserved for human workers, and at economically feasible costs. As a consequence, automation is no longer restricted to standardized tasks within agricultural production (e.g. ploughing, combine harvesting). In addition, many job roles in agriculture may be augmented but not replaced by robots. Robots in many instances will work collaboratively with humans. This new robotic ecosystem creates complex ethical, legislative and social impacts. A key question, we consider here, is what are the short and mid-term effects of robotised agriculture on sector jobs and employment? The presented work outlines the conditions, constraints, and inherent relationships between labour input and technology input in bio-production, as well as provides the procedural framework and research design to be followed in order to evaluate the effect of adopting automation and robotics in agriculture.

    @article{lincoln36279,
    volume = {184},
    month = {August},
    author = {Vasso Marinoudi and Claus Sorensen and Simon Pearson and Dionysis Bochtis},
    title = {Robotics and labour in agriculture. A context consideration},
    publisher = {Elsevier},
    year = {2019},
    journal = {Biosystems Engineering},
    doi = {10.1016/j.biosystemseng.2019.06.013},
    pages = {111--121},
    url = {https://eprints.lincoln.ac.uk/id/eprint/36279/},
    abstract = {Over the last century, agriculture transformed from a labour-intensive industry towards mechanisation and power-intensive production systems, while over the last 15 years the agricultural industry has started to digitise. Through this transformation there was a continuous labour outflow from agriculture, mainly from standardized tasks within the production process. Robots and artificial intelligence can now be used to conduct non-standardised tasks (e.g. fruit picking, selective weeding, crop sensing) previously reserved for human workers, and at economically feasible costs. As a consequence, automation is no longer restricted to standardized tasks within agricultural production (e.g. ploughing, combine harvesting). In addition, many job roles in agriculture may be augmented but not replaced by robots. Robots in many instances will work collaboratively with humans. This new robotic ecosystem creates complex ethical, legislative and social impacts. A key question, we consider here, is what are the short and mid-term effects of robotised agriculture on sector jobs and employment? The presented work outlines the conditions, constraints, and inherent relationships between labour input and technology input in bio-production, as well as provides the procedural framework and research design to be followed in order to evaluate the effect of adopting automation and robotics in agriculture.}
    }
  • A. Seddaoui and C. M. Saaj, “Combined nonlinear H∞ controller for a controlled-floating space robot,” Journal of guidance, control, and dynamics, vol. 42, iss. 8, p. 1878–1885, 2019. doi:10.2514/1.G003811
    [BibTeX] [Download PDF]
    @article{lincoln39389,
    volume = {42},
    number = {8},
    month = {August},
    author = {Asma Seddaoui and Chakravarthini M. Saaj},
    title = {Combined Nonlinear H{\ensuremath{\infty}} Controller for a Controlled-Floating Space Robot},
    publisher = {Aerospace Research Central},
    year = {2019},
    journal = {Journal of Guidance, Control, and Dynamics},
    doi = {10.2514/1.G003811},
    pages = {1878--1885},
    url = {https://eprints.lincoln.ac.uk/id/eprint/39389/}
    }
  • R. Jiang, X. Li, S. Mei, S. Yue, and L. Zhang, “Learning spatial and spectral features via 2d-1d generative adversarial network for hyperspectral image super-resolution,” in 2019 ieee international conference on image processing (icip2019), 2019. doi:10.1109/ICIP.2019.8803200
    [BibTeX] [Abstract] [Download PDF]

    Three-dimensional (3D) convolutional networks have been proven to be able to explore spatial context and spectral information simultaneously for super-resolution (SR). However, such networks cannot practically be designed very 'deep' due to the long training time and GPU memory limitations involved in 3D convolution. Instead, in this paper, spatial context and spectral information in hyperspectral images (HSIs) are explored using two-dimensional (2D) and one-dimensional (1D) convolution, separately. Therefore, a novel 2D-1D generative adversarial network architecture (2D-1D-HSRGAN) is proposed for SR of HSIs. Specifically, the generator network consists of a spatial network and a spectral network, in which the spatial network is trained with the least absolute deviations loss function to explore spatial context by 2D convolution and the spectral network is trained with the spectral angle mapper (SAM) loss function to extract spectral information by 1D convolution. Experimental results over two real HSIs demonstrate that the proposed 2D-1D-HSRGAN clearly outperforms several state-of-the-art algorithms.

    @inproceedings{lincoln42330,
    booktitle = {2019 IEEE International Conference on Image Processing (ICIP2019)},
    month = {August},
    title = {Learning spatial and spectral features via 2D-1D generative adversarial network for hyperspectral image super-resolution},
    author = {Ruituo Jiang and Xu Li and Shaohui Mei and Shigang Yue and Lei Zhang},
    year = {2019},
    doi = {10.1109/ICIP.2019.8803200},
    journal = {2019 IEEE International Conference on Image Processing (ICIP2019)},
    url = {https://eprints.lincoln.ac.uk/id/eprint/42330/},
    abstract = {Three-dimensional (3D) convolutional networks have been proven to be able to explore spatial context and spectral information simultaneously for super-resolution (SR). However, such networks cannot practically be designed very 'deep' due to the long training time and GPU memory limitations involved in 3D convolution. Instead, in this paper, spatial context and spectral information in hyperspectral images (HSIs) are explored using two-dimensional (2D) and one-dimensional (1D) convolution, separately. Therefore, a novel 2D-1D generative adversarial network architecture (2D-1D-HSRGAN) is proposed for SR of HSIs. Specifically, the generator network consists of a spatial network and a spectral network, in which the spatial network is trained with the least absolute deviations loss function to explore spatial context by 2D convolution and the spectral network is trained with the spectral angle mapper (SAM) loss function to extract spectral information by 1D convolution. Experimental results over two real HSIs demonstrate that the proposed 2D-1D-HSRGAN clearly outperforms several state-of-the-art algorithms.}
    }
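
    The spectral angle mapper (SAM) loss used for the spectral branch has a simple closed form. A minimal NumPy sketch, assuming images shaped (bands, height, width); this is an illustration, not the authors' training code:

    import numpy as np

    def sam_loss(pred, ref, eps=1e-8):
        # Mean spectral angle (radians) between predicted and reference
        # spectra, computed per pixel over the band dimension.
        p = pred.reshape(pred.shape[0], -1)   # (bands, pixels)
        r = ref.reshape(ref.shape[0], -1)
        cos = np.sum(p * r, axis=0) / (np.linalg.norm(p, axis=0) * np.linalg.norm(r, axis=0) + eps)
        return float(np.mean(np.arccos(np.clip(cos, -1.0, 1.0))))

    A loss of zero means every predicted pixel points in the same spectral direction as its reference, regardless of magnitude.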
  • S. Molina, G. Cielniak, and T. Duckett, “Go with the flow: exploration and mapping of pedestrian flow patterns from partial observations,” in International conference on robotics and automation (icra), 2019, p. 9725–9731. doi:10.1109/ICRA.2019.8794434
    [BibTeX] [Abstract] [Download PDF]

    Understanding how people are likely to behave in an environment is a key requirement for efficient and safe robot navigation. However, mobile platforms are subject to spatial and temporal constraints, meaning that only partial observations of human activities are typically available to a robot, while the activity patterns of people in a given environment may also change at different times. To address these issues we present as the main contribution an exploration strategy for acquiring models of pedestrian flows, which decides not only the locations to explore but also the times when to explore them. The approach is driven by the uncertainty from multiple Poisson processes built from past observations. The approach is evaluated using two long-term pedestrian datasets, comparing its performance against uninformed exploration strategies. The results show that when using the uncertainty in the exploration policy, model accuracy increases, enabling faster learning of human motion patterns.

    @inproceedings{lincoln36396,
    month = {August},
    author = {Sergi Molina and Grzegorz Cielniak and Tom Duckett},
    booktitle = {International Conference on Robotics and Automation (ICRA)},
    title = {Go with the Flow: Exploration and Mapping of Pedestrian Flow Patterns from Partial Observations},
    publisher = {IEEE},
    doi = {10.1109/ICRA.2019.8794434},
    pages = {9725--9731},
    year = {2019},
    url = {https://eprints.lincoln.ac.uk/id/eprint/36396/},
    abstract = {Understanding how people are likely to behave in an environment is a key requirement for efficient and safe robot navigation. However, mobile platforms are subject to spatial and temporal constraints, meaning that only partial observations of human activities are typically available to a robot, while the activity patterns of people in a given environment may also change at different times. To address these issues we present as the main contribution an exploration strategy for acquiring models of pedestrian flows, which decides not only the locations to explore but also the times when to explore them. The approach is driven by the uncertainty from multiple Poisson processes built from past observations. The approach is evaluated using two long-term pedestrian datasets, comparing its performance against uninformed exploration strategies. The results show that when using the uncertainty in the exploration policy, model accuracy increases, enabling faster learning of human motion patterns.}
    }
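
    The uncertainty-driven exploration policy above can be sketched with Gamma-Poisson bookkeeping: each (location, time slot) cell keeps a Gamma posterior over its Poisson rate, and the robot observes where the posterior variance is highest. A toy version under these assumptions, not the paper's code:

    import numpy as np

    N_LOC, N_SLOTS = 5, 24                 # assumed grid: 5 places x 24 hourly slots
    alpha = np.ones((N_LOC, N_SLOTS))      # Gamma shape: pseudo-counts of people seen
    beta = np.ones((N_LOC, N_SLOTS))       # Gamma rate: pseudo observation time

    def record(loc, slot, people, duration=1.0):
        # Conjugate update after watching `loc` during `slot` for `duration`.
        alpha[loc, slot] += people
        beta[loc, slot] += duration

    def where_next(slot):
        # Gamma(a, b) variance is a / b^2: go where the rate is least certain.
        return int(np.argmax(alpha[:, slot] / beta[:, slot] ** 2))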
  • A. Seddaoui, C. Saaj, and S. Eckersley, “Adaptive H∞ controller for precise manoeuvring of a space robot,” in 2019 international conference on robotics and automation (icra), 2019, p. 4746–4752. doi:10.1109/ICRA.2019.8794374
    [BibTeX] [Abstract] [Download PDF]

    A space robot working in a controlled-floating mode can be used for performing in-orbit telescope assembly through simultaneously controlling the motion of the spacecraft base and its robotic arm. Handling and assembling optical mirrors requires the space robot to achieve slow and precise manoeuvres regardless of the disturbances and errors in the trajectory. The robustness offered by the nonlinear H∞ controller, in the presence of environmental disturbances and parametric uncertainties, makes it a viable solution. However, using fixed tuning parameters for this controller does not always result in the desired performance as the arm's trajectory is not known a priori for orbital assembly missions. In this paper, a complete study on the impact of the different tuning parameters is performed and a new adaptive H∞ controller is developed based on bounded functions. The simulation results presented show that the proposed adaptive H∞ controller guarantees robustness and precise tracking using a minimal amount of forces and torques for assembly operations using a small space robot.

    @inproceedings{lincoln37413,
    volume = {2019-M},
    month = {August},
    author = {A. Seddaoui and C. Saaj and S. Eckersley},
    booktitle = {2019 International Conference on Robotics and Automation (ICRA)},
    title = {Adaptive H{\ensuremath{\infty}} Controller for Precise Manoeuvring of a Space Robot},
    publisher = {IEEE},
    year = {2019},
    journal = {Proceedings - IEEE International Conference on Robotics and Automation},
    doi = {10.1109/ICRA.2019.8794374},
    pages = {4746--4752},
    url = {https://eprints.lincoln.ac.uk/id/eprint/37413/},
    abstract = {A space robot working in a controlled-floating mode can be used for performing in-orbit telescope assembly through simultaneously controlling the motion of the spacecraft base and its robotic arm. Handling and assembling optical mirrors requires the space robot to achieve slow and precise manoeuvres regardless of the disturbances and errors in the trajectory. The robustness offered by the nonlinear H∞ controller, in the presence of environmental disturbances and parametric uncertainties, makes it a viable solution. However, using fixed tuning parameters for this controller does not always result in the desired performance as the arm's trajectory is not known a priori for orbital assembly missions. In this paper, a complete study on the impact of the different tuning parameters is performed and a new adaptive H∞ controller is developed based on bounded functions. The simulation results presented show that the proposed adaptive H∞ controller guarantees robustness and precise tracking using a minimal amount of forces and torques for assembly operations using a small space robot.}
    }
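
    The adaptation through bounded functions can be loosely illustrated as a saturating gain schedule: the tuning parameter grows with the tracking error but stays confined to a fixed interval, so the robustness margins of the fixed-gain design are retained. A hypothetical sketch, not the control law from the paper:

    import numpy as np

    def adaptive_gain(error, g_min=0.1, g_max=2.0, slope=5.0):
        # tanh keeps the gain inside [g_min, g_max] for any error magnitude,
        # rising smoothly as the tracking-error norm grows.
        return g_min + (g_max - g_min) * np.tanh(slope * np.linalg.norm(error))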
  • T. Vintr, Z. Yan, T. Duckett, and T. Krajnik, “Spatio-temporal representation for long-term anticipation of human presence in service robotics,” in 2019 international conference on robotics and automation (icra), 2019, p. 2620–2626. doi:10.1109/ICRA.2019.8793534
    [BibTeX] [Abstract] [Download PDF]

    We propose an efficient spatio-temporal model for mobile autonomous robots operating in human populated environments. Our method aims to model periodic temporal patterns of people presence, which are based on peoples' routines and habits. The core idea is to project the time onto a set of wrapped dimensions that represent the periodicities of people presence. Extending a 2D spatial model with this multi-dimensional representation of time results in a memory efficient spatio-temporal model. This model is capable of long-term predictions of human presence, allowing mobile robots to schedule their services better and to plan their paths. The experimental evaluation, performed over datasets gathered by a robot over a period of several weeks, indicates that the proposed method achieves more accurate predictions than the previous state of the art used in robotics.

    @inproceedings{lincoln38253,
    month = {August},
    author = {Tomas Vintr and Zhi Yan and Tom Duckett and Tomas Krajnik},
    booktitle = {2019 International Conference on Robotics and Automation (ICRA)},
    title = {Spatio-temporal representation for long-term anticipation of human presence in service robotics},
    publisher = {IEEE},
    doi = {10.1109/ICRA.2019.8793534},
    pages = {2620--2626},
    year = {2019},
    url = {https://eprints.lincoln.ac.uk/id/eprint/38253/},
    abstract = {We propose an efficient spatio-temporal model for mobile autonomous robots operating in human populated environments. Our method aims to model periodic temporal patterns of people presence, which are based on peoples' routines and habits. The core idea is to project the time onto a set of wrapped dimensions that represent the periodicities of people presence. Extending a 2D spatial model with this multi-dimensional representation of time results in a memory efficient spatio-temporal model. This model is capable of long-term predictions of human presence, allowing mobile robots to schedule their services better and to plan their paths. The experimental evaluation, performed over datasets gathered by a robot over a period of several weeks, indicates that the proposed method achieves more accurate predictions than the previous state of the art used in robotics.}
    }
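
    The wrapped-time idea is compact enough to reproduce directly: each assumed periodicity (here a day and a week) contributes one circle, so moments that recur together sit close in feature space. A minimal sketch, not the authors' implementation:

    import numpy as np

    DAY = 24 * 3600.0
    WEEK = 7 * DAY

    def wrap_time(t, periods=(DAY, WEEK)):
        # Project a UNIX timestamp onto one circle per period, so e.g.
        # 23:59 and 00:01 end up neighbours instead of a day apart.
        feats = []
        for p in periods:
            phase = 2.0 * np.pi * (t % p) / p
            feats.extend([np.cos(phase), np.sin(phase)])
        return np.array(feats)

    Appending these components to the 2D spatial coordinates yields the memory-efficient spatio-temporal representation the abstract describes; any density or clustering model can then consume it.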
  • E. Rodias, R. Berruto, D. Bochtis, A. Sopegno, and P. Busato, “Green, yellow, and woody biomass supply-chain management: a review,” Energies, vol. 12, iss. 15, p. 3020, 2019. doi:10.3390/en12153020
    [BibTeX] [Abstract] [Download PDF]

    Various sources of biomass contribute significantly to energy production globally, given a series of constraints in its primary production. Green biomass sources (such as perennial grasses), yellow biomass sources (such as crop residues), and woody biomass sources (such as willow) represent the three pillars in biomass production by crops. In this paper, we conducted a comprehensive review of research studies targeted at advancements in biomass supply-chain management in connection to these three types of biomass sources. A framework that classifies the works in problem-based and methodology-based approaches was followed. Results show the use of modern technological means and tools in current management-related problems. From the review, it is evident that the presented up-to-date trends on biomass supply-chain management and the potential for future advanced approach applications play a crucial role in the business and sustainability efficiency of the biomass supply chain.

    @article{lincoln39230,
    volume = {12},
    number = {15},
    month = {August},
    author = {Efthymios Rodias and Remigio Berruto and Dionysis Bochtis and Alessandro Sopegno and Patrizia Busato},
    title = {Green, Yellow, and Woody Biomass Supply-Chain Management: A Review},
    year = {2019},
    journal = {Energies},
    doi = {10.3390/en12153020},
    pages = {3020},
    url = {https://eprints.lincoln.ac.uk/id/eprint/39230/},
    abstract = {Various sources of biomass contribute significantly to energy production globally, given a series of constraints in its primary production. Green biomass sources (such as perennial grasses), yellow biomass sources (such as crop residues), and woody biomass sources (such as willow) represent the three pillars in biomass production by crops. In this paper, we conducted a comprehensive review of research studies targeted at advancements in biomass supply-chain management in connection to these three types of biomass sources. A framework that classifies the works in problem-based and methodology-based approaches was followed. Results show the use of modern technological means and tools in current management-related problems. From the review, it is evident that the presented up-to-date trends on biomass supply-chain management and the potential for future advanced approach applications play a crucial role in the business and sustainability efficiency of the biomass supply chain.}
    }
  • Q. Fu, H. Wang, C. Hu, and S. Yue, “Towards computational models and applications of insect visual systems for motion perception: a review,” Artificial life, vol. 25, iss. 3, p. 263–311, 2019. doi:10.1162/artl_a_00297
    [BibTeX] [Abstract] [Download PDF]

    Motion perception is a critical capability determining a variety of aspects of insects' life, including avoiding predators, foraging and so forth. A good number of motion detectors have been identified in the insects' visual pathways. Computational modelling of these motion detectors has not only been providing effective solutions to artificial intelligence, but also benefiting the understanding of complicated biological visual systems. These biological mechanisms, through millions of years of evolutionary development, have formed solid modules for constructing dynamic vision systems for future intelligent machines. This article reviews the computational motion perception models originating from biological research of insects' visual systems in the literature. These motion perception models or neural networks comprise the looming sensitive neuronal models of lobula giant movement detectors (LGMDs) in locusts, the translation sensitive neural systems of direction selective neurons (DSNs) in fruit flies, bees and locusts, as well as the small target motion detectors (STMDs) in dragonflies and hover flies. We also review the applications of these models to robots and vehicles. Through these modelling studies, we summarise the methodologies that generate different direction and size selectivity in motion perception. Finally, we discuss multiple systems integration and hardware realisation of these bio-inspired motion perception models.

    @article{lincoln35584,
    volume = {25},
    number = {3},
    month = {August},
    author = {Qinbing Fu and Hongxin Wang and Cheng Hu and Shigang Yue},
    title = {Towards Computational Models and Applications of Insect Visual Systems for Motion Perception: A Review},
    publisher = {MIT Press},
    year = {2019},
    journal = {Artificial life},
    doi = {10.1162/artl\_a\_00297},
    pages = {263--311},
    url = {https://eprints.lincoln.ac.uk/id/eprint/35584/},
    abstract = {Motion perception is a critical capability determining a variety of aspects of insects' life, including avoiding predators, foraging and so forth. A good number of motion detectors have been identified in the insects' visual pathways. Computational modelling of these motion detectors has not only been providing effective solutions to artificial intelligence, but also benefiting the understanding of complicated biological visual systems. These biological mechanisms, through millions of years of evolutionary development, have formed solid modules for constructing dynamic vision systems for future intelligent machines. This article reviews the computational motion perception models originating from biological research of insects' visual systems in the literature. These motion perception models or neural networks comprise the looming sensitive neuronal models of lobula giant movement detectors (LGMDs) in locusts, the translation sensitive neural systems of direction selective neurons (DSNs) in fruit flies, bees and locusts, as well as the small target motion detectors (STMDs) in dragonflies and hover flies. We also review the applications of these models to robots and vehicles. Through these modelling studies, we summarise the methodologies that generate different direction and size selectivity in motion perception. Finally, we discuss multiple systems integration and hardware realisation of these bio-inspired motion perception models.}
    }
  • K. Goher and S. Fadlallah, “Control of a two-wheeled machine with two-directions handling mechanism using pid and pd-flc algorithms,” International journal of automation and computing, vol. 16, iss. 4, p. 511–533, 2019. doi:10.1007/s11633-019-1172-0
    [BibTeX] [Abstract] [Download PDF]

    This paper presents a novel five degrees of freedom (DOF) two-wheeled robotic machine (TWRM) that delivers solutions for both industrial and service robotic applications by enlarging the vehicle's workspace and increasing its flexibility. Designing a two-wheeled robot with five degrees of freedom creates a high challenge for the control, therefore the modelling and design of such a robot should be precise, with a uniform distribution of mass over the robot and the actuators. By employing the Lagrangian modelling approach, the TWRM's mathematical model is derived and simulated in Matlab/Simulink®. For stabilizing the system's highly nonlinear model, two control approaches were developed and implemented: proportional-integral-derivative (PID) and fuzzy logic control (FLC) strategies. Considering multiple scenarios with different initial conditions, the proposed control strategies' performance has been assessed.

    @article{lincoln35606,
    volume = {16},
    number = {4},
    month = {August},
    author = {Khaled Goher and Sulaiman Fadlallah},
    title = {Control of a Two-wheeled Machine with Two-directions Handling Mechanism Using PID and PD-FLC Algorithms},
    publisher = {Springer},
    year = {2019},
    journal = {International Journal of Automation and Computing},
    doi = {10.1007/s11633-019-1172-0},
    pages = {511--533},
    url = {https://eprints.lincoln.ac.uk/id/eprint/35606/},
    abstract = {This paper presents a novel five degrees of freedom (DOF) two-wheeled robotic machine (TWRM) that delivers solutions for both industrial and service robotic applications by enlarging the vehicle's workspace and increasing its flexibility. Designing a two-wheeled robot with five degrees of freedom creates a high challenge for the control, therefore the modelling and design of such a robot should be precise, with a uniform distribution of mass over the robot and the actuators. By employing the Lagrangian modelling approach, the TWRM's mathematical model is derived and simulated in Matlab/Simulink®. For stabilizing the system's highly nonlinear model, two control approaches were developed and implemented: proportional-integral-derivative (PID) and fuzzy logic control (FLC) strategies. Considering multiple scenarios with different initial conditions, the proposed control strategies' performance has been assessed.}
    }
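
    Of the two strategies compared, the PID term is the standard textbook loop; a discrete-time sketch with illustrative gains (the paper's tuned values are not reproduced here):

    class PID:
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def step(self, setpoint, measurement):
            # u = Kp*e + Ki*integral(e) + Kd*de/dt at sample interval dt.
            error = setpoint - measurement
            self.integral += error * self.dt
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    # e.g. a tilt-stabilising loop: pid = PID(kp=12.0, ki=0.5, kd=1.5, dt=0.01)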
  • B. Ugurlu, M. Acer, D. E. Barkana, I. Gocek, A. Kucukyilmaz, Y. Z. Arslan, H. Basturk, E. Samur, E. Ugur, R. Unal, and O. Bebek, “A soft+rigid hybrid exoskeleton concept in scissors-pendulum mode: a suit for human state sensing and an exoskeleton for assistance,” in 2019 ieee 16th international conference on rehabilitation robotics (icorr), 2019, p. 518–523. doi:10.1109/ICORR.2019.8779394
    [BibTeX] [Abstract] [Download PDF]

    In this paper, we present a novel concept that can enable the human aware control of exoskeletons through the integration of a soft suit and a robotic exoskeleton. Unlike the state-of-the-art exoskeleton controllers which mostly rely on lumped human-robot models, the proposed concept makes use of the independent state measurements concerning the human user and the robot. The ability to observe the human state independently is the key factor in this approach. In order to realize such a system from the hardware point of view, we propose a system integration frame that combines a soft suit for human state measurement and a rigid exoskeleton for human assistance. We identify the technological requirements that are necessary for the realization of such a system with a particular emphasis on soft suit integration. We also propose a template model, named scissor pendulum, that may encapsulate the dominant dynamics of the human-robot combined model to synthesize a controller for human state regulation. A series of simulation experiments were conducted to check the controller performance. As a result, satisfactory human state regulation was attained, adequately confirming that the proposed system could potentially improve exoskeleton-aided applications.

    @inproceedings{lincoln36661,
    month = {July},
    author = {Barkan Ugurlu and Merve Acer and Duygun E. Barkana and Ikilem Gocek and Ayse Kucukyilmaz and Yunus Z. Arslan and Halil Basturk and Evren Samur and Emre Ugur and Ramazan Unal and Ozkan Bebek},
    booktitle = {2019 IEEE 16th International Conference on Rehabilitation Robotics (ICORR)},
    title = {A Soft+Rigid Hybrid Exoskeleton Concept in Scissors-Pendulum Mode: A Suit for Human State Sensing and an Exoskeleton for Assistance},
    publisher = {IEEE},
    doi = {10.1109/ICORR.2019.8779394},
    pages = {518--523},
    year = {2019},
    url = {https://eprints.lincoln.ac.uk/id/eprint/36661/},
    abstract = {In this paper, we present a novel concept that can enable the human aware control of exoskeletons through the integration of a soft suit and a robotic exoskeleton. Unlike the state-of-the-art exoskeleton controllers which mostly rely on lumped human-robot models, the proposed concept makes use of the independent state measurements concerning the human user and the robot. The ability to observe the human state independently is the key factor in this approach. In order to realize such a system from the hardware point of view, we propose a system integration frame that combines a soft suit for human state measurement and a rigid exoskeleton for human assistance. We identify the technological requirements that are necessary for the realization of such a system with a particular emphasis on soft suit integration. We also propose a template model, named scissor pendulum, that may encapsulate the dominant dynamics of the human-robot combined model to synthesize a controller for human state regulation. A series of simulation experiments were conducted to check the controller performance. As a result, satisfactory human state regulation was attained, adequately confirming that the proposed system could potentially improve exoskeleton-aided applications.}
    }
  • S. Sari and A. Kucukyilmaz, “Vr-fit: walking-in-place locomotion with real time step detection for vr-enabled exercise,” in Mobile web and intelligent information systems, 2019, p. 255–266. doi:10.1007/978-3-030-27192-3_20
    [BibTeX] [Abstract] [Download PDF]

    With recent advances in mobile and wearable technologies, virtual reality (VR) found many applications in daily use. Today, a mobile device can be converted into a low-cost immersive VR kit thanks to the availability of do-it-yourself viewers in the shape of simple cardboards and compatible software for 3D rendering. These applications involve interacting with stationary scenes or moving in between spaces within a VR environment. VR locomotion can be enabled through a variety of methods, such as head movement tracking, joystick-triggered motion and through mapping natural movements to translate to virtual locomotion. In this study, we implemented a walk-in-place (WIP) locomotion method for a VR-enabled exercise application. We investigate the utility of WIP for exercise purposes, and compare it with joystick-based locomotion in terms of step performance and subjective qualities of the activity, such as enjoyment, encouragement for exercise and ease of use. Our technique uses vertical accelerometer data to estimate steps taken during walking or running, and locomotes the user's avatar accordingly in virtual space. We evaluated our technique in a controlled experimental study with 12 people. Results indicate that the way users control the simulated locomotion affects how they interact with the VR simulation, and influence the subjective sense of immersion and the perceived quality of the interaction. In particular, WIP encourages users to move further, and creates a more enjoyable and interesting experience in comparison to joystick-based navigation.

    @inproceedings{lincoln36870,
    volume = {11673},
    month = {July},
    author = {Sercan Sari and Ayse Kucukyilmaz},
    booktitle = {Mobile Web and Intelligent Information Systems},
    title = {VR-Fit: Walking-in-Place Locomotion with Real Time Step Detection for VR-Enabled Exercise},
    publisher = {Springer},
    year = {2019},
    doi = {10.1007/978-3-030-27192-3\_20},
    pages = {255--266},
    url = {https://eprints.lincoln.ac.uk/id/eprint/36870/},
    abstract = {With recent advances in mobile and wearable technologies, virtual reality (VR) found many applications in daily use. Today, a mobile device can be converted into a low-cost immersive VR kit thanks to the availability of do-it-yourself viewers in the shape of simple cardboards and compatible software for 3D rendering. These applications involve interacting with stationary scenes or moving in between spaces within a VR environment. VR locomotion can be enabled through a variety of methods, such as head movement tracking, joystick-triggered motion and through mapping natural movements to translate to virtual locomotion. In this study, we implemented a walk-in-place (WIP) locomotion method for a VR-enabled exercise application. We investigate the utility of WIP for exercise purposes, and compare it with joystick-based locomotion in terms of step performance and subjective qualities of the activity, such as enjoyment, encouragement for exercise and ease of use. Our technique uses vertical accelerometer data to estimate steps taken during walking or running, and locomotes the user's avatar accordingly in virtual space. We evaluated our technique in a controlled experimental study with 12 people. Results indicate that the way users control the simulated locomotion affects how they interact with the VR simulation, and influence the subjective sense of immersion and the perceived quality of the interaction. In particular, WIP encourages users to move further, and creates a more enjoyable and interesting experience in comparison to joystick-based navigation.}
    }
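
    The step detector described, thresholding the vertical accelerometer signal, can be sketched as a rising-edge counter with a refractory period; the threshold and timings below are guesses for illustration, not the calibrated values from the study:

    def count_steps(acc_z, dt=0.02, threshold=11.5, refractory=0.3):
        # Count a step on each upward crossing of `threshold` (m/s^2),
        # then ignore further crossings for `refractory` seconds.
        steps, lockout, prev = 0, 0.0, acc_z[0]
        for a in acc_z[1:]:
            lockout = max(0.0, lockout - dt)
            if prev < threshold <= a and lockout == 0.0:
                steps += 1
                lockout = refractory
            prev = a
        return steps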
  • N. Tsolakis, D. Bechtsis, and D. Bochtis, “Agros: a robot operating system based emulation tool for agricultural robotics,” Agronomy, vol. 9, iss. 7, p. 403, 2019. doi:10.3390/agronomy9070403
    [BibTeX] [Abstract] [Download PDF]

    This research aims to develop a farm management emulation tool that enables agrifood producers to effectively introduce advanced digital technologies, like intelligent and autonomous unmanned ground vehicles (UGVs), in real-world field operations. To that end, we first provide a critical taxonomy of studies investigating agricultural robotic systems with regard to: (i) the analysis approach, i.e., simulation, emulation, real-world implementation; (ii) farming operations; and (iii) the farming type. Our analysis demonstrates that simulation and emulation modelling have been extensively applied to study advanced agricultural machinery while the majority of the extant research efforts focuses on harvesting/picking/mowing and fertilizing/spraying activities; most studies consider a generic agricultural layout. Thereafter, we developed AgROS, an emulation tool based on the Robot Operating System, which could be used for assessing the efficiency of real-world robot systems in customized fields. AgROS allows farmers to select their actual field from a map layout, import the landscape of the field, add characteristics of the actual agricultural layout (e.g., trees, static objects), select an agricultural robot from a predefined list of commercial systems, import the selected UGV into the emulation environment, and test the robot's performance in a quasi-real-world environment. AgROS supports farmers in the ex-ante analysis and performance evaluation of robotized precision farming operations while laying the foundations for realizing 'digital twins' in agriculture.

    @article{lincoln39229,
    volume = {9},
    number = {7},
    month = {July},
    author = {Naoum Tsolakis and Dimitrios Bechtsis and Dionysis Bochtis},
    title = {AgROS: A Robot Operating System Based Emulation Tool for Agricultural Robotics},
    year = {2019},
    journal = {Agronomy},
    doi = {10.3390/agronomy9070403},
    pages = {403},
    url = {https://eprints.lincoln.ac.uk/id/eprint/39229/},
    abstract = {This research aims to develop a farm management emulation tool that enables agrifood producers to effectively introduce advanced digital technologies, like intelligent and autonomous unmanned ground vehicles (UGVs), in real-world field operations. To that end, we first provide a critical taxonomy of studies investigating agricultural robotic systems with regard to: (i) the analysis approach, i.e., simulation, emulation, real-world implementation; (ii) farming operations; and (iii) the farming type. Our analysis demonstrates that simulation and emulation modelling have been extensively applied to study advanced agricultural machinery while the majority of the extant research efforts focuses on harvesting/picking/mowing and fertilizing/spraying activities; most studies consider a generic agricultural layout. Thereafter, we developed AgROS, an emulation tool based on the Robot Operating System, which could be used for assessing the efficiency of real-world robot systems in customized fields. AgROS allows farmers to select their actual field from a map layout, import the landscape of the field, add characteristics of the actual agricultural layout (e.g., trees, static objects), select an agricultural robot from a predefined list of commercial systems, import the selected UGV into the emulation environment, and test the robot's performance in a quasi-real-world environment. AgROS supports farmers in the ex-ante analysis and performance evaluation of robotized precision farming operations while laying the foundations for realizing 'digital twins' in agriculture.}
    }
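
    Because AgROS is built on the Robot Operating System, a UGV controller under test talks to it through ordinary ROS topics. A minimal, hypothetical rospy node (the topic name and speed are assumptions) of the kind a farmer could evaluate in the emulator:

    import rospy
    from geometry_msgs.msg import Twist

    def drive_row():
        # Drive the emulated UGV forward along a crop row at constant speed.
        rospy.init_node('row_driver')
        pub = rospy.Publisher('/cmd_vel', Twist, queue_size=10)
        rate = rospy.Rate(10)        # 10 Hz command loop
        cmd = Twist()
        cmd.linear.x = 0.5           # assumed forward speed, m/s
        while not rospy.is_shutdown():
            pub.publish(cmd)
            rate.sleep()

    if __name__ == '__main__':
        drive_row()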
  • H. Wang, J. Peng, Q. Fu, H. Wang, and S. Yue, “Visual cue integration for small target motion detection in natural cluttered backgrounds,” in The 2019 international joint conference on neural networks (ijcnn), 2019.
    [BibTeX] [Abstract] [Download PDF]

    The robust detection of small targets against cluttered backgrounds is important for future artificial visual systems in searching and tracking applications. The insects' visual systems have demonstrated excellent ability to avoid predators, find prey or identify conspecifics, which always appear as small dim speckles in the visual field. Building a computational model of the insects' visual pathways could provide effective solutions to detect small moving targets. Although a few visual system models have been proposed, they only make use of small-field visual features for motion detection and their detection results often contain a number of false positives. To address this issue, we develop a new visual system model for small target motion detection against cluttered moving backgrounds. Compared to the existing models, the small-field and wide-field visual features are separately extracted by two motion-sensitive neurons to detect small target motion and background motion. These two types of motion information are further integrated to filter out false positives. Extensive experiments showed that the proposed model can outperform the existing models in terms of detection rates.

    @inproceedings{lincoln35684,
    booktitle = {The 2019 International Joint Conference on Neural Networks (IJCNN)},
    month = {July},
    title = {Visual Cue Integration for Small Target Motion Detection in Natural Cluttered Backgrounds},
    author = {Hongxin Wang and Jigen Peng and Qinbing Fu and Huatian Wang and Shigang Yue},
    publisher = {IEEE},
    year = {2019},
    url = {https://eprints.lincoln.ac.uk/id/eprint/35684/},
    abstract = {The robust detection of small targets against cluttered backgrounds is important for future artificial visual systems in searching and tracking applications. The insects' visual systems have demonstrated excellent ability to avoid predators, find prey or identify conspecifics, which always appear as small dim speckles in the visual field. Building a computational model of the insects' visual pathways could provide effective solutions to detect small moving targets. Although a few visual system models have been proposed, they only make use of small-field visual features for motion detection and their detection results often contain a number of false positives. To address this issue, we develop a new visual system model for small target motion detection against cluttered moving backgrounds. Compared to the existing models, the small-field and wide-field visual features are separately extracted by two motion-sensitive neurons to detect small target motion and background motion. These two types of motion information are further integrated to filter out false positives. Extensive experiments showed that the proposed model can outperform the existing models in terms of detection rates.}
    }
  • H. Wang, Q. Fu, H. Wang, J. Peng, P. Baxter, C. Hu, and S. Yue, “Angular velocity estimation of image motion mimicking the honeybee tunnel centring behaviour,” in The 2019 international joint conference on neural networks, 2019.
    [BibTeX] [Abstract] [Download PDF]

    Insects use visual information to estimate the angular velocity of retinal image motion, which determines a variety of flight behaviours including speed regulation, tunnel centring and visual navigation. For angular velocity estimation, honeybees show large spatial-independence against visual stimuli, whereas previous models have not fulfilled such an ability. To address this issue, we propose a bio-plausible model for estimating the image motion velocity based on behavioural experiments of the honeybee flying through patterned tunnels. The proposed model contains mainly three parts: the texture estimation layer for spatial information extraction, the delay-and-correlate layer for temporal information extraction and the decoding layer for angular velocity estimation. This model produces responses that are largely independent of the spatial frequency in grating experiments, and it has been implemented in a virtual bee for tunnel centring simulations. The results coincide with both electro-physiological neuron spike and behavioural path recordings, which indicates our proposed method provides a better explanation of the honeybee's image motion detection mechanism guiding the tunnel centring behaviour.

    @inproceedings{lincoln35685,
    booktitle = {The 2019 International Joint Conference on Neural Networks},
    month = {July},
    title = {Angular Velocity Estimation of Image Motion Mimicking the Honeybee Tunnel Centring Behaviour},
    author = {Huatian Wang and Qinbing Fu and Hongxin Wang and Jigen Peng and Paul Baxter and Cheng Hu and Shigang Yue},
    publisher = {IEEE},
    year = {2019},
    url = {https://eprints.lincoln.ac.uk/id/eprint/35685/},
    abstract = {Insects use visual information to estimate the angular velocity of retinal image motion, which determines a variety of flight behaviours including speed regulation, tunnel centring and visual navigation. For angular velocity estimation, honeybees show large spatial-independence against visual stimuli, whereas previous models have not fulfilled such an ability. To address this issue, we propose a bio-plausible model for estimating the image motion velocity based on behavioural experiments of the honeybee flying through patterned tunnels. The proposed model contains mainly three parts: the texture estimation layer for spatial information extraction, the delay-and-correlate layer for temporal information extraction and the decoding layer for angular velocity estimation. This model produces responses that are largely independent of the spatial frequency in grating experiments, and it has been implemented in a virtual bee for tunnel centring simulations. The results coincide with both electro-physiological neuron spike and behavioural path recordings, which indicates our proposed method provides a better explanation of the honeybee's image motion detection mechanism guiding the tunnel centring behaviour.}
    }
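
    The delay-and-correlate layer follows the classic Hassenstein-Reichardt scheme: each photoreceptor's delayed response is multiplied with its neighbour's current one, and the mirrored product is subtracted so the sign encodes direction. A toy one-dimensional sketch (delay and spacing are illustrative, not the paper's parameters):

    import numpy as np

    def reichardt(signals, delay=2):
        # signals: array of shape (time, photoreceptors).
        s = np.asarray(signals, dtype=float)
        d = np.roll(s, delay, axis=0)      # temporally delayed copy
        d[:delay] = 0.0                    # nothing before stimulus onset
        # Correlate each delayed input with its right neighbour, minus mirror:
        out = d[:, :-1] * s[:, 1:] - s[:, :-1] * d[:, 1:]
        return out.mean(axis=1)            # + rightward, - leftward motion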
  • M. F. Carmona, T. Parekh, and M. Hanheide, “Making the case for human-aware navigation in warehouses,” in Taros 2019: towards autonomous robotic systems, 2019, p. 449–453. doi:10.1007/978-3-030-25332-5_38
    [BibTeX] [Abstract] [Download PDF]

    This work addresses the performance of several local planners for the navigation of autonomous pallet trucks in the presence of humans in a simulated warehouse, as well as a complementary approach developed within the ILIAD project. Our focus is to stress the open problem of safe manoeuvrability of pallet trucks in the presence of moving humans. We propose a variation of the ROS navigation stack that includes a model of human-robot interaction in the planning process.

    @inproceedings{lincoln37347,
    month = {July},
    author = {Manuel Fernandez Carmona and Tejas Parekh and Marc Hanheide},
    booktitle = {TAROS 2019: Towards Autonomous Robotic Systems},
    title = {Making the Case for Human-Aware Navigation in Warehouses},
    publisher = {Springer, Cham},
    doi = {10.1007/978-3-030-25332-5\_38},
    pages = {449--453},
    year = {2019},
    url = {https://eprints.lincoln.ac.uk/id/eprint/37347/},
    abstract = {This work addresses the performance of several local planners for the navigation of autonomous pallet trucks in the presence of humans in a simulated warehouse, as well as a complementary approach developed within the ILIAD project. Our focus is to stress the open problem of safe manoeuvrability of pallet trucks in the presence of moving humans. We propose a variation of the ROS navigation stack that includes a model of human-robot interaction in the planning process.}
    }
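
    One common way to make a local planner human-aware is to inflate the costmap around each detected person; a sketch of such a layer follows (the Gaussian shape and parameters are assumptions, not the ILIAD interaction model):

    import numpy as np

    def human_cost_layer(shape, humans, res=0.05, sigma=0.6, peak=100.0):
        # Gaussian cost bump around each human position (in metres), so the
        # planner prefers trajectories keeping a comfortable distance.
        ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
        layer = np.zeros(shape)
        for hx, hy in humans:
            d2 = (xs * res - hx) ** 2 + (ys * res - hy) ** 2
            layer = np.maximum(layer, peak * np.exp(-d2 / (2 * sigma ** 2)))
        return layer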
  • H. Cuayahuitl, D. Lee, S. Ryu, S. Choi, I. Hwang, and J. Kim, “Deep reinforcement learning for chatbots using clustered actions and human-likeness rewards,” in International joint conference on neural networks (ijcnn), 2019. doi:10.1109/IJCNN.2019.8852376
    [BibTeX] [Abstract] [Download PDF]

    Training chatbots using the reinforcement learning paradigm is challenging due to high-dimensional states, infinite action spaces and the difficulty in specifying the reward function. We address such problems using clustered actions instead of infinite actions, and a simple but promising reward function based on human-likeness scores derived from human-human dialogue data. We train Deep Reinforcement Learning (DRL) agents using chitchat data in raw text, without any manual annotations. Experimental results using different splits of training data report the following. First, that our agents learn reasonable policies in the environments they get familiarised with, but their performance drops substantially when they are exposed to a test set of unseen dialogues. Second, that the choice of sentence embedding size between 100 and 300 dimensions is not significantly different on test data. Third, that our proposed human-likeness rewards are reasonable for training chatbots as long as they use lengthy dialogue histories of at least 10 sentences.

    @inproceedings{lincoln35954,
    booktitle = {International Joint Conference on Neural Networks (IJCNN)},
    month = {July},
    title = {Deep Reinforcement Learning for Chatbots Using Clustered Actions and Human-Likeness Rewards},
    author = {Heriberto Cuayahuitl and Donghyeon Lee and Seonghan Ryu and Sungja Choi and Inchul Hwang and Jihie Kim},
    publisher = {IEEE},
    year = {2019},
    doi = {10.1109/IJCNN.2019.8852376},
    url = {https://eprints.lincoln.ac.uk/id/eprint/35954/},
    abstract = {Training chatbots using the reinforcement learning paradigm is challenging due to high-dimensional states, infinite action spaces and the difficulty in specifying the reward function. We address such problems using clustered actions instead of infinite actions, and a simple but promising reward function based on human-likeness scores derived from human-human dialogue data. We train Deep Reinforcement Learning (DRL) agents using chitchat data in raw text, without any manual annotations. Experimental results using different splits of training data report the following. First, that our agents learn reasonable policies in the environments they get familiarised with, but their performance drops substantially when they are exposed to a test set of unseen dialogues. Second, that the choice of sentence embedding size between 100 and 300 dimensions is not significantly different on test data. Third, that our proposed human-likeness rewards are reasonable for training chatbots as long as they use lengthy dialogue histories of at least 10 sentences.}
    }
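
    The clustered-action idea reduces an unbounded response space to a discrete set the agent can handle: embed candidate sentences, cluster them, and let each centroid stand for one action. A minimal sketch with placeholder data (cluster count and embedding size are assumptions within the paper's reported ranges):

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    embeddings = rng.normal(size=(10000, 100))   # placeholder sentence embeddings

    # 100 centroids stand in for the infinite space of surface realisations.
    kmeans = KMeans(n_clusters=100, n_init=10, random_state=0).fit(embeddings)

    def action_of(sentence_embedding):
        # The DRL agent picks among 100 cluster ids, not raw sentences.
        return int(kmeans.predict(sentence_embedding.reshape(1, -1))[0])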
  • X. Sun, T. Liu, C. Hu, Q. Fu, and S. Yue, “ColCOSΦ: a multiple pheromone communication system for swarm robotics and social insects research,” in The 2019 ieee international conference on advanced robotics and mechatronics (icarm), 2019.
    [BibTeX] [Abstract] [Download PDF]

    In the last few decades we have witnessed how the pheromone of social insects has become a rich inspiration source of swarm robotics. By utilising the virtual pheromone in physical swarm robot systems to coordinate individuals and realise direct/indirect inter-robot communications like the social insects, stigmergic behaviour has emerged. However, many studies only take one single pheromone into account in solving swarm problems, which is not the case in real insects. In the real social insect world, diverse behaviours, complex collective performances and flexible transition from one state to another are guided by different kinds of pheromones and their interactions. Therefore, whether a multiple pheromone based strategy can inspire swarm robotics research, and inversely how the performances of swarm robots controlled by multiple pheromones bring inspirations to explain the social insects' behaviours, will become an interesting question. Thus, to provide a reliable system to undertake the multiple pheromone study, in this paper we specifically proposed and realised a multiple pheromone communication system called ColCOSΦ. This system consists of a virtual pheromone sub-system, wherein the multiple pheromone is represented by a colour image displayed on a screen, and a micro-robot platform designed for swarm robotics applications. Two case studies are undertaken to verify the effectiveness of this system: one is multiple pheromone based ant foraging, and the other is the interaction of aggregation and alarm pheromones. The experimental results demonstrate the feasibility of ColCOSΦ and its great potential in directing swarm robotics and social insects research.

    @inproceedings{lincoln36187,
    booktitle = {The 2019 IEEE International Conference on Advanced Robotics and Mechatronics (ICARM)},
    month = {July},
    title = {ColCOS{\ensuremath{\Phi}}: A Multiple Pheromone Communication System for Swarm Robotics and Social Insects Research},
    author = {Xuelong Sun and Tian Liu and Cheng Hu and Qinbing Fu and Shigang Yue},
    publisher = {IEEE},
    year = {2019},
    url = {https://eprints.lincoln.ac.uk/id/eprint/36187/},
    abstract = {In the last few decades we have witnessed how the pheromone of social insects has become a rich inspiration source of swarm robotics. By utilising the virtual pheromone in physical swarm robot systems to coordinate individuals and realise direct/indirect inter-robot communications like the social insects, stigmergic behaviour has emerged. However, many studies only take one single pheromone into account in solving swarm problems, which is not the case in real insects. In the real social insect world, diverse behaviours, complex collective performances and flexible transition from one state to another are guided by different kinds of pheromones and their interactions. Therefore, whether a multiple pheromone based strategy can inspire swarm robotics research, and inversely how the performances of swarm robots controlled by multiple pheromones bring inspirations to explain the social insects' behaviours, will become an interesting question. Thus, to provide a reliable system to undertake the multiple pheromone study, in this paper we specifically proposed and realised a multiple pheromone communication system called ColCOS{\ensuremath{\Phi}}. This system consists of a virtual pheromone sub-system, wherein the multiple pheromone is represented by a colour image displayed on a screen, and a micro-robot platform designed for swarm robotics applications. Two case studies are undertaken to verify the effectiveness of this system: one is multiple pheromone based ant foraging, and the other is the interaction of aggregation and alarm pheromones. The experimental results demonstrate the feasibility of ColCOS{\ensuremath{\Phi}} and its great potential in directing swarm robotics and social insects research.}
    }
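
    At its core the virtual pheromone sub-system is an image that robots write into and sense from, with evaporation and diffusion applied every tick. A toy single-pheromone update (rates are assumptions; ColCOSΦ itself handles several interacting pheromones):

    import numpy as np

    GRID = np.zeros((480, 640))      # one colour channel of the screen image

    def deposit(x, y, amount=1.0):
        # A robot at pixel (x, y) lays pheromone into the shared image.
        GRID[y, x] += amount

    def tick(evap=0.01, diff=0.1):
        # Evaporate, then diffuse to the 4-neighbourhood (explicit scheme).
        global GRID
        GRID *= (1.0 - evap)
        neighbours = (np.roll(GRID, 1, 0) + np.roll(GRID, -1, 0) +
                      np.roll(GRID, 1, 1) + np.roll(GRID, -1, 1))
        GRID = (1.0 - diff) * GRID + (diff / 4.0) * neighbours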
  • J. Koleosho and C. Saaj, “System design and control of a di-wheel rover,” in 20th annual conference, taros 2019, 2019.
    [BibTeX] [Download PDF]
    @inproceedings{lincoln39422,
    booktitle = {20th Annual Conference, TAROS 2019},
    month = {July},
    title = {System Design and Control of a Di-Wheel Rover},
    author = {John Koleosho and Chakravarthini Saaj},
    publisher = {Springer},
    year = {2019},
    url = {https://eprints.lincoln.ac.uk/id/eprint/39422/}
    }
  • H. Cao, P. G. Esteban, M. Bartlett, P. Baxter, T. Belpaeme, E. Billing, H. Cai, M. Coeckelbergh, C. Costescu, D. David, A. D. Beir, D. Hernandez, J. Kennedy, H. Liu, S. Matu, A. Mazel, A. Pandey, K. Richardson, E. Senft, S. Thill, G. V. de Perre, B. Vanderborght, D. Vernon, K. Wakanuma, H. Yu, X. Zhou, and T. Ziemke, “Robot-enhanced therapy: development and validation of supervised autonomous robotic system for autism spectrum disorders therapy,” Ieee robotics & automation magazine, vol. 26, iss. 2, p. 49–58, 2019. doi:10.1109/MRA.2019.2904121
    [BibTeX] [Abstract] [Download PDF]

    Robot-assisted therapy (RAT) offers potential advantages for improving the social skills of children with autism spectrum disorders (ASDs). This article provides an overview of the developed technology and clinical results of the EC-FP7-funded Development of Robot-Enhanced therapy for children with AutisM spectrum disorders (DREAM) project, which aims to develop the next level of RAT in both clinical and technological perspectives, commonly referred to as robot-enhanced therapy (RET). Within this project, a supervised autonomous robotic system is collaboratively developed by an interdisciplinary consortium including psychotherapists, cognitive scientists, roboticists, computer scientists, and ethicists, which allows robot control to exceed classical remote control methods, e.g., Wizard of Oz (WoZ), while ensuring safe and ethical robot behavior. Rigorous clinical studies are conducted to validate the efficacy of RET. Current results indicate that RET can obtain an equivalent performance compared to that of human standard therapy for children with ASDs. We also discuss the next steps of developing RET robotic systems.

    @article{lincoln36203,
    volume = {26},
    number = {2},
    month = {June},
    author = {Hoang-Long Cao and Pablo G. Esteban and Madeleine Bartlett and Paul Baxter and Tony Belpaeme and Erik Billing and Haibin Cai and Mark Coeckelbergh and Cristina Costescu and Daniel David and Albert De Beir and Daniel Hernandez and James Kennedy and Honghai Liu and Silviu Matu and Alexandre Mazel and Amit Pandey and Kathleen Richardson and Emmanuel Senft and Serge Thill and Greet Van de Perre and Bram Vanderborght and David Vernon and Kutoma Wakanuma and Hui Yu and Xiaolong Zhou and Tom Ziemke},
    title = {Robot-Enhanced Therapy: Development and Validation of Supervised Autonomous Robotic System for Autism Spectrum Disorders Therapy},
    publisher = {IEEE},
    year = {2019},
    journal = {IEEE Robotics \& Automation Magazine},
    doi = {10.1109/MRA.2019.2904121},
    pages = {49--58},
    url = {https://eprints.lincoln.ac.uk/id/eprint/36203/},
    abstract = {Robot-assisted therapy (RAT) offers potential advantages for improving the social skills of children with autism spectrum disorders (ASDs). This article provides an overview of the developed technology and clinical results of the EC-FP7-funded Development of Robot-Enhanced therapy for children with AutisM spectrum disorders (DREAM) project, which aims to develop the next level of RAT in both clinical and technological perspectives, commonly referred to as robot-enhanced therapy (RET). Within this project, a supervised autonomous robotic system is collaboratively developed by an interdisciplinary consortium including psychotherapists, cognitive scientists, roboticists, computer scientists, and ethicists, which allows robot control to exceed classical remote control methods, e.g., Wizard of Oz (WoZ), while ensuring safe and ethical robot behavior. Rigorous clinical studies are conducted to validate the efficacy of RET. Current results indicate that RET can obtain an equivalent performance compared to that of human standard therapy for children with ASDs. We also discuss the next steps of developing RET robotic systems.}
    }
  • A. Gabriel, S. Cosar, N. Bellotto, and P. Baxter, “A dataset for action recognition in the wild,” in Towards autonomous robotic systems, 2019, p. 362–374. doi:10.1007/978-3-030-23807-0_30
    [BibTeX] [Abstract] [Download PDF]

    The development of autonomous robots for agriculture depends on a successful approach to recognize user needs as well as datasets reflecting the characteristics of the domain. Available datasets for 3D Action Recognition generally feature controlled lighting and framing while recording subjects from the front. They mostly reflect good recording conditions and therefore fail to account for the highly variable conditions the robot would have to work with in the field, e.g. when providing in-field logistic support for human fruit pickers as in our scenario. Existing work on Intention Recognition mostly labels plans or actions as intentions, but neither of those fully capture the extent of human intent. In this work, we argue for a holistic view on human Intention Recognition and propose a set of recording conditions, gestures and behaviors that better reflect the environment and conditions an agricultural robot might find itself in. We demonstrate the utility of the dataset by means of evaluating two human detection methods: bounding boxes and skeleton extraction.

    @inproceedings{lincoln36395,
    volume = {11649},
    month = {June},
    author = {Alexander Gabriel and Serhan Cosar and Nicola Bellotto and Paul Baxter},
    booktitle = {Towards Autonomous Robotic Systems},
    title = {A Dataset for Action Recognition in the Wild},
    publisher = {Springer},
    year = {2019},
    doi = {10.1007/978-3-030-23807-0\_30},
    pages = {362--374},
    url = {https://eprints.lincoln.ac.uk/id/eprint/36395/},
    abstract = {The development of autonomous robots for agriculture depends on a successful approach to recognize user needs as well as datasets reflecting the characteristics of the domain. Available datasets for 3D Action Recognition generally feature controlled lighting and framing while recording subjects from the front. They mostly reflect good recording conditions and therefore fail to account for the highly variable conditions the robot would have to work with in the field, e.g. when providing in-field logistic support for human fruit pickers as in our scenario. Existing work on Intention Recognition mostly labels plans or actions as intentions, but neither of those fully capture the extent of human intent. In this work, we argue for a holistic view on human Intention Recognition and propose a set of recording conditions, gestures and behaviors that better reflect the environment and conditions an agricultural robot might find itself in. We demonstrate the utility of the dataset by means of evaluating two human detection methods: bounding boxes and skeleton extraction.}
    }
  • R. Akrour, J. Pajarinen, G. Neumann, and J. Peters, “Projections for approximate policy iteration algorithms,” in Proceedings of the international conference on machine learning (icml), 2019, p. 181–190.
    [BibTeX] [Abstract] [Download PDF]

    Approximate policy iteration is a class of reinforcement learning (RL) algorithms where the policy is encoded using a function approximator and which has been especially prominent in RL with continuous action spaces. In this class of RL algorithms, ensuring increase of the policy return during policy update often requires to constrain the change in action distribution. Several approximations exist in the literature to solve this constrained policy update problem. In this paper, we propose to improve over such solutions by introducing a set of projections that transform the constrained problem into an unconstrained one which is then solved by standard gradient descent. Using these projections, we empirically demonstrate that our approach can improve the policy update solution and the control over exploration of existing approximate policy iteration algorithms.

    @inproceedings{lincoln36285,
    booktitle = {Proceedings of the International Conference on Machine Learning (ICML)},
    month = {June},
    title = {Projections for Approximate Policy Iteration Algorithms},
    author = {R. Akrour and J. Pajarinen and Gerhard Neumann and J. Peters},
    publisher = {Proceedings of Machine Learning Research},
    year = {2019},
    pages = {181--190},
    url = {https://eprints.lincoln.ac.uk/id/eprint/36285/},
    abstract = {Approximate policy iteration is a class of reinforcement learning (RL) algorithms where the policy is encoded using a function approximator and which has been especially prominent in RL with continuous action spaces. In this class of RL algorithms, ensuring increase of the policy return during policy update often requires to constrain the change in action distribution. Several approximations exist in the literature to solve this constrained policy update problem. In this paper, we propose to improve over such solutions by introducing a set of projections that transform the constrained problem into an unconstrained one which is then solved by standard gradient descent. Using these projections, we empirically demonstrate that our approach can improve the policy update solution and the control over exploration of existing approximate policy iteration algorithms.}
    }
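    The key step in the abstract above, turning a KL-constrained policy update into an unconstrained one via a projection, can be illustrated on a one-dimensional Gaussian policy. The sketch below is a simplified stand-in for the paper's differentiable projection layers: it enforces KL(new || old) <= eps by bisecting along the straight line between the old and the candidate parameters (an assumed interpolation scheme, not the paper's closed-form operator):

        import numpy as np

        def kl_gauss(mu_q, var_q, mu_p, var_p):
            # KL(q || p) between two 1-D Gaussians, in closed form.
            return 0.5 * (np.log(var_p / var_q)
                          + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

        def project_to_kl_ball(mu_new, var_new, mu_old, var_old, eps=0.05):
            """Pull a candidate policy back inside the trust region of the old one."""
            if kl_gauss(mu_new, var_new, mu_old, var_old) <= eps:
                return mu_new, var_new
            lo, hi = 0.0, 1.0      # 0 = old policy, 1 = unconstrained candidate
            for _ in range(50):    # bisection: KL grows monotonically along this path
                t = 0.5 * (lo + hi)
                mu_t = mu_old + t * (mu_new - mu_old)
                var_t = var_old + t * (var_new - var_old)
                if kl_gauss(mu_t, var_t, mu_old, var_old) > eps:
                    hi = t
                else:
                    lo = t
            return mu_old + lo * (mu_new - mu_old), var_old + lo * (var_new - var_old)

        # An unconstrained gradient step proposes (mu, var); the projection keeps it trusted.
        print(project_to_kl_ball(mu_new=1.0, var_new=0.5, mu_old=0.0, var_old=1.0))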
  • P. Becker, H. Pandya, G. Gebhardt, C. Zhao, J. C. Taylor, and G. Neumann, “Recurrent kalman networks: factorized inference in high-dimensional deep feature spaces,” in Proceedings of the 36th international conference on machine learning, Long Beach, California, USA, 2019, p. 544–552.
    [BibTeX] [Abstract] [Download PDF]

    In order to integrate uncertainty estimates into deep time-series modelling, Kalman Filters (KFs) (Kalman et al., 1960) have been integrated with deep learning models, however, such approaches typically rely on approximate inference techniques such as variational inference which makes learning more complex and often less scalable due to approximation errors. We propose a new deep approach to Kalman filtering which can be learned directly in an end-to-end manner using backpropagation without additional approximations. Our approach uses a high-dimensional factorized latent state representation for which the Kalman updates simplify to scalar operations and thus avoids hard to backpropagate, computationally heavy and potentially unstable matrix inversions. Moreover, we use locally linear dynamic models to efficiently propagate the latent state to the next time step. The resulting network architecture, which we call Recurrent Kalman Network (RKN), can be used for any time-series data, similar to an LSTM (Hochreiter & Schmidhuber, 1997) but uses an explicit representation of uncertainty. As shown by our experiments, the RKN obtains much more accurate uncertainty estimates than an LSTM or Gated Recurrent Units (GRUs) (Cho et al., 2014) while also showing a slightly improved prediction performance and outperforms various recent generative models on an image imputation task.

    @inproceedings{lincoln36286,
    volume = {97},
    month = {June},
    author = {Philipp Becker and Harit Pandya and Gregor Gebhardt and Cheng Zhao and C. James Taylor and Gerhard Neumann},
    series = {Proceedings of Machine Learning Research},
    booktitle = {Proceedings of the 36th International Conference on Machine Learning},
    address = {Long Beach, California, USA},
    title = {Recurrent Kalman Networks: Factorized Inference in High-Dimensional Deep Feature Spaces},
    publisher = {Proceedings of Machine Learning Research},
    year = {2019},
    pages = {544--552},
    url = {https://eprints.lincoln.ac.uk/id/eprint/36286/},
    abstract = {In order to integrate uncertainty estimates into deep time-series modelling, Kalman Filters (KFs) (Kalman et al., 1960) have been integrated with deep learning models, however, such approaches typically rely on approximate inference techniques such as variational inference which makes learning more complex and often less scalable due to approximation errors. We propose a new deep approach to Kalman filtering which can be learned directly in an end-to-end manner using backpropagation without additional approximations. Our approach uses a high-dimensional factorized latent state representation for which the Kalman updates simplify to scalar operations and thus avoids hard to backpropagate, computationally heavy and potentially unstable matrix inversions. Moreover, we use locally linear dynamic models to efficiently propagate the latent state to the next time step. The resulting network architecture, which we call Recurrent Kalman Network (RKN), can be used for any time-series data, similar to an LSTM (Hochreiter \& Schmidhuber, 1997) but uses an explicit representation of uncertainty. As shown by our experiments, the RKN obtains much more accurate uncertainty estimates than an LSTM or Gated Recurrent Units (GRUs) (Cho et al., 2014) while also showing a slightly improved prediction performance and outperforms various recent generative models on an image imputation task.}
    }
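    The factorization trick described above, a diagonal latent covariance so that the Kalman update needs no matrix inversion, reduces to element-wise arithmetic. A minimal sketch (the real RKN adds a learned encoder, locally linear transition models and a more structured covariance; the dimensions and noise values here are arbitrary):

        import numpy as np

        def factorized_kalman_update(mu, var, obs_mean, obs_var):
            """Scalar Kalman update applied independently to each latent dimension.

            With a factorized state and an identity observation model the Kalman
            gain becomes an element-wise ratio, so the update is cheap and easy
            to backpropagate through: the property the RKN exploits.
            """
            gain = var / (var + obs_var)            # element-wise Kalman gain
            mu_post = mu + gain * (obs_mean - mu)   # posterior mean
            var_post = (1.0 - gain) * var           # posterior variance
            return mu_post, var_post

        # In the RKN the observation mean/variance come from a deep encoder;
        # random stand-ins are used here.
        rng = np.random.default_rng(0)
        mu, var = np.zeros(64), np.ones(64)         # 64-dim factorized latent state
        obs_mean, obs_var = rng.normal(size=64), np.full(64, 0.25)
        mu, var = factorized_kalman_update(mu, var, obs_mean, obs_var)
        print(mu[:3], var[:3])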
  • S. M. Mustaza, Y. Elsayed, C. Lekakou, C. Saaj, and J. Fras, “Dynamic modeling of fiber-reinforced soft manipulator: a visco-hyperelastic material-based continuum mechanics approach,” Soft robotics, vol. 6, iss. 3, p. 305–317, 2019. doi:10.1089/soro.2018.0032
    [BibTeX] [Abstract] [Download PDF]

    Robot-assisted surgery is gaining popularity worldwide and there is increasing scientific interest to explore the potential of soft continuum robots for minimally invasive surgery. However, the remote control of soft robots is much more challenging compared with their rigid counterparts. Accurate modeling of manipulator dynamics is vital to remotely control the diverse movement configurations and is particularly important for safe interaction with the operating environment. However, current dynamic models applied to soft manipulator systems are simplistic and empirical, which restricts the full potential of the new soft robots technology. Therefore, this article provides a new insight into the development of a nonlinear dynamic model for a soft continuum manipulator based on a material model. The continuum manipulator used in this study is treated as a composite material and a modified nonlinear Kelvin–Voigt material model is utilized to embody the visco-hyperelastic dynamics of soft silicone. The Lagrangian approach is applied to derive the equation of motion of the manipulator. Simulation and experimental results prove that this material modeling approach sufficiently captures the nonlinear time- and rate-dependent behavior of a soft manipulator. Material model-based closed-loop trajectory control was implemented to further validate the feasibility of the derived model and increase the performance of the overall system.

    @article{lincoln37436,
    volume = {6},
    number = {3},
    month = {June},
    author = {S.M. Mustaza and Y. Elsayed and C. Lekakou and C. Saaj and J. Fras},
    note = {cited By 1},
    title = {Dynamic modeling of fiber-reinforced soft manipulator: A visco-hyperelastic material-based continuum mechanics approach},
    publisher = {Mary Ann Liebert},
    year = {2019},
    journal = {Soft Robotics},
    doi = {10.1089/soro.2018.0032},
    pages = {305--317},
    url = {https://eprints.lincoln.ac.uk/id/eprint/37436/},
    abstract = {Robot-assisted surgery is gaining popularity worldwide and there is increasing scientific interest to explore the potential of soft continuum robots for minimally invasive surgery. However, the remote control of soft robots is much more challenging compared with their rigid counterparts. Accurate modeling of manipulator dynamics is vital to remotely control the diverse movement configurations and is particularly important for safe interaction with the operating environment. However, current dynamic models applied to soft manipulator systems are simplistic and empirical, which restricts the full potential of the new soft robots technology. Therefore, this article provides a new insight into the development of a nonlinear dynamic model for a soft continuum manipulator based on a material model. The continuum manipulator used in this study is treated as a composite material and a modified nonlinear Kelvin–Voigt material model is utilized to embody the visco-hyperelastic dynamics of soft silicone. The Lagrangian approach is applied to derive the equation of motion of the manipulator. Simulation and experimental results prove that this material modeling approach sufficiently captures the nonlinear time- and rate-dependent behavior of a soft manipulator. Material model-based closed-loop trajectory control was implemented to further validate the feasibility of the derived model and increase the performance of the overall system.}
    }
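    The spring-plus-damper structure behind the model above is easiest to see in the linear Kelvin–Voigt element, sigma(t) = E*eps(t) + eta*(d eps/dt): stress has an elastic part proportional to strain and a viscous part proportional to strain rate. A small sketch of that response (the paper uses a modified, nonlinear Kelvin–Voigt model; E and eta below are placeholders, not identified material parameters):

        import numpy as np

        def kelvin_voigt_stress(strain, dt, E=1.0e5, eta=2.0e3):
            # sigma = E * eps + eta * d(eps)/dt for a linear Kelvin-Voigt element.
            strain = np.asarray(strain, dtype=float)
            strain_rate = np.gradient(strain, dt)   # finite-difference strain rate
            return E * strain + eta * strain_rate

        dt = 0.01
        t = np.arange(0.0, 2.0, dt)
        strain = 0.2 * np.clip(t, 0.0, 1.0)         # ramp-and-hold stretch of the arm
        sigma = kelvin_voigt_stress(strain, dt)
        print(sigma[0], sigma[-1])  # rate-dependent during the ramp, elastic at the hold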
  • K. Elgeneidy, P. Lightbody, S. Pearson, and G. Neumann, “Characterising 3d-printed soft fin ray robotic fingers with layer jamming capability for delicate grasping,” in Robosoft 2019, 2019.
    [BibTeX] [Abstract] [Download PDF]

    Motivated by the growing need within the agrifood industry to automate the handling of delicate produce, this paper presents soft robotic fingers utilising the Fin Ray effect to passively and gently adapt to delicate targets. The proposed Soft Fin Ray fingers feature thin ribs and are entirely 3D printed from a flexible material (NinjaFlex) to enhance their shape adaptation, compared to the original Fin Ray fingers. To overcome their reduced force generation, the effects of the angle and spacing of the flexible ribs were experimentally characterised. The results showed that at large displacements, layer jamming between tilted flexible ribs can significantly enhance the force generation, while minimal contact forces can still be maintained at small displacements for delicate grasping.

    @inproceedings{lincoln34950,
    booktitle = {RoboSoft 2019},
    month = {June},
    title = {Characterising 3D-printed Soft Fin Ray Robotic Fingers with Layer Jamming Capability for Delicate Grasping},
    author = {Khaled Elgeneidy and Peter Lightbody and Simon Pearson and Gerhard Neumann},
    year = {2019},
    url = {https://eprints.lincoln.ac.uk/id/eprint/34950/},
    abstract = {Motivated by the growing need within the agrifood industry to automate the handling of delicate produce, this paper presents soft robotic fingers utilising the Fin Ray effect to passively and gently adapt to delicate targets. The proposed Soft Fin Ray fingers feature thin ribs and are entirely 3D printed from a flexible material (NinjaFlex) to enhance their shape adaptation, compared to the original Fin Ray fingers. To overcome their reduced force generation, the effects of the angle and spacing of the flexible ribs were experimentally characterised. The results showed that at large displacements, layer jamming between tilted flexible ribs can significantly enhance the force generation, while minimal contact forces can still be maintained at small displacements for delicate grasping.}
    }
  • P. Bosilj, I. Gould, T. Duckett, and G. Cielniak, “Pattern spectra from different component trees for estimating soil size distribution,” in 14th international symposium on mathematical morphology, 2019, p. 415–427.
    [BibTeX] [Abstract] [Download PDF]

    We study the pattern spectra in the context of soil structure analysis. Good soil structure is vital for sustainable crop growth. Accurate and fast measuring methods can contribute greatly to soil management decisions. However, the current in-field approaches contain a degree of subjectivity, while obtaining quantifiable results through laboratory techniques typically involves sieving the soil which is labour- and time-intensive. We aim to replace this physical sieving process through image analysis, and investigate the effectiveness of pattern spectra to capture the size distribution of the soil aggregates. We calculate the pattern spectra from partitioning hierarchies in addition to the traditional max-tree. The study is posed as an image retrieval problem, and confirms the ability of pattern spectra and suitability of different partitioning trees to re-identify soil samples in different arrangements and scales.

    @inproceedings{lincoln35548,
    month = {May},
    author = {Petra Bosilj and Iain Gould and Tom Duckett and Grzegorz Cielniak},
    booktitle = {14th International Symposium on Mathematical Morphology},
    title = {Pattern Spectra from Different Component Trees for Estimating Soil Size Distribution},
    publisher = {Springer},
    journal = {International Symposium on Mathematical Morphology},
    pages = {415--427},
    year = {2019},
    url = {https://eprints.lincoln.ac.uk/id/eprint/35548/},
    abstract = {We study the pattern spectra in the context of soil structure analysis. Good soil structure is vital for sustainable crop growth. Accurate and fast measuring methods can contribute greatly to soil management decisions. However, the current in-field approaches contain a degree of subjectivity, while obtaining quantifiable results through laboratory techniques typically involves sieving the soil which is labour- and time-intensive. We aim to replace this physical sieving process through image analysis, and investigate the effectiveness of pattern spectra to capture the size distribution of the soil aggregates. We calculate the pattern spectra from partitioning hierarchies in addition to the traditional max-tree. The study is posed as an image retrieval problem, and confirms the ability of pattern spectra and suitability of different partitioning trees to re-identify soil samples in different arrangements and scales.}
    }
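    A pattern spectrum of the kind used above is, in essence, a size histogram: it records how much image "mass" disappears as structures below a growing size threshold are filtered away. The sketch below computes an area-attribute version with successive area openings from scikit-image (the paper computes spectra directly from component trees, including partitioning hierarchies, which is more general and efficient; the toy image and size bands are assumptions):

        import numpy as np
        from skimage.morphology import area_opening

        def area_pattern_spectrum(image, thresholds):
            """Size-distribution signature via granulometry with area openings.

            Each bin holds the image mass removed as the area threshold steps
            to the next value, i.e. the amount of bright structure whose
            component area falls in that size band.
            """
            sums = [image.sum()] + [area_opening(image, area_threshold=a).sum()
                                    for a in thresholds]
            return -np.diff(sums)     # mass removed per size band (non-negative)

        rng = np.random.default_rng(1)
        soil = (rng.random((128, 128)) > 0.7).astype(np.uint8) * 255  # toy aggregates
        print(area_pattern_spectrum(soil, thresholds=[4, 16, 64, 256]))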
  • J. Zhao, X. Ma, Q. Fu, C. Hu, and S. Yue, “An lgmd based competitive collision avoidance strategy for uav,” in The 15th international conference on artificial intelligence applications and innovations, 2019. doi:10.1007/978-3-030-19823-7_6
    [BibTeX] [Abstract] [Download PDF]

    Building a reliable and efficient collision avoidance system for unmanned aerial vehicles (UAVs) is still a challenging problem. This research takes inspiration from locusts, which can fly in dense swarms for hundreds of miles without collision. In the locust's brain, a visual pathway of LGMD-DCMD (lobula giant movement detector and descending contra-lateral motion detector) has been identified as a collision perception system guiding fast collision avoidance for locusts, which is ideal for designing artificial vision systems. However, there are very few works investigating its potential in real-world UAV applications. In this paper, we present an LGMD based competitive collision avoidance method for UAV indoor navigation. Compared to previous works, we divided the UAV's field of view into four subfields each handled by an LGMD neuron. Therefore, four individual competitive LGMDs (C-LGMD) compete for guiding the directional collision avoidance of UAV. With more degrees of freedom compared to ground robots and vehicles, the UAV can escape from collision along four cardinal directions (e.g. the object approaching from the left-side triggers a rightward shifting of the UAV). Our proposed method has been validated by both simulations and real-time quadcopter arena experiments.

    @inproceedings{lincoln35691,
    booktitle = {The 15th International Conference on Artificial Intelligence Applications and Innovations},
    month = {May},
    title = {An LGMD Based Competitive Collision Avoidance Strategy for UAV},
    author = {Jiannan Zhao and Xingzao Ma and Qinbing Fu and Cheng Hu and Shigang Yue},
    publisher = {Springer},
    year = {2019},
    doi = {10.1007/978-3-030-19823-7\_6},
    url = {https://eprints.lincoln.ac.uk/id/eprint/35691/},
    abstract = {Building a reliable and efficient collision avoidance system for unmanned aerial vehicles (UAVs) is still a challenging problem. This research takes inspiration from locusts, which can fly in dense swarms for hundreds of miles without collision. In the locust's brain, a visual pathway of LGMD-DCMD (lobula giant movement detector and descending contra-lateral motion detector) has been identified as a collision perception system guiding fast collision avoidance for locusts, which is ideal for designing artificial vision systems. However, there are very few works investigating its potential in real-world UAV applications. In this paper, we present an LGMD based competitive collision avoidance method for UAV indoor navigation. Compared to previous works, we divided the UAV's field of view into four subfields each handled by an LGMD neuron. Therefore, four individual competitive LGMDs (C-LGMD) compete for guiding the directional collision avoidance of UAV. With more degrees of freedom compared to ground robots and vehicles, the UAV can escape from collision along four cardinal directions (e.g. the object approaching from the left-side triggers a rightward shifting of the UAV). Our proposed method has been validated by both simulations and real-time quadcopter arena experiments.}
    }
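    The competitive scheme in the abstract above can be caricatured in a few lines: one looming detector per subfield of the view, with the strongest response choosing the escape direction. The sketch below stands in frame-difference energy for the full LGMD model (which has layered excitation, inhibition and spiking dynamics); the half-frame subfields and the threshold are assumptions:

        import numpy as np

        def competitive_lgmd(prev_frame, frame, threshold=8.0):
            """Pick an escape direction from four competing subfield responses."""
            diff = np.abs(frame.astype(float) - prev_frame.astype(float))
            h, w = diff.shape
            fields = {"left": diff[:, : w // 2].mean(),   # one crude 'neuron'
                      "right": diff[:, w // 2:].mean(),   # per cardinal subfield
                      "up": diff[: h // 2, :].mean(),
                      "down": diff[h // 2:, :].mean()}
            winner = max(fields, key=fields.get)
            if fields[winner] < threshold:
                return None                               # nothing looming: keep course
            escape = {"left": "right", "right": "left", "up": "down", "down": "up"}
            return escape[winner]                         # steer away from the cue

        rng = np.random.default_rng(0)
        prev = rng.integers(0, 255, size=(60, 80)).astype(np.uint8)
        curr = prev.copy()
        curr[:, :40] = 255 - curr[:, :40]      # strong change in the left subfield
        print(competitive_lgmd(prev, curr))    # -> 'right'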
  • Q. Fu, N. Bellotto, H. Wang, C. F. Rind, H. Wang, and S. Yue, “A visual neural network for robust collision perception in vehicle driving scenarios,” in 15th international conference on artificial intelligence applications and innovations, 2019. doi:10.1007/978-3-030-19823-7_5
    [BibTeX] [Abstract] [Download PDF]

    This research addresses the challenging problem of visual collision detection in very complex and dynamic real physical scenes, specifically, the vehicle driving scenarios. This research takes inspiration from a large-field looming sensitive neuron, i.e., the lobula giant movement detector (LGMD) in the locust’s visual pathways, which represents high spike frequency to rapid approaching objects. Building upon our previous models, in this paper we propose a novel inhibition mechanism that is capable of adapting to different levels of background complexity. This adaptive mechanism works effectively to mediate the local inhibition strength and tune the temporal latency of local excitation reaching the LGMD neuron. As a result, the proposed model is effective to extract colliding cues from complex dynamic visual scenes. We tested the proposed method using a range of stimuli including simulated movements in grating backgrounds and shifting of a natural panoramic scene, as well as vehicle crash video sequences. The experimental results demonstrate the proposed method is feasible for fast collision perception in real-world situations with potential applications in future autonomous vehicles.

    @inproceedings{lincoln35586,
    booktitle = {15th International Conference on Artificial Intelligence Applications and Innovations},
    month = {May},
    title = {A Visual Neural Network for Robust Collision Perception in Vehicle Driving Scenarios},
    author = {Qinbing Fu and Nicola Bellotto and Huatian Wang and F. Claire Rind and Hongxin Wang and Shigang Yue},
    publisher = {Springer},
    year = {2019},
    doi = {10.1007/978-3-030-19823-7\_5},
    url = {https://eprints.lincoln.ac.uk/id/eprint/35586/},
    abstract = {This research addresses the challenging problem of visual collision detection in very complex and dynamic real physical scenes, specifically, the vehicle driving scenarios. This research takes inspiration from a large-field looming sensitive neuron, i.e., the lobula giant movement detector (LGMD) in the locust's visual pathways, which represents high spike frequency to rapid approaching objects. Building upon our previous models, in this paper we propose a novel inhibition mechanism that is capable of adapting to different levels of background complexity. This adaptive mechanism works effectively to mediate the local inhibition strength and tune the temporal latency of local excitation reaching the LGMD neuron. As a result, the proposed model is effective to extract colliding cues from complex dynamic visual scenes. We tested the proposed method using a range of stimuli including simulated movements in grating backgrounds and shifting of a natural panoramic scene, as well as vehicle crash video sequences. The experimental results demonstrate the proposed method is feasible for fast collision perception in real-world situations with potential applications in future autonomous vehicles.}
    }
  • H. Wang, Q. Fu, H. Wang, J. Peng, and S. Yue, “Constant angular velocity regulation for visually guided terrain following,” in 15th international conference on artificial intelligence applications and innovations, 2019, p. 597–608. doi:10.1007/978-3-030-19823-7_50
    [BibTeX] [Abstract] [Download PDF]

    Insects use visual cues to control their flight behaviours. By estimating the angular velocity of the visual stimuli and regulating it to a constant value, honeybees can perform a terrain following task which keeps a certain height above the undulated ground. For mimicking this behaviour in a bio-plausible computation structure, this paper presents a new angular velocity decoding model based on the honeybee's behavioural experiments. The model consists of three parts, the texture estimation layer for spatial information extraction, the motion detection layer for temporal information extraction and the decoding layer combining information from previous layers to estimate the angular velocity. Compared to previous methods in this field, the proposed model produces responses largely independent of the spatial frequency and contrast in grating experiments. The angular velocity based control scheme is proposed to implement the model into a bee simulated by the game engine Unity. The perfect terrain following above patterned ground and successfully flying over irregular textured terrain show its potential for micro unmanned aerial vehicles' terrain following.

    @inproceedings{lincoln35595,
    month = {May},
    author = {Huatian Wang and Qinbing Fu and Hongxin Wang and Jigen Peng and Shigang Yue},
    booktitle = {15th International Conference on Artificial Intelligence Applications and Innovations},
    title = {Constant Angular Velocity Regulation for Visually Guided Terrain Following},
    publisher = {Springer},
    doi = {10.1007/978-3-030-19823-7\_50},
    pages = {597--608},
    year = {2019},
    url = {https://eprints.lincoln.ac.uk/id/eprint/35595/},
    abstract = {Insects use visual cues to control their flight behaviours. By estimating the angular velocity of the visual stimuli and regulating it to a constant value, honeybees can perform a terrain following task which keeps a certain height above the undulated ground. For mimicking this behaviour in a bio-plausible computation structure, this paper presents a new angular velocity decoding model based on the honeybee's behavioural experiments. The model consists of three parts, the texture estimation layer for spatial information extraction, the motion detection layer for temporal information extraction and the decoding layer combining information from previous layers to estimate the angular velocity. Compared to previous methods in this field, the proposed model produces responses largely independent of the spatial frequency and contrast in grating experiments. The angular velocity based control scheme is proposed to implement the model into a bee simulated by the game engine Unity. The perfect terrain following above patterned ground and successfully flying over irregular textured terrain show its potential for micro unmanned aerial vehicles' terrain following.}
    }
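    The control idea above rests on a simple geometric fact: for a downward-looking eye moving forward at speed v at height h, the translational image angular velocity is roughly omega = v / h, so holding omega at a set point makes the height track v / omega_set. A toy regulator under that assumption (in the paper omega is decoded from vision by the proposed model; here it is computed from the simulated true state, and the gain and set point are arbitrary):

        def terrain_following_step(height, forward_speed, omega_set=2.0,
                                   gain=0.5, dt=0.02):
            """One step of constant-angular-velocity height regulation."""
            omega = forward_speed / height           # stand-in for the visual estimate
            climb_rate = gain * (omega - omega_set)  # ground rushing past -> climb
            return height + climb_rate * dt

        h = 1.0
        for _ in range(2000):
            h = terrain_following_step(h, forward_speed=3.0)
        print(round(h, 3))    # settles near v / omega_set = 1.5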
  • L. Sun, C. Zhao, Z. Yan, P. Liu, T. Duckett, and R. Stolkin, “A novel weakly-supervised approach for rgb-d-based nuclear waste object detection and categorization,” Ieee sensors journal, vol. 19, iss. 9, p. 3487–3500, 2019. doi:10.1109/JSEN.2018.2888815
    [BibTeX] [Abstract] [Download PDF]

    This paper addresses the problem of RGBD-based detection and categorization of waste objects for nuclear decommissioning. To enable autonomous robotic manipulation for nuclear decommissioning, nuclear waste objects must be detected and categorized. However, as a novel industrial application, large amounts of annotated waste object data are currently unavailable. To overcome this problem, we propose a weakly-supervised learning approach which is able to learn a deep convolutional neural network (DCNN) from unlabelled RGBD videos while requiring very few annotations. The proposed method also has the potential to be applied to other household or industrial applications. We evaluate our approach on the Washington RGB-D object recognition benchmark, achieving the state-of-the-art performance among semi-supervised methods. More importantly, we introduce a novel dataset, i.e. Birmingham nuclear waste simulants dataset, and evaluate our proposed approach on this novel industrial object recognition challenge. We further propose a complete real-time pipeline for RGBD-based detection and categorization of nuclear waste simulants. Our weakly-supervised approach has been demonstrated to be highly effective in solving a novel RGB-D object detection and recognition application with limited human annotations.

    @article{lincoln35699,
    volume = {19},
    number = {9},
    month = {May},
    author = {Li Sun and Cheng Zhao and Zhi Yan and Pengcheng Liu and Tom Duckett and Rustam Stolkin},
    title = {A Novel Weakly-supervised approach for RGB-D-based Nuclear Waste Object Detection and Categorization},
    publisher = {IEEE},
    year = {2019},
    journal = {IEEE Sensors Journal},
    doi = {10.1109/JSEN.2018.2888815},
    pages = {3487--3500},
    url = {https://eprints.lincoln.ac.uk/id/eprint/35699/},
    abstract = {This paper addresses the problem of RGBD-based detection and categorization of waste objects for nuclear decommissioning. To enable autonomous robotic manipulation for nuclear decommissioning, nuclear waste objects must be detected and categorized. However, as a novel industrial application, large amounts of annotated waste object data are currently unavailable. To overcome this problem, we propose a weakly-supervised learning approach which is able to learn a deep convolutional neural network (DCNN) from unlabelled RGBD videos while requiring very few annotations. The proposed method also has the potential to be applied to other household or industrial applications. We evaluate our approach on the Washington RGB-D object recognition benchmark, achieving the state-of-the-art performance among semi-supervised methods. More importantly, we introduce a novel dataset, i.e. Birmingham nuclear waste simulants dataset, and evaluate our proposed approach on this novel industrial object recognition challenge. We further propose a complete real-time pipeline for RGBD-based detection and categorization of nuclear waste simulants. Our weakly-supervised approach has been demonstrated to be highly effective in solving a novel RGB-D object detection and recognition application with limited human annotations.}
    }
  • A. Nanjangud, C. M. Saaj, P. C. Blacker, A. Young, C. I. Underwood, S. Eckersley, M. Sweeting, and P. Bianco, “Robotic architectures for the on-orbit assembly of large space telescopes,” in 15th esa symposium on advanced space technologies in robotics and automation, 2019.
    [BibTeX] [Download PDF]
    @inproceedings{lincoln39623,
    booktitle = {15th ESA Symposium on Advanced Space Technologies in Robotics and Automation},
    month = {May},
    title = {Robotic Architectures for the On-Orbit Assembly of Large Space Telescopes},
    author = {Angadh Nanjangud and Chakravarthini M Saaj and Peter C. Blacker and Alex Young and Craig I. Underwood and Steve Eckersley and Martin Sweeting and Paolo Bianco},
    year = {2019},
    url = {https://eprints.lincoln.ac.uk/id/eprint/39623/}
    }
  • E. C. Rodias, M. Lampridi, A. Sopegno, R. Berruto, G. Banias, D. Bochtis, and P. Busato, “Optimal energy performance on allocating energy crops,” Biosystems engineering, vol. 181, p. 11–27, 2019. doi:10.1016/j.biosystemseng.2019.02.007
    [BibTeX] [Abstract] [Download PDF]

    There is a variety of crops that may be considered as potential biomass production crops. In order to select the most suitable crop for cultivation in a given area, several factors should be taken into account. During the crop selection process, a common framework should be followed focussing on financial or energy performance. Combining multiple crops and multiple fields for the extraction of the best allocation requires a model to evaluate various and complex factors given a specific objective. This paper studies the maximisation of total energy gained from the biomass production by energy crops, reduced by the energy costs of the production process. The tool calculates the energy balance using multiple crops allocated to multiple fields. Both binary programming and linear programming methods are employed to solve the allocation problem. Each crop is assigned to a field (or a combination of crops are allocated to each field) with the aim of maximising the energy balance provided by the production system. For the demonstration of the tool, a hypothetical case study of three different crops cultivated for a decade (Miscanthus x giganteus, Arundo donax, and Panicum virgatum) and allocated to 40 dispersed fields around a biogas plant in Italy is presented. The objective of the best allocation is the maximisation of energy balance showing that the linear solution is slightly better than the binary one in the basic scenario while focussing on suggesting alternative scenarios that would have an optimal energy balance.

    @article{lincoln39225,
    volume = {181},
    month = {May},
    author = {Efthymios C. Rodias and Maria Lampridi and Alessandro Sopegno and Remigio Berruto and George Banias and Dionysis Bochtis and Patrizia Busato},
    title = {Optimal energy performance on allocating energy crops},
    journal = {Biosystems Engineering},
    doi = {10.1016/j.biosystemseng.2019.02.007},
    pages = {11--27},
    year = {2019},
    url = {https://eprints.lincoln.ac.uk/id/eprint/39225/},
    abstract = {There is a variety of crops that may be considered as potential biomass production crops. In order to select the most suitable crop for cultivation in a given area, several factors should be taken into account. During the crop selection process, a common framework should be followed focussing on financial or energy performance. Combining multiple crops and multiple fields for the extraction of the best allocation requires a model to evaluate various and complex factors given a specific objective. This paper studies the maximisation of total energy gained from the biomass production by energy crops, reduced by the energy costs of the production process. The tool calculates the energy balance using multiple crops allocated to multiple fields. Both binary programming and linear programming methods are employed to solve the allocation problem. Each crop is assigned to a field (or a combination of crops are allocated to each field) with the aim of maximising the energy balance provided by the production system. For the demonstration of the tool, a hypothetical case study of three different crops cultivated for a decade (Miscanthus x giganteus, Arundo donax, and Panicum virgatum) and allocated to 40 dispersed fields around a biogas plant in Italy is presented. The objective of the best allocation is the maximisation of energy balance showing that the linear solution is slightly better than the binary one in the basic scenario while focussing on suggesting alternative scenarios that would have an optimal energy balance.}
    }
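    The allocation problem described above has a natural linear-programming form: maximise the summed energy balance of crop-to-field assignments subject to each field's area being fully allocated. A toy instance with scipy (the balance figures and field areas are invented for illustration; the paper derives them from the full production-process energy model, and its binary variant additionally forces one crop per field):

        import numpy as np
        from scipy.optimize import linprog

        # Net energy balance (GJ/ha) of 3 crops on 4 fields, and field areas (ha).
        balance = np.array([[210.0, 180.0, 150.0, 200.0],   # Miscanthus x giganteus
                            [190.0, 220.0, 160.0, 170.0],   # Arundo donax
                            [140.0, 150.0, 170.0, 160.0]])  # Panicum virgatum
        areas = np.array([12.0, 8.0, 15.0, 10.0])

        n_crops, n_fields = balance.shape
        # x[c, f] = hectares of field f planted with crop c, flattened row-major.
        c = -balance.ravel()                     # linprog minimises, so negate
        A_eq = np.zeros((n_fields, n_crops * n_fields))
        for f in range(n_fields):
            A_eq[f, f::n_fields] = 1.0           # each field fully allocated
        res = linprog(c, A_eq=A_eq, b_eq=areas, bounds=(0, None), method="highs")
        print(res.x.reshape(n_crops, n_fields))  # LP may split a field between crops
        print(-res.fun, "GJ total energy balance")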
  • F. Brandherm, J. Peters, G. Neumann, and R. Akrour, “Learning replanning policies with direct policy search,” Ieee robotics and automation letters (ra-l), vol. 4, iss. 2, p. 2196–2203, 2019. doi:10.1109/LRA.2019.2901656
    [BibTeX] [Abstract] [Download PDF]

    Direct policy search has been successful in learning challenging real world robotic motor skills by learning open-loop movement primitives with high sample efficiency. These primitives can be generalized to different contexts with varying initial configurations and goals. Current state-of-the-art contextual policy search algorithms can however not adapt to changing, noisy context measurements. Yet, these are common characteristics of real world robotic tasks. Planning a trajectory ahead based on an inaccurate context that may change during the motion often results in poor accuracy, especially with highly dynamical tasks. To adapt to updated contexts, it is sensible to learn trajectory replanning strategies. We propose a framework to learn trajectory replanning policies via contextual policy search and demonstrate that they are safe for the robot, that they can be learned efficiently and that they outperform non-replanning policies for problems with partially observable or perturbed context.

    @article{lincoln36284,
    volume = {4},
    number = {2},
    month = {April},
    author = {F. Brandherm and J. Peters and Gerhard Neumann and R. Akrour},
    title = {Learning Replanning Policies with Direct Policy Search},
    year = {2019},
    journal = {IEEE Robotics and Automation Letters (RA-L)},
    doi = {10.1109/LRA.2019.2901656},
    pages = {2196--2203},
    url = {https://eprints.lincoln.ac.uk/id/eprint/36284/},
    abstract = {Direct policy search has been successful in learning challenging real world robotic motor skills by learning open-loop movement primitives with high sample efficiency. These primitives can be generalized to different contexts with varying initial configurations and goals. Current state-of-the-art contextual policy search algorithms can however not adapt to changing, noisy context measurements. Yet, these are common characteristics of real world robotic tasks. Planning a trajectory ahead based on an inaccurate context that may change during the motion often results in poor accuracy, especially with highly dynamical tasks. To adapt to updated contexts, it is sensible to learn trajectory replanning strategies. We propose a framework to learn trajectory replanning policies via contextual policy search and demonstrate that they are safe for the robot, that they can be learned efficiently and that they outperform non-replanning policies for problems with partially observable or perturbed context.}
    }
  • M. Lampridi, D. Kateris, G. Vasileiadis, S. Pearson, C. Sørensen, A. Balafoutis, and D. Bochtis, “A case-based economic assessment of robotics employment in precision arable farming,” Agronomy, vol. 9, iss. 4, p. 175, 2019. doi:10.3390/agronomy9040175
    [BibTeX] [Abstract] [Download PDF]

    The need to intensify agriculture to meet increasing nutritional needs, in combination with the evolution of unmanned autonomous systems has led to the development of a series of ‘smart’ farming technologies that are expected to replace or complement conventional machinery and human labor. This paper proposes a preliminary methodology for the economic analysis of the employment of robotic systems in arable farming. This methodology is based on the basic processes for estimating the use cost for agricultural machinery. However, for the case of robotic systems, no average norms for the majority of the operational parameters are available. Here, we propose a novel estimation process for these parameters in the case of robotic systems. As a case study, the operation of light cultivation has been selected due to the technological readiness for this type of operation.

    @article{lincoln35601,
    volume = {9},
    number = {4},
    month = {April},
    author = {Maria Lampridi and Dimitrios Kateris and Giorgos Vasileiadis and Simon Pearson and Claus S{\o}rensen and Athanasios Balafoutis and Dionysis Bochtis},
    title = {A Case-Based Economic Assessment of Robotics Employment in Precision Arable Farming},
    publisher = {MDPI},
    year = {2019},
    journal = {Agronomy},
    doi = {10.3390/agronomy9040175},
    pages = {175},
    url = {https://eprints.lincoln.ac.uk/id/eprint/35601/},
    abstract = {The need to intensify agriculture to meet increasing nutritional needs, in combination with the evolution of unmanned autonomous systems has led to the development of a series of ‘smart’ farming technologies that are expected to replace or complement conventional machinery and human labor. This paper proposes a preliminary methodology for the economic analysis of the employment of robotic systems in arable farming. This methodology is based on the basic processes for estimating the use cost for agricultural machinery. However, for the case of robotic systems, no average norms for the majority of the operational parameters are available. Here, we propose a novel estimation process for these parameters in the case of robotic systems. As a case study, the operation of light cultivation has been selected due to the technological readiness for this type of operation.}
    }
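    The costing approach sketched in the abstract follows standard agricultural-machinery bookkeeping: annual ownership cost (depreciation, interest, repairs) divided by annual work capacity gives a cost per hectare. A minimal version with placeholder figures (the salvage, repair and capacity norms below are illustrative assumptions, not the values estimated in the paper):

        def cost_per_hectare(purchase_price, salvage_fraction=0.1, lifetime_years=10,
                             interest_rate=0.05, repairs_fraction=0.02,
                             hours_per_year=400, ha_per_hour=0.8):
            """Annual machinery use cost divided by annual field capacity."""
            salvage = purchase_price * salvage_fraction
            depreciation = (purchase_price - salvage) / lifetime_years
            interest = interest_rate * (purchase_price + salvage) / 2.0  # avg investment
            repairs = repairs_fraction * purchase_price
            annual = depreciation + interest + repairs
            return annual / (hours_per_year * ha_per_hour)

        print(round(cost_per_hectare(80000.0), 2), "EUR/ha")   # hypothetical robot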
  • A. W. I. Mohamed, C. M. Saaj, A. Seddaoui, and S. Eckersley, “Controlling a non-linear space robot using linear controllers,” in 5th ceas conference on guidance, navigation and control (eurognc), 2019.
    [BibTeX] [Download PDF]
    @inproceedings{lincoln39625,
    booktitle = {5th CEAS Conference on Guidance, Navigation and Control (EuroGNC)},
    month = {April},
    title = {Controlling a Non-Linear Space Robot using Linear Controllers},
    author = {A.W.I Mohamed and C. M. Saaj and A. Seddaoui and S. Eckersley},
    year = {2019},
    url = {https://eprints.lincoln.ac.uk/id/eprint/39625/}
    }
  • T. Angelopoulou, N. Tziolas, A. Balafoutis, G. Zalidis, and D. Bochtis, “Remote sensing techniques for soil organic carbon estimation: a review,” Remote sensing, vol. 11, iss. 6, p. 676, 2019. doi:10.3390/rs11060676
    [BibTeX] [Abstract] [Download PDF]

    Towards the need for sustainable development, remote sensing (RS) techniques in the Visible-Near Infrared–Shortwave Infrared (VNIR–SWIR, 400–2500 nm) region could assist in a more direct, cost-effective and rapid manner to estimate important indicators for soil monitoring purposes. Soil reflectance spectroscopy has been applied in various domains apart from laboratory conditions, e.g., sensors mounted on satellites, aircrafts and Unmanned Aerial Systems. The aim of this review is to illustrate the research made for soil organic carbon estimation, with the use of RS techniques, reporting the methodology and results of each study. It also aims to provide a comprehensive introduction in soil spectroscopy for those who are less conversant with the subject. In total, 28 journal articles were selected and further analysed. It was observed that prediction accuracy reduces from Unmanned Aerial Systems (UASs) to satellite platforms, though advances in machine learning techniques could further assist in the generation of better calibration models. There are some challenges concerning atmospheric, radiometric and geometric corrections, vegetation cover, soil moisture and roughness that still need to be addressed. The advantages and disadvantages of each approach are highlighted and future considerations are also discussed at the end.

    @article{lincoln39227,
    volume = {11},
    number = {6},
    month = {March},
    author = {Theodora Angelopoulou and Nikolaos Tziolas and Athanasios Balafoutis and George Zalidis and Dionysis Bochtis},
    title = {Remote Sensing Techniques for Soil Organic Carbon Estimation: A Review},
    year = {2019},
    journal = {Remote Sensing},
    doi = {10.3390/rs11060676},
    pages = {676},
    url = {https://eprints.lincoln.ac.uk/id/eprint/39227/},
    abstract = {Towards the need for sustainable development, remote sensing (RS) techniques in the Visible-Near Infrared–Shortwave Infrared (VNIR–SWIR, 400–2500 nm) region could assist in a more direct, cost-effective and rapid manner to estimate important indicators for soil monitoring purposes. Soil reflectance spectroscopy has been applied in various domains apart from laboratory conditions, e.g., sensors mounted on satellites, aircrafts and Unmanned Aerial Systems. The aim of this review is to illustrate the research made for soil organic carbon estimation, with the use of RS techniques, reporting the methodology and results of each study. It also aims to provide a comprehensive introduction in soil spectroscopy for those who are less conversant with the subject. In total, 28 journal articles were selected and further analysed. It was observed that prediction accuracy reduces from Unmanned Aerial Systems (UASs) to satellite platforms, though advances in machine learning techniques could further assist in the generation of better calibration models. There are some challenges concerning atmospheric, radiometric and geometric corrections, vegetation cover, soil moisture and roughness that still need to be addressed. The advantages and disadvantages of each approach are highlighted and future considerations are also discussed at the end.}
    }
  • S. Pearson, D. May, G. Leontidis, M. Swainson, S. Brewer, L. Bidaut, J. Frey, G. Parr, R. Maull, and A. Zisman, “Are distributed ledger technologies the panacea for food traceability?,” Global food security, vol. 20, p. 145–149, 2019. doi:10.1016/j.gfs.2019.02.002
    [BibTeX] [Abstract] [Download PDF]

    Distributed Ledger Technology (DLT), such as blockchain, has the potential to transform supply chains. It can provide a cryptographically secure and immutable record of transactions and associated metadata (origin, contracts, process steps, environmental variations, microbial records, etc.) linked across whole supply chains. The ability to trace food items within and along a supply chain is legally required by all actors within the chain. It is critical to food safety, underpins trust and global food trade. However, current food traceability systems are not linked between all actors within the supply chain. Key metadata on the age and process history of a food is rarely transferred when a product is bought and sold through multiple steps within the chain. Herein, we examine the potential of massively scalable DLT to securely link the entire food supply chain, from producer to end user. Under such a paradigm, should a food safety or quality issue ever arise, authorized end users could instantly and accurately trace the origin and history of any particular food item. This novel and unparalleled technology could help underpin trust for the safety of all food, a critical component of global food security. In this paper, we investigate the (i) data requirements to develop DLT technology across whole supply chains, (ii) key challenges and barriers to optimizing the complete system, and (iii) potential impacts on production efficiency, legal compliance, access to global food markets and the safety of food. Our conclusion is that while DLT has the potential to transform food systems, this can only be fully realized through the global development and agreement on suitable data standards and governance. In addition, key technical issues need to be resolved including challenges with DLT scalability, privacy and data architectures.

    @article{lincoln35035,
    volume = {20},
    month = {March},
    author = {Simon Pearson and David May and Georgios Leontidis and Mark Swainson and Steve Brewer and Luc Bidaut and Jeremy Frey and Gerard Parr and Roger Maull and Andrea Zisman},
    title = {Are Distributed Ledger Technologies the Panacea for Food Traceability?},
    publisher = {Elsevier},
    year = {2019},
    journal = {Global Food Security},
    doi = {10.1016/j.gfs.2019.02.002},
    pages = {145--149},
    url = {https://eprints.lincoln.ac.uk/id/eprint/35035/},
    abstract = {Distributed Ledger Technology (DLT), such as blockchain, has the potential to transform supply chains. It can provide a cryptographically secure and immutable record of transactions and associated metadata (origin, contracts, process steps, environmental variations, microbial records, etc.) linked across whole supply chains. The ability to trace food items within and along a supply chain is legally required by all actors within the chain. It is critical to food safety, underpins trust and global food trade. However, current food traceability systems are not linked between all actors within the supply chain. Key metadata on the age and process history of a food is rarely transferred when a product is bought and sold through multiple steps within the chain. Herein, we examine the potential of massively scalable DLT to securely link the entire food supply chain, from producer to end user. Under such a paradigm, should a food safety or quality issue ever arise, authorized end users could instantly and accurately trace the origin and history of any particular food item. This novel and unparalleled technology could help underpin trust for the safety of all food, a critical component of global food security. In this paper, we investigate the (i) data requirements to develop DLT technology across whole supply chains, (ii) key challenges and barriers to optimizing the complete system, and (iii) potential impacts on production efficiency, legal compliance, access to global food markets and the safety of food. Our conclusion is that while DLT has the potential to transform food systems, this can only be fully realized through the global development and agreement on suitable data standards and governance. In addition, key technical issues need to be resolved including challenges with DLT scalability, privacy and data architectures.}
    }
  • M. Hüttenrauch, S. Adrian, and G. Neumann, “Deep reinforcement learning for swarm systems,” Journal of machine learning research, vol. 20, iss. 54, p. 1–31, 2019.
    [BibTeX] [Abstract] [Download PDF]

    Recently, deep reinforcement learning (RL) methods have been applied successfully to multi-agent scenarios. Typically, the observation vector for decentralized decision making is represented by a concatenation of the (local) information an agent gathers about other agents. However, concatenation scales poorly to swarm systems with a large number of homogeneous agents as it does not exploit the fundamental properties inherent to these systems: (i) the agents in the swarm are interchangeable and (ii) the exact number of agents in the swarm is irrelevant. Therefore, we propose a new state representation for deep multi-agent RL based on mean embeddings of distributions, where we treat the agents as samples and use the empirical mean embedding as input for a decentralized policy. We define different feature spaces of the mean embedding using histograms, radial basis functions and neural networks trained end-to-end. We evaluate the representation on two well-known problems from the swarm literature in a globally and locally observable setup. For the local setup we furthermore introduce simple communication protocols. Of all approaches, the mean embedding representation using neural network features enables the richest information exchange between neighboring agents, facilitating the development of complex collective strategies.

    @article{lincoln36281,
    volume = {20},
    number = {54},
    month = {February},
    author = {Maximilian H{\"u}ttenrauch and Sosic Adrian and Gerhard Neumann},
    title = {Deep Reinforcement Learning for Swarm Systems},
    publisher = {Journal of Machine Learning Research},
    year = {2019},
    journal = {Journal of Machine Learning Research},
    pages = {1--31},
    url = {https://eprints.lincoln.ac.uk/id/eprint/36281/},
    abstract = {Recently, deep reinforcement learning (RL) methods have been applied successfully to multi-agent scenarios. Typically, the observation vector for decentralized decision making is represented by a concatenation of the (local) information an agent gathers about other agents. However, concatenation scales poorly to swarm systems with a large number of homogeneous agents as it does not exploit the fundamental properties inherent to these systems: (i) the agents in the swarm are interchangeable and (ii) the exact number of agents in the swarm is irrelevant. Therefore, we propose a new state representation for deep multi-agent RL based on mean embeddings of distributions, where we treat the agents as samples and use the empirical mean embedding as input for a decentralized policy. We define different feature spaces of the mean embedding using histograms, radial basis functions and neural networks trained end-to-end. We evaluate the representation on two well-known problems from the swarm literature in a globally and locally observable setup. For the local setup we furthermore introduce simple communication protocols. Of all approaches, the mean embedding representation using neural network features enables the richest information exchange between neighboring agents, facilitating the development of complex collective strategies.}
    }
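    A minimal NumPy sketch of the mean-embedding idea summarised above (not the authors' code; the RBF centres, bandwidth and toy observations are invented for illustration): neighbour observations are treated as samples and averaged in an RBF feature space, so the policy input keeps the same size for any number of neighbours.

    import numpy as np

    def rbf_features(obs, centres, bandwidth=0.5):
        # Map one local observation into an RBF feature space.
        d2 = np.sum((centres - obs) ** 2, axis=1)
        return np.exp(-d2 / (2.0 * bandwidth ** 2))

    def mean_embedding(neighbour_obs, centres):
        # Empirical mean embedding: permutation-invariant and
        # independent of the number of agents in the swarm.
        feats = np.stack([rbf_features(o, centres) for o in neighbour_obs])
        return feats.mean(axis=0)

    # Toy usage: 5 neighbours with 2-D local observations, 16 RBF centres.
    rng = np.random.default_rng(0)
    centres = rng.uniform(-1, 1, size=(16, 2))
    neighbours = rng.uniform(-1, 1, size=(5, 2))
    print(mean_embedding(neighbours, centres).shape)  # (16,) for any swarm size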
  • L. Jackson, C. Saaj, A. Seddaoui, C. Whiting, S. Eckersley, and M. Ferris, “Design of a small space robot for on-orbit assembly missions,” in 5th international conference on mechatronics and robotics engineering, 2019, p. 107–112. doi:10.1145/3314493.3314520
    [BibTeX] [Abstract] [Download PDF]

    Intelligent robots have revolutionised terrestrial assembly and servicing processes, while low-cost small satellites have transformed the economics of space. This paper dovetails both technologies and proposes an innovative design for a small space robot that is potentially capable of assembly operations in-orbit. The drive for such missions stems from the growing commercial interests and scientific benefits offered by massive structures in space, such as the future large aperture astronomical or Earth Observation telescopes. However, limitations in the lifting capacity of launch vehicles currently impose severe restrictions on the size of the self-deployable monolithic telescope structure that can be carried. As a result, there is a growing demand for advancing the capabilities of space robots to assemble modular components in-orbit. To assess the feasibility of a small space robot for future in-space assembly missions, a detailed design is outlined and analysed in this paper. The trade-off between the manipulator configuration and its base spacecraft sizing is presented. This coherent design exercise is driven by various mission requirements that consider the constraints of a small spacecraft as well as its extreme operating environment.

    @inproceedings{lincoln37442,
    volume = {Part F},
    month = {February},
    author = {L. Jackson and C. Saaj and A. Seddaoui and C. Whiting and S. Eckersley and M. Ferris},
    note = {cited By 0},
    booktitle = {5th International Conference on Mechatronics and Robotics Engineering},
    title = {Design of a small space robot for on-orbit assembly missions},
    publisher = {ACM},
    year = {2019},
    journal = {ACM International Conference Proceeding Series},
    doi = {10.1145/3314493.3314520},
    pages = {107--112},
    url = {https://eprints.lincoln.ac.uk/id/eprint/37442/},
    abstract = {Intelligent robots have revolutionised terrestrial assembly and servicing processes, while low-cost small satellites have transformed the economics of space. This paper dovetails both technologies and proposes an innovative design for a small space robot that is potentially capable of assembly operations in-orbit. The drive for such missions stems from the growing commercial interests and scientific benefits offered by massive structures in space, such as the future large aperture astronomical or Earth Observation telescopes. However, limitations in the lifting capacity of launch vehicles currently impose severe restrictions on the size of the self-deployable monolithic telescope structure that can be carried. As a result, there is a growing demand for advancing the capabilities of space robots to assemble modular components in-orbit. To assess the feasibility of a small space robot for future in-space assembly missions, a detailed design is outlined and analysed in this paper. The trade-off between the manipulator configuration and its base spacecraft sizing is presented. This coherent design exercise is driven by various mission requirements that consider the constraints of a small spacecraft as well as its extreme operating environment.}
    }
  • J. Ganzer-Ripoll, N. Criado, M. Lopez-Sanchez, S. Parsons, and J. A. Rodriguez-Aguilar, “Combining social choice theory and argumentation: enabling collective decision making,” Group decision and negotiation, vol. 28, iss. 1, p. 127–173, 2019. doi:10.1007/s10726-018-9594-6
    [BibTeX] [Download PDF]
    @article{lincoln38395,
    volume = {28},
    number = {1},
    month = {February},
    author = {J. Ganzer-Ripoll and N. Criado and M. Lopez-Sanchez and Simon Parsons and J.A. Rodriguez-Aguilar},
    note = {cited By 0},
    title = {Combining Social Choice Theory and Argumentation: Enabling Collective Decision Making},
    year = {2019},
    journal = {Group Decision and Negotiation},
    doi = {10.1007/s10726-018-9594-6},
    pages = {127--173},
    url = {https://eprints.lincoln.ac.uk/id/eprint/38395/}
    }
  • D. Bechtsis, V. Moisiadis, N. Tsolakis, D. Vlachos, and D. Bochtis, “Unmanned ground vehicles in precision farming services: an integrated emulation modelling approach,” in Information and communication technologies in modern agricultural development, Springer, 2019, vol. 953, p. 177–190. doi:10.1007/978-3-030-12998-9_13
    [BibTeX] [Abstract] [Download PDF]

    Autonomous systems are a promising alternative for safely executing precision farming activities in a 24/7 perspective. In this context Unmanned Ground Vehicles (UGVs) are used in custom agricultural fields, with sophisticated sensors and data fusion techniques for real-time mapping and navigation. The aim of this study is to present a simulation software tool for providing effective and efficient farming activities in orchard fields and demonstrating the applicability of simulation in routing algorithms, hence increasing productivity, while dynamically addressing operational and tactical level uncertainties. The three dimensional virtual world includes the field layout and the static objects (orchard trees, obstacles, physical boundaries) and is constructed in the open source Gazebo simulation software while the Robot Operating System (ROS) and the implemented algorithms are tested using a custom vehicle. As a result a routing algorithm is executed and enables the UGV to pass through all the orchard trees while dynamically avoiding static and dynamic obstacles. Unlike existing sophisticated tools, the developed mechanism could accommodate an extensive variety of agricultural activities and could be transparently transferred from the simulation environment to real world ROS compatible UGVs providing user-friendly and highly customizable navigation.

    @incollection{lincoln39234,
    volume = {953},
    month = {February},
    author = {Dimitrios Bechtsis and Vasileios Moisiadis and Naoum Tsolakis and Dimitrios Vlachos and Dionysis Bochtis},
    booktitle = {Information and Communication Technologies in Modern Agricultural Development},
    title = {Unmanned Ground Vehicles in Precision Farming Services: An Integrated Emulation Modelling Approach},
    publisher = {Springer},
    year = {2019},
    doi = {10.1007/978-3-030-12998-9\_13},
    pages = {177--190},
    url = {https://eprints.lincoln.ac.uk/id/eprint/39234/},
    abstract = {Autonomous systems are a promising alternative for safely executing precision farming activities in a 24/7 perspective. In this context Unmanned Ground Vehicles (UGVs) are used in custom agricultural fields, with sophisticated sensors and data fusion techniques for real-time mapping and navigation. The aim of this study is to present a simulation software tool for providing effective and efficient farming activities in orchard fields and demonstrating the applicability of simulation in routing algorithms, hence increasing productivity, while dynamically addressing operational and tactical level uncertainties. The three dimensional virtual world includes the field layout and the static objects (orchard trees, obstacles, physical boundaries) and is constructed in the open source Gazebo simulation software while the Robot Operating System (ROS) and the implemented algorithms are tested using a custom vehicle. As a result a routing algorithm is executed and enables the UGV to pass through all the orchard trees while dynamically avoiding static and dynamic obstacles. Unlike existing sophisticated tools, the developed mechanism could accommodate an extensive variety of agricultural activities and could be transparently transferred from the simulation environment to real world ROS compatible UGVs providing user-friendly and highly customizable navigation.}
    }
  • C. A. G. Sørensen, D. Kateris, and D. Bochtis, “ICT innovations and smart farming,” in Information and communication technologies in modern agricultural development, Springer, 2019, vol. 953, p. 1–19. doi:10.1007/978-3-030-12998-9_1
    [BibTeX] [Abstract] [Download PDF]

    Agriculture plays a vital role in the global economy with the majority of the rural population in developing countries depending on it. The depletion of natural resources makes the improvement of the agricultural production more important but also more difficult than ever. This is the reason that although the demand is constantly growing, Information and Communication Technology (ICT) offers to producers the adoption of sustainability and improvement of their daily living conditions. ICT offers timely and updated relevant information such as weather forecast, market prices, the occurrence of new diseases and varieties, etc. The new knowledge offers a unique opportunity to bring the production enhancing technologies to the farmers and empower themselves with modern agricultural technology and act accordingly for increasing the agricultural production in a cost effective and profitable manner. The use of ICT itself or combined with other ICT systems results in productivity improvement and better resource use and reduces the time needed for farm management, marketing, logistics and quality assurance.

    @incollection{lincoln39235,
    volume = {953},
    month = {February},
    author = {Claus Aage Gr{\o}n S{\o}rensen and Dimitrios Kateris and Dionysis Bochtis},
    booktitle = {Information and Communication Technologies in Modern Agricultural Development},
    title = {ICT Innovations and Smart Farming},
    publisher = {Springer},
    year = {2019},
    doi = {10.1007/978-3-030-12998-9\_1},
    pages = {1--19},
    url = {https://eprints.lincoln.ac.uk/id/eprint/39235/},
    abstract = {Agriculture plays a vital role in the global economy with the majority of the rural population in developing countries depending on it. The depletion of natural resources makes the improvement of the agricultural production more important but also more difficult than ever. This is the reason that although the demand is constantly growing, Information and Communication Technology (ICT) offers to producers the adoption of sustainability and improvement of their daily living conditions. ICT offers timely and updated relevant information such as weather forecast, market prices, the occurrence of new diseases and varieties, etc. The new knowledge offers a unique opportunity to bring the production enhancing technologies to the farmers and empower themselves with modern agricultural technology and act accordingly for increasing the agricultural production in a cost effective and profitable manner. The use of ICT itself or combined with other ICT systems results in productivity improvement and better resource use and reduces the time needed for farm management, marketing, logistics and quality assurance.}
    }
  • E. C. Rodias, A. Sopegno, R. Berruto, D. Bochtis, E. Cavallo, and P. Busato, “A combined simulation and linear programming method for scheduling organic fertiliser application,” Biosystems engineering, vol. 178, p. 233–243, 2019. doi:10.1016/j.biosystemseng.2018.11.002
    [BibTeX] [Abstract] [Download PDF]

    Logistics have been used to analyse agricultural operations, such as chemical application, mineral or organic fertilisation and harvesting-handling operations. Recently, due to national or European commitments concerning livestock waste management, this waste is being applied in many crops instead of other mineral fertilisers. The organic fertiliser produced has a high availability although most of the crops it is applied to have strict timeliness issues concerning its application. Here, organic fertilizer (as liquid manure) distribution logistic system is modelled by using a combined simulation and linear programming method. The method applies in certain crops and field areas taking into account specific agronomical, legislation and other constraints with the objective of minimising the optimal annual cost. Given their direct connection with the organic fertiliser distribution, the operations of cultivation and seeding were included. In a basic scenario, the optimal cost was assessed for both crops in total cultivated area of 120 ha. Three modified scenarios are presented. The first regards one more tractor as being available and provides a reduction of 3.8% in the total annual cost in comparison with the basic scenario. In the second and third modified scenarios fields having high nitrogen demand next to the farm are considered with one or two tractors and savings of 2.5% and 6.1%, respectively, compared to the basic scenario are implied. Finally, it was concluded that the effect of distance from the manure production to the location of the fields could reduce costs by 6.5%.

    @article{lincoln39224,
    volume = {178},
    month = {February},
    author = {Efthymios C. Rodias and Alessandro Sopegno and Remigio Berruto and Dionysis Bochtis and Eugenio Cavallo and Patrizia Busato},
    title = {A combined simulation and linear programming method for scheduling organic fertiliser application},
    publisher = {Elsevier},
    year = {2019},
    journal = {Biosystems Engineering},
    doi = {10.1016/j.biosystemseng.2018.11.002},
    pages = {233--243},
    url = {https://eprints.lincoln.ac.uk/id/eprint/39224/},
    abstract = {Logistics have been used to analyse agricultural operations, such as chemical application, mineral or organic fertilisation and harvesting-handling operations. Recently, due to national or European commitments concerning livestock waste management, this waste is being applied in many crops instead of other mineral fertilisers. The organic fertiliser produced has a high availability although most of the crops it is applied to have strict timeliness issues concerning its application. Here, organic fertilizer (as liquid manure) distribution logistic system is modelled by using a combined simulation and linear programming method. The method applies in certain crops and field areas taking into account specific agronomical, legislation and other constraints with the objective of minimising the optimal annual cost. Given their direct connection with the organic fertiliser distribution, the operations of cultivation and seeding were included. In a basic scenario, the optimal cost was assessed for both crops in total cultivated area of 120 ha. Three modified scenarios are presented. The first regards one more tractor as being available and provides a reduction of 3.8\% in the total annual cost in comparison with the basic scenario. In the second and third modified scenarios fields having high nitrogen demand next to the farm are considered with one or two tractors and savings of 2.5\% and 6.1\%, respectively, compared to the basic scenario are implied. Finally, it was concluded that the effect of distance from the manure production to the location of the fields could reduce costs by 6.5\%.}
    }
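    The paper's combined simulation and linear-programming model cannot be reproduced from the abstract, but its LP core can be hinted at. A toy SciPy sketch (all costs, areas and capacities below are invented; the real model adds agronomic, legislative and timeliness constraints): allocate the hectares of two fields to two tractors at minimum annual cost.

    import numpy as np
    from scipy.optimize import linprog

    # Decision variables: hectares fertilised per (tractor, field) pair,
    # ordered [t1f1, t1f2, t2f1, t2f2].
    cost = np.array([12.0, 15.0, 14.0, 11.0])   # cost per hectare (invented)

    A_eq = np.array([[1, 0, 1, 0],              # field 1 must be fully covered
                     [0, 1, 0, 1]])             # field 2 must be fully covered
    b_eq = np.array([60.0, 60.0])               # field areas in ha

    A_ub = np.array([[1, 1, 0, 0],              # tractor 1 capacity limit
                     [0, 0, 1, 1]])             # tractor 2 capacity limit
    b_ub = np.array([80.0, 80.0])               # workable ha per tractor

    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * 4)
    print(res.x, res.fun)  # optimal allocation and its total cost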
  • Y. Zhu, X. Li, S. Pearson, D. Wu, R. Sun, S. Johnson, J. Wheeler, and S. Fang, “Evaluation of fengyun-3c soil moisture products using in-situ data from the chinese automatic soil moisture observation stations: a case study in henan province, china,” Water, vol. 11, iss. 2, p. 248, 2019. doi:10.3390/w11020248
    [BibTeX] [Abstract] [Download PDF]

    Soil moisture (SM) products derived from passive satellite missions are playing an increasingly important role in agricultural applications, especially crop monitoring and disaster warning. Evaluating the dependability of satellite-derived soil moisture products on a large scale is crucial. In this study, we assessed the level 2 (L2) SM product from the Chinese Fengyun-3C (FY-3C) radiometer against in-situ measurements collected from the Chinese Automatic Soil Moisture Observation Stations (CASMOS) during a one-year period from 1 January 2016 to 31 December 2016 across Henan in China. In contrast, we also investigated the skill of the Advanced Microwave Scanning Radiometer 2 (AMSR2) and Soil Moisture Active/Passive (SMAP) SM products simultaneously. Four statistical parameters were used to evaluate these products' reliability: mean difference, root-mean-square error (RMSE), unbiased RMSE (ubRMSE), and the correlation coefficient. Our assessment results revealed that the FY-3C L2 SM product generally showed a poor correlation with the in-situ SM data from CASMOS on both temporal and spatial scales. The AMSR2 L3 SM product of JAXA (Japan Aerospace Exploration Agency) algorithm had a similar level of skill as FY-3C in the study area. The SMAP L3 SM product outperformed the FY-3C temporally but showed lower performance in capturing the SM spatial variation. A time-series analysis indicated that the correlations and estimated error varied systematically through the growing periods of the key crops in our study area. FY-3C L2 SM data tended to overestimate soil moisture during May, August, and September when the crops reached maximum vegetation density and tended to underestimate the soil moisture content during the rest of the year. The comparison between the statistical parameters and the ground vegetation water content (VWC) further showed that the FY-3C SM product performed much better under a low VWC condition (<0.3 kg/m2), and the performance generally decreased with increased VWC. To improve the accuracy of the FY-3C SM product, an improved algorithm that can better characterize the variations of the ground VWC should be applied in the future.

    @article{lincoln35398,
    volume = {11},
    number = {2},
    month = {January},
    author = {Yongchao Zhu and Xuan Li and Simon Pearson and Dongli Wu and Ruijing Sun and Sarah Johnson and James Wheeler and Shibo Fang},
    title = {Evaluation of Fengyun-3C Soil Moisture Products Using In-Situ Data from the Chinese Automatic Soil Moisture Observation Stations: A Case Study in Henan Province, China},
    year = {2019},
    journal = {Water},
    doi = {10.3390/w11020248},
    pages = {248},
    url = {https://eprints.lincoln.ac.uk/id/eprint/35398/},
    abstract = {Soil moisture (SM) products derived from passive satellite missions are playing an increasingly important role in agricultural applications, especially crop monitoring and disaster warning. Evaluating the dependability of satellite-derived soil moisture products on a large scale is crucial. In this study, we assessed the level 2 (L2) SM product from the Chinese Fengyun-3C (FY-3C) radiometer against in-situ measurements collected from the Chinese Automatic Soil Moisture Observation Stations (CASMOS) during a one-year period from 1 January 2016 to 31 December 2016 across Henan in China. In contrast, we also investigated the skill of the Advanced Microwave Scanning Radiometer 2 (AMSR2) and Soil Moisture Active/Passive (SMAP) SM products simultaneously. Four statistical parameters were used to evaluate these products' reliability: mean difference, root-mean-square error (RMSE), unbiased RMSE (ubRMSE), and the correlation coefficient. Our assessment results revealed that the FY-3C L2 SM product generally showed a poor correlation with the in-situ SM data from CASMOS on both temporal and spatial scales. The AMSR2 L3 SM product of JAXA (Japan Aerospace Exploration Agency) algorithm had a similar level of skill as FY-3C in the study area. The SMAP L3 SM product outperformed the FY-3C temporally but showed lower performance in capturing the SM spatial variation. A time-series analysis indicated that the correlations and estimated error varied systematically through the growing periods of the key crops in our study area. FY-3C L2 SM data tended to overestimate soil moisture during May, August, and September when the crops reached maximum vegetation density and tended to underestimate the soil moisture content during the rest of the year. The comparison between the statistical parameters and the ground vegetation water content (VWC) further showed that the FY-3C SM product performed much better under a low VWC condition ({\ensuremath{<}}0.3 kg/m2), and the performance generally decreased with increased VWC. To improve the accuracy of the FY-3C SM product, an improved algorithm that can better characterize the variations of the ground VWC should be applied in the future.}
    }
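    The four evaluation statistics named in the abstract are standard and compact to restate. A NumPy sketch (the synthetic series below are illustrative, not CASMOS data):

    import numpy as np

    def sm_metrics(retrieval, insitu):
        # Bias (mean difference), RMSE, unbiased RMSE and Pearson r:
        # the four scores used to assess the satellite SM products.
        retrieval, insitu = np.asarray(retrieval), np.asarray(insitu)
        diff = retrieval - insitu
        bias = diff.mean()
        rmse = np.sqrt((diff ** 2).mean())
        ubrmse = np.sqrt(rmse ** 2 - bias ** 2)
        r = np.corrcoef(retrieval, insitu)[0, 1]
        return bias, rmse, ubrmse, r

    # Toy usage with a year of synthetic volumetric soil moisture (m3/m3).
    rng = np.random.default_rng(1)
    truth = rng.uniform(0.1, 0.4, size=365)
    retrieval = truth + rng.normal(0.02, 0.03, size=365)
    print(sm_metrics(retrieval, truth))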
  • A. Gabriel, N. Bellotto, and P. Baxter, “Towards a dataset of activities for action recognition in open fields,” in 2nd uk-ras robotics and autonomous systems conference, 2019, p. 64–67.
    [BibTeX] [Abstract] [Download PDF]

    In an agricultural context, having autonomous robots that can work side-by-side with human workers provides a range of productivity benefits. In order for this to be achieved safely and effectively, these autonomous robots require the ability to understand a range of human behaviors in order to facilitate task communication and coordination. The recognition of human actions is a key part of this, and is the focus of this paper. Available datasets for Action Recognition generally feature controlled lighting and framing while recording subjects from the front. They mostly reflect good recording conditions but fail to model the data a robot will have to work with in the field, such as varying distance and lighting conditions. In this work, we propose a set of recording conditions, gestures and behaviors that better reflect the environment an agricultural robot might find itself in and record a dataset with a range of sensors that demonstrate these conditions.

    @inproceedings{lincoln36201,
    booktitle = {2nd UK-RAS Robotics and Autonomous Systems Conference},
    month = {January},
    title = {Towards a Dataset of Activities for Action Recognition in Open Fields},
    author = {Alexander Gabriel and Nicola Bellotto and Paul Baxter},
    publisher = {UK-RAS},
    year = {2019},
    pages = {64--67},
    url = {https://eprints.lincoln.ac.uk/id/eprint/36201/},
    abstract = {In an agricultural context, having autonomous robots that can work side-by-side with human workers provides a range of productivity benefits. In order for this to be achieved safely and effectively, these autonomous robots require the ability to understand a range of human behaviors in order to facilitate task communication and coordination. The recognition of human actions is a key part of this, and is the focus of this paper. Available datasets for Action Recognition generally feature controlled lighting and framing while recording subjects from the front. They mostly reflect good recording conditions but fail to model the data a robot will have to work with in the field, such as varying distance and lighting conditions. In this work, we propose a set of recording conditions, gestures and behaviors that better reflect the environment an agricultural robot might find itself in and record a dataset with a range of sensors that demonstrate these conditions.}
    }
  • T. Pardi, R. Stolkin, and A. G. Esfahani, “Choosing grasps to enable collision-free post-grasp manipulations,” Ieee-ras 18th international conference on humanoid robots (humanoids), 2019. doi:10.1109/HUMANOIDS.2018.8625027
    [BibTeX] [Abstract] [Download PDF]

    Consider the task of grasping the handle of a door, and then pushing it until the door opens. These two fundamental robotics problems (selecting secure grasps of a hand on an object, e.g. the door handle, and planning collision-free trajectories of a robot arm that will move that object along a desired path) have predominantly been studied separately from one another. Thus, much of the grasping literature overlooks the fundamental purpose of grasping objects, which is typically to make them move in desirable ways. Given a desired post-grasp trajectory of the object, different choices of grasp will often determine whether or not collision-free post-grasp motions of the arm can be found, which will deliver that trajectory. We address this problem by examining a number of possible stable grasping configurations on an object. For each stable grasp, we explore the motion space of the manipulator which would be needed for post-grasp motions, to deliver the object along the desired trajectory. A criterion, based on potential fields in the post-grasp motion space, is used to assign a collision-cost to each grasp. A grasping configuration is then selected which enables the desired post-grasp object motion while minimising the proximity of all robot parts to obstacles during motion. We demonstrate our method with peg-in-hole and pick-and-place experiments in cluttered scenes, using a Franka Panda robot. Our approach is effective in selecting appropriate grasps, which enable both stable grasp and also desired post-grasp movements without collisions. We also show that, when grasps are selected based on grasp stability alone, without consideration for desired post-grasp manipulations, the corresponding post-grasp movements of the manipulator may result in collisions.

    @article{lincoln35570,
    month = {January},
    title = {Choosing grasps to enable collision-free post-grasp manipulations},
    author = {Tommaso Pardi and Rustam Stolkin and Amir Ghalamzan Esfahani},
    publisher = {IEEE},
    year = {2019},
    doi = {10.1109/HUMANOIDS.2018.8625027},
    journal = {IEEE-RAS 18th International Conference on Humanoid Robots (Humanoids)},
    url = {https://eprints.lincoln.ac.uk/id/eprint/35570/},
    abstract = {Consider the task of grasping the handle of a door, and then pushing it until the door opens. These two fundamental robotics problems (selecting secure grasps of a hand on an object, e.g. the door handle, and planning collision-free trajectories of a robot arm that will move that object along a desired path) have predominantly been studied separately from one another. Thus, much of the grasping literature overlooks the fundamental purpose of grasping objects, which is typically to make them move in desirable ways. Given a desired post-grasp trajectory of the object, different choices of grasp will often determine whether or not collision-free post-grasp motions of the arm can be found, which will deliver that trajectory. We address this problem by examining a number of possible stable grasping configurations on an object. For each stable grasp, we explore the motion space of the manipulator which would be needed for post-grasp motions, to deliver the object along the desired trajectory. A criterion, based on potential fields in the post-grasp motion space, is used to assign a collision-cost to each grasp. A grasping configuration is then selected which enables the desired post-grasp object motion while minimising the proximity of all robot parts to obstacles during motion. We demonstrate our method with peg-in-hole and pick-and-place experiments in cluttered scenes, using a Franka Panda robot. Our approach is effective in selecting appropriate grasps, which enable both stable grasp and also desired post-grasp movements without collisions. We also show that, when grasps are selected based on grasp stability alone, without consideration for desired post-grasp manipulations, the corresponding post-grasp movements of the manipulator may result in collisions.}
    }
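    The grasp-scoring idea is compact enough to sketch. Below is a toy NumPy version (not the authors' implementation; the 2-D points, the quadratic repulsive penalty and the post_grasp_arm_motion callback are invented for illustration): each stable grasp is scored by a potential-field style collision cost accumulated along its post-grasp arm motion, and the cheapest grasp wins.

    import numpy as np

    def collision_cost(arm_points, obstacles, eps=0.2):
        # Penalise arm points that come within eps of any obstacle;
        # the penalty grows smoothly as the clearance shrinks.
        cost = 0.0
        for p in arm_points:
            d = np.linalg.norm(obstacles - p, axis=1).min()
            if d < eps:
                cost += (eps - d) ** 2 / eps
        return cost

    def select_grasp(candidate_grasps, post_grasp_arm_motion, obstacles):
        # Score each stable grasp by the cost accumulated over the arm
        # motion it would require, and return the cheapest one.
        costs = [sum(collision_cost(pts, obstacles)
                     for pts in post_grasp_arm_motion(g))
                 for g in candidate_grasps]
        return int(np.argmin(costs))

    # Toy usage: two grasp heights, straight-line motions past one obstacle.
    obstacles = np.array([[0.5, 0.0]])
    motion = lambda g: [np.array([[t, g]]) for t in np.linspace(0.0, 1.0, 20)]
    print(select_grasp([0.05, 0.4], motion, obstacles))  # 1: keeps clear of the obstacle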
  • H. Montes, T. Duckett, and G. Cielniak, “Model based 3d point cloud segmentation for automated selective broccoli harvesting,” in Smart industry workshop 2019, 2019.
    [BibTeX] [Abstract] [Download PDF]

    Segmentation of 3D objects in cluttered scenes is a highly relevant problem. Given a 3D point cloud produced by a depth sensor, the goal is to separate objects of interest in the foreground from other elements in the background. We research 3D imaging methods to accurately segment and identify broccoli plants in the field. The ability to separate parts into different sets of sensor readings is an important task towards this goal. Our research is focused on the broccoli head segmentation problem as a first step towards size estimation of each broccoli crop in order to establish whether or not it is suitable for cutting.

    @inproceedings{lincoln39207,
    booktitle = {Smart Industry Workshop 2019},
    month = {January},
    title = {Model Based 3D Point Cloud Segmentation for Automated Selective Broccoli Harvesting},
    author = {Hector Montes and Tom Duckett and Grzegorz Cielniak},
    year = {2019},
    url = {https://eprints.lincoln.ac.uk/id/eprint/39207/},
    abstract = {Segmentation of 3D objects in cluttered scenes is a highly relevant problem. Given a 3D point cloud produced by a depth sensor, the goal is to separate objects of interest in the foreground from other elements in the background. We research 3D imaging methods to accurately segment and identify broccoli plants in the field. The ability to separate parts into different sets of sensor readings is an important task towards this goal. Our research is focused on the broccoli head segmentation problem as a first step towards size estimation of each broccoli crop in order to establish whether or not it is suitable for cutting.}
    }
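    As a flavour of the foreground/background separation described above, here is a crude NumPy sketch (not the project's method; the RANSAC parameters and the synthetic scene are invented): fit the dominant ground plane, then keep the points clearly above it as crop-head candidates.

    import numpy as np

    def segment_foreground(points, ransac_iters=200, tol=0.01):
        # Fit the dominant plane (the ground) with a small RANSAC loop,
        # then return the points lying clearly above it.
        rng = np.random.default_rng(0)
        best_count, plane = -1, None
        for _ in range(ransac_iters):
            sample = points[rng.choice(len(points), 3, replace=False)]
            n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
            norm = np.linalg.norm(n)
            if norm < 1e-9:
                continue  # degenerate sample, resample
            n = n / norm
            d = -np.dot(n, sample[0])
            count = int((np.abs(points @ n + d) < tol).sum())
            if count > best_count:
                best_count, plane = count, (n, d)
        n, d = plane
        side = (points @ n + d) * np.sign(n[2])  # orient the normal upwards
        return points[side > tol]

    # Toy usage: a flat, noisy ground plane plus a "broccoli head" blob.
    rng = np.random.default_rng(2)
    ground = np.c_[rng.uniform(0, 1, (500, 2)), rng.normal(0, 0.003, 500)]
    head = rng.normal([0.5, 0.5, 0.08], 0.02, (100, 3))
    print(len(segment_foreground(np.vstack([ground, head]))))  # roughly 100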
  • K. Elgeneidy, G. Neumann, S. Pearson, M. Jackson, and N. Lohse, “Contact detection and size estimation using a modular soft gripper with embedded flex sensors,” in International conference on intelligent robots and systems (iros 2018), 2019.
    [BibTeX] [Abstract] [Download PDF]

    Grippers made from soft elastomers are able to passively and gently adapt to their targets allowing deformable objects to be grasped safely without causing bruise or damage. However, it is difficult to regulate the contact forces due to the lack of contact feedback for such grippers. In this paper, a modular soft gripper is presented utilizing interchangeable soft pneumatic actuators with embedded flex sensors as fingers of the gripper. The fingers can be assembled in different configurations using 3D printed connectors. The paper investigates the potential of utilizing the simple sensory feedback from the flex and pressure sensors to make additional meaningful inferences regarding the contact state and grasped object size. We study the effect of the grasped object size and contact type on the combined feedback from the embedded flex sensors of opposing fingers. Our results show that a simple linear relationship exists between the grasped object size and the final flex sensor reading at fixed input conditions, despite the variation in object weight and contact type. Additionally, by simply monitoring the time series response from the flex sensor, contact can be detected by comparing the response to the known free-bending response at the same input conditions. Furthermore, by utilizing the measured internal pressure supplied to the soft fingers, it is possible to distinguish between power and pinch grasps, as the contact type affects the rate of change in the flex sensor readings against the internal pressure.

    @inproceedings{lincoln34713,
    booktitle = {International Conference on Intelligent Robots and Systems (IROS 2018)},
    month = {January},
    title = {Contact Detection and Size Estimation Using a Modular Soft Gripper with Embedded Flex Sensors},
    author = {Khaled Elgeneidy and Gerhard Neumann and Simon Pearson and Michael Jackson and Niels Lohse},
    year = {2019},
    url = {https://eprints.lincoln.ac.uk/id/eprint/34713/},
    abstract = {Grippers made from soft elastomers are able to passively and gently adapt to their targets allowing deformable objects to be grasped safely without causing bruise or damage. However, it is difficult to regulate the contact forces due to the lack of contact feedback for such grippers. In this paper, a modular soft gripper is presented utilizing interchangeable soft pneumatic actuators with embedded flex sensors as fingers of the gripper. The fingers can be assembled in different configurations using 3D printed connectors. The paper investigates the potential of utilizing the simple sensory feedback from the flex and pressure sensors to make additional meaningful inferences regarding the contact state and grasped object size. We study the effect of the grasped object size and contact type on the combined feedback from the embedded flex sensors of opposing fingers. Our results show that a simple linear relationship exists between the grasped object size and the final flex sensor reading at fixed input conditions, despite the variation in object weight and contact type. Additionally, by simply monitoring the time series response from the flex sensor, contact can be detected by comparing the response to the known free-bending response at the same input conditions. Furthermore, by utilizing the measured internal pressure supplied to the soft fingers, it is possible to distinguish between power and pinch grasps, as the contact type affects the rate of change in the flex sensor readings against the internal pressure.}
    }
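    Two of the paper's observations translate directly into code: grasped-object size is a linear function of the final flex reading at fixed input conditions, and contact shows up as a deviation from the known free-bending response. A sketch with hypothetical calibration numbers (none of these values come from the paper):

    import numpy as np

    # Hypothetical calibration: final flex readings for objects of known
    # diameter, grasped at the same input pressure (values invented).
    flex_final = np.array([310.0, 355.0, 402.0, 447.0, 495.0])
    diameter_mm = np.array([80.0, 70.0, 60.0, 50.0, 40.0])

    # One least-squares line captures the reported linear relation
    # between object size and final flex reading.
    slope, intercept = np.polyfit(flex_final, diameter_mm, 1)
    estimate_size = lambda flex: slope * flex + intercept
    print(round(estimate_size(420.0), 1))  # size estimate for a new grasp

    def contact_detected(flex_series, free_bending_series, threshold=15.0):
        # Contact is flagged when the sensed response deviates from the
        # free-bending response recorded at the same input conditions.
        dev = np.abs(np.asarray(flex_series) - np.asarray(free_bending_series))
        return bool((dev > threshold).any())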
  • C. Zhao, L. Sun, P. Purkait, T. Duckett, and R. Stolkin, “Learning monocular visual odometry with dense 3d mapping from dense 3d flow,” in 2018 ieee/rsj international conference on intelligent robots and systems (iros), 2019. doi:10.1109/IROS.2018.8594151
    [BibTeX] [Abstract] [Download PDF]

    This paper introduces a fully deep learning approach to monocular SLAM, which can perform simultaneous localization using a neural network for learning visual odometry (L-VO) and dense 3D mapping. Dense 2D flow and a depth image are generated from monocular images by sub-networks, which are then used by a 3D flow associated layer in the L-VO network to generate dense 3D flow. Given this 3D flow, the dual-stream L-VO network can then predict the 6DOF relative pose and furthermore reconstruct the vehicle trajectory. In order to learn the correlation between motion directions, the Bivariate Gaussian modeling is employed in the loss function. The L-VO network achieves an overall performance of 2.68% for average translational error and 0.0143°/m for average rotational error on the KITTI odometry benchmark. Moreover, the learned depth is leveraged to generate a dense 3D map. As a result, an entire visual SLAM system, that is, learning monocular odometry combined with dense 3D mapping, is achieved.

    @inproceedings{lincoln36001,
    booktitle = {2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
    month = {January},
    title = {Learning Monocular Visual Odometry with Dense 3D Mapping from Dense 3D Flow},
    author = {Cheng Zhao and Li Sun and Pulak Purkait and Tom Duckett and Rustam Stolkin},
    publisher = {IEEE},
    year = {2019},
    doi = {10.1109/IROS.2018.8594151},
    url = {https://eprints.lincoln.ac.uk/id/eprint/36001/},
    abstract = {This paper introduces a fully deep learning approach to monocular SLAM, which can perform simultaneous localization using a neural network for learning visual odometry (L-VO) and dense 3D mapping. Dense 2D flow and a depth image are generated from monocular images by sub-networks, which are then used by a 3D flow associated layer in the L-VO network to generate dense 3D flow. Given this 3D flow, the dual-stream L-VO network can then predict the 6DOF relative pose and furthermore reconstruct the vehicle trajectory. In order to learn the correlation between motion directions, the Bivariate Gaussian modeling is employed in the loss function. The L-VO network achieves an overall performance of 2.68 \% for average translational error and 0.0143{\textdegree}/m for average rotational error on the KITTI odometry benchmark. Moreover, the learned depth is leveraged to generate a dense 3D map. As a result, an entire visual SLAM system, that is, learning monocular odometry combined with dense 3D mapping, is achieved.}
    }
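    The bivariate Gaussian loss term mentioned in the abstract can be written out explicitly. A NumPy sketch of the negative log-likelihood (the network itself is not reproduced; the values below are placeholders):

    import numpy as np

    def bivariate_nll(xy, mu, sigma_x, sigma_y, rho):
        # Negative log-likelihood of two translation components under a
        # bivariate Gaussian, letting the model capture the correlation
        # (rho) between motion directions.
        dx = (xy[0] - mu[0]) / sigma_x
        dy = (xy[1] - mu[1]) / sigma_y
        z = dx ** 2 - 2.0 * rho * dx * dy + dy ** 2
        log_norm = np.log(2.0 * np.pi * sigma_x * sigma_y * np.sqrt(1.0 - rho ** 2))
        return log_norm + z / (2.0 * (1.0 - rho ** 2))

    print(bivariate_nll(np.array([0.1, -0.2]), np.zeros(2),
                        sigma_x=0.5, sigma_y=0.5, rho=0.3))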
  • R. C. Tieppo, T. L. Romanelli, M. Milan, C. A. G. S. o, and D. Bochtis, “Modeling cost and energy demand in agricultural machinery fleets for soybean and maize cultivated using a no-tillage system,” Computers and electronics in agriculture, vol. 156, p. 282–292, 2019. doi:10.1016/j.compag.2018.11.032
    [BibTeX] [Abstract] [Download PDF]

    Climate, area expansion and the possibility to grow soybean and maize within a same season using the no-tillage system and mechanized agriculture are factors that promoted the agriculture growth in Mato Grosso State – Brazil. Mechanized operations represent around 23% of production costs for maize and soybean, demanding a considerably powerful machinery. Energy balance is a tool to verify the sustainability level of mechanized system. Regarding the sustainability components profit and environment, this study aims to develop a deterministic model for agricultural machinery costs and energy demand for no-tillage system production of soybean and maize crops. In addition, scenario simulation aids to analyze the influence of fleet sizing regarding cost and energy demand. The development of the deterministic model consists on equations and data retrieved from literature. A simulation was developed for no-tillage soybean production system in Brazil, considering three basic mechanized operations (sowing, spraying and harvesting). Thereby, for those operations, three sizes of machinery commercially available and regularly used (small, medium, large) and seven levels of cropping area (500, 1000, 2000, 4000, 6000, 8000 and 10,000 ha) were used. The developed model was consistent for predictions of power demand, fuel consumption and costs. We noticed that the increase in area size implies in more working time for the machinery, which decreases the cost difference among the combinations. The greatest difference for the smallest area (500 ha) was 22.1 and 94.8% for sowing and harvesting operations, respectively. For 4000 and 10,000 ha, the difference decreased to 1.30 and 0.20%. Simulated scenarios showed the importance of determining operational cost and energy demand when energy efficiency is desired.

    @article{lincoln34502,
    volume = {156},
    month = {January},
    author = {Rafael Ceasar Tieppo and Thiago Lib{\'o}rio Romanelli and Marcos Milan and Claus Aage Gr{\o}n S{\o}rensen and Dionysis Bochtis},
    title = {Modeling cost and energy demand in agricultural machinery fleets for soybean and maize cultivated using a no-tillage system},
    publisher = {Elsevier},
    year = {2019},
    journal = {Computers and Electronics in Agriculture},
    doi = {10.1016/j.compag.2018.11.032},
    pages = {282--292},
    url = {https://eprints.lincoln.ac.uk/id/eprint/34502/},
    abstract = {Climate, area expansion and the possibility to grow soybean and maize within a same season using the no-tillage system and mechanized agriculture are factors that promoted the agriculture growth in Mato Grosso State -- Brazil. Mechanized operations represent around 23\% of production costs for maize and soybean, demanding a considerably powerful machinery. Energy balance is a tool to verify the sustainability level of mechanized system. Regarding the sustainability components profit and environment, this study aims to develop a deterministic model for agricultural machinery costs and energy demand for no-tillage system production of soybean and maize crops. In addition, scenario simulation aids to analyze the influence of fleet sizing regarding cost and energy demand. The development of the deterministic model consists on equations and data retrieved from literature. A simulation was developed for no-tillage soybean production system in Brazil, considering three basic mechanized operations (sowing, spraying and harvesting). Thereby, for those operations, three sizes of machinery commercially available and regularly used (small, medium, large) and seven levels of cropping area (500, 1000, 2000, 4000, 6000, 8000 and 10,000 ha) were used. The developed model was consistent for predictions of power demand, fuel consumption and costs. We noticed that the increase in area size implies in more working time for the machinery, which decreases the cost difference among the combinations. The greatest difference for the smallest area (500 ha) was 22.1 and 94.8\% for sowing and harvesting operations, respectively. For 4000 and 10,000 ha, the difference decreased to 1.30 and 0.20\%. Simulated scenarios showed the importance of determining operational cost and energy demand when energy efficiency is desired.}
    }
  • A. Babu, P. Lightbody, G. Das, P. Liu, S. Gomez-Gonzalez, and G. Neumann, “Improving local trajectory optimisation using probabilistic movement primitives,” in 2019 ieee/rsj international conference on intelligent robots and systems (iros), 2019, p. 2666–2671. doi:10.1109/IROS40897.2019.8967980
    [BibTeX] [Abstract] [Download PDF]

    Local trajectory optimisation techniques are a powerful tool for motion planning. However, they often get stuck in local optima depending on the quality of the initial solution and consequently, often do not find a valid (i.e. collision free) trajectory. Moreover, they often require fine tuning of a cost function to obtain the desired motions. In this paper, we address both problems by combining local trajectory optimisation with learning from demonstrations. The human expert demonstrates how to reach different target end-effector locations in different ways. From these demonstrations, we estimate a trajectory distribution, represented by a Probabilistic Movement Primitive (ProMP). For a new target location, we sample different trajectories from the ProMP and use these trajectories as initial solutions for the local optimisation. As the ProMP generates versatile initial solutions for the optimisation, the chance of finding poor local minima is significantly reduced. Moreover, the learned trajectory distribution is used to specify the smoothness costs for the optimisation, resulting in solutions of similar shape as the demonstrations. We demonstrate the effectiveness of our approach in several complex obstacle avoidance scenarios.

    @inproceedings{lincoln40837,
    booktitle = {2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
    title = {Improving Local Trajectory Optimisation using Probabilistic Movement Primitives},
    author = {Ashith Babu and Peter Lightbody and Gautham Das and Pengcheng Liu and Sebastian Gomez-Gonzalez and Gerhard Neumann},
    publisher = {IEEE},
    year = {2019},
    pages = {2666--2671},
    doi = {10.1109/IROS40897.2019.8967980},
    url = {https://eprints.lincoln.ac.uk/id/eprint/40837/},
    abstract = {Local trajectory optimisation techniques are a powerful tool for motion planning. However, they often get stuck in local optima depending on the quality of the initial solution and consequently, often do not find a valid (i.e. collision free) trajectory. Moreover, they often require fine tuning of a cost function to obtain the desired motions. In this paper, we address both problems by combining local trajectory optimisation with learning from demonstrations. The human expert demonstrates how to reach different target end-effector locations in different ways. From these demonstrations, we estimate a trajectory distribution, represented by a Probabilistic Movement Primitive (ProMP). For a new target location, we sample different trajectories from the ProMP and use these trajectories as initial solutions for the local optimisation. As the ProMP generates versatile initial solutions for the optimisation, the chance of finding poor local minima is significantly reduced. Moreover, the learned trajectory distribution is used to specify the smoothness costs for the optimisation, resulting in solutions of similar shape as the demonstrations. We demonstrate the effectiveness of our approach in several complex obstacle avoidance scenarios.}
    }
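    The seeding strategy is simple to sketch: sample trajectories from the learned ProMP distribution and hand the best one to the local optimiser as its initial solution. A toy NumPy version (the RBF basis, weight distribution and clearance cost are invented; a real ProMP would be learned from demonstrations):

    import numpy as np

    def sample_promp(w_mean, w_cov, basis, n_samples=10, rng=None):
        # Sample basis-function weights from the learned Gaussian and
        # project them through the basis to get candidate trajectories.
        if rng is None:
            rng = np.random.default_rng(0)
        w = rng.multivariate_normal(w_mean, w_cov, size=n_samples)
        return w @ basis.T  # shape (n_samples, n_timesteps)

    def best_seed(trajectories, cost_fn):
        # Use each sample as an initial solution and keep the cheapest.
        return trajectories[int(np.argmin([cost_fn(t) for t in trajectories]))]

    # Toy usage: an RBF basis over time and a clearance cost that prefers
    # trajectories staying away from a point obstacle at height 0.5.
    T, K = 50, 8
    t = np.linspace(0, 1, T)
    centres = np.linspace(0, 1, K)
    basis = np.exp(-(t[:, None] - centres[None, :]) ** 2 / (2 * 0.05 ** 2))
    samples = sample_promp(np.zeros(K), 0.1 * np.eye(K), basis)
    seed = best_seed(samples, cost_fn=lambda traj: -np.abs(traj - 0.5).min())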
  • T. Flyr and S. Parsons, “Towards adversarial training for mobile robots,” Lecture notes in computer science (including subseries lecture notes in artificial intelligence and lecture notes in bioinformatics), vol. 11649, p. 197–208, 2019. doi:10.1007/978-3-030-23807-0_17
    [BibTeX] [Download PDF]
    @article{lincoln38396,
    volume = {11649},
    author = {T. Flyr and Simon Parsons},
    note = {cited By 0},
    title = {Towards Adversarial Training for Mobile Robots},
    journal = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
    doi = {10.1007/978-3-030-23807-0\_17},
    pages = {197--208},
    year = {2019},
    url = {https://eprints.lincoln.ac.uk/id/eprint/38396/}
    }
  • L. Jackson, C. M. Saaj, A. Seddaoui, C. Whiting, S. Eckersley, and M. Ferris, “The downsizing of a free-flying space robot,” in 20th annual conference, taros 2019, 2019, p. 480–483. doi:10.1007/978-3-030-25332-5
    [BibTeX] [Download PDF]
    @inproceedings{lincoln39418,
    booktitle = {20th Annual Conference, TAROS 2019},
    title = {The Downsizing of a Free-Flying Space Robot},
    author = {Lucy Jackson and Chakravarthini M. Saaj and Asma Seddaoui and Calem Whiting and Steve Eckersley and Mark Ferris},
    publisher = {Springer},
    year = {2019},
    pages = {480--483},
    doi = {10.1007/978-3-030-25332-5},
    url = {https://eprints.lincoln.ac.uk/id/eprint/39418/}
    }
  • J. Koleosho and C. M. Saaj, “System design and control of a di-wheel rover,” in Towards autonomous robotic systems, 2019, p. 409–421. doi:10.1007/978-3-030-25332-5_35
    [BibTeX] [Abstract] [Download PDF]

    Traditionally, wheeled rovers are used for planetary surface exploration and six-wheeled chassis designs based on the Rocker-Bogie suspension system have been tested successfully on Mars. However, it is difficult to explore craters and crevasses using large six or four-wheeled rovers. Innovative designs based on smaller Di-Wheel Rovers might be better suited for such challenging terrains. A Di-Wheel Rover is a self-balancing two-wheeled mobile robot that can move in all directions within a two-dimensional plane, as well as stand upright by balancing on two wheels. This paper presents the outcomes of a feasibility study on a Di-Wheel Rover for planetary exploration missions. This includes developing its chassis design based on the hardware and software requirements, prototyping, and subsequent testing. The main contribution of this paper is the design of a self-balancing control system for the Di-Wheel Rover. This challenging design exercise was successfully completed through extensive experimentation thereby validating the performance of the Di-Wheel Rover. The details on the structural design, tuning controller gains based on an inverted pendulum model, and testing on different ground surfaces are described in this paper. The results presented in this paper give a new insight into designing low-cost Di-Wheel Rovers and clearly, there is a potential to use Di-Wheel Rovers for future planetary exploration.

    @inproceedings{lincoln39621,
    volume = {11650},
    author = {John Koleosho and Chakravarthini M. Saaj},
    booktitle = {Towards Autonomous Robotic Systems},
    title = {System Design and Control of a Di-Wheel Rover},
    publisher = {Springer},
    doi = {10.1007/978-3-030-25332-5\_35},
    pages = {409--421},
    year = {2019},
    url = {https://eprints.lincoln.ac.uk/id/eprint/39621/},
    abstract = {Traditionally, wheeled rovers are used for planetary surface exploration and six-wheeled chassis designs based on the Rocker-Bogie suspension system have been tested successfully on Mars. However, it is difficult to explore craters and crevasses using large six or four-wheeled rovers. Innovative designs based on smaller Di-Wheel Rovers might be better suited for such challenging terrains. A Di-Wheel Rover is a self-balancing two-wheeled mobile robot that can move in all directions within a two-dimensional plane, as well as stand upright by balancing on two wheels.
    This paper presents the outcomes of a feasibility study on a Di-Wheel Rover for planetary exploration missions. This includes developing its chassis design based on the hardware and software requirements, prototyping, and subsequent testing. The main contribution of this paper is the design of a self-balancing control system for the Di-Wheel Rover. This challenging design exercise was successfully completed through extensive experimentation thereby validating the performance of the Di-Wheel Rover. The details on the structural design, tuning controller gains based on an inverted pendulum model, and testing on different ground surfaces are described in this paper. The results presented in this paper give a new insight into designing low-cost Di-Wheel Rovers and clearly, there is a potential to use Di-Wheel Rovers for future planetary exploration.}
    }
  • J. Lock, A. G. Tramontano, S. Ghidoni, and N. Bellotto, “ActiVis: mobile object detection and active guidance for people with visual impairments,” in Proc. of the int. conf. on image analysis and processing (iciap), 2019.
    [BibTeX] [Abstract] [Download PDF]

    The ActiVis project aims to deliver a mobile system that is able to guide a person with visual impairments towards a target object or area in an unknown indoor environment. For this, it uses new developments in object detection, mobile computing, action generation and human-computer interfacing to interpret the user’s surroundings and present effective guidance directions. Our approach to direction generation uses a Partially Observable Markov Decision Process (POMDP) to track the system’s state and output the optimal location to be investigated. This system includes an object detector and an audio-based guidance interface to provide a complete active search pipeline. The ActiVis system was evaluated in a set of experiments showing better performance than a simpler unguided case.

    @inproceedings{lincoln36413,
    booktitle = {Proc. of the Int. Conf. on Image Analysis and Processing (ICIAP)},
    title = {ActiVis: Mobile Object Detection and Active Guidance for People with Visual Impairments},
    author = {Jacobus Lock and A. G. Tramontano and S. Ghidoni and Nicola Bellotto},
    year = {2019},
    url = {https://eprints.lincoln.ac.uk/id/eprint/36413/},
    abstract = {The ActiVis project aims to deliver a mobile system that is able to guide a person with visual impairments towards a target object or area in an unknown indoor environment. For this, it uses new developments in object detection, mobile computing, action generation and human-computer interfacing to interpret the user's surroundings and present effective guidance directions. Our approach to direction generation uses a Partially Observable Markov Decision Process (POMDP) to track the system's state and output the optimal location to be investigated. This system includes an object detector and an audio-based guidance interface to provide a complete active search pipeline. The ActiVis system was evaluated in a set of experiments showing better performance than a simpler unguided case.}
    }
  • J. Lock, G. Cielniak, and N. Bellotto, “Active object search with a mobile device for people with visual impairments,” in 14th international conference on computer vision theory and applications (visapp), 2019, p. 476–485. doi:10.5220/0007582304760485
    [BibTeX] [Abstract] [Download PDF]

    Modern smartphones can provide a multitude of services to assist people with visual impairments, and their cameras in particular can be useful for assisting with tasks, such as reading signs or searching for objects in unknown environments. Previous research has looked at ways to solve these problems by processing the camera's video feed, but very little work has been done in actively guiding the user towards specific points of interest, maximising the effectiveness of the underlying visual algorithms. In this paper, we propose a control algorithm based on a Markov Decision Process that uses a smartphone's camera to generate real-time instructions to guide a user towards a target object. The solution is part of a more general active vision application for people with visual impairments. An initial implementation of the system on a smartphone was experimentally evaluated with participants with healthy eyesight to determine the performance of the control algorithm. The results show the effectiveness of our solution and its potential application to help people with visual impairments find objects in unknown environments.

    @inproceedings{lincoln34596,
    booktitle = {14th International Conference on Computer Vision Theory and Applications (VISAPP)},
    title = {Active Object Search with a Mobile Device for People with Visual Impairments},
    author = {Jacobus Lock and Grzegorz Cielniak and Nicola Bellotto},
    publisher = {VISIGRAPP},
    year = {2019},
    pages = {476--485},
    doi = {10.5220/0007582304760485},
    url = {https://eprints.lincoln.ac.uk/id/eprint/34596/},
    abstract = {Modern smartphones can provide a multitude of services to assist people with visual impairments, and their cameras in particular can be useful for assisting with tasks, such as reading signs or searching for objects in unknown environments. Previous research has looked at ways to solve these problems by processing the camera's video feed, but very little work has been done in actively guiding the user towards specific points of interest, maximising the effectiveness of the underlying visual algorithms. In this paper, we propose a control algorithm based on a Markov Decision Process that uses a smartphone's camera to generate real-time instructions to guide a user towards a target object. The solution is part of a more general active vision application for people with visual impairments. An initial implementation of the system on a smartphone was experimentally evaluated with participants with healthy eyesight to determine the performance of the control algorithm. The results show the effectiveness of our solution and its potential application to help people with visual impairments find objects in unknown environments.}
    }
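    To give a feel for the direction-generation step, here is a deliberately simplified stand-in: a plain MDP (the paper's ActiVis system uses a POMDP with belief tracking) solved by value iteration on a small grid, where the greedy action at the user's cell becomes the guidance instruction. The grid size, reward and discount are all invented.

    import numpy as np

    # Value iteration on a 5x5 grid with the believed target cell as reward.
    H, W, gamma = 5, 5, 0.9
    reward = np.zeros((H, W))
    reward[2, 4] = 1.0  # believed target location (invented)
    actions = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

    def step(r, c, dr, dc):
        # Clamp moves to the grid boundaries.
        return min(max(r + dr, 0), H - 1), min(max(c + dc, 0), W - 1)

    V = np.zeros((H, W))
    for _ in range(100):
        V = np.array([[max(reward[step(r, c, *a)] + gamma * V[step(r, c, *a)]
                           for a in actions.values())
                       for c in range(W)] for r in range(H)])

    def guidance(r, c):
        # Greedy direction to announce to the user from cell (r, c).
        return max(actions, key=lambda k: V[step(r, c, *actions[k])])

    print(guidance(4, 0))  # points the user towards the target cell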
  • S. Lucarotti, C. M. Saaj, E. Allouis, and P. Bianco, “A self-reconfigurable undulating grasper for asteroid mining,” in 15th esa symposium on advanced space technologies in robotics and automation, 2019.
    [BibTeX] [Download PDF]
    @inproceedings{lincoln39624,
    booktitle = {15th ESA Symposium on Advanced Space Technologies in Robotics and Automation},
    title = {A Self-Reconfigurable Undulating Grasper for Asteroid Mining},
    author = {Suzanna Lucarotti and Chakravarthini M. Saaj and Elie Allouis and Paolo Bianco},
    year = {2019},
    url = {https://eprints.lincoln.ac.uk/id/eprint/39624/}
    }
  • J. Pajarinen, H. L. Thai, R. Akrour, J. Peters, and G. Neumann, “Compatible natural gradient policy search,” Machine learning, 2019. doi:10.1007/s10994-019-05807-0
    [BibTeX] [Abstract] [Download PDF]

    Trust-region methods have yielded state-of-the-art results in policy search. A common approach is to use KL-divergence to bound the region of trust resulting in a natural gradient policy update. We show that the natural gradient and trust region optimization are equivalent if we use the natural parameterization of a standard exponential policy distribution in combination with compatible value function approximation. Moreover, we show that standard natural gradient updates may reduce the entropy of the policy according to a wrong schedule leading to premature convergence. To control entropy reduction we introduce a new policy search method called compatible policy search (COPOS) which bounds entropy loss. The experimental results show that COPOS yields state-of-the-art results in challenging continuous control tasks and in discrete partially observable tasks.

    @article{lincoln36283,
    title = {Compatible natural gradient policy search},
    author = {J. Pajarinen and H.L. Thai and R. Akrour and J. Peters and Gerhard Neumann},
    publisher = {Springer},
    year = {2019},
    doi = {10.1007/s10994-019-05807-0},
    journal = {Machine Learning},
    url = {https://eprints.lincoln.ac.uk/id/eprint/36283/},
    abstract = {Trust-region methods have yielded state-of-the-art results in policy search. A common approach is to use KL-divergence to bound the region of trust resulting in a natural gradient policy update. We show that the natural gradient and trust region optimization are equivalent if we use the natural parameterization of a standard exponential policy distribution in combination with compatible value function approximation. Moreover, we show that standard natural gradient updates may reduce the entropy of the policy according to a wrong schedule leading to premature convergence. To control entropy reduction we introduce a new policy search method called compatible policy search (COPOS) which bounds entropy loss. The experimental results show that COPOS yields state-of-the-art results in challenging continuous control tasks and in discrete partially observable tasks.}
    }
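    As background to the trust-region update discussed above, the sketch below shows the generic KL-bounded natural-gradient step that such methods build on: the step direction is F^-1 g and its length is scaled so that the quadratic KL approximation meets the trust-region bound. The gradient and Fisher matrix here are random placeholders, and the entropy-control machinery specific to COPOS is omitted.

    import numpy as np

    rng = np.random.default_rng(0)

    # Placeholder policy gradient g and Fisher information matrix F;
    # in practice both are estimated from sampled trajectories.
    dim = 4
    g = rng.normal(size=dim)
    A = rng.normal(size=(dim, dim))
    F = A @ A.T + 1e-3 * np.eye(dim)      # symmetric positive definite

    # Natural gradient direction: F^-1 g.
    nat_grad = np.linalg.solve(F, g)

    # Scale the step so the quadratic KL approximation
    # 0.5 * step^T F step equals the trust-region bound delta.
    delta = 0.01
    step = np.sqrt(2.0 * delta / (nat_grad @ F @ nat_grad)) * nat_grad

    theta = np.zeros(dim)                 # policy parameters
    theta += step                         # one KL-bounded update
    print(0.5 * step @ F @ step)          # = delta, up to rounding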
  • A. R. Panisson, Ş. Sarkadi, P. McBurney, S. Parsons, and R. H. Bordini, “On the formal semantics of theory of mind in agent communication,” Lecture notes in computer science (including subseries lecture notes in artificial intelligence and lecture notes in bioinformatics), vol. 11327, p. 18–32, 2019. doi:10.1007/978-3-030-17294-7_2
    [BibTeX] [Download PDF]
    @article{lincoln38400,
    volume = {11327},
    author = {A.R. Panisson and {\c S}. Sarkadi and P. McBurney and Simon Parsons and R.H. Bordini},
    note = {cited By 0},
    title = {On the Formal Semantics of Theory of Mind in Agent Communication},
    journal = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
    doi = {10.1007/978-3-030-17294-7_2},
    pages = {18--32},
    year = {2019},
    url = {https://eprints.lincoln.ac.uk/id/eprint/38400/}
    }
  • Ş. Sarkadi, A. R. Panisson, R. H. Bordini, P. McBurney, and S. Parsons, “Towards an approach for modelling uncertain theory of mind in multi-agent systems,” Lecture notes in computer science (including subseries lecture notes in artificial intelligence and lecture notes in bioinformatics), vol. 11327, p. 3–17, 2019. doi:10.1007/978-3-030-17294-7_1
    [BibTeX] [Abstract] [Download PDF]

    Applying Theory of Mind to multi-agent systems enables agents to model and reason about other agents' minds. Recent work shows that this ability could increase the performance of agents, making them more efficient than agents that lack this ability. However, modelling other agents' minds is a difficult task, given that it involves many factors of uncertainty, e.g., the uncertainty of the communication channel, the uncertainty of reading other agents correctly, and the uncertainty of trust in other agents. In this paper, we explore how agents acquire and update Theory of Mind under conditions of uncertainty. To represent uncertain Theory of Mind, we add probability estimation to a formal semantics model for agent communication based on the BDI architecture and agent communication languages.

    @article{lincoln38399,
    volume = {11327},
    author = {{\c S}. Sarkadi and A.R. Panisson and R.H. Bordini and P. McBurney and S. Parsons},
    note = {cited By 0},
    title = {Towards an Approach for Modelling Uncertain Theory of Mind in Multi-Agent Systems},
    journal = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
    doi = {10.1007/978-3-030-17294-7_1},
    pages = {3--17},
    year = {2019},
    url = {https://eprints.lincoln.ac.uk/id/eprint/38399/},
    abstract = {Applying Theory of Mind to multi-agent systems enables agents to model and reason about other agents' minds. Recent work shows that this ability could increase the performance of agents, making them more efficient than agents that lack this ability. However, modelling other agents' minds is a difficult task, given that it involves many factors of uncertainty, e.g., the uncertainty of the communication channel, the uncertainty of reading other agents correctly, and the uncertainty of trust in other agents. In this paper, we explore how agents acquire and update Theory of Mind under conditions of uncertainty. To represent uncertain Theory of Mind, we add probability estimation to a formal semantics model for agent communication based on the BDI architecture and agent communication languages.}
    }
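    A minimal sketch of the kind of probability estimation described above, assuming a simple Bayesian update of one agent's distribution over another agent's beliefs, tempered by a trust factor; the trust model and all numbers are invented for illustration.

    # Hypothetical uncertain Theory of Mind update: agent A keeps a
    # distribution over what agent B believes and revises it after B
    # asserts "p", discounted by A's trust in B. All numbers assumed.
    beliefs = {"B_believes_p": 0.5, "B_believes_not_p": 0.5}

    # Likelihood of B asserting "p" under each hypothesised belief.
    likelihood = {"B_believes_p": 0.9, "B_believes_not_p": 0.2}

    trust = 0.8   # confidence that the assertion/channel is reliable

    # Bayes update tempered by trust (trust = 0 keeps the prior).
    posterior = {h: beliefs[h] * (trust * likelihood[h] + (1 - trust))
                 for h in beliefs}
    total = sum(posterior.values())
    posterior = {h: v / total for h, v in posterior.items()}
    print(posterior)   # mass shifts towards "B believes p"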
  • Ş. Sarkadi, A. R. Panisson, R. H. Bordini, P. McBurney, S. Parsons, and M. Chapman, “Modelling deception using theory of mind in multi-agent systems,” Ai communications, vol. 32, iss. 4, p. 287–302, 2019. doi:10.3233/AIC-190615
    [BibTeX] [Abstract] [Download PDF]

    Agreement, cooperation and trust would be straightforward if deception did not ever occur in communicative interactions. Humans have deceived one another since the species began. Do machines deceive one another or indeed humans? If they do, how may we detect this? Detecting machine deception arguably requires a model of how machines may deceive, and of how such deception may be identified. Theory of Mind (ToM) provides the opportunity to create intelligent machines that are able to model the minds of other agents. The future implications of a machine that has the capability to understand other minds (human or artificial) and that also has the reasons and intentions to deceive others are dark from an ethical perspective. Being able to understand the dishonest and unethical behaviour of such machines is crucial to current research in AI. In this paper, we present a high-level approach for modelling machine deception using ToM under factors of uncertainty and we propose an implementation of this model in an Agent-Oriented Programming Language (AOPL). We show that the Multi-Agent Systems (MAS) paradigm can be used to integrate concepts from two major theories of deception, namely Information Manipulation Theory 2 (IMT2) and Interpersonal Deception Theory (IDT), and how to apply these concepts in order to build a model of computational deception that takes into account ToM. To show how agents use ToM in order to deceive, we define an epistemic agent mechanism using BDI-like architectures to analyse deceptive interactions between deceivers and their potential targets and we also explain the steps in which the model can be implemented in an AOPL. To the best of our knowledge, this work is one of the first attempts in AI that (i) uses ToM along with components of IMT2 and IDT in order to analyse deceptive interactions and (ii) implements such a model.

    @article{lincoln38401,
    volume = {32},
    number = {4},
    author = {{\c S}. Sarkadi and A.R. Panisson and R.H. Bordini and P. McBurney and S. Parsons and M. Chapman},
    note = {cited By 0},
    title = {Modelling deception using theory of mind in multi-agent systems},
    year = {2019},
    journal = {AI Communications},
    doi = {10.3233/AIC-190615},
    pages = {287--302},
    url = {https://eprints.lincoln.ac.uk/id/eprint/38401/},
    abstract = {Agreement, cooperation and trust would be straightforward if deception did not ever occur in communicative interactions. Humans have deceived one another since the species began. Do machines deceive one another or indeed humans? If they do, how may we detect this? Detecting machine deception arguably requires a model of how machines may deceive, and of how such deception may be identified. Theory of Mind (ToM) provides the opportunity to create intelligent machines that are able to model the minds of other agents. The future implications of a machine that has the capability to understand other minds (human or artificial) and that also has the reasons and intentions to deceive others are dark from an ethical perspective. Being able to understand the dishonest and unethical behaviour of such machines is crucial to current research in AI. In this paper, we present a high-level approach for modelling machine deception using ToM under factors of uncertainty and we propose an implementation of this model in an Agent-Oriented Programming Language (AOPL). We show that the Multi-Agent Systems (MAS) paradigm can be used to integrate concepts from two major theories of deception, namely Information Manipulation Theory 2 (IMT2) and Interpersonal Deception Theory (IDT), and how to apply these concepts in order to build a model of computational deception that takes into account ToM. To show how agents use ToM in order to deceive, we define an epistemic agent mechanism using BDI-like architectures to analyse deceptive interactions between deceivers and their potential targets and we also explain the steps in which the model can be implemented in an AOPL. To the best of our knowledge, this work is one of the first attempts in AI that (i) uses ToM along with components of IMT2 and IDT in order to analyse deceptive interactions and (ii) implements such a model.}
    }
  • I. Sassoon, N. Kökciyan, E. Sklar, and S. Parsons, “Explainable argumentation for wellness consultation,” Lecture notes in computer science (including subseries lecture notes in artificial intelligence and lecture notes in bioinformatics), vol. 11763, p. 186–202, 2019. doi:10.1007/978-3-030-30391-4_11
    [BibTeX] [Download PDF]
    @article{lincoln38398,
    volume = {11763},
    author = {I. Sassoon and N. K{\"o}kciyan and E. Sklar and Simon Parsons},
    note = {cited By 0},
    title = {Explainable argumentation for wellness consultation},
    journal = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
    doi = {10.1007/978-3-030-30391-4_11},
    pages = {186--202},
    year = {2019},
    url = {https://eprints.lincoln.ac.uk/id/eprint/38398/}
    }
  • A. Seddaoui, C. M. Saaj, and S. Eckersley, “Collision-free optimal trajectory generator for a controlled floating space robot,” in Towards autonomous robotic systems conference, 2019.
    [BibTeX] [Download PDF]
    @inproceedings{lincoln39420,
    booktitle = {Towards Autonomous Robotic Systems Conference},
    title = {Collision-Free Optimal Trajectory Generator for a Controlled Floating Space Robot},
    author = {Asma Seddaoui and Chakravarthini M. Saaj and Steve Eckersley},
    year = {2019},
    url = {https://eprints.lincoln.ac.uk/id/eprint/39420/}
    }
  • D. Zhang, E. Schneider, and E. Sklar, “A cross-landscape evaluation of multi-robot team performance in static task-allocation domains,” Lecture notes in computer science (including subseries lecture notes in artificial intelligence and lecture notes in bioinformatics), vol. 11650, p. 261–272, 2019. doi:10.1007/978-3-030-25332-5
    [BibTeX] [Download PDF]
    @article{lincoln38537,
    volume = {11650},
    author = {D. Zhang and E. Schneider and Elizabeth Sklar},
    note = {cited By 0},
    title = {A cross-landscape evaluation of multi-robot team performance in static task-allocation domains},
    journal = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
    doi = {10.1007/978-3-030-25332-5},
    pages = {261--272},
    year = {2019},
    url = {https://eprints.lincoln.ac.uk/id/eprint/38537/}
    }
  • T. Zhivkov, E. Schneider, and E. Sklar, “Mrcomm: multi-robot communication testbed,” Lecture notes in computer science (including subseries lecture notes in artificial intelligence and lecture notes in bioinformatics), vol. 11650, p. 346–357, 2019. doi:10.1007/978-3-030-25332-5
    [BibTeX] [Download PDF]
    @article{lincoln38538,
    volume = {11650},
    author = {T. Zhivkov and E. Schneider and Elizabeth Sklar},
    note = {cited By 0},
    title = {MRComm: Multi-robot communication testbed},
    journal = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
    doi = {10.1007/978-3-030-25332-5},
    pages = {346--357},
    year = {2019},
    url = {https://eprints.lincoln.ac.uk/id/eprint/38538/}
    }

2018

  • F. J. Comin, C. Saaj, S. M. Mustaza, and R. Saaj, “Safe testing of electrical diathermy cutting using a new generation soft manipulator,” Ieee transactions on robotics, vol. 34, iss. 6, p. 1659–1666, 2018. doi:10.1109/TRO.2018.2861898
    [BibTeX] [Abstract] [Download PDF]

    This paper presents the first demonstration of a pneumatic soft continuum robot integrated in series with a rigid robot arm, safely performing teleoperated diathermic tissue-cutting. The rigid arm autonomously maintains a safe tool contact force, while the soft arm manually follows the desired cutting path. Ex-vivo experimentation demonstrates submillimetric deviations from target paths.

    @article{lincoln37426,
    volume = {34},
    number = {6},
    month = {December},
    author = {F.J. Comin and C. Saaj and S.M. Mustaza and R. Saaj},
    note = {cited By 0},
    title = {Safe Testing of Electrical Diathermy Cutting Using a New Generation Soft Manipulator},
    publisher = {IEEE},
    year = {2018},
    journal = {IEEE Transactions on Robotics},
    doi = {10.1109/TRO.2018.2861898},
    pages = {1659--1666},
    url = {https://eprints.lincoln.ac.uk/id/eprint/37426/},
    abstract = {This paper presents the first demonstration of a pneumatic soft continuum robot integrated in series with a rigid robot arm, safely performing teleoperated diathermic tissue-cutting. The rigid arm autonomously maintains a safe tool contact force, while the soft arm manually follows the desired cutting path. Ex-vivo experimentation demonstrates submillimetric deviations from target paths.}
    }
  • H. Cuayahuitl, S. Ryu, D. Lee, and J. Kim, “A study on dialogue reward prediction for open-ended conversational agents,” in Neurips workshop on conversational ai, 2018.
    [BibTeX] [Abstract] [Download PDF]

    The amount of dialogue history to include in a conversational agent is often underestimated and/or set in an empirical and thus possibly naive way. This suggests that principled investigations into optimal context windows are urgently needed given that the amount of dialogue history and corresponding representations can play an important role in the overall performance of a conversational system. This paper studies the amount of history required by conversational agents for reliably predicting dialogue rewards. The task of dialogue reward prediction is chosen for investigating the effects of varying amounts of dialogue history and their impact on system performance. Experimental results using a dataset of 18K human-human dialogues report that lengthy dialogue histories of at least 10 sentences are preferred (25 sentences being the best in our experiments) over short ones, and that lengthy histories are useful for training dialogue reward predictors with strong positive correlations between target dialogue rewards and predicted ones.

    @inproceedings{lincoln34433,
    booktitle = {NeurIPS Workshop on Conversational AI},
    month = {December},
    title = {A Study on Dialogue Reward Prediction for Open-Ended Conversational Agents},
    author = {Heriberto Cuayahuitl and Seonghan Ryu and Donghyeon Lee and Jihie Kim},
    publisher = {arXiv},
    year = {2018},
    url = {https://eprints.lincoln.ac.uk/id/eprint/34433/},
    abstract = {The amount of dialogue history to include in a conversational agent is often underestimated and/or set in an empirical and thus possibly naive way. This suggests that principled investigations into optimal context windows are urgently needed given that the amount of dialogue history and corresponding representations can play an important role in the overall performance of a conversational system. This paper studies the amount of history required by conversational agents for reliably predicting dialogue rewards. The task of dialogue reward prediction is chosen for investigating the effects of varying amounts of dialogue history and their impact on system performance. Experimental results using a dataset of 18K human-human dialogues report that lengthy dialogue histories of at least 10 sentences are preferred (25 sentences being the best in our experiments) over short ones, and that lengthy histories are useful for training dialogue reward predictors with strong positive correlations between target dialogue rewards and predicted ones.}
    }
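    The study's central variable, the amount of dialogue history, can be illustrated with a small sketch of how training pairs for a reward predictor might be built with different context-window lengths. The data and helper function below are hypothetical, not the paper's pipeline.

    # Hypothetical construction of training pairs for a dialogue reward
    # predictor: each example pairs the last `history_len` sentences of
    # a dialogue with its reward label. Data below is invented.
    def make_examples(dialogues, history_len):
        """dialogues: list of (sentences, reward) pairs."""
        examples = []
        for sentences, reward in dialogues:
            context = sentences[-history_len:]    # truncate the history
            examples.append((" ".join(context), reward))
        return examples

    dialogues = [(["hi", "hello", "how are you?", "fine thanks"], 0.7)]
    for h in (1, 10, 25):    # the paper reports ~25 sentences as best
        print(h, make_examples(dialogues, h))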
  • H. Wang, S. Yue, J. Peng, P. Baxter, C. Zhang, and Z. Wang, “A model for detection of angular velocity of image motion based on the temporal tuning of the drosophila,” in Icann 2018, 2018, p. 37–46. doi:10.1007/978-3-030-01421-6_4
    [BibTeX] [Abstract] [Download PDF]

    We propose a new bio-plausible model based on the visual systems of Drosophila for estimating angular velocity of image motion in insects' eyes. The model implements both preferred direction motion enhancement and non-preferred direction motion suppression, recently discovered in Drosophila's visual neural circuits, to give a stronger directional selectivity. In addition, the angular velocity detecting model (AVDM) produces a response largely independent of the spatial frequency in grating experiments which enables insects to estimate the flight speed in cluttered environments. This also coincides with the behaviour experiments of honeybees flying through tunnels with stripes of different spatial frequencies.

    @inproceedings{lincoln33104,
    month = {December},
    author = {Huatian Wang and Shigang Yue and Jigen Peng and Paul Baxter and Chun Zhang and Zhihua Wang},
    booktitle = {ICANN 2018},
    title = {A Model for Detection of Angular Velocity of Image Motion Based on the Temporal Tuning of the Drosophila},
    publisher = {Springer, Cham},
    doi = {10.1007/978-3-030-01421-6\_4},
    pages = {37--46},
    year = {2018},
    url = {https://eprints.lincoln.ac.uk/id/eprint/33104/},
    abstract = {We propose a new bio-plausible model based on the visual systems of Drosophila for estimating angular velocity of image motion in insects' eyes. The model implements both preferred direction motion enhancement and non-preferred direction motion suppression, recently discovered in Drosophila's visual neural circuits, to give a stronger directional selectivity. In addition, the angular velocity detecting model (AVDM) produces a response largely independent of the spatial frequency in grating experiments which enables insects to estimate the flight speed in cluttered environments. This also coincides with the behaviour experiments of honeybees flying through tunnels with stripes of different spatial frequencies.}
    }
  • Q. Fu, N. Bellotto, C. Hu, and S. Yue, “Performance of a visual fixation model in an autonomous micro robot inspired by drosophila physiology,” in Ieee international conference on robotics and biomimetics, 2018.
    [BibTeX] [Abstract] [Download PDF]

    In nature, lightweight and low-powered insects are ideal model systems to study motion perception strategies. Understanding the underlying characteristics and functionality of insects' visual systems is not only attractive to neural system modellers, but also critical in providing effective solutions to future robotics. This paper presents a novel modelling of dynamic vision system inspired by Drosophila physiology for mimicking fast motion tracking and a closed-loop behavioural response to fixation. The proposed model was realised on an embedded system in an autonomous micro robot which has limited computational resources. A monocular camera was applied as the only motion sensing modality. Systematic experiments including open-loop and closed-loop bio-robotic tests validated the proposed visual fixation model: the robot showed motion tracking and fixation behaviours similarly to insects; the image processing frequency can maintain 25~45Hz. Arena tests also demonstrated a successful following behaviour aroused by fixation in navigation.

    @inproceedings{lincoln33846,
    booktitle = {IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND BIOMIMETICS},
    month = {December},
    title = {Performance of a Visual Fixation Model in an Autonomous Micro Robot Inspired by Drosophila Physiology},
    author = {Qinbing Fu and Nicola Bellotto and Cheng Hu and Shigang Yue},
    year = {2018},
    url = {https://eprints.lincoln.ac.uk/id/eprint/33846/},
    abstract = {In nature, lightweight and low-powered insects are ideal model systems to study motion perception strategies. Understanding the underlying characteristics and functionality of insects' visual systems is not only attractive to neural system modellers, but also critical in providing effective solutions to future robotics. This paper presents a novel modelling of dynamic vision system inspired by Drosophila physiology for mimicking fast motion tracking and a closed-loop behavioural response to fixation. The proposed model was realised on an embedded system in an autonomous micro robot which has limited computational resources. A monocular camera was applied as the only motion sensing modality. Systematic experiments including open-loop and closed-loop bio-robotic tests validated the proposed visual fixation model: the robot showed motion tracking and fixation behaviours similarly to insects; the image processing frequency can maintain 25 {$\sim$} 45Hz. Arena tests also demonstrated a successful following behaviour aroused by fixation in navigation.}
    }
  • W. Lewinger, F. Comin, M. Matthews, and C. Saaj, “Earth analogue testing and analysis of martian duricrust properties,” Acta astronautica, vol. 152, p. 567–579, 2018. doi:10.1016/j.actaastro.2018.05.025
    [BibTeX] [Abstract] [Download PDF]

    Previous and current Mars rover missions have noted a nearly ubiquitous presence of duricrusts on the planet surface. Duricrusts are thin, brittle layers of cemented regolith that cover the underlying terrain. In some cases, the duricrust hides safe or relatively safe terrain underneath the top soil. However, as was observed by both Mars exploration rovers, Spirit and Opportunity, such crusts can also hide loose, untrafficable terrain, leading to Spirit becoming permanently incapacitated in 2009. Whilst several reports of the Martian surface have indicated the presence of duricrusts, none have been able to provide details on the physical properties of the material, which may indicate the level of safe traversability of duricrust terrains. This paper presents the findings of testing terrestrially-created duricrusts with simulated Martian soil properties, in order to determine the properties of such duricrusts and to discover what level of hazard they may represent (e.g. can vehicles traverse the duricrust surface without penetration to lower sub-surface soils?). Combinations of elements that have been observed in the Martian soil were used as the basis for forming the laboratory-created duricrusts. Variations in duricrust thickness, water content, and the iron oxide compound were investigated. As was observed throughout the testing process, duricrusts behave in a rather brittle fashion and are easily destroyed by low surface pressures. This indicates that duricrusts are not safe for traversing and they present a definite hazard for travelling on the Martian landscape when utilising only visual terrain classification, as the surface appearance is not necessarily representative of what may be lying beneath.

    @article{lincoln37427,
    volume = {152},
    month = {November},
    author = {W. Lewinger and F. Comin and M. Matthews and C. Saaj},
    note = {cited By 0},
    title = {Earth analogue testing and analysis of Martian duricrust properties},
    year = {2018},
    journal = {Acta Astronautica},
    doi = {10.1016/j.actaastro.2018.05.025},
    pages = {567--579},
    url = {https://eprints.lincoln.ac.uk/id/eprint/37427/},
    abstract = {Previous and current Mars rover missions have noted a nearly ubiquitous presence of duricrusts on the planet surface. Duricrusts are thin, brittle layers of cemented regolith that cover the underlying terrain. In some cases, the duricrust hides safe or relatively safe terrain underneath the top soil. However, as was observed by both Mars exploration rovers, Spirit and Opportunity, such crusts can also hide loose, untrafficable terrain, leading to Spirit becoming permanently incapacitated in 2009. Whilst several reports of the Martian surface have indicated the presence of duricrusts, none have been able to provide details on the physical properties of the material, which may indicate the level of safe traversability of duricrust terrains. This paper presents the findings of testing terrestrially-created duricrusts with simulated Martian soil properties, in order to determine the properties of such duricrusts and to discover what level of hazard they may represent (e.g. can vehicles traverse the duricrust surface without penetration to lower sub-surface soils?). Combinations of elements that have been observed in the Martian soil were used as the basis for forming the laboratory-created duricrusts. Variations in duricrust thickness, water content, and the iron oxide compound were investigated. As was observed throughout the testing process, duricrusts behave in a rather brittle fashion and are easily destroyed by low surface pressures. This indicates that duricrusts are not safe for traversing and they present a definite hazard for travelling on the Martian landscape when utilising only visual terrain classification, as the surface appearance is not necessarily representative of what may be lying beneath.}
    }
  • J. O'Keeffe, D. Tarapore, A. Millard, and J. Timmis, “Adaptive online fault diagnosis in autonomous robot swarms,” Frontiers in robotics and ai, vol. 5, p. 131, 2018. doi:10.3389/frobt.2018.00131
    [BibTeX] [Abstract] [Download PDF]

    Previous work has shown that robot swarms are not always tolerant to the failure of individual robots, particularly those that have only partially failed and continue to contribute to collective behaviors. A case has been made for an active approach to fault tolerance in swarm robotic systems, whereby the swarm can identify and resolve faults that occur during operation. Existing approaches to active fault tolerance in swarms have so far omitted fault diagnosis, however we propose that diagnosis is a feature of active fault tolerance that is necessary if swarms are to obtain long-term autonomy. This paper presents a novel method for fault diagnosis that attempts to imitate some of the observed functions of the natural immune system. The results of our simulated experiments show that our system is flexible, scalable, and improves swarm tolerance to various electro-mechanical faults in the cases examined.

    @article{lincoln43299,
    volume = {5},
    month = {November},
    author = {James O'Keeffe and Danesh Tarapore and Alan Millard and Jon Timmis},
    title = {Adaptive Online Fault Diagnosis in Autonomous Robot Swarms},
    journal = {Frontiers in Robotics and AI},
    doi = {10.3389/frobt.2018.00131},
    pages = {131},
    year = {2018},
    url = {https://eprints.lincoln.ac.uk/id/eprint/43299/},
    abstract = {Previous work has shown that robot swarms are not always tolerant to the failure of individual robots, particularly those that have only partially failed and continue to contribute to collective behaviors. A case has been made for an active approach to fault tolerance in swarm robotic systems, whereby the swarm can identify and resolve faults that occur during operation. Existing approaches to active fault tolerance in swarms have so far omitted fault diagnosis, however we propose that diagnosis is a feature of active fault tolerance that is necessary if swarms are to obtain long-term autonomy. This paper presents a novel method for fault diagnosis that attempts to imitate some of the observed functions of the natural immune system. The results of our simulated experiments show that our system is flexible, scalable, and improves swarm tolerance to various electro-mechanical faults in the cases examined.}
    }
  • F. Camara, O. Giles, M. Rothmuller, P. Rasmussen, A. Vendelbo-Larsen, G. Markkula, Y-M. Lee, N. Merat, and C. Fox, “Predicting pedestrian road-crossing assertiveness for autonomous vehicle control,” in 21st ieee international conference on intelligent transportation systems, 2018.
    [BibTeX] [Abstract] [Download PDF]

    Autonomous vehicles (AVs) must interact with other road users including pedestrians. Unlike passive environments, pedestrians are active agents having their own utilities and decisions, which must be inferred and predicted by AVs in order to control interactions with them and navigation around them. In particular, when a pedestrian wishes to cross the road in front of the vehicle at an unmarked crossing, the pedestrian and AV must compete for the space, which may be considered as a game-theoretic interaction in which one agent must yield to the other. To inform AV controllers in this setting, this study collects and analyses data from real-world human road crossings to determine what features of crossing behaviours are predictive about the level of assertiveness of pedestrians and of the eventual winner of the interactions. It presents the largest and most detailed data set of its kind known to us, and new methods to analyze and predict pedestrian-vehicle interactions based upon it. Pedestrian-vehicle interactions are decomposed into sequences of independent discrete events. We use probabilistic methods (regression and decision tree regression) and sequence analysis to analyze sets and sub-sequences of actions used by both pedestrians and human drivers while crossing at an intersection, to find common patterns of behaviour and to predict the winner of each interaction. We report on the particular features found to be predictive and which can thus be integrated into game-theoretic AV controllers to inform real-time interactions.

    @inproceedings{lincoln33089,
    booktitle = {21st IEEE International Conference on Intelligent Transportation Systems},
    month = {November},
    title = {Predicting pedestrian road-crossing assertiveness for autonomous vehicle control},
    author = {F Camara and O Giles and M Rothmuller and PH Rasmussen and A Vendelbo-Larsen and G Markkula and Y-M Lee and N Merat and Charles Fox},
    publisher = {IEEE},
    year = {2018},
    url = {https://eprints.lincoln.ac.uk/id/eprint/33089/},
    abstract = {Autonomous vehicles (AVs) must interact with other road users including pedestrians. Unlike passive environments, pedestrians are active agents having their own utilities and decisions, which must be inferred and predicted by AVs in order to control interactions with them and navigation around them. In particular, when a pedestrian wishes to cross the road in front of the vehicle at an unmarked crossing, the pedestrian and AV must compete for the space, which may be considered as a game-theoretic interaction in which one agent must yield to the other. To inform AV controllers in this setting, this study collects and analyses data from real-world human road crossings to determine what features of crossing behaviours are predictive about the level of assertiveness of pedestrians and of the eventual winner of the interactions. It presents the largest and most detailed data set of its kind known to us, and new methods to analyze and predict pedestrian-vehicle interactions based upon it. Pedestrian-vehicle interactions are decomposed into sequences of independent discrete events. We use probabilistic methods (regression and decision tree regression) and sequence analysis to analyze sets and sub-sequences of actions used by both pedestrians and human drivers while crossing at an intersection, to find common patterns of behaviour and to predict the winner of each interaction. We report on the particular features found to be predictive and which can thus be integrated into game-theoretic AV controllers to inform real-time interactions.}
    }
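    One of the probabilistic methods named above, decision tree regression, can be sketched as follows on synthetic event-count features; the feature design and data are invented, and only the modelling pattern is illustrated.

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    # Synthetic event-count features per pedestrian-vehicle interaction
    # (e.g. counts of discrete events such as "pedestrian advances" or
    # "vehicle slows"), with an assertiveness score as the target.
    rng = np.random.default_rng(1)
    X = rng.integers(0, 5, size=(200, 4)).astype(float)
    y = 0.5 * X[:, 0] - 0.3 * X[:, 1] + rng.normal(0, 0.1, size=200)

    model = DecisionTreeRegressor(max_depth=3).fit(X, y)
    print(model.predict(X[:3]))           # predicted assertiveness
    print(model.feature_importances_)     # which events are predictive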
  • K. Goher and S. Fadlallah, “Pid, bfo-optimized pid, and pd-flc control of a two-wheeled machine with two-direction handling mechanism: a comparative study,” Robotics and biomimetics, vol. 5, iss. 6, 2018. doi:10.1186/s40638-018-0089-3
    [BibTeX] [Abstract] [Download PDF]

    In this paper, three control approaches are utilized in order to control the stability of a novel five-degrees-of-freedom two-wheeled robotic machine designed for industrial applications that demand a limited-space working environment. A proportional-integral-derivative (PID) control scheme, bacterial foraging optimization of the PID control method, and a fuzzy logic control method are applied to the wheeled machine to obtain the optimum control strategy that provides the best system stabilization performance. According to simulation results, considering multiple motion scenarios, the PID controller optimized by the bacterial foraging optimization method outperformed the other two control methods in terms of minimum overshoot, rise time, and applied input forces.

    @article{lincoln34106,
    volume = {5},
    number = {6},
    month = {November},
    author = {Khaled Goher and Sulaiman Fadlallah},
    title = {PID, BFO-optimized PID, and PD-FLC control of a two-wheeled machine with two-direction handling mechanism: a comparative study},
    publisher = {SpringerOpen},
    year = {2018},
    journal = {Robotics and Biomimetics},
    doi = {10.1186/s40638-018-0089-3},
    url = {https://eprints.lincoln.ac.uk/id/eprint/34106/},
    abstract = {In this paper, three control approaches are utilized in order to control the stability of a novel five-degrees-of-freedom two-wheeled robotic machine designed for industrial applications that demand a limited-space working environment. A proportional-integral-derivative (PID) control scheme, bacterial foraging optimization of the PID control method, and a fuzzy logic control method are applied to the wheeled machine to obtain the optimum control strategy that provides the best system stabilization performance. According to simulation results, considering multiple motion scenarios, the PID controller optimized by the bacterial foraging optimization method outperformed the other two control methods in terms of minimum overshoot, rise time, and applied input forces.}
    }
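    For reference, the sketch below shows a textbook discrete PID loop of the kind compared in the paper; the gains (which bacterial foraging optimization would search over) and the toy plant dynamics are placeholders.

    # Textbook discrete PID loop, the structure the paper compares in
    # plain and BFO-tuned forms. Gains and plant dynamics are placeholders.
    class PID:
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_err = 0.0

        def update(self, setpoint, measurement):
            err = setpoint - measurement
            self.integral += err * self.dt
            deriv = (err - self.prev_err) / self.dt
            self.prev_err = err
            return self.kp * err + self.ki * self.integral + self.kd * deriv

    pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)   # gains BFO would search
    angle = 0.2                                  # initial tilt (rad)
    for _ in range(1000):
        u = pid.update(0.0, angle)               # regulate to upright
        angle += (u - 0.5 * angle) * 0.01        # crude first-order plant
    print(round(angle, 4))                       # settles near 0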
  • Q. Fu, C. Hu, J. Peng, and S. Yue, “Shaping the collision selectivity in a looming sensitive neuron model with parallel on and off pathways and spike frequency adaptation,” Neural networks, vol. 106, p. 127–143, 2018. doi:10.1016/j.neunet.2018.04.001
    [BibTeX] [Abstract] [Download PDF]

    Shaping the collision selectivity in vision-based artificial collision-detecting systems is still an open challenge. This paper presents a novel neuron model of a locust looming detector, i.e. the lobula giant movement detector (LGMD1), in order to provide effective solutions to enhance the collision selectivity of looming objects over other visual challenges. We propose an approach to model the biologically plausible mechanisms of ON and OFF pathways and a biophysical mechanism of spike frequency adaptation (SFA) in the proposed LGMD1 visual neural network. The ON and OFF pathways can separate both dark and light looming features for parallel spatiotemporal computations. This works effectively on perceiving a potential collision from dark or light objects that approach; such a bio-plausible structure can also separate LGMD1’s collision selectivity to its neighbouring looming detector – the LGMD2. The SFA mechanism can enhance the LGMD1’s collision selectivity to approaching objects rather than receding and translating stimuli, which is a significant improvement compared with similar LGMD1 neuron models. The proposed framework has been tested using off-line tests of synthetic and real-world stimuli, as well as on-line bio-robotic tests. The enhanced collision selectivity of the proposed model has been validated in systematic experiments. The computational simplicity and robustness of this work have also been verified by the bio-robotic tests, which demonstrates potential in building neuromorphic sensors for collision detection in both a fast and reliable manner.

    @article{lincoln31536,
    volume = {106},
    month = {October},
    author = {Qinbing Fu and Cheng Hu and Jigen Peng and Shigang Yue},
    title = {Shaping the collision selectivity in a looming sensitive neuron model with parallel ON and OFF pathways and spike frequency adaptation},
    publisher = {Elsevier for European Neural Network Society (ENNS)},
    year = {2018},
    journal = {Neural Networks},
    doi = {10.1016/j.neunet.2018.04.001},
    pages = {127--143},
    url = {https://eprints.lincoln.ac.uk/id/eprint/31536/},
    abstract = {Shaping the collision selectivity in vision-based artificial collision-detecting systems is still an open challenge. This paper presents a novel neuron model of a locust looming detector, i.e. the lobula giant movement detector (LGMD1), in order to provide effective solutions to enhance the collision selectivity of looming objects over other visual challenges. We propose an approach to model the biologically plausible mechanisms of ON and OFF pathways and a biophysical mechanism of spike frequency adaptation (SFA) in the proposed LGMD1 visual neural network. The ON and OFF pathways can separate both dark and light looming features for parallel spatiotemporal computations. This works effectively on perceiving a potential collision from dark or light objects that approach; such a bio-plausible structure can also separate LGMD1's collision selectivity to its neighbouring looming detector -- the LGMD2. The SFA mechanism can enhance the LGMD1's collision selectivity to approaching objects rather than receding and translating stimuli, which is a significant improvement compared with similar LGMD1 neuron models. The proposed framework has been tested using off-line tests of synthetic and real-world stimuli, as well as on-line bio-robotic tests. The enhanced collision selectivity of the proposed model has been validated in systematic experiments. The computational simplicity and robustness of this work have also been verified by the bio-robotic tests, which demonstrates potential in building neuromorphic sensors for collision detection in both a fast and reliable manner.}
    }
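    The ON/OFF separation and spike frequency adaptation mechanisms described above can be caricatured in a few lines: half-wave rectification splits the luminance change into brightening and dimming channels, and an adaptive term suppresses a sustained response. This is an illustrative toy, not the LGMD1 model itself; the frames and parameters are invented.

    import numpy as np

    # ON/OFF separation of luminance change, as in insect-inspired looming
    # detectors: brightening drives the ON channel, dimming the OFF channel.
    rng = np.random.default_rng(0)
    prev = rng.random((32, 32))            # previous frame
    curr = rng.random((32, 32))            # current frame

    diff = curr - prev
    on_channel = np.maximum(diff, 0.0)     # brightening only
    off_channel = np.maximum(-diff, 0.0)   # dimming only

    # Adaptation suppresses the response to a sustained, unchanging input
    # (a crude stand-in for SFA; parameters assumed).
    tau_a, adaptation = 0.9, 0.0
    for t in range(10):
        drive = on_channel.sum() + off_channel.sum()
        response = max(drive - adaptation, 0.0)
        adaptation = tau_a * adaptation + 0.1 * response
        print(t, round(response, 2))       # response declines over time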
  • S. Indurthi, S. Yu, S. Back, and H. Cuayahuitl, “Cut to the chase: a context zoom-in network for reading comprehension,” in Empirical methods in natural language processing (emnlp), 2018.
    [BibTeX] [Abstract] [Download PDF]

    In recent years many deep neural networks have been proposed to solve Reading Comprehension (RC) tasks. Most of these models suffer from reasoning over long documents and do not trivially generalize to cases where the answer is not present as a span in a given document. We present a novel neural-based architecture that is capable of extracting relevant regions based on a given question-document pair and generating a well-formed answer. To show the effectiveness of our architecture, we conducted several experiments on the recently proposed and challenging RC dataset 'NarrativeQA'. The proposed architecture outperforms state-of-the-art results (Tay et al., 2018) by 12.62% (ROUGE-L) relative improvement.

    @inproceedings{lincoln34105,
    booktitle = {Empirical Methods in Natural Language Processing (EMNLP)},
    month = {October},
    title = {Cut to the Chase: A Context Zoom-in Network for Reading Comprehension},
    author = {Satish Indurthi and Seunghak Yu and Seohyun Back and Heriberto Cuayahuitl},
    publisher = {Association for Computational Linguistics},
    year = {2018},
    url = {https://eprints.lincoln.ac.uk/id/eprint/34105/},
    abstract = {In recent years many deep neural networks have been proposed to solve Reading Comprehension (RC) tasks. Most of these models suffer from reasoning over long documents and do not trivially generalize to cases where the answer is not present as a span in a given document. We present a novel neural-based architecture that is capable of extracting relevant regions based on a given question-document pair and generating a well-formed answer. To show the effectiveness of our architecture, we conducted several experiments on the recently proposed and challenging RC dataset 'NarrativeQA'. The proposed architecture outperforms state-of-the-art results (Tay et al., 2018) by 12.62\% (ROUGE-L) relative improvement.}
    }
  • L. Sun, Z. Yan, A. Zaganidis, C. Zhao, and T. Duckett, “Recurrent-octomap: learning state-based map refinement for long-term semantic mapping with 3d-lidar data,” Ieee robotics and automation letters, vol. 3, iss. 4, p. 3749–3756, 2018. doi:10.1109/LRA.2018.2856268
    [BibTeX] [Abstract] [Download PDF]

    This paper presents a novel semantic mapping approach, Recurrent-OctoMap, learned from long-term 3D Lidar data. Most existing semantic mapping approaches focus on improving semantic understanding of single frames, rather than 3D refinement of semantic maps (i.e. fusing semantic observations). The most widely-used approach for 3D semantic map refinement is a Bayes update, which fuses the consecutive predictive probabilities following a Markov-Chain model. Instead, we propose a learning approach to fuse the semantic features, rather than simply fusing predictions from a classifier. In our approach, we represent and maintain our 3D map as an OctoMap, and model each cell as a recurrent neural network (RNN), to obtain a Recurrent-OctoMap. In this case, the semantic mapping process can be formulated as a sequence-to-sequence encoding-decoding problem. Moreover, in order to extend the duration of observations in our Recurrent-OctoMap, we developed a robust 3D localization and mapping system for successively mapping a dynamic environment using more than two weeks of data, and the system can be trained and deployed with arbitrary memory length. We validate our approach on the ETH long-term 3D Lidar dataset [1]. The experimental results show that our proposed approach outperforms the conventional 'Bayes update' approach.

    @article{lincoln32558,
    volume = {3},
    number = {4},
    month = {October},
    author = {Li Sun and Zhi Yan and Anestis Zaganidis and Cheng Zhao and Tom Duckett},
    title = {Recurrent-OctoMap: Learning State-based Map Refinement for Long-Term Semantic Mapping with 3D-Lidar Data},
    publisher = {IEEE},
    year = {2018},
    journal = {IEEE Robotics and Automation Letters},
    doi = {10.1109/LRA.2018.2856268},
    pages = {3749--3756},
    url = {https://eprints.lincoln.ac.uk/id/eprint/32558/},
    abstract = {This paper presents a novel semantic mapping approach, Recurrent-OctoMap, learned from long-term 3D Lidar data. Most existing semantic mapping approaches focus on improving semantic understanding of single frames, rather than 3D refinement of semantic maps (i.e. fusing semantic observations). The most widely-used approach for 3D semantic map refinement is a Bayes update, which fuses the consecutive predictive probabilities following a Markov-Chain model. Instead, we propose a learning approach to fuse the semantic features, rather than simply fusing predictions from a classifier. In our approach, we represent and maintain our 3D map as an OctoMap, and model each cell as a recurrent neural network (RNN), to obtain a Recurrent-OctoMap. In this case, the semantic mapping process can be formulated as a sequence-to-sequence encoding-decoding problem. Moreover, in order to extend the duration of observations in our Recurrent-OctoMap, we developed a robust 3D localization and mapping system for successively mapping a dynamic environment using more than two weeks of data, and the system can be trained and deployed with arbitrary memory length. We validate our approach on the ETH long-term 3D Lidar dataset [1]. The experimental results show that our proposed approach outperforms the conventional 'Bayes update' approach.}
    }
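    For context, the conventional per-cell Bayes update that Recurrent-OctoMap is compared against can be sketched as a recursive product of per-frame class probabilities with renormalisation; the observation vectors below are invented.

    import numpy as np

    # Conventional per-cell Bayes update for semantic map refinement (the
    # baseline the paper improves on): fuse per-frame class probability
    # vectors for one map cell by recursive multiplication and
    # renormalisation.
    def bayes_fuse(observations):
        belief = np.ones(observations.shape[1]) / observations.shape[1]
        for p in observations:         # p: classifier output for one frame
            belief = belief * p
            belief /= belief.sum()     # renormalise
        return belief

    obs = np.array([[0.6, 0.3, 0.1],   # three classes, four frames
                    [0.5, 0.4, 0.1],
                    [0.7, 0.2, 0.1],
                    [0.4, 0.4, 0.2]])
    print(bayes_fuse(obs))             # belief concentrates on class 0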
  • A. Zaganidis, L. Sun, T. Duckett, and G. Cielniak, “Integrating deep semantic segmentation into 3-d point cloud registration,” Ieee robotics and automation letters, vol. 3, iss. 4, p. 2942–2949, 2018. doi:10.1109/LRA.2018.2848308
    [BibTeX] [Abstract] [Download PDF]

    Point cloud registration is the task of aligning 3D scans of the same environment captured from different poses. When semantic information is available for the points, it can be used as a prior in the search for correspondences to improve registration. Semantic-assisted Normal Distributions Transform (SE-NDT) is a new registration algorithm that reduces the complexity of the problem by using the semantic information to partition the point cloud into a set of normal distributions, which are then registered separately. In this paper we extend the NDT registration pipeline by using PointNet, a deep neural network for segmentation and classification of point clouds, to learn and predict per-point semantic labels. We also present the Iterative Closest Point (ICP) equivalent of the algorithm, a special case of Multichannel Generalized ICP. We evaluate the performance of SE-NDT against the state of the art in point cloud registration on the publicly available classification data set Semantic3d.net. We also test the trained classifier and algorithms on dynamic scenes, using a sequence from the public dataset KITTI. The experiments demonstrate the improvement of the registration in terms of robustness, precision and speed, across a range of initial registration errors, thanks to the inclusion of semantic information.

    @article{lincoln32390,
    volume = {3},
    number = {4},
    month = {October},
    author = {Anestis Zaganidis and Li Sun and Tom Duckett and Grzegorz Cielniak},
    title = {Integrating Deep Semantic Segmentation Into 3-D Point Cloud Registration},
    publisher = {IEEE},
    year = {2018},
    journal = {IEEE Robotics and Automation Letters},
    doi = {10.1109/LRA.2018.2848308},
    pages = {2942--2949},
    url = {https://eprints.lincoln.ac.uk/id/eprint/32390/},
    abstract = {Point cloud registration is the task of aligning 3D scans of the same environment captured from different poses. When semantic information is available for the points, it can be used as a prior in the search for correspondences to improve registration. Semantic-assisted Normal Distributions Transform (SE-NDT) is a new registration algorithm that reduces the complexity of the problem by using the semantic information to partition the point cloud into a set of normal distributions, which are then registered separately. In this paper we extend the NDT registration pipeline by using PointNet, a deep neural network for segmentation and classification of point clouds, to learn and predict per-point semantic labels. We also present the Iterative Closest Point (ICP) equivalent of the algorithm, a special case of Multichannel Generalized ICP. We evaluate the performance of SE-NDT against the state of the art in point cloud registration on the publicly available classification data set Semantic3d.net. We also test the trained classifier and algorithms on dynamic scenes, using a sequence from the public dataset KITTI. The experiments demonstrate the improvement of the registration in terms of robustness, precision and speed, across a range of initial registration errors, thanks to the inclusion of semantic information.}
    }
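    The core idea of semantic-assisted NDT, partitioning the cloud by semantic label and fitting one normal distribution per occupied voxel within each class, can be sketched as follows; the points, labels and voxel size are synthetic, and the registration step itself is omitted.

    import numpy as np
    from collections import defaultdict

    # Sketch of the semantic partitioning at the heart of SE-NDT: split the
    # cloud by label, then fit one Gaussian (mean, covariance) per occupied
    # voxel within each class, so registration can match distributions of
    # the same class only.
    rng = np.random.default_rng(2)
    points = rng.uniform(0, 10, size=(1000, 3))
    labels = rng.integers(0, 3, size=1000)    # e.g. ground/building/vegetation
    VOXEL = 2.0

    cells = defaultdict(list)
    for p, lab in zip(points, labels):
        key = (int(lab), *np.floor(p / VOXEL).astype(int))
        cells[key].append(p)

    ndt = {}
    for key, pts in cells.items():
        pts = np.asarray(pts)
        if len(pts) >= 5:                     # enough points for a covariance
            ndt[key] = (pts.mean(axis=0), np.cov(pts.T))

    print(len(ndt), "per-class normal distributions")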
  • J. Zhao, C. Hu, C. Zhang, Z. Wang, and S. Yue, “A bio-inspired collision detector for small quadcopter,” in 2018 international joint conference on neural networks (ijcnn), 2018, p. 1–7. doi:10.1109/IJCNN.2018.8489298
    [BibTeX] [Abstract] [Download PDF]

    The sense and avoid capability enables insects to fly versatilely and robustly in dynamic and complex environments. Their biological principles are so practical and efficient that they have inspired us to imitate them in our flying machines. In this paper, we studied a novel bio-inspired collision detector and its application on a quadcopter. The detector is inspired by Lobula giant movement detector (LGMD) neurons in locusts, and implemented on an STM32F407 Microcontroller Unit (MCU). Compared to other collision detecting methods applied on quadcopters, we focused on enhancing the collision detection accuracy in a bio-inspired way that can considerably increase the computing efficiency during an obstacle detecting task even in complex and dynamic environments. We designed the quadcopter's response to imminent collisions and tested this bio-inspired system in an indoor arena. The observed results from the experiments demonstrated that the LGMD collision detector is feasible to work as a vision module for the quadcopter's collision avoidance task.

    @inproceedings{lincoln34847,
    month = {October},
    author = {Jiannan Zhao and Cheng Hu and Chun Zhang and Zhihua Wang and Shigang Yue},
    booktitle = {2018 International Joint Conference on Neural Networks (IJCNN)},
    title = {A Bio-inspired Collision Detector for Small Quadcopter},
    publisher = {IEEE},
    doi = {10.1109/IJCNN.2018.8489298},
    pages = {1--7},
    year = {2018},
    url = {https://eprints.lincoln.ac.uk/id/eprint/34847/},
    abstract = {The sense and avoid capability enables insects to fly versatilely and robustly in dynamic and complex environments. Their biological principles are so practical and efficient that they have inspired us to imitate them in our flying machines. In this paper, we studied a novel bio-inspired collision detector and its application on a quadcopter. The detector is inspired by Lobula giant movement detector (LGMD) neurons in locusts, and implemented on an STM32F407 Microcontroller Unit (MCU). Compared to other collision detecting methods applied on quadcopters, we focused on enhancing the collision detection accuracy in a bio-inspired way that can considerably increase the computing efficiency during an obstacle detecting task even in complex and dynamic environments. We designed the quadcopter's response to imminent collisions and tested this bio-inspired system in an indoor arena. The observed results from the experiments demonstrated that the LGMD collision detector is feasible to work as a vision module for the quadcopter's collision avoidance task.}
    }
  • P. Bosilj, T. Duckett, and G. Cielniak, “Analysis of morphology-based features for classification of crop and weeds in precision agriculture,” Ieee robotics and automation letters, vol. 3, iss. 4, p. 2950–2956, 2018. doi:10.1109/LRA.2018.2848305
    [BibTeX] [Abstract] [Download PDF]

    Determining the types of vegetation present in an image is a core step in many precision agriculture tasks. In this paper, we focus on pixel-based approaches for classification of crops versus weeds, especially for complex cases involving overlapping plants and partial occlusion. We examine the benefits of multi-scale and content-driven morphology-based descriptors called Attribute Profiles. These are compared to state-of-the-art keypoint descriptors with a fixed neighbourhood previously used in precision agriculture, namely Histograms of Oriented Gradients and Local Binary Patterns. The proposed classification technique is especially advantageous when coupled with morphology-based segmentation on a max-tree structure, as the same representation can be re-used for feature extraction. The robustness of the approach is demonstrated by an experimental evaluation on two datasets with different crop types. The proposed approach compared favourably to state-of-the-art approaches without an increase in computational complexity, while being able to provide descriptors at a higher resolution.

    @article{lincoln32371,
    volume = {3},
    number = {4},
    month = {October},
    author = {Petra Bosilj and Tom Duckett and Grzegorz Cielniak},
    title = {Analysis of morphology-based features for classification of crop and weeds in precision agriculture},
    publisher = {IEEE},
    year = {2018},
    journal = {IEEE Robotics and Automation Letters},
    doi = {10.1109/LRA.2018.2848305},
    pages = {2950--2956},
    url = {https://eprints.lincoln.ac.uk/id/eprint/32371/},
    abstract = {Determining the types of vegetation present in an image is a core step in many precision agriculture tasks. In this paper, we focus on pixel-based approaches for classification of crops versus weeds, especially for complex cases involving overlapping plants and partial occlusion. We examine the benefits of multi-scale and content-driven morphology-based descriptors called Attribute Profiles. These are compared to state-of-the-art keypoint descriptors with a fixed neighbourhood previously used in precision agriculture, namely Histograms of Oriented Gradients and Local Binary Patterns. The proposed classification technique is especially advantageous when coupled with morphology-based segmentation on a max-tree structure, as the same representation can be re-used for feature extraction. The robustness of the approach is demonstrated by an experimental evaluation on two datasets with different crop types. The proposed approach compared favourably to state-of-the-art approaches without an increase in computational complexity, while being able to provide descriptors at a higher resolution.}
    }
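    A rough flavour of morphology-based, multi-scale descriptors can be given with area openings and closings from scikit-image, stacked over increasing area thresholds into a per-pixel profile. This is a simplified stand-in for the max-tree Attribute Profiles used in the paper; the image and thresholds are placeholders.

    import numpy as np
    from skimage.morphology import area_opening, area_closing

    # Simplified stand-in for Attribute Profiles: stack area openings and
    # closings at increasing area thresholds into a per-pixel, multi-scale
    # descriptor.
    rng = np.random.default_rng(3)
    image = (rng.random((64, 64)) * 255).astype(np.uint8)

    thresholds = [16, 64, 256]
    profile = [image]
    for t in thresholds:
        profile.append(area_opening(image, area_threshold=t))
        profile.append(area_closing(image, area_threshold=t))

    features = np.stack(profile, axis=-1)   # per-pixel feature vector
    print(features.shape)                   # (64, 64, 7)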
  • E. Senft, S. Lemaignan, P. Baxter, and T. Belpaeme, “From evaluating to teaching: rewards and challenges of human control for learning robots,” in Iros 2018 workshop on human/robot in the loop machine learning, 2018.
    [BibTeX] [Abstract] [Download PDF]

    Keeping a human in a robot learning cycle can provide many advantages to improve the learning process. However, most of these improvements are only available when the human teacher is in complete control of the robot's behaviour, and not just providing feedback. This human control can make the learning process safer, allowing the robot to learn in high-stakes interaction scenarios, especially social ones. Furthermore, it allows faster learning as the human guides the robot to the relevant parts of the state space and can provide additional information to the learner. This information can also enable the learning algorithms to learn for wider world representations, thus increasing the generalisability of a deployed system. Additionally, learning from end users improves the precision of the final policy as it can be specifically tailored to many situations. Finally, this progressive teaching might create trust between the learner and the teacher, easing the deployment of the autonomous robot. However, with such control comes a range of challenges. Firstly, the rich communication between the robot and the teacher needs to be handled by an interface, which may require complex features. Secondly, the teacher needs to be embedded within the robot action selection cycle, imposing time constraints, which increases the cognitive load on the teacher. Finally, given a cycle of interaction between the robot and the teacher, any mistakes made by the teacher can be propagated to the robot's policy. Nevertheless, we are able to show that empowering the teacher with ways to control a robot's behaviour has the potential to drastically improve both the learning process (allowing robots to learn in a wider range of environments) and the experience of the teacher.

    @inproceedings{lincoln36200,
    booktitle = {IROS 2018 Workshop on Human/Robot in the Loop Machine Learning},
    month = {October},
    title = {From Evaluating to Teaching: Rewards and Challenges of Human Control for Learning Robots},
    author = {Emmanuel Senft and Severin Lemaignan and Paul Baxter and Tony Belpaeme},
    publisher = {Imperial College London},
    year = {2018},
    url = {https://eprints.lincoln.ac.uk/id/eprint/36200/},
    abstract = {Keeping a human in a robot learning cycle can provide many advantages to improve the learning process. However, most of these improvements are only available when the human teacher is in complete control of the robot's behaviour, and not just providing feedback. This human control can make the learning process safer, allowing the robot to learn in high-stakes interaction scenarios, especially social ones. Furthermore, it allows faster learning as the human guides the robot to the relevant parts of the state space and can provide additional information to the learner. This information can also enable the learning algorithms to learn wider world representations, thus increasing the generalisability of a deployed system. Additionally, learning from end users improves the precision of the final policy as it can be specifically tailored to many situations. Finally, this progressive teaching might create trust between the learner and the teacher, easing the deployment of the autonomous robot. However, with such control comes a range of challenges. Firstly, the rich communication between the robot and the teacher needs to be handled by an interface, which may require complex features. Secondly, the teacher needs to be embedded within the robot action selection cycle, imposing time constraints, which increases the cognitive load on the teacher. Finally, given a cycle of interaction between the robot and the teacher, any mistakes made by the teacher can be propagated to the robot's policy. Nevertheless, we are able to show that empowering the teacher with ways to control a robot's behaviour has the potential to drastically improve both the learning process (allowing robots to learn in a wider range of environments) and the experience of the teacher.}
    }
  • Z. Yan, L. Sun, T. Duckett, and N. Bellotto, “Multisensor online transfer learning for 3d lidar-based human detection with a mobile robot,” in 2018 ieee/rsj international conference on intelligent robots and systems (iros), 2018.
    [BibTeX] [Abstract] [Download PDF]

    Human detection and tracking is an essential task for service robots, where the combined use of multiple sensors has potential advantages that are yet to be fully exploited. In this paper, we introduce a framework allowing a robot to learn a new 3D LiDAR-based human classifier from other sensors over time, taking advantage of a multisensor tracking system. The main innovation is the use of different detectors for existing sensors (i.e. RGB-D camera, 2D LiDAR) to train, online, a new 3D LiDAR-based human classifier based on a new “trajectory probability”. Our framework uses this probability to check whether new detections belong to a human trajectory, estimated by different sensors and/or detectors, and to learn a human classifier in a semi-supervised fashion. The framework has been implemented and tested on a real-world dataset collected by a mobile robot. We present experiments illustrating that our system is able to effectively learn from different sensors and from the environment, and that the performance of the 3D LiDAR-based human classification improves with the number of sensors/detectors used.

    @inproceedings{lincoln32541,
    booktitle = {2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
    month = {October},
    title = {Multisensor Online Transfer Learning for 3D LiDAR-based Human Detection with a Mobile Robot},
    author = {Zhi Yan and Li Sun and Tom Duckett and Nicola Bellotto},
    publisher = {IEEE},
    year = {2018},
    url = {https://eprints.lincoln.ac.uk/id/eprint/32541/},
    abstract = {Human detection and tracking is an essential task for service robots, where the combined use of multiple sensors has potential advantages that are yet to be fully exploited. In this paper, we introduce a framework allowing a robot to learn a new 3D LiDAR-based human classifier from other sensors over time, taking advantage of a multisensor tracking system. The main innovation is the use of different detectors for existing sensors (i.e. RGB-D camera, 2D LiDAR) to train, online, a new 3D LiDAR-based human classifier based on a new 'trajectory probability'. Our framework uses this probability to check whether new detections belong to a human trajectory, estimated by different sensors and/or detectors, and to learn a human classifier in a semi-supervised fashion. The framework has been implemented and tested on a real-world dataset collected by a mobile robot. We present experiments illustrating that our system is able to effectively learn from different sensors and from the environment, and that the performance of the 3D LiDAR-based human classification improves with the number of sensors/detectors used.}
    }
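    The semi-supervised update described in the abstract can be sketched as follows: a LiDAR cluster only becomes a training example when the trajectory probability from the other sensors/detectors is decisive. The threshold, feature dimensionality and online linear classifier are editorial assumptions:

        import numpy as np
        from sklearn.linear_model import SGDClassifier

        clf = SGDClassifier()  # linear human/non-human classifier, updatable online
        clf.partial_fit(np.zeros((2, 4)), [0, 1], classes=[0, 1])  # dummy init so both classes are known

        def online_update(cluster_features, trajectory_prob, threshold=0.8):
            """Self-label a 3D LiDAR detection from its trajectory probability and update."""
            if trajectory_prob >= threshold:
                clf.partial_fit(cluster_features.reshape(1, -1), [1])  # confident positive
            elif trajectory_prob <= 1.0 - threshold:
                clf.partial_fit(cluster_features.reshape(1, -1), [0])  # confident negative
            # detections with ambiguous probability are skipped rather than mislabelled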
  • S. Papadaki, G. Banias, C. Achillas, D. Aidonis, D. Folinas, D. Bochtis, and S. Papangelou, “A humanitarian logistics case study for the intermediary phase accommodation center for refugees and other humanitarian disaster victims,” in Dynamics of disasters, Springer, 2018, vol. 140, p. 157–202. doi:10.1007/978-3-319-97442-2_8
    [BibTeX] [Abstract] [Download PDF]

    The growing and uncontrollable stream of refugees from Middle East and North Africa has created considerable pressure on governments and societies all over Europe. To establish the theoretical framework, the concept of humanitarian logistics is briefly examined in this paper. Historical data from the nineteenth century onwards illuminates the fact that this influx is not a novelty in the European continent and the interpretation of statistical data highlights the characteristics and particularities of the current refugee wave, as well as the possible repercussions these could inflict both to hosting societies and to displaced populations. Finally, a review of European and national legislation and policies shows that measures taken so far are disjointed and that no complete but at the same time fair and humanitarian management strategy exists. Within this context, the paper elaborates on the development of a compact accommodation center made of shipping containers, to function as one of the initial stages in adaptation before full social integration of the displaced populations. It aims at maximizing the respect for human rights and values while minimizing the impact on society and on the environment. Some of the humanitarian and ecological issues discussed are: integration of medical, educational, religious and social functions within the unit, optimal land utilization, renewable energy use, and waste management infrastructures. Creating added value for the “raw” material (shipping containers) and prolonging the unit’s life span by enabling transformation and change of use, transportation and reuse, and finally end-of-life dismantlement and recycling also lie within the scope of the project. The overall goal is not only to address the current needs stemming from the refugee crisis, but also to develop a project versatile enough to be adapted for implementation on further social groups in need of support. The paper’s results could serve as a useful tool for governments and organizations to better plan ahead and respond fast and efficiently not only in regard to the present humanitarian emergency, but also in any possible similar major disaster situation, including the potential consequences of climate change.

    @incollection{lincoln39233,
    volume = {140},
    month = {September},
    author = {Sofia Papadaki and Georgios Banias and Charisios Achillas and Dimitris Aidonis and Dimitris Folinas and Dionysis Bochtis and Stamatis Papangelou},
    booktitle = {Dynamics of Disasters},
    title = {A Humanitarian Logistics Case Study for the Intermediary Phase Accommodation Center for Refugees and Other Humanitarian Disaster Victims},
    publisher = {Springer},
    year = {2018},
    doi = {10.1007/978-3-319-97442-2\_8},
    pages = {157--202},
    url = {https://eprints.lincoln.ac.uk/id/eprint/39233/},
    abstract = {The growing and uncontrollable stream of refugees from Middle East and North Africa has created considerable pressure on governments and societies all over Europe. To establish the theoretical framework, the concept of humanitarian logistics is briefly examined in this paper. Historical data from the nineteenth century onwards illuminates the fact that this influx is not a novelty in the European continent and the interpretation of statistical data highlights the characteristics and particularities of the current refugee wave, as well as the possible repercussions these could inflict both to hosting societies and to displaced populations. Finally, a review of European and national legislation and policies shows that measures taken so far are disjointed and that no complete but at the same time fair and humanitarian management strategy exists.
    Within this context, the paper elaborates on the development of a compact accommodation center made of shipping containers, to function as one of the initial stages in adaptation before full social integration of the displaced populations. It aims at maximizing the respect for human rights and values while minimizing the impact on society and on the environment. Some of the humanitarian and ecological issues discussed are: integration of medical, educational, religious and social functions within the unit, optimal land utilization, renewable energy use, and waste management infrastructures. Creating added value for the 'raw' material (shipping containers) and prolonging the unit's life span by enabling transformation and change of use, transportation and reuse, and finally end-of-life dismantlement and recycling also lie within the scope of the project.
    The overall goal is not only to address the current needs stemming from the refugee crisis, but also to develop a project versatile enough to be adapted for implementation on further social groups in need of support. The paper's results could serve as a useful tool for governments and organizations to better plan ahead and respond fast and efficiently not only in regard to the present humanitarian emergency, but also in any possible similar major disaster situation, including the potential consequences of climate change.}
    }
  • H. Wang, J. Peng, and S. Yue, “A feedback neural network for small target motion detection in cluttered backgrounds,” in The 27th international conference on artificial neural networks, 2018.
    [BibTeX] [Abstract] [Download PDF]

    Small target motion detection is critical for insects to search for and track mates or prey, which always appear as small dim speckles in the visual field. A class of specific neurons, called small target motion detectors (STMDs), has been characterized by exquisite sensitivity for small target motion. Understanding and analyzing the visual pathway of STMD neurons is beneficial for designing artificial visual systems for small target motion detection. Feedback loops have been widely identified in visual neural circuits and play an important role in target detection. However, it is unclear whether a feedback loop exists in the STMD visual pathway, or whether a feedback loop could significantly improve the detection performance of STMD neurons. In this paper, we propose a feedback neural network for small target motion detection against naturally cluttered backgrounds. In order to form a feedback loop, the model output is temporally delayed and relayed to the previous neural layer as a feedback signal. Extensive experiments showed significant improvement of the proposed feedback neural network over existing STMD-based models for small target motion detection.

    @inproceedings{lincoln33422,
    booktitle = {The 27th International Conference on Artificial Neural Networks},
    month = {September},
    title = {A Feedback Neural Network for Small Target Motion Detection in Cluttered Backgrounds},
    author = {Hongxin Wang and Jigen Peng and Shigang Yue},
    publisher = {IEEE},
    year = {2018},
    url = {https://eprints.lincoln.ac.uk/id/eprint/33422/},
    abstract = {Small target motion detection is critical for insects to search for and track mates or prey, which always appear as small dim speckles in the visual field. A class of specific neurons, called small target motion detectors (STMDs), has been characterized by exquisite sensitivity for small target motion. Understanding and analyzing the visual pathway of STMD neurons is beneficial for designing artificial visual systems for small target motion detection. Feedback loops have been widely identified in visual neural circuits and play an important role in target detection. However, it is unclear whether a feedback loop exists in the STMD visual pathway, or whether a feedback loop could significantly improve the detection performance of STMD neurons. In this paper, we propose a feedback neural network for small target motion detection against naturally cluttered backgrounds. In order to form a feedback loop, the model output is temporally delayed and relayed to the previous neural layer as a feedback signal. Extensive experiments showed significant improvement of the proposed feedback neural network over existing STMD-based models for small target motion detection.}
    }
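    The feedback mechanism can be sketched in a few lines of NumPy: the output of the previous frame is delayed one step and re-injected into an earlier layer. Layer shapes, the ReLU nonlinearity and the mixing gain are assumptions, not the paper's architecture:

        import numpy as np

        def relu(x):
            return np.maximum(x, 0.0)

        def run_feedback_net(frames, W1, W2, gain=0.5):
            """frames: iterable of flattened input frames; W1: (hidden, in), W2: (out, hidden)."""
            prev_out = None
            outputs = []
            for x in frames:
                h = relu(W1 @ x)
                if prev_out is not None:
                    h = h + gain * (W2.T @ prev_out)  # delayed output re-enters the earlier layer
                y = W2 @ h
                outputs.append(y)
                prev_out = y  # one-frame temporal delay closes the feedback loop
            return outputs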
  • S. Fadlallah and K. Goher, “System identification and hsdbc-optimized pid control of a portable lower-limb rehabilitation device,” in Robotics, World scientific, 2018.
    [BibTeX] [Abstract] [Download PDF]

    The present paper introduces a novel portable leg rehabilitation system (PLRS) that is developed to provide the user with the necessary rehabilitation exercises for both the knee and ankle, in addition to the portability feature to overcome the hardships associated with both the effort and cost of hospitals’ and rehabilitation clinics’ steady sessions. Prior to realizing the actual prototype, the proposed configuration was visualized using SolidWorks, including its main components. Aiming to control the developed system, and given the fact that tuning controller parameters is not an easy task, the Hybrid Spiral-Dynamics Bacteria-Chemotaxis (HSDBC) algorithm has been applied to the proposed control strategy in order to obtain a satisfactory performance. The obtained system performance was satisfactory in terms of desired elevation and settling time.

    @incollection{lincoln34108,
    booktitle = {Robotics},
    month = {September},
    title = {System Identification and HSDBC-Optimized PID Control of a Portable Lower-Limb Rehabilitation Device},
    author = {Sulaiman Fadlallah and Khaled Goher},
    publisher = {World Scientific},
    year = {2018},
    url = {https://eprints.lincoln.ac.uk/id/eprint/34108/},
    abstract = {The present paper introduces a novel portable leg rehabilitation system (PLRS) that is developed to provide the user with the necessary rehabilitation exercises for both the knee and ankle, in addition to the portability feature to overcome the hardships associated with both the effort and cost of hospitals' and rehabilitation clinics' steady sessions. Prior to realizing the actual prototype, the proposed configuration was visualized using SolidWorks, including its main components. Aiming to control the developed system, and given the fact that tuning controller parameters is not an easy task, the Hybrid Spiral-Dynamics Bacteria-Chemotaxis (HSDBC) algorithm has been applied to the proposed control strategy in order to obtain a satisfactory performance. The obtained system performance was satisfactory in terms of desired elevation and settling time.}
    }
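    The control law itself is a standard discrete PID; a minimal sketch follows, with the three gains treated as given, since in the paper they come out of the HSDBC optimization. The time step and class interface are illustrative:

        class PID:
            def __init__(self, kp, ki, kd, dt):
                self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
                self.integral = 0.0
                self.prev_err = 0.0

            def step(self, setpoint, measurement):
                err = setpoint - measurement
                self.integral += err * self.dt               # accumulate integral term
                deriv = (err - self.prev_err) / self.dt      # finite-difference derivative
                self.prev_err = err
                return self.kp * err + self.ki * self.integral + self.kd * deriv

        # kp, ki, kd = hsdbc_optimise(cost)  # hypothetical: gains delivered by the HSDBC search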
  • C. Zhao, L. Sun, P. Purkait, T. Duckett, and R. Stolkin, “Dense rgb-d semantic mapping with pixel-voxel neural network,” Sensors, vol. 18, iss. 9, p. 3099, 2018. doi:10.3390/s18093099
    [BibTeX] [Abstract] [Download PDF]

    In this paper, a novel Pixel-Voxel network is proposed for dense 3D semantic mapping, which can perform dense 3D mapping while simultaneously recognizing and labelling the semantic category of each point in the 3D map. In our approach, we fully leverage the advantages of different modalities. That is, the PixelNet can learn the high-level contextual information from 2D RGB images, and the VoxelNet can learn 3D geometrical shapes from the 3D point cloud. Unlike the existing architecture that fuses score maps from different modalities with equal weights, we propose a softmax weighted fusion stack that adaptively learns the varying contributions of PixelNet and VoxelNet and fuses the score maps according to their respective confidence levels. Our approach achieved competitive results on both the SUN RGB-D and NYU V2 benchmarks, while the runtime of the proposed system is boosted to around 13 Hz, enabling near-real-time performance using an i7 eight-core PC with a single Titan X GPU.

    @article{lincoln34138,
    volume = {18},
    number = {9},
    month = {September},
    author = {Cheng Zhao and Li Sun and Pulak Purkait and Tom Duckett and Rustam Stolkin},
    title = {Dense RGB-D Semantic Mapping with Pixel-Voxel Neural Network},
    publisher = {Multidisciplinary Digital Publishing Institute},
    year = {2018},
    journal = {Sensors},
    doi = {10.3390/s18093099},
    pages = {3099},
    url = {https://eprints.lincoln.ac.uk/id/eprint/34138/},
    abstract = {In this paper, a novel Pixel-Voxel network is proposed for dense 3D semantic mapping, which can perform dense 3D mapping while simultaneously recognizing and labelling the semantic category of each point in the 3D map. In our approach, we fully leverage the advantages of different modalities. That is, the PixelNet can learn the high-level contextual information from 2D RGB images, and the VoxelNet can learn 3D geometrical shapes from the 3D point cloud. Unlike the existing architecture that fuses score maps from different modalities with equal weights, we propose a softmax weighted fusion stack that adaptively learns the varying contributions of PixelNet and VoxelNet and fuses the score maps according to their respective confidence levels. Our approach achieved competitive results on both the SUN RGB-D and NYU V2 benchmarks, while the runtime of the proposed system is boosted to around 13 Hz, enabling near-real-time performance using an i7 eight-core PC with a single Titan X GPU.}
    }
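    A small sketch of the softmax-weighted fusion idea described in the abstract: per-pixel confidence logits, here assumed given, weight the two score maps before they are combined. Shapes and names are illustrative:

        import numpy as np

        def softmax_fusion(scores_pixel, scores_voxel, c_pixel, c_voxel):
            """scores_*: (H, W, num_classes) score maps; c_*: per-pixel confidence logits (H, W)."""
            logits = np.stack([c_pixel, c_voxel], axis=-1)
            logits = logits - logits.max(axis=-1, keepdims=True)   # numerical stability
            w = np.exp(logits)
            w = w / w.sum(axis=-1, keepdims=True)                  # softmax over the two modalities
            return (w[..., 0:1] * scores_pixel) + (w[..., 1:2] * scores_voxel)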
  • R. Pinsler, R. Akrour, T. Osa, J. Peters, and G. Neumann, “Sample and feedback efficient hierarchical reinforcement learning from human preferences,” in Ieee international conference on robotics and automation (icra), 2018, p. 596–601. doi:10.1109/ICRA.2018.8460907
    [BibTeX] [Abstract] [Download PDF]

    While reinforcement learning has led to promising results in robotics, defining an informative reward function can sometimes prove to be challenging. Prior work considered including the human in the loop to jointly learn the reward function and the optimal policy. Generating samples from a physical robot and requesting human feedback are both taxing efforts for which efficiency is critical. In contrast to prior work, in this paper we propose to learn reward functions from both the robot and the human perspectives in order to improve on both efficiency metrics. On one side, learning a reward function from the human perspective increases feedback efficiency by assuming that humans rank trajectories according to an outcome space of reduced dimensionality. On the other side, learning a reward function from the robot perspective circumvents the need for learning a dynamics model while retaining the sample efficiency of model-based approaches. We provide an algorithm that incorporates bi-perspective reward learning into a general hierarchical reinforcement learning framework and demonstrate the merits of our approach on a toy task and a simulated robot grasping task.

    @inproceedings{lincoln31675,
    month = {September},
    author = {R. Pinsler and R. Akrour and T. Osa and J. Peters and G. Neumann},
    booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
    title = {Sample and feedback efficient hierarchical reinforcement learning from human preferences},
    publisher = {IEEE},
    doi = {10.1109/ICRA.2018.8460907},
    pages = {596--601},
    year = {2018},
    url = {https://eprints.lincoln.ac.uk/id/eprint/31675/},
    abstract = {While reinforcement learning has led to promising results in robotics, defining an informative reward function can sometimes prove to be challenging. Prior work considered including the human in the loop to jointly learn the reward function and the optimal policy. Generating samples from a physical robot and requesting human feedback are both taxing efforts for which efficiency is critical. In contrast to prior work, in this paper we propose to learn reward functions from both the robot and the human perspectives in order to improve on both efficiency metrics. On one side, learning a reward function from the human perspective increases feedback efficiency by assuming that humans rank trajectories according to an outcome space of reduced dimensionality. On the other side, learning a reward function from the robot perspective circumvents the need for learning a dynamics model while retaining the sample efficiency of model-based approaches. We provide an algorithm that incorporates bi-perspective reward learning into a general hierarchical reinforcement learning framework and demonstrate the merits of our approach on a toy task and a simulated robot grasping task.}
    }
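    The preference-learning step can be illustrated with a Bradley-Terry likelihood over outcome features, a common formulation for learning rewards from rankings; the linear reward model below is an editorial simplification, not necessarily the paper's exact objective:

        import numpy as np

        def preference_loss(w, outcomes_a, outcomes_b, prefs):
            """prefs[i] = 1 if trajectory a_i was preferred over b_i, else 0."""
            r_a = outcomes_a @ w
            r_b = outcomes_b @ w
            p_a = 1.0 / (1.0 + np.exp(-(r_a - r_b)))   # P(a preferred) under Bradley-Terry
            eps = 1e-9
            return -np.mean(prefs * np.log(p_a + eps) + (1 - prefs) * np.log(1 - p_a + eps))

        # Minimising this loss in w recovers a reward consistent with the human's
        # rankings in the reduced-dimensional outcome space.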
  • L. Sun, Z. Yan, S. M. Mellado, M. Hanheide, and T. Duckett, “3dof pedestrian trajectory prediction learned from long-term autonomous mobile robot deployment data,” International conference on robotics and automation (icra) 2018, 2018. doi:10.1109/icra.2018.8461228
    [BibTeX] [Abstract] [Download PDF]

    This paper presents a novel 3DOF pedestrian trajectory prediction approach for autonomous mobile service robots. While most previously reported methods are based on learning of 2D positions in monocular camera images, our approach uses range-finder sensors to learn and predict 3DOF pose trajectories (i.e. 2D position plus 1D rotation within the world coordinate system). Our approach, T-Pose-LSTM (Temporal 3DOF-Pose Long-Short-Term Memory), is trained using long-term data from real-world robot deployments and aims to learn context-dependent (environment- and time-specific) human activities. Our approach incorporates long-term temporal information (i.e. date and time) with short-term pose observations as input. A sequence-to-sequence LSTM encoder-decoder is trained, which encodes observations into LSTM and then decodes the resulting predictions. On deployment, the approach can perform on-the-fly prediction in real-time. Instead of using manually annotated data, we rely on a robust human detection, tracking and SLAM system, providing us with examples in a global coordinate system. We validate the approach using more than 15 km of pedestrian trajectories recorded in a care home environment over a period of three months. The experiments show that the proposed T-Pose-LSTM model outperforms the state-of-the-art 2D-based method for human trajectory prediction in long-term mobile robot deployments.

    @article{lincoln31956,
    month = {September},
    title = {3DOF Pedestrian Trajectory Prediction Learned from Long-Term Autonomous Mobile Robot Deployment Data},
    author = {Li Sun and Zhi Yan and Sergi Molina Mellado and Marc Hanheide and Tom Duckett},
    publisher = {IEEE},
    year = {2018},
    doi = {10.1109/icra.2018.8461228},
    journal = {International Conference on Robotics and Automation (ICRA) 2018},
    url = {https://eprints.lincoln.ac.uk/id/eprint/31956/},
    abstract = {This paper presents a novel 3DOF pedestrian trajectory prediction approach for autonomous mobile service robots. While most previously reported methods are based on learning of 2D positions in monocular camera images, our approach uses range-finder sensors to learn and predict 3DOF pose trajectories (i.e. 2D position plus 1D rotation within the world coordinate system). Our approach, T-Pose-LSTM (Temporal 3DOF-Pose Long-Short-Term Memory), is trained using long-term data from real-world robot deployments and aims to learn context-dependent (environment- and time-specific) human activities. Our approach incorporates long-term temporal information (i.e. date and time) with short-term pose observations as input. A sequence-to-sequence LSTM encoder-decoder is trained, which encodes observations into LSTM and then decodes the resulting predictions. On deployment, the approach can perform on-the-fly prediction in real-time. Instead of using manually annotated data, we rely on a robust human detection, tracking and SLAM system, providing us with examples in a global coordinate system. We validate the approach using more than 15 km of pedestrian trajectories recorded in a care home environment over a period of three months. The experiments show that the proposed T-Pose-LSTM model outperforms the state-of-the-art 2D-based method for human trajectory prediction in long-term mobile robot deployments.}
    }
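    A minimal PyTorch sketch of a sequence-to-sequence LSTM encoder-decoder of the kind described, with a time-of-day feature appended to each observed pose; layer sizes, the prediction horizon and the context encoding are assumptions:

        import torch
        import torch.nn as nn

        class PoseSeq2Seq(nn.Module):
            def __init__(self, in_dim=4, hidden=64, horizon=10):
                super().__init__()
                self.encoder = nn.LSTM(in_dim, hidden, batch_first=True)
                self.decoder = nn.LSTM(3, hidden, batch_first=True)
                self.head = nn.Linear(hidden, 3)   # predicts (x, y, theta)
                self.horizon = horizon

            def forward(self, obs):
                # obs: (batch, T_obs, 4) = 3DOF pose + normalised time of day
                _, state = self.encoder(obs)       # encode observations into the LSTM state
                step = obs[:, -1:, :3]             # last observed pose seeds the decoder
                preds = []
                for _ in range(self.horizon):
                    out, state = self.decoder(step, state)
                    step = self.head(out)          # decode the next predicted pose
                    preds.append(step)
                return torch.cat(preds, dim=1)     # (batch, horizon, 3)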
  • A. Kucukyilmaz and Y. Demiris, “Learning shared control by demonstration for personalized wheelchair assistance,” Ieee transactions on haptics, vol. 11, iss. 3, p. 431–442, 2018. doi:10.1109/TOH.2018.2804911
    [BibTeX] [Abstract] [Download PDF]

    An emerging research problem in assistive robotics is the design of methodologies that allow robots to provide personalized assistance to users. For this purpose, we present a method to learn shared control policies from demonstrations offered by a human assistant. We train a Gaussian process (GP) regression model to continuously regulate the level of assistance between the user and the robot, given the user’s previous and current actions and the state of the environment. The assistance policy is learned after only a single human demonstration, i.e. in one-shot. Our technique is evaluated in a one-of-a-kind experimental study, where the machine-learned shared control policy is compared to human assistance. Our analyses show that our technique is successful in emulating human shared control, by matching the location and amount of offered assistance on different trajectories. We observed that the effort requirement of the users were comparable between human-robot and human-human settings. Under the learned policy, the jerkiness of the user’s joystick movements dropped significantly, despite a significant increase in the jerkiness of the robot assistant’s commands. In terms of performance, even though the robotic assistance increased task completion time, the average distance to obstacles stayed in similar ranges to human assistance.

    @article{lincoln31131,
    volume = {11},
    number = {3},
    month = {September},
    author = {Ayse Kucukyilmaz and Yiannis Demiris},
    title = {Learning shared control by demonstration for personalized wheelchair assistance},
    publisher = {Institute of Electrical and Electronics Engineers (IEEE)},
    year = {2018},
    journal = {IEEE Transactions on Haptics},
    doi = {10.1109/TOH.2018.2804911},
    pages = {431--442},
    url = {https://eprints.lincoln.ac.uk/id/eprint/31131/},
    abstract = {An emerging research problem in assistive robotics is the design of methodologies that allow robots to provide personalized assistance to users. For this purpose, we present a method to learn shared control policies from demonstrations offered by a human assistant. We train a Gaussian process (GP) regression model to continuously regulate the level of assistance between the user and the robot, given the user's previous and current actions and the state of the environment. The assistance policy is learned after only a single human demonstration, i.e. in one-shot. Our technique is evaluated in a one-of-a-kind experimental study, where the machine-learned shared control policy is compared to human assistance. Our analyses show that our technique is successful in emulating human shared control, by matching the location and amount of offered assistance on different trajectories. We observed that the effort requirement of the users were comparable between human-robot and human-human settings. Under the learned policy, the jerkiness of the user's joystick movements dropped significantly, despite a significant increase in the jerkiness of the robot assistant's commands. In terms of performance, even though the robotic assistance increased task completion time, the average distance to obstacles stayed in similar ranges to human assistance.}
    }
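    The continuous regulation of assistance can be sketched with scikit-learn's GP regression: fit on a single demonstration, then blend user and robot commands by the predicted assistance level. The feature layout, kernel and placeholder data are assumptions:

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        # One demonstration: rows of [user_vx, user_vy, dist_to_obstacle, ...] -> alpha
        X_demo = np.random.rand(200, 4)    # placeholder demonstration features
        alpha_demo = np.random.rand(200)   # placeholder demonstrated assistance levels

        gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel()).fit(X_demo, alpha_demo)

        def shared_command(user_cmd, robot_cmd, features):
            """Blend user and robot commands with the regressed assistance level."""
            alpha = float(np.clip(gp.predict(features.reshape(1, -1))[0], 0.0, 1.0))
            return (1.0 - alpha) * user_cmd + alpha * robot_cmd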
  • W. Jeon, G. Cielniak, and R. Sang-Yong, “Semantic segmentation using trade-off and internal ensemble,” International journal of fuzzy logic and intelligent systems, vol. 18, iss. 3, p. 196–203, 2018. doi:10.5391/IJFIS.2018.18.3.196
    [BibTeX] [Abstract] [Download PDF]

    Computer vision encompasses image classification, image segmentation, object detection, tracking, etc. Among them, image segmentation is the most basic technique of computer vision, which divides an image into foreground and background. This paper proposes an ensemble model using a concept of physical perception for image segmentation. In practice, two connected models, DeepLab and a modified VGG model, provide feedback to each other during the training process. At inference, we combine the results of the two parallel models and execute atrous spatial pyramid pooling (ASPP) and post-processing using a conditional random field (CRF). The proposed model shows better performance than DeepLab in local areas and about a 1% improvement on average in pixel-by-pixel comparison.

    @article{lincoln34496,
    volume = {18},
    number = {3},
    month = {September},
    author = {Wang-Su Jeon and Grzegorz Cielniak and Rhee Sang-Yong},
    title = {Semantic Segmentation Using Trade-Off and Internal Ensemble},
    year = {2018},
    journal = {International Journal of Fuzzy Logic and Intelligent Systems},
    doi = {10.5391/IJFIS.2018.18.3.196},
    pages = {196--203},
    url = {https://eprints.lincoln.ac.uk/id/eprint/34496/},
    abstract = {Computer vision encompasses image classification, image segmentation, object detection, tracking, etc. Among them, image segmentation is the most basic technique of computer vision, which divides an image into foreground and background. This paper proposes an ensemble model using a concept of physical perception for image segmentation. In practice, two connected models, DeepLab and a modified VGG model, provide feedback to each other during the training process. At inference, we combine the results of the two parallel models and execute atrous spatial pyramid pooling (ASPP) and post-processing using a conditional random field (CRF). The proposed model shows better performance than DeepLab in local areas and about a 1\% improvement on average in pixel-by-pixel comparison.}
    }
  • T. Osa, J. Peters, and G. Neumann, “Hierarchical reinforcement learning of multiple grasping strategies with human instructions,” Advanced robotics, vol. 32, iss. 18, p. 955–968, 2018. doi:10.1080/01691864.2018.1509018
    [BibTeX] [Abstract] [Download PDF]

    Grasping is an essential component for robotic manipulation and has been investigated for decades. Prior work on grasping often assumes that a sufficient amount of training data is available for learning and planning robotic grasps. However, since constructing such an exhaustive training dataset is very challenging in practice, it is desirable that a robotic system can autonomously learn and improve its grasping strategy. In this paper, we address this problem using reinforcement learning. Although recent work has presented autonomous data collection through trial and error, such methods are often limited to a single grasp type, e.g., vertical pinch grasp. We present a hierarchical policy search approach for learning multiple grasping strategies. Our framework autonomously constructs a database of grasping motions and point clouds of objects to learn multiple grasping types autonomously. We formulate the problem of selecting the grasp location and grasp policy as a bandit problem, which can be interpreted as a variant of active learning. We applied our reinforcement learning to grasping both rigid and deformable objects. The experimental results show that our framework autonomously learns and improves its performance through trial and error and can grasp previously unseen objects with high accuracy.

    @article{lincoln32981,
    volume = {32},
    number = {18},
    month = {September},
    author = {T. Osa and J. Peters and Gerhard Neumann},
    title = {Hierarchical Reinforcement Learning of Multiple Grasping Strategies with Human Instructions},
    publisher = {Taylor \& Francis},
    year = {2018},
    journal = {Advanced Robotics},
    doi = {10.1080/01691864.2018.1509018},
    pages = {955--968},
    url = {https://eprints.lincoln.ac.uk/id/eprint/32981/},
    abstract = {Grasping is an essential component for robotic manipulation and has been investigated for decades. Prior work on grasping often assumes that a sufficient amount of training data is available for learning and planning robotic grasps. However, since constructing such an exhaustive training dataset is very challenging in practice, it is desirable that a robotic system can autonomously learn and improve its grasping strategy. In this paper, we address this problem using reinforcement learning. Although recent work has presented autonomous data collection through trial and error, such methods are often limited to a single grasp type, e.g., vertical pinch grasp. We present a hierarchical policy search approach for learning multiple grasping strategies. Our framework autonomously constructs a database of grasping motions and point clouds of objects to learn multiple grasping types autonomously. We formulate the problem of selecting the grasp location and grasp policy as a bandit problem, which can be interpreted as a variant of active learning. We applied our reinforcement learning to grasping both rigid and deformable objects. The experimental results show that our framework autonomously learns and improves its performance through trial and error and can grasp previously unseen objects with high accuracy.}
    }
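    The bandit view of grasp selection can be illustrated with a UCB1 rule over grasp types, using empirical success rate as the reward; the paper's formulation also conditions on grasp location, which this sketch omits for brevity:

        import math

        class GraspBandit:
            def __init__(self, num_grasp_types):
                self.counts = [0] * num_grasp_types
                self.successes = [0] * num_grasp_types

            def select(self):
                total = sum(self.counts) + 1
                def ucb(i):
                    if self.counts[i] == 0:
                        return float("inf")        # try each grasp type at least once
                    mean = self.successes[i] / self.counts[i]
                    return mean + math.sqrt(2.0 * math.log(total) / self.counts[i])
                return max(range(len(self.counts)), key=ucb)

            def update(self, arm, success):
                self.counts[arm] += 1
                self.successes[arm] += int(success)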
  • S. M. Mustaza, C. Saaj, F. J. Comin, W. A. Albukhanajer, D. Mahdi, and C. Lekakou, “Stiffness control for soft surgical manipulators,” International journal of humanoid robotics, vol. 15, iss. 5, 2018. doi:10.1142/S0219843618500214
    [BibTeX] [Abstract] [Download PDF]

    Tunable stiffness control is critical for undertaking surgical procedures using soft manipulators. However, active stiffness control in soft continuum manipulators is very challenging and has been rarely realized for real-time surgical applications. Low stiffness at the tip is much preferred for safe navigation of the robot in restricted spaces inside the human body. On the other hand, high stiffness at the tip is demanded for efficiently operating surgical instruments. In this paper, the manipulability and characteristics of a class of soft hyper-redundant manipulator, fabricated using Ecoflex-0050TM silicone, is discussed and a new methodology is introduced to actively tune the stiffness matrix, in real-time, for disturbance rejection and stiffness control. Experimental results are used to derive a more accurate description of the characteristics of the soft manipulator, capture the varying stiffness effects of the actuated arm and consequently offer a more accurate response using closed loop feedback control in real-time. The novel results presented in this paper advance the state-of-the-art of tunable stiffness control in soft continuum manipulators for real-time applications.

    @article{lincoln37443,
    volume = {15},
    number = {5},
    month = {August},
    author = {S.M. Mustaza and C Saaj and F.J. Comin and W.A. Albukhanajer and D. Mahdi and C. Lekakou},
    title = {Stiffness Control for Soft Surgical Manipulators},
    publisher = {World Scientific},
    year = {2018},
    journal = {International Journal of Humanoid Robotics},
    doi = {10.1142/S0219843618500214},
    url = {https://eprints.lincoln.ac.uk/id/eprint/37443/},
    abstract = {Tunable stiffness control is critical for undertaking surgical procedures using soft manipulators. However, active stiffness control in soft continuum manipulators is very challenging and has been rarely realized for real-time surgical applications. Low stiffness at the tip is much preferred for safe navigation of the robot in restricted spaces inside the human body. On the other hand, high stiffness at the tip is demanded for efficiently operating surgical instruments. In this paper, the manipulability and characteristics of a class of soft hyper-redundant manipulator, fabricated using Ecoflex-0050TM silicone, is discussed and a new methodology is introduced to actively tune the stiffness matrix, in real-time, for disturbance rejection and stiffness control. Experimental results are used to derive a more accurate description of the characteristics of the soft manipulator, capture the varying stiffness effects of the actuated arm and consequently offer a more accurate response using closed loop feedback control in real-time. The novel results presented in this paper advance the state-of-the-art of tunable stiffness control in soft continuum manipulators for real-time applications.}
    }
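    A generic Cartesian stiffness-control law of the kind such systems build on, with the stiffness matrix as the online-tuned quantity; all matrices and the Jacobian interface here are illustrative, not the paper's model:

        import numpy as np

        def stiffness_control(J, K, D, x_des, x, x_dot):
            """Map a Cartesian stiffness/damping law to actuation space via the Jacobian transpose."""
            f = K @ (x_des - x) - D @ x_dot   # virtual Cartesian wrench
            return J.T @ f                    # actuator-space command

        # Real-time tuning scales K, e.g. compliant while navigating, stiff when
        # operating instruments:
        # K = k_scale * np.eye(3)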
  • C. Hu, Q. Fu, T. Liu, and S. Yue, “A hybrid visual-model based robot control strategy for micro ground robots,” Sab 2018: from animals to animats 15, vol. 10994, p. 162–174, 2018. doi:10.1007/978-3-319-97628-0_14
    [BibTeX] [Abstract] [Download PDF]

    This paper proposes a hybrid vision-based robot control strategy for micro ground robots by mediating two vision models from mixed categories: a bio-inspired collision avoidance model and a segmentation-based target following model. The implemented model coordination strategy is described as a probabilistic model using a finite state machine (FSM) that allows the robot to switch behaviours adapting to the acquired visual information. Experiments on real robots demonstrated the stability and convergence of the embedded hybrid system, including the study of collective behaviour by a swarm of such robots with environment mediation. This research enables micro robots to run visual models with more complexity. Moreover, it showed the possibility of realizing aggregation behaviour on micro robots by utilizing vision as the only sensing modality from non-omnidirectional cameras.

    @article{lincoln32842,
    volume = {10994},
    month = {August},
    author = {Cheng Hu and Qinbing Fu and Tian Liu and Shigang Yue},
    booktitle = {Manoonpong P., Larsen J., Xiong X., Hallam J., Triesch J. (eds) From Animals to Animats 15. SAB 2018. Lecture Notes in Computer Science},
    title = {A Hybrid Visual-Model Based Robot Control Strategy for Micro Ground Robots},
    publisher = {Springer, Cham},
    year = {2018},
    journal = {SAB 2018: From Animals to Animats 15},
    doi = {10.1007/978-3-319-97628-0\_14},
    pages = {162--174},
    url = {https://eprints.lincoln.ac.uk/id/eprint/32842/},
    abstract = {This paper proposes a hybrid vision-based robot control strategy for micro ground robots by mediating two vision models from mixed categories: a bio-inspired collision avoidance model and a segmentation-based target following model. The implemented model coordination strategy is described as a probabilistic model using a finite state machine (FSM) that allows the robot to switch behaviours adapting to the acquired visual information. Experiments on real robots demonstrated the stability and convergence of the embedded hybrid system, including the study of collective behaviour by a swarm of such robots with environment mediation. This research enables micro robots to run visual models with more complexity. Moreover, it showed the possibility of realizing aggregation behaviour on micro robots by utilizing vision as the only sensing modality from non-omnidirectional cameras.}
    }
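    The coordination strategy can be sketched as a small probabilistic finite state machine switching between the two vision models; the states, triggers and switching probability below are editorial assumptions:

        import random

        def next_state(state, collision_cue, target_visible, p_switch=0.9):
            """Probabilistic switch between target following and collision avoidance."""
            if state == "FOLLOW" and collision_cue and random.random() < p_switch:
                return "AVOID"
            if state == "AVOID" and not collision_cue and target_visible:
                return "FOLLOW"
            return state

        state = "FOLLOW"
        # in the control loop (hypothetical cue names):
        # state = next_state(state, collision_model_spike, target_in_view)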
  • K. Elgeneidy, P. Liu, S. Pearson, N. Lohse, and G. Neumann, “Printable soft grippers with integrated bend sensing for handling of crops,” Towards autonomous robotic systems (taros) conference, vol. 2018, iss. 10965, p. 479–480, 2018. doi:10.1007/978-3-319-96728-8
    [BibTeX] [Abstract] [Download PDF]

    Handling delicate crops without damaging or bruising them is a challenge facing the automation of tasks within the agri-food sector, which encourages the utilization of soft grippers that are inherently safe and passively compliant. In this paper we present a brief overview of the development of a printable soft gripper integrated with printable bend sensors. The softness of the gripper fingers allows delicate crops to be grasped gently, while the bend sensors are calibrated to measure bending and detect contact. This way the soft gripper not only benefits from the passive compliance of its soft fingers, but also demonstrates a sensor-guided approach for improved grasp control.

    @article{lincoln32296,
    volume = {2018},
    number = {10965},
    month = {August},
    author = {Khaled Elgeneidy and Pengcheng Liu and Simon Pearson and Niels Lohse and Gerhard Neumann},
    title = {Printable Soft Grippers with Integrated Bend Sensing for Handling of Crops},
    publisher = {Springer},
    year = {2018},
    journal = {Towards Autonomous Robotic Systems (TAROS) Conference},
    doi = {10.1007/978-3-319-96728-8},
    pages = {479--480},
    url = {https://eprints.lincoln.ac.uk/id/eprint/32296/},
    abstract = {Handling delicate crops without damaging or bruising them is a challenge facing the automation of tasks within the agri-food sector, which encourages the utilization of soft grippers that are inherently safe and passively compliant. In this paper we present a brief overview of the development of a printable soft gripper integrated with printable bend sensors. The softness of the gripper fingers allows delicate crops to be grasped gently, while the bend sensors are calibrated to measure bending and detect contact. This way the soft gripper not only benefits from the passive compliance of its soft fingers, but also demonstrates a sensor-guided approach for improved grasp control.}
    }
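    The sensing side can be sketched as a linear calibration of raw bend-sensor readings to angle, with contact flagged when the measured bend deviates from the commanded free-space bend; the calibration data and threshold are placeholders:

        import numpy as np

        # Calibration: raw sensor readings vs. known bend angles (placeholder data)
        raw = np.array([512, 540, 575, 610, 650])
        angle = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
        gain, offset = np.polyfit(raw, angle, 1)   # linear calibration fit

        def contact_detected(raw_reading, expected_angle, tol_deg=3.0):
            """A blocked finger bends less than commanded, signalling contact."""
            measured = gain * raw_reading + offset
            return abs(measured - expected_angle) > tol_deg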
  • F. D. Duchetto, A. Kucukyilmaz, L. Iocchi, and M. Hanheide, “Don’t make the same mistakes again and again: learning local recovery policies for navigation from human demonstrations,” Ieee robotics and automation letters, vol. 3, iss. 4, p. 4084–4091, 2018. doi:10.1109/LRA.2018.2861080
    [BibTeX] [Abstract] [Download PDF]

    In this paper, we present a human-in-the-loop learning framework for mobile robots to generate effective local policies in order to recover from navigation failures in long-term autonomy. We present an analysis of failure and recovery cases derived from long-term autonomous operation of a mobile robot, and propose a two-layer learning framework that allows the robot to detect and recover from such navigation failures. Employing a learning by demonstration (LbD) approach, our framework can incrementally learn to autonomously recover from situations it initially needs humans to help with. The learning framework allows for both real-time failure detection and regression using Gaussian processes (GPs). Our empirical results on two different failure scenarios indicate that given 40 failure state observations, the true positive rate of the failure detection model exceeds 90%, ending with successful recovery actions in more than 90% of all detected cases.

    @article{lincoln32850,
    volume = {3},
    number = {4},
    month = {July},
    author = {Francesco Del Duchetto and Ayse Kucukyilmaz and Luca Iocchi and Marc Hanheide},
    note = {{\copyright} 2018 IEEE},
    title = {Don't Make the Same Mistakes Again and Again: Learning Local Recovery Policies for Navigation from Human Demonstrations},
    publisher = {IEEE},
    year = {2018},
    journal = {IEEE Robotics and Automation Letters},
    doi = {10.1109/LRA.2018.2861080},
    pages = {4084--4091},
    url = {https://eprints.lincoln.ac.uk/id/eprint/32850/},
    abstract = {In this paper, we present a human-in-the-loop learning framework for mobile robots to generate effective local policies in order to recover from navigation failures in long-term autonomy. We present an analysis of failure and recovery cases derived from long-term autonomous operation of a mobile robot, and propose a two-layer learning framework that allows the robot to detect and recover from such navigation failures. Employing a learning by demonstration (LbD) approach, our framework can incrementally learn to autonomously recover from situations it initially needs humans to help with. The learning framework allows for both real-time failure detection and regression using Gaussian processes (GPs). Our empirical results on two different failure scenarios indicate that given 40 failure state observations, the true positive rate of the failure detection model exceeds 90\%, ending with successful recovery actions in more than 90\% of all detected cases.}
    }
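    A compact sketch of the two layers described above, using scikit-learn GPs: a classifier for real-time failure detection and a regressor standing in for the demonstration-learned recovery policy. Feature dimensions and the placeholder training data are assumptions:

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessClassifier, GaussianProcessRegressor

        X_states = np.random.rand(40, 5)            # placeholder state features
        y_fail = np.array([0] * 20 + [1] * 20)      # placeholder failure labels
        detector = GaussianProcessClassifier().fit(X_states, y_fail)

        X_demo = np.random.rand(40, 5)              # placeholder demonstrated states
        U_demo = np.random.rand(40, 2)              # placeholder demonstrated (v, w) commands
        recovery = GaussianProcessRegressor().fit(X_demo, U_demo)

        def control_step(state, nominal_cmd):
            if detector.predict(state.reshape(1, -1))[0] == 1:    # failure detected
                return recovery.predict(state.reshape(1, -1))[0]  # demonstrated recovery command
            return nominal_cmd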
  • Q. Fu, C. Hu, P. Liu, and S. Yue, “Towards computational models of insect motion detectors for robot vision,” in M. giuliani et al. (eds.): taros 2018, lnai, Springer international publishing ag, part of springer nature 2018, 2018, vol. 10965, p. 465–467.
    [BibTeX] [Abstract] [Download PDF]

    In this essay, we provide a brief survey of computational models of insect motion detectors, and bio-robotic solutions to build fast and reliable motion-sensing systems for robot vision. Vision is an important sensing modality for autonomous robots, since it can extract abundant useful features from visually cluttered and dynamic environments. Fast development of computer vision technology facilitates the modeling of dynamic vision systems for mobile robots.

    @incollection{lincoln31671,
    volume = {10965},
    month = {July},
    author = {Qinbing Fu and Cheng Hu and Pengcheng Liu and Shigang Yue},
    booktitle = {M. Giuliani et al. (Eds.): TAROS 2018, LNAI},
    title = {Towards computational models of insect motion detectors for robot vision},
    publisher = {Springer International Publishing AG, part of Springer Nature 2018},
    pages = {465--467},
    year = {2018},
    url = {https://eprints.lincoln.ac.uk/id/eprint/31671/},
    abstract = {In this essay, we provide a brief survey of computational models of insect motion detectors, and bio-robotic solutions to build fast and reliable motion-sensing systems for robot vision. Vision is an important sensing modality for autonomous robots, since it can extract abundant useful features from visually cluttered and dynamic environments. Fast development of computer vision technology facilitates the modeling of dynamic vision systems for mobile robots.}
    }
  • P. Liu, K. Elgeneidy, S. Pearson, N. Huda, and G. Neumann, “Towards real-time robotic motion planning for grasping in cluttered and uncertain environments,” in 19th towards autonomous robotic systems (taros) conference, 2018.
    [BibTeX] [Abstract] [Download PDF]

    Adaptation to unorganized, congested and uncertain environments is a desirable capability but a challenging task in the development of robotic motion planning algorithms for object grasping. We have to make a trade-off between coping with the environmental complexities using computationally expensive approaches, and enforcing practical manipulation and grasping in real-time. In this paper, we present a brief overview and research objectives towards real-time motion planning for grasping in cluttered and uncertain environments. We present feasible ways of approaching this goal, in which key challenges and plausible solutions are discussed.

    @inproceedings{lincoln31679,
    booktitle = {19th Towards Autonomous Robotic Systems (TAROS) Conference},
    month = {July},
    title = {Towards real-time robotic motion planning for grasping in cluttered and uncertain environments},
    author = {Pengcheng Liu and Khaled Elgeneidy and Simon Pearson and Nazmul Huda and Gerhard Neumann},
    publisher = {Springer},
    year = {2018},
    url = {https://eprints.lincoln.ac.uk/id/eprint/31679/},
    abstract = {Adaptation to unorganized, congested and uncertain environments is a desirable capability but a challenging task in the development of robotic motion planning algorithms for object grasping. We have to make a trade-off between coping with the environmental complexities using computationally expensive approaches, and enforcing practical manipulation and grasping in real-time. In this paper, we present a brief overview and research objectives towards real-time motion planning for grasping in cluttered and uncertain environments. We present feasible ways of approaching this goal, in which key challenges and plausible solutions are discussed.}
    }
  • A. Millard, R. Redpath, A. M. Jewers, C. Arndt, R. Joyce, J. A. Hilder, L. J. McDaid, and D. M. Halliday, “Ardebug: an augmented reality tool for analysing and debugging swarm robotic systems,” Frontiers in robotics and ai, 2018. doi:10.3389/frobt.2018.00087
    [BibTeX] [Abstract] [Download PDF]

    Despite growing interest in collective robotics over the past few years, analysing and debugging the behaviour of swarm robotic systems remains a challenge due to the lack of appropriate tools. We present a solution to this problem: ARDebug, an open-source, cross-platform, and modular tool that allows the user to visualise the internal state of a robot swarm using graphical augmented reality techniques. In this paper we describe the key features of the software, the hardware required to support it, its implementation, and usage examples. ARDebug is specifically designed with adoption by other institutions in mind, and aims to provide an extensible tool that other researchers can easily integrate with their own experimental infrastructure.

    @article{lincoln43298,
    month = {July},
    title = {ARDebug: An Augmented Reality Tool for Analysing and Debugging Swarm Robotic Systems},
    author = {Alan Millard and Richard Redpath and Alistair M. Jewers and Charlotte Arndt and Russell Joyce and James A. Hilder and Liam J. McDaid and David M. Halliday},
    year = {2018},
    doi = {10.3389/frobt.2018.00087},
    journal = {Frontiers in robotics and AI},
    url = {https://eprints.lincoln.ac.uk/id/eprint/43298/},
    abstract = {Despite growing interest in collective robotics over the past few years, analysing and debugging the behaviour of swarm robotic systems remains a challenge due to the lack of appropriate tools. We present a solution to this problem: ARDebug, an open-source, cross-platform, and modular tool that allows the user to visualise the internal state of a robot swarm using graphical augmented reality techniques. In this paper we describe the key features of the software, the hardware required to support it, its implementation, and usage examples. ARDebug is specifically designed with adoption by other institutions in mind, and aims to provide an extensible tool that other researchers can easily integrate with their own experimental infrastructure.}
    }
  • C. Hu, Q. Fu, and S. Yue, “Colias iv: the affordable micro robot platform with bio-inspired vision,” in Giuliani m., assaf t., giannaccini m. (eds) towards autonomous robotic systems. taros 2018. lecture notes in computer science, M. Giuliani, T. Assaf, and M. E. Giannaccini, Eds., Springer, 2018, vol. 10965. doi:10.1007/978-3-319-96728-8_17
    [BibTeX] [Abstract] [Download PDF]

    Vision is one of the most important sensing modalities for robots and has been realized mostly on large platforms. However, for micro robots, which are commonly utilized in swarm robotic studies, the visual ability is seldom applied, or only with limited functions and resolution, due to the challenging requirements on computation power and the high data volume to deal with. This research proposes the low-cost micro ground robot Colias IV, which is particularly designed to meet the requirements for embedded vision-based tasks onboard, such as bio-inspired collision detection neural networks. Numerous successful approaches have demonstrated the proposed micro robot Colias IV to be a feasible platform for introducing vision-based algorithms into swarm robotics.

    @incollection{lincoln31672,
    volume = {10965},
    month = {July},
    author = {Cheng Hu and Qinbing Fu and Shigang Yue},
    booktitle = {Giuliani M., Assaf T., Giannaccini M. (eds) Towards Autonomous Robotic Systems. TAROS 2018. Lecture Notes in Computer Science},
    editor = {Manuel Giuliani and Tareq Assaf and Maria Elena Giannaccini},
    title = {Colias IV: The Affordable Micro Robot Platform with Bio-inspired Vision},
    publisher = {Springer},
    year = {2018},
    doi = {10.1007/978-3-319-96728-8\_17},
    url = {https://eprints.lincoln.ac.uk/id/eprint/31672/},
    abstract = {Vision is one of the most important sensing modalities for robots and has been realized mostly on large platforms. However, for micro robots, which are commonly utilized in swarm robotic studies, the visual ability is seldom applied, or only with limited functions and resolution, due to the challenging requirements on computation power and the high data volume to deal with. This research proposes the low-cost micro ground robot Colias IV, which is particularly designed to meet the requirements for embedded vision-based tasks onboard, such as bio-inspired collision detection neural networks. Numerous successful approaches have demonstrated the proposed micro robot Colias IV to be a feasible platform for introducing vision-based algorithms into swarm robotics.}
    }
  • S. Cosar, Z. Yan, F. Zhao, T. Lambrou, S. Yue, and N. Bellotto, “Thermal camera based physiological monitoring with an assistive robot,” in Ieee international engineering in medicine and biology conference, 2018.
    [BibTeX] [Abstract] [Download PDF]

    This paper presents a physiological monitoring system for assistive robots using a thermal camera. It is based on the detection of subtle changes in temperature observed on different parts of the face. First, we segment and estimate these face regions on thermal images. Then, by applying Fourier analysis on temperature data, we estimate respiration and heartbeat rate. This physiological monitoring system has been integrated in an assistive robot for elderly people at home, as part of the ENRICHME project. Its performance has been evaluated on a new thermal dataset for physiological monitoring, which is made publicly available for research purposes.

    @inproceedings{lincoln31779,
    booktitle = {IEEE International Engineering in Medicine and Biology Conference},
    month = {July},
    title = {Thermal camera based physiological monitoring with an assistive robot},
    author = {Serhan Cosar and Zhi Yan and Feng Zhao and Tryphon Lambrou and Shigang Yue and Nicola Bellotto},
    publisher = {IEEE},
    year = {2018},
    url = {https://eprints.lincoln.ac.uk/id/eprint/31779/},
    abstract = {This paper presents a physiological monitoring system for assistive robots using a thermal camera. It is based on the detection of subtle changes in temperature observed on different parts of the face. First, we segment and estimate these face regions on thermal images. Then, by applying Fourier analysis on temperature data, we estimate respiration and heartbeat rate. This physiological monitoring system has been integrated in an assistive robot for elderly people at home, as part of the ENRICHME project. Its performance has been evaluated on a new thermal dataset for physiological monitoring, which is made publicly available for research purposes.}
    }
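
    A minimal sketch of the Fourier step described above, assuming a pre-extracted mean temperature signal for a face region (the paper's segmentation stage is not reproduced); the sampling rate and frequency band are illustrative choices, not the authors' values:

    # Estimate respiration rate from a face-region temperature signal.
    import numpy as np

    def respiration_rate_bpm(temps, fs):
        """temps: 1-D array of mean region temperatures; fs: sampling rate in Hz."""
        x = temps - np.mean(temps)                 # remove the DC offset
        spectrum = np.abs(np.fft.rfft(x))
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
        band = (freqs >= 0.1) & (freqs <= 0.85)    # plausible breathing band (~6-51 bpm)
        return 60.0 * freqs[band][np.argmax(spectrum[band])]

    # Example: a 0.3 Hz (18 breaths/min) oscillation sampled at 10 Hz
    t = np.arange(0, 60, 0.1)
    signal = 34.0 + 0.2 * np.sin(2 * np.pi * 0.3 * t)
    print(respiration_rate_bpm(signal, fs=10.0))   # ~18.0
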
  • J. P. Fentanes, I. Gould, T. Duckett, S. Pearson, and G. Cielniak, “3d soil compaction mapping through kriging-based exploration with a mobile robot,” Ieee robotics and automation letters, vol. 3, iss. 4, p. 3066–3072, 2018. doi:10.1109/LRA.2018.2849567
    [BibTeX] [Abstract] [Download PDF]

    This paper presents an automated method for creating spatial maps of soil condition with an outdoor mobile robot. Effective soil mapping on farms can enhance yields, reduce inputs and help protect the environment. Traditionally, data are collected manually at an arbitrary set of locations, then soil maps are constructed offline using kriging, a form of Gaussian process regression. This process is laborious and costly, limiting the quality and resolution of the resulting information. Instead, we propose to use an outdoor mobile robot for automatic collection of soil condition data, building soil maps online and also adapting the robot’s exploration strategy on-the-fly based on the current quality of the map. We show how using kriging variance as a reward function for robotic exploration allows for both more efficient data collection and better soil models. This work presents the theoretical foundations for our proposal and an experimental comparison of exploration strategies using soil compaction data collected from a field with a mobile robot. (An illustrative code sketch follows this entry.)

    @article{lincoln32172,
    volume = {3},
    number = {4},
    month = {July},
    author = {Jaime Pulido Fentanes and Iain Gould and Tom Duckett and Simon Pearson and Grzegorz Cielniak},
    title = {3D Soil Compaction Mapping through Kriging-based Exploration with a Mobile Robot},
    publisher = {IEEE},
    year = {2018},
    journal = {IEEE Robotics and Automation Letters},
    doi = {10.1109/LRA.2018.2849567},
    pages = {3066--3072},
    url = {https://eprints.lincoln.ac.uk/id/eprint/32172/},
    abstract = {This paper presents an automated method for creating spatial maps of soil condition with an outdoor mobile robot. Effective soil mapping on farms can enhance yields, reduce inputs and help protect the environment. Traditionally, data are collected manually at an arbitrary set of locations, then soil maps are constructed offline using kriging, a form of Gaussian process regression. This process is laborious and costly, limiting the quality and resolution of the resulting information.
    Instead, we propose to use an outdoor mobile robot for automatic collection of soil condition data, building soil maps online and also adapting the robot's exploration strategy on-the-fly based on the current quality of the map. We show how using kriging variance as a reward function for robotic exploration allows for both more efficient data collection and better soil models. This work presents the theoretical foundations for our proposal and an experimental comparison of exploration strategies using soil compaction data from a field generated with a mobile robot.}
    }
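
    A minimal sketch of the exploration loop, with scikit-learn's Gaussian process regressor standing in for the kriging model and a synthetic field in place of real soil measurements; greedily sampling where the predictive standard deviation is largest mirrors the kriging-variance reward described above:

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)

    def field(X):                         # synthetic stand-in for soil compaction
        return np.sin(X[:, 0] / 10.0) + np.cos(X[:, 1] / 15.0)

    grid = np.array([[x, y] for x in range(0, 100, 5)
                            for y in range(0, 100, 5)], dtype=float)
    X = grid[rng.choice(len(grid), size=3, replace=False)]   # initial probe sites
    y = field(X)

    gp = GaussianProcessRegressor(kernel=RBF(20.0) + WhiteKernel(1e-3))
    for _ in range(20):                   # greedy, variance-driven exploration
        gp.fit(X, y)
        _, std = gp.predict(grid, return_std=True)
        nxt = grid[np.argmax(std)]        # probe the most uncertain location next
        X = np.vstack([X, nxt])
        y = np.append(y, field(nxt[None, :]))
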
  • X. Sun, M. Mangan, and S. Yue, “An analysis of a ring attractor model for cue integration,” in Biomimetic and biohybrid systems, Springer, 2018, p. 459–470. doi:10.1007/978-3-319-95972-6_49
    [BibTeX] [Abstract] [Download PDF]

    Animals and robots must constantly combine multiple streams of noisy information from their senses to guide their actions. Recently, it has been proposed that animals may combine cues optimally using a ring attractor neural network architecture inspired by the head direction system of rats, augmented with a dynamic re-weighting mechanism. In this work we report that an older and simpler ring attractor network architecture, requiring no re-weighting mechanism, combines cues according to their certainty for moderate cue conflicts but converges on the most certain cue for larger conflicts. These results are consistent with observations in animal experiments that show sub-optimal cue integration and switching from cue integration to cue selection strategies. This work therefore demonstrates an alternative architecture for those seeking neural correlates of sensory integration in animals. In addition, performance is shown to be robust to noise and miniaturization, and thus provides an efficient solution for artificial systems. (An illustrative code sketch follows this entry.)

    @incollection{lincoln33007,
    month = {July},
    author = {Xuelong Sun and Michael Mangan and Shigang Yue},
    note = {This publication can be purchased online at https://www.springer.com/us/book/9783319959719},
    booktitle = {Biomimetic and Biohybrid Systems},
    title = {An Analysis of a Ring Attractor Model for Cue Integration},
    publisher = {Springer},
    year = {2018},
    doi = {10.1007/978-3-319-95972-6\_49},
    pages = {459--470},
    url = {https://eprints.lincoln.ac.uk/id/eprint/33007/},
    abstract = {Animals and robots must constantly combine multiple streams of noisy information from their senses to guide their actions. Recently, it has been proposed that animals may combine cues optimally using a ring attractor neural network architecture inspired by the head direction system of rats augmented with a dynamic re-weighting mechanism. In this work we report that an older and simpler ring attractor network architecture, requiring no re-weighting property combines cues according to their certainty for moderate cue conflicts but converges on the most certain cue for larger conflicts. These results are consistent with observations in animal experiments that show sub-optimal cue integration and switching from cue integration to cue selection strategies. This work therefore demonstrates an alternative architecture for those seeking neural correlates of sensory integration in animals. In addition, performance is shown robust to noise and miniaturization and thus provides an efficient solution for artificial systems.}
    }
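
    A minimal sketch of a ring attractor integrating two conflicting directional cues, using toy parameters of my own choosing rather than the paper's model; the population vector of the settled activity gives the combined estimate:

    import numpy as np

    N = 100
    theta = np.linspace(0, 2 * np.pi, N, endpoint=False)   # preferred directions

    def cue(mu_deg, kappa):                # von Mises-shaped input bump
        return np.exp(kappa * np.cos(theta - np.deg2rad(mu_deg)))

    # local excitation plus uniform inhibition around the ring
    W = 1.5 * np.exp(5.0 * (np.cos(theta[:, None] - theta[None, :]) - 1.0)) - 0.4

    u = np.zeros(N)
    inp = cue(80, 4.0) + cue(110, 4.0)     # a moderate cue conflict
    for _ in range(500):                   # simple rate dynamics to steady state
        u += 0.1 * (-u + np.maximum(W @ u / N + inp, 0.0))

    estimate = np.rad2deg(np.angle(np.sum(u * np.exp(1j * theta))))
    print(f"integrated heading: {estimate:.1f} deg")   # lands between the cues
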
  • P. Bosilj, T. Duckett, and G. Cielniak, “Connected attribute morphology for unified vegetation segmentation and classification in precision agriculture,” Computers in industry, vol. 98, p. 226–240, 2018. doi:10.1016/j.compind.2018.02.003
    [BibTeX] [Abstract] [Download PDF]

    Discriminating value crops from weeds is an important task in precision agriculture. In this paper, we propose a novel image processing pipeline based on attribute morphology for both the segmentation and classification tasks. The commonly used approaches for vegetation segmentation often rely on thresholding techniques which reach their decisions globally. By contrast, the proposed method works with connected components obtained by image threshold decomposition, which are naturally nested in a hierarchical structure called the max-tree, and various attributes calculated from these regions. Image segmentation is performed by attribute filtering, preserving or discarding the regions based on their attribute value and allowing for the decision to be reached locally. This segmentation method naturally selects a collection of foreground regions rather than pixels, and the same data structure used for segmentation can be further reused to provide the features for classification, which is realised in our experiments by a support vector machine (SVM). We apply our methods to normalised difference vegetation index (NDVI) images, and demonstrate the performance of the pipeline on a dataset collected by the authors in an onion field, as well as a publicly available dataset for sugar beets. The results show that the proposed segmentation approach can segment the fine details of plant regions locally, in contrast to the state-of-the-art thresholding methods, while providing discriminative features which enable efficient and competitive classification rates for crop/weed discrimination. (An illustrative code sketch follows this entry.)

    @article{lincoln31634,
    volume = {98},
    month = {June},
    author = {Petra Bosilj and Tom Duckett and Grzegorz Cielniak},
    title = {Connected attribute morphology for unified vegetation segmentation and classification in precision agriculture},
    publisher = {Elsevier},
    year = {2018},
    journal = {Computers in Industry},
    doi = {10.1016/j.compind.2018.02.003},
    pages = {226--240},
    url = {https://eprints.lincoln.ac.uk/id/eprint/31634/},
    abstract = {Discriminating value crops from weeds is an important task in precision agriculture. In this paper, we propose a novel image processing pipeline based on attribute morphology for both the segmentation and classification tasks. The commonly used approaches for vegetation segmentation often rely on thresholding techniques which reach their decisions globally. By contrast, the proposed method works with connected components obtained by image threshold decomposition, which are naturally nested in a hierarchical structure called the max-tree, and various attributes calculated from these regions. Image segmentation is performed by attribute filtering, preserving or discarding the regions based on their attribute value and allowing for the decision to be reached locally. This segmentation method naturally selects a collection of foreground regions rather than pixels, and the same data structure used for segmentation can be further reused to provide the features for classification, which is realised in our experiments by a support vector machine (SVM). We apply our methods to normalised difference vegetation index (NDVI) images, and demonstrate the performance of the pipeline on a dataset collected by the authors in an onion field, as well as a publicly available dataset for sugar beets. The results show that the proposed segmentation approach can segment the fine details of plant regions locally, in contrast to the state-of-the-art thresholding methods, while providing discriminative features which enable efficient and competitive classification rates for crop/weed discrimination.}
    }
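
    A minimal sketch of attribute filtering with scikit-image, using area as the single example attribute (the paper computes a richer attribute set over the max-tree) and a random array standing in for a real NDVI image:

    import numpy as np
    from skimage.morphology import area_opening
    from skimage.measure import label, regionprops

    ndvi = np.random.rand(128, 128)                   # stand-in for an NDVI image
    filtered = area_opening(ndvi, area_threshold=64)  # drop small bright components

    # per-region attributes, reusable as classifier features (e.g. for an SVM)
    regions = regionprops(label(filtered > 0.5))
    features = np.array([[r.area, r.eccentricity, r.solidity] for r in regions])
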
  • A. Binch, N. Cooke, and C. Fox, “Rumex and urtica detection in grassland by uav,” in 14th international conference on precision agriculture, 2018.
    [BibTeX] [Abstract] [Download PDF]

    Previous work (Binch & Fox, 2017) used autonomous ground robotic platforms to successfully detect Urtica (nettle) and Rumex (dock) weeds in grassland, to improve farm productivity and the environment through precision herbicide spraying. It assumed that ground robots swathe entire fields to both detect and spray weeds, but this is a slow process as the slow ground platform must drive over every square meter of the field, even where there are no weeds. The present study examines a complementary approach, using unmanned aerial vehicles (UAVs) to perform faster detections, in order to inform slower ground robots of weed locations and direct them to spray from the ground. In a controlled study, it finds that the existing state-of-the-art (Binch & Fox, 2017) ground detection algorithm based on local binary patterns and support vector machines is easily re-usable from a UAV with a 4K camera, despite large differences in camera type, distance, perspective and motion, without retraining. The algorithm achieves 83-95% accuracy on ground platform data with 1-3 independent views, and improves to 90% from single views on aerial data. However, this is only attainable at low altitudes up to 8 feet, speeds below 0.3 m/s, and a vertical view angle, suggesting that autonomous or manual UAV swathing is required to cover fields, rather than a single high-altitude photograph. This demonstrates for the first time that a combined aerial detection and ground spraying system is feasible for Rumex and Urtica in grassland, using UAVs to replace the swathing and detection of weeds and then dispatching ground platforms to spray them at the detection sites (as spraying by UAV is illegal in EU countries). This reduces the total time required to spray, as the UAV performs the survey stage faster than a ground platform. (An illustrative code sketch of the detection pipeline follows this entry.)

    @inproceedings{lincoln31363,
    booktitle = {14th International Conference on Precision Agriculture},
    month = {June},
    title = {Rumex and Urtica detection in grassland by UAV},
    author = {Adam Binch and Nigel Cooke and Charles Fox},
    publisher = {14th International Conference on Precision Agriculture},
    year = {2018},
    url = {https://eprints.lincoln.ac.uk/id/eprint/31363/},
    abstract = {. Previous work (Binch \& Fox, 2017) used autonomous ground robotic platforms to successfully detect Urtica (nettle) and Rumex (dock) weeds in grassland, to improve farm productivity and the environment through precision herbicide spraying. It assumed that ground robots swathe entire fields to both detect and spray weeds, but this is a slow process as the slow ground platform must drive over every square meter of the field even where there are no weeds. The present study examines a complimentary approach, using unmanned aerial vehicles (UAVs) to perform faster detections, in order to inform slower ground robots of weed location and direct them to spray them from the ground. In a controlled study, it finds that the existing state-of-the-art (Binch \& Fox, 2017) ground detection algorithm based on local binary patterns and support vector machines is easily re-usable from a UAV with 4K camera despite large differences in camera type, distance, perspective and motion, without retraining. The algorithm achieves 83-95\% accuracy on ground platform data with 1-3 independent views, and improves to 90\% from single views on aerial data. However this is only attainable at low altitudes up to 8 feet, speeds below 0.3m/s, and a vertical view angle, suggesting that autonomous or manual UAV swathing is required to cover fields, rather than use of a single high-altitude photograph. This demonstrates for the first time that combined aerial detection with ground spraying system is feasible for Rumex and Urtica in grassland, using UAVs to replace the swathing and detection of weeds then dispatching ground platforms to spray them at the detection sites (as spraying by UAV is illegal in EU countries). This reduces total time requires to spray as the UAV performs the survey stage faster than a ground platform.}
    }
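
    A minimal sketch, on synthetic patches, of the local binary pattern plus support vector machine pipeline named above; the patch size, LBP settings and RBF kernel are illustrative assumptions rather than the paper's configuration:

    import numpy as np
    from skimage.feature import local_binary_pattern
    from sklearn.svm import SVC

    def lbp_histogram(patch, P=8, R=1.0):
        codes = local_binary_pattern(patch, P, R, method="uniform")
        hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
        return hist

    rng = np.random.default_rng(1)
    patches = rng.random((40, 32, 32))        # toy "grass" vs "weed" patches
    labels = np.repeat([0, 1], 20)
    Xf = np.array([lbp_histogram(p) for p in patches])
    clf = SVC(kernel="rbf").fit(Xf, labels)
    print(clf.predict(Xf[:3]))
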
  • T. Duckett, S. Pearson, S. Blackmore, B. Grieve, W. Chen, G. Cielniak, J. Cleaversmith, J. Dai, S. Davis, C. Fox, P. From, I. Georgilas, R. Gill, I. Gould, M. Hanheide, F. Iida, L. Mihalyova, S. Nefti-Meziani, G. Neumann, P. Paoletti, T. Pridmore, D. Ross, M. Smith, M. Stoelen, M. Swainson, S. Wane, P. Wilson, I. Wright, and G. Yang, “Agricultural robotics: the future of robotic agriculture,” UK-RAS Network White Papers, Other, 2018.
    [BibTeX] [Abstract] [Download PDF]

    Agri-Food is the largest manufacturing sector in the UK. It supports a food chain that generates over £108bn p.a., with 3.9m employees in a truly international industry and exports £20bn of UK manufactured goods. However, the global food chain is under pressure from population growth, climate change, political pressures affecting migration, population drift from rural to urban regions and the demographics of an aging global population. These challenges are recognised in the UK Industrial Strategy white paper and backed by significant investment via a Wave 2 Industrial Challenge Fund Investment (“Transforming Food Production: from Farm to Fork”). Robotics and Autonomous Systems (RAS) and associated digital technologies are now seen as enablers of this critical food chain transformation. To meet these challenges, this white paper reviews the state of the art in the application of RAS in Agri-Food production and explores research and innovation needs to ensure these technologies reach their full potential and deliver the necessary impacts in the Agri-Food sector.

    @techreport{lincoln32517,
    month = {June},
    type = {Other},
    title = {Agricultural Robotics: The Future of Robotic Agriculture},
    author = {Tom Duckett and Simon Pearson and Simon Blackmore and Bruce Grieve and Wen-Hua Chen and Grzegorz Cielniak and Jason Cleaversmith and Jian Dai and Steve Davis and Charles Fox and Pal From and Ioannis Georgilas and Richie Gill and Iain Gould and Marc Hanheide and Fumiya Iida and Lyudmila Mihalyova and Samia Nefti-Meziani and Gerhard Neumann and Paolo Paoletti and Tony Pridmore and Dave Ross and Melvyn Smith and Martin Stoelen and Mark Swainson and Sam Wane and Peter Wilson and Isobel Wright and Guang-Zhong Yang},
    publisher = {UK-RAS Network White Papers},
    year = {2018},
    institution = {UK-RAS Network White Papers},
    url = {https://eprints.lincoln.ac.uk/id/eprint/32517/},
    abstract = {Agri-Food is the largest manufacturing sector in the UK. It supports a food chain that generates over {\pounds}108bn p.a., with 3.9m employees in a truly international industry and exports {\pounds}20bn of UK manufactured goods. However, the global food chain is under pressure from population growth, climate change, political pressures affecting migration, population drift from rural to urban regions and the demographics of an aging global population. These challenges are recognised in the UK Industrial Strategy white paper and backed by significant investment via a Wave 2 Industrial Challenge Fund Investment ("Transforming Food Production: from Farm to Fork"). Robotics and Autonomous Systems (RAS) and associated digital technologies are now seen as enablers of this critical food chain transformation. To meet these challenges, this white paper reviews the state of the art in the application of RAS in Agri-Food production and explores research and innovation needs to ensure these technologies reach their full potential and deliver the necessary impacts in the Agri-Food sector.}
    }
  • F. Camara and C. Fox, “Filtration analysis of pedestrian-vehicle interactions for autonomous vehicle control,” in Proceedings of the 15th international conference on intelligent autonomous systems, 2018.
    [BibTeX] [Abstract] [Download PDF]

    Interacting with humans remains a challenge for autonomous vehicles (AVs). When a pedestrian wishes to cross the road in front of the vehicle at an unmarked crossing, the pedestrian and AV must compete for the space, which may be considered as a game-theoretic interaction in which one agent must yield to the other. To inform development of new real-time AV controllers in this setting, this study collects and analyses detailed, manually-annotated, temporal data from real-world human road crossings as they interact with manual drive vehicles. It studies the temporal orderings (filtrations) in which features are revealed to the vehicle and their informativeness over time. It presents a new framework suggesting how optimal stopping controllers may then use such data to enable an AV to decide when to act (by speeding up, slowing down, or otherwise signalling intent to the pedestrian) or, alternatively, to continue at its current speed in order to gather additional information from new features, including signals from that pedestrian, before acting itself. (An illustrative code sketch follows this entry.)

    @inproceedings{lincoln32484,
    booktitle = {Proceedings of the 15th International Conference on Intelligent Autonomous Systems},
    month = {June},
    title = {Filtration analysis of pedestrian-vehicle interactions for autonomous vehicle control},
    author = {Fanta Camara and Charles Fox},
    publisher = {15th International Conference on Intelligent Autonomous Systems},
    year = {2018},
    url = {https://eprints.lincoln.ac.uk/id/eprint/32484/},
    abstract = {Interacting with humans remains a challenge for autonomous vehicles (AVs). When a pedestrian wishes to cross the road in front of the vehicle at an unmarked crossing, the pedestrian and AV must compete for the space, which may be considered as a game-theoretic interaction in which one agent must yield to the other. To inform development of new real-time AV controllers in this setting, this study collects and analyses detailed, manually-annotated, temporal data from real-world human road crossings as they interact with manual drive vehicles. It studies the temporal orderings (filtrations) in which features are revealed to the vehicle and their informativeness over time. It presents a new framework suggesting how optimal stopping controllers may then use such data to enable an AV to decide when to act (by speeding up, slowing down, or otherwise signalling intent to the pedestrian) or alternatively, to continue at its current speed in order to gather additional information from new features, including signals from that pedestrian, before acting itself.}
    }
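
    A minimal sketch, with invented likelihoods and costs, of the optimal-stopping flavour of control the framework suggests: the vehicle updates its belief that the pedestrian will yield as each feature is revealed, and acts once acting is expected to be cheaper than waiting:

    p_yield = 0.5                        # prior belief that the pedestrian yields
    WAIT_COST, CRASH_COST = 5.0, 100.0   # invented expected costs

    def update(p, lik_yield, lik_cross):
        """Bayesian update of the yield belief from one revealed feature."""
        return p * lik_yield / (p * lik_yield + (1 - p) * lik_cross)

    # each tuple: likelihood of the observed feature under yield vs. cross
    for step, (ly, lc) in enumerate([(0.7, 0.3), (0.8, 0.2), (0.9, 0.1)], 1):
        p_yield = update(p_yield, ly, lc)
        risk_of_acting = (1 - p_yield) * CRASH_COST
        if risk_of_acting < WAIT_COST:   # stop gathering features and proceed
            print(f"act after {step} features (belief {p_yield:.2f})")
            break
        print(f"wait: belief {p_yield:.2f}, risk {risk_of_acting:.1f}")
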
  • N. Gildert, A. Millard, A. Pomfret, and J. Timmis, “The need for combining implicit and explicit communication in cooperative robotic systems,” Frontiers in robotics and ai, 2018. doi:10.3389/frobt.2018.00065
    [BibTeX] [Abstract] [Download PDF]

    As the number of robots used in warehouses and manufacturing increases, so too does the need for robots to be able to manipulate objects, not only independently, but also in collaboration with humans and other robots. Our ability to effectively coordinate our actions with fellow humans encompasses several behaviours that are collectively referred to as joint action, and has inspired advances in human-robot interaction by leveraging our natural ability to interpret implicit cues. However, our capacity to efficiently coordinate on object manipulation tasks remains an advantageous process that is yet to be fully exploited in robotic applications. Humans achieve this form of coordination by combining implicit communication (where information is inferred) and explicit communication (direct communication through an established channel) in varying degrees according to the task at hand. Although these two forms of communication have previously been implemented in robotic systems, no system exists that integrates the two in a task-dependent adaptive manner. In this paper, we review existing work on joint action in human-robot interaction, and analyse the state-of-the-art in robot-robot interaction that could act as a foundation for future cooperative object manipulation approaches. We identify key mechanisms that must be developed in order for robots to collaborate more effectively, with other robots and humans, on object manipulation tasks in shared autonomy spaces.

    @article{lincoln43297,
    month = {June},
    title = {The Need for Combining Implicit and Explicit Communication in Cooperative Robotic Systems},
    author = {Naomi Gildert and Alan Millard and Andrew Pomfret and Jon Timmis},
    year = {2018},
    doi = {10.3389/frobt.2018.00065},
    journal = {Frontiers in Robotics and AI},
    url = {https://eprints.lincoln.ac.uk/id/eprint/43297/},
    abstract = {As the number of robots used in warehouses and manufacturing increases, so too does the need for robots to be able to manipulate objects, not only independently, but also in collaboration with humans and other robots. Our ability to effectively coordinate our actions with fellow humans encompasses several behaviours that are collectively referred to as joint action, and has inspired advances in human-robot interaction by leveraging our natural ability to interpret implicit cues. However, our capacity to efficiently coordinate on object manipulation tasks remains an advantageous process that is yet to be fully exploited in robotic applications. Humans achieve this form of coordination by combining implicit communication (where information is inferred) and explicit communication (direct communication through an established channel) in varying degrees according to the task at hand. Although these two forms of communication have previously been implemented in robotic systems, no system exists that integrates the two in a task-dependent adaptive manner. In this paper, we review existing work on joint action in human-robot interaction, and analyse the state-of-the-art in robot-robot interaction that could act as a foundation for future cooperative object manipulation approaches. We identify key mechanisms that must be developed in order for robots to collaborate more effectively, with other robots and humans, on object manipulation tasks in shared autonomy spaces.}
    }
  • P. From, L. Grimstad, M. Hanheide, S. Pearson, and G. Cielniak, “Rasberry – robotic and autonomous systems for berry production,” Mechanical engineering magazine select articles, vol. 140, iss. 6, 2018. doi:10.1115/1.2018-JUN-6
    [BibTeX] [Abstract] [Download PDF]

    The soft fruit industry is facing unprecedented challenges due to its reliance on manual labour. We present a newly launched robotics initiative which will help to address the issues faced by the industry and enable automation of the main processes involved in soft fruit production. The RASberry project (Robotics and Autonomous Systems for Berry Production) aims to develop autonomous fleets of robots for the horticultural industry. To achieve this goal, the project will bridge several current technological gaps, including the development of a mobile platform suitable for strawberry fields, software components for fleet management, in-field navigation and mapping, long-term operation, and safe human-robot collaboration. In this paper, we provide a general overview of the project, describe the main system components, highlight interesting challenges from a control point of view and then present three specific applications of the robotic fleets in soft fruit production. The applications demonstrate how robotic fleets can benefit the soft fruit industry by significantly decreasing production costs, addressing labour shortages and being the first step towards fully autonomous robotic systems for agriculture.

    @article{lincoln32874,
    volume = {140},
    number = {6},
    month = {June},
    author = {Pal From and Lars Grimstad and Marc Hanheide and Simon Pearson and Grzegorz Cielniak},
    title = {RASberry - Robotic and Autonomous Systems for Berry Production},
    publisher = {ASME},
    year = {2018},
    journal = {Mechanical Engineering Magazine Select Articles},
    doi = {10.1115/1.2018-JUN-6},
    url = {https://eprints.lincoln.ac.uk/id/eprint/32874/},
    abstract = {The soft fruit industry is facing unprecedented challenges due to its reliance of manual labour. We are presenting a newly launched robotics initiative which will help to address the issues faced by the industry and enable automation of the main processes involved in soft fruit production. The RASberry project (Robotics and Autonomous Systems for Berry Production) aims to develop autonomous fleets of robots for horticultural industry. To achieve this goal, the project will bridge several current technological gaps including the development of a mobile platform suitable for the strawberry fields, software components for fleet management, in-field navigation and mapping, long-term operation, and safe human-robot collaboration.
    In this paper, we provide a general overview of the project, describe the main system components, highlight interesting challenges from a control point of view and then present three specific applications of the robotic fleets in soft fruit production. The applications demonstrate how robotic fleets can benefit the soft fruit industry by significantly decreasing production costs, addressing labour shortages and being the first step towards fully autonomous robotic systems for agriculture.}
    }
  • A. Schofield, I. Gilchrist, M. Bloj, A. Leonardis, and N. Bellotto, “Understanding images in biological and computer vision,” Interface focus, vol. 8, iss. 4, p. 1–3, 2018. doi:10.1098/rsfs.2018.0027
    [BibTeX] [Abstract] [Download PDF]

    This issue of Interface Focus is a collection of papers arising out of a Royal Society Discussion meeting entitled ‘Understanding images in biological and computer vision’, held at Carlton Terrace on the 19th and 20th February, 2018. There is a strong tradition of inter-disciplinarity in the study of visual perception and visual cognition. Many of the great natural scientists including Newton [1], Young [2] and Maxwell (see [3]) were intrigued by the relationship between light, surfaces and perceived colour considering both physical and perceptual processes. Brewster [4] invented both the lenticular stereoscope and the binocular camera but also studied the perception of shape-from-shading. More recently, Marr’s [5] description of visual perception as an information processing problem led to great advances in our understanding of both biological and computer vision: both the computer vision and biological vision communities have a Marr medal. The recent successes of deep neural networks in classifying the images that we see and the fMRI images that reveal the activity in our brains during the act of seeing are both intriguing. The links between machine vision systems and biology may at times be weak but the similarity of some of the operations is nonetheless striking [6]. This two-day meeting brought together researchers from the fields of biological and computer vision, robotics, neuroscience, computer science and psychology to discuss the most recent developments in the field. The meeting was divided into four themes: vision for action, visual appearance, vision for recognition and machine learning.

    @article{lincoln32403,
    volume = {8},
    number = {4},
    month = {June},
    author = {Andrew Schofield and Iain Gilchrist and Marina Bloj and Ales Leonardis and Nicola Bellotto},
    title = {Understanding images in biological and computer vision},
    publisher = {The Royal Society},
    year = {2018},
    journal = {Interface Focus},
    doi = {10.1098/rsfs.2018.0027},
    pages = {1--3},
    url = {https://eprints.lincoln.ac.uk/id/eprint/32403/},
    abstract = {This issue of Interface Focus is a collection of papers arising out of a Royal Society Discussion meeting entitled 'Understanding images in biological and computer vision' held at Carlton Terrace on the 19th and 20th February, 2018. There is a strong tradition of inter-disciplinarity in the study of visual perception and visual cognition. Many of the great natural scientists including Newton [1], Young [2] and Maxwell (see [3]) were intrigued by the relationship between light, surfaces and perceived colour considering both physical and perceptual processes. Brewster [4] invented both the lenticular stereoscope and the binocular camera but also studied the perception of shape-from-shading. More recently, Marr's [5] description of visual perception as an information processing problem led to great advances in our understanding of both biological and computer vision: both the computer vision and biological vision communities have a Marr medal. The recent successes of deep neural networks in classifying the images that we see and the fMRI images that reveal the activity in our brains during the act of seeing are both intriguing. The links between machine vision systems and biology may at times be weak but the similarity of some of the operations is nonetheless striking [6]. This two-day meeting brought together researchers from the fields of biological and computer vision, robotics, neuroscience, computer science and psychology to discuss the most recent developments in the field. The meeting was divided into four themes: vision for action, visual appearance, vision for recognition and machine learning.}
    }
  • G. Das, G. Cielniak, P. From, and M. Hanheide, “Discrete event simulations for scalability analysis of robotic in-field logistics in agriculture – a case study,” in Ieee international conference on robotics and automation, workshop on robotic vision and action in agriculture, 2018.
    [BibTeX] [Abstract] [Download PDF]

    Agriculture lends itself to automation due to its labour-intensive processes and the strain posed on workers in the domain. This paper presents a discrete event simulation (DES) framework allowing to rapidly assess different processes and layouts for in-field logistics operations employing a fleet of autonomous transportation robots supporting soft-fruit pickers. The proposed framework can help to answer pressing questions regarding the economic viability and scalability of such fleet operations, which we illustrate and discuss in the context of a specific case study considering strawberry picking operations. In particular, this paper looks into the effect of a robotic fleet in scenarios with different transportation requirements, as well as the effect of allocation algorithms, all without requiring resource-demanding field trials. The presented framework demonstrates great potential for future development and optimisation of efficient robotic fleet operations in agriculture. (An illustrative code sketch follows this entry.)

    @inproceedings{lincoln32170,
    booktitle = {IEEE International Conference on Robotics and Automation, Workshop on Robotic Vision and Action in Agriculture},
    month = {May},
    title = {Discrete Event Simulations for Scalability Analysis of Robotic In-Field Logistics in Agriculture - A Case Study},
    author = {Gautham Das and Grzegorz Cielniak and Pal From and Marc Hanheide},
    year = {2018},
    url = {https://eprints.lincoln.ac.uk/id/eprint/32170/},
    abstract = {Agriculture lends itself to automation due to its labour-intensive processes and the strain posed on workers in the domain. This paper presents a discrete event simulation (DES) framework allowing to rapidly assess different processes and layouts for in-field logistics operations employing a fleet of autonomous transportation robots supporting soft-fruit pickers. The proposed framework can help to answer pressing questions regarding the economic viability and scalability of such fleet operations, which we illustrate and discuss in the context of a specific case study considering strawberry picking operations. In particular, this paper looks into the effect of a robotic fleet in scenarios with different transportation requirements, as well as on the effect of allocation algorithms, all without requiring resource demanding field trials. The presented framework demonstrates a great potential for future development and optimisation of the efficient robotic fleet operations in agriculture.}
    }
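
    A minimal sketch of such a discrete event simulation using the simpy library as a stand-in for the paper's framework; the picking and transport rates, fleet size and shift length are invented:

    import random
    import simpy

    def picker(env, fleet, waits):
        while True:
            yield env.timeout(random.expovariate(1 / 15.0))  # tray full ~every 15 min
            t0 = env.now
            with fleet.request() as req:                     # wait for a free robot
                yield req
                yield env.timeout(4.0)                       # transport to packing
            waits.append(env.now - t0)

    env = simpy.Environment()
    fleet = simpy.Resource(env, capacity=2)                  # fleet size under test
    waits = []
    for _ in range(8):                                       # eight pickers
        env.process(picker(env, fleet, waits))
    env.run(until=8 * 60)                                    # one 8-hour shift (minutes)
    print(f"mean wait+transport: {sum(waits) / len(waits):.1f} min, {len(waits)} trips")
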
  • J. P. Fentanes, I. Gould, T. Duckett, S. Pearson, and G. Cielniak, “Soil compaction mapping through robot exploration: a study into kriging parameters,” in Icra 2018 workshop on robotic vision and action in agriculture, 2018.
    [BibTeX] [Abstract] [Download PDF]

    Soil condition mapping is a manual, laborious and costly process which requires soil measurements to be taken at fixed, pre-defined locations, limiting the quality of the resulting information maps. For these reasons, we propose the use of an outdoor mobile robot equipped with an actuated soil probe for automatic mapping of soil condition, allowing for both more efficient data collection and better soil models. The robot builds soil models on-line using standard geo-statistical methods such as kriging, and uses the quality of the model to drive the exploration. In this work, we take a closer look at the kriging process itself and how its parameters affect the exploration outcome. For this purpose, we employ soil compaction datasets collected from two real fields of varying characteristics and analyse how the parameters vary between fields and how they change during the exploration process. We particularly focus on the stability of the kriging parameters, their evolution over the exploration process and their influence on the resulting soil maps. (An illustrative code sketch follows this entry.)

    @inproceedings{lincoln32171,
    booktitle = {ICRA 2018 Workshop on Robotic Vision and Action in Agriculture},
    month = {May},
    title = {Soil Compaction Mapping Through Robot Exploration: A Study into Kriging Parameters},
    author = {Jaime Pulido Fentanes and Iain Gould and Tom Duckett and Simon Pearson and Grzegorz Cielniak},
    publisher = {IEEE},
    year = {2018},
    url = {https://eprints.lincoln.ac.uk/id/eprint/32171/},
    abstract = {Soil condition mapping is a manual, laborious and costly process which requires soil measurements to be taken at fixed, pre-defined locations, limiting the quality of the resulting information maps. For these reasons, we propose the use of an outdoor mobile robot equipped with an actuated soil probe for automatic mapping of soil condition, allowing for both, more efficient data collection and better soil models. The robot is building soil models on-line using standard geo-statistical methods such as kriging, and is using the quality of the model to drive the exploration. In this work, we take a closer look at the kriging process itself and how its parameters affect the exploration outcome. For this purpose, we employ soil compaction datasets collected from two real fields of varying characteristics and analyse how the parameters vary between fields and how they change during the exploration process. We particularly focus on the stability of the kriging parameters, their evolution over the exploration process and influence on the resulting soil maps.}
    }
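
    A minimal sketch, on synthetic semivariances, of fitting the kriging parameters in question, assuming the exponential variogram model commonly used in kriging (nugget, sill and range):

    import numpy as np
    from scipy.optimize import curve_fit

    def exp_variogram(h, nugget, sill, rng_):
        return nugget + sill * (1.0 - np.exp(-h / rng_))

    # synthetic empirical semivariances at binned lag distances (metres)
    lags = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 60.0, 80.0])
    gamma = exp_variogram(lags, 0.05, 1.0, 25.0) \
            + 0.02 * np.random.default_rng(3).standard_normal(len(lags))

    (nugget, sill, rng_), _ = curve_fit(exp_variogram, lags, gamma,
                                        p0=[0.1, 1.0, 10.0], maxfev=5000)
    print(f"nugget={nugget:.3f} sill={sill:.3f} range={rng_:.1f} m")
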
  • G. H. W. Gebhardt, K. Daun, M. Schnaubelt, and G. Neumann, “Learning robust policies for object manipulation with robot swarms,” in Ieee international conference on robotics and automation, 2018.
    [BibTeX] [Abstract] [Download PDF]

    Swarm robotics investigates how a large population of robots with simple actuation and limited sensors can collectively solve complex tasks. One particularly interesting application of robot swarms is autonomous object assembly. Such tasks have been solved successfully with robot swarms that are controlled by a human operator using a light source. In this paper, we present a method to solve such assembly tasks autonomously based on policy search methods. We split the assembly process into two subtasks: generating a high-level assembly plan and learning a low-level object movement policy. The assembly policy plans the trajectories for each object and the object movement policy controls the trajectory execution. Learning the object movement policy is challenging as it depends on the complex state of the swarm which consists of an individual state for each agent. To approach this problem, we introduce a representation of the swarm which is based on Hilbert space embeddings of distributions. This representation is invariant to the number of agents in the swarm as well as to the allocation of an agent to its position in the swarm. These invariances make the learned policy robust to changes in the swarm and also reduce the search space for the policy search method significantly. We show that the resulting system is able to solve assembly tasks with varying object shapes in multiple simulation scenarios and evaluate the robustness of our representation to changes in the swarm size. Furthermore, we demonstrate that the policies learned in simulation are robust enough to be transferred to real robots. (An illustrative code sketch follows this entry.)

    @inproceedings{lincoln31674,
    booktitle = {IEEE International Conference on Robotics and Automation},
    month = {May},
    title = {Learning robust policies for object manipulation with robot swarms},
    author = {G. H. W. Gebhardt and K. Daun and M. Schnaubelt and G. Neumann},
    year = {2018},
    url = {https://eprints.lincoln.ac.uk/id/eprint/31674/},
    abstract = {Swarm robotics investigates how a large population of robots with simple actuation and limited sensors can collectively solve complex tasks. One particular interesting application with robot swarms is autonomous object assembly.
    Such tasks have been solved successfully with robot swarms that are controlled by a human operator using a light source.
    In this paper, we present a method to solve such assembly tasks autonomously based on policy search methods. We split the assembly process in two subtasks: generating a high-level assembly plan and learning a low-level object movement policy. The assembly policy plans the trajectories for each object and the object movement policy controls the trajectory execution. Learning the object movement policy is challenging as it depends on the complex state of the swarm which consists of an individual state for each agent. To approach this problem, we introduce a representation of the swarm which is based on Hilbert space embeddings of distributions. This representation is invariant to the number of agents in the swarm as well as to the allocation of an agent to its position in the swarm. These invariances make the learned policy robust to changes in the swarm and also reduce the search space for the policy search method significantly. We show that the resulting system is able to solve assembly tasks with varying object shapes in multiple simulation scenarios and evaluate the robustness of our representation to changes in the swarm size. Furthermore, we demonstrate that the policies learned in simulation are robust enough to be transferred to real robots.}
    }
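
    A minimal sketch of the representation idea: a kernel mean embedding of agent positions evaluated at fixed inducing points (the grid and bandwidth are my choices), giving a fixed-length feature vector that is invariant to agent relabelling and independent of swarm size:

    import numpy as np

    def swarm_embedding(positions, inducing, bandwidth=0.2):
        """Mean RBF feature of agent positions at fixed inducing points."""
        d2 = ((positions[:, None, :] - inducing[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * bandwidth ** 2)).mean(axis=0)  # average over agents

    grid = np.array([[x, y] for x in np.linspace(0, 1, 5)
                            for y in np.linspace(0, 1, 5)])
    swarm_a = np.random.rand(30, 2)            # 30 agents in the unit square
    swarm_b = np.random.permutation(swarm_a)   # same swarm, agents relabelled
    print(np.allclose(swarm_embedding(swarm_a, grid),
                      swarm_embedding(swarm_b, grid)))   # True: permutation-invariant
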
  • G. H. W. Gebhardt, K. Daun, M. Schnaubelt, and G. Neumann, “Robust learning of object assembly tasks with an invariant representation of robot swarms,” in International conference on robotics and automation (icra), 2018.
    [BibTeX] [Abstract] [Download PDF]

    Swarm robotics investigates how a large population of robots with simple actuation and limited sensors can collectively solve complex tasks. One particularly interesting application of robot swarms is autonomous object assembly. Such tasks have been solved successfully with robot swarms that are controlled by a human operator using a light source. In this paper, we present a method to solve such assembly tasks autonomously based on policy search methods. We split the assembly process into two subtasks: generating a high-level assembly plan and learning a low-level object movement policy. The assembly policy plans the trajectories for each object and the object movement policy controls the trajectory execution. Learning the object movement policy is challenging as it depends on the complex state of the swarm which consists of an individual state for each agent. To approach this problem, we introduce a representation of the swarm which is based on Hilbert space embeddings of distributions. This representation is invariant to the number of agents in the swarm as well as to the allocation of an agent to its position in the swarm. These invariances make the learned policy robust to changes in the swarm and also reduce the search space for the policy search method significantly. We show that the resulting system is able to solve assembly tasks with varying object shapes in multiple simulation scenarios and evaluate the robustness of our representation to changes in the swarm size. Furthermore, we demonstrate that the policies learned in simulation are robust enough to be transferred to real robots.

    @inproceedings{lincoln30920,
    booktitle = {International Conference on Robotics and Automation (ICRA)},
    month = {May},
    title = {Robust learning of object assembly tasks with an invariant representation of robot swarms},
    author = {G. H. W. Gebhardt and K. Daun and M. Schnaubelt and G. Neumann},
    year = {2018},
    url = {https://eprints.lincoln.ac.uk/id/eprint/30920/},
    abstract = {{--} Swarm robotics investigates how a large population of robots with simple actuation and limited sensors can collectively solve complex tasks. One particular interesting application with robot swarms is autonomous object assembly. Such tasks have been solved successfully with robot swarms that are controlled by a human operator using a light source. In this paper, we present a method to solve such assembly tasks autonomously based on policy search methods. We split the assembly process in two subtasks: generating a high-level assembly plan and learning a low-level object movement policy. The assembly policy plans the trajectories for each object and the object movement policy controls the trajectory execution.
    Learning the object movement policy is challenging as it depends on the complex state of the swarm which consists of an individual state for each agent. To approach this problem, we introduce a representation of the swarm which is based on Hilbert space embeddings of distributions. This representation is invariant to the number of agents in the swarm as well as to the allocation of an agent to its position in the swarm. These invariances make the learned policy robust to changes in the swarm and also reduce the search space for the policy search method significantly. We show that the resulting system is able to solve assembly tasks with varying object shapes in multiple simulation scenarios and evaluate the robustness of our representation to changes in the swarm size. Furthermore, we demonstrate that the policies learned in simulation are robust enough to be transferred to real robots.}
    }
  • D. Koert, G. Maeda, G. Neumann, and J. Peters, “Learning coupled forward-inverse models with combined prediction errors,” in International conference on robotics and automation (icra), 2018.
    [BibTeX] [Abstract] [Download PDF]

    Challenging tasks in unstructured environments require robots to learn complex models. Given a large amount of information, learning multiple simple models can offer an efficient alternative to a monolithic complex network. Training multiple models, that is, learning their parameters and their responsibilities, has been shown to be prohibitively hard as optimization is prone to local minima. To efficiently learn multiple models for different contexts, we thus develop a new algorithm based on expectation maximization (EM). In contrast to comparable concepts, this algorithm trains multiple modules of paired forward-inverse models by using the prediction errors of both forward and inverse models simultaneously. In particular, we show that our method yields a substantial improvement over only considering the errors of the forward models on tasks where the inverse space contains multiple solutions. (An illustrative code sketch follows this entry.)

    @inproceedings{lincoln31686,
    booktitle = {International Conference on Robotics and Automation (ICRA)},
    month = {May},
    title = {Learning coupled forward-inverse models with combined prediction errors},
    author = {D. Koert and G. Maeda and G. Neumann and J. Peters},
    year = {2018},
    url = {https://eprints.lincoln.ac.uk/id/eprint/31686/},
    abstract = {Challenging tasks in unstructured environments require robots to learn complex models. Given a large amount of information, learning multiple simple models can offer an efficient alternative to a monolithic complex network. Training multiple models{--}that is, learning their parameters and their responsibilities{--}has been shown to be prohibitively hard as optimization is prone to local minima. To efficiently learn multiple models for different contexts, we thus develop a new algorithm based on expectation maximization (EM). In contrast to comparable concepts, this algorithm trains multiple modules of paired forward-inverse models by using the prediction errors of both forward and inverse models simultaneously. In particular, we show that our method yields a substantial improvement over only considering the errors of the forward models on tasks where the inverse space contains multiple solutions}
    }
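
    A minimal sketch, assuming Gaussian error models, of the paper's central ingredient: module responsibilities in the EM E-step computed from the combined forward and inverse prediction errors rather than the forward errors alone:

    import numpy as np

    def responsibilities(fwd_err, inv_err, beta=1.0):
        """fwd_err, inv_err: squared prediction errors per module, shape [K]."""
        score = -beta * (fwd_err + inv_err)    # combine both error signals
        w = np.exp(score - score.max())        # numerically stable softmax
        return w / w.sum()

    # module 0: good forward model but ambiguous inverse; module 1: good at both
    print(responsibilities(np.array([0.1, 0.2]), np.array([2.0, 0.1])))
    # module 1 dominates, which the forward error alone would not reveal
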
  • N. Bellotto, S. Cosar, and Z. Yan, “Human detection and tracking,” in Encyclopedia of robotics, M. H. Ang, O. Khatib, and B. Siciliano, Eds., Springer, 2018. doi:10.1007/978-3-642-41610-1_34-1
    [BibTeX] [Abstract] [Download PDF]

    In robotics, detecting and tracking moving objects is key to implementing useful and safe robot behaviours. Identifying which of the detected objects are humans is particularly important for domestic and public environments. Typically, the robot is required to collect environmental data of the surrounding area using its on-board sensors, estimating where humans are and where they are going. Moreover, robots should detect and track humans accurately and as early as possible in order to have enough time to react accordingly.

    @incollection{lincoln30916,
    month = {May},
    author = {Nicola Bellotto and Serhan Cosar and Zhi Yan},
    booktitle = {Encyclopedia of Robotics},
    editor = {M. H. Ang and O. Khatib and B. Siciliano},
    title = {Human detection and tracking},
    publisher = {Springer},
    doi = {10.1007/978-3-642-41610-1\_34-1},
    year = {2018},
    url = {https://eprints.lincoln.ac.uk/id/eprint/30916/},
    abstract = {In robotics, detecting and tracking moving objects is key to implementing useful and safe robot behaviours. Identifying which of the detected objects are humans is particularly important for domestic and public environments.
    Typically the robot is required to collect environmental data of the surrounding area using its on-board sensors, estimating where humans are and where they are going to. Moreover, robots should detect and track humans accurately and as early as possible in order to have enough time to react accordingly}
    }
  • S. Basu, A. Omotubora, and C. Fox, “Legal framework for small autonomous agricultural robots,” Ai and society, p. 1–22, 2018. doi:10.1007/s00146-018-0846-4
    [BibTeX] [Abstract] [Download PDF]

    Legal structures may form barriers to, or enablers of, adoption of precision agriculture management with small autonomous agricultural robots. This article develops a conceptual regulatory framework for small autonomous agricultural robots, from a practical, self-contained engineering guide perspective, sufficient to get working research and commercial agricultural roboticists quickly and easily up and running within the law. The article examines the liability framework, or rather lack of it, for agricultural robotics in the EU, and its transposition to UK law, as a case study illustrating general international legal concepts and issues. It examines how the law may provide mitigating effects on the liability regime, and how contracts can be developed between agents within it to enable smooth operation. It covers other legal aspects of operation such as the use of shared communications resources and privacy in the reuse of robot-collected data. Where there are grey areas in current law, it argues that new proposals could be developed to reform these to promote further innovation and investment in agricultural robots.

    @article{lincoln32026,
    month = {May},
    author = {Subhajit Basu and Adekemi Omotubora and Charles Fox},
    title = {Legal framework for small autonomous agricultural robots},
    publisher = {Springer},
    journal = {AI and Society},
    doi = {10.1007/s00146-018-0846-4},
    pages = {1--22},
    year = {2018},
    url = {https://eprints.lincoln.ac.uk/id/eprint/32026/},
    abstract = {Legal structures may form barriers to, or enablers of, adoption of precision agriculture management with small autonomous agricultural robots. This article develops a conceptual regulatory framework for small autonomous agricultural robots, from a practical, self-contained engineering guide perspective, sufficient to get working research and commercial agricultural roboticists quickly and easily up and running within the law. The article examines the liability framework, or rather lack of it, for agricultural robotics in EU, and their transpositions to UK law, as a case study illustrating general international legal concepts and issues. It examines how the law may provide mitigating effects on the liability regime, and how contracts can be developed between agents within it to enable smooth operation. It covers other legal aspects of operation such as the use of shared communications resources and privacy in the reuse of robot-collected data. Where there are some grey areas in current law, it argues that new proposals could be developed to reform these to promote further innovation and investment in agricultural robots}
    }
  • A. Postnikov, A. Zolotas, C. Bingham, I. Saleh, C. Arsene, S. Pearson, and R. Bickerton, “Modelling of thermostatically controlled loads to analyse the potential of delivering ffr dsr with a large network of compressor packs,” in 2017 european modelling symposium (ems), 2018, p. 163–167. doi:10.1109/EMS.2017.37
    [BibTeX] [Abstract] [Download PDF]

    This paper presents preliminary work from a current study on a large refrigeration pack network. In particular, the simulation model of a typical refrigeration system with a single pack of 6 compressor units operating as fixed volume displacement machines is presented, and the potential of delivering static FFR with a large population of such packs is studied. Tuning of the model is performed using experimental data collected at the Refrigeration Research Centre in Riseholme, Lincoln. The purpose of modelling is to monitor the essential dynamics of what resembles a typical supermarket convenience-type store and to measure the capacity of a massive refrigeration network to hold off a considerable amount of load in response to an FFR DSR event. This study focuses on investigating the aggregated response of 150 packs (approx. 1 MW capacity) with refrigeration cases on hysteresis and modulation control. The presented model captures interconnected dynamics (refrigerant flow in the system linked to temperature control and the system’s refrigerant demand and to compressors’ power consumption). The refrigerant used for simulation is R407F. Refrigerant properties such as specific enthalpy, pressure and temperature at different state points are computed on each time step of the simulation with REFPROP. (An illustrative code sketch follows this entry.)

    @inproceedings{lincoln32195,
    month = {May},
    author = {Andrey Postnikov and Argyrios Zolotas and Chris Bingham and Ibrahim Saleh and Corneliu Arsene and Simon Pearson and Ronald Bickerton},
    booktitle = {2017 European Modelling Symposium (EMS)},
    title = {Modelling of Thermostatically Controlled Loads to Analyse the Potential of Delivering FFR DSR with a Large Network of Compressor Packs},
    publisher = {IEEE},
    doi = {10.1109/EMS.2017.37},
    pages = {163--167},
    year = {2018},
    url = {https://eprints.lincoln.ac.uk/id/eprint/32195/},
    abstract = {This paper presents preliminary work from a current study on large refrigeration pack network. In particular, the simulation model of a typical refrigeration system with a single pack of 6 compressor units operating as fixed volume displacement machines is presented, and the potential of delivering static FFR with a large population of such packs is studied. Tuning of the model is performed using experimental data collected at the Refrigeration Research Centre in Riseholme, Lincoln. The purpose of modelling is to monitor the essential dynamics of what resembles a typical supermarket convenience-type store and to measure the capacity of a massive refrigeration network to hold off a considerable amount of load in response to FFR DSR event. This study focuses on investigation of the aggregated response of 150 packs (approx. 1 MW capacity) with refrigeration cases on hysteresis and modulation control. The presented model captures interconnected dynamics (refrigerant flow in the system linked to temperature control and the system's refrigerant demand and to compressors' power consumption). Type of refrigerant used for simulation is R407F. Refrigerant properties such as specific enthalpy, pressure and temperature at different state points are computed on each time step of simulation with REFPROP.}
    }
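
    A minimal sketch, with an invented first-order thermal model, of the hysteresis-controlled load the paper aggregates: 150 packs cycling between temperature setpoints, whose summed compressor power is the demand that an FFR DSR event could shed:

    import numpy as np

    rng = np.random.default_rng(2)
    n_packs, dt, steps = 150, 1.0, 600           # 1 s steps, 10 min horizon
    T = rng.uniform(2.0, 5.0, n_packs)           # case temperatures (degC)
    on = rng.random(n_packs) < 0.5               # compressor states
    lo, hi, p_comp = 2.0, 5.0, 7.0               # setpoints; ~7 kW/pack (~1 MW total)

    power = []
    for _ in range(steps):
        on = np.where(T >= hi, True, np.where(T <= lo, False, on))  # hysteresis
        T += dt * (0.002 * (20.0 - T) - 0.040 * on)  # ambient heat gain vs. cooling
        power.append(on.sum() * p_comp)
    print(f"mean aggregate demand: {np.mean(power):.0f} kW")
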
  • D. Liu and S. Yue, “Event-driven continuous stdp learning with deep structure for visual pattern recognition,” Ieee transactions on cybernetics, vol. 49, iss. 4, p. 1377–1390, 2018. doi:10.1109/tcyb.2018.2801476
    [BibTeX] [Abstract] [Download PDF]

    Human beings can achieve reliable and fast visual pattern recognition with limited time and learning samples. Underlying this capability, the ventral stream plays an important role in object representation and form recognition. Modeling the ventral stream may shed light on further understanding the visual brain in humans and on building artificial vision systems for pattern recognition. Current methods to model the mechanism of the ventral stream are far from exhibiting fast, continuous and event-driven learning like the human brain. To create a visual system similar to the ventral stream in humans with fast learning capability, in this study, we propose a new spiking neural system with an event-driven continuous spike timing dependent plasticity (STDP) learning method using specific spiking timing sequences. Two novel continuous input mechanisms have been used to obtain the continuous input spiking pattern sequence. With the event-driven STDP learning rule, the proposed learning procedure is activated whenever the neuron receives a pre- or post-synaptic spike event. The experimental results on the MNIST database show that the proposed method outperforms all other methods in fast learning scenarios and most current models in exhaustive learning experiments. (An illustrative code sketch follows this entry.)

    @article{lincoln31010,
    volume = {49},
    number = {4},
    month = {April},
    author = {Daqi Liu and Shigang Yue},
    title = {Event-driven continuous STDP learning with deep structure for visual pattern recognition},
    publisher = {Institute of Electrical and Electronics Engineers (IEEE)},
    year = {2018},
    journal = {IEEE Transactions on Cybernetics},
    doi = {10.1109/tcyb.2018.2801476},
    pages = {1377--1390},
    url = {https://eprints.lincoln.ac.uk/id/eprint/31010/},
    abstract = {Human beings can achieve reliable and fast visual pattern recognition with limited time and learning samples. Underlying this capability, the ventral stream plays an important role in object representation and form recognition. Modeling the ventral stream may shed light on further understanding the visual brain in humans and building artificial vision systems for pattern recognition. The current methods to model the mechanism of the ventral stream are far from exhibiting fast, continuous and event-driven learning like the human brain. To create a visual system similar to the human ventral stream with fast learning capability, in this study we propose a new spiking neural system with an event-driven continuous spike timing dependent plasticity (STDP) learning method using specific spiking timing sequences. Two novel continuous input mechanisms have been used to obtain the continuous input spiking pattern sequence. With the event-driven STDP learning rule, the proposed learning procedure is activated whenever the neuron receives a pre- or post-synaptic spike event. The experimental results on the MNIST database show that the proposed method outperforms all other methods in fast learning scenarios and most of the current models in exhaustive learning experiments.}
    }
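    The event-driven update at the heart of the abstract can be sketched with a standard pair-based STDP rule. This is illustrative rather than the paper's exact rule: the trace time constant, learning rates and Poisson-like spike trains below are assumptions.

    import numpy as np

    # Pair-based STDP with exponential eligibility traces: weights change only
    # when a pre- or post-synaptic spike event occurs, so no spike history is stored.
    def stdp_step(w, pre_spike, post_spike, x_pre, x_post,
                  a_plus=0.01, a_minus=0.012, tau=20.0, dt=1.0):
        """One event-driven update; x_pre/x_post are synaptic traces."""
        x_pre = x_pre * np.exp(-dt / tau) + pre_spike    # decay, then add new spikes
        x_post = x_post * np.exp(-dt / tau) + post_spike
        w = w + a_plus * x_pre * post_spike              # pre-before-post: potentiate
        w = w - a_minus * x_post * pre_spike             # post-before-pre: depress
        return np.clip(w, 0.0, 1.0), x_pre, x_post

    rng = np.random.default_rng(1)
    w, x_pre, x_post = 0.5, 0.0, 0.0
    for t in range(200):
        pre = float(rng.random() < 0.05)     # Poisson-like input spikes (assumed)
        post = float(rng.random() < 0.05)    # stand-in for the neuron's output spikes
        w, x_pre, x_post = stdp_step(w, pre, post, x_pre, x_post)
    print(f"final weight: {w:.3f}")

    Only spike events trigger weight changes; the exponential traces carry the timing information between events, which is what makes the rule event-driven.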
  • H. Wang, J. Peng, and S. Yue, “An improved lptc neural model for background motion direction estimation,” in 2017 joint ieee international conference on development and learning and epigenetic robotics (icdl-epirob), 2018. doi:10.1109/DEVLRN.2017.8329786
    [BibTeX] [Abstract] [Download PDF]

    A class of specialized neurons, called lobula plate tangential cells (LPTCs), has been shown to respond strongly to wide-field motion. The classic model, the elementary motion detector (EMD), and its improved model, the two-quadrant detector (TQD), have been proposed to simulate LPTCs. Although the EMD and TQD can perceive background motion, their outputs are so cluttered that it is difficult to discriminate the actual motion direction of the background. In this paper, we propose a max operation mechanism to model a newly found transmedullary neuron, Tm9, whose physiological properties do not map onto the EMD and TQD. This proposed max operation mechanism is able to improve the detection performance of the TQD in cluttered backgrounds by filtering out irrelevant motion signals. We demonstrate the functionality of this proposed mechanism in wide-field motion perception.

    @inproceedings{lincoln33421,
    booktitle = {2017 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)},
    month = {April},
    title = {An Improved LPTC Neural Model for Background Motion Direction Estimation},
    author = {Hongxin Wang and Jigen Peng and Shigang Yue},
    publisher = {IEEE},
    year = {2018},
    doi = {10.1109/DEVLRN.2017.8329786},
    url = {https://eprints.lincoln.ac.uk/id/eprint/33421/},
    abstract = {A class of specialized neurons, called lobula plate tangential cells (LPTCs), has been shown to respond strongly to wide-field motion. The classic model, the elementary motion detector (EMD), and its improved model, the two-quadrant detector (TQD), have been proposed to simulate LPTCs. Although the EMD and TQD can perceive background motion, their outputs are so cluttered that it is difficult to discriminate the actual motion direction of the background. In this paper, we propose a max operation mechanism to model a newly found transmedullary neuron, Tm9, whose physiological properties do not map onto the EMD and TQD. This proposed max operation mechanism is able to improve the detection performance of the TQD in cluttered backgrounds by filtering out irrelevant motion signals. We demonstrate the functionality of this proposed mechanism in wide-field motion perception.}
    }
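    To make the detector lineage concrete, here is a classic Hassenstein-Reichardt EMD on a 1-D drifting grating, followed by a local max operation as a crude stand-in for the paper's Tm9-inspired stage. The stimulus, delay and window size are assumptions; the paper's actual mechanism and the TQD model differ.

    import numpy as np

    # Correlation-type EMD: a delayed copy of one photoreceptor's signal is
    # multiplied with its neighbour's signal, minus the mirror-image term.
    def emd_response(signal, tau=2):
        delayed = np.roll(signal, tau, axis=0)              # delay arm along time
        return delayed[:, :-1] * signal[:, 1:] - signal[:, :-1] * delayed[:, 1:]

    t = np.arange(100)[:, None]
    x = np.arange(32)[None, :]
    stim = np.sin(0.5 * x - 0.3 * t)                        # grating drifting rightward
    raw = emd_response(stim)

    # max over a local spatial window keeps only the locally dominant response,
    # suppressing weak, inconsistent motion signals in the output
    k = 3
    windows = np.stack([raw[:, i:raw.shape[1] - k + 1 + i] for i in range(k)], axis=-1)
    filtered = windows.max(axis=-1)
    print(f"mean raw output: {raw.mean():+.3f}, after max stage: {filtered.mean():+.3f}")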
  • K. Elgeneidy, N. Lohse, and M. Jackson, “Bending angle prediction and control of soft pneumatic actuators with embedded flex sensors: a data-driven approach,” Mechatronics, vol. 50, p. 234–247, 2018. doi:10.1016/j.mechatronics.2017.10.005
    [BibTeX] [Abstract] [Download PDF]

    In this paper, a purely data-driven modelling approach is presented for predicting and controlling the free bending angle response of a typical soft pneumatic actuator (SPA) embedded with a resistive flex sensor. An experimental setup was constructed to test the SPA at different input pressure values and orientations, while recording the resulting feedback from the embedded flex sensor and on-board pressure sensor. A calibrated high-speed camera captures image frames during the actuation, which are then analysed using an image processing program to calculate the actual bending angle and synchronise it with the recorded sensory feedback. Empirical models were derived from the generated experimental data using two common data-driven modelling techniques: regression analysis and artificial neural networks. Both techniques were validated using a new dataset at untrained operating conditions to evaluate their prediction accuracy. Furthermore, the derived empirical model was used as part of a closed-loop PID controller to estimate and control the bending angle of the tested SPA based on the real-time sensory feedback generated. The tuned PID controller allowed the bending SPA to accurately follow stepped and sinusoidal reference signals, even in the presence of pressure leaks in the pneumatic supply. This work demonstrates how purely data-driven models can be effectively used in controlling the bending of SPAs under different operating conditions, avoiding the need for complex analytical modelling and material characterisation. Ultimately, the aim is to create more controllable soft grippers based on such SPAs with embedded sensing capabilities, to be used in applications requiring both a 'soft touch' and more controllable object manipulation.

    @article{lincoln30386,
    volume = {50},
    month = {April},
    author = {Khaled Elgeneidy and Niels Lohse and Michael Jackson},
    title = {Bending angle prediction and control of soft pneumatic actuators with embedded flex sensors: a data-driven approach},
    publisher = {Elsevier for International Federation of Automatic Control (IFAC)},
    year = {2018},
    journal = {Mechatronics},
    doi = {10.1016/j.mechatronics.2017.10.005},
    pages = {234--247},
    url = {https://eprints.lincoln.ac.uk/id/eprint/30386/},
    abstract = {In this paper, a purely data-driven modelling approach is presented for predicting and controlling the free bending angle response of a typical soft pneumatic actuator (SPA) embedded with a resistive flex sensor. An experimental setup was constructed to test the SPA at different input pressure values and orientations, while recording the resulting feedback from the embedded flex sensor and on-board pressure sensor. A calibrated high-speed camera captures image frames during the actuation, which are then analysed using an image processing program to calculate the actual bending angle and synchronise it with the recorded sensory feedback. Empirical models were derived from the generated experimental data using two common data-driven modelling techniques: regression analysis and artificial neural networks. Both techniques were validated using a new dataset at untrained operating conditions to evaluate their prediction accuracy. Furthermore, the derived empirical model was used as part of a closed-loop PID controller to estimate and control the bending angle of the tested SPA based on the real-time sensory feedback generated. The tuned PID controller allowed the bending SPA to accurately follow stepped and sinusoidal reference signals, even in the presence of pressure leaks in the pneumatic supply. This work demonstrates how purely data-driven models can be effectively used in controlling the bending of SPAs under different operating conditions, avoiding the need for complex analytical modelling and material characterisation. Ultimately, the aim is to create more controllable soft grippers based on such SPAs with embedded sensing capabilities, to be used in applications requiring both a 'soft touch' and more controllable object manipulation.}
    }
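    The modelling-plus-control pipeline described above can be miniaturised as follows. The data are synthetic and the polynomial features, gains and sensor relation are assumptions, not the paper's calibrated values: the point is the shape of the approach, fitting angle = f(pressure, flex) and closing a PID loop around the fitted estimate.

    import numpy as np

    # Synthetic stand-in for the experimental dataset (illustrative ranges)
    rng = np.random.default_rng(2)
    pressure = rng.uniform(0, 100, 200)             # input pressure [kPa]
    flex = 0.8 * pressure + rng.normal(0, 2, 200)   # assumed flex-sensor response
    angle = 0.5 * pressure + 0.2 * flex + 0.001 * pressure**2 + rng.normal(0, 1, 200)

    # second-order polynomial regression from (pressure, flex) to bending angle
    X = np.column_stack([np.ones_like(pressure), pressure, flex, pressure**2, flex**2])
    coef, *_ = np.linalg.lstsq(X, angle, rcond=None)

    def predict_angle(p, f):
        return np.array([1.0, p, f, p**2, f**2]) @ coef

    # minimal PID loop tracking a 45 degree step, using the fitted model as the
    # angle estimator (and, here, as the plant stand-in)
    kp, ki, kd, dt = 0.05, 0.02, 0.01, 0.1
    integral, prev_err, p_cmd = 0.0, 0.0, 0.0
    for _ in range(300):
        est = predict_angle(p_cmd, 0.8 * p_cmd)     # estimated bending angle
        err = 45.0 - est
        integral += err * dt
        p_cmd += kp * err + ki * integral + kd * (err - prev_err) / dt
        prev_err = err
    print(f"settled angle estimate: {predict_angle(p_cmd, 0.8 * p_cmd):.1f} deg")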
  • A. Cohen, S. Parsons, E. Sklar, and P. McBurney, “A characterization of types of support between structured arguments and their relationship with support in abstract argumentation,” International journal of approximate reasoning, vol. 94, p. 76–104, 2018. doi:10.1016/j.ijar.2017.12.008
    [BibTeX] [Abstract] [Download PDF]

    Argumentation is an important approach in artificial intelligence and multiagent systems, providing a basis for single agents to make rational decisions, and for groups of agents to reach agreements, as well as a mechanism to underpin a wide range of agent interactions. In such work, a crucial role is played by the notion of attack between arguments, and the notion of attack is well-studied. There is, for example, a range of different approaches to identifying which of a set of arguments should be accepted given the attacks between them. Less well studied is the notion of support between arguments, yet the idea that one argument may support another is very intuitive and seems particularly relevant in the area of decision-making where decision options may have multiple arguments for and against them. In the last decade, the study of support in argumentation has regained attention among researchers, but most approaches address support in the context of abstract argumentation where the elements from which arguments are composed are ignored. In contrast, this paper studies the notion of support between arguments in the context of structured argumentation systems where the elements from which arguments are composed play a crucial role. Different forms of support are presented, each of which takes into account the structure of arguments; and the relationships between these forms of support are studied. Then, the paper investigates whether there is a correspondence between the structured and abstract forms of support, and determines whether the abstract formalisms may be instantiated using concrete forms of support in terms of structured arguments. The conclusion is that support in structured argumentation does not mesh well with support in abstract argumentation, and this suggests that more work is required to develop forms of support in abstract argumentation that model what happens in structured argumentation.

    @article{lincoln38544,
    volume = {94},
    month = {March},
    author = {Andrea Cohen and Simon Parsons and Elizabeth Sklar and Peter McBurney},
    note = {cited By 1},
    title = {A characterization of types of support between structured arguments and their relationship with support in abstract argumentation},
    publisher = {Elsevier},
    year = {2018},
    journal = {International Journal of Approximate Reasoning},
    doi = {10.1016/j.ijar.2017.12.008},
    pages = {76--104},
    url = {https://eprints.lincoln.ac.uk/id/eprint/38544/},
    abstract = {Argumentation is an important approach in artificial intelligence and multiagent systems, providing a basis for single agents to make rational decisions, and for groups of agents to reach agreements, as well as a mechanism to underpin a wide range of agent interactions. In such work, a crucial role is played by the notion of attack between arguments, and the notion of attack is well-studied. There is, for example, a range of different approaches to identifying which of a set of arguments should be accepted given the attacks between them. Less well studied is the notion of support between arguments, yet the idea that one argument may support another is very intuitive and seems particularly relevant in the area of decision-making where decision options may have multiple arguments for and against them. In the last decade, the study of support in argumentation has regained attention among researchers, but most approaches address support in the context of abstract argumentation where the elements from which arguments are composed are ignored. In contrast, this paper studies the notion of support between arguments in the context of structured argumentation systems where the elements from which arguments are composed play a crucial role. Different forms of support are presented, each of which takes into account the structure of arguments; and the relationships between these forms of support are studied. Then, the paper investigates whether there is a correspondence between the structured and abstract forms of support, and determines whether the abstract formalisms may be instantiated using concrete forms of support in terms of structured arguments. The conclusion is that support in structured argumentation does not mesh well with support in abstract argumentation, and this suggests that more work is required to develop forms of support in abstract argumentation that model what happens in structured argumentation.}
    }
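    One of the intuitive notions surveyed, that an argument supports another by supplying one of its premises, can be made concrete in a few lines. The toy encoding below is ours, not the paper's formalism, which distinguishes several types of support and relates them to abstract frameworks.

    from dataclasses import dataclass

    # A structured argument reduced to its skeleton: named premises and a conclusion.
    @dataclass(frozen=True)
    class Argument:
        name: str
        premises: frozenset
        conclusion: str

    def supports(a: Argument, b: Argument) -> bool:
        # one simple notion of support: A's conclusion is one of B's premises
        return a.conclusion in b.premises

    a = Argument("A", frozenset({"bird(tweety)"}), "flies(tweety)")
    b = Argument("B", frozenset({"flies(tweety)"}), "can_escape(tweety)")
    print(supports(a, b))  # True: A concludes a premise that B relies on

    The paper's point is precisely that once arguments have this internal structure, several distinct support relations like this one can be defined, and they do not all collapse onto the single support relation of abstract argumentation.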
  • A. G. Esfahani and M. Ragaglia, “Robot learning from demonstrations: emulation learning in environments with moving obstacles,” Robotics and autonomous systems, vol. 101, p. 45–56, 2018. doi:10.1016/j.robot.2017.12.001
    [BibTeX] [Abstract] [Download PDF]

    In this paper, we present an approach to the problem of Robot Learning from Demonstration (RLfD) in a dynamic environment, i.e. an environment whose state changes throughout the course of performing a task. RLfD has mostly been successfully exploited only in non-varying environments to reduce programming time and cost, e.g. fixed manufacturing workspaces. Non-conventional production lines necessitate Human-Robot Collaboration (HRC), implying robots and humans must work in shared workspaces. In such conditions, the robot needs to avoid colliding with the objects that are moved by humans in the workspace. Therefore, the robot is not only (i) required to learn a task model from demonstrations, but also (ii) must learn a control policy to avoid a stationary obstacle. Furthermore, (iii) it needs to build a control policy from demonstration to avoid moving obstacles. Here, we present an incremental approach to RLfD addressing all three problems. We demonstrate the effectiveness of the proposed RLfD approach by a series of pick-and-place experiments with an ABB YuMi robot. The experimental results show that a person can work in a workspace shared with a robot, where the robot successfully avoids colliding with them.

    @article{lincoln34519,
    volume = {101},
    month = {March},
    author = {Amir Ghalamzan Esfahani and Matteo Ragaglia},
    title = {Robot learning from demonstrations: Emulation learning in environments with moving obstacles},
    publisher = {Elsevier},
    year = {2018},
    journal = {Robotics and autonomous systems},
    doi = {10.1016/j.robot.2017.12.001},
    pages = {45--56},
    url = {https://eprints.lincoln.ac.uk/id/eprint/34519/},
    abstract = {In this paper, we present an approach to the problem of Robot Learning from Demonstration (RLfD) in a dynamic environment, i.e. an environment whose state changes throughout the course of performing a task. RLfD has mostly been successfully exploited only in non-varying environments to reduce programming time and cost, e.g. fixed manufacturing workspaces. Non-conventional production lines necessitate Human-Robot Collaboration (HRC), implying robots and humans must work in shared workspaces. In such conditions, the robot needs to avoid colliding with the objects that are moved by humans in the workspace. Therefore, the robot is not only (i) required to learn a task model from demonstrations, but also (ii) must learn a control policy to avoid a stationary obstacle. Furthermore, (iii) it needs to build a control policy from demonstration to avoid moving obstacles. Here, we present an incremental approach to RLfD addressing all three problems. We demonstrate the effectiveness of the proposed RLfD approach by a series of pick-and-place experiments with an ABB YuMi robot. The experimental results show that a person can work in a workspace shared with a robot, where the robot successfully avoids colliding with them.}
    }
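    The third ingredient, overlaying obstacle avoidance on a demonstrated trajectory, can be sketched with a simple repulsive velocity field. Everything below (straight-line nominal path, bounded repulsion term, sinusoidal obstacle motion) is an assumption for illustration; the paper's incremental RLfD formulation is richer.

    import numpy as np

    # Bounded repulsive velocity: pushes away from the obstacle inside `radius`,
    # with magnitude at most `gain` (a common potential-field simplification).
    def repulsion(pos, obs, gain=2.0, radius=0.3):
        d = pos - obs
        dist = np.linalg.norm(d)
        if dist >= radius or dist == 0.0:
            return np.zeros(2)
        return gain * (radius - dist) / radius * d / dist

    dt, T = 0.05, 100
    nominal = np.linspace([0.0, 0.0], [1.0, 0.0], T)     # "demonstrated" path
    pos = nominal[0].copy()
    min_clear = np.inf
    for k in range(1, T):
        obs = np.array([0.5, 0.02 * np.sin(0.2 * k)])    # obstacle moved by a human
        vel = (nominal[k] - pos) / dt + repulsion(pos, obs)
        pos = pos + vel * dt
        min_clear = min(min_clear, np.linalg.norm(pos - obs))
    print(f"closest approach to the moving obstacle: {min_clear:.3f} m")

    The tracking term pulls the end-effector back to the learned path as soon as the obstacle has passed, which is the qualitative behaviour the experiments in the paper rely on.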
  • R. Shang, B. Du, K. Dai, L. Jiao, A. G. Esfahani, and R. Stolkin, “Quantum-inspired immune clonal algorithm for solving large-scale capacitated arc routing problems,” Memetic computing, vol. 10, iss. 1, p. 81–102, 2018. doi:10.1007/s12293-017-0224-7
    [BibTeX] [Abstract] [Download PDF]

    In this paper, we present an approach to large-scale CARP called the Quantum-Inspired Immune Clonal Algorithm (QICA-CARP). This algorithm combines features of an artificial immune system with quantum computation grounded in the qubit and quantum superposition. In QICA-CARP, each antibody of the population is encoded with quantum bits. With this encoding, information about the current optimal antibody is used to steer the population, with high probability, towards a good schema. The mutation strategy of the quantum rotation gate accelerates the convergence of the original clone operator. Moreover, the quantum crossover operator enhances the exchange of information, increases the diversity of the population and avoids falling into local optima. We also use a repair operator to amend infeasible solutions and ensure the diversity of solutions. This lets QICA-CARP approximate the optimal solution. We demonstrate the effectiveness of our approach by a set of experiments comparing its results with those obtained by RDG-MAENS and RAM on different test sets. Experimental results show that QICA-CARP outperforms the other algorithms in terms of convergence rate and the quality of the obtained solutions. In particular, QICA-CARP converges to a better lower bound at a faster rate, illustrating that it is suitable for solving large-scale CARP.

    @article{lincoln34759,
    volume = {10},
    number = {1},
    month = {March},
    author = {Ronghua Shang and Bingqi Du and Kaiyun Dai and Licheng Jiao and Amir Ghalamzan Esfahani and Rustam Stolkin},
    title = {Quantum-Inspired Immune Clonal Algorithm for solving large-scale capacitated arc routing problems},
    publisher = {Springer},
    year = {2018},
    journal = {Memetic Computing},
    doi = {10.1007/s12293-017-0224-7},
    pages = {81--102},
    url = {https://eprints.lincoln.ac.uk/id/eprint/34759/},
    abstract = {In this paper, we present an approach to large-scale CARP called the Quantum-Inspired Immune Clonal Algorithm (QICA-CARP). This algorithm combines features of an artificial immune system with quantum computation grounded in the qubit and quantum superposition. In QICA-CARP, each antibody of the population is encoded with quantum bits. With this encoding, information about the current optimal antibody is used to steer the population, with high probability, towards a good schema. The mutation strategy of the quantum rotation gate accelerates the convergence of the original clone operator. Moreover, the quantum crossover operator enhances the exchange of information, increases the diversity of the population and avoids falling into local optima. We also use a repair operator to amend infeasible solutions and ensure the diversity of solutions. This lets QICA-CARP approximate the optimal solution. We demonstrate the effectiveness of our approach by a set of experiments comparing its results with those obtained by RDG-MAENS and RAM on different test sets. Experimental results show that QICA-CARP outperforms the other algorithms in terms of convergence rate and the quality of the obtained solutions. In particular, QICA-CARP converges to a better lower bound at a faster rate, illustrating that it is suitable for solving large-scale CARP.}
    }
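    The qubit encoding and rotation-gate update generalise beyond CARP; the sketch below applies them to a toy OneMax problem so the mechanics fit in a few lines. The rotation step size, population size and observation rule are assumptions, and the clone, crossover and repair operators of the full algorithm are omitted.

    import numpy as np

    # Each bit of each antibody is a qubit with angle theta; P(bit = 1) = sin^2(theta).
    rng = np.random.default_rng(3)
    n_bits, pop, iters = 20, 10, 100
    theta = np.full((pop, n_bits), np.pi / 4)            # equal superposition

    def observe(theta):
        # collapse every qubit to a concrete bit by sampling
        return (rng.random(theta.shape) < np.sin(theta) ** 2).astype(int)

    best_x, best_f = None, -1
    for _ in range(iters):
        x = observe(theta)
        f = x.sum(axis=1)                                # OneMax fitness (toy objective)
        if f.max() > best_f:
            best_f, best_x = f.max(), x[f.argmax()].copy()
        # rotation gate: nudge each qubit's angle towards the best antibody's bit
        delta = 0.05 * np.where(best_x == 1, 1.0, -1.0)
        theta = np.clip(theta + delta, 0.01, np.pi / 2 - 0.01)
    print(f"best fitness: {best_f}/{n_bits}")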
  • A. Paraschos, C. Daniel, J. Peters, and G. Neumann, “Using probabilistic movement primitives in robotics,” Autonomous robots, vol. 42, iss. 3, p. 529–551, 2018. doi:10.1007/s10514-017-9648-7
    [BibTeX] [Abstract] [Download PDF]

    Movement Primitives are a well-established paradigm for modular movement representation and generation. They provide a data-driven representation of movements and support generalization to novel situations, temporal modulation, sequencing of primitives and controllers for executing the primitive on physical systems. However, while many MP frameworks exhibit some of these properties, there is a need for a unified framework that implements all of them in a principled way. In this paper, we show that this goal can be achieved by using a probabilistic representation. Our approach models trajectory distributions learned from stochastic movements. Probabilistic operations, such as conditioning can be used to achieve generalization to novel situations or to combine and blend movements in a principled way. We derive a stochastic feedback controller that reproduces the encoded variability of the movement and the coupling of the degrees of freedom of the robot. We evaluate and compare our approach on several simulated and real robot scenarios.

    @article{lincoln27883,
    volume = {42},
    number = {3},
    month = {March},
    author = {Alexandros Paraschos and Christian Daniel and Jan Peters and Gerhard Neumann},
    title = {Using probabilistic movement primitives in robotics},
    publisher = {Springer Verlag},
    year = {2018},
    journal = {Autonomous Robots},
    doi = {10.1007/s10514-017-9648-7},
    pages = {529--551},
    url = {https://eprints.lincoln.ac.uk/id/eprint/27883/},
    abstract = {Movement Primitives are a well-established paradigm for modular movement representation and generation. They provide a data-driven representation of movements and support generalization to novel situations, temporal modulation, sequencing of primitives and controllers for executing the primitive on physical systems. However, while many MP frameworks exhibit some of these properties, there is a need for a unified framework that implements all of them in a principled way. In this paper, we show that this goal can be achieved by using a probabilistic representation. Our approach models trajectory distributions learned from stochastic movements. Probabilistic operations, such as conditioning can be used to achieve generalization to novel situations or to combine and blend movements in a principled way. We derive a stochastic feedback controller that reproduces the encoded variability of the
    movement and the coupling of the degrees of freedom of the robot. We evaluate and compare our approach on several simulated and real robot scenarios.}
    }
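    The conditioning operation highlighted in the abstract has a closed form for Gaussian weight distributions. The sketch below shows it for a single degree of freedom, with a stand-in prior where weights would normally be fitted from demonstrations; the basis count, bandwidth and noise level are assumptions.

    import numpy as np

    # ProMP conditioning for one DoF: given a Gaussian over basis-function weights,
    # condition the trajectory distribution on passing through y* at time t*.
    def rbf_features(t, n_basis=10, width=0.02):
        centers = np.linspace(0, 1, n_basis)
        phi = np.exp(-(t - centers) ** 2 / (2 * width))
        return phi / phi.sum()                           # normalised RBF features

    rng = np.random.default_rng(4)
    n = 10
    mu_w = rng.normal(0, 1, n)                           # prior mean (assumed, not fitted)
    Sigma_w = 0.5 * np.eye(n)                            # prior covariance over weights

    t_star, y_star, sigma_y = 0.5, 2.0, 1e-4             # via-point and observation noise
    phi = rbf_features(t_star)
    gain = Sigma_w @ phi / (sigma_y + phi @ Sigma_w @ phi)   # Kalman-style gain
    mu_new = mu_w + gain * (y_star - phi @ mu_w)
    Sigma_new = Sigma_w - np.outer(gain, phi) @ Sigma_w

    print(f"mean at t*: {phi @ mu_new:.3f} (target {y_star}), "
          f"posterior variance at t*: {phi @ Sigma_new @ phi:.2e}")

    The update is exactly a Gaussian observation update, which is why conditioning, combination and blending all stay in closed form for this representation.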
  • J. Guo, K. Elgeneidy, C. Xiang, N. Lohse, L. Justham, and J. Rossiter, “Soft pneumatic grippers embedded with stretchable electroadhesion,” Smart materials and structures, vol. 27, iss. 5, p. 55006, 2018. doi:10.1088/1361-665X/aab579
    [BibTeX] [Abstract] [Download PDF]

    Current soft pneumatic grippers cannot robustly grasp flat materials and flexible objects on curved surfaces without distorting them. Current electroadhesive grippers, on the other hand, are difficult to actively deform to complex shapes to pick up free-form surfaces or objects. An easy-to-implement PneuEA gripper is proposed by the integration of an electroadhesive gripper and a two-fingered soft pneumatic gripper. The electroadhesive gripper was fabricated by segmenting a soft conductive silicone sheet into a two-part electrode design and embedding it in a soft dielectric elastomer. The two-fingered soft pneumatic gripper was manufactured using a standard soft lithography approach. This novel integration has combined the benefits of both the electroadhesive and soft pneumatic grippers. As a result, the proposed PneuEA gripper was not only able to pick-and-place flat and flexible materials such as a porous cloth but also delicate objects such as a light bulb. By combining two soft touch sensors with the electroadhesive, an intelligent and shape-adaptive PneuEA material handling system has been developed. This work is expected to widen the applications of both soft gripper and electroadhesion technologies.

    @article{lincoln32297,
    volume = {27},
    number = {5},
    month = {March},
    author = {Jianglong Guo and Khaled Elgeneidy and C Xiang and Niels Lohse and Laura Justham and Jonathan Rossiter},
    title = {Soft pneumatic grippers embedded with stretchable electroadhesion},
    publisher = {IOP Publishing},
    year = {2018},
    journal = {Smart Materials and Structures},
    doi = {10.1088/1361-665X/aab579},
    pages = {055006},
    url = {https://eprints.lincoln.ac.uk/id/eprint/32297/},
    abstract = {Current soft pneumatic grippers cannot robustly grasp flat materials and flexible objects on curved surfaces without distorting them. Current electroadhesive grippers, on the other hand, are difficult to actively deform to complex shapes to pick up free-form surfaces or objects. An easy-to-implement PneuEA gripper is proposed by the integration of an electroadhesive gripper and a two-fingered soft pneumatic gripper. The electroadhesive gripper was fabricated by segmenting a soft conductive silicone sheet into a two-part electrode design and embedding it in a soft dielectric elastomer. The two-fingered soft pneumatic gripper was manufactured using a standard soft lithography approach. This novel integration has combined the benefits of both the electroadhesive and soft pneumatic grippers. As a result, the proposed PneuEA gripper was not only able to pick-and-place flat and flexible materials such as a porous cloth but also delicate objects such as a light bulb. By combining two soft touch sensors with the electroadhesive, an intelligent and shape-adaptive PneuEA material handling system has been developed. This work is expected to widen the applications of both soft gripper and electroadhesion technologies.}
    }
  • T. Osa, J. Pajarinen, G. Neumann, A. J. Bagnell, P. Abbeel, and J. Peters, “An algorithmic perspective on imitation learning,” Foundations and trends in robotics, vol. 7, iss. 1-2, p. 1–179, 2018. doi:10.1561/2300000053
    [BibTeX] [Abstract] [Download PDF]

    As robots and other intelligent agents move from simple environments and problems to more complex, unstructured settings, manually programming their behavior has become increasingly challenging and expensive. Often, it is easier for a teacher to demonstrate a desired behavior rather than attempt to manually engineer it. This process of learning from demonstrations, and the study of algorithms to do so, is called imitation learning. This work provides an introduction to imitation learning. It covers the underlying assumptions, approaches, and how they relate; the rich set of algorithms developed to tackle the problem; and advice on effective tools and implementation. We intend this paper to serve two audiences. First, we want to familiarize machine learning experts with the challenges of imitation learning, particularly those arising in robotics, and the interesting theoretical and practical distinctions between it and more familiar frameworks like statistical supervised learning theory and reinforcement learning. Second, we want to give roboticists and experts in applied artificial intelligence a broader appreciation for the frameworks and tools available for imitation learning. We pay particular attention to the intimate connection between imitation learning approaches and those of structured prediction [Daumé III et al., 2009]. To structure this discussion, we categorize imitation learning techniques based on the following key criteria which drive algorithmic decisions: 1) The structure of the policy space. Is the learned policy a time-indexed trajectory (trajectory learning), a mapping from observations to actions (so-called behavioral cloning [Bain and Sammut, 1996]), or the result of a complex optimization or planning problem at each execution, as is common in inverse optimal control methods [Kalman, 1964, Moylan and Anderson, 1973]? 2) The information available during training and testing. In particular, is the learning algorithm privy to the full state that the teacher possesses? Is the learner able to interact with the teacher and gather corrections or more data? Does the learner have a (typically a priori) model of the system with which it interacts? Does the learner have access to the reward (cost) function that the teacher is attempting to optimize? 3) The notion of success. Different algorithmic approaches provide varying guarantees on the resulting learned behavior. These guarantees range from weaker (e.g., measuring disagreement with the agent's decision) to stronger (e.g., providing guarantees on the performance of the learner with respect to a true cost function, either known or unknown). We organize our work by paying particular attention to distinction (1): dividing imitation learning into directly replicating desired behavior (sometimes called behavioral cloning) and learning the hidden objectives of the desired behavior from demonstrations (called inverse optimal control or inverse reinforcement learning [Russell, 1998]). In the latter case, behavior arises as the result of an optimization problem solved for each new instance that the learner faces. In addition to method analysis, we discuss the design decisions a practitioner must make when selecting an imitation learning approach. Moreover, application examples, such as robots that play table tennis [Kober and Peters, 2009], programs that play the game of Go [Silver et al., 2016], and systems that understand natural language [Wen et al., 2015], illustrate the properties and motivations behind different forms of imitation learning. We conclude by presenting a set of open questions and point towards possible future research directions for machine learning.

    @article{lincoln31687,
    volume = {7},
    number = {1-2},
    month = {March},
    author = {Takayuki Osa and Joni Pajarinen and Gerhard Neumann and J. Andrew Bagnell and Pieter Abbeel and Jan Peters},
    title = {An algorithmic perspective on imitation learning},
    publisher = {Now publishers},
    year = {2018},
    journal = {Foundations and Trends in Robotics},
    doi = {10.1561/2300000053},
    pages = {1--179},
    url = {https://eprints.lincoln.ac.uk/id/eprint/31687/},
    abstract = {As robots and other intelligent agents move from simple environments and problems to more complex, unstructured settings, manually programming their behavior has become increasingly challenging and expensive. Often, it is easier for a teacher to demonstrate a desired behavior rather than attempt to manually engineer it. This process of learning from demonstrations, and the study of algorithms to do so, is called imitation learning. This work provides an introduction to imitation learning. It covers the underlying assumptions, approaches, and how they relate; the rich set of algorithms developed to tackle the problem; and advice on effective tools and implementation. We intend this paper to serve two audiences. First, we want to familiarize machine learning experts with the challenges of imitation learning, particularly those arising in robotics, and the interesting theoretical and practical distinctions between it and more familiar frameworks like statistical supervised learning theory and reinforcement learning. Second, we want to give roboticists and experts in applied artificial intelligence a broader appreciation for the frameworks and tools available for imitation learning. We pay particular attention to the intimate connection between imitation learning approaches and those of structured prediction Daum{\'e} III et al. [2009]. To structure this discussion, we categorize imitation learning techniques based on the following key criteria which drive algorithmic decisions:
    1) The structure of the policy space. Is the learned policy a time-indexed trajectory (trajectory learning), a mapping from observations to actions (so-called behavioral cloning [Bain and Sammut, 1996]), or the result of a complex optimization or planning problem at each execution as is common in inverse optimal control methods [Kalman, 1964, Moylan and Anderson, 1973].
    2) The information available during training and testing. In particular, is the learning algorithm privy to the full state that the teacher possesses? Is the learner able to interact with the teacher and gather corrections or more data? Does the learner have a (typically a priori) model of the system with which it interacts? Does the learner have access to the reward (cost) function that the teacher is attempting to optimize?
    3) The notion of success. Different algorithmic approaches provide varying guarantees on the resulting learned behavior. These guarantees range from weaker (e.g., measuring disagreement with the agent's decision) to stronger (e.g., providing guarantees on the performance of the learner with respect to a true cost function, either known or unknown). We organize our work by paying particular attention to distinction (1): dividing imitation learning into directly replicating desired behavior (sometimes called behavioral cloning) and learning the hidden objectives of the desired behavior from demonstrations (called inverse optimal control or inverse reinforcement learning [Russell, 1998]). In the latter case, behavior arises as the result of an optimization problem solved for each new instance that the learner faces. In addition to method analysis, we discuss the design decisions a practitioner must make when selecting an imitation learning approach. Moreover, application examples{--}such as robots that play table tennis [Kober and Peters, 2009], programs that play the game of Go [Silver et al., 2016], and systems that understand natural language [Wen et al., 2015]{--} illustrate the properties and motivations behind different forms of imitation learning. We conclude by presenting a set of open questions and point towards possible future research directions for machine learning.}
    }
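    As a concrete anchor for the taxonomy's first branch, here is behavioural cloning reduced to supervised regression. The linear policy, synthetic demonstrations and ridge penalty are assumptions for the sketch; the survey covers far richer policy classes.

    import numpy as np

    # Behavioural cloning as regression: fit a policy mapping observations to
    # actions directly on teacher demonstrations (synthetic data here).
    rng = np.random.default_rng(5)
    states = rng.uniform(-1, 1, (500, 3))
    teacher = np.array([0.7, -1.2, 0.4])                     # unknown expert policy
    actions = states @ teacher + rng.normal(0, 0.05, 500)    # noisy expert actions

    # ridge-regularised least squares gives the learned linear policy
    lam = 1e-3
    A = states.T @ states + lam * np.eye(3)
    policy = np.linalg.solve(A, states.T @ actions)
    print("recovered policy weights:", np.round(policy, 2))

    The survey's central caution applies even here: a policy fit this way is only accurate on the state distribution the teacher visited, which is what motivates the interactive and inverse-optimal-control branches of the taxonomy.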
  • E. Senft, S. Lemaignan, M. Bartlett, P. Baxter, and T. Belpaeme, “Robots in the classroom: learning to be a good tutor,” in R4l @ hri2018, 2018.
    [BibTeX] [Abstract] [Download PDF]

    To broaden their adoption and be more inclusive, robotic tutors need to tailor their behaviours to their audience. Traditional approaches, such as Bayesian Knowledge Tracing, try to adapt the content of lessons or the difficulty of tasks to the current estimated knowledge of the student. However, these variations only happen in a limited domain, predefined in advance, and are not able to tackle unexpected variation in a student's behaviours. We argue that robot adaptation needs to go beyond variations in preprogrammed behaviours and that robots should in effect learn online how to become better tutors. A study is currently being carried out to evaluate how human supervision can teach a robot to support child learning during an educational game using one implementation of this approach.

    @inproceedings{lincoln31959,
    booktitle = {R4L @ HRI2018},
    month = {March},
    title = {Robots in the classroom: Learning to be a Good Tutor},
    author = {Emmanuel Senft and Severin Lemaignan and Madeleine Bartlett and Paul Baxter and Tony Belpaeme},
    year = {2018},
    url = {https://eprints.lincoln.ac.uk/id/eprint/31959/},
    abstract = {To broaden the adoption and be more inclusive, robotic tutors need to tailor their
    behaviours to their audience. Traditional approaches, such as Bayesian Knowledge
    Tracing, try to adapt the content of lessons or the difficulty of tasks to the current
    estimated knowledge of the student. However, these variations only happen in a limited
    domain, predefined in advance, and are not able to tackle unexpected variation in a
    student's behaviours. We argue that robot adaptation needs to go beyond variations in
    preprogrammed behaviours and that robots should in effect learn online how to become
    better tutors. A study is currently being carried out to evaluate how human supervision
    can teach a robot to support child learning during an educational game using one
    implementation of this approach.}
    }
  • P. Lightbody, P. Baxter, and M. Hanheide, “Studying table-top manipulation tasks: a robust framework for object tracking in collaboration,” in The 13th annual acm/ieee international conference on human robot interaction, 2018. doi:10.1145/3173386.3177045
    [BibTeX] [Abstract] [Download PDF]

    Table-top object manipulation is a well-established test bed on which to study both basic foundations of general human-robot interaction and more specific collaborative tasks. A prerequisite, both for studies and for actual collaborative or assistive tasks, is the robust perception of any objects involved. This paper presents a real-time capable and ROS-integrated approach, bringing together state-of-the-art detection and tracking algorithms, integrating perceptual cues from multiple cameras and solving detection, sensor fusion and tracking in one framework. The highly scalable framework was tested in an HRI use-case scenario with 25 objects being reliably tracked under significant temporary occlusions. The use-case demonstrates the suitability of the approach when working with multiple objects in small table-top environments and highlights the versatility and range of analysis available with this framework.

    @inproceedings{lincoln31204,
    booktitle = {The 13th Annual ACM/IEEE International Conference on Human Robot Interaction},
    month = {March},
    title = {Studying table-top manipulation tasks: a robust framework for object tracking in collaboration},
    author = {Peter Lightbody and Paul Baxter and Marc Hanheide},
    publisher = {ACM/IEEE},
    year = {2018},
    doi = {10.1145/3173386.3177045},
    url = {https://eprints.lincoln.ac.uk/id/eprint/31204/},
    abstract = {Table-top object manipulation is a well-established test bed on which to study both basic foundations of general human-robot interaction and more specific collaborative tasks. A prerequisite, both for studies and for actual collaborative or assistive tasks, is the robust perception of any objects involved. This paper presents a real-time capable and ROS-integrated approach, bringing together state-of-the-art detection and tracking algorithms, integrating perceptual cues from multiple cameras and solving detection, sensor fusion and tracking in one framework. The highly scalable framework was tested in a HRI use-case scenario with 25 objects being reliably tracked under significant temporary occlusions. The use-case demonstrates the suitability of the approach when working with multiple objects in small table-top environments and highlights the versatility and range of analysis available with this framework.}
    }
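    The per-object tracking core of such a framework can be illustrated with a constant-velocity Kalman filter fused over two noisy cameras. This is a deliberately minimal stand-in with assumed noise levels and a single object, not the paper's detector-backed multi-object pipeline.

    import numpy as np

    # Constant-velocity Kalman filter; position detections arrive from two
    # cameras and are fused by sequential measurement updates.
    dt = 1 / 30
    F = np.block([[np.eye(2), dt * np.eye(2)], [np.zeros((2, 2)), np.eye(2)]])
    H = np.hstack([np.eye(2), np.zeros((2, 2))])    # cameras observe position only
    Q, R = 1e-4 * np.eye(4), 1e-3 * np.eye(2)

    x, P = np.zeros(4), np.eye(4)
    rng = np.random.default_rng(6)
    true_pos = np.array([0.2, 0.1])
    for k in range(60):
        true_pos = true_pos + np.array([0.01, 0.0])         # object slides along x
        x, P = F @ x, F @ P @ F.T + Q                       # predict
        for cam_noise in (0.02, 0.05):                      # two cameras, fused in turn
            z = true_pos + rng.normal(0, cam_noise, 2)
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (z - H @ x)
            P = (np.eye(4) - K @ H) @ P
    print("estimated position:", np.round(x[:2], 3), "true:", np.round(true_pos, 3))

    The prediction step is what carries an object through the temporary occlusions mentioned in the abstract: with no detections, the filter simply keeps propagating the motion model while its uncertainty grows.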
  • I. Saleh, A. Postnikov, C. Arsene, A. Zolotas, C. Bingham, R. Bickerton, and S. Pearson, “Impact of demand side response on a commercial retail refrigeration system,” Energies, vol. 11, iss. 2, p. 371, 2018. doi:10.3390/en11020371
    [BibTeX] [Abstract] [Download PDF]

    The UK National Grid has placed increased emphasis on the development of Demand Side Response (DSR) tariff mechanisms to manage load at peak times. Refrigeration systems, along with HVAC, are estimated to consume 14% of the UK's electricity and could have a significant role in DSR applications. However, characterized by relatively low individual electrical loads and massive asset numbers, multiple low-power refrigerators need aggregation for inclusion in these tariffs. In this paper, the impact of DSR control mechanisms on food retailing refrigeration systems is investigated. The experiments are conducted in a test rig built to resemble a typical small supermarket store. The paper demonstrates how the temperature and pressure profiles of the system, the active power and the drawn current of the compressors are affected following a rapid shutdown and subsequent return to normal operation as a response to a DSR event. Moreover, risks and challenges associated with primary and secondary Firm Frequency Response (FFR) mechanisms, where the load is rapidly shed in response to changes in grid frequency, are considered. For instance, measurements are included that show a significant increase of approx. 30% in peak inrush currents when the system returns to normal operation at the end of a DSR event. We consider how high inrush currents after a DSR event can produce voltage fluctuations in the supply, and we assess the risks to the local power supply system.

    @article{lincoln31137,
    volume = {11},
    number = {2},
    month = {February},
    author = {Ibrahim Saleh and Andrey Postnikov and Corneliu Arsene and Argyrios Zolotas and Chris Bingham and Ronald Bickerton and Simon Pearson},
    title = {Impact of demand side response on a commercial retail refrigeration system},
    publisher = {MDPI},
    year = {2018},
    journal = {Energies},
    doi = {10.3390/en11020371},
    pages = {371},
    url = {https://eprints.lincoln.ac.uk/id/eprint/31137/},
    abstract = {The UK National Grid has placed increased emphasis on the development of Demand Side Response (DSR) tariff mechanisms to manage load at peak times. Refrigeration systems, along with HVAC, are estimated to consume 14\% of the UK's electricity and could have a significant role in DSR applications. However, characterized by relatively low individual electrical loads and massive asset numbers, multiple low-power refrigerators need aggregation for inclusion in these tariffs. In this paper, the impact of DSR control mechanisms on food retailing refrigeration systems is investigated. The experiments are conducted in a test rig built to resemble a typical small supermarket store. The paper demonstrates how the temperature and pressure profiles of the system, the active power and the drawn current of the compressors are affected following a rapid shutdown and subsequent return to normal operation as a response to a DSR event. Moreover, risks and challenges associated with primary and secondary Firm Frequency Response (FFR) mechanisms, where the load is rapidly shed in response to changes in grid frequency, are considered. For instance, measurements are included that show a significant increase of approx. 30\% in peak inrush currents when the system returns to normal operation at the end of a DSR event. We consider how high inrush currents after a DSR event can produce voltage fluctuations in the supply, and we assess the risks to the local power supply system.}
    }
  • R. Shang, Y. Yuan, L. Jiao, Y. Meng, and A. G. Esfahani, “A self-paced learning algorithm for change detection in synthetic aperture radar images,” Signal processing, vol. 142, p. 375–387, 2018. doi:10.1016/j.sigpro.2017.07.023
    [BibTeX] [Abstract] [Download PDF]

    Detecting changed regions between two given synthetic aperture radar images is very important for monitoring changes in landscapes, ecosystems and so on. This can be formulated as a classification problem and addressed by learning a classifier; however, traditional machine learning classification methods easily get stuck in local optima caused by noise in the data. Hence, we propose an unsupervised algorithm that constructs a classifier based on self-paced learning. Self-paced learning is a recently developed supervised learning approach that has been proven capable of overcoming this shortcoming effectively. After applying a pre-classification to the difference image, we uniformly select samples using the initial result. Then, self-paced learning is utilized to train a classifier. Finally, a filter based on spatial contextual information is used to further smooth the classification result. In order to demonstrate the efficiency of the proposed algorithm, we apply it to five real synthetic aperture radar image datasets. The results obtained by our algorithm are compared with five other state-of-the-art algorithms, which demonstrates that our algorithm outperforms those state-of-the-art algorithms in terms of accuracy and robustness.

    @article{lincoln34757,
    volume = {142},
    month = {January},
    author = {Ronghua Shang and Yijing Yuan and Licheng Jiao and Yang Meng and Amir Ghalamzan Esfahani},
    title = {A self-paced learning algorithm for change detection in synthetic aperture radar images},
    publisher = {Elsevier},
    year = {2018},
    journal = {Signal Processing},
    doi = {10.1016/j.sigpro.2017.07.023},
    pages = {375--387},
    url = {https://eprints.lincoln.ac.uk/id/eprint/34757/},
    abstract = {Detecting changed regions between two given synthetic aperture radar images is very important for monitoring changes in landscapes, ecosystems and so on. This can be formulated as a classification problem and addressed by learning a classifier; however, traditional machine learning classification methods easily get stuck in local optima caused by noise in the data. Hence, we propose an unsupervised algorithm that constructs a classifier based on self-paced learning. Self-paced learning is a recently developed supervised learning approach that has been proven capable of overcoming this shortcoming effectively. After applying a pre-classification to the difference image, we uniformly select samples using the initial result. Then, self-paced learning is utilized to train a classifier. Finally, a filter based on spatial contextual information is used to further smooth the classification result. In order to demonstrate the efficiency of the proposed algorithm, we apply it to five real synthetic aperture radar image datasets. The results obtained by our algorithm are compared with five other state-of-the-art algorithms, which demonstrates that our algorithm outperforms those state-of-the-art algorithms in terms of accuracy and robustness.}
    }
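    The self-paced loop itself, alternating between fitting on currently easy samples and relaxing the pace threshold, is compact enough to sketch. Logistic regression on synthetic 2-D data stands in for the paper's classifier, and the pace schedule and warm start are assumptions.

    import numpy as np

    rng = np.random.default_rng(7)
    X = rng.normal(0, 1, (300, 2))
    y = (X @ np.array([1.5, -1.0]) + rng.normal(0, 0.8, 300) > 0).astype(float)
    w = np.zeros(2)

    def sample_losses(w):
        # per-sample logistic loss, used to decide which samples are "easy"
        p = 1 / (1 + np.exp(-X @ w))
        return -(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

    # warm start on all data so per-sample losses differ before selection begins
    for _ in range(20):
        p = 1 / (1 + np.exp(-X @ w))
        w -= 0.1 * X.T @ (p - y) / len(y)

    lam = 0.4                                # pace threshold: admit losses below lam
    for epoch in range(20):
        v = sample_losses(w) < lam           # (i) select currently easy samples
        for _ in range(50):                  # (ii) refit on the selection
            p = 1 / (1 + np.exp(-X[v] @ w))
            w -= 0.1 * X[v].T @ (p - y[v]) / max(v.sum(), 1)
        lam *= 1.2                           # grow the pace: admit harder samples
    acc = ((1 / (1 + np.exp(-X @ w)) > 0.5) == y).mean()
    print(f"training accuracy: {acc:.2f}, samples admitted at end: {v.sum()}/{len(y)}")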
  • G. Petropoulos, P. Srivastava, M. Piles, and S. Pearson, “Earth observation-based operational estimation of soil moisture and evapotranspiration for agricultural crops in support of sustainable water management,” Sustainability, vol. 10, iss. 1, p. 181, 2018. doi:10.3390/su10010181
    [BibTeX] [Abstract] [Download PDF]

    Global information on the spatio-temporal variation of parameters driving the Earth's terrestrial water and energy cycles, such as evapotranspiration (ET) rates and surface soil moisture (SSM), is of key significance. The water and energy cycles underpin global food and water security and need to be fully understood as the climate changes. In the last few decades, Earth Observation (EO) technology has played an increasingly important role in determining both ET and SSM. This paper reviews the state of the art specifically in the operational use of EO for ET and SSM estimation. We discuss the key technical and operational considerations in deriving accurate estimates of those parameters from space. The review suggests that significant progress has been made in recent years in retrieving ET and SSM operationally; yet further work is required to optimize parameter accuracy and to improve the operational capability of services developed using EO data. Emerging applications, specifically in relation to agriculture, in which operational ET/SSM products may be included are also highlighted; the operational use of those products in such applications remains to be seen.

    @article{lincoln30806,
    volume = {10},
    number = {1},
    month = {January},
    author = {George Petropoulos and Prashant Srivastava and Maria Piles and Simon Pearson},
    note = {This article belongs to the Special Issue Precision Agriculture Technologies for a Sustainable Future: Current Trends and Perspectives},
    title = {Earth observation-based operational estimation of soil moisture and evapotranspiration for agricultural crops in support of sustainable water management},
    publisher = {MDPI},
    year = {2018},
    journal = {Sustainability},
    doi = {10.3390/su10010181},
    pages = {181},
    url = {https://eprints.lincoln.ac.uk/id/eprint/30806/},
    abstract = {Global information on the spatio-temporal variation of parameters driving the Earth's terrestrial water and energy cycles, such as evapotranspiration (ET) rates and surface soil moisture (SSM), is of key significance. The water and energy cycles underpin global food and water security and need to be fully understood as the climate changes. In the last few decades, Earth Observation (EO) technology has played an increasingly important role in determining both ET and SSM. This paper reviews the state of the art specifically in the operational use of EO for ET and SSM estimation. We discuss the key technical and operational considerations in deriving accurate estimates of those parameters from space. The review suggests that significant progress has been made in recent years in retrieving ET and SSM operationally; yet further work is required to optimize parameter accuracy and to improve the operational capability of services developed using EO data. Emerging applications, specifically in relation to agriculture, in which operational ET/SSM products may be included are also highlighted; the operational use of those products in such applications remains to be seen.}
    }
  • G. Markkula, R. Romano, R. Madigan, C. Fox, O. Giles, and N. Merat, “Models of human decision-making as tools for estimating and optimising impacts of vehicle automation,” in Transportation research board, 2018.
    [BibTeX] [Abstract] [Download PDF]

    With the development of increasingly automated vehicles (AVs) comes the increasingly difficult challenge of comprehensively validating these for acceptable, and ideally beneficial, impacts on the transport system. There is a growing consensus that virtual testing, where simulated AVs are deployed in simulated traffic, will be key for cost-effective testing and optimisation. The least mature model components in such simulations are those generating the behaviour of human agents in or around the AVs. In this paper, human models and virtual testing applications are presented for two example scenarios: (i) a human pedestrian deciding whether to cross a street in front of an approaching automated vehicle, with or without external human-machine interface elements, and (ii) an AV handing over control to a human driver in a critical rear-end situation. These scenarios have received much recent research attention, yet simulation-ready human behaviour models are lacking. They are discussed here in the context of existing models of perceptual decision-making, situational awareness, and traffic interactions. It is argued that the human behaviour in question might be usefully conceptualised as a number of interrelated decision processes, not all of which are necessarily directly associated with externally observable behaviour. The results show that models based on this type of framework can reproduce qualitative patterns of behaviour reported in the literature for the two addressed scenarios, and it is demonstrated how computer simulations based on the models, once these have been properly validated, could allow prediction and optimisation of the AV.

    @inproceedings{lincoln33098,
    booktitle = {Transportation Research Board},
    month = {January},
    title = {Models of human decision-making as tools for estimating and optimising impacts of vehicle automation},
    author = {G Markkula and R Romano and R Madigan and Charles Fox and O Giles and N Merat},
    publisher = {Transportation Research Record},
    year = {2018},
    url = {https://eprints.lincoln.ac.uk/id/eprint/33098/},
    abstract = {With the development of increasingly automated vehicles (AVs) comes the increasingly difficult challenge of comprehensively validating these for acceptable, and ideally beneficial, impacts on the transport system. There is a growing consensus that virtual testing, where simulated AVs are deployed in simulated traffic, will be key for cost-effective testing and optimisation. The least mature model components in such simulations are those generating the behaviour of human agents in or around the AVs. In this paper, human models and virtual testing applications are presented for two example scenarios: (i) a human pedestrian deciding whether to cross a street in front of an approaching automated vehicle, with or without external human-machine interface elements, and (ii) an AV handing over control to a human driver in a critical rear-end situation. These scenarios have received much recent research attention, yet simulation-ready human behaviour models are lacking. They are discussed here in the context of existing models of perceptual decision-making, situational awareness, and traffic interactions. It is argued that the human behaviour in question might be usefully conceptualised as a number of interrelated decision processes, not all of which are necessarily directly associated with externally observable behaviour. The results show that models based on this type of framework can reproduce qualitative patterns of behaviour reported in the literature for the two addressed scenarios, and it is demonstrated how computer simulations based on the models, once these have been properly validated, could allow prediction and optimisation of the AV.}
    }
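    The decision processes mentioned in the abstract are often modelled as evidence accumulation. Below is a generic drift-diffusion sketch of the pedestrian's crossing choice, with assumed drift, noise and threshold values; the paper's fitted models and scenario details are richer.

    import numpy as np

    rng = np.random.default_rng(8)

    # Drift-diffusion: noisy evidence about whether the gap to the approaching
    # vehicle is acceptable accumulates until one of two decision bounds is hit.
    def crossing_decision(gap_s, threshold=1.0, drift_gain=0.4,
                          accept_gap=4.0, noise=0.3, dt=0.05, t_max=10.0):
        """Return (decision, latency): 'cross' or 'wait' plus decision time."""
        evidence = 0.0
        for step in range(int(t_max / dt)):
            drift = drift_gain * (gap_s - accept_gap)      # positive if gap feels safe
            evidence += drift * dt + noise * np.sqrt(dt) * rng.normal()
            if evidence >= threshold:
                return "cross", step * dt
            if evidence <= -threshold:
                return "wait", step * dt
        return "wait", t_max

    for gap in (2.0, 4.0, 6.0):
        decisions = [crossing_decision(gap)[0] for _ in range(200)]
        print(f"gap {gap:.0f} s: P(cross) = {decisions.count('cross') / 200:.2f}")

    A model of this shape yields both choice probabilities and latencies, which is what makes it usable inside the virtual-testing simulations the paper argues for.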
  • R. Akrour, A. Abdolmaleki, H. Abdulsamad, J. Peters, and G. Neumann, “Model-free trajectory-based policy optimization with monotonic improvement,” Journal of machine learning research (jmlr), vol. 19, iss. 14, p. 1–25, 2018.
    [BibTeX] [Abstract] [Download PDF]

    Many of the recent trajectory optimization algorithms alternate between linear approximation of the system dynamics around the mean trajectory and conservative policy update. One way of constraining the policy change is by bounding the Kullback-Leibler (KL) divergence between successive policies. These approaches already demonstrated great experimental success in challenging problems such as end-to-end control of physical systems. However, these approaches lack any improvement guarantee as the linear approximation of the system dynamics can introduce a bias in the policy update and prevent convergence to the optimal policy. In this article, we propose a new model-free trajectory-based policy optimization algorithm with guaranteed monotonic improvement. The algorithm backpropagates a local, quadratic and time-dependent Q-Function learned from trajectory data instead of a model of the system dynamics. Our policy update ensures exact KL-constraint satisfaction without simplifying assumptions on the system dynamics. We experimentally demonstrate on highly non-linear control tasks the improvement in performance of our algorithm in comparison to approaches linearizing the system dynamics. In order to show the monotonic improvement of our algorithm, we additionally conduct a theoretical analysis of our policy update scheme to derive a lower bound of the change in policy return between successive iterations.

    @article{lincoln32457,
    volume = {19},
    number = {14},
    author = {R. Akrour and A. Abdolmaleki and H. Abdulsamad and J. Peters and Gerhard Neumann},
    title = {Model-Free Trajectory-based Policy Optimization with Monotonic Improvement},
    publisher = {Journal of Machine Learning Research},
    journal = {Journal of Machine Learning Research (JMLR)},
    pages = {1--25},
    year = {2018},
    url = {https://eprints.lincoln.ac.uk/id/eprint/32457/},
    abstract = {Many of the recent trajectory optimization algorithms alternate between linear approximation of the system dynamics around the mean trajectory and conservative policy update. One way of constraining the policy change is by bounding the Kullback-Leibler (KL) divergence between successive policies. These approaches already demonstrated great experimental success in challenging problems such as end-to-end control of physical systems. However, these approaches lack any improvement guarantee as the linear approximation of the system dynamics can introduce a bias in the policy update and prevent convergence to the optimal policy. In this article, we propose a new model-free trajectory-based policy optimization algorithm with guaranteed monotonic improvement. The algorithm backpropagates a local, quadratic and time-dependent Q-Function learned from trajectory data instead of a model of the system dynamics. Our policy update ensures exact KL-constraint satisfaction without simplifying assumptions on the system dynamics. We experimentally demonstrate on highly non-linear control tasks the improvement in performance of our algorithm in comparison to approaches linearizing the system dynamics. In order to show the monotonic improvement of our algorithm, we additionally conduct a theoretical analysis of our policy update scheme to derive a lower bound of the change in policy return between successive iterations.}
    }
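    A compact sketch of the update scheme this abstract describes, written in its generic KL-constrained form (the symbols below are assumed notation, not copied from the paper):

    \[
    \pi_{k+1} = \arg\max_{\pi}\; \mathbb{E}_{s,a \sim \pi}\!\left[\hat{Q}_k(s,a)\right]
    \quad \text{s.t.} \quad
    \mathbb{E}_{s}\!\left[\mathrm{KL}\!\left(\pi(\cdot \mid s)\,\Vert\,\pi_k(\cdot \mid s)\right)\right] \le \epsilon,
    \]

    where Q_k is the local, quadratic, time-dependent Q-function learned from trajectory data and epsilon bounds the per-iteration policy change. With a quadratic Q_k the constraint can be handled in closed form, which matches the abstract's claim of exact KL-constraint satisfaction without linearising the dynamics.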
  • O. Arenz, M. Zhong, and G. Neumann, “Efficient gradient-free variational inference using policy search,” in Proceedings of the international conference on machine learning, 2018.
    [BibTeX] [Abstract] [Download PDF]

    Inference from complex distributions is a common problem in machine learning needed for many Bayesian methods. We propose an efficient, gradient-free method for learning general GMM approximations of multimodal distributions based on recent insights from stochastic search methods. Our method establishes information-geometric trust regions to ensure efficient exploration of the sampling space and stability of the GMM updates, allowing for efficient estimation of multi-variate Gaussian variational distributions. For GMMs, we apply a variational lower bound to decompose the learning objective into sub-problems given by learning the individual mixture components and the coefficients. The number of mixture components is adapted online in order to allow for arbitrary exact approximations. We demonstrate on several domains that we can learn significantly better approximations than competing variational inference methods and that the quality of samples drawn from our approximations is on par with samples created by state-of-the-art MCMC samplers that require significantly more computational resources.

    @inproceedings{lincoln32456,
    booktitle = {Proceedings of the International Conference on Machine Learning},
    title = {Efficient Gradient-Free Variational Inference using Policy Search},
    author = {O. Arenz and M. Zhong and Gerhard Neumann},
    year = {2018},
    url = {https://eprints.lincoln.ac.uk/id/eprint/32456/},
    abstract = {Inference from complex distributions is a common problem in machine learning needed for many Bayesian methods. We propose an efficient, gradient-free method for learning general GMM approximations of multimodal distributions based on recent insights from stochastic search methods. Our method establishes information-geometric trust regions to ensure efficient exploration of the sampling space and stability of the GMM updates, allowing for efficient estimation of multi-variate Gaussian variational distributions. For GMMs, we apply a variational lower bound to decompose the learning objective into sub-problems given by learning the individual mixture components and the coefficients. The number of mixture components is adapted online in order to allow for arbitrary exact approximations. We demonstrate on several domains that we can learn significantly better approximations than competing variational inference methods and that the quality of samples drawn from our approximations is on par with samples created by state-of-the-art MCMC samplers that require significantly more computational resources.}
    }
  • O. Arenz, G. Neumann, and M. Zhong, “Efficient gradient-free variational inference using policy search,” Proceedings of the 35th international conference on machine learning, vol. 80, p. 234–243, 2018.
    [BibTeX] [Abstract] [Download PDF]

    Inference from complex distributions is a common problem in machine learning needed for many Bayesian methods. We propose an efficient, gradient-free method for learning general GMM approximations of multimodal distributions based on recent insights from stochastic search methods. Our method establishes information-geometric trust regions to ensure efficient exploration of the sampling space and stability of the GMM updates, allowing for efficient estimation of multi-variate Gaussian variational distributions. For GMMs, we apply a variational lower bound to decompose the learning objective into sub-problems given by learning the individual mixture components and the coefficients. The number of mixture components is adapted online in order to allow for arbitrary exact approximations. We demonstrate on several domains that we can learn significantly better approximations than competing variational inference methods and that the quality of samples drawn from our approximations is on par with samples created by state-of-the-art MCMC samplers that require significantly more computational resources.

    @article{lincoln33871,
    volume = {80},
    title = {Efficient Gradient-Free Variational Inference using Policy Search},
    author = {Oleg Arenz and Gerhard Neumann and Mingjun Zhong},
    publisher = {Proceedings of Machine Learning Research},
    year = {2018},
    pages = {234--243},
    journal = {Proceedings of the 35th International Conference on Machine Learning},
    url = {https://eprints.lincoln.ac.uk/id/eprint/33871/},
    abstract = {Inference from complex distributions is a common problem in machine learning needed for many Bayesian methods. We propose an efficient, gradient-free method for learning general GMM approximations of multimodal distributions based on recent insights from stochastic search methods. Our method establishes information-geometric trust regions to ensure efficient exploration of the sampling space and stability of the GMM updates, allowing for efficient estimation of multi-variate Gaussian variational distributions. For GMMs, we apply a variational lower bound to decompose the learning objective into sub-problems given by learning the individual mixture components and the coefficients. The number of mixture components is adapted online in order to allow for arbitrary exact approximations. We demonstrate on several domains that we can learn significantly better approximations than competing variational inference methods and that the quality of samples drawn from our approximations is on par with samples created by state-of-the-art MCMC samplers that require significantly more computational resources.}
    }
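    A rough sketch of the objective optimised in the two records above (notation assumed here, not copied from the paper): variational inference fits an approximation q to an unnormalised target p-tilde by maximising

    \[
    J(q) = \mathbb{E}_{q(x)}\!\left[\log \tilde{p}(x)\right] + H(q),
    \qquad
    q(x) = \sum_{o} \pi(o)\,\mathcal{N}\!\left(x \mid \mu_o, \Sigma_o\right),
    \]

    where H(q) is the entropy of q. Per the abstract, a variational lower bound on J(q) then splits the learning into sub-problems for the individual mixture components (mu_o, Sigma_o) and the coefficients pi(o), each updated inside an information-geometric trust region, with the number of components adapted online.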
  • P. Baxter, G. Cielniak, M. Hanheide, and P. From, “Safe human-robot interaction in agriculture,” in Companion of the 2018 acm/ieee international conference on human-robot interaction – hri ’18, 2018, p. 59–60. doi:10.1145/3173386.3177072
    [BibTeX] [Abstract] [Download PDF]

    Robots in agricultural contexts are finding increased numbers of applications with respect to (partial) automation for increased productivity. However, this presents complex technical problems to be overcome, which are magnified when these robots are intended to work side-by-side with human workers. In this contribution we present an exploratory pilot study to characterise interactions between a robot performing an in-field transportation task and human fruit pickers. Partly an effort to inform the development of a fully autonomous system, the emphasis is on involving the key stakeholders (i.e. the pickers themselves) in the process so as to maximise the potential impact of such an application.

    @inproceedings{lincoln33320,
    booktitle = {Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction - HRI '18},
    title = {Safe Human-Robot Interaction in Agriculture},
    author = {Paul Baxter and Grzegorz Cielniak and Marc Hanheide and Pal From},
    publisher = {ACM},
    year = {2018},
    pages = {59--60},
    doi = {10.1145/3173386.3177072},
    url = {https://eprints.lincoln.ac.uk/id/eprint/33320/},
    abstract = {Robots in agricultural contexts are finding increased numbers of applications with respect to (partial) automation for increased productivity. However, this presents complex technical problems to be overcome, which are magnified when these robots are intended to work side-by-side with human workers. In this contribution we present an exploratory pilot study to characterise interactions between a robot performing an in-field transportation task and human fruit pickers. Partly an effort to inform the development of a fully autonomous system, the emphasis is on involving the key stakeholders (i.e. the pickers themselves) in the process so as to maximise the potential impact of such an application.}
    }
  • P. Baxter, P. Lightbody, and M. Hanheide, “Robots providing cognitive assistance in shared workspaces,” in Companion of the 2018 acm/ieee international conference on human-robot interaction – hri ’18, 2018, p. 57–58. doi:10.1145/3173386.3177070
    [BibTeX] [Abstract] [Download PDF]

    Human-Robot Collaboration is an area of particular current interest, with the attempt to make robots more generally useful in contexts where they work side-by-side with humans. Currently, efforts typically focus on the sensory and motor aspects of the task on the part of the robot to enable them to function safely and effectively given an assigned task. In the present contribution, we rather focus on the cognitive faculties of the human worker by attempting to incorporate known (from psychology) properties of human cognition. In a proof-of-concept study, we demonstrate how applying characteristics of human categorical perception to the type of robot assistance impacts on task performance and experience of the participants. This lays the foundation for further developments in cognitive assistance and collaboration in side-by-side working for humans and robots.

    @inproceedings{lincoln33321,
    booktitle = {Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction - HRI '18},
    title = {Robots Providing Cognitive Assistance in Shared Workspaces},
    author = {Paul Baxter and Peter Lightbody and Marc Hanheide},
    publisher = {ACM},
    year = {2018},
    pages = {57--58},
    doi = {10.1145/3173386.3177070},
    url = {https://eprints.lincoln.ac.uk/id/eprint/33321/},
    abstract = {Human-Robot Collaboration is an area of particular current interest, with the attempt to make robots more generally useful in contexts where they work side-by-side with humans. Currently, efforts typically focus on the sensory and motor aspects of the task on the part of the robot to enable them to function safely and effectively given an assigned task. In the present contribution, we rather focus on the cognitive faculties of the human worker by attempting to incorporate known (from psychology) properties of human cognition. In a proof-of-concept study, we demonstrate how applying characteristics of human categorical perception to the type of robot assistance impacts on task performance and experience of the participants. This lays the foundation for further developments in cognitive assistance and collaboration in side-by-side working for humans and robots.}
    }
  • F. Camara, S. Cosar, N. Bellotto, N. Merat, and C. Fox, “Towards pedestrian-av interaction: method for elucidating pedestrian preferences,” in Ieee/rsj international conference on intelligent robots and systems (iros 2018) workshops, 2018.
    [BibTeX] [Abstract] [Download PDF]

    Autonomous vehicle navigation around human pedestrians remains a challenge due to the potential for complex interactions and feedback loops between the agents. As a small step towards better understanding of these interactions, this Methods Paper presents a new empirical protocol based on tracking real humans in a controlled lab environment, which is able to make inferences about the human's preferences for interaction (how they trade off the cost of their time against the cost of a collision). Knowledge of such preferences, if collected in more realistic environments, could then be used by future AVs to predict and control for pedestrian behaviour. This study is intended as a work-in-progress report on methods working towards real-time and less controlled experiments, demonstrating successful use of several key components required by such systems, but in a more controlled setting. This suggests that these components could be extended to more realistic situations and results in an ongoing research programme.

    @inproceedings{lincoln33565,
    booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2018) Workshops},
    title = {Towards pedestrian-AV interaction: method for elucidating pedestrian preferences},
    author = {Fanta Camara and Serhan Cosar and Nicola Bellotto and Natasha Merat and Charles Fox},
    year = {2018},
    url = {https://eprints.lincoln.ac.uk/id/eprint/33565/},
    abstract = {Autonomous vehicle navigation around human pedestrians remains a challenge due to the potential for complex interactions and feedback loops between the agents. As a small step towards better understanding of these interactions, this Methods Paper presents a new empirical protocol based on tracking real humans in a controlled lab environment, which is able to make inferences about the human's preferences for interaction (how they trade off the cost of their time against the cost of a collision). Knowledge of such preferences, if collected in more realistic environments, could then be used by future AVs to predict and control for pedestrian behaviour. This study is intended as a work-in-progress report on methods working towards real-time and less controlled experiments, demonstrating successful use of several key components required by such systems, but in a more controlled setting. This suggests that these components could be extended to more realistic situations and results in an ongoing research programme.}
    }
  • F. Camara, O. Giles, R. Madigan, M. Rothmueller, H. P. Rasmussen, S. Vendelbo-Larsen, G. Markkula, Y. Lee, L. Garach, N. Merat, and C. Fox, “Predicting pedestrian road-crossing assertiveness for autonomous vehicle control,” in The 21st ieee international conference on intelligent transportation systems, 2018.
    [BibTeX] [Abstract] [Download PDF]

    Autonomous vehicles (AVs) must interact with other road users including pedestrians. Unlike passive environments, pedestrians are active agents having their own utilities and decisions, which must be inferred and predicted by AVs in order to control interactions with them and navigation around them. In particular, when a pedestrian wishes to cross the road in front of the vehicle at an unmarked crossing, the pedestrian and AV must compete for the space, which may be considered as a game-theoretic interaction in which one agent must yield to the other. To inform AV controllers in this setting, this study collects and analyses data from real-world human road crossings to determine what features of crossing behaviours are predictive about the level of assertiveness of pedestrians and of the eventual winner of the interactions. It presents the largest and most detailed data set of its kind known to us, and new methods to analyze and predict pedestrian-vehicle interactions based upon it. Pedestrian-vehicle interactions are decomposed into sequences of independent discrete events. We use probabilistic methods (logistic regression and decision tree regression) and sequence analysis to analyze sets and sub-sequences of actions used by both pedestrians and human drivers while crossing at an intersection, to find common patterns of behaviour and to predict the winner of each interaction. We report on the particular features found to be predictive and which can thus be integrated into game-theoretic AV controllers to inform real-time interactions.

    @inproceedings{lincoln33126,
    booktitle = {The 21st IEEE International Conference on Intelligent Transportation Systems},
    title = {Predicting pedestrian road-crossing assertiveness for autonomous vehicle control},
    author = {Fanta Camara and O Giles and R Madigan and M Rothmueller and P Holm Rasmussen and SA Vendelbo-Larsen and G Markkula and YM Lee and L Garach and N Merat and CW Fox},
    publisher = {IEEE Xplore},
    year = {2018},
    url = {https://eprints.lincoln.ac.uk/id/eprint/33126/},
    abstract = {Autonomous vehicles (AVs) must interact with other road users including pedestrians. Unlike passive environments, pedestrians are active agents having their own utilities and decisions, which must be inferred and predicted by AVs in order to control interactions with them and navigation around them. In particular, when a pedestrian wishes to cross the road in front of the vehicle at an unmarked crossing, the pedestrian and AV must compete for the space, which may be considered as a game-theoretic interaction in which one agent must yield to the other. To inform AV controllers in this setting, this study collects and analyses data from real-world human road crossings to determine what features of crossing behaviours are predictive about the level of assertiveness of pedestrians and of the eventual winner of the interactions. It presents the largest and most detailed data set of its kind known to us, and new methods to analyze and predict pedestrian-vehicle interactions based upon it. Pedestrian-vehicle interactions are decomposed into sequences of independent discrete events. We use probabilistic methods (logistic regression and decision tree regression) and sequence analysis to analyze sets and sub-sequences of actions used by both pedestrians and human drivers while crossing at an intersection, to find common patterns of behaviour and to predict the winner of each interaction. We report on the particular features found to be predictive and which can thus be integrated into game-theoretic AV controllers to inform real-time interactions.}
    }
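    An illustrative sketch of the prediction step described above: classifying the "winner" of a pedestrian-vehicle interaction from per-event features with logistic regression. The feature names and data below are hypothetical placeholders, not the paper's annotated dataset.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 200
    # Hypothetical per-interaction features, e.g. pedestrian approach speed,
    # time gap to the vehicle, number of head turns.
    X = rng.normal(size=(n, 3))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = LogisticRegression().fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))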
  • F. Camara, O. Giles, R. Madigan, M. Rothmüller, P. H. Rasmussen, S. A. Vendelbo-Larsen, G. Markkula, Y. M. Lee, L. Garach, N. Merat, and C. Fox, “Filtration analysis of pedestrian-vehicle interactions for autonomous vehicles control,” in 15th international conference on intelligent autonomous systems (ias-15) workshops, 2018.
    [BibTeX] [Abstract] [Download PDF]

    Interacting with humans remains a challenge for autonomous vehicles (AVs). When a pedestrian wishes to cross the road in front of the vehicle at an unmarked crossing, the pedestrian and AV must compete for the space, which may be considered as a game-theoretic interaction in which one agent must yield to the other. To inform development of new real-time AV controllers in this setting, this study collects and analy- ses detailed, manually-annotated, temporal data from real-world human road crossings as they interact with manual drive vehicles. It studies the temporal orderings (filtrations) in which features are revealed to the ve- hicle and their informativeness over time. It presents a new framework suggesting how optimal stopping controllers may then use such data to enable an AV to decide when to act (by speeding up, slowing down, or otherwise signalling intent to the pedestrian) or alternatively, to continue at its current speed in order to gather additional information from new features, including signals from that pedestrian, before acting itself.

    @inproceedings{lincoln33564,
    booktitle = {15th International Conference on Intelligent Autonomous Systems (IAS-15) workshops},
    title = {Filtration analysis of pedestrian-vehicle interactions for autonomous vehicles control},
    author = {Fanta Camara and Oscar Giles and Ruth Madigan and Markus Rothm{\"u}ller and Pernille Holm Rasmussen and Signe Alexandra Vendelbo-Larsen and Gustav Markkula and Yee Mun Lee and Laura Garach and Natasha Merat and Charles Fox},
    year = {2018},
    url = {https://eprints.lincoln.ac.uk/id/eprint/33564/},
    abstract = {Interacting with humans remains a challenge for autonomous vehicles (AVs). When a pedestrian wishes to cross the road in front of the vehicle at an unmarked crossing, the pedestrian and AV must compete for the space, which may be considered as a game-theoretic interaction in which one agent must yield to the other. To inform development of new real-time AV controllers in this setting, this study collects and analyses detailed, manually-annotated, temporal data from real-world human road crossings as they interact with manual drive vehicles. It studies the temporal orderings (filtrations) in which features are revealed to the vehicle and their informativeness over time. It presents a new framework suggesting how optimal stopping controllers may then use such data to enable an AV to decide when to act (by speeding up, slowing down, or otherwise signalling intent to the pedestrian) or alternatively, to continue at its current speed in order to gather additional information from new features, including signals from that pedestrian, before acting itself.}
    }
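    The framework in the abstract above can be read as a value-of-information stopping rule; a deliberately simple, hypothetical version (a myopic one-step rule, not the paper's framework) looks like this:

    # Myopic one-step stopping rule: act now unless the expected reduction in
    # decision cost from observing one more pedestrian feature outweighs the
    # cost of the extra delay. All quantities are hypothetical placeholders.
    def act_now(cost_if_act_now: float,
                expected_cost_after_next_feature: float,
                delay_cost: float) -> bool:
        expected_gain = cost_if_act_now - expected_cost_after_next_feature
        return expected_gain <= delay_cost  # waiting isn't worth it: act

    print(act_now(cost_if_act_now=1.0,
                  expected_cost_after_next_feature=0.8,
                  delay_cost=0.3))  # True: act (speed up, slow down, or signal)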
  • F. Camara, R. A. Romano, G. Markkula, R. Madigan, N. Merat, and C. W. Fox, “Empirical game theory of pedestrian interaction for autonomous vehicles,” in Proc. measuring behaviour 2018: international conference on methods and techniques in behavioral research, 2018.
    [BibTeX] [Abstract] [Download PDF]

    Autonomous vehicles (AVs) are appearing on roads, based on standard robotic mapping and navigation algorithms. However, their ability to interact with other road-users is much less well understood. If AVs are programmed to stop every time another road user obstructs them, then other road users simply learn that they can take priority at every interaction, and the AV will make little or no progress. This issue is especially important in the case of a pedestrian crossing the road in front of the AV. The present methods paper expands the sequential chicken model introduced in (Fox et al., 2018), using empirical data to measure behavior of humans in a controlled plus-maze experiment, and showing how such data can be used to infer parameters of the model via a Gaussian Process. This provides a more realistic, empirical understanding of the human factors intelligence required by future autonomous vehicles.

    @inproceedings{lincoln32028,
    booktitle = {Proc. Measuring Behaviour 2018: International Conference on Methods and Techniques in Behavioral Research},
    title = {Empirical game theory of pedestrian interaction for autonomous vehicles},
    author = {Fanta Camara and Richard A. Romano and Gustav Markkula and Ruth Madigan and Natasha Merat and Charles W. Fox},
    year = {2018},
    journal = {Proceedings of Measuring Behavior 2018.},
    url = {https://eprints.lincoln.ac.uk/id/eprint/32028/},
    abstract = {Autonomous vehicles (AVs) are appearing on roads, based on standard robotic mapping and navigation algorithms. However, their ability to interact with other road-users is much less well understood. If AVs are programmed to stop every time another road user obstructs them, then other road users simply learn that they can take priority at every interaction, and the AV will make little or no progress. This issue is especially important in the case of a pedestrian crossing the road in front of the AV. The present methods paper expands the sequential chicken model introduced in (Fox et al., 2018), using empirical data to measure behavior of humans in a controlled plus-maze experiment, and showing how such data can be used to infer parameters of the model via a Gaussian Process. This provides a more realistic, empirical understanding of the human factors intelligence required by future autonomous vehicles.}
    }
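    A generic sketch of the Gaussian-Process step described above: fit a GP from a model parameter to a simulated behaviour statistic, then pick the parameter whose prediction best matches the observed statistic. The simulator, statistic, and numbers below are hypothetical stand-ins, not the paper's model.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    rng = np.random.default_rng(0)

    def simulated_statistic(theta: float) -> float:
        # Placeholder for running the sequential chicken model at parameter theta.
        return np.tanh(theta) + 0.05 * rng.standard_normal()

    thetas = np.linspace(-2.0, 2.0, 25).reshape(-1, 1)
    stats = np.array([simulated_statistic(t) for t in thetas[:, 0]])

    gp = GaussianProcessRegressor(normalize_y=True).fit(thetas, stats)

    observed = 0.4  # hypothetical statistic measured in the plus-maze experiment
    grid = np.linspace(-2.0, 2.0, 401).reshape(-1, 1)
    theta_hat = grid[np.argmin(np.abs(gp.predict(grid) - observed)), 0]
    print("inferred parameter:", theta_hat)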
  • K. Elgeneidy, A. Al-Yacoub, Z. Usman, N. Lohse, M. Jackson, and I. Wright, “Towards an automated masking process: a model-based approach,” Journal of engineering manufacture, 2018. doi:10.1177/0954405418810058
    [BibTeX] [Abstract] [Download PDF]

    The masking of aircraft engine parts, such as turbine blades, is a major bottleneck for the aerospace industry. The process is often carried out manually in multiple stages of coating and curing, which requires extensive time and introduces variations in the masking quality. This article investigates the automation of the masking process utilising the well-established time-pressure dispensing process for controlled maskant dispensing and a robotic manipulator for accurate part handling. A mathematical model for the time-pressure dispensing process was derived, extending previous models from the literature by incorporating the robot velocity for controlled masking line width. An experiment was designed, based on the theoretical analysis of the dispensing process, to derive an empirical model from the generated data that incorporate the losses that are otherwise difficult to model mathematically. The model was validated under new input conditions to demonstrate the feasibility of the proposed approach and the masking accuracy using the derived model.

    @article{lincoln33938,
    title = {Towards an automated masking process: A model-based approach},
    author = {Khaled Elgeneidy and Ali Al-Yacoub and Zahid Usman and Niels Lohse and Michael Jackson and Iain Wright},
    publisher = {Sage},
    year = {2018},
    doi = {10.1177/0954405418810058},
    note = {The final published version of this article can be found online at http://www.uk.sagepub.com/journals/Journal202016/},
    journal = {Journal of Engineering Manufacture},
    url = {https://eprints.lincoln.ac.uk/id/eprint/33938/},
    abstract = {The masking of aircraft engine parts, such as turbine blades, is a major bottleneck for the aerospace industry. The process is often carried out manually in multiple stages of coating and curing, which requires extensive time and introduces variations in the masking quality. This article investigates the automation of the masking process utilising the well-established time-pressure dispensing process for controlled maskant dispensing and a robotic manipulator for accurate part handling. A mathematical model for the time-pressure dispensing process was derived, extending previous models from the literature by incorporating the robot velocity for controlled masking line width. An experiment was designed, based on the theoretical analysis of the dispensing process, to derive an empirical model from the generated data that incorporate the losses that are otherwise difficult to model mathematically. The model was validated under new input conditions to demonstrate the feasibility of the proposed approach and the masking accuracy using the derived model.}
    }
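    For illustration of the kind of physics a time-pressure dispensing model builds on (a textbook laminar-flow relation under assumed conditions, not the paper's derived empirical model): Hagen-Poiseuille flow through the dispensing needle, with the deposited bead's cross-section set by flow rate and robot speed:

    \[
    Q = \frac{\pi R^{4}\,\Delta P}{8\,\eta\,L},
    \qquad
    A = \frac{Q}{v},
    \]

    where R and L are the needle's radius and length, Delta P the applied pressure, eta the maskant viscosity, and v the robot's traverse speed. The masking line width then follows from the cross-sectional area A for a given bead height, which is why a higher robot velocity at fixed pressure yields a narrower line.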
  • K. Elgeneidy, G. Neumann, M. Jackson, and N. Lohse, “Directly printable flexible strain sensors for bending and contact feedback of soft actuators,” Front. robot. ai, 2018. doi:10.3389/frobt.2018.00002
    [BibTeX] [Abstract] [Download PDF]

    This paper presents a fully printable sensorized bending actuator that can be calibrated to provide reliable bending feedback and simple contact detection. A soft bending actuator following a pleated morphology, as well as a flexible resistive strain sensor, were directly 3D printed using easily accessible FDM printer hardware with a dual-extrusion tool head. The flexible sensor was directly welded to the bending actuator's body and systematically tested to characterize and evaluate its response under variable input pressure. A signal conditioning circuit was developed to enhance the quality of the sensory feedback, and flexible conductive threads were used for wiring. The sensorized actuator's response was then calibrated using a vision system to convert the sensory readings to real bending angle values. The empirical relationship was derived using linear regression and validated at untrained input conditions to evaluate its accuracy. Furthermore, the sensorized actuator was tested in a constrained setup that prevents bending, to evaluate the potential of using the same sensor for simple contact detection by comparing the constrained and free-bending responses at the same input pressures. The results of this work demonstrated how a dual-extrusion FDM printing process can be tuned to directly print highly customizable flexible strain sensors that were able to provide reliable bending feedback and basic contact detection. The addition of such sensing capability to bending actuators enhances their functionality and reliability for applications such as controlled soft grasping, flexible wearables, and haptic devices.

    @article{lincoln32562,
    title = {Directly Printable Flexible Strain Sensors for Bending and Contact Feedback of Soft Actuators},
    author = {Khaled Elgeneidy and Gerhard Neumann and Michael Jackson and Niels Lohse},
    publisher = {Frontiers Media},
    year = {2018},
    doi = {10.3389/frobt.2018.00002},
    journal = {Front. Robot. AI},
    url = {https://eprints.lincoln.ac.uk/id/eprint/32562/},
    abstract = {This paper presents a fully printable sensorized bending actuator that can be calibrated to provide reliable bending feedback and simple contact detection. A soft bending actuator following a pleated morphology, as well as a flexible resistive strain sensor, were directly 3D printed using easily accessible FDM printer hardware with a dual-extrusion tool head. The flexible sensor was directly welded to the bending actuator's body and systematically tested to characterize and evaluate its response under variable input pressure. A signal conditioning circuit was developed to enhance the quality of the sensory feedback, and flexible conductive threads were used for wiring. The sensorized actuator's response was then calibrated using a vision system to convert the sensory readings to real bending angle values. The empirical relationship was derived using linear regression and validated at untrained input conditions to evaluate its accuracy. Furthermore, the sensorized actuator was tested in a constrained setup that prevents bending, to evaluate the potential of using the same sensor for simple contact detection by comparing the constrained and free-bending responses at the same input pressures. The results of this work demonstrated how a dual-extrusion FDM printing process can be tuned to directly print highly customizable flexible strain sensors that were able to provide reliable bending feedback and basic contact detection. The addition of such sensing capability to bending actuators enhances their functionality and reliability for applications such as controlled soft grasping, flexible wearables, and haptic devices.}
    }
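    A minimal sketch of the calibration step described above: fit a linear map from conditioned flex-sensor readings to bending angles measured by a vision system. The numbers below are hypothetical placeholders, not the paper's data.

    import numpy as np

    sensor = np.array([512, 540, 571, 600, 633, 660])      # conditioned ADC readings
    angle = np.array([0.0, 12.5, 25.1, 37.8, 50.2, 62.9])  # degrees, from vision

    slope, intercept = np.polyfit(sensor, angle, deg=1)    # linear regression

    def reading_to_angle(r: float) -> float:
        return slope * r + intercept

    print(reading_to_angle(585))  # estimated bending angle for a new reading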
  • K. Elgeneidy, G. Neumann, S. Pearson, M. Jackson, and N. Lohse, “Contact detection and object size estimation using a modular soft gripper with embedded flex sensors,” in Iros 2018, 2018.
    [BibTeX] [Abstract] [Download PDF]

    Soft-grippers can grasp delicate and deformable objects without bruise or damage as the gripper can adapt to the object's shape. However, the contact forces are still hard to regulate due to missing contact feedback of such grippers. In this paper, a modular soft gripper design is presented utilizing interchangeable soft pneumatic actuators with embedded flex sensors as fingers of the gripper. The fingers can be assembled in different configurations using 3D printed connectors. The paper investigates the potential of utilizing the simple sensory feedback from the flex sensors to make additional meaningful inferences regarding the contact state and grasped object size. We study the effect of the grasped object size and contact type on the combined feedback from the embedded flex sensors of all fingers. Our results show that a simple linear relationship exists between the grasped object size and the final flex sensor reading at fixed input conditions, despite the variation in object weight and contact type. Additionally, by simply monitoring the time series response from the flex sensor, contact can be detected by comparing the response to the known free-bending response at the same input conditions. Furthermore, by utilizing the measured internal pressure supplied to the soft fingers, it is possible to distinguish between power and pinch grasps, as the nature of the contact affects the rate of change in the flex sensor readings against the internal pressure.

    @inproceedings{lincoln32544,
    booktitle = {IROS 2018},
    title = {Contact Detection and Object Size Estimation using a Modular Soft Gripper with Embedded Flex Sensors},
    author = {Khaled Elgeneidy and Gerhard Neumann and Simon Pearson and Michael Jackson and Niels Lohse},
    year = {2018},
    journal = {IROS 2018},
    url = {https://eprints.lincoln.ac.uk/id/eprint/32544/},
    abstract = {Soft-grippers can grasp delicate and deformable objects without bruise or damage as the gripper can adapt to the object's shape. However, the contact forces are still hard to regulate due to missing contact feedback of such grippers. In this paper, a modular soft gripper design is presented utilizing interchangeable soft pneumatic actuators with embedded flex sensors as fingers of the gripper. The fingers can be assembled in different configurations using 3D printed connectors. The paper investigates the potential of utilizing the simple sensory feedback from the flex sensors to make additional meaningful inferences regarding the contact state and grasped object size. We study the effect of the grasped object size and contact type on the combined feedback from the embedded flex sensors of all fingers. Our results show that a simple linear relationship exists between the grasped object size and the final flex sensor reading at fixed input conditions, despite the variation in object weight and contact type. Additionally, by simply monitoring the time series response from the flex sensor, contact can be detected by comparing the response to the known free-bending response at the same input conditions. Furthermore, by utilizing the measured internal pressure supplied to the soft fingers, it is possible to distinguish between power and pinch grasps, as the nature of the contact affects the rate of change in the flex sensor readings against the internal pressure.}
    }
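    A sketch of the contact-detection idea in the abstract above: compare the sensor time series to a stored free-bending response at the same input pressure and flag contact when the deviation exceeds a threshold. The data and threshold are placeholders, not the paper's values.

    import numpy as np

    free_bending = np.array([0.0, 0.1, 0.25, 0.45, 0.7, 1.0])  # baseline response
    threshold = 0.15  # assumed deviation tolerance

    def contact_detected(measured: np.ndarray) -> bool:
        return bool(np.max(np.abs(measured - free_bending)) > threshold)

    constrained = np.array([0.0, 0.1, 0.2, 0.3, 0.35, 0.4])  # bending blocked
    print(contact_detected(constrained))  # True: response departs from baseline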
  • K. Essers, M. Chapman, N. Kokciyan, I. Sassoon, T. Porat, P. Balatsoukas, P. Young, M. Ashworth, V. Curcin, S. Modgil, S. Parsons, and E. Sklar, “The consult system: demonstration.” 2018, p. 385–386. doi:10.1145/3284432.3287170
    [BibTeX] [Download PDF]
    @inproceedings{lincoln38402,
    title = {The CONSULT system: Demonstration},
    author = {K. Essers and M. Chapman and N. Kokciyan and I. Sassoon and T. Porat and P. Balatsoukas and P. Young and M. Ashworth and V. Curcin and S. Modgil and Simon Parsons and Elizabeth Sklar},
    year = {2018},
    pages = {385--386},
    doi = {10.1145/3284432.3287170},
    note = {cited By 0},
    journal = {HAI 2018 - Proceedings of the 6th International Conference on Human-Agent Interaction},
    url = {https://eprints.lincoln.ac.uk/id/eprint/38402/}
    }
  • K. Essers, M. Chapman, N. Kokciyan, I. Sassoon, T. Porat, P. Balatsoukas, P. Young, M. Ashworth, V. Curcin, S. Modgil, S. Parsons, and E. Sklar, “The consult system: demonstration.” 2018, p. 385–386. doi:10.1145/3284432.3287170
    [BibTeX] [Download PDF]
    @inproceedings{lincoln38543,
    title = {The CONSULT system: Demonstration},
    author = {K. Essers and M. Chapman and N. Kokciyan and I. Sassoon and T. Porat and P. Balatsoukas and P. Young and M. Ashworth and V. Curcin and S. Modgil and Simon Parsons and Elizabeth Sklar},
    year = {2018},
    pages = {385--386},
    doi = {10.1145/3284432.3287170},
    note = {cited By 0},
    journal = {HAI 2018 - Proceedings of the 6th International Conference on Human-Agent Interaction},
    url = {https://eprints.lincoln.ac.uk/id/eprint/38543/}
    }
  • K. Essers, R. Rogers, J. Sturt, E. Sklar, and E. Black, “Assessing the posture prototype: a late-breaking report on patient views.” 2018, p. 344–346. doi:10.1145/3284432.3287181
    [BibTeX] [Download PDF]
    @inproceedings{lincoln38542,
    title = {Assessing the POSTURE prototype: A late-breaking report on patient views},
    author = {K. Essers and R. Rogers and J. Sturt and Elizabeth Sklar and E. Black},
    year = {2018},
    pages = {344--346},
    doi = {10.1145/3284432.3287181},
    note = {cited By 0},
    journal = {HAI 2018 - Proceedings of the 6th International Conference on Human-Agent Interaction},
    url = {https://eprints.lincoln.ac.uk/id/eprint/38542/}
    }
  • C. Fox, F. Camara, G. Markkula, R. Romano, R. Madigan, and N. Merat, “When should the chicken cross the road?: game theory for autonomous vehicle-human interactions,” in Proc. 4th international conference on vehicle technology and intelligent transport systems (vehits), 2018.
    [BibTeX] [Abstract] [Download PDF]

    Autonomous vehicle control is well understood for localization, mapping and planning in un-reactive environments, but the human factors of complex interactions with other road users are not yet developed. This position paper presents an initial model for negotiation between an autonomous vehicle and another vehicle at an unsigned intersection or (equivalently) with a pedestrian at an unsigned road-crossing (jaywalking), using discrete sequential game theory. The model is intended as a basic framework for more realistic and data-driven future extensions. The model shows that when only vehicle position is used to signal intent, the optimal behaviors for both agents must include a non-zero probability of allowing a collision to occur. This suggests extensions to reduce this probability in future, such as other forms of signaling and control. Unlike most Game Theory applications in Economics, active vehicle control requires real-time selection from multiple equilibria with no history, and we present and argue for a novel solution concept, meta-strategy convergence, suited to this task.

    @inproceedings{lincoln32029,
    booktitle = {Proc. 4th International Conference on Vehicle Technology and Intelligent Transport Systems (VEHITS)},
    title = {When should the chicken cross the road?: Game theory for autonomous vehicle-human interactions},
    author = {Charles Fox and F. Camara and G. Markkula and R. Romano and R. Madigan and N. Merat},
    year = {2018},
    url = {https://eprints.lincoln.ac.uk/id/eprint/32029/},
    abstract = {Autonomous vehicle control is well understood for localization, mapping and planning in un-reactive environments, but the human factors of complex interactions with other road users are not yet developed. This position paper presents an initial model for negotiation between an autonomous vehicle and another vehicle at an unsigned intersection or (equivalently) with a pedestrian at an unsigned road-crossing (jaywalking), using discrete sequential game theory. The model is intended as a basic framework for more realistic and data-driven future extensions. The model shows that when only vehicle position is used to signal intent, the optimal behaviors for both agents must include a non-zero probability of allowing a collision to occur. This suggests extensions to reduce this probability in future, such as other forms of signaling and control. Unlike most Game Theory applications in Economics, active vehicle control requires real-time selection from multiple equilibria with no history, and we present and argue for a novel solution concept, meta-strategy convergence, suited to this task.}
    }
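    An illustrative one-shot "chicken" computation in the spirit of the model above (not the paper's full discrete sequential game): with a time-delay cost T for yielding and a crash cost C when both agents proceed, the symmetric mixed equilibrium makes each agent indifferent between proceeding and yielding, giving a non-zero collision probability, as the abstract notes. T and C below are hypothetical values.

    T = 1.0    # assumed cost of yielding (lost time)
    C = 100.0  # assumed cost of a collision

    p_proceed = T / C            # indifference: -C * p = -T  =>  p = T / C
    p_collision = p_proceed ** 2  # both agents proceed simultaneously
    print(f"P(proceed) = {p_proceed:.3f}, P(collision) = {p_collision:.1e}")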
  • R. P. Herrero, J. P. Fentanes, and M. Hanheide, “Getting to know your robot customers: automated analysis of user identity and demographics for robots in the wild,” Ieee robotics and automation letters, vol. 3, iss. 4, p. 3733–3740, 2018. doi:10.1109/LRA.2018.2856264
    [BibTeX] [Abstract] [Download PDF]

    Long-term studies with autonomous robots "in the wild" (deployed in real-world human-inhabited environments) are among the most laborious and resource-intensive endeavours in human-robot interaction. Even if a robot system itself is robust and well-working, the analysis of the vast amounts of user data one aims to collect and analyze poses a significant challenge. This letter proposes an automated processing pipeline, using state-of-the-art computer vision technology to estimate demographic factors from users' faces and reidentify them to establish usage patterns. It overcomes the problem of explicitly recruiting participants and having them fill questionnaires about their demographic background and allows one to study completely unsolicited and nonprimed interactions over long periods of time. This letter offers a comprehensive assessment of the performance of the automated analysis with data from 68 days of continuous deployment of a robot in a care home and also presents a set of findings obtained through the analysis, underpinning the viability of the approach.

    @article{lincoln33158,
    volume = {3},
    number = {4},
    author = {Roberto Pinillos Herrero and Jaime Pulido Fentanes and Marc Hanheide},
    note = {The final published version of this article can be accessed online at https://ieeexplore.ieee.org/document/8411093/},
    title = {Getting to Know Your Robot Customers: Automated Analysis of User Identity and Demographics for Robots in the Wild},
    publisher = {IEEE},
    year = {2018},
    journal = {IEEE Robotics and Automation Letters},
    doi = {10.1109/LRA.2018.2856264},
    pages = {3733--3740},
    url = {https://eprints.lincoln.ac.uk/id/eprint/33158/},
    abstract = {Long-term studies with autonomous robots "in the wild" (deployed in real-world human-inhabited environments) are among the most laborious and resource-intensive endeavours in human-robot interaction. Even if a robot system itself is robust and well-working, the analysis of the vast amounts of user data one aims to collect and analyze poses a significant challenge. This letter proposes an automated processing pipeline, using state-of-the-art computer vision technology to estimate demographic factors from users' faces and reidentify them to establish usage patterns. It overcomes the problem of explicitly recruiting participants and having them fill questionnaires about their demographic background and allows one to study completely unsolicited and nonprimed interactions over long periods of time. This letter offers a comprehensive assessment of the performance of the automated analysis with data from 68 days of continuous deployment of a robot in a care home and also presents a set of findings obtained through the analysis, underpinning the viability of the approach.}
    }
  • M. Huttenrauch, A. Sosic, and G. Neumann, “Exploiting local communication protocols for learning complex swarm behaviors with deep reinforcement learning,” in International conference for swarm intelligence (ants), 2018.
    [BibTeX] [Abstract] [Download PDF]

    Swarm systems constitute a challenging problem for reinforcement learning (RL) as the algorithm needs to learn decentralized control policies that can cope with limited local sensing and communication abilities of the agents. While it is often difficult to directly define the behavior of the agents, simple communication protocols can be defined more easily using prior knowledge about the given task. In this paper, we propose a number of simple communication protocols that can be exploited by deep reinforcement learning to find decentralized control policies in a multi-robot swarm environment. The protocols are based on histograms that encode the local neighborhood relations of the agents and can also transmit task-specific information, such as the shortest distance and direction to a desired target. In our framework, we use an adaptation of Trust Region Policy Optimization to learn complex collaborative tasks, such as formation building and building a communication link. We evaluate our findings in a simulated 2D-physics environment, and compare the implications of different communication protocols.

    @inproceedings{lincoln32460,
    booktitle = {International Conference for Swarm Intelligence (ANTS)},
    title = {Exploiting Local Communication Protocols for Learning Complex Swarm Behaviors with Deep Reinforcement Learning},
    author = {Max Huttenrauch and Adrian Sosic and Gerhard Neumann},
    publisher = {Springer International Publishing},
    year = {2018},
    url = {https://eprints.lincoln.ac.uk/id/eprint/32460/},
    abstract = {Swarm systems constitute a challenging problem for reinforcement learning (RL) as the algorithm needs to learn decentralized control policies that can cope with limited local sensing and communication abilities of the agents. While it is often difficult to directly define the behavior of the agents, simple communication protocols can be defined more easily using prior knowledge about the given task. In this paper, we propose a number of simple communication protocols that can be exploited by deep reinforcement learning to find decentralized control policies in a multi-robot swarm environment. The protocols are based on histograms that encode the local neighborhood relations of the agents and can also transmit task-specific information, such as the shortest distance and direction to a desired target. In our framework, we use an adaptation of Trust Region Policy Optimization to learn complex collaborative tasks, such as formation building and building a communication link. We evaluate our findings in a simulated 2D-physics environment, and compare the implications of different communication protocols.}
    }
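    A sketch of the histogram-based local observation described above: each agent bins the relative bearings of neighbours within its sensing range into a fixed-length histogram, giving an input whose size is independent of the swarm size. The geometry, range, and bin count are assumptions for illustration, not the paper's configuration.

    import numpy as np

    def neighbour_histogram(own: np.ndarray, others: np.ndarray,
                            sensing_range: float = 2.0, bins: int = 8) -> np.ndarray:
        rel = others - own
        dist = np.linalg.norm(rel, axis=1)
        bearings = np.arctan2(rel[:, 1], rel[:, 0])[dist < sensing_range]
        hist, _ = np.histogram(bearings, bins=bins, range=(-np.pi, np.pi))
        return hist / max(len(bearings), 1)  # normalised neighbourhood relation

    agents = np.random.default_rng(1).uniform(-3, 3, size=(10, 2))
    print(neighbour_histogram(agents[0], agents[1:]))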
  • M. Imai, E. Sklar, T. J. Norman, and T. Komatsu, “Hai 2018 chairs’ welcome.” 2018, p. III.
    [BibTeX] [Download PDF]
    @inproceedings{lincoln38541,
    title = {HAI 2018 Chairs' Welcome},
    author = {M. Imai and Elizabeth Sklar and T.J. Norman and T. Komatsu},
    year = {2018},
    pages = {III},
    note = {cited By 0},
    journal = {HAI 2018 - Proceedings of the 6th International Conference on Human-Agent Interaction},
    url = {https://eprints.lincoln.ac.uk/id/eprint/38541/}
    }
  • N. Kokciyan, I. Sassoon, A. P. Young, S. Modgil, and S. Parsons, “Reasoning with metalevel argumentation frameworks in aspartix,” in Computational models of argument, Ios press, 2018, vol. 305, p. 463–464. doi:10.3233/978-1-61499-906-5-463
    [BibTeX] [Abstract] [Download PDF]

    In this demo paper, we propose an encoding for Metalevel Argumentation Frameworks (MAFs) to be used in Aspartix, an Answer Set Programming (ASP) approach to find the justified arguments of an AF [2]. MAFs provide a uniform encoding of object level Dung Frameworks and extensions thereof that include values, preferences and attacks on attacks (EAFs). The justification status of arguments in the object level AF can then be evaluated and explained through evaluation of the arguments in the MAF. The demo includes multiple examples from the literature to show the applicability of our proposed encoding for translating various object level AFs to the uniform language of MAFs.

    @incollection{lincoln38408,
    volume = {305},
    author = {N. Kokciyan and I. Sassoon and A.P. Young and S. Modgil and S. Parsons},
    series = {Frontiers in Artificial Intelligence and Applications},
    note = {cited By 0},
    booktitle = {Computational Models of Argument},
    title = {Reasoning with metalevel argumentation frameworks in aspartix},
    publisher = {IOS Press},
    year = {2018},
    journal = {Frontiers in Artificial Intelligence and Applications},
    doi = {10.3233/978-1-61499-906-5-463},
    pages = {463--464},
    url = {https://eprints.lincoln.ac.uk/id/eprint/38408/},
    abstract = {In this demo paper, we propose an encoding for Metalevel Argumentation Frameworks (MAFs) to be used in Aspartix, an Answer Set Programming (ASP) approach to find the justified arguments of an AF [2]. MAFs provide a uniform encoding of object level Dung Frameworks and extensions thereof that include values, preferences and attacks on attacks (EAFs). The justification status of arguments in the object level AF can then be evaluated and explained through evaluation of the arguments in the MAF. The demo includes multiple examples from the literature to show the applicability of our proposed encoding for translating various object level AFs to the uniform language of MAFs.}
    }
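    The demo above finds justified arguments of a Dung AF via an ASP encoding; as a tiny illustration of the same semantics in plain Python (not the Aspartix encoding itself), this computes the grounded extension by iterating the characteristic function on a small example framework.

    def grounded_extension(args, attacks):
        """attacks is a set of (attacker, target) pairs."""
        def defended(s):
            # an argument is acceptable w.r.t. s if s attacks all its attackers
            return {a for a in args
                    if all(any((d, b) in attacks for d in s)
                           for (b, t) in attacks if t == a)}
        s = set()
        while True:
            nxt = defended(s)
            if nxt == s:
                return s
            s = nxt

    args = {"a", "b", "c"}
    attacks = {("a", "b"), ("b", "c")}
    print(sorted(grounded_extension(args, attacks)))  # ['a', 'c']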
  • L. Kunze, N. Hawes, T. Duckett, and M. Hanheide, “Introduction to the special issue on ai for long-term autonomy,” Ieee robotics and automation letters, vol. 3, iss. 4, p. 4431–4434, 2018. doi:10.1109/LRA.2018.2870466
    [BibTeX] [Abstract] [Download PDF]

    The papers in this special section focus on the use of artificial intelligence (AI) for long term autonomy. Autonomous systems have a long history in the fields of AI and robotics. However, only through recent advances in technology has it been possible to create autonomous systems capable of operating in long-term, real-world scenarios. Examples include autonomous robots that operate outdoors on land, in air, water, and space; and indoors in offices, care homes, and factories. Designing, developing, and maintaining intelligent autonomous systems that operate in real-world environments over long periods of time, i.e. weeks, months, or years, poses many challenges. This special issue focuses on such challenges and on ways to overcome them using methods from AI. Long-term autonomy can be viewed as both a challenge and an opportunity. The challenge of long-term autonomy requires system designers to ensure that an autonomous system can continue operating successfully according to its real-world application demands in unstructured and semi-structured environments. This means addressing issues related to hardware and software robustness (e.g., gluing in screws and profiling for memory leaks), as well as ensuring that all modules and functions of the system can deal with the variation in the environment and tasks that is expected to occur over its operating time.

    @article{lincoln34133,
    volume = {3},
    number = {4},
    author = {Lars Kunze and Nick Hawes and Tom Duckett and Marc Hanheide},
    title = {Introduction to the Special Issue on AI for Long-Term Autonomy},
    publisher = {IEEE},
    year = {2018},
    journal = {IEEE Robotics and Automation Letters},
    doi = {10.1109/LRA.2018.2870466},
    pages = {4431--4434},
    url = {https://eprints.lincoln.ac.uk/id/eprint/34133/},
    abstract = {The papers in this special section focus on the use of artificial intelligence (AI) for long term autonomy. Autonomous systems have a long history in the fields of AI and robotics. However, only through recent advances in technology has it been possible to create autonomous systems capable of operating in long-term, real-world scenarios. Examples include autonomous robots that operate outdoors on land, in air, water, and space; and indoors in offices, care homes, and factories. Designing, developing, and maintaining intelligent autonomous systems that operate in real-world environments over long periods of time, i.e. weeks, months, or years, poses many challenges. This special issue focuses on such challenges and on ways to overcome them using methods from AI. Long-term autonomy can be viewed as both a challenge and an opportunity. The challenge of long-term autonomy requires system designers to ensure that an autonomous system can continue operating successfully according to its real-world application demands in unstructured and semi-structured environments. This means addressing issues related to hardware and software robustness (e.g., gluing in screws and profiling for memory leaks), as well as ensuring that all modules and functions of the system can deal with the variation in the environment and tasks that is expected to occur over its operating time.}
    }
  • L. Kunze, N. Hawes, T. Duckett, M. Hanheide, and T. Krajnik, “Artificial intelligence for long-term robot autonomy: a survey,” Ieee robotics and automation letters, vol. 3, iss. 4, p. 4023–4030, 2018. doi:10.1109/LRA.2018.2860628
    [BibTeX] [Abstract] [Download PDF]

    Autonomous systems will play an essential role in many applications across diverse domains including space, marine, air, field, road, and service robotics. They will assist us in our daily routines and perform dangerous, dirty and dull tasks. However, enabling robotic systems to perform autonomously in complex, real-world scenarios over extended time periods (i.e. weeks, months, or years) poses many challenges. Some of these have been investigated by sub-disciplines of Artificial Intelligence (AI) including navigation & mapping, perception, knowledge representation & reasoning, planning, interaction, and learning. The different sub-disciplines have developed techniques that, when re-integrated within an autonomous system, can enable robots to operate effectively in complex, long-term scenarios. In this paper, we survey and discuss AI techniques as "enablers" for long-term robot autonomy, current progress in integrating these techniques within long-running robotic systems, and the future challenges and opportunities for AI in long-term autonomy.

    @article{lincoln32829,
    volume = {3},
    number = {4},
    author = {Lars Kunze and Nick Hawes and Tom Duckett and Marc Hanheide and Tomas Krajnik},
    title = {Artificial Intelligence for Long-Term Robot Autonomy: A Survey},
    publisher = {IEEE},
    year = {2018},
    journal = {IEEE Robotics and Automation Letters},
    doi = {10.1109/LRA.2018.2860628},
    pages = {4023--4030},
    url = {https://eprints.lincoln.ac.uk/id/eprint/32829/},
    abstract = {Autonomous systems will play an essential role in many applications across diverse domains including space, marine, air, field, road, and service robotics. They will assist us in our daily routines and perform dangerous, dirty and dull tasks. However, enabling robotic systems to perform autonomously in complex, real-world scenarios over extended time periods (i.e. weeks, months, or years) poses many challenges. Some of these have been investigated by sub-disciplines of Artificial Intelligence (AI) including navigation \& mapping, perception, knowledge representation \& reasoning, planning, interaction, and learning. The different sub-disciplines have developed techniques that, when re-integrated within an autonomous system, can enable robots to operate effectively in complex, long-term scenarios. In this paper, we survey and discuss AI techniques as "enablers" for long-term robot autonomy, current progress in integrating these techniques within long-running robotic systems, and the future challenges and opportunities for AI in long-term autonomy.}
    }
  • W. Lewinger, F. Comin, M. Matthews, and C. Saaj, “Earth analogue testing and analysis of martian duricrust properties,” in 14th symposium on advanced space technologies in robotics and automation, 2018, p. 567–579. doi:10.1016/j.actaastro.2018.05.025
    [BibTeX] [Abstract] [Download PDF]

    Previous and current Mars rover missions have noted a nearly ubiquitous presence of duricrusts on the planet surface. Duricrusts are thin, brittle layers of cemented regolith that cover the underlying terrain. In some cases, the duricrust hides safe or relatively safe terrain underneath the topsoil. However, as was observed by both Mars exploration rovers, Spirit and Opportunity, such crusts can also hide loose, untrafficable terrain, leading to Spirit becoming permanently incapacitated in 2009. Whilst several reports of the Martian surface have indicated the presence of duricrusts, none have been able to provide details on the physical properties of the material, which may indicate the level of safe traversability of duricrust terrains. This paper presents the findings of testing terrestrially-created duricrusts with simulated Martian soil properties, in order to determine the properties of such duricrusts and to discover what level of hazard they may represent (e.g. can vehicles traverse the duricrust surface without penetration to lower sub-surface soils?). Combinations of elements that have been observed in the Martian soil were used as the basis for forming the laboratory-created duricrusts. Variations in duricrust thickness, water content, and the iron oxide compound were investigated. As was observed throughout the testing process, duricrusts behave in a rather brittle fashion and are easily destroyed by low surface pressures. This indicates that duricrusts are not safe for traversing and they present a definite hazard for travelling on the Martian landscape when utilising only visual terrain classification, as the surface appearance is not necessarily representative of what may be lying beneath.

    @inproceedings{lincoln39622,
    volume = {152},
    author = {William Lewinger and Francisco Comin and Marcus Matthews and Chakravarthini Saaj},
    booktitle = {14th Symposium on Advanced Space Technologies in Robotics and Automation},
    title = {Earth analogue testing and analysis of Martian duricrust properties},
    publisher = {Elsevier},
    year = {2018},
    journal = {Acta Astronautica},
    doi = {10.1016/j.actaastro.2018.05.025},
    pages = {567--579},
    url = {https://eprints.lincoln.ac.uk/id/eprint/39622/},
    abstract = {Previous and current Mars rover missions have noted a nearly ubiquitous presence of duricrusts on the planet surface. Duricrusts are thin, brittle layers of cemented regolith that cover the underlying terrain. In some cases, the duricrust hides safe or relatively safe terrain underneath the topsoil. However, as was observed by both Mars exploration rovers, Spirit and Opportunity, such crusts can also hide loose, untrafficable terrain, leading to Spirit becoming permanently incapacitated in 2009. Whilst several reports of the Martian surface have indicated the presence of duricrusts, none have been able to provide details on the physical properties of the material, which may indicate the level of safe traversability of duricrust terrains. This paper presents the findings of testing terrestrially-created duricrusts with simulated Martian soil properties, in order to determine the properties of such duricrusts and to discover what level of hazard they may represent (e.g. can vehicles traverse the duricrust surface without penetration to lower sub-surface soils?). Combinations of elements that have been observed in the Martian soil were used as the basis for forming the laboratory-created duricrusts. Variations in duricrust thickness, water content, and the iron oxide compound were investigated. As was observed throughout the testing process, duricrusts behave in a rather brittle fashion and are easily destroyed by low surface pressures. This indicates that duricrusts are not safe for traversing and they present a definite hazard for travelling on the Martian landscape when utilising only visual terrain classification, as the surface appearance is not necessarily representative of what may be lying beneath.}
    }
  • Z. Li, A. Cohen, and S. Parsons, “Two forms of minimality in aspic+,” Lecture notes in computer science (including subseries lecture notes in artificial intelligence and lecture notes in bioinformatics), vol. 10767, p. 203–218, 2018. doi:10.1007/978-3-030-01713-2_15
    [BibTeX] [Download PDF]
    @article{lincoln38404,
    volume = {10767},
    author = {Z. Li and A. Cohen and Simon Parsons},
    note = {cited By 0},
    title = {Two forms of minimality in ASPIC+},
    journal = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
    doi = {10.1007/978-3-030-01713-2_15},
    pages = {203--218},
    year = {2018},
    url = {https://eprints.lincoln.ac.uk/id/eprint/38404/}
    }
  • Z. Li, N. Oren, and S. Parsons, “On the links between argumentation-based reasoning and nonmonotonic reasoning,” Lecture notes in computer science (including subseries lecture notes in artificial intelligence and lecture notes in bioinformatics), vol. 10757, p. 67–85, 2018. doi:10.1007/978-3-319-75553-3_5
    [BibTeX] [Abstract] [Download PDF]

    In this paper we investigate the links between instantiated argumentation systems and the axioms for non-monotonic reasoning described in [15] with the aim of characterising the nature of argument-based reasoning. In doing so, we consider two possible interpretations of the consequence relation, and describe which axioms are met by ASPIC+ under each of these interpretations. We then consider the links between these axioms and the rationality postulates. Our results indicate that argument-based reasoning as characterised by ASPIC+ is, according to the axioms of [15], non-cumulative and non-monotonic, and therefore weaker than the weakest non-monotonic reasoning systems considered in [15]. This weakness underpins ASPIC+'s success in modelling other reasoning systems. We conclude by considering the relationship between ASPIC+ and other weak logical systems.

    @article{lincoln38405,
    volume = {10757},
    author = {Z. Li and N. Oren and S. Parsons},
    note = {cited By 0},
    title = {On the links between argumentation-based reasoning and nonmonotonic reasoning},
    publisher = {Springer},
    year = {2018},
    journal = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
    doi = {10.1007/978-3-319-75553-3\_5},
    pages = {67--85},
    url = {https://eprints.lincoln.ac.uk/id/eprint/38405/},
    abstract = {In this paper we investigate the links between instantiated argumentation systems and the axioms for non-monotonic reasoning described in [15] with the aim of characterising the nature of argument-based reasoning. In doing so, we consider two possible interpretations of the consequence relation, and describe which axioms are met by ASPIC+ under each of these interpretations. We then consider the links between these axioms and the rationality postulates. Our results indicate that argument-based reasoning as characterised by ASPIC+ is, according to the axioms of [15], non-cumulative and non-monotonic, and therefore weaker than the weakest non-monotonic reasoning systems considered in [15]. This weakness underpins ASPIC+'s success in modelling other reasoning systems. We conclude by considering the relationship between ASPIC+ and other weak logical systems.}
    }
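
    Editorial note: the "cumulativity" at stake in the abstract above refers to the axioms of the paper's reference [15], presumably the Kraus-Lehmann-Magidor (KLM) style of non-monotonic consequence; the LaTeX snippet below is a reading aid under that assumption, not material from the paper. Writing |~ for defeasible consequence, the two rules whose failure makes a consequence relation non-cumulative are:

    \[
    \text{(Cut)}\;\; \frac{A \mathrel{|\sim} B \qquad A \wedge B \mathrel{|\sim} C}{A \mathrel{|\sim} C}
    \qquad
    \text{(Cautious Monotonicity)}\;\; \frac{A \mathrel{|\sim} B \qquad A \mathrel{|\sim} C}{A \wedge B \mathrel{|\sim} C}
    \]

    A relation satisfying both (together with reflexivity) is cumulative, so the result summarised above says that ASPIC+, under the interpretations considered, fails at least one of these rules.
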
  • K. Liakos, P. Busato, D. Moshou, S. Pearson, and D. Bochtis, “Machine learning in agriculture: a review,” Sensors, vol. 18, iss. 8, p. 2674, 2018. doi:10.3390/s18082674
    [BibTeX] [Abstract] [Download PDF]

    Machine learning has emerged with big data technologies and high-performance computing to create new opportunities for data intensive science in the multi-disciplinary agri-technologies domain. In this paper, we present a comprehensive review of research dedicated to applications of machine learning in agricultural production systems. The works analyzed were categorized in (a) crop management, including applications on yield prediction, disease detection, weed detection, crop quality, and species recognition; (b) livestock management, including applications on animal welfare and livestock production; (c) water management; and (d) soil management. The filtering and classification of the presented articles demonstrate how agriculture will benefit from machine learning technologies. By applying machine learning to sensor data, farm management systems are evolving into real time artificial intelligence enabled programs that provide rich recommendations and insights for farmer decision support and action.

    @article{lincoln33015,
    volume = {18},
    number = {8},
    author = {Konstantinos Liakos and Patrizia Busato and Dimitrios Moshou and Simon Pearson and Dionysis Bochtis},
    note = {This is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. (CC BY 4.0).},
    title = {Machine Learning in Agriculture: A Review},
    publisher = {MDPI},
    year = {2018},
    journal = {Sensors},
    doi = {10.3390/s18082674},
    pages = {2674},
    url = {https://eprints.lincoln.ac.uk/id/eprint/33015/},
    abstract = {Machine learning has emerged with big data technologies and high-performance computing to create new opportunities for data intensive science in the multi-disciplinary agri-technologies domain. In this paper, we present a comprehensive review of research dedicated to applications of machine learning in agricultural production systems. The works analyzed were categorized in (a) crop management, including applications on yield prediction, disease detection, weed detection, crop quality, and species recognition; (b) livestock management, including applications on animal welfare and livestock production; (c) water management; and (d) soil management. The filtering and classification of the presented articles demonstrate how agriculture will benefit from machine learning technologies. By applying machine learning to sensor data, farm management systems are evolving into real time artificial intelligence enabled programs that provide rich recommendations and insights for farmer decision support and action.}
    }
  • P. Liu, G. Neumann, Q. Fu, S. Pearson, and H. Yu, “Energy-efficient design and control of a vibro-driven robot,” in 2018 ieee/rsj international conference on intelligent robots and systems (iros), 2018.
    [BibTeX] [Abstract] [Download PDF]

    Vibro-driven robotic (VDR) systems use stick-slip motions for locomotion. Due to the underactuated nature of the system, efficient design and control are still open problems. We present a new energy preserving design based on a spring-augmented pendulum. We indirectly control the friction-induced stick-slip motions by exploiting the passive dynamics in order to achieve an improvement in overall travelling distance and energy efficacy. Both collocated and non-collocated constraint conditions are elaborately analysed and considered to obtain a desired trajectory generation profile. For tracking control, we develop a partial feedback controller for the pendulum which counteracts the dynamic contributions from the platform. Comparative simulation studies show the effectiveness and intriguing performance of the proposed approach, while its feasibility is experimentally verified through a physical robot. Our robot is, to the best of our knowledge, the first nonlinear-motion prototype in the literature on VDR systems.

    @inproceedings{lincoln32540,
    booktitle = {2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
    title = {Energy-efficient design and control of a vibro-driven robot},
    author = {Pengcheng Liu and Gerhard Neumann and Qinbing Fu and Simon Pearson and Hongnian Yu},
    publisher = {IEEE},
    year = {2018},
    url = {https://eprints.lincoln.ac.uk/id/eprint/32540/},
    abstract = {Vibro-driven robotic (VDR) systems use stick-slip motions for locomotion. Due to the underactuated nature of the system, efficient design and control are still open problems. We present a new energy preserving design based on a spring-augmented pendulum. We indirectly control the friction-induced stick-slip motions by exploiting the passive dynamics in order to achieve an improvement in overall travelling distance and energy efficacy. Both collocated and non-collocated constraint conditions are elaborately analysed and considered to obtain a desired trajectory generation profile. For tracking control, we develop a partial feedback controller for the pendulum which counteracts the dynamic contributions from the platform. Comparative simulation studies show the effectiveness and intriguing performance of the proposed approach, while its feasibility is experimentally verified through a physical robot. Our robot is, to the best of our knowledge, the first nonlinear-motion prototype in the literature on VDR systems.}
    }
  • S. M. Mellado, G. Cielniak, T. Krajník, and T. Duckett, “Modelling and predicting rhythmic flow patterns in dynamic environments,” in Taros, 2018, p. 135–146.
    [BibTeX] [Abstract] [Download PDF]

    We present a time-dependent probabilistic map able to model and predict flow patterns of people in indoor environments. The proposed representation models the likelihood of motion direction on a grid-based map by a set of harmonic functions, which efficiently capture long-term (minutes to weeks) variations of crowd movements over time. The evaluation, performed on data from two real environments, shows that the proposed model enables prediction of human movement patterns in the future. Potential applications include human-aware motion planning, improving the efficiency and safety of robot navigation.

    @inproceedings{lincoln33448,
    booktitle = {TAROS},
    title = {Modelling and Predicting Rhythmic Flow Patterns in Dynamic Environments},
    author = {Sergi Molina Mellado and Grzegorz Cielniak and Tom{\'a}{\v s} Krajn{\'i}k and Tom Duckett},
    year = {2018},
    pages = {135--146},
    url = {https://eprints.lincoln.ac.uk/id/eprint/33448/},
    abstract = {We present a time-dependent probabilistic map able to model and predict flow patterns of people in indoor environments. The proposed representation models the likelihood of motion direction on a grid-based map by a set of harmonic functions, which efficiently capture long-term (minutes to weeks) variations of crowd movements over time. The evaluation, performed on data from two real environments, shows that the proposed model enables prediction of human movement patterns in the future. Potential applications include human-aware motion planning, improving the efficiency and safety of robot navigation.}
    }
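
    Editorial note: as an illustration of the harmonic-function idea in the abstract above (and not the authors' implementation), the Python sketch below fits a truncated Fourier basis to binary motion observations by least squares and clamps predictions to [0, 1]; the function names and the choice of daily and weekly periods are assumptions made for the example.

    import numpy as np

    def design(t, periods):
        # Constant term plus a cosine/sine pair per candidate period.
        cols = [np.ones_like(t)]
        for T in periods:
            w = 2.0 * np.pi / T
            cols += [np.cos(w * t), np.sin(w * t)]
        return np.stack(cols, axis=1)

    def fit(t, y, periods):
        # Least-squares estimate of the harmonic coefficients.
        coef, *_ = np.linalg.lstsq(design(t, periods), y, rcond=None)
        return coef

    def predict(t, coef, periods):
        # Clamp so the output can be read as a probability of motion.
        return np.clip(design(t, periods) @ coef, 0.0, 1.0)

    # Toy data: y[i] = 1 when motion in one direction was observed in one
    # grid cell at time t[i] (seconds); here, a synthetic daily rhythm.
    day, week = 24 * 3600.0, 7 * 24 * 3600.0
    t = np.arange(0.0, 2 * week, 600.0)           # two weeks of 10-min bins
    y = ((t % day) / 3600.0 < 9.0).astype(float)  # "busy before 9 am"
    coef = fit(t, y, periods=[day, week])
    p_tomorrow = predict(t + day, coef, periods=[day, week])

    A per-cell, per-direction bank of such models is one plausible way to realise the grid-based map the paper describes.
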
  • A. R. Panisson, S. Parsons, P. McBurney, and R. H. Bordini, “Choosing appropriate arguments from trustworthy sources,” Frontiers in artificial intelligence and applications, vol. 305, p. 345–352, 2018. doi:10.3233/978-1-61499-906-5-345
    [BibTeX] [Download PDF]
    @article{lincoln38406,
    volume = {305},
    author = {A.R. Panisson and Simon Parsons and P. McBurney and R.H. Bordini},
    note = {cited By 0},
    title = {Choosing appropriate arguments from trustworthy sources},
    journal = {Frontiers in Artificial Intelligence and Applications},
    doi = {10.3233/978-1-61499-906-5-345},
    pages = {345--352},
    year = {2018},
    url = {https://eprints.lincoln.ac.uk/id/eprint/38406/}
    }
  • A. R. Panisson, S. Sarkadi, P. McBurney, S. Parsons, and R. H. Bordini, “Lies, bullshit, and deception in agent-oriented programming languages.” 2018, p. 50–61.
    [BibTeX] [Download PDF]
    @inproceedings{lincoln38409,
    volume = {2154},
    title = {Lies, bullshit, and deception in agent-oriented programming languages},
    author = {A.R. Panisson and S. Sarkadi and P. McBurney and Simon Parsons and R.H. Bordini},
    year = {2018},
    pages = {50--61},
    note = {cited By 2},
    journal = {CEUR Workshop Proceedings},
    url = {https://eprints.lincoln.ac.uk/id/eprint/38409/}
    }
  • J. Raphael and E. Sklar, “Towards dynamic coalition formation for intelligent traffic management,” Lecture notes in computer science (including subseries lecture notes in artificial intelligence and lecture notes in bioinformatics), vol. 10767, p. 400–414, 2018. doi:10.1007/978-3-030-01713-2
    [BibTeX] [Download PDF]
    @article{lincoln38545,
    volume = {10767},
    author = {J. Raphael and Elizabeth Sklar},
    note = {cited By 0},
    title = {Towards dynamic coalition formation for intelligent traffic management},
    journal = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
    doi = {10.1007/978-3-030-01713-2},
    pages = {400--414},
    year = {2018},
    url = {https://eprints.lincoln.ac.uk/id/eprint/38545/}
    }
  • I. Saleh, A. Postnikov, C. Bingham, R. Bickerton, A. Zolotas, and S. Pearson, “Aggregated power profile of a large network of refrigeration compressors following ffr dsr events,” in International conference on energy engineering and smart grids, 2018.
    [BibTeX] [Abstract] [Download PDF]

    Refrigeration systems and HVAC are estimated to consume approximately 14% of the UK's electricity and could make a significant contribution towards the application of DSR. In this paper, active power profiles of single and multi-pack refrigeration systems responding to DSR events are experimentally investigated. Further, a large population of 300 packs (approx. 1.5 MW capacity) is simulated to investigate the potential of delivering DSR using a network of refrigeration compressors, in common with commercial retail refrigeration systems. Two scenarios of responding to DSR are adopted for the studies viz. with and without applying a suction pressure offset after an initial 30 second shut-down of the compressors. The experiments are conducted at the Refrigeration Research Centre at the University of Lincoln. Simulations of the active power profile for the compressors following triggered DSR events are realized based on a previously reported model of the thermodynamic properties of the refrigeration system. A Simulink model of a three phase power supply system is used to determine the impact of compressor operation on the power system performance, and in particular, on the line voltage of the local power supply system. The authors demonstrate how the active power and the drawn current of the multi-pack refrigeration system are affected following a rapid shut down and subsequent return to operation. Specifically, it is shown that there is a significant increase in power consumption post DSR, approximately two times higher than during normal operation, particularly when many packs of compressors are synchronized post DSR event, which can have a significant effect on the line voltage of the power supply.

    @inproceedings{lincoln32931,
    booktitle = {International Conference on Energy Engineering and Smart Grids},
    title = {Aggregated power profile of a large network of refrigeration compressors following FFR DSR events},
    author = {Ibrahim Saleh and Andrey Postnikov and Chris Bingham and Ronald Bickerton and Argyrios Zolotas and Simon Pearson},
    publisher = {ESG2018},
    year = {2018},
    url = {https://eprints.lincoln.ac.uk/id/eprint/32931/},
    abstract = {Refrigeration systems and HVAC are estimated to consume approximately 14\% of the UK's electricity and could make a significant contribution towards the application of DSR. In this paper, active power profiles of single and multi-pack refrigeration systems responding to DSR events are experimentally investigated. Further, a large population of 300 packs (approx. 1.5 MW capacity) is simulated to investigate the potential of delivering DSR using a network of refrigeration compressors, in common with commercial retail refrigeration systems. Two scenarios of responding to DSR are adopted for the studies viz. with and without applying a suction pressure offset after an initial 30 second shut-down of the compressors. The experiments are conducted at the Refrigeration Research Centre at the University of Lincoln. Simulations of the active power profile for the compressors following triggered DSR events are realized based on a previously reported model of the thermodynamic properties of the refrigeration system. A Simulink model of a three phase power supply system is used to determine the impact of compressor operation on the power system performance, and in particular, on the line voltage of the local power supply system. The authors demonstrate how the active power and the drawn current of the multi-pack refrigeration system are affected following a rapid shut down and subsequent return to operation. Specifically, it is shown that there is a significant increase in power consumption post DSR, approximately two times higher than during normal operation, particularly when many packs of compressors are synchronized post DSR event, which can have a significant effect on the line voltage of the power supply.}
    }
  • E. Sklar and M. Q. Azhar, “Explanation through argumentation.” 2018, p. 277–285. doi:10.1145/3284432.3284473
    [BibTeX] [Download PDF]
    @inproceedings{lincoln38540,
    title = {Explanation through argumentation},
    author = {Elizabeth Sklar and M.Q. Azhar},
    year = {2018},
    pages = {277--285},
    doi = {10.1145/3284432.3284473},
    note = {cited By 0},
    journal = {HAI 2018 - Proceedings of the 6th International Conference on Human-Agent Interaction},
    url = {https://eprints.lincoln.ac.uk/id/eprint/38540/}
    }
  • A. P. Young, N. Kokciyan, I. Sassoon, S. Modgil, and S. Parsons, “Instantiating metalevel argumentation frameworks,” in Computational models of argument, Ios press, 2018, vol. 305, p. 97–108. doi:10.3233/978-1-61499-906-5-97
    [BibTeX] [Abstract] [Download PDF]

    We directly instantiate metalevel argumentation frameworks (MAFs) to enable argumentation-based reasoning about information relevant to various applications. The advantage of this is that information that typically cannot be incorporated via the instantiation of object-level argumentation frameworks can now be incorporated, in particular information referencing (1) preferences over arguments, (2) the rationale for attacks, and (3) the dialectical effect of critical questions that shifts the burden of proof when posed. We achieve this by using a variant of ASPIC+ and a higher-order typed language that can reference object-level formulae and arguments. We illustrate these representational advantages with a running example from clinical decision support.

    @incollection{lincoln38407,
    volume = {305},
    author = {A.P. Young and N. Kokciyan and I. Sassoon and S. Modgil and S. Parsons},
    series = {Frontiers in Artificial Intelligence and Applications},
    note = {cited By 1},
    booktitle = {Computational Models of Argument},
    title = {Instantiating metalevel argumentation frameworks},
    publisher = {IOS Press},
    year = {2018},
    journal = {Frontiers in Artificial Intelligence and Applications},
    doi = {10.3233/978-1-61499-906-5-97},
    pages = {97--108},
    url = {https://eprints.lincoln.ac.uk/id/eprint/38407/},
    abstract = {We directly instantiate metalevel argumentation frameworks (MAFs) to enable argumentation-based reasoning about information relevant to various applications. The advantage of this is that information that typically cannot be incorporated via the instantiation of object-level argumentation frameworks can now be incorporated, in particular information referencing (1) preferences over arguments, (2) the rationale for attacks, and (3) the dialectical effect of critical questions that shifts the burden of proof when posed. We achieve this by using a variant of ASPIC+ and a higher-order typed language that can reference object-level formulae and arguments. We illustrate these representational advantages with a running example from clinical decision support.}
    }

2017

  • F. J. Comin and C. M. Saaj, “Models for slip estimation and soft terrain characterization with multilegged wheel-legs,” Ieee transactions on robotics, vol. 33, iss. 6, p. 1438–1452, 2017. doi:10.1109/TRO.2017.2723904
    [BibTeX] [Abstract] [Download PDF]

    Successful operation of off-road mobile robots faces the challenge of mobility hazards posed by soft, deformable terrain, e.g., sand traps. The slip caused by these hazards has a significant impact on tractive efficiency, leading to complete immobilization in extreme circumstances. This paper addresses the interaction between dry frictional soil and the multilegged wheel-leg concept, with the aim of exploiting its enhanced mobility for safe, in situ terrain sensing. The influence of multiple legs and different foot designs on wheel-leg-soil interaction is analyzed by incorporating these aspects into an existing terradynamics model. In addition, new theoretical models are proposed and experimentally validated to relate wheel-leg slip to both motor torque and stick-slip vibrations. These models, which are capable of estimating wheel-leg slip from purely proprioceptive sensors, are then applied in combination with detected wheel-leg sinkage to successfully characterize the load bearing and shear strength properties of different types of deformable soil. The main contribution of this paper enables nongeometric hazard detection based on detected wheel-leg slip and sinkage.

    @article{lincoln37397,
    volume = {33},
    number = {6},
    month = {December},
    author = {F.J. Comin and C. M. Saaj},
    note = {cited By 0},
    title = {Models for slip estimation and soft terrain characterization with multilegged wheel-legs},
    publisher = {IEEE},
    year = {2017},
    journal = {IEEE Transactions on Robotics},
    doi = {10.1109/TRO.2017.2723904},
    pages = {1438--1452},
    url = {https://eprints.lincoln.ac.uk/id/eprint/37397/},
    abstract = {Successful operation of off-road mobile robots faces the challenge of mobility hazards posed by soft, deformable terrain, e.g., sand traps. The slip caused by these hazards has a significant impact on tractive efficiency, leading to complete immobilization in extreme circumstances. This paper addresses the interaction between dry frictional soil and the multilegged wheel-leg concept, with the aim of exploiting its enhanced mobility for safe, in situ terrain sensing. The influence of multiple legs and different foot designs on wheel-leg-soil interaction is analyzed by incorporating these aspects into an existing terradynamics model. In addition, new theoretical models are proposed and experimentally validated to relate wheel-leg slip to both motor torque and stick-slip vibrations. These models, which are capable of estimating wheel-leg slip from purely proprioceptive sensors, are then applied in combination with detected wheel-leg sinkage to successfully characterize the load bearing and shear strength properties of different types of deformable soil. The main contribution of this paper enables nongeometric hazard detection based on detected wheel-leg slip and sinkage.}
    }
  • K. Goher, A. Almeshal, S. Agouri, A. Nasir, O. Tokhi, M. Alenizi, T. Alzanki, and S. Fadlallah, “Hybrid spiral-dynamic bacteria-chemotaxis algorithm with application to control two-wheeled machines,” Robotics and biomimetics, vol. 4, iss. 1, p. 3, 2017. doi:10.1186/s40638-017-0059-1
    [BibTeX] [Abstract] [Download PDF]

    This paper presents the implementation of the hybrid spiral-dynamic bacteria-chemotaxis (HSDBC) approach to control two different configurations of a two-wheeled vehicle. The HSDBC is a combination of the bacterial chemotaxis used in the bacterial foraging algorithm (BFA) and the spiral-dynamic algorithm (SDA). BFA provides a good exploration strategy due to the chemotaxis approach. However, it endures an oscillation problem near the end of the search process when using a large step size. Conversely, for a small step size, it affords better exploitation and accuracy with slower convergence. SDA provides better stability when approaching an optimum point and has faster convergence speed. This may cause the search agents to get trapped in local optima, which results in a less accurate solution. HSDBC exploits the chemotactic strategy of BFA and the fitness accuracy and convergence speed of SDA so as to overcome the problems associated with both the SDA and BFA algorithms alone. The HSDBC thus developed is evaluated in optimizing the performance and energy consumption of two highly nonlinear platforms, namely single and double inverted pendulum-like vehicles with an extended rod. Comparative results with BFA and SDA show that the proposed algorithm is able to result in better performance of the highly nonlinear systems.

    @article{lincoln33057,
    volume = {4},
    number = {1},
    month = {December},
    author = {Khaled Goher and Abdullah Almeshal and Saad Agouri and Ahmed Nasir and Osman Tokhi and Mohamed Alenizi and Talal Alzanki and Sulaiman Fadlallah},
    note = {This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.},
    title = {Hybrid spiral-dynamic bacteria-chemotaxis algorithm with application to control two-wheeled machines},
    publisher = {Springer},
    year = {2017},
    journal = {Robotics and Biomimetics},
    doi = {10.1186/s40638-017-0059-1},
    pages = {3},
    url = {https://eprints.lincoln.ac.uk/id/eprint/33057/},
    abstract = {This paper presents the implementation of the hybrid spiral-dynamic bacteria-chemotaxis (HSDBC) approach to control two different configurations of a two-wheeled vehicle. The HSDBC is a combination of the bacterial chemotaxis used in the bacterial foraging algorithm (BFA) and the spiral-dynamic algorithm (SDA). BFA provides a good exploration strategy due to the chemotaxis approach. However, it endures an oscillation problem near the end of the search process when using a large step size. Conversely, for a small step size, it affords better exploitation and accuracy with slower convergence. SDA provides better stability when approaching an optimum point and has faster convergence speed. This may cause the search agents to get trapped in local optima, which results in a less accurate solution. HSDBC exploits the chemotactic strategy of BFA and the fitness accuracy and convergence speed of SDA so as to overcome the problems associated with both the SDA and BFA algorithms alone. The HSDBC thus developed is evaluated in optimizing the performance and energy consumption of two highly nonlinear platforms, namely single and double inverted pendulum-like vehicles with an extended rod. Comparative results with BFA and SDA show that the proposed algorithm is able to result in better performance of the highly nonlinear systems.}
    }
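
    Editorial note: the Python sketch below illustrates only the spiral-dynamic half of the hybrid described above; the bacterial-chemotaxis step is omitted, and the function names, contraction radius r and rotation angle theta are illustrative assumptions rather than the paper's settings.

    import numpy as np

    def spiral_step(x, centre, r=0.95, theta=np.pi / 4):
        # Rotate a search point about the current best solution by theta
        # and contract towards it by a factor r < 1 (2-D case).
        R = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
        return centre + r * (R @ (x - centre))

    def sda_minimise(f, points, iters=200):
        # Basic spiral-dynamic search: all agents spiral in on the best one.
        for _ in range(iters):
            best = min(points, key=f)
            points = [spiral_step(p, best) for p in points]
        return min(points, key=f)

    # Toy usage: minimise a quadratic bowl from 20 random 2-D start points.
    rng = np.random.default_rng(0)
    pts = list(rng.uniform(-5.0, 5.0, size=(20, 2)))
    print(sda_minimise(lambda p: float(p @ p), pts))

    The hybrid in the paper interleaves BFA-style chemotactic moves with this contraction, trading exploration against the premature convergence noted in the abstract.
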
  • D. Wang, X. Hou, J. Xu, S. Yue, and C. Liu, “Traffic sign detection using a cascade method with fast feature extraction and saliency test,” Ieee transactions on intelligent transportation systems, vol. 18, iss. 12, p. 3290–3302, 2017. doi:10.1109/tits.2017.2682181
    [BibTeX] [Abstract] [Download PDF]

    Automatic traffic sign detection is challenging due to the complexity of scene images, and fast detection is required in real applications such as driver assistance systems. In this paper, we propose a fast traffic sign detection method based on a cascade method with saliency test and neighboring scale awareness. In the cascade method, feature maps of several channels are extracted efficiently using approximation techniques. Sliding windows are pruned hierarchically using coarse-to-fine classifiers and the correlation between neighboring scales. The cascade system has only one free parameter, while the multiple thresholds are selected by a data-driven approach. To further increase speed, we also use a novel saliency test based on mid-level features to pre-prune background windows. Experiments on two public traffic sign data sets show that the proposed method achieves competing performance and runs 27 times as fast as most of the state-of-the-art methods.

    @article{lincoln27022,
    volume = {18},
    number = {12},
    month = {December},
    author = {Dongdong Wang and Xinwen Hou and Jiawei Xu and Shigang Yue and Cheng-Lin Liu},
    title = {Traffic sign detection using a cascade method with fast feature extraction and saliency test},
    publisher = {IEEE},
    year = {2017},
    journal = {IEEE Transactions on Intelligent Transportation Systems},
    doi = {10.1109/tits.2017.2682181},
    pages = {3290--3302},
    url = {https://eprints.lincoln.ac.uk/id/eprint/27022/},
    abstract = {Automatic traffic sign detection is challenging due to the complexity of scene images, and fast detection is required in real applications such as driver assistance systems. In this paper, we propose a fast traffic sign detection method based on a cascade method with saliency test and neighboring scale awareness. In the cascade method, feature maps of several channels are extracted efficiently using approximation techniques. Sliding windows are pruned hierarchically using coarse-to-fine classifiers and the correlation between neighboring scales. The cascade system has only one free parameter, while the multiple thresholds are selected by a data-driven approach. To further increase speed, we also use a novel saliency test based on mid-level features to pre-prune background windows. Experiments on two public traffic sign data sets show that the proposed method achieves competing performance and runs 27 times as fast as most of the state-of-the-art methods.}
    }
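
    Editorial note: the Python sketch below shows only the control flow of a saliency-gated, coarse-to-fine cascade of the kind the abstract above describes; the channel-feature extraction and neighbouring-scale pruning are omitted, and all names and thresholds are hypothetical.

    from typing import Callable, Iterable, List, Tuple

    Window = Tuple[int, int, int, int]  # (x, y, width, height)

    def cascade_detect(
        windows: Iterable[Window],
        saliency: Callable[[Window], float],
        stages: List[Tuple[Callable[[Window], float], float]],
        s_thresh: float,
    ) -> List[Window]:
        # Cheap saliency test first: most background windows are pruned here.
        survivors = [w for w in windows if saliency(w) >= s_thresh]
        # Coarse-to-fine stages: each classifier is costlier than the last,
        # but runs on ever fewer surviving windows.
        for classifier, thresh in stages:
            survivors = [w for w in survivors if classifier(w) >= thresh]
        return survivors
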
  • M. T. Lazaro, G. Grisetti, L. Iocchi, J. P. Fentanes, and M. Hanheide, “A lightweight navigation system for mobile robots,” in Iberian robotics conference, 2017, p. 295–306. doi:10.1007/978-3-319-70836-2_25
    [BibTeX] [Abstract] [Download PDF]

    © Springer International Publishing AG 2018. In this paper, we describe a navigation system requiring very few computational resources, but still providing performance comparable with commonly used tools in the ROS universe. This lightweight navigation system is thus suitable for robots with low computational resources and provides interfaces for both ROS and NAOqi middlewares. We have successfully evaluated the software on different robots and in different situations, including the SoftBank Pepper robot for RoboCup@Home SSPL competitions and small home-made robots for RoboCup@Home Education workshops. The developed software is well documented and easy to understand. It is released open-source and as a Debian package to facilitate ease of use, in particular for the young researchers participating in robotic competitions and for educational activities.

    @inproceedings{lincoln37349,
    volume = {694},
    month = {December},
    author = {Maria Teresa Lazaro and G. Grisetti and Luca Iocchi and Jaime Pulido Fentanes and Marc Hanheide},
    booktitle = {Iberian Robotics conference},
    title = {A Lightweight Navigation System for Mobile Robots},
    doi = {10.1007/978-3-319-70836-2\_25},
    pages = {295--306},
    year = {2017},
    url = {https://eprints.lincoln.ac.uk/id/eprint/37349/},
    abstract = {{\copyright} Springer International Publishing AG 2018. In this paper, we describe a navigation system requiring very few computational resources, but still providing performance comparable with commonly used tools in the ROS universe. This lightweight navigation system is thus suitable for robots with low computational resources and provides interfaces for both ROS and NAOqi middlewares. We have successfully evaluated the software on different robots and in different situations, including the SoftBank Pepper robot for RoboCup@Home SSPL competitions and small home-made robots for RoboCup@Home Education workshops. The developed software is well documented and easy to understand. It is released open-source and as a Debian package to facilitate ease of use, in particular for the young researchers participating in robotic competitions and for educational activities.}
    }
  • M. Heshmat, M. Fernandez-Carmona, Z. Yan, and N. Bellotto, “Active human detection with a mobile robot,” in Uk-ras conference on robotics and autonomous systems, 2017.
    [BibTeX] [Abstract] [Download PDF]

    The problem of active human detection with a mobile robot equipped with an RGB-D camera is considered in this work. Traditional human detection algorithms for indoor mobile robots face several challenges, including occlusions due to cluttered dynamic environments, changing backgrounds, and a large variety of human movements. Active human detection aims to improve classic detection systems by actively selecting new and potentially better observation points of the person. In this preliminary work, we present a system that actively guides a mobile robot towards high-confidence human detections, including initial simulation tests that highlight pros and cons of the proposed approach.

    @inproceedings{lincoln29946,
    booktitle = {UK-RAS Conference on Robotics and Autonomous Systems},
    month = {December},
    title = {Active human detection with a mobile robot},
    author = {Mohamed Heshmat and Manuel Fernandez-Carmona and Zhi Yan and Nicola Bellotto},
    year = {2017},
    url = {https://eprints.lincoln.ac.uk/id/eprint/29946/},
    abstract = {The problem of active human detection with a mobile robot equipped with an RGB-D camera is considered in this work. Traditional human detection algorithms for indoor mobile robots face several challenges, including occlusions due to cluttered dynamic environments, changing backgrounds, and a large variety of human movements. Active human detection aims to improve classic detection systems by actively selecting new and potentially better observation points of the person. In this preliminary work, we present a system that actively guides a mobile robot towards high-confidence human detections, including initial simulation tests that highlight pros and cons of the proposed approach.}
    }
  • S. M. Mellado, G. Cielniak, T. Krajnik, and T. Duckett, “Modelling and predicting rhythmic flow patterns in dynamic environments,” in Uk-ras network conference, 2017.
    [BibTeX] [Abstract] [Download PDF]

    In this paper, we introduce a time-dependent probabilistic map able to model and predict future flow patterns of people in indoor environments. The proposed representation models the likelihood of motion direction by a set of harmonic functions, which efficiently capture long-term (hours to months) variations of crowd movements over time. From a robotics perspective, this model could therefore be used to feed the predicted human behaviour into the control loop and so influence the actions of the robot. Our approach is evaluated with data collected from a real environment and initial qualitative results are presented.

    @inproceedings{lincoln31053,
    booktitle = {UK-RAS Network Conference},
    month = {December},
    title = {Modelling and predicting rhythmic flow patterns in dynamic environments},
    author = {Sergi Molina Mellado and Grzegorz Cielniak and Tomas Krajnik and Tom Duckett},
    year = {2017},
    url = {https://eprints.lincoln.ac.uk/id/eprint/31053/},
    abstract = {In this paper, we introduce a time-dependent probabilistic map able to model and predict future flow patterns of people in indoor environments. The proposed representation models the likelihood of motion direction by a set of harmonic functions, which efficiently capture long-term (hours to months) variations of crowd movements over time. From a robotics perspective, this model could therefore be used to feed the predicted human behaviour into the control loop and so influence the actions of the robot. Our approach is evaluated with data collected from a real environment and initial qualitative results are presented.}
    }
  • J. P. Fentanes, C. Dondrup, and M. Hanheide, “Navigation testing for continuous integration in robotics,” in Uk-ras conference on robotics and autonomous systems, 2017.
    [BibTeX] [Abstract] [Download PDF]

    Robots working in real-world applications need to be robust and reliable. However, ensuring robust software in an academic development environment with dozens of developers poses a significant challenge. This work presents a testing framework, successfully employed in a large-scale integrated robotics project, based on continuous integration and the fork-and-pull model of software development, implementing automated system regression testing for robot navigation. It presents a framework suitable for both regression testing and also providing processes for parameter optimisation and benchmarking.

    @inproceedings{lincoln31547,
    booktitle = {UK-RAS Conference on Robotics and Autonomous Systems},
    month = {December},
    title = {Navigation testing for continuous integration in robotics},
    author = {Jaime Pulido Fentanes and Christian Dondrup and Marc Hanheide},
    publisher = {UK-RAS Conference on Robotics and Autonomous Systems (RAS 2017)},
    year = {2017},
    url = {https://eprints.lincoln.ac.uk/id/eprint/31547/},
    abstract = {Robots working in real-world applications need to be robust and reliable. However, ensuring robust software in an academic development environment with dozens of developers poses a significant challenge. This work presents a testing framework, successfully employed in a large-scale integrated robotics project, based on continuous integration and the fork-and-pull model of software development, implementing automated system regression testing for robot navigation. It presents a framework suitable for both regression testing and also providing processes for parameter optimisation and benchmarking.}
    }
  • Q. Fu and S. Yue, “Mimicking fly motion tracking and fixation behaviors with a hybrid visual neural network,” in 2017 ieee international conference on robotics and biomimetics (robio), Ieee, 2017, p. 1636–1641.
    [BibTeX] [Abstract] [Download PDF]

    How do animals, e.g. insects, detect meaningful visual motion cues involving directional and locational information of moving objects in visual clutter accurately and efficiently? This open question has been very attractive for decades. In this paper, with respect to the latest biological research progress made on motion detection circuitry, we construct a novel hybrid visual neural network, combining the functionality of two bio-plausible pathways, namely the motion and position pathways explored in the fly visual system, for mimicking the tracking and fixation behaviors. This modeling study extends a former direction-selective neurons model to the higher level of behavior. The proposed algorithms can be used to guide a system that extracts location information on moving objects in a scene regardless of background clutter, using entirely low-level visual processing. We tested it against translational movements in synthetic and real-world scenes. The results demonstrated the following contributions: (1) Compared to conventional computer vision techniques, it turns out the computational simplicity of this model may benefit its utility in small robots for real-time fixation. (2) The hybrid neural network structure fulfills the characteristics of a putative signal tuning map in physiology. (3) It also accords with a profound implication proposed by biologists: visual fixation behaviors could be simply tuned via only the position pathway; nevertheless, the motion-detecting pathway enhances the tracking precision.

    @incollection{lincoln28879,
    month = {December},
    author = {Qinbing Fu and Shigang Yue},
    note = {{\copyright} 2017 IEEE},
    booktitle = {2017 IEEE International Conference on Robotics and Biomimetics (ROBIO)},
    title = {Mimicking fly motion tracking and fixation behaviors with a hybrid visual neural network},
    publisher = {IEEE},
    pages = {1636--1641},
    year = {2017},
    url = {https://eprints.lincoln.ac.uk/id/eprint/28879/},
    abstract = {How do animals, e.g. insects, detect meaningful visual motion cues involving directional and locational information of moving objects in visual clutter accurately and efficiently? This open question has been very attractive for decades. In this paper, with respect to the latest biological research progress made on motion detection circuitry, we construct a novel hybrid visual neural network, combining the functionality of two bio-plausible pathways, namely the motion and position pathways explored in the fly visual system, for mimicking the tracking and fixation behaviors. This modeling study extends a former direction-selective neurons model to the higher level of behavior. The proposed algorithms can be used to guide a system that extracts location information on moving objects in a scene regardless of background clutter, using entirely low-level visual processing. We tested it against translational movements in synthetic and real-world scenes. The results demonstrated the following contributions: (1) Compared to conventional computer vision techniques, it turns out the computational simplicity of this model may benefit its utility in small robots for real-time fixation. (2) The hybrid neural network structure fulfills the characteristics of a putative signal tuning map in physiology. (3) It also accords with a profound implication proposed by biologists: visual fixation behaviors could be simply tuned via only the position pathway; nevertheless, the motion-detecting pathway enhances the tracking precision.}
    }
  • C. Keeble, P. A. Thwaites, S. Barber, G. R. Law, and P. D. Baxter, “Adaptation of chain event graphs for use with case-control studies in epidemiology,” The international journal of biostatistics, vol. 13, iss. 2, 2017. doi:10.1515/ijb-2016-0073
    [BibTeX] [Abstract] [Download PDF]

    Case-control studies are used in epidemiology to try to uncover the causes of diseases, but are a retrospective study design known to suffer from non-participation and recall bias, which may explain their decreased popularity in recent years. Traditional analyses usually report only the odds ratio for given exposures and the binary disease status. Chain event graphs are a graphical representation of a statistical model derived from event trees which have been developed in artificial intelligence and statistics, and only recently introduced to the epidemiology literature. They are a modern Bayesian technique which enables prior knowledge to be incorporated into the data analysis using the agglomerative hierarchical clustering algorithm, used to form a suitable chain event graph. Additionally, they can account for missing data and be used to explore missingness mechanisms. Here we adapt the chain event graph framework to suit scenarios often encountered in case-control studies, to strengthen this study design, which is both time- and cost-efficient. We demonstrate eight adaptations to the graphs, which consist of two suitable for full case-control study analysis, four which can be used in interim analyses to explore biases, and two which aim to improve the ease and accuracy of analyses. The adaptations are illustrated with complete, reproducible, fully-interpreted examples, including the event tree and chain event graph. Chain event graphs are used here for the first time to summarise non-participation, data collection techniques, data reliability, and disease severity in case-control studies. We demonstrate how these features of a case-control study can be incorporated into the analysis to provide further insight, which can help to identify potential biases and lead to more accurate study results.

    @article{lincoln29511,
    volume = {13},
    number = {2},
    month = {December},
    author = {Claire Keeble and Peter Adam Thwaites and Stuart Barber and Graham Richard Law and Paul David Baxter},
    title = {Adaptation of chain event graphs for use with case-control studies in epidemiology},
    publisher = {De Gruyter},
    year = {2017},
    journal = {The International Journal of Biostatistics},
    doi = {10.1515/ijb-2016-0073},
    url = {https://eprints.lincoln.ac.uk/id/eprint/29511/},
    abstract = {Case-control studies are used in epidemiology to try to uncover the causes of diseases, but are a retrospective study design known to suffer from non-participation and recall bias, which may explain their decreased popularity in recent years. Traditional analyses usually report only the odds ratio for given exposures and the binary disease status. Chain event graphs are a graphical representation of a statistical model derived from event trees which have been developed in artificial intelligence and statistics, and only recently introduced to the epidemiology literature. They are a modern Bayesian technique which enables prior knowledge to be incorporated into the data analysis using the agglomerative hierarchical clustering algorithm, used to form a suitable chain event graph. Additionally, they can account for missing data and be used to explore missingness mechanisms. Here we adapt the chain event graph framework to suit scenarios often encountered in case-control studies, to strengthen this study design, which is both time- and cost-efficient. We demonstrate eight adaptations to the graphs, which consist of two suitable for full case-control study analysis, four which can be used in interim analyses to explore biases, and two which aim to improve the ease and accuracy of analyses. The adaptations are illustrated with complete, reproducible, fully-interpreted examples, including the event tree and chain event graph. Chain event graphs are used here for the first time to summarise non-participation, data collection techniques, data reliability, and disease severity in case-control studies. We demonstrate how these features of a case-control study can be incorporated into the analysis to provide further insight, which can help to identify potential biases and lead to more accurate study results.}
    }
  • G. Maeda, M. Ewerton, G. Neumann, R. Lioutikov, and J. Peters, “Phase estimation for fast action recognition and trajectory generation in human-robot collaboration,” The international journal of robotics research, vol. 36, iss. 13-14, p. 1579–1594, 2017. doi:10.1177/0278364917693927
    [BibTeX] [Abstract] [Download PDF]