Publications

Download the BibTeX file of all L-CAS publications

Below is a list of all outputs created with the involvement of L-CAS academics. Filter by author or type.


2022

  • C. Qi, J. Gao, K. Chen, L. Shu, and S. Pearson, “Tea chrysanthemum detection by leveraging generative adversarial networks and edge computing,” Frontiers in plant science, 2022.
    [BibTeX] [Abstract] [Download PDF]

    A high resolution dataset is one of the prerequisites for tea chrysanthemum detection with deep learning algorithms. This is crucial for further developing a selective chrysanthemum harvesting robot. However, generating high resolution datasets of the tea chrysanthemum with complex unstructured environments is a challenge. In this context, we propose a novel generative adversarial network (TC-GAN) that attempts to deal with this challenge. First, we designed a non-linear mapping network for untangling the features of the underlying code. Then, a customized regularisation method was used to provide fine-grained control over the image details. Finally, a gradient diversion design with multi-scale feature extraction capability was adopted to optimize the training process. The proposed TC-GAN was compared with 12 state-of-the-art generative adversarial networks, showing that an optimal average precision (AP) of 90.09% was achieved with the generated images (512 × 512) on the developed TC-YOLO object detection model under the NVIDIA Tesla P100 GPU environment. Moreover, the detection model was deployed into the embedded NVIDIA Jetson TX2 platform with 0.1s inference time, and this edge computing device could be further developed into a perception system for selective chrysanthemum picking robots in the future.

    @article{lincoln48499,
    title = {Tea Chrysanthemum Detection by Leveraging Generative Adversarial Networks and Edge Computing},
    author = {Chao Qi and Junfeng Gao and Kunjie Chen and Lei Shu and Simon Pearson},
    publisher = {Frontiers Media},
    year = {2022},
    journal = {Frontiers in plant science},
    url = {https://eprints.lincoln.ac.uk/id/eprint/48499/},
    abstract = {A high resolution dataset is one of the prerequisites for tea chrysanthemum detection with deep learning algorithms. This is crucial for further developing a selective chrysanthemum harvesting robot. However, generating high resolution datasets of the tea chrysanthemum with complex unstructured environments is a challenge. In this context, we propose a novel generative adversarial network (TC-GAN) that attempts to deal with this challenge. First, we designed a non-linear mapping network for untangling the features of the underlying code. Then, a customized regularisation method was used to provide fine-grained control over the image details. Finally, a gradient diversion design with multi-scale feature extraction capability was adopted to optimize the training process. The proposed TC-GAN was compared with 12 state-of-the-art generative adversarial networks, showing that an optimal average precision (AP) of 90.09\% was achieved with the generated images (512*512) on the developed TC-YOLO object detection model under the NVIDIA Tesla P100 GPU environment. Moreover, the detection model was deployed into the embedded NVIDIA Jetson TX2 platform with 0.1s inference time, and this edge computing device could be further developed into a perception system for selective chrysanthemum picking robots in the future.}
    }
  • H. Luan, Q. Fu, Y. Zhang, M. Hua, S. Chen, and S. Yue, “A looming spatial localization neural network inspired by mlg1 neurons in the crab neohelice,” Frontiers in neuroscience, 2022. doi:10.3389/fnins.2021.787256
    [BibTeX] [Abstract] [Download PDF]

    Similar to most visual animals, the crab Neohelice granulata relies predominantly on visual information to escape from predators, to track prey and for selecting mates. It, therefore, needs specialized neurons to process visual information and determine the spatial location of looming objects. In the crab Neohelice granulata, the Monostratified Lobula Giant type1 (MLG1) neurons have been found to manifest looming sensitivity with finely tuned capabilities of encoding spatial location information. MLG1s neuronal ensemble can not only perceive the location of a looming stimulus, but are also thought to be able to influence the direction of movement continuously, for example, escaping from a threatening, looming target in relation to its position. Such specific characteristics make the MLG1s unique compared to normal looming detection neurons in invertebrates which cannot localize spatial looming. Modeling the MLG1s ensemble is not only critical for elucidating the mechanisms underlying the functionality of such neural circuits, but also important for developing new autonomous, efficient, directionally reactive collision avoidance systems for robots and vehicles. However, little computational modeling has been done for implementing looming spatial localization analogous to the specific functionality of MLG1s ensemble. To bridge this gap, we propose a model of MLG1s and their pre-synaptic visual neural network to detect the spatial location of looming objects. The model consists of 16 homogeneous sectors arranged in a circular field inspired by the natural arrangement of 16 MLG1s' receptive fields to encode and convey spatial information concerning looming objects with dynamic expanding edges in different locations of the visual field. Responses of the proposed model to systematic real-world visual stimuli match many of the biological characteristics of MLG1 neurons. The systematic experiments demonstrate that our proposed MLG1s model works effectively and robustly to perceive and localize looming information, which could be a promising candidate for intelligent machines interacting within dynamic environments free of collision. This study also sheds light upon a new type of neuromorphic visual sensor strategy that can extract looming objects with locational information in a quick and reliable manner.

    @article{lincoln49094,
    month = {January},
    title = {A Looming Spatial Localization Neural Network Inspired by MLG1 Neurons in the Crab Neohelice},
    author = {Hao Luan and Qingbing Fu and Yicheng Zhang and Mu Hua and Shengyong Chen and Shigang Yue},
    publisher = {Frontiers Media},
    year = {2022},
    doi = {10.3389/fnins.2021.787256},
    journal = {Frontiers in Neuroscience},
    url = {https://eprints.lincoln.ac.uk/id/eprint/49094/},
    abstract = {Similar to most visual animals, the crab Neohelice granulata relies predominantly on visual information to escape from predators, to track prey and for selecting mates. It, therefore, needs specialized neurons to process visual information and determine the spatial location of looming objects. In the crab Neohelice granulata, the Monostratified Lobula Giant type1 (MLG1) neurons have been found to manifest looming sensitivity with finely tuned capabilities of encoding spatial location information. MLG1s neuronal ensemble can not only perceive the location of a looming stimulus, but are also thought to be able to influence the direction of movement continuously, for example, escaping from a threatening, looming target in relation to its position. Such specific characteristics make the MLG1s unique compared to normal looming detection neurons in invertebrates which can not localize spatial looming. Modeling the MLG1s ensemble is not only critical for elucidating the mechanisms underlying the functionality of such neural circuits, but also important for developing new autonomous, efficient, directionally reactive collision avoidance systems for robots and vehicles. However, little computational modeling has been done for implementing looming spatial localization analogous to the specific functionality of MLG1s ensemble. To bridge this gap, we propose a model of MLG1s and their pre-synaptic visual neural network to detect the spatial location of looming objects. The model consists of 16 homogeneous sectors arranged in a circular field inspired by the natural arrangement of 16 MLG1s' receptive fields to encode and convey spatial information concerning looming objects with dynamic expanding edges in different locations of the visual field. Responses of the proposed model to systematic real-world visual stimuli match many of the biological characteristics of MLG1 neurons. The systematic experiments demonstrate that our proposed MLG1s model works effectively and robustly to perceive and localize looming information, which could be a promising candidate for intelligent machines interacting within dynamic environments free of collision. This study also sheds light upon a new type of neuromorphic visual sensor strategy that can extract looming objects with locational information in a quick and reliable manner.}
    }
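
    A minimal sketch of the sector-based encoding described in this entry, assuming single-channel frames and using pooled luminance change as a stand-in for the model's looming excitation (the function names and the threshold are invented for illustration; this is not the authors' published model):

    import numpy as np

    N_SECTORS = 16  # one pooling region per modelled MLG1 neuron

    def sector_responses(prev_frame: np.ndarray, frame: np.ndarray) -> np.ndarray:
        """Pool absolute luminance change into 16 angular sectors around
        the image centre, one response per modelled MLG1 receptive field."""
        h, w = frame.shape
        ys, xs = np.mgrid[0:h, 0:w]
        angles = np.arctan2(ys - h / 2, xs - w / 2)  # [-pi, pi]
        sectors = ((angles + np.pi) / (2 * np.pi) * N_SECTORS).astype(int) % N_SECTORS
        motion = np.abs(frame.astype(float) - prev_frame.astype(float))
        return np.array([motion[sectors == k].sum() for k in range(N_SECTORS)])

    def looming_direction(prev_frame, frame, threshold=1e4):
        """Index of the most excited sector, or None when the total
        response is too weak to indicate an approaching object."""
        r = sector_responses(prev_frame, frame)
        return int(np.argmax(r)) if r.sum() > threshold else None
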
  • K. J. Parnell, J. Fischer, J. Clarck, A. Bodenman, M. G. J. Trigo, M. P. Brito, M. D. Soorati, K. Plant, and S. Ramchurn, “Trustworthy uav relationships: applying the schema action world taxonomy to uavs and uav swarm operations,” International journal of human–computer interaction, 2022.
    [BibTeX] [Abstract] [Download PDF]

    Human Factors play a significant role in the development and integration of avionic systems to ensure that they are trusted and can be used effectively. As Unoccupied Aerial Vehicle (UAV) technology becomes increasingly important to the aviation domain this holds true. The study presented in this paper aims to gain an understanding of UAV operators' trust requirements when piloting UAVs by utilising a popular aviation interview methodology (Schema World Action Research Method), in combination with key questions on trust identified from the literature. Interviews were conducted with six UAV operators, with a range of experience, to identify the trust requirements that UAV operators hold and their views on how UAV swarms may alter the trust relationship between the operator and the UAV technology. Both methodological and practical contributions of the research interviews are discussed.

    @article{lincoln49326,
    title = {Trustworthy UAV relationships: Applying the Schema Action World taxonomy to UAVs and UAV swarm operations},
    author = {Katie J. Parnell and Joel Fischer and Jed Clarck and Adrian Bodenman and Maria J. Galvez Trigo and Mario P. Brito and Mohammad Divband Soorati and Katherine Plant and Sarvapali Ramchurn},
    publisher = {Taylor and Francis},
    year = {2022},
    journal = {International Journal of Human–Computer Interaction},
    url = {https://eprints.lincoln.ac.uk/id/eprint/49326/},
    abstract = {Human Factors play a significant role in the development and integration of avionic systems to ensure that they are trusted and can be used effectively. As Unoccupied Aerial Vehicle (UAV) technology becomes increasingly important to the aviation domain this holds true. The study presented in this paper aims to gain an understanding of UAV operators' trust requirements when piloting UAVs by utilising a popular aviation interview methodology (Schema World Action Research Method), in combination with key questions on trust identified from the literature. Interviews were conducted with six UAV operators, with a range of experience, to identify the trust requirements that UAV operators hold and their views on how UAV swarms may alter the trust relationship between the operator and the UAV technology. Both methodological and practical contributions of the research interviews are discussed.}
    }
  • X. Li, R. Lloyd, S. Ward, J. Cox, S. Coutts, and C. Fox, “Robotic crop row tracking around weeds using cereal-specific features,” Computers and electronics in agriculture, vol. 197, 2022. doi:10.1016/j.compag.2022.106941
    [BibTeX] [Abstract] [Download PDF]

    Crop row following is especially challenging in narrow row cereal crops, such as wheat. Separation between plants within a row disappears at an early growth stage, and canopy closure between rows, when leaves from different rows start to occlude each other, occurs three to four months after the crop emerges. Canopy closure makes it challenging to identify separate rows through computer vision as clear lanes become obscured. Cereal crops are grass species and so their leaves have a predictable shape and orientation. We introduce an image processing pipeline which exploits grass shape to identify and track rows. The key observation exploited is that leaf orientations tend to be vertical along rows and horizontal between rows due to the location of the stems within the rows. Adaptive mean-shift clustering on Hough line segments is then used to obtain lane centroids, followed by a nearest neighbor data association creating lane line candidates in 2D space. Lane parameters are fit with linear regression and a Kalman filter is used for tracking lanes between frames. The method achieves sub-50 mm accuracy, which is sufficient for placing a typical agri-robot's wheels between real-world, early-growth wheat crop rows to drive between them, as long as the crop is seeded in a wider spacing such as 180 mm row spacing for an 80 mm wheel width robot.

    @article{lincoln49340,
    volume = {197},
    month = {June},
    author = {Xiaodong Li and Rob Lloyd and Sarah Ward and Jonathan Cox and Shaun Coutts and Charles Fox},
    title = {Robotic crop row tracking around weeds using cereal-specific features},
    publisher = {Elsevier},
    journal = {Computers and Electronics in Agriculture},
    doi = {10.1016/j.compag.2022.106941},
    year = {2022},
    url = {https://eprints.lincoln.ac.uk/id/eprint/49340/},
    abstract = {Crop row following is especially challenging in narrow row cereal crops, such as wheat. Separation between plants within a row disappears at an early growth stage, and canopy closure between rows, when leaves from different rows start to occlude each other, occurs three to four months after the crop emerges. Canopy closure makes it challenging to identify separate rows through computer vision as clear lanes become obscured. Cereal crops are grass species and so their leaves have a predictable shape and orientation. We introduce an image processing pipeline which exploits grass shape to identify and track rows. The key observation exploited is that leaf orientations tend to be vertical along rows and horizontal between rows due to the location of the stems within the rows. Adaptive mean-shift clustering on Hough line segments is then used to obtain lane centroids, followed by a nearest neighbor data association creating lane line candidates in 2D space. Lane parameters are fit with linear regression and a Kalman filter is used for tracking lanes between frames. The method achieves sub-50 mm accuracy, which is sufficient for placing a typical agri-robot's wheels between real-world, early-growth wheat crop rows to drive between them, as long as the crop is seeded in a wider spacing such as 180 mm row spacing for an 80 mm wheel width robot.}
    }
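
    The single-frame core of the pipeline described in this entry can be sketched as follows. This is a simplified approximation under assumed parameter values, not the published implementation: it presumes a pre-computed binary mask of grass-leaf edges, clusters segment midpoints by horizontal position only, and omits the nearest-neighbour association and per-lane Kalman tracking stages:

    import cv2
    import numpy as np
    from sklearn.cluster import MeanShift

    def detect_lanes(edge_mask: np.ndarray, bandwidth: float = 40.0):
        """edge_mask: binary image of predominantly vertical leaf edges.
        Returns one (slope, intercept) pair per detected crop row,
        parameterised as x = slope * y + intercept."""
        segments = cv2.HoughLinesP(edge_mask, rho=1, theta=np.pi / 180,
                                   threshold=30, minLineLength=20, maxLineGap=10)
        if segments is None:
            return []
        segs = segments[:, 0]  # (N, 4) rows of x1, y1, x2, y2
        # Cluster segment midpoints by horizontal position: one cluster per row.
        mids = ((segs[:, 0] + segs[:, 2]) / 2.0).reshape(-1, 1)
        labels = MeanShift(bandwidth=bandwidth).fit_predict(mids)
        lanes = []
        for lane_id in np.unique(labels):
            pts = segs[labels == lane_id]
            xs = np.concatenate([pts[:, 0], pts[:, 2]]).astype(float)
            ys = np.concatenate([pts[:, 1], pts[:, 3]]).astype(float)
            lanes.append(tuple(np.polyfit(ys, xs, 1)))  # least-squares line fit
        return lanes
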
  • S. Pearson, C. Camacho-Villa, R. Valluru, G. Oorbessy, M. Rai, I. Gould, S. Brewer, and E. Sklar, “Robotics and autonomous systems for net zero agriculture,” Current robotics reports, vol. 3, p. 57–64, 2022. doi:10.1007/s43154-022-00077-6
    [BibTeX] [Abstract] [Download PDF]

    The paper discusses how robotics and autonomous systems (RAS) are being deployed to decarbonise agricultural production. The climate emergency cannot be ameliorated without dramatic reductions in greenhouse gas emissions across the agri-food sector. This review outlines the transformational role for robotics in the agri-food system and considers where research and focus might be prioritised.

    @article{lincoln49460,
    volume = {3},
    month = {June},
    author = {Simon Pearson and Carolina Camacho-Villa and Ravi Valluru and Gaju Oorbessy and Mini Rai and Iain Gould and Steve Brewer and Elizabeth Sklar},
    title = {Robotics and Autonomous Systems for Net Zero Agriculture},
    publisher = {Springer},
    year = {2022},
    journal = {Current Robotics Reports},
    doi = {10.1007/s43154-022-00077-6},
    pages = {57--64},
    url = {https://eprints.lincoln.ac.uk/id/eprint/49460/},
    abstract = {The paper discusses how robotics and autonomous systems (RAS) are being deployed to decarbonise agricultural production. The climate emergency cannot be ameliorated without dramatic reductions in greenhouse gas emissions across the agri-food sector. This review outlines the transformational role for robotics in the agri-food system and considers where research and focus might be prioritised.}
    }
  • L. Manning, S. Brewer, P. Craigon, P. J. Frey, A. Gutierrez, N. Jacobs, S. Kanza, S. Munday, J. Sacks, and S. Pearson, “Artificial intelligence and ethics within the food sector: developing a common language for technology adoption across the supply chain,” Trends in food science and technology, 2022.
    [BibTeX] [Abstract] [Download PDF]

    Background: The use of artificial intelligence (AI) is growing in food supply chains. The ethical language associated with food supply and technology is contextualised and framed by the meaning given to it by stakeholders. Failure to differentiate between these nuanced meanings can create a barrier to technology adoption and reduce the benefit derived. Scope and approach: The aim of this review paper is to consider the embedded ethical language used by stakeholders who collaborate in the adoption of AI in food supply chains. Ethical perspectives frame this literature review and provide structure to consider how to shape a common discourse to build trust in, and frame more considered utilisation of, AI in food supply chains to the benefit of users, and wider society. Key findings and conclusions: Whilst the nature of data within the food system is much broader than the personal data covered by the European Union General Data Protection Regulation (GDPR), the ethical issues for computational and AI systems are similar and can be considered in terms of particular aspects: transparency, traceability, explainability, interpretability, accessibility, accountability and responsibility. The outputs of this research assist in giving a more rounded understanding of the language used, exploring the ethical interaction of aspects of AI used in food supply chains and also the management activities and actions that can be adopted to improve the applicability of AI technology, increase engagement and derive greater performance benefits. This work has implications for those developing AI governance protocols for the food supply chain as well as supply chain practitioners.

    @article{lincoln49072,
    title = {Artificial intelligence and ethics within the food sector: developing a common language for technology adoption across the supply chain},
    author = {Louise Manning and Steve Brewer and Peter Craigon and P. J. Frey and Anabel Gutierrez and Naomi Jacobs and Samantha Kanza and Samuel Munday and Justin Sacks and Simon Pearson},
    publisher = {Elsevier},
    year = {2022},
    journal = {Trends in Food Science and Technology},
    url = {https://eprints.lincoln.ac.uk/id/eprint/49072/},
    abstract = {Background: The use of artificial intelligence (AI) is growing in food supply chains. The ethical language associated with food supply and technology is contextualised and framed by the meaning given to it by stakeholders. Failure to differentiate between these nuanced meanings can create a barrier to technology adoption and reduce the benefit derived.
    Scope and approach: The aim of this review paper is to consider the embedded ethical language used by stakeholders who collaborate in the adoption of AI in food supply chains. Ethical perspectives frame this literature review and provide structure to consider how to shape a common discourse to build trust in, and frame more considered utilisation of, AI in food supply chains to the benefit of users, and wider society.
    Key findings and conclusions: Whilst the nature of data within the food system is much broader than the personal data covered by the European Union General Data Protection Regulation (GDPR), the ethical issues for computational and AI systems are similar and can be considered in terms of particular aspects: transparency, traceability, explainability, interpretability, accessibility, accountability and responsibility. The outputs of this research assist in giving a more rounded understanding of the language used, exploring the ethical interaction of aspects of AI used in food supply chains and also the management activities and actions that can be adopted to improve the applicability of AI technology, increase engagement and derive greater performance benefits. This work has implications for those developing AI governance protocols for the food supply chain as well as supply chain practitioners.}
    }
  • C. Qi, J. Gao, S. Pearson, H. Harman, K. Chen, and L. Shu, “Tea chrysanthemum detection under unstructured environments using the tc-yolo model,” Expert systems with applications, vol. 193, 2022. doi:10.1016/j.eswa.2021.116473
    [BibTeX] [Abstract] [Download PDF]

    Tea chrysanthemum detection at its flowering stage is one of the key components for selective chrysanthemum harvesting robot development. However, it is a challenge to detect flowering chrysanthemums under unstructured field environments given variations in illumination, occlusion and object scale. In this context, we propose a highly fused and lightweight deep learning architecture based on YOLO for tea chrysanthemum detection (TC-YOLO). First, in the backbone component and neck component, the method uses the Cross-Stage Partially Dense network (CSPDenseNet) and the Cross-Stage Partial ResNeXt network (CSPResNeXt) as the main networks, respectively, and embeds custom feature fusion modules to guide the gradient flow. In the final head component, the method combines the recursive feature pyramid (RFP) multiscale fusion reflow structure and the Atrous Spatial Pyramid Pool (ASPP) module with cavity convolution to achieve the detection task. The resulting model was tested on 300 field images using a data enhancement strategy combining flipping and rotation, showing that, under the NVIDIA Tesla P100 GPU environment and at an inference speed of 47.23 FPS per image (416 × 416), TC-YOLO can achieve an average precision (AP) of 92.49% on our own tea chrysanthemum dataset. Through further validation, it was found that overlap had the least effect on tea chrysanthemum detection, and illumination had the greatest effect on tea chrysanthemum detection. In addition, this method (13.6 M) can be deployed on a single mobile GPU, and it could be further developed as a perception system for a selective chrysanthemum harvesting robot in the future.

    @article{lincoln47700,
    volume = {193},
    month = {May},
    author = {Chao Qi and Junfeng Gao and Simon Pearson and Helen Harman and Kunjie Chen and Lei Shu},
    title = {Tea chrysanthemum detection under unstructured environments using the TC-YOLO model},
    publisher = {Elsevier},
    journal = {Expert Systems with Applications},
    doi = {10.1016/j.eswa.2021.116473},
    year = {2022},
    url = {https://eprints.lincoln.ac.uk/id/eprint/47700/},
    abstract = {Tea chrysanthemum detection at its flowering stage is one of the key components for selective chrysanthemum harvesting robot development. However, it is a challenge to detect flowering chrysanthemums under unstructured field environments given variations on illumination, occlusion and object scale. In this context, we propose a highly fused and lightweight deep learning architecture based on YOLO for tea chrysanthemum detection (TC-YOLO). First, in the backbone component and neck component, the method uses the Cross-Stage Partially Dense network (CSPDenseNet) and the Cross-Stage Partial ResNeXt network (CSPResNeXt) as the main networks, respectively, and embeds custom feature fusion modules to guide the gradient flow. In the final head component, the method combines the recursive feature pyramid (RFP) multiscale fusion reflow structure and the Atrous Spatial Pyramid Pool (ASPP) module with cavity convolution to achieve the detection task. The resulting model was tested on 300 field images using a data enhancement strategy combining flipping and rotation, showing that under the NVIDIA Tesla P100 GPU environment, if the inference speed is 47.23 FPS for each image (416 {$\times$} 416), TC-YOLO can achieve the average precision (AP) of 92.49\% on our own tea chrysanthemum dataset. Through further validation, it was found that overlap had the least effect on tea chrysanthemum detection, and illumination had the greatest effect on tea chrysanthemum detection. In addition, this method (13.6 M) can be deployed on a single mobile GPU, and it could be further developed as a perception system for a selective chrysanthemum harvesting robot in the future.}
    }
  • M. Badaoui, P. Buigues, D. Berta, G. Mandana, H. Gu, T. Földes, C. Dickson, V. Hornak, M. Kato, C. Molteni, S. Parsons, and E. Rosta, “Combined free-energy calculation and machine learning methods for understanding ligand unbinding kinetics,” Journal of chemical theory and computation, vol. 18, iss. 4, p. 2543–2555, 2022. doi:10.1021/acs.jctc.1c00924
    [BibTeX] [Abstract] [Download PDF]

    The determination of drug residence times, which define the time an inhibitor is in complex with its target, is a fundamental part of the drug discovery process. Synthesis and experimental measurements of kinetic rate constants are, however, expensive and time-consuming. In this work, we aimed to obtain drug residence times computationally. Furthermore, we propose a novel algorithm to identify molecular design objectives based on ligand unbinding kinetics. We designed an enhanced sampling technique to accurately predict the free energy profiles of the ligand unbinding process, focusing on the free energy barrier for unbinding. Our method first identifies unbinding paths determining a corresponding set of internal coordinates (IC) that form contacts between the protein and the ligand; it then iteratively updates these interactions during a series of biased molecular-dynamics (MD) simulations to reveal the ICs that are important for the whole of the unbinding process. Subsequently, we performed finite temperature string simulations to obtain the free energy barrier for unbinding using the set of ICs as a complex reaction coordinate. Importantly, we also aimed to enable further design of drugs focusing on improved residence times. To this end, we developed a supervised machine learning (ML) approach with inputs from unbiased 'downhill' trajectories initiated near the transition state (TS) ensemble of the string unbinding path. We demonstrate that our ML method can identify key ligand-protein interactions driving the system through the TS. Some of the most important drugs for cancer treatment are kinase inhibitors. One of these kinase targets is Cyclin Dependent Kinase 2 (CDK2), an appealing target for anticancer drug development. Here, we tested our method using two different CDK2 inhibitors for potential further development of these compounds. We compared the free energy barriers obtained from our calculations with those observed in available experimental data. We highlighted important interactions at the distal ends of the ligands that can be targeted for improved residence times. Our method provides a new tool to determine unbinding rates, and to identify key structural features of the inhibitors that can be used as starting points for novel design strategies in drug discovery.

    @article{lincoln49062,
    volume = {18},
    number = {4},
    month = {April},
    author = {Magd Badaoui and Pedro Buigues and Denes Berta and Gaurav Mandana and Hankang Gu and Tam{\'a}s F{\"o}ldes and Callum Dickson and Viktor Hornak and Mitsunori Kato and Carla Molteni and Simon Parsons and Edina Rosta},
    title = {Combined Free-Energy Calculation and Machine Learning Methods for Understanding Ligand Unbinding Kinetics},
    publisher = {American Chemical Society},
    year = {2022},
    journal = {Journal of Chemical Theory and Computation},
    doi = {10.1021/acs.jctc.1c00924},
    pages = {2543--2555},
    url = {https://eprints.lincoln.ac.uk/id/eprint/49062/},
    abstract = {The determination of drug residence times, which define the time an inhibitor is in complex with its target, is a fundamental part of the drug discovery process. Synthesis and experimental measurements of kinetic rate constants are, however, expensive and time-consuming. In this work, we aimed to obtain drug residence times computationally. Furthermore, we propose a novel algorithm to identify molecular design objectives based on ligand unbinding kinetics. We designed an enhanced sampling technique to accurately predict the free energy profiles of the ligand unbinding process, focusing on the free energy barrier for unbinding. Our method first identifies unbinding paths determining a corresponding set of internal coordinates (IC) that form contacts between the protein and the ligand; it then iteratively updates these interactions during a series of biased molecular-dynamics (MD) simulations to reveal the ICs that are important for the whole of the unbinding process. Subsequently, we performed finite temperature string simulations to obtain the free energy barrier for unbinding using the set of ICs as a complex reaction coordinate. Importantly, we also aimed to enable further design of drugs focusing on improved residence times. To this end, we developed a supervised machine learning (ML) approach with inputs from unbiased 'downhill' trajectories initiated near the transition state (TS) ensemble of the string unbinding path. We demonstrate that our ML method can identify key ligand-protein interactions driving the system through the TS. Some of the most important drugs for cancer treatment are kinase inhibitors. One of these kinase targets is Cyclin Dependent Kinase 2 (CDK2), an appealing target for anticancer drug development. Here, we tested our method using two different CDK2 inhibitors for potential further development of these compounds. We compared the free energy barriers obtained from our calculations with those observed in available experimental data. We highlighted important interactions at the distal ends of the ligands that can be targeted for improved residence times. Our method provides a new tool to determine unbinding rates, and to identify key structural features of the inhibitors that can be used as starting points for novel design strategies in drug discovery.}
    }
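
    A toy sketch of the supervised step described in this entry. Everything here is an assumption for illustration (the feature layout, the placeholder data, and the choice of a random forest): the point is only how feature importances over internal coordinates could flag the ligand-protein contacts that decide whether a downhill trajectory unbinds:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    n_traj, n_ics = 200, 12                # hypothetical trajectory and IC counts
    X = rng.normal(size=(n_traj, n_ics))   # IC contact values near the TS (placeholder data)
    y = rng.integers(0, 2, size=n_traj)    # outcome label: 0 = rebinds, 1 = unbinds

    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    ranking = np.argsort(clf.feature_importances_)[::-1]
    print("ICs most predictive of crossing the TS:", ranking[:3])
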
  • S. M. Mellado, G. Cielniak, and T. Duckett, “Robotic exploration for learning human motion patterns,” Ieee transactions on robotics, 2022. doi:10.1109/TRO.2021.3101358
    [BibTeX] [Abstract] [Download PDF]

    Understanding how people are likely to move is key to efficient and safe robot navigation in human environments. However, mobile robots can only observe a fraction of the environment at a time, while the activity patterns of people may also change at different times. This paper introduces a new methodology for mobile robot exploration to maximise the knowledge of human activity patterns by deciding where and when to collect observations. We introduce an exploration policy driven by the entropy levels in a spatio-temporal map of pedestrian flows, and compare multiple spatio-temporal exploration strategies including both informed and uninformed approaches. The evaluation is performed by simulating mobile robot exploration using real sensory data from three long-term pedestrian datasets. The results show that for certain scenarios the models built with the proposed exploration system can better predict the flow patterns than uninformed strategies, allowing the robot to move in a more socially compliant way, and that the exploration ratio is a key factor when it comes to the model prediction accuracy.

    @article{lincoln46497,
    month = {April},
    title = {Robotic Exploration for Learning Human Motion Patterns},
    author = {Sergio Molina Mellado and Grzegorz Cielniak and Tom Duckett},
    publisher = {IEEE},
    year = {2022},
    doi = {10.1109/TRO.2021.3101358},
    journal = {IEEE Transactions on Robotics},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46497/},
    abstract = {Understanding how people are likely to move is key to efficient and safe robot navigation in human environments. However, mobile robots can only observe a fraction of the environment at a time, while the activity patterns of people may also change at different times. This paper introduces a new methodology for mobile robot exploration to maximise the knowledge of human activity patterns by deciding where and when to collect observations. We introduce an exploration policy driven by the entropy levels in a spatio-temporal map of pedestrian flows, and compare multiple spatio-temporal exploration strategies including both informed and uninformed approaches. The evaluation is performed by simulating mobile robot exploration using real sensory data from three long-term pedestrian datasets. The results show that for certain scenarios the models built with the proposed exploration system can better predict the flow patterns than uninformed strategies, allowing the robot to move in a more socially compliant way, and that the exploration ratio is a key factor when it comes to the model prediction accuracy.}
    }
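
    A minimal sketch of the entropy-driven policy described in this entry, assuming a simplified representation in which each grid cell holds a Bernoulli estimate of pedestrian presence per time-of-day bin (the paper's spatio-temporal flow maps are richer than this):

    import numpy as np

    def bernoulli_entropy(p: np.ndarray) -> np.ndarray:
        """Shannon entropy (bits) of per-cell presence probabilities."""
        p = np.clip(p, 1e-9, 1 - 1e-9)
        return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

    def next_observation_target(flow_map: np.ndarray, t_bin: int):
        """flow_map: (rows, cols, time_bins) of estimated presence
        probabilities. Returns the (row, col) cell whose model is most
        uncertain at time bin t_bin, i.e. where observing helps most."""
        h = bernoulli_entropy(flow_map[:, :, t_bin])
        return np.unravel_index(np.argmax(h), h.shape)
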
  • F. Lei, Z. Peng, M. Liu, J. Peng, V. Cutsuridis, and S. Yue, “A robust visual system for looming cue detection against translating motion,” Ieee transactions on neural networks and learning systems, p. 1–15, 2022. doi:10.1109/TNNLS.2022.3149832
    [BibTeX] [Abstract] [Download PDF]

    Collision detection is critical for autonomous vehicles or robots to serve human society safely. Detecting looming objects robustly and in a timely manner plays an important role in collision avoidance systems. The locust lobula giant movement detector (LGMD1) is specifically selective to looming objects which are on a direct collision course. However, the existing LGMD1 models cannot distinguish a looming object from a near and fast translatory moving object, because the latter can evoke a large amount of excitation that can lead to false LGMD1 spikes. This paper presents a new visual neural system model (LGMD1) that applies a neural competition mechanism within a framework of separated ON and OFF pathways to shut off the translating response. The competition-based approach responds vigorously to monotonous ON/OFF responses resulting from a looming object. However, it does not respond to paired ON-OFF responses that result from a translating object, thereby enhancing collision selectivity. Moreover, a complementary denoising mechanism ensures reliable collision detection. To verify the effectiveness of the model, we have conducted systematic comparative experiments on synthetic and real datasets. The results show that our method exhibits more accurate discrimination between looming and translational events – the looming motion can be correctly detected. It also demonstrates that the proposed model is more robust than comparative models.

    @article{lincoln48358,
    month = {February},
    author = {Fang Lei and Zhiping Peng and Mei Liu and Jigen Peng and Vassilis Cutsuridis and Shigang Yue},
    title = {A Robust Visual System for Looming Cue Detection Against Translating Motion},
    publisher = {IEEE},
    journal = {IEEE Transactions on Neural Networks and Learning Systems},
    doi = {10.1109/TNNLS.2022.3149832},
    pages = {1--15},
    year = {2022},
    url = {https://eprints.lincoln.ac.uk/id/eprint/48358/},
    abstract = {Collision detection is critical for autonomous vehicles or robots to serve human society safely. Detecting looming objects robustly and timely plays an important role in collision avoidance systems. The locust lobula giant movement detector (LGMD1) is specifically selective to looming objects which are on a direct collision course. However, the existing LGMD1 models can not distinguish a looming object from a near and fast translatory moving object, because the latter can evoke a large amount of excitation that can lead to false LGMD1 spikes. This paper presents a new visual neural system model (LGMD1) that applies a neural competition mechanism within a framework of separated ON and OFF pathways to shut off the translating response. The competition-based approach responds vigorously to monotonous ON/OFF responses resulting from a looming object. However, it does not respond to paired ON-OFF responses that result from a translating object, thereby enhancing collision selectivity. Moreover, a complementary denoising mechanism ensures reliable collision detection. To verify the effectiveness of the model, we have conducted systematic comparative experiments on synthetic and real datasets. The results show that our method exhibits more accurate discrimination between looming and translational events -- the looming motion can be correctly detected. It also demonstrates that the proposed model is more robust than comparative models.}
    }
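
    The competition idea at the heart of this model can be sketched in a few lines. This is a schematic illustration only (the published model adds denoising and the full LGMD1 circuit): paired ON and OFF activity at the same location, typical of a translating edge, cancels, while the monotonous activity of a looming edge survives:

    import numpy as np

    def on_off_competition(prev_frame, frame, w_inhibit=1.0):
        """Split luminance change into ON (brightening) and OFF (darkening)
        channels and let each channel inhibit the other pixel-wise."""
        diff = frame.astype(float) - prev_frame.astype(float)
        on, off = np.maximum(diff, 0.0), np.maximum(-diff, 0.0)
        # Translating edges produce paired ON-OFF responses that cancel;
        # looming edges produce monotonous responses that pass through.
        return (np.maximum(on - w_inhibit * off, 0.0)
                + np.maximum(off - w_inhibit * on, 0.0))
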
  • A. Drake, I. Sassoon, P. Balatsoukas, T. Porat, M. Ashworth, E. Wright, V. Curcin, M. Chapman, N. Kokciyan, S. Modgil, E. Sklar, and S. Parsons, “The relationship of socio-demographic factors and patient attitudes to connected health technologies: a survey of stroke survivors,” Health informatics journal, 2022.
    [BibTeX] [Abstract] [Download PDF]

    More evidence is needed on technology implementation for remote monitoring and self-management across the various settings relevant to chronic conditions. This paper describes the findings of a survey designed to explore the relevance of socio-demographic factors to attitudes towards connected health technologies in a community of patients. Stroke survivors living in the UK were invited to answer questions about themselves and about their attitudes to a prototype remote monitoring and self-management app developed around their preferences. Eighty (80) responses were received and analysed, with limitations and results presented in full. Socio-demographic factors were not found to be associated with variations in participants' willingness to use the system and attitudes to data sharing. Individuals' levels of interest in relevant technology were suggested as a more important determinant of attitudes. These observations run against the grain of most relevant literature to date, and tend to underline the importance of prioritising patient-centred participatory research in efforts to advance connected health technologies.

    @article{lincoln49216,
    title = {The relationship of socio-demographic factors and patient attitudes to connected health technologies: a survey of stroke survivors.},
    author = {Archie Drake and Isabel Sassoon and Panos Balatsoukas and Talya Porat and Mark Ashworth and Ellen Wright and Vasa Curcin and Martin Chapman and Nadin Kokciyan and Sanjay Modgil and Elizabeth Sklar and Simon Parsons},
    publisher = {SAGE Publications},
    year = {2022},
    journal = {Health Informatics Journal},
    url = {https://eprints.lincoln.ac.uk/id/eprint/49216/},
    abstract = {More evidence is needed on technology implementation for remote monitoring and self-management across the various settings relevant to chronic conditions. This paper describes the findings of a survey designed to explore the relevance of socio-demographic factors to attitudes towards connected health technologies in a community of patients. Stroke survivors living in the UK were invited to answer questions about themselves and about their attitudes to a prototype remote monitoring and self-management app developed around their preferences. Eighty (80) responses were received and analysed, with limitations and results presented in full. Socio-demographic factors were not found to be associated with variations in participants' willingness to use the system and attitudes to data sharing. Individuals' levels of interest in relevant technology were suggested as a more important determinant of attitudes. These observations run against the grain of most relevant literature to date, and tend to underline the importance of prioritising patient-centred participatory research in efforts to advance connected health technologies.}
    }
  • H. Harman and E. I. Sklar, “Challenges for multi-agent based agricultural workforce management,” in The 23rd international workshop on multi-agent-based simulation (mabs), 2022.
    [BibTeX] [Abstract] [Download PDF]

    Multi-agent task allocation methods seek to distribute a set of tasks fairly amongst a set of agents. In real-world settings, such as soft fruit farms, human labourers undertake harvesting tasks, assigned by farm managers. The work here explores the application of artificial intelligence planning methodologies to optimise the existing workforce and applies multi-agent based simulation to evaluate the efficacy of the AI strategies. Key challenges threatening the acceptance of such an approach are highlighted and solutions are evaluated experimentally.

    @inproceedings{lincoln49036,
    booktitle = {The 23rd International Workshop on Multi-Agent-Based Simulation (MABS)},
    title = {Challenges for Multi-Agent Based Agricultural Workforce Management},
    author = {Helen Harman and Elizabeth I. Sklar},
    publisher = {Springer},
    year = {2022},
    url = {https://eprints.lincoln.ac.uk/id/eprint/49036/},
    abstract = {Multi-agent task allocation methods seek to distribute a set of tasks fairly amongst a set of agents. In real-world settings, such as soft fruit farms, human labourers undertake harvesting tasks, assigned by farm managers. The work here explores the application of artificial intelligence planning methodologies to optimise the existing workforce and applies multi-agent based simulation to evaluate the efficacy of the AI strategies. Key challenges threatening the acceptance of such an approach are highlighted and solutions are evaluated experimentally.}
    }
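
    As a concrete illustration of the task-allocation setting in this entry (this greedy least-loaded allocator is an invented example, not the planner evaluated in the paper):

    from typing import Dict, List

    def allocate(tasks: List[str], agents: List[str],
                 duration: Dict[str, float]) -> Dict[str, List[str]]:
        """Assign each task to the currently least-loaded agent,
        longest tasks first, approximating a fair workload split."""
        load = {a: 0.0 for a in agents}
        plan: Dict[str, List[str]] = {a: [] for a in agents}
        for task in sorted(tasks, key=lambda t: -duration[t]):
            agent = min(load, key=load.get)
            plan[agent].append(task)
            load[agent] += duration[task]
        return plan

    # Example: three pickers, five rows with differing expected pick times.
    print(allocate(["r1", "r2", "r3", "r4", "r5"],
                   ["picker_a", "picker_b", "picker_c"],
                   {"r1": 30, "r2": 25, "r3": 20, "r4": 15, "r5": 10}))
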
  • S. Ghidoni, M. Terreran, D. Evangelista, E. Menegatti, C. Eitzinger, E. Villagrossi, N. Pedrocchi, N. Castaman, M. Malecha, S. Mghames, L. Castri, M. Hanheide, and N. Bellotto, “From human perception and action recognition to causal understanding of human-robot interaction in industrial environments,” in Ital-ia 2022, 2022.
    [BibTeX] [Abstract] [Download PDF]

    Human-robot collaboration is migrating from lightweight robots in laboratory environments to industrial applications, where heavy tasks and powerful robots are more common. In this scenario, a reliable perception of the humans involved in the process and related intentions and behaviors is fundamental. This paper presents two projects investigating the use of robots in relevant industrial scenarios, providing an overview of how industrial human-robot collaborative tasks can be successfully addressed.

    @inproceedings{lincoln48515,
    booktitle = {Ital-IA 2022},
    title = {From Human Perception and Action Recognition to Causal Understanding of Human-Robot Interaction in Industrial Environments},
    author = {Stefano Ghidoni and Matteo Terreran and Daniele Evangelista and Emanuele Menegatti and Christian Eitzinger and Enrico Villagrossi and Nicola Pedrocchi and Nicola Castaman and Marcin Malecha and Sariah Mghames and Luca Castri and Marc Hanheide and Nicola Bellotto},
    year = {2022},
    url = {https://eprints.lincoln.ac.uk/id/eprint/48515/},
    abstract = {Human-robot collaboration is migrating from lightweight robots in laboratory environments to industrial applications, where heavy tasks and powerful robots are more common. In this scenario, a reliable perception of the humans involved in the process and related intentions and behaviors is fundamental. This paper presents two projects investigating the use of robots in relevant industrial scenarios, providing an overview of how industrial human-robot collaborative tasks can be successfully addressed.}
    }
  • M. G. Trigo, P. Standen, and S. Cobb, “Educational robots and their control interfaces: how can we make them more accessible for special education?,” in Hci international conference 2022, 2022.
    [BibTeX] [Abstract] [Download PDF]

    Existing design standards and guidelines provide guidance on what factors to consider to produce interactive systems that are not only usable, but also accessible. However, these standards are usually general, and when it comes to designing an interactive system for children with Learning Difficulties or Disabilities (LD) and/or Autism Spectrum Conditions (ASC) they are often not specific enough, leading to systems that are not fit for that purpose. If we dive into the area of educational robotics, we face even more issues, in part due to the relative novelty of these technologies. In this paper, we present an analysis of 26 existing educational robots and the interfaces used to control them. Furthermore, we present the results of running focus groups and a questionnaire with 32 educators with expertise in Special Education and parents at four different institutions, to explore potential accessibility issues of existing systems and to identify desirable characteristics. We conclude by introducing an initial set of design recommendations, to complement existing design standards and guidelines, that would help with producing future more accessible control interfaces for educational robots, with an especial focus on helping pupils with LDs and/or ASC.

    @inproceedings{lincoln48058,
    booktitle = {HCI International Conference 2022},
    title = {Educational robots and their control interfaces: how can we make them more accessible for Special Education?},
    author = {Maria Galvez Trigo and Penelope Standen and Sue Cobb},
    publisher = {Springer},
    year = {2022},
    url = {https://eprints.lincoln.ac.uk/id/eprint/48058/},
    abstract = {Existing design standards and guidelines provide guidance on what factors to consider to produce interactive systems that are not only usable, but also accessible. However, these standards are usually general, and when it comes to designing an interactive system for children with Learning Difficulties or Disabilities (LD) and/or Autism Spectrum Conditions (ASC) they are often not specific enough, leading to systems that are not fit for that purpose. If we dive into the area of educational robotics, we face even more issues, in part due to the relative novelty of these technologies. In this paper, we present an analysis of 26 existing educational robots and the interfaces used to control them. Furthermore, we present the results of running focus groups and a questionnaire with 32 educators with expertise in Special Education and parents at four different institutions, to explore potential accessibility issues of existing systems and to identify desirable characteristics. We conclude by introducing an initial set of design recommendations, to complement existing design standards and guidelines, that would help with producing future more accessible control interfaces for educational robots, with an especial focus on helping pupils with LDs and/or ASC.}
    }
  • J. Gregory, M. H. Nair, G. Bullegas, and M. R. Saaj, “Using semantic systems engineering techniques to verify the large aperture space telescope mission – current status,” in Model based space systems and software engineering mbse2021, 2022.
    [BibTeX] [Abstract] [Download PDF]

    MBSE aims to integrate engineering models across tools and domain boundaries to support traditional systems engineering activities (e.g., requirements elicitation and traceability, design, analysis, verification and validation). However, MBSE does not inherently solve interoperability with the multiple model-based infrastructures involved in a complex systems engineering project. The challenge is to implement digital continuity in the three dimensions of systems engineering: across disciplines, throughout the lifecycle, and along the supply chain. Space systems are ideal candidates for the application of MBSE and semantic modelling as these complex and expensive systems are mission-critical and often co-developed by multiple stakeholders. In this paper, the authors introduce the concept of Semantic Systems Engineering (SES) as an expansion of MBSE practices to include semantic modelling through Semantic Web Technologies (SWTs). The paper also presents the progress and status of a novel Semantic Systems Engineering Ontology (SESO) in the context of a specific design case study – the Large Aperture Space Telescope mission.

    @inproceedings{lincoln49463,
    booktitle = {Model Based Space Systems and Software Engineering MBSE2021},
    month = {September},
    title = {Using Semantic Systems Engineering Techniques to Verify the Large Aperture Space Telescope Mission – Current Status},
    author = {Joe Gregory and Manu H. Nair and Gianmaria Bullegas and Mini Rai Saaj},
    publisher = {European Space Agency},
    year = {2022},
    url = {https://eprints.lincoln.ac.uk/id/eprint/49463/},
    abstract = {MBSE aims to integrate engineering models across tools and domain boundaries to support traditional systems engineering activities (e.g., requirements elicitation and traceability, design, analysis, verification and validation). However, MBSE does not inherently solve interoperability with the multiple model-based infrastructures involved in a complex systems engineering project. The challenge is to implement digital continuity in the three dimensions of systems engineering: across disciplines, throughout the lifecycle, and along the supply chain. Space systems are ideal candidates for the application of MBSE and semantic modelling as these complex and expensive systems are mission-critical and often co-developed by multiple stakeholders. In this paper, the authors introduce the concept of Semantic Systems Engineering (SES) as an expansion of MBSE practices to include semantic modelling through SWTs. The paper also presents the progress and status of a novel Semantic Systems Engineering Ontology (SESO) in the context of a specific design case study – the Large Aperture Space Telescope mission.}
    }
  • T. Choi and G. Cielniak, “Channel randomisation with domain control for effective representation learning of visual anomalies in strawberries,” in Ai for agriculture and food systems, 2022.
    [BibTeX] [Abstract] [Download PDF]

    Channel Randomisation (CH-Rand) has emerged as a key data augmentation technique for anomaly detection on fruit images because neural networks can learn useful representations of colour irregularity whilst classifying the samples from the augmented “domain”. Our previous study has revealed its success with significantly more reliable performance than other state-of-the-art methods, largely specialised for identifying structural implausibility on non-agricultural objects (e.g., screws). In this paper, we further enhance CH-Rand with additional guidance to generate more informative data for representation learning of anomalies in fruits as most of its fundamental designs are still maintained. To be specific, we first control the “colour space” on which CH-Rand is executed to investigate whether a particular model – e.g., HSV, YCbCr, or L*a*b* – can better help synthesise realistic anomalies than the RGB, suggested in the original design. In addition, we develop a learning “curriculum” in which CH-Rand shifts its augmented domain to gradually increase the difficulty of the examples for neural networks to classify. To the best of our knowledge, we are the first to connect the concept of curriculum to self-supervised representation learning for anomaly detection. Lastly, we perform evaluations with the Riseholme-2021 dataset, which contains > 3.5K real strawberry images at various growth levels along with anomalous examples. Our experimental results show that the trained models with the proposed strategies can achieve over 16% higher scores of AUC-PR with more than three times less variability than the naive CH-Rand whilst using the same deep networks and data.

    @inproceedings{lincoln48676,
    booktitle = {AI for Agriculture and Food Systems},
    month = {January},
    title = {Channel Randomisation with Domain Control for Effective Representation Learning of Visual Anomalies in Strawberries},
    author = {Taeyeong Choi and Grzegorz Cielniak},
    year = {2022},
    url = {https://eprints.lincoln.ac.uk/id/eprint/48676/},
    abstract = {Channel Randomisation (CH-Rand) has appeared as a key data augmentation technique for anomaly detection on fruit images because neural networks can learn useful representations of colour irregularity whilst classifying the samples from the augmented "domain". Our previous study has revealed its success with significantly more reliable performance than other state-of-the-art methods, largely specialised for identifying structural implausibility on non-agricultural objects (e.g., screws). In this paper, we further enhance CH-Rand with additional guidance to generate more informative data for representation learning of anomalies in fruits as most of its fundamental designs are still maintained. To be specific, we first control the "colour space" on which CH-Rand is executed to investigate whether a particular model{--}e.g., HSV , YCbCr, or L*a*b* {--}can better help synthesise realistic anomalies than the RGB, suggested in the original design. In addition, we develop a learning "curriculum" in which CH-Rand shifts its augmented domain to gradually increase the difficulty of the examples for neural networks to classify. To the best of our knowledge, we are the first to connect the concept of curriculum to self-supervised representation learning for anomaly detection. Lastly, we perform evaluations with the Riseholme-2021 dataset, which contains {\ensuremath{>}} 3.5K real strawberry images at various growth levels along with anomalous examples. Our experimental results show that the trained models with the proposed strategies can achieve over 16\% higher scores of AUC-PR with more than three times less variability than the naive CH-Rand whilst using the same deep networks and data.}
    }
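
    The augmentation itself is simple to sketch. A minimal version, assuming (H, W, 3) colour arrays; the colour-space control and curriculum described in this entry would then govern in which space, and how aggressively, the permutation is applied:

    import numpy as np

    def channel_randomise(image: np.ndarray,
                          rng: np.random.Generator = np.random.default_rng()) -> np.ndarray:
        """Return a copy of the image with a random non-identity channel
        permutation applied (e.g. RGB -> GRB), synthesising a
        pseudo-anomalous colour pattern for self-supervised training."""
        perm = rng.permutation(3)
        while np.array_equal(perm, np.arange(3)):  # reject the identity permutation
            perm = rng.permutation(3)
        return image[:, :, perm]
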
  • J. Bennett, B. Moncur, K. Fogarty, G. Clawson, and C. Fox, “Towards open source hardware robotic woodwind: an internal duct flute player,” in International computer music conference, 2022.
    [BibTeX] [Abstract] [Download PDF]

    We present the first open source hardware (OSH) design and build of an automated robotic internal duct flute player, including an artificial lung and pitch calibration system. Using a recorder as an introductory instrument, the system is designed to be as modular as possible, enabling modification to fit further instruments across the woodwind family. Design considerations include the need to be as open to modification and accessible to as many people and instruments as possible. The system is split into two physical modules: a blowing module and a fingering module, and three software modules: actuator control, pitch calibration and musical note processing via MIDI. The system is able to perform beginner level recorder player melodies.

    @inproceedings{lincoln49154,
    booktitle = {International Computer Music Conference},
    month = {July},
    title = {Towards Open Source Hardware Robotic Woodwind: an Internal Duct Flute Player},
    author = {James Bennett and Bethan Moncur and Kyle Fogarty and Garry Clawson and Charles Fox},
    publisher = {ICMA},
    year = {2022},
    url = {https://eprints.lincoln.ac.uk/id/eprint/49154/},
    abstract = {We present the first open source hardware (OSH) design and build of an automated robotic internal duct flute player, including an artificial lung and pitch calibration system. Using a recorder as an introductory instrument, the system is designed to be as modular as possible, enabling modification to fit further instruments across the woodwind family. Design considerations include the need to be as open to modification and accessible to as many people and instruments as possible. The system is split into two physical modules: a blowing module and a fingering module, and three software modules: actuator control, pitch calibration and musical note processing via MIDI. The system is able to perform beginner level recorder player melodies.}
    }
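
    To make the modular design above concrete, here is a small hypothetical sketch of the musical note processing module: it maps MIDI note numbers to fingering patterns and a blowing command. The fingering table, bit layout, and note range are invented for illustration and are not taken from the published design.

    # Hypothetical note-processing sketch: MIDI note -> actuator commands.
    FINGERINGS = {           # MIDI note -> 8-bit hole pattern (1 = hole closed)
        72: 0b11111110,      # C5, all holes closed (assumed soprano recorder)
        74: 0b11111100,      # D5
        76: 0b11111000,      # E5
        77: 0b11110000,      # F5 (simplified; real F uses a forked fingering)
    }

    def note_on(note, airflow=0.6):
        """Return actuator commands for one note; airflow in [0, 1]."""
        pattern = FINGERINGS.get(note)
        if pattern is None:
            raise ValueError(f"note {note} outside playable range")
        fingers = [(pattern >> i) & 1 for i in range(8)]  # per-servo bits
        return {"fingers": fingers, "airflow": airflow}

    print(note_on(76))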
  • Y. Zhang, C. Hu, M. Liu, H. Luan, F. Lei, H. Cuayahuitl, and S. Yue, “Temperature-based collision detection in extreme low light condition with bio-inspired lgmd neural network,” in 2021 2nd international symposium on automation, information and computing (isaic 2021), 2022. doi:10.1088/1742-6596/2224/1/012004
    [BibTeX] [Abstract] [Download PDF]

    It is an enormous challenge for intelligent vehicles to avoid collision accidents at night because of the extremely poor light conditions. Thermal cameras can capture a temperature map at night, even with no light sources, and are ideal for collision detection in darkness. However, how to extract collision cues efficiently and effectively from the captured temperature map with limited computing resources is still a key issue to be solved. Recently, a bio-inspired neural network, the LGMD, has been successfully proposed for collision detection, but for daytime and visible light. Whether it can be used for temperature-based collision detection or not remains unknown. In this study, we propose an improved LGMD-based visual neural network for temperature-based collision detection in extreme low-light conditions. We show that the insect-inspired visual neural network can pick up the expanding temperature differences of approaching objects as long as the temperature difference against the background can be captured by a thermal sensor. Our results demonstrate that the proposed LGMD neural network can detect collisions swiftly based on the thermal modality in darkness; therefore, it can be a critical collision detection algorithm for autonomous vehicles driving at night to avoid fatal collisions with humans, animals, or other vehicles.

    @inproceedings{lincoln49117,
    booktitle = {2021 2nd International Symposium on Automation, Information and Computing (ISAIC 2021)},
    month = {April},
    title = {Temperature-based Collision Detection in Extreme Low Light Condition with Bio-inspired LGMD Neural Network},
    author = {Yicheng Zhang and Cheng Hu and Mei Liu and Hao Luan and Fang Lei and Heriberto Cuayahuitl and Shigang Yue},
    publisher = {IOP Publishing Ltd},
    year = {2022},
    doi = {10.1088/1742-6596/2224/1/012004},
    url = {https://eprints.lincoln.ac.uk/id/eprint/49117/},
    abstract = {It is an enormous challenge for intelligent vehicles to avoid collision accidents at night because of the extremely poor light conditions. Thermal cameras can capture a temperature map at night, even with no light sources, and are ideal for collision detection in darkness. However, how to extract collision cues efficiently and effectively from the captured temperature map with limited computing resources is still a key issue to be solved. Recently, a bio-inspired neural network, the LGMD, has been successfully proposed for collision detection, but for daytime and visible light. Whether it can be used for temperature-based collision detection or not remains unknown. In this study, we propose an improved LGMD-based visual neural network for temperature-based collision detection in extreme low-light conditions. We show that the insect-inspired visual neural network can pick up the expanding temperature differences of approaching objects as long as the temperature difference against the background can be captured by a thermal sensor. Our results demonstrate that the proposed LGMD neural network can detect collisions swiftly based on the thermal modality in darkness; therefore, it can be a critical collision detection algorithm for autonomous vehicles driving at night to avoid fatal collisions with humans, animals, or other vehicles.}
    }
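
    The core computation described above (picking up expanding temperature differences of an approaching object) can be sketched numerically. This is a minimal LGMD-style toy, not the paper's network: excitation from frame differencing, delayed lateral inhibition, and a thresholded membrane potential; all constants below are assumptions.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def lgmd_step(prev_frame, frame, prev_diff, w_inhib=0.7, threshold=0.02):
        diff = np.abs(frame - prev_frame)            # photoreceptor/excitation layer
        inhib = uniform_filter(prev_diff, size=3)    # delayed lateral inhibition
        s = np.maximum(diff - w_inhib * inhib, 0.0)  # summation layer
        potential = s.mean()                         # LGMD membrane potential
        return potential > threshold, diff           # spike if expansion is strong

    # Toy "approaching warm object": a growing bright square on a cool background.
    frames = []
    for r in (4, 6, 9, 13):
        f = np.zeros((64, 64))
        f[32 - r:32 + r, 32 - r:32 + r] = 1.0
        frames.append(f)

    prev_diff = np.zeros_like(frames[0])
    for t in range(1, len(frames)):
        spike, prev_diff = lgmd_step(frames[t - 1], frames[t], prev_diff)
        print(t, spike)                              # spikes as the square expands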
  • T. Choi, O. Would, A. Salazar-Gomez, and G. Cielniak, “Self-supervised representation learning for reliable robotic monitoring of fruit anomalies,” in 2022 ieee international conference on robotics and automation (icra), 2022.
    [BibTeX] [Abstract] [Download PDF]

    Data augmentation can be a simple yet powerful tool for autonomous robots to fully utilise available data for self-supervised identification of atypical scenes or objects. State-of-the-art augmentation methods arbitrarily embed “structural” peculiarity on typical images so that classifying these artefacts can provide guidance for learning representations for the detection of anomalous visual signals. In this paper, however, we argue that learning such structure-sensitive representations can be a suboptimal approach to some classes of anomaly (e.g., unhealthy fruits) which could be better recognised by a different type of visual element such as “colour”. We thus propose Channel Randomisation as a novel data augmentation method for restricting neural networks to learn encoding of “colour irregularity” whilst predicting channel-randomised images to ultimately build reliable fruit-monitoring robots identifying atypical fruit qualities. Our experiments show that (1) this colour-based alternative can better learn representations for consistently accurate identification of fruit anomalies in various fruit species, and also, (2) unlike other methods, the validation accuracy can be utilised as a criterion for early stopping of training in practice due to positive correlation between the performance in the self-supervised colour-differentiation task and the subsequent detection rate of actual anomalous fruits. Also, the proposed approach is evaluated on a new agricultural dataset, Riseholme-2021, consisting of 3.5K strawberry images gathered by a mobile robot, which we share online to encourage active agri-robotics research.

    @inproceedings{lincoln48682,
    booktitle = {2022 IEEE International Conference on Robotics and Automation (ICRA)},
    month = {May},
    title = {Self-supervised Representation Learning for Reliable Robotic Monitoring of Fruit Anomalies},
    author = {Taeyeong Choi and Owen Would and Adrian Salazar-Gomez and Grzegorz Cielniak},
    publisher = {IEEE},
    year = {2022},
    url = {https://eprints.lincoln.ac.uk/id/eprint/48682/},
    abstract = {Data augmentation can be a simple yet powerful tool for autonomous robots to fully utilise available data for self-supervised identification of atypical scenes or objects. State-of-the-art augmentation methods arbitrarily embed "structural" peculiarity on typical images so that classifying these artefacts can provide guidance for learning representations for the detection of anomalous visual signals. In this paper, however, we argue that learning such structure-sensitive representations can be a suboptimal approach to some classes of anomaly (e.g., unhealthy fruits) which could be better recognised by a different type of visual element such as "colour". We thus propose Channel Randomisation as a novel data augmentation method for restricting neural networks to learn encoding of "colour irregularity" whilst predicting channel-randomised images to ultimately build reliable fruit-monitoring robots identifying atypical fruit qualities. Our experiments show that (1) this colour-based alternative can better learn representations for consistently accurate identification of fruit anomalies in various fruit species, and also, (2) unlike other methods, the validation accuracy can be utilised as a criterion for early stopping of training in practice due to positive correlation between the performance in the self-supervised colour-differentiation task and the subsequent detection rate of actual anomalous fruits. Also, the proposed approach is evaluated on a new agricultural dataset, Riseholme-2021, consisting of 3.5K strawberry images gathered by a mobile robot, which we share online to encourage active agri-robotics research.}
    }
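
    The channel randomisation set-up described above can be sketched as a self-supervised labelling step. This is an illustrative reading of the abstract, not the released code; the exclusion of the identity permutation and the batch layout are assumptions.

    # Sketch: a binary classifier is trained to tell original images from
    # channel-randomised ones; its sensitivity to "colour irregularity" is
    # then reused to score anomalous fruit.
    import numpy as np

    def channel_randomise(batch, rng):
        """Permute RGB channels, excluding the identity permutation."""
        out = np.empty_like(batch)
        for i, img in enumerate(batch):
            perm = rng.permutation(3)
            while (perm == np.arange(3)).all():      # identity gives no signal
                perm = rng.permutation(3)
            out[i] = img[..., perm]
        return out

    rng = np.random.default_rng(0)
    normal = np.random.rand(8, 64, 64, 3).astype(np.float32)   # stand-in batch
    x = np.concatenate([normal, channel_randomise(normal, rng)])
    y = np.concatenate([np.zeros(8), np.ones(8)])  # 0 = original, 1 = randomised
    # x, y would now feed a standard CNN classifier; at test time the classifier's
    # "randomised" probability on a real image serves as the anomaly score.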
  • F. Camara and C. Fox, “Game theory, proxemics and trust for self-driving car social navigation,” in Social robot navigation: advances and evaluation (seanavbench 2022), 2022.
    [BibTeX] [Abstract] [Download PDF]

    To navigate in human social spaces, self-driving cars and other robots must show social intelligence. This involves predicting and planning around pedestrians, understanding their personal space, and establishing trust with them. The present paper gives an overview of our ongoing work on modelling and controlling human–self-driving car interactions using game theory, proxemics and trust, and unifying these fields via quantitative models and robot controllers.

    @inproceedings{lincoln49183,
    booktitle = {Social Robot Navigation: Advances and Evaluation (SEANavBench 2022)},
    month = {May},
    title = {Game Theory, Proxemics and Trust for Self-Driving Car Social Navigation},
    author = {Fanta Camara and Charles Fox},
    publisher = {Social Robot Navigation: Advances and Evaluation},
    year = {2022},
    url = {https://eprints.lincoln.ac.uk/id/eprint/49183/},
    abstract = {To navigate in human social spaces, self-driving cars and other robots must show social intelligence. This involves predicting and planning around pedestrians, understanding their personal space, and establishing trust with them. The present paper gives an overview of our ongoing work on modelling and controlling human{--}self-driving car interactions using game theory, proxemics and trust, and unifying these fields via quantitative models and robot controllers.}
    }
  • R. Godfrey, M. Rimmer, C. Headleand, and C. Fox, “Rhythmtrain: making rhythmic sight reading training fun,” in International computer music conference, 2022.
    [BibTeX] [Abstract] [Download PDF]

    Rhythmic sight-reading forms a barrier to many musicians’ progress. It is difficult to practice in isolation, as it is hard to get feedback on accuracy. Different performers have different starting skills in different styles, so it is hard to create a general curriculum for study. It can be boring to rehearse the same rhythms many times. We examine theories of motivation, engagement, and fun, and draw them together to design a novel training system, RhythmTrain. This includes consideration of dynamic difficulty, gamification and juicy design. The system uses machine learning to learn individual performers’ strengths, weaknesses, and interests, and optimises the selection of rhythms presented to maximise their engagement. An open source implementation is released as part of this publication.

    @inproceedings{lincoln49153,
    booktitle = {International Computer Music Conference},
    month = {July},
    title = {RhythmTrain: making rhythmic sight reading training fun},
    author = {Reece Godfrey and Matthew Rimmer and Chris Headleand and Charles Fox},
    publisher = {ICMA},
    year = {2022},
    url = {https://eprints.lincoln.ac.uk/id/eprint/49153/},
    abstract = {Rhythmic sight-reading forms a barrier to many musicians' progress. It is difficult to practice in isolation, as it is hard to get feedback on accuracy. Different performers have different starting skills in different styles, so it is hard to create a general curriculum for study. It can be boring to rehearse the same rhythms many times. We examine theories of motivation, engagement, and fun, and draw them together to design a novel training system, RhythmTrain. This includes consideration of dynamic difficulty, gamification and juicy design. The system uses machine learning to learn individual performers' strengths, weaknesses, and interests, and optimises the selection of rhythms presented to maximise their engagement. An open source implementation is released as part of this publication.}
    }
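
    The machine-learnt selection of rhythms to maximise engagement, as described above, could for instance be framed as a bandit problem. The epsilon-greedy learner below is a hypothetical sketch of that idea, not the system's actual algorithm.

    # Hypothetical sketch: epsilon-greedy selection over a rhythm pool,
    # keeping a running per-rhythm engagement estimate.
    import random

    class RhythmSelector:
        def __init__(self, rhythms, epsilon=0.1):
            self.epsilon = epsilon
            self.value = {r: 0.0 for r in rhythms}   # engagement estimates
            self.count = {r: 0 for r in rhythms}

        def pick(self):
            if random.random() < self.epsilon:       # explore unfamiliar rhythms
                return random.choice(list(self.value))
            return max(self.value, key=self.value.get)

        def feedback(self, rhythm, engagement):      # e.g. accuracy x enjoyment
            self.count[rhythm] += 1
            n = self.count[rhythm]
            self.value[rhythm] += (engagement - self.value[rhythm]) / n

    sel = RhythmSelector(["4/4 straight", "3/4 waltz", "swing 8ths"])
    r = sel.pick()
    sel.feedback(r, engagement=0.8)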
  • H. A. Montes and G. Cielniak, “Multiple broccoli head detection and tracking in 3d point clouds for autonomous harvesting,” in Aaai – ai for agriculture and food systems, 2022.
    [BibTeX] [Abstract] [Download PDF]

    This paper explores a tracking method for broccoli heads that combines a Particle Filter and 3D feature detectors to track multiple crops in a sequence of 3D data frames. The tracking accuracy is verified based on a data association method that matches detections with tracks over each frame. The particle filter incorporates a simple motion model to produce the posterior particle distribution, and a similarity model as a probability function to measure the tracking accuracy. The method is tested with datasets of two broccoli varieties collected in planted fields from two different countries. Our evaluation shows the tracking method reduces the number of false negatives produced by the detectors on their own. In addition, the method accurately detects and tracks the 3D locations of broccoli heads relative to the vehicle at high frame rates.

    @inproceedings{lincoln48675,
    booktitle = {AAAI - AI for Agriculture and Food Systems},
    month = {February},
    title = {Multiple broccoli head detection and tracking in 3D point clouds for autonomous harvesting},
    author = {Hector A. Montes and Grzegorz Cielniak},
    year = {2022},
    url = {https://eprints.lincoln.ac.uk/id/eprint/48675/},
    abstract = {This paper explores a tracking method for broccoli heads that combines a Particle Filter and 3D feature detectors to track multiple crops in a sequence of 3D data frames. The tracking accuracy is verified based on a data association method that matches detections with tracks over each frame. The particle filter incorporates a simple motion model to produce the posterior particle distribution, and a similarity model as a probability function to measure the tracking accuracy. The method is tested with datasets of two broccoli varieties collected in planted fields from two different countries. Our evaluation shows the tracking method reduces the number of false negatives produced by the detectors on their own. In addition, the method accurately detects and tracks the 3D locations of broccoli heads relative to the vehicle at high frame rates.}
    }
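
    The recipe in the abstract above (a particle filter with a simple motion model and a similarity model as the weighting function) can be sketched for a single tracked head as follows. The Gaussian similarity, noise levels, and motion drift are assumptions for illustration.

    import numpy as np

    rng = np.random.default_rng(1)
    N = 500
    particles = rng.normal([0.0, 0.0, 0.3], 0.05, size=(N, 3))  # initial guess (m)

    def similarity(p, detection):
        """Assumed likelihood: Gaussian in distance to the 3D detection."""
        d = np.linalg.norm(p - detection, axis=1)
        return np.exp(-0.5 * (d / 0.03) ** 2)

    for detection in [np.array([0.02, 0.0, 0.31]), np.array([0.05, 0.01, 0.32])]:
        particles += rng.normal([0.02, 0.0, 0.0], 0.01, size=(N, 3))  # motion model
        w = similarity(particles, detection)
        w /= w.sum()
        idx = rng.choice(N, size=N, p=w)           # importance resampling
        particles = particles[idx]
        print("track estimate:", particles.mean(axis=0))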

2021

  • N. Andreakos, S. Yue, and V. Cutsuridis, “Quantitative investigation of memory recall performance of a computational microcircuit model of the hippocampus,” Brain informatics, vol. 8, p. 9, 2021. doi:10.1186/s40708-021-00131-7
    [BibTeX] [Abstract] [Download PDF]

    Memory, the process of encoding, storing, and maintaining information over time in order to influence future actions, is very important in our lives. Losing it comes at a great cost. Deciphering the biophysical mechanisms leading to recall improvement should thus be of utmost importance. In this study we embarked on the quest to computationally improve the recall performance of a bio-inspired microcircuit model of the mammalian hippocampus, a brain region responsible for the storage and recall of short-term declarative memories. The model consisted of excitatory and inhibitory cells. The cell properties followed closely what is currently known from the experimental neurosciences. Cells’ firing was timed to a theta oscillation paced by two distinct neuronal populations exhibiting highly regular bursting activity, one tightly coupled to the trough and the other to the peak of theta. An excitatory input provided to excitatory cells context and timing information for retrieval of previously stored memory patterns. Inhibition to excitatory cells acted as a non-specific global threshold machine that removed spurious activity during recall. To systematically evaluate the model’s recall performance against stored patterns, pattern overlap, network size and active cells per pattern, we selectively modulated feedforward and feedback excitatory and inhibitory pathways targeting specific excitatory and inhibitory cells. Of the different model variations (modulated pathways) tested, ‘model 1’ recall quality was excellent across all conditions. ‘Model 2’ recall was the worst. The number of ‘active cells’ representing a memory pattern was the determining factor in improving the model’s recall performance regardless of the number of stored patterns and overlap between them. As ‘active cells per pattern’ decreased, the model’s memory capacity increased, interference effects between stored patterns decreased, and recall quality improved.

    @article{lincoln44717,
    volume = {8},
    month = {December},
    author = {Nikolas Andreakos and Shigang Yue and Vassilis Cutsuridis},
    title = {Quantitative Investigation Of Memory Recall Performance Of A Computational Microcircuit Model Of The Hippocampus},
    publisher = {SpringerOpen},
    year = {2021},
    journal = {Brain Informatics},
    doi = {10.1186/s40708-021-00131-7},
    pages = {9},
    url = {https://eprints.lincoln.ac.uk/id/eprint/44717/},
    abstract = {Memory, the process of encoding, storing, and maintaining information over time in order to influence future actions, is very important in our lives. Losing it comes at a great cost. Deciphering the biophysical mechanisms leading to recall improvement should thus be of utmost importance. In this study we embarked on the quest to computationally improve the recall performance of a bio-inspired microcircuit model of the mammalian hippocampus, a brain region responsible for the storage and recall of short-term declarative memories. The model consisted of excitatory and inhibitory cells. The cell properties followed closely what is currently known from the experimental neurosciences. Cells' firing was timed to a theta oscillation paced by two distinct neuronal populations exhibiting highly regular bursting activity, one tightly coupled to the trough and the other to the peak of theta. An excitatory input provided to excitatory cells context and timing information for retrieval of previously stored memory patterns. Inhibition to excitatory cells acted as a non-specific global threshold machine that removed spurious activity during recall. To systematically evaluate the model's recall performance against stored patterns, pattern overlap, network size and active cells per pattern, we selectively modulated feedforward and feedback excitatory and inhibitory pathways targeting specific excitatory and inhibitory cells. Of the different model variations (modulated pathways) tested, 'model 1' recall quality was excellent across all conditions. 'Model 2' recall was the worst. The number of 'active cells' representing a memory pattern was the determining factor in improving the model's recall performance regardless of the number of stored patterns and overlap between them. As 'active cells per pattern' decreased, the model's memory capacity increased, interference effects between stored patterns decreased, and recall quality improved.}
    }
  • D. D. Barrie, M. Pandya, H. Pandya, M. Hanheide, and K. Elgeneidy, “A deep learning method for vision based force prediction of a soft fin ray gripper using simulation data,” Frontiers in robotics and ai, vol. 8, p. 631371, 2021. doi:10.3389/frobt.2021.631371
    [BibTeX] [Abstract] [Download PDF]

    Soft robotic grippers are increasingly desired in applications that involve grasping of complex and deformable objects. However, their flexible nature and non-linear dynamics make modelling and control difficult. Numerical techniques such as Finite Element Analysis (FEA) present an accurate way of modelling complex deformations. However, FEA approaches are computationally expensive and consequently challenging to employ for real-time control tasks. Existing analytical techniques simplify the modelling by approximating the deformed gripper geometry. Although this approach is less computationally demanding, it is limited in design scope and can lead to larger estimation errors. In this paper, we present a learning-based framework that is able to predict contact forces as well as stress distribution from soft Fin Ray Effect (FRE) finger images in real-time. These images are used to learn internal representations for deformations using a deep neural encoder, which are further decoded to contact forces and stress maps using separate branches. The entire network is jointly learned in an end-to-end fashion. In order to address the challenge of having sufficient labelled data for training, we employ FEA to generate simulated images to supervise our framework. This leads to accurate prediction, faster inference and the availability of large and diverse data for better generalisability. Furthermore, our approach is able to predict a detailed stress distribution that can guide grasp planning, which would be particularly useful for delicate objects. Our proposed approach is validated by comparing the predicted contact forces to the computed ground-truth forces from FEA as well as a real force sensor. We rigorously evaluate the performance of our approach under variations in contact point, object material, object shape, viewing angle, and level of occlusion.

    @article{lincoln45569,
    volume = {8},
    month = {May},
    author = {Daniel De Barrie and Manjari Pandya and Harit Pandya and Marc Hanheide and Khaled Elgeneidy},
    title = {A Deep Learning Method for Vision Based Force Prediction of a Soft Fin Ray Gripper Using Simulation Data},
    publisher = {Frontiers Media},
    year = {2021},
    journal = {Frontiers in Robotics and AI},
    doi = {10.3389/frobt.2021.631371},
    pages = {631371},
    url = {https://eprints.lincoln.ac.uk/id/eprint/45569/},
    abstract = {Soft robotic grippers are increasingly desired in applications that involve grasping of complex and deformable objects. However, their flexible nature and non-linear dynamics make modelling and control difficult. Numerical techniques such as Finite Element Analysis (FEA) present an accurate way of modelling complex deformations. However, FEA approaches are computationally expensive and consequently challenging to employ for real-time control tasks. Existing analytical techniques simplify the modelling by approximating the deformed gripper geometry. Although this approach is less computationally demanding, it is limited in design scope and can lead to larger estimation errors. In this paper, we present a learning-based framework that is able to predict contact forces as well as stress distribution from soft Fin Ray Effect (FRE) finger images in real-time. These images are used to learn internal representations for deformations using a deep neural encoder, which are further decoded to contact forces and stress maps using separate branches. The entire network is jointly learned in an end-to-end fashion. In order to address the challenge of having sufficient labelled data for training, we employ FEA to generate simulated images to supervise our framework. This leads to accurate prediction, faster inference and the availability of large and diverse data for better generalisability. Furthermore, our approach is able to predict a detailed stress distribution that can guide grasp planning, which would be particularly useful for delicate objects. Our proposed approach is validated by comparing the predicted contact forces to the computed ground-truth forces from FEA as well as a real force sensor. We rigorously evaluate the performance of our approach under variations in contact point, object material, object shape, viewing angle, and level of occlusion.}
    }
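
    The architecture described above (a shared deep encoder decoded by separate force and stress branches, trained jointly end-to-end) can be sketched in PyTorch. Layer sizes, image resolution, and losses below are assumptions, not the published network.

    import torch
    import torch.nn as nn

    class ForceStressNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(        # shared encoder over finger images
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            )
            self.force_head = nn.Sequential(     # branch 1: scalar contact force
                nn.Flatten(), nn.Linear(32 * 16 * 16, 64), nn.ReLU(), nn.Linear(64, 1),
            )
            self.stress_head = nn.Sequential(    # branch 2: dense stress map
                nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
                nn.ConvTranspose2d(16, 1, 2, stride=2),
            )

        def forward(self, x):
            z = self.encoder(x)
            return self.force_head(z), self.stress_head(z)

    net = ForceStressNet()
    img = torch.randn(4, 1, 64, 64)              # stand-in FEA-rendered images
    force, stress = net(img)
    loss = nn.functional.mse_loss(force, torch.zeros(4, 1)) \
         + nn.functional.mse_loss(stress, torch.zeros(4, 1, 64, 64))
    loss.backward()                              # joint end-to-end training step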
  • I. Sassoon, N. Kokciyan, S. Modgil, and S. Parsons, “Argumentation schemes for clinical decision support,” Argument & computation, 2021. doi:10.3233/AAC-200550
    [BibTeX] [Abstract] [Download PDF]

    This paper demonstrates how argumentation schemes can be used in decision support systems that help clinicians in making treatment decisions. The work builds on the use of computational argumentation, a rigorous approach to reasoning with complex data that places strong emphasis on being able to justify and explain the decisions that are recommended. The main contribution of the paper is to present a novel set of specialised argumentation schemes that can be used in the context of a clinical decision support system to assist in reasoning about what treatments to offer. These schemes provide a mechanism for capturing clinical reasoning in such a way that it can be handled by the formal reasoning mechanisms of formal argumentation. The paper describes how the integration between argumentation schemes and formal argumentation may be carried out, sketches how this is achieved by an implementation that we have created, and illustrates the overall process on a small set of case studies.

    @article{lincoln46566,
    month = {August},
    title = {Argumentation Schemes for Clinical Decision Support},
    author = {Isabel Sassoon and Nadin Kokciyan and Sanjay Modgil and Simon Parsons},
    publisher = {IOS Press},
    year = {2021},
    doi = {10.3233/AAC-200550},
    journal = {Argument \& Computation},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46566/},
    abstract = {This paper demonstrates how argumentation schemes can be used in decision support systems that help clinicians in making treatment decisions. The work builds on the use of computational argumentation, a rigorous approach to reasoning with complex data that places strong emphasis on being able to justify and explain the decisions that are recommended. The main contribution of the paper is to present a novel set of specialised argumentation schemes that can be used in the context of a clinical decision support system to assist in reasoning about what treatments to offer. These schemes provide a mechanism for capturing clinical reasoning in such a way that it can be handled by the formal reasoning mechanisms of formal argumentation. The paper describes how the integration between argumentation schemes and formal argumentation may be carried out, sketches how this is achieved by an implementation that we have created, and illustrates the overall process on a small set of case studies.}
    }
  • Q. Fu, X. Sun, T. Liu, C. Hu, and S. Yue, “Robustness of bio-inspired visual systems for collision prediction in critical robot traffic,” Frontiers in robotics and ai, vol. 8, p. 529872, 2021. doi:10.3389/frobt.2021.529872
    [BibTeX] [Abstract] [Download PDF]

    Collision prevention poses a major research and development challenge for intelligent robots and vehicles. This paper investigates the robustness of two state-of-the-art neural network models inspired by the locust’s LGMD-1 and LGMD-2 visual pathways as fast, low-energy collision alert systems in critical scenarios. Although both neural circuits have been studied and modelled intensively, their capability and robustness in real-time critical traffic scenarios where physical crashes actually happen have never been systematically investigated, owing to the difficulty and high cost of replicating risky traffic with many crash occurrences. To close this gap, we apply a recently published robotic platform to test the LGMD-inspired visual systems in physical implementations of critical traffic scenarios at low cost and with high flexibility. The proposed visual systems are applied as the only collision-sensing modality in each micro mobile robot, which conducts avoidance by abrupt braking. The simulated traffic resembles on-road sections, including intersection and highway scenes, wherein the roadmaps are rendered by coloured, artificial pheromones upon a wide LCD screen acting as the ground of an arena. The robots, with light sensors at the bottom, can recognise the lanes and signals and tightly follow paths. The emphasis herein is laid on corroborating the robustness of the LGMD neural system models in different dynamic robot scenes in raising timely alerts of potential crashes. This study complements previous experimentation on such bio-inspired computations for collision prediction in more critical physical scenarios, and for the first time demonstrates the robustness of LGMD-inspired visual systems in critical traffic towards a reliable collision alert system under constrained computation power. This paper also exhibits a novel, tractable, and affordable robotic approach to evaluating online visual systems in dynamic scenes.

    @article{lincoln46873,
    volume = {8},
    month = {August},
    author = {Qinbing Fu and Xuelong Sun and Tian Liu and Cheng Hu and Shigang Yue},
    title = {Robustness of Bio-Inspired Visual Systems for Collision Prediction in Critical Robot Traffic},
    publisher = {Frontiers Media},
    year = {2021},
    journal = {Frontiers in Robotics and AI},
    doi = {10.3389/frobt.2021.529872},
    pages = {529872},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46873/},
    abstract = {Collision prevention poses a major research and development challenge for intelligent robots and vehicles. This paper investigates the robustness of two state-of-the-art neural network models inspired by the locust's LGMD-1 and LGMD-2 visual pathways as fast, low-energy collision alert systems in critical scenarios. Although both neural circuits have been studied and modelled intensively, their capability and robustness in real-time critical traffic scenarios where physical crashes actually happen have never been systematically investigated, owing to the difficulty and high cost of replicating risky traffic with many crash occurrences. To close this gap, we apply a recently published robotic platform to test the LGMD-inspired visual systems in physical implementations of critical traffic scenarios at low cost and with high flexibility. The proposed visual systems are applied as the only collision-sensing modality in each micro mobile robot, which conducts avoidance by abrupt braking. The simulated traffic resembles on-road sections, including intersection and highway scenes, wherein the roadmaps are rendered by coloured, artificial pheromones upon a wide LCD screen acting as the ground of an arena. The robots, with light sensors at the bottom, can recognise the lanes and signals and tightly follow paths. The emphasis herein is laid on corroborating the robustness of the LGMD neural system models in different dynamic robot scenes in raising timely alerts of potential crashes. This study complements previous experimentation on such bio-inspired computations for collision prediction in more critical physical scenarios, and for the first time demonstrates the robustness of LGMD-inspired visual systems in critical traffic towards a reliable collision alert system under constrained computation power. This paper also exhibits a novel, tractable, and affordable robotic approach to evaluating online visual systems in dynamic scenes.}
    }
  • S. Brewer, S. Pearson, R. Maull, P. Godsiff, J. G. Frey, A. Zisman, G. Parr, A. McMillan, S. Cameron, H. Blackmore, L. Manning, and L. Bidaut, “A trust framework for digital food systems.,” Nature food, vol. 2, p. 543–545, 2021. doi:10.1038/s43016-021-00346-1
    [BibTeX] [Abstract] [Download PDF]

    The full potential for a digitally transformed food system has not yet been realised – or indeed imagined. Data flows across, and within, vast but largely decentralised and tiered supply chain networks. Data defines internal inputs, bi-directional flows of food, information and finance within the supply chain, and intended and extraneous outputs. Data exchanges can orchestrate critical network dependencies, define standards and underpin food safety. Poore and Nemecek hypothesised that digital technologies could drive system transformation for the public good by empowering personalised selection of foods with, for example, lower intrinsic greenhouse gas emissions. Here, we contend that the full potential of a digitally transformed food system can only be realised if permissioned and trusted data can flow seamlessly through complex, multi-lateral supply chains, effectively from farms through to the consumer.

    @article{lincoln47264,
    volume = {2},
    month = {August},
    author = {Steve Brewer and Simon Pearson and Roger Maull and Phil Godsiff and Jeremy G. Frey and Andrea Zisman and Gerard Parr and Andrew McMillan and Sarah Cameron and Hannah Blackmore and Louise Manning and Luc Bidaut},
    title = {A trust framework for digital food systems.},
    publisher = {Nature Research},
    year = {2021},
    journal = {Nature Food},
    doi = {10.1038/s43016-021-00346-1},
    pages = {543--545},
    url = {https://eprints.lincoln.ac.uk/id/eprint/47264/},
    abstract = {The full potential for a digitally transformed food system has not yet been realised - or indeed imagined. Data flows across, and within, vast but largely decentralised and tiered supply chain networks. Data defines internal inputs, bi-directional flows of food, information and finance within the supply chain, and intended and extraneous outputs. Data exchanges can orchestrate critical network dependencies, define standards and underpin food safety. Poore and Nemecek hypothesised that digital technologies could drive system transformation for the public good by empowering personalised selection of foods with, for example, lower intrinsic greenhouse gas emissions. Here, we contend that the full potential of a digitally transformed food system can only be realised if permissioned and trusted data can flow seamlessly through complex, multi-lateral supply chains, effectively from farms through to the consumer.}
    }
  • L. Gong, M. Yu, S. Jiang, V. Cutsuridis, S. Kollias, and S. Pearson, “Studies of evolutionary algorithms for the reduced tomgro model calibration for modelling tomato yields,” Smart agricultural technology, vol. 1, p. 100011, 2021. doi:10.1016/j.atech.2021.100011
    [BibTeX] [Abstract] [Download PDF]

    The reduced Tomgro model is one of the popular biophysical models; it can reflect the actual growth process and model the yields of tomato based on environmental parameters in a greenhouse. It is commonly integrated with the greenhouse environmental control system for optimally controlling environmental parameters to maximize tomato growth/yields under acceptable energy consumption. In this work, we compare three mainstream evolutionary algorithms (genetic algorithm (GA), particle swarm optimization (PSO), and differential evolution (DE)) for calibrating the reduced Tomgro model to model the tomato mature fruit dry matter (DM) weights. The different evolutionary algorithms have been applied to calibrate 14 key parameters of the reduced Tomgro model, and the performance of the calibrated Tomgro models based on different evolutionary algorithms has been evaluated on three datasets obtained from a real tomato grower, with each dataset containing greenhouse environmental parameters (e.g., carbon dioxide concentration, temperature, photosynthetically active radiation (PAR)) and tomato yield information for a particular greenhouse for one year. Multiple metrics (root mean square errors (RMSEs), relative root mean square errors (r-RMSEs), and mean average errors (MAEs)) between actual DM weights and model-simulated ones for all three datasets are used to validate the performance of the calibrated reduced Tomgro model.

    @article{lincoln46525,
    volume = {1},
    month = {December},
    author = {Liyun Gong and Miao Yu and Shouyong Jiang and Vassilis Cutsuridis and Stefanos Kollias and Simon Pearson},
    title = {Studies of evolutionary algorithms for the reduced Tomgro model calibration for modelling tomato yields},
    publisher = {Elsevier},
    year = {2021},
    journal = {Smart Agricultural Technology},
    doi = {10.1016/j.atech.2021.100011},
    pages = {100011},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46525/},
    abstract = {The reduced Tomgro model is one of the popular biophysical models; it can reflect the actual growth process and model the yields of tomato based on environmental parameters in a greenhouse. It is commonly integrated with the greenhouse environmental control system for optimally controlling environmental parameters to maximize tomato growth/yields under acceptable energy consumption. In this work, we compare three mainstream evolutionary algorithms (genetic algorithm (GA), particle swarm optimization (PSO), and differential evolution (DE)) for calibrating the reduced Tomgro model to model the tomato mature fruit dry matter (DM) weights. The different evolutionary algorithms have been applied to calibrate 14 key parameters of the reduced Tomgro model, and the performance of the calibrated Tomgro models based on different evolutionary algorithms has been evaluated on three datasets obtained from a real tomato grower, with each dataset containing greenhouse environmental parameters (e.g., carbon dioxide concentration, temperature, photosynthetically active radiation (PAR)) and tomato yield information for a particular greenhouse for one year. Multiple metrics (root mean square errors (RMSEs), relative root mean square errors (r-RMSEs), and mean average errors (MAEs)) between actual DM weights and model-simulated ones for all three datasets are used to validate the performance of the calibrated reduced Tomgro model.}
    }
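
    The calibration loop described above reduces to black-box optimisation of model parameters against measured dry-matter weights. The sketch below shows the DE variant using SciPy; the toy surrogate model, four parameters instead of fourteen, and the bounds are stand-ins for the real reduced Tomgro model.

    import numpy as np
    from scipy.optimize import differential_evolution

    def reduced_tomgro(params, env):
        """Placeholder: a real implementation would integrate the model ODEs."""
        return env @ params[:3] + params[3]          # toy linear surrogate

    env = np.random.rand(100, 3)                     # CO2, temperature, PAR
    observed = env @ np.array([1.0, 2.0, 0.5]) + 3.0

    def rmse(params):
        sim = reduced_tomgro(params, env)
        return float(np.sqrt(np.mean((sim - observed) ** 2)))

    bounds = [(-5, 5)] * 4                           # 14 bounds in the real model
    result = differential_evolution(rmse, bounds, seed=0, maxiter=200)
    print(result.x, result.fun)                      # calibrated parameters, RMSE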
  • L. Freund, S. Al-Majeed, and A. Millard, “Complexity space modelling for industrial manufacturing systems,” International journal of computing and digital systems, 2021.
    [BibTeX] [Abstract] [Download PDF]

    The static and dynamic complexity of an industrial engineered system are integrated in a complexity space modelling approach, where information complexity boundaries expand over time and serve as an indicator for system instability in a static complexity space. In a first step, model-based static and dynamic conceptions of complexity are introduced and described. The necessary capabilities are theoretically demonstrated, alongside a set of assumptions concerning the behavior of industrial system complexity and its functions as a core foundation for the proposed complexity space model. In a second step, the successful application of the proposed modelling approach on a real-world industrial system is presented. Case study results are briefly presented and discussed as a first proof of concept for the general applicability of the proposed modelling approach for current and future industrial systems. In a final step a short research outlook is provided.

    @article{lincoln46666,
    month = {July},
    title = {Complexity Space Modelling for Industrial Manufacturing Systems},
    author = {Lucas Freund and Salah Al-Majeed and Alan Millard},
    publisher = {University of Bahrain},
    year = {2021},
    journal = {International Journal of Computing and Digital Systems},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46666/},
    abstract = {The static and dynamic complexity of an industrial engineered system are integrated in a complexity space modelling approach, where information complexity boundaries expand over time and serve as an indicator for system instability in a static complexity space. In a first step, model-based static and dynamic conceptions of complexity are introduced and described. The necessary capabilities are theoretically demonstrated, alongside a set of assumptions concerning the behavior of industrial system complexity and its functions as a core foundation for the proposed complexity space model. In a second step, the successful application of the proposed modelling approach on a real-world industrial system is presented. Case study results are briefly presented and discussed as a first proof of concept for the general applicability of the proposed modelling approach for current and future industrial systems. In a final step a short research outlook is provided.}
    }
  • L. Gong, M. Yu, S. Jiang, V. Cutsuridis, and S. Pearson, “Deep learning based prediction on greenhouse crop yield combined tcn and rnn,” Sensors, vol. 21, iss. 13, p. 4537, 2021. doi:10.3390/s21134537
    [BibTeX] [Abstract] [Download PDF]

    Currently, greenhouses are widely applied for plant growth, and environmental parameters can also be controlled in the modern greenhouse to guarantee the maximum crop yield. In order to optimally control greenhouses’ environmental parameters, one indispensable requirement is to accurately predict crop yields based on given environmental parameter settings. In addition, crop yield forecasting in greenhouses plays an important role in greenhouse farming planning and management, which allows cultivators and farmers to utilize the yield prediction results to make knowledgeable management and financial decisions. It is thus important to accurately predict the crop yield in a greenhouse considering the benefits that can be brought by accurate greenhouse crop yield prediction. In this work, we have developed a new greenhouse crop yield prediction technique by combining two state-of-the-art networks for temporal sequence processing: the temporal convolutional network (TCN) and the recurrent neural network (RNN). Comprehensive evaluations of the proposed algorithm have been made on multiple datasets obtained from multiple real greenhouse sites for tomato growing. Based on a statistical analysis of the root mean square errors (RMSEs) between the predicted and actual crop yields, it is shown that the proposed approach achieves more accurate yield prediction performance than both traditional machine learning methods and other classical deep neural networks. Moreover, the experimental study also shows that historical yield information is the most important factor for accurately predicting future crop yields.

    @article{lincoln46522,
    volume = {21},
    number = {13},
    month = {July},
    author = {Liyun Gong and Miao Yu and Shouyong Jiang and Vassilis Cutsuridis and Simon Pearson},
    title = {Deep Learning Based Prediction on Greenhouse Crop Yield Combined TCN and RNN},
    publisher = {MDPI},
    year = {2021},
    journal = {Sensors},
    doi = {10.3390/s21134537},
    pages = {4537},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46522/},
    abstract = {Currently, greenhouses are widely applied for plant growth, and environmental parameters can also be controlled in the modern greenhouse to guarantee the maximum crop yield. In order to optimally control greenhouses' environmental parameters, one indispensable requirement is to accurately predict crop yields based on given environmental parameter settings. In addition, crop yield forecasting in greenhouses plays an important role in greenhouse farming planning and management, which allows cultivators and farmers to utilize the yield prediction results to make knowledgeable management and financial decisions. It is thus important to accurately predict the crop yield in a greenhouse considering the benefits that can be brought by accurate greenhouse crop yield prediction. In this work, we have developed a new greenhouse crop yield prediction technique by combining two state-of-the-art networks for temporal sequence processing{--}the temporal convolutional network (TCN) and the recurrent neural network (RNN). Comprehensive evaluations of the proposed algorithm have been made on multiple datasets obtained from multiple real greenhouse sites for tomato growing. Based on a statistical analysis of the root mean square errors (RMSEs) between the predicted and actual crop yields, it is shown that the proposed approach achieves more accurate yield prediction performance than both traditional machine learning methods and other classical deep neural networks. Moreover, the experimental study also shows that historical yield information is the most important factor for accurately predicting future crop yields.}
    }
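
    A compact sketch of the combined TCN and RNN idea above is given below: a causal dilated convolution stack feeding a GRU, ending in a yield regression. The layer sizes, feature count, and fusion order are assumptions rather than the paper's exact topology.

    import torch
    import torch.nn as nn

    class CausalConv1d(nn.Module):
        """Dilated 1-D convolution that only looks at past time steps."""
        def __init__(self, c_in, c_out, k=3, dilation=1):
            super().__init__()
            self.pad = (k - 1) * dilation
            self.conv = nn.Conv1d(c_in, c_out, k, padding=self.pad, dilation=dilation)

        def forward(self, x):
            return self.conv(x)[..., :-self.pad]     # chop right padding => causal

    class TCNRNNYield(nn.Module):
        def __init__(self, n_features=6, hidden=32):
            super().__init__()
            self.tcn = nn.Sequential(                # two causal dilated conv blocks
                CausalConv1d(n_features, hidden, dilation=1), nn.ReLU(),
                CausalConv1d(hidden, hidden, dilation=2), nn.ReLU(),
            )
            self.rnn = nn.GRU(hidden, hidden, batch_first=True)
            self.out = nn.Linear(hidden, 1)

        def forward(self, x):                        # x: (batch, time, features)
            h = self.tcn(x.transpose(1, 2))          # convolve along the time axis
            _, last = self.rnn(h.transpose(1, 2))    # summarise with a GRU
            return self.out(last[-1])                # predicted yield

    model = TCNRNNYield()
    weeks = torch.randn(4, 52, 6)                    # stand-in environment history
    print(model(weeks).shape)                        # torch.Size([4, 1])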
  • H. Isakhani, C. Xiong, W. Chen, and S. Yue, “Towards locust-inspired gliding wing prototypes for micro aerial vehicle applications,” Royal society open science, vol. 8, iss. 6, p. 202253, 2021. doi:10.1098/rsos.202253
    [BibTeX] [Abstract] [Download PDF]

    In aviation, gliding is the most economical mode of flight, explicitly appreciated by natural fliers. They achieve it with high-performance wing structures evolved over millions of years in nature. Among other prehistoric beings, the locust (Schistocerca gregaria) is a perfect example of such a natural glider, capable of enduring transatlantic flights that could inspire a practical solution for achieving similar capabilities in micro aerial vehicles. This study investigates the effects of haemolymph on the flexibility of several flying insect wings, further showcasing the superior structural performance of locusts. However, biomimicry of such aerodynamic and structural properties is hindered by the limitations of modern as well as conventional fabrication technologies in terms of availability and precision, respectively. Therefore, here we adopt finite element analysis (FEA) to investigate the manufacturing-worthiness of a 3D digitally reconstructed locust tandem wing, and propose novel combinations of economical and readily available manufacturing methods to develop the model into prototypes that are structurally similar to their counterparts in nature while maintaining the optimum gliding ratio previously obtained in the aerodynamic simulations. The latter is evaluated in a future study, and the former is assessed here via an experimental analysis of the flexural stiffness and maximum deformation rate. Ultimately, a comparative study of the mechanical properties reveals the feasibility of each prototype for gliding micro aerial vehicle applications.

    @article{lincoln47017,
    volume = {8},
    number = {6},
    month = {June},
    author = {Hamid Isakhani and Caihua Xiong and Wenbin Chen and Shigang Yue},
    title = {Towards locust-inspired gliding wing prototypes for micro aerial vehicle applications},
    publisher = {The Royal Society},
    year = {2021},
    journal = {Royal Society Open Science},
    doi = {10.1098/rsos.202253},
    pages = {202253},
    url = {https://eprints.lincoln.ac.uk/id/eprint/47017/},
    abstract = {In aviation, gliding is the most economical mode of flight, explicitly appreciated by natural fliers. They achieve it with high-performance wing structures evolved over millions of years in nature. Among other prehistoric beings, the locust (Schistocerca gregaria) is a perfect example of such a natural glider, capable of enduring transatlantic flights that could inspire a practical solution for achieving similar capabilities in micro aerial vehicles. This study investigates the effects of haemolymph on the flexibility of several flying insect wings, further showcasing the superior structural performance of locusts. However, biomimicry of such aerodynamic and structural properties is hindered by the limitations of modern as well as conventional fabrication technologies in terms of availability and precision, respectively. Therefore, here we adopt finite element analysis (FEA) to investigate the manufacturing-worthiness of a 3D digitally reconstructed locust tandem wing, and propose novel combinations of economical and readily available manufacturing methods to develop the model into prototypes that are structurally similar to their counterparts in nature while maintaining the optimum gliding ratio previously obtained in the aerodynamic simulations. The latter is evaluated in a future study, and the former is assessed here via an experimental analysis of the flexural stiffness and maximum deformation rate. Ultimately, a comparative study of the mechanical properties reveals the feasibility of each prototype for gliding micro aerial vehicle applications.}
    }
  • K. Munir, M. Ghafoor, M. Khafagy, and H. Ihshaish, “Agrosupportanalytics: a cloud-based complaints management and decision support system for sustainable farming in egypt,” Egyptian informatics journal, 2021. doi:10.1016/j.eij.2021.06.002
    [BibTeX] [Abstract] [Download PDF]

    Sustainable farming requires up-to-date advice on crop diseases, patterns, and adequate prevention actions to face developing circumstances. Currently, in developing countries like Egypt, farmers’ access to such information is extremely limited, as agricultural support is either unavailable, inconsistent, or unreliable. The presented Cloud-based Complaints Management and Decision Support System for Sustainable Farming in Egypt, named AgroSupportAnalytics, aims to resolve both the lack of support and advice for farmers and the inconsistencies of the current manual approach provided by agricultural experts. The key contribution is the development of an automated complaint management and decision support strategy, on the basis of extensive research on requirements analysis tailored for Egypt. The solution is grounded in the application of knowledge discovery and analysis to agricultural data and farmers’ complaints, deployed on a Cloud platform, to provide farming stakeholders in Egypt with timely and suitable support. This paper presents the overall system architectural framework along with the information and storage services, which have been based on the requirements specification phases of the project along with the historical datasets of the past 10 years of farmers’ complaints and enquiries in Egypt.

    @article{lincoln47917,
    month = {June},
    title = {AgroSupportAnalytics: A Cloud-based Complaints Management and Decision Support System for Sustainable Farming in Egypt},
    author = {Kamran Munir and Mubeen Ghafoor and Mohamed Khafagy and Hisham Ihshaish},
    publisher = {Elsevier},
    year = {2021},
    doi = {10.1016/j.eij.2021.06.002},
    journal = {Egyptian Informatics Journal},
    url = {https://eprints.lincoln.ac.uk/id/eprint/47917/},
    abstract = {Sustainable farming requires up-to-date advice on crop diseases, patterns, and adequate prevention actions to face developing circumstances. Currently, in developing countries like Egypt, farmers' access to such information is extremely limited, as agricultural support is either unavailable, inconsistent, or unreliable. The presented Cloud-based Complaints Management and Decision Support System for Sustainable Farming in Egypt, named AgroSupportAnalytics, aims to resolve both the lack of support and advice for farmers and the inconsistencies of the current manual approach provided by agricultural experts. The key contribution is the development of an automated complaint management and decision support strategy, on the basis of extensive research on requirements analysis tailored for Egypt. The solution is grounded in the application of knowledge discovery and analysis to agricultural data and farmers' complaints, deployed on a Cloud platform, to provide farming stakeholders in Egypt with timely and suitable support. This paper presents the overall system architectural framework along with the information and storage services, which have been based on the requirements specification phases of the project along with the historical datasets of the past 10 years of farmers' complaints and enquiries in Egypt.}
    }
  • M. Al-Khafajiy, S. Otoum, T. Baker, M. Asim, Z. Maamar, M. Aloqaily, M. Taylor, and M. Randles, “Intelligent control and security of fog resources in healthcare systems via a cognitive fog model,” Acm transactions on internet technology, vol. 21, iss. 3, p. 1–23, 2021. doi:10.1145/3382770
    [BibTeX] [Abstract] [Download PDF]

    There have been significant advances in the field of Internet of Things (IoT) recently, which have not always considered security or data security concerns: A high degree of security is required when considering the sharing of medical data over networks. In most IoT-based systems, especially those within smart-homes and smart-cities, there is a bridging point (fog computing) between a sensor network and the Internet which often just performs basic functions such as translating between the protocols used in the Internet and sensor networks, as well as small amounts of data processing. The fog nodes can have useful knowledge and potential for constructive security and control over both the sensor network and the data transmitted over the Internet. Smart healthcare services utilise such networks of IoT systems. It is therefore vital that medical data emanating from IoT systems is highly secure, to prevent fraudulent use, whilst maintaining quality of service providing assured, verified and complete data. In this article, we examine the development of a Cognitive Fog (CF) model, for secure, smart healthcare services, that is able to make decisions such as opting-in and opting-out from running processes and invoking new processes when required, and providing security for the operational processes within the fog system. Overall, the proposed ensemble security model performed better in terms of Accuracy Rate, Detection Rate, and a lower False Positive Rate (standard intrusion detection measurements) than three base classifiers (K-NN, DBSCAN, and DT) using a standard security dataset (NSL-KDD).

    @article{lincoln47555,
    volume = {21},
    number = {3},
    month = {June},
    author = {Mohammed Al-Khafajiy and Safa Otoum and Thar Baker and Muhammad Asim and Zakaria Maamar and Moayad Aloqaily and Mark Taylor and Martin Randles},
    title = {Intelligent Control and Security of Fog Resources in Healthcare Systems via a Cognitive Fog Model},
    publisher = {ACM},
    year = {2021},
    journal = {ACM Transactions on Internet Technology},
    doi = {10.1145/3382770},
    pages = {1--23},
    url = {https://eprints.lincoln.ac.uk/id/eprint/47555/},
    abstract = {There have been significant advances in the field of Internet of Things (IoT) recently, which have not always considered security or data security concerns: A high degree of security is required when considering the sharing of medical data over networks. In most IoT-based systems, especially those within smart-homes and smart-cities, there is a bridging point (fog computing) between a sensor network and the Internet which often just performs basic functions such as translating between the protocols used in the Internet and sensor networks, as well as small amounts of data processing. The fog nodes can have useful knowledge and potential for constructive security and control over both the sensor network and the data transmitted over the Internet. Smart healthcare services utilise such networks of IoT systems. It is therefore vital that medical data emanating from IoT systems is highly secure, to prevent fraudulent use, whilst maintaining quality of service providing assured, verified and complete data. In this article, we examine the development of a Cognitive Fog (CF) model, for secure, smart healthcare services, that is able to make decisions such as opting-in and opting-out from running processes and invoking new processes when required, and providing security for the operational processes within the fog system. Overall, the proposed ensemble security model performed better in terms of Accuracy Rate, Detection Rate, and a lower False Positive Rate (standard intrusion detection measurements) than three base classifiers (K-NN, DBSCAN, and DT) using a standard security dataset (NSL-KDD).}
    }
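
    As a loose illustration of combining the three base learners named above (K-NN, DBSCAN, and DT) into an ensemble, the sketch below takes a majority vote, treating DBSCAN's noise label as an "attack" vote. The voting rule and all parameters are assumptions; the paper's actual ensemble scheme is not reproduced here.

    import numpy as np
    from sklearn.cluster import DBSCAN
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.tree import DecisionTreeClassifier

    # Random stand-ins for NSL-KDD-style feature vectors and binary labels.
    X_train = np.random.rand(200, 10)
    y_train = np.random.randint(0, 2, 200)
    X_test = np.random.rand(20, 10)

    knn = KNeighborsClassifier(5).fit(X_train, y_train)
    dt = DecisionTreeClassifier(max_depth=5).fit(X_train, y_train)
    db = DBSCAN(eps=0.8).fit(np.vstack([X_train, X_test]))
    outlier = (db.labels_[len(X_train):] == -1).astype(int)   # -1 => noise point

    votes = knn.predict(X_test) + dt.predict(X_test) + outlier
    y_pred = (votes >= 2).astype(int)                # majority of three votes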
  • S. D. Mohan, F. J. Davis, A. Badiee, P. Hadley, C. A. Twitchen, and S. Pearson, “Optical and thermal properties of commercial polymer film, modeling the albedo effect,” Journal of applied polymer science, vol. 138, iss. 24, p. 50581, 2021. doi:10.1002/app.50581
    [BibTeX] [Abstract] [Download PDF]

    Greenhouse cladding materials are an important part of greenhouse design. The cladding material controls the light transmission and distribution over the plants within the greenhouse, thereby exerting a major influence on the overall yield. Greenhouse claddings are typically translucent materials offering more diffusive transmission than reflection; however, the reflective properties of the films offer a potential route to increasing the surface albedo of the local environment. We model thermal properties by modeling the films based on their optical transmissions and reflections. We can use this data to estimate their albedo and determine the amount of short wave radiation that will be transmitted/reflected/blocked by the materials and how it can influence the local environment.

    @article{lincoln44141,
    volume = {138},
    number = {24},
    month = {June},
    author = {Saeed D Mohan and Fred J Davis and Amir Badiee and Paul Hadley and Carrie A Twitchen and Simon Pearson},
    title = {Optical and thermal properties of commercial polymer film, modeling the albedo effect},
    publisher = {Wiley},
    year = {2021},
    journal = {Journal of Applied Polymer Science},
    doi = {10.1002/app.50581},
    pages = {50581},
    url = {https://eprints.lincoln.ac.uk/id/eprint/44141/},
    abstract = {Greenhouse cladding materials are an important part of greenhouse design. The cladding material controls the light transmission and distribution over the plants within the greenhouse, thereby exerting a major influence on the overall yield. Greenhouse claddings are typically translucent materials offering more diffusive transmission than reflection; however, the reflective properties of the films offer a potential route to increasing the surface albedo of the local environment. We model thermal properties by modeling the films based on their optical transmissions and reflections. We can use this data to estimate their albedo and determine the amount of short wave radiation that will be transmitted/reflected/blocked by the materials and how it can influence the local environment.}
    }
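
    The core of the modelling idea above, converting a film's measured optical transmission and reflection into band-averaged fractions of short-wave radiation that are transmitted, reflected (the film's contribution to surface albedo) or blocked, can be illustrated with a toy energy balance. All spectra below are invented placeholders, not the paper's measurements.

    # Illustrative only: partition short-wave radiation using assumed spectra.
    import numpy as np

    wavelength_nm = np.linspace(300, 2500, 221)                # short-wave band
    irradiance = np.exp(-((wavelength_nm - 800) / 600) ** 2)   # toy solar curve
    T = np.clip(0.85 - 0.0001 * (wavelength_nm - 300), 0, 1)   # transmittance
    R = np.full_like(wavelength_nm, 0.10)                      # reflectance

    def band_average(x, weight, lam):
        return np.trapz(x * weight, lam) / np.trapz(weight, lam)

    t = band_average(T, irradiance, wavelength_nm)  # transmitted fraction
    r = band_average(R, irradiance, wavelength_nm)  # reflected fraction
    a = 1.0 - t - r                                 # blocked/absorbed fraction
    print(f"transmitted={t:.2f} reflected={r:.2f} blocked={a:.2f}")
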
  • A. Badiee, J. R. Wallbank, J. P. Fentanes, E. Trill, P. Scarlet, Y. Zhu, G. Cielniak, H. Cooper, J. R. Blake, J. G. Evans, M. Zreda, M. Köhli, and S. Pearson, “Using additional moderator to control the footprint of a cosmos rover for soil moisture measurement,” Water resources research, vol. 57, iss. 6, p. e2020WR028478, 2021. doi:10.1029/2020WR028478
    [BibTeX] [Abstract] [Download PDF]

    Cosmic Ray Neutron Probes (CRNP) have found application in soil moisture estimation due to their conveniently large (>100 m) footprints. Here we explore the possibility of using high density polyethylene (HDPE) moderator to limit the field of view, and hence the footprint, of a soil moisture sensor formed of 12 CRNP mounted on to a mobile robotic platform (Thorvald) for better in-field localisation of moisture variation. URANOS neutron scattering simulations are used to show that 5 cm of additional HDPE moderator (used to shield the upper surface and sides of the detector) is sufficient to (i) reduce the footprint of the detector considerably, (ii) approximately double the percentage of neutrons detected from within 5 m of the detector, and (iii) leave unaffected the shape of the curve used to convert neutron counts into soil moisture. Simulation and rover measurements for a transect crossing between grass and concrete additionally suggest that (iv) soil moisture changes can be sensed over length scales of tens of meters or less (roughly an order of magnitude smaller than commonly used footprint distances), and (v) the additional moderator does not reduce the detected neutron count rate (and hence increase noise) as much as might be expected given the extent of the additional moderator. The detector with additional HDPE moderator was also used to conduct measurements on a stubble field over three weeks to test the rover system in measuring spatial and temporal soil moisture variation.

    @article{lincoln45017,
    volume = {57},
    number = {6},
    month = {June},
    author = {Amir Badiee and John R. Wallbank and Jaime Pulido Fentanes and Emily Trill and Peter Scarlet and Yongchao Zhu and Grzegorz Cielniak and Hollie Cooper and James R. Blake and Jonathan G. Evans and Marek Zreda and Markus K{\"o}hli and Simon Pearson},
    title = {Using Additional Moderator to Control the Footprint of a COSMOS Rover for Soil Moisture Measurement},
    publisher = {Wiley},
    year = {2021},
    journal = {Water Resources Research},
    doi = {10.1029/2020WR028478},
    pages = {e2020WR028478},
    url = {https://eprints.lincoln.ac.uk/id/eprint/45017/},
    abstract = {Cosmic Ray Neutron Probes (CRNP) have found application in soil moisture estimation due to their conveniently large ({\ensuremath{>}}100 m) footprints. Here we explore the possibility of using high density polyethylene (HDPE) moderator to limit the field of view, and hence the footprint, of a soil moisture sensor formed of 12 CRNP mounted on to a mobile robotic platform (Thorvald) for better in-field localisation of moisture variation. URANOS neutron scattering simulations are used to show that 5 cm of additional HDPE moderator (used to shield the upper surface and sides of the detector) is sufficient to (i), reduce the footprint of the detector considerably, (ii) approximately double the percentage of neutrons detected from within 5 m of the detector, and (iii), does not affect the shape of the curve used to convert neutron counts into soil moisture. Simulation and rover measurements for a transect crossing between grass and concrete additionally suggest that (iv), soil moisture changes can be sensed over a length scales of tens of meters or less (roughly an order of magnitude smaller than commonly used footprint distances), and (v), the additional moderator does not reduce the detected neutron count rate (and hence increase noise) as much as might be expected given the extent of the additional moderator. The detector with additional HDPE moderator was also used to conduct measurements on a stubble field over three weeks to test the rover system in measuring spatial and temporal soil moisture variation.}
    }
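
    Finding (iii) above concerns the curve used to convert neutron counts into soil moisture. The sketch below assumes the calibration shape commonly used in the CRNP literature (Desilets et al., 2010); whether the authors use exactly this form is not stated in the abstract, and N0 and the coefficients require site calibration in practice.

    # Assumed standard CRNP count-to-moisture curve (Desilets et al., 2010).
    def soil_moisture(N, N0, a0=0.0808, a1=0.372, a2=0.115):
        """Gravimetric soil moisture (g/g) from neutron count rate N."""
        return a0 / (N / N0 - a1) - a2

    N0 = 1500.0                   # hypothetical calibrated dry-soil count rate
    for N in (1400, 1200, 1000):  # wetter soil moderates more neutrons: lower N
        print(N, round(soil_moisture(N, N0), 3))
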
  • D. C. Rose, J. Lyon, A. de Boon, M. Hanheide, and S. Pearson, “Responsible development of autonomous robots in agriculture,” Nature food, vol. 2, iss. 5, p. 306–309, 2021. doi:10.1038/s43016-021-00287-9
    [BibTeX] [Abstract] [Download PDF]

    Despite the potential contributions of autonomous robots to agricultural sustainability, social, legal and ethical issues threaten adoption. We discuss how responsible innovation principles can be embedded into the user-centred design of autonomous robots and identify areas for further empirical research.

    @article{lincoln45058,
    volume = {2},
    number = {5},
    month = {May},
    author = {David Christian Rose and Jessica Lyon and Auvikki de Boon and Marc Hanheide and Simon Pearson},
    title = {Responsible Development of Autonomous Robots in Agriculture},
    publisher = {Springer Nature},
    year = {2021},
    journal = {Nature Food},
    doi = {10.1038/s43016-021-00287-9},
    pages = {306--309},
    url = {https://eprints.lincoln.ac.uk/id/eprint/45058/},
    abstract = {Despite the potential contributions of autonomous robots to agricultural sustainability, social, legal and ethical issues threaten adoption. We discuss how responsible innovation principles can be embedded into the user-centred design of autonomous robots and identify areas for further empirical research.}
    }
  • J. Aguzzi, C. Costa, M. Calisti, V. Funari, S. Stefanni, R. Danovaro, H. Gomes, F. Vecchi, L. Dartnell, P. Weiss, K. Nowak, D. Chatzievangelou, and S. Marini, “Research trends and future perspectives in marine biomimicking robotics,” Sensors, vol. 21, iss. 11, p. 3778, 2021. doi:10.3390/s21113778
    [BibTeX] [Abstract] [Download PDF]

    Mechatronic and soft robotics are taking inspiration from the animal kingdom to create new high-performance robots. Here, we focused on marine biomimetic research and used innovative bibliographic statistics tools to highlight established and emerging knowledge domains. A total of 6980 scientific publications were retrieved from the Scopus database (1950–2020), evidencing a sharp research increase in 2003–2004. Clustering analysis of countries' collaborations showed two major Asian-North American and European clusters. Three significant areas appeared: (i) energy provision, whose advancement mainly relies on microbial fuel cells; (ii) biomaterials for not yet fully operational soft-robotic solutions; and (iii) design and control, chiefly oriented to locomotor designs. In this scenario, marine biomimicking robotics still lacks solutions for long-lasting energy provision, which presently hinders operation autonomy. In the research environment, identifying natural processes by which living organisms obtain energy is thus urgent to sustain energy-demanding tasks while, at the same time, natural designs must increasingly inform efforts to optimize energy consumption.

    @article{lincoln46134,
    volume = {21},
    number = {11},
    month = {May},
    author = {Jacopo Aguzzi and Corrado Costa and Marcello Calisti and Valerio Funari and Sergio Stefanni and Roberto Danovaro and Helena Gomes and Fabrizio Vecchi and Lewis Dartnell and Peter Weiss and Kathrin Nowak and Damianos Chatzievangelou and Simone Marini},
    title = {Research Trends and Future Perspectives in Marine Biomimicking Robotics},
    year = {2021},
    journal = {Sensors},
    doi = {10.3390/s21113778},
    pages = {3778},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46134/},
    abstract = {Mechatronic and soft robotics are taking inspiration from the animal kingdom to create new high-performance robots. Here, we focused on marine biomimetic research and used innovative bibliographic statistics tools, to highlight established and emerging knowledge domains. A total of 6980 scientific publications retrieved from the Scopus database (1950--2020), evidencing a sharp research increase in 2003--2004. Clustering analysis of countries collaborations showed two major Asian-North America and European clusters. Three significant areas appeared: (i) energy provision, whose advancement mainly relies on microbial fuel cells, (ii) biomaterials for not yet fully operational soft-robotic solutions; and finally (iii), design and control, chiefly oriented to locomotor designs. In this scenario, marine biomimicking robotics still lacks solutions for the long-lasting energy provision, which presently hinders operation autonomy. In the research environment, identifying natural processes by which living organisms obtain energy is thus urgent to sustain energy-demanding tasks while, at the same time, the natural designs must increasingly inform to optimize energy consumption.}
    }
  • F. Camara, P. Dickinson, and C. Fox, “Evaluating pedestrian interaction preferences with a game theoretic autonomous vehicle in virtual reality,” Transportation research part f, vol. 78, p. 410–423, 2021. doi:10.1016/j.trf.2021.02.017
    [BibTeX] [Abstract] [Download PDF]

    Localisation and navigation of autonomous vehicles (AVs) in static environments are now solved problems, but how to control their interactions with other road users in mixed traffic environments, especially with pedestrians, remains an open question. Recent work has begun to apply game theory to model and control AV-pedestrian interactions as they compete for space on the road whilst trying to avoid collisions. But this game theory model has been developed only in unrealistic lab environments. To improve their realism, this study empirically examines pedestrian behaviour during road crossing in the presence of approaching autonomous vehicles in more realistic virtual reality (VR) environments. The autonomous vehicles are controlled using game theory, and this study seeks to find the best parameters for these controls to produce comfortable interactions for the pedestrians. In a first experiment, participants’ trajectories reveal a more cautious crossing behaviour in VR than in previous laboratory experiments. In two further experiments, a gradient descent approach is used to investigate participants’ preference for AV driving style. The results show that the majority of participants were not expecting the AV to stop in some scenarios, and there was no change in their crossing behaviour in two environments and with different car models suggestive of car and last-mile style vehicles. These results provide some initial estimates for game theoretic parameters needed by future AVs in their pedestrian interactions and more generally show how such parameters can be inferred from virtual reality experiments.

    @article{lincoln44566,
    volume = {78},
    month = {April},
    author = {Fanta Camara and Patrick Dickinson and Charles Fox},
    title = {Evaluating Pedestrian Interaction Preferences with a Game Theoretic Autonomous Vehicle in Virtual Reality},
    publisher = {Elsevier},
    year = {2021},
    journal = {Transportation Research Part F},
    doi = {10.1016/j.trf.2021.02.017},
    pages = {410--423},
    url = {https://eprints.lincoln.ac.uk/id/eprint/44566/},
    abstract = {Localisation and navigation of autonomous vehicles (AVs) in static environments are now solved problems, but how to control their interactions with other road users in mixed traffic environments, especially with pedestrians, remains an open question. Recent work has begun to apply game theory to model and control AV-pedestrian interactions as they compete for space on the road whilst trying to avoid collisions. But this game theory model has been developed only in unrealistic lab environments. To improve their realism, this study empirically examines pedestrian behaviour during road crossing in the presence of approaching autonomous vehicles in more realistic virtual reality (VR) environments. The autonomous vehicles are controlled using game theory, and this study seeks to find the best parameters for these controls to produce comfortable interactions for the pedestrians. In a first experiment, participants' trajectories reveal a more cautious crossing behaviour in VR than in previous laboratory experiments. In two further experiments, a gradient descent approach is used to investigate participants' preference for AV driving style. The results show that the majority of participants were not expecting the AV to stop in some scenarios, and there was no change in their crossing behaviour in two environments and with different car models suggestive of car and last-mile style vehicles. These results provide some initial estimates for game theoretic parameters needed by future AVs in their pedestrian interactions and more generally show how such parameters can be inferred from virtual reality experiments.}
    }
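
    The controller above is built on modelling the AV-pedestrian space conflict as a game of chicken, and the VR experiments estimate parameters such as the relative cost of delay versus collision. A minimal sketch of that modelling idea (all payoff numbers hypothetical) enumerates the pure-strategy equilibria of a one-shot chicken game.

    # Minimal one-shot "chicken" sketch of an AV-pedestrian space conflict.
    import itertools

    T_COST = 1.0   # cost of yielding (lost time); hypothetical
    C_COST = 20.0  # cost of a collision; hypothetical

    def costs(a, b):
        """Costs to each agent for actions in {'yield', 'go'}."""
        if a == "go" and b == "go":
            return (C_COST, C_COST)
        return (T_COST if a == "yield" else 0.0,
                T_COST if b == "yield" else 0.0)

    # A pure-strategy Nash equilibrium: neither agent can cut its own cost
    # by unilaterally switching action.
    acts = ("yield", "go")
    for a, b in itertools.product(acts, acts):
        ca, cb = costs(a, b)
        if (all(costs(x, b)[0] >= ca for x in acts)
                and all(costs(a, y)[1] >= cb for y in acts)):
            print("equilibrium:", a, b)  # the two asymmetric yield/go outcomes
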
  • A. G. Esfahani, K. N. Sasikolomi, H. Hashempour, and F. Zhong, “Deep-lfd: deep robot learning from demonstrations,” Software impacts, vol. 9, p. 100087, 2021. doi:10.1016/j.simpa.2021.100087
    [BibTeX] [Abstract] [Download PDF]

    Like other robot learning from demonstration (LfD) approaches, deep-LfD builds a task model from sample demonstrations. However, unlike conventional LfD, the deep-LfD model learns the relation between high dimensional visual sensory information and robot trajectory/path. This paper presents a dataset of successful needle insertion by da Vinci Research Kit into deformable objects based on which several deep-LfD models are built as a benchmark of models learning robot controller for the needle insertion task.

    @article{lincoln45212,
    volume = {9},
    month = {August},
    author = {Amir Ghalamzan Esfahani and Kiyanoush Nazari Sasikolomi and Hamidreza Hashempour and Fangxun Zhong},
    title = {Deep-LfD: Deep robot learning from demonstrations},
    publisher = {Elsevier},
    year = {2021},
    journal = {Software Impacts},
    doi = {10.1016/j.simpa.2021.100087},
    pages = {100087},
    url = {https://eprints.lincoln.ac.uk/id/eprint/45212/},
    abstract = {Like other robot learning from demonstration (LfD) approaches, deep-LfD builds a task model from sample demonstrations. However, unlike conventional LfD, the deep-LfD model learns the relation between high dimensional visual sensory information and robot trajectory/path. This paper presents a dataset of successful needle insertion by da Vinci Research Kit into deformable objects based on which several deep-LfD models are built as a benchmark of models learning robot controller for the needle insertion task.}
    }
  • A. S. Gomez, E. Aptoula, S. Parsons, and S. Bosilj, “Deep regression versus detection for counting in robotic phenotyping,” Ieee robotics and automation letters, vol. 6, iss. 2, p. 2902–2907, 2021. doi:10.1109/LRA.2021.3062586
    [BibTeX] [Abstract] [Download PDF]

    Work in robotic phenotyping requires computer vision methods that estimate the number of fruit or grains in an image. To decide what to use, we compared three methods for counting fruit and grains, each method representative of a class of approaches from the literature. These are two methods based on density estimation and regression (single and multiple column), and one method based on object detection. We found that when the density of objects in an image is low, the approaches are comparable, but as the density increases, counting by regression becomes steadily more accurate than counting by detection. With more than a hundred objects per image, the error in the count predicted by detection-based methods is up to 5 times higher than when using regression-based ones.

    @article{lincoln44001,
    volume = {6},
    number = {2},
    month = {April},
    author = {Adrian Salazar Gomez and E Aptoula and Simon Parsons and Simon Bosilj},
    title = {Deep Regression versus Detection for Counting in Robotic Phenotyping},
    publisher = {IEEE},
    year = {2021},
    journal = {IEEE Robotics and Automation Letters},
    doi = {10.1109/LRA.2021.3062586},
    pages = {2902--2907},
    url = {https://eprints.lincoln.ac.uk/id/eprint/44001/},
    abstract = {Work in robotic phenotyping requires computer vision methods that estimate the number of fruit or grains in an image. To decide what to use, we compared three methods for counting fruit and grains, each method representative of a class of approaches from the literature. These are two methods based on density estimation and regression (single and multiple column), and one method based on object detection. We found that when the density of objects in an image is low, the approaches are comparable, but as the density increases, counting by regression becomes steadily more accurate than counting by detection. With more than a hundred objects per image, the error in the count predicted by detection-based methods is up to 5 times higher than when using regression-based ones.}
    }
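
    The regression-based counters compared above predict a density map whose integral equals the object count. The sketch below builds such a map from synthetic ground-truth points with a Gaussian kernel, the usual way training targets for these models are constructed; summing the map recovers the count up to small boundary effects.

    # Count-by-density sketch on synthetic annotations.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    h, w, n_objects = 256, 256, 120
    rng = np.random.default_rng(0)
    ys, xs = rng.integers(0, h, n_objects), rng.integers(0, w, n_objects)

    density = np.zeros((h, w))
    np.add.at(density, (ys, xs), 1.0)            # unit mass per annotated object
    density = gaussian_filter(density, sigma=4)  # smoothing preserves total mass
    print("count from density map:", density.sum())  # ~120, minus edge losses
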
  • N. Dethlefs, A. Schoene, and H. Cuayahuitl, “A divide-and-conquer approach to neural natural language generation from structured data,” Neurocomputing, vol. 433, p. 300–309, 2021. doi:10.1016/j.neucom.2020.12.083
    [BibTeX] [Abstract] [Download PDF]

    Current approaches that generate text from linked data for complex real-world domains can face problems including rich and sparse vocabularies as well as learning from examples of long varied sequences. In this article, we propose a novel divide-and-conquer approach that automatically induces a hierarchy of “generation spaces” from a dataset of semantic concepts and texts. Generation spaces are based on a notion of similarity of partial knowledge graphs that represent the domain and feed into a hierarchy of sequence-to-sequence or memory-to-sequence learners for concept-to-text generation. An advantage of our approach is that learning models are exposed to the most relevant examples during training which can avoid bias towards majority samples. We evaluate our approach on two common benchmark datasets and compare our hierarchical approach against a flat learning setup. We also conduct a comparison between sequence-to-sequence and memory-to-sequence learning models. Experiments show that our hierarchical approach overcomes issues of data sparsity and learns robust lexico-syntactic patterns, consistently outperforming flat baselines and previous work by up to 30%. We also find that while memory-to-sequence models can outperform sequence-to-sequence models in some cases, the latter are generally more stable in their performance and represent a safer overall choice.

    @article{lincoln43748,
    volume = {433},
    month = {April},
    author = {Nina Dethlefs and Annika Schoene and Heriberto Cuayahuitl},
    title = {A Divide-and-Conquer Approach to Neural Natural Language Generation from Structured Data},
    publisher = {Elsevier},
    year = {2021},
    journal = {Neurocomputing},
    doi = {10.1016/j.neucom.2020.12.083},
    pages = {300--309},
    url = {https://eprints.lincoln.ac.uk/id/eprint/43748/},
    abstract = {Current approaches that generate text from linked data for complex real-world domains can face problems including rich and sparse vocabularies as well as learning from examples of long varied sequences. In this article, we propose a novel divide-and-conquer approach that automatically induces a hierarchy of ``generation spaces'' from a dataset of semantic concepts and texts. Generation spaces are based on a notion of similarity of partial knowledge graphs that represent the domain and feed into a hierarchy of sequence-to-sequence or memory-to-sequence learners for concept-to-text generation. An advantage of our approach is that learning models are exposed to the most relevant examples during training which can avoid bias towards majority samples. We evaluate our approach on two common benchmark datasets and compare our hierarchical approach against a flat learning setup. We also conduct a comparison between sequence-to-sequence and memory-to-sequence learning models. Experiments show that our hierarchical approach overcomes issues of data sparsity and learns robust lexico-syntactic patterns, consistently outperforming flat baselines and previous work by up to 30\%. We also find that while memory-to-sequence models can outperform sequence-to-sequence models in some cases, the latter are generally more stable in their performance and represent a safer overall choice.}
    }
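
    The "divide" step above can be pictured as grouping structured inputs by the similarity of their partial knowledge graphs, then training one learner per group. The sketch below is one plausible instantiation (Jaccard distance over predicate sets plus hierarchical clustering) on toy records; the paper's actual similarity notion and the per-cluster sequence-to-sequence learners are not reproduced.

    # Hedged sketch of inducing "generation spaces" by clustering inputs.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    records = [  # toy predicate sets of four partial knowledge graphs
        {"name", "eatType", "food"},
        {"name", "eatType", "priceRange"},
        {"name", "food", "area", "near"},
        {"name", "area", "near"},
    ]

    def jaccard_dist(a, b):
        return 1.0 - len(a & b) / len(a | b)

    # Condensed pairwise-distance vector in the order scipy's linkage expects.
    d = np.array([jaccard_dist(records[i], records[j])
                  for i in range(len(records))
                  for j in range(i + 1, len(records))])
    labels = fcluster(linkage(d, method="average"), t=0.5, criterion="distance")
    print(labels)  # records sharing predicates share a "generation space"
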
  • T. G. Thuruthel, G. Picardi, F. Iida, C. Laschi, and M. Calisti, “Learning to stop: a unifying principle for legged locomotion in varying environments,” Royal society open science, vol. 8, iss. 4, 2021. doi:10.1098/rsos.210223
    [BibTeX] [Abstract] [Download PDF]

    Evolutionary studies have unequivocally proven the transition of living organisms from water to land. Consequently, it can be deduced that locomotion strategies must have evolved from one environment to the other. However, the mechanism by which this transition happened and its implications on bio-mechanical studies and robotics research have not been explored in detail. This paper presents a unifying control strategy for locomotion in varying environments based on the principle of “learning to stop”. Using a common reinforcement learning framework, deep deterministic policy gradient, we show that our proposed learning strategy facilitates a fast and safe methodology for transferring learned controllers from the facile water environment to the harsh land environment. Our results not only propose a plausible mechanism for safe and quick transition of locomotion strategies from a water to land environment but also provide a novel alternative for safer and faster training of robots.

    @article{lincoln44628,
    volume = {8},
    number = {4},
    month = {April},
    author = {T. G. Thuruthel and G. Picardi and F. Iida and C. Laschi and M. Calisti},
    title = {Learning to stop: a unifying principle for legged locomotion in varying environments},
    publisher = {The Royal Society},
    year = {2021},
    journal = {Royal Society Open Science},
    doi = {10.1098/rsos.210223},
    url = {https://eprints.lincoln.ac.uk/id/eprint/44628/},
    abstract = {Evolutionary studies have unequivocally proven the transition of living organisms from water to land. Consequently, it can be deduced that locomotion strategies must have evolved from one environment to the other. However, the mechanism by which this transition happened and its implications on bio-mechanical studies and robotics research have not been explored in detail. This paper presents a unifying control strategy for locomotion in varying environments based on the principle of ``learning to stop''. Using a common reinforcement learning framework, deep deterministic policy gradient, we show that our proposed learning strategy facilitates a fast and safe methodology for transferring learned controllers from the facile water environment to the harsh land environment. Our results not only propose a plausible mechanism for safe and quick transition of locomotion strategies from a water to land environment but also provide a novel alternative for safer and faster training of robots.}
    }
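
    The unifying principle above, "learning to stop", rewards arriving at the goal at rest rather than merely passing through it. A toy reward-and-termination rule of that form, which a DDPG-style learner could optimise, is sketched below; the thresholds and magnitudes are hypothetical, not the paper's.

    # Toy "learning to stop" objective: success requires position AND stillness.
    def step_reward(position, velocity, goal, pos_tol=0.05, vel_tol=0.01):
        stopped_at_goal = (abs(position - goal) < pos_tol
                           and abs(velocity) < vel_tol)
        if stopped_at_goal:
            return 10.0, True    # terminal bonus for stopping at the goal
        # otherwise: distance penalty plus a small per-step time cost
        return -0.1 * abs(position - goal) - 0.01, False

    print(step_reward(0.99, 0.0, 1.0))  # (10.0, True): stopped at the goal
    print(step_reward(1.00, 0.5, 1.0))  # moving through the goal is not success
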
  • J. Gao, J. C. Westergaard, E. H. R. Sundmark, M. Bagge, E. Liljeroth, and E. Alexandersson, “Automatic late blight lesion recognition and severity quantification based on field imagery of diverse potato genotypes by deep learning,” Knowledge-based systems, vol. 214, p. 106723, 2021. doi:10.1016/j.knosys.2020.106723
    [BibTeX] [Abstract] [Download PDF]

    The plant pathogen Phytophthora infestans causes the severe disease late blight in potato, which can result in huge yield loss for potato production. Automatic and accurate disease lesion segmentation enables fast evaluation of disease severity and assessment of disease progress. In tasks requiring computer vision, deep learning has recently gained tremendous success for image classification, object detection and semantic segmentation. To test whether we could extract late blight lesions from unstructured field environments based on high-resolution visual field images and deep learning algorithms, we collected ~500 field RGB images in a set of diverse potato genotypes with different disease severity (0%–70%), resulting in 2100 cropped images. 1600 of these cropped images were used as the dataset for training deep neural networks and 250 cropped images were randomly selected as the validation dataset. Finally, the developed model was tested on the remaining 250 cropped images. The results show that the values for intersection over union (IoU) of the classes background (leaf and soil) and disease lesion in the test dataset were 0.996 and 0.386, respectively. Furthermore, we established a linear relationship (R² = 0.655) between manual visual scores of late blight and the number of lesions detected by deep learning at the canopy level. We also showed that imbalance weights of lesion and background classes improved segmentation performance, and that fused masks based on the majority voting of the multiple masks enhanced the correlation with the visual disease scores. This study demonstrates the feasibility of using deep learning algorithms for disease lesion segmentation and severity evaluation based on proximal imagery, which could aid breeding for crop resistance in field environments, and also benefit precision farming.

    @article{lincoln43642,
    volume = {214},
    month = {February},
    author = {Junfeng Gao and Jesper Cairo Westergaard and Ea H{\o}egh Riis Sundmark and Merethe Bagge and Erland Liljeroth and Erik Alexandersson},
    title = {Automatic late blight lesion recognition and severity quantification based on field imagery of diverse potato genotypes by deep learning},
    publisher = {Elsevier},
    year = {2021},
    journal = {Knowledge-Based Systems},
    doi = {10.1016/j.knosys.2020.106723},
    pages = {106723},
    url = {https://eprints.lincoln.ac.uk/id/eprint/43642/},
    abstract = {The plant pathogen Phytophthora infestans causes the severe disease late blight in potato, which can result in huge yield loss for potato production. Automatic and accurate disease lesion segmentation enables fast evaluation of disease severity and assessment of disease progress. In tasks requiring computer vision, deep learning has recently gained tremendous success for image classification, object detection and semantic segmentation. To test whether we could extract late blight lesions from unstructured field environments based on high-resolution visual field images and deep learning algorithms, we collected{$\sim$}500 field RGB images in a set of diverse potato genotypes with different disease severity (0\%--70\%), resulting in 2100 cropped images. 1600 of these cropped images were used as the dataset for training deep neural networks and 250 cropped images were randomly selected as the validation dataset. Finally, the developed model was tested on the remaining 250 cropped images. The results show that the values for intersection over union (IoU) of the classes background (leaf and soil) and disease lesion in the test dataset were 0.996 and 0.386, respectively. Furthermore, we established a linear relationship (R2=0.655) between manual visual scores of late blight and the number of lesions detected by deep learning at the canopy level. We also showed that imbalance weights of lesion and background classes improved segmentation performance, and that fused masks based on the majority voting of the multiple masks enhanced the correlation with the visual disease scores. This study demonstrates the feasibility of using deep learning algorithms for disease lesion segmentation and severity evaluation based on proximal imagery, which could aid breeding for crop resistance in field environments, and also benefit precision farming.}
    }
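
    Two ingredients named in the abstract above are concrete enough to sketch: weighting the rare lesion class more heavily in the segmentation loss, and fusing several predicted masks by per-pixel majority vote. The weights, tensor shapes and mask sources below are placeholders, not the paper's settings.

    # (i) class-weighted segmentation loss; (ii) majority-vote mask fusion.
    import numpy as np
    import torch
    import torch.nn as nn

    # (i) class 0 = background, class 1 = lesion; lesion up-weighted (hypothetical)
    loss_fn = nn.CrossEntropyLoss(weight=torch.tensor([1.0, 20.0]))
    logits = torch.randn(2, 2, 64, 64)         # (batch, classes, H, W)
    target = torch.randint(0, 2, (2, 64, 64))  # per-pixel class labels
    print("weighted loss:", loss_fn(logits, target).item())

    # (ii) fuse five binary masks: a pixel is lesion if at least 3 of 5 agree
    masks = np.stack([np.random.randint(0, 2, (64, 64)) for _ in range(5)])
    fused = (masks.sum(axis=0) >= 3).astype(np.uint8)
    print("fused lesion fraction:", fused.mean())
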
  • P. McBurney and S. Parsons, “Argument schemes and dialogue protocols: doug walton’s legacy in artificial intelligence,” Journal of applied logics, vol. 8, iss. 1, p. 263–286, 2021.
    [BibTeX] [Abstract] [Download PDF]

    This paper is intended to honour the memory of Douglas Walton (1942–2020), a Canadian philosopher of argumentation who died in January 2020. Walton's contributions to argumentation theory have had a very strong influence on Artificial Intelligence (AI), particularly in the design of autonomous software agents able to reason and argue with one another, and in the design of protocols to govern such interactions. In this paper, we explore two of these contributions – argumentation schemes and dialogue protocols – by discussing how they may be applied to a pressing current research challenge in AI: the automated assessment of explanations for automated decision-making systems.

    @article{lincoln43751,
    volume = {8},
    number = {1},
    month = {February},
    author = {Peter McBurney and Simon Parsons},
    title = {Argument Schemes and Dialogue Protocols: Doug Walton's legacy in artificial intelligence},
    publisher = {College Publications},
    year = {2021},
    journal = {Journal of Applied Logics},
    pages = {263--286},
    url = {https://eprints.lincoln.ac.uk/id/eprint/43751/},
    abstract = {This paper is intended to honour the memory of Douglas Walton (1942--2020), a Canadian philosopher of argumentation who died in January 2020. Walton's contributions to argumentation theory have had a very strong influence on Artificial Intelligence (AI), particularly in the design of autonomous software agents able to reason and argue with one another, and in the design of protocols to govern such interactions. In this paper, we explore two of these contributions --- argumentation schemes and dialogue protocols --- by discussing how they may be applied to a pressing current research challenge in AI: the automated assessment of explanations for automated decision-making systems.}
    }
  • A. Seddaoui and M. C. Saaj, “Collision-free optimal trajectory generation for a space robot using genetic algorithm,” Acta astronautica, vol. 179, p. 311–321, 2021. doi:10.1016/j.actaastro.2020.11.001
    [BibTeX] [Abstract] [Download PDF]

    Future on-orbit servicing and assembly missions will require space robots capable of manoeuvring safely around their target. Several challenges arise when modelling, controlling and planning the motion of such systems; therefore, new methodologies are required. A safe approach towards the grasping point implies that the space robot must be able to use the additional degrees of freedom offered by the spacecraft base to aid the arm attain the target and avoid collisions and singularities. The controlled-floating space robot possesses this particularity of motion and will be utilised in this paper to design an optimal path generator. The path generator, based on a Genetic Algorithm, takes advantage of the dynamic coupling effect and the controlled motion of the spacecraft base to safely attain the target. It aims to minimise several objectives whilst satisfying multiple constraints. The key feature of this new path generator is that it requires only the Cartesian position of the point to grasp as an input, without prior knowledge of a desired path. The results presented originate from the trajectory tracking using a nonlinear adaptive controller.

    @article{lincoln43074,
    volume = {179},
    month = {February},
    author = {Asma Seddaoui and Mini Chakravarthini Saaj},
    note = {The paper is the outcome of a PhD I supervised at University of Surrey.},
    title = {Collision-free optimal trajectory generation for a space robot using genetic algorithm},
    publisher = {Elsevier},
    year = {2021},
    journal = {Acta Astronautica},
    doi = {10.1016/j.actaastro.2020.11.001},
    pages = {311--321},
    url = {https://eprints.lincoln.ac.uk/id/eprint/43074/},
    abstract = {Future on-orbit servicing and assembly missions will require space robots capable of manoeuvring safely around their target. Several challenges arise when modelling, controlling and planning the motion of such systems; therefore, new methodologies are required. A safe approach towards the grasping point implies that the space robot must be able to use the additional degrees of freedom offered by the spacecraft base to aid the arm attain the target and avoid collisions and singularities. The controlled-floating space robot possesses this particularity of motion and will be utilised in this paper to design an optimal path generator. The path generator, based on a Genetic Algorithm, takes advantage of the dynamic coupling effect and the controlled motion of the spacecraft base to safely attain the target. It aims to minimise several objectives whilst satisfying multiple constraints. The key feature of this new path generator is that it requires only the Cartesian position of the point to grasp as an input, without prior knowledge of a desired path. The results presented originate from the trajectory tracking using a nonlinear adaptive controller.}
    }
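
    A minimal evolutionary loop conveys the flavour of the planner above: candidate waypoint paths from start to grasp point are scored by length plus a penalty for entering a keep-out region, then the best are kept and mutated. This selection-and-mutation sketch (hypothetical geometry, waypoint-only collision checks) is far simpler than the paper's multi-objective, dynamically coupled GA.

    # Evolve waypoint paths; cost = path length + keep-out-sphere penalty.
    import numpy as np

    rng = np.random.default_rng(1)
    start, goal = np.array([0.0, 0.0, 0.0]), np.array([1.0, 1.0, 1.0])
    obstacle, radius = np.array([0.5, 0.5, 0.5]), 0.3  # hypothetical keep-out
    n_way, pop_size = 4, 60

    def cost(flat):
        pts = np.vstack([start, flat.reshape(n_way, 3), goal])
        length = np.linalg.norm(np.diff(pts, axis=0), axis=1).sum()
        clearance = np.linalg.norm(pts - obstacle, axis=1)  # waypoints only
        penalty = np.maximum(0.0, radius - clearance).sum() * 100.0
        return length + penalty

    pop = rng.uniform(0.0, 1.0, (pop_size, n_way * 3))
    for _ in range(200):
        scores = np.array([cost(ind) for ind in pop])
        elite = pop[np.argsort(scores)[: pop_size // 2]]     # selection
        children = elite + rng.normal(0, 0.05, elite.shape)  # mutation
        pop = np.vstack([elite, children])
    print("best cost:", min(cost(ind) for ind in pop))
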
  • M. Lujak, E. I. Sklar, and F. Semet, “Agriculture fleet vehicle routing: a decentralised and dynamic problem,” Ai communications, vol. 34, iss. 1, p. 55–71, 2021. doi:10.3233/AIC-201581
    [BibTeX] [Abstract] [Download PDF]

    To date, the research on agriculture vehicles in general and Agriculture Mobile Robots (AMRs) in particular has focused on a single vehicle (robot) and its agriculture-specific capabilities. Very little work has explored the coordination of fleets of such vehicles in the daily execution of farming tasks. This is especially the case when considering overall fleet performance, its efficiency and scalability in the context of highly automated agriculture vehicles that perform tasks throughout multiple fields potentially owned by different farmers and/or enterprises. The potential impact of automating AMR fleet coordination on commercial agriculture is immense. Major conglomerates with large and heterogeneous fleets of agriculture vehicles could operate on huge land areas without human operators to effect precision farming. In this paper, we propose the Agriculture Fleet Vehicle Routing Problem (AF-VRP) which, to the best of our knowledge, differs from any other version of the Vehicle Routing Problem studied so far. We focus on the dynamic and decentralised version of this problem applicable in environments involving multiple agriculture machinery and farm owners where concepts of fairness and equity must be considered. Such a problem combines three related problems: the dynamic assignment problem, the dynamic 3-index assignment problem and the capacitated arc routing problem. We review the state-of-the-art and categorise solution approaches as centralised, distributed and decentralised, based on the underlying decision-making context. Finally, we discuss open challenges in applying distributed and decentralised coordination approaches to this problem.

    @article{lincoln43570,
    volume = {34},
    number = {1},
    month = {February},
    author = {Marin Lujak and Elizabeth I Sklar and Frederic Semet},
    title = {Agriculture fleet vehicle routing: A decentralised and dynamic problem},
    publisher = {IOS Press},
    year = {2021},
    journal = {AI Communications},
    doi = {10.3233/AIC-201581},
    pages = {55--71},
    url = {https://eprints.lincoln.ac.uk/id/eprint/43570/},
    abstract = {To date, the research on agriculture vehicles in general and Agriculture Mobile Robots (AMRs) in particular has focused on a single vehicle (robot) and its agriculture-specific capabilities. Very little work has explored the coordination of fleets of such vehicles in the daily execution of farming tasks. This is especially the case when considering overall fleet performance, its efficiency and scalability in the context of highly automated agriculture vehicles that perform tasks throughout multiple fields potentially owned by different farmers and/or enterprises. The potential impact of automating AMR fleet coordination on commercial agriculture is immense. Major conglomerates with large and heterogeneous fleets of agriculture vehicles could operate on huge land areas without human operators to effect precision farming. In this paper, we propose the Agriculture Fleet Vehicle Routing Problem (AF-VRP) which, to the best of our knowledge, differs from any other version of the Vehicle Routing Problem studied so far. We focus on the dynamic and decentralised version of this problem applicable in environments involving multiple agriculture machinery and farm owners where concepts of fairness and equity must be considered. Such a problem combines three related problems: the dynamic assignment problem, the dynamic 3-index assignment problem and the capacitated arc routing problem. We review the state-of-the-art and categorise solution approaches as centralised, distributed and decentralised, based on the underlining decision-making context. Finally, we discuss open challenges in applying distributed and decentralised coordination approaches to this problem.}
    }
  • T. Vintr, Z. Yan, K. Eyisoy, F. Kubis, J. Blaha, J. Ulrich, C. Swaminathan, S. M. Mellado, T. Kucner, M. Magnusson, G. Cielniak, J. Faigl, T. Duckett, A. Lilienthal, and T. Krajnik, “Natural criteria for comparison of pedestrian flow forecasting models,” 2020 ieee/rsj international conference on intelligent robots and systems (iros), p. 11197–11204, 2021. doi:10.1109/IROS45743.2020.9341672
    [BibTeX] [Abstract] [Download PDF]

    Models of human behaviour, such as pedestrian flows, are beneficial for safe and efficient operation of mobile robots. We present a new methodology for benchmarking of pedestrian flow models based on the afforded safety of robot navigation in human-populated environments. While previous evaluations of pedestrian flow models focused on their predictive capabilities, we assess their ability to support safe path planning and scheduling. Using real-world datasets gathered continuously over several weeks, we benchmark state-of-the-art pedestrian flow models, including both time-averaged and time-sensitive models. In the evaluation, we use the learned models to plan robot trajectories and then observe the number of times when the robot gets too close to humans, using a predefined social distance threshold. The experiments show that while traditional evaluation criteria based on model fidelity differ only marginally, the introduced criteria vary significantly depending on the model used, providing a natural interpretation of the expected safety of the system. For the time-averaged flow models, the number of encounters increases linearly with the percentage operating time of the robot, as might be reasonably expected. By contrast, for the time-sensitive models, the number of encounters grows sublinearly with the percentage operating time, by planning to avoid congested areas and times.

    @article{lincoln48928,
    month = {February},
    author = {Tomas Vintr and Zhi Yan and Kerem Eyisoy and Filip Kubis and Jan Blaha and Jiri Ulrich and Chittaranjan Swaminathan and Sergio Molina Mellado and Tomasz Kucner and Martin Magnusson and Grzegorz Cielniak and Jan Faigl and Tom Duckett and Achim Lilienthal and Tomas Krajnik},
    title = {Natural criteria for comparison of pedestrian flow forecasting models},
    publisher = {IEEE},
    journal = {2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
    doi = {10.1109/IROS45743.2020.9341672},
    pages = {11197--11204},
    year = {2021},
    url = {https://eprints.lincoln.ac.uk/id/eprint/48928/},
    abstract = {Models of human behaviour, such as pedestrian flows, are beneficial for safe and efficient operation of mobile robots. We present a new methodology for benchmarking of pedestrian flow models based on the afforded safety of robot navigation in human-populated environments. While previous evaluations of pedestrian flow models focused on their predictive capabilities, we assess their ability to support safe path planning and scheduling. Using real-world datasets gathered continuously over several weeks, we benchmark state-of-the-art pedestrian flow models, including both time-averaged and time-sensitive models. In the evaluation, we use the learned models to plan robot trajectories and then observe the number of times when the robot gets too close to humans, using a predefined social distance threshold. The experiments show that while traditional evaluation criteria based on model fidelity differ only marginally, the introduced criteria vary significantly depending on the model used, providing a natural interpretation of the expected safety of the system. For the time-averaged flow models, the number of encounters increases linearly with the percentage operating time of the robot, as might be reasonably expected. By contrast, for the time-sensitive models, the number of encounters grows sublinearly with the percentage operating time, by planning to avoid congested areas and times.}
    }
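
    The safety criterion proposed above is directly computable: plan a robot trajectory against the learned flow model, replay it against logged human positions on a shared timeline, and count the time steps at which any person falls inside a social-distance threshold. That counting step is sketched below on synthetic random-walk trajectories with a hypothetical threshold.

    # Count social-distance violations between a robot path and pedestrians.
    import numpy as np

    rng = np.random.default_rng(0)
    steps = 500
    robot = np.cumsum(rng.normal(0, 0.1, (steps, 2)), axis=0)      # robot path
    humans = np.cumsum(rng.normal(0, 0.1, (steps, 3, 2)), axis=0)  # 3 people
    threshold = 0.5  # social distance in metres (hypothetical)

    dists = np.linalg.norm(humans - robot[:, None, :], axis=2)  # (steps, 3)
    encounters = int((dists.min(axis=1) < threshold).sum())
    print("time steps with an encounter:", encounters)
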
  • L. Korir, A. Drake, M. Collison, C. C. Villa, E. Sklar, and S. Pearson, “Current and emergent economic impacts of covid-19 and brexit on uk fresh produce and horticultural businesses,” Arxiv, 2021. doi:10.22004/ag.econ.312068
    [BibTeX] [Abstract] [Download PDF]

    This paper describes a study designed to investigate the current and emergent impacts of Covid-19 and Brexit on UK horticultural businesses. Various characteristics of UK horticultural production, notably labour reliance and import dependence, make it an important sector for policymakers concerned to understand the effects of these disruptive events as we move from 2020 into 2021. The study design prioritised timeliness, using a rapid survey to gather information from a relatively small (n = 19) but indicative group of producers. The main novelty of the results is to suggest that a very substantial majority of producers either plan to scale back production in 2021 (47%) or have been unable to make plans for 2021 because of uncertainty (37%). The results also add to broader evidence that the sector has experienced profound labour supply challenges, with implications for labour cost and quality. The study discusses the implications of these insights from producers in terms of productivity and automation, as well as in terms of broader economic implications. Although automation is generally recognised as the long-term future for the industry (89%), it appeared in the study as the second most referred short-term option (32%) only after changes to labour schemes and policies (58%). Currently, automation plays a limited role in addressing the UK’s horticultural workforce shortage due to economic and socio-political uncertainties. The conclusion highlights policy recommendations and future investigative intentions, as well as suggesting methodological and other discussion points for the research community.

    @article{lincoln46766,
    month = {January},
    title = {Current and Emergent Economic Impacts of Covid-19 and Brexit on UK Fresh Produce and Horticultural Businesses},
    author = {Lilian Korir and Archie Drake and Martin Collison and Carolina Camacho Villa and Elizabeth Sklar and Simon Pearson},
    year = {2021},
    doi = {10.22004/ag.econ.312068},
    journal = {ArXiv},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46766/},
    abstract = {This paper describes a study designed to investigate the current and emergent impacts of Covid-19 and Brexit on UK horticultural businesses. Various characteristics of UK horticultural production, notably labour reliance and import dependence, make it an important sector for policymakers concerned to understand the effects of these disruptive events as we move from 2020 into 2021. The study design prioritised timeliness, using a rapid survey to gather information from a relatively small (n = 19) but indicative group of producers. The main novelty of the results is to suggest that a very substantial majority of producers either plan to scale back production in 2021 (47\%) or have been unable to make plans for 2021 because of uncertainty (37\%). The results also add to broader evidence that the sector has experienced profound labour supply challenges, with implications for labour cost and quality. The study discusses the implications of these insights from producers in terms of productivity and automation, as well as in terms of broader economic implications. Although automation is generally recognised as the long-term future for the industry (89\%), it appeared in the study as the second most referred short-term option (32\%) only after changes to labour schemes and policies (58\%). Currently, automation plays a limited role in addressing the UK's horticultural workforce shortage due to economic and socio-political uncertainties. The conclusion highlights policy recommendations and future investigative intentions, as well as suggesting methodological and other discussion points for the research community.}
    }
  • G. Picardi, H. Hauser, C. Laschi, and M. Calisti, “Morphologically induced stability on an underwater legged robot with a deformable body,” The international journal of robotics research, vol. 40, iss. 1, p. 435–448, 2021. doi:10.1177/0278364919840426
    [BibTeX] [Abstract] [Download PDF]

    For robots to navigate successfully in the real world, unstructured environment adaptability is a prerequisite. Although this is typically implemented within the control layer, there have been recent proposals of adaptation through a morphing of the body. However, the successful demonstration of this approach has mostly been theoretical and in simulations thus far. In this work we present an underwater hopping robot that features a deformable body implemented as a deployable structure that is covered by a soft skin for which it is possible to manually change the body size without altering any other property (e.g. buoyancy or weight). For such a system, we show that it is possible to induce a stable hopping behavior instead of a fall, by just increasing the body size. We provide a mathematical model that describes the hopping behavior of the robot under the influence of shape-dependent underwater contributions (drag, buoyancy, and added mass) in order to analyze and compare the results obtained. Moreover, we show that for certain conditions, a stable hopping behavior can only be obtained through changing the morphology of the robot as the controller (i.e. actuator) would already be working at maximum capacity. The presented work demonstrates that, through the exploitation of shape-dependent forces, the dynamics of a system can be modified through altering the morphology of the body to induce a desirable behavior and, thus, a morphological change can be an effective alternative to the classic control.

    @article{lincoln46149,
    volume = {40},
    number = {1},
    month = {January},
    author = {Giacomo Picardi and Helmut Hauser and Cecilia Laschi and Marcello Calisti},
    title = {Morphologically induced stability on an underwater legged robot with a deformable body},
    year = {2021},
    journal = {The International Journal of Robotics Research},
    doi = {10.1177/0278364919840426},
    pages = {435--448},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46149/},
    abstract = {For robots to navigate successfully in the real world, unstructured environment adaptability is a prerequisite. Although this is typically implemented within the control layer, there have been recent proposals of adaptation through a morphing of the body. However, the successful demonstration of this approach has mostly been theoretical and in simulations thus far. In this work we present an underwater hopping robot that features a deformable body implemented as a deployable structure that is covered by a soft skin for which it is possible to manually change the body size without altering any other property (e.g. buoyancy or weight). For such a system, we show that it is possible to induce a stable hopping behavior instead of a fall, by just increasing the body size. We provide a mathematical model that describes the hopping behavior of the robot under the influence of shape-dependent underwater contributions (drag, buoyancy, and added mass) in order to analyze and compare the results obtained. Moreover, we show that for certain conditions, a stable hopping behavior can only be obtained through changing the morphology of the robot as the controller (i.e. actuator) would already be working at maximum capacity. The presented work demonstrates that, through the exploitation of shape-dependent forces, the dynamics of a system can be modified through altering the morphology of the body to induce a desirable behavior and, thus, a morphological change can be an effective alternative to the classic control.}
    }
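
    The shape-dependent contributions in the model above (buoyancy, drag and added mass) all grow with body size, so inflating the body alone can flip the net force balance. The toy vertical-dynamics integration below (illustrative constants, not the paper's model) shows a fixed-mass sphere switching from sinking to rising as its radius increases.

    # Toy underwater vertical dynamics with size-dependent forces.
    import numpy as np

    def settle_velocity(radius, m=1.0, rho=1000.0, Cd=0.8, g=9.81, dt=1e-3):
        vol = 4.0 / 3.0 * np.pi * radius ** 3
        area = np.pi * radius ** 2
        m_added = 0.5 * rho * vol  # added mass of a sphere
        v = 0.0                    # downward positive
        for _ in range(5000):      # integrate 5 s, enough to settle
            drag = 0.5 * rho * Cd * area * v * abs(v)
            f = m * g - rho * vol * g - drag  # weight - buoyancy - drag
            v += f / (m + m_added) * dt
        return v

    for r in (0.05, 0.10, 0.15):  # same mass, growing body size
        print(f"radius {r:.2f} m -> settled velocity {settle_velocity(r):+.2f} m/s")
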
  • Z. Al-saadi, D. Sirintuna, A. Kucukyilmaz, and C. Basdogan, “A novel haptic feature set for the classification of interactive motor behaviors in collaborative object transfer,” Ieee transactions on haptics, p. 1–1, 2021. doi:10.1109/TOH.2020.3034244
    [BibTeX] [Abstract] [Download PDF]

    Haptics provides a natural and intuitive channel of communication during the interaction of two humans in complex physical tasks, such as joint object transportation. However, despite the utmost importance of touch in physical interactions, the use of haptics is underrepresented when developing intelligent systems. This study explores the prominence of haptic data to extract information about underlying interaction patterns within human-human cooperation. For this purpose, we design salient haptic features describing the collaboration quality within a physical dyadic task and investigate the use of these features to classify the interaction patterns. We categorize the interaction into four discrete behavior classes. These classes describe whether the partners work in harmony or face conflicts while jointly transporting an object through translational or rotational movements. We test the proposed features on a physical human-human interaction (pHHI) dataset, consisting of data collected from 12 human dyads. Using these data, we verify the salience of haptic features by achieving a correct classification rate over 91% using a Random Forest classifier.

    @article{lincoln43742,
    title = {A Novel Haptic Feature Set for the Classification of Interactive Motor Behaviors in Collaborative Object Transfer},
    author = {Zaid Al-saadi and Doganay Sirintuna and Ayse Kucukyilmaz and Cagatay Basdogan},
    publisher = {IEEE},
    year = {2021},
    pages = {1--1},
    doi = {10.1109/TOH.2020.3034244},
    journal = {IEEE Transactions on Haptics},
    url = {https://eprints.lincoln.ac.uk/id/eprint/43742/},
    abstract = {Haptics provides a natural and intuitive channel of communication during the interaction of two humans in complex physical tasks, such as joint object transportation. However, despite the utmost importance of touch in physical interactions, the use of haptics is underrepresented when developing intelligent systems. This study explores the prominence of haptic data to extract information about underlying interaction patterns within human-human cooperation. For this purpose, we design salient haptic features describing the collaboration quality within a physical dyadic task and investigate the use of these features to classify the interaction patterns. We categorize the interaction into four discrete behavior classes. These classes describe whether the partners work in harmony or face conflicts while jointly transporting an object through translational or rotational movements. We test the proposed features on a physical human-human interaction (pHHI) dataset, consisting of data collected from 12 human dyads. Using these data, we verify the salience of haptic features by achieving a correct classification rate over 91\% using a Random Forest classifier.}
    }
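
    The final classification step above, windowed haptic features mapped to one of four interaction classes by a Random Forest, is sketched below on synthetic placeholder features; the paper's actual feature definitions and the pHHI data are not reproduced.

    # Random Forest over synthetic stand-ins for the haptic feature set.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n = 800
    X = rng.normal(size=(n, 6))  # placeholder features (e.g. force disagreement)
    y = rng.integers(0, 4, n)    # 4 classes: harmony/conflict x transl./rot.
    y[X[:, 0] > 0.5] = 3         # inject structure so there is something to learn

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
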
  • C. Armanini, M. Farman, M. Calisti, F. Giorgio-Serchi, C. Stefanini, and F. Renda, “Flagellate underwater robotics at macroscale: design, modeling, and characterization,” Ieee transactions on robotics, p. 1–17, 2021. doi:10.1109/TRO.2021.3094051
    [BibTeX] [Abstract] [Download PDF]

    The prokaryotic flagellum is considered the only known example of a biological “wheel”, a system capable of converting the action of a rotatory actuator into a continuous propulsive force. For this reason, flagella are an interesting case study in soft robotics and they represent an appealing source of inspiration for the design of underwater robots. A great number of flagellum-inspired devices exist, but these are all characterized by a size in the micrometer range and are mostly realized with rigid materials. Here, we present the design and development of a novel generation of macroscale underwater propellers that draw their inspiration from flagellated organisms. Through a simple rotatory actuation and exploiting the capability of the soft material to store energy when interacting with the surrounding fluid, the propellers attain different helical shapes that generate a propulsive thrust. A theoretical model is presented, accurately describing and predicting the kinematic and the propulsive capabilities of the proposed solution. Different experimental trials are presented to validate the accuracy of the model and to investigate the performance of the proposed design. Finally, an underwater robot prototype propelled by four flagellar modules is presented.

    @article{lincoln46191,
    title = {Flagellate Underwater Robotics at Macroscale: Design, Modeling, and Characterization},
    author = {Costanza Armanini and Madiha Farman and Marcello Calisti and Francesco Giorgio-Serchi and Cesare Stefanini and Federico Renda},
    year = {2021},
    pages = {1--17},
    doi = {10.1109/TRO.2021.3094051},
    journal = {IEEE Transactions on Robotics},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46191/},
    abstract = {Prokaryotic flagellum is considered as the only known example of a biological ``wheel,'' a system capable of converting the action of rotatory actuator into a continuous propulsive force. For this reason, flagella are an interesting case study in soft robotics and they represent an appealing source of inspiration for the design of underwater robots. A great number of flagellum-inspired devices exists, but these are all characterized by a size ranging in the micrometer scale and mostly realized with rigid materials. Here, we present the design and development of a novel generation of macroscale underwater propellers that draw their inspiration from flagellated organisms. Through a simple rotatory actuation and exploiting the capability of the soft material to store energy when interacting with the surrounding fluid, the propellers attain different helical shapes that generate a propulsive thrust. A theoretical model is presented, accurately describing and predicting the kinematic and the propulsive capabilities of the proposed solution. Different experimental trials are presented to validate the accuracy of the model and to investigate the performance of the proposed design. Finally, an underwater robot prototype propelled by four flagellar modules is presented.}
    }
  • N. Kokciyan, I. Sassoon, E. Sklar, S. Parsons, and S. Modgil, “Applying metalevel argumentation frameworks to support medical decision making,” IEEE intelligent systems, 2021. doi:10.1109/MIS.2021.3051420
    [BibTeX] [Abstract] [Download PDF]

    People are increasingly employing artificial intelligence as the basis for decision-support systems (DSSs) to assist them in making well-informed decisions. Adoption of DSS is challenging when such systems lack support, or evidence, for justifying their recommendations. DSSs are widely applied in the medical domain, due to the complexity of the domain and the sheer volume of data that render manual processing difficult. This paper proposes a metalevel argumentation-based decision-support system that can reason with heterogeneous data (e.g. body measurements, electronic health records, clinical guidelines), while incorporating the preferences of the human beneficiaries of those decisions. The system constructs template-based explanations for the recommendations that it makes. The proposed framework has been implemented in a system to support stroke patients and its functionality has been tested in a pilot study. User feedback shows that the system can run effectively over an extended period.

    @article{lincoln43690,
    title = {Applying Metalevel Argumentation Frameworks to Support Medical Decision Making},
    author = {Nadin Kokciyan and Isabel Sassoon and Elizabeth Sklar and Simon Parsons and Sanjay Modgil},
    publisher = {IEEE},
    year = {2021},
    doi = {10.1109/MIS.2021.3051420},
    journal = {IEEE Intelligent Systems},
    url = {https://eprints.lincoln.ac.uk/id/eprint/43690/},
    abstract = {People are increasingly employing artificial intelligence as the basis for decision-support systems (DSSs) to assist them in making well-informed decisions. Adoption of DSS is challenging when such systems lack support, or evidence, for justifying their recommendations. DSSs are widely applied in the medical domain, due to the complexity of the domain and the sheer volume of data that render manual processing difficult. This paper proposes a metalevel argumentation-based decision-support system that can reason with heterogeneous data (e.g. body measurements, electronic health records, clinical guidelines), while incorporating the preferences of the human beneficiaries of those decisions. The system constructs template-based explanations for the recommendations that it makes. The proposed framework has been implemented in a system to support stroke patients and its functionality has been tested in a pilot study. User feedback shows that the system can run effectively over an extended period.}
    }
  • H. Wang, H. Wang, J. Zhao, C. Hu, J. Peng, and S. Yue, “A time-delay feedback neural network for discriminating small, fast-moving targets in complex dynamic environments,” IEEE transactions on neural networks and learning systems, 2021. doi:10.1109/TNNLS.2021.3094205
    [BibTeX] [Abstract] [Download PDF]

    Discriminating small moving objects within complex visual environments is a significant challenge for autonomous micro robots that are generally limited in computational power. By exploiting their highly evolved visual systems, flying insects can effectively detect mates and track prey during rapid pursuits, even though the small targets equate to only a few pixels in their visual field. The high degree of sensitivity to small target movement is supported by a class of specialized neurons called small target motion detectors (STMDs). Existing STMD-based computational models normally comprise four sequentially arranged neural layers interconnected via feedforward loops to extract information on small target motion from raw visual inputs. However, feedback, another important regulatory circuit for motion perception, has not been investigated in the STMD pathway and its functional roles for small target motion detection are not clear. In this paper, we propose an STMD-based neural network with feedback connection (Feedback STMD), where the network output is temporally delayed, then fed back to the lower layers to mediate neural responses. We compare the properties of the model with and without the time-delay feedback loop, and find it shows preference for high-velocity objects. Extensive experiments suggest that the Feedback STMD achieves superior detection performance for fast-moving small targets, while significantly suppressing background false positive movements which display lower velocities. The proposed feedback model provides an effective solution in robotic visual systems for detecting fast-moving small targets that are always salient and potentially threatening.

    @article{lincoln45567,
    title = {A Time-Delay Feedback Neural Network for Discriminating Small, Fast-Moving Targets in Complex Dynamic Environments},
    author = {Hongxin Wang and Huatian Wang and Jiannan Zhao and Cheng Hu and Jigen Peng and Shigang Yue},
    publisher = {IEEE},
    year = {2021},
    doi = {10.1109/TNNLS.2021.3094205},
    journal = {IEEE Transactions on Neural Networks and Learning Systems},
    url = {https://eprints.lincoln.ac.uk/id/eprint/45567/},
    abstract = {Discriminating small moving objects within complex visual environments is a significant challenge for autonomous micro robots that are generally limited in computational power. By exploiting their highly evolved visual systems, flying insects can effectively detect mates and track prey during rapid pursuits, even though the small targets equate to only a few pixels in their visual field. The high degree of sensitivity to small target movement is supported by a class of specialized neurons called small target motion detectors (STMDs). Existing STMD-based computational models normally comprise four sequentially arranged neural layers interconnected via feedforward loops to extract information on small target motion from raw visual inputs. However, feedback, another important regulatory circuit for motion perception, has not been investigated in the STMD pathway and its functional roles for small target motion detection are not clear. In this paper, we propose an STMD-based neural network with feedback connection (Feedback STMD), where the network output is temporally delayed, then fed back to the lower layers to mediate neural responses. We compare the properties of the model with and without the time-delay feedback loop, and find it shows preference for high-velocity objects. Extensive experiments suggest that the Feedback STMD achieves superior detection performance for fast-moving small targets, while significantly suppressing background false positive movements which display lower velocities. The proposed feedback model provides an effective solution in robotic visual systems for detecting fast-moving small targets that are always salient and potentially threatening.}
    }
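
    The core idea of the Feedback STMD, an output that is temporally delayed and fed back to inhibit the lower layers, can be caricatured in a few lines; the delay, gain and rectification below are illustrative assumptions rather than the published model.

    # Caricature of a time-delay feedback loop (delay/gain are assumptions).
    import numpy as np

    def feedback_stmd(inputs, delay=3, gain=0.4):
        """inputs: 1-D sequence of motion-energy responses."""
        outputs = np.zeros_like(inputs, dtype=float)
        for t in range(len(inputs)):
            fb = gain * outputs[t - delay] if t >= delay else 0.0
            outputs[t] = max(inputs[t] - fb, 0.0)   # delayed output inhibits the input
        return outputs

    pulse = np.zeros(30); pulse[10:12] = 1.0        # brief, fast-moving-like event
    step = np.ones(30)                              # persistent, slow background
    print(feedback_stmd(pulse)[10:12])              # [1. 1.]  -> transients pass through
    print(round(feedback_stmd(step)[-1], 2))        # ~0.71    -> sustained input suppressed

    With a persistent input the delayed feedback builds up and attenuates the response, while a brief transient passes through before any feedback arrives, mirroring the preference for fast-moving stimuli described above.
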
  • F. Yang, L. Shu, Y. Yang, G. Han, S. Pearson, and K. Li, “Optimal deployment of solar insecticidal lamps over constrained locations in mixed-crop farmlands,” IEEE internet of things journal, vol. 8, iss. 16, p. 13095–13114, 2021. doi:10.1109/JIOT.2021.3064043
    [BibTeX] [Abstract] [Download PDF]

    Solar Insecticidal Lamps (SILs) play a vital role in green prevention and control of pests. By embedding SILs in Wireless Sensor Networks (WSNs), we establish a novel agricultural Internet of Things (IoT), referred to as the SILIoTs. In practice, the deployment of SIL nodes is determined by the geographical characteristics of an actual farmland, the constraints on the locations of SIL nodes, and the radio-wave propagation in complex agricultural environment. In this paper, we mainly focus on the constrained SIL Deployment Problem (cSILDP) in a mixed-crop farmland, where the locations used to deploy SIL nodes are a limited set of candidates located on the ridges. We formulate the cSILDP in this scenario as a Connected Set Cover (CSC) problem, and propose a Hole Aware Node Deployment Method (HANDM) based on the greedy algorithm to solve the constrained optimization problem. The HANDM is a two-phase method. In the first phase, a novel deployment strategy is utilised to guarantee only a single coverage hole in each iteration, based on which a set of suboptimal locations is found for the deployment of SIL nodes. In the second phase, according to the operations of deletion and fusion, the optimal locations are obtained to meet the requirements on complete coverage and connectivity. Experimental results show that our proposed method achieves better performance than the peer algorithms, specifically in terms of deployment cost.

    @article{lincoln44192,
    volume = {8},
    number = {16},
    month = {August},
    author = {Fan Yang and Lei Shu and Yuli Yang and Guangjie Han and Simon Pearson and Kailiang Li},
    title = {Optimal Deployment of Solar Insecticidal Lamps over Constrained Locations in Mixed-Crop Farmlands},
    publisher = {IEEE},
    year = {2021},
    journal = {IEEE Internet of Things Journal},
    doi = {10.1109/JIOT.2021.3064043},
    pages = {13095--13114},
    url = {https://eprints.lincoln.ac.uk/id/eprint/44192/},
    abstract = {Solar Insecticidal Lamps (SILs) play a vital role in green prevention and control of pests. By embedding SILs in Wireless Sensor Networks (WSNs), we establish a novel agricultural Internet of Things (IoT), referred to as the SILIoTs. In practice, the deployment of SIL nodes is determined by the geographical characteristics of an actual farmland, the constraints on the locations of SIL nodes, and the radio-wave propagation in complex agricultural environment. In this paper, we mainly focus on the constrained SIL Deployment Problem (cSILDP) in a mixed-crop farmland, where the locations used to deploy SIL nodes are a limited set of candidates located on the ridges. We formulate the cSILDP in this scenario as a Connected Set Cover (CSC) problem, and propose a Hole Aware Node Deployment Method (HANDM) based on the greedy algorithm to solve the constrained optimization problem. The HANDM is a two-phase method. In the first phase, a novel deployment strategy is utilised to guarantee only a single coverage hole in each iteration, based on which a set of suboptimal locations is found for the deployment of SIL nodes. In the second phase, according to the operations of deletion and fusion, the optimal locations are obtained to meet the requirements on complete coverage and connectivity. Experimental results show that our proposed method achieves better performance than the peer algorithms, specifically in terms of deployment cost.}
    }
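
    For readers unfamiliar with the underlying optimisation, the classic greedy set-cover step looks roughly as follows; the paper's HANDM adds hole-awareness, connectivity constraints and a deletion/fusion phase on top of this core, and the locations and coverage sets below are invented for illustration.

    # Classic greedy set-cover core (not the paper's HANDM itself).
    def greedy_set_cover(universe, candidates):
        """candidates: {candidate location: set of field points it covers}."""
        uncovered, chosen = set(universe), []
        while uncovered:
            loc, covered = max(candidates.items(),
                               key=lambda kv: len(kv[1] & uncovered))
            if not covered & uncovered:
                raise ValueError("remaining points cannot be covered")
            chosen.append(loc)
            uncovered -= covered
        return chosen

    points = range(6)
    ridge_locations = {"A": {0, 1, 2}, "B": {2, 3}, "C": {3, 4, 5}, "D": {1, 4}}
    print(greedy_set_cover(points, ridge_locations))   # ['A', 'C']
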
  • J. Zhao, H. Wang, N. Bellotto, C. Hu, J. Peng, and S. Yue, “Enhancing LGMD’s looming selectivity for UAV with spatial-temporal distributed presynaptic connections,” IEEE transactions on neural networks and learning systems, p. 1–15, 2021. doi:10.1109/TNNLS.2021.3106946
    [BibTeX] [Abstract] [Download PDF]

    Collision detection is one of the most challenging tasks for Unmanned Aerial Vehicles (UAVs). This is especially true for small or micro UAVs, due to their limited computational power. In nature, flying insects with compact and simple visual systems demonstrate their remarkable ability to navigate and avoid collision in complex environments. A good example of this is provided by locusts. They can avoid collisions in a dense swarm through the activity of a motion-based visual neuron called the Lobula Giant Movement Detector (LGMD). The defining feature of the LGMD neuron is its preference for looming. As a flying insect's visual neuron, LGMD is considered to be an ideal basis for building a UAV's collision detecting system. However, existing LGMD models cannot distinguish looming clearly from other visual cues such as complex background movements caused by UAV agile flights. To address this issue, we proposed a new model implementing distributed spatial-temporal synaptic interactions, which is inspired by recent findings in locusts' synaptic morphology. We first introduced the locally distributed excitation to enhance the excitation caused by visual motion with preferred velocities. Then radially extending temporal latency for inhibition is incorporated to compete with the distributed excitation and selectively suppress the non-preferred visual motions. This spatial-temporal competition between excitation and inhibition in our model is therefore tuned to preferred image angular velocity representing looming rather than background movements with these distributed synaptic interactions. Systematic experiments have been conducted to verify the performance of the proposed model for UAV agile flights. The results have demonstrated that this new model enhances the looming selectivity in complex flying scenes considerably, and has the potential to be implemented on embedded collision detection systems for small or micro UAVs.

    @article{lincoln47316,
    title = {Enhancing LGMD's Looming Selectivity for UAV With Spatial-Temporal Distributed Presynaptic Connections},
    author = {Jiannan Zhao and Hongxin Wang and Nicola Bellotto and Cheng Hu and Jigen Peng and Shigang Yue},
    publisher = {IEEE},
    year = {2021},
    pages = {1--15},
    doi = {10.1109/TNNLS.2021.3106946},
    journal = {IEEE Transactions on Neural Networks and Learning Systems},
    url = {https://eprints.lincoln.ac.uk/id/eprint/47316/},
    abstract = {Collision detection is one of the most challenging tasks for Unmanned Aerial Vehicles (UAVs). This is especially true for small or micro UAVs, due to their limited computational power. In nature, flying insects with compact and simple visual systems demonstrate their remarkable ability to navigate and avoid collision in complex environments. A good example of this is provided by locusts. They can avoid collisions in a dense swarm through the activity of a motion-based visual neuron called the Lobula Giant Movement Detector (LGMD). The defining feature of the LGMD neuron is its preference for looming. As a flying insect's visual neuron, LGMD is considered to be an ideal basis for building a UAV's collision detecting system. However, existing LGMD models cannot distinguish looming clearly from other visual cues such as complex background movements caused by UAV agile flights. To address this issue, we proposed a new model implementing distributed spatial-temporal synaptic interactions, which is inspired by recent findings in locusts' synaptic morphology. We first introduced the locally distributed excitation to enhance the excitation caused by visual motion with preferred velocities. Then radially extending temporal latency for inhibition is incorporated to compete with the distributed excitation and selectively suppress the non-preferred visual motions. This spatial-temporal competition between excitation and inhibition in our model is therefore tuned to preferred image angular velocity representing looming rather than background movements with these distributed synaptic interactions. Systematic experiments have been conducted to verify the performance of the proposed model for UAV agile flights. The results have demonstrated that this new model enhances the looming selectivity in complex flying scenes considerably, and has the potential to be implemented on embedded collision detection systems for small or micro UAVs.}
    }
  • S. Sarkadi, A. Rutherford, P. McBurney, S. Parsons, and I. Rahwan, “The evolution of deception,” Royal society open science, vol. 8, iss. 9, 2021. doi:10.1098/rsos.201032
    [BibTeX] [Abstract] [Download PDF]

    Deception plays a critical role in the dissemination of information, and has important consequences on the functioning of cultural, market-based and democratic institutions. Deception has been widely studied within the fields of philosophy, psychology, economics and political science. Yet, we still lack an understanding of how deception emerges in a society under competitive (evolutionary) pressures. This paper begins to fill this gap by bridging evolutionary models of social good–public goods games (PGGs)–with ideas from Interpersonal Deception Theory and Truth-Default Theory. This provides a well-founded analysis of the growth of deception in societies and the effectiveness of several approaches to reducing deception. Assuming that knowledge is a public good, we use extensive simulation studies to explore (i) how deception impacts the sharing and dissemination of knowledge in societies over time, (ii) how different types of knowledge sharing societies are affected by deception, and (iii) what type of policing and regulation is needed to reduce the negative effects of deception in knowledge sharing. Our results indicate that cooperation in knowledge sharing can be re-established in systems by introducing institutions that investigate and regulate both defection and deception using a decentralised case-by-case strategy. This provides evidence for the adoption of methods for reducing the use of deception in the world around us in order to avoid a Tragedy of The Digital Commons.

    @article{lincoln46543,
    volume = {8},
    number = {9},
    month = {September},
    author = {Stefan Sarkadi and Alex Rutherford and Peter McBurney and Simon Parsons and Iyad Rahwan},
    title = {The Evolution of Deception},
    publisher = {Royal Society},
    year = {2021},
    journal = {Royal Society Open Science},
    doi = {10.1098/rsos.201032},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46543/},
    abstract = {Deception plays a critical role in the dissemination of information, and has important consequences on the functioning of cultural, market-based and democratic institutions. Deception has been widely studied within the fields of philosophy, psychology, economics and political science. Yet, we still lack an understanding of how deception emerges in a society under competitive (evolutionary) pressures. This paper begins to fill this gap by bridging evolutionary models of social good--public goods games (PGGs)--with ideas from Interpersonal Deception Theory and Truth-Default Theory. This provides a well-founded analysis of the growth of deception in societies and the effectiveness of several approaches to reducing deception. Assuming that knowledge is a public good, we use extensive simulation studies to explore (i) how deception impacts the sharing and dissemination of knowledge in societies over time, (ii) how different types of knowledge sharing societies are affected by deception, and (iii) what type of policing and regulation is needed to reduce the negative effects of deception in knowledge sharing. Our results indicate that cooperation in knowledge sharing can be re-established in systems by introducing institutions that investigate and regulate both defection and deception using a decentralised case-by-case strategy. This provides evidence for the adoption of methods for reducing the use of deception in the world around us in order to avoid a Tragedy of The Digital Commons.}
    }
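
    The public goods game at the heart of this model has a very small computational core; the sketch below shows only the basic payoff structure (parameter values are arbitrary), on top of which the paper layers deception, policing and knowledge-sharing dynamics.

    # Toy public goods game round; illustrative only.
    import numpy as np

    def pgg_round(contributes, endowment=1.0, r=1.6):
        """contributes: boolean array, True = player puts endowment in the pot."""
        pot = r * endowment * contributes.sum()
        share = pot / len(contributes)               # pot is split equally
        return share + endowment * (~contributes)    # defectors also keep the endowment

    players = np.array([True, True, True, False])
    print(pgg_round(players))                        # [1.2 1.2 1.2 2.2]: free-riding pays
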
  • H. Isakhani, N. Bellotto, Q. Fu, and S. Yue, “Generative design and fabrication of a locust-inspired gliding wing prototype for micro aerial robots,” Journal of computational design and engineering, vol. 8, iss. 5, p. 1191–1203, 2021. doi:10.1093/jcde/qwab040
    [BibTeX] [Abstract] [Download PDF]

    Gliding is generally one of the most efficient modes of flight in natural fliers that can be further emphasised in the aircraft industry to reduce emissions and facilitate endured flights. Natural wings, being fundamentally responsible for this phenomenon, are developed over millions of years of evolution. Artificial wings, on the other hand, are limited to the human-proposed conceptual design phase, often leading to sub-optimal results. However, the novel Generative Design (GD) method claims to produce mechanically improved solutions based on robust and rigorous models of design conditions and performance criteria. This study investigates the potential applications of this Computer-Associated Design (CAsD) technology to generate novel micro aerial vehicle wing concepts that are structurally more stable and efficient. Multiple performance-driven solutions (wings) with high-level goals are generated by an infinite scale cloud computing solution executing a machine learning based GD algorithm. Ultimately, the highest performing CAsD concepts are numerically analysed, fabricated, and mechanically tested according to our previous study, and the results are compared to the literature for qualitative as well as quantitative analysis and validations. It was concluded that the GD-based tandem wings’ (fore- & hindwing) ability to withstand fracture failure without compromising structural rigidity was optimised by 78\% compared to its peer models. However, the weight was slightly increased by 11\%, with a 14\% drop in stiffness, when compared to our models from the previous study.

    @article{lincoln46871,
    volume = {8},
    number = {5},
    month = {October},
    author = {Hamid Isakhani and Nicola Bellotto and Qinbing Fu and Shigang Yue},
    title = {Generative design and fabrication of a locust-inspired gliding wing prototype for micro aerial robots},
    publisher = {Oxford University Press},
    year = {2021},
    journal = {Journal of Computational Design and Engineering},
    doi = {10.1093/jcde/qwab040},
    pages = {1191--1203},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46871/},
    abstract = {Gliding is generally one of the most efficient modes of flight in natural fliers that can be further emphasised in the aircraft industry to reduce emissions and facilitate endured flights. Natural wings, being fundamentally responsible for this phenomenon, are developed over millions of years of evolution. Artificial wings, on the other hand, are limited to the human-proposed conceptual design phase, often leading to sub-optimal results. However, the novel Generative Design (GD) method claims to produce mechanically improved solutions based on robust and rigorous models of design conditions and performance criteria. This study investigates the potential applications of this Computer-Associated Design (CAsD) technology to generate novel micro aerial vehicle wing concepts that are structurally more stable and efficient. Multiple performance-driven solutions (wings) with high-level goals are generated by an infinite scale cloud computing solution executing a machine learning based GD algorithm. Ultimately, the highest performing CAsD concepts are numerically analysed, fabricated, and mechanically tested according to our previous study, and the results are compared to the literature for qualitative as well as quantitative analysis and validations. It was concluded that the GD-based tandem wings' (fore-\& hindwing) ability to withstand fracture failure without compromising structural rigidity was optimised by 78\% compared to its peer models. However, the weight was slightly increased by 11\%, with a 14\% drop in stiffness, when compared to our models from the previous study.}
    }
  • T. Liu, X. Sun, C. Hu, Q. Fu, and S. Yue, “A multiple pheromone communication system for swarm intelligence,” IEEE access, vol. 9, p. 148721–148737, 2021. doi:10.1109/ACCESS.2021.3124386
    [BibTeX] [Abstract] [Download PDF]

    Pheromones are chemical substances essential for communication among social insects. In the application of swarm intelligence to real micro mobile robots, the deployment of a single virtual pheromone has emerged recently as a powerful real-time method for indirect communication. However, these studies usually exploit only one kind of pheromone in their task, neglecting the crucial fact that in the world of real insects, multiple pheromones play important roles in shaping stigmergic behaviours such as foraging or nest building. To explore the multiple pheromones mechanism that enables robots to solve complex collective tasks efficiently, we introduce an artificial multiple pheromone system (ColCOSΦ) to support swarm intelligence research by enabling multiple robots to deploy and react to multiple pheromones simultaneously. The proposed system ColCOSΦ uses optical signals to emulate different evaporating chemical substances, i.e. pheromones. These emulated pheromones are represented by trails displayed on a wide LCD display screen positioned horizontally, on which multiple miniature robots can move freely. The colour sensors beneath the robots can detect and identify lingering “pheromones” on the screen. Meanwhile, the release of any pheromone from each robot is enabled by monitoring its positional information over time with an overhead camera. No other communication methods apart from virtual pheromones are employed in this system. Two case studies have been carried out which have verified the feasibility and effectiveness of the proposed system in achieving complex swarm tasks as empowered by multiple pheromones. This novel platform is a timely and powerful tool for research into swarm intelligence.

    @article{lincoln47447,
    volume = {9},
    month = {December},
    author = {Tian Liu and Xuelong Sun and Cheng Hu and Qinbing Fu and Shigang Yue},
    title = {A Multiple Pheromone Communication System for Swarm Intelligence},
    publisher = {IEEE},
    year = {2021},
    journal = {IEEE Access},
    doi = {10.1109/ACCESS.2021.3124386},
    pages = {148721--148737},
    url = {https://eprints.lincoln.ac.uk/id/eprint/47447/},
    abstract = {Pheromones are chemical substances essential for communication among social insects. In the application of swarm intelligence to real micro mobile robots, the deployment of a single virtual pheromone has emerged recently as a powerful real-time method for indirect communication. However, these studies usually exploit only one kind of pheromone in their task, neglecting the crucial fact that in the world of real insects, multiple pheromones play important roles in shaping stigmergic behaviours such as foraging or nest building. To explore the multiple pheromones mechanism that enables robots to solve complex collective tasks efficiently, we introduce an artificial multiple pheromone system (ColCOS{\ensuremath{\Phi}}) to support swarm intelligence research by enabling multiple robots to deploy and react to multiple pheromones simultaneously. The proposed system ColCOS{\ensuremath{\Phi}} uses optical signals to emulate different evaporating chemical substances, i.e. pheromones. These emulated pheromones are represented by trails displayed on a wide LCD display screen positioned horizontally, on which multiple miniature robots can move freely. The colour sensors beneath the robots can detect and identify lingering "pheromones" on the screen. Meanwhile, the release of any pheromone from each robot is enabled by monitoring its positional information over time with an overhead camera. No other communication methods apart from virtual pheromones are employed in this system. Two case studies have been carried out which have verified the feasibility and effectiveness of the proposed system in achieving complex swarm tasks as empowered by multiple pheromones. This novel platform is a timely and powerful tool for research into swarm intelligence.}
    }
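
    A minimal sketch of the virtual-pheromone mechanics, assuming additive deposits with exponential evaporation (the real platform renders trails on an LCD screen and senses them optically; the grid size and decay rate here are arbitrary):

    # Toy multi-pheromone field: deposit, evaporate, sense.
    import numpy as np

    class PheromoneGrid:
        def __init__(self, shape, n_pheromones, evaporation=0.95):
            self.field = np.zeros((n_pheromones,) + shape)
            self.evaporation = evaporation

        def deposit(self, kind, pos, amount=1.0):
            self.field[kind][pos] += amount      # robot lays pheromone at its position

        def step(self):
            self.field *= self.evaporation       # every pheromone decays each tick

        def sense(self, kind, pos):
            return self.field[kind][pos]         # what a colour sensor would read

    grid = PheromoneGrid((64, 64), n_pheromones=2)
    grid.deposit(0, (10, 10)); grid.step()
    print(grid.sense(0, (10, 10)))               # 0.95 after one evaporation step
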
  • Z. Maamar, N. Faci, M. Al-Khafajiy, and M. Dohan, “Time-centric and resource-driven composition for the internet of things,” Internet of things, vol. 16, p. 100460, 2021. doi:10.1016/j.iot.2021.100460
    [BibTeX] [Abstract] [Download PDF]

    Internet of Things (IoT), one of the fastest growing Information and Communication Technologies (ICT), is playing a major role in provisioning contextualized, smart services to end-users and organizations. To sustain this role, many challenges must be tackled; this paper focuses on the design and development of thing composition. The complex nature of today's needs requires groups of things, and not separate things, to work together to satisfy these needs. By analogy with other ICTs like Web services, thing composition is specified with a model that uses dependencies to decide upon things that will do what, where, when, and why. Two types of dependencies are adopted: regular dependencies, which schedule the execution chronology of things, and special dependencies, which coordinate the operations of things when they run into obstacles such as the unavailability of resources. Both resource use and resource availability are specified in compliance with Allen's time intervals, upon which the reasoning takes place. This reasoning is technically demonstrated through a system extending EdgeCloudSim and backed with a set of experiments.

    @article{lincoln47573,
    volume = {16},
    month = {December},
    author = {Zakaria Maamar and Noura Faci and Mohammed Al-Khafajiy and Murtada Dohan},
    title = {Time-centric and resource-driven composition for the Internet of Things},
    publisher = {Elsevier},
    year = {2021},
    journal = {Internet of Things},
    doi = {10.1016/j.iot.2021.100460},
    pages = {100460},
    url = {https://eprints.lincoln.ac.uk/id/eprint/47573/},
    abstract = {Internet of Things (IoT), one of the fastest growing Information and Communication Technologies (ICT), is playing a major role in provisioning contextualized, smart services to end-users and organizations. To sustain this role, many challenges must be tackled; this paper focuses on the design and development of thing composition. The complex nature of today's needs requires groups of things, and not separate things, to work together to satisfy these needs. By analogy with other ICTs like Web services, thing composition is specified with a model that uses dependencies to decide upon things that will do what, where, when, and why. Two types of dependencies are adopted: regular dependencies, which schedule the execution chronology of things, and special dependencies, which coordinate the operations of things when they run into obstacles such as the unavailability of resources. Both resource use and resource availability are specified in compliance with Allen's time intervals, upon which the reasoning takes place. This reasoning is technically demonstrated through a system extending EdgeCloudSim and backed with a set of experiments.}
    }
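
    Since the composition reasoning rests on Allen's time intervals, a small Python sketch of the relation test may help; it covers only a subset of Allen's thirteen relations, and the interval names are illustrative rather than the paper's API.

    # Simplified Allen interval relation test (names follow Allen's vocabulary).
    def allen_relation(a, b):
        """a, b: (start, end) tuples with start < end."""
        (s1, e1), (s2, e2) = a, b
        if e1 < s2:  return "before"
        if e1 == s2: return "meets"
        if s1 == s2 and e1 == e2: return "equal"
        if s1 >= s2 and e1 <= e2: return "during"   # folds in starts/finishes
        if s1 < s2 < e1 < e2: return "overlaps"
        return "other"

    use = (2, 5)          # interval over which a thing needs a resource
    available = (1, 8)    # interval over which the resource is available
    print(allen_relation(use, available))   # "during" -> the composition is feasible
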
  • M. Ghafoor, S. A. Tariq, T. Zia, I. A. Taj, A. Abbas, A. Hassan, and A. Y. Zomaya, “Fingerprint identification with shallow multifeature view classifier,” IEEE transactions on cybernetics, vol. 51, iss. 9, p. 4515–4527, 2021. doi:10.1109/TCYB.2019.2957188
    [BibTeX] [Abstract] [Download PDF]

    This article presents an efficient fingerprint identification system that implements an initial classification for search-space reduction followed by minutiae neighbor-based feature encoding and matching. The current state-of-the-art fingerprint classification methods use a deep convolutional neural network (DCNN) to assign confidence for the classification prediction, and based on this prediction, the input fingerprint is matched with only the subset of the database that belongs to the predicted class. It can be observed for the DCNNs that as the architectures deepen, the farthest layers of the network learn more abstract information from the input images that result in higher prediction accuracies. However, the downside is that the DCNNs are data hungry and require lots of annotated (labeled) data to learn generalized network parameters for deeper layers. In this article, a shallow multifeature view CNN (SMV-CNN) fingerprint classifier is proposed that extracts: 1) fine-grained features from the input image and 2) abstract features from explicitly derived representations obtained from the input image. The multifeature views are fed to a fully connected neural network (NN) to compute a global classification prediction. The classification results show that the SMV-CNN demonstrated an improvement of 2.8\% when compared to baseline CNN consisting of a single grayscale view on an open-source database. Moreover, in comparison with the state-of-the-art residual network (ResNet-50) image classification model, the proposed method performs comparably while being less complex and more efficient during training. The result of classification-based fingerprint identification has shown that the search space is reduced by over 50\% without degradation of identification accuracies.

    @article{lincoln43823,
    volume = {51},
    number = {9},
    month = {September},
    author = {Mubeen Ghafoor and Syed Ali Tariq and Tehseen Zia and Imtiaz Ahmad Taj and Assad Abbas and Ali Hassan and Albert Y. Zomaya},
    title = {Fingerprint Identification With Shallow Multifeature View Classifier},
    publisher = {IEEE},
    year = {2021},
    journal = {IEEE Transactions on Cybernetics},
    doi = {10.1109/TCYB.2019.2957188},
    pages = {4515--4527},
    url = {https://eprints.lincoln.ac.uk/id/eprint/43823/},
    abstract = {This article presents an efficient fingerprint identification system that implements an initial classification for search-space reduction followed by minutiae neighbor-based feature encoding and matching. The current state-of-the-art fingerprint classification methods use a deep convolutional neural network (DCNN) to assign confidence for the classification prediction, and based on this prediction, the input fingerprint is matched with only the subset of the database that belongs to the predicted class. It can be observed for the DCNNs that as the architectures deepen, the farthest layers of the network learn more abstract information from the input images that result in higher prediction accuracies. However, the downside is that the DCNNs are data hungry and require lots of annotated (labeled) data to learn generalized network parameters for deeper layers. In this article, a shallow multifeature view CNN (SMV-CNN) fingerprint classifier is proposed that extracts: 1) fine-grained features from the input image and 2) abstract features from explicitly derived representations obtained from the input image. The multifeature views are fed to a fully connected neural network (NN) to compute a global classification prediction. The classification results show that the SMV-CNN demonstrated an improvement of 2.8\% when compared to baseline CNN consisting of a single grayscale view on an open-source database. Moreover, in comparison with the state-of-the-art residual network (ResNet-50) image classification model, the proposed method performs comparably while being less complex and more efficient during training. The result of classification-based fingerprint identification has shown that the search space is reduced by over 50\% without degradation of identification accuracies.}
    }
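
    A schematic multi-view classifier in PyTorch may help place the architecture family; the branch depths, channel counts and five-class output below are guesses for illustration, not the published SMV-CNN.

    # Sketch: two shallow convolutional branches (grayscale view + a derived
    # view) fused by a fully connected head into a class prediction.
    import torch
    import torch.nn as nn

    class ShallowBranch(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())

        def forward(self, x):
            return self.net(x)

    class MultiViewClassifier(nn.Module):
        def __init__(self, n_classes=5):
            super().__init__()
            self.gray, self.derived = ShallowBranch(), ShallowBranch()
            self.head = nn.Linear(64, n_classes)   # 32 features per branch

        def forward(self, gray_view, derived_view):
            fused = torch.cat([self.gray(gray_view), self.derived(derived_view)], dim=1)
            return self.head(fused)

    model = MultiViewClassifier()
    print(model(torch.randn(2, 1, 128, 128), torch.randn(2, 1, 128, 128)).shape)
    # torch.Size([2, 5]) -> one score per fingerprint class
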
  • M. M. N. Abid, T. Zia, M. Ghafoor, and D. Windridge, “Multi-view convolutional recurrent neural networks for lung cancer nodule identification,” Neurocomputing, vol. 453, p. 299–311, 2021. doi:10.1016/j.neucom.2020.06.144
    [BibTeX] [Abstract] [Download PDF]

    Screening via low-dose Computer Tomography (CT) has been shown to reduce lung cancer mortality rates by at least 20\%. However, the assessment of large numbers of CT scans by radiologists is cost intensive, and potentially produces varying and inconsistent results for differing radiologists (and also for temporally-separated assessments by the same radiologist). To overcome these challenges, computer aided diagnosis systems based on deep learning methods have proved effective in automatic detection and classification of lung cancer. Latterly, interest has focused on the full utilization of the 3D information in CT scans using 3D-CNNs and related approaches. However, such approaches do not intrinsically correlate size and shape information between slices. In this work, an innovative approach Multi-view Convolutional Recurrent Neural Network (MV-CRecNet) is proposed that exploits shape, size and cross-slice variations while learning to identify lung cancer nodules from CT scans. The multiple-views that are passed to the model ensure better generalization and the learning of robust features. We evaluate the proposed MV-CRecNet model on the reference Lung Image Database Consortium and Image Database Resource Initiative and Early Lung Cancer Action Program datasets; six evaluation metrics are applied to eleven comparison models for testing. Results demonstrate that proposed methodology outperforms all of the models against all of the evaluation metrics.

    @article{lincoln47918,
    volume = {453},
    month = {September},
    author = {Mian Muhammad Naeem Abid and Tehseen Zia and Mubeen Ghafoor and David Windridge},
    title = {Multi-view Convolutional Recurrent Neural Networks for Lung Cancer Nodule Identification},
    publisher = {Elsevier},
    year = {2021},
    journal = {Neurocomputing},
    doi = {10.1016/j.neucom.2020.06.144},
    pages = {299--311},
    url = {https://eprints.lincoln.ac.uk/id/eprint/47918/},
    abstract = {Screening via low-dose Computer Tomography (CT) has been shown to reduce lung cancer mortality rates by at least 20\%. However, the assessment of large numbers of CT scans by radiologists is cost intensive, and potentially produces varying and inconsistent results for differing radiologists (and also for temporally-separated assessments by the same radiologist). To overcome these challenges, computer aided diagnosis systems based on deep learning methods have proved effective in automatic detection and classification of lung cancer. Latterly, interest has focused on the full utilization of the 3D information in CT scans using 3D-CNNs and related approaches. However, such approaches do not intrinsically correlate size and shape information between slices. In this work, an innovative approach Multi-view Convolutional Recurrent Neural Network (MV-CRecNet) is proposed that exploits shape, size and cross-slice variations while learning to identify lung cancer nodules from CT scans. The multiple-views that are passed to the model ensure better generalization and the learning of robust features. We evaluate the proposed MV-CRecNet model on the reference Lung Image Database Consortium and Image Database Resource Initiative and Early Lung Cancer Action Program datasets; six evaluation metrics are applied to eleven comparison models for testing. Results demonstrate that proposed methodology outperforms all of the models against all of the evaluation metrics.}
    }
  • A. Zahra, M. Ghafoor, K. Munir, A. Ullah, and Z. U. Abideen, “Application of region-based video surveillance in smart cities using deep learning,” Multimedia tools and applications, 2021. doi:10.1007/s11042-021-11468-w
    [BibTeX] [Abstract] [Download PDF]

    Smart video surveillance helps to build a more robust smart city environment. Varied-angle cameras act as smart sensors, collecting visual data from the smart city environment and transmitting it for further visual analysis. The transmitted visual data must be of high quality for efficient analysis, which is a challenging task when transmitting videos over low-capacity bandwidth communication channels. In the latest smart surveillance cameras, high-quality video transmission is maintained through various video encoding techniques such as high efficiency video coding. However, these video coding techniques still provide limited capabilities, and the demand for high-quality encoding of salient regions such as pedestrians, vehicles, cyclists/motorcyclists and roads in video surveillance systems is still not met. This work is a contribution towards building an efficient salient region-based surveillance framework for smart cities. The proposed framework integrates a deep learning-based video surveillance technique that extracts salient regions from a video frame without information loss, and then encodes them at reduced size. We have applied this approach in diverse smart city case study environments to test the applicability of the framework. The successful results, in terms of bitrate (56.92\%), peak signal-to-noise ratio (5.35 dB) and SR-based segmentation accuracy (92\% and 96\% for two different benchmark datasets), are the outcome of the proposed work. Consequently, the generation of less computationally demanding region-based video data makes the framework adaptable for improving surveillance solutions in smart cities.

    @article{lincoln47914,
    month = {December},
    title = {Application of region-based video surveillance in smart cities using deep learning},
    author = {Asma Zahra and Mubeen Ghafoor and Kamran Munir and Ata Ullah and Zain Ul Abideen},
    publisher = {Springer},
    year = {2021},
    doi = {10.1007/s11042-021-11468-w},
    journal = {Multimedia Tools and Applications},
    url = {https://eprints.lincoln.ac.uk/id/eprint/47914/},
    abstract = {Smart video surveillance helps to build a more robust smart city environment. Varied-angle cameras act as smart sensors, collecting visual data from the smart city environment and transmitting it for further visual analysis. The transmitted visual data must be of high quality for efficient analysis, which is a challenging task when transmitting videos over low-capacity bandwidth communication channels. In the latest smart surveillance cameras, high-quality video transmission is maintained through various video encoding techniques such as high efficiency video coding. However, these video coding techniques still provide limited capabilities, and the demand for high-quality encoding of salient regions such as pedestrians, vehicles, cyclists/motorcyclists and roads in video surveillance systems is still not met. This work is a contribution towards building an efficient salient region-based surveillance framework for smart cities. The proposed framework integrates a deep learning-based video surveillance technique that extracts salient regions from a video frame without information loss, and then encodes them at reduced size. We have applied this approach in diverse smart city case study environments to test the applicability of the framework. The successful results, in terms of bitrate (56.92\%), peak signal-to-noise ratio (5.35 dB) and SR-based segmentation accuracy (92\% and 96\% for two different benchmark datasets), are the outcome of the proposed work. Consequently, the generation of less computationally demanding region-based video data makes the framework adaptable for improving surveillance solutions in smart cities.}
    }
  • D. Laparidou, F. Curtis, J. Akanuwe, K. Goher, N. Siriwardena, and A. Kucukyilmaz, “Patient, carer, and staff perceptions of robotics in motor rehabilitation: a systematic review and qualitative meta-synthesis,” Journal of neuroengineering and rehabilitation, vol. 18, iss. 181, 2021. doi:10.1186/s12984-021-00976-3
    [BibTeX] [Abstract] [Download PDF]

    Background: In recent years, robotic rehabilitation devices have often been used for motor training. However, to date, no systematic reviews of qualitative studies exploring the end-user experiences of robotic devices in motor rehabilitation have been published. The aim of this study was to review end-users' (patients, carers and healthcare professionals) experiences with robotic devices in motor rehabilitation, by conducting a systematic review and thematic meta-synthesis of qualitative studies concerning the users' experiences with such robotic devices. Methods: Qualitative studies and mixed-methods studies with a qualitative element were eligible for inclusion. Nine electronic databases were searched from inception to August 2020, supplemented with internet searches and forward and backward citation tracking from the included studies and review articles. Data were synthesised thematically following the Thomas and Harden approach. The CASP Qualitative Checklist was used to assess the quality of the included studies of this review. Results: The search strategy identified a total of 13,556 citations and after removing duplicates and excluding citations based on title and abstract, and full text screening, 30 studies were included. All studies were considered of acceptable quality. We developed six analytical themes: logistic barriers; technological challenges; appeal and engagement; supportive interactions and relationships; benefits for physical, psychological, and social function(ing); and expanding and sustaining therapeutic options. Conclusions: Despite experiencing technological and logistic challenges, participants found robotic devices acceptable, useful and beneficial (physically, psychologically, and socially), as well as fun and interesting. Having supportive relationships with significant others and positive therapeutic relationships with healthcare staff were considered the foundation for successful rehabilitation and recovery.

    @article{lincoln47708,
    volume = {18},
    number = {181},
    month = {December},
    author = {Despina Laparidou and Ffion Curtis and Joseph Akanuwe and Khaled Goher and Niro Siriwardena and Ayse Kucukyilmaz},
    title = {Patient, carer, and staff perceptions of robotics in motor rehabilitation: a systematic review and qualitative meta-synthesis},
    publisher = {BMC},
    year = {2021},
    journal = {Journal of NeuroEngineering and Rehabilitation},
    doi = {10.1186/s12984-021-00976-3},
    url = {https://eprints.lincoln.ac.uk/id/eprint/47708/},
    abstract = {Background: In recent years, robotic rehabilitation devices have often been used for motor training. However, to date, no systematic reviews of qualitative studies exploring the end-user experiences of robotic devices in motor rehabilitation have been published. The aim of this study was to review end-users' (patients, carers and healthcare professionals) experiences with robotic devices in motor rehabilitation, by conducting a systematic review and thematic meta-synthesis of qualitative studies concerning the users' experiences with such robotic devices.
    Methods: Qualitative studies and mixed-methods studies with a qualitative element were eligible for inclusion. Nine electronic databases were searched from inception to August 2020, supplemented with internet searches and forward and backward citation tracking from the included studies and review articles. Data were synthesised thematically following the Thomas and Harden approach. The CASP Qualitative Checklist was used to assess the quality of the included studies of this review.
    Results: The search strategy identified a total of 13,556 citations and after removing duplicates and excluding citations based on title and abstract, and full text screening, 30 studies were included. All studies were considered of acceptable quality. We developed six analytical themes: logistic barriers; technological challenges; appeal and engagement; supportive interactions and relationships; benefits for physical, psychological, and social function(ing); and expanding and sustaining therapeutic options.
    Conclusions: Despite experiencing technological and logistic challenges, participants found robotic devices acceptable, useful and beneficial (physically, psychologically, and socially), as well as fun and interesting. Having supportive relationships with significant others and positive therapeutic relationships with healthcare staff were considered the foundation for successful rehabilitation and recovery.}
    }
  • A. Seddaoui, C. M. Saaj, and M. H. Nair, “Modeling a controlled-floating space robot for in-space services: a beginner’s tutorial,” Frontiers in robotics and AI, vol. 8, 2021. doi:10.3389/frobt.2021.725333
    [BibTeX] [Abstract] [Download PDF]

    Ground-based applications of robotics and autonomous systems (RASs) are fast advancing, and there is a growing appetite for developing cost-effective RAS solutions for in situ servicing, debris removal, manufacturing, and assembly missions. An orbital space robot, that is, a spacecraft mounted with one or more robotic manipulators, is an inevitable system for a range of future in-orbit services. However, various practical challenges make controlling a space robot extremely difficult compared with its terrestrial counterpart. The state of the art of modeling the kinematics and dynamics of a space robot, operating in the free-flying and free-floating modes, has been well studied by researchers. However, these two modes of operation have various shortcomings, which can be overcome by operating the space robot in the controlled-floating mode. This tutorial article aims to address the knowledge gap in modeling complex space robots operating in the controlled-floating mode and under perturbed conditions. The novel research contribution of this article is the refined dynamic model of a chaser space robot, derived with respect to the moving target while accounting for the internal perturbations due to constantly changing the center of mass, the inertial matrix, Coriolis, and centrifugal terms of the coupled system; it also accounts for the external environmental disturbances. The nonlinear model presented accurately represents the multibody coupled dynamics of a space robot, which is pivotal for precise pose control. Simulation results presented demonstrate the accuracy of the model for closed-loop control. In addition to the theoretical contributions in mathematical modeling, this article also offers a commercially viable solution for a wide range of in-orbit missions.

    @article{lincoln48335,
    volume = {8},
    month = {December},
    author = {Asma Seddaoui and Chakravarthini Mini Saaj and Manu Harikrishnan Nair},
    title = {Modeling a Controlled-Floating Space Robot for In-Space Services: A Beginner's Tutorial},
    publisher = {Frontiers Media},
    journal = {Frontiers in Robotics and AI},
    doi = {10.3389/frobt.2021.725333},
    year = {2021},
    url = {https://eprints.lincoln.ac.uk/id/eprint/48335/},
    abstract = {Ground-based applications of robotics and autonomous systems (RASs) are fast advancing, and there is a growing appetite for developing cost-effective RAS solutions for in situ servicing, debris removal, manufacturing, and assembly missions. An orbital space robot, that is, a spacecraft mounted with one or more robotic manipulators, is an inevitable system for a range of future in-orbit services. However, various practical challenges make controlling a space robot extremely difficult compared with its terrestrial counterpart. The state of the art of modeling the kinematics and dynamics of a space robot, operating in the free-flying and free-floating modes, has been well studied by researchers. However, these two modes of operation have various shortcomings, which can be overcome by operating the space robot in the controlled-floating mode. This tutorial article aims to address the knowledge gap in modeling complex space robots operating in the controlled-floating mode and under perturbed conditions. The novel research contribution of this article is the refined dynamic model of a chaser space robot, derived with respect to the moving target while accounting for the internal perturbations due to constantly changing the center of mass, the inertial matrix, Coriolis, and centrifugal terms of the coupled system; it also accounts for the external environmental disturbances. The nonlinear model presented accurately represents the multibody coupled dynamics of a space robot, which is pivotal for precise pose control. Simulation results presented demonstrate the accuracy of the model for closed-loop control. In addition to the theoretical contributions in mathematical modeling, this article also offers a commercially viable solution for a wide range of in-orbit missions.}
    }
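
    For orientation, the coupled spacecraft-manipulator dynamics discussed in this entry are conventionally written in the standard rigid multibody form below. This is the generic textbook expression, not the paper's refined model; the paper's contribution lies in deriving these terms with respect to the moving target for the controlled-floating mode, with the internal and external perturbations described in the abstract.

    \begin{equation}
      H(q)\,\ddot{q} + C(q,\dot{q})\,\dot{q} + d_{\mathrm{ext}} = \tau
    \end{equation}

    Here $H(q)$ is the configuration-dependent inertia matrix of the coupled base-arm system, $C(q,\dot{q})$ collects the Coriolis and centrifugal terms, $d_{\mathrm{ext}}$ gathers the environmental disturbances, and $\tau$ stacks the base wrench and the joint torques.
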
  • S. Maleki, S. Maleki, and N. R. Jennings, “Unsupervised anomaly detection with LSTM autoencoders using statistical data-filtering,” Applied soft computing, vol. 108, p. 107443, 2021. doi:10.1016/j.asoc.2021.107443
    [BibTeX] [Abstract] [Download PDF]

    To address one of the most challenging industry problems, we develop an enhanced training algorithm for anomaly detection in unlabelled sequential data such as time-series. We propose that the outputs of a well-designed system are drawn from an unknown probability distribution, U, in normal conditions. We introduce a probability criterion based on the classical central limit theorem that allows evaluation of the likelihood that a data-point is drawn from U. This enables the labelling of the data on the fly. Non-anomalous data is passed to train a deep Long Short-Term Memory (LSTM) autoencoder that distinguishes anomalies when the reconstruction error exceeds a threshold. To illustrate our algorithm's efficacy, we consider two real industrial case studies where gradually-developing and abrupt anomalies occur. Moreover, we compare our algorithm's performance with four of the recent and widely used algorithms in the domain. We show that our algorithm achieves considerably better results in that it detects anomalies in a timely manner while others either miss or lag in doing so.

    @article{lincoln44910,
    volume = {108},
    month = {September},
    author = {Sepehr Maleki and Sasan Maleki and Nicholas R. Jennings},
    title = {Unsupervised anomaly detection with LSTM autoencoders using statistical data-filtering},
    publisher = {Elsevier},
    year = {2021},
    journal = {Applied Soft Computing},
    doi = {10.1016/j.asoc.2021.107443},
    pages = {107443},
    url = {https://eprints.lincoln.ac.uk/id/eprint/44910/},
    abstract = {To address one of the most challenging industry problems, we develop an enhanced training algorithm for anomaly detection in unlabelled sequential data such as time-series. We propose that the outputs of a well-designed system are drawn from an unknown probability distribution, U, in normal conditions. We introduce a probability criterion based on the classical central limit theorem that allows evaluation of the likelihood that a data-point is drawn from U. This enables the labelling of the data on the fly. Non-anomalous data is passed to train a deep Long Short-Term Memory (LSTM) autoencoder that distinguishes anomalies when the reconstruction error exceeds a threshold. To illustrate our algorithm's efficacy, we consider two real industrial case studies where gradually-developing and abrupt anomalies occur. Moreover, we compare our algorithm's performance with four of the recent and widely used algorithms in the domain. We show that our algorithm achieves considerably better results in that it detects anomalies in a timely manner while others either miss or lag in doing so.}
    }
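
    The statistical filtering idea, labelling windows by how probable their sample mean is under the normal-condition distribution U via the central limit theorem, has a compact core; the sketch below is illustrative (the window size, running statistics and 3-sigma threshold are assumed, not taken from the paper).

    # Hypothetical CLT-based filter for labelling data on the fly; windows
    # that pass would be kept for LSTM-autoencoder training.
    import numpy as np

    def clt_filter(window, mu, sigma, z_crit=3.0):
        """Keep a window whose sample mean is plausible under U ~ (mu, sigma)."""
        z = (window.mean() - mu) / (sigma / np.sqrt(len(window)))
        return abs(z) <= z_crit

    rng = np.random.default_rng(1)
    normal = rng.normal(0.0, 1.0, size=64)     # in-distribution window
    drifted = rng.normal(2.0, 1.0, size=64)    # gradually-developed anomaly
    print(clt_filter(normal, 0.0, 1.0), clt_filter(drifted, 0.0, 1.0))  # True False
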
  • J. Le Louedec and G. Cielniak, “3D shape sensing and deep learning-based segmentation of strawberries,” Computers and electronics in agriculture, vol. 190, 2021. doi:10.1016/j.compag.2021.106374
    [BibTeX] [Abstract] [Download PDF]

    Automation and robotisation of the agricultural sector are seen as a viable solution to socio-economic challenges faced by this industry. This technology often relies on intelligent perception systems providing information about crops, plants and the entire environment. The challenges faced by traditional 2D vision systems can be addressed by modern 3D vision systems which enable straightforward localisation of objects, size and shape estimation, or handling of occlusions. So far, the use of 3D sensing was mainly limited to indoor or structured environments. In this paper, we evaluate modern sensing technologies including stereo and time-of-flight cameras for 3D perception of shape in agriculture and study their usability for segmenting out soft fruit from background based on their shape. To that end, we propose a novel 3D deep neural network which exploits the organised nature of information originating from the camera-based 3D sensors. We demonstrate the superior performance and efficiency of the proposed architecture compared to the state-of-the-art 3D networks. Through a simulated study, we also show the potential of the 3D sensing paradigm for object segmentation in agriculture and provide insights and analysis of what shape quality is needed and expected for further analysis of crops. The results of this work should encourage researchers and companies to develop more accurate and robust 3D sensing technologies to assure their wider adoption in practical agricultural applications.

    @article{lincoln47035,
    volume = {190},
    month = {November},
    author = {Justin Le Louedec and Grzegorz Cielniak},
    title = {3D shape sensing and deep learning-based segmentation of strawberries},
    publisher = {Elsevier},
    journal = {Computers and Electronics in Agriculture},
    doi = {10.1016/j.compag.2021.106374},
    year = {2021},
    url = {https://eprints.lincoln.ac.uk/id/eprint/47035/},
    abstract = {Automation and robotisation of the agricultural sector are seen as a viable solution to socio-economic challenges
    faced by this industry. This technology often relies on intelligent perception systems providing information about
    crops, plants and the entire environment. The challenges faced by traditional 2D vision systems can be addressed
    by modern 3D vision systems which enable straightforward localisation of objects, size and shape estimation, or
    handling of occlusions. So far, the use of 3D sensing was mainly limited to indoor or structured environments. In
    this paper, we evaluate modern sensing technologies including stereo and time-of-flight cameras for 3D
    perception of shape in agriculture and study their usability for segmenting out soft fruit from background based
    on their shape. To that end, we propose a novel 3D deep neural network which exploits the organised nature of
    information originating from the camera-based 3D sensors. We demonstrate the superior performance and efficiency of the proposed architecture compared to the state-of-the-art 3D networks. Through a simulated study, we also show the potential of the 3D sensing paradigm for object segmentation in agriculture and provide insights and analysis of what shape quality is needed and expected for further analysis of crops. The results of this
    work should encourage researchers and companies to develop more accurate and robust 3D sensing technologies
    to assure their wider adoption in practical agricultural applications.}
    }
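
    The "organised" structure such networks exploit can be pictured as keeping the sensor's pixel grid, so per-point geometry stacks into an image-like tensor that ordinary 2-D convolutions can consume. A small sketch under that assumption (resolution and channel layout are illustrative, not the paper's architecture):

    import numpy as np

    H, W = 480, 640                               # sensor resolution
    xyz = np.zeros((H, W, 3), dtype=np.float32)   # organised cloud: XYZ per pixel
    valid = (xyz[..., 2] > 0).astype(np.float32)  # valid-depth mask channel
    net_input = np.concatenate([xyz, valid[..., None]], axis=-1)  # (H, W, 4)
    # net_input can be fed to a standard 2-D CNN backbone, avoiding the
    # costly neighbourhood searches needed for unordered point sets.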
  • T. Liu, X. Sun, C. Hu, Q. Fu, and S. Yue, “A multiple pheromone communication system for swarm intelligence,” Ieee access, vol. 9, 2021. doi:10.1109/ACCESS.2021.3124386
    [BibTeX] [Abstract] [Download PDF]

    Pheromones are chemical substances essential for communication among social insects. In the application of swarm intelligence to real micro mobile robots, the deployment of a single virtual pheromone has emerged recently as a powerful real-time method for indirect communication. However, these studies usually exploit only one kind of pheromone in their task, neglecting the crucial fact that in the world of real insects, multiple pheromones play important roles in shaping stigmergic behaviors such as foraging or nest building. To explore the multiple-pheromone mechanisms which enable robots to solve complex collective tasks efficiently, we introduce an artificial multiple pheromone system (ColCOSΦ) to support swarm intelligence research by enabling multiple robots to deploy and react to multiple pheromones simultaneously. The proposed system ColCOSΦ uses optical signals to emulate different evaporating chemical substances, i.e. pheromones. These emulated pheromones are represented by trails displayed on a wide LCD display screen positioned horizontally, on which multiple miniature robots can move freely. The color sensors beneath the robots can detect and identify lingering “pheromones” on the screen. Meanwhile, the release of any pheromone from each robot is enabled by monitoring its positional information over time with an overhead camera. No other communication methods apart from virtual pheromones are employed in this system. Two case studies have been carried out which have verified the feasibility and effectiveness of the proposed system in achieving complex swarm tasks as empowered by multiple pheromones. This novel platform is a timely and powerful tool for research into swarm intelligence.

    @article{lincoln47216,
    volume = {9},
    month = {November},
    author = {Tian Liu and Xuelong Sun and Cheng Hu and Qinbing Fu and Shigang Yue},
    title = {A Multiple Pheromone Communication System for Swarm Intelligence},
    publisher = {IEEE},
    journal = {IEEE Access},
    doi = {10.1109/ACCESS.2021.3124386},
    year = {2021},
    url = {https://eprints.lincoln.ac.uk/id/eprint/47216/},
    abstract = {Pheromones are chemical substances essential for communication among social insects.
    In the application of swarm intelligence to real micro mobile robots, the deployment of a single virtual
    pheromone has emerged recently as a powerful real-time method for indirect communication. However,
    these studies usually exploit only one kind of pheromone in their task, neglecting the crucial fact that in
    the world of real insects, multiple pheromones play important roles in shaping stigmergic behaviors such
    as foraging or nest building. To explore the multiple-pheromone mechanisms which enable robots to solve
    complex collective tasks efficiently, we introduce an artificial multiple pheromone system (ColCOS{\ensuremath{\Phi}}) to
    support swarm intelligence research by enabling multiple robots to deploy and react to multiple pheromones
    simultaneously. The proposed system ColCOS{\ensuremath{\Phi}} uses optical signals to emulate different evaporating
    chemical substances i.e. pheromones. These emulated pheromones are represented by trails displayed on a
    wide LCD display screen positioned horizontally, on which multiple miniature robots can move freely. The
    color sensors beneath the robots can detect and identify lingering "pheromones" on the screen. Meanwhile,
    the release of any pheromone from each robot is enabled by monitoring its positional information over time
    with an overhead camera. No other communication methods apart from virtual pheromones are employed in
    this system. Two case studies have been carried out which have verified the feasibility and effectiveness of
    the proposed system in achieving complex swarm tasks as empowered by multiple pheromones. This novel
    platform is a timely and powerful tool for research into swarm intelligence.}
    }
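
    The core mechanism, several co-existing "pheromones" that robots deposit and sense, and that evaporate over time, can be sketched as one decaying 2-D grid per pheromone. The class below is a toy illustration; the exponential evaporation rule and all names are assumptions, not the ColCOSΦ implementation:

    import numpy as np

    class PheromoneField:
        def __init__(self, shape, evaporation):
            # one grid per pheromone; evaporation maps name -> decay in (0, 1)
            self.grids = {n: np.zeros(shape) for n in evaporation}
            self.evaporation = evaporation

        def deposit(self, name, x, y, amount=1.0):
            self.grids[name][y, x] += amount        # robot leaves a trail

        def sense(self, name, x, y):
            return self.grids[name][y, x]           # colour-sensor readout

        def step(self):
            for n in self.grids:                    # evaporation each tick
                self.grids[n] *= 1.0 - self.evaporation[n]

    field = PheromoneField((100, 100), {"food": 0.02, "home": 0.01})
    field.deposit("food", 10, 10)
    field.step()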
  • F. Camara, N. Bellotto, S. Cosar, D. Nathanael, M. Althoff, J. Wu, J. Ruenz, A. Dietrich, and C. Fox, “Pedestrian models for autonomous driving part i: low-level models, from sensing to tracking,” Ieee transactions on intelligent transport systems, vol. 22, iss. 10, p. 6131–6151, 2021. doi:10.1109/TITS.2020.3006768
    [BibTeX] [Abstract] [Download PDF]

    Autonomous vehicles (AVs) must share space with pedestrians, both in carriageway cases such as cars at pedestrian crossings and off-carriageway cases such as delivery vehicles navigating through crowds on pedestrianized high-streets. Unlike static obstacles, pedestrians are active agents with complex, interactive motions. Planning AV actions in the presence of pedestrians thus requires modelling of their probable future behaviour as well as detecting and tracking them. This narrative review article is Part I of a pair, together surveying the current technology stack involved in this process, organising recent research into a hierarchical taxonomy ranging from low-level image detection to high-level psychology models, from the perspective of an AV designer. This self-contained Part I covers the lower levels of this stack, from sensing, through detection and recognition, up to tracking of pedestrians. Technologies at these levels are found to be mature and available as foundations for use in high-level systems, such as behaviour modelling, prediction and interaction control.

    @article{lincoln41705,
    volume = {22},
    number = {10},
    month = {October},
    author = {Fanta Camara and Nicola Bellotto and Serhan Cosar and Dimitris Nathanael and Mathias Althoff and Jingyuan Wu and Johannes Ruenz and Andre Dietrich and Charles Fox},
    title = {Pedestrian Models for Autonomous Driving Part I: Low-Level Models, from Sensing to Tracking},
    publisher = {IEEE},
    year = {2021},
    journal = {IEEE Transactions on Intelligent Transport Systems},
    doi = {10.1109/TITS.2020.3006768},
    pages = {6131--6151},
    url = {https://eprints.lincoln.ac.uk/id/eprint/41705/},
    abstract = {Autonomous vehicles (AVs) must share space with pedestrians, both in carriageway cases such as cars at pedestrian crossings and off-carriageway cases such as delivery vehicles navigating through crowds on pedestrianized high-streets. Unlike static obstacles, pedestrians are active agents with complex, interactive motions. Planning AV actions in the presence of pedestrians thus requires modelling of their probable future behaviour as well as detecting and tracking them. This narrative review article is Part I of a pair, together surveying the current technology stack involved in this process, organising recent research into a hierarchical taxonomy ranging from low-level image detection to high-level psychology models, from the perspective of an AV designer. This self-contained Part I covers the lower levels of this stack, from sensing, through detection and recognition, up to tracking of pedestrians. Technologies at these levels are found to be mature and available as foundations for use in high-level systems, such as behaviour modelling, prediction and interaction control.}
    }
  • H. Isakhani, S. Yue, C. Xiong, and W. Chen, “Aerodynamic analysis and optimization of gliding locust wing using nash genetic algorithm,” Aiaa journal, vol. 59, iss. 10, p. 4002–4013, 2021. doi:10.2514/1.J060298
    [BibTeX] [Abstract] [Download PDF]

    Natural fliers glide and minimize wing articulation to conserve energy for enduring, long-range flights. Elucidating the underlying physiology of such capability could potentially address numerous challenging problems in flight engineering. This study investigates the aerodynamic characteristics of an insect species called the desert locust (Schistocerca gregaria) with extraordinary gliding skills at low Reynolds number. Here, locust tandem wings are subjected to a computational fluid dynamics (CFD) simulation using 2D and 3D Navier-Stokes equations revealing fore-hindwing interactions, and the influence of their corrugations on the aerodynamic performance. Furthermore, the obtained CFD results are mathematically parameterized using the PARSEC method and optimized based on a novel fusion of Genetic Algorithms and Nash game theory to achieve a Nash equilibrium, yielding the optimized wings. It was concluded that the lift-drag (gliding) ratio of the optimized profiles was improved by at least 77\% and 150\% compared to the original wing and the published literature, respectively. Ultimately, the profiles are integrated and analyzed using 3D CFD simulations that demonstrated a 14\% performance improvement, validating the proposed wing models for further fabrication and rapid prototyping presented in a future study.

    @article{lincoln47016,
    volume = {59},
    number = {10},
    month = {October},
    author = {Hamid Isakhani and Shigang Yue and Caihua Xiong and Wenbin Chen},
    title = {Aerodynamic Analysis and Optimization of Gliding Locust Wing Using Nash Genetic Algorithm},
    publisher = {Aerospace Research Central},
    year = {2021},
    journal = {AIAA Journal},
    doi = {10.2514/1.J060298},
    pages = {4002--4013},
    url = {https://eprints.lincoln.ac.uk/id/eprint/47016/},
    abstract = {Natural fliers glide and minimize wing articulation to conserve energy for enduring, long-range flights. Elucidating the underlying physiology of such capability could potentially address numerous challenging problems in flight engineering. This study investigates the aerodynamic characteristics of an insect species called the desert locust (Schistocerca gregaria) with extraordinary gliding skills at low Reynolds number. Here, locust tandem wings are subjected to a computational fluid dynamics (CFD) simulation using 2D and 3D Navier-Stokes equations revealing fore-hindwing interactions, and the influence of their corrugations on the aerodynamic performance. Furthermore, the obtained CFD results are mathematically parameterized using the PARSEC method and optimized based on a novel fusion of Genetic Algorithms and Nash game theory to achieve a Nash equilibrium, yielding the optimized wings.
    It was concluded that the lift-drag (gliding) ratio of the optimized profiles was improved by at least 77\% and 150\% compared to the original wing and the published literature, respectively.
    Ultimately, the profiles are integrated and analyzed using 3D CFD simulations that demonstrated a 14\% performance improvement, validating the proposed wing models for further fabrication and rapid prototyping presented in a future study.}
    }
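
    The optimisation couples a genetic algorithm with a Nash game: each "player" (e.g. one wing profile) evolves its parameters while the other's stay fixed, and alternation continues until neither improves. The sketch below is a generic toy version of that scheme, not the paper's implementation; real use would wrap CFD evaluations as the objectives, and the GA here is deliberately minimal:

    import numpy as np

    def ga_best(objective, fixed, dim, pop=40, gens=60, seed=0):
        # Tiny real-coded GA maximising objective(params, fixed).
        rng = np.random.default_rng(seed)
        P = rng.uniform(-1.0, 1.0, (pop, dim))
        for _ in range(gens):
            scores = np.array([objective(p, fixed) for p in P])
            elite = P[np.argsort(scores)[-pop // 2:]]            # fitter half
            P = np.vstack([elite, elite + rng.normal(0, 0.1, elite.shape)])
        scores = np.array([objective(p, fixed) for p in P])
        return P[np.argmax(scores)]

    def nash_ga(obj_a, obj_b, dim, rounds=10):
        # Alternating best responses; the fixed point approximates a
        # Nash equilibrium between the two players.
        a, b = np.zeros(dim), np.zeros(dim)
        for _ in range(rounds):
            a = ga_best(obj_a, b, dim)
            b = ga_best(obj_b, a, dim)
        return a, b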
  • R. Polvara, F. D. Duchetto, G. Neumann, and M. Hanheide, “Navigate-and-seek: a robotics framework for people localization in agricultural environments,” Ieee robotics and automation letters, vol. 6, iss. 4, p. 6577–6584, 2021. doi:10.1109/LRA.2021.3094557
    [BibTeX] [Abstract] [Download PDF]

    The agricultural domain offers a working environment where many human laborers are nowadays employed to maintain or harvest crops, with huge potential for productivity gains through the introduction of robotic automation. Detecting and localizing humans reliably and accurately in such an environment, however, is a prerequisite to many services offered by fleets of mobile robots collaborating with human workers. Consequently, in this paper, we expand on the concept of a topological particle filter (TPF) to accurately and individually localize and track workers in a farm environment, integrating information from heterogeneous sensors and combining local active sensing (exploiting a robot's onboard sensing employing a Next-Best-Sense planning approach) and global localization (using affordable IoT GNSS devices). We validate the proposed approach in topologies created for the deployment of robotics fleets to support fruit pickers in a real farm environment. By combining multi-sensor observations on the topological level complemented by active perception through the NBS approach, we show that we can improve the accuracy of picker localization in comparison to prior work.

    @article{lincoln45627,
    volume = {6},
    number = {4},
    month = {October},
    author = {Riccardo Polvara and Francesco Del Duchetto and Gerhard Neumann and Marc Hanheide},
    title = {Navigate-and-Seek: a Robotics Framework for People Localization in Agricultural Environments},
    publisher = {IEEE},
    year = {2021},
    journal = {IEEE Robotics and Automation Letters},
    doi = {10.1109/LRA.2021.3094557},
    pages = {6577--6584},
    url = {https://eprints.lincoln.ac.uk/id/eprint/45627/},
    abstract = {The agricultural domain offers a working environment where many human laborers are nowadays employed to maintain or harvest crops, with huge potential for productivity gains through the introduction of robotic automation. Detecting and localizing humans reliably and accurately in such an environment, however, is a prerequisite to many services offered by fleets of mobile robots collaborating with human workers. Consequently, in this paper, we expand on the concept of a topological particle filter (TPF) to accurately and individually localize and track workers in a farm environment, integrating information from heterogeneous sensors and combining local active sensing (exploiting a robot's onboard sensing employing a Next-Best-Sense planning approach) and global localization (using affordable IoT GNSS devices). We validate the proposed approach in topologies created for the deployment of robotics fleets to support fruit pickers in a real farm environment. By combining multi-sensor observations on the topological level complemented by active perception through the NBS approach, we show that we can improve the accuracy of picker localization in comparison to prior work.}
    }
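
    The topological particle filter can be pictured as a particle filter whose state space is the set of map nodes: particles random-walk along edges and are re-weighted by how well each node explains the current observation (from the robot's sensors or a GNSS tag). A toy predict-update-resample cycle follows; all names and the uniform motion model are assumptions, not the paper's code:

    import random
    from collections import Counter

    def tpf_step(particles, graph, obs_likelihood):
        # predict: each particle stays put or hops to a neighbouring node
        moved = [random.choice(graph[p] + [p]) for p in particles]
        # update + resample: weight nodes by the observation likelihood
        weights = [obs_likelihood(n) for n in moved]
        return random.choices(moved, weights=weights, k=len(moved))

    graph = {"row1": ["row2"], "row2": ["row1", "row3"], "row3": ["row2"]}
    particles = ["row1"] * 50 + ["row2"] * 50
    particles = tpf_step(particles, graph,
                         lambda n: 0.9 if n == "row2" else 0.05)
    print(Counter(particles).most_common(1))   # most likely picker location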
  • I. Gould, J. D. Waegemaeker, D. Tzemi, I. Wright, S. Pearson, E. Ruto, L. Karrasch, L. S. Christensen, H. Aronsson, S. Eich-Greatorex, G. Bosworth, and P. Vellinga, “Salinization threats to agriculture across the north sea region,” in Future of sustainable agriculture in saline environments, Taylor and francis, 2021, p. 71–92. doi:10.1201/9781003112327-5
    [BibTeX] [Abstract] [Download PDF]

    Salinization represents a global threat to agricultural productivity and human livelihoods. Historically, much saline research has focussed on arid or semi-arid systems. The North Sea region of Europe has seen very little attention in the salinity literature; however, under future climate predictions, this is likely to change. In this review, we outline the mechanisms of salinization across the North Sea region. These include the intrusion of saline groundwater, coastal flooding, irrigation and airborne salinization. The extent of each degradation process is explored for the United Kingdom, Belgium, the Netherlands, Germany, Denmark, Sweden and Norway. The potential threat of salinization across the North Sea varies in a complex and diverse manner. However, we find an overall lack of data, both of water monitoring and soil sampling, on salinity in the region. For agricultural systems in the region to adapt against future salinization risk, more extensive mapping and monitoring of salinization need to be conducted, along with the development of appropriate land management practices.

    @incollection{lincoln45934,
    booktitle = {Future of Sustainable Agriculture in Saline Environments},
    title = {Salinization Threats to Agriculture across the North Sea Region},
    author = {Iain Gould and Jeroen De Waegemaeker and Domna Tzemi and Isobel Wright and Simon Pearson and Eric Ruto and Leena Karrasch and Laurids Siig Christensen and Henrik Aronsson and Susanne Eich-Greatorex and Gary Bosworth and Pier Vellinga},
    publisher = {Taylor and Francis},
    year = {2021},
    pages = {71--92},
    doi = {10.1201/9781003112327-5},
    url = {https://eprints.lincoln.ac.uk/id/eprint/45934/},
    abstract = {Salinization represents a global threat to agricultural productivity and human livelihoods. Historically, much saline research has focussed on arid or semi-arid systems. The North Sea region of Europe has seen very little attention in the salinity literature; however, under future climate predictions, this is likely to change. In this review, we outline the mechanisms of salinization across the North Sea region. These include the intrusion of saline groundwater, coastal flooding, irrigation and airborne salinization. The extent of each degradation process is explored for the United Kingdom, Belgium, the Netherlands, Germany, Denmark, Sweden and Norway. The potential threat of salinization across the North Sea varies in a complex and diverse manner. However, we find an overall lack of data, both of water monitoring and soil sampling, on salinity in the region. For agricultural systems in the region to adapt against future salinization risk, more extensive mapping and monitoring of salinization need to be conducted, along with the development of appropriate land management practices.}
    }
  • E. Black, N. Maudet, and S. Parsons, “Argumentation-based dialogue,” in Handbook of formal argumentation, volume 2, D. Gabbay, M. Giacomin, G. R. Simari, and M. Thimm, Eds., College publications, 2021.
    [BibTeX] [Abstract] [Download PDF]

    Dialogue is fundamental to argumentation, providing a dialectical basis for establishing which arguments are acceptable. Argumentation can also be used as the basis for dialogue. In such “argumentation-based” dialogues, participants take part in an exchange of arguments, and the mechanisms of argumentation are used to establish what participants take to be acceptable at the end of the exchange. This chapter considers such dialogues, discussing the elements that are required in order to carry out argumentation-based dialogues, giving examples, and discussing open issues.

    @incollection{lincoln48566,
    booktitle = {Handbook of Formal Argumentation, Volume 2},
    editor = {Dov Gabbay and Massimiliano Giacomin and Guillermo R. Simari and Matthias Thimm},
    month = {August},
    title = {Argumentation-based Dialogue},
    author = {Elizabeth Black and Nicolas Maudet and Simon Parsons},
    publisher = {College Publications},
    year = {2021},
    url = {https://eprints.lincoln.ac.uk/id/eprint/48566/},
    abstract = {Dialogue is fundamental to argumentation, providing a dialectical basis for establishing which arguments are acceptable.
    Argumentation can also be used as the basis for dialogue. In such ``argumentation-based'' dialogues, participants take part in an exchange of arguments, and the mechanisms of argumentation are used to establish what participants take to be acceptable at the end of the exchange. This chapter considers such dialogues, discussing the elements that are required in order to carry out argumentation-based dialogues, giving examples, and discussing open issues.}
    }
  • A. Bikakis, A. Cohen, W. Dvorak, G. Flouris, and S. Parsons, “Joint attacks and accrual in argumentation frameworks,” in Handbook of formal argumentation, volume 2, D. Gabbay, M. Giacomin, G. R. Simari, and M. Thimm, Eds., College publications, 2021.
    [BibTeX] [Abstract] [Download PDF]

    While modelling arguments, it is often useful to represent “joint attacks”, i.e., cases where multiple arguments jointly attack another (note that this is different from the case where multiple arguments attack another in isolation). Based on this remark, the notion of joint attacks has been proposed as a useful extension of classical Abstract Argumentation Frameworks, and has been shown to constitute a genuine extension in terms of expressive power. In this chapter, we review various works considering the notion of joint attacks from various perspectives, including abstract and structured frameworks. Moreover, we present results detailing the relation among frameworks with joint attacks and classical argumentation frameworks, computational aspects, and applications of joint attacks. Last but not least, we propose a roadmap for future research on the subject, identifying gaps in current research and important research directions.

    @incollection{lincoln48565,
    booktitle = {Handbook of Formal Argumentation, Volume 2},
    editor = {Dov Gabbay and Massimiliano Giacomin and Guillermo R. Simari and Matthias Thimm},
    month = {August},
    title = {Joint Attacks and Accrual in Argumentation Frameworks},
    author = {Antonis Bikakis and Andrea Cohen and Wolfgang Dvorak and Giorgos Flouris and Simon Parsons},
    publisher = {College Publications},
    year = {2021},
    url = {https://eprints.lincoln.ac.uk/id/eprint/48565/},
    abstract = {While modelling arguments, it is often useful to represent ``joint attacks'', i.e., cases where multiple arguments jointly attack another (note that this is different from the case where multiple arguments attack another in isolation). Based on this remark, the notion of joint attacks has been proposed as a useful extension of classical Abstract Argumentation Frameworks, and has been shown to constitute a genuine extension in terms of expressive power.
    In this chapter, we review various works considering the notion of joint attacks from various perspectives, including abstract and structured frameworks. Moreover, we present results detailing the relation among frameworks with joint attacks and classical argumentation frameworks, computational aspects, and applications of joint attacks.
    Last but not least, we propose a roadmap for future research on the subject, identifying gaps in current research and important research directions.}
    }
  • J. Gao, J. C. Westergaard, and E. Alexandersson, “Computer vision and less complex image analyses to monitor potato traits in fields,” in Solanum tuberosum, D. Dobnik, K. Gruden, Ž. Ramšak, and A. Coll, Eds., New York: Springer, 2021, p. 273–299. doi:10.1007/978-1-0716-1609-3_13
    [BibTeX] [Abstract] [Download PDF]

    Field phenotyping of crops has recently gained considerable attention leading to the development of new protocols for recording plant traits of interest. Phenotyping in field conditions can be performed by various cameras, sensors and imaging platforms. In this chapter, practical aspects as well as advantages and disadvantages of above-ground phenotyping platforms are highlighted with a focus on drone-based imaging and relevant image analysis for field conditions. It includes useful planning tips for experimental design as well as protocols, sources, and tools for image acquisition, pre-processing, feature extraction and machine learning highlighting the possibilities with computer vision. Several open and free resources are given to speed up data analysis for biologists. This chapter targets professionals and researchers with limited computational background performing or wishing to perform phenotyping of field crops, especially with a drone-based platform. The advice and methods described focus on potato but can mostly be used for field phenotyping of any crops.

    @incollection{lincoln46316,
    number = {2354},
    month = {August},
    author = {Junfeng Gao and Jesper Cairo Westergaard and Erik Alexandersson},
    series = {Methods in Molecular Biology},
    booktitle = {Solanum tuberosum},
    editor = {David Dobnik and Kristina Gruden and {\v Z}iva Ram{\v s}ak and Anna Coll},
    title = {Computer Vision and Less Complex Image Analyses to Monitor Potato Traits in Fields},
    address = {New York},
    publisher = {Springer},
    year = {2021},
    doi = {10.1007/978-1-0716-1609-3\_13},
    pages = {273--299},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46316/},
    abstract = {Field phenotyping of crops has recently gained considerable attention leading to the development of new protocols for recording plant traits of interest. Phenotyping in field conditions can be performed by various cameras, sensors and imaging platforms. In this chapter, practical aspects as well as advantages and disadvantages of above-ground phenotyping platforms are highlighted with a focus on drone-based imaging and relevant image analysis for field conditions. It includes useful planning tips for experimental design as well as protocols, sources, and tools for image acquisition, pre-processing, feature extraction and machine learning highlighting the possibilities with computer vision. Several open and free resources are given to speed up data analysis for biologists.
    This chapter targets professionals and researchers with limited computational background performing or wishing to perform phenotyping of field crops, especially with a drone-based platform. The advice and methods described focus on potato but can mostly be used for field phenotyping of any crops.}
    }
  • H. Rogers, B. Dawson, G. Clawson, and C. Fox, “Extending an open source hardware agri-robot with simulation and plant re-identification,” in Oxford autonomous intelligent machines and systems conference 2021, 2021.
    [BibTeX] [Abstract] [Download PDF]

    Previous work constructed an open source hardware (OSH) agri-robot platform for swarming agriculture research. We summarise recent developments from the community on this platform as a case study of how an OSH project can develop. The original platform has been extended by contributions of a simulation package and a vision-based plant-re-identification system used as a target for blockchain-based food assurance. Gaining new participants in OSH projects requires explicit instructions on how to contribute. The system hardware and software are open-sourced at https://github.com/Harry-Rogers/PiCar as part of this publication. We invite others to get involved and extend the platform.

    @inproceedings{lincoln46862,
    booktitle = {Oxford Autonomous Intelligent Machines and Systems Conference 2021},
    month = {October},
    title = {Extending an Open Source Hardware Agri-Robot with Simulation and Plant Re-identification},
    author = {Harry Rogers and Benjamin Dawson and Garry Clawson and Charles Fox},
    publisher = {Oxford AIMS Conference 2021},
    year = {2021},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46862/},
    abstract = {Previous work constructed an open source hardware (OSH)
    agri-robot platform for swarming agriculture research. We summarise
    recent developments from the community on this platform as a case study
    of how an OSH project can develop. The original platform has been
    extended by contributions of a simulation package and a vision-based
    plant-re-identification system used as a target for blockchain-based food
    assurance. Gaining new participants in OSH projects requires explicit
    instructions on how to contribute. The system hardware and software are
    open-sourced at https://github.com/Harry-Rogers/PiCar as part of this
    publication. We invite others to get involved and extend the platform.}
    }
  • A. Henry and C. Fox, “Open source hardware automated guitar player,” in International conference on computer music, 2021.
    [BibTeX] [Abstract] [Download PDF]

    We present the first open source hardware (OSH) design and build of a physical robotic automated guitar player. Users' own instruments being of different shapes and sizes, the system is designed to be used and/or modified to physically attach to a wide range of instruments. Design objectives include ease and low cost of build. Automation is split into three modules: the left-hand fretting, right-hand string picking, and right-hand palm muting. Automation is performed using cheap electric linear solenoids. Software APIs are designed and implemented for both low-level actuator control and high-level music performance.

    @inproceedings{lincoln45327,
    booktitle = {International Conference on Computer Music},
    month = {July},
    title = {Open source hardware automated guitar player},
    author = {Andrew Henry and Charles Fox},
    publisher = {ICMC},
    year = {2021},
    url = {https://eprints.lincoln.ac.uk/id/eprint/45327/},
    abstract = {We present the first open source hardware (OSH) design and build of a physical robotic automated guitar player. Users' own instruments being of different shapes and sizes, the system is designed to be used and/or modified to physically attach to a wide range of instruments. Design objectives include ease and low cost of build. Automation is split into three modules: the left-hand fretting, right-hand string picking, and right-hand palm muting. Automation is performed using cheap electric linear solenoids. Software APIs are designed and implemented for both low-level actuator control and high-level music performance.}
    }
  • A. Mohtasib, G. Neumann, and H. Cuayahuitl, “A study on dense and sparse (visual) rewards in robot policy learning,” in Towards autonomous robotic systems conference (taros), 2021.
    [BibTeX] [Abstract] [Download PDF]

    Deep Reinforcement Learning (DRL) is a promising approach for teaching robots new behaviour. However, one of its main limitations is the need for carefully hand-coded reward signals by an expert. We argue that it is crucial to automate the reward learning process so that new skills can be taught to robots by their users. To address such automation, we consider task success classifiers using visual observations to estimate the rewards in terms of task success. In this work, we study the performance of multiple state-of-the-art deep reinforcement learning algorithms under different types of reward: Dense, Sparse, Visual Dense, and Visual Sparse rewards. Our experiments in various simulation tasks (Pendulum, Reacher, Pusher, and Fetch Reach) show that while DRL agents can learn successful behaviours using visual rewards when the goal targets are distinguishable, their performance may decrease if the task goal is not clearly visible. Our results also show that visual dense rewards are more successful than visual sparse rewards and that there is no single best algorithm for all tasks.

    @inproceedings{lincoln45983,
    booktitle = {Towards Autonomous Robotic Systems Conference (TAROS)},
    month = {September},
    title = {A Study on Dense and Sparse (Visual) Rewards in Robot Policy Learning},
    author = {Abdalkarim Mohtasib and Gerhard Neumann and Heriberto Cuayahuitl},
    publisher = {University of Lincoln},
    year = {2021},
    url = {https://eprints.lincoln.ac.uk/id/eprint/45983/},
    abstract = {Deep Reinforcement Learning (DRL) is a promising approach for teaching robots new behaviour. However, one of its main limitations is the need for carefully hand-coded reward signals by an expert. We argue that it is crucial to automate the reward learning process so that new skills can be taught to robots by their users. To address such automation, we consider task success classifiers using visual observations to estimate the rewards in terms of task success. In this work, we study the performance of multiple state-of-the-art deep reinforcement learning algorithms under different types of reward: Dense, Sparse, Visual Dense, and Visual Sparse rewards. Our experiments in various simulation tasks (Pendulum, Reacher, Pusher, and Fetch Reach) show that while DRL agents can learn successful behaviours using visual rewards when the goal targets are distinguishable, their performance may decrease if the task goal is not clearly visible. Our results also show that visual dense rewards are more successful than visual sparse rewards and that there is no single best algorithm for all tasks.}
    }
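
    A schematic of the four reward types the study compares, differing only in where the signal comes from; all function names and the shaping choices are assumptions for illustration:

    import numpy as np

    def sparse_reward(success):
        # 1 only when the task succeeds, 0 otherwise
        return 1.0 if success else 0.0

    def dense_reward(distance_to_goal):
        # shaped signal that grows as the end-effector nears the goal
        return -float(distance_to_goal)

    def visual_reward(success_classifier, image, sparse=False):
        # "visual" variants: a task-success classifier maps the camera
        # image to an estimated success probability used as the reward
        p = float(success_classifier(image))
        return (1.0 if p > 0.5 else 0.0) if sparse else p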
  • I. Hroob, R. Polvara, S. M. Mellado, G. Cielniak, and M. Hanheide, “Benchmark of visual and 3d lidar slam systems in simulation environment for vineyards,” in Towards autonomous robotic systems conference (taros), 2021.
    [BibTeX] [Abstract] [Download PDF]

    In this work, we present a comparative analysis of the trajectories estimated from various Simultaneous Localization and Mapping (SLAM) systems in a simulation environment for vineyards. The vineyard environment is challenging for SLAM methods, due to visual appearance changes over time, uneven terrain, and repeated visual patterns. For this reason, we created a simulation environment specifically for vineyards to help study SLAM systems in such a challenging environment. We evaluated the following SLAM systems: LIO-SAM, StaticMapping, ORB-SLAM2, and RTAB-MAP in four different scenarios. The mobile robot used in this study is equipped with 2D and 3D lidars, IMU, and RGB-D camera (Kinect v2). The results show good and encouraging performance of RTAB-MAP in such an environment.

    @inproceedings{lincoln45642,
    booktitle = {Towards Autonomous Robotic Systems Conference (TAROS)},
    title = {Benchmark of visual and 3D lidar SLAM systems in simulation environment for vineyards},
    author = {Ibrahim Hroob and Riccardo Polvara and Sergio Molina Mellado and Grzegorz Cielniak and Marc Hanheide},
    year = {2021},
    journal = {The 22nd Towards Autonomous Robotic Systems Conference},
    url = {https://eprints.lincoln.ac.uk/id/eprint/45642/},
    abstract = {In this work, we present a comparative analysis of the trajectories estimated from various Simultaneous Localization and Mapping (SLAM) systems in a simulation environment for vineyards. The vineyard environment is challenging for SLAM methods, due to visual appearance changes over time, uneven terrain, and repeated visual patterns. For this reason, we created a simulation environment specifically for vineyards to help study SLAM systems in such a challenging environment. We evaluated the following SLAM systems: LIO-SAM, StaticMapping, ORB-SLAM2, and RTAB-MAP in four different scenarios. The mobile robot used in this study is equipped with 2D and 3D lidars, IMU, and RGB-D camera (Kinect v2). The results show good and encouraging performance of RTAB-MAP in such an environment.}
    }
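
    Benchmarks like this are typically scored by comparing each estimated trajectory against ground truth, for example with absolute trajectory error. A simplified version (mean-centring in place of the full SE(3) alignment used by standard tooling) might look like:

    import numpy as np

    def ate_rmse(estimated, ground_truth):
        # estimated, ground_truth: (N, 3) time-associated positions
        est = estimated - estimated.mean(axis=0)
        gt = ground_truth - ground_truth.mean(axis=0)
        return float(np.sqrt(np.mean(np.sum((est - gt) ** 2, axis=1))))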
  • H. Harman and E. Sklar, “Auction-based task allocation mechanisms for managing fruit harvesting tasks,” in Ukras21, 2021, p. 47–48. doi:10.31256/Dg2Zp9Q
    [BibTeX] [Abstract] [Download PDF]

    Multi-robot task allocation mechanisms are designed to distribute a set of activities fairly amongst a set of robots. Frequently, this can be framed as a multi-criteria optimisation problem, for example minimising cost while maximising rewards. In soft fruit farms, tasks, such as picking ripe fruit at harvest time, are assigned to human labourers. The work presented here explores the application of multi-robot task allocation mechanisms to the complex problem of managing a heterogeneous workforce to undertake activities associated with harvesting soft fruit.

    @inproceedings{lincoln45349,
    booktitle = {UKRAS21},
    title = {Auction-based Task Allocation Mechanisms for Managing Fruit Harvesting Tasks},
    author = {Helen Harman and Elizabeth Sklar},
    year = {2021},
    pages = {47--48},
    doi = {10.31256/Dg2Zp9Q},
    url = {https://eprints.lincoln.ac.uk/id/eprint/45349/},
    abstract = {Multi-robot task allocation mechanisms are designed to distribute a set of activities fairly amongst a set of robots. Frequently, this can be framed as a multi-criteria optimisation problem, for example minimising cost while maximising rewards. In soft fruit farms, tasks, such as picking ripe fruit at harvest time, are assigned to human labourers. The work presented here explores the application of multi-robot task allocation mechanisms to the complex problem of managing a heterogeneous workforce to undertake activities associated with harvesting soft fruit.}
    }
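
    The family of mechanisms explored here can be illustrated with a greedy sequential single-item auction: each task is awarded to the bidder quoting the lowest cost given what it already holds. A minimal sketch, not the paper's exact mechanism; the cost model is an assumption:

    def sequential_auction(tasks, workers, cost):
        # cost(worker, task, current_load) -> bid; lowest bid wins the task
        allocation = {w: [] for w in workers}
        for t in tasks:
            winner = min(workers, key=lambda w: cost(w, t, allocation[w]))
            allocation[winner].append(t)
        return allocation

    # toy usage: bids grow with a worker's current load
    alloc = sequential_auction(
        tasks=["rowA", "rowB", "rowC"],
        workers=["picker1", "picker2"],
        cost=lambda w, t, load: len(load) + (0.5 if w == "picker1" else 1.0))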
  • M. Hua, Q. Fu, W. Duan, and S. Yue, “Investigating refractoriness in collision perception neuronal model,” in 2021 international joint conference on neural networks (ijcnn), 2021. doi:10.1109/IJCNN52387.2021.9533965
    [BibTeX] [Abstract] [Download PDF]

    Currently, collision detection methods based on visual cues are still challenged by several factors including ultra-fast approaching velocity and noisy signal. Taking inspiration from nature, though the computational models of lobula giant movement detectors (LGMDs) in locust's visual pathways have demonstrated positive impacts on addressing these problems, there remains potential for improvement. In this paper, we propose a novel method mimicking neuronal refractoriness, i.e. the refractory period (RP), and further investigate its functionality and efficacy in the classic LGMD neural network model for collision perception. Compared with previous works, the two phases constructing RP, namely the absolute refractory period (ARP) and relative refractory period (RRP), are computationally implemented through a ‘link (L) layer’ located between the photoreceptor and the excitation layers to realise the dynamic characteristic of RP in the discrete time domain. The L layer, consisting of local time-varying thresholds, represents a sort of mechanism that allows photoreceptors to be activated individually and selectively by comparing the intensity of each photoreceptor to its corresponding local threshold established by its last output. More specifically, while the local threshold can merely be augmented by larger output, it shrinks exponentially over time. Our experimental outcomes show that, to some extent, the investigated mechanism not only enhances the LGMD model in terms of reliability and stability when faced with ultra-fast approaching objects, but also improves its performance against visual stimuli polluted by Gaussian or Salt-Pepper noise. This research demonstrates that the modelling of refractoriness is effective in collision perception neuronal models, and promising to address the aforementioned collision detection challenges.

    @inproceedings{lincoln46692,
    booktitle = {2021 International Joint Conference on Neural Networks (IJCNN)},
    month = {September},
    title = {Investigating Refractoriness in Collision Perception Neuronal Model},
    author = {Mu Hua and Qinbing Fu and Wenting Duan and Shigang Yue},
    publisher = {IEEE},
    year = {2021},
    doi = {10.1109/IJCNN52387.2021.9533965},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46692/},
    abstract = {Currently, collision detection methods based on visual cues are still challenged by several factors including ultrafast approaching velocity and noisy signal. Taking inspiration from nature, though the computational models of lobula giant
    movement detectors (LGMDs) in locust's visual pathways have demonstrated positive impacts on addressing these problems, there remains potential for improvement. In this paper, we propose a novel method mimicking neuronal refractoriness, i.e. the refractory period (RP), and further investigate its functionality and efficacy in the classic LGMD neural network model for collision perception. Compared with previous works, the two phases constructing RP, namely the absolute refractory period (ARP) and relative refractory period (RRP) are computationally implemented through a 'link (L) layer' located between the photoreceptor and the excitation layers to realise the dynamic characteristic of RP in discrete time domain. The L layer, consisting of local time-varying thresholds, represents a sort of mechanism that allows photoreceptors to be activated individually and selectively by comparing the intensity of each photoreceptor to its corresponding local threshold established by its last output. More specifically, while the local threshold can merely be augmented by larger output, it shrinks exponentially over time. Our experimental outcomes show that, to some extent, the investigated mechanism not only enhances the LGMD model in terms of reliability and stability when faced with ultra-fast approaching objects, but also improves its performance against visual stimuli polluted by Gaussian or Salt-Pepper noise. This research demonstrates the modelling of refractoriness is effective in collision perception neuronal models, and promising to address the aforementioned collision detection challenges.}
    }
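
    The refractory mechanism reduces to a simple per-pixel rule: a photoreceptor passes its input only when it exceeds a local threshold; firing raises that threshold, and thresholds otherwise relax exponentially. A one-step sketch with assumed constants, not the paper's exact formulation:

    import numpy as np

    def link_layer_step(intensity, threshold, decay=0.9, gain=1.0):
        # fire only where input beats the local, time-varying threshold
        out = np.where(intensity > threshold, intensity, 0.0)
        # larger outputs raise their own threshold; all thresholds decay
        new_threshold = decay * threshold + gain * out
        return out, new_threshold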
  • U. A. Zahidi and G. Cielniak, “Active learning for crop-weed discrimination by image classification from convolutional neural network's feature pyramid levels,” in 13th international conference on computer vision systems (icvs 2021), 2021. doi:10.1007/978-3-030-87156-7_20
    [BibTeX] [Abstract] [Download PDF]

    The amount of effort required for high-quality data acquisition and labelling for adequate supervised learning drives the need for building an efficient and effective image sampling strategy. We propose a novel Batch Mode Active Learning that blends Region Convolutional Neural Network's (RCNN) Feature Pyramid Network (FPN) levels together and employs t-distributed Stochastic Neighbour Embedding (t-SNE) classification for selecting an incremental batch based on feature similarity. Later, K-means clustering is performed on t-SNE instances for the selected sample size of images. Results show that t-SNE classification on merged FPN feature maps outperforms the approach based on RGB images directly, random sampling and maximum entropy-based image sampling schemes. For comparison, we employ a publicly available data set of images of Sugar beet for a crop-weed discrimination task together with our newly acquired annotated images of Romaine and Apollo lettuce crops at different growth stages. Batch sampling on all datasets by the proposed method shows that only 60\% of images are required to produce precision/recall statistics similar to the complete dataset. Two lettuce datasets used in our experiments are publicly available (Lettuce datasets: https://bit.ly/3g7Owc5) to facilitate further research opportunities.

    @inproceedings{lincoln46648,
    month = {September},
    author = {Usman A. Zahidi and Grzegorz Cielniak},
    booktitle = {13th International Conference, ICVS 2021},
    address = {International Conference on Computer Vision Systems ICVS 2021: Computer Vision Systems},
    title = {Active Learning for Crop-Weed Discrimination by Image Classification from Convolutional Neural Network's Feature Pyramid Levels},
    publisher = {Springer Verlag},
    doi = {10.1007/978-3-030-87156-7\_20},
    year = {2021},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46648/},
    abstract = {The amount of effort required for high-quality data acquisition and labelling for adequate supervised learning drives the need for building an efficient and effective image sampling strategy. We propose a novel Batch Mode Active Learning that blends Region Convolutional Neural Network's (RCNN) Feature Pyramid Network (FPN) levels together and employs t-distributed Stochastic Neighbour Embedding (t-SNE) classification for selecting incremental batch based on feature similarity. Later, K-means clustering is performed on t-SNE instances for the selected sample size of images. Results show that t-SNE classification on merged FPN feature maps outperforms the approach based on RGB images directly, random sampling and maximum entropy-based image sampling schemes. For comparison, we employ a publicly available data set of images of Sugar beet for a crop-weed discrimination task together with our newly acquired annotated images of Romaine and Apollo lettuce crops at different growth stages. Batch sampling on all datasets by the proposed method shows that only 60\% of images are required to produce precision/recall statistics similar to the complete dataset. Two lettuce datasets used in our experiments are publicly available (Lettuce datasets: https://bit.ly/3g7Owc5) to facilitate further research opportunities.}
    }
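
    The batch-selection step can be approximated in a few lines: embed the pooled FPN descriptors with t-SNE, cluster the embedding with K-means, and label the image nearest each centroid. Pooling, perplexity and other parameters are assumptions; only the overall sampling idea follows the abstract:

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.manifold import TSNE

    def select_batch(features, batch_size, seed=0):
        # features: (N, D) pooled FPN descriptor per unlabelled image
        emb = TSNE(n_components=2, random_state=seed).fit_transform(features)
        km = KMeans(n_clusters=batch_size, n_init=10, random_state=seed).fit(emb)
        return [int(np.argmin(np.linalg.norm(emb - c, axis=1)))
                for c in km.cluster_centers_]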
  • H. Harman and E. Sklar, “A practical application of market-based mechanisms for allocating harvesting tasks,” in 19th international conference on practical applications of agents and multi-agent systems, 2021. doi:10.1007/978-3-030-85739-4_10
    [BibTeX] [Abstract] [Download PDF]

    Market-based task allocation mechanisms are designed to distribute a set of tasks fairly amongst a set of agents. Such mechanisms have been shown to be highly effective in simulation and when applied to multi-robot teams. Application of such mechanisms in real-world settings can present a range of practical challenges, such as knowing what is the best point in a complex process to allocate tasks and what information to consider in determining the allocation. The work presented here explores the application of market-based task allocation mechanisms to the problem of managing a heterogeneous human workforce to undertake activities associated with harvesting soft fruit. Soft fruit farms aim to maximise yield (the volume of fruit picked) while minimising labour time (and thus the cost of picking). Our work evaluates experimentally several different strategies for practical application of market-based mechanisms for allocating tasks to workers on soft fruit farms, identifying methods that appear best when simulated using a multi-agent model of farm activity.

    @inproceedings{lincoln46475,
    month = {September},
    author = {Helen Harman and Elizabeth Sklar},
    booktitle = {19th International Conference on Practical Applications of Agents and Multi-Agent Systems},
    title = {A Practical Application of Market-based Mechanisms for Allocating Harvesting Tasks},
    publisher = {Springer},
    journal = {Advances in Practical Applications of Agents, Multi-Agent Systems and Social Good: The PAAMS Collection},
    doi = {10.1007/978-3-030-85739-4\_10},
    year = {2021},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46475/},
    abstract = {Market-based task allocation mechanisms are designed to distribute a set of tasks fairly amongst a set of agents. Such mechanisms have been shown to be highly effective in simulation and when applied to multi-robot teams. Application of such mechanisms in real-world settings can present a range of practical challenges, such as knowing what is the best point in a complex process to allocate tasks and what information to consider in determining the allocation. The work presented here explores the application of market-based task allocation mechanisms to the problem of managing a heterogeneous human workforce to undertake activities associated with harvesting soft fruit. Soft fruit farms aim to maximise yield (the volume of fruit picked) while minimising labour time (and thus the cost of picking). Our work evaluates experimentally several different strategies for practical application of market-based mechanisms for allocating tasks to workers on soft fruit farms, identifying methods that appear best when simulated using a multi-agent model of farm activity.}
    }
  • A. L. Zorrilla, I. M. Torres, and H. Cuayahuitl, “Audio embeddings help to learn better dialogue policies,” in Ieee automatic speech recognition and understanding, 2021.
    [BibTeX] [Abstract] [Download PDF]

    Neural transformer architectures have gained a lot of interest for text-based dialogue management in the last few years. They have shown high learning capabilities for open domain dialogue with huge amounts of data and also for domain adaptation in task-oriented setups. But the potential benefits of exploiting the users’ audio signal have rarely been explored in such frameworks. In this work, we combine text dialogue history representations generated by a GPT-2 model with audio embeddings obtained by the recently released Wav2Vec2 transformer model. We jointly fine-tune these models to learn dialogue policies via supervised learning and two policy gradient-based reinforcement learning algorithms. Our experimental results, using the DSTC2 dataset and a simulated user model capable of sampling audio turns, reveal that audio embeddings lead to overall higher task success (than without using audio embeddings) with statistically significant results across evaluation metrics and training algorithms.

    @inproceedings{lincoln46800,
    booktitle = {IEEE Automatic Speech Recognition and Understanding},
    month = {December},
    title = {Audio Embeddings Help to Learn Better Dialogue Policies},
    author = {Asier Lopez Zorrilla and M. Ines Torres and Heriberto Cuayahuitl},
    publisher = {IEEE},
    year = {2021},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46800/},
    abstract = {Neural transformer architectures have gained a lot of interest for text-based dialogue management in the last few years. They have shown high learning capabilities for open domain dialogue with huge amounts of data and also for domain adaptation in task-oriented setups. But the potential benefits of exploiting the users' audio signal have rarely been explored in such frameworks. In this work, we combine text dialogue history representations generated by a GPT-2 model with audio embeddings obtained by the recently released Wav2Vec2 transformer model. We jointly fine-tune these models to learn dialogue policies via supervised learning and two policy gradient-based reinforcement learning algorithms. Our experimental results, using the DSTC2 dataset and a simulated user model capable of sampling audio turns, reveal that audio embeddings lead to overall higher task success (than without using audio embeddings) with statistically significant results across evaluation metrics and training algorithms.}
    }
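
    At its simplest, the fusion the abstract describes amounts to joining the two representations into a single policy input; plain concatenation and the 768-dimensional sizes below are assumptions for illustration, not the paper's exact architecture:

    import numpy as np

    def policy_state(gpt2_history_vec, wav2vec2_audio_vec):
        # text dialogue-history vector + utterance-level audio embedding
        return np.concatenate([gpt2_history_vec, wav2vec2_audio_vec])

    state = policy_state(np.zeros(768, np.float32), np.zeros(768, np.float32))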
  • L. Korir, A. Drake, M. Collison, C. C. Villa, E. Sklar, and S. Pearson, “Current and emergent economic impacts of covid-19 and brexit on uk fresh produce and horticultural businesses,” in The 94th annual conference of the agricultural economics society (aes), 2021. doi:10.22004/ag.econ.312068
    [BibTeX] [Abstract] [Download PDF]

    This paper describes a study designed to investigate the current and emergent impacts of Covid-19 and Brexit on UK horticultural businesses. Various characteristics of UK horticultural production, notably labour reliance and import dependence, make it an important sector for policymakers concerned to understand the effects of these disruptive events as we move from 2020 into 2021. The study design prioritised timeliness, using a rapid survey to gather information from a relatively small (n = 19) but indicative group of producers. The main novelty of the results is to suggest that a very substantial majority of producers either plan to scale back production in 2021 (47\%) or have been unable to make plans for 2021 because of uncertainty (37\%). The results also add to broader evidence that the sector has experienced profound labour supply challenges, with implications for labour cost and quality. The study discusses the implications of these insights from producers in terms of productivity and automation, as well as in terms of broader economic implications. Although automation is generally recognised as the long-term future for the industry (89\%), it appeared in the study as the second most referred short-term option (32\%) only after changes to labour schemes and policies (58\%). Currently, automation plays a limited role in contributing to the UK’s horticultural workforce shortage due to economic and socio-political uncertainties. The conclusion highlights policy recommendations and future investigative intentions, as well as suggesting methodological and other discussion points for the research community.

    @inproceedings{lincoln46582,
    booktitle = {The 94th Annual Conference of the Agricultural Economics Society (AES)},
    month = {January},
    title = {Current and Emergent Economic Impacts of Covid-19 and Brexit on UK Fresh Produce and Horticultural Businesses},
    author = {Lilian Korir and Archie Drake and Martin Collison and Carolina Camacho Villa and Elizabeth Sklar and Simon Pearson},
    year = {2021},
    doi = {10.22004/ag.econ.312068},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46582/},
    abstract = {This paper describes a study designed to investigate the current and emergent impacts of Covid-19 and Brexit on UK horticultural businesses. Various characteristics of UK horticultural production, notably labour reliance and import dependence, make it an important sector for policymakers concerned to understand the effects of these disruptive events as we move from 2020 into 2021. The study design prioritised timeliness, using a rapid survey to gather information from a relatively small (n = 19) but indicative group of producers. The main novelty of the results is to suggest that a very substantial majority of producers either plan to scale back production in 2021 (47\%) or have been unable to make plans for 2021 because of uncertainty (37\%). The results also add to broader evidence that the sector has experienced profound labour supply challenges, with implications for labour cost and quality. The study discusses the implications of these insights from producers in terms of productivity and automation, as well as in terms of broader economic implications. Although automation is generally recognised as the long-term future for the industry (89\%), it appeared in the study as the second most referred short-term option (32\%) only after changes to labour schemes and policies (58\%). Currently, automation plays a limited role in contributing to the UK's horticultural workforce shortage due to economic and socio-political uncertainties. The conclusion highlights policy recommendations and future investigative intentions, as well as suggesting methodological and other discussion points for the research community.}
    }
  • M. Cédérick, I. Ferrané, and H. Cuayahuitl, “Reward-based environment states for robot manipulation policy learning,” in Neurips 2021 workshop on deployable decision making in embodied systems (ddm), 2021.
    [BibTeX] [Abstract] [Download PDF]

    Training robot manipulation policies is a challenging and open problem in robotics and artificial intelligence. In this paper we propose a novel and compact state representation based on the rewards predicted from an image-based task success classifier. Our experiments, using the Pepper robot in simulation with two deep reinforcement learning algorithms on a grab-and-lift task, reveal that our proposed state representation can achieve up to 97\% task success using our best policies.

    @inproceedings{lincoln47522,
    booktitle = {NeurIPS 2021 Workshop on Deployable Decision Making in Embodied Systems (DDM)},
    month = {December},
    title = {Reward-Based Environment States for Robot Manipulation Policy Learning},
    author = {Mouliets C{\'e}d{\'e}rick and Isabelle Ferran{\'e} and Heriberto Cuayahuitl},
    year = {2021},
    url = {https://eprints.lincoln.ac.uk/id/eprint/47522/},
    abstract = {Training robot manipulation policies is a challenging and open problem in robotics and artificial intelligence. In this paper we propose a novel and compact state representation based on the rewards predicted from an image-based task success classifier. Our experiments -- using the Pepper robot in simulation with two deep reinforcement learning algorithms on a grab-and-lift task -- reveal that our proposed state representation can achieve up to 97\% task success using our best policies.}
    }
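
    The compact state representation described above can be sketched in a few lines. This is a minimal illustration under stated assumptions: success_classifier is a hypothetical stand-in for the paper's trained image-based success classifier, and the 8-step reward buffer is an arbitrary choice, not the authors' design.

    import numpy as np

    # Hypothetical stand-in for a trained image-based task success classifier:
    # maps a camera frame to a predicted reward in [0, 1].
    def success_classifier(frame: np.ndarray) -> float:
        return float(np.clip(frame.mean() / 255.0, 0.0, 1.0))  # placeholder logic

    class RewardStateEncoder:
        """Compact RL state: a rolling buffer of predicted rewards."""
        def __init__(self, horizon: int = 8):
            self.buffer = np.zeros(horizon, dtype=np.float32)

        def update(self, frame: np.ndarray) -> np.ndarray:
            self.buffer = np.roll(self.buffer, -1)
            self.buffer[-1] = success_classifier(frame)
            return self.buffer.copy()  # this vector is the policy's observation

    encoder = RewardStateEncoder()
    state = encoder.update(np.random.randint(0, 256, (64, 64, 3)))
    print(state.shape)  # (8,) -- far more compact than raw images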
  • S. Mghames, M. Hanheide, and A. G. Esfahani, “Interactive movement primitives: planning to push occluding pieces for fruit picking,” in IEEE/RSJ international conference on intelligent robots and systems (IROS), 2021. doi:10.1109/IROS45743.2020.9341728
    [BibTeX] [Abstract] [Download PDF]

    Robotic technology is increasingly considered the major means of fruit picking. However, picking fruits in a dense cluster imposes a challenging research question in terms of motion/path planning, as conventional planning approaches may not find collision-free movements for the robot to reach-and-pick a ripe fruit within a dense cluster. In such cases, the robot needs to safely push unripe fruits to reach a ripe one. Nonetheless, existing approaches to planning pushing movements in cluttered environments are either computationally expensive or only deal with 2-D cases and are not suitable for fruit picking, where 3-D pushing movements must be computed in a short time. In this work, we present a path planning algorithm for pushing occluding fruits to reach-and-pick a ripe one. Our proposed approach, called Interactive Probabilistic Movement Primitives (I-ProMP), is not computationally expensive (its computation time is in the order of 100 milliseconds) and is readily used for 3-D problems. We demonstrate the efficiency of our approach with pushing unripe strawberries in a simulated polytunnel. Our experimental results confirm I-ProMP successfully pushes table top grown strawberries and reaches a ripe one.

    @inproceedings{lincoln42217,
    booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
    month = {February},
    title = {Interactive Movement Primitives: Planning to Push Occluding Pieces for Fruit Picking},
    author = {Sariah Mghames and Marc Hanheide and Amir Ghalamzan Esfahani},
    year = {2021},
    doi = {10.1109/IROS45743.2020.9341728},
    note = {{\copyright} 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.},
    url = {https://eprints.lincoln.ac.uk/id/eprint/42217/},
    abstract = {Robotic technology is increasingly considered the major means of fruit picking. However, picking fruits in a dense cluster imposes a challenging research question in terms of motion/path planning, as conventional planning approaches may not find collision-free movements for the robot to reach-and-pick a ripe fruit within a dense cluster. In such cases, the robot needs to safely push unripe fruits to reach a ripe one. Nonetheless, existing approaches to planning pushing movements in cluttered environments are either computationally expensive or only deal with 2-D cases and are not suitable for fruit picking, where 3-D pushing movements must be computed in a short time. In this work, we present a path planning algorithm for pushing occluding fruits to reach-and-pick a ripe one. Our proposed approach, called Interactive Probabilistic Movement Primitives (I-ProMP), is not computationally expensive (its computation time is in the order of 100 milliseconds) and is readily used for 3-D problems. We demonstrate the efficiency of our approach with pushing unripe strawberries in a simulated polytunnel. Our experimental results confirm I-ProMP successfully pushes table top grown strawberries and reaches a ripe one.}
    }
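
    As background for the conditioning step that I-ProMP builds on: a Probabilistic Movement Primitive models a trajectory as y_t = \Phi_t^\top w with weights w \sim \mathcal{N}(\mu_w, \Sigma_w), and the standard ProMP update (quoted from the general ProMP literature, not from this paper's specific derivation) conditions that distribution on a desired via-point y_t^* observed with noise \Sigma_{y^*}:

    L = \Sigma_w \Phi_t \left( \Sigma_{y^*} + \Phi_t^\top \Sigma_w \Phi_t \right)^{-1}
    \mu_w^{\mathrm{new}} = \mu_w + L \left( y_t^* - \Phi_t^\top \mu_w \right)
    \Sigma_w^{\mathrm{new}} = \Sigma_w - L \, \Phi_t^\top \Sigma_w

    Pushing an occluding fruit can then be framed as conditioning the trajectory distribution on via-points that clear the approach path.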
  • C. Fox, “MusicHastie: field-based hierarchical music representation,” in International conference on computer music, 2021.
    [BibTeX] [Abstract] [Download PDF]

    MusicHastie is a hierarchical music representation language designed for use in human and automated composition and for human and machine learning based music study and analysis. It represents and manipulates musical structure in a semantic form based on concepts from Schenkerian analysis, western European art music and popular music notations, electronica and some non-western forms such as modes and ragas. The representation is designed to model one form of musical perception by human musicians, so it can be used to aid human understanding and memorization of popular music pieces. An open source MusicHastie to MIDI compiler is released as part of this publication, now including capabilities for electronica MIDI control commands to model structures such as filter sweeps in addition to keys, chords, rhythms, patterns, and melodies.

    @inproceedings{lincoln45328,
    booktitle = {International Conference on Computer Music},
    month = {July},
    title = {MusicHastie: field-based hierarchical music representation},
    author = {Charles Fox},
    publisher = {ICMC},
    year = {2021},
    url = {https://eprints.lincoln.ac.uk/id/eprint/45328/},
    abstract = {MusicHastie is a hierarchical music representation language designed for use in human and automated composition and for human and machine learning based music study and analysis. It represents and manipulates musical structure in a semantic form based on concepts from Schenkerian analysis, western European art music and popular music notations, electronica and some non-western forms such as modes and ragas. The representation is designed to model one form of musical perception by human musicians, so it can be used to aid human understanding and memorization of popular music pieces. An open source MusicHastie to MIDI compiler is released as part of this publication, now including capabilities for electronica MIDI control commands to model structures such as filter sweeps in addition to keys, chords, rhythms, patterns, and melodies.}
    }
  • J. L. Louedec and G. Cielniak, “Gaussian map predictions for 3d surface feature localisation and counting,” in BMVC, 2021.
    [BibTeX] [Abstract] [Download PDF]

    In this paper, we propose to employ a Gaussian map representation to estimate the precise location and count of 3D surface features, addressing the limitations of state-of-the-art methods based on density estimation, which struggle in the presence of local disturbances. Gaussian maps indicate probable object location and can be generated directly from keypoint annotations, avoiding laborious and costly per-pixel annotations. We apply this method to the 3D spheroidal class of objects, which can be projected into a 2D shape representation enabling efficient processing by a neural network GNet, an improved UNet architecture, which generates the likely locations of surface features and their precise count. We demonstrate a practical use of this technique for counting strawberry achenes, which is used as a fruit quality measure in phenotyping applications. The results of training the proposed system on several hundred 3D scans of strawberries from a publicly available dataset demonstrate the accuracy and precision of the system, which outperforms the state-of-the-art density-based methods for this application.

    @inproceedings{lincoln48667,
    booktitle = {BMVC},
    month = {November},
    title = {Gaussian map predictions for 3D surface feature localisation and counting},
    author = {Justin Le Louedec and Grzegorz Cielniak},
    publisher = {BMVA},
    year = {2021},
    url = {https://eprints.lincoln.ac.uk/id/eprint/48667/},
    abstract = {In this paper, we propose to employ a Gaussian map representation to estimate the precise location and count of 3D surface features, addressing the limitations of state-of-the-art methods based on density estimation, which struggle in the presence of local disturbances. Gaussian maps indicate probable object location and can be generated directly from keypoint annotations, avoiding laborious and costly per-pixel annotations. We apply this method to the 3D spheroidal class of objects, which can be projected into a 2D shape representation enabling efficient processing by a neural network GNet, an improved UNet architecture, which generates the likely locations of surface features and their precise count. We demonstrate a practical use of this technique for counting strawberry achenes, which is used as a fruit quality measure in phenotyping applications. The results of training the proposed system on several hundred 3D scans of strawberries from a publicly available dataset demonstrate the accuracy and precision of the system, which outperforms the state-of-the-art density-based methods for this application.}
    }
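
    The Gaussian map idea is easy to make concrete. The sketch below, a toy illustration rather than the paper's GNet pipeline, renders a target map from keypoint (achene) annotations and recovers a count from a predicted map by local-maximum detection; sigma and the peak-detection parameters are assumptions.

    import numpy as np
    from scipy.ndimage import maximum_filter

    def gaussian_map(keypoints, shape, sigma=3.0):
        """Render one Gaussian per keypoint -- cheap to generate from dot annotations."""
        ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
        out = np.zeros(shape, dtype=np.float32)
        for ky, kx in keypoints:
            g = np.exp(-((ys - ky) ** 2 + (xs - kx) ** 2) / (2 * sigma ** 2))
            out = np.maximum(out, g)
        return out

    def count_peaks(pred, threshold=0.5, size=5):
        """Count local maxima above a threshold -- one per detected surface feature."""
        local_max = (pred == maximum_filter(pred, size=size)) & (pred > threshold)
        return int(local_max.sum())

    target = gaussian_map([(20, 20), (20, 40), (50, 30)], (64, 64))
    print(count_peaks(target))  # 3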
  • A. Mohtasib, A. G. Esfahani, N. Bellotto, and H. Cuayahuitl, “Neural task success classifiers for robotic manipulation from few real demonstrations,” in International joint conference on neural networks (IJCNN), 2021.
    [BibTeX] [Abstract] [Download PDF]

    Robots learning a new manipulation task from a small number of demonstrations are increasingly demanded in different workspaces. A classifier model assessing the quality of actions can predict the successful completion of a task, which can be used by intelligent agents for action-selection. This paper presents a novel classifier that learns to classify task completion only from a few demonstrations. We carry out a comprehensive comparison of different neural classifiers, e.g. fully connected-based, fully convolutional-based, sequence2sequence-based, and domain adaptation-based classification. We also present a new dataset including five robot manipulation tasks, which is publicly available. We compared the performances of our novel classifier and the existing models using our dataset and the MIME dataset. The results suggest domain adaptation and timing-based features improve success prediction. Our novel model, i.e. fully convolutional neural network with domain adaptation and timing features, achieves an average classification accuracy of 97.3\% and 95.5\% across tasks in both datasets, whereas state-of-the-art classifiers without domain adaptation and timing features only achieve 82.4\% and 90.3\%, respectively.

    @inproceedings{lincoln45559,
    booktitle = {International Joint Conference on Neural Networks (IJCNN)},
    month = {July},
    title = {Neural Task Success Classifiers for Robotic Manipulation from Few Real Demonstrations},
    author = {Abdalkarim Mohtasib and Amir Ghalamzan Esfahani and Nicola Bellotto and Heriberto Cuayahuitl},
    publisher = {IEEE},
    year = {2021},
    url = {https://eprints.lincoln.ac.uk/id/eprint/45559/},
    abstract = {Robots learning a new manipulation task from a small number of demonstrations are increasingly demanded in different workspaces. A classifier model assessing the quality of actions can predict the successful completion of a task, which can be used by intelligent agents for action-selection. This paper presents a novel classifier that learns to classify task completion only from a few demonstrations. We carry out a comprehensive comparison of different neural classifiers, e.g. fully connected-based, fully convolutional-based, sequence2sequence-based, and domain adaptation-based classification. We also present a new dataset including five robot manipulation tasks, which is publicly available. We compared the performances of our novel classifier and the existing models using our dataset and the MIME dataset. The results suggest domain adaptation and timing-based features improve success prediction. Our novel model, i.e. fully convolutional neural network with domain adaptation and timing features, achieves an average classification accuracy of 97.3\% and 95.5\% across tasks in both datasets, whereas state-of-the-art classifiers without domain adaptation and timing features only achieve 82.4\% and 90.3\%, respectively.}
    }
  • M. Khalid, L. Guevara, M. Hanheide, and S. Parsons, “Assessing the probability of human injury during UV-C treatment of crops by robots,” in 4th UK-RAS conference, 2021. doi:10.31256/Pj6Cz2L
    [BibTeX] [Abstract] [Download PDF]

    This paper describes our work to assure safe autonomy in soft fruit production. The first step was hazard analysis, where all the possible hazards in representative scenarios were identified. Following this analysis, a three-layer safety architecture was identified that will minimise the occurrence of the identified hazards. Most of the hazards are minimised by upper layers, while unavoidable hazards are handled using emergency stops. In parallel, we are using probabilistic model checking to check the probability of a hazard’s occurrence. The results from the model checking will be used to improve safety system architecture.

    @inproceedings{lincoln46541,
    booktitle = {4th UK-RAS Conference},
    month = {July},
    title = {Assessing the probability of human injury during UV-C treatment of crops by robots},
    author = {Muhammad Khalid and Leonardo Guevara and Marc Hanheide and Simon Parsons},
    publisher = {UK-RAS},
    year = {2021},
    doi = {10.31256/Pj6Cz2L},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46541/},
    abstract = {This paper describes our work to assure safe autonomy in soft fruit production. The first step was hazard analysis, where all the possible hazards in representative scenarios were identified. Following this analysis, a three-layer safety architecture was identified that will minimise the occurrence of the identified hazards. Most of the hazards are minimised by upper layers, while unavoidable hazards are handled using emergency stops. In parallel, we are using probabilistic model checking to check the probability of a hazard's occurrence. The results from the model checking will be used to improve safety system architecture.}
    }
  • W. King, L. Pooley, P. Johnson, and K. Elgeneidy, “Design and characterisation of a variable-stiffness soft actuator based on tendon twisting,” in TAROS 2021, 2021.
    [BibTeX] [Abstract] [Download PDF]

    The field of soft robotics aims to address the challenges faced by traditional rigid robots in less structured and dynamic environments that require more adaptive interactions. Taking inspiration from biological organisms, such as octopus tentacles and elephant trunks, soft robots commonly use elastic materials and novel actuation methods to mimic the continuous deformation of their mostly soft bodies. While current robotic manipulators, such as those used in the DaVinci surgical robot, have seen use in precise minimally invasive surgery applications, the capability of soft robotics to provide a greater degree of flexibility and inherently safe interactions shows great promise that motivates further study. Nevertheless, introducing softness consequently opens new challenges in achieving accurate positional control and sufficient force generation often required for manipulation tasks. In this paper, the feasibility of a stiffening mechanism based on tendon-twisting is investigated, as an alternative stiffening mechanism for soft actuators that can be easily scaled as needed based on tendon size, material properties, and arrangements, while offering simple means of controlling a gradual increase in stiffening during operation.

    @inproceedings{lincoln45570,
    booktitle = {Taros 2021},
    month = {September},
    title = {Design and Characterisation of a Variable-Stiffness Soft Actuator Based on Tendon Twisting},
    author = {William King and Luke Pooley and Philip Johnson and Khaled Elgeneidy},
    year = {2021},
    url = {https://eprints.lincoln.ac.uk/id/eprint/45570/},
    abstract = {The field of soft robotics aims to address the challenges faced by traditional rigid robots in less structured and dynamic environments that require more adaptive interactions. Taking inspiration from biological organisms, such as octopus tentacles and elephant trunks, soft robots commonly use elastic materials and novel actuation methods to mimic the continuous deformation of their mostly soft bodies. While current robotic manipulators, such as those used in the DaVinci surgical robot, have seen use in precise minimally invasive surgery applications, the capability of soft robotics to provide a greater degree of flexibility and inherently safe interactions shows great promise that motivates further study. Nevertheless, introducing softness consequently opens new challenges in achieving accurate positional control and sufficient force generation often required for manipulation tasks. In this paper, the feasibility of a stiffening mechanism based on tendon-twisting is investigated, as an alternative stiffening mechanism for soft actuators that can be easily scaled as needed based on tendon size, material properties, and arrangements, while offering simple means of controlling a gradual increase in stiffening during operation.}
    }
  • N. Wagner, R. Kirk, M. Hanheide, and G. Cielniak, “Efficient and robust orientation estimation of strawberries for fruit picking applications,” in IEEE international conference on robotics and automation (ICRA), 2021, p. 13857–1386. doi:10.1109/ICRA48506.2021.9561848
    [BibTeX] [Abstract] [Download PDF]

    Recent developments in agriculture have highlighted the potential of as well as the need for the use of robotics. Various processes in this field can benefit from the proper use of state of the art technology [1], in terms of efficiency as well as quality. One of these areas is the harvesting of ripe fruit. In order to be able to automate this process, a robotic harvester needs to be aware of the full poses of the crop/fruit to be collected in order to perform proper path- and collision planning. The current state of the art mainly considers problems of detection and segmentation of fruit with localisation limited to the 3D position only. The reliable and real-time estimation of the respective orientations remains a mostly unaddressed problem. In this paper, we present a compact and efficient network architecture for estimating the orientation of soft fruit such as strawberries from colour and, optionally, depth images. The proposed system can be automatically trained in a realistic simulation environment. We evaluate the system's performance on simulated datasets and validate its operation on publicly available images of strawberries to demonstrate its practical use. Depending on the amount of training data used, coverage of state space, as well as the availability of RGB-D or RGB data only, mean errors as low as 11° could be achieved.

    @inproceedings{lincoln44426,
    month = {October},
    author = {Nikolaus Wagner and Raymond Kirk and Marc Hanheide and Grzegorz Cielniak},
    booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
    title = {Efficient and Robust Orientation Estimation of Strawberries for Fruit Picking Applications},
    publisher = {IEEE},
    doi = {10.1109/ICRA48506.2021.9561848},
    pages = {13857--1386},
    year = {2021},
    url = {https://eprints.lincoln.ac.uk/id/eprint/44426/},
    abstract = {Recent developments in agriculture have highlighted the potential of as well as the need for the use of robotics. Various processes in this field can benefit from the proper use of state of the art technology [1], in terms of efficiency as well as quality. One of these areas is the harvesting of ripe fruit. In order to be able to automate this process, a robotic harvester needs to be aware of the full poses of the crop/fruit to be collected in order to perform proper path- and collision planning. The current state of the art mainly considers problems of detection and segmentation of fruit with localisation limited to the 3D position only. The reliable and real-time estimation of the respective orientations remains a mostly unaddressed problem. In this paper, we present a compact and efficient network architecture for estimating the orientation of soft fruit such as strawberries from colour and, optionally, depth images. The proposed system can be automatically trained in a realistic simulation environment. We evaluate the system's performance on simulated datasets and validate its operation on publicly available images of strawberries to demonstrate its practical use. Depending on the amount of training data used, coverage of state space, as well as the availability of RGB-D or RGB data only, mean errors as low as 11° could be achieved.}
    }
  • J. C. Mayoral, L. Grimstad, P. J. From, and G. Cielniak, “Integration of a human-aware risk-based braking system into an open-field mobile robot,” in IEEE international conference on robotics and automation (ICRA), 2021, p. 2435–2442. doi:10.1109/ICRA48506.2021.9561522
    [BibTeX] [Abstract] [Download PDF]

    Safety integration components for robotic applications are a mandatory feature for any autonomous mobile application, including human avoidance behaviors. This paper proposes a novel parametrizable scene risk evaluator for open-field applications that uses human motion predictions and pre-defined hazard zones to estimate a braking factor. Parameter optimization uses simulated data. The evaluation is carried out in simulated and real-time scenarios, showing the impact of human motion predictions on risk reduction in agricultural applications.

    @inproceedings{lincoln44427,
    month = {October},
    author = {Jose C. Mayoral and Lars Grimstad and P{\r a}l J. From and Grzegorz Cielniak},
    booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
    title = {Integration of a Human-aware Risk-based Braking System into an Open-Field Mobile Robot},
    publisher = {IEEE},
    doi = {10.1109/ICRA48506.2021.9561522},
    pages = {2435--2442},
    year = {2021},
    url = {https://eprints.lincoln.ac.uk/id/eprint/44427/},
    abstract = {Safety integration components for robotic applications are a mandatory feature for any autonomous mobile application, including human avoidance behaviors. This paper proposes a novel parametrizable scene risk evaluator for open-field applications that uses human motion predictions and pre-defined hazard zones to estimate a braking factor. Parameter optimization uses simulated data. The evaluation is carried out in simulated and real-time scenarios, showing the impact of human motion predictions on risk reduction in agricultural applications.}
    }
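
    A toy version of the braking-factor computation conveys the idea: predict the human's position a short horizon ahead and ramp the robot's speed down as that prediction approaches a pre-defined hazard zone. The zone radii, horizon and constant-velocity model below are illustrative assumptions, not the paper's calibrated parameters.

    import math

    def braking_factor(robot_xy, human_xy, human_vel_xy, horizon=1.5,
                       warn_radius=4.0, stop_radius=1.5):
        # Constant-velocity prediction of the human's position `horizon` seconds ahead.
        hx = human_xy[0] + human_vel_xy[0] * horizon
        hy = human_xy[1] + human_vel_xy[1] * horizon
        d = math.hypot(hx - robot_xy[0], hy - robot_xy[1])
        if d <= stop_radius:
            return 0.0   # inside the stop zone: full stop
        if d >= warn_radius:
            return 1.0   # outside the warning zone: full speed
        return (d - stop_radius) / (warn_radius - stop_radius)  # linear ramp

    # Human 5 m away, walking towards the robot at 2 m/s -> partial braking.
    print(braking_factor((0, 0), (5, 0), (-2, 0)))  # 0.2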
  • T. Liu, X. Sun, C. Hu, Q. Fu, and S. Yue, “A versatile vision-pheromone-communication platform for swarm robotics,” in 2021 IEEE international conference on robotics and automation (ICRA), 2021. doi:10.1109/ICRA48506.2021.9561911
    [BibTeX] [Abstract] [Download PDF]

    This paper describes a versatile platform for swarm robotics research. It integrates multiple pheromone communication with a dynamic visual scene, along with real-time data transmission and localization of multiple robots. The platform has been built for inquiries into social insect behavior and bio-robotics. By introducing a new research scheme to coordinate olfactory and visual cues, it not only complements current swarm robotics platforms, which focus only on pheromone communications, by adding visual interaction, but also may fill an important gap in closing the loop from bio-robotics to neuroscience. We have built a controllable dynamic visual environment based on our previously developed ColCOSΦ (a multi-pheromone platform) by enclosing the arena with LED panels and interacting with the micro mobile robots with a visual sensor. In addition, a wireless communication system has been developed to allow transmission of real-time bi-directional data between multiple micro robot agents and a PC host. A case study combining concepts from the internet of vehicles (IoV) and an insect-vision inspired model has been undertaken to verify the applicability of the presented platform, and to investigate how complex scenarios can be facilitated by making use of this platform.

    @inproceedings{lincoln47322,
    booktitle = {2021 IEEE International Conference on Robotics and Automation (ICRA)},
    month = {October},
    title = {A Versatile Vision-Pheromone-Communication Platform for Swarm Robotics},
    author = {Tian Liu and Xuelong Sun and Cheng Hu and Qinbing Fu and Shigang Yue},
    publisher = {IEEE},
    year = {2021},
    doi = {10.1109/ICRA48506.2021.9561911},
    url = {https://eprints.lincoln.ac.uk/id/eprint/47322/},
    abstract = {This paper describes a versatile platform for swarm robotics research. It integrates multiple pheromone communication with a dynamic visual scene, along with real-time data transmission and localization of multiple robots. The platform has been built for inquiries into social insect behavior and bio-robotics. By introducing a new research scheme to coordinate olfactory and visual cues, it not only complements current swarm robotics platforms, which focus only on pheromone communications, by adding visual interaction, but also may fill an important gap in closing the loop from bio-robotics to neuroscience. We have built a controllable dynamic visual environment based on our previously developed ColCOS$\Phi$ (a multi-pheromone platform) by enclosing the arena with LED panels and interacting with the micro mobile robots with a visual sensor. In addition, a wireless communication system has been developed to allow transmission of real-time bi-directional data between multiple micro robot agents and a PC host. A case study combining concepts from the internet of vehicles (IoV) and an insect-vision inspired model has been undertaken to verify the applicability of the presented platform, and to investigate how complex scenarios can be facilitated by making use of this platform.}
    }
  • T. Choi and G. Cielniak, “Adaptive selection of informative path planning strategies via reinforcement learning,” in 2021 European conference on mobile robots (ECMR), 2021. doi:10.1109/ECMR50962.2021.9568796
    [BibTeX] [Abstract] [Download PDF]

    In our previous work, we designed a systematic policy to prioritize sampling locations to lead to significant accuracy improvement in spatial interpolation by using the prediction uncertainty of Gaussian Process Regression (GPR) as an ‘attraction force’ for deployed robots in path planning. Although the integration with Traveling Salesman Problem (TSP) solvers was also shown to produce relatively short travel distances, we here hypothesise several factors that could decrease the overall prediction precision, because sub-optimal locations may eventually be included in their paths. To address this issue, in this paper, we first explore ‘local planning’ approaches adopting various spatial ranges, within which next sampling locations are prioritized, to investigate their effects on the prediction performance as well as incurred travel distance. Also, Reinforcement Learning (RL)-based high-level controllers are trained to adaptively produce blended plans from a particular set of local planners to inherit unique strengths from that selection depending on the latest prediction states. Our experiments on use cases of temperature monitoring robots demonstrate that the dynamic mixtures of planners can not only generate sophisticated, informative plans that a single planner could not create alone but also ensure significantly reduced travel distances, at no cost of prediction reliability, without the assistance of additional modules for shortest path calculation.

    @inproceedings{lincoln46371,
    booktitle = {2021 European Conference on Mobile Robots (ECMR)},
    month = {October},
    title = {Adaptive Selection of Informative Path Planning Strategies via Reinforcement Learning},
    author = {Taeyeong Choi and Grzegorz Cielniak},
    publisher = {IEEE},
    year = {2021},
    doi = {10.1109/ECMR50962.2021.9568796},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46371/},
    abstract = {In our previous work, we designed a systematic policy to prioritize sampling locations to lead to significant accuracy improvement in spatial interpolation by using the prediction uncertainty of Gaussian Process Regression (GPR) as an ‘attraction force’ for deployed robots in path planning. Although the integration with Traveling Salesman Problem (TSP) solvers was also shown to produce relatively short travel distances, we here hypothesise several factors that could decrease the overall prediction precision, because sub-optimal locations may eventually be included in their paths. To address this issue, in this paper, we first explore ‘local planning’ approaches adopting various spatial ranges, within which next sampling locations are prioritized, to investigate their effects on the prediction performance as well as incurred travel distance. Also, Reinforcement Learning (RL)-based high-level controllers are trained to adaptively produce blended plans from a particular set of local planners to inherit unique strengths from that selection depending on the latest prediction states. Our experiments on use cases of temperature monitoring robots demonstrate that the dynamic mixtures of planners can not only generate sophisticated, informative plans that a single planner could not create alone but also ensure significantly reduced travel distances, at no cost of prediction reliability, without the assistance of additional modules for shortest path calculation.}
    }
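
    The ‘attraction force’ mechanism from the abstract reduces to a simple loop: fit a GPR to the samples collected so far and send the robot to the most uncertain candidate location. A minimal sketch with scikit-learn follows; the synthetic temperature field, kernel and candidate grid are assumptions for illustration.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(0)
    field = lambda x: np.sin(0.5 * x[:, 0]) + 0.1 * x[:, 1]  # unknown temperature field

    X = rng.uniform(0, 10, size=(5, 2))   # locations already sampled by the robot
    y = field(X)
    gpr = GaussianProcessRegressor(kernel=RBF(length_scale=2.0)).fit(X, y)

    # Candidate sampling locations on a grid; pick the one with highest predictive std.
    gx, gy = np.meshgrid(np.linspace(0, 10, 25), np.linspace(0, 10, 25))
    cands = np.column_stack([gx.ravel(), gy.ravel()])
    _, std = gpr.predict(cands, return_std=True)
    print("next sampling location:", cands[np.argmax(std)])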
  • T. Zhivkov, A. Gomez, J. Gao, E. Sklar, and S. Parsons, “The need for speed: how 5G communication can support AI in the field,” in EPSRC UK-RAS network (2021). UKRAS21 conference: robotics at home proceedings, 2021, p. 55–56. doi:10.31256/On8Hj9U
    [BibTeX] [Abstract] [Download PDF]

    Using AI for agriculture requires the fast transmission and processing of large volumes of data. Cost-effective high speed processing may not be possible on-board agricultural vehicles, and suitably fast transmission may not be possible with older generation wireless communications. In response, the work presented here investigates the use of 5G wireless technology to support the deployment of AI in this context.

    @inproceedings{lincoln46574,
    month = {June},
    author = {Tsvetan Zhivkov and Adrian Gomez and Junfeng Gao and Elizabeth Sklar and Simon Parsons},
    booktitle = {EPSRC UK-RAS Network (2021). UKRAS21 Conference: Robotics at home Proceedings},
    title = {The need for speed: How 5G communication can support AI in the field},
    publisher = {UK-RAS},
    doi = {10.31256/On8Hj9U},
    pages = {55--56},
    year = {2021},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46574/},
    abstract = {Using AI for agriculture requires the fast transmission and processing of large volumes of data. Cost-effective high speed processing may not be possible on-board agricultural vehicles, and suitably fast transmission may not be possible with older generation wireless communications. In response, the work presented here investigates the use of 5G wireless technology to support the deployment of AI in this context.}
    }
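
    A back-of-envelope calculation illustrates why transmission speed dominates here. The data volume and uplink rates below are indicative figures chosen for the arithmetic, not measurements from the paper.

    # Transfer time for a 1 GB batch of field imagery at indicative uplink rates.
    gigabyte_bits = 8e9
    for name, mbps in [("older-generation uplink", 20), ("5G uplink", 200)]:
        seconds = gigabyte_bits / (mbps * 1e6)
        print(f"{name}: {seconds:.0f} s")  # 400 s vs 40 s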
  • N. Wagner and G. Cielniak, “Inference of mechanical properties of dynamic objects through active perception,” in Towards autonomous robotic systems conference (TAROS), 2021, p. 430–439. doi:10.1007/978-3-030-89177-0_45
    [BibTeX] [Abstract] [Download PDF]

    Current robotic systems often lack a deeper understanding of their surroundings, even if they are equipped with visual sensors like RGB-D cameras. Knowledge of the mechanical properties of the objects in their immediate surroundings, however, could bring huge benefits to applications such as path planning, obstacle avoidance & removal or estimating object compliance. In this paper, we present a novel approach to inferring mechanical properties of dynamic objects with the help of active perception and frequency analysis of objects’ stimulus responses. We perform FFT on a buffer of image flow maps to identify the spectral signature of objects and from that their eigenfrequency. Combining this with 3D depth information allows us to infer an object’s mass without having to weigh it. We perform experiments on a demonstrator with variable mass and stiffness to test our approach and provide an analysis on the influence of individual properties on the result. By simply applying a controlled amount of force to a system, we were able to infer mechanical properties of systems with an eigenfrequency of around 4.5 Hz in about 2 s. This lab-based feasibility study opens new exciting robotic applications targeting realistic, non-rigid objects such as plants, crops or fabric.

    @inproceedings{lincoln46646,
    month = {October},
    author = {Nikolaus Wagner and Grzegorz Cielniak},
    booktitle = {Towards Autonomous Robotic Systems Conference (TAROS)},
    title = {Inference of Mechanical Properties of Dynamic Objects through Active Perception},
    publisher = {Springer},
    year = {2021},
    journal = {Towards Autonomous Robotic Systems Conference (TAROS) 2021},
    doi = {10.1007/978-3-030-89177-0\_45},
    pages = {430--439},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46646/},
    abstract = {Current robotic systems often lack a deeper understanding of their surroundings, even if they are equipped with visual sensors like RGB-D cameras. Knowledge of the mechanical properties of the objects in their immediate surroundings, however, could bring huge benefits to applications such as path planning, obstacle avoidance \& removal or estimating object compliance.
    In this paper, we present a novel approach to inferring mechanical properties of dynamic objects with the help of active perception and frequency analysis of objects' stimulus responses. We perform FFT on a buffer of image flow maps to identify the spectral signature of objects and from that their eigenfrequency. Combining this with 3D depth information allows us to infer an object's mass without having to weigh it.
    We perform experiments on a demonstrator with variable mass and stiffness to test our approach and provide an analysis on the influence of individual properties on the result. By simply applying a controlled amount of force to a system, we were able to infer mechanical properties of systems with an eigenfrequency of around 4.5 Hz in about 2 s. This lab-based feasibility study opens new exciting robotic applications targeting realistic, non-rigid objects such as plants, crops or fabric.}
    }
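
    The frequency-analysis step can be reproduced on a synthetic signal. In the paper the oscillation comes from a buffer of image-flow maps; here a 1-D displacement trace stands in for it, and the frame rate, stiffness and damping are assumed values.

    import numpy as np

    fs = 30.0                      # assumed camera frame rate (Hz)
    t = np.arange(0, 2.0, 1 / fs)  # ~2 s observation window, as in the paper
    f0 = 4.5                       # eigenfrequency of the demonstrator (Hz)
    signal = np.exp(-0.5 * t) * np.sin(2 * np.pi * f0 * t)  # decaying oscillation

    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    f_est = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin

    # For a linear spring-mass system f = sqrt(k/m) / (2*pi), so with a known
    # (here assumed) stiffness k the mass follows without weighing the object.
    k = 100.0  # N/m, assumed stiffness
    m_est = k / (2 * np.pi * f_est) ** 2
    print(f"eigenfrequency {f_est:.2f} Hz -> inferred mass {m_est:.3f} kg")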
  • L. Guevara, M. Khalid, M. Hanheide, and S. Parsons, “Assessing the probability of human injury during UV-C treatment of crops by robots,” in 4th UK-RAS conference, 2021. doi:10.31256/Pj6Cz2L
    [BibTeX] [Abstract] [Download PDF]

    This paper describes a hazard analysis for an agricultural scenario where a crop is treated by a robot using UV-C light. Although human-robot interactions are not expected, it may be the case that unauthorized people approach the robot while it is operating. These potential human-robot interactions have been identified and modelled as Markov Decision Processes (MDP) and tested in the model checking tool PRISM.

    @inproceedings{lincoln46537,
    booktitle = {4th UK-RAS Conference},
    month = {July},
    title = {Assessing the probability of human injury during UV-C treatment of crops by robots},
    author = {Leonardo Guevara and Muhammad Khalid and Marc Hanheide and Simon Parsons},
    publisher = {UK-RAS},
    year = {2021},
    doi = {10.31256/Pj6Cz2L},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46537/},
    abstract = {This paper describes a hazard analysis for an agricultural scenario where a crop is treated by a robot using UV-C light. Although human-robot interactions are not expected, it may be the case that unauthorized people approach the robot while it is operating. These potential human-robot interactions have been identified and modelled as Markov Decision Processes (MDP) and tested in the model checking tool PRISM.}
    }
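
    The kind of query these PRISM models answer can be mimicked with value iteration on a toy MDP. The states, actions and transition probabilities below are invented for illustration; the authors' actual models come from their hazard analysis.

    import numpy as np

    # States: 0 = robot treating, 1 = person nearby, 2 = safe stop, 3 = injury.
    # Actions: 0 = keep treating, 1 = emergency stop.
    P = np.zeros((2, 4, 4))
    P[0] = [[0.90, 0.10, 0.00, 0.00],
            [0.20, 0.60, 0.05, 0.15],
            [0.00, 0.00, 1.00, 0.00],
            [0.00, 0.00, 0.00, 1.00]]
    P[1] = [[0.00, 0.00, 1.00, 0.00],
            [0.00, 0.00, 0.95, 0.05],
            [0.00, 0.00, 1.00, 0.00],
            [0.00, 0.00, 0.00, 1.00]]

    # Value iteration for Pmax [ eventually injury ] -- the worst-case probability
    # a model checker such as PRISM reports for this style of query.
    v = np.array([0.0, 0.0, 0.0, 1.0])
    for _ in range(1000):
        v = np.maximum(P[0] @ v, P[1] @ v)
    print(f"max probability of injury from the initial state: {v[0]:.3f}")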
  • Z. Maamar and M. Al-Khafajiy, “Cloud-edge coupling to mitigate execution failures,” in Proceedings of the 36th annual ACM symposium on applied computing, 2021, p. 711–718. doi:10.1145/3412841.3442334
    [BibTeX] [Abstract] [Download PDF]

    This paper examines the feasibility of cloud-edge coupling to mitigate execution failures and hence achieve business process continuity. These failures are the result of disruptions that impact the cycles of consuming cloud resources and/or edge resources. Cloud/Edge resources are subject to restrictions like limitedness and non-shareability that increase the complexity of resuming execution operations, to the extent that some of these operations could be halted, resulting in failures. To mitigate failures, cloud and edge resources are synchronized using messages allowing proper consumption of these resources. A Microsoft Azure-based testbed simulating cloud-edge coupling is also presented in the paper.

    @inproceedings{lincoln47575,
    month = {March},
    author = {Zakaria Maamar and Mohammed Al-Khafajiy},
    booktitle = {Proceedings of the 36th Annual ACM Symposium on Applied Computing},
    title = {Cloud-edge coupling to mitigate execution failures},
    publisher = {Association for Computing Machinery},
    doi = {10.1145/3412841.3442334},
    pages = {711--718},
    year = {2021},
    url = {https://eprints.lincoln.ac.uk/id/eprint/47575/},
    abstract = {This paper examines the feasibility of cloud-edge coupling to mitigate execution failures and hence achieve business process continuity. These failures are the result of disruptions that impact the cycles of consuming cloud resources and/or edge resources. Cloud/Edge resources are subject to restrictions like limitedness and non-shareability that increase the complexity of resuming execution operations, to the extent that some of these operations could be halted, resulting in failures. To mitigate failures, cloud and edge resources are synchronized using messages allowing proper consumption of these resources. A Microsoft Azure-based testbed simulating cloud-edge coupling is also presented in the paper.}
    }
  • C. Jansen and E. Sklar, “Predicting artist drawing activity via multi-camera inputs for co-creative drawing,” in Towards autonomous robotic systems conference (TAROS), 2021. doi:10.1007/978-3-030-89177-0_23
    [BibTeX] [Abstract] [Download PDF]

    This paper presents the results of experimentation in computer vision-based perception of an artist drawing with analog media (pen and paper), with the aim to contribute towards a human-robot co-creative drawing framework. Using data gathered from user studies with artists and illustrators, two types of CNN models were designed and evaluated to predict an artist's activity (e.g. are they drawing or not?) and the position of the pen on the canvas based only on a multi-camera input of the drawing surface. Results for different combinations of input sources are presented, with an overall mean accuracy of 95\% (std: 7\%) for predicting when the artist is present and 68\% (std: 15\%) for predicting when the artist is drawing; and mean squared normalised error of 0.0034 (std: 0.0099) for predicting the pen's position on the drawing canvas. These results point toward an autonomous robotic system having an awareness of an artist at work via camera-based input and contribute toward the development of a more fluid physical-to-digital workflow for creative content creation.

    @inproceedings{lincoln46480,
    booktitle = {Towards Autonomous Robotic Systems Conference (TAROS)},
    month = {October},
    title = {Predicting Artist Drawing Activity via Multi-Camera Inputs for Co-Creative Drawing},
    author = {Chipp Jansen and Elizabeth Sklar},
    year = {2021},
    doi = {10.1007/978-3-030-89177-0\_23},
    journal = {Proceedings of the 22nd Towards Autonomous Robotic Systems (TAROS) Conference},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46480/},
    abstract = {This paper presents the results of experimentation in computer vision-based perception of an artist drawing with analog media (pen and paper), with the aim to contribute towards a human-robot co-creative drawing framework. Using data gathered from user studies with artists and illustrators, two types of CNN models were designed and evaluated to predict an artist's activity (e.g. are they drawing or not?) and the position of the pen on the canvas based only on a multi-camera input of the drawing surface. Results for different combinations of input sources are presented, with an overall mean accuracy of 95\% (std: 7\%) for predicting when the artist is present and 68\% (std: 15\%) for predicting when the artist is drawing; and mean squared normalised error of 0.0034 (std: 0.0099) for predicting the pen's position on the drawing canvas. These results point toward an autonomous robotic system having an awareness of an artist at work via camera-based input and contribute toward the development of a more fluid physical-to-digital workflow for creative content creation.}
    }
  • D. Dai, J. Gao, S. Parsons, and E. Sklar, “Small datasets for fruit detection with transfer learning,” in 4th UK-RAS conference, 2021, p. 5–6. doi:10.31256/Nf6Uh8Q
    [BibTeX] [Abstract] [Download PDF]

    A common approach to the problem of fruit detection in images is to design a deep learning network and train a model to locate objects, using bounding boxes to identify regions containing fruit. However, this requires sufficient data and presents challenges for small datasets. Transfer learning, which acquires knowledge from a source domain and brings that to a new target domain, can produce improved performance in the target domain. The work discussed in this paper shows the application of transfer learning for fruit detection with small datasets and presents an analysis of the relationship between the number of training images in the source and target domains.

    @inproceedings{lincoln46542,
    month = {July},
    author = {Dan Dai and Junfeng Gao and Simon Parsons and Elizabeth Sklar},
    booktitle = {4th UK-RAS Conference},
    title = {Small datasets for fruit detection with transfer learning},
    publisher = {UK-RAS},
    doi = {10.31256/Nf6Uh8Q},
    pages = {5--6},
    year = {2021},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46542/},
    abstract = {A common approach to the problem of fruit detection in images is to design a deep learning network and train a model to locate objects, using bounding boxes to identify regions containing fruit. However, this requires sufficient data and presents challenges for small datasets. Transfer learning, which acquires knowledge from a source domain and brings that to a new target domain, can produce improved performance in the target domain. The work discussed in this paper shows the application of transfer learning for fruit detection with small datasets and presents an analysis of the relationship between the number of training images in the source and target domains.}
    }
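
    A typical realisation of this transfer-learning recipe is to start from a detector pre-trained on a large source dataset and retrain only the box-predictor head on the small fruit dataset. The sketch below uses torchvision's Faster R-CNN for illustration; the class count and the frozen-backbone policy are assumptions, not necessarily the paper's exact setup, and it assumes a recent torchvision with the weights API.

    import torch
    import torchvision
    from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

    num_classes = 2  # background + fruit (assumed)
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

    for p in model.backbone.parameters():
        p.requires_grad = False  # keep the source-domain features fixed

    # Replace the detection head so it predicts the target-domain classes.
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

    params = [p for p in model.parameters() if p.requires_grad]
    optimizer = torch.optim.SGD(params, lr=0.005, momentum=0.9)
    # ...then train for a few epochs on the small annotated fruit dataset...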
  • K. Swann, P. Hadley, M. A. Hadley, S. Pearson, A. Badiee, and C. Twitchen, “The effect of light intensity and duration on yield and quality of everbearer and June-bearer strawberry cultivars in a LED lit multi-tiered vertical growing system,” in IX international strawberry symposium, 2021, p. 359–366. doi:10.17660/ActaHortic.2021.1309.52
    [BibTeX] [Abstract] [Download PDF]

    This study aimed to provide insights into the efficient use of supplementary lighting for strawberry crops produced in a multi-tiered LED lit vertical growing system, ascertaining the optimal light intensity and duration, with comparative energy use and costs. Furthermore, the suitability of a premium everbearer strawberry cultivar with a high yield potential was compared with a standard winter glasshouse June-bearer cultivar currently used for out-of-season production in the UK. Three lighting durations (11, 16 and 22 h) provided by LEDs were combined with two light intensities (344 and 227 µmol) to give six light treatments on each tier of a three-tiered system to grow the two cultivars. The everbearer showed a higher yield with a higher correlation with increased lighting and a greater proportion of reproductive growth than the June-bearer. Light intensity and duration increased yield, with duration also increasing sugar content (°Brix). However, even with yields of over 100 t ha-1 recorded in this study, yields are likely to be insufficient to cover the cost of electricity.

    @inproceedings{lincoln45160,
    booktitle = {IX International Strawberry Symposium},
    month = {April},
    title = {The effect of light intensity and duration on yield and quality of everbearer and June-bearer strawberry cultivars in a LED lit multi-tiered vertical growing system},
    author = {K Swann and P Hadley and M. A. Hadley and Simon Pearson and Amir Badiee and C. Twitchen},
    year = {2021},
    pages = {359--366},
    doi = {10.17660/ActaHortic.2021.1309.52},
    url = {https://eprints.lincoln.ac.uk/id/eprint/45160/},
    abstract = {This study aimed to provide insights into the efficient use of supplementary lighting for strawberry crops produced in a multi-tiered LED lit vertical growing system, ascertaining the optimal light intensity and duration, with comparative energy use and costs. Furthermore, the suitability of a premium everbearer strawberry cultivar with a high yield potential was compared with a standard winter glasshouse June-bearer cultivar currently used for out-of-season production in the UK. Three lighting durations (11, 16 and 22 h) provided by LEDs were combined with two light intensities (344 and 227 µmol) to give six light treatments on each tier of a three-tiered system to grow the two cultivars. The everbearer showed a higher yield with a higher correlation with increased lighting and a greater proportion of reproductive growth than the June-bearer. Light intensity and duration increased yield, with duration also increasing sugar content (°Brix). However, even with yields of over 100 t ha-1 recorded in this study, yields are likely to be insufficient to cover the cost of electricity.}
    }
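
    The interaction of intensity and duration is easiest to see as a daily light integral (DLI). Assuming the intensities are PPFD values in µmol per square metre per second (the abstract's unit is only partially recoverable), DLI = PPFD × hours × 3600 / 10^6 in mol per square metre per day; the short script below simply tabulates the six treatments.

    # DLI for each of the six light treatments reported in the study.
    for ppfd in (344, 227):            # assumed PPFD, umol m^-2 s^-1
        for hours in (11, 16, 22):     # lighting duration per day
            dli = ppfd * hours * 3600 / 1e6
            print(f"PPFD {ppfd}, {hours} h/day -> DLI {dli:.1f} mol m^-2 d^-1")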
  • Z. Maamar, M. Al-Khafajiy, and M. Dohan, “An IoT application business-model on top of cloud and fog nodes,” in Advanced information networking and applications, 2021, p. 174–186. doi:10.1007/978-3-030-75075-6_14
    [BibTeX] [Abstract] [Download PDF]

    This paper discusses the design of a business model dedicated to IoT applications that would be deployed on top of cloud and fog resources. This business model features 2 constructs, flow (specialized into data and collaboration) and placement (specialized into processing and storage). On the one hand, the flow construct is about who sends what and to whom, who collaborates with whom, and what restrictions exist on what to send, to whom to send, and with whom to collaborate. On the other hand, the placement construct is about what and how to fragment, where to store, and what restrictions exist on what and how to fragment, and where to store. The paper also discusses the development of a system built upon a deep learning model that recommends how the different flows and placements should be formed. These recommendations consider the technical capabilities of cloud and fog resources as well as the networking topology connecting these resources to things.

    @inproceedings{lincoln47574,
    volume = {226},
    month = {April},
    author = {Zakaria Maamar and Mohammed Al-Khafajiy and Murtada Dohan},
    booktitle = {Advanced Information Networking and Applications},
    title = {An IoT Application Business-Model on Top of Cloud and Fog Nodes},
    publisher = {Springer},
    year = {2021},
    journal = {AINA 2021: Advanced Information Networking and Applications},
    doi = {10.1007/978-3-030-75075-6\_14},
    pages = {174--186},
    url = {https://eprints.lincoln.ac.uk/id/eprint/47574/},
    abstract = {This paper discusses the design of a business model dedicated to IoT applications that would be deployed on top of cloud and fog resources. This business model features 2 constructs, flow (specialized into data and collaboration) and placement (specialized into processing and storage). On the one hand, the flow construct is about who sends what and to whom, who collaborates with whom, and what restrictions exist on what to send, to whom to send, and with whom to collaborate. On the other hand, the placement construct is about what and how to fragment, where to store, and what restrictions exist on what and how to fragment, and where to store. The paper also discusses the development of a system built upon a deep learning model that recommends how the different flows and placements should be formed. These recommendations consider the technical capabilities of cloud and fog resources as well as the networking topology connecting these resources to things.}
    }
  • J. Heselden and G. Das, “CRH*: a deadlock-free framework for scalable prioritised path planning in multi-robot systems,” in Towards autonomous robotic systems conference, 2021. doi:10.1007/978-3-030-89177-0_7
    [BibTeX] [Abstract] [Download PDF]

    Multi-robot systems are an ever-growing tool that can be applied to a wide range of industries to improve productivity and robustness, especially when tasks are distributed in space, time and functionality. Recent works have shown the benefits of multi-robot systems in fields such as warehouse automation, entertainment and agriculture. The work presented in this paper tackles the deadlock problem in multi-robot navigation, in which robots within a common work-space are caught in situations where they are unable to navigate to their targets, being blocked by one another. This problem can be mitigated by efficient multi-robot path planning. Our work focused on the development of a scalable rescheduling algorithm named Conflict Resolution Heuristic A* (CRH*) for decoupled prioritised planning. Extensive experimental evaluation of CRH* was carried out in discrete event simulations of a fleet of autonomous agricultural robots. The results from these experiments proved that the algorithm was both scalable and deadlock-free. Additionally, novel customisation options were included to test further optimisations in system performance. Continuous Assignment and Dynamic Scoring were shown to reduce the make-span of the routing, whilst Combinatorial Heuristics were shown to reduce the impact of outliers on priority orderings.

    @inproceedings{lincoln46453,
    booktitle = {Towards Autonomous Robotic Systems Conference},
    month = {October},
    title = {CRH*: A Deadlock Free Framework for Scalable Prioritised Path Planning in Multi-Robot Systems},
    author = {James Heselden and Gautham Das},
    publisher = {Springer International Publishing},
    year = {2021},
    doi = {10.1007/978-3-030-89177-0\_7},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46453/},
    abstract = {Multi-robot systems are an ever-growing tool that can be applied to a wide range of industries to improve productivity and robustness, especially when tasks are distributed in space, time and functionality. Recent works have shown the benefits of multi-robot systems in fields such as warehouse automation, entertainment and agriculture. The work presented in this paper tackles the deadlock problem in multi-robot navigation, in which robots within a common work-space are caught in situations where they are unable to navigate to their targets, being blocked by one another. This problem can be mitigated by efficient multi-robot path planning. Our work focused on the development of a scalable rescheduling algorithm named Conflict Resolution Heuristic A* (CRH*) for decoupled prioritised planning. Extensive experimental evaluation of CRH* was carried out in discrete event simulations of a fleet of autonomous agricultural robots. The results from these experiments proved that the algorithm was both scalable and deadlock-free. Additionally, novel customisation options were included to test further optimisations in system performance. Continuous Assignment and Dynamic Scoring were shown to reduce the make-span of the routing, whilst Combinatorial Heuristics were shown to reduce the impact of outliers on priority orderings.}
    }
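
    The decoupled prioritised-planning scheme that CRH* extends can be sketched compactly: robots plan one at a time, in priority order, with A* over a time-expanded grid that avoids cells already reserved by higher-priority robots. This toy version handles vertex conflicts only (edge/swap conflicts and the paper's rescheduling heuristics are omitted), and every detail below is illustrative.

    from heapq import heappush, heappop

    def plan(start, goal, cells, reserved, t_max=50):
        """A* over (cell, time); waiting in place is allowed."""
        h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])
        openq = [(h(start), 0, start, [start])]
        seen = set()
        while openq:
            _, t, cell, path = heappop(openq)
            if cell == goal:
                return path
            if (cell, t) in seen or t >= t_max:
                continue
            seen.add((cell, t))
            x, y = cell
            for nxt in [(x, y), (x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
                if nxt in cells and (nxt, t + 1) not in reserved:
                    heappush(openq, (t + 1 + h(nxt), t + 1, nxt, path + [nxt]))
        return None  # would trigger rescheduling in a full implementation

    cells = {(x, y) for x in range(5) for y in range(5)}
    reserved = set()
    for start, goal in [((0, 0), (4, 0)), ((4, 0), (0, 0))]:  # priority order
        path = plan(start, goal, cells, reserved)
        reserved |= {(c, t) for t, c in enumerate(path)}
        print(path)  # the second robot detours around the first one's reservations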
  • K. Heiwolt, T. Duckett, and G. Cielniak, “Deep semantic segmentation of 3D plant point clouds,” in Towards autonomous robotic systems conference, 2021. doi:10.1007/978-3-030-89177-0_4
    [BibTeX] [Abstract] [Download PDF]

    Plant phenotyping is an essential step in the plant breeding cycle, necessary to ensure food safety for a growing world population. Standard procedures for evaluating three-dimensional plant morphology and extracting relevant phenotypic characteristics are slow, costly, and in need of automation. Previous work towards automatic semantic segmentation of plants relies on explicit prior knowledge about the species and sensor set-up, as well as manually tuned parameters. In this work, we propose to use a supervised machine learning algorithm to predict per-point semantic annotations directly from point cloud data of whole plants and minimise the necessary user input. We train a PointNet++ variant on a fully annotated procedurally generated data set of partial point clouds of tomato plants, and show that the network is capable of distinguishing between the semantic classes of leaves, stems, and soil based on structural data only. We present both quantitative and qualitative evaluation results, and establish a proof of concept, indicating that deep learning is a promising approach towards replacing the current complex, laborious, species-specific, state-of-the-art plant segmentation procedures.

    @inproceedings{lincoln46669,
    booktitle = {Towards Autonomous Robotic Systems Conference},
    month = {October},
    title = {Deep semantic segmentation of 3D plant point clouds},
    author = {Karoline Heiwolt and Tom Duckett and Grzegorz Cielniak},
    publisher = {Springer International Publishing},
    year = {2021},
    doi = {10.1007/978-3-030-89177-0\_4},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46669/},
    abstract = {Plant phenotyping is an essential step in the plant breeding cycle, necessary to ensure food safety for a growing world population. Standard procedures for evaluating three-dimensional plant morphology and extracting relevant phenotypic characteristics are slow, costly, and in need of automation. Previous work towards automatic semantic segmentation of plants relies on explicit prior knowledge about the species and sensor set-up, as well as manually tuned parameters. In this work, we propose to use a supervised machine learning algorithm to predict per-point semantic annotations directly from point cloud data of whole plants and minimise the necessary user input. We train a PointNet++ variant on a fully annotated procedurally generated data set of partial point clouds of tomato plants, and show that the network is capable of distinguishing between the semantic classes of leaves, stems, and soil based on structural data only. We present both quantitative and qualitative evaluation results, and establish a proof of concept, indicating that deep learning is a promising approach towards replacing the current complex, laborious, species-specific, state-of-the-art plant segmentation procedures.}
    }
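    As a toy illustration of per-point semantic labelling on plant point clouds (the task above), the following Python sketch trains an off-the-shelf classifier to assign each 3D point to soil, stem or leaf. The synthetic point cloud and the random-forest stand-in for the paper's PointNet++ variant are assumptions made for brevity.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        soil = rng.uniform([-1, -1, 0.00], [1, 1, 0.02], (300, 3))           # flat ground
        stem = np.c_[rng.normal(0, 0.01, (100, 2)), rng.uniform(0, 1, 100)]  # thin vertical axis
        leaf = rng.normal([0.2, 0.0, 0.8], 0.1, (300, 3))                    # canopy blob
        points = np.vstack([soil, stem, leaf])
        labels = np.array([0] * 300 + [1] * 100 + [2] * 300)                 # one label per point

        clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(points, labels)
        print("per-point training accuracy:", clf.score(points, labels))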
  • R. Ravikanna, M. Hanheide, G. Das, and Z. Zhu, “Maximising availability of transportation robots through intelligent allocation of parking spaces,” in TAROS 2021, 2021. doi:10.1007/978-3-030-89177-0_34
    [BibTeX] [Abstract] [Download PDF]

    Autonomous agricultural robots increasingly have an important role in tasks such as transportation, crop monitoring and weed detection. These tasks require the robots to travel to different locations in the field. Reducing this travel time can greatly reduce the global task completion time and improve the availability of the robots to perform more tasks. Looking at in-field logistics robots for supporting human fruit pickers as a relevant scenario, this research deals with the design of various algorithms for automated allocation of parking spaces for the on-field robots, so as to make them most accessible to preferred areas of the field. These parking space allocation algorithms are tested for their performance by varying initial parameters such as the size of the field and the number and positions of the farm workers. Various experiments are conducted for this purpose in a simulated environment. Their results are studied and discussed to better understand the contribution of intelligent parking space allocation towards improving the overall time efficiency of task completion.

    @inproceedings{lincoln46635,
    booktitle = {TAROS2021},
    month = {October},
    title = {Maximising availability of transportation robots through intelligent allocation of parking spaces},
    author = {Roopika Ravikanna and Marc Hanheide and Gautham Das and Zuyuan Zhu},
    year = {2021},
    doi = {10.1007/978-3-030-89177-0\_34},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46635/},
    abstract = {Autonomous agricultural robots increasingly have an important role in tasks such as transportation, crop monitoring and weed detection. These tasks require the robots to travel to different locations in the field. Reducing this travel time can greatly reduce the global task completion time and improve the availability of the robots to perform more tasks. Looking at in-field logistics robots for supporting human fruit pickers as a relevant scenario, this research deals with the design of various algorithms for automated allocation of parking spaces for the on-field robots, so as to make them most accessible to preferred areas of the field. These parking space allocation algorithms are tested for their performance by varying initial parameters such as the size of the field and the number and positions of the farm workers. Various experiments are conducted for this purpose in a simulated environment. Their results are studied and discussed to better understand the contribution of intelligent parking space allocation towards improving the overall time efficiency of task completion.}
    }
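    The core allocation idea above can be illustrated in a few lines of Python: pick the parking spot that minimises the robot's expected travel distance to the current picker positions. Spot and picker coordinates are made up, and Manhattan distance stands in for in-field routing.

        def best_parking_spot(spots, pickers):
            def expected_travel(spot):
                return sum(abs(spot[0] - px) + abs(spot[1] - py)   # Manhattan distance
                           for px, py in pickers) / len(pickers)
            return min(spots, key=expected_travel)

        spots = [(0, 0), (5, 0), (10, 0)]         # candidate parking nodes at row ends
        pickers = [(8, 3), (9, 1), (7, 4)]        # current farm-worker positions
        print(best_parking_spot(spots, pickers))  # -> (10, 0), closest on average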

2020

  • H. Wang, Q. Fu, H. Wang, P. Baxter, J. Peng, and S. Yue, “A bioinspired angular velocity decoding neural network model for visually guided flights,” Neural networks, 2020. doi:10.1016/j.neunet.2020.12.008
    [BibTeX] [Abstract] [Download PDF]

    Efficient and robust motion perception systems are important pre-requisites for achieving visually guided flights in future micro air vehicles. As a source of inspiration, the visual neural networks of flying insects such as honeybee and Drosophila provide ideal examples on which to base artificial motion perception models. In this paper, we have used this approach to develop a novel method that solves the fundamental problem of estimating angular velocity for visually guided flights. Compared with previous models, our elementary motion detector (EMD) based model uses a separate texture estimation pathway to effectively decode angular velocity, and demonstrates considerable independence from the spatial frequency and contrast of the gratings. Using the Unity development platform the model is further tested for tunnel centering and terrain following paradigms in order to reproduce the visually guided flight behaviors of honeybees. In a series of controlled trials, the virtual bee utilizes the proposed angular velocity control schemes to accurately navigate through a patterned tunnel, maintaining a suitable distance from the undulating textured terrain. The results are consistent with both neuron spike recordings and behavioral path recordings of real honeybees, thereby demonstrating the model's potential for implementation in micro air vehicles which have only visual sensors.

    @article{lincoln43704,
    title = {A bioinspired angular velocity decoding neural network model for visually guided flights},
    author = {Huatian Wang and Qinbing Fu and Hongxin Wang and Paul Baxter and Jigen Peng and Shigang Yue},
    publisher = {Elsevier},
    year = {2020},
    doi = {10.1016/j.neunet.2020.12.008},
    journal = {Neural Networks},
    url = {https://eprints.lincoln.ac.uk/id/eprint/43704/},
    abstract = {Efficient and robust motion perception systems are important pre-requisites for achieving visually guided flights in future micro air vehicles. As a source of inspiration, the visual neural networks of flying insects such as honeybee and Drosophila provide ideal examples on which to base artificial motion perception models. In this paper, we have used this approach to develop a novel method that solves the fundamental problem of estimating angular velocity for visually guided flights. Compared with previous models, our elementary motion detector (EMD) based model uses a separate texture estimation pathway to effectively decode angular velocity, and demonstrates considerable independence from the spatial frequency and contrast of the gratings. Using the Unity development platform the model is further tested for tunnel centering and terrain following paradigms in order to reproduce the visually guided flight behaviors of honeybees. In a series of controlled trials, the virtual bee utilizes the proposed angular velocity control schemes to accurately navigate through a patterned tunnel, maintaining a suitable distance from the undulating textured terrain. The results are consistent with both neuron spike recordings and behavioral path recordings of real honeybees, thereby demonstrating the model's potential for implementation in micro air vehicles which have only visual sensors.}
    }
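    The elementary motion detector (EMD) at the heart of the model above can be sketched in a few lines of Python: each photoreceptor signal is correlated with a delayed copy of its neighbour, and the mirrored pair is subtracted so that the sign encodes direction. The drifting grating and one-frame delay are illustrative, and the paper's texture-estimation pathway is omitted.

        import numpy as np

        def emd_response(signal, delay=1):
            """Mean output of a mirrored Hassenstein-Reichardt correlator pair;
            positive values indicate rightward motion."""
            a, b = signal[:, :-1], signal[:, 1:]     # neighbouring photoreceptors
            a_d = np.roll(a, delay, axis=0)          # delayed copies along time
            b_d = np.roll(b, delay, axis=0)
            out = a_d * b - b_d * a
            return out[delay:].mean()                # drop rows corrupted by the roll

        t = np.arange(100)[:, None]
        x = np.arange(64)[None, :]
        grating = np.sin(0.3 * (x - 0.5 * t))        # grating drifting rightwards
        print(emd_response(grating) > 0)             # -> True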
  • M. Al-Khafajiy, T. Baker, M. Asim, Z. Guo, R. Ranjan, A. Longo, D. Puthal, and M. Taylor, “COMITMENT: a fog computing trust management approach,” Journal of parallel and distributed computing, vol. 137, p. 1–16, 2020. doi:10.1016/j.jpdc.2019.10.006
    [BibTeX] [Abstract] [Download PDF]

    As an extension of cloud computing, fog computing is considered to be relatively more secure than cloud computing due to data being transiently maintained and analyzed on local fog nodes closer to data sources. However, there exist several security and privacy concerns when fog nodes collaborate and share data to execute certain tasks. For example, offloading data to a malicious fog node can result in an unauthorized collection or manipulation of users' private data. Cryptographic-based techniques can prevent external attacks, but are not useful when fog nodes are already authenticated and part of a network using legitimate identities. We therefore resort to trust to identify and isolate malicious fog nodes and mitigate security risks. In this paper, we present a fog COMputIng Trust manageMENT (COMITMENT) approach that uses quality of service and quality of protection history measures from previous direct and indirect fog node interactions for assessing and managing the trust level of the nodes within the fog computing environment. Using the COMITMENT approach, we were able to identify and reduce malicious attacks/interactions among fog nodes by approximately 66%, while reducing the service response time by approximately 15 s.

    @article{lincoln47559,
    volume = {137},
    month = {March},
    author = {Mohammed Al-Khafajiy and Thar Baker and Muhammad Asim and Zehua Guo and Rajiv Ranjan and Antonella Longo and Deepak Puthal and Mark Taylor},
    title = {COMITMENT: A Fog Computing Trust Management Approach},
    publisher = {Elsevier},
    year = {2020},
    journal = {Journal of Parallel and Distributed Computing},
    doi = {10.1016/j.jpdc.2019.10.006},
    pages = {1--16},
    url = {https://eprints.lincoln.ac.uk/id/eprint/47559/},
    abstract = {As an extension of cloud computing, fog computing is considered to be relatively more secure than cloud computing due to data being transiently maintained and analyzed on local fog nodes closer to data sources. However, there exist several security and privacy concerns when fog nodes collaborate and share data to execute certain tasks. For example, offloading data to a malicious fog node can result in an unauthorized collection or manipulation of users' private data. Cryptographic-based techniques can prevent external attacks, but are not useful when fog nodes are already authenticated and part of a network using legitimate identities. We therefore resort to trust to identify and isolate malicious fog nodes and mitigate security risks. In this paper, we present a fog COMputIng Trust manageMENT (COMITMENT) approach that uses quality of service and quality of protection history measures from previous direct and indirect fog node interactions for assessing and managing the trust level of the nodes within the fog computing environment. Using the COMITMENT approach, we were able to identify and reduce malicious attacks/interactions among fog nodes by approximately 66\%, while reducing the service response time by approximately 15 s.}
    }
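    A minimal Python sketch of the trust bookkeeping described above: direct quality-of-service history is blended with indirect recommendations into a single trust score, and a node falling below a threshold would be isolated. The 0.7/0.3 weighting and the 0.5 cut-off are assumptions, not values from the paper.

        def trust_score(direct_history, recommendations, w_direct=0.7):
            """Each history entry is 1.0 for a satisfactory interaction, 0.0 otherwise;
            recommendations are trust values reported by neighbouring nodes."""
            direct = sum(direct_history) / len(direct_history)
            indirect = sum(recommendations) / len(recommendations)
            return w_direct * direct + (1 - w_direct) * indirect

        score = trust_score(direct_history=[1, 1, 0, 1], recommendations=[0.9, 0.6])
        print(score, score >= 0.5)   # above the cut-off, so the node is kept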
  • D. Liu, N. Bellotto, and S. Yue, “Deep spiking neural network for video-based disguise face recognition based on dynamic facial movements,” IEEE transactions on neural networks and learning systems, vol. 31, iss. 6, p. 1843–1855, 2020. doi:10.1109/TNNLS.2019.2927274
    [BibTeX] [Abstract] [Download PDF]

    With the increasing popularity of social media and smart devices, the face as one of the key biometrics becomes vital for person identification. Amongst those face recognition algorithms, video-based face recognition methods could make use of both temporal and spatial information just as humans do to achieve better classification performance. However, they cannot identify individuals when certain key facial areas like eyes or nose are disguised by heavy makeup or rubber/digital masks. To this end, we propose a novel deep spiking neural network architecture in this study. It takes dynamic facial movements, the facial muscle changes induced by speaking or other activities, as the sole input. An event-driven continuous spike-timing dependent plasticity learning rule with adaptive thresholding is applied to train the synaptic weights. The experiments on our proposed video-based disguise face database (MakeFace DB) demonstrate that the proposed learning method performs very well – it achieves from 95% to 100% correct classification rates under various realistic experimental scenarios.

    @article{lincoln41718,
    volume = {31},
    number = {6},
    month = {June},
    author = {Daqi Liu and Nicola Bellotto and Shigang Yue},
    title = {Deep Spiking Neural Network for Video-based Disguise Face Recognition Based on Dynamic Facial Movements},
    publisher = {IEEE},
    year = {2020},
    journal = {IEEE Transactions on Neural Networks and Learning Systems},
    doi = {10.1109/TNNLS.2019.2927274},
    pages = {1843--1855},
    url = {https://eprints.lincoln.ac.uk/id/eprint/41718/},
    abstract = {With the increasing popularity of social media and smart devices, the face as one of the key biometrics becomes vital for person identification. Amongst those face recognition algorithms, video-based face recognition methods could make use of both temporal and spatial information just as humans do to achieve better classification performance. However, they cannot identify individuals when certain key facial areas like eyes or nose are disguised by heavy makeup or rubber/digital masks. To this end, we propose a novel deep spiking neural network architecture in this study. It takes dynamic facial movements, the facial muscle changes induced by speaking or other activities, as the sole input. An event-driven continuous spike-timing dependent plasticity learning rule with adaptive thresholding is applied to train the synaptic weights. The experiments on our proposed video-based disguise face database (MakeFace DB) demonstrate that the proposed learning method performs very well - it achieves from 95\% to 100\% correct classification rates under various realistic experimental scenarios}
    }
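    The spike-timing-dependent plasticity (STDP) rule named above can be sketched as follows: a synapse is strengthened when the pre-synaptic spike precedes the post-synaptic one and weakened otherwise, with exponential decay in the timing gap. The constants are illustrative, and the paper's adaptive thresholding is not reproduced here.

        import math

        def stdp_dw(t_pre, t_post, a_plus=0.05, a_minus=0.055, tau=20.0):
            """Weight change for one pre/post spike pair (times in ms)."""
            dt = t_post - t_pre
            if dt >= 0:
                return a_plus * math.exp(-dt / tau)    # causal pair: potentiate
            return -a_minus * math.exp(dt / tau)       # anti-causal pair: depress

        w = 0.5
        for t_pre, t_post in [(10, 15), (40, 38), (60, 61)]:
            w = min(1.0, max(0.0, w + stdp_dw(t_pre, t_post)))  # keep weight bounded
        print(round(w, 3))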
  • R. Polvara, M. Fernandez-Carmona, M. Hanheide, and G. Neumann, “Next-best-sense: a multi-criteria robotic exploration strategy for RFID tags discovery,” IEEE robotics and automation letters, vol. 5, iss. 3, p. 4477–4484, 2020. doi:10.1109/LRA.2020.3001539
    [BibTeX] [Abstract] [Download PDF]

    Automated exploration is one of the most relevant applications of autonomous robots. In this paper, we suggest a novel online coverage algorithm called Next-Best-Sense (NBS), an extension of the Next-Best-View class of exploration algorithms that optimizes the exploration task balancing multiple criteria. This novel algorithm is applied to the problem of localizing all Radio Frequency Identification (RFID) tags with a mobile robotic platform that is equipped with a RFID reader. We cast this problem as a coverage planning problem by defining a basic sensing operation – a scan with the RFID reader – as the field of 'view' of the sensor. NBS evaluates candidate locations with a global utility function which combines utility values for travel distance, information gain, sensing time, battery status and RFID information gain, generalizing the use of Multi-Criteria Decision Making. We developed an RFID reader and tag model in the Gazebo simulator for validation. Experiments performed both in simulation and with a real robot suggest that our NBS approach can successfully localize all the RFID tags while minimizing navigation metrics such as sensing operations, total traveling distance and battery consumption. The code developed is publicly available on the authors' repository.

    @article{lincoln41120,
    volume = {5},
    number = {3},
    month = {June},
    author = {Riccardo Polvara and Manuel Fernandez-Carmona and Marc Hanheide and Gerhard Neumann},
    title = {Next-Best-Sense: a multi-criteria robotic exploration strategy for RFID tags discovery},
    publisher = {IEEE},
    year = {2020},
    journal = {IEEE Robotics and Automation Letters},
    doi = {10.1109/LRA.2020.3001539},
    pages = {4477--4484},
    url = {https://eprints.lincoln.ac.uk/id/eprint/41120/},
    abstract = {Automated exploration is one of the most relevant applications of autonomous robots. In this paper, we suggest a novel online coverage algorithm called Next-Best-Sense (NBS), an extension of the Next-Best-View class of exploration algorithms that optimizes the exploration task balancing multiple criteria. This novel algorithm is applied to the problem of localizing all Radio Frequency Identification (RFID) tags with a mobile robotic platform that is equipped with a RFID reader. We cast this problem as a coverage planning problem by defining a basic sensing operation -- a scan with the RFID reader -- as the field of 'view' of the sensor. NBS evaluates candidate locations with a global utility function which combines utility values for travel distance, information gain, sensing time, battery status and RFID information gain, generalizing the use of Multi-Criteria Decision Making. We developed an RFID reader and tag model in the Gazebo simulator for validation. Experiments performed both in simulation and with a real robot suggest that our NBS approach can successfully localize all the RFID tags while minimizing navigation metrics such as sensing operations, total traveling distance and battery consumption. The code developed is publicly available on the authors' repository.}
    }
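    The global utility function described above can be sketched in Python as a weighted sum of criteria normalised across the candidate sensing locations, with costs such as travel distance carrying negative weights. Candidate values and weights below are invented for the example.

        candidates = {
            "A": {"distance": 2.0, "info_gain": 0.8, "sense_time": 5.0, "battery": 0.9},
            "B": {"distance": 6.0, "info_gain": 0.9, "sense_time": 4.0, "battery": 0.9},
        }
        weights = {"distance": -0.3, "info_gain": 0.4, "sense_time": -0.1, "battery": 0.2}

        def utility(c):
            u = 0.0
            for k, w in weights.items():
                vals = [cand[k] for cand in candidates.values()]
                span = (max(vals) - min(vals)) or 1.0          # avoid divide-by-zero
                u += w * (c[k] - min(vals)) / span             # min-max normalisation
            return u

        best = max(candidates, key=lambda name: utility(candidates[name]))
        print(best)   # -> B: its higher information gain outweighs the longer drive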
  • Q. Fu, H. Wang, J. Peng, and S. Yue, “Improved collision perception neuronal system model with adaptive inhibition mechanism and evolutionary learning,” IEEE access, vol. 8, p. 108896–108912, 2020. doi:10.1109/ACCESS.2020.3001396
    [BibTeX] [Abstract] [Download PDF]

    Accurate and timely perception of collision in highly variable environments is still a challenging problem for artificial visual systems. As a source of inspiration, the lobula giant movement detectors (LGMDs) in the locust's visual pathways have been studied intensively, and modelled as quick collision detectors against challenges from various scenarios including vehicles and robots. However, the state-of-the-art LGMD models have not achieved acceptable robustness to deal with more challenging scenarios like the various vehicle driving scenes, due to the lack of adaptive signal processing mechanisms. To address this problem, we propose an improved neuronal system model, called LGMD+, that is featured by novel modelling of spatiotemporal inhibition dynamics with biological plausibilities including 1) lateral inhibitions with global biases defined by a variant of Gaussian distribution, spatially, and 2) an adaptive feedforward inhibition mediation pathway, temporally. Accordingly, the LGMD+ performs more effectively to detect merely approaching objects threatening head-on collision risks by appropriately suppressing motion distractors caused by vibrations, near-miss or approaching stimuli with deviations from the centre view. Through evolutionary learning with a systematic dataset of various crash and non-collision driving scenarios, the LGMD+ shows improved robustness outperforming the previous related methods. After evolution, its computational simplicity, flexibility and robustness have also been well demonstrated by real-time experiments of autonomous micro-mobile robots.

    @article{lincoln42131,
    volume = {8},
    month = {June},
    author = {Qinbing Fu and Huatian Wang and Jigen Peng and Shigang Yue},
    title = {Improved Collision Perception Neuronal System Model with Adaptive Inhibition Mechanism and Evolutionary Learning},
    publisher = {IEEE},
    year = {2020},
    journal = {IEEE Access},
    doi = {10.1109/ACCESS.2020.3001396},
    pages = {108896--108912},
    url = {https://eprints.lincoln.ac.uk/id/eprint/42131/},
    abstract = {Accurate and timely perception of collision in highly variable environments is still a challenging problem for artificial visual systems. As a source of inspiration, the lobula giant movement detectors (LGMDs) in the locust's visual pathways have been studied intensively, and modelled as quick collision detectors against challenges from various scenarios including vehicles and robots. However, the state-of-the-art LGMD models have not achieved acceptable robustness to deal with more challenging scenarios like the various vehicle driving scenes, due to the lack of adaptive signal processing mechanisms. To address this problem, we propose an improved neuronal system model, called LGMD+, that is featured by novel modelling of spatiotemporal inhibition dynamics with biological plausibilities including 1) lateral inhibitions with global biases defined by a variant of Gaussian distribution, spatially, and 2) an adaptive feedforward inhibition mediation pathway, temporally. Accordingly, the LGMD+ performs more effectively to detect merely approaching objects threatening head-on collision risks by appropriately suppressing motion distractors caused by vibrations, near-miss or approaching stimuli with deviations from the centre view. Through evolutionary learning with a systematic dataset of various crash and non-collision driving scenarios, the LGMD+ shows improved robustness outperforming the previous related methods. After evolution, its computational simplicity, flexibility and robustness have also been well demonstrated by real-time experiments of autonomous micro-mobile robots.}
    }
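    A rough Python sketch of the spatially biased lateral inhibition described above: excitation from frame differencing is suppressed by a Gaussian-weighted spread of its own neighbourhood, which damps distributed flicker while letting a compact looming edge through. Kernel width and inhibition strength are illustrative, not the paper's tuned parameters.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def inhibited_excitation(prev_frame, frame, sigma=2.0, w_inh=0.6):
            excitation = np.abs(frame.astype(float) - prev_frame.astype(float))
            inhibition = gaussian_filter(excitation, sigma)   # Gaussian lateral spread
            return np.maximum(excitation - w_inh * inhibition, 0.0)

        prev_frame = np.zeros((64, 64))
        frame = np.zeros((64, 64))
        frame[24:40, 24:40] = 1.0                             # expanding dark patch
        print(inhibited_excitation(prev_frame, frame).sum())  # summed membrane potential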
  • J. Liu, S. Iacoponi, C. Laschi, L. Wen, and M. Calisti, “Underwater mobile manipulation: a soft arm on a benthic legged robot,” IEEE robotics & automation magazine, vol. 27, iss. 4, p. 12–26, 2020. doi:10.1109/MRA.2020.3024001
    [BibTeX] [Abstract] [Download PDF]

    Robotic systems that can explore the sea floor, collect marine samples, gather shallow water refuse, and perform other underwater tasks are interesting and important in several fields, from biology and ecology to off-shore industry. In this article, we present a robotic platform that is, to our knowledge, the first to combine benthic legged locomotion and soft continuum manipulation to perform real-world underwater mission-like experiments. We experimentally exploit inverse kinematics for spatial manipulation in a laboratory environment and then examine the robot’s workspace extensibility, force, energy consumption, and grasping ability in different undersea scenarios.

    @article{lincoln46137,
    volume = {27},
    number = {4},
    month = {December},
    author = {Jiaqi Liu and Saverio Iacoponi and Cecilia Laschi and Li Wen and Marcello Calisti},
    title = {Underwater Mobile Manipulation: A Soft Arm on a Benthic Legged Robot},
    year = {2020},
    journal = {IEEE Robotics \& Automation Magazine},
    doi = {10.1109/MRA.2020.3024001},
    pages = {12--26},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46137/},
    abstract = {Robotic systems that can explore the sea floor, collect marine samples, gather shallow water refuse, and perform other underwater tasks are interesting and important in several fields, from biology and ecology to off-shore industry. In this article, we present a robotic platform that is, to our knowledge, the first to combine benthic legged locomotion and soft continuum manipulation to perform real-world underwater mission-like experiments. We experimentally exploit inverse kinematics for spatial manipulation in a laboratory environment and then examine the robot's workspace extensibility, force, energy consumption, and grasping ability in different undersea scenarios.}
    }
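    For the inverse kinematics mentioned above, here is the textbook closed-form solution for a 2-link planar arm in Python; the soft-arm platform's actual spatial IK is more involved, and the link lengths here are assumptions.

        import math

        def ik_2link(x, y, l1=0.3, l2=0.25):
            """Return joint angles (elbow-down) reaching target (x, y)."""
            c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
            if abs(c2) > 1:
                raise ValueError("target out of reach")
            q2 = math.acos(c2)
            q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2), l1 + l2 * math.cos(q2))
            return q1, q2

        q1, q2 = ik_2link(0.4, 0.2)
        # forward-kinematics check: should print the target coordinates 0.4 0.2
        print(round(0.3 * math.cos(q1) + 0.25 * math.cos(q1 + q2), 3),
              round(0.3 * math.sin(q1) + 0.25 * math.sin(q1 + q2), 3))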
  • Z. Yan, S. Schreiberhuber, G. Halmetschlager, T. Duckett, M. Vincze, and N. Bellotto, “Robot perception of static and dynamic objects with an autonomous floor scrubber,” Intelligent service robotics, 2020. doi:10.1007/s11370-020-00324-9
    [BibTeX] [Abstract] [Download PDF]

    This paper presents the perception system of a new professional cleaning robot for large public places. The proposed system is based on multiple sensors including 3D and 2D lidar, two RGB-D cameras and a stereo camera. The two lidars together with an RGB-D camera are used for dynamic object (human) detection and tracking, while the second RGB-D and stereo camera are used for detection of static objects (dirt and ground objects). A learning and reasoning module for spatial-temporal representation of the environment based on the perception pipeline is also introduced. Furthermore, a new dataset collected with the robot in several public places, including a supermarket, a warehouse and an airport, is released. Baseline results on this dataset for further research and comparison are provided. The proposed system has been fully implemented into the Robot Operating System (ROS) with high modularity, also publicly available to the community.

    @article{lincoln40882,
    month = {June},
    title = {Robot Perception of Static and Dynamic Objects with an Autonomous Floor Scrubber},
    author = {Zhi Yan and Simon Schreiberhuber and Georg Halmetschlager and Tom Duckett and Markus Vincze and Nicola Bellotto},
    publisher = {Springer},
    year = {2020},
    doi = {10.1007/s11370-020-00324-9},
    journal = {Intelligent Service Robotics},
    url = {https://eprints.lincoln.ac.uk/id/eprint/40882/},
    abstract = {This paper presents the perception system of a new professional cleaning robot for large public places. The proposed system is based on multiple sensors including 3D and 2D lidar, two RGB-D cameras and a stereo camera. The two lidars together with an RGB-D camera are used for dynamic object (human) detection and tracking, while the second RGB-D and stereo camera are used for detection of static objects (dirt and ground objects). A learning and reasoning module for spatial-temporal representation of the environment based on the perception pipeline is also introduced. Furthermore, a new dataset collected with the robot in several public places, including a supermarket, a warehouse and an airport, is released. Baseline results on this dataset for further research and comparison are provided. The proposed system has been fully implemented into the Robot Operating System (ROS) with high modularity, also publicly available to the community.}
    }
  • I. Albayati, A. Postnikov, S. Pearson, R. Bickerton, A. Zolotas, and C. Bingham, “Power and energy analysis for a commercial retail refrigeration system responding to a static demand side response,” International journal of electrical power & energy systems, vol. 117, p. 105645, 2020. doi:10.1016/j.ijepes.2019.105645
    [BibTeX] [Abstract] [Download PDF]

    The paper considers the impact of Demand Side Response events on supply power profile and energy efficiency of widely distributed aggregated loads applied across commercial refrigeration systems. Responses to secondary grid frequency static DSR events are investigated. Experimental trials are conducted on a system of refrigerators representing a small retail store, and subsequently on the refrigerators of an operational superstore in the UK. Energy consumption and energy savings during 3 hours of operation, pre and post-secondary DSR, are discussed. In addition, a simultaneous secondary DSR event is realised across three operational retail stores located in different geographical regions of the UK. A Simulink model for a 3Φ power network is used to investigate the impact of a synchronised return to normal operation of the aggregated refrigeration systems post DSR on the local power network. Results show ~1% drop in line voltage due to the synchronised return to operation. An analysis of energy consumption shows that DSR events can facilitate energy savings of between 3.8% and 9.3% compared to normal operation. This is a result of the refrigerators operating more efficiently during and shortly after the DSR. The use of aggregated refrigeration loads can contribute to the necessary load-shed by 97.3% at the beginning of DSR and 27% during 30 minutes DSR, based on a simultaneous DSR event carried out on three retail stores.

    @article{lincoln38163,
    volume = {117},
    month = {May},
    author = {Ibrahim Albayati and Andrey Postnikov and Simon Pearson and Ronald Bickerton and Argyrios Zolotas and Chris Bingham},
    title = {Power and Energy Analysis for a Commercial Retail Refrigeration System Responding to a Static Demand Side Response},
    publisher = {Elsevier},
    year = {2020},
    journal = {International Journal of Electrical Power \& Energy Systems},
    doi = {10.1016/j.ijepes.2019.105645},
    pages = {105645},
    url = {https://eprints.lincoln.ac.uk/id/eprint/38163/},
    abstract = {The paper considers the impact of Demand Side Response events on supply power profile and energy efficiency of widely distributed aggregated loads applied across commercial refrigeration systems. Responses to secondary grid frequency static DSR events are investigated. Experimental trials are conducted on a system of refrigerators representing a small retail store, and subsequently on the refrigerators of an operational superstore in the UK. Energy consumption and energy savings during 3 hours of operation, pre and post-secondary DSR, are discussed. In addition, a simultaneous secondary DSR event is realised across three operational retail stores located in different geographical regions of the UK. A Simulink model for a 3{\ensuremath{\Phi}} power network is used to investigate the impact of a synchronised return to normal operation of the aggregated refrigeration systems post DSR on the local power network. Results show {\texttt{\char126}}1\% drop in line voltage due to the synchronised return to operation. An analysis of energy consumption shows that DSR events can facilitate energy savings of between 3.8\% and 9.3\% compared to normal operation. This is a result of the refrigerators operating more efficiently during and shortly after the DSR. The use of aggregated refrigeration loads can contribute to the necessary load-shed by 97.3\% at the beginning of DSR and 27\% during 30 minutes DSR, based on a simultaneous DSR event carried out on three retail stores.}
    }
  • G. Picardi, M. Chellapurath, S. Iacoponi, S. Stefanni, C. Laschi, and M. Calisti, “Bioinspired underwater legged robot for seabed exploration with low environmental disturbance,” Science robotics, vol. 5, iss. 42, p. eaaz1012, 2020. doi:10.1126/scirobotics.aaz1012
    [BibTeX] [Abstract] [Download PDF]

    Robots have the potential to assist and complement humans in the study and exploration of extreme and hostile environments. For example, valuable scientific data have been collected with the aid of propeller-driven autonomous and remotely operated vehicles in underwater operations. However, because of their nature as swimmers, such robots are limited when closer interaction with the environment is required. Here, we report a bioinspired underwater legged robot, called SILVER2, that implements locomotion modalities inspired by benthic animals (organisms that harness the interaction with the seabed to move; for example, octopi and crabs). Our robot can traverse irregular terrains, interact delicately with the environment, approach targets safely and precisely, and hold position passively and silently. The capabilities of our robot were validated through a series of field missions in real sea conditions in a depth range between 0.5 and 12 meters.

    @article{lincoln46143,
    volume = {5},
    number = {42},
    month = {May},
    author = {G. Picardi and M. Chellapurath and S. Iacoponi and S. Stefanni and C. Laschi and M. Calisti},
    title = {Bioinspired underwater legged robot for seabed exploration with low environmental disturbance},
    year = {2020},
    journal = {Science Robotics},
    doi = {10.1126/scirobotics.aaz1012},
    pages = {eaaz1012},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46143/},
    abstract = {Robots have the potential to assist and complement humans in the study and exploration of extreme and hostile environments. For example, valuable scientific data have been collected with the aid of propeller-driven autonomous and remotely operated vehicles in underwater operations. However, because of their nature as swimmers, such robots are limited when closer interaction with the environment is required. Here, we report a bioinspired underwater legged robot, called SILVER2, that implements locomotion modalities inspired by benthic animals (organisms that harness the interaction with the seabed to move; for example, octopi and crabs). Our robot can traverse irregular terrains, interact delicately with the environment, approach targets safely and precisely, and hold position passively and silently. The capabilities of our robot were validated through a series of field missions in real sea conditions in a depth range between 0.5 and 12 meters.}
    }
  • L. Jackson, C. M. Saaj, A. Seddaoui, C. Whiting, S. Eckersley, and S. Hadfield, “Downsizing an orbital space robot: a dynamic system based evaluation,” Advances in space research, vol. 65, iss. 10, p. 2247–2262, 2020. doi:10.1016/j.asr.2020.03.004
    [BibTeX] [Abstract] [Download PDF]

    Small space robots have the potential to revolutionise space exploration by facilitating the on-orbit assembly of infrastructure, in shorter time scales, at reduced costs. Their commercial appeal will be further improved if such a system is also capable of performing on-orbit servicing missions, in line with the current drive to limit space debris and prolong the lifetime of satellites already in orbit. Whilst there have been a limited number of successful demonstrations of technologies capable of these on-orbit operations, the systems remain large and bespoke. The recent surge in small satellite technologies is changing the economics of space, and in the near future downsizing a space robot might become a viable option with a host of benefits. This industry-wide shift means some of the technologies for use with a downsized space robot, such as power and communication subsystems, now exist. However, there are still dynamic and control issues that need to be overcome before a downsized space robot can be capable of undertaking useful missions. This paper first outlines these issues, before analyzing the effect of downsizing a system on its operational capability. It therefore presents the smallest controllable system such that the benefits of a small space robot can be achieved with current technologies. The sizing of the base spacecraft and manipulator are addressed here. The design presented consists of a 3 link, 6 degrees of freedom robotic manipulator mounted on a 12U form factor satellite. The feasibility of this 12U space robot was evaluated in simulation and the in-depth results presented here support the hypothesis that a small space robot is a viable solution for in-orbit operations.

    @article{lincoln48337,
    volume = {65},
    number = {10},
    month = {May},
    author = {Lucy Jackson and Chakravarthini M. Saaj and Asma Seddaoui and Calem Whiting and Steve Eckersley and Simon Hadfield},
    title = {Downsizing an orbital space robot: A dynamic system based evaluation},
    publisher = {Elsevier},
    year = {2020},
    journal = {Advances in Space Research},
    doi = {10.1016/j.asr.2020.03.004},
    pages = {2247--2262},
    url = {https://eprints.lincoln.ac.uk/id/eprint/48337/},
    abstract = {Small space robots have the potential to revolutionise space exploration by facilitating the on-orbit assembly of infrastructure, in shorter time scales, at reduced costs. Their commercial appeal will be further improved if such a system is also capable of performing on-orbit servicing missions, in line with the current drive to limit space debris and prolong the lifetime of satellites already in orbit. Whilst there have been a limited number of successful demonstrations of technologies capable of these on-orbit operations, the systems remain large and bespoke. The recent surge in small satellite technologies is changing the economics of space, and in the near future downsizing a space robot might become a viable option with a host of benefits. This industry-wide shift means some of the technologies for use with a downsized space robot, such as power and communication subsystems, now exist. However, there are still dynamic and control issues that need to be overcome before a downsized space robot can be capable of undertaking useful missions. This paper first outlines these issues, before analyzing the effect of downsizing a system on its operational capability. It therefore presents the smallest controllable system such that the benefits of a small space robot can be achieved with current technologies. The sizing of the base spacecraft and manipulator are addressed here. The design presented consists of a 3 link, 6 degrees of freedom robotic manipulator mounted on a 12U form factor satellite. The feasibility of this 12U space robot was evaluated in simulation and the in-depth results presented here support the hypothesis that a small space robot is a viable solution for in-orbit operations.}
    }
  • D. D. Barrie, R. Margetts, and K. Goher, “SIMPA: soft-grasp infant myoelectric prosthetic arm,” IEEE robotics and automation letters, vol. 5, iss. 2, p. 699–704, 2020. doi:10.1109/LRA.2019.2963820
    [BibTeX] [Abstract] [Download PDF]

    Myoelectric prosthetic arms have primarily focused on adults, despite evidence showing the benefits of early adoption. This work presents SIMPA, a low-cost 3D-printed prosthetic arm with soft grippers. The arm has been designed using CAD and 3D-scanning, and manufactured using predominantly 3D-printing techniques. A voluntary opening control system utilizing an armband-based sEMG has been developed concurrently. Grasp tests have resulted in an average effectiveness of 87%, with objects in excess of 400 g being securely grasped. The results highlight the effectiveness of soft grippers as an end device in prosthetics, as well as the viability of toddler-scale myoelectric devices.

    @article{lincoln39383,
    volume = {5},
    number = {2},
    month = {April},
    author = {Daniel De Barrie and Rebecca Margetts and Khaled Goher},
    title = {SIMPA: Soft-Grasp Infant Myoelectric Prosthetic Arm},
    publisher = {IEEE},
    year = {2020},
    journal = {IEEE Robotics and Automation Letters},
    doi = {10.1109/LRA.2019.2963820},
    pages = {699--704},
    url = {https://eprints.lincoln.ac.uk/id/eprint/39383/},
    abstract = {Myoelectric prosthetic arms have primarily focused on adults, despite evidence showing the benefits of early adoption. This work presents SIMPA, a low-cost 3D-printed prosthetic arm with soft grippers. The arm has been designed using CAD and 3D-scanning, and manufactured using predominantly 3D-printing techniques. A voluntary opening control system utilizing an armband-based sEMG has been developed concurrently. Grasp tests have resulted in an average effectiveness of 87\%, with objects in excess of 400 g being securely grasped. The results highlight the effectiveness of soft grippers as an end device in prosthetics, as well as the viability of toddler-scale myoelectric devices.}
    }
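    A voluntary-opening controller of the kind described above can be sketched in Python: the hand stays closed by default, and an open command is issued while the rectified, smoothed sEMG envelope exceeds a threshold. The synthetic signal, window length and threshold are all illustrative.

        import numpy as np

        def open_command(emg, threshold=0.3, window=50):
            envelope = np.convolve(np.abs(emg), np.ones(window) / window, mode="same")
            return envelope > threshold            # True -> drive the soft grippers open

        rng = np.random.default_rng(1)
        rest = 0.05 * rng.standard_normal(500)     # relaxed muscle
        flex = 0.80 * rng.standard_normal(500)     # activation burst
        emg = np.concatenate([rest, flex, rest])
        cmd = open_command(emg)
        print(cmd[:500].mean(), cmd[500:1000].mean())   # ~0 (closed), then ~1 (open)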
  • H. Wang, J. Peng, and S. Yue, “A directionally selective small target motion detecting visual neural network in cluttered backgrounds,” IEEE transactions on cybernetics, vol. 50, iss. 4, p. 1541–1555, 2020. doi:10.1109/TCYB.2018.2869384
    [BibTeX] [Abstract] [Download PDF]

    Discriminating targets moving against a cluttered background is a huge challenge, let alone detecting a target as small as one or a few pixels and tracking it in flight. In the insect’s visual system, a class of specific neurons, called small target motion detectors (STMDs), have been identified as showing exquisite selectivity for small target motion. Some of the STMDs have also demonstrated direction selectivity which means these STMDs respond strongly only to their preferred motion direction. Direction selectivity is an important property of these STMD neurons which could contribute to tracking small targets such as mates in flight. However, little has been done on systematically modeling these directionally selective STMD neurons. In this paper, we propose a directionally selective STMD-based neural network for small target detection in a cluttered background. In the proposed neural network, a new correlation mechanism is introduced for direction selectivity via correlating signals relayed from two pixels. Then, a lateral inhibition mechanism is implemented on the spatial field for size selectivity of the STMD neurons. Finally, a population vector algorithm is used to encode motion direction of small targets. Extensive experiments showed that the proposed neural network not only is in accord with current biological findings, i.e., showing directional preferences, but also worked reliably in detecting small targets against cluttered backgrounds.

    @article{lincoln33420,
    volume = {50},
    number = {4},
    month = {April},
    author = {Hongxin Wang and Jigen Peng and Shigang Yue},
    note = {The final published version of this article can be accessed online at https://ieeexplore.ieee.org/document/8485659},
    title = {A Directionally Selective Small Target Motion Detecting Visual Neural Network in Cluttered Backgrounds},
    publisher = {IEEE},
    year = {2020},
    journal = {IEEE Transactions on Cybernetics},
    doi = {10.1109/TCYB.2018.2869384},
    pages = {1541--1555},
    url = {https://eprints.lincoln.ac.uk/id/eprint/33420/},
    abstract = {Discriminating targets moving against a cluttered background is a huge challenge, let alone detecting a target as small as one or a few pixels and tracking it in flight. In the insect's visual system, a class of specific neurons, called small target motion detectors (STMDs), have been identified as showing exquisite selectivity for small target motion. Some of the STMDs have also demonstrated direction selectivity which means these STMDs respond strongly only to their preferred motion direction. Direction selectivity is an important property of these STMD neurons which could contribute to tracking small targets such as mates in flight. However, little has been done on systematically modeling these directionally selective STMD neurons. In this paper, we propose a directionally selective STMD-based neural network for small target detection in a cluttered background. In the proposed neural network, a new correlation mechanism is introduced for direction selectivity via correlating signals relayed from two pixels. Then, a lateral inhibition mechanism is implemented on the spatial field for size selectivity of the STMD neurons. Finally, a population vector algorithm is used to encode motion direction of small targets. Extensive experiments showed that the proposed neural network not only is in accord with current biological findings, i.e., showing directional preferences, but also worked reliably in detecting small targets against cluttered backgrounds.}
    }
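    The population vector algorithm mentioned above is compact enough to show directly: each direction-tuned unit votes with a unit vector along its preferred direction, weighted by its response, and the vector sum gives the decoded motion direction. The four preferred directions and responses below are toy values.

        import math

        preferred = [0, 90, 180, 270]            # preferred directions (degrees)
        responses = [0.9, 0.35, 0.05, 0.30]      # firing strength of each unit

        vx = sum(r * math.cos(math.radians(p)) for r, p in zip(responses, preferred))
        vy = sum(r * math.sin(math.radians(p)) for r, p in zip(responses, preferred))
        print(round(math.degrees(math.atan2(vy, vx)), 1))   # decoded direction, ~3.4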
  • T. Pardi, V. Ortenzi, C. Fairbairn, T. Pipe, A. G. Esfahani, and R. Stolkin, “Planning maximum-manipulability cutting paths,” IEEE robotics and automation letters, vol. 5, iss. 2, p. 1999–2006, 2020. doi:10.1109/LRA.2020.2970949
    [BibTeX] [Abstract] [Download PDF]

    This paper presents a method for constrained motion planning from vision, which enables a robot to move its end-effector over an observed surface, given start and destination points. The robot has no prior knowledge of the surface shape but observes it from a noisy point cloud. We consider the multi-objective optimisation problem of finding robot trajectories which maximise the robot's manipulability throughout the motion, while also minimising surface-distance travelled between the two points. This work has application in industrial problems of rough robotic cutting, e.g., demolition of the legacy nuclear plant, where the cut path need not be precise as long as it achieves dismantling. We show how detours in the path can be leveraged to increase the manipulability of the robot at all points along the path. This helps to avoid singularities while maximising the robot's capability to make small deviations during task execution. We show how a sampling-based planner can be projected onto the Riemannian manifold of a curved surface, and extended to include a term which maximises manipulability. We present the results of empirical experiments, with both simulated and real robots, which are tasked with moving over a variety of different surface shapes. Our planner enables successful task completion while ensuring significantly greater manipulability when compared against a conventional RRT* planner.

    @article{lincoln41285,
    volume = {5},
    number = {2},
    month = {April},
    author = {Tommaso Pardi and Valerio Ortenzi and Colin Fairbairn and Tony Pipe and Amir Ghalamzan Esfahani and Rustam Stolkin},
    title = {Planning maximum-manipulability cutting paths},
    publisher = {IEEE},
    year = {2020},
    journal = {IEEE Robotics and Automation Letters},
    doi = {10.1109/LRA.2020.2970949},
    pages = {1999--2006},
    url = {https://eprints.lincoln.ac.uk/id/eprint/41285/},
    abstract = {This paper presents a method for constrained motion planning from vision, which enables a robot to move its end-effector over an observed surface, given start and destination points. The robot has no prior knowledge of the surface shape but observes it from a noisy point cloud. We consider the multi-objective optimisation problem of finding robot trajectories which maximise the robot's manipulability throughout the motion, while also minimising surface-distance travelled between the two points. This work has application in industrial problems of rough robotic cutting, e.g., demolition of the legacy nuclear plant, where the cut path need not be precise as long as it achieves dismantling. We show how detours in the path can be leveraged to increase the manipulability of the robot at all points along the path. This helps to avoid singularities while maximising the robot's capability to make small deviations during task execution. We show how a sampling-based planner can be projected onto the Riemannian manifold of a curved surface, and extended to include a term which maximises manipulability. We present the results of empirical experiments, with both simulated and real robots, which are tasked with moving over a variety of different surface shapes. Our planner enables successful task completion while ensuring significantly greater manipulability when compared against a conventional RRT* planner.}
    }
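    The quantity such planners maximise is typically Yoshikawa's manipulability measure, w(q) = sqrt(det(J(q) J(q)^T)); the Python sketch below evaluates it for a 2-link planar Jacobian (link lengths assumed), showing how it collapses near a straight-arm singularity.

        import numpy as np

        def manipulability(q1, q2, l1=1.0, l2=1.0):
            J = np.array([
                [-l1 * np.sin(q1) - l2 * np.sin(q1 + q2), -l2 * np.sin(q1 + q2)],
                [ l1 * np.cos(q1) + l2 * np.cos(q1 + q2),  l2 * np.cos(q1 + q2)],
            ])
            return np.sqrt(np.linalg.det(J @ J.T))

        print(manipulability(0.3, 1.57))   # elbow near 90 degrees: well conditioned
        print(manipulability(0.3, 0.01))   # arm nearly straight: close to singular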
  • W. Martindale, S. Pearson, M. Swainson, L. Korir, I. Wright, A. M. Opiyo, B. Karanja, S. Nyalala, and M. Kumar, “Framing food security and food loss statistics for incisive supply chain improvement and knowledge transfer between Kenyan, Indian and United Kingdom food manufacturers,” Emerald open research, vol. 2, iss. 12, 2020. doi:10.35241/emeraldopenres.13414.1
    [BibTeX] [Abstract] [Download PDF]

    The application of global indices of nutrition and food sustainability in public health and the improvement of product profiles has facilitated effective actions that increase food security. In the research reported here we develop index measurements further so that they can be applied to food categories and be used by food processors and manufacturers for specific food supply chains. This research considers how they can be used to assess the sustainability of supply chain operations by stimulating more incisive food loss and waste reduction planning. The research demonstrates how an index driven approach focussed on improving both nutritional delivery and reducing food waste will result in improved food security and sustainability. Nutritional improvements are focussed on protein supply and reduction of food waste on supply chain losses and the methods are tested using the food systems of Kenya and India where the current research is being deployed. Innovative practices will emerge when nutritional improvement and waste reduction actions demonstrate market success, and this will result in the co-development of food manufacturing infrastructure and innovation programmes. The use of established indices of sustainability and security enable comparisons that encourage knowledge transfer and the establishment of cross-functional indices that quantify national food nutrition, security and sustainability. The research presented in this initial study is focussed on applying these indices to specific food supply chains for food processors and manufacturers.

    @article{lincoln40529,
    volume = {2},
    number = {12},
    month = {April},
    author = {Wayne Martindale and Simon Pearson and Mark Swainson and Lilian Korir and Isobel Wright and Arnold M. Opiyo and Benard Karanja and Samuel Nyalala and Mahesh Kumar},
    title = {Framing food security and food loss statistics for incisive supply chain improvement and knowledge transfer between Kenyan, Indian and United Kingdom food manufacturers},
    publisher = {Emerald},
    year = {2020},
    journal = {Emerald Open Research},
    doi = {10.35241/emeraldopenres.13414.1},
    url = {https://eprints.lincoln.ac.uk/id/eprint/40529/},
    abstract = {The application of global indices of nutrition and food sustainability in public health and the improvement of product profiles has facilitated effective actions that increase food security. In the research reported here we develop index measurements further so that they can be applied to food categories and be used by food processors and manufacturers for specific food supply chains. This research considers how they can be used to assess the sustainability of supply chain operations by stimulating more incisive food loss and waste reduction planning. The research demonstrates how an index driven approach focussed on improving both nutritional delivery and reducing food waste will result in improved food security and sustainability. Nutritional improvements are focussed on protein supply and reduction of food waste on supply chain losses and the methods are tested using the food systems of Kenya and India where the current research is being deployed. Innovative practices will emerge when nutritional improvement and waste reduction actions demonstrate market success, and this will result in the co-development of food manufacturing infrastructure and innovation programmes. The use of established indices of sustainability and security enable comparisons that encourage knowledge transfer and the establishment of cross-functional indices that quantify national food nutrition, security and sustainability. The research presented in this initial study is focussed on applying these indices to specific food supply chains for food processors and manufacturers.}
    }
  • S. Cosar and N. Bellotto, “Human re-identification with a robot thermal camera using entropy-based sampling,” Journal of intelligent and robotic systems, vol. 98, iss. 1, p. 85–102, 2020. doi:10.1007/s10846-019-01026-w
    [BibTeX] [Abstract] [Download PDF]

    Human re-identification is an important feature of domestic service robots, in particular for elderly monitoring and assistance, because it allows them to perform personalized tasks and human-robot interactions. However, vision-based re-identification systems are subject to limitations due to human pose and poor lighting conditions. This paper presents a new re-identification method for service robots using thermal images. In robotic applications, as the number and size of thermal datasets is limited, it is hard to use approaches that require a huge amount of training samples. We propose a re-identification system that can work using only a small amount of data. During training, we perform entropy-based sampling to obtain a thermal dictionary for each person. Then, a symbolic representation is produced by converting each video into sequences of dictionary elements. Finally, we train a classifier using this symbolic representation and geometric distribution within the new representation domain. The experiments are performed on a new thermal dataset for human re-identification, which includes various situations of human motion, poses and occlusion, and which is made publicly available for research purposes. The proposed approach has been tested on this dataset and its improvements over standard approaches have been demonstrated.

    @article{lincoln35778,
    volume = {98},
    number = {1},
    month = {April},
    author = {Serhan Cosar and Nicola Bellotto},
    title = {Human Re-Identification with a Robot Thermal Camera using Entropy-based Sampling},
    publisher = {Springer},
    year = {2020},
    journal = {Journal of Intelligent and Robotic Systems},
    doi = {10.1007/s10846-019-01026-w},
    pages = {85--102},
    url = {https://eprints.lincoln.ac.uk/id/eprint/35778/},
    abstract = {Human re-identification is an important feature of domestic service robots, in particular for elderly monitoring and assistance, because it allows them to perform personalized tasks and human-robot interactions. However, vision-based re-identification systems are subject to limitations due to human pose and poor lighting conditions. This paper presents a new re-identification method for service robots using thermal images. In robotic applications, as the number and size of thermal datasets is limited, it is hard to use approaches that require a huge amount of training samples. We propose a re-identification system that can work using only a small amount of data. During training, we perform entropy-based sampling to obtain a thermal dictionary for each person. Then, a symbolic representation is produced by converting each video into sequences of dictionary elements. Finally, we train a classifier using this symbolic representation and geometric distribution within the new representation domain. The experiments are performed on a new thermal dataset for human re-identification, which includes various situations of human motion, poses and occlusion, and which is made publicly available for research purposes. The proposed approach has been tested on this dataset and its improvements over standard approaches have been demonstrated.}
    }
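    Entropy-based sampling as described above reduces to ranking frames by the information content of their intensity histograms and keeping the top few as the dictionary. The Python sketch below uses random images and a 32-bin histogram as stand-ins for thermal frames and the paper's actual sampling procedure.

        import numpy as np

        def frame_entropy(frame, bins=32):
            hist, _ = np.histogram(frame, bins=bins, range=(0.0, 1.0))
            p = hist / hist.sum()
            p = p[p > 0]                        # drop empty bins before taking logs
            return -(p * np.log2(p)).sum()

        rng = np.random.default_rng(0)
        frames = [rng.random((40, 40)) ** k for k in (1, 3, 6, 9)]   # increasingly skewed
        scores = [frame_entropy(f) for f in frames]
        dictionary = [frames[i] for i in np.argsort(scores)[-2:]]    # keep top-2 frames
        print([round(s, 2) for s in scores])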
  • J. Gao, A. French, M. Pound, Y. He, T. Pridmore, and J. Pieters, “Deep convolutional neural networks for image-based convolvulus sepium detection in sugar beet fields,” Plant methods, vol. 16, p. 19, 2020. doi:10.1186/s13007-020-00570-z
    [BibTeX] [Abstract] [Download PDF]

    Background: Convolvulus sepium (hedge bindweed) detection in sugar beet fields remains a challenging problem due to variation in appearance of plants, illumination changes, foliage occlusions, and different growth stages under field conditions. Current approaches for weed and crop recognition, segmentation and detection rely predominantly on conventional machine-learning techniques that require a large set of hand-crafted features for modelling. These might fail to generalize over different fields and environments. Results: Here, we present an approach that develops a deep convolutional neural network (CNN) based on the tiny YOLOv3 architecture for C. sepium and sugar beet detection. We generated 2271 synthetic images, before combining these images with 452 field images to train the developed model. YOLO anchor box sizes were calculated from the training dataset using a k-means clustering approach. The resulting model was tested on 100 field images, showing that the combination of synthetic and original field images to train the developed model could improve the mean average precision (mAP) metric from 0.751 to 0.829 compared to using collected field images alone. We also compared the performance of the developed model with the YOLOv3 and Tiny YOLO models. The developed model achieved a better trade-off between accuracy and speed. Specifically, the average precisions (APs@IoU0.5) of C. sepium and sugar beet were 0.761 and 0.897 respectively with 6.48 ms inference time per image (800 × 1200) on a NVIDIA Titan X GPU environment.

    @article{lincoln41223,
    volume = {16},
    month = {March},
    author = {Junfeng Gao and Andrew French and Michael Pound and Yong He and Tony Pridmore and Jan Pieters},
    title = {Deep convolutional neural networks for image-based Convolvulus sepium detection in sugar beet fields},
    publisher = {BMC},
    year = {2020},
    journal = {Plant Methods},
    doi = {10.1186/s13007-020-00570-z},
    pages = {19},
    url = {https://eprints.lincoln.ac.uk/id/eprint/41223/},
    abstract = {Background
    Convolvulus sepium (hedge bindweed) detection in sugar beet fields remains a challenging problem due to variation in appearance of plants, illumination changes, foliage occlusions, and different growth stages under field conditions. Current approaches for weed and crop recognition, segmentation and detection rely predominantly on conventional machine-learning techniques that require a large set of hand-crafted features for modelling. These might fail to generalize over different fields and environments.
    Results
    Here, we present an approach that develops a deep convolutional neural network (CNN) based on the tiny YOLOv3 architecture for C. sepium and sugar beet detection. We generated 2271 synthetic images, before combining these images with 452 field images to train the developed model. YOLO anchor box sizes were calculated from the training dataset using a k-means clustering approach. The resulting model was tested on 100 field images, showing that the combination of synthetic and original field images to train the developed model could improve the mean average precision (mAP) metric from 0.751 to 0.829 compared to using collected field images alone. We also compared the performance of the developed model with the YOLOv3 and Tiny YOLO models. The developed model achieved a better trade-off between accuracy and speed. Specifically, the average precisions (APs@IoU0.5) of C. sepium and sugar beet were 0.761 and 0.897 respectively with 6.48 ms inference time per image (800 {$\times$} 1200) on a NVIDIA Titan X GPU environment.}
    }
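    The entry above derives YOLO anchor-box sizes from the training labels with k-means. Below is a minimal sketch of the commonly used variant that clusters (width, height) pairs under a 1 − IoU distance; the function names, the plain-mean centroid update and the default k are illustrative assumptions, not details taken from the paper.

        import numpy as np

        def iou_wh(box, clusters):
            # IoU between one (w, h) box and k cluster boxes, all anchored at the origin.
            inter = np.minimum(box[0], clusters[:, 0]) * np.minimum(box[1], clusters[:, 1])
            union = box[0] * box[1] + clusters[:, 0] * clusters[:, 1] - inter
            return inter / union

        def kmeans_anchors(boxes, k=6, iters=100, seed=0):
            # boxes: (n, 2) array of labelled (width, height) pairs in pixels.
            rng = np.random.default_rng(seed)
            clusters = boxes[rng.choice(len(boxes), size=k, replace=False)]
            for _ in range(iters):
                # Assign each box to the nearest cluster, where distance = 1 - IoU.
                assign = np.argmax([iou_wh(b, clusters) for b in boxes], axis=1)
                new = np.array([boxes[assign == i].mean(axis=0) if np.any(assign == i)
                                else clusters[i] for i in range(k)])
                if np.allclose(new, clusters):
                    break
                clusters = new
            return clusters[np.argsort(clusters.prod(axis=1))]  # sorted by box area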
  • H. Cuayahuitl, “A data-efficient deep learning approach for deployable multimodal social robots,” Neurocomputing, vol. 396, p. 587–598, 2020. doi:10.1016/j.neucom.2018.09.104
    [BibTeX] [Abstract] [Download PDF]

    The deep supervised and reinforcement learning paradigms (among others) have the potential to endow interactive multimodal social robots with the ability of acquiring skills autonomously. But it is still not very clear yet how they can be best deployed in real world applications. As a step in this direction, we propose a deep learning-based approach for efficiently training a humanoid robot to play multimodal games, using the game of ‘Noughts & Crosses’ with two variants as a case study. Its minimum requirements for learning to perceive and interact are based on a few hundred example images, a few example multimodal dialogues and physical demonstrations of robot manipulation, and automatic simulations. In addition, we propose novel algorithms for robust visual game tracking and for competitive policy learning with high winning rates, which substantially outperform DQN-based baselines. While an automatic evaluation shows evidence that the proposed approach can be easily extended to new games with competitive robot behaviours, a human evaluation with 130 humans playing with the Pepper robot confirms that highly accurate visual perception is required for successful game play.

    @article{lincoln42805,
    volume = {396},
    month = {July},
    author = {Heriberto Cuayahuitl},
    note = {The final published version of this article can be accessed online at https://www.journals.elsevier.com/neurocomputing/},
    title = {A Data-Efficient Deep Learning Approach for Deployable Multimodal Social Robots},
    publisher = {Elsevier},
    year = {2020},
    journal = {Neurocomputing},
    doi = {10.1016/j.neucom.2018.09.104},
    pages = {587--598},
    url = {https://eprints.lincoln.ac.uk/id/eprint/42805/},
    abstract = {The deep supervised and reinforcement learning paradigms (among others) have the potential to endow interactive multimodal social robots with the ability of acquiring skills autonomously. But it is still not very clear yet how they can be best deployed in real world applications. As a step in this direction, we propose a deep learning-based approach for efficiently training a humanoid robot to play multimodal games---and use the game of `Noughts \& Crosses' with two variants as a case study. Its minimum requirements for learning to perceive and interact are based on a few hundred example images, a few example multimodal dialogues and physical demonstrations of robot manipulation, and automatic simulations. In addition, we propose novel algorithms for robust visual game tracking and for competitive policy learning with high winning rates, which substantially outperform DQN-based baselines. While an automatic evaluation shows evidence that the proposed approach can be easily extended to new games with competitive robot behaviours, a human evaluation with 130 humans playing with the {\it Pepper} robot confirms that highly accurate visual perception is required for successful game play.}
    }
  • H. Wang, J. Peng, X. Zheng, and S. Yue, “A robust visual system for small target motion detection against cluttered moving backgrounds,” Ieee transactions on neural networks and learning systems, vol. 31, iss. 3, p. 839–853, 2020. doi:10.1109/TNNLS.2019.2910418
    [BibTeX] [Abstract] [Download PDF]

    Monitoring small objects against cluttered moving backgrounds is a huge challenge to future robotic vision systems. As a source of inspiration, insects are quite apt at searching for mates and tracking prey, which always appear as small dim speckles in the visual field. The exquisite sensitivity of insects for small target motion, as revealed recently, is coming from a class of specific neurons called small target motion detectors (STMDs). Although a few STMD-based models have been proposed, these existing models only use motion information for small target detection and cannot discriminate small targets from small-target-like background features (named fake features). To address this problem, this paper proposes a novel visual system model (STMD+) for small target motion detection, which is composed of four subsystems–ommatidia, motion pathway, contrast pathway, and mushroom body. Compared with the existing STMD-based models, the additional contrast pathway extracts directional contrast from luminance signals to eliminate false positive background motion. The directional contrast and the extracted motion information by the motion pathway are integrated into the mushroom body for small target discrimination. Extensive experiments showed the significant and consistent improvements of the proposed visual system model over the existing STMD-based models against fake features.

    @article{lincoln36114,
    volume = {31},
    number = {3},
    month = {March},
    author = {Hongxin Wang and Jigen Peng and Xuqiang Zheng and Shigang Yue},
    title = {A Robust Visual System for Small Target Motion Detection Against Cluttered Moving Backgrounds},
    publisher = {Institute of Electrical and Electronics Engineers (IEEE)},
    year = {2020},
    journal = {IEEE Transactions on Neural Networks and Learning Systems},
    doi = {10.1109/TNNLS.2019.2910418},
    pages = {839--853},
    url = {https://eprints.lincoln.ac.uk/id/eprint/36114/},
    abstract = {Monitoring small objects against cluttered moving backgrounds is a huge challenge to future robotic vision systems. As a source of inspiration, insects are quite apt at searching for mates and tracking prey, which always appear as small dim speckles in the visual field. The exquisite sensitivity of insects for small target motion, as revealed recently, is coming from a class of specific neurons called small target motion detectors (STMDs). Although a few STMD-based models have been proposed, these existing models only use motion information for small target detection and cannot discriminate small targets from small-target-like background features (named fake features). To address this problem, this paper proposes a novel visual system model (STMD+) for small target motion detection, which is composed of four subsystems--ommatidia, motion pathway, contrast pathway, and mushroom body. Compared with the existing STMD-based models, the additional contrast pathway extracts directional contrast from luminance signals to eliminate false positive background motion. The directional contrast and the extracted motion information by the motion pathway are integrated into the mushroom body for small target discrimination. Extensive experiments showed the significant and consistent improvements of the proposed visual system model over the existing STMD-based models against fake features.}
    }
  • R. Polvara, M. Patacchiola, M. Hanheide, and G. Neumann, “Sim-to-real quadrotor landing via sequential deep q-networks and domain randomization,” Robotics, vol. 9, iss. 1, 2020. doi:10.3390/robotics9010008
    [BibTeX] [Abstract] [Download PDF]

    The autonomous landing of an Unmanned Aerial Vehicle (UAV) on a marker is one of the most challenging problems in robotics. Many solutions have been proposed, with the best results achieved via customized geometric features and external sensors. This paper discusses for the first time the use of deep reinforcement learning as an end-to-end learning paradigm to find a policy for UAVs autonomous landing. Our method is based on a divide-and-conquer paradigm that splits a task into sequential sub-tasks, each one assigned to a Deep Q-Network (DQN), hence the name Sequential Deep Q-Network (SDQN). Each DQN in an SDQN is activated by an internal trigger, and it represents a component of a high-level control policy, which can navigate the UAV towards the marker. Different technical solutions have been implemented, for example combining vanilla and double DQNs, and the introduction of a partitioned buffer replay to address the problem of sample efficiency. One of the main contributions of this work consists in showing how an SDQN trained in a simulator via domain randomization, can effectively generalize to real-world scenarios of increasing complexity. The performance of SDQNs is comparable with a state-of-the-art algorithm and human pilots while being quantitatively better in noisy conditions.

    @article{lincoln40216,
    volume = {9},
    number = {1},
    month = {February},
    author = {Riccardo Polvara and Massimiliano Patacchiola and Marc Hanheide and Gerhard Neumann},
    title = {Sim-to-Real Quadrotor Landing via Sequential Deep Q-Networks and Domain Randomization},
    publisher = {MDPI},
    year = {2020},
    journal = {Robotics},
    doi = {10.3390/robotics9010008},
    url = {https://eprints.lincoln.ac.uk/id/eprint/40216/},
    abstract = {The autonomous landing of an Unmanned Aerial Vehicle (UAV) on a marker is one of the most challenging problems in robotics. Many solutions have been proposed, with the best results achieved via customized geometric features and external sensors. This paper discusses for the first time the use of deep reinforcement learning as an end-to-end learning paradigm to find a policy for UAVs autonomous landing. Our method is based on a divide-and-conquer paradigm that splits a task into sequential sub-tasks, each one assigned to a Deep Q-Network (DQN), hence the name Sequential Deep Q-Network (SDQN). Each DQN in an SDQN is activated by an internal trigger, and it represents a component of a high-level control policy, which can navigate the UAV towards the marker. Different technical solutions have been implemented, for example combining vanilla and double DQNs, and the introduction of a partitioned buffer replay to address the problem of sample efficiency. One of the main contributions of this work consists in showing how an SDQN trained in a simulator via domain randomization, can effectively generalize to real-world scenarios of increasing complexity. The performance of SDQNs is comparable with a state-of-the-art algorithm and human pilots while being quantitatively better in noisy conditions.}
    }
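    The SDQN described above splits the landing task into sequential sub-tasks, each owned by one DQN and gated by an internal trigger. A schematic sketch of that dispatch logic follows; all names, triggers and the toy policies are hypothetical stand-ins for the components described in the paper.

        from dataclasses import dataclass
        from typing import Callable, Sequence

        @dataclass
        class SubTask:
            active: Callable[[dict], bool]  # internal trigger: does this sub-task apply now?
            act: Callable[[dict], int]      # greedy action from this sub-task's trained DQN

        def sdqn_act(obs: dict, sub_tasks: Sequence[SubTask]) -> int:
            # Sub-tasks are ordered (e.g. align over the marker, then descend);
            # the first one whose trigger fires controls the vehicle this step.
            for task in sub_tasks:
                if task.active(obs):
                    return task.act(obs)
            raise RuntimeError("no sub-task active for this observation")

        # Toy wiring with made-up triggers and constant policies:
        landing = [
            SubTask(active=lambda o: o["altitude"] > 1.5, act=lambda o: 0),  # align
            SubTask(active=lambda o: True, act=lambda o: 1),                 # descend
        ]
        action = sdqn_act({"altitude": 2.0}, landing)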
  • M. Bartlett, C. Costescu, P. Baxter, and S. Thill, “Requirements for robotic interpretation of social signals ‘in the wild’: insights from diagnostic criteria of autism spectrum disorder,” Mdpi information, vol. 11, iss. 81, p. 1–20, 2020. doi:10.3390/info11020081
    [BibTeX] [Abstract] [Download PDF]

    The last few decades have seen widespread advances in technological means to characterise observable aspects of human behaviour such as gaze or posture. Among others, these developments have also led to significant advances in social robotics. At the same time, however, social robots are still largely evaluated in idealised or laboratory conditions, and it remains unclear whether the technological progress is sufficient to let such robots move ‘into the wild’. In this paper, we characterise the problems that a social robot in the real world may face, and review the technological state of the art in terms of addressing these. We do this by considering what it would entail to automate the diagnosis of Autism Spectrum Disorder (ASD). Just as for social robotics, ASD diagnosis fundamentally requires the ability to characterise human behaviour from observable aspects. However, therapists provide clear criteria regarding what to look for. As such, ASD diagnosis is a situation that is both relevant to real-world social robotics and comes with clear metrics. Overall, we demonstrate that even with relatively clear therapist-provided criteria and current technological progress, the need to interpret covert behaviour cannot yet be fully addressed. Our discussions have clear implications for ASD diagnosis, but also for social robotics more generally. For ASD diagnosis, we provide a classification of criteria based on whether or not they depend on covert information and highlight present-day possibilities for supporting therapists in diagnosis through technological means. For social robotics, we highlight the fundamental role of covert behaviour, show that the current state-of-the-art is unable to characterise this, and emphasise that future research should tackle this explicitly in realistic settings.

    @article{lincoln40108,
    volume = {11},
    number = {81},
    month = {February},
    author = {M Bartlett and C Costescu and Paul Baxter and S Thill},
    title = {Requirements for Robotic Interpretation of Social Signals `in the Wild': Insights from Diagnostic Criteria of Autism Spectrum Disorder},
    publisher = {MDPI},
    year = {2020},
    journal = {MDPI Information},
    doi = {10.3390/info11020081},
    pages = {1--20},
    url = {https://eprints.lincoln.ac.uk/id/eprint/40108/},
    abstract = {The last few decades have seen widespread advances in technological means to characterise
    observable aspects of human behaviour such as gaze or posture. Among others, these developments
    have also led to significant advances in social robotics. At the same time, however, social robots
    are still largely evaluated in idealised or laboratory conditions, and it remains unclear whether
    the technological progress is sufficient to let such robots move `into the wild'. In this paper, we
    characterise the problems that a social robot in the real world may face, and review the technological
    state of the art in terms of addressing these. We do this by considering what it would entail
    to automate the diagnosis of Autism Spectrum Disorder (ASD). Just as for social robotics, ASD
    diagnosis fundamentally requires the ability to characterise human behaviour from observable
    aspects. However, therapists provide clear criteria regarding what to look for. As such, ASD diagnosis
    is a situation that is both relevant to real-world social robotics and comes with clear metrics. Overall,
    we demonstrate that even with relatively clear therapist-provided criteria and current technological
    progress, the need to interpret covert behaviour cannot yet be fully addressed. Our discussions have
    clear implications for ASD diagnosis, but also for social robotics more generally. For ASD diagnosis,
    we provide a classification of criteria based on whether or not they depend on covert information
    and highlight present-day possibilities for supporting therapists in diagnosis through technological
    means. For social robotics, we highlight the fundamental role of covert behaviour, show that the
    current state-of-the-art is unable to characterise this, and emphasise that future research should tackle
    this explicitly in realistic settings.}
    }
  • B. Chen, J. Huang, Y. Huang, S. Kollias, and S. Yue, “Combining guaranteed and spot markets in display advertising: selling guaranteed page views with stochastic demand,” European journal of operational research, vol. 280, iss. 3, p. 1144–1159, 2020. doi:10.1016/j.ejor.2019.07.067
    [BibTeX] [Abstract] [Download PDF]

    While page views are often sold instantly through real-time auctions when users visit Web pages, they can also be sold in advance via guaranteed contracts. In this paper, we combine guaranteed and spot markets in display advertising, and present a dynamic programming model to study how a media seller should optimally allocate and price page views between guaranteed contracts and advertising auctions. This optimisation problem is challenging because the allocation and pricing of guaranteed contracts endogenously affects the expected revenue from advertising auctions in the future. We take into consideration several distinct characteristics regarding the media buyers’ purchasing behaviour, such as risk aversion and stochastic demand arrivals, and devise a scalable and efficient algorithm to solve the optimisation problem. Our work is one of a few studies that investigate the auction-based posted price guaranteed contracts for display advertising. The proposed model is further empirically validated with a display advertising data set from a UK supply-side platform. The results show that the optimal pricing and allocation strategies from our model can significantly increase the media seller’s expected total revenue, and the model suggests different optimal strategies based on the level of competition in advertising auctions.

    @article{lincoln39575,
    volume = {280},
    number = {3},
    month = {February},
    author = {Bowei Chen and Jingmin Huang and Yufei Huang and Stefanos Kollias and Shigang Yue},
    title = {Combining guaranteed and spot markets in display advertising: Selling guaranteed page views with stochastic demand},
    publisher = {Elsevier},
    year = {2020},
    journal = {European Journal of Operational Research},
    doi = {10.1016/j.ejor.2019.07.067},
    pages = {1144--1159},
    url = {https://eprints.lincoln.ac.uk/id/eprint/39575/},
    abstract = {While page views are often sold instantly through real-time auctions when users visit Web pages, they can also be sold in advance via guaranteed contracts. In this paper, we combine guaranteed and spot markets in display advertising, and present a dynamic programming model to study how a media seller should optimally allocate and price page
    views between guaranteed contracts and advertising auctions. This optimisation problem is challenging because the allocation and pricing of guaranteed contracts endogenously affects the expected revenue from advertising auctions in the future. We take into consideration several distinct characteristics regarding the media buyers' purchasing behaviour, such as risk aversion and stochastic demand arrivals, and devise a scalable and efficient algorithm to solve the optimisation problem. Our work is one of a few studies that investigate the auction-based posted price guaranteed contracts for display advertising. The proposed model is further empirically validated with a display advertising data set from a UK supply-side platform. The results show that the optimal pricing and allocation strategies from our model can significantly increase the media seller's expected total revenue, and the model suggests different optimal strategies based on the level of competition in advertising auctions.}
    }
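    As a toy illustration of the kind of dynamic program sketched in the abstract above (not the paper's actual model, which also handles risk aversion and auction dynamics): a seller with s remaining page views at each step either holds them for the spot market or posts a guaranteed-contract price, trading current revenue against future option value. All function and parameter names here are made up.

        import numpy as np

        def dp_revenue(T, S, prices, accept_prob, spot_value):
            # V[t, s] = best expected revenue from time t with s page views unsold.
            V = np.zeros((T + 1, S + 1))
            V[T] = spot_value * np.arange(S + 1)    # leftovers sold in the spot market
            for t in range(T - 1, -1, -1):
                for s in range(S + 1):
                    best = V[t + 1, s]              # post no contract this step
                    for p in (prices if s > 0 else []):
                        q = accept_prob(p)          # chance a buyer takes price p
                        best = max(best, q * (p + V[t + 1, s - 1]) + (1 - q) * V[t + 1, s])
                    V[t, s] = best
            return V

        V = dp_revenue(T=50, S=20, prices=[0.8, 1.0, 1.2],
                       accept_prob=lambda p: max(0.0, 1.2 - p), spot_value=0.6)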
  • J. P. Fentanes, A. Badiee, T. Duckett, J. Evans, S. Pearson, and G. Cielniak, “Kriging-based robotic exploration for soil moisture mapping using a cosmic-ray sensor,” Journal of field robotics, vol. 37, iss. 1, p. 122–136, 2020. doi:10.1002/rob.21914
    [BibTeX] [Abstract] [Download PDF]

    Soil moisture monitoring is a fundamental process to enhance agricultural outcomes and to protect the environment. The traditional methods for measuring moisture content in the soil are laborious and expensive, and therefore there is a growing interest in developing sensors and technologies which can reduce the effort and costs. In this work, we propose to use an autonomous mobile robot equipped with a state-of-the-art noncontact soil moisture sensor building moisture maps on the fly and automatically selecting the most optimal sampling locations. We introduce an autonomous exploration strategy driven by the quality of the soil moisture model indicating areas of the field where the information is less precise. The sensor model follows the Poisson distribution and we demonstrate how to integrate such measurements into the kriging framework. We also investigate a range of different exploration strategies and assess their usefulness through a set of evaluation experiments based on real soil moisture data collected from two different fields. We demonstrate the benefits of using the adaptive measurement interval and adaptive sampling strategies for building better quality soil moisture models. The presented method is general and can be applied to other scenarios where the measured phenomena directly affect the acquisition time and need to be spatially mapped.

    @article{lincoln37350,
    volume = {37},
    number = {1},
    month = {January},
    author = {Jaime Pulido Fentanes and Amir Badiee and Tom Duckett and Jonathan Evans and Simon Pearson and Grzegorz Cielniak},
    title = {Kriging-based robotic exploration for soil moisture mapping using a cosmic-ray sensor},
    publisher = {Wiley Periodicals, Inc.},
    year = {2020},
    journal = {Journal of Field Robotics},
    doi = {10.1002/rob.21914},
    pages = {122--136},
    url = {https://eprints.lincoln.ac.uk/id/eprint/37350/},
    abstract = {Soil moisture monitoring is a fundamental process to enhance agricultural outcomes and to protect the environment. The traditional methods for measuring moisture content in the soil are laborious and expensive, and therefore there is a growing interest in developing sensors and technologies which can reduce the effort and costs. In this work, we propose to use an autonomous mobile robot equipped with a state-of-the-art noncontact soil moisture sensor building moisture maps on the fly and automatically selecting the most optimal sampling locations. We introduce an autonomous exploration strategy driven by the quality of the soil moisture model indicating areas of the field where the information is less precise. The sensor model follows the Poisson distribution and we demonstrate how to integrate such measurements into the kriging framework. We also investigate a range of different exploration strategies and assess their usefulness through a set of evaluation experiments based on real soil moisture data collected from two different fields. We demonstrate the benefits of using the adaptive measurement interval and adaptive sampling strategies for building better quality soil moisture models. The presented method is general and can be applied to other scenarios where the measured phenomena directly affect the acquisition time and need to be spatially mapped.}
    }
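    The exploration strategy above drives the robot towards locations where the kriging model is least certain. A minimal sketch of that loop using simple kriging (equivalent to Gaussian-process regression with an RBF covariance) is given below; the length scale, noise level and grid are placeholders, and the paper's Poisson sensor model is not reproduced here.

        import numpy as np

        def rbf(a, b, length=25.0, sigma2=1.0):
            # Squared-exponential covariance between two sets of 2-D field coordinates.
            d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
            return sigma2 * np.exp(-0.5 * d2 / length ** 2)

        def krige(x_obs, y_obs, x_new, noise=1e-3):
            # Posterior mean and variance of soil moisture at unvisited locations x_new.
            K = rbf(x_obs, x_obs) + noise * np.eye(len(x_obs))
            k_star = rbf(x_obs, x_new)
            mean = y_obs.mean() + k_star.T @ np.linalg.solve(K, y_obs - y_obs.mean())
            var = rbf(x_new, x_new).diagonal() - (k_star * np.linalg.solve(K, k_star)).sum(0)
            return mean, var

        # Uncertainty-driven exploration: visit the candidate with the largest variance.
        x_obs = np.array([[0.0, 0.0], [30.0, 10.0], [10.0, 40.0]])
        y_obs = np.array([0.21, 0.18, 0.25])
        candidates = np.stack(np.meshgrid(np.arange(0, 50, 5.0),
                                          np.arange(0, 50, 5.0)), -1).reshape(-1, 2)
        _, var = krige(x_obs, y_obs, candidates)
        next_waypoint = candidates[np.argmax(var)]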
  • P. Chudzik, A. Mitchell, M. Alkaseem, Y. Wu, S. Fang, T. Hudaib, S. Pearson, and B. Al-Diri, “Mobile real-time grasshopper detection and data aggregation framework,” Scientific reports, vol. 10, p. 1150, 2020. doi:10.1038/s41598-020-57674-8
    [BibTeX] [Abstract] [Download PDF]

    Insects of the family Orthoptera: Acrididae, including grasshoppers and locusts, devastate crops and ecosystems around the globe. The effective control of these insects requires large numbers of trained extension agents who try to spot concentrations of the insects on the ground so that they can be destroyed before they take flight. This is a challenging and difficult task. No automatic detection system is yet available to increase scouting productivity, data scale and fidelity. Here we demonstrate MAESTRO, a novel grasshopper detection framework that deploys deep learning within RGB images to detect insects. MAESTRO uses a state-of-the-art two-stage training deep learning approach. The framework can be deployed not only on desktop computers but also on edge devices without internet connection such as smartphones. MAESTRO can gather data using cloud storage for further research and in-depth analysis. In addition, we provide a challenging new open dataset (GHCID) of highly variable grasshopper populations imaged in Inner Mongolia. The detection performance of the stationary method and the mobile app are 78 and 49 percent respectively; the stationary method requires around 1000 ms to analyze a single image, whereas the mobile app uses only around 400 ms per image. The algorithms are purely data-driven and can be used for other detection tasks in agriculture (e.g. plant disease detection) and beyond. This system can play a crucial role in the collection and analysis of data to enable more effective control of this critical global pest.

    @article{lincoln39125,
    volume = {10},
    month = {January},
    author = {Piotr Chudzik and Arthur Mitchell and Mohammad Alkaseem and Yingie Wu and Shibo Fang and Taghread Hudaib and Simon Pearson and Bashir Al-Diri},
    title = {Mobile Real-Time Grasshopper Detection and Data Aggregation Framework},
    publisher = {Springer},
    year = {2020},
    journal = {Scientific Reports},
    doi = {10.1038/s41598-020-57674-8},
    pages = {1150},
    url = {https://eprints.lincoln.ac.uk/id/eprint/39125/},
    abstract = {Insects of the family Orthoptera: Acrididae, including grasshoppers and locusts, devastate crops and ecosystems around the globe. The effective control of these insects requires large numbers of trained extension agents who try to spot concentrations of the insects on the ground so that they can be destroyed before they take flight. This is a challenging and difficult task. No automatic detection system is yet available to increase scouting productivity, data scale and fidelity. Here we demonstrate MAESTRO, a novel grasshopper detection framework that deploys deep learning within RGB images to detect insects. MAESTRO uses a state-of-the-art two-stage training deep learning approach. The framework can be deployed not only on desktop computers but also on edge devices without internet connection such as smartphones. MAESTRO can gather data using cloud storage for further research and in-depth analysis. In addition, we provide a challenging new open dataset (GHCID) of highly variable grasshopper populations imaged in Inner Mongolia. The detection performance of the stationary method and the mobile app are 78 and 49 percent respectively; the stationary method requires around 1000 ms to analyze a single image, whereas the mobile app uses only around 400 ms per image. The algorithms are purely data-driven and can be used for other detection tasks in agriculture (e.g. plant disease detection) and beyond. This system can play a crucial role in the collection and analysis of data to enable more effective control of this critical global pest.}
    }
  • R. Kirk, G. Cielniak, and M. Mangan, “L*a*b*fruits: a rapid and robust outdoor fruit detection system combining bio-inspired features with one-stage deep learning networks,” Sensors, vol. 20, iss. 1, p. 275, 2020. doi:10.3390/s20010275
    [BibTeX] [Abstract] [Download PDF]

    Automation of agricultural processes requires systems that can accurately detect and classify produce in real industrial environments that include variation in fruit appearance due to illumination, occlusion, seasons, weather conditions, etc. In this paper, we combine a visual processing approach inspired by colour-opponent theory in humans with recent advancements in one-stage deep learning networks to accurately, rapidly and robustly detect ripe soft fruits (strawberries) in real industrial settings and using standard (RGB) camera input. The resultant system was tested on an existent data-set captured in controlled conditions as well as our new real-world data-set captured on a real strawberry farm over two months. We utilise F1 score, the harmonic mean of precision and recall, to show our system matches the state-of-the-art detection accuracy (F1: 0.793 vs. 0.799) in controlled conditions; has greater generalisation and robustness to variation of spatial parameters (camera viewpoint) in the real-world data-set (F1: 0.744); and at a fraction of the computational cost allowing classification at almost 30 fps. We propose that the L*a*b*Fruits system addresses some of the most pressing limitations of current fruit detection systems and is well-suited to application in areas such as yield forecasting and harvesting. Beyond the target application in agriculture, this work also provides a proof-of-principle whereby increased performance is achieved through analysis of the domain data, capturing features at the input level rather than simply increasing model complexity.

    @article{lincoln39423,
    volume = {20},
    number = {1},
    month = {January},
    author = {Raymond Kirk and Grzegorz Cielniak and Michael Mangan},
    title = {L*a*b*Fruits: A Rapid and Robust Outdoor Fruit Detection System Combining Bio-Inspired Features with One-Stage Deep Learning Networks},
    publisher = {MDPI},
    year = {2020},
    journal = {Sensors},
    doi = {10.3390/s20010275},
    pages = {275},
    url = {https://eprints.lincoln.ac.uk/id/eprint/39423/},
    abstract = {Automation of agricultural processes requires systems that can accurately detect and classify produce in real industrial environments that include variation in fruit appearance due to illumination, occlusion, seasons, weather conditions, etc. In this paper, we combine a visual processing approach inspired by colour-opponent theory in humans with recent advancements in one-stage deep learning networks to accurately, rapidly and robustly detect ripe soft fruits (strawberries) in real industrial settings and using standard (RGB) camera input. The resultant system was tested on an existent data-set captured in controlled conditions as well as our new real-world data-set captured on a real strawberry farm over two months. We utilise F1 score, the harmonic mean of precision and recall, to show our system matches the state-of-the-art detection accuracy (F1: 0.793 vs. 0.799) in controlled conditions; has greater generalisation and robustness to variation of spatial parameters (camera viewpoint) in the real-world data-set (F1: 0.744); and at a fraction of the computational cost allowing classification at almost 30 fps. We propose that the L*a*b*Fruits system addresses some of the most pressing limitations of current fruit detection systems and is well-suited to application in areas such as yield forecasting and harvesting. Beyond the target application in agriculture, this work also provides a proof-of-principle whereby increased performance is achieved through analysis of the domain data, capturing features at the input level rather than simply increasing model complexity.}
    }
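    The F1 comparisons quoted above are simply the harmonic mean of precision and recall, which can be checked directly; the precision/recall inputs in this snippet are made up for illustration.

        def f1_score(precision: float, recall: float) -> float:
            # Harmonic mean of precision and recall, the metric used above.
            return 2.0 * precision * recall / (precision + recall)

        print(f1_score(0.80, 0.79))  # illustrative inputs, not values from the paper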
  • P. Bosilj, E. Aptoula, T. Duckett, and G. Cielniak, “Transfer learning between crop types for semantic segmentation of crops versus weeds in precision agriculture,” Journal of field robotics, vol. 37, iss. 1, p. 7–19, 2020. doi:10.1002/rob.21869
    [BibTeX] [Abstract] [Download PDF]

    Agricultural robots rely on semantic segmentation for distinguishing between crops and weeds in order to perform selective treatments, increase yield and crop health while reducing the amount of chemicals used. Deep learning approaches have recently achieved both excellent classification performance and real-time execution. However, these techniques also rely on a large amount of training data, requiring a substantial labelling effort, both of which are scarce in precision agriculture. Additional design efforts are required to achieve commercially viable performance levels under varying environmental conditions and crop growth stages. In this paper, we explore the role of knowledge transfer between deep-learning-based classifiers for different crop types, with the goal of reducing the retraining time and labelling efforts required for a new crop. We examine the classification performance on three datasets with different crop types and containing a variety of weeds, and compare the performance and retraining efforts required when using data labelled at pixel level with partially labelled data obtained through a less time-consuming procedure of annotating the segmentation output. We show that transfer learning between different crop types is possible, and reduces training times by up to 80%. Furthermore, we show that even when the data used for re-training is imperfectly annotated, the classification performance is within 2% of that of networks trained with laboriously annotated pixel-precision data.

    @article{lincoln35535,
    volume = {37},
    number = {1},
    month = {January},
    author = {Petra Bosilj and Erchan Aptoula and Tom Duckett and Grzegorz Cielniak},
    title = {Transfer learning between crop types for semantic segmentation of crops versus weeds in precision agriculture},
    publisher = {Wiley},
    year = {2020},
    journal = {Journal of Field Robotics},
    doi = {10.1002/rob.21869},
    pages = {7--19},
    url = {https://eprints.lincoln.ac.uk/id/eprint/35535/},
    abstract = {Agricultural robots rely on semantic segmentation for distinguishing between crops and weeds in order to perform selective treatments, increase yield and crop health while reducing the amount of chemicals used. Deep learning approaches have recently achieved both excellent classification performance and real-time execution. However, these techniques also rely on a large amount of training data, requiring a substantial labelling effort, both of which are scarce in precision agriculture. Additional design efforts are required to achieve commercially viable performance levels under varying environmental conditions and crop growth stages. In this paper, we explore the role of knowledge transfer between deep-learning-based classifiers for different crop types, with the goal of reducing the retraining time and labelling efforts required for a new crop. We examine the classification performance on three datasets with different crop types and containing a variety of weeds, and compare the performance and retraining efforts required when using data labelled at pixel level with partially labelled data obtained through a less time-consuming procedure of annotating the segmentation output. We show that transfer learning between different crop types is possible, and reduces training times by up to 80\%. Furthermore, we show that even when the data used for re-training is imperfectly annotated, the classification performance is within 2\% of that of networks trained with laboriously annotated pixel-precision data.}
    }
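    A minimal PyTorch sketch of the kind of transfer learning discussed above: initialise a semantic-segmentation network from weights trained on one crop, then fine-tune on a small labelled set from the new crop. The network choice, three-class layout and hyperparameters are assumptions for illustration, not the paper's configuration.

        import torch
        import torchvision

        # Three classes: soil / crop / weed (an assumed layout).
        model = torchvision.models.segmentation.fcn_resnet50(num_classes=3)
        # In practice the weights would come from training on the source crop, e.g.:
        # model.load_state_dict(torch.load("source_crop.pt"))

        optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
        loss_fn = torch.nn.CrossEntropyLoss(ignore_index=255)  # 255 = unannotated pixels

        model.train()
        images = torch.randn(2, 3, 256, 256)            # stand-in target-crop batch
        labels = torch.randint(0, 3, (2, 256, 256))     # stand-in (partial) annotations
        for _ in range(3):                              # a few fine-tuning steps
            optimiser.zero_grad()
            loss = loss_fn(model(images)["out"], labels)
            loss.backward()
            optimiser.step()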
  • C. Coppola, S. Cosar, D. R. Faria, and N. Bellotto, “Social activity recognition on continuous rgb-d video sequences,” International journal of social robotics, p. 1–15, 2020. doi:10.1007/s12369-019-00541-y
    [BibTeX] [Abstract] [Download PDF]

    Modern service robots are provided with one or more sensors, often including RGB-D cameras, to perceive objects and humans in the environment. This paper proposes a new system for the recognition of human social activities from a continuous stream of RGB-D data. Many of the works until now have succeeded in recognising activities from clipped videos in datasets, but for robotic applications it is important to be able to move to more realistic scenarios in which such activities are not manually selected. For this reason, it is useful to detect the time intervals when humans are performing social activities, the recognition of which can contribute to trigger human-robot interactions or to detect situations of potential danger. The main contributions of this research work include a novel system for the recognition of social activities from continuous RGB-D data, combining temporal segmentation and classification, as well as a model for learning the proximity-based priors of the social activities. A new public dataset with RGB-D videos of social and individual activities is also provided and used for evaluating the proposed solutions. The results show the good performance of the system in recognising social activities from continuous RGB-D data.

    @article{lincoln35151,
    month = {January},
    author = {Claudio Coppola and Serhan Cosar and Diego R. Faria and Nicola Bellotto},
    title = {Social Activity Recognition on Continuous RGB-D Video Sequences},
    publisher = {Springer},
    journal = {International Journal of Social Robotics},
    doi = {10.1007/s12369-019-00541-y},
    pages = {1--15},
    year = {2020},
    url = {https://eprints.lincoln.ac.uk/id/eprint/35151/},
    abstract = {Modern service robots are provided with one or more sensors, often including RGB-D cameras, to perceive objects and humans in the environment. This paper proposes a new system for the recognition of human social activities from a continuous stream of RGB-D data. Many of the works until now have succeeded in recognising activities from clipped videos in datasets, but for robotic applications it is important to be able to move to more realistic scenarios in which such activities are not manually selected. For this reason, it is useful to detect the time intervals when humans are performing social activities, the recognition of which can contribute to trigger human-robot interactions or to detect situations of potential danger. The main contributions of this research work include a novel system for the recognition of social activities from continuous RGB-D data, combining temporal segmentation and classification, as well as a model for learning the proximity-based priors of the social activities. A new public dataset with RGB-D videos of social and individual activities is also provided and used for evaluating the proposed solutions. The results show the good performance of the system in recognising social activities from continuous RGB-D data.}
    }
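    The system above couples temporal segmentation with classification over a continuous RGB-D stream. A schematic sliding-window version of that idea follows; the window and step sizes, feature layout and classifier are placeholders, not the paper's method.

        import numpy as np

        def segment_stream(features, classify, window=30, step=10):
            # features: (T, d) per-frame descriptors from the RGB-D stream.
            # Classify each window, then merge runs of identical labels into intervals.
            labels = [(t, classify(features[t:t + window]))
                      for t in range(0, len(features) - window + 1, step)]
            intervals, (start, current) = [], labels[0]
            for t, lab in labels[1:]:
                if lab != current:
                    intervals.append((start, t, current))
                    start, current = t, lab
            intervals.append((start, len(features), current))
            return intervals

        # Toy run: "social" when a proximity feature is small, "individual" otherwise.
        feats = np.concatenate([np.full((60, 1), 0.5), np.full((60, 1), 3.0)])
        print(segment_stream(feats, lambda w: "social" if w.mean() < 1.0 else "individual"))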
  • Z. Yan, T. Duckett, and N. Bellotto, “Online learning for 3d lidar-based human detection: experimental analysis of point cloud clustering and classification methods,” Autonomous robots, vol. 44, iss. 2, p. 147–164, 2020. doi:10.1007/s10514-019-09883-y
    [BibTeX] [Abstract] [Download PDF]

    This paper presents a system for online learning of human classifiers by mobile service robots using 3D LiDAR sensors, and its experimental evaluation in a large indoor public space. The learning framework requires a minimal set of labelled samples (e.g. one or several samples) to initialise a classifier. The classifier is then retrained iteratively during operation of the robot. New training samples are generated automatically using multi-target tracking and a pair of “experts” to estimate false negatives and false positives. Both classification and tracking utilise an efficient real-time clustering algorithm for segmentation of 3D point cloud data. We also introduce a new feature to improve human classification in sparse, long-range point clouds. We provide an extensive evaluation of our framework using a 3D LiDAR dataset of people moving in a large indoor public space, which is made available to the research community. The experiments demonstrate the influence of the system components and improved classification of humans compared to the state-of-the-art.

    @article{lincoln36535,
    volume = {44},
    number = {2},
    month = {January},
    author = {Zhi Yan and Tom Duckett and Nicola Bellotto},
    title = {Online Learning for 3D LiDAR-based Human Detection: Experimental Analysis of Point Cloud Clustering and Classification Methods},
    publisher = {Springer},
    year = {2020},
    journal = {Autonomous Robots},
    doi = {10.1007/s10514-019-09883-y},
    pages = {147--164},
    url = {https://eprints.lincoln.ac.uk/id/eprint/36535/},
    abstract = {This paper presents a system for online learning of human classifiers by mobile service robots using 3D LiDAR sensors, and its experimental evaluation in a large indoor public space. The learning framework requires a minimal set of labelled samples (e.g. one or several samples) to initialise a classifier. The classifier is then retrained iteratively during operation of the robot. New training samples are generated automatically using multi-target tracking and a pair of "experts" to estimate false negatives and false positives. Both classification and tracking utilise an efficient real-time clustering algorithm for segmentation of 3D point cloud data. We also introduce a new feature to improve human classification in sparse, long-range point clouds. We provide an extensive evaluation of our framework using a 3D LiDAR dataset of people moving in a large indoor public space, which is made available to the research community. The experiments demonstrate the influence of the system components and improved classification of humans compared to the state-of-the-art.}
    }
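    As a stand-in for the real-time clustering step described above, Euclidean clustering of a 3-D scan can be sketched with DBSCAN; note the paper's own algorithm is range-adaptive, and the eps and min_points values here are purely illustrative.

        import numpy as np
        from sklearn.cluster import DBSCAN

        def cluster_scan(points, eps=0.45, min_points=10):
            # points: (n, 3) LiDAR returns in metres; returns one array per candidate object.
            labels = DBSCAN(eps=eps, min_samples=min_points).fit_predict(points)
            return [points[labels == k] for k in sorted(set(labels)) if k != -1]  # -1 = noise

        scan = np.vstack([np.random.randn(50, 3) * 0.1 + [2, 0, 1],   # person-like blob
                          np.random.randn(50, 3) * 0.1 + [5, 3, 1]])  # second blob
        clusters = cluster_scan(scan)  # each cluster would then be classified human / not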
  • F. Camara and C. Fox, “Space invaders: pedestrian proxemic utility functions and trust zones for autonomous vehicle interactions,” International journal of social robotics, 2020. doi:10.1007/s12369-020-00717-x
    [BibTeX] [Abstract] [Download PDF]

    Understanding pedestrian proxemic utility and trust will help autonomous vehicles to plan and control interactions with pedestrians more safely and efficiently. When pedestrians cross the road in front of human-driven vehicles, the two agents use knowledge of each other’s preferences to negotiate and to determine who will yield to the other. Autonomous vehicles will require similar understandings, but previous work has shown a need for them to be provided in the form of continuous proxemic utility functions, which are not available from previous proxemics studies based on Hall’s discrete zones. To fill this gap, a new Bayesian method to infer continuous pedestrian proxemic utility functions is proposed, and related to a new definition of ‘physical trust requirement’ (PTR) for road-crossing scenarios. The method is validated on simulation data, then its parameters are inferred empirically from two public datasets. Results show that pedestrian proxemic utility is best described by a hyperbolic function, and that trust by the pedestrian is required in a discrete ‘trust zone’ which emerges naturally from simple physics. The PTR concept is then shown to be capable of generating and explaining the empirically observed zone sizes of Hall’s discrete theory of proxemics.

    @article{lincoln42876,
    title = {Space Invaders: Pedestrian Proxemic Utility Functions and Trust Zones for Autonomous Vehicle Interactions},
    author = {Fanta Camara and Charles Fox},
    publisher = {Springer},
    year = {2020},
    doi = {10.1007/s12369-020-00717-x},
    journal = {International Journal of Social Robotics},
    url = {https://eprints.lincoln.ac.uk/id/eprint/42876/},
    abstract = {Understanding pedestrian proxemic utility and trust will help autonomous vehicles to plan and control interactions with pedestrians more safely and efficiently. When pedestrians cross the road in front of human-driven vehicles, the two agents use knowledge of each other's preferences to negotiate and to determine who will yield to the other. Autonomous vehicles will require similar understandings, but previous work has shown a need for them to be provided in the form of continuous proxemic utility functions, which are not available from previous proxemics studies based on Hall's discrete zones. To fill this gap, a new Bayesian method to infer continuous pedestrian proxemic utility functions is proposed, and related to a new definition of `physical trust requirement' (PTR) for road-crossing scenarios. The method is validated on simulation data, then its parameters are inferred empirically from two public datasets. Results show that pedestrian proxemic utility is best described by a hyperbolic function, and that trust by the pedestrian is required in a discrete `trust zone' which emerges naturally from simple physics. The PTR concept is then shown to be capable of generating and explaining the empirically observed zone sizes of Hall's discrete theory of proxemics.}
    }
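    The headline result above is that proxemic cost is best described by a hyperbolic function of distance. A one-line illustration of that functional form follows; the gain k and the clipping distance are illustrative choices, not fitted values from the paper.

        def proxemic_utility(distance_m: float, k: float = 1.0, d_min: float = 0.05) -> float:
            # Hyperbolic proxemic cost: utility ~ -k / d, so the penalty for another
            # agent's proximity grows sharply as distance shrinks.
            return -k / max(distance_m, d_min)

        for d in (0.5, 1.0, 2.0, 4.0):
            print(d, proxemic_utility(d))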
  • J. Lock, I. Gilchrist, G. Cielniak, and N. Bellotto, “Experimental analysis of a spatialised audio interface for people with visual impairments,” Acm transactions on accessible computing, 2020.
    [BibTeX] [Abstract] [Download PDF]

    Sound perception is a fundamental skill for many people with severe sight impairments. The research presented in this paper is part of an ongoing project with the aim to create a mobile guidance aid to help people with vision impairments find objects within an unknown indoor environment. This system requires an effective non-visual interface and uses bone-conduction headphones to transmit audio instructions to the user. It has been implemented and tested with spatialised audio cues, which convey the direction of a predefined target in 3D space. We present an in-depth evaluation of the audio interface with several experiments that involve a large number of participants, both blindfolded and with actual visual impairments, and analyse the pros and cons of our design choices. In addition to producing results comparable to the state-of-the-art, we found that Fitts’s Law (a predictive model for human movement) provides a suitable metric that can be used to improve and refine the quality of the audio interface in future mobile navigation aids.

    @article{lincoln41544,
    title = {Experimental Analysis of a Spatialised Audio Interface for People with Visual Impairments},
    author = {Jacobus Lock and Iain Gilchrist and Grzegorz Cielniak and Nicola Bellotto},
    publisher = {Association for Computing Machinery},
    year = {2020},
    journal = {ACM Transactions on Accessible Computing},
    url = {https://eprints.lincoln.ac.uk/id/eprint/41544/},
    abstract = {Sound perception is a fundamental skill for many people with severe sight impairments. The research presented in this paper is part of an ongoing project with the aim to create a mobile guidance aid to help people with vision impairments find objects within an unknown indoor environment. This system requires an effective non-visual interface and uses bone-conduction headphones to transmit audio instructions to the user. It has been implemented and tested with spatialised audio cues, which convey the direction of a predefined target in 3D space. We present an in-depth evaluation of the audio interface with several experiments that involve a large number of participants, both blindfolded and with actual visual impairments, and analyse the pros and cons of our design choices. In addition to producing results comparable to the state-of-the-art, we found that Fitts's Law (a predictive model for human movement) provides a suitable metric that can be used to improve and refine the quality of the audio interface in future mobile navigation aids.}
    }
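    Fitts's Law, used above as the quality metric for the audio interface, predicts movement time from target distance and width. A direct transcription follows; the coefficients a and b would be fitted per user and interface, and the values in the example are illustrative only.

        import math

        def fitts_movement_time(a: float, b: float, distance: float, width: float) -> float:
            # MT = a + b * log2(2D / W); log2(2D / W) is the task's index of difficulty in bits.
            return a + b * math.log2(2.0 * distance / width)

        # Illustrative coefficients: 0.2 s intercept, 0.1 s per bit of difficulty.
        print(fitts_movement_time(a=0.2, b=0.1, distance=1.5, width=0.3))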
  • J. Singh, A. R. Srinivasan, G. Neumann, and A. Kucukyilmaz, “Haptic-guided teleoperation of a 7-dof collaborative robot arm with an identical twin master,” Ieee transactions on haptics, p. 1–1, 2020. doi:10.1109/TOH.2020.2971485
    [BibTeX] [Abstract] [Download PDF]

    In this study, we describe two techniques to enable haptic-guided teleoperation using 7-DoF cobot arms as master and slave devices. A shortcoming of using cobots as master-slave systems is the lack of force feedback at the master side. However, recent developments in cobot technologies have brought in affordable, flexible, and safe torque-controlled robot arms, which can be programmed to generate force feedback to mimic the operation of a haptic device. In this study, we use two Franka Emika Panda robot arms as a twin master-slave system to enable haptic-guided teleoperation. We propose a two layer mechanism to implement force feedback due to 1) object interactions in the slave workspace, and 2) virtual forces, e.g. those that can repel from static obstacles in the remote environment or provide task-related guidance forces. We present two different approaches for force rendering and conduct an experimental study to evaluate the performance and usability of these approaches in comparison to teleoperation without haptic guidance. Our results indicate that the proposed joint torque coupling method for rendering task forces improves energy requirements during haptic guided telemanipulation, providing realistic force feedback by accurately matching the slave torque readings at the master side.

    @article{lincoln40137,
    title = {Haptic-Guided Teleoperation of a 7-DoF Collaborative Robot Arm with an Identical Twin Master},
    author = {Jayant Singh and Aravinda Ramakrishnan Srinivasan and Gerhard Neumann and Ayse Kucukyilmaz},
    publisher = {IEEE},
    year = {2020},
    pages = {1--1},
    doi = {10.1109/TOH.2020.2971485},
    journal = {IEEE Transactions on Haptics},
    url = {https://eprints.lincoln.ac.uk/id/eprint/40137/},
    abstract = {In this study, we describe two techniques to enable haptic-guided teleoperation using 7-DoF cobot arms as master and slave devices. A shortcoming of using cobots as master-slave systems is the lack of force feedback at the master side. However, recent developments in cobot technologies have brought in affordable, flexible, and safe torque-controlled robot arms, which can be programmed to generate force feedback to mimic the operation of a haptic device. In this study, we use two Franka Emika Panda robot arms as a twin master-slave system to enable haptic-guided teleoperation. We propose a two layer mechanism to implement force feedback due to 1) object interactions in the slave workspace, and 2) virtual forces, e.g. those that can repel from static obstacles in the remote environment or provide task-related guidance forces. We present two different approaches for force rendering and conduct an experimental study to evaluate the performance and usability of these approaches in comparison to teleoperation without haptic guidance. Our results indicate that the proposed joint torque coupling method for rendering task forces improves energy requirements during haptic guided telemanipulation, providing realistic force feedback by accurately matching the slave torque readings at the master side.}
    }
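    The twin master-slave system above renders slave-side interaction torques on the master. A schematic joint-space PD coupling of the kind such systems use is sketched below; the gains and the symmetric structure are illustrative assumptions, not the paper's controller.

        import numpy as np

        def coupling_torque(q_master, q_slave, qd_master, qd_slave, kp=30.0, kd=2.0):
            # Torque applied at the master's joints, pulling it toward the slave's
            # configuration; the slave receives the mirror-image command.
            return kp * (q_slave - q_master) + kd * (qd_slave - qd_master)

        q_m = np.zeros(7)               # 7-DoF master joint angles (rad)
        q_s = np.full(7, 0.05)          # slave lags slightly, e.g. due to contact
        tau_master = coupling_torque(q_m, q_s, np.zeros(7), np.zeros(7))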
  • Q. Fu and S. Yue, “Modelling drosophila motion vision pathways for decoding the direction of translating objects against cluttered moving backgrounds,” Biological cybernetics, 2020. doi:10.1007/s00422-020-00841-x
    [BibTeX] [Abstract] [Download PDF]

    Decoding the direction of translating objects in front of cluttered moving backgrounds, accurately and efficiently, is still a challenging problem. In nature, lightweight and low-powered flying insects apply motion vision to detect a moving target in highly variable environments during flight, which are excellent paradigms to learn motion perception strategies. This paper investigates the fruit fly Drosophila motion vision pathways and presents computational modelling based on cutting-edge physiological researches. The proposed visual system model features bio-plausible ON and OFF pathways, wide-field horizontal-sensitive (HS) and vertical-sensitive (VS) systems. The main contributions of this research are on two aspects: (1) the proposed model articulates the forming of both direction-selective and direction-opponent responses, revealed as principal features of motion perception neural circuits, in a feed-forward manner; (2) it also shows robust direction selectivity to translating objects in front of cluttered moving backgrounds, via the modelling of spatiotemporal dynamics including combination of motion pre-filtering mechanisms and ensembles of local correlators inside both the ON and OFF pathways, which works effectively to suppress irrelevant background motion or distractors, and to improve the dynamic response. Accordingly, the direction of translating objects is decoded as global responses of both the HS and VS systems with positive or negative output indicating preferred-direction or null-direction translation. The experiments have verified the effectiveness of the proposed neural system model, and demonstrated its responsive preference to faster-moving, higher-contrast and larger-size targets embedded in cluttered moving backgrounds.

    @article{lincoln42133,
    month = {July},
    title = {Modelling Drosophila motion vision pathways for decoding the direction of translating objects against cluttered moving backgrounds},
    author = {Qinbing Fu and Shigang Yue},
    publisher = {Springer},
    year = {2020},
    doi = {10.1007/s00422-020-00841-x},
    journal = {Biological Cybernetics},
    url = {https://eprints.lincoln.ac.uk/id/eprint/42133/},
    abstract = {Decoding the direction of translating objects in front of cluttered moving backgrounds, accurately and efficiently, is still a challenging problem. In nature, lightweight and low-powered flying insects apply motion vision to detect a moving target in highly variable environments during flight, which are excellent paradigms to learn motion perception strategies. This paper investigates the fruit fly Drosophila motion vision pathways and presents computational modelling based on cutting-edge physiological researches. The proposed visual system model features bio-plausible ON and OFF pathways, wide-field horizontal-sensitive (HS) and vertical-sensitive (VS) systems. The main contributions of this research are on two aspects: (1) the proposed model articulates the forming of both direction-selective and direction-opponent responses, revealed as principal features of motion perception neural circuits, in a feed-forward manner; (2) it also shows robust direction selectivity to translating objects in front of cluttered moving backgrounds, via the modelling of spatiotemporal dynamics including combination of motion pre-filtering mechanisms and ensembles of local correlators inside both the ON and OFF pathways, which works effectively to suppress irrelevant background motion or distractors, and to improve the dynamic response. Accordingly, the direction of translating objects is decoded as global responses of both the HS and VS systems with positive or negative output indicating preferred-direction or null-direction translation. The experiments have verified the effectiveness of the proposed neural system model, and demonstrated its responsive preference to faster-moving, higher-contrast and larger-size targets embedded in cluttered moving backgrounds.}
    }
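    The ON/OFF pathway split that the model above builds on can be stated compactly: the temporal change of luminance is half-wave rectified into parallel brightening and darkening channels before correlation. A minimal sketch follows; the array names and toy frames are illustrative.

        import numpy as np

        def on_off_split(prev_frame, frame):
            # Half-wave rectify the temporal luminance derivative into ON (brightening)
            # and OFF (darkening) channels, processed in parallel downstream.
            change = frame.astype(float) - prev_frame.astype(float)
            return np.maximum(change, 0.0), np.maximum(-change, 0.0)

        prev = np.zeros((4, 4)); cur = np.eye(4)   # toy 4x4 luminance frames
        on, off = on_off_split(prev, cur)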
  • Y. M. Lee, R. Madigan, O. Giles, L. Garach?Morcillo, G. Markkula, C. Fox, F. Camara, M. Rothmueller, S. A. Vendelbo?Larsen, P. H. Rasmussen, A. Dietrich, D. Nathanael, V. Portouli, A. Schieben, and N. Merat, “Road users rarely use explicit communication when interacting in today’s traffic: implications for automated vehicles,” Cognition, technology & work, 2020. doi:10.1007/s10111-020-00635-y
    [BibTeX] [Abstract] [Download PDF]

    To be successful, automated vehicles (AVs) need to be able to manoeuvre in mixed traffic in a way that will be accepted by road users, and maximises traffic safety and efficiency. A likely prerequisite for this success is for AVs to be able to communicate effectively with other road users in a complex traffic environment. The current study, conducted as part of the European project interACT, investigates the communication strategies used by drivers and pedestrians while crossing the road at six observed locations, across three European countries. In total, 701 road user interactions were observed and annotated, using an observation protocol developed for this purpose. The observation protocols identified 20 event categories, observed from the approaching vehicles/drivers and pedestrians. These included information about movement, looking behaviour, hand gestures, and signals used, as well as some demographic data. These observations illustrated that explicit communication techniques, such as honking, flashing headlights by drivers, or hand gestures by drivers and pedestrians, rarely occurred. This observation was consistent across sites. In addition, a follow-on questionnaire, administered to a sub-set of the observed pedestrians after crossing the road, found that when contemplating a crossing, pedestrians were more likely to use vehicle-based behaviour, rather than communication cues from the driver. Overall, the findings suggest that vehicle-based movement information such as yielding cues are more likely to be used by pedestrians while crossing the road, compared to explicit communication cues from drivers, although some cultural differences were observed. The implications of these findings are discussed with respect to design of suitable external interfaces and communication of intent by future automated vehicles.

    @article{lincoln41217,
    month = {June},
    title = {Road users rarely use explicit communication when interacting in today's traffic: implications for automated vehicles},
    author = {Yee Mun Lee and Ruth Madigan and Oscar Giles and Laura Garach?Morcillo and Gustav Markkula and Charles Fox and Fanta Camara and Markus Rothmueller and Signe Alexandra Vendelbo?Larsen and Pernille Holm Rasmussen and Andre Dietrich and Dimitris Nathanael and Villy Portouli and Anna Schieben and Natasha Merat},
    publisher = {Springer},
    year = {2020},
    doi = {10.1007/s10111-020-00635-y},
    journal = {Cognition, Technology \& Work},
    url = {https://eprints.lincoln.ac.uk/id/eprint/41217/},
    abstract = {To be successful, automated vehicles (AVs) need to be able to manoeuvre in mixed traffic in a way that will be accepted by road users, and maximises traffic safety and efficiency. A likely prerequisite for this success is for AVs to be able to communicate effectively with other road users in a complex traffic environment. The current study, conducted as part of the European project interACT, investigates the communication strategies used by drivers and pedestrians while crossing the road at six observed locations, across three European countries. In total, 701 road user interactions were observed and annotated, using an observation protocol developed for this purpose. The observation protocols identified 20 event categories, observed from the approaching vehicles/drivers and pedestrians. These included information about movement, looking behaviour, hand gestures, and signals used, as well as some demographic data. These observations illustrated that explicit communication techniques, such as honking, flashing headlights by drivers, or hand gestures by drivers and pedestrians, rarely occurred. This observation was consistent across sites. In addition, a follow-on questionnaire, administered to a sub-set of the observed pedestrians after crossing the road, found that when contemplating a crossing, pedestrians were more likely to use vehicle-based behaviour, rather than communication cues from the driver. Overall, the findings suggest that vehicle-based movement information such as yielding cues are more likely to be used by pedestrians while crossing the road, compared to explicit communication cues from drivers, although some cultural differences were observed. The implications of these findings are discussed with respect to design of suitable external interfaces and communication of intent by future automated vehicles.}
    }
  • X. Sun, S. Yue, and M. Mangan, “A decentralised neural model explaining optimal integration of navigational strategies in insects,” Elife, vol. 9, 2020. doi:10.7554/eLife.54026
    [BibTeX] [Abstract] [Download PDF]

    Insect navigation arises from the coordinated action of concurrent guidance systems but the neural mechanisms through which each functions, and are then coordinated, remains unknown. We propose that insects require distinct strategies to retrace familiar routes (route-following) and directly return from novel to familiar terrain (homing) using different aspects of frequency encoded views that are processed in different neural pathways. We also demonstrate how the Central Complex and Mushroom Bodies regions of the insect brain may work in tandem to coordinate the directional output of different guidance cues through a contextually switched ring-attractor inspired by neural recordings. The resultant unified model of insect navigation reproduces behavioural data from a series of cue conflict experiments in realistic animal environments and offers testable hypotheses of where and how insects process visual cues, utilise the different information that they provide and coordinate their outputs to achieve the adaptive behaviours observed in the wild.

    @article{lincoln41703,
    volume = {9},
    month = {July},
    author = {Xuelong Sun and Shigang Yue and Michael Mangan},
    title = {A decentralised neural model explaining optimal integration of navigational strategies in insects},
    publisher = {eLife Sciences Publications},
    journal = {eLife},
    doi = {10.7554/eLife.54026},
    year = {2020},
    url = {https://eprints.lincoln.ac.uk/id/eprint/41703/},
    abstract = {Insect navigation arises from the coordinated action of concurrent guidance systems but the neural mechanisms through which each functions, and are then coordinated, remains unknown. We propose that insects require distinct strategies to retrace familiar routes (route-following) and directly return from novel to familiar terrain (homing) using different aspects of frequency encoded views that are processed in different neural pathways. We also demonstrate how the Central Complex and Mushroom Bodies regions of the insect brain may work in tandem to coordinate the directional output of different guidance cues through a contextually switched ring-attractor inspired by neural recordings. The resultant unified model of insect navigation reproduces behavioural data from a series of cue conflict experiments in realistic animal environments and offers testable hypotheses of where and how insects process visual cues, utilise the different information that they provide and coordinate their outputs to achieve the adaptive behaviours observed in the wild.}
    }
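
    A toy illustration of the kind of cue integration described in the entry above: each guidance system proposes a heading, encoded as a bump of activity over a ring of heading-tuned units, and the integrated heading is read out as the population vector. This is a minimal Python sketch, assuming cosine-shaped cue encodings and a simple weighted sum in place of the paper's full ring-attractor dynamics; the directions and gains are invented for illustration.

    import numpy as np

    # Ring of heading-tuned units; each cue is a cosine bump over the ring.
    N = 64
    headings = np.linspace(-np.pi, np.pi, N, endpoint=False)

    def cue_bump(direction, gain):
        """Cosine-shaped population encoding of one directional cue."""
        return gain * np.cos(headings - direction)

    # Hypothetical cues: a path-integration home vector and a route memory.
    activity = cue_bump(np.deg2rad(40.0), gain=1.0) + cue_bump(np.deg2rad(-70.0), gain=0.5)

    # Decode the settled heading as the population vector average.
    decoded = np.arctan2((activity * np.sin(headings)).sum(),
                         (activity * np.cos(headings)).sum())
    print(f"integrated heading: {np.rad2deg(decoded):.1f} deg")
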
  • S. Iacoponi, M. Calisti, and C. Laschi, “Simulation and analysis of microspines interlocking behavior on rocky surfaces: an in-depth study of the isolated spine,” Journal of mechanisms and robotics, vol. 12, iss. 6, 2020. doi:10.1115/1.4047725
    [BibTeX] [Abstract] [Download PDF]

    Microspine grippers address a large variety of possible applications, especially in field robotics and manipulation in extreme environments. Predicting and modeling the gripper behavior remains a major challenge to this day. One of the most complex aspects of these predictions is how to model the spine to rock interaction of the spine tip with the local asperity. This paper proposes a single spine model, in order to fill the gap of knowledge in this specific field. A new model for the anchoring resistance of a single spine is proposed and discussed. The model is then applied to a simulation campaign. With the aid of simulations and analytic functions, we correlated performance characteristics of a spine with a set of quantitative, macroscopic variables related to the spine, the substrate and its usage. Eventually, this paper presents some experimental comparison tests and discusses traversal phenomena observed during the tests.

    @article{lincoln46135,
    volume = {12},
    number = {6},
    month = {December},
    author = {Saverio Iacoponi and Marcello Calisti and Cecilia Laschi},
    title = {Simulation and Analysis of Microspines Interlocking Behavior on Rocky Surfaces: An In-Depth Study of the Isolated Spine},
    journal = {Journal of Mechanisms and Robotics},
    doi = {10.1115/1.4047725},
    year = {2020},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46135/},
    abstract = {Microspine grippers address a large variety of possible applications, especially in field robotics and manipulation in extreme environments. Predicting and modeling the gripper behavior remains a major challenge to this day. One of the most complex aspects of these predictions is how to model the spine to rock interaction of the spine tip with the local asperity. This paper proposes a single spine model, in order to fill the gap of knowledge in this specific field. A new model for the anchoring resistance of a single spine is proposed and discussed. The model is then applied to a simulation campaign. With the aid of simulations and analytic functions, we correlated performance characteristics of a spine with a set of quantitative, macroscopic variables related to the spine, the substrate and its usage. Eventually, this paper presents some experimental comparison tests and discusses traversal phenomena observed during the tests.}
    }
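
    The single-spine question studied above can be made concrete with a first-order toy model: a spine tip holds on an asperity when the pull direction stays within the friction cone of the contacted face. The Python sketch below runs a Monte Carlo over random asperity face angles under this Coulomb-friction assumption; the criterion, the uniform asperity distribution and all parameter values are illustrative assumptions, not the anchoring-resistance model proposed in the paper.

    import numpy as np

    # Toy Monte Carlo: fraction of random asperities a spine tip can hold,
    # under an assumed first-order Coulomb-friction interlock criterion.
    rng = np.random.default_rng(0)
    mu = 0.4                                 # assumed friction coefficient
    friction_angle = np.arctan(mu)
    load_angle = np.deg2rad(30.0)            # assumed pull angle from the surface plane

    # Asperity face angles from the surface plane, assumed uniform.
    face_angles = rng.uniform(0.0, np.pi / 2, 100_000)

    # Interlock holds if the face is steep enough that the pull stays
    # inside its friction cone: face angle + friction angle >= load angle.
    p_hold = np.mean(face_angles + friction_angle >= load_angle)
    print(f"estimated holding probability: {p_hold:.2f}")
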
  • M. T. Fountain, A. Badiee, S. Hemer, A. Delgado, M. Mangan, C. Dowding, F. Davis, and S. Pearson, “The use of light spectrum blocking films to reduce populations of drosophila suzukii matsumura in fruit crops,” Scientific reports, vol. 10, iss. 1, 2020. doi:10.1038/s41598-020-72074-8
    [BibTeX] [Abstract] [Download PDF]

    Spotted wing drosophila, Drosophila suzukii, is a serious invasive pest impacting the production of multiple fruit crops, including soft and stone fruits such as strawberries, raspberries and cherries. Effective control is challenging and reliant on integrated pest management which includes the use of an ever decreasing number of approved insecticides. New means to reduce the impact of this pest that can be integrated into control strategies are urgently required. In many production regions, including the UK, soft fruit are typically grown inside tunnels clad with polyethylene based materials. These can be modified to filter specific wavebands of light. We investigated whether targeted spectral modifications to cladding materials that disrupt insect vision could reduce the incidence of D. suzukii. We present a novel approach that starts from a neuroscientific investigation of insect sensory systems and ends with in-field testing of new cladding materials inspired by the biological data. We show D. suzukii are predominantly sensitive to wavelengths below 405 nm (ultraviolet) and above 565 nm (orange & red) and that targeted blocking of lower wavebands (up to 430 nm) using light restricting materials reduces pest populations up to 73% in field trials.

    @article{lincoln42446,
    volume = {10},
    number = {1},
    month = {September},
    author = {Michelle T. Fountain and Amir Badiee and Sebastian Hemer and Alvaro Delgado and Michael Mangan and Colin Dowding and Frederick Davis and Simon Pearson},
    title = {The use of light spectrum blocking films to reduce populations of Drosophila suzukii Matsumura in fruit crops},
    publisher = {Nature Publishing Group},
    year = {2020},
    journal = {Scientific Reports},
    doi = {10.1038/s41598-020-72074-8},
    url = {https://eprints.lincoln.ac.uk/id/eprint/42446/},
    abstract = {Spotted wing drosophila, Drosophila suzukii, is a serious invasive pest impacting the production of multiple fruit crops, including soft and stone fruits such as strawberries, raspberries and cherries. Effective control is challenging and reliant on integrated pest management which includes the use of an ever decreasing number of approved insecticides. New means to reduce the impact of this pest that can be integrated into control strategies are urgently required. In many production regions, including the UK, soft fruit are typically grown inside tunnels clad with polyethylene based materials. These can be modified to filter specific wavebands of light. We investigated whether targeted spectral modifications to cladding materials that disrupt insect vision could reduce the incidence of D. suzukii. We present a novel approach that starts from a neuroscientific investigation of insect sensory systems and ends with in-field testing of new cladding materials inspired by the biological data. We show D. suzukii are predominantly sensitive to wavelengths below 405 nm (ultraviolet) and above 565 nm (orange \& red) and that targeted blocking of lower wavebands (up to 430 nm) using light restricting materials reduces pest populations up to 73\% in field trials.}
    }
  • F. Del Duchetto, P. Baxter, and M. Hanheide, “Are you still with me? continuous engagement assessment from a robot’s point of view,” Frontiers in robotics and ai, vol. 7, iss. 116, 2020. doi:10.3389/frobt.2020.00116
    [BibTeX] [Abstract] [Download PDF]

    Continuously measuring the engagement of users with a robot in a Human-Robot Interaction (HRI) setting paves the way toward in-situ reinforcement learning and improved metrics of interaction quality, and can guide interaction design and behavior optimization. However, engagement is often considered very multi-faceted and difficult to capture in a workable and generic computational model that can serve as an overall measure of engagement. Building upon the intuitive ways humans can successfully assess a situation for its degree of engagement when they see it, we propose a novel regression model (utilizing CNN and LSTM networks) enabling robots to compute a single scalar engagement during interactions with humans from standard video streams, obtained from the point of view of an interacting robot. The model is based on a long-term dataset from an autonomous tour guide robot deployed in a public museum, with continuous annotation of a numeric engagement assessment by three independent coders. We show that this model not only can predict engagement very well in our own application domain but also transfers successfully to an entirely different dataset (with different tasks, environment, camera, robot and people). The trained model and the software are available to the HRI community, at https://github.com/LCAS/engagement_detector, as a tool to measure engagement in a variety of settings.

    @article{lincoln42433,
    volume = {7},
    number = {116},
    month = {September},
    author = {Francesco Del Duchetto and Paul Baxter and Marc Hanheide},
    title = {Are You Still With Me? Continuous Engagement Assessment From a Robot's Point of View},
    publisher = {Frontiers Media S.A.},
    year = {2020},
    journal = {Frontiers in Robotics and AI},
    doi = {10.3389/frobt.2020.00116},
    url = {https://eprints.lincoln.ac.uk/id/eprint/42433/},
    abstract = {Continuously measuring the engagement of users with a robot in a Human-Robot Interaction (HRI) setting paves the way toward in-situ reinforcement learning and improved metrics of interaction quality, and can guide interaction design and behavior optimization. However, engagement is often considered very multi-faceted and difficult to capture in a workable and generic computational model that can serve as an overall measure of engagement. Building upon the intuitive ways humans can successfully assess a situation for its degree of engagement when they see it, we propose a novel regression model (utilizing CNN and LSTM networks) enabling robots to compute a single scalar engagement during interactions with humans from standard video streams, obtained from the point of view of an interacting robot. The model is based on a long-term dataset from an autonomous tour guide robot deployed in a public museum, with continuous annotation of a numeric engagement assessment by three independent coders. We show that this model not only can predict engagement very well in our own application domain but also transfers successfully to an entirely different dataset (with different tasks, environment, camera, robot and people). The trained model and the software are available to the HRI community, at https://github.com/LCAS/engagement\_detector, as a tool to measure engagement in a variety of settings.}
    }
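
    Since the model family above is a common pattern (a per-frame CNN feeding an LSTM that regresses a scalar engagement value per frame), a compact PyTorch sketch is given below. This is not the code released at https://github.com/LCAS/engagement_detector; the backbone, layer sizes and the sigmoid output range are assumptions made for illustration.

    import torch
    import torch.nn as nn

    class EngagementRegressor(nn.Module):
        """Per-frame CNN features -> LSTM -> one engagement scalar per frame."""
        def __init__(self, feat_dim=128, hidden=64):
            super().__init__()
            self.cnn = nn.Sequential(                    # tiny stand-in backbone
                nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
                nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, feat_dim), nn.ReLU(),
            )
            self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, frames):                       # frames: (B, T, 3, H, W)
            b, t = frames.shape[:2]
            feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
            out, _ = self.lstm(feats)                    # temporal context
            return torch.sigmoid(self.head(out)).squeeze(-1)  # (B, T) in [0, 1]

    clip = torch.rand(1, 8, 3, 64, 64)                   # one 8-frame RGB clip
    print(EngagementRegressor()(clip).shape)             # torch.Size([1, 8])
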
  • D. Bochtis, L. Benos, M. Lampridi, V. Marinoudi, S. Pearson, and C. G. Sørensen, “Agricultural workforce crisis in light of the covid-19 pandemic,” Sustainability, vol. 12, iss. 19, p. 8212, 2020. doi:10.3390/su12198212
    [BibTeX] [Abstract] [Download PDF]

    COVID-19 and the restrictive measures towards containing the spread of its infections have seriously affected the agricultural workforce and jeopardized food security. The present study aims at assessing the COVID-19 pandemic impacts on agricultural labor and suggesting strategies to mitigate them. To this end, after an introduction to the pandemic background, the negative consequences on agriculture and the existing mitigation policies, risks to the agricultural workers were benchmarked across the United States' Standard Occupational Classification system. The individual tasks associated with each occupation in agricultural production were evaluated on the basis of potential COVID-19 infection risk. As criteria, the most prevalent virus transmission mechanisms were considered, namely the possibility of touching contaminated surfaces and the close proximity of workers. The higher risk occupations within the sector were identified, which facilitates the allocation of worker protection resources to the occupations where they are most needed. In particular, the results demonstrated that 50% of the agricultural workforce and 54% of the workers' annual income are at moderate to high risk. As a consequence, a series of control measures need to be adopted so as to enhance the resilience and sustainability of the sector as well as protect farmers including physical distancing, hygiene practices, and personal protection equipment.

    @article{lincoln43697,
    volume = {12},
    number = {19},
    month = {October},
    author = {Dionysis Bochtis and Lefteris Benos and Maria Lampridi and Vasso Marinoudi and Simon Pearson and Claus G. S{\o}rensen},
    title = {Agricultural Workforce Crisis in Light of the COVID-19 Pandemic},
    year = {2020},
    journal = {Sustainability},
    doi = {10.3390/su12198212},
    pages = {8212},
    url = {https://eprints.lincoln.ac.uk/id/eprint/43697/},
    abstract = {COVID-19 and the restrictive measures towards containing the spread of its infections have seriously affected the agricultural workforce and jeopardized food security. The present study aims at assessing the COVID-19 pandemic impacts on agricultural labor and suggesting strategies to mitigate them. To this end, after an introduction to the pandemic background, the negative consequences on agriculture and the existing mitigation policies, risks to the agricultural workers were benchmarked across the United States' Standard Occupational Classification system. The individual tasks associated with each occupation in agricultural production were evaluated on the basis of potential COVID-19 infection risk. As criteria, the most prevalent virus transmission mechanisms were considered, namely the possibility of touching contaminated surfaces and the close proximity of workers. The higher risk occupations within the sector were identified, which facilitates the allocation of worker protection resources to the occupations where they are most needed. In particular, the results demonstrated that 50\% of the agricultural workforce and 54\% of the workers' annual income are at moderate to high risk. As a consequence, a series of control measures need to be adopted so as to enhance the resilience and sustainability of the sector as well as protect farmers including physical distancing, hygiene practices, and personal protection equipment.}
    }
  • C. Hu, C. Xiong, J. Peng, and S. Yue, “Coping with multiple visual motion cues under extremely constrained computation power of micro autonomous robots,” Ieee access, vol. 8, p. 159050–159066, 2020. doi:10.1109/ACCESS.2020.3016893
    [BibTeX] [Abstract] [Download PDF]

    The perception of different visual motion cues is crucial for autonomous mobile robots to react to or interact with the dynamic visual world. It is still a great challenge for a micro mobile robot to cope with dynamic environments due to the restricted computational resources and the limited functionalities of its visual systems. In this study, we propose a compound visual neural system to automatically extract and fuse different visual motion cues in real-time using the extremely constrained computation power of micro mobile robots. The proposed visual system contains multiple bio-inspired visual motion perceptive neurons each with a unique role, for example to extract collision visual cues, darker collision cue and directional motion cues. In the embedded system, these multiple visual neurons share a similar presynaptic network to minimise the consumption of computation resources. In the postsynaptic part of the system, visual cues pass results to corresponding action neurons using a lateral inhibition mechanism. The translational motion cues, which are identified by comparing pairs of directional cues, are given the highest priority, followed by the darker colliding cues and approaching cues. Systematic experiments with both virtual visual stimuli and real-world scenarios have been carried out to validate the system’s functionality and reliability. The proposed methods have demonstrated that (1) with extremely limited computation power, it is still possible for a micro mobile robot to extract multiple visual motion cues robustly in a complex dynamic environment; (2) the cues extracted can be fused with a laterally inhibited postsynaptic network, thus enabling the micro robots to respond effectively with different actions, according to different states, in real-time. The proposed embedded visual system has been modularised and can be easily implemented in other autonomous mobile platforms for real-time applications. The system could also be used by neurophysiologists to test new hypotheses pertaining to biological visual neural systems.

    @article{lincoln43658,
    volume = {8},
    month = {September},
    author = {Cheng Hu and Caihua Xiong and Jigen Peng and Shigang Yue},
    title = {Coping With Multiple Visual Motion Cues Under Extremely Constrained Computation Power of Micro Autonomous Robots},
    publisher = {IEEE},
    year = {2020},
    journal = {IEEE Access},
    doi = {10.1109/ACCESS.2020.3016893},
    pages = {159050--159066},
    url = {https://eprints.lincoln.ac.uk/id/eprint/43658/},
    abstract = {The perception of different visual motion cues is crucial for autonomous mobile robots to react to or interact with the dynamic visual world. It is still a great challenge for a micro mobile robot to cope with dynamic environments due to the restricted computational resources and the limited functionalities of its visual systems. In this study, we propose a compound visual neural system to automatically extract and fuse different visual motion cues in real-time using the extremely constrained computation power of micro mobile robots. The proposed visual system contains multiple bio-inspired visual motion perceptive neurons each with a unique role, for example to extract collision visual cues, darker collision cue and directional motion cues. In the embedded system, these multiple visual neurons share a similar presynaptic network to minimise the consumption of computation resources. In the postsynaptic part of the system, visual cues pass results to corresponding action neurons using a lateral inhibition mechanism. The translational motion cues, which are identified by comparing pairs of directional cues, are given the highest priority, followed by the darker colliding cues and approaching cues. Systematic experiments with both virtual visual stimuli and real-world scenarios have been carried out to validate the system's functionality and reliability. The proposed methods have demonstrated that (1) with extremely limited computation power, it is still possible for a micro mobile robot to extract multiple visual motion cues robustly in a complex dynamic environment; (2) the cues extracted can be fused with a laterally inhibited postsynaptic network, thus enabling the micro robots to respond effectively with different actions, according to different states, in real-time. The proposed embedded visual system has been modularised and can be easily implemented in other autonomous mobile platforms for real-time applications. The system could also be used by neurophysiologists to test new hypotheses pertaining to biological visual neural systems.}
    }
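
    The fixed cue priority described above (translational cues first, then darker colliding cues, then approaching cues) can be illustrated with a few lines of feed-forward lateral inhibition, where each higher-priority activation suppresses the ones below it. This is a minimal Python sketch; the inhibition gain and decision threshold are invented for illustration and are not the paper's parameters.

    def arbitrate(translating, darker_collision, approaching, w=1.0, thresh=0.5):
        """Pick an action from three cue activations in [0, 1] via lateral inhibition."""
        darker_collision = max(0.0, darker_collision - w * translating)
        approaching = max(0.0, approaching - w * (translating + darker_collision))
        cues = {"track_translation": translating,
                "avoid_dark_collision": darker_collision,
                "avoid_approach": approaching}
        winner = max(cues, key=cues.get)
        return winner if cues[winner] > thresh else "no_action"

    print(arbitrate(0.9, 0.8, 0.7))   # translational cue wins outright
    print(arbitrate(0.0, 0.8, 0.7))   # darker-collision cue suppresses approach
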
  • G. Bosworth, L. Price, M. Collison, and C. Fox, “Unequal futures of rural mobility: challenges for a ‘smart countryside’,” Local economy, vol. 35, iss. 6, p. 586–608, 2020. doi:10.1177/0269094220968231
    [BibTeX] [Abstract] [Download PDF]

    Current transport strategy in the UK is strongly urban-focused, with assumptions that technological advances in mobility will simply trickle down into rural areas. This paper challenges such a view and instead draws on rural development thinking aligned to a ‘Smart Countryside’ which emphasises the need for place-based approaches. Survey and interview methods are employed to develop a framework of rural needs associated with older people, younger people and businesses. This framework is employed to assess a range of mobility innovations that could most effectively address these needs in different rural contexts. In presenting visions of future rural mobility, the paper also identifies key infrastructure as well as institutional and financial changes that are required to facilitate the roll-out of new technologies across rural areas.

    @article{lincoln42612,
    volume = {35},
    number = {6},
    month = {September},
    author = {Gary Bosworth and Liz Price and Martin Collison and Charles Fox},
    title = {Unequal Futures of Rural Mobility: Challenges for a 'Smart Countryside'},
    publisher = {Sage},
    year = {2020},
    journal = {Local Economy},
    doi = {10.1177/0269094220968231},
    pages = {586--608},
    url = {https://eprints.lincoln.ac.uk/id/eprint/42612/},
    abstract = {Current transport strategy in the UK is strongly urban-focused, with assumptions that technological advances in mobility will simply trickle down into rural areas. This paper challenges such a view and instead draws on rural development thinking aligned to a 'Smart Countryside' which emphasises the need for place-based approaches. Survey and interview methods are employed to develop a framework of rural needs associated with older people, younger people and businesses. This framework is employed to assess a range of mobility innovations that could most effectively address these needs in different rural contexts. In presenting visions of future rural mobility, the paper also identifies key infrastructure as well as institutional and financial changes that are required to facilitate the roll-out of new technologies across rural areas.}
    }
  • P. Bosilj, I. Gould, T. Duckett, and G. Cielniak, “Estimating soil aggregate size distribution from images using pattern spectra,” Biosystems engineering, vol. 198, p. 63–77, 2020. doi:10.1016/j.biosystemseng.2020.07.012
    [BibTeX] [Abstract] [Download PDF]

    A method for quantifying aggregate size distribution from the images of soil samples is introduced. Knowledge of soil aggregate size distribution can help to inform soil management practices for the sustainable growth of crops. While current in-field approaches are mostly subjective, obtaining quantifiable results in a laboratory is labour- and time-intensive. Our goal is to develop an imaging technique for quantitative analysis of soil aggregate size distribution, which could provide the basis of a tool for rapid assessment of soil structure. The prediction accuracy of pattern spectra descriptors based on hierarchical representations from attribute morphology are analysed, as well as the impact of using images of different quality and scales. The method is able to handle greater sample complexity than the previous approaches, while working with smaller sample sizes that are easier to handle. The results show promise for size analysis of soils with larger structures, and minimal sample preparation, as typical of soil assessment in agriculture.

    @article{lincoln42179,
    volume = {198},
    month = {October},
    author = {Petra Bosilj and Iain Gould and Tom Duckett and Grzegorz Cielniak},
    title = {Estimating soil aggregate size distribution from images using pattern spectra},
    publisher = {Elsevier},
    year = {2020},
    journal = {Biosystems Engineering},
    doi = {10.1016/j.biosystemseng.2020.07.012},
    pages = {63--77},
    url = {https://eprints.lincoln.ac.uk/id/eprint/42179/},
    abstract = {A method for quantifying aggregate size distribution from the images of soil samples is introduced. Knowledge of soil aggregate size distribution can help to inform soil management practices for the sustainable growth of crops. While current in-field approaches are mostly subjective, obtaining quantifiable results in a laboratory is labour- and time-intensive. Our goal is to develop an imaging technique for quantitative analysis of soil aggregate size distribution, which could provide the basis of a tool for rapid assessment of soil structure. The prediction accuracy of pattern spectra descriptors based on hierarchical representations from attribute morphology are analysed, as well as the impact of using images of different quality and scales. The method is able to handle greater sample complexity than the previous approaches, while working with smaller sample sizes that are easier to handle. The results show promise for size analysis of soils with larger structures, and minimal sample preparation, as typical of soil assessment in agriculture.}
    }
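
    For readers unfamiliar with pattern spectra, the classic opening-based granulometry below conveys the idea: the image "mass" removed between openings at successive scales forms a size distribution. The paper computes pattern spectra from attribute morphology on hierarchical representations, which this plain scipy sketch only approximates; the structuring-element sizes and the random stand-in image are arbitrary.

    import numpy as np
    from scipy import ndimage

    def pattern_spectrum(image, sizes):
        """Image mass removed between greyscale openings at consecutive scales."""
        volumes = [ndimage.grey_opening(image, size=(s, s)).sum() for s in sizes]
        return -np.diff(volumes)

    rng = np.random.default_rng(1)
    soil = rng.random((128, 128))           # stand-in for a soil sample image
    print(pattern_spectrum(soil, sizes=[1, 3, 5, 9, 15, 25]))
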
  • G. Picardi, C. Borrelli, A. Sarti, G. Chimienti, and M. Calisti, “A minimal metric for the characterization of acoustic noise emitted by underwater vehicles,” Sensors, vol. 20, iss. 22, p. 6644, 2020. doi:10.3390/s20226644
    [BibTeX] [Abstract] [Download PDF]

    Underwater robots emit sound during operations which can deteriorate the quality of acoustic data recorded by on-board sensors or disturb marine fauna during in vivo observations. Notwithstanding this, there have only been a few attempts at characterizing the acoustic emissions of underwater robots in the literature, and the datasheets of commercially available devices do not report information on this topic. This work has a twofold goal. First, we identified a setup consisting of a camera directly mounted on the robot structure to acquire the acoustic data and two indicators (i.e., spectral roll-off point and noise introduced to the environment) to provide a simple and intuitive characterization of the acoustic emissions of underwater robots carrying out specific maneuvers in specific environments. Second, we performed the proposed analysis on three underwater robots belonging to the classes of remotely operated vehicles and underwater legged robots. Our results showed how the legged device produced a clearly different signature compared to remotely operated vehicles which can be an advantage in operations that require low acoustic disturbance. Finally, we argue that the proposed indicators, obtained through a standardized procedure, may be a useful addition to datasheets of existing underwater robots.

    @article{lincoln46141,
    volume = {20},
    number = {22},
    month = {November},
    author = {Giacomo Picardi and Clara Borrelli and Augusto Sarti and Giovanni Chimienti and Marcello Calisti},
    title = {A Minimal Metric for the Characterization of Acoustic Noise Emitted by Underwater Vehicles},
    year = {2020},
    journal = {Sensors},
    doi = {10.3390/s20226644},
    pages = {6644},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46141/},
    abstract = {Underwater robots emit sound during operations which can deteriorate the quality of acoustic data recorded by on-board sensors or disturb marine fauna during in vivo observations. Notwithstanding this, there have only been a few attempts at characterizing the acoustic emissions of underwater robots in the literature, and the datasheets of commercially available devices do not report information on this topic. This work has a twofold goal. First, we identified a setup consisting of a camera directly mounted on the robot structure to acquire the acoustic data and two indicators (i.e., spectral roll-off point and noise introduced to the environment) to provide a simple and intuitive characterization of the acoustic emissions of underwater robots carrying out specific maneuvers in specific environments. Second, we performed the proposed analysis on three underwater robots belonging to the classes of remotely operated vehicles and underwater legged robots. Our results showed how the legged device produced a clearly different signature compared to remotely operated vehicles which can be an advantage in operations that require low acoustic disturbance. Finally, we argue that the proposed indicators, obtained through a standardized procedure, may be a useful addition to datasheets of existing underwater robots.}
    }
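
    The first of the two indicators above, the spectral roll-off point, is a standard signal measure: the frequency below which a chosen fraction of the spectral energy lies. A minimal numpy version follows; the 95% fraction, the sample rate and the synthetic test signal are assumptions, since the paper's exact settings are not reproduced here.

    import numpy as np

    def spectral_rolloff(signal, fs, fraction=0.95):
        """Frequency below which `fraction` of the spectral energy lies."""
        power = np.abs(np.fft.rfft(signal)) ** 2
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
        cumulative = np.cumsum(power)
        return freqs[np.searchsorted(cumulative, fraction * cumulative[-1])]

    fs = 48_000                             # sample rate in Hz (assumed)
    t = np.arange(fs) / fs                  # one second of audio
    tone = np.sin(2 * np.pi * 200 * t)      # low-frequency "thruster hum" stand-in
    print(f"roll-off: {spectral_rolloff(tone, fs):.0f} Hz")
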
  • G. Canal, R. Borgo, A. Coles, A. Drake, D. Huynh, P. Keller, S. Krivić, P. Luff, Q. Mahesar, L. Moreau, S. Parsons, M. Patel, and E. Sklar, “Building trust in human-machine partnerships,” Computer law & security review, vol. 39, p. 105489, 2020. doi:10.1016/j.clsr.2020.105489
    [BibTeX] [Abstract] [Download PDF]

    Artificial Intelligence (AI) is bringing radical change to our lives. Fostering trust in this technology requires the technology to be transparent, and one route to transparency is to make the decisions that are reached by AIs explainable to the humans that interact with them. This paper lays out an exploratory approach to developing explainability and trust, describing the specific technologies that we are adopting, the social and organizational context in which we are working, and some of the challenges that we are addressing.

    @article{lincoln43255,
    volume = {39},
    month = {November},
    author = {Gerard Canal and Rita Borgo and Andrew Coles and Archie Drake and Dong Huynh and Perry Keller and Senka Krivi{\'c} and Paul Luff and Quratul-ain Mahesar and Luc Moreau and Simon Parsons and Menisha Patel and Elizabeth Sklar},
    title = {Building Trust in Human-Machine Partnerships},
    journal = {Computer Law \& Security Review},
    doi = {10.1016/j.clsr.2020.105489},
    pages = {105489},
    year = {2020},
    url = {https://eprints.lincoln.ac.uk/id/eprint/43255/},
    abstract = {Artificial Intelligence (AI) is bringing radical change to our lives. Fostering trust in this technology requires the technology to be transparent, and one route to transparency is to make the decisions that are reached by AIs explainable to the humans that interact with them. This paper lays out an exploratory approach to developing explainability and trust, describing the specific technologies that we are adopting, the social and organizational context in which we are working, and some of the challenges that we are addressing.}
    }
  • Q. Fu and S. Yue, “Modelling drosophila motion vision pathways for decoding the direction of translating objects against cluttered moving backgrounds,” Biological cybernetics, vol. 114, p. 443–460, 2020. doi:10.1007/s00422-020-00841-x
    [BibTeX] [Abstract] [Download PDF]

    Decoding the direction of translating objects in front of cluttered moving backgrounds, accurately and efficiently, is still a challenging problem. In nature, lightweight and low-powered flying insects apply motion vision to detect a moving target in highly variable environments during flight, which are excellent paradigms to learn motion perception strategies. This paper investigates the fruit fly Drosophila motion vision pathways and presents computational modelling based on cutting-edge physiological researches. The proposed visual system model features bio-plausible ON and OFF pathways, wide-field horizontal-sensitive (HS) and vertical-sensitive (VS) systems. The main contributions of this research are on two aspects: 1) the proposed model articulates the forming of both direction-selective (DS) and direction-opponent (DO) responses revealed as principal features of motion perception neural circuits, in a feed-forward manner; 2) it also shows robust direction selectivity to translating objects in front of cluttered moving backgrounds, via the modelling of spatiotemporal dynamics including combination of motion pre-filtering mechanisms and ensembles of local correlators inside both the ON and OFF pathways, which works effectively to suppress irrelevant background motion or distractors, and to improve the dynamic response. Accordingly, the direction of translating objects is decoded as global responses of both the HS and VS systems with positive or negative output indicating preferred-direction (PD) or null-direction (ND) translation. The experiments have verified the effectiveness of the proposed neural system model, and demonstrated its responsive preference to faster-moving, higher-contrast and larger-size targets embedded in cluttered moving backgrounds.

    @article{lincoln46870,
    volume = {114},
    month = {October},
    author = {Qinbing Fu and Shigang Yue},
    title = {Modelling Drosophila motion vision pathways for decoding the direction of translating objects against cluttered moving backgrounds},
    publisher = {Springer},
    year = {2020},
    journal = {Biological Cybernetics},
    doi = {10.1007/s00422-020-00841-x},
    pages = {443--460},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46870/},
    abstract = {Decoding the direction of translating objects in front of cluttered moving backgrounds, accurately and efficiently, is still a challenging problem. In nature, lightweight and low-powered flying insects apply motion vision to detect a moving target in highly variable environments during flight, which are excellent paradigms to learn motion perception strategies. This paper investigates the fruit fly Drosophila motion vision pathways and presents computational modelling based on cutting-edge physiological researches. The proposed visual system model features bio-plausible ON and OFF pathways, wide-field horizontal-sensitive (HS) and vertical-sensitive (VS) systems. The main contributions of this research are on two aspects: 1) the proposed model articulates the forming of both direction-selective (DS) and direction-opponent (DO) responses revealed as principal features of motion perception neural circuits, in a feed-forward manner; 2) it also shows robust direction selectivity to translating objects in front of cluttered moving backgrounds, via the modelling of spatiotemporal dynamics including combination of motion pre-filtering mechanisms and ensembles of local correlators inside both the ON and OFF pathways, which works effectively to suppress irrelevant background motion or distractors, and to improve the dynamic response. Accordingly, the direction of translating objects is decoded as global responses of both the HS and VS systems with positive or negative output indicating preferred-direction (PD) or null-direction (ND) translation. The experiments have verified the effectiveness of the proposed neural system model, and demonstrated its responsive preference to faster-moving, higher-contrast and larger-size targets embedded in cluttered moving backgrounds.}
    }
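
    The "ensembles of local correlators" in the model above are of the Hassenstein-Reichardt type, which the short numpy sketch below illustrates on a 1-D stimulus: each unit multiplies a delayed signal from one photoreceptor with the undelayed signal from its neighbour, and the opponent subtraction yields a signed, direction-selective output. The first-order low-pass delay and all parameters are illustrative, not the paper's full model.

    import numpy as np

    def emd_response(stimulus, tau=3.0):
        """Mean opponent Reichardt-correlator output; positive means rightward motion."""
        delayed = np.zeros_like(stimulus)
        for t in range(1, stimulus.shape[0]):            # low-pass acts as delay line
            delayed[t] = delayed[t - 1] + (stimulus[t] - delayed[t - 1]) / tau
        rightward = delayed[:, :-1] * stimulus[:, 1:]    # delayed left x current right
        leftward = stimulus[:, :-1] * delayed[:, 1:]
        return (rightward - leftward).mean()

    t, x = np.meshgrid(np.arange(200), np.arange(64), indexing="ij")
    grating = np.sin(0.3 * x - 0.2 * t)                  # drifts rightward over time
    print(emd_response(grating) > 0, emd_response(grating[:, ::-1]) < 0)
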
  • S. Cosar, M. Fernandez-Carmona, R. Agrigoroaie, J. Pages, F. Ferland, F. Zhao, S. Yue, N. Bellotto, and A. Tapus, “Enrichme: perception and interaction of an assistive robot for the elderly at home,” International journal of social robotics, vol. 12, iss. 3, p. 779–805, 2020. doi:10.1007/s12369-019-00614-y
    [BibTeX] [Abstract] [Download PDF]

    Recent technological advances enabled modern robots to become part of our daily life. In particular, assistive robotics emerged as an exciting research topic that can provide solutions to improve the quality of life of elderly and vulnerable people. This paper introduces the robotic platform developed in the ENRICHME project, with particular focus on its innovative perception and interaction capabilities. The project’s main goal is to enrich the day-to-day experience of elderly people at home with technologies that enable health monitoring, complementary care, and social support. The paper presents several modules created to provide cognitive stimulation services for elderly users with mild cognitive impairments. The ENRICHME robot was tested in three pilot sites around Europe (Poland, Greece, and UK) and proven to be an effective assistant for the elderly at home.

    @article{lincoln39037,
    volume = {12},
    number = {3},
    month = {July},
    author = {Serhan Cosar and Manuel Fernandez-Carmona and Roxana Agrigoroaie and Jordi Pages and Francois Ferland and Feng Zhao and Shigang Yue and Nicola Bellotto and Adriana Tapus},
    title = {ENRICHME: Perception and Interaction of an Assistive Robot for the Elderly at Home},
    publisher = {Springer},
    year = {2020},
    journal = {International Journal of Social Robotics},
    doi = {10.1007/s12369-019-00614-y},
    pages = {779--805},
    url = {https://eprints.lincoln.ac.uk/id/eprint/39037/},
    abstract = {Recent technological advances enabled modern robots to become part of our daily life. In particular, assistive robotics emerged as an exciting research topic that can provide solutions to improve the quality of life of elderly and vulnerable people. This paper introduces the robotic platform developed in the ENRICHME project, with particular focus on its innovative perception and interaction capabilities. The project's main goal is to enrich the day-to-day experience of elderly people at home with technologies that enable health monitoring, complementary care, and social support. The paper presents several modules created to provide cognitive stimulation services for elderly users with mild cognitive impairments. The ENRICHME robot was tested in three pilot sites around Europe (Poland, Greece, and UK) and proven to be an effective assistant for the elderly at home.}
    }
  • F. Camara, N. Bellotto, S. Cosar, F. Weber, D. Nathanael, M. Althoff, J. Wu, J. Ruenz, A. Dietrich, G. Markkula, A. Schieben, F. Tango, N. Merat, and C. Fox, “Pedestrian models for autonomous driving part ii: high-level models of human behavior,” Ieee transactions on intelligent transport systems, 2020. doi:10.1109/TITS.2020.3006767
    [BibTeX] [Abstract] [Download PDF]

    Autonomous vehicles (AVs) must share space with pedestrians, both in carriageway cases such as cars at pedestrian crossings and off-carriageway cases such as delivery vehicles navigating through crowds on pedestrianized high-streets. Unlike static obstacles, pedestrians are active agents with complex, interactive motions. Planning AV actions in the presence of pedestrians thus requires modelling of their probable future behaviour as well as detecting and tracking them. This narrative review article is Part II of a pair, together surveying the current technology stack involved in this process, organising recent research into a hierarchical taxonomy ranging from low-level image detection to high-level psychological models, from the perspective of an AV designer. This self-contained Part II covers the higher levels of this stack, consisting of models of pedestrian behaviour, from prediction of individual pedestrians’ likely destinations and paths, to game-theoretic models of interactions between pedestrians and autonomous vehicles. This survey clearly shows that, although there are good models for optimal walking behaviour, high-level psychological and social modelling of pedestrian behaviour still remains an open research question that requires many conceptual issues to be clarified. Early work has been done on descriptive and qualitative models of behaviour, but much work is still needed to translate them into quantitative algorithms for practical AV control.

    @article{lincoln41706,
    month = {July},
    title = {Pedestrian Models for Autonomous Driving Part II: High-Level Models of Human Behavior},
    author = {Fanta Camara and Nicola Bellotto and Serhan Cosar and Florian Weber and Dimitris Nathanael and Matthias Althoff and Jingyuan Wu and Johannes Ruenz and Andre Dietrich and Gustav Markkula and Anna Schieben and Fabio Tango and Natasha Merat and Charles Fox},
    publisher = {IEEE},
    year = {2020},
    doi = {10.1109/TITS.2020.3006767},
    journal = {IEEE Transactions on Intelligent Transport Systems},
    url = {https://eprints.lincoln.ac.uk/id/eprint/41706/},
    abstract = {Autonomous vehicles (AVs) must share space with pedestrians, both in carriageway cases such as cars at pedestrian crossings and off-carriageway cases such as delivery vehicles navigating through crowds on pedestrianized high-streets. Unlike static obstacles, pedestrians are active agents with complex, interactive motions. Planning AV actions in the presence of pedestrians thus requires modelling of their probable future behaviour as well as detecting and tracking them. This narrative review article is Part II of a pair, together surveying the current technology stack involved in this process, organising recent research into a hierarchical taxonomy ranging from low-level image detection to high-level psychological models, from the perspective of an AV designer. This self-contained Part II covers the higher levels of this stack, consisting of models of pedestrian behaviour, from prediction of individual pedestrians' likely destinations and paths, to game-theoretic models of interactions between pedestrians and autonomous vehicles. This survey clearly shows that, although there are good models for optimal walking behaviour, high-level psychological and social modelling of pedestrian behaviour still remains an open research question that requires many conceptual issues to be clarified. Early work has been done on descriptive and qualitative models of behaviour, but much work is still needed to translate them into quantitative algorithms for practical AV control.}
    }
  • A. Mohamed, C. Saaj, A. Seddaoui, and M. Nair, “Linear controllers for free-flying and controlled-floating space robots: a new perspective,” Aeronautics and aerospace open access journal, vol. 4, iss. 3, p. 97–114, 2020. doi:10.15406/aaoaj.2020.04.00112
    [BibTeX] [Abstract] [Download PDF]

    Autonomous space robots are crucial for performing future in-orbit operations, including servicing of a spacecraft, assembly of large structures, maintenance of other space assets and active debris removal. Such orbital missions require servicer spacecraft equipped with one or more dexterous manipulators. However, unlike its terrestrial counterpart, the base of the robotic manipulator is not fixed in inertial space; instead, it is mounted on the base-spacecraft, which itself possesses both translational and rotational motions. Additionally, the system will be subjected to extreme environmental perturbations, parametric uncertainties and system constraints due to the dynamic coupling between the manipulator and the base-spacecraft. This paper presents the dynamic model of the space robot and a three-stage control algorithm for this highly dynamic non-linear system. In this approach, feed-forward compensation and feed-forward linearization techniques are used to decouple and linearize the highly non-linear system respectively. This approach allows the use of the linear Proportional-Integral-Derivative (PID) controller and Linear Quadratic Regulator (LQR) in the final stages. Moreover, this paper covers a simulation-based trade-off analysis to determine both proposed linear controllers' efficacy. This assessment considers precise trajectory tracking requirements whilst minimizing power consumption and improving robustness during the close-range operation with the target spacecraft.

    @article{lincoln48336,
    volume = {4},
    number = {3},
    month = {July},
    author = {Amr Mohamed and Chakravarthini Saaj and Asma Seddaoui and Manu Nair},
    title = {Linear controllers for free-flying and controlled-floating space robots: a new perspective},
    publisher = {MedCrave Group},
    year = {2020},
    journal = {Aeronautics and Aerospace Open Access Journal},
    doi = {10.15406/aaoaj.2020.04.00112},
    pages = {97--114},
    url = {https://eprints.lincoln.ac.uk/id/eprint/48336/},
    abstract = {Autonomous space robots are crucial for performing future in-orbit operations, including servicing of a spacecraft, assembly of large structures, maintenance of other space assets and active debris removal. Such orbital missions require servicer spacecraft equipped with one or more dexterous manipulators. However, unlike its terrestrial counterpart, the base of the robotic manipulator is not fixed in inertial space; instead, it is mounted on the base-spacecraft, which itself possesses both translational and rotational motions. Additionally, the system will be subjected to extreme environmental perturbations, parametric uncertainties and system constraints due to the dynamic coupling between the manipulator and the base-spacecraft. This paper presents the dynamic model of the space robot and a three-stage control algorithm for this highly dynamic non-linear system. In this approach, feed-forward compensation and feed-forward linearization techniques are used to decouple and linearize the highly non-linear system respectively. This approach allows the use of the linear Proportional-Integral-Derivative (PID) controller and Linear Quadratic Regulator (LQR) in the final stages. Moreover, this paper covers a simulation-based trade-off analysis to determine both proposed linear controllers' efficacy. This assessment considers precise trajectory tracking requirements whilst minimizing power consumption and improving robustness during the close-range operation with the target spacecraft.}
    }
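
    After the decoupling and feed-forward linearization stages described above, each channel reduces to a linear plant, and the LQR gain follows from the continuous-time algebraic Riccati equation. Below is a minimal scipy sketch; treating the linearized channel as a double integrator, and the particular Q and R weights, are assumptions made for illustration.

    import numpy as np
    from scipy.linalg import solve_continuous_are

    # Linearized single-channel plant: states [position, velocity].
    A = np.array([[0.0, 1.0],
                  [0.0, 0.0]])
    B = np.array([[0.0],
                  [1.0]])
    Q = np.diag([10.0, 1.0])                # penalize position error most
    R = np.array([[0.1]])                   # control-effort weight

    P = solve_continuous_are(A, B, Q, R)    # Riccati solution
    K = np.linalg.solve(R, B.T @ P)         # optimal state feedback u = -K x
    print("LQR gain:", K)
    print("closed-loop poles:", np.linalg.eigvals(A - B @ K))
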
  • M. Chellapurath, S. Stefanni, G. Fiorito, A. M. Sabatini, C. Laschi, and M. Calisti, “Locomotory behaviour of the intertidal marble crab (pachygrapsus marmoratus) supports the underwater spring-loaded inverted pendulum as a fundamental model for punting in animals,” Bioinspiration & biomimetics, vol. 15, iss. 5, p. 55004, 2020. doi:10.1088/1748-3190/ab968c
    [BibTeX] [Abstract] [Download PDF]

    In aquatic pedestrian locomotion the dynamics of terrestrial and aquatic environments are coupled. Here we study terrestrial running and aquatic punting locomotion of the marine-living crab Pachygrapsus marmoratus. We detected both active and passive phases of running and punting through the observation of crab locomotory behaviour in standardized settings and by three-dimensional kinematic analysis of its dynamic gaits using high-speed video cameras. Variations in different stride parameters were studied and compared. The comparison was done based on the dimensionless parameter the Froude number (Fr) to account for the effect of buoyancy and size variability among the crabs. The underwater spring-loaded inverted pendulum (USLIP) model better fitted the dynamics of aquatic punting. USLIP takes account of the damping effect of the aquatic environment, a variable not considered by the spring-loaded inverted pendulum (SLIP) model in reduced gravity. Our results highlight the underlying principles of aquatic terrestrial locomotion by comparing it with terrestrial locomotion. Comparing punting with running, we show an increased stride period, decreased duty cycle and orientation of the carapace more inclined with the horizontal plane, indicating the significance of fluid forces on the dynamics due to the aquatic environment. Moreover, we discovered periodicity in punting locomotion of crabs and two different gaits, namely, long-flight punting and short-flight punting, distinguished by both footfall patterns and kinematic parameters. The generic fundamental model which belongs to all animals performing both terrestrial and aquatic legged locomotion has implications for control strategies, evolution and translation to robotic artefacts.

    @article{lincoln46139,
    volume = {15},
    number = {5},
    month = {July},
    author = {Mrudul Chellapurath and Sergio Stefanni and Graziano Fiorito and Angelo Maria Sabatini and Cecilia Laschi and Marcello Calisti},
    title = {Locomotory behaviour of the intertidal marble crab (Pachygrapsus marmoratus) supports the underwater spring-loaded inverted pendulum as a fundamental model for punting in animals},
    year = {2020},
    journal = {Bioinspiration \& Biomimetics},
    doi = {10.1088/1748-3190/ab968c},
    pages = {055004},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46139/},
    abstract = {In aquatic pedestrian locomotion the dynamics of terrestrial and aquatic environments are coupled. Here we study terrestrial running and aquatic punting locomotion of the marine-living crab Pachygrapsus marmoratus. We detected both active and passive phases of running and punting through the observation of crab locomotory behaviour in standardized settings and by three-dimensional kinematic analysis of its dynamic gaits using high-speed video cameras. Variations in different stride parameters were studied and compared. The comparison was done based on the dimensionless parameter the Froude number (Fr) to account for the effect of buoyancy and size variability among the crabs. The underwater spring-loaded inverted pendulum (USLIP) model better fitted the dynamics of aquatic punting. USLIP takes account of the damping effect of the aquatic environment, a variable not considered by the spring-loaded inverted pendulum (SLIP) model in reduced gravity. Our results highlight the underlying principles of aquatic terrestrial locomotion by comparing it with terrestrial locomotion. Comparing punting with running, we show an increased stride period, decreased duty cycle and orientation of the carapace more inclined with the horizontal plane, indicating the significance of fluid forces on the dynamics due to the aquatic environment. Moreover, we discovered periodicity in punting locomotion of crabs and two different gaits, namely, long-flight punting and short-flight punting, distinguished by both footfall patterns and kinematic parameters. The generic fundamental model which belongs to all animals performing both terrestrial and aquatic legged locomotion has implications for control strategies, evolution and translation to robotic artefacts.}
    }
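
    The USLIP model referenced above extends the stance phase of the spring-loaded inverted pendulum with a damping term for hydrodynamic losses, under buoyancy-reduced gravity. The sketch below integrates one stance phase of such a model in polar leg coordinates; the equations are the standard SLIP stance dynamics plus a linear leg damper, and every parameter value and the touchdown state are illustrative, not fitted to the crab data.

    import numpy as np
    from scipy.integrate import solve_ivp

    m, k, c = 0.2, 100.0, 0.5        # mass (kg), leg stiffness, leg damping (assumed)
    l0, g_eff = 0.05, 4.0            # rest leg length (m), buoyancy-reduced gravity

    def stance(t, y):
        """SLIP stance dynamics with a linear leg damper (USLIP-style)."""
        r, dr, th, dth = y           # leg length, its rate, leg angle, its rate
        ddr = r * dth**2 + (k * (l0 - r) - c * dr) / m - g_eff * np.cos(th)
        ddth = (g_eff * np.sin(th) - 2.0 * dr * dth) / r
        return [dr, ddr, dth, ddth]

    y0 = [l0, -0.1, np.deg2rad(-15.0), 2.0]   # assumed touchdown state
    sol = solve_ivp(stance, (0.0, 0.2), y0, max_step=1e-3)
    print(f"leg length range: {sol.y[0].min():.4f}-{sol.y[0].max():.4f} m")
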
  • I. J. Gould, I. Wright, M. Collison, E. Ruto, G. Bosworth, and S. Pearson, “The impact of coastal flooding on agriculture: a case study of lincolnshire, united kingdom,” Land degradation & development, vol. 31, iss. 12, p. 1545–1559, 2020. doi:10.1002/ldr.3551
    [BibTeX] [Abstract] [Download PDF]

    Under future climate predictions the incidence of coastal flooding is set to rise. Many coastal regions at risk, such as those surrounding the North Sea, comprise large areas of low-lying and productive agricultural land. Flood risk assessments typically emphasise the economic consequences of coastal flooding on urban areas and national infrastructure. Impacts on agricultural land have seen less attention, and considerations tend to omit the long term effects of soil salinity. The aim of this study is to develop a universal framework to evaluate the economic impact of coastal flooding to agriculture. We incorporated existing flood models, satellite acquired crop data, soil salinity and crop sensitivity to give a novel and detailed assessment of salt damage to agricultural productivity over time. We focussed our case study on low-lying, highly productive agricultural land with a history of flooding in Lincolnshire, UK. The potential impact of agricultural flood damage varied across our study region. Assuming typical cropping does not change post-flood, financial losses range from £1,366/ha to £5,526/ha per inundation; these losses would be reduced by between 35% and 85% in the likely event that an alternative, more salt-tolerant cropping regime is implemented post-flood. These losses are substantially higher than losses calculated on the same areas using the established flood risk assessment framework conventionally used for freshwater flood assessments, with differences attributed to our longer term salt damage projections impacting over several years. This suggests flood protection policy needs to consider the local and long-term impacts of flooding on agricultural land.

    @article{lincoln40049,
    volume = {31},
    number = {12},
    month = {July},
    author = {Iain J Gould and Isobel Wright and Martin Collison and Eric Ruto and Gary Bosworth and Simon Pearson},
    title = {The impact of coastal flooding on agriculture: a case study of Lincolnshire, United Kingdom},
    publisher = {Wiley},
    year = {2020},
    journal = {Land Degradation \& Development},
    doi = {10.1002/ldr.3551},
    pages = {1545--1559},
    url = {https://eprints.lincoln.ac.uk/id/eprint/40049/},
    abstract = {Under future climate predictions the incidence of coastal flooding is set to rise. Many coastal regions at risk, such as those surrounding the North Sea, comprise large areas of low-lying and productive agricultural land. Flood risk assessments typically emphasise the economic consequences of coastal flooding on urban areas and national infrastructure. Impacts on agricultural land have seen less attention, and considerations tend to omit the long term effects of soil salinity. The aim of this study is to develop a universal framework to evaluate the economic impact of coastal flooding to agriculture. We incorporated existing flood models, satellite acquired crop data, soil salinity and crop sensitivity to give a novel and detailed assessment of salt damage to agricultural productivity over time. We focussed our case study on low-lying, highly productive agricultural land with a history of flooding in Lincolnshire, UK. The potential impact of agricultural flood damage varied across our study region. Assuming typical cropping does not change post-flood, financial losses range from {\pounds}1,366/ha to {\pounds}5,526/ha per inundation; these losses would be reduced by between 35\% and 85\% in the likely event that an alternative, more salt-tolerant cropping regime is implemented post-flood. These losses are substantially higher than losses calculated on the same areas using the established flood risk assessment framework conventionally used for freshwater flood assessments, with differences attributed to our longer term salt damage projections impacting over several years. This suggests flood protection policy needs to consider the local and long-term impacts of flooding on agricultural land.}
    }
  • M. Al-Khafajiy, T. Baker, A. Hussien, and A. Cotgrave, “Uav and fog computing for ioe-based systems: a case study on environment disasters prediction and recovery plans,” in Unmanned aerial vehicles in smart cities, Springer, 2020, p. 133–152. doi:10.1007/978-3-030-38712-9_8
    [BibTeX] [Abstract] [Download PDF]

    In the past few years, the development and use of Internet of Everything (IoE)-based systems has grown exponentially. IoE-based systems bring together the power of embedded smart things (e.g., sensors and actuators), flying things (e.g., drones), and machine learning and data processing mediums (e.g., fog and edge computing) to create intelligent and powerful networked systems. These systems benefit various aspects of our modern smart cities, ranging from healthcare and smart homes to smart motorways, for example, via making informed decisions. In IoE-based systems, sensors sense the surrounding environment and return data for processing: unmanned aerial vehicles (UAVs) survey and scan areas that are difficult for human beings to reach (e.g., oceans and mountains), and machine learning algorithms are used to classify, interpret and learn from the data collected over fog and edge computing nodes. In fact, the integration of UAVs, fog computing and machine learning provides fast, cost-effective and safe deployments for many civil and military applications. While fog computing is a new network paradigm of distributed computing nodes at the edge of the network, fog extends the cloud's capability to the edge to provide better quality of service (QoS), and it is particularly suitable for applications that have strict requirements on latency and reliability. Also, fog computing has the advantage of providing support for mobility, location awareness, scalability and efficient integration with other systems such as cloud computing. Fog computing and UAVs are an integral part of the future information and communication technologies (ICT) that are able to achieve higher functionality, optimised resource utilisation and better management to improve both quality of service (QoS) and quality of experience (QoE). Systems that combine both of these technologies include natural disaster prediction systems, which could use fog-based algorithms to predict and warn of upcoming disaster threats, such as floods. The fog computing algorithms make decisions and predictions using data from both embedded sensors, such as environmental sensors, and flying things, such as live images and videos from UAVs.

    @incollection{lincoln47572,
    month = {April},
    author = {Mohammed Al-Khafajiy and Thar Baker and Aseel Hussien and Alison Cotgrave},
    booktitle = {Unmanned Aerial Vehicles in Smart Cities},
    title = {UAV and Fog Computing for IoE-Based Systems: A Case Study on Environment Disasters Prediction and Recovery Plans},
    publisher = {Springer},
    year = {2020},
    doi = {10.1007/978-3-030-38712-9\_8},
    pages = {133--152},
    url = {https://eprints.lincoln.ac.uk/id/eprint/47572/},
    abstract = {In the past few years, the development and use of Internet of Everything (IoE)-based systems has grown exponentially. IoE-based systems bring together the power of embedded smart things (e.g., sensors and actuators), flying things (e.g., drones), and machine learning and data processing mediums (e.g., fog and edge computing) to create intelligent and powerful networked systems. These systems benefit various aspects of our modern smart cities, ranging from healthcare and smart homes to smart motorways, for example, via making informed decisions. In IoE-based systems, sensors sense the surrounding environment and return data for processing: unmanned aerial vehicles (UAVs) survey and scan areas that are difficult for human beings to reach (e.g., oceans and mountains), and machine learning algorithms are used to classify, interpret and learn from the data collected over fog and edge computing nodes. In fact, the integration of UAVs, fog computing and machine learning provides fast, cost-effective and safe deployments for many civil and military applications. While fog computing is a new network paradigm of distributed computing nodes at the edge of the network, fog extends the cloud's capability to the edge to provide better quality of service (QoS), and it is particularly suitable for applications that have strict requirements on latency and reliability. Also, fog computing has the advantage of providing support for mobility, location awareness, scalability and efficient integration with other systems such as cloud computing. Fog computing and UAVs are an integral part of the future information and communication technologies (ICT) that are able to achieve higher functionality, optimised resource utilisation and better management to improve both quality of service (QoS) and quality of experience (QoE). Systems that combine both of these technologies include natural disaster prediction systems, which could use fog-based algorithms to predict and warn of upcoming disaster threats, such as floods. The fog computing algorithms make decisions and predictions using data from both embedded sensors, such as environmental sensors, and flying things, such as live images and videos from UAVs.}
    }
  • F. Camara, S. Cosar, N. Bellotto, N. Merat, and C. Fox, “Continuous game theory pedestrian modelling method for autonomous vehicles,” in Human factors in intelligent vehicles, C. Olaverri-Monreal, F. García-Fernández, and R. J. F. Rossetti, Eds., River publishers, 2020.
    [BibTeX] [Abstract] [Download PDF]

    Autonomous Vehicles (AVs) must interact with other road users. They must understand and adapt to complex pedestrian behaviour, especially during crossings where priority is not clearly defined. This includes feedback effects such as modelling a pedestrian's likely behaviours resulting from changes in the AV's behaviour. For example, whether a pedestrian will yield if the AV accelerates, and vice versa. To enable such automated interactions, it is necessary for the AV to possess a statistical model of the pedestrian's responses to its own actions. A previous work demonstrated a proof-of-concept method to fit parameters to a simplified model based on data from a highly artificial discrete laboratory task with human subjects. The method was based on LIDAR-based person tracking, game theory, and Gaussian process analysis. The present study extends this method to enable analysis of more realistic continuous human experimental data. It shows for the first time how game-theoretic predictive parameters can be fit to pedestrians' natural and continuous motion during road-crossings, and how predictions can be made about their interactions with AV controllers in similar real-world settings.
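
    The chapter's pipeline (LIDAR tracking, game theory, Gaussian process analysis) is not reproduced here, but the Gaussian-process step can be caricatured on synthetic data. The mapping below, from AV approach speed to a pedestrian crossing delay, and all of its numbers are hypothetical illustrations, not the chapter's variables.

    # Gaussian-process fit on synthetic interaction data. The quantity
    # modelled (crossing delay vs AV speed) is a hypothetical stand-in.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)
    av_speed = rng.uniform(1.0, 8.0, size=(40, 1))                 # m/s
    delay = 0.5 + 0.3 * av_speed[:, 0] + rng.normal(0.0, 0.1, 40)  # s

    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gp.fit(av_speed, delay)
    mean, std = gp.predict([[5.0]], return_std=True)
    print(f"predicted crossing delay at 5 m/s: {mean[0]:.2f} +/- {std[0]:.2f} s")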

    @incollection{lincoln42872,
    month = {October},
    author = {Fanta Camara and Serhan Cosar and Nicola Bellotto and Natasha Merat and Charles Fox},
    series = {River Publishers Series in Transport Technology},
    booktitle = {Human Factors in Intelligent Vehicles},
    editor = {Cristina Olaverri-Monreal and Fernando Garc{\'i}a-Fern{\'a}ndez and Rosaldo J. F. Rossetti},
    title = {Continuous Game Theory Pedestrian Modelling Method for Autonomous Vehicles},
    publisher = {River Publishers},
    year = {2020},
    url = {https://eprints.lincoln.ac.uk/id/eprint/42872/},
    abstract = {Autonomous Vehicles (AVs) must interact with other road users. They must understand and adapt to complex pedestrian behaviour, especially during crossings where priority is not clearly defined. This includes feedback effects such as modelling a pedestrian's likely behaviours resulting from changes in the AV's behaviour. For example, whether a pedestrian will yield if the AV accelerates, and vice versa. To enable such automated interactions, it is necessary for the AV to possess a statistical model of the pedestrian's responses to its own actions. A previous work demonstrated a proof-of-concept method to fit parameters to a simplified model based on data from a highly artificial discrete laboratory task with human subjects. The method was based on LIDAR-based person tracking, game theory, and Gaussian process analysis. The present study extends this method to enable analysis of more realistic continuous human experimental data. It shows for the first time how game-theoretic predictive parameters can be fit to pedestrians' natural and continuous motion during road-crossings, and how predictions can be made about their interactions with AV controllers in similar real-world settings.}
    }
  • Q. Fu and S. Yue, “Complementary visual neuronal systems model for collision sensing,” in The ieee international conference on advanced robotics and mechatronics (arm), 2020. doi:10.1109/ICARM49381.2020.9195303
    [BibTeX] [Abstract] [Download PDF]

    Inspired by insects' visual brains, this paper presents original modelling of a complementary visual neuronal systems model for real-time and robust collision sensing. Two categories of wide-field motion sensitive neurons, i.e., the lobula giant movement detectors (LGMDs) in locusts and the lobula plate tangential cells (LPTCs) in flies, have been studied intensively. The LGMDs have specific selectivity to approaching objects in depth that threaten collision, whilst the LPTCs are only sensitive to translating objects in horizontal and vertical directions. Though each has been modelled and applied in various visual scenes including robot scenarios, little has been done on investigating their complementary functionality and selectivity when functioning together. To fill this vacancy, we introduce a hybrid model combining two LGMDs (LGMD1 and LGMD2) with horizontally (rightward and leftward) sensitive LPTCs (LPTC-R and LPTC-L) specialising in fast collision perception. With coordination and competition between different activated neurons, the proximity feature from frontal approaching stimuli can be largely sharpened up by suppressing translating and receding motions. The proposed method has been implemented in ground micro-mobile robots as embedded systems. The multi-robot experiments have demonstrated the effectiveness and robustness of the proposed model for frontal collision sensing, which outperforms previous single-type neuron computation methods against translating interference.
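
    The coordination-and-competition idea can be caricatured in a few lines: a looming (LGMD-like) response survives only when it is not explained away by a translation (LPTC-like) response. This is an illustrative reduction, not the authors' model; the scalar activations, gain and threshold are hypothetical.

    # Toy competition between a looming detector and two translation
    # detectors. All activations are hypothetical scalars in [0, 1].
    def collision_alert(lgmd, lptc_left, lptc_right, gain=0.8, threshold=0.5):
        """Suppress the looming response by the strongest translation cue."""
        translation = max(lptc_left, lptc_right)
        response = max(0.0, lgmd - gain * translation)
        return response > threshold, response

    print(collision_alert(lgmd=0.9, lptc_left=0.1, lptc_right=0.1))  # approach
    print(collision_alert(lgmd=0.7, lptc_left=0.9, lptc_right=0.0))  # translation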

    @inproceedings{lincoln42134,
    booktitle = {The IEEE International Conference on Advanced Robotics and Mechatronics (ARM)},
    month = {December},
    title = {Complementary Visual Neuronal Systems Model for Collision Sensing},
    author = {Qinbing Fu and Shigang Yue},
    year = {2020},
    doi = {10.1109/ICARM49381.2020.9195303},
    url = {https://eprints.lincoln.ac.uk/id/eprint/42134/},
    abstract = {Inspired by insects' visual brains, this paper presents original modelling of a complementary visual neuronal systems model for real-time and robust collision sensing. Two categories of wide-field motion sensitive neurons, i.e., the lobula giant movement detectors (LGMDs) in locusts and the lobula plate tangential cells (LPTCs) in flies, have been studied intensively. The LGMDs have specific selectivity to approaching objects in depth that threaten collision, whilst the LPTCs are only sensitive to translating objects in horizontal and vertical directions. Though each has been modelled and applied in various visual scenes including robot scenarios, little has been done on investigating their complementary functionality and selectivity when functioning together. To fill this vacancy, we introduce a hybrid model combining two LGMDs (LGMD1 and LGMD2) with horizontally (rightward and leftward) sensitive LPTCs (LPTC-R and LPTC-L) specialising in fast collision perception. With coordination and competition between different activated neurons, the proximity feature from frontal approaching stimuli can be largely sharpened up by suppressing translating and receding motions. The proposed method has been implemented in ground micro-mobile robots as embedded systems. The multi-robot experiments have demonstrated the effectiveness and robustness of the proposed model for frontal collision sensing, which outperforms previous single-type neuron computation methods against translating interference.}
    }
  • N. Mavrakis, R. Stolkin, and A. G. Esfahani, “Estimating an object's inertial parameters by robotic pushing: a data-driven approach,” in The ieee/rsj international conference on intelligent robots and systems (iros), 2020, p. 9537–9544. doi:10.1109/IROS45743.2020.9341112
    [BibTeX] [Abstract] [Download PDF]

    Estimating the inertial properties of an object can make robotic manipulations more efficient, especially in extreme environments. This paper presents a novel method of estimating the 2D inertial parameters of an object by having a robot apply a push to it. We draw inspiration from previous analyses of quasi-static pushing mechanics, and introduce a data-driven model that can accurately represent these mechanics and provide a prediction for the object's inertial parameters. We evaluate the model with two datasets. For the first dataset, we set up a V-REP simulation of seven robots pushing objects with a large range of inertial parameters, acquiring 48000 pushes in total. For the second dataset, we use the object pushes from the MIT M-Cube lab pushing dataset. We extract features from force, moment and velocity measurements of the pushes, and train a Multi-Output Regression Random Forest. The experimental results show that we can accurately predict the 2D inertial parameters from a single push, and that our method retains this robust performance under various surface types.
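
    A minimal sketch of the regression stage, assuming synthetic push features (force, moment and velocity statistics) and 2D inertial targets; the feature and target layouts are hypothetical, not the paper's exact encoding.

    # Multi-output random-forest regression from push features to
    # 2D inertial parameters. Shapes and semantics are hypothetical.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.multioutput import MultiOutputRegressor

    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 12))   # force/moment/velocity features per push
    y = rng.uniform(size=(500, 4))   # e.g. mass, com_x, com_y, inertia

    model = MultiOutputRegressor(RandomForestRegressor(n_estimators=100,
                                                       random_state=1))
    model.fit(X, y)
    print(model.predict(X[:1]))      # inertial estimate from a single push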

    @inproceedings{lincoln42213,
    month = {October},
    author = {Nikos Mavrakis and Rustam Stolkin and Amir Ghalamzan Esfahani},
    booktitle = {The IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
    title = {Estimating An Object's Inertial Parameters By Robotic Pushing: A Data-Driven Approach},
    doi = {10.1109/IROS45743.2020.9341112},
    pages = {9537--9544},
    year = {2020},
    url = {https://eprints.lincoln.ac.uk/id/eprint/42213/},
    abstract = {Estimating the inertial properties of an object can make robotic manipulations more efficient, especially in extreme environments. This paper presents a novel method of estimating the 2D inertial parameters of an object by having a robot apply a push to it. We draw inspiration from previous analyses of quasi-static pushing mechanics, and introduce a data-driven model that can accurately represent these mechanics and provide a prediction for the object's inertial parameters. We evaluate the model with two datasets. For the first dataset, we set up a V-REP simulation of seven robots pushing objects with a large range of inertial parameters, acquiring 48000 pushes in total. For the second dataset, we use the object pushes from the MIT M-Cube lab pushing dataset. We extract features from force, moment and velocity measurements of the pushes, and train a Multi-Output Regression Random Forest. The experimental results show that we can accurately predict the 2D inertial parameters from a single push, and that our method retains this robust performance under various surface types.}
    }
  • J. L. Louedec, B. Li, and G. Cielniak, “Evaluation of 3d vision systems for detection of small objects in agricultural environments,” in The 15th international joint conference on computer vision, imaging and computer graphics theory and applications, 2020. doi:10.5220/0009182806820689
    [BibTeX] [Abstract] [Download PDF]

    3D information provides unique information about shape, localisation and relations between objects, not found in standard 2D images. This information would be very beneficial in a large number of applications in agriculture such as fruit picking, yield monitoring, forecasting and phenotyping. In this paper, we conducted a study on the application of modern 3D sensing technology together with the state-of-the-art machine learning algorithms for segmentation and detection of strawberries growing in real farms. We evaluate the performance of two state-of-the-art 3D sensing technologies and showcase the differences between 2D and 3D networks trained on the images and point clouds of strawberry plants and fruit. Our study highlights limitations of the current 3D vision systems for the detection of small objects in outdoor applications and sets out foundations for future work on 3D perception for challenging outdoor applications such as agriculture.

    @inproceedings{lincoln40456,
    booktitle = {The 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications},
    month = {February},
    title = {Evaluation of 3D Vision Systems for Detection of Small Objects in Agricultural Environments},
    author = {Justin Le Louedec and Bo Li and Grzegorz Cielniak},
    publisher = {SciTePress},
    year = {2020},
    doi = {10.5220/0009182806820689},
    url = {https://eprints.lincoln.ac.uk/id/eprint/40456/},
    abstract = {3D information provides unique information about shape, localisation and relations between objects, not found in standard 2D images. This information would be very beneficial in a large number of applications in agriculture such as fruit picking, yield monitoring, forecasting and phenotyping. In this paper, we conducted a study on the application of modern 3D sensing technology together with the state-of-the-art machine learning algorithms for segmentation and detection of strawberries growing in real farms. We evaluate the performance of two state-of-the-art 3D sensing technologies and showcase the differences between 2D and 3D networks trained on the images and point clouds of strawberry plants and fruit. Our study highlights limitations of the current 3D vision systems for the detection of small objects in outdoor applications and sets out foundations for future work on 3D perception for challenging outdoor applications such as agriculture.}
    }
  • M. H. Nair, C. M. Saaj, and A. G. Esfahani, “On robotic in-orbit assembly of large aperture space telescopes,” in Proc. ieee/rsj international conference on intelligent robots and systems (iros), 2020.
    [BibTeX] [Abstract] [Download PDF]

    Space has found itself amidst numerous missions benefitting life on Earth and enabling mankind to explore further. The space community has been launching various on-orbit missions, tackling the extremities of the space environment with the use of robots, for performing tasks like assembly, maintenance and repairs. The urge to explore further into the universe for scientific benefit has driven the rise of modular Large-Space Telescopes (LASTs). With respect to the challenges of the in-space assembly of a LAST, a five Degrees-of-Freedom (DoF) End-Over-End Walking Robot (E-Walker) is presented in this paper. The dynamical model and gait pattern of the E-Walker are discussed with reference to the different phases of its motion. For the initial verification of the E-Walker model, a PID controller was used to make the E-Walker follow the desired trajectory. A mission concept discussing a potential strategy for assembling a 25m LAST with 342 Primary Mirror Units (PMUs) is briefly discussed. Simulation results show that precise tracking of the E-Walker along a desired trajectory is achieved without exceeding the joint torques.
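
    A one-joint caricature of the PID trajectory-tracking step used for the initial verification; the plant, gains and reference trajectory below are hypothetical, not the E-Walker dynamics.

    # Discrete PID tracking of a joint-angle reference on a crude
    # first-order plant. Gains, plant and reference are hypothetical.
    import math

    kp, ki, kd, dt = 8.0, 1.0, 0.5, 0.01
    theta, integral, prev_err = 0.0, 0.0, 0.0
    for step in range(1000):
        ref = 0.5 * math.sin(0.5 * step * dt)    # desired joint angle (rad)
        err = ref - theta
        integral += err * dt
        derivative = (err - prev_err) / dt
        torque = kp * err + ki * integral + kd * derivative
        theta += torque * dt                     # toy plant integration
        prev_err = err
    print(f"final tracking error: {err:+.4f} rad")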

    @inproceedings{lincoln48338,
    booktitle = {Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
    month = {October},
    title = {On Robotic In-Orbit Assembly of Large Aperture Space Telescopes},
    author = {Manu H. Nair and Chakravarthini M. Saaj and Amir G. Esfahani},
    publisher = {IEEE},
    year = {2020},
    url = {https://eprints.lincoln.ac.uk/id/eprint/48338/},
    abstract = {Space has found itself amidst numerous missions benefitting life on Earth and enabling mankind to explore further. The space community has been launching various on-orbit missions, tackling the extremities of the space environment with the use of robots, for performing tasks like assembly, maintenance and repairs. The urge to explore further into the universe for scientific benefit has driven the rise of modular Large-Space Telescopes (LASTs). With respect to the challenges of the in-space assembly of a LAST, a five Degrees-of-Freedom (DoF) End-Over-End Walking Robot (E-Walker) is presented in this paper. The dynamical model and gait pattern of the E-Walker are discussed with reference to the different phases of its motion. For the initial verification of the E-Walker model, a PID controller was used to make the E-Walker follow the desired trajectory. A mission concept discussing a potential strategy for assembling a 25m LAST with 342 Primary Mirror Units (PMUs) is briefly discussed. Simulation results show that precise tracking of the E-Walker along a desired trajectory is achieved without exceeding the joint torques.}
    }
  • W. Khan, G. Das, M. Hanheide, and G. Cielniak, “Incorporating spatial constraints into a bayesian tracking framework for improved localisation in agricultural environments,” in 2020 ieee/rsj international conference on intelligent robots and systems, 2020.
    [BibTeX] [Abstract] [Download PDF]

    Global navigation satellite systems (GNSS) have been considered a panacea for positioning and tracking over the last decade. However, GNSS suffers from severe limitations in terms of accuracy, particularly in highly cluttered and indoor environments. Though real-time kinematic (RTK) supported GNSS promises extremely accurate localisation, employing such services is expensive, fails in occluded environments and is unavailable in areas where cellular base stations are not accessible. It is, therefore, necessary that GNSS data be filtered if high accuracy is required. Thus, this article presents a GNSS-based particle filter that exploits the spatial constraints imposed by the environment. In the proposed setup, the state prediction of the sample set follows a restricted motion according to the topological map of the environment. This results in the transition of the samples being confined to specific discrete points, called topological nodes, defined by a topological map. This is followed by a refinement stage where the full set of predicted samples goes through weighting and resampling, where the weight is proportional to the predicted particle's proximity to the GNSS measurement. Thus, a discrete-space continuous-time Bayesian filter is proposed, called the Topological Particle Filter (TPF). The proposed TPF is put to the test by localising and tracking fruit pickers inside polytunnels. Fruit pickers inside polytunnels can only follow specific paths according to the topology of the tunnel. These paths are defined in the topological map of the polytunnels and are fed to the TPF to track fruit pickers. Extensive datasets were collected to demonstrate the improved discrete tracking of strawberry pickers inside polytunnels thanks to the exploitation of the environmental constraints.
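
    A minimal sketch of the topological-particle-filter idea: particles live on the discrete nodes of a topological map, predict along its edges, and are weighted by proximity to a GNSS fix. The chain-of-nodes map (one polytunnel row), motion model and noise level are assumptions for illustration.

    # Toy topological particle filter on a chain of map nodes.
    import math, random

    nodes = [(i * 2.0, 0.0) for i in range(10)]           # node positions (m)
    particles = [random.randrange(len(nodes)) for _ in range(200)]

    def tpf_step(particles, gnss, sigma=3.0):
        # Predict: stay or hop to a neighbouring node along the chain.
        pred = [max(0, min(len(nodes) - 1, p + random.choice((-1, 0, 1))))
                for p in particles]
        # Weight: Gaussian likelihood of the GNSS fix given each node.
        w = [math.exp(-((nodes[p][0] - gnss[0]) ** 2 +
                        (nodes[p][1] - gnss[1]) ** 2) / (2 * sigma ** 2))
             for p in pred]
        # Resample in proportion to weight.
        return random.choices(pred, weights=w, k=len(pred))

    particles = tpf_step(particles, gnss=(8.3, 0.4))
    best = max(set(particles), key=particles.count)
    print("most likely node:", best, nodes[best])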

    @inproceedings{lincoln42419,
    booktitle = {2020 IEEE/RSJ International Conference on Intelligent Robots and Systems},
    month = {October},
    title = {Incorporating Spatial Constraints into a Bayesian Tracking Framework for Improved Localisation in Agricultural Environments},
    author = {Waqas Khan and Gautham Das and Marc Hanheide and Grzegorz Cielniak},
    publisher = {IEEE},
    year = {2020},
    url = {https://eprints.lincoln.ac.uk/id/eprint/42419/},
    abstract = {Global navigation satellite systems (GNSS) have been considered a panacea for positioning and tracking over the last decade. However, GNSS suffers from severe limitations in terms of accuracy, particularly in highly cluttered and indoor environments. Though real-time kinematic (RTK) supported GNSS promises extremely accurate localisation, employing such services is expensive, fails in occluded environments and is unavailable in areas where cellular base stations are not accessible. It is, therefore, necessary that GNSS data be filtered if high accuracy is required. Thus, this article presents a GNSS-based particle filter that exploits the spatial constraints imposed by the environment. In the proposed setup, the state prediction of the sample set follows a restricted motion according to the topological map of the environment. This results in the transition of the samples being confined to specific discrete points, called topological nodes, defined by a topological map. This is followed by a refinement stage where the full set of predicted samples goes through weighting and resampling, where the weight is proportional to the predicted particle's proximity to the GNSS measurement. Thus, a discrete-space continuous-time Bayesian filter is proposed, called the Topological Particle Filter (TPF). The proposed TPF is put to the test by localising and tracking fruit pickers inside polytunnels. Fruit pickers inside polytunnels can only follow specific paths according to the topology of the tunnel. These paths are defined in the topological map of the polytunnels and are fed to the TPF to track fruit pickers. Extensive datasets were collected to demonstrate the improved discrete tracking of strawberry pickers inside polytunnels thanks to the exploitation of the environmental constraints.}
    }
  • R. Kirk, M. Mangan, and G. Cielniak, “Feasibility study of in-field phenotypic trait extraction for robotic soft-fruit operations,” in Ukras20 conference: 'robots into the real world' proceedings, 2020, p. 21–23. doi:10.31256/Uk4Td6I
    [BibTeX] [Abstract] [Download PDF]

    There are many agricultural applications that would benefit from robotic monitoring of soft fruit; examples include harvesting and yield forecasting. Autonomous mobile robotic platforms enable digitisation of horticultural processes in-field, reducing labour demand and increasing efficiency through continuous operation. It is critical for vision-based fruit detection methods to estimate traits such as size, mass and volume for quality assessment, maturity estimation and yield forecasting. Estimating these traits from a camera mounted on a mobile robot is a non-destructive/non-invasive approach to gathering qualitative fruit data in-field. We investigate the feasibility of using vision-based modalities for precise, cheap, and real-time computation of phenotypic traits: mass and volume of strawberries from planar RGB slices and optionally point data. Our best method achieves a marginal error of 3.00 cm³ for volume estimation. The planar RGB slices can be computed manually or by using common object detection methods such as Mask R-CNN.
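
    One way to realise a volume estimate from a planar slice is to treat the segmented berry mask as a cross-section and assume rotational symmetry; the synthetic mask, pixel scale and symmetry assumption below are illustrative, not the paper's exact method.

    # Rough fruit volume from a binary mask via solid-of-revolution
    # (disc) integration. Mask and calibration are hypothetical.
    import numpy as np

    mask = np.zeros((40, 40), dtype=bool)
    yy, xx = np.ogrid[:40, :40]
    mask[(yy - 20) ** 2 + (xx - 20) ** 2 <= 15 ** 2] = True  # fake berry

    px = 0.05                                   # cm per pixel (hypothetical)
    widths = mask.sum(axis=1) * px              # per-row width (cm)
    volume = float(np.sum(np.pi * (widths / 2.0) ** 2 * px))  # stacked discs
    print(f"estimated volume: {volume:.2f} cm^3")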

    @inproceedings{lincoln42101,
    month = {February},
    author = {Raymond Kirk and Michael Mangan and Grzegorz Cielniak},
    booktitle = {UKRAS20 Conference: 'Robots into the real world' Proceedings},
    title = {Feasibility Study of In-Field Phenotypic Trait Extraction for Robotic Soft-Fruit Operations},
    publisher = {UKRAS},
    doi = {10.31256/Uk4Td6I},
    pages = {21--23},
    year = {2020},
    url = {https://eprints.lincoln.ac.uk/id/eprint/42101/},
    abstract = {There are many agricultural applications that would benefit from robotic monitoring of soft fruit; examples include harvesting and yield forecasting. Autonomous mobile robotic platforms enable digitisation of horticultural processes in-field, reducing labour demand and increasing efficiency through continuous operation. It is critical for vision-based fruit detection methods to estimate traits such as size, mass and volume for quality assessment, maturity estimation and yield forecasting. Estimating these traits from a camera mounted on a mobile robot is a non-destructive/non-invasive approach to gathering qualitative fruit data in-field. We investigate the feasibility of using vision-based modalities for precise, cheap, and real-time computation of phenotypic traits: mass and volume of strawberries from planar RGB slices and optionally point data. Our best method achieves a marginal error of 3.00 cm{$^{3}$} for volume estimation. The planar RGB slices can be computed manually or by using common object detection methods such as Mask R-CNN.}
    }
  • J. Barber, H. Cuayahuitl, M. Zhong, and W. Luan, “Lightweight non-intrusive load monitoring employing pruned sequence-to-point learning,” in 5th international workshop on non-intrusive load monitoring, 2020. doi:10.1145/1122445.1122456
    [BibTeX] [Abstract] [Download PDF]

    Non-intrusive load monitoring (NILM) is the process in which a household's total power consumption is used to determine the power consumption of household appliances. Previous work has shown that sequence-to-point (seq2point) learning is one of the most promising methods for tackling NILM. This process uses a sequence of aggregate power data to map a target appliance's power consumption at the midpoint of that window of power data. However, models produced using this method contain upwards of thirty million weights, meaning that the models require large volumes of resources to perform disaggregation. This paper addresses this problem by pruning the weights learned by such a model, which results in a lightweight NILM algorithm for the purpose of being deployed on mobile devices such as smart meters. The pruned seq2point learning algorithm was applied to the REFIT data, experimentally showing that performance was retained compared to the original seq2point learning whilst the number of weights was reduced by 87\%. Code: https://github.com/JackBarber98/pruned-nilm
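
    Magnitude pruning of the kind applied to the seq2point weights can be sketched with NumPy; only the 87\% sparsity target comes from the abstract, the layer shape is a stand-in, and the authors' actual implementation is at the URL above.

    # Global magnitude pruning: zero the smallest 87% of weights.
    import numpy as np

    rng = np.random.default_rng(2)
    w = rng.normal(size=(1024, 1024))            # stand-in dense layer
    threshold = np.quantile(np.abs(w), 0.87)     # 87% sparsity target
    pruned = np.where(np.abs(w) >= threshold, w, 0.0)
    print(f"zeroed fraction: {(pruned == 0).mean():.2%}")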

    @inproceedings{lincoln42806,
    booktitle = {5th International Workshop on Non-Intrusive Load Monitoring},
    month = {October},
    title = {Lightweight Non-Intrusive Load Monitoring Employing Pruned Sequence-to-Point Learning},
    author = {Jack Barber and Heriberto Cuayahuitl and Mingjun Zhong and Wenpeng Luan},
    publisher = {ACM Conference Proceedings},
    year = {2020},
    doi = {10.1145/1122445.1122456},
    url = {https://eprints.lincoln.ac.uk/id/eprint/42806/},
    abstract = {Non-intrusive load monitoring (NILM) is the process in which a household's total power consumption is used to determine the power consumption of household appliances. Previous work has shown that sequence-to-point (seq2point) learning is one of the most promising methods for tackling NILM. This process uses a sequence of aggregate power data to map a target appliance's power consumption at the midpoint of that window of power data. However, models produced using this method contain upwards of thirty million weights, meaning that the models require large volumes of resources to perform disaggregation. This paper addresses this problem by pruning the weights learned by such a model, which results in a lightweight NILM algorithm for the purpose of being deployed on mobile devices such as smart meters. The pruned seq2point learning algorithm was applied to the REFIT data, experimentally showing that performance was retained compared to the original seq2point learning whilst the number of weights was reduced by 87\%. Code: https://github.com/JackBarber98/pruned-nilm}
    }
  • M. Calisti, F. Giorgio-Serchi, C. Stefanini, M. Farman, I. Hussain, C. Armanini, D. Gan, L. Seneviratne, and F. Renda, “Design, modeling and testing of a flagellum-inspired soft underwater propeller exploiting passive elasticity,” in 2019 ieee/rsj international conference on intelligent robots and systems (iros), 2020, p. 3328–3334. doi:10.1109/IROS40897.2019.8967700
    [BibTeX] [Abstract] [Download PDF]

    Flagellated micro-organisms are regarded as excellent swimmers within their size scales. This, along with the simplicity of their actuation and the richness of their dynamics, makes them a valuable source of inspiration for designing continuum, self-propelled underwater robots. Here we introduce a soft, flagellum-inspired system which exploits the compliance of its own body to passively attain a range of geometrical configurations from the interaction with the surrounding fluid. The spontaneous formation of stable helical waves along the length of the flagellum is responsible for the generation of positive net thrust. We investigate the relationship between actuation frequency and material elasticity in determining the steady-state configuration of the system and its thrust output. This is ultimately used to perform a parameter identification procedure of an elastodynamic model aimed at investigating the scaling laws in the propulsion of flagellated robots.

    @inproceedings{lincoln46145,
    booktitle = {2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
    month = {January},
    title = {Design, Modeling and Testing of a Flagellum-inspired Soft Underwater Propeller Exploiting Passive Elasticity},
    author = {Marcello Calisti and Francesco Giorgio-Serchi and Cesare Stefanini and Madiha Farman and Irfan Hussain and Costanza Armanini and Dongming Gan and Lakmal Seneviratne and Federico Renda},
    year = {2020},
    pages = {3328--3334},
    doi = {10.1109/IROS40897.2019.8967700},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46145/},
    abstract = {Flagellated micro-organisms are regarded as excellent swimmers within their size scales. This, along with the simplicity of their actuation and the richness of their dynamics, makes them a valuable source of inspiration for designing continuum, self-propelled underwater robots. Here we introduce a soft, flagellum-inspired system which exploits the compliance of its own body to passively attain a range of geometrical configurations from the interaction with the surrounding fluid. The spontaneous formation of stable helical waves along the length of the flagellum is responsible for the generation of positive net thrust. We investigate the relationship between actuation frequency and material elasticity in determining the steady-state configuration of the system and its thrust output. This is ultimately used to perform a parameter identification procedure of an elastodynamic model aimed at investigating the scaling laws in the propulsion of flagellated robots.}
    }
  • M. Terreran, A. Tramontano, J. Lock, S. Ghidoni, and N. Bellotto, “Real-time object detection using deep learning for helping people with visual impairments,” in 4th ieee international conference on image processing, applications and systems (ipas), 2020. doi:10.1109/IPAS50080.2020.9334933
    [BibTeX] [Abstract] [Download PDF]

    Object detection plays a crucial role in the development of Electronic Travel Aids (ETAs), capable of guiding a person with visual impairments towards a target object in an unknown indoor environment. In such a scenario, the object detector runs on a mobile device (e.g. smartphone) and needs to be fast, accurate, and, most importantly, lightweight. Nowadays, Deep Neural Networks (DNNs) have become the state-of-the-art solution for object detection tasks, with many works improving speed and accuracy by proposing new architectures or extending existing ones. A common strategy is to use deeper networks to get higher performance, but that leads to a higher computational cost which makes it impractical to integrate them on mobile devices with limited computational power. In this work we compare different object detectors to find a suitable candidate to be implemented on ETAs, focusing on lightweight models capable of working in real-time on mobile devices with good accuracy. In particular, we select two models: SSD Lite with MobileNet V2 and Tiny-DSOD. Both models have been tested on the popular OpenImage dataset and a new dataset, called the Office dataset, collected to further test the models' performance and robustness in a real scenario inspired by the actual perception challenges of a user with visual impairments.
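
    For reference, a minimal detection call with an off-the-shelf lightweight model. Note that torchvision's packaged SSDLite uses a MobileNetV3 backbone rather than the MobileNet V2 variant evaluated in the paper, so this is a neighbouring model, not the one benchmarked; it also downloads pretrained weights on first use.

    # Single-image inference with a lightweight off-the-shelf detector.
    import torch
    from torchvision.models.detection import ssdlite320_mobilenet_v3_large

    model = ssdlite320_mobilenet_v3_large(weights="DEFAULT").eval()
    image = torch.rand(3, 320, 320)      # stand-in for a real camera frame
    with torch.no_grad():
        out = model([image])[0]          # dict of boxes, labels, scores
    print(out["boxes"].shape, out["scores"][:5])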

    @inproceedings{lincoln42338,
    booktitle = {4th IEEE International Conference on Image Processing, Applications and Systems (IPAS)},
    month = {December},
    title = {Real-time Object Detection using Deep Learning for helping People with Visual Impairments},
    author = {Matteo Terreran and Andrea Tramontano and Jacobus Lock and Stefano Ghidoni and Nicola Bellotto},
    publisher = {IEEE},
    year = {2020},
    doi = {10.1109/IPAS50080.2020.9334933},
    url = {https://eprints.lincoln.ac.uk/id/eprint/42338/},
    abstract = {Object detection plays a crucial role in the development of Electronic Travel Aids (ETAs), capable of guiding a person with visual impairments towards a target object in an unknown indoor environment. In such a scenario, the object detector runs on a mobile device (e.g. smartphone) and needs to be fast, accurate, and, most importantly, lightweight. Nowadays, Deep Neural Networks (DNNs) have become the state-of-the-art solution for object detection tasks, with many works improving speed and accuracy by proposing new architectures or extending existing ones. A common strategy is to use deeper networks to get higher performance, but that leads to a higher computational cost which makes it impractical to integrate them on mobile devices with limited computational power. In this work we compare different object detectors to find a suitable candidate to be implemented on ETAs, focusing on lightweight models capable of working in real-time on mobile devices with good accuracy. In particular, we select two models: SSD Lite with MobileNet V2 and Tiny-DSOD. Both models have been tested on the popular OpenImage dataset and a new dataset, called the Office dataset, collected to further test the models' performance and robustness in a real scenario inspired by the actual perception challenges of a user with visual impairments.}
    }
  • F. Lei, Z. Peng, V. Cutsuridis, M. Liu, Y. Zhang, and S. Yue, “Competition between on and off neural pathways enhancing collision selectivity,” in Ieee wcci 2020-ijcnn regular session, 2020. doi:10.1109/IJCNN48605.2020.9207131
    [BibTeX] [Abstract] [Download PDF]

    The LGMD1 neuron of locusts shows a strong looming-sensitive property for both light and dark objects. Although a few LGMD1 models have been proposed, they do not reliably inhibit translating motion under certain conditions, compared to the biological LGMD1 in the locust. To address this issue, we propose a bio-plausible model to enhance the collision selectivity by inhibiting the translating motion. The proposed model contains three parts: the retina-to-lamina layer for receiving luminance change signals, the lamina-to-medulla layer for extracting motion cues via ON and OFF pathways separately, and the medulla-to-lobula layer for eliminating translational excitation with neural competition. We tested the model with synthetic stimuli and real physical stimuli. The experimental results demonstrate that the proposed LGMD1 model has a strong preference for objects on a direct collision course: it can detect looming objects in different conditions while completely ignoring translating objects.
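
    The ON/OFF split at the core of the model amounts to signed half-wave rectification of the luminance change between frames; a minimal sketch with synthetic frames (the later competition stages are not reproduced).

    # Split a luminance-change frame into ON (brightening) and OFF
    # (darkening) channels by half-wave rectification. Frames are synthetic.
    import numpy as np

    prev = np.random.default_rng(3).random((8, 8))
    curr = np.roll(prev, 1, axis=1)              # fake rightward motion
    diff = curr - prev
    on_channel = np.maximum(diff, 0.0)           # ON pathway input
    off_channel = np.maximum(-diff, 0.0)         # OFF pathway input
    print(on_channel.sum(), off_channel.sum())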

    @inproceedings{lincoln41701,
    booktitle = {IEEE WCCI 2020-IJCNN regular session},
    title = {Competition between ON and OFF Neural Pathways Enhancing Collision Selectivity},
    author = {Fang Lei and Zhiping Peng and Vassilis Cutsuridis and Mei Liu and Yicheng Zhang and Shigang Yue},
    year = {2020},
    doi = {10.1109/IJCNN48605.2020.9207131},
    url = {https://eprints.lincoln.ac.uk/id/eprint/41701/},
    abstract = {The LGMD1 neuron of locusts shows a strong looming-sensitive property for both light and dark objects. Although a few LGMD1 models have been proposed, they do not reliably inhibit translating motion under certain conditions, compared to the biological LGMD1 in the locust. To address this issue, we propose a bio-plausible model to enhance the collision selectivity by inhibiting the translating motion. The proposed model contains three parts: the retina-to-lamina layer for receiving luminance change signals, the lamina-to-medulla layer for extracting motion cues via ON and OFF pathways separately, and the medulla-to-lobula layer for eliminating translational excitation with neural competition. We tested the model with synthetic stimuli and real physical stimuli. The experimental results demonstrate that the proposed LGMD1 model has a strong preference for objects on a direct collision course: it can detect looming objects in different conditions while completely ignoring translating objects.}
    }
  • L. Roberts-Elliott, M. Fernandez-Carmona, and M. Hanheide, “Towards safer robot motion: using a qualitative motion model to classify human-robot spatial interaction,” in 21st towards autonomous robotic systems conference, 2020.
    [BibTeX] [Abstract] [Download PDF]

    For Autonomous Mobile Robots (AMRs) to be adopted across a breadth of industries, they must navigate around humans in a way which is safe and which humans perceive as safe, but without greatly compromising efficiency. This work aims to classify the Human-Robot Spatial Interaction (HRSI) situation of an interacting human and robot, to be applied in Human-Aware Navigation (HAN) to account for situational context. We develop qualitative probabilistic models of relative human and robot movements in various HRSI situations to classify situations, and explain our plan to develop per-situation probabilistic models of socially legible HRSI to predict human and robot movement. In future work we aim to use these predictions to generate qualitative constraints in the form of metric cost-maps for local robot motion planners, enforcing more efficient and socially legible trajectories which are both physically safe and perceived as safe.
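
    A qualitative encoding of relative motion of the kind used to classify HRSI situations can be reduced to the sign of each agent's change of distance to the other; the encoding and trajectories below are a hypothetical simplification of such models.

    # Qualitative relative-motion symbols: -1 towards the other agent,
    # +1 away, 0 neither. Trajectories are synthetic.
    import math

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def qual_state(h0, h1, r0, r1, eps=0.01):
        symbols = []
        for a0, a1, other in ((h0, h1, r0), (r0, r1, h0)):  # human, robot
            d = dist(a1, other) - dist(a0, other)
            symbols.append(0 if abs(d) < eps else (-1 if d < 0 else 1))
        return tuple(symbols)

    # Human walks towards a stationary robot: expect (-1, 0).
    print(qual_state(h0=(0, 0), h1=(0.5, 0), r0=(5, 0), r1=(5, 0)))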

    @inproceedings{lincoln40186,
    booktitle = {21st Towards Autonomous Robotic Systems Conference},
    month = {December},
    title = {Towards Safer Robot Motion: Using a Qualitative Motion Model to Classify Human-Robot Spatial Interaction},
    author = {Laurence Roberts-Elliott and Manuel Fernandez-Carmona and Marc Hanheide},
    publisher = {Springer},
    year = {2020},
    url = {https://eprints.lincoln.ac.uk/id/eprint/40186/},
    abstract = {For Autonomous Mobile Robots (AMRs) to be adopted across a breadth of industries, they must navigate around humans in a way which is safe and which humans perceive as safe, but without greatly compromising efficiency. This work aims to classify the Human-Robot Spatial Interaction (HRSI) situation of an interacting human and robot, to be applied in Human-Aware Navigation (HAN) to account for situational context. We develop qualitative probabilistic models of relative human and robot movements in various HRSI situations to classify situations, and explain our plan to develop per-situation probabilistic models of socially legible HRSI to predict human and robot movement. In future work we aim to use these predictions to generate qualitative constraints in the form of metric cost-maps for local robot motion planners, enforcing more efficient and socially legible trajectories which are both physically safe and perceived as safe.}
    }
  • M. Al-Khafajiy, T. Baker, A. Waraich, O. Alfandi, and A. Hussien, “Enabling high performance fog computing through fog-2-fog coordination model,” in 2019 ieee/acs 16th international conference on computer systems and applications (aiccsa), 2020, p. 1–6. doi:10.1109/AICCSA47632.2019.9035353
    [BibTeX] [Abstract] [Download PDF]

    Fog computing is a promising network paradigm in the IoT area as it has great potential to reduce processing time for time-sensitive IoT applications. However, fog can get congested very easily due to fog resource limitations in terms of capacity and computational power. In this paper, we tackle the issue of fog congestion through a request offloading algorithm. The results show that the performance of fog nodes can be increased by sharing a fog's overload over several fog nodes. The proposed offloading algorithm could have the potential to achieve a sustainable network paradigm and highlights the significant benefits of fog offloading for the future networking paradigm.
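
    The offloading rule can be caricatured as: when a node's load exceeds its capacity, hand the surplus to the least-loaded peer. The loads, capacity and one-shot policy below are hypothetical.

    # Toy fog-2-fog offloading over a snapshot of node loads.
    loads = {"fog_a": 120, "fog_b": 40, "fog_c": 70}   # queued requests
    capacity = 100                                      # per-node capacity

    for node, load in sorted(loads.items()):            # snapshot iteration
        if load > capacity:
            surplus = load - capacity
            target = min((n for n in loads if n != node), key=loads.get)
            loads[node] -= surplus
            loads[target] += surplus
            print(f"{node} offloads {surplus} requests to {target}")
    print(loads)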

    @inproceedings{lincoln47564,
    month = {March},
    author = {Mohammed Al-Khafajiy and Thar Baker and Atif Waraich and Omar Alfandi and Aseel Hussien},
    booktitle = {2019 IEEE/ACS 16th International Conference on Computer Systems and Applications (AICCSA)},
    title = {Enabling High Performance Fog Computing through Fog-2-Fog Coordination Model},
    publisher = {IEEE},
    doi = {10.1109/AICCSA47632.2019.9035353},
    pages = {1--6},
    year = {2020},
    url = {https://eprints.lincoln.ac.uk/id/eprint/47564/},
    abstract = {Fog computing is a promising network paradigm in the IoT area as it has great potential to reduce processing time for time-sensitive IoT applications. However, fog can get congested very easily due to fog resource limitations in terms of capacity and computational power. In this paper, we tackle the issue of fog congestion through a request offloading algorithm. The results show that the performance of fog nodes can be increased by sharing a fog's overload over several fog nodes. The proposed offloading algorithm could have the potential to achieve a sustainable network paradigm and highlights the significant benefits of fog offloading for the future networking paradigm.}
    }
  • S. Kottayil, P. Tsoleridis, K. Rossa, R. Connors, and C. Fox, “Investigation of driver route choice behaviour using bluetooth data,” in 15th world conference on transport research, 2020, p. 632–645. doi:10.1016/j.trpro.2020.08.065
    [BibTeX] [Abstract] [Download PDF]

    Many local authorities use small-scale transport models to manage their transportation networks. These may assume drivers' behaviour to be rational in choosing the fastest route, and thus that all drivers behave the same given an origin and destination, leading to simplified aggregate flow models, fitted to anonymous traffic flow measurements. Recent price falls in traffic sensors, data storage, and compute power now enable Data Science to empirically test such assumptions, by using per-driver data to infer route selection from sensor observations and compare with optimal route selection. A methodology is presented using per-driver data to analyse driver route choice behaviour in transportation networks. Traffic flows on multiple measurable routes for origin-destination pairs are compared based on the length of each route. A driver rationality index is defined by considering the shortest physical route between an origin-destination pair. The proposed method is intended to aid calibration of parameters used in traffic assignment models, e.g. weights in generalized cost formulations or dispersion within stochastic user equilibrium models. The method is demonstrated using raw sensor datasets collected through Bluetooth sensors in the area of Chesterfield, Derbyshire, UK. The results for this region show that routes with a significant difference in lengths of their paths have the majority (71\%) of drivers using the optimal path but as the difference in length decreases, the probability of suboptimal route choice decreases (27\%). The methodology can be used for extended research considering the impact on route choice of other factors including travel time and road specific conditions.
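
    The driver rationality index compares each observed route choice against the shortest available route for its origin-destination pair; a minimal sketch with hypothetical route lengths and observations.

    # Rationality index: fraction of observed trips taking the shortest
    # route for their origin-destination pair. All data are hypothetical.
    routes = {"A-B": [3.2, 3.9], "A-C": [5.0, 5.1]}              # lengths (km)
    observed = [("A-B", 0), ("A-B", 0), ("A-B", 1), ("A-C", 1)]  # (OD, route)

    optimal = sum(1 for od, idx in observed
                  if routes[od][idx] == min(routes[od]))
    print(f"rationality index: {optimal / len(observed):.2f}")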

    @inproceedings{lincoln34791,
    volume = {48},
    month = {September},
    author = {Sreedevi Kottayil and Panagiotis Tsoleridis and Kacper Rossa and Richard Connors and Charles Fox},
    booktitle = {15th World Conference on Transport Research},
    title = {Investigation of Driver Route Choice Behaviour using Bluetooth Data},
    publisher = {Elsevier},
    year = {2020},
    journal = {Transportation Research Procedia},
    doi = {10.1016/j.trpro.2020.08.065},
    pages = {632--645},
    url = {https://eprints.lincoln.ac.uk/id/eprint/34791/},
    abstract = {Many local authorities use small-scale transport models to manage their transportation networks. These may assume drivers' behaviour to be rational in choosing the fastest route, and thus that all drivers behave the same given an origin and destination, leading to simplified aggregate flow models, fitted to anonymous traffic flow measurements. Recent price falls in traffic sensors, data storage, and compute power now enable Data Science to empirically test such assumptions, by using per-driver data to infer route selection from sensor observations and compare with optimal route selection. A methodology is presented using per-driver data to analyse driver route choice behaviour in transportation networks. Traffic flows on multiple measurable routes for origin-destination pairs are compared based on the length of each route. A driver rationality index is defined by considering the shortest physical route between an origin-destination pair. The proposed method is intended to aid calibration of parameters used in traffic assignment models, e.g. weights in generalized cost formulations or dispersion within stochastic user equilibrium models. The method is demonstrated using raw sensor datasets collected through Bluetooth sensors in the area of Chesterfield, Derbyshire, UK. The results for this region show that routes with a significant difference in lengths of their paths have the majority (71\%) of drivers using the optimal path but as the difference in length decreases, the probability of suboptimal route choice decreases (27\%). The methodology can be used for extended research considering the impact on route choice of other factors including travel time and road specific conditions.}
    }
  • F. D. Duchetto, P. Baxter, and M. Hanheide, “Automatic assessment and learning of robot social abilities,” in Companion of the 2020 acm/ieee international conference on human-robot interaction, 2020, p. 561–563. doi:10.1145/3371382.3377430
    [BibTeX] [Abstract] [Download PDF]

    One of the key challenges of current state-of-the-art robotic deployments in public spaces, where the robot is supposed to interact with humans, is the generation of behaviors that are engaging for the users. Eliciting engagement during an interaction, and maintaining it after the initial phase of the interaction, is still an issue to be overcome. There is evidence that engagement in learning activities is higher in the presence of a robot, particularly if novel [1], but after the initial engagement state, long and non-interactive behaviors are detrimental to the continued engagement of the users [5, 16]. Overcoming this limitation requires designing robots with enhanced social abilities that go past monolithic behaviours and introduce in-situ learning and adaptation to the specific users and situations. To do so, the robot must have the ability to perceive the state of the humans participating in the interaction and use this feedback for the selection of its own actions over time [27].

    @inproceedings{lincoln40509,
    booktitle = {Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction},
    month = {March},
    title = {Automatic Assessment and Learning of Robot Social Abilities},
    author = {Francesco Del Duchetto and Paul Baxter and Marc Hanheide},
    year = {2020},
    pages = {561--563},
    doi = {10.1145/3371382.3377430},
    url = {https://eprints.lincoln.ac.uk/id/eprint/40509/},
    abstract = {One of the key challenges of current state-of-the-art robotic deployments in public spaces, where the robot is supposed to interact with humans, is the generation of behaviors that are engaging for the users. Eliciting engagement during an interaction, and maintaining it after the initial phase of the interaction, is still an issue to be overcome. There is evidence that engagement in learning activities is higher in the presence of a robot, particularly if novel [1], but after the initial engagement state, long and non-interactive behaviors are detrimental to the continued engagement of the users [5, 16]. Overcoming this limitation requires designing robots with enhanced social abilities that go past monolithic behaviours and introduce in-situ learning and adaptation to the specific users and situations. To do so, the robot must have the ability to perceive the state of the humans participating in the interaction and use this feedback for the selection of its own actions over time [27].}
    }
  • N. Andreakos, S. Yue, and V. Cutsuridis, “Improving recall in an associative neural network model of the hippocampus,” in 9th international conference, living machines 2020, 2020.
    [BibTeX] [Abstract] [Download PDF]

    The mammalian hippocampus is involved in auto-association and hetero-association of declarative memories. We employed a bio-inspired neural model of the hippocampal CA1 region to systematically evaluate its mean recall quality against different numbers of stored patterns, overlaps and active cells per pattern. The model consisted of excitatory cells (pyramidal cells) and four types of inhibitory cells: axo-axonic, basket, bistratified, and oriens lacunosum-moleculare cells. Cells were simplified compartmental models with complex ion channel dynamics. Cells' firing was timed to a theta oscillation paced by two distinct neuronal populations exhibiting highly regular bursting activity, one tightly coupled to the trough and the other to the peak of theta. During recall, excitatory input to network excitatory cells provided context and timing information for retrieval of previously stored memory patterns. Dendritic inhibition acted as a non-specific global threshold machine that removed spurious activity during recall. Simulations showed recall quality improved when the network's memory capacity increased as the number of active cells per pattern decreased. Furthermore, increasing the firing rate of a presynaptic inhibitory threshold machine inhibiting a network of postsynaptic excitatory cells is more successful at removing spurious activity at the network level and improving recall quality than increasing the synaptic efficacy of the same threshold machine on the same network of excitatory cells while keeping its firing rate fixed.
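
    Recall quality in such models is commonly scored as the normalised overlap between the recalled activity pattern and the stored one; the sketch below uses hypothetical binary patterns and cosine overlap, an assumption rather than the paper's exact measure.

    # Recall quality as cosine overlap between stored and recalled
    # binary patterns. Patterns are hypothetical.
    import numpy as np

    rng = np.random.default_rng(4)
    stored = (rng.random(100) < 0.1).astype(float)    # ~10% active cells
    recalled = stored.copy()
    flip = rng.choice(100, size=5, replace=False)     # corrupt 5 cells
    recalled[flip] = 1.0 - recalled[flip]

    quality = stored @ recalled / (np.linalg.norm(stored) *
                                   np.linalg.norm(recalled))
    print(f"recall quality: {quality:.3f}")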

    @inproceedings{lincoln43365,
    booktitle = {9th International Conference, Living Machines 2020},
    month = {September},
    title = {Improving Recall in an Associative Neural Network Model of the Hippocampus},
    author = {Nikolas Andreakos and Shigang Yue and Vassilis Cutsuridis},
    publisher = {Springer Nature},
    year = {2020},
    url = {https://eprints.lincoln.ac.uk/id/eprint/43365/},
    abstract = {The mammalian hippocampus is involved in auto-association and hetero-association of declarative memories. We employed a bio-inspired neural model of the hippocampal CA1 region to systematically evaluate its mean recall quality against different numbers of stored patterns, overlaps and active cells per pattern. The model consisted of excitatory cells (pyramidal cells) and four types of inhibitory cells: axo-axonic, basket, bistratified, and oriens lacunosum-moleculare cells. Cells were simplified compartmental models with complex ion channel dynamics. Cells' firing was timed to a theta oscillation paced by two distinct neuronal populations exhibiting highly regular bursting activity, one tightly coupled to the trough and the other to the peak of theta. During recall, excitatory input to network excitatory cells provided context and timing information for retrieval of previously stored memory patterns. Dendritic inhibition acted as a non-specific global threshold machine that removed spurious activity during recall. Simulations showed recall quality improved when the network's memory capacity increased as the number of active cells per pattern decreased. Furthermore, increasing the firing rate of a presynaptic inhibitory threshold machine inhibiting a network of postsynaptic excitatory cells is more successful at removing spurious activity at the network level and improving recall quality than increasing the synaptic efficacy of the same threshold machine on the same network of excitatory cells while keeping its firing rate fixed.}
    }
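    The recall-with-global-threshold mechanism described above can be illustrated with a minimal sketch. The paper's model uses biophysically detailed compartmental neurons; the toy auto-associative network below (all sizes and names hypothetical) only shows how a global k-winners threshold removes spurious activity during cued recall.

        import numpy as np

        rng = np.random.default_rng(0)
        N, P, ACTIVE = 200, 10, 20          # cells, stored patterns, active cells per pattern

        # Store sparse binary patterns with a clipped Hebbian outer-product rule.
        patterns = np.zeros((P, N))
        for p in patterns:
            p[rng.choice(N, ACTIVE, replace=False)] = 1.0
        W = np.clip(patterns.T @ patterns, 0, 1)   # binary synaptic matrix
        np.fill_diagonal(W, 0.0)

        def recall(cue, k=ACTIVE):
            """One recall step: excitatory drive, then a global inhibitory
            threshold keeps only the k most driven cells (removes spurious spikes)."""
            drive = W @ cue
            out = np.zeros_like(cue)
            out[np.argsort(drive)[-k:]] = 1.0
            return out

        # Cue with roughly half of pattern 0 and measure recall quality (overlap).
        cue = patterns[0] * (rng.random(N) < 0.5)
        recalled = recall(cue)
        print(f"recall overlap with stored pattern: {(recalled @ patterns[0]) / ACTIVE:.2f}")
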
  • M. Sorour, K. Elgeneidy, M. Hanheide, and A. Srinivasan, “Enhancing grasp pose computation in gripper workspace spheres,” in Icra 2020, 2020.
    [BibTeX] [Abstract] [Download PDF]

    In this paper, an enhancement to a novel grasp planning algorithm based on gripper workspace spheres is presented. Our development requires a registered point cloud of the target from different views, assuming no prior knowledge of the object, nor any of its properties. This work features a new set of metrics for grasp pose candidate evaluation, as well as an exploration of the impact of high object sampling on grasp success rates. In addition to gripper position sampling, we now perform orientation sampling about the x, y, and z-axes, hence the grasping algorithm no longer requires object orientation estimation. Successful experiments have been conducted on a simple jaw gripper (Franka Panda gripper) as well as a complex, high Degree of Freedom (DoF) hand (Allegro hand) as a proof of its versatility. Grasp success rates of 76\% and 85.5\%, respectively, have been reported in real-world experiments.

    @inproceedings{lincoln39957,
    booktitle = {ICRA 2020},
    month = {July},
    title = {Enhancing Grasp Pose Computation in Gripper Workspace Spheres},
    author = {Mohamed Sorour and Khaled Elgeneidy and Marc Hanheide and Aravinda Srinivasan},
    year = {2020},
    url = {https://eprints.lincoln.ac.uk/id/eprint/39957/},
    abstract = {In this paper, an enhancement to a novel grasp planning algorithm based on gripper workspace spheres is presented. Our development requires a registered point cloud of the target from different views, assuming no prior knowledge of the object, nor any of its properties. This work features a new set of metrics for grasp pose candidate evaluation, as well as an exploration of the impact of high object sampling on grasp success rates. In addition to gripper position sampling, we now perform orientation sampling about the x, y, and z-axes, hence the grasping algorithm no longer requires object orientation estimation. Successful experiments have been conducted on a simple jaw gripper (Franka Panda gripper) as well as a complex, high Degree of Freedom (DoF) hand (Allegro hand) as a proof of its versatility. Grasp success rates of 76\% and 85.5\%, respectively, have been reported in real-world experiments.}
    }
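    The orientation-sampling step described above (enumerating candidate gripper rotations about the x, y and z axes so that object orientation estimation is no longer needed) might look as follows; the angular step and the scoring metric are editorial placeholders, not values from the paper.

        import itertools
        import numpy as np
        from scipy.spatial.transform import Rotation as R

        def sample_orientations(step_deg=30):
            """Enumerate candidate gripper orientations about x, y and z."""
            angles = np.arange(0, 360, step_deg)
            for ax, ay, az in itertools.product(angles, repeat=3):
                yield R.from_euler("xyz", [ax, ay, az], degrees=True)

        def score(rotation, position):
            # Placeholder grasp-candidate metric: prefer approach axes
            # close to vertical; a real system would use workspace metrics.
            return -np.linalg.norm(rotation.apply([0, 0, 1]) - np.array([0, 0, 1]))

        position = np.array([0.4, 0.0, 0.2])  # hypothetical sampled gripper position
        best = max(sample_orientations(), key=lambda r: score(r, position))
        print("best candidate orientation (quaternion):", best.as_quat())
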
  • J. L. Louedec, H. A. Montes, T. Duckett, and G. Cielniak, “Segmentation and detection from organised 3d point clouds: a case study in broccoli head detection,” in 2020 ieee/cvf conference on computer vision and pattern recognition workshops (cvprw), 2020, p. 285–293. doi:10.1109/CVPRW50498.2020.00040
    [BibTeX] [Abstract] [Download PDF]

    Autonomous harvesting is becoming an important challenge and necessity in agriculture, because of the lack of labour and the growth of the population needing to be fed. Perception is a key aspect of autonomous harvesting and is very challenging due to difficult lighting conditions, limited sensing technologies, occlusions, plant growth, etc. 3D vision approaches can bring several benefits addressing the aforementioned challenges such as localisation, size estimation, occlusion handling and shape analysis. In this paper, we propose a novel approach using 3D information for detecting broccoli heads based on Convolutional Neural Networks (CNNs), exploiting the organised nature of the point clouds originating from RGBD sensors. The proposed algorithm, tested on real-world datasets, achieves better performance than the state of the art, with better accuracy and generalisation in unseen scenarios, whilst significantly reducing inference time, making it better suited for real-time in-field applications.

    @inproceedings{lincoln43425,
    month = {June},
    author = {Justin Le Louedec and Hector A. Montes and Tom Duckett and Grzegorz Cielniak},
    booktitle = {2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)},
    title = {Segmentation and detection from organised 3D point clouds: a case study in broccoli head detection},
    publisher = {IEEE},
    doi = {10.1109/CVPRW50498.2020.00040},
    pages = {285--293},
    year = {2020},
    url = {https://eprints.lincoln.ac.uk/id/eprint/43425/},
    abstract = {Autonomous harvesting is becoming an important challenge and necessity in agriculture, because of the lack of labour and the growth of the population needing to be fed. Perception is a key aspect of autonomous harvesting and is very challenging due to difficult lighting conditions, limited sensing technologies, occlusions, plant growth, etc. 3D vision approaches can bring several benefits addressing the aforementioned challenges such as localisation, size estimation, occlusion handling and shape analysis. In this paper, we propose a novel approach using 3D information for detecting broccoli heads based on Convolutional Neural Networks (CNNs), exploiting the organised nature of the point clouds originating from RGBD sensors. The proposed algorithm, tested on real-world datasets, achieves better performance than the state of the art, with better accuracy and generalisation in unseen scenarios, whilst significantly reducing inference time, making it better suited for real-time in-field applications.}
    }
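    The property this paper exploits is that an organised RGBD point cloud is an H x W grid, so grid neighbours are also spatial neighbours and image-style (2D convolutional) operators apply directly, without nearest-neighbour search. A small editor-added illustration with synthetic data (not the paper's network), computing per-point surface normals from grid neighbours:

        import numpy as np

        # Synthetic organised cloud: an H x W x 3 grid of (x, y, z) points,
        # as produced by an RGBD sensor (here a tilted plane plus noise).
        H, W = 120, 160
        u, v = np.meshgrid(np.arange(W), np.arange(H))
        cloud = np.dstack([u * 0.01, v * 0.01,
                           0.2 * u * 0.01 + np.random.rand(H, W) * 1e-3])

        # Because the cloud is organised, normals follow from finite
        # differences along the grid axes -- no neighbour search needed.
        du = np.gradient(cloud, axis=1)
        dv = np.gradient(cloud, axis=0)
        normals = np.cross(du, dv)
        normals /= np.linalg.norm(normals, axis=2, keepdims=True)

        print("normal at image centre:", normals[H // 2, W // 2].round(3))
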
  • K. Elgeneidy and K. Goher, “Structural optimization of adaptive soft fin ray fingers with variable stiffening capability,” in Ieee robosoft 2020, 2020. doi:10.1109/RoboSoft48309.2020.9115969
    [BibTeX] [Abstract] [Download PDF]

    Soft and adaptable grippers are desired for their ability to operate effectively in unstructured or dynamically changing environments, especially when interacting with delicate or deformable targets. However, utilizing soft bodies often comes at the expense of reduced carrying payload and limited performance in high-force applications. Hence, methods for achieving variable stiffness soft actuators are being investigated to broaden the applications of soft grippers. This paper investigates the structural optimization of adaptive soft fingers based on the Fin Ray® effect (Soft Fin Ray), featuring a passive stiffening mechanism that is enabled via layer jamming between deforming flexible ribs. A finite element model of the proposed Soft Fin Ray structure is developed and experimentally validated, with the aim of enhancing the layer jamming behavior for better grasping performance. The results showed that through structural optimization, initial contact forces before jamming can be minimized and final contact forces after jamming can be significantly enhanced, without downgrading the desired passive adaptation to objects. Thus, applications for Soft Fin Ray fingers can range from adaptive delicate grasping to high-force manipulation tasks.

    @inproceedings{lincoln40182,
    booktitle = {IEEE RoboSoft 2020},
    month = {June},
    title = {Structural Optimization of Adaptive Soft Fin Ray Fingers with Variable Stiffening Capability},
    author = {Khaled Elgeneidy and Khaled Goher},
    publisher = {IEEE},
    year = {2020},
    doi = {10.1109/RoboSoft48309.2020.9115969},
    url = {https://eprints.lincoln.ac.uk/id/eprint/40182/},
    abstract = {Soft and adaptable grippers are desired for their ability to operate effectively in unstructured or dynamically changing environments, especially when interacting with delicate or deformable targets. However, utilizing soft bodies often comes at the expense of reduced carrying payload and limited performance in high-force applications. Hence, methods for achieving variable stiffness soft actuators are being investigated to broaden the applications of soft grippers. This paper investigates the structural optimization of adaptive soft fingers based on the Fin Ray{\textregistered} effect (Soft Fin Ray), featuring a passive stiffening mechanism that is enabled via layer jamming between deforming flexible ribs. A finite element model of the proposed Soft Fin Ray structure is developed and experimentally validated, with the aim of enhancing the layer jamming behavior for better grasping performance. The results showed that through structural optimization, initial contact forces before jamming can be minimized and final contact forces after jamming can be significantly enhanced, without downgrading the desired passive adaptation to objects. Thus, applications for Soft Fin Ray fingers can range from adaptive delicate grasping to high-force manipulation tasks.}
    }
  • A. Binch, G. Das, J. P. Fentanes, and M. Hanheide, “Context dependant iterative parameter optimisation for robust robot navigation,” in 2020 ieee international conference on robotics and automation (icra), 2020, p. 3937–3943. doi:10.1109/ICRA40945.2020.9196550
    [BibTeX] [Abstract] [Download PDF]

    Progress in autonomous mobile robotics has seen significant advances in the development of many algorithms for motion control and path planning. However, robust performance from these algorithms can often only be expected if the parameters controlling them are tuned specifically for the respective robot model, and optimised for specific scenarios in the environment the robot is working in. Such parameter tuning can, depending on the underlying algorithm, amount to a substantial combinatorial challenge, often rendering extensive manual tuning of these parameters intractable. In this paper, we present a framework that permits the use of different navigation actions and/or parameters depending on the spatial context of the navigation task, while considering the respective navigation algorithms themselves mostly as a “black box”, and find suitable parameters by means of an iterative optimisation, improving performance metrics in simulated environments. We present a genetic algorithm incorporated into the framework and empirically show that the resulting parameter sets lead to substantial performance improvements in both simulated and real-world environments in the domain of agricultural robots.

    @inproceedings{lincoln42389,
    month = {May},
    author = {Adam Binch and Gautham Das and Jaime Pulido Fentanes and Marc Hanheide},
    booktitle = {2020 IEEE International Conference on Robotics and Automation (ICRA)},
    title = {Context Dependant Iterative Parameter Optimisation for Robust Robot Navigation},
    publisher = {IEEE},
    doi = {10.1109/ICRA40945.2020.9196550},
    pages = {3937--3943},
    year = {2020},
    url = {https://eprints.lincoln.ac.uk/id/eprint/42389/},
    abstract = {Progress in autonomous mobile robotics has seen significant advances in the development of many algorithms for motion control and path planning. However, robust performance from these algorithms can often only be expected if the parameters controlling them are tuned specifically for the respective robot model, and optimised for specific scenarios in the environment the robot is working in. Such parameter tuning can, depending on the underlying algorithm, amount to a substantial combinatorial challenge, often rendering extensive manual tuning of these parameters intractable. In this paper, we present a framework that permits the use of different navigation actions and/or parameters depending on the spatial context of the navigation task, while considering the respective navigation algorithms themselves mostly as a "black box", and find suitable parameters by means of an iterative optimisation, improving performance metrics in simulated environments. We present a genetic algorithm incorporated into the framework and empirically show that the resulting parameter sets lead to substantial performance improvements in both simulated and real-world environments in the domain of agricultural robots.}
    }
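    A compact sketch of the iterative, simulation-in-the-loop idea described above: a genetic algorithm mutates candidate navigation parameter sets and keeps those that score best in simulated runs, treating the navigation stack as a black box. The parameter names, bounds and fitness function below are hypothetical, not taken from the paper.

        import random

        BOUNDS = {"max_vel": (0.2, 1.5), "inflation_radius": (0.1, 1.0)}  # hypothetical

        def simulate(params):
            """Placeholder for a simulated navigation run returning a score
            (e.g. time-to-goal penalised by near-collisions)."""
            return (-(params["max_vel"] - 0.8) ** 2
                    - (params["inflation_radius"] - 0.4) ** 2)

        def mutate(params):
            """Perturb one parameter, clamped to its bounds."""
            child = dict(params)
            key = random.choice(list(BOUNDS))
            lo, hi = BOUNDS[key]
            child[key] = min(hi, max(lo, child[key] + random.gauss(0, 0.1)))
            return child

        population = [{k: random.uniform(*b) for k, b in BOUNDS.items()} for _ in range(20)]
        for generation in range(30):
            population.sort(key=simulate, reverse=True)   # rank by simulated score
            elite = population[:5]                        # keep the best performers
            population = elite + [mutate(random.choice(elite)) for _ in range(15)]

        print("best parameter set:", population[0])
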
  • S. Parsa, D. Kamale, S. Mghames, K. Nazari, T. Pardi, A. Srinivasan, G. Neumann, M. Hanheide, and A. G. Esfahani, “Haptic-guided shared control grasping: collision-free manipulation,” in Case 2020- international conference on automation science and engineering, 2020. doi:10.1109/CASE48305.2020.9216789
    [BibTeX] [Abstract] [Download PDF]

    We propose a haptic-guided shared control system that provides an operator with force cues during the reach-to-grasp phase of tele-manipulation. The force cues inform the operator of a grasping configuration that allows collision-free autonomous post-grasp movements. Previous studies showed that haptic-guided shared control significantly reduces the complexities of teleoperation. We propose two architectures of shared control in which the operator is informed about (1) the local gradient of the collision cost, and (2) the grasping configuration suitable for collision-free movements of an aimed pick-and-place task. We demonstrate the efficiency of our proposed shared control systems in a series of experiments with a Franka Emika robot. Our experimental results illustrate that our shared control systems successfully inform the operator of predicted collisions between the robot and an obstacle in the robot's workspace. We learned that informing the operator of the global information about the grasping configuration associated with the minimum collision cost of post-grasp movements results in a much shorter reach-to-grasp time than informing the operator of the local-gradient information of the collision cost.

    @inproceedings{lincoln41283,
    month = {August},
    author = {Soran Parsa and Disha Kamale and Sariah Mghames and Kiyanoush Nazari and Tommaso Pardi and Aravinda Srinivasan and Gerhard Neumann and Marc Hanheide and Amir Ghalamzan Esfahani},
    booktitle = {CASE 2020- International Conference on Automation Science and Engineering},
    title = {Haptic-guided shared control grasping: collision-free manipulation},
    publisher = {IEEE},
    journal = {International Conference on Automation Science and Engineering (CASE)},
    doi = {10.1109/CASE48305.2020.9216789},
    year = {2020},
    url = {https://eprints.lincoln.ac.uk/id/eprint/41283/},
    abstract = {We propose a haptic-guided shared control system that provides an operator with force cues during the reach-to-grasp phase of tele-manipulation. The force cues inform the operator of a grasping configuration that allows collision-free autonomous post-grasp movements. Previous studies showed that haptic-guided shared control significantly reduces the complexities of teleoperation. We propose two architectures of shared control in which the operator is informed about (1) the local gradient of the collision cost, and (2) the grasping configuration suitable for collision-free movements of an aimed pick-and-place task. We demonstrate the efficiency of our proposed shared control systems in a series of experiments with a Franka Emika robot. Our experimental results illustrate that our shared control systems successfully inform the operator of predicted collisions between the robot and an obstacle in the robot's workspace. We learned that informing the operator of the global information about the grasping configuration associated with the minimum collision cost of post-grasp movements results in a much shorter reach-to-grasp time than informing the operator of the local-gradient information of the collision cost.}
    }
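    The first of the two shared-control architectures above feeds the local gradient of a collision cost back to the operator as a force cue. A minimal sketch of that idea, with a synthetic cost field (one spherical obstacle) and a hypothetical gain standing in for the paper's cost function and hardware:

        import numpy as np

        OBSTACLE, GAIN = np.array([0.5, 0.0, 0.3]), 0.05  # hypothetical scene and gain

        def collision_cost(x):
            """Higher cost closer to the obstacle."""
            return 1.0 / (np.linalg.norm(x - OBSTACLE) + 1e-6)

        def haptic_cue(x, eps=1e-4):
            """Force cue = negative numerical gradient of the collision cost,
            pushing the operator's hand away from predicted collisions."""
            grad = np.array([
                (collision_cost(x + eps * e) - collision_cost(x - eps * e)) / (2 * eps)
                for e in np.eye(3)
            ])
            return -GAIN * grad

        print("cue near obstacle:", haptic_cue(np.array([0.45, 0.0, 0.3])).round(3))
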
  • L. Sun, D. Adolfsson, M. Magnusson, H. Andreasson, I. Posner, and T. Duckett, “Localising faster: efficient and precise lidar-based robot localisation in large-scale environments,” in 2020 ieee international conference on robotics and automation (icra), 2020, p. 4386–4392. doi:10.1109/ICRA40945.2020.9196708
    [BibTeX] [Abstract] [Download PDF]

    This paper proposes a novel approach for global localisation of mobile robots in large-scale environments. Our method leverages learning-based localisation and filtering-based localisation, to localise the robot efficiently and precisely through seeding Monte Carlo Localisation (MCL) with a deep learned distribution. In particular, a fast localisation system rapidly estimates the 6-DOF pose through a deep-probabilistic model (Gaussian Process Regression with a deep kernel), then a precise recursive estimator refines the estimated robot pose according to the geometric alignment. More importantly, the Gaussian method (i.e. deep probabilistic localisation) and non-Gaussian method (i.e. MCL) can be integrated naturally via importance sampling. Consequently, the two systems can be integrated seamlessly and mutually benefit from each other. To verify the proposed framework, we provide a case study in large-scale localisation with a 3D lidar sensor. Our experiments on the Michigan NCLT long-term dataset show that the proposed method is able to localise the robot in 1.94 s on average (median of 0.8 s) with a precision of 0.75 m in a large-scale environment of approximately 0.5 km².

    @inproceedings{lincoln43349,
    month = {May},
    author = {Li Sun and Daniel Adolfsson and Martin Magnusson and Henrik Andreasson and Ingmar Posner and Tom Duckett},
    booktitle = {2020 IEEE International Conference on Robotics and Automation (ICRA)},
    title = {Localising Faster: Efficient and precise lidar-based robot localisation in large-scale environments},
    publisher = {IEEE},
    doi = {10.1109/ICRA40945.2020.9196708},
    pages = {4386--4392},
    year = {2020},
    url = {https://eprints.lincoln.ac.uk/id/eprint/43349/},
    abstract = {This paper proposes a novel approach for global localisation of mobile robots in large-scale environments. Our method leverages learning-based localisation and filtering-based localisation, to localise the robot efficiently and precisely through seeding Monte Carlo Localisation (MCL) with a deep learned distribution. In particular, a fast localisation system rapidly estimates the 6-DOF pose through a deep-probabilistic model (Gaussian Process Regression with a deep kernel), then a precise recursive estimator refines the estimated robot pose according to the geometric alignment. More importantly, the Gaussian method (i.e. deep probabilistic localisation) and non-Gaussian method (i.e. MCL) can be integrated naturally via importance sampling. Consequently, the two systems can be integrated seamlessly and mutually benefit from each other. To verify the proposed framework, we provide a case study in large-scale localisation with a 3D lidar sensor. Our experiments on the Michigan NCLT long-term dataset show that the proposed method is able to localise the robot in 1.94 s on average (median of 0.8 s) with a precision of 0.75 m in a large-scale environment of approximately 0.5 km$^2$.}
    }
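    The integration step the paper describes, seeding Monte Carlo Localisation with a deep-learned pose distribution via importance sampling, can be sketched as below. The Gaussian proposal and the likelihood function are stand-ins for the paper's deep-kernel GP regressor and lidar-alignment score, respectively; all numbers are hypothetical.

        import numpy as np
        from scipy.stats import multivariate_normal

        rng = np.random.default_rng(1)

        def observation_likelihood(poses):
            # Placeholder for a lidar-alignment likelihood p(z | pose);
            # the "true" pose below is a hypothetical ground truth.
            d = np.linalg.norm(poses - np.array([10.0, -4.0, 0.3]), axis=1)
            return np.exp(-0.5 * d ** 2)

        # 1) Proposal: sample particles from the learned pose distribution
        #    (stand-in for the deep-probabilistic model's output).
        mean = np.array([9.5, -3.5, 0.25])
        cov = np.diag([1.0, 1.0, 0.1]) ** 2
        particles = rng.multivariate_normal(mean, cov, size=500)

        # 2) Importance weights: observation likelihood over proposal density.
        w = observation_likelihood(particles) / multivariate_normal(mean, cov).pdf(particles)
        w /= w.sum()

        # 3) Resample; the reweighted particle set seeds the recursive MCL
        #    filter, which then refines the pose geometrically.
        particles = particles[rng.choice(len(particles), size=500, p=w)]
        print("posterior mean estimate:", particles.mean(axis=0).round(2))
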
  • T. Zhivkov and E. Sklar, “Modelling variable communication signal strength for experiments with multi-robot teams,” in 3rd uk-ras conference, 2020, p. 128–130. doi:10.31256/Ld2Re8B
    [BibTeX] [Abstract] [Download PDF]

    Reliable communication is a critical factor for ensuring robust performance of multi-robot teams. A selection of results is presented here comparing the impact of poor network quality on team performance under several conditions. Two different processes for emulating degraded network signal strength are compared in a physical environment: modelled signal degradation (MSD), approximated according to increasing distance from a connected network node (i.e. a robot), versus effective signal degradation (ESD). The results of both signal strength processes exhibit similar trends, demonstrating that ESD in a physical environment can be modelled relatively well using MSD.

    @inproceedings{lincoln45011,
    month = {May},
    author = {Tsvetan Zhivkov and Elizabeth Sklar},
    booktitle = {3rd UK-RAS Conference},
    title = {Modelling variable communication signal strength for experiments with multi-robot teams},
    publisher = {UK-RAS},
    doi = {10.31256/Ld2Re8B},
    pages = {128--130},
    year = {2020},
    url = {https://eprints.lincoln.ac.uk/id/eprint/45011/},
    abstract = {Reliable communication is a critical factor for ensuring robust performance of multi-robot teams. A selection of results is presented here comparing the impact of poor network quality on team performance under several conditions. Two different processes for emulating degraded network signal strength are compared in a physical environment: modelled signal degradation (MSD), approximated according to increasing distance from a connected network node (i.e. a robot), versus effective signal degradation (ESD). The results of both signal strength processes exhibit similar trends, demonstrating that ESD in a physical environment can be modelled relatively well using MSD.}
    }
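    One standard way to realise "modelled signal degradation" (attenuating signal strength with distance from the connected node) is a log-distance path-loss model. This is an editorial example; the paper does not state that it uses exactly this formula, and all constants below are assumptions.

        import math
        import random

        def rssi(d, rssi_d0=-40.0, d0=1.0, n=2.5, sigma=2.0):
            """Log-distance path loss: RSSI(d) = RSSI(d0) - 10*n*log10(d/d0) + X_sigma.
            n is the path-loss exponent, sigma the shadowing std-dev in dB
            (hypothetical values)."""
            return rssi_d0 - 10 * n * math.log10(max(d, d0) / d0) + random.gauss(0, sigma)

        for d in (1, 5, 20, 50):   # metres from the connected network node (robot)
            print(f"{d:>3} m -> {rssi(d):6.1f} dBm")
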
  • X. Li, C. Fox, and S. Coutts, “Deep learning for robotic strawberry harvesting,” in Ukras20, 2020, p. 80–82. doi:10.31256/Bj3Kl5B
    [BibTeX] [Abstract] [Download PDF]

    We develop a novel machine learning based robotic strawberry harvesting system for fruit counting, sizing/weighting, and yield prediction.

    @inproceedings{lincoln41273,
    month = {April},
    author = {Xiaodong Li and Charles Fox and Shaun Coutts},
    booktitle = {UKRAS20},
    title = {Deep learning for robotic strawberry harvesting},
    publisher = {UK-RAS},
    doi = {10.31256/Bj3Kl5B},
    pages = {80--82},
    year = {2020},
    url = {https://eprints.lincoln.ac.uk/id/eprint/41273/},
    abstract = {We develop a novel machine learning based robotic strawberry harvesting system for fruit counting, sizing/weighting, and yield prediction.}
    }
  • F. Camara, P. Dickenson, N. Merat, and C. Fox, “Examining pedestrian-autonomous vehicle interactions in virtual reality,” in 8th transport research arena tra 2020, 2020.
    [BibTeX] [Abstract] [Download PDF]

    Autonomous vehicles now have well developed algorithms and open source software for localisation and navigation in static environments, but their future interactions with other road users in mixed traffic environments, especially with pedestrians, raise some concerns. Pedestrian behaviour is complex to model and unpredictable, thus creating a big challenge for self-driving cars. This paper examines pedestrian behaviour during crossing scenarios with a game theoretic autonomous vehicle in virtual reality. In a first experiment, we recorded participants' trajectories and found that they were crossing more cautiously in VR than in previous laboratory experiments. In two other experiments, we used a gradient descent approach to investigate participants' preference for a certain AV driving style. We found that the majority of them were not expecting the car to stop in these scenarios. These results suggest that VR is an interesting tool for testing autonomous vehicle algorithms and for finding out about pedestrian preferences.

    @inproceedings{lincoln40029,
    booktitle = {8th Transport Research Arena TRA 2020},
    month = {April},
    title = {Examining Pedestrian-Autonomous Vehicle Interactions in Virtual Reality},
    author = {Fanta Camara and Patrick Dickenson and Natasha Merat and Charles Fox},
    year = {2020},
    url = {https://eprints.lincoln.ac.uk/id/eprint/40029/},
    abstract = {Autonomous vehicles now have well developed algorithms and open source software for localisation and navigation in static environments, but their future interactions with other road users in mixed traffic environments, especially with pedestrians, raise some concerns. Pedestrian behaviour is complex to model and unpredictable, thus creating a big challenge for self-driving cars. This paper examines pedestrian behaviour during crossing scenarios with a game theoretic autonomous vehicle in virtual reality. In a first experiment, we recorded participants' trajectories and found that they were crossing more cautiously in VR than in previous laboratory experiments. In two other experiments, we used a gradient descent approach to investigate participants' preference for a certain AV driving style. We found that the majority of them were not expecting the car to stop in these scenarios. These results suggest that VR is an interesting tool for testing autonomous vehicle algorithms and for finding out about pedestrian preferences.}
    }
  • T. Liu, X. Sun, C. Hu, Q. Fu, H. Isakhani, and S. Yue, “Investigating multiple pheromones in swarm robots – a case study of multi-robot deployment,” in 2020 5th international conference on advanced robotics and mechatronics (icarm), 2020, p. 595–601. doi:10.1109/ICARM49381.2020.9195311
    [BibTeX] [Abstract] [Download PDF]

    Social insects are known as experts in handling complex tasks in a collectively smart way, although their small brains contain only limited computational resources and sensory information. It is believed that pheromones play a vital role in shaping social insects' collective behaviours. One of the key points underlying stigmergy is the combination of different pheromones in a specific task. In the swarm intelligence field, pheromone-inspired studies usually focus on a single pheromone at a time, so it is not clear how effectively multiple pheromones could be employed for a collective strategy in the real physical world. In this study, we investigate a multiple-pheromone-based deployment strategy for swarm robots inspired by social insects. The proposed deployment strategy uses two kinds of artificial pheromones, attractive and repellent, enabling micro robots to be distributed in desired positions with high efficiency. The strategy is assessed systematically by both simulation and real robot experiments using a novel artificial pheromone platform, ColCOSΦ. Results from the simulation and real robot experiments both demonstrate the effectiveness of the proposed strategy and reveal the role of multiple pheromones. The feasibility of the ColCOSΦ platform, and its potential for further robotic research on multiple pheromones, are also verified. Our study of using different pheromones for one collective swarm robotics task may help or inspire biologists in research on real insects.

    @inproceedings{lincoln43680,
    month = {September},
    author = {Tian Liu and Xuelong Sun and Cheng Hu and Qinbing Fu and Hamid Isakhani and Shigang Yue},
    booktitle = {2020 5th International Conference on Advanced Robotics and Mechatronics (ICARM)},
    title = {Investigating Multiple Pheromones in Swarm Robots - A Case Study of Multi-Robot Deployment},
    publisher = {IEEE},
    doi = {10.1109/ICARM49381.2020.9195311},
    pages = {595--601},
    year = {2020},
    url = {https://eprints.lincoln.ac.uk/id/eprint/43680/},
    abstract = {Social insects are known as experts in handling complex tasks in a collectively smart way, although their small brains contain only limited computational resources and sensory information. It is believed that pheromones play a vital role in shaping social insects' collective behaviours. One of the key points underlying stigmergy is the combination of different pheromones in a specific task. In the swarm intelligence field, pheromone-inspired studies usually focus on a single pheromone at a time, so it is not clear how effectively multiple pheromones could be employed for a collective strategy in the real physical world. In this study, we investigate a multiple-pheromone-based deployment strategy for swarm robots inspired by social insects. The proposed deployment strategy uses two kinds of artificial pheromones, attractive and repellent, enabling micro robots to be distributed in desired positions with high efficiency. The strategy is assessed systematically by both simulation and real robot experiments using a novel artificial pheromone platform ColCOS{\ensuremath{\Phi}}. Results from the simulation and real robot experiments both demonstrate the effectiveness of the proposed strategy and reveal the role of multiple pheromones. The feasibility of the ColCOS{\ensuremath{\Phi}} platform, and its potential for further robotic research on multiple pheromones, are also verified. Our study of using different pheromones for one collective swarm robotics task may help or inspire biologists in research on real insects.}
    }
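    A toy rendering of the two-pheromone mechanism above: an attractive field marks the deployment target, robots deposit a repellent field to spread out, and each robot greedily climbs the combined gradient. The evaporation/diffusion constants and the greedy controller are editorial assumptions, not taken from the paper or the ColCOSΦ platform.

        import numpy as np

        G = 64
        attract = np.zeros((G, G))
        repel = np.zeros((G, G))
        robots = [(8, 8), (8, 12), (12, 8)]      # hypothetical start cells

        def evaporate_and_diffuse(field, rho=0.02):
            field *= (1 - rho)                    # evaporation
            field += 0.05 * (np.roll(field, 1, 0) + np.roll(field, -1, 0) +
                             np.roll(field, 1, 1) + np.roll(field, -1, 1) - 4 * field)

        for step in range(400):
            attract[48, 48] += 5.0                # the target keeps emitting attractant
            for i, (r, c) in enumerate(robots):
                repel[r, c] += 1.0                # each robot deposits repellent
                combined = attract - repel        # follow the combined gradient greedily
                nbrs = [((r + dr) % G, (c + dc) % G)
                        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))]
                robots[i] = max(nbrs, key=lambda p: combined[p])
            evaporate_and_diffuse(attract)
            evaporate_and_diffuse(repel)

        print("final robot cells:", robots)
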
  • H. Isakhani, S. Yue, C. Xiong, W. Chen, X. Sun, and T. Liu, “Fabrication and mechanical analysis of bioinspired gliding-optimized wing prototypes for micro aerial vehicles,” in 5th international conference on advanced robotics and mechatronics (icarm), 2020, p. 602–608. doi:10.1109/ICARM49381.2020.9195392
    [BibTeX] [Abstract] [Download PDF]

    Gliding is the most efficient flight mode, greatly appreciated by natural fliers. It is achieved by high-performance structures developed over millions of years of evolution. One such prehistoric insect, the locust (Schistocerca gregaria), is a perfect example of a natural glider capable of sustained transatlantic flights, which could potentially inspire numerous solutions to problems in aerospace engineering. However, biomimicry of such aerodynamic properties is hindered by the limitations of conventional and modern fabrication technologies in terms of precision and availability, respectively. Therefore, we explore and propose novel combinations of economical manufacturing methods to develop various locust-inspired tandem wing prototypes (i.e. fore and hindwings) for further wind-tunnel-based aerodynamic studies. Additionally, we determine the flexural stiffness and maximum deformation rate of our prototypes and compare them to their counterparts in nature and the literature, recommending the most suitable artificial bioinspired wing for gliding micro aerial vehicle applications.

    @inproceedings{lincoln43687,
    month = {September},
    author = {Hamid Isakhani and Shigang Yue and Caihua Xiong and Wenbin Chen and Xuelong Sun and Tian Liu},
    booktitle = {5th International Conference on Advanced Robotics and Mechatronics (ICARM)},
    title = {Fabrication and Mechanical Analysis of Bioinspired Gliding-optimized Wing Prototypes for Micro Aerial Vehicles},
    publisher = {IEEE},
    year = {2020},
    journal = {2020 5th International Conference on Advanced Robotics and Mechatronics (ICARM)},
    doi = {10.1109/ICARM49381.2020.9195392},
    pages = {602--608},
    url = {https://eprints.lincoln.ac.uk/id/eprint/43687/},
    abstract = {Gliding is the most efficient flight mode, greatly appreciated by natural fliers. It is achieved by high-performance structures developed over millions of years of evolution. One such prehistoric insect, the locust (Schistocerca gregaria), is a perfect example of a natural glider capable of sustained transatlantic flights, which could potentially inspire numerous solutions to problems in aerospace engineering. However, biomimicry of such aerodynamic properties is hindered by the limitations of conventional and modern fabrication technologies in terms of precision and availability, respectively. Therefore, we explore and propose novel combinations of economical manufacturing methods to develop various locust-inspired tandem wing prototypes (i.e. fore and hindwings) for further wind-tunnel-based aerodynamic studies. Additionally, we determine the flexural stiffness and maximum deformation rate of our prototypes and compare them to their counterparts in nature and the literature, recommending the most suitable artificial bioinspired wing for gliding micro aerial vehicle applications.}
    }
  • M. Al-Khafajiy, S. Ghareeb, R. Al-Jumeily, R. Almurshedi, A. Hussien, T. Baker, and Y. Jararweh, “A holistic study on emerging iot networking paradigms,” in 2019 12th international conference on developments in esystems engineering (dese), 2020, p. 943–949. doi:10.1109/DeSE.2019.00175
    [BibTeX] [Abstract] [Download PDF]

    With the emergence of the Internet of Things, billions of devices and humans are connected directly or indirectly to the internet. This significant growth in the number of connected devices raises the need for new developments of the current network paradigm (e.g., cloud computing). New network paradigms, such as fog computing along with its related edge computing paradigms, are seen as promising solutions for handling the large volume of security-critical and delay-sensitive data that is being produced by IoT nodes. In this paper, we give a brief overview of IoT-related computing paradigms, including their similarities, differences and challenges. Next, we provide a summary of the challenges and the processing and storage capabilities of each network paradigm.

    @inproceedings{lincoln47565,
    month = {April},
    author = {Mohammed Al-Khafajiy and Shatha Ghareeb and Rawaa Al-Jumeily and Rusul Almurshedi and Aseel Hussien and Thar Baker and Yaser Jararweh},
    booktitle = {2019 12th International Conference on Developments in eSystems Engineering (DeSE)},
    title = {A Holistic Study on Emerging IoT Networking Paradigms},
    publisher = {IEEE},
    doi = {10.1109/DeSE.2019.00175},
    pages = {943--949},
    year = {2020},
    url = {https://eprints.lincoln.ac.uk/id/eprint/47565/},
    abstract = {With the emergence of the Internet of Things, billions of devices and humans are connected directly or indirectly to the internet. This significant growth in the number of connected devices raises the need for new developments of the current network paradigm (e.g., cloud computing). New network paradigms, such as fog computing along with its related edge computing paradigms, are seen as promising solutions for handling the large volume of security-critical and delay-sensitive data that is being produced by IoT nodes. In this paper, we give a brief overview of IoT-related computing paradigms, including their similarities, differences and challenges. Next, we provide a summary of the challenges and the processing and storage capabilities of each network paradigm.}
    }
  • V. R. Ponnambalam, J. P. Fentanes, G. Das, G. Cielniak, J. G. O. Gjevestad, and P. From, “Agri-cost-maps – integration of environmental constraints into navigation systems for agricultural robots,” in 6th international conference on control, automation and robotics (iccar), 2020. doi:10.1109/ICCAR49639.2020.9108030
    [BibTeX] [Abstract] [Download PDF]

    Robust navigation is a key ability for agricultural robots. Such robots must operate safely, minimizing their impact on the soil and avoiding crop damage. This paper proposes a method for unified incorporation of the application-specific constraints into the navigation system of robots deployed in different agricultural environments. The constraints are incorporated as an additional cost-map layer into the ROS navigation stack. These so-called Agri-Cost-Maps facilitate the transition from the tailored navigation systems typical for the current generation of agricultural robots to a more flexible ROS-based navigation framework that can be easily deployed for different agricultural applications. We demonstrate the applicability of this framework in three different agricultural scenarios, evaluate its benefits in simulation and demonstrate its validity in a real-world setting.

    @inproceedings{lincoln42418,
    booktitle = {6th International Conference on Control, Automation and Robotics (ICCAR)},
    month = {April},
    title = {Agri-Cost-Maps -- Integration of Environmental Constraints into Navigation Systems for Agricultural Robots},
    author = {Vignesh Raja Ponnambalam and Jaime Pulido Fentanes and Gautham Das and Grzegorz Cielniak and Jon Glenn Omholt Gjevestad and Pal From},
    publisher = {IEEE},
    year = {2020},
    doi = {10.1109/ICCAR49639.2020.9108030},
    url = {https://eprints.lincoln.ac.uk/id/eprint/42418/},
    abstract = {Robust navigation is a key ability for agricultural robots. Such robots must operate safely, minimizing their impact on the soil and avoiding crop damage. This paper proposes a method for unified incorporation of the application-specific constraints into the navigation system of robots deployed in different agricultural environments. The constraints are incorporated as an additional cost-map layer into the ROS navigation stack. These so-called Agri-Cost-Maps facilitate the transition from the tailored navigation systems typical for the current generation of agricultural robots to a more flexible ROS-based navigation framework that can be easily deployed for different agricultural applications. We demonstrate the applicability of this framework in three different agricultural scenarios, evaluate its benefits in simulation and demonstrate its validity in a real-world setting.}
    }
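    Conceptually, Agri-Cost-Maps add application-specific layers on top of the usual obstacle layer, and the planner consumes the combined grid. ROS costmap_2d layers are C++ plugins; the Python sketch below, with made-up costs, only illustrates the per-cell combination arithmetic, not the paper's implementation.

        import numpy as np

        FREE, LETHAL = 0, 254                    # occupancy-grid style cost range
        H, W = 40, 60

        obstacle_layer = np.zeros((H, W), dtype=np.uint8)
        obstacle_layer[:, 30] = LETHAL           # e.g. a fence line

        crop_layer = np.zeros((H, W), dtype=np.uint8)
        crop_layer[10:30, 5:25] = 200            # discourage driving over crop rows

        soil_layer = np.zeros((H, W), dtype=np.uint8)
        soil_layer[0:5, :] = 120                 # soft penalty: wet soil, compaction risk

        # Layered costmaps combine by taking the maximum cost per cell, so a
        # lethal obstacle always dominates softer agricultural constraints.
        master = np.maximum.reduce([obstacle_layer, crop_layer, soil_layer])
        print("cells the planner should avoid:", int((master >= 200).sum()))
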
  • P. Somaiya, M. Hanheide, and G. Cielniak, “Unsupervised anomaly detection for safe robot operations,” in Ukras20 conference: “Robots into the real world”, 2020, p. 154–156. doi:10.31256/Wg7Ap8J
    [BibTeX] [Abstract] [Download PDF]

    Faults in robot operations are risky, particularly when robots are operating in the same environment as humans. Early detection of such faults is necessary to prevent escalation and avoid endangering human life. However, due to sensor noise and unforeseen faults in robots, creating a model for fault prediction is difficult. Existing supervised data-driven approaches rely on large amounts of labelled data for detecting anomalies, which is impractical in real applications. In this paper, we present an unsupervised machine learning approach for this purpose, which requires only data corresponding to the normal operation of the robot. We demonstrate how to fuse multi-modal information from robot motion sensors and evaluate the proposed framework in multiple scenarios collected from a real mobile robot.

    @inproceedings{lincoln46369,
    month = {April},
    author = {Pratik Somaiya and Marc Hanheide and Grzegorz Cielniak},
    booktitle = {UKRAS20 Conference: "Robots into the real world"},
    title = {Unsupervised Anomaly Detection for Safe Robot Operations},
    publisher = {UKRAS},
    doi = {10.31256/Wg7Ap8J},
    pages = {154--156},
    year = {2020},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46369/},
    abstract = {Faults in robot operations are risky, particularly when robots are operating in the same environment as humans. Early detection of such faults is necessary to prevent escalation and avoid endangering human life. However, due to sensor noise and unforeseen faults in robots, creating a model for fault prediction is difficult. Existing supervised data-driven approaches rely on large amounts of labelled data for detecting anomalies, which is impractical in real applications. In this paper, we present an unsupervised machine learning approach for this purpose, which requires only data corresponding to the normal operation of the robot. We demonstrate how to fuse multi-modal information from robot motion sensors and evaluate the proposed framework in multiple scenarios collected from a real mobile robot.}
    }
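    One plausible instantiation of the unsupervised scheme described above (fit a model on normal-operation data only, then flag deviations at run time) uses scikit-learn's IsolationForest on fused motion features. The feature columns and thresholds here are editorial assumptions, not the paper's method.

        import numpy as np
        from sklearn.ensemble import IsolationForest

        rng = np.random.default_rng(0)

        # Fused multi-modal features per time step, e.g. [commanded velocity,
        # measured velocity, wheel-current draw, IMU yaw rate] -- hypothetical columns.
        normal = rng.normal([0.5, 0.5, 1.0, 0.0], 0.05, size=(2000, 4))

        # Train only on normal operation; no labelled faults are needed.
        model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

        # At run time a stuck wheel makes measured velocity lag the command
        # and current draw spike; the model flags it as anomalous.
        live = np.array([[0.5, 0.05, 3.0, 0.0],    # fault-like sample
                         [0.5, 0.52, 1.02, 0.01]]) # normal sample
        print(model.predict(live))                  # -1 = anomaly, 1 = normal
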
  • H. Isakhani, C. Xiong, S. Yue, and W. Chen, “A bioinspired airfoil optimization technique using nash genetic algorithm,” in 2020 17th international conference on ubiquitous robots (ur), 2020, p. 506–513. doi:10.1109/UR49135.2020.9144868
    [BibTeX] [Abstract] [Download PDF]

    Natural fliers glide and minimize wing articulation to conserve energy for sustained, long-range flights. Elucidating the underlying physiology of such capability could potentially address numerous challenging problems in flight engineering. However, the primitive nature of bioinspired research impedes such achievements; hence, to bypass these limitations, this study introduces a bioinspired non-cooperative multiple-objective optimization methodology based on a novel fusion of PARSEC, Nash strategy, and genetic algorithms to achieve insect-level aerodynamic efficiencies. The proposed technique is validated on a conventional airfoil as well as the wing cross-section of a desert locust (Schistocerca gregaria) at low Reynolds number, and we have recorded a 77\% improvement in its gliding ratio.

    @inproceedings{lincoln43819,
    month = {July},
    author = {Hamid Isakhani and Caihua Xiong and Shigang Yue and Wenbin Chen},
    booktitle = {2020 17th International Conference on Ubiquitous Robots (UR)},
    title = {A Bioinspired Airfoil Optimization Technique Using Nash Genetic Algorithm},
    publisher = {IEEE},
    doi = {10.1109/UR49135.2020.9144868},
    pages = {506--513},
    year = {2020},
    url = {https://eprints.lincoln.ac.uk/id/eprint/43819/},
    abstract = {Natural fliers glide and minimize wing articulation to conserve energy for sustained, long-range flights. Elucidating the underlying physiology of such capability could potentially address numerous challenging problems in flight engineering. However, the primitive nature of bioinspired research impedes such achievements; hence, to bypass these limitations, this study introduces a bioinspired non-cooperative multiple-objective optimization methodology based on a novel fusion of PARSEC, Nash strategy, and genetic algorithms to achieve insect-level aerodynamic efficiencies. The proposed technique is validated on a conventional airfoil as well as the wing cross-section of a desert locust (Schistocerca gregaria) at low Reynolds number, and we have recorded a 77\% improvement in its gliding ratio.}
    }
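    A skeletal illustration of the Nash-strategy element described above: two "players" each evolve their own design variable against the other's current best, alternating until an equilibrium stabilises. The toy objectives below stand in for the paper's PARSEC-parameterised aerodynamic objectives; everything here is hypothetical.

        import random

        def f1(x, y):   # player 1's objective (stand-in for e.g. lift)
            return -(x - 1) ** 2 - 0.1 * (x - y) ** 2

        def f2(x, y):   # player 2's objective (stand-in for e.g. inverse drag)
            return -(y + 1) ** 2 - 0.1 * (x - y) ** 2

        def evolve(objective, fixed, pop, gens=30):
            """Simple elitist GA over one player's variable, the other held fixed."""
            for _ in range(gens):
                pop.sort(key=lambda v: objective(v, fixed), reverse=True)
                pop[:] = pop[:5] + [v + random.gauss(0, 0.1) for v in pop[:5]]
            return pop[0]

        x_best, y_best = random.uniform(-2, 2), random.uniform(-2, 2)
        for round_ in range(10):   # alternate until the Nash point stabilises
            x_best = evolve(lambda x, y: f1(x, y), y_best,
                            [x_best + random.gauss(0, 0.5) for _ in range(10)])
            y_best = evolve(lambda y, x: f2(x, y), x_best,
                            [y_best + random.gauss(0, 0.5) for _ in range(10)])
        print(f"approximate Nash equilibrium: x={x_best:.2f}, y={y_best:.2f}")
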
  • N. Andreakos, S. Yue, and V. Cutsuridis, “Recall performance improvement in a bio-inspired model of the mammalian hippocampus,” in Brain informatics, 2020, p. 319–328. doi:10.1007/978-3-030-59277-6_29
    [BibTeX] [Abstract] [Download PDF]

    The mammalian hippocampus is involved in the short-term formation of declarative memories. We employed a bio-inspired neural model of the hippocampal CA1 region consisting of a zoo of excitatory and inhibitory cells. Cells' firing was timed to a theta oscillation paced by two distinct neuronal populations exhibiting highly regular bursting activity, one tightly coupled to the trough and the other to the peak of theta. To systematically evaluate the model's recall performance against the number of stored patterns, overlaps and “active cells per pattern”, its cells were driven by a non-specific excitatory input to their dendrites. This excitatory input to model excitatory cells provided context and timing information for retrieval of previously stored memory patterns. Inhibition to excitatory cells' dendrites acted as a non-specific global threshold machine that removed spurious activity during recall. Out of the three models tested, “model 1” recall quality was excellent across all conditions. “Model 2” recall was the worst. The number of “active cells per pattern” had a massive effect on network recall quality regardless of how many patterns were stored in it. As “active cells per pattern” decreased, the network's memory capacity increased, interference effects between stored patterns decreased, and recall quality improved. A key finding was that an increased firing rate of an inhibitory cell inhibiting a network of excitatory cells is more successful at removing spurious activity at the network level and improving recall quality than increasing the synaptic strength of the same inhibitory cell inhibiting the same network of excitatory cells, while keeping its firing rate fixed.

    @inproceedings{lincoln43364,
    booktitle = {Brain Informatics},
    month = {December},
    title = {Recall Performance Improvement in a Bio-Inspired Model of the Mammalian Hippocampus},
    author = {Nikolas Andreakos and Shigang Yue and Vassilis Cutsuridis},
    year = {2020},
    pages = {319--328},
    doi = {10.1007/978-3-030-59277-6\_29},
    url = {https://eprints.lincoln.ac.uk/id/eprint/43364/},
    abstract = {The mammalian hippocampus is involved in the short-term formation of declarative memories. We employed a bio-inspired neural model of the hippocampal CA1 region consisting of a zoo of excitatory and inhibitory cells. Cells' firing was timed to a theta oscillation paced by two distinct neuronal populations exhibiting highly regular bursting activity, one tightly coupled to the trough and the other to the peak of theta. To systematically evaluate the model's recall performance against the number of stored patterns, overlaps and "active cells per pattern", its cells were driven by a non-specific excitatory input to their dendrites. This excitatory input to model excitatory cells provided context and timing information for retrieval of previously stored memory patterns. Inhibition to excitatory cells' dendrites acted as a non-specific global threshold machine that removed spurious activity during recall. Out of the three models tested, "model 1" recall quality was excellent across all conditions. "Model 2" recall was the worst. The number of "active cells per pattern" had a massive effect on network recall quality regardless of how many patterns were stored in it. As "active cells per pattern" decreased, the network's memory capacity increased, interference effects between stored patterns decreased, and recall quality improved. A key finding was that an increased firing rate of an inhibitory cell inhibiting a network of excitatory cells is more successful at removing spurious activity at the network level and improving recall quality than increasing the synaptic strength of the same inhibitory cell inhibiting the same network of excitatory cells, while keeping its firing rate fixed.}
    }
  • G. Bosworth, C. Fox, L. Price, and M. Collison, “The future of rural mobility study (forms),” Midlands Connect, Project Report , 2020.
    [BibTeX] [Abstract] [Download PDF]

    Recognising the urban focus of many national and regional transport strategies, the purpose of this project is to explore how emerging technologies could support rural economies across the Midlands. Fundamentally, the rationale for the study is to begin with an assessment of rural needs and then to explore a range of mobility innovations, including social innovations as well as technologies, that can provide place-based solutions designed for more rural areas. This avoids the National Transport Strategy assumption that new mobility innovations will inevitably occur in urban areas and then be rolled out across more rural places. While economic realities mean that many private sector transport innovations can start out in urban centres, their rural impacts may be quite different and require alternative responses from rural planners and policy-makers.

    @techreport{lincoln42273,
    month = {August},
    type = {Project Report},
    title = {The Future of Rural Mobility Study (FoRMS)},
    author = {Gary Bosworth and Charles Fox and Liz Price and Martin Collison},
    publisher = {Midlands Connect},
    year = {2020},
    institution = {Midlands Connect},
    url = {https://eprints.lincoln.ac.uk/id/eprint/42273/},
    abstract = {Recognising the urban focus of many national and regional transport strategies, the purpose of this project is to explore how emerging technologies could support rural economies across the Midlands. Fundamentally, the rationale for the study is to begin with an assessment of rural needs and then to explore a range of mobility innovations, including social innovations as well as technologies, that can provide place-based solutions designed for more rural areas. This avoids the National Transport Strategy assumption that new mobility innovations will inevitably occur in urban areas and then be rolled out across more rural places. While economic realities mean that many private sector transport innovations can start out in urban centres, their rural impacts may be quite different and require alternative responses from rural planners and policy-makers.}
    }

2019

  • C. Achillas, D. Bochtis, D. Aidonis, V. Marinoudi, and D. Folinas, “Voice-driven fleet management system for agricultural operations,” Information processing in agriculture, vol. 6, iss. 4, p. 471–478, 2019. doi:10.1016/j.inpa.2019.03.001
    [BibTeX] [Abstract] [Download PDF]

    Food consumption is constantly increasing at a global scale. In this light, agricultural production also needs to increase in order to satisfy the demand for agricultural products. However, due to environmental and biological factors (e.g. soil compaction), the weight and size of the machinery cannot be physically optimized further. Thus, only marginal improvements are possible to increase equipment effectiveness. In contrast, recent technological advances in ICT provide the ground for significant improvements in agri-production efficiency. In this work, the V-Agrifleet tool is presented and demonstrated. V-Agrifleet is developed to provide a “hands-free” interface for information exchange and an “Olympic view” to all coordinated users, giving them the ability for decentralized decision-making. The proposed tool can be used by end-users (e.g. farmers, contractors, farm associations, agri-product storage and processing facilities, etc.) in order to optimize task and time management. The visualized documentation of fleet performance provides valuable information for evaluation at the management level, giving the opportunity for improvements in the planning of subsequent operations. Its vendor-independent architecture, voice-driven interaction, context-awareness functionalities and operation planning support make the V-Agrifleet application a highly innovative agricultural machinery operational aiding system.

    @article{lincoln39226,
    volume = {6},
    number = {4},
    month = {December},
    author = {Ch. Achillas and Dionysis Bochtis and D. Aidonis and V. Marinoudi and D. Folinas},
    title = {Voice-driven fleet management system for agricultural operations},
    publisher = {Elsevier},
    year = {2019},
    journal = {Information Processing in Agriculture},
    doi = {10.1016/j.inpa.2019.03.001},
    pages = {471--478},
    url = {https://eprints.lincoln.ac.uk/id/eprint/39226/},
    abstract = {Food consumption is constantly increasing at a global scale. In this light, agricultural production also needs to increase in order to satisfy the demand for agricultural products. However, due to environmental and biological factors (e.g. soil compaction), the weight and size of the machinery cannot be physically optimized further. Thus, only marginal improvements are possible to increase equipment effectiveness. In contrast, recent technological advances in ICT provide the ground for significant improvements in agri-production efficiency. In this work, the V-Agrifleet tool is presented and demonstrated. V-Agrifleet is developed to provide a "hands-free" interface for information exchange and an "Olympic view" to all coordinated users, giving them the ability for decentralized decision-making. The proposed tool can be used by end-users (e.g. farmers, contractors, farm associations, agri-product storage and processing facilities, etc.) in order to optimize task and time management. The visualized documentation of fleet performance provides valuable information for evaluation at the management level, giving the opportunity for improvements in the planning of subsequent operations. Its vendor-independent architecture, voice-driven interaction, context-awareness functionalities and operation planning support make the V-Agrifleet application a highly innovative agricultural machinery operational aiding system.}
    }
  • S. Pearson, D. May, G. Leontidis, M. Swainson, S. Brewer, L. Bidaut, J. Frey, G. Parr, R. Maull, and A. Zisman, “Are distributed ledger technologies the panacea for food traceability?,” Global food security, vol. 20, p. 145–149, 2019. doi:10.1016/j.gfs.2019.02.002
    [BibTeX] [Abstract] [Download PDF]

Distributed Ledger Technology (DLT), such as blockchain, has the potential to transform supply chains. It can provide a cryptographically secure and immutable record of transactions and associated metadata (origin, contracts, process steps, environmental variations, microbial records, etc.) linked across whole supply chains. The ability to trace food items within and along a supply chain is legally required by all actors within the chain. It is critical to food safety, underpins trust and global food trade. However, current food traceability systems are not linked between all actors within the supply chain. Key metadata on the age and process history of a food is rarely transferred when a product is bought and sold through multiple steps within the chain. Herein, we examine the potential of massively scalable DLT to securely link the entire food supply chain, from producer to end user. Under such a paradigm, should a food safety or quality issue ever arise, authorized end users could instantly and accurately trace the origin and history of any particular food item. This novel and unparalleled technology could help underpin trust for the safety of all food, a critical component of global food security. In this paper, we investigate the (i) data requirements to develop DLT technology across whole supply chains, (ii) key challenges and barriers to optimizing the complete system, and (iii) potential impacts on production efficiency, legal compliance, access to global food markets and the safety of food. Our conclusion is that while DLT has the potential to transform food systems, this can only be fully realized through the global development and agreement on suitable data standards and governance. In addition, key technical issues need to be resolved including challenges with DLT scalability, privacy and data architectures.

    @article{lincoln35035,
    volume = {20},
    month = {March},
    author = {Simon Pearson and David May and Georgios Leontidis and Mark Swainson and Steve Brewer and Luc Bidaut and Jeremy Frey and Gerard Parr and Roger Maull and Andrea Zisman},
    title = {Are Distributed Ledger Technologies the Panacea for Food Traceability?},
    publisher = {Elsevier},
    year = {2019},
    journal = {Global Food Security},
    doi = {10.1016/j.gfs.2019.02.002},
    pages = {145--149},
    url = {https://eprints.lincoln.ac.uk/id/eprint/35035/},
    abstract = {Distributed Ledger Technology (DLT), such as blockchain, has the potential to transform supply chains. It can provide a cryptographically secure and immutable record of transactions and associated metadata (origin, contracts, process steps, environmental variations, microbial records, etc.) linked across whole supply chains. The ability to trace food items within and along a supply chain is legally required by all actors within the chain. It is critical to food safety, underpins trust and global food trade. However, current food traceability systems are not linked between all actors within the supply chain. Key metadata on the age and process history of a food is rarely transferred when a product is bought and sold through multiple steps within the chain. Herein, we examine the potential of massively scalable DLT to securely link the entire food supply chain, from producer to end user. Under such a paradigm, should a food safety or quality issue ever arise, authorized end users could instantly and accurately trace the origin and history of any particular food item. This novel and unparalleled technology could help underpin trust for the safety of all food, a critical component of global food security. In this paper, we investigate the (i) data requirements to develop DLT technology across whole supply chains, (ii) key challenges and barriers to optimizing the complete system, and (iii) potential impacts on production efficiency, legal compliance, access to global food markets and the safety of food. Our conclusion is that while DLT has the potential to transform food systems, this can only be fully realized through the global development and agreement on suitable data standards and governance. In addition, key technical issues need to be resolved including challenges with DLT scalability, privacy and data architectures.}
    }
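
    The linchpin of the argument above is the hash-linked record: every transaction stores a digest of its predecessor, so tampering anywhere in the chain breaks every later link. As a minimal Python sketch of that property (the field names and SHA-256 chaining are illustrative assumptions, not the authors' design):

    # Minimal sketch of hash-chained traceability records: each record stores
    # the hash of its predecessor, so altering an earlier step invalidates
    # every later link. Field names are invented for illustration.
    import hashlib
    import json

    def make_record(payload: dict, prev_hash: str) -> dict:
        """Create a traceability record linked to the previous one."""
        body = {"payload": payload, "prev_hash": prev_hash}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        return {**body, "hash": digest}

    def verify_chain(chain: list) -> bool:
        """Re-derive every hash and check each link points at its predecessor."""
        prev = "0" * 64  # genesis marker
        for rec in chain:
            body = {"payload": rec["payload"], "prev_hash": rec["prev_hash"]}
            expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if rec["prev_hash"] != prev or rec["hash"] != expected:
                return False
            prev = rec["hash"]
        return True

    chain, prev = [], "0" * 64
    for step in [{"actor": "farm", "item": "wheat"}, {"actor": "mill", "item": "flour"}]:
        rec = make_record(step, prev)
        chain.append(rec)
        prev = rec["hash"]
    assert verify_chain(chain)  # corrupting any payload makes this fail
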
  • K. Goher and S. Fadlallah, “Control of a two-wheeled machine with two-directions handling mechanism using pid and pd-flc algorithms,” International journal of automation and computing, vol. 16, iss. 4, p. 511–533, 2019. doi:10.1007/s11633-019-1172-0
    [BibTeX] [Abstract] [Download PDF]

This paper presents a novel five degrees of freedom (DOF) two-wheeled robotic machine (TWRM) that delivers solutions for both industrial and service robotic applications by enlarging the vehicle's workspace and increasing its flexibility. Designing a two-wheeled robot with five degrees of freedom poses a considerable control challenge; therefore, the modelling and design of such a robot should be precise, with a uniform distribution of mass over the robot and the actuators. By employing the Lagrangian modelling approach, the TWRM's mathematical model is derived and simulated in Matlab/Simulink. For stabilizing the system's highly nonlinear model, two control approaches were developed and implemented: proportional-integral-derivative (PID) and fuzzy logic control (FLC) strategies. Considering multiple scenarios with different initial conditions, the proposed control strategies' performance has been assessed.

    @article{lincoln35606,
    volume = {16},
    number = {4},
    month = {August},
    author = {Khaled Goher and Sulaiman Fadlallah},
    title = {Control of a Two-wheeled Machine with Two-directions Handling Mechanism Using PID and PD-FLC Algorithms},
    publisher = {Springer},
    year = {2019},
    journal = {International Journal of Automation and Computing},
    doi = {10.1007/s11633-019-1172-0},
    pages = {511--533},
    url = {https://eprints.lincoln.ac.uk/id/eprint/35606/},
    abstract = {This paper presents a novel five degrees of freedom (DOF) two-wheeled robotic machine (TWRM) that delivers solutions for both industrial and service robotic applications by enlarging the vehicle's workspace and increasing its flexibility. Designing a two-wheeled robot with five degrees of freedom poses a considerable control challenge; therefore, the modelling and design of such a robot should be precise, with a uniform distribution of mass over the robot and the actuators. By employing the Lagrangian modelling approach, the TWRM's mathematical model is derived and simulated in Matlab/Simulink. For stabilizing the system's highly nonlinear model, two control approaches were developed and implemented: proportional-integral-derivative (PID) and fuzzy logic control (FLC) strategies. Considering multiple scenarios with different initial conditions, the proposed control strategies' performance has been assessed.}
    }
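
    For readers less familiar with the first of the two controllers compared above: a discrete PID law computes its output from the proportional, integral and derivative terms of the tracking error. A minimal sketch on a toy first-order plant (the gains and plant are illustrative, not the paper's model):

    # Minimal discrete PID controller of the kind compared in the paper,
    # exercised on a toy first-order plant x' = -x + u. Gains are illustrative.
    class PID:
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def step(self, setpoint, measurement):
            error = setpoint - measurement
            self.integral += error * self.dt
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
    x = 0.0
    for _ in range(1000):               # simulate 10 s
        u = pid.step(1.0, x)            # drive the plant towards setpoint 1.0
        x += (-x + u) * 0.01            # Euler step of the toy plant
    print(round(x, 3))                  # settles near 1.0
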
  • M. Al-Khafajiy, H. Kolivand, T. Baker, D. Tully, and A. Waraich, “Smart hospital emergency system via mobile-based requesting services,” Multimedia tools and applications, vol. 78, iss. 14, p. 20087–20111, 2019. doi:10.1007/s11042-019-7274-4
    [BibTeX] [Abstract] [Download PDF]

In recent years, the UK's emergency call and response system has shown elements of great strain. The strain on emergency call systems is estimated at 9 million calls (including both landline and mobile) made in 2014 alone. Coupled with an increasing population and cuts in government funding, this has resulted in lower percentages of emergency response vehicles at hand and longer response times. In this paper, we highlight the main challenges of emergency services and give an overview of previous solutions. In addition, we propose a new system called the Smart Hospital Emergency System (SHES). The main aim of SHES is to save lives through improving communications between patients and emergency services. Utilising the latest technologies and algorithms, SHES aims to increase emergency communication throughput, while reducing emergency call system issues and making the process of emergency response more efficient. Utilising health data held within a personal smartphone, and internally tracked data (GPS, accelerometer, gyroscope, etc.), SHES aims to process these data efficiently and securely, through automatic communications with emergency services, ultimately reducing communication bottlenecks. Live video-streaming through real-time video communication protocols is also a focus of SHES, to improve initial communications between emergency services and patients. A prototype of this system has been developed, and it has been evaluated in a preliminary usability, reliability, and communication performance study.

    @article{lincoln47558,
    volume = {78},
    number = {14},
    month = {July},
    author = {Mohammed Al-Khafajiy and Hoshang Kolivand and Thar Baker and David Tully and Atif Waraich},
    title = {Smart hospital emergency system via mobile-based requesting services},
    publisher = {Springer},
    year = {2019},
    journal = {Multimedia Tools and Applications},
    doi = {10.1007/s11042-019-7274-4},
    pages = {20087--20111},
    url = {https://eprints.lincoln.ac.uk/id/eprint/47558/},
    abstract = {In recent years, the UK's emergency call and response system has shown elements of great strain. The strain on emergency call systems is estimated at 9 million calls (including both landline and mobile) made in 2014 alone. Coupled with an increasing population and cuts in government funding, this has resulted in lower percentages of emergency response vehicles at hand and longer response times. In this paper, we highlight the main challenges of emergency services and give an overview of previous solutions. In addition, we propose a new system called the Smart Hospital Emergency System (SHES). The main aim of SHES is to save lives through improving communications between patients and emergency services. Utilising the latest technologies and algorithms, SHES aims to increase emergency communication throughput, while reducing emergency call system issues and making the process of emergency response more efficient. Utilising health data held within a personal smartphone, and internally tracked data (GPS, accelerometer, gyroscope, etc.), SHES aims to process these data efficiently and securely, through automatic communications with emergency services, ultimately reducing communication bottlenecks. Live video-streaming through real-time video communication protocols is also a focus of SHES, to improve initial communications between emergency services and patients. A prototype of this system has been developed, and it has been evaluated in a preliminary usability, reliability, and communication performance study.}
    }
  • N. Tsolakis, D. Bechtsis, and D. Bochtis, “Agros: a robot operating system based emulation tool for agricultural robotics,” Agronomy, vol. 9, iss. 7, p. 403, 2019. doi:10.3390/agronomy9070403
    [BibTeX] [Abstract] [Download PDF]

This research aims to develop a farm management emulation tool that enables agrifood producers to effectively introduce advanced digital technologies, like intelligent and autonomous unmanned ground vehicles (UGVs), in real-world field operations. To that end, we first provide a critical taxonomy of studies investigating agricultural robotic systems with regard to: (i) the analysis approach, i.e., simulation, emulation, real-world implementation; (ii) farming operations; and (iii) the farming type. Our analysis demonstrates that simulation and emulation modelling have been extensively applied to study advanced agricultural machinery, while the majority of the extant research efforts focuses on harvesting/picking/mowing and fertilizing/spraying activities; most studies consider a generic agricultural layout. Thereafter, we developed AgROS, an emulation tool based on the Robot Operating System, which could be used for assessing the efficiency of real-world robot systems in customized fields. AgROS allows farmers to select their actual field from a map layout, import the landscape of the field, add characteristics of the actual agricultural layout (e.g., trees, static objects), select an agricultural robot from a predefined list of commercial systems, import the selected UGV into the emulation environment, and test the robot's performance in a quasi-real-world environment. AgROS supports farmers in the ex-ante analysis and performance evaluation of robotized precision farming operations while laying the foundations for realizing 'digital twins' in agriculture.

    @article{lincoln39229,
    volume = {9},
    number = {7},
    month = {July},
    author = {Naoum Tsolakis and Dimitrios Bechtsis and Dionysis Bochtis},
    title = {AgROS: A Robot Operating System Based Emulation Tool for Agricultural Robotics},
    year = {2019},
    journal = {Agronomy},
    doi = {10.3390/agronomy9070403},
    pages = {403},
    url = {https://eprints.lincoln.ac.uk/id/eprint/39229/},
    abstract = {This research aims to develop a farm management emulation tool that enables agrifood producers to effectively introduce advanced digital technologies, like intelligent and autonomous unmanned ground vehicles (UGVs), in real-world field operations. To that end, we first provide a critical taxonomy of studies investigating agricultural robotic systems with regard to: (i) the analysis approach, i.e., simulation, emulation, real-world implementation; (ii) farming operations; and (iii) the farming type. Our analysis demonstrates that simulation and emulation modelling have been extensively applied to study advanced agricultural machinery, while the majority of the extant research efforts focuses on harvesting/picking/mowing and fertilizing/spraying activities; most studies consider a generic agricultural layout. Thereafter, we developed AgROS, an emulation tool based on the Robot Operating System, which could be used for assessing the efficiency of real-world robot systems in customized fields. AgROS allows farmers to select their actual field from a map layout, import the landscape of the field, add characteristics of the actual agricultural layout (e.g., trees, static objects), select an agricultural robot from a predefined list of commercial systems, import the selected UGV into the emulation environment, and test the robot's performance in a quasi-real-world environment. AgROS supports farmers in the ex-ante analysis and performance evaluation of robotized precision farming operations while laying the foundations for realizing 'digital twins' in agriculture.}
    }
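
    Since AgROS sits on top of the Robot Operating System, emulated UGVs are driven through ordinary ROS publishers and subscribers. Below is a minimal rospy node of the kind such an emulation could exercise; the topic name and speed are illustrative assumptions, not AgROS's own interface:

    # Minimal ROS (rospy) node that publishes a constant forward velocity on
    # /cmd_vel, the conventional velocity-command topic for mobile robots.
    # Topic and speed are illustrative; this is not AgROS's own API.
    import rospy
    from geometry_msgs.msg import Twist

    def drive_forward():
        rospy.init_node('ugv_demo_driver')
        pub = rospy.Publisher('/cmd_vel', Twist, queue_size=10)
        rate = rospy.Rate(10)   # 10 Hz command loop
        cmd = Twist()
        cmd.linear.x = 0.5      # m/s forward
        while not rospy.is_shutdown():
            pub.publish(cmd)
            rate.sleep()

    if __name__ == '__main__':
        try:
            drive_forward()
        except rospy.ROSInterruptException:
            pass
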
  • K. Boukadi, N. Faci, Z. Maamar, E. Ugljanin, M. Sellami, T. Baker, and M. Al-Khafajiy, “Norm-based and commitment-driven agentification of the internet of things,” Internet of things, vol. 6, p. 100042, 2019. doi:10.1016/j.iot.2019.02.002
    [BibTeX] [Abstract] [Download PDF]

There are no doubts that the Internet-of-Things (IoT) has conquered the ICT industry, to the extent that many governments and organizations are already rolling out many anywhere, anytime online services that IoT sustains. However, like any emerging and disruptive technology, multiple obstacles are slowing down practical IoT adoption, including the passive nature and privacy invasion of things. This paper examines how to empower things with the necessary capabilities that would make them proactive and responsive. This means things can, for instance, reach out to collaborative peers, (un)form dynamic communities when necessary, avoid malicious peers, and be 'questioned' for their actions. To achieve such empowerment, this paper presents an approach for agentifying things using norms along with commitments that operationalize these norms. Both norms and commitments are specialized into social (i.e., application independent) and business (i.e., application dependent), respectively. Being proactive, things could violate commitments at run-time, which needs to be detected through monitoring. In this paper, thing agentification is illustrated with a case study about missing children and demonstrated with a testbed that uses different IoT-related technologies such as the Eclipse Mosquitto broker and the Message Queuing Telemetry Transport protocol. Some experiments conducted upon this testbed are also discussed.

    @article{lincoln47561,
    volume = {6},
    month = {June},
    author = {Khouloud Boukadi and Noura Faci and Zakaria Maamar and Emir Ugljanin and Mohamed Sellami and Thar Baker and Mohammed Al-Khafajiy},
    title = {Norm-based and commitment-driven agentification of the Internet of Things},
    publisher = {Elsevier},
    year = {2019},
    journal = {Internet of Things},
    doi = {10.1016/j.iot.2019.02.002},
    pages = {100042},
    url = {https://eprints.lincoln.ac.uk/id/eprint/47561/},
    abstract = {There are no doubts that the Internet-of-Things (IoT) has conquered the ICT industry, to the extent that many governments and organizations are already rolling out many anywhere, anytime online services that IoT sustains. However, like any emerging and disruptive technology, multiple obstacles are slowing down practical IoT adoption, including the passive nature and privacy invasion of things. This paper examines how to empower things with the necessary capabilities that would make them proactive and responsive. This means things can, for instance, reach out to collaborative peers, (un)form dynamic communities when necessary, avoid malicious peers, and be 'questioned' for their actions. To achieve such empowerment, this paper presents an approach for agentifying things using norms along with commitments that operationalize these norms. Both norms and commitments are specialized into social (i.e., application independent) and business (i.e., application dependent), respectively. Being proactive, things could violate commitments at run-time, which needs to be detected through monitoring. In this paper, thing agentification is illustrated with a case study about missing children and demonstrated with a testbed that uses different IoT-related technologies such as the Eclipse Mosquitto broker and the Message Queuing Telemetry Transport protocol. Some experiments conducted upon this testbed are also discussed.}
    }
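
    The testbed's transport, MQTT via an Eclipse Mosquitto broker, is straightforward to reproduce. A hedged sketch of a 'thing' made responsive over MQTT with the paho-mqtt client follows; the blocked-peer check is a toy stand-in for the paper's norm/commitment machinery, and all topic names are invented:

    # A "thing" that listens on an MQTT topic and replies only to peers it has
    # not flagged as malicious: a toy stand-in for the paper's norms and
    # commitments. Assumes a local Mosquitto broker and paho-mqtt 1.x.
    import json
    import paho.mqtt.client as mqtt

    def on_message(client, userdata, msg):
        request = json.loads(msg.payload)
        if request.get("peer") not in userdata["blocked"]:
            client.publish("things/camera1/reply",
                           json.dumps({"status": "ok", "to": request.get("peer")}))

    client = mqtt.Client(userdata={"blocked": {"rogue-sensor"}})
    client.on_message = on_message
    client.connect("localhost", 1883)      # local Mosquitto broker assumed
    client.subscribe("things/camera1/ask")
    client.loop_forever()
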
  • H. Cao, P. G. Esteban, M. Bartlett, P. Baxter, T. Belpaeme, E. Billing, H. Cai, M. Coeckelbergh, C. Costescu, D. David, A. D. Beir, D. Hernandez, J. Kennedy, H. Liu, S. Matu, A. Mazel, A. Pandey, K. Richardson, E. Senft, S. Thill, G. V. de Perre, B. Vanderborght, D. Vernon, K. Wakanuma, H. Yu, X. Zhou, and T. Ziemke, “Robot-enhanced therapy: development and validation of supervised autonomous robotic system for autism spectrum disorders therapy,” Ieee robotics & automation magazine, vol. 26, iss. 2, p. 49–58, 2019. doi:10.1109/MRA.2019.2904121
    [BibTeX] [Abstract] [Download PDF]

    Robot-assisted therapy (RAT) offers potential advantages for improving the social skills of children with autism spectrum disorders (ASDs). This article provides an overview of the developed technology and clinical results of the EC-FP7-funded Development of Robot-Enhanced therapy for children with AutisM spectrum disorders (DREAM) project, which aims to develop the next level of RAT in both clinical and technological perspectives, commonly referred to as robot-enhanced therapy (RET). Within this project, a supervised autonomous robotic system is collaboratively developed by an interdisciplinary consortium including psychotherapists, cognitive scientists, roboticists, computer scientists, and ethicists, which allows robot control to exceed classical remote control methods, e.g., Wizard of Oz (WoZ), while ensuring safe and ethical robot behavior. Rigorous clinical studies are conducted to validate the efficacy of RET. Current results indicate that RET can obtain an equivalent performance compared to that of human standard therapy for children with ASDs. We also discuss the next steps of developing RET robotic systems.

    @article{lincoln36203,
    volume = {26},
    number = {2},
    month = {June},
    author = {Hoang-Long Cao and Pablo G. Esteban and Madeleine Bartlett and Paul Baxter and Tony Belpaeme and Erik Billing and Haibin Cai and Mark Coeckelbergh and Cristina Costescu and Daniel David and Albert De Beir and Daniel Hernandez and James Kennedy and Honghai Liu and Silviu Matu and Alexandre Mazel and Amit Pandey and Kathleen Richardson and Emmanuel Senft and Serge Thill and Greet Van de Perre and Bram Vanderborght and David Vernon and Kutoma Wakanuma and Hui Yu and Xiaolong Zhou and Tom Ziemke},
    title = {Robot-Enhanced Therapy: Development and Validation of Supervised Autonomous Robotic System for Autism Spectrum Disorders Therapy},
    publisher = {IEEE},
    year = {2019},
    journal = {IEEE Robotics \& Automation Magazine},
    doi = {10.1109/MRA.2019.2904121},
    pages = {49--58},
    url = {https://eprints.lincoln.ac.uk/id/eprint/36203/},
    abstract = {Robot-assisted therapy (RAT) offers potential advantages for improving the social skills of children with autism spectrum disorders (ASDs). This article provides an overview of the developed technology and clinical results of the EC-FP7-funded Development of Robot-Enhanced therapy for children with AutisM spectrum disorders (DREAM) project, which aims to develop the next level of RAT in both clinical and technological perspectives, commonly referred to as robot-enhanced therapy (RET). Within this project, a supervised autonomous robotic system is collaboratively developed by an interdisciplinary consortium including psychotherapists, cognitive scientists, roboticists, computer scientists, and ethicists, which allows robot control to exceed classical remote control methods, e.g., Wizard of Oz (WoZ), while ensuring safe and ethical robot behavior. Rigorous clinical studies are conducted to validate the efficacy of RET. Current results indicate that RET can obtain an equivalent performance compared to that of human standard therapy for children with ASDs. We also discuss the next steps of developing RET robotic systems.}
    }
  • G. Onoufriou, R. Bickerton, S. Pearson, and G. Leontidis, “Nemesyst: a hybrid parallelism deep learning-based framework applied for internet of things enabled food retailing refrigeration systems,” Computers in industry, vol. 113, p. 103133, 2019. doi:10.1016/j.compind.2019.103133
    [BibTeX] [Abstract] [Download PDF]

Deep Learning has attracted considerable attention across multiple application domains, including computer vision, signal processing and natural language processing. Although quite a few single-node deep learning frameworks exist, such as TensorFlow, PyTorch and Keras, we still lack a complete processing structure that can accommodate large-scale data processing, version control, and deployment, all while staying agnostic of any specific single-node framework. To bridge this gap, this paper proposes a new, higher-level framework, i.e. Nemesyst, which uses databases along with model sequentialisation to allow processes to be fed unique and transformed data at the point of need. This facilitates near real-time application and makes models available for further training or use at any node that has access to the database simultaneously. Nemesyst is well suited as an application framework for internet of things aggregated control systems, deploying deep learning techniques to optimise individual machines in massive networks. To demonstrate this framework, we adopted a case study in a novel domain: deploying deep learning to optimise the high-speed control of electrical power consumed by a massive internet of things network of retail refrigeration systems in proportion to load available on the UK National Grid (a demand side response). The case study demonstrated for the first time in such a setting how deep learning models, such as Recurrent Neural Networks (vanilla and Long-Short-Term Memory) and Generative Adversarial Networks paired with Nemesyst, achieve compelling performance, whilst still being malleable to future adjustments as both the data and requirements inevitably change over time.

    @article{lincoln37181,
    volume = {113},
    month = {December},
    author = {George Onoufriou and Ronald Bickerton and Simon Pearson and Georgios Leontidis},
    note = {Partners included: Tesco and IMS-Evolve},
    title = {Nemesyst: A Hybrid Parallelism Deep Learning-Based Framework Applied for Internet of Things Enabled Food Retailing Refrigeration Systems},
    publisher = {Elsevier},
    year = {2019},
    journal = {Computers in Industry},
    doi = {10.1016/j.compind.2019.103133},
    pages = {103133},
    url = {https://eprints.lincoln.ac.uk/id/eprint/37181/},
    abstract = {Deep Learning has attracted considerable attention across multiple application domains, including computer vision, signal processing and natural language processing. Although quite a few single-node deep learning frameworks exist, such as TensorFlow, PyTorch and Keras, we still lack a complete processing structure that can accommodate large-scale data processing, version control, and deployment, all while staying agnostic of any specific single-node framework. To bridge this gap, this paper proposes a new, higher-level framework, i.e. Nemesyst, which uses databases along with model sequentialisation to allow processes to be fed unique and transformed data at the point of need. This facilitates near real-time application and makes models available for further training or use at any node that has access to the database simultaneously. Nemesyst is well suited as an application framework for internet of things aggregated control systems, deploying deep learning techniques to optimise individual machines in massive networks. To demonstrate this framework, we adopted a case study in a novel domain: deploying deep learning to optimise the high-speed control of electrical power consumed by a massive internet of things network of retail refrigeration systems in proportion to load available on the UK National Grid (a demand side response). The case study demonstrated for the first time in such a setting how deep learning models, such as Recurrent Neural Networks (vanilla and Long-Short-Term Memory) and Generative Adversarial Networks paired with Nemesyst, achieve compelling performance, whilst still being malleable to future adjustments as both the data and requirements inevitably change over time.}
    }
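
    The pattern Nemesyst is built around, serialised models held in a shared database so that any node can fetch, use or retrain them at the point of need, can be illustrated generically. The sketch below uses pymongo and pickle and is a generic illustration of that pattern, not Nemesyst's own API:

    # Generic "models in a shared database" pattern: publish a serialised model
    # under a name, and let any node with database access fetch it on demand.
    # Assumes a local MongoDB; this is not Nemesyst's API.
    import pickle
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    models = client["demo_db"]["models"]

    def publish_model(name, model):
        """Serialise a trained model and store it under a queryable name."""
        models.replace_one({"name": name},
                           {"name": name, "blob": pickle.dumps(model)},
                           upsert=True)

    def fetch_model(name):
        """Fetch and deserialise a model at the point of need."""
        doc = models.find_one({"name": name})
        return pickle.loads(doc["blob"]) if doc else None
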
  • S. M. Mustaza, Y. Elsayed, C. Lekakou, C. Saaj, and J. Fras, “Dynamic modeling of fiber-reinforced soft manipulator: a visco-hyperelastic material-based continuum mechanics approach,” Soft robotics, vol. 6, iss. 3, p. 305–317, 2019. doi:10.1089/soro.2018.0032
    [BibTeX] [Abstract] [Download PDF]

Robot-assisted surgery is gaining popularity worldwide and there is increasing scientific interest to explore the potential of soft continuum robots for minimally invasive surgery. However, the remote control of soft robots is much more challenging compared with their rigid counterparts. Accurate modeling of manipulator dynamics is vital to remotely control the diverse movement configurations and is particularly important for safe interaction with the operating environment. However, current dynamic models applied to soft manipulator systems are simplistic and empirical, which restricts the full potential of the new soft robot technology. Therefore, this article provides a new insight into the development of a nonlinear dynamic model for a soft continuum manipulator based on a material model. The continuum manipulator used in this study is treated as a composite material and a modified nonlinear Kelvin-Voigt material model is utilized to embody the visco-hyperelastic dynamics of soft silicone. The Lagrangian approach is applied to derive the equation of motion of the manipulator. Simulation and experimental results prove that this material modeling approach sufficiently captures the nonlinear time- and rate-dependent behavior of a soft manipulator. Material model-based closed-loop trajectory control was implemented to further validate the feasibility of the derived model and increase the performance of the overall system.

    @article{lincoln37436,
    volume = {6},
    number = {3},
    month = {June},
    author = {S.M. Mustaza and Y. Elsayed and C. Lekakou and C. Saaj and J. Fras},
    title = {Dynamic modeling of fiber-reinforced soft manipulator: A visco-hyperelastic material-based continuum mechanics approach},
    publisher = {Mary Ann Liebert},
    year = {2019},
    journal = {Soft Robotics},
    doi = {10.1089/soro.2018.0032},
    pages = {305--317},
    url = {https://eprints.lincoln.ac.uk/id/eprint/37436/},
    abstract = {Robot-assisted surgery is gaining popularity worldwide and there is increasing scientific interest to explore the potential of soft continuum robots for minimally invasive surgery. However, the remote control of soft robots is much more challenging compared with their rigid counterparts. Accurate modeling of manipulator dynamics is vital to remotely control the diverse movement configurations and is particularly important for safe interaction with the operating environment. However, current dynamic models applied to soft manipulator systems are simplistic and empirical, which restricts the full potential of the new soft robot technology. Therefore, this article provides a new insight into the development of a nonlinear dynamic model for a soft continuum manipulator based on a material model. The continuum manipulator used in this study is treated as a composite material and a modified nonlinear Kelvin-Voigt material model is utilized to embody the visco-hyperelastic dynamics of soft silicone. The Lagrangian approach is applied to derive the equation of motion of the manipulator. Simulation and experimental results prove that this material modeling approach sufficiently captures the nonlinear time- and rate-dependent behavior of a soft manipulator. Material model-based closed-loop trajectory control was implemented to further validate the feasibility of the derived model and increase the performance of the overall system.}
    }
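
    At the heart of the material model above is the Kelvin-Voigt element, whose stress response is sigma = E*eps + eta*(d eps/dt): an elastic term plus a viscous, rate-dependent one. A numeric sketch of the linear base element under an imposed cyclic strain (the parameter values are illustrative, not the paper's fitted ones, and the paper's modified nonlinear variant is not reproduced here):

    # Linear Kelvin-Voigt element: sigma = E*eps + eta*deps/dt, evaluated for
    # an imposed cyclic strain. E and eta are illustrative values.
    import numpy as np

    E, eta = 1.0e5, 2.0e3                   # elastic modulus [Pa], viscosity [Pa.s]
    t = np.linspace(0.0, 1.0, 1000)
    eps = 0.05 * np.sin(2 * np.pi * t)      # imposed cyclic strain
    deps = np.gradient(eps, t)              # strain rate
    sigma = E * eps + eta * deps            # stress: elastic + viscous term
    print(float(sigma.max()))               # peak stress over the cycle
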
  • L. Sun, C. Zhao, Z. Yan, P. Liu, T. Duckett, and R. Stolkin, “A novel weakly-supervised approach for rgb-d-based nuclear waste object detection and categorization,” Ieee sensors journal, vol. 19, iss. 9, p. 3487–3500, 2019. doi:10.1109/JSEN.2018.2888815
    [BibTeX] [Abstract] [Download PDF]

This paper addresses the problem of RGB-D-based detection and categorization of waste objects for nuclear decommissioning. To enable autonomous robotic manipulation for nuclear decommissioning, nuclear waste objects must be detected and categorized. However, as a novel industrial application, large amounts of annotated waste object data are currently unavailable. To overcome this problem, we propose a weakly-supervised learning approach which is able to learn a deep convolutional neural network (DCNN) from unlabelled RGB-D videos while requiring very few annotations. The proposed method also has the potential to be applied to other household or industrial applications. We evaluate our approach on the Washington RGB-D object recognition benchmark, achieving the state-of-the-art performance among semi-supervised methods. More importantly, we introduce a novel dataset, i.e. the Birmingham nuclear waste simulants dataset, and evaluate our proposed approach on this novel industrial object recognition challenge. We further propose a complete real-time pipeline for RGB-D-based detection and categorization of nuclear waste simulants. Our weakly-supervised approach has been demonstrated to be highly effective in solving a novel RGB-D object detection and recognition application with limited human annotations.

    @article{lincoln35699,
    volume = {19},
    number = {9},
    month = {May},
    author = {Li Sun and Cheng Zhao and Zhi Yan and Pengcheng Liu and Tom Duckett and Rustam Stolkin},
    title = {A Novel Weakly-supervised approach for RGB-D-based Nuclear Waste Object Detection and Categorization},
    publisher = {IEEE},
    year = {2019},
    journal = {IEEE Sensors Journal},
    doi = {10.1109/JSEN.2018.2888815},
    pages = {3487--3500},
    url = {https://eprints.lincoln.ac.uk/id/eprint/35699/},
    abstract = {This paper addresses the problem of RGB-D-based detection and categorization of waste objects for nuclear decommissioning. To enable autonomous robotic manipulation for nuclear decommissioning, nuclear waste objects must be detected and categorized. However, as a novel industrial application, large amounts of annotated waste object data are currently unavailable. To overcome this problem, we propose a weakly-supervised learning approach which is able to learn a deep convolutional neural network (DCNN) from unlabelled RGB-D videos while requiring very few annotations. The proposed method also has the potential to be applied to other household or industrial applications. We evaluate our approach on the Washington RGB-D object recognition benchmark, achieving the state-of-the-art performance among semi-supervised methods. More importantly, we introduce a novel dataset, i.e. the Birmingham nuclear waste simulants dataset, and evaluate our proposed approach on this novel industrial object recognition challenge. We further propose a complete real-time pipeline for RGB-D-based detection and categorization of nuclear waste simulants. Our weakly-supervised approach has been demonstrated to be highly effective in solving a novel RGB-D object detection and recognition application with limited human annotations.}
    }
  • E. C. Rodias, M. Lampridi, A. Sopegno, R. Berruto, G. Banias, D. Bochtis, and P. Busato, “Optimal energy performance on allocating energy crops,” Biosystems engineering, vol. 181, p. 11–27, 2019. doi:10.1016/j.biosystemseng.2019.02.007
    [BibTeX] [Abstract] [Download PDF]

There is a variety of crops that may be considered as potential biomass production crops. In order to select the most suitable crop for cultivation in a given area, several factors should be taken into account. During the crop selection process, a common framework should be followed, focussing on financial or energy performance. Combining multiple crops and multiple fields for the extraction of the best allocation requires a model to evaluate various and complex factors given a specific objective. This paper studies the maximisation of the total energy gained from biomass production by energy crops, reduced by the energy costs of the production process. The tool calculates the energy balance using multiple crops allocated to multiple fields. Both binary programming and linear programming methods are employed to solve the allocation problem. Each crop is assigned to a field (or a combination of crops is allocated to each field) with the aim of maximising the energy balance provided by the production system. For the demonstration of the tool, a hypothetical case study of three different crops cultivated for a decade (Miscanthus x giganteus, Arundo donax, and Panicum virgatum) and allocated to 40 dispersed fields around a biogas plant in Italy is presented. The objective of the best allocation is the maximisation of the energy balance, showing that the linear solution is slightly better than the binary one in the basic scenario, while also suggesting alternative scenarios that would have an optimal energy balance.

    @article{lincoln39225,
    volume = {181},
    month = {May},
    author = {Efthymios C. Rodias and Maria Lampridi and Alessandro Sopegno and Remigio Berruto and George Banias and Dionysis Bochtis and Patrizia Busato},
    title = {Optimal energy performance on allocating energy crops},
    journal = {Biosystems Engineering},
    doi = {10.1016/j.biosystemseng.2019.02.007},
    pages = {11--27},
    year = {2019},
    url = {https://eprints.lincoln.ac.uk/id/eprint/39225/},
    abstract = {There is a variety of crops that may be considered as potential biomass production crops. In order to select the most suitable crop for cultivation in a given area, several factors should be taken into account. During the crop selection process, a common framework should be followed, focussing on financial or energy performance. Combining multiple crops and multiple fields for the extraction of the best allocation requires a model to evaluate various and complex factors given a specific objective. This paper studies the maximisation of the total energy gained from biomass production by energy crops, reduced by the energy costs of the production process. The tool calculates the energy balance using multiple crops allocated to multiple fields. Both binary programming and linear programming methods are employed to solve the allocation problem. Each crop is assigned to a field (or a combination of crops is allocated to each field) with the aim of maximising the energy balance provided by the production system. For the demonstration of the tool, a hypothetical case study of three different crops cultivated for a decade (Miscanthus x giganteus, Arundo donax, and Panicum virgatum) and allocated to 40 dispersed fields around a biogas plant in Italy is presented. The objective of the best allocation is the maximisation of the energy balance, showing that the linear solution is slightly better than the binary one in the basic scenario, while also suggesting alternative scenarios that would have an optimal energy balance.}
    }
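
    The linear-programming variant of the allocation problem above fits in a few lines: maximise the summed energy balance subject to each field being fully planted. A minimal sketch with SciPy's linprog (the energy matrix is invented; the paper's binary variant would further restrict each variable to {0,1}):

    # Relaxed (LP) crop-to-field allocation: maximise total energy balance with
    # each field fully (here: fractionally) planted. Energy values are invented.
    import numpy as np
    from scipy.optimize import linprog

    energy = np.array([[55.0, 61.0, 48.0],   # net energy balance [GJ/ha];
                       [52.0, 47.0, 59.0],   # rows: fields, cols: crops
                       [60.0, 58.0, 57.0]])
    n_fields, n_crops = energy.shape

    c = -energy.ravel()                      # linprog minimises, so negate
    A_eq = np.zeros((n_fields, n_fields * n_crops))
    for f in range(n_fields):
        A_eq[f, f * n_crops:(f + 1) * n_crops] = 1.0   # field f fully planted
    res = linprog(c, A_eq=A_eq, b_eq=np.ones(n_fields), bounds=(0.0, 1.0))
    print(res.x.reshape(n_fields, n_crops))  # allocation fractions per field
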
  • F. Brandherm, J. Peters, G. Neumann, and R. Akrour, “Learning replanning policies with direct policy search,” Ieee robotics and automation letters (ra-l), vol. 4, iss. 2, p. 2196–2203, 2019. doi:10.1109/LRA.2019.2901656
    [BibTeX] [Abstract] [Download PDF]

Direct policy search has been successful in learning challenging real-world robotic motor skills by learning open-loop movement primitives with high sample efficiency. These primitives can be generalized to different contexts with varying initial configurations and goals. Current state-of-the-art contextual policy search algorithms, however, cannot adapt to changing, noisy context measurements. Yet, these are common characteristics of real-world robotic tasks. Planning a trajectory ahead based on an inaccurate context that may change during the motion often results in poor accuracy, especially in highly dynamic tasks. To adapt to updated contexts, it is sensible to learn trajectory replanning strategies. We propose a framework to learn trajectory replanning policies via contextual policy search and demonstrate that they are safe for the robot, that they can be learned efficiently, and that they outperform non-replanning policies for problems with partially observable or perturbed context.

    @article{lincoln36284,
    volume = {4},
    number = {2},
    month = {April},
    author = {F. Brandherm and J. Peters and Gerhard Neumann and R. Akrour},
    title = {Learning Replanning Policies with Direct Policy Search},
    year = {2019},
    journal = {IEEE Robotics and Automation Letters (RA-L)},
    doi = {10.1109/LRA.2019.2901656},
    pages = {2196--2203},
    url = {https://eprints.lincoln.ac.uk/id/eprint/36284/},
    abstract = {Direct policy search has been successful in learning challenging real-world robotic motor skills by learning open-loop movement primitives with high sample efficiency. These primitives can be generalized to different contexts with varying initial configurations and goals. Current state-of-the-art contextual policy search algorithms, however, cannot adapt to changing, noisy context measurements. Yet, these are common characteristics of real-world robotic tasks. Planning a trajectory ahead based on an inaccurate context that may change during the motion often results in poor accuracy, especially in highly dynamic tasks. To adapt to updated contexts, it is sensible to learn trajectory replanning strategies. We propose a framework to learn trajectory replanning policies via contextual policy search and demonstrate that they are safe for the robot, that they can be learned efficiently, and that they outperform non-replanning policies for problems with partially observable or perturbed context.}
    }
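
    The replanning idea can be stated compactly: rather than executing one open-loop primitive, the agent re-queries a context-conditioned policy whenever an updated, noisy context measurement arrives. A schematic sketch (the linear 'policy' and the noise model are placeholders, not the paper's learned policy):

    # Schematic replanning loop: re-query a (context -> trajectory segment)
    # policy on each new noisy context measurement instead of planning once.
    import numpy as np

    rng = np.random.default_rng(0)

    def policy(context, horizon=4):
        """Placeholder contextual policy: a straight segment toward the context."""
        return np.linspace(0.0, context, horizon)

    true_goal, state = 2.0, 0.0
    trajectory = []
    for replan in range(5):
        measured = true_goal + rng.normal(0.0, 0.2)    # noisy context measurement
        segment = policy(measured - state) + state     # replanned short segment
        state = segment[-1]                            # execute the segment
        trajectory.extend(segment)
    print(round(state, 3))  # ends near the true goal despite noisy contexts
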
  • Y. Zhang, J. Gao, H. Cen, Y. Lu, X. Yu, Y. He, and J. G. Pieters, “Automated spectral feature extraction from hyperspectral images to differentiate weedy rice and barnyard grass from a rice crop,” Computers and electronics in agriculture, vol. 159, p. 42–49, 2019. doi:10.1016/j.compag.2019.02.018
    [BibTeX] [Abstract] [Download PDF]

Barnyard grass (Echinochloa crusgalli) and weedy rice (Oryza sativa f. spontanea) are two common and troublesome weed species in rice (Oryza sativa L.) crops. They cause significant yield loss in rice production, while it is difficult to differentiate them for site-specific weed management. In this paper, we aimed to develop a classification model with important spectral features to recognize these two weeds and rice based on hyperspectral imaging techniques. In total, 287 plant leaf samples were scanned by the hyperspectral imaging system within the spectral range from 380 nm to 1080 nm. After obtaining hyperspectral images, we first developed an algorithmic pipeline to automatically extract spectral features from line-scan hyperspectral images. The raw spectral features were then subjected to wavelet transformation for noise reduction. Random forest and support vector machine models were developed with the optimal hyperparameters to compare their performance on the test set. Moreover, feature selection was explored through the successive projection algorithm (SPA). It is shown that the weighted support vector machine with 6 spectral features selected by SPA can achieve 100%, 100%, and 92% recognition rates for barnyard grass, weedy rice and rice, respectively. Furthermore, the six selected wavelengths (415 nm, 561 nm, 687 nm, 705 nm, 735 nm, 1007 nm) have the potential to inform the design of a customized optical sensor for discriminating these two weeds from rice in practice.

    @article{lincoln41510,
    volume = {159},
    month = {April},
    author = {Yanchao Zhang and Junfeng Gao and Haiyan Cen and Yongliang Lu and Xiaoyue Yu and Yong He and Jan G. Pieters},
    title = {Automated spectral feature extraction from hyperspectral images to differentiate weedy rice and barnyard grass from a rice crop},
    publisher = {Elsevier},
    year = {2019},
    journal = {Computers and Electronics in Agriculture},
    doi = {10.1016/j.compag.2019.02.018},
    pages = {42--49},
    url = {https://eprints.lincoln.ac.uk/id/eprint/41510/},
    abstract = {Barnyard grass (Echinochloa crusgalli) and weedy rice (Oryza sativa f. spontanea) are two common and troublesome weed species in rice (Oryza sativa L.) crops. They cause significant yield loss in rice production, while it is difficult to differentiate them for site-specific weed management. In this paper, we aimed to develop a classification model with important spectral features to recognize these two weeds and rice based on hyperspectral imaging techniques. In total, 287 plant leaf samples were scanned by the hyperspectral imaging system within the spectral range from 380 nm to 1080 nm. After obtaining hyperspectral images, we first developed an algorithmic pipeline to automatically extract spectral features from line-scan hyperspectral images. The raw spectral features were then subjected to wavelet transformation for noise reduction. Random forest and support vector machine models were developed with the optimal hyperparameters to compare their performance on the test set. Moreover, feature selection was explored through the successive projection algorithm (SPA). It is shown that the weighted support vector machine with 6 spectral features selected by SPA can achieve 100\%, 100\%, and 92\% recognition rates for barnyard grass, weedy rice and rice, respectively. Furthermore, the six selected wavelengths (415 nm, 561 nm, 687 nm, 705 nm, 735 nm, 1007 nm) have the potential to inform the design of a customized optical sensor for discriminating these two weeds from rice in practice.}
    }
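
    The final classifier above is conceptually simple: a class-weighted SVM over six selected wavelengths. A runnable skeleton with scikit-learn (random data stands in for the real reflectance spectra, so the printed accuracy is meaningless):

    # Class-weighted SVM over six selected spectral features, as in the paper's
    # final model. Random data replaces the real reflectance measurements.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    n_samples, n_bands = 287, 6               # six selected wavelengths
    X = rng.normal(size=(n_samples, n_bands))
    y = rng.integers(0, 3, size=n_samples)    # 0: barnyard grass, 1: weedy rice, 2: rice

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = SVC(kernel="rbf", class_weight="balanced")   # the "weighted" SVM variant
    clf.fit(X_tr, y_tr)
    print(f"toy accuracy: {clf.score(X_te, y_te):.2f}")  # meaningless on random data
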
  • M. Lampridi, D. Kateris, G. Vasileiadis, S. Pearson, C. Sørensen, A. Balafoutis, and D. Bochtis, “A case-based economic assessment of robotics employment in precision arable farming,” Agronomy, vol. 9, iss. 4, p. 175, 2019. doi:10.3390/agronomy9040175
    [BibTeX] [Abstract] [Download PDF]

The need to intensify agriculture to meet increasing nutritional needs, in combination with the evolution of unmanned autonomous systems, has led to the development of a series of 'smart' farming technologies that are expected to replace or complement conventional machinery and human labor. This paper proposes a preliminary methodology for the economic analysis of the employment of robotic systems in arable farming. This methodology is based on the basic processes for estimating the use cost of agricultural machinery. However, for the case of robotic systems, no average norms for the majority of the operational parameters are available. Here, we propose a novel estimation process for these parameters in the case of robotic systems. As a case study, the operation of light cultivation has been selected due to the technological readiness for this type of operation.

    @article{lincoln35601,
    volume = {9},
    number = {4},
    month = {April},
    author = {Maria Lampridi and Dimitrios Kateris and Giorgos Vasileiadis and Simon Pearson and Claus S{\o}rensen and Athanasios Balafoutis and Dionysis Bochtis},
    title = {A Case-Based Economic Assessment of Robotics Employment in Precision Arable Farming},
    publisher = {MDPI},
    year = {2019},
    journal = {Agronomy},
    doi = {10.3390/agronomy9040175},
    pages = {175},
    url = {https://eprints.lincoln.ac.uk/id/eprint/35601/},
    abstract = {The need to intensify agriculture to meet increasing nutritional needs, in combination with the evolution of unmanned autonomous systems, has led to the development of a series of 'smart' farming technologies that are expected to replace or complement conventional machinery and human labor. This paper proposes a preliminary methodology for the economic analysis of the employment of robotic systems in arable farming. This methodology is based on the basic processes for estimating the use cost of agricultural machinery. However, for the case of robotic systems, no average norms for the majority of the operational parameters are available. Here, we propose a novel estimation process for these parameters in the case of robotic systems. As a case study, the operation of light cultivation has been selected due to the technological readiness for this type of operation.}
    }
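
    The machinery use-cost arithmetic the methodology builds on decomposes the annual cost into depreciation, interest on the average investment, and hour-dependent running costs. A worked toy example (all figures are invented, not the paper's case-study values):

    # Standard machinery use-cost decomposition:
    # annual cost = depreciation + interest on average investment + running costs.
    purchase_price = 120_000.0   # robot system [EUR] (invented)
    salvage = 20_000.0           # resale value after service life [EUR]
    life_years = 8
    interest_rate = 0.04
    annual_hours = 400.0
    running_cost_per_h = 12.0    # energy, maintenance, supervision [EUR/h]

    depreciation = (purchase_price - salvage) / life_years
    interest = interest_rate * (purchase_price + salvage) / 2.0  # avg investment
    fixed = depreciation + interest
    variable = running_cost_per_h * annual_hours
    print(f"annual use cost: {fixed + variable:.0f} EUR "
          f"({(fixed + variable) / annual_hours:.2f} EUR/h)")
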
  • T. Angelopoulou, N. Tziolas, A. Balafoutis, G. Zalidis, and D. Bochtis, “Remote sensing techniques for soil organic carbon estimation: a review,” Remote sensing, vol. 11, iss. 6, p. 676, 2019. doi:10.3390/rs11060676
    [BibTeX] [Abstract] [Download PDF]

Towards the need for sustainable development, remote sensing (RS) techniques in the Visible-Near Infrared to Shortwave Infrared (VNIR-SWIR, 400-2500 nm) region could assist in a more direct, cost-effective and rapid manner to estimate important indicators for soil monitoring purposes. Soil reflectance spectroscopy has been applied in various domains apart from laboratory conditions, e.g., sensors mounted on satellites, aircraft and Unmanned Aerial Systems. The aim of this review is to present the research carried out on soil organic carbon estimation with the use of RS techniques, reporting the methodology and results of each study. It also aims to provide a comprehensive introduction to soil spectroscopy for those who are less conversant with the subject. In total, 28 journal articles were selected and further analysed. It was observed that prediction accuracy reduces from Unmanned Aerial Systems (UASs) to satellite platforms, though advances in machine learning techniques could further assist in the generation of better calibration models. There are some challenges concerning atmospheric, radiometric and geometric corrections, vegetation cover, soil moisture and roughness that still need to be addressed. The advantages and disadvantages of each approach are highlighted, and future considerations are also discussed at the end.

    @article{lincoln39227,
    volume = {11},
    number = {6},
    month = {March},
    author = {Theodora Angelopoulou and Nikolaos Tziolas and Athanasios Balafoutis and George Zalidis and Dionysis Bochtis},
    title = {Remote Sensing Techniques for Soil Organic Carbon Estimation: A Review},
    year = {2019},
    journal = {Remote Sensing},
    doi = {10.3390/rs11060676},
    pages = {676},
    url = {https://eprints.lincoln.ac.uk/id/eprint/39227/},
    abstract = {Towards the need for sustainable development, remote sensing (RS) techniques in the Visible-Near Infrared to Shortwave Infrared (VNIR-SWIR, 400-2500 nm) region could assist in a more direct, cost-effective and rapid manner to estimate important indicators for soil monitoring purposes. Soil reflectance spectroscopy has been applied in various domains apart from laboratory conditions, e.g., sensors mounted on satellites, aircraft and Unmanned Aerial Systems. The aim of this review is to present the research carried out on soil organic carbon estimation with the use of RS techniques, reporting the methodology and results of each study. It also aims to provide a comprehensive introduction to soil spectroscopy for those who are less conversant with the subject. In total, 28 journal articles were selected and further analysed. It was observed that prediction accuracy reduces from Unmanned Aerial Systems (UASs) to satellite platforms, though advances in machine learning techniques could further assist in the generation of better calibration models. There are some challenges concerning atmospheric, radiometric and geometric corrections, vegetation cover, soil moisture and roughness that still need to be addressed. The advantages and disadvantages of each approach are highlighted, and future considerations are also discussed at the end.}
    }
  • M. Hüttenrauch, A. Sosic, and G. Neumann, “Deep reinforcement learning for swarm systems,” Journal of machine learning research, vol. 20, iss. 54, p. 1–31, 2019.
    [BibTeX] [Abstract] [Download PDF]

    Recently, deep reinforcement learning (RL) methods have been applied successfully to multi-agent scenarios. Typically, the observation vector for decentralized decision making is represented by a concatenation of the (local) information an agent gathers about other agents. However, concatenation scales poorly to swarm systems with a large number of homogeneous agents as it does not exploit the fundamental properties inherent to these systems: (i) the agents in the swarm are interchangeable and (ii) the exact number of agents in the swarm is irrelevant. Therefore, we propose a new state representation for deep multi-agent RL based on mean embeddings of distributions, where we treat the agents as samples and use the empirical mean embedding as input for a decentralized policy. We define different feature spaces of the mean embedding using histograms, radial basis functions and neural networks trained end-to-end. We evaluate the representation on two well-known problems from the swarm literature in a globally and locally observable setup. For the local setup we furthermore introduce simple communication protocols. Of all approaches, the mean embedding representation using neural network features enables the richest information exchange between neighboring agents, facilitating the development of complex collective strategies.

    @article{lincoln36281,
    volume = {20},
    number = {54},
    month = {February},
    author = {Maximilian H{\"u}ttenrauch and Adrian Sosic and Gerhard Neumann},
    title = {Deep Reinforcement Learning for Swarm Systems},
    publisher = {Journal of Machine Learning Research},
    year = {2019},
    journal = {Journal of Machine Learning Research},
    pages = {1--31},
    url = {https://eprints.lincoln.ac.uk/id/eprint/36281/},
    abstract = {Recently, deep reinforcement learning (RL) methods have been applied successfully to multi-agent scenarios. Typically, the observation vector for decentralized decision making is represented by a concatenation of the (local) information an agent gathers about other agents. However, concatenation scales poorly to swarm systems with a large number of homogeneous agents as it does not exploit the fundamental properties inherent to these systems: (i) the agents in the swarm are interchangeable and (ii) the exact number of agents in the swarm is irrelevant. Therefore, we propose a new state representation for deep multi-agent RL based on mean embeddings of distributions, where we treat the agents as samples and use the empirical mean embedding as input for a decentralized policy. We define different feature spaces of the mean embedding using histograms, radial basis functions and neural networks trained end-to-end. We evaluate the representation on two well-known problems from the swarm literature in a globally and locally observable setup. For the local setup we furthermore introduce simple communication protocols. Of all approaches, the mean embedding representation using neural network features enables the richest information exchange between neighboring agents, facilitating the development of complex collective strategies.}
    }
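
    The proposed representation is easy to demonstrate: encode each neighbour's observation with a shared feature map and feed the empirical mean of those features to the policy, which makes the input permutation-invariant and independent of the neighbour count. A minimal sketch with a random linear feature map standing in for the learned one (all shapes are illustrative):

    # Mean-embedding state representation in miniature: per-neighbour features
    # through a shared map, averaged into one permutation-invariant input.
    import numpy as np

    rng = np.random.default_rng(2)
    W = rng.normal(size=(16, 4))          # shared feature map (random linear here)

    def mean_embedding(neighbour_obs):
        """neighbour_obs: (n_neighbours, 4) -> permutation-invariant (16,) input."""
        feats = np.tanh(neighbour_obs @ W.T)   # per-neighbour features
        return feats.mean(axis=0)              # empirical mean embedding

    obs_a = rng.normal(size=(5, 4))            # observations of 5 neighbours
    obs_b = obs_a[rng.permutation(5)]          # same neighbours, reordered
    assert np.allclose(mean_embedding(obs_a), mean_embedding(obs_b))
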
  • E. Rodias, R. Berruto, D. Bochtis, A. Sopegno, and P. Busato, “Green, yellow, and woody biomass supply-chain management: a review,” Energies, vol. 12, iss. 15, p. 3020, 2019. doi:10.3390/en12153020
    [BibTeX] [Abstract] [Download PDF]

Various sources of biomass contribute significantly to energy production globally, given a series of constraints on its primary production. Green biomass sources (such as perennial grasses), yellow biomass sources (such as crop residues), and woody biomass sources (such as willow) represent the three pillars of biomass production by crops. In this paper, we conducted a comprehensive review of research studies targeted at advancements in biomass supply-chain management in connection with these three types of biomass sources. A framework that classifies the works into problem-based and methodology-based approaches was followed. Results show the use of modern technological means and tools in current management-related problems. From the review, it is evident that the presented up-to-date trends in biomass supply-chain management and the potential for future advanced approach applications play a crucial role in the business and sustainability efficiency of the biomass supply chain.

    @article{lincoln39230,
    volume = {12},
    number = {15},
    month = {August},
    author = {Efthymios Rodias and Remigio Berruto and Dionysis Bochtis and Alessandro Sopegno and Patrizia Busato},
    title = {Green, Yellow, and Woody Biomass Supply-Chain Management: A Review},
    year = {2019},
    journal = {Energies},
    doi = {10.3390/en12153020},
    pages = {3020},
    url = {https://eprints.lincoln.ac.uk/id/eprint/39230/},
    abstract = {Various sources of biomass contribute significantly to energy production globally, given a series of constraints on its primary production. Green biomass sources (such as perennial grasses), yellow biomass sources (such as crop residues), and woody biomass sources (such as willow) represent the three pillars of biomass production by crops. In this paper, we conducted a comprehensive review of research studies targeted at advancements in biomass supply-chain management in connection with these three types of biomass sources. A framework that classifies the works into problem-based and methodology-based approaches was followed. Results show the use of modern technological means and tools in current management-related problems. From the review, it is evident that the presented up-to-date trends in biomass supply-chain management and the potential for future advanced approach applications play a crucial role in the business and sustainability efficiency of the biomass supply chain.}
    }
  • J. Ganzer-Ripoll, N. Criado, M. Lopez-Sanchez, S. Parsons, and J. A. Rodriguez-Aguilar, “Combining social choice theory and argumentation: enabling collective decision making,” Group decision and negotiation, vol. 28, iss. 1, p. 127–173, 2019. doi:10.1007/s10726-018-9594-6
    [BibTeX] [Download PDF]
    @article{lincoln38395,
    volume = {28},
    number = {1},
    month = {February},
    author = {J. Ganzer-Ripoll and N. Criado and M. Lopez-Sanchez and Simon Parsons and J.A. Rodriguez-Aguilar},
    note = {cited By 0},
    title = {Combining Social Choice Theory and Argumentation: Enabling Collective Decision Making},
    year = {2019},
    journal = {Group Decision and Negotiation},
    doi = {10.1007/s10726-018-9594-6},
    pages = {127--173},
    url = {https://eprints.lincoln.ac.uk/id/eprint/38395/}
    }
  • E. C. Rodias, A. Sopegno, R. Berruto, D. Bochtis, E. Cavallo, and P. Busato, “A combined simulation and linear programming method for scheduling organic fertiliser application,” Biosystems engineering, vol. 178, p. 233–243, 2019. doi:10.1016/j.biosystemseng.2018.11.002
    [BibTeX] [Abstract] [Download PDF]

    Logistics have been used to analyse agricultural operations, such as chemical application, mineral or organic fertilisation and harvesting-handling operations. Recently, due to national or European commitments concerning livestock waste management, this waste is being applied in many crops instead of other mineral fertilisers. The organic fertiliser produced has a high availability although most of the crops it is applied to have strict timeliness issues concerning its application. Here, organic fertilizer (as liquid manure) distribution logistic system is modelled by using a combined simulation and linear programming method. The method applies in certain crops and field areas taking into account specific agronomical, legislation and other constraints with the objective of minimising the optimal annual cost. Given their direct connection with the organic fertiliser distribution, the operations of cultivation and seeding were included. In a basic scenario, the optimal cost was assessed for both crops in total cultivated area of 120 ha. Three modified scenarios are presented. The first regards one more tractor as being available and provides a reduction of 3.8\% in the total annual cost in comparison with the basic scenario. In the second and third modified scenarios fields having high nitrogen demand next to the farm are considered with one or two tractors and savings of 2.5\% and 6.1\%, respectively, compared to the basic scenario are implied. Finally, it was concluded that the effect of distance from the manure production to the location of the fields could reduce costs by 6.5\%.

    @article{lincoln39224,
    volume = {178},
    month = {February},
    author = {Efthymios C. Rodias and Alessandro Sopegno and Remigio Berruto and Dionysis Bochtis and Eugenio Cavallo and Patrizia Busato},
    title = {A combined simulation and linear programming method for scheduling organic fertiliser application},
    publisher = {Elsevier},
    year = {2019},
    journal = {Biosystems Engineering},
    doi = {10.1016/j.biosystemseng.2018.11.002},
    pages = {233--243},
    url = {https://eprints.lincoln.ac.uk/id/eprint/39224/},
    abstract = {Logistics have been used to analyse agricultural operations, such as chemical application, mineral or organic fertilisation and harvesting-handling operations. Recently, due to national or European commitments concerning livestock waste management, this waste is being applied in many crops instead of other mineral fertilisers. The organic fertiliser produced has a high availability although most of the crops it is applied to have strict timeliness issues concerning its application. Here, organic fertilizer (as liquid manure) distribution logistic system is modelled by using a combined simulation and linear programming method. The method applies in certain crops and field areas taking into account specific agronomical, legislation and other constraints with the objective of minimising the optimal annual cost. Given their direct connection with the organic fertiliser distribution, the operations of cultivation and seeding were included. In a basic scenario, the optimal cost was assessed for both crops in total cultivated area of 120 ha. Three modified scenarios are presented. The first regards one more tractor as being available and provides a reduction of 3.8\% in the total annual cost in comparison with the basic scenario. In the second and third modified scenarios fields having high nitrogen demand next to the farm are considered with one or two tractors and savings of 2.5\% and 6.1\%, respectively, compared to the basic scenario are implied. Finally, it was concluded that the effect of distance from the manure production to the location of the fields could reduce costs by 6.5\%.}
    }
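    The combined simulation and linear programming idea above lends itself to a compact illustration. The following toy sketch (invented costs and capacities, not the paper's model) uses scipy.optimize.linprog to split a fertilisation task between two tractor options at minimal cost.

    from scipy.optimize import linprog

    cost_per_ha = [38.0, 45.0]   # EUR/ha for tractor option A and B (invented)
    capacity_ha = [80.0, 60.0]   # max area each option can cover in the window
    total_area = 120.0           # ha that must receive organic fertiliser

    res = linprog(
        c=cost_per_ha,                           # minimise total cost
        A_eq=[[1.0, 1.0]], b_eq=[total_area],    # the whole area is covered
        bounds=[(0.0, capacity_ha[0]), (0.0, capacity_ha[1])],
    )
    print(res.x, res.fun)   # optimal split of hectares and the minimal cost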
  • Y. Zhu, X. Li, S. Pearson, D. Wu, R. Sun, S. Johnson, J. Wheeler, and S. Fang, “Evaluation of fengyun-3c soil moisture products using in-situ data from the chinese automatic soil moisture observation stations: a case study in henan province, china,” Water, vol. 11, iss. 2, p. 248, 2019. doi:10.3390/w11020248
    [BibTeX] [Abstract] [Download PDF]

    Soil moisture (SM) products derived from passive satellite missions are playing an increasingly important role in agricultural applications, especially crop monitoring and disaster warning. Evaluating the dependability of satellite-derived soil moisture products on a large scale is crucial. In this study, we assessed the level 2 (L2) SM product from the Chinese Fengyun-3C (FY-3C) radiometer against in-situ measurements collected from the Chinese Automatic Soil Moisture Observation Stations (CASMOS) during a one-year period from 1 January 2016 to 31 December 2016 across Henan in China. In contrast, we also investigated the skill of the Advanced Microwave Scanning Radiometer 2 (AMSR2) and Soil Moisture Active/Passive (SMAP) SM products simultaneously. Four statistical parameters were used to evaluate these products' reliability: mean difference, root-mean-square error (RMSE), unbiased RMSE (ubRMSE), and the correlation coefficient. Our assessment results revealed that the FY-3C L2 SM product generally showed a poor correlation with the in-situ SM data from CASMOS on both temporal and spatial scales. The AMSR2 L3 SM product of JAXA (Japan Aerospace Exploration Agency) algorithm had a similar level of skill as FY-3C in the study area. The SMAP L3 SM product outperformed the FY-3C temporally but showed lower performance in capturing the SM spatial variation. A time-series analysis indicated that the correlations and estimated error varied systematically through the growing periods of the key crops in our study area. FY-3C L2 SM data tended to overestimate soil moisture during May, August, and September when the crops reached maximum vegetation density and tended to underestimate the soil moisture content during the rest of the year. The comparison between the statistical parameters and the ground vegetation water content (VWC) further showed that the FY-3C SM product performed much better under a low VWC condition (<0.3 kg/m2), and the performance generally decreased with increased VWC. To improve the accuracy of the FY-3C SM product, an improved algorithm that can better characterize the variations of the ground VWC should be applied in the future.

    @article{lincoln35398,
    volume = {11},
    number = {2},
    month = {January},
    author = {Yongchao Zhu and Xuan Li and Simon Pearson and Dongli Wu and Ruijing Sun and Sarah Johnson and James Wheeler and Shibo Fang},
    title = {Evaluation of Fengyun-3C Soil Moisture Products Using In-Situ Data from the Chinese Automatic Soil Moisture Observation Stations: A Case Study in Henan Province, China},
    year = {2019},
    journal = {Water},
    doi = {10.3390/w11020248},
    pages = {248},
    url = {https://eprints.lincoln.ac.uk/id/eprint/35398/},
    abstract = {Soil moisture (SM) products derived from passive satellite missions are playing an increasingly important role in agricultural applications, especially crop monitoring and disaster warning. Evaluating the dependability of satellite-derived soil moisture products on a large scale is crucial. In this study, we assessed the level 2 (L2) SM product from the Chinese Fengyun-3C (FY-3C) radiometer against in-situ measurements collected from the Chinese Automatic Soil Moisture Observation Stations (CASMOS) during a one-year period from 1 January 2016 to 31 December 2016 across Henan in China. In contrast, we also investigated the skill of the Advanced Microwave Scanning Radiometer 2 (AMSR2) and Soil Moisture Active/Passive (SMAP) SM products simultaneously. Four statistical parameters were used to evaluate these products' reliability: mean difference, root-mean-square error (RMSE), unbiased RMSE (ubRMSE), and the correlation coefficient. Our assessment results revealed that the FY-3C L2 SM product generally showed a poor correlation with the in-situ SM data from CASMOS on both temporal and spatial scales. The AMSR2 L3 SM product of JAXA (Japan Aerospace Exploration Agency) algorithm had a similar level of skill as FY-3C in the study area. The SMAP L3 SM product outperformed the FY-3C temporally but showed lower performance in capturing the SM spatial variation. A time-series analysis indicated that the correlations and estimated error varied systematically through the growing periods of the key crops in our study area. FY-3C L2 SM data tended to overestimate soil moisture during May, August, and September when the crops reached maximum vegetation density and tended to underestimate the soil moisture content during the rest of the year. The comparison between the statistical parameters and the ground vegetation water content (VWC) further showed that the FY-3C SM product performed much better under a low VWC condition ({\ensuremath{<}}0.3 kg/m2), and the performance generally decreased with increased VWC. To improve the accuracy of the FY-3C SM product, an improved algorithm that can better characterize the variations of the ground VWC should be applied in the future.}
    }
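    The four evaluation statistics named in the abstract above are standard and easy to reproduce; a minimal NumPy sketch (generic, not the authors' evaluation code, with invented sample values) follows.

    import numpy as np

    def sm_metrics(sat, insitu):
        sat, insitu = np.asarray(sat, float), np.asarray(insitu, float)
        bias = np.mean(sat - insitu)                       # mean difference
        rmse = np.sqrt(np.mean((sat - insitu) ** 2))       # root-mean-square error
        ubrmse = np.sqrt(max(rmse ** 2 - bias ** 2, 0.0))  # unbiased RMSE
        r = np.corrcoef(sat, insitu)[0, 1]                 # correlation coefficient
        return bias, rmse, ubrmse, r

    print(sm_metrics([0.21, 0.25, 0.30], [0.18, 0.26, 0.27]))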
  • G. Das, P. J. Vance, D. Kerr, S. A. Coleman, T. M. McGinnity, and J. K. Liu, “Computational modelling of salamander retinal ganglion cells using machine learning approaches,” Neurocomputing, vol. 325, p. 101–112, 2019. doi:10.1016/j.neucom.2018.10.004
    [BibTeX] [Abstract] [Download PDF]

    Artificial vision using computational models that can mimic biological vision is an area of ongoing research. One of the main themes within this research is the study of the retina and in particular, retinal ganglion cells which are responsible for encoding the visual stimuli. A common approach to modelling the internal processes of retinal ganglion cells is the use of a linear–non-linear cascade model, which models the cell's response using a linear filter followed by a static non-linearity. However, the resulting model is generally restrictive as it is often a poor estimator of the neuron's response. In this paper we present an alternative to the linear–non-linear model by modelling retinal ganglion cells using a number of machine learning techniques which have a proven track record for learning complex non-linearities in many different domains. A comparison of the model predicted spike rate shows that the machine learning models perform better than the standard linear–non-linear approach in the case of temporal white noise stimuli.

    @article{lincoln40818,
    volume = {325},
    month = {January},
    author = {Gautham Das and Philip J. Vance and Dermot Kerr and Sonya A. Coleman and Thomas M. McGinnity and Jian K. Liu},
    title = {Computational modelling of salamander retinal ganglion cells using machine learning approaches},
    publisher = {Elsevier},
    year = {2019},
    journal = {Neurocomputing},
    doi = {10.1016/j.neucom.2018.10.004},
    pages = {101--112},
    url = {https://eprints.lincoln.ac.uk/id/eprint/40818/},
    abstract = {Artificial vision using computational models that can mimic biological vision is an area of ongoing research. One of the main themes within this research is the study of the retina and in particular, retinal ganglion cells which are responsible for encoding the visual stimuli. A common approach to modelling the internal processes of retinal ganglion cells is the use of a linear--non-linear cascade model, which models the cell's response using a linear filter followed by a static non-linearity. However, the resulting model is generally restrictive as it is often a poor estimator of the neuron's response. In this paper we present an alternative to the linear--non-linear model by modelling retinal ganglion cells using a number of machine learning techniques which have a proven track record for learning complex non-linearities in many different domains. A comparison of the model predicted spike rate shows that the machine learning models perform better than the standard linear--non-linear approach in the case of temporal white noise stimuli.}
    }
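    To illustrate the contrast drawn above, the sketch below fits a linear–non-linear (LN) model and a machine-learning regressor to synthetic white-noise responses; the stimulus, filter, and non-linearity are invented stand-ins, not the paper's recordings.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(1)
    stim = rng.normal(size=(2000, 20))           # white-noise stimulus windows
    true_filter = np.exp(-np.arange(20) / 5.0)   # invented temporal filter
    rate = np.maximum(stim @ true_filter, 0) ** 1.5   # hidden non-linearity
    rate += rng.normal(scale=0.1, size=rate.shape)

    # LN model: least-squares linear filter followed by a static rectifier.
    w, *_ = np.linalg.lstsq(stim, rate, rcond=None)
    ln_pred = np.maximum(stim @ w, 0)

    # ML alternative: a random forest learns the non-linearity directly.
    rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(stim, rate)
    rf_pred = rf.predict(stim)

    for name, pred in [("LN", ln_pred), ("RF", rf_pred)]:
        print(name, np.corrcoef(pred, rate)[0, 1])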
  • T. Pardi, R. Stolkin, and A. G. Esfahani, “Choosing grasps to enable collision-free post-grasp manipulations,” Ieee-ras 18th international conference on humanoid robots (humanoids), 2019. doi:10.1109/HUMANOIDS.2018.8625027
    [BibTeX] [Abstract] [Download PDF]

    Consider the task of grasping the handle of a door, and then pushing it until the door opens. These two fundamental robotics problems (selecting secure grasps of a hand on an object, e.g. the door handle, and planning collision-free trajectories of a robot arm that will move that object along a desired path) have predominantly been studied separately from one another. Thus, much of the grasping literature overlooks the fundamental purpose of grasping objects, which is typically to make them move in desirable ways. Given a desired post-grasp trajectory of the object, different choices of grasp will often determine whether or not collision-free post-grasp motions of the arm can be found, which will deliver that trajectory. We address this problem by examining a number of possible stable grasping configurations on an object. For each stable grasp, we explore the motion space of the manipulator which would be needed for post-grasp motions, to deliver the object along the desired trajectory. A criterion, based on potential fields in the post-grasp motion space, is used to assign a collision-cost to each grasp. A grasping configuration is then selected which enables the desired post-grasp object motion while minimising the proximity of all robot parts to obstacles during motion. We demonstrate our method with peg-in-hole and pick-and-place experiments in cluttered scenes, using a Franka Panda robot. Our approach is effective in selecting appropriate grasps, which enable both stable grasp and also desired post-grasp movements without collisions. We also show that, when grasps are selected based on grasp stability alone, without consideration for desired post-grasp manipulations, the corresponding post-grasp movements of the manipulator may result in collisions.

    @article{lincoln35570,
    month = {January},
    title = {Choosing grasps to enable collision-free post-grasp manipulations},
    author = {Tommaso Pardi and Rustam Stolkin and Amir Ghalamzan Esfahani},
    publisher = {IEEE},
    year = {2019},
    doi = {10.1109/HUMANOIDS.2018.8625027},
    journal = {IEEE-RAS 18th International Conference on Humanoid Robots (Humanoids)},
    url = {https://eprints.lincoln.ac.uk/id/eprint/35570/},
    abstract = {Consider the task of grasping the handle of a door, and then pushing it until the door opens. These two fundamental robotics problems (selecting secure grasps of a hand on an object, e.g. the door handle, and planning collision-free trajectories of a robot arm that will move that object along a desired path) have predominantly been studied separately from one another. Thus, much of the grasping literature overlooks the fundamental purpose of grasping objects, which is typically to make them move in desirable ways. Given a desired post-grasp trajectory of the object, different choices of grasp will often determine whether or not collision-free post-grasp motions of the arm can be found, which will deliver that trajectory. We address this problem by examining a number of possible stable grasping configurations on an object. For each stable grasp, we explore the motion space of the manipulator which would be needed for post-grasp motions, to deliver the object along the desired trajectory. A criterion, based on potential fields in the post-grasp motion space, is used to assign a collision-cost to each grasp. A grasping configuration is then selected which enables the desired post-grasp object motion while minimising the proximity of all robot parts to obstacles during motion. We demonstrate our method with peg-in-hole and pick-and-place experiments in cluttered scenes, using a Franka Panda robot. Our approach is effective in selecting appropriate grasps, which enable both stable grasp and also desired post-grasp movements without collisions. We also show that, when grasps are selected based on grasp stability alone, without consideration for desired post-grasp manipulations, the corresponding post-grasp movements of the manipulator may result in collisions.}
    }
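    The selection criterion above can be sketched in a few lines: score each candidate stable grasp by a potential-field-style collision cost accumulated along its post-grasp trajectory, and keep the cheapest. Obstacles and trajectories below are invented 2-D stand-ins, not the paper's setup.

    import numpy as np

    obstacles = np.array([[0.5, 0.4], [0.2, 0.8]])   # invented point obstacles

    def collision_cost(trajectory, d0=0.3):
        # Repulsive potential summed over waypoints: grows sharply when
        # the arm passes within d0 of the nearest obstacle.
        cost = 0.0
        for p in trajectory:
            d = np.linalg.norm(obstacles - p, axis=1).min()
            if d < d0:
                cost += (1.0 / d - 1.0 / d0) ** 2
        return cost

    # One invented post-grasp arm trajectory per candidate stable grasp.
    candidate_trajs = {
        "grasp_top":  np.linspace([0.0, 0.0], [1.0, 1.0], 50),
        "grasp_side": np.linspace([0.0, 0.2], [1.0, 0.1], 50),
    }
    best = min(candidate_trajs, key=lambda g: collision_cost(candidate_trajs[g]))
    print(best)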
  • R. C. Tieppo, T. L. Romanelli, M. Milan, C. A. G. Sørensen, and D. Bochtis, “Modeling cost and energy demand in agricultural machinery fleets for soybean and maize cultivated using a no-tillage system,” Computers and electronics in agriculture, vol. 156, p. 282–292, 2019. doi:10.1016/j.compag.2018.11.032
    [BibTeX] [Abstract] [Download PDF]

    Climate, area expansion and the possibility to grow soybean and maize within a same season using the no-tillage system and mechanized agriculture are factors that promoted the agriculture growth in Mato Grosso State – Brazil. Mechanized operations represent around 23\% of production costs for maize and soybean, demanding a considerably powerful machinery. Energy balance is a tool to verify the sustainability level of mechanized system. Regarding the sustainability components profit and environment, this study aims to develop a deterministic model for agricultural machinery costs and energy demand for no-tillage system production of soybean and maize crops. In addition, scenario simulation aids to analyze the influence of fleet sizing regarding cost and energy demand. The development of the deterministic model consists on equations and data retrieved from literature. A simulation was developed for no-tillage soybean production system in Brazil, considering three basic mechanized operations (sowing, spraying and harvesting). Thereby, for those operations, three sizes of machinery commercially available and regularly used (small, medium, large) and seven levels of cropping area (500, 1000, 2000, 4000, 6000, 8000 and 10,000 ha) were used. The developed model was consistent for predictions of power demand, fuel consumption and costs. We noticed that the increase in area size implies in more working time for the machinery, which decreases the cost difference among the combinations. The greatest difference for the smallest area (500 ha) was 22.1 and 94.8\% for sowing and harvesting operations, respectively. For 4000 and 10,000 ha, the difference decreased to 1.30 and 0.20\%. Simulated scenarios showed the importance of determining operational cost and energy demand when energy efficiency is desired.

    @article{lincoln34502,
    volume = {156},
    month = {January},
    author = {Rafael Ceasar Tieppo and Thiago Lib{\'o}rio Romanelli and Marcos Milan and Claus Aage Gr{\o}n S{\o}rensen and Dionysis Bochtis},
    title = {Modeling cost and energy demand in agricultural machinery fleets for soybean and maize cultivated using a no-tillage system},
    publisher = {Elsevier},
    year = {2019},
    journal = {Computers and Electronics in Agriculture},
    doi = {10.1016/j.compag.2018.11.032},
    pages = {282--292},
    url = {https://eprints.lincoln.ac.uk/id/eprint/34502/},
    abstract = {Climate, area expansion and the possibility to grow soybean and maize within a same season using the no-tillage system and mechanized agriculture are factors that promoted the agriculture growth in Mato Grosso State -- Brazil. Mechanized operations represent around 23\% of production costs for maize and soybean, demanding a considerably powerful machinery. Energy balance is a tool to verify the sustainability level of mechanized system. Regarding the sustainability components profit and environment, this study aims to develop a deterministic model for agricultural machinery costs and energy demand for no-tillage system production of soybean and maize crops. In addition, scenario simulation aids to analyze the influence of fleet sizing regarding cost and energy demand. The development of the deterministic model consists on equations and data retrieved from literature. A simulation was developed for no-tillage soybean production system in Brazil, considering three basic mechanized operations (sowing, spraying and harvesting). Thereby, for those operations, three sizes of machinery commercially available and regularly used (small, medium, large) and seven levels of cropping area (500, 1000, 2000, 4000, 6000, 8000 and 10,000 ha) were used. The developed model was consistent for predictions of power demand, fuel consumption and costs. We noticed that the increase in area size implies in more working time for the machinery, which decreases the cost difference among the combinations. The greatest difference for the smallest area (500 ha) was 22.1 and 94.8\% for sowing and harvesting operations, respectively. For 4000 and 10,000 ha, the difference decreased to 1.30 and 0.20\%. Simulated scenarios showed the importance of determining operational cost and energy demand when energy efficiency is desired.}
    }
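    A drastically simplified deterministic sketch in the spirit of the model above (all coefficients invented, not the paper's equations): area and work rate determine machine hours, which in turn drive fuel use, cost and energy demand.

    def operation_cost(area_ha, work_rate_ha_h, fuel_l_h, fuel_price=1.2,
                       fixed_cost_year=15000.0, variable_cost_h=35.0):
        hours = area_ha / work_rate_ha_h        # machine hours needed
        fuel_l = hours * fuel_l_h               # diesel demand in litres
        cost = fixed_cost_year + hours * variable_cost_h + fuel_l * fuel_price
        energy_mj = fuel_l * 35.9               # approx. energy content of diesel
        return cost, energy_mj

    for area in (500, 2000, 10000):             # ha, echoing the area levels above
        print(area, operation_cost(area, work_rate_ha_h=4.5, fuel_l_h=18.0))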
  • T. Flyr and S. Parsons, “Towards adversarial training for mobile robots,” Lecture notes in computer science (including subseries lecture notes in artificial intelligence and lecture notes in bioinformatics), vol. 11649, p. 197–208, 2019. doi:10.1007/978-3-030-23807-0_17
    [BibTeX] [Download PDF]
    @article{lincoln38396,
    volume = {11649},
    author = {T. Flyr and Simon Parsons},
    note = {cited By 0},
    title = {Towards Adversarial Training for Mobile Robots},
    journal = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
    doi = {10.1007/978-3-030-23807-0\_17},
    pages = {197--208},
    year = {2019},
    url = {https://eprints.lincoln.ac.uk/id/eprint/38396/}
    }
  • J. Pajarinen, H. L. Thai, R. Akrour, J. Peters, and G. Neumann, “Compatible natural gradient policy search,” Machine learning, 2019. doi:10.1007/s10994-019-05807-0
    [BibTeX] [Abstract] [Download PDF]

    Trust-region methods have yielded state-of-the-art results in policy search. A common approach is to use KL-divergence to bound the region of trust resulting in a natural gradient policy update. We show that the natural gradient and trust region optimization are equivalent if we use the natural parameterization of a standard exponential policy distribution in combination with compatible value function approximation. Moreover, we show that standard natural gradient updates may reduce the entropy of the policy according to a wrong schedule leading to premature convergence. To control entropy reduction we introduce a new policy search method called compatible policy search (COPOS) which bounds entropy loss. The experimental results show that COPOS yields state-of-the-art results in challenging continuous control tasks and in discrete partially observable tasks.

    @article{lincoln36283,
    title = {Compatible natural gradient policy search},
    author = {J. Pajarinen and H.L. Thai and R. Akrour and J. Peters and Gerhard Neumann},
    publisher = {Springer},
    year = {2019},
    doi = {10.1007/s10994-019-05807-0},
    journal = {Machine Learning},
    url = {https://eprints.lincoln.ac.uk/id/eprint/36283/},
    abstract = {Trust-region methods have yielded state-of-the-art results in policy search. A common approach is to use KL-divergence to bound the region of trust resulting in a natural gradient policy update. We show that the natural gradient and trust region optimization are equivalent if we use the natural parameterization of a standard exponential policy distribution in combination with compatible value function approximation. Moreover, we show that standard natural gradient updates may reduce the entropy of the policy according to a wrong schedule leading to premature convergence. To control entropy reduction we introduce a new policy search method called compatible policy search (COPOS) which bounds entropy loss. The experimental results show that COPOS yields state-of-the-art results in challenging continuous control tasks and in discrete partially observable tasks.}
    }
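    A generic KL-bounded natural-gradient step of the kind this paper builds on can be written in a few lines; the sketch below is not COPOS itself and uses an invented Fisher matrix purely for illustration.

    import numpy as np

    def natural_gradient_step(theta, grad, fisher, eps=0.01):
        nat_grad = np.linalg.solve(fisher, grad)        # F^{-1} g
        quad = float(grad @ nat_grad)                   # g^T F^{-1} g
        step = np.sqrt(2.0 * eps / max(quad, 1e-12))    # keep approx. KL at eps
        return theta + step * nat_grad

    rng = np.random.default_rng(0)
    A = rng.normal(size=(5, 5))
    fisher = A @ A.T + 5.0 * np.eye(5)    # invented SPD Fisher estimate
    print(natural_gradient_step(np.zeros(5), rng.normal(size=5), fisher))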
  • A. R. Panisson, Ş. Sarkadi, P. McBurney, S. Parsons, and R. H. Bordini, “On the formal semantics of theory of mind in agent communication,” Lecture notes in computer science (including subseries lecture notes in artificial intelligence and lecture notes in bioinformatics), vol. 11327, p. 18–32, 2019. doi:10.1007/978-3-030-17294-7_2
    [BibTeX] [Download PDF]
    @article{lincoln38400,
    volume = {11327},
    author = {A.R. Panisson and {\c S}. Sarkadi and P. McBurney and Simon Parsons and R.H. Bordini},
    note = {cited By 0},
    title = {On the Formal Semantics of Theory of Mind in Agent Communication},
    journal = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
    doi = {10.1007/978-3-030-17294-7\_2},
    pages = {18--32},
    year = {2019},
    url = {https://eprints.lincoln.ac.uk/id/eprint/38400/}
    }
  • Ş. Sarkadi, A. R. Panisson, R. H. Bordini, P. McBurney, and S. Parsons, “Towards an approach for modelling uncertain theory of mind in multi-agent systems,” Lecture notes in computer science (including subseries lecture notes in artificial intelligence and lecture notes in bioinformatics), vol. 11327, p. 3–17, 2019. doi:10.1007/978-3-030-17294-7_1
    [BibTeX] [Abstract] [Download PDF]

    Applying Theory of Mind to multi-agent systems enables agents to model and reason about other agents' minds. Recent work shows that this ability could increase the performance of agents, making them more efficient than agents that lack this ability. However, modelling other agents' minds is a difficult task, given that it involves many factors of uncertainty, e.g., the uncertainty of the communication channel, the uncertainty of reading other agents correctly, and the uncertainty of trust in other agents. In this paper, we explore how agents acquire and update Theory of Mind under conditions of uncertainty. To represent uncertain Theory of Mind, we add probability estimation on a formal semantics model for agent communication based on the BDI architecture and agent communication languages.

    @article{lincoln38399,
    volume = {11327},
    author = {{\c S}. Sarkadi and A.R. Panisson and R.H. Bordini and P. McBurney and S. Parsons},
    note = {cited By 0},
    title = {Towards an Approach for Modelling Uncertain Theory of Mind in Multi-Agent Systems},
    journal = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
    doi = {10.1007/978-3-030-17294-7\_1},
    pages = {3--17},
    year = {2019},
    url = {https://eprints.lincoln.ac.uk/id/eprint/38399/},
    abstract = {Applying Theory of Mind to multi-agent systems enables agents to model and reason about other agents' minds. Recent work shows that this ability could increase the performance of agents, making them more efficient than agents that lack this ability. However, modelling other agents' minds is a difficult task, given that it involves many factors of uncertainty, e.g., the uncertainty of the communication channel, the uncertainty of reading other agents correctly, and the uncertainty of trust in other agents. In this paper, we explore how agents acquire and update Theory of Mind under conditions of uncertainty. To represent uncertain Theory of Mind, we add probability estimation on a formal semantics model for agent communication based on the BDI architecture and agent communication languages.}
    }
  • Ş. Sarkadi, A. R. Panisson, R. H. Bordini, P. McBurney, S. Parsons, and M. Chapman, “Modelling deception using theory of mind in multi-agent systems,” Ai communications, vol. 32, iss. 4, p. 287–302, 2019. doi:10.3233/AIC-190615
    [BibTeX] [Abstract] [Download PDF]

    Agreement, cooperation and trust would be straightforward if deception did not ever occur in communicative interactions. Humans have deceived one another since the species began. Do machines deceive one another or indeed humans? If they do, how may we detect this? To detect machine deception, arguably requires a model of how machines may deceive, and how such deception may be identified. Theory of Mind (ToM) provides the opportunity to create intelligent machines that are able to model the minds of other agents. The future implications of a machine that has the capability to understand other minds (human or artificial) and that also has the reasons and intentions to deceive others are dark from an ethical perspective. Being able to understand the dishonest and unethical behaviour of such machines is crucial to current research in AI. In this paper, we present a high-level approach for modelling machine deception using ToM under factors of uncertainty and we propose an implementation of this model in an Agent-Oriented Programming Language (AOPL). We show that the Multi-Agent Systems (MAS) paradigm can be used to integrate concepts from two major theories of deception, namely Information Manipulation Theory 2 (IMT2) and Interpersonal Deception Theory (IDT), and how to apply these concepts in order to build a model of computational deception that takes into account ToM. To show how agents use ToM in order to deceive, we define an epistemic agent mechanism using BDI-like architectures to analyse deceptive interactions between deceivers and their potential targets and we also explain the steps in which the model can be implemented in an AOPL. To the best of our knowledge, this work is one of the first attempts in AI that (i) uses ToM along with components of IMT2 and IDT in order to analyse deceptive interactions and (ii) implements such a model.

    @article{lincoln38401,
    volume = {32},
    number = {4},
    author = {{\c S}. Sarkadi and A.R. Panisson and R.H. Bordini and P. McBurney and S. Parsons and M. Chapman},
    note = {cited By 0},
    title = {Modelling deception using theory of mind in multi-agent systems},
    year = {2019},
    journal = {AI Communications},
    doi = {10.3233/AIC-190615},
    pages = {287--302},
    url = {https://eprints.lincoln.ac.uk/id/eprint/38401/},
    abstract = {Agreement, cooperation and trust would be straightforward if deception did not ever occur in communicative interactions. Humans have deceived one another since the species began. Do machines deceive one another or indeed humans? If they do, how may we detect this? To detect machine deception, arguably requires a model of how machines may deceive, and how such deception may be identified. Theory of Mind (ToM) provides the opportunity to create intelligent machines that are able to model the minds of other agents. The future implications of a machine that has the capability to understand other minds (human or artificial) and that also has the reasons and intentions to deceive others are dark from an ethical perspective. Being able to understand the dishonest and unethical behaviour of such machines is crucial to current research in AI. In this paper, we present a high-level approach for modelling machine deception using ToM under factors of uncertainty and we propose an implementation of this model in an Agent-Oriented Programming Language (AOPL). We show that the Multi-Agent Systems (MAS) paradigm can be used to integrate concepts from two major theories of deception, namely Information Manipulation Theory 2 (IMT2) and Interpersonal Deception Theory (IDT), and how to apply these concepts in order to build a model of computational deception that takes into account ToM. To show how agents use ToM in order to deceive, we define an epistemic agent mechanism using BDI-like architectures to analyse deceptive interactions between deceivers and their potential targets and we also explain the steps in which the model can be implemented in an AOPL. To the best of our knowledge, this work is one of the first attempts in AI that (i) uses ToM along with components of IMT2 and IDT in order to analyse deceptive interactions and (ii) implements such a model.}
    }
  • I. Sassoon, N. Kökciyan, E. Sklar, and S. Parsons, “Explainable argumentation for wellness consultation,” Lecture notes in computer science (including subseries lecture notes in artificial intelligence and lecture notes in bioinformatics), vol. 11763, p. 186–202, 2019. doi:10.1007/978-3-030-30391-4_11
    [BibTeX] [Download PDF]
    @article{lincoln38398,
    volume = {11763},
    author = {I. Sassoon and N. K{\"o}kciyan and E. Sklar and Simon Parsons},
    note = {cited By 0},
    title = {Explainable argumentation for wellness consultation},
    journal = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
    doi = {10.1007/978-3-030-30391-4\_11},
    pages = {186--202},
    year = {2019},
    url = {https://eprints.lincoln.ac.uk/id/eprint/38398/}
    }
  • I. Sassoon, N. Kökciyan, E. Sklar, and S. Parsons, “Explainable argumentation for wellness consultation,” Lecture notes in computer science (including subseries lecture notes in artificial intelligence and lecture notes in bioinformatics), vol. 11763, p. 186–202, 2019. doi:10.1007/978-3-030-30391-4
    [BibTeX] [Download PDF]
    @article{lincoln38539,
    volume = {11763},
    author = {I. Sassoon and N. K{\"o}kciyan and Elizabeth Sklar and S. Parsons},
    note = {cited By 0},
    title = {Explainable argumentation for wellness consultation},
    journal = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
    doi = {10.1007/978-3-030-30391-4},
    pages = {186--202},
    year = {2019},
    url = {https://eprints.lincoln.ac.uk/id/eprint/38539/}
    }
  • D. Zhang, E. Schneider, and E. Sklar, “A cross-landscape evaluation of multi-robot team performance in static task-allocation domains,” Lecture notes in computer science (including subseries lecture notes in artificial intelligence and lecture notes in bioinformatics), vol. 11650, p. 261–272, 2019. doi:10.1007/978-3-030-25332-5
    [BibTeX] [Download PDF]
    @article{lincoln38537,
    volume = {11650},
    author = {D. Zhang and E. Schneider and Elizabeth Sklar},
    note = {cited By 0},
    title = {A cross-landscape evaluation of multi-robot team performance in static task-allocation domains},
    journal = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
    doi = {10.1007/978-3-030-25332-5},
    pages = {261--272},
    year = {2019},
    url = {https://eprints.lincoln.ac.uk/id/eprint/38537/}
    }
  • Q. Fu, H. Wang, C. Hu, and S. Yue, “Towards computational models and applications of insect visual systems for motion perception: a review,” Artificial life, vol. 25, iss. 3, p. 263–311, 2019. doi:10.1162/artl_a_00297
    [BibTeX] [Abstract] [Download PDF]

    Motion perception is a critical capability determining a variety of aspects of insects’ life, including avoiding predators, foraging and so forth. A good number of motion detectors have been identified in the insects’ visual pathways. Computational modelling of these motion detectors has not only been providing effective solutions to artificial intelligence, but also benefiting the understanding of complicated biological visual systems. These biological mechanisms through millions of years of evolutionary development will have formed solid modules for constructing dynamic vision systems for future intelligent machines. This article reviews the computational motion perception models originating from biological research of insects’ visual systems in the literature. These motion perception models or neural networks comprise the looming sensitive neuronal models of lobula giant movement detectors (LGMDs) in locusts, the translation sensitive neural systems of direction selective neurons (DSNs) in fruit flies, bees and locusts, as well as the small target motion detectors (STMDs) in dragonflies and hover flies. We also review the applications of these models to robots and vehicles. Through these modelling studies, we summarise the methodologies that generate different direction and size selectivity in motion perception. At last, we discuss about multiple systems integration and hardware realisation of these bio-inspired motion perception models.

    @article{lincoln35584,
    volume = {25},
    number = {3},
    month = {August},
    author = {Qinbing Fu and Hongxin Wang and Cheng Hu and Shigang Yue},
    title = {Towards Computational Models and Applications of Insect Visual Systems for Motion Perception: A Review},
    publisher = {MIT Press},
    year = {2019},
    journal = {Artificial life},
    doi = {10.1162/artl\_a\_00297},
    pages = {263--311},
    url = {https://eprints.lincoln.ac.uk/id/eprint/35584/},
    abstract = {Motion perception is a critical capability determining a variety of aspects of insects' life, including avoiding predators, foraging and so forth. A good number of motion detectors have been identified in the insects' visual pathways. Computational modelling of these motion detectors has not only been providing effective solutions to artificial intelligence, but also benefiting the understanding of complicated biological visual systems. These biological mechanisms through millions of years of evolutionary development will have formed solid modules for constructing dynamic vision systems for future intelligent machines. This article reviews the computational motion perception models originating from biological research of insects' visual systems in the literature. These motion perception models or neural networks comprise the looming sensitive neuronal models of lobula giant movement detectors (LGMDs) in locusts, the translation sensitive neural systems of direction selective neurons (DSNs) in fruit flies, bees and locusts, as well as the small target motion detectors (STMDs) in dragonflies and hover flies. We also review the applications of these models to robots and vehicles. Through these modelling studies, we summarise the methodologies that generate different direction and size selectivity in motion perception. At last, we discuss about multiple systems integration and hardware realisation of these bio-inspired motion perception models.}
    }
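    As a flavour of the looming-sensitive LGMD family reviewed above, here is a deliberately crude frame-difference sketch (a generic toy, not any specific model from the paper): excitation minus blurred lateral inhibition, summed and squashed, rises as a dark disc expands.

    import numpy as np

    def lgmd_response(prev_frame, frame, w_inhib=0.6, k=3):
        diff = np.abs(frame - prev_frame)         # photoreceptor excitation
        pad = np.pad(diff, 1, mode="edge")
        # k x k mean as a crude lateral-inhibition field.
        inhib = sum(pad[i:i + diff.shape[0], j:j + diff.shape[1]]
                    for i in range(k) for j in range(k)) / (k * k)
        s = np.maximum(diff - w_inhib * inhib, 0).sum()
        return 1.0 / (1.0 + np.exp(-s / diff.size))   # squashed membrane potential

    def disc(radius, size=64):
        # Dark disc (0) on a bright background (1), centred in the frame.
        y, x = np.mgrid[:size, :size] - size // 2
        return (x ** 2 + y ** 2 > radius ** 2).astype(float)

    frames = [disc(r) for r in (5, 8, 12, 18, 26)]    # expanding disc: looming
    print([round(lgmd_response(a, b), 3) for a, b in zip(frames, frames[1:])])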
  • T. Zhivkov, E. Schneider, and E. Sklar, “Mrcomm: multi-robot communication testbed,” Lecture notes in computer science (including subseries lecture notes in artificial intelligence and lecture notes in bioinformatics), vol. 11650, p. 346–357, 2019. doi:10.1007/978-3-030-25332-5
    [BibTeX] [Download PDF]
    @article{lincoln38538,
    volume = {11650},
    author = {Tsvetan Zhivkov and E. Schneider and Elizabeth Sklar},
    note = {cited By 0},
    title = {MRComm: Multi-robot communication testbed},
    journal = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
    doi = {10.1007/978-3-030-25332-5},
    pages = {346--357},
    year = {2019},
    url = {https://eprints.lincoln.ac.uk/id/eprint/38538/}
    }
  • R. Madigan, S. Nordhoff, C. Fox, R. E. Amina, T. Louw, M. Wilbrink, A. Schieben, and N. Merat, “Understanding interactions between automated road transport systems and other road users: a video analysis,” Transportation research part f, vol. 66, p. 196–213, 2019. doi:10.1016/j.trf.2019.09.006
    [BibTeX] [Abstract] [Download PDF]

    If automated vehicles (AVs) are to move efficiently through the traffic environment, there is a need for them to interact and communicate with other road users in a comprehensible and predictable manner. For this reason, an understanding of the interaction requirements of other road users is needed. The current study investigated these requirements through an analysis of 22 hours of video footage of the CityMobil2 AV demonstrations in La Rochelle (France) and Trikala (Greece). Manual and automated video-analysis techniques were used to identify typical interaction patterns between AVs and other road users. Results indicate that road infrastructure and road user factors had a major impact on the type of interactions that arose between AVs and other road users. Road infrastructure features such as road width, and the presence or absence of zebra crossings had an impact on road users' trajectory decisions while approaching an AV. Where possible, pedestrians and cyclists appeared to leave as much space as possible between their trajectories and that of the AV. However, in situations where the infrastructure did not allow for the separation of traffic, risky behaviours were more likely to emerge, with cyclists, in particular, travelling closely alongside the AVs on narrow paths of the road, rather than waiting for the AV to pass. In addition, the types of interaction varied considerably across socio-demographic groups, with females and older users more likely to show cautionary behaviour around the AVs than males, or younger road users. Overall, the results highlight the importance of implementing the correct infrastructure to support the safe introduction of AVs, while also ensuring that the behaviour of the AV matches other road users' expectations as closely as possible in order to avoid traffic conflicts.

    @article{lincoln36914,
    volume = {66},
    month = {October},
    author = {Ruth Madigan and Sina Nordhoff and Charles Fox and Roja Ezzati Amina and Tyron Louw and Marc Wilbrink and Anna Schieben and Natasha Merat},
    title = {Understanding interactions between Automated Road Transport Systems and other road users: A video analysis},
    publisher = {Elsevier},
    year = {2019},
    journal = {Transportation Research Part F},
    doi = {10.1016/j.trf.2019.09.006},
    pages = {196--213},
    url = {https://eprints.lincoln.ac.uk/id/eprint/36914/},
    abstract = {If automated vehicles (AVs) are to move efficiently through the traffic environment, there is a need for them to interact and communicate with other road users in a comprehensible and predictable manner. For this reason, an understanding of the interaction requirements of other road users is needed. The current study investigated these requirements through an analysis of 22 hours of video footage of the CityMobil2 AV demonstrations in La Rochelle (France) and Trikala (Greece). Manual and automated video-analysis techniques were used to identify typical interaction patterns between AVs and other road users. Results indicate that road infrastructure and road user factors had a major impact on the type of interactions that arose between AVs and other road users. Road infrastructure features such as road width, and the presence or absence of zebra crossings had an impact on road users' trajectory decisions while approaching an AV. Where possible, pedestrians and cyclists appeared to leave as much space as possible between their trajectories and that of the AV. However, in situations where the infrastructure did not allow for the separation of traffic, risky behaviours were more likely to emerge, with cyclists, in particular, travelling closely alongside the AVs on narrow paths of the road, rather than waiting for the AV to pass. In addition, the types of interaction varied considerably across socio-demographic groups, with females and older users more likely to show cautionary behaviour around the AVs than males, or younger road users. Overall, the results highlight the importance of implementing the correct infrastructure to support the safe introduction of AVs, while also ensuring that the behaviour of the AV matches other road users' expectations as closely as possible in order to avoid traffic conflicts.}
    }
  • Z. Maamar, T. Baker, N. Faci, M. Al-Khafajiy, E. Ugljanin, Y. Atif, and M. Sellami, “Weaving cognition into the internet-of-things: application to water leaks,” Cognitive systems research, vol. 56, p. 233–245, 2019. doi:10.1016/j.cogsys.2019.04.001
    [BibTeX] [Abstract] [Download PDF]

    Despite the growing interest in the Internet-of-Things, many organizations remain reluctant to integrate things into their business processes. Different reasons justify this reluctance including things' limited capabilities to act upon the cyber-physical surrounding in which they operate. To address this specific limitation, this paper examines thing empowerment with cognitive capabilities that would make them for instance, selective of the next business processes in which they would participate. The selection is based on things' restrictions like limitedness and goals to achieve like improved reputation. For demonstration purposes, water leaks are used as a case study. A BPEL-based business process driving the fixing of water leaks is implemented involving different cognitive things like moisture sensor.

    @article{lincoln47560,
    volume = {56},
    month = {August},
    author = {Zakaria Maamar and Thar Baker and Noura Faci and Mohammed Al-Khafajiy and Emir Ugljanin and Yacine Atif and Mohamed Sellami},
    title = {Weaving cognition into the internet-of-things: Application to water leaks},
    publisher = {Elsevier},
    year = {2019},
    journal = {Cognitive Systems Research},
    doi = {10.1016/j.cogsys.2019.04.001},
    pages = {233--245},
    url = {https://eprints.lincoln.ac.uk/id/eprint/47560/},
    abstract = {Despite the growing interest in the Internet-of-Things, many organizations remain reluctant to integrate things into their business processes. Different reasons justify this reluctance including things' limited capabilities to act upon the cyber-physical surrounding in which they operate. To address this specific limitation, this paper examines thing empowerment with cognitive capabilities that would make them for instance, selective of the next business processes in which they would participate. The selection is based on things' restrictions like limitedness and goals to achieve like improved reputation. For demonstration purposes, water leaks are used as a case study. A BPEL-based business process driving the fixing of water leaks is implemented involving different cognitive things like moisture sensor.}
    }
  • C. Zhao, L. Sun, Z. Yan, G. Neumann, T. Duckett, and R. Stolkin, “Learning kalman network: a deep monocular visual odometry for on-road driving,” Robotics and autonomous systems, vol. 121, p. 103234, 2019. doi:10.1016/j.robot.2019.07.004
    [BibTeX] [Abstract] [Download PDF]

    This paper proposes a Learning Kalman Network (LKN) based monocular visual odometry (VO), i.e. LKN-VO, for on-road driving. Most existing learning-based VO focus on ego-motion estimation by comparing the two most recent consecutive frames. By contrast, the LKN-VO incorporates a learning ego-motion estimation through the current measurement, and a discriminative state estimator through a sequence of previous measurements. Superior to the model-based monocular VO, a more accurate absolute scale can be learned by LKN without any geometric constraints. In contrast to the model-based Kalman Filter (KF), the optimal model parameters of LKN can be obtained from dynamic and deterministic outputs of the neural network without elaborate human design. LKN is a hybrid approach where we achieve the non-linearity of the observation model and the transition model through deep neural networks, and update the state following the Kalman probabilistic mechanism. In contrast to the learning-based state estimator, a sparse representation is further proposed to learn the correlations within the states from the car's movement behaviour, thereby applying better filtering on the 6DOF trajectory for on-road driving. The experimental results show that the proposed LKN-VO outperforms both model-based and learning state-estimator-based monocular VO on the most well-cited on-road driving datasets, i.e. KITTI and Apolloscape. In addition, LKN-VO is integrated with dense 3D mapping, which can be deployed for simultaneous localization and mapping in urban environments.

    @article{lincoln43351,
    volume = {121},
    month = {November},
    author = {Cheng Zhao and Li Sun and Zhi Yan and Gerhard Neumann and Tom Duckett and Rustam Stolkin},
    title = {Learning Kalman Network: A deep monocular visual odometry for on-road driving},
    publisher = {Elsevier},
    year = {2019},
    journal = {Robotics and Autonomous Systems},
    doi = {10.1016/j.robot.2019.07.004},
    pages = {103234},
    url = {https://eprints.lincoln.ac.uk/id/eprint/43351/},
    abstract = {This paper proposes a Learning Kalman Network (LKN) based monocular visual odometry (VO), i.e. LKN-VO, for on-road driving. Most existing learning-based VO focus on ego-motion estimation by comparing the two most recent consecutive frames. By contrast, the LKN-VO incorporates a learning ego-motion estimation through the current measurement, and a discriminative state estimator through a sequence of previous measurements. Superior to the model-based monocular VO, a more accurate absolute scale can be learned by LKN without any geometric constraints. In contrast to the model-based Kalman Filter (KF), the optimal model parameters of LKN can be obtained from dynamic and deterministic outputs of the neural network without elaborate human design. LKN is a hybrid approach where we achieve the non-linearity of the observation model and the transition model through deep neural networks, and update the state following the Kalman probabilistic mechanism. In contrast to the learning-based state estimator, a sparse representation is further proposed to learn the correlations within the states from the car's movement behaviour, thereby applying better filtering on the 6DOF trajectory for on-road driving. The experimental results show that the proposed LKN-VO outperforms both model-based and learning state-estimator-based monocular VO on the most well-cited on-road driving datasets, i.e. KITTI and Apolloscape. In addition, LKN-VO is integrated with dense 3D mapping, which can be deployed for simultaneous localization and mapping in urban environments.}
    }
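    The hybrid structure described above, learned models inside a Kalman-style probabilistic update, can be outlined compactly. The sketch below uses linear stand-ins where LKN uses deep networks, so it shows only the filtering skeleton, not the paper's method.

    import numpy as np

    def kalman_step(x, P, z, F, H, Q, R):
        x_pred = F @ x                    # predict with the transition model
        P_pred = F @ P @ F.T + Q
        S = H @ P_pred @ H.T + R          # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)
        x_new = x_pred + K @ (z - H @ x_pred)   # correct with the measurement
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new

    x, P = np.zeros(2), np.eye(2)         # e.g. a 2-D pose state
    F, H = np.eye(2), np.eye(2)           # linear stand-ins for learned models
    Q, R = 0.01 * np.eye(2), 0.1 * np.eye(2)
    x, P = kalman_step(x, P, z=np.array([0.3, -0.1]), F=F, H=H, Q=Q, R=R)
    print(x)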
  • T. Krajnik, T. Vintr, S. M. Mellado, J. P. Fentanes, G. Cielniak, O. M. Mozos, G. Broughton, and T. Duckett, “Warped hypertime representations for long-term autonomy of mobile robots,” Ieee robotics and automation letters, vol. 4, iss. 4, p. 3310–3317, 2019. doi:10.1109/LRA.2019.2926682
    [BibTeX] [Abstract] [Download PDF]

    This letter presents a novel method for introducing time into discrete and continuous spatial representations used in mobile robotics, by modeling long-term, pseudo-periodic variations caused by human activities or natural processes. Unlike previous approaches, the proposed method does not treat time and space separately, and its continuous nature respects both the temporal and spatial continuity of the modeled phenomena. The key idea is to extend the spatial model with a set of wrapped time dimensions that represent the periodicities of the observed events. By performing clustering over this extended representation, we obtain a model that allows the prediction of probabilistic distributions of future states and events in both discrete and continuous spatial representations. We apply the proposed algorithm to several long-term datasets acquired by mobile robots and show that the method enables a robot to predict future states of representations with different dimensions. The experiments further show that the method achieves more accurate predictions than the previous state of the art.

    @article{lincoln36962,
    volume = {4},
    number = {4},
    month = {October},
    author = {Tomas Krajnik and Tomas Vintr and Sergi Molina Mellado and Jaime Pulido Fentanes and Grzegorz Cielniak and Oscar Martinez Mozos and George Broughton and Tom Duckett},
    title = {Warped Hypertime Representations for Long-Term Autonomy of Mobile Robots},
    publisher = {IEEE},
    year = {2019},
    journal = {IEEE Robotics and Automation Letters},
    doi = {10.1109/LRA.2019.2926682},
    pages = {3310--3317},
    url = {https://eprints.lincoln.ac.uk/id/eprint/36962/},
    abstract = {This letter presents a novel method for introducing time into discrete and continuous spatial representations used in mobile robotics, by modeling long-term, pseudo-periodic variations caused by human activities or natural processes. Unlike previous approaches, the proposed method does not treat time and space separately, and its continuous nature respects both the temporal and spatial continuity of the modeled phenomena. The key idea is to extend the spatial model with a set of wrapped time dimensions that represent the periodicities of the observed events. By performing clustering over this extended representation, we obtain a model that allows the prediction of probabilistic distributions of future states and events in both discrete and continuous spatial representations. We apply the proposed algorithm to several long-term datasets acquired by mobile robots and show that the method enables a robot to predict future states of representations with different dimensions. The experiments further show that the method achieves more accurate predictions than the previous state of the art.}
    }
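    The warped-hypertime construction is simple enough to sketch directly: wrap time onto a circle for each assumed periodicity, append those coordinates to the spatial ones, and cluster the joint points. The periods, data, and mixture model below are illustrative choices, not the authors' implementation.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def hypertime(xy, t, periods=(86400.0, 604800.0)):
        # Append (cos, sin) pairs per assumed periodicity (here a day
        # and a week, in seconds) to the spatial coordinates.
        cols = [xy]
        for T in periods:
            cols += [np.cos(2 * np.pi * t / T)[:, None],
                     np.sin(2 * np.pi * t / T)[:, None]]
        return np.hstack(cols)

    rng = np.random.default_rng(0)
    xy = rng.uniform(0, 10, size=(500, 2))      # observed event locations
    t = rng.uniform(0, 14 * 86400, size=500)    # event times over two weeks
    model = GaussianMixture(n_components=4).fit(hypertime(xy, t))
    # Densities queried at a future time give a spatio-temporal prediction.
    print(model.score_samples(hypertime(xy[:3], t[:3] + 86400.0)))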
  • E. Senft, S. Lemaignan, P. Baxter, M. Bartlett, and T. Belpaeme, “Teaching robots social autonomy from in situ human guidance,” Science robotics, vol. 4, iss. 35, p. eaat1186, 2019. doi:10.1126/scirobotics.aat1186
    [BibTeX] [Abstract] [Download PDF]

    Striking the right balance between robot autonomy and human control is a core challenge in social robotics, in both technical and ethical terms. On the one hand, extended robot autonomy offers the potential for increased human productivity and for the off-loading of physical and cognitive tasks. On the other hand, making the most of human technical and social expertise, as well as maintaining accountability, is highly desirable. This is particularly relevant in domains such as medical therapy and education, where social robots hold substantial promise, but where there is a high cost to poorly performing autonomous systems, compounded by ethical concerns. We present a field study in which we evaluate SPARC (supervised progressively autonomous robot competencies), an innovative approach addressing this challenge whereby a robot progressively learns appropriate autonomous behavior from in situ human demonstrations and guidance. Using online machine learning techniques, we demonstrate that the robot could effectively acquire legible and congruent social policies in a high-dimensional child-tutoring situation needing only a limited number of demonstrations while preserving human supervision whenever desirable. By exploiting human expertise, our technique enables rapid learning of autonomous social and domain-specific policies in complex and nondeterministic environments. Last, we underline the generic properties of SPARC and discuss how this paradigm is relevant to a broad range of difficult human-robot interaction scenarios.

    @article{lincoln38234,
    volume = {4},
    number = {35},
    month = {October},
    author = {Emmanuel Senft and S{\'e}verin Lemaignan and Paul Baxter and Madeleine Bartlett and Tony Belpaeme},
    title = {Teaching robots social autonomy from in situ human guidance},
    publisher = {American Association for the Advancement of Science},
    year = {2019},
    journal = {Science Robotics},
    doi = {10.1126/scirobotics.aat1186},
    pages = {eaat1186},
    url = {https://eprints.lincoln.ac.uk/id/eprint/38234/},
    abstract = {Striking the right balance between robot autonomy and human control is a core challenge in social robotics, in both technical and ethical terms. On the one hand, extended robot autonomy offers the potential for increased human productivity and for the off-loading of physical and cognitive tasks. On the other hand, making the most of human technical and social expertise, as well as maintaining accountability, is highly desirable. This is particularly relevant in domains such as medical therapy and education, where social robots hold substantial promise, but where there is a high cost to poorly performing autonomous systems, compounded by ethical concerns. We present a field study in which we evaluate SPARC (supervised progressively autonomous robot competencies), an innovative approach addressing this challenge whereby a robot progressively learns appropriate autonomous behavior from in situ human demonstrations and guidance. Using online machine learning techniques, we demonstrate that the robot could effectively acquire legible and congruent social policies in a high-dimensional child-tutoring situation needing only a limited number of demonstrations while preserving human supervision whenever desirable. By exploiting human expertise, our technique enables rapid learning of autonomous social and domain-specific policies in complex and nondeterministic environments. Last, we underline the generic properties of SPARC and discuss how this paradigm is relevant to a broad range of difficult human-robot interaction scenarios.}
    }
  • A. Kucukyilmaz and I. Issak, “Online identification of interaction behaviors from haptic data during collaborative object transfer,” Ieee robotics and automation letters, p. 1–1, 2019. doi:10.1109/LRA.2019.2945261
    [BibTeX] [Abstract] [Download PDF]

    Joint object transfer is a complex task, which is less structured and less specific than what exists in several industrial settings. When two humans are involved in such a task, they cooperate through different modalities to understand the interaction states during operation and mutually adapt to one another's actions. Mutual adaptation implies that both partners can identify how well they collaborate (i.e. infer about the interaction state) and act accordingly. These interaction states can define whether the partners work in harmony, face conflicts, or remain passive during interaction. Understanding how two humans work together during physical interactions is important when exploring the ways a robotic assistant should operate under similar settings. This study acts as a first step to implement an automatic classification mechanism during ongoing collaboration to identify the interaction state during object co-manipulation. The classification is done on a dataset consisting of data from 40 subjects, who are partnered to form 20 dyads. The dyads experiment in a physical human-human interaction (pHHI) scenario to move an object in a haptics-enabled virtual environment to reach predefined goal configurations. In this study, we propose a sliding-window approach for feature extraction and demonstrate the online classification methodology to identify interaction patterns. We evaluate our approach using 1) a support vector machine classifier (SVMc) and 2) a Gaussian Process classifier (GPc) for multi-class classification, and achieve over 80\% accuracy with both classifiers when identifying general interaction types.

    @article{lincoln37631,
    month = {October},
    author = {Ayse Kucukyilmaz and Illimar Issak},
    title = {Online Identification of Interaction Behaviors from Haptic Data during Collaborative Object Transfer},
    publisher = {IEEE},
    journal = {IEEE Robotics and Automation Letters},
    doi = {10.1109/LRA.2019.2945261},
    pages = {1--1},
    year = {2019},
    url = {https://eprints.lincoln.ac.uk/id/eprint/37631/},
    abstract = {Joint object transfer is a complex task, which is less structured and less specific than what exists in several industrial settings. When two humans are involved in such a task, they cooperate through different modalities to understand the interaction states during operation and mutually adapt to one another's actions. Mutual adaptation implies that both partners can identify how well they collaborate (i.e. infer about the interaction state) and act accordingly. These interaction states can define whether the partners work in harmony, face conflicts, or remain passive during interaction. Understanding how two humans work together during physical interactions is important when exploring the ways a robotic assistant should operate under similar settings. This study acts as a first step to implement an automatic classification mechanism during ongoing collaboration to identify the interaction state during object co-manipulation. The classification is done on a dataset consisting of data from 40 subjects, who are partnered to form 20 dyads. The dyads experiment in a physical human-human interaction (pHHI) scenario to move an object in a haptics-enabled virtual environment to reach predefined goal configurations. In this study, we propose a sliding-window approach for feature extraction and demonstrate the online classification methodology to identify interaction patterns. We evaluate our approach using 1) a support vector machine classifier (SVMc) and 2) a Gaussian Process classifier (GPc) for multi-class classification, and achieve over 80\% accuracy with both classifiers when identifying general interaction types.}
    }
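
    A minimal sketch of the sliding-window classification idea from the entry above, assuming synthetic one-dimensional force signals and simple window statistics in place of the authors' haptic features:

    import numpy as np
    from sklearn.svm import SVC

    def window_features(signal, width, step):
        """Slide a window over the signal and emit simple summary features."""
        feats = []
        for start in range(0, len(signal) - width + 1, step):
            w = signal[start:start + width]
            feats.append([w.mean(), w.std(), np.abs(np.diff(w)).mean()])
        return np.array(feats)

    rng = np.random.default_rng(1)
    harmony = rng.normal(0.0, 0.2, 2000)   # placeholder "harmony" force signal
    conflict = rng.normal(0.0, 1.0, 2000)  # placeholder "conflict" force signal

    X = np.vstack([window_features(harmony, 100, 50),
                   window_features(conflict, 100, 50)])
    y = np.array([0] * (len(X) // 2) + [1] * (len(X) // 2))

    clf = SVC(kernel="rbf").fit(X, y)      # a GP classifier could be swapped in
    print(clf.predict(window_features(rng.normal(0.0, 1.0, 300), 100, 50)))
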
  • A. Postnikov, I. Albayati, S. Pearson, C. Bingham, R. Bickerton, and A. Zolotas, “Facilitating static firm frequency response with aggregated networks of commercial food refrigeration systems,” Applied energy, vol. 251, p. 113357, 2019. doi:10.1016/j.apenergy.2019.113357
    [BibTeX] [Abstract] [Download PDF]

    Aggregated electrical loads from massive numbers of distributed retail refrigeration systems could have a significant role in frequency balancing services. To date, no study has realised effective engineering applications of static firm frequency response to these aggregated networks. Here, the authors present a novel and validated approach that enables large scale control of distributed retail refrigeration assets. The authors show a validated model that simulates the operation of retail refrigerators comprising centralised compressor packs feeding multiple in-store display cases. The model was used to determine an optimal control strategy that both minimised the engineering risk to the pack during shut down and potential impacts to food safety. The authors show that following a load shedding frequency response trigger the pack should be allowed to maintain operation but with increased suction pressure set-point. This reduces compressor load whilst enabling a continuous flow of refrigerant to food cases. In addition, the authors simulated an aggregated response of up to three hundred compressor packs (over 2 MW capacity), with refrigeration cases on hysteresis and modulation control. Hysteresis control, compared to modulation, led to undesired load oscillations when the system recovers after a frequency balancing event. Transient responses of the system during the event showed significant fluctuations of active power when compressor network responds to both primary and secondary parts of a frequency balancing event. Enabling frequency response within this system is demonstrated by linking the aggregated refrigeration loads with a simplified power grid model that simulates a power loss incident.

    @article{lincoln36072,
    volume = {251},
    month = {October},
    author = {Andrey Postnikov and Ibrahim Albayati and Simon Pearson and Chris Bingham and Ronald Bickerton and Argyrios Zolotas},
    title = {Facilitating static firm frequency response with aggregated networks of commercial food refrigeration systems},
    publisher = {Elsevier},
    year = {2019},
    journal = {Applied Energy},
    doi = {10.1016/j.apenergy.2019.113357},
    pages = {113357},
    url = {https://eprints.lincoln.ac.uk/id/eprint/36072/},
    abstract = {Aggregated electrical loads from massive numbers of distributed retail refrigeration systems could have a significant role in frequency balancing services. To date, no study has realised effective engineering applications of static firm frequency response to these aggregated networks. Here, the authors present a novel and validated approach that enables large scale control of distributed retail refrigeration assets. The authors show a validated model that simulates the operation of retail refrigerators comprising centralised compressor packs feeding multiple in-store display cases. The model was used to determine an optimal control strategy that both minimised the engineering risk to the pack during shut down and potential impacts to food safety. The authors show that following a load shedding frequency response trigger the pack should be allowed to maintain operation but with increased suction pressure set-point. This reduces compressor load whilst enabling a continuous flow of refrigerant to food cases. In addition, the authors simulated an aggregated response of up to three hundred compressor packs (over 2 MW capacity), with refrigeration cases on hysteresis and modulation control. Hysteresis control, compared to modulation, led to undesired load oscillations when the system recovers after a frequency balancing event. Transient responses of the system during the event showed significant fluctuations of active power when compressor network responds to both primary and secondary parts of a frequency balancing event. Enabling frequency response within this system is demonstrated by linking the aggregated refrigeration loads with a simplified power grid model that simulates a power loss incident.}
    }
  • L. Baronti, M. Alston, N. Mavrakis, A. M. G. Esfahani, and M. Castellani, “Primitive shape fitting in point clouds using the bees algorithm,” Advances in automation and robotics, vol. 9, iss. 23, 2019. doi:10.3390/app9235198
    [BibTeX] [Abstract] [Download PDF]

    In this study, the problem of fitting shape primitives to point cloud scenes was tackled as a parameter optimisation procedure and solved using the popular Bees Algorithm. Tested on three sets of clean and differently blurred point cloud models, the Bees Algorithm obtained performances comparable to those obtained using the state-of-the-art RANSAC method, and superior to those obtained by an evolutionary algorithm. Shape fitting times were compatible with the real-time application. The main advantage of the Bees Algorithm over standard methods is that it doesn't rely on ad hoc assumptions about the nature of the point cloud model, such as the RANSAC approximation tolerance.

    @article{lincoln39027,
    volume = {9},
    number = {23},
    month = {November},
    author = {Luca Baronti and Mark Alston and Nikos Mavrakis and Amir Masoud Ghalamzan Esfahani and Marco Castellani},
    title = {Primitive Shape Fitting in Point Clouds Using the Bees Algorithm},
    publisher = {MDPI},
    year = {2019},
    journal = {Advances in Automation and Robotics},
    doi = {10.3390/app9235198},
    url = {https://eprints.lincoln.ac.uk/id/eprint/39027/},
    abstract = {In this study, the problem of fitting shape primitives to point cloud scenes was tackled as a parameter optimisation procedure and solved using the popular Bees Algorithm. Tested on three sets of clean and differently blurred point cloud models, the Bees Algorithm obtained performances comparable to those obtained using the state-of-the-art RANSAC method, and superior to those obtained by an evolutionary algorithm. Shape fitting times were compatible with the real-time application. The main advantage of the Bees Algorithm over standard methods is that it doesn't rely on ad hoc assumptions about the nature of the point cloud model, such as the RANSAC approximation tolerance.}
    }
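
    To make the "shape fitting as parameter optimisation" formulation concrete, the toy example below fits a sphere to a synthetic point cloud; a plain random search stands in for the Bees Algorithm, so this sketches the problem setup rather than the paper's method:

    import numpy as np

    def sphere_cost(params, cloud):
        """Mean absolute deviation of points from a candidate sphere surface."""
        centre, radius = params[:3], params[3]
        return np.abs(np.linalg.norm(cloud - centre, axis=1) - radius).mean()

    # Synthetic noisy point cloud sampled from a known sphere.
    rng = np.random.default_rng(2)
    true_centre, true_r = np.array([1.0, -2.0, 0.5]), 3.0
    dirs = rng.normal(size=(1000, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    cloud = true_centre + true_r * dirs + rng.normal(0.0, 0.02, (1000, 3))

    # Uniform random sampling of (cx, cy, cz, r); the Bees Algorithm would
    # additionally recruit bees to search locally around the best sites.
    best, best_cost = None, np.inf
    for _ in range(20000):
        cand = np.append(rng.uniform(-5, 5, 3), rng.uniform(0.1, 6.0))
        cost = sphere_cost(cand, cloud)
        if cost < best_cost:
            best, best_cost = cand, cost
    print(best, best_cost)
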
  • M. G. Lampridi, C. G. Sørensen, and D. Bochtis, “Agricultural sustainability: a review of concepts and methods,” Sustainability, vol. 11, iss. 18, p. 5120, 2019. doi:10.3390/su11185120
    [BibTeX] [Abstract] [Download PDF]

    This paper presents a methodological framework for the systematic literature review of agricultural sustainability studies. The framework synthesizes all the available literature review criteria and introduces a two-level analysis facilitating systematization, data mining, and methodology analysis. The framework was implemented for the systematic literature review of 38 crop agricultural sustainability assessment studies at farm level over the last decade. The investigation of the methodologies used is of particular importance since there are no standards or norms for the sustainability assessment of farming practices. The chronological analysis revealed that the scientific community's interest in agricultural sustainability has increased over the last three years. The most used methods include indicator-based tools, frameworks, and indexes, followed by multicriteria methods. In the reviewed studies, stakeholder participation proved crucial in the determination of the level of sustainability. It should also be mentioned that combinational use of methodologies is often observed, thus a clear distinction of methodologies is not always possible.

    @article{lincoln39231,
    volume = {11},
    number = {18},
    month = {September},
    author = {Maria G. Lampridi and Claus G. S{\o}rensen and Dionysis Bochtis},
    title = {Agricultural Sustainability: A Review of Concepts and Methods},
    year = {2019},
    journal = {Sustainability},
    doi = {10.3390/su11185120},
    pages = {5120},
    url = {https://eprints.lincoln.ac.uk/id/eprint/39231/},
    abstract = {This paper presents a methodological framework for the systematic literature review of agricultural sustainability studies. The framework synthesizes all the available literature review criteria and introduces a two-level analysis facilitating systematization, data mining, and methodology analysis. The framework was implemented for the systematic literature review of 38 crop agricultural sustainability assessment studies at farm level over the last decade. The investigation of the methodologies used is of particular importance since there are no standards or norms for the sustainability assessment of farming practices. The chronological analysis revealed that the scientific community's interest in agricultural sustainability has increased over the last three years. The most used methods include indicator-based tools, frameworks, and indexes, followed by multicriteria methods. In the reviewed studies, stakeholder participation proved crucial in the determination of the level of sustainability. It should also be mentioned that combinational use of methodologies is often observed, thus a clear distinction of methodologies is not always possible.}
    }
  • M. Al-Khafajiy, T. Baker, C. Chalmers, M. Asim, H. Kolivand, M. Fahim, and A. Waraich, “Remote health monitoring of elderly through wearable sensors,” Multimedia tools and applications, vol. 78, iss. 17, p. 24681–24706, 2019. doi:10.1007/s11042-018-7134-7
    [BibTeX] [Abstract] [Download PDF]

    Due to a rapidly increasing aging population and its associated challenges in health and social care, Ambient Assistive Living has become the focal point for both researchers and industry alike. The need to manage or even reduce healthcare costs while improving the quality of service is high on government agendas. Although technology has a major role to play in achieving these aspirations, any solution must be designed, implemented and validated using appropriate domain knowledge. In order to overcome these challenges, the remote real-time monitoring of a person's health can be used to identify relapses in conditions, therefore enabling early intervention. Thus, the development of a smart healthcare monitoring system, which is capable of observing elderly people remotely, is the focus of the research presented in this paper. The technology outlined in this paper focuses on the ability to track a person's physiological data to detect specific disorders which can aid in Early Intervention Practices. This is achieved by accurately processing and analysing the acquired sensory data while transmitting the detection of a disorder to an appropriate carer. The findings reveal that the proposed system can improve clinical decision support while facilitating Early Intervention Practices. Our extensive simulation results indicate a superior performance of the proposed system: low latency (96\% of the packets are received with less than 1 millisecond) and low packet loss (only 2.2\% of total packets are dropped). Thus, the system runs efficiently and is cost-effective in terms of data acquisition and manipulation.

    @article{lincoln47557,
    volume = {78},
    number = {17},
    month = {September},
    author = {Mohammed Al-Khafajiy and Thar Baker and Carl Chalmers and Muhammad Asim and Hoshang Kolivand and Muhammad Fahim and Atif Waraich},
    title = {Remote health monitoring of elderly through wearable sensors},
    publisher = {Springer},
    year = {2019},
    journal = {Multimedia Tools and Applications},
    doi = {10.1007/s11042-018-7134-7},
    pages = {24681--24706},
    url = {https://eprints.lincoln.ac.uk/id/eprint/47557/},
    abstract = {Due to a rapidly increasing aging population and its associated challenges in health and social care, Ambient Assistive Living has become the focal point for both researchers and industry alike. The need to manage or even reduce healthcare costs while improving the quality of service is high on government agendas. Although technology has a major role to play in achieving these aspirations, any solution must be designed, implemented and validated using appropriate domain knowledge. In order to overcome these challenges, the remote real-time monitoring of a person's health can be used to identify relapses in conditions, therefore enabling early intervention. Thus, the development of a smart healthcare monitoring system, which is capable of observing elderly people remotely, is the focus of the research presented in this paper. The technology outlined in this paper focuses on the ability to track a person's physiological data to detect specific disorders which can aid in Early Intervention Practices. This is achieved by accurately processing and analysing the acquired sensory data while transmitting the detection of a disorder to an appropriate carer. The findings reveal that the proposed system can improve clinical decision support while facilitating Early Intervention Practices. Our extensive simulation results indicate a superior performance of the proposed system: low latency (96\% of the packets are received with less than 1 millisecond) and low packet loss (only 2.2\% of total packets are dropped). Thus, the system runs efficiently and is cost-effective in terms of data acquisition and manipulation.}
    }
  • M. Selvaggio, A. G. Esfahani, R. Moccia, F. Ficuciello, and B. Siciliano, “Haptic-guided shared control for needle grasping optimization in minimally invasive robotic surgery,” Ieee/rsj international conference intelligent robotic system, 2019.
    [BibTeX] [Abstract] [Download PDF]

    During suturing tasks performed with minimally invasive surgical robots, configuration singularities and joint limits often force surgeons to interrupt the task and re-grasp the needle using dual-arm movements. This yields an increased operator's cognitive load, time-to-completion, fatigue and performance degradation. In this paper, we propose a haptic-guided shared control method for grasping the needle with the Patient Side Manipulator (PSM) of the da Vinci robot avoiding such issues. We suggest a cost function consisting of (i) the distance from robot joint limits and (ii) the task-oriented manipulability over the suturing trajectory. We evaluate the cost and its gradient on the needle grasping manifold that allows us to obtain the optimal grasping pose for joint-limit and singularity free movements of the needle during suturing. Then, we compute force cues that are applied to the Master Tool Manipulator (MTM) of the da Vinci to guide the operator towards the optimal grasp. As such, our system helps the operator to choose a grasping configuration allowing the robot to avoid joint limits and singularities during post-grasp suturing movements. We show the effectiveness of our proposed haptic-guided shared control method during suturing using both simulated and real experiments. The results illustrate that our approach significantly improves the performance in terms of needle re-grasping.

    @article{lincoln36571,
    month = {October},
    title = {Haptic-guided shared control for needle grasping optimization in minimally invasive robotic surgery},
    author = {Mario Selvaggio and Amir Ghalamzan Esfahani and Rocco Moccia and Fanny Ficuciello and Bruno Siciliano},
    year = {2019},
    journal = {IEEE/RSJ International Conference Intelligent Robotic System},
    url = {https://eprints.lincoln.ac.uk/id/eprint/36571/},
    abstract = {During suturing tasks performed with minimally invasive surgical robots, configuration singularities and joint limits often force surgeons to interrupt the task and re-grasp the needle using dual-arm movements. This yields an increased operator's cognitive load, time-to-completion, fatigue and performance degradation. In this paper, we propose a haptic-guided shared control method for grasping the needle with the Patient Side Manipulator (PSM) of the da Vinci robot avoiding such issues. We suggest a cost function consisting of (i) the distance from robot joint limits and (ii) the task-oriented manipulability over the suturing trajectory. We evaluate the cost and its gradient on the needle grasping manifold that allows us to obtain the optimal grasping pose for joint-limit and singularity free movements of the needle during suturing. Then, we compute force cues that are applied to the Master Tool Manipulator (MTM) of the da Vinci to guide the operator towards the optimal grasp. As such, our system helps the operator to choose a grasping configuration allowing the robot to avoid joint limits and singularities during post-grasp suturing movements. We show the effectiveness of our proposed haptic-guided shared control method during suturing using both simulated and real experiments. The results illustrate that our approach significantly improves the performance in terms of needle re-grasping.}
    }
  • H. Cuayahuitl, D. Lee, S. Ryu, Y. Cho, S. Choi, S. Indurthi, S. Yu, H. Choi, I. Hwang, and J. Kim, “Ensemble-based deep reinforcement learning for chatbots,” Neurocomputing, vol. 366, p. 118–130, 2019. doi:10.1016/j.neucom.2019.08.007
    [BibTeX] [Abstract] [Download PDF]

    Trainable chatbots that exhibit fluent and human-like conversations remain a big challenge in artificial intelligence. Deep Reinforcement Learning (DRL) is promising for addressing this challenge, but its successful application remains an open question. This article describes a novel ensemble-based approach applied to value-based DRL chatbots, which use finite action sets as a form of meaning representation. In our approach, while dialogue actions are derived from sentence clustering, the training datasets in our ensemble are derived from dialogue clustering. The latter aims to induce specialised agents that learn to interact in a particular style. In order to facilitate neural chatbot training using our proposed approach, we assume dialogue data in raw text only, without any manually-labelled data. Experimental results using chitchat data reveal that (1) near human-like dialogue policies can be induced, (2) generalisation to unseen data is a difficult problem, and (3) training an ensemble of chatbot agents is essential for improved performance over using a single agent. In addition to evaluations using held-out data, our results are further supported by a human evaluation that rated dialogues in terms of fluency, engagingness and consistency, which revealed that our proposed dialogue rewards strongly correlate with human judgements.

    @article{lincoln36668,
    volume = {366},
    month = {November},
    author = {Heriberto Cuayahuitl and Donghyeon Lee and Seonghan Ryu and Yongjin Cho and Sungja Choi and Satish Indurthi and Seunghak Yu and Hyungtak Choi and Inchul Hwang and Jihie Kim},
    title = {Ensemble-Based Deep Reinforcement Learning for Chatbots},
    publisher = {Elsevier},
    year = {2019},
    journal = {Neurocomputing},
    doi = {10.1016/j.neucom.2019.08.007},
    pages = {118--130},
    url = {https://eprints.lincoln.ac.uk/id/eprint/36668/},
    abstract = {Trainable chatbots that exhibit fluent and human-like conversations remain a big challenge in artificial intelligence. Deep Reinforcement Learning (DRL) is promising for addressing this challenge, but its successful application remains an open question. This article describes a novel ensemble-based approach applied to value-based DRL chatbots, which use finite action sets as a form of meaning representation. In our approach, while dialogue actions are derived from sentence clustering, the training datasets in our ensemble are derived from dialogue clustering. The latter aims to induce specialised agents that learn to interact in a particular style. In order to facilitate neural chatbot training using our proposed approach, we assume dialogue data in raw text only, without any manually-labelled data. Experimental results using chitchat data reveal that (1) near human-like dialogue policies can be induced, (2) generalisation to unseen data is a difficult problem, and (3) training an ensemble of chatbot agents is essential for improved performance over using a single agent. In addition to evaluations using held-out data, our results are further supported by a human evaluation that rated dialogues in terms of fluency, engagingness and consistency, which revealed that our proposed dialogue rewards strongly correlate with human judgements.}
    }
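
    The stub below sketches only the ensemble idea from the entry above: several specialised agents score candidate replies and the highest-valued reply is uttered. The agents, styles and random value functions are invented placeholders for the learned DRL policies:

    import random

    class StubAgent:
        def __init__(self, style, seed):
            self.style = style
            self.rng = random.Random(seed)

        def value(self, history, candidate):
            """Stand-in for a learned action-value function Q(history, reply)."""
            return self.rng.random()

    agents = [StubAgent(style, i) for i, style in
              enumerate(["chitchat", "factual", "empathetic"])]
    candidates = ["Hello!", "It is 21 degrees today.", "That sounds tough."]

    history = ["Hi there"]
    scored = [(agent.value(history, c), agent.style, c)
              for agent in agents for c in candidates]
    value, style, reply = max(scored)   # ensemble picks the highest-valued reply
    print(f"[{style}] {reply}")
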
  • B. Grieve, T. Duckett, M. Collison, L. Boyd, J. West, Y. Hujun, F. Arvin, and S. Pearson, “The challenges posed by global broadacre crops in delivering smart agri-robotic solutions: a fundamental rethink is required.,” Global food security, vol. 23, p. 116–124, 2019. doi:10.1016/j.gfs.2019.04.011
    [BibTeX] [Abstract] [Download PDF]

    Threats to global food security from multiple sources, such as population growth, ageing farming populations, meat consumption trends, climate-change effects on abiotic and biotic stresses, and the environmental impacts of agriculture, are well publicised. In addition, with ever-increasing tolerance of pests, diseases and weeds there is growing pressure on traditional crop genetic and protective chemistry technologies of the 'Green Revolution'. To ease the burden of these challenges, there has been a move to automate and robotise aspects of the farming process. This drive has focussed typically on higher value sectors, such as horticulture and viticulture, that have relied on seasonal manual labour to maintain produce supply. In developed economies, and increasingly developing nations, pressure on labour supply has become unsustainable and forced the need for greater mechanisation and higher labour productivity. This paper creates the case that for broadacre crops, such as cereals, a wholly new approach is necessary, requiring the establishment of an integrated biology & physical engineering infrastructure, which can work in harmony with current breeding, chemistry and agronomic solutions. For broadacre crops the driving pressure is to sustainably intensify production; increase yields and/or productivity whilst reducing environmental impact. Additionally, our limited understanding of the complex interactions between the variations in pests, weeds, pathogens, soils, water, environment and crops is inhibiting growth in resource productivity and creating yield gaps. We argue that delivering knowledge-based sustainable intensification in agriculture requires a new generation of Smart Technologies, which combine sensors and robotics with localised and/or cloud-based Artificial Intelligence (AI).

    @article{lincoln35842,
    volume = {23},
    month = {December},
    author = {Bruce Grieve and Tom Duckett and Martin Collison and Lesley Boyd and Jon West and Yin Hujun and Farshad Arvin and Simon Pearson},
    title = {The challenges posed by global broadacre crops in delivering smart agri-robotic solutions: A fundamental rethink is required.},
    publisher = {Elsevier},
    year = {2019},
    journal = {Global Food Security},
    doi = {10.1016/j.gfs.2019.04.011},
    pages = {116--124},
    url = {https://eprints.lincoln.ac.uk/id/eprint/35842/},
    abstract = {Threats to global food security from multiple sources, such as population growth, ageing farming populations, meat consumption trends, climate-change effects on abiotic and biotic stresses, and the environmental impacts of agriculture, are well publicised. In addition, with ever-increasing tolerance of pests, diseases and weeds there is growing pressure on traditional crop genetic and protective chemistry technologies of the 'Green Revolution'. To ease the burden of these challenges, there has been a move to automate and robotise aspects of the farming process. This drive has focussed typically on higher value sectors, such as horticulture and viticulture, that have relied on seasonal manual labour to maintain produce supply. In developed economies, and increasingly developing nations, pressure on labour supply has become unsustainable and forced the need for greater mechanisation and higher labour productivity. This paper creates the case that for broadacre crops, such as cereals, a wholly new approach is necessary, requiring the establishment of an integrated biology \& physical engineering infrastructure, which can work in harmony with current breeding, chemistry and agronomic solutions. For broadacre crops the driving pressure is to sustainably intensify production; increase yields and/or productivity whilst reducing environmental impact. Additionally, our limited understanding of the complex interactions between the variations in pests, weeds, pathogens, soils, water, environment and crops is inhibiting growth in resource productivity and creating yield gaps. We argue that delivering knowledge-based sustainable intensification in agriculture requires a new generation of Smart Technologies, which combine sensors and robotics with localised and/or cloud-based Artificial Intelligence (AI).}
    }
  • A. Seddaoui and C. M. Saaj, “Combined nonlinear H∞ controller for a controlled-floating space robot,” Journal of guidance, control, and dynamics, vol. 42, iss. 8, p. 1878–1885, 2019. doi:10.2514/1.G003811
    [BibTeX] [Download PDF]
    @article{lincoln39389,
    volume = {42},
    number = {8},
    month = {August},
    author = {Asma Seddaoui and Chakravarthini M. Saaj},
    title = {Combined Nonlinear H$\infty$ Controller for a Controlled-Floating Space Robot},
    publisher = {Aerospace Research Central},
    year = {2019},
    journal = {Journal of Guidance, Control, and Dynamics},
    doi = {10.2514/1.G003811},
    pages = {1878--1885},
    url = {https://eprints.lincoln.ac.uk/id/eprint/39389/}
    }
  • M. Al-Khafajiy, T. Baker, H. Al-Libawy, Z. Maamar, M. Aloqaily, and Y. Jararweh, “Improving fog computing performance via fog-2-fog collaboration,” Future generation computer systems, vol. 100, p. 266–280, 2019. doi:10.1016/j.future.2019.05.015
    [BibTeX] [Abstract] [Download PDF]

    In the Internet of Things (IoT) era, a large volume of data is continuously emitted from a plethora of connected devices. The current network paradigm, which relies on centralised data centres (aka Cloud computing), has become inefficient at responding to IoT latency concerns. To address this concern, fog computing allows data processing and storage 'close' to IoT devices. However, fog is still not efficient due to the spatial and temporal distribution of these devices, which leads to fog nodes' unbalanced loads. This paper proposes a new fog-2-fog (f2f) collaboration model that promotes offloading incoming requests among fog nodes, according to their load and processing capabilities, via a novel load balancing scheme known as the Fog Resource manAgeMEnt Scheme (FRAMES). A formal mathematical model of f2f and FRAMES has been formulated, and a set of experiments has been carried out demonstrating the technical feasibility of f2f collaboration. The performance of the proposed fog load balancing model is compared to other load balancing models.

    @article{lincoln47556,
    volume = {100},
    month = {November},
    author = {Mohammed Al-Khafajiy and Thar Baker and Hilal Al-Libawy and Zakaria Maamar and Moayad Aloqaily and Yaser Jararweh},
    title = {Improving fog computing performance via Fog-2-Fog collaboration},
    publisher = {Elsevier},
    year = {2019},
    journal = {Future Generation Computer Systems},
    doi = {10.1016/j.future.2019.05.015},
    pages = {266--280},
    url = {https://eprints.lincoln.ac.uk/id/eprint/47556/},
    abstract = {In the Internet of Things (IoT) era, a large volume of data is continuously emitted from a plethora of connected devices. The current network paradigm, which relies on centralised data centres (aka Cloud computing), has become inefficient at responding to IoT latency concerns. To address this concern, fog computing allows data processing and storage 'close' to IoT devices. However, fog is still not efficient due to the spatial and temporal distribution of these devices, which leads to fog nodes' unbalanced loads. This paper proposes a new fog-2-fog (f2f) collaboration model that promotes offloading incoming requests among fog nodes, according to their load and processing capabilities, via a novel load balancing scheme known as the Fog Resource manAgeMEnt Scheme (FRAMES). A formal mathematical model of f2f and FRAMES has been formulated, and a set of experiments has been carried out demonstrating the technical feasibility of f2f collaboration. The performance of the proposed fog load balancing model is compared to other load balancing models.}
    }
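
    A simplified sketch of the f2f offloading decision, with invented node loads and link delays: a request is dispatched to whichever node, local or neighbour, offers the lowest estimated completion time:

    from dataclasses import dataclass

    @dataclass
    class FogNode:
        name: str
        queue_len: int        # jobs already waiting at this node
        service_rate: float   # jobs processed per second
        link_delay: float     # seconds to forward a request to this node

        def estimated_completion(self):
            return self.link_delay + (self.queue_len + 1) / self.service_rate

    local = FogNode("local", queue_len=12, service_rate=4.0, link_delay=0.0)
    neighbours = [FogNode("f1", 2, 3.0, 0.05), FogNode("f2", 8, 6.0, 0.02)]

    # Offload to whichever node completes the request soonest.
    target = min([local] + neighbours, key=FogNode.estimated_completion)
    print(f"dispatch to {target.name} (eta {target.estimated_completion():.2f}s)")
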
  • S. Maleki and C. Bingham, “Robust hierarchical clustering for novelty identification in sensor networks: with applications to industrial systems,” Applied soft computing journal, vol. 85, p. 105771, 2019. doi:10.1016/j.asoc.2019.105771
    [BibTeX] [Abstract] [Download PDF]

    The paper proposes a new, robust cluster-based classification technique for Novelty Identification in sensor networks that possess a high degree of correlation among data streams. During normal operation, a uniform cluster across objects (sensors) is generated that indicates the absence of novelties. Conversely, in presence of novelty, the associated sensor is clustered distinctly from the remaining sensors, thereby isolating the data stream which exhibits the novelty. It is shown how small perturbations (stemming from noise, for instance) can affect the performance of traditional clustering methods, and that the proposed variant exhibits a robustness to such influences. Moreover, the proposed method is compared with a recently reported technique, and shown that it performs 365\% faster computationally. To provide an application case study, the technique is used to identify emerging fault modes in a sensor network on a sub-15MW industrial gas turbine in presence of other abrupt, but normal changes that visually might otherwise be interpreted as malfunctions.

    @article{lincoln44909,
    volume = {85},
    month = {December},
    author = {Sepehr Maleki and Chris Bingham},
    title = {Robust hierarchical clustering for novelty identification in sensor networks: With applications to industrial systems},
    publisher = {Elsevier},
    year = {2019},
    journal = {Applied Soft Computing Journal},
    doi = {10.1016/j.asoc.2019.105771},
    pages = {105771},
    url = {https://eprints.lincoln.ac.uk/id/eprint/44909/},
    abstract = {The paper proposes a new, robust cluster-based classification technique for Novelty Identification in sensor networks that possess a high degree of correlation among data streams. During normal operation, a uniform cluster across objects (sensors) is generated that indicates the absence of novelties. Conversely, in presence of novelty, the associated sensor is clustered distinctly from the remaining sensors, thereby isolating the data stream which exhibits the novelty. It is shown how small perturbations (stemming from noise, for instance) can affect the performance of traditional clustering methods, and that the proposed variant exhibits a robustness to such influences. Moreover, the proposed method is compared with a recently reported technique, and shown that it performs 365\% faster computationally. To provide an application case study, the technique is used to identify emerging fault modes in a sensor network on a sub-15MW industrial gas turbine in presence of other abrupt, but normal changes that visually might otherwise be interpreted as malfunctions.}
    }
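
    The fragment below sketches the clustering idea only (not the paper's robust variant): correlated sensor streams group together, while a drifting sensor separates into its own cluster and is flagged as the novelty. The data and fault are synthetic:

    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage

    rng = np.random.default_rng(3)
    base = rng.normal(size=1000)                     # shared underlying process
    sensors = np.array([base + rng.normal(0.0, 0.05, 1000) for _ in range(5)])
    sensors[3, 500:] += np.linspace(0.0, 4.0, 500)   # inject a drift fault

    # Correlation distance between every pair of sensor streams.
    dist = 1.0 - np.corrcoef(sensors)
    condensed = dist[np.triu_indices(5, k=1)]
    labels = fcluster(linkage(condensed), t=2, criterion="maxclust")
    print(labels)   # the drifting sensor ends up in its own cluster
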
  • Q. Fu, C. Hu, J. Peng, C. Rind, and S. Yue, “A robust collision perception visual neural network with specific selectivity to darker objects,” Ieee transactions on cybernetics, p. 1–15, 2019. doi:10.1109/TCYB.2019.2946090
    [BibTeX] [Abstract] [Download PDF]

    Building an efficient and reliable collision perception visual system is a challenging problem for future robots and autonomous vehicles. The biological visual neural networks, which have evolved over millions of years in nature and are working perfectly in the real world, could be ideal models for designing artificial vision systems. In the locust's visual pathways, a lobula giant movement detector (LGMD), that is, the LGMD2, has been identified as a looming perception neuron that responds most strongly to darker approaching objects relative to their backgrounds; similar situations which many ground vehicles and robots are often faced with. However, little has been done on modeling the LGMD2 and investigating its potential in robotics and vehicles. In this article, we build an LGMD2 visual neural network which possesses the similar collision selectivity of an LGMD2 neuron in locust via the modeling of biased-ON and -OFF pathways splitting visual signals into parallel ON/OFF channels. With stronger inhibition (bias) in the ON pathway, this model responds selectively to darker looming objects. The proposed model has been tested systematically with a range of stimuli including real-world scenarios. It has also been implemented in a micro-mobile robot and tested with real-time experiments. The experimental results have verified the effectiveness and robustness of the proposed model for detecting darker looming objects against various dynamic and cluttered backgrounds.

    @article{lincoln39137,
    month = {December},
    author = {Qinbing Fu and Cheng Hu and Jigen Peng and Claire Rind and Shigang Yue},
    title = {A Robust Collision Perception Visual Neural Network with Specific Selectivity to Darker Objects},
    publisher = {IEEE},
    journal = {IEEE Transactions on Cybernetics},
    doi = {10.1109/TCYB.2019.2946090},
    pages = {1--15},
    year = {2019},
    url = {https://eprints.lincoln.ac.uk/id/eprint/39137/},
    abstract = {Building an efficient and reliable collision perception visual system is a challenging problem for future robots and autonomous vehicles. The biological visual neural networks, which have evolved over millions of years in nature and are working perfectly in the real world, could be ideal models for designing artificial vision systems. In the locust's visual pathways, a lobula giant movement detector (LGMD), that is, the LGMD2, has been identified as a looming perception neuron that responds most strongly to darker approaching objects relative to their backgrounds; similar situations which many ground vehicles and robots are often faced with. However, little has been done on modeling the LGMD2 and investigating its potential in robotics and vehicles. In this article, we build an LGMD2 visual neural network which possesses the similar collision selectivity of an LGMD2 neuron in locust via the modeling of biased-ON and -OFF pathways splitting visual signals into parallel ON/OFF channels. With stronger inhibition (bias) in the ON pathway, this model responds selectively to darker looming objects. The proposed model has been tested systematically with a range of stimuli including real-world scenarios. It has also been implemented in a micro-mobile robot and tested with real-time experiments. The experimental results have verified the effectiveness and robustness of the proposed model for detecting darker looming objects against various dynamic and cluttered backgrounds.}
    }
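
    A conceptual sketch of the biased ON/OFF split described in the abstract, with invented frames and bias values: luminance changes are half-wave rectified into parallel channels, and a weaker gain on the ON channel makes darkening dominate the response:

    import numpy as np

    def on_off_excitation(prev_frame, frame, on_bias=0.3, off_bias=1.0):
        """Summed excitation from half-wave rectified, biased ON/OFF channels."""
        diff = frame.astype(float) - prev_frame.astype(float)
        on = np.maximum(diff, 0.0)     # brightening
        off = np.maximum(-diff, 0.0)   # darkening
        return on_bias * on.sum() + off_bias * off.sum()

    rng = np.random.default_rng(4)
    f0 = rng.integers(100, 120, (64, 64))
    dark = f0.copy()
    dark[24:40, 24:40] -= 60      # darker object expanding in the view
    bright = f0.copy()
    bright[24:40, 24:40] += 60    # brighter object expanding in the view

    print(on_off_excitation(f0, dark))    # strong response (OFF dominated)
    print(on_off_excitation(f0, bright))  # suppressed response (ON inhibited)
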
  • V. Marinoudi, C. Sorensen, S. Pearson, and D. Bochtis, “Robotics and labour in agriculture. a context consideration,” Biosystems engineering, vol. 184, p. 111–121, 2019. doi:10.1016/j.biosystemseng.2019.06.013
    [BibTeX] [Abstract] [Download PDF]

    Over the last century, agriculture transformed from a labour-intensive industry towards mechanisation and power-intensive production systems, while over the last 15 years the agricultural industry has started to digitise. Through this transformation there was a continuous labour outflow from agriculture, mainly from standardized tasks within the production process. Robots and artificial intelligence can now be used to conduct non-standardised tasks (e.g. fruit picking, selective weeding, crop sensing) previously reserved for human workers and at economically feasible costs. As a consequence, automation is no longer restricted to standardized tasks within agricultural production (e.g. ploughing, combine harvesting). In addition, many job roles in agriculture may be augmented but not replaced by robots. Robots in many instances will work collaboratively with humans. This new robotic ecosystem creates complex ethical, legislative and social impacts. A key question, we consider here, is what are the short and mid-term effects of robotised agriculture on sector jobs and employment? The presented work outlines the conditions, constraints, and inherent relationships between labour input and technology input in bio-production, as well as provides the procedural framework and research design to be followed in order to evaluate the effect of adopting automation and robotics in agriculture.

    @article{lincoln36279,
    volume = {184},
    month = {August},
    author = {Vasso Marinoudi and Claus Sorensen and Simon Pearson and Dionysis Bochtis},
    title = {Robotics and labour in agriculture. A context consideration},
    publisher = {Elsevier},
    year = {2019},
    journal = {Biosystems Engineering},
    doi = {10.1016/j.biosystemseng.2019.06.013},
    pages = {111--121},
    url = {https://eprints.lincoln.ac.uk/id/eprint/36279/},
    abstract = {Over the last century, agriculture transformed from a labour-intensive industry towards mechanisation and power-intensive production systems, while over the last 15 years the agricultural industry has started to digitise. Through this transformation there was a continuous labour outflow from agriculture, mainly from standardized tasks within the production process. Robots and artificial intelligence can now be used to conduct non-standardised tasks (e.g. fruit picking, selective weeding, crop sensing) previously reserved for human workers and at economically feasible costs. As a consequence, automation is no longer restricted to standardized tasks within agricultural production (e.g. ploughing, combine harvesting). In addition, many job roles in agriculture may be augmented but not replaced by robots. Robots in many instances will work collaboratively with humans. This new robotic ecosystem creates complex ethical, legislative and social impacts. A key question, we consider here, is what are the short and mid-term effects of robotised agriculture on sector jobs and employment? The presented work outlines the conditions, constraints, and inherent relationships between labour input and technology input in bio-production, as well as provides the procedural framework and research design to be followed in order to evaluate the effect of adopting automation and robotics in agriculture.}
    }
  • D. Bechtsis, V. Moisiadis, N. Tsolakis, D. Vlachos, and D. Bochtis, “Unmanned ground vehicles in precision farming services: an integrated emulation modelling approach,” in Information and communication technologies in modern agricultural development, Springer, 2019, vol. 953, p. 177–190. doi:10.1007/978-3-030-12998-9_13
    [BibTeX] [Abstract] [Download PDF]

    Autonomous systems are a promising alternative for safely executing precision farming activities in a 24/7 perspective. In this context Unmanned Ground Vehicles (UGVs) are used in custom agricultural fields, with sophisticated sensors and data fusion techniques for real-time mapping and navigation. The aim of this study is to present a simulation software tool for providing effective and efficient farming activities in orchard fields and demonstrating the applicability of simulation in routing algorithms, hence increasing productivity, while dynamically addressing operational and tactical level uncertainties. The three dimensional virtual world includes the field layout and the static objects (orchard trees, obstacles, physical boundaries) and is constructed in the open source Gazebo simulation software while the Robot Operating System (ROS) and the implemented algorithms are tested using a custom vehicle. As a result a routing algorithm is executed and enables the UGV to pass through all the orchard trees while dynamically avoiding static and dynamic obstacles. Unlike existing sophisticated tools, the developed mechanism could accommodate an extensive variety of agricultural activities and could be transparently transferred from the simulation environment to real world ROS compatible UGVs providing user-friendly and highly customizable navigation.

    @incollection{lincoln39234,
    volume = {953},
    month = {February},
    author = {Dimitrios Bechtsis and Vasileios Moisiadis and Naoum Tsolakis and Dimitrios Vlachos and Dionysis Bochtis},
    booktitle = {Information and Communication Technologies in Modern Agricultural Development},
    title = {Unmanned Ground Vehicles in Precision Farming Services: An Integrated Emulation Modelling Approach},
    publisher = {Springer},
    year = {2019},
    doi = {10.1007/978-3-030-12998-9\_13},
    pages = {177--190},
    url = {https://eprints.lincoln.ac.uk/id/eprint/39234/},
    abstract = {Autonomous systems are a promising alternative for safely executing precision farming activities in a 24/7 perspective. In this context Unmanned Ground Vehicles (UGVs) are used in custom agricultural fields, with sophisticated sensors and data fusion techniques for real-time mapping and navigation. The aim of this study is to present a simulation software tool for providing effective and efficient farming activities in orchard fields and demonstrating the applicability of simulation in routing algorithms, hence increasing productivity, while dynamically addressing operational and tactical level uncertainties. The three dimensional virtual world includes the field layout and the static objects (orchard trees, obstacles, physical boundaries) and is constructed in the open source Gazebo simulation software while the Robot Operating System (ROS) and the implemented algorithms are tested using a custom vehicle. As a result a routing algorithm is executed and enables the UGV to pass through all the orchard trees while dynamically avoiding static and dynamic obstacles. Unlike existing sophisticated tools, the developed mechanism could accommodate an extensive variety of agricultural activities and could be transparently transferred from the simulation environment to real world ROS compatible UGVs providing user-friendly and highly customizable navigation.}
    }
  • C. A. G. Sørensen, D. Kateris, and D. Bochtis, “Ict innovations and smart farming,” in Information and communication technologies in modern agricultural development, Springer, 2019, vol. 953, p. 1–19. doi:10.1007/978-3-030-12998-9_1
    [BibTeX] [Abstract] [Download PDF]

    Agriculture plays a vital role in the global economy with the majority of the rural population in developing countries depending on it. The depletion of natural resources makes the improvement of the agricultural production more important but also more difficult than ever. This is the reason that although the demand is constantly growing, Information and Communication Technology (ICT) offers to producers the adoption of sustainability and improvement of their daily living conditions. ICT offers timely and updated relevant information such as weather forecast, market prices, the occurrence of new diseases and varieties, etc. The new knowledge offers a unique opportunity to bring the production enhancing technologies to the farmers and empower themselves with modern agricultural technology and act accordingly for increasing the agricultural production in a cost effective and profitable manner. The use of ICT itself or combined with other ICT systems results in productivity improvement and better resource use and reduces the time needed for farm management, marketing, logistics and quality assurance.

    @incollection{lincoln39235,
    volume = {953},
    month = {February},
    author = {Claus Aage Gr{\o}n S{\o}rensen and Dimitrios Kateris and Dionysis Bochtis},
    booktitle = {Information and Communication Technologies in Modern Agricultural Development},
    title = {ICT Innovations and Smart Farming},
    publisher = {Springer},
    year = {2019},
    doi = {10.1007/978-3-030-12998-9\_1},
    pages = {1--19},
    url = {https://eprints.lincoln.ac.uk/id/eprint/39235/},
    abstract = {Agriculture plays a vital role in the global economy with the majority of the rural population in developing countries depending on it. The depletion of natural resources makes the improvement of the agricultural production more important but also more difficult than ever. This is the reason that although the demand is constantly growing, Information and Communication Technology (ICT) offers to producers the adoption of sustainability and improvement of their daily living conditions. ICT offers timely and updated relevant information such as weather forecast, market prices, the occurrence of new diseases and varieties, etc. The new knowledge offers a unique opportunity to bring the production enhancing technologies to the farmers and empower themselves with modern agricultural technology and act accordingly for increasing the agricultural production in a cost effective and profitable manner. The use of ICT itself or combined with other ICT systems results in productivity improvement and better resource use and reduces the time needed for farm management, marketing, logistics and quality assurance.}
    }
  • T. Zhivkov, E. Schneider, and E. Sklar, “Establishing continuous communication through dynamic team behaviour switching,” in 2nd uk-ras robotics and autonomous systems conference, 2019. doi:10.31256/UKRAS19.22
    [BibTeX] [Abstract] [Download PDF]

    Maintaining continuous communication is an important factor that contributes to the success of multi-robot systems. Most research involving multi-robot teams is conducted in controlled laboratory settings, where continuous communication is assumed, typically because there is a wireless network (wifi) that keeps all the robots connected. But for multi-robot teams to operate successfully 'in the wild', it is crucial to consider how communication can be maintained when signals fail or robots move out of range. This paper presents a novel 'leader-follower behaviour' with dynamic role switching and messaging that supports uninterrupted communication, regardless of network perturbations. A series of experiments were conducted in which it is shown how network perturbations affect performance, comparing a baseline with the new leader-follower behaviour. The experiments record metrics on team success, given the two conditions. These results are significant for real-world multi-robot systems applications that require continuous communication amongst team members.

    @inproceedings{lincoln45010,
    booktitle = {2nd UK-RAS Robotics and Autonomous Systems Conference},
    month = {January},
    title = {Establishing Continuous Communication through Dynamic Team Behaviour Switching},
    author = {Tsvetan Zhivkov and Eric Schneider and Elizabeth Sklar},
    publisher = {UK-RAS19 Conference},
    year = {2019},
    doi = {10.31256/UKRAS19.22},
    url = {https://eprints.lincoln.ac.uk/id/eprint/45010/},
    abstract = {Maintaining continuous communication is an important factor that contributes to the success of multi-robot systems. Most research involving multi-robot teams is conducted in controlled laboratory settings, where continuous communication is assumed, typically because there is a wireless network (wifi) that keeps all the robots connected. But for multi-robot teams to operate successfully 'in the wild', it is crucial to consider how communication can be maintained when signals fail or robots move out of range. This paper presents a novel 'leader-follower behaviour' with dynamic role switching and messaging that supports uninterrupted communication, regardless of network perturbations. A series of experiments were conducted in which it is shown how network perturbations affect performance, comparing a baseline with the new leader-follower behaviour. The experiments record metrics on team success, given the two conditions. These results are significant for real-world multi-robot systems applications that require continuous communication amongst team members.}
    }
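
    A schematic sketch of the dynamic role-switching idea, with invented timings: a follower promotes itself to leader when the leader's heartbeat has not been heard within a timeout, keeping the team connected:

    import time

    class TeamMember:
        def __init__(self, name, role, timeout=1.0):
            self.name, self.role, self.timeout = name, role, timeout
            self.last_heartbeat = time.monotonic()

        def hear_heartbeat(self):
            self.last_heartbeat = time.monotonic()

        def update(self):
            silent_for = time.monotonic() - self.last_heartbeat
            if self.role == "follower" and silent_for > self.timeout:
                self.role = "leader"   # take over relaying team messages
                print(f"{self.name}: leader lost, switching role to leader")

    robot = TeamMember("r2", "follower", timeout=0.1)
    time.sleep(0.2)   # simulated leader dropout (no heartbeats heard)
    robot.update()
    print(robot.role)
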
  • P. Baxter, F. D. Duchetto, and M. Hanheide, “Engaging learners in dialogue interactivity development for mobile robots,” in Edurobotics 2018, 2019. doi:10.1007/978-3-030-18141-3_12
    [BibTeX] [Abstract] [Download PDF]

    The use of robots in educational and STEM engagement activities is widespread. In this paper we describe a system developed for engaging learners with the design of dialogue-based interactivity for mobile robots. With an emphasis on a web-based solution that is grounded in both a real robot system and a real application domain (a museum guide robot) our intent is to enhance the benefits to both driving research through potential user-group engagement, and enhancing motivation by providing a real application context for the learners involved. The proposed system is designed to be highly scalable to both many simultaneous users and to users of different age groups, and specifically enables direct deployment of implemented systems onto both real and simulated robots. Our observations from preliminary events, involving both children and adults, support the view that the system is both usable and successful in supporting engagement with the dialogue interactivity problem presented to the participants, with indications that this engagement can persist over an extended period of time.

    @inproceedings{lincoln40135,
    booktitle = {EDUROBOTICS 2018},
    month = {December},
    title = {Engaging Learners in Dialogue Interactivity Development for Mobile Robots},
    author = {Paul Baxter and Francesco Del Duchetto and Marc Hanheide},
    publisher = {Springer, Cham},
    year = {2019},
    doi = {10.1007/978-3-030-18141-3\_12},
    url = {https://eprints.lincoln.ac.uk/id/eprint/40135/},
    abstract = {The use of robots in educational and STEM engagement activities is widespread. In this paper we describe a system developed for engaging learners with the design of dialogue-based interactivity for mobile robots. With an emphasis on a web-based solution that is grounded in both a real robot system and a real application domain (a museum guide robot), our intent is both to drive research through potential user-group engagement and to enhance motivation by providing a real application context for the learners involved. The proposed system is designed to scale both to many simultaneous users and to users of different age groups, and specifically enables direct deployment of implemented systems onto both real and simulated robots. Our observations from preliminary events, involving both children and adults, support the view that the system is both usable and successful in supporting engagement with the dialogue interactivity problem presented to the participants, with indications that this engagement can persist over an extended period of time.}
    }
  • J. Lock, I. Gilchrist, G. Cielniak, and N. Bellotto, “Bone-conduction audio interface to guide people with visual impairments,” in International workshop on assistive engineering and information technology (aeit 2019), 2019.
    [BibTeX] [Abstract] [Download PDF]

    The ActiVis project's aim is to build a mobile guidance aid to help people with limited vision find objects in an unknown environment. This system uses bone-conduction headphones to transmit audio signals to the user and requires an effective non-visual interface. To this end, we propose a new audio-based interface that uses a spatialised signal to convey a target's position on the horizontal plane. The vertical position on the median plane is given by adjusting the tone's pitch to overcome the audio localisation limitations of bone-conduction headphones. This interface is validated through a set of experiments with blindfolded and visually impaired participants.

    @inproceedings{lincoln36793,
    booktitle = {International Workshop on Assistive Engineering and Information Technology (AEIT 2019)},
    month = {November},
    title = {Bone-Conduction Audio Interface to Guide People with Visual Impairments},
    author = {Jacobus Lock and Iain Gilchrist and Grzegorz Cielniak and Nicola Bellotto},
    year = {2019},
    url = {https://eprints.lincoln.ac.uk/id/eprint/36793/},
    abstract = {The ActiVis project's aim is to build a mobile guidance aid to help people with limited vision find objects in an unknown environment. This system uses bone-conduction headphones to transmit audio signals to the user and requires an effective non-visual interface. To this end, we propose a new audio-based interface that uses a spatialised signal to convey a target's position on the horizontal plane. The vertical position on the median plane is given by adjusting the tone's pitch to overcome the audio localisation limitations of bone-conduction headphones. This interface is validated through a set of experiments with blindfolded and visually impaired participants.}
    }
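
    As a rough illustration of the interface described above, the sketch below maps a target's horizontal angle to constant-power stereo panning and its vertical angle to a pitch shift. The base frequency, angular ranges and pitch span are assumptions for illustration, not the paper's calibrated values.

    import math

    BASE_PITCH_HZ = 440.0  # assumed reference tone

    def direction_to_cue(azimuth_deg, elevation_deg):
        """Return (left_gain, right_gain, pitch_hz) for a target direction."""
        # Constant-power panning over an assumed [-90, 90] degree azimuth range.
        pan = max(-1.0, min(1.0, azimuth_deg / 90.0))
        theta = (pan + 1.0) * math.pi / 4.0          # 0 .. pi/2
        left_gain, right_gain = math.cos(theta), math.sin(theta)
        # Elevation in an assumed [-45, 45] degree range maps to +/- half an
        # octave of pitch shift, standing in for vertical localisation.
        octaves = max(-0.5, min(0.5, elevation_deg / 90.0))
        pitch_hz = BASE_PITCH_HZ * (2.0 ** octaves)
        return left_gain, right_gain, pitch_hz

    print(direction_to_cue(45.0, -45.0))  # target right and below: right-heavy, lower tone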
  • A. Zaganidis, A. Zerntev, T. Duckett, and G. Cielniak, “Semantically assisted loop closure in slam using ndt histograms,” in International conference on intelligent robots and systems (iros), 2019.
    [BibTeX] [Abstract] [Download PDF]

    Precise knowledge of pose is of great importance for reliable operation of mobile robots in outdoor environments. Simultaneous localization and mapping (SLAM) is the online construction of a map during exploration of an environment. One of the components of SLAM is loop closure detection, identifying that the same location has been visited and is present on the existing map, and localizing against it. We have shown in previous work that using semantics from a deep segmentation network in conjunction with the Normal Distributions Transform point cloud registration improves the robustness, speed and accuracy of lidar odometry. In this work we extend the method for loop closure detection, using the labels already available from local registration into NDT Histograms, and we present a SLAM pipeline based on Semantic assisted NDT and PointNet++. We experimentally demonstrate on sequences from the KITTI benchmark that the map descriptor we propose outperforms NDT Histograms without semantics, and we validate its use on a SLAM task.

    @inproceedings{lincoln37750,
    booktitle = {International Conference on Intelligent Robots and Systems (IROS)},
    month = {November},
    title = {Semantically Assisted Loop Closure in SLAM Using NDT Histograms},
    author = {Anestis Zaganidis and Alexandros Zerntev and Tom Duckett and Grzegorz Cielniak},
    year = {2019},
    url = {https://eprints.lincoln.ac.uk/id/eprint/37750/},
    abstract = {Precise knowledge of pose is of great importance for reliable operation of mobile robots in outdoor environments. Simultaneous localization and mapping (SLAM) is the online construction of a map during exploration of an environment. One of the components of SLAM is loop closure detection, identifying that the same location has been visited and is present on the existing map, and localizing against it. We have shown in previous work that using semantics from a deep segmentation network in conjunction with the Normal Distributions Transform point cloud registration improves the robustness, speed and accuracy of lidar odometry. In this work we extend the method for loop closure detection, using the labels already available from local registration into NDT Histograms, and we present a SLAM pipeline based on Semantic assisted NDT and PointNet++. We experimentally demonstrate on sequences from the KITTI benchmark that the map descriptor we propose outperforms NDT Histograms without semantics, and we validate its use on a SLAM task.}
    }
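
    The following sketch illustrates the general shape of histogram-based loop closure detection as summarised above: each scan is reduced to a joint histogram over NDT cell classes and semantic labels, and scan pairs with similar histograms (and sufficient temporal separation) are proposed as loop closures. The descriptor layout and L1 distance are simplified stand-ins for the paper's method.

    import numpy as np

    def scan_descriptor(cell_classes, cell_labels, n_classes=3, n_labels=10):
        """Joint histogram over (NDT cell class, semantic label), L1-normalised."""
        hist = np.zeros((n_classes, n_labels))
        for c, l in zip(cell_classes, cell_labels):
            hist[c, l] += 1.0
        return hist.ravel() / max(hist.sum(), 1.0)

    def detect_loop_closures(descriptors, threshold=0.25, min_gap=50):
        """Propose scan pairs that look alike but are far apart in time."""
        closures = []
        for i in range(len(descriptors)):
            for j in range(i + min_gap, len(descriptors)):
                if np.abs(descriptors[i] - descriptors[j]).sum() < threshold:
                    closures.append((i, j))  # candidate for geometric verification
        return closures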
  • F. Camara, P. Dickinson, N. Merat, and C. Fox, “Towards game theoretic av controllers: measuring pedestrian behaviour in virtual reality,” in Ieee/rsj international conference on intelligent robots and systems (iros 2019) workshops, 2019.
    [BibTeX] [Abstract] [Download PDF]

    Understanding pedestrian interaction is of great importance for autonomous vehicles (AVs). The present study investigates pedestrian behaviour during crossing scenarios with an autonomous vehicle using Virtual Reality. The self-driving car is driven by a game theoretic controller which adapts its driving style to pedestrian crossing behaviour. We found that subjects value collision avoidance about 8 times more than saving 0.02 seconds. A previous lab study found time saving to be more important than collision avoidance in a highly unrealistic board game style version of the game. The present result suggests that the VR simulation reproduces real world road-crossings better than the lab study and provides a reliable test-bed for the development of game theoretic models for AVs.

    @inproceedings{lincoln37261,
    booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2019) Workshops},
    month = {November},
    title = {Towards game theoretic AV controllers: measuring pedestrian behaviour in Virtual Reality},
    author = {Fanta Camara and Patrick Dickinson and Natasha Merat and Charles Fox},
    publisher = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2019) Workshops},
    year = {2019},
    url = {https://eprints.lincoln.ac.uk/id/eprint/37261/},
    abstract = {Understanding pedestrian interaction is of great importance for autonomous vehicles (AVs). The present study investigates pedestrian behaviour during crossing scenarios with an autonomous vehicle using Virtual Reality. The self-driving car is driven by a game theoretic controller which adapts its driving style to pedestrian crossing behaviour. We found that subjects value collision avoidance about 8 times more than saving 0.02 seconds. A previous lab study found time saving to be more important than collision avoidance in a highly unrealistic board game style version of the game. The present result suggests that the VR simulation reproduces real world road-crossings better than the lab study and provides a reliable test-bed for the development of game theoretic models for AVs.}
    }
  • A. Seddaoui, C. M. Saaj, and S. Eckersley, “Collision-free optimal trajectory generator for a controlled floating space robot,” in Towards autonomous robotic systems conference, 2019.
    [BibTeX] [Download PDF]
    @inproceedings{lincoln39420,
    booktitle = {Towards Autonomous Robotic Systems Conference},
    title = {Collision-Free Optimal Trajectory Generator for a Controlled Floating Space Robot},
    author = {Asma Seddaoui and Chakravarthini M. Saaj and Steve Eckersley},
    year = {2019},
    url = {https://eprints.lincoln.ac.uk/id/eprint/39420/}
    }
  • A. Gabriel, N. Bellotto, and P. Baxter, “Towards a dataset of activities for action recognition in open fields,” in 2nd uk-ras robotics and autonomous systems conference, 2019, p. 64–67.
    [BibTeX] [Abstract] [Download PDF]

    In an agricultural context, having autonomous robots that can work side-by-side with human workers provides a range of productivity benefits. In order for this to be achieved safely and effectively, these autonomous robots require the ability to understand a range of human behaviors in order to facilitate task communication and coordination. The recognition of human actions is a key part of this, and is the focus of this paper. Available datasets for Action Recognition generally feature controlled lighting and framing while recording subjects from the front. They mostly reflect good recording conditions but fail to model the data a robot will have to work with in the field, such as varying distance and lighting conditions. In this work, we propose a set of recording conditions, gestures and behaviors that better reflect the environment an agricultural robot might find itself in and record a dataset with a range of sensors that demonstrate these conditions.

    @inproceedings{lincoln36201,
    booktitle = {2nd UK-RAS Robotics and Autonomous Systems Conference},
    month = {January},
    title = {Towards a Dataset of Activities for Action Recognition in Open Fields},
    author = {Alexander Gabriel and Nicola Bellotto and Paul Baxter},
    publisher = {UK-RAS},
    year = {2019},
    pages = {64--67},
    url = {https://eprints.lincoln.ac.uk/id/eprint/36201/},
    abstract = {In an agricultural context, having autonomous robots that can work side-by-side with human workers provides a range of productivity benefits. In order for this to be achieved safely and effectively, these autonomous robots require the ability to understand a range of human behaviors in order to facilitate task communication and coordination. The recognition of human actions is a key part of this, and is the focus of this paper. Available datasets for Action Recognition generally feature controlled lighting and framing while recording subjects from the front. They mostly reflect good recording conditions but fail to model the data a robot will have to work with in the field, such as varying distance and lighting conditions. In this work, we propose a set of recording conditions, gestures and behaviors that better reflect the environment an agricultural robot might find itself in and record a dataset with a range of sensors that demonstrate these conditions.}
    }
  • R. Jiang, X. Li, A. Gao, L. Li, H. Meng, S. Yue, and L. Zhang, “Learning spectral and spatial features based on generative adversarial network for hyperspectral image super-resolution,” in The 2019 ieee international geoscience and remote sensing symposium (igarss2019), 2019. doi:10.1109/IGARSS.2019.8900228
    [BibTeX] [Abstract] [Download PDF]

    Super-resolution (SR) of hyperspectral images (HSIs) aims to enhance the spatial/spectral resolution of hyperspectral imagery, and the super-resolved results will benefit many remote sensing applications. A generative adversarial network for HSI super-resolution (HSRGAN) is proposed in this paper. Specifically, HSRGAN constructs spectral and spatial blocks with residual networks in the generator to effectively learn spectral and spatial features from HSIs. Furthermore, a new loss function, which combines the pixel-wise loss and the adversarial loss, is designed to guide the generator to recover images that approximate the original HSIs with finer texture details. Quantitative and qualitative results demonstrate that the proposed HSRGAN is superior to state-of-the-art methods such as SRCNN and SRGAN for HSI spatial SR.

    @inproceedings{lincoln42331,
    booktitle = {The 2019 IEEE International Geoscience and Remote Sensing Symposium (IGARSS2019)},
    month = {November},
    title = {Learning spectral and spatial features based on generative adversarial network for hyperspectral image super-resolution},
    author = {Ruituo Jiang and Xu Li and Ang Gao and Lixin Li and Hongying Meng and Shigang Yue and Lei Zhang},
    year = {2019},
    doi = {10.1109/IGARSS.2019.8900228},
    url = {https://eprints.lincoln.ac.uk/id/eprint/42331/},
    abstract = {Super-resolution (SR) of hyperspectral images (HSIs) aims to enhance the spatial/spectral resolution of hyperspectral imagery, and the super-resolved results will benefit many remote sensing applications. A generative adversarial network for HSI super-resolution (HSRGAN) is proposed in this paper. Specifically, HSRGAN constructs spectral and spatial blocks with residual networks in the generator to effectively learn spectral and spatial features from HSIs. Furthermore, a new loss function, which combines the pixel-wise loss and the adversarial loss, is designed to guide the generator to recover images that approximate the original HSIs with finer texture details. Quantitative and qualitative results demonstrate that the proposed HSRGAN is superior to state-of-the-art methods such as SRCNN and SRGAN for HSI spatial SR.}
    }
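
    The combined loss described above can be sketched as a hypothetical PyTorch-style generator loss mixing a pixel-wise fidelity term with an adversarial term. The weighting and network interfaces are assumptions, not the paper's exact formulation.

    import torch
    import torch.nn.functional as F

    def generator_loss(sr, hr, disc_logits_on_sr, adv_weight=1e-3):
        """sr: generated batch, hr: ground truth, disc_logits_on_sr:
        discriminator logits for the generated batch (higher = more 'real')."""
        pixel_loss = F.l1_loss(sr, hr)  # pixel-wise fidelity term
        # Adversarial term: push the discriminator towards labelling the
        # generated images as real, encouraging finer texture detail.
        adv_loss = F.binary_cross_entropy_with_logits(
            disc_logits_on_sr, torch.ones_like(disc_logits_on_sr))
        return pixel_loss + adv_weight * adv_loss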
  • S. Lucarotti, C. M. Saaj, E. Allouis, and P. Bianco, “A self-reconfigurable undulating grasper for asteroid mining,” in 15th esa symposium on advanced space technologies in robotics and automation, 2019.
    [BibTeX] [Download PDF]
    @inproceedings{lincoln39624,
    booktitle = {15th ESA Symposium on Advanced Space Technologies in Robotics and Automation},
    title = {A Self-Reconfigurable Undulating Grasper for Asteroid Mining},
    author = {Suzanna Lucarotti and Chakravarthini M. Saaj and Elie Allouis and Paolo Bianco},
    year = {2019},
    url = {https://eprints.lincoln.ac.uk/id/eprint/39624/}
    }
  • H. Montes, T. Duckett, and G. Cielniak, “Model based 3d point cloud segmentation for automated selective broccoli harvesting,” in Smart industry workshop 2019, 2019.
    [BibTeX] [Abstract] [Download PDF]

    Segmentation of 3D objects in cluttered scenes is a highly relevant problem. Given a 3D point cloud produced by a depth sensor, the goal is to separate objects of interest in the foreground from other elements in the background. We research 3D imaging methods to accurately segment and identify broccoli plants in the field. The ability to separate parts into different sets of sensor readings is an important task towards this goal. Our research is focused on the broccoli head segmentation problem as a first step towards size estimation of each broccoli crop in order to establish whether or not it is suitable for cutting.

    @inproceedings{lincoln39207,
    booktitle = {Smart Industry Workshop 2019},
    month = {January},
    title = {Model Based 3D Point Cloud Segmentation for Automated Selective Broccoli Harvesting},
    author = {Hector Montes and Tom Duckett and Grzegorz Cielniak},
    year = {2019},
    url = {https://eprints.lincoln.ac.uk/id/eprint/39207/},
    abstract = {Segmentation of 3D objects in cluttered scenes is a highly relevant problem. Given a 3D point cloud produced by a depth sensor, the goal is to separate objects of interest in the foreground from other elements in the background. We research 3D imaging methods to accurately segment and identify broccoli plants in the field. The ability to separate parts into different sets of sensor readings is an important task towards this goal. Our research is focused on the broccoli head segmentation problem as a first step towards size estimation of each broccoli crop in order to establish whether or not it is suitable for cutting.}
    }
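
    As a simplified illustration of the foreground/background separation problem above, the sketch below keeps only points lying within an assumed height band above an estimated ground level; a real pipeline would fit the ground plane robustly (e.g. with RANSAC) and model the crop shape explicitly.

    import numpy as np

    def segment_foreground(points, band=(0.05, 0.30)):
        """points: (N, 3) array; keep points 5-30 cm above the ground level,
        here estimated crudely as a low percentile of the z coordinates."""
        ground_z = np.percentile(points[:, 2], 5)
        height = points[:, 2] - ground_z
        return points[(height > band[0]) & (height < band[1])]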
  • L. Jackson, C. M. Saaj, A. Seddaoui, C. Whiting, S. Eckersley, and M. Ferris, “The downsizing of a free-flying space robot,” in 20th annual conference, taros 2019, 2019, p. 480–483. doi:10.1007/978-3-030-25332-5
    [BibTeX] [Download PDF]
    @inproceedings{lincoln39418,
    booktitle = {20th Annual Conference, TAROS 2019},
    title = {The Downsizing of a Free-Flying Space Robot},
    author = {Lucy Jackson and Chakravarthini M. Saaj and Asma Seddaoui and Calem Whiting and Steve Eckersley and Mark Ferris},
    publisher = {Springer},
    year = {2019},
    pages = {480--483},
    doi = {10.1007/978-3-030-25332-5},
    url = {https://eprints.lincoln.ac.uk/id/eprint/39418/}
    }
  • J. Lock, G. Cielniak, and N. Bellotto, “Active object search with a mobile device for people with visual impairments,” in 14th international conference on computer vision theory and applications (visapp), 2019, p. 476–485. doi:10.5220/0007582304760485
    [BibTeX] [Abstract] [Download PDF]

    Modern smartphones can provide a multitude of services to assist people with visual impairments, and their cameras in particular can be useful for assisting with tasks, such as reading signs or searching for objects in unknown environments. Previous research has looked at ways to solve these problems by processing the camera's video feed, but very little work has been done in actively guiding the user towards specific points of interest, maximising the effectiveness of the underlying visual algorithms. In this paper, we propose a control algorithm based on a Markov Decision Process that uses a smartphone's camera to generate real-time instructions to guide a user towards a target object. The solution is part of a more general active vision application for people with visual impairments. An initial implementation of the system on a smartphone was experimentally evaluated with participants with healthy eyesight to determine the performance of the control algorithm. The results show the effectiveness of our solution and its potential application to help people with visual impairments find objects in unknown environments.

    @inproceedings{lincoln34596,
    booktitle = {14th International Conference on Computer Vision Theory and Applications (VISAPP)},
    title = {Active Object Search with a Mobile Device for People with Visual Impairments},
    author = {Jacobus Lock and Grzegorz Cielniak and Nicola Bellotto},
    publisher = {VISIGRAPP},
    year = {2019},
    pages = {476--485},
    doi = {10.5220/0007582304760485},
    url = {https://eprints.lincoln.ac.uk/id/eprint/34596/},
    abstract = {Modern smartphones can provide a multitude of services to assist people with visual impairments, and their cameras in particular can be useful for assisting with tasks, such as reading signs or searching for objects in unknown environments. Previous research has looked at ways to solve these problems by processing the camera's video feed, but very little work has been done in actively guiding the user towards specific points of interest, maximising the effectiveness of the underlying visual algorithms. In this paper, we propose a control algorithm based on a Markov Decision Process that uses a smartphone's camera to generate real-time instructions to guide a user towards a target object. The solution is part of a more general active vision application for people with visual impairments. An initial implementation of the system on a smartphone was experimentally evaluated with participants with healthy eyesight to determine the performance of the control algorithm. The results show the effectiveness of our solution and its potential application to help people with visual impairments find objects in unknown environments.}
    }
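
    A toy version of the Markov Decision Process idea above: model the user's camera position as a discrete state, compute state values by value iteration, and emit the instruction whose successor state has the highest value. The state space, rewards and actions are invented purely for illustration.

    import numpy as np

    N_STATES, TARGET, GAMMA = 9, 6, 0.9       # invented toy problem
    ACTIONS = {-1: "left", 0: "stop", 1: "right"}

    def value_iteration(n_iter=100):
        V = np.zeros(N_STATES)
        for _ in range(n_iter):
            for s in range(N_STATES):
                V[s] = max(
                    (1.0 if s2 == TARGET else -0.1) + GAMMA * V[s2]
                    for a in ACTIONS
                    for s2 in [min(N_STATES - 1, max(0, s + a))])
        return V

    def next_instruction(state, V):
        """Tell the user to move towards the highest-value neighbouring state."""
        best = max(ACTIONS, key=lambda a: V[min(N_STATES - 1, max(0, state + a))])
        return ACTIONS[best]

    print(next_instruction(3, value_iteration()))  # -> 'right' (towards state 6)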
  • J. Lock, A. G. Tramontano, S. Ghidoni, and N. Bellotto, “Activis: mobile object detection and active guidance for people with visual impairments,” in Proc. of the int. conf. on image analysis and processing (iciap), 2019.
    [BibTeX] [Abstract] [Download PDF]

    The ActiVis project aims to deliver a mobile system that is able to guide a person with visual impairments towards a target object or area in an unknown indoor environment. For this, it uses new developments in object detection, mobile computing, action generation and human-computer interfacing to interpret the user’s surroundings and present effective guidance directions. Our approach to direction generation uses a Partially Observable Markov Decision Process (POMDP) to track the system’s state and output the optimal location to be investigated. This system includes an object detector and an audio-based guidance interface to provide a complete active search pipeline. The ActiVis system was evaluated in a set of experiments showing better performance than a simpler unguided case.

    @inproceedings{lincoln36413,
    booktitle = {Proc. of the Int. Conf. on Image Analysis and Processing (ICIAP)},
    title = {ActiVis: Mobile Object Detection and Active Guidance for People with Visual Impairments},
    author = {Jacobus Lock and A. G. Tramontano and S. Ghidoni and Nicola Bellotto},
    year = {2019},
    url = {https://eprints.lincoln.ac.uk/id/eprint/36413/},
    abstract = {The ActiVis project aims to deliver a mobile system that is able to guide a person with visual impairments towards a target object or area in an unknown indoor environment. For this, it uses new developments in object detection, mobile computing, action generation and human-computer interfacing to interpret the user's surroundings and present effective guidance directions. Our approach to direction generation uses a Partially Observable Markov Decision Process (POMDP) to track the system's state and output the optimal location to be investigated. This system includes an object detector and an audio-based guidance interface to provide a complete active search pipeline. The ActiVis system was evaluated in a set of experiments showing better performance than a simpler unguided case.}
    }
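
    The POMDP tracking step above can be illustrated with a minimal Bayesian belief update over candidate target locations, reweighted by an assumed detector model after each observation; the probabilities below are placeholders, not the ActiVis parameters.

    import numpy as np

    def update_belief(belief, observed_cell, detected, p_hit=0.8, p_false=0.05):
        """belief: probability the target is in each cell; returns posterior."""
        # Assumed detector model: p_hit true positives, p_false false positives.
        likelihood = np.full(len(belief), p_false if detected else 1.0 - p_false)
        likelihood[observed_cell] = p_hit if detected else 1.0 - p_hit
        posterior = belief * likelihood
        return posterior / posterior.sum()

    belief = np.ones(5) / 5                                        # uniform prior
    print(update_belief(belief, observed_cell=2, detected=False))  # mass shifts away from cell 2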
  • M. Al-Khafajiy, T. Baker, A. Waraich, D. Al-Jumeily, and A. Hussain, “Iot-fog optimal workload via fog offloading,” in 2018 ieee/acm international conference on utility and cloud computing companion (ucc companion), 2019, p. 359–364. doi:10.1109/UCC-Companion.2018.00081
    [BibTeX] [Abstract] [Download PDF]

    Billions of devices are expected to be connected to the Internet of Things in the near future; therefore, a considerable amount of data will be generated and gathered every second. The current network paradigm, which relies on centralised data-centres (a.k.a. cloud computing), becomes an impractical solution for IoT data due to the long distance between the data source and the designated data-centre. In other words, by the time the data has travelled to a data-centre, its value may already have vanished. Therefore, the network topology has evolved to permit data processing at the edge of the network, introducing what is called "fog computing". The latter leads to improvements in quality of service (QoS) via efficient and quick responses to sensor requests. In this paper, we propose a fog computing architecture and framework to enhance QoS via a request offloading method. The proposed method employs a collaboration strategy among fog nodes to permit data processing in a shared mode, hence satisfying QoS and serving the largest number of IoT requests. The proposed framework has the potential to achieve a sustainable network paradigm and highlights significant benefits of fog computing in the computing ecosystem.

    @inproceedings{lincoln47567,
    month = {January},
    author = {Mohammed Al-Khafajiy and Thar Baker and Atif Waraich and Dhiya Al-Jumeily and Abir Hussain},
    booktitle = {2018 IEEE/ACM International Conference on Utility and Cloud Computing Companion (UCC Companion)},
    title = {IoT-Fog Optimal Workload via Fog Offloading},
    publisher = {IEEE},
    doi = {10.1109/UCC-Companion.2018.00081},
    pages = {359--364},
    year = {2019},
    url = {https://eprints.lincoln.ac.uk/id/eprint/47567/},
    abstract = {Billions of devices are expected to be connected to the Internet of Things in the near future; therefore, a considerable amount of data will be generated and gathered every second. The current network paradigm, which relies on centralised data-centres (a.k.a. cloud computing), becomes an impractical solution for IoT data due to the long distance between the data source and the designated data-centre. In other words, by the time the data has travelled to a data-centre, its value may already have vanished. Therefore, the network topology has evolved to permit data processing at the edge of the network, introducing what is called "fog computing". The latter leads to improvements in quality of service (QoS) via efficient and quick responses to sensor requests. In this paper, we propose a fog computing architecture and framework to enhance QoS via a request offloading method. The proposed method employs a collaboration strategy among fog nodes to permit data processing in a shared mode, hence satisfying QoS and serving the largest number of IoT requests. The proposed framework has the potential to achieve a sustainable network paradigm and highlights significant benefits of fog computing in the computing ecosystem.}
    }
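
    A minimal sketch of the collaborative offloading strategy described above: an overloaded fog node forwards a request to its least-loaded neighbour and only falls back to the cloud when no neighbour has capacity. The utilisation model and threshold are assumptions for illustration.

    def place_request(node_loads, origin, overload_threshold=0.8):
        """node_loads: dict of node -> utilisation in [0, 1]; return serving node."""
        if node_loads[origin] < overload_threshold:
            return origin                                # serve locally: lowest latency
        neighbours = {n: u for n, u in node_loads.items()
                      if n != origin and u < overload_threshold}
        if neighbours:
            return min(neighbours, key=neighbours.get)   # least-loaded fog neighbour
        return "cloud"                                   # last resort: distant data centre

    print(place_request({"fog-a": 0.9, "fog-b": 0.4, "fog-c": 0.7}, "fog-a"))  # -> fog-b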
  • M. Sorour, K. Elgeneidy, A. Srinivasan, and M. Hanheide, “Grasping unknown objects based on gripper workspace spheres,” in 2019 ieee/rsj international conference on intelligent robots and systems (iros), 2019, p. 1541–1547. doi:10.1109/IROS40897.2019.8967989
    [BibTeX] [Abstract] [Download PDF]

    In this paper, we present a novel grasp planning algorithm for unknown objects, given a registered point cloud of the target from different views. The proposed methodology requires no prior knowledge of the object, nor offline learning. In our approach, the gripper kinematic model is used to generate a point cloud of each finger workspace, which is then filled with spheres. At run-time, the object is first segmented and its major axis is computed; the main grasping action is constrained to a plane perpendicular to this axis. The object is then uniformly sampled and scanned for various gripper poses that ensure at least one object point is located in the workspace of each finger. In addition, collision checks with the object or the table are performed using a computationally inexpensive gripper shape approximation. Our methodology is both time efficient (consuming less than 1.5 seconds on average) and versatile. Successful experiments have been conducted on a simple jaw gripper (Franka Panda gripper) as well as a complex, high degree-of-freedom (DoF) hand (Allegro hand).

    @inproceedings{lincoln36370,
    month = {November},
    author = {Mohamed Sorour and Khaled Elgeneidy and Aravinda Srinivasan and Marc Hanheide},
    booktitle = {2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
    title = {Grasping Unknown Objects Based on Gripper Workspace Spheres},
    publisher = {IEEE},
    year = {2019},
    journal = {Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2019)},
    doi = {10.1109/IROS40897.2019.8967989},
    pages = {1541--1547},
    url = {https://eprints.lincoln.ac.uk/id/eprint/36370/},
    abstract = {In this paper, we present a novel grasp planning algorithm for unknown objects, given a registered point cloud of the target from different views. The proposed methodology requires no prior knowledge of the object, nor offline learning. In our approach, the gripper kinematic model is used to generate a point cloud of each finger workspace, which is then filled with spheres. At run-time, the object is first segmented and its major axis is computed; the main grasping action is constrained to a plane perpendicular to this axis. The object is then uniformly sampled and scanned for various gripper poses that ensure at least one object point is located in the workspace of each finger. In addition, collision checks with the object or the table are performed using a computationally inexpensive gripper shape approximation. Our methodology is both time efficient (consuming less than 1.5 seconds on average) and versatile. Successful experiments have been conducted on a simple jaw gripper (Franka Panda gripper) as well as a complex, high degree-of-freedom (DoF) hand (Allegro hand).}
    }
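
    The core feasibility test described above can be sketched as follows: a candidate gripper pose is kept only if at least one object point falls inside some workspace sphere of every finger. The sphere sets and geometry below are invented for illustration.

    import numpy as np

    def pose_is_graspable(object_points, finger_sphere_sets):
        """object_points: (N, 3) points in the candidate gripper frame.
        finger_sphere_sets: per finger, a list of (centre, radius) spheres."""
        for spheres in finger_sphere_sets:    # every finger must reach the object
            reached = any(
                np.linalg.norm(object_points - centre, axis=1).min() <= radius
                for centre, radius in spheres)
            if not reached:
                return False
        return True

    points = np.array([[0.0, 0.0, 0.1]])
    fingers = [[(np.zeros(3), 0.2)], [(np.array([0.0, 0.0, 0.5]), 0.1)]]
    print(pose_is_graspable(points, fingers))  # False: the second finger cannot reach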
  • C. Zhao, L. Sun, P. Purkait, T. Duckett, and R. Stolkin, “Learning monocular visual odometry with dense 3d mapping from dense 3d flow,” in 2018 ieee/rsj international conference on intelligent robots and systems (iros), 2019. doi:10.1109/IROS.2018.8594151
    [BibTeX] [Abstract] [Download PDF]

    This paper introduces a fully deep learning approach to monocular SLAM, which can perform simultaneous localization using a neural network for learning visual odometry (L-VO) and dense 3D mapping. Dense 2D flow and a depth image are generated from monocular images by sub-networks, which are then used by a 3D flow associated layer in the L-VO network to generate dense 3D flow. Given this 3D flow, the dual-stream L-VO network can then predict the 6DOF relative pose and furthermore reconstruct the vehicle trajectory. In order to learn the correlation between motion directions, Bivariate Gaussian modeling is employed in the loss function. The L-VO network achieves an overall performance of 2.68% for average translational error and 0.0143°/m for average rotational error on the KITTI odometry benchmark. Moreover, the learned depth is leveraged to generate a dense 3D map. As a result, an entire visual SLAM system, that is, learning monocular odometry combined with dense 3D mapping, is achieved.

    @inproceedings{lincoln36001,
    booktitle = {2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
    month = {January},
    title = {Learning Monocular Visual Odometry with Dense 3D Mapping from Dense 3D Flow},
    author = {Cheng Zhao and Li Sun and Pulak Purkait and Tom Duckett and Rustam Stolkin},
    publisher = {IEEE},
    year = {2019},
    doi = {10.1109/IROS.2018.8594151},
    url = {https://eprints.lincoln.ac.uk/id/eprint/36001/},
    abstract = {This paper introduces a fully deep learning approach to monocular SLAM, which can perform simultaneous localization using a neural network for learning visual odometry (L-VO) and dense 3D mapping. Dense 2D flow and a depth image are generated from monocular images by sub-networks, which are then used by a 3D flow associated layer in the L-VO network to generate dense 3D flow. Given this 3D flow, the dual-stream L-VO network can then predict the 6DOF relative pose and furthermore reconstruct the vehicle trajectory. In order to learn the correlation between motion directions, Bivariate Gaussian modeling is employed in the loss function. The L-VO network achieves an overall performance of 2.68 \% for average translational error and 0.0143°/m for average rotational error on the KITTI odometry benchmark. Moreover, the learned depth is leveraged to generate a dense 3D map. As a result, an entire visual SLAM system, that is, learning monocular odometry combined with dense 3D mapping, is achieved.}
    }
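
    The bivariate Gaussian modelling mentioned above can be sketched as a negative log-likelihood loss over two correlated motion components; the parameterisation (predicted mean, log standard deviations and an unconstrained correlation) is a common choice assumed here, not necessarily the paper's.

    import torch

    def bivariate_gaussian_nll(target, mu, log_sigma, rho_raw):
        """target, mu: (B, 2); log_sigma: (B, 2); rho_raw: (B,) unconstrained."""
        rho = torch.tanh(rho_raw)              # correlation constrained to (-1, 1)
        z = (target - mu) / log_sigma.exp()    # standardised residuals
        quad = (z[:, 0] ** 2 - 2 * rho * z[:, 0] * z[:, 1] + z[:, 1] ** 2) / (1 - rho ** 2)
        log_det = log_sigma.sum(dim=1) + 0.5 * torch.log(1 - rho ** 2)
        return (0.5 * quad + log_det).mean()   # constant log(2*pi) term dropped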
  • F. Camara, N. Merat, and C. Fox, “A heuristic model for pedestrian intention estimation,” in Ieee intelligent transportation systems conference, 2019. doi:10.1109/ITSC.2019.8917195
    [BibTeX] [Abstract] [Download PDF]

    Understanding pedestrian behaviour and controlling interactions with pedestrians is of critical importance for autonomous vehicles, but remains a complex and challenging problem. This study infers pedestrian intent during possible road-crossing interactions, to assist autonomous vehicle decisions to yield or not yield when approaching them, and tests a simple heuristic model of intent on pedestrian-vehicle trajectory data for the first time. It relies on a heuristic approach based on the observed positions of the agents over time. The method can predict pedestrian crossing intent, crossing or stopping, with 96% accuracy by the time the pedestrian reaches the curbside, on the standard Daimler pedestrian dataset. This result is important in demarcating scenarios which have a clear winner and can be predicted easily with the simple heuristic, from those which may require more complex game-theoretic models to predict and control.

    @inproceedings{lincoln36758,
    booktitle = {IEEE Intelligent Transportation Systems Conference},
    month = {November},
    title = {A heuristic model for pedestrian intention estimation},
    author = {Fanta Camara and Natasha Merat and Charles Fox},
    publisher = {IEEE},
    year = {2019},
    doi = {10.1109/ITSC.2019.8917195},
    url = {https://eprints.lincoln.ac.uk/id/eprint/36758/},
    abstract = {Understanding pedestrian behaviour and controlling interactions with pedestrians is of critical importance for autonomous vehicles, but remains a complex and challenging problem. This study infers pedestrian intent during possible road-crossing interactions, to assist autonomous vehicle decisions to yield or not yield when approaching them, and tests a simple heuristic model of intent on pedestrian-vehicle trajectory data for the first time. It relies on a heuristic approach based on the observed positions of the agents over time. The method can predict pedestrian crossing intent, crossing or stopping, with 96\% accuracy by the time the pedestrian reaches the curbside, on the standard Daimler pedestrian dataset. This result is important in demarcating scenarios which have a clear winner and can be predicted easily with the simple heuristic, from those which may require more complex game-theoretic models to predict and control.}
    }
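
    In the spirit of the position-based heuristic above, the sketch below predicts 'crossing' when a pedestrian keeps approaching the curb at an undiminished speed and 'stopping' otherwise; the sampling rate and slowing threshold are assumed parameters, not those fitted to the Daimler dataset.

    def predict_intent(distances_to_curb, dt=0.1, slowing_threshold=0.2):
        """distances_to_curb: recent distance samples in metres, newest last."""
        v_recent = (distances_to_curb[-2] - distances_to_curb[-1]) / dt
        v_earlier = (distances_to_curb[0] - distances_to_curb[1]) / dt
        if v_recent <= 0:
            return "stopping"                  # no longer approaching the curb
        if v_recent < v_earlier * (1.0 - slowing_threshold):
            return "stopping"                  # decelerating markedly
        return "crossing"

    print(predict_intent([2.0, 1.5, 1.0, 0.5]))  # steady approach -> 'crossing'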
  • A. Babu, P. Lightbody, G. Das, P. Liu, S. Gomez-Gonzalez, and G. Neumann, “Improving local trajectory optimisation using probabilistic movement primitives,” in 2019 ieee/rsj international conference on intelligent robots and systems (iros), 2019, p. 2666–2671. doi:10.1109/IROS40897.2019.8967980
    [BibTeX] [Abstract] [Download PDF]

    Local trajectory optimisation techniques are a powerful tool for motion planning. However, they often get stuck in local optima depending on the quality of the initial solution and consequently, often do not find a valid (i.e. collision free) trajectory. Moreover, they often require fine tuning of a cost function to obtain the desired motions. In this paper, we address both problems by combining local trajectory optimisation with learning from demonstrations. The human expert demonstrates how to reach different target end-effector locations in different ways. From these demonstrations, we estimate a trajectory distribution, represented by a Probabilistic Movement Primitive (ProMP). For a new target location, we sample different trajectories from the ProMP and use these trajectories as initial solutions for the local optimisation. As the ProMP generates versatile initial solutions for the optimisation, the chance of finding poor local minima is significantly reduced. Moreover, the learned trajectory distribution is used to specify the smoothness costs for the optimisation, resulting in solutions of similar shape as the demonstrations. We demonstrate the effectiveness of our approach in several complex obstacle avoidance scenarios.

    @inproceedings{lincoln40837,
    booktitle = {2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
    title = {Improving Local Trajectory Optimisation using Probabilistic Movement Primitives},
    author = {Ashith Babu and Peter Lightbody and Gautham Das and Pengcheng Liu and Sebastian Gomez-Gonzalez and Gerhard Neumann},
    publisher = {IEEE},
    year = {2019},
    pages = {2666--2671},
    doi = {10.1109/IROS40897.2019.8967980},
    url = {https://eprints.lincoln.ac.uk/id/eprint/40837/},
    abstract = {Local trajectory optimisation techniques are a powerful tool for motion planning. However, they often get stuck in local optima depending on the quality of the initial solution and consequently, often do not find a valid (i.e. collision free) trajectory. Moreover, they often require fine tuning of a cost function to obtain the desired motions. In this paper, we address both problems by combining local trajectory optimisation with learning from demonstrations. The human expert demonstrates how to reach different target end-effector locations in different ways. From these demonstrations, we estimate a trajectory distribution, represented by a Probabilistic Movement Primitive (ProMP). For a new target location, we sample different trajectories from the ProMP and use these trajectories as initial solutions for the local optimisation. As the ProMP generates versatile initial solutions for the optimisation, the chance of finding poor local minima is significantly reduced. Moreover, the learned trajectory distribution is used to specify the smoothness costs for the optimisation, resulting in solutions of similar shape as the demonstrations. We demonstrate the effectiveness of our approach in several complex obstacle avoidance scenarios.}
    }
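
    The initialisation strategy above can be sketched by sampling trajectories from a Gaussian distribution over basis-function weights, as a ProMP does, and handing each sample to the local optimiser as a starting solution. The radial basis features and dimensions are illustrative.

    import numpy as np

    def rbf_basis(n_steps=50, n_basis=10, width=0.02):
        """Normalised radial basis features over phase in [0, 1]."""
        phase = np.linspace(0.0, 1.0, n_steps)[:, None]
        centres = np.linspace(0.0, 1.0, n_basis)[None, :]
        phi = np.exp(-(phase - centres) ** 2 / (2.0 * width))
        return phi / phi.sum(axis=1, keepdims=True)      # (n_steps, n_basis)

    def sample_initial_trajectories(mu_w, cov_w, n_samples=5):
        """mu_w, cov_w: weight distribution estimated from demonstrations."""
        phi = rbf_basis(n_basis=len(mu_w))
        weights = np.random.multivariate_normal(mu_w, cov_w, size=n_samples)
        return weights @ phi.T                           # (n_samples, n_steps)

    inits = sample_initial_trajectories(np.zeros(10), 0.1 * np.eye(10))
    # each row is one candidate 1-D trajectory to seed the local optimiser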
  • J. Koleosho and C. M. Saaj, “System design and control of a di-wheel rover,” in Towards autonomous robotic systems, 2019, p. 409–421. doi:10.1007/978-3-030-25332-5_35
    [BibTeX] [Abstract] [Download PDF]

    Traditionally, wheeled rovers are used for planetary surface exploration and six-wheeled chassis designs based on the Rocker-Bogie suspension system have been tested successfully on Mars. However, it is difficult to explore craters and crevasses using large six or four-wheeled rovers. Innovative designs based on smaller Di-Wheel Rovers might be better suited for such challenging terrains. A Di-Wheel Rover is a self-balancing two-wheeled mobile robot that can move in all directions within a two-dimensional plane, as well as stand upright by balancing on two wheels. This paper presents the outcomes of a feasibility study on a Di-Wheel Rover for planetary exploration missions. This includes developing its chassis design based on the hardware and software requirements, prototyping, and subsequent testing. The main contribution of this paper is the design of a self-balancing control system for the Di-Wheel Rover. This challenging design exercise was successfully completed through extensive experimentation thereby validating the performance of the Di-Wheel Rover. The details on the structural design, tuning controller gains based on an inverted pendulum model, and testing on different ground surfaces are described in this paper. The results presented in this paper give a new insight into designing low-cost Di-Wheel Rovers and clearly, there is a potential to use Di-Wheel Rovers for future planetary exploration.

    @inproceedings{lincoln39621,
    volume = {11650},
    author = {John Koleosho and Chakravarthini M. Saaj},
    booktitle = {Towards Autonomous Robotic Systems},
    title = {System Design and Control of a Di-Wheel Rover},
    publisher = {Springer},
    doi = {10.1007/978-3-030-25332-5\_35},
    pages = {409--421},
    year = {2019},
    url = {https://eprints.lincoln.ac.uk/id/eprint/39621/},
    abstract = {Traditionally, wheeled rovers are used for planetary surface exploration and six-wheeled chassis designs based on the Rocker-Bogie suspension system have been tested successfully on Mars. However, it is difficult to explore craters and crevasses using large six or four-wheeled rovers. Innovative designs based on smaller Di-Wheel Rovers might be better suited for such challenging terrains. A Di-Wheel Rover is a self-balancing two-wheeled mobile robot that can move in all directions within a two-dimensional plane, as well as stand upright by balancing on two wheels.
    This paper presents the outcomes of a feasibility study on a Di-Wheel Rover for planetary exploration missions. This includes developing its chassis design based on the hardware and software requirements, prototyping, and subsequent testing. The main contribution of this paper is the design of a self-balancing control system for the Di-Wheel Rover. This challenging design exercise was successfully completed through extensive experimentation thereby validating the performance of the Di-Wheel Rover. The details on the structural design, tuning controller gains based on an inverted pendulum model, and testing on different ground surfaces are described in this paper. The results presented in this paper give a new insight into designing low-cost Di-Wheel Rovers and clearly, there is a potential to use Di-Wheel Rovers for future planetary exploration.}
    }
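
    A toy balance loop in the spirit of the self-balancing controller described above: a PID controller drives the measured tilt error of the inverted-pendulum body to zero. The gains are placeholders that, as in the paper, would be tuned experimentally on the real rover.

    def make_pid(kp=25.0, ki=1.0, kd=2.0, dt=0.01):
        """Return a stateful controller mapping tilt error (rad) to wheel torque."""
        state = {"integral": 0.0, "prev_error": 0.0}

        def step(tilt_error):
            state["integral"] += tilt_error * dt
            derivative = (tilt_error - state["prev_error"]) / dt
            state["prev_error"] = tilt_error
            # Torque command that drives the body's tilt error back to zero.
            return kp * tilt_error + ki * state["integral"] + kd * derivative

        return step

    controller = make_pid()
    print(controller(0.05))  # small forward lean -> positive corrective torque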
  • K. Elgeneidy, G. Neumann, S. Pearson, M. Jackson, and N. Lohse, “Contact detection and size estimation using a modular soft gripper with embedded flex sensors,” in International conference on intelligent robots and systems (iros 2018), 2019.
    [BibTeX] [Abstract] [Download PDF]

    Grippers made from soft elastomers are able to passively and gently adapt to their targets allowing deformable objects to be grasped safely without causing bruise or damage. However, it is difficult to regulate the contact forces due to the lack of contact feedback for such grippers. In this paper, a modular soft gripper is presented utilizing interchangeable soft pneumatic actuators with embedded flex sensors as fingers of the gripper. The fingers can be assembled in different configurations using 3D printed connectors. The paper investigates the potential of utilizing the simple sensory feedback from the flex and pressure sensors to make additional meaningful inferences regarding the contact state and grasped object size. We study the effect of the grasped object size and contact type on the combined feedback from the embedded flex sensors of opposing fingers. Our results show that a simple linear relationship exists between the grasped object size and the final flex sensor reading at fixed input conditions, despite the variation in object weight and contact type. Additionally, by simply monitoring the time series response from the flex sensor, contact can be detected by comparing the response to the known free-bending response at the same input conditions. Furthermore, by utilizing the measured internal pressure supplied to the soft fingers, it is possible to distinguish between power and pinch grasps, as the contact type affects the rate of change in the flex sensor readings against the internal pressure.

    @inproceedings{lincoln34713,
    booktitle = {International Conference on Intelligent Robots and Systems (IROS 2018)},
    month = {January},
    title = {Contact Detection and Size Estimation Using a Modular Soft Gripper with Embedded Flex Sensors},