Download the BibTeX file of all L-CAS publications
Below is a list of all outputs created with the involvement of L-CAS academics. Filter by author or type.
2023
- J. C. M. Baños, P. J. From, and G. Cielniak, “Towards safe robotic agricultural applications: safe navigation system design for a robotic grass-mowing application through the risk management method,” Robotics, vol. 12, iss. 63, 2023. doi:10.3390/robotics12030063
[BibTeX] [Abstract] [Download PDF]
Safe navigation is a key objective for autonomous applications, particularly those involving mobile tasks, to avoid dangerous situations and prevent harm to humans. However, the integration of a risk management process is not yet mandatory in robotics development. Ensuring safety using mobile robots is critical for many real-world applications, especially those in which contact with the robot could result in fatal consequences, such as agricultural environments where a mobile device with an industrial cutter is used for grass-mowing. In this paper, we propose an explicit integration of a risk management process into the design of the software for an autonomous grass mower, with the aim of enhancing safety. Our approach is tested and validated in simulated scenarios that assess the effectiveness of different custom safety functionalities in terms of collision prevention, execution time, and the number of required human interventions.
@article{lincoln54478, volume = {12}, number = {63}, month = {June}, author = {Jos{\'e} Carlos Mayoral Ba{\~n}os and P{\r a}l Johan From and Grzegorz Cielniak}, title = {Towards Safe Robotic Agricultural Applications: Safe Navigation System Design for a Robotic Grass-Mowing Application through the Risk Management Method}, publisher = {MDPI}, year = {2023}, journal = {Robotics}, doi = {10.3390/robotics12030063}, url = {https://eprints.lincoln.ac.uk/id/eprint/54478/}, abstract = {Safe navigation is a key objective for autonomous applications, particularly those involving mobile tasks, to avoid dangerous situations and prevent harm to humans. However, the integration of a risk management process is not yet mandatory in robotics development. Ensuring safety using mobile robots is critical for many real-world applications, especially those in which contact with the robot could result in fatal consequences, such as agricultural environments where a mobile device with an industrial cutter is used for grass-mowing. In this paper, we propose an explicit integration of a risk management process into the design of the software for an autonomous grass mower, with the aim of enhancing safety. Our approach is tested and validated in simulated scenarios that assess the effectiveness of different custom safety functionalities in terms of collision prevention, execution time, and the number of required human interventions.} }
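For readers who want a feel for what a "custom safety functionality" can look like in code, here is a minimal Python sketch of a distance-based safety gate; the thresholds, the `Detection` type, and the action set are illustrative assumptions, not the system described in the paper.

```python
# Hypothetical sketch of a distance-based safety gate, loosely inspired by the
# kind of safety functionality the paper evaluates (stop/slow the mower when a
# detected person is too close). All names and thresholds are illustrative.
from dataclasses import dataclass
from enum import Enum

class SafetyAction(Enum):
    CONTINUE = "continue"
    SLOW_DOWN = "slow_down"
    EMERGENCY_STOP = "emergency_stop"

@dataclass
class Detection:
    distance_m: float  # range to a detected person

def safety_gate(detections, warn_dist=5.0, stop_dist=2.0):
    """Map the closest detection to a mitigation action."""
    if not detections:
        return SafetyAction.CONTINUE
    closest = min(d.distance_m for d in detections)
    if closest < stop_dist:
        return SafetyAction.EMERGENCY_STOP   # cutter off, brakes on
    if closest < warn_dist:
        return SafetyAction.SLOW_DOWN        # reduce speed, request help if persistent
    return SafetyAction.CONTINUE

print(safety_gate([Detection(4.2), Detection(7.9)]))  # SafetyAction.SLOW_DOWN
```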
- L. Manning, S. Brewer, P. Craigon, J. Frey, A. Gutierrez, N. Jacobs, S. Kanza, S. Munday, J. Sacks, and S. Pearson, “Reflexive governance architectures: considering the ethical implications of autonomous technology adoption in food supply chains,” Trends in food science & technology, vol. 133, p. 114–126, 2023. doi:10.1016/j.tifs.2023.01.015
[BibTeX] [Abstract] [Download PDF]
Background: The application of autonomous technology in food supply chains gives rise to a number of ethical considerations associated with the interaction between human and technology, human-technology-plant and human-technology-animal. These considerations and their implications influence technology design, the ways in which technology is applied, how the technology changes food supply chain practices, decision-making and the associated ethical aspects and outcomes. Scope and approach: Using the concept of reflexive governance, this paper has critiqued existing reflective food-related ethical assessment tools and proposed the structural elements required for reflexive governance architectures which address both the sharing of data, and the use of artificial intelligence (AI) and machine learning in food supply chains. Key findings and conclusions: Considering the ethical implications of using autonomous technology in real life contexts is challenging. The current approach, focusing on discrete ethical elements in isolation e.g., ethical aspects or outcomes, normative standards or ethically orientated compliance-based business strategies is not sufficient in itself. Alternatively, the application of more holistic, reflexive governance architectures can inform consideration of ethical aspects, potential ethical outcomes, in particular how they are interlinked and/or interdependent, and the need for mitigation at all lifecycle stages of technology and food product conceptualisation, design, realisation and adoption in the food supply chain. This research is of interest to those who are undertaking ethical deliberation on data sharing, and the use of AI and machine learning in food supply chains.
@article{lincoln53439, volume = {133}, month = {March}, author = {L. Manning and S. Brewer and P. Craigon and J. Frey and A. Gutierrez and N. Jacobs and S. Kanza and S. Munday and J. Sacks and S. Pearson}, title = {Reflexive governance architectures: considering the ethical implications of autonomous technology adoption in food supply chains}, publisher = {Elsevier}, year = {2023}, journal = {Trends in Food Science \& Technology}, doi = {10.1016/j.tifs.2023.01.015}, pages = {114--126}, url = {https://eprints.lincoln.ac.uk/id/eprint/53439/}, abstract = {Background: The application of autonomous technology in food supply chains gives rise to a number of ethical considerations associated with the interaction between human and technology, human-technology-plant and human-technology-animal. These considerations and their implications influence technology design, the ways in which technology is applied, how the technology changes food supply chain practices, decision-making and the associated ethical aspects and outcomes. Scope and approach: Using the concept of reflexive governance, this paper has critiqued existing reflective food-related ethical assessment tools and proposed the structural elements required for reflexive governance architectures which address both the sharing of data, and the use of artificial intelligence (AI) and machine learning in food supply chains. Key findings and conclusions: Considering the ethical implications of using autonomous technology in real life contexts is challenging. The current approach, focusing on discrete ethical elements in isolation e.g., ethical aspects or outcomes, normative standards or ethically orientated compliance-based business strategies is not sufficient in itself. Alternatively, the application of more holistic, reflexive governance architectures can inform consideration of ethical aspects, potential ethical outcomes, in particular how they are interlinked and/or interdependent, and the need for mitigation at all lifecycle stages of technology and food product conceptualisation, design, realisation and adoption in the food supply chain. This research is of interest to those who are undertaking ethical deliberation on data sharing, and the use of AI and machine learning in food supply chains.} }
- A. M. G. Esfahani, “Haptic-guided grasping to minimise torque effort during robotic telemanipulation,” Autonomous robots, 2023.
[BibTeX] [Abstract] [Download PDF]
Teleoperating robotic manipulators can be complicated and cognitively demanding for the human operator. Despite these difficulties, teleoperated robotic systems are still popular in several industrial applications, e.g., remote handling of hazardous material. In this context, we present a novel haptic shared control method for minimising the manipulator torque effort during remote manipulative actions, in which an operator is assisted in selecting a suitable grasping pose and then displacing an object along a desired trajectory. Minimising torque is important because it reduces the system operating cost and extends the range of objects that can be manipulated. We demonstrate the effectiveness of the proposed approach in a series of representative real-world pick-and-place experiments as well as in human subjects studies. The reported results prove the effectiveness of our shared control vs. a standard teleoperation approach. We also find that haptic-only guidance performs better than visual guidance, although combining the two leads to the best overall results.
@article{lincoln53715, title = {Haptic-guided Grasping to Minimise Torque Effort during Robotic Telemanipulation}, author = {Amir Masoud Ghalamzan Esfahani}, publisher = {Springer}, year = {2023}, journal = {Autonomous Robots}, url = {https://eprints.lincoln.ac.uk/id/eprint/53715/}, abstract = {Teleoperating robotic manipulators can be complicated and cognitively demanding for the human operator. Despite these difficulties, teleoperated robotic systems are still popular in several industrial applications, e.g., remote handling of hazardous material. In this context, we present a novel haptic shared control method for minimising the manipulator torque effort during remote manipulative actions in which an operator is assisted in selecting a suitable grasping pose for then displacing an object along a desired trajectory. Minimising torque is important because it reduces the system operating cost and extends the range of objects that can be manipulated. We demonstrate the effectiveness of the proposed approach in a series of representative real-world pick-and-place experiments as well as in human subjects studies. The reported results prove the effectiveness of our shared control vs. a standard teleoperation approach. We also find that haptic-only guidance performs better than visual guidance, although combining them together leads to the best overall results.} }
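As a rough illustration of what "minimising torque effort" during grasp selection can mean, the following sketch ranks candidate grasp points by the gravitational torque τ = r × F they induce about the gripper; the object geometry, mass, and the ranking criterion are assumptions for exposition, not the authors' controller.

```python
# Illustrative ranking of candidate grasp poses by the gravitational torque
# they induce about the gripper: one plausible reading of "minimising torque
# effort" when choosing where to grasp. All numbers are invented.
import numpy as np

def gravity_torque(grasp_point, com, mass_kg, g=9.81):
    """Magnitude of the torque about the grasp due to the object's weight."""
    r = com - grasp_point                    # lever arm from grasp to centre of mass
    f = np.array([0.0, 0.0, -mass_kg * g])   # weight acting at the centre of mass
    return np.linalg.norm(np.cross(r, f))

com = np.array([0.10, 0.05, 0.0])            # object centre of mass (m)
candidates = [np.array([0.00, 0.00, 0.0]),   # grasp at one end
              np.array([0.10, 0.05, 0.0]),   # grasp through the centre of mass
              np.array([0.20, 0.10, 0.0])]   # grasp at the far end

best = min(candidates, key=lambda p: gravity_torque(p, com, mass_kg=1.5))
print(best)  # the grasp through the centre of mass minimises the torque
```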
- V. Wichitwechkarn and C. Fox, “MACARONS: a modular and open-sourced automation system for vertical farming,” Journal of open hardware, vol. 7, iss. 1, 2023. doi:10.5334/joh.53
[BibTeX] [Abstract] [Download PDF]
The Modular Automated Crop Array Online System (MACARONS) is an extensible, scalable, open hardware system for plant transport in automated horticulture systems such as vertical farms. It is specified to move trays of plants up to 1060 mm × 630 mm and 12.5 kg at a rate of 100 mm/s along the guide rails and 41.7 mm/s up the lifts, such as between stations for monitoring and actuating plants. The cost for the construction of one grow unit of MACARONS is 144.96 USD, which equates to 128.85 USD/m² of grow area. The designs are released and meet the requirements of CERN-OSH-W, including step-by-step graphical build instructions, and the system can be built by a typical technical person in one day at a cost of 1535.50 USD. Integrated tests included in the build instructions are used to validate the build against the specifications, and we report on a successful build. Through a simple analysis, we demonstrate that MACARONS can operate at a rate sufficient to automate tray loading/unloading, to reduce labour costs in a vertical farm.
@article{lincoln52872, volume = {7}, number = {1}, month = {January}, author = {Vijja Wichitwechkarn and Charles Fox}, title = {MACARONS: A Modular and Open-Sourced Automation System for Vertical Farming}, publisher = {Ubiquity Press}, year = {2023}, journal = {Journal of Open Hardware}, doi = {10.5334/joh.53}, url = {https://eprints.lincoln.ac.uk/id/eprint/52872/}, abstract = {The Modular Automated Crop Array Online System (MACARONS) is an extensible, scalable, open hardware system for plant transport in automated horticulture systems such as vertical farms. It is specified to move trays of plants up to 1060mm {$\times$} 630mm and 12.5kg at a rate of 100mm/s along the guide rails and 41.7mm/s up the lifts, such as between stations for monitoring and actuating plants. The cost for the construction of one grow unit of MACARONS is 144.96USD which equates to 128.85USD/m{$^2$} of grow area. The designs are released and meet the requirements of CERN-OSH-W, including step-by-step graphical build instructions, and the system can be built by a typical technical person in one day at a cost of 1535.50 USD. Integrated tests included in the build instructions are used to validate the build against the specifications, and we report on a successful build. Through a simple analysis, we demonstrate that MACARONS can operate at a rate sufficient to automate tray loading/unloading, to reduce labour costs in a vertical farm.} }
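The specified speeds lend themselves to a back-of-the-envelope throughput check in the spirit of the paper's "simple analysis"; the rail run and lift height below are assumed example dimensions, not measurements from the paper.

```python
# Back-of-the-envelope check of tray-movement throughput at the specified
# speeds. The 5 m rail run and 2 m lift height are assumed example dimensions.
RAIL_SPEED_MM_S = 100.0   # along the guide rails (from the specification)
LIFT_SPEED_MM_S = 41.7    # up the lifts (from the specification)

rail_run_mm = 5000.0      # assumed horizontal travel per tray move
lift_rise_mm = 2000.0     # assumed vertical travel per tray move

move_time_s = rail_run_mm / RAIL_SPEED_MM_S + lift_rise_mm / LIFT_SPEED_MM_S
trays_per_hour = 3600.0 / move_time_s
print(f"{move_time_s:.0f} s per tray move, about {trays_per_hour:.0f} trays/hour")
# ~98 s per move -> roughly 37 trays/hour for this assumed geometry
```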
- P. Craigon, J. Sacks, S. Brewer, J. Frey, A. Gutierrez Mendoza, S. Kanza, L. Manning, S. Munday, A. Wintour, and S. Pearson, “Ethics by design: responsible research & innovation for AI in the food sector,” Journal of responsible technology, vol. 13, iss. 100051, 2023. doi:10.1016/j.jrt.2022.100051
[BibTeX] [Abstract] [Download PDF]
Here we reflect on how a multi-disciplinary working group explored the ethical complexities of the use of new technologies for data sharing in the food supply chain. We used a three-part process of varied design methods, which included collaborative ideation and speculative scenario development, the creation of design fiction objects, and assessment using the Moral-IT deck, a card-based tool. We present, through the lens of the EPSRC’s Framework for Responsible Innovation how processes of anticipation, reflection, engagement and action built a plausible, fictional world in which a data trust uses artificial intelligence (AI) to support data sharing and decision-making across the food supply chain. This approach provides rich opportunities for considering ethical challenges to data sharing as part of a reflexive and engaged responsible innovation approach. We reflect on the value and potential of this approach as a method for engaged (co-)design and responsible innovation.
@article{lincoln52115, volume = {13}, number = {100051}, month = {April}, author = {P. Craigon and J. Sacks and S. Brewer and J. Frey and A. Gutierrez Mendoza and S. Kanza and L. Manning and S. Munday and A. Wintour and S. Pearson}, title = {Ethics by Design: Responsible Research \& Innovation for AI in the Food Sector}, publisher = {Elsevier}, year = {2023}, journal = {Journal of Responsible Technology}, doi = {10.1016/j.jrt.2022.100051}, url = {https://eprints.lincoln.ac.uk/id/eprint/52115/}, abstract = {Here we reflect on how a multi-disciplinary working group explored the ethical complexities of the use of new technologies for data sharing in the food supply chain. We used a three-part process of varied design methods, which included collaborative ideation and speculative scenario development, the creation of design fiction objects, and assessment using the Moral-IT deck, a card-based tool. We present, through the lens of the EPSRC's Framework for Responsible Innovation how processes of anticipation, reflection, engagement and action built a plausible, fictional world in which a data trust uses artificial intelligence (AI) to support data sharing and decision-making across the food supply chain. This approach provides rich opportunities for considering ethical challenges to data sharing as part of a reflexive and engaged responsible innovation approach. We reflect on the value and potential of this approach as a method for engaged (co-)design and responsible innovation.} }
- A. Perrett, H. Pollard, C. Barnes, M. Schofield, L. Qie, P. Bosilj, and J. Brown, “DeepVerge: classification of roadside verge biodiversity and conservation potential,” Computers, environment and urban systems, 2023.
[BibTeX] [Abstract] [Download PDF]
Grasslands are increasingly modified by anthropogenic activities and species-rich grasslands have become rare habitats in the UK. However, grassy roadside verges often contain conservation priority plant species and should be targeted for protection. Identification of verges with high conservation potential represents a considerable challenge for ecologists, driving the development of automated methods to make up for the shortfall of relevant expertise nationally. Using survey data from 3,900 km of roadside verges alongside publicly available street-view imagery, we present DeepVerge: a deep learning-based method that can automatically survey sections of roadside verge by detecting the presence of positive indicator species. Using images and ground truth survey data from the rural UK county of Lincolnshire, DeepVerge achieved a mean accuracy of 88% and a mean F1 score of 0.82. Such a method may be used by local authorities to identify new local wildlife sites, and aid management and environmental planning in line with legal and government policy obligations, saving thousands of hours of skilled labour.
@article{lincoln54285, title = {DeepVerge: Classification of Roadside Verge Biodiversity and Conservation Potential}, author = {Andrew Perrett and Harry Pollard and Charlie Barnes and Mark Schofield and Lan Qie and Petra Bosilj and James Brown}, publisher = {Elsevier}, year = {2023}, journal = {Computers, Environment and Urban Systems}, url = {https://eprints.lincoln.ac.uk/id/eprint/54285/}, abstract = {Grasslands are increasingly modified by anthropogenic activities and species rich grasslands have become rare habitats in the UK. However, grassy roadside verges often contain conservation priority plant species and should be targeted for protection. Identification of verges with high conservation potential represents a considerable challenge for ecologists, driving the development of automated methods to make up for the shortfall of relevant expertise nationally. Using survey data from 3,900 km of roadside verges alongside publicly available street-view imagery, we present DeepVerge: a deep learning-based method that can automatically survey sections of roadside verge by detecting the presence of positive indicator species. Using images and ground truth survey data from the rural UK county of Lincolnshire, DeepVerge achieved a mean accuracy of 88\% and a mean F1 score of 0.82. Such a method may be used by local authorities to identify new local wildlife sites, and aid management and environmental planning in line with legal and government policy obligations, saving thousands of hours of skilled labour.} }
- D. A. Hafeth, S. Kollias, and M. Ghafoor, “Semantic representations with attention networks for boosting image captioning,” IEEE Access, vol. 11, p. 40230–40239, 2023. doi:10.1109/ACCESS.2023.3268744
[BibTeX] [Abstract] [Download PDF]
Image captioning has shown encouraging outcomes with Transformer-based architectures that typically use attention-based methods to establish semantic associations between objects in an image for caption prediction. Nevertheless, when appearance features of objects in an image display low interdependence, attention-based methods have difficulty in capturing the semantic association between them. To tackle this problem, additional knowledge beyond the task-specific dataset is often required to create captions that are more precise and meaningful. In this article, a semantic attention network is proposed to incorporate general-purpose knowledge into a transformer attention block model. This design combines visual and semantic properties of internal image knowledge in one place for fusion, serving as a reference point to aid in the learning of alignments between vision and language and to improve visual attention and semantic association. The proposed framework is validated on the Microsoft COCO dataset, and experimental results demonstrate competitive performance against the current state of the art.
@article{lincoln54257, volume = {11}, month = {April}, author = {Deema Abdal Hafeth and Stefanos Kollias and Mubeen Ghafoor}, title = {Semantic Representations with Attention Networks for Boosting Image Captioning}, publisher = {IEEE}, year = {2023}, journal = {IEEE Access}, doi = {10.1109/ACCESS.2023.3268744}, pages = {40230--40239}, url = {https://eprints.lincoln.ac.uk/id/eprint/54257/}, abstract = {Image captioning has shown encouraging outcomes with Transformer-based architectures that typically use attention-based methods to establish semantic associations between objects in an image for caption prediction. Nevertheless, when appearance features of objects in an image display low interdependence, attention-based methods have difficulty in capturing the semantic association between them. To tackle this problem, additional knowledge beyond the task-specific dataset is often required to create captions that are more precise and meaningful. In this article, a semantic attention network is proposed to incorporate general-purpose knowledge into a transformer attention block model. This design combines visual and semantic properties of internal image knowledge in one place for fusion, serving as a reference point to aid in the learning of alignments between vision and language and to improve visual attention and semantic association. The proposed framework is validated on the Microsoft COCO dataset, and experimental results demonstrate competitive performance against the current state of the art.} }
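The following is a minimal sketch of the general idea of attending from visual object features to external semantic embeddings inside a transformer block, using PyTorch's stock multi-head attention; the dimensions and the residual fusion are illustrative assumptions rather than the paper's exact architecture.

```python
# Sketch: visual object features query a bank of semantic (knowledge/word)
# embeddings, so each object feature is augmented with semantically associated
# context before caption decoding. Shapes are invented for illustration.
import torch
import torch.nn as nn

d_model = 512
attn = nn.MultiheadAttention(embed_dim=d_model, num_heads=8, batch_first=True)

visual = torch.randn(2, 36, d_model)    # 36 detected-object features per image
semantic = torch.randn(2, 20, d_model)  # 20 external semantic embeddings

# Visual features are the queries; the semantic memory supplies keys/values.
fused, weights = attn(query=visual, key=semantic, value=semantic)
out = visual + fused                    # residual connection, as in transformers
print(out.shape)                        # torch.Size([2, 36, 512])
```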
- Z. Zhu, G. Das, and M. Hanheide, “Autonomous topological optimisation for multi-robot systems in logistics,” in The 38th ACM/SIGAPP symposium on applied computing, 2023.
[BibTeX] [Abstract] [Download PDF]
Multi-robot systems (MRS) are currently being introduced in many in-field logistics operations in large environments such as warehouses and commercial soft-fruit production. Collision avoidance is a critical problem in MRS as it may introduce deadlocks during motion planning. In this work, a discretised topological map representation is used for low-cost route planning of individual robots as well as to easily switch navigation actions depending on the constraints in the environment. However, this topological map can also have bottlenecks, which lead to deadlocks and low transportation efficiency when used for an MRS. In this paper, we propose a resource-container-based Request-Release-Interrupt (RRI) algorithm that constrains each topological node to a capacity of one entity and therefore helps to avoid collisions and detect deadlocks. Furthermore, we integrate a Genetic Algorithm (GA) with Discrete Event Simulation (DES) for optimising the topological map to reduce deadlocks and improve transportation efficiency in logistics tasks. Performance analysis of the proposed algorithms is conducted after running a set of simulations with multiple robots and different maps. The results validate the effectiveness of our algorithms.
@inproceedings{lincoln53246, booktitle = {The 38th ACM/SIGAPP Symposium On Applied Computing}, month = {March}, title = {Autonomous Topological Optimisation for Multi-robot Systems in Logistics}, author = {Zuyuan Zhu and Gautham Das and Marc Hanheide}, year = {2023}, url = {https://eprints.lincoln.ac.uk/id/eprint/53246/}, abstract = {Multi-robot systems (MRS) are currently being introduced in many in-field logistics operations in large environments such as warehouses and commercial soft-fruit production. Collision avoidance is a critical problem in MRS as it may introduce deadlocks during the motion planning. In this work, a discretised topological map representation is used for low-cost route planning of individual robots as well as to easily switch the navigation actions depending on the constraints in the environment. However, this topological map could also have bottlenecks which lead to deadlocks and low transportation efficiency when used for an MRS. In this paper, we propose a resource container based Request-Release-Interrupt (RRI) algorithm that constrains each topological node with a capacity of one entity and therefore helps to avoid collisions and detect deadlocks. Furthermore, we integrate a Genetic Algorithm (GA) with Discrete Event Simulation (DES) for optimising the topological map to reduce deadlocks and improve transportation efficiency in logistics tasks. Performance analysis of the proposed algorithms is conducted after running a set of simulations with multiple robots and different maps. The results validate the effectiveness of our algorithms.} }
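A toy reduction of the capacity-one node idea behind Request-Release-Interrupt helps make the mechanism concrete: a robot must successfully request its next node before moving, and releases the node it leaves. This is an illustrative sketch, not the authors' full RRI algorithm.

```python
# Toy capacity-one topological nodes with request/release bookkeeping.
# A refused request is how contention (and potential deadlock) shows up.
class TopologicalNode:
    def __init__(self, name):
        self.name = name
        self.holder = None          # at most one robot may occupy a node

    def request(self, robot):
        if self.holder is None:
            self.holder = robot
            return True
        return False                # occupied: the caller must wait or replan

    def release(self, robot):
        if self.holder == robot:
            self.holder = None

def try_step(robot, current, target):
    """Move robot from current to target only if the target node is free."""
    if target.request(robot):
        current.release(robot)
        return True
    return False

a, b = TopologicalNode("A"), TopologicalNode("B")
a.request("r1"); b.request("r2")
print(try_step("r1", a, b))   # False: B is held by r2, so r1 must wait
```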
- M. Constantinou, R. Polvara, and E. Makridis, “The technologisation of thematic analysis: a case study into automatising qualitative research,” in 17th international technology, education and development conference, 2023, p. 1092–1098. doi:10.21125/inted.2023.0323
[BibTeX] [Abstract] [Download PDF]
Thematic analysis is the most commonly used form of qualitative analysis, used extensively in educational sciences. While the process is straightforward in the sense that a hermeneutic analysis is conducted so as to detect patterns and assign themes emerging from the data acquired, replicability can be challenging. As a result, there is significant debate about what constitutes reliability and rigour in relation to qualitative coding. Traditional thematic analysis in educational sciences requires the development of a codebook and the recruitment of a research team for intercoder reviewing and code testing. Such a process is often lengthy and infeasible when the number of texts to be analysed increases exponentially. To overcome these limitations, in this work, we use an unsupervised text analysis technique called Latent Dirichlet Allocation (LDA) to identify distinct abstract topics which are then clustered into potential themes. Our results show that thematic analysis in the field of educational sciences using the LDA text analysis technique has prospects of demonstrating rigour and higher thematic coding reliability and validity while offering valid intra-coder complementary support to the researcher.
@inproceedings{lincoln54118, month = {March}, author = {Marina Constantinou and Riccardo Polvara and Evagoras Makridis}, booktitle = {17th International Technology, Education and Development Conference}, title = {The technologisation of thematic analysis: a case study into automatising qualitative research}, publisher = {IATED}, year = {2023}, doi = {10.21125/inted.2023.0323}, pages = {1092--1098}, url = {https://eprints.lincoln.ac.uk/id/eprint/54118/}, abstract = {Thematic analysis is the most commonly used form of qualitative analysis used extensively in educational sciences. While the process is straightforward in the sense that a hermeneutic analysis is conducted so as to detect patterns and assign themes emerging from the data acquired, replicability can be challenging. As a result, there is significant debate about what constitutes reliability and rigour in relation to qualitative coding. Traditional thematic analysis in educational sciences requires the development of a codebook and the recruitment of a research team for intercoder reviewing and code testing. Such a process is often lengthy and infeasible when the number of texts to be analysed increases exponentially. To overcome these limitations, in this work, we use an unsupervised text analysis technique called Latent Dirichlet Allocation (LDA) to identify distinct abstract topics which are then clustered into potential themes. Our results show that thematic analysis in the field of educational sciences using the LDA text analysis technique has prospects of demonstrating rigour and higher thematic coding reliability and validity while offering a valid intra-coder complementary support to the researcher.} }
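A minimal, self-contained illustration of the LDA step described above, using scikit-learn on a toy corpus; the real study would run this over far larger collections of transcripts before clustering topics into candidate themes.

```python
# Fit LDA on a tiny corpus and print the top terms per topic, the raw
# material an analyst would then cluster into themes.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "students found the online lectures engaging and interactive",
    "assessment feedback arrived late and frustrated the students",
    "interactive online quizzes improved engagement in lectures",
    "late feedback on assessment harmed student motivation",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-4:][::-1]]
    print(f"topic {k}: {top}")   # candidate themes, e.g. engagement vs. feedback
```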
- K. Seemakurthy, C. Fox, E. Aptoula, and P. Bosilj, “Domain generalised Faster R-CNN,” in The 37th AAAI conference on artificial intelligence, 2023.
[BibTeX] [Abstract] [Download PDF]
Domain generalisation (i.e. out-of-distribution generalisation) is an open problem in machine learning, where the goal is to train a model via one or more source domains that will generalise well to unknown target domains. While the topic is attracting increasing interest, it has not been studied in detail in the context of object detection. The established approaches all operate under the covariate shift assumption, where the conditional distributions are assumed to be approximately equal across source domains. This is the first paper to address domain generalisation in the context of object detection, with a rigorous mathematical analysis of domain shift, without the covariate shift assumption. We focus on improving the generalisation ability of object detection by proposing new regularisation terms to address the domain shift that arises due to both classification and bounding box regression. Also, we include an additional consistency regularisation term to align the local and global level predictions. The proposed approach is implemented as a Domain Generalised Faster R-CNN and evaluated using four object detection datasets which provide domain metadata (GWHD, Cityscapes, BDD100K, Sim10K), where it exhibits a consistent performance improvement over the baselines. All the code for replicating the results in this paper can be found at https://github.com/karthikiitm87/domain-generalisation.git
@inproceedings{lincoln53771, booktitle = {The 37th AAAI conference on Artificial Intelligence}, month = {March}, title = {Domain Generalised Faster R-CNN}, author = {Karthik Seemakurthy and Charles Fox and Erchan Aptoula and Petra Bosilj}, publisher = {Association for Advancement of Artificial Intelligence}, year = {2023}, url = {https://eprints.lincoln.ac.uk/id/eprint/53771/}, abstract = {Domain generalisation (i.e. out-of-distribution generalisation) is an open problem in machine learning, where the goal is to train a model via one or more source domains, that will generalise well to unknown target domains. While the topic is attracting increasing interest, it has not been studied in detail in the context of object detection. The established approaches all operate under the covariate shift assumption, where the conditional distributions are assumed to be approximately equal across source domains. This is the first paper to address domain generalisation in the context of object detection, with a rigorous mathematical analysis of domain shift, without the covariate shift assumption. We focus on improving the generalisation ability of object detection by proposing new regularisation terms to address the domain shift that arises due to both classification and bounding box regression. Also, we include an additional consistency regularisation term to align the local and global level predictions. The proposed approach is implemented as a Domain Generalised Faster R-CNN and evaluated using four object detection datasets which provide domain metadata (GWHD, Cityscapes, BDD100K, Sim10K) where it exhibits a consistent performance improvement over the baselines. All the codes for replicating the results in this paper can be found at https://github.com/karthikiitm87/domain-generalisation.git} }
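As a hedged sketch of what a cross-domain regularisation term can look like, the snippet below penalises the spread of per-domain detection losses so that no single source domain dominates training; this is a generic stand-in, not a reproduction of the paper's specific classification and bounding-box regularisers.

```python
# Generic cross-domain regulariser: the variance of per-source-domain losses
# is zero when all domains are fit equally well, and grows when one domain
# dominates. Loss values here are placeholders.
import torch

def domain_variance_penalty(per_domain_losses):
    """Variance of the per-source-domain losses (zero when domains agree)."""
    losses = torch.stack(per_domain_losses)
    return losses.var(unbiased=False)

# e.g. detection losses computed separately on three source domains
losses = [torch.tensor(0.82), torch.tensor(1.10), torch.tensor(0.95)]
total = torch.stack(losses).mean() + 0.5 * domain_variance_penalty(losses)
print(total)
```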
- L. Castri, S. Mghames, M. Hanheide, and N. Bellotto, “Enhancing causal discovery from robot sensor data in dynamic scenarios,” in Conference on causal learning and reasoning (CLeaR), 2023.
[BibTeX] [Abstract] [Download PDF]
Identifying the main features and learning the causal relationships of a dynamic system from time-series of sensor data are key problems in many real-world robot applications. In this paper, we propose an extension of a state-of-the-art causal discovery method, PCMCI, embedding an additional feature-selection module based on transfer entropy. Starting from a prefixed set of variables, the new algorithm reconstructs the causal model of the observed system by considering only its main features and neglecting those deemed unnecessary for understanding the evolution of the system. We first validate the method on a toy problem, for which the ground-truth model is available, and then on a real-world robotics scenario using a large-scale time-series dataset of human trajectories. The experiments demonstrate that our solution outperforms the previous state-of-the-art technique in terms of accuracy and computational efficiency, allowing better and faster causal discovery of meaningful models from robot sensor data.
@inproceedings{lincoln53113, booktitle = {Conference on Causal Learning and Reasoning (CLeaR)}, month = {April}, title = {Enhancing Causal Discovery from Robot Sensor Data in Dynamic Scenarios}, author = {Luca Castri and Sariah Mghames and Marc Hanheide and Nicola Bellotto}, year = {2023}, url = {https://eprints.lincoln.ac.uk/id/eprint/53113/}, abstract = {Identifying the main features and learning the causal relationships of a dynamic system from time-series of sensor data are key problems in many real-world robot applications. In this paper, we propose an extension of a state-of-the-art causal discovery method, PCMCI, embedding an additional feature-selection module based on transfer entropy. Starting from a prefixed set of variables, the new algorithm reconstructs the causal model of the observed system by considering only its main features and neglecting those deemed unnecessary for understanding the evolution of the system. We first validate the method on a toy problem, for which the ground-truth model is available, and then on a real-world robotics scenario using a large-scale time-series dataset of human trajectories. The experiments demonstrate that our solution outperforms the previous state-of-the-art technique in terms of accuracy and computational efficiency, allowing better and faster causal discovery of meaningful models from robot sensor data.} }
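To make the feature-selection idea concrete, here is a self-contained, histogram-based estimate of transfer entropy TE(X → Y), the kind of quantity such a module can threshold to discard variables that carry no information about a target's evolution; the binning and the synthetic example are illustrative, and the paper's estimator may differ.

```python
# Histogram-based transfer entropy TE(X -> Y) = I(y_{t+1}; x_t | y_t).
import numpy as np

def transfer_entropy(x, y, bins=8):
    """Estimate TE(X->Y) by discretising (y_{t+1}, y_t, x_t) into bins."""
    xt, yt, yt1 = x[:-1], y[:-1], y[1:]
    joint, _ = np.histogramdd(np.column_stack([yt1, yt, xt]), bins=bins)
    p = joint / joint.sum()
    p_yy = p.sum(axis=2)                  # p(y_{t+1}, y_t)
    p_yx = p.sum(axis=0)                  # p(y_t, x_t)
    p_y = p.sum(axis=(0, 2))              # p(y_t)
    te = 0.0
    for i, j, k in np.argwhere(p > 0):
        num = p[i, j, k] * p_y[j]
        den = p_yy[i, j] * p_yx[j, k]
        te += p[i, j, k] * np.log(num / den)
    return te

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = np.roll(x, 1) + 0.1 * rng.normal(size=5000)   # y is driven by past x
print(transfer_entropy(x, y) > transfer_entropy(y, x))   # True: X drives Y
```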
- Z. Yan, L. Sun, T. Krajnik, T. Duckett, and N. Bellotto, “Towards long-term autonomy: a perspective from robot learning,” in AAAI bridge program “AI and Robotics”, 2023.
[BibTeX] [Abstract] [Download PDF]
In the future, service robots are expected to be able to operate autonomously for long periods of time without human intervention. Many works striving for this goal have emerged with the development of robotics, in both hardware and software. Today we believe that an important underpinning of long-term robot autonomy is the ability of robots to learn on site and on the fly, especially when they are deployed in changing environments or need to traverse different environments. In this paper, we examine the problem of long-term autonomy from the perspective of robot learning, especially in an online way, and discuss in tandem its premise “data” and the subsequent “deployment”.
@inproceedings{lincoln53115, booktitle = {AAAI Bridge Program ``AI and Robotics''}, month = {February}, title = {Towards Long-term Autonomy: A Perspective from Robot Learning}, author = {Zhi Yan and Li Sun and Tomas Krajnik and Tom Duckett and Nicola Bellotto}, year = {2023}, url = {https://eprints.lincoln.ac.uk/id/eprint/53115/}, abstract = {In the future, service robots are expected to be able to operate autonomously for long periods of time without human intervention. Many works striving for this goal have been emerging with the development of robotics, both hardware and software. Today we believe that an important underpinning of long-term robot autonomy is the ability of robots to learn on site and on-the-fly, especially when they are deployed in changing environments or need to traverse different environments. In this paper, we examine the problem of long-term autonomy from the perspective of robot learning, especially in an online way, and discuss in tandem its premise "data" and the subsequent "deployment".} }
- F. Atas, G. Cielniak, and L. Grimstad, “Benchmark of sampling-based optimizing planners for outdoor robot navigation,” in 17th international conference on intelligent autonomous systems, 2023, p. 231–243. doi:10.1007/978-3-031-22216-0_16
[BibTeX] [Abstract] [Download PDF]
This paper evaluates Sampling-Based Optimizing (SBO) planners from the Open Motion Planning Library (OMPL) in the context of mobile robot navigation in outdoor environments. Many SBO planners have been proposed, and determining performance differences among these planners for path planning problems can be time-consuming and ambiguous. The probabilistic nature of SBO planners can also complicate this procedure, as different results for the same planning problem can be obtained even in consecutive queries from the same planner. We compare all available SBO planners in OMPL with an automated planning problem generation method designed specifically for outdoor robot navigation scenarios. Several evaluation metrics are chosen, such as the length, smoothness, and success rate of the resulting path, and probability distributions for metrics are presented. With the experimental results obtained, clear recommendations on high-performing planners for mobile robot path planning problems are made, which will be useful to researchers and practitioners in mobile robot planning and navigation.
@inproceedings{lincoln50521, month = {January}, author = {Fetullah Atas and Grzegorz Cielniak and Lars Grimstad}, note = {ISBN: 978-3-031-22216-0}, booktitle = {17th International Conference on Intelligent Autonomous Systems}, title = {Benchmark of Sampling-Based Optimizing Planners for Outdoor Robot Navigation}, publisher = {Springer}, year = {2023}, doi = {10.1007/978-3-031-22216-0\_16}, pages = {231--243}, url = {https://eprints.lincoln.ac.uk/id/eprint/50521/}, abstract = {This paper evaluates Sampling-Based Optimizing (SBO) planners from the Open Motion Planning Library (OMPL) in the context of mobile robot navigation in outdoor environments. Many SBO planners have been proposed, and determining performance differences among these planners for path planning problems can be time-consuming and ambiguous. The probabilistic nature of SBO planners can also complicate this procedure, as different results for the same planning problem can be obtained even in consecutive queries from the same planner. We compare all available SBO planners in OMPL with an automated planning problem generation method designed specifically for outdoor robot navigation scenarios. Several evaluation metrics are chosen, such as the length, smoothness, and success rate of the resulting path, and probability distributions for metrics are presented. With the experimental results obtained, clear recommendations on high-performing planners for mobile robot path planning problems are made, which will be useful to researchers and practitioners in mobile robot planning and navigation.} }
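Two of the evaluation metrics mentioned above, path length and smoothness, are easy to state in code for a 2D waypoint path; the smoothness definition below (mean absolute turning angle) is one common choice and may differ from the benchmark's exact formula.

```python
# Path length and a simple smoothness measure for a 2D path of waypoints.
import numpy as np

def path_length(path):
    """Sum of Euclidean segment lengths."""
    diffs = np.diff(path, axis=0)
    return float(np.sum(np.linalg.norm(diffs, axis=1)))

def path_smoothness(path):
    """Mean absolute turning angle between consecutive segments (radians)."""
    d = np.diff(path, axis=0)
    headings = np.arctan2(d[:, 1], d[:, 0])
    turns = np.diff(headings)
    turns = (turns + np.pi) % (2 * np.pi) - np.pi   # wrap to [-pi, pi)
    return float(np.mean(np.abs(turns)))

path = np.array([[0.0, 0.0], [1.0, 0.2], [2.0, 0.0], [3.0, 0.4]])
print(path_length(path), path_smoothness(path))
```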
- L. Castri, S. Mghames, and N. Bellotto, “From continual learning to causal discovery in robotics,” in AAAI bridge program “Continual Causality”, 2023.
[BibTeX] [Abstract] [Download PDF]
Reconstructing accurate causal models of dynamic systems from time-series of sensor data is a key problem in many real-world scenarios. In this paper, we present an overview, based on our experience, of the practical challenges that causal analysis encounters when applied to autonomous robots, and how Continual Learning (CL) could help to overcome them. We propose a possible way to leverage the CL paradigm to make causal discovery feasible for robotics applications where computational resources are limited, while at the same time exploiting the robot as an active agent that helps to increase the quality of the reconstructed causal models.
@inproceedings{lincoln53116, booktitle = {AAAI Bridge Program ``Continual Causality''}, month = {January}, title = {From Continual Learning to Causal Discovery in Robotics}, author = {Luca Castri and Sariah Mghames and Nicola Bellotto}, year = {2023}, url = {https://eprints.lincoln.ac.uk/id/eprint/53116/}, abstract = {Reconstructing accurate causal models of dynamic systems from time-series of sensor data is a key problem in many real-world scenarios. In this paper, we present an overview based on our experience about practical challenges that the causal analysis encounters when applied to autonomous robots and how Continual Learning (CL) could help to overcome them. We propose a possible way to leverage the CL paradigm to make causal discovery feasible for robotics applications where the computational resources are limited, while at the same time exploiting the robot as an active agent that helps to increase the quality of the reconstructed causal models.} }
- K. Seemakurthy, P. Bosilj, E. Aptoula, and C. Fox, “Domain generalised fully convolutional one stage detection,” in International conference on robotics and automation (ICRA), 2023.
[BibTeX] [Abstract] [Download PDF]
Real-time vision in robotics plays an important role in localising and recognising objects. Recently, deep learning approaches have been widely used in robotic vision. However, most of these approaches assume that training and test sets come from similar data distributions, which is not valid in many real-world applications. This study proposes an approach to address domain generalisation (i.e. out-of-distribution generalisation, OODG), where the goal is to train a model via one or more source domains that will generalise well to unknown target domains, using single-stage detectors. All existing approaches which deal with OODG either use slow two-stage detectors or operate under the covariate shift assumption, which may not be useful for real-time robotics. This is the first paper to address domain generalisation in the context of the single-stage anchor-free object detector FCOS without the covariate shift assumption. We focus on improving the generalisation ability of object detection by proposing new regularisation terms to address the domain shift that arises due to both classification and bounding box regression. Also, we include an additional consistency regularisation term to align the local and global level predictions. The proposed approach is implemented as a Domain Generalised Fully Convolutional One Stage (DGFCOS) detector and evaluated using four object detection datasets which provide domain metadata (GWHD, Cityscapes, BDD100K, Sim10K), where it exhibits a consistent performance improvement over the baselines and is able to run in real-time for robotics.
@inproceedings{lincoln53780, booktitle = {International Conference on Robotics and Automation (ICRA)}, month = {April}, title = {Domain Generalised Fully Convolutional One Stage Detection}, author = {Karthik Seemakurthy and Petra Bosilj and Erchan Aptoula and Charles Fox}, publisher = {IEEE}, year = {2023}, url = {https://eprints.lincoln.ac.uk/id/eprint/53780/}, abstract = {Real-time vision in robotics plays an important role in localising and recognising objects. Recently, deep learning approaches have been widely used in robotic vision. However, most of these approaches have assumed that training and test sets come from similar data distributions, which is not valid in many real world applications. This study proposes an approach to address domain generalisation (i.e. out-of-distribution generalisation, OODG) where the goal is to train a model via one or more source domains, that will generalise well to unknown target domains using single stage detectors. All existing approaches which deal with OODG either use slow two stage detectors or operate under the covariate shift assumption which may not be useful for real-time robotics. This is the first paper to address domain generalisation in the context of single stage anchor free object detector FCOS without the covariate shift assumption. We focus on improving the generalisation ability of object detection by proposing new regularisation terms to address the domain shift that arises due to both classification and bounding box regression. Also, we include an additional consistency regularisation term to align the local and global level predictions. The proposed approach is implemented as a Domain Generalised Fully Convolutional One Stage (DGFCOS) detection and evaluated using four object detection datasets which provide domain metadata (GWHD, Cityscapes, BDD100K, Sim10K) where it exhibits a consistent performance improvement over the baselines and is able to run in real-time for robotics.} }
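One plausible reading of the local/global consistency term is sketched below: the image-level class distribution is encouraged to agree with the average of the per-location class distributions via a KL divergence. Tensor shapes and the exact divergence are assumptions, not the paper's implementation.

```python
# Consistency between per-location (local) and image-level (global) class
# predictions, as a KL divergence. Shapes are invented for illustration.
import torch
import torch.nn.functional as F

def local_global_consistency(local_logits, global_logits):
    """KL(mean-of-local || global) over C classes.

    local_logits:  (N, C, H, W) per-location class scores
    global_logits: (N, C) image-level class scores
    """
    local_prob = F.softmax(local_logits, dim=1).mean(dim=(2, 3))  # (N, C)
    global_logp = F.log_softmax(global_logits, dim=1)             # (N, C)
    # F.kl_div(input=log-probs, target=probs) computes KL(target || input)
    return F.kl_div(global_logp, local_prob, reduction="batchmean")

loss = local_global_consistency(torch.randn(4, 9, 32, 32), torch.randn(4, 9))
print(loss)
```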
- S. Mghames, L. Castri, M. Hanheide, and N. Bellotto, “A neuro-symbolic approach for enhanced human motion prediction,” in International joint conference on neural networks (IJCNN), 2023.
[BibTeX] [Abstract] [Download PDF]
Reasoning on the context of human beings is crucial for many real-world applications, especially those deploying autonomous systems (e.g. robots). In this paper, we present a new approach for context reasoning to further advance the field of human motion prediction. We therefore propose a neuro-symbolic approach for human motion prediction (NeuroSyM), which weights the interactions in the neighbourhood differently by leveraging an intuitive technique for spatial representation called Qualitative Trajectory Calculus (QTC). The proposed approach is experimentally tested on medium and long term time horizons using two architectures from the state of the art, one of which is a baseline for human motion prediction and the other a baseline for generic multivariate time-series prediction. Six datasets of challenging crowded scenarios, collected from both fixed and mobile cameras, were used for testing. Experimental results show that the NeuroSyM approach outperforms the baseline architectures in most cases in terms of prediction accuracy.
@inproceedings{lincoln54335, booktitle = {International Joint Conference on Neural Networks (IJCNN)}, title = {A Neuro-Symbolic Approach for Enhanced Human Motion Prediction}, author = {Sariah Mghames and Luca Castri and Marc Hanheide and Nicola Bellotto}, publisher = {IEEE}, year = {2023}, url = {https://eprints.lincoln.ac.uk/id/eprint/54335/}, abstract = {Reasoning on the context of human beings is crucial for many real-world applications especially for those deploying autonomous systems (e.g. robots). In this paper, we present a new approach for context reasoning to further advance the field of human motion prediction. We therefore propose a neuro-symbolic approach for human motion prediction (NeuroSyM), which weights differently the interactions in the neighbourhood by leveraging an intuitive technique for spatial representation called Qualitative Trajectory Calculus (QTC). The proposed approach is experimentally tested on medium and long term time horizons using two architectures from the state of the art, one of which is a baseline for human motion prediction and the other is a baseline for generic multivariate time-series prediction. Six datasets of challenging crowded scenarios, collected from both fixed and mobile cameras, were used for testing. Experimental results show that the NeuroSyM approach outperforms in most cases the baseline architectures in terms of prediction accuracy.} }
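To show what a QTC encoding looks like, the snippet below computes the first two symbols of QTC Basic for two agents k and l: whether each is moving towards (-), away from (+), or staying at the same distance (0) relative to the other. It illustrates the representation NeuroSyM builds on, not the authors' full pipeline.

```python
# Minimal QTC (Basic) distance symbols for a pair of agents.
import numpy as np

def qtc_symbol(p_prev, p_next, other_prev, eps=1e-3):
    """-1 if moving towards the other agent, +1 if away, 0 if stable."""
    d_prev = np.linalg.norm(p_prev - other_prev)
    d_next = np.linalg.norm(p_next - other_prev)
    if d_next < d_prev - eps:
        return -1
    if d_next > d_prev + eps:
        return +1
    return 0

k0, k1 = np.array([0.0, 0.0]), np.array([0.5, 0.0])   # k steps towards l
l0, l1 = np.array([2.0, 0.0]), np.array([2.5, 0.0])   # l steps away from k
print(qtc_symbol(k0, k1, l0), qtc_symbol(l0, l1, k0))  # (-1, +1)
```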
- B. Moncur, M. J. Galvez Trigo, and L. Mortara, “Augmented reality to reduce cognitive load in operational decision-making,” in HCI International 2023, 2023.
[BibTeX] [Abstract] [Download PDF]
Augmented reality (AR) technologies can overlay digital information onto the real world. This makes them well suited for decision support by providing contextually-relevant information to decision-makers. However, processing large amounts of information simultaneously, particularly in time-pressured conditions, can result in poor decision-making due to excess cognitive load. This paper presents the results of an exploratory study investigating the effects of AR on cognitive load. A within-subjects experiment was conducted where participants were asked to complete a variable-sized bin packing task with and without the assistance of an augmented reality decision support system (AR DSS). Semi-structured interviews were conducted to elicit perceptions about the ease of the task with and without the AR DSS. This was supplemented by collecting quantitative data to investigate if any changes in perceived ease of the task translated into changes in task performance. The qualitative data suggests that the presence of the AR DSS made the task feel easier to participants; however, there was only a statistically insignificant increase in mean task performance. Analysing the data at the individual level does not provide evidence of a translation of increased perceived ease to increased task performance.
@inproceedings{lincoln53555, booktitle = {HCI International 2023}, title = {Augmented Reality to Reduce Cognitive Load in Operational Decision-Making}, author = {Bethan Moncur and Maria J. Galvez Trigo and Letizia Mortara}, year = {2023}, url = {https://eprints.lincoln.ac.uk/id/eprint/53555/}, abstract = {Augmented reality (AR) technologies can overlay digital information onto the real world. This makes them well suited for decision support by providing contextually-relevant information to decision-makers. However, processing large amounts of information simultaneously, particularly in time-pressured conditions, can result in poor decision-making due to excess cognitive load. This paper presents the results of an exploratory study investigating the effects of AR on cognitive load. A within-subjects experiment was conducted where participants were asked to complete a variable-sized bin packing task with and without the assistance of an augmented reality decision support system (AR DSS). Semi-structured interviews were conducted to elicit perceptions about the ease of the task with and without the AR DSS. This was supplemented by collecting quantitative data to investigate if any changes in perceived ease of the task translated into changes in task performance. The qualitative data suggests that the presence of the AR DSS made the task feel easier to participants; however, there was only a statistically insignificant increase in mean task performance. Analysing the data at the individual level does not provide evidence of a translation of increased perceived ease to increased task performance.} }
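To make the experimental task concrete, here is a toy version of variable-sized bin packing solved with a first-fit-decreasing heuristic; in the study, participants made such assignments themselves, with or without AR assistance.

```python
# First-fit-decreasing assignment of items to bins of differing capacities.
def first_fit(items, capacities):
    bins = [{"capacity": c, "items": []} for c in capacities]
    unplaced = []
    for item in sorted(items, reverse=True):       # largest items first
        for b in bins:
            if sum(b["items"]) + item <= b["capacity"]:
                b["items"].append(item)
                break
        else:
            unplaced.append(item)                  # no bin can take it
    return bins, unplaced

bins, leftover = first_fit([7, 5, 4, 3, 2], capacities=[10, 8, 4])
for b in bins:
    print(b)        # e.g. {'capacity': 10, 'items': [7, 3]} ...
print(leftover)     # [] for this instance
```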
- F. Pasti and N. Bellotto, “Evaluation of computer vision-based person detection on low-cost embedded systems,” in 18th international conference on computer vision theory and applications (VISAPP), 2023.
[BibTeX] [Abstract] [Download PDF]
Person detection applications based on computer vision techniques often rely on complex Convolutional Neural Networks that require powerful hardware in order to achieve good runtime performance. The work of this paper has been developed with the aim of implementing a safety system, based on computer vision algorithms, able to detect people in working environments using an embedded device. Possible applications for such safety systems include remote site monitoring and autonomous mobile robots in warehouses and industrial premises. Similar studies already exist in the literature, but they mostly rely on systems like the NVIDIA Jetson that, with a CUDA-enabled GPU, are able to provide satisfactory results. This, however, comes with a significant downside, as such devices are usually expensive and require significant power consumption. The current paper instead considers various implementations of computer vision-based person detection on two power-efficient and inexpensive devices, namely the Raspberry Pi 3 and 4. In order to do so, some solutions based on off-the-shelf algorithms are first explored by reporting experimental results based on relevant performance metrics. Then, the paper presents a newly-created custom architecture, called eYOLO, that tries to solve some limitations of the previous systems. The experimental evaluation demonstrates the good performance of the proposed approach and suggests ways for further improvement.
@inproceedings{lincoln53114, booktitle = {18th International Conference on Computer Vision Theory and Applications (VISAPP)}, month = {February}, title = {Evaluation of Computer Vision-Based Person Detection on Low-Cost Embedded Systems}, author = {Francesco Pasti and Nicola Bellotto}, year = {2023}, url = {https://eprints.lincoln.ac.uk/id/eprint/53114/}, abstract = {Person detection applications based on computer vision techniques often rely on complex Convolutional Neural Networks that require powerful hardware in order to achieve good runtime performance. The work of this paper has been developed with the aim of implementing a safety system, based on computer vision algorithms, able to detect people in working environments using an embedded device. Possible applications for such safety systems include remote site monitoring and autonomous mobile robots in warehouses and industrial premises. Similar studies already exist in the literature, but they mostly rely on systems like NVidia Jetson that, with a Cuda enabled GPU, are able to provide satisfactory results. This, however, comes with a significant downside as such devices are usually expensive and require significant power consumption. The current paper instead is going to consider various implementations of computer vision-based person detection on two power-efficient and inexpensive devices, namely Raspberry Pi 3 and 4. In order to do so, some solutions based on off-the-shelf algorithms are first explored by reporting experimental results based on relevant performance metrics. Then, the paper presents a newly-created custom architecture, called eYOLO, that tries to solve some limitations of the previous systems. The experimental evaluation demonstrates the good performance of the proposed approach and suggests ways for further improvement.} }
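An example of the off-the-shelf baselines the paper explores first is OpenCV's classic HOG person detector, which runs on a Raspberry Pi without a GPU; the input and output file names below are placeholders, and eYOLO itself is not reproduced here.

```python
# Off-the-shelf HOG + linear-SVM person detector from OpenCV, a common
# CPU-only baseline on embedded boards like the Raspberry Pi.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("frame.jpg")                   # placeholder test image
assert frame is not None, "supply a test image as frame.jpg"
frame = cv2.resize(frame, (640, 480))             # smaller input -> faster on a Pi
boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)

for (x, y, w, h), score in zip(boxes, weights):
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detections.jpg", frame)
print(f"{len(boxes)} person(s) detected")
```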
2022
- C. Qi, M. Sandroni, J. C. Westergaard, E. H. R. Sundmark, M. Bagge, E. Alexandersson, and J. Gao, “In-field classification of the asymptomatic biotrophic phase of potato late blight based on deep learning and proximal hyperspectral imaging,” Computers and electronics in agriculture, vol. 205, 2022. doi:10.1016/j.compag.2022.107585
[BibTeX] [Abstract] [Download PDF]
Effective detection of potato late blight (PLB) is an essential aspect of potato cultivation. However, it is a challenge to detect late blight in its asymptomatic biotrophic phase in the field with conventional imaging approaches because of the lack of visual symptoms in the canopy. Hyperspectral imaging can capture spectral signals from a wide range of wavelengths, including those outside the visible range. Here, we propose a deep learning classification architecture for hyperspectral images by combining a 2D convolutional neural network (2D-CNN) and a 3D-CNN with deep cooperative attention networks (PLB-2D-3D-A). First, the 2D-CNN and 3D-CNN are used to extract rich spectral-spatial features, and then the attention mechanisms AttentionBlock and SE-ResNet are used to emphasize the salient features in the feature maps and increase the generalization ability of the model. The dataset comprises 15,360 images (64x64x204), cropped from 240 raw images captured in an experimental field with over 20 potato genotypes. The accuracy on the test dataset of 2000 images reached 0.739 in the full band and 0.790 in the specific bands (492 nm, 519 nm, 560 nm, 592 nm, 717 nm and 765 nm). This study shows an encouraging result for classification of the asymptomatic biotrophic phase of PLB disease with deep learning and proximal hyperspectral imaging.
@article{lincoln52940, volume = {205}, month = {December}, author = {Chao Qi and Murilo Sandroni and Jesper Cairo Westergaard and Ea H{\o}egh Riis Sundmark and Merethe Bagge and Erik Alexandersson and Junfeng Gao}, title = {In-field classification of the asymptomatic biotrophic phase of potato late blight based on deep learning and proximal hyperspectral imaging}, publisher = {Elsevier}, journal = {Computers and Electronics in Agriculture}, doi = {10.1016/j.compag.2022.107585}, year = {2022}, url = {https://eprints.lincoln.ac.uk/id/eprint/52940/}, abstract = {Effective detection of potato late blight (PLB) is an essential aspect of potato cultivation. However, it is a challenge to detect late blight in asymptomatic biotrophic phase in fields with conventional imaging approaches because of the lack of visual symptoms in the canopy. Hyperspectral imaging can capture spectral signals from a wide range of wavelengths also outside the visual wavelengths. Here, we propose a deep learning classification architecture for hyperspectral images by combining 2D convolutional neural network (2D-CNN) and 3D-CNN with deep cooperative attention networks (PLB-2D-3D-A). First, 2D-CNN and 3D-CNN are used to extract rich spectral space features, and then the attention mechanism AttentionBlock and SE-ResNet are used to emphasize the salient features in the feature maps and increase the generalization ability of the model. The dataset is built with 15,360 images (64x64x204), cropped from 240 raw images captured in an experimental field with over 20 potato genotypes. The accuracy in the test dataset of 2000 images reached 0.739 in the full band and 0.790 in the specific bands (492 nm, 519 nm, 560 nm, 592 nm, 717 nm and 765 nm). This study shows an encouraging result for classification of the asymptomatic biotrophic phase of PLB disease with deep learning and proximal hyperspectral imaging.} }
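A skeleton of a small 3D-CNN over hyperspectral patches shaped like the dataset above (64 x 64 pixels by 204 bands) is given below; the layer sizes are illustrative, and the paper's attention components (AttentionBlock, SE-ResNet) are omitted for brevity.

```python
# Minimal 3D-CNN that convolves jointly over spectral and spatial dimensions
# of hyperspectral patches. Layer sizes are placeholders, not the paper's.
import torch
import torch.nn as nn

class HyperspectralCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            # input: (N, 1, bands=204, H=64, W=64)
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3), stride=(2, 1, 1)), nn.ReLU(),
            nn.Conv3d(8, 16, kernel_size=(5, 3, 3), stride=(2, 1, 1)), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, x):
        z = self.features(x).flatten(1)
        return self.classifier(z)

model = HyperspectralCNN()
patch = torch.randn(4, 1, 204, 64, 64)   # a batch of 4 hyperspectral patches
print(model(patch).shape)                # torch.Size([4, 2])
```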
- M. Badaoui, P. Buigues, D. Berta, G. Mandana, H. Gu, T. Földes, C. Dickson, V. Hornak, M. Kato, C. Molteni, S. Parsons, and E. Rosta, “Combined free-energy calculation and machine learning methods for understanding ligand unbinding kinetics,” Journal of chemical theory and computation, vol. 18, iss. 4, p. 2543–2555, 2022. doi:10.1021/acs.jctc.1c00924
[BibTeX] [Abstract] [Download PDF]
The determination of drug residence times, which define the time an inhibitor is in complex with its target, is a fundamental part of the drug discovery process. Synthesis and experimental measurements of kinetic rate constants are, however, expensive and time-consuming. In this work, we aimed to obtain drug residence times computationally. Furthermore, we propose a novel algorithm to identify molecular design objectives based on ligand unbinding kinetics. We designed an enhanced sampling technique to accurately predict the free energy profiles of the ligand unbinding process, focusing on the free energy barrier for unbinding. Our method first identifies unbinding paths determining a corresponding set of internal coordinates (IC) that form contacts between the protein and the ligand; it then iteratively updates these interactions during a series of biased molecular-dynamics (MD) simulations to reveal the ICs that are important for the whole of the unbinding process. Subsequently, we performed finite temperature string simulations to obtain the free energy barrier for unbinding using the set of ICs as a complex reaction coordinate. Importantly, we also aimed to enable further design of drugs focusing on improved residence times. To this end, we developed a supervised machine learning (ML) approach with inputs from unbiased 'downhill' trajectories initiated near the transition state (TS) ensemble of the string unbinding path. We demonstrate that our ML method can identify key ligand-protein interactions driving the system through the TS. Some of the most important drugs for cancer treatment are kinase inhibitors. One of these kinase targets is Cyclin Dependent Kinase 2 (CDK2), an appealing target for anticancer drug development. Here, we tested our method using two different CDK2 inhibitors for potential further development of these compounds. We compared the free energy barriers obtained from our calculations with those observed in available experimental data. We highlighted important interactions at the distal ends of the ligands that can be targeted for improved residence times. Our method provides a new tool to determine unbinding rates, and to identify key structural features of the inhibitors that can be used as starting points for novel design strategies in drug discovery.
@article{lincoln49062, volume = {18}, number = {4}, month = {April}, author = {Magd Badaoui and Pedro Buigues and Denes Berta and Guarav Mandana and Hankang Gu and Tam{\'a}s F{\"o}ldes and Callum Dickson and Viktor Hornak and Mitsunori Kato and Carla Molteni and Simon Parsons and Edina Rosta}, title = {Combined Free-Energy Calculation and Machine Learning Methods for Understanding Ligand Unbinding Kinetics}, publisher = {American Chemical Society}, year = {2022}, journal = {Journal of Chemical Theory and Computation}, doi = {10.1021/acs.jctc.1c00924}, pages = {2543--2555}, url = {https://eprints.lincoln.ac.uk/id/eprint/49062/}, abstract = {The determination of drug residence times, which define the time an inhibitor is in complex with its target, is a fundamental part of the drug discovery process. Synthesis and experimental measurements of kinetic rate constants are, however, expensive and time-consuming. In this work, we aimed to obtain drug residence times computationally. Furthermore, we propose a novel algorithm to identify molecular design objectives based on ligand unbinding kinetics. We designed an enhanced sampling technique to accurately predict the free energy profiles of the ligand unbinding process, focusing on the free energy barrier for unbinding. Our method first identifies unbinding paths determining a corresponding set of internal coordinates (IC) that form contacts between the protein and the ligand; it then iteratively updates these interactions during a series of biased molecular-dynamics (MD) simulations to reveal the ICs that are important for the whole of the unbinding process. Subsequently, we performed finite temperature string simulations to obtain the free energy barrier for unbinding using the set of ICs as a complex reaction coordinate. Importantly, we also aimed to enable further design of drugs focusing on improved residence times. To this end, we developed a supervised machine learning (ML) approach with inputs from unbiased 'downhill' trajectories initiated near the transition state (TS) ensemble of the string unbinding path. We demonstrate that our ML method can identify key ligand-protein interactions driving the system through the TS. Some of the most important drugs for cancer treatment are kinase inhibitors. One of these kinase targets is Cyclin Dependent Kinase 2 (CDK2), an appealing target for anticancer drug development. Here, we tested our method using two different CDK2 inhibitors for potential further development of these compounds. We compared the free energy barriers obtained from our calculations with those observed in available experimental data. We highlighted important interactions at the distal ends of the ligands that can be targeted for improved residence times. Our method provides a new tool to determine unbinding rates, and to identify key structural features of the inhibitors that can be used as starting points for novel design strategies in drug discovery.} }
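As a rough illustration of the supervised-ML step described above, the sketch below trains a classifier on features from hypothetical "downhill" trajectories and ranks feature importances to flag the interactions most predictive of unbinding. The data is synthetic and all names are invented; this is not the authors' code.

# Hedged sketch: classify whether a short trajectory started near the TS
# commits to the unbound state, then rank which internal coordinates (ICs)
# drive that outcome via random-forest feature importances.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_traj, n_ics = 500, 12                       # trajectories x internal coordinates
X = rng.normal(size=(n_traj, n_ics))          # e.g. IC values sampled at the TS
# synthetic ground truth: ICs 3 and 7 secretly control commitment to unbinding
y = (X[:, 3] + 0.5 * X[:, 7] + 0.3 * rng.normal(size=n_traj) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
ranking = np.argsort(clf.feature_importances_)[::-1]
print("ICs most predictive of unbinding:", ranking[:3])   # expect 3 and 7 near the top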
- S. Latif, H. Cuayahuitl, F. Pervez, F. Shamshad, H. S. Ali, and E. Cambria, “A survey on deep reinforcement learning for audio-based applications,” Artificial intelligence review, 2022. doi:10.1007/s10462-022-10224-2
[BibTeX] [Abstract] [Download PDF]
Deep reinforcement learning (DRL) is poised to revolutionise the field of artificial intelligence (AI) by endowing autonomous systems with high levels of understanding of the real world. Currently, deep learning (DL) is enabling DRL to effectively solve various intractable problems in fields including computer vision, natural language processing, healthcare and robotics. Most importantly, DRL algorithms are also being employed in audio signal processing to learn directly from speech, music and other sound signals in order to create audio-based autonomous systems that have many promising applications in the real world. In this article, we conduct a comprehensive survey on the progress of DRL in the audio domain by bringing together research studies across different but related areas in speech and music. We begin with an introduction to the general field of DL and reinforcement learning (RL), then progress to the main DRL methods and their applications in the audio domain. We conclude by presenting important challenges faced by audio-based DRL agents and by highlighting open areas for future research and investigation. The findings of this paper will guide researchers interested in DRL for the audio domain.
@article{lincoln50054, month = {July}, title = {A survey on deep reinforcement learning for audio-based applications}, author = {Siddique Latif and Heriberto Cuayahuitl and Farrukh Pervez and Fahad Shamshad and Hafiz Shehbaz Ali and Erik Cambria}, publisher = {Springer Nature B.V.}, year = {2022}, doi = {10.1007/s10462-022-10224-2}, journal = {Artificial Intelligence Review}, url = {https://eprints.lincoln.ac.uk/id/eprint/50054/}, abstract = {Deep reinforcement learning (DRL) is poised to revolutionise the field of artificial intelligence (AI) by endowing autonomous systems with high levels of understanding of the real world. Currently, deep learning (DL) is enabling DRL to effectively solve various intractable problems in fields including computer vision, natural language processing, healthcare and robotics. Most importantly, DRL algorithms are also being employed in audio signal processing to learn directly from speech, music and other sound signals in order to create audio-based autonomous systems that have many promising applications in the real world. In this article, we conduct a comprehensive survey on the progress of DRL in the audio domain by bringing together research studies across different but related areas in speech and music. We begin with an introduction to the general field of DL and reinforcement learning (RL), then progress to the main DRL methods and their applications in the audio domain. We conclude by presenting important challenges faced by audio-based DRL agents and by highlighting open areas for future research and investigation. The findings of this paper will guide researchers interested in DRL for the audio domain.} }
- A. Drake, I. Sassoon, P. Balatsoukas, T. Porat, M. Ashworth, E. Wright, V. Curcin, M. Chapman, N. Kokciyan, S. Modgil, E. Sklar, and S. Parsons, “The relationship of socio-demographic factors and patient attitudes to connected health technologies: a survey of stroke survivors.,” Health informatics journal, vol. 28, iss. 2, 2022. doi:10.1177/14604582221102373
[BibTeX] [Abstract] [Download PDF]
More evidence is needed on technology implementation for remote monitoring and self-management across the various settings relevant to chronic conditions. This paper describes the findings of a survey designed to explore the relevance of socio-demographic factors to attitudes towards connected health technologies in a community of patients. Stroke survivors living in the UK were invited to answer questions about themselves and about their attitudes to a prototype remote monitoring and self-management app developed around their preferences. Eighty (80) responses were received and analysed, with limitations and results presented in full. Socio-demographic factors were not found to be associated with variations in participants' willingness to use the system and attitudes to data sharing. Individuals' levels of interest in relevant technology were suggested as a more important determinant of attitudes. These observations run against the grain of most relevant literature to date, and tend to underline the importance of prioritising patient-centred participatory research in efforts to advance connected health technologies.
@article{lincoln49926, volume = {28}, number = {2}, month = {June}, author = {Archie Drake and Isabel Sassoon and Panos Balatsoukas and Talya Porat and Mark Ashworth and Ellen Wright and Vasa Curcin and Martin Chapman and Nadin Kokciyan and Sanjay Modgil and Elizabeth Sklar and Simon Parsons}, title = {The relationship of socio-demographic factors and patient attitudes to connected health technologies: a survey of stroke survivors.}, publisher = {SAGE Publications}, year = {2022}, journal = {Health Informatics Journal}, doi = {10.1177/14604582221102373}, url = {https://eprints.lincoln.ac.uk/id/eprint/49926/}, abstract = {More evidence is needed on technology implementation for remote monitoring and self-management across the various settings relevant to chronic conditions. This paper describes the findings of a survey designed to explore the relevance of socio-demographic factors to attitudes towards connected health technologies in a community of patients. Stroke survivors living in the UK were invited to answer questions about themselves and about their attitudes to a prototype remote monitoring and self-management app developed around their preferences. Eighty (80) responses were received and analysed, with limitations and results presented in full. Socio-demographic factors were not found to be associated with variations in participants' willingness to use the system and attitudes to data sharing. Individuals' levels of interest in relevant technology were suggested as a more important determinant of attitudes. These observations run against the grain of most relevant literature to date, and tend to underline the importance of prioritising patient-centred participatory research in efforts to advance connected health technologies.} }
- H. Yahyaoui, Z. Maamar, M. Al-Khafajiy, and H. Al-Hamadi, “Trust-based management in iot federations,” Future generation computer systems, vol. 136, p. 182–192, 2022. doi:10.1016/j.future.2022.06.003
[BibTeX] [Abstract] [Download PDF]
This paper presents a trust-based evolutionary game model for managing Internet-of-Things (IoT) federations. The model adopts trust-based payoff to either reward or penalize things based on the behaviors they expose. The model also resorts to monitoring these behaviors to ensure that the share of untrustworthy things in a federation does not hinder the good functioning of trustworthy things in this federation. The trust scores are obtained using direct experience with things and feedback from other things and are integrated into game strategies. These strategies capture the dynamic nature of federations since the population of trustworthy versus untrustworthy things changes over time with the aim of retaining the trustworthy ones. To demonstrate the technical doability of the game strategies along with rewarding/penalizing things, a set of experiments were carried out and results were benchmarked as per the existing literature. The results show a better mitigation of attacks such as bad-mouthing and ballot-stuffing on trustworthy things.
@article{lincoln49874, volume = {136}, month = {June}, author = {Hamdi Yahyaoui and Zakaria Maamar and Mohammed Al-Khafajiy and Hamid Al-Hamadi}, title = {Trust-based management in IoT federations}, publisher = {Elsevier}, year = {2022}, journal = {Future Generation Computer Systems}, doi = {10.1016/j.future.2022.06.003}, pages = {182--192}, url = {https://eprints.lincoln.ac.uk/id/eprint/49874/}, abstract = {This paper presents a trust-based evolutionary game model for managing Internet-of-Things (IoT) federations. The model adopts trust-based payoff to either reward or penalize things based on the behaviors they expose. The model also resorts to monitoring these behaviors to ensure that the share of untrustworthy things in a federation does not hinder the good functioning of trustworthy things in this federation. The trust scores are obtained using direct experience with things and feedback from other things and are integrated into game strategies. These strategies capture the dynamic nature of federations since the population of trustworthy versus untrustworthy things changes over time with the aim of retaining the trustworthy ones. To demonstrate the technical doability of the game strategies along with rewarding/penalizing things, a set of experiments were carried out and results were benchmarked as per the existing literature. The results show a better mitigation of attacks such as bad-mouthing and ballot-stuffing on trustworthy things.} }
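The evolutionary-game dynamic described above can be illustrated with a one-line replicator update, sketched below under invented payoff numbers: trustworthy things earn a higher trust-based payoff, so their share of the federation grows over time. This is a toy illustration, not the paper's model.

# Hedged sketch: replicator dynamics over the share x of trustworthy things.
def replicator_step(x, payoff_t, payoff_u, dt=0.1):
    # a strategy grows in proportion to how much its payoff beats the average
    avg = x * payoff_t + (1 - x) * payoff_u
    return x + dt * x * (payoff_t - avg)

x = 0.3                                       # initial share of trustworthy things
for _ in range(200):
    # trust-based payoffs (invented): monitoring rewards trustworthy behaviour
    # and penalises detected bad-mouthing/ballot-stuffing
    x = replicator_step(x, payoff_t=1.0, payoff_u=0.4)
print(f"share of trustworthy things after 200 steps: {x:.2f}")   # -> ~1.00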
- F. Del Duchetto and M. Hanheide, “Learning on the job: long-term behavioural adaptation in human-robot interactions,” Ieee robotics and automation letters, vol. 7, iss. 3, p. 6934–6941, 2022. doi:10.1109/LRA.2022.3178807
[BibTeX] [Abstract] [Download PDF]
In this work, we propose a framework for allowing autonomous robots deployed for extended periods of time in public spaces to adapt their own behaviour online from user interactions. The robot behaviour planning is embedded in a Reinforcement Learning (RL) framework, where the objective is maximising the level of overall user engagement during the interactions. We use the Upper-Confidence-Bound Value-Iteration (UCBVI) algorithm, which gives a helpful way of managing the exploration-exploitation trade-off for real-time interactions. An engagement model trained end-to-end generates the reward function in real-time during policy execution. We test this approach in a public museum in Lincoln (U.K.), where the robot is deployed as a tour guide for the visitors. Results show that after a couple of months of exploration, the robot policy learned to maintain the engagement of users for longer, with an increase of 22.8% over the initial static policy in the number of items visited during the tour and a 30% increase in the probability of completing the tour. This work is a promising step toward behavioural adaptation in long-term scenarios for robotics applications in social settings.
@article{lincoln49961, volume = {7}, number = {3}, month = {June}, author = {Francesco Del Duchetto and Marc Hanheide}, title = {Learning on the Job: Long-Term Behavioural Adaptation in Human-Robot Interactions}, publisher = {IEEE}, year = {2022}, journal = {IEEE Robotics and Automation Letters}, doi = {10.1109/LRA.2022.3178807}, pages = {6934--6941}, url = {https://eprints.lincoln.ac.uk/id/eprint/49961/}, abstract = {In this work, we propose a framework for allowing autonomous robots deployed for extended periods of time in public spaces to adapt their own behaviour online from user interactions. The robot behaviour planning is embedded in a Reinforcement Learning (RL) framework, where the objective is maximising the level of overall user engagement during the interactions. We use the Upper-Confidence-Bound Value-Iteration (UCBVI) algorithm, which gives a helpful way of managing the exploration-exploitation trade-off for real-time interactions. An engagement model trained end-to-end generates the reward function in real-time during policy execution. We test this approach in a public museum in Lincoln (U.K.), where the robot is deployed as a tour guide for the visitors. Results show that after a couple of months of exploration, the robot policy learned to maintain the engagement of users for longer, with an increase of 22.8\% over the initial static policy in the number of items visited during the tour and a 30\% increase in the probability of completing the tour. This work is a promising step toward behavioural adaptation in long-term scenarios for robotics applications in social settings.} }
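For readers unfamiliar with UCBVI, the toy sketch below shows the core idea the entry refers to: value iteration on an estimated model plus a count-based optimism bonus. The MDP, rewards and bonus scale are all invented for illustration; a deployed agent would additionally update the counts and model from interaction data.

# Hedged sketch of Upper-Confidence-Bound Value Iteration on a tabular toy MDP.
import numpy as np

nS, nA, H, c = 4, 2, 10, 1.0                  # states, actions, horizon, bonus scale
rng = np.random.default_rng(0)
N = np.ones((nS, nA))                         # visit counts (1 avoids divide-by-zero)
P = np.full((nS, nA, nS), 1.0 / nS)           # empirical transition estimates
R = rng.uniform(size=(nS, nA))                # estimated per-step engagement reward

def ucbvi_policy():
    # backward value iteration with an optimism bonus that shrinks as
    # state-action pairs are visited more often
    Q = np.zeros((H + 1, nS, nA))
    for h in reversed(range(H)):
        V_next = Q[h + 1].max(axis=1)         # (nS,)
        Q[h] = np.minimum(R + c / np.sqrt(N) + P @ V_next, H)
    return Q[0].argmax(axis=1)

# a real agent would update N, P and R after each tour interaction
print("optimistic first-step action per state:", ucbvi_policy())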
- S. Raza and H. Cuayahuitl, “A comparison of neural-based visual recognisers for speech activity detection,” International journal of speech technology, 2022. doi:10.1007/s10772-021-09956-3
[BibTeX] [Abstract] [Download PDF]
Existing literature on speech activity detection (SAD) highlights different approaches within neural networks but does not provide a comprehensive comparison to these methods. This is important because such neural approaches often require hardware-intensive resources. In this article, we provide a comparative analysis of three different approaches: classification with still images (CNN model), classification based on previous images (CRNN model), and classification of sequences of images (Seq2Seq model). Our experimental results using the Vid-TIMIT dataset show that the CNN model can achieve an accuracy of 97% whereas the CRNN and Seq2Seq models increase the classification to 99%. Further experiments show that the CRNN model is almost as accurate as the Seq2Seq model (99.1% vs. 99.6% classification accuracy, respectively) but 57% faster to train (326 vs. 761 secs. per epoch).
@article{lincoln49800, month = {June}, title = {A comparison of neural-based visual recognisers for speech activity detection}, author = {Sajjadali Raza and Heriberto Cuayahuitl}, publisher = {Springer}, year = {2022}, doi = {10.1007/s10772-021-09956-3}, journal = {International Journal of Speech Technology}, url = {https://eprints.lincoln.ac.uk/id/eprint/49800/}, abstract = {Existing literature on speech activity detection (SAD) highlights different approaches within neural networks but does not provide a comprehensive comparison to these methods. This is important because such neural approaches often require hardware-intensive resources. In this article, we provide a comparative analysis of three different approaches: classification with still images (CNN model), classification based on previous images (CRNN model), and classification of sequences of images (Seq2Seq model). Our experimental results using the Vid-TIMIT dataset show that the CNN model can achieve an accuracy of 97\% whereas the CRNN and Seq2Seq models increase the classification to 99\%. Further experiments show that the CRNN model is almost as accurate as the Seq2Seq model (99.1\% vs. 99.6\% classification accuracy, respectively) but 57\% faster to train (326 vs. 761 secs. per epoch).} }
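A hedged PyTorch sketch of the CRNN variant compared above: per-frame CNN features followed by a GRU over the frame sequence, classifying speaking versus non-speaking. All sizes are illustrative assumptions and do not reproduce the paper's configuration.

# Hedged sketch: CNN feature extractor applied per frame, GRU across frames.
import torch
import torch.nn as nn

class CRNN(nn.Module):
    def __init__(self, hidden=64, n_classes=2):
        super().__init__()
        self.cnn = nn.Sequential(               # per-frame feature extractor
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.rnn = nn.GRU(32, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_classes)
    def forward(self, x):                       # x: (B, T, 1, H, W)
        B, T = x.shape[:2]
        f = self.cnn(x.flatten(0, 1)).view(B, T, -1)   # (B, T, 32)
        h, _ = self.rnn(f)                      # temporal context across frames
        return self.out(h[:, -1])               # classify from the last state

logits = CRNN()(torch.randn(2, 8, 1, 48, 48))   # -> shape (2, 2)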
- X. Li, R. Lloyd, S. Ward, J. Cox, S. Coutts, and C. Fox, “Robotic crop row tracking around weeds using cereal-specific features,” Computers and electronics in agriculture, vol. 197, 2022. doi:10.1016/j.compag.2022.106941
[BibTeX] [Abstract] [Download PDF]
Crop row following is especially challenging in narrow row cereal crops, such as wheat. Separation between plants within a row disappears at an early growth stage, and canopy closure between rows, when leaves from different rows start to occlude each other, occurs three to four months after the crop emerges. Canopy closure makes it challenging to identify separate rows through computer vision as clear lanes become obscured. Cereal crops are grass species and so their leaves have a predictable shape and orientation. We introduce an image processing pipeline which exploits grass shape to identify and track rows. The key observation exploited is that leaf orientations tend to be vertical along rows and horizontal between rows due to the location of the stems within the rows. Adaptive mean-shift clustering on Hough line segments is then used to obtain lane centroids, followed by a nearest neighbor data association creating lane line candidates in 2D space. Lane parameters are fit with linear regression and a Kalman filter is used for tracking lanes between frames. The method achieves sub-50 mm accuracy, which is sufficient for placing a typical agri-robot's wheels between real-world, early-growth wheat crop rows to drive between them, as long as the crop is seeded in a wider spacing such as 180 mm row spacing for an 80 mm wheel width robot.
@article{lincoln49340, volume = {197}, month = {June}, author = {Xiaodong Li and Rob Lloyd and Sarah Ward and Jonathan Cox and Shaun Coutts and Charles Fox}, title = {Robotic crop row tracking around weeds using cereal-specific features}, publisher = {Elsevier}, journal = {Computers and Electronics in Agriculture}, doi = {10.1016/j.compag.2022.106941}, year = {2022}, url = {https://eprints.lincoln.ac.uk/id/eprint/49340/}, abstract = {Crop row following is especially challenging in narrow row cereal crops, such as wheat. Separation between plants within a row disappears at an early growth stage, and canopy closure between rows, when leaves from different rows start to occlude each other, occurs three to four months after the crop emerges. Canopy closure makes it challenging to identify separate rows through computer vision as clear lanes become obscured. Cereal crops are grass species and so their leaves have a predictable shape and orientation. We introduce an image processing pipeline which exploits grass shape to identify and track rows. The key observation exploited is that leaf orientations tend to be vertical along rows and horizontal between rows due to the location of the stems within the rows. Adaptive mean-shift clustering on Hough line segments is then used to obtain lane centroids, followed by a nearest neighbor data association creating lane line candidates in 2D space. Lane parameters are fit with linear regression and a Kalman filter is used for tracking lanes between frames. The method achieves sub-50 mm accuracy, which is sufficient for placing a typical agri-robot's wheels between real-world, early-growth wheat crop rows to drive between them, as long as the crop is seeded in a wider spacing such as 180 mm row spacing for an 80 mm wheel width robot.} }
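The final tracking stage of the pipeline above can be illustrated with a small Kalman filter over the per-frame lane fit, as sketched below; the state layout, motion model and noise matrices are invented for the example, not taken from the paper.

# Hedged sketch: smooth per-frame lane fits (offset, heading) between frames.
import numpy as np

x = np.zeros(2)                               # state: [lateral offset (mm), heading (rad)]
P = np.eye(2) * 1e3                           # large initial uncertainty
Q = np.diag([4.0, 1e-4])                      # process noise: drift between frames
R = np.diag([25.0, 1e-3])                     # noise of the per-frame regression fit

def kalman_update(x, P, z):
    # identity motion and measurement models keep the example minimal
    P_pred = P + Q                            # predict
    K = P_pred @ np.linalg.inv(P_pred + R)    # Kalman gain
    return x + K @ (z - x), (np.eye(2) - K) @ P_pred

for z in ([12.0, 0.02], [18.0, 0.01], [15.0, 0.015]):   # noisy per-frame lane fits
    x, P = kalman_update(x, P, np.array(z))
print(f"smoothed offset {x[0]:.1f} mm, heading {x[1]:.3f} rad")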
- S. Pearson, T. C. Camacho-Villa, R. Valluru, O. Gaju, M. Rai, I. Gould, S. Brewer, and E. Sklar, “Robotics and autonomous systems for net-zero agriculture,” Current robotics reports, vol. 3, p. 57–64, 2022. doi:10.1007/s43154-022-00077-6
[BibTeX] [Abstract] [Download PDF]
Purpose of Review: The paper discusses how robotics and autonomous systems (RAS) are being deployed to decarbonise agricultural production. The climate emergency cannot be ameliorated without dramatic reductions in greenhouse gas emissions across the agri-food sector. This review outlines the transformational role for robotics in the agri-food system and considers where research and focus might be prioritised. Recent Findings: Agri-robotic systems provide multiple emerging opportunities that facilitate the transition towards net zero agriculture. Five focus themes were identified where robotics could impact sustainable food production systems to (1) increase nitrogen use efficiency, (2) accelerate plant breeding, (3) deliver regenerative agriculture, (4) electrify robotic vehicles, (5) reduce food waste. Summary: RAS technologies create opportunities to (i) optimise the use of inputs such as fertiliser, seeds, and fuel/energy; (ii) reduce the environmental impact on soil and other natural resources; (iii) improve the efficiency and precision of agricultural processes and equipment; (iv) enhance farmers' decisions to improve crop care and reduce farm waste. Further and scaled research and technology development are needed to exploit these opportunities.
@article{lincoln50887, volume = {3}, month = {June}, author = {Simon Pearson and Tania Carolina Camacho-Villa and Ravi Valluru and Oorbessy Gaju and Mini Rai and Iain Gould and Steve Brewer and Elizabeth Sklar}, title = {Robotics and autonomous systems for net-zero agriculture}, publisher = {Springer}, year = {2022}, journal = {Current Robotics Reports}, doi = {10.1007/s43154-022-00077-6}, pages = {57--64}, url = {https://eprints.lincoln.ac.uk/id/eprint/50887/}, abstract = {Purpose of Review: The paper discusses how robotics and autonomous systems (RAS) are being deployed to decarbonise agricultural production. The climate emergency cannot be ameliorated without dramatic reductions in greenhouse gas emissions across the agri-food sector. This review outlines the transformational role for robotics in the agri-food system and considers where research and focus might be prioritised. Recent Findings: Agri-robotic systems provide multiple emerging opportunities that facilitate the transition towards net zero agriculture. Five focus themes were identified where robotics could impact sustainable food production systems to (1) increase nitrogen use efficiency, (2) accelerate plant breeding, (3) deliver regenerative agriculture, (4) electrify robotic vehicles, (5) reduce food waste. Summary: RAS technologies create opportunities to (i) optimise the use of inputs such as fertiliser, seeds, and fuel/energy; (ii) reduce the environmental impact on soil and other natural resources; (iii) improve the efficiency and precision of agricultural processes and equipment; (iv) enhance farmers' decisions to improve crop care and reduce farm waste. Further and scaled research and technology development are needed to exploit these opportunities.} }
- C. Qi, J. Gao, S. Pearson, H. Harman, K. Chen, and L. Shu, “Tea chrysanthemum detection under unstructured environments using the tc-yolo model,” Expert systems with applications, vol. 193, 2022. doi:10.1016/j.eswa.2021.116473
[BibTeX] [Abstract] [Download PDF]
Tea chrysanthemum detection at its flowering stage is one of the key components for selective chrysanthemum harvesting robot development. However, it is a challenge to detect flowering chrysanthemums under unstructured field environments given variations in illumination, occlusion and object scale. In this context, we propose a highly fused and lightweight deep learning architecture based on YOLO for tea chrysanthemum detection (TC-YOLO). First, in the backbone component and neck component, the method uses the Cross-Stage Partially Dense network (CSPDenseNet) and the Cross-Stage Partial ResNeXt network (CSPResNeXt) as the main networks, respectively, and embeds custom feature fusion modules to guide the gradient flow. In the final head component, the method combines the recursive feature pyramid (RFP) multiscale fusion reflow structure and the Atrous Spatial Pyramid Pool (ASPP) module with cavity convolution to achieve the detection task. The resulting model was tested on 300 field images using a data enhancement strategy combining flipping and rotation, showing that under the NVIDIA Tesla P100 GPU environment, at an inference speed of 47.23 FPS per image (416 × 416), TC-YOLO can achieve an average precision (AP) of 92.49% on our own tea chrysanthemum dataset. Through further validation, it was found that overlap had the least effect on tea chrysanthemum detection, and illumination had the greatest effect. In addition, this method (13.6 M) can be deployed on a single mobile GPU, and it could be further developed as a perception system for a selective chrysanthemum harvesting robot in the future.
@article{lincoln47700, volume = {193}, month = {May}, author = {Chao Qi and Junfeng Gao and Simon Pearson and Helen Harman and Kunjie Chen and Lei Shu}, title = {Tea chrysanthemum detection under unstructured environments using the TC-YOLO model}, publisher = {Elsevier}, journal = {Expert Systems with Applications}, doi = {10.1016/j.eswa.2021.116473}, year = {2022}, url = {https://eprints.lincoln.ac.uk/id/eprint/47700/}, abstract = {Tea chrysanthemum detection at its flowering stage is one of the key components for selective chrysanthemum harvesting robot development. However, it is a challenge to detect flowering chrysanthemums under unstructured field environments given variations in illumination, occlusion and object scale. In this context, we propose a highly fused and lightweight deep learning architecture based on YOLO for tea chrysanthemum detection (TC-YOLO). First, in the backbone component and neck component, the method uses the Cross-Stage Partially Dense network (CSPDenseNet) and the Cross-Stage Partial ResNeXt network (CSPResNeXt) as the main networks, respectively, and embeds custom feature fusion modules to guide the gradient flow. In the final head component, the method combines the recursive feature pyramid (RFP) multiscale fusion reflow structure and the Atrous Spatial Pyramid Pool (ASPP) module with cavity convolution to achieve the detection task. The resulting model was tested on 300 field images using a data enhancement strategy combining flipping and rotation, showing that under the NVIDIA Tesla P100 GPU environment, at an inference speed of 47.23 FPS per image (416 {$\times$} 416), TC-YOLO can achieve an average precision (AP) of 92.49\% on our own tea chrysanthemum dataset. Through further validation, it was found that overlap had the least effect on tea chrysanthemum detection, and illumination had the greatest effect. In addition, this method (13.6 M) can be deployed on a single mobile GPU, and it could be further developed as a perception system for a selective chrysanthemum harvesting robot in the future.} }
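One building block named above, the Atrous Spatial Pyramid Pooling (ASPP) module, is sketched generically below in PyTorch; the channel counts and dilation rates are illustrative and are not TC-YOLO's actual configuration.

# Hedged sketch: parallel dilated 3x3 convolutions capture context at several
# scales; a 1x1 convolution fuses the concatenated branch outputs.
import torch
import torch.nn as nn

class ASPP(nn.Module):
    def __init__(self, c_in=64, c_out=64, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(c_in, c_out, 3, padding=r, dilation=r) for r in rates])
        self.fuse = nn.Conv2d(c_out * len(rates), c_out, 1)
    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

y = ASPP()(torch.randn(1, 64, 52, 52))   # -> shape (1, 64, 52, 52)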
- S. M. Mellado, G. Cielniak, and T. Duckett, “Robotic exploration for learning human motion patterns,” Ieee transactions on robotics, 2022. doi:10.1109/TRO.2021.3101358
[BibTeX] [Abstract] [Download PDF]
Understanding how people are likely to move is key to efficient and safe robot navigation in human environments. However, mobile robots can only observe a fraction of the environment at a time, while the activity patterns of people may also change at different times. This paper introduces a new methodology for mobile robot exploration to maximise the knowledge of human activity patterns by deciding where and when to collect observations. We introduce an exploration policy driven by the entropy levels in a spatio-temporal map of pedestrian flows, and compare multiple spatio-temporal exploration strategies including both informed and uninformed approaches. The evaluation is performed by simulating mobile robot exploration using real sensory data from three long-term pedestrian datasets. The results show that for certain scenarios the models built with the proposed exploration system can better predict the flow patterns than uninformed strategies, allowing the robot to move in a more socially compliant way, and that the exploration ratio is a key factor when it comes to the model prediction accuracy.
@article{lincoln46497, month = {April}, title = {Robotic Exploration for Learning Human Motion Patterns}, author = {Sergio Molina Mellado and Grzegorz Cielniak and Tom Duckett}, publisher = {IEEE}, year = {2022}, doi = {10.1109/TRO.2021.3101358}, journal = {IEEE Transactions on Robotics}, url = {https://eprints.lincoln.ac.uk/id/eprint/46497/}, abstract = {Understanding how people are likely to move is key to efficient and safe robot navigation in human environments. However, mobile robots can only observe a fraction of the environment at a time, while the activity patterns of people may also change at different times. This paper introduces a new methodology for mobile robot exploration to maximise the knowledge of human activity patterns by deciding where and when to collect observations. We introduce an exploration policy driven by the entropy levels in a spatio-temporal map of pedestrian flows, and compare multiple spatio-temporal exploration strategies including both informed and uninformed approaches. The evaluation is performed by simulating mobile robot exploration using real sensory data from three long-term pedestrian datasets. The results show that for certain scenarios the models built with the proposed exploration system can better predict the flow patterns than uninformed strategies, allowing the robot to move in a more socially compliant way, and that the exploration ratio is a key factor when it comes to the model prediction accuracy.} }
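The entropy-driven policy described above can be illustrated in a few lines: estimate per-cell occupancy probabilities in a spatio-temporal grid and explore wherever the estimate is most uncertain. The grid and counts below are toy data, not the paper's datasets or exact policy.

# Hedged sketch: pick the grid cell whose Bernoulli flow estimate has maximal entropy.
import numpy as np

rng = np.random.default_rng(1)
counts = rng.integers(1, 5, size=(10, 2)).astype(float)   # per-cell [flow, no-flow] counts

def bernoulli_entropy(p):
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

p_hat = counts[:, 0] / counts.sum(axis=1)     # estimated pedestrian-flow probability
target = int(np.argmax(bernoulli_entropy(p_hat)))
print(f"most informative cell to observe next: {target} (p_hat={p_hat[target]:.2f})")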
- K. J. Parnell, J. Fischer, J. Clark, A. Bodenman, M. G. J. Trigo, M. P. Brito, M. D. Soorati, K. Plant, and S. Ramchurn, “Trustworthy uav relationships: applying the schema action world taxonomy to uavs and uav swarm operations,” International journal of human-computer interaction, 2022. doi:10.1080/10447318.2022.2108961
[BibTeX] [Abstract] [Download PDF]
Human Factors play a significant role in the development and integration of avionic systems to ensure that they are trusted and can be used effectively. As Unoccupied Aerial Vehicle (UAV) technology becomes increasingly important to the aviation domain this holds true. The study presented in this paper aims to gain an understanding of UAV operators' trust requirements when piloting UAVs by utilising a popular aviation interview methodology (Schema World Action Research Method), in combination with key questions on trust identified from the literature. Interviews were conducted with six UAV operators, with a range of experience, to identify the trust requirements that UAV operators hold and their views on how UAV swarms may alter the trust relationship between the operator and the UAV technology. Both methodological and practical contributions of the research interviews are discussed.
@article{lincoln50550, month = {August}, title = {Trustworthy UAV relationships: Applying the Schema Action World taxonomy to UAVs and UAV swarm operations}, author = {Katie J. Parnell and Joel Fischer and Jed Clark and Adrian Bodenman and Maria J. Galvez Trigo and Mario P. Brito and Mohammad Divband Soorati and Katherine Plant and Sarvapali Ramchurn}, publisher = {Taylor and Francis}, year = {2022}, doi = {10.1080/10447318.2022.2108961}, journal = {International Journal of Human-Computer Interaction}, url = {https://eprints.lincoln.ac.uk/id/eprint/50550/}, abstract = {Human Factors play a significant role in the development and integration of avionic systems to ensure that they are trusted and can be used effectively. As Unoccupied Aerial Vehicle (UAV) technology becomes increasingly important to the aviation domain this holds true. The study presented in this paper aims to gain an understanding of UAV operators' trust requirements when piloting UAVs by utilising a popular aviation interview methodology (Schema World Action Research Method), in combination with key questions on trust identified from the literature. Interviews were conducted with six UAV operators, with a range of experience, to identify the trust requirements that UAV operators hold and their views on how UAV swarms may alter the trust relationship between the operator and the UAV technology. Both methodological and practical contributions of the research interviews are discussed.} }
- C. R. Carignan, R. Detry, M. R. Saaj, G. Marani, and J. D. Vander Hook, “Editorial: robotic in-situ servicing, assembly and manufacturing,” Frontiers in robotics and ai, vol. 9, 2022. doi:10.3389/frobt.2022.887506
[BibTeX] [Abstract] [Download PDF]
This research topic is dedicated to articles focused on robotic manufacturing, assembly, and servicing utilizing in-situ resources, especially for space robotic applications. The purpose was to gather resource material for researchers from a variety of disciplines to identify common themes, formulate problems, and share promising technologies for autonomous large-scale construction, servicing, and assembly robots. The articles under this special topic provide a snapshot of several key technologies under development to support on-orbit robotic servicing, assembly, and manufacturing.
@article{lincoln49488, volume = {9}, month = {March}, author = {Craig R. Carignan and Renaud Detry and Mini Rai Saaj and Giacomo Marani and Joshua D. Vander Hook}, title = {Editorial: Robotic In-Situ Servicing, Assembly and Manufacturing}, publisher = {Frontiers Media}, journal = {Frontiers in Robotics and AI}, doi = {10.3389/frobt.2022.887506}, year = {2022}, url = {https://eprints.lincoln.ac.uk/id/eprint/49488/}, abstract = {This research topic is dedicated to articles focused on robotic manufacturing, assembly, and servicing utilizing in-situ resources, especially for space robotic applications. The purpose was to gather resource material for researchers from a variety of disciplines to identify common themes, formulate problems, and share promising technologies for autonomous large-scale construction, servicing, and assembly robots. The articles under this special topic provide a snapshot of several key technologies under development to support on-orbit robotic servicing, assembly, and manufacturing.} }
- F. Lei, Z. Peng, M. Liu, J. Peng, V. Cutsuridis, and S. Yue, “A robust visual system for looming cue detection against translating motion,” Ieee transactions on neural networks and learning systems, p. 1–15, 2022. doi:10.1109/TNNLS.2022.3149832
[BibTeX] [Abstract] [Download PDF]
Collision detection is critical for autonomous vehicles or robots to serve human society safely. Detecting looming objects robustly and promptly plays an important role in collision avoidance systems. The locust lobula giant movement detector (LGMD1) is specifically selective to looming objects which are on a direct collision course. However, the existing LGMD1 models cannot distinguish a looming object from a near and fast translatory moving object, because the latter can evoke a large amount of excitation that can lead to false LGMD1 spikes. This paper presents a new visual neural system model (LGMD1) that applies a neural competition mechanism within a framework of separated ON and OFF pathways to shut off the translating response. The competition-based approach responds vigorously to monotonous ON/OFF responses resulting from a looming object. However, it does not respond to paired ON-OFF responses that result from a translating object, thereby enhancing collision selectivity. Moreover, a complementary denoising mechanism ensures reliable collision detection. To verify the effectiveness of the model, we have conducted systematic comparative experiments on synthetic and real datasets. The results show that our method exhibits more accurate discrimination between looming and translational events – the looming motion can be correctly detected. It also demonstrates that the proposed model is more robust than comparative models.
@article{lincoln48358, month = {February}, author = {Fang Lei and Zhiping Peng and Mei Liu and Jigen Peng and Vassilis Cutsuridis and Shigang Yue}, title = {A Robust Visual System for Looming Cue Detection Against Translating Motion}, publisher = {IEEE}, journal = {IEEE Transactions on Neural Networks and Learning Systems}, doi = {10.1109/TNNLS.2022.3149832}, pages = {1--15}, year = {2022}, url = {https://eprints.lincoln.ac.uk/id/eprint/48358/}, abstract = {Collision detection is critical for autonomous vehicles or robots to serve human society safely. Detecting looming objects robustly and promptly plays an important role in collision avoidance systems. The locust lobula giant movement detector (LGMD1) is specifically selective to looming objects which are on a direct collision course. However, the existing LGMD1 models cannot distinguish a looming object from a near and fast translatory moving object, because the latter can evoke a large amount of excitation that can lead to false LGMD1 spikes. This paper presents a new visual neural system model (LGMD1) that applies a neural competition mechanism within a framework of separated ON and OFF pathways to shut off the translating response. The competition-based approach responds vigorously to monotonous ON/OFF responses resulting from a looming object. However, it does not respond to paired ON-OFF responses that result from a translating object, thereby enhancing collision selectivity. Moreover, a complementary denoising mechanism ensures reliable collision detection. To verify the effectiveness of the model, we have conducted systematic comparative experiments on synthetic and real datasets. The results show that our method exhibits more accurate discrimination between looming and translational events -- the looming motion can be correctly detected. It also demonstrates that the proposed model is more robust than comparative models.} }
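A toy numerical illustration of the ON/OFF competition idea above is sketched below. The discriminator formula is a simple stand-in chosen to show the effect (a looming edge drives one channel monotonically, while a translating edge drives both channels in matched pairs), not the paper's actual LGMD1 circuitry.

# Hedged sketch: split luminance change into ON/OFF channels; matched ON-OFF
# activity (translation) suppresses the response, unmatched activity (looming) passes.
import numpy as np

def lgmd_response(frames):
    d = np.diff(frames, axis=0)               # frame-to-frame luminance change
    on = np.maximum(d, 0).sum(axis=(1, 2))    # ON channel: brightening energy
    off = np.maximum(-d, 0).sum(axis=(1, 2))  # OFF channel: darkening energy
    return np.abs(on - off) / (1 + np.minimum(on, off))

loom = np.ones((6, 20, 20))                   # dark square grows in place
trans = np.ones((6, 20, 20))                  # dark square slides sideways
for t in range(6):
    loom[t, 8 - t:12 + t, 8 - t:12 + t] = 0.0
    trans[t, 8:12, 2 + 3 * t:6 + 3 * t] = 0.0
print("looming response:    ", lgmd_response(loom).round(1))    # large, growing
print("translating response:", lgmd_response(trans).round(1))   # near zero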
- A. Mazzeo, J. Aguzzi, M. Calisti, S. Canese, M. Angiolillo, L. Allcock, F. Vecchi, S. Stefanni, and M. Controzzi, “Marine robotics for deep-sea specimen collection: a taxonomy of underwater manipulative actions,” Sensors, vol. 22, iss. 1471, 2022. doi:10.3390/s22041471
[BibTeX] [Abstract] [Download PDF]
In order to develop a gripping system or control strategy that improves scientific sampling procedures, knowledge of the process and the consequent definition of requirements is fundamental. Nevertheless, factors influencing sampling procedures have not been extensively described, and selected strategies mostly depend on pilots' and researchers' experience. We interviewed 17 researchers and remotely operated vehicle (ROV) technical operators, through a formal questionnaire or in-person interviews, to collect evidence of sampling procedures based on their direct field experience. We methodologically analyzed sampling procedures to extract single basic actions (called atomic manipulations). Available equipment, environment and species-specific features strongly influenced the manipulative choices. We identified a list of functional and technical requirements for the development of novel end-effectors for marine sampling. Our results indicate that the unstructured and highly variable deep-sea environment requires a versatile system, capable of robust interactions with hard surfaces such as pushing or scraping, precise tuning of gripping force for tasks such as pulling delicate organisms away from hard and soft substrates, and rigid holding, as well as a mechanism for rapidly switching among external tools.
@article{lincoln52103, volume = {22}, number = {1471}, month = {February}, author = {Angela Mazzeo and Jacopo Aguzzi and Marcello Calisti and Simonpietro Canese and Michela Angiolillo and Louise Allcock and Fabrizio Vecchi and Sergio Stefanni and Marco Controzzi}, title = {Marine Robotics for Deep-Sea Specimen Collection: A Taxonomy of Underwater Manipulative Actions}, publisher = {MDPI}, year = {2022}, journal = {Sensors}, doi = {10.3390/s22041471}, url = {https://eprints.lincoln.ac.uk/id/eprint/52103/}, abstract = {In order to develop a gripping system or control strategy that improves scientific sampling procedures, knowledge of the process and the consequent definition of requirements is fundamental. Nevertheless, factors influencing sampling procedures have not been extensively described, and selected strategies mostly depend on pilots' and researchers' experience. We interviewed 17 researchers and remotely operated vehicle (ROV) technical operators, through a formal questionnaire or in-person interviews, to collect evidence of sampling procedures based on their direct field experience. We methodologically analyzed sampling procedures to extract single basic actions (called atomic manipulations). Available equipment, environment and species-specific features strongly influenced the manipulative choices. We identified a list of functional and technical requirements for the development of novel end-effectors for marine sampling. Our results indicate that the unstructured and highly variable deep-sea environment requires a versatile system, capable of robust interactions with hard surfaces such as pushing or scraping, precise tuning of gripping force for tasks such as pulling delicate organisms away from hard and soft substrates, and rigid holding, as well as a mechanism for rapidly switching among external tools.} }
- J. Aguzzi, S. Flogel, S. Marini, L. Thomsen, J. Albiez, P. Weiss, G. Picardi, M. Calisti, S. Stefanni, L. Mirimin, F. Vecchi, C. Laschi, A. Branch, E. Clark, B. Foing, A. Wedler, D. Chatzievangelou, M. Tangherlini, A. Purser, L. Dartnell, and R. Danovaro, “Developing technological synergies between deep-sea and space research,” Elementa: science of the anthropocene, vol. 10, iss. 1, p. 64, 2022. doi:10.1525/elementa.2021.00064
[BibTeX] [Abstract] [Download PDF]
Recent advances in robotic design, autonomy and sensor integration create solutions for the exploration of deep-sea environments, transferable to the oceans of icy moons. Marine platforms do not yet have the mission autonomy capacity of their space counterparts (e.g., the state-of-the-art Mars Perseverance rover mission), although different levels of autonomous navigation and mapping, as well as sampling, are an extant capability. In this setting, their increasingly biomimicked designs may allow access to complex environmental scenarios, with novel, highly-integrated life-detecting, oceanographic and geochemical sensor packages. Here, we lay out an outlook for the upcoming advances in deep-sea robotics through synergies with space technologies within three major research areas: biomimetic structure and propulsion (including power storage and generation), artificial intelligence and cooperative networks, and life-detecting instrument design. New morphological and material designs, with miniaturized and more diffuse sensor packages, will advance robotic sensing systems. Artificial intelligence algorithms controlling navigation and communications will allow the further development of the behavioral biomimicking by cooperating networks. Solutions will have to be tested within infrastructural networks of cabled observatories, neutrino telescopes, and off-shore industry sites with agendas and modalities that are beyond the scope of our work, but could draw inspiration on the proposed examples for the operational combination of fixed and mobile platforms.
@article{lincoln52102, volume = {10}, number = {1}, month = {February}, author = {Jacopo Aguzzi and Sascha Flogel and Simone Marini and Laurenz Thomsen and Jan Albiez and Peter Weiss and Giacomo Picardi and Marcello Calisti and Sergio Stefanni and Luca Mirimin and Fabrizio Vecchi and Cecilia Laschi and Andrew Branch and Evan Clark and Bernard Foing and Armin Wedler and Damianos Chatzievangelou and Michael Tangherlini and Autun Purser and Lewis Dartnell and Roberto Danovaro}, title = {Developing technological synergies between deep-sea and space research}, publisher = {University of California}, year = {2022}, journal = {Elementa: Science of the Anthropocene}, doi = {10.1525/elementa.2021.00064}, pages = {00064}, url = {https://eprints.lincoln.ac.uk/id/eprint/52102/}, abstract = {Recent advances in robotic design, autonomy and sensor integration create solutions for the exploration of deep-sea environments, transferable to the oceans of icy moons. Marine platforms do not yet have the mission autonomy capacity of their space counterparts (e.g., the state-of-the-art Mars Perseverance rover mission), although different levels of autonomous navigation and mapping, as well as sampling, are an extant capability. In this setting, their increasingly biomimicked designs may allow access to complex environmental scenarios, with novel, highly-integrated life-detecting, oceanographic and geochemical sensor packages. Here, we lay out an outlook for the upcoming advances in deep-sea robotics through synergies with space technologies within three major research areas: biomimetic structure and propulsion (including power storage and generation), artificial intelligence and cooperative networks, and life-detecting instrument design. New morphological and material designs, with miniaturized and more diffuse sensor packages, will advance robotic sensing systems. Artificial intelligence algorithms controlling navigation and communications will allow the further development of the behavioral biomimicking by cooperating networks. Solutions will have to be tested within infrastructural networks of cabled observatories, neutrino telescopes, and off-shore industry sites with agendas and modalities that are beyond the scope of our work, but could draw inspiration on the proposed examples for the operational combination of fixed and mobile platforms.} }
- H. Luan, Q. Fu, Y. Zhang, M. Hua, S. Chen, and S. Yue, “A looming spatial localization neural network inspired by mlg1 neurons in the crab neohelice,” Frontiers in neuroscience, 2022. doi:10.3389/fnins.2021.787256
[BibTeX] [Abstract] [Download PDF]
Similar to most visual animals, the crab Neohelice granulata relies predominantly on visual information to escape from predators, to track prey and for selecting mates. It, therefore, needs specialized neurons to process visual information and determine the spatial location of looming objects. In the crab Neohelice granulata, the Monostratified Lobula Giant type1 (MLG1) neurons have been found to manifest looming sensitivity with finely tuned capabilities of encoding spatial location information. The MLG1s neuronal ensemble can not only perceive the location of a looming stimulus, but are also thought to be able to influence the direction of movement continuously, for example, escaping from a threatening, looming target in relation to its position. Such specific characteristics make the MLG1s unique compared to normal looming detection neurons in invertebrates which cannot localize spatial looming. Modeling the MLG1s ensemble is not only critical for elucidating the mechanisms underlying the functionality of such neural circuits, but also important for developing new autonomous, efficient, directionally reactive collision avoidance systems for robots and vehicles. However, little computational modeling has been done for implementing looming spatial localization analogous to the specific functionality of MLG1s ensemble. To bridge this gap, we propose a model of MLG1s and their pre-synaptic visual neural network to detect the spatial location of looming objects. The model consists of 16 homogeneous sectors arranged in a circular field inspired by the natural arrangement of 16 MLG1s' receptive fields to encode and convey spatial information concerning looming objects with dynamic expanding edges in different locations of the visual field. Responses of the proposed model to systematic real-world visual stimuli match many of the biological characteristics of MLG1 neurons. The systematic experiments demonstrate that our proposed MLG1s model works effectively and robustly to perceive and localize looming information, which could be a promising candidate for intelligent machines interacting within dynamic environments free of collision. This study also sheds light upon a new type of neuromorphic visual sensor strategy that can extract looming objects with locational information in a quick and reliable manner.
@article{lincoln49094, month = {January}, title = {A Looming Spatial Localization Neural Network Inspired by MLG1 Neurons in the Crab Neohelice}, author = {Hao Luan and Qingbing Fu and Yicheng Zhang and Mu Hua and Shengyong Chen and Shigang Yue}, publisher = {Frontiers Media}, year = {2022}, doi = {10.3389/fnins.2021.787256}, journal = {Frontiers in Neuroscience}, url = {https://eprints.lincoln.ac.uk/id/eprint/49094/}, abstract = {Similar to most visual animals, the crab Neohelice granulata relies predominantly on visual information to escape from predators, to track prey and for selecting mates. It, therefore, needs specialized neurons to process visual information and determine the spatial location of looming objects. In the crab Neohelice granulata, the Monostratified Lobula Giant type1 (MLG1) neurons have been found to manifest looming sensitivity with finely tuned capabilities of encoding spatial location information. MLG1s neuronal ensemble can not only perceive the location of a looming stimulus, but are also thought to be able to influence the direction of movement continuously, for example, escaping from a threatening, looming target in relation to its position. Such specific characteristics make the MLG1s unique compared to normal looming detection neurons in invertebrates which can not localize spatial looming. Modeling the MLG1s ensemble is not only critical for elucidating the mechanisms underlying the functionality of such neural circuits, but also important for developing new autonomous, efficient, directionally reactive collision avoidance systems for robots and vehicles. However, little computational modeling has been done for implementing looming spatial localization analogous to the specific functionality of MLG1s ensemble. To bridge this gap, we propose a model of MLG1s and their pre-synaptic visual neural network to detect the spatial location of looming objects. The model consists of 16 homogeneous sectors arranged in a circular field inspired by the natural arrangement of 16 MLG1s' receptive fields to encode and convey spatial information concerning looming objects with dynamic expanding edges in different locations of the visual field. Responses of the proposed model to systematic real-world visual stimuli match many of the biological characteristics of MLG1 neurons. The systematic experiments demonstrate that our proposed MLG1s model works effectively and robustly to perceive and localize looming information, which could be a promising candidate for intelligent machines interacting within dynamic environments free of collision. This study also sheds light upon a new type of neuromorphic visual sensor strategy that can extract looming objects with locational information in a quick and reliable manner.} }
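As an illustrative aside (not code from the paper): the model's tiling of the visual field into 16 equal circular sectors, one per MLG1 receptive field, can be sketched in a few lines of Python. The function name and angle convention here are hypothetical.

```python
import math

NUM_SECTORS = 16  # one sector per MLG1 receptive field in the model

def sector_of(azimuth_deg: float) -> int:
    """Map an azimuth angle (degrees) to one of 16 equal 22.5-degree
    wedges, mimicking how the model partitions the visual field."""
    return int(azimuth_deg % 360 // (360 / NUM_SECTORS))

# A looming edge at 95 degrees falls in sector 4 (the fifth wedge).
print(sector_of(95.0))
```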
- A. Mazzeo, J. Aguzzi, M. Calisti, S. Canese, F. Vecchi, S. Stefanni, and M. Controzzi, “Marine robotics for deep-sea specimen collection: a systematic review of underwater grippers,” Sensors, vol. 22, iss. 2, p. 648, 2022. doi:10.3390/s22020648
[BibTeX] [Abstract] [Download PDF]
The collection of delicate deep-sea specimens of biological interest with remotely operated vehicle (ROV) industrial grippers and tools is a long and expensive procedure. Industrial grippers were originally designed for heavy manipulation tasks, while sampling specimens requires dexterity and precision. We describe the grippers and tools commonly used in underwater sampling for scientific purposes, systematically review the state of the art of research in underwater gripping technologies, and identify design trends. We discuss the possibility of executing typical manipulations of sampling procedures with commonly used grippers and research prototypes. Our results indicate that commonly used grippers ensure that the basic actions either of gripping or caging are possible, and their functionality is extended by holding proper tools. Moreover, the approach of the research status seems to have changed its focus in recent years: from the demonstration of the validity of a specific technology (actuation, transmission, sensing) for marine applications, to the solution of specific needs of underwater manipulation. Finally, we summarize the environmental and operational requirements that should be considered in the design of an underwater gripper.
@article{lincoln52101, volume = {22}, number = {2}, month = {January}, author = {Angelo Mazzeo and Jacopo Aguzzi and Marcello Calisti and Simonpietro Canese and Fabrizio Vecchi and Sergio Stefanni and Marco Controzzi}, title = {Marine Robotics for Deep-Sea Specimen Collection: A Systematic Review of Underwater Grippers}, publisher = {MDPI}, year = {2022}, journal = {Sensors}, doi = {10.3390/s22020648}, pages = {648}, url = {https://eprints.lincoln.ac.uk/id/eprint/52101/}, abstract = {The collection of delicate deep-sea specimens of biological interest with remotely operated vehicle (ROV) industrial grippers and tools is a long and expensive procedure. Industrial grippers were originally designed for heavy manipulation tasks, while sampling specimens requires dexterity and precision. We describe the grippers and tools commonly used in underwater sampling for scientific purposes, systematically review the state of the art of research in underwater gripping technologies, and identify design trends. We discuss the possibility of executing typical manipulations of sampling procedures with commonly used grippers and research prototypes. Our results indicate that commonly used grippers ensure that the basic actions either of gripping or caging are possible, and their functionality is extended by holding proper tools. Moreover, the approach of the research status seems to have changed its focus in recent years: from the demonstration of the validity of a specific technology (actuation, transmission, sensing) for marine applications, to the solution of specific needs of underwater manipulation. Finally, we summarize the environmental and operational requirements that should be considered in the design of an underwater gripper.} }
- L. Manning, S. Brewer, P. Craigon, P. J. Frey, A. Gutierrez, N. Jacobs, S. Kanza, S. Munday, J. Sacks, and S. Pearson, “Artificial intelligence and ethics within the food sector: developing a common language for technology adoption across the supply chain,” Trends in food science and technology, 2022.
[BibTeX] [Abstract] [Download PDF]
Background: The use of artificial intelligence (AI) is growing in food supply chains. The ethical language associated with food supply and technology is contextualised and framed by the meaning given to it by stakeholders. Failure to differentiate between these nuanced meanings can create a barrier to technology adoption and reduce the benefit derived. Scope and approach: The aim of this review paper is to consider the embedded ethical language used by stakeholders who collaborate in the adoption of AI in food supply chains. Ethical perspectives frame this literature review and provide structure to consider how to shape a common discourse to build trust in, and frame more considered utilisation of, AI in food supply chains to the benefit of users, and wider society. Key findings and conclusions: Whilst the nature of data within the food system is much broader than the personal data covered by the European Union General Data Protection Regulation (GDPR), the ethical issues for computational and AI systems are similar and can be considered in terms of particular aspects: transparency, traceability, explainability, interpretability, accessibility, accountability and responsibility. The outputs of this research assist in giving a more rounded understanding of the language used, exploring the ethical interaction of aspects of AI used in food supply chains and also the management activities and actions that can be adopted to improve the applicability of AI technology, increase engagement and derive greater performance benefits. This work has implications for those developing AI governance protocols for the food supply chain as well as supply chain practitioners.
@article{lincoln49072, title = {Artificial intelligence and ethics within the food sector: developing a common language for technology adoption across the supply chain}, author = {Louise Manning and Steve Brewer and Peter Craigon and P.J Frey and Anabel Gutierrez and Naomi Jacobs and Samantha Kanza and Samuel Munday and Justin Sacks and Simon Pearson}, publisher = {Elsevier}, year = {2022}, journal = {Trends in Food Science and Technology}, url = {https://eprints.lincoln.ac.uk/id/eprint/49072/}, abstract = {Background: The use of artificial intelligence (AI) is growing in food supply chains. The ethical language associated with food supply and technology is contextualised and framed by the meaning given to it by stakeholders. Failure to differentiate between these nuanced meanings can create a barrier to technology adoption and reduce the benefit derived. Scope and approach: The aim of this review paper is to consider the embedded ethical language used by stakeholders who collaborate in the adoption of AI in food supply chains. Ethical perspectives frame this literature review and provide structure to consider how to shape a common discourse to build trust in, and frame more considered utilisation of, AI in food supply chains to the benefit of users, and wider society. Key findings and conclusions: Whilst the nature of data within the food system is much broader than the personal data covered by the European Union General Data Protection Regulation (GDPR), the ethical issues for computational and AI systems are similar and can be considered in terms of particular aspects: transparency, traceability, explainability, interpretability, accessibility, accountability and responsibility. The outputs of this research assist in giving a more rounded understanding of the language used, exploring the ethical interaction of aspects of AI used in food supply chains and also the management activities and actions that can be adopted to improve the applicability of AI technology, increase engagement and derive greater performance benefits. This work has implications for those developing AI governance protocols for the food supply chain as well as supply chain practitioners.} }
- A. Pal, G. Das, M. Hanheide, A. C. Leite, and P. From, “An agricultural event prediction framework towards anticipatory scheduling of robot fleets: general concepts and case studies,” Agronomy, vol. 12, iss. 6, 2022. doi:10.3390/agronomy12061299
[BibTeX] [Abstract] [Download PDF]
Harvesting in soft-fruit farms is labor intensive, time consuming and is severely affected by scarcity of skilled labors. Among several activities during soft-fruit harvesting, human pickers take 20–30% of overall operation time into the logistics activities. Such an unproductive time, for example, can be reduced by optimally deploying a fleet of agricultural robots and schedule them by anticipating the human activity behaviour (state) during harvesting. In this paper, we propose a framework for spatio-temporal prediction of human pickers' activities while they are picking fruits in agriculture fields. Here we exploit temporal patterns of picking operation and 2D discrete points, called topological nodes, as spatial constraints imposed by the agricultural environment. Both information are used in the prediction framework in combination with a variant of the Hidden Markov Model (HMM) algorithm to create two modules. The proposed methodology is validated with two test cases. In Test Case 1, the first module selects an optimal temporal model called as picking_state_progression model that uses temporal features of a picker state (event) to statistically evaluate an adequate number of intra-states also called sub-states. In Test Case 2, the second module uses the outcome from the optimal temporal model in the subsequent spatial model called node_transition model and performs 'spatio-temporal predictions' of the picker's movement while the picker is in a particular state. The Discrete Event Simulation (DES) framework, a proven agricultural multi-robot logistics model, is used to simulate the different picking operation scenarios with and without our proposed prediction framework and the results are then statistically compared to each other. Our prediction framework can reduce the so-called unproductive logistics time in a fully manual harvesting process by about 80 percent in the overall picking operation. This research also indicates that the different rates of picking operations involve different numbers of sub-states, and these sub-states are associated with different trends considered in spatio-temporal predictions.
@article{lincoln49668, volume = {12}, number = {6}, author = {Abhishesh Pal and Gautham Das and Marc Hanheide and Antonio Candea Leite and Pal From}, title = {An Agricultural Event Prediction Framework towards Anticipatory Scheduling of Robot Fleets: General Concepts and Case Studies}, publisher = {MDPI}, journal = {Agronomy}, doi = {10.3390/agronomy12061299}, year = {2022}, url = {https://eprints.lincoln.ac.uk/id/eprint/49668/}, abstract = {Harvesting in soft-fruit farms is labor intensive, time consuming and is severely affected by scarcity of skilled labors. Among several activities during soft-fruit harvesting, human pickers take 20--30\% of overall operation time into the logistics activities. Such an unproductive time, for example, can be reduced by optimally deploying a fleet of agricultural robots and schedule them by anticipating the human activity behaviour (state) during harvesting. In this paper, we propose a framework for spatio-temporal prediction of human pickers' activities while they are picking fruits in agriculture fields. Here we exploit temporal patterns of picking operation and 2D discrete points, called topological nodes, as spatial constraints imposed by the agricultural environment. Both information are used in the prediction framework in combination with a variant of the Hidden Markov Model (HMM) algorithm to create two modules. The proposed methodology is validated with two test cases. In Test Case 1, the first module selects an optimal temporal model called as picking\_state\_progression model that uses temporal features of a picker state (event) to statistically evaluate an adequate number of intra-states also called sub-states. In Test Case 2, the second module uses the outcome from the optimal temporal model in the subsequent spatial model called node\_transition model and performs 'spatio-temporal predictions' of the picker's movement while the picker is in a particular state. The Discrete Event Simulation (DES) framework, a proven agricultural multi-robot logistics model, is used to simulate the different picking operation scenarios with and without our proposed prediction framework and the results are then statistically compared to each other. Our prediction framework can reduce the so-called unproductive logistics time in a fully manual harvesting process by about 80 percent in the overall picking operation. This research also indicates that the different rates of picking operations involve different numbers of sub-states, and these sub-states are associated with different trends considered in spatio-temporal predictions.} }
- B. Mazzolai, A. Mondini, E. D. Dottore, L. Margheri, K. Suzumori, M. Cianchetti, T. Speck, S. Smoukov, I. Burget, T. Keplinger, G. D. F. Siqueira, F. Vanneste, O. Goury, C. Duriez, T. Nanayakkara, B. Vanderborght, J. Brancart, S. Terryn, S. Rich, R. Liu, K. Fukuda, T. Someya, M. Calisti, C. Laschi, W. Sun, G. Wang, L. Wen, R. Baines, P. K. Sree, R. Kramer-Bottiglio, D. Rus, P. Fischer, F. Simmel, and A. Lendlein, “Roadmap on soft robotics: multifunctionality, adaptability and growth without borders,” Multifunctional materials, vol. 5, p. 32001, 2022. doi:10.1088/2399-7532/ac4c95
[BibTeX] [Abstract] [Download PDF]
Soft robotics aims at creating systems with improved performance of movement and adaptability in unknown, challenging, environments and with higher level of safety during interactions with humans. This Roadmap on Soft Robotics covers selected aspects for the design of soft robots significantly linked to the area of multifunctional materials, as these are considered a fundamental component in the design of soft robots for an improvement of their peculiar abilities, such as morphing, adaptivity and growth. The roadmap includes different approaches for components and systems design, bioinspired materials, methodologies for building soft robots, strategies for the implementation and control of their functionalities and behavior, and examples of soft-bodied systems showing abilities across different environments. For each covered topic, the author(s) describe the current status and research directions, current and future challenges, and perspective advances in science and technology to meet the challenges.
@article{lincoln52106, volume = {5}, month = {August}, author = {Barbara Mazzolai and Alessio Mondini and Emanuela Del Dottore and Laura Margheri and Koichi Suzumori and Matteo Cianchetti and Thomas Speck and Stoyan Smoukov and Ingo Burget and Tobias Keplinger and Gilberto De Freitas Siqueira and Felix Vanneste and Olivier Goury and Christian Duriez and Thrishantha Nanayakkara and Bram Vanderborght and Joost Brancart and Seppe Terryn and Steven Rich and Ruiyuan Liu and Kenjiro Fukuda and Takao Someya and Marcello Calisti and Cecilia Laschi and Wenguang Sun and Gang Wang and Li Wen and Robert Baines and Patiballa Kalyan Sree and Rebecca Kramer-Bottiglio and Daniela Rus and Peer Fischer and Friedrich Simmel and Andreas Lendlein}, title = {Roadmap on soft robotics: multifunctionality, adaptability and growth without borders}, publisher = {IOP Publishing}, year = {2022}, journal = {Multifunctional Materials}, doi = {10.1088/2399-7532/ac4c95}, pages = {032001}, url = {https://eprints.lincoln.ac.uk/id/eprint/52106/}, abstract = {Soft robotics aims at creating systems with improved performance of movement and adaptability in unknown, challenging, environments and with higher level of safety during interactions with humans. This Roadmap on Soft Robotics covers selected aspects for the design of soft robots significantly linked to the area of multifunctional materials, as these are considered a fundamental component in the design of soft robots for an improvement of their peculiar abilities, such as morphing, adaptivity and growth. The roadmap includes different approaches for components and systems design, bioinspired materials, methodologies for building soft robots, strategies for the implementation and control of their functionalities and behavior, and examples of soft-bodied systems showing abilities across different environments. For each covered topic, the author(s) describe the current status and research directions, current and future challenges, and perspective advances in science and technology to meet the challenges.} }
- C. Qi, J. Gao, K. Chen, L. Shu, and S. Pearson, “Tea chrysanthemum detection by leveraging generative adversarial networks and edge computing,” Frontiers in plant science, 2022.
[BibTeX] [Abstract] [Download PDF]
A high resolution dataset is one of the prerequisites for tea chrysanthemum detection with deep learning algorithms. This is crucial for further developing a selective chrysanthemum harvesting robot. However, generating high resolution datasets of the tea chrysanthemum with complex unstructured environments is a challenge. In this context, we propose a novel generative adversarial network (TC-GAN) that attempts to deal with this challenge. First, we designed a non-linear mapping network for untangling the features of the underlying code. Then, a customized regularisation method was used to provide fine-grained control over the image details. Finally, a gradient diversion design with multi-scale feature extraction capability was adopted to optimize the training process. The proposed TC-GAN was compared with 12 state-of-the-art generative adversarial networks, showing that an optimal average precision (AP) of 90.09% was achieved with the generated images (512×512) on the developed TC-YOLO object detection model under the NVIDIA Tesla P100 GPU environment. Moreover, the detection model was deployed into the embedded NVIDIA Jetson TX2 platform with 0.1s inference time, and this edge computing device could be further developed into a perception system for selective chrysanthemum picking robots in the future.
@article{lincoln48499, title = {Tea Chrysanthemum Detection by Leveraging Generative Adversarial Networks and Edge Computing}, author = {Chao Qi and Junfeng Gao and Kunjie Chen and Lei Shu and Simon Pearson}, publisher = {Frontiers Media}, year = {2022}, journal = {Frontiers in plant science}, url = {https://eprints.lincoln.ac.uk/id/eprint/48499/}, abstract = {A high resolution dataset is one of the prerequisites for tea chrysanthemum detection with deep learning algorithms. This is crucial for further developing a selective chrysanthemum harvesting robot. However, generating high resolution datasets of the tea chrysanthemum with complex unstructured environments is a challenge. In this context, we propose a novel generative adversarial network (TC-GAN) that attempts to deal with this challenge. First, we designed a non-linear mapping network for untangling the features of the underlying code. Then, a customized regularisation method was used to provide fine-grained control over the image details. Finally, a gradient diversion design with multi-scale feature extraction capability was adopted to optimize the training process. The proposed TC-GAN was compared with 12 state-of-the-art generative adversarial networks, showing that an optimal average precision (AP) of 90.09\% was achieved with the generated images (512*512) on the developed TC-YOLO object detection model under the NVIDIA Tesla P100 GPU environment. Moreover, the detection model was deployed into the embedded NVIDIA Jetson TX2 platform with 0.1s inference time, and this edge computing device could be further developed into a perception system for selective chrysanthemum picking robots in the future.} }
- K. M. F. James, D. J. Sargent, A. Whitehouse, and G. Cielniak, “High-throughput phenotyping for breeding targets – current status and future directions of strawberry trait automation,” Plants, people, planet, vol. 4, iss. 5, p. 432–443, 2022. doi:10.1002/ppp3.10275
[BibTeX] [Abstract] [Download PDF]
Automated image-based phenotyping has become widely accepted in crop phenotyping, particularly in cereal crops, yet few traits used by breeders in the strawberry industry have been automated. Early phenotypic assessment remains largely qualitative in this area since the manual phenotyping process is laborious and domain experts are constrained by time. Precision agriculture, facilitated by robotic technologies, is increasing in the strawberry industry, and the development of quantitative automated phenotyping methods is essential to ensure that breeding programs remain economically competitive. In this review, we investigate the external morphological traits relevant to the breeding of strawberries that have been automated and assess the potential for automation of traits that are still evaluated manually, highlighting challenges and limitations of the approaches used, particularly when applying high-throughput strawberry phenotyping in real-world environmental conditions.
@article{lincoln49681, volume = {4}, number = {5}, month = {August}, author = {Katherine Margaret Frances James and Daniel James Sargent and Adam Whitehouse and Grzegorz Cielniak}, title = {High-throughput phenotyping for breeding targets - Current status and future directions of strawberry trait automation}, publisher = {Wiley}, year = {2022}, journal = {Plants, People, Planet}, doi = {10.1002/ppp3.10275}, pages = {432--443}, url = {https://eprints.lincoln.ac.uk/id/eprint/49681/}, abstract = {Automated image-based phenotyping has become widely accepted in crop phenotyping, particularly in cereal crops, yet few traits used by breeders in the strawberry industry have been automated. Early phenotypic assessment remains largely qualitative in this area since the manual phenotyping process is laborious and domain experts are constrained by time. Precision agriculture, facilitated by robotic technologies, is increasing in the strawberry industry, and the development of quantitative automated phenotyping methods is essential to ensure that breeding programs remain economically competitive. In this review, we investigate the external morphological traits relevant to the breeding of strawberries that have been automated and assess the potential for automation of traits that are still evaluated manually, highlighting challenges and limitations of the approaches used, particularly when applying high-throughput strawberry phenotyping in real-world environmental conditions.} }
- K. Smith and M. Hanheide, “Future leaders in agri-food robotics,” Food science and technology, vol. 36, iss. 3, p. 62–65, 2022. doi:10.1002/fsat.3603_15.x
[BibTeX] [Abstract] [Download PDF]
The AgriFoRwArdS EPSRC Centre for Doctoral Training (CDT) is at the fore of nurturing and developing the next cohort of experts in the agri-food robotics sector. The Centre, established by the University of Lincoln in collaboration with the University of Cambridge and the University of East Anglia and funded by UKRI’s Engineering and Physical Sciences Research Council, is providing fully funded opportunities for 50 students to undertake their PhD studies and become the next leaders in the agri-food robotics community. Through collaboration with industry partners and utilising the expertise of the three partner organisations, the AgriFoRwArdS CDT aims to ensure that its work, and that of its students, helps transform agri-food robotics and the wider food production industry.
@article{lincoln51719, volume = {36}, number = {3}, month = {September}, author = {Kate Smith and Marc Hanheide}, title = {Future leaders in agri-food robotics}, publisher = {Wiley}, year = {2022}, journal = {Food Science and Technology}, doi = {10.1002/fsat.3603\_15.x}, pages = {62--65}, url = {https://eprints.lincoln.ac.uk/id/eprint/51719/}, abstract = {The AgriFoRwArdS EPSRC Centre for Doctoral Training (CDT) is at the fore of nurturing and developing the next cohort of experts in the agri-food robotics sector. The Centre, established by the University of Lincoln in collaboration with the University of Cambridge and the University of East Anglia and funded by UKRI's Engineering and Physical Sciences Research Council, is providing fully funded opportunities for 50 students to undertake their PhD studies and become the next leaders in the agri-food robotics community. Through collaboration with industry partners and utilising the expertise of the three partner organisations, the AgriFoRwArdS CDT aims to ensure that its work, and that of its students, helps transform agri-food robotics and the wider food production industry.} }
- H. Harman and E. Sklar, “Multi-agent task allocation for harvest management,” Frontiers in robotics and AI, 2022. doi:10.3389/frobt.2022.864745
[BibTeX] [Abstract] [Download PDF]
Multi-agent task allocation methods seek to distribute a set of tasks fairly amongst a set of agents. In real-world settings, such as soft fruit farms, human labourers undertake harvesting tasks. The harvesting workforce is typically organised by farm manager(s) who assign workers to the fields that are ready to be harvested and team leaders who manage the workers in the fields. Creating these assignments is a dynamic and complex problem, as the skill of the workforce and the yield (quantity of ripe fruit picked) are variable and not entirely predictable. The work presented here posits that multi-agent task allocation methods can assist farm managers and team leaders to manage the harvesting workforce effectively and efficiently. There are three key challenges faced when adapting multi-agent approaches to this problem: (i) staff time (and thus cost) should be minimised; (ii) tasks must be distributed fairly to keep staff motivated; and (iii) the approach must be able to handle incremental (incomplete) data as the season progresses. An adapted variation of Round Robin (RR) is proposed for the problem of assigning workers to fields, and market-based task allocation mechanisms are applied to the challenge of assigning tasks to workers within the fields. To evaluate the approach introduced here, experiments are performed based on data that was supplied by a large commercial soft fruit farm for the past two harvesting seasons. The results demonstrate that our approach produces appropriate worker-to-field allocations. Moreover, simulated experiments demonstrate that there is a 'sweet spot' with respect to the ratio between two types of in-field workers.
@article{lincoln52212, month = {October}, title = {Multi-agent task allocation for harvest management}, author = {Helen Harman and Elizabeth Sklar}, publisher = {Frontiers}, year = {2022}, doi = {10.3389/frobt.2022.864745}, journal = {Frontiers in Robotics and AI}, url = {https://eprints.lincoln.ac.uk/id/eprint/52212/}, abstract = {Multi-agent task allocation methods seek to distribute a set of tasks fairly amongst a set of agents. In real-world settings, such as soft fruit farms, human labourers undertake harvesting tasks. The harvesting workforce is typically organised by farm manager(s) who assign workers to the fields that are ready to be harvested and team leaders who manage the workers in the fields. Creating these assignments is a dynamic and complex problem, as the skill of the workforce and the yield (quantity of ripe fruit picked) are variable and not entirely predictable. The work presented here posits that multi-agent task allocation methods can assist farm managers and team leaders to manage the harvesting workforce effectively and efficiently. There are three key challenges faced when adapting multi-agent approaches to this problem: (i) staff time (and thus cost) should be minimised; (ii) tasks must be distributed fairly to keep staff motivated; and (iii) the approach must be able to handle incremental (incomplete) data as the season progresses. An adapted variation of Round Robin (RR) is proposed for the problem of assigning workers to fields, and market-based task allocation mechanisms are applied to the challenge of assigning tasks to workers within the fields. To evaluate the approach introduced here, experiments are performed based on data that was supplied by a large commercial soft fruit farm for the past two harvesting seasons. The results demonstrate that our approach produces appropriate worker-to-field allocations. Moreover, simulated experiments demonstrate that there is a 'sweet spot' with respect to the ratio between two types of in-field workers.} }
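As an illustrative aside (not the authors' implementation): the Round Robin idea named in the entry above cycles through the workforce and fills each field's staffing demand in turn. A minimal Python sketch, with hypothetical worker names, field names and demand counts:

```python
from itertools import cycle

def round_robin_allocation(workers, field_demand):
    """Cycle through workers, assigning each in turn to the next field
    that still needs staff, until all demand is met."""
    assignment = {field: [] for field in field_demand}
    remaining = dict(field_demand)  # workers still needed per field
    for worker in cycle(workers):
        # pick the first field that still needs workers
        field = next((f for f, n in remaining.items() if n > 0), None)
        if field is None:
            break  # all demand satisfied
        assignment[field].append(worker)
        remaining[field] -= 1
    return assignment

# Hypothetical example: three pickers, two fields needing 2 and 1 workers.
print(round_robin_allocation(["w1", "w2", "w3"], {"field_A": 2, "field_B": 1}))
```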
- G. Mengaldo, F. Renda, S. Brunton, M. Bacher, M. Calisti, C. Duriez, G. Chirikjian, and C. Laschi, “A concise guide to modelling the physics of embodied intelligence in soft robotics,” Nature reviews physics, iss. 4, p. 595–610, 2022. doi:10.1038/s42254-022-00481-z
[BibTeX] [Abstract] [Download PDF]
Embodied intelligence (intelligence that requires and leverages a physical body) is a well-known paradigm in soft robotics, but its mathematical description and consequent computational modelling remain elusive, with a need for models that can be used for design and control purposes. We argue that filling this gap will enable full uptake of embodied intelligence in soft robots. We provide a concise guide to the main mathematical modelling approaches, and consequent computational modelling strategies, that can be used to describe soft robots and their physical interactions with the surrounding environment, including fluid and solid media. We aim to convey the challenges and opportunities within the context of modelling the physical interactions underpinning embodied intelligence. We emphasize that interdisciplinary work is required, especially in the context of fully coupled robot–environment interaction modelling. Promoting this dialogue across disciplines is a necessary step to further advance the field of soft robotics.
@article{lincoln52104, number = {4}, month = {September}, author = {Gianmarco Mengaldo and Federico Renda and Steven Brunton and Moritz Bacher and Marcello Calisti and Christian Duriez and Gregory Chirikjian and Cecilia Laschi}, title = {A concise guide to modelling the physics of embodied intelligence in soft robotics}, publisher = {Nature Research}, year = {2022}, journal = {Nature Reviews Physics}, doi = {10.1038/s42254-022-00481-z}, pages = {595--610}, url = {https://eprints.lincoln.ac.uk/id/eprint/52104/}, abstract = {Embodied intelligence (intelligence that requires and leverages a physical body) is a well-known paradigm in soft robotics, but its mathematical description and consequent computational modelling remain elusive, with a need for models that can be used for design and control purposes. We argue that filling this gap will enable full uptake of embodied intelligence in soft robots. We provide a concise guide to the main mathematical modelling approaches, and consequent computational modelling strategies, that can be used to describe soft robots and their physical interactions with the surrounding environment, including fluid and solid media. We aim to convey the challenges and opportunities within the context of modelling the physical interactions underpinning embodied intelligence. We emphasize that interdisciplinary work is required, especially in the context of fully coupled robot--environment interaction modelling. Promoting this dialogue across disciplines is a necessary step to further advance the field of soft robotics.} }
- H. R. Karbasian, J. A. Esfahani, A. M. Aliyu, and K. C. Kim, “Numerical analysis of wind turbines blade in deep dynamic stall,” Renewable energy, vol. 197, p. 1094–1105, 2022. doi:10.1016/j.renene.2022.07.115
[BibTeX] [Abstract] [Download PDF]
This study numerically investigates kinematics of dynamic stall, which is a crucial matter in wind turbines. Distinct movements of the blade with the same angle of attack (AOA) profile may provoke the flow field due to their kinematic characteristics. This induction can significantly change aerodynamic loads and dynamic stall process in wind turbines. The simulation involves a 3D NACA 0012 airfoil with two distinct pure-heaving and pure-pitching motions. The flow field over this 3D airfoil was simulated using Delayed Detached Eddy Simulations (DDES). The airfoil begins to oscillate at a Reynolds number of Re = 1.35 × 10⁵. The given attack angle profile remains unchanged for all cases. It is shown that the flow structures differ notably between pure-heaving and pure-pitching motions, such that the pure-pitching motions induce higher drag force on the airfoil than the pure-heaving motion. Remarkably, heaving motion causes excessive turbulence in the boundary layer, and then the coherent structures seem to be more stable. Hence, pure-heaving motion contains more energetic core vortices, yielding higher lift at post-stall. In contrast to conventional studies on the dynamic stall of wind turbines, current results show that airfoils' kinematics significantly affect the load predictions during the dynamic stall phenomenon.
@article{lincoln50417, volume = {197}, month = {September}, author = {Hamid Reza Karbasian and Javad Abolfazli Esfahani and Aliyu Musa Aliyu and Kyung Chun Kim}, title = {Numerical analysis of wind turbines blade in deep dynamic stall}, publisher = {Elsevier}, year = {2022}, journal = {Renewable Energy}, doi = {10.1016/j.renene.2022.07.115}, pages = {1094--1105}, url = {https://eprints.lincoln.ac.uk/id/eprint/50417/}, abstract = {This study numerically investigates kinematics of dynamic stall, which is a crucial matter in wind turbines. Distinct movements of the blade with the same angle of attack (AOA) profile may provoke the flow field due to their kinematic characteristics. This induction can significantly change aerodynamic loads and dynamic stall process in wind turbines. The simulation involves a 3D NACA 0012 airfoil with two distinct pure-heaving and pure-pitching motions. The flow field over this 3D airfoil was simulated using Delayed Detached Eddy Simulations (DDES). The airfoil begins to oscillate at a Reynolds number of Re = 1.35 {$\times$} 10{$^{5}$}. The given attack angle profile remains unchanged for all cases. It is shown that the flow structures differ notably between pure-heaving and pure-pitching motions, such that the pure-pitching motions induce higher drag force on the airfoil than the pure-heaving motion. Remarkably, heaving motion causes excessive turbulence in the boundary layer, and then the coherent structures seem to be more stable. Hence, pure-heaving motion contains more energetic core vortices, yielding higher lift at post-stall. In contrast to conventional studies on the dynamic stall of wind turbines, current results show that airfoils' kinematics significantly affect the load predictions during the dynamic stall phenomenon.} }
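For context on the flow regime quoted above, the chord-based Reynolds number follows the standard textbook definition (general fluid-dynamics background, not a formula taken from the paper):

```latex
% Standard chord-based Reynolds number:
%   rho = fluid density, U = freestream velocity,
%   c   = airfoil chord length, mu = dynamic viscosity.
\[
  \mathrm{Re} = \frac{\rho U c}{\mu}, \qquad
  \mathrm{Re} = 1.35 \times 10^{5} \ \text{for the case simulated above.}
\]
```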
- M. A. A. Mdfaa, G. Kulathunga, and A. Klimchik, “3D-SiamMask: vision-based multi-rotor aerial-vehicle tracking for a moving object,” Remote sensing, vol. 14, iss. 22, p. 5756, 2022. doi:10.3390/rs14225756
[BibTeX] [Abstract] [Download PDF]
This paper aims to develop a multi-rotor-based visual tracker for a specified moving object. Visual object-tracking algorithms for multi-rotors are challenging due to multiple issues such as occlusion, quick camera motion, and out-of-view scenarios. Hence, algorithmic changes are required for dealing with images or video sequences obtained by multi-rotors. Therefore, we propose two approaches: a generic object tracker and a class-specific tracker. Both tracking settings require the object bounding box to be selected in the first frame. As part of the later steps, the object tracker uses the updated template set and the calibrated RGBD sensor data as inputs to track the target object using a Siamese network and a machine-learning model for depth estimation. The class-specific tracker is quite similar to the generic object tracker but has an additional auxiliary object classifier. The experimental study and validation were carried out in a robot simulation environment. The simulation environment was designed to serve multiple case scenarios using Gazebo. According to the experiment results, the class-specific object tracker performed better than the generic object tracker in terms of stability and accuracy. Experiments show that the proposed generic tracker achieves promising results on three challenging datasets. Our tracker runs at approximately 36 fps on GPU. © 2022 by the authors.
@article{lincoln53298, volume = {14}, number = {22}, month = {November}, author = {Mohamad Al Al Mdfaa and Geesara Kulathunga and Alexandr Klimchik}, title = {3D-SiamMask: Vision-Based Multi-Rotor Aerial-Vehicle Tracking for a Moving Object}, publisher = {MDPI}, year = {2022}, journal = {Remote Sensing}, doi = {10.3390/rs14225756}, pages = {5756}, url = {https://eprints.lincoln.ac.uk/id/eprint/53298/}, abstract = {This paper aims to develop a multi-rotor-based visual tracker for a specified moving object. Visual object-tracking algorithms for multi-rotors are challenging due to multiple issues such as occlusion, quick camera motion, and out-of-view scenarios. Hence, algorithmic changes are required for dealing with images or video sequences obtained by multi-rotors. Therefore, we propose two approaches: a generic object tracker and a class-specific tracker. Both tracking settings require the object bounding box to be selected in the first frame. As part of the later steps, the object tracker uses the updated template set and the calibrated RGBD sensor data as inputs to track the target object using a Siamese network and a machine-learning model for depth estimation. The class-specific tracker is quite similar to the generic object tracker but has an additional auxiliary object classifier. The experimental study and validation were carried out in a robot simulation environment. The simulation environment was designed to serve multiple case scenarios using Gazebo. According to the experiment results, the class-specific object tracker performed better than the generic object tracker in terms of stability and accuracy. Experiments show that the proposed generic tracker achieves promising results on three challenging datasets. Our tracker runs at approximately 36 fps on GPU. {\copyright} 2022 by the authors.} }
- F. Camara and C. Fox, “Unfreezing autonomous vehicles with game theory, proxemics, and trust,” Frontiers in computer science, 2022. doi:10.3389/fcomp.2022.969194
[BibTeX] [Abstract] [Download PDF]
Recent years have witnessed the rapid deployment of robotic systems in public places such as roads, pavements, workplaces and care homes. Robot navigation in environments with static objects is largely solved, but navigating around humans in dynamic environments remains an active research question for autonomous vehicles (AVs). To navigate in human social spaces, self-driving cars and other robots must also show social intelligence. This involves predicting and planning around pedestrians, understanding their personal space, and establishing trust with them. Most current AVs, for legal and safety reasons, consider pedestrians to be obstacles, so these AVs always stop for or replan to drive around them. But this highly safe nature may lead pedestrians to take advantage over them and slow their progress, even to a complete halt. We provide a review of our recent research on predicting and controlling human–AV interactions, which combines game theory, proxemics and trust, and unifies these fields via quantitative, probabilistic models and robot controllers, to solve this 'freezing robot' problem.
@article{lincoln52159, month = {October}, title = {Unfreezing autonomous vehicles with game theory, proxemics, and trust}, author = {Fanta Camara and Charles Fox}, publisher = {Frontiers Media}, year = {2022}, doi = {10.3389/fcomp.2022.969194}, journal = {Frontiers in Computer Science}, url = {https://eprints.lincoln.ac.uk/id/eprint/52159/}, abstract = {Recent years have witnessed the rapid deployment of robotic systems in public places such as roads, pavements, workplaces and care homes. Robot navigation in environments with static objects is largely solved, but navigating around humans in dynamic environments remains an active research question for autonomous vehicles (AVs). To navigate in human social spaces, self-driving cars and other robots must also show social intelligence. This involves predicting and planning around pedestrians, understanding their personal space, and establishing trust with them. Most current AVs, for legal and safety reasons, consider pedestrians to be obstacles, so these AVs always stop for or replan to drive around them. But this highly safe nature may lead pedestrians to take advantage over them and slow their progress, even to a complete halt. We provide a review of our recent research on predicting and controlling human--AV interactions, which combines game theory, proxemics and trust, and unifies these fields via quantitative, probabilistic models and robot controllers, to solve this 'freezing robot' problem.} }
- A. L. Zorrilla, I. M. Torres, and H. Cuayahuitl, “Audio embedding-aware dialogue policy learning,” IEEE transactions on audio, speech, and language processing, vol. 31, p. 525–538, 2022. doi:10.1109/TASLP.2022.3225658
[BibTeX] [Abstract] [Download PDF]
Following the success of Natural Language Processing (NLP) transformers pretrained via self-supervised learning, similar models have been proposed recently for speech processing such as Wav2Vec2, HuBERT and UniSpeech-SAT. An interesting yet unexplored area of application of these models is Spoken Dialogue Systems, where the users' audio signals are typically just mapped to word-level features derived from an Automatic Speech Recogniser (ASR), and then processed using NLP techniques to generate system responses. This paper reports a comprehensive comparison of dialogue policies trained using ASR-based transcriptions and extended with the aforementioned audio processing transformers in the DSTC2 task. Whilst our dialogue policies are trained with supervised and policy-based deep reinforcement learning, they are assessed using both automatic task completion metrics and a human evaluation. Our results reveal that using audio embeddings is more beneficial than detrimental in most of our trained dialogue policies, and that the benefits are stronger for supervised learning than reinforcement learning.
@article{lincoln52689, volume = {31}, month = {November}, author = {Asier Lopez Zorrilla and M. Ines Torres and Heriberto Cuayahuitl}, title = {Audio Embedding-Aware Dialogue Policy Learning}, publisher = {IEEE}, year = {2022}, journal = {IEEE Transactions on Audio, Speech, and Language Processing}, doi = {10.1109/TASLP.2022.3225658}, pages = {525--538}, url = {https://eprints.lincoln.ac.uk/id/eprint/52689/}, abstract = {Following the success of Natural Language Processing (NLP) transformers pretrained via self-supervised learning, similar models have been proposed recently for speech processing such as Wav2Vec2, HuBERT and UniSpeech-SAT. An interesting yet unexplored area of application of these models is Spoken Dialogue Systems, where the users' audio signals are typically just mapped to word-level features derived from an Automatic Speech Recogniser (ASR), and then processed using NLP techniques to generate system responses. This paper reports a comprehensive comparison of dialogue policies trained using ASR-based transcriptions and extended with the aforementioned audio processing transformers in the DSTC2 task. Whilst our dialogue policies are trained with supervised and policy-based deep reinforcement learning, they are assessed using both automatic task completion metrics and a human evaluation. Our results reveal that using audio embeddings is more beneficial than detrimental in most of our trained dialogue policies, and that the benefits are stronger for supervised learning than reinforcement learning.} }
- N. A. Khan, M. Mohammadi, M. Ghafoor, and S. A. Tariq, “Convolutional neural networks based time-frequency image enhancement for the analysis of EEG signals,” Multidimensional systems and signal processing, vol. 33, p. 863–877, 2022. doi:10.1007/s11045-022-00822-2
[BibTeX] [Abstract] [Download PDF]
Quadratic time-frequency (TF) methods are commonly used for the analysis, modeling, and classification of time-varying non-stationary electroencephalogram (EEG) signals. Commonly employed TF methods suffer from an inherent tradeoff between cross-term suppression and preservation of auto-terms. In this paper, we propose a new convolutional neural network (CNN) based approach to enhancing TF images. The proposed method trains a CNN using the Wigner-Ville distribution as the input image and the ideal time-frequency distribution with the total concentration of signal energy along the IF curves as the output image. The results show significant improvement compared to the other state-of-the-art TF enhancement methods. The codes for reproducing the results can be accessed on the GitHub via https://github.com/nabeelalikhan1/CNN-based-TF-image-enhancement.
@article{lincoln52277, volume = {33}, month = {September}, author = {Nabeel Ali Khan and Mokhtar Mohammadi and Mubeen Ghafoor and Syed Ali Tariq}, title = {Convolutional Neural Networks Based Time-Frequency Image Enhancement For the Analysis of EEG Signals}, publisher = {Springer}, year = {2022}, journal = {Multidimensional Systems and Signal Processing}, doi = {10.1007/s11045-022-00822-2}, pages = {863--877}, url = {https://eprints.lincoln.ac.uk/id/eprint/52277/}, abstract = {Quadratic time-frequency (TF) methods are commonly used for the analysis, modeling, and classification of time-varying non-stationary electroencephalogram (EEG) signals. Commonly employed TF methods suffer from an inherent tradeoff between cross-term suppression and preservation of auto-terms. In this paper, we propose a new convolutional neural network (CNN) based approach to enhancing TF images. The proposed method trains a CNN using the Wigner-Ville distribution as the input image and the ideal time-frequency distribution with the total concentration of signal energy along the IF curves as the output image. The results show significant improvement compared to the other state-of-the-art TF enhancement methods. The codes for reproducing the results can be accessed on the GitHub via https://github.com/nabeelalikhan1/CNN-based-TF-image-enhancement.} }
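For context, the Wigner-Ville distribution used as the network input in the entry above is the standard quadratic time-frequency distribution; this is the textbook definition, not notation taken from the paper:

```latex
% Wigner-Ville distribution of a signal x(t); x^{*} denotes the complex
% conjugate. Its cross-terms between signal components are what the
% CNN-based enhancement described above aims to suppress.
\[
  W_x(t, f) = \int_{-\infty}^{\infty}
    x\!\left(t + \tfrac{\tau}{2}\right)
    x^{*}\!\left(t - \tfrac{\tau}{2}\right)
    e^{-j 2\pi f \tau} \, d\tau
\]
```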
- S. A. Tariq, T. Zia, and M. Ghafoor, “Towards counterfactual and contrastive explainability and transparency of DCNN image classifiers,” Knowledge-based systems, vol. 257, iss. 109901, 2022. doi:10.1016/j.knosys.2022.109901
[BibTeX] [Abstract] [Download PDF]
Explainability of deep convolutional neural networks (DCNNs) is an important research topic that tries to uncover the reasons behind a DCNN model's decisions and improve their understanding and reliability in high-risk environments. In this regard, we propose a novel method for generating interpretable counterfactual and contrastive explanations for DCNN models. The proposed method is model intrusive that probes the internal workings of a DCNN instead of altering the input image to generate explanations. Given an input image, we provide contrastive explanations by identifying the most important filters in the DCNN representing features and concepts that separate the model's decision between classifying the image to the original inferred class or some other specified alter class. On the other hand, we provide counterfactual explanations by specifying the minimal changes necessary in such filters so that a contrastive output is obtained. Using these identified filters and concepts, our method can provide contrastive and counterfactual reasons behind a model's decisions and makes the model more transparent. One of the interesting applications of this method is misclassification analysis, where we compare the identified concepts from a particular input image and compare them with class-specific concepts to establish the validity of the model's decisions. The proposed method is compared with state-of-the-art and evaluated on the Caltech-UCSD Birds (CUB) 2011 dataset to show the usefulness of the explanations provided.
@article{lincoln52276, volume = {257}, number = {109901}, month = {December}, author = {Syed Ali Tariq and Tehseen Zia and Mubeen Ghafoor}, title = {Towards counterfactual and contrastive explainability and transparency of DCNN image classifiers}, publisher = {Elsevier}, year = {2022}, journal = {Knowledge-Based Systems}, doi = {10.1016/j.knosys.2022.109901}, url = {https://eprints.lincoln.ac.uk/id/eprint/52276/}, abstract = {Explainability of deep convolutional neural networks (DCNNs) is an important research topic that tries to uncover the reasons behind a DCNN model's decisions and improve their understanding and reliability in high-risk environments. In this regard, we propose a novel method for generating interpretable counterfactual and contrastive explanations for DCNN models. The proposed method is model intrusive that probes the internal workings of a DCNN instead of altering the input image to generate explanations. Given an input image, we provide contrastive explanations by identifying the most important filters in the DCNN representing features and concepts that separate the model's decision between classifying the image to the original inferred class or some other specified alter class. On the other hand, we provide counterfactual explanations by specifying the minimal changes necessary in such filters so that a contrastive output is obtained. Using these identified filters and concepts, our method can provide contrastive and counterfactual reasons behind a model's decisions and makes the model more transparent. One of the interesting applications of this method is misclassification analysis, where we compare the identified concepts from a particular input image and compare them with class-specific concepts to establish the validity of the model's decisions. The proposed method is compared with state-of-the-art and evaluated on the Caltech-UCSD Birds (CUB) 2011 dataset to show the usefulness of the explanations provided.} }
- A. G. Esfahani, G. Das, I. Gould, P. Zarafshan, V. R. Sugathakumary, J. Heselden, A. Badiee, I. Wright, and S. Pearson, “Applications of robotic and solar energy in precision agriculture and smart farming,” in Solar energy advancements in agriculture and food production systems, S. Gorjian and P. E. Campana, Eds., Elsevier, 2022. doi:10.1016/C2020-0-03304-9
[BibTeX] [Abstract] [Download PDF]
Population growth, healthy diet requirements, and changes in food demand towards a more plant-based protein diet increase existing pressures for food production and land-use change. The increasing demand and current agriculture approaches jeopardise the health of soil and biodiversity which will affect the future ecosystem and food production. One of the solutions to the increasing pressure on agriculture is PA which offers to minimize the use of resources, including land, water, energy, herbicides, and pesticides, and maximise the yield. The development of PA requires a multidisciplinary approach including engineering, AI, and robotics. Robots will play a crucial role in delivering PA and will pave the way toward sustainable healthy food production. While PA is the way forward in the agriculture industry the related devices to collect various supporting data and also the agriculture machinery need to be run by clean energy to ensure sustainable growth in the sector. Among renewable energy sources, solar energy and solar PV have shown a great potential to dominate the future of sustainable energy and agriculture developments. For developing PV in rural and off-grid agriculture farms and lands the use of solar-powered devices is unavoidable. Such a transition to photovoltaic agriculture requires significant changes to agricultural practices and the adoption of smart technologies like IoT, robotics, and WSN. Future food production needs to adapt to changing consumer behaviour along with the rapidly deteriorating environmental factors. PA is also a response to future food production challenges where one of its key aims is to improve sustainability to minimize the use of diminishing resources and minimize GHG emissions by use of renewable energy sources. Along with these adaptations, the new technologies should be using green energy sources (i.e., solar energy) for meeting the power requirements for sustainable developments of these smart technologies. Since there is a rapid inflow of robotic technologies into the agriculture sector, increasing power demand is inevitable, especially in remote areas where PV-based systems can play a game-changing role. It is expected for the agriculture sector to witness a technological revolution toward sustainable food production which cannot be achieved without solar PV development and support.
@incollection{lincoln49943, month = {June}, author = {Amir Ghalamzan Esfahani and Gautham Das and Iain Gould and Payam Zarafshan and Vishnu Rajendran Sugathakumary and James Heselden and Amir Badiee and Isobel Wright and Simon Pearson}, booktitle = {Solar Energy Advancements in Agriculture and Food Production Systems}, editor = {Shiva Gorjian and Pietro Elia Campana}, title = {Applications of robotic and solar energy in precision agriculture and smart farming}, publisher = {Elsevier}, doi = {10.1016/C2020-0-03304-9}, year = {2022}, url = {https://eprints.lincoln.ac.uk/id/eprint/49943/}, abstract = {Population growth, healthy diet requirements, and changes in food demand towards a more plant-based protein diet increase existing pressures for food production and land-use change. The increasing demand and current agriculture approaches jeopardise the health of soil and biodiversity which will affect the future ecosystem and food production. One of the solutions to the increasing pressure on agriculture is PA which offers to minimize the use of resources, including land, water, energy, herbicides, and pesticides, and maximise the yield. The development of PA requires a multidisciplinary approach including engineering, AI, and robotics. Robots will play a crucial role in delivering PA and will pave the way toward sustainable healthy food production. While PA is the way forward in the agriculture industry the related devices to collect various supporting data and also the agriculture machinery need to be run by clean energy to ensure sustainable growth in the sector. Among renewable energy sources, solar energy and solar PV have shown a great potential to dominate the future of sustainable energy and agriculture developments. For developing PV in rural and off-grid agriculture farms and lands the use of solar-powered devices is unavoidable. Such a transition to photovoltaic agriculture requires significant changes to agricultural practices and the adoption of smart technologies like IoT, robotics, and WSN. Future food production needs to adapt to changing consumer behaviour along with the rapidly deteriorating environmental factors. PA is also a response to future food production challenges where one of its key aims is to improve sustainability to minimize the use of diminishing resources and minimize GHG emissions by use of renewable energy sources. Along with these adaptations, the new technologies should be using green energy sources (i.e., solar energy) for meeting the power requirements for sustainable developments of these smart technologies. Since there is a rapid inflow of robotic technologies into the agriculture sector, increasing power demand is inevitable, especially in remote areas where PV-based systems can play a game-changing role. It is expected for the agriculture sector to witness a technological revolution toward sustainable food production which cannot be achieved without solar PV development and support.} }
- H. Harman and E. I. Sklar, “Challenges for multi-agent based agricultural workforce management,” in The 23rd international workshop on multi-agent-based simulation (mabs), 2022.
[BibTeX] [Abstract] [Download PDF]
Multi-agent task allocation methods seek to distribute a set of tasks fairly amongst a set of agents. In real-world settings, such as soft fruit farms, human labourers undertake harvesting tasks, assigned by farm managers. The work here explores the application of artificial intelligence planning methodologies to optimise the existing workforce and applies multi-agent based simulation to evaluate the efficacy of the AI strategies. Key challenges threatening the acceptance of such an approach are highlighted and solutions are evaluated experimentally.
@inproceedings{lincoln49036, booktitle = {The 23rd International Workshop on Multi-Agent-Based Simulation (MABS))}, title = {Challenges for Multi-Agent Based Agricultural Workforce Management}, author = {Helen Harman and Elizabeth I. Sklar}, publisher = {Springer}, year = {2022}, keywords = {ARRAY(0x559d325ee340)}, url = {https://eprints.lincoln.ac.uk/id/eprint/49036/}, abstract = {Multi-agent task allocation methods seek to distribute a set of tasks fairly amongst a set of agents. In real-world settings, such as soft fruit farms, human labourers undertake harvesting tasks, assigned by farm managers. The work here explores the application of artificial intelligence planning methodologies to optimise the existing workforce and applies multi-agent based simulation to evaluate the efficacy of the AI strategies. Key challenges threatening the acceptance of such an approach are highlighted and solutions are evaluated experimentally.} }
- A. Owen, H. Harman, and E. I. Sklar, “Towards the application of multi-agent task allocation to hygiene tasks in the food production industry,” in 20th international conference on practical applications of agents and multi-agent systems, paams 2022, 2022.
[BibTeX] [Abstract] [Download PDF]
The food production industry faces the complex challenge of scheduling both production and hygiene tasks. These tasks are typically scheduled manually. However, due to the increasing costs of raw materials and the regulations factories must adhere to, inefficiencies can be costly. This paper presents the initial findings of a survey, conducted to learn more about the hygiene tasks within the industry and to inform research on how multi-agent task allocation (MATA) methodologies could automate and improve the scheduling of hygiene tasks. A simulation of a heterogeneous human workforce within a factory environment is presented. This work experimentally evaluates different strategies for applying market-based mechanisms, in particular Sequential Single Item (SSI) auctions, to the problem of allocating hygiene tasks to a heterogeneous workforce.
@inproceedings{lincoln51673, booktitle = {20th International Conference on Practical Applications of Agents and Multi-Agent Systems, PAAMS 2022}, title = {Towards the application of multi-agent task allocation to hygiene tasks in the food production industry.}, author = {Amie Owen and Helen Harman and Elizabeth I. Sklar}, publisher = {Springer Cham}, year = {2022}, keywords = {ARRAY(0x559d325e8708)}, url = {https://eprints.lincoln.ac.uk/id/eprint/51673/}, abstract = {The food production industry faces the complex challenge of scheduling both production and hygiene tasks. These tasks are typically scheduled manually. However, due to the increasing costs of raw materials and the regulations factories must adhere to, inefficiencies can be costly. This paper presents the initial findings of a survey, conducted to learn more about the hygiene tasks within the industry and to inform research on how multi-agent task allocation (MATA) methodologies could automate and improve the scheduling of hygiene tasks. A simulation of a heterogeneous human workforce within a factory environment is presented. This work evaluates experimentally different strategies for applying market-based mechanisms, in particular Sequential Single Item (SSI) auctions, to the problem of allocation hygiene tasks to a heterogeneous workforce.} }
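To make the market-based mechanism named above concrete, here is a minimal sketch of a Sequential Single Item (SSI) auction in Python, assuming a simple marginal-cost bid (the worker's schedule length after adding the task); the Worker class, skill sets and durations are illustrative stand-ins, not taken from the paper.

```python
# Minimal SSI auction sketch: tasks are auctioned one at a time and each
# eligible worker bids the marginal cost of adding the task to their schedule.
from dataclasses import dataclass, field

@dataclass
class Worker:
    name: str
    skills: set                       # task types this operative is trained for
    schedule: list = field(default_factory=list)

    def bid(self, task):
        """Marginal-cost bid: total schedule length if the task is added."""
        if task["type"] not in self.skills:
            return None               # untrained workers abstain
        return sum(t["duration"] for t in self.schedule) + task["duration"]

def ssi_allocate(tasks, workers):
    unallocated = list(tasks)
    while unallocated:
        # Collect (bid, worker, task) triples for every remaining task.
        bids = [(w.bid(t), w, t) for t in unallocated for w in workers
                if w.bid(t) is not None]
        if not bids:
            break                     # no eligible worker for remaining tasks
        cost, winner, task = min(bids, key=lambda b: b[0])
        winner.schedule.append(task)
        unallocated.remove(task)
    return workers

workers = [Worker("A", {"oven", "floor"}), Worker("B", {"oven", "floor"})]
tasks = [{"type": "oven", "duration": 45}, {"type": "floor", "duration": 20}]
for w in ssi_allocate(tasks, workers):
    print(w.name, [t["type"] for t in w.schedule])
```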
- S. Parsa, H. A. Maior, A. R. E. Thumwood, M. L. Wilson, M. Hanheide, and A. G. Esfahani, “The impact of motion scaling and haptic guidance on operators’ workload and performance in teleoperation,” in Chi conference on human factors in computing systems extended abstracts, 2022, p. 1–7. doi:10.1145/3491101.3519814
[BibTeX] [Abstract] [Download PDF]
The use of human-operator-managed robotics, especially for safety-critical work, involves a shift from physically demanding to mentally challenging work, and new techniques for Human-Robot Interaction are being developed to make teleoperation easier and more accurate. This study evaluates the impact of combining two teleoperation support features: (i) scaling the velocity mapping of leader-follower arms (motion scaling), and (ii) haptic-feedback guided shared control (haptic guidance). We used purposely difficult peg-in-the-hole tasks requiring high-precision insertion and manipulation, and obstacle avoidance, and evaluated the impact of using individual and combined support features on a) task performance and b) operator workload. As expected, long-distance tasks led to higher mental workload and lower performance than short-distance tasks. Our results showed that motion scaling and haptic guidance impact workload and improve performance during more difficult tasks, and we discuss this in contrast to participants’ preferences for using different teleoperation features.
@inproceedings{lincoln50609, month = {April}, author = {Soran Parsa and Horia A. Maior and Alex Reeve Elliott Thumwood and Max L Wilson and Marc Hanheide and Amir Ghalamzan Esfahani}, booktitle = {CHI Conference on Human Factors in Computing Systems Extended Abstracts}, title = {The Impact of Motion Scaling and Haptic Guidance on Operators? Workload and Performance in Teleoperation}, publisher = {ACM}, doi = {10.1145/3491101.3519814}, pages = {1--7}, year = {2022}, keywords = {ARRAY(0x559d325ee100)}, url = {https://eprints.lincoln.ac.uk/id/eprint/50609/}, abstract = {The use of human operator managed robotics, especially for safety critical work, includes a shift from physically demanding to mentally challenging work, and new techniques for Human-Robot Interaction are being developed to make teleoperation easier and more accurate. This study evaluates the impact of combining two teleoperation support features (i) scaling the velocity mapping of leader-follower arms (motion scaling), and (ii) haptic-feedback guided shared control (haptic guidance). We used purposely difficult peg-in-the-hole tasks requiring high precision insertion and manipulation, and obstacle avoidance, and evaluated the impact of using individual and combined support features on a) task performance and b) operator workload. As expected, long distance tasks led to higher mental workload and lower performance than short distance tasks. Our results showed that motion scaling and haptic guidance impact workload and improve performance during more difficult tasks, and we discussed this in contrast to participants preference for using different teleoperation features.} }
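As a rough illustration of the first support feature above, motion scaling maps leader-arm displacements to follower-arm displacements through a gain below one; the gain value and the plain Cartesian treatment here are assumptions for illustration, not the study's implementation.

```python
import numpy as np

def scale_leader_motion(leader_prev, leader_now, follower_now, gain=0.3):
    """Map a leader-arm displacement to a scaled follower displacement.

    gain < 1 trades speed for precision: a 10 cm leader motion becomes
    a 3 cm follower motion with gain=0.3 (value chosen for illustration).
    """
    delta = np.asarray(leader_now) - np.asarray(leader_prev)
    return np.asarray(follower_now) + gain * delta

# One control tick: leader moved 10 cm along x, follower moves 3 cm.
print(scale_leader_motion([0, 0, 0], [0.10, 0, 0], [0.5, 0.2, 0.3]))
```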
- A. Mohtasib, G. Neumann, and H. Cuayahuitl, “Robot policy learning from demonstration using advantage weighting and early termination,” in Ieee/rsj international conference on intelligent robots and systems (iros), 2022, p. 7414–7420. doi:10.1109/IROS47612.2022.9981056
[BibTeX] [Abstract] [Download PDF]
Learning robotic tasks in the real world is still highly challenging and effective practical solutions remain to be found. Traditional methods used in this area are imitation learning and reinforcement learning, but they both have limitations when applied to real robots. Combining reinforcement learning with pre-collected demonstrations is a promising approach that can help in learning control policies to solve robotic tasks. In this paper, we propose an algorithm that uses novel techniques to leverage offline expert data using offline and online training to obtain faster convergence and improved performance. The proposed algorithm (AWET) weights the critic losses with a novel agent advantage weight to improve over the expert data. In addition, AWET makes use of an automatic early termination technique to stop and discard policy rollouts that are not similar to expert trajectories, to prevent drifting far from the expert data. In an ablation study, AWET showed improved and promising performance when compared to state-of-the-art baselines on four standard robotic tasks.
@inproceedings{lincoln50442, month = {October}, author = {Abdalkarim Mohtasib and Gerhard Neumann and Heriberto Cuayahuitl}, booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)}, title = {Robot Policy Learning from Demonstration Using Advantage Weighting and Early Termination}, publisher = {IEEE}, doi = {10.1109/IROS47612.2022.9981056}, pages = {7414--7420}, year = {2022}, keywords = {ARRAY(0x559d325ed890)}, url = {https://eprints.lincoln.ac.uk/id/eprint/50442/}, abstract = {Learning robotic tasks in the real world is still highly challenging and effective practical solutions remain to be found. Traditional methods used in this area are imitation learning and reinforcement learning, but they both have limitations when applied to real robots. Combining reinforcement learning with pre-collected demonstrations is a promising approach that can help in learning control policies to solve robotic tasks. In this paper, we propose an algorithm that uses novel techniques to leverage offline expert data using offline and online training to obtain faster convergence and improved performance. The proposed algorithm (AWET) weights the critic losses with a novel agent advantage weight to improve over the expert data. In addition, AWET makes use of an automatic early termination technique to stop and discard policy rollouts that are not similar to expert trajectories---to prevent drifting far from the expert data. In an ablation study, AWET showed improved and promising performance when compared to state-of-the-art baselines on four standard robotic tasks.} }
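A hedged sketch of the two mechanisms named in the abstract: an advantage weight applied to the critic's TD error, and a drift-based early-termination test on rollouts. The exponential weight form, the clipping, and the Euclidean drift metric are illustrative guesses, not the published AWET losses.

```python
import numpy as np

def advantage_weighted_critic_loss(q_pred, q_target, advantage, beta=1.0):
    """Per-transition squared TD error, up-weighted where the estimated
    advantage is high (an assumed exponential weighting form)."""
    w = np.exp(np.clip(advantage / beta, -5.0, 5.0))   # stabilised weights
    return float(np.mean(w * (q_pred - q_target) ** 2))

def early_terminate(rollout_states, expert_states, threshold=2.0):
    """Stop a policy rollout once it drifts too far from a matched expert
    trajectory (Euclidean distance per step, an assumed similarity metric)."""
    dists = np.linalg.norm(np.asarray(rollout_states)
                           - np.asarray(expert_states), axis=-1)
    return bool(np.any(dists > threshold))

print(advantage_weighted_critic_loss(np.array([1.0]), np.array([0.5]),
                                     np.array([0.2])))
```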
- H. A. Montes and G. Cielniak, “Multiple broccoli head detection and tracking in 3d point clouds for autonomous harvesting,” in Aaai – ai for agriculture and food systems, 2022.
[BibTeX] [Abstract] [Download PDF]
This paper explores a tracking method for broccoli heads that combines a particle filter and 3D feature detectors to track multiple crops in a sequence of 3D data frames. The tracking accuracy is verified based on a data association method that matches detections with tracks over each frame. The particle filter incorporates a simple motion model to produce the posterior particle distribution, and a similarity model as a probability function to measure the tracking accuracy. The method is tested with datasets of two broccoli varieties collected in planted fields in two different countries. Our evaluation shows the tracking method reduces the number of false negatives produced by the detectors on their own. In addition, the method accurately detects and tracks the 3D locations of broccoli heads relative to the vehicle at high frame rates.
@inproceedings{lincoln48675, booktitle = {AAAI - AI for Agriculture and Food Systems}, month = {February}, title = {Multiple broccoli head detection and tracking in 3D point clouds for autonomous harvesting}, author = {Hector A. Montes and Grzegorz Cielniak}, year = {2022}, keywords = {ARRAY(0x559d325ee160)}, url = {https://eprints.lincoln.ac.uk/id/eprint/48675/}, abstract = {This paper explores a tracking method of broccoli heads that combine a Particle Filter and 3D features detectors to track multiple crops in a sequence of 3D data frames. The tracking accuracy is verified based on a data association method that matches detections with tracks over each frame. The particle filter incorporates a simple motion model to produce the posterior particle distribution, and a similarity model as probability function to measure the tracking accuracy. The method is tested with datasets of two broccoli varieties collected in planted fields from two different countries. Our evaluation shows the tracking method reduces the number of false negatives produced by the detectors on their own. In addition, the method accurately detects and tracks the 3D locations of broccoli heads relative to the vehicle at high frame rates} }
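For readers unfamiliar with the filter, the following is a bare-bones predict/update/resample cycle of the kind described above, with a random-walk motion model and a Gaussian similarity likelihood standing in for the paper's 3D feature models; all noise scales are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_step(particles, weights, detection, motion_std=0.02, meas_std=0.05):
    """One predict/update/resample cycle for tracking a 3D crop position."""
    # Predict: random-walk motion model (the paper's model is also simple).
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Update: Gaussian similarity between particles and the 3D detection
    # (assumes at least one particle gets non-zero likelihood).
    d2 = np.sum((particles - detection) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * d2 / meas_std**2)
    weights = weights / weights.sum()
    # Resample (multinomial) to avoid weight degeneracy.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

particles = rng.normal([0.5, 0.0, 0.2], 0.1, size=(500, 3))
weights = np.full(500, 1.0 / 500)
particles, weights = pf_step(particles, weights, np.array([0.52, 0.01, 0.2]))
print(particles.mean(axis=0))      # tracked broccoli-head estimate
```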
- F. Atas, G. Cielniak, and L. Grimstad, “Elevation state-space: surfel-based navigation in uneven environments for mobile robots,” in 2022 ieee/rsj international conference on intelligent robots and systems (iros), 2022.
[BibTeX] [Abstract] [Download PDF]
This paper introduces a new method for robot motion planning and navigation in uneven environments through a surfel representation of underlying point clouds. The proposed method addresses the shortcomings of state-of-the-art navigation methods by incorporating both kinematic and physical constraints of a robot with standard motion planning algorithms (e.g., those from the Open Motion Planning Library), thus enabling efficient sampling-based planners for challenging uneven terrain navigation on raw point cloud maps. Unlike techniques based on Digital Elevation Maps (DEMs), our novel surfel-based state-space formulation and implementation are based on raw point cloud maps, allowing for the modeling of overlapping surfaces such as bridges, piers, and tunnels. Experimental results demonstrate the robustness of the proposed method for robot navigation in real and simulated unstructured environments. The proposed approach also optimizes planners’ performances by boosting their success rates up to 5x for challenging unstructured terrain planning and navigation, thanks to our surfel-based approach’s robot constraint-aware sampling strategy. Finally, we provide an open-source implementation of the proposed method to benefit the robotics community.
@inproceedings{lincoln52845, booktitle = {2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)}, month = {October}, title = {Elevation State-Space: Surfel-Based Navigation in Uneven Environments for Mobile Robots}, author = {Fetullah Atas and Grzegorz Cielniak and Grimstad Lars}, publisher = {IEEE}, year = {2022}, keywords = {ARRAY(0x559d325ed860)}, url = {https://eprints.lincoln.ac.uk/id/eprint/52845/}, abstract = {This paper introduces a new method for robot motion planning and navigation in uneven environments through a surfel representation of underlying point clouds. The proposed method addresses the shortcomings of state-of-the-art navigation methods by incorporating both kinematic and physical constraints of a robot with standard motion planning algorithms (e.g., those from the Open Motion Planning Library), thus enabling efficient sampling-based planners for challenging uneven terrain navigation on raw point cloud maps. Unlike techniques based on Digital Elevation Maps (DEMs), our novel surfel-based state-space formulation and implementation are based on raw point cloud maps, allowing for the modeling of overlapping surfaces such as bridges, piers, and tunnels. Experimental results demonstrate the robustness of the proposed method for robot navigation in real and simulated unstructured environments. The proposed approach also optimizes planners' performances by boosting their success rates up to 5x for challenging unstructured terrain planning and navigation, thanks to our surfel-based approach's robot constraint-aware sampling strategy. Finally, we provide an open-source implementation of the proposed method to benefit the robotics community.} }
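A minimal sketch of the constraint-aware sampling idea: a sampling-based planner can reject candidate states whose supporting surfel is too steep for the robot. The tilt limit and the plain normal check below are invented stand-ins for the paper's kinematic and physical constraint models.

```python
import numpy as np

MAX_TILT_DEG = 25.0          # assumed kinematic limit of the platform

def surfel_state_valid(surfel_normal, up=(0.0, 0.0, 1.0)):
    """Reject sampled states whose supporting surfel is too steep.

    A sampling-based planner (e.g., one from OMPL) would run a validity
    check like this for every candidate state drawn on the surfel map.
    """
    n = np.asarray(surfel_normal, dtype=float)
    n = n / np.linalg.norm(n)
    tilt = np.degrees(np.arccos(np.clip(np.dot(n, up), -1.0, 1.0)))
    return tilt <= MAX_TILT_DEG

print(surfel_state_valid([0.1, 0.0, 1.0]))   # gentle slope -> True
print(surfel_state_valid([1.0, 0.0, 0.5]))   # steep bank   -> False
```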
- L. Castri, S. Mghames, M. Hanheide, and N. Bellotto, “Causal discovery of dynamic models for predicting human spatial interactions,” in International conference on social robotics (icsr), 2022.
[BibTeX] [Abstract] [Download PDF]
Exploiting robots for activities in human-shared environments, whether warehouses, shopping centres or hospitals, calls for such robots to understand the underlying physical interactions between nearby agents and objects. In particular, modelling cause-and-effect relations between the latter can help to predict unobserved human behaviours and anticipate the outcome of specific robot interventions. In this paper, we propose an application of causal discovery methods to model human-robot spatial interactions, trying to understand human behaviours from real-world sensor data in two possible scenarios: humans interacting with the environment, and humans interacting with obstacles. New methods and practical solutions are discussed to exploit, for the first time, a state-of-the-art causal discovery algorithm in some challenging human environments, with potential application in many service robotics scenarios. To demonstrate the utility of the causal models obtained from real-world datasets, we present a comparison between causal and non-causal prediction approaches. Our results show that the causal model correctly captures the underlying interactions of the considered scenarios and improves its prediction accuracy.
@inproceedings{lincoln52266, booktitle = {International Conference on Social Robotics (ICSR)}, month = {October}, title = {Causal Discovery of Dynamic Models for Predicting Human Spatial Interactions}, author = {Luca Castri and Sariah Mghames and Marc Hanheide and Nicola Bellotto}, publisher = {Springer}, year = {2022}, keywords = {ARRAY(0x559d325ed7d0)}, url = {https://eprints.lincoln.ac.uk/id/eprint/52266/}, abstract = {Exploiting robots for activities in human-shared environments, whether warehouses, shopping centres or hospitals, calls for such robots to understand the underlying physical interactions between nearby agents and objects. In particular, modelling cause-and-effect relations between the latter can help to predict unobserved human behaviours and anticipate the outcome of specific robot interventions. In this paper, we propose an application of causal discovery methods to model human-robot spatial interactions, trying to understand human behaviours from real-world sensor data in two possible scenarios: humans interacting with the environment, and humans interacting with obstacles. New methods and practical solutions are discussed to exploit, for the first time, a state-of-the-art causal discovery algorithm in some challenging human environments, with potential application in many service robotics scenarios. To demonstrate the utility of the causal models obtained from real-world datasets, we present a comparison between causal and non-causal prediction approaches. Our results show that the causal model correctly captures the underlying interactions of the considered scenarios and improves its prediction accuracy.} }
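To show the time-series framing such methods operate on, here is a toy lagged-dependence check over two interaction signals; a real causal discovery algorithm replaces this with conditional independence tests across all variables and lags, so treat this purely as a data-layout illustration with synthetic signals.

```python
import numpy as np

def lagged_dependence(x, y, max_lag=3):
    """Correlation of y(t) with x(t - lag) for each candidate lag.

    Causal discovery builds on (conditional) tests of exactly this kind
    of lagged relation between observed interaction variables.
    """
    return {lag: float(np.corrcoef(x[:-lag], y[lag:])[0, 1])
            for lag in range(1, max_lag + 1)}

t = np.linspace(0, 20, 400)
human_speed = np.sin(t) + 0.1 * np.random.default_rng(1).normal(size=t.size)
robot_dist = np.roll(human_speed, 2)     # robot reacts two steps later (toy)
print(lagged_dependence(human_speed, robot_dist))   # lag 2 dominates
```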
- R. Polvara, S. M. Mellado, I. Hroob, G. Cielniak, and M. Hanheide, “Collection and evaluation of a long-term 4d agri-robotic dataset,” in Perception and navigation for autonomous robotics in unstructured and dynamic environments, 2022. doi:10.5281/zenodo.7135175
[BibTeX] [Abstract] [Download PDF]
Long-term autonomy is one of the most sought-after capabilities in a robot. The possibility of performing the same task over and over on a long temporal horizon, offering a high standard of reproducibility and robustness, is appealing. Long-term autonomy can play a crucial role in the adoption of robotics systems for precision agriculture, for example in assisting humans in monitoring and harvesting crops in a large orchard. With this scope in mind, we report an ongoing effort in the long-term deployment of an autonomous mobile robot in a vineyard for data collection across multiple months. The main aim is to collect data from the same area at different points in time so as to be able to analyse the impact of environmental changes on the mapping and localisation tasks. In this work, we present a map-based localisation study using 4 data sessions. We identify expected failures when the pre-built map visually differs from the environment’s current appearance, and we anticipate LTS-Net, a solution aimed at extracting stable temporal features for improving long-term 4D localisation results.
@inproceedings{lincoln52350, booktitle = {Perception and Navigation for Autonomous Robotics in Unstructured and Dynamic Environments}, month = {October}, title = {Collection and Evaluation of a Long-Term 4D Agri-Robotic Dataset}, author = {Riccardo Polvara and Sergio Molina Mellado and Ibrahim Hroob and Grzegorz Cielniak and Marc Hanheide}, year = {2022}, doi = {10.5281/zenodo.7135175}, keywords = {ARRAY(0x559d325ed800)}, url = {https://eprints.lincoln.ac.uk/id/eprint/52350/}, abstract = {Long-term autonomy is one of the most demanded capabilities looked into a robot. The possibility to perform the same task over and over on a long temporal horizon, offering a high standard of reproducibility and robustness, is appealing. Long-term autonomy can play a crucial role in the adoption of robotics systems for precision agriculture, for example in assisting humans in monitoring and harvesting crops in a large orchard. With this scope in mind, we report an ongoing effort in the long-term deployment of an autonomous mobile robot in a vineyard for data collection across multiple months. The main aim is to collect data from the same area at different points in time so to be able to analyse the impact of the environmental changes in the mapping and localisation tasks. In this work, we present a map-based localisation study taking 4 data sessions. We identify expected failures when the pre-built map visually differs from the environment's current appearance and we anticipate LTS-Net, a solution pointed at extracting stable temporal features for improving long-term 4D localisation results.} }
- S. Ghidoni, M. Terreran, D. Evangelista, E. Menegatti, C. Eitzinger, E. Villagrossi, N. Pedrocchi, N. Castaman, M. Malecha, S. Mghames, L. Castri, M. Hanheide, and N. Bellotto, “From human perception and action recognition to causal understanding of human-robot interaction in industrial environments,” in Ital-ia 2022, 2022.
[BibTeX] [Abstract] [Download PDF]
Human-robot collaboration is migrating from lightweight robots in laboratory environments to industrial applications, where heavy tasks and powerful robots are more common. In this scenario, a reliable perception of the humans involved in the process and related intentions and behaviors is fundamental. This paper presents two projects investigating the use of robots in relevant industrial scenarios, providing an overview of how industrial human-robot collaborative tasks can be successfully addressed.
@inproceedings{lincoln48515, booktitle = {Ital-IA 2022}, title = {From Human Perception and Action Recognition to Causal Understanding of Human-Robot Interaction in Industrial Environments}, author = {Stefano Ghidoni and Matteo Terreran and Daniele Evangelista and Emanuele Menegatti and Christian Eitzinger and Enrico Villagrossi and Nicola Pedrocchi and Nicola Castaman and Marcin Malecha and Sariah Mghames and Luca Castri and Marc Hanheide and Nicola Bellotto}, year = {2022}, keywords = {ARRAY(0x559d325ee310)}, url = {https://eprints.lincoln.ac.uk/id/eprint/48515/}, abstract = {Human-robot collaboration is migrating from lightweight robots in laboratory environments to industrial applications, where heavy tasks and powerful robots are more common. In this scenario, a reliable perception of the humans involved in the process and related intentions and behaviors is fundamental. This paper presents two projects investigating the use of robots in relevant industrial scenarios, providing an overview of how industrial human-robot collaborative tasks can be successfully addressed.} }
- A. Owen, H. Harman, and E. Sklar, “Towards autonomous task allocation using a robot team in a food factory,” in Ukras2022 conference “robotics for unconstrained environments”, 2022.
[BibTeX] [Abstract] [Download PDF]
Scheduling of hygiene tasks in a food production environment is a complex challenge which is typically performed manually. Many factors must be considered during scheduling; these include what training a hygiene operative (i.e. cleaning staff member) has undergone, the availability of hygiene operatives (holiday commitments, sick leave etc.) and the production constraints (how long does the oven take to cool, when does production begin again etc.). This paper seeks to apply multi-agent task allocation (MATA) to automate and optimise the process of allocating tasks to hygiene operatives. The intention is that this optimisation module will form one part of a larger system that we propose to develop. A simulation has been created to function as a digital twin of a factory environment, allowing us to evaluate experimentally a variety of task allocation methodologies. Trialled methods include Round Robin (RR), Sequential Single Item (SSI) auctions, Lowest Bid and Least Contested Bid.
@inproceedings{lincoln51674, booktitle = {UKRAS2022 Conference ?Robotics for Unconstrained Environments?}, title = {Towards Autonomous Task Allocation Using a Robot Team in a Food Factory}, author = {Amie Owen and Helen Harman and Elizabeth Sklar}, publisher = {UK-RAS}, year = {2022}, keywords = {ARRAY(0x559d325e86d8)}, url = {https://eprints.lincoln.ac.uk/id/eprint/51674/}, abstract = {Scheduling of hygiene tasks in a food production environment is a complex challenge which is typically performed manually. Many factors must be considered during scheduling; this includes what training a hygiene operative (i.e. cleaning staff member) has undergone, the availability of hygiene operatives (holiday commitments, sick leave etc.) and the production constraints (how long does the oven take to cool, when does production begin again etc.). This paper seeks to apply multiagent task allocation (MATA) to automate and optimise the process of allocating tasks to hygiene operatives. The intention is that this optimization module will form one part of a proposed larger system. that we propose to develop. A simulation has been created to function as a digital twin of a factory environment, allowing us to evaluate experimentally a variety of task allocation methodologies. Trialled methods include Round Robin (RR), Sequential Single Item (SSI) auctions, Lowest Bid and Least Contested Bid.} }
- T. Choi and G. Cielniak, “Channel randomisation with domain control for effective representation learning of visual anomalies in strawberries,” in Ai for agriculture and food systems, 2022.
[BibTeX] [Abstract] [Download PDF]
Channel Randomisation (CH-Rand) has emerged as a key data augmentation technique for anomaly detection on fruit images because neural networks can learn useful representations of colour irregularity whilst classifying the samples from the augmented “domain”. Our previous study revealed its success, with significantly more reliable performance than other state-of-the-art methods, which are largely specialised for identifying structural implausibility in non-agricultural objects (e.g., screws). In this paper, we further enhance CH-Rand with additional guidance to generate more informative data for representation learning of anomalies in fruits, whilst most of its fundamental design is maintained. To be specific, we first control the “colour space” in which CH-Rand is executed to investigate whether a particular model (e.g., HSV, YCbCr, or L*a*b*) can better help synthesise realistic anomalies than the RGB space suggested in the original design. In addition, we develop a learning “curriculum” in which CH-Rand shifts its augmented domain to gradually increase the difficulty of the examples for neural networks to classify. To the best of our knowledge, we are the first to connect the concept of curriculum to self-supervised representation learning for anomaly detection. Lastly, we perform evaluations on the Riseholme-2021 dataset, which contains over 3.5K real strawberry images at various growth levels along with anomalous examples. Our experimental results show that models trained with the proposed strategies can achieve over 16% higher AUC-PR scores, with more than three times less variability, than the naive CH-Rand whilst using the same deep networks and data.
@inproceedings{lincoln48676, booktitle = {AI for Agriculture and Food Systems}, month = {January}, title = {Channel Randomisation with Domain Control for Effective Representation Learning of Visual Anomalies in Strawberries}, author = {Taeyeong Choi and Grzegorz Cielniak}, year = {2022}, keywords = {ARRAY(0x559d325ee250)}, url = {https://eprints.lincoln.ac.uk/id/eprint/48676/}, abstract = {Channel Randomisation (CH-Rand) has appeared as a key data augmentation technique for anomaly detection on fruit images because neural networks can learn useful representations of colour irregularity whilst classifying the samples from the augmented "domain". Our previous study has revealed its success with significantly more reliable performance than other state-of-the-art methods, largely specialised for identifying structural implausibility on non-agricultural objects (e.g., screws). In this paper, we further enhance CH-Rand with additional guidance to generate more informative data for representation learning of anomalies in fruits as most of its fundamental designs are still maintained. To be specific, we first control the "colour space" on which CH-Rand is executed to investigate whether a particular model{--}e.g., HSV , YCbCr, or L*a*b* {--}can better help synthesise realistic anomalies than the RGB, suggested in the original design. In addition, we develop a learning "curriculum" in which CH-Rand shifts its augmented domain to gradually increase the difficulty of the examples for neural networks to classify. To the best of our best knowledge, we are the first to connect the concept of curriculum to self-supervised representation learning for anomaly detection. Lastly, we perform evaluations with the Riseholme-2021 dataset, which contains {\ensuremath{>}} 3.5K real strawberry images at various growth levels along with anomalous examples. Our experimental results show that the trained models with the proposed strategies can achieve over 16\% higher scores of AUC-PR with more than three times less variability than the naive CH-Rand whilst using the same deep networks and data.} }
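The core augmentation is simple to state in code: rebuild an image from colour channels sampled with replacement so that at least one channel repeats, yielding a synthetic colour anomaly. The sampling rule below is a plausible reading of CH-Rand, and the comment on colour spaces reflects the paper's domain-control idea rather than the authors' exact code.

```python
import numpy as np

rng = np.random.default_rng(0)

def ch_rand(image):
    """Channel Randomisation: rebuild the image from channels sampled
    with replacement, e.g. (R, R, G), requiring a repeated channel so
    the result is a colour-irregular "anomaly".

    image: HxWx3 array. To mimic the paper's colour-space control, one
    would convert to HSV/YCbCr/L*a*b* before this call and back after.
    """
    while True:
        idx = rng.integers(0, 3, size=3)       # channels with replacement
        if len(set(idx.tolist())) < 3:         # require a repeated channel
            return image[..., idx]

img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
aug = ch_rand(img)                             # pseudo-anomalous sample
print(aug.shape)
```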
- Y. Zhang, C. Hu, M. Liu, H. Luan, F. Lei, H. Cuayahuitl, and S. Yue, “Temperature-based collision detection in extreme low light condition with bio-inspired lgmd neural network,” in 2021 2nd international symposium on automation, information and computing (isaic 2021), 2022. doi:10.1088/1742-6596/2224/1/012004
[BibTeX] [Abstract] [Download PDF]
It is an enormous challenge for intelligent vehicles to avoid collision accidents at night because of the extremely poor light conditions. Thermal cameras can capture a temperature map at night, even with no light sources, and are ideal for collision detection in darkness. However, how to extract collision cues efficiently and effectively from the captured temperature map with limited computing resources is still a key issue to be solved. Recently, a bio-inspired neural network, the LGMD, has been proposed for collision detection successfully, but for daytime and visible light. Whether it can be used for temperature-based collision detection or not remains unknown. In this study, we propose an improved LGMD-based visual neural network for temperature-based collision detection in extreme low-light conditions. We show that the insect-inspired visual neural network can pick up the expanding temperature differences of approaching objects as long as the temperature difference against the background can be captured by a thermal sensor. Our results demonstrate that the proposed LGMD neural network can detect collisions swiftly based on the thermal modality in darkness; therefore, it can be a critical collision detection algorithm for autonomous vehicles driving at night to avoid fatal collisions with humans, animals, or other vehicles.
@inproceedings{lincoln49117, booktitle = {2021 2nd International Symposium on Automation, Information and Computing (ISAIC 2021)}, month = {April}, title = {Temperature-based Collision Detection in Extreme Low Light Condition with Bio-inspired LGMD Neural Network}, author = {Yicheng Zhang and Cheng Hu and Mei Liu and Hao Luan and Fang Lei and Heriberto Cuayahuitl and Shigang Yue}, publisher = {IOP Publishing Ltd}, year = {2022}, doi = {10.1088/1742-6596/2224/1/012004}, keywords = {ARRAY(0x559d325ee0a0)}, url = {https://eprints.lincoln.ac.uk/id/eprint/49117/}, abstract = {It is an enormous challenge for intelligent vehicles to avoid collision accidents at night because of the extremely poor light conditions. Thermal cameras can capture temperature map at night, even with no light sources and are ideal for collision detection in darkness. However, how to extract collision cues efficiently and effectively from the captured temperature map with limited computing resources is still a key issue to be solved. Recently, a bio-inspired neural network LGMD has been proposed for collision detection successfully, but for daytime and visible light. Whether it can be used for temperature-based collision detection or not remains unknown. In this study, we proposed an improved LGMD-based visual neural network for temperature-based collision detection at extreme light conditions. We show in this study that the insect inspired visual neural network can pick up the expanding temperature differences of approaching objects as long as the temperature difference against its background can be captured by a thermal sensor. Our results demonstrated that the proposed LGMD neural network can detect collisions swiftly based on the thermal modality in darkness; therefore, it can be a critical collision detection algorithm for autonomous vehicles driving at night to avoid fatal collisions with humans, animals, or other vehicles.} }
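A drastically simplified looming cue in the spirit of the abstract: the LGMD responds to expanding edges, so on thermal input one can watch the area of strong frame-to-frame temperature change grow. The real model's excitation/inhibition layers are omitted here, and the thresholds are invented.

```python
import numpy as np

def looming_cue(prev_frame, frame, thresh=2.0):
    """Crude expansion cue on thermal frames: the pixel area of strong
    frame-to-frame temperature change, whose growth signals approach."""
    diff = np.abs(frame.astype(float) - prev_frame.astype(float))
    return int(np.count_nonzero(diff > thresh))

def collision_alert(cues, growth=1.3):
    """Alert if the active area keeps expanding over recent frames."""
    return all(b > growth * a for a, b in zip(cues, cues[1:]))

# Three frames of a warm object growing in the field of view.
f = [np.zeros((32, 32)) for _ in range(3)]
f[1][12:20, 12:20] = 5.0
f[2][8:24, 8:24] = 5.0
cues = [looming_cue(f[i], f[i + 1]) for i in range(2)]
print(cues, collision_alert(cues))   # expanding area -> alert
```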
- R. D. Silva, G. Cielniak, and J. Gao, “Towards infield navigation: leveraging simulated data for crop row detection,” in Ieee international conference on automation science and engineering (case), 2022.
[BibTeX] [Abstract] [Download PDF]
Agricultural datasets for crop row detection are often bound by their limited number of images. This restricts researchers from developing deep learning based models for precision agricultural tasks involving crop row detection. We suggest the utilization of small real-world datasets along with additional data generated by simulations to yield crop row detection performance similar to that of a model trained with a large real-world dataset. Our method could reach the performance of a deep learning based crop row detection model trained with real-world data by using 60% less labelled real-world data. Our model performed well against field variations such as shadows, sunlight and growth stages. We introduce an automated pipeline to generate labelled images for crop row detection in the simulation domain. An extensive comparison is done to analyse the contribution of simulated data towards reaching robust crop row detection in various real-world field scenarios.
@inproceedings{lincoln49913, booktitle = {IEEE International Conference on Automation Science and Engineering (CASE)}, title = {Towards Infield Navigation: leveraging simulated data for crop row detection}, author = {Rajitha De Silva and Grzegorz Cielniak and Junfeng Gao}, publisher = {IEEE}, year = {2022}, keywords = {ARRAY(0x559d325ee2e0)}, url = {https://eprints.lincoln.ac.uk/id/eprint/49913/}, abstract = {Agricultural datasets for crop row detection are often bound by their limited number of images. This restricts the researchers from developing deep learning based models for precision agricultural tasks involving crop row detection. We suggest the utilization of small real-world datasets alongwith additional data generated by simulations to yield similar crop row detection performance as that of a model trained with a large real world dataset. Our method could reach the performance of a deep learning based crop row detection model trained with real-world data by using 60\% less labelled realworld data. Our model performed well against field variations such as shadows, sunlight and growth stages. We introduce an automated pipeline to generate labelled images for crop row detection in simulation domain. An extensive comparison is done to analyze the contribution of simulated data towards reaching robust crop row detection in various real-world field scenarios.} }
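The training recipe implied above, using a fraction of the labelled real data topped up with simulator-generated images, can be summarised as follows; the fraction and dataset stand-ins are placeholders, not the paper's actual split.

```python
# Sketch of the sim+real data-mixing recipe (ratio and data are placeholders).
def build_training_set(real_images, sim_images, real_fraction=0.4):
    """Keep only a fraction of the labelled real data and top up with
    simulated images, mirroring the '60% less real data' setup above."""
    n_real = int(real_fraction * len(real_images))
    return real_images[:n_real] + sim_images

train = build_training_set(list(range(100)), list(range(1000, 1300)))
print(len(train))   # 40 real + 300 simulated samples
```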
- K. Nazari, W. Mandil, and A. G. Esfahani, “Proactive slip control by learned slip model and trajectory adaptation,” in 6th conference on robot learning, 2022.
[BibTeX] [Abstract] [Download PDF]
This paper presents a novel control approach to dealing with object slip during robotic manipulative movements. Slip is a major cause of failure in many robotic grasping and manipulation tasks. Existing works increase grip force to avoid/control slip. However, this may not be feasible when (i) the robot cannot increase the gripping force (the maximum gripping force is already applied) or (ii) increased force damages the grasped object, such as soft fruit. Moreover, the robot fixes the gripping force when it forms a stable grasp on the surface of an object, and changing the gripping force during real-time manipulation may not be an effective control policy. We propose a novel control approach to slip avoidance including a learned action-conditioned slip predictor and a constrained optimiser avoiding a predicted slip given a desired robot action. We show the effectiveness of the proposed trajectory adaptation method with a receding horizon controller in a series of real-robot test cases. Our experimental results show our proposed data-driven predictive controller can control slip for objects unseen in training.
@inproceedings{lincoln52220, booktitle = {6th Conference on Robot Learning}, month = {November}, title = {Proactive slip control by learned slip model and trajectory adaptation}, author = {Kiyanoush Nazari and Willow Mandil and Amir Ghalamzan Esfahani}, year = {2022}, journal = {Conference of Robot Learning}, keywords = {ARRAY(0x559d325ed7a0)}, url = {https://eprints.lincoln.ac.uk/id/eprint/52220/}, abstract = {This paper presents a novel control approach to dealing with object slip during robotic manipulative movements. Slip is a major cause of failure in many robotic grasping and manipulation tasks. Existing works increase grip force to avoid/control slip. However, this may not be feasible when (i) the robot cannot increase the gripping force? the max gripping force is already applied or (ii) in- creased force damages the grasped object, such as soft fruit. Moreover, the robot fixes the gripping force when it forms a stable grasp on the surface of an object, and changing the gripping force during real-time manipulation may not be an effective control policy. We propose a novel control approach to slip avoidance including a learned action-conditioned slip predictor and a constrained optimiser avoiding a predicted slip given a desired robot action. We show the effectiveness of the proposed trajectory adaptation method with the receding horizon controller with a series of real-robot test cases. Our experimental results show our proposed data-driven predictive controller can control slip for objects unseen in training.} }
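A minimal version of the trajectory adaptation step described above, assuming a learned action-conditioned slip predictor is available as a function; the logistic stand-in predictor, the SLSQP optimiser, and the 0.5 slip threshold are illustrative choices, not the paper's setup.

```python
import numpy as np
from scipy.optimize import minimize

def slip_prob(action):
    """Stand-in for the learned action-conditioned slip predictor:
    here, faster motions are simply more likely to slip."""
    return float(1.0 / (1.0 + np.exp(-5.0 * (np.linalg.norm(action) - 0.8))))

def adapt_action(desired, max_slip=0.5):
    """Find the action closest to the desired one whose predicted slip
    probability stays under the threshold (one receding-horizon step)."""
    res = minimize(
        lambda a: np.sum((a - desired) ** 2),          # stay near desired
        x0=desired,
        constraints=[{"type": "ineq",
                      "fun": lambda a: max_slip - slip_prob(a)}],
        method="SLSQP",
    )
    return res.x

print(adapt_action(np.asarray([1.2, 0.0])))   # slowed-down, slip-safe action
```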
- H. Harman and E. Sklar, “Multi-agent task allocation for fruit picker team formation (extended abstract),” in The 21st international conference on autonomous agents and multiagent systems (aamas 2022), 2022, p. 1618–1620.
[BibTeX] [Abstract] [Download PDF]
Multi-agent task allocation methods seek to distribute a set of tasks fairly amongst a set of agents. In real-world settings, such as fruit farms, human labourers undertake harvesting tasks, organised each day by farm manager(s) who assign workers to the fields that are ready to be harvested. The work presented here considers three challenges identified in the adaptation of a multi-agent task allocation methodology applied to the problem of distributing workers to fields. First, the methodology must be fast to compute so that it can be applied on a daily basis. Second, the incremental acquisition of harvesting data used to make decisions about worker-task assignments means that a data-backed approach must be derived from incomplete information as the growing season unfolds. Third, the allocation must take “fairness” into account and consider worker motivation. Solutions to these challenges are demonstrated, showing statistically significant results based on the operations at a soft fruit farm during their 2020 and 2021 harvesting seasons.
@inproceedings{lincoln49037, booktitle = {The 21st International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2022)}, month = {May}, title = {Multi-agent Task Allocation for Fruit Picker Team Formation (Extended Abstract)}, author = {Helen Harman and Elizabeth Sklar}, publisher = {International Foundation for Autonomous Agents and Multiagent Systems}, year = {2022}, pages = {1618--1620}, keywords = {ARRAY(0x559d325ee010)}, url = {https://eprints.lincoln.ac.uk/id/eprint/49037/}, abstract = {Multi-agent task allocation methods seek to distribute a set of tasks fairly amongst a set of agents. In real-world settings, such as fruit farms, human labourers undertake harvesting tasks, organised each day by farm manager(s) who assign workers to the fields that are ready to be harvested. The work presented here considers three challenges identified in the adaptation of a multi-agent task allocation methodology applied to the problem of distributing workers to fields. First, the methodology must be fast to compute so that it can be applied on a daily basis. Second, the incremental acquisition of harvesting data used to make decisions about worker-task assignments means that a data-backed approach must be derived from incomplete information as the growing season unfolds. Third, the allocation must take ?fairness? into account and consider worker motivation. Solutions to these challenges are demonstrated, showing statistically significant results based on the operations at a soft fruit farm during their 2020 and 2021 harvesting seasons.} }
- Z. Huang, E. Sklar, and S. Parsons, “Design of automatic strawberry harvest robot suitable in complex environments,” in Hri ’20: companion of the 2020 acm/ieee international conference on human-robot interaction, 2022, p. 567–569. doi:10.1145/3371382.3377443
[BibTeX] [Abstract] [Download PDF]
Strawberries are an important cash crop that are grown worldwide. They are also a labour-intensive crop, with harvesting a particularly labour-intensive task because the fruit needs careful handling. This project investigates collaborative human-robot strawberry harvesting, where interacting with a human potentially increases the adaptability of a robot to work in more complex environments. The project mainly concentrates on two aspects of the problem: the identification of the fruit and the picking of the fruit.
@inproceedings{lincoln53890, month = {April}, author = {Zhuoling Huang and Elizabeth Sklar and Simon Parsons}, booktitle = {HRI '20: Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction}, title = {Design of Automatic Strawberry Harvest Robot Suitable in Complex Environments}, publisher = {Association for Computing Machinery}, doi = {10.1145/3371382.3377443}, pages = {567--569}, year = {2022}, keywords = {ARRAY(0x559d325ee070)}, url = {https://eprints.lincoln.ac.uk/id/eprint/53890/}, abstract = {Strawberries are an important cash crop that are grown worldwide. They are also a labour-intensive crop, with harvesting a particularly labour-intensive task because the fruit needs careful handling. This project investigates collaborative human-robot strawberry harvesting, where interacting with a human potentially increases the adaptability of a robot to work in more complex environments. The project mainly concentrates on two aspects of the problem: the identification of the fruit and the picking of the fruit.} }
- A. Salazar-Gomez, M. Darbyshire, J. Gao, E. Sklar, and S. Parsons, “Beyond mAP: towards practical object detection for weed spraying in precision agriculture,” in 2022 ieee/rsj international conference on intelligent robots and systems, 2022, p. 9232–9238. doi:10.1109/IROS47612.2022.9982139
[BibTeX] [Abstract] [Download PDF]
The evolution of smaller and more powerful GPUs over the last 2 decades has vastly increased the opportunity to apply robust deep learning-based machine vision approaches to real-time use cases in practical environments. One exciting application domain for such technologies is precision agriculture, where the ability to integrate on-board machine vision with data-driven actuation means that farmers can make decisions about crop care and harvesting at the level of the individual plant rather than the whole field. This makes sense both economically and environmentally. This paper assesses the feasibility of precision spraying weeds via a comprehensive evaluation of weed detection accuracy and speed using two separate datasets, two types of GPU, and several state-of-the-art object detection algorithms. A simplified model of precision spraying is used to determine whether the weed detection accuracy achieved could result in a sufficiently high weed hit rate combined with a significant reduction in herbicide usage. The paper introduces two metrics to capture these aspects of the real-world deployment of precision weeding and demonstrates their utility through experimental results.
@inproceedings{lincoln51680, month = {December}, author = {Adrian Salazar-Gomez and Madeleine Darbyshire and Junfeng Gao and Elizabeth Sklar and Simon Parsons}, booktitle = {2022 IEEE/RSJ International Conference on Intelligent Robots and Systems}, title = {Beyond mAP: Towards practical object detection for weed spraying in precision agriculture}, publisher = {IEEE Press}, doi = {10.1109/IROS47612.2022.9982139}, pages = {9232--9238}, year = {2022}, keywords = {ARRAY(0x559d325ed6e0)}, url = {https://eprints.lincoln.ac.uk/id/eprint/51680/}, abstract = {The evolution of smaller and more powerful GPUs over the last 2 decades has vastly increased the opportunity to apply robust deep learning-based machine vision approaches to real-time use cases in practical environments. One exciting application domain for such technologies is precision agriculture, where the ability to integrate on-board machine vision with data-driven actuation means that farmers can make decisions about crop care and harvesting at the level of the individual plant rather than the whole field. This makes sense both economically and environmentally. This paper assesses the feasibility of precision spraying weeds via a comprehensive evaluation of weed detection accuracy and speed using two separate datasets, two types of GPU, and several state-of-the-art object detection algorithms. A simplified model of precision spraying is used to determine whether the weed detection accuracy achieved could result in a sufficiently high weed hit rate combined with a significant reduction in herbicide usage. The paper introduces two metrics to capture these aspects of the real-world deployment of precision weeding and demonstrates their utility through experimental results.} }
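In spirit, the paper's two deployment metrics are a weed hit rate and a herbicide saving relative to blanket spraying. The grid-cell accounting below (one sprayed nozzle cell per detection) is an assumed simplification used to show the idea, not the authors' exact definitions.

```python
def weed_hit_rate(sprayed_cells, weed_cells):
    """Fraction of true weed locations that received herbicide."""
    hits = len(weed_cells & sprayed_cells)
    return hits / max(len(weed_cells), 1)

def herbicide_saving(sprayed_cells, total_cells):
    """Reduction in sprayed area versus spraying the whole field."""
    return 1.0 - len(sprayed_cells) / total_cells

weeds = {(2, 3), (5, 7), (9, 1)}              # ground-truth weed cells
sprayed = {(2, 3), (5, 7), (4, 4)}            # detector output -> nozzles
print(weed_hit_rate(sprayed, weeds))          # ~0.67: two of three weeds hit
print(herbicide_saving(sprayed, total_cells=100))   # 0.97: 3% of field sprayed
```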
- F. Camara and C. Fox, “Extending quantitative proxemics and trust to hri,” in 31st ieee international conference on robot & human interactive communication, 2022. doi:10.1109/RO-MAN53752.2022.9900821
[BibTeX] [Abstract] [Download PDF]
Human-robot interaction (HRI) requires quantitative models of proxemics and trust for robots to use in negotiating with people for space. Hall’s theory of proxemics has been used for decades to describe social interaction distances but has lacked detailed quantitative models and generative explanations to apply to these cases. In the limited case of autonomous vehicle interactions with pedestrians crossing a road, a recent model has explained the quantitative sizes of Hall’s distances to 4% error and their links to the concept of trust in human interactions. The present study extends this model by generalising several of its assumptions to cover further cases including human-human and human-robot interactions. It tightens the explanations of Hall zones from 4% to 1% error and fits several more recent empirical HRI results. This may help to further unify these disparate fields and quantify them to a level which enables real-world operational HRI applications.
@inproceedings{lincoln49872, booktitle = {31st IEEE International Conference on Robot \& Human Interactive Communication}, month = {August}, title = {Extending Quantitative Proxemics and Trust to HRI}, author = {Fanta Camara and Charles Fox}, publisher = {IEEE}, year = {2022}, doi = {10.1109/RO-MAN53752.2022.9900821}, keywords = {ARRAY(0x559d325edbf0)}, url = {https://eprints.lincoln.ac.uk/id/eprint/49872/}, abstract = {Human-robot interaction (HRI) requires quantitative models of proxemics and trust for robots to use in negotiating with people for space. Hall?s theory of proxemics has been used for decades to describe social interaction distances but has lacked detailed quantitative models and generative explanations to apply to these cases. In the limited case of autonomous vehicle interactions with pedestrians crossing a road, a recent model has explained the quantitative sizes of Hall?s distances to 4\% error and their links to the concept of trust in human interactions. The present study extends this model by generalising several of its assumptions to cover further cases including human-human and human-robot interactions. It tightens the explanations of Hall zones from 4\% to 1\% error and fits several more recent empirical HRI results. This may help to further unify these disparate fields and quantify them to a level which enables real-world operational HRI applications.} }
- G. Clawson and C. Fox, “Blockchain crop assurance and localisation,” in The 5th uk robotics and autonomous systems conference, 2022.
[BibTeX] [Abstract] [Download PDF]
Food supply chain assurance should begin in the field with regular per-plant re-identification and logging. This is challenging due to localisation and storage requirements. A proof-of-concept solution is provided, using an image-based, super-GNSS-precision, robotic localisation per-plant re-identification technique with decentralised storage and blockchain technology. ORB descriptors and RANSAC are used to align in-field stones to previously captured stone images for localisation. Blockchain smart contracts act as a data broker for repeated update and retrieval of an image from a distributed file share system. Results suggest that localisation can be achieved to sub-100 mm within a time window of 18 seconds. The implementation is open source and available at: https://github.com/garry-clawson/Blockchain-Crop-Assurance-and-Localisation
@inproceedings{lincoln50385, booktitle = {The 5th UK Robotics and Autonomous Systems Conference}, month = {August}, title = {Blockchain Crop Assurance and Localisation}, author = {Garry Clawson and Charles Fox}, publisher = {UKRAS}, year = {2022}, keywords = {ARRAY(0x559d325edc20)}, url = {https://eprints.lincoln.ac.uk/id/eprint/50385/}, abstract = {Food supply chain assurance should begin in the field with regular per-plant re-identification and logging. This is challenging due to localisation and storage requirements. A proof-of-concept solution is provided, using an image-based, super-GNSS precision, robotic localisation per-plant re-identification technique with decentralised storage and blockchain technology. ORB descriptors and RANSAC are used to align in-field stones to previously captured stone images for localisation. Blockchain smart contracts act as a data broker for repeated update and retrieval of an image from a distributed file share system. Results suggest that localisation can be achieved to sub 100mm within a time window of 18 seconds. The implementation is open source and available at: {$\backslash$}url\{https://github.com/garry-clawson/Blockchain-Crop-Assurance-and-Localisation\}} }
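The localisation step outlined above maps naturally onto standard OpenCV calls: ORB keypoints matched between a live in-field image and a stored stone image, with RANSAC rejecting outlier matches while estimating the alignment. File paths and thresholds below are placeholders, not values from the paper.

```python
import cv2
import numpy as np

def align_to_reference(live_path, ref_path, min_matches=10):
    """Estimate the homography aligning a live in-field image of stones
    to a previously captured reference image (ORB + RANSAC)."""
    live = cv2.imread(live_path, cv2.IMREAD_GRAYSCALE)
    ref = cv2.imread(ref_path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(live, None)
    kp2, des2 = orb.detectAndCompute(ref, None)
    if des1 is None or des2 is None:
        return None                         # untextured or unreadable image
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    if len(matches) < min_matches:
        return None                         # not enough matches to localise
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H                                # maps live pixels to reference

# H, combined with the reference image's surveyed position, yields a
# super-GNSS-precision fix for the plant being re-identified.
```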
- M. Darbyshire, A. Salazar-Gomez, C. Lennox, J. Gao, E. Sklar, and S. Parsons, “Localising weeds using a prototype weed sprayer,” in Ukras22 conference “robotics for unconstrained environments”, 2022, p. 12–13. doi:10.31256/Ua7Pr2W
[BibTeX] [Abstract] [Download PDF]
The application of convolutional neural networks (CNNs) to challenging visual recognition tasks has been shown to be highly effective and robust compared to traditional machine vision techniques. The recent development of small, powerful GPUs has enabled embedded systems to incorporate real-time, CNN-based, visual inference. Agriculture is a domain where this technology could be hugely advantageous. One such application within agriculture is precision spraying where only weeds are targeted with herbicide. This approach promises weed control with significant economic and environmental benefits from reduced herbicide usage. While existing research has validated that CNN-based vision methods can accurately discern between weeds and crops, this paper explores how such detections can be used to actuate a prototype precision sprayer that incorporates a CNN-based weed detection system and validates spraying performance in a simplified scenario.
@inproceedings{lincoln53105, month = {August}, author = {Madeleine Darbyshire and Adrian Salazar-Gomez and Callum Lennox and Junfeng Gao and Elizabeth Sklar and Simon Parsons}, booktitle = {UKRAS22 Conference ?Robotics for Unconstrained Environments?}, title = {Localising Weeds Using a Prototype Weed Sprayer}, publisher = {UK-RAS Network}, doi = {10.31256/Ua7Pr2W}, pages = {12--13}, year = {2022}, keywords = {ARRAY(0x559d325edc50)}, url = {https://eprints.lincoln.ac.uk/id/eprint/53105/}, abstract = {The application of convolutional neural networks (CNNs) to challenging visual recognition tasks has been shown to be highly effective and robust compared to traditional machine vision techniques. The recent development of small, powerful GPUs has enabled embedded systems to incorporate real-time, CNN-based, visual inference. Agriculture is a domain where this technology could be hugely advantageous. One such application within agriculture is precision spraying where only weeds are targeted with herbicide. This approach promises weed control with significant economic and environmental benefits from re- duced herbicide usage. While existing research has validated that CNN-based vision methods can accurately discern between weeds and crops, this paper explores how such detections can be used to actuate a prototype precision sprayer that incorporates a CNN- based weed detection system and validates spraying performance in a simplified scenario.} }
- N. Wang, G. Das, and A. Millard, “Learning cooperative behaviours in adversarial multi-agent systems,” in Towards autonomous robotic systems, Cham, 2022, p. 179–189. doi:10.1007/978-3-031-15908-4_15
[BibTeX] [Abstract] [Download PDF]
This work extends an existing virtual multi-agent platform called RoboSumo to create TripleSumo, a platform for investigating multi-agent cooperative behaviors in continuous action spaces, with physical contact in an adversarial environment. In this paper we investigate a scenario in which two agents, namely `Bug’ and `Ant’, must team up and push another agent `Spider’ out of the arena. To tackle this goal, the newly added agent `Bug’ is trained during an ongoing match between `Ant’ and `Spider’. `Bug’ must develop awareness of the other agents’ actions, infer the strategy of both sides, and eventually learn an action policy to cooperate. The reinforcement learning algorithm Deep Deterministic Policy Gradient (DDPG) is implemented with a hybrid reward structure combining dense and sparse rewards. The cooperative behavior is quantitatively evaluated by the mean probability of winning the match and mean number of steps needed to win.
@inproceedings{lincoln52230, month = {September}, author = {Ni Wang and Gautham Das and Alan Millard}, booktitle = {Towards Autonomous Robotic Systems}, address = {Cham}, title = {Learning Cooperative Behaviours in Adversarial Multi-agent Systems}, publisher = {Springer International Publishing}, year = {2022}, doi = {10.1007/978-3-031-15908-4\_15}, pages = {179--189}, keywords = {ARRAY(0x559d325edb30)}, url = {https://eprints.lincoln.ac.uk/id/eprint/52230/}, abstract = {This work extends an existing virtual multi-agent platform called RoboSumo to create TripleSumo---a platform for investigating multi-agent cooperative behaviors in continuous action spaces, with physical contact in an adversarial environment. In this paper we investigate a scenario in which two agents, namely `Bug' and `Ant', must team up and push another agent `Spider' out of the arena. To tackle this goal, the newly added agent `Bug' is trained during an ongoing match between `Ant' and `Spider'. `Bug' must develop awareness of the other agents' actions, infer the strategy of both sides, and eventually learn an action policy to cooperate. The reinforcement learning algorithm Deep Deterministic Policy Gradient (DDPG) is implemented with a hybrid reward structure combining dense and sparse rewards. The cooperative behavior is quantitatively evaluated by the mean probability of winning the match and mean number of steps needed to win.} }
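The hybrid reward structure mentioned above combines a dense shaping term with a sparse outcome bonus; the particular shaping (progress pushing `Spider' toward the arena edge) and the coefficients are guesses for illustration, not the platform's implemented reward.

```python
def hybrid_reward(spider_dist_to_edge_prev, spider_dist_to_edge,
                  spider_out, step_cost=0.01, win_bonus=100.0):
    """Dense + sparse reward for the cooperating 'Bug' agent.

    Dense term: progress pushing 'Spider' toward the arena edge.
    Sparse term: large bonus when 'Spider' leaves the arena.
    """
    dense = spider_dist_to_edge_prev - spider_dist_to_edge   # shaping
    sparse = win_bonus if spider_out else 0.0
    return dense + sparse - step_cost

# A typical training step: Spider pushed 5 cm closer to the edge.
print(hybrid_reward(0.80, 0.75, spider_out=False))   # 0.04
```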
- I. Jestin, J. E. Fischer, M. G. J. Trigo, D. R. Large, and G. E. Burnett, “Effects of wording and gendered voices on acceptability of voice assistants in future autonomous vehicles,” in Cui '22: conversational user interfaces conference, 2022, p. 1–11. doi:10.1145/3543829.3543836
[BibTeX] [Abstract] [Download PDF]
Voice assistants in future autonomous vehicles may play a major role in supporting the driver during periods of a transfer of control with the vehicle (handover and handback). However, little is known about the effects of different qualities of the voice assistant on its perceived acceptability, and thus its potential to support the driver's trust in the vehicle. A desktop study was carried out with 18 participants, investigating the effects of three gendered voices and different wording of prompts during handover and handback driving scenarios on measures of acceptability. Participants rated prompts by the voice assistant in nine different driving scenarios, using 5-point Likert style items in during- and post-study questionnaires as well as a short interview at the end. A commanding/formally worded prompt was rated higher on most of the desirable measures of acceptability as compared to an informally worded prompt. The "Matthew" voice used was perceived to be less artificial and more desirable than the "Joanna" voice and the gender-ambiguous "Jordan" voice; however, we caution against interpreting these results as indicative of a general preference of gender, and instead discuss our results to throw light on the complex socio-phonetic nature of voices (including gender) and wording of voice assistants, and the need for careful consideration while designing the same. Results gained facilitate the drawing of insights needed to take better care when designing the voice and wording for voice assistants in future autonomous vehicles.
@inproceedings{lincoln50291, month = {July}, author = {Iris Jestin and Joel E. Fischer and Maria J. Galvez Trigo and David R. Large and Gary E. Burnett}, booktitle = {CUI '22: Conversational User Interfaces Conference}, title = {Effects of Wording and Gendered Voices on Acceptability of Voice Assistants in Future Autonomous Vehicles}, publisher = {ACM}, doi = {10.1145/3543829.3543836}, pages = {1--11}, year = {2022}, url = {https://eprints.lincoln.ac.uk/id/eprint/50291/}, abstract = {Voice assistants in future autonomous vehicles may play a major role in supporting the driver during periods of a transfer of control with the vehicle (handover and handback). However, little is known about the effects of different qualities of the voice assistant on its perceived acceptability, and thus its potential to support the driver's trust in the vehicle. A desktop study was carried out with 18 participants, investigating the effects of three gendered voices and different wording of prompts during handover and handback driving scenarios on measures of acceptability. Participants rated prompts by the voice assistant in nine different driving scenarios, using 5-point Likert style items in during- and post-study questionnaires as well as a short interview at the end. A commanding/formally worded prompt was rated higher on most of the desirable measures of acceptability as compared to an informally worded prompt. The "Matthew" voice used was perceived to be less artificial and more desirable than the "Joanna" voice and the gender-ambiguous "Jordan" voice; however, we caution against interpreting these results as indicative of a general preference of gender, and instead discuss our results to throw light on the complex socio-phonetic nature of voices (including gender) and wording of voice assistants, and the need for careful consideration while designing the same. Results gained facilitate the drawing of insights needed to take better care when designing the voice and wording for voice assistants in future autonomous vehicles.} }
- J. Stevenson and C. Fox, “Scaling a hippocampus model with gpu parallelisation and test-driven refactoring,” in 11th international conference on biomimetic and biohybrid systems (living machines), 2022.
[BibTeX] [Abstract] [Download PDF]
The hippocampus is the brain area used for localisation, mapping and episodic memory. Humans and animals can outperform robotic systems in these tasks, so functional models of hippocampus may be useful to improve robotic navigation, such as for self-driving cars. Previous work developed a biologically plausible model of hippocampus based on Unitary Coherent Particle Filter (UCPF) and Temporal Restricted Boltzmann Machine, which was able to learn to navigate around small test environments. However it was implemented in serial software, which becomes very slow as the environments and numbers of neurons scale up. Modern GPUs can parallelize execution of neural networks. The present Neural Software Engineering study develops a GPU accelerated version of the UCPF hippocampus software, using the formal Software Engineering techniques of profiling, optimisation and test-driven refactoring. Results show that the model can greatly benefit from parallel execution, which may enable it to scale from toy environments and applications to real-world ones such as self-driving car navigation. The refactored parallel code is released to the community as open source software as part of this publication.
@inproceedings{lincoln49936, booktitle = {11th International Conference on Biomimetic and Biohybrid Systems (Living Machines)}, month = {July}, title = {Scaling a hippocampus model with GPU parallelisation and test-driven refactoring}, author = {Jack Stevenson and Charles Fox}, publisher = {Springer LNCS}, year = {2022}, url = {https://eprints.lincoln.ac.uk/id/eprint/49936/}, abstract = {The hippocampus is the brain area used for localisation, mapping and episodic memory. Humans and animals can outperform robotic systems in these tasks, so functional models of hippocampus may be useful to improve robotic navigation, such as for self-driving cars. Previous work developed a biologically plausible model of hippocampus based on Unitary Coherent Particle Filter (UCPF) and Temporal Restricted Boltzmann Machine, which was able to learn to navigate around small test environments. However it was implemented in serial software, which becomes very slow as the environments and numbers of neurons scale up. Modern GPUs can parallelize execution of neural networks. The present Neural Software Engineering study develops a GPU accelerated version of the UCPF hippocampus software, using the formal Software Engineering techniques of profiling, optimisation and test-driven refactoring. Results show that the model can greatly benefit from parallel execution, which may enable it to scale from toy environments and applications to real-world ones such as self-driving car navigation. The refactored parallel code is released to the community as open source software as part of this publication.} }
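As a flavour of the speed-up this entry targets, the sketch below moves a dense neural update from serial NumPy onto the GPU with PyTorch. It is a generic illustration of GPU parallelisation, not the paper's UCPF hippocampus code; the layer size and iteration count are arbitrary.

```python
import time
import numpy as np
import torch

# Generic illustration of GPU-parallelising a dense neural update;
# this is not the paper's UCPF hippocampus code.
n = 4096
w_np = np.random.randn(n, n).astype(np.float32)
x_np = np.random.randn(n).astype(np.float32)

t0 = time.time()
for _ in range(100):
    x_np = np.tanh(w_np @ x_np)   # serial CPU update
cpu_s = time.time() - t0

device = "cuda" if torch.cuda.is_available() else "cpu"
w = torch.from_numpy(w_np).to(device)
x = torch.randn(n, device=device)
t0 = time.time()
for _ in range(100):
    x = torch.tanh(w @ x)         # same update, parallelised on GPU
if device == "cuda":
    torch.cuda.synchronize()      # wait for queued kernels before timing
gpu_s = time.time() - t0
print(f"CPU: {cpu_s:.2f}s  {device.upper()}: {gpu_s:.2f}s")
```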
- T. Choi, O. Would, A. Salazar-Gomez, and G. Cielniak, “Self-supervised representation learning for reliable robotic monitoring of fruit anomalies,” in 2022 ieee international conference on robotics and automation (icra), 2022. doi:10.1109/ICRA46639.2022.9811954
[BibTeX] [Abstract] [Download PDF]
Data augmentation can be a simple yet powerful tool for autonomous robots to fully utilise available data for self-supervised identification of atypical scenes or objects. State-of-the-art augmentation methods arbitrarily embed “structural” peculiarity on typical images so that classifying these artefacts can provide guidance for learning representations for the detection of anomalous visual signals. In this paper, however, we argue that learning such structure-sensitive representations can be a suboptimal approach to some classes of anomaly (e.g., unhealthy fruits) which could be better recognised by a different type of visual element such as “colour”. We thus propose Channel Randomisation as a novel data augmentation method for restricting neural networks to learn encoding of “colour irregularity” whilst predicting channel-randomised images to ultimately build reliable fruit-monitoring robots identifying atypical fruit qualities. Our experiments show that (1) this colour-based alternative can better learn representations for consistently accurate identification of fruit anomalies in various fruit species, and also, (2) unlike other methods, the validation accuracy can be utilised as a criterion for early stopping of training in practice due to positive correlation between the performance in the self-supervised colour-differentiation task and the subsequent detection rate of actual anomalous fruits. Also, the proposed approach is evaluated on a new agricultural dataset, Riseholme-2021, consisting of 3.5K strawberry images gathered by a mobile robot, which we share online to encourage active agri-robotics research.
@inproceedings{lincoln48682, booktitle = {2022 IEEE International Conference on Robotics and Automation (ICRA)}, month = {July}, title = {Self-supervised Representation Learning for Reliable Robotic Monitoring of Fruit Anomalies}, author = {Taeyeong Choi and Owen Would and Adrian Salazar-Gomez and Grzegorz Cielniak}, publisher = {IEEE}, year = {2022}, doi = {10.1109/ICRA46639.2022.9811954}, url = {https://eprints.lincoln.ac.uk/id/eprint/48682/}, abstract = {Data augmentation can be a simple yet powerful tool for autonomous robots to fully utilise available data for self-supervised identification of atypical scenes or objects. State-of-the-art augmentation methods arbitrarily embed "structural" peculiarity on typical images so that classifying these artefacts can provide guidance for learning representations for the detection of anomalous visual signals. In this paper, however, we argue that learning such structure-sensitive representations can be a suboptimal approach to some classes of anomaly (e.g., unhealthy fruits) which could be better recognised by a different type of visual element such as "colour". We thus propose Channel Randomisation as a novel data augmentation method for restricting neural networks to learn encoding of "colour irregularity" whilst predicting channel-randomised images to ultimately build reliable fruit-monitoring robots identifying atypical fruit qualities. Our experiments show that (1) this colour-based alternative can better learn representations for consistently accurate identification of fruit anomalies in various fruit species, and also, (2) unlike other methods, the validation accuracy can be utilised as a criterion for early stopping of training in practice due to positive correlation between the performance in the self-supervised colour-differentiation task and the subsequent detection rate of actual anomalous fruits. Also, the proposed approach is evaluated on a new agricultural dataset, Riseholme-2021, consisting of 3.5K strawberry images gathered by a mobile robot, which we share online to encourage active agri-robotics research.} }
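The core augmentation above is simple to express: shuffle the colour channels of a typical image and train a classifier to predict which permutation was applied, so the learned representation becomes sensitive to colour irregularity. The sketch below is a minimal NumPy rendering of that idea; the permutation set and labelling scheme are assumptions, not the paper's exact setup.

```python
import numpy as np
from itertools import permutations

# Minimal sketch of channel randomisation as a self-supervised task:
# shuffle RGB channels and ask a network to predict the permutation.
# The permutation set and labelling are illustrative assumptions.
PERMS = list(permutations(range(3)))  # 6 possible channel orders

def channel_randomise(image, rng):
    """image: HxWx3 array. Returns (augmented image, permutation label)."""
    label = int(rng.integers(len(PERMS)))
    perm = PERMS[label]
    return image[:, :, list(perm)], label

rng = np.random.default_rng(0)
img = rng.random((64, 64, 3), dtype=np.float32)  # stand-in strawberry patch
aug, label = channel_randomise(img, rng)
# A classifier trained to predict `label` from `aug` must attend to
# colour, which is what makes the representation useful for spotting
# colour anomalies such as unripe or unhealthy fruit.
print(aug.shape, label)
```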
- L. Roberts-Elliott, G. Das, and A. Millard, “Agent-based simulation of multi-robot soil compaction mapping,” in Towards autonomous robotic systems, Cham, 2022, p. 251–265. doi:10.1007/978-3-031-15908-4_20
[BibTeX] [Abstract] [Download PDF]
Soil compaction, an increase in soil density and decrease in porosity, has a negative effect on crop yields, and damaging environmental impacts. Mapping soil compaction at a high resolution is an important step in enabling precision agriculture practices to address these issues. Autonomous ground-based robotic approaches using proximal sensing have been proposed as alternatives to time-consuming and costly manual soil sampling. Soil compaction has high spatial variance, which can be challenging to capture in a limited time window. A multi-robot system can parallelise the sampling process and reduce the overall sampling time. Multi-robot soil sampling is critically underexplored in literature, and requires selection of methods to efficiently coordinate the sampling. This paper presents a simulation of multi-agent spatial sampling, extending the Mesa agent-based simulation framework, with general applicability, but demonstrated here as a testbed for different methodologies of multi-robot soil compaction mapping. To reduce the necessary number of samples for accurate mapping, while maximising information gained per sample, a dynamic sampling strategy, informed by kriging variance from kriging interpolation of sampled soil compaction values, has been implemented. This is enhanced by task clustering and insertion heuristics for task queuing. Results from the evaluation trials show the suitability of sequential single item auctions in this highly dynamic environment, and high interpolation accuracy resulting from our dynamic sampling, with avenues for improvements in this bespoke sampling methodology in future work.
@inproceedings{lincoln53183, month = {September}, author = {Laurence Roberts-Elliott and Gautham Das and Alan Millard}, booktitle = {Towards Autonomous Robotic Systems}, address = {Cham}, title = {Agent-Based Simulation of Multi-robot Soil Compaction Mapping}, publisher = {Springer International Publishing}, year = {2022}, doi = {10.1007/978-3-031-15908-4\_20}, pages = {251--265}, url = {https://eprints.lincoln.ac.uk/id/eprint/53183/}, abstract = {Soil compaction, an increase in soil density and decrease in porosity, has a negative effect on crop yields, and damaging environmental impacts. Mapping soil compaction at a high resolution is an important step in enabling precision agriculture practices to address these issues. Autonomous ground-based robotic approaches using proximal sensing have been proposed as alternatives to time-consuming and costly manual soil sampling. Soil compaction has high spatial variance, which can be challenging to capture in a limited time window. A multi-robot system can parallelise the sampling process and reduce the overall sampling time. Multi-robot soil sampling is critically underexplored in literature, and requires selection of methods to efficiently coordinate the sampling. This paper presents a simulation of multi-agent spatial sampling, extending the Mesa agent-based simulation framework, with general applicability, but demonstrated here as a testbed for different methodologies of multi-robot soil compaction mapping. To reduce the necessary number of samples for accurate mapping, while maximising information gained per sample, a dynamic sampling strategy, informed by kriging variance from kriging interpolation of sampled soil compaction values, has been implemented. This is enhanced by task clustering and insertion heuristics for task queuing. Results from the evaluation trials show the suitability of sequential single item auctions in this highly dynamic environment, and high interpolation accuracy resulting from our dynamic sampling, with avenues for improvements in this bespoke sampling methodology in future work.} }
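The dynamic sampling loop described above, sample next where interpolation uncertainty is highest, can be sketched compactly. Below, a Gaussian process regressor (a close mathematical relative of kriging) stands in for the paper's kriging interpolation; the grid size, kernel and synthetic field are illustrative assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Sketch of uncertainty-driven sampling: fit an interpolator to the
# samples so far, then send the next robot to the cell with the highest
# predictive variance. A GP stands in for kriging; all parameters are
# illustrative assumptions, not the paper's configuration.
rng = np.random.default_rng(1)
grid = np.array([[x, y] for x in range(20) for y in range(20)], dtype=float)
true_field = lambda p: np.sin(p[:, 0] / 4.0) + np.cos(p[:, 1] / 5.0)

sampled_idx = list(rng.integers(len(grid), size=3))  # seed samples
for step in range(10):
    X = grid[sampled_idx]
    y = true_field(X)                      # proxy for a penetrometer reading
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=3.0)).fit(X, y)
    _, std = gp.predict(grid, return_std=True)
    std[sampled_idx] = -np.inf             # never revisit a sampled cell
    next_idx = int(np.argmax(std))         # most uncertain cell next
    sampled_idx.append(next_idx)

print(f"visited {len(sampled_idx)} cells, last target: {grid[sampled_idx[-1]]}")
```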
- H. Harman and E. Sklar, “Multi-agent task allocation techniques for harvest team formation,” in Advances in practical applications of agents, multi-agent systems, and complex systems simulation. the paams collection, 2022, p. 217–228. doi:10.1007/978-3-031-18192-4_18
[BibTeX] [Abstract] [Download PDF]
With increasing demands for soft fruit and shortages of seasonal workers, farms are seeking innovative solutions for efficiently managing their workforce. The harvesting workforce is typically organised by farm managers who assign workers to the fields that are ready to be harvested. They aim to minimise staff time (and costs) and distribute work fairly, whilst still picking all ripe fruit within the fields that need to be harvested. This paper posits that this problem can be addressed using multi-criteria, multi-agent task allocation techniques. The work presented compares the application of Genetic Algorithms (GAs) vs auction-based approaches to the challenge of assigning workers with various skill sets to fields with various estimated yields. These approaches are evaluated alongside a previously suggested method and the teams that were manually created by a farm manager during the 2021 harvesting season. Results indicate that the GA approach produces more efficient team allocations than the alternatives assessed.
@inproceedings{lincoln50057, month = {October}, author = {Helen Harman and Elizabeth Sklar}, booktitle = {Advances in Practical Applications of Agents, Multi-Agent Systems, and Complex Systems Simulation. The PAAMS Collection}, title = {Multi-Agent Task Allocation Techniques for Harvest Team Formation}, publisher = {Springer}, doi = {10.1007/978-3-031-18192-4\_18}, pages = {217--228}, year = {2022}, url = {https://eprints.lincoln.ac.uk/id/eprint/50057/}, abstract = {With increasing demands for soft fruit and shortages of seasonal workers, farms are seeking innovative solutions for efficiently managing their workforce. The harvesting workforce is typically organised by farm managers who assign workers to the fields that are ready to be harvested. They aim to minimise staff time (and costs) and distribute work fairly, whilst still picking all ripe fruit within the fields that need to be harvested. This paper posits that this problem can be addressed using multi-criteria, multi-agent task allocation techniques. The work presented compares the application of Genetic Algorithms (GAs) vs auction-based approaches to the challenge of assigning workers with various skill sets to fields with various estimated yields. These approaches are evaluated alongside a previously suggested method and the teams that were manually created by a farm manager during the 2021 harvesting season. Results indicate that the GA approach produces more efficient team allocations than the alternatives assessed.} }
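A compressed sketch of the GA side of this comparison: a genome assigns each picker to a field, and fitness rewards matching per-field picking capacity to estimated yield. The representation, weights, yields and skills below are invented for illustration, not the paper's formulation.

```python
import random

# Illustrative GA sketch for harvest team formation: a genome assigns
# each worker to a field; fitness rewards matching per-field picking
# capacity to estimated yield. All numbers are invented for illustration.
YIELDS = [120.0, 80.0, 50.0]             # estimated kg per field
SKILLS = [9.0, 7.0, 6.0, 5.0, 5.0, 4.0]  # kg/hour per worker

def fitness(genome):
    capacity = [0.0] * len(YIELDS)
    for worker, field in enumerate(genome):
        capacity[field] += SKILLS[worker]
    # Penalise mismatch between each field's capacity share and yield share.
    total_y, total_c = sum(YIELDS), sum(capacity)
    return -sum(abs(y / total_y - c / total_c) for y, c in zip(YIELDS, capacity))

def mutate(genome):
    g = genome[:]
    g[random.randrange(len(g))] = random.randrange(len(YIELDS))
    return g

pop = [[random.randrange(len(YIELDS)) for _ in SKILLS] for _ in range(30)]
for _ in range(200):  # evolve: keep the fitter half, refill by mutation
    pop.sort(key=fitness, reverse=True)
    pop = pop[:15] + [mutate(random.choice(pop[:15])) for _ in range(15)]
print(pop[0], fitness(pop[0]))
```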
- M. G. Trigo, P. Standen, and S. Cobb, “Educational robots and their control interfaces: how can we make them more accessible for special education?,” in Hci international conference 2022, 2022, p. 15–34. doi:10.1007/978-3-031-05039-8_2
[BibTeX] [Abstract] [Download PDF]
Existing design standards and guidelines provide guidance on what factors to consider to produce interactive systems that are not only usable, but also accessible. However, these standards are usually general, and when it comes to designing an interactive system for children with Learning Difficulties or Disabilities (LD) and/or Autism Spectrum Conditions (ASC) they are often not specific enough, leading to systems that are not fit for that purpose. If we dive into the area of educational robotics, we face even more issues, in part due to the relative novelty of these technologies. In this paper, we present an analysis of 26 existing educational robots and the interfaces used to control them. Furthermore, we present the results of running focus groups and a questionnaire with 32 educators with expertise in Special Education and parents at four different institutions, to explore potential accessibility issues of existing systems and to identify desirable characteristics. We conclude by introducing an initial set of design recommendations, to complement existing design standards and guidelines, that would help with producing more accessible control interfaces for educational robots in the future, with a special focus on helping pupils with LDs and/or ASC.
@inproceedings{lincoln48058, month = {June}, author = {Maria Galvez Trigo and Penelope Standen and Sue Cobb}, booktitle = {HCI International Conference 2022}, title = {Educational robots and their control interfaces: how can we make them more accessible for Special Education?}, publisher = {Springer}, doi = {10.1007/978-3-031-05039-8\_2}, pages = {15--34}, year = {2022}, url = {https://eprints.lincoln.ac.uk/id/eprint/48058/}, abstract = {Existing design standards and guidelines provide guidance on what factors to consider to produce interactive systems that are not only usable, but also accessible. However, these standards are usually general, and when it comes to designing an interactive system for children with Learning Difficulties or Disabilities (LD) and/or Autism Spectrum Conditions (ASC) they are often not specific enough, leading to systems that are not fit for that purpose. If we dive into the area of educational robotics, we face even more issues, in part due to the relative novelty of these technologies. In this paper, we present an analysis of 26 existing educational robots and the interfaces used to control them. Furthermore, we present the results of running focus groups and a questionnaire with 32 educators with expertise in Special Education and parents at four different institutions, to explore potential accessibility issues of existing systems and to identify desirable characteristics. We conclude by introducing an initial set of design recommendations, to complement existing design standards and guidelines, that would help with producing more accessible control interfaces for educational robots in the future, with a special focus on helping pupils with LDs and/or ASC.} }
- R. Godfrey, M. Rimmer, C. Headleand, and C. Fox, “Rhythmtrain: making rhythmic sight reading training fun,” in International computer music conference, 2022.
[BibTeX] [Abstract] [Download PDF]
Rhythmic sight-reading forms a barrier to many musicians’ progress. It is difficult to practice in isolation, as it is hard to get feedback on accuracy. Different performers have different starting skills in different styles so it is hard to create a general curriculum for study. It can be boring to rehearse the same rhythms many times. We examine theories of motivation, engagement, and fun, and draw them together to design a novel training system, RhythmTrain. This includes consideration of dynamic difficulty, gamification and juicy design. The system uses machine learning to learn individual performers’ strengths, weaknesses, and interests, and optimises the selection of rhythms presented to maximise their engagement. An open source implementation is released as part of this publication.
@inproceedings{lincoln49153, booktitle = {International Computer Music Conference}, month = {September}, title = {RhythmTrain: making rhythmic sight reading training fun}, author = {Reece Godfrey and Matthew Rimmer and Chris Headleand and Charles Fox}, publisher = {ICMA}, year = {2022}, url = {https://eprints.lincoln.ac.uk/id/eprint/49153/}, abstract = {Rhythmic sight-reading forms a barrier to many musicians' progress. It is difficult to practice in isolation, as it is hard to get feedback on accuracy. Different performers have different starting skills in different styles so it is hard to create a general curriculum for study. It can be boring to rehearse the same rhythms many times. We examine theories of motivation, engagement, and fun, and draw them together to design a novel training system, RhythmTrain. This includes consideration of dynamic difficulty, gamification and juicy design. The system uses machine learning to learn individual performers' strengths, weaknesses, and interests, and optimises the selection of rhythms presented to maximise their engagement. An open source implementation is released as part of this publication.} }
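One lightweight way to picture the rhythm-selection problem in this entry is as a bandit: each rhythm class is an arm and measured engagement is the reward, balancing revisiting weak spots against novelty. The epsilon-greedy sketch below is a generic stand-in, not RhythmTrain's actual learner; the rhythm classes and engagement scores are invented.

```python
import random
from collections import defaultdict

# Generic epsilon-greedy bandit over rhythm classes; engagement scores
# are the rewards. Illustrative only; not RhythmTrain's actual learner.
RHYTHMS = ["straight-8ths", "dotted", "syncopated", "triplets", "swing"]
counts = defaultdict(int)
value = defaultdict(float)   # running mean engagement per rhythm class

def pick(epsilon=0.2):
    if random.random() < epsilon or not counts:
        return random.choice(RHYTHMS)            # explore
    return max(RHYTHMS, key=lambda r: value[r])  # exploit

def update(rhythm, engagement):
    counts[rhythm] += 1
    value[rhythm] += (engagement - value[rhythm]) / counts[rhythm]

for _ in range(50):  # simulated sessions with made-up engagement
    r = pick()
    update(r, random.gauss(0.5 if r == "syncopated" else 0.3, 0.1))
print(max(RHYTHMS, key=lambda r: value[r]))
```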
- J. Bennett, B. Moncur, K. Fogarty, G. Clawson, and C. Fox, “Towards open source hardware robotic woodwind: an internal duct flute player,” in International computer music conference, 2022.
[BibTeX] [Abstract] [Download PDF]
We present the first open source hardware (OSH) design and build of an automated robotic internal duct flute player, including an artificial lung and pitch calibration system. Using a recorder as an introductory instrument, the system is designed to be as modular as possible, enabling modification to fit further instruments across the woodwind family. Design considerations include the need to be as open to modification and accessible to as many people and instruments as possible. The system is split into two physical modules: a blowing module and a fingering module, and three software modules: actuator control, pitch calibration and musical note processing via MIDI. The system is able to perform beginner level recorder player melodies.
@inproceedings{lincoln49154, booktitle = {International Computer Music Conference}, month = {September}, title = {Towards Open Source Hardware Robotic Woodwind: an Internal Duct Flute Player}, author = {James Bennett and Bethan Moncur and Kyle Fogarty and Garry Clawson and Charles Fox}, publisher = {ICMA}, year = {2022}, url = {https://eprints.lincoln.ac.uk/id/eprint/49154/}, abstract = {We present the first open source hardware (OSH) design and build of an automated robotic internal duct flute player, including an artificial lung and pitch calibration system. Using a recorder as an introductory instrument, the system is designed to be as modular as possible, enabling modification to fit further instruments across the woodwind family. Design considerations include the need to be as open to modification and accessible to as many people and instruments as possible. The system is split into two physical modules: a blowing module and a fingering module, and three software modules: actuator control, pitch calibration and musical note processing via MIDI. The system is able to perform beginner level recorder player melodies.} }
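The three software modules listed above (actuator control, pitch calibration, MIDI note processing) suggest a thin mapping layer like the one sketched below, which turns MIDI note numbers into a fingering pattern and breath pressure. The fingering table is an approximation of soprano recorder fingerings and the linear pressure model is a stand-in for the project's calibrated curve; neither is taken from the paper.

```python
# Illustrative note-processing layer for a robotic recorder player:
# MIDI note -> (finger actuator pattern, blow pressure). The fingering
# table is approximate and the pressure model is a stand-in, not
# calibrated project data.
FINGERINGS = {          # 1 = hole covered, ordered thumb then holes 1-7
    72: (1, 1, 1, 1, 1, 1, 1, 1),  # C5
    74: (1, 1, 1, 1, 1, 1, 1, 0),  # D5
    76: (1, 1, 1, 1, 1, 1, 0, 0),  # E5
    77: (1, 1, 1, 1, 1, 0, 0, 0),  # F5 (simplified; real F is forked)
    79: (1, 1, 1, 1, 0, 0, 0, 0),  # G5
}

def note_to_actuation(midi_note, calibration_offset=0.0):
    if midi_note not in FINGERINGS:
        raise ValueError(f"note {midi_note} outside supported range")
    # Higher notes need more breath pressure; a linear model stands in
    # for the pitch-calibration module's learned curve.
    pressure_kpa = 1.0 + 0.05 * (midi_note - 72) + calibration_offset
    return FINGERINGS[midi_note], pressure_kpa

pattern, pressure = note_to_actuation(76)
print(pattern, f"{pressure:.2f} kPa")
```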
- J. Lock, F. Camara, and C. Fox, “Emap: real-time terrain estimation,” in 23rd towards autonomous robotic systems (taros) conference, 2022.
[BibTeX] [Abstract] [Download PDF]
Terrain mapping has many use cases in both land surveying and autonomous vehicles. Popular methods generate occupancy maps over 3D space, which are sub-optimal in outdoor scenarios with large, clear spaces where gaps in LiDAR readings are common. A terrain can instead be modelled as a height map over 2D space that can be updated iteratively with incoming LiDAR data, which simplifies computation and allows missing points to be estimated based on the current terrain estimate. The latter point is of particular interest, since it can reduce the data collection effort required (and its associated costs) and current options are not suitable for real-time operation. In this work, we introduce a new method that is capable of performing such terrain mapping and inferencing tasks in real-time. We evaluate it with a set of mapping scenarios and show it is capable of generating maps with higher accuracy than an OctoMap-based method.
@inproceedings{lincoln50390, booktitle = {23rd Towards Autonomous Robotic Systems (TAROS) Conference}, month = {September}, title = {EMap: Real-time Terrain Estimation}, author = {Jacobus Lock and Fanta Camara and Charles Fox}, publisher = {Springer}, year = {2022}, url = {https://eprints.lincoln.ac.uk/id/eprint/50390/}, abstract = {Terrain mapping has many use cases in both land surveying and autonomous vehicles. Popular methods generate occupancy maps over 3D space, which are sub-optimal in outdoor scenarios with large, clear spaces where gaps in LiDAR readings are common. A terrain can instead be modelled as a height map over 2D space that can be updated iteratively with incoming LiDAR data, which simplifies computation and allows missing points to be estimated based on the current terrain estimate. The latter point is of particular interest, since it can reduce the data collection effort required (and its associated costs) and current options are not suitable for real-time operation. In this work, we introduce a new method that is capable of performing such terrain mapping and inferencing tasks in real-time. We evaluate it with a set of mapping scenarios and show it is capable of generating maps with higher accuracy than an OctoMap-based method.} }
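The core representation described above, a 2D grid of heights updated incrementally from LiDAR returns, fits in a few lines. The sketch below keeps a running mean per cell and fills unobserved cells from observed neighbours; the cell size, averaging rule and neighbour fill are illustrative choices, not EMap's actual algorithm.

```python
import numpy as np

# Illustrative 2D height-map update from LiDAR points: running mean per
# cell, with unobserved cells filled from observed neighbours. The cell
# size and fill rule are simple stand-ins, not EMap's actual method.
CELL = 0.5                     # metres per grid cell
H, W = 40, 40
height = np.zeros((H, W))
count = np.zeros((H, W), dtype=int)

def integrate_scan(points_xyz):
    """points_xyz: Nx3 array of LiDAR returns in the map frame."""
    for x, y, z in points_xyz:
        i, j = int(y / CELL), int(x / CELL)
        if 0 <= i < H and 0 <= j < W:
            count[i, j] += 1
            height[i, j] += (z - height[i, j]) / count[i, j]  # running mean

def estimate_missing():
    """Fill never-observed cells with the mean of observed neighbours."""
    est = height.copy()
    for i, j in zip(*np.where(count == 0)):
        nb = [(i + di, j + dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)]
        vals = [height[a, b] for a, b in nb
                if 0 <= a < H and 0 <= b < W and count[a, b] > 0]
        if vals:
            est[i, j] = float(np.mean(vals))
    return est

integrate_scan(np.array([[3.2, 7.9, 0.41], [3.4, 8.1, 0.44]]))
print(estimate_missing()[int(8.0 / CELL), int(3.3 / CELL)])
```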
- J. Gregory, M. H. Nair, G. Bullegas, and M. R. Saaj, “Using semantic systems engineering techniques to verify the large aperture space telescope mission – current status,” in Model based space systems and software engineering mbse2021, 2022.
[BibTeX] [Abstract] [Download PDF]
MBSE aims to integrate engineering models across tools and domain boundaries to support traditional systems engineering activities (e.g., requirements elicitation and traceability, design, analysis, verification and validation). However, MBSE does not inherently solve interoperability with the multiple model-based infrastructures involved in a complex systems engineering project. The challenge is to implement digital continuity in the three dimensions of systems engineering: across disciplines, throughout the lifecycle, and along the supply chain. Space systems are ideal candidates for the application of MBSE and semantic modelling as these complex and expensive systems are mission-critical and often co-developed by multiple stakeholders. In this paper, the authors introduce the concept of Semantic Systems Engineering (SES) as an expansion of MBSE practices to include semantic modelling through SWTs. The paper also presents the progress and status of a novel Semantic Systems Engineering Ontology (SESO) in the context of a specific design case study – the Large Aperture Space Telescope mission.
@inproceedings{lincoln49463, booktitle = {Model Based Space Systems and Software Engineering MBSE2021}, month = {September}, title = {Using Semantic Systems Engineering Techniques to Verify the Large Aperture Space Telescope Mission – Current Status}, author = {Joe Gregory and Manu H. Nair and Gianmaria Bullegas and Mini Rai Saaj}, publisher = {European Space Agency}, year = {2022}, url = {https://eprints.lincoln.ac.uk/id/eprint/49463/}, abstract = {MBSE aims to integrate engineering models across tools and domain boundaries to support traditional systems engineering activities (e.g., requirements elicitation and traceability, design, analysis, verification and validation). However, MBSE does not inherently solve interoperability with the multiple model-based infrastructures involved in a complex systems engineering project. The challenge is to implement digital continuity in the three dimensions of systems engineering: across disciplines, throughout the lifecycle, and along the supply chain. Space systems are ideal candidates for the application of MBSE and semantic modelling as these complex and expensive systems are mission-critical and often co-developed by multiple stakeholders. In this paper, the authors introduce the concept of Semantic Systems Engineering (SES) as an expansion of MBSE practices to include semantic modelling through SWTs. The paper also presents the progress and status of a novel Semantic Systems Engineering Ontology (SESO) in the context of a specific design case study – the Large Aperture Space Telescope mission.} }
- F. Camara and C. Fox, “Game theory, proxemics and trust for self-driving car social navigation,” in Social robot navigation: advances and evaluation (seanavbench 2022), 2022.
[BibTeX] [Abstract] [Download PDF]
To navigate in human social spaces, self-driving cars and other robots must show social intelligence. This involves predicting and planning around pedestrians, understanding their personal space, and establishing trust with them. The present paper gives an overview of our ongoing work on modelling and controlling human–self-driving car interactions using game theory, proxemics and trust, and unifying these fields via quantitative models and robot controllers.
@inproceedings{lincoln49183, booktitle = {Social Robot Navigation: Advances and Evaluation (SEANavBench 2022)}, month = {May}, title = {Game Theory, Proxemics and Trust for Self-Driving Car Social Navigation}, author = {Fanta Camara and Charles Fox}, publisher = {Social Robot Navigation: Advances and Evaluation}, year = {2022}, url = {https://eprints.lincoln.ac.uk/id/eprint/49183/}, abstract = {To navigate in human social spaces, self-driving cars and other robots must show social intelligence. This involves predicting and planning around pedestrians, understanding their personal space, and establishing trust with them. The present paper gives an overview of our ongoing work on modelling and controlling human–self-driving car interactions using game theory, proxemics and trust, and unifying these fields via quantitative models and robot controllers.} }
- H. Luan, M. Hua, J. Peng, S. Yue, S. Chen, and Q. Fu, “Accelerating motion perception model mimics the visual neuronal ensemble of crab,” in 2022 international joint conference on neural networks (ijcnn), 2022, p. 1–8. doi:10.1109/IJCNN55064.2022.9892540
[BibTeX] [Abstract] [Download PDF]
In nature, crabs have panoramic vision for the localization and perception of accelerating motion from local segments to global view in order to guide reactive behaviours including escape. The visual neuronal ensemble in crab plays crucial roles in such capability; however, it has never been investigated and modelled as an artificial vision system. To bridge this gap, we propose an accelerating motion perception model (AMPM) mimicking the visual neuronal ensemble in crab. The AMPM includes two main parts, wherein the pre-synaptic network from the previous modelling work simulates 16 MLG1 neurons covering the entire view to localize moving objects. The emphasis herein is laid on the original modelling of MLG1s' post-synaptic network to perceive accelerating motions from a global view, which employs a novel spatial-temporal difference encoder (STDE) and an adaptive spiking threshold temporal difference encoder (AT-TDE). Specifically, the STDE transforms "time-to-travel" between activations of two successive segments of MLG1 into excitatory post-synaptic current (EPSC), which decays with the elapse of time. The AT-TDE in the two directional (i.e., counter-clockwise and clockwise) accelerating detectors guarantees "non-firing" to constant movements. Accordingly, the accelerating motion can be effectively localized and perceived by the whole network. The systematic experiments verified the feasibility and robustness of the proposed method. The model responses to translational accelerating motion also fit many of the explored physiological features of direction selective neurons in the lobula complex of crab (i.e. lobula complex direction cells, LCDCs). This modelling study not only provides a reasonable hypothesis for such biological neural pathways, but is also critical for developing a new neuromorphic sensor strategy.
@inproceedings{lincoln52805, month = {September}, author = {Hao Luan and Mu Hua and Jigen Peng and Shigang Yue and Shengyong Chen and Qinbing Fu}, booktitle = {2022 International Joint Conference on Neural Networks (IJCNN)}, title = {Accelerating Motion Perception Model Mimics the Visual Neuronal Ensemble of Crab}, publisher = {IEEE}, doi = {10.1109/IJCNN55064.2022.9892540}, pages = {1--8}, year = {2022}, url = {https://eprints.lincoln.ac.uk/id/eprint/52805/}, abstract = {In nature, crabs have panoramic vision for the localization and perception of accelerating motion from local segments to global view in order to guide reactive behaviours including escape. The visual neuronal ensemble in crab plays crucial roles in such capability; however, it has never been investigated and modelled as an artificial vision system. To bridge this gap, we propose an accelerating motion perception model (AMPM) mimicking the visual neuronal ensemble in crab. The AMPM includes two main parts, wherein the pre-synaptic network from the previous modelling work simulates 16 MLG1 neurons covering the entire view to localize moving objects. The emphasis herein is laid on the original modelling of MLG1s' post-synaptic network to perceive accelerating motions from a global view, which employs a novel spatial-temporal difference encoder (STDE) and an adaptive spiking threshold temporal difference encoder (AT-TDE). Specifically, the STDE transforms "time-to-travel" between activations of two successive segments of MLG1 into excitatory post-synaptic current (EPSC), which decays with the elapse of time. The AT-TDE in the two directional (i.e., counter-clockwise and clockwise) accelerating detectors guarantees "non-firing" to constant movements. Accordingly, the accelerating motion can be effectively localized and perceived by the whole network. The systematic experiments verified the feasibility and robustness of the proposed method. The model responses to translational accelerating motion also fit many of the explored physiological features of direction selective neurons in the lobula complex of crab (i.e. lobula complex direction cells, LCDCs). This modelling study not only provides a reasonable hypothesis for such biological neural pathways, but is also critical for developing a new neuromorphic sensor strategy.} }
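The STDE described above turns the "time-to-travel" between activations of neighbouring segments into an exponentially decaying excitatory current, so faster (accelerating) motion yields stronger responses while constant motion stays sub-threshold. The sketch below is a toy rendering of that encoding; the time constant, threshold and firing rule are illustrative assumptions, not the paper's parameters.

```python
import math

# Toy rendering of the STDE idea: the shorter the time-to-travel between
# two neighbouring segments' activations, the larger the decayed EPSC.
# Time constant and threshold are illustrative assumptions.
TAU = 0.15  # seconds; EPSC decay constant

def epsc(time_to_travel):
    """Excitatory current from inter-segment travel time (decays with dt)."""
    return math.exp(-time_to_travel / TAU)

def detects_acceleration(travel_times, threshold=0.1):
    """Fire only if successive travel times shrink (speed increasing);
    constant motion gives a ~zero difference and stays sub-threshold."""
    diffs = [epsc(b) - epsc(a) for a, b in zip(travel_times, travel_times[1:])]
    return any(d > threshold for d in diffs)

constant = [0.30, 0.30, 0.30]       # equal travel times: no acceleration
accelerating = [0.30, 0.20, 0.12]   # shrinking travel times: speeding up
print(detects_acceleration(constant), detects_acceleration(accelerating))
```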
- M. C. Mayoral, L. Grimstad, P. J. From, and G. Cielniak, “Towards safety in open-field agricultural robotic applications: a method for human risk assessment using classifiers,” in 2022 15th international conference on human system interaction (hsi), 2022. doi:10.1109/HSI55341.2022.9869472
[BibTeX] [Abstract] [Download PDF]
Tractors and heavy machinery have been used for decades to improve quality and overall agricultural production. Moreover, agriculture is becoming a trending domain for robotics, and as a consequence, the efforts towards automating agricultural tasks increase year by year. However, for autonomous applications, accident prevention is of primary importance for guaranteeing human safety during operation in any scenario. This paper rephrases human safety as a classification problem using a custom distance criterion where each detected human gets a risk level classification. We propose the use of a neural network trained to detect and classify humans in the scene according to these criteria. The proposed approach learns from real-world data corresponding to an open-field scenario and is assessed with a custom risk assessment method.
@inproceedings{lincoln52846, booktitle = {2022 15th International Conference on Human System Interaction (HSI)}, month = {August}, title = {Towards Safety in Open-field Agricultural Robotic Applications: A Method for Human Risk Assessment using Classifiers}, author = {C. Mayoral Mayoral and Lars Grimstad and P{\r a}l J. From and Grzegorz Cielniak}, publisher = {IEEE}, year = {2022}, doi = {10.1109/HSI55341.2022.9869472}, url = {https://eprints.lincoln.ac.uk/id/eprint/52846/}, abstract = {Tractors and heavy machinery have been used for decades to improve quality and overall agricultural production. Moreover, agriculture is becoming a trending domain for robotics, and as a consequence, the efforts towards automating agricultural tasks increase year by year. However, for autonomous applications, accident prevention is of primary importance for guaranteeing human safety during operation in any scenario. This paper rephrases human safety as a classification problem using a custom distance criterion where each detected human gets a risk level classification. We propose the use of a neural network trained to detect and classify humans in the scene according to these criteria. The proposed approach learns from real-world data corresponding to an open-field scenario and is assessed with a custom risk assessment method.} }
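The reframing above, human safety as a classification problem over a distance criterion, can be pictured with a labelling function like the one below, which would generate training targets for a detector-classifier. The bands and level names are invented for illustration; the paper's custom criterion is not reproduced here.

```python
# Illustrative risk labelling over a distance criterion; the bands and
# level names are invented, not the paper's custom criterion.
RISK_BANDS = [          # (max distance in metres, risk level)
    (2.0, "critical"),  # inside the machine's immediate stopping zone
    (5.0, "high"),
    (10.0, "medium"),
    (float("inf"), "low"),
]

def risk_level(distance_m, approaching=False):
    """Label a detected human by distance; bump the level if approaching."""
    levels = [lvl for _, lvl in RISK_BANDS]
    for max_d, lvl in RISK_BANDS:
        if distance_m <= max_d:
            idx = levels.index(lvl)
            return levels[max(idx - 1, 0)] if approaching else lvl

print(risk_level(6.0), risk_level(6.0, approaching=True))
```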
- F. Camara and C. Fox, “Learning pedestrian social behaviour for game-theoretic self-driving cars,” in Rss pioneers workshop, 2022.
[BibTeX] [Abstract] [Download PDF]
Robot navigation in environments with static objects appears to be a solved problem, but navigating around humans in dynamic and unstructured environments remains an active research question. This requires not only advanced path planning methods but also a good perception system, models of multi-agent interactions and realistic hardware for testing. To operate in human social spaces, robots must also show social intelligence, i.e. the ability to understand human behaviour via explicit and implicit communication cues (e.g. proxemics) for better human-robot interactions (HRI) [28]. Similarly, autonomous vehicles (AVs), also called "self-driving cars", that are appearing on the roads need a better understanding of pedestrians' social behaviour, especially in urban areas [26]. In particular, previous work showed that pedestrians may take advantage of autonomous vehicles [13] by intentionally and constantly stepping in front of AVs, hence preventing them from making progress on the roads. This inability of current AVs to read the intention of other road users, predict their future behaviour and interact with them is known as "the big problem with self-driving cars" [1]. Thus, AVs need better decision-making models and must find a good balance between stopping for pedestrians when required and driving to reach their final destination as quickly as possible for their on-board passengers. A comprehensive review of existing pedestrian models for AVs, ranging from low-level sensing, detection and tracking models [9] to high-level interaction and game theoretic models of pedestrian behaviour [10], found that the lower-level models are accurate and mature enough to be deployed on AVs but more research is needed in the higher-level models. Hence, in this work, we focus on modelling, learning and operating pedestrian high-level social behaviour on self-driving cars using game theory and proxemics.
@inproceedings{lincoln50876, booktitle = {RSS Pioneers Workshop}, month = {June}, title = {Learning Pedestrian Social Behaviour for Game-Theoretic Self-Driving Cars}, author = {Fanta Camara and Charles Fox}, publisher = {RSS}, year = {2022}, url = {https://eprints.lincoln.ac.uk/id/eprint/50876/}, abstract = {Robot navigation in environments with static objects appears to be a solved problem, but navigating around humans in dynamic and unstructured environments remains an active research question. This requires not only advanced path planning methods but also a good perception system, models of multi-agent interactions and realistic hardware for testing. To operate in human social spaces, robots must also show social intelligence, i.e. the ability to understand human behaviour via explicit and implicit communication cues (e.g. proxemics) for better human-robot interactions (HRI) [28]. Similarly, autonomous vehicles (AVs), also called "self-driving cars", that are appearing on the roads need a better understanding of pedestrians' social behaviour, especially in urban areas [26]. In particular, previous work showed that pedestrians may take advantage of autonomous vehicles [13] by intentionally and constantly stepping in front of AVs, hence preventing them from making progress on the roads. This inability of current AVs to read the intention of other road users, predict their future behaviour and interact with them is known as "the big problem with self-driving cars" [1]. Thus, AVs need better decision-making models and must find a good balance between stopping for pedestrians when required and driving to reach their final destination as quickly as possible for their on-board passengers. A comprehensive review of existing pedestrian models for AVs, ranging from low-level sensing, detection and tracking models [9] to high-level interaction and game theoretic models of pedestrian behaviour [10], found that the lower-level models are accurate and mature enough to be deployed on AVs but more research is needed in the higher-level models. Hence, in this work, we focus on modelling, learning and operating pedestrian high-level social behaviour on self-driving cars using game theory and proxemics.} }
2021
- N. Andreakos, S. Yue, and V. Cutsuridis, “Quantitative investigation of memory recall performance of a computational microcircuit model of the hippocampus,” Brain informatics, vol. 8, p. 9, 2021. doi:10.1186/s40708-021-00131-7
[BibTeX] [Abstract] [Download PDF]
Memory, the process of encoding, storing, and maintaining information over time in order to influence future actions, is very important in our lives. Losing it comes at a great cost. Deciphering the biophysical mechanisms leading to recall improvement should thus be of utmost importance. In this study we embarked on the quest to computationally improve the recall performance of a bio-inspired microcircuit model of the mammalian hippocampus, a brain region responsible for the storage and recall of short-term declarative memories. The model consisted of excitatory and inhibitory cells. The cell properties followed closely what is currently known from the experimental neurosciences. Cells' firing was timed to a theta oscillation paced by two distinct neuronal populations exhibiting highly regular bursting activity, one tightly coupled to the trough and the other to the peak of theta. An excitatory input provided excitatory cells with context and timing information for retrieval of previously stored memory patterns. Inhibition to excitatory cells acted as a non-specific global threshold machine that removed spurious activity during recall. To systematically evaluate the model's recall performance against stored patterns, pattern overlap, network size and active cells per pattern, we selectively modulated feedforward and feedback excitatory and inhibitory pathways targeting specific excitatory and inhibitory cells. Of the different model variations (modulated pathways) tested, "Model 1" recall quality was excellent across all conditions; "Model 2" recall was the worst. The number of "active cells" representing a memory pattern was the determining factor in improving the model's recall performance regardless of the number of stored patterns and the overlap between them. As "active cells per pattern" decreased, the model's memory capacity increased, interference effects between stored patterns decreased, and recall quality improved.
@article{lincoln44717, volume = {8}, month = {December}, author = {Nikolas Andreakos and Shigang Yue and Vassilis Cutsuridis}, title = {Quantitative Investigation Of Memory Recall Performance Of A Computational Microcircuit Model Of The Hippocampus}, publisher = {SpringerOpen}, year = {2021}, journal = {Brain Informatics}, doi = {10.1186/s40708-021-00131-7}, pages = {9}, url = {https://eprints.lincoln.ac.uk/id/eprint/44717/}, abstract = {Memory, the process of encoding, storing, and maintaining information over time in order to influence future actions, is very important in our lives. Losing it comes at a great cost. Deciphering the biophysical mechanisms leading to recall improvement should thus be of utmost importance. In this study we embarked on the quest to computationally improve the recall performance of a bio-inspired microcircuit model of the mammalian hippocampus, a brain region responsible for the storage and recall of short-term declarative memories. The model consisted of excitatory and inhibitory cells. The cell properties followed closely what is currently known from the experimental neurosciences. Cells' firing was timed to a theta oscillation paced by two distinct neuronal populations exhibiting highly regular bursting activity, one tightly coupled to the trough and the other to the peak of theta. An excitatory input provided excitatory cells with context and timing information for retrieval of previously stored memory patterns. Inhibition to excitatory cells acted as a non-specific global threshold machine that removed spurious activity during recall. To systematically evaluate the model's recall performance against stored patterns, pattern overlap, network size and active cells per pattern, we selectively modulated feedforward and feedback excitatory and inhibitory pathways targeting specific excitatory and inhibitory cells. Of the different model variations (modulated pathways) tested, "Model 1" recall quality was excellent across all conditions; "Model 2" recall was the worst. The number of "active cells" representing a memory pattern was the determining factor in improving the model's recall performance regardless of the number of stored patterns and the overlap between them. As "active cells per pattern" decreased, the model's memory capacity increased, interference effects between stored patterns decreased, and recall quality improved.} }
- K. Munir, M. Ghafoor, M. Khafagy, and H. Ihshaish, “Agrosupportanalytics: a cloud-based complaints management and decision support system for sustainable farming in egypt,” Egyptian informatics journal, 2021. doi:10.1016/j.eij.2021.06.002
[BibTeX] [Abstract] [Download PDF]
Sustainable Farming requires up-to-date advice on crop diseases, patterns, and adequate prevention actions to face developing circumstances. Currently, in developing countries like Egypt, farmers' access to such information is extremely limited due to the agriculture support being either not available, inconsistent, or unreliable. The presented Cloud-based Complaints Management and Decision Support System for Sustainable Farming in Egypt, named AgroSupportAnalytics, aims to resolve the problem of both the lack of support and advice for farmers, and the inconsistencies of the current manual approach provided by agricultural experts. The key contribution is the development of an automated complaint management and decision support strategy, on the basis of extensive research on requirement analysis tailored for Egypt. The solution is grounded in the application of knowledge discovery and analysis on agricultural data and farmers' complaints, deployed on a Cloud platform, to provide farming stakeholders in Egypt with timely and suitable support. This paper presents the overall system architectural framework along with the information and storage services, which have been based on the requirements specification phases of the project along with the historical data sets of the past 10 years of farmers' complaints and enquiries in Egypt.
@article{lincoln47917, month = {June}, title = {AgroSupportAnalytics: A Cloud-based Complaints Management and Decision Support System for Sustainable Farming in Egypt}, author = {Kamran Munir and Mubeen Ghafoor and Mohamed Khafagy and Hisham Ihshaish}, publisher = {Elsevier}, year = {2021}, doi = {10.1016/j.eij.2021.06.002}, journal = {Egyptian Informatics Journal}, url = {https://eprints.lincoln.ac.uk/id/eprint/47917/}, abstract = {Sustainable Farming requires up-to-date advice on crop diseases, patterns, and adequate prevention actions to face developing circumstances. Currently, in developing countries like Egypt, farmers' access to such information is extremely limited due to the agriculture support being either not available, inconsistent, or unreliable. The presented Cloud-based Complaints Management and Decision Support System for Sustainable Farming in Egypt, named AgroSupportAnalytics, aims to resolve the problem of both the lack of support and advice for farmers, and the inconsistencies of the current manual approach provided by agricultural experts. The key contribution is the development of an automated complaint management and decision support strategy, on the basis of extensive research on requirement analysis tailored for Egypt. The solution is grounded in the application of knowledge discovery and analysis on agricultural data and farmers' complaints, deployed on a Cloud platform, to provide farming stakeholders in Egypt with timely and suitable support. This paper presents the overall system architectural framework along with the information and storage services, which have been based on the requirements specification phases of the project along with the historical data sets of the past 10 years of farmers' complaints and enquiries in Egypt.} }
- D. D. Barrie, M. Pandya, H. Pandya, M. Hanheide, and K. Elgeneidy, “A deep learning method for vision based force prediction of a soft fin ray gripper using simulation data,” Frontiers in robotics and ai, vol. 8, p. 631371, 2021. doi:10.3389/frobt.2021.631371
[BibTeX] [Abstract] [Download PDF]
Soft robotic grippers are increasingly desired in applications that involve grasping of complex and deformable objects. However, their flexible nature and non-linear dynamics makes the modelling and control difficult. Numerical techniques such as Finite Element Analysis (FEA) present an accurate way of modelling complex deformations. However, FEA approaches are computationally expensive and consequently challenging to employ for real-time control tasks. Existing analytical techniques simplify the modelling by approximating the deformed gripper geometry. Although this approach is less computationally demanding, it is limited in design scope and can lead to larger estimation errors. In this paper, we present a learning based framework that is able to predict contact forces as well as stress distribution from soft Fin Ray Effect (FRE) finger images in real-time. These images are used to learn internal representations for deformations using a deep neural encoder, which are further decoded to contact forces and stress maps using separate branches. The entire network is jointly learned in an end-to-end fashion. In order to address the challenge of having sufficient labelled data for training, we employ FEA to generate simulated images to supervise our framework. This leads to an accurate prediction, faster inference and availability of large and diverse data for better generalisability. Furthermore, our approach is able to predict a detailed stress distribution that can guide grasp planning, which would be particularly useful for delicate objects. Our proposed approach is validated by comparing the predicted contact forces to the computed ground-truth forces from FEA as well as real force sensor. We rigorously evaluate the performance of our approach under variations in contact point, object material, object shape, viewing angle, and level of occlusion.
@article{lincoln45569, volume = {8}, month = {May}, author = {Daniel De Barrie and Manjari Pandya and Harit Pandya and Marc Hanheide and Khaled Elgeneidy}, title = {A Deep Learning Method for Vision Based Force Prediction of a Soft Fin Ray Gripper Using Simulation Data}, publisher = {Frontiers Media}, year = {2021}, journal = {Frontiers in Robotics and AI}, doi = {10.3389/frobt.2021.631371}, pages = {631371}, url = {https://eprints.lincoln.ac.uk/id/eprint/45569/}, abstract = {Soft robotic grippers are increasingly desired in applications that involve grasping of complex and deformable objects. However, their flexible nature and non-linear dynamics makes the modelling and control difficult. Numerical techniques such as Finite Element Analysis (FEA) present an accurate way of modelling complex deformations. However, FEA approaches are computationally expensive and consequently challenging to employ for real-time control tasks. Existing analytical techniques simplify the modelling by approximating the deformed gripper geometry. Although this approach is less computationally demanding, it is limited in design scope and can lead to larger estimation errors. In this paper, we present a learning based framework that is able to predict contact forces as well as stress distribution from soft Fin Ray Effect (FRE) finger images in real-time. These images are used to learn internal representations for deformations using a deep neural encoder, which are further decoded to contact forces and stress maps using separate branches. The entire network is jointly learned in an end-to-end fashion. In order to address the challenge of having sufficient labelled data for training, we employ FEA to generate simulated images to supervise our framework. This leads to an accurate prediction, faster inference and availability of large and diverse data for better generalisability. Furthermore, our approach is able to predict a detailed stress distribution that can guide grasp planning, which would be particularly useful for delicate objects. Our proposed approach is validated by comparing the predicted contact forces to the computed ground-truth forces from FEA as well as real force sensor. We rigorously evaluate the performance of our approach under variations in contact point, object material, object shape, viewing angle, and level of occlusion.} }
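The architecture this abstract describes, one image encoder whose latent code is decoded by two branches (contact force and a stress map), is easy to sketch in PyTorch. The layer sizes, image resolution and loss weighting below are illustrative assumptions, not the paper's network.

```python
import torch
import torch.nn as nn

# Sketch of a shared encoder with two decoding branches (contact force
# + stress map), trained jointly end-to-end on FEA-simulated images.
# Layer sizes and loss weights are illustrative, not the paper's model.
class ForceStressNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(), nn.Linear(32 * 16 * 16, 128),
        )
        self.force_head = nn.Linear(128, 3)  # contact force (Fx, Fy, Fz)
        self.stress_head = nn.Sequential(    # decode a coarse stress map
            nn.Linear(128, 32 * 16 * 16), nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.force_head(z), self.stress_head(z)

net = ForceStressNet()
img = torch.randn(4, 1, 64, 64)              # batch of finger images
force, stress = net(img)
loss = force.pow(2).mean() + 0.1 * stress.pow(2).mean()  # joint loss stub
print(force.shape, stress.shape)
```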
- J. Aguzzi, C. Costa, M. Calisti, V. Funari, S. Stefanni, R. Danovaro, H. Gomes, F. Vecchi, L. Dartnell, P. Weiss, K. Nowak, D. Chatzievangelou, and S. Marini, “Research trends and future perspectives in marine biomimicking robotics,” Sensors, vol. 21, iss. 11, p. 3778, 2021. doi:10.3390/s21113778
[BibTeX] [Abstract] [Download PDF]
Mechatronic and soft robotics are taking inspiration from the animal kingdom to create new high-performance robots. Here, we focused on marine biomimetic research and used innovative bibliographic statistics tools to highlight established and emerging knowledge domains. A total of 6980 scientific publications were retrieved from the Scopus database (1950–2020), evidencing a sharp research increase in 2003–2004. Clustering analysis of countries' collaborations showed two major Asian-North American and European clusters. Three significant areas appeared: (i) energy provision, whose advancement mainly relies on microbial fuel cells; (ii) biomaterials for not yet fully operational soft-robotic solutions; and finally (iii) design and control, chiefly oriented to locomotor designs. In this scenario, marine biomimicking robotics still lacks solutions for long-lasting energy provision, which presently hinders operation autonomy. In the research environment, identifying natural processes by which living organisms obtain energy is thus urgent for sustaining energy-demanding tasks while, at the same time, natural designs must increasingly inform efforts to optimize energy consumption.
@article{lincoln46134, volume = {21}, number = {11}, month = {May}, author = {Jacopo Aguzzi and Corrado Costa and Marcello Calisti and Valerio Funari and Sergio Stefanni and Roberto Danovaro and Helena Gomes and Fabrizio Vecchi and Lewis Dartnell and Peter Weiss and Kathrin Nowak and Damianos Chatzievangelou and Simone Marini}, title = {Research Trends and Future Perspectives in Marine Biomimicking Robotics}, year = {2021}, journal = {Sensors}, doi = {10.3390/s21113778}, pages = {3778}, url = {https://eprints.lincoln.ac.uk/id/eprint/46134/}, abstract = {Mechatronic and soft robotics are taking inspiration from the animal kingdom to create new high-performance robots. Here, we focused on marine biomimetic research and used innovative bibliographic statistics tools, to highlight established and emerging knowledge domains. A total of 6980 scientific publications retrieved from the Scopus database (1950--2020), evidencing a sharp research increase in 2003--2004. Clustering analysis of countries collaborations showed two major Asian-North America and European clusters. Three significant areas appeared: (i) energy provision, whose advancement mainly relies on microbial fuel cells, (ii) biomaterials for not yet fully operational soft-robotic solutions; and finally (iii), design and control, chiefly oriented to locomotor designs. In this scenario, marine biomimicking robotics still lacks solutions for the long-lasting energy provision, which presently hinders operation autonomy. In the research environment, identifying natural processes by which living organisms obtain energy is thus urgent to sustain energy-demanding tasks while, at the same time, the natural designs must increasingly inform to optimize energy consumption.} }
- D. C. Rose, J. Lyon, A. de Broon, M. Hanheide, and S. Pearson, “Responsible development of autonomous robots in agriculture,” Nature food, vol. 2, iss. 5, p. 306–309, 2021. doi:10.1038/s43016-021-00287-9
[BibTeX] [Abstract] [Download PDF]
Despite the potential contributions of autonomous robots to agricultural sustainability, social, legal and ethical issues threaten adoption. We discuss how responsible innovation principles can be embedded into the user-centred design of autonomous robots and identify areas for further empirical research.
@article{lincoln45058, volume = {2}, number = {5}, month = {May}, author = {David Christian Rose and Jessica Lyon and Auvikki de Broon and Marc Hanheide and Simon Pearson}, title = {Responsible Development of Autonomous Robots in Agriculture}, publisher = {Springer Nature}, year = {2021}, journal = {Nature Food}, doi = {10.1038/s43016-021-00287-9}, pages = {306--309}, url = {https://eprints.lincoln.ac.uk/id/eprint/45058/}, abstract = {Despite the potential contributions of autonomous robots to agricultural sustainability, social, legal and ethical issues threaten adoption. We discuss how responsible innovation principles can be embedded into the user-centred design of autonomous robots and identify areas for further empirical research.} }
- A. Badiee, J. R. Wallbank, J. P. Fentanes, E. Trill, P. Scarlet, Y. Zhu, G. Cielniak, H. Cooper, J. R. Blake, J. G. Evans, M. Zreda, M. Köhli, and S. Pearson, “Using additional moderator to control the footprint of a COSMOS rover for soil moisture measurement,” Water resources research, vol. 57, iss. 6, p. e2020WR028478, 2021. doi:10.1029/2020WR028478
[BibTeX] [Abstract] [Download PDF]
Cosmic Ray Neutron Probes (CRNP) have found application in soil moisture estimation due to their conveniently large (>100 m) footprints. Here we explore the possibility of using high density polyethylene (HDPE) moderator to limit the field of view, and hence the footprint, of a soil moisture sensor formed of 12 CRNP mounted onto a mobile robotic platform (Thorvald) for better in-field localisation of moisture variation. URANOS neutron scattering simulations are used to show that 5 cm of additional HDPE moderator (used to shield the upper surface and sides of the detector) is sufficient to (i) reduce the footprint of the detector considerably, (ii) approximately double the percentage of neutrons detected from within 5 m of the detector, and (iii) leave unaffected the shape of the curve used to convert neutron counts into soil moisture. Simulation and rover measurements for a transect crossing between grass and concrete additionally suggest that (iv) soil moisture changes can be sensed over length scales of tens of meters or less (roughly an order of magnitude smaller than commonly used footprint distances), and (v) the additional moderator does not reduce the detected neutron count rate (and hence increase noise) as much as might be expected given the extent of the additional moderator. The detector with additional HDPE moderator was also used to conduct measurements on a stubble field over three weeks to test the rover system in measuring spatial and temporal soil moisture variation.
@article{lincoln45017, volume = {57}, number = {6}, month = {June}, author = {Amir Badiee and John R. Wallbank and Jaime Pulido Fentanes and Emily Trill and Peter Scarlet and Yongchao Zhu and Grzegorz Cielniak and Hollie Cooper and James R. Blake and Jonathan G. Evans and Marek Zreda and Markus K{\"o}hli and Simon Pearson}, title = {Using Additional Moderator to Control the Footprint of a COSMOS Rover for Soil Moisture Measurement}, publisher = {Wiley}, year = {2021}, journal = {Water Resources Research}, doi = {10.1029/2020WR028478}, pages = {e2020WR028478}, url = {https://eprints.lincoln.ac.uk/id/eprint/45017/}, abstract = {Cosmic Ray Neutron Probes (CRNP) have found application in soil moisture estimation due to their conveniently large ({\ensuremath{>}}100 m) footprints. Here we explore the possibility of using high density polyethylene (HDPE) moderator to limit the field of view, and hence the footprint, of a soil moisture sensor formed of 12 CRNP mounted on to a mobile robotic platform (Thorvald) for better in-field localisation of moisture variation. URANOS neutron scattering simulations are used to show that 5 cm of additional HDPE moderator (used to shield the upper surface and sides of the detector) is sufficient to (i), reduce the footprint of the detector considerably, (ii) approximately double the percentage of neutrons detected from within 5 m of the detector, and (iii), does not affect the shape of the curve used to convert neutron counts into soil moisture. Simulation and rover measurements for a transect crossing between grass and concrete additionally suggest that (iv), soil moisture changes can be sensed over a length scales of tens of meters or less (roughly an order of magnitude smaller than commonly used footprint distances), and (v), the additional moderator does not reduce the detected neutron count rate (and hence increase noise) as much as might be expected given the extent of the additional moderator. The detector with additional HDPE moderator was also used to conduct measurements on a stubble field over three weeks to test the rover system in measuring spatial and temporal soil moisture variation.} }
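For readers unfamiliar with point (iii) of the abstract, the count-to-moisture conversion in the CRNP literature typically follows the Desilets et al. (2010) curve sketched below. The shape and the constants a0, a1, a2 are the commonly quoted literature values; the dry-soil calibration count N0 is site-specific, and the value here is a made-up placeholder, not from the paper.

```python
def soil_moisture(N, N0=2000.0, a0=0.0808, a1=0.372, a2=0.115):
    """Gravimetric soil moisture from a neutron count rate N.

    Standard CRNP calibration shape (Desilets et al. 2010): the moderator in
    the paper changes the count rate N, but not this curve. N0 (count rate
    over dry soil) is a site-calibration constant; 2000.0 is a placeholder.
    """
    return a0 / (N / N0 - a1) - a2
```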
- S. D. Mohan, F. J. Davis, A. Badiee, P. Hadley, C. A. Twitchen, and S. Pearson, “Optical and thermal properties of commercial polymer film, modeling the albedo effect,” Journal of applied polymer science, vol. 138, iss. 24, p. 50581, 2021. doi:10.1002/app.50581
[BibTeX] [Abstract] [Download PDF]
Greenhouse cladding materials are an important part of greenhouse design. The cladding material controls the light transmission and distribution over the plants within the greenhouse, thereby exerting a major influence on the overall yield. Greenhouse claddings are typically translucent materials offering more diffusive transmission than reflection; however, the reflective properties of the films offer a potential route to increasing the surface albedo of the local environment. We model thermal properties by modeling the films based on their optical transmissions and reflections. We can use this data to estimate their albedo and determine the amount of short wave radiation that will be transmitted/reflected/blocked by the materials and how it can influence the local environment.
@article{lincoln44141, volume = {138}, number = {24}, month = {June}, author = {Saeed D Mohan and Fred J Davis and Amir Badiee and Paul Hadley and Carrie A Twitchen and Simon Pearson}, title = {Optical and thermal properties of commercial polymer film, modeling the albedo effect}, publisher = {Wiley}, year = {2021}, journal = {Journal of Applied Polymer Science}, doi = {10.1002/app.50581}, pages = {50581}, url = {https://eprints.lincoln.ac.uk/id/eprint/44141/}, abstract = {Greenhouse cladding materials are an important part of greenhouse design. The cladding material controls the light transmission and distribution over the plants within the greenhouse, thereby exerting a major influence on the overall yield. Greenhouse claddings are typically translucent materials offering more diffusive transmission than reflection; however, the reflective properties of the films offer a potential route to increasing the surface albedo of the local environment. We model thermal properties by modeling the films based on their optical transmissions and reflections. We can use this data to estimate their albedo and determine the amount of short wave radiation that will be transmitted/reflected/blocked by the materials and how it can influence the local environment.} }
- M. Al-Khafajiy, S. Otoum, T. Baker, M. Asim, Z. Maamar, M. Aloqaily, M. Taylor, and M. Randles, “Intelligent control and security of fog resources in healthcare systems via a cognitive fog model,” Acm transactions on internet technology, vol. 21, iss. 3, p. 1–23, 2021. doi:10.1145/3382770
[BibTeX] [Abstract] [Download PDF]
There have been significant advances in the field of the Internet of Things (IoT) recently, which have not always considered security or data security concerns: a high degree of security is required when considering the sharing of medical data over networks. In most IoT-based systems, especially those within smart homes and smart cities, there is a bridging point (fog computing) between a sensor network and the Internet which often just performs basic functions such as translating between the protocols used in the Internet and sensor networks, as well as small amounts of data processing. The fog nodes can have useful knowledge and potential for constructive security and control over both the sensor network and the data transmitted over the Internet. Smart healthcare services utilise such networks of IoT systems. It is therefore vital that medical data emanating from IoT systems is highly secure, to prevent fraudulent use, whilst maintaining quality of service by providing assured, verified and complete data. In this article, we examine the development of a Cognitive Fog (CF) model for secure, smart healthcare services that is able to make decisions such as opting in and out of running processes and invoking new processes when required, and providing security for the operational processes within the fog system. Overall, the proposed ensemble security model performed better in terms of Accuracy Rate and Detection Rate, and achieved a lower False Positive Rate (standard intrusion detection measurements), than three base classifiers (K-NN, DBSCAN, and DT) using a standard security dataset (NSL-KDD).
@article{lincoln47555, volume = {21}, number = {3}, month = {June}, author = {Mohammed Al-Khafajiy and Safa Otoum and Thar Baker and Muhammad Asim and Zakaria Maamar and Moayad Aloqaily and Mark Taylor and Martin Randles}, title = {Intelligent Control and Security of Fog Resources in Healthcare Systems via a Cognitive Fog Model}, publisher = {ACM}, year = {2021}, journal = {ACM Transactions on Internet Technology}, doi = {10.1145/3382770}, pages = {1--23}, url = {https://eprints.lincoln.ac.uk/id/eprint/47555/}, abstract = {There have been significant advances in the field of Internet of Things (IoT) recently, which have not always considered security or data security concerns: A high degree of security is required when considering the sharing of medical data over networks. In most IoT-based systems, especially those within smart-homes and smart-cities, there is a bridging point (fog computing) between a sensor network and the Internet which often just performs basic functions such as translating between the protocols used in the Internet and sensor networks, as well as small amounts of data processing. The fog nodes can have useful knowledge and potential for constructive security and control over both the sensor network and the data transmitted over the Internet. Smart healthcare services utilise such networks of IoT systems. It is therefore vital that medical data emanating from IoT systems is highly secure, to prevent fraudulent use, whilst maintaining quality of service providing assured, verified and complete data. In this article, we examine the development of a Cognitive Fog (CF) model, for secure, smart healthcare services, that is able to make decisions such as opting-in and opting-out from running processes and invoking new processes when required, and providing security for the operational processes within the fog system. Overall, the proposed ensemble security model performed better in terms of Accuracy Rate, Detection Rate, and a lower False Positive Rate (standard intrusion detection measurements) than three base classifiers (K-NN, DBSCAN, and DT) using a standard security dataset (NSL-KDD).} }
- H. Isakhani, C. Xiong, W. Chen, and S. Yue, “Towards locust-inspired gliding wing prototypes for micro aerial vehicle applications,” Royal society open science, vol. 8, iss. 6, p. 202253, 2021. doi:10.1098/rsos.202253
[BibTeX] [Abstract] [Download PDF]
In aviation, gliding is the most economical mode of flight, explicitly appreciated by natural fliers. They achieve it with high-performance wing structures evolved over millions of years in nature. Among other prehistoric beings, the locust (Schistocerca gregaria) is a perfect example of such a natural glider, capable of enduring transatlantic flights, and could inspire a practical solution to achieve similar capabilities on micro aerial vehicles. This study investigates the effects of haemolymph on the flexibility of several flying insect wings, further showcasing the superior structural performance of locusts. However, biomimicry of such aerodynamic and structural properties is hindered by the limitations of modern as well as conventional fabrication technologies in terms of availability and precision, respectively. Therefore, here we adopt finite element analysis (FEA) to investigate the manufacturing-worthiness of a 3D digitally reconstructed locust tandem wing, and propose novel combinations of economical and readily available manufacturing methods to develop the model into prototypes that are structurally similar to their counterparts in nature while maintaining the optimum gliding ratio previously obtained in the aerodynamic simulations. The latter is evaluated in a future study and the former is assessed here via an experimental analysis of the flexural stiffness and maximum deformation rate. Ultimately, a comparative study of the mechanical properties reveals the feasibility of each prototype for gliding micro aerial vehicle applications.
@article{lincoln47017, volume = {8}, number = {6}, month = {June}, author = {Hamid Isakhani and Caihua Xiong and Wenbin Chen and Shigang Yue}, title = {Towards locust-inspired gliding wing prototypes for micro aerial vehicle applications}, publisher = {The Royal Society}, year = {2021}, journal = {Royal Society Open Science}, doi = {10.1098/rsos.202253}, pages = {202253}, url = {https://eprints.lincoln.ac.uk/id/eprint/47017/}, abstract = {In aviation, gliding is the most economical mode of flight explicitly appreciated by natural fliers. They achieve it by high-performance wing structures evolved over millions of years in nature. Among other prehistoric beings, locust (Schistocerca gregaria) is a perfect example of such natural glider capable of endured transatlantic flights that could inspire a practical solution to achieve similar capabilities on micro aerial vehicles. This study investigates the effects of haemolymph on the flexibility of several flying insect wings further showcasing the superior structural performance of locusts. However, biomimicry of such aerodynamic and structural properties is hindered by the limitations of modern as well as conventional fabrication technologies in terms of availability and precision, respectively. Therefore, here we adopt finite element analysis (FEA) to investigate the manufacturing-worthiness of a 3D digitally reconstructed locust tandem wing, and propose novel combinations of economical and readily-available manufacturing methods to develop the model into prototypes that are structurally similar to their counterparts in nature while maintaining the optimum gliding ratio previously obtained in the aerodynamic simulations. Latter is evaluated in the future study and the former is assessed here via an experimental analysis of the flexural stiffness and maximum deformation rate. Ultimately, a comparative study of the mechanical properties reveals the feasibility of each prototype for gliding micro aerial vehicle applications.} }
- F. Camara, P. Dickinson, and C. Fox, “Evaluating pedestrian interaction preferences with a game theoretic autonomous vehicle in virtual reality,” Transportation research part f, vol. 78, p. 410–423, 2021. doi:10.1016/j.trf.2021.02.017
[BibTeX] [Abstract] [Download PDF]
Localisation and navigation of autonomous vehicles (AVs) in static environments are now solved problems, but how to control their interactions with other road users in mixed traffic environments, especially with pedestrians, remains an open question. Recent work has begun to apply game theory to model and control AV-pedestrian interactions as they compete for space on the road whilst trying to avoid collisions. But this game theory model has been developed only in unrealistic lab environments. To improve their realism, this study empirically examines pedestrian behaviour during road crossing in the presence of approaching autonomous vehicles in more realistic virtual reality (VR) environments. The autonomous vehicles are controlled using game theory, and this study seeks to find the best parameters for these controls to produce comfortable interactions for the pedestrians. In a first experiment, participants' trajectories reveal a more cautious crossing behaviour in VR than in previous laboratory experiments. In two further experiments, a gradient descent approach is used to investigate participants' preference for AV driving style. The results show that the majority of participants were not expecting the AV to stop in some scenarios, and there was no change in their crossing behaviour in two environments and with different car models suggestive of car and last-mile style vehicles. These results provide some initial estimates for game theoretic parameters needed by future AVs in their pedestrian interactions and more generally show how such parameters can be inferred from virtual reality experiments.
@article{lincoln44566, volume = {78}, month = {April}, author = {Fanta Camara and Patrick Dickinson and Charles Fox}, title = {Evaluating Pedestrian Interaction Preferences with a Game Theoretic Autonomous Vehicle in Virtual Reality}, publisher = {Elsevier}, year = {2021}, journal = {Transportation Research Part F}, doi = {10.1016/j.trf.2021.02.017}, pages = {410--423}, url = {https://eprints.lincoln.ac.uk/id/eprint/44566/}, abstract = {Localisation and navigation of autonomous vehicles (AVs) in static environments are now solved problems, but how to control their interactions with other road users in mixed traffic environments, especially with pedestrians, remains an open question. Recent work has begun to apply game theory to model and control AV-pedestrian interactions as they compete for space on the road whilst trying to avoid collisions. But this game theory model has been developed only in unrealistic lab environments. To improve their realism, this study empirically examines pedestrian behaviour during road crossing in the presence of approaching autonomous vehicles in more realistic virtual reality (VR) environments. The autonomous vehicles are controlled using game theory, and this study seeks to find the best parameters for these controls to produce comfortable interactions for the pedestrians. In a first experiment, participants' trajectories reveal a more cautious crossing behaviour in VR than in previous laboratory experiments. In two further experiments, a gradient descent approach is used to investigate participants' preference for AV driving style. The results show that the majority of participants were not expecting the AV to stop in some scenarios, and there was no change in their crossing behaviour in two environments and with different car models suggestive of car and last-mile style vehicles. These results provide some initial estimates for game theoretic parameters needed by future AVs in their pedestrian interactions and more generally show how such parameters can be inferred from virtual reality experiments.} }
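To make the "gradient descent approach" in the abstract above concrete, here is a deliberately toy sketch of fitting a scalar driving-style parameter to observed crossing decisions. The logistic yielding model and all variable names are invented for illustration; the paper's actual game-theoretic model is not reproduced here.

```python
import numpy as np

def predicted_yield_prob(theta, gap):
    # Toy logistic model: probability the pedestrian crosses given the time
    # gap to the approaching AV and a driving-style parameter theta.
    return 1.0 / (1.0 + np.exp(-(gap - theta)))

def fit_theta(gaps, crossed, lr=0.05, steps=500):
    """gaps: observed time gaps (array); crossed: 0/1 crossing decisions."""
    theta = 0.0
    for _ in range(steps):
        p = predicted_yield_prob(theta, gaps)
        # d/dtheta of the mean squared error (constant factor folded into lr)
        grad = np.mean((p - crossed) * p * (1.0 - p) * -1.0)
        theta -= lr * grad
    return theta
```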
- L. Gong, M. Yu, S. Jiang, V. Cutsuridis, and S. Pearson, “Deep learning based prediction on greenhouse crop yield combined TCN and RNN,” Sensors, vol. 21, iss. 13, p. 4537, 2021. doi:10.3390/s21134537
[BibTeX] [Abstract] [Download PDF]
Currently, greenhouses are widely applied for plant growth, and environmental parameters can also be controlled in the modern greenhouse to guarantee the maximum crop yield. In order to optimally control greenhouses' environmental parameters, one indispensable requirement is to accurately predict crop yields based on given environmental parameter settings. In addition, crop yield forecasting in greenhouses plays an important role in greenhouse farming planning and management, which allows cultivators and farmers to utilize the yield prediction results to make knowledgeable management and financial decisions. It is thus important to accurately predict the crop yield in a greenhouse considering the benefits that can be brought by accurate greenhouse crop yield prediction. In this work, we have developed a new greenhouse crop yield prediction technique by combining two state-of-the-art networks for temporal sequence processing: the temporal convolutional network (TCN) and the recurrent neural network (RNN). Comprehensive evaluations of the proposed algorithm have been made on multiple datasets obtained from multiple real greenhouse sites for tomato growing. Based on a statistical analysis of the root mean square errors (RMSEs) between the predicted and actual crop yields, it is shown that the proposed approach achieves more accurate yield prediction performance than both traditional machine learning methods and other classical deep neural networks. Moreover, the experimental study also shows that the historical yield information is the most important factor for accurately predicting future crop yields.
@article{lincoln46522, volume = {21}, number = {13}, month = {July}, author = {Liyun Gong and Miao Yu and Shouyong Jiang and Vassilis Cutsuridis and Simon Pearson}, title = {Deep Learning Based Prediction on Greenhouse Crop Yield Combined TCN and RNN}, publisher = {MDPI}, year = {2021}, journal = {Sensors}, doi = {10.3390/s21134537}, pages = {4537}, url = {https://eprints.lincoln.ac.uk/id/eprint/46522/}, abstract = {Currently, greenhouses are widely applied for plant growth, and environmental parameters can also be controlled in the modern greenhouse to guarantee the maximum crop yield. In order to optimally control greenhouses' environmental parameters, one indispensable requirement is to accurately predict crop yields based on given environmental parameter settings. In addition, crop yield forecasting in greenhouses plays an important role in greenhouse farming planning and management, which allows cultivators and farmers to utilize the yield prediction results to make knowledgeable management and financial decisions. It is thus important to accurately predict the crop yield in a greenhouse considering the benefits that can be brought by accurate greenhouse crop yield prediction. In this work, we have developed a new greenhouse crop yield prediction technique, by combining two state-of-the-arts networks for temporal sequence processing{--}temporal convolutional network (TCN) and recurrent neural network (RNN). Comprehensive evaluations of the proposed algorithm have been made on multiple datasets obtained from multiple real greenhouse sites for tomato growing. Based on a statistical analysis of the root mean square errors (RMSEs) between the predicted and actual crop yields, it is shown that the proposed approach achieves more accurate yield prediction performance than both traditional machine learning methods and other classical deep neural networks. Moreover, the experimental study also shows that the historical yield information is the most important factor for accurately predicting future crop yields.} }
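A minimal PyTorch sketch of the TCN-plus-RNN idea from the abstract above: dilated temporal convolutions extract local patterns from the environmental/yield sequence, and a recurrent layer summarises them for the forecast. Channel counts, dilations and the use of a GRU are assumptions for illustration; the paper's exact configuration is not stated here.

```python
import torch
import torch.nn as nn

class TCNRNN(nn.Module):
    def __init__(self, n_features, hidden=64):
        super().__init__()
        # TCN front end: dilated 1-D convolutions over the time axis
        self.tcn = nn.Sequential(
            nn.Conv1d(n_features, hidden, 3, padding=1, dilation=1), nn.ReLU(),
            nn.Conv1d(hidden, hidden, 3, padding=2, dilation=2), nn.ReLU(),
        )
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)          # e.g. next-week yield

    def forward(self, x):
        # x: (batch, time, features); Conv1d wants (batch, channels, time)
        h = self.tcn(x.transpose(1, 2)).transpose(1, 2)
        out, _ = self.rnn(h)
        return self.head(out[:, -1])              # forecast from last step
```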
- L. Freund, S. Al-Majeed, and A. Millard, “Complexity space modelling for industrial manufacturing systems,” International journal of computing and digital systems, 2021.
[BibTeX] [Abstract] [Download PDF]
The static and dynamic complexity of an industrial engineered system are integrated in a complexity space modelling approach, where information complexity boundaries expand over time and serve as an indicator for system instability in a static complexity space. In a first step, model-based static and dynamic conceptions of complexity are introduced and described. The necessary capabilities are theoretically demonstrated, alongside a set of assumptions concerning the behavior of industrial system complexity and its functions as a core foundation for the proposed complexity space model. In a second step, the successful application of the proposed modelling approach on a real-world industrial system is presented. Case study results are briefly presented and discussed as a first proof of concept for the general applicability of the proposed modelling approach for current and future industrial systems. In a final step a short research outlook is provided.
@article{lincoln46666, month = {July}, title = {Complexity Space Modelling for Industrial Manufacturing Systems}, author = {Lucas Freund and Salah Al-Majeed and Alan Millard}, publisher = {University of Bahrain}, year = {2021}, journal = {International Journal of Computing and Digital Systems}, url = {https://eprints.lincoln.ac.uk/id/eprint/46666/}, abstract = {The static and dynamic complexity of an industrial engineered system are integrated in a complexity space modelling approach, where information complexity boundaries expand over time and serve as an indicator for system instability in a static complexity space. In a first step, model-based static and dynamic conceptions of complexity are introduced and described. The necessary capabilities are theoretically demonstrated, alongside a set of assumptions concerning the behavior of industrial system complexity and its functions as a core foundation for the proposed complexity space model. In a second step, the successful application of the proposed modelling approach on a real-world industrial system is presented. Case study results are briefly presented and discussed as a first proof of concept for the general applicability of the proposed modelling approach for current and future industrial systems. In a final step a short research outlook is provided.} }
- L. Gong, M. Yu, S. Jiang, V. Cutsuridis, S. Kollias, and S. Pearson, “Studies of evolutionary algorithms for the reduced Tomgro model calibration for modelling tomato yields,” Smart agricultural technology, vol. 1, p. 100011, 2021. doi:10.1016/j.atech.2021.100011
[BibTeX] [Abstract] [Download PDF]
The reduced Tomgro model is one of the popular biophysical models, which can reflect the actual growth process and model the yields of tomato based on environmental parameters in a greenhouse. It is commonly integrated with the greenhouse environmental control system for optimally controlling environmental parameters to maximize tomato growth/yields under acceptable energy consumption. In this work, we compare three mainstream evolutionary algorithms (genetic algorithm (GA), particle swarm optimization (PSO), and differential evolution (DE)) for calibrating the reduced Tomgro model to model the tomato mature fruit dry matter (DM) weights. The different evolutionary algorithms have been applied to calibrate 14 key parameters of the reduced Tomgro model, and the performance of the calibrated Tomgro models based on different evolutionary algorithms has been evaluated on three datasets obtained from a real tomato grower, with each dataset containing greenhouse environmental parameters (e.g., carbon dioxide concentration, temperature, photosynthetically active radiation (PAR)) and tomato yield information at a particular greenhouse for one year. Multiple metrics (root mean square errors (RMSEs), relative root mean square errors (r-RMSEs), and mean average errors (MAEs)) between actual DM weights and model-simulated ones for all three datasets are used to validate the performance of the calibrated reduced Tomgro model.
@article{lincoln46525, volume = {1}, month = {December}, author = {Liyun Gong and Miao Yu and Shouyong Jiang and Vassilis Cutsuridis and Stefanos Kollias and Simon Pearson}, title = {Studies of evolutionary algorithms for the reduced Tomgro model calibration for modelling tomato yields}, publisher = {Elsevier}, year = {2021}, journal = {Smart Agricultural Technology}, doi = {10.1016/j.atech.2021.100011}, pages = {100011}, url = {https://eprints.lincoln.ac.uk/id/eprint/46525/}, abstract = {The reduced Tomgro model is one of the popular biophysical models, which can reflect the actual growth process and model the yields of tomato-based on environmental parameters in a greenhouse. It is commonly integrated with the greenhouse environmental control system for optimally controlling environmental parameters to maximize the tomato growth/yields under acceptable energy consumption. In this work, we compare three mainstream evolutionary algorithms (genetic algorithm (GA), particle swarm optimization (PSO), and differential evolutionary (DE)) for calibrating the reduced Tomgro model, to model the tomato mature fruit dry matter (DM) weights. Different evolutionary algorithms have been applied to calibrate 14 key parameters of the reduced Tomgro model. And the performance of the calibrated Tomgro models based on different evolutionary algorithms has been evaluated based on three datasets obtained from a real tomato grower, with each dataset containing greenhouse environmental parameters (e.g., carbon dioxide concentration, temperature, photosynthetically active radiation (PAR)) and tomato yield information at a particular greenhouse for one year. Multiple metrics (root mean square errors (RMSEs), relative root mean square errors (r-RSMEs), and mean average errors (MAEs)) between actual DM weights and model-simulated ones for all three datasets, are used to validate the performance of calibrated reduced Tomgro model.} }
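The calibration loop described above reduces to minimising the RMSE between simulated and measured dry-matter weights over the 14 model parameters. The sketch below uses SciPy's differential_evolution in the role of the DE variant compared in the paper; `tomgro_simulate`, the bounds and the toy data are hypothetical stand-ins, not the paper's actual model or values.

```python
import numpy as np
from scipy.optimize import differential_evolution

def tomgro_simulate(params, env):
    # Hypothetical stand-in for the reduced Tomgro simulator: any function
    # mapping 14 parameters + an environment series to weekly DM weights
    # fits here (this toy uses only the first few parameters).
    return env @ params[: env.shape[1]]

def rmse(params, env, observed_dm):
    return np.sqrt(np.mean((tomgro_simulate(params, env) - observed_dm) ** 2))

env = np.random.rand(52, 3)           # e.g. weekly CO2, temperature, PAR
observed_dm = np.random.rand(52)      # measured dry-matter weights
bounds = [(0.0, 1.0)] * 14            # placeholder ranges for 14 parameters
result = differential_evolution(rmse, bounds, args=(env, observed_dm),
                                seed=0, maxiter=50)
```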
- S. Brewer, S. Pearson, R. Maull, P. Godsiff, J. G. Frey, A. Zisman, G. Parr, A. McMillan, S. Cameron, H. Blackmore, L. Manning, and L. Bidaut, “A trust framework for digital food systems,” Nature food, vol. 2, p. 543–545, 2021. doi:10.1038/s43016-021-00346-1
[BibTeX] [Abstract] [Download PDF]
The full potential for a digitally transformed food system has not yet been realised – or indeed imagined. Data flows across, and within, vast but largely decentralised and tiered supply chain networks. Data defines internal inputs, bi-directional flows of food, information and finance within the supply chain, and intended and extraneous outputs. Data exchanges can orchestrate critical network dependencies, define standards and underpin food safety. Poore and Nemecek [1] hypothesised that digital technologies could drive system transformation for the public good by empowering personalised selection of foods with, for example, lower intrinsic greenhouse gas emissions. Here, we contend that the full potential of a digitally transformed food system can only be realised if permissioned and trusted data can flow seamlessly through complex, multi-lateral supply chains, effectively from farms through to the consumer.
@article{lincoln47264, volume = {2}, month = {August}, author = {Steve Brewer and Simon Pearson and Roger Maull and Phil Godsiff and Jeremy G. Frey and Andrea Zisman and Gerard Parr and Andrew McMillan and Sarah Cameron and Hannah Blackmore and Louise Manning and Luc Bidaut}, title = {A trust framework for digital food systems.}, publisher = {Nature Research}, year = {2021}, journal = {Nature Food}, doi = {10.1038/s43016-021-00346-1}, pages = {543--545}, url = {https://eprints.lincoln.ac.uk/id/eprint/47264/}, abstract = {The full potential for a digitally transformed food system has not yet been realised - or indeed imagined. Data flows across, and within, vast but largely decentralised and tiered supply chain networks. Data defines internal inputs, bi-directional flows of food, information and finance within the supply chain, and intended and extraneous outputs. Data exchanges can orchestrate critical network dependencies, define standards and underpin food safety. Poore and Nemecek1 hypothesised that digital technologies could drive system transformation for the public good by empowering personalised selection of foods with, for example, lower intrinsic greenhouse gas emissions. Here, we contend that the full potential of a digitally transformed food system can only be realised if permissioned and trusted data can flow seemlessly through complex, multi-lateral supply chains, effectively from farms through to the consumer.} }
- Q. Fu, X. Sun, T. Liu, C. Hu, and S. Yue, “Robustness of bio-inspired visual systems for collision prediction in critical robot traffic,” Frontiers in robotics and ai, vol. 8, p. 529872, 2021. doi:10.3389/frobt.2021.529872
[BibTeX] [Abstract] [Download PDF]
Collision prevention sets a major research and development obstacle for intelligent robots and vehicles. This paper investigates the robustness of two state-of-the-art neural network models inspired by the locust's LGMD-1 and LGMD-2 visual pathways as fast and low-energy collision alert systems in critical scenarios. Although both neural circuits have been studied and modelled intensively, their capability and robustness against real-time critical traffic scenarios, where real physical crashes will happen, have never been systematically investigated due to the difficulty and high cost of replicating risky traffic with many crash occurrences. To close this gap, we apply a recently published robotic platform to test the LGMD-inspired visual systems in a physical implementation of critical traffic scenarios at low cost and high flexibility. The proposed visual systems are applied as the only collision-sensing modality in each micro-mobile robot to conduct avoidance by abrupt braking. The simulated traffic resembles on-road sections including intersection and highway scenes, wherein the roadmaps are rendered by coloured, artificial pheromones upon a wide LCD screen acting as the ground of an arena. The robots, with light sensors at the bottom, can recognise the lanes and signals and tightly follow paths. The emphasis herein is laid on corroborating the robustness of the LGMD neural system models in different dynamic robot scenes to timely alert potential crashes. This study well complements previous experimentation on such bio-inspired computations for collision prediction in more critical physical scenarios, and for the first time demonstrates the robustness of LGMD-inspired visual systems in critical traffic towards a reliable collision alert system under constrained computation power. This paper also exhibits a novel, tractable, and affordable robotic approach to evaluate online visual systems in dynamic scenes.
@article{lincoln46873, volume = {8}, month = {August}, author = {Qinbing Fu and Xuelong Sun and Tian Liu and Cheng Hu and Shigang Yue}, title = {Robustness of Bio-Inspired Visual Systems for Collision Prediction in Critical Robot Traffic}, publisher = {Frontiers Media}, year = {2021}, journal = {Frontiers in Robotics and AI}, doi = {10.3389/frobt.2021.529872}, pages = {529872}, url = {https://eprints.lincoln.ac.uk/id/eprint/46873/}, abstract = {Collision prevention sets a major research and development obstacle for intelligent robots and vehicles. This paper investigates the robustness of two state-of-the-art neural network models inspired by the locust's LGMD-1 and LGMD-2 visual pathways as fast and low-energy collision alert systems in critical scenarios. Although both the neural circuits have been studied and modelled intensively, their capability and robustness against real-time critical traffic scenarios where real-physical crashes will happen have never been systematically investigated due to difficulty and high price in replicating risky traffic with many crash occurrences. To close this gap, we apply a recently published robotic platform to test the LGMDs inspired visual systems in physical implementation of critical traffic scenarios at low cost and high flexibility. The proposed visual systems are applied as the only collision sensing modality in each micro-mobile robot to conduct avoidance by abrupt braking. The simulated traffic resembles on-road sections including the intersection and highway scenes wherein the roadmaps are rendered by coloured, artificial pheromones upon a wide LCD screen acting as the ground of an arena. The robots with light sensors at bottom can recognise the lanes and signals, tightly follow paths. The emphasis herein is laid on corroborating the robustness of LGMDs neural systems model in different dynamic robot scenes to timely alert potential crashes. This study well complements previous experimentation on such bio-inspired computations for collision prediction in more critical physical scenarios, and for the first time demonstrates the robustness of LGMDs inspired visual systems in critical traffic towards a reliable collision alert system under constrained computation power. This paper also exhibits a novel, tractable, and affordable robotic approach to evaluate online visual systems in dynamic scenes.} }
- I. Sassoon, N. Kokciyan, S. Modgil, and S. Parsons, “Argumentation schemes for clinical decision support,” Argument & computation, 2021. doi:10.3233/AAC-200550
[BibTeX] [Abstract] [Download PDF]
This paper demonstrates how argumentation schemes can be used in decision support systems that help clinicians in making treatment decisions. The work builds on the use of computational argumentation, a rigorous approach to reasoning with complex data that places strong emphasis on being able to justify and explain the decisions that are recommended. The main contribution of the paper is to present a novel set of specialised argumentation schemes that can be used in the context of a clinical decision support system to assist in reasoning about what treatments to offer. These schemes provide a mechanism for capturing clinical reasoning in such a way that it can be handled by the formal reasoning mechanisms of formal argumentation. The paper describes how the integration between argumentation schemes and formal argumentation may be carried out, sketches how this is achieved by an implementation that we have created, and illustrates the overall process on a small set of case studies.
@article{lincoln46566, month = {August}, title = {Argumentation Schemes for Clinical Decision Support}, author = {Isabel Sassoon and Nadin Kokciyan and Sanjay Modgil and Simon Parsons}, publisher = {IOS Press}, year = {2021}, doi = {10.3233/AAC-200550}, journal = {Argument \& Computation}, url = {https://eprints.lincoln.ac.uk/id/eprint/46566/}, abstract = {This paper demonstrates how argumentation schemes can be used in decision support systems that help clinicians in making treatment decisions. The work builds on the use of computational argumentation, a rigorous approach to reasoning with complex data that places strong emphasis on being able to justify and explain the decisions that are recommended. The main contribution of the paper is to present a novel set of specialised argumentation schemes that can be used in the context of a clinical decision support system to assist in reasoning about what treatments to offer. These schemes provide a mechanism for capturing clinical reasoning in such a way that it can be handled by the formal reasoning mechanisms of formal argumentation. The paper describes how the integration between argumentation schemes and formal argumentation may be carried out, sketches how this is achieved by an implementation that we have created, and illustrates the overall process on a small set of case studies.} }
- C. Jansen and E. Sklar, “Exploring co-creative drawing workflows,” Frontiers in robotics and ai, vol. 8, 2021. doi:10.3389/frobt.2021.577770
[BibTeX] [Abstract] [Download PDF]
This article presents the outcomes from a mixed-methods study of drawing practitioners (e.g., professional illustrators, fine artists, and art students) that was conducted in Autumn 2018 as a preliminary investigation for the development of a physical human-AI co-creative drawing system. The aim of the study was to discover possible roles that technology could play in observing, modeling, and possibly assisting an artist with their drawing. The study had three components: a paper survey of artists’ drawing practices, technology usage and attitudes; video-recorded drawing exercises; and a follow-up semi-structured interview which included a co-design discussion on how AI might contribute to their drawing workflow. Key themes identified from the interviews were (1) drawing with physical mediums is a traditional and primary way of creation; (2) artists’ views on AI varied, where co-creative AI is preferable to didactic AI; and (3) artists have a critical and skeptical view on the automation of creative work with AI. Participants’ input provided the basis for the design and technical specifications of a co-creative drawing prototype, for which details are presented in this article. In addition, lessons learned from conducting the user study are presented with a reflection on future studies with drawing practitioners.
@article{lincoln50873, volume = {8}, month = {May}, author = {Chipp Jansen and Elizabeth Sklar}, title = {Exploring Co-creative Drawing Workflows}, publisher = {Frontiers}, journal = {Frontiers in Robotics and AI}, doi = {10.3389/frobt.2021.577770}, year = {2021}, url = {https://eprints.lincoln.ac.uk/id/eprint/50873/}, abstract = {This article presents the outcomes from a mixed-methods study of drawing practitioners (e.g., professional illustrators, fine artists, and art students) that was conducted in Autumn 2018 as a preliminary investigation for the development of a physical human-AI co-creative drawing system. The aim of the study was to discover possible roles that technology could play in observing, modeling, and possibly assisting an artist with their drawing. The study had three components: a paper survey of artists' drawing practises, technology usage and attitudes, video recorded drawing exercises and a follow-up semi-structured interview which included a co-design discussion on how AI might contribute to their drawing workflow. Key themes identified from the interviews were (1) drawing with physical mediums is a traditional and primary way of creation; (2) artists' views on AI varied, where co-creative AI is preferable to didactic AI; and (3) artists have a critical and skeptical view on the automation of creative work with AI. Participants' input provided the basis for the design and technical specifications of a co-creative drawing prototype, for which details are presented in this article. In addition, lessons learned from conducting the user study are presented with a reflection on future studies with drawing practitioners.} }
- A. S. Gomez, E. Aptoula, S. Parsons, and P. Bosilj, “Deep regression versus detection for counting in robotic phenotyping,” Ieee robotics and automation letters, vol. 6, iss. 2, p. 2902–2907, 2021. doi:10.1109/LRA.2021.3062586
[BibTeX] [Abstract] [Download PDF]
Work in robotic phenotyping requires computer vision methods that estimate the number of fruit or grains in an image. To decide what to use, we compared three methods for counting fruit and grains, each method representative of a class of approaches from the literature. These are two methods based on density estimation and regression (single and multiple column), and one method based on object detection. We found that when the density of objects in an image is low, the approaches are comparable, but as the density increases, counting by regression becomes steadily more accurate than counting by detection. With more than a hundred objects per image, the error in the count predicted by detection-based methods is up to 5 times higher than when using regression-based ones.
@article{lincoln44001, volume = {6}, number = {2}, month = {April}, author = {Adrian Salazar Gomez and E Aptoula and Simon Parsons and Petra Bosilj}, title = {Deep Regression versus Detection for Counting in Robotic Phenotyping}, publisher = {IEEE}, year = {2021}, journal = {IEEE Robotics and Automation Letters}, doi = {10.1109/LRA.2021.3062586}, pages = {2902--2907}, url = {https://eprints.lincoln.ac.uk/id/eprint/44001/}, abstract = {Work in robotic phenotyping requires computer vision methods that estimate the number of fruit or grains in an image. To decide what to use, we compared three methods for counting fruit and grains, each method representative of a class of approaches from the literature. These are two methods based on density estimation and regression (single and multiple column), and one method based on object detection. We found that when the density of objects in an image is low, the approaches are comparable, but as the density increases, counting by regression becomes steadily more accurate than counting by detection. With more than a hundred objects per image, the error in the count predicted by detection-based methods is up to 5 times higher than when using regression-based ones.} }
- A. G. Esfahani, K. N. Sasikolomi, H. Hashempour, and F. Zhong, “Deep-lfd: deep robot learning from demonstrations,” Software impacts, vol. 9, p. 100087, 2021. doi:10.1016/j.simpa.2021.100087
[BibTeX] [Abstract] [Download PDF]
Like other robot learning from demonstration (LfD) approaches, deep-LfD builds a task model from sample demonstrations. However, unlike conventional LfD, the deep-LfD model learns the relation between high dimensional visual sensory information and robot trajectory/path. This paper presents a dataset of successful needle insertion by da Vinci Research Kit into deformable objects based on which several deep-LfD models are built as a benchmark of models learning robot controller for the needle insertion task.
@article{lincoln45212, volume = {9}, month = {August}, author = {Amir Ghalamzan Esfahani and Kiyanoush Nazari Sasikolomi and Hamidreza Hashempour and Fangxun Zhong}, title = {Deep-LfD: Deep robot learning from demonstrations}, publisher = {Elsevier}, year = {2021}, journal = {Software Impacts}, doi = {10.1016/j.simpa.2021.100087}, pages = {100087}, url = {https://eprints.lincoln.ac.uk/id/eprint/45212/}, abstract = {Like other robot learning from demonstration (LfD) approaches, deep-LfD builds a task model from sample demonstrations. However, unlike conventional LfD, the deep-LfD model learns the relation between high dimensional visual sensory information and robot trajectory/path. This paper presents a dataset of successful needle insertion by da Vinci Research Kit into deformable objects based on which several deep-LfD models are built as a benchmark of models learning robot controller for the needle insertion task.} }
- T. Vintr, Z. Yan, K. Eyisoy, F. Kubis, J. Blaha, J. Ulrich, C. Swaminathan, S. M. Mellado, T. Kucner, M. Magnusson, G. Cielniak, J. Faigl, T. Duckett, A. Lilienthal, and T. Krajnik, “Natural criteria for comparison of pedestrian flow forecasting models,” 2020 ieee/rsj international conference on intelligent robots and systems (iros), p. 11197–11204, 2021. doi:10.1109/IROS45743.2020.9341672
[BibTeX] [Abstract] [Download PDF]
Models of human behaviour, such as pedestrian flows, are beneficial for safe and efficient operation of mobile robots. We present a new methodology for benchmarking of pedestrian flow models based on the afforded safety of robot navigation in human-populated environments. While previous evaluations of pedestrian flow models focused on their predictive capabilities, we assess their ability to support safe path planning and scheduling. Using real-world datasets gathered continuously over several weeks, we benchmark state-of-the-art pedestrian flow models, including both time-averaged and time-sensitive models. In the evaluation, we use the learned models to plan robot trajectories and then observe the number of times when the robot gets too close to humans, using a predefined social distance threshold. The experiments show that while traditional evaluation criteria based on model fidelity differ only marginally, the introduced criteria vary significantly depending on the model used, providing a natural interpretation of the expected safety of the system. For the time-averaged flow models, the number of encounters increases linearly with the percentage operating time of the robot, as might be reasonably expected. By contrast, for the time-sensitive models, the number of encounters grows sublinearly with the percentage operating time, by planning to avoid congested areas and times.
@article{lincoln48928, month = {February}, author = {Tomas Vintr and Zhi Yan and Kerem Eyisoy and Filip Kubis and Jan Blaha and Jiri Ulrich and Chittaranjan Swaminathan and Sergio Molina Mellado and Tomasz Kucner and Martin Magnusson and Grzegorz Cielniak and Jan Faigl and Tom Duckett and Achim Lilienthal and Tomas Krajnik}, title = {Natural criteria for comparison of pedestrian flow forecasting models}, publisher = {IEEE}, journal = {2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)}, doi = {10.1109/IROS45743.2020.9341672}, pages = {11197--11204}, year = {2021}, url = {https://eprints.lincoln.ac.uk/id/eprint/48928/}, abstract = {Models of human behaviour, such as pedestrian flows, are beneficial for safe and efficient operation of mobile robots. We present a new methodology for benchmarking of pedestrian flow models based on the afforded safety of robot navigation in human-populated environments. While previous evaluations of pedestrian flow models focused on their predictive capabilities, we assess their ability to support safe path planning and scheduling. Using real-world datasets gathered continuously over several weeks, we benchmark state-of-the-art pedestrian flow models, including both time-averaged and time-sensitive models. In the evaluation, we use the learned models to plan robot trajectories and then observe the number of times when the robot gets too close to humans, using a predefined social distance threshold. The experiments show that while traditional evaluation criteria based on model fidelity differ only marginally, the introduced criteria vary significantly depending on the model used, providing a natural interpretation of the expected safety of the system. For the time-averaged flow models, the number of encounters increases linearly with the percentage operating time of the robot, as might be reasonably expected. By contrast, for the time-sensitive models, the number of encounters grows sublinearly with the percentage operating time, by planning to avoid congested areas and times.} }
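The benchmark criterion described above is easy to state in code: count the time steps at which a planned robot trajectory comes within the social-distance threshold of any pedestrian. The sketch below assumes time-aligned (T, 2) position arrays, which is an illustrative simplification of the paper's real-world trajectory data.

```python
import numpy as np

def count_encounters(robot_xy, pedestrians_xy, threshold=1.0):
    """robot_xy: (T, 2) planned positions; pedestrians_xy: (P, T, 2)
    observed pedestrian positions; returns the number of time steps at
    which the robot is within `threshold` metres of any pedestrian."""
    dists = np.linalg.norm(pedestrians_xy - robot_xy[None, :, :], axis=-1)
    return int((dists.min(axis=0) < threshold).sum())
```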
- H. Wang, H. Wang, J. Zhao, C. Hu, J. Peng, and S. Yue, “A time-delay feedback neural network for discriminating small, fast-moving targets in complex dynamic environments,” Ieee transactions on neural networks and learning systems, 2021. doi:10.1109/TNNLS.2021.3094205
[BibTeX] [Abstract] [Download PDF]
Discriminating small moving objects within complex visual environments is a significant challenge for autonomous micro robots that are generally limited in computational power. By exploiting their highly evolved visual systems, flying insects can effectively detect mates and track prey during rapid pursuits, even though the small targets equate to only a few pixels in their visual field. The high degree of sensitivity to small target movement is supported by a class of specialized neurons called small target motion detectors (STMDs). Existing STMD-based computational models normally comprise four sequentially arranged neural layers interconnected via feedforward loops to extract information on small target motion from raw visual inputs. However, feedback, another important regulatory circuit for motion perception, has not been investigated in the STMD pathway and its functional roles for small target motion detection are not clear. In this paper, we propose an STMD-based neural network with feedback connection (Feedback STMD), where the network output is temporally delayed, then fed back to the lower layers to mediate neural responses. We compare the properties of the model with and without the time-delay feedback loop, and find it shows preference for high-velocity objects. Extensive experiments suggest that the Feedback STMD achieves superior detection performance for fast-moving small targets, while significantly suppressing background false positive movements which display lower velocities. The proposed feedback model provides an effective solution in robotic visual systems for detecting fast-moving small targets that are always salient and potentially threatening.
@article{lincoln45567, title = {A Time-Delay Feedback Neural Network for Discriminating Small, Fast-Moving Targets in Complex Dynamic Environments}, author = {Hongxin Wang and Huatian Wang and Jiannan Zhao and Cheng Hu and Jigen Peng and Shigang Yue}, publisher = {IEEE}, year = {2021}, doi = {10.1109/TNNLS.2021.3094205}, journal = {IEEE Transactions on Neural Networks and Learning Systems}, url = {https://eprints.lincoln.ac.uk/id/eprint/45567/}, abstract = {Discriminating small moving objects within complex visual environments is a significant challenge for autonomous micro robots that are generally limited in computational power. By exploiting their highly evolved visual systems, flying insects can effectively detect mates and track prey during rapid pursuits, even though the small targets equate to only a few pixels in their visual field. The high degree of sensitivity to small target movement is supported by a class of specialized neurons called small target motion detectors (STMDs). Existing STMD-based computational models normally comprise four sequentially arranged neural layers interconnected via feedforward loops to extract information on small target motion from raw visual inputs. However, feedback, another important regulatory circuit for motion perception, has not been investigated in the STMD pathway and its functional roles for small target motion detection are not clear. In this paper, we propose an STMD-based neural network with feedback connection (Feedback STMD), where the network output is temporally delayed, then fed back to the lower layers to mediate neural responses. We compare the properties of the model with and without the time-delay feedback loop, and find it shows preference for high-velocity objects. Extensive experiments suggest that the Feedback STMD achieves superior detection performance for fast-moving small targets, while significantly suppressing background false positive movements which display lower velocities. The proposed feedback model provides an effective solution in robotic visual systems for detecting fast-moving small targets that are always salient and potentially threatening.} }
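The core mechanism in the Feedback STMD abstract, a temporally delayed output fed back to earlier layers, can be illustrated with a scalar recursion of the form r[t] = f(x[t] + w_fb * r[t - d]). The toy below only shows that structure; the real model has four neural layers, and the weight, delay and rectification here are illustrative assumptions.

```python
import numpy as np

def feedback_response(x, w_fb=-0.5, d=3):
    """x: 1-D input drive over time. The output is delayed by d steps and
    fed back (here with a suppressive weight) before rectification."""
    r = np.zeros_like(x, dtype=float)
    for t in range(len(x)):
        fb = r[t - d] if t >= d else 0.0
        r[t] = max(0.0, x[t] + w_fb * fb)   # ReLU-like rectification
    return r
```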
- N. Kokciyan, I. Sassoon, E. Sklar, S. Parsons, and S. Modgil, “Applying metalevel argumentation frameworks to support medical decision making,” IEEE intelligent systems, 2021. doi:10.1109/MIS.2021.3051420
[BibTeX] [Abstract] [Download PDF]
People are increasingly employing artificial intelligence as the basis for decision-support systems (DSSs) to assist them in making well-informed decisions. Adoption of DSS is challenging when such systems lack support, or evidence, for justifying their recommendations. DSSs are widely applied in the medical domain, due to the complexity of the domain and the sheer volume of data that render manual processing difficult. This paper proposes a metalevel argumentation-based decision-support system that can reason with heterogeneous data (e.g. body measurements, electronic health records, clinical guidelines), while incorporating the preferences of the human beneficiaries of those decisions. The system constructs template-based explanations for the recommendations that it makes. The proposed framework has been implemented in a system to support stroke patients and its functionality has been tested in a pilot study. User feedback shows that the system can run effectively over an extended period.
@article{lincoln43690, title = {Applying Metalevel Argumentation Frameworks to Support Medical Decision Making}, author = {Nadin Kokciyan and Isabel Sassoon and Elizabeth Sklar and Simon Parsons and Sanjay Modgil}, publisher = {IEEE}, year = {2021}, doi = {10.1109/MIS.2021.3051420}, journal = {IEEE Intelligent Systems}, url = {https://eprints.lincoln.ac.uk/id/eprint/43690/}, abstract = {People are increasingly employing artificial intelligence as the basis for decision-support systems (DSSs) to assist them in making well-informed decisions. Adoption of DSS is challenging when such systems lack support, or evidence, for justifying their recommendations. DSSs are widely applied in the medical domain, due to the complexity of the domain and the sheer volume of data that render manual processing difficult. This paper proposes a metalevel argumentation-based decision-support system that can reason with heterogeneous data (e.g. body measurements, electronic health records, clinical guidelines), while incorporating the preferences of the human beneficiaries of those decisions. The system constructs template-based explanations for the recommendations that it makes. The proposed framework has been implemented in a system to support stroke patients and its functionality has been tested in a pilot study. User feedback shows that the system can run effectively over an extended period.} }
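The template-based explanations mentioned in the entry above can be pictured as canned sentences filled from the premises of an accepted argument. A minimal Python sketch under that reading; the template wording, field names, and example premises are hypothetical, not the system's actual schema.

def explain(recommendation, premises):
    """Render a fixed explanation template from an argument's premises."""
    support = "; ".join(f"{p['fact']} (source: {p['source']})" for p in premises)
    return f"{recommendation} is recommended because {support}."

premises = [
    {"fact": "systolic blood pressure exceeds the guideline threshold",
     "source": "home sensor"},
    {"fact": "the patient prefers non-invasive monitoring",
     "source": "stated preferences"},
]
print(explain("A blood-pressure review", premises))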
- C. Armanini, M. Farman, M. Calisti, F. Giorgio-Serchi, C. Stefanini, and F. Renda, “Flagellate underwater robotics at macroscale: design, modeling, and characterization,” IEEE transactions on robotics, p. 1–17, 2021. doi:10.1109/TRO.2021.3094051
[BibTeX] [Abstract] [Download PDF]
The prokaryotic flagellum is considered the only known example of a biological “wheel,” a system capable of converting the action of a rotatory actuator into a continuous propulsive force. For this reason, flagella are an interesting case study in soft robotics and they represent an appealing source of inspiration for the design of underwater robots. A great number of flagellum-inspired devices exists, but these are all characterized by a size in the micrometer range and mostly realized with rigid materials. Here, we present the design and development of a novel generation of macroscale underwater propellers that draw their inspiration from flagellated organisms. Through a simple rotatory actuation and exploiting the capability of the soft material to store energy when interacting with the surrounding fluid, the propellers attain different helical shapes that generate a propulsive thrust. A theoretical model is presented, accurately describing and predicting the kinematic and the propulsive capabilities of the proposed solution. Different experimental trials are presented to validate the accuracy of the model and to investigate the performance of the proposed design. Finally, an underwater robot prototype propelled by four flagellar modules is presented.
@article{lincoln46191, title = {Flagellate Underwater Robotics at Macroscale: Design, Modeling, and Characterization}, author = {Costanza Armanini and Madiha Farman and Marcello Calisti and Francesco Giorgio-Serchi and Cesare Stefanini and Federico Renda}, year = {2021}, pages = {1--17}, doi = {10.1109/TRO.2021.3094051}, journal = {IEEE Transactions on Robotics}, url = {https://eprints.lincoln.ac.uk/id/eprint/46191/}, abstract = {The prokaryotic flagellum is considered the only known example of a biological “wheel,” a system capable of converting the action of a rotatory actuator into a continuous propulsive force. For this reason, flagella are an interesting case study in soft robotics and they represent an appealing source of inspiration for the design of underwater robots. A great number of flagellum-inspired devices exists, but these are all characterized by a size in the micrometer range and mostly realized with rigid materials. Here, we present the design and development of a novel generation of macroscale underwater propellers that draw their inspiration from flagellated organisms. Through a simple rotatory actuation and exploiting the capability of the soft material to store energy when interacting with the surrounding fluid, the propellers attain different helical shapes that generate a propulsive thrust. A theoretical model is presented, accurately describing and predicting the kinematic and the propulsive capabilities of the proposed solution. Different experimental trials are presented to validate the accuracy of the model and to investigate the performance of the proposed design. Finally, an underwater robot prototype propelled by four flagellar modules is presented.} }
- Z. Al-saadi, D. Sirintuna, A. Kucukyilmaz, and C. Basdogan, “A novel haptic feature set for the classification of interactive motor behaviors in collaborative object transfer,” IEEE transactions on haptics, p. 1–1, 2021. doi:10.1109/TOH.2020.3034244
[BibTeX] [Abstract] [Download PDF]
Haptics provides a natural and intuitive channel of communication during the interaction of two humans in complex physical tasks, such as joint object transportation. However, despite the utmost importance of touch in physical interactions, the use of haptics is underrepresented when developing intelligent systems. This study explores the prominence of haptic data to extract information about underlying interaction patterns within human-human cooperation. For this purpose, we design salient haptic features describing the collaboration quality within a physical dyadic task and investigate the use of these features to classify the interaction patterns. We categorize the interaction into four discrete behavior classes. These classes describe whether the partners work in harmony or face conflicts while jointly transporting an object through translational or rotational movements. We test the proposed features on a physical human-human interaction (pHHI) dataset, consisting of data collected from 12 human dyads. Using these data, we verify the salience of haptic features by achieving a correct classification rate over 91% using a Random Forest classifier.
@article{lincoln43742, title = {A Novel Haptic Feature Set for the Classification of Interactive Motor Behaviors in Collaborative Object Transfer}, author = {Zaid Al-saadi and Doganay Sirintuna and Ayse Kucukyilmaz and Cagatay Basdogan}, publisher = {IEEE}, year = {2021}, pages = {1--1}, doi = {10.1109/TOH.2020.3034244}, journal = {IEEE Transactions on Haptics}, url = {https://eprints.lincoln.ac.uk/id/eprint/43742/}, abstract = {Haptics provides a natural and intuitive channel of communication during the interaction of two humans in complex physical tasks, such as joint object transportation. However, despite the utmost importance of touch in physical interactions, the use of haptics is underrepresented when developing intelligent systems. This study explores the prominence of haptic data to extract information about underlying interaction patterns within human-human cooperation. For this purpose, we design salient haptic features describing the collaboration quality within a physical dyadic task and investigate the use of these features to classify the interaction patterns. We categorize the interaction into four discrete behavior classes. These classes describe whether the partners work in harmony or face conflicts while jointly transporting an object through translational or rotational movements. We test the proposed features on a physical human-human interaction (pHHI) dataset, consisting of data collected from 12 human dyads. Using these data, we verify the salience of haptic features by achieving a correct classification rate over 91\% using a Random Forest classifier.} }
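The classification step described in the entry above is a standard supervised pipeline: windowed haptic features in, one of four behaviour labels out. A minimal scikit-learn sketch with synthetic stand-in data; the feature count, window count, and labels are placeholders, not the authors' pHHI dataset.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(480, 6))       # e.g. force/torque features per interaction window
y = rng.integers(0, 4, size=480)    # four interactive behaviour classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))  # ~chance on random data; >0.91 in the paper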
- G. Picardi, H. Hauser, C. Laschi, and M. Calisti, “Morphologically induced stability on an underwater legged robot with a deformable body,” The international journal of robotics research, vol. 40, iss. 1, p. 435–448, 2021. doi:10.1177/0278364919840426
[BibTeX] [Abstract] [Download PDF]
For robots to navigate successfully in the real world, unstructured environment adaptability is a prerequisite. Although this is typically implemented within the control layer, there have been recent proposals of adaptation through a morphing of the body. However, the successful demonstration of this approach has mostly been theoretical and in simulations thus far. In this work we present an underwater hopping robot that features a deformable body implemented as a deployable structure that is covered by a soft skin for which it is possible to manually change the body size without altering any other property (e.g. buoyancy or weight). For such a system, we show that it is possible to induce a stable hopping behavior instead of a fall, by just increasing the body size. We provide a mathematical model that describes the hopping behavior of the robot under the influence of shape-dependent underwater contributions (drag, buoyancy, and added mass) in order to analyze and compare the results obtained. Moreover, we show that for certain conditions, a stable hopping behavior can only be obtained through changing the morphology of the robot as the controller (i.e. actuator) would already be working at maximum capacity. The presented work demonstrates that, through the exploitation of shape-dependent forces, the dynamics of a system can be modified through altering the morphology of the body to induce a desirable behavior and, thus, a morphological change can be an effective alternative to the classic control.
@article{lincoln46149, volume = {40}, number = {1}, month = {January}, author = {Giacomo Picardi and Helmut Hauser and Cecilia Laschi and Marcello Calisti}, title = {Morphologically induced stability on an underwater legged robot with a deformable body}, year = {2021}, journal = {The International Journal of Robotics Research}, doi = {10.1177/0278364919840426}, pages = {435--448}, url = {https://eprints.lincoln.ac.uk/id/eprint/46149/}, abstract = {For robots to navigate successfully in the real world, unstructured environment adaptability is a prerequisite. Although this is typically implemented within the control layer, there have been recent proposals of adaptation through a morphing of the body. However, the successful demonstration of this approach has mostly been theoretical and in simulations thus far. In this work we present an underwater hopping robot that features a deformable body implemented as a deployable structure that is covered by a soft skin for which it is possible to manually change the body size without altering any other property (e.g. buoyancy or weight). For such a system, we show that it is possible to induce a stable hopping behavior instead of a fall, by just increasing the body size. We provide a mathematical model that describes the hopping behavior of the robot under the influence of shape-dependent underwater contributions (drag, buoyancy, and added mass) in order to analyze and compare the results obtained. Moreover, we show that for certain conditions, a stable hopping behavior can only be obtained through changing the morphology of the robot as the controller (i.e. actuator) would already be working at maximum capacity. The presented work demonstrates that, through the exploitation of shape-dependent forces, the dynamics of a system can be modified through altering the morphology of the body to induce a desirable behavior and, thus, a morphological change can be an effective alternative to the classic control.} }
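The stabilising effect described in the entry above comes from shape-dependent hydrodynamic terms: enlarging the body raises drag and added mass without changing weight or buoyancy. A minimal ballistic-phase sketch under that assumption; all constants (net weight, drag coefficient, added-mass coefficient) are invented, and this is our simplification rather than the paper's model.

import numpy as np

def hop(radius, v0=0.6, dt=1e-3):
    rho = 1000.0                                       # water density (kg/m^3)
    net_weight = 5.0                                   # weight minus buoyancy (N), fixed by design
    m = 2.0                                            # robot mass (kg), assumed
    m_added = 0.5 * rho * (4 / 3) * np.pi * radius**3  # added mass grows with body size
    area = np.pi * radius**2                           # frontal area grows with body size
    v, z, t = v0, 0.0, 0.0
    while True:
        drag = 0.5 * rho * 0.8 * area * v * abs(v)     # quadratic drag, Cd assumed 0.8
        v += dt * (-net_weight - drag) / (m + m_added)
        z += dt * v
        t += dt
        if z <= 0.0 and v < 0.0:
            return t, abs(v)                           # hop duration and touchdown speed

for r in (0.05, 0.20):
    t, v_td = hop(r)
    print(f"radius {r:.2f} m: airborne {t:.2f} s, touchdown {v_td:.2f} m/s")

The larger body lands markedly more slowly, illustrating how a purely morphological change can stabilise the hop.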
- L. Korir, A. Drake, M. Collison, C. C. Villa, E. Sklar, and S. Pearson, “Current and emergent economic impacts of Covid-19 and Brexit on UK fresh produce and horticultural businesses,” ArXiv, 2021. doi:10.22004/ag.econ.312068
[BibTeX] [Abstract] [Download PDF]
This paper describes a study designed to investigate the current and emergent impacts of Covid-19 and Brexit on UK horticultural businesses. Various characteristics of UK horticultural production, notably labour reliance and import dependence, make it an important sector for policymakers concerned to understand the effects of these disruptive events as we move from 2020 into 2021. The study design prioritised timeliness, using a rapid survey to gather information from a relatively small (n = 19) but indicative group of producers. The main novelty of the results is to suggest that a very substantial majority of producers either plan to scale back production in 2021 (47%) or have been unable to make plans for 2021 because of uncertainty (37%). The results also add to broader evidence that the sector has experienced profound labour supply challenges, with implications for labour cost and quality. The study discusses the implications of these insights from producers in terms of productivity and automation, as well as in terms of broader economic implications. Although automation is generally recognised as the long-term future for the industry (89%), it appeared in the study as the second most referred short-term option (32%) only after changes to labour schemes and policies (58%). Currently, automation plays a limited role in contributing to the UK’s horticultural workforce shortage due to economic and socio-political uncertainties. The conclusion highlights policy recommendations and future investigative intentions, as well as suggesting methodological and other discussion points for the research community.
@article{lincoln46766, month = {January}, title = {Current and Emergent Economic Impacts of Covid-19 and Brexit on UK Fresh Produce and Horticultural Businesses}, author = {Lilian Korir and Archie Drake and Martin Collison and Carolina Camacho Villa and Elizabeth Sklar and Simon Pearson}, year = {2021}, doi = {10.22004/ag.econ.312068}, journal = {ArXiv}, url = {https://eprints.lincoln.ac.uk/id/eprint/46766/}, abstract = {This paper describes a study designed to investigate the current and emergent impacts of Covid-19 and Brexit on UK horticultural businesses. Various characteristics of UK horticultural production, notably labour reliance and import dependence, make it an important sector for policymakers concerned to understand the effects of these disruptive events as we move from 2020 into 2021. The study design prioritised timeliness, using a rapid survey to gather information from a relatively small (n = 19) but indicative group of producers. The main novelty of the results is to suggest that a very substantial majority of producers either plan to scale back production in 2021 (47\%) or have been unable to make plans for 2021 because of uncertainty (37\%). The results also add to broader evidence that the sector has experienced profound labour supply challenges, with implications for labour cost and quality. The study discusses the implications of these insights from producers in terms of productivity and automation, as well as in terms of broader economic implications. Although automation is generally recognised as the long-term future for the industry (89\%), it appeared in the study as the second most referred short-term option (32\%) only after changes to labour schemes and policies (58\%). Currently, automation plays a limited role in contributing to the UK's horticultural workforce shortage due to economic and socio-political uncertainties. The conclusion highlights policy recommendations and future investigative intentions, as well as suggesting methodological and other discussion points for the research community.} }
- M. Lujak, E. I. Sklar, and F. Semet, “Agriculture fleet vehicle routing: a decentralised and dynamic problem,” AI communications, vol. 34, iss. 1, p. 55–71, 2021. doi:10.3233/AIC-201581
[BibTeX] [Abstract] [Download PDF]
To date, the research on agriculture vehicles in general and Agriculture Mobile Robots (AMRs) in particular has focused on a single vehicle (robot) and its agriculture-specific capabilities. Very little work has explored the coordination of fleets of such vehicles in the daily execution of farming tasks. This is especially the case when considering overall fleet performance, its efficiency and scalability in the context of highly automated agriculture vehicles that perform tasks throughout multiple fields potentially owned by different farmers and/or enterprises. The potential impact of automating AMR fleet coordination on commercial agriculture is immense. Major conglomerates with large and heterogeneous fleets of agriculture vehicles could operate on huge land areas without human operators to effect precision farming. In this paper, we propose the Agriculture Fleet Vehicle Routing Problem (AF-VRP) which, to the best of our knowledge, differs from any other version of the Vehicle Routing Problem studied so far. We focus on the dynamic and decentralised version of this problem applicable in environments involving multiple agriculture machinery and farm owners where concepts of fairness and equity must be considered. Such a problem combines three related problems: the dynamic assignment problem, the dynamic 3-index assignment problem and the capacitated arc routing problem. We review the state-of-the-art and categorise solution approaches as centralised, distributed and decentralised, based on the underlying decision-making context. Finally, we discuss open challenges in applying distributed and decentralised coordination approaches to this problem.
@article{lincoln43570, volume = {34}, number = {1}, month = {February}, author = {Marin Lujak and Elizabeth I Sklar and Frederic Semet}, title = {Agriculture fleet vehicle routing: A decentralised and dynamic problem}, publisher = {IOS Press}, year = {2021}, journal = {AI Communications}, doi = {10.3233/AIC-201581}, pages = {55--71}, url = {https://eprints.lincoln.ac.uk/id/eprint/43570/}, abstract = {To date, the research on agriculture vehicles in general and Agriculture Mobile Robots (AMRs) in particular has focused on a single vehicle (robot) and its agriculture-specific capabilities. Very little work has explored the coordination of fleets of such vehicles in the daily execution of farming tasks. This is especially the case when considering overall fleet performance, its efficiency and scalability in the context of highly automated agriculture vehicles that perform tasks throughout multiple fields potentially owned by different farmers and/or enterprises. The potential impact of automating AMR fleet coordination on commercial agriculture is immense. Major conglomerates with large and heterogeneous fleets of agriculture vehicles could operate on huge land areas without human operators to effect precision farming. In this paper, we propose the Agriculture Fleet Vehicle Routing Problem (AF-VRP) which, to the best of our knowledge, differs from any other version of the Vehicle Routing Problem studied so far. We focus on the dynamic and decentralised version of this problem applicable in environments involving multiple agriculture machinery and farm owners where concepts of fairness and equity must be considered. Such a problem combines three related problems: the dynamic assignment problem, the dynamic 3-index assignment problem and the capacitated arc routing problem. We review the state-of-the-art and categorise solution approaches as centralised, distributed and decentralised, based on the underlying decision-making context. Finally, we discuss open challenges in applying distributed and decentralised coordination approaches to this problem.} }
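One ingredient of the problem described in the entry above, dynamic task assignment, can be pictured with a toy greedy rule: each new field task goes to the nearest available vehicle. This is our illustrative fragment only; the AF-VRP itself adds capacities, arc routing, fairness, and decentralised decision-making.

import math

vehicles = {"v1": (0.0, 0.0), "v2": (5.0, 5.0)}      # hypothetical fleet positions
tasks = [("mow field A", (1.0, 2.0)),
         ("spray field B", (6.0, 4.0)),
         ("scout field C", (2.0, 1.0))]

for name, pos in tasks:                              # tasks arrive dynamically
    v = min(vehicles, key=lambda k: math.dist(vehicles[k], pos))  # nearest vehicle
    print(f"{name} -> {v}")
    vehicles[v] = pos                                # vehicle relocates to the task it serves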
- N. Dethlefs, A. Schoene, and H. Cuayahuitl, “A divide-and-conquer approach to neural natural language generation from structured data,” Neurocomputing, vol. 433, p. 300–309, 2021. doi:10.1016/j.neucom.2020.12.083
[BibTeX] [Abstract] [Download PDF]
Current approaches that generate text from linked data for complex real-world domains can face problems including rich and sparse vocabularies as well as learning from examples of long varied sequences. In this article, we propose a novel divide-and-conquer approach that automatically induces a hierarchy of “generation spaces” from a dataset of semantic concepts and texts. Generation spaces are based on a notion of similarity of partial knowledge graphs that represent the domain and feed into a hierarchy of sequence-to-sequence or memory-to-sequence learners for concept-to-text generation. An advantage of our approach is that learning models are exposed to the most relevant examples during training which can avoid bias towards majority samples. We evaluate our approach on two common benchmark datasets and compare our hierarchical approach against a flat learning setup. We also conduct a comparison between sequence-to-sequence and memory-to-sequence learning models. Experiments show that our hierarchical approach overcomes issues of data sparsity and learns robust lexico-syntactic patterns, consistently outperforming flat baselines and previous work by up to 30%. We also find that while memory-to-sequence models can outperform sequence-to-sequence models in some cases, the latter are generally more stable in their performance and represent a safer overall choice.
@article{lincoln43748, volume = {433}, month = {April}, author = {Nina Dethlefs and Annika Schoene and Heriberto Cuayahuitl}, title = {A Divide-and-Conquer Approach to Neural Natural Language Generation from Structured Data}, publisher = {Elsevier}, year = {2021}, journal = {Neurocomputing}, doi = {10.1016/j.neucom.2020.12.083}, pages = {300--309}, url = {https://eprints.lincoln.ac.uk/id/eprint/43748/}, abstract = {Current approaches that generate text from linked data for complex real-world domains can face problems including rich and sparse vocabularies as well as learning from examples of long varied sequences. In this article, we propose a novel divide-and-conquer approach that automatically induces a hierarchy of “generation spaces” from a dataset of semantic concepts and texts. Generation spaces are based on a notion of similarity of partial knowledge graphs that represent the domain and feed into a hierarchy of sequence-to-sequence or memory-to-sequence learners for concept-to-text generation. An advantage of our approach is that learning models are exposed to the most relevant examples during training which can avoid bias towards majority samples. We evaluate our approach on two common benchmark datasets and compare our hierarchical approach against a flat learning setup. We also conduct a comparison between sequence-to-sequence and memory-to-sequence learning models. Experiments show that our hierarchical approach overcomes issues of data sparsity and learns robust lexico-syntactic patterns, consistently outperforming flat baselines and previous work by up to 30\%. We also find that while memory-to-sequence models can outperform sequence-to-sequence models in some cases, the latter are generally more stable in their performance and represent a safer overall choice.} }
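The “generation spaces” in the entry above can be read as partitions of the training data by similarity of their input structures, each partition feeding its own learner. A minimal sketch using Jaccard similarity over attribute sets as a crude stand-in for partial-knowledge-graph similarity; the threshold and toy examples are our assumptions, not the paper's method.

def jaccard(a, b):
    return len(a & b) / len(a | b)

examples = [
    ({"name", "food", "area"}, "X is a riverside restaurant serving sushi."),
    ({"name", "food", "price"}, "X serves sushi at low prices."),
    ({"name", "eatType"}, "X is a pub."),
]

spaces = []  # (prototype attribute set, member examples)
for attrs, text in examples:
    for proto, members in spaces:
        if jaccard(attrs, proto) >= 0.5:  # similar enough: join this generation space
            members.append((attrs, text))
            break
    else:
        spaces.append((set(attrs), [(attrs, text)]))

for proto, members in spaces:  # one seq2seq learner would be trained per space
    print(sorted(proto), "->", len(members), "example(s)")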
- Á. D. Santos, N. Fili, D. S. Pearson, Y. Hari-Gupta, and C. P. Toseland, “High-throughput mechanobiology: force modulation of ensemble biochemical and cell-based assays.,” Biophysical journal, vol. 120, iss. 4, p. 631–641, 2021. doi:10.1016/j.bpj.2020.12.024
[BibTeX] [Abstract] [Download PDF]
Mechanobiology is focused on how the physical forces and mechanical properties of proteins, cells, and tissues contribute to physiology and disease. Although the response of proteins and cells to mechanical stimuli is critical for function, the tools to probe these activities are typically restricted to single-molecule manipulations. Here, we have developed a novel microplate reader assay to encompass mechanical measurements with ensemble biochemical and cellular assays, using a microplate lid modified with magnets. This configuration enables multiple static magnetic tweezers to function simultaneously across the microplate, thereby greatly increasing throughput. We demonstrate the broad applicability and versatility through in vitro and in cellulo approaches. Overall, our methodology allows, for the first time (to our knowledge), ensemble biochemical and cell-based assays to be performed under force in high-throughput format. This approach substantially increases the availability of mechanobiology measurements.
@article{lincoln46356, volume = {120}, number = {4}, month = {February}, author = {{\'A}lia Dos Santos and Natalia Fili and David S. Pearson and Yukti Hari-Gupta and Christopher P. Toseland}, title = {High-throughput mechanobiology: Force modulation of ensemble biochemical and cell-based assays.}, publisher = {Elsevier}, year = {2021}, journal = {Biophysical Journal}, doi = {10.1016/j.bpj.2020.12.024}, pages = {631--641}, url = {https://eprints.lincoln.ac.uk/id/eprint/46356/}, abstract = {Mechanobiology is focused on how the physical forces and mechanical properties of proteins, cells, and tissues contribute to physiology and disease. Although the response of proteins and cells to mechanical stimuli is critical for function, the tools to probe these activities are typically restricted to single-molecule manipulations. Here, we have developed a novel microplate reader assay to encompass mechanical measurements with ensemble biochemical and cellular assays, using a microplate lid modified with magnets. This configuration enables multiple static magnetic tweezers to function simultaneously across the microplate, thereby greatly increasing throughput. We demonstrate the broad applicability and versatility through in vitro and in cellulo approaches. Overall, our methodology allows, for the first time (to our knowledge), ensemble biochemical and cell-based assays to be performed under force in high-throughput format. This approach substantially increases the availability of mechanobiology measurements.} }
- A. Seddaoui and M. C. Saaj, “Collision-free optimal trajectory generation for a space robot using genetic algorithm,” Acta astronautica, vol. 179, p. 311–321, 2021. doi:10.1016/j.actaastro.2020.11.001
[BibTeX] [Abstract] [Download PDF]
Future on-orbit servicing and assembly missions will require space robots capable of manoeuvring safely around their target. Several challenges arise when modelling, controlling and planning the motion of such systems, therefore, new methodologies are required. A safe approach towards the grasping point implies that the space robot must be able to use the additional degrees of freedom offered by the spacecraft base to aid the arm attain the target and avoid collisions and singularities. The controlled-floating space robot possesses this particularity of motion and will be utilised in this paper to design an optimal path generator. The path generator, based on a Genetic Algorithm, takes advantage of the dynamic coupling effect and the controlled motion of the spacecraft base to safely attain the target. It aims to minimise several objectives whilst satisfying multiple constraints. The key feature of this new path generator is that it requires only the Cartesian position of the point to grasp as an input, without prior knowledge of a desired path. The results presented originate from the trajectory tracking using a nonlinear adaptive controller.
@article{lincoln43074, volume = {179}, month = {February}, author = {Asma Seddaoui and Mini Chakravarthini Saaj}, note = {The paper is the outcome of a PhD I supervised at University of Surrey.}, title = {Collision-free optimal trajectory generation for a space robot using genetic algorithm}, publisher = {Elsevier}, year = {2021}, journal = {Acta Astronautica}, doi = {10.1016/j.actaastro.2020.11.001}, pages = {311--321}, url = {https://eprints.lincoln.ac.uk/id/eprint/43074/}, abstract = {Future on-orbit servicing and assembly missions will require space robots capable of manoeuvring safely around their target. Several challenges arise when modelling, controlling and planning the motion of such systems, therefore, new methodologies are required. A safe approach towards the grasping point implies that the space robot must be able to use the additional degrees of freedom offered by the spacecraft base to aid the arm attain the target and avoid collisions and singularities. The controlled-floating space robot possesses this particularity of motion and will be utilised in this paper to design an optimal path generator. The path generator, based on a Genetic Algorithm, takes advantage of the dynamic coupling effect and the controlled motion of the spacecraft base to safely attain the target. It aims to minimise several objectives whilst satisfying multiple constraints. The key feature of this new path generator is that it requires only the Cartesian position of the point to grasp as an input, without prior knowledge of a desired path. The results presented originate from the trajectory tracking using a nonlinear adaptive controller.} }
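The planner in the entry above searches trajectory space with a genetic algorithm against multiple objectives. A minimal single-objective caricature follows: individuals are one 2-D via-point between start and goal, and the cost penalises path length plus proximity to a single spherical obstacle. Every constant here is invented, and the real system optimises far richer spacecraft dynamics.

import math
import random

random.seed(1)
START, GOAL = (0.0, 0.0), (1.0, 1.0)
OBSTACLE, RADIUS = (0.5, 0.5), 0.2

def cost(via):
    length = math.dist(START, via) + math.dist(via, GOAL)
    collision = 10.0 if math.dist(via, OBSTACLE) < RADIUS else 0.0  # hard penalty
    return length + collision

def mutate(via):
    return (via[0] + random.gauss(0, 0.1), via[1] + random.gauss(0, 0.1))

population = [(random.random(), random.random()) for _ in range(30)]
for _ in range(50):                      # evolve: keep the 10 fittest, refill by mutation
    population.sort(key=cost)
    elite = population[:10]
    population = elite + [mutate(random.choice(elite)) for _ in range(20)]

best = min(population, key=cost)
print("best via-point:", tuple(round(c, 3) for c in best), "cost:", round(cost(best), 3))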
- P. McBurney and S. Parsons, “Argument schemes and dialogue protocols: Doug Walton’s legacy in artificial intelligence,” Journal of applied logics, vol. 8, iss. 1, p. 263–286, 2021.
[BibTeX] [Abstract] [Download PDF]
This paper is intended to honour the memory of Douglas Walton (1942–2020), a Canadian philosopher of argumentation who died in January 2020. Walton’s contributions to argumentation theory have had a very strong influence on Artificial Intelligence (AI), particularly in the design of autonomous software agents able to reason and argue with one another, and in the design of protocols to govern such interactions. In this paper, we explore two of these contributions – argumentation schemes and dialogue protocols – by discussing how they may be applied to a pressing current research challenge in AI: the automated assessment of explanations for automated decision-making systems.
@article{lincoln43751, volume = {8}, number = {1}, month = {February}, author = {Peter McBurney and Simon Parsons}, title = {Argument Schemes and Dialogue Protocols: Doug Walton's legacy in artificial intelligence}, publisher = {College Publications}, year = {2021}, journal = {Journal of Applied Logics}, pages = {263--286}, url = {https://eprints.lincoln.ac.uk/id/eprint/43751/}, abstract = {This paper is intended to honour the memory of Douglas Walton (1942--2020), a Canadian philosopher of argumentation who died in January 2020. Walton's contributions to argumentation theory have had a very strong influence on Artificial Intelligence (AI), particularly in the design of autonomous software agents able to reason and argue with one another, and in the design of protocols to govern such interactions. In this paper, we explore two of these contributions --- argumentation schemes and dialogue protocols --- by discussing how they may be applied to a pressing current research challenge in AI: the automated assessment of explanations for automated decision-making systems.} }
- J. Gao, J. C. Westergaard, E. H. R. Sundmark, M. Bagge, E. Liljeroth, and E. Alexandersson, “Automatic late blight lesion recognition and severity quantification based on field imagery of diverse potato genotypes by deep learning,” Knowledge-based systems, vol. 214, p. 106723, 2021. doi:10.1016/j.knosys.2020.106723
[BibTeX] [Abstract] [Download PDF]
The plant pathogen Phytophthora infestans causes the severe disease late blight in potato, which can result in huge yield loss for potato production. Automatic and accurate disease lesion segmentation enables fast evaluation of disease severity and assessment of disease progress. In tasks requiring computer vision, deep learning has recently gained tremendous success for image classification, object detection and semantic segmentation. To test whether we could extract late blight lesions from unstructured field environments based on high-resolution visual field images and deep learning algorithms, we collected ~500 field RGB images in a set of diverse potato genotypes with different disease severity (0%–70%), resulting in 2100 cropped images. 1600 of these cropped images were used as the dataset for training deep neural networks and 250 cropped images were randomly selected as the validation dataset. Finally, the developed model was tested on the remaining 250 cropped images. The results show that the values for intersection over union (IoU) of the classes background (leaf and soil) and disease lesion in the test dataset were 0.996 and 0.386, respectively. Furthermore, we established a linear relationship (R2=0.655) between manual visual scores of late blight and the number of lesions detected by deep learning at the canopy level. We also showed that imbalance weights of lesion and background classes improved segmentation performance, and that fused masks based on the majority voting of the multiple masks enhanced the correlation with the visual disease scores. This study demonstrates the feasibility of using deep learning algorithms for disease lesion segmentation and severity evaluation based on proximal imagery, which could aid breeding for crop resistance in field environments, and also benefit precision farming.
@article{lincoln43642, volume = {214}, month = {February}, author = {Junfeng Gao and Jesper Cairo Westergaard and Ea H{\o}egh Riis Sundmark and Merethe Bagge and Erland Liljeroth and Erik Alexandersson}, title = {Automatic late blight lesion recognition and severity quantification based on field imagery of diverse potato genotypes by deep learning}, publisher = {Elsevier}, year = {2021}, journal = {Knowledge-Based Systems}, doi = {10.1016/j.knosys.2020.106723}, pages = {106723}, url = {https://eprints.lincoln.ac.uk/id/eprint/43642/}, abstract = {The plant pathogen Phytophthora infestans causes the severe disease late blight in potato, which can result in huge yield loss for potato production. Automatic and accurate disease lesion segmentation enables fast evaluation of disease severity and assessment of disease progress. In tasks requiring computer vision, deep learning has recently gained tremendous success for image classification, object detection and semantic segmentation. To test whether we could extract late blight lesions from unstructured field environments based on high-resolution visual field images and deep learning algorithms, we collected{$\sim$}500 field RGB images in a set of diverse potato genotypes with different disease severity (0\%--70\%), resulting in 2100 cropped images. 1600 of these cropped images were used as the dataset for training deep neural networks and 250 cropped images were randomly selected as the validation dataset. Finally, the developed model was tested on the remaining 250 cropped images. The results show that the values for intersection over union (IoU) of the classes background (leaf and soil) and disease lesion in the test dataset were 0.996 and 0.386, respectively. Furthermore, we established a linear relationship (R2=0.655) between manual visual scores of late blight and the number of lesions detected by deep learning at the canopy level. We also showed that imbalance weights of lesion and background classes improved segmentation performance, and that fused masks based on the majority voting of the multiple masks enhanced the correlation with the visual disease scores. This study demonstrates the feasibility of using deep learning algorithms for disease lesion segmentation and severity evaluation based on proximal imagery, which could aid breeding for crop resistance in field environments, and also benefit precision farming.} }
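The per-class intersection-over-union figures reported above (0.996 background, 0.386 lesion) follow from a simple mask computation, sketched here on toy arrays; the example masks are ours, not the paper's data.

import numpy as np

def iou(pred, target, cls):
    p, t = (pred == cls), (target == cls)
    union = np.logical_or(p, t).sum()
    return np.logical_and(p, t).sum() / union if union else float("nan")

pred = np.array([[0, 0, 1], [0, 1, 1], [0, 0, 0]])    # predicted lesion mask
target = np.array([[0, 0, 1], [0, 0, 1], [0, 0, 0]])  # annotated lesion mask
for cls, name in [(0, "background"), (1, "lesion")]:
    print(name, round(iou(pred, target, cls), 3))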
- C. Laschi and M. Calisti, “Soft robot reaches the deepest part of the ocean,” Nature, vol. 591, p. 35–36, 2021. doi:10.1038/d41586-021-00489-y
[BibTeX] [Abstract] [Download PDF]
A self-powered robot inspired by a fish can survive the extreme pressures at the bottom of the ocean’s deepest trench, thanks to its soft body and distributed electronic system – and might enable exploration of the uncharted ocean.
@article{lincoln52080, volume = {591}, month = {March}, author = {Cecilia Laschi and Marcello Calisti}, title = {Soft robot reaches the deepest part of the ocean}, publisher = {Nature Publishing Group}, year = {2021}, journal = {Nature}, doi = {10.1038/d41586-021-00489-y}, pages = {35--36}, url = {https://eprints.lincoln.ac.uk/id/eprint/52080/}, abstract = {A self-powered robot inspired by a fish can survive the extreme pressures at the bottom of the ocean's deepest trench, thanks to its soft body and distributed electronic system {--} and might enable exploration of the uncharted ocean.} }
- T. G. Thuruthel, G. Picardi, F. Iida, C. Laschi, and M. Calisti, “Learning to stop: a unifying principle for legged locomotion in varying environments,” Royal society open science, vol. 8, iss. 4, 2021. doi:10.1098/rsos.210223
[BibTeX] [Abstract] [Download PDF]
Evolutionary studies have unequivocally proven the transition of living organisms from water to land. Consequently, it can be deduced that locomotion strategies must have evolved from one environment to the other. However, the mechanism by which this transition happened and its implications on bio-mechanical studies and robotics research have not been explored in detail. This paper presents a unifying control strategy for locomotion in varying environments based on the principle of “learning to stop”. Using a common reinforcement learning framework, deep deterministic policy gradient, we show that our proposed learning strategy facilitates a fast and safe methodology for transferring learned controllers from the facile water environment to the harsh land environment. Our results not only propose a plausible mechanism for safe and quick transition of locomotion strategies from a water to land environment but also provide a novel alternative for safer and faster training of robots.
@article{lincoln44628, volume = {8}, number = {4}, month = {April}, author = {T. G. Thuruthel and G. Picardi and F. Iida and C. Laschi and M. Calisti}, title = {Learning to stop: a unifying principle for legged locomotion in varying environments}, publisher = {The Royal Society}, year = {2021}, journal = {Royal Society Open Science}, doi = {10.1098/rsos.210223}, url = {https://eprints.lincoln.ac.uk/id/eprint/44628/}, abstract = {Evolutionary studies have unequivocally proven the transition of living organisms from water to land. Consequently, it can be deduced that locomotion strategies must have evolved from one environment to the other. However, the mechanism by which this transition happened and its implications on bio-mechanical studies and robotics research have not been explored in detail. This paper presents a unifying control strategy for locomotion in varying environments based on the principle of “learning to stop”. Using a common reinforcement learning framework, deep deterministic policy gradient, we show that our proposed learning strategy facilitates a fast and safe methodology for transferring learned controllers from the facile water environment to the harsh land environment. Our results not only propose a plausible mechanism for safe and quick transition of locomotion strategies from a water to land environment but also provide a novel alternative for safer and faster training of robots.} }
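The “learning to stop” principle in the entry above can be pictured as a reward that pays for progress toward a target and adds a terminal bonus only for arriving at rest. A minimal sketch of such shaping; the weights and thresholds are invented, and the paper couples the idea with deep deterministic policy gradient, which is not reproduced here.

def reward(dist_prev, dist, speed):
    r = 10.0 * (dist_prev - dist)         # reward progress toward the target
    if dist < 0.05 and speed < 0.01:      # reached the target and came to rest
        r += 100.0
    return r

print(reward(0.50, 0.45, 0.80))  # closing in while moving: small positive reward
print(reward(0.06, 0.03, 0.00))  # stopped on target: progress plus terminal bonus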
- F. Yang, L. Shu, Y. Yang, G. Han, S. Pearson, and K. Li, “Optimal deployment of solar insecticidal lamps over constrained locations in mixed-crop farmlands,” IEEE internet of things journal, vol. 8, iss. 16, p. 13095–13114, 2021. doi:10.1109/JIOT.2021.3064043
[BibTeX] [Abstract] [Download PDF]
Solar Insecticidal Lamps (SILs) play a vital role in green prevention and control of pests. By embedding SILs in Wireless Sensor Networks (WSNs), we establish a novel agricultural Internet of Things (IoT), referred to as the SILIoTs. In practice, the deployment of SIL nodes is determined by the geographical characteristics of an actual farmland, the constraints on the locations of SIL nodes, and the radio-wave propagation in complex agricultural environment. In this paper, we mainly focus on the constrained SIL Deployment Problem (cSILDP) in a mixed-crop farmland, where the locations used to deploy SIL nodes are a limited set of candidates located on the ridges. We formulate the cSILDP in this scenario as a Connected Set Cover (CSC) problem, and propose a Hole Aware Node Deployment Method (HANDM) based on the greedy algorithm to solve the constrained optimization problem. The HANDM is a two-phase method. In the first phase, a novel deployment strategy is utilised to guarantee only a single coverage hole in each iteration, based on which a set of suboptimal locations is found for the deployment of SIL nodes. In the second phase, according to the operations of deletion and fusion, the optimal locations are obtained to meet the requirements on complete coverage and connectivity. Experimental results show that our proposed method achieves better performance than the peer algorithms, specifically in terms of deployment cost.
@article{lincoln44192, volume = {8}, number = {16}, month = {August}, author = {Fan Yang and Lei Shu and Yuli Yang and Guangjie Han and Simon Pearson and Kailiang Li}, title = {Optimal Deployment of Solar Insecticidal Lamps over Constrained Locations in Mixed-Crop Farmlands}, publisher = {IEEE}, year = {2021}, journal = {IEEE Internet of Things Journal}, doi = {10.1109/JIOT.2021.3064043}, pages = {13095--13114}, url = {https://eprints.lincoln.ac.uk/id/eprint/44192/}, abstract = {Solar Insecticidal Lamps (SILs) play a vital role in green prevention and control of pests. By embedding SILs in Wireless Sensor Networks (WSNs), we establish a novel agricultural Internet of Things (IoT), referred to as the SILIoTs. In practice, the deployment of SIL nodes is determined by the geographical characteristics of an actual farmland, the constraints on the locations of SIL nodes, and the radio-wave propagation in complex agricultural environment. In this paper, we mainly focus on the constrained SIL Deployment Problem (cSILDP) in a mixed-crop farmland, where the locations used to deploy SIL nodes are a limited set of candidates located on the ridges. We formulate the cSILDP in this scenario as a Connected Set Cover (CSC) problem, and propose a Hole Aware Node Deployment Method (HANDM) based on the greedy algorithm to solve the constrained optimization problem. The HANDM is a two-phase method. In the first phase, a novel deployment strategy is utilised to guarantee only a single coverage hole in each iteration, based on which a set of suboptimal locations is found for the deployment of SIL nodes. In the second phase, according to the operations of deletion and fusion, the optimal locations are obtained to meet the requirements on complete coverage and connectivity. Experimental results show that our proposed method achieves better performance than the peer algorithms, specifically in terms of deployment cost.} }
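The greedy first phase described in the entry above can be caricatured as set cover over the constrained candidate sites: repeatedly pick the ridge location whose lamp covers the most still-uncovered ground. A minimal sketch with an invented grid, candidate list, and coverage radius; the connectivity requirement and the deletion/fusion phase are omitted.

import math

cells = [(x, y) for x in range(6) for y in range(6)]            # farmland grid
candidates = [(0, 0), (0, 3), (3, 0), (3, 3), (5, 5), (2, 2)]   # ridge locations
R = 2.5                                                         # SIL coverage radius

def covered(site):
    return {c for c in cells if math.dist(c, site) <= R}

uncovered, chosen = set(cells), []
while uncovered:
    best = max(candidates, key=lambda s: len(covered(s) & uncovered))
    gain = covered(best) & uncovered
    if not gain:
        break                    # remaining cells are unreachable from any candidate
    chosen.append(best)
    uncovered -= gain

print("chosen sites:", chosen, "| cells left uncovered:", len(uncovered))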
- J. Zhao, H. Wang, N. Bellotto, C. Hu, J. Peng, and S. Yue, “Enhancing LGMD’s looming selectivity for UAV with spatial-temporal distributed presynaptic connections,” IEEE transactions on neural networks and learning systems, p. 1–15, 2021. doi:10.1109/TNNLS.2021.3106946
[BibTeX] [Abstract] [Download PDF]
Collision detection is one of the most challenging tasks for Unmanned Aerial Vehicles (UAVs). This is especially true for small or micro UAVs, due to their limited computational power. In nature, flying insects with compact and simple visual systems demonstrate their remarkable ability to navigate and avoid collision in complex environments. A good example of this is provided by locusts. They can avoid collisions in a dense swarm through the activity of a motion-based visual neuron called the Lobula Giant Movement Detector (LGMD). The defining feature of the LGMD neuron is its preference for looming. As a flying insect’s visual neuron, LGMD is considered to be an ideal basis for building UAV’s collision detecting system. However, existing LGMD models cannot distinguish looming clearly from other visual cues such as complex background movements caused by UAV agile flights. To address this issue, we proposed a new model implementing distributed spatial-temporal synaptic interactions, which is inspired by recent findings in locusts’ synaptic morphology. We first introduced the locally distributed excitation to enhance the excitation caused by visual motion with preferred velocities. Then radially extending temporal latency for inhibition is incorporated to compete with the distributed excitation and selectively suppress the non-preferred visual motions. This spatial-temporal competition between excitation and inhibition in our model is therefore tuned to preferred image angular velocity representing looming rather than background movements with these distributed synaptic interactions. Systematic experiments have been conducted to verify the performance of the proposed model for UAV agile flights. The results have demonstrated that this new model enhances the looming selectivity in complex flying scenes considerably, and has the potential to be implemented on embedded collision detection systems for small or micro UAVs.
@article{lincoln47316, title = {Enhancing LGMD's Looming Selectivity for UAV With Spatial-Temporal Distributed Presynaptic Connections}, author = {Jiannan Zhao and Hongxin Wang and Nicola Bellotto and Cheng Hu and Jigen Peng and Shigang Yue}, publisher = {IEEE}, year = {2021}, pages = {1--15}, doi = {10.1109/TNNLS.2021.3106946}, journal = {IEEE Transactions on Neural Networks and Learning Systems}, url = {https://eprints.lincoln.ac.uk/id/eprint/47316/}, abstract = {Collision detection is one of the most challenging tasks for Unmanned Aerial Vehicles (UAVs). This is especially true for small or micro UAVs, due to their limited computational power. In nature, flying insects with compact and simple visual systems demonstrate their remarkable ability to navigate and avoid collision in complex environments. A good example of this is provided by locusts. They can avoid collisions in a dense swarm through the activity of a motion-based visual neuron called the Lobula Giant Movement Detector (LGMD). The defining feature of the LGMD neuron is its preference for looming. As a flying insect's visual neuron, LGMD is considered to be an ideal basis for building UAV's collision detecting system. However, existing LGMD models cannot distinguish looming clearly from other visual cues such as complex background movements caused by UAV agile flights. To address this issue, we proposed a new model implementing distributed spatial-temporal synaptic interactions, which is inspired by recent findings in locusts' synaptic morphology. We first introduced the locally distributed excitation to enhance the excitation caused by visual motion with preferred velocities. Then radially extending temporal latency for inhibition is incorporated to compete with the distributed excitation and selectively suppress the non-preferred visual motions. This spatial-temporal competition between excitation and inhibition in our model is therefore tuned to preferred image angular velocity representing looming rather than background movements with these distributed synaptic interactions. Systematic experiments have been conducted to verify the performance of the proposed model for UAV agile flights. The results have demonstrated that this new model enhances the looming selectivity in complex flying scenes considerably, and has the potential to be implemented on embedded collision detection systems for small or micro UAVs.} }
- S. Sarkadi, A. Rutherford, P. McBurney, S. Parsons, and I. Rahwan, “The evolution of deception,” Royal society open science, vol. 8, iss. 9, 2021. doi:10.1098/rsos.201032
[BibTeX] [Abstract] [Download PDF]
Deception plays a critical role in the dissemination of information, and has important consequences on the functioning of cultural, market-based and democratic institutions. Deception has been widely studied within the fields of philosophy, psychology, economics and political science. Yet, we still lack an understanding of how deception emerges in a society under competitive (evolutionary) pressures. This paper begins to fill this gap by bridging evolutionary models of social good–public goods games (PGGs)–with ideas from Interpersonal Deception Theory and Truth-Default Theory. This provides a well-founded analysis of the growth of deception in societies and the effectiveness of several approaches to reducing deception. Assuming that knowledge is a public good, we use extensive simulation studies to explore (i) how deception impacts the sharing and dissemination of knowledge in societies over time, (ii) how different types of knowledge sharing societies are affected by deception, and (iii) what type of policing and regulation is needed to reduce the negative effects of deception in knowledge sharing. Our results indicate that cooperation in knowledge sharing can be re-established in systems by introducing institutions that investigate and regulate both defection and deception using a decentralised case-by-case strategy. This provides evidence for the adoption of methods for reducing the use of deception in the world around us in order to avoid a Tragedy of The Digital Commons.
@article{lincoln46543, volume = {8}, number = {9}, month = {September}, author = {Stefan Sarkadi and Alex Rutherford and Peter McBurney and Simon Parsons and Iyad Rahwan}, title = {The Evolution of Deception}, publisher = {Royal Society}, year = {2021}, journal = {Royal Society Open Science}, doi = {10.1098/rsos.201032}, url = {https://eprints.lincoln.ac.uk/id/eprint/46543/}, abstract = {Deception plays a critical role in the dissemination of information, and has important consequences on the functioning of cultural, market-based and democratic institutions. Deception has been widely studied within the fields of philosophy, psychology, economics and political science. Yet, we still lack an understanding of how deception emerges in a society under competitive (evolutionary) pressures. This paper begins to fill this gap by bridging evolutionary models of social good--public goods games (PGGs)--with ideas from Interpersonal Deception Theory and Truth-Default Theory. This provides a well-founded analysis of the growth of deception in societies and the effectiveness of several approaches to reducing deception. Assuming that knowledge is a public good, we use extensive simulation studies to explore (i) how deception impacts the sharing and dissemination of knowledge in societies over time, (ii) how different types of knowledge sharing societies are affected by deception, and (iii) what type of policing and regulation is needed to reduce the negative effects of deception in knowledge sharing. Our results indicate that cooperation in knowledge sharing can be re-established in systems by introducing institutions that investigate and regulate both defection and deception using a decentralised case-by-case strategy. This provides evidence for the adoption of methods for reducing the use of deception in the world around us in order to avoid a Tragedy of The Digital Commons.} }
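The simulations in the entry above build on public goods games. A minimal one-round sketch in which deceivers claim to contribute but do not, and an investigating institution fines them with some probability; the group size, multiplier, and fine are invented parameters, not the paper's configuration.

import random

random.seed(0)
N, ENDOWMENT, MULTIPLIER, FINE = 20, 1.0, 1.8, 1.5
strategies = ["cooperate"] * 10 + ["defect"] * 5 + ["deceive"] * 5

def play_round(investigate_prob=0.3):
    pot = sum(ENDOWMENT for s in strategies if s == "cooperate")
    share = MULTIPLIER * pot / N                      # public good returned to everyone
    payoffs = {}
    for i, s in enumerate(strategies):
        cost = ENDOWMENT if s == "cooperate" else 0.0
        fined = s == "deceive" and random.random() < investigate_prob
        payoffs[i] = ENDOWMENT - cost + share - (FINE if fined else 0.0)
    return payoffs

p = play_round()
for s in ("cooperate", "defect", "deceive"):
    mean = sum(p[i] for i, t in enumerate(strategies) if t == s) / strategies.count(s)
    print(f"{s}: mean payoff {mean:.2f}")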
- H. Isakhani, N. Bellotto, Q. Fu, and S. Yue, “Generative design and fabrication of a locust-inspired gliding wing prototype for micro aerial robots,” Journal of computational design and engineering, vol. 8, iss. 5, p. 1191–1203, 2021. doi:10.1093/jcde/qwab040
[BibTeX] [Abstract] [Download PDF]
Gliding is generally one of the most efficient modes of flight in natural fliers that can be further emphasised in the aircraft industry to reduce emissions and facilitate endured flights. Natural wings being fundamentally responsible for this phenomenon are developed over millions of years of evolution. Artificial wings on the other hand, are limited to the human-proposed conceptual design phase often leading to sub-optimal results. However, the novel Generative Design (GD) method claims to produce mechanically improved solutions based on robust and rigorous models of design conditions and performance criteria. This study investigates the potential applications of this Computer-Associated Design (CAsD) technology to generate novel micro aerial vehicle wing concepts that are structurally more stable and efficient. Multiple performance-driven solutions (wings) with high-level goals are generated by an infinite scale cloud computing solution executing a machine learning based GD algorithm. Ultimately, the highest performing CAsD concepts are numerically analysed, fabricated, and mechanically tested according to our previous study, and the results are compared to the literature for qualitative as well as quantitative analysis and validations. It was concluded that the GD-based tandem wings’ (fore- & hindwing) ability to withstand fracture failure without compromising structural rigidity was optimised by 78% compared to its peer models. However, the weight was slightly increased by 11% with a 14% drop in stiffness when compared to the models from our previous study.
@article{lincoln46871, volume = {8}, number = {5}, month = {October}, author = {Hamid Isakhani and Nicola Bellotto and Qinbing Fu and Shigang Yue}, title = {Generative design and fabrication of a locust-inspired gliding wing prototype for micro aerial robots}, publisher = {Oxford University Press}, year = {2021}, journal = {Journal of Computational Design and Engineering}, doi = {10.1093/jcde/qwab040}, pages = {1191--1203}, url = {https://eprints.lincoln.ac.uk/id/eprint/46871/}, abstract = {Gliding is generally one of the most efficient modes of flight in natural fliers that can be further emphasised in the aircraft industry to reduce emissions and facilitate endured flights. Natural wings being fundamentally responsible for this phenomenon are developed over millions of years of evolution. Artificial wings on the other hand, are limited to the human-proposed conceptual design phase often leading to sub-optimal results. However, the novel Generative Design (GD) method claims to produce mechanically improved solutions based on robust and rigorous models of design conditions and performance criteria. This study investigates the potential applications of this Computer-Associated Design (CAsD) technology to generate novel micro aerial vehicle wing concepts that are structurally more stable and efficient. Multiple performance-driven solutions (wings) with high-level goals are generated by an infinite scale cloud computing solution executing a machine learning based GD algorithm. Ultimately, the highest performing CAsD concepts are numerically analysed, fabricated, and mechanically tested according to our previous study, and the results are compared to the literature for qualitative as well as quantitative analysis and validations. It was concluded that the GD-based tandem wings' (fore- \& hindwing) ability to withstand fracture failure without compromising structural rigidity was optimised by 78\% compared to its peer models. However, the weight was slightly increased by 11\% with a 14\% drop in stiffness when compared to the models from our previous study.} }
- T. Liu, X. Sun, C. Hu, Q. Fu, and S. Yue, “A multiple pheromone communication system for swarm intelligence,” IEEE access, vol. 9, p. 148721–148737, 2021. doi:10.1109/ACCESS.2021.3124386
[BibTeX] [Abstract] [Download PDF]
Pheromones are chemical substances essential for communication among social insects. In the application of swarm intelligence to real micro mobile robots, the deployment of a single virtual pheromone has emerged recently as a powerful real-time method for indirect communication. However, these studies usually exploit only one kind of pheromones in their task, neglecting the crucial fact that in the world of real insects, multiple pheromones play important roles in shaping stigmergic behaviours such as foraging or nest building. To explore the multiple pheromones mechanism which enables robots to solve complex collective tasks efficiently, we introduce an artificial multiple pheromone system (ColCOSΦ) to support swarm intelligence research by enabling multiple robots to deploy and react to multiple pheromones simultaneously. The proposed system ColCOSΦ uses optical signals to emulate different evaporating chemical substances i.e. pheromones. These emulated pheromones are represented by trails displayed on a wide LCD display screen positioned horizontally, on which multiple miniature robots can move freely. The colour sensors beneath the robots can detect and identify lingering “pheromones” on the screen. Meanwhile, the release of any pheromone from each robot is enabled by monitoring its positional information over time with an overhead camera. No other communication methods apart from virtual pheromones are employed in this system. Two case studies have been carried out which have verified the feasibility and effectiveness of the proposed system in achieving complex swarm tasks as empowered by multiple pheromones. This novel platform is a timely and powerful tool for research into swarm intelligence.
@article{lincoln47447, volume = {9}, month = {December}, author = {Tian Liu and Xuelong Sun and Cheng Hu and Qinbing Fu and Shigang Yue}, title = {A Multiple Pheromone Communication System for Swarm Intelligence}, publisher = {IEEE}, year = {2021}, journal = {IEEE Access}, doi = {10.1109/ACCESS.2021.3124386}, pages = {148721--148737}, url = {https://eprints.lincoln.ac.uk/id/eprint/47447/}, abstract = {Pheromones are chemical substances essential for communication among social insects. In the application of swarm intelligence to real micro mobile robots, the deployment of a single virtual pheromone has emerged recently as a powerful real-time method for indirect communication. However, these studies usually exploit only one kind of pheromones in their task, neglecting the crucial fact that in the world of real insects, multiple pheromones play important roles in shaping stigmergic behaviours such as foraging or nest building. To explore the multiple pheromones mechanism which enables robots to solve complex collective tasks efficiently, we introduce an artificial multiple pheromone system (ColCOS{\ensuremath{\Phi}}) to support swarm intelligence research by enabling multiple robots to deploy and react to multiple pheromones simultaneously. The proposed system ColCOS{\ensuremath{\Phi}} uses optical signals to emulate different evaporating chemical substances i.e. pheromones. These emulated pheromones are represented by trails displayed on a wide LCD display screen positioned horizontally, on which multiple miniature robots can move freely. The colour sensors beneath the robots can detect and identify lingering "pheromones" on the screen. Meanwhile, the release of any pheromone from each robot is enabled by monitoring its positional information over time with an overhead camera. No other communication methods apart from virtual pheromones are employed in this system. Two case studies have been carried out which have verified the feasibility and effectiveness of the proposed system in achieving complex swarm tasks as empowered by multiple pheromones. This novel platform is a timely and powerful tool for research into swarm intelligence.} }
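To make the virtual-pheromone mechanism described above concrete, here is a minimal Python sketch (not the ColCOSΦ code) of several evaporating pheromone fields that robots deposit onto and sample from; the grid size, evaporation rates and two-channel setup are illustrative assumptions.

```python
# Minimal sketch of multiple virtual pheromone fields in the spirit of
# ColCOSPhi. All names, rates and the two-channel setup are assumptions
# for illustration, not the actual ColCOSPhi implementation.
import numpy as np

class PheromoneField:
    def __init__(self, shape=(100, 100), evaporation=0.01):
        self.grid = np.zeros(shape)
        self.evaporation = evaporation  # fraction lost per time step

    def deposit(self, x, y, amount=1.0):
        # robot position would come from the overhead camera
        self.grid[y, x] += amount

    def step(self):
        # emulate an evaporating chemical substance
        self.grid *= 1.0 - self.evaporation

    def read(self, x, y):
        # what a colour sensor beneath a robot would sample
        return self.grid[y, x]

# Two independent "chemicals", e.g. an attractant and an alarm signal.
food_trail = PheromoneField(evaporation=0.005)
alarm = PheromoneField(evaporation=0.05)

food_trail.deposit(10, 10)
alarm.deposit(10, 10)
for _ in range(100):
    food_trail.step()
    alarm.step()
# the slowly evaporating trail outlives the fast one
print(food_trail.read(10, 10), alarm.read(10, 10))
```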
- Z. Maamar, N. Faci, M. Al-Khafajiy, and M. Dohan, “Time-centric and resource-driven composition for the internet of things,” Internet of things, vol. 16, p. 100460, 2021. doi:10.1016/j.iot.2021.100460
[BibTeX] [Abstract] [Download PDF]
Internet of Things (IoT), one of the fastest growing Information and Communication Technologies (ICT), is playing a major role in provisioning contextualized, smart services to end-users and organizations. To sustain this role, many challenges must be tackled with focus in this paper on the design and development of thing composition. The complex nature of today's needs requires groups of things, and not separate things, to work together to satisfy these needs. By analogy with other ICTs like Web services, thing composition is specified with a model that uses dependencies to decide upon things that will do what, where, when, and why. Two types of dependencies are adopted, regular that schedule the execution chronology of things and special that coordinate the operations of things when they run into obstacles like unavailability of resources to use. Both resource use and resource availability are specified in compliance with Allen's time intervals upon which reasoning takes place. This reasoning is technically demonstrated through a system extending EdgeCloudSim and backed with a set of experiments.
@article{lincoln47573, volume = {16}, month = {December}, author = {Zakaria Maamar and Noura Faci and Mohammed Al-Khafajiy and Murtada Dohan}, title = {Time-centric and resource-driven composition for the Internet of Things}, publisher = {Elsevier}, year = {2021}, journal = {Internet of Things}, doi = {10.1016/j.iot.2021.100460}, pages = {100460}, url = {https://eprints.lincoln.ac.uk/id/eprint/47573/}, abstract = {Internet of Things (IoT), one of the fastest growing Information and Communication Technologies (ICT), is playing a major role in provisioning contextualized, smart services to end-users and organizations. To sustain this role, many challenges must be tackled with focus in this paper on the design and development of thing composition. The complex nature of today's needs requires groups of things, and not separate things, to work together to satisfy these needs. By analogy with other ICTs like Web services, thing composition is specified with a model that uses dependencies to decide upon things that will do what, where, when, and why. Two types of dependencies are adopted, regular that schedule the execution chronology of things and special that coordinate the operations of things when they run into obstacles like unavailability of resources to use. Both resource use and resource availability are specified in compliance with Allen's time intervals upon which reasoning takes place. This reasoning is technically demonstrated through a system extending EdgeCloudSim and backed with a set of experiments.} }
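Because the composition model above reasons over Allen's time intervals, the following sketch shows how a coarse subset of Allen's relations can be computed between a thing's resource-use interval and a resource's availability window. The relation names follow Allen's algebra; the final scheduling check is an illustrative assumption, not the paper's exact policy.

```python
# Hedged sketch: coarse Allen-style interval relations for deciding
# whether a thing's resource use fits a resource's availability.
from collections import namedtuple

Interval = namedtuple("Interval", "start end")

def allen_relation(a, b):
    """Relation of interval a with respect to b (coarse subset:
    'during' here also covers Allen's starts/finishes cases)."""
    if a.end < b.start:
        return "before"
    if b.end < a.start:
        return "after"
    if a.end == b.start:
        return "meets"
    if b.end == a.start:
        return "met-by"
    if (a.start, a.end) == (b.start, b.end):
        return "equals"
    if b.start <= a.start and a.end <= b.end:
        return "during"
    if a.start <= b.start and b.end <= a.end:
        return "contains"
    return "overlaps" if a.start < b.start else "overlapped-by"

use = Interval(3, 7)      # when the thing needs the resource
avail = Interval(2, 10)   # when the resource is available
# illustrative rule: bind the resource only if use fits inside availability
print(allen_relation(use, avail) in ("during", "equals"))  # True
```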
- A. Zahra, M. Ghafoor, K. Munir, A. Ullah, and Z. U. Abideen, “Application of region-based video surveillance in smart cities using deep learning,” Multimedia tools and applications, 2021. doi:10.1007/s11042-021-11468-w
[BibTeX] [Abstract] [Download PDF]
Smart video surveillance helps to build more robust smart city environment. The varied angle cameras act as smart sensors and collect visual data from smart city environment and transmit it for further visual analysis. The transmitted visual data is required to be in high quality for efficient analysis which is a challenging task while transmitting videos on low capacity bandwidth communication channels. In latest smart surveillance cameras, high quality of video transmission is maintained through various video encoding techniques such as high efficiency video coding. However, these video coding techniques still provide limited capabilities and the demand of high-quality based encoding for salient regions such as pedestrians, vehicles, cyclist/motorcyclist and road in video surveillance systems is still not met. This work is a contribution towards building an efficient salient region-based surveillance framework for smart cities. The proposed framework integrates a deep learning-based video surveillance technique that extracts salient regions from a video frame without information loss, and then encodes it in reduced size. We have applied this approach in diverse case studies environments of smart city to test the applicability of the framework. The successful result in terms of bitrate 56.92%, peak signal to noise ratio 5.35 dB and SR based segmentation accuracy of 92% and 96% for two different benchmark datasets is the outcome of proposed work. Consequently, the generation of less computational region-based video data makes it adaptable to improve surveillance solution in Smart Cities.
@article{lincoln47914, month = {December}, title = {Application of region-based video surveillance in smart cities using deep learning}, author = {Asma Zahra and Mubeen Ghafoor and Kamran Munir and Ata Ullah and Zain Ul Abideen}, publisher = {Springer}, year = {2021}, doi = {10.1007/s11042-021-11468-w}, journal = {Multimedia Tools and Applications}, url = {https://eprints.lincoln.ac.uk/id/eprint/47914/}, abstract = {Smart video surveillance helps to build more robust smart city environment. The varied angle cameras act as smart sensors and collect visual data from smart city environment and transmit it for further visual analysis. The transmitted visual data is required to be in high quality for efficient analysis which is a challenging task while transmitting videos on low capacity bandwidth communication channels. In latest smart surveillance cameras, high quality of video transmission is maintained through various video encoding techniques such as high efficiency video coding. However, these video coding techniques still provide limited capabilities and the demand of high-quality based encoding for salient regions such as pedestrians, vehicles, cyclist/motorcyclist and road in video surveillance systems is still not met. This work is a contribution towards building an efficient salient region-based surveillance framework for smart cities. The proposed framework integrates a deep learning-based video surveillance technique that extracts salient regions from a video frame without information loss, and then encodes it in reduced size. We have applied this approach in diverse case studies environments of smart city to test the applicability of the framework. The successful result in terms of bitrate 56.92\%, peak signal to noise ratio 5.35 dB and SR based segmentation accuracy of 92\% and 96\% for two different benchmark datasets is the outcome of proposed work. Consequently, the generation of less computational region-based video data makes it adaptable to improve surveillance solution in Smart Cities.} }
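As a toy illustration of the salient-region idea above (not the paper's detector or codec), the sketch below keeps a detected region at full fidelity while coarsely quantising the background; the mask and quantisation step are assumptions.

```python
# Hedged sketch of region-based encoding: preserve a salient region
# (e.g. a pedestrian) losslessly and coarsely quantise the background.
import numpy as np

frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
mask = np.zeros((480, 640), dtype=bool)
mask[100:300, 200:400] = True  # stand-in for a detected salient region

def quantise(img, step):
    # crude stand-in for aggressive background compression
    return (img // step) * step

out = quantise(frame, 32)   # heavily quantised everywhere...
out[mask] = frame[mask]     # ...then restore the salient region exactly
print(np.array_equal(out[mask], frame[mask]))  # True: region preserved
```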
- D. Laparidou, F. Curtis, J. Akanuwe, K. Goher, N. Siriwardena, and A. Kucukyilmaz, “Patient, carer, and staff perceptions of robotics in motor rehabilitation: a systematic review and qualitative meta-synthesis,” Journal of neuroengineering and rehabilitation, vol. 18, iss. 181, 2021. doi:10.1186/s12984-021-00976-3
[BibTeX] [Abstract] [Download PDF]
Background: In recent years, robotic rehabilitation devices have often been used for motor training. However, to date, no systematic reviews of qualitative studies exploring the end-user experiences of robotic devices in motor rehabilitation have been published. The aim of this study was to review end-users' (patients, carers and healthcare professionals) experiences with robotic devices in motor rehabilitation, by conducting a systematic review and thematic meta-synthesis of qualitative studies concerning the users' experiences with such robotic devices. Methods: Qualitative studies and mixed-methods studies with a qualitative element were eligible for inclusion. Nine electronic databases were searched from inception to August 2020, supplemented with internet searches and forward and backward citation tracking from the included studies and review articles. Data were synthesised thematically following the Thomas and Harden approach. The CASP Qualitative Checklist was used to assess the quality of the included studies of this review. Results: The search strategy identified a total of 13,556 citations and after removing duplicates and excluding citations based on title and abstract, and full text screening, 30 studies were included. All studies were considered of acceptable quality. We developed six analytical themes: logistic barriers; technological challenges; appeal and engagement; supportive interactions and relationships; benefits for physical, psychological, and social function(ing); and expanding and sustaining therapeutic options. Conclusions: Despite experiencing technological and logistic challenges, participants found robotic devices acceptable, useful and beneficial (physically, psychologically, and socially), as well as fun and interesting. Having supportive relationships with significant others and positive therapeutic relationships with healthcare staff were considered the foundation for successful rehabilitation and recovery.
@article{lincoln47708, volume = {18}, number = {181}, month = {December}, author = {Despina Laparidou and Ffion Curtis and Joseph Akanuwe and Khaled Goher and Niro Siriwardena and Ayse Kucukyilmaz}, title = {Patient, carer, and staff perceptions of robotics in motor rehabilitation: a systematic review and qualitative meta-synthesis}, publisher = {BMC}, year = {2021}, journal = {Journal of NeuroEngineering and Rehabilitation}, doi = {10.1186/s12984-021-00976-3}, url = {https://eprints.lincoln.ac.uk/id/eprint/47708/}, abstract = {Background: In recent years, robotic rehabilitation devices have often been used for motor training. However, to date, no systematic reviews of qualitative studies exploring the end-user experiences of robotic devices in motor rehabilitation have been published. The aim of this study was to review end-users' (patients, carers and healthcare professionals) experiences with robotic devices in motor rehabilitation, by conducting a systematic review and thematic meta-synthesis of qualitative studies concerning the users' experiences with such robotic devices. Methods: Qualitative studies and mixed-methods studies with a qualitative element were eligible for inclusion. Nine electronic databases were searched from inception to August 2020, supplemented with internet searches and forward and backward citation tracking from the included studies and review articles. Data were synthesised thematically following the Thomas and Harden approach. The CASP Qualitative Checklist was used to assess the quality of the included studies of this review. Results: The search strategy identified a total of 13,556 citations and after removing duplicates and excluding citations based on title and abstract, and full text screening, 30 studies were included. All studies were considered of acceptable quality. We developed six analytical themes: logistic barriers; technological challenges; appeal and engagement; supportive interactions and relationships; benefits for physical, psychological, and social function(ing); and expanding and sustaining therapeutic options. Conclusions: Despite experiencing technological and logistic challenges, participants found robotic devices acceptable, useful and beneficial (physically, psychologically, and socially), as well as fun and interesting. Having supportive relationships with significant others and positive therapeutic relationships with healthcare staff were considered the foundation for successful rehabilitation and recovery.} }
- A. Seddaoui, C. M. Saaj, and M. H. Nair, “Modeling a controlled-floating space robot for in-space services: a beginner's tutorial,” Frontiers in robotics and AI, vol. 8, 2021. doi:10.3389/frobt.2021.725333
[BibTeX] [Abstract] [Download PDF]
Ground-based applications of robotics and autonomous systems (RASs) are fast advancing, and there is a growing appetite for developing cost-effective RAS solutions for in situ servicing, debris removal, manufacturing, and assembly missions. An orbital space robot, that is, a spacecraft mounted with one or more robotic manipulators, is an inevitable system for a range of future in-orbit services. However, various practical challenges make controlling a space robot extremely difficult compared with its terrestrial counterpart. The state of the art of modeling the kinematics and dynamics of a space robot, operating in the free-flying and free-floating modes, has been well studied by researchers. However, these two modes of operation have various shortcomings, which can be overcome by operating the space robot in the controlled-floating mode. This tutorial article aims to address the knowledge gap in modeling complex space robots operating in the controlled-floating mode and under perturbed conditions. The novel research contribution of this article is the refined dynamic model of a chaser space robot, derived with respect to the moving target while accounting for the internal perturbations due to constantly changing the center of mass, the inertial matrix, Coriolis, and centrifugal terms of the coupled system; it also accounts for the external environmental disturbances. The nonlinear model presented accurately represents the multibody coupled dynamics of a space robot, which is pivotal for precise pose control. Simulation results presented demonstrate the accuracy of the model for closed-loop control. In addition to the theoretical contributions in mathematical modeling, this article also offers a commercially viable solution for a wide range of in-orbit missions.
@article{lincoln48335, volume = {8}, month = {December}, author = {Asma Seddaoui and Chakravarthini Mini Saaj and Manu Harikrishnan Nair}, title = {Modeling a Controlled-Floating Space Robot for In-Space Services: A Beginner's Tutorial}, publisher = {Frontiers Media}, journal = {Frontiers in Robotics and AI}, doi = {10.3389/frobt.2021.725333}, year = {2021}, url = {https://eprints.lincoln.ac.uk/id/eprint/48335/}, abstract = {Ground-based applications of robotics and autonomous systems (RASs) are fast advancing, and there is a growing appetite for developing cost-effective RAS solutions for in situ servicing, debris removal, manufacturing, and assembly missions. An orbital space robot, that is, a spacecraft mounted with one or more robotic manipulators, is an inevitable system for a range of future in-orbit services. However, various practical challenges make controlling a space robot extremely difficult compared with its terrestrial counterpart. The state of the art of modeling the kinematics and dynamics of a space robot, operating in the free-flying and free-floating modes, has been well studied by researchers. However, these two modes of operation have various shortcomings, which can be overcome by operating the space robot in the controlled-floating mode. This tutorial article aims to address the knowledge gap in modeling complex space robots operating in the controlled-floating mode and under perturbed conditions. The novel research contribution of this article is the refined dynamic model of a chaser space robot, derived with respect to the moving target while accounting for the internal perturbations due to constantly changing the center of mass, the inertial matrix, Coriolis, and centrifugal terms of the coupled system; it also accounts for the external environmental disturbances. The nonlinear model presented accurately represents the multibody coupled dynamics of a space robot, which is pivotal for precise pose control. Simulation results presented demonstrate the accuracy of the model for closed-loop control. In addition to the theoretical contributions in mathematical modeling, this article also offers a commercially viable solution for a wide range of in-orbit missions.} }
- M. Chellapurath, K. Walker, E. Donato, G. Picardi, S. Stefanni, C. Laschi, F. G. Serchi, and M. Calisti, “Analysis of station keeping performance of an underwater legged robot,” IEEE/ASME transactions on mechatronics, p. 1–12, 2021. doi:10.1109/TMECH.2021.3132779
[BibTeX] [Abstract] [Download PDF]
Remotely operated vehicles (ROVs) can exploit contact with the substrate to enhance their station keeping capabilities. A negatively buoyant underwater legged robot can perform passive station keeping, relying on the frictional force to counteract disturbances acting on the robot. Unlike conventional propeller-based ROVs, this approach has similar, slightly higher efficiency while reducing disturbances to the substrate. Detailed analysis on the passive station keeping performance of an underwater legged robot was performed using Seabed Interaction Legged Vehicle for Exploration and Research 2 (SILVER2) as a reference platform, investigating the effect of leg configuration, net weight, and the nature of the substrate on station keeping performance. A numerical model was developed to study the effect of both geometrical and physical parameters on the station keeping performance, which accurately predicted the station keeping behavior of the robot during field tests. Finally, we defined a metric called station keeping efficiency for the evaluation of station keeping performance; the underwater legged robots showed higher station keeping efficiency (28%) than commercial propeller-based ROVs (11%), showing they could present an alternative for tasks such as environmental monitoring.
@article{lincoln52083, month = {December}, author = {Mrudul Chellapurath and Kyle Walker and Enrico Donato and Giacomo Picardi and Sergio Stefanni and Cecilia Laschi and Francesco Giorgio Serchi and Marcello Calisti}, title = {Analysis of Station Keeping Performance of an Underwater Legged Robot}, publisher = {IEEE}, journal = {IEEE/ASME Transactions on Mechatronics}, doi = {10.1109/TMECH.2021.3132779}, pages = {1--12}, year = {2021}, url = {https://eprints.lincoln.ac.uk/id/eprint/52083/}, abstract = {Remotely operated vehicles (ROVs) can exploit contact with the substrate to enhance their station keeping capabilities. A negatively buoyant underwater legged robot can perform passive station keeping, relying on the frictional force to counteract disturbances acting on the robot. Unlike conventional propeller-based ROVs, this approach has similar, slightly higher efficiency while reducing disturbances to the substrate. Detailed analysis on the passive station keeping performance of an underwater legged robot was performed using Seabed Interaction Legged Vehicle for Exploration and Research 2 (SILVER2) as a reference platform, investigating the effect of leg configuration, net weight, and the nature of the substrate on station keeping performance. A numerical model was developed to study the effect of both geometrical and physical parameters on the station keeping performance, which accurately predicted the station keeping behavior of the robot during field tests. Finally, we defined a metric called station keeping efficiency for the evaluation of station keeping performance; the underwater legged robots showed higher station keeping efficiency (28\%) than commercial propeller-based ROVs (11\%), showing they could present an alternative for tasks such as environmental monitoring.} }
- J. L. Louedec and G. Cielniak, “3D shape sensing and deep learning-based segmentation of strawberries,” Computers and electronics in agriculture, vol. 190, 2021. doi:10.1016/j.compag.2021.106374
[BibTeX] [Abstract] [Download PDF]
Automation and robotisation of the agricultural sector are seen as a viable solution to socio-economic challenges faced by this industry. This technology often relies on intelligent perception systems providing information about crops, plants and the entire environment. The challenges faced by traditional 2D vision systems can be addressed by modern 3D vision systems which enable straightforward localisation of objects, size and shape estimation, or handling of occlusions. So far, the use of 3D sensing was mainly limited to indoor or structured environments. In this paper, we evaluate modern sensing technologies including stereo and time-of-flight cameras for 3D perception of shape in agriculture and study their usability for segmenting out soft fruit from background based on their shape. To that end, we propose a novel 3D deep neural network which exploits the organised nature of information originating from the camera-based 3D sensors. We demonstrate the superior performance and efficiency of the proposed architecture compared to the state-of-the-art 3D networks. Through a simulated study, we also show the potential of the 3D sensing paradigm for object segmentation in agriculture and provide insights and analysis of what shape quality is needed and expected for further analysis of crops. The results of this work should encourage researchers and companies to develop more accurate and robust 3D sensing technologies to assure their wider adoption in practical agricultural applications.
@article{lincoln47035, volume = {190}, month = {November}, author = {Justin Le Louedec and Grzegorz Cielniak}, title = {3D shape sensing and deep learning-based segmentation of strawberries}, publisher = {Elsevier}, journal = {Computers and Electronics in Agriculture}, doi = {10.1016/j.compag.2021.106374}, year = {2021}, url = {https://eprints.lincoln.ac.uk/id/eprint/47035/}, abstract = {Automation and robotisation of the agricultural sector are seen as a viable solution to socio-economic challenges faced by this industry. This technology often relies on intelligent perception systems providing information about crops, plants and the entire environment. The challenges faced by traditional 2D vision systems can be addressed by modern 3D vision systems which enable straightforward localisation of objects, size and shape estimation, or handling of occlusions. So far, the use of 3D sensing was mainly limited to indoor or structured environments. In this paper, we evaluate modern sensing technologies including stereo and time-of-flight cameras for 3D perception of shape in agriculture and study their usability for segmenting out soft fruit from background based on their shape. To that end, we propose a novel 3D deep neural network which exploits the organised nature of information originating from the camera-based 3D sensors. We demonstrate the superior performance and efficiency of the proposed architecture compared to the state-of-the-art 3D networks. Through a simulated study, we also show the potential of the 3D sensing paradigm for object segmentation in agriculture and provide insights and analysis of what shape quality is needed and expected for further analysis of crops. The results of this work should encourage researchers and companies to develop more accurate and robust 3D sensing technologies to assure their wider adoption in practical agricultural applications.} }
- A. Astolfi, G. Picardi, and M. Calisti, “Multilegged underwater running with articulated legs,” IEEE transactions on robotics, vol. 38, iss. 3, p. 1841–1855, 2021. doi:10.1109/TRO.2021.3118204
[BibTeX] [Abstract] [Download PDF]
Drawing inspiration from the locomotion modalities of animals, legged robots demonstrated the potential to traverse irregular and unstructured environments. Successful approaches exploited single-leg templates, like the spring-loaded inverted pendulum (SLIP), as a reference for the control of multilegged machines. Nevertheless, the anchoring between the low-order model and the actual multilegged structure is still an open challenge. This article proposes a novel strategy to derive actuation inputs for a multilegged robot by expressing the control requirements in terms of jump height and forward speed (derived from the limit cycle). We found that these requirements could be associated with a specific maximum force, successively split on an arbitrary number of legs and their relative actuation sets. The proposed approach has been validated in multibody simulation and real-world experiments by employing the underwater hexapod robot SILVER2. Results show that locomotion performances of the low-order model are reflected by the simulated and actual robot, showing that the articulated-USLIP (a-USLIP) model can faithfully explain the multilegged behavior under the imposed control inputs once hydrodynamic parameters have been tuned. More importantly, the proposed controller can be translated to the terrestrial case with minimal modifications and extended with additional layers to obtain more complex behaviors.
@article{lincoln52081, volume = {38}, number = {3}, month = {November}, author = {Anna Astolfi and Giacomo Picardi and Marcello Calisti}, title = {Multilegged Underwater Running With Articulated Legs}, publisher = {IEEE}, year = {2021}, journal = {IEEE Transactions on Robotics}, doi = {10.1109/TRO.2021.3118204}, pages = {1841--1855}, url = {https://eprints.lincoln.ac.uk/id/eprint/52081/}, abstract = {Drawing inspiration from the locomotion modalities of animals, legged robots demonstrated the potential to traverse irregular and unstructured environments. Successful approaches exploited single-leg templates, like the spring-loaded inverted pendulum (SLIP), as a reference for the control of multilegged machines. Nevertheless, the anchoring between the low-order model and the actual multilegged structure is still an open challenge. This article proposes a novel strategy to derive actuation inputs for a multilegged robot by expressing the control requirements in terms of jump height and forward speed (derived from the limit cycle). We found that these requirements could be associated with a specific maximum force, successively split on an arbitrary number of legs and their relative actuation sets. The proposed approach has been validated in multibody simulation and real-world experiments by employing the underwater hexapod robot SILVER2. Results show that locomotion performances of the low-order model are reflected by the simulated and actual robot, showing that the articulated-USLIP (a-USLIP) model can faithfully explain the multilegged behavior under the imposed control inputs once hydrodynamic parameters have been tuned. More importantly, the proposed controller can be translated to the terrestrial case with minimal modifications and extended with additional layers to obtain more complex behaviors.} }
- F. Camara, N. Bellotto, S. Cosar, D. Nathanael, M. Althoff, J. Wu, J. Ruenz, A. Dietrich, and C. Fox, “Pedestrian models for autonomous driving part I: low-level models, from sensing to tracking,” IEEE transactions on intelligent transport systems, vol. 22, iss. 10, p. 6131–6151, 2021. doi:10.1109/TITS.2020.3006768
[BibTeX] [Abstract] [Download PDF]
Autonomous vehicles (AVs) must share space with pedestrians, both in carriageway cases such as cars at pedestrian crossings and off-carriageway cases such as delivery vehicles navigating through crowds on pedestrianized high-streets. Unlike static obstacles, pedestrians are active agents with complex, interactive motions. Planning AV actions in the presence of pedestrians thus requires modelling of their probable future behaviour as well as detecting and tracking them. This narrative review article is Part I of a pair, together surveying the current technology stack involved in this process, organising recent research into a hierarchical taxonomy ranging from low-level image detection to high-level psychology models, from the perspective of an AV designer. This self-contained Part I covers the lower levels of this stack, from sensing, through detection and recognition, up to tracking of pedestrians. Technologies at these levels are found to be mature and available as foundations for use in high-level systems, such as behaviour modelling, prediction and interaction control.
@article{lincoln41705, volume = {22}, number = {10}, month = {October}, author = {Fanta Camara and Nicola Bellotto and Serhan Cosar and Dimitris Nathanael and Mathias Althoff and Jingyuan Wu and Johannes Ruenz and Andre Dietrich and Charles Fox}, title = {Pedestrian Models for Autonomous Driving Part I: Low-Level Models, from Sensing to Tracking}, publisher = {IEEE}, year = {2021}, journal = {IEEE Transactions on Intelligent Transport Systems}, doi = {10.1109/TITS.2020.3006768}, pages = {6131--6151}, url = {https://eprints.lincoln.ac.uk/id/eprint/41705/}, abstract = {Autonomous vehicles (AVs) must share space with pedestrians, both in carriageway cases such as cars at pedestrian crossings and off-carriageway cases such as delivery vehicles navigating through crowds on pedestrianized high-streets. Unlike static obstacles, pedestrians are active agents with complex, interactive motions. Planning AV actions in the presence of pedestrians thus requires modelling of their probable future behaviour as well as detecting and tracking them. This narrative review article is Part I of a pair, together surveying the current technology stack involved in this process, organising recent research into a hierarchical taxonomy ranging from low-level image detection to high-level psychology models, from the perspective of an AV designer. This self-contained Part I covers the lower levels of this stack, from sensing, through detection and recognition, up to tracking of pedestrians. Technologies at these levels are found to be mature and available as foundations for use in high-level systems, such as behaviour modelling, prediction and interaction control.} }
- H. Isakhani, S. Yue, C. Xiong, and W. Chen, “Aerodynamic analysis and optimization of gliding locust wing using Nash genetic algorithm,” AIAA journal, vol. 59, iss. 10, p. 4002–4013, 2021. doi:10.2514/1.J060298
[BibTeX] [Abstract] [Download PDF]
Natural fliers glide and minimize wing articulation to conserve energy for endured and long range flights. Elucidating the underlying physiology of such capability could potentially address numerous challenging problems in flight engineering. This study investigates the aerodynamic characteristics of an insect species called desert locust (Schistocerca gregaria) with extraordinary gliding skills at low Reynolds number. Here, locust tandem wings are subjected to a computational fluid dynamics (CFD) simulation using 2D and 3D Navier-Stokes equations revealing fore-hindwing interactions, and the influence of their corrugations on the aerodynamic performance. Furthermore, the obtained CFD results are mathematically parameterized using PARSEC method and optimized based on a novel fusion of Genetic Algorithms and Nash game theory to achieve Nash equilibrium being the optimized wings. It was concluded that the lift-drag (gliding) ratio of the optimized profiles was improved by at least 77% and 150% compared to the original wing and the published literature, respectively. Ultimately, the profiles are integrated and analyzed using 3D CFD simulations that demonstrated a 14% performance improvement validating the proposed wing models for further fabrication and rapid prototyping presented in the future study.
@article{lincoln47016, volume = {59}, number = {10}, month = {October}, author = {Hamid Isakhani and Shigang Yue and Caihua Xiong and Wenbin Chen}, title = {Aerodynamic Analysis and Optimization of Gliding Locust Wing Using Nash Genetic Algorithm}, publisher = {Aerospace Research Central}, year = {2021}, journal = {AIAA Journal}, doi = {10.2514/1.J060298}, pages = {4002--4013}, url = {https://eprints.lincoln.ac.uk/id/eprint/47016/}, abstract = {Natural fliers glide and minimize wing articulation to conserve energy for endured and long range flights. Elucidating the underlying physiology of such capability could potentially address numerous challenging problems in flight engineering. This study investigates the aerodynamic characteristics of an insect species called desert locust (Schistocerca gregaria) with extraordinary gliding skills at low Reynolds number. Here, locust tandem wings are subjected to a computational fluid dynamics (CFD) simulation using 2D and 3D Navier-Stokes equations revealing fore-hindwing interactions, and the influence of their corrugations on the aerodynamic performance. Furthermore, the obtained CFD results are mathematically parameterized using PARSEC method and optimized based on a novel fusion of Genetic Algorithms and Nash game theory to achieve Nash equilibrium being the optimized wings. It was concluded that the lift-drag (gliding) ratio of the optimized profiles was improved by at least 77\% and 150\% compared to the original wing and the published literature, respectively. Ultimately, the profiles are integrated and analyzed using 3D CFD simulations that demonstrated a 14\% performance improvement validating the proposed wing models for further fabrication and rapid prototyping presented in the future study.} }
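The Nash-GA fusion above can be illustrated with a toy two-player alternating optimisation: each player evolves its own variable against the other's current best until neither can improve, i.e. a Nash equilibrium. The objective below is a stand-in, not the paper's PARSEC/CFD pipeline.

```python
# Hedged toy sketch of a Nash genetic algorithm: two players (think
# fore- and hindwing parameters) alternately evolve their own variable
# against the opponent's current best. Mutation-only "GA" for brevity.
import random

def payoffs(x, y):
    # stand-in coupled objectives; the unique Nash equilibrium is (0, 0)
    return -(x - 0.3 * y) ** 2, -(y - 0.5 * x) ** 2

def evolve(best, partner, player, gens=50, pop=20):
    for _ in range(gens):
        cands = [best + random.gauss(0, 0.1) for _ in range(pop)] + [best]
        best = max(cands, key=lambda c: payoffs(c, partner)[0] if player == 0
                                        else payoffs(partner, c)[1])
    return best

x, y = 1.0, 1.0
for _ in range(10):          # alternate until (near) equilibrium
    x = evolve(x, y, player=0)
    y = evolve(y, x, player=1)
print(round(x, 3), round(y, 3))  # both approach 0.0
```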
- R. Polvara, F. D. Duchetto, G. Neumann, and M. Hanheide, “Navigate-and-seek: a robotics framework for people localization in agricultural environments,” IEEE robotics and automation letters, vol. 6, iss. 4, p. 6577–6584, 2021. doi:10.1109/LRA.2021.3094557
[BibTeX] [Abstract] [Download PDF]
The agricultural domain offers a working environment where many human laborers are nowadays employed to maintain or harvest crops, with huge potential for productivity gains through the introduction of robotic automation. Detecting and localizing humans reliably and accurately in such an environment, however, is a prerequisite to many services offered by fleets of mobile robots collaborating with human workers. Consequently, in this paper, we expand on the concept of a topological particle filter (TPF) to accurately and individually localize and track workers in a farm environment, integrating information from heterogeneous sensors and combining local active sensing (exploiting a robot's onboard sensing employing a Next-Best-Sense planning approach) and global localization (using affordable IoT GNSS devices). We validate the proposed approach in topologies created for the deployment of robotics fleets to support fruit pickers in a real farm environment. By combining multi-sensor observations on the topological level complemented by active perception through the NBS approach, we show that we can improve the accuracy of picker localization in comparison to prior work.
@article{lincoln45627, volume = {6}, number = {4}, month = {October}, author = {Riccardo Polvara and Francesco Del Duchetto and Gerhard Neumann and Marc Hanheide}, title = {Navigate-and-Seek: a Robotics Framework for People Localization in Agricultural Environments}, publisher = {IEEE}, year = {2021}, journal = {IEEE Robotics and Automation Letters}, doi = {10.1109/LRA.2021.3094557}, pages = {6577--6584}, url = {https://eprints.lincoln.ac.uk/id/eprint/45627/}, abstract = {The agricultural domain offers a working environment where many human laborers are nowadays employed to maintain or harvest crops, with huge potential for productivity gains through the introduction of robotic automation. Detecting and localizing humans reliably and accurately in such an environment, however, is a prerequisite to many services offered by fleets of mobile robots collaborating with human workers. Consequently, in this paper, we expand on the concept of a topological particle filter (TPF) to accurately and individually localize and track workers in a farm environment, integrating information from heterogeneous sensors and combining local active sensing (exploiting a robot's onboard sensing employing a Next-Best-Sense planning approach) and global localization (using affordable IoT GNSS devices). We validate the proposed approach in topologies created for the deployment of robotics fleets to support fruit pickers in a real farm environment. By combining multi-sensor observations on the topological level complemented by active perception through the NBS approach, we show that we can improve the accuracy of picker localization in comparison to prior work.} }
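A minimal sketch of the topological-particle-filter idea above: particles live on the nodes of a topological map, a motion model moves them along edges, and noisy observations (e.g. a GNSS fix snapped to a node) reweight and resample them. The graph, motion model and likelihoods are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of a particle filter over a topological map.
import random
from collections import Counter

edges = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
particles = [random.choice(list(edges)) for _ in range(500)]

def predict(node):
    # picker either stays put or moves to an adjacent node
    return node if random.random() < 0.5 else random.choice(edges[node])

def likelihood(node, observed):
    # crude sensor model: observations usually hit the true node
    return 0.8 if node == observed else 0.1

for observed in ["B", "C", "C"]:                 # noisy observation stream
    particles = [predict(p) for p in particles]  # motion update
    weights = [likelihood(p, observed) for p in particles]
    particles = random.choices(particles, weights=weights, k=len(particles))
    print(Counter(particles).most_common(1))     # current best node estimate
```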
- S. Maleki, S. Maleki, and N. R. Jennings, “Unsupervised anomaly detection with LSTM autoencoders using statistical data-filtering,” Applied soft computing, vol. 108, p. 107443, 2021. doi:10.1016/j.asoc.2021.107443
[BibTeX] [Abstract] [Download PDF]
To address one of the most challenging industry problems, we develop an enhanced training algorithm for anomaly detection in unlabelled sequential data such as time-series. We propose the outputs of a well-designed system are drawn from an unknown probability distribution, U, in normal conditions. We introduce a probability criterion based on the classical central limit theorem that allows evaluation of the likelihood that a data-point is drawn from U. This enables the labelling of the data on the fly. Non-anomalous data is passed to train a deep Long Short-Term Memory (LSTM) autoencoder that distinguishes anomalies when the reconstruction error exceeds a threshold. To illustrate our algorithm's efficacy, we consider two real industrial case studies where gradually-developing and abrupt anomalies occur. Moreover, we compare our algorithm's performance with four of the recent and widely used algorithms in the domain. We show that our algorithm achieves considerably better results in that it timely detects anomalies while others either miss or lag in doing so.
@article{lincoln44910, volume = {108}, month = {September}, author = {Sepehr Maleki and Sasan Maleki and Nicholas R. Jennings}, title = {Unsupervised anomaly detection with LSTM autoencoders using statistical data-filtering}, publisher = {Elsevier}, year = {2021}, journal = {Applied Soft Computing}, doi = {10.1016/j.asoc.2021.107443}, pages = {107443}, url = {https://eprints.lincoln.ac.uk/id/eprint/44910/}, abstract = {To address one of the most challenging industry problems, we develop an enhanced training algorithm for anomaly detection in unlabelled sequential data such as time-series. We propose the outputs of a well-designed system are drawn from an unknown probability distribution, U, in normal conditions. We introduce a probability criterion based on the classical central limit theorem that allows evaluation of the likelihood that a data-point is drawn from U. This enables the labelling of the data on the fly. Non-anomalous data is passed to train a deep Long Short-Term Memory (LSTM) autoencoder that distinguishes anomalies when the reconstruction error exceeds a threshold. To illustrate our algorithm's efficacy, we consider two real industrial case studies where gradually-developing and abrupt anomalies occur. Moreover, we compare our algorithm's performance with four of the recent and widely used algorithms in the domain. We show that our algorithm achieves considerably better results in that it timely detects anomalies while others either miss or lag in doing so.} }
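The two-stage scheme above can be sketched as follows: a CLT-based filter keeps windows whose sample mean is statistically plausible (likely normal), and an LSTM autoencoder trained on those windows flags anomalies by reconstruction error. Layer sizes, the z-score cut-off and the error threshold are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch: CLT-based data filtering + LSTM autoencoder anomaly
# detection via a reconstruction-error threshold.
import numpy as np
from tensorflow import keras  # assumes TensorFlow is available

def clt_filter(windows, z_cut=3.0):
    """Keep windows whose mean is plausible under the CLT (likely normal)."""
    means = windows.reshape(len(windows), -1).mean(axis=1)
    z = (means - means.mean()) / (means.std() + 1e-9)
    return windows[np.abs(z) < z_cut]

T, F = 30, 1                                       # window length, features
x = np.random.randn(1000, T, F).astype("float32")  # stand-in sensor windows
train = clt_filter(x)                              # labelled "normal" on the fly

model = keras.Sequential([
    keras.layers.LSTM(16, input_shape=(T, F)),            # encoder
    keras.layers.RepeatVector(T),
    keras.layers.LSTM(16, return_sequences=True),         # decoder
    keras.layers.TimeDistributed(keras.layers.Dense(F)),
])
model.compile(optimizer="adam", loss="mse")
model.fit(train, train, epochs=2, verbose=0)

err = np.mean((model.predict(x, verbose=0) - x) ** 2, axis=(1, 2))
threshold = err.mean() + 3 * err.std()   # anomalous if error exceeds this
print(int((err > threshold).sum()), "windows flagged")
```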
- M. M. N. Abid, T. Zia, M. Ghafoor, and D. Windridge, “Multi-view convolutional recurrent neural networks for lung cancer nodule identification,” Neurocomputing, vol. 453, p. 299–311, 2021. doi:10.1016/j.neucom.2020.06.144
[BibTeX] [Abstract] [Download PDF]
Screening via low-dose Computer Tomography (CT) has been shown to reduce lung cancer mortality rates by at least 20%. However, the assessment of large numbers of CT scans by radiologists is cost intensive, and potentially produces varying and inconsistent results for differing radiologists (and also for temporally-separated assessments by the same radiologist). To overcome these challenges, computer aided diagnosis systems based on deep learning methods have proved effective in automatic detection and classification of lung cancer. Latterly, interest has focused on the full utilization of the 3D information in CT scans using 3D-CNNs and related approaches. However, such approaches do not intrinsically correlate size and shape information between slices. In this work, an innovative approach Multi-view Convolutional Recurrent Neural Network (MV-CRecNet) is proposed that exploits shape, size and cross-slice variations while learning to identify lung cancer nodules from CT scans. The multiple-views that are passed to the model ensure better generalization and the learning of robust features. We evaluate the proposed MV-CRecNet model on the reference Lung Image Database Consortium and Image Database Resource Initiative and Early Lung Cancer Action Program datasets; six evaluation metrics are applied to eleven comparison models for testing. Results demonstrate that proposed methodology outperforms all of the models against all of the evaluation metrics.
@article{lincoln47918, volume = {453}, month = {September}, author = {Mian Muhammad Naeem Abid and Tehseen Zia and Mubeen Ghafoor and David Windridge}, title = {Multi-view Convolutional Recurrent Neural Networks for Lung Cancer Nodule Identification}, publisher = {Elsevier}, year = {2021}, journal = {Neurocomputing}, doi = {10.1016/j.neucom.2020.06.144}, pages = {299--311}, url = {https://eprints.lincoln.ac.uk/id/eprint/47918/}, abstract = {Screening via low-dose Computer Tomography (CT) has been shown to reduce lung cancer mortality rates by at least 20\%. However, the assessment of large numbers of CT scans by radiologists is cost intensive, and potentially produces varying and inconsistent results for differing radiologists (and also for temporally-separated assessments by the same radiologist). To overcome these challenges, computer aided diagnosis systems based on deep learning methods have proved effective in automatic detection and classification of lung cancer. Latterly, interest has focused on the full utilization of the 3D information in CT scans using 3D-CNNs and related approaches. However, such approaches do not intrinsically correlate size and shape information between slices. In this work, an innovative approach Multi-view Convolutional Recurrent Neural Network (MV-CRecNet) is proposed that exploits shape, size and cross-slice variations while learning to identify lung cancer nodules from CT scans. The multiple-views that are passed to the model ensure better generalization and the learning of robust features. We evaluate the proposed MV-CRecNet model on the reference Lung Image Database Consortium and Image Database Resource Initiative and Early Lung Cancer Action Program datasets; six evaluation metrics are applied to eleven comparison models for testing. Results demonstrate that proposed methodology outperforms all of the models against all of the evaluation metrics.} }
- M. Ghafoor, S. A. Tariq, T. Zia, I. A. Taj, A. Abbas, A. Hassan, and A. Y. Zomaya, “Fingerprint identification with shallow multifeature view classifier,” IEEE transactions on cybernetics, vol. 51, iss. 9, p. 4515–4527, 2021. doi:10.1109/TCYB.2019.2957188
[BibTeX] [Abstract] [Download PDF]
This article presents an efficient fingerprint identification system that implements an initial classification for search-space reduction followed by minutiae neighbor-based feature encoding and matching. The current state-of-the-art fingerprint classification methods use a deep convolutional neural network (DCNN) to assign confidence for the classification prediction, and based on this prediction, the input fingerprint is matched with only the subset of the database that belongs to the predicted class. It can be observed for the DCNNs that as the architectures deepen, the farthest layers of the network learn more abstract information from the input images that result in higher prediction accuracies. However, the downside is that the DCNNs are data hungry and require lots of annotated (labeled) data to learn generalized network parameters for deeper layers. In this article, a shallow multifeature view CNN (SMV-CNN) fingerprint classifier is proposed that extracts: 1) fine-grained features from the input image and 2) abstract features from explicitly derived representations obtained from the input image. The multifeature views are fed to a fully connected neural network (NN) to compute a global classification prediction. The classification results show that the SMV-CNN demonstrated an improvement of 2.8% when compared to baseline CNN consisting of a single grayscale view on an open-source database. Moreover, in comparison with the state-of-the-art residual network (ResNet-50) image classification model, the proposed method performs comparably while being less complex and more efficient during training. The result of classification-based fingerprint identification has shown that the search space is reduced by over 50% without degradation of identification accuracies.
@article{lincoln43823, volume = {51}, number = {9}, month = {September}, author = {Mubeen Ghafoor and Syed Ali Tariq and Tehseen Zia and Imtiaz Ahmad Taj and Assad Abbas and Ali Hassan and Albert Y. Zomaya}, title = {Fingerprint Identification With Shallow Multifeature View Classifier}, publisher = {IEEE Transactions on Cybernetics}, year = {2021}, journal = {IEEE Transactions on Cybernetics}, doi = {10.1109/TCYB.2019.2957188}, pages = {4515--4527}, url = {https://eprints.lincoln.ac.uk/id/eprint/43823/}, abstract = {This article presents an efficient fingerprint identification system that implements an initial classification for search-space reduction followed by minutiae neighbor-based feature encoding and matching. The current state-of-the-art fingerprint classification methods use a deep convolutional neural network (DCNN) to assign confidence for the classification prediction, and based on this prediction, the input fingerprint is matched with only the subset of the database that belongs to the predicted class. It can be observed for the DCNNs that as the architectures deepen, the farthest layers of the network learn more abstract information from the input images that result in higher prediction accuracies. However, the downside is that the DCNNs are data hungry and require lots of annotated (labeled) data to learn generalized network parameters for deeper layers. In this article, a shallow multifeature view CNN (SMV-CNN) fingerprint classifier is proposed that extracts: 1) fine-grained features from the input image and 2) abstract features from explicitly derived representations obtained from the input image. The multifeature views are fed to a fully connected neural network (NN) to compute a global classification prediction. The classification results show that the SMV-CNN demonstrated an improvement of 2.8\% when compared to baseline CNN consisting of a single grayscale view on an open-source database. Moreover, in comparison with the state-of-the-art residual network (ResNet-50) image classification model, the proposed method performs comparably while being less complex and more efficient during training. The result of classification-based fingerprint identification has shown that the search space is reduced by over 50\% without degradation of identification accuracies.} }
- J. Gao, J. C. Westergaard, and E. Alexandersson, “Computer vision and less complex image analyses to monitor potato traits in fields,” in Solanum tuberosum, D. Dobnik, K. Gruden, Ž. Ramšak, and A. Coll, Eds., New York: Springer, 2021, p. 273–299. doi:10.1007/978-1-0716-1609-3_13
[BibTeX] [Abstract] [Download PDF]
Field phenotyping of crops has recently gained considerable attention leading to the development of new protocols for recording plant traits of interest. Phenotyping in field conditions can be performed by various cameras, sensors and imaging platforms. In this chapter, practical aspects as well as advantages and disadvantages of above-ground phenotyping platforms are highlighted with a focus on drone-based imaging and relevant image analysis for field conditions. It includes useful planning tips for experimental design as well as protocols, sources, and tools for image acquisition, pre-processing, feature extraction and machine learning highlighting the possibilities with computer vision. Several open and free resources are given to speed up data analysis for biologists. This chapter targets professionals and researchers with limited computational background performing or wishing to perform phenotyping of field crops, especially with a drone-based platform. The advice and methods described focus on potato but can mostly be used for field phenotyping of any crops.
@incollection{lincoln46316, number = {2354}, month = {August}, author = {Junfeng Gao and Jesper Cairo Westergaard and Erik Alexandersson}, series = {Methods in Molecular Biology}, booktitle = {Solanum tuberosum}, editor = {David Dobnik and Kristina Gruden and {\v Z}iva Ram{\v s}ak and Anna Coll}, title = {Computer Vision and Less Complex Image Analyses to Monitor Potato Traits in Fields}, address = {New York}, publisher = {Springer}, year = {2021}, doi = {10.1007/978-1-0716-1609-3\_13}, pages = {273--299}, url = {https://eprints.lincoln.ac.uk/id/eprint/46316/}, abstract = {Field phenotyping of crops has recently gained considerable attention leading to the development of new protocols for recording plant traits of interest. Phenotyping in field conditions can be performed by various cameras, sensors and imaging platforms. In this chapter, practical aspects as well as advantages and disadvantages of above-ground phenotyping platforms are highlighted with a focus on drone-based imaging and relevant image analysis for field conditions. It includes useful planning tips for experimental design as well as protocols, sources, and tools for image acquisition, pre-processing, feature extraction and machine learning highlighting the possibilities with computer vision. Several open and free resources are given to speed up data analysis for biologists. This chapter targets professionals and researchers with limited computational background performing or wishing to perform phenotyping of field crops, especially with a drone-based platform. The advice and methods described focus on potato but can mostly be used for field phenotyping of any crops.} }
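The chapter points readers towards simple, accessible image analyses. As a flavour of what "less complex" can mean in practice, the sketch below computes the Excess Green (ExG) vegetation index, a common first step for separating canopy from soil in drone imagery; the function names and the 0.1 threshold are illustrative assumptions, not taken from the chapter.

```python
import numpy as np

def excess_green(rgb: np.ndarray) -> np.ndarray:
    """Excess Green (ExG) index for an H x W x 3 RGB image.

    Vegetation pixels tend towards higher ExG values.
    """
    img = rgb.astype(np.float64)
    total = img.sum(axis=2) + 1e-9                      # avoid division by zero
    r, g, b = (img[..., i] / total for i in range(3))   # chromatic coordinates
    return 2.0 * g - r - b

def vegetation_mask(rgb: np.ndarray, threshold: float = 0.1) -> np.ndarray:
    """Binary crop/soil segmentation by thresholding ExG."""
    return excess_green(rgb) > threshold
```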
- E. Black, N. Maudet, and S. Parsons, “Argumentation-based dialogue,” in Handbook of formal argumentation, volume 2, D. Gabbay, M. Giacomin, G. R. Simari, and M. Thimm, Eds., College publications, 2021.
[BibTeX] [Abstract] [Download PDF]
Dialogue is fundamental to argumentation, providing a dialectical basis for establishing which arguments are acceptable. Argumentation can also be used as the basis for dialogue. In such “argumentation-based” dialogues, participants take part in an exchange of arguments, and the mechanisms of argumentation are used to establish what participants take to be acceptable at the end of the exchange. This chapter considers such dialogues, discussing the elements that are required in order to carry out argumentation-based dialogues, giving examples, and discussing open issues.
@incollection{lincoln48566, booktitle = {Handbook of Formal Argumentation, Volume 2}, editor = {Dov Gabbay and Massimiliano Giacomin and Guillermo R. Simari and Matthias Thimm}, month = {August}, title = {Argumentation-based Dialogue}, author = {Elizabeth Black and Nicolas Maudet and Simon Parsons}, publisher = {College Publications}, year = {2021}, url = {https://eprints.lincoln.ac.uk/id/eprint/48566/}, abstract = {Dialogue is fundamental to argumentation, providing a dialectical basis for establishing which arguments are acceptable. Argumentation can also be used as the basis for dialogue. In such ``argumentation-based'' dialogues, participants take part in an exchange of arguments, and the mechanisms of argumentation are used to establish what participants take to be acceptable at the end of the exchange. This chapter considers such dialogues, discussing the elements that are required in order to carry out argumentation-based dialogues, giving examples, and discussing open issues.} }
- A. Bikakis, A. Cohen, W. Dvorak, G. Flouris, and S. Parsons, “Joint attacks and accrual in argumentation frameworks,” in Handbook of formal argumentation, volume 2, D. Gabbay, M. Giacomin, G. R. Simari, and M. Thimm, Eds., College publications, 2021.
[BibTeX] [Abstract] [Download PDF]
While modelling arguments, it is often useful to represent “joint attacks”, i.e., cases where multiple arguments jointly attack another (note that this is different from the case where multiple arguments attack another in isolation). Based on this remark, the notion of joint attacks has been proposed as a useful extension of classical Abstract Argumentation Frameworks, and has been shown to constitute a genuine extension in terms of expressive power. In this chapter, we review various works considering the notion of joint attacks from various perspectives, including abstract and structured frameworks. Moreover, we present results detailing the relation among frameworks with joint attacks and classical argumentation frameworks, computational aspects, and applications of joint attacks. Last but not least, we propose a roadmap for future research on the subject, identifying gaps in current research and important research directions.
@incollection{lincoln48565, booktitle = {Handbook of Formal Argumentation, Volume 2}, editor = {Dov Gabbay and Massimiliano Giacomin and Guillermo R. Simari and Matthias Thimm}, month = {August}, title = {Joint Attacks and Accrual in Argumentation Frameworks}, author = {Antonis Bikakis and Andrea Cohen and Wolfgang Dvorak and Giorgos Flouris and Simon Parsons}, publisher = {College Publications}, year = {2021}, url = {https://eprints.lincoln.ac.uk/id/eprint/48565/}, abstract = {While modelling arguments, it is often useful to represent ``joint attacks'', i.e., cases where multiple arguments jointly attack another (note that this is different from the case where multiple arguments attack another in isolation). Based on this remark, the notion of joint attacks has been proposed as a useful extension of classical Abstract Argumentation Frameworks, and has been shown to constitute a genuine extension in terms of expressive power. In this chapter, we review various works considering the notion of joint attacks from various perspectives, including abstract and structured frameworks. Moreover, we present results detailing the relation among frameworks with joint attacks and classical argumentation frameworks, computational aspects, and applications of joint attacks. Last but not least, we propose a roadmap for future research on the subject, identifying gaps in current research and important research directions.} }
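To make the notion concrete: in a framework with joint attacks, a set S attacks an argument b when some attacking set aimed at b lies wholly inside S, and S defends a when it counter-attacks every joint attack on a. The sketch below computes the grounded extension under this reading; the (attacker-set, target) encoding is an illustrative assumption rather than the chapter's notation.

```python
def grounded_extension(arguments, attacks):
    """Grounded extension of a framework with joint attacks.

    `attacks` is a set of (frozenset_of_attackers, target) pairs; a
    classical attack a -> b is just (frozenset({"a"}), "b").
    """
    def set_attacks(s, b):
        # S attacks b iff some joint attack on b is carried entirely by S.
        return any(group <= s for group, target in attacks if target == b)

    def defended(s, a):
        # Every joint attack on a must have a member counter-attacked by S.
        return all(any(set_attacks(s, b) for b in group)
                   for group, target in attacks if target == a)

    s = frozenset()
    while True:                      # iterate the characteristic function
        nxt = frozenset(a for a in arguments if defended(s, a))
        if nxt == s:
            return s
        s = nxt

# a and b jointly attack c; neither is attacked, so grounded = {a, b}.
print(grounded_extension({"a", "b", "c"}, {(frozenset({"a", "b"}), "c")}))
```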
- I. Gould, J. D. Waegemaeker, D. Tzemi, I. Wright, S. Pearson, E. Ruto, L. Karrasch, L. S. Christensen, H. Aronsson, S. Eich-Greatorex, G. Bosworth, and P. Vellinga, “Salinization threats to agriculture across the north sea region,” in Future of sustainable agriculture in saline environments, Taylor and francis, 2021, p. 71–92. doi:10.1201/9781003112327-5
[BibTeX] [Abstract] [Download PDF]
Salinization represents a global threat to agricultural productivity and human livelihoods. Historically, much saline research has focussed on arid or semi-arid systems. The North Sea region of Europe has seen very little attention in salinity literature; however, under future climate predictions, this is likely to change. In this review, we outline the mechanisms of salinization across the North Sea region. These include the intrusion of saline groundwater, coastal flooding, irrigation and airborne salinization. The extent of each degradation process is explored for the United Kingdom, Belgium, the Netherlands, Germany, Denmark, Sweden and Norway. The potential threat of salinization across the North Sea varies in a complex and diverse manner. However, we find an overall lack of data, both of water monitoring and soil sampling, on salinity in the region. For agricultural systems in the region to adapt against future salinization risk, more extensive mapping and monitoring of salinization need to be conducted, along with the development of appropriate land management practices.
@incollection{lincoln45934, booktitle = {Future of Sustainable Agriculture in Saline Environments}, title = {Salinization Threats to Agriculture across the North Sea Region}, author = {Iain Gould and Jeroen De Waegemaeker and Domna Tzemi and Isobel Wright and Simon Pearson and Eric Ruto and Leena Karrasch and Laurids Siig Christensen and Henrik Aronsson and Susanne Eich-Greatorex and Gary Bosworth and Pier Vellinga}, publisher = {Taylor and Francis}, year = {2021}, pages = {71--92}, doi = {10.1201/9781003112327-5}, url = {https://eprints.lincoln.ac.uk/id/eprint/45934/}, abstract = {Salinization represents a global threat to agricultural productivity and human livelihoods. Historically, much saline research has focussed on arid or semi-arid systems. The North Sea region of Europe has seen very little attention in salinity literature; however, under future climate predictions, this is likely to change. In this review, we outline the mechanisms of salinization across the North Sea region. These include the intrusion of saline groundwater, coastal flooding, irrigation and airborne salinization. The extent of each degradation process is explored for the United Kingdom, Belgium, the Netherlands, Germany, Denmark, Sweden and Norway. The potential threat of salinization across the North Sea varies in a complex and diverse manner. However, we find an overall lack of data, both of water monitoring and soil sampling, on salinity in the region. For agricultural systems in the region to adapt against future salinization risk, more extensive mapping and monitoring of salinization need to be conducted, along with the development of appropriate land management practices.} }
- H. Harman and E. Sklar, “Auction-based task allocation mechanisms for managing fruit harvesting tasks,” in Ukras21, 2021, p. 47–48. doi:10.31256/Dg2Zp9Q
[BibTeX] [Abstract] [Download PDF]
Multi-robot task allocation mechanisms are designed to distribute a set of activities fairly amongst a set of robots. Frequently, this can be framed as a multi-criteria optimisation problem, for example minimising cost while maximising rewards. In soft fruit farms, tasks, such as picking ripe fruit at harvest time, are assigned to human labourers. The work presented here explores the application of multi-robot task allocation mechanisms to the complex problem of managing a heterogeneous workforce to undertake activities associated with harvesting soft fruit.
@inproceedings{lincoln45349, booktitle = {UKRAS21}, title = {Auction-based Task Allocation Mechanisms for Managing Fruit Harvesting Tasks}, author = {Helen Harman and Elizabeth Sklar}, year = {2021}, pages = {47--48}, doi = {10.31256/Dg2Zp9Q}, url = {https://eprints.lincoln.ac.uk/id/eprint/45349/}, abstract = {Multi-robot task allocation mechanisms are designed to distribute a set of activities fairly amongst a set of robots. Frequently, this can be framed as a multi-criteria optimisation problem, for example minimising cost while maximising rewards. In soft fruit farms, tasks, such as picking ripe fruit at harvest time, are assigned to human labourers. The work presented here explores the application of multi-robot task allocation mechanisms to the complex problem of managing a heterogeneous workforce to undertake activities associated with harvesting soft fruit.} }
- I. Hroob, R. Polvara, S. M. Mellado, G. Cielniak, and M. Hanheide, “Benchmark of visual and 3d lidar slam systems in simulation environment for vineyards,” in Towards autonomous robotic systems conference (taros), 2021.
[BibTeX] [Abstract] [Download PDF]
In this work, we present a comparative analysis of the trajectories estimated from various Simultaneous Localization and Mapping (SLAM) systems in a simulation environment for vineyards. Vineyard environment is challenging for SLAM methods, due to visual appearance changes over time, uneven terrain, and repeated visual patterns. For this reason, we created a simulation environment specifically for vineyards to help studying SLAM systems in such a challenging environment. We evaluated the following SLAM systems: LIO-SAM, StaticMapping, ORB-SLAM2, and RTAB-MAP in four different scenarios. The mobile robot used in this study is equipped with 2D and 3D lidars, IMU, and RGB-D camera (Kinect v2). The results show good and encouraging performance of RTAB-MAP in such an environment.
@inproceedings{lincoln45642, booktitle = {Towards Autonomous Robotic Systems Conference (TAROS)}, title = {Benchmark of visual and 3D lidar SLAM systems in simulation environment for vineyards}, author = {Ibrahim Hroob and Riccardo Polvara and Sergio Molina Mellado and Grzegorz Cielniak and Marc Hanheide}, year = {2021}, journal = {The 22nd Towards Autonomous Robotic Systems Conference}, url = {https://eprints.lincoln.ac.uk/id/eprint/45642/}, abstract = {In this work, we present a comparative analysis of the trajectories estimated from various Simultaneous Localization and Mapping (SLAM) systems in a simulation environment for vineyards. Vineyard environment is challenging for SLAM methods, due to visual appearance changes over time, uneven terrain, and repeated visual patterns. For this reason, we created a simulation environment specifically for vineyards to help studying SLAM systems in such a challenging environment. We evaluated the following SLAM systems: LIO-SAM, StaticMapping, ORB-SLAM2, and RTAB-MAP in four different scenarios. The mobile robot used in this study is equipped with 2D and 3D lidars, IMU, and RGB-D camera (Kinect v2). The results show good and encouraging performance of RTAB-MAP in such an environment.} }
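A common way to score estimated trajectories in a benchmark like this is the absolute trajectory error (ATE): rigidly align the estimate to ground truth, then take the RMSE of the residuals. The sketch below is a generic implementation (Kabsch/Umeyama alignment without scale); it is standard evaluation machinery, not code from the paper.

```python
import numpy as np

def absolute_trajectory_error(gt: np.ndarray, est: np.ndarray) -> float:
    """RMSE between ground-truth and estimated positions (both N x 3,
    timestamps already associated) after a best-fit rigid alignment."""
    mu_g, mu_e = gt.mean(axis=0), est.mean(axis=0)
    H = (est - mu_e).T @ (gt - mu_g)           # 3 x 3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                         # rotation, reflection-safe
    aligned = est @ R.T + (mu_g - R @ mu_e)    # apply rotation + translation
    return float(np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1))))
```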
- A. Mohtasib, G. Neumann, and H. Cuayahuitl, “A study on dense and sparse (visual) rewards in robot policy learning,” in Towards autonomous robotic systems conference (taros), 2021.
[BibTeX] [Abstract] [Download PDF]
Deep Reinforcement Learning (DRL) is a promising approach for teaching robots new behaviour. However, one of its main limitations is the need for carefully hand-coded reward signals by an expert. We argue that it is crucial to automate the reward learning process so that new skills can be taught to robots by their users. To address such automation, we consider task success classifiers using visual observations to estimate the rewards in terms of task success. In this work, we study the performance of multiple state-of-the-art deep reinforcement learning algorithms under different types of reward: Dense, Sparse, Visual Dense, and Visual Sparse rewards. Our experiments in various simulation tasks (Pendulum, Reacher, Pusher, and Fetch Reach) show that while DRL agents can learn successful behaviours using visual rewards when the goal targets are distinguishable, their performance may decrease if the task goal is not clearly visible. Our results also show that visual dense rewards are more successful than visual sparse rewards and that there is no single best algorithm for all tasks.
@inproceedings{lincoln45983, booktitle = {Towards Autonomous Robotic Systems Conference (TAROS)}, month = {September}, title = {A Study on Dense and Sparse (Visual) Rewards in Robot Policy Learning}, author = {Abdalkarim Mohtasib and Gerhard Neumann and Heriberto Cuayahuitl}, publisher = {University of Lincoln}, year = {2021}, url = {https://eprints.lincoln.ac.uk/id/eprint/45983/}, abstract = {Deep Reinforcement Learning (DRL) is a promising approach for teaching robots new behaviour. However, one of its main limitations is the need for carefully hand-coded reward signals by an expert. We argue that it is crucial to automate the reward learning process so that new skills can be taught to robots by their users. To address such automation, we consider task success classifiers using visual observations to estimate the rewards in terms of task success. In this work, we study the performance of multiple state-of-the-art deep reinforcement learning algorithms under different types of reward: Dense, Sparse, Visual Dense, and Visual Sparse rewards. Our experiments in various simulation tasks (Pendulum, Reacher, Pusher, and Fetch Reach) show that while DRL agents can learn successful behaviours using visual rewards when the goal targets are distinguishable, their performance may decrease if the task goal is not clearly visible. Our results also show that visual dense rewards are more successful than visual sparse rewards and that there is no single best algorithm for all tasks.} }
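The four reward types compared in the paper can be pictured simply: dense rewards grade every step, sparse rewards only flag success, and the visual variants replace ground-truth state with a success score predicted from camera images. A minimal sketch, with the tolerance and the classifier interface as assumptions:

```python
import numpy as np

def sparse_reward(achieved: np.ndarray, goal: np.ndarray, tol: float = 0.05) -> float:
    """1 only when the goal is reached, 0 otherwise."""
    return float(np.linalg.norm(achieved - goal) < tol)

def dense_reward(achieved: np.ndarray, goal: np.ndarray) -> float:
    """Negative distance to the goal: informative at every step."""
    return -float(np.linalg.norm(achieved - goal))

def visual_reward(frame: np.ndarray, success_classifier) -> float:
    """Visual variant: a task-success classifier scores the camera image
    and its predicted success probability is used as the reward signal."""
    return float(success_classifier(frame))
```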
- Z. Maamar and M. Al-Khafajiy, “Cloud-edge coupling to mitigate execution failures,” in Proceedings of the 36th annual acm symposium on applied computing, 2021, p. 711–718. doi:10.1145/3412841.3442334
[BibTeX] [Abstract] [Download PDF]
This paper examines the doability of cloud-edge coupling to mitigate execution failures and hence, achieve business process continuity. These failures are the result of disruptions that impact the cycles of consuming cloud resources and/or edge resources. Cloud/Edge resources are subject to restrictions like limitedness and non-shareability that increase the complexity of resuming execution operations to the extent that some of these operations could be halted, which means failures. To mitigate failures, cloud and edge resources are synchronized using messages allowing proper consumption of these resources. A Microsoft Azure-based testbed simulating cloud-edge coupling is also presented in the paper.
@inproceedings{lincoln47575, month = {March}, author = {Zakaria Maamar and Mohammed Al-Khafajiy}, booktitle = {Proceedings of the 36th Annual ACM Symposium on Applied Computing}, title = {Cloud-edge coupling to mitigate execution failures}, publisher = {Association for Computing Machinery}, doi = {10.1145/3412841.3442334}, pages = {711--718}, year = {2021}, url = {https://eprints.lincoln.ac.uk/id/eprint/47575/}, abstract = {This paper examines the doability of cloud-edge coupling to mitigate execution failures and hence, achieve business process continuity. These failures are the result of disruptions that impact the cycles of consuming cloud resources and/or edge resources. Cloud/Edge resources are subject to restrictions like limitedness and non-shareability that increase the complexity of resuming execution operations to the extent that some of these operations could be halted, which means failures. To mitigate failures, cloud and edge resources are synchronized using messages allowing proper consumption of these resources. A Microsoft Azure-based testbed simulating cloud-edge coupling is also presented in the paper.} }
- L. Korir, A. Drake, M. Collison, C. C. Villa, E. Sklar, and S. Pearson, “Current and emergent economic impacts of covid-19 and brexit on uk fresh produce and horticultural businesses,” in The 94th annual conference of the agricultural economics society (aes), 2021. doi:10.22004/ag.econ.312068
[BibTeX] [Abstract] [Download PDF]
This paper describes a study designed to investigate the current and emergent impacts of Covid-19 and Brexit on UK horticultural businesses. Various characteristics of UK horticultural production, notably labour reliance and import dependence, make it an important sector for policymakers concerned to understand the effects of these disruptive events as we move from 2020 into 2021. The study design prioritised timeliness, using a rapid survey to gather information from a relatively small (n = 19) but indicative group of producers. The main novelty of the results is to suggest that a very substantial majority of producers either plan to scale back production in 2021 (47\%) or have been unable to make plans for 2021 because of uncertainty (37\%). The results also add to broader evidence that the sector has experienced profound labour supply challenges, with implications for labour cost and quality. The study discusses the implications of these insights from producers in terms of productivity and automation, as well as in terms of broader economic implications. Although automation is generally recognised as the long-term future for the industry (89\%), it appeared in the study as the second most referred short-term option (32\%) only after changes to labour schemes and policies (58\%). Currently, automation plays a limited role in contributing to the UK’s horticultural workforce shortage due to economic and socio-political uncertainties. The conclusion highlights policy recommendations and future investigative intentions, as well as suggesting methodological and other discussion points for the research community.
@inproceedings{lincoln46582, booktitle = {The 94th Annual Conference of the Agricultural Economics Society (AES)}, month = {January}, title = {Current and Emergent Economic Impacts of Covid-19 and Brexit on UK Fresh Produce and Horticultural Businesses}, author = {Lilian Korir and Archie Drake and Martin Collison and Carolina Camacho Villa and Elizabeth Sklar and Simon Pearson}, year = {2021}, doi = {10.22004/ag.econ.312068}, url = {https://eprints.lincoln.ac.uk/id/eprint/46582/}, abstract = {This paper describes a study designed to investigate the current and emergent impacts of Covid-19 and Brexit on UK horticultural businesses. Various characteristics of UK horticultural production, notably labour reliance and import dependence, make it an important sector for policymakers concerned to understand the effects of these disruptive events as we move from 2020 into 2021. The study design prioritised timeliness, using a rapid survey to gather information from a relatively small (n = 19) but indicative group of producers. The main novelty of the results is to suggest that a very substantial majority of producers either plan to scale back production in 2021 (47\%) or have been unable to make plans for 2021 because of uncertainty (37\%). The results also add to broader evidence that the sector has experienced profound labour supply challenges, with implications for labour cost and quality. The study discusses the implications of these insights from producers in terms of productivity and automation, as well as in terms of broader economic implications. Although automation is generally recognised as the long-term future for the industry (89\%), it appeared in the study as the second most referred short-term option (32\%) only after changes to labour schemes and policies (58\%). Currently, automation plays a limited role in contributing to the UK's horticultural workforce shortage due to economic and socio-political uncertainties. The conclusion highlights policy recommendations and future investigative intentions, as well as suggesting methodological and other discussion points for the research community.} }
- A. L. Zorrilla, I. M. Torres, and H. Cuayahuitl, “Audio embeddings help to learn better dialogue policies,” in Ieee automatic speech recognition and understanding, 2021.
[BibTeX] [Abstract] [Download PDF]
Neural transformer architectures have gained a lot of interest for text-based dialogue management in the last few years. They have shown high learning capabilities for open domain dialogue with huge amounts of data and also for domain adaptation in task-oriented setups. But the potential benefits of exploiting the users’ audio signal have rarely been explored in such frameworks. In this work, we combine text dialogue history representations generated by a GPT-2 model with audio embeddings obtained by the recently released Wav2Vec2 transformer model. We jointly fine-tune these models to learn dialogue policies via supervised learning and two policy gradient-based reinforcement learning algorithms. Our experimental results, using the DSTC2 dataset and a simulated user model capable of sampling audio turns, reveal that audio embeddings lead to overall higher task success (than without using audio embeddings) with statistically significant results across evaluation metrics and training algorithms.
@inproceedings{lincoln46800, booktitle = {IEEE Automatic Speech Recognition and Understanding}, month = {December}, title = {Audio Embeddings Help to Learn Better Dialogue Policies}, author = {Asier Lopez Zorrilla and M. Ines Torres and Heriberto Cuayahuitl}, publisher = {IEEE}, year = {2021}, url = {https://eprints.lincoln.ac.uk/id/eprint/46800/}, abstract = {Neural transformer architectures have gained a lot of interest for text-based dialogue management in the last few years. They have shown high learning capabilities for open domain dialogue with huge amounts of data and also for domain adaptation in task-oriented setups. But the potential benefits of exploiting the users' audio signal have rarely been explored in such frameworks. In this work, we combine text dialogue history representations generated by a GPT-2 model with audio embeddings obtained by the recently released Wav2Vec2 transformer model. We jointly fine-tune these models to learn dialogue policies via supervised learning and two policy gradient-based reinforcement learning algorithms. Our experimental results, using the DSTC2 dataset and a simulated user model capable of sampling audio turns, reveal that audio embeddings lead to overall higher task success (than without using audio embeddings) with statistically significant results across evaluation metrics and training algorithms.} }
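A rough sketch of the state construction this suggests, using Hugging Face Transformers: pool GPT-2 features over the text history, pool Wav2Vec2 features over the user's audio turn, and concatenate. The checkpoints, mean pooling, and linear policy head are illustrative assumptions; the paper jointly fine-tunes the encoders with supervised and reinforcement learning rather than using them frozen.

```python
import torch
from transformers import (GPT2Model, GPT2Tokenizer,
                          Wav2Vec2FeatureExtractor, Wav2Vec2Model)

tok = GPT2Tokenizer.from_pretrained("gpt2")
text_enc = GPT2Model.from_pretrained("gpt2")
fe = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base-960h")
audio_enc = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base-960h")

def dialogue_state(history: str, audio_16khz) -> torch.Tensor:
    """Concatenate mean-pooled text and audio embeddings into one vector."""
    ids = tok(history, return_tensors="pt")
    h_text = text_enc(**ids).last_hidden_state.mean(dim=1)            # (1, 768)
    feats = fe(audio_16khz, sampling_rate=16000, return_tensors="pt")
    h_audio = audio_enc(feats.input_values).last_hidden_state.mean(dim=1)
    return torch.cat([h_text, h_audio], dim=-1)                       # (1, 1536)

policy_head = torch.nn.Linear(1536, 20)  # 20 dialogue actions, illustrative
```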
- S. Mghames, M. Hanheide, and A. G. Esfahani, “Interactive movement primitives: planning to push occluding pieces for fruit picking,” in Ieee/rsj international conference on intelligent robots and systems (iros), 2021. doi:10.1109/IROS45743.2020.9341728
[BibTeX] [Abstract] [Download PDF]
Robotic technology is increasingly considered the major means for fruit picking. However, picking fruits in a dense cluster imposes a challenging research question in terms of motion/path planning as conventional planning approaches may not find collision-free movements for the robot to reach-and-pick a ripe fruit within a dense cluster. In such cases, the robot needs to safely push unripe fruits to reach a ripe one. Nonetheless, existing approaches to planning pushing movements in cluttered environments either are computationally expensive or only deal with 2-D cases and are not suitable for fruit picking, where it needs to compute 3-D pushing movements in a short time. In this work, we present a path planning algorithm for pushing occluding fruits to reach-and-pick a ripe one. Our proposed approach, called Interactive Probabilistic Movement Primitives (I-ProMP), is not computationally expensive (its computation time is in the order of 100 milliseconds) and is readily used for 3-D problems. We demonstrate the efficiency of our approach with pushing unripe strawberries in a simulated polytunnel. Our experimental results confirm I-ProMP successfully pushes table top grown strawberries and reaches a ripe one.
@inproceedings{lincoln42217, booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)}, month = {February}, title = {Interactive Movement Primitives: Planning to Push Occluding Pieces for Fruit Picking}, author = {Sariah Mghames and Marc Hanheide and Amir Ghalamzan Esfahani}, year = {2021}, doi = {10.1109/IROS45743.2020.9341728}, note = {{\copyright} 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.}, url = {https://eprints.lincoln.ac.uk/id/eprint/42217/}, abstract = {Robotic technology is increasingly considered the major means for fruit picking. However, picking fruits in a dense cluster imposes a challenging research question in terms of motion/path planning as conventional planning approaches may not find collision-free movements for the robot to reach-and-pick a ripe fruit within a dense cluster. In such cases, the robot needs to safely push unripe fruits to reach a ripe one. Nonetheless, existing approaches to planning pushing movements in cluttered environments either are computationally expensive or only deal with 2-D cases and are not suitable for fruit picking, where it needs to compute 3-D pushing movements in a short time. In this work, we present a path planning algorithm for pushing occluding fruits to reach-and-pick a ripe one. Our proposed approach, called Interactive Probabilistic Movement Primitives (I-ProMP), is not computationally expensive (its computation time is in the order of 100 milliseconds) and is readily used for 3-D problems. We demonstrate the efficiency of our approach with pushing unripe strawberries in a simulated polytunnel. Our experimental results confirm I-ProMP successfully pushes table top grown strawberries and reaches a ripe one.} }
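I-ProMP builds on probabilistic movement primitives, where a trajectory is y(t) = phi(t)^T w with Gaussian weights w, and a desired contact or via-point is imposed by conditioning that Gaussian. The sketch below shows the standard one-dimensional conditioning update; it is generic ProMP machinery under illustrative basis-function choices, not the I-ProMP planner itself.

```python
import numpy as np

def rbf_features(t: float, n_basis: int = 10, width: float = 0.02) -> np.ndarray:
    """Normalised radial basis features over phase t in [0, 1]."""
    centres = np.linspace(0.0, 1.0, n_basis)
    phi = np.exp(-(t - centres) ** 2 / (2.0 * width))
    return phi / phi.sum()

def condition_promp(mu_w, Sigma_w, t_star, y_star, sigma_y=1e-4):
    """Condition the weight distribution on passing through y_star at t_star."""
    phi = rbf_features(t_star)                 # (n_basis,)
    s = phi @ Sigma_w @ phi + sigma_y          # scalar innovation variance
    k = Sigma_w @ phi / s                      # Kalman-style gain
    mu_new = mu_w + k * (y_star - phi @ mu_w)
    Sigma_new = Sigma_w - np.outer(k, phi @ Sigma_w)
    return mu_new, Sigma_new
```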
- M. Cédérick, I. Ferrané, and H. Cuayahuitl, “Reward-based environment states for robot manipulation policy learning,” in Neurips 2021 workshop on deployable decision making in embodied systems (ddm), 2021.
[BibTeX] [Abstract] [Download PDF]
Training robot manipulation policies is a challenging and open problem in robotics and artificial intelligence. In this paper we propose a novel and compact state representation based on the rewards predicted from an image-based task success classifier. Our experiments, using the Pepper robot in simulation with two deep reinforcement learning algorithms on a grab-and-lift task, reveal that our proposed state representation can achieve up to 97\% task success using our best policies.
@inproceedings{lincoln47522, booktitle = {NeurIPS 2021 Workshop on Deployable Decision Making in Embodied Systems (DDM)}, month = {December}, title = {Reward-Based Environment States for Robot Manipulation Policy Learning}, author = {Mouliets C{\'e}d{\'e}rick and Isabelle Ferran{\'e} and Heriberto Cuayahuitl}, year = {2021}, url = {https://eprints.lincoln.ac.uk/id/eprint/47522/}, abstract = {Training robot manipulation policies is a challenging and open problem in robotics and artificial intelligence. In this paper we propose a novel and compact state representation based on the rewards predicted from an image-based task success classifier. Our experiments, using the Pepper robot in simulation with two deep reinforcement learning algorithms on a grab-and-lift task, reveal that our proposed state representation can achieve up to 97\% task success using our best policies.} }
- U. A. Zahidi and G. Cielniak, “Active learning for crop-weed discrimination by image classification from convolutional neural network’s feature pyramid levels,” in Computer vision systems: 13th international conference, icvs 2021, 2021. doi:10.1007/978-3-030-87156-7_20
[BibTeX] [Abstract] [Download PDF]
The amount of effort required for high-quality data acquisition and labelling for adequate supervised learning drives the need for building an efficient and effective image sampling strategy. We propose a novel Batch Mode Active Learning that blends Region Convolutional Neural Network’s (RCNN) Feature Pyramid Network (FPN) levels together and employs t-distributed Stochastic Neighbour Embedding (t-SNE) classification for selecting incremental batch based on feature similarity. Later, K-means clustering is performed on t-SNE instances for the selected sample size of images. Results show that t-SNE classification on merged FPN feature maps outperforms the approach based on RGB images directly, random sampling and maximum entropy-based image sampling schemes. For comparison, we employ a publicly available data set of images of Sugar beet for a crop-weed discrimination task together with our newly acquired annotated images of Romaine and Apollo lettuce crops at different growth stages. Batch sampling on all datasets by the proposed method shows that only 60\% of images are required to produce precision/recall statistics similar to the complete dataset. Two lettuce datasets used in our experiments are publicly available (Lettuce datasets: https://bit.ly/3g7Owc5) to facilitate further research opportunities.
@inproceedings{lincoln46648, month = {September}, author = {Usman A. Zahidi and Grzegorz Cielniak}, booktitle = {Computer Vision Systems: 13th International Conference, ICVS 2021}, title = {Active Learning for Crop-Weed Discrimination by Image Classification from Convolutional Neural Network's Feature Pyramid Levels}, publisher = {Springer Verlag}, doi = {10.1007/978-3-030-87156-7\_20}, year = {2021}, url = {https://eprints.lincoln.ac.uk/id/eprint/46648/}, abstract = {The amount of effort required for high-quality data acquisition and labelling for adequate supervised learning drives the need for building an efficient and effective image sampling strategy. We propose a novel Batch Mode Active Learning that blends Region Convolutional Neural Network's (RCNN) Feature Pyramid Network (FPN) levels together and employs t-distributed Stochastic Neighbour Embedding (t-SNE) classification for selecting incremental batch based on feature similarity. Later, K-means clustering is performed on t-SNE instances for the selected sample size of images. Results show that t-SNE classification on merged FPN feature maps outperforms the approach based on RGB images directly, random sampling and maximum entropy-based image sampling schemes. For comparison, we employ a publicly available data set of images of Sugar beet for a crop-weed discrimination task together with our newly acquired annotated images of Romaine and Apollo lettuce crops at different growth stages. Batch sampling on all datasets by the proposed method shows that only 60\% of images are required to produce precision/recall statistics similar to the complete dataset. Two lettuce datasets used in our experiments are publicly available (Lettuce datasets: https://bit.ly/3g7Owc5) to facilitate further research opportunities.} }
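Stripped of the FPN-blending step, the batch selection described above has a simple shape: embed per-image features with t-SNE, cluster the embedding with K-means, and send one representative per cluster for annotation. A minimal sketch with scikit-learn, assuming the features have already been pooled to one vector per image:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.manifold import TSNE

def select_batch(features: np.ndarray, batch_size: int, seed: int = 0) -> np.ndarray:
    """Pick a diverse batch of image indices to annotate next.

    features: (n_images, d) array, e.g. pooled network feature maps.
    Returns the index of the image nearest each K-means centre in the
    2-D t-SNE embedding.
    """
    emb = TSNE(n_components=2, random_state=seed).fit_transform(features)
    km = KMeans(n_clusters=batch_size, random_state=seed, n_init=10).fit(emb)
    picks = []
    for c in range(batch_size):
        members = np.where(km.labels_ == c)[0]
        d = np.linalg.norm(emb[members] - km.cluster_centers_[c], axis=1)
        picks.append(members[d.argmin()])
    return np.asarray(picks)
```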
- J. L. Louedec and G. Cielniak, “Gaussian map predictions for 3d surface feature localisation and counting,” in Bmvc, 2021.
[BibTeX] [Abstract] [Download PDF]
In this paper, we propose to employ a Gaussian map representation to estimate precise location and count of 3D surface features, addressing the limitations of state-of-the-art methods based on density estimation which struggle in presence of local disturbances. Gaussian maps indicate probable object location and can be generated directly from keypoint annotations avoiding laborious and costly per-pixel annotations. We apply this method to the 3D spheroidal class of objects which can be projected into 2D shape representation enabling efficient processing by a neural network GNet, an improved UNet architecture, which generates the likely locations of surface features and their precise count. We demonstrate a practical use of this technique for counting strawberry achenes which is used as a fruit quality measure in phenotyping applications. The results of training the proposed system on several hundreds of 3D scans of strawberries from a publicly available dataset demonstrate the accuracy and precision of the system which outperforms the state-of-the-art density-based methods for this application.
@inproceedings{lincoln48667, booktitle = {BMVC}, month = {November}, title = {Gaussian map predictions for 3D surface feature localisation and counting}, author = {Justin Le Louedec and Grzegorz Cielniak}, publisher = {BMVA}, year = {2021}, url = {https://eprints.lincoln.ac.uk/id/eprint/48667/}, abstract = {In this paper, we propose to employ a Gaussian map representation to estimate precise location and count of 3D surface features, addressing the limitations of state-of-the-art methods based on density estimation which struggle in presence of local disturbances. Gaussian maps indicate probable object location and can be generated directly from keypoint annotations avoiding laborious and costly per-pixel annotations. We apply this method to the 3D spheroidal class of objects which can be projected into 2D shape representation enabling efficient processing by a neural network GNet, an improved UNet architecture, which generates the likely locations of surface features and their precise count. We demonstrate a practical use of this technique for counting strawberry achenes which is used as a fruit quality measure in phenotyping applications. The results of training the proposed system on several hundreds of 3D scans of strawberries from a publicly available dataset demonstrate the accuracy and precision of the system which outperforms the state-of-the-art density-based methods for this application.} }
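The training targets here can be generated directly from keypoint clicks: each annotated feature (e.g. an achene) contributes a small Gaussian bump, and the network regresses the resulting map, with counting done afterwards by peak detection. A minimal sketch of that target generation, with sigma as an illustrative assumption:

```python
import numpy as np

def gaussian_map(keypoints, shape, sigma: float = 2.0) -> np.ndarray:
    """Render keypoint annotations as a Gaussian target map.

    keypoints: iterable of (row, col) positions; shape: (H, W).
    Each keypoint becomes a unit-height Gaussian bump, so local maxima
    of a well-trained prediction correspond to individual features.
    """
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    target = np.zeros(shape, dtype=np.float64)
    for r, c in keypoints:
        target += np.exp(-((yy - r) ** 2 + (xx - c) ** 2) / (2.0 * sigma ** 2))
    return target
```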
- H. Harman and E. Sklar, “A practical application of market-based mechanisms for allocating harvesting tasks,” in 19th international conference on practical applications of agents and multi-agent systems, 2021. doi:10.1007/978-3-030-85739-4_10
[BibTeX] [Abstract] [Download PDF]
Market-based task allocation mechanisms are designed to distribute a set of tasks fairly amongst a set of agents. Such mechanisms have been shown to be highly effective in simulation and when applied to multi-robot teams. Application of such mechanisms in real-world settings can present a range of practical challenges, such as knowing what is the best point in a complex process to allocate tasks and what information to consider in determining the allocation. The work presented here explores the application of market-based task allocation mechanisms to the problem of managing a heterogeneous human workforce to undertake activities associated with harvesting soft fruit. Soft fruit farms aim to maximise yield (the volume of fruit picked) while minimising labour time (and thus the cost of picking). Our work evaluates experimentally several different strategies for practical application of market-based mechanisms for allocating tasks to workers on soft fruit farms, identifying methods that appear best when simulated using a multi-agent model of farm activity.
@inproceedings{lincoln46475, month = {September}, author = {Helen Harman and Elizabeth Sklar}, booktitle = {19th International Conference on Practical Applications of Agents and Multi-Agent Systems}, title = {A Practical Application of Market-based Mechanisms for Allocating Harvesting Tasks}, publisher = {Springer}, journal = {Advances in Practical Applications of Agents, Multi-Agent Systems and Social Good: The PAAMS Collection}, doi = {10.1007/978-3-030-85739-4\_10}, year = {2021}, url = {https://eprints.lincoln.ac.uk/id/eprint/46475/}, abstract = {Market-based task allocation mechanisms are designed to distribute a set of tasks fairly amongst a set of agents. Such mechanisms have been shown to be highly effective in simulation and when applied to multi-robot teams. Application of such mechanisms in real-world settings can present a range of practical challenges, such as knowing what is the best point in a complex process to allocate tasks and what information to consider in determining the allocation. The work presented here explores the application of market-based task allocation mechanisms to the problem of managing a heterogeneous human workforce to undertake activities associated with harvesting soft fruit. Soft fruit farms aim to maximise yield (the volume of fruit picked) while minimising labour time (and thus the cost of picking). Our work evaluates experimentally several different strategies for practical application of market-based mechanisms for allocating tasks to workers on soft fruit farms, identifying methods that appear best when simulated using a multi-agent model of farm activity.} }
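As an illustration of the general mechanism family (not the specific strategies evaluated in the paper), a sequential single-item auction awards tasks one at a time to whichever worker bids lowest, with bids that can grow as a worker's load does:

```python
def allocate(tasks, workers, bid):
    """Greedy sequential auction over harvesting tasks.

    bid(worker, task, current_load) -> cost, e.g. estimated picking time
    for that row; letting bids grow with load spreads work across the team.
    """
    allocation = {w: [] for w in workers}
    unassigned = set(tasks)
    while unassigned:
        # Award the single cheapest (worker, task) pair this round.
        w, t = min(((w, t) for w in workers for t in unassigned),
                   key=lambda wt: bid(wt[0], wt[1], allocation[wt[0]]))
        allocation[w].append(t)
        unassigned.remove(t)
    return allocation
```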
- W. King, L. Pooley, P. Johnson, and K. Elgeneidy, “Design and characterisation of a variable-stiffness soft actuator based on tendon twisting,” in Taros 2021, 2021.
[BibTeX] [Abstract] [Download PDF]
The field of soft robotics aims to address the challenges faced by traditional rigid robots in less structured and dynamic environments that require more adaptive interactions. Taking inspiration from biological organisms, such as octopus tentacles and elephant trunks, soft robots commonly use elastic materials and novel actuation methods to mimic the continuous deformation of their mostly soft bodies. While current robotic manipulators, such as those used in the DaVinci surgical robot, have seen use in precise minimally invasive surgery applications, the capability of soft robotics to provide a greater degree of flexibility and inherently safe interactions shows great promise that motivates further study. Nevertheless, introducing softness consequently opens new challenges in achieving accurate positional control and sufficient force generation often required for manipulation tasks. In this paper, the feasibility of a stiffening mechanism based on tendon-twisting is investigated, as an alternative stiffening mechanism for soft actuators that can be easily scaled as needed based on tendon size, material properties, and arrangements, while offering simple means of controlling a gradual increase in stiffening during operation.
@inproceedings{lincoln45570, booktitle = {Taros 2021}, month = {September}, title = {Design and Characterisation of a Variable-Stiffness Soft Actuator Based on Tendon Twisting}, author = {William King and Luke Pooley and Philip Johnson and Khaled Elgeneidy}, year = {2021}, url = {https://eprints.lincoln.ac.uk/id/eprint/45570/}, abstract = {The field of soft robotics aims to address the challenges faced by traditional rigid robots in less structured and dynamic environments that require more adaptive interactions. Taking inspiration from biological organisms, such as octopus tentacles and elephant trunks, soft robots commonly use elastic materials and novel actuation methods to mimic the continuous deformation of their mostly soft bodies. While current robotic manipulators, such as those used in the DaVinci surgical robot, have seen use in precise minimally invasive surgery applications, the capability of soft robotics to provide a greater degree of flexibility and inherently safe interactions shows great promise that motivates further study. Nevertheless, introducing softness consequently opens new challenges in achieving accurate positional control and sufficient force generation often required for manipulation tasks. In this paper, the feasibility of a stiffening mechanism based on tendon-twisting is investigated, as an alternative stiffening mechanism for soft actuators that can be easily scaled as needed based on tendon size, material properties, and arrangements, while offering simple means of controlling a gradual increase in stiffening during operation.} }
- M. Hua, Q. Fu, W. Duan, and S. Yue, “Investigating refractoriness in collision perception neuronal model,” in 2021 international joint conference on neural networks (ijcnn), 2021. doi:10.1109/IJCNN52387.2021.9533965
[BibTeX] [Abstract] [Download PDF]
Currently, collision detection methods based on visual cues are still challenged by several factors including ultrafast approaching velocity and noisy signal. Taking inspiration from nature, though the computational models of lobula giant movement detectors (LGMDs) in locust’s visual pathways have demonstrated positive impacts on addressing these problems, there remains potential for improvement. In this paper, we propose a novel method mimicking neuronal refractoriness, i.e. the refractory period (RP), and further investigate its functionality and efficacy in the classic LGMD neural network model for collision perception. Compared with previous works, the two phases constructing RP, namely the absolute refractory period (ARP) and relative refractory period (RRP) are computationally implemented through a “link (L) layer” located between the photoreceptor and the excitation layers to realise the dynamic characteristic of RP in discrete time domain. The L layer, consisting of local time-varying thresholds, represents a sort of mechanism that allows photoreceptors to be activated individually and selectively by comparing the intensity of each photoreceptor to its corresponding local threshold established by its last output. More specifically, while the local threshold can merely be augmented by larger output, it shrinks exponentially over time. Our experimental outcomes show that, to some extent, the investigated mechanism not only enhances the LGMD model in terms of reliability and stability when faced with ultra-fast approaching objects, but also improves its performance against visual stimuli polluted by Gaussian or Salt-Pepper noise. This research demonstrates the modelling of refractoriness is effective in collision perception neuronal models, and promising to address the aforementioned collision detection challenges.
@inproceedings{lincoln46692, booktitle = {2021 International Joint Conference on Neural Networks (IJCNN)}, month = {September}, title = {Investigating Refractoriness in Collision Perception Neuronal Model}, author = {Mu Hua and Qinbing Fu and Wenting Duan and Shigang Yue}, publisher = {IEEE}, year = {2021}, doi = {10.1109/IJCNN52387.2021.9533965}, url = {https://eprints.lincoln.ac.uk/id/eprint/46692/}, abstract = {Currently, collision detection methods based on visual cues are still challenged by several factors including ultrafast approaching velocity and noisy signal. Taking inspiration from nature, though the computational models of lobula giant movement detectors (LGMDs) in locust's visual pathways have demonstrated positive impacts on addressing these problems, there remains potential for improvement. In this paper, we propose a novel method mimicking neuronal refractoriness, i.e. the refractory period (RP), and further investigate its functionality and efficacy in the classic LGMD neural network model for collision perception. Compared with previous works, the two phases constructing RP, namely the absolute refractory period (ARP) and relative refractory period (RRP) are computationally implemented through a ``link (L) layer'' located between the photoreceptor and the excitation layers to realise the dynamic characteristic of RP in discrete time domain. The L layer, consisting of local time-varying thresholds, represents a sort of mechanism that allows photoreceptors to be activated individually and selectively by comparing the intensity of each photoreceptor to its corresponding local threshold established by its last output. More specifically, while the local threshold can merely be augmented by larger output, it shrinks exponentially over time. Our experimental outcomes show that, to some extent, the investigated mechanism not only enhances the LGMD model in terms of reliability and stability when faced with ultra-fast approaching objects, but also improves its performance against visual stimuli polluted by Gaussian or Salt-Pepper noise. This research demonstrates the modelling of refractoriness is effective in collision perception neuronal models, and promising to address the aforementioned collision detection challenges.} }
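A toy reading of the L layer described above, with all constants illustrative: each pixel's threshold decays exponentially over time and is pushed up by that pixel's own last output, so a strongly activated photoreceptor is briefly refractory.

```python
import numpy as np

def link_layer(frames, tau: float = 0.9, gain: float = 1.5, theta0: float = 10.0):
    """Per-pixel refractory gating over a sequence of grey-level frames.

    tau: exponential decay of the local threshold per frame.
    gain: how strongly a pixel's output raises its own threshold.
    """
    theta = np.full(frames[0].shape, theta0)
    outputs = []
    for f in frames:
        out = np.where(np.abs(f) > theta, f, 0.0)   # selective activation
        theta = tau * theta + gain * np.abs(out)    # decay plus refractory boost
        outputs.append(out)
    return outputs
```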
- A. Mohtasib, A. G. Esfahani, N. Bellotto, and H. Cuayahuitl, “Neural task success classifiers for robotic manipulation from few real demonstrations,” in International joint conference on neural networks (ijcnn), 2021.
[BibTeX] [Abstract] [Download PDF]
Robots learning a new manipulation task from a small amount of demonstrations are increasingly demanded in different workspaces. A classifier model assessing the quality of actions can predict the successful completion of a task, which can be used by intelligent agents for action-selection. This paper presents a novel classifier that learns to classify task completion only from a few demonstrations. We carry out a comprehensive comparison of different neural classifiers, e.g. fully connected-based, fully convolutional-based, sequence2sequence-based, and domain adaptation-based classification. We also present a new dataset including five robot manipulation tasks, which is publicly available. We compared the performances of our novel classifier and the existing models using our dataset and the MIME dataset. The results suggest domain adaptation and timing-based features improve success prediction. Our novel model, i.e. fully convolutional neural network with domain adaptation and timing features, achieves an average classification accuracy of 97.3\% and 95.5\% across tasks in both datasets whereas state-of-the-art classifiers without domain adaptation and timing-features only achieve 82.4\% and 90.3\%, respectively.
@inproceedings{lincoln45559, booktitle = {International Joint Conference on Neural Networks (IJCNN)}, month = {July}, title = {Neural Task Success Classifiers for Robotic Manipulation from Few Real Demonstrations}, author = {Abdalkarim Mohtasib and Amir Ghalamzan Esfahani and Nicola Bellotto and Heriberto Cuayahuitl}, publisher = {IEEE}, year = {2021}, url = {https://eprints.lincoln.ac.uk/id/eprint/45559/}, abstract = {Robots learning a new manipulation task from a small amount of demonstrations are increasingly demanded in different workspaces. A classifier model assessing the quality of actions can predict the successful completion of a task, which can be used by intelligent agents for action-selection. This paper presents a novel classifier that learns to classify task completion only from a few demonstrations. We carry out a comprehensive comparison of different neural classifiers, e.g. fully connected-based, fully convolutional-based, sequence2sequence-based, and domain adaptation-based classification. We also present a new dataset including five robot manipulation tasks, which is publicly available. We compared the performances of our novel classifier and the existing models using our dataset and the MIME dataset. The results suggest domain adaptation and timing-based features improve success prediction. Our novel model, i.e. fully convolutional neural network with domain adaptation and timing features, achieves an average classification accuracy of 97.3\% and 95.5\% across tasks in both datasets whereas state-of-the-art classifiers without domain adaptation and timing-features only achieve 82.4\% and 90.3\%, respectively.} }
- D. Dai, J. Gao, S. Parsons, and E. Sklar, “Small datasets for fruit detection with transfer learning,” in 4th uk-ras conference, 2021, p. 5–6. doi:10.31256/Nf6Uh8Q
[BibTeX] [Abstract] [Download PDF]
A common approach to the problem of fruit detection in images is to design a deep learning network and train a model to locate objects, using bounding boxes to identify regions containing fruit. However, this requires sufficient data and presents challenges for small datasets. Transfer learning, which acquires knowledge from a source domain and brings that to a new target domain, can produce improved performance in the target domain. The work discussed in this paper shows the application of transfer learning for fruit detection with small datasets and presents analysis between the number of training images in source and target domains.
@inproceedings{lincoln46542, month = {July}, author = {Dan Dai and Junfeng Gao and Simon Parsons and Elizabeth Sklar}, booktitle = {4th UK-RAS Conference}, title = {Small datasets for fruit detection with transfer learning}, publisher = {UK-RAS}, doi = {10.31256/Nf6Uh8Q}, pages = {5--6}, year = {2021}, url = {https://eprints.lincoln.ac.uk/id/eprint/46542/}, abstract = {A common approach to the problem of fruit detection in images is to design a deep learning network and train a model to locate objects, using bounding boxes to identify regions containing fruit. However, this requires sufficient data and presents challenges for small datasets. Transfer learning, which acquires knowledge from a source domain and brings that to a new target domain, can produce improved performance in the target domain. The work discussed in this paper shows the application of transfer learning for fruit detection with small datasets and presents analysis between the number of training images in source and target domains.} }
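The standard recipe for this kind of transfer is to start from a detector pre-trained on a large source dataset and re-fit only its heads on the small fruit dataset. A sketch with torchvision; the class count, frozen backbone, and optimiser settings are illustrative assumptions, not the paper's exact setup.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Start from a detector pre-trained on COCO (the source domain) and
# replace its box head for a small fruit dataset (the target domain).
model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)  # fruit + background

# Freeze the backbone so only the heads adapt to the few target images.
for p in model.backbone.parameters():
    p.requires_grad = False

optimiser = torch.optim.SGD([p for p in model.parameters() if p.requires_grad],
                            lr=5e-3, momentum=0.9)
```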
- T. Choi and G. Cielniak, “Adaptive selection of informative path planning strategies via reinforcement learning,” in 2021 european conference on mobile robots (ecmr), 2021. doi:10.1109/ECMR50962.2021.9568796
[BibTeX] [Abstract] [Download PDF]
In our previous work, we designed a systematic policy to prioritize sampling locations to lead significant accuracy improvement in spatial interpolation by using the prediction uncertainty of Gaussian Process Regression (GPR) as “attraction force” to deployed robots in path planning. Although the integration with Traveling Salesman Problem (TSP) solvers was also shown to produce relatively short travel distance, we here hypothesise several factors that could decrease the overall prediction precision as well because sub-optimal locations may eventually be included in their paths. To address this issue, in this paper, we first explore “local planning” approaches adopting various spatial ranges within which next sampling locations are prioritized to investigate their effects on the prediction performance as well as incurred travel distance. Also, Reinforcement Learning (RL)-based high-level controllers are trained to adaptively produce blended plans from a particular set of local planners to inherit unique strengths from that selection depending on latest prediction states. Our experiments on use cases of temperature monitoring robots demonstrate that the dynamic mixtures of planners can not only generate sophisticated, informative plans that a single planner could not create alone but also ensure significantly reduced travel distances at no cost of prediction reliability without any assist of additional modules for shortest path calculation.
@inproceedings{lincoln46371, booktitle = {2021 European Conference on Mobile Robots (ECMR)}, month = {October}, title = {Adaptive Selection of Informative Path Planning Strategies via Reinforcement Learning}, author = {Taeyeong Choi and Grzegorz Cielniak}, publisher = {IEEE}, year = {2021}, doi = {10.1109/ECMR50962.2021.9568796}, url = {https://eprints.lincoln.ac.uk/id/eprint/46371/}, abstract = {In our previous work, we designed a systematic policy to prioritize sampling locations to lead significant accuracy improvement in spatial interpolation by using the prediction uncertainty of Gaussian Process Regression (GPR) as ``attraction force'' to deployed robots in path planning. Although the integration with Traveling Salesman Problem (TSP) solvers was also shown to produce relatively short travel distance, we here hypothesise several factors that could decrease the overall prediction precision as well because sub-optimal locations may eventually be included in their paths. To address this issue, in this paper, we first explore ``local planning'' approaches adopting various spatial ranges within which next sampling locations are prioritized to investigate their effects on the prediction performance as well as incurred travel distance. Also, Reinforcement Learning (RL)-based high-level controllers are trained to adaptively produce blended plans from a particular set of local planners to inherit unique strengths from that selection depending on latest prediction states. Our experiments on use cases of temperature monitoring robots demonstrate that the dynamic mixtures of planners can not only generate sophisticated, informative plans that a single planner could not create alone but also ensure significantly reduced travel distances at no cost of prediction reliability without any assist of additional modules for shortest path calculation.} }
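The “attraction force” idea can be made concrete in a few lines of scikit-learn: fit a GPR to the samples gathered so far, score candidate locations by predictive standard deviation, and discount by travel cost. The distance discount and constants below are illustrative assumptions, and the RL controller that blends whole planners is beyond this sketch.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def next_sample(X_seen, y_seen, candidates, robot_xy, alpha: float = 1.0):
    """Pick the next sampling location: high GPR uncertainty, low travel cost.

    X_seen, y_seen: locations and measurements so far; candidates: (m, 2)
    grid of possible locations; robot_xy: current robot position.
    """
    gpr = GaussianProcessRegressor(kernel=RBF(length_scale=5.0),
                                   normalize_y=True).fit(X_seen, y_seen)
    _, std = gpr.predict(candidates, return_std=True)      # predictive uncertainty
    travel = np.linalg.norm(candidates - robot_xy, axis=1)
    score = std - alpha * travel / travel.max()            # attraction vs. cost
    return candidates[score.argmax()]
```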
- L. Guevara, M. Khalid, M. Hanheide, and S. Parsons, “Assessing the probability of human injury during uv-c treatment of crops by robots,” in 4th uk-ras conference, 2021. doi:10.31256/Pj6Cz2L
[BibTeX] [Abstract] [Download PDF]
This paper describes a hazard analysis for an agricultural scenario where a crop is treated by a robot using UV-C light. Although human-robot interactions are not expected, it may be the case that unauthorized people approach the robot while it is operating. These potential human-robot interactions have been identified and modelled as Markov Decision Processes (MDP) and tested in the model checking tool PRISM.
@inproceedings{lincoln46537, booktitle = {4th UK-RAS Conference}, month = {July}, title = {Assessing the probability of human injury during UV-C treatment of crops by robots}, author = {Leonardo Guevara and Muhammad Khalid and Marc Hanheide and Simon Parsons}, publisher = {UK-RAS}, year = {2021}, doi = {10.31256/Pj6Cz2L}, url = {https://eprints.lincoln.ac.uk/id/eprint/46537/}, abstract = {This paper describes a hazard analysis for an agricultural scenario where a crop is treated by a robot using UV-C light. Although human-robot interactions are not expected, it may be the case that unauthorized people approach the robot while it is operating. These potential human-robot interactions have been identified and modelled as Markov Decision Processes (MDP) and tested in the model checking tool PRISM.} }
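What PRISM computes for a query like Pmax=? [F hazard] is the least fixed point of a Bellman-style equation over the MDP. A toy value-iteration version, with the transition encoding and the example model as illustrative assumptions rather than the authors' actual model:

```python
def max_reach_probability(mdp, hazard, n_iter=1000):
    """Maximum probability of eventually reaching a hazard state.

    mdp[s] maps each action to a {next_state: probability} distribution;
    this mirrors the Pmax=? [F hazard] query a model checker evaluates.
    """
    x = {s: 1.0 if s in hazard else 0.0 for s in mdp}
    for _ in range(n_iter):
        for s in mdp:
            if s not in hazard:
                x[s] = max(sum(p * x[t] for t, p in dist.items())
                           for dist in mdp[s].values())
    return x

# Hypothetical scenario: the robot either stops safely or keeps operating
# while a bystander may approach.
mdp = {"op": {"stop": {"safe": 1.0}, "go": {"near": 0.3, "op": 0.7}},
       "near": {"stop": {"safe": 0.9, "injury": 0.1}},
       "safe": {"stay": {"safe": 1.0}}, "injury": {"stay": {"injury": 1.0}}}
print(max_reach_probability(mdp, {"injury"})["op"])  # worst-case injury risk
```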
- M. Khalid, L. Guevara, M. Hanheide, and S. Parsons, “Assuring autonomy of robots in soft fruit production,” in 4th uk-ras conference, 2021. doi:10.31256/Ml6Ik7G
[BibTeX] [Abstract] [Download PDF]
This paper describes our work to assure safe autonomy in soft fruit production. The first step was hazard analysis, where all the possible hazards in representative scenarios were identified. Following this analysis, a three-layer safety architecture was identified that will minimise the occurrence of the identified hazards. Most of the hazards are minimised by upper layers, while unavoidable hazards are handled using emergency stops. In parallel, we are using probabilistic model checking to check the probability of a hazard’s occurrence. The results from the model checking will be used to improve safety system architecture.
@inproceedings{lincoln46541, booktitle = {4th UK-RAS Conference}, month = {July}, title = {Assuring autonomy of robots in soft fruit production}, author = {Muhammad Khalid and Leonardo Guevara and Marc Hanheide and Simon Parsons}, publisher = {UK-RAS}, year = {2021}, doi = {10.31256/Ml6Ik7G}, keywords = {ARRAY(0x559d324496b0)}, url = {https://eprints.lincoln.ac.uk/id/eprint/46541/}, abstract = {This paper describes our work to assure safe autonomy in soft fruit production. The first step was hazard analysis, where all the possible hazards in representative scenarios were identified. Following this analysis, a three-layer safety architecture was identified that will minimise the occurrence of the identified hazards. Most of the hazards are minimised by upper layers, while unavoidable hazards are handled using emergency stops. In parallel, we are using probabilistic model checking to check the probability of a hazard's occurrence. The results from the model checking will be used to improve safety system architecture.} }
- H. Rogers, B. Dawson, G. Clawson, and C. Fox, “Extending an open source hardware agri-robot with simulation and plant re-identification,” in Oxford autonomous intelligent machines and systems conference 2021, 2021.
[BibTeX] [Abstract] [Download PDF]
Previous work constructed an open source hardware (OSH) agri-robot platform for swarming agriculture research. We summarise recent developments from the community on this platform as a case study of how an OSH project can develop. The original platform has been extended by contributions of a simulation package and a vision-based plant-re-identification system used as a target for blockchain-based food assurance. Gaining new participants in OSH projects requires explicit instructions on how to contribute. The system hardware and software is open-sourced at https://github.com/Harry-Rogers/PiCar as part of this publication. We invite others to get involved and extend the platform.
@inproceedings{lincoln46862, booktitle = {Oxford Autonomous Intelligent Machines and Systems Conference 2021}, month = {October}, title = {Extending an Open Source Hardware Agri-Robot with Simulation and Plant Re-identification}, author = {Harry Rogers and Benjamin Dawson and Garry Clawson and Charles Fox}, publisher = {Oxford AIMS Conference 2021}, year = {2021}, keywords = {ARRAY(0x559d32588708)}, url = {https://eprints.lincoln.ac.uk/id/eprint/46862/}, abstract = {Previous work constructed an open source hardware (OSH) agri-robot platform for swarming agriculture research. We summarise recent developments from the community on this platform as a case study of how an OSH project can develop. The original platform has been extended by contributions of a simulation package and a vision-based plant-re-identification system used as a target for blockchain-based food assurance. Gaining new participants in OSH projects requires explicit instructions on how to contribute. The system hardware and software is open-sourced at https://github.com/Harry-Rogers/PiCar as part of this publication. We invite others to get involved and extend the platform.} }
- A. Henry and C. Fox, “Open source hardware automated guitar player,” in International conference on computer music, 2021.
[BibTeX] [Abstract] [Download PDF]
We present the first open source hardware (OSH) design and build of a physical robotic automated guitar player. Users' own instruments being different shapes and sizes, the system is designed to be used and/or modified to physically attach to a wide range of instruments. Design objectives include ease and low cost of build. Automation is split into three modules: left-hand fretting, right-hand string picking, and right-hand palm muting. Automation is performed using cheap electric linear solenoids. Software APIs are designed and implemented for both low-level actuator control and high-level music performance.
@inproceedings{lincoln45327, booktitle = {International Conference on Computer Music}, month = {July}, title = {Open source hardware automated guitar player}, author = {Andrew Henry and Charles Fox}, publisher = {ICMC}, year = {2021}, keywords = {ARRAY(0x559d32625878)}, url = {https://eprints.lincoln.ac.uk/id/eprint/45327/}, abstract = {We present the first open source hardware (OSH) design and build of a physical robotic automated guitar player. Users? own instruments being different shapes and sizes, the system is designed to be used and/or modified to physically attach to a wide range of instruments. Design objectives include ease and low cost of build. Automation is split into three modules: the left-hand fretting, right-hand string picking, and right hand palm muting. Automation is performed using cheap electric linear solenoids. Software APIs are designed and implemented for both low level actuator control and high level music performance.} }
- N. Wagner, R. Kirk, M. Hanheide, and G. Cielniak, “Efficient and robust orientation estimation of strawberries for fruit picking applications,” in Ieee international conference on robotics and automation (icra), 2021, p. 13857–1386. doi:10.1109/ICRA48506.2021.9561848
[BibTeX] [Abstract] [Download PDF]
Recent developments in agriculture have highlighted the potential of, as well as the need for, the use of robotics. Various processes in this field can benefit from the proper use of state-of-the-art technology [1], in terms of efficiency as well as quality. One of these areas is the harvesting of ripe fruit. In order to be able to automate this process, a robotic harvester needs to be aware of the full poses of the crop/fruit to be collected in order to perform proper path and collision planning. The current state of the art mainly considers problems of detection and segmentation of fruit, with localisation limited to the 3D position only. The reliable and real-time estimation of the respective orientations remains a mostly unaddressed problem. In this paper, we present a compact and efficient network architecture for estimating the orientation of soft fruit such as strawberries from colour and, optionally, depth images. The proposed system can be automatically trained in a realistic simulation environment. We evaluate the system's performance on simulated datasets and validate its operation on publicly available images of strawberries to demonstrate its practical use. Depending on the amount of training data used, coverage of state space, as well as the availability of RGB-D or RGB data only, mean errors as low as 11° could be achieved.
@inproceedings{lincoln44426, month = {October}, author = {Nikolaus Wagner and Raymond Kirk and Marc Hanheide and Grzegorz Cielniak}, booktitle = {IEEE International Conference on Robotics and Automation (ICRA)}, title = {Efficient and Robust Orientation Estimation of Strawberries for Fruit Picking Applications}, publisher = {IEEE}, doi = {10.1109/ICRA48506.2021.9561848}, pages = {13857--1386}, year = {2021}, keywords = {ARRAY(0x559d32450c50)}, url = {https://eprints.lincoln.ac.uk/id/eprint/44426/}, abstract = {Recent developments in agriculture have highlighted the potential of as well as the need for the use of robotics. Various processes in this field can benefit from the proper use of state of the art technology [1], in terms of efficiency as well as quality. One of these areas is the harvesting of ripe fruit. In order to be able to automate this process, a robotic harvester needs to be aware of the full poses of the crop/fruit to be collected in order to perform proper path- and collision planning. The current state of the art mainly considers problems of detection and segmentation of fruit with localisation limited to the 3D position only. The reliable and real-time estimation of the respective orientations remains a mostly unaddressed problem. In this paper, we present a compact and efficient network architecture for estimating the orientation of soft fruit such as strawberries from colour and, optionally, depth images. The proposed system can be automatically trained in a realistic simulation environment. We evaluate the system?s performance on simulated datasets and validate its operation on publicly available images of strawberries to demonstrate its practical use. Depending on the amount of training data used, coverage of state space, as well as the availability of RGB-D or RGB data only, mean errors of as low as 11? could be achieved.} }
- J. C. Mayoral, L. Grimstad, P. J. From, and G. Cielniak, “Integration of a human-aware risk-based braking system into an open-field mobile robot,” in Ieee international conference on robotics and automation (icra), 2021, p. 2435–2442. doi:10.1109/ICRA48506.2021.9561522
[BibTeX] [Abstract] [Download PDF]
Safety integration components for robotic applications are a mandatory feature for any autonomous mobile application, including human avoidance behaviors. This paper proposes a novel parametrizable scene risk evaluator for open-field applications that uses human motion predictions and pre-defined hazard zones to estimate a braking factor. Parameter optimization uses simulated data. The evaluation is carried out in simulated and real-time scenarios, showing the impact of human predictions on risk reduction in agricultural applications.
@inproceedings{lincoln44427, month = {October}, author = {Jose C. Mayoral and Lars Grimstad and P{\r a}l J. From and Grzegorz Cielniak}, booktitle = {IEEE International Conference on Robotics and Automation (ICRA)}, title = {Integration of a Human-aware Risk-based Braking System into an Open-Field Mobile Robot}, publisher = {IEEE}, doi = {10.1109/ICRA48506.2021.9561522}, pages = {2435--2442}, year = {2021}, keywords = {ARRAY(0x559d32450c20)}, url = {https://eprints.lincoln.ac.uk/id/eprint/44427/}, abstract = {Safety integration components for robotic applications are a mandatory feature for any autonomous mobile application, including human avoidance behaviors. This paper proposes a novel parametrizable scene risk evaluator for open-field applications that use humans motion predictions and pre-defined hazard zones to estimate a braking factor. Parameters optimization uses simulated data. The evaluation is carried out by simulated and real-time scenarios, showing the impact of human predictions in favor of risk reductions on agricultural applications.} }
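A braking factor of the kind described above can be sketched by predicting a person's position over a short horizon and scaling the robot's speed down as the prediction approaches pre-defined hazard zones. The zone radii, prediction horizon and scaling law below are invented placeholders, not the parametrization of the paper.

# Sketch: scale robot speed by a braking factor derived from a constant-velocity
# prediction of a human's position relative to pre-defined hazard zones.
# Zone radii, horizon and factors are illustrative assumptions.
import math

def braking_factor(human_pos, human_vel, robot_pos, horizon=2.0, dt=0.2):
    closest = math.inf
    t = 0.0
    while t <= horizon:  # constant-velocity prediction of the human's path
        px = human_pos[0] + human_vel[0] * t
        py = human_pos[1] + human_vel[1] * t
        closest = min(closest, math.hypot(px - robot_pos[0], py - robot_pos[1]))
        t += dt
    if closest < 1.0:    # inner hazard zone: full stop
        return 0.0
    if closest < 3.0:    # warning zone: slow down proportionally
        return (closest - 1.0) / 2.0
    return 1.0           # clear: full speed

speed = 1.5 * braking_factor((4.0, 0.0), (-1.0, 0.0), (0.0, 0.0))
print(f"commanded speed: {speed:.2f} m/s")  # person walking towards the robot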
- T. Liu, X. Sun, C. Hu, Q. Fu, and S. Yue, “A versatile vision-pheromone-communication platform for swarm robotics,” in 2021 ieee international conference on robotics and automation (icra), 2021. doi:10.1109/ICRA48506.2021.9561911
[BibTeX] [Abstract] [Download PDF]
This paper describes a versatile platform for swarm robotics research. It integrates multiple pheromone communication with a dynamic visual scene along with real time data transmission and localization of multiple robots. The platform has been built for inquiries into social insect behavior and bio-robotics. By introducing a new research scheme to coordinate olfactory and visual cues, it not only complements current swarm robotics platforms which focus only on pheromone communications by adding visual interaction, but also may fill an important gap in closing the loop from bio-robotics to neuroscience. We have built a controllable dynamic visual environment based on our previously developed ColCOSΦ (a multi-pheromone platform) by enclosing the arena with LED panels and interacting with the micro mobile robots with a visual sensor. In addition, a wireless communication system has been developed to allow transmission of real-time bi-directional data between multiple micro robot agents and a PC host. A case study combining concepts from the internet of vehicles (IoV) and an insect-vision inspired model has been undertaken to verify the applicability of the presented platform, and to investigate how complex scenarios can be facilitated by making use of this platform.
@inproceedings{lincoln47322, booktitle = {2021 IEEE International Conference on Robotics and Automation (ICRA)}, month = {October}, title = {A Versatile Vision-Pheromone-Communication Platform for Swarm Robotics}, author = {Tian Liu and Xuelong Sun and Cheng Hu and Qinbing Fu and Shigang Yue}, publisher = {IEEE}, year = {2021}, doi = {10.1109/ICRA48506.2021.9561911}, keywords = {ARRAY(0x559d32450bf0)}, url = {https://eprints.lincoln.ac.uk/id/eprint/47322/}, abstract = {This paper describes a versatile platform for swarm robotics research. It integrates multiple pheromone communication with a dynamic visual scene along with real time data transmission and localization of multiple-robots. The platform has been built for inquiries into social insect behavior and bio-robotics. By introducing a new research scheme to coordinate olfactory and visual cues, it not only complements current swarm robotics platforms which focus only on pheromone communications by adding visual interaction, but also may fill an important gap in closing the loop from bio-robotics to neuroscience. We have built a controllable dynamic visual environment based on our previously developed ColCOS\${$\backslash$}Phi\$ (a multi-pheromones platform) by enclosing the arena with LED panels and interacting with the micro mobile robots with a visual sensor. In addition, a wireless communication system has been developed to allow transmission of real-time bi-directional data between multiple micro robot agents and a PC host. A case study combining concepts from the internet of vehicles (IoV) and insect-vision inspired model has been undertaken to verify the applicability of the presented platform, and to investigate how complex scenarios can be facilitated by making use of this platform.} }
- T. Zhivkov, A. Gomez, J. Gao, E. Sklar, and S. Parsons, “The need for speed: how 5g communication can support ai in the field,” in Epsrc uk-ras network (2021). ukras21 conference: robotics at home proceedings, 2021, p. 55–56. doi:10.31256/On8Hj9U
[BibTeX] [Abstract] [Download PDF]
Using AI for agriculture requires the fast transmission and processing of large volumes of data. Cost-effective high speed processing may not be possible on-board agricultural vehicles, and suitably fast transmission may not be possible with older generation wireless communications. In response, the work presented here investigates the use of 5G wireless technology to support the deployment of AI in this context.
@inproceedings{lincoln46574, month = {June}, author = {Tsvetan Zhivkov and Adrian Gomez and Junfeng Gao and Elizabeth Sklar and Simon Parsons}, booktitle = {EPSRC UK-RAS Network (2021). UKRAS21 Conference: Robotics at home Proceedings}, title = {The need for speed: How 5G communication can support AI in the field}, publisher = {UK-RAS}, doi = {10.31256/On8Hj9U}, pages = {55--56}, year = {2021}, keywords = {ARRAY(0x559d32468b48)}, url = {https://eprints.lincoln.ac.uk/id/eprint/46574/}, abstract = {Using AI for agriculture requires the fast transmission and processing of large volumes of data. Cost-effective high speed processing may not be possible on-board agricultural vehicles, and suitably fast transmission may not be possible with older generation wireless communications. In response, the work presented here investigates the use of 5G wireless technology to support the deployment of AI in this context.} }
- K. Heiwolt, T. Duckett, and G. Cielniak, “Deep semantic segmentation of 3d plant point clouds,” in Towards autonomous robotic systems conference, 2021. doi:10.1007/978-3-030-89177-0_4
[BibTeX] [Abstract] [Download PDF]
Plant phenotyping is an essential step in the plant breeding cycle, necessary to ensure food safety for a growing world population. Standard procedures for evaluating three-dimensional plant morphology and extracting relevant phenotypic characteristics are slow, costly, and in need of automation. Previous work towards automatic semantic segmentation of plants relies on explicit prior knowledge about the species and sensor set-up, as well as manually tuned parameters. In this work, we propose to use a supervised machine learning algorithm to predict per-point semantic annotations directly from point cloud data of whole plants and minimise the necessary user input. We train a PointNet++ variant on a fully annotated procedurally generated data set of partial point clouds of tomato plants, and show that the network is capable of distinguishing between the semantic classes of leaves, stems, and soil based on structural data only. We present both quantitative and qualitative evaluation results, and establish a proof of concept, indicating that deep learning is a promising approach towards replacing the current complex, laborious, species-specific, state-of-the-art plant segmentation procedures.
@inproceedings{lincoln46669, booktitle = {Towards Autonomous Robotic Systems Conference}, month = {October}, title = {Deep semantic segmentation of 3D plant point clouds}, author = {Karoline Heiwolt and Tom Duckett and Grzegorz Cielniak}, publisher = {Springer International Publishing}, year = {2021}, doi = {10.1007/978-3-030-89177-0\_4}, keywords = {ARRAY(0x559d324b4678)}, url = {https://eprints.lincoln.ac.uk/id/eprint/46669/}, abstract = {Plant phenotyping is an essential step in the plant breeding cycle, necessary to ensure food safety for a growing world population. Standard procedures for evaluating three-dimensional plant morphology and extracting relevant phenotypic characteristics are slow, costly, and in need of automation. Previous work towards automatic semantic segmentation of plants relies on explicit prior knowledge about the species and sensor set-up, as well as manually tuned parameters. In this work, we propose to use a supervised machine learning algorithm to predict per-point semantic annotations directly from point cloud data of whole plants and minimise the necessary user input. We train a PointNet++ variant on a fully annotated procedurally generated data set of partial point clouds of tomato plants, and show that the network is capable of distinguishing between the semantic classes of leaves, stems, and soil based on structural data only. We present both quantitative and qualitative evaluation results, and establish a proof of concept, indicating that deep learning is a promising approach towards replacing the current complex, laborious, species-specific, state-of-the-art plant segmentation procedures.} }
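The per-point labelling task above can be sketched with a PointNet-style network: a shared MLP produces per-point features, a global max-pool summarises the whole plant, and their concatenation is classified per point into leaf, stem or soil. This bare-bones PyTorch sketch is far simpler than the PointNet++ variant used in the paper, and all layer sizes are arbitrary.

# Bare-bones PointNet-style per-point classifier (leaf / stem / soil).
# Much simpler than the PointNet++ variant in the paper; sizes are arbitrary.
import torch
import torch.nn as nn

class PerPointSegmenter(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        self.local = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                   nn.Linear(64, 128), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(128 + 128, 64), nn.ReLU(),
                                  nn.Linear(64, num_classes))

    def forward(self, pts):                    # pts: (batch, n_points, 3)
        feats = self.local(pts)                # per-point features
        global_feat = feats.max(dim=1).values  # permutation-invariant summary
        expanded = global_feat.unsqueeze(1).expand(-1, pts.shape[1], -1)
        return self.head(torch.cat([feats, expanded], dim=-1))  # per-point logits

cloud = torch.rand(2, 1024, 3)                 # two toy clouds of 1024 points
logits = PerPointSegmenter()(cloud)
print(logits.shape)                            # torch.Size([2, 1024, 3])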
- N. Wagner and G. Cielniak, “Inference of mechanical properties of dynamic objects through active perception,” in Towards autonomous robotic systems conference (taros), 2021, p. 430–439. doi:10.1007/978-3-030-89177-0_45
[BibTeX] [Abstract] [Download PDF]
Current robotic systems often lack a deeper understanding of their surroundings, even if they are equipped with visual sensors like RGB-D cameras. Knowledge of the mechanical properties of the objects in their immediate surroundings, however, could bring huge benefits to applications such as path planning, obstacle avoidance & removal or estimating object compliance. In this paper, we present a novel approach to inferring mechanical properties of dynamic objects with the help of active perception and frequency analysis of objects’ stimulus responses. We perform FFT on a buffer of image flow maps to identify the spectral signature of objects and from that their eigenfrequency. Combining this with 3D depth information allows us to infer an object’s mass without having to weigh it. We perform experiments on a demonstrator with variable mass and stiffness to test our approach and provide an analysis on the influence of individual properties on the result. By simply applying a controlled amount of force to a system, we were able to infer mechanical properties of systems with an eigenfrequency of around 4.5 Hz in about 2 s. This lab-based feasibility study opens new exciting robotic applications targeting realistic, non-rigid objects such as plants, crops or fabric.
@inproceedings{lincoln46646, month = {October}, author = {Nikolaus Wagner and Grzegorz Cielniak}, booktitle = {Towards Autonomous Robotic Systems Conference (TAROS)}, title = {Inference of Mechanical Properties of Dynamic Objects through Active Perception}, publisher = {Springer}, year = {2021}, journal = {Towards Autonomous Robotic Systems Conference (TAROS) 2021}, doi = {10.1007/978-3-030-89177-0\_45}, pages = {430--439}, keywords = {ARRAY(0x559d3265a908)}, url = {https://eprints.lincoln.ac.uk/id/eprint/46646/}, abstract = {Current robotic systems often lack a deeper understanding of their surroundings, even if they are equipped with visual sensors like RGB-D cameras. Knowledge of the mechanical properties of the objects in their immediate surroundings, however, could bring huge benefits to applications such as path planning, obstacle avoidance \& removal or estimating object compliance. In this paper, we present a novel approach to inferring mechanical properties of dynamic objects with the help of active perception and frequency analysis of objects' stimulus responses. We perform FFT on a buffer of image flow maps to identify the spectral signature of objects and from that their eigenfrequency. Combining this with 3D depth information allows us to infer an object's mass without having to weigh it. We perform experiments on a demonstrator with variable mass and stiffness to test our approach and provide an analysis on the influence of individual properties on the result. By simply applying a controlled amount of force to a system, we were able to infer mechanical properties of systems with an eigenfrequency of around 4.5 Hz in about 2 s. This lab-based feasibility study opens new exciting robotic applications targeting realistic, non-rigid objects such as plants, crops or fabric.} }
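The physics behind the approach above is the mass-spring relation f = (1/2π)·sqrt(k/m): once the eigenfrequency f is read off an FFT peak of the observed oscillation and the stiffness k is known, the mass follows as m = k/(2πf)². Below is a toy numerical sketch with invented stiffness and signal parameters, tuned to the roughly 4.5 Hz regime reported above.

# Toy sketch: recover a system's eigenfrequency from an FFT peak and infer its
# mass via m = k / (2*pi*f)^2. Stiffness and signal parameters are invented.
import numpy as np

k = 50.0                      # assumed known stiffness [N/m]
true_mass = 0.063             # gives an eigenfrequency near 4.5 Hz
f_true = np.sqrt(k / true_mass) / (2 * np.pi)

fs, duration = 60.0, 2.0      # sampling rate [Hz] and observation window [s]
t = np.arange(0, duration, 1 / fs)
signal = np.sin(2 * np.pi * f_true * t) * np.exp(-0.3 * t)  # damped response

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), 1 / fs)
f_est = freqs[np.argmax(spectrum[1:]) + 1]   # peak frequency, skipping the DC bin

mass_est = k / (2 * np.pi * f_est) ** 2
print(f"eigenfrequency ~ {f_est:.2f} Hz, inferred mass ~ {mass_est * 1000:.0f} g")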
- R. Ravikanna, M. Hanheide, G. Das, and Z. Zhu, “Maximising availability of transportation robots through intelligent allocation of parking spaces,” in Taros2021, 2021. doi:10.1007/978-3-030-89177-0_34
[BibTeX] [Abstract] [Download PDF]
Autonomous agricultural robots increasingly have an important role in tasks such as transportation, crop monitoring, weed detection etc. These tasks require the robots to travel to different locations in the field. Reducing the time for this travel can greatly reduce the global task completion time and improve the availability of the robot to perform a greater number of tasks. Looking at in-field logistics robots for supporting human fruit pickers as a relevant scenario, this research deals with the design of various algorithms for the automated allocation of parking spaces for the on-field robots, so as to make them most accessible to preferred areas of the field. These parking space allocation algorithms are tested for their performance by varying initial parameters such as the size of the field, the number of farm workers in the field, the position of the farm workers etc. Various experiments are conducted for this purpose in a simulated environment. Their results are studied and discussed to better understand the contribution of intelligent parking space allocation towards improving the overall time efficiency of task completion.
@inproceedings{lincoln46635, booktitle = {TAROS2021}, month = {October}, title = {Maximising availability of transportation robots through intelligent allocation of parking spaces}, author = {Roopika Ravikanna and Marc Hanheide and Gautham Das and Zuyuan Zhu}, year = {2021}, doi = {10.1007/978-3-030-89177-0\_34}, keywords = {ARRAY(0x559d324b6fd0)}, url = {https://eprints.lincoln.ac.uk/id/eprint/46635/}, abstract = {Autonomous agricultural robots increasingly have an important role in tasks such as transportation, crop monitoring, weed detection etc. These tasks require the robots to travel to different locations in the field. Reducing time for this travel can greatly reduce the global task completion time and improve the availability of the robot to perform more number of tasks. Looking at in-field logistics robots for supporting human fruit pickers as a relevant scenario, this research deals with the design of various algorithms for automated allocation of parking spaces for the on-field robots, so as to make them most accessible to preferred areas of the field. These parking space allocation algorithms are tested for their performance by varying initial parameters like the size of the field, number of farm workers in the field, position of the farm workers etc. Various experiments are conducted for this purpose on a simulated environment. Their results are studied and discussed for better understanding about the contribution of intelligent parking space allocation towards improving the overall time efficiency of task completion.} }
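One simple instance of the allocation problem above is to park an idle robot at the node minimising the mean travel distance to the current picker positions. The field layout, picker positions and Manhattan-distance metric below are invented for illustration and are not the algorithms evaluated in the paper.

# Sketch: allocate a parking node that minimises mean distance to the pickers.
# Field layout, picker positions and the distance metric are illustrative.
import itertools

parking_nodes = [(x, y) for x, y in itertools.product(range(0, 50, 10), [0, 25, 50])]
pickers = [(12, 20), (31, 22), (44, 30)]  # current farm-worker positions

def mean_distance(node, workers):
    # Manhattan distance as a stand-in for row-constrained field travel.
    return sum(abs(node[0] - wx) + abs(node[1] - wy) for wx, wy in workers) / len(workers)

best = min(parking_nodes, key=lambda n: mean_distance(n, pickers))
print("allocated parking space:", best)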
- G. Picardi, R. Lovecchio, and M. Calisti, “Towards autonomous area inspection with a bio-inspired underwater legged robot,” in Iros, 2021. doi:10.1109/IROS51168.2021.9636316
[BibTeX] [Abstract] [Download PDF]
Recently, a new category of bio-inspired legged robots moving directly on the seabed has been proposed to complement the abilities of traditional underwater vehicles and to enhance manipulation and sampling tasks. So far, only tele-operated use of underwater legged robots has been reported, and in this paper we attempt to fill this gap by presenting the first step towards autonomous area inspection. First, we present a 3-dimensional single-legged model for underwater hopping locomotion and derive a path following control strategy. Later, we adapt this control strategy to the underwater hexapod robot SILVER2 in the robotic simulator Webots. Finally, we simulate a fully autonomous mission consisting of the inspection of an area over a pre-defined path, target recognition, transition to a safer gait and target approach. Our results show the feasibility of the approach and encourage the implementation of the presented control strategy on the robot SILVER2.
@inproceedings{lincoln52079, booktitle = {IROS}, month = {September}, title = {Towards autonomous area inspection with a bio-inspired underwater legged robot}, author = {Giacomo Picardi and Rossana Lovecchio and Marcello Calisti}, publisher = {IEEE/RSJ}, year = {2021}, doi = {10.1109/IROS51168.2021.9636316}, keywords = {ARRAY(0x559d3260bb78)}, url = {https://eprints.lincoln.ac.uk/id/eprint/52079/}, abstract = {Recently, a new category of bio-inspired legged robots moving directly on the seabed have been proposed to complement the abilities of traditional underwater vehicles and to enhance manipulation and sampling tasks. So far, only tele-operated use of underwater legged robots has been reported and in this paper we attempt to fill such gap by presenting the first step towards autonomous area inspection. First, we present a 3 dimensional single-legged model for underwater hopping locomotion and derive a path following control strategy. Later, we adapt such control strategy to an underwater hexapod robot SILVER2 on the robotic simulator Webots. Finally, we simulate a full autonomous mission consisting in the inspection of an area over a pre-defined path, target recognition, transition to a safer gait and target approach. Our results show the feasibility of the approach and encourage the implementation of the presented control strategy on the robot SILVER2.} }
- C. Jansen and E. Sklar, “Predicting artist drawing activity via multi-camera inputs for co-creative drawing,” in Towards autonomous robotic systems conference (taros), 2021. doi:10.1007/978-3-030-89177-0_23
[BibTeX] [Abstract] [Download PDF]
This paper presents the results of experimentation in computer vision-based perception of an artist drawing with analogue media (pen and paper), with the aim of contributing towards a human-robot co-creative drawing framework. Using data gathered from user studies with artists and illustrators, two types of CNN models were designed and evaluated to predict an artist's activity (e.g. are they drawing or not?) and the position of the pen on the canvas based only on a multi-camera input of the drawing surface. Results for different combinations of input sources are presented, with an overall mean accuracy of 95% (std: 7%) for predicting when the artist is present and 68% (std: 15%) for predicting when the artist is drawing, and a mean squared normalised error of 0.0034 (std: 0.0099) for predicting the pen's position on the drawing canvas. These results point toward an autonomous robotic system having an awareness of an artist at work via camera-based input, and contribute toward the development of a more fluid physical-to-digital workflow for creative content creation.
@inproceedings{lincoln46480, booktitle = {Towards Autonomous Robotic Systems Conference (TAROS)}, month = {October}, title = {Predicting Artist Drawing Activity via Multi-Camera Inputs for Co-Creative Drawing}, author = {Chipp Jansen and Elizabeth Sklar}, year = {2021}, doi = {10.1007/978-3-030-89177-0\_23}, journal = {Proceedings of the 22nd Towards Autonomous Robotic Systems (TAROS) Conference}, keywords = {ARRAY(0x559d324b4a20)}, url = {https://eprints.lincoln.ac.uk/id/eprint/46480/}, abstract = {This paper presents the results of experimentation in computer vision based for the perception of the artist drawing with analog media (pen and paper), with the aim to contribute towards a human- robot co-creative drawing framework. Using data gathered from user studies with artists and illustrators, two types of CNN models were de- signed and evaluated to predict an artist?s activity (e.g. are they drawing or not?) and the position of the pen on the canvas based only on a multi- camera input of the drawing surface. Results of different combination of input sources are presented, with an overall mean accuracy of 95\% (std: 7\%) for predicting when the artist is present and 68\% (std: 15\%) for predicting when the artist is drawing; and mean squared normalised error of 0.0034 (std: 0.0099) of predicting the pen?s position on the drawing canvas. These results point toward an autonomous robotic system having an awareness of an artist at work via camera based input and contributes toward the development of a more fluid physical to digital workflow for creative content creation.} }
- C. Fox, “Musichastie: field-based hierarchical music representation,” in International conference on computer music, 2021.
[BibTeX] [Abstract] [Download PDF]
MusicHastie is a hierarchical music representation language designed for use in human and automated composition and for human and machine learning based music study and analysis. It represents and manipulates musical structure in a semantic form based on concepts from Schenkerian analysis, western European art music and popular music notations, electronica and some non-western forms such as modes and ragas. The representation is designed to model one form of musical perception by human musicians so can be used to aid human understanding and memorization of popular music pieces. An open source MusicHastie to MIDI compiler is released as part of this publication, now including capabilities for electronica MIDI control commands to model structures such as filter sweeps in addition to keys, chords, rhythms, patterns, and melodies.
@inproceedings{lincoln45328, booktitle = {International Conference on Computer Music}, month = {July}, title = {MusicHastie: field-based hierarchical music representation}, author = {Charles Fox}, publisher = {ICMC}, year = {2021}, keywords = {ARRAY(0x559d325ed388)}, url = {https://eprints.lincoln.ac.uk/id/eprint/45328/}, abstract = {MusicHastie is a hierarchical music representation language designed for use in human and automated composition and for human and machine learning based music study and analysis. It represents and manipulates musical structure in a semantic form based on concepts from Schenkerian analysis, western European art music and popular music notations, electronica and some non-western forms such as modes and ragas. The representation is designed to model one form of musical perception by human musicians so can be used to aid human understanding and memorization of popular music pieces. An open source MusicHastie to MIDI compiler is released as part of this publication, now including capabilities for electronica MIDI control commands to model structures such as filter sweeps in addition to keys, chords, rhythms, patterns, and melodies.} }
- J. Heselden and G. Das, “Crh*: a deadlock free framework for scalable prioritised path planning in multi-robot systems,” in Towards autonomous robotic systems conference, 2021. doi:10.1007/978-3-030-89177-0_7
[BibTeX] [Abstract] [Download PDF]
Multi-robot systems are an ever-growing tool that can be applied across a wide range of industries to improve productivity and robustness, especially when tasks are distributed in space, time and functionality. Recent works have shown the benefits of multi-robot systems in fields such as warehouse automation, entertainment and agriculture. The work presented in this paper tackles the deadlock problem in multi-robot navigation, in which robots within a common work-space are caught in situations where they are unable to navigate to their targets, being blocked by one another. This problem can be mitigated by efficient multi-robot path planning. Our work focused on the development of a scalable rescheduling algorithm named Conflict Resolution Heuristic A* (CRH*) for decoupled prioritised planning. Extensive experimental evaluation of CRH* was carried out in discrete event simulations of a fleet of autonomous agricultural robots. The results from these experiments proved that the algorithm was both scalable and deadlock-free. Additionally, novel customisation options were included to test further optimisations in system performance. Continuous Assignment and Dynamic Scoring were shown to reduce the make-span of the routing, whilst Combinatorial Heuristics was shown to reduce the impact of outliers on priority orderings.
@inproceedings{lincoln46453, booktitle = {Towards Autonomous Robotic Systems Conference}, month = {October}, title = {CRH*: A Deadlock Free Framework for Scalable Prioritised Path Planning in Multi-Robot Systems}, author = {James Heselden and Gautham Das}, publisher = {Springer International Publishing}, year = {2021}, doi = {10.1007/978-3-030-89177-0\_7}, keywords = {ARRAY(0x559d32600a40)}, url = {https://eprints.lincoln.ac.uk/id/eprint/46453/}, abstract = {Multi-robot system is an ever growing tool which is able to be applied to a wide range of industries to improve productivity and robustness, especially when tasks are distributed in space, time and functionality. Recent works have shown the benefits of multi-robot systems in fields such as warehouse automation, entertainment and agriculture. The work presented in this paper tackles the deadlock problem in multi-robot navigation, in which robots within a common work-space, are caught in situations where they are unable to navigate to their targets, being blocked by one another. This problem can be mitigated by efficient multi-robot path planning. Our work focused around the development of a scalable rescheduling algorithm named Conflict Resolution Heuristic A* (CRH*) for decoupled prioritised planning. Extensive experimental evaluation of CRH* was carried out in discrete event simulations of a fleet of autonomous agricultural robots. The results from these experiments proved that the algorithm was both scalable and deadlock-free. Additionally, novel customisation options were included to test further optimisations in system performance. Continuous Assignment and Dynamic Scoring showed to reduce the make-span of the routing whilst Combinatorial Heuristics showed to reduce the impact of outliers on priority orderings.} }
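Decoupled prioritised planning, the family CRH* belongs to, can be sketched as robots planning one at a time through a space-time grid while avoiding the reservations of higher-priority robots. The sketch below uses breadth-first search instead of A*, ignores swap conflicts, lets robots vanish after reaching their goals, and models no rescheduling heuristics; it illustrates the planning scheme, not CRH* itself.

# Simplified decoupled prioritised planning on a grid: each robot plans in turn
# and must avoid the space-time cells reserved by higher-priority robots.
# BFS replaces A* and no rescheduling heuristics are modelled; illustration only.
from collections import deque

FREE = {(x, y) for x in range(5) for y in range(5)}  # open 5x5 grid

def plan(start, goal, reserved, horizon=25):
    queue = deque([(start, 0, [start])])
    seen = {(start, 0)}
    while queue:
        pos, t, path = queue.popleft()
        if pos == goal:
            return path
        for dx, dy in [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]:  # wait or move
            nxt, nt = (pos[0] + dx, pos[1] + dy), t + 1
            if nxt in FREE and nt <= horizon \
                    and (nxt, nt) not in reserved and (nxt, nt) not in seen:
                seen.add((nxt, nt))
                queue.append((nxt, nt, path + [nxt]))
    return None  # a rescheduler such as CRH* would now revise the priorities

robots = [((0, 0), (4, 4)), ((4, 0), (0, 4))]  # (start, goal) in priority order
reserved = set()
for start, goal in robots:
    path = plan(start, goal, reserved)
    if path is None:
        print(start, "->", goal, ": blocked, would trigger rescheduling")
        continue
    for t, cell in enumerate(path):            # reserve this robot's space-time cells
        reserved.add((cell, t))
    print(start, "->", goal, ":", path)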
- K. Swann, P. Hadley, M. A. Hadley, S. Pearson, A. Badiee, and C. Twitchen, “The effect of light intensity and duration on yield and quality of everbearer and june-bearer strawberry cultivars in a led lit multi-tiered vertical growing system,” in Ix international strawberry symposium, 2021, p. 359–366. doi:10.17660/ActaHortic.2021.1309.52
[BibTeX] [Abstract] [Download PDF]
This study aimed to provide insights into the efficient use of supplementary lighting for strawberry crops produced in a multi-tiered LED lit vertical growing system, ascertaining the optimal light intensity and duration, with comparative energy use and costs. Furthermore, the suitability of a premium everbearer strawberry cultivar with a high yield potential was compared with a standard winter glasshouse June-bearer cultivar currently used for out-of-season production in the UK. Three lighting durations (11, 16 and 22 h) provided by LEDs were combined with two light intensities (344 and 227 µmol) to give six light treatments on each tier of a three-tiered system to grow the two cultivars. The everbearer showed a higher yield with a higher correlation with increased lighting and a greater proportion of reproductive growth than the June-bearer. Light intensity and duration increased yield, with duration also increasing sugar content (°Brix). However, even with yields of over 100 t ha−1 recorded in this study, yields are likely to be insufficient to cover the cost of electricity.
@inproceedings{lincoln45160, booktitle = {IX International Strawberry Symposium}, month = {April}, title = {The effect of light intensity and duration on yield and quality of everbearer and June-bearer strawberry cultivars in a LED lit multi-tiered vertical growing system}, author = {K Swann and P Hadley and M. A. Hadley and Simon Pearson and Amir Badiee and C. Twitchen}, year = {2021}, pages = {359--366}, doi = {10.17660/ActaHortic.2021.1309.52}, keywords = {ARRAY(0x559d32468c50)}, url = {https://eprints.lincoln.ac.uk/id/eprint/45160/}, abstract = {This study aimed to provide insights into the efficient use of supplementary lighting for strawberry crops produced in a multi-tiered LED lit vertical growing system, ascertaining the optimal light intensity and duration, with comparative energy use and costs. Furthermore, the suitability of a premium everbearer strawberry cultivar with a high yield potential was compared with a standard winter glasshouse June-bearer cultivar currently used for out-of-season production in the UK. Three lighting durations (11, 16 and 22 h) provided by LEDs were combined with two light intensities (344 and 227 ?mol) to give six light treatments on each tier of a three-tiered system to grow the two cultivars. The everbearer showed a higher yield with a higher correlation with increased lighting and a greater proportion of reproductive growth than the Junebearer. Light intensity and duration increased yield with duration also increasing sugar content (?Brix). However, even with yields of over 100 t ha?1 recorded in this study, yields are likely to be insufficient to cover the cost of electricity.} }
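The treatments above combine a light intensity with a daily duration, which together determine the daily light integral (DLI): DLI = PPFD × 3600 × hours / 10^6, in mol m−2 d−1. Assuming the µmol figures are PPFD values in µmol m−2 s−1 (a plausible reading of the garbled unit, not stated explicitly above), the six treatments work out as follows:

# Daily light integral (DLI) for the six treatments:
# DLI [mol m^-2 d^-1] = PPFD [umol m^-2 s^-1] * 3600 * hours / 1e6.
# The intensities are assumed to be PPFD values.
for ppfd in (344, 227):
    for hours in (11, 16, 22):
        dli = ppfd * 3600 * hours / 1e6
        print(f"{ppfd} umol m^-2 s^-1 x {hours} h -> DLI = {dli:.1f} mol m^-2 d^-1")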
- Z. Maamar, M. Al-Khafajiy, and M. Dohan, “An iot application business-model on top of cloud and fog nodes,” in Advanced information networking and applications, 2021, p. 174–186. doi:10.1007/978-3-030-75075-6_14
[BibTeX] [Abstract] [Download PDF]
This paper discusses the design of a business model dedicated for IoT applications that would be deployed on top of cloud and fog resources. This business model features 2 constructs, flow (specialized into data and collaboration) and placement (specialized into processing and storage). On the one hand, the flow construct is about who sends what and to whom, who collaborates with whom, and what restrictions exist on what to send, to whom to send, and with whom to collaborate. On the other hand, the placement construct is about what and how to fragment, where to store, and what restrictions exist on what and how to fragment, and where to store. The paper also discusses the development of a system built upon a deep learning model that recommends how the different flows and placements should be formed. These recommendations consider the technical capabilities of cloud and fog resources as well as the networking topology connecting these resources to things.
@inproceedings{lincoln47574, volume = {226}, month = {April}, author = {Zakaria Maamar and Mohammed Al-Khafajiy and Murtada Dohan}, booktitle = {Advanced Information Networking and Applications}, title = {An IoT Application Business-Model on Top of Cloud and Fog Nodes}, publisher = {Springer}, year = {2021}, journal = {AINA 2021: Advanced Information Networking and Applications}, doi = {10.1007/978-3-030-75075-6\_14}, pages = {174--186}, keywords = {ARRAY(0x559d32468c98)}, url = {https://eprints.lincoln.ac.uk/id/eprint/47574/}, abstract = {This paper discusses the design of a business model dedicated for IoT applications that would be deployed on top of cloud and fog resources. This business model features 2 constructs, flow (specialized into data and collaboration) and placement (specialized into processing and storage). On the one hand, the flow construct is about who sends what and to whom, who collaborates with whom, and what restrictions exist on what to send, to whom to send, and with whom to collaborate. On the other hand, the placement construct is about what and how to fragment, where to store, and what restrictions exist on what and how to fragment, and where to store. The paper also discusses the development of a system built-upon a deep learning model that recommends how the different flows and placements should be formed. These recommendations consider the technical capabilities of cloud and fog resources as well as the networking topology connecting these resources to things.} }
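The two constructs of the business model above, flow (who sends what to whom, under which restrictions) and placement (what is fragmented and where it is stored), map naturally onto simple typed records. The dataclasses below are a minimal sketch; their field names and the example restrictions are assumptions, not the paper's schema.

# Minimal data model for the paper's two constructs: flow and placement.
# Field names and the example restrictions are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Flow:                      # who sends what, and to whom
    sender: str
    receiver: str
    payload: str
    restrictions: list = field(default_factory=list)

@dataclass
class Placement:                 # what is fragmented, and where it is stored
    fragment: str
    node: str                    # e.g. a cloud or fog node
    restrictions: list = field(default_factory=list)

flows = [Flow("thing:sensor-7", "fog:gateway-1", "temperature-stream",
              ["no raw data may leave the fog layer"])]
placements = [Placement("temperature-archive", "cloud:eu-west",
                        ["aggregates only"])]
print(flows[0], placements[0], sep="\n")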
- E. Donato, G. Picardi, and M. Calisti, “Statics optimization of a hexapedal robot modelled as a stewart platform,” in Annual conference towards autonomous robotic systems, 2021. doi:10.1007/978-3-030-89177-0_39
[BibTeX] [Abstract] [Download PDF]
SILVER2 is an underwater legged robot designed with the aim of collecting litter on the seabed and sampling the sediment to assess the presence of micro-plastics. Besides the original application, SILVER2 can also be a valuable tool for all underwater operations which require interacting with objects directly on the seabed. The advancement presented in this paper is to model SILVER2 as a Gough-Stewart platform, and therefore to enhance its ability to interact with the environment. Since the robot is equipped with six segmented legs with three actuated joints, it is able to make arbitrary movements in the six degrees of freedom. The robot's performance has been analysed from both kinematics and statics points of view. The goal of this work is providing a strategy to harness the redundancy of SILVER2 by finding the optimal posture to maximize the forces/torques that it can resist along/around constrained directions. Simulation results have been reported to show the advantages of the proposed method.
@inproceedings{lincoln52082, booktitle = {Annual Conference Towards Autonomous Robotic Systems}, month = {October}, title = {Statics Optimization of a Hexapedal Robot Modelled as a Stewart Platform}, author = {Enrico Donato and Giacomo Picardi and Marcello Calisti}, publisher = {Springer}, year = {2021}, doi = {10.1007/978-3-030-89177-0\_39}, keywords = {ARRAY(0x559d324ad8d8)}, url = {https://eprints.lincoln.ac.uk/id/eprint/52082/}, abstract = {SILVER2 is an underwater legged robot designed with the aim of collecting litter on the seabed and sample the sediment to assess the presence of micro-plastics. Besides the original application, SILVER2 can also be a valuable tool for all underwater operations which require to interact with objects directly on the seabed. The advancement presented in this paper is to model SILVER2 as a Gough-Stewart platform, and therefore to enhance its ability to interact with the environment. Since the robot is equipped with six segmented legs with three actuated joints, it is able to make arbitrary movements in the six degrees of freedom. The robot?s performance has been analysed from both kinematics and statics points of view. The goal of this work is providing a strategy to harness the redundancy of SILVER2 by finding the optimal posture to maximize forces/torques that it can resist along/around constrained directions. Simulation results have been reported to show the advantages of the proposed method.} }
2020
- H. Wang, Q. Fu, H. Wang, P. Baxter, J. Peng, and S. Yue, “A bioinspired angular velocity decoding neural network model for visually guided flights,” Neural networks, 2020. doi:10.1016/j.neunet.2020.12.008
[BibTeX] [Abstract] [Download PDF]
Efficient and robust motion perception systems are important pre-requisites for achieving visually guided flights in future micro air vehicles. As a source of inspiration, the visual neural networks of flying insects such as honeybee and Drosophila provide ideal examples on which to base artificial motion perception models. In this paper, we have used this approach to develop a novel method that solves the fundamental problem of estimating angular velocity for visually guided flights. Compared with previous models, our elementary motion detector (EMD) based model uses a separate texture estimation pathway to effectively decode angular velocity, and demonstrates considerable independence from the spatial frequency and contrast of the gratings. Using the Unity development platform the model is further tested for tunnel centering and terrain following paradigms in order to reproduce the visually guided flight behaviors of honeybees. In a series of controlled trials, the virtual bee utilizes the proposed angular velocity control schemes to accurately navigate through a patterned tunnel, maintaining a suitable distance from the undulating textured terrain. The results are consistent with both neuron spike recordings and behavioral path recordings of real honeybees, thereby demonstrating the model's potential for implementation in micro air vehicles which have only visual sensors.
@article{lincoln43704, title = {A bioinspired angular velocity decoding neural network model for visually guided flights}, author = {Huatian Wang and Qinbing Fu and Hongxin Wang and Paul Baxter and Jigen Peng and Shigang Yue}, publisher = {Elsevier}, year = {2020}, doi = {10.1016/j.neunet.2020.12.008}, journal = {Neural Networks}, keywords = {ARRAY(0x559d32557278)}, url = {https://eprints.lincoln.ac.uk/id/eprint/43704/}, abstract = {Efficient and robust motion perception systems are important pre-requisites for achieving visually guided flights in future micro air vehicles. As a source of inspiration, the visual neural networks of flying insects such as honeybee and Drosophila provide ideal examples on which to base artificial motion perception models. In this paper, we have used this approach to develop a novel method that solves the fundamental problem of estimating angular velocity for visually guided flights. Compared with previous models, our elementary motion detector (EMD) based model uses a separate texture estimation pathway to effectively decode angular velocity, and demonstrates considerable independence from the spatial frequency and contrast of the gratings. Using the Unity development platform the model is further tested for tunnel centering and terrain following paradigms in order to reproduce the visually guided flight behaviors of honeybees. In a series of controlled trials, the virtual bee utilizes the proposed angular velocity control schemes to accurately navigate through a patterned tunnel, maintaining a suitable distance from the undulating textured terrain. The results are consistent with both neuron spike recordings and behavioral path recordings of real honeybees, thereby demonstrating the model?s potential for implementation in micro air vehicles which have only visual sensors.} }
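At the core of EMD-based models like the one above is the Hassenstein-Reichardt correlator: each photoreceptor's delayed (low-pass filtered) signal is correlated with its neighbour's instantaneous signal, and the left-right asymmetry of those products encodes motion direction. A minimal 1-D sketch follows; the stimulus and delay constant are arbitrary, and the paper's separate texture-estimation pathway is not modelled here.

# Minimal 1-D Hassenstein-Reichardt EMD: correlate each photoreceptor's
# low-pass (delayed) signal with its neighbour's instantaneous signal.
# Stimulus and delay constant are arbitrary; the texture pathway is omitted.
import numpy as np

n_pixels, n_steps, tau = 32, 100, 0.7

def grating(t):
    # Sinusoidal grating drifting rightward at 0.5 pixels per time step.
    return np.sin(2 * np.pi * (np.arange(n_pixels) - 0.5 * t) / 8.0)

delayed = np.zeros(n_pixels)
responses = []
for t in range(n_steps):
    frame = grating(t)
    delayed = tau * delayed + (1 - tau) * frame       # first-order low-pass = delay
    # rightward minus leftward correlation, summed across the array
    responses.append(np.sum(delayed[:-1] * frame[1:] - delayed[1:] * frame[:-1]))

print(f"mean EMD output: {np.mean(responses):+.3f}  (sign encodes direction)")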
- S. Cosar and N. Bellotto, “Human re-identification with a robot thermal camera using entropy-based sampling,” Journal of intelligent and robotic systems, vol. 98, iss. 1, p. 85–102, 2020. doi:10.1007/s10846-019-01026-w
[BibTeX] [Abstract] [Download PDF]
Human re-identification is an important feature of domestic service robots, in particular for elderly monitoring and assistance, because it allows them to perform personalized tasks and human-robot interactions. However, vision-based re-identification systems are subject to limitations due to human pose and poor lighting conditions. This paper presents a new re-identification method for service robots using thermal images. In robotic applications, as the number and size of thermal datasets is limited, it is hard to use approaches that require huge amounts of training samples. We propose a re-identification system that can work using only a small amount of data. During training, we perform entropy-based sampling to obtain a thermal dictionary for each person. Then, a symbolic representation is produced by converting each video into sequences of dictionary elements. Finally, we train a classifier using this symbolic representation and geometric distribution within the new representation domain. The experiments are performed on a new thermal dataset for human re-identification, which includes various situations of human motion, poses and occlusion, and which is made publicly available for research purposes. The proposed approach has been tested on this dataset and its improvements over standard approaches have been demonstrated.
@article{lincoln35778, volume = {98}, number = {1}, month = {April}, author = {Serhan Cosar and Nicola Bellotto}, title = {Human Re-Identification with a Robot Thermal Camera using Entropy-based Sampling}, publisher = {Springer}, year = {2020}, journal = {Journal of Intelligent and Robotic Systems}, doi = {10.1007/s10846-019-01026-w}, pages = {85--102}, keywords = {ARRAY(0x559d3244d4d0)}, url = {https://eprints.lincoln.ac.uk/id/eprint/35778/}, abstract = {Human re-identification is an important feature of domestic service robots, in particular for elderly monitoring and assistance, because it allows them to perform personalized tasks and human-robot interactions. However vision-based re-identification systems are subject to limitations due to human pose and poor lighting conditions. This paper presents a new re-identification method for service robots using thermal images. In robotic applications, as the number and size of thermal datasets is limited, it is hard to use approaches that require huge amount of training samples. We propose a re-identification system that can work using only a small amount of data. During training, we perform entropy-based sampling to obtain a thermal dictionary for each person. Then, a symbolic representation is produced by converting each video into sequences of dictionary elements. Finally, we train a classifier using this symbolic representation and geometric distribution within the new representation domain. The experiments are performed on a new thermal dataset for human re-identification, which includes various situations of human motion, poses and occlusion, and which is made publicly available for research purposes. The proposed approach has been tested on this dataset and its improvements over standard approaches have been demonstrated.} }
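The entropy-based sampling step above, which keeps only the most informative frames for each person's thermal dictionary, can be sketched directly: score each frame by the Shannon entropy of its intensity histogram and retain the top-k. The synthetic frame source and the value of k below are placeholders, not the paper's pipeline.

# Sketch of entropy-based frame selection: keep the k frames whose intensity
# histograms have the highest Shannon entropy. Frames and k are placeholders.
import numpy as np

def frame_entropy(frame, bins=32):
    hist, _ = np.histogram(frame, bins=bins, range=(0.0, 1.0))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
video = [rng.random((64, 32)) ** (1 + i % 5) for i in range(40)]  # toy thermal frames

k = 8
dictionary = sorted(video, key=frame_entropy, reverse=True)[:k]
print("selected", len(dictionary), "dictionary frames, top entropies:",
      [round(frame_entropy(f), 2) for f in dictionary[:3]])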
- H. Cuayahuitl, “A data-efficient deep learning approach for deployable multimodal social robots,” Neurocomputing, vol. 396, p. 587–598, 2020. doi:10.1016/j.neucom.2018.09.104
[BibTeX] [Abstract] [Download PDF]
The deep supervised and reinforcement learning paradigms (among others) have the potential to endow interactive multimodal social robots with the ability of acquiring skills autonomously. But it is still not very clear how they can be best deployed in real world applications. As a step in this direction, we propose a deep learning-based approach for efficiently training a humanoid robot to play multimodal games, and use the game of 'Noughts & Crosses' with two variants as a case study. Its minimum requirements for learning to perceive and interact are based on a few hundred example images, a few example multimodal dialogues and physical demonstrations of robot manipulation, and automatic simulations. In addition, we propose novel algorithms for robust visual game tracking and for competitive policy learning with high winning rates, which substantially outperform DQN-based baselines. While an automatic evaluation shows evidence that the proposed approach can be easily extended to new games with competitive robot behaviours, a human evaluation with 130 humans playing with the Pepper robot confirms that highly accurate visual perception is required for successful game play.
@article{lincoln42805, volume = {396}, month = {July}, author = {Heriberto Cuayahuitl}, note = {The final published version of this article can be accessed online at https://www.journals.elsevier.com/neurocomputing/}, title = {A Data-Efficient Deep Learning Approach for Deployable Multimodal Social Robots}, publisher = {Elsevier}, year = {2020}, journal = {Neurocomputing}, doi = {10.1016/j.neucom.2018.09.104}, pages = {587--598}, keywords = {ARRAY(0x559d32581000)}, url = {https://eprints.lincoln.ac.uk/id/eprint/42805/}, abstract = {The deep supervised and reinforcement learning paradigms (among others) have the potential to endow interactive multimodal social robots with the ability of acquiring skills autonomously. But it is still not very clear yet how they can be best deployed in real world applications. As a step in this direction, we propose a deep learning-based approach for efficiently training a humanoid robot to play multimodal games---and use the game of `Noughts {$\backslash$}\& Crosses' with two variants as a case study. Its minimum requirements for learning to perceive and interact are based on a few hundred example images, a few example multimodal dialogues and physical demonstrations of robot manipulation, and automatic simulations. In addition, we propose novel algorithms for robust visual game tracking and for competitive policy learning with high winning rates, which substantially outperform DQN-based baselines. While an automatic evaluation shows evidence that the proposed approach can be easily extended to new games with competitive robot behaviours, a human evaluation with 130 humans playing with the \{{$\backslash$}it Pepper\} robot confirms that highly accurate visual perception is required for successful game play.} }
- Q. Fu and S. Yue, “Modelling drosophila motion vision pathways for decoding the direction of translating objects against cluttered moving backgrounds,” Biological cybernetics, 2020. doi:10.1007/s00422-020-00841-x
[BibTeX] [Abstract] [Download PDF]
Decoding the direction of translating objects in front of cluttered moving backgrounds, accurately and efficiently, is still a challenging problem. In nature, lightweight and low-powered flying insects apply motion vision to detect a moving target in highly variable environments during flight, which are excellent paradigms to learn motion perception strategies. This paper investigates the fruit fly Drosophila motion vision pathways and presents computational modelling based on cutting-edge physiological researches. The proposed visual system model features bio-plausible ON and OFF pathways, wide-field horizontal-sensitive (HS) and vertical-sensitive (VS) systems. The main contributions of this research are on two aspects: (1) the proposed model articulates the forming of both direction-selective and direction-opponent responses, revealed as principal features of motion perception neural circuits, in a feed-forward manner; (2) it also shows robust direction selectivity to translating objects in front of cluttered moving backgrounds, via the modelling of spatiotemporal dynamics including combination of motion pre-filtering mechanisms and ensembles of local correlators inside both the ON and OFF pathways, which works effectively to suppress irrelevant background motion or distractors, and to improve the dynamic response. Accordingly, the direction of translating objects is decoded as global responses of both the HS and VS systems, with positive or negative output indicating preferred-direction or null-direction translation. The experiments have verified the effectiveness of the proposed neural system model, and demonstrated its responsive preference to faster-moving, higher-contrast and larger-size targets embedded in cluttered moving backgrounds.
@article{lincoln42133, month = {July}, title = {Modelling Drosophila motion vision pathways for decoding the direction of translating objects against cluttered moving backgrounds}, author = {Qinbing Fu and Shigang Yue}, publisher = {Springer}, year = {2020}, doi = {10.1007/s00422-020-00841-x}, journal = {Biological Cybernetics}, url = {https://eprints.lincoln.ac.uk/id/eprint/42133/}, abstract = {Decoding the direction of translating objects in front of cluttered moving backgrounds, accurately and efficiently, is still a challenging problem. In nature, lightweight and low-powered flying insects apply motion vision to detect a moving target in highly variable environments during flight, which are excellent paradigms to learn motion perception strategies. This paper investigates the fruit fly Drosophila motion vision pathways and presents computational modelling based on cutting-edge physiological researches. The proposed visual system model features bio-plausible ON and OFF pathways, wide-field horizontal-sensitive (HS) and vertical-sensitive (VS) systems. The main contributions of this research are on two aspects: (1) the proposed model articulates the forming of both direction-selective and direction-opponent responses, revealed as principal features of motion perception neural circuits, in a feed-forward manner; (2) it also shows robust direction selectivity to translating objects in front of cluttered moving backgrounds, via the modelling of spatiotemporal dynamics including combination of motion pre-filtering mechanisms and ensembles of local correlators inside both the ON and OFF pathways, which works effectively to suppress irrelevant background motion or distractors, and to improve the dynamic response. Accordingly, the direction of translating objects is decoded as global responses of both the HS and VS systems with positive or negative output indicating preferred-direction or null-direction translation. The experiments have verified the effectiveness of the proposed neural system model, and demonstrated its responsive preference to faster-moving, higher-contrast and larger-size targets embedded in cluttered moving backgrounds.} }
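The delay-and-correlate motif underlying such ON/OFF motion pathways can be sketched compactly. Below is a generic Hassenstein-Reichardt-style correlator with half-wave-rectified ON/OFF channels in Python/NumPy; it illustrates the motif only, not the model proposed in the paper, and the time constant and stimulus are arbitrary.

```python
import numpy as np

def hr_correlator(luminance, tau=0.7, dt=1.0):
    """Minimal Hassenstein-Reichardt-style motion detector over a
    (frames x pixels) luminance sequence, split into ON/OFF pathways.
    An illustrative sketch, not the model from the paper."""
    d = np.diff(luminance, axis=0)                     # photoreceptor-like temporal derivative
    on, off = np.maximum(d, 0.0), np.maximum(-d, 0.0)  # half-wave rectification

    def correlate(chan):
        # First-order low-pass filter acts as the delay line
        delayed = np.zeros_like(chan)
        a = dt / (tau + dt)
        for t in range(1, chan.shape[0]):
            delayed[t] = delayed[t - 1] + a * (chan[t] - delayed[t - 1])
        # Delay-and-correlate between neighbouring pixels, opponent subtraction:
        # positive output for rightward motion, negative for leftward.
        return delayed[:, :-1] * chan[:, 1:] - chan[:, :-1] * delayed[:, 1:]

    return correlate(on) + correlate(off)  # sum the ON and OFF channels

# A bright bar drifting rightward yields a positive mean response.
frames = np.array([np.roll([0, 0, 1, 1, 0, 0, 0, 0], s) for s in range(6)], float)
print(hr_correlator(frames).mean() > 0)  # True
```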
- D. Liu, N. Bellotto, and S. Yue, “Deep spiking neural network for video-based disguise face recognition based on dynamic facial movements,” IEEE transactions on neural networks and learning systems, vol. 31, iss. 6, p. 1843–1855, 2020. doi:10.1109/TNNLS.2019.2927274
[BibTeX] [Abstract] [Download PDF]
With the increasing popularity of social media and smart devices, the face as one of the key biometrics becomes vital for person identification. Amongst those face recognition algorithms, video-based face recognition methods could make use of both temporal and spatial information just as humans do to achieve better classification performance. However, they cannot identify individuals when certain key facial areas like eyes or nose are disguised by heavy makeup or rubber/digital masks. To this end, we propose a novel deep spiking neural network architecture in this study. It takes dynamic facial movements, the facial muscle changes induced by speaking or other activities, as the sole input. An event-driven continuous spike-timing dependent plasticity learning rule with adaptive thresholding is applied to train the synaptic weights. The experiments on our proposed video-based disguise face database (MakeFace DB) demonstrate that the proposed learning method performs very well – it achieves from 95% to 100% correct classification rates under various realistic experimental scenarios.
@article{lincoln41718, volume = {31}, number = {6}, month = {June}, author = {Daqi Liu and Nicola Bellotto and Shigang Yue}, title = {Deep Spiking Neural Network for Video-based Disguise Face Recognition Based on Dynamic Facial Movements}, publisher = {IEEE}, year = {2020}, journal = {IEEE Transactions on Neural Networks and Learning Systems}, doi = {10.1109/TNNLS.2019.2927274}, pages = {1843--1855}, url = {https://eprints.lincoln.ac.uk/id/eprint/41718/}, abstract = {With the increasing popularity of social media and smart devices, the face as one of the key biometrics becomes vital for person identification. Amongst those face recognition algorithms, video-based face recognition methods could make use of both temporal and spatial information just as humans do to achieve better classification performance. However, they cannot identify individuals when certain key facial areas like eyes or nose are disguised by heavy makeup or rubber/digital masks. To this end, we propose a novel deep spiking neural network architecture in this study. It takes dynamic facial movements, the facial muscle changes induced by speaking or other activities, as the sole input. An event-driven continuous spike-timing dependent plasticity learning rule with adaptive thresholding is applied to train the synaptic weights. The experiments on our proposed video-based disguise face database (MakeFace DB) demonstrate that the proposed learning method performs very well - it achieves from 95\% to 100\% correct classification rates under various realistic experimental scenarios.} }
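The spike-timing dependent plasticity (STDP) rule mentioned in this abstract follows the usual timing logic: a synapse strengthens when the presynaptic spike precedes the postsynaptic one and weakens otherwise. Below is a minimal pair-based sketch of that logic; it is not the paper's event-driven continuous rule with adaptive thresholding, and all constants are arbitrary.

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP: potentiate when the presynaptic spike precedes the
    postsynaptic one, depress otherwise. Illustrative constants only."""
    dt = t_post - t_pre
    if dt >= 0:
        w += a_plus * np.exp(-dt / tau)   # pre-before-post: potentiation
    else:
        w -= a_minus * np.exp(dt / tau)   # post-before-pre: depression
    return float(np.clip(w, 0.0, 1.0))   # keep the weight in a bounded range

print(stdp_update(0.5, t_pre=10.0, t_post=15.0))  # > 0.5 (potentiated)
print(stdp_update(0.5, t_pre=15.0, t_post=10.0))  # < 0.5 (depressed)
```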
- J. Liu, S. Iacoponi, C. Laschi, L. Wen, and M. Calisti, “Underwater mobile manipulation: a soft arm on a benthic legged robot,” IEEE robotics & automation magazine, vol. 27, iss. 4, p. 12–26, 2020. doi:10.1109/MRA.2020.3024001
[BibTeX] [Abstract] [Download PDF]
Robotic systems that can explore the sea floor, collect marine samples, gather shallow water refuse, and perform other underwater tasks are interesting and important in several fields, from biology and ecology to off-shore industry. In this article, we present a robotic platform that is, to our knowledge, the first to combine benthic legged locomotion and soft continuum manipulation to perform real-world underwater mission-like experiments. We experimentally exploit inverse kinematics for spatial manipulation in a laboratory environment and then examine the robot’s workspace extensibility, force, energy consumption, and grasping ability in different undersea scenarios.
@article{lincoln46137, volume = {27}, number = {4}, month = {December}, author = {Jiaqi Liu and Saverio Iacoponi and Cecilia Laschi and Li Wen and Marcello Calisti}, title = {Underwater Mobile Manipulation: A Soft Arm on a Benthic Legged Robot}, year = {2020}, journal = {IEEE Robotics \& Automation Magazine}, doi = {10.1109/MRA.2020.3024001}, pages = {12--26}, url = {https://eprints.lincoln.ac.uk/id/eprint/46137/}, abstract = {Robotic systems that can explore the sea floor, collect marine samples, gather shallow water refuse, and perform other underwater tasks are interesting and important in several fields, from biology and ecology to off-shore industry. In this article, we present a robotic platform that is, to our knowledge, the first to combine benthic legged locomotion and soft continuum manipulation to perform real-world underwater mission-like experiments. We experimentally exploit inverse kinematics for spatial manipulation in a laboratory environment and then examine the robot's workspace extensibility, force, energy consumption, and grasping ability in different undersea scenarios.} }
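For intuition on the inverse-kinematics step mentioned in the abstract, here is a textbook closed-form solution for a 2-link planar arm. It is a sketch only: the soft continuum arm on the actual platform requires a far richer kinematic model, and the link lengths below are arbitrary.

```python
import numpy as np

def two_link_ik(x, y, l1=0.3, l2=0.25):
    """Closed-form inverse kinematics for a 2-link planar arm (elbow-down).
    Textbook sketch for intuition, not the paper's continuum-arm model."""
    d2 = x * x + y * y
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)  # law of cosines
    if abs(c2) > 1:
        raise ValueError("target outside workspace")
    q2 = np.arccos(c2)
    q1 = np.arctan2(y, x) - np.arctan2(l2 * np.sin(q2), l1 + l2 * np.cos(q2))
    return q1, q2

q1, q2 = two_link_ik(0.4, 0.2)
# Forward-kinematics check: the end-effector lands on the requested target.
print(0.3 * np.cos(q1) + 0.25 * np.cos(q1 + q2),
      0.3 * np.sin(q1) + 0.25 * np.sin(q1 + q2))
```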
- Q. Fu, H. Wang, J. Peng, and S. Yue, “Improved collision perception neuronal system model with adaptive inhibition mechanism and evolutionary learning,” IEEE access, vol. 8, p. 108896–108912, 2020. doi:10.1109/ACCESS.2020.3001396
[BibTeX] [Abstract] [Download PDF]
Accurate and timely perception of collision in highly variable environments is still a challenging problem for artificial visual systems. As a source of inspiration, the lobula giant movement detectors (LGMDs) in locust's visual pathways have been studied intensively, and modelled as quick collision detectors against challenges from various scenarios including vehicles and robots. However, the state-of-the-art LGMD models have not achieved acceptable robustness to deal with more challenging scenarios like the various vehicle driving scenes, due to the lack of adaptive signal processing mechanisms. To address this problem, we propose an improved neuronal system model, called LGMD+, that is featured by novel modelling of spatiotemporal inhibition dynamics with biological plausibilities including 1) lateral inhibitions with global biases defined by a variant of Gaussian distribution, spatially, and 2) an adaptive feedforward inhibition mediation pathway, temporally. Accordingly, the LGMD+ performs more effectively to detect merely approaching objects threatening head-on collision risks by appropriately suppressing motion distractors caused by vibrations, near-miss or approaching stimuli with deviations from the centre view. Through evolutionary learning with a systematic dataset of various crash and non-collision driving scenarios, the LGMD+ shows improved robustness outperforming the previous related methods. After evolution, its computational simplicity, flexibility and robustness have also been well demonstrated by real-time experiments of autonomous micro-mobile robots.
@article{lincoln42131, volume = {8}, month = {June}, author = {Qinbing Fu and Huatian Wang and Jigen Peng and Shigang Yue}, title = {Improved Collision Perception Neuronal System Model with Adaptive Inhibition Mechanism and Evolutionary Learning}, publisher = {IEEE}, year = {2020}, journal = {IEEE Access}, doi = {10.1109/ACCESS.2020.3001396}, pages = {108896--108912}, url = {https://eprints.lincoln.ac.uk/id/eprint/42131/}, abstract = {Accurate and timely perception of collision in highly variable environments is still a challenging problem for artificial visual systems. As a source of inspiration, the lobula giant movement detectors (LGMDs) in locust's visual pathways have been studied intensively, and modelled as quick collision detectors against challenges from various scenarios including vehicles and robots. However, the state-of-the-art LGMD models have not achieved acceptable robustness to deal with more challenging scenarios like the various vehicle driving scenes, due to the lack of adaptive signal processing mechanisms. To address this problem, we propose an improved neuronal system model, called LGMD+, that is featured by novel modelling of spatiotemporal inhibition dynamics with biological plausibilities including 1) lateral inhibitions with global biases defined by a variant of Gaussian distribution, spatially, and 2) an adaptive feedforward inhibition mediation pathway, temporally. Accordingly, the LGMD+ performs more effectively to detect merely approaching objects threatening head-on collision risks by appropriately suppressing motion distractors caused by vibrations, near-miss or approaching stimuli with deviations from the centre view. Through evolutionary learning with a systematic dataset of various crash and non-collision driving scenarios, the LGMD+ shows improved robustness outperforming the previous related methods. After evolution, its computational simplicity, flexibility and robustness have also been well demonstrated by real-time experiments of autonomous micro-mobile robots.} }
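The lateral-inhibition idea at the core of LGMD-type models can be sketched in a few lines: each cell's excitation is suppressed by a Gaussian-weighted pool of its neighbours' activity. This is a generic illustration, not LGMD+'s specific spatiotemporal formulation; the kernel width and inhibition weight below are arbitrary.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def lateral_inhibition(excitation, sigma=2.0, w_inhib=0.8):
    """Generic lateral inhibition: subtract a Gaussian-weighted average of
    neighbouring activity from each cell, then rectify. Illustrative only."""
    inhibition = gaussian_filter(excitation, sigma=sigma)
    return np.maximum(excitation - w_inhib * inhibition, 0.0)

# Sharp edges survive better than smooth regions, which are strongly attenuated.
frame = np.zeros((32, 32)); frame[12:20, 12:20] = 1.0
print(lateral_inhibition(frame).sum() < frame.sum())  # True
```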
- Y. M. Lee, R. Madigan, O. Giles, L. Garach-Morcillo, G. Markkula, C. Fox, F. Camara, M. Rothmueller, S. A. Vendelbo-Larsen, P. H. Rasmussen, A. Dietrich, D. Nathanael, V. Portouli, A. Schieben, and N. Merat, “Road users rarely use explicit communication when interacting in today's traffic: implications for automated vehicles,” Cognition, technology & work, 2020. doi:10.1007/s10111-020-00635-y
[BibTeX] [Abstract] [Download PDF]
To be successful, automated vehicles (AVs) need to be able to manoeuvre in mixed traffic in a way that will be accepted by road users, and maximises traffic safety and efficiency. A likely prerequisite for this success is for AVs to be able to communicate effectively with other road users in a complex traffic environment. The current study, conducted as part of the European project interACT, investigates the communication strategies used by drivers and pedestrians while crossing the road at six observed locations, across three European countries. In total, 701 road user interactions were observed and annotated, using an observation protocol developed for this purpose. The observation protocols identified 20 event categories, observed from the approaching vehicles/drivers and pedestrians. These included information about movement, looking behaviour, hand gestures, and signals used, as well as some demographic data. These observations illustrated that explicit communication techniques, such as honking, flashing headlights by drivers, or hand gestures by drivers and pedestrians, rarely occurred. This observation was consistent across sites. In addition, a follow-on questionnaire, administered to a sub-set of the observed pedestrians after crossing the road, found that when contemplating a crossing, pedestrians were more likely to use vehicle-based behaviour, rather than communication cues from the driver. Overall, the findings suggest that vehicle-based movement information such as yielding cues are more likely to be used by pedestrians while crossing the road, compared to explicit communication cues from drivers, although some cultural differences were observed. The implications of these findings are discussed with respect to design of suitable external interfaces and communication of intent by future automated vehicles.
@article{lincoln41217, month = {June}, title = {Road users rarely use explicit communication when interacting in today's traffic: implications for automated vehicles}, author = {Yee Mun Lee and Ruth Madigan and Oscar Giles and Laura Garach-Morcillo and Gustav Markkula and Charles Fox and Fanta Camara and Markus Rothmueller and Signe Alexandra Vendelbo-Larsen and Pernille Holm Rasmussen and Andre Dietrich and Dimitris Nathanael and Villy Portouli and Anna Schieben and Natasha Merat}, publisher = {Springer}, year = {2020}, doi = {10.1007/s10111-020-00635-y}, journal = {Cognition, Technology \& Work}, url = {https://eprints.lincoln.ac.uk/id/eprint/41217/}, abstract = {To be successful, automated vehicles (AVs) need to be able to manoeuvre in mixed traffic in a way that will be accepted by road users, and maximises traffic safety and efficiency. A likely prerequisite for this success is for AVs to be able to communicate effectively with other road users in a complex traffic environment. The current study, conducted as part of the European project interACT, investigates the communication strategies used by drivers and pedestrians while crossing the road at six observed locations, across three European countries. In total, 701 road user interactions were observed and annotated, using an observation protocol developed for this purpose. The observation protocols identified 20 event categories, observed from the approaching vehicles/drivers and pedestrians. These included information about movement, looking behaviour, hand gestures, and signals used, as well as some demographic data. These observations illustrated that explicit communication techniques, such as honking, flashing headlights by drivers, or hand gestures by drivers and pedestrians, rarely occurred. This observation was consistent across sites. In addition, a follow-on questionnaire, administered to a sub-set of the observed pedestrians after crossing the road, found that when contemplating a crossing, pedestrians were more likely to use vehicle-based behaviour, rather than communication cues from the driver. Overall, the findings suggest that vehicle-based movement information such as yielding cues are more likely to be used by pedestrians while crossing the road, compared to explicit communication cues from drivers, although some cultural differences were observed. The implications of these findings are discussed with respect to design of suitable external interfaces and communication of intent by future automated vehicles.} }
- Z. Yan, S. Schreiberhuber, G. Halmetschlager, T. Duckett, M. Vincze, and N. Bellotto, “Robot perception of static and dynamic objects with an autonomous floor scrubber,” Intelligent service robotics, 2020. doi:10.1007/s11370-020-00324-9
[BibTeX] [Abstract] [Download PDF]
This paper presents the perception system of a new professional cleaning robot for large public places. The proposed system is based on multiple sensors including 3D and 2D lidar, two RGB-D cameras and a stereo camera. The two lidars together with an RGB-D camera are used for dynamic object (human) detection and tracking, while the second RGB-D and stereo camera are used for detection of static objects (dirt and ground objects). A learning and reasoning module for spatial-temporal representation of the environment based on the perception pipeline is also introduced. Furthermore, a new dataset collected with the robot in several public places, including a supermarket, a warehouse and an airport, is released. Baseline results on this dataset for further research and comparison are provided. The proposed system has been fully implemented into the Robot Operating System (ROS) with high modularity, also publicly available to the community.
@article{lincoln40882, month = {June}, title = {Robot Perception of Static and Dynamic Objects with an Autonomous Floor Scrubber}, author = {Zhi Yan and Simon Schreiberhuber and Georg Halmetschlager and Tom Duckett and Markus Vincze and Nicola Bellotto}, publisher = {Springer}, year = {2020}, doi = {10.1007/s11370-020-00324-9}, journal = {Intelligent Service Robotics}, url = {https://eprints.lincoln.ac.uk/id/eprint/40882/}, abstract = {This paper presents the perception system of a new professional cleaning robot for large public places. The proposed system is based on multiple sensors including 3D and 2D lidar, two RGB-D cameras and a stereo camera. The two lidars together with an RGB-D camera are used for dynamic object (human) detection and tracking, while the second RGB-D and stereo camera are used for detection of static objects (dirt and ground objects). A learning and reasoning module for spatial-temporal representation of the environment based on the perception pipeline is also introduced. Furthermore, a new dataset collected with the robot in several public places, including a supermarket, a warehouse and an airport, is released. Baseline results on this dataset for further research and comparison are provided. The proposed system has been fully implemented into the Robot Operating System (ROS) with high modularity, also publicly available to the community.} }
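Since the system is released as modular ROS components, the overall shape of such a perception pipeline can be suggested with a minimal rospy skeleton. The topic names and the one-callback-per-sensor layout below are hypothetical, not those of the released code.

```python
# Minimal ROS (rospy) skeleton showing the shape of a multi-sensor
# perception node. Topic names are hypothetical stand-ins.
import rospy
from sensor_msgs.msg import PointCloud2, Image

def on_lidar(cloud):
    # Dynamic-object (human) detection and tracking would consume
    # 3D/2D lidar data here.
    rospy.loginfo("lidar cloud: %d bytes", len(cloud.data))

def on_rgbd(image):
    # Static-object (dirt, ground-object) detection would consume
    # RGB-D frames here.
    rospy.loginfo("rgbd frame: %dx%d", image.width, image.height)

if __name__ == "__main__":
    rospy.init_node("perception_pipeline")
    rospy.Subscriber("/lidar/points", PointCloud2, on_lidar)
    rospy.Subscriber("/camera/rgbd", Image, on_rgbd)
    rospy.spin()  # modules run as independent, loosely coupled callbacks
```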
- I. Albayati, A. Postnikov, S. Pearson, R. Bickerton, A. Zolotas, and C. Bingham, “Power and energy analysis for a commercial retail refrigeration system responding to a static demand side response,” International journal of electrical power & energy systems, vol. 117, p. 105645, 2020. doi:10.1016/j.ijepes.2019.105645
[BibTeX] [Abstract] [Download PDF]
The paper considers the impact of Demand Side Response events on supply power profile and energy efficiency of widely distributed aggregated loads applied across commercial refrigeration systems. Responses to secondary grid frequency static DSR events are investigated. Experimental trials are conducted on a system of refrigerators representing a small retail store, and subsequently on the refrigerators of an operational superstore in the UK. Energy consumption and energy savings during 3 hours of operation, pre- and post-secondary DSR, are discussed. In addition, a simultaneous secondary DSR event is realised across three operational retail stores located in different geographical regions of the UK. A Simulink model for a 3Φ power network is used to investigate the impact of a synchronised return to normal operation of the aggregated refrigeration systems post DSR on the local power network. Results show ~1% drop in line voltage due to the synchronised return to operation. An analysis of energy consumption shows that DSR events can facilitate energy savings of between 3.8% and 9.3% compared to normal operation. This is a result of the refrigerators operating more efficiently during and shortly after the DSR. The use of aggregated refrigeration loads can contribute to the necessary load-shed by 97.3% at the beginning of DSR and 27% during 30 minutes DSR, based on a simultaneous DSR event carried out on three retail stores.
@article{lincoln38163, volume = {117}, month = {May}, author = {Ibrahim Albayati and Andrey Postnikov and Simon Pearson and Ronald Bickerton and Argyrios Zolotas and Chris Bingham}, title = {Power and Energy Analysis for a Commercial Retail Refrigeration System Responding to a Static Demand Side Response}, publisher = {Elsevier}, year = {2020}, journal = {International Journal of Electrical Power \& Energy Systems}, doi = {10.1016/j.ijepes.2019.105645}, pages = {105645}, url = {https://eprints.lincoln.ac.uk/id/eprint/38163/}, abstract = {The paper considers the impact of Demand Side Response events on supply power profile and energy efficiency of widely distributed aggregated loads applied across commercial refrigeration systems. Responses to secondary grid frequency static DSR events are investigated. Experimental trials are conducted on a system of refrigerators representing a small retail store, and subsequently on the refrigerators of an operational superstore in the UK. Energy consumption and energy savings during 3 hours of operation, pre- and post-secondary DSR, are discussed. In addition, a simultaneous secondary DSR event is realised across three operational retail stores located in different geographical regions of the UK. A Simulink model for a 3{\ensuremath{\Phi}} power network is used to investigate the impact of a synchronised return to normal operation of the aggregated refrigeration systems post DSR on the local power network. Results show {\texttt{\char126}}1\% drop in line voltage due to the synchronised return to operation. An analysis of energy consumption shows that DSR events can facilitate energy savings of between 3.8\% and 9.3\% compared to normal operation. This is a result of the refrigerators operating more efficiently during and shortly after the DSR. The use of aggregated refrigeration loads can contribute to the necessary load-shed by 97.3\% at the beginning of DSR and 27\% during 30 minutes DSR, based on a simultaneous DSR event carried out on three retail stores.} }
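For readers unfamiliar with the headline figures, the percentage saving is simple arithmetic over pre- and post-DSR consumption; the numbers below are illustrative, not the trial data.

```python
# Worked example of an energy-saving percentage from pre- and post-DSR
# consumption over the same 3-hour window. Illustrative numbers only.
baseline_kwh = 100.0  # consumption during normal operation
dsr_kwh = 92.5        # consumption over the same window with a DSR event
saving = 100.0 * (baseline_kwh - dsr_kwh) / baseline_kwh
print(f"{saving:.1f}% saving")  # 7.5%, inside the reported 3.8-9.3% band
```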
- G. Picardi, M. Chellapurath, S. Iacoponi, S. Stefanni, C. Laschi, and M. Calisti, “Bioinspired underwater legged robot for seabed exploration with low environmental disturbance,” Science robotics, vol. 5, iss. 42, p. eaaz1012, 2020. doi:10.1126/scirobotics.aaz1012
[BibTeX] [Abstract] [Download PDF]
Robots have the potential to assist and complement humans in the study and exploration of extreme and hostile environments. For example, valuable scientific data have been collected with the aid of propeller-driven autonomous and remotely operated vehicles in underwater operations. However, because of their nature as swimmers, such robots are limited when closer interaction with the environment is required. Here, we report a bioinspired underwater legged robot, called SILVER2, that implements locomotion modalities inspired by benthic animals (organisms that harness the interaction with the seabed to move; for example, octopi and crabs). Our robot can traverse irregular terrains, interact delicately with the environment, approach targets safely and precisely, and hold position passively and silently. The capabilities of our robot were validated through a series of field missions in real sea conditions in a depth range between 0.5 and 12 meters.
@article{lincoln46143, volume = {5}, number = {42}, month = {May}, author = {G. Picardi and M. Chellapurath and S. Iacoponi and S. Stefanni and C. Laschi and M. Calisti}, title = {Bioinspired underwater legged robot for seabed exploration with low environmental disturbance}, year = {2020}, journal = {Science Robotics}, doi = {10.1126/scirobotics.aaz1012}, pages = {eaaz1012}, url = {https://eprints.lincoln.ac.uk/id/eprint/46143/}, abstract = {Robots have the potential to assist and complement humans in the study and exploration of extreme and hostile environments. For example, valuable scientific data have been collected with the aid of propeller-driven autonomous and remotely operated vehicles in underwater operations. However, because of their nature as swimmers, such robots are limited when closer interaction with the environment is required. Here, we report a bioinspired underwater legged robot, called SILVER2, that implements locomotion modalities inspired by benthic animals (organisms that harness the interaction with the seabed to move; for example, octopi and crabs). Our robot can traverse irregular terrains, interact delicately with the environment, approach targets safely and precisely, and hold position passively and silently. The capabilities of our robot were validated through a series of field missions in real sea conditions in a depth range between 0.5 and 12 meters.} }
- L. Jackson, C. M. Saaj, A. Seddaoui, C. Whiting, S. Eckersley, and S. Hadfield, “Downsizing an orbital space robot: a dynamic system based evaluation,” Advances in space research, vol. 65, iss. 10, p. 2247–2262, 2020. doi:10.1016/j.asr.2020.03.004
[BibTeX] [Abstract] [Download PDF]
Small space robots have the potential to revolutionise space exploration by facilitating the on-orbit assembly of infrastructure, in shorter time scales, at reduced costs. Their commercial appeal will be further improved if such a system is also capable of performing on-orbit servicing missions, in line with the current drive to limit space debris and prolong the lifetime of satellites already in orbit. Whilst there have been a limited number of successful demonstrations of technologies capable of these on-orbit operations, the systems remain large and bespoke. The recent surge in small satellite technologies is changing the economics of space and in the near future, downsizing a space robot might become a viable option with a host of benefits. This industry-wide shift means some of the technologies for use with a downsized space robot, such as power and communication subsystems, now exist. However, there are still dynamic and control issues that need to be overcome before a downsized space robot can be capable of undertaking useful missions. This paper first outlines these issues, before analyzing the effect of downsizing a system on its operational capability, thereby presenting the smallest controllable system such that the benefits of a small space robot can be achieved with current technologies. The sizing of the base spacecraft and manipulator is addressed here. The design presented consists of a 3-link, 6-degree-of-freedom robotic manipulator mounted on a 12U form factor satellite. The feasibility of this 12U space robot was evaluated in simulation and the in-depth results presented here support the hypothesis that a small space robot is a viable solution for in-orbit operations.
@article{lincoln48337, volume = {65}, number = {10}, month = {May}, author = {Lucy Jackson and Chakravarthini M. Saaj and Asma Seddaoui and Calem Whiting and Steve Eckersley and Simon Hadfield}, title = {Downsizing an orbital space robot: A dynamic system based evaluation}, publisher = {Elsevier}, year = {2020}, journal = {Advances in Space Research}, doi = {10.1016/j.asr.2020.03.004}, pages = {2247--2262}, url = {https://eprints.lincoln.ac.uk/id/eprint/48337/}, abstract = {Small space robots have the potential to revolutionise space exploration by facilitating the on-orbit assembly of infrastructure, in shorter time scales, at reduced costs. Their commercial appeal will be further improved if such a system is also capable of performing on-orbit servicing missions, in line with the current drive to limit space debris and prolong the lifetime of satellites already in orbit. Whilst there have been a limited number of successful demonstrations of technologies capable of these on-orbit operations, the systems remain large and bespoke. The recent surge in small satellite technologies is changing the economics of space and in the near future, downsizing a space robot might become a viable option with a host of benefits. This industry-wide shift means some of the technologies for use with a downsized space robot, such as power and communication subsystems, now exist. However, there are still dynamic and control issues that need to be overcome before a downsized space robot can be capable of undertaking useful missions. This paper first outlines these issues, before analyzing the effect of downsizing a system on its operational capability, thereby presenting the smallest controllable system such that the benefits of a small space robot can be achieved with current technologies. The sizing of the base spacecraft and manipulator is addressed here. The design presented consists of a 3-link, 6-degree-of-freedom robotic manipulator mounted on a 12U form factor satellite. The feasibility of this 12U space robot was evaluated in simulation and the in-depth results presented here support the hypothesis that a small space robot is a viable solution for in-orbit operations.} }
- D. D. Barrie, R. Margetts, and K. Goher, “SIMPA: soft-grasp infant myoelectric prosthetic arm,” IEEE robotics and automation letters, vol. 5, iss. 2, p. 699–704, 2020. doi:10.1109/LRA.2019.2963820
[BibTeX] [Abstract] [Download PDF]
Myoelectric prosthetic arms have primarily focused on adults, despite evidence showing the benefits of early adoption. This work presents SIMPA, a low-cost 3D-printed prosthetic arm with soft grippers. The arm has been designed using CAD and 3D-scanning, and manufactured using predominantly 3D-printing techniques. A voluntary opening control system utilizing an armband-based sEMG has been developed concurrently. Grasp tests have resulted in an average effectiveness of 87%, with objects in excess of 400g being securely grasped. The results highlight the effectiveness of soft grippers as an end device in prosthetics, as well as the viability of toddler-scale myoelectric devices.
@article{lincoln39383, volume = {5}, number = {2}, month = {April}, author = {Daniel De Barrie and Rebecca Margetts and Khaled Goher}, title = {SIMPA: Soft-Grasp Infant Myoelectric Prosthetic Arm}, publisher = {IEEE}, year = {2020}, journal = {IEEE Robotics and Automation Letters}, doi = {10.1109/LRA.2019.2963820}, pages = {699--704}, url = {https://eprints.lincoln.ac.uk/id/eprint/39383/}, abstract = {Myoelectric prosthetic arms have primarily focused on adults, despite evidence showing the benefits of early adoption. This work presents SIMPA, a low-cost 3D-printed prosthetic arm with soft grippers. The arm has been designed using CAD and 3D-scanning, and manufactured using predominantly 3D-printing techniques. A voluntary opening control system utilizing an armband-based sEMG has been developed concurrently. Grasp tests have resulted in an average effectiveness of 87\%, with objects in excess of 400g being securely grasped. The results highlight the effectiveness of soft grippers as an end device in prosthetics, as well as the viability of toddler-scale myoelectric devices.} }
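The voluntary-opening scheme can be illustrated with a minimal threshold rule on the sEMG envelope: the gripper stays closed by default and opens while muscle activation exceeds a threshold. This is a sketch with an arbitrary threshold and synthetic signals, not the SIMPA controller.

```python
import numpy as np

def voluntary_opening(emg_window, threshold=0.25):
    """Voluntary-opening logic sketch: closed by default, open while the
    sEMG RMS envelope exceeds a threshold. Threshold is illustrative;
    in practice it would be tuned per user."""
    envelope = np.sqrt(np.mean(np.square(emg_window)))  # RMS envelope
    return "open" if envelope > threshold else "closed"

rest = 0.05 * np.random.randn(200)        # resting muscle activity
contraction = 0.5 * np.random.randn(200)  # active contraction
print(voluntary_opening(rest), voluntary_opening(contraction))  # closed open
```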
- H. Wang, J. Peng, and S. Yue, “A directionally selective small target motion detecting visual neural network in cluttered backgrounds,” IEEE transactions on cybernetics, vol. 50, iss. 4, p. 1541–1555, 2020. doi:10.1109/TCYB.2018.2869384
[BibTeX] [Abstract] [Download PDF]
Discriminating targets moving against a cluttered background is a huge challenge, let alone detecting a target as small as one or a few pixels and tracking it in flight. In the insect’s visual system, a class of specific neurons, called small target motion detectors (STMDs), have been identified as showing exquisite selectivity for small target motion. Some of the STMDs have also demonstrated direction selectivity which means these STMDs respond strongly only to their preferred motion direction. Direction selectivity is an important property of these STMD neurons which could contribute to tracking small targets such as mates in flight. However, little has been done on systematically modeling these directionally selective STMD neurons. In this paper, we propose a directionally selective STMD-based neural network for small target detection in a cluttered background. In the proposed neural network, a new correlation mechanism is introduced for direction selectivity via correlating signals relayed from two pixels. Then, a lateral inhibition mechanism is implemented on the spatial field for size selectivity of the STMD neurons. Finally, a population vector algorithm is used to encode motion direction of small targets. Extensive experiments showed that the proposed neural network not only is in accord with current biological findings, i.e., showing directional preferences, but also worked reliably in detecting small targets against cluttered backgrounds.
@article{lincoln33420, volume = {50}, number = {4}, month = {April}, author = {Hongxin Wang and Jigen Peng and Shigang Yue}, note = {The final published version of this article can be accessed online at https://ieeexplore.ieee.org/document/8485659}, title = {A Directionally Selective Small Target Motion Detecting Visual Neural Network in Cluttered Backgrounds}, publisher = {IEEE}, year = {2020}, journal = {IEEE Transactions on Cybernetics}, doi = {10.1109/TCYB.2018.2869384}, pages = {1541--1555}, url = {https://eprints.lincoln.ac.uk/id/eprint/33420/}, abstract = {Discriminating targets moving against a cluttered background is a huge challenge, let alone detecting a target as small as one or a few pixels and tracking it in flight. In the insect's visual system, a class of specific neurons, called small target motion detectors (STMDs), have been identified as showing exquisite selectivity for small target motion. Some of the STMDs have also demonstrated direction selectivity which means these STMDs respond strongly only to their preferred motion direction. Direction selectivity is an important property of these STMD neurons which could contribute to tracking small targets such as mates in flight. However, little has been done on systematically modeling these directionally selective STMD neurons. In this paper, we propose a directionally selective STMD-based neural network for small target detection in a cluttered background. In the proposed neural network, a new correlation mechanism is introduced for direction selectivity via correlating signals relayed from two pixels. Then, a lateral inhibition mechanism is implemented on the spatial field for size selectivity of the STMD neurons. Finally, a population vector algorithm is used to encode motion direction of small targets. Extensive experiments showed that the proposed neural network not only is in accord with current biological findings, i.e., showing directional preferences, but also worked reliably in detecting small targets against cluttered backgrounds.} }
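The population vector algorithm used for the final decoding stage is standard: each direction-selective unit votes with its preferred direction, weighted by its response. Below is a minimal sketch of that stage only, with made-up tuning and responses, not the full STMD network.

```python
import numpy as np

def population_vector(responses, preferred_dirs_deg):
    """Population vector decoding: sum unit vectors along each neuron's
    preferred direction, weighted by its response, and read off the angle."""
    theta = np.radians(preferred_dirs_deg)
    x = np.sum(responses * np.cos(theta))
    y = np.sum(responses * np.sin(theta))
    return np.degrees(np.arctan2(y, x)) % 360.0

# Eight units tuned 45 degrees apart; the strongest response is at 90 degrees.
dirs = np.arange(0, 360, 45)
resp = np.array([0.1, 0.6, 1.0, 0.5, 0.1, 0.0, 0.0, 0.0])
print(population_vector(resp, dirs))  # close to 90
```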
- T. Pardi, V. Ortenzi, C. Fairbairn, T. Pipe, A. G. Esfahani, and R. Stolkin, “Planning maximum-manipulability cutting paths,” IEEE robotics and automation letters, vol. 5, iss. 2, p. 1999–2006, 2020. doi:10.1109/LRA.2020.2970949
[BibTeX] [Abstract] [Download PDF]
This paper presents a method for constrained motion planning from vision, which enables a robot to move its end-effector over an observed surface, given start and destination points. The robot has no prior knowledge of the surface shape but observes it from a noisy point cloud. We consider the multi-objective optimisation problem of finding robot trajectories which maximise the robot's manipulability throughout the motion, while also minimising surface-distance travelled between the two points. This work has application in industrial problems of rough robotic cutting, e.g., demolition of the legacy nuclear plant, where the cut path need not be precise as long as it achieves dismantling. We show how detours in the path can be leveraged to increase the manipulability of the robot at all points along the path. This helps to avoid singularities while maximising the robot's capability to make small deviations during task execution. We show how a sampling-based planner can be projected onto the Riemannian manifold of a curved surface, and extended to include a term which maximises manipulability. We present the results of empirical experiments, with both simulated and real robots, which are tasked with moving over a variety of different surface shapes. Our planner enables successful task completion while ensuring significantly greater manipulability when compared against a conventional RRT* planner.
@article{lincoln41285, volume = {5}, number = {2}, month = {April}, author = {Tommaso Pardi and Valerio Ortenzi and Colin Fairbairn and Tony Pipe and Amir Ghalamzan Esfahani and Rustam Stolkin}, title = {Planning maximum-manipulability cutting paths}, publisher = {IEEE}, year = {2020}, journal = {IEEE Robotics and Automation Letters}, doi = {10.1109/LRA.2020.2970949}, pages = {1999--2006}, url = {https://eprints.lincoln.ac.uk/id/eprint/41285/}, abstract = {This paper presents a method for constrained motion planning from vision, which enables a robot to move its end-effector over an observed surface, given start and destination points. The robot has no prior knowledge of the surface shape but observes it from a noisy point cloud. We consider the multi-objective optimisation problem of finding robot trajectories which maximise the robot's manipulability throughout the motion, while also minimising surface-distance travelled between the two points. This work has application in industrial problems of rough robotic cutting, e.g., demolition of the legacy nuclear plant, where the cut path need not be precise as long as it achieves dismantling. We show how detours in the path can be leveraged to increase the manipulability of the robot at all points along the path. This helps to avoid singularities while maximising the robot's capability to make small deviations during task execution. We show how a sampling-based planner can be projected onto the Riemannian manifold of a curved surface, and extended to include a term which maximises manipulability. We present the results of empirical experiments, with both simulated and real robots, which are tasked with moving over a variety of different surface shapes. Our planner enables successful task completion while ensuring significantly greater manipulability when compared against a conventional RRT* planner.} }
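The quantity being maximised in such planners is typically Yoshikawa's manipulability measure, w = sqrt(det(J J^T)), which drops to zero at singularities. The sketch below computes it for a hypothetical 2-link planar arm; it illustrates the cost term only, not the paper's planner.

```python
import numpy as np

def manipulability(jacobian):
    """Yoshikawa manipulability measure w = sqrt(det(J J^T)); it vanishes
    at singularities, which a maximum-manipulability planner penalises."""
    jjt = jacobian @ jacobian.T
    return float(np.sqrt(max(np.linalg.det(jjt), 0.0)))

def jacobian_2link(q1, q2, l1=0.4, l2=0.3):
    """Analytic Jacobian of a 2-link planar arm (hypothetical example arm)."""
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

# The arm is near-singular when almost straight (q2 close to 0).
print(manipulability(jacobian_2link(0.3, 1.2)))   # comfortably away from zero
print(manipulability(jacobian_2link(0.3, 0.01)))  # close to zero
```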
- W. Martindale, S. Pearson, M. Swainson, L. Korir, I. Wright, A. M. Opiyo, B. Karanja, S. Nyalala, and M. Kumar, “Framing food security and food loss statistics for incisive supply chain improvement and knowledge transfer between Kenyan, Indian and United Kingdom food manufacturers,” Emerald open research, vol. 2, iss. 12, 2020. doi:10.35241/emeraldopenres.13414.1
[BibTeX] [Abstract] [Download PDF]
The application of global indices of nutrition and food sustainability in public health and the improvement of product profiles has facilitated effective actions that increase food security. In the research reported here we develop index measurements further so that they can be applied to food categories and be used by food processors and manufacturers for specific food supply chains. This research considers how they can be used to assess the sustainability of supply chain operations by stimulating more incisive food loss and waste reduction planning. The research demonstrates how an index driven approach focussed on improving both nutritional delivery and reducing food waste will result in improved food security and sustainability. Nutritional improvements are focussed on protein supply and reduction of food waste on supply chain losses and the methods are tested using the food systems of Kenya and India where the current research is being deployed. Innovative practices will emerge when nutritional improvement and waste reduction actions demonstrate market success, and this will result in the co-development of food manufacturing infrastructure and innovation programmes. The use of established indices of sustainability and security enable comparisons that encourage knowledge transfer and the establishment of cross-functional indices that quantify national food nutrition, security and sustainability. The research presented in this initial study is focussed on applying these indices to specific food supply chains for food processors and manufacturers.
@article{lincoln40529, volume = {2}, number = {12}, month = {April}, author = {Wayne Martindale and Simon Pearson and Mark Swainson and Lilian Korir and Isobel Wright and Arnold M. Opiyo and Benard Karanja and Samuel Nyalala and Mahesh Kumar}, title = {Framing food security and food loss statistics for incisive supply chain improvement and knowledge transfer between Kenyan, Indian and United Kingdom food manufacturers}, publisher = {Emerald}, year = {2020}, journal = {Emerald Open Research}, doi = {10.35241/emeraldopenres.13414.1}, url = {https://eprints.lincoln.ac.uk/id/eprint/40529/}, abstract = {The application of global indices of nutrition and food sustainability in public health and the improvement of product profiles has facilitated effective actions that increase food security. In the research reported here we develop index measurements further so that they can be applied to food categories and be used by food processors and manufacturers for specific food supply chains. This research considers how they can be used to assess the sustainability of supply chain operations by stimulating more incisive food loss and waste reduction planning. The research demonstrates how an index driven approach focussed on improving both nutritional delivery and reducing food waste will result in improved food security and sustainability. Nutritional improvements are focussed on protein supply and reduction of food waste on supply chain losses and the methods are tested using the food systems of Kenya and India where the current research is being deployed. Innovative practices will emerge when nutritional improvement and waste reduction actions demonstrate market success, and this will result in the co-development of food manufacturing infrastructure and innovation programmes. The use of established indices of sustainability and security enable comparisons that encourage knowledge transfer and the establishment of cross-functional indices that quantify national food nutrition, security and sustainability. The research presented in this initial study is focussed on applying these indices to specific food supply chains for food processors and manufacturers.} }
- M. Al-Khafajiy, T. Baker, M. Asim, Z. Guo, R. Ranjan, A. Longo, D. Puthal, and M. Taylor, “COMITMENT: a fog computing trust management approach,” Journal of parallel and distributed computing, vol. 137, p. 1–16, 2020. doi:10.1016/j.jpdc.2019.10.006
[BibTeX] [Abstract] [Download PDF]
As an extension of cloud computing, fog computing is considered to be relatively more secure than cloud computing due to data being transiently maintained and analyzed on local fog nodes closer to data sources. However, there exist several security and privacy concerns when fog nodes collaborate and share data to execute certain tasks. For example, offloading data to a malicious fog node can result in an unauthorized collection or manipulation of users' private data. Cryptographic-based techniques can prevent external attacks, but are not useful when fog nodes are already authenticated and part of a network using legitimate identities. We therefore resort to trust to identify and isolate malicious fog nodes and to mitigate security risks. In this paper, we present a fog COMputIng Trust manageMENT (COMITMENT) approach that uses quality of service and quality of protection history measures from previous direct and indirect fog node interactions for assessing and managing the trust level of the nodes within the fog computing environment. Using the COMITMENT approach, we were able to reduce/identify the malicious attacks/interactions among fog nodes by approximately 66%, while reducing the service response time by approximately 15 s.
@article{lincoln47559, volume = {137}, month = {March}, author = {Mohammed Al-Khafajiy and Thar Baker and Muhammad Asim and Zehua Guo and Rajiv Ranjan and Antonella Longo and Deepak Puthal and Mark Taylor}, title = {COMITMENT: A Fog Computing Trust Management Approach}, publisher = {Elsevier}, year = {2020}, journal = {Journal of Parallel and Distributed Computing}, doi = {10.1016/j.jpdc.2019.10.006}, pages = {1--16}, url = {https://eprints.lincoln.ac.uk/id/eprint/47559/}, abstract = {As an extension of cloud computing, fog computing is considered to be relatively more secure than cloud computing due to data being transiently maintained and analyzed on local fog nodes closer to data sources. However, there exist several security and privacy concerns when fog nodes collaborate and share data to execute certain tasks. For example, offloading data to a malicious fog node can result in an unauthorized collection or manipulation of users' private data. Cryptographic-based techniques can prevent external attacks, but are not useful when fog nodes are already authenticated and part of a network using legitimate identities. We therefore resort to trust to identify and isolate malicious fog nodes and to mitigate security risks. In this paper, we present a fog COMputIng Trust manageMENT (COMITMENT) approach that uses quality of service and quality of protection history measures from previous direct and indirect fog node interactions for assessing and managing the trust level of the nodes within the fog computing environment. Using the COMITMENT approach, we were able to reduce/identify the malicious attacks/interactions among fog nodes by approximately 66\%, while reducing the service response time by approximately 15 s.} }
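The general flavour of combining a node's own interaction history (direct trust) with peer reports (indirect trust) can be sketched as a weighted average. The weighting, threshold and data below are illustrative, not COMITMENT's exact formulation.

```python
def trust_score(direct_history, recommendations, alpha=0.7):
    """Trust aggregation sketch: blend direct interaction outcomes with
    peer-reported scores. Alpha and the quarantine threshold are
    illustrative stand-ins, not COMITMENT's formulation."""
    direct = sum(direct_history) / len(direct_history)      # QoS outcomes in [0, 1]
    indirect = sum(recommendations) / len(recommendations)  # peer-reported scores
    return alpha * direct + (1.0 - alpha) * indirect

score = trust_score(direct_history=[1.0, 0.9, 0.2], recommendations=[0.4, 0.3])
print(score, "quarantine" if score < 0.5 else "trusted")
```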
- A. Mohamed, C. Saaj, A. Seddaoui, and M. Nair, “Linear controllers for free-flying and controlled-floating space robots: a new perspective,” Aeronautics and aerospace open access journal, vol. 4, iss. 3, p. 97–114, 2020. doi:10.15406/aaoaj.2020.04.00112
[BibTeX] [Abstract] [Download PDF]
Autonomous space robots are crucial for performing future in-orbit operations, including servicing of a spacecraft, assembly of large structures, maintenance of other space assets and active debris removal. Such orbital missions require servicer spacecraft equipped with one or more dexterous manipulators. However, unlike its terrestrial counterpart, the base of the robotic manipulator is not fixed in inertial space; instead, it is mounted on the base-spacecraft, which itself possesses both translational and rotational motions. Additionally, the system will be subjected to extreme environmental perturbations, parametric uncertainties and system constraints due to the dynamic coupling between the manipulator and the base-spacecraft. This paper presents the dynamic model of the space robot and a three-stage control algorithm for this highly dynamic non-linear system. In this approach, feed-forward compensation and feed-forward linearization techniques are used to decouple and linearize the highly non-linear system, respectively. This approach allows the use of the linear Proportional-Integral-Derivative (PID) controller and Linear Quadratic Regulator (LQR) in the final stages. Moreover, this paper covers a simulation-based trade-off analysis to determine both proposed linear controllers' efficacy. This assessment considers precise trajectory tracking requirements whilst minimizing power consumption and improving robustness during the close-range operation with the target spacecraft.
@article{lincoln48336, volume = {4}, number = {3}, month = {July}, author = {Amr Mohamed and Chakravarthini Saaj and Asma Seddaoui and Manu Nair}, title = {Linear controllers for free-flying and controlled-floating space robots: a new perspective}, publisher = {MedCrave Group}, year = {2020}, journal = {Aeronautics and Aerospace Open Access Journal}, doi = {10.15406/aaoaj.2020.04.00112}, pages = {97--114}, url = {https://eprints.lincoln.ac.uk/id/eprint/48336/}, abstract = {Autonomous space robots are crucial for performing future in-orbit operations, including servicing of a spacecraft, assembly of large structures, maintenance of other space assets and active debris removal. Such orbital missions require servicer spacecraft equipped with one or more dexterous manipulators. However, unlike its terrestrial counterpart, the base of the robotic manipulator is not fixed in inertial space; instead, it is mounted on the base-spacecraft, which itself possesses both translational and rotational motions. Additionally, the system will be subjected to extreme environmental perturbations, parametric uncertainties and system constraints due to the dynamic coupling between the manipulator and the base-spacecraft. This paper presents the dynamic model of the space robot and a three-stage control algorithm for this highly dynamic non-linear system. In this approach, feed-forward compensation and feed-forward linearization techniques are used to decouple and linearize the highly non-linear system, respectively. This approach allows the use of the linear Proportional-Integral-Derivative (PID) controller and Linear Quadratic Regulator (LQR) in the final stages. Moreover, this paper covers a simulation-based trade-off analysis to determine both proposed linear controllers' efficacy. This assessment considers precise trajectory tracking requirements whilst minimizing power consumption and improving robustness during the close-range operation with the target spacecraft.} }
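For a linearised model, the final-stage LQR design reduces to solving a continuous-time algebraic Riccati equation. Below is a minimal sketch on a toy double-integrator, a stand-in for the linearised space-robot dynamics; the weighting matrices are arbitrary.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# LQR gain for a toy double-integrator joint model (illustrative stand-in
# for the linearised dynamics used in the final control stage).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.diag([10.0, 1.0])  # penalise position error more than velocity
R = np.array([[0.1]])     # control-effort penalty

P = solve_continuous_are(A, B, Q, R)  # solve the Riccati equation
K = np.linalg.inv(R) @ B.T @ P        # optimal state-feedback gain, u = -K x
print(K)
print(np.linalg.eigvals(A - B @ K))   # closed-loop poles in the left half-plane
```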
- J. Gao, A. French, M. Pound, Y. He, T. Pridmore, and J. Pieters, “Deep convolutional neural networks for image-based Convolvulus sepium detection in sugar beet fields,” Plant methods, vol. 16, p. 19, 2020. doi:10.1186/s13007-020-00570-z
[BibTeX] [Abstract] [Download PDF]
Background: Convolvulus sepium (hedge bindweed) detection in sugar beet fields remains a challenging problem due to variation in appearance of plants, illumination changes, foliage occlusions, and different growth stages under field conditions. Current approaches for weed and crop recognition, segmentation and detection rely predominantly on conventional machine-learning techniques that require a large set of hand-crafted features for modelling. These might fail to generalize over different fields and environments. Results: Here, we present an approach that develops a deep convolutional neural network (CNN) based on the tiny YOLOv3 architecture for C. sepium and sugar beet detection. We generated 2271 synthetic images, before combining these images with 452 field images to train the developed model. YOLO anchor box sizes were calculated from the training dataset using a k-means clustering approach. The resulting model was tested on 100 field images, showing that the combination of synthetic and original field images to train the developed model could improve the mean average precision (mAP) metric from 0.751 to 0.829 compared to using collected field images alone. We also compared the performance of the developed model with the YOLOv3 and Tiny YOLO models. The developed model achieved a better trade-off between accuracy and speed. Specifically, the average precisions (APs@IoU0.5) of C. sepium and sugar beet were 0.761 and 0.897 respectively with 6.48 ms inference time per image (800 × 1200) on a NVIDIA Titan X GPU environment.
@article{lincoln41223, volume = {16}, month = {March}, author = {Junfeng Gao and Andrew French and Michael Pound and Yong He and Tony Pridmore and Jan Pieters}, title = {Deep convolutional neural networks for image-based Convolvulus sepium detection in sugar beet fields}, publisher = {BMC}, year = {2020}, journal = {Plant Methods}, doi = {10.1186/s13007-020-00570-z}, pages = {19}, url = {https://eprints.lincoln.ac.uk/id/eprint/41223/}, abstract = {Background Convolvulus sepium (hedge bindweed) detection in sugar beet fields remains a challenging problem due to variation in appearance of plants, illumination changes, foliage occlusions, and different growth stages under field conditions. Current approaches for weed and crop recognition, segmentation and detection rely predominantly on conventional machine-learning techniques that require a large set of hand-crafted features for modelling. These might fail to generalize over different fields and environments. Results Here, we present an approach that develops a deep convolutional neural network (CNN) based on the tiny YOLOv3 architecture for C. sepium and sugar beet detection. We generated 2271 synthetic images, before combining these images with 452 field images to train the developed model. YOLO anchor box sizes were calculated from the training dataset using a k-means clustering approach. The resulting model was tested on 100 field images, showing that the combination of synthetic and original field images to train the developed model could improve the mean average precision (mAP) metric from 0.751 to 0.829 compared to using collected field images alone. We also compared the performance of the developed model with the YOLOv3 and Tiny YOLO models. The developed model achieved a better trade-off between accuracy and speed. Specifically, the average precisions (APs@IoU0.5) of C. sepium and sugar beet were 0.761 and 0.897 respectively with 6.48 ms inference time per image (800 {$\times$} 1200) on a NVIDIA Titan X GPU environment.} }
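The anchor-box step mentioned in the abstract is commonly done by running k-means over box width-height pairs with 1 - IoU as the distance. The sketch below follows that common recipe under stated assumptions; YOLO implementations differ in detail, and the synthetic boxes are illustrative.

```python
import numpy as np

def kmeans_anchors(wh, k=6, iters=50, seed=0):
    """k-means over (width, height) pairs with 1 - IoU as the distance,
    the usual way YOLO anchor sizes are derived from a training set.
    A minimal sketch, not the paper's exact procedure."""
    rng = np.random.default_rng(seed)
    anchors = wh[rng.choice(len(wh), k, replace=False)]
    for _ in range(iters):
        # IoU between every box and every anchor, both treated as centred boxes
        inter = np.minimum(wh[:, None, 0], anchors[None, :, 0]) * \
                np.minimum(wh[:, None, 1], anchors[None, :, 1])
        union = wh[:, 0:1] * wh[:, 1:2] + np.prod(anchors, axis=1) - inter
        assign = np.argmax(inter / union, axis=1)  # nearest anchor = highest IoU
        for j in range(k):                         # recompute cluster medians
            if np.any(assign == j):
                anchors[j] = np.median(wh[assign == j], axis=0)
    return anchors

wh = np.abs(np.random.default_rng(1).normal([60, 90], [20, 30], (500, 2)))
print(kmeans_anchors(wh, k=3))  # three representative anchor sizes
```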
- H. Wang, J. Peng, X. Zheng, and S. Yue, “A robust visual system for small target motion detection against cluttered moving backgrounds,” IEEE transactions on neural networks and learning systems, vol. 31, iss. 3, p. 839–853, 2020. doi:10.1109/TNNLS.2019.2910418
[BibTeX] [Abstract] [Download PDF]
Monitoring small objects against cluttered moving backgrounds is a huge challenge to future robotic vision systems. As a source of inspiration, insects are quite apt at searching for mates and tracking prey, which always appear as small dim speckles in the visual field. The exquisite sensitivity of insects for small target motion, as revealed recently, is coming from a class of specific neurons called small target motion detectors (STMDs). Although a few STMD-based models have been proposed, these existing models only use motion information for small target detection and cannot discriminate small targets from small-target-like background features (named fake features). To address this problem, this paper proposes a novel visual system model (STMD+) for small target motion detection, which is composed of four subsystems–ommatidia, motion pathway, contrast pathway, and mushroom body. Compared with the existing STMD-based models, the additional contrast pathway extracts directional contrast from luminance signals to eliminate false positive background motion. The directional contrast and the extracted motion information by the motion pathway are integrated into the mushroom body for small target discrimination. Extensive experiments showed the significant and consistent improvements of the proposed visual system model over the existing STMD-based models against fake features.
@article{lincoln36114, volume = {31}, number = {3}, month = {March}, author = {Hongxin Wang and Jigen Peng and Xuqiang Zheng and Shigang Yue}, title = {A Robust Visual System for Small Target Motion Detection Against Cluttered Moving Backgrounds}, publisher = {Institute of Electrical and Electronics Engineers (IEEE)}, year = {2020}, journal = {IEEE Transactions on Neural Networks and Learning Systems}, doi = {10.1109/TNNLS.2019.2910418}, pages = {839--853}, url = {https://eprints.lincoln.ac.uk/id/eprint/36114/}, abstract = {Monitoring small objects against cluttered moving backgrounds is a huge challenge to future robotic vision systems. As a source of inspiration, insects are quite apt at searching for mates and tracking prey, which always appear as small dim speckles in the visual field. The exquisite sensitivity of insects for small target motion, as revealed recently, is coming from a class of specific neurons called small target motion detectors (STMDs). Although a few STMD-based models have been proposed, these existing models only use motion information for small target detection and cannot discriminate small targets from small-target-like background features (named fake features). To address this problem, this paper proposes a novel visual system model (STMD+) for small target motion detection, which is composed of four subsystems--ommatidia, motion pathway, contrast pathway, and mushroom body. Compared with the existing STMD-based models, the additional contrast pathway extracts directional contrast from luminance signals to eliminate false positive background motion. The directional contrast and the extracted motion information by the motion pathway are integrated into the mushroom body for small target discrimination. Extensive experiments showed the significant and consistent improvements of the proposed visual system model over the existing STMD-based models against fake features.} }
- R. Polvara, M. Patacchiola, M. Hanheide, and G. Neumann, “Sim-to-real quadrotor landing via sequential deep q-networks and domain randomization,” Robotics, vol. 9, iss. 1, 2020. doi:10.3390/robotics9010008
[BibTeX] [Abstract] [Download PDF]
The autonomous landing of an Unmanned Aerial Vehicle (UAV) on a marker is one of the most challenging problems in robotics. Many solutions have been proposed, with the best results achieved via customized geometric features and external sensors. This paper discusses for the first time the use of deep reinforcement learning as an end-to-end learning paradigm to find a policy for UAVs autonomous landing. Our method is based on a divide-and-conquer paradigm that splits a task into sequential sub-tasks, each one assigned to a Deep Q-Network (DQN), hence the name Sequential Deep Q-Network (SDQN). Each DQN in an SDQN is activated by an internal trigger, and it represents a component of a high-level control policy, which can navigate the UAV towards the marker. Different technical solutions have been implemented, for example combining vanilla and double DQNs, and the introduction of a partitioned buffer replay to address the problem of sample efficiency. One of the main contributions of this work consists in showing how an SDQN trained in a simulator via domain randomization, can effectively generalize to real-world scenarios of increasing complexity. The performance of SDQNs is comparable with a state-of-the-art algorithm and human pilots while being quantitatively better in noisy conditions.
@article{lincoln40216, volume = {9}, number = {1}, month = {February}, author = {Riccardo Polvara and Massimiliano Patacchiola and Marc Hanheide and Gerhard Neumann}, title = {Sim-to-Real Quadrotor Landing via Sequential Deep Q-Networks and Domain Randomization}, publisher = {MDPI}, year = {2020}, journal = {Robotics}, doi = {10.3390/robotics9010008}, url = {https://eprints.lincoln.ac.uk/id/eprint/40216/}, abstract = {The autonomous landing of an Unmanned Aerial Vehicle (UAV) on a marker is one of the most challenging problems in robotics. Many solutions have been proposed, with the best results achieved via customized geometric features and external sensors. This paper discusses for the first time the use of deep reinforcement learning as an end-to-end learning paradigm to find a policy for UAVs autonomous landing. Our method is based on a divide-and-conquer paradigm that splits a task into sequential sub-tasks, each one assigned to a Deep Q-Network (DQN), hence the name Sequential Deep Q-Network (SDQN). Each DQN in an SDQN is activated by an internal trigger, and it represents a component of a high-level control policy, which can navigate the UAV towards the marker. Different technical solutions have been implemented, for example combining vanilla and double DQNs, and the introduction of a partitioned buffer replay to address the problem of sample efficiency. One of the main contributions of this work consists in showing how an SDQN trained in a simulator via domain randomization, can effectively generalize to real-world scenarios of increasing complexity. The performance of SDQNs is comparable with a state-of-the-art algorithm and human pilots while being quantitatively better in noisy conditions.} }
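A minimal sketch of the sequential hand-over structure this abstract describes follows. The sub-tasks, trigger conditions, observation fields (`marker_offset`, `altitude`) and stub policies are all illustrative assumptions; in the paper each stage is a trained Deep Q-Network.

```python
# Hedged sketch of an SDQN-style controller: one sub-policy per sub-task,
# with internal triggers deciding when control passes to the next stage.
from typing import Callable, Dict, List

def align(obs): return "move_toward_marker"    # placeholder for a trained DQN
def descend(obs): return "reduce_altitude"     # placeholder for a trained DQN
def touch_down(obs): return "cut_motors"       # placeholder for a trained DQN

POLICIES: List[Callable[[Dict], str]] = [align, descend, touch_down]
TRIGGERS: List[Callable[[Dict], bool]] = [
    lambda obs: obs["marker_offset"] < 0.1,    # assumed: aligned over marker
    lambda obs: obs["altitude"] < 0.3,         # assumed: low enough to land
]

def sdqn_action(obs: Dict, stage: int):
    """Advance the stage while its trigger fires, then act with the
    sub-policy that owns the current stage."""
    while stage < len(TRIGGERS) and TRIGGERS[stage](obs):
        stage += 1
    return POLICIES[stage](obs), stage

action, stage = sdqn_action({"marker_offset": 0.05, "altitude": 1.2}, stage=0)
# -> ("reduce_altitude", 1): alignment trigger fired, descent policy is active.
```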
- M. Bartlett, C. Costescu, P. Baxter, and S. Thill, “Requirements for robotic interpretation of social signals ‘in the wild’: insights from diagnostic criteria of autism spectrum disorder,” MDPI information, vol. 11, iss. 81, p. 1–20, 2020. doi:10.3390/info11020081
[BibTeX] [Abstract] [Download PDF]
The last few decades have seen widespread advances in technological means to characterise observable aspects of human behaviour such as gaze or posture. Among others, these developments have also led to significant advances in social robotics. At the same time, however, social robots are still largely evaluated in idealised or laboratory conditions, and it remains unclear whether the technological progress is sufficient to let such robots move ‘into the wild’. In this paper, we characterise the problems that a social robot in the real world may face, and review the technological state of the art in terms of addressing these. We do this by considering what it would entail to automate the diagnosis of Autism Spectrum Disorder (ASD). Just as for social robotics, ASD diagnosis fundamentally requires the ability to characterise human behaviour from observable aspects. However, therapists provide clear criteria regarding what to look for. As such, ASD diagnosis is a situation that is both relevant to real-world social robotics and comes with clear metrics. Overall, we demonstrate that even with relatively clear therapist-provided criteria and current technological progress, the need to interpret covert behaviour cannot yet be fully addressed. Our discussions have clear implications for ASD diagnosis, but also for social robotics more generally. For ASD diagnosis, we provide a classification of criteria based on whether or not they depend on covert information and highlight present-day possibilities for supporting therapists in diagnosis through technological means. For social robotics, we highlight the fundamental role of covert behaviour, show that the current state-of-the-art is unable to characterise this, and emphasise that future research should tackle this explicitly in realistic settings.
@article{lincoln40108, volume = {11}, number = {81}, month = {February}, author = {M Bartlett and C Costescu and Paul Baxter and S Thill}, title = {Requirements for Robotic Interpretation of Social Signals 'in the Wild': Insights from Diagnostic Criteria of Autism Spectrum Disorder}, publisher = {MDPI}, year = {2020}, journal = {MDPI Information}, doi = {10.3390/info11020081}, pages = {1--20}, url = {https://eprints.lincoln.ac.uk/id/eprint/40108/}, abstract = {The last few decades have seen widespread advances in technological means to characterise observable aspects of human behaviour such as gaze or posture. Among others, these developments have also led to significant advances in social robotics. At the same time, however, social robots are still largely evaluated in idealised or laboratory conditions, and it remains unclear whether the technological progress is sufficient to let such robots move 'into the wild'. In this paper, we characterise the problems that a social robot in the real world may face, and review the technological state of the art in terms of addressing these. We do this by considering what it would entail to automate the diagnosis of Autism Spectrum Disorder (ASD). Just as for social robotics, ASD diagnosis fundamentally requires the ability to characterise human behaviour from observable aspects. However, therapists provide clear criteria regarding what to look for. As such, ASD diagnosis is a situation that is both relevant to real-world social robotics and comes with clear metrics. Overall, we demonstrate that even with relatively clear therapist-provided criteria and current technological progress, the need to interpret covert behaviour cannot yet be fully addressed. Our discussions have clear implications for ASD diagnosis, but also for social robotics more generally. For ASD diagnosis, we provide a classification of criteria based on whether or not they depend on covert information and highlight present-day possibilities for supporting therapists in diagnosis through technological means. For social robotics, we highlight the fundamental role of covert behaviour, show that the current state-of-the-art is unable to characterise this, and emphasise that future research should tackle this explicitly in realistic settings.} }
- B. Chen, J. Huang, Y. Huang, S. Kollias, and S. Yue, “Combining guaranteed and spot markets in display advertising: selling guaranteed page views with stochastic demand,” European journal of operational research, vol. 280, iss. 3, p. 1144–1159, 2020. doi:10.1016/j.ejor.2019.07.067
[BibTeX] [Abstract] [Download PDF]
While page views are often sold instantly through real-time auctions when users visit Web pages, they can also be sold in advance via guaranteed contracts. In this paper, we combine guaranteed and spot markets in display advertising, and present a dynamic programming model to study how a media seller should optimally allocate and price page views between guaranteed contracts and advertising auctions. This optimisation problem is challenging because the allocation and pricing of guaranteed contracts endogenously affects the expected revenue from advertising auctions in the future. We take into consideration several distinct characteristics regarding the media buyers’ purchasing behaviour, such as risk aversion, stochastic demand arrivals, and devise a scalable and efficient algorithm to solve the optimisation problem. Our work is one of a few studies that investigate the auction-based posted price guaranteed contracts for display advertising. The proposed model is further empirically validated with a display advertising data set from a UK supply-side platform. The results show that the optimal pricing and allocation strategies from our model can significantly increase the media seller’s expected total revenue, and the model suggests different optimal strategies based on the level of competition in advertising auctions.
@article{lincoln39575, volume = {280}, number = {3}, month = {February}, author = {Bowei Chen and Jingmin Huang and Yufei Huang and Stefanos Kollias and Shigang Yue}, title = {Combining guaranteed and spot markets in display advertising: Selling guaranteed page views with stochastic demand}, publisher = {Elsevier}, year = {2020}, journal = {European Journal of Operational Research}, doi = {10.1016/j.ejor.2019.07.067}, pages = {1144--1159}, url = {https://eprints.lincoln.ac.uk/id/eprint/39575/}, abstract = {While page views are often sold instantly through real-time auctions when users visit Web pages, they can also be sold in advance via guaranteed contracts. In this paper, we combine guaranteed and spot markets in display advertising, and present a dynamic programming model to study how a media seller should optimally allocate and price page views between guaranteed contracts and advertising auctions. This optimisation problem is challenging because the allocation and pricing of guaranteed contracts endogenously affects the expected revenue from advertising auctions in the future. We take into consideration several distinct characteristics regarding the media buyers' purchasing behaviour, such as risk aversion, stochastic demand arrivals, and devise a scalable and efficient algorithm to solve the optimisation problem. Our work is one of a few studies that investigate the auction-based posted price guaranteed contracts for display advertising. The proposed model is further empirically validated with a display advertising data set from a UK supply-side platform. The results show that the optimal pricing and allocation strategies from our model can significantly increase the media seller's expected total revenue, and the model suggests different optimal strategies based on the level of competition in advertising auctions.} }
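The allocation problem this entry outlines can be illustrated with a deliberately small backward-induction toy, not the paper's model: one guaranteed offer arrives per period at a random posted price, and the seller accepts it only when the price plus the value of continuing beats holding the view for the delivery-time auction. `MU`, `PRICES`, `T` and `S` are all assumed numbers.

```python
# Hedged toy dynamic program in the spirit of guaranteed-vs-auction allocation.
MU = 1.0                                         # expected auction revenue/view
PRICES = [(0.8, 0.5), (1.2, 0.4), (1.6, 0.1)]    # (guaranteed price, probability)
T, S = 10, 5                                     # selling periods, inventory

# V[t][s]: expected revenue with t periods to go and s unsold views.
V = [[0.0] * (S + 1) for _ in range(T + 1)]
for s in range(S + 1):
    V[0][s] = s * MU                             # leftover views go to auction
for t in range(1, T + 1):
    for s in range(S + 1):
        ev = 0.0
        for price, prob in PRICES:
            keep = V[t - 1][s]                   # decline the guaranteed offer
            sell = price + V[t - 1][s - 1] if s > 0 else float("-inf")
            ev += prob * max(keep, sell)         # accept only when it pays
        V[t][s] = ev

print(f"Expected revenue with {S} views over {T} periods: {V[T][S]:.2f}")
```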
- J. P. Fentanes, A. Badiee, T. Duckett, J. Evans, S. Pearson, and G. Cielniak, “Kriging-based robotic exploration for soil moisture mapping using a cosmic-ray sensor,” Journal of field robotics, vol. 37, iss. 1, p. 122–136, 2020. doi:10.1002/rob.21914
[BibTeX] [Abstract] [Download PDF]
Soil moisture monitoring is a fundamental process to enhance agricultural outcomes and to protect the environment. The traditional methods for measuring moisture content in the soil are laborious and expensive, and therefore there is a growing interest in developing sensors and technologies which can reduce the effort and costs. In this work, we propose to use an autonomous mobile robot equipped with a state-of-the-art noncontact soil moisture sensor building moisture maps on the fly and automatically selecting the most optimal sampling locations. We introduce an autonomous exploration strategy driven by the quality of the soil moisture model indicating areas of the field where the information is less precise. The sensor model follows the Poisson distribution and we demonstrate how to integrate such measurements into the kriging framework. We also investigate a range of different exploration strategies and assess their usefulness through a set of evaluation experiments based on real soil moisture data collected from two different fields. We demonstrate the benefits of using the adaptive measurement interval and adaptive sampling strategies for building better quality soil moisture models. The presented method is general and can be applied to other scenarios where the measured phenomena directly affect the acquisition time and need to be spatially mapped.
@article{lincoln37350, volume = {37}, number = {1}, month = {January}, author = {Jaime Pulido Fentanes and Amir Badiee and Tom Duckett and Jonathan Evans and Simon Pearson and Grzegorz Cielniak}, title = {Kriging-based robotic exploration for soil moisture mapping using a cosmic-ray sensor}, publisher = {Wiley Periodicals, Inc.}, year = {2020}, journal = {Journal of Field Robotics}, doi = {10.1002/rob.21914}, pages = {122--136}, url = {https://eprints.lincoln.ac.uk/id/eprint/37350/}, abstract = {Soil moisture monitoring is a fundamental process to enhance agricultural outcomes and to protect the environment. The traditional methods for measuring moisture content in the soil are laborious and expensive, and therefore there is a growing interest in developing sensors and technologies which can reduce the effort and costs. In this work, we propose to use an autonomous mobile robot equipped with a state-of-the-art noncontact soil moisture sensor building moisture maps on the fly and automatically selecting the most optimal sampling locations. We introduce an autonomous exploration strategy driven by the quality of the soil moisture model indicating areas of the field where the information is less precise. The sensor model follows the Poisson distribution and we demonstrate how to integrate such measurements into the kriging framework. We also investigate a range of different exploration strategies and assess their usefulness through a set of evaluation experiments based on real soil moisture data collected from two different fields. We demonstrate the benefits of using the adaptive measurement interval and adaptive sampling strategies for building better quality soil moisture models. The presented method is general and can be applied to other scenarios where the measured phenomena directly affect the acquisition time and need to be spatially mapped.} }
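In the spirit of the exploration strategy above, the hedged sketch below fits a Gaussian-process (kriging) model to the moisture samples gathered so far and sends the robot to the candidate point of highest predictive uncertainty. The RBF kernel, coordinates and candidate grid are illustrative assumptions, and the paper's Poisson sensor model is omitted.

```python
# Hedged sketch of variance-driven exploration with a kriging/GP surrogate.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

X_sampled = np.array([[0.0, 0.0], [20.0, 5.0], [10.0, 15.0]])  # field coords (m)
y_moisture = np.array([0.21, 0.34, 0.27])                      # volumetric moisture

gp = GaussianProcessRegressor(kernel=RBF(length_scale=10.0), alpha=1e-3)
gp.fit(X_sampled, y_moisture)

# Candidate grid over the field; go where the predictive std-dev is highest,
# i.e. where the soil moisture model is least precise.
xs, ys = np.meshgrid(np.linspace(0, 30, 31), np.linspace(0, 20, 21))
candidates = np.column_stack([xs.ravel(), ys.ravel()])
_, std = gp.predict(candidates, return_std=True)
next_waypoint = candidates[np.argmax(std)]
```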
- P. Chudzik, A. Mitchell, M. Alkaseem, Y. Wu, S. Fang, T. Hudaib, S. Pearson, and B. Al-Diri, “Mobile real-time grasshopper detection and data aggregation framework,” Scientific reports, vol. 10, p. 1150, 2020. doi:10.1038/s41598-020-57674-8
[BibTeX] [Abstract] [Download PDF]
Insects of the family Orthoptera: Acrididae, including grasshoppers and locusts, devastate crops and ecosystems around the globe. The effective control of these insects requires large numbers of trained extension agents who try to spot concentrations of the insects on the ground so that they can be destroyed before they take flight. This is a challenging and difficult task. No automatic detection system is yet available to increase scouting productivity, data scale and fidelity. Here we demonstrate MAESTRO, a novel grasshopper detection framework that deploys deep learning within RGB images to detect insects. MAESTRO uses a state-of-the-art two-stage training deep learning approach. The framework can be deployed not only on desktop computers but also on edge devices without internet connection, such as smartphones. MAESTRO can gather data using cloud storage for further research and in-depth analysis. In addition, we provide a challenging new open dataset (GHCID) of highly variable grasshopper populations imaged in Inner Mongolia. The detection performance of the stationary method and the mobile app are 78 and 49 percent, respectively; the stationary method requires around 1000 ms to analyze a single image, whereas the mobile app uses only around 400 ms per image. The algorithms are purely data-driven and can be used for other detection tasks in agriculture (e.g. plant disease detection) and beyond. This system can play a crucial role in the collection and analysis of data to enable more effective control of this critical global pest.
@article{lincoln39125, volume = {10}, month = {January}, author = {Piotr Chudzik and Arthur Mitchell and Mohammad Alkaseem and Yingie Wu and Shibo Fang and Taghread Hudaib and Simon Pearson and Bashir Al-Diri}, title = {Mobile Real-Time Grasshopper Detection and Data Aggregation Framework}, publisher = {Springer}, year = {2020}, journal = {Scientific Reports}, doi = {10.1038/s41598-020-57674-8}, pages = {1150}, url = {https://eprints.lincoln.ac.uk/id/eprint/39125/}, abstract = {Insects of the family Orthoptera: Acrididae, including grasshoppers and locusts, devastate crops and ecosystems around the globe. The effective control of these insects requires large numbers of trained extension agents who try to spot concentrations of the insects on the ground so that they can be destroyed before they take flight. This is a challenging and difficult task. No automatic detection system is yet available to increase scouting productivity, data scale and fidelity. Here we demonstrate MAESTRO, a novel grasshopper detection framework that deploys deep learning within RGB images to detect insects. MAESTRO uses a state-of-the-art two-stage training deep learning approach. The framework can be deployed not only on desktop computers but also on edge devices without internet connection, such as smartphones. MAESTRO can gather data using cloud storage for further research and in-depth analysis. In addition, we provide a challenging new open dataset (GHCID) of highly variable grasshopper populations imaged in Inner Mongolia. The detection performance of the stationary method and the mobile app are 78 and 49 percent, respectively; the stationary method requires around 1000 ms to analyze a single image, whereas the mobile app uses only around 400 ms per image. The algorithms are purely data-driven and can be used for other detection tasks in agriculture (e.g. plant disease detection) and beyond. This system can play a crucial role in the collection and analysis of data to enable more effective control of this critical global pest.} }
- R. Kirk, G. Cielniak, and M. Mangan, “L*a*b*fruits: a rapid and robust outdoor fruit detection system combining bio-inspired features with one-stage deep learning networks,” Sensors, vol. 20, iss. 1, p. 275, 2020. doi:10.3390/s20010275
[BibTeX] [Abstract] [Download PDF]
Automation of agricultural processes requires systems that can accurately detect and classify produce in real industrial environments that include variation in fruit appearance due to illumination, occlusion, seasons, weather conditions, etc. In this paper, we combine a visual processing approach inspired by colour-opponent theory in humans with recent advancements in one-stage deep learning networks to accurately, rapidly and robustly detect ripe soft fruits (strawberries) in real industrial settings and using standard (RGB) camera input. The resultant system was tested on an existing data-set captured in controlled conditions as well as our new real-world data-set captured on a real strawberry farm over two months. We utilise F1 score, the harmonic mean of precision and recall, to show our system matches the state-of-the-art detection accuracy (F1: 0.793 vs. 0.799) in controlled conditions; has greater generalisation and robustness to variation of spatial parameters (camera viewpoint) in the real-world data-set (F1: 0.744); and at a fraction of the computational cost allowing classification at almost 30 fps. We propose that the L*a*b*Fruits system addresses some of the most pressing limitations of current fruit detection systems and is well-suited to application in areas such as yield forecasting and harvesting. Beyond the target application in agriculture, this work also provides a proof-of-principle whereby increased performance is achieved through analysis of the domain data, capturing features at the input level rather than simply increasing model complexity.
@article{lincoln39423, volume = {20}, number = {1}, month = {January}, author = {Raymond Kirk and Grzegorz Cielniak and Michael Mangan}, title = {L*a*b*Fruits: A Rapid and Robust Outdoor Fruit Detection System Combining Bio-Inspired Features with One-Stage Deep Learning Networks}, publisher = {MDPI}, year = {2020}, journal = {Sensors}, doi = {10.3390/s20010275}, pages = {275}, url = {https://eprints.lincoln.ac.uk/id/eprint/39423/}, abstract = {Automation of agricultural processes requires systems that can accurately detect and classify produce in real industrial environments that include variation in fruit appearance due to illumination, occlusion, seasons, weather conditions, etc. In this paper, we combine a visual processing approach inspired by colour-opponent theory in humans with recent advancements in one-stage deep learning networks to accurately, rapidly and robustly detect ripe soft fruits (strawberries) in real industrial settings and using standard (RGB) camera input. The resultant system was tested on an existing data-set captured in controlled conditions as well as our new real-world data-set captured on a real strawberry farm over two months. We utilise F1 score, the harmonic mean of precision and recall, to show our system matches the state-of-the-art detection accuracy (F1: 0.793 vs. 0.799) in controlled conditions; has greater generalisation and robustness to variation of spatial parameters (camera viewpoint) in the real-world data-set (F1: 0.744); and at a fraction of the computational cost allowing classification at almost 30 fps. We propose that the L*a*b*Fruits system addresses some of the most pressing limitations of current fruit detection systems and is well-suited to application in areas such as yield forecasting and harvesting. Beyond the target application in agriculture, this work also provides a proof-of-principle whereby increased performance is achieved through analysis of the domain data, capturing features at the input level rather than simply increasing model complexity.} }
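The colour-opponent input stage suggested by this entry can be sketched as a simple feature transform; the detector itself is out of scope here. This assumes scikit-image's `rgb2lab` conversion and rough per-channel normalisation ranges, and is only an illustration of feeding opponent channels to a network rather than raw RGB.

```python
# Hedged sketch of a bio-inspired colour-opponent input transform.
import numpy as np
from skimage import color

def lab_channels(rgb_image: np.ndarray) -> np.ndarray:
    """rgb_image: float array in [0, 1], shape (H, W, 3).
    Returns (H, W, 3): lightness plus red-green (a*) and blue-yellow (b*)
    opponent channels, which make ripe-fruit colour stand out from foliage."""
    lab = color.rgb2lab(rgb_image)
    # Normalise roughly to [0, 1] per channel for network input (assumed ranges).
    L = lab[..., 0] / 100.0
    a = (lab[..., 1] + 128.0) / 255.0
    b = (lab[..., 2] + 128.0) / 255.0
    return np.dstack([L, a, b])
```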
- P. Bosilj, E. Aptoula, T. Duckett, and G. Cielniak, “Transfer learning between crop types for semantic segmentation of crops versus weeds in precision agriculture,” Journal of field robotics, vol. 37, iss. 1, p. 7–19, 2020. doi:10.1002/rob.21869
[BibTeX] [Abstract] [Download PDF]
Agricultural robots rely on semantic segmentation for distinguishing between crops and weeds in order to perform selective treatments, increase yield and crop health while reducing the amount of chemicals used. Deep learning approaches have recently achieved both excellent classification performance and real-time execution. However, these techniques also rely on a large amount of training data, requiring a substantial labelling effort, both of which are scarce in precision agriculture. Additional design efforts are required to achieve commercially viable performance levels under varying environmental conditions and crop growth stages. In this paper, we explore the role of knowledge transfer between deep-learning-based classifiers for different crop types, with the goal of reducing the retraining time and labelling efforts required for a new crop. We examine the classification performance on three datasets with different crop types and containing a variety of weeds, and compare the performance and retraining efforts required when using data labelled at pixel level with partially labelled data obtained through a less time-consuming procedure of annotating the segmentation output. We show that transfer learning between different crop types is possible, and reduces training times by up to 80%. Furthermore, we show that even when the data used for re-training is imperfectly annotated, the classification performance is within 2% of that of networks trained with laboriously annotated pixel-precision data.
@article{lincoln35535, volume = {37}, number = {1}, month = {January}, author = {Petra Bosilj and Erchan Aptoula and Tom Duckett and Grzegorz Cielniak}, title = {Transfer learning between crop types for semantic segmentation of crops versus weeds in precision agriculture}, publisher = {Wiley}, year = {2020}, journal = {Journal of Field Robotics}, doi = {10.1002/rob.21869}, pages = {7--19}, url = {https://eprints.lincoln.ac.uk/id/eprint/35535/}, abstract = {Agricultural robots rely on semantic segmentation for distinguishing between crops and weeds in order to perform selective treatments, increase yield and crop health while reducing the amount of chemicals used. Deep learning approaches have recently achieved both excellent classification performance and real-time execution. However, these techniques also rely on a large amount of training data, requiring a substantial labelling effort, both of which are scarce in precision agriculture. Additional design efforts are required to achieve commercially viable performance levels under varying environmental conditions and crop growth stages. In this paper, we explore the role of knowledge transfer between deep-learning-based classifiers for different crop types, with the goal of reducing the retraining time and labelling efforts required for a new crop. We examine the classification performance on three datasets with different crop types and containing a variety of weeds, and compare the performance and retraining efforts required when using data labelled at pixel level with partially labelled data obtained through a less time-consuming procedure of annotating the segmentation output. We show that transfer learning between different crop types is possible, and reduces training times by up to 80\%. Furthermore, we show that even when the data used for re-training is imperfectly annotated, the classification performance is within 2\% of that of networks trained with laboriously annotated pixel-precision data.} }
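As a hedged illustration of the fine-tuning recipe (not the paper's networks or datasets), the sketch below freezes a segmentation backbone trained on a source crop and retrains only the classifier head on the new crop. The `fcn_resnet50` stand-in, the class count, the checkpoint name and the learning rate are all assumptions; it presumes a recent torchvision.

```python
# Hedged sketch of cross-crop transfer learning for crop/weed segmentation.
import torch
import torchvision

NUM_CLASSES = 3  # assumed: background, crop, weed

model = torchvision.models.segmentation.fcn_resnet50(weights=None, num_classes=NUM_CLASSES)
# Hypothetical checkpoint trained on the source crop type:
# model.load_state_dict(torch.load("source_crop.pth"))

# Freeze the backbone; retrain only the head on the new crop, which is what
# keeps retraining cheap when labels for the target crop are scarce.
for p in model.backbone.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)
loss_fn = torch.nn.CrossEntropyLoss()

def finetune_step(images, labels):
    """One update on a batch from the target crop (labels may be partial/weak)."""
    optimizer.zero_grad()
    out = model(images)["out"]      # (B, C, H, W) logits
    loss = loss_fn(out, labels)     # labels: (B, H, W) class indices
    loss.backward()
    optimizer.step()
    return loss.item()
```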
- C. Coppola, S. Cosar, D. R. Faria, and N. Bellotto, “Social activity recognition on continuous rgb-d video sequences,” International journal of social robotics, p. 1–15, 2020. doi:10.1007/s12369-019-00541-y
[BibTeX] [Abstract] [Download PDF]
Modern service robots are provided with one or more sensors, often including RGB-D cameras, to perceive objects and humans in the environment. This paper proposes a new system for the recognition of human social activities from a continuous stream of RGB-D data. Many of the works until now have succeeded in recognising activities from clipped videos in datasets, but for robotic applications it is important to be able to move to more realistic scenarios in which such activities are not manually selected. For this reason, it is useful to detect the time intervals when humans are performing social activities, the recognition of which can contribute to trigger human-robot interactions or to detect situations of potential danger. The main contributions of this research work include a novel system for the recognition of social activities from continuous RGB-D data, combining temporal segmentation and classification, as well as a model for learning the proximity-based priors of the social activities. A new public dataset with RGB-D videos of social and individual activities is also provided and used for evaluating the proposed solutions. The results show the good performance of the system in recognising social activities from continuous RGB-D data.
@article{lincoln35151, month = {January}, author = {Claudio Coppola and Serhan Cosar and Diego R. Faria and Nicola Bellotto}, title = {Social Activity Recognition on Continuous RGB-D Video Sequences}, publisher = {Springer}, journal = {International Journal of Social Robotics}, doi = {10.1007/s12369-019-00541-y}, pages = {1--15}, year = {2020}, url = {https://eprints.lincoln.ac.uk/id/eprint/35151/}, abstract = {Modern service robots are provided with one or more sensors, often including RGB-D cameras, to perceive objects and humans in the environment. This paper proposes a new system for the recognition of human social activities from a continuous stream of RGB-D data. Many of the works until now have succeeded in recognising activities from clipped videos in datasets, but for robotic applications it is important to be able to move to more realistic scenarios in which such activities are not manually selected. For this reason, it is useful to detect the time intervals when humans are performing social activities, the recognition of which can contribute to trigger human-robot interactions or to detect situations of potential danger. The main contributions of this research work include a novel system for the recognition of social activities from continuous RGB-D data, combining temporal segmentation and classification, as well as a model for learning the proximity-based priors of the social activities. A new public dataset with RGB-D videos of social and individual activities is also provided and used for evaluating the proposed solutions. The results show the good performance of the system in recognising social activities from continuous RGB-D data.} }
- Z. Yan, T. Duckett, and N. Bellotto, “Online learning for 3d lidar-based human detection: experimental analysis of point cloud clustering and classification methods,” Autonomous robots, vol. 44, iss. 2, p. 147–164, 2020. doi:10.1007/s10514-019-09883-y
[BibTeX] [Abstract] [Download PDF]
This paper presents a system for online learning of human classifiers by mobile service robots using 3D LiDAR sensors, and its experimental evaluation in a large indoor public space. The learning framework requires a minimal set of labelled samples (e.g. one or several samples) to initialise a classifier. The classifier is then retrained iteratively during operation of the robot. New training samples are generated automatically using multi-target tracking and a pair of “experts” to estimate false negatives and false positives. Both classification and tracking utilise an efficient real-time clustering algorithm for segmentation of 3D point cloud data. We also introduce a new feature to improve human classification in sparse, long-range point clouds. We provide an extensive evaluation of our framework using a 3D LiDAR dataset of people moving in a large indoor public space, which is made available to the research community. The experiments demonstrate the influence of the system components and improved classification of humans compared to the state-of-the-art.
@article{lincoln36535, volume = {44}, number = {2}, month = {January}, author = {Zhi Yan and Tom Duckett and Nicola Bellotto}, title = {Online Learning for 3D LiDAR-based Human Detection: Experimental Analysis of Point Cloud Clustering and Classification Methods}, publisher = {Springer}, year = {2020}, journal = {Autonomous Robots}, doi = {10.1007/s10514-019-09883-y}, pages = {147--164}, url = {https://eprints.lincoln.ac.uk/id/eprint/36535/}, abstract = {This paper presents a system for online learning of human classifiers by mobile service robots using 3D~LiDAR sensors, and its experimental evaluation in a large indoor public space. The learning framework requires a minimal set of labelled samples (e.g. one or several samples) to initialise a classifier. The classifier is then retrained iteratively during operation of the robot. New training samples are generated automatically using multi-target tracking and a pair of "experts" to estimate false negatives and false positives. Both classification and tracking utilise an efficient real-time clustering algorithm for segmentation of 3D point cloud data. We also introduce a new feature to improve human classification in sparse, long-range point clouds. We provide an extensive evaluation of our framework using a 3D LiDAR dataset of people moving in a large indoor public space, which is made available to the research community. The experiments demonstrate the influence of the system components and improved classification of humans compared to the state-of-the-art.} }
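The "pair of experts" mechanism above can be caricatured as follows. The speed band, bounding-box limits and data classes are invented for illustration and are a simplification, not the paper's implementation, of how tracking can mint new positive and negative samples for iterative retraining.

```python
# Hedged sketch of expert-driven sample harvesting for online learning.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Cluster:
    features: Tuple[float, ...]
    bbox_size: Tuple[float, float, float]   # width, depth, height (m)

@dataclass
class Track:
    mean_speed: float                        # m/s
    clusters: List[Cluster] = field(default_factory=list)

def trajectory_expert(track: Track) -> bool:
    """Motion consistent with a walking person (assumed speed band)."""
    return 0.3 < track.mean_speed < 2.5

def volume_expert(c: Cluster) -> bool:
    """Bounding box plausibly person-sized (assumed limits)."""
    w, d, h = c.bbox_size
    return w < 1.5 and d < 1.5 and 1.0 < h < 2.0

def harvest_samples(tracks: List[Track]):
    """Label clusters on human-like tracks as positives, the rest as negatives;
    the growing sample pool is then used to retrain the classifier online."""
    pos, neg = [], []
    for t in tracks:
        for c in t.clusters:
            (pos if trajectory_expert(t) and volume_expert(c) else neg).append(c.features)
    return pos, neg
```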
- F. Camara and C. Fox, “Space invaders: pedestrian proxemic utility functions and trust zones for autonomous vehicle interactions,” International journal of social robotics, 2020. doi:10.1007/s12369-020-00717-x
[BibTeX] [Abstract] [Download PDF]
Understanding pedestrian proxemic utility and trust will help autonomous vehicles to plan and control interactions with pedestrians more safely and efficiently. When pedestrians cross the road in front of human-driven vehicles, the two agents use knowledge of each other’s preferences to negotiate and to determine who will yield to the other. Autonomous vehicles will require similar understandings, but previous work has shown a need for them to be provided in the form of continuous proxemic utility functions, which are not available from previous proxemics studies based on Hall’s discrete zones. To fill this gap, a new Bayesian method to infer continuous pedestrian proxemic utility functions is proposed, and related to a new definition of ‘physical trust requirement’ (PTR) for road-crossing scenarios. The method is validated on simulation data then its parameters are inferred empirically from two public datasets. Results show that pedestrian proxemic utility is best described by a hyperbolic function, and that trust by the pedestrian is required in a discrete ‘trust zone’ which emerges naturally from simple physics. The PTR concept is then shown to be capable of generating and explaining the empirically observed zone sizes of Hall’s discrete theory of proxemics.
@article{lincoln42876, title = {Space Invaders: Pedestrian Proxemic Utility Functions and Trust Zones for Autonomous Vehicle Interactions}, author = {Fanta Camara and Charles Fox}, publisher = {Springer}, year = {2020}, doi = {10.1007/s12369-020-00717-x}, journal = {International Journal of Social Robotics}, url = {https://eprints.lincoln.ac.uk/id/eprint/42876/}, abstract = {Understanding pedestrian proxemic utility and trust will help autonomous vehicles to plan and control interactions with pedestrians more safely and efficiently. When pedestrians cross the road in front of human-driven vehicles, the two agents use knowledge of each other's preferences to negotiate and to determine who will yield to the other. Autonomous vehicles will require similar understandings, but previous work has shown a need for them to be provided in the form of continuous proxemic utility functions, which are not available from previous proxemics studies based on Hall's discrete zones. To fill this gap, a new Bayesian method to infer continuous pedestrian proxemic utility functions is proposed, and related to a new definition of 'physical trust requirement' (PTR) for road-crossing scenarios. The method is validated on simulation data then its parameters are inferred empirically from two public datasets. Results show that pedestrian proxemic utility is best described by a hyperbolic function, and that trust by the pedestrian is required in a discrete 'trust zone' which emerges naturally from simple physics. The PTR concept is then shown to be capable of generating and explaining the empirically observed zone sizes of Hall's discrete theory of proxemics.} }
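For concreteness, one hyperbolic proxemic utility of the kind this abstract reports could be written as below; the paper's exact parameterisation and fitting procedure may differ, so this is a sketch rather than the authors' function.

```latex
% Sketch of a hyperbolic proxemic utility over distance d between pedestrian
% and vehicle; a and b > 0 are fitted constants (assumed form, not the
% paper's exact parameterisation):
\[
  U(d) \;=\; a \;-\; \frac{b}{d}, \qquad d > 0 ,
\]
% so the cost term b/d diverges as d \to 0 (contact) and vanishes at range;
% a separate physical argument then yields the discrete trust zone.
```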
- J. Lock, I. Gilchrist, G. Cielniak, and N. Bellotto, “Experimental analysis of a spatialised audio interface for people with visual impairments,” ACM transactions on accessible computing, 2020.
[BibTeX] [Abstract] [Download PDF]
Sound perception is a fundamental skill for many people with severe sight impairments. The research presented in this paper is part of an ongoing project with the aim to create a mobile guidance aid to help people with vision impairments find objects within an unknown indoor environment. This system requires an effective non-visual interface and uses bone-conduction headphones to transmit audio instructions to the user. It has been implemented and tested with spatialised audio cues, which convey the direction of a predefined target in 3D space. We present an in-depth evaluation of the audio interface with several experiments that involve a large number of participants, both blindfolded and with actual visual impairments, and analyse the pros and cons of our design choices. In addition to producing results comparable to the state-of-the-art, we found that Fitts’s Law (a predictive model for human movement) provides a suitable metric that can be used to improve and refine the quality of the audio interface in future mobile navigation aids.
@article{lincoln41544, title = {Experimental Analysis of a Spatialised Audio Interface for People with Visual Impairments}, author = {Jacobus Lock and Iain Gilchrist and Grzegorz Cielniak and Nicola Bellotto}, publisher = {Association for Computing Machinery}, year = {2020}, journal = {ACM Transactions on Accessible Computing}, url = {https://eprints.lincoln.ac.uk/id/eprint/41544/}, abstract = {Sound perception is a fundamental skill for many people with severe sight impairments. The research presented in this paper is part of an ongoing project with the aim to create a mobile guidance aid to help people with vision impairments find objects within an unknown indoor environment. This system requires an effective non-visual interface and uses bone-conduction headphones to transmit audio instructions to the user. It has been implemented and tested with spatialised audio cues, which convey the direction of a predefined target in 3D space. We present an in-depth evaluation of the audio interface with several experiments that involve a large number of participants, both blindfolded and with actual visual impairments, and analyse the pros and cons of our design choices. In addition to producing results comparable to the state-of-the-art, we found that Fitts's Law (a predictive model for human movement) provides a suitable metric that can be used to improve and refine the quality of the audio interface in future mobile navigation aids.} }
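For reference, Fitts's Law in its common Shannon formulation, which this entry uses as a quality metric for the audio interface; how the authors map it onto audio-guided 3D target acquisition is a detail of the paper, not shown here.

```latex
% Fitts's Law (Shannon formulation): the time MT to acquire a target of
% width W at distance (amplitude) D grows linearly with the index of
% difficulty ID, with empirically fitted intercept a and slope b:
\[
  MT \;=\; a + b \,\log_2\!\Bigl(\tfrac{D}{W} + 1\Bigr),
  \qquad
  ID \;=\; \log_2\!\Bigl(\tfrac{D}{W} + 1\Bigr).
\]
```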
- J. Singh, A. R. Srinivasan, G. Neumann, and A. Kucukyilmaz, “Haptic-guided teleoperation of a 7-dof collaborative robot arm with an identical twin master,” IEEE transactions on haptics, p. 1–1, 2020. doi:10.1109/TOH.2020.2971485
[BibTeX] [Abstract] [Download PDF]
In this study, we describe two techniques to enable haptic-guided teleoperation using 7-DoF cobot arms as master and slave devices. A shortcoming of using cobots as master-slave systems is the lack of force feedback at the master side. However, recent developments in cobot technologies have brought in affordable, flexible, and safe torque-controlled robot arms, which can be programmed to generate force feedback to mimic the operation of a haptic device. In this study, we use two Franka Emika Panda robot arms as a twin master-slave system to enable haptic-guided teleoperation. We propose a two-layer mechanism to implement force feedback due to 1) object interactions in the slave workspace, and 2) virtual forces, e.g. those that can repel from static obstacles in the remote environment or provide task-related guidance forces. We present two different approaches for force rendering and conduct an experimental study to evaluate the performance and usability of these approaches in comparison to teleoperation without haptic guidance. Our results indicate that the proposed joint torque coupling method for rendering task forces improves energy requirements during haptic guided telemanipulation, providing realistic force feedback by accurately matching the slave torque readings at the master side.
@article{lincoln40137, title = {Haptic-Guided Teleoperation of a 7-DoF Collaborative Robot Arm with an Identical Twin Master}, author = {Jayant Singh and Aravinda Ramakrishnan Srinivasan and Gerhard Neumann and Ayse Kucukyilmaz}, publisher = {IEEE}, year = {2020}, pages = {1--1}, doi = {10.1109/TOH.2020.2971485}, journal = {IEEE Transactions on Haptics}, url = {https://eprints.lincoln.ac.uk/id/eprint/40137/}, abstract = {In this study, we describe two techniques to enable haptic-guided teleoperation using 7-DoF cobot arms as master and slave devices. A shortcoming of using cobots as master-slave systems is the lack of force feedback at the master side. However, recent developments in cobot technologies have brought in affordable, flexible, and safe torque-controlled robot arms, which can be programmed to generate force feedback to mimic the operation of a haptic device. In this study, we use two Franka Emika Panda robot arms as a twin master-slave system to enable haptic-guided teleoperation. We propose a two-layer mechanism to implement force feedback due to 1) object interactions in the slave workspace, and 2) virtual forces, e.g. those that can repel from static obstacles in the remote environment or provide task-related guidance forces. We present two different approaches for force rendering and conduct an experimental study to evaluate the performance and usability of these approaches in comparison to teleoperation without haptic guidance. Our results indicate that the proposed joint torque coupling method for rendering task forces improves energy requirements during haptic guided telemanipulation, providing realistic force feedback by accurately matching the slave torque readings at the master side.} }
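A hedged sketch of the joint-torque-coupling idea: the master arm is driven toward the slave's joint state through stiffness and damping terms, plus reflected external torques sensed at the slave. The gains and the coupling form are illustrative assumptions, not the paper's tuning or exact control law.

```python
# Hedged sketch of joint-space torque coupling for a twin master-slave pair.
import numpy as np

N_JOINTS = 7
K = np.diag([50.0] * N_JOINTS)   # coupling stiffness (Nm/rad), assumed
D = np.diag([2.0] * N_JOINTS)    # coupling damping (Nm*s/rad), assumed

def master_feedback_torque(q_master, dq_master, q_slave, dq_slave, tau_ext_slave):
    """Torque commanded on the master: pull it toward the slave's joint
    configuration and reflect externally sensed slave torques, so object
    interactions in the remote workspace are felt at the master side."""
    coupling = K @ (q_slave - q_master) + D @ (dq_slave - dq_master)
    return coupling + tau_ext_slave
```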
- X. Sun, S. Yue, and M. Mangan, “A decentralised neural model explaining optimal integration of navigational strategies in insects,” eLife, vol. 9, 2020. doi:10.7554/eLife.54026
[BibTeX] [Abstract] [Download PDF]
Insect navigation arises from the coordinated action of concurrent guidance systems but the neural mechanisms through which each functions, and are then coordinated, remain unknown. We propose that insects require distinct strategies to retrace familiar routes (route-following) and directly return from novel to familiar terrain (homing) using different aspects of frequency encoded views that are processed in different neural pathways. We also demonstrate how the Central Complex and Mushroom Bodies regions of the insect brain may work in tandem to coordinate the directional output of different guidance cues through a contextually switched ring-attractor inspired by neural recordings. The resultant unified model of insect navigation reproduces behavioural data from a series of cue conflict experiments in realistic animal environments and offers testable hypotheses of where and how insects process visual cues, utilise the different information that they provide and coordinate their outputs to achieve the adaptive behaviours observed in the wild.
@article{lincoln41703, volume = {9}, month = {July}, author = {Xuelong Sun and Shigang Yue and Michael Mangan}, title = {A decentralised neural model explaining optimal integration of navigational strategies in insects}, publisher = {eLife Sciences Publications}, journal = {eLife}, doi = {10.7554/eLife.54026}, year = {2020}, url = {https://eprints.lincoln.ac.uk/id/eprint/41703/}, abstract = {Insect navigation arises from the coordinated action of concurrent guidance systems but the neural mechanisms through which each functions, and are then coordinated, remain unknown. We propose that insects require distinct strategies to retrace familiar routes (route-following) and directly return from novel to familiar terrain (homing) using different aspects of frequency encoded views that are processed in different neural pathways. We also demonstrate how the Central Complex and Mushroom Bodies regions of the insect brain may work in tandem to coordinate the directional output of different guidance cues through a contextually switched ring-attractor inspired by neural recordings. The resultant unified model of insect navigation reproduces behavioural data from a series of cue conflict experiments in realistic animal environments and offers testable hypotheses of where and how insects process visual cues, utilise the different information that they provide and coordinate their outputs to achieve the adaptive behaviours observed in the wild.} }
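The ring attractor at the heart of this model can be caricatured with textbook rate dynamics: a single activity bump forms on a ring of heading cells and shifts toward the weighted directional cues it receives. The connectivity profile, gains and time constant below are assumptions for illustration, not the paper's network.

```python
# Hedged sketch of leaky rate dynamics on a ring attractor of heading cells.
import numpy as np

N = 16                                    # heading cells around the ring
theta = 2 * np.pi * np.arange(N) / N
# Local excitation / broad inhibition (assumed cosine connectivity profile).
W = 0.5 * np.cos(theta[:, None] - theta[None, :]) - 0.3

def step(r, cue, dt=0.05, tau=0.5):
    """One Euler step: the bump of activity integrates (possibly conflicting)
    directional cue inputs into a single heading estimate."""
    drive = W @ r + cue
    return r + dt / tau * (-r + np.maximum(drive, 0.0))

# Example: a cue pointing at 1 radian pulls the bump toward that direction.
r = np.zeros(N)
cue = np.maximum(np.cos(theta - 1.0), 0.0)
for _ in range(200):
    r = step(r, cue)
heading = theta[np.argmax(r)]
```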
- R. Polvara, M. Fernandez-Carmona, M. Hanheide, and G. Neumann, “Next-best-sense: a multi-criteria robotic exploration strategy for rfid tags discovery,” IEEE robotics and automation letters, vol. 5, iss. 3, p. 4477–4484, 2020. doi:10.1109/LRA.2020.3001539
[BibTeX] [Abstract] [Download PDF]
Automated exploration is one of the most relevant applications of autonomous robots. In this paper, we suggest a novel online coverage algorithm called Next-Best-Sense (NBS), an extension of the Next-Best-View class of exploration algorithms that optimizes the exploration task balancing multiple criteria. This novel algorithm is applied to the problem of localizing all Radio Frequency Identification (RFID) tags with a mobile robotic platform that is equipped with an RFID reader. We cast this problem as a coverage planning problem by defining a basic sensing operation – a scan with the RFID reader – as the field of ‘view’ of the sensor. NBS evaluates candidate locations with a global utility function which combines utility values for travel distance, information gain, sensing time, battery status and RFID information gain, generalizing the use of Multi-Criteria Decision Making. We developed an RFID reader and tag model in the Gazebo simulator for validation. Experiments performed both in simulation and with a real robot suggest that our NBS approach can successfully localize all the RFID tags while minimizing navigation metrics such as sensing operations, total traveling distance and battery consumption. The code developed is publicly available on the authors’ repository.
@article{lincoln41120, volume = {5}, number = {3}, month = {June}, author = {Riccardo Polvara and Manuel Fernandez-Carmona and Marc Hanheide and Gerhard Neumann}, title = {Next-Best-Sense: a multi-criteria robotic exploration strategy for RFID tags discovery}, publisher = {IEEE}, year = {2020}, journal = {IEEE Robotics and Automation Letters}, doi = {10.1109/LRA.2020.3001539}, pages = {4477--4484}, url = {https://eprints.lincoln.ac.uk/id/eprint/41120/}, abstract = {Automated exploration is one of the most relevant applications of autonomous robots. In this paper, we suggest a novel online coverage algorithm called Next-Best-Sense (NBS), an extension of the Next-Best-View class of exploration algorithms that optimizes the exploration task balancing multiple criteria. This novel algorithm is applied to the problem of localizing all Radio Frequency Identification (RFID) tags with a mobile robotic platform that is equipped with an RFID reader. We cast this problem as a coverage planning problem by defining a basic sensing operation -- a scan with the RFID reader -- as the field of 'view' of the sensor. NBS evaluates candidate locations with a global utility function which combines utility values for travel distance, information gain, sensing time, battery status and RFID information gain, generalizing the use of Multi-Criteria Decision Making. We developed an RFID reader and tag model in the Gazebo simulator for validation. Experiments performed both in simulation and with a real robot suggest that our NBS approach can successfully localize all the RFID tags while minimizing navigation metrics such as sensing operations, total traveling distance and battery consumption. The code developed is publicly available on the authors' repository.} }
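A small sketch of multi-criteria candidate scoring in the spirit of NBS follows. The weighted-product combination, the criterion names and the weights are assumptions for illustration, not the paper's exact global utility function.

```python
# Hedged sketch of Next-Best-Sense-style candidate evaluation: score each
# candidate sensing location by a global utility over normalised criteria.
def global_utility(c, weights):
    """c: dict of per-criterion scores in [0, 1], higher is better."""
    u = 1.0
    for name, w in weights.items():
        u *= max(c[name], 1e-6) ** w      # weighted product of criteria
    return u

weights = {"travel": 0.2, "info_gain": 0.3, "sensing_time": 0.1,
           "battery": 0.1, "rfid_gain": 0.3}

candidates = [
    {"travel": 0.8, "info_gain": 0.5, "sensing_time": 0.9, "battery": 0.7, "rfid_gain": 0.4},
    {"travel": 0.4, "info_gain": 0.9, "sensing_time": 0.6, "battery": 0.7, "rfid_gain": 0.8},
]
best = max(candidates, key=lambda c: global_utility(c, weights))
```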
- M. Chellapurath, S. Stefanni, G. Fiorito, A. M. Sabatini, C. Laschi, and M. Calisti, “Locomotory behaviour of the intertidal marble crab (pachygrapsus marmoratus) supports the underwater spring-loaded inverted pendulum as a fundamental model for punting in animals,” Bioinspiration & biomimetics, vol. 15, iss. 5, p. 55004, 2020. doi:10.1088/1748-3190/ab968c
[BibTeX] [Abstract] [Download PDF]
In aquatic pedestrian locomotion the dynamics of terrestrial and aquatic environments are coupled. Here we study terrestrial running and aquatic punting locomotion of the marine-living crab Pachygrapsus marmoratus. We detected both active and passive phases of running and punting through the observation of crab locomotory behaviour in standardized settings and by three-dimensional kinematic analysis of its dynamic gaits using high-speed video cameras. Variations in different stride parameters were studied and compared. The comparison was done based on the dimensionless parameter the Froude number (Fr) to account for the effect of buoyancy and size variability among the crabs. The underwater spring-loaded inverted pendulum (USLIP) model better fitted the dynamics of aquatic punting. USLIP takes account of the damping effect of the aquatic environment, a variable not considered by the spring-loaded inverted pendulum (SLIP) model in reduced gravity. Our results highlight the underlying principles of aquatic terrestrial locomotion by comparing it with terrestrial locomotion. Comparing punting with running, we show an increased stride period, decreased duty cycle and orientation of the carapace more inclined with the horizontal plane, indicating the significance of fluid forces on the dynamics due to the aquatic environment. Moreover, we discovered periodicity in punting locomotion of crabs and two different gaits, namely, long-flight punting and short-flight punting, distinguished by both footfall patterns and kinematic parameters. The generic fundamental model which belongs to all animals performing both terrestrial and aquatic legged locomotion has implications for control strategies, evolution and translation to robotic artefacts.
@article{lincoln46139, volume = {15}, number = {5}, month = {July}, author = {Mrudul Chellapurath and Sergio Stefanni and Graziano Fiorito and Angelo Maria Sabatini and Cecilia Laschi and Marcello Calisti}, title = {Locomotory behaviour of the intertidal marble crab (Pachygrapsus marmoratus) supports the underwater spring-loaded inverted pendulum as a fundamental model for punting in animals}, year = {2020}, journal = {Bioinspiration \& Biomimetics}, doi = {10.1088/1748-3190/ab968c}, pages = {055004}, url = {https://eprints.lincoln.ac.uk/id/eprint/46139/}, abstract = {In aquatic pedestrian locomotion the dynamics of terrestrial and aquatic environments are coupled. Here we study terrestrial running and aquatic punting locomotion of the marine-living crab Pachygrapsus marmoratus. We detected both active and passive phases of running and punting through the observation of crab locomotory behaviour in standardized settings and by three-dimensional kinematic analysis of its dynamic gaits using high-speed video cameras. Variations in different stride parameters were studied and compared. The comparison was done based on the dimensionless parameter the Froude number (Fr) to account for the effect of buoyancy and size variability among the crabs. The underwater spring-loaded inverted pendulum (USLIP) model better fitted the dynamics of aquatic punting. USLIP takes account of the damping effect of the aquatic environment, a variable not considered by the spring-loaded inverted pendulum (SLIP) model in reduced gravity. Our results highlight the underlying principles of aquatic terrestrial locomotion by comparing it with terrestrial locomotion. Comparing punting with running, we show an increased stride period, decreased duty cycle and orientation of the carapace more inclined with the horizontal plane, indicating the significance of fluid forces on the dynamics due to the aquatic environment. Moreover, we discovered periodicity in punting locomotion of crabs and two different gaits, namely, long-flight punting and short-flight punting, distinguished by both footfall patterns and kinematic parameters. The generic fundamental model which belongs to all animals performing both terrestrial and aquatic legged locomotion has implications for control strategies, evolution and translation to robotic artefacts.} }
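One common way to write stance-phase dynamics for a damped, buoyancy-corrected spring-loaded inverted pendulum is given below, as a sketch of what USLIP adds over SLIP; sign conventions vary and the paper's exact formulation may differ.

```latex
% Sketch of stance-phase USLIP dynamics in polar coordinates (r, \theta),
% with \theta measured from the vertical. Relative to the classic SLIP,
% gravity g is reduced to an effective g' (buoyancy) and a damping term
% c\dot{r} models hydrodynamic losses (assumed form):
\[
  m\,(\ddot{r} - r\dot{\theta}^{2}) \;=\; k\,(l_{0} - r) \;-\; c\,\dot{r} \;-\; m g' \cos\theta ,
\]
\[
  m\,(r\ddot{\theta} + 2\dot{r}\dot{\theta}) \;=\; m g' \sin\theta ,
\]
% where m is body mass, k the leg stiffness and l_0 the leg rest length;
% setting c = 0 and g' = g recovers the terrestrial SLIP used for running.
```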
- G. Canal, R. Borgo, A. Coles, A. Drake, D. Huynh, P. Keller, S. Krivić, P. Luff, Q. Mahesar, L. Moreau, S. Parsons, M. Patel, and E. Sklar, “Building trust in human-machine partnerships,” Computer law & security review, vol. 39, p. 105489, 2020. doi:10.1016/j.clsr.2020.105489
[BibTeX] [Abstract] [Download PDF]
Artificial Intelligence (AI) is bringing radical change to our lives. Fostering trust in this technology requires the technology to be transparent, and one route to transparency is to make the decisions that are reached by AIs explainable to the humans that interact with them. This paper lays out an exploratory approach to developing explainability and trust, describing the specific technologies that we are adopting, the social and organizational context in which we are working, and some of the challenges that we are addressing.
@article{lincoln43255, volume = {39}, month = {November}, author = {Gerard Canal and Rita Borgo and Andrew Coles and Archie Drake and Dong Huynh and Perry Keller and Senka Krivi{\'c} and Paul Luff and Quratul-ain Mahesar and Luc Moreau and Simon Parsons and Menisha Patel and Elizabeth Sklar}, title = {Building Trust in Human-Machine Partnerships}, journal = {Computer Law \& Security Review}, doi = {10.1016/j.clsr.2020.105489}, pages = {105489}, year = {2020}, url = {https://eprints.lincoln.ac.uk/id/eprint/43255/}, abstract = {Artificial Intelligence (AI) is bringing radical change to our lives. Fostering trust in this technology requires the technology to be transparent, and one route to transparency is to make the decisions that are reached by AIs explainable to the humans that interact with them. This paper lays out an exploratory approach to developing explainability and trust, describing the specific technologies that we are adopting, the social and organizational context in which we are working, and some of the challenges that we are addressing.} }
- D. Bochtis, L. Benos, M. Lampridi, V. Marinoudi, S. Pearson, and C. G. Sørensen, “Agricultural workforce crisis in light of the covid-19 pandemic,” Sustainability, vol. 12, iss. 19, p. 8212, 2020. doi:10.3390/su12198212
[BibTeX] [Abstract] [Download PDF]
COVID-19 and the restrictive measures towards containing the spread of its infections have seriously affected the agricultural workforce and jeopardized food security. The present study aims at assessing the COVID-19 pandemic impacts on agricultural labor and suggesting strategies to mitigate them. To this end, after an introduction to the pandemic background, the negative consequences on agriculture and the existing mitigation policies, risks to the agricultural workers were benchmarked across the United States’ Standard Occupational Classification system. The individual tasks associated with each occupation in agricultural production were evaluated on the basis of potential COVID-19 infection risk. As criteria, the most prevalent virus transmission mechanisms were considered, namely the possibility of touching contaminated surfaces and the close proximity of workers. The higher risk occupations within the sector were identified, which facilitates the allocation of worker protection resources to the occupations where they are most needed. In particular, the results demonstrated that 50% of the agricultural workforce and 54% of the workers’ annual income are at moderate to high risk. As a consequence, a series of control measures need to be adopted so as to enhance the resilience and sustainability of the sector as well as protect farmers including physical distancing, hygiene practices, and personal protection equipment.
@article{lincoln43697, volume = {12}, number = {19}, month = {October}, author = {Dionysis Bochtis and Lefteris Benos and Maria Lampridi and Vasso Marinoudi and Simon Pearson and Claus G. S{\o}rensen}, title = {Agricultural Workforce Crisis in Light of the COVID-19 Pandemic}, year = {2020}, journal = {Sustainability}, doi = {10.3390/su12198212}, pages = {8212}, url = {https://eprints.lincoln.ac.uk/id/eprint/43697/}, abstract = {COVID-19 and the restrictive measures towards containing the spread of its infections have seriously affected the agricultural workforce and jeopardized food security. The present study aims at assessing the COVID-19 pandemic impacts on agricultural labor and suggesting strategies to mitigate them. To this end, after an introduction to the pandemic background, the negative consequences on agriculture and the existing mitigation policies, risks to the agricultural workers were benchmarked across the United States' Standard Occupational Classification system. The individual tasks associated with each occupation in agricultural production were evaluated on the basis of potential COVID-19 infection risk. As criteria, the most prevalent virus transmission mechanisms were considered, namely the possibility of touching contaminated surfaces and the close proximity of workers. The higher risk occupations within the sector were identified, which facilitates the allocation of worker protection resources to the occupations where they are most needed. In particular, the results demonstrated that 50\% of the agricultural workforce and 54\% of the workers' annual income are at moderate to high risk. As a consequence, a series of control measures need to be adopted so as to enhance the resilience and sustainability of the sector as well as protect farmers including physical distancing, hygiene practices, and personal protection equipment.} }
- F. Camara, N. Bellotto, S. Cosar, F. Weber, D. Nathanael, M. Althoff, J. Wu, J. Ruenz, A. Dietrich, G. Markkula, A. Schieben, F. Tango, N. Merat, and C. Fox, “Pedestrian models for autonomous driving part ii: high-level models of human behavior,” IEEE transactions on intelligent transport systems, 2020. doi:10.1109/TITS.2020.3006767
[BibTeX] [Abstract] [Download PDF]
Autonomous vehicles (AVs) must share space with pedestrians, both in carriageway cases such as cars at pedestrian crossings and off-carriageway cases such as delivery vehicles navigating through crowds on pedestrianized high-streets. Unlike static obstacles, pedestrians are active agents with complex, interactive motions. Planning AV actions in the presence of pedestrians thus requires modelling of their probable future behaviour as well as detecting and tracking them. This narrative review article is Part II of a pair, together surveying the current technology stack involved in this process, organising recent research into a hierarchical taxonomy ranging from low-level image detection to high-level psychological models, from the perspective of an AV designer. This self-contained Part II covers the higher levels of this stack, consisting of models of pedestrian behaviour, from prediction of individual pedestrians' likely destinations and paths, to game-theoretic models of interactions between pedestrians and autonomous vehicles. This survey clearly shows that, although there are good models for optimal walking behaviour, high-level psychological and social modelling of pedestrian behaviour still remains an open research question that requires many conceptual issues to be clarified. Early work has been done on descriptive and qualitative models of behaviour, but much work is still needed to translate them into quantitative algorithms for practical AV control.
@article{lincoln41706, month = {July}, title = {Pedestrian Models for Autonomous Driving Part II: High-Level Models of Human Behavior}, author = {Fanta Camara and Nicola Bellotto and Serhan Cosar and Florian Weber and Dimitris Nathanael and Matthias Althoff and Jingyuan Wu and Johannes Ruenz and Andre Dietrich and Gustav Markkula and Anna Schieben and Fabio Tango and Natasha Merat and Charles Fox}, publisher = {IEEE}, year = {2020}, doi = {10.1109/TITS.2020.3006767}, journal = {IEEE Transactions on Intelligent Transport Systems}, url = {https://eprints.lincoln.ac.uk/id/eprint/41706/}, abstract = {Autonomous vehicles (AVs) must share space with pedestrians, both in carriageway cases such as cars at pedestrian crossings and off-carriageway cases such as delivery vehicles navigating through crowds on pedestrianized high-streets. Unlike static obstacles, pedestrians are active agents with complex, interactive motions. Planning AV actions in the presence of pedestrians thus requires modelling of their probable future behaviour as well as detecting and tracking them. This narrative review article is Part II of a pair, together surveying the current technology stack involved in this process, organising recent research into a hierarchical taxonomy ranging from low-level image detection to high-level psychological models, from the perspective of an AV designer. This self-contained Part II covers the higher levels of this stack, consisting of models of pedestrian behaviour, from prediction of individual pedestrians' likely destinations and paths, to game-theoretic models of interactions between pedestrians and autonomous vehicles. This survey clearly shows that, although there are good models for optimal walking behaviour, high-level psychological and social modelling of pedestrian behaviour still remains an open research question that requires many conceptual issues to be clarified. Early work has been done on descriptive and qualitative models of behaviour, but much work is still needed to translate them into quantitative algorithms for practical AV control.} }
- G. Bosworth, L. Price, M. Collison, and C. Fox, “Unequal futures of rural mobility: challenges for a ‘smart countryside’,” Local economy, vol. 35, iss. 6, p. 586–608, 2020. doi:10.1177/0269094220968231
[BibTeX] [Abstract] [Download PDF]
Current transport strategy in the UK is strongly urban-focused, with assumptions that technological advances in mobility will simply trickle down into rural areas. This paper challenges such a view and instead draws on rural development thinking aligned to a ‘Smart Countryside’ which emphasises the need for place-based approaches. Survey and interview methods are employed to develop a framework of rural needs associated with older people, younger people and businesses. This framework is employed to assess a range of mobility innovations that could most effectively address these needs in different rural contexts. In presenting visions of future rural mobility, the paper also identifies key infrastructure as well as institutional and financial changes that are required to facilitate the roll-out of new technologies across rural areas.
@article{lincoln42612, volume = {35}, number = {6}, month = {September}, author = {Gary Bosworth and Liz Price and Martin Collison and Charles Fox}, title = {Unequal Futures of Rural Mobility: Challenges for a ‘Smart Countryside’}, publisher = {Sage}, year = {2020}, journal = {Local Economy}, doi = {10.1177/0269094220968231}, pages = {586--608}, url = {https://eprints.lincoln.ac.uk/id/eprint/42612/}, abstract = {Current transport strategy in the UK is strongly urban-focused, with assumptions that technological advances in mobility will simply trickle down into rural areas. This paper challenges such a view and instead draws on rural development thinking aligned to a ‘Smart Countryside’ which emphasises the need for place-based approaches. Survey and interview methods are employed to develop a framework of rural needs associated with older people, younger people and businesses. This framework is employed to assess a range of mobility innovations that could most effectively address these needs in different rural contexts. In presenting visions of future rural mobility, the paper also identifies key infrastructure as well as institutional and financial changes that are required to facilitate the roll-out of new technologies across rural areas.} }
- M. T. Fountain, A. Badiee, S. Hemer, A. Delgado, M. Mangan, C. Dowding, F. Davis, and S. Pearson, “The use of light spectrum blocking films to reduce populations of Drosophila suzukii Matsumura in fruit crops,” Scientific reports, vol. 10, iss. 1, 2020. doi:10.1038/s41598-020-72074-8
[BibTeX] [Abstract] [Download PDF]
Spotted wing drosophila, Drosophila suzukii, is a serious invasive pest impacting the production of multiple fruit crops, including soft and stone fruits such as strawberries, raspberries and cherries. Effective control is challenging and reliant on integrated pest management, which includes the use of an ever-decreasing number of approved insecticides. New means to reduce the impact of this pest that can be integrated into control strategies are urgently required. In many production regions, including the UK, soft fruit are typically grown inside tunnels clad with polyethylene-based materials. These can be modified to filter specific wavebands of light. We investigated whether targeted spectral modifications to cladding materials that disrupt insect vision could reduce the incidence of D. suzukii. We present a novel approach that starts from a neuroscientific investigation of insect sensory systems and ends with in-field testing of new cladding materials inspired by the biological data. We show D. suzukii are predominantly sensitive to wavelengths below 405 nm (ultraviolet) and above 565 nm (orange & red) and that targeted blocking of lower wavebands (up to 430 nm) using light-restricting materials reduces pest populations by up to 73% in field trials.
@article{lincoln42446, volume = {10}, number = {1}, month = {September}, author = {Michelle T. Fountain and Amir Badiee and Sebastian Hemer and Alvaro Delgado and Michael Mangan and Colin Dowding and Frederick Davis and Simon Pearson}, title = {The use of light spectrum blocking films to reduce populations of Drosophila suzukii Matsumura in fruit crops}, publisher = {Nature Publishing Group}, year = {2020}, journal = {Scientific Reports}, doi = {10.1038/s41598-020-72074-8}, url = {https://eprints.lincoln.ac.uk/id/eprint/42446/}, abstract = {Spotted wing drosophila, Drosophila suzukii, is a serious invasive pest impacting the production of multiple fruit crops, including soft and stone fruits such as strawberries, raspberries and cherries. Effective control is challenging and reliant on integrated pest management which includes the use of an ever decreasing number of approved insecticides. New means to reduce the impact of this pest that can be integrated into control strategies are urgently required. In many production regions, including the UK, soft fruit are typically grown inside tunnels clad with polyethylene based materials. These can be modified to filter specific wavebands of light. We investigated whether targeted spectral modifications to cladding materials that disrupt insect vision could reduce the incidence of D. suzukii. We present a novel approach that starts from a neuroscientific investigation of insect sensory systems and ends with infield testing of new cladding materials inspired by the biological data. We show D. suzukii are predominantly sensitive to wavelengths below 405 nm (ultraviolet) and above 565 nm (orange \& red) and that targeted blocking of lower wavebands (up to 430 nm) using light restricting materials reduces pest populations up to 73\% in field trials.} }
- F. Del Duchetto, P. Baxter, and M. Hanheide, “Are you still with me? Continuous engagement assessment from a robot’s point of view,” Frontiers in robotics and AI, vol. 7, iss. 116, 2020. doi:10.3389/frobt.2020.00116
[BibTeX] [Abstract] [Download PDF]
Continuously measuring the engagement of users with a robot in a Human-Robot Interaction (HRI) setting paves the way toward in-situ reinforcement learning, improves metrics of interaction quality, and can guide interaction design and behavior optimization. However, engagement is often considered very multi-faceted and difficult to capture in a workable and generic computational model that can serve as an overall measure of engagement. Building upon the intuitive ways humans can successfully assess a situation for a degree of engagement when they see it, we propose a novel regression model (utilizing CNN and LSTM networks) enabling robots to compute a single scalar engagement during interactions with humans from standard video streams, obtained from the point of view of an interacting robot. The model is based on a long-term dataset from an autonomous tour guide robot deployed in a public museum, with continuous annotation of a numeric engagement assessment by three independent coders. We show that this model not only predicts engagement very well in our own application domain but also transfers successfully to an entirely different dataset (with different tasks, environment, camera, robot and people). The trained model and the software are available to the HRI community, at https://github.com/LCAS/engagement_detector, as a tool to measure engagement in a variety of settings.
@article{lincoln42433, volume = {7}, number = {116}, month = {September}, author = {Francesco Del Duchetto and Paul Baxter and Marc Hanheide}, title = {Are You Still With Me? Continuous Engagement Assessment From a Robot's Point of View}, publisher = {Frontiers Media S.A.}, year = {2020}, journal = {Frontiers in Robotics and AI}, doi = {10.3389/frobt.2020.00116}, url = {https://eprints.lincoln.ac.uk/id/eprint/42433/}, abstract = {Continuously measuring the engagement of users with a robot in a Human-Robot Interaction (HRI) setting paves the way toward in-situ reinforcement learning, improve metrics of interaction quality, and can guide interaction design and behavior optimization. However, engagement is often considered very multi-faceted and difficult to capture in a workable and generic computational model that can serve as an overall measure of engagement. Building upon the intuitive ways humans successfully can assess situation for a degree of engagement when they see it, we propose a novel regression model (utilizing CNN and LSTM networks) enabling robots to compute a single scalar engagement during interactions with humans from standard video streams, obtained from the point of view of an interacting robot. The model is based on a long-term dataset from an autonomous tour guide robot deployed in a public museum, with continuous annotation of a numeric engagement assessment by three independent coders. We show that this model not only can predict engagement very well in our own application domain but show its successful transfer to an entirely different dataset (with different tasks, environment, camera, robot and people). The trained model and the software is available to the HRI community, at https://github.com/LCAS/engagement\_detector, as a tool to measure engagement in a variety of settings.} }
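As a rough illustration of the CNN+LSTM regression described in the entry above, here is a minimal sketch of a scalar engagement regressor over video frames. The layer sizes, names and hyperparameters are assumptions for illustration, not the released implementation (which lives at https://github.com/LCAS/engagement_detector).

```python
# Illustrative sketch only: a scalar engagement regressor over video clips.
import torch
import torch.nn as nn

class EngagementRegressor(nn.Module):
    def __init__(self, feat_dim=128, hidden=64):
        super().__init__()
        # Small per-frame CNN encoder (stand-in for a pretrained backbone).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        # LSTM aggregates per-frame features over time.
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, clip):                  # clip: (B, T, 3, H, W)
        b, t = clip.shape[:2]
        feats = self.cnn(clip.flatten(0, 1))  # (B*T, feat_dim)
        out, _ = self.lstm(feats.view(b, t, -1))
        # Sigmoid maps the last hidden state to a scalar in [0, 1].
        return torch.sigmoid(self.head(out[:, -1]))

model = EngagementRegressor()
dummy = torch.rand(1, 8, 3, 64, 64)           # one 8-frame clip
print(model(dummy).item())                    # engagement estimate in [0, 1]
```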
- P. Bosilj, I. Gould, T. Duckett, and G. Cielniak, “Estimating soil aggregate size distribution from images using pattern spectra,” Biosystems engineering, vol. 198, p. 63–77, 2020. doi:10.1016/j.biosystemseng.2020.07.012
[BibTeX] [Abstract] [Download PDF]
A method for quantifying aggregate size distribution from images of soil samples is introduced. Knowledge of soil aggregate size distribution can help to inform soil management practices for the sustainable growth of crops. While current in-field approaches are mostly subjective, obtaining quantifiable results in a laboratory is labour- and time-intensive. Our goal is to develop an imaging technique for quantitative analysis of soil aggregate size distribution, which could provide the basis of a tool for rapid assessment of soil structure. The prediction accuracy of pattern spectra descriptors based on hierarchical representations from attribute morphology is analysed, as well as the impact of using images of different quality and scales. The method is able to handle greater sample complexity than previous approaches, while working with smaller sample sizes that are easier to handle. The results show promise for size analysis of soils with larger structures and minimal sample preparation, as is typical of soil assessment in agriculture.
@article{lincoln42179, volume = {198}, month = {October}, author = {Petra Bosilj and Iain Gould and Tom Duckett and Grzegorz Cielniak}, title = {Estimating soil aggregate size distribution from images using pattern spectra}, publisher = {Elsevier}, year = {2020}, journal = {Biosystems Engineering}, doi = {10.1016/j.biosystemseng.2020.07.012}, pages = {63--77}, url = {https://eprints.lincoln.ac.uk/id/eprint/42179/}, abstract = {A method for quantifying aggregate size distribution from the images of soil samples is introduced. Knowledge of soil aggregate size distribution can help to inform soil management practices for the sustainable growth of crops. While current in-field approaches are mostly subjective, obtaining quantifiable results in a laboratory is labour- and time-intensive. Our goal is to develop an imaging technique for quantitative analysis of soil aggregate size distribution, which could provide the basis of a tool for rapid assessment of soil structure. The prediction accuracy of pattern spectra descriptors based on hierarchical representations from attribute morphology are analysed, as well as the impact of using images of different quality and scales. The method is able to handle greater sample complexity than the previous approaches, while working with smaller samples sizes that are easier to handle. The results show promise for size analysis of soils with larger structures, and minimal sample preparation, as typical of soil assessment in agriculture.} }
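The pattern spectra in the entry above summarise how much image detail disappears at each spatial scale. A minimal granulometry sketch of that idea, assuming simple grayscale openings rather than the paper's attribute morphology on hierarchical representations:

```python
# A pattern spectrum computed with openings of increasing size: each bin is
# the image "mass" removed at that scale. This structuring-element version
# only illustrates the underlying idea, not the paper's method.
import numpy as np
from scipy import ndimage

def pattern_spectrum(image, max_radius=8):
    spectrum = []
    previous = image.astype(float).sum()
    for r in range(1, max_radius + 1):
        # Grayscale opening with a (2r+1)-square structuring element.
        opened = ndimage.grey_opening(image, size=(2 * r + 1, 2 * r + 1))
        mass = opened.astype(float).sum()
        spectrum.append(previous - mass)   # mass removed at this scale
        previous = mass
    return np.array(spectrum)

rng = np.random.default_rng(0)
soil = rng.integers(0, 255, size=(128, 128))   # stand-in for a soil image
print(pattern_spectrum(soil))
```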
- G. Picardi, C. Borrelli, A. Sarti, G. Chimienti, and M. Calisti, “A minimal metric for the characterization of acoustic noise emitted by underwater vehicles,” Sensors, vol. 20, iss. 22, p. 6644, 2020. doi:10.3390/s20226644
[BibTeX] [Abstract] [Download PDF]
Underwater robots emit sound during operations, which can deteriorate the quality of acoustic data recorded by on-board sensors or disturb marine fauna during in vivo observations. Notwithstanding this, there have only been a few attempts at characterizing the acoustic emissions of underwater robots in the literature, and the datasheets of commercially available devices do not report information on this topic. This work has a twofold goal. First, we identified a setup consisting of a camera directly mounted on the robot structure to acquire the acoustic data, and two indicators (i.e., spectral roll-off point and noise introduced to the environment) to provide a simple and intuitive characterization of the acoustic emissions of underwater robots carrying out specific maneuvers in specific environments. Second, we performed the proposed analysis on three underwater robots belonging to the classes of remotely operated vehicles and underwater legged robots. Our results showed how the legged device produced a clearly different signature compared to remotely operated vehicles, which can be an advantage in operations that require low acoustic disturbance. Finally, we argue that the proposed indicators, obtained through a standardized procedure, may be a useful addition to the datasheets of existing underwater robots.
@article{lincoln46141, volume = {20}, number = {22}, month = {November}, author = {Giacomo Picardi and Clara Borrelli and Augusto Sarti and Giovanni Chimienti and Marcello Calisti}, title = {A Minimal Metric for the Characterization of Acoustic Noise Emitted by Underwater Vehicles}, year = {2020}, journal = {Sensors}, doi = {10.3390/s20226644}, pages = {6644}, url = {https://eprints.lincoln.ac.uk/id/eprint/46141/}, abstract = {Underwater robots emit sound during operations which can deteriorate the quality of acoustic data recorded by on-board sensors or disturb marine fauna during in vivo observations. Notwithstanding this, there have only been a few attempts at characterizing the acoustic emissions of underwater robots in the literature, and the datasheets of commercially available devices do not report information on this topic. This work has a twofold goal. First, we identified a setup consisting of a camera directly mounted on the robot structure to acquire the acoustic data and two indicators (i.e., spectral roll-off point and noise introduced to the environment) to provide a simple and intuitive characterization of the acoustic emissions of underwater robots carrying out specific maneuvers in specific environments. Second, we performed the proposed analysis on three underwater robots belonging to the classes of remotely operated vehicles and underwater legged robots. Our results showed how the legged device produced a clearly different signature compared to remotely operated vehicles which can be an advantage in operations that require low acoustic disturbance. Finally, we argue that the proposed indicators, obtained through a standardized procedure, may be a useful addition to datasheets of existing underwater robots} }
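One of the two indicators proposed in the entry above is the spectral roll-off point: the frequency below which a given fraction of the spectral energy lies. A short sketch of how such an indicator can be computed, assuming a 95% energy fraction (the paper's exact definition and fraction may differ):

```python
# Hedged sketch of a spectral roll-off computation over a mono audio signal.
import numpy as np

def spectral_rolloff(signal, sample_rate, fraction=0.95):
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    cumulative = np.cumsum(spectrum)
    # First frequency bin where cumulative energy reaches the fraction.
    idx = np.searchsorted(cumulative, fraction * cumulative[-1])
    return freqs[idx]

rate = 48_000
t = np.arange(rate) / rate
# Synthetic "vehicle noise": a 200 Hz hum plus broadband noise.
noise = np.sin(2 * np.pi * 200 * t) + 0.1 * np.random.randn(rate)
print(f"roll-off: {spectral_rolloff(noise, rate):.0f} Hz")
```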
- C. Hu, C. Xiong, J. Peng, and S. Yue, “Coping with multiple visual motion cues under extremely constrained computation power of micro autonomous robots,” IEEE access, vol. 8, p. 159050–159066, 2020. doi:10.1109/ACCESS.2020.3016893
[BibTeX] [Abstract] [Download PDF]
The perception of different visual motion cues is crucial for autonomous mobile robots to react to or interact with the dynamic visual world. It is still a great challenge for a micro mobile robot to cope with dynamic environments due to the restricted computational resources and the limited functionalities of its visual systems. In this study, we propose a compound visual neural system to automatically extract and fuse different visual motion cues in real-time using the extremely constrained computation power of micro mobile robots. The proposed visual system contains multiple bio-inspired visual motion perceptive neurons, each with a unique role, for example to extract collision visual cues, darker collision cues and directional motion cues. In the embedded system, these multiple visual neurons share a similar presynaptic network to minimise the consumption of computation resources. In the postsynaptic part of the system, visual cues pass results to corresponding action neurons using a lateral inhibition mechanism. The translational motion cues, which are identified by comparing pairs of directional cues, are given the highest priority, followed by the darker colliding cues and approaching cues. Systematic experiments with both virtual visual stimuli and real-world scenarios have been carried out to validate the system’s functionality and reliability. The proposed methods have demonstrated that (1) with extremely limited computation power, it is still possible for a micro mobile robot to extract multiple visual motion cues robustly in a complex dynamic environment; (2) the cues extracted can be fused with a laterally inhibited postsynaptic network, thus enabling the micro robots to respond effectively with different actions, according to different states, in real-time. The proposed embedded visual system has been modularised and can be easily implemented in other autonomous mobile platforms for real-time applications. The system could also be used by neurophysiologists to test new hypotheses pertaining to biological visual neural systems.
@article{lincoln43658, volume = {8}, month = {September}, author = {Cheng Hu and Caihua Xiong and Jigen Peng and Shigang Yue}, title = {Coping With Multiple Visual Motion Cues Under Extremely Constrained Computation Power of Micro Autonomous Robots}, publisher = {IEEE}, year = {2020}, journal = {IEEE Access}, doi = {10.1109/ACCESS.2020.3016893}, pages = {159050--159066}, url = {https://eprints.lincoln.ac.uk/id/eprint/43658/}, abstract = {The perception of different visual motion cues is crucial for autonomous mobile robots to react to or interact with the dynamic visual world. It is still a great challenge for a micro mobile robot to cope with dynamic environments due to the restricted computational resources and the limited functionalities of its visual systems. In this study, we propose a compound visual neural system to automatically extract and fuse different visual motion cues in real-time using the extremely constrained computation power of micro mobile robots. The proposed visual system contains multiple bio-inspired visual motion perceptive neurons each with a unique role, for example to extract collision visual cues, darker collision cue and directional motion cues. In the embedded system, these multiple visual neurons share a similar presynaptic network to minimise the consumption of computation resources. In the postsynaptic part of the system, visual cues pass results to corresponding action neurons using lateral inhibition mechanism. The translational motion cues, which are identified by comparing pairs of directional cues, are given the highest priority, followed by the darker colliding cues and approaching cues. Systematic experiments with both virtual visual stimuli and real-world scenarios have been carried out to validate the system's functionality and reliability. The proposed methods have demonstrated that (1) with extremely limited computation power, it is still possible for a micro mobile robot to extract multiple visual motion cues robustly in a complex dynamic environment; (2) the cues extracted can be fused with a lateral inhibited postsynaptic network, thus enabling the micro robots to respond effectively with different actions, accordingly to different states, in real-time. The proposed embedded visual system has been modularised and can be easily implemented in other autonomous mobile platforms for real-time applications. The system could also be used by neurophysiologists to test new hypotheses pertaining to biological visual neural systems.} }
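The entry above prioritises translational cues over darker-collision and approaching cues via lateral inhibition. A toy sketch of that arbitration step, with invented activation values and threshold (the neuron models themselves are omitted):

```python
# Illustrative cue arbitration: the highest-priority cue whose activation
# crosses threshold wins and suppresses the rest. Values are stand-ins.
PRIORITY = ["translating", "darker_collision", "approaching"]  # high to low

def arbitrate(activations, threshold=0.5):
    """Pick the highest-priority cue whose activation crosses threshold."""
    for cue in PRIORITY:
        if activations.get(cue, 0.0) >= threshold:
            return cue          # winner suppresses lower-priority cues
    return "no_action"

print(arbitrate({"approaching": 0.9, "translating": 0.6}))  # -> translating
print(arbitrate({"approaching": 0.7}))                      # -> approaching
```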
- Q. Fu and S. Yue, “Modelling Drosophila motion vision pathways for decoding the direction of translating objects against cluttered moving backgrounds,” Biological cybernetics, vol. 114, p. 443–460, 2020. doi:10.1007/s00422-020-00841-x
[BibTeX] [Abstract] [Download PDF]
Decoding the direction of translating objects in front of cluttered moving backgrounds, accurately and efficiently, is still a challenging problem. In nature, lightweight and low-powered flying insects apply motion vision to detect a moving target in highly variable environments during flight, which are excellent paradigms for learning motion perception strategies. This paper investigates the fruit fly Drosophila motion vision pathways and presents computational modelling based on cutting-edge physiological research. The proposed visual system model features bio-plausible ON and OFF pathways, and wide-field horizontal-sensitive (HS) and vertical-sensitive (VS) systems. The main contributions of this research are twofold: 1) the proposed model articulates the forming of both direction-selective (DS) and direction-opponent (DO) responses, revealed as principal features of motion perception neural circuits, in a feed-forward manner; 2) it also shows robust direction selectivity to translating objects in front of cluttered moving backgrounds, via the modelling of spatiotemporal dynamics including a combination of motion pre-filtering mechanisms and ensembles of local correlators inside both the ON and OFF pathways, which works effectively to suppress irrelevant background motion or distractors, and to improve the dynamic response. Accordingly, the direction of translating objects is decoded as global responses of both the HS and VS systems, with positive or negative output indicating preferred-direction (PD) or null-direction (ND) translation. The experiments have verified the effectiveness of the proposed neural system model, and demonstrated its responsive preference to faster-moving, higher-contrast and larger-size targets embedded in cluttered moving backgrounds.
@article{lincoln46870, volume = {114}, month = {October}, author = {Qinbing Fu and Shigang Yue}, title = {Modelling Drosophila motion vision pathways for decoding the direction of translating objects against cluttered moving backgrounds}, publisher = {Springer}, year = {2020}, journal = {Biological Cybernetics}, doi = {10.1007/s00422-020-00841-x}, pages = {443--460}, url = {https://eprints.lincoln.ac.uk/id/eprint/46870/}, abstract = {Decoding the direction of translating objects in front of cluttered moving backgrounds, accurately and efficiently, is still a challenging problem. In nature, lightweight and low-powered flying insects apply motion vision to detect a moving target in highly variable environments during flight, which are excellent paradigms to learn motion perception strategies. This paper investigates the fruit fly Drosophila motion vision pathways and presents computational modelling based on cutting-edge physiological researches. The proposed visual system model features bio-plausible ON and OFF pathways, wide-field horizontal-sensitive (HS) and vertical-sensitive (VS) systems. The main contributions of this research are on two aspects: 1) the proposed model articulates the forming of both direction-selective (DS) and direction-opponent (DO) responses revealed as principal features of motion perception neural circuits, in a feed-forward manner; 2) it also shows robust direction selectivity to translating objects in front of cluttered moving backgrounds, via the modelling of spatiotemporal dynamics including combination of motion pre-filtering mechanisms and ensembles of local correlators inside both the ON and OFF pathways, which works effectively to suppress irrelevant background motion or distractors, and to improve the dynamic response. Accordingly, the direction of translating objects is decoded as global responses of both the HS and VS systems with positive or negative output indicating preferred-direction (PD) or null-direction (ND) translation. The experiments have verified the effectiveness of the proposed neural system model, and demonstrated its responsive preference to faster-moving, higher-contrast and larger-size targets embedded in cluttered moving backgrounds.} }
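The model in the entry above builds its ON and OFF pathways from ensembles of local correlators. A minimal sketch of one Hassenstein-Reichardt-style correlator pair, the classic elementary motion detector such models start from (delay, period and stimulus here are purely illustrative):

```python
# One direction-opponent correlator: a delayed copy of one photoreceptor is
# correlated with its neighbour, and the mirror-symmetric half is subtracted.
import numpy as np

def emd_response(left, right, delay=1):
    """Direction-opponent response of one correlator pair over time."""
    d_left = np.roll(left, delay)    # delayed left input
    d_right = np.roll(right, delay)  # delayed right input
    d_left[:delay] = d_right[:delay] = 0.0
    return d_left * right - d_right * left   # PD minus ND half-detector

t = np.arange(100)
stimulus = np.sin(2 * np.pi * t / 20)
# Right input lags the left by 2 samples: the pattern moves left-to-right.
response = emd_response(stimulus, np.roll(stimulus, 2))
print(response.mean() > 0)   # positive mean -> preferred-direction motion
```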
- S. Iacoponi, M. Calisti, and C. Laschi, “Simulation and analysis of microspines interlocking behavior on rocky surfaces: an in-depth study of the isolated spine,” Journal of mechanisms and robotics, vol. 12, iss. 6, 2020. doi:10.1115/1.4047725
[BibTeX] [Abstract] [Download PDF]
Microspine grippers address a large variety of possible applications, especially in field robotics and manipulation in extreme environments. Predicting and modeling the gripper behavior remains a major challenge to this day. One of the most complex aspects of these predictions is how to model the spine-to-rock interaction of the spine tip with the local asperity. This paper proposes a single-spine model in order to fill the gap in knowledge in this specific field. A new model for the anchoring resistance of a single spine is proposed and discussed. The model is then applied to a simulation campaign. With the aid of simulations and analytic functions, we correlated performance characteristics of a spine with a set of quantitative, macroscopic variables related to the spine, the substrate and its usage. Finally, this paper presents some experimental comparison tests and discusses traversal phenomena observed during the tests.
@article{lincoln46135, volume = {12}, number = {6}, month = {December}, author = {Saverio Iacoponi and Marcello Calisti and Cecilia Laschi}, title = {Simulation and Analysis of Microspines Interlocking Behavior on Rocky Surfaces: An In-Depth Study of the Isolated Spine}, journal = {Journal of Mechanisms and Robotics}, doi = {10.1115/1.4047725}, year = {2020}, url = {https://eprints.lincoln.ac.uk/id/eprint/46135/}, abstract = {Microspine grippers address a large variety of possible applications, especially in field robotics and manipulation in extreme environments. Predicting and modeling the gripper behavior remains a major challenge to this day. One of the most complex aspects of these predictions is how to model the spine to rock interaction of the spine tip with the local asperity. This paper proposes a single spine model, in order to fill the gap of knowledge in this specific field. A new model for the anchoring resistance of a single spine is proposed and discussed. The model is then applied to a simulation campaign. With the aid of simulations and analytic functions, we correlated performance characteristics of a spine with a set of quantitative, macroscopic variables related to the spine, the substrate and its usage. Eventually, this paper presents some experimental comparison tests and discusses traversal phenomena observed during the tests.} }
- S. Cosar, M. Fernandez-Carmona, R. Agrigoroaie, J. Pages, F. Ferland, F. Zhao, S. Yue, N. Bellotto, and A. Tapus, “ENRICHME: perception and interaction of an assistive robot for the elderly at home,” International journal of social robotics, vol. 12, iss. 3, p. 779–805, 2020. doi:10.1007/s12369-019-00614-y
[BibTeX] [Abstract] [Download PDF]
Recent technological advances have enabled modern robots to become part of our daily life. In particular, assistive robotics emerged as an exciting research topic that can provide solutions to improve the quality of life of elderly and vulnerable people. This paper introduces the robotic platform developed in the ENRICHME project, with particular focus on its innovative perception and interaction capabilities. The project's main goal is to enrich the day-to-day experience of elderly people at home with technologies that enable health monitoring, complementary care, and social support. The paper presents several modules created to provide cognitive stimulation services for elderly users with mild cognitive impairments. The ENRICHME robot was tested in three pilot sites around Europe (Poland, Greece, and UK) and proven to be an effective assistant for the elderly at home.
@article{lincoln39037, volume = {12}, number = {3}, month = {July}, author = {Serhan Cosar and Manuel Fernandez-Carmona and Roxana Agrigoroaie and Jordi Pages and Francois Ferland and Feng Zhao and Shigang Yue and Nicola Bellotto and Adriana Tapus}, title = {ENRICHME: Perception and Interaction of an Assistive Robot for the Elderly at Home}, publisher = {Springer}, year = {2020}, journal = {International Journal of Social Robotics}, doi = {10.1007/s12369-019-00614-y}, pages = {779--805}, url = {https://eprints.lincoln.ac.uk/id/eprint/39037/}, abstract = {Recent technological advances enabled modern robots to become part of our daily life. In particular, assistive robotics emerged as an exciting research topic that can provide solutions to improve the quality of life of elderly and vulnerable people. This paper introduces the robotic platform developed in the ENRICHME project, with particular focus on its innovative perception and interaction capabilities. The project's main goal is to enrich the day-to-day experience of elderly people at home with technologies that enable health monitoring, complementary care, and social support. The paper presents several modules created to provide cognitive stimulation services for elderly users with mild cognitive impairments. The ENRICHME robot was tested in three pilot sites around Europe (Poland, Greece, and UK) and proven to be an effective assistant for the elderly at home.} }
- I. J. Gould, I. Wright, M. Collison, E. Ruto, G. Bosworth, and S. Pearson, “The impact of coastal flooding on agriculture: a case study of lincolnshire, united kingdom,” Land degradation & development, vol. 31, iss. 12, p. 1545–1559, 2020. doi:10.1002/ldr.3551
[BibTeX] [Abstract] [Download PDF]
Under future climate predictions the incidence of coastal flooding is set to rise. Many coastal regions at risk, such as those surrounding the North Sea, comprise large areas of low-lying and productive agricultural land. Flood risk assessments typically emphasise the economic consequences of coastal flooding on urban areas and national infrastructure. Impacts on agricultural land have seen less attention, and considerations tend to omit the long-term effects of soil salinity. The aim of this study is to develop a universal framework to evaluate the economic impact of coastal flooding on agriculture. We incorporated existing flood models, satellite-acquired crop data, soil salinity and crop sensitivity to give a novel and detailed assessment of salt damage to agricultural productivity over time. We focussed our case study on low-lying, highly productive agricultural land with a history of flooding in Lincolnshire, UK. The potential impact of agricultural flood damage varied across our study region. Assuming typical cropping does not change post-flood, financial losses range from £1,366/ha to £5,526/ha per inundation; these losses would be reduced by between 35% and 85% in the likely event that an alternative, more salt-tolerant cropping regime is implemented post-flood. These losses are substantially higher than losses calculated on the same areas using the established flood risk assessment frameworks conventionally used for freshwater flood assessments, with the differences attributed to our longer-term salt damage projections impacting over several years. This suggests flood protection policy needs to consider the local and long-term impacts of flooding on agricultural land.
@article{lincoln40049, volume = {31}, number = {12}, month = {July}, author = {Iain J Gould and Isobel Wright and Martin Collison and Eric Ruto and Gary Bosworth and Simon Pearson}, title = {The impact of coastal flooding on agriculture: a case study of Lincolnshire, United Kingdom}, publisher = {Wiley}, year = {2020}, journal = {Land Degradation \& Development}, doi = {10.1002/ldr.3551}, pages = {1545--1559}, url = {https://eprints.lincoln.ac.uk/id/eprint/40049/}, abstract = {Under future climate predictions the incidence of coastal flooding is set to rise. Many coastal regions at risk, such as those surrounding the North Sea, comprise large areas of low-lying and productive agricultural land. Flood risk assessments typically emphasise the economic consequences of coastal flooding on urban areas and national infrastructure. Impacts on agricultural land have seen less attention, and considerations tend to omit the long term effects of soil salinity. The aim of this study is to develop a universal framework to evaluate the economic impact of coastal flooding to agriculture. We incorporated existing flood models, satellite acquired crop data, soil salinity and crop sensitivity to give a novel and detailed assessment of salt damage to agricultural productivity over time. We focussed our case study on low-lying, highly productive agricultural land with a history of flooding in Lincolnshire, UK. The potential impact of agricultural flood damage varied across our study region.Assuming typical cropping does not change post-flood, financial losses range from {\pounds}1,366/ha to {\pounds}5,526/ha per inundation; these losses would be reduced by between 35\% up to 85\% in the likely event that an alternative, more salt-tolerant, cropping, regime is implemented post-flood. These losses are substantially higher than loses calculated on the same areas using established flood risk assessment framework conventionally used for freshwater flood assessments, with differences attributed to our longer term salt damage projections impacting over several years. This suggests flood protection policy needs to consider local and long terms impacts of flooding on agricultural land.} }
- F. Camara, S. Cosar, N. Bellotto, N. Merat, and C. Fox, “Continuous game theory pedestrian modelling method for autonomous vehicles,” in Human factors in intelligent vehicles, C. Olaverri-Monreal, F. García-Fernández, and R. J. F. Rossetti, Eds., River Publishers, 2020.
[BibTeX] [Abstract] [Download PDF]
Autonomous Vehicles (AVs) must interact with other road users. They must understand and adapt to complex pedestrian behaviour, especially during crossings where priority is not clearly defined. This includes feedback effects such as modelling a pedestrian's likely behaviours resulting from changes in the AV's behaviour. For example, whether a pedestrian will yield if the AV accelerates, and vice versa. To enable such automated interactions, it is necessary for the AV to possess a statistical model of the pedestrian's responses to its own actions. A previous work demonstrated a proof-of-concept method to fit parameters to a simplified model based on data from a highly artificial discrete laboratory task with human subjects. The method was based on LIDAR-based person tracking, game theory, and Gaussian process analysis. The present study extends this method to enable analysis of more realistic continuous human experimental data. It shows for the first time how game-theoretic predictive parameters can be fitted to pedestrians' natural and continuous motion during road-crossings, and how predictions can be made about their interactions with AV controllers in similar real-world settings.
@incollection{lincoln42872, month = {October}, author = {Fanta Camara and Serhan Cosar and Nicola Bellotto and Natasha Merat and Charles Fox}, series = {River Publishers Series in Transport Technology}, booktitle = {Human Factors in Intelligent Vehicles}, editor = {Cristina Olaverri-Monreal and Fernando Garc{\'i}a-Fern{\'a}ndez and Rosaldo J. F. Rossetti}, title = {Continuous Game Theory Pedestrian Modelling Method for Autonomous Vehicles}, publisher = {River Publishers}, year = {2020}, url = {https://eprints.lincoln.ac.uk/id/eprint/42872/}, abstract = {Autonomous Vehicles (AVs) must interact with other road users. They must understand and adapt to complex pedestrian behaviour, especially during crossings where priority is not clearly defined. This includes feedback effects such as modelling a pedestrian's likely behaviours resulting from changes in the AVs behaviour. For example, whether a pedestrian will yield if the AV accelerates, and vice versa. To enable such automated interactions, it is necessary for the AV to possess a statistical model of the pedestrian's responses to its own actions. A previous work demonstrated a proof-of-concept method to fit parameters to a simplified model based on data from a highly artificial discrete laboratory task with human subjects. The method was based on LIDAR-based person tracking, game theory, and Gaussian process analysis. The present study extends this method to enable analysis of more realistic continuous human experimental data. It shows for the first time how game-theoretic predictive parameters can be fit into pedestrians natural and continuous motion during road-crossings, and how predictions can be made about their interactions with AV controllers in similar real-world settings.} }
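The chapter above fits game-theoretic parameters to observed crossing behaviour. A toy payoff-matrix sketch of the underlying crossing game, with invented utilities; the chapter estimates such parameters from tracked motion rather than assuming them:

```python
# A "chicken"-style crossing interaction between an AV and a pedestrian.
AV, PED = 0, 1
CROSS, YIELD = "cross", "yield"

# payoffs[(av_action, ped_action)] = (av_utility, ped_utility), all invented.
payoffs = {
    (CROSS, CROSS): (-10.0, -10.0),  # collision: bad for both
    (CROSS, YIELD): (1.0, -0.2),     # AV passes, pedestrian waits
    (YIELD, CROSS): (-0.2, 1.0),     # pedestrian crosses, AV waits
    (YIELD, YIELD): (-0.5, -0.5),    # deadlock: both lose a little time
}

def best_response(player, opponent_action):
    """Action maximising this player's payoff given the other's move."""
    key = (lambda a: payoffs[(a, opponent_action)][AV]) if player == AV \
        else (lambda a: payoffs[(opponent_action, a)][PED])
    return max([CROSS, YIELD], key=key)

print(best_response(AV, CROSS))   # -> yield (avoid collision)
print(best_response(AV, YIELD))   # -> cross
```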
- M. Al-Khafajiy, T. Baker, A. Hussien, and A. Cotgrave, “UAV and fog computing for IoE-based systems: a case study on environment disasters prediction and recovery plans,” in Unmanned aerial vehicles in smart cities, Springer, 2020, p. 133–152. doi:10.1007/978-3-030-38712-9_8
[BibTeX] [Abstract] [Download PDF]
In the past few years, an exponential upsurge in the development and use of Internet of Everything (IoE)-based systems has evolved. IoE-based systems bring together the power of embedded smart things (e.g., sensors and actuators), flying-things (e.g., drones), and machine learning and data processing mediums (e.g., fog and edge computing) to create intelligent and powerful networked systems. These systems benefit various aspects of our modern smart cities, ranging from healthcare and smart homes to smart motorways, for example via making informed decisions. In IoE-based systems, sensors sense the surrounding environment and return data for processing: unmanned aerial vehicles (UAVs) survey and scan areas that are difficult to reach by human beings (e.g., oceans and mountains), and machine learning algorithms are used to classify, interpret and learn from the data collected over fog and edge computing nodes. In fact, the integration of UAVs, fog computing and machine learning provides fast, cost-effective and safe deployments for many civil and military applications. While fog computing is a new network paradigm of distributed computing nodes at the edge of the network, fog extends the cloud's capability to the edge to provide better quality of service (QoS), and it is particularly suitable for applications that have strict requirements on latency and reliability. Also, fog computing has the advantage of providing support for mobility, location awareness, scalability and efficient integration with other systems such as cloud computing. Fog computing and UAVs are an integral part of the future information and communication technologies (ICT) that are able to achieve higher functionality, optimised resource utilisation and better management to improve both quality of service (QoS) and quality of experience (QoE). Systems that combine both these technologies include natural disaster prediction systems, which could use fog-based algorithms to predict and warn of upcoming disaster threats, such as floods. The fog computing algorithms make decisions and predictions using data from both the embedded sensors, such as environmental sensors, and from flying-things, such as UAVs that provide live images and videos.
@incollection{lincoln47572, month = {April}, author = {Mohammed Al-Khafajiy and Thar Baker and Aseel Hussien and Alison Cotgrave}, booktitle = {Unmanned Aerial Vehicles in Smart Cities}, title = {UAV and Fog Computing for IoE-Based Systems: A Case Study on Environment Disasters Prediction and Recovery Plans}, publisher = {Springer}, year = {2020}, doi = {10.1007/978-3-030-38712-9\_8}, pages = {133--152}, url = {https://eprints.lincoln.ac.uk/id/eprint/47572/}, abstract = {In the past few years, an exponential upsurge in the development and use of the Internet of Everything (IoE)-based systems has evolved. IoE-based systems bring together the power of embedded smart things (e.g., sensors and actuators), flying-things (e.g., drones), and machine learning and data processing mediums (e.g., fog and edge computing) to create intelligent and powerful networked systems. These systems benefit various aspects of our modern smart cities{--}ranging from healthcare and smart homes to smart motorways, for example, via making informed decisions. In IoE-based systems, sensors sense the surrounding environment and return data for processing: Unmanned aerial vehicles (UAVs) survey and scan areas that are difficult to reach by human beings (e.g., oceans and mountains), and machine learning algorithms are used to classify data, interpret and learn from collected data over fog and edge computing nodes. In fact, the integration of UAVs, fog computing and machine learning provides fast, cost-effective and safe deployments for many civil and military applications. While fog computing is a new network paradigm of distributed computing nodes at the edge of the network, fog extends the cloud's capability to the edge to provide better quality of service (QoS), and it is particularly suitable for applications that have strict requirements on latency and reliability. Also, fog computing has the advantage of providing the support of mobility, location awareness, scalability and efficient integration with other systems such as cloud computing. Fog computing and UAV are an integral part of the future information and communication technologies (ICT) that are able to achieve higher functionality, optimised resources utilisation and better management to improve both quality of service (QoS) and quality of experiences (QoE). Such systems that can combine both these technologies are natural disaster prediction systems, which could use fog-based algorithms to predict and warn for upcoming disaster threats, such as floods. The fog computing algorithms use data to make decisions and predictions from both the embedded-sensors, such as environmental sensors and data from flying-things, such as data from UAV that include live images and videos.} }
- F. Del Duchetto, P. Baxter, and M. Hanheide, “Automatic assessment and learning of robot social abilities,” in Companion of the 2020 ACM/IEEE international conference on human-robot interaction, 2020, p. 561–563. doi:10.1145/3371382.3377430
[BibTeX] [Abstract] [Download PDF]
One of the key challenges of current state-of-the-art robotic deployments in public spaces, where the robot is supposed to interact with humans, is the generation of behaviors that are engaging for the users. Eliciting engagement during an interaction, and maintaining it after the initial phase of the interaction, is still an issue to be overcome. There is evidence that engagement in learning activities is higher in the presence of a robot, particularly if novel [1], but after the initial engagement state, long and non-interactive behaviors are detrimental to the continued engagement of the users [5, 16]. Overcoming this limitation requires designing robots with enhanced social abilities that go past monolithic behaviours and introducing in-situ learning and adaptation to the specific users and situations. To do so, the robot must have the ability to perceive the state of the humans participating in the interaction and use this feedback for the selection of its own actions over time [27].
@inproceedings{lincoln40509, booktitle = {Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction}, month = {March}, title = {Automatic Assessment and Learning of Robot Social Abilities}, author = {Francesco Del Duchetto and Paul Baxter and Marc Hanheide}, year = {2020}, pages = {561--563}, doi = {10.1145/3371382.3377430}, url = {https://eprints.lincoln.ac.uk/id/eprint/40509/}, abstract = {One of the key challenges of current state-of-the-art robotic deployments in public spaces, where the robot is supposed to interact with humans, is the generation of behaviors that are engaging for the users. Eliciting engagement during an interaction, and maintaining it after the initial phase of the interaction, is still an issue to be overcome. There is evidence that engagement in learning activities is higher in the presence of a robot, particularly if novel [1], but after the initial engagement state, long and non-interactive behaviors are detrimental to the continued engagement of the users [5, 16]. Overcoming this limitation requires to design robots with enhanced social abilities that go past monolithic behaviours and introduces in-situ learning and adaptation to the specific users and situations. To do so, the robot must have the ability to perceive the state of the humans participating in the interaction and use this feedback for the selection of its own actions over time [27].} }
- Q. Fu and S. Yue, “Complementary visual neuronal systems model for collision sensing,” in The IEEE international conference on advanced robotics and mechatronics (ARM), 2020. doi:10.1109/ICARM49381.2020.9195303
[BibTeX] [Abstract] [Download PDF]
Inspired by insects’ visual brains, this paper presents original modelling of a complementary visual neuronal systems model for real-time and robust collision sensing. Two categories of wide-field motion sensitive neurons, i.e., the lobula giant movement detectors (LGMDs) in locusts and the lobula plate tangential cells (LPTCs) in flies, have been studied intensively. The LGMDs have specific selectivity to approaching objects in depth that threaten collision; whilst the LPTCs are only sensitive to translating objects in horizontal and vertical directions. Though each has been modelled and applied in various visual scenes including robot scenarios, little has been done on investigating their complementary functionality and selectivity when functioning together. To fill this vacancy, we introduce a hybrid model combining two LGMDs (LGMD-1 and LGMD-2) with horizontally (rightward and leftward) sensitive LPTCs (LPTC-R and LPTC-L) specialising in fast collision perception. With coordination and competition between different activated neurons, the proximity feature of frontal approaching stimuli can be largely sharpened up by suppressing translating and receding motions. The proposed method has been implemented in ground micro-mobile robots as embedded systems. The multi-robot experiments have demonstrated the effectiveness and robustness of the proposed model for frontal collision sensing, which outperforms previous single-type neuron computation methods against translating interference.
@inproceedings{lincoln42134, booktitle = {The IEEE International Conference on Advanced Robotics and Mechatronics (ARM)}, month = {December}, title = {Complementary Visual Neuronal Systems Model for Collision Sensing}, author = {Qinbing Fu and Shigang Yue}, year = {2020}, doi = {10.1109/ICARM49381.2020.9195303}, url = {https://eprints.lincoln.ac.uk/id/eprint/42134/}, abstract = {Inspired by insects' visual brains, this paper presents original modelling of a complementary visual neuronal systems model for real-time and robust collision sensing. Two categories of wide-field motion sensitive neurons, i.e., the lobula giant movement detectors (LGMDs) in locusts and the lobula plate tangential cells (LPTCs) in flies, have been studied, intensively. The LGMDs have specific selectivity to approaching objects in depth that threaten collision; whilst the LPTCs are only sensitive to translating objects in horizontal and vertical directions. Though each has been modelled and applied in various visual scenes including robot scenarios, little has been done on investigating their complementary functionality and selectivity when functioning together. To fill this vacancy, we introduce a hybrid model combining two LGMDs (LGMD-1 and LGMD2) with horizontally (rightward and leftward) sensitive LPTCs (LPTC-R and LPTC-L) specialising in fast collision perception. With coordination and competition between different activated neurons, the proximity feature by frontal approaching stimuli can be largely sharpened up by suppressing translating and receding motions. The proposed method has been implemented in ground micro-mobile robots as embedded systems. The multi-robot experiments have demonstrated the effectiveness and robustness of the proposed model for frontal collision sensing, which outperforms previous single-type neuron computation methods against translating interference.} }
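The paper above combines LGMD collision detectors with LPTC translation detectors. As a loose illustration of the LGMD side only, here is a hypothetical excitation measure that grows as an object looms; the real model adds ON/OFF channels, lateral inhibition and LPTC-based translation suppression, none of which is reproduced here:

```python
# Rectified frame-to-frame luminance change as a crude looming cue.
import numpy as np

def lgmd_excitation(prev_frame, frame):
    """Scalar excitation from rectified temporal difference."""
    diff = np.abs(frame.astype(float) - prev_frame.astype(float))
    return diff.mean() / 255.0

def looming_square(size, half_width):
    """Dark square of growing size on a light background."""
    img = np.full((size, size), 255, dtype=np.uint8)
    c = size // 2
    img[c - half_width:c + half_width, c - half_width:c + half_width] = 0
    return img

frames = [looming_square(64, w) for w in (4, 8, 16, 24)]
for prev, cur in zip(frames, frames[1:]):
    print(f"{lgmd_excitation(prev, cur):.3f}")  # rises as the object looms
```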
- M. Al-Khafajiy, T. Baker, A. Waraich, O. Alfandi, and A. Hussien, “Enabling high performance fog computing through fog-2-fog coordination model,” in 2019 IEEE/ACS 16th international conference on computer systems and applications (AICCSA), 2020, p. 1–6. doi:10.1109/AICCSA47632.2019.9035353
[BibTeX] [Abstract] [Download PDF]
Fog computing is a promising network paradigm in the IoT area as it has great potential to reduce processing time for time-sensitive IoT applications. However, fogs can get congested very easily due to fog resource limitations in terms of capacity and computational power. In this paper, we tackle the issue of fog congestion through a request offloading algorithm. The results show that the performance of fog nodes can be increased by sharing a fog's overload over several fog nodes. The proposed offloading algorithm could have the potential to achieve a sustainable network paradigm and highlights the significant benefits of fog offloading for the future networking paradigm.
@inproceedings{lincoln47564, month = {March}, author = {Mohammed Al-Khafajiy and Thar Baker and Atif Waraich and Omar Alfandi and Aseel Hussien}, booktitle = {2019 IEEE/ACS 16th International Conference on Computer Systems and Applications (AICCSA)}, title = {Enabling High Performance Fog Computing through Fog-2-Fog Coordination Model}, publisher = {IEEE}, doi = {10.1109/AICCSA47632.2019.9035353}, pages = {1--6}, year = {2020}, url = {https://eprints.lincoln.ac.uk/id/eprint/47564/}, abstract = {Fog computing is a promising network paradigm in the IoT area as it has a great potential to reduce processing time for time-sensitive IoT applications. However, fog can get congested very easily due to fog resources limitations in term of capacity and computational power. In this paper, we tackle the issue of fog congestion through a request offloading algorithm. The result shows that the performance of fogs nodes can be increased be sharing fog's overload over several fog nodes. The proposed offloading algorithm could have the potential to achieve a sustainable network paradigm and highlights the significant benefits of fog offloading for the future networking paradigm.} }
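The paper above relieves congestion by offloading requests between fog nodes. A toy sketch of such a fog-2-fog handover rule, with invented capacities and a least-loaded-neighbour policy that is not the paper's algorithm:

```python
# Surplus requests above a node's capacity are forwarded to the least-loaded
# neighbour, stopping if every neighbour is saturated.
from dataclasses import dataclass, field

@dataclass
class FogNode:
    name: str
    capacity: int                 # requests the node can process per tick
    queue: list = field(default_factory=list)

    def load(self):
        return len(self.queue) / self.capacity

def offload(node, neighbours):
    """Forward requests above capacity to the least-loaded neighbour."""
    while len(node.queue) > node.capacity:
        target = min(neighbours, key=FogNode.load)
        if target.load() >= 1.0:
            break                 # every neighbour is saturated; keep locally
        target.queue.append(node.queue.pop())

a = FogNode("fog-a", capacity=2, queue=list(range(6)))
b, c = FogNode("fog-b", capacity=4), FogNode("fog-c", capacity=2)
offload(a, [b, c])
print(len(a.queue), len(b.queue), len(c.queue))   # -> 2 3 1
```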
- M. H. Nair, C. M. Saaj, and A. G. Esfahani, “On robotic in-orbit assembly of large aperture space telescopes,” in Proc. IEEE/RSJ international conference on intelligent robots and systems (IROS), 2020.
[BibTeX] [Abstract] [Download PDF]
Space has found itself amidst numerous missions benefitting life on Earth and enabling mankind to explore further. The space community has been on the move, launching various on-orbit missions that tackle the extremities of the space environment with the use of robots, performing tasks like assembly, maintenance, repairs, etc. The urge to explore further into the universe for scientific benefit has driven the rise of modular Large-Space Telescopes (LASTs). With respect to the challenges of the in-space assembly of a LAST, a five Degrees-of-Freedom (DoF) End-Over-End Walking Robot (E-Walker) is presented in this paper. The Dynamical Model and Gait Pattern of the E-Walker are discussed with reference to the different phases of its motion. For the initial verification of the E-Walker model, a PID controller was used to make the E-Walker follow the desired trajectory. A mission concept discussing a potential strategy for assembling a 25m LAST with 342 Primary Mirror Units (PMUs) is briefly discussed. Simulation results show that precise tracking of the E-Walker along a desired trajectory is achieved without exceeding the joint torque limits.
@inproceedings{lincoln48338, booktitle = {Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)}, month = {October}, title = {On Robotic In-Orbit Assembly of Large Aperture Space Telescopes}, author = {Manu H. Nair and Chakravarthini M. Saaj and Amir G. Esfahani}, publisher = {IEEE}, year = {2020}, url = {https://eprints.lincoln.ac.uk/id/eprint/48338/}, abstract = {Space missions increasingly benefit life on Earth and extend humanity's reach for further exploration. The space community has been launching various on-orbit missions that tackle the extremities of the space environment using robots to perform tasks such as assembly, maintenance and repair. The urge to explore further into the universe for scientific benefit has driven the rise of modular Large-Space Telescopes (LASTs). With respect to the challenges of the in-space assembly of a LAST, a five Degrees-of-Freedom (DoF) End-Over-End Walking Robot (E-Walker) is presented in this paper. The dynamical model and gait pattern of the E-Walker are discussed with reference to the different phases of its motion. For the initial verification of the E-Walker model, a PID controller was used to make the E-Walker follow the desired trajectory. A mission concept discussing a potential strategy for assembling a 25m LAST with 342 Primary Mirror Units (PMUs) is briefly discussed. Simulation results show that precise tracking of the E-Walker along a desired trajectory is achieved without exceeding the joint torques.} }
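As context for the verification step described above, here is a minimal single-joint sketch of PID trajectory tracking on a double-integrator joint model. All gains and parameter values are assumptions for illustration, not the E-Walker’s actual 5-DoF dynamics.

```python
import math

# PID tracking of a sinusoidal joint-angle reference on a single joint
# modelled as a double integrator (gains and inertia are assumed values).
kp, ki, kd = 120.0, 10.0, 25.0
inertia, dt = 1.5, 0.001
theta, omega, integral, prev_err = 0.0, 0.0, 0.0, 0.0

for step in range(5000):
    t = step * dt
    theta_des = 0.5 * math.sin(0.5 * t)      # desired joint angle [rad]
    err = theta_des - theta
    integral += err * dt
    torque = kp * err + ki * integral + kd * (err - prev_err) / dt
    prev_err = err
    omega += (torque / inertia) * dt          # integrate joint dynamics
    theta += omega * dt

print(f"final tracking error: {abs(theta_des - theta):.5f} rad")
```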
- W. Khan, G. Das, M. Hanheide, and G. Cielniak, “Incorporating spatial constraints into a bayesian tracking framework for improved localisation in agricultural environments,” in 2020 ieee/rsj international conference on intelligent robots and systems, 2020.
[BibTeX] [Abstract] [Download PDF]
Global navigation satellite systems (GNSS) have been considered a panacea for positioning and tracking over the last decade. However, they suffer from severe limitations in terms of accuracy, particularly in highly cluttered and indoor environments. Though real-time kinematic (RTK) supported GNSS promises extremely accurate localisation, employing such services is expensive, fails in occluded environments and is unavailable in areas where cellular base stations are not accessible. It is, therefore, necessary that GNSS data be filtered if high accuracy is required. Thus, this article presents a GNSS-based particle filter that exploits the spatial constraints imposed by the environment. In the proposed setup, the state prediction of the sample set follows a restricted motion according to the topological map of the environment. This confines the transition of the samples between specific discrete points, called topological nodes, defined by a topological map. This is followed by a refinement stage where the full set of predicted samples goes through weighting and resampling, where the weight is proportional to the predicted particle’s proximity to the GNSS measurement. Thus, a discrete-space continuous-time Bayesian filter is proposed, called the Topological Particle Filter (TPF). The proposed TPF is put to the test by localising and tracking fruit pickers inside polytunnels. Fruit pickers inside polytunnels can only follow specific paths according to the topology of the tunnel. These paths are defined in the topological map of the polytunnels and are fed to the TPF to track fruit pickers. Extensive datasets were collected to demonstrate the improved discrete tracking of strawberry pickers inside polytunnels thanks to the exploitation of the environmental constraints.
@inproceedings{lincoln42419, booktitle = {2020 IEEE/RSJ International Conference on Intelligent Robots and Systems}, month = {October}, title = {Incorporating Spatial Constraints into a Bayesian Tracking Framework for Improved Localisation in Agricultural Environments}, author = {Waqas Khan and Gautham Das and Marc Hanheide and Grzegorz Cielniak}, publisher = {IEEE}, year = {2020}, url = {https://eprints.lincoln.ac.uk/id/eprint/42419/}, abstract = {Global navigation satellite systems (GNSS) have been considered a panacea for positioning and tracking over the last decade. However, they suffer from severe limitations in terms of accuracy, particularly in highly cluttered and indoor environments. Though real-time kinematic (RTK) supported GNSS promises extremely accurate localisation, employing such services is expensive, fails in occluded environments and is unavailable in areas where cellular base stations are not accessible. It is, therefore, necessary that GNSS data be filtered if high accuracy is required. Thus, this article presents a GNSS-based particle filter that exploits the spatial constraints imposed by the environment. In the proposed setup, the state prediction of the sample set follows a restricted motion according to the topological map of the environment. This confines the transition of the samples between specific discrete points, called topological nodes, defined by a topological map. This is followed by a refinement stage where the full set of predicted samples goes through weighting and resampling, where the weight is proportional to the predicted particle's proximity to the GNSS measurement. Thus, a discrete-space continuous-time Bayesian filter is proposed, called the Topological Particle Filter (TPF). The proposed TPF is put to the test by localising and tracking fruit pickers inside polytunnels. Fruit pickers inside polytunnels can only follow specific paths according to the topology of the tunnel. These paths are defined in the topological map of the polytunnels and are fed to the TPF to track fruit pickers. Extensive datasets were collected to demonstrate the improved discrete tracking of strawberry pickers inside polytunnels thanks to the exploitation of the environmental constraints.} }
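A minimal sketch of the idea behind the Topological Particle Filter follows: particles may only transition between adjacent topological nodes, then are weighted by proximity to the GNSS fix and resampled. The node layout, motion model and noise value below are assumptions for illustration, not the paper’s exact TPF.

```python
import random, math

# Particle filter constrained to a topological map: prediction hops only
# along graph edges; update weights by Gaussian proximity to the GNSS fix.
nodes = {0: (0.0, 0.0), 1: (5.0, 0.0), 2: (10.0, 0.0), 3: (10.0, 5.0)}
edges = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
particles = [random.choice(list(nodes)) for _ in range(200)]

def tpf_step(particles, gnss_xy, sigma=3.0):
    # Prediction: stay put or hop to an adjacent topological node.
    pred = [random.choice(edges[p] + [p]) for p in particles]
    # Update: weight each particle by proximity to the GNSS measurement.
    def w(node):
        dx, dy = nodes[node][0] - gnss_xy[0], nodes[node][1] - gnss_xy[1]
        return math.exp(-(dx * dx + dy * dy) / (2 * sigma ** 2))
    weights = [w(p) for p in pred]
    # Resample in proportion to the weights.
    return random.choices(pred, weights=weights, k=len(pred))

for fix in [(4.2, 0.3), (8.9, 0.6), (10.2, 4.1)]:   # noisy GNSS fixes
    particles = tpf_step(particles, fix)
print("most likely node:", max(set(particles), key=particles.count))
```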
- J. Barber, H. Cuayahuitl, M. Zhong, and W. Luan, “Lightweight non-intrusive load monitoring employing pruned sequence-to-point learning,” in 5th international workshop on non-intrusive load monitoring, 2020. doi:10.1145/1122445.1122456
[BibTeX] [Abstract] [Download PDF]
Non-intrusive load monitoring (NILM) is the process in which a household’s total power consumption is used to determine the power consumption of individual household appliances. Previous work has shown that sequence-to-point (seq2point) learning is one of the most promising methods for tackling NILM. This process uses a sequence of aggregate power data to map a target appliance’s power consumption at the midpoint of that window of power data. However, models produced using this method contain upwards of thirty million weights, meaning that the models require large volumes of resources to perform disaggregation. This paper addresses this problem by pruning the weights learned by such a model, which results in a lightweight NILM algorithm suitable for deployment on mobile devices such as smart meters. The pruned seq2point learning algorithm was applied to the REFIT data, experimentally showing that performance was retained compared to the original seq2point learning whilst the number of weights was reduced by 87%. Code: https://github.com/JackBarber98/pruned-nilm
@inproceedings{lincoln42806, booktitle = {5th International Workshop on Non-Intrusive Load Monitoring}, month = {October}, title = {Lightweight Non-Intrusive Load Monitoring Employing Pruned Sequence-to-Point Learning}, author = {Jack Barber and Heriberto Cuayahuitl and Mingjun Zhong and Wenpeng Luan}, publisher = {ACM Conference Proceedings}, year = {2020}, doi = {10.1145/1122445.1122456}, url = {https://eprints.lincoln.ac.uk/id/eprint/42806/}, abstract = {Non-intrusive load monitoring (NILM) is the process in which a household's total power consumption is used to determine the power consumption of individual household appliances. Previous work has shown that sequence-to-point (seq2point) learning is one of the most promising methods for tackling NILM. This process uses a sequence of aggregate power data to map a target appliance's power consumption at the midpoint of that window of power data. However, models produced using this method contain upwards of thirty million weights, meaning that the models require large volumes of resources to perform disaggregation. This paper addresses this problem by pruning the weights learned by such a model, which results in a lightweight NILM algorithm suitable for deployment on mobile devices such as smart meters. The pruned seq2point learning algorithm was applied to the REFIT data, experimentally showing that performance was retained compared to the original seq2point learning whilst the number of weights was reduced by 87\%. Code: https://github.com/JackBarber98/pruned-nilm} }
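The pruning step can be illustrated with a generic magnitude-pruning sketch (this is not the code from the linked pruned-nilm repository): weights whose magnitude falls below the sparsity-quantile threshold are zeroed, retaining only the largest 13%.

```python
import numpy as np

# Magnitude-based weight pruning in the spirit of the paper: zero out the
# smallest-magnitude weights until the target sparsity is reached.
def prune_by_magnitude(weights, sparsity=0.87):
    threshold = np.quantile(np.abs(weights).ravel(), sparsity)  # cut-off
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

rng = np.random.default_rng(0)
w = rng.normal(size=(30, 1024))                  # stand-in layer weights
pruned, mask = prune_by_magnitude(w, sparsity=0.87)
print(f"sparsity achieved: {1 - mask.mean():.2%}")
```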
- J. L. Louedec, B. Li, and G. Cielniak, “Evaluation of 3d vision systems for detection of small objects in agricultural environments,” in The 15th international joint conference on computer vision, imaging and computer graphics theory and applications, 2020. doi:10.5220/0009182806820689
[BibTeX] [Abstract] [Download PDF]
3D information provides unique information about shape, localisation and relations between objects, not found in standard 2D images. This information would be very beneficial in a large number of applications in agriculture such as fruit picking, yield monitoring, forecasting and phenotyping. In this paper, we conducted a study on the application of modern 3D sensing technology together with the state-of-the-art machine learning algorithms for segmentation and detection of strawberries growing in real farms. We evaluate the performance of two state-of-the-art 3D sensing technologies and showcase the differences between 2D and 3D networks trained on the images and point clouds of strawberry plants and fruit. Our study highlights limitations of the current 3D vision systems for the detection of small objects in outdoor applications and sets out foundations for future work on 3D perception for challenging outdoor applications such as agriculture.
@inproceedings{lincoln40456, booktitle = {The 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications}, month = {February}, title = {Evaluation of 3D Vision Systems for Detection of Small Objects in Agricultural Environments}, author = {Justin Le Louedec and Bo Li and Grzegorz Cielniak}, publisher = {SciTePress}, year = {2020}, doi = {10.5220/0009182806820689}, url = {https://eprints.lincoln.ac.uk/id/eprint/40456/}, abstract = {3D information provides unique information about shape, localisation and relations between objects, not found in standard 2D images. This information would be very beneficial in a large number of applications in agriculture such as fruit picking, yield monitoring, forecasting and phenotyping. In this paper, we conducted a study on the application of modern 3D sensing technology together with the state-of-the-art machine learning algorithms for segmentation and detection of strawberries growing in real farms. We evaluate the performance of two state-of-the-art 3D sensing technologies and showcase the differences between 2D and 3D networks trained on the images and point clouds of strawberry plants and fruit. Our study highlights limitations of the current 3D vision systems for the detection of small objects in outdoor applications and sets out foundations for future work on 3D perception for challenging outdoor applications such as agriculture.} }
- L. Roberts-Elliott, M. Fernandez-Carmona, and M. Hanheide, “Towards safer robot motion: using a qualitative motion model to classify human-robot spatial interaction,” in 21st towards autonomous robotic systems conference, 2020.
[BibTeX] [Abstract] [Download PDF]
For Autonomous Mobile Robots (AMRs) to be adopted across a breadth of industries, they must navigate around humans in a way which is safe and which humans perceive as safe, but without greatly compromising efficiency. This work aims to classify the Human-Robot Spatial Interaction (HRSI) situation of an interacting human and robot, to be applied in Human-Aware Navigation (HAN) to account for situational context. We develop qualitative probabilistic models of relative human and robot movements in various HRSI situations to classify situations, and explain our plan to develop per-situation probabilistic models of socially legible HRSI to predict human and robot movement. In future work we aim to use these predictions to generate qualitative constraints in the form of metric cost-maps for local robot motion planners, enforcing more efficient and socially legible trajectories which are both physically safe and perceived as safe.
@inproceedings{lincoln40186, booktitle = {21st Towards Autonomous Robotic Systems Conference}, month = {December}, title = {Towards Safer Robot Motion: Using a Qualitative Motion Model to Classify Human-Robot Spatial Interaction}, author = {Laurence Roberts-Elliott and Manuel Fernandez-Carmona and Marc Hanheide}, publisher = {Springer}, year = {2020}, url = {https://eprints.lincoln.ac.uk/id/eprint/40186/}, abstract = {For Autonomous Mobile Robots (AMRs) to be adopted across a breadth of industries, they must navigate around humans in a way which is safe and which humans perceive as safe, but without greatly compromising efficiency. This work aims to classify the Human-Robot Spatial Interaction (HRSI) situation of an interacting human and robot, to be applied in Human-Aware Navigation (HAN) to account for situational context. We develop qualitative probabilistic models of relative human and robot movements in various HRSI situations to classify situations, and explain our plan to develop per-situation probabilistic models of socially legible HRSI to predict human and robot movement. In future work we aim to use these predictions to generate qualitative constraints in the form of metric cost-maps for local robot motion planners, enforcing more efficient and socially legible trajectories which are both physically safe and perceived as safe.} }
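Qualitative models of relative movement of this kind are often built on the Qualitative Trajectory Calculus (QTC). A minimal sketch of one such symbol, whether each agent moves towards, away from, or holds distance to the other, is given below; the threshold and the exact QTC variant used in the paper are assumptions.

```python
import math

# One qualitative symbol in the spirit of QTC: is each agent moving
# towards (-), away from (+), or holding distance (0) to the other?
def qtc_symbol(pos_prev, pos_now, other_prev, eps=1e-3):
    d_prev = math.dist(pos_prev, other_prev)
    d_now = math.dist(pos_now, other_prev)
    if d_now < d_prev - eps:
        return "-"          # moving towards the other agent
    if d_now > d_prev + eps:
        return "+"          # moving away
    return "0"              # qualitatively stable

human = [(0.0, 0.0), (0.5, 0.0)]     # two consecutive positions
robot = [(5.0, 0.0), (4.6, 0.0)]
state = (qtc_symbol(human[0], human[1], robot[0]),
         qtc_symbol(robot[0], robot[1], human[0]))
print("qualitative state (human, robot):", state)   # ('-', '-') = approach
```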
- R. Kirk, M. Mangan, and G. Cielniak, “Feasibility study of in-field phenotypic trait extraction for robotic soft-fruit operations,” in Ukras20 conference: ‘robots into the real world’ proceedings, 2020, p. 21–23. doi:10.31256/Uk4Td6I
[BibTeX] [Abstract] [Download PDF]
There are many agricultural applications that would benefit from robotic monitoring of soft fruit; examples include harvesting and yield forecasting. Autonomous mobile robotic platforms enable digitisation of horticultural processes in-field, reducing labour demand and increasing efficiency through continuous operation. It is critical for vision-based fruit detection methods to estimate traits such as size, mass and volume for quality assessment, maturity estimation and yield forecasting. Estimating these traits from a camera mounted on a mobile robot is a non-destructive/non-invasive approach to gathering qualitative fruit data in-field. We investigate the feasibility of using vision-based modalities for precise, cheap, and real-time computation of the phenotypic traits of mass and volume of strawberries from planar RGB slices and, optionally, point data. Our best method achieves a marginal error of 3.00 cm3 for volume estimation. The planar RGB slices can be computed manually or by using common object detection methods such as Mask R-CNN.
@inproceedings{lincoln42101, month = {February}, author = {Raymond Kirk and Michael Mangan and Grzegorz Cielniak}, booktitle = {UKRAS20 Conference: 'Robots into the real world' Proceedings}, title = {Feasibility Study of In-Field Phenotypic Trait Extraction for Robotic Soft-Fruit Operations}, publisher = {UKRAS}, doi = {10.31256/Uk4Td6I}, pages = {21--23}, year = {2020}, url = {https://eprints.lincoln.ac.uk/id/eprint/42101/}, abstract = {There are many agricultural applications that would benefit from robotic monitoring of soft fruit; examples include harvesting and yield forecasting. Autonomous mobile robotic platforms enable digitisation of horticultural processes in-field, reducing labour demand and increasing efficiency through continuous operation. It is critical for vision-based fruit detection methods to estimate traits such as size, mass and volume for quality assessment, maturity estimation and yield forecasting. Estimating these traits from a camera mounted on a mobile robot is a non-destructive/non-invasive approach to gathering qualitative fruit data in-field. We investigate the feasibility of using vision-based modalities for precise, cheap, and real-time computation of the phenotypic traits of mass and volume of strawberries from planar RGB slices and, optionally, point data. Our best method achieves a marginal error of 3.00 cm3 for volume estimation. The planar RGB slices can be computed manually or by using common object detection methods such as Mask R-CNN.} }
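One simple way to estimate volume from a planar slice, assuming the berry is roughly a solid of revolution about its vertical axis, is to integrate per-row discs over the segmentation mask. The sketch below is an illustrative assumption, not the paper’s calibrated estimator.

```python
import numpy as np

# Volume from a 2D fruit mask, treating each mask row as a disc of
# radius half the row width (solid-of-revolution assumption).
def volume_from_mask(mask, mm_per_px):
    heights = mask.sum(axis=1)                      # row widths in pixels
    radii_mm = 0.5 * heights * mm_per_px
    disc_vol = np.pi * radii_mm ** 2 * mm_per_px    # V = sum(pi r^2 dh)
    return disc_vol.sum() / 1000.0                  # mm^3 -> cm^3

# Synthetic circular mask: side view of a sphere of radius 15 mm.
yy, xx = np.mgrid[-20:21, -20:21]
mask = (xx ** 2 + yy ** 2) <= 15 ** 2
print(f"estimated volume: {volume_from_mask(mask, mm_per_px=1.0):.1f} cm^3")
# A 15 mm-radius sphere is ~14.1 cm^3; the disc sum approximates this.
```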
- M. Terreran, A. Tramontano, J. Lock, S. Ghidoni, and N. Bellotto, “Real-time object detection using deep learning for helping people with visual impairments,” in 4th ieee international conference on image processing, applications and systems (ipas), 2020. doi:10.1109/IPAS50080.2020.9334933
[BibTeX] [Abstract] [Download PDF]
Object detection plays a crucial role in the development of Electronic Travel Aids (ETAs), capable of guiding a person with visual impairments towards a target object in an unknown indoor environment. In such a scenario, the object detector runs on a mobile device (e.g. a smartphone) and needs to be fast, accurate, and, most importantly, lightweight. Nowadays, Deep Neural Networks (DNNs) have become the state-of-the-art solution for object detection tasks, with many works improving speed and accuracy by proposing new architectures or extending existing ones. A common strategy is to use deeper networks to get higher performance, but that leads to a higher computational cost which makes it impractical to integrate them on mobile devices with limited computational power. In this work we compare different object detectors to find a suitable candidate to be implemented on ETAs, focusing on lightweight models capable of working in real time on mobile devices with good accuracy. In particular, we select two models: SSD Lite with MobileNet V2 and Tiny-DSOD. Both models have been tested on the popular OpenImage dataset and a new dataset, called the Office dataset, collected to further test the models’ performance and robustness in a real scenario inspired by the actual perception challenges of a user with visual impairments.
@inproceedings{lincoln42338, booktitle = {4th IEEE International Conference on Image Processing, Applications and Systems (IPAS)}, month = {December}, title = {Real-time Object Detection using Deep Learning for helping People with Visual Impairments}, author = {Matteo Terreran and Andrea Tramontano and Jacobus Lock and Stefano Ghidoni and Nicola Bellotto}, publisher = {IEEE}, year = {2020}, doi = {10.1109/IPAS50080.2020.9334933}, url = {https://eprints.lincoln.ac.uk/id/eprint/42338/}, abstract = {Object detection plays a crucial role in the development of Electronic Travel Aids (ETAs), capable of guiding a person with visual impairments towards a target object in an unknown indoor environment. In such a scenario, the object detector runs on a mobile device (e.g. a smartphone) and needs to be fast, accurate, and, most importantly, lightweight. Nowadays, Deep Neural Networks (DNNs) have become the state-of-the-art solution for object detection tasks, with many works improving speed and accuracy by proposing new architectures or extending existing ones. A common strategy is to use deeper networks to get higher performance, but that leads to a higher computational cost which makes it impractical to integrate them on mobile devices with limited computational power. In this work we compare different object detectors to find a suitable candidate to be implemented on ETAs, focusing on lightweight models capable of working in real time on mobile devices with good accuracy. In particular, we select two models: SSD Lite with MobileNet V2 and Tiny-DSOD. Both models have been tested on the popular OpenImage dataset and a new dataset, called the Office dataset, collected to further test the models' performance and robustness in a real scenario inspired by the actual perception challenges of a user with visual impairments.} }
- F. Lei, Z. Peng, V. Cutsuridis, M. Liu, Y. Zhang, and S. Yue, “Competition between on and off neural pathways enhancing collision selectivity,” in Ieee wcci 2020-ijcnn regular session, 2020. doi:10.1109/IJCNN48605.2020.9207131
[BibTeX] [Abstract] [Download PDF]
The LGMD1 neuron of locusts shows a strong looming-sensitive property for both light and dark objects. Although a few LGMD1 models have been proposed, they do not reliably inhibit translating motion under certain conditions, compared to the biological LGMD1 in the locust. To address this issue, we propose a bio-plausible model that enhances collision selectivity by inhibiting translating motion. The proposed model contains three parts: the retina-to-lamina layer for receiving luminance change signals, the lamina-to-medulla layer for extracting motion cues via ON and OFF pathways separately, and the medulla-to-lobula layer for eliminating translational excitation with neural competition. We tested the model with synthetic stimuli and real physical stimuli. The experimental results demonstrate that the proposed LGMD1 model has a strong preference for objects on a direct collision course: it can detect looming objects in different conditions while completely ignoring translating objects.
@inproceedings{lincoln41701, booktitle = {IEEE WCCI 2020-IJCNN regular session}, title = {Competition between ON and OFF Neural Pathways Enhancing Collision Selectivity}, author = {Fang Lei and Zhiping Peng and Vassilis Cutsuridis and Mei Liu and Yicheng Zhang and Shigang Yue}, year = {2020}, doi = {10.1109/IJCNN48605.2020.9207131}, url = {https://eprints.lincoln.ac.uk/id/eprint/41701/}, abstract = {The LGMD1 neuron of locusts shows a strong looming-sensitive property for both light and dark objects. Although a few LGMD1 models have been proposed, they do not reliably inhibit translating motion under certain conditions, compared to the biological LGMD1 in the locust. To address this issue, we propose a bio-plausible model that enhances collision selectivity by inhibiting translating motion. The proposed model contains three parts: the retina-to-lamina layer for receiving luminance change signals, the lamina-to-medulla layer for extracting motion cues via ON and OFF pathways separately, and the medulla-to-lobula layer for eliminating translational excitation with neural competition. We tested the model with synthetic stimuli and real physical stimuli. The experimental results demonstrate that the proposed LGMD1 model has a strong preference for objects on a direct collision course: it can detect looming objects in different conditions while completely ignoring translating objects.} }
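The ON/OFF split at the core of such models can be illustrated by half-wave rectifying the luminance change into parallel brightening and darkening channels. The toy sketch below shows only this split; the lamina, medulla and lobula processing stages of the paper are not reproduced.

```python
import numpy as np

# Half-wave rectification of luminance change into ON and OFF channels.
def on_off_split(frame_prev, frame_now):
    diff = frame_now.astype(float) - frame_prev.astype(float)
    on_channel = np.maximum(diff, 0.0)    # responds to brightening
    off_channel = np.maximum(-diff, 0.0)  # responds to darkening
    return on_channel, off_channel

prev = np.zeros((5, 5))
now = np.zeros((5, 5))
now[2, 2] = 1.0     # one pixel brightens
now[0, 0] = -1.0    # one pixel darkens (relative luminance units)
on, off = on_off_split(prev, now)
print("ON energy:", on.sum(), "| OFF energy:", off.sum())
```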
- X. Li, C. Fox, and S. Coutts, “Deep learning for robotic strawberry harvesting,” in Ukras20, 2020, p. 80–82. doi:10.31256/Bj3Kl5B
[BibTeX] [Abstract] [Download PDF]
We develop a novel machine learning based robotic strawberry harvesting system for fruit counting, sizing/weighting, and yield prediction.
@inproceedings{lincoln41273, month = {April}, author = {Xiaodong Li and Charles Fox and Shaun Coutts}, booktitle = {UKRAS20}, title = {Deep learning for robotic strawberry harvesting}, publisher = {UK-RAS}, doi = {10.31256/Bj3Kl5B}, pages = {80--82}, year = {2020}, url = {https://eprints.lincoln.ac.uk/id/eprint/41273/}, abstract = {We develop a novel machine learning based robotic strawberry harvesting system for fruit counting, sizing/weighting, and yield prediction.} }
- Z. Huang, G. Miyauchi, A. S. Gomez, R. Bird, A. S. Kalsi, C. Jansen, Z. Liu, S. Parsons, and E. Sklar, “An experiment on human-robot interaction in a simulated agricultural task,” in Taros 2020: towards autonomous robotic systems, 2020. doi:10.1007/978-3-030-63486-5_25
[BibTeX] [Abstract] [Download PDF]
On the farm of the future, a human agriculturist collaborates with both human and automated labourers in order to perform a wide range of tasks. Today, changes in traditional farming practices motivate robotics researchers to consider ways in which automated devices and intelligent systems can work with farmers to address diverse needs of farming. Because farming tasks can be highly specialised, though often repetitive, a human-robot approach is a natural choice. The work presented here investigates a collaborative task in which a human and robot share decision making about the readiness of strawberries for harvesting, based on visual inspection. Two different robot behaviours are compared: one in which the robot provides decisions with more false positives and one in which the robot provides decisions with more false negatives. Preliminary experimental results conducted with human subjects are presented and show that the robot behaviour with more false positives is preferred in completing this task.
@inproceedings{lincoln53891, booktitle = {TAROS 2020: Towards Autonomous Robotic Systems}, month = {December}, title = {An Experiment on Human-Robot Interaction in a Simulated Agricultural Task}, author = {Zhuoling Huang and Genki Miyauchi and Adrian Salazar Gomez and Richie Bird and Amar Singh Kalsi and Chipp Jansen and Zeyang Liu and Simon Parsons and Elizabeth Sklar}, year = {2020}, doi = {10.1007/978-3-030-63486-5\_25}, url = {https://eprints.lincoln.ac.uk/id/eprint/53891/}, abstract = {On the farm of the future, a human agriculturist collaborates with both human and automated labourers in order to perform a wide range of tasks. Today, changes in traditional farming practices motivate robotics researchers to consider ways in which automated devices and intelligent systems can work with farmers to address diverse needs of farming. Because farming tasks can be highly specialised, though often repetitive, a human-robot approach is a natural choice. The work presented here investigates a collaborative task in which a human and robot share decision making about the readiness of strawberries for harvesting, based on visual inspection. Two different robot behaviours are compared: one in which the robot provides decisions with more false positives and one in which the robot provides decisions with more false negatives. Preliminary experimental results conducted with human subjects are presented and show that the robot behaviour with more false positives is preferred in completing this task.} }
- M. Calisti, F. Giorgio-Serchi, C. Stefanini, M. Farman, I. Hussain, C. Armanini, D. Gan, L. Seneviratne, and F. Renda, “Design, modeling and testing of a flagellum-inspired soft underwater propeller exploiting passive elasticity,” in 2019 ieee/rsj international conference on intelligent robots and systems (iros), 2020, p. 3328–3334. doi:10.1109/IROS40897.2019.8967700
[BibTeX] [Abstract] [Download PDF]
Flagellated micro-organisms are regarded as excellent swimmers within their size scales. This, along with the simplicity of their actuation and the richness of their dynamics, makes them a valuable source of inspiration for designing continuum, self-propelled underwater robots. Here we introduce a soft, flagellum-inspired system which exploits the compliance of its own body to passively attain a range of geometrical configurations from the interaction with the surrounding fluid. The spontaneous formation of stable helical waves along the length of the flagellum is responsible for the generation of positive net thrust. We investigate the relationship between actuation frequency and material elasticity in determining the steady-state configuration of the system and its thrust output. This is ultimately used to perform a parameter identification procedure of an elastodynamic model aimed at investigating the scaling laws in the propulsion of flagellated robots.
@inproceedings{lincoln46145, booktitle = {2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)}, month = {January}, title = {Design, Modeling and Testing of a Flagellum-inspired Soft Underwater Propeller Exploiting Passive Elasticity}, author = {Marcello Calisti and Francesco Giorgio-Serchi and Cesare Stefanini and Madiha Farman and Irfan Hussain and Costanza Armanini and Dongming Gan and Lakmal Seneviratne and Federico Renda}, year = {2020}, pages = {3328--3334}, doi = {10.1109/IROS40897.2019.8967700}, url = {https://eprints.lincoln.ac.uk/id/eprint/46145/}, abstract = {Flagellated micro-organisms are regarded as excellent swimmers within their size scales. This, along with the simplicity of their actuation and the richness of their dynamics, makes them a valuable source of inspiration for designing continuum, self-propelled underwater robots. Here we introduce a soft, flagellum-inspired system which exploits the compliance of its own body to passively attain a range of geometrical configurations from the interaction with the surrounding fluid. The spontaneous formation of stable helical waves along the length of the flagellum is responsible for the generation of positive net thrust. We investigate the relationship between actuation frequency and material elasticity in determining the steady-state configuration of the system and its thrust output. This is ultimately used to perform a parameter identification procedure of an elastodynamic model aimed at investigating the scaling laws in the propulsion of flagellated robots.} }
- M. H. Nair, M. Saaj, and A. G. Esfahani, “Modelling and control of an end-over-end walking robot,” in 21st towards autonomous robotic systems conference, 2020. doi:10.1007/978-3-030-63486-5_15
[BibTeX] [Abstract] [Download PDF]
Over the last few decades, space robots have found applications in various in-orbit operations. The Canadarm2 and the European Robotic Arm (ERA), onboard the International Space Station (ISS), are exceptional examples of supervised robotic manipulators (RMs) used for station assembly and maintenance. However, in the case of in-space assembly of structures like a Large-Aperture Space Telescope (LAT) with an aperture larger than the Hubble Space Telescope (HST) and James Webb Space Telescope (JWST), missions are still in their infancy; this is heavily attributed to the limitations of current state-of-the-art Robotics, Automation and Autonomous Systems (RAAS) for the extreme space environment. To address this challenge, this paper introduces the modelling and control of a candidate robotic architecture, inspired by Canadarm2 and ERA, for in-situ assembly of a LAT. The kinematic and dynamic models of a five degrees-of-freedom (DoF) End-Over-End Walking robot’s (E-Walker’s) first phase of motion are presented. A closed-loop feedback system validates the system’s accurate gait pattern. The simulation results presented show that a Proportional-Integral-Derivative (PID) controller is able to track the desired joint angles without exceeding the joint torque limits; this ensures precise motion along the desired trajectory for one full cycle comprising Phase-1 and Phase-2. The gait pattern of the E-Walker for the next phases is also briefly discussed.
@inproceedings{lincoln49496, booktitle = {21st Towards Autonomous Robotic Systems Conference}, month = {December}, title = {Modelling and Control of an End-Over-End Walking Robot}, author = {Manu H. Nair and Mini Saaj and Amir G. Esfahani}, publisher = {Springer}, year = {2020}, doi = {10.1007/978-3-030-63486-5\_15}, url = {https://eprints.lincoln.ac.uk/id/eprint/49496/}, abstract = {Over the last few decades, space robots have found applications in various in-orbit operations. The Canadarm2 and the European Robotic Arm (ERA), onboard the International Space Station (ISS), are exceptional examples of supervised robotic manipulators (RMs) used for station assembly and maintenance. However, in the case of in-space assembly of structures like a Large-Aperture Space Telescope (LAT) with an aperture larger than the Hubble Space Telescope (HST) and James Webb Space Telescope (JWST), missions are still in their infancy; this is heavily attributed to the limitations of current state-of-the-art Robotics, Automation and Autonomous Systems (RAAS) for the extreme space environment. To address this challenge, this paper introduces the modelling and control of a candidate robotic architecture, inspired by Canadarm2 and ERA, for in-situ assembly of a LAT. The kinematic and dynamic models of a five degrees-of-freedom (DoF) End-Over-End Walking robot's (E-Walker's) first phase of motion are presented. A closed-loop feedback system validates the system's accurate gait pattern. The simulation results presented show that a Proportional-Integral-Derivative (PID) controller is able to track the desired joint angles without exceeding the joint torque limits; this ensures precise motion along the desired trajectory for one full cycle comprising Phase-1 and Phase-2. The gait pattern of the E-Walker for the next phases is also briefly discussed.} }
- M. H. Nair, M. Saaj, S. Adlen, A. G. Esfahani, and S. Eckersley, “Advances in robotic in-orbit assembly of large aperture space telescopes,” in 15th international symposium on artificial intelligence, robotics and automation in space (i-sairas 2020), 2020.
[BibTeX] [Abstract] [Download PDF]
Modular Large Aperture Space Telescopes (LAST) hold the key to future astronomical missions in search of the origin of the cosmos. Robotics and Autonomous Systems technology would be required to meet the challenges associated with the assembly of such high-value infrastructure in orbit. In this paper, an End-Over-End Walking robot (E-Walker) is selected to assemble a 25m LAST. The dynamical model, control architecture and gait pattern of the E-Walker are discussed. The key mission requirements are stated along with the strategies for scheduling the assembly process. A mission concept of operations (ConOps) is proposed for assembling the 25m LAST. Simulation results show the precise trajectory tracking of the E-Walker for the chosen mission scenario.
@inproceedings{lincoln49489, booktitle = {15th International Symposium on Artificial Intelligence, Robotics and Automation in Space (i-SAIRAS 2020)}, month = {October}, title = {Advances in Robotic In-Orbit Assembly of Large Aperture Space Telescopes}, author = {Manu H. Nair and Mini Saaj and Sam Adlen and Amir G. Esfahani and Steve Eckersley}, publisher = {European Space Agency}, year = {2020}, url = {https://eprints.lincoln.ac.uk/id/eprint/49489/}, abstract = {Modular Large Aperture Space Telescopes (LAST) hold the key to future astronomical missions in search of the origin of the cosmos. Robotics and Autonomous Systems technology would be required to meet the challenges associated with the assembly of such high-value infrastructure in orbit. In this paper, an End-Over-End Walking robot (E-Walker) is selected to assemble a 25m LAST. The dynamical model, control architecture and gait pattern of the E-Walker are discussed. The key mission requirements are stated along with the strategies for scheduling the assembly process. A mission concept of operations (ConOps) is proposed for assembling the 25m LAST. Simulation results show the precise trajectory tracking of the E-Walker for the chosen mission scenario.} }
- P. Somaiya, M. Hanheide, and G. Cielniak, “Unsupervised anomaly detection for safe robot operations,” in Ukras20 conference: ‘robots into the real world’, 2020, p. 154–156. doi:10.31256/Wg7Ap8J
[BibTeX] [Abstract] [Download PDF]
Faults in robot operations are risky, particularly when robots are operating in the same environment as humans. Early detection of such faults is necessary to prevent further escalation and to avoid endangering human life. However, due to sensor noise and unforeseen faults in robots, creating a model for fault prediction is difficult. Existing supervised data-driven approaches rely on large amounts of labelled data for detecting anomalies, which is impractical in real applications. In this paper, we present an unsupervised machine learning approach for this purpose, which requires only data corresponding to the normal operation of the robot. We demonstrate how to fuse multi-modal information from robot motion sensors and evaluate the proposed framework in multiple scenarios collected from a real mobile robot.
@inproceedings{lincoln46369, month = {April}, author = {Pratik Somaiya and Marc Hanheide and Grzegorz Cielniak}, booktitle = {UKRAS20 Conference: 'Robots into the real world'}, title = {Unsupervised Anomaly Detection for Safe Robot Operations}, publisher = {UKRAS}, doi = {10.31256/Wg7Ap8J}, pages = {154--156}, year = {2020}, url = {https://eprints.lincoln.ac.uk/id/eprint/46369/}, abstract = {Faults in robot operations are risky, particularly when robots are operating in the same environment as humans. Early detection of such faults is necessary to prevent further escalation and to avoid endangering human life. However, due to sensor noise and unforeseen faults in robots, creating a model for fault prediction is difficult. Existing supervised data-driven approaches rely on large amounts of labelled data for detecting anomalies, which is impractical in real applications. In this paper, we present an unsupervised machine learning approach for this purpose, which requires only data corresponding to the normal operation of the robot. We demonstrate how to fuse multi-modal information from robot motion sensors and evaluate the proposed framework in multiple scenarios collected from a real mobile robot.} }
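A generic baseline in this unsupervised spirit fits a model of normal operation only and flags departures from it. The Mahalanobis-distance sketch below is a stand-in assumption, not the paper’s actual detector or feature set.

```python
import numpy as np

# Unsupervised anomaly detector trained only on normal-operation data:
# fit a Gaussian to fused sensor features and flag samples whose
# Mahalanobis distance exceeds a threshold calibrated on normal data.
rng = np.random.default_rng(1)
normal = rng.normal(0.0, 1.0, size=(500, 4))     # e.g. fused IMU/odometry
mu = normal.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(normal, rowvar=False))

def anomaly_score(x):
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))       # Mahalanobis distance

threshold = np.quantile([anomaly_score(x) for x in normal], 0.99)
fault = np.array([4.0, -3.5, 0.0, 2.0])          # simulated faulty reading
print("faulty sample flagged:", anomaly_score(fault) > threshold)
```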
- Z. Huang, G. Miyauchi, A. S. Gomez, R. Bird, A. S. Kalsi, Z. Liu, C. Jansen, S. Parsons, and E. Sklar, “Toward robot co-labourers for intelligent farming,” in Hri ’20: companion of the 2020 acm/ieee international conference on human-robot interaction, 2020, p. 263–265. doi:10.1145/3371382.3378333
[BibTeX] [Abstract] [Download PDF]
This paper presents the results of preliminary experiments in human-robot collaboration for an agricultural task.
@inproceedings{lincoln53892, month = {April}, author = {Zhuoling Huang and Genki Miyauchi and Adrian Salazar Gomez and Richie Bird and Amar Singh Kalsi and Zeyang Liu and Chipp Jansen and Simon Parsons and Elizabeth Sklar}, booktitle = {HRI '20: Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction}, title = {Toward Robot Co-Labourers for Intelligent Farming}, publisher = {Association for Computing Machinery}, doi = {10.1145/3371382.3378333}, pages = {263--265}, year = {2020}, url = {https://eprints.lincoln.ac.uk/id/eprint/53892/}, abstract = {This paper presents the results of preliminary experiments in human-robot collaboration for an agricultural task.} }
- H. Isakhani, S. Yue, C. Xiong, W. Chen, X. Sun, and T. Liu, “Fabrication and mechanical analysis of bioinspired gliding-optimized wing prototypes for micro aerial vehicles,” in 5th international conference on advanced robotics and mechatronics (icarm), 2020, p. 602–608. doi:10.1109/ICARM49381.2020.9195392
[BibTeX] [Abstract] [Download PDF]
Gliding is the most efficient flight mode and is explicitly appreciated by natural fliers. This is achieved by high-performance structures developed over millions of years of evolution. One such prehistoric insect, the locust (Schistocerca gregaria), is a perfect example of a natural glider capable of sustained transatlantic flights, which could potentially inspire numerous solutions to problems in aerospace engineering. However, biomimicry of such aerodynamic properties is hindered by the limitations of conventional as well as modern fabrication technologies in terms of precision and availability, respectively. Therefore, we explore and propose novel combinations of economical manufacturing methods to develop various locust-inspired tandem wing prototypes (i.e. fore and hindwings) for further wind-tunnel-based aerodynamic studies. Additionally, we determine the flexural stiffness and maximum deformation rate of our prototypes and compare them to their counterparts in nature and the literature, recommending the most suitable artificial bioinspired wing for gliding micro aerial vehicle applications.
@inproceedings{lincoln43687, month = {September}, author = {Hamid Isakhani and Shigang Yue and Caihua Xiong and Wenbin Chen and Xuelong Sun and Tian Liu}, booktitle = {5th International Conference on Advanced Robotics and Mechatronics (ICARM)}, title = {Fabrication and Mechanical Analysis of Bioinspired Gliding-optimized Wing Prototypes for Micro Aerial Vehicles}, publisher = {IEEE}, year = {2020}, journal = {2020 5th International Conference on Advanced Robotics and Mechatronics (ICARM)}, doi = {10.1109/ICARM49381.2020.9195392}, pages = {602--608}, url = {https://eprints.lincoln.ac.uk/id/eprint/43687/}, abstract = {Gliding is the most efficient flight mode and is explicitly appreciated by natural fliers. This is achieved by high-performance structures developed over millions of years of evolution. One such prehistoric insect, the locust (Schistocerca gregaria), is a perfect example of a natural glider capable of sustained transatlantic flights, which could potentially inspire numerous solutions to problems in aerospace engineering. However, biomimicry of such aerodynamic properties is hindered by the limitations of conventional as well as modern fabrication technologies in terms of precision and availability, respectively. Therefore, we explore and propose novel combinations of economical manufacturing methods to develop various locust-inspired tandem wing prototypes (i.e. fore and hindwings) for further wind-tunnel-based aerodynamic studies. Additionally, we determine the flexural stiffness and maximum deformation rate of our prototypes and compare them to their counterparts in nature and the literature, recommending the most suitable artificial bioinspired wing for gliding micro aerial vehicle applications.} }
- H. Isakhani, C. Xiong, S. Yue, and W. Chen, “A bioinspired airfoil optimization technique using nash genetic algorithm,” in 2020 17th international conference on ubiquitous robots (ur), 2020, p. 506–513. doi:10.1109/UR49135.2020.9144868
[BibTeX] [Abstract] [Download PDF]
Natural fliers glide and minimize wing articulation to conserve energy for extended, long-range flights. Elucidating the underlying physiology of such capability could potentially address numerous challenging problems in flight engineering. However, the primitive nature of bioinspired research impedes such achievements; hence, to bypass these limitations, this study introduces a bioinspired non-cooperative multiple-objective optimization methodology based on a novel fusion of PARSEC, the Nash strategy, and genetic algorithms to achieve insect-level aerodynamic efficiencies. The proposed technique is validated on a conventional airfoil as well as the wing cross-section of a desert locust (Schistocerca gregaria) at low Reynolds number, and we have recorded a 77% improvement in its gliding ratio.
@inproceedings{lincoln43819, month = {July}, author = {Hamid Isakhani and Caihua Xiong and Shigang Yue and Wenbin Chen}, booktitle = {2020 17th International Conference on Ubiquitous Robots (UR)}, title = {A Bioinspired Airfoil Optimization Technique Using Nash Genetic Algorithm}, publisher = {IEEE}, doi = {10.1109/UR49135.2020.9144868}, pages = {506--513}, year = {2020}, url = {https://eprints.lincoln.ac.uk/id/eprint/43819/}, abstract = {Natural fliers glide and minimize wing articulation to conserve energy for extended, long-range flights. Elucidating the underlying physiology of such capability could potentially address numerous challenging problems in flight engineering. However, the primitive nature of bioinspired research impedes such achievements; hence, to bypass these limitations, this study introduces a bioinspired non-cooperative multiple-objective optimization methodology based on a novel fusion of PARSEC, the Nash strategy, and genetic algorithms to achieve insect-level aerodynamic efficiencies. The proposed technique is validated on a conventional airfoil as well as the wing cross-section of a desert locust (Schistocerca gregaria) at low Reynolds number, and we have recorded a 77\% improvement in its gliding ratio.} }
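The Nash-strategy component can be sketched with two “players”, each evolving its own subset of design variables while holding the other player’s current best fixed, iterating towards a Nash equilibrium. The toy objective below stands in for the PARSEC-parameterised airfoil and flow evaluation used in the paper; all parameter values are assumptions.

```python
import random

# Toy Nash-GA: two players alternately evolve their own variable while
# the other player's best is held fixed (toy objective, optimum (2, -1)).
def fitness(x, y):
    return -((x - 2.0) ** 2 + (y + 1.0) ** 2)    # to be maximised

def evolve(pop, score, mut=0.3, gens=30):
    for _ in range(gens):
        pop = sorted(pop, key=score, reverse=True)[: len(pop) // 2]
        pop += [p + random.gauss(0, mut) for p in pop]  # mutate survivors
    return max(pop, key=score)

best_x, best_y = 0.0, 0.0
for _ in range(10):                      # alternate the two Nash players
    best_x = evolve([random.uniform(-5, 5) for _ in range(20)],
                    lambda x: fitness(x, best_y))
    best_y = evolve([random.uniform(-5, 5) for _ in range(20)],
                    lambda y: fitness(best_x, y))
print(f"Nash equilibrium estimate: x={best_x:.2f}, y={best_y:.2f}")
```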
- M. Sorour, K. Elgeneidy, M. Hanheide, and A. Srinivasan, “Enhancing grasp pose computation in gripper workspace spheres,” in Icra 2020, 2020.
[BibTeX] [Abstract] [Download PDF]
In this paper, an enhancement to the novel grasp planning algorithm based on gripper workspace spheres is presented. Our development requires a registered point cloud of the target from different views, assuming no prior knowledge of the object, nor any of its properties. This work features a new set of metrics for the evaluation of grasp pose candidates, as well as exploring the impact of high object sampling on grasp success rates. In addition to gripper position sampling, we now perform orientation sampling about the x, y, and z-axes, hence the grasping algorithm no longer requires object orientation estimation. Successful experiments have been conducted on a simple jaw gripper (Franka Panda gripper) as well as a complex, high Degree-of-Freedom (DoF) hand (Allegro hand) as proof of its versatility. Grasp success rates of 76% and 85.5%, respectively, have been reported in real-world experiments.
@inproceedings{lincoln39957, booktitle = {ICRA 2020}, month = {July}, title = {Enhancing Grasp Pose Computation in Gripper Workspace Spheres}, author = {Mohamed Sorour and Khaled Elgeneidy and Marc Hanheide and Aravinda Srinivasan}, year = {2020}, url = {https://eprints.lincoln.ac.uk/id/eprint/39957/}, abstract = {In this paper, an enhancement to the novel grasp planning algorithm based on gripper workspace spheres is presented. Our development requires a registered point cloud of the target from different views, assuming no prior knowledge of the object, nor any of its properties. This work features a new set of metrics for the evaluation of grasp pose candidates, as well as exploring the impact of high object sampling on grasp success rates. In addition to gripper position sampling, we now perform orientation sampling about the x, y, and z-axes, hence the grasping algorithm no longer requires object orientation estimation. Successful experiments have been conducted on a simple jaw gripper (Franka Panda gripper) as well as a complex, high Degree-of-Freedom (DoF) hand (Allegro hand) as proof of its versatility. Grasp success rates of 76\% and 85.5\%, respectively, have been reported in real-world experiments.} }
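The orientation-sampling idea can be sketched as follows: sample rotations about the x, y and z axes on top of position samples, and keep the candidate that scores best under a grasp metric. The point-counting metric below is a stand-in assumption for illustration, not the paper’s workspace-sphere metrics.

```python
import math, random

# Sample gripper orientations (roll/pitch/yaw) and score each candidate
# with a stand-in metric: object points inside a sphere offset along the
# approach axis derived from the sampled orientation.
cloud = [(random.gauss(0, 0.03), random.gauss(0, 0.03), random.gauss(0, 0.05))
         for _ in range(300)]                    # registered object points

def score(position, rpy, radius=0.04):
    _, pitch, yaw = rpy                          # roll is irrelevant here
    approach = (math.cos(yaw) * math.cos(pitch),
                math.sin(yaw) * math.cos(pitch),
                -math.sin(pitch))
    cx, cy, cz = (position[i] + 0.05 * approach[i] for i in range(3))
    return sum((x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2 <= radius ** 2
               for x, y, z in cloud)

candidates = [((0.0, 0.0, 0.1),
               tuple(random.uniform(-math.pi, math.pi) for _ in range(3)))
              for _ in range(200)]
best = max(candidates, key=lambda c: score(*c))
print("best sampled orientation (rpy):", [round(a, 2) for a in best[1]])
```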
- S. Parsa, D. Kamale, S. Mghames, K. Nazari, T. Pardi, A. Srinivasan, G. Neumann, M. Hanheide, and A. G. Esfahani, “Haptic-guided shared control grasping: collision-free manipulation,” in Case 2020- international conference on automation science and engineering, 2020. doi:10.1109/CASE48305.2020.9216789
[BibTeX] [Abstract] [Download PDF]
We propose a haptic-guided shared control system that provides an operator with force cues during the reach-to-grasp phase of tele-manipulation. The force cues inform the operator of a grasping configuration which allows collision-free autonomous post-grasp movements. Previous studies showed that haptic-guided shared control significantly reduces the complexities of teleoperation. We propose two architectures of shared control in which the operator is informed about (1) the local gradient of the collision cost, and (2) the grasping configuration suitable for collision-free movements of an aimed pick-and-place task. We demonstrate the efficiency of our proposed shared control systems through a series of experiments with a Franka Emika robot. Our experimental results illustrate that our shared control systems successfully inform the operator of predicted collisions between the robot and an obstacle in the robot’s workspace. We learned that informing the operator of the global information about the grasping configuration associated with the minimum collision cost of post-grasp movements results in a reach-to-grasp time much shorter than the case in which the operator is informed about the local gradient of the collision cost.
@inproceedings{lincoln41283, month = {August}, author = {Soran Parsa and Disha Kamale and Sariah Mghames and Kiyanoush Nazari and Tommaso Pardi and Aravinda Srinivasan and Gerhard Neumann and Marc Hanheide and Amir Ghalamzan Esfahani}, booktitle = {CASE 2020- International Conference on Automation Science and Engineering}, title = {Haptic-guided shared control grasping: collision-free manipulation}, publisher = {IEEE}, journal = {International Conference on Automation Science and Engineering (CASE)}, doi = {10.1109/CASE48305.2020.9216789}, year = {2020}, url = {https://eprints.lincoln.ac.uk/id/eprint/41283/}, abstract = {We propose a haptic-guided shared control system that provides an operator with force cues during the reach-to-grasp phase of tele-manipulation. The force cues inform the operator of a grasping configuration which allows collision-free autonomous post-grasp movements. Previous studies showed that haptic-guided shared control significantly reduces the complexities of teleoperation. We propose two architectures of shared control in which the operator is informed about (1) the local gradient of the collision cost, and (2) the grasping configuration suitable for collision-free movements of an aimed pick-and-place task. We demonstrate the efficiency of our proposed shared control systems through a series of experiments with a Franka Emika robot. Our experimental results illustrate that our shared control systems successfully inform the operator of predicted collisions between the robot and an obstacle in the robot's workspace. We learned that informing the operator of the global information about the grasping configuration associated with the minimum collision cost of post-grasp movements results in a reach-to-grasp time much shorter than the case in which the operator is informed about the local gradient of the collision cost.} }
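Architecture (1) above can be sketched as rendering the local gradient of a collision cost as a repulsive force cue on the operator’s input device. The Gaussian cost field and the gain below are toy assumptions, not the paper’s learned or computed cost.

```python
import numpy as np

# Haptic force cue from the local gradient of a collision cost field:
# the cue pushes the operator's hand away from rising collision cost.
obstacle = np.array([0.4, 0.0, 0.3])

def collision_cost(p, scale=0.05):
    return float(np.exp(-np.sum((p - obstacle) ** 2) / scale))

def force_cue(p, gain=5.0, h=1e-4):
    grad = np.zeros(3)
    for i in range(3):                 # central finite-difference gradient
        dp = np.zeros(3)
        dp[i] = h
        grad[i] = (collision_cost(p + dp) - collision_cost(p - dp)) / (2 * h)
    return -gain * grad                # repulsive cue, away from the cost

hand = np.array([0.35, 0.02, 0.28])
print("haptic cue [N]:", np.round(force_cue(hand), 3))
```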
- J. L. Louedec, H. A. Montes, T. Duckett, and G. Cielniak, “Segmentation and detection from organised 3d point clouds: a case study in broccoli head detection,” in 2020 ieee/cvf conference on computer vision and pattern recognition workshops (cvprw), 2020, p. 285–293. doi:10.1109/CVPRW50498.2020.00040
[BibTeX] [Abstract] [Download PDF]
Autonomous harvesting is becoming an important challenge and necessity in agriculture, because of the lack of labour and the growing population needing to be fed. Perception is a key aspect of autonomous harvesting and is very challenging due to difficult lighting conditions, limited sensing technologies, occlusions, plant growth, etc. 3D vision approaches can bring several benefits addressing the aforementioned challenges, such as localisation, size estimation, occlusion handling and shape analysis. In this paper, we propose a novel approach using 3D information for detecting broccoli heads based on Convolutional Neural Networks (CNNs), exploiting the organised nature of the point clouds originating from RGBD sensors. The proposed algorithm, tested on real-world datasets, achieves better performance than the state of the art, with better accuracy and generalisation in unseen scenarios, whilst significantly reducing inference time, making it better suited for real-time in-field applications.
@inproceedings{lincoln43425, month = {June}, author = {Justin Le Louedec and Hector A. Montes and Tom Duckett and Grzegorz Cielniak}, booktitle = {2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)}, title = {Segmentation and detection from organised 3D point clouds: a case study in broccoli head detection}, publisher = {IEEE}, doi = {10.1109/CVPRW50498.2020.00040}, pages = {285--293}, year = {2020}, url = {https://eprints.lincoln.ac.uk/id/eprint/43425/}, abstract = {Autonomous harvesting is becoming an important challenge and necessity in agriculture, because of the lack of labour and the growing population needing to be fed. Perception is a key aspect of autonomous harvesting and is very challenging due to difficult lighting conditions, limited sensing technologies, occlusions, plant growth, etc. 3D vision approaches can bring several benefits addressing the aforementioned challenges, such as localisation, size estimation, occlusion handling and shape analysis. In this paper, we propose a novel approach using 3D information for detecting broccoli heads based on Convolutional Neural Networks (CNNs), exploiting the organised nature of the point clouds originating from RGBD sensors. The proposed algorithm, tested on real-world datasets, achieves better performance than the state of the art, with better accuracy and generalisation in unseen scenarios, whilst significantly reducing inference time, making it better suited for real-time in-field applications.} }
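The organised-point-cloud property exploited here is that an RGBD frame back-projects to an H x W x 3 grid of XYZ values which a CNN can consume like an image. A pinhole back-projection sketch follows; the camera intrinsics are assumed values.

```python
import numpy as np

# Back-project a depth image to an organised point cloud: the (H, W, 3)
# grid preserves the image lattice, so 2D CNN machinery applies directly.
fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5     # assumed camera intrinsics

def organised_cloud(depth_m):
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.stack([x, y, depth_m], axis=-1)    # shape (H, W, 3)

depth = np.full((480, 640), 1.2)                 # flat 1.2 m test plane
cloud = organised_cloud(depth)
print(cloud.shape)    # (480, 640, 3): grid structure kept, CNN-ready
```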
- K. Elgeneidy and K. Goher, “Structural optimization of adaptive soft fin ray fingers with variable stiffening capability,” in Ieee robosoft 2020, 2020. doi:10.1109/RoboSoft48309.2020.9115969
[BibTeX] [Abstract] [Download PDF]
Soft and adaptable grippers are desired for their ability to operate effectively in unstructured or dynamically changing environments, especially when interacting with delicate or deformable targets. However, utilizing soft bodies often comes at the expense of reduced carrying payload and limited performance in high-force applications. Hence, methods for achieving variable stiffness soft actuators are being investigated to broaden the applications of soft grippers. This paper investigates the structural optimization of adaptive soft fingers based on the Fin Ray® effect (Soft Fin Ray), featuring a passive stiffening mechanism that is enabled via layer jamming between deforming flexible ribs. A finite element model of the proposed Soft Fin Ray structure is developed and experimentally validated, with the aim of enhancing the layer jamming behavior for better grasping performance. The results showed that through structural optimization, initial contact forces before jamming can be minimized and final contact forces after jamming can be significantly enhanced, without downgrading the desired passive adaptation to objects. Thus, applications for Soft Fin Ray fingers can range from adaptive delicate grasping to high-force manipulation tasks.
@inproceedings{lincoln40182, booktitle = {IEEE RoboSoft 2020}, month = {June}, title = {Structural Optimization of Adaptive Soft Fin Ray Fingers with Variable Stiffening Capability}, author = {Khaled Elgeneidy and Khaled Goher}, publisher = {IEEE}, year = {2020}, doi = {10.1109/RoboSoft48309.2020.9115969}, url = {https://eprints.lincoln.ac.uk/id/eprint/40182/}, abstract = {Soft and adaptable grippers are desired for their ability to operate effectively in unstructured or dynamically changing environments, especially when interacting with delicate or deformable targets. However, utilizing soft bodies often comes at the expense of reduced carrying payload and limited performance in high-force applications. Hence, methods for achieving variable stiffness soft actuators are being investigated to broaden the applications of soft grippers. This paper investigates the structural optimization of adaptive soft fingers based on the Fin Ray® effect (Soft Fin Ray), featuring a passive stiffening mechanism that is enabled via layer jamming between deforming flexible ribs. A finite element model of the proposed Soft Fin Ray structure is developed and experimentally validated, with the aim of enhancing the layer jamming behavior for better grasping performance. The results showed that through structural optimization, initial contact forces before jamming can be minimized and final contact forces after jamming can be significantly enhanced, without downgrading the desired passive adaptation to objects. Thus, applications for Soft Fin Ray fingers can range from adaptive delicate grasping to high-force manipulation tasks.} }
- N. Andreakos, S. Yue, and V. Cutsuridis, “Improving recall in an associative neural network model of the hippocampus,” in 9th international conference, living machines 2020, 2020.
[BibTeX] [Abstract] [Download PDF]
The mammalian hippocampus is involved in auto-association and hetero-association of declarative memories. We employed a bio-inspired neural model of the hippocampal CA1 region to systematically evaluate its mean recall quality against different numbers of stored patterns, overlaps and active cells per pattern. The model consisted of excitatory (pyramidal) cells and four types of inhibitory cells: axo-axonic, basket, bistratified, and oriens lacunosum-moleculare cells. Cells were simplified compartmental models with complex ion channel dynamics. Cells' firing was timed to a theta oscillation paced by two distinct neuronal populations exhibiting highly regular bursting activity, one tightly coupled to the trough and the other to the peak of theta. During recall, excitatory input to the network's excitatory cells provided context and timing information for retrieval of previously stored memory patterns. Dendritic inhibition acted as a non-specific global threshold machine that removed spurious activity during recall. Simulations showed that recall quality improved when the network's memory capacity increased as the number of active cells per pattern decreased. Furthermore, an increased firing rate of a presynaptic inhibitory threshold machine inhibiting a network of postsynaptic excitatory cells is more successful at removing spurious activity at the network level, and at improving recall quality, than an increased synaptic efficacy of the same threshold machine on the same network of excitatory cells with its firing rate kept fixed.
@inproceedings{lincoln43365, booktitle = {9th International Conference, Living Machines 2020}, month = {September}, title = {Improving Recall in an Associative Neural Network Model of the Hippocampus}, author = {Nikolas Andreakos and Shigang Yue and Vassilis Cutsuridis}, publisher = {Springer Nature}, year = {2020}, url = {https://eprints.lincoln.ac.uk/id/eprint/43365/}, abstract = {The mammalian hippocampus is involved in auto-association and hetero-association of declarative memories. We employed a bio-inspired neural model of hippocampal CA1 region to systematically evaluate its mean recall quality against different number of stored patterns, overlaps and active cells per pattern. Model consisted of excitatory (pyramidal cells) and four types of inhibitory cells: axo-axonic, basket, bistratified, and oriens lacunosum-moleculare cells. Cells were simplified compartmental models with complex ion channel dynamics. Cells' firing was timed to a theta oscillation paced by two distinct neuronal populations exhibiting highly regular bursting activity, one tightly coupled to the trough and the other to the peak of theta. During recall excitatory input to network excitatory cells provided context and timing information for retrieval of previously stored memory patterns. Dendritic inhibition acted as a nonspecific global threshold machine that removed spurious activity during recall. Simulations showed recall quality improved when the network's memory capacity increased as the number of active cells per pattern decreased. Furthermore, increased firing rate of a presynaptic inhibitory threshold machine inhibiting a network of postsynaptic excitatory cells has a better success at removing spurious activity at the network level and improving recall quality than increased synaptic efficacy of the same threshold machine on the same network of excitatory cells, while keeping its firing rate fixed.} }
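The recall-quality evaluation described above compares retrieved patterns against the stored originals. A common way to score this in associative-memory models is the cosine similarity between the stored and recalled binary patterns; the sketch below uses that standard measure, which may differ from the paper's exact metric.

```python
# Sketch of a standard recall-quality measure for sparse binary patterns:
# cosine similarity between stored and recalled activity vectors. This is
# a common convention in associative-memory models; it is an assumption,
# not necessarily the paper's exact formula.
import numpy as np

def recall_quality(stored: np.ndarray, recalled: np.ndarray) -> float:
    """1.0 = perfect recall, 0.0 = no overlap with the stored pattern."""
    denom = np.linalg.norm(stored) * np.linalg.norm(recalled)
    return float(stored @ recalled / denom) if denom else 0.0

n_cells, active = 100, 10              # sparse pattern: 10 of 100 cells fire
stored = np.zeros(n_cells); stored[:active] = 1
recalled = stored.copy()
recalled[0], recalled[active] = 0, 1   # one missed cell, one spurious cell
print(round(recall_quality(stored, recalled), 3))   # 0.9
```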
- T. Liu, X. Sun, C. Hu, Q. Fu, H. Isakhani, and S. Yue, “Investigating multiple pheromones in swarm robots – a case study of multi-robot deployment,” in 2020 5th international conference on advanced robotics and mechatronics (ICARM), 2020, p. 595–601. doi:10.1109/ICARM49381.2020.9195311
[BibTeX] [Abstract] [Download PDF]
Social insects are experts at handling complex tasks in a collective, smart way, although their small brains contain only limited computational resources and sensory information. It is believed that pheromones play a vital role in shaping social insects' collective behaviours. One of the key points underlying stigmergy is the combination of different pheromones in a specific task. In the swarm intelligence field, pheromone-inspired studies usually focus on a single pheromone at a time, so it is not clear how effectively multiple pheromones could be employed for a collective strategy in the real physical world. In this study, we investigate a multiple-pheromone deployment strategy for swarm robots inspired by social insects. The proposed strategy uses two kinds of artificial pheromone, attractive and repellent, enabling micro robots to be distributed to desired positions with high efficiency. The strategy is assessed systematically by both simulation and real robot experiments using ColCOSΦ, a novel artificial pheromone platform. Results from the simulation and real robot experiments both demonstrate the effectiveness of the proposed strategy and reveal the role of multiple pheromones. The feasibility of the ColCOSΦ platform, and its potential for further robotic research on multiple pheromones, are also verified. Our study of using different pheromones for one collective swarm robotics task may also help or inspire biologists studying real insects.
@inproceedings{lincoln43680, month = {September}, author = {Tian Liu and Xuelong Sun and Cheng Hu and Qinbing Fu and Hamid Isakhani and Shigang Yue}, booktitle = {2020 5th International Conference on Advanced Robotics and Mechatronics (ICARM)}, title = {Investigating Multiple Pheromones in Swarm Robots - A Case Study of Multi-Robot Deployment}, publisher = {IEEE}, doi = {10.1109/ICARM49381.2020.9195311}, pages = {595--601}, year = {2020}, url = {https://eprints.lincoln.ac.uk/id/eprint/43680/}, abstract = {Social insects are known as the experts in handling complex task in a collective smart way although their small brains contain only limited computation resources and sensory information. It is believed that pheromones play a vital role in shaping social insects' collective behaviours. One of the key points underlying the stigmergy is the combination of different pheromones in a specific task. In the swarm intelligence field, pheromone inspired studies usually focus one single pheromone at a time, so it is not clear how effectively multiple pheromones could be employed for a collective strategy in the real physical world. In this study, we investigate multiple pheromone-based deployment strategy for swarm robots inspired by social insects. The proposed deployment strategy uses two kinds of artificial pheromones; the attractive and the repellent pheromone that enables micro robots to be distributed in desired positions with high efficiency. The strategy is assessed systematically by both simulation and real robot experiments using a novel artificial pheromone platform ColCOS{\ensuremath{\Phi}}. Results from the simulation and real robot experiments both demonstrate the effectiveness of the proposed strategy and reveal the role of multiple pheromones. The feasibility of the ColCOS{\ensuremath{\Phi}} platform, and its potential for further robotic research on multiple pheromones are also verified. Our study of using different pheromones for one collective swarm robotics task may help or inspire biologists in real insects' research.} }
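The deployment strategy above combines an attractive field (pulling robots towards desired positions) with a repellent field (spreading robots apart). The sketch below illustrates the combination on a toy grid; the grid size, field weights and greedy move rule are illustrative assumptions, not the ColCOSΦ implementation.

```python
# Sketch: combining attractive and repellent pheromone fields on a grid,
# in the spirit of the deployment strategy described above. All values
# and the greedy move rule are illustrative assumptions.
import numpy as np

GRID = (20, 20)
attract = np.zeros(GRID)   # deposited at target deployment positions
repel = np.zeros(GRID)     # deposited by robots to spread the swarm out

attract[5, 5] = 10.0       # a desired deployment spot
repel[5, 6] = 4.0          # another robot already sits next to it

def step(pos, w_attr=1.0, w_rep=1.0):
    """Greedy move to the neighbouring cell with the best combined field."""
    combined = w_attr * attract - w_rep * repel
    r, c = pos
    neighbours = [(r + dr, c + dc)
                  for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                  if (dr, dc) != (0, 0)
                  and 0 <= r + dr < GRID[0] and 0 <= c + dc < GRID[1]]
    return max(neighbours, key=lambda p: combined[p])

print(step((6, 6)))        # (5, 5): towards the target, away from the repellent
```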
- A. Binch, G. Das, J. P. Fentanes, and M. Hanheide, “Context dependant iterative parameter optimisation for robust robot navigation,” in 2020 IEEE international conference on robotics and automation (ICRA), 2020, p. 3937–3943. doi:10.1109/ICRA40945.2020.9196550
[BibTeX] [Abstract] [Download PDF]
Progress in autonomous mobile robotics has seen significant advances in the development of many algorithms for motion control and path planning. However, robust performance from these algorithms can often only be expected if the parameters controlling them are tuned specifically for the respective robot model, and optimised for specific scenarios in the environment the robot is working in. Such parameter tuning can, depending on the underlying algorithm, amount to a substantial combinatorial challenge, often rendering extensive manual tuning of these parameters intractable. In this paper, we present a framework that permits the use of different navigation actions and/or parameters depending on the spatial context of the navigation task, while considering the respective navigation algorithms themselves mostly as a “black box”, and find suitable parameters by means of an iterative optimisation, improving for performance metrics in simulated environments. We present a genetic algorithm incorporated into the framework and empirically show that the resulting parameter sets lead to substantial performance improvements in both simulated and real-world environments in the domain of agricultural robots.
@inproceedings{lincoln42389, month = {May}, author = {Adam Binch and Gautham Das and Jaime Pulido Fentanes and Marc Hanheide}, booktitle = {2020 IEEE International Conference on Robotics and Automation (ICRA)}, title = {Context Dependant Iterative Parameter Optimisation for Robust Robot Navigation}, publisher = {IEEE}, doi = {10.1109/ICRA40945.2020.9196550}, pages = {3937--3943}, year = {2020}, url = {https://eprints.lincoln.ac.uk/id/eprint/42389/}, abstract = {Progress in autonomous mobile robotics has seen significant advances in the development of many algorithms for motion control and path planning. However, robust performance from these algorithms can often only be expected if the parameters controlling them are tuned specifically for the respective robot model, and optimised for specific scenarios in the environment the robot is working in. Such parameter tuning can, depending on the underlying algorithm, amount to a substantial combinatorial challenge, often rendering extensive manual tuning of these parameters intractable. In this paper, we present a framework that permits the use of different navigation actions and/or parameters depending on the spatial context of the navigation task, while considering the respective navigation algorithms themselves mostly as a "black box", and find suitable parameters by means of an iterative optimisation, improving for performance metrics in simulated environments. We present a genetic algorithm incorporated into the framework and empirically show that the resulting parameter sets lead to substantial performance improvements in both simulated and real-world environments in the domain of agricultural robots.} }
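The framework above treats the navigation stack as a black box and searches its parameter space with a genetic algorithm. The sketch below shows the general shape of such a loop; the parameter names and the simulate_navigation() fitness function are hypothetical stand-ins for a real planner configuration and simulator, not the paper's actual setup.

```python
# Minimal black-box parameter optimisation with a simple genetic algorithm.
# Parameter names and the fitness function are hypothetical placeholders.
import random

BOUNDS = {"max_vel": (0.2, 1.5), "inflation_radius": (0.1, 1.0)}

def simulate_navigation(params):
    # Hypothetical fitness (higher is better). A real setup would run the
    # robot in simulation and score collisions, execution time, interventions.
    return -abs(params["max_vel"] - 0.8) - abs(params["inflation_radius"] - 0.4)

def random_params():
    return {k: random.uniform(lo, hi) for k, (lo, hi) in BOUNDS.items()}

def mutate(p, rate=0.5):
    q = dict(p)
    for k, (lo, hi) in BOUNDS.items():
        if random.random() < rate:   # perturb each gene with some probability
            q[k] = min(hi, max(lo, q[k] + random.gauss(0, 0.1 * (hi - lo))))
    return q

population = [random_params() for _ in range(20)]
for _ in range(30):                                    # generations
    population.sort(key=simulate_navigation, reverse=True)
    parents = population[:5]                           # elitist selection
    population = parents + [mutate(random.choice(parents)) for _ in range(15)]

best = max(population, key=simulate_navigation)
print(best)   # should approach max_vel ~ 0.8, inflation_radius ~ 0.4
```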
- N. Mavrakis, R. Stolkin, and A. G. Esfahani, “Estimating an object’s inertial parameters by robotic pushing: a data-driven approach,” in The IEEE/RSJ international conference on intelligent robots and systems (IROS), 2020, p. 9537–9544. doi:10.1109/IROS45743.2020.9341112
[BibTeX] [Abstract] [Download PDF]
Estimating the inertial properties of an object can make robotic manipulations more efficient, especially in extreme environments. This paper presents a novel method of estimating the 2D inertial parameters of an object by having a robot apply a push to it. We draw inspiration from previous analyses of quasi-static pushing mechanics, and introduce a data-driven model that can accurately represent these mechanics and provide a prediction for the object's inertial parameters. We evaluate the model with two datasets. For the first dataset, we set up a V-REP simulation of seven robots pushing objects with a large range of inertial parameters, acquiring 48000 pushes in total. For the second dataset, we use the object pushes from the MIT M-Cube lab pushing dataset. We extract features from force, moment and velocity measurements of the pushes, and train a Multi-Output Regression Random Forest. The experimental results show that we can accurately predict the 2D inertial parameters from a single push, and that our method retains this robust performance across various surface types.
@inproceedings{lincoln42213, month = {October}, author = {Nikos Mavrakis and Rustam Stolkin and Amir Ghalamzan Esfahani}, booktitle = {The IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)}, title = {Estimating An Object's Inertial Parameters By Robotic Pushing: A Data-Driven Approach}, journal = {The IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2020)}, doi = {10.1109/IROS45743.2020.9341112}, pages = {9537--9544}, year = {2020}, url = {https://eprints.lincoln.ac.uk/id/eprint/42213/}, abstract = {Estimating the inertial properties of an object can make robotic manipulations more efficient, especially in extreme environments. This paper presents a novel method of estimating the 2D inertial parameters of an object, by having a robot applying a push on it. We draw inspiration from previous analyses on quasi-static pushing mechanics, and introduce a data-driven model that can accurately represent these mechanics and provide a prediction for the object's inertial parameters. We evaluate the model with two datasets. For the first dataset, we set up a V-REP simulation of seven robots pushing objects with large range of inertial parameters, acquiring 48000 pushes in total. For the second dataset, we use the object pushes from the MIT M-Cube lab pushing dataset. We extract features from force, moment and velocity measurements of the pushes, and train a Multi-Output Regression Random Forest. The experimental results show that we can accurately predict the 2D inertial parameters from a single push, and that our method retains this robust performance under various surface types.} }
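The estimator above maps push measurements to inertial parameters with a Multi-Output Regression Random Forest. Scikit-learn's RandomForestRegressor handles multi-output targets natively, so the pipeline can be sketched in a few lines; the feature layout and the synthetic training data below are assumptions for illustration only, not the paper's datasets.

```python
# Sketch of a multi-output random forest mapping push features (forces,
# moments, velocities) to 2D inertial parameters. Feature layout and the
# synthetic data are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))          # 12 push features per sample (assumed)
true_map = rng.normal(size=(12, 4))      # stand-in for the physics linking pushes
y = X @ true_map + 0.05 * rng.normal(size=(1000, 4))  # mass, com_x, com_y, inertia

# RandomForestRegressor supports multi-output regression out of the box.
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
print(model.predict(X[:1]))              # estimated parameters for one push
```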
- S. Kottayil, P. Tsoleridis, K. Rossa, R. Connors, and C. Fox, “Investigation of driver route choice behaviour using Bluetooth data,” in 15th world conference on transport research, 2020, p. 632–645. doi:10.1016/j.trpro.2020.08.065
[BibTeX] [Abstract] [Download PDF]
Many local authorities use small-scale transport models to manage their transportation networks. These may assume drivers' behaviour to be rational in choosing the fastest route, and thus that all drivers behave the same given an origin and destination, leading to simplified aggregate flow models fitted to anonymous traffic flow measurements. Recent price falls in traffic sensors, data storage, and compute power now enable Data Science to empirically test such assumptions, by using per-driver data to infer route selection from sensor observations and compare it with optimal route selection. A methodology is presented using per-driver data to analyse driver route choice behaviour in transportation networks. Traffic flows on multiple measurable routes for origin-destination pairs are compared based on the length of each route. A driver rationality index is defined by considering the shortest physical route between an origin-destination pair. The proposed method is intended to aid calibration of parameters used in traffic assignment models, e.g. weights in generalized cost formulations or dispersion within stochastic user equilibrium models. The method is demonstrated using raw sensor datasets collected through Bluetooth sensors in the area of Chesterfield, Derbyshire, UK. The results for this region show that, for routes with a significant difference in path lengths, the majority (71%) of drivers use the optimal path, but as the difference in length decreases, the share of drivers choosing the optimal path drops (27%). The methodology can be used for extended research considering the impact on route choice of other factors, including travel time and road-specific conditions.
@inproceedings{lincoln34791, volume = {48}, month = {September}, author = {Sreedevi Kottayil and Panagiotis Tsoleridis and Kacper Rossa and Richard Connors and Charles Fox}, booktitle = {15th World Conference on Transport Research}, title = {Investigation of Driver Route Choice Behaviour Using Bluetooth Data}, doi = {10.1016/j.trpro.2020.08.065}, pages = {632--645}, year = {2020}, url = {https://eprints.lincoln.ac.uk/id/eprint/34791/} }
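The driver rationality index described above can be read as: for each origin-destination pair, the share of observed trips that used the shortest available route. A minimal sketch follows, with made-up route lengths and trip counts chosen to echo the 71%/27% pattern reported in the abstract.

```python
# Sketch of a per-OD-pair driver rationality index: the fraction of
# observed trips that used the shortest available route. All lengths
# and counts are illustrative assumptions.
observed = {                      # OD pair -> {route id: observed trip count}
    ("A", "B"): {"r1": 71, "r2": 29},   # routes differ a lot in length
    ("A", "C"): {"r3": 27, "r4": 73},   # routes are nearly equal in length
}
route_length_km = {"r1": 5.0, "r2": 7.5, "r3": 4.0, "r4": 4.1}

def rationality_index(routes: dict) -> float:
    """Share of trips on the shortest route for one OD pair."""
    shortest = min(routes, key=route_length_km.get)
    return routes[shortest] / sum(routes.values())

for od, routes in observed.items():
    print(od, round(rationality_index(routes), 2))
# ('A', 'B') 0.71 -- large length gap: most drivers take the optimum
# ('A', 'C') 0.27 -- near-equal lengths: choices spread across routes
```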