Publications

Download the BibTeX file of all L-CAS publications

Below is a list of all outputs created with the involvement of L-CAS academics. Filter by author or type.


2024

  • J. Gao, J. Zhang, F. Zhang, and J. Gao, “LACTA: a lightweight and accurate algorithm for cherry tomato detection in unstructured environments,” Expert systems with applications, vol. 238, iss. Part C, p. 122073, 2024. doi:10.1016/j.eswa.2023.122073
    [BibTeX] [Abstract] [Download PDF]

    Developing cherry tomato detection algorithms for selective harvesting robots faces many challenges due to the influence of various environmental factors such as lighting, water mist, overlap, and occlusion. To this end, we present LACTA, a lightweight and accurate cherry tomato detection algorithm specifically designed for harvesting robot operation in complex environments. Our approach enhances the model's generalization ability and robustness by selectively expanding the original dataset using a combination of offline and online data augmentation strategies. To effectively capture the small target features of cherry tomatoes, we construct an adaptive feature extraction network (AFEN) that focuses on extracting pertinent feature information to enhance the identification ability. Additionally, the proposed cross-layer feature fusion network (CFFN) preserves the model's lightweight nature while obtaining richer feature representations. Finally, the integration of efficient decoupled heads (EDH) further enhances the model's detection performance. Experimental results demonstrate the adaptability and robustness of LACTA, achieving precision, recall, and mAP values of 94%, 92.5%, and 97.3%, respectively. Compared to the original dataset, the offline-online combined data augmentation strategy improves precision, recall, and mAP by 1.6%, 1.7%, and 1.1%, respectively. The AFEN + CFFN network structure significantly reduces computational complexity by 28% and number of parameters by 72%. With a compact size of only 2.88M, the LACTA model can be seamlessly deployed into selective harvesting robots for the automated harvesting of cherry tomatoes in greenhouses. The code is available at https://github.com/ruyounuo/LACTA

    @article{lincoln56667,
    volume = {238},
    number = {Part C},
    month = {March},
    author = {Jin Gao and Junxiong Zhang and Fan Zhang and Junfeng Gao},
    title = {LACTA: A Lightweight and Accurate Algorithm for Cherry Tomato Detection in Unstructured Environments},
    publisher = {Elsevier},
    year = {2024},
    journal = {Expert Systems with Applications},
    doi = {10.1016/j.eswa.2023.122073},
    pages = {122073},
    url = {https://eprints.lincoln.ac.uk/id/eprint/56667/},
    abstract = {Developing cherry tomato detection algorithms for selective harvesting robots faces many challenges due to the influence of various environmental factors such as lighting, water mist, overlap, and occlusion. To this end, we present LACTA, a lightweight and accurate cherry tomato detection algorithm specifically designed for harvesting robot operation in complex environments. Our approach enhances the model's generalization ability and robustness by selectively expanding the original dataset using a combination of offline and online data augmentation strategies. To effectively capture the small target features of cherry tomatoes, we construct an adaptive feature extraction network (AFEN) that focuses on extracting pertinent feature information to enhance the identification ability. Additionally, the proposed cross-layer feature fusion network (CFFN) preserves the model's lightweight nature while obtaining richer feature representations. Finally, the integration of efficient decoupled heads (EDH) further enhances the model's detection performance. Experimental results demonstrate the adaptability and robustness of LACTA, achieving precision, recall, and mAP values of 94\%, 92.5\%, and 97.3\%, respectively. Compared to the original dataset, the offline-online combined data augmentation strategy improves precision, recall, and mAP by 1.6\%, 1.7\%, and 1.1\%, respectively. The AFEN + CFFN network structure significantly reduces computational complexity by 28\% and number of parameters by 72\%. With a compact size of only 2.88M, the LACTA model can be seamlessly deployed into selective harvesting robots for the automated harvesting of cherry tomatoes in greenhouses. The code is available at https://github.com/ruyounuo/LACTA}
    }

2023

  • G. Picardi, M. D. Luca, G. Chimienti, M. Cianchetti, and M. Calisti, “User-driven design and development of an underwater soft gripper for biological sampling and litter collection,” Journal of marine science and engineering, vol. 11, iss. 4, p. 771, 2023. doi:10.3390/jmse11040771
    [BibTeX] [Abstract] [Download PDF]

    Implementing manipulation and intervention capabilities in underwater vehicles is of crucial importance for commercial and scientific reasons. Mainstream underwater grippers are designed for the heavy load tasks typical of the industrial sector; however, due to the lack of alternatives, they are frequently used in biological sampling applications to handle irregular, delicate, and deformable specimens with a consequent high risk of damage. To overcome this limitation, the design of grippers for marine science applications should explicitly account for the requirements of end-users. In this paper, we aim at making a step forward and propose to systematically account for the needs of end-users by resorting to design tools used in industry for the conceptualization of new products which can yield great benefits to both applied robotic research and marine science. After the generation of the concept design for the gripper using a reduced version of the House of Quality and the Pugh decision matrix, we reported on its mechanical design, construction, and preliminary testing. The paper reports on the full design pipeline from requirements collection to preliminary testing with the aim of fostering and providing structure to fruitful interdisciplinary collaborations at the interface of robotics and marine science.

    @article{lincoln56195,
    volume = {11},
    number = {4},
    month = {March},
    author = {Giacomo Picardi and Mauro De Luca and Giovanni Chimienti and Matteo Cianchetti and Marcello Calisti},
    title = {User-Driven Design and Development of an Underwater Soft Gripper for Biological Sampling and Litter Collection},
    publisher = {MDPI},
    year = {2023},
    journal = {Journal of Marine Science and Engineering},
    doi = {10.3390/jmse11040771},
    pages = {771},
    url = {https://eprints.lincoln.ac.uk/id/eprint/56195/},
    abstract = {Implementing manipulation and intervention capabilities in underwater vehicles is of crucial importance for commercial and scientific reasons. Mainstream underwater grippers are designed for the heavy load tasks typical of the industrial sector; however, due to the lack of alternatives, they are frequently used in biological sampling applications to handle irregular, delicate, and deformable specimens with a consequent high risk of damage. To overcome this limitation, the design of grippers for marine science applications should explicitly account for the requirements of end-users. In this paper, we aim at making a step forward and propose to systematically account for the needs of end-users by resorting to design tools used in industry for the conceptualization of new products which can yield great benefits to both applied robotic research and marine science. After the generation of the concept design for the gripper using a reduced version of the House of Quality and the Pugh decision matrix, we reported on its mechanical design, construction, and preliminary testing. The paper reports on the full design pipeline from requirements collection to preliminary testing with the aim of fostering and providing structure to fruitful interdisciplinary collaborations at the interface of robotics and marine science.}
    }

  • F. Camara and C. Fox, “A kinematic model generates non-circular human proxemics zones,” Advanced robotics, 2023. doi:10.1080/01691864.2023.2263062
    [BibTeX] [Abstract] [Download PDF]

    Hall's theory of proxemics established distinct spatial zones around humans where they experience comfort or discomfort when interacting with others. Our previous work proposed a new model of proxemics and trust and it showed how to generate proxemics zone sizes using simple equations from human kinematic behaviour. But like most work, this assumed that the zones are circular. In this paper, we refine this model to take the initial heading of the agent into account and find that this results in a non-circular outer boundary of the social zone. These new analytical results from a generative model form a step towards more advanced quantitative proxemics in dual agents' interaction modelling.

    @article{lincoln56622,
    title = {A kinematic model generates non-circular human proxemics zones},
    author = {Fanta Camara and Charles Fox},
    publisher = {Taylor and Francis},
    year = {2023},
    doi = {10.1080/01691864.2023.2263062},
    journal = {Advanced Robotics},
    url = {https://eprints.lincoln.ac.uk/id/eprint/56622/},
    abstract = {Hall's theory of proxemics established distinct spatial zones around humans where they experience comfort or discomfort when interacting with others. Our previous work proposed a new model of proxemics and trust and it showed how to generate proxemics zone sizes using simple equations from human kinematic behaviour. But like most work, this assumed that the zones are circular. In this paper, we refine this model to take the initial heading of the agent into account and find that this results in a non-circular outer boundary of the social zone. These new analytical results from a generative model form a step towards more advanced quantitative proxemics in dual agents' interaction modelling.}
    }

  • A. M. G. Esfahani, “Haptic-guided grasping to minimise torque effort during robotic telemanipulation,” Autonomous robots, vol. 47, iss. 4, 2023. doi:10.1007/s10514-023-10096-7
    [BibTeX] [Abstract] [Download PDF]

    Teleoperating robotic manipulators can be complicated and cognitively demanding for the human operator. Despite these difficulties, teleoperated robotic systems are still popular in several industrial applications, e.g., remote handling of hazardous material. In this context, we present a novel haptic shared control method for minimising the manipulator torque effort during remote manipulative actions in which an operator is assisted in selecting a suitable grasping pose for then displacing an object along a desired trajectory. Minimising torque is important because it reduces the system operating cost and extends the range of objects that can be manipulated. We demonstrate the effectiveness of the proposed approach in a series of representative real-world pick-and-place experiments as well as in human subjects studies. The reported results prove the effectiveness of our shared control vs. a standard teleoperation approach. We also find that haptic-only guidance performs better than visual guidance, although combining them together leads to the best overall results.

    @article{lincoln53715,
    volume = {47},
    number = {4},
    month = {April},
    author = {Amir Masoud Ghalamzan Esfahani},
    title = {Haptic-guided Grasping to Minimise Torque Effort during Robotic Telemanipulation},
    publisher = {Springer},
    year = {2023},
    journal = {Autonomous Robots},
    doi = {10.1007/s10514-023-10096-7},
    url = {https://eprints.lincoln.ac.uk/id/eprint/53715/},
    abstract = {Teleoperating robotic manipulators can be complicated and cognitively demanding for the human operator. Despite these difficulties, teleoperated robotic systems are still popular in several industrial applications, e.g., remote handling of hazardous material. In this context, we present a novel haptic shared control method for minimising the manipulator torque effort during remote manipulative actions in which an operator is assisted in selecting a suitable grasping pose for then displacing an object along a desired trajectory. Minimising torque is important because it reduces the system operating cost and extends the range of objects that can be manipulated. We demonstrate the effectiveness of the proposed approach in a series of representative real-world pick-and-place experiments as well as in human subjects studies. The reported results prove the effectiveness of our shared control vs. a standard teleoperation approach. We also find that haptic-only guidance performs better than visual guidance, although combining them together leads to the best overall results.}
    }

  • J. Cox, D. Li, C. Fox, and S. Coutts, “Black-grass (Alopecurus myosuroides) in cereal multispectral detection by UAV,” Weed science, 2023. doi:10.1017/wsc.2023.41
    [BibTeX] [Abstract] [Download PDF]

    Site-specific weed management (on the scale of a few meters or less) has the potential to greatly reduce pesticide use and its associated environmental and economic costs. A prerequisite for site-specific weed management is the availability of accurate maps of the weed population that can be generated quickly and cheaply. Improvements and cost reductions in unmanned aerial vehicles (UAVs) and camera technology mean these tools are now readily available for agricultural use. We used UAVs to collect aerial images captured in both RGB and multispectral formats of 12 cereal fields (wheat [Triticum aestivum L.] and barley [Hordeum vulgare L.]) across eastern England. These data were used to train machine learning models to generate prediction maps of locations of black-grass (Alopecurus myosuroides Huds.), a prolific weed in UK cereal fields. We tested machine learning and data set resampling methods to obtain the most accurate system for predicting the presence and absence of weeds in new out-of-sample fields. The accuracy of the system in predicting the absence of A. myosuroides is 69% and its presence above 5 g in weight with 77% accuracy in new out-of-sample fields. This system generates prediction maps that can be used by either agricultural machinery or autonomous robotic platforms for precision weed management. Improvements to the accuracy can be made by increasing the number of fields and samples in the data set and the length of time over which data are collected to gather data across the entire growing season.

    @article{lincoln56155,
    title = {Black-grass (Alopecurus myosuroides) in cereal multispectral detection by UAV},
    author = {Jonathan Cox and Dom Li and Charles Fox and Shaun Coutts},
    publisher = {Cambridge University Press},
    year = {2023},
    doi = {10.1017/wsc.2023.41},
    journal = {Weed Science},
    url = {https://eprints.lincoln.ac.uk/id/eprint/56155/},
    abstract = {Site-specific weed management (on the scale of a few meters or less) has the potential to greatly reduce pesticide use and its associated environmental and economic costs. A prerequisite for site-specific weed management is the availability of accurate maps of the weed population that can be generated quickly and cheaply. Improvements and cost reductions in unmanned aerial vehicles (UAVs) and camera technology mean these tools are now readily available for agricultural use. We used UAVs to collect aerial images captured in both RGB and multispectral formats of 12 cereal fields (wheat [Triticum aestivum L.] and barley [Hordeum vulgare L.]) across eastern England. These data were used to train machine learning models to generate prediction maps of locations of black-grass (Alopecurus myosuroides Huds.), a prolific weed in UK cereal fields. We tested machine learning and data set resampling methods to obtain the most accurate system for predicting the presence and absence of weeds in new out-of-sample fields. The accuracy of the system in predicting the absence of A. myosuroides is 69\% and its presence above 5 g in weight with 77\% accuracy in new out-of-sample fields. This system generates prediction maps that can be used by either agricultural machinery or autonomous robotic platforms for precision weed management. Improvements to the accuracy can be made by increasing the number of fields and samples in the data set and the length of time over which data are collected to gather data across the entire growing season.}
    }

  • J. Cox, N. Tsagkopoulos, Z. Rozsypálek, T. Krajník, E. Sklar, and M. Hanheide, “Visual teach and generalise (VTAG)–exploiting perceptual aliasing for scalable autonomous robotic navigation in horticultural environments,” Computers and electronics in agriculture, vol. 212, iss. 108054, 2023. doi:10.1016/j.compag.2023.108054
    [BibTeX] [Abstract] [Download PDF]

    Nowadays, most agricultural robots rely on precise and expensive localisation, typically based on global navigation satellite systems (GNSS) and real-time kinematic (RTK) receivers. Unfortunately, the precision of GNSS localisation significantly decreases in environments where the signal paths between the receiver and the satellites are obstructed. This precision hampers deployments of these robots in, e.g., polytunnels or forests. An attractive alternative to GNSS is vision-based localisation and navigation. However, perceptual aliasing and landmark deficiency, typical for agricultural environments, cause traditional image processing techniques, such as feature matching, to fail. We propose an approach for an affordable pure vision-based navigation system which is not only robust to perceptual aliasing, but it actually exploits the repetitiveness of agricultural environments. Our system extends the classic concept of visual teach and repeat to visual teach and generalise (VTAG). Our teach and generalise method uses a deep learning-based image registration pipeline to register similar images through meaningful generalised representations obtained from different but similar areas. The proposed system uses only a low-cost uncalibrated monocular camera and the robot's wheel odometry to produce heading corrections to traverse crop rows in polytunnels safely. We evaluate this method at our test farm and at a commercial farm on three different robotic platforms where an operator teaches only a single crop row. With all platforms, the method successfully navigates the majority of rows with most interventions required at the end of the rows, where the camera no longer has a view of any repeating landmarks such as poles, crop row tables or rows which have visually different features to that of the taught row. For one robot which was taught one row 25 m long, our approach autonomously navigated the robot a total distance of over 3.5 km, reaching a teach-generalisation gain of 140.

    @article{lincoln55642,
    volume = {212},
    number = {108054},
    month = {September},
    author = {Jonathan Cox and Nikolaos Tsagkopoulos and Zden{\v e}k Rozsyp{\'a}lek and Tom{\'a}{\v s} Krajn{\'i}k and Elizabeth Sklar and Marc Hanheide},
    title = {Visual teach and generalise (VTAG){--}Exploiting perceptual aliasing for scalable autonomous robotic navigation in horticultural environments},
    publisher = {Elsevier},
    year = {2023},
    journal = {Computers and Electronics in Agriculture},
    doi = {10.1016/j.compag.2023.108054},
    url = {https://eprints.lincoln.ac.uk/id/eprint/55642/},
    abstract = {Nowadays, most agricultural robots rely on precise and expensive localisation, typically based on global navigation satellite systems (GNSS) and real-time kinematic (RTK) receivers. Unfortunately, the precision of GNSS localisation significantly decreases in environments where the signal paths between the receiver and the satellites are obstructed. This precision hampers deployments of these robots in, e.g., polytunnels or forests. An attractive alternative to GNSS is vision-based localisation and navigation. However, perceptual aliasing and landmark deficiency, typical for agricultural environments, cause traditional image processing techniques, such as feature matching, to fail. We propose an approach for an affordable pure vision-based navigation system which is not only robust to perceptual aliasing, but it actually exploits the repetitiveness of agricultural environments. Our system extends the classic concept of visual teach and repeat to visual teach and generalise (VTAG). Our teach and generalise method uses a deep learning-based image registration pipeline to register similar images through meaningful generalised representations obtained from different but similar areas. The proposed system uses only a low-cost uncalibrated monocular camera and the robot's wheel odometry to produce heading corrections to traverse crop rows in polytunnels safely. We evaluate this method at our test farm and at a commercial farm on three different robotic platforms where an operator teaches only a single crop row. With all platforms, the method successfully navigates the majority of rows with most interventions required at the end of the rows, where the camera no longer has a view of any repeating landmarks such as poles, crop row tables or rows which have visually different features to that of the taught row.
For one robot which was taught one row 25 m long our approach autonomously navigated the robot a total distance of over 3.5 km, reaching a teach-generalisation gain of 140.}
    }

  • X. Li, J. Gao, S. Jin, C. Jiang, M. Zhao, and M. Lu, “Towards robust registration of heterogeneous multispectral UAV imagery: a two-stage approach for cotton leaf lesion grading,” Computers and electronics in agriculture, vol. 212, 2023. doi:10.1016/j.compag.2023.108153
    [BibTeX] [Abstract] [Download PDF]

    Multiple source images acquired from diverse sensors mounted on unmanned aerial vehicles (UAVs) offer valuable complementary information for ground vegetation analysis. However, accurately aligning heterogeneous UAV images poses challenges due to differences in geometry, intensity, and noise resulting from varying imaging principles. This paper presents a two-stage registration method aimed at fusing visible RGB and multispectral images for cotton leaf lesion grading. The coarse alignment stage utilizes Scale Invariant Feature Transform (SIFT), while the refined alignment stage employs a novel correlation coefficient-based template matching. The proposed method first employs the EfficientDet network to detect infected cotton leaves with lesions in RGB images. Subsequently, lesion leaves in multiple spectral imagery (red, green, red edge, and near-infrared bands) are located using the perspective transformation matrix derived from SIFT and the coordinates of lesion leaves in RGB images. Refined registration between RGB and multispectral imagery is achieved through template matching with the new correlation coefficient. The registered reflectance data from the different spectral bands and RGB components are utilized to classify pixels in each infected leaf into lesion, healthy, and soil parts. The lesion grade is determined based on the ratio of lesion pixels to the total corresponding leaf area. Experimental results, compared with manual assessment, demonstrate a lesion leaves detection model with a mAP@0.5 of 91.01% and a leaf lesion grading accuracy of 92.01%. These results validate the suitability of the proposed method for UAV RGB and multispectral image registration, enabling automated cotton leaf lesion grading.

    @article{lincoln55903,
    volume = {212},
    month = {September},
    author = {Xinzhou Li and Junfeng Gao and Shichao Jin and Chunxin Jiang and Mingming Zhao and Mingzhou Lu},
    title = {Towards robust registration of heterogeneous multispectral UAV imagery: A two-stage approach for cotton leaf lesion grading},
    publisher = {Elsevier},
    journal = {Computers and Electronics in Agriculture},
    doi = {10.1016/j.compag.2023.108153},
    year = {2023},
    url = {https://eprints.lincoln.ac.uk/id/eprint/55903/},
    abstract = {Multiple source images acquired from diverse sensors mounted on unmanned aerial vehicles (UAVs) offer valuable complementary information for ground vegetation analysis. However, accurately aligning heterogeneous UAV images poses challenges due to differences in geometry, intensity, and noise resulting from varying imaging principles. This paper presents a two-stage registration method aimed at fusing visible RGB and multispectral images for cotton leaf lesion grading. The coarse alignment stage utilizes Scale Invariant Feature Transform (SIFT), while the refined alignment stage employs a novel correlation coefficient-based template matching. The proposed method first employs the EfficientDet network to detect infected cotton leaves with lesions in RGB images. Subsequently, lesion leaves in multiple spectral imagery (red, green, red edge, and near-infrared bands) are located using the perspective transformation matrix derived from SIFT and the coordinates of lesion leaves in RGB images. Refined registration between RGB and multispectral imagery is achieved through template matching with the new correlation coefficient. The registered reflectance data from the different spectral bands and RGB components are utilized to classify pixels in each infected leaf into lesion, healthy, and soil parts. The lesion grade is determined based on the ratio of lesion pixels to the total corresponding leaf area. Experimental results, compared with manual assessment, demonstrate a lesion leaves detection model with a mAP@0.5 of 91.01\% and a leaf lesion grading accuracy of 92.01\%. These results validate the suitability of the proposed method for UAV RGB and multispectral image registration, enabling automated cotton leaf lesion grading.}
    }

  • W. Mandil, K. Nazari, V. R. Sugathakumary, and A. G. Esfahani, “Tactile-sensing technologies: trends, challenges and outlook in agri-food manipulation,” Sensors, vol. 23, iss. 17, 2023. doi:10.3390/s23177362
    [BibTeX] [Abstract] [Download PDF]

    Tactile sensing plays a pivotal role in achieving precise physical manipulation tasks and extracting vital physical features. This comprehensive review paper presents an in-depth overview of the growing research on tactile-sensing technologies, encompassing state-of-the-art techniques, future prospects, and current limitations. The paper focuses on tactile hardware, algorithmic complexities, and the distinct features offered by each sensor. This paper has a special emphasis on agri-food manipulation and relevant tactile-sensing technologies. It highlights key areas in agri-food manipulation, including robotic harvesting, food item manipulation, and feature evaluation, such as fruit ripeness assessment, along with the emerging field of kitchen robotics. Through this interdisciplinary exploration, we aim to inspire researchers, engineers, and practitioners to harness the power of tactile-sensing technology for transformative advancements in agri-food robotics. By providing a comprehensive understanding of the current landscape and future prospects, this review paper serves as a valuable resource for driving progress in the field of tactile sensing and its application in agri-food systems.

    @article{lincoln56099,
    volume = {23},
    number = {17},
    month = {August},
    author = {Willow Mandil and Kiyanoush Nazari and Vishnu Rajendran Sugathakumary and Amir Ghalamzan Esfahani},
    title = {Tactile-Sensing Technologies: Trends, Challenges and Outlook in Agri-Food Manipulation},
    publisher = {MDPI},
    year = {2023},
    journal = {Sensors},
    doi = {10.3390/s23177362},
    url = {https://eprints.lincoln.ac.uk/id/eprint/56099/},
    abstract = {Tactile sensing plays a pivotal role in achieving precise physical manipulation tasks and extracting vital physical features. This comprehensive review paper presents an in-depth overview of the growing research on tactile-sensing technologies, encompassing state-of-the-art techniques, future prospects, and current limitations. The paper focuses on tactile hardware, algorithmic complexities, and the distinct features offered by each sensor. This paper has a special emphasis on agri-food manipulation and relevant tactile-sensing technologies. It highlights key areas in agri-food manipulation, including robotic harvesting, food item manipulation, and feature evaluation, such as fruit ripeness assessment, along with the emerging field of kitchen robotics. Through this interdisciplinary exploration, we aim to inspire researchers, engineers, and practitioners to harness the power of tactile-sensing technology for transformative advancements in agri-food robotics. By providing a comprehensive understanding of the current landscape and future prospects, this review paper serves as a valuable resource for driving progress in the field of tactile sensing and its application in agri-food systems.}
    }

  • S. M. Mellado, A. Mannucci, M. Magnusson, D. Adolfsson, H. Andreasson, M. Hamad, S. Abdolshah, R. T. Chadalavada, L. Palmieri, T. Linder, C. S. Swaminathan, T. P. Kucner, M. Hanheide, M. Fernandez-Carmona, G. Cielniak, T. Duckett, F. Pecora, S. Bokesand, K. O. Arras, S. Haddadin, and A. J. Lilienthal, “The ILIAD safety stack: human-aware infrastructure-free navigation of industrial mobile robots,” IEEE robotics and automation magazine, p. 2–13, 2023. doi:10.1109/MRA.2023.3296983
    [BibTeX] [Abstract] [Download PDF]

    Safe yet efficient operation of professional service robots within logistics or production in human-robot shared environments requires a flexible human-aware navigation stack. In this manuscript, we propose the ILIAD safety stack comprising software and hardware designed to achieve safe and efficient motion specifically for industrial vehicles with nontrivial kinematics. The stack integrates five interconnected layers for autonomous motion planning and control to enable short- and long-term reasoning. The use-case scenario tested requires an autonomous industrial forklift to safely navigate among pick-and-place locations during normal daily activities involving human workers. Our test-bed in the real world consists of a three-day experiment in a food distribution warehouse. The evaluation is extended in simulation with an ablation study of the impact of different layers to show both the practical and the performance-related impact. The experimental results show a safer and more legible robot when humans are nearby with a trade-off in task efficiency, and that not all layers have the same degree of impact in the system.

    @article{lincoln56102,
    month = {August},
    author = {Sergio Molina Mellado and Anna Mannucci and Martin Magnusson and Daniel Adolfsson and Henrik Andreasson and Mazin Hamad and Saeed Abdolshah and Ravi Teja Chadalavada and Luigi Palmieri and Timm Linder and Chittaranjan Srinivas Swaminathan and Tomasz Piotr Kucner and Marc Hanheide and Manuel Fernandez-Carmona and Grzegorz Cielniak and Tom Duckett and Federico Pecora and Simon Bokesand and Kai Oliver Arras and Sami Haddadin and Achim J. Lilienthal},
    title = {The ILIAD Safety Stack: Human-Aware Infrastructure-Free Navigation of Industrial Mobile Robots},
    publisher = {Robotics and Automation Society},
    journal = {IEEE Robotics and Automation Magazine},
    doi = {10.1109/MRA.2023.3296983},
    pages = {2--13},
    year = {2023},
    url = {https://eprints.lincoln.ac.uk/id/eprint/56102/},
    abstract = {Safe yet efficient operation of professional service robots within logistics or production in human-robot shared environments requires a flexible human-aware navigation stack. In this manuscript, we propose the ILIAD safety stack comprising software and hardware designed to achieve safe and efficient motion specifically for industrial vehicles with nontrivial kinematics. The stack integrates five interconnected layers for autonomous motion planning and control to enable short- and long-term reasoning. The use-case scenario tested requires an autonomous industrial forklift to safely navigate among pick-and-place locations during normal daily activities involving human workers. Our test-bed in the real world consists of a three-day experiment in a food distribution warehouse. The evaluation is extended in simulation with an ablation study of the impact of different layers to show both the practical and the performance-related impact. The experimental results show a safer and more legible robot when humans are nearby with a trade-off in task efficiency, and that not all layers have the same degree of impact in the system.}
    }

  • J. Ganzer, N. Criado, M. Lopez-Sanchez, S. Parsons, and J. A. Rodriguez-Aguilar, “A model to support collective reasoning: formalization, analysis and computational assessment,” Journal of artificial intelligence research (jair), vol. 77, 2023. doi:10.1613/jair.1.14409
    [BibTeX] [Abstract] [Download PDF]

    In this paper we propose a new model to represent human debates and methods to obtain collective conclusions from them. This model overcomes two drawbacks of existing approaches. First, our model does not assume that participants agree on the structure of the debate. It does this by allowing participants to express their opinion about all aspects of the debate. Second, our model does not assume that participants’ opinions are rational, an assumption that significantly limits current approaches. Instead, we define a weaker notion of rationality that characterises coherent opinions, and we consider different scenarios based on the coherence of individual opinions and the level of consensus. We provide a formal analysis of different opinion aggregation functions that compute a collective decision based on the individual opinions and the debate structure. In particular, we demonstrate that aggregated opinions can be coherent even if there is a lack of consensus and individual opinions are not coherent. We conclude with an empirical evaluation demonstrating that collective opinions can be computed efficiently for real-sized debates.

    @article{lincoln55428,
    volume = {77},
    month = {July},
    author = {Jordi Ganzer and Natalia Criado and Maite Lopez-Sanchez and Simon Parsons and Juan A. Rodriguez-Aguilar},
    title = {A model to support collective reasoning: Formalization, analysis and computational assessment},
    publisher = {AI Access Foundation},
    journal = {Journal of Artificial Intelligence Research (JAIR)},
    doi = {10.1613/jair.1.14409},
    year = {2023},
    url = {https://eprints.lincoln.ac.uk/id/eprint/55428/},
    abstract = {In this paper we propose a new model to represent human debates and methods to obtain collective conclusions from them. This model overcomes two drawbacks of existing approaches. First, our model does not assume that participants agree on the structure of the debate. It does this by allowing participants to express their opinion about all aspects of the debate. Second, our model does not assume that participants' opinions are rational, an assumption that significantly limits current approaches. Instead, we define a weaker notion of rationality that characterises coherent opinions, and we consider different scenarios based on the coherence of individual opinions and the level of consensus.
    We provide a formal analysis of different opinion aggregation functions that compute a collective decision based on the individual opinions and the debate structure. In particular, we demonstrate that aggregated opinions can be coherent even if there is a lack of consensus and individual opinions are not coherent. We conclude with an empirical evaluation demonstrating that collective opinions can be computed efficiently for real-sized debates.}
    }

  • R. Bina, A. Kamali E., A. Taghvaeipour, and A. Klimchik, “A procedure for the stiffness identification of parallel robots under measurement limitations,” Mechanics based design of structures and machines, 2023. doi:10.1080/15397734.2023.2234991
    [BibTeX] [Abstract] [Download PDF]

    This paper introduces a procedure to obtain a reliable stiffness model for parallel robots from experimental data and to identify its parameters considering measurement limitations. The efficiency of the proposed identification procedure was validated via simulation and experimental studies on a 3-DOF Delta parallel robot. Simulation results showed that the proposed simplification and model reduction retains more than 95% of the entire stiffness properties (for the worst-case analysis). The experimental results proved that the obtained model on average describes 95% of compliance errors, and in the worst case the error does not exceed 9.8%.

    @article{lincoln56591,
    title = {A procedure for the stiffness identification of parallel robots under measurement limitations},
    author = {Rasool Bina and Ali Kamali E. and Afshin Taghvaeipour and Alexandr Klimchik},
    publisher = {Taylor and Francis},
    year = {2023},
    doi = {10.1080/15397734.2023.2234991},
    journal = {Mechanics Based Design of Structures and Machines},
    url = {https://eprints.lincoln.ac.uk/id/eprint/56591/},
    abstract = {This paper introduces a procedure to obtain a reliable stiffness model for parallel robots from experimental data and to identify its parameters considering measurement limitations. The efficiency of the proposed identification procedure was validated via simulation and experimental studies on a 3-DOF Delta parallel robot. Simulation results showed that the proposed simplification and model reduction retains more than 95\% of the entire stiffness properties (for the worst-case analysis). The experimental results proved that the obtained model on average describes 95\% of compliance errors, and in the worst case the error does not exceed 9.8\%.}
    }

  • L. Guevara, M. Hanheide, and S. Parsons, “Implementation of a human-aware robot navigation module for cooperative soft-fruit harvesting operations,” Journal of field robotics, 2023. doi:10.1002/rob.22227
    [BibTeX] [Abstract] [Download PDF]

    In the last decades, robotic solutions have been introduced in agriculture to improve the efficiency of tasks such as spraying, plowing, and seeding. However, for a more complex task like soft-fruit harvesting, the efficiency of experienced human pickers has not been surpassed yet by robotic solutions. Thus, in the immediate future, human labor will probably still be necessary for picking tasks, while robotic platforms could be used as collaborators, supporting the pickers in the transportation of the harvested fruit. This cooperative harvesting strategy creates a human-robot interaction (HRI) that requires significant further development in human-aware safe navigation and effective bidirectional communication of intent. In fact, although agricultural robots are considered small/medium size machinery, they still represent a risk of causing injuries to human collaborators, especially if people are not trained to work with robots or robot operations are not intuitive. Avoiding such injury is the aim of this work, which contributes to the development, implementation, and evaluation of a human-aware navigation (HAN) module that can be integrated into the autonomous navigation system of commercial agricultural robots. The proposed module is responsible for the detection and monitoring of humans working around the robot and uses this information to activate safety actions depending on whether the human presence is considered at risk or not. Apart from ensuring a physically safe HRI, the proposed module deals with the comfort level and psychological safety of human coworkers. The latter is possible by using an explicit human-robot communication strategy that lets both know of the other's intentions, increasing the level of trust and reducing inefficient pauses triggered by unnecessary safety actions.
The proposed HAN solution was integrated into a commercial agricultural robot and tested in several situations that are expected to happen during cooperative harvesting operations. The results of a usability assessment illustrated the benefits of the proposal in terms of safety, efficiency, and ergonomics.

    @article{lincoln55459,
    title = {Implementation of a human-aware robot navigation module for cooperative soft-fruit harvesting operations},
    author = {Leonardo Guevara and Marc Hanheide and Simon Parsons},
    publisher = {Wiley},
    year = {2023},
    doi = {10.1002/rob.22227},
    journal = {Journal of Field Robotics},
    url = {https://eprints.lincoln.ac.uk/id/eprint/55459/},
    abstract = {In the last decades, robotic solutions have been introduced in agriculture to improve the efficiency of tasks such as spraying, plowing, and seeding. However, for a more complex task like soft-fruit harvesting, the efficiency of experienced human pickers has not been surpassed yet by robotic solutions. Thus, in the immediate future, human labor will probably still be necessary for picking tasks, while robotic platforms could be used as collaborators, supporting the pickers in the transportation of the harvested fruit. This cooperative harvesting strategy creates a human-robot interaction (HRI) that requires significant further development in human-aware safe navigation and effective bidirectional communication of intent. In fact, although agricultural robots are considered small/medium size machinery, they still represent a risk of causing injuries to human collaborators, especially if people are not trained to work with robots or robot operations are not intuitive. Avoiding such injury is the aim of this work, which contributes to the development, implementation, and evaluation of a human-aware navigation (HAN) module that can be integrated into the autonomous navigation system of commercial agricultural robots. The proposed module is responsible for the detection and monitoring of humans working around the robot and uses this information to activate safety actions depending on whether the human presence is considered at risk or not. Apart from ensuring a physically safe HRI, the proposed module deals with the comfort level and psychological safety of human coworkers. The latter is possible by using an explicit human-robot communication strategy that lets both know of the other's intentions, increasing the level of trust and reducing inefficient pauses triggered by unnecessary safety actions.
The proposed HAN solution was integrated into a commercial agricultural robot and tested in several situations that are expected to happen during cooperative harvesting operations. The results of a usability assessment illustrated the benefits of the proposal in terms of safety, efficiency, and ergonomics.}
    }

  • V. Wichitwechkarn and C. Fox, “Macarons: a modular and open-sourced automation system for vertical farming,” Journal of open hardware, vol. 7, iss. 1, 2023. doi:10.5334/joh.53
    [BibTeX] [Abstract] [Download PDF]

    The Modular Automated Crop Array Online System (MACARONS) is an extensible, scalable, open hardware system for plant transport in automated horticulture systems such as vertical farms. It is specified to move trays of plants up to 1060 mm × 630 mm and 12.5 kg at a rate of 100 mm/s along the guide rails and 41.7 mm/s up the lifts, such as between stations for monitoring and actuating plants. The cost for the construction of one grow unit of MACARONS is 144.96 USD, which equates to 128.85 USD/m² of grow area. The designs are released and meet the requirements of CERN-OSH-W, which includes step-by-step graphical build instructions, and can be built by a typical technical person in one day at a cost of 1535.50 USD. Integrated tests included in the build instructions are used to validate against the specifications, and we report on a successful build. Through a simple analysis, we demonstrate that MACARONS can operate at a rate sufficient to automate tray loading/unloading, to reduce labour costs in a vertical farm.

    @article{lincoln52872,
    volume = {7},
    number = {1},
    month = {January},
    author = {Vijja Wichitwechkarn and Charles Fox},
    title = {MACARONS: A Modular and Open-Sourced Automation System for Vertical Farming},
    publisher = {Ubiquity Press},
    year = {2023},
    journal = {Journal of Open Hardware},
    doi = {10.5334/joh.53},
    url = {https://eprints.lincoln.ac.uk/id/eprint/52872/},
    abstract = {The Modular Automated Crop Array Online System (MACARONS) is an extensible, scalable, open hardware system for plant transport in automated horticulture systems such as vertical farms. It is specified to move trays of plants up to 1060mm $\times$ 630mm and 12.5kg at a rate of 100mm/s along the guide rails and 41.7mm/s up the lifts, such as between stations for monitoring and actuating plants. The cost for the construction of one grow unit of MACARONS is 144.96 USD, which equates to 128.85 USD/m$^2$ of grow area. The designs are released and meet the requirements of CERN-OSH-W, which includes step-by-step graphical build instructions, and can be built by a typical technical person in one day at a cost of 1535.50 USD. Integrated tests included in the build instructions are used to validate against the specifications, and we report on a successful build. Through a simple analysis, we demonstrate that MACARONS can operate at a rate sufficient to automate tray loading/unloading, to reduce labour costs in a vertical farm.}
    }

  • Z. Maamar, E. Kajan, M. Al-Khafajiy, M. Dohan, A. Fayoumi, and F. Yahya, “A multi-type artifact framework for cyber-physical, social systems design and development,” Internet of things, vol. 22, p. 100820, 2023. doi:10.1016/j.iot.2023.100820
    [BibTeX] [Abstract] [Download PDF]

    This paper discusses the design and development of cyber-physical, social systems using a set of guidelines that capture the conceptual and technical characteristics of such systems. These guidelines are packaged into a framework that resorts to the concept of artifact. Because of these characteristics, the framework's artifacts are specialized into 3 types referred to as data, thing, and social, all connected together through a set of situational relations referred to as work-with-me, work-for-me, back-me, and avoid-me. To mitigate conflicts that could arise because of artifacts' respective time availabilities when they jointly participate in situational relations, policies are put in place defining who does what, when, where, and why. To demonstrate the technical doability of the multi-type artifact framework, a system capturing cyber, physical, and social interactions in a healthcare case-study is developed, deployed, and evaluated.

    @article{lincoln56608,
    volume = {22},
    month = {June},
    author = {Zakaria Maamar and Ejub Kajan and Mohammed Al-Khafajiy and Murtada Dohan and Amjad Fayoumi and Fadwa Yahya},
    title = {A multi-type artifact framework for cyber-physical, social systems design and development},
    publisher = {Elsevier},
    year = {2023},
    journal = {Internet of Things},
    doi = {10.1016/j.iot.2023.100820},
    pages = {100820},
    url = {https://eprints.lincoln.ac.uk/id/eprint/56608/},
    abstract = {This paper discusses the design and development of cyber-physical, social systems using a set of guidelines that capture the conceptual and technical characteristics of such systems. These guidelines are packaged into a framework that resorts to the concept of artifact. Because of these characteristics, the framework's artifacts are specialized into 3 types referred to as data, thing, and social, all connected together through a set of situational relations referred to as work-with-me, work-for-me, back-me, and avoid-me. To mitigate conflicts that could arise because of artifacts' respective time availabilities when they jointly participate in situational relations, policies are put in place defining who does what, when, where, and why. To demonstrate the technical doability of the multi-type artifact framework, a system capturing cyber, physical, and social interactions in a healthcare case-study is developed, deployed, and evaluated.}
    }

  • J. C. M. Baños, P. J. From, and G. Cielniak, “Towards safe robotic agricultural applications: safe navigation system design for a robotic grass-mowing application through the risk management method,” Robotics, vol. 12, iss. 63, 2023. doi:10.3390/robotics12030063
    [BibTeX] [Abstract] [Download PDF]

    Safe navigation is a key objective for autonomous applications, particularly those involving mobile tasks, to avoid dangerous situations and prevent harm to humans. However, the integration of a risk management process is not yet mandatory in robotics development. Ensuring safety using mobile robots is critical for many real-world applications, especially those in which contact with the robot could result in fatal consequences, such as agricultural environments where a mobile device with an industrial cutter is used for grass-mowing. In this paper, we propose an explicit integration of a risk management process into the design of the software for an autonomous grass mower, with the aim of enhancing safety. Our approach is tested and validated in simulated scenarios that assess the effectiveness of different custom safety functionalities in terms of collision prevention, execution time, and the number of required human interventions.

    @article{lincoln54478,
    volume = {12},
    number = {63},
    month = {June},
    author = {Jos{\'e} Carlos Mayoral Ba{\~n}os and P{\r a}l Johan From and Grzegorz Cielniak},
    title = {Towards Safe Robotic Agricultural Applications: Safe Navigation System Design for a Robotic Grass-Mowing Application through the Risk Management Method},
    publisher = {MDPI},
    year = {2023},
    journal = {Robotics},
    doi = {10.3390/robotics12030063},
    url = {https://eprints.lincoln.ac.uk/id/eprint/54478/},
    abstract = {Safe navigation is a key objective for autonomous applications, particularly those involving mobile tasks, to avoid dangerous situations and prevent harm to humans. However, the integration of a risk management process is not yet mandatory in robotics development. Ensuring safety using mobile robots is critical for many real-world applications, especially those in which contact with the robot could result in fatal consequences, such as agricultural environments where a mobile device with an industrial cutter is used for grass-mowing. In this paper, we propose an explicit integration of a risk management process into the design of the software for an autonomous grass mower, with the aim of enhancing safety. Our approach is tested and validated in simulated scenarios that assess the effectiveness of different custom safety functionalities in terms of collision prevention, execution time, and the number of required human interventions.}
    }

  • L. Manning, S. Brewer, P. Craigon, J. Frey, A. Gutierrez, N. Jacobs, S. Kanza, S. Munday, J. Sacks, and S. Pearson, “Reflexive governance architectures: considering the ethical implications of autonomous technology adoption in food supply chains,” Trends in food science & technology, vol. 133, p. 114–126, 2023. doi:10.1016/j.tifs.2023.01.015
    [BibTeX] [Abstract] [Download PDF]

    Background: The application of autonomous technology in food supply chains gives rise to a number of ethical considerations associated with the interaction between human and technology, human-technology-plant and human-technology-animal. These considerations and their implications influence technology design, the ways in which technology is applied, how the technology changes food supply chain practices, decision-making and the associated ethical aspects and outcomes. Scope and approach: Using the concept of reflexive governance, this paper has critiqued existing reflective food-related ethical assessment tools and proposed the structural elements required for reflexive governance architectures which address both the sharing of data, and the use of artificial intelligence (AI) and machine learning in food supply chains. Key findings and conclusions: Considering the ethical implications of using autonomous technology in real life contexts is challenging. The current approach, focusing on discrete ethical elements in isolation e.g., ethical aspects or outcomes, normative standards or ethically orientated compliance-based business strategies is not sufficient in itself. Alternatively, the application of more holistic, reflexive governance architectures can inform consideration of ethical aspects, potential ethical outcomes, in particular how they are interlinked and/or interdependent, and the need for mitigation at all lifecycle stages of technology and food product conceptualisation, design, realisation and adoption in the food supply chain. This research is of interest to those who are undertaking ethical deliberation on data sharing, and the use of AI and machine learning in food supply chains.

    @article{lincoln53439,
    volume = {133},
    month = {March},
    author = {L. Manning and S. Brewer and P. Craigon and J. Frey and A. Gutierrez and N. Jacobs and S. Kanza and S. Munday and J. Sacks and S. Pearson},
    title = {Reflexive governance architectures: considering the ethical implications of autonomous technology adoption in food supply chains},
    publisher = {Elsevier},
    year = {2023},
    journal = {Trends in Food Science \& Technology},
    doi = {10.1016/j.tifs.2023.01.015},
    pages = {114--126},
    url = {https://eprints.lincoln.ac.uk/id/eprint/53439/},
    abstract = {Background: The application of autonomous technology in food supply chains gives rise to a number of ethical considerations associated with the interaction between human and technology, human-technology-plant and human-technology-animal. These considerations and their implications influence technology design, the ways in which technology is applied, how the technology changes food supply chain practices, decision-making and the associated ethical aspects and outcomes.
    Scope and approach: Using the concept of reflexive governance, this paper has critiqued existing reflective food-related ethical assessment tools and proposed the structural elements required for reflexive governance architectures which address both the sharing of data, and the use of artificial intelligence (AI) and machine learning in food supply chains.
    Key findings and conclusions: Considering the ethical implications of using autonomous technology in real life contexts is challenging. The current approach, focusing on discrete ethical elements in isolation e.g., ethical aspects or outcomes, normative standards or ethically orientated compliance-based business strategies is not sufficient in itself. Alternatively, the application of more holistic, reflexive governance architectures can inform consideration of ethical aspects, potential ethical outcomes, in particular how they are interlinked and/or interdependent, and the need for mitigation at all lifecycle stages of technology and food product conceptualisation, design, realisation and adoption in the food supply chain. This research is of interest to those who are undertaking ethical deliberation on data sharing, and the use of AI and machine learning in food supply chains.}
    }

  • S. Pearson, S. Brewer, L. Manning, L. Bidaut, G. Onoufriou, A. Durrant, G. Leontidis, C. Jabbour, A. Zisman, G. Parr, J. Frey, and R. Maull, “Decarbonising our food systems: contextualising digitalisation for net zero,” Frontiers in sustainable food systems, vol. 7, 2023. doi:10.3389/fsufs.2023.1094299
    [BibTeX] [Abstract] [Download PDF]

    The food system is undergoing a digital transformation that connects local and global supply chains to address economic, environmental and societal drivers. Digitalisation enables firms to meet sustainable development goals (SDGs), address climate change and the wider negative externalities of food production such as biodiversity loss, and diffuse pollution. Digitalising at the business and supply chain level through public-private mechanisms for data exchange affords the opportunity for greater collaboration, visualising and measuring activities and their socio-environmental impact, demonstrating compliance with regulatory and market requirements and providing opportunity to capture current practice and future opportunities for process and product improvement. Herein we consider digitalisation as a tool to drive innovation and transition to a decarbonised food system. We consider that deep decarbonisation of the food system can only occur when trusted emissions data are exchanged across supply chains. This requires fusion of standardised emissions measurements within a supply chain data sharing framework. This framework, likely operating as a corporate entity, would provide the foci for measurement standards, data exchange, trusted and certified data and as a multi-stakeholder body, including regulators, that would build trust and collaboration across supply chains. This approach provides a methodology for accurate and trusted emissions data to inform consumer choice and industrial response of individual firms within a supply chain.

    @article{lincoln54866,
    volume = {7},
    month = {May},
    author = {Simon Pearson and Steve Brewer and Louise Manning and Luc Bidaut and George Onoufriou and Aiden Durrant and Georgios Leontidis and Charbel Jabbour and Andrea Zisman and Gerard Parr and Jeremy Frey and Roger Maull},
    title = {Decarbonising Our Food Systems: Contextualising Digitalisation For Net Zero},
    publisher = {Frontiers Media},
    journal = {Frontiers in Sustainable Food Systems},
    doi = {10.3389/fsufs.2023.1094299},
    year = {2023},
    url = {https://eprints.lincoln.ac.uk/id/eprint/54866/},
    abstract = {The food system is undergoing a digital transformation that connects local and global supply chains to address economic, environmental and societal drivers. Digitalisation enables firms to meet sustainable development goals (SDGs), address climate change and the wider negative externalities of food production such as biodiversity loss, and diffuse pollution. Digitalising at the business and supply chain level through public-private mechanisms for data exchange affords the opportunity for greater collaboration, visualising and measuring activities and their socio-environmental impact, demonstrating compliance with regulatory and market requirements and providing opportunity to capture current practice and future opportunities for process and product improvement. Herein we consider digitalisation as a tool to drive innovation and transition to a decarbonised food system. We consider that deep decarbonisation of the food system can only occur when trusted emissions data are exchanged across supply chains. This requires fusion of standardised emissions measurements within a supply chain data sharing framework. This framework, likely operating as a corporate entity, would provide the foci for measurement standards, data exchange, trusted and certified data and as a multi-stakeholder body, including regulators, that would build trust and collaboration across supply chains. This approach provides a methodology for accurate and trusted emissions data to inform consumer choice and industrial response of individual firms within a supply chain.}
    }

  • P. Craigon, J. Sacks, S. Brewer, J. Frey, G. A. Mendoza, S. Kanza, L. Manning, S. Munday, A. Wintour, and S. Pearson, “Ethics by design: responsible research & innovation for ai in the food sector,” Journal of responsible technology, vol. 13, iss. 100051, 2023. doi:10.1016/j.jrt.2022.100051
    [BibTeX] [Abstract] [Download PDF]

    Here we reflect on how a multi-disciplinary working group explored the ethical complexities of the use of new technologies for data sharing in the food supply chain. We used a three-part process of varied design methods, which included collaborative ideation and speculative scenario development, the creation of design fiction objects, and assessment using the Moral-IT deck, a card-based tool. We present, through the lens of the EPSRC’s Framework for Responsible Innovation how processes of anticipation, reflection, engagement and action built a plausible, fictional world in which a data trust uses artificial intelligence (AI) to support data sharing and decision-making across the food supply chain. This approach provides rich opportunities for considering ethical challenges to data sharing as part of a reflexive and engaged responsible innovation approach. We reflect on the value and potential of this approach as a method for engaged (co-)design and responsible innovation.

    @article{lincoln52115,
    volume = {13},
    number = {100051},
    month = {April},
    author = {P. Craigon and J. Sacks and S. Brewer and J. Frey and A. Gutierrez Mendoza and S. Kanza and L. Manning and S. Munday and A. Wintour and S. Pearson},
    title = {Ethics by Design: Responsible Research \& Innovation for AI in the Food Sector},
    publisher = {Elsevier},
    year = {2023},
    journal = {Journal of Responsible Technology},
    doi = {10.1016/j.jrt.2022.100051},
    url = {https://eprints.lincoln.ac.uk/id/eprint/52115/},
    abstract = {Here we reflect on how a multi-disciplinary working group explored the ethical complexities of the use of new technologies for data sharing in the food supply chain. We used a three-part process of varied design methods, which included collaborative ideation and speculative scenario development, the creation of design fiction objects, and assessment using the Moral-IT deck, a card-based tool. We present, through the lens of the EPSRC's Framework for Responsible Innovation how processes of anticipation, reflection, engagement and action built a plausible, fictional world in which a data trust uses artificial intelligence (AI) to support data sharing and decision-making across the food supply chain. This approach provides rich opportunities for considering ethical challenges to data sharing as part of a reflexive and engaged responsible innovation approach. We reflect on the value and potential of this approach as a method for engaged (co-)design and responsible innovation.}
    }

  • G. Picardi, A. Astolfi, D. Chatzievangelou, J. Aguzzi, and M. Calisti, “Underwater legged robotics: review and perspectives,” Bioinspiration & biomimetics, vol. 18, iss. 3, 2023. doi:10.1088/1748-3190/acc0bb
    [BibTeX] [Abstract] [Download PDF]

    Nowadays, there is a growing awareness of the social and economic importance of the ocean. In this context, being able to carry out a diverse range of operations underwater is of paramount importance for many industrial sectors as well as for marine science and to enforce restoration and mitigation actions. Underwater robots have allowed us to venture deeper and for longer into the remote and hostile marine environment. However, traditional design concepts such as propeller driven remotely operated vehicles, autonomous underwater vehicles, or tracked benthic crawlers, present intrinsic limitations, especially when a close interaction with the environment is required. An increasing number of researchers are proposing legged robots as a bioinspired alternative to traditional designs, capable of yielding versatile multi-terrain locomotion, high stability, and low environmental disturbance. In this work, we aim at presenting the new field of underwater legged robotics in an organic way, discussing the prototypes in the state-of-the-art and highlighting technological and scientific challenges for the future. First, we will briefly recap the latest developments in traditional underwater robotics from which several technological solutions can be adapted, and on which the benchmarking of this new field should be set. Second, we will then retrace the evolution of terrestrial legged robotics, pinpointing the main achievements of the field. Third, we will report a complete state of the art on underwater legged robots focusing on the innovations with respect to the interaction with the environment, sensing and actuation, modelling and control, and autonomy and navigation. Finally, we will thoroughly discuss the reviewed literature by comparing traditional and legged underwater robots, highlighting interesting research opportunities, and presenting use case scenarios derived from marine science applications.

    @article{lincoln56189,
    volume = {18},
    number = {3},
    month = {April},
    author = {G Picardi and A Astolfi and D Chatzievangelou and J Aguzzi and Marcello Calisti},
    title = {Underwater legged robotics: review and perspectives},
    publisher = {IOP Publishing},
    year = {2023},
    journal = {Bioinspiration \& Biomimetics},
    doi = {10.1088/1748-3190/acc0bb},
    url = {https://eprints.lincoln.ac.uk/id/eprint/56189/},
    abstract = {Nowadays, there is a growing awareness of the social and economic importance of the ocean. In this context, being able to carry out a diverse range of operations underwater is of paramount importance for many industrial sectors as well as for marine science and to enforce restoration and mitigation actions. Underwater robots have allowed us to venture deeper and for longer into the remote and hostile marine environment. However, traditional design concepts such as propeller-driven remotely operated vehicles, autonomous underwater vehicles, or tracked benthic crawlers present intrinsic limitations, especially when a close interaction with the environment is required. An increasing number of researchers are proposing legged robots as a bioinspired alternative to traditional designs, capable of yielding versatile multi-terrain locomotion, high stability, and low environmental disturbance. In this work, we aim at presenting the new field of underwater legged robotics in an organic way, discussing the prototypes in the state of the art and highlighting technological and scientific challenges for the future. First, we will briefly recap the latest developments in traditional underwater robotics, from which several technological solutions can be adapted and on which the benchmarking of this new field should be set. Second, we will retrace the evolution of terrestrial legged robotics, pinpointing the main achievements of the field. Third, we will report a complete state of the art on underwater legged robots, focusing on the innovations with respect to the interaction with the environment, sensing and actuation, modelling and control, and autonomy and navigation. Finally, we will thoroughly discuss the reviewed literature by comparing traditional and legged underwater robots, highlighting interesting research opportunities, and presenting use case scenarios derived from marine science applications.}
    }

  • R. D. Silva, G. Cielniak, G. Wang, and J. Gao, “Deep learning-based crop row detection for infield navigation of agri-robots,” Journal of field robotics, 2023. doi:10.1002/rob.22238
    [BibTeX] [Abstract] [Download PDF]

    Autonomous navigation in agricultural environments is challenged by varying field conditions that arise in arable fields. State-of-the-art solutions for autonomous navigation in such environments require expensive hardware such as RTK-GNSS. This paper presents a robust crop row detection algorithm that withstands such field variations using inexpensive cameras. Existing datasets for crop row detection do not represent all the possible field variations. A dataset of sugar beet images was created representing 11 field variations comprising multiple growth stages, light levels, varying weed densities, curved crop rows and discontinuous crop rows. The proposed pipeline segments the crop rows using a deep learning-based method and employs the predicted segmentation mask for extraction of the central crop row using a novel central crop row selection algorithm. The novel crop row detection algorithm was tested for crop row detection performance and the capability of visual servoing along a crop row. The visual servoing-based navigation was tested on a realistic simulation scenario with real ground and plant textures. Our algorithm demonstrated robust vision-based crop row detection in challenging field conditions, outperforming the baseline.

    @article{lincoln55690,
    title = {Deep learning-based Crop Row Detection for Infield Navigation of Agri-Robots},
    author = {Rajitha De Silva and Grzegorz Cielniak and Gang Wang and Junfeng Gao},
    publisher = {Wiley Periodicals, Inc.},
    year = {2023},
    doi = {10.1002/rob.22238},
    journal = {Journal of Field Robotics},
    url = {https://eprints.lincoln.ac.uk/id/eprint/55690/},
    abstract = {Autonomous navigation in agricultural environments is challenged by varying field conditions that arise in arable fields. State-of-the-art solutions for autonomous navigation in such environments require expensive hardware such as RTK-GNSS. This paper presents a robust crop row detection algorithm that withstands such field variations using inexpensive cameras. Existing datasets for crop row detection do not represent all the possible field variations. A dataset of sugar beet images was created representing 11 field variations comprising multiple growth stages, light levels, varying weed densities, curved crop rows and discontinuous crop rows. The proposed pipeline segments the crop rows using a deep learning-based method and employs the predicted segmentation mask for extraction of the central crop row using a novel central crop row selection algorithm. The novel crop row detection algorithm was tested for crop row detection performance and the capability of visual servoing along a crop row. The visual servoing-based navigation was tested on a realistic simulation scenario with real ground and plant textures. Our algorithm demonstrated robust vision-based crop row detection in challenging field conditions, outperforming the baseline.}
    }

  • S. Vayakkattil, G. Cielniak, and M. Calisti, “Plant phenotyping using DLT method: towards retrieving the delicate features in a dynamic environment,” Lecture notes in computer science, vol. 14136, p. 3–14, 2023. doi:10.1007/978-3-031-43360-3_1
    [BibTeX] [Abstract] [Download PDF]

    Passive phenotyping methodologies use various techniques for calibration, which include a variety of sensory information like vision. Contrary to the state-of-the-art, this paper presents the use of a Direct Linear Transformation (DLT) algorithm to find the shape and position of fine and delicate features in plants. The proposed method not only finds a solution to the motion problem but also provides additional information related to the displacement of the traits of the subject plant. This study uses DLTdv digitalisation toolbox to implement the DLT modelling tool which reduces the complications in data processing. The calibration feature of the toolbox also enables the prior assumption of calibrated space in using DLT.

    @article{lincoln56198,
    volume = {14136},
    month = {September},
    author = {Srikishan Vayakkattil and Grzegorz Cielniak and Marcello Calisti},
    booktitle = {Towards Autonomous Robotic Systems},
    title = {Plant Phenotyping Using DLT Method: Towards Retrieving the Delicate Features in a Dynamic Environment},
    publisher = {Springer},
    year = {2023},
    journal = {Lecture Notes in Computer Science},
    doi = {10.1007/978-3-031-43360-3\_1},
    pages = {3--14},
    url = {https://eprints.lincoln.ac.uk/id/eprint/56198/},
    abstract = {Passive phenotyping methodologies use various techniques for calibration, which include a variety of sensory information like vision. Contrary to the state-of-the-art, this paper presents the use of a Direct Linear Transformation (DLT) algorithm to find the shape and position of fine and delicate features in plants. The proposed method not only finds a solution to the motion problem but also provides additional information related to the displacement of the traits of the subject plant. This study uses DLTdv digitalisation toolbox to implement the DLT modelling tool which reduces the complications in data processing. The calibration feature of the toolbox also enables the prior assumption of calibrated space in using DLT.}
    }

  • F. Camara, C. Waltham, G. Churchill, and C. Fox, “OpenPodcar: an open source vehicle for self-driving car research,” Journal of open hardware, vol. 7, iss. 1, p. 1–17, 2023. doi:10.5334/joh.46
    [BibTeX] [Abstract] [Download PDF]

    OpenPodcar is a low-cost, open source hardware and software, autonomous vehicle research platform based on an off-the-shelf, hard-canopy, mobility scooter donor vehicle. Hardware and software build instructions are provided to convert the donor vehicle into a low-cost and fully autonomous platform. The open platform consists of (a) hardware components: CAD designs, bill of materials, and build instructions; (b) Arduino, ROS and Gazebo control and simulation software files which provide standard ROS interfaces and simulation of the vehicle; and (c) higher-level ROS software implementations and configurations of standard robot autonomous planning and control, including the move_base interface with Timed-Elastic-Band planner which enacts commands to drive the vehicle from a current to a desired pose around obstacles. The vehicle is large enough to transport a human passenger or similar load at speeds up to 15 km/h, for example for use as a last-mile autonomous taxi service or to transport delivery containers similarly around a city center. It is small and safe enough to be parked in a standard research lab and be used for realistic human-vehicle interaction studies. System build cost from new components is around USD 7,000 in total in 2022. OpenPodcar thus provides a good balance between real-world utility, safety, cost and research convenience.

    @article{lincoln56199,
    volume = {7},
    number = {1},
    month = {September},
    author = {Fanta Camara and Chris Waltham and Grey Churchill and Charles Fox},
    title = {OpenPodcar: An Open Source Vehicle for Self-Driving Car Research},
    publisher = {Ubiquity Press},
    year = {2023},
    journal = {Journal of Open Hardware},
    doi = {10.5334/joh.46},
    pages = {1--17},
    url = {https://eprints.lincoln.ac.uk/id/eprint/56199/},
    abstract = {OpenPodcar is a low-cost, open source hardware and software, autonomous vehicle research platform based on an off-the-shelf, hard-canopy, mobility scooter donor vehicle. Hardware and software build instructions are provided to convert the donor vehicle into a low-cost and fully autonomous platform. The open platform consists of (a) hardware components: CAD designs, bill of materials, and build instructions; (b) Arduino, ROS and Gazebo control and simulation software files which provide standard ROS interfaces and simulation of the vehicle; and (c) higher-level ROS software implementations and configurations of standard robot autonomous planning and control, including the move\_base interface with Timed-Elastic-Band planner which enacts commands to drive the vehicle from a current to a desired pose around obstacles. The vehicle is large enough to transport a human passenger or similar load at speeds up to 15 km/h, for example for use as a last-mile autonomous taxi service or to transport delivery containers similarly around a city center. It is small and safe enough to be parked in a standard research lab and be used for realistic human-vehicle interaction studies. System build cost from new components is around USD 7,000 in total in 2022. OpenPodcar thus provides a good balance between real-world utility, safety, cost and research convenience.}
    }

  • A. Perrett, H. Pollard, C. Barnes, M. Schofield, L. Qie, P. Bosilj, and J. Brown, “DeepVerge: classification of roadside verge biodiversity and conservation potential,” Computers, environment and urban systems, 2023.
    [BibTeX] [Abstract] [Download PDF]

    Grasslands are increasingly modified by anthropogenic activities and species-rich grasslands have become rare habitats in the UK. However, grassy roadside verges often contain conservation-priority plant species and should be targeted for protection. Identification of verges with high conservation potential represents a considerable challenge for ecologists, driving the development of automated methods to make up for the shortfall of relevant expertise nationally. Using survey data from 3,900 km of roadside verges alongside publicly available street-view imagery, we present DeepVerge: a deep learning-based method that can automatically survey sections of roadside verge by detecting the presence of positive indicator species. Using images and ground truth survey data from the rural UK county of Lincolnshire, DeepVerge achieved a mean accuracy of 88\% and a mean F1 score of 0.82. Such a method may be used by local authorities to identify new local wildlife sites, and aid management and environmental planning in line with legal and government policy obligations, saving thousands of hours of skilled labour.

    @article{lincoln54285,
    title = {DeepVerge: Classification of Roadside Verge Biodiversity and Conservation Potential},
    author = {Andrew Perrett and Harry Pollard and Charlie Barnes and Mark Schofield and Lan Qie and Petra Bosilj and James Brown},
    publisher = {Elsevier},
    year = {2023},
    journal = {Computers, Environment and Urban Systems},
    url = {https://eprints.lincoln.ac.uk/id/eprint/54285/},
    abstract = {Grasslands are increasingly modified by anthropogenic activities and species-rich grasslands have become rare habitats in the UK. However, grassy roadside verges often contain conservation-priority plant species and should be targeted for protection. Identification of verges with high conservation potential represents a considerable challenge for ecologists, driving the development of automated methods to make up for the shortfall of relevant expertise nationally. Using survey data from 3,900 km of roadside verges alongside publicly available street-view imagery, we present DeepVerge: a deep learning-based method that can automatically survey sections of roadside verge by detecting the presence of positive indicator species. Using images and ground truth survey data from the rural UK county of Lincolnshire, DeepVerge achieved a mean accuracy of 88\% and a mean F1 score of 0.82. Such a method may be used by local authorities to identify new local wildlife sites, and aid management and environmental planning in line with legal and government policy obligations, saving thousands of hours of skilled labour.}
    }

  • V. R. Sugathakumary, B. Debnath, S. Mghames, W. Mandil, S. Parsa, S. Parsons, and A. G. Esfahani, “Selective harvesting robots: a review,” Journal of field robotics, 2023.
    [BibTeX] [Abstract] [Download PDF]

    Climate change and population growth have created significant challenges for global food production, and ensuring food security requires a resilient food-production system. One of the most labour-intensive tasks in agriculture and food production is selective harvesting, which is vulnerable to risks such as a shortage of adequate labour force. To address this challenge, there is a growing need for robots that can deliver precise and efficient harvesting operations. However, developing robots for selective harvesting presents several technological challenges and raises a range of intriguing scientific questions. This paper provides an overview of the available robotic technologies for the selective harvesting of high-value crops and discusses the latest advancements and challenges in the relevant technology domains, including robotic hardware, robot perception, robot planning, and robot control. Additionally, this paper presents several open research questions that can serve as a research focus for further development in this field.

    @article{lincoln55429,
    title = {Selective Harvesting Robots: A Review},
    author = {Vishnu Rajendran Sugathakumary and Bappaditya Debnath and Sariah Mghames and Willow Mandil and Soran Parsa and Simon Parsons and Amir Ghalamzan Esfahani},
    publisher = {Wiley},
    year = {2023},
    journal = {Journal of Field Robotics},
    url = {https://eprints.lincoln.ac.uk/id/eprint/55429/},
    abstract = {Climate change and population growth have created significant challenges for global food production, and ensuring food security requires a resilient food-production system. One of the most labour-intensive tasks in agriculture and food production is selective harvesting, which is vulnerable to risks such as a shortage of adequate labour force. To address this challenge, there is a growing need for robots that can deliver precise and efficient harvesting operations. However, developing robots for selective harvesting presents several technological challenges and raises a range of intriguing scientific questions. This paper provides an overview of the available robotic technologies for the selective harvesting of high-value crops and discusses the latest advancements and challenges in the relevant technology domains, including robotic hardware, robot perception, robot planning, and robot control. Additionally, this paper presents several open research questions that can serve as a research focus for further development in this field.}
    }

  • G. Kulathunga and A. Klimchik, “Survey on motion planning for multirotor aerial vehicles in plan-based control paradigm,” Remote sensing, vol. 15, iss. 21, p. 5237, 2023. doi:10.3390/rs15215237
    [BibTeX] [Abstract] [Download PDF]

    In general, optimal motion planning can be performed both locally and globally. In such planning, the choice in favour of either a local or a global planning technique mainly depends on whether the environmental conditions are dynamic or static. Hence, the most adequate choice is to use local planning, or local planning alongside global planning. When designing optimal motion planning, both local and global, the key metrics to bear in mind are execution time, asymptotic optimality, and quick reaction to dynamic obstacles. Such planning approaches can address the aforesaid target metrics more efficiently compared to other approaches, such as path planning followed by smoothing. Thus, the foremost objective of this study is to analyse the related literature in order to understand how the formulation of the motion planning problem, especially trajectory planning, impacts the listed metrics when applied to generating optimal trajectories in real time for Multirotor Aerial Vehicles. As a result of the research, the trajectory planning problem was broken down into a set of subproblems, and the lists of methods for addressing each of the problems were identified and described in detail. Subsequently, the most prominent results from 2010 to 2022 were summarized and presented in the form of a timeline.

    @article{lincoln57199,
    volume = {15},
    number = {21},
    month = {November},
    author = {Geesara Kulathunga and Alexandr Klimchik},
    title = {Survey on Motion Planning for Multirotor Aerial Vehicles in Plan-based Control Paradigm},
    publisher = {MDPI},
    year = {2023},
    journal = {Remote Sensing},
    doi = {10.3390/rs15215237},
    pages = {5237},
    url = {https://eprints.lincoln.ac.uk/id/eprint/57199/},
    abstract = {In general, optimal motion planning can be performed both locally and globally. In such planning, the choice in favour of either a local or a global planning technique mainly depends on whether the environmental conditions are dynamic or static. Hence, the most adequate choice is to use local planning, or local planning alongside global planning. When designing optimal motion planning, both local and global, the key metrics to bear in mind are execution time, asymptotic optimality, and quick reaction to dynamic obstacles. Such planning approaches can address the aforesaid target metrics more efficiently compared to other approaches, such as path planning followed by smoothing. Thus, the foremost objective of this study is to analyse the related literature in order to understand how the formulation of the motion planning problem, especially trajectory planning, impacts the listed metrics when applied to generating optimal trajectories in real time for Multirotor Aerial Vehicles. As a result of the research, the trajectory planning problem was broken down into a set of subproblems, and the lists of methods for addressing each of the problems were identified and described in detail. Subsequently, the most prominent results from 2010 to 2022 were summarized and presented in the form of a timeline.}
    }

  • R. Polvara, S. M. Mellado, I. Hroob, A. Papadimitriou, K. Tsiolis, D. Giakoumis, S. Likothanassis, D. Tzovaras, G. Cielniak, and M. Hanheide, “Bacchus long-term (BLT) data set: acquisition of the agricultural multimodal BLT data set with automated robot deployment,” Journal of field robotics, 2023. doi:10.1002/rob.22228
    [BibTeX] [Abstract] [Download PDF]

    Achieving a robust long-term deployment with mobile robots in the agriculture domain is both an in-demand and a challenging task. The possibility of having autonomous platforms in the field performing repetitive tasks, such as monitoring or harvesting crops, collides with the difficulties posed by the always-changing appearance of the environment due to seasonality. With this scope in mind, we report an ongoing effort in the long-term deployment of an autonomous mobile robot in a vineyard, with the main objective of acquiring what we call the Bacchus Long-Term (BLT) Dataset. This dataset consists of multiple sessions recorded in the same area of a vineyard but at different points in time, covering a total of 7 months to capture the whole canopy growth from March until September. The multimodal dataset is acquired with the main focus on pushing the development and evaluation of different mapping and localisation algorithms for long-term autonomous robot operation in the agricultural domain. Hence, besides the dataset, we also present an initial study in long-term localisation using four different sessions belonging to four different months with different plant stages. We identify that state-of-the-art localisation methods can only partially cope with the amount of change in the environment, making the proposed dataset suitable for establishing a benchmark on which the robotics community can test its methods. On our side, we anticipate two solutions aimed at extracting stable temporal features for improving long-term 4D localisation results. The BLT dataset is available at https://lncn.ac/lcas-blt.

    @article{lincoln56037,
    title = {Bacchus Long-Term (BLT) data set: Acquisition of the agricultural multimodal BLT data set with automated robot deployment},
    author = {Riccardo Polvara and Sergio Molina Mellado and Ibrahim Hroob and Alexios Papadimitriou and Konstantinos Tsiolis and Dimitrios Giakoumis and Spiridon Likothanassis and Dimitrios Tzovaras and Grzegorz Cielniak and Marc Hanheide},
    publisher = {Wiley},
    year = {2023},
    doi = {10.1002/rob.22228},
    journal = {Journal of Field Robotics},
    url = {https://eprints.lincoln.ac.uk/id/eprint/56037/},
    abstract = {Achieving a robust long-term deployment with mobile robots in the agriculture domain is both an in-demand and a challenging task. The possibility of having autonomous platforms in the field performing repetitive tasks, such as monitoring or harvesting crops, collides with the difficulties posed by the always-changing appearance of the environment due to seasonality. With this scope in mind, we report an ongoing effort in the long-term deployment of an autonomous mobile robot in a vineyard, with the main objective of acquiring what we call the Bacchus Long-Term (BLT) Dataset. This dataset consists of multiple sessions recorded in the same area of a vineyard but at different points in time, covering a total of 7 months to capture the whole canopy growth from March until September. The multimodal dataset is acquired with the main focus on pushing the development and evaluation of different mapping and localisation algorithms for long-term autonomous robot operation in the agricultural domain. Hence, besides the dataset, we also present an initial study in long-term localisation using four different sessions belonging to four different months with different plant stages. We identify that state-of-the-art localisation methods can only partially cope with the amount of change in the environment, making the proposed dataset suitable for establishing a benchmark on which the robotics community can test its methods. On our side, we anticipate two solutions aimed at extracting stable temporal features for improving long-term 4D localisation results. The BLT dataset is available at https://lncn.ac/lcas-blt.}
    }

  • T. P. Kucner, M. Magnusson, S. Mghames, L. Palmieri, F. Verdoja, C. S. Swaminathan, T. Krajník, E. Schaffernicht, N. Bellotto, M. Hanheide, and A. J. Lilienthal, “Survey of maps of dynamics for mobile robots,” The international journal of robotics research (ijrr), 2023. doi:10.1177/02783649231190428
    [BibTeX] [Abstract] [Download PDF]

    Robotic mapping provides spatial information for autonomous agents. Depending on the tasks they seek to enable, the maps created range from simple 2D representations of the environment geometry to complex, multilayered semantic maps. This survey article is about maps of dynamics (MoDs), which store semantic information about typical motion patterns in a given environment. Some MoDs use trajectories as input, and some can be built from short, disconnected observations of motion. Robots can use MoDs, for example, for global motion planning, improved localization, or human motion prediction. Accounting for the increasing importance of maps of dynamics, we present a comprehensive survey that organizes the knowledge accumulated in the field and identifies promising directions for future work. Specifically, we introduce field-specific vocabulary, summarize existing work according to a novel taxonomy, and describe possible applications and open research problems. We conclude that the field is mature enough, and we expect that maps of dynamics will be increasingly used to improve robot performance in real-world use cases. At the same time, the field is still in a phase of rapid development where novel contributions could significantly impact this research area.

    @article{lincoln55673,
    title = {Survey of maps of dynamics for mobile robots},
    author = {Tomasz Piotr Kucner and Martin Magnusson and Sariah Mghames and Luigi Palmieri and Francesco Verdoja and Chittaranjan Srinivas Swaminathan and Tom{\'a}{\v s} Krajn{\'i}k and Erik Schaffernicht and Nicola Bellotto and Marc Hanheide and Achim J Lilienthal},
    publisher = {Sage Publications},
    year = {2023},
    doi = {10.1177/02783649231190428},
    journal = {The International Journal of Robotics Research (IJRR)},
    url = {https://eprints.lincoln.ac.uk/id/eprint/55673/},
    abstract = {Robotic mapping provides spatial information for autonomous agents. Depending on the tasks they seek to enable, the maps created range from simple 2D representations of the environment geometry to complex, multilayered semantic maps. This survey article is about maps of dynamics (MoDs), which store semantic information about typical motion patterns in a given environment. Some MoDs use trajectories as input, and some can be built from short, disconnected observations of motion. Robots can use MoDs, for example, for global motion planning, improved localization, or human motion prediction. Accounting for the increasing importance of maps of dynamics, we present a comprehensive survey that organizes the knowledge accumulated in the field and identifies promising directions for future work. Specifically, we introduce field-specific vocabulary, summarize existing work according to a novel taxonomy, and describe possible applications and open research problems. We conclude that the field is mature enough, and we expect that maps of dynamics will be increasingly used to improve robot performance in real-world use cases. At the same time, the field is still in a phase of rapid development where novel contributions could significantly impact this research area.}
    }

  • S. Parsa, B. Debnath, M. A. Khan, and A. G. Esfahani, “Modular autonomous strawberry-picking robotic system,” Journal of field robotics, 2023. doi:10.1002/rob.22229
    [BibTeX] [Abstract] [Download PDF]

    Challenges in strawberry picking have made selective harvesting robotic technology very much in demand. However, the selective harvesting of strawberries is a complicated robotic task that raises several scientific research questions. Most available solutions only deal with a specific picking scenario, for example, picking only a single variety of fruit in isolation. Nonetheless, most economically viable (e.g., high-yielding and/or disease-resistant) varieties of strawberry are grown in dense clusters. The current perception technology in such use cases is inefficient. In this work, we developed a novel system capable of harvesting strawberries with several unique features. These features allow the system to deal with very complex picking scenarios, for example, dense clusters. Our concept of a modular system makes our system reconfigurable to adapt to different picking scenarios. We designed, manufactured, and tested a patented picking head with 2.5 degrees of freedom (two independent mechanisms and one dependent cutting system) capable of removing possible occlusions and harvesting the targeted strawberry without any contact with the fruit flesh to avoid damage and bruising. In addition, we developed a novel perception system to localize strawberries and detect their key points, picking points, and determine their ripeness. For this purpose, we introduced two new data sets. Finally, we tested the system in a commercial strawberry growing field and our research farm with three different strawberry varieties. The results show the effectiveness and reliability of the proposed system. The designed picking head was able to remove occlusions and harvest strawberries effectively. The perception system was able to detect and determine the ripeness of strawberries with 95\% accuracy. In total, the system was able to harvest 87\% of all detected strawberries with a success rate of 83\% for all pluckable fruits. We also discuss a series of open research questions in the discussion section.

    @article{lincoln56100,
    title = {Modular autonomous strawberry-picking robotic system},
    author = {Soran Parsa and Bappaditya Debnath and Muhammad Arshad Khan and Amir Ghalamzan Esfahani},
    publisher = {Wiley},
    year = {2023},
    doi = {10.1002/rob.22229},
    journal = {Journal of Field Robotics},
    url = {https://eprints.lincoln.ac.uk/id/eprint/56100/},
    abstract = {Challenges in strawberry picking have made selective harvesting robotic technology very much in demand. However, the selective harvesting of strawberries is a complicated robotic task that raises several scientific research questions. Most available solutions only deal with a specific picking scenario, for example, picking only a single variety of fruit in isolation. Nonetheless, most economically viable (e.g., high-yielding and/or disease-resistant) varieties of strawberry are grown in dense clusters. The current perception technology in such use cases is inefficient. In this work, we developed a novel system capable of harvesting strawberries with several unique features. These features allow the system to deal with very complex picking scenarios, for example, dense clusters. Our concept of a modular system makes our system reconfigurable to adapt to different picking scenarios. We designed, manufactured, and tested a patented picking head with 2.5 degrees of freedom (two independent mechanisms and one dependent cutting system) capable of removing possible occlusions and harvesting the targeted strawberry without any contact with the fruit flesh to avoid damage and bruising. In addition, we developed a novel perception system to localize strawberries and detect their key points, picking points, and determine their ripeness. For this purpose, we introduced two new data sets. Finally, we tested the system in a commercial strawberry growing field and our research farm with three different strawberry varieties. The results show the effectiveness and reliability of the proposed system. The designed picking head was able to remove occlusions and harvest strawberries effectively. The perception system was able to detect and determine the ripeness of strawberries with 95\% accuracy. In total, the system was able to harvest 87\% of all detected strawberries with a success rate of 83\% for all pluckable fruits. We also discuss a series of open research questions in the discussion section.}
    }

  • Q. Mahesar, M. Hanheide, and S. Parsons, “Argument schemes and a dialogue system for explainable planning,” Acm transactions on intelligent systems and technology, 2023.
    [BibTeX] [Abstract] [Download PDF]

    Artificial Intelligence (AI) is being increasingly deployed in practical applications. However, there is a major concern whether AI systems will be trusted by humans. In order to establish trust in AI systems, there is a need for users to understand the reasoning behind their solutions. Therefore, systems should be able to explain and justify their output. Explainable AI Planning (XAIP) is a field that involves explaining the outputs, i.e., solution plans produced by AI planning systems to a user. The main goal of a plan explanation is to help humans understand reasoning behind the plans that are produced by the planners. In this paper, we propose an argument scheme-based approach to provide explanations in the domain of AI planning. We present novel argument schemes to create arguments that explain a plan and its key elements; and a set of critical questions that allow interaction between the arguments and enable the user to obtain further information regarding the key elements of the plan. Furthermore, we present a novel dialogue system using the argument schemes and critical questions for providing interactive dialectical explanations.

    @article{lincoln55636,
    title = {Argument Schemes and a Dialogue System for Explainable Planning},
    author = {Quratul-ain Mahesar and Marc Hanheide and Simon Parsons},
    publisher = {Association for Computing Machinery (ACM)},
    year = {2023},
    journal = {ACM Transactions on Intelligent Systems and Technology},
    url = {https://eprints.lincoln.ac.uk/id/eprint/55636/},
    abstract = {Artificial Intelligence (AI) is being increasingly deployed in practical applications. However, there is a major concern whether AI systems will be trusted by humans. In order to establish trust in AI systems, there is a need for users to understand the reasoning behind their solutions. Therefore, systems should be able to explain and justify their output. Explainable AI Planning (XAIP) is a field that involves explaining the outputs, i.e., solution plans produced by AI planning systems to a user. The main goal of a plan explanation is to help humans understand reasoning behind the plans that are produced by the planners. In this paper, we propose an argument scheme-based approach to provide explanations in the domain of AI planning. We present novel argument schemes to create arguments that explain a plan and its key elements; and a set of critical questions that allow interaction between the arguments and enable the user to obtain further information regarding the key elements of the plan. Furthermore, we present a novel dialogue system using the argument schemes and critical questions for providing interactive dialectical explanations.}
    }

  • Z. Maamar, N. Faci, M. Al-Khafajiy, and M. Dohan, “Thing artifact-based design of iot ecosystems,” Service oriented computing and applications, 2023.
    [BibTeX] [Abstract] [Download PDF]

    This paper sheds light on the complexity of designing Internet-of-Things (IoT) ecosystems where a high number of things reside and thus, must collaborate despite their reduced size, restricted connectivity, and constrained storage limitations. To address this complexity, a novel concept referred to as thing artifact is devised abstracting the roles that things play in an IoT ecosystem. The abstraction focuses on 3 cross-cutting aspects namely, functionality in term of what to perform, lifecycle in term of how to behave, and interaction flow in term of with whom to exchange. Building upon the concept of data artifact commonly used in data-driven business applications design, thing artifacts engage in relations with peers to coordinate their individual behaviors and hence, avoid conflicts that could result from the quality of exchanged data. Putting functionality, lifecycle, interaction flow, and relation together contributes to abstracting IoT ecosystems design. A system implementing a thing artifact-based IoT ecosystem along with some experiments are presented in the paper as well.

    @article{lincoln57197,
    title = {Thing Artifact-based Design of IoT Ecosystems},
    author = {Zakaria Maamar and Noura Faci and Mohammed Al-Khafajiy and Murtada Dohan},
    publisher = {Springer},
    year = {2023},
    journal = {Service Oriented Computing and Applications},
    url = {https://eprints.lincoln.ac.uk/id/eprint/57197/},
    abstract = {This paper sheds light on the complexity of designing Internet-of-Things (IoT) ecosystems where a high number of things reside and thus, must collaborate despite their reduced size, restricted connectivity, and constrained storage limitations. To address this complexity, a novel concept referred to as thing artifact is devised abstracting the roles that things play in an IoT ecosystem. The abstraction focuses on 3 cross-cutting aspects namely, functionality in term of what to perform, lifecycle in term of how to behave, and interaction flow in term of with whom to exchange. Building upon the concept of data artifact commonly used in data-driven business applications design, thing artifacts engage in relations with peers to coordinate their individual behaviors and hence, avoid conflicts that could result from the quality of exchanged data. Putting functionality, lifecycle, interaction flow, and relation together contributes to abstracting IoT ecosystems design. A system implementing a thing artifact-based IoT ecosystem along with some experiments are presented in the paper as well.}
    }

  • M. Darbyshire, A. Salazar-Gomez, J. Gao, E. Sklar, and S. Parsons, “Towards practical object detection for weed spraying in precision agriculture,” Frontiers in plant science, vol. 14, 2023. doi:10.3389/fpls.2023.1183277
    [BibTeX] [Abstract] [Download PDF]

    Weeds pose a persistent threat to farmers’ yields, but conventional methods for controlling weed populations, like herbicide spraying, pose a risk to surrounding ecosystems. Precision spraying aims to reduce harms to the surrounding environment by targeting only the weeds, rather than spraying the entire field with herbicide. Such an approach requires weeds to first be detected. With the advent of convolutional neural networks, there has been significant research trialling such technologies on datasets of weeds and crops. However, the evaluation of the performance of these approaches has often been limited to the standard machine learning metrics. This paper aims to assess the feasibility of precision spraying via a comprehensive evaluation of weed detection and spraying accuracy using two separate datasets, different image resolutions, and several state-of-the-art object detection algorithms. A simplified model of precision spraying is proposed to compare the performance of different detection algorithms while varying the precision of spray nozzles. The key performance indicators in precision spraying that this study focuses on are a high weed hit rate and a reduction in herbicide usage. This paper introduces two metrics, namely Weed Coverage Rate and area sprayed, to capture these aspects of the real-world performance of precision spraying and demonstrates their utility through experimental results. Using these metrics to calculate spraying performance, it was found that 93\% of weeds could be sprayed, by spraying just 30\% of the area using state of the art vision methods to identify weeds.

    @article{lincoln56887,
    volume = {14},
    month = {November},
    author = {Madeleine Darbyshire and Adrian Salazar-Gomez and Junfeng Gao and Elizabeth Sklar and Simon Parsons},
    title = {Towards practical object detection for weed spraying in precision agriculture},
    publisher = {Frontiers},
    journal = {Frontiers in Plant Science},
    doi = {10.3389/fpls.2023.1183277},
    year = {2023},
    url = {https://eprints.lincoln.ac.uk/id/eprint/56887/},
    abstract = {Weeds pose a persistent threat to farmers' yields, but conventional methods for controlling weed populations, like herbicide spraying, pose a risk to surrounding ecosystems. Precision spraying aims to reduce harms to the surrounding environment by targeting only the weeds, rather than spraying the entire field with herbicide. Such an approach requires weeds to first be detected. With the advent of convolutional neural networks, there has been significant research trialling such technologies on datasets of weeds and crops. However, the evaluation of the performance of these approaches has often been limited to the standard machine learning metrics. This paper aims to assess the feasibility of precision spraying via a comprehensive evaluation of weed detection and spraying accuracy using two separate datasets, different image resolutions, and several state-of-the-art object detection algorithms. A simplified model of precision spraying is proposed to compare the performance of different detection algorithms while varying the precision of spray nozzles. The key performance indicators in precision spraying that this study focuses on are a high weed hit rate and a reduction in herbicide usage. This paper introduces two metrics, namely Weed Coverage Rate and area sprayed, to capture these aspects of the real-world performance of precision spraying and demonstrates their utility through experimental results. Using these metrics to calculate spraying performance, it was found that 93\% of weeds could be sprayed, by spraying just 30\% of the area using state of the art vision methods to identify weeds.}
    }

  • J. Heselden and G. Das, “Heuristics and rescheduling in prioritised multi-robot path planning: a literature review,” Machines (special issue new trends in robotics, automation and mechatronics), vol. 11, iss. 11, 2023. doi:10.3390/machines11111033
    [BibTeX] [Abstract] [Download PDF]

    The benefits of multi-robot systems are substantial, bringing gains in efficiency, quality, and cost, and they are useful in a wide range of environments from warehouse automation, to agriculture and even extend in part to entertainment. In multi-robot system research, the main focus is on ensuring efficient coordination in the operation of the robots, both in task allocation and navigation. However, much of this research seldom strays from the theoretical bounds; there are many reasons for this, with the most prominent and impactful being resource limitations. This is especially true for research in areas such as multi-robot path planning (MRPP) and navigation coordination. This is a large issue in practice as many approaches are not designed with meaningful real-world implications in mind and are not scalable to large multi-robot systems. This survey aimed to look into the coordination and path-planning issues and challenges faced when working with multi-robot systems, especially those using a prioritised planning approach and identify key areas that are not well-explored and the scope of applying existing MRPP approaches to real-world settings.

    @article{lincoln57338,
    volume = {11},
    number = {11},
    month = {November},
    author = {James Heselden and Gautham Das},
    title = {Heuristics and Rescheduling in Prioritised Multi-Robot Path Planning: A Literature Review},
    publisher = {MDPI},
    year = {2023},
    journal = {Machines (Special Issue New Trends in Robotics, Automation and Mechatronics)},
    doi = {10.3390/machines11111033},
    url = {https://eprints.lincoln.ac.uk/id/eprint/57338/},
    abstract = {The benefits of multi-robot systems are substantial, bringing gains in efficiency, quality, and cost, and they are useful in a wide range of environments from warehouse automation, to agriculture and even extend in part to entertainment. In multi-robot system research, the main focus is on ensuring efficient coordination in the operation of the robots, both in task allocation and navigation. However, much of this research seldom strays from the theoretical bounds; there are many reasons for this, with the most prominent and impactful being resource limitations. This is especially true for research in areas such as multi-robot path planning (MRPP) and navigation coordination. This is a large issue in practice as many approaches are not designed with meaningful real-world implications in mind and are not scalable to large multi-robot systems. This survey aimed to look into the coordination and path-planning issues and challenges faced when working with multi-robot systems, especially those using a prioritised planning approach and identify key areas that are not well-explored and the scope of applying existing MRPP approaches to real-world settings.}
    }

  • D. Popov, A. Pashkevich, and A. Klimchik, “Adaptive technique for physical human-robot interaction handling using proprioceptive sensors,” Engineering applications of artificial intelligence, vol. 126, iss. D, 2023. doi:10.1016/j.engappai.2023.107141
    [BibTeX] [Abstract] [Download PDF]

    The work focuses on the development of an adaptive technique for the physical interaction handling between a human and a robot, as well as its experimental validation. The proposed technique is based on the deep residual neural network and dedicated finite state machine, where the states are the robot behavior modes and transitions are the switchings between the states that depend on the interaction parameters and characteristics. It ensures the human operator safety and improves the human-robot collaboration performance by implementing various scenarios. In the scope of this technique, the parameters of human-robot interaction are used to select an appropriate robot reaction strategy using data from internal robot sensors only, i.e. proprioceptive sensors. These parameters define the interaction force vector and its application point on the robot surface, which allow to classify the interaction within the set of predefined categories. This classification distinguishes interactions applied at the tool or intermediate link (Tool/Link), having soft or hard nature (Soft/Hard), as well as having different intention (Intl/Accd) or duration (Short/Long). Based on identified category and the current robot state, the algorithm chooses an appropriate robot reaction. To confirm the efficiency of the developed technique, an experimental study was conducted, which involved the collaboration between the real industrial manipulator KUKA LBR iiwa and the human operator.

    @article{lincoln56590,
    volume = {126},
    number = {D},
    month = {November},
    author = {Dmitry Popov and Anatol Pashkevich and Alexandr Klimchik},
    title = {Adaptive technique for physical human-robot interaction handling using proprioceptive sensors},
    publisher = {Elsevier},
    year = {2023},
    journal = {Engineering Applications of Artificial Intelligence},
    doi = {10.1016/j.engappai.2023.107141},
    url = {https://eprints.lincoln.ac.uk/id/eprint/56590/},
    abstract = {The work focuses on the development of an adaptive technique for the physical interaction handling between a human and a robot, as well as its experimental validation. The proposed technique is based on the deep residual neural network and dedicated finite state machine, where the states are the robot behavior modes and transitions are the switchings between the states that depend on the interaction parameters and characteristics. It ensures the human operator safety and improves the human-robot collaboration performance by implementing various scenarios. In the scope of this technique, the parameters of human-robot interaction are used to select an appropriate robot reaction strategy using data from internal robot sensors only, i.e. proprioceptive sensors. These parameters define the interaction force vector and its application point on the robot surface, which allow to classify the interaction within the set of predefined categories. This classification distinguishes interactions applied at the tool or intermediate link (Tool/Link), having soft or hard nature (Soft/Hard), as well as having different intention (Intl/Accd) or duration (Short/Long). Based on identified category and the current robot state, the algorithm chooses an appropriate robot reaction. To confirm the efficiency of the developed technique, an experimental study was conducted, which involved the collaboration between the real industrial manipulator KUKA LBR iiwa and the human operator.}
    }

  • A. Damien and A. Klimchik, “Passively adapting external force compensation system for serial manipulators,” in Advances in mechanism and machine science. iftomm wc 2023. mechanisms and machine science, M. Okada, Ed., Cham: Springer, 2023, vol. 148, p. 560–569. doi:10.1007/978-3-031-45770-8_56
    [BibTeX] [Abstract] [Download PDF]

    This paper proposes a design approach for a spring-based compensator that passively adapts to external loadings. The proposed compensator is based on a combination of spring-lever mechanisms that are adjustable to the robot's configuration. Springs in this arrangement are mounted on sliding pivots that are connected to springs that allow passive adjustment of the value of the counter-torque. While previous research is either concerned with gravity compensation or with variable stiffness actuators, the proposed approach deals with the variation of external loadings. The proposed force compensator can be used in robotics applications where the robot experiences high external loads during operation. The model was tested in simulation for a planar 2-DoF manipulator and a planar redundant 3-DoF manipulator. The presented results show 58.5\% torque reduction for 2-DoF and 55.2\% reduction for 3-DoF. Robot redundancy can enhance counter-balancing since the compensation level is configuration-dependent.

    @incollection{lincoln57367,
    volume = {148},
    month = {November},
    author = {Albert Damien and Alexandr Klimchik},
    series = {Mechanisms and Machine Science},
    booktitle = {Advances in Mechanism and Machine Science. IFToMM WC 2023. Mechanisms and Machine Science},
    editor = {M. Okada},
    title = {Passively Adapting External Force Compensation System for Serial Manipulators},
    address = {Cham},
    publisher = {Springer},
    year = {2023},
    doi = {10.1007/978-3-031-45770-8\_56},
    pages = {560--569},
    url = {https://eprints.lincoln.ac.uk/id/eprint/57367/},
    abstract = {This paper proposes a design approach for a spring-based compensator that passively adapts to external loadings. The proposed compensator is based on a combination of spring-lever mechanisms that are adjustable to the robot's configuration. Springs in this arrangement are mounted on sliding pivots that are connected to springs that allow passive adjustment of the value of the counter-torque. While previous research is either concerned with gravity compensation or with variable stiffness actuators, the proposed approach deals with the variation of external loadings. The proposed force compensator can be used in robotics applications where the robot experiences high external loads during operation. The model was tested in simulation for a planar 2-DoF manipulator and a planar redundant 3-DoF manipulator. The presented results show 58.5\% torque reduction for 2-DoF and 55.2\% reduction for 3-DoF. Robot redundancy can enhance counter-balancing since the compensation level is configuration-dependent.}
    }

  • K. Seemakurthy, C. Fox, E. Aptoula, and P. Bosilj, “Domain generalised faster r-cnn,” in The 37th aaai conference on artificial intelligence, 2023.
    [BibTeX] [Abstract] [Download PDF]

    Domain generalisation (i.e. out-of-distribution generalisation) is an open problem in machine learning, where the goal is to train a model via one or more source domains, that will generalise well to unknown target domains. While the topic is attracting increasing interest, it has not been studied in detail in the context of object detection. The established approaches all operate under the covariate shift assumption, where the conditional distributions are assumed to be approximately equal across source domains. This is the first paper to address domain generalisation in the context of object detection, with a rigorous mathematical analysis of domain shift, without the covariate shift assumption. We focus on improving the generalisation ability of object detection by proposing new regularisation terms to address the domain shift that arises due to both classification and bounding box regression. Also, we include an additional consistency regularisation term to align the local and global level predictions. The proposed approach is implemented as a Domain Generalised Faster R-CNN and evaluated using four object detection datasets which provide domain metadata (GWHD, Cityscapes, BDD100K, Sim10K) where it exhibits a consistent performance improvement over the baselines. All the codes for replicating the results in this paper can be found at https://github.com/karthikiitm87/domain-generalisation.git

    @inproceedings{lincoln53771,
    booktitle = {The 37th AAAI conference on Artificial Intelligence},
    month = {March},
    title = {Domain Generalised Faster R-CNN},
    author = {Karthik Seemakurthy and Charles Fox and Erchan Aptoula and Petra Bosilj},
    publisher = {Association for Advancement of Artificial Intelligence},
    year = {2023},
    url = {https://eprints.lincoln.ac.uk/id/eprint/53771/},
    abstract = {Domain generalisation (i.e. out-of-distribution generalisation) is an open problem in machine learning, where the goal is to train a model via one or more source domains, that will generalise well to unknown target domains. While the topic is attracting increasing interest, it has not been studied in detail in the context of object detection. The established approaches all operate under the covariate shift assumption, where the conditional distributions are assumed to be approximately equal across source domains. This is the first paper to address domain generalisation in the context of object detection, with a rigorous mathematical analysis of domain shift, without the covariate shift assumption. We focus on improving the generalisation ability of object detection by proposing new regularisation terms to address the domain shift that arises due to both classification and bounding box regression. Also, we include an additional consistency regularisation term to align the local and global level predictions. The proposed approach is implemented as a Domain Generalised Faster R-CNN and evaluated using four object detection datasets which provide domain metadata (GWHD, Cityscapes, BDD100K, Sim10K) where it exhibits a consistent performance improvement over the baselines. All the codes for replicating the results in this paper can be found at https://github.com/karthikiitm87/domain-generalisation.git}
    }

  • Z. Yan, L. Sun, T. Krajnik, T. Duckett, and N. Bellotto, “Towards long-term autonomy: a perspective from robot learning,” in Aaai bridge program “ai and robotics”, 2023.
    [BibTeX] [Abstract] [Download PDF]

    In the future, service robots are expected to be able to operate autonomously for long periods of time without human intervention. Many work striving for this goal have been emerging with the development of robotics, both hardware and software. Today we believe that an important underpinning of long-term robot autonomy is the ability of robots to learn on site and on-the-fly, especially when they are deployed in changing environments or need to traverse different environments. In this paper, we examine the problem of long-term autonomy from the perspective of robot learning, especially in an online way, and discuss in tandem its premise “data” and the subsequent “deployment”.

    @inproceedings{lincoln53115,
    booktitle = {AAAI Bridge Program ``AI and Robotics''},
    month = {February},
    title = {Towards Long-term Autonomy: A Perspective from Robot Learning},
    author = {Zhi Yan and Li Sun and Tomas Krajnik and Tom Duckett and Nicola Bellotto},
    year = {2023},
    url = {https://eprints.lincoln.ac.uk/id/eprint/53115/},
    abstract = {In the future, service robots are expected to be able to operate autonomously for long periods of time without human intervention. Many work striving for this goal have been emerging with the development of robotics, both hardware and software. Today we believe that an important underpinning of long-term robot autonomy is the ability of robots to learn on site and on-the-fly, especially when they are deployed in changing environments or need to traverse different environments. In this paper, we examine the problem of long-term autonomy from the perspective of robot learning, especially in an online way, and discuss in tandem its premise "data" and the subsequent "deployment".}
    }

  • M. Constantinou, R. Polvara, and E. Makridis, “The technologisation of thematic analysis: a case study into automatising qualitative research,” in 17th international technology, education and development conference, 2023, p. 1092–1098. doi:10.21125/inted.2023.0323
    [BibTeX] [Abstract] [Download PDF]

    Thematic analysis is the most commonly used form of qualitative analysis used extensively in educational sciences. While the process is straightforward in the sense that a hermeneutic analysis is conducted so as to detect patterns and assign themes emerging from the data acquired, replicability can be challenging. As a result, there is significant debate about what constitutes reliability and rigour in relation to qualitative coding. Traditional thematic analysis in educational sciences requires the development of a codebook and the recruitment of a research team for intercoder reviewing and code testing. Such a process is often lengthy and infeasible when the number of texts to be analysed increases exponentially. To overcome these limitations, in this work, we use an unsupervised text analysis technique called the Latent Dirichlet Allocation (LDA) to identify distinct abstract topics which are then clustered into potential themes. Our results show that thematic analysis in the field of educational sciences using the LDA text analysis technique has prospects of demonstrating rigour and higher thematic coding reliability and validity while offering a valid intra-coder complementary support to the researcher.

    @inproceedings{lincoln54118,
    month = {March},
    author = {Marina Constantinou and Riccardo Polvara and Evagoras Makridis},
    booktitle = {17th International Technology, Education and Development Conference},
    title = {The technologisation of thematic analysis: a case study into automatising qualitative research},
    publisher = {IATED},
    year = {2023},
    doi = {10.21125/inted.2023.0323},
    pages = {1092--1098},
    url = {https://eprints.lincoln.ac.uk/id/eprint/54118/},
    abstract = {Thematic analysis is the most commonly used form of qualitative analysis used extensively in educational sciences. While the process is straightforward in the sense that a hermeneutic analysis is conducted so as to detect patterns and assign themes emerging from the data acquired, replicability can be challenging. As a result, there is significant debate about what constitutes reliability and rigour in relation to qualitative coding. Traditional thematic analysis in educational sciences requires the development of a codebook and the recruitment of a research team for intercoder reviewing and code testing. Such a process is often lengthy and infeasible when the number of texts to be analysed increases exponentially. To overcome these limitations, in this work, we use an unsupervised text analysis technique called the Latent Dirichlet Allocation (LDA) to identify distinct abstract topics which are then clustered into potential themes. Our results show that thematic analysis in the field of educational sciences using the LDA text analysis technique has prospects of demonstrating rigour and higher thematic coding reliability and validity while offering a valid intra-coder complementary support to the researcher.}
    }

  • F. Pasti and N. Bellotto, “Evaluation of computer vision-based person detection on low-cost embedded systems,” in 18th international conference on computer vision theory and applications (visapp), 2023.
    [BibTeX] [Abstract] [Download PDF]

    Person detection applications based on computer vision techniques often rely on complex Convolutional Neural Networks that require powerful hardware in order to achieve good runtime performance. The work of this paper has been developed with the aim of implementing a safety system, based on computer vision algorithms, able to detect people in working environments using an embedded device. Possible applications for such safety systems include remote site monitoring and autonomous mobile robots in warehouses and industrial premises. Similar studies already exist in the literature, but they mostly rely on systems like NVidia Jetson that, with a Cuda enabled GPU, are able to provide satisfactory results. This, however, comes with a significant downside as such devices are usually expensive and require significant power consumption. The current paper instead is going to consider various implementations of computer vision-based person detection on two power-efficient and inexpensive devices, namely Raspberry Pi 3 and 4. In order to do so, some solutions based on off-the-shelf algorithms are first explored by reporting experimental results based on relevant performance metrics. Then, the paper presents a newly-created custom architecture, called eYOLO, that tries to solve some limitations of the previous systems. The experimental evaluation demonstrates the good performance of the proposed approach and suggests ways for further improvement.

    @inproceedings{lincoln53114,
    booktitle = {18th International Conference on Computer Vision Theory and Applications (VISAPP)},
    month = {February},
    title = {Evaluation of Computer Vision-Based Person Detection on Low-Cost Embedded Systems},
    author = {Francesco Pasti and Nicola Bellotto},
    year = {2023},
    url = {https://eprints.lincoln.ac.uk/id/eprint/53114/},
    abstract = {Person detection applications based on computer vision techniques often rely on complex Convolutional Neural Networks that require powerful hardware in order to achieve good runtime performance. The work of this paper has been developed with the aim of implementing a safety system, based on computer vision algorithms, able to detect people in working environments using an embedded device. Possible applications for such safety systems include remote site monitoring and autonomous mobile robots in warehouses and industrial premises. Similar studies already exist in the literature, but they mostly rely on systems like NVidia Jetson that, with a Cuda enabled GPU, are able to provide satisfactory results. This, however, comes with a significant downside as such devices are usually expensive and require significant power consumption. The current paper instead is going to consider various implementations of computer vision-based person detection on two power-efficient and inexpensive devices, namely Raspberry Pi 3 and 4. In order to do so, some solutions based on off-the-shelf algorithms are first explored by reporting experimental results based on relevant performance metrics. Then, the paper presents a newly-created custom architecture, called eYOLO, that tries to solve some limitations of the previous systems. The experimental evaluation demonstrates the good performance of the proposed approach and suggests ways for further improvement.}
    }

  • L. Castri, S. Mghames, and N. Bellotto, “From continual learning to causal discovery in robotics,” in Aaai bridge program “continual causality”, 2023.
    [BibTeX] [Abstract] [Download PDF]

    Reconstructing accurate causal models of dynamic systems from time-series of sensor data is a key problem in many real-world scenarios. In this paper, we present an overview, based on our experience, of the practical challenges that causal analysis encounters when applied to autonomous robots and how Continual Learning (CL) could help to overcome them. We propose a possible way to leverage the CL paradigm to make causal discovery feasible for robotics applications where the computational resources are limited, while at the same time exploiting the robot as an active agent that helps to increase the quality of the reconstructed causal models.

    @inproceedings{lincoln53116,
    booktitle = {AAAI Bridge Program ``Continual Causality''},
    month = {January},
    title = {From Continual Learning to Causal Discovery in Robotics},
    author = {Luca Castri and Sariah Mghames and Nicola Bellotto},
    year = {2023},
    url = {https://eprints.lincoln.ac.uk/id/eprint/53116/},
    abstract = {Reconstructing accurate causal models of dynamic systems from time-series of sensor data is a key problem in many real-world scenarios. In this paper, we present an overview, based on our experience, of the practical challenges that causal analysis encounters when applied to autonomous robots and how Continual Learning~(CL) could help to overcome them. We propose a possible way to leverage the CL paradigm to make causal discovery feasible for robotics applications where the computational resources are limited, while at the same time exploiting the robot as an active agent that helps to increase the quality of the reconstructed causal models.}
    }

  • V. R. Sugathakumary, S. Parsons, and A. G. Esfahani, “Towards continuous acoustic tactile soft sensing,” in 21st international conference on advanced robotics, 2023.
    [BibTeX] [Abstract] [Download PDF]

    Acoustic Soft Tactile (AST) skin is a novel sensing technology that uses deformations of the acoustic channels beneath the sensing surface to predict static normal forces and their contact locations. AST skin functions by sensing the changes in the modulation of the acoustic waves travelling through the channels as they deform due to the forces acting on the skin surface. Our previous study tested different AST skin designs for three discrete sensing points and selected two designs that better predicted the forces and contact locations. This paper presents a study of the sensing capability of these two AST skin designs with continuous sensing points with a spatial resolution of 6 mm. Our findings indicate that the AST skin with a dual-channel geometry outperformed the single-channel type during calibration. The dual-channel design predicted more than 90\% of the forces within a {$\pm$} 3 N tolerance and was 84.2\% accurate in predicting contact locations with {$\pm$} 6 mm resolution. In addition, the dual-channel AST skin demonstrated superior performance in a real-time pushing experiment over an off-the-shelf soft tactile sensor. These results demonstrate the potential of using AST skin technology for real-time force sensing in various applications, such as human-robot interaction and medical diagnosis.

    @inproceedings{lincoln56882,
    booktitle = {21st International Conference on Advanced Robotics},
    title = {Towards Continuous Acoustic Tactile Soft Sensing},
    author = {Vishnu Rajendran Sugathakumary and Simon Parsons and Amir Ghalamzan Esfahani},
    publisher = {IEEE Press},
    year = {2023},
    url = {https://eprints.lincoln.ac.uk/id/eprint/56882/},
    abstract = {Acoustic Soft Tactile (AST) skin is a novel sensing technology that uses deformations of the acoustic channels beneath the sensing surface to predict static normal forces and their contact locations. AST skin functions by sensing the changes in the modulation of the acoustic waves travelling through the channels as they deform due to the forces acting on the skin surface. Our previous study tested different AST skin designs for three discrete sensing points and selected two designs that better predicted the forces and contact locations. This paper presents a study of the sensing capability of these two AST skin designs with continuous sensing points with a spatial resolution of 6 mm. Our findings indicate that the AST skin with a dual-channel geometry outperformed the single-channel type during calibration. The dual-channel design predicted more than 90\% of the forces within a {$\pm$} 3 N tolerance and was 84.2\% accurate in predicting contact locations with {$\pm$} 6 mm resolution. In addition, the dual-channel AST skin demonstrated superior performance in a real-time pushing experiment over an off-the-shelf soft tactile sensor. These results demonstrate the potential of using AST skin technology for real-time force sensing in various applications, such as human-robot interaction and medical diagnosis.}
    }

  • F. Atas, G. Cielniak, and L. Grimstad, “Benchmark of sampling-based optimizing planners for outdoor robot navigation,” in 17th international conference on intelligent autonomous systems, 2023, p. 231–243. doi:10.1007/978-3-031-22216-0_16
    [BibTeX] [Abstract] [Download PDF]

    This paper evaluates Sampling-Based Optimizing (SBO) planners from the Open Motion Planning Library (OMPL) in the context of mobile robot navigation in outdoor environments. Many SBO planners have been proposed, and determining performance differences among these planners for path planning problems can be time-consuming and ambiguous. The probabilistic nature of SBO planners can also complicate this procedure, as different results for the same planning problem can be obtained even in consecutive queries from the same planner. We compare all available SBO planners in OMPL with an automated planning problem generation method designed specifically for outdoor robot navigation scenarios. Several evaluation metrics are chosen, such as the length, smoothness, and success rate of the resulting path, and probability distributions for metrics are presented. With the experimental results obtained, clear recommendations on high-performing planners for mobile robot path planning problems are made, which will be useful to researchers and practitioners in mobile robot planning and navigation.

    @inproceedings{lincoln50521,
    month = {January},
    author = {Fetullah Atas and Grzegorz Cielniak and Lars Grimstad},
    note = {ISBN: 978-3-031-22216-0},
    booktitle = {17th International Conference on Intelligent Autonomous Systems},
    title = {Benchmark of Sampling-Based Optimizing Planners for Outdoor Robot Navigation},
    publisher = {Springer},
    year = {2023},
    doi = {10.1007/978-3-031-22216-0\_16},
    pages = {231--243},
    url = {https://eprints.lincoln.ac.uk/id/eprint/50521/},
    abstract = {This paper evaluates Sampling-Based Optimizing (SBO) planners from the Open Motion Planning Library (OMPL) in the context of mobile robot navigation in outdoor environments. Many SBO planners have been proposed, and determining performance differences among these planners for path planning problems can be time-consuming and ambiguous. The probabilistic nature of SBO planners can also complicate this procedure, as different results for the same planning problem can be obtained even in consecutive queries from the same planner. We compare all available SBO planners in OMPL with an automated planning problem generation method designed specifically for outdoor robot navigation scenarios. Several evaluation metrics are chosen, such as the length, smoothness, and success rate of the resulting path, and probability distributions for metrics are presented. With the experimental results obtained, clear recommendations on high-performing planners for mobile robot path planning problems are made, which will be useful to researchers and practitioners in mobile robot planning and navigation.}
    }

  • O. Hardy, K. Seemakurthy, and E. Sklar, “A comparative data set of annotated broccoli heads from a moving vehicle with accompanying depth mapping data,” in Cvppa 2023 workshop (8th workshop on computer vision in plant phenotyping and agriculture), 2023.
    [BibTeX] [Abstract] [Download PDF]

    The LAR Broccoli dataset is a collection of manually annotated video footage of broccoli heads from a UK-based organic farm late in the harvesting season. Our new data set provides a high number of annotated frames captured at 30 frames per second with relatively low-cost, commercially available cameras that utilise two different depth-sensing methods: the RealSense D435 and the Stereolabs ZED 2. The broccoli images have a variety of sizes, levels of occlusion and additional interesting complications such as weeds and previously harvested broccoli stems. We also provide an annotated RGB data set of the same crop recorded with the tractor running at 3 km/h, capturing blurring effects that detection algorithms will need to adapt to if autonomous broccoli-harvesting machines are to operate at these speeds.

    @inproceedings{lincoln56526,
    booktitle = {CVPPA 2023 Workshop (8th workshop on Computer Vision in Plant Phenotyping and Agriculture)},
    title = {A comparative data set of annotated Broccoli heads from a moving vehicle with accompanying depth mapping data},
    author = {Oliver Hardy and Karthik Seemakurthy and Elizabeth Sklar},
    publisher = {IEEE Xplore},
    year = {2023},
    url = {https://eprints.lincoln.ac.uk/id/eprint/56526/},
    abstract = {The LAR Broccoli dataset is a collection of manually annotated video footage of broccoli heads from a UK-based organic farm late in the harvesting season. Our new data set provides a high number of annotated frames captured at 30 frames per second with relatively low-cost, commercially available cameras that utilise two different depth-sensing methods: the RealSense D435 and the Stereolabs ZED 2. The broccoli images have a variety of sizes, levels of occlusion and additional interesting complications such as weeds and previously harvested broccoli stems. We also provide an annotated RGB data set of the same crop recorded with the tractor running at 3 km/h, capturing blurring effects that detection algorithms will need to adapt to if autonomous broccoli-harvesting machines are to operate at these speeds.}
    }

  • L. Castri, S. Mghames, and N. Bellotto, “Efficient causal discovery for robotics applications,” in Italian conference on robotics and intelligent machines (i-rim 3d), 2023.
    [BibTeX] [Abstract] [Download PDF]

    Using robots for automating tasks in environments shared with humans, such as warehouses, shopping centres, or hospitals, requires these robots to comprehend the fundamental physical interactions among nearby agents and objects. Specifically, creating models to represent cause-and-effect relationships among these elements can aid in predicting unforeseen human behaviours and anticipate the outcome of particular robot actions. To be suitable for robots, causal analysis must be both fast and accurate, meeting real-time demands and the limited computational resources typical in most robotics applications. In this paper, we present a practical demonstration of our approach for fast and accurate causal analysis, known as Filtered PCMCI (F-PCMCI), along with a real-world robotics application. The provided application illustrates how our F-PCMCI can accurately and promptly reconstruct the causal model of a human-robot interaction scenario, which can then be leveraged to enhance the quality of the interaction.

    @inproceedings{lincoln56810,
    booktitle = {Italian Conference on Robotics and Intelligent Machines (I-RIM 3D)},
    title = {Efficient Causal Discovery for Robotics Applications},
    author = {Luca Castri and Sariah Mghames and Nicola Bellotto},
    year = {2023},
    url = {https://eprints.lincoln.ac.uk/id/eprint/56810/},
    abstract = {Using robots for automating tasks in environments shared with humans, such as warehouses, shopping centres, or hospitals, requires these robots to comprehend the fundamental physical interactions among nearby agents and objects. Specifically, creating models to represent cause-and-effect relationships among these elements can aid in predicting unforeseen human behaviours and anticipate the outcome of particular robot actions. To be suitable for robots, causal analysis must be both fast and accurate, meeting real-time demands and the limited computational resources typical in most robotics applications. In this paper, we present a practical demonstration of our approach for fast and accurate causal analysis, known as Filtered PCMCI (F-PCMCI), along with a real-world robotics application. The provided application illustrates how our F-PCMCI can accurately and promptly reconstruct the causal model of a human-robot interaction scenario, which can then be leveraged to enhance the quality of the interaction.}
    }

  • S. Mghames, L. Castri, M. Hanheide, and N. Bellotto, “Qualitative prediction of multi-agent spatial interactions,” in 32nd ieee international conference on robot and human interactive communication, 2023.
    [BibTeX] [Abstract] [Download PDF]

    Deploying service robots in our daily life, whether in restaurants, warehouses or hospitals, calls for the need to reason on the interactions happening in dense and dynamic scenes. In this paper, we present and benchmark three new approaches to model and predict multi-agent interactions in dense scenes, including the use of an intuitive qualitative representation. The proposed solutions take into account static and dynamic context to predict individual interactions. They exploit an input- and a temporal-attention mechanism, and are tested on medium and long-term time horizons. The first two approaches integrate different relations from the so-called Qualitative Trajectory Calculus (QTC) within a state-of-the-art deep neural network to create a symbol-driven neural architecture for predicting spatial interactions. The third approach implements a purely data-driven network for motion prediction, the output of which is post-processed to predict QTC spatial interactions. Experimental results on a popular robot dataset of challenging crowded scenarios show that the purely data-driven prediction approach generally outperforms the other two. The three approaches were further evaluated on different but related human scenarios to assess their generalisation capability.

    @inproceedings{lincoln55466,
    booktitle = {32nd IEEE International Conference on Robot and Human Interactive Communication},
    title = {Qualitative Prediction of Multi-Agent Spatial Interactions},
    author = {Sariah Mghames and Luca Castri and Marc Hanheide and Nicola Bellotto},
    publisher = {IEEE},
    year = {2023},
    url = {https://eprints.lincoln.ac.uk/id/eprint/55466/},
    abstract = {Deploying service robots in our daily life, whether in restaurants, warehouses or hospitals, calls for the need to reason on the interactions happening in dense and dynamic scenes. In this paper, we present and benchmark three new approaches to model and predict multi-agent interactions in dense scenes, including the use of an intuitive qualitative representation. The proposed solutions take into account static and dynamic context to predict individual interactions. They exploit an input- and a temporal-attention mechanism, and are tested on medium and long-term time horizons. The first two approaches integrate different relations from the so-called Qualitative Trajectory Calculus (QTC) within a state-of-the-art deep neural network to create a symbol-driven neural architecture for predicting spatial interactions. The third approach implements a purely data-driven network for motion prediction, the output of which is post-processed to predict QTC spatial interactions. Experimental results on a popular robot dataset of challenging crowded scenarios show that the purely data-driven prediction approach generally outperforms the other two. The three approaches were further evaluated on different but related human scenarios to assess their generalisation capability.}
    }

  • K. Jurkans and C. Fox, “Python subset to digital logic dataflow compiler for robots and iot,” in International symposium on intelligent and trustworthy computing, communications, and networking (itccn-2023), 2023.
    [BibTeX] [Abstract] [Download PDF]

    Robots and IoT devices often need to process real-time signals using embedded systems with limited power and clock speeds – rather than large CPUs or GPUs. FPGAs offer highly parallel computation, but such computation is difficult to program, both algorithmically and at hardware implementation level. Programmers of digital signal processing (DSP), machine vision, and neural networks typically work in high level, serial languages such as Python, so would benefit from a tool to automatically convert this code to run on FPGA. We present a design for a compiler from a serial Python subset to parallel dataflow FPGA, in which the physical connectivity and dataflow of the digital logic mirrors the logical dataflow of the programs. The subset removes some imperative features from Python and focuses on Python’s functional programming elements, which can be more easily compiled into physical digital logic implementations of dataflows. Some imperative features are retained but interpreted under alternative functional semantics, making them easier to parallelize. These dataflows can then be pipelined for efficient continuous real-time data processing. An open-source partial implementation is provided together with a compilable simple neuron program.

    @inproceedings{lincoln56659,
    booktitle = {International Symposium on Intelligent and Trustworthy Computing, Communications, and Networking (ITCCN-2023)},
    title = {Python Subset to Digital Logic Dataflow Compiler for Robots and IoT},
    author = {Kristaps Jurkans and Charles Fox},
    publisher = {IEEE Computer Society},
    year = {2023},
    url = {https://eprints.lincoln.ac.uk/id/eprint/56659/},
    abstract = {Robots and IoT devices often need to process real-time signals using embedded systems with limited power and clock speeds -- rather than large CPUs or GPUs. FPGAs offer highly parallel computation, but such computation is difficult to program, both algorithmically and at hardware implementation level. Programmers of digital signal processing (DSP), machine vision, and neural networks typically work in high level, serial languages such as Python, so would benefit from a tool to automatically convert this code to run on FPGA. We present a design for a compiler from a serial Python subset to parallel dataflow FPGA, in which the physical connectivity and dataflow of the digital logic mirrors the logical dataflow of the programs. The subset removes some imperative features from Python and focuses on Python's functional programming elements, which can be more easily compiled into physical digital logic implementations of dataflows. Some imperative features are retained but interpreted under alternative functional semantics, making them easier to parallelize. These dataflows can then be pipelined for efficient continuous real-time data processing. An open-source partial implementation is provided together with a compilable simple neuron program.}
    }

  • P. H. Johnson, M. Rai, and M. Calisti, “Fabrication and characterization of a passive variable stiffness joint based on shear thickening fluids,” in 6th ieee-ras international conference on soft robotics (robosoft), 2023. doi:10.1109/RoboSoft55895.2023.10122061
    [BibTeX] [Abstract] [Download PDF]

    In soft robotics, variable stiffening is the key to taking full advantage of properties such as compliance, manipulability and deformability. However, many variable stiffness actuators and mechanisms which have been produced so far to control these properties of soft robots are slow, bulky, or require additional complex actuators. This paper presents a novel passive soft joint based upon the intrinsic non-Newtonian behavior of Shear Thickening Fluids (STFs). The joint stiffness is varied by changing the speed at which it is actuated. The joints fabricated for testing have a simple cylindrical structure comprised of a soft silicone shell filled with a STF. Three prototypes with lengths of 20, 40 and 60mm were produced for experimental validation. We characterize the behavior of the joints in compression, expansion and bending, yielding a stiffness variation of more than 5x based on actuation speed in compression testing. This paper is the first step in producing a new category of variable stiffening mechanisms based on STFs which can be incorporated into soft robots without the need for additional actuation. It is envisaged that this new soft joint will find applications in soft manipulators and wearable devices.

    @inproceedings{lincoln53352,
    booktitle = {6th IEEE-RAS International Conference on Soft Robotics (ROBOSOFT)},
    title = {Fabrication and Characterization of a Passive Variable Stiffness Joint based on Shear Thickening Fluids},
    author = {Philip H. Johnson and Mini Rai and Marcello Calisti},
    publisher = {IEEE},
    year = {2023},
    doi = {10.1109/RoboSoft55895.2023.10122061},
    url = {https://eprints.lincoln.ac.uk/id/eprint/53352/},
    abstract = {In soft robotics, variable stiffening is the key to taking full advantage of properties such as compliance, manipulability and deformability. However, many variable stiffness actuators and mechanisms which have been produced so far to control these properties of soft robots are slow, bulky, or require additional complex actuators. This paper presents a novel passive soft joint based upon the intrinsic non-Newtonian behavior of Shear Thickening Fluids (STFs). The joint stiffness is varied by changing the speed at which it is actuated. The joints fabricated for testing have a simple cylindrical structure comprised of a soft silicone shell filled with a STF. Three prototypes with lengths of 20, 40 and 60mm were produced for experimental validation. We characterize the behavior of the joints in compression, expansion and bending, yielding a stiffness variation of more than 5x based on actuation speed in compression testing. This paper is the first step in producing a new category of variable stiffening mechanisms based on STFs which can be incorporated into soft robots without the need for additional actuation. It is envisaged that this new soft joint will find applications in soft manipulators and wearable devices.}
    }

  • M. A. Jameel, T. Kanakis, S. Turner, A. Al-Sherbaz, W. S. Bhaya, and M. Al-Khafajiy, “An intelligent routing approach for multimedia traffic transmission over sdn,” in 2023 15th international conference on developments in esystems engineering (dese), 2023, p. 118–124. doi:10.1109/DeSE58274.2023.10100250
    [BibTeX] [Abstract] [Download PDF]

    Multimedia applications such as video streaming services have become popular, especially with the rapid growth of users, devices, increased availability and diversity of these services over the internet. In this case, service providers and network administrators have difficulties ensuring end-user satisfaction because the traffic generated by such services is more exposed to multiple network quality of service impairments, including bandwidth, delay, jitter, and loss ratio. This paper proposes an intelligent multimedia traffic routing framework that exploits the integration of a reinforcement learning technique with software-defined networking to explore, learn and find potential routes for video streaming traffic. Simulation results on a realistic network under various traffic loads demonstrate the proposed scheme’s effectiveness in providing improved end-user viewing quality, higher throughput and lower video quality switches when compared to the existing techniques.

    @inproceedings{lincoln56607,
    booktitle = {2023 15th International Conference on Developments in eSystems Engineering (DeSE)},
    title = {An Intelligent Routing Approach for Multimedia Traffic Transmission Over SDN},
    author = {Mohammed Al Jameel and Triantafyllos Kanakis and Scott Turner and Ali Al-Sherbaz and Wesam S. Bhaya and Mohammed Al-Khafajiy},
    publisher = {IEEE},
    year = {2023},
    pages = {118--124},
    doi = {10.1109/DeSE58274.2023.10100250},
    url = {https://eprints.lincoln.ac.uk/id/eprint/56607/},
    abstract = {Multimedia applications such as video streaming services have become popular, especially with the rapid growth of users, devices, increased availability and diversity of these services over the internet. In this case, service providers and network administrators have difficulties ensuring end-user satisfaction because the traffic generated by such services is more exposed to multiple network quality of service impairments, including bandwidth, delay, jitter, and loss ratio. This paper proposes an intelligent multimedia traffic routing framework that exploits the integration of a reinforcement learning technique with software-defined networking to explore, learn and find potential routes for video streaming traffic. Simulation results on a realistic network under various traffic loads demonstrate the proposed scheme's effectiveness in providing improved end-user viewing quality, higher throughput and lower video quality switches when compared to the existing techniques.}
    }

  • F. Zhong, K. T. Fogarty, P. Hanji, T. W. Wu, A. Sztrajman, A. E. Spielberg, A. Tagliasacchi, P. Bosilj, and C. Oztireli, “Neural fields with hard constraints of arbitrary differential order,” in Thirty-seventh conference on neural information processing systems, 2023.
    [BibTeX] [Abstract] [Download PDF]

    While deep learning techniques have become extremely popular for solving a broad range of optimization problems, methods to enforce hard constraints during optimization, particularly on deep neural networks, remain underdeveloped. Inspired by the rich literature on meshless interpolation and its extension to spectral collocation methods in scientific computing, we develop a series of approaches for enforcing hard constraints on neural fields. The constraints can be specified as a linear operator applied to the neural field at any differential order. We also design specific model representations and training strategies for problems where standard models may encounter difficulties. Our approaches are demonstrated in a wide range of real-world applications. Additionally, we develop a framework that enables highly efficient model and constraint specification, which can be readily applied to any downstream task where hard constraints need to be explicitly satisfied during optimization.

    @inproceedings{lincoln56588,
    booktitle = {Thirty-seventh Conference on Neural Information Processing Systems},
    month = {December},
    title = {Neural Fields with Hard Constraints of Arbitrary Differential Order},
    author = {Fangcheng Zhong and Kyle Thomas Fogarty and Param Hanji and Tianhao Walter Wu and Alejandro Sztrajman and Andrew Everett Spielberg and Andrea Tagliasacchi and Petra Bosilj and Cengiz Oztireli},
    year = {2023},
    url = {https://eprints.lincoln.ac.uk/id/eprint/56588/},
    abstract = {While deep learning techniques have become extremely popular for solving a broad range of optimization problems, methods to enforce hard constraints during optimization, particularly on deep neural networks, remain underdeveloped. Inspired by the rich literature on meshless interpolation and its extension to spectral collocation methods in scientific computing, we develop a series of approaches for enforcing hard constraints on neural fields. The constraints can be specified as a linear operator applied to the neural field at any differential order. We also design specific model representations and training strategies for problems where standard models may encounter difficulties. Our approaches are demonstrated in a wide range of real-world applications. Additionally, we develop a framework that enables highly efficient model and constraint specification, which can be readily applied to any downstream task where hard constraints need to be explicitly satisfied during optimization.}
    }

  • F. Atas, G. Cielniak, and L. Grimstad, “Enabling robot autonomy through a modular software framework,” in Icra2023 workshop on robot software architectures, 2023.
    [BibTeX] [Abstract] [Download PDF]

    The complexity of robotic software architectures stems from the need to manage a diverse range of sensory inputs, real-time actuator control, and adaptive capabilities in dynamic environments. In order to guarantee safe operation, robots must be capable of executing tasks concurrently and asynchronously, which poses significant challenges in developing cohesive robotic software architectures. It is commonly accepted that there is no universal approach that can address the needs of all robot platforms and applications. A number of established architectures have been developed based on the publish-subscribe and action-client paradigms employed by Robot Operating System (ROS) middleware. Extending on these developments, in this research, we present a novel robotic software architecture that enables seamless integration of different robotics software components, such as Planning, Control, and Perception. The presented architecture is designed to ensure the autonomous navigation of a mobile robot operating in uneven outdoor terrains, while also supporting indoor environments with appropriate customization. Our software has been made available to the robotics community through a GitHub repository.

    @inproceedings{lincoln55955,
    booktitle = {ICRA2023 Workshop on Robot Software Architectures},
    month = {May},
    title = {Enabling Robot Autonomy through a Modular Software Framework},
    author = {Fetullah Atas and Grzegorz Cielniak and Lars Grimstad},
    year = {2023},
    url = {https://eprints.lincoln.ac.uk/id/eprint/55955/},
    abstract = {The complexity of robotic software architectures stems from the need to manage a diverse range of sensory inputs, real-time actuator control, and adaptive capabilities in dynamic environments. In order to guarantee safe operation, robots must be capable of executing tasks concurrently and asynchronously, which poses significant challenges in developing cohesive robotic software architectures. It is commonly accepted that there is no universal approach that can address the needs of all robot platforms and applications. A number of established architectures have been developed based on the publish-subscribe and action-client paradigms employed by Robot Operating System (ROS) middleware. Extending on these developments, in this research, we present a novel robotic software architecture that enables seamless integration of different robotics software components, such as Planning, Control, and Perception. The presented architecture is designed to ensure the autonomous navigation of a mobile robot operating in uneven outdoor terrains, while also supporting indoor environments with appropriate customization. Our software has been made available to the robotics community through a GitHub repository.}
    }

  • L. Castri, S. Mghames, M. Hanheide, and N. Bellotto, “Enhancing causal discovery from robot sensor data in dynamic scenarios,” in Conference on causal learning and reasoning (clear), 2023.
    [BibTeX] [Abstract] [Download PDF]

    Identifying the main features and learning the causal relationships of a dynamic system from time-series of sensor data are key problems in many real-world robot applications. In this paper, we propose an extension of a state-of-the-art causal discovery method, PCMCI, embedding an additional feature-selection module based on transfer entropy. Starting from a prefixed set of variables, the new algorithm reconstructs the causal model of the observed system by considering only its main features and neglecting those deemed unnecessary for understanding the evolution of the system. We first validate the method on a toy problem, for which the ground-truth model is available, and then on a real-world robotics scenario using a large-scale time-series dataset of human trajectories. The experiments demonstrate that our solution outperforms the previous state-of-the-art technique in terms of accuracy and computational efficiency, allowing better and faster causal discovery of meaningful models from robot sensor data.

    @inproceedings{lincoln53113,
    booktitle = {Conference on Causal Learning and Reasoning (CLeaR)},
    month = {April},
    title = {Enhancing Causal Discovery from Robot Sensor Data in Dynamic Scenarios},
    author = {Luca Castri and Sariah Mghames and Marc Hanheide and Nicola Bellotto},
    year = {2023},
    url = {https://eprints.lincoln.ac.uk/id/eprint/53113/},
    abstract = {Identifying the main features and learning the causal relationships of a dynamic system from time-series of sensor data are key problems in many real-world robot applications. In this paper, we propose an extension of a state-of-the-art causal discovery method, PCMCI, embedding an additional feature-selection module based on transfer entropy. Starting from a prefixed set of variables, the new algorithm reconstructs the causal model of the observed system by considering only its main features and neglecting those deemed unnecessary for understanding the evolution of the system. We first validate the method on a toy problem, for which the ground-truth model is available, and then on a real-world robotics scenario using a large-scale time-series dataset of human trajectories. The experiments demonstrate that our solution outperforms the previous state-of-the-art technique in terms of accuracy and computational efficiency, allowing better and faster causal discovery of meaningful models from robot sensor data.}
    }
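    The transfer-entropy test at the heart of the feature-selection module described above can be illustrated with a minimal plug-in estimator. This is an illustrative sketch, not the authors' implementation: the equal-width binning, the bin count, and the synthetic series are all assumptions.

    ```python
    from collections import Counter
    import math
    import random

    def transfer_entropy(x, y, bins=4):
        """Plug-in estimate of transfer entropy TE(X -> Y) in bits: how much
        knowing x_t reduces uncertainty about y_{t+1} beyond what y_t already
        tells us. Uses simple equal-width binning (a rough estimator)."""
        def discretise(s):
            lo, hi = min(s), max(s)
            w = (hi - lo) / bins or 1.0
            return [min(int((v - lo) / w), bins - 1) for v in s]
        xd, yd = discretise(x), discretise(y)
        n = len(yd) - 1
        triples = Counter(zip(yd[1:], yd[:-1], xd[:-1]))  # (y_{t+1}, y_t, x_t)
        pairs_yx = Counter(zip(yd[:-1], xd[:-1]))
        pairs_yy = Counter(zip(yd[1:], yd[:-1]))
        singles = Counter(yd[:-1])
        te = 0.0
        for (y1, y0, x0), c in triples.items():
            p_cond_full = c / pairs_yx[(y0, x0)]            # p(y1 | y0, x0)
            p_cond_hist = pairs_yy[(y1, y0)] / singles[y0]  # p(y1 | y0)
            te += (c / n) * math.log2(p_cond_full / p_cond_hist)
        return te

    random.seed(0)
    x = [random.random() for _ in range(2000)]
    y = [0.0]
    for t in range(1999):
        y.append(0.5 * y[-1] + 0.5 * x[t])      # y is causally driven by x
    z = [random.random() for _ in range(2000)]  # z is independent of x
    ```

    A feature-selection pass of this kind would keep only variables whose transfer entropy towards the target exceeds a significance threshold before running causal discovery: here TE(x→y) comes out substantially larger than the near-zero TE(x→z).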

  • J. Davy and C. Fox, “Simultaneous base and arm trajectories for multi-target mobile agri-robot,” in Taros, 2023.
    [BibTeX] [Abstract] [Download PDF]

    Many agricultural robotics tasks require an end effector to hold stationary above individual plants in the field for short periods. Examples include precision harvesting, imaging and spraying. This effector may be mounted on a mobile base such as a large tractor or small robot, driving in the field. We consider how to optimise control of the base and the end actuator together, to minimise total time taken to visit the plants. Our approach is based on low level combination of simple motion primitives, with mid level target clustering, and higher level planning. For the high level, three strategies are compared and evaluated in simulation: baseline stop-and-spray, constant velocity, and variable velocity. The baseline strategy is common in existing systems, and is shown to be outperformed by the new methods. The application considered here is weed spraying, but the methods are applicable to many tasks.

    @inproceedings{lincoln55398,
    booktitle = {TAROS},
    month = {September},
    title = {Simultaneous Base and Arm Trajectories for Multi-Target Mobile Agri-Robot},
    author = {Josh Davy and Charles Fox},
    publisher = {TAROS},
    year = {2023},
    url = {https://eprints.lincoln.ac.uk/id/eprint/55398/},
    abstract = {Many agricultural robotics tasks require an end effector to hold stationary above individual plants in the field for short periods. Examples include precision harvesting, imaging and spraying. This effector may be mounted on a mobile base such as a large tractor or small robot, driving in the field. We consider how to optimise control of the base and the end actuator together, to minimise total time taken to visit the plants. Our approach is based on low level combination of simple motion primitives, with mid level target clustering, and higher level planning. For the high level, three strategies are compared and evaluated in simulation: baseline stop-and-spray, constant velocity, and variable velocity. The baseline strategy is common in existing systems, and is shown to be outperformed by the new methods. The application considered here is weed spraying, but the methods are applicable to many tasks.}
    }
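    As a rough illustration of why the baseline strategy loses, consider a one-dimensional row model with a per-stop overhead. The function names, the settle-time parameter, and the reach-window model below are assumptions for illustration, not the paper's formulation (which additionally handles target clustering and arm planning).

    ```python
    def stop_and_spray_time(targets, v_max, t_spray, t_settle=1.0):
        """Baseline stop-and-spray: drive to each plant, halt, spray, move on.
        t_settle models deceleration/acceleration overhead per stop (assumed)."""
        t, pos = 0.0, 0.0
        for x in sorted(targets):
            t += (x - pos) / v_max + t_settle + t_spray
            pos = x
        return t

    def constant_velocity_time(targets, v_max, t_spray, arm_reach):
        """Constant-velocity strategy: choose the fastest base speed at which
        every plant stays inside the arm's reach window (2 * arm_reach) long
        enough for a complete spray, and never stop the base."""
        v = min(v_max, 2.0 * arm_reach / t_spray)
        return max(targets) / v

    row = [1.0, 2.5, 4.0, 7.5, 9.0]  # plant positions along the row (metres)
    ```

    With v_max = 1 m/s, a 2 s spray, and 0.5 m arm reach, the baseline needs 24 s for this row while the constant-velocity plan needs 18 s, matching the paper's finding that the baseline is outperformed.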

  • E. Paul, S. Bharti, A. Uthama, R. A. Boby, H. Krishnaswamy, and A. Klimchik, “Evaluation of deviations due to robot configuration for robot-based incremental sheet metal forming,” in 6th international conference on advances in robotics, 2023. doi:10.1145/3610419.3610471
    [BibTeX] [Abstract] [Download PDF]

    Industrial robot-based Incremental Sheet metal Forming (ISF) is known as Roboforming. Industrial robots are being adopted for forming operations because they allow higher tool flexibility in terms of positioning and orientating the tool and a larger workspace at a minimum cost compared to CNC-based ISF. However, the lower stiffness of the robots leads to undesirable geometrical anomalies and deviations in the formed part. Along with the externally applied forces, robot configuration changes also impact the tool's positional accuracy. An attempt has been made to study the influence of robot configurations on overall part deviations in Roboforming due to robot compliance. Understanding changes in compliance or stiffness changes owing to different robot poses is a necessary step for optimizing the transition from a traditional CNC-based ISF to a robot-assisted ISF. Robot stiffness-based deviations are evaluated using an analytical approach, and configuration-based deflections are studied using FEM-based approaches for their contribution to the overall part deviations. Results are compared to the sheet metal forming experiments conducted over conventional 3-axis CNC for forming a cone-shaped profile using a spiral toolpath. It is highlighted that the overall deviations in the formed components are influenced by robot compliance and pose.

    @inproceedings{lincoln57109,
    booktitle = {6th International Conference on Advances in Robotics},
    month = {November},
    title = {Evaluation Of Deviations Due To Robot Configuration For Robot-based Incremental Sheet Metal Forming},
    author = {Eldho Paul and Sahil Bharti and Anandu Uthama and Riby Abraham Boby and Hariharan Krishnaswamy and Alexandr Klimchik},
    publisher = {Association for Computing Machinery},
    year = {2023},
    doi = {10.1145/3610419.3610471},
    url = {https://eprints.lincoln.ac.uk/id/eprint/57109/},
    abstract = {Industrial robot-based Incremental Sheet metal Forming (ISF) is known as Roboforming. Industrial robots are being adopted for forming operations because they allow higher tool flexibility in terms of positioning and orientating the tool and a larger workspace at a minimum cost compared to CNC-based ISF. However, the lower stiffness of the robots leads to undesirable geometrical anomalies and deviations in the formed part. Along with the externally applied forces, robot configuration changes also impact the tool's positional accuracy. An attempt has been made to study the influence of robot configurations on overall part deviations in Roboforming due to robot compliance. Understanding changes in compliance or stiffness changes owing to different robot poses is a necessary step for optimizing the transition from a traditional CNC-based ISF to a robot-assisted ISF. Robot stiffness-based deviations are evaluated using an analytical approach, and configuration-based deflections are studied using FEM-based approaches for their contribution to the overall part deviations. Results are compared to the sheet metal forming experiments conducted over conventional 3-axis CNC for forming a cone-shaped profile using a spiral toolpath. It is highlighted that the overall deviations in the formed components are influenced by robot compliance and pose.}
    }

  • K. Gaikwad, R. Soni, C. Fox, and C. Waltham, “Open source hardware robotics interfacing board,” in Taros, 2023.
    [BibTeX] [Abstract] [Download PDF]

    Robotics research still struggles with reproducibility. The ROS ecosystem enables reuse of software, but not hardware. Researchers waste time porting systems between hardware platforms to reproduce research between labs. Researchers in developing countries in particular often cannot afford the proprietary robots used by others. If a published robotics system is dependent on any component that is only available from a single supplier, then all work building on it is at risk if that supplier vanishes, de-lists or changes the product. Open Source Hardware (OSH; Pearce, 2012) is hardware whose designs and build instructions are public, easy, and low-cost so that anyone is free to build and modify them, enabling large community collaborations. Combined open software and hardware stacks allow any researcher to download, build, exactly replicate, then extend the published work which they read about.

    @inproceedings{lincoln55399,
    booktitle = {TAROS},
    month = {October},
    title = {Open source hardware robotics interfacing board},
    author = {Kshitij Gaikwad and Rakshit Soni and Charles Fox and Chris Waltham},
    year = {2023},
    url = {https://eprints.lincoln.ac.uk/id/eprint/55399/},
    abstract = {Robotics research still struggles with reproducibility. The ROS ecosystem enables reuse of software, but not hardware. Researchers waste time porting systems between hardware platforms to reproduce research between labs. Researchers in developing countries in particular often cannot afford the proprietary robots used by others. If a published robotics system is dependent on any component that is only available from a single supplier, then all work building on it is at risk if that supplier vanishes, de-lists or changes the product. Open Source Hardware (OSH, {$\backslash$}cite\{pearce2012building\}) is hardware whose designs and build instructions are public, easy, and low-cost so that anyone is free to build and modify them, enabling large community collaborations. Combined open software and hardware stacks allow any researcher to download, build, exactly replicate, then extend the published work which they read about.}
    }

  • R. Trimble and C. Fox, “Skid-steer friction calibration protocol for digital twin creation,” in Taros, 2023.
    [BibTeX] [Abstract] [Download PDF]

    Mobile robots require digital twins to test and learn algorithms while minimising the difficulty, expense and risk of physical trials. Most mobile robots use wheels, which are notoriously difficult to simulate accurately due to friction. Physics engines approximate complex tribology using simplified models which can result in unrealistic behaviors such as inability to turn or sliding sideways down small slopes. Methods exist to characterise friction properties of skid-steer vehicles (Khaleghian et al., 2017) but their use has been limited because they require expensive measurement equipment or physics models not available in common simulators. We present a new simple protocol to obtain dynamic friction parameters from physical four-wheeled skid-steer robots for use in the Gazebo robot simulator using ODE (Open Dynamics Engine), assuming only that calibrated IMU (Inertial Measurement Unit) and odometry, and vehicle and wheel weights and geometry are available.

    @inproceedings{lincoln55400,
    booktitle = {TAROS},
    month = {October},
    title = {Skid-steer friction calibration protocol for digital twin creation},
    author = {Rachel Trimble and Charles Fox},
    year = {2023},
    url = {https://eprints.lincoln.ac.uk/id/eprint/55400/},
    abstract = {Mobile robots require digital twins to test and learn algorithms while minimising the difficulty, expense and risk of physical trials. Most mobile robots use wheels, which are notoriously difficult to simulate accurately due to friction. Physics engines approximate complex tribology using simplified models which can result in unrealistic behaviors such as inability to turn or sliding sideways down small slopes. Methods exist to characterise friction properties of skid steer vehicles {$\backslash$}cite\{khaleghian2017technical\} but use has been limited because they require expensive measurement equipment or physics models not available in common simulators. We present a new simple protocol to obtain dynamic friction parameters from physical four-wheeled skid-steer robots for use in the Gazebo robot simulator using ODE (Open Dynamics Engine), assuming only that calibrated IMU (Inertial Measurement Unit) and odometry, and vehicle and wheel weights and geometry are available.}
    }
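    The paper's protocol details are not reproduced here, but the physical relation such a calibration can rest on is simple: during a locked-wheel skid on flat ground, Coulomb friction gives deceleration a = mu * g. A hypothetical sketch (the function name, noise model, and synthetic trace are all assumptions, not the authors' code):

    ```python
    import random

    G = 9.81  # gravitational acceleration, m/s^2

    def dynamic_friction_coefficient(decel_samples):
        """Estimate the dynamic (sliding) friction coefficient mu from IMU
        deceleration readings logged while the robot skids to a stop on flat
        ground with wheels locked: Coulomb friction gives a = mu * g."""
        mean_decel = sum(decel_samples) / len(decel_samples)  # average out noise
        return mean_decel / G

    # Synthetic IMU trace: true mu = 0.6 (about 5.9 m/s^2) plus sensor noise.
    random.seed(1)
    trace = [0.6 * G + random.gauss(0.0, 0.2) for _ in range(200)]
    mu = dynamic_friction_coefficient(trace)
    ```

    A recovered coefficient of this kind could then be supplied as the contact friction parameter for the wheel-ground pair in the Gazebo/ODE surface model.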

  • W. Shaker and A. Klimchik, “Comparative analysis of springback compensation for various profiles in incremental forming,” in 2023 international russian automation conference (rusautocon), 2023. doi:10.1109/RusAutoCon58002.2023.10272754
    [BibTeX] [Abstract] [Download PDF]

    Incremental forming encounters a common challenge known as the springback effect, wherein the workpiece undergoes elastic deformation and deviates slightly from the desired shape once the forming tool is released. This discrepancy between the intended and obtained shape results in reduced geometric accuracy, making incremental forming less precise compared to conventional methods. This research presents a novel springback effect compensation model for sheet forming processes. The main objective is to evaluate the model’s performance across various profiles, with a focus on enhancing precision and accuracy in the formed shapes. The proposed model demonstrates an impressive ability to compensate for approximately 60% of the springback effect, offering a practical solution for offline springback compensation.

    @inproceedings{lincoln57368,
    booktitle = {2023 International Russian Automation Conference (RusAutoCon)},
    month = {October},
    title = {Comparative Analysis of Springback Compensation for Various Profiles in Incremental Forming},
    author = {Walid Shaker and Alexandr Klimchik},
    publisher = {IEEE},
    year = {2023},
    doi = {10.1109/RusAutoCon58002.2023.10272754},
    url = {https://eprints.lincoln.ac.uk/id/eprint/57368/},
    abstract = {Incremental forming encounters a common challenge known as the springback effect, wherein the workpiece undergoes elastic deformation and deviates slightly from the desired shape once the forming tool is released. This discrepancy between the intended and obtained shape results in reduced geometric accuracy, making incremental forming less precise compared to conventional methods. This research presents a novel springback effect compensation model for sheet forming processes. The main objective is to evaluate the model's performance across various profiles, with a focus on enhancing precision and accuracy in the formed shapes. The proposed model demonstrates an impressive ability to compensate for approximately 60\% of the springback effect, offering a practical solution for offline springback compensation.}
    }

  • H. Rogers, B. D. L. Iglesia, T. Zebin, G. Cielniak, and B. Magri, “An agricultural precision sprayer deposit identification system,” in 2023 ieee 19th international conference on automation science and engineering (case), 2023, p. 1–6. doi:10.1109/CASE56687.2023.10260374
    [BibTeX] [Abstract] [Download PDF]

    Data-driven Artificial Intelligence systems are playing an increasingly significant role in the advancement of precision agriculture. Currently, precision sprayers lack fully automated methods to evaluate the effectiveness of their operation, e.g. whether spray has landed on target weeds. In this paper, using an agricultural spot-spraying system, images were collected from an RGB camera to locate spray deposits on weeds or lettuces. We present an interpretable deep learning pipeline to identify spray deposits on lettuces and weeds without using existing methods such as tracers or water-sensitive papers. We implement a novel stratification and sampling methodology to improve results from a baseline. Using a binary classification head after transfer learning networks, spray deposits are identified with over 90% Area Under the Receiver Operating Characteristic (AUROC). This work offers a data-driven approach for an automated evaluation methodology for the effectiveness of precision sprayers.

    @inproceedings{lincoln56560,
    month = {September},
    author = {Harry Rogers and Beatriz De La Iglesia and Tahmina Zebin and Grzegorz Cielniak and Ben Magri},
    booktitle = {2023 IEEE 19th International Conference on Automation Science and Engineering (CASE)},
    title = {An Agricultural Precision Sprayer Deposit Identification System},
    publisher = {IEEE},
    doi = {10.1109/CASE56687.2023.10260374},
    pages = {1--6},
    year = {2023},
    url = {https://eprints.lincoln.ac.uk/id/eprint/56560/},
    abstract = {Data-driven Artificial Intelligence systems are playing an increasingly significant role in the advancement of precision agriculture. Currently, precision sprayers lack fully automated methods to evaluate the effectiveness of their operation, e.g. whether spray has landed on target weeds. In this paper, using an agricultural spot spraying system images were collected from an RGB camera to locate spray deposits on weeds or lettuces. We present an interpretable deep learning pipeline to identify spray deposits on lettuces and weeds without using existing methods such as tracers or water-sensitive papers. We implement a novel stratification and sampling methodology to improve results from a baseline. Using a binary classification head after transfer learning networks, spray deposits are identified with over 90\% Area Under the Receiver Operating Characteristic (AUROC). This work offers a data-driven approach for an automated evaluation methodology for the effectiveness of precision sprayers.}
    }

  • W. K. Shaker and A. Klimchik, “Towards single point incremental forming accuracy: an approach for the springback effect compensation,” in 2023 ieee 19th international conference on automation science and engineering (case), 2023, p. 1–6. doi:10.1109/CASE56687.2023.10260568
    [BibTeX] [Abstract] [Download PDF]

    The springback effect is a common occurrence in incremental forming, where the formed workpiece elastically deforms and slightly shifts from the desired shape after the tool is released. This phenomenon causes an error between the target and obtained shape, leading to reduced geometric accuracy. It is a significant challenge in incremental forming and a reason why the process has lower accuracy compared to conventional forming methods. This paper presents an off-line springback effect compensation model aiming to generate an optimized toolpath that accounts for the material springback effect. The model is based on an off-line numerical simulation conducted on Abaqus/CAE software. The results demonstrated that the proposed model can effectively reduce the error between the desired and obtained shape by 31.8% for aluminum, 63.2% for copper, and 63.1% for magnesium.

    @inproceedings{lincoln56655,
    month = {September},
    author = {Walid K. Shaker and Alexandr Klimchik},
    booktitle = {2023 IEEE 19th International Conference on Automation Science and Engineering (CASE)},
    title = {Towards Single Point Incremental Forming Accuracy: An Approach for the Springback Effect Compensation},
    publisher = {IEEE},
    doi = {10.1109/CASE56687.2023.10260568},
    pages = {1--6},
    year = {2023},
    url = {https://eprints.lincoln.ac.uk/id/eprint/56655/},
    abstract = {The springback effect is a common occurrence in incremental forming, where the formed workpiece elastically deforms and slightly shifts from the desired shape after the tool is released. This phenomenon causes an error between the target and obtained shape, leading to reduced geometric accuracy. It is a significant challenge in incremental forming and a reason why the process has lower accuracy compared to conventional forming methods. This paper presents an off-line springback effect compensation model aiming to generate an optimized toolpath that accounts for the material springback effect. The model is based on an off-line numerical simulation conducted on Abaqus/CAE software. The results demonstrated that the proposed model can effectively reduce the error between the desired and obtained shape by 31.8\% for aluminum, 63.2\% for copper, and 63.1\% for magnesium.}
    }

  • F. Atas, G. Cielniak, and L. Grimstad, “Navigating in 3d uneven environments through supervoxels and nonlinear mpc,” in European conference on mobile robots (ecmr), 2023, p. 1–8. doi:10.1109/ECMR59166.2023.10256342
    [BibTeX] [Abstract] [Download PDF]

    Navigating uneven and rough terrains presents difficulties, including stability, traversability, sensing, and robustness, making autonomous navigation in these terrains a challenging task. This study introduces a new approach for mobile robots to navigate uneven terrains. The method uses a compact graph of traversable regions on point cloud maps, created through the utilization of supervoxel representation of point clouds. By using this supervoxel graph, the method navigates the robot to any reachable goal pose by utilizing a navigation function and Nonlinear Model Predictive Controller (NMPC). The NMPC ensures kinodynamically feasible and collision-free motion plans, while the supervoxel-based geometric planning generates near-optimal plans by exploiting the terrain information. We conducted extensive navigation experiments in real and simulated 3D uneven terrains and found that the approach performs reliably. Additionally, we compared the resulting motion plans to several state-of-the-art sampling-based motion planners, which our method outperformed in terms of execution time and resulting path lengths. The method can also be adapted to meet specific behaviours, such as the shortest route or the path with the least slope. The source code is available in a GitHub repository.

    @inproceedings{lincoln56559,
    month = {September},
    author = {Fetullah Atas and Grzegorz Cielniak and Lars Grimstad},
    booktitle = {European Conference on Mobile Robots (ECMR)},
    title = {Navigating in 3D Uneven Environments through Supervoxels and Nonlinear MPC},
    publisher = {IEEE},
    doi = {10.1109/ECMR59166.2023.10256342},
    pages = {1--8},
    year = {2023},
    url = {https://eprints.lincoln.ac.uk/id/eprint/56559/},
    abstract = {Navigating uneven and rough terrains presents difficulties, including stability, traversability, sensing, and robustness, making autonomous navigation in these terrains a challenging task. This study introduces a new approach for mobile robots to navigate uneven terrains. The method uses a compact graph of traversable regions on point cloud maps, created through the utilization of supervoxel representation of point clouds. By using this supervoxel graph, the method navigates the robot to any reachable goal pose by utilizing a navigation function and Nonlinear Model Predictive Controller (NMPC). The NMPC ensures kinodynamically feasible and collision-free motion plans, while the supervoxel-based geometric planning generates near-optimal plans by exploiting the terrain information. We conducted extensive navigation experiments in real and simulated 3D uneven terrains and found that the approach performs reliably. Additionally, we compared resulting motion plans to some state-of-the-art sampling-based motion planners in which our method outperformed them in terms of execution time and resulting path lengths. The method can also be adapted to meet specific behavior, like the shortest route or the path with the least slope route. The source code is available in a GitHub repository.}
    }

  • I. Hroob, S. M. Mellado, R. Polvara, G. Cielniak, and M. Hanheide, “Learned long-term stability scan filtering for robust robot localisation in continuously changing environments,” in European conference on mobile robots (ecmr), 2023. doi:10.1109/ECMR59166.2023.10256419
    [BibTeX] [Abstract] [Download PDF]

    In field robotics, particularly in the agricultural sector, precise localization presents a challenge due to the constantly changing nature of the environment. Simultaneous Localization and Mapping algorithms can provide an effective estimation of a robot's position, but their long-term performance may be impacted by false data associations. Additionally, alternative strategies such as the use of RTK-GPS can also have limitations, such as dependence on external infrastructure. To address these challenges, this paper introduces a novel stability scan filter. This filter can learn and infer the motion status of objects in the environment, allowing it to identify the most stable objects and use them as landmarks for robust robot localization in a continuously changing environment. The proposed method involves an unsupervised point-wise labelling of LiDAR frames by utilizing temporal observations of the environment, as well as a regression network called Long-Term Stability Network (LTSNET) to learn and infer 3D LiDAR points' long-term motion status. Experiments demonstrate the ability of the stability scan filter to infer the motion stability of objects on a real agricultural long-term dataset. Results show that by only utilizing points belonging to long-term stable objects, the localization system exhibits reliable and robust localization performance for long-term missions compared to using the entire LiDAR frame points.

    @inproceedings{lincoln56036,
    booktitle = {European Conference on Mobile Robots (ECMR)},
    month = {September},
    title = {Learned Long-Term Stability Scan Filtering for Robust Robot Localisation in Continuously Changing Environments},
    author = {Ibrahim Hroob and Sergio Molina Mellado and Riccardo Polvara and Grzegorz Cielniak and Marc Hanheide},
    publisher = {IEEE},
    year = {2023},
    doi = {10.1109/ECMR59166.2023.10256419},
    url = {https://eprints.lincoln.ac.uk/id/eprint/56036/},
    abstract = {In field robotics, particularly in the agricultural sector, precise localization presents a challenge due to the constantly changing nature of the environment. Simultaneous Localization and Mapping algorithms can provide an effective estimation of a robot's position, but their long-term performance may be impacted by false data associations. Additionally, alternative strategies such as the use of RTK-GPS can also have limitations, such as dependence on external infrastructure. To address these challenges, this paper introduces a novel stability scan filter. This filter can learn and infer the motion status of objects in the environment, allowing it to identify the most stable objects and use them as landmarks for robust robot localization in a continuously changing environment. The proposed method involves an unsupervised point-wise labelling of LiDAR frames by utilizing temporal observations of the environment, as well as a regression network called Long-Term Stability Network (LTSNET) to learn and infer 3D LiDAR points' long-term motion status. Experiments demonstrate the ability of the stability scan filter to infer the motion stability of objects on a real agricultural long-term dataset. Results show that by only utilizing points belonging to long-term stable objects, the localization system exhibits reliable and robust localization performance for long-term missions compared to using the entire LiDAR frame points.}
    }
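    Once per-point stability has been inferred, the filtering step itself is straightforward. A minimal sketch, assuming each point carries a predicted stability score in [0, 1]; the function name, toy data, and threshold value are assumptions for illustration, not values from the paper:

    ```python
    def filter_stable_points(points, stability, threshold=0.8):
        """Keep only scan points whose predicted long-term stability score
        meets the threshold, so the localiser matches against persistent
        structure (poles, trunks) rather than moving or seasonal objects."""
        return [p for p, s in zip(points, stability) if s >= threshold]

    # Toy scan: a pole and a trunk score high, crop foliage scores low.
    scan = [(1.0, 0.2, 0.0), (2.0, -1.1, 0.3), (0.5, 3.0, 0.1)]
    scores = [0.95, 0.30, 0.85]
    landmarks = filter_stable_points(scan, scores)
    ```

    Only the two high-scoring points survive and would be passed to scan matching, mirroring the paper's finding that localisation on stable points alone is more robust than using the full frame.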

  • J. L. Louedec and G. Cielniak, “Key point-based orientation estimation of strawberries for robotic fruit picking,” in 14th international conference, icvs 2023, 2023. doi:10.1007/978-3-031-44137-0_13
    [BibTeX] [Abstract] [Download PDF]

    Selective robotic harvesting can help address labour shortages affecting modern global agriculture. For an accurate and efficient picking process, a robotic harvester requires the precise location and orientation of the fruit to effectively plan the trajectory of the end effector. The current methods for estimating fruit orientation employ either complete 3D information registered from multiple views or rely on fully-supervised learning techniques, requiring difficult-to-obtain manual annotation of the reference orientation. In this paper, we introduce a novel key-point-based fruit orientation estimation method for the prediction of 3D orientation from 2D images directly. The proposed technique can work without full 3D orientation annotations but can also exploit such information for improved accuracy. We evaluate our work on two separate datasets of strawberry images obtained from real-world scenarios. Our method achieves state-of-the-art performance with an average error as low as 8°, improving predictions by ~30% compared to previous work presented in [18]. Furthermore, our method is suited for real-time robotic applications with fast inference times of ~30 ms.

    @inproceedings{lincoln56545,
    booktitle = {14th International Conference, ICVS 2023},
    month = {September},
    title = {Key Point-Based Orientation Estimation of Strawberries for Robotic Fruit Picking},
    author = {Justin Le Louedec and Grzegorz Cielniak},
    publisher = {Springer Cham},
    year = {2023},
    doi = {10.1007/978-3-031-44137-0\_13},
    url = {https://eprints.lincoln.ac.uk/id/eprint/56545/},
    abstract = {Selective robotic harvesting can help address labour shortages affecting modern global agriculture. For an accurate and efficient picking process, a robotic harvester requires the precise location and orientation of the fruit to effectively plan the trajectory of the end effector. The current methods for estimating fruit orientation employ either complete 3D information registered from multiple views or rely on fully-supervised learning techniques, requiring difficult-to-obtain manual annotation of the reference orientation. In this paper, we introduce a novel key-point-based fruit orientation estimation method for the prediction of 3D orientation from 2D images directly. The proposed technique can work without full 3D orientation annotations but can also exploit such information for improved accuracy. We evaluate our work on two separate datasets of strawberry images obtained from real-world scenarios. Our method achieves state-of-the-art performance with an average error as low as 8{$^\circ$}, improving predictions by {$\sim$}30\% compared to previous work presented in [18]. Furthermore, our method is suited for real-time robotic applications with fast inference times of {$\sim$}30ms.}
    }

  • L. Roberts-Elliott, G. Das, and A. Millard, “Towards an abstract lightweight multi-robot ros simulator for rapid experimentation,” in The 23rd towards autonomous robotic systems (taros) conference, 2023.
    [BibTeX] [Abstract] [Download PDF]

    Modern robot simulators are commonly highly complex, offering 3D graphics, and simulation of physics, sensors, and actuators. The computational complexity of simulating large multi-robot systems in these simulators can be prohibitively high. To achieve faster-than-realtime simulation of a multi-robot system for rapid experimentation, we present `move_base_abstract’, a ROS package providing a high-level abstraction of robot navigation as a “drop-in” replacement for the standard `move_base’ navigation, and a bespoke integrated minimal simulator. This bespoke simulator is compatible with ROS and strips the simulation of robots down to the representation of robot poses in 2D space, control of robots via navigation goals, and control of simulation time over ROS topic messages. Replication of an existing MRS simulated study using `move_base_abstract’ executed 2.87 times faster than the real-time that was simulated in the study, and analysis of the results of this replication shows room for further optimisations.

    @inproceedings{lincoln56229,
    booktitle = {The 23rd Towards Autonomous Robotic Systems (TAROS) Conference},
    month = {September},
    title = {Towards an Abstract Lightweight Multi-robot ROS Simulator for Rapid Experimentation},
    author = {Laurence Roberts-Elliott and Gautham Das and Alan Millard},
    year = {2023},
    url = {https://eprints.lincoln.ac.uk/id/eprint/56229/},
    abstract = {Modern robot simulators are commonly highly complex, offering 3D graphics, and simulation of physics, sensors, and actuators. The computational complexity of simulating large multi-robot systems in these simulators can be prohibitively high. To achieve faster-than-realtime simulation of a multi-robot system for rapid experimentation, we present `move\_base\_abstract', a ROS package providing a high-level abstraction of robot navigation as a ``drop-in'' replacement for the standard `move\_base' navigation, and a bespoke integrated minimal simulator. This bespoke simulator is compatible with ROS and strips the simulation of robots down to the representation of robot poses in 2D space, control of robots via navigation goals, and control of simulation time over ROS topic messages.
    Replication of an existing MRS simulated study using `move\_base\_abstract' executed 2.87 times faster than the real-time that was simulated in the study, and analysis of the results of this replication shows room for further optimisations.}
    }
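The abstraction described in this entry can be made concrete with a minimal sketch of the same idea: robot simulation stripped down to 2D poses driven towards navigation goals under a purely simulated clock. This is an illustrative toy, not the actual `move_base_abstract` package; all class and function names below are hypothetical.

```python
import math

class AbstractRobot:
    """A robot reduced to a 2D pose and a navigation goal: no physics, no sensors."""

    def __init__(self, x, y, speed=1.0):
        self.x, self.y = x, y
        self.speed = speed  # metres per simulated second
        self.goal = None    # (x, y) target, or None when idle

    def step(self, dt):
        """Move in a straight line towards the goal for dt simulated seconds."""
        if self.goal is None:
            return
        gx, gy = self.goal
        dx, dy = gx - self.x, gy - self.y
        dist = math.hypot(dx, dy)
        if dist <= self.speed * dt:      # close enough to snap to the goal
            self.x, self.y = gx, gy
            self.goal = None
        else:
            self.x += dx / dist * self.speed * dt
            self.y += dy / dist * self.speed * dt

def simulate(robots, dt=0.1):
    """Advance simulated time (decoupled from wall-clock time) until every
    robot has reached its goal; return the elapsed simulated seconds."""
    t = 0.0
    while any(r.goal is not None for r in robots):
        for r in robots:
            r.step(dt)
        t += dt
    return t
```

Because the loop is not tied to wall-clock time, such a simulation runs as fast as the host CPU allows, which is the property exploited for faster-than-realtime experimentation.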

  • E. Alabi, F. Camara, and C. Fox, “Evaluation of osmc open source motor driver for reproducible robotics research,” in Taros, 2023.
    [BibTeX] [Abstract] [Download PDF]

    There is a growing need for open source hardware subcomponents to be evaluated. Most robotic systems are ultimately based upon motors which are driven to move either to certain positions, as in robot arms, or to certain velocities, as in wheeled mobile robots. We evaluate a state of the art OSH driver, OSMC, for such systems, and contribute new Open Source Software (OSS) to control it. Our findings suggest that OSMC is now mature enough to replace closed-source motor drivers in medium-size robots such as agri-robots and last mile delivery vehicles.

    @inproceedings{lincoln55397,
    booktitle = {TAROS},
    month = {September},
    title = {Evaluation of OSMC open source motor driver for reproducible robotics research},
    author = {Elijah Alabi and Fanta Camara and Charles Fox},
    year = {2023},
    url = {https://eprints.lincoln.ac.uk/id/eprint/55397/},
    abstract = {There is a growing need for open source hardware subcomponents to be evaluated. Most robotic systems are ultimately based upon motors which are driven to move either to certain positions, as in robot arms, or to certain velocities, as in wheeled mobile robots. We evaluate a state of the art OSH driver, OSMC, for such systems, and contribute new Open Source Software (OSS) to control it. Our findings suggest that OSMC is now mature enough to replace closed-source motor drivers in medium-size robots such as agri-robots and last mile delivery vehicles.}
    }

  • B. Hurst, N. Bellotto, and P. Bosilj, “An assessment of self-supervised learning for data efficient potato instance segmentation,” in Taros, 2023. doi:10.1007/978-3-031-43360-3
    [BibTeX] [Abstract] [Download PDF]

    This work examines the viability of self-supervised learning approaches in the field of agri-robotics, specifically focusing on the segmentation of densely packed potato tubers in storage. The work assesses the impact of both the quantity and quality of data on self-supervised training, employing a limited set of both annotated and unannotated data. Mask R-CNN with a ResNet50 backbone is used for instance segmentation to evaluate self-supervised training performance. The results indicate that the self-supervised methods employed have a modest yet beneficial impact on the downstream task. A simpler approach yields more effective results with a larger dataset, whereas a more intricate method shows superior performance with a refined, smaller self-supervised dataset.

    @inproceedings{lincoln56183,
    booktitle = {TAROS},
    month = {September},
    title = {An assessment of self-supervised learning for data efficient potato instance segmentation},
    author = {Bradley Hurst and Nicola Bellotto and Petra Bosilj},
    publisher = {Springer, Cham},
    year = {2023},
    doi = {10.1007/978-3-031-43360-3},
    url = {https://eprints.lincoln.ac.uk/id/eprint/56183/},
    abstract = {This work examines the viability of self-supervised learning approaches in the field of agri-robotics, specifically focusing on the segmentation of densely packed potato tubers in storage. The work assesses the impact of both the quantity and quality of data on self-supervised training, employing a limited set of both annotated and unannotated data. Mask R-CNN with a ResNet50 backbone is used for instance segmentation to evaluate self-supervised training performance. The results indicate that the self-supervised methods employed have a modest yet beneficial impact on the downstream task. A simpler approach yields more effective results with a larger dataset, whereas a more intricate method shows superior performance with a refined, smaller self-supervised dataset.}
    }

  • M. S. Sofla, S. Vayakkattil, and M. Calisti, “Spatial position estimation of lightweight and delicate objects using a soft haptic probe,” in 2023 ieee international conference on soft robotics (robosoft), 2023, p. 1–6. doi:10.1109/RoboSoft55895.2023.10122004
    [BibTeX] [Abstract] [Download PDF]

    This paper reports on the use of a soft probe as a haptic exploratory device with Force/Moment (F/M) readings at its base to determine the position of extremely lightweight and delicate objects. The proposed method uses the mathematical relationships between the deformations of the soft probe and the F/M sensor outputs to reconstruct the shape of the probe and the position of the touched object. The Cosserat rod theory was utilized under the assumption that only one contact point occurs during the exploration and friction effects are negligible. Soft probes of different sizes were designed and fabricated using a Form3 3D printer and Elastic50A resin, for which the effect of gravity is not negligible. Experimental results verified the performance of the proposed method, which achieved a position error between -0.7 and 13 mm while different external forces (between 0.01N and 1.5N) were applied along the soft probes to resemble the condition of touching lightweight objects. Finally, the method is used to estimate the position of some points in a delicate card house structure.

    @inproceedings{lincoln56196,
    month = {May},
    author = {Mohammad Sheikh Sofla and Srikishan Vayakkattil and Marcello Calisti},
    booktitle = {2023 IEEE International Conference on Soft Robotics (RoboSoft)},
    title = {Spatial Position Estimation of Lightweight and Delicate Objects using a Soft Haptic Probe},
    publisher = {IEEE},
    doi = {10.1109/RoboSoft55895.2023.10122004},
    pages = {1--6},
    year = {2023},
    url = {https://eprints.lincoln.ac.uk/id/eprint/56196/},
    abstract = {This paper reports on the use of a soft probe as a haptic exploratory device with Force/Moment (F/M) readings at its base to determine the position of extremely lightweight and delicate objects. The proposed method uses the mathematical relationships between the deformations of the soft probe and the F/M sensor outputs to reconstruct the shape of the probe and the position of the touched object. The Cosserat rod theory was utilized under the assumption that only one contact point occurs during the exploration and friction effects are negligible. Soft probes of different sizes were designed and fabricated using a Form3 3D printer and Elastic50A resin, for which the effect of gravity is not negligible. Experimental results verified the performance of the proposed method, which achieved a position error between -0.7 and 13 mm while different external forces (between 0.01N and 1.5N) were applied along the soft probes to resemble the condition of touching lightweight objects. Finally, the method is used to estimate the position of some points in a delicate card house structure.}
    }

  • R. Ravikanna, J. Heselden, M. A. Khan, A. Perrett, Z. Zhu, G. Das, and M. Hanheide, “Smart parking system using heuristic optimization for autonomous transportation robots in agriculture,” in The 23rd towards autonomous robotic systems (taros) conference, 2023. doi:10.1007/978-3-031-43360-3_4
    [BibTeX] [Abstract] [Download PDF]

    This paper formulates a heuristic assignment algorithm for assigning parking spaces to autonomous transportation robots in a polytunnel or parallel aisle-based environment. The algorithm is named Smart Parking and is implemented and tested for performance in Python-based simulation software. It is also integrated into a robot controller called RASberry, which is itself a state-of-the-art research project funded by UKRI to manage a fully automated strawberry farm. Thorvald by Saga Robotics is the robot used for autonomous transportation in the RASberry project and in the real-world experiments in this paper. A set of real-world experiments is also performed via the RASberry – Thorvald system to observe and analyse the performance of Smart Parking. It has been validated through graphical trend lines and statistical testing that Smart Parking outperforms Standard Parking in terms of mechanical conservation and task completion time.

    @inproceedings{lincoln57084,
    booktitle = {The 23rd Towards Autonomous Robotic Systems (TAROS) Conference},
    month = {September},
    title = {Smart Parking System Using Heuristic Optimization For Autonomous Transportation Robots In Agriculture},
    author = {Roopika Ravikanna and James Heselden and Muhammad Arshad Khan and Andrew Perrett and Zuyuan Zhu and Gautham Das and Marc Hanheide},
    publisher = {Springer, Cham},
    year = {2023},
    doi = {10.1007/978-3-031-43360-3\_4},
    url = {https://eprints.lincoln.ac.uk/id/eprint/57084/},
    abstract = {This paper formulates a heuristic assignment algorithm for assigning parking spaces to autonomous transportation robots in a polytunnel or parallel aisle-based environment. The algorithm is named Smart Parking and is implemented and tested for performance in Python-based simulation software. It is also integrated into a robot controller called RASberry, which is itself a state-of-the-art research project funded by UKRI to manage a fully automated strawberry farm. Thorvald by Saga Robotics is the robot used for autonomous transportation in the RASberry project and in the real-world experiments in this paper. A set of real-world experiments is also performed via the RASberry - Thorvald system to observe and analyse the performance of Smart Parking. It has been validated through graphical trend lines and statistical testing that Smart Parking outperforms Standard Parking in terms of mechanical conservation and task completion time.}
    }
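As a rough illustration of heuristic parking-space assignment of the kind this entry evaluates, the toy below greedily pairs each robot with its nearest free space along a one-dimensional aisle axis. This is a hypothetical simplification of ours, not the paper's Smart Parking algorithm.

```python
def assign_parking(robots, spaces):
    """Greedily assign each robot the nearest free parking space.

    robots: {robot_name: position}, spaces: {space_name: position},
    with positions along a single aisle axis (e.g. metres from the tunnel head).
    """
    # Enumerate all robot/space pairs, cheapest (shortest travel) first.
    pairs = sorted(
        (abs(r_pos - s_pos), robot, space)
        for robot, r_pos in robots.items()
        for space, s_pos in spaces.items()
    )
    assignment, free = {}, set(spaces)
    for _, robot, space in pairs:
        if robot not in assignment and space in free:
            assignment[robot] = space
            free.discard(space)
    return assignment
```

A heuristic like this trades optimality for speed: it avoids solving a full assignment problem while keeping total travel (and hence mechanical wear) low.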

  • M. Al-Khafajiy, G. Al-Tameemi, and T. Baker, “Ddos-focus: a distributed dos attacks mitigation using deep learning approach for a secure iot network,” in 2023 ieee international conference on edge computing and communications (edge), 2023, p. 393–399. doi:10.1109/EDGE60047.2023.00062
    [BibTeX] [Abstract] [Download PDF]

    The fast growth of Internet of Things devices and communication protocols poses equal opportunities for lifestyle-boosting services and pools for cyber attacks. Usually, IoT network attackers gain access to a large number of IoT devices (e.g., things and fog nodes) by exploiting their vulnerabilities to set up attack armies, then attacking other devices/nodes in the IoT network. Distributed Denial of Service (DDoS) flooding attacks are prominent attacks on IoT. DDoS concerns security professionals due to its nature in forming sophisticated attacks that can be bandwidth-busting. DDoS can cause unplanned IoT-service outages, hence requiring prompt and efficient DDoS mitigation. In this paper, we propose DDoS-FOCUS, a solution to mitigate DDoS attacks on fog nodes. The solution encompasses a machine learning model implanted at fog nodes to detect DDoS attackers. A hybrid deep learning model was developed using a Convolutional Neural Network and Bidirectional LSTM (CNN-BiLSTM) to mitigate future DDoS attacks. A preliminary test of the proposed model produced an accuracy of 99.8\% in detecting DDoS attacks.

    @inproceedings{lincoln56609,
    month = {September},
    author = {Mohammed Al-Khafajiy and Ghaith Al-Tameemi and Thar Baker},
    booktitle = {2023 IEEE International Conference on Edge Computing and Communications (EDGE)},
    title = {DDoS-FOCUS: A Distributed DoS Attacks Mitigation using Deep Learning Approach for a Secure IoT Network},
    publisher = {IEEE},
    doi = {10.1109/EDGE60047.2023.00062},
    pages = {393--399},
    year = {2023},
    url = {https://eprints.lincoln.ac.uk/id/eprint/56609/},
    abstract = {The fast growth of Internet of Things devices and communication protocols poses equal opportunities for lifestyle-boosting services and pools for cyber attacks. Usually, IoT network attackers gain access to a large number of IoT devices (e.g., things and fog nodes) by exploiting their vulnerabilities to set up attack armies, then attacking other devices/nodes in the IoT network. Distributed Denial of Service (DDoS) flooding attacks are prominent attacks on IoT. DDoS concerns security professionals due to its nature in forming sophisticated attacks that can be bandwidth-busting. DDoS can cause unplanned IoT-service outages, hence requiring prompt and efficient DDoS mitigation. In this paper, we propose DDoS-FOCUS, a solution to mitigate DDoS attacks on fog nodes. The solution encompasses a machine learning model implanted at fog nodes to detect DDoS attackers. A hybrid deep learning model was developed using a Convolutional Neural Network and Bidirectional LSTM (CNN-BiLSTM) to mitigate future DDoS attacks. A preliminary test of the proposed model produced an accuracy of 99.8\% in detecting DDoS attacks.}
    }

  • S. Mghames, L. Castri, M. Hanheide, and N. Bellotto, “A neuro-symbolic approach for enhanced human motion prediction,” in International joint conference on neural networks (ijcnn), 2023. doi:10.1109/IJCNN54540.2023.10191970
    [BibTeX] [Abstract] [Download PDF]

    Reasoning on the context of human beings is crucial for many real-world applications, especially for those deploying autonomous systems (e.g. robots). In this paper, we present a new approach for context reasoning to further advance the field of human motion prediction. We propose a neuro-symbolic approach for human motion prediction (NeuroSyM), which weights the interactions in the neighbourhood differently by leveraging an intuitive technique for spatial representation called Qualitative Trajectory Calculus (QTC). The proposed approach is experimentally tested on medium- and long-term time horizons using two architectures from the state of the art, one of which is a baseline for human motion prediction and the other a baseline for generic multivariate time-series prediction. Six datasets of challenging crowded scenarios, collected from both fixed and mobile cameras, were used for testing. Experimental results show that the NeuroSyM approach outperforms the baseline architectures in most cases in terms of prediction accuracy.

    @inproceedings{lincoln54568,
    booktitle = {International Joint Conference on Neural Networks (IJCNN)},
    month = {August},
    title = {A Neuro-Symbolic Approach for Enhanced Human Motion Prediction},
    author = {Sariah Mghames and Luca Castri and Marc Hanheide and Nicola Bellotto},
    publisher = {IEEE Xplore},
    year = {2023},
    doi = {10.1109/IJCNN54540.2023.10191970},
    url = {https://eprints.lincoln.ac.uk/id/eprint/54568/},
    abstract = {Reasoning on the context of human beings is crucial for many real-world applications, especially for those deploying autonomous systems (e.g. robots). In this paper, we present a new approach for context reasoning to further advance the field of human motion prediction. We propose a neuro-symbolic approach for human motion prediction (NeuroSyM), which weights the interactions in the neighbourhood differently by leveraging an intuitive technique for spatial representation called Qualitative Trajectory Calculus (QTC).
    The proposed approach is experimentally tested on medium- and long-term time horizons using two architectures from the state of the art, one of which is a baseline for human motion prediction and the other a baseline for generic multivariate time-series prediction. Six datasets of challenging crowded scenarios, collected from both fixed and mobile cameras, were used for testing. Experimental results show that the NeuroSyM approach outperforms the baseline architectures in most cases in terms of prediction accuracy.}
    }
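The Qualitative Trajectory Calculus used in this entry can be sketched in a few lines: in its basic (QTC-B) form, each of two agents receives a symbol saying whether it moves towards ('-'), away from ('+'), or keeps its distance to ('0') the other agent's previous position. A minimal, hypothetical implementation (function names are ours, not from the paper):

```python
import math

def qtc_symbol(p_prev, p_curr, q_prev, tol=1e-9):
    """'-' if the agent moved towards the other agent's previous position,
    '+' if it moved away, '0' if the distance is (numerically) unchanged."""
    d_before = math.dist(p_prev, q_prev)
    d_after = math.dist(p_curr, q_prev)
    if d_after < d_before - tol:
        return '-'
    if d_after > d_before + tol:
        return '+'
    return '0'

def qtc_b(p_prev, p_curr, q_prev, q_curr):
    """Pairwise QTC-B state: (symbol of agent P w.r.t. Q, symbol of Q w.r.t. P)."""
    return (qtc_symbol(p_prev, p_curr, q_prev),
            qtc_symbol(q_prev, q_curr, p_prev))
```

For example, a pedestrian at (0, 0) stepping to (1, 0) while another stands still at (5, 0) yields ('-', '0'): one approaching, one stationary. Such discrete symbols are what a neuro-symbolic model can use to weight pairwise interactions.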

  • Z. Zhu, G. Das, and M. Hanheide, “On optimising topology of agricultural fields for efficient robotic fleet deployment,” in The 18th international conference on intelligent autonomous system 2023 (ias18 – 2023), 2023.
    [BibTeX] [Abstract] [Download PDF]

    Field-deployed robotic fleets can provide solutions that improve operational efficiency, control operational costs, and provide farmers with transparency over the day-to-day operations with scouting operations. The topology of agricultural farms such as polytunnels provides a basic environmental configuration that can be exploited to create a topological map to aid operational planning and robot navigation. However, these environments are optimised for operations by humans or for large farming vehicles and pose a major challenge for multiple moving robots to coordinate their navigation while performing tasks. The farm environment without any topological modifications for supporting robotic fleet deployments can cause traffic bottlenecks, eventually affecting the overall efficiency of the fleet. In this work, we propose a Genetic Algorithm-based Topological Optimisation (GATO) algorithm that discretises the search space of topological modifications into finite integer combinations. Each solution is encoded as an integer vector that contains the location information of the topology modification. The algorithm is evaluated in a discrete event simulation of the picking and in-field logistics process in a commercial strawberry farm and the results validate the effectiveness of our algorithm in identifying the topological modifications that improve the efficiency of the robotic fleet operations. Keywords: robot traffic planning, multi-robot systems, agri-robotics, topological optimisation, discrete event simulation, genetic algorithm

    @inproceedings{lincoln55464,
    booktitle = {The 18th international conference on Intelligent Autonomous System 2023 (IAS18 – 2023)},
    month = {July},
    title = {On Optimising Topology of Agricultural Fields for Efficient Robotic Fleet Deployment},
    author = {Zuyuan Zhu and Gautham Das and Marc Hanheide},
    publisher = {The 18th international conference on Intelligent Autonomous System 2023 (IAS18 – 2023)},
    year = {2023},
    url = {https://eprints.lincoln.ac.uk/id/eprint/55464/},
    abstract = {Field-deployed robotic fleets can provide solutions that improve operational efficiency, control operational costs, and provide farmers with transparency over the day-to-day operations with scouting operations. The topology of agricultural farms such as polytunnels provides a basic environmental configuration that can be exploited to create a topological map to aid operational planning and robot navigation. However, these environments are optimised for operations by humans or for large farming vehicles and pose a major challenge for multiple moving robots to coordinate their navigation while performing tasks. The farm environment without any topological modifications for supporting robotic fleet deployments can cause traffic bottlenecks, eventually affecting the overall efficiency of the fleet. In this work, we propose a Genetic Algorithm-based Topological Optimisation (GATO) algorithm that discretises the search space of topological modifications into finite integer combinations. Each solution is encoded as an integer vector that contains the location information of the topology modification. The algorithm is evaluated in a discrete event simulation of the picking and in-field logistics process in a commercial strawberry farm and the results validate the effectiveness of our algorithm in identifying the topological modifications that improve the efficiency of the robotic fleet operations.
    robot traffic planning, multi-robot systems, agri-robotics, topological optimisation, discrete event simulation, genetic algorithm}
    }
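The integer-vector GA encoding this entry describes can be sketched as follows. In the paper, the fitness of a candidate modification vector comes from a discrete event simulation of the fleet; here a toy fitness stands in, and every name and parameter value is a hypothetical choice of ours.

```python
import random

def genetic_search(fitness, length, values, pop_size=30, generations=60, seed=0):
    """Minimal GA over integer vectors: binary tournament selection,
    one-point crossover, and single-gene mutation. Maximises `fitness`."""
    rng = random.Random(seed)
    pop = [[rng.choice(values) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():  # binary tournament: better of two random individuals
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, length)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.2:                  # occasional point mutation
                child[rng.randrange(length)] = rng.choice(values)
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)
```

With `fitness=sum` over binary genes this reduces to the classic OneMax toy problem; in GATO the vector would instead encode where the topology is modified, scored by simulated fleet efficiency.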

  • I. Hroob, S. M. Mellado, R. Polvara, G. Cielniak, and M. Hanheide, “S-net: end-to-end unsupervised learning of long-term 3d stable objects,” in 18th international conference on intelligent autonomous systems, 2023.
    [BibTeX] [Abstract] [Download PDF]

    In this research, we present an end-to-end data-driven pipeline for determining the long-term stability status of objects within a given environment, specifically distinguishing between static and dynamic objects. Understanding object stability is key for mobile robots since long-term stable objects can be exploited as landmarks for long-term localisation. Our pipeline includes a labelling method that utilizes historical data from the environment to generate training data for a neural network. Rather than utilizing discrete labels, we propose the use of point-wise continuous label values, indicating the spatio-temporal stability of individual points, to train a point cloud regression network named S-NET. Our approach is evaluated on point cloud data from two parking lots in the NCLT dataset, and the results show that our proposed solution outperforms direct training of a classification model for static vs dynamic object classification.

    @inproceedings{lincoln54690,
    booktitle = {18th International Conference on Intelligent Autonomous Systems},
    month = {July},
    title = {S-NET: End-to-end Unsupervised Learning of Long-Term 3D Stable objects},
    author = {Ibrahim Hroob and Sergio Molina Mellado and Riccardo Polvara and Grzegorz Cielniak and Marc Hanheide},
    year = {2023},
    url = {https://eprints.lincoln.ac.uk/id/eprint/54690/},
    abstract = {In this research, we present an end-to-end data-driven pipeline for determining the long-term stability status of objects within a given environment, specifically distinguishing between static and dynamic objects. Understanding object stability is key for mobile robots since long-term stable objects can be exploited as landmarks for long-term localisation. Our pipeline includes a labelling method that utilizes historical data from the environment to generate training data for a neural network. Rather than utilizing discrete labels, we propose the use of point-wise continuous label values, indicating the spatio-temporal stability of individual points, to train a point cloud regression network named S-NET. Our approach is evaluated on point cloud data from two parking lots in the NCLT dataset, and the results show that our proposed solution outperforms direct training of a classification model for static vs dynamic object classification.}
    }

  • K. Seemakurthy, P. Bosilj, E. Aptoula, and C. Fox, “Domain generalised fully convolutional one stage detection,” in International conference on robotics and automation (icra), 2023, p. 7002–7009. doi:10.1109/ICRA48891.2023.10160937
    [BibTeX] [Abstract] [Download PDF]

    Real-time vision in robotics plays an important role in localising and recognising objects. Recently, deep learning approaches have been widely used in robotic vision. However, most of these approaches have assumed that training and test sets come from similar data distributions, which is not valid in many real world applications. This study proposes an approach to address domain generalisation (i.e. out-of-distribution generalisation, OODG) where the goal is to train a model via one or more source domains that will generalise well to unknown target domains using single stage detectors. All existing approaches which deal with OODG either use slow two stage detectors or operate under the covariate shift assumption, which may not be useful for real-time robotics. This is the first paper to address domain generalisation in the context of the single stage anchor-free object detector FCOS without the covariate shift assumption. We focus on improving the generalisation ability of object detection by proposing new regularisation terms to address the domain shift that arises due to both classification and bounding box regression. Also, we include an additional consistency regularisation term to align the local and global level predictions. The proposed approach is implemented as Domain Generalised Fully Convolutional One Stage (DGFCOS) detection and evaluated using four object detection datasets which provide domain metadata (GWHD, Cityscapes, BDD100K, Sim10K), where it exhibits a consistent performance improvement over the baselines and is able to run in real-time for robotics.

    @inproceedings{lincoln53780,
    month = {July},
    author = {Karthik Seemakurthy and Petra Bosilj and Erchan Aptoula and Charles Fox},
    booktitle = {International Conference on Robotics and Automation (ICRA)},
    title = {Domain Generalised Fully Convolutional One Stage Detection},
    publisher = {IEEE},
    doi = {10.1109/ICRA48891.2023.10160937},
    pages = {7002--7009},
    year = {2023},
    url = {https://eprints.lincoln.ac.uk/id/eprint/53780/},
    abstract = {Real-time vision in robotics plays an important role in localising and recognising objects. Recently, deep learning approaches have been widely used in robotic vision. However, most of these approaches have assumed that training and test sets come from similar data distributions, which is not valid in many real world applications. This study proposes an approach to address domain generalisation (i.e. out-of-distribution generalisation, OODG) where the goal is to train a model via one or more source domains that will generalise well to unknown target domains using single stage detectors. All existing approaches which deal with OODG either use slow two stage detectors or operate under the covariate shift assumption, which may not be useful for real-time robotics. This is the first paper to address domain generalisation in the context of the single stage anchor-free object detector FCOS without the covariate shift assumption. We focus on improving the generalisation ability of object detection by proposing new regularisation terms to address the domain shift that arises due to both classification and bounding box regression. Also, we include an additional consistency regularisation term to align the local and global level predictions. The proposed approach is implemented as Domain Generalised Fully Convolutional One Stage (DGFCOS) detection and evaluated using four object detection datasets which provide domain metadata (GWHD, Cityscapes, BDD100K, Sim10K), where it exhibits a consistent performance improvement over the baselines and is able to run in real-time for robotics.}
    }

  • Z. Zhu, G. Das, and M. Hanheide, “Autonomous topological optimisation for multi-robot systems in logistics,” in The 38th acm/sigapp symposium on applied computing, 2023, p. 791–799. doi:10.1145/3555776.3577666
    [BibTeX] [Abstract] [Download PDF]

    Multi-robot systems (MRS) are currently being introduced in many in-field logistics operations in large environments such as warehouses and commercial soft-fruit production. Collision avoidance is a critical problem in MRS as it may introduce deadlocks during motion planning. In this work, a discretised topological map representation is used for low-cost route planning of individual robots as well as to easily switch the navigation actions depending on the constraints in the environment. However, this topological map could also have bottlenecks, which lead to deadlocks and low transportation efficiency when used for an MRS. In this paper, we propose a resource-container-based Request-Release-Interrupt (RRI) algorithm that constrains each topological node to a capacity of one entity and therefore helps to avoid collisions and detect deadlocks. Furthermore, we integrate a Genetic Algorithm (GA) with Discrete Event Simulation (DES) for optimising the topological map to reduce deadlocks and improve transportation efficiency in logistics tasks. Performance analysis of the proposed algorithms is conducted after running a set of simulations with multiple robots and different maps. The results validate the effectiveness of our algorithms.

    @inproceedings{lincoln53246,
    month = {June},
    author = {Zuyuan Zhu and Gautham Das and Marc Hanheide},
    booktitle = {The 38th ACM/SIGAPP Symposium On Applied Computing},
    title = {Autonomous Topological Optimisation for Multi-robot Systems in Logistics},
    publisher = {ACM},
    doi = {10.1145/3555776.3577666},
    pages = {791--799},
    year = {2023},
    url = {https://eprints.lincoln.ac.uk/id/eprint/53246/},
    abstract = {Multi-robot systems (MRS) are currently being introduced in many in-field logistics operations in large environments such as warehouses and commercial soft-fruit production. Collision avoidance is a critical problem in MRS as it may introduce deadlocks during motion planning. In this work, a discretised topological map representation is used for low-cost route planning of individual robots as well as to easily switch the navigation actions depending on the constraints in the environment. However, this topological map could also have bottlenecks, which lead to deadlocks and low transportation efficiency when used for an MRS. In this paper, we propose a resource-container-based Request-Release-Interrupt (RRI) algorithm that constrains each topological node to a capacity of one entity and therefore helps to avoid collisions and detect deadlocks. Furthermore, we integrate a Genetic Algorithm (GA) with Discrete Event Simulation (DES) for optimising the topological map to reduce deadlocks and improve transportation efficiency in logistics tasks. Performance analysis of the proposed algorithms is conducted after running a set of simulations with multiple robots and different maps. The results validate the effectiveness of our algorithms.}
    }

  • F. D. Duchetto, A. Kucukyilmaz, and M. Hanheide, “In-the-wild failures in a long-term hri deployment,” in Workshop on robot execution failures and failure management strategies at ieee icra 2023, 2023.
    [BibTeX] [Abstract] [Download PDF]

    Failures are typical in robotics deployments “in-the-wild”, especially when robots perform their functions within social human spaces. This paper reports on the failures of an autonomous social robot called Lindsey, which has been used in a public museum for several years, covering over 1300 kilometres through its deployment. We present an analysis of distinctive failures observed during the deployment, focusing on those cases where the robot can leverage human help to resolve the problem situation. A final discussion outlines future research directions needed to ensure robots are equipped with adequate resources to detect and appropriately deal with failures requiring a human-in-the-loop approach.

    @inproceedings{lincoln54842,
    booktitle = {Workshop on Robot Execution Failures and Failure Management Strategies at IEEE ICRA 2023},
    month = {June},
    title = {In-the-Wild Failures in a Long-Term HRI Deployment},
    author = {Francesco Del Duchetto and Ayse Kucukyilmaz and Marc Hanheide},
    year = {2023},
    url = {https://eprints.lincoln.ac.uk/id/eprint/54842/},
    abstract = {Failures are typical in robotics deployments ``in-the-wild'', especially when robots perform their functions within social human spaces. This paper reports on the failures of an autonomous social robot called Lindsey, which has been used in a public museum for several years, covering over 1300 kilometres through its deployment. We present an analysis of distinctive failures observed during the deployment, focusing on those cases where the robot can leverage human help to resolve the problem situation. A final discussion outlines future research directions needed to ensure robots are equipped with adequate resources to detect and appropriately deal with failures requiring a human-in-the-loop approach.}
    }

  • S. McCall, M. Yu, L. Gong, S. Yue, and S. Kollias, “Enhancing fall detection accuracy with a transfer learning-aided transformer model using computer vision,” in Icmlhm 2023 : international conference on machine learning for healthcare and medicine, 2023.
    [BibTeX] [Abstract] [Download PDF]

    Falls are a significant health concern for older adults globally, and prompt identification is critical to providing necessary healthcare support. Our study proposes a new fall detection method using computer vision based on modern deep learning techniques. Our approach involves training a transformer model on a large 2D pose dataset for general action recognition, followed by transfer learning. Specifically, we freeze the first few layers of the trained transformer model and train only the last two layers for fall detection. Our experimental results demonstrate that our proposed method outperforms both classical machine learning and deep learning approaches in fall/non-fall classification. Overall, our study suggests that our proposed methodology could be a valuable tool for identifying falls.

    @inproceedings{lincoln56964,
    booktitle = {ICMLHM 2023 : International Conference on Machine Learning for Healthcare and Medicine},
    month = {December},
    title = {Enhancing Fall Detection Accuracy with a Transfer Learning-Aided Transformer Model using Computer Vision},
    author = {Sheldon McCall and Miao Yu and Liyun Gong and Shigang Yue and Stefanos Kollias},
    publisher = {World Academy of Science, Engineering and Technology},
    year = {2023},
    url = {https://eprints.lincoln.ac.uk/id/eprint/56964/},
    abstract = {Falls are a significant health concern for older adults globally, and prompt identification is critical to providing necessary healthcare support. Our study proposes a new fall detection method using computer vision based on modern deep learning techniques. Our approach involves training a transformer model on a large 2D pose dataset for general action recognition, followed by transfer learning. Specifically, we freeze the first few layers of the trained transformer model and train only the last two layers for fall detection. Our experimental results demonstrate that our proposed method outperforms both classical machine learning and deep learning approaches in fall/non-fall classification. Overall, our study suggests that our proposed methodology could be a valuable tool for identifying falls.}
    }

  • R. D. Silva, G. Cielniak, and J. Gao, “Leaving the lines behind: vision-based crop row exit for agricultural robot navigation,” in Icra2023 workshop on tig-iv: agri-food robotics from farm to fork, 2023. doi:10.48550/arXiv.2306.05869
    [BibTeX] [Abstract] [Download PDF]

    Usage of purely vision based solutions for row switching is not well explored in existing vision based crop row navigation frameworks. This method only uses RGB images for local feature matching based visual feedback to exit the crop row. Depth images were used at the crop row end to estimate the navigation distance within the headland. The algorithm was tested on diverse headland areas with soil and vegetation. The proposed method could reach the end of the crop row and then navigate into the headland, completely leaving the crop row behind, with an error margin of 50 cm.

    @inproceedings{lincoln55044,
    booktitle = {ICRA2023 Workshop on TIG-IV: Agri-food Robotics From Farm to Fork},
    month = {May},
    title = {Leaving the Lines Behind: Vision-Based Crop Row Exit for Agricultural Robot Navigation},
    author = {Rajitha De Silva and Grzegorz Cielniak and Junfeng Gao},
    year = {2023},
    doi = {10.48550/arXiv.2306.05869},
    note = {Best Paper Award at TIG-IV workshop at ICRA 2023},
    url = {https://eprints.lincoln.ac.uk/id/eprint/55044/},
    abstract = {Usage of purely vision based solutions for row switching is not well explored in existing vision based crop row navigation frameworks. This method only uses RGB images for local feature matching based visual feedback to exit the crop row. Depth images were used at the crop row end to estimate the navigation distance within the headland. The algorithm was tested on diverse headland areas with soil and vegetation. The proposed method could reach the end of the crop row and then navigate into the headland, completely leaving the crop row behind, with an error margin of 50 cm.}
    }

  • K. Heiwolt, C. Öztireli, and G. Cielniak, “Statistical shape representations for temporal registration of plant components in 3d,” in International conference on robotics and automation 2023, 2023.
    [BibTeX] [Abstract] [Download PDF]

    Plants are dynamic organisms and understanding temporal variations in vegetation is an essential problem for robots in the wild. However, associating repeated 3D scans of plants across time is challenging. A key step in this process is re-identifying and tracking the same individual plant components over time. Previously, this has been achieved by comparing their global spatial or topological location. In this work, we demonstrate how using shape features improves temporal organ matching. We present a landmark-free shape compression algorithm, which allows for the extraction of 3D shape features of leaves, characterises leaf shape and curvature efficiently in few parameters, and makes the association of individual leaves in feature space possible. The approach combines 3D contour extraction and further compression using Principal Component Analysis (PCA) to produce a shape space encoding, which is entirely learned from data and retains information about edge contours and 3D curvature. Our evaluation on temporal scan sequences of tomato plants shows that incorporating shape features improves temporal leaf-matching. A combination of shape, location, and rotation information proves most informative for recognition of leaves over time and yields a true positive rate of 75%, a 15% improvement on state-of-the-art methods. This is essential for robotic crop monitoring, which enables whole-of-lifecycle phenotyping.

    @inproceedings{lincoln55292,
    booktitle = {International Conference on Robotics and Automation 2023},
    month = {May},
    title = {Statistical Shape Representations for Temporal Registration of Plant Components in 3D},
    author = {Karoline Heiwolt and Cengiz {\"O}ztireli and Grzegorz Cielniak},
    publisher = {Infovaya},
    year = {2023},
    url = {https://eprints.lincoln.ac.uk/id/eprint/55292/},
    abstract = {Plants are dynamic organisms and understanding temporal variations in vegetation is an essential problem for robots in the wild. However, associating repeated 3D scans of plants across time is challenging. A key step in this process is re-identifying and tracking the same individual plant components over time. Previously, this has been achieved by comparing their global spatial or topological location. In this work, we demonstrate how using shape features improves temporal organ matching. We present a landmark-free shape compression algorithm, which allows for the extraction of 3D shape features of leaves, characterises leaf shape and curvature efficiently in few parameters, and makes the association of individual leaves in feature space possible. The approach combines 3D contour extraction and further compression using Principal Component Analysis (PCA) to produce a shape space encoding, which is entirely learned from data and retains information about edge contours and 3D curvature. Our evaluation on temporal scan sequences of tomato plants shows that incorporating shape features improves temporal leaf-matching. A combination of shape, location, and rotation information proves most informative for recognition of leaves over time and yields a true positive rate of 75\%, a 15\% improvement on state-of-the-art methods. This is essential for robotic crop monitoring, which enables whole-of-lifecycle phenotyping.}
    }

  • A. Astolfi and M. Calisti, “Articulated legs allow energy optimization across different speeds for legged robots with elastically suspended loads,” in 2023 ieee international conference on soft robotics (robosoft), 2023, p. 1–7. doi:10.1109/RoboSoft55895.2023.10121949
    [BibTeX] [Abstract] [Download PDF]

    Legged robots are a promising technology whose use is limited by their high energy consumption. Biological and biomechanical studies have shown that the vibration generated by elastically suspended masses provides an energy advantage over rigidly carrying the same load. The robotic validation of these findings has only scarcely been explored in the dynamic walking case. In this context, a relationship has emerged between the design parameters and the actuation that generates the optimal gait. Although very relevant, these studies lack a generalizable analysis of different locomotion modes and a possible strategy to obtain optimal locomotion at different speeds. To this end, we propose the use of articulated legs in an extended Spring-Loaded Inverted Pendulum (SLIP) model with an elastically suspended mass. Thanks to this model, we show how stiffness and damping can be modulated through articulated legs by selecting the knee angle at touch-down. Therefore, by choosing different body postures, it is possible to vary the control parameters and reach different energetically optimal speeds. At the same time, this modeling allows the study of the stability of the defined system. The results show how suitable control choices reduce energy expenditure by 16% at the limit cycle at a chosen speed. The demonstrated strategy could be used in the design and control of legged robots where energy consumption would be dynamically optimal and usage time would be significantly increased.

    @inproceedings{lincoln56182,
    month = {May},
    author = {Anna Astolfi and Marcello Calisti},
    booktitle = {2023 IEEE International Conference on Soft Robotics (RoboSoft)},
    title = {Articulated legs allow energy optimization across different speeds for legged robots with elastically suspended loads},
    publisher = {IEEE},
    doi = {10.1109/RoboSoft55895.2023.10121949},
    pages = {1--7},
    year = {2023},
    url = {https://eprints.lincoln.ac.uk/id/eprint/56182/},
    abstract = {Legged robots are a promising technology whose use is limited by their high energy consumption. Biological and biomechanical studies have shown that the vibration generated by elastically suspended masses provides an energy advantage over rigidly carrying the same load. The robotic validation of these findings has only scarcely been explored in the dynamic walking case. In this context, a relationship has emerged between the design parameters and the actuation that generates the optimal gait. Although very relevant, these studies lack a generalizable analysis of different locomotion modes and a possible strategy to obtain optimal locomotion at different speeds. To this end, we propose the use of articulated legs in an extended Spring-Loaded Inverted Pendulum (SLIP) model with an elastically suspended mass. Thanks to this model, we show how stiffness and damping can be modulated through articulated legs by selecting the knee angle at touch-down. Therefore, by choosing different body postures, it is possible to vary the control parameters and reach different energetically optimal speeds. At the same time, this modeling allows the study of the stability of the defined system. The results show how suitable control choices reduce energy expenditure by 16\% at the limit cycle at a chosen speed. The demonstrated strategy could be used in the design and control of legged robots where energy consumption would be dynamically optimal and usage time would be significantly increased.}
    }

  • N. Wagner and G. Cielniak, “Motion-based segmentation utilising oscillatory plant properties,” in Cvppa @ iccv 2023, 2023.
    [BibTeX] [Abstract] [Download PDF]

    Modern computer vision technology plays an increasingly important role in agriculture. Automated monitoring of plants for example is an essential task in several applications, such as high-throughput phenotyping or plant health monitoring. Under external influences like wind, plants typically exhibit dynamic behaviours which reveal important characteristics of their structure and condition. These behaviours, however, are typically not considered by state-of-the-art automated phenotyping methods which mostly observe static plant properties. In this paper, we propose an automated system for monitoring oscillatory plant movement from video sequences. We employ harmonic inversion for the purpose of efficiently and accurately estimating the eigenfrequency and damping parameters of individual plant parts. The achieved accuracy is compared against values obtained by performing the Discrete Fourier Transform (DFT), which we use as a baseline. We demonstrate the applicability of this approach on different plants and plant parts, like wheat ears, hanging vines, as well as stems and stalks, which exhibit a range of oscillatory motions. By utilising harmonic inversion, we are able to consistently obtain more accurate values for the eigenfrequencies compared to those obtained by DFT. We are furthermore able to directly estimate values for the damping coefficient, achieving a similar accuracy as via DFT-based methods, but without the additional computational effort required for the latter. With the approach presented in this paper, it is possible to obtain estimates of mechanical plant characteristics in an automated manner, enabling automated acquisition of novel traits for phenotyping.

    @inproceedings{lincoln56159,
    booktitle = {CVPPA @ ICCV 2023},
    title = {Motion-Based Segmentation Utilising Oscillatory Plant Properties},
    author = {Nikolaus Wagner and Grzegorz Cielniak},
    year = {2023},
    url = {https://eprints.lincoln.ac.uk/id/eprint/56159/},
    abstract = {Modern computer vision technology plays an increasingly important role in agriculture. Automated monitoring of plants for example is an essential task in several applications, such as high-throughput phenotyping or plant health monitoring. Under external influences like wind, plants typically exhibit dynamic behaviours which reveal important characteristics of their structure and condition. These behaviours, however, are typically not considered by state-of-the-art automated phenotyping methods which mostly observe static plant properties.
    In this paper, we propose an automated system for monitoring oscillatory plant movement from video sequences. We employ harmonic inversion for the purpose of efficiently and accurately estimating the eigenfrequency and damping parameters of individual plant parts. The achieved accuracy is compared against values obtained by performing the Discrete Fourier Transform (DFT), which we use as a baseline. We demonstrate the applicability of this approach on different plants and plant parts, like wheat ears, hanging vines, as well as stems and stalks, which exhibit a range of oscillatory motions. By utilising harmonic inversion, we are able to consistently obtain more accurate values for the eigenfrequencies compared to those obtained by DFT. We are furthermore able to directly estimate values for the damping coefficient, achieving a similar accuracy as via DFT-based methods, but without the additional computational effort required for the latter. With the approach presented in this paper, it is possible to obtain estimates of mechanical plant characteristics in an automated manner, enabling automated acquisition of novel traits for phenotyping.}
    }

  • H. Howard, S. Wane, L. Mihaylova, D. Rose, P. Ray, L. Manning, and E. Sklar, “Training the uk agri-food sector to employ robotics and autonomous systems,” , Project Report 10.31256/WP2023.5, 2023.
    [BibTeX] [Abstract] [Download PDF]

    Robotics and Autonomous Systems (RAS) in agriculture has become an expanding area of interest for research and innovation, in both industry and academia. Robotic solutions have been demonstrated for a wide range of farming tasks, from planting and weed management to crop monitoring and harvesting – the concept of RAS in agriculture is no longer tomorrow's dream; it is today's reality. However, a number of factors have limited the uptake and deployment of RAS in the agri-food sector, including lack of access to robust digital connectivity, unfavourable cost-benefit relationships for many farms to purchase robotic solutions, often unmet requirements for reliable, trustworthy and user-friendly systems, and the need to upskill and the lack of relevant training for the agri-food workforce, specifically for working farmers and growers.

    @techreport{lincoln57218,
    number = {10.31256/WP2023.5},
    month = {October},
    type = {Project Report},
    title = {Training the UK Agri-food Sector to Employ Robotics and Autonomous Systems},
    author = {H Howard and S Wane and L Mihaylova and DC Rose and P Ray and Louise Manning and Elizabeth Sklar},
    publisher = {EPSRC UK-RAS},
    year = {2023},
    url = {https://eprints.lincoln.ac.uk/id/eprint/57218/},
    abstract = {Robotics and Autonomous Systems (RAS) in agriculture has become an expanding area of interest for research and innovation, in both industry and academia. Robotic solutions have been demonstrated for a wide range of farming tasks, from planting and weed management to crop monitoring and harvesting{--}the concept of RAS in agriculture is no longer tomorrow's dream; it is today's reality. However, a number of factors have limited the uptake and deployment of RAS in the agri-food sector, including lack of access to robust digital connectivity, unfavourable cost-benefit relationships for many farms to purchase robotic solutions, often unmet requirements for reliable, trustworthy and user-friendly systems, and the need to upskill and lack of relevant training for the agri-food workforce, specifically for working farmers and growers.}
    }

2022

  • C. Qi, M. Sandroni, J. C. Westergaard, E. H. R. Sundmark, M. Bagge, E. Alexandersson, and J. Gao, “In-field classification of the asymptomatic biotrophic phase of potato late blight based on deep learning and proximal hyperspectral imaging,” Computers and electronics in agriculture, vol. 205, 2022. doi:10.1016/j.compag.2022.107585
    [BibTeX] [Abstract] [Download PDF]

    Effective detection of potato late blight (PLB) is an essential aspect of potato cultivation. However, it is a challenge to detect late blight in asymptomatic biotrophic phase in fields with conventional imaging approaches because of the lack of visual symptoms in the canopy. Hyperspectral imaging can capture spectral signals from a wide range of wavelengths also outside the visual wavelengths. Here, we propose a deep learning classification architecture for hyperspectral images by combining 2D convolutional neural network (2D-CNN) and 3D-CNN with deep cooperative attention networks (PLB-2D-3D-A). First, 2D-CNN and 3D-CNN are used to extract rich spectral space features, and then the attention mechanism AttentionBlock and SE-ResNet are used to emphasize the salient features in the feature maps and increase the generalization ability of the model. The dataset is built with 15,360 images (64x64x204), cropped from 240 raw images captured in an experimental field with over 20 potato genotypes. The accuracy in the test dataset of 2000 images reached 0.739 in the full band and 0.790 in the specific bands (492 nm, 519 nm, 560 nm, 592 nm, 717 nm and 765 nm). This study shows an encouraging result for classification of the asymptomatic biotrophic phase of PLB disease with deep learning and proximal hyperspectral imaging.

    @article{lincoln52940,
    volume = {205},
    month = {December},
    author = {Chao Qi and Murilo Sandroni and Jesper Cairo Westergaard and Ea H{\o}egh Riis Sundmark and Merethe Bagge and Erik Alexandersson and Junfeng Gao},
    title = {In-field classification of the asymptomatic biotrophic phase of potato late blight based on deep learning and proximal hyperspectral imaging},
    publisher = {Elsevier},
    journal = {Computers and Electronics in Agriculture},
    doi = {10.1016/j.compag.2022.107585},
    year = {2022},
    url = {https://eprints.lincoln.ac.uk/id/eprint/52940/},
    abstract = {Effective detection of potato late blight (PLB) is an essential aspect of potato cultivation. However, it is a challenge to detect late blight in asymptomatic biotrophic phase in fields with conventional imaging approaches because of the lack of visual symptoms in the canopy. Hyperspectral imaging can capture spectral signals from a wide range of wavelengths also outside the visual wavelengths. Here, we propose a deep learning classification architecture for hyperspectral images by combining 2D convolutional neural network (2D-CNN) and 3D-CNN with deep cooperative attention networks (PLB-2D-3D-A). First, 2D-CNN and 3D-CNN are used to extract rich spectral space features, and then the attention mechanism AttentionBlock and SE-ResNet are used to emphasize the salient features in the feature maps and increase the generalization ability of the model. The dataset is built with 15,360 images (64x64x204), cropped from 240 raw images captured in an experimental field with over 20 potato genotypes. The accuracy in the test dataset of 2000 images reached 0.739 in the full band and 0.790 in the specific bands (492 nm, 519 nm, 560 nm, 592 nm, 717 nm and 765 nm). This study shows an encouraging result for classification of the asymptomatic biotrophic phase of PLB disease with deep learning and proximal hyperspectral imaging.}
    }

  • M. Badaoui, P. Buigues, D. Berta, G. Mandana, H. Gu, T. Földes, C. Dickson, V. Hornak, M. Kato, C. Molteni, S. Parsons, and E. Rosta, “Combined free-energy calculation and machine learning methods for understanding ligand unbinding kinetics,” Journal of chemical theory and computation, vol. 18, iss. 4, p. 2543–2555, 2022. doi:10.1021/acs.jctc.1c00924
    [BibTeX] [Abstract] [Download PDF]

    The determination of drug residence times, which define the time an inhibitor is in complex with its target, is a fundamental part of the drug discovery process. Synthesis and experimental measurements of kinetic rate constants are, however, expensive and time-consuming. In this work, we aimed to obtain drug residence times computationally. Furthermore, we propose a novel algorithm to identify molecular design objectives based on ligand unbinding kinetics. We designed an enhanced sampling technique to accurately predict the free energy profiles of the ligand unbinding process, focusing on the free energy barrier for unbinding. Our method first identifies unbinding paths determining a corresponding set of internal coordinates (IC) that form contacts between the protein and the ligand; it then iteratively updates these interactions during a series of biased molecular-dynamics (MD) simulations to reveal the ICs that are important for the whole of the unbinding process. Subsequently, we performed finite temperature string simulations to obtain the free energy barrier for unbinding using the set of ICs as a complex reaction coordinate. Importantly, we also aimed to enable further design of drugs focusing on improved residence times. To this end, we developed a supervised machine learning (ML) approach with inputs from unbiased 'downhill' trajectories initiated near the transition state (TS) ensemble of the string unbinding path. We demonstrate that our ML method can identify key ligand-protein interactions driving the system through the TS. Some of the most important drugs for cancer treatment are kinase inhibitors. One of these kinase targets is Cyclin Dependent Kinase 2 (CDK2), an appealing target for anticancer drug development. Here, we tested our method using two different CDK2 inhibitors for potential further development of these compounds. We compared the free energy barriers obtained from our calculations with those observed in available experimental data. We highlighted important interactions at the distal ends of the ligands that can be targeted for improved residence times. Our method provides a new tool to determine unbinding rates, and to identify key structural features of the inhibitors that can be used as starting points for novel design strategies in drug discovery.

    @article{lincoln49062,
    volume = {18},
    number = {4},
    month = {April},
    author = {Magd Badaoui and Pedro Buigues and Denes Berta and Guarav Mandana and Hankang Gu and Tam{\'a}s F{\"o}ldes and Callum Dickson and Viktor Hornak and Mitsunori Kato and Carla Molteni and Simon Parsons and Edina Rosta},
    title = {Combined Free-Energy Calculation and Machine Learning Methods for Understanding Ligand Unbinding Kinetics},
    publisher = {American Chemical Society},
    year = {2022},
    journal = {Journal of Chemical Theory and Computation},
    doi = {10.1021/acs.jctc.1c00924},
    pages = {2543--2555},
    url = {https://eprints.lincoln.ac.uk/id/eprint/49062/},
    abstract = {The determination of drug residence times, which define the time an inhibitor is in complex with its target, is a fundamental part of the drug discovery process. Synthesis and experimental measurements of kinetic rate constants are, however, expensive and time-consuming. In this work, we aimed to obtain drug residence times computationally. Furthermore, we propose a novel algorithm to identify molecular design objectives based on ligand unbinding kinetics. We designed an enhanced sampling technique to accurately predict the free energy profiles of the ligand unbinding process, focusing on the free energy barrier for unbinding. Our method first identifies unbinding paths determining a corresponding set of internal coordinates (IC) that form contacts between the protein and the ligand; it then iteratively updates these interactions during a series of biased molecular-dynamics (MD) simulations to reveal the ICs that are important for the whole of the unbinding process. Subsequently, we performed finite temperature string simulations to obtain the free energy barrier for unbinding using the set of ICs as a complex reaction coordinate. Importantly, we also aimed to enable further design of drugs focusing on improved residence times. To this end, we developed a supervised machine learning (ML) approach with inputs from unbiased 'downhill' trajectories initiated near the transition state (TS) ensemble of the string unbinding path. We demonstrate that our ML method can identify key ligand-protein interactions driving the system through the TS. Some of the most important drugs for cancer treatment are kinase inhibitors. One of these kinase targets is Cyclin Dependent Kinase 2 (CDK2), an appealing target for anticancer drug development. Here, we tested our method using two different CDK2 inhibitors for potential further development of these compounds. We compared the free energy barriers obtained from our calculations with those observed in available experimental data. We highlighted important interactions at the distal ends of the ligands that can be targeted for improved residence times. Our method provides a new tool to determine unbinding rates, and to identify key structural features of the inhibitors that can be used as starting points for novel design strategies in drug discovery.}
    }

  • S. Latif, H. Cuayahuitl, F. Pervez, F. Shamshad, H. S. Ali, and E. Cambria, “A survey on deep reinforcement learning for audio-based applications,” Artificial intelligence review, 2022. doi:10.1007/s10462-022-10224-2
    [BibTeX] [Abstract] [Download PDF]

    Deep reinforcement learning (DRL) is poised to revolutionise the field of artificial intelligence (AI) by endowing autonomous systems with high levels of understanding of the real world. Currently, deep learning (DL) is enabling DRL to effectively solve various intractable problems in various fields including computer vision, natural language processing, healthcare, robotics, to name a few. Most importantly, DRL algorithms are also being employed in audio signal processing to learn directly from speech, music and other sound signals in order to create audio-based autonomous systems that have many promising applications in the real world. In this article, we conduct a comprehensive survey on the progress of DRL in the audio domain by bringing together research studies across different but related areas in speech and music. We begin with an introduction to the general field of DL and reinforcement learning (RL), then progress to the main DRL methods and their applications in the audio domain. We conclude by presenting important challenges faced by audio-based DRL agents and by highlighting open areas for future research and investigation. The findings of this paper will guide researchers interested in DRL for the audio domain.

    @article{lincoln50054,
    month = {July},
    title = {A survey on deep reinforcement learning for audio-based applications},
    author = {Siddique Latif and Heriberto Cuayahuitl and Farrukh Pervez and Fahad Shamshad and Hafiz Shehbaz Ali and Erik Cambria},
    publisher = {Springer Nature B.V.},
    year = {2022},
    doi = {10.1007/s10462-022-10224-2},
    journal = {Artificial Intelligence Review},
    url = {https://eprints.lincoln.ac.uk/id/eprint/50054/},
    abstract = {Deep reinforcement learning (DRL) is poised to revolutionise the field of artificial intelligence (AI) by endowing autonomous systems with high levels of understanding of the real world. Currently, deep learning (DL) is enabling DRL to effectively solve various intractable problems in various fields including computer vision, natural language processing, healthcare, robotics, to name a few. Most importantly, DRL algorithms are also being employed in audio signal processing to learn directly from speech, music and other sound signals in order to create audio-based autonomous systems that have many promising applications in the real world. In this article, we conduct a comprehensive survey on the progress of DRL in the audio domain by bringing together research studies across different but related areas in speech and music. We begin with an introduction to the general field of DL and reinforcement learning (RL), then progress to the main DRL methods and their applications in the audio domain. We conclude by presenting important challenges faced by audio-based DRL agents and by highlighting open areas for future research and investigation. The findings of this paper will guide researchers interested in DRL for the audio domain.}
    }

  • A. Drake, I. Sassoon, P. Balatsoukas, T. Porat, M. Ashworth, E. Wright, V. Curcin, M. Chapman, N. Kokciyan, M. Sanjay, E. Sklar, and S. Parsons, “The relationship of socio-demographic factors and patient attitudes to connected health technologies: a survey of stroke survivors.,” Health informatics journal, vol. 28, iss. 2, 2022. doi:10.1177/14604582221102373
    [BibTeX] [Abstract] [Download PDF]

    More evidence is needed on technology implementation for remote monitoring and self-management across the various settings relevant to chronic conditions. This paper describes the findings of a survey designed to explore the relevance of socio-demographic factors to attitudes towards connected health technologies in a community of patients. Stroke survivors living in the UK were invited to answer questions about themselves and about their attitudes to a prototype remote monitoring and self-management app developed around their preferences. Eighty (80) responses were received and analysed, with limitations and results presented in full. Socio-demographic factors were not found to be associated with variations in participants' willingness to use the system and attitudes to data sharing. Individuals' levels of interest in relevant technology were suggested as a more important determinant of attitudes. These observations run against the grain of most relevant literature to date, and tend to underline the importance of prioritising patient-centred participatory research in efforts to advance connected health technologies.

    @article{lincoln49926,
    volume = {28},
    number = {2},
    month = {June},
    author = {Archie Drake and Isabel Sassoon and Panos Balatsoukas and Talya Porat and Mark Ashworth and Ellen Wright and Vasa Curcin and Martin Chapman and Nadin Kokciyan and Modgil Sanjay and Elizabeth Sklar and Simon Parsons},
    title = {The relationship of socio-demographic factors and patient attitudes to connected health technologies: a survey of stroke survivors.},
    publisher = {SAGE Publications},
    year = {2022},
    journal = {Health Informatics Journal},
    doi = {10.1177/14604582221102373},
    url = {https://eprints.lincoln.ac.uk/id/eprint/49926/},
    abstract = {More evidence is needed on technology implementation for remote monitoring and self-management across the various settings relevant to chronic conditions. This paper describes the findings of a survey designed to explore the relevance of socio-demographic factors to attitudes towards connected health technologies in a community of patients. Stroke survivors living in the UK were invited to answer questions about themselves and about their attitudes to a prototype remote monitoring and self-management app developed around their preferences. Eighty (80) responses were received and analysed, with limitations and results presented in full. Socio-demographic factors were not found to be associated with variations in participants' willingness to use the system and attitudes to data sharing. Individuals' levels of interest in relevant technology were suggested as a more important determinant of attitudes. These observations run against the grain of most relevant literature to date, and tend to underline the importance of prioritising patient-centred participatory research in efforts to advance connected health technologies.}
    }

  • H. Yahyaoui, Z. Maamar, M. Al-Khafajiy, and H. Al-Hamadi, “Trust-based management in iot federations,” Future generation computer systems, vol. 136, p. 182–192, 2022. doi:10.1016/j.future.2022.06.003
    [BibTeX] [Abstract] [Download PDF]

    This paper presents a trust-based evolutionary game model for managing Internet-of-Things (IoT) federations. The model adopts trust-based payoff to either reward or penalize things based on the behaviors they expose. The model also resorts to monitoring these behaviors to ensure that the share of untrustworthy things in a federation does not hinder the good functioning of trustworthy things in this federation. The trust scores are obtained using direct experience with things and feedback from other things and are integrated into game strategies. These strategies capture the dynamic nature of federations since the population of trustworthy versus untrustworthy things changes over time with the aim of retaining the trustworthy ones. To demonstrate the technical doability of the game strategies along with rewarding/penalizing things, a set of experiments were carried out and results were benchmarked as per the existing literature. The results show a better mitigation of attacks such as bad-mouthing and ballot-stuffing on trustworthy things.

    @article{lincoln49874,
    volume = {136},
    month = {June},
    author = {Hamdi Yahyaoui and Zakaria Maamar and Mohammed Al-Khafajiy and Hamid Al-Hamadi},
    title = {Trust-based management in IoT federations},
    publisher = {Elsevier},
    year = {2022},
    journal = {Future Generation Computer Systems},
    doi = {10.1016/j.future.2022.06.003},
    pages = {182--192},
    url = {https://eprints.lincoln.ac.uk/id/eprint/49874/},
    abstract = {This paper presents a trust-based evolutionary game model for managing Internet-of-Things (IoT) federations. The model adopts trust-based payoff to either reward or penalize things based on the behaviors they expose. The model also resorts to monitoring these behaviors to ensure that the share of untrustworthy things in a federation does not hinder the good functioning of trustworthy things in this federation. The trust scores are obtained using direct experience with things and feedback from other things and are integrated into game strategies. These strategies capture the dynamic nature of federations since the population of trustworthy versus untrustworthy things changes over time with the aim of retaining the trustworthy ones. To demonstrate the technical doability of the game strategies along with rewarding/penalizing things, a set of experiments were carried out and results were benchmarked as per the existing literature. The results show a better mitigation of attacks such as bad-mouthing and ballot-stuffing on trustworthy things.}
    }
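
    The trust aggregation the abstract describes, combining direct experience with feedback from other things, can be illustrated with a minimal sketch. The weighting factor `alpha` and the simple averaging of peer reports are illustrative assumptions, not details taken from the paper:

    ```python
    def trust_score(direct, feedback, alpha=0.7):
        """Combine a thing's directly observed trust with the mean of
        peer-reported feedback; alpha weights first-hand experience."""
        indirect = sum(feedback) / len(feedback) if feedback else direct
        return alpha * direct + (1 - alpha) * indirect

    # A ballot-stuffing attack (inflated peer reports) shifts the combined
    # score less when direct experience carries more weight.
    attacked = trust_score(0.2, [0.9, 0.95, 0.9])
    assert attacked < 0.5
    ```

    Weighting first-hand observations above third-party reports is one simple way to blunt bad-mouthing and ballot-stuffing attacks of the kind the paper evaluates.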

  • F. D. Duchetto and M. Hanheide, “Learning on the job: long-term behavioural adaptation in human-robot interactions,” Ieee robotics and automation letters, vol. 7, iss. 3, p. 6934–6941, 2022. doi:10.1109/LRA.2022.3178807
    [BibTeX] [Abstract] [Download PDF]

    In this work, we propose a framework for allowing autonomous robots deployed for extended periods of time in public spaces to adapt their own behaviour online from user interactions. The robot behaviour planning is embedded in a Reinforcement Learning (RL) framework, where the objective is maximising the level of overall user engagement during the interactions. We use the Upper-Confidence-Bound Value-Iteration (UCBVI) algorithm, which gives a helpful way of managing the exploration-exploitation trade-off for real-time interactions. An engagement model trained end-to-end generates the reward function in real-time during policy execution. We test this approach in a public museum in Lincoln (U.K.), where the robot is deployed as a tour guide for the visitors. Results show that after a couple of months of exploration, the robot policy learned to maintain the engagement of users for longer, with an increase of 22.8\% over the initial static policy in the number of items visited during the tour and a 30\% increase in the probability of completing the tour. This work is a promising step toward behavioural adaptation in long-term scenarios for robotics applications in social settings.

    @article{lincoln49961,
    volume = {7},
    number = {3},
    month = {June},
    author = {Francesco Del Duchetto and Marc Hanheide},
    title = {Learning on the Job: Long-Term Behavioural Adaptation in Human-Robot Interactions},
    publisher = {IEEE},
    year = {2022},
    journal = {IEEE Robotics and Automation Letters},
    doi = {10.1109/LRA.2022.3178807},
    pages = {6934--6941},
    url = {https://eprints.lincoln.ac.uk/id/eprint/49961/},
    abstract = {In this work, we propose a framework for allowing autonomous robots deployed for extended periods of time in public spaces to adapt their own behaviour online from user interactions. The robot behaviour planning is embedded in a Reinforcement Learning (RL) framework, where the objective is maximising the level of overall user engagement during the interactions. We use the Upper-Confidence-Bound Value-Iteration (UCBVI) algorithm, which gives a helpful way of managing the exploration-exploitation trade-off for real-time interactions. An engagement model trained end-to-end generates the reward function in real-time during policy execution. We test this approach in a public museum in Lincoln (U.K.), where the robot is deployed as a tour guide for the visitors. Results show that after a couple of months of exploration, the robot policy learned to maintain the engagement of users for longer, with an increase of 22.8\% over the initial static policy in the number of items visited during the tour and a 30\% increase in the probability of completing the tour. This work is a promising step toward behavioural adaptation in long-term scenarios for robotics applications in social settings.}
    }
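
    The Upper-Confidence-Bound Value-Iteration algorithm the abstract credits with managing the exploration-exploitation trade-off relies on an optimism bonus that shrinks as a state-action pair is visited more often. This sketch assumes the textbook bonus shape, not the authors' exact implementation; all parameter names are illustrative:

    ```python
    import math

    def ucb_bonus(visit_count, horizon, num_states, num_actions,
                  total_episodes, delta=0.1):
        """Optimism bonus in the UCBVI style: grows with the confidence
        level and decays as 1/sqrt(visits), so rarely tried actions
        look temporarily more valuable and get explored."""
        log_term = math.log(num_states * num_actions * horizon
                            * total_episodes / delta)
        return horizon * math.sqrt(log_term / max(visit_count, 1))

    # A rarely tried action receives a much larger bonus than a well
    # explored one, steering the policy toward under-sampled behaviours.
    assert ucb_bonus(1, 10, 5, 3, 100) > ucb_bonus(100, 10, 5, 3, 100)
    ```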

  • S. Raza and H. Cuayahuitl, “A comparison of neural-based visual recognisers for speech activity detection,” International journal of speech technology, 2022. doi:10.1007/s10772-021-09956-3
    [BibTeX] [Abstract] [Download PDF]

    Existing literature on speech activity detection (SAD) highlights different approaches within neural networks but does not provide a comprehensive comparison to these methods. This is important because such neural approaches often require hardware-intensive resources. In this article, we provide a comparative analysis of three different approaches: classification with still images (CNN model), classification based on previous images (CRNN model), and classification of sequences of images (Seq2Seq model). Our experimental results using the Vid-TIMIT dataset show that the CNN model can achieve an accuracy of 97\% whereas the CRNN and Seq2Seq models increase the classification to 99\%. Further experiments show that the CRNN model is almost as accurate as the Seq2Seq model (99.1\% vs. 99.6\% of classification accuracy, respectively) but 57\% faster to train (326 vs. 761 secs. per epoch).

    @article{lincoln49800,
    month = {June},
    title = {A comparison of neural-based visual recognisers for speech activity detection},
    author = {Sajjadali Raza and Heriberto Cuayahuitl},
    publisher = {Springer},
    year = {2022},
    doi = {10.1007/s10772-021-09956-3},
    journal = {International Journal of Speech Technology},
    url = {https://eprints.lincoln.ac.uk/id/eprint/49800/},
    abstract = {Existing literature on speech activity detection (SAD) highlights different approaches within neural networks but does not provide a comprehensive comparison to these methods. This is important because such neural approaches often require hardware-intensive resources. In this article, we provide a comparative analysis of three different approaches: classification with still images (CNN model), classification based on previous images (CRNN model), and classification of sequences of images (Seq2Seq model). Our experimental results using the Vid-TIMIT dataset show that the CNN model can achieve an accuracy of 97\% whereas the CRNN and Seq2Seq models increase the classification to 99\%. Further experiments show that the CRNN model is almost as accurate as the Seq2Seq model (99.1\% vs. 99.6\% of classification accuracy, respectively) but 57\% faster to train (326 vs. 761 secs. per epoch).}
    }

  • X. Li, R. Lloyd, S. Ward, J. Cox, S. Coutts, and C. Fox, “Robotic crop row tracking around weeds using cereal-specific features,” Computers and electronics in agriculture, vol. 197, 2022. doi:10.1016/j.compag.2022.106941
    [BibTeX] [Abstract] [Download PDF]

    Crop row following is especially challenging in narrow row cereal crops, such as wheat. Separation between plants within a row disappears at an early growth stage, and canopy closure between rows, when leaves from different rows start to occlude each other, occurs three to four months after the crop emerges. Canopy closure makes it challenging to identify separate rows through computer vision as clear lanes become obscured. Cereal crops are grass species and so their leaves have a predictable shape and orientation. We introduce an image processing pipeline which exploits grass shape to identify and track rows. The key observation exploited is that leaf orientations tend to be vertical along rows and horizontal between rows due to the location of the stems within the rows. Adaptive mean-shift clustering on Hough line segments is then used to obtain lane centroids, followed by a nearest neighbor data association creating lane line candidates in 2D space. Lane parameters are fit with linear regression and a Kalman filter is used for tracking lanes between frames. The method achieves sub-50 mm accuracy, which is sufficient for placing a typical agri-robot's wheels between real-world, early-growth wheat crop rows to drive between them, as long as the crop is seeded in a wider spacing such as 180 mm row spacing for an 80 mm wheel width robot.

    @article{lincoln49340,
    volume = {197},
    month = {June},
    author = {Xiaodong Li and Rob Lloyd and Sarah Ward and Jonathan Cox and Shaun Coutts and Charles Fox},
    title = {Robotic crop row tracking around weeds using cereal-specific features},
    publisher = {Elsevier},
    journal = {Computers and Electronics in Agriculture},
    doi = {10.1016/j.compag.2022.106941},
    year = {2022},
    url = {https://eprints.lincoln.ac.uk/id/eprint/49340/},
    abstract = {Crop row following is especially challenging in narrow row cereal crops, such as wheat. Separation between plants within a row disappears at an early growth stage, and canopy closure between rows, when leaves from different rows start to occlude each other, occurs three to four months after the crop emerges. Canopy closure makes it challenging to identify separate rows through computer vision as clear lanes become obscured. Cereal crops are grass species and so their leaves have a predictable shape and orientation. We introduce an image processing pipeline which exploits grass shape to identify and track rows. The key observation exploited is that leaf orientations tend to be vertical along rows and horizontal between rows due to the location of the stems within the rows. Adaptive mean-shift clustering on Hough line segments is then used to obtain lane centroids, followed by a nearest neighbor data association creating lane line candidates in 2D space. Lane parameters are fit with linear regression and a Kalman filter is used for tracking lanes between frames. The method achieves sub-50 mm accuracy, which is sufficient for placing a typical agri-robot's wheels between real-world, early-growth wheat crop rows to drive between them, as long as the crop is seeded in a wider spacing such as 180 mm row spacing for an 80 mm wheel width robot.}
    }
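
    The per-frame Kalman tracking of lane parameters mentioned in the abstract reduces, in its simplest scalar form, to a short predict/update step. The noise parameters `q` and `r` below are illustrative assumptions, not values from the paper:

    ```python
    def kalman_update(x, p, z, q=0.01, r=4.0):
        """One predict+update step of a scalar Kalman filter tracking the
        lateral offset (mm) of a crop row between frames.
        x: state estimate, p: estimate variance, z: new measurement,
        q: process noise, r: measurement noise."""
        p = p + q                # predict: process noise inflates uncertainty
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)      # correct toward the measurement
        p = (1 - k) * p
        return x, p

    # Noisy per-frame lane-centre measurements are smoothed over time,
    # and the estimate's variance shrinks as evidence accumulates.
    x, p = 0.0, 100.0
    for z in [12.0, 9.0, 11.0, 10.0, 10.5]:
        x, p = kalman_update(x, p, z)
    ```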

  • S. Pearson, T. C. Camacho-Villa, R. Valluru, O. Gaju, M. Rai, I. Gould, S. Brewer, and E. Sklar, “Robotics and autonomous systems for net-zero agriculture,” Current robotics reports, vol. 3, p. 57–64, 2022. doi:10.1007/s43154-022-00077-6
    [BibTeX] [Abstract] [Download PDF]

    Purpose of Review: The paper discusses how robotics and autonomous systems (RAS) are being deployed to decarbonise agricultural production. The climate emergency cannot be ameliorated without dramatic reductions in greenhouse gas emissions across the agri-food sector. This review outlines the transformational role for robotics in the agri-food system and considers where research and focus might be prioritised. Recent Findings: Agri-robotic systems provide multiple emerging opportunities that facilitate the transition towards net zero agriculture. Five focus themes were identified where robotics could impact sustainable food production systems to (1) increase nitrogen use efficiency, (2) accelerate plant breeding, (3) deliver regenerative agriculture, (4) electrify robotic vehicles, (5) reduce food waste. Summary: RAS technologies create opportunities to (i) optimise the use of inputs such as fertiliser, seeds, and fuel/energy; (ii) reduce the environmental impact on soil and other natural resources; (iii) improve the efficiency and precision of agricultural processes and equipment; (iv) enhance farmers' decisions to improve crop care and reduce farm waste. Further and scaled research and technology development are needed to exploit these opportunities.

    @article{lincoln50887,
    volume = {3},
    month = {June},
    author = {Simon Pearson and Tania Carolina Camacho-Villa and Ravi Valluru and Oorbessy Gaju and Mini Rai and Iain Gould and Steve Brewer and Elizabeth Sklar},
    title = {Robotics and autonomous systems for net-zero agriculture},
    publisher = {Springer},
    year = {2022},
    journal = {Current Robotics Reports},
    doi = {10.1007/s43154-022-00077-6},
    pages = {57--64},
    url = {https://eprints.lincoln.ac.uk/id/eprint/50887/},
    abstract = {Purpose of Review: The paper discusses how robotics and autonomous systems (RAS) are being deployed to decarbonise agricultural production. The climate emergency cannot be ameliorated without dramatic reductions in greenhouse gas emissions across the agri-food sector. This review outlines the transformational role for robotics in the agri-food system and considers where research and focus might be prioritised. Recent Findings: Agri-robotic systems provide multiple emerging opportunities that facilitate the transition towards net zero agriculture. Five focus themes were identified where robotics could impact sustainable food production systems to (1) increase nitrogen use efficiency, (2) accelerate plant breeding, (3) deliver regenerative agriculture, (4) electrify robotic vehicles, (5) reduce food waste. Summary: RAS technologies create opportunities to (i) optimise the use of inputs such as fertiliser, seeds, and fuel/energy; (ii) reduce the environmental impact on soil and other natural resources; (iii) improve the efficiency and precision of agricultural processes and equipment; (iv) enhance farmers' decisions to improve crop care and reduce farm waste. Further and scaled research and technology development are needed to exploit these opportunities.}
    }

  • C. Qi, J. Gao, S. Pearson, H. Harman, K. Chen, and L. Shu, “Tea chrysanthemum detection under unstructured environments using the tc-yolo model,” Expert systems with applications, vol. 193, 2022. doi:10.1016/j.eswa.2021.116473
    [BibTeX] [Abstract] [Download PDF]

    Tea chrysanthemum detection at its flowering stage is one of the key components for selective chrysanthemum harvesting robot development. However, it is a challenge to detect flowering chrysanthemums under unstructured field environments given variations on illumination, occlusion and object scale. In this context, we propose a highly fused and lightweight deep learning architecture based on YOLO for tea chrysanthemum detection (TC-YOLO). First, in the backbone component and neck component, the method uses the Cross-Stage Partially Dense network (CSPDenseNet) and the Cross-Stage Partial ResNeXt network (CSPResNeXt) as the main networks, respectively, and embeds custom feature fusion modules to guide the gradient flow. In the final head component, the method combines the recursive feature pyramid (RFP) multiscale fusion reflow structure and the Atrous Spatial Pyramid Pool (ASPP) module with cavity convolution to achieve the detection task. The resulting model was tested on 300 field images using a data enhancement strategy combining flipping and rotation, showing that under the NVIDIA Tesla P100 GPU environment, with an inference speed of 47.23 FPS for each image (416 × 416), TC-YOLO achieves an average precision (AP) of 92.49\% on our own tea chrysanthemum dataset. Through further validation, it was found that overlap had the least effect on tea chrysanthemum detection, and illumination had the greatest effect on tea chrysanthemum detection. In addition, this method (13.6 M) can be deployed on a single mobile GPU, and it could be further developed as a perception system for a selective chrysanthemum harvesting robot in the future.

    @article{lincoln47700,
    volume = {193},
    month = {May},
    author = {Chao Qi and Junfeng Gao and Simon Pearson and Helen Harman and Kunjie Chen and Lei Shu},
    title = {Tea chrysanthemum detection under unstructured environments using the TC-YOLO model},
    publisher = {Elsevier},
    journal = {Expert Systems with Applications},
    doi = {10.1016/j.eswa.2021.116473},
    year = {2022},
    url = {https://eprints.lincoln.ac.uk/id/eprint/47700/},
    abstract = {Tea chrysanthemum detection at its flowering stage is one of the key components for selective chrysanthemum harvesting robot development. However, it is a challenge to detect flowering chrysanthemums under unstructured field environments given variations on illumination, occlusion and object scale. In this context, we propose a highly fused and lightweight deep learning architecture based on YOLO for tea chrysanthemum detection (TC-YOLO). First, in the backbone component and neck component, the method uses the Cross-Stage Partially Dense network (CSPDenseNet) and the Cross-Stage Partial ResNeXt network (CSPResNeXt) as the main networks, respectively, and embeds custom feature fusion modules to guide the gradient flow. In the final head component, the method combines the recursive feature pyramid (RFP) multiscale fusion reflow structure and the Atrous Spatial Pyramid Pool (ASPP) module with cavity convolution to achieve the detection task. The resulting model was tested on 300 field images using a data enhancement strategy combining flipping and rotation, showing that under the NVIDIA Tesla P100 GPU environment, if the inference speed is 47.23 FPS for each image (416 {$\times$} 416), TC-YOLO can achieve the average precision (AP) of 92.49\% on our own tea chrysanthemum dataset. Through further validation, it was found that overlap had the least effect on tea chrysanthemum detection, and illumination had the greatest effect on tea chrysanthemum detection. In addition, this method (13.6 M) can be deployed on a single mobile GPU, and it could be further developed as a perception system for a selective chrysanthemum harvesting robot in the future.}
    }

  • S. M. Mellado, G. Cielniak, and T. Duckett, “Robotic exploration for learning human motion patterns,” Ieee transactions on robotics, 2022. doi:10.1109/TRO.2021.3101358
    [BibTeX] [Abstract] [Download PDF]

    Understanding how people are likely to move is key to efficient and safe robot navigation in human environments. However, mobile robots can only observe a fraction of the environment at a time, while the activity patterns of people may also change at different times. This paper introduces a new methodology for mobile robot exploration to maximise the knowledge of human activity patterns by deciding where and when to collect observations. We introduce an exploration policy driven by the entropy levels in a spatio-temporal map of pedestrian flows, and compare multiple spatio-temporal exploration strategies including both informed and uninformed approaches. The evaluation is performed by simulating mobile robot exploration using real sensory data from three long-term pedestrian datasets. The results show that for certain scenarios the models built with proposed exploration system can better predict the flow patterns than uninformed strategies, allowing the robot to move in a more socially compliant way, and that the exploration ratio is a key factor when it comes to the model prediction accuracy.

    @article{lincoln46497,
    month = {April},
    title = {Robotic Exploration for Learning Human Motion Patterns},
    author = {Sergio Molina Mellado and Grzegorz Cielniak and Tom Duckett},
    publisher = {IEEE},
    year = {2022},
    doi = {10.1109/TRO.2021.3101358},
    journal = {IEEE Transactions on Robotics},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46497/},
    abstract = {Understanding how people are likely to move is key to efficient and safe robot navigation in human environments. However, mobile robots can only observe a fraction of the environment at a time, while the activity patterns of people may also change at different times. This paper introduces a new methodology for mobile robot exploration to maximise the knowledge of human activity patterns by deciding where and when to collect observations. We introduce an exploration policy driven by the entropy levels in a spatio-temporal map of pedestrian flows, and compare multiple spatio-temporal exploration strategies including both informed and uninformed approaches. The evaluation is performed by simulating mobile robot exploration using real sensory data from three long-term pedestrian datasets. The results show that for certain scenarios the models built with proposed exploration system can better predict the flow patterns than uninformed strategies, allowing the robot to move in a more socially compliant way, and that the exploration ratio is a key factor when it comes to the model prediction accuracy.}
    }
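
    The entropy-driven exploration policy described in the abstract can be illustrated with a toy sketch: the robot favours the map cell whose observed flow distribution is most uncertain. The cell names and distributions below are invented for illustration:

    ```python
    import math

    def entropy(probs):
        """Shannon entropy (bits) of a discrete distribution, e.g. the
        pedestrian flow directions observed in one cell of a
        spatio-temporal map."""
        return -sum(p * math.log2(p) for p in probs if p > 0)

    # A corridor with a dominant direction is well understood (low entropy);
    # an atrium with uniform flow in four directions is not. An entropy-driven
    # policy would direct the next observation to the atrium.
    cells = {"corridor": [0.9, 0.1],
             "atrium": [0.25, 0.25, 0.25, 0.25]}
    target = max(cells, key=lambda c: entropy(cells[c]))
    # target == "atrium" (2.0 bits vs about 0.47 bits for the corridor)
    ```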

  • K. M. F. James, D. J. Sargent, A. Whitehouse, and G. Cielniak, “High-throughput phenotyping for breeding targets – current status and future directions of strawberry trait automation,” Plants, people, planet, vol. 4, iss. 5, p. 432–443, 2022. doi:10.1002/ppp3.10275
    [BibTeX] [Abstract] [Download PDF]

    Automated image-based phenotyping has become widely accepted in crop phenotyping, particularly in cereal crops, yet few traits used by breeders in the strawberry industry have been automated. Early phenotypic assessment remains largely qualitative in this area since the manual phenotyping process is laborious and domain experts are constrained by time. Precision agriculture, facilitated by robotic technologies, is increasing in the strawberry industry, and the development of quantitative automated phenotyping methods is essential to ensure that breeding programs remain economically competitive. In this review, we investigate the external morphological traits relevant to the breeding of strawberries that have been automated and assess the potential for automation of traits that are still evaluated manually, highlighting challenges and limitations of the approaches used, particularly when applying high-throughput strawberry phenotyping in real-world environmental conditions.

    @article{lincoln49681,
    volume = {4},
    number = {5},
    month = {August},
    author = {Katherine Margaret Frances James and Daniel James Sargent and Adam Whitehouse and Grzegorz Cielniak},
    title = {High-throughput phenotyping for breeding targets - Current status and future directions of strawberry trait automation},
    publisher = {Wiley},
    year = {2022},
    journal = {Plants, People, Planet},
    doi = {10.1002/ppp3.10275},
    pages = {432--443},
    url = {https://eprints.lincoln.ac.uk/id/eprint/49681/},
    abstract = {Automated image-based phenotyping has become widely accepted in crop phenotyping, particularly in cereal crops, yet few traits used by breeders in the strawberry industry have been automated. Early phenotypic assessment remains largely qualitative in this area since the manual phenotyping process is laborious and domain experts are constrained by time. Precision agriculture, facilitated by robotic technologies, is increasing in the strawberry industry, and the development of quantitative automated phenotyping methods is essential to ensure that breeding programs remain economically competitive. In this review, we investigate the external morphological traits relevant to the breeding of strawberries that have been automated and assess the potential for automation of traits that are still evaluated manually, highlighting challenges and limitations of the approaches used, particularly when applying high-throughput strawberry phenotyping in real-world environmental conditions.}
    }

  • C. R. Carignan, R. Detry, M. R. Saaj, G. Marani, and J. V. D. Hook, “Editorial: robotic in-situ servicing, assembly and manufacturing,” Frontiers in robotics and ai, vol. 9, 2022. doi:10.3389/frobt.2022.887506
    [BibTeX] [Abstract] [Download PDF]

    This research topic is dedicated to articles focused on robotic manufacturing, assembly, and servicing utilizing in-situ resources, especially for space robotic applications. The purpose was to gather resource material for researchers from a variety of disciplines to identify common themes, formulate problems, and share promising technologies for autonomous large-scale construction, servicing, and assembly robots. The articles under this special topic provide a snapshot of several key technologies under development to support on-orbit robotic servicing, assembly, and manufacturing.

    @article{lincoln49488,
    volume = {9},
    month = {March},
    author = {Craig R. Carignan and Renaud Detry and Mini Rai Saaj and Giacomo Marani and Joshua D. Vander Hook},
    title = {Editorial: Robotic In-Situ Servicing, Assembly and Manufacturing},
    publisher = {Frontiers Media},
    journal = {Frontiers in Robotics and AI},
    doi = {10.3389/frobt.2022.887506},
    year = {2022},
    url = {https://eprints.lincoln.ac.uk/id/eprint/49488/},
    abstract = {This research topic is dedicated to articles focused on robotic manufacturing, assembly, and servicing utilizing in-situ resources, especially for space robotic applications. The purpose was to gather resource material for researchers from a variety of disciplines to identify common themes, formulate problems, and share promising technologies for autonomous large-scale construction, servicing, and assembly robots. The articles under this special topic provide a snapshot of several key technologies under development to support on-orbit robotic servicing, assembly, and manufacturing.}
    }

  • F. Lei, Z. Peng, M. Liu, J. Peng, V. Cutsuridis, and S. Yue, “A robust visual system for looming cue detection against translating motion,” Ieee transactions on neural networks and learning systems, p. 1–15, 2022. doi:10.1109/TNNLS.2022.3149832
    [BibTeX] [Abstract] [Download PDF]

    Collision detection is critical for autonomous vehicles or robots to serve human society safely. Detecting looming objects robustly and timely plays an important role in collision avoidance systems. The locust lobula giant movement detector (LGMD1) is specifically selective to looming objects which are on a direct collision course. However, the existing LGMD1 models can not distinguish a looming object from a near and fast translatory moving object, because the latter can evoke a large amount of excitation that can lead to false LGMD1 spikes. This paper presents a new visual neural system model (LGMD1) that applies a neural competition mechanism within a framework of separated ON and OFF pathways to shut off the translating response. The competition-based approach responds vigorously to monotonous ON/OFF responses resulting from a looming object. However, it does not respond to paired ON-OFF responses that result from a translating object, thereby enhancing collision selectivity. Moreover, a complementary denoising mechanism ensures reliable collision detection. To verify the effectiveness of the model, we have conducted systematic comparative experiments on synthetic and real datasets. The results show that our method exhibits more accurate discrimination between looming and translational events – the looming motion can be correctly detected. It also demonstrates that the proposed model is more robust than comparative models.

    @article{lincoln48358,
    month = {February},
    author = {Fang Lei and Zhiping Peng and Mei Liu and Jigen Peng and Vassilis Cutsuridis and Shigang Yue},
    title = {A Robust Visual System for Looming Cue Detection Against Translating Motion},
    publisher = {IEEE},
    journal = {IEEE Transactions on Neural Networks and Learning Systems},
    doi = {10.1109/TNNLS.2022.3149832},
    pages = {1--15},
    year = {2022},
    url = {https://eprints.lincoln.ac.uk/id/eprint/48358/},
    abstract = {Collision detection is critical for autonomous vehicles or robots to serve human society safely. Detecting looming objects robustly and timely plays an important role in collision avoidance systems. The locust lobula giant movement detector (LGMD1) is specifically selective to looming objects which are on a direct collision course. However, the existing LGMD1 models can not distinguish a looming object from a near and fast translatory moving object, because the latter can evoke a large amount of excitation that can lead to false LGMD1 spikes. This paper presents a new visual neural system model (LGMD1) that applies a neural competition mechanism within a framework of separated ON and OFF pathways to shut off the translating response. The competition-based approach responds vigorously to monotonous ON/OFF responses resulting from a looming object. However, it does not respond to paired ON-OFF responses that result from a translating object, thereby enhancing collision selectivity. Moreover, a complementary denoising mechanism ensures reliable collision detection. To verify the effectiveness of the model, we have conducted systematic comparative experiments on synthetic and real datasets. The results show that our method exhibits more accurate discrimination between looming and translational events -- the looming motion can be correctly detected. It also demonstrates that the proposed model is more robust than comparative models.}
    }
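
    The central mechanism described above — paired ON and OFF responses from a translating edge cancelling each other, while the one-signed response of a looming edge survives — can be illustrated with a toy sketch. This is not the authors' LGMD1 implementation; the box-blur pooling, the inhibition weight `w_inh`, and the function names are simplifying assumptions made for illustration only.

    ```python
    import numpy as np

    def smooth(x, k=3):
        """Box blur: pools responses from a k x k neighbourhood."""
        pad = k // 2
        p = np.pad(x, pad, mode="edge")
        out = np.zeros_like(x, dtype=float)
        for dy in range(k):
            for dx in range(k):
                out += p[dy:dy + x.shape[0], dx:dx + x.shape[1]]
        return out / (k * k)

    def looming_excitation(frame_prev, frame_curr, w_inh=3.0):
        """Excitation surviving ON/OFF competition between two frames.

        A looming edge darkens (or brightens) newly covered pixels, so only
        one pathway responds locally and the signal passes through.  A
        translating edge leaves an OFF trail beside its ON front; where the
        pooled pathways co-occur, both are suppressed.
        """
        diff = frame_curr.astype(float) - frame_prev.astype(float)
        on, off = np.maximum(diff, 0.0), np.maximum(-diff, 0.0)   # separated pathways
        inhibition = w_inh * np.minimum(smooth(on), smooth(off))  # competition term
        return float(np.sum(np.maximum(on - inhibition, 0.0)
                            + np.maximum(off - inhibition, 0.0)))
    ```

    On a 7x7 white image, a one-pixel dark bar shifting right produces adjacent ON/OFF columns that cancel to zero, while a dark square growing from 3x3 to 5x5 drives only the OFF pathway and passes through with full strength.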

  • F. Lei, Z. Peng, M. Liu, J. Peng, V. Cutsuridis, and S. Yue, “A robust visual system for looming cue detection against translation motion,” Ieee transactions on neural networks and learning systems, 2022. doi:10.1109/TNNLS.2022.3149832
    [BibTeX] [Abstract] [Download PDF]

    Collision detection is critical for autonomous vehicles or robots to serve human society safely. Detecting looming objects robustly and timely plays an important role in collision avoidance systems. The locust lobula giant movement detector (LGMD1) is specifically selective to looming objects which are on a direct collision course. However, the existing LGMD1 models cannot distinguish a looming object from a near and fast translatory moving object, because the latter can evoke a large amount of excitation that can lead to false LGMD1 spikes. This article presents a new visual neural system model (LGMD1) that applies a neural competition mechanism within a framework of separated ON and OFF pathways to shut off the translating response. The competition-based approach responds vigorously to monotonous ON/OFF responses resulting from a looming object. However, it does not respond to paired ON-OFF responses that result from a translating object, thereby enhancing collision selectivity. Moreover, a complementary denoising mechanism ensures reliable collision detection. To verify the effectiveness of the model, we have conducted systematic comparative experiments on synthetic and real datasets. The results show that our method exhibits more accurate discrimination between looming and translational events – the looming motion can be correctly detected. It also demonstrates that the proposed model is more robust than comparative models.

    @article{lincoln49162,
    month = {February},
    title = {A Robust Visual System for Looming Cue Detection Against Translation Motion},
    author = {Fang Lei and Zhiping Peng and Mei Liu and Jigen Peng and Vassilis Cutsuridis and Shigang Yue},
    publisher = {IEEE},
    year = {2022},
    doi = {10.1109/TNNLS.2022.3149832},
    journal = {IEEE Transactions on Neural Networks and Learning Systems},
    url = {https://eprints.lincoln.ac.uk/id/eprint/49162/},
    abstract = {Collision detection is critical for autonomous vehicles or robots to serve human society safely. Detecting looming objects robustly and timely plays an important role in collision avoidance systems. The locust lobula giant movement detector (LGMD1) is specifically selective to looming objects which are on a direct collision course. However, the existing LGMD1 models cannot distinguish a looming object from a near and fast translatory moving object, because the latter can evoke a large amount of excitation that can lead to false LGMD1 spikes. This article presents a new visual neural system model (LGMD1) that applies a neural competition mechanism within a framework of separated ON and OFF pathways to shut off the translating response. The competition-based approach responds vigorously to monotonous ON/OFF responses resulting from a looming object. However, it does not respond to paired ON-OFF responses that result from a translating object, thereby enhancing collision selectivity. Moreover, a complementary denoising mechanism ensures reliable collision detection. To verify the effectiveness of the model, we have conducted systematic comparative experiments on synthetic and real datasets. The results show that our method exhibits more accurate discrimination between looming and translational events{--}the looming motion can be correctly detected. It also demonstrates that the proposed model is more robust than comparative models.}
    }

  • A. Mazzeo, J. Aguzzi, M. Calisti, S. Canese, M. Angiolillo, L. Allcock, F. Vecchi, S. Stefanni, and M. Controzzi, “Marine robotics for deep-sea specimen collection: a taxonomy of underwater manipulative actions,” Sensors, vol. 22, iss. 1471, 2022. doi:10.3390/s22041471
    [BibTeX] [Abstract] [Download PDF]

    In order to develop a gripping system or control strategy that improves scientific sampling procedures, knowledge of the process and the consequent definition of requirements is fundamental. Nevertheless, factors influencing sampling procedures have not been extensively described, and selected strategies mostly depend on pilots' and researchers' experience. We interviewed 17 researchers and remotely operated vehicle (ROV) technical operators, through a formal questionnaire or in-person interviews, to collect evidence of sampling procedures based on their direct field experience. We methodologically analyzed sampling procedures to extract single basic actions (called atomic manipulations). Available equipment, environment and species-specific features strongly influenced the manipulative choices. We identified a list of functional and technical requirements for the development of novel end-effectors for marine sampling. Our results indicate that the unstructured and highly variable deep-sea environment requires a versatile system, capable of robust interactions with hard surfaces such as pushing or scraping, precise tuning of gripping force for tasks such as pulling delicate organisms away from hard and soft substrates, and rigid holding, as well as a mechanism for rapidly switching among external tools.

    @article{lincoln52103,
    volume = {22},
    number = {1471},
    month = {February},
    author = {Angela Mazzeo and Jacopo Aguzzi and Marcello Calisti and Simonpietro Canese and Michela Angiolillo and Louise Allcock and Fabrizio Vecchi and Sergio Stefanni and Marco Controzzi},
    title = {Marine Robotics for Deep-Sea Specimen Collection: A Taxonomy of Underwater Manipulative Actions},
    publisher = {MDPI},
    year = {2022},
    journal = {Sensors},
    doi = {10.3390/s22041471},
    url = {https://eprints.lincoln.ac.uk/id/eprint/52103/},
    abstract = {In order to develop a gripping system or control strategy that improves scientific sampling procedures, knowledge of the process and the consequent definition of requirements is fundamental. Nevertheless, factors influencing sampling procedures have not been extensively described, and selected strategies mostly depend on pilots' and researchers' experience. We interviewed 17 researchers and remotely operated vehicle (ROV) technical operators, through a formal questionnaire or in-person interviews, to collect evidence of sampling procedures based on their direct field experience. We methodologically analyzed sampling procedures to extract single basic actions (called atomic manipulations). Available equipment, environment and species-specific features strongly influenced the manipulative choices. We identified a list of functional and technical requirements for the development of novel end-effectors for marine sampling. Our results indicate that the unstructured and highly variable deep-sea environment requires a versatile system, capable of robust interactions with hard surfaces such as pushing or scraping, precise tuning of gripping force for tasks such as pulling delicate organisms away from hard and soft substrates, and rigid holding, as well as a mechanism for rapidly switching among external tools.}
    }

  • J. Aguzzi, S. Flogel, S. Marini, L. Thomsen, J. Albiez, P. Weiss, G. Picardi, M. Calisti, S. Stefanni, L. Mirimin, F. Vecchi, C. Laschi, A. Branch, E. Clark, B. Foing, A. Wedler, D. Chatzievangelou, M. Tangherlini, A. Purser, L. Dartnell, and R. Danovaro, “Developing technological synergies between deep-sea and space research,” Elementa: science of the anthropocene, vol. 10, iss. 1, p. 64, 2022. doi:10.1525/elementa.2021.00064
    [BibTeX] [Abstract] [Download PDF]

    Recent advances in robotic design, autonomy and sensor integration create solutions for the exploration of deep-sea environments, transferable to the oceans of icy moons. Marine platforms do not yet have the mission autonomy capacity of their space counterparts (e.g., the state of the art Mars Perseverance rover mission), although different levels of autonomous navigation and mapping, as well as sampling, are an extant capability. In this setting their increasingly biomimicked designs may allow access to complex environmental scenarios, with novel, highly-integrated life-detecting, oceanographic and geochemical sensor packages. Here, we lay an outlook for the upcoming advances in deep-sea robotics through synergies with space technologies within three major research areas: biomimetic structure and propulsion (including power storage and generation), artificial intelligence and cooperative networks, and life-detecting instrument design. New morphological and material designs, with miniaturized and more diffuse sensor packages, will advance robotic sensing systems. Artificial intelligence algorithms controlling navigation and communications will allow the further development of the behavioral biomimicking by cooperating networks. Solutions will have to be tested within infrastructural networks of cabled observatories, neutrino telescopes, and off-shore industry sites with agendas and modalities that are beyond the scope of our work, but could draw inspiration on the proposed examples for the operational combination of fixed and mobile platforms.

    @article{lincoln52102,
    volume = {10},
    number = {1},
    month = {February},
    author = {Jacopo Aguzzi and Sascha Flogel and Simone Marini and Laurenz Thomsen and Jan Albiez and Peter Weiss and Giacomo Picardi and Marcello Calisti and Sergio Stefanni and Luca Mirimin and Fabrizio Vecchi and Cecilia Laschi and Andrew Branch and Evan Clark and Bernard Foing and Armin Wedler and Damianos Chatzievangelou and Michael Tangherlini and Autun Purser and Lewis Dartnell and Roberto Danovaro},
    title = {Developing technological synergies between deep-sea and space research},
    publisher = {University of California},
    year = {2022},
    journal = {Elementa: Science of the Anthropocene},
    doi = {10.1525/elementa.2021.00064},
    pages = {00064},
    url = {https://eprints.lincoln.ac.uk/id/eprint/52102/},
    abstract = {Recent advances in robotic design, autonomy and sensor integration create solutions for the exploration of deep-sea environments, transferable to the oceans of icy moons. Marine platforms do not yet have the mission autonomy capacity of their space counterparts (e.g., the state of the art Mars Perseverance rover mission), although different levels of autonomous navigation and mapping, as well as sampling, are an extant capability. In this setting their increasingly biomimicked designs may allow access to complex environmental scenarios, with novel, highly-integrated life-detecting, oceanographic and geochemical sensor packages. Here, we lay an outlook for the upcoming advances in deep-sea robotics through synergies with space technologies within three major research areas: biomimetic structure and propulsion (including power storage and generation), artificial intelligence and cooperative networks, and life-detecting instrument design. New morphological and material designs, with miniaturized and more diffuse sensor packages, will advance robotic sensing systems. Artificial intelligence algorithms controlling navigation and communications will allow the further development of the behavioral biomimicking by cooperating networks. Solutions will have to be tested within infrastructural networks of cabled observatories, neutrino telescopes, and off-shore industry sites with agendas and modalities that are beyond the scope of our work, but could draw inspiration on the proposed examples for the operational combination of fixed and mobile platforms.}
    }

  • H. Luan, Q. Fu, Y. Zhang, M. Hua, S. Chen, and S. Yue, “A looming spatial localization neural network inspired by mlg1 neurons in the crab neohelice,” Frontiers in neuroscience, 2022. doi:10.3389/fnins.2021.787256
    [BibTeX] [Abstract] [Download PDF]

    Similar to most visual animals, the crab Neohelice granulata relies predominantly on visual information to escape from predators, to track prey and for selecting mates. It, therefore, needs specialized neurons to process visual information and determine the spatial location of looming objects. In the crab Neohelice granulata, the Monostratified Lobula Giant type1 (MLG1) neurons have been found to manifest looming sensitivity with finely tuned capabilities of encoding spatial location information. MLG1s neuronal ensemble can not only perceive the location of a looming stimulus, but are also thought to be able to influence the direction of movement continuously, for example, escaping from a threatening, looming target in relation to its position. Such specific characteristics make the MLG1s unique compared to normal looming detection neurons in invertebrates which can not localize spatial looming. Modeling the MLG1s ensemble is not only critical for elucidating the mechanisms underlying the functionality of such neural circuits, but also important for developing new autonomous, efficient, directionally reactive collision avoidance systems for robots and vehicles. However, little computational modeling has been done for implementing looming spatial localization analogous to the specific functionality of MLG1s ensemble. To bridge this gap, we propose a model of MLG1s and their pre-synaptic visual neural network to detect the spatial location of looming objects. The model consists of 16 homogeneous sectors arranged in a circular field inspired by the natural arrangement of 16 MLG1s' receptive fields to encode and convey spatial information concerning looming objects with dynamic expanding edges in different locations of the visual field. Responses of the proposed model to systematic real-world visual stimuli match many of the biological characteristics of MLG1 neurons. The systematic experiments demonstrate that our proposed MLG1s model works effectively and robustly to perceive and localize looming information, which could be a promising candidate for intelligent machines interacting within dynamic environments free of collision. This study also sheds light upon a new type of neuromorphic visual sensor strategy that can extract looming objects with locational information in a quick and reliable manner.

    @article{lincoln49094,
    month = {January},
    title = {A Looming Spatial Localization Neural Network Inspired by MLG1 Neurons in the Crab Neohelice},
    author = {Hao Luan and Qingbing Fu and Yicheng Zhang and Mu Hua and Shengyong Chen and Shigang Yue},
    publisher = {Frontiers Media},
    year = {2022},
    doi = {10.3389/fnins.2021.787256},
    journal = {Frontiers in Neuroscience},
    url = {https://eprints.lincoln.ac.uk/id/eprint/49094/},
    abstract = {Similar to most visual animals, the crab Neohelice granulata relies predominantly on visual information to escape from predators, to track prey and for selecting mates. It, therefore, needs specialized neurons to process visual information and determine the spatial location of looming objects. In the crab Neohelice granulata, the Monostratified Lobula Giant type1 (MLG1) neurons have been found to manifest looming sensitivity with finely tuned capabilities of encoding spatial location information. MLG1s neuronal ensemble can not only perceive the location of a looming stimulus, but are also thought to be able to influence the direction of movement continuously, for example, escaping from a threatening, looming target in relation to its position. Such specific characteristics make the MLG1s unique compared to normal looming detection neurons in invertebrates which can not localize spatial looming. Modeling the MLG1s ensemble is not only critical for elucidating the mechanisms underlying the functionality of such neural circuits, but also important for developing new autonomous, efficient, directionally reactive collision avoidance systems for robots and vehicles. However, little computational modeling has been done for implementing looming spatial localization analogous to the specific functionality of MLG1s ensemble. To bridge this gap, we propose a model of MLG1s and their pre-synaptic visual neural network to detect the spatial location of looming objects. The model consists of 16 homogeneous sectors arranged in a circular field inspired by the natural arrangement of 16 MLG1s' receptive fields to encode and convey spatial information concerning looming objects with dynamic expanding edges in different locations of the visual field. Responses of the proposed model to systematic real-world visual stimuli match many of the biological characteristics of MLG1 neurons. The systematic experiments demonstrate that our proposed MLG1s model works effectively and robustly to perceive and localize looming information, which could be a promising candidate for intelligent machines interacting within dynamic environments free of collision. This study also sheds light upon a new type of neuromorphic visual sensor strategy that can extract looming objects with locational information in a quick and reliable manner.}
    }
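
    The paper's key structural idea — 16 homogeneous sectors arranged in a circular field, each pooling excitation from one angular slice of the view so that the most active sector localizes the looming stimulus — can be sketched as follows. This is a hypothetical simplification for illustration, not the published model; the function name, the uniform angular binning, and the argmax read-out are assumptions.

    ```python
    import math

    def sector_responses(excitation, n_sectors=16):
        """Pool a 2-D excitation map into angular sectors around the centre
        of the visual field, mimicking the circular arrangement of the 16
        MLG1 receptive fields.  The index of the most active sector gives
        the azimuthal location of the stimulus."""
        h, w = len(excitation), len(excitation[0])
        cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
        responses = [0.0] * n_sectors
        for y in range(h):
            for x in range(w):
                if x == cx and y == cy:
                    continue  # centre pixel has no defined direction
                # Angle of this pixel relative to the centre, in [0, 2*pi)
                angle = math.atan2(y - cy, x - cx) % (2 * math.pi)
                sector = int(angle / (2 * math.pi) * n_sectors) % n_sectors
                responses[sector] += excitation[y][x]
        return responses
    ```

    For example, on a 9x9 map, activity at the right edge falls into sector 0, while activity at the bottom edge falls into the sector a quarter-turn around the ring.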

  • A. Mazzeo, J. Aguzzi, M. Calisti, S. Canese, F. Vecchi, S. Stefanni, and M. Controzzi, “Marine robotics for deep-sea specimen collection: a systematic review of underwater grippers,” Sensors, vol. 22, iss. 2, p. 648, 2022. doi:10.3390/s22020648
    [BibTeX] [Abstract] [Download PDF]

    The collection of delicate deep-sea specimens of biological interest with remotely operated vehicle (ROV) industrial grippers and tools is a long and expensive procedure. Industrial grippers were originally designed for heavy manipulation tasks, while sampling specimens requires dexterity and precision. We describe the grippers and tools commonly used in underwater sampling for scientific purposes, systematically review the state of the art of research in underwater gripping technologies, and identify design trends. We discuss the possibility of executing typical manipulations of sampling procedures with commonly used grippers and research prototypes. Our results indicate that commonly used grippers ensure that the basic actions either of gripping or caging are possible, and their functionality is extended by holding proper tools. Moreover, the approach of the research status seems to have changed its focus in recent years: from the demonstration of the validity of a specific technology (actuation, transmission, sensing) for marine applications, to the solution of specific needs of underwater manipulation. Finally, we summarize the environmental and operational requirements that should be considered in the design of an underwater gripper.

    @article{lincoln52101,
    volume = {22},
    number = {2},
    month = {January},
    author = {Angelo Mazzeo and Jacopo Aguzzi and Marcello Calisti and Simonpietro Canese and Fabrizio Vecchi and Sergio Stefanni and Marco Controzzi},
    title = {Marine Robotics for Deep-Sea Specimen Collection: A Systematic Review of Underwater Grippers},
    publisher = {MDPI},
    year = {2022},
    journal = {Sensors},
    doi = {10.3390/s22020648},
    pages = {648},
    url = {https://eprints.lincoln.ac.uk/id/eprint/52101/},
    abstract = {The collection of delicate deep-sea specimens of biological interest with remotely operated vehicle (ROV) industrial grippers and tools is a long and expensive procedure. Industrial grippers were originally designed for heavy manipulation tasks, while sampling specimens requires dexterity and precision. We describe the grippers and tools commonly used in underwater sampling for scientific purposes, systematically review the state of the art of research in underwater gripping technologies, and identify design trends. We discuss the possibility of executing typical manipulations of sampling procedures with commonly used grippers and research prototypes. Our results indicate that commonly used grippers ensure that the basic actions either of gripping or caging are possible, and their functionality is extended by holding proper tools. Moreover, the approach of the research status seems to have changed its focus in recent years: from the demonstration of the validity of a specific technology (actuation, transmission, sensing) for marine applications, to the solution of specific needs of underwater manipulation. Finally, we summarize the environmental and operational requirements that should be considered in the design of an underwater gripper.}
    }

  • L. Manning, S. Brewer, P. Craigon, P. J. Frey, A. Gutierrez, N. Jacobs, S. Kanza, S. Munday, J. Sacks, and S. Pearson, “Artificial intelligence and ethics within the food sector: developing a common language for technology adoption across the supply chain,” Trends in food science and technology, 2022.
    [BibTeX] [Abstract] [Download PDF]

    Background: The use of artificial intelligence (AI) is growing in food supply chains. The ethical language associated with food supply and technology is contextualised and framed by the meaning given to it by stakeholders. Failure to differentiate between these nuanced meanings can create a barrier to technology adoption and reduce the benefit derived. Scope and approach: The aim of this review paper is to consider the embedded ethical language used by stakeholders who collaborate in the adoption of AI in food supply chains. Ethical perspectives frame this literature review and provide structure to consider how to shape a common discourse to build trust in, and frame more considered utilisation of, AI in food supply chains to the benefit of users, and wider society. Key findings and conclusions: Whilst the nature of data within the food system is much broader than the personal data covered by the European Union General Data Protection Regulation (GDPR), the ethical issues for computational and AI systems are similar and can be considered in terms of particular aspects: transparency, traceability, explainability, interpretability, accessibility, accountability and responsibility. The outputs of this research assist in giving a more rounded understanding of the language used, exploring the ethical interaction of aspects of AI used in food supply chains and also the management activities and actions that can be adopted to improve the applicability of AI technology, increase engagement and derive greater performance benefits. This work has implications for those developing AI governance protocols for the food supply chain as well as supply chain practitioners.

    @article{lincoln49072,
    title = {Artificial intelligence and ethics within the food sector: developing a common language for technology adoption across the supply chain},
    author = {Louise Manning and Steve Brewer and Peter Craigon and P. J. Frey and Anabel Gutierrez and Naomi Jacobs and Samantha Kanza and Samuel Munday and Justin Sacks and Simon Pearson},
    publisher = {Elsevier},
    year = {2022},
    journal = {Trends in Food Science and Technology},
    url = {https://eprints.lincoln.ac.uk/id/eprint/49072/},
    abstract = {Background: The use of artificial intelligence (AI) is growing in food supply chains. The ethical language associated with food supply and technology is contextualised and framed by the meaning given to it by stakeholders. Failure to differentiate between these nuanced meanings can create a barrier to technology adoption and reduce the benefit derived.
    Scope and approach: The aim of this review paper is to consider the embedded ethical language used by stakeholders who collaborate in the adoption of AI in food supply chains. Ethical perspectives frame this literature review and provide structure to consider how to shape a common discourse to build trust in, and frame more considered utilisation of, AI in food supply chains to the benefit of users, and wider society.
    Key findings and conclusions: Whilst the nature of data within the food system is much broader than the personal data covered by the European Union General Data Protection Regulation (GDPR), the ethical issues for computational and AI systems are similar and can be considered in terms of particular aspects: transparency, traceability, explainability, interpretability, accessibility, accountability and responsibility. The outputs of this research assist in giving a more rounded understanding of the language used, exploring the ethical interaction of aspects of AI used in food supply chains and also the management activities and actions that can be adopted to improve the applicability of AI technology, increase engagement and derive greater performance benefits. This work has implications for those developing AI governance protocols for the food supply chain as well as supply chain practitioners.}
    }

  • A. Pal, G. Das, M. Hanheide, A. C. Leite, and P. From, “An agricultural event prediction framework towards anticipatory scheduling of robot fleets: general concepts and case studies,” Agronomy, vol. 12, iss. 6, 2022. doi:10.3390/agronomy12061299
    [BibTeX] [Abstract] [Download PDF]

    Harvesting in soft-fruit farms is labor intensive, time consuming and is severely affected by scarcity of skilled labors. Among several activities during soft-fruit harvesting, human pickers take 20–30\% of overall operation time into the logistics activities. Such an unproductive time, for example, can be reduced by optimally deploying a fleet of agricultural robots and schedule them by anticipating the human activity behaviour (state) during harvesting. In this paper, we propose a framework for spatio-temporal prediction of human pickers' activities while they are picking fruits in agriculture fields. Here we exploit temporal patterns of picking operation and 2D discrete points, called topological nodes, as spatial constraints imposed by the agricultural environment. Both information are used in the prediction framework in combination with a variant of the Hidden Markov Model (HMM) algorithm to create two modules. The proposed methodology is validated with two test cases. In Test Case 1, the first module selects an optimal temporal model called as picking_state_progression model that uses temporal features of a picker state (event) to statistically evaluate an adequate number of intra-states also called sub-states. In Test Case 2, the second module uses the outcome from the optimal temporal model in the subsequent spatial model called node_transition model and performs 'spatio-temporal predictions' of the picker's movement while the picker is in a particular state. The Discrete Event Simulation (DES) framework, a proven agricultural multi-robot logistics model, is used to simulate the different picking operation scenarios with and without our proposed prediction framework and the results are then statistically compared to each other. Our prediction framework can reduce the so-called unproductive logistics time in a fully manual harvesting process by about 80 percent in the overall picking operation. This research also indicates that the different rates of picking operations involve different numbers of sub-states, and these sub-states are associated with different trends considered in spatio-temporal predictions.

    @article{lincoln49668,
    volume = {12},
    number = {6},
    author = {Abhishesh Pal and Gautham Das and Marc Hanheide and Antonio Candea Leite and Pal From},
    title = {An Agricultural Event Prediction Framework towards Anticipatory Scheduling of Robot Fleets: General Concepts and Case Studies},
    publisher = {MDPI},
    journal = {Agronomy},
    doi = {10.3390/agronomy12061299},
    year = {2022},
    url = {https://eprints.lincoln.ac.uk/id/eprint/49668/},
    abstract = {Harvesting in soft-fruit farms is labor intensive, time consuming and is severely affected by scarcity of skilled labors. Among several activities during soft-fruit harvesting, human pickers take 20--30\% of overall operation time into the logistics activities. Such an unproductive time, for example, can be reduced by optimally deploying a fleet of agricultural robots and schedule them by anticipating the human activity behaviour (state) during harvesting. In this paper, we propose a framework for spatio-temporal prediction of human pickers' activities while they are picking fruits in agriculture fields. Here we exploit temporal patterns of picking operation and 2D discrete points, called topological nodes, as spatial constraints imposed by the agricultural environment. Both information are used in the prediction framework in combination with a variant of the Hidden Markov Model (HMM) algorithm to create two modules. The proposed methodology is validated with two test cases. In Test Case 1, the first module selects an optimal temporal model called as picking\_state\_progression model that uses temporal features of a picker state (event) to statistically evaluate an adequate number of intra-states also called sub-states. In Test Case 2, the second module uses the outcome from the optimal temporal model in the subsequent spatial model called node\_transition model and performs 'spatio-temporal predictions' of the picker's movement while the picker is in a particular state. The Discrete Event Simulation (DES) framework, a proven agricultural multi-robot logistics model, is used to simulate the different picking operation scenarios with and without our proposed prediction framework and the results are then statistically compared to each other. Our prediction framework can reduce the so-called unproductive logistics time in a fully manual harvesting process by about 80 percent in the overall picking operation. This research also indicates that the different rates of picking operations involve different numbers of sub-states, and these sub-states are associated with different trends considered in spatio-temporal predictions.}
    }

  • B. Mazzolai, A. Mondini, E. D. Dottore, L. Margheri, K. Suzumori, M. Cianchetti, T. Speck, S. Smoukov, I. Burget, T. Keplinger, G. D. F. Siqueira, F. Vanneste, O. Goury, C. Duriez, T. Nanayakkara, B. Vanderborght, J. Brancart, S. Terryn, S. Rich, R. Liu, K. Fukuda, T. Someya, M. Calisti, C. Laschi, W. Sun, G. Wang, L. Wen, R. Baines, P. K. Sree, R. Kramer-Bottiglio, D. Rus, P. Fischer, F. Simmel, and A. Lendlein, “Roadmap on soft robotics: multifunctionality, adaptability and growth without borders,” Multifunctional materials, vol. 5, p. 32001, 2022. doi:10.1088/2399-7532/ac4c95
    [BibTeX] [Abstract] [Download PDF]

    Soft robotics aims at creating systems with improved performance of movement and adaptability in unknown, challenging, environments and with higher level of safety during interactions with humans. This Roadmap on Soft Robotics covers selected aspects for the design of soft robots significantly linked to the area of multifunctional materials, as these are considered a fundamental component in the design of soft robots for an improvement of their peculiar abilities, such as morphing, adaptivity and growth. The roadmap includes different approaches for components and systems design, bioinspired materials, methodologies for building soft robots, strategies for the implementation and control of their functionalities and behavior, and examples of soft-bodied systems showing abilities across different environments. For each covered topic, the author(s) describe the current status and research directions, current and future challenges, and perspective advances in science and technology to meet the challenges.

    @article{lincoln52106,
    volume = {5},
    month = {August},
    author = {Barbara Mazzolai and Alessio Mondini and Emanuela Del Dottore and Laura Margheri and Koichi Suzumori and Matteo Cianchetti and Thomas Speck and Stoyan Smoukov and Ingo Burget and Tobias Keplinger and Gilberto De Freitas Siqueira and Felix Vanneste and Olivier Goury and Christian Duriez and Thrishantha Nanayakkara and Bram Vanderborght and Joost Brancart and Seppe Terryn and Steven Rich and Ruiyuan Liu and Kenjiro Fukuda and Takao Someya and Marcello Calisti and Cecilia Laschi and Wenguang Sun and Gang Wang and Li Wen and Robert Baines and Patiballa Kalyan Sree and Rebecca Kramer-Bottiglio and Daniela Rus and Peer Fischer and Friedrich Simmel and Andreas Lendlein},
    title = {Roadmap on soft robotics: multifunctionality, adaptability and growth without borders},
    publisher = {IOP Publishing},
    year = {2022},
    journal = {Multifunctional Materials},
    doi = {10.1088/2399-7532/ac4c95},
    pages = {032001},
    url = {https://eprints.lincoln.ac.uk/id/eprint/52106/},
    abstract = {Soft robotics aims at creating systems with improved performance of movement and adaptability in unknown, challenging, environments and with higher level of safety during interactions with humans. This Roadmap on Soft Robotics covers selected aspects for the design of soft robots significantly linked to the area of multifunctional materials, as these are considered a fundamental component in the design of soft robots for an improvement of their peculiar abilities, such as morphing, adaptivity and growth. The roadmap includes different approaches for components and systems design, bioinspired materials, methodologies for building soft robots, strategies for the implementation and control of their functionalities and behavior, and examples of soft-bodied systems showing abilities across different environments. For each covered topic, the author(s) describe the current status and research directions, current and future challenges, and perspective advances in science and technology to meet the challenges.}
    }

  • C. Qi, J. Gao, K. Chen, L. Shu, and S. Pearson, “Tea chrysanthemum detection by leveraging generative adversarial networks and edge computing,” Frontiers in plant science, 2022.
    [BibTeX] [Abstract] [Download PDF]

    A high resolution dataset is one of the prerequisites for tea chrysanthemum detection with deep learning algorithms. This is crucial for further developing a selective chrysanthemum harvesting robot. However, generating high resolution datasets of the tea chrysanthemum with complex unstructured environments is a challenge. In this context, we propose a novel generative adversarial network (TC-GAN) that attempts to deal with this challenge. First, we designed a non-linear mapping network for untangling the features of the underlying code. Then, a customized regularisation method was used to provide fine-grained control over the image details. Finally, a gradient diversion design with multi-scale feature extraction capability was adopted to optimize the training process. The proposed TC-GAN was compared with 12 state-of-the-art generative adversarial networks, showing that an optimal average precision (AP) of 90.09\% was achieved with the generated images (512*512) on the developed TC-YOLO object detection model under the NVIDIA Tesla P100 GPU environment. Moreover, the detection model was deployed into the embedded NVIDIA Jetson TX2 platform with 0.1s inference time, and this edge computing device could be further developed into a perception system for selective chrysanthemum picking robots in the future.

    @article{lincoln48499,
    title = {Tea Chrysanthemum Detection by Leveraging Generative Adversarial Networks and Edge Computing},
    author = {Chao Qi and Junfeng Gao and Kunjie Chen and Lei Shu and Simon Pearson},
    publisher = {Frontiers Media},
    year = {2022},
    journal = {Frontiers in plant science},
    url = {https://eprints.lincoln.ac.uk/id/eprint/48499/},
    abstract = {A high resolution dataset is one of the prerequisites for tea chrysanthemum detection with deep learning algorithms. This is crucial for further developing a selective chrysanthemum harvesting robot. However, generating high resolution datasets of the tea chrysanthemum with complex unstructured environments is a challenge. In this context, we propose a novel generative adversarial network (TC-GAN) that attempts to deal with this challenge. First, we designed a non-linear mapping network for untangling the features of the underlying code. Then, a customized regularisation method was used to provide fine-grained control over the image details. Finally, a gradient diversion design with multi-scale feature extraction capability was adopted to optimize the training process. The proposed TC-GAN was compared with 12 state-of-the-art generative adversarial networks, showing that an optimal average precision (AP) of 90.09\% was achieved with the generated images (512*512) on the developed TC-YOLO object detection model under the NVIDIA Tesla P100 GPU environment. Moreover, the detection model was deployed into the embedded NVIDIA Jetson TX2 platform with 0.1s inference time, and this edge computing device could be further developed into a perception system for selective chrysanthemum picking robots in the future.}
    }

  • M. A. Mdfaa, G. Kulathunga, and A. Klimchik, “3d-siammask: vision-based multi-rotor aerial-vehicle tracking for a moving object,” Remote sensing, vol. 14, iss. 22, p. 5756, 2022. doi:10.3390/rs14225756
    [BibTeX] [Abstract] [Download PDF]

    This paper aims to develop a multi-rotor-based visual tracker for a specified moving object. Visual object-tracking algorithms for multi-rotors are challenging due to multiple issues such as occlusion, quick camera motion, and out-of-view scenarios. Hence, algorithmic changes are required for dealing with images or video sequences obtained by multi-rotors. Therefore, we propose two approaches: a generic object tracker and a class-specific tracker. Both tracking settings require the object bounding box to be selected in the first frame. As part of the later steps, the object tracker uses the updated template set and the calibrated RGBD sensor data as inputs to track the target object using a Siamese network and a machine-learning model for depth estimation. The class-specific tracker is quite similar to the generic object tracker but has an additional auxiliary object classifier. The experimental study and validation were carried out in a robot simulation environment. The simulation environment was designed to serve multiple case scenarios using Gazebo. According to the experiment results, the class-specific object tracker performed better than the generic object tracker in terms of stability and accuracy. Experiments show that the proposed generic tracker achieves promising results on three challenging datasets. Our tracker runs at approximately 36 fps on GPU. © 2022 by the authors.

    @article{lincoln53298,
    volume = {14},
    number = {22},
    month = {November},
    author = {Mohamad Al Mdfaa and Geesara Kulathunga and Alexandr Klimchik},
    title = {3D-SiamMask: Vision-Based Multi-Rotor Aerial-Vehicle Tracking for a Moving Object},
    publisher = {MDPI},
    year = {2022},
    journal = {Remote Sensing},
    doi = {10.3390/rs14225756},
    pages = {5756},
    url = {https://eprints.lincoln.ac.uk/id/eprint/53298/},
    abstract = {This paper aims to develop a multi-rotor-based visual tracker for a specified moving object. Visual object-tracking algorithms for multi-rotors are challenging due to multiple issues such as occlusion, quick camera motion, and out-of-view scenarios. Hence, algorithmic changes are required for dealing with images or video sequences obtained by multi-rotors. Therefore, we propose two approaches: a generic object tracker and a class-specific tracker. Both tracking settings require the object bounding box to be selected in the first frame. As part of the later steps, the object tracker uses the updated template set and the calibrated RGBD sensor data as inputs to track the target object using a Siamese network and a machine-learning model for depth estimation. The class-specific tracker is quite similar to the generic object tracker but has an additional auxiliary object classifier. The experimental study and validation were carried out in a robot simulation environment. The simulation environment was designed to serve multiple case scenarios using Gazebo. According to the experiment results, the class-specific object tracker performed better than the generic object tracker in terms of stability and accuracy. Experiments show that the proposed generic tracker achieves promising results on three challenging datasets. Our tracker runs at approximately 36 fps on GPU. {\copyright} 2022 by the authors.}
    }

  • K. Smith and M. Hanheide, “Future leaders in agri-food robotics,” Food science and technology, vol. 36, iss. 3, p. 62–65, 2022. doi:10.1002/fsat.3603_15.x
    [BibTeX] [Abstract] [Download PDF]

    The AgriFoRwArdS EPSRC Centre for Doctoral Training (CDT) is at the fore of nurturing and developing the next cohort of experts in the agri-food robotics sector. The Centre, established by the University of Lincoln in collaboration with the University of Cambridge and the University of East Anglia and funded by UKRI’s Engineering and Physical Sciences Research Council, is providing fully funded opportunities for 50 students to undertake their PhD studies and become the next leaders in the agri-food robotics community. Through collaboration with industry partners and utilising the expertise of the three partner organisations, the AgriFoRwArdS CDT aims to ensure that its work, and that of its students, helps transform agri-food robotics and the wider food production industry.

    @article{lincoln51719,
    volume = {36},
    number = {3},
    month = {September},
    author = {Kate Smith and Marc Hanheide},
    title = {Future leaders in agri-food robotics},
    publisher = {Wiley},
    year = {2022},
    journal = {Food Science and Technology},
    doi = {10.1002/fsat.3603\_15.x},
    pages = {62--65},
    url = {https://eprints.lincoln.ac.uk/id/eprint/51719/},
    abstract = {The AgriFoRwArdS EPSRC Centre for Doctoral Training (CDT) is at the fore of nurturing and developing the next cohort of experts in the agri-food robotics sector. The Centre, established by the University of Lincoln in collaboration with the University of Cambridge and the University of East Anglia and funded by UKRI's Engineering and Physical Sciences Research Council, is providing fully funded opportunities for 50 students to undertake their PhD studies and become the next leaders in the agri-food robotics community. Through collaboration with industry partners and utilising the expertise of the three partner organisations, the AgriFoRwArdS CDT aims to ensure that its work, and that of its students, helps transform agri-food robotics and the wider food production industry.}
    }

  • L. Gong, M. Yu, V. Cutsuridis, S. Kollias, and S. Pearson, “A novel model fusion approach for greenhouse crop yield prediction,” Horticulturae, vol. 9, iss. 1, 2022. doi:10.3390/horticulturae9010005
    [BibTeX] [Abstract] [Download PDF]

    In this work, we have proposed a novel methodology for greenhouse tomato yield prediction, which is based on a hybrid of an explanatory biophysical model – the Tomgro model, and a machine learning model called CNN-RNN. The Tomgro and CNN-RNN models are calibrated/trained for predicting tomato yields while different fusion approaches (linear, Bayesian, neural network, random forest and gradient boosting) are exploited for fusing the prediction result of individual models for obtaining the final prediction results. The experimental results have shown that the model fusion approach achieves more accurate prediction results than the explanatory biophysical model or the machine learning model. Moreover, out of different model fusion approaches, the neural network one produced the most accurate tomato prediction results, with means and standard deviations of root mean square error (RMSE), r²-coefficient, Nash-Sutcliffe efficiency (NSE) and percent bias (PBIAS) being 17.69 ± 3.47 g/m², 0.9995 ± 0.0002, 0.9989 ± 0.0004 and 0.1791 ± 0.6837, respectively.

    @article{lincoln54930,
    volume = {9},
    number = {1},
    month = {December},
    author = {Liyun Gong and Miao Yu and Vassilis Cutsuridis and Stefanos Kollias and Simon Pearson},
    title = {A Novel Model Fusion Approach for Greenhouse Crop Yield Prediction},
    publisher = {MDPI},
    year = {2022},
    journal = {Horticulturae},
    doi = {10.3390/horticulturae9010005},
    url = {https://eprints.lincoln.ac.uk/id/eprint/54930/},
    abstract = {In this work, we have proposed a novel methodology for greenhouse tomato yield prediction, which is based on a hybrid of an explanatory biophysical model{--}the Tomgro model, and a machine learning model called CNN-RNN. The Tomgro and CNN-RNN models are calibrated/trained for predicting tomato yields while different fusion approaches (linear, Bayesian, neural network, random forest and gradient boosting) are exploited for fusing the prediction result of individual models for obtaining the final prediction results. The experimental results have shown that the model fusion approach achieves more accurate prediction results than the explanatory biophysical model or the machine learning model. Moreover, out of different model fusion approaches, the neural network one produced the most accurate tomato prediction results, with means and standard deviations of root mean square error (RMSE), r2-coefficient, Nash-Sutcliffe efficiency (NSE) and percent bias (PBIAS) being 17.69 {$\pm$} 3.47 g/m2 , 0.9995 {$\pm$} 0.0002, 0.9989 {$\pm$} 0.0004 and 0.1791 {$\pm$} 0.6837, respectively.}
    }

  • H. Harman and E. Sklar, “Multi-agent task allocation for harvest management,” Frontiers in robotics and ai, 2022. doi:10.3389/frobt.2022.864745
    [BibTeX] [Abstract] [Download PDF]

    Multi-agent task allocation methods seek to distribute a set of tasks fairly amongst a set of agents. In real-world settings, such as soft fruit farms, human labourers undertake harvesting tasks. The harvesting workforce is typically organised by farm manager(s) who assign workers to the fields that are ready to be harvested and team leaders who manage the workers in the fields. Creating these assignments is a dynamic and complex problem, as the skill of the workforce and the yield (quantity of ripe fruit picked) are variable and not entirely predictable. The work presented here posits that multi-agent task allocation methods can assist farm managers and team leaders to manage the harvesting workforce effectively and efficiently. There are three key challenges faced when adapting multi-agent approaches to this problem: (i) staff time (and thus cost) should be minimised; (ii) tasks must be distributed fairly to keep staff motivated; and (iii) the approach must be able to handle incremental (incomplete) data as the season progresses. An adapted variation of Round Robin (RR) is proposed for the problem of assigning workers to fields, and market-based task allocation mechanisms are applied to the challenge of assigning tasks to workers within the fields. To evaluate the approach introduced here, experiments are performed based on data that was supplied by a large commercial soft fruit farm for the past two harvesting seasons. The results demonstrate that our approach produces appropriate worker-to-field allocations. Moreover, simulated experiments demonstrate that there is a 'sweet spot' with respect to the ratio between two types of in-field workers.

    @article{lincoln52212,
    month = {October},
    title = {Multi-agent task allocation for harvest management},
    author = {Helen Harman and Elizabeth Sklar},
    publisher = {Frontiers},
    year = {2022},
    doi = {10.3389/frobt.2022.864745},
    journal = {Frontiers in Robotics and AI},
    url = {https://eprints.lincoln.ac.uk/id/eprint/52212/},
    abstract = {Multi-agent task allocation methods seek to distribute a set of tasks fairly amongst a set of agents. In real-world settings, such as soft fruit farms, human labourers undertake harvesting tasks. The harvesting workforce is typically organised by farm manager(s) who assign workers to the fields that are ready to be harvested and team leaders who manage the workers in the fields. Creating these assignments is a dynamic and complex problem, as the skill of the workforce and the yield (quantity of ripe fruit picked) are variable and not entirely predictable. The work presented here posits that multi-agent task allocation methods can assist farm managers and team leaders to manage the harvesting workforce effectively and efficiently. There are three key challenges faced when adapting multi-agent approaches to this problem: (i) staff time (and thus cost) should be minimised; (ii) tasks must be distributed fairly to keep staff motivated; and (iii) the approach must be able to handle incremental (incomplete) data as the season progresses. An adapted variation of Round Robin (RR) is proposed for the problem of assigning workers to fields, and market-based task allocation mechanisms are applied to the challenge of assigning tasks to workers within the fields. To evaluate the approach introduced here, experiments are performed based on data that was supplied by a large commercial soft fruit farm for the past two harvesting seasons. The results demonstrate that our approach produces appropriate worker-to-field allocations. Moreover, simulated experiments demonstrate that there is a 'sweet spot' with respect to the ratio between two types of in-field workers.}
    }

  • A. L. Zorrilla, I. M. Torres, and H. Cuayahuitl, “Audio embedding-aware dialogue policy learning,” Ieee transactions on audio, speech, and language processing, vol. 31, p. 525–538, 2022. doi:10.1109/TASLP.2022.3225658
    [BibTeX] [Abstract] [Download PDF]

    Following the success of Natural Language Processing (NLP) transformers pretrained via self-supervised learning, similar models have been proposed recently for speech processing such as Wav2Vec2, HuBERT and UniSpeech-SAT. An interesting yet unexplored area of application of these models is Spoken Dialogue Systems, where the users' audio signals are typically just mapped to word-level features derived from an Automatic Speech Recogniser (ASR), and then processed using NLP techniques to generate system responses. This paper reports a comprehensive comparison of dialogue policies trained using ASR-based transcriptions and extended with the aforementioned audio processing transformers in the DSTC2 task. Whilst our dialogue policies are trained with supervised and policy-based deep reinforcement learning, they are assessed using both automatic task completion metrics and a human evaluation. Our results reveal that using audio embeddings is more beneficial than detrimental in most of our trained dialogue policies, and that the benefits are stronger for supervised learning than reinforcement learning.

    @article{lincoln52689,
    volume = {31},
    month = {November},
    author = {Asier Lopez Zorrilla and M. Ines Torres and Heriberto Cuayahuitl},
    title = {Audio Embedding-Aware Dialogue Policy Learning},
    publisher = {IEEE},
    year = {2022},
    journal = {IEEE Transactions on Audio, Speech, and Language Processing},
    doi = {10.1109/TASLP.2022.3225658},
    pages = {525--538},
    url = {https://eprints.lincoln.ac.uk/id/eprint/52689/},
    abstract = {Following the success of Natural Language Processing (NLP) transformers pretrained via self-supervised learning, similar models have been proposed recently for speech processing such as Wav2Vec2, HuBERT and UniSpeech-SAT. An interesting yet unexplored area of application of these models is Spoken Dialogue Systems, where the users' audio signals are typically just mapped to word-level features derived from an Automatic Speech Recogniser (ASR), and then processed using NLP techniques to generate system responses. This paper reports a comprehensive comparison of dialogue policies trained using ASR-based transcriptions and extended with the aforementioned audio processing transformers in the DSTC2 task. Whilst our dialogue policies are trained with supervised and policy-based deep reinforcement learning, they are assessed using both automatic task completion metrics and a human evaluation. Our results reveal that using audio embeddings is more beneficial than detrimental in most of our trained dialogue policies, and that the benefits are stronger for supervised learning than reinforcement learning.}
    }

  • H. R. Karbasian, J. A. Esfahani, A. M. Aliyu, and K. C. Kim, “Numerical analysis of wind turbines blade in deep dynamic stall,” Renewable energy, vol. 197, p. 1094–1105, 2022. doi:10.1016/j.renene.2022.07.115
    [BibTeX] [Abstract] [Download PDF]

    This study numerically investigates kinematics of dynamic stall, which is a crucial matter in wind turbines. Distinct movements of the blade with the same angle of attack (AOA) profile may provoke the flow field due to their kinematic characteristics. This induction can significantly change aerodynamic loads and dynamic stall process in wind turbines. The simulation involves a 3D NACA 0012 airfoil with two distinct pure-heaving and pure-pitching motions. The flow field over this 3D airfoil was simulated using Delayed Detached Eddy Simulations (DDES). The airfoil begins to oscillate at a Reynolds number of Re = 1.35 × 10⁵. The given attack angle profile remains unchanged for all cases. It is shown that the flow structures differ notably between pure-heaving and pure-pitching motions, such that the pure-pitching motions induce higher drag force on the airfoil than the pure-heaving motion. Remarkably, heaving motion causes excessive turbulence in the boundary layer, and then the coherent structures seem to be more stable. Hence, pure-heaving motion contains more energetic core vortices, yielding higher lift at post-stall. In contrast to conventional studies on the dynamic stall of wind turbines, current results show that airfoils' kinematics significantly affect the load predictions during the dynamic stall phenomenon.

    @article{lincoln50417,
    volume = {197},
    month = {September},
    author = {Hamid Reza Karbasian and Javad Abolfazli Esfahani and Aliyu Musa Aliyu and Kyung Chun Kim},
    title = {Numerical analysis of wind turbines blade in deep dynamic stall},
    publisher = {Elsevier},
    year = {2022},
    journal = {Renewable Energy},
    doi = {10.1016/j.renene.2022.07.115},
    pages = {1094--1105},
    url = {https://eprints.lincoln.ac.uk/id/eprint/50417/},
    abstract = {This study numerically investigates kinematics of dynamic stall, which is a crucial matter in wind turbines. Distinct movements of the blade with the same angle of attack (AOA) profile may provoke the flow field due to their kinematic characteristics. This induction can significantly change aerodynamic loads and dynamic stall process in wind turbines. The simulation involves a 3D NACA 0012 airfoil with two distinct pure-heaving and pure-pitching motions. The flow field over this 3D airfoil was simulated using Delayed Detached Eddy Simulations (DDES). The airfoil begins to oscillate at a Reynolds number of Re = 1.35 {$\times$} 10$^{5}$. The given attack angle profile remains unchanged for all cases. It is shown that the flow structures differ notably between pure-heaving and pure-pitching motions, such that the pure-pitching motions induce higher drag force on the airfoil than the pure-heaving motion. Remarkably, heaving motion causes excessive turbulence in the boundary layer, and then the coherent structures seem to be more stable. Hence, pure-heaving motion contains more energetic core vortices, yielding higher lift at post-stall. In contrast to conventional studies on the dynamic stall of wind turbines, current results show that airfoils' kinematics significantly affect the load predictions during the dynamic stall phenomenon.}
    }

  • F. Camara and C. Fox, “Unfreezing autonomous vehicles with game theory, proxemics, and trust,” Frontiers in computer science, 2022. doi:10.3389/fcomp.2022.969194
    [BibTeX] [Abstract] [Download PDF]

    Recent years have witnessed the rapid deployment of robotic systems in public places such as roads, pavements, workplaces and care homes. Robot navigation in environments with static objects is largely solved, but navigating around humans in dynamic environments remains an active research question for autonomous vehicles (AVs). To navigate in human social spaces, self-driving cars and other robots must also show social intelligence. This involves predicting and planning around pedestrians, understanding their personal space, and establishing trust with them. Most current AVs, for legal and safety reasons, consider pedestrians to be obstacles, so these AVs always stop for or replan to drive around them. But this highly safe nature may lead pedestrians to take advantage over them and slow their progress, even to a complete halt. We provide a review of our recent research on predicting and controlling human–AV interactions, which combines game theory, proxemics and trust, and unifies these fields via quantitative, probabilistic models and robot controllers, to solve this 'freezing robot' problem.

    @article{lincoln52159,
    month = {October},
    title = {Unfreezing autonomous vehicles with game theory, proxemics, and trust},
    author = {Fanta Camara and Charles Fox},
    publisher = {Frontiers Media},
    year = {2022},
    doi = {10.3389/fcomp.2022.969194},
    journal = {Frontiers in Computer Science},
    url = {https://eprints.lincoln.ac.uk/id/eprint/52159/},
    abstract = {Recent years have witnessed the rapid deployment of robotic systems in public places such as roads, pavements, workplaces and care homes. Robot navigation in environments with static objects is largely solved, but navigating around humans in dynamic environments remains an active research question for autonomous vehicles (AVs). To navigate in human social spaces, self-driving cars and other robots must also show social intelligence. This involves predicting and planning around pedestrians, understanding their personal space, and establishing trust with them. Most current AVs, for legal and safety reasons, consider pedestrians to be obstacles, so these AVs always stop for or replan to drive around them. But this highly safe nature may lead pedestrians to take advantage over them and slow their progress, even to a complete halt. We provide a review of our recent research on predicting and controlling human--AV interactions, which combines game theory, proxemics and trust, and unifies these fields via quantitative, probabilistic models and robot controllers, to solve this 'freezing robot' problem.}
    }

  • G. Mengaldo, F. Renda, S. Brunton, M. Bacher, M. Calisti, C. Duriez, G. Chirikjian, and C. Laschi, “A concise guide to modelling the physics of embodied intelligence in soft robotics.,” Nature reviews physics, iss. 4, p. 595–610, 2022. doi:10.1038/s42254-022-00481-z
    [BibTeX] [Abstract] [Download PDF]

    Embodied intelligence (intelligence that requires and leverages a physical body) is a well-known paradigm in soft robotics, but its mathematical description and consequent computational modelling remain elusive, with a need for models that can be used for design and control purposes. We argue that filling this gap will enable full uptake of embodied intelligence in soft robots. We provide a concise guide to the main mathematical modelling approaches, and consequent computational modelling strategies, that can be used to describe soft robots and their physical interactions with the surrounding environment, including fluid and solid media. We aim to convey the challenges and opportunities within the context of modelling the physical interactions underpinning embodied intelligence. We emphasize that interdisciplinary work is required, especially in the context of fully coupled robot-environment interaction modelling. Promoting this dialogue across disciplines is a necessary step to further advance the field of soft robotics.

    @article{lincoln52104,
    number = {4},
    month = {September},
    author = {Gianmarco Mengaldo and Federico Renda and Steven Brunton and Moritz Bacher and Marcello Calisti and Christian Duriez and Gregory Chirikjian and Cecilia Laschi},
    title = {A concise guide to modelling the physics of embodied intelligence in soft robotics.},
    publisher = {Nature Research},
    year = {2022},
    journal = {Nature Reviews Physics},
    doi = {10.1038/s42254-022-00481-z},
    pages = {595--610},
    url = {https://eprints.lincoln.ac.uk/id/eprint/52104/},
    abstract = {Embodied intelligence (intelligence that requires and leverages a physical body) is a well-known paradigm in soft robotics, but its mathematical description and consequent computational modelling remain elusive, with a need for models that can be used for design and control purposes. We argue that filling this gap will enable full uptake of embodied intelligence in soft robots. We provide a concise guide to the main mathematical modelling approaches, and consequent computational modelling strategies, that can be used to describe soft robots and their physical interactions with the surrounding environment, including fluid and solid media. We aim to convey the challenges and opportunities within the context of modelling the physical interactions underpinning embodied intelligence. We emphasize that interdisciplinary work is required, especially in the context of fully coupled robot-environment interaction modelling. Promoting this dialogue across disciplines is a necessary step to further advance the field of soft robotics.}
    }

  • A. G. Esfahani, G. Das, I. Gould, P. Zarafshan, V. R. Sugathakumary, J. Heselden, A. Badiee, I. Wright, and S. Pearson, “Applications of robotic and solar energy in precision agriculture and smart farming,” in Solar energy advancements in agriculture and food production systems, S. Gorjian and P. E. Campana, Eds., Elsevier, 2022. doi:10.1016/C2020-0-03304-9
    [BibTeX] [Abstract] [Download PDF]

    Population growth, healthy diet requirements, and changes in food demand towards a more plant-based protein diet increase existing pressures for food production and land-use change. The increasing demand and current agriculture approaches jeopardise the health of soil and biodiversity which will affect the future ecosystem and food production. One of the solutions to the increasing pressure on agriculture is PA which offers to minimize the use of resources, including land, water, energy, herbicides, and pesticides, and maximise the yield. The development of PA requires a multidisciplinary approach including engineering, AI, and robotics. Robots will play a crucial role in delivering PA and will pave the way toward sustainable healthy food production. While PA is the way forward in the agriculture industry the related devices to collect various supporting data and also the agriculture machinery need to be run by clean energy to ensure sustainable growth in the sector. Among renewable energy sources, solar energy and solar PV have shown a great potential to dominate the future of sustainable energy and agriculture developments. For developing PV in rural and off-grid agriculture farms and lands the use of solar-powered devices is unavoidable. Such a transition to photovoltaic agriculture requires significant changes to agricultural practices and the adoption of smart technologies like IoT, robotics, and WSN. Future food production needs to adapt to changing consumer behaviour along with the rapidly deteriorating environmental factors. PA is also a response to future food production challenges where one of its key aims is to improve sustainability to minimize the use of diminishing resources and minimize GHG emissions by use of renewable energy sources. Along with these adaptations, the new technologies should be using green energy sources (i.e., solar energy) for meeting the power requirements for sustainable developments of these smart technologies. 
Since there is a rapid inflow of robotic technologies into the agriculture sector, increasing power demand is inevitable, especially in remote areas where PV-based systems can play a game-changing role. It is expected for the agriculture sector to witness a technological revolution toward sustainable food production which cannot be achieved without solar PV development and support.

    @incollection{lincoln49943,
    month = {June},
    author = {Amir Ghalamzan Esfahani and Gautham Das and Iain Gould and Payam Zarafshan and Vishnu Rajendran Sugathakumary and James Heselden and Amir Badiee and Isobel Wright and Simon Pearson},
    booktitle = {Solar Energy Advancements in Agriculture and Food Production Systems},
    editor = {Shiva Gorjian and Pietro Elia Campana},
    title = {Applications of robotic and solar energy in precision agriculture and smart farming},
    publisher = {Elsevier},
    doi = {10.1016/C2020-0-03304-9},
    year = {2022},
    url = {https://eprints.lincoln.ac.uk/id/eprint/49943/},
    abstract = {Population growth, healthy diet requirements, and changes in food demand towards a more plant-based protein diet increase existing pressures for food production and land-use change. The increasing demand and current agriculture approaches jeopardise the health of soil and biodiversity which will affect the future ecosystem and food production. One of the solutions to the increasing pressure on agriculture is PA which offers to minimize the use of resources, including land, water, energy, herbicides, and pesticides, and maximise the yield. The development of PA requires a multidisciplinary approach including engineering, AI, and robotics. Robots will play a crucial role in delivering PA and will pave the way toward sustainable healthy food production.
    While PA is the way forward in the agriculture industry the related devices to collect various supporting data and also the agriculture machinery need to be run by clean energy to ensure sustainable growth in the sector. Among renewable energy sources, solar energy and solar PV have shown a great potential to dominate the future of sustainable energy and agriculture developments. For developing PV in rural and off-grid agriculture farms and lands the use of solar-powered devices is unavoidable. Such a transition to photovoltaic agriculture requires significant changes to agricultural practices and the adoption of smart technologies like IoT, robotics, and WSN.
    Future food production needs to adapt to changing consumer behaviour along with the rapidly deteriorating environmental factors. PA is also a response to future food production challenges where one of its key aims is to improve sustainability to minimize the use of diminishing resources and minimize GHG emissions by use of renewable energy sources. Along with these adaptations, the new technologies should be using green energy sources (i.e., solar energy) for meeting the power requirements for sustainable developments of these smart technologies. Since there is a rapid inflow of robotic technologies into the agriculture sector, increasing power demand is inevitable, especially in remote areas where PV-based systems can play a game-changing role. It is expected for the agriculture sector to witness a technological revolution toward sustainable food production which cannot be achieved without solar PV development and support.}
    }

  • S. Parsa, H. A. Maior, A. R. E. Thumwood, M. L. Wilson, M. Hanheide, and A. G. Esfahani, “The impact of motion scaling and haptic guidance on operators' workload and performance in teleoperation,” in Chi conference on human factors in computing systems extended abstracts, 2022, p. 1–7. doi:10.1145/3491101.3519814
    [BibTeX] [Abstract] [Download PDF]

    The use of human operator managed robotics, especially for safety critical work, includes a shift from physically demanding to mentally challenging work, and new techniques for Human-Robot Interaction are being developed to make teleoperation easier and more accurate. This study evaluates the impact of combining two teleoperation support features (i) scaling the velocity mapping of leader-follower arms (motion scaling), and (ii) haptic-feedback guided shared control (haptic guidance). We used purposely difficult peg-in-the-hole tasks requiring high precision insertion and manipulation, and obstacle avoidance, and evaluated the impact of using individual and combined support features on a) task performance and b) operator workload. As expected, long distance tasks led to higher mental workload and lower performance than short distance tasks. Our results showed that motion scaling and haptic guidance impact workload and improve performance during more difficult tasks, and we discussed this in contrast to participants' preference for using different teleoperation features.

    @inproceedings{lincoln50609,
    month = {April},
    author = {Soran Parsa and Horia A. Maior and Alex Reeve Elliott Thumwood and Max L Wilson and Marc Hanheide and Amir Ghalamzan Esfahani},
    booktitle = {CHI Conference on Human Factors in Computing Systems Extended Abstracts},
    title = {The Impact of Motion Scaling and Haptic Guidance on Operators' Workload and Performance in Teleoperation},
    publisher = {ACM},
    doi = {10.1145/3491101.3519814},
    pages = {1--7},
    year = {2022},
    url = {https://eprints.lincoln.ac.uk/id/eprint/50609/},
    abstract = {The use of human operator managed robotics, especially for safety critical work, includes a shift from physically demanding to mentally challenging work, and new techniques for Human-Robot Interaction are being developed to make teleoperation easier and more accurate. This study evaluates the impact of combining two teleoperation support features (i) scaling the velocity mapping of leader-follower arms (motion scaling), and (ii) haptic-feedback guided shared control (haptic guidance). We used purposely difficult peg-in-the-hole tasks requiring high precision insertion and manipulation, and obstacle avoidance, and evaluated the impact of using individual and combined support features on a) task performance and b) operator workload. As expected, long distance tasks led to higher mental workload and lower performance than short distance tasks. Our results showed that motion scaling and haptic guidance impact workload and improve performance during more difficult tasks, and we discussed this in contrast to participants' preference for using different teleoperation features.}
    }
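The motion-scaling feature studied in the entry above amounts to a velocity mapping between the leader and follower arms. A minimal sketch, assuming a simple Cartesian velocity command; the function name, scale factor, and 3-DoF command below are illustrative, not the paper's implementation:

```python
import numpy as np

def scale_follower_velocity(leader_velocity, scale=0.5):
    """Map a leader-arm velocity command onto the follower arm.

    A scale < 1 slows the follower relative to the operator's hand
    motion, trading speed for precision in fine insertion tasks.
    """
    return scale * np.asarray(leader_velocity, dtype=float)

# Illustrative 3-DoF Cartesian velocity command from the leader arm (m/s).
v_leader = [0.10, -0.04, 0.02]
v_follower = scale_follower_velocity(v_leader, scale=0.25)
```

In practice the scale factor would be tuned per task, since the study finds its benefit depends on task difficulty.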

  • F. Atas, G. Cielniak, and L. Grimstad, “Elevation state-space: surfel-based navigation in uneven environments for mobile robots,” in 2022 ieee/rsj international conference on intelligent robots and systems (iros), 2022.
    [BibTeX] [Abstract] [Download PDF]

    This paper introduces a new method for robot motion planning and navigation in uneven environments through a surfel representation of underlying point clouds. The proposed method addresses the shortcomings of state-of-the-art navigation methods by incorporating both kinematic and physical constraints of a robot with standard motion planning algorithms (e.g., those from the Open Motion Planning Library), thus enabling efficient sampling-based planners for challenging uneven terrain navigation on raw point cloud maps. Unlike techniques based on Digital Elevation Maps (DEMs), our novel surfel-based state-space formulation and implementation are based on raw point cloud maps, allowing for the modeling of overlapping surfaces such as bridges, piers, and tunnels. Experimental results demonstrate the robustness of the proposed method for robot navigation in real and simulated unstructured environments. The proposed approach also optimizes planners’ performances by boosting their success rates up to 5x for challenging unstructured terrain planning and navigation, thanks to our surfel-based approach’s robot constraint-aware sampling strategy. Finally, we provide an open-source implementation of the proposed method to benefit the robotics community.

    @inproceedings{lincoln52845,
    booktitle = {2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
    month = {October},
    title = {Elevation State-Space: Surfel-Based Navigation in Uneven Environments for Mobile Robots},
    author = {Fetullah Atas and Grzegorz Cielniak and Lars Grimstad},
    publisher = {IEEE},
    year = {2022},
    url = {https://eprints.lincoln.ac.uk/id/eprint/52845/},
    abstract = {This paper introduces a new method for robot motion planning and navigation in uneven environments through a surfel representation of underlying point clouds. The proposed method addresses the shortcomings of state-of-the-art navigation methods by incorporating both kinematic and physical constraints of a robot with standard motion planning algorithms (e.g., those from the Open Motion Planning Library), thus enabling efficient sampling-based planners for challenging uneven terrain navigation on raw point cloud maps. Unlike techniques based on Digital Elevation Maps (DEMs), our novel surfel-based state-space formulation and implementation are based on raw point cloud maps, allowing for the modeling of overlapping surfaces such as bridges, piers, and tunnels. Experimental results demonstrate the robustness of the proposed method for robot navigation in real and simulated unstructured environments. The proposed approach also optimizes planners' performances by boosting their success rates up to 5x for challenging unstructured terrain planning and navigation, thanks to our surfel-based approach's robot constraint-aware sampling strategy. Finally, we provide an open-source implementation of the proposed method to benefit the robotics community.}
    }
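As a rough illustration of the surfel idea above (not the authors' implementation, which operates on full point cloud maps with robot kinematic constraints), a local plane fit via PCA can classify a terrain patch as traversable by its inclination. All names and the tilt threshold are assumptions:

```python
import numpy as np

def surfel_traversability(points, max_tilt_deg=20.0):
    """Fit a surfel (local planar patch) to neighbouring map points and
    test whether its inclination is within the robot's tilt limit.

    points: N x 3 array of 3D map points. Returns (centre, normal, traversable).
    """
    centre = points.mean(axis=0)
    cov = np.cov((points - centre).T)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    normal = eigvecs[:, 0]                   # smallest-variance direction
    if normal[2] < 0:                        # orient the normal upwards
        normal = -normal
    tilt_deg = np.degrees(np.arccos(np.clip(normal[2], -1.0, 1.0)))
    return centre, normal, bool(tilt_deg <= max_tilt_deg)

# A nearly flat patch passes; a 45-degree slope is rejected.
flat = np.array([[x, y, 0.01 * x] for x in range(5) for y in range(5)], float)
steep = np.array([[x, y, 1.0 * x] for x in range(5) for y in range(5)], float)
```

Because the surfels are fitted to raw points rather than a 2.5D elevation grid, overlapping surfaces such as bridges or tunnels each get their own surfel.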

  • H. A. Montes and G. Cielniak, “Multiple broccoli head detection and tracking in 3d point clouds for autonomous harvesting,” in Aaai – ai for agriculture and food systems, 2022.
    [BibTeX] [Abstract] [Download PDF]

    This paper explores a tracking method for broccoli heads that combines a Particle Filter and 3D feature detectors to track multiple crops in a sequence of 3D data frames. The tracking accuracy is verified based on a data association method that matches detections with tracks over each frame. The particle filter incorporates a simple motion model to produce the posterior particle distribution, and a similarity model as a probability function to measure the tracking accuracy. The method is tested with datasets of two broccoli varieties collected in planted fields from two different countries. Our evaluation shows the tracking method reduces the number of false negatives produced by the detectors on their own. In addition, the method accurately detects and tracks the 3D locations of broccoli heads relative to the vehicle at high frame rates.

    @inproceedings{lincoln48675,
    booktitle = {AAAI - AI for Agriculture and Food Systems},
    month = {February},
    title = {Multiple broccoli head detection and tracking in 3D point clouds for autonomous harvesting},
    author = {Hector A. Montes and Grzegorz Cielniak},
    year = {2022},
    url = {https://eprints.lincoln.ac.uk/id/eprint/48675/},
    abstract = {This paper explores a tracking method for broccoli heads that combines a Particle Filter and 3D feature detectors to track multiple crops in a sequence of 3D data frames. The tracking accuracy is verified based on a data association method that matches detections with tracks over each frame. The particle filter incorporates a simple motion model to produce the posterior particle distribution, and a similarity model as a probability function to measure the tracking accuracy. The method is tested with datasets of two broccoli varieties collected in planted fields from two different countries. Our evaluation shows the tracking method reduces the number of false negatives produced by the detectors on their own. In addition, the method accurately detects and tracks the 3D locations of broccoli heads relative to the vehicle at high frame rates.}
    }
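The predict-update-resample loop described above can be sketched minimally in Python. The diffusion motion model, Gaussian similarity weighting, noise levels and variable names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(particles, motion_noise=0.02):
    """Simple motion model: diffuse particles with Gaussian noise to
    absorb vehicle motion and measurement uncertainty."""
    return particles + rng.normal(0.0, motion_noise, particles.shape)

def update(particles, detection, sigma=0.05):
    """Similarity model: weight particles by a Gaussian of their
    distance to the associated 3D detection."""
    sq_dist = np.sum((particles - detection) ** 2, axis=1)
    weights = np.exp(-0.5 * sq_dist / sigma ** 2)
    return weights / weights.sum()

def resample(particles, weights):
    """Importance resampling: duplicate high-weight particles."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

# Track a single broccoli head across two frames of 3D detections.
particles = rng.normal([0.5, 0.2, 0.1], 0.1, size=(500, 3))
for detection in ([0.51, 0.21, 0.10], [0.52, 0.20, 0.11]):
    particles = predict(particles)
    weights = update(particles, np.asarray(detection))
    particles = resample(particles, weights)

estimate = particles.mean(axis=0)  # tracked 3D head position
```

In the multi-target setting of the paper, a data association step would first match each detection to an existing track before running this loop per head.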

  • R. Polvara, S. M. Mellado, I. Hroob, G. Cielniak, and M. Hanheide, “Collection and evaluation of a long-term 4d agri-robotic dataset,” in Perception and navigation for autonomous robotics in unstructured and dynamic environments, 2022. doi:10.5281/zenodo.7135175
    [BibTeX] [Abstract] [Download PDF]

    Long-term autonomy is one of the most demanded capabilities in a robot. The possibility to perform the same task over and over on a long temporal horizon, offering a high standard of reproducibility and robustness, is appealing. Long-term autonomy can play a crucial role in the adoption of robotic systems for precision agriculture, for example in assisting humans in monitoring and harvesting crops in a large orchard. With this scope in mind, we report an ongoing effort in the long-term deployment of an autonomous mobile robot in a vineyard for data collection across multiple months. The main aim is to collect data from the same area at different points in time so as to be able to analyse the impact of environmental changes on the mapping and localisation tasks. In this work, we present a map-based localisation study using 4 data sessions. We identify expected failures when the pre-built map visually differs from the environment's current appearance, and we anticipate LTS-Net, a solution aimed at extracting stable temporal features for improving long-term 4D localisation results.

    @inproceedings{lincoln52350,
    booktitle = {Perception and Navigation for Autonomous Robotics in Unstructured and Dynamic Environments},
    month = {October},
    title = {Collection and Evaluation of a Long-Term 4D Agri-Robotic Dataset},
    author = {Riccardo Polvara and Sergio Molina Mellado and Ibrahim Hroob and Grzegorz Cielniak and Marc Hanheide},
    year = {2022},
    doi = {10.5281/zenodo.7135175},
    url = {https://eprints.lincoln.ac.uk/id/eprint/52350/},
    abstract = {Long-term autonomy is one of the most demanded capabilities in a robot. The possibility to perform the same task over and over on a long temporal horizon, offering a high standard of reproducibility and robustness, is appealing. Long-term autonomy can play a crucial role in the adoption of robotic systems for precision agriculture, for example in assisting humans in monitoring and harvesting crops in a large orchard. With this scope in mind, we report an ongoing effort in the long-term deployment of an autonomous mobile robot in a vineyard for data collection across multiple months. The main aim is to collect data from the same area at different points in time so as to be able to analyse the impact of environmental changes on the mapping and localisation tasks.
    In this work, we present a map-based localisation study using 4 data sessions. We identify expected failures when the pre-built map visually differs from the environment's current appearance, and we anticipate LTS-Net, a solution aimed at extracting stable temporal features for improving long-term 4D localisation results.}
    }

  • L. Castri, S. Mghames, M. Hanheide, and N. Bellotto, “Causal discovery of dynamic models for predicting human spatial interactions,” in International conference on social robotics (icsr), 2022.
    [BibTeX] [Abstract] [Download PDF]

    Exploiting robots for activities in human-shared environments, whether warehouses, shopping centres or hospitals, calls for such robots to understand the underlying physical interactions between nearby agents and objects. In particular, modelling cause-and-effect relations between the latter can help to predict unobserved human behaviours and anticipate the outcome of specific robot interventions. In this paper, we propose an application of causal discovery methods to model human-robot spatial interactions, trying to understand human behaviours from real-world sensor data in two possible scenarios: humans interacting with the environment, and humans interacting with obstacles. New methods and practical solutions are discussed to exploit, for the first time, a state-of-the-art causal discovery algorithm in some challenging human environments, with potential application in many service robotics scenarios. To demonstrate the utility of the causal models obtained from real-world datasets, we present a comparison between causal and non-causal prediction approaches. Our results show that the causal model correctly captures the underlying interactions of the considered scenarios and improves its prediction accuracy.

    @inproceedings{lincoln52266,
    booktitle = {International Conference on Social Robotics (ICSR)},
    month = {October},
    title = {Causal Discovery of Dynamic Models for Predicting Human Spatial Interactions},
    author = {Luca Castri and Sariah Mghames and Marc Hanheide and Nicola Bellotto},
    publisher = {Springer},
    year = {2022},
    url = {https://eprints.lincoln.ac.uk/id/eprint/52266/},
    abstract = {Exploiting robots for activities in human-shared environments, whether warehouses, shopping centres or hospitals, calls for such robots to understand the underlying physical interactions between nearby agents and objects. In particular, modelling cause-and-effect relations between the latter can help to predict unobserved human behaviours and anticipate the outcome of specific robot interventions. In this paper, we propose an application of causal discovery methods to model human-robot spatial interactions, trying to understand human behaviours from real-world sensor data in two possible scenarios: humans interacting with the environment, and humans interacting with obstacles. New methods and practical solutions are discussed to exploit, for the first time, a state-of-the-art causal discovery algorithm in some challenging human environments, with potential application in many service robotics scenarios. To demonstrate the utility of the causal models obtained from real-world datasets, we present a comparison between causal and non-causal prediction approaches. Our results show that the causal model correctly captures the underlying interactions of the considered scenarios and improves its prediction accuracy.}
    }

  • N. Wang, G. Das, and A. Millard, “Learning cooperative behaviours in adversarial multi-agent systems,” in Towards autonomous robotic systems, Cham, 2022, p. 179–189. doi:10.1007/978-3-031-15908-4_15
    [BibTeX] [Abstract] [Download PDF]

    This work extends an existing virtual multi-agent platform called RoboSumo to create TripleSumo, a platform for investigating multi-agent cooperative behaviors in continuous action spaces, with physical contact in an adversarial environment. In this paper we investigate a scenario in which two agents, namely 'Bug' and 'Ant', must team up and push another agent, 'Spider', out of the arena. To tackle this goal, the newly added agent 'Bug' is trained during an ongoing match between 'Ant' and 'Spider'. 'Bug' must develop awareness of the other agents' actions, infer the strategy of both sides, and eventually learn an action policy to cooperate. The reinforcement learning algorithm Deep Deterministic Policy Gradient (DDPG) is implemented with a hybrid reward structure combining dense and sparse rewards. The cooperative behavior is quantitatively evaluated by the mean probability of winning the match and the mean number of steps needed to win.

    @inproceedings{lincoln52230,
    month = {September},
    author = {Ni Wang and Gautham Das and Alan Millard},
    booktitle = {Towards Autonomous Robotic Systems},
    address = {Cham},
    title = {Learning Cooperative Behaviours in Adversarial Multi-agent Systems},
    publisher = {Springer International Publishing},
    year = {2022},
    doi = {10.1007/978-3-031-15908-4\_15},
    pages = {179--189},
    url = {https://eprints.lincoln.ac.uk/id/eprint/52230/},
    abstract = {This work extends an existing virtual multi-agent platform called RoboSumo to create TripleSumo---a platform for investigating multi-agent cooperative behaviors in continuous action spaces, with physical contact in an adversarial environment. In this paper we investigate a scenario in which two agents, namely `Bug' and `Ant', must team up and push another agent `Spider' out of the arena. To tackle this goal, the newly added agent `Bug' is trained during an ongoing match between `Ant' and `Spider'. `Bug' must develop awareness of the other agents' actions, infer the strategy of both sides, and eventually learn an action policy to cooperate. The reinforcement learning algorithm Deep Deterministic Policy Gradient (DDPG) is implemented with a hybrid reward structure combining dense and sparse rewards. The cooperative behavior is quantitatively evaluated by the mean probability of winning the match and mean number of steps needed to win.}
    }
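The hybrid dense-plus-sparse reward mentioned in the abstract can be illustrated with a toy shaping function. The function name, step penalty, and bonus magnitude below are invented for illustration and are not the paper's exact reward:

```python
import numpy as np

def hybrid_reward(spider_pos, arena_radius, pushed_out, step_cost=0.01):
    """Toy hybrid reward for the cooperating 'Bug' agent.

    Dense term: how far 'Spider' has been pushed towards the arena
    boundary, minus a small per-step time penalty.
    Sparse term: a large bonus once 'Spider' leaves the arena.
    """
    dense = float(np.linalg.norm(spider_pos)) / arena_radius - step_cost
    sparse = 100.0 if pushed_out else 0.0
    return dense + sparse

# Spider at the centre: only the time penalty applies.
r_start = hybrid_reward([0.0, 0.0], arena_radius=2.0, pushed_out=False)
# Spider pushed out at the boundary: the sparse bonus dominates.
r_win = hybrid_reward([2.0, 0.0], arena_radius=2.0, pushed_out=True)
```

The dense term gives DDPG a gradient to follow during the long stretches when the sparse win signal is absent, which is the usual motivation for such hybrid structures.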

  • K. Nazari, W. Mandil, and A. G. Esfahani, “Proactive slip control by learned slip model and trajectory adaptation,” in 6th conference on robot learning, 2022.
    [BibTeX] [Abstract] [Download PDF]

    This paper presents a novel control approach to dealing with object slip during robotic manipulative movements. Slip is a major cause of failure in many robotic grasping and manipulation tasks. Existing works increase grip force to avoid/control slip. However, this may not be feasible when (i) the robot cannot increase the gripping force, as the max gripping force is already applied, or (ii) increased force damages the grasped object, such as soft fruit. Moreover, the robot fixes the gripping force when it forms a stable grasp on the surface of an object, and changing the gripping force during real-time manipulation may not be an effective control policy. We propose a novel control approach to slip avoidance including a learned action-conditioned slip predictor and a constrained optimiser avoiding a predicted slip given a desired robot action. We show the effectiveness of the proposed trajectory adaptation method with a receding horizon controller in a series of real-robot test cases. Our experimental results show our proposed data-driven predictive controller can control slip for objects unseen in training.

    @inproceedings{lincoln52220,
    booktitle = {6th Conference on Robot Learning},
    month = {November},
    title = {Proactive slip control by learned slip model and trajectory adaptation},
    author = {Kiyanoush Nazari and Willow Mandil and Amir Ghalamzan Esfahani},
    year = {2022},
    journal = {Conference of Robot Learning},
    url = {https://eprints.lincoln.ac.uk/id/eprint/52220/},
    abstract = {This paper presents a novel control approach to dealing with object slip during robotic manipulative movements. Slip is a major cause of failure in many robotic grasping and manipulation tasks. Existing works increase grip force to avoid/control slip. However, this may not be feasible when (i) the robot cannot increase the gripping force, as the max gripping force is already applied, or (ii) increased force damages the grasped object, such as soft fruit. Moreover, the robot fixes the gripping force when it forms a stable grasp on the surface of an object, and changing the gripping force during real-time manipulation may not be an effective control policy. We propose a novel control approach to slip avoidance including a learned action-conditioned slip predictor and a constrained optimiser avoiding a predicted slip given a desired robot action. We show the effectiveness of the proposed trajectory adaptation method with a receding horizon controller in a series of real-robot test cases. Our experimental results show our proposed data-driven predictive controller can control slip for objects unseen in training.}
    }
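The trajectory-adaptation step can be caricatured without the learned predictor: substitute a toy slip model and shrink the commanded action until predicted slip falls below a threshold. This simple back-off rule stands in for the paper's constrained optimiser, and all names and thresholds are assumptions for illustration:

```python
import numpy as np

def adapt_action(desired_action, slip_predictor, slip_threshold=0.5, max_iters=20):
    """Shrink a desired action until the predicted slip is acceptable.

    A crude stand-in for constrained optimisation: scale the action
    towards zero until slip_predictor(action) <= slip_threshold.
    """
    action = np.asarray(desired_action, dtype=float)
    for _ in range(max_iters):
        if slip_predictor(action) <= slip_threshold:
            break
        action = 0.9 * action
    return action

# Toy slip model: predicted slip grows with the action magnitude.
toy_predictor = lambda a: float(np.linalg.norm(a))
adapted = adapt_action([1.0, 0.0], toy_predictor, slip_threshold=0.5)
```

In the paper's receding-horizon setting, this adaptation would be re-solved at every control step against the learned action-conditioned predictor rather than a fixed toy model.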

  • T. Choi and G. Cielniak, “Channel randomisation with domain control for effective representation learning of visual anomalies in strawberries,” in Ai for agriculture and food systems, 2022.
    [BibTeX] [Abstract] [Download PDF]

    Channel Randomisation (CH-Rand) has appeared as a key data augmentation technique for anomaly detection on fruit images because neural networks can learn useful representations of colour irregularity whilst classifying the samples from the augmented “domain”. Our previous study has revealed its success with significantly more reliable performance than other state-of-the-art methods, largely specialised for identifying structural implausibility on non-agricultural objects (e.g., screws). In this paper, we further enhance CH-Rand with additional guidance to generate more informative data for representation learning of anomalies in fruits, while most of its fundamental designs are still maintained. To be specific, we first control the “colour space” on which CH-Rand is executed to investigate whether a particular model, e.g., HSV, YCbCr, or L*a*b*, can better help synthesise realistic anomalies than the RGB suggested in the original design. In addition, we develop a learning “curriculum” in which CH-Rand shifts its augmented domain to gradually increase the difficulty of the examples for neural networks to classify. To the best of our knowledge, we are the first to connect the concept of curriculum to self-supervised representation learning for anomaly detection. Lastly, we perform evaluations with the Riseholme-2021 dataset, which contains > 3.5K real strawberry images at various growth levels along with anomalous examples. Our experimental results show that the trained models with the proposed strategies can achieve over 16% higher scores of AUC-PR with more than three times less variability than the naive CH-Rand whilst using the same deep networks and data.

    @inproceedings{lincoln48676,
    booktitle = {AI for Agriculture and Food Systems},
    month = {January},
    title = {Channel Randomisation with Domain Control for Effective Representation Learning of Visual Anomalies in Strawberries},
    author = {Taeyeong Choi and Grzegorz Cielniak},
    year = {2022},
    url = {https://eprints.lincoln.ac.uk/id/eprint/48676/},
    abstract = {Channel Randomisation (CH-Rand) has appeared as a key data augmentation technique for anomaly detection on fruit
    images because neural networks can learn useful representations of colour irregularity whilst classifying the samples
    from the augmented "domain". Our previous study has revealed its success with significantly more reliable performance than other state-of-the-art methods, largely specialised for identifying structural implausibility on non-agricultural objects (e.g., screws). In this paper, we further enhance CH-Rand with additional guidance to generate more informative data for representation learning of anomalies in fruits as most of its fundamental designs are still maintained. To be specific, we first control the "colour space" on which CH-Rand is executed to investigate whether a particular model{--}e.g., HSV , YCbCr, or L*a*b* {--}can better help synthesise realistic anomalies than the RGB, suggested in the original design. In addition, we develop a learning "curriculum" in which CH-Rand shifts its augmented domain to gradually increase the difficulty of the examples for neural networks to classify. To the best of our knowledge, we are the first to connect the concept of curriculum to self-supervised representation learning for anomaly detection. Lastly, we perform evaluations with the Riseholme-2021 dataset, which contains {\ensuremath{>}} 3.5K real strawberry images at various growth levels along with anomalous examples. Our experimental results show that the trained models with the proposed strategies can achieve over 16\% higher scores of AUC-PR with more than three times less variability than the naive CH-Rand whilst using the same deep networks and data.}
    }
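
    The channel-randomisation augmentation described in this abstract can be sketched in a few lines: permute the colour channels of each image with a randomly chosen non-identity permutation, corrupting colour structure while preserving per-pixel intensity values. This is an illustrative reading of the abstract, not the authors' released code; the function name and toy image format (nested lists of RGB triplets) are assumptions.

    ```python
    import random

    def ch_rand(image):
        """Permute the colour channels of an image (nested lists of
        [R, G, B] pixels) using a randomly chosen non-identity
        permutation, synthesising a colour anomaly while keeping each
        pixel's set of intensity values unchanged."""
        perms = [(0, 2, 1), (1, 0, 2), (1, 2, 0), (2, 0, 1), (2, 1, 0)]
        p = random.choice(perms)  # identity (0, 1, 2) deliberately excluded
        return [[[px[p[0]], px[p[1]], px[p[2]]] for px in row] for row in image]

    # Toy 1x2 "image": two RGB pixels with distinct channel values.
    img = [[[255, 0, 10], [20, 200, 30]]]
    aug = ch_rand(img)
    ```

    Because the identity permutation is excluded, any pixel with distinct channel values is guaranteed to change, which is what makes the augmented samples a usable "anomalous" class for self-supervised training.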

  • R. De Silva, G. Cielniak, and J. Gao, “Towards infield navigation: leveraging simulated data for crop row detection,” in IEEE international conference on automation science and engineering (CASE), 2022.
    [BibTeX] [Abstract] [Download PDF]

    Agricultural datasets for crop row detection are often bound by their limited number of images. This restricts researchers from developing deep learning based models for precision agricultural tasks involving crop row detection. We suggest the utilization of small real-world datasets along with additional data generated by simulations to yield crop row detection performance similar to that of a model trained with a large real-world dataset. Our method could reach the performance of a deep learning based crop row detection model trained with real-world data by using 60% less labelled real-world data. Our model performed well against field variations such as shadows, sunlight and growth stages. We introduce an automated pipeline to generate labelled images for crop row detection in the simulation domain. An extensive comparison is done to analyze the contribution of simulated data towards reaching robust crop row detection in various real-world field scenarios.

    @inproceedings{lincoln49913,
    booktitle = {IEEE International Conference on Automation Science and Engineering (CASE)},
    title = {Towards Infield Navigation: leveraging simulated data for crop row detection},
    author = {Rajitha De Silva and Grzegorz Cielniak and Junfeng Gao},
    publisher = {IEEE},
    year = {2022},
    url = {https://eprints.lincoln.ac.uk/id/eprint/49913/},
    abstract = {Agricultural datasets for crop row detection are often bound by their limited number of images. This restricts
    the researchers from developing deep learning based models for precision agricultural tasks involving crop row detection. We suggest the utilization of small real-world datasets along with additional data generated by simulations to yield similar crop row detection performance as that of a model trained with a large real-world dataset. Our method could reach the performance of a deep learning based crop row detection model trained with real-world data by using 60\% less labelled real-world data. Our model performed well against field variations such as shadows, sunlight and growth stages. We introduce an automated pipeline to generate labelled images for crop row detection in simulation domain. An extensive comparison is done to analyze the contribution of simulated data towards reaching robust crop row detection in various real-world field scenarios.}
    }

  • S. Ghidoni, M. Terreran, D. Evangelista, E. Menegatti, C. Eitzinger, E. Villagrossi, N. Pedrocchi, N. Castaman, M. Malecha, S. Mghames, L. Castri, M. Hanheide, and N. Bellotto, “From human perception and action recognition to causal understanding of human-robot interaction in industrial environments,” in Ital-IA 2022, 2022.
    [BibTeX] [Abstract] [Download PDF]

    Human-robot collaboration is migrating from lightweight robots in laboratory environments to industrial applications, where heavy tasks and powerful robots are more common. In this scenario, a reliable perception of the humans involved in the process and related intentions and behaviors is fundamental. This paper presents two projects investigating the use of robots in relevant industrial scenarios, providing an overview of how industrial human-robot collaborative tasks can be successfully addressed.

    @inproceedings{lincoln48515,
    booktitle = {Ital-IA 2022},
    title = {From Human Perception and Action Recognition to Causal Understanding of Human-Robot Interaction in Industrial Environments},
    author = {Stefano Ghidoni and Matteo Terreran and Daniele Evangelista and Emanuele Menegatti and Christian Eitzinger and Enrico Villagrossi and Nicola Pedrocchi and Nicola Castaman and Marcin Malecha and Sariah Mghames and Luca Castri and Marc Hanheide and Nicola Bellotto},
    year = {2022},
    url = {https://eprints.lincoln.ac.uk/id/eprint/48515/},
    abstract = {Human-robot collaboration is migrating from lightweight robots in laboratory environments to industrial applications, where heavy tasks and powerful robots are more common. In this scenario, a reliable perception of the humans involved in the process and related intentions and behaviors is fundamental. This paper presents two projects investigating the use of robots in relevant industrial scenarios, providing an overview of how industrial human-robot collaborative tasks can be successfully addressed.}
    }

  • H. Harman and E. I. Sklar, “Challenges for multi-agent based agricultural workforce management,” in The 23rd international workshop on multi-agent-based simulation (MABS), 2022.
    [BibTeX] [Abstract] [Download PDF]

    Multi-agent task allocation methods seek to distribute a set of tasks fairly amongst a set of agents. In real-world settings, such as soft fruit farms, human labourers undertake harvesting tasks, assigned by farm managers. The work here explores the application of artificial intelligence planning methodologies to optimise the existing workforce and applies multi-agent based simulation to evaluate the efficacy of the AI strategies. Key challenges threatening the acceptance of such an approach are highlighted and solutions are evaluated experimentally.

    @inproceedings{lincoln49036,
    booktitle = {The 23rd International Workshop on Multi-Agent-Based Simulation (MABS)},
    title = {Challenges for Multi-Agent Based Agricultural Workforce Management},
    author = {Helen Harman and Elizabeth I. Sklar},
    publisher = {Springer},
    year = {2022},
    url = {https://eprints.lincoln.ac.uk/id/eprint/49036/},
    abstract = {Multi-agent task allocation methods seek to distribute a set of tasks fairly amongst a set of agents. In real-world settings, such as soft fruit farms, human labourers undertake harvesting tasks, assigned by farm managers. The work here explores the application of artificial intelligence planning methodologies to optimise the existing workforce and applies multi-agent based simulation to evaluate the efficacy of the AI strategies. Key challenges threatening the acceptance of such an approach are highlighted and solutions are evaluated experimentally.}
    }

  • P. Somaiya, H. Pandya, R. Polvara, M. Hanheide, and G. Cielniak, “TS-Rep: self-supervised time series representation learning from robot sensor data,” in NeurIPS 2022 workshop: self-supervised learning – theory and practice, 2022.
    [BibTeX] [Abstract] [Download PDF]

    In this paper, we propose TS-Rep, a self-supervised method that learns representations from multi-modal, varying-length time series sensor data from real robots. TS-Rep is based on a simple yet effective technique for triplet learning, where we randomly split the time series into two segments to form the anchor and positive, while selecting random subseries from the other time series in the mini-batch to construct negatives. We additionally use the nearest neighbour in the representation space to increase the diversity in the positives. For evaluation, we perform a clusterability analysis on representations of three heterogeneous robotics datasets. The learned representations are then applied to anomaly detection, and our method consistently performs well. A classifier trained on TS-Rep learned representations outperforms unsupervised methods and performs close to fully-supervised methods for terrain classification. Furthermore, we show that TS-Rep is, on average, the fastest method to train among the baselines.

    @inproceedings{lincoln55956,
    booktitle = {NeurIPS 2022 Workshop: Self-Supervised Learning - Theory and Practice},
    month = {December},
    title = {TS-Rep: Self-supervised time series representation learning from robot sensor data},
    author = {Pratik Somaiya and Harit Pandya and Riccardo Polvara and Marc Hanheide and Grzegorz Cielniak},
    year = {2022},
    url = {https://eprints.lincoln.ac.uk/id/eprint/55956/},
    abstract = {In this paper, we propose TS-Rep, a self-supervised method that learns representations from multi-modal varying-length time series sensor data from real robots. TS-Rep is based on a simple yet effective technique for triplet learning, where we randomly split the time series into two segments to form anchor and positive while selecting random subseries from the other time series in the mini-batch to construct negatives. We additionally use the nearest neighbour in the representation space to increase the diversity in the positives. For evaluation, we perform a clusterability analysis on representations of three heterogeneous robotics datasets. Then learned representations are applied for anomaly detection, and our method consistently performs well. A classifier trained on TS-Rep learned representations outperforms unsupervised methods and performs close to the fully-supervised methods for terrain classification. Furthermore, we show that TS-Rep is, on average, the fastest method to train among the baselines.}
    }
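
    The triplet-sampling scheme summarised in this abstract (split a series at a random point into anchor and positive; take a random contiguous subseries of another series in the mini-batch as negative) can be sketched as follows. This is an illustrative reconstruction from the abstract alone; the helper name and list-based series format are assumptions, and the paper's nearest-neighbour positive diversification is not reproduced here.

    ```python
    import random

    def sample_triplet(batch, i):
        """Sample (anchor, positive, negative) for series i of the batch:
        split series i at a random point into anchor and positive halves,
        then take a random contiguous subseries of a different series in
        the batch as the negative."""
        series = batch[i]
        split = random.randint(1, len(series) - 1)
        anchor, positive = series[:split], series[split:]
        j = random.choice([k for k in range(len(batch)) if k != i])
        other = batch[j]
        start = random.randint(0, len(other) - 1)
        end = random.randint(start + 1, len(other))
        negative = other[start:end]
        return anchor, positive, negative

    # Two varying-length "sensor" series in a mini-batch.
    batch = [[1, 2, 3, 4, 5, 6], [10, 11, 12, 13]]
    a, p, n = sample_triplet(batch, 0)
    ```

    Note that anchor and positive always come from the same underlying series, which is what makes them a valid positive pair without any labels.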

  • A. Owen, H. Harman, and E. Sklar, “Towards autonomous task allocation using a robot team in a food factory,” in UKRAS2022 conference “Robotics for Unconstrained Environments”, 2022.
    [BibTeX] [Abstract] [Download PDF]

    Scheduling of hygiene tasks in a food production environment is a complex challenge which is typically performed manually. Many factors must be considered during scheduling; these include what training a hygiene operative (i.e. cleaning staff member) has undergone, the availability of hygiene operatives (holiday commitments, sick leave etc.) and the production constraints (how long the oven takes to cool, when production begins again etc.). This paper seeks to apply multi-agent task allocation (MATA) to automate and optimise the process of allocating tasks to hygiene operatives. The intention is that this optimisation module will form one part of a larger system that we propose to develop. A simulation has been created to function as a digital twin of a factory environment, allowing us to evaluate experimentally a variety of task allocation methodologies. Trialled methods include Round Robin (RR), Sequential Single Item (SSI) auctions, Lowest Bid and Least Contested Bid.

    @inproceedings{lincoln51674,
    booktitle = {UKRAS2022 Conference ``Robotics for Unconstrained Environments''},
    title = {Towards Autonomous Task Allocation Using a Robot Team in a Food Factory},
    author = {Amie Owen and Helen Harman and Elizabeth Sklar},
    publisher = {UK-RAS},
    year = {2022},
    url = {https://eprints.lincoln.ac.uk/id/eprint/51674/},
    abstract = {Scheduling of hygiene tasks in a food production environment is a complex challenge which is typically performed manually. Many factors must be considered during scheduling; this includes what training a hygiene operative (i.e. cleaning staff member) has undergone, the availability of hygiene operatives (holiday commitments, sick leave etc.) and the production constraints (how long does the oven take to cool, when does production begin again etc.). This paper seeks to apply multiagent task allocation (MATA) to automate and optimise the process of allocating tasks to hygiene operatives. The intention is that this optimization module will form one part of a larger system that we propose to develop. A simulation has been created to function as a digital twin of a factory environment, allowing us to evaluate experimentally a variety of task allocation methodologies. Trialled methods include Round Robin (RR), Sequential Single Item (SSI) auctions, Lowest Bid and Least Contested Bid.}
    }
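
    Of the trialled mechanisms, a Sequential Single Item (SSI) auction can be sketched compactly: tasks are auctioned one at a time, and each goes to the agent whose bid (here, a marginal-cost function) is lowest. The cost model, task names and tie-breaking below are illustrative assumptions, not taken from the paper.

    ```python
    def ssi_auction(tasks, agents, cost):
        """Sequential Single Item auction sketch: auction tasks one at a
        time; each task is won by the agent with the lowest bid, where a
        bid is the agent's marginal cost of taking on the task given its
        current allocation."""
        allocation = {a: [] for a in agents}
        for t in tasks:
            # Each agent bids its marginal cost of adding task t.
            bids = {a: cost(a, allocation[a], t) for a in agents}
            winner = min(bids, key=bids.get)  # ties go to the first agent
            allocation[winner].append(t)
        return allocation

    # Toy cost model: an operative's bid grows with its current workload.
    durations = {"clean_oven": 3, "wash_floor": 1, "sanitise_line": 2}
    cost_fn = lambda a, assigned, t: sum(durations[x] for x in assigned) + durations[t]
    result = ssi_auction(list(durations), ["op1", "op2"], cost_fn)
    ```

    With this workload-based cost, the auction naturally balances tasks across operatives, since a busy agent bids higher on every subsequent task.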

  • A. Owen, H. Harman, and E. I. Sklar, “Towards the application of multi-agent task allocation to hygiene tasks in the food production industry,” in 20th international conference on practical applications of agents and multi-agent systems, PAAMS 2022, 2022.
    [BibTeX] [Abstract] [Download PDF]

    The food production industry faces the complex challenge of scheduling both production and hygiene tasks. These tasks are typically scheduled manually. However, due to the increasing costs of raw materials and the regulations factories must adhere to, inefficiencies can be costly. This paper presents the initial findings of a survey conducted to learn more about the hygiene tasks within the industry and to inform research on how multi-agent task allocation (MATA) methodologies could automate and improve the scheduling of hygiene tasks. A simulation of a heterogeneous human workforce within a factory environment is presented. This work evaluates experimentally different strategies for applying market-based mechanisms, in particular Sequential Single Item (SSI) auctions, to the problem of allocating hygiene tasks to a heterogeneous workforce.

    @inproceedings{lincoln51673,
    booktitle = {20th International Conference on Practical Applications of Agents and Multi-Agent Systems, PAAMS 2022},
    title = {Towards the application of multi-agent task allocation to hygiene tasks in the food production industry.},
    author = {Amie Owen and Helen Harman and Elizabeth I. Sklar},
    publisher = {Springer Cham},
    year = {2022},
    url = {https://eprints.lincoln.ac.uk/id/eprint/51673/},
    abstract = {The food production industry faces the complex challenge of scheduling both production and hygiene tasks. These tasks are typically scheduled manually. However, due to the increasing costs of raw materials and the regulations factories must adhere to, inefficiencies can be costly. This paper presents the initial findings of a survey, conducted to learn more about the hygiene tasks within the industry and to inform research on how multi-agent task allocation (MATA) methodologies could automate and improve the scheduling of hygiene tasks. A simulation of a heterogeneous human workforce within a factory environment is presented. This work evaluates experimentally different strategies for applying market-based mechanisms, in particular Sequential Single Item (SSI) auctions, to the problem of allocating hygiene tasks to a heterogeneous workforce.}
    }

  • A. Mohtasib, G. Neumann, and H. Cuayahuitl, “Robot policy learning from demonstration using advantage weighting and early termination,” in IEEE/RSJ international conference on intelligent robots and systems (IROS), 2022, p. 7414–7420. doi:10.1109/IROS47612.2022.9981056
    [BibTeX] [Abstract] [Download PDF]

    Learning robotic tasks in the real world is still highly challenging and effective practical solutions remain to be found. Traditional methods used in this area are imitation learning and reinforcement learning, but they both have limitations when applied to real robots. Combining reinforcement learning with pre-collected demonstrations is a promising approach that can help in learning control policies to solve robotic tasks. In this paper, we propose an algorithm that uses novel techniques to leverage offline expert data using offline and online training to obtain faster convergence and improved performance. The proposed algorithm (AWET) weights the critic losses with a novel agent advantage weight to improve over the expert data. In addition, AWET makes use of an automatic early termination technique to stop and discard policy rollouts that are not similar to expert trajectories, to prevent drifting far from the expert data. In an ablation study, AWET showed improved and promising performance when compared to state-of-the-art baselines on four standard robotic tasks.

    @inproceedings{lincoln50442,
    month = {October},
    author = {Abdalkarim Mohtasib and Gerhard Neumann and Heriberto Cuayahuitl},
    booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
    title = {Robot Policy Learning from Demonstration Using Advantage Weighting and Early Termination},
    publisher = {IEEE},
    doi = {10.1109/IROS47612.2022.9981056},
    pages = {7414--7420},
    year = {2022},
    url = {https://eprints.lincoln.ac.uk/id/eprint/50442/},
    abstract = {Learning robotic tasks in the real world is still highly challenging and effective practical solutions remain to be found. Traditional methods used in this area are imitation learning and reinforcement learning, but they both have limitations when applied to real robots. Combining reinforcement learning with pre-collected demonstrations is a promising approach that can help in learning control policies to solve robotic tasks. In this paper, we propose an algorithm that uses novel techniques to leverage offline expert data using offline and online training to obtain faster convergence and improved performance. The proposed algorithm (AWET) weights the critic losses with a novel agent advantage weight to improve over the expert data. In addition, AWET makes use of an automatic early termination technique to stop and discard policy rollouts that are not similar to expert trajectories---to prevent drifting far from the expert data. In an ablation study, AWET showed improved and promising performance when compared to state-of-the-art baselines on four standard robotic tasks.}
    }
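
    The two AWET ingredients named in the abstract, an advantage weight on the critic loss and early termination of rollouts that drift from expert trajectories, can be sketched as below. Both functions are hypothetical illustrations of the idea only: the exponential-advantage form, the per-step distance test, and all names and thresholds are assumptions, not the paper's definitions.

    ```python
    import math

    def awet_critic_weight(q_agent, q_expert, beta=1.0):
        """Hypothetical advantage weight: up-weight the critic loss on
        transitions where the agent's value estimate exceeds the expert's
        (an exponential-advantage form, a common choice in the
        advantage-weighting literature)."""
        return math.exp(beta * (q_agent - q_expert))

    def early_terminate(rollout, expert, tol=1.0):
        """Stop a rollout at the first step whose state drifts further
        than `tol` from the expert trajectory; return the cut index so
        the remainder can be discarded."""
        for step, (s, e) in enumerate(zip(rollout, expert)):
            if abs(s - e) > tol:
                return step  # terminate here; discard the rest
        return len(rollout)

    # 1-D toy states: the rollout drifts past the tolerance at step 3.
    cut = early_terminate([0.0, 0.1, 0.2, 2.0], [0.0, 0.0, 0.0, 0.0], tol=1.0)
    ```

    The early cut keeps training data close to the demonstrated behaviour, which is the stated purpose of the termination mechanism.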

  • H. Harman and E. Sklar, “Multi-agent task allocation techniques for harvest team formation,” in Advances in practical applications of agents, multi-agent systems, and complex systems simulation. The PAAMS Collection, 2022, p. 217–228. doi:10.1007/978-3-031-18192-4_18
    [BibTeX] [Abstract] [Download PDF]

    With increasing demands for soft fruit and shortages of seasonal workers, farms are seeking innovative solutions for efficiently managing their workforce. The harvesting workforce is typically organised by farm managers who assign workers to the fields that are ready to be harvested. They aim to minimise staff time (and costs) and distribute work fairly, whilst still picking all ripe fruit within the fields that need to be harvested. This paper posits that this problem can be addressed using multi-criteria, multi-agent task allocation techniques. The work presented compares the application of Genetic Algorithms (GAs) vs auction-based approaches to the challenge of assigning workers with various skill sets to fields with various estimated yields. These approaches are evaluated alongside a previously suggested method and the teams that were manually created by a farm manager during the 2021 harvesting season. Results indicate that the GA approach produces more efficient team allocations than the alternatives assessed.

    @inproceedings{lincoln50057,
    month = {October},
    author = {Helen Harman and Elizabeth Sklar},
    booktitle = {Advances in Practical Applications of Agents, Multi-Agent Systems, and Complex Systems Simulation. The PAAMS Collection},
    title = {Multi-Agent Task Allocation Techniques for Harvest Team Formation},
    publisher = {Springer},
    doi = {10.1007/978-3-031-18192-4\_18},
    pages = {217--228},
    year = {2022},
    url = {https://eprints.lincoln.ac.uk/id/eprint/50057/},
    abstract = {With increasing demands for soft fruit and shortages of seasonal workers, farms are seeking innovative solutions for efficiently managing their workforce. The harvesting workforce is typically organised by farm managers who assign workers to the fields that are ready to be harvested. They aim to minimise staff time (and costs) and distribute work fairly, whilst still picking all ripe fruit within the fields that need to be harvested. This paper posits that this problem can be addressed using multi-criteria, multi-agent task allocation techniques. The work presented compares the application of Genetic Algorithms (GAs) vs auction-based approaches to the challenge of assigning workers with various skill sets to fields with various estimated yields. These approaches are evaluated alongside a previously suggested method and the teams that were manually created by a farm manager during the 2021 harvesting season. Results indicate that the GA approach produces more efficient team allocations than the alternatives assessed.}
    }

  • Y. Zhang, C. Hu, M. Liu, H. Luan, F. Lei, H. Cuayahuitl, and S. Yue, “Temperature-based collision detection in extreme low light condition with bio-inspired LGMD neural network,” in 2021 2nd international symposium on automation, information and computing (ISAIC 2021), 2022. doi:10.1088/1742-6596/2224/1/012004
    [BibTeX] [Abstract] [Download PDF]

    It is an enormous challenge for intelligent vehicles to avoid collision accidents at night because of the extremely poor light conditions. Thermal cameras can capture a temperature map at night, even with no light sources, and are ideal for collision detection in darkness. However, how to extract collision cues efficiently and effectively from the captured temperature map with limited computing resources is still a key issue to be solved. Recently, a bio-inspired neural network, LGMD, has been successfully proposed for collision detection, but for daytime and visible light. Whether it can be used for temperature-based collision detection or not remains unknown. In this study, we propose an improved LGMD-based visual neural network for temperature-based collision detection in extreme low-light conditions. We show in this study that the insect-inspired visual neural network can pick up the expanding temperature differences of approaching objects as long as the temperature difference against its background can be captured by a thermal sensor. Our results demonstrate that the proposed LGMD neural network can detect collisions swiftly based on the thermal modality in darkness; therefore, it can be a critical collision detection algorithm for autonomous vehicles driving at night to avoid fatal collisions with humans, animals, or other vehicles.

    @inproceedings{lincoln49117,
    booktitle = {2021 2nd International Symposium on Automation, Information and Computing (ISAIC 2021)},
    month = {April},
    title = {Temperature-based Collision Detection in Extreme Low Light Condition with Bio-inspired LGMD Neural Network},
    author = {Yicheng Zhang and Cheng Hu and Mei Liu and Hao Luan and Fang Lei and Heriberto Cuayahuitl and Shigang Yue},
    publisher = {IOP Publishing Ltd},
    year = {2022},
    doi = {10.1088/1742-6596/2224/1/012004},
    url = {https://eprints.lincoln.ac.uk/id/eprint/49117/},
    abstract = {It is an enormous challenge for intelligent vehicles to avoid collision accidents at night because of the extremely poor light conditions. Thermal cameras can capture temperature map at night, even with no light sources and are ideal for collision detection in darkness. However, how to extract collision cues efficiently and effectively from the captured temperature map with limited computing resources is still a key issue to be solved. Recently, a bio-inspired neural network LGMD has been proposed for collision detection successfully, but for daytime and visible light. Whether it can be used for temperature-based collision detection or not remains unknown. In this study, we proposed an improved LGMD-based visual neural network for temperature-based collision detection at extreme light conditions. We show in this study that the insect inspired visual neural network can pick up the expanding temperature differences of approaching objects as long as the temperature difference against its background can be captured by a thermal sensor. Our results demonstrated that the proposed LGMD neural network can detect collisions swiftly based on the thermal modality in darkness; therefore, it can be a critical collision detection algorithm for autonomous vehicles driving at night to avoid fatal collisions with humans, animals, or other vehicles.}
    }

  • A. Salazar-Gomez, M. Darbyshire, J. Gao, E. Sklar, and S. Parsons, “Beyond mAP: towards practical object detection for weed spraying in precision agriculture,” in 2022 IEEE/RSJ international conference on intelligent robots and systems, 2022, p. 9232–9238. doi:10.1109/IROS47612.2022.9982139
    [BibTeX] [Abstract] [Download PDF]

    The evolution of smaller and more powerful GPUs over the last 2 decades has vastly increased the opportunity to apply robust deep learning-based machine vision approaches to real-time use cases in practical environments. One exciting application domain for such technologies is precision agriculture, where the ability to integrate on-board machine vision with data-driven actuation means that farmers can make decisions about crop care and harvesting at the level of the individual plant rather than the whole field. This makes sense both economically and environmentally. This paper assesses the feasibility of precision spraying weeds via a comprehensive evaluation of weed detection accuracy and speed using two separate datasets, two types of GPU, and several state-of-the-art object detection algorithms. A simplified model of precision spraying is used to determine whether the weed detection accuracy achieved could result in a sufficiently high weed hit rate combined with a significant reduction in herbicide usage. The paper introduces two metrics to capture these aspects of the real-world deployment of precision weeding and demonstrates their utility through experimental results.

    @inproceedings{lincoln51680,
    month = {December},
    author = {Adrian Salazar-Gomez and Madeleine Darbyshire and Junfeng Gao and Elizabeth Sklar and Simon Parsons},
    booktitle = {2022 IEEE/RSJ International Conference on Intelligent Robots and Systems},
    title = {Beyond mAP: Towards practical object detection for weed spraying in precision agriculture},
    publisher = {IEEE Press},
    doi = {10.1109/IROS47612.2022.9982139},
    pages = {9232--9238},
    year = {2022},
    url = {https://eprints.lincoln.ac.uk/id/eprint/51680/},
    abstract = {The evolution of smaller and more powerful GPUs over the last 2 decades has vastly increased the opportunity to apply robust deep learning-based machine vision approaches to real-time use cases in practical environments. One exciting application domain for such technologies is precision agriculture, where the ability to integrate on-board machine vision with data-driven actuation means that farmers can make decisions about crop care and harvesting at the level of the individual plant rather than the whole field. This makes sense both economically and environmentally. This paper assesses the feasibility of precision spraying weeds via a comprehensive evaluation of weed detection accuracy and speed using two separate datasets, two types of GPU, and several state-of-the-art object detection algorithms. A simplified model of precision spraying is used to determine whether the weed detection accuracy achieved could result in a sufficiently high weed hit rate combined with a significant reduction in herbicide usage. The paper introduces two metrics to capture these aspects of the real-world deployment of precision weeding and demonstrates their utility through experimental results.}
    }
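
    A simplified precision-spraying model of the kind this abstract describes can be captured with two quantities: the fraction of weeds actually hit, and the fraction of herbicide saved relative to blanket spraying of the whole field. The cell-based model and names below are assumptions for illustration; the paper's exact metric definitions may differ.

    ```python
    def spray_metrics(weed_cells, sprayed_cells, total_cells):
        """Toy precision-spraying metrics over a grid of field cells:
        hit_rate  = fraction of weed-bearing cells that get sprayed;
        reduction = fraction of the field left unsprayed, relative to
                    blanket spraying of every cell."""
        hits = len(weed_cells & sprayed_cells)
        hit_rate = hits / len(weed_cells)
        reduction = 1 - len(sprayed_cells) / total_cells
        return hit_rate, reduction

    # Toy field of 100 cells: 4 weeds; the sprayer covers 5 cells, 3 on weeds.
    hit_rate, reduction = spray_metrics({2, 7, 40, 90}, {2, 7, 40, 55, 60}, 100)
    ```

    The point of pairing the two numbers is the trade-off the paper highlights: a detector can only claim practical value if the hit rate stays high while the sprayed area (and hence herbicide use) drops sharply.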

  • C. Mayoral Mayoral, L. Grimstad, P. J. From, and G. Cielniak, “Towards safety in open-field agricultural robotic applications: a method for human risk assessment using classifiers,” in 2022 15th international conference on human system interaction (HSI), 2022. doi:10.1109/HSI55341.2022.9869472
    [BibTeX] [Abstract] [Download PDF]

    Tractors and heavy machinery have been used for decades to improve the quality and overall output of agricultural production. Moreover, agriculture is becoming a trending domain for robotics, and as a consequence, the efforts towards automating agricultural tasks increase year by year. However, for autonomous applications, accident prevention is of primary importance for guaranteeing human safety during operation in any scenario. This paper rephrases human safety as a classification problem using a custom distance criterion where each detected human gets a risk level classification. We propose the use of a neural network trained to detect and classify humans in the scene according to these criteria. The proposed approach learns from real-world data corresponding to an open-field scenario and is assessed with a custom risk assessment method.

    @inproceedings{lincoln52846,
    booktitle = {2022 15th International Conference on Human System Interaction (HSI)},
    month = {August},
    title = {Towards Safety in Open-field Agricultural Robotic Applications: A Method for Human Risk Assessment using Classifiers},
    author = {C. Mayoral Mayoral and Lars Grimstad and P{\r a}l J. From and Grzegorz Cielniak},
    publisher = {IEEE},
    year = {2022},
    doi = {10.1109/HSI55341.2022.9869472},
    url = {https://eprints.lincoln.ac.uk/id/eprint/52846/},
    abstract = {Tractors and heavy machinery have been used for decades to improve the quality and overall agriculture production. Moreover, agriculture is becoming a trend domain for robotics, and as a consequence, the efforts towards automating agricultural tasks increase year by year. However, for autonomous applications, accident prevention is of prior importance for guaranteeing human safety during operation in any scenario. This paper rephrases human safety as a classification problem using a custom distance criterion where each detected human gets a risk level classification. We propose the use of a neural network trained to detect and classify humans in the scene according to these criteria. The proposed approach learns from real-world data corresponding to an open-field scenario and is assessed with a custom risk assessment method.}
    }

  • F. Camara and C. Fox, “Extending quantitative proxemics and trust to HRI,” in 31st IEEE international conference on robot & human interactive communication, 2022. doi:10.1109/RO-MAN53752.2022.9900821
    [BibTeX] [Abstract] [Download PDF]

    Human-robot interaction (HRI) requires quantitative models of proxemics and trust for robots to use in negotiating with people for space. Hall's theory of proxemics has been used for decades to describe social interaction distances but has lacked detailed quantitative models and generative explanations to apply to these cases. In the limited case of autonomous vehicle interactions with pedestrians crossing a road, a recent model has explained the quantitative sizes of Hall's distances to 4% error and their links to the concept of trust in human interactions. The present study extends this model by generalising several of its assumptions to cover further cases including human-human and human-robot interactions. It tightens the explanations of Hall zones from 4% to 1% error and fits several more recent empirical HRI results. This may help to further unify these disparate fields and quantify them to a level which enables real-world operational HRI applications.

    @inproceedings{lincoln49872,
    booktitle = {31st IEEE International Conference on Robot \& Human Interactive Communication},
    month = {August},
    title = {Extending Quantitative Proxemics and Trust to HRI},
    author = {Fanta Camara and Charles Fox},
    publisher = {IEEE},
    year = {2022},
    doi = {10.1109/RO-MAN53752.2022.9900821},
    url = {https://eprints.lincoln.ac.uk/id/eprint/49872/},
    abstract = {Human-robot interaction (HRI) requires quantitative models of proxemics and trust for robots to use in negotiating with people for space. Hall's theory of proxemics has been used for decades to describe social interaction distances but has lacked detailed quantitative models and generative explanations to apply to these cases. In the limited case of autonomous vehicle interactions with pedestrians crossing a road, a recent model has explained the quantitative sizes of Hall's distances to 4\% error and their links to the concept of trust in human interactions. The present study extends this model by generalising several of its assumptions to cover further cases including human-human and human-robot interactions. It tightens the explanations of Hall zones from 4\% to 1\% error and fits several more recent empirical HRI results. This may help to further unify these disparate fields and quantify them to a level which enables real-world operational HRI applications.}
    }

  • G. Clawson and C. Fox, “Blockchain crop assurance and localisation,” in The 5th uk robotics and autonomous systems conference, 2022.
    [BibTeX] [Abstract] [Download PDF]

    Food supply chain assurance should begin in the field with regular per-plant re-identification and logging. This is challenging due to localisation and storage requirements. A proof-of-concept solution is provided, using an image-based, super-GNSS precision, robotic localisation per-plant re-identification technique with decentralised storage and blockchain technology. ORB descriptors and RANSAC are used to align in-field stones to previously captured stone images for localisation. Blockchain smart contracts act as a data broker for repeated update and retrieval of an image from a distributed file share system. Results suggest that localisation can be achieved to sub 100mm within a time window of 18 seconds. The implementation is open source and available at: https://github.com/garry-clawson/Blockchain-Crop-Assurance-and-Localisation

    @inproceedings{lincoln50385,
    booktitle = {The 5th UK Robotics and Autonomous Systems Conference},
    month = {August},
    title = {Blockchain Crop Assurance and Localisation},
    author = {Garry Clawson and Charles Fox},
    publisher = {UKRAS},
    year = {2022},
    url = {https://eprints.lincoln.ac.uk/id/eprint/50385/},
    abstract = {Food supply chain assurance should begin in the field with regular per-plant re-identification and logging. This is challenging due to localisation and storage requirements. A proof-of-concept solution is provided, using an image-based, super-GNSS precision, robotic localisation per-plant re-identification technique with decentralised storage and blockchain technology. ORB descriptors and RANSAC are used to align in-field stones to previously captured stone images for localisation. Blockchain smart contracts act as a data broker for repeated update and retrieval of an image from a distributed file share system. Results suggest that localisation can be achieved to sub 100mm within a time window of 18 seconds. The implementation is open source and available at: https://github.com/garry-clawson/Blockchain-Crop-Assurance-and-Localisation}
    }
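The ORB-plus-RANSAC alignment step described above can be illustrated on synthetic 2-D keypoints. A minimal sketch, assuming a pure-translation offset and toy data (real ORB matches would come from a vision library such as OpenCV; `ransac_translation` is an invented helper, not the paper's code):

```python
import numpy as np

def ransac_translation(src, dst, n_iters=200, tol=1.0, seed=None):
    """Estimate a 2-D translation mapping src -> dst by RANSAC: propose a
    translation from a single putative match, score it by inlier count."""
    rng = np.random.default_rng(seed)
    best_t, best_inliers = np.zeros(2), 0
    for _ in range(n_iters):
        i = rng.integers(len(src))
        t = dst[i] - src[i]                        # minimal sample: one match
        err = np.linalg.norm(src + t - dst, axis=1)
        n = int((err < tol).sum())
        if n > best_inliers:
            best_t, best_inliers = t, n
    return best_t, best_inliers

# Synthetic keypoints standing in for ORB matches on in-field stones:
# a fixed offset between the two views, plus five gross mismatches.
rng = np.random.default_rng(0)
src = rng.uniform(0, 100, (30, 2))
dst = src + np.array([12.0, -7.0])
dst[:5] += rng.uniform(50.0, 80.0, (5, 2))         # outlier matches
t, inliers = ransac_translation(src, dst, seed=1)
print(t, inliers)  # recovers (12, -7) with 25 inliers
```

The same consensus idea, applied to a full homography rather than a translation, is what aligns in-field stone images for localisation.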

  • M. Darbyshire, A. Salazar-Gomez, C. Lennox, J. Gao, E. Sklar, and S. Parsons, “Localising weeds using a prototype weed sprayer,” in Ukras22 conference ‘robotics for unconstrained environments’, 2022, p. 12–13. doi:10.31256/Ua7Pr2W
    [BibTeX] [Abstract] [Download PDF]

    The application of convolutional neural networks (CNNs) to challenging visual recognition tasks has been shown to be highly effective and robust compared to traditional machine vision techniques. The recent development of small, powerful GPUs has enabled embedded systems to incorporate real-time, CNN-based, visual inference. Agriculture is a domain where this technology could be hugely advantageous. One such application within agriculture is precision spraying where only weeds are targeted with herbicide. This approach promises weed control with significant economic and environmental benefits from reduced herbicide usage. While existing research has validated that CNN-based vision methods can accurately discern between weeds and crops, this paper explores how such detections can be used to actuate a prototype precision sprayer that incorporates a CNN-based weed detection system and validates spraying performance in a simplified scenario.

    @inproceedings{lincoln53105,
    month = {August},
    author = {Madeleine Darbyshire and Adrian Salazar-Gomez and Callum Lennox and Junfeng Gao and Elizabeth Sklar and Simon Parsons},
    booktitle = {UKRAS22 Conference ``Robotics for Unconstrained Environments''},
    title = {Localising Weeds Using a Prototype Weed Sprayer},
    publisher = {UK-RAS Network},
    doi = {10.31256/Ua7Pr2W},
    pages = {12--13},
    year = {2022},
    url = {https://eprints.lincoln.ac.uk/id/eprint/53105/},
    abstract = {The application of convolutional neural networks (CNNs) to challenging visual recognition tasks has been shown to be highly effective and robust compared to traditional machine vision techniques. The recent development of small, powerful GPUs has enabled embedded systems to incorporate real-time, CNN-based, visual inference. Agriculture is a domain where this technology could be hugely advantageous. One such application within agriculture is precision spraying where only weeds are targeted with herbicide. This approach promises weed control with significant economic and environmental benefits from reduced herbicide usage. While existing research has validated that CNN-based vision methods can accurately discern between weeds and crops, this paper explores how such detections can be used to actuate a prototype precision sprayer that incorporates a CNN-based weed detection system and validates spraying performance in a simplified scenario.}
    }
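How detections might actuate a sprayer can be illustrated with a toy mapping from bounding boxes to boom nozzles. This is a hypothetical sketch, not the prototype's implementation; `nozzles_to_fire`, the box format, and the strip geometry are all assumptions:

```python
# Each detection's horizontal centre is binned to the nozzle covering
# that vertical strip of the image (one strip per nozzle on the boom).

def nozzles_to_fire(detections, image_width, n_nozzles):
    """detections: list of (x_min, y_min, x_max, y_max) boxes in pixels.
    Returns the sorted indices of nozzles that should open."""
    strip = image_width / n_nozzles
    fire = set()
    for x_min, _, x_max, _ in detections:
        centre = (x_min + x_max) / 2.0
        fire.add(min(int(centre // strip), n_nozzles - 1))
    return sorted(fire)

boxes = [(100, 40, 160, 90), (700, 10, 760, 70)]             # two detected weeds
print(nozzles_to_fire(boxes, image_width=800, n_nozzles=8))  # [1, 7]
```

A real system would additionally compensate for forward travel between detection and the spray bar reaching the weed.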

  • L. Roberts-Elliott, G. Das, and A. Millard, “Agent-based simulation of multi-robot soil compaction mapping,” in Towards autonomous robotic systems, Cham, 2022, p. 251–265. doi:10.1007/978-3-031-15908-4_20
    [BibTeX] [Abstract] [Download PDF]

    Soil compaction, an increase in soil density and decrease in porosity, has a negative effect on crop yields, and damaging environmental impacts. Mapping soil compaction at a high resolution is an important step in enabling precision agriculture practices to address these issues. Autonomous ground-based robotic approaches using proximal sensing have been proposed as alternatives to time-consuming and costly manual soil sampling. Soil compaction has high spatial variance, which can be challenging to capture in a limited time window. A multi-robot system can parallelise the sampling process and reduce the overall sampling time. Multi-robot soil sampling is critically underexplored in literature, and requires selection of methods to efficiently coordinate the sampling. This paper presents a simulation of multi-agent spatial sampling, extending the Mesa agent-based simulation framework, with general applicability, but demonstrated here as a testbed for different methodologies of multi-robot soil compaction mapping. To reduce the necessary number of samples for accurate mapping, while maximising information gained per sample, a dynamic sampling strategy, informed by kriging variance from kriging interpolation of sampled soil compaction values, has been implemented. This is enhanced by task clustering and insertion heuristics for task queuing. Results from the evaluation trials show the suitability of sequential single item auctions in this highly dynamic environment, and high interpolation accuracy resulting from our dynamic sampling, with avenues for improvements in this bespoke sampling methodology in future work.

    @inproceedings{lincoln53183,
    month = {September},
    author = {Laurence Roberts-Elliott and Gautham Das and Alan Millard},
    booktitle = {Towards Autonomous Robotic Systems},
    address = {Cham},
    title = {Agent-Based Simulation of Multi-robot Soil Compaction Mapping},
    publisher = {Springer International Publishing},
    year = {2022},
    doi = {10.1007/978-3-031-15908-4\_20},
    pages = {251--265},
    url = {https://eprints.lincoln.ac.uk/id/eprint/53183/},
    abstract = {Soil compaction, an increase in soil density and decrease in porosity, has a negative effect on crop yields, and damaging environmental impacts. Mapping soil compaction at a high resolution is an important step in enabling precision agriculture practices to address these issues. Autonomous ground-based robotic approaches using proximal sensing have been proposed as alternatives to time-consuming and costly manual soil sampling. Soil compaction has high spatial variance, which can be challenging to capture in a limited time window. A multi-robot system can parallelise the sampling process and reduce the overall sampling time. Multi-robot soil sampling is critically underexplored in literature, and requires selection of methods to efficiently coordinate the sampling. This paper presents a simulation of multi-agent spatial sampling, extending the Mesa agent-based simulation framework, with general applicability, but demonstrated here as a testbed for different methodologies of multi-robot soil compaction mapping. To reduce the necessary number of samples for accurate mapping, while maximising information gained per sample, a dynamic sampling strategy, informed by kriging variance from kriging interpolation of sampled soil compaction values, has been implemented. This is enhanced by task clustering and insertion heuristics for task queuing. Results from the evaluation trials show the suitability of sequential single item auctions in this highly dynamic environment, and high interpolation accuracy resulting from our dynamic sampling, with avenues for improvements in this bespoke sampling methodology in future work.}
    }
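The kriging-variance-driven dynamic sampling idea above can be sketched with a simple-kriging (Gaussian-process) posterior variance on a grid, greedily probing wherever variance is highest. The RBF covariance, length scale, and grid are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def posterior_variance(X_s, X_grid, length=2.0, noise=1e-6):
    """Simple-kriging / GP posterior variance at grid points given the
    sampled locations X_s (an RBF covariance stands in for a variogram)."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length ** 2)
    K = k(X_s, X_s) + noise * np.eye(len(X_s))
    Ks = k(X_s, X_grid)
    # var(x) = k(x, x) - k_s(x)^T K^{-1} k_s(x), with k(x, x) = 1
    return 1.0 - np.einsum("ij,ij->j", Ks, np.linalg.solve(K, Ks))

# Greedy dynamic sampling: always probe where the variance is largest.
grid = np.stack(np.meshgrid(np.arange(10.0), np.arange(10.0)), -1).reshape(-1, 2)
samples = [np.array([0.0, 0.0])]
for _ in range(5):
    v = posterior_variance(np.array(samples), grid)
    samples.append(grid[np.argmax(v)])   # next probe: most uncertain cell
```

Each new probe lands far from existing samples, which is the "maximise information gained per sample" behaviour the paper exploits; in the multi-robot setting the chosen locations become tasks to be auctioned among robots.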

  • S. Mghames and M. Hanheide, “Environment-aware interactive movement primitives for object reaching in clutter,” in 2022 ieee 18th international conference on automation science and engineering, 2022. doi:10.1109/CASE49997.2022.9926518
    [BibTeX] [Abstract] [Download PDF]

    The majority of motion planning strategies developed over the literature for reaching an object in clutter are applied to two dimensional (2-d) space where the state space of the environment is constrained in one direction. Fewer works have been investigated to reach a target in 3-d cluttered space, and when so, they have limited performance when applied to complex cases. In this work, we propose a constrained multi-objective optimization framework (OptI-ProMP) to approach the problem of reaching a target in a compact clutter with a case study on soft fruits grown in clusters, leveraging the local optimisation-based planner CHOMP. OptI-ProMP features costs related to both static, dynamic and pushable objects in the target neighborhood, and it relies on probabilistic primitives for problem initialisation. We tested, in a simulated poly-tunnel, both ProMP-based planners from literature and the OptI-ProMP, on low (3-dofs) and high (7-dofs) dexterity robot body, respectively. Results show collision and pushing costs minimisation with 7-dofs robot kinematics, in addition to successful static obstacles avoidance and systematic drifting from the pushable objects center of mass.

    @inproceedings{lincoln54462,
    booktitle = {2022 IEEE 18th International Conference on Automation Science and Engineering},
    month = {August},
    title = {Environment-aware Interactive Movement Primitives for Object Reaching in Clutter},
    author = {Sariah Mghames and Marc Hanheide},
    publisher = {IEEE Xplore},
    year = {2022},
    doi = {10.1109/CASE49997.2022.9926518},
    url = {https://eprints.lincoln.ac.uk/id/eprint/54462/},
    abstract = {The majority of motion planning strategies developed over the literature for reaching an object in clutter are applied to two dimensional (2-d) space where the state space of the environment is constrained in one direction. Fewer works have been investigated to reach a target in 3-d cluttered space, and when so, they have limited performance when applied to complex cases. In this work, we propose a constrained multi-objective optimization framework (OptI-ProMP) to approach the problem of reaching a target in a compact clutter with a case study on soft fruits grown in clusters, leveraging the local optimisation-based planner CHOMP. OptI-ProMP features costs related to both static, dynamic and pushable objects in the target neighborhood, and it relies on probabilistic primitives for problem initialisation. We tested, in a simulated poly-tunnel, both ProMP-based planners from literature and the OptI-ProMP, on low (3-dofs) and high (7-dofs) dexterity robot body, respectively. Results show collision and pushing costs minimisation with 7-dofs robot kinematics, in addition to successful static obstacles avoidance and systematic drifting from the pushable objects center of mass.}
    }

  • J. Stevenson and C. Fox, “Scaling a hippocampus model with gpu parallelisation and test-driven refactoring,” in 11th international conference on biomimetic and biohybrid systems (living machines), 2022.
    [BibTeX] [Abstract] [Download PDF]

    The hippocampus is the brain area used for localisation, mapping and episodic memory. Humans and animals can outperform robotic systems in these tasks, so functional models of hippocampus may be useful to improve robotic navigation, such as for self-driving cars. Previous work developed a biologically plausible model of hippocampus based on Unitary Coherent Particle Filter (UCPF) and Temporal Restricted Boltzmann Machine, which was able to learn to navigate around small test environments. However it was implemented in serial software, which becomes very slow as the environments and numbers of neurons scale up. Modern GPUs can parallelize execution of neural networks. The present Neural Software Engineering study develops a GPU accelerated version of the UCPF hippocampus software, using the formal Software Engineering techniques of profiling, optimisation and test-driven refactoring. Results show that the model can greatly benefit from parallel execution, which may enable it to scale from toy environments and applications to real-world ones such as self-driving car navigation. The refactored parallel code is released to the community as open source software as part of this publication.

    @inproceedings{lincoln49936,
    booktitle = {11th International Conference on Biomimetic and Biohybrid Systems (Living Machines)},
    month = {July},
    title = {Scaling a hippocampus model with GPU parallelisation and test-driven refactoring},
    author = {Jack Stevenson and Charles Fox},
    publisher = {Springer LNCS},
    year = {2022},
    url = {https://eprints.lincoln.ac.uk/id/eprint/49936/},
    abstract = {The hippocampus is the brain area used for localisation, mapping and episodic memory. Humans and animals can outperform robotic systems in these tasks, so functional models of hippocampus may be useful to improve robotic navigation, such as for self-driving cars.
    Previous work developed a biologically plausible model of hippocampus based on Unitary Coherent Particle Filter (UCPF) and Temporal Restricted Boltzmann Machine, which was able to learn to navigate around small test environments. However it was implemented in serial software, which becomes very slow as the environments and numbers of neurons scale up. Modern GPUs can parallelize execution of neural networks.
    The present Neural Software Engineering study develops a GPU accelerated version of the UCPF hippocampus software, using the formal Software Engineering techniques of profiling, optimisation and test-driven refactoring. Results show that the model can greatly benefit from parallel execution, which may enable it to scale from toy environments and applications to real-world ones such as self-driving car navigation. The refactored parallel code is released to the community as open source software as part of this publication.}
    }
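The profile-optimise-refactor workflow the paper applies can be illustrated on a toy neuron update rather than the UCPF code itself: pin the serial reference implementation with a test, then replace it with a vectorised form that maps directly onto GPU array libraries:

```python
import numpy as np

def update_slow(W, x):
    """Reference neuron update: logistic activation, one neuron at a time
    (the kind of serial loop that dominates a profile as networks scale)."""
    out = np.empty(W.shape[0])
    for i in range(W.shape[0]):
        s = 0.0
        for j in range(W.shape[1]):
            s += W[i, j] * x[j]
        out[i] = 1.0 / (1.0 + np.exp(-s))
    return out

def update_fast(W, x):
    """Refactored update: a single matrix-vector product, trivially portable
    to GPU arrays (e.g. CuPy or JAX) since it is one BLAS-style call."""
    return 1.0 / (1.0 + np.exp(-(W @ x)))

# The regression test that licenses the refactor:
rng = np.random.default_rng(0)
W, x = rng.normal(size=(50, 20)), rng.normal(size=20)
assert np.allclose(update_slow(W, x), update_fast(W, x))
```

This is the essence of test-driven refactoring: the assertion fixes the behaviour before the optimisation changes the implementation.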

  • F. Camara and C. Fox, “Learning pedestrian social behaviour for game-theoretic self-driving cars,” in Rss pioneers workshop, 2022.
    [BibTeX] [Abstract] [Download PDF]

    Robot navigation in environments with static objects appears to be a solved problem, but navigating around humans in dynamic and unstructured environments remains an active research question. This requires not only advanced path planning methods but also a good perception system, models of multi-agent interactions and realistic hardware for testing. To evolve in human social spaces, robots must also show social intelligence, i.e. the ability to understand human behaviour via explicit and implicit communication cues (e.g. proxemics) for better human-robot interactions (HRI) [28]. Similarly, autonomous vehicles (AVs), also called “self-driving cars” that are appearing on the roads need a better understanding of pedestrians’ social behaviour, especially in urban areas [26]. In particular, previous work showed that pedestrians may take advantage over autonomous vehicles [13] by intentionally and constantly stepping in front of AVs, hence preventing them from making progress on the roads. This inability of current AVs to read the intention of other road users, predict their future behaviour and interact with them is known as “the big problem with self-driving cars” [1]. Thus, AVs need better decision-making models and must find a good balance between stopping for pedestrians when required and driving to reach their final destination as quickly as possible for their on-board passengers. A comprehensive review of existing pedestrian models for AVs, ranging from low-level sensing, detection and tracking models [9] to high-level interaction and game theoretic models of pedestrian behaviour [10], found that the lower-level models are accurate and mature enough to be deployed on AVs but more research is needed in the higher-level models. Hence, in this work, we focus on modelling, learning and operating pedestrian high-level social behaviour on self-driving cars using game theory and proxemics.

    @inproceedings{lincoln50876,
    booktitle = {RSS Pioneers Workshop},
    month = {June},
    title = {Learning Pedestrian Social Behaviour for Game-Theoretic Self-Driving Cars},
    author = {Fanta Camara and Charles Fox},
    publisher = {RSS},
    year = {2022},
    url = {https://eprints.lincoln.ac.uk/id/eprint/50876/},
    abstract = {Robot navigation in environments with static objects appears to be a solved problem, but navigating around humans in dynamic and unstructured environments remains an active research question. This requires not only advanced path planning methods but also a good perception system, models of multi-agent interactions and realistic hardware for testing. To evolve in human social spaces, robots must also show social intelligence, i.e. the ability to understand human behaviour via explicit and implicit communication cues (e.g. proxemics) for better human-robot interactions (HRI) [28]. Similarly, autonomous vehicles (AVs), also called "self-driving cars" that are appearing on the roads need a better understanding of pedestrians' social behaviour, especially in urban areas [26]. In particular, previous work showed that pedestrians may take advantage over autonomous vehicles [13] by intentionally and constantly stepping in front of AVs, hence preventing them from making progress on the roads. This inability of current AVs to read the intention of other road users, predict their future behaviour and interact with them is known as "the big problem with self-driving cars" [1]. Thus, AVs need better decision-making models and must find a good balance between stopping for pedestrians when required and driving to reach their final destination as quickly as possible for their on-board passengers. A comprehensive review of existing pedestrian models for AVs, ranging from low-level sensing, detection and tracking models [9] to high-level interaction and game theoretic models of pedestrian behaviour [10], found that the lower-level models are accurate and mature enough to be deployed on AVs but more research is needed in the higher-level models. Hence, in this work, we focus on modelling, learning and operating pedestrian high-level social behaviour on self-driving cars using game theory and proxemics.}
    }

  • H. Harman and E. Sklar, “Multi-agent task allocation for fruit picker team formation (extended abstract),” in The 21st international conference on autonomous agents and multiagent systems (aamas 2022), 2022, p. 1618–1620.
    [BibTeX] [Abstract] [Download PDF]

    Multi-agent task allocation methods seek to distribute a set of tasks fairly amongst a set of agents. In real-world settings, such as fruit farms, human labourers undertake harvesting tasks, organised each day by farm manager(s) who assign workers to the fields that are ready to be harvested. The work presented here considers three challenges identified in the adaptation of a multi-agent task allocation methodology applied to the problem of distributing workers to fields. First, the methodology must be fast to compute so that it can be applied on a daily basis. Second, the incremental acquisition of harvesting data used to make decisions about worker-task assignments means that a data-backed approach must be derived from incomplete information as the growing season unfolds. Third, the allocation must take “fairness” into account and consider worker motivation. Solutions to these challenges are demonstrated, showing statistically significant results based on the operations at a soft fruit farm during their 2020 and 2021 harvesting seasons.

    @inproceedings{lincoln49037,
    booktitle = {The 21st International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2022)},
    month = {May},
    title = {Multi-agent Task Allocation for Fruit Picker Team Formation (Extended Abstract)},
    author = {Helen Harman and Elizabeth Sklar},
    publisher = {International Foundation for Autonomous Agents and Multiagent Systems},
    year = {2022},
    pages = {1618--1620},
    url = {https://eprints.lincoln.ac.uk/id/eprint/49037/},
    abstract = {Multi-agent task allocation methods seek to distribute a set of tasks fairly amongst a set of agents. In real-world settings, such as fruit farms, human labourers undertake harvesting tasks, organised each day by farm manager(s) who assign workers to the fields that are ready to be harvested. The work presented here considers three challenges identified in the adaptation of a multi-agent task allocation methodology applied to the problem of distributing workers to fields. First, the methodology must be fast to compute so that it can be applied on a daily basis. Second, the incremental acquisition of harvesting data used to make decisions about worker-task assignments means that a data-backed approach must be derived from incomplete information as the growing season unfolds. Third, the allocation must take "fairness" into account and consider worker motivation. Solutions to these challenges are demonstrated, showing statistically significant results based on the operations at a soft fruit farm during their 2020 and 2021 harvesting seasons.}
    }
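As a toy stand-in for the worker-to-field distribution problem (not the paper's actual methodology), a longest-processing-time greedy heuristic balances estimated workload across workers; the field demands and worker names below are invented:

```python
def allocate(field_demand, workers):
    """Longest-processing-time greedy: biggest field first, each field
    assigned to the currently least-loaded worker."""
    load = {w: 0.0 for w in workers}
    assignment = {}
    for field, demand in sorted(field_demand.items(), key=lambda kv: -kv[1]):
        w = min(load, key=load.get)        # fairness: least-loaded worker
        assignment[field] = w
        load[w] += demand
    return assignment, load

# Invented example: demand = estimated picker-hours per field.
fields = {"field_a": 8, "field_b": 5, "field_c": 4, "field_d": 3}
assignment, load = allocate(fields, ["w1", "w2"])
print(load)  # {'w1': 11.0, 'w2': 9.0} -- near-even split of 20 picker-hours
```

The paper's setting adds the harder parts this sketch ignores: incomplete, incrementally acquired picking data and worker-motivation-aware notions of fairness.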

  • R. Godfrey, M. Rimmer, C. Headleand, and C. Fox, “Rhythmtrain: making rhythmic sight reading training fun,” in International computer music conference, 2022.
    [BibTeX] [Abstract] [Download PDF]

    Rhythmic sight-reading forms a barrier to many musicians’ progress. It is difficult to practice in isolation, as it is hard to get feedback on accuracy. Different performers have different starting skills in different styles so it is hard to create a general curriculum for study. It can be boring to rehearse the same rhythms many times. We examine theories of motivation, engagement, and fun, and draw them together to design a novel training system, RhythmTrain. This includes consideration of dynamic difficulty, gamification and juicy design. The system uses machine learning to learn individual performers’ strengths, weaknesses, and interests, and optimises the selection of rhythms presented to maximise their engagement. An open source implementation is released as part of this publication.

    @inproceedings{lincoln49153,
    booktitle = {International Computer Music Conference},
    month = {September},
    title = {RhythmTrain: making rhythmic sight reading training fun},
    author = {Reece Godfrey and Matthew Rimmer and Chris Headleand and Charles Fox},
    publisher = {ICMA},
    year = {2022},
    url = {https://eprints.lincoln.ac.uk/id/eprint/49153/},
    abstract = {Rhythmic sight-reading forms a barrier to many musicians' progress. It is difficult to practice in isolation, as it is hard to get feedback on accuracy. Different performers have different starting skills in different styles so it is hard to create a general curriculum for study. It can be boring to rehearse the same rhythms many times. We examine theories of motivation, engagement, and fun, and draw them together to design a novel training system, RhythmTrain. This includes consideration of dynamic difficulty, gamification and juicy design. The system uses machine learning to learn individual performers' strengths, weaknesses, and interests, and optimises the selection of rhythms presented to maximise their engagement. An open source implementation is released as part of this publication.}
    }
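Engagement-maximising exercise selection could be framed as a multi-armed bandit. An epsilon-greedy sketch under that assumption (the paper's actual learner is not specified above; the class and scoring are invented):

```python
import random

class RhythmSelector:
    """Epsilon-greedy choice over rhythm exercises, tracking a running
    mean engagement score per rhythm."""
    def __init__(self, rhythms, epsilon=0.1, seed=0):
        self.rng = random.Random(seed)
        self.epsilon = epsilon
        self.scores = {r: 0.0 for r in rhythms}   # running mean engagement
        self.counts = {r: 0 for r in rhythms}

    def pick(self):
        if self.rng.random() < self.epsilon:              # explore
            return self.rng.choice(list(self.scores))
        return max(self.scores, key=self.scores.get)      # exploit best so far

    def feedback(self, rhythm, engagement):
        self.counts[rhythm] += 1
        self.scores[rhythm] += (engagement - self.scores[rhythm]) / self.counts[rhythm]

sel = RhythmSelector(["clave", "swing", "samba"])
# Simulated learner who engages most with swing exercises:
for r in ["clave", "swing", "samba"]:
    for _ in range(5):
        sel.feedback(r, 1.0 if r == "swing" else 0.2)
print(sel.pick())  # swing
```

A full system would fold dynamic difficulty into the reward so the selector balances challenge against frustration rather than maximising raw engagement alone.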

  • J. Bennett, B. Moncur, K. Fogarty, G. Clawson, and C. Fox, “Towards open source hardware robotic woodwind: an internal duct flute player,” in International computer music conference, 2022.
    [BibTeX] [Abstract] [Download PDF]

    We present the first open source hardware (OSH) design and build of an automated robotic internal duct flute player, including an artificial lung and pitch calibration system. Using a recorder as an introductory instrument, the system is designed to be as modular as possible, enabling modification to fit further instruments across the woodwind family. Design considerations include the need to be as open to modification and accessible to as many people and instruments as possible. The system is split into two physical modules: a blowing module and a fingering module, and three software modules: actuator control, pitch calibration and musical note processing via MIDI. The system is able to perform beginner level recorder player melodies.

    @inproceedings{lincoln49154,
    booktitle = {International Computer Music Conference},
    month = {September},
    title = {Towards Open Source Hardware Robotic Woodwind: an Internal Duct Flute Player},
    author = {James Bennett and Bethan Moncur and Kyle Fogarty and Garry Clawson and Charles Fox},
    publisher = {ICMA},
    year = {2022},
    url = {https://eprints.lincoln.ac.uk/id/eprint/49154/},
    abstract = {We present the first open source hardware (OSH) design and build of an automated robotic internal duct flute player, including an artificial lung and pitch calibration system. Using a recorder as an introductory instrument, the system is designed to be as modular as possible, enabling modification to fit further instruments across the woodwind family. Design considerations include the need to be as open to modification and accessible to as many people and instruments as possible. The system is split into two physical modules: a blowing module and a fingering module, and three software modules: actuator control, pitch calibration and musical note processing via MIDI.
    The system is able to perform beginner level recorder player melodies.}
    }

  • Y. Zhang, J. Zhao, M. Hua, M. Liu, F. Lei, H. Cuayahuitl, and S. Yue, “Olgmd: an opponent colour lgmd-based model for collision detection with thermal images at night,” in 31st international conference on artificial neural networks, 2022. doi:10.1007/978-3-031-15934-3_21
    [BibTeX] [Abstract] [Download PDF]

    It is an enormous challenge for intelligent robots or vehicles to detect and avoid collisions at night because of poor lighting conditions. Thermal cameras capture night scenes with temperature maps, often showing different pseudo-colour modes to enhance the visual effects for the human eyes. Since the features of approaching objects could have been well enhanced in the pseudo-colour outputs of a thermal camera, it is likely that colour cues could help the Lobula Giant Motion Detector (LGMD) to pick up the collision cues effectively. However, there is no investigation published on this aspect and it is not clear whether LGMD-like neural networks can take pseudo-colour information as input for collision detection in extreme dim conditions. In this study, we investigate a few thermal pseudo-colour modes and propose to extract colour cues with a triple-channel LGMD-based neural network to directly process the pseudo-colour images. The proposed model consists of three sub-networks – each dealing with one specific opponent colour channel, i.e. black-white, red-green, or yellow-blue. A collision alarm is triggered if any channel’s output exceeds its threshold for a few successive frames. Our experiments demonstrate that the proposed bio-inspired collision detection system works well in quickly detecting colliding objects in direct collision course in extremely low lighting conditions. The proposed method showed its potential to be part of sensor systems for future robots or vehicles driving at night or in other extreme lighting conditions – to help avoiding fatal collisions.

    @inproceedings{lincoln55640,
    booktitle = {31st International Conference on Artificial Neural Networks},
    month = {September},
    title = {OLGMD: An Opponent Colour LGMD-based Model for Collision Detection with Thermal Images at Night},
    author = {Yicheng Zhang and Jiannan Zhao and Mu Hua and Mei Liu and Fang Lei and Heriberto Cuayahuitl and Shigang Yue},
    publisher = {Springer Cham},
    year = {2022},
    doi = {10.1007/978-3-031-15934-3\_21},
    url = {https://eprints.lincoln.ac.uk/id/eprint/55640/},
    abstract = {It is an enormous challenge for intelligent robots or vehicles to detect and avoid collisions at night because of poor lighting conditions. Thermal cameras capture night scenes with temperature maps, often showing different pseudo-colour modes to enhance the visual effects for the human eyes. Since the features of approaching objects could have been well enhanced in the pseudo-colour outputs of a thermal camera, it is likely that colour cues could help the Lobula Giant Motion Detector (LGMD) to pick up the collision cues effectively. However, there is no investigation published on this aspect and it is not clear whether LGMD-like neural networks can take pseudo-colour information as input for collision detection in extremely dim conditions. In this study, we investigate a few thermal pseudo-colour modes and propose to extract colour cues with a triple-channel LGMD-based neural network to directly process the pseudo-colour images. The proposed model consists of three sub-networks{--}each dealing with one specific opponent colour channel, i.e. black-white, red-green, or yellow-blue. A collision alarm is triggered if any channel's output exceeds its threshold for a few successive frames. Our experiments demonstrate that the proposed bio-inspired collision detection system works well in quickly detecting colliding objects in direct collision course in extremely low lighting conditions. The proposed method showed its potential to be part of sensor systems for future robots or vehicles driving at night or in other extreme lighting conditions{--}to help avoid fatal collisions.}
    }
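The alarm logic described in the abstract above (a collision alarm fires when any opponent-colour channel's output exceeds its threshold for a few successive frames) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the channel names, outputs, and thresholds are hypothetical:

```python
from collections import defaultdict

def collision_alarm(channel_outputs, thresholds, n_successive=3):
    """Return the frame index at which an alarm fires, or None.

    channel_outputs: dict mapping an opponent-colour channel name
                     ('BW', 'RG', 'YB') to a list of per-frame outputs.
    thresholds:      dict mapping each channel name to its threshold.
    An alarm fires when any channel exceeds its threshold for
    n_successive consecutive frames.
    """
    streak = defaultdict(int)
    n_frames = len(next(iter(channel_outputs.values())))
    for t in range(n_frames):
        for ch, outputs in channel_outputs.items():
            if outputs[t] > thresholds[ch]:
                streak[ch] += 1
                if streak[ch] >= n_successive:
                    return t  # collision alarm triggered at frame t
            else:
                streak[ch] = 0  # streak broken, reset the counter
    return None  # no collision detected
```

A channel that only briefly spikes does not trigger the alarm; the successive-frame requirement suppresses single-frame noise.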

  • J. Lock, F. Camara, and C. Fox, “Emap: real-time terrain estimation,” in 23rd towards autonomous robotic systems (taros) conference, 2022.
    [BibTeX] [Abstract] [Download PDF]

    Terrain mapping has many use cases in both land surveyance and autonomous vehicles. Popular methods generate occupancy maps over 3D space, which are sub-optimal in outdoor scenarios with large, clear spaces where gaps in LiDAR readings are common. A terrain can instead be modelled as a height map over 2D space which can iteratively be updated with incoming LiDAR data, which simplifies computation and allows missing points to be estimated based on the current terrain estimate. The latter point is of particular interest, since it can reduce the data collection effort required (and its associated costs) and current options are not suitable for real-time operation. In this work, we introduce a new method that is capable of performing such terrain mapping and inferencing tasks in real-time. We evaluate it with a set of mapping scenarios and show it is capable of generating maps with higher accuracy than an OctoMap-based method.

    @inproceedings{lincoln50390,
    booktitle = {23rd Towards Autonomous Robotic Systems (TAROS) Conference},
    month = {September},
    title = {EMap: Real-time Terrain Estimation},
    author = {Jacobus Lock and Fanta Camara and Charles Fox},
    publisher = {Springer},
    year = {2022},
    url = {https://eprints.lincoln.ac.uk/id/eprint/50390/},
    abstract = {Terrain mapping has many use cases in both land surveyance and autonomous vehicles. Popular methods generate occupancy maps over 3D space, which are sub-optimal in outdoor scenarios with large, clear spaces where gaps in LiDAR readings are common. A terrain can instead be modelled as a height map over 2D space which can iteratively be updated with incoming LiDAR data, which simplifies computation and allows missing points to be estimated based on the current terrain estimate. The latter point is of particular interest, since it can reduce the data collection effort required (and its associated costs) and current options are not suitable for real-time operation. In this work, we introduce a new method that is capable of performing such terrain mapping and inferencing tasks in real-time. We evaluate it with a set of mapping scenarios and show it is capable of generating maps with higher accuracy than an OctoMap-based method.}
    }
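The iterative 2D height-map update contrasted with 3D occupancy mapping in the abstract above can be illustrated with a minimal running-mean sketch. This is not the EMap implementation; the grid layout, cell size, and running-mean fusion rule are assumptions for illustration only:

```python
import numpy as np

def update_height_map(height, counts, points, cell_size=0.5):
    """Fuse LiDAR points (x, y, z) into a 2D height map incrementally.

    Each cell keeps the running mean of the z-values observed in it,
    so the map can be updated point by point as scans arrive.
    Assumes non-negative x, y; indices are clipped to the map bounds.
    """
    for x, y, z in points:
        i = min(int(x / cell_size), height.shape[0] - 1)
        j = min(int(y / cell_size), height.shape[1] - 1)
        counts[i, j] += 1
        # running mean: h_new = h_old + (z - h_old) / n
        height[i, j] += (z - height[i, j]) / counts[i, j]
    return height, counts
```

Cells with `counts == 0` are the gaps the abstract mentions; under this representation they could be filled by interpolating from neighbouring observed cells rather than left unknown as in an occupancy grid.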

  • J. Gregory, M. H. Nair, G. Bullegas, and M. R. Saaj, “Using semantic systems engineering techniques to verify the large aperture space telescope mission – current status,” in Model based space systems and software engineering mbse2021, 2022.
    [BibTeX] [Abstract] [Download PDF]

    MBSE aims to integrate engineering models across tools and domain boundaries to support traditional systems engineering activities (e.g., requirements elicitation and traceability, design, analysis, verification and validation). However, MBSE does not inherently solve interoperability with the multiple model-based infrastructures involved in a complex systems engineering project. The challenge is to implement digital continuity in the three dimensions of systems engineering: across disciplines, throughout the lifecycle, and along the supply chain. Space systems are ideal candidates for the application of MBSE and semantic modelling as these complex and expensive systems are mission-critical and often co-developed by multiple stakeholders. In this paper, the authors introduce the concept of Semantic Systems Engineering (SES) as an expansion of MBSE practices to include semantic modelling through SWTs. The paper also presents the progress and status of a novel Semantic Systems Engineering Ontology (SESO) in the context of a specific design case study – the Large Aperture Space Telescope mission.

    @inproceedings{lincoln49463,
    booktitle = {Model Based Space Systems and Software Engineering MBSE2021},
    month = {September},
    title = {Using Semantic Systems Engineering Techniques to Verify the Large Aperture Space Telescope Mission – Current Status},
    author = {Joe Gregory and Manu H. Nair and Gianmaria Bullegas and Mini Rai Saaj},
    publisher = {European Space Agency},
    year = {2022},
    url = {https://eprints.lincoln.ac.uk/id/eprint/49463/},
    abstract = {MBSE aims to integrate engineering models across tools and domain boundaries to support traditional systems engineering activities (e.g., requirements elicitation and traceability, design, analysis, verification and validation). However, MBSE does not inherently solve interoperability with the multiple model-based infrastructures involved in a complex systems engineering project. The challenge is to implement digital continuity in the three dimensions of systems engineering: across disciplines, throughout the lifecycle, and along the supply chain. Space systems are ideal candidates for the application of MBSE and semantic modelling as these complex and expensive systems are mission-critical and often co-developed by multiple stakeholders. In this paper, the authors introduce the concept of Semantic Systems Engineering (SES) as an expansion of MBSE practices to include semantic modelling through SWTs. The paper also presents the progress and status of a novel Semantic Systems Engineering Ontology (SESO) in the context of a specific design case study – the Large Aperture Space Telescope mission.}
    }

  • H. Luan, M. Hua, J. Peng, S. Yue, S. Chen, and Q. Fu, “Accelerating motion perception model mimics the visual neuronal ensemble of crab,” in 2022 international joint conference on neural networks (ijcnn), 2022, p. 1–8. doi:10.1109/IJCNN55064.2022.9892540
    [BibTeX] [Abstract] [Download PDF]

    In nature, crabs have a panoramic vision for the localization and perception of accelerating motion from local segments to global view in order to guide reactive behaviours including escape. The visual neuronal ensemble in crab plays crucial roles in such capability, however, has never been investigated and modelled as an artificial vision system. To bridge this gap, we propose an accelerating motion perception model (AMPM) mimicking the visual neuronal ensemble in crab. The AMPM includes two main parts, wherein the pre-synaptic network from the previous modelling work simulates 16 MLG1 neurons covering the entire view to localize moving objects. The emphasis herein is laid on the original modelling of MLG1s' post-synaptic network to perceive accelerating motions from a global view, which employs a novel spatial-temporal difference encoder (STDE), and an adaptive spiking threshold temporal difference encoder (AT-TDE). Specifically, the STDE transforms 'time-to-travel' between activations of two successive segments of MLG1 into excitatory post-synaptic current (EPSC), which decays with the elapse of time. The AT-TDE in two directional, i.e., counter-clockwise and clockwise accelerating detectors guarantees 'non-firing' to constant movements. Accordingly, the accelerating motion can be effectively localized and perceived by the whole network. The systematic experiments verified the feasibility and robustness of the proposed method. The model responses to translational accelerating motion also fit many of the explored physiological features of direction selective neurons in the lobula complex of crab (i.e. lobula complex direction cells, LCDCs). This modelling study not only provides a reasonable hypothesis for such biological neural pathways, but is also critical for developing a new neuromorphic sensor strategy.

    @inproceedings{lincoln52805,
    month = {September},
    author = {Hao Luan and Mu Hua and Jigen Peng and Shigang Yue and Shengyong Chen and Qinbing Fu},
    booktitle = {2022 International Joint Conference on Neural Networks (IJCNN)},
    title = {Accelerating Motion Perception Model Mimics the Visual Neuronal Ensemble of Crab},
    publisher = {IEEE},
    doi = {10.1109/IJCNN55064.2022.9892540},
    pages = {1--8},
    year = {2022},
    url = {https://eprints.lincoln.ac.uk/id/eprint/52805/},
    abstract = {In nature, crabs have a panoramic vision for the localization and perception of accelerating motion from local segments to global view in order to guide reactive behaviours including escape. The visual neuronal ensemble in crab plays crucial roles in such capability, however, has never been investigated and modelled as an artificial vision system. To bridge this gap, we propose an accelerating motion perception model (AMPM) mimicking the visual neuronal ensemble in crab. The AMPM includes two main parts, wherein the pre-synaptic network from the previous modelling work simulates 16 MLG1 neurons covering the entire view to localize moving objects. The emphasis herein is laid on the original modelling of MLG1s' post-synaptic network to perceive accelerating motions from a global view, which employs a novel spatial-temporal difference encoder (STDE), and an adaptive spiking threshold temporal difference encoder (AT-TDE). Specifically, the STDE transforms 'time-to-travel' between activations of two successive segments of MLG1 into excitatory post-synaptic current (EPSC), which decays with the elapse of time. The AT-TDE in two directional, i.e., counter-clockwise and clockwise accelerating detectors guarantees 'non-firing' to constant movements. Accordingly, the accelerating motion can be effectively localized and perceived by the whole network. The systematic experiments verified the feasibility and robustness of the proposed method. The model responses to translational accelerating motion also fit many of the explored physiological features of direction selective neurons in the lobula complex of crab (i.e. lobula complex direction cells, LCDCs). This modelling study not only provides a reasonable hypothesis for such biological neural pathways, but is also critical for developing a new neuromorphic sensor strategy.}
    }

  • F. Camara and C. Fox, “Game theory, proxemics and trust for self-driving car social navigation,” in Social robot navigation: advances and evaluation (seanavbench 2022), 2022.
    [BibTeX] [Abstract] [Download PDF]

    To navigate in human social spaces, self-driving cars and other robots must show social intelligence. This involves predicting and planning around pedestrians, understanding their personal space, and establishing trust with them. The present paper gives an overview of our ongoing work on modelling and controlling human–self-driving car interactions using game theory, proxemics and trust, and unifying these fields via quantitative models and robot controllers.

    @inproceedings{lincoln49183,
    booktitle = {Social Robot Navigation: Advances and Evaluation (SEANavBench 2022)},
    month = {May},
    title = {Game Theory, Proxemics and Trust for Self-Driving Car Social Navigation},
    author = {Fanta Camara and Charles Fox},
    publisher = {Social Robot Navigation: Advances and Evaluation},
    year = {2022},
    url = {https://eprints.lincoln.ac.uk/id/eprint/49183/},
    abstract = {To navigate in human social spaces, self-driving cars and other robots must show social intelligence. This involves predicting and planning around pedestrians, understanding their personal space, and establishing trust with them. The present paper gives an overview of our ongoing work on modelling and controlling human–self-driving car interactions using game theory, proxemics and trust, and unifying these fields via quantitative models and robot controllers.}
    }

  • F. Castagna, S. Parsons, I. Sassoon, and E. Sklar, “Providing explanations via the eqr argument scheme,” in 9th international conference on computational models of argument (comma2022), 2022, p. 351–352. doi:10.3233/FAIA220168
    [BibTeX] [Abstract] [Download PDF]

    This demo paper outlines the EQR argument scheme (AS) structure and deploys its instantiations to convey explanations using a chatbot.

    @inproceedings{lincoln53877,
    month = {September},
    author = {Federico Castagna and Simon Parsons and Isabel Sassoon and Elizabeth Sklar},
    booktitle = {9th International Conference on Computational Models of Argument (COMMA2022)},
    title = {Providing Explanations via the EQR Argument Scheme},
    publisher = {IOS Press},
    doi = {10.3233/FAIA220168},
    pages = {351--352},
    year = {2022},
    url = {https://eprints.lincoln.ac.uk/id/eprint/53877/},
    abstract = {This demo paper outlines the EQR argument scheme (AS) structure and deploys its instantiations to convey explanations using a chatbot.}
    }

  • T. Choi, O. Would, A. Salazar-Gomez, and G. Cielniak, “Self-supervised representation learning for reliable robotic monitoring of fruit anomalies,” in 2022 ieee international conference on robotics and automation (icra), 2022. doi:10.1109/ICRA46639.2022.9811954
    [BibTeX] [Abstract] [Download PDF]

    Data augmentation can be a simple yet powerful tool for autonomous robots to fully utilise available data for self-supervised identification of atypical scenes or objects. State-of-the-art augmentation methods arbitrarily embed “structural” peculiarity on typical images so that classifying these artefacts can provide guidance for learning representations for the detection of anomalous visual signals. In this paper, however, we argue that learning such structure-sensitive representations can be a suboptimal approach to some classes of anomaly (e.g., unhealthy fruits) which could be better recognised by a different type of visual element such as “colour”. We thus propose Channel Randomisation as a novel data augmentation method for restricting neural networks to learn encoding of “colour irregularity” whilst predicting channel-randomised images to ultimately build reliable fruit-monitoring robots identifying atypical fruit qualities. Our experiments show that (1) this colour-based alternative can better learn representations for consistently accurate identification of fruit anomalies in various fruit species, and also, (2) unlike other methods, the validation accuracy can be utilised as a criterion for early stopping of training in practice due to positive correlation between the performance in the self-supervised colour-differentiation task and the subsequent detection rate of actual anomalous fruits. Also, the proposed approach is evaluated on a new agricultural dataset, Riseholme-2021, consisting of 3.5K strawberry images gathered by a mobile robot, which we share online to encourage active agri-robotics research.

    @inproceedings{lincoln48682,
    booktitle = {2022 IEEE International Conference on Robotics and Automation (ICRA)},
    month = {July},
    title = {Self-supervised Representation Learning for Reliable Robotic Monitoring of Fruit Anomalies},
    author = {Taeyeong Choi and Owen Would and Adrian Salazar-Gomez and Grzegorz Cielniak},
    publisher = {IEEE},
    year = {2022},
    doi = {10.1109/ICRA46639.2022.9811954},
    url = {https://eprints.lincoln.ac.uk/id/eprint/48682/},
    abstract = {Data augmentation can be a simple yet powerful tool for autonomous robots to fully utilise available data for self-supervised
    identification of atypical scenes or objects. State-of-the-art augmentation methods arbitrarily embed "structural" peculiarity on typical images so that classifying these artefacts can provide guidance for learning representations for the detection of anomalous visual signals. In this paper, however, we argue that learning such structure-sensitive representations can be a suboptimal approach to some classes of anomaly (e.g., unhealthy fruits) which could be better recognised by a different type of visual element such as "colour". We thus propose Channel Randomisation as a novel data augmentation method for restricting neural networks to learn encoding of "colour irregularity" whilst predicting channel-randomised images to ultimately build reliable fruit-monitoring robots identifying atypical fruit qualities. Our experiments show that (1) this colour-based alternative can better learn representations for consistently accurate identification of fruit anomalies in various fruit species, and also, (2) unlike other methods, the validation accuracy can be utilised as a criterion for early stopping of training in practice due to positive correlation between the performance in the self-supervised colour-differentiation task and the subsequent detection rate of actual anomalous fruits. Also, the proposed approach is evaluated on a new agricultural dataset, Riseholme-2021, consisting of 3.5K strawberry images gathered by a mobile robot, which we share online to encourage active agri-robotics research.}
    }
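The Channel Randomisation augmentation described in the abstract above (training a network to predict which colour-channel permutation was applied, so that it learns colour-sensitive representations) can be sketched roughly as below. The permutation set and labelling scheme are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def channel_randomise(image, rng):
    """Apply a random non-identity permutation of the colour channels.

    Returns (augmented image, permutation label); predicting the label
    from the augmented image is the self-supervised pretext task that
    forces the network to encode colour information.
    """
    # all channel permutations of an RGB image except the identity
    perms = [(0, 2, 1), (1, 0, 2), (1, 2, 0), (2, 0, 1), (2, 1, 0)]
    label = int(rng.integers(len(perms)))
    augmented = image[..., list(perms[label])]  # reorder the last axis
    return augmented, label
```

A classifier trained on `(augmented, label)` pairs cannot rely on structure alone, since permuting channels leaves spatial structure intact; only colour cues distinguish the classes.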

2021

  • N. Andreakos, S. Yue, and V. Cutsuridis, “Quantitative investigation of memory recall performance of a computational microcircuit model of the hippocampus,” Brain informatics, vol. 8, p. 9, 2021. doi:10.1186/s40708-021-00131-7
    [BibTeX] [Abstract] [Download PDF]

    Memory, the process of encoding, storing, and maintaining information over time in order to influence future actions, is very important in our lives. Losing it comes with a great cost. Deciphering the biophysical mechanisms leading to recall improvement should thus be of utmost importance. In this study we embarked on the quest to improve computationally the recall performance of a bio-inspired microcircuit model of the mammalian hippocampus, a brain region responsible for the storage and recall of short-term declarative memories. The model consisted of excitatory and inhibitory cells. The cell properties followed closely what is currently known from the experimental neurosciences. Cells' firing was timed to a theta oscillation paced by two distinct neuronal populations exhibiting highly regular bursting activity, one tightly coupled to the trough and the other to the peak of theta. An excitatory input provided to excitatory cells context and timing information for retrieval of previously stored memory patterns. Inhibition to excitatory cells acted as a non-specific global threshold machine that removed spurious activity during recall. To systematically evaluate the model's recall performance against stored patterns, pattern overlap, network size and active cells per pattern, we selectively modulated feedforward and feedback excitatory and inhibitory pathways targeting specific excitatory and inhibitory cells. Of the different model variations (modulated pathways) tested, 'model 1' recall quality was excellent across all conditions. 'Model 2' recall was the worst. The number of 'active cells' representing a memory pattern was the determining factor in improving the model's recall performance regardless of the number of stored patterns and overlap between them. As 'active cells per pattern' decreased, the model's memory capacity increased, interference effects between stored patterns decreased, and recall quality improved.

    @article{lincoln44717,
    volume = {8},
    month = {December},
    author = {Nikolas Andreakos and Shigang Yue and Vassilis Cutsuridis},
    title = {Quantitative Investigation Of Memory Recall Performance Of A Computational Microcircuit Model Of The Hippocampus},
    publisher = {SpringerOpen},
    year = {2021},
    journal = {Brain Informatics},
    doi = {10.1186/s40708-021-00131-7},
    pages = {9},
    url = {https://eprints.lincoln.ac.uk/id/eprint/44717/},
    abstract = {Memory, the process of encoding, storing, and maintaining information over time in order to influence future actions, is very important in our lives. Losing it comes with a great cost. Deciphering the biophysical mechanisms leading to recall improvement should thus be of utmost importance. In this study we embarked on the quest to improve computationally the recall performance of a bio-inspired microcircuit model of the mammalian hippocampus, a brain region responsible for the storage and recall of short-term declarative memories. The model consisted of excitatory and inhibitory cells. The cell properties followed closely what is currently known from the experimental neurosciences. Cells' firing was timed to a theta oscillation paced by two distinct neuronal populations exhibiting highly regular bursting activity, one tightly coupled to the trough and the other to the peak of theta. An excitatory input provided to excitatory cells context and timing information for retrieval of previously stored memory patterns. Inhibition to excitatory cells acted as a non-specific global threshold machine that removed spurious activity during recall. To systematically evaluate the model's recall performance against stored patterns, pattern overlap, network size and active cells per pattern, we selectively modulated feedforward and feedback excitatory and inhibitory pathways targeting specific excitatory and inhibitory cells. Of the different model variations (modulated pathways) tested, 'model 1' recall quality was excellent across all conditions. 'Model 2' recall was the worst. The number of 'active cells' representing a memory pattern was the determining factor in improving the model's recall performance regardless of the number of stored patterns and overlap between them. As 'active cells per pattern' decreased, the model's memory capacity increased, interference effects between stored patterns decreased, and recall quality improved.}
    }

  • F. Camara, P. Dickinson, and C. Fox, “Evaluating pedestrian interaction preferences with a game theoretic autonomous vehicle in virtual reality,” Transportation research part f, vol. 78, p. 410–423, 2021. doi:10.1016/j.trf.2021.02.017
    [BibTeX] [Abstract] [Download PDF]

    Localisation and navigation of autonomous vehicles (AVs) in static environments are now solved problems, but how to control their interactions with other road users in mixed traffic environments, especially with pedestrians, remains an open question. Recent work has begun to apply game theory to model and control AV-pedestrian interactions as they compete for space on the road whilst trying to avoid collisions. But this game theory model has been developed only in unrealistic lab environments. To improve their realism, this study empirically examines pedestrian behaviour during road crossing in the presence of approaching autonomous vehicles in more realistic virtual reality (VR) environments. The autonomous vehicles are controlled using game theory, and this study seeks to find the best parameters for these controls to produce comfortable interactions for the pedestrians. In a first experiment, participants' trajectories reveal a more cautious crossing behaviour in VR than in previous laboratory experiments. In two further experiments, a gradient descent approach is used to investigate participants' preference for AV driving style. The results show that the majority of participants were not expecting the AV to stop in some scenarios, and there was no change in their crossing behaviour in two environments and with different car models suggestive of car and last-mile style vehicles. These results provide some initial estimates for game theoretic parameters needed by future AVs in their pedestrian interactions and more generally show how such parameters can be inferred from virtual reality experiments.

    @article{lincoln44566,
    volume = {78},
    month = {April},
    author = {Fanta Camara and Patrick Dickinson and Charles Fox},
    title = {Evaluating Pedestrian Interaction Preferences with a Game Theoretic Autonomous Vehicle in Virtual Reality},
    publisher = {Elsevier},
    year = {2021},
    journal = {Transportation Research Part F},
    doi = {10.1016/j.trf.2021.02.017},
    pages = {410--423},
    url = {https://eprints.lincoln.ac.uk/id/eprint/44566/},
    abstract = {Localisation and navigation of autonomous vehicles (AVs) in static environments are now solved problems, but how to control their interactions with other road users in mixed traffic environments, especially with pedestrians, remains an open question. Recent work has begun to apply game theory to model and control AV-pedestrian interactions as they compete for space on the road whilst trying to avoid collisions. But this game theory model has been developed only in unrealistic lab environments. To improve their realism, this study empirically examines pedestrian behaviour during road crossing in the presence of approaching autonomous vehicles in more realistic virtual reality (VR) environments. The autonomous vehicles are controlled using game theory, and this study seeks to find the best parameters for these controls to produce comfortable interactions for the pedestrians. In a first experiment, participants' trajectories reveal a more cautious crossing behaviour in VR than in previous laboratory experiments. In two further experiments, a gradient descent approach is used to investigate participants' preference for AV driving style. The results show that the majority of participants were not expecting the AV to stop in some scenarios, and there was no change in their crossing behaviour in two environments and with different car models suggestive of car and last-mile style vehicles. These results provide some initial estimates for game theoretic parameters needed by future AVs in their pedestrian interactions and more generally show how such parameters can be inferred from virtual reality experiments.}
    }

  • F. Yang, L. Shu, Y. Yang, G. Han, S. Pearson, and K. Li, “Optimal deployment of solar insecticidal lamps over constrained locations in mixed-crop farmlands,” Ieee internet of things journal, vol. 8, iss. 16, p. 13095–13114, 2021. doi:10.1109/JIOT.2021.3064043
    [BibTeX] [Abstract] [Download PDF]

    Solar Insecticidal Lamps (SILs) play a vital role in green prevention and control of pests. By embedding SILs in Wireless Sensor Networks (WSNs), we establish a novel agricultural Internet of Things (IoT), referred to as the SILIoTs. In practice, the deployment of SIL nodes is determined by the geographical characteristics of an actual farmland, the constraints on the locations of SIL nodes, and the radio-wave propagation in complex agricultural environment. In this paper, we mainly focus on the constrained SIL Deployment Problem (cSILDP) in a mixed-crop farmland, where the locations used to deploy SIL nodes are a limited set of candidates located on the ridges. We formulate the cSILDP in this scenario as a Connected Set Cover (CSC) problem, and propose a Hole Aware Node Deployment Method (HANDM) based on the greedy algorithm to solve the constrained optimization problem. The HANDM is a two-phase method. In the first phase, a novel deployment strategy is utilised to guarantee only a single coverage hole in each iteration, based on which a set of suboptimal locations is found for the deployment of SIL nodes. In the second phase, according to the operations of deletion and fusion, the optimal locations are obtained to meet the requirements on complete coverage and connectivity. Experimental results show that our proposed method achieves better performance than the peer algorithms, specifically in terms of deployment cost.

    @article{lincoln44192,
    volume = {8},
    number = {16},
    month = {August},
    author = {Fan Yang and Lei Shu and Yuli Yang and Guangjie Han and Simon Pearson and Kailiang Li},
    title = {Optimal Deployment of Solar Insecticidal Lamps over Constrained Locations in Mixed-Crop Farmlands},
    publisher = {IEEE},
    year = {2021},
    journal = {IEEE Internet of Things Journal},
    doi = {10.1109/JIOT.2021.3064043},
    pages = {13095--13114},
    url = {https://eprints.lincoln.ac.uk/id/eprint/44192/},
    abstract = {Solar Insecticidal Lamps (SILs) play a vital role in green prevention and control of pests. By embedding SILs in Wireless Sensor Networks (WSNs), we establish a novel agricultural Internet of Things (IoT), referred to as the SILIoTs. In practice, the deployment of SIL nodes is determined by the geographical characteristics of an actual farmland, the constraints on the locations of SIL nodes, and the radio-wave propagation in complex agricultural environment. In this paper, we mainly focus on the constrained SIL Deployment Problem (cSILDP) in a mixed-crop farmland, where the locations used to deploy SIL nodes are a limited set of candidates located on the ridges. We formulate the cSILDP in this scenario as a Connected Set Cover (CSC) problem, and propose a Hole Aware Node Deployment Method (HANDM) based on the greedy algorithm to solve the constrained optimization problem. The HANDM is a two-phase method. In the first phase, a novel deployment strategy is utilised to guarantee only a single coverage hole in each iteration, based on which a set of suboptimal locations is found for the deployment of SIL nodes. In the second phase, according to the operations of deletion and fusion, the optimal locations are obtained to meet the requirements on complete coverage and connectivity. Experimental results show that our proposed method achieves better performance than the peer algorithms, specifically in terms of deployment cost.}
    }
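The abstract above formulates SIL deployment as a Connected Set Cover problem solved greedily. The coverage step can be illustrated with a plain greedy set-cover sketch; this omits the connectivity constraint and the single-coverage-hole strategy that HANDM adds, and is not the authors' algorithm:

```python
def greedy_set_cover(universe, candidates):
    """Greedy approximation to set cover.

    universe:   set of targets that must be covered.
    candidates: dict mapping a candidate location to the set of
                targets it would cover if a node were deployed there.
    Repeatedly picks the location covering the most still-uncovered
    targets; returns the chosen locations and any uncoverable targets.
    """
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(candidates, key=lambda s: len(candidates[s] & uncovered))
        if not candidates[best] & uncovered:
            break  # remaining targets cannot be covered by any candidate
        chosen.append(best)
        uncovered -= candidates[best]
    return chosen, uncovered
```

The greedy rule gives the classic ln(n)-factor approximation guarantee for set cover; constraining the chosen locations to also form a connected network is the additional difficulty the paper addresses.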

  • I. Sassoon, N. Kokciyan, S. Modgil, and S. Parsons, “Argumentation schemes for clinical decision support,” Argument & computation, 2021. doi:10.3233/AAC-200550
    [BibTeX] [Abstract] [Download PDF]

    This paper demonstrates how argumentation schemes can be used in decision support systems that help clinicians in making treatment decisions. The work builds on the use of computational argumentation, a rigorous approach to reasoning with complex data that places strong emphasis on being able to justify and explain the decisions that are recommended. The main contribution of the paper is to present a novel set of specialised argumentation schemes that can be used in the context of a clinical decision support system to assist in reasoning about what treatments to offer. These schemes provide a mechanism for capturing clinical reasoning in such a way that it can be handled by the formal reasoning mechanisms of formal argumentation. The paper describes how the integration between argumentation schemes and formal argumentation may be carried out, sketches how this is achieved by an implementation that we have created, and illustrates the overall process on a small set of case studies.

    @article{lincoln46566,
    month = {August},
    title = {Argumentation Schemes for Clinical Decision Support},
    author = {Isabel Sassoon and Nadin Kokciyan and Sanjay Modgil and Simon Parsons},
    publisher = {IOS Press},
    year = {2021},
    doi = {10.3233/AAC-200550},
    journal = {Argument \& Computation},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46566/},
    abstract = {This paper demonstrates how argumentation schemes can be used in decision support systems that help clinicians in making treatment decisions. The work builds on the use of computational argumentation, a rigorous approach to reasoning with complex data that places strong emphasis on being able to justify and explain the decisions that are recommended. The main contribution of the paper is to present a novel set of specialised argumentation schemes that can be used in the context of a clinical decision support system to assist in reasoning about what treatments to offer. These schemes provide a mechanism for capturing clinical reasoning in such a way that it can be handled by the formal reasoning mechanisms of formal argumentation. The paper describes how the integration between argumentation schemes and formal argumentation may be carried out, sketches how this is achieved by an implementation that we have created, and illustrates the overall process on a small set of case studies.}
    }

  • Q. Fu, X. Sun, T. Liu, C. Hu, and S. Yue, “Robustness of bio-inspired visual systems for collision prediction in critical robot traffic,” Frontiers in robotics and ai, vol. 8, p. 529872, 2021. doi:10.3389/frobt.2021.529872
    [BibTeX] [Abstract] [Download PDF]

    Collision prevention sets a major research and development obstacle for intelligent robots and vehicles. This paper investigates the robustness of two state-of-the-art neural network models inspired by the locust’s LGMD-1 and LGMD-2 visual pathways as fast and low-energy collision alert systems in critical scenarios. Although both the neural circuits have been studied and modelled intensively, their capability and robustness against real-time critical traffic scenarios where real-physical crashes will happen have never been systematically investigated due to difficulty and high price in replicating risky traffic with many crash occurrences. To close this gap, we apply a recently published robotic platform to test the LGMDs inspired visual systems in physical implementation of critical traffic scenarios at low cost and high flexibility. The proposed visual systems are applied as the only collision sensing modality in each micro-mobile robot to conduct avoidance by abrupt braking. The simulated traffic resembles on-road sections including the intersection and highway scenes wherein the roadmaps are rendered by coloured, artificial pheromones upon a wide LCD screen acting as the ground of an arena. The robots with light sensors at bottom can recognise the lanes and signals, tightly follow paths. The emphasis herein is laid on corroborating the robustness of LGMDs neural systems model in different dynamic robot scenes to timely alert potential crashes. This study well complements previous experimentation on such bio-inspired computations for collision prediction in more critical physical scenarios, and for the first time demonstrates the robustness of LGMDs inspired visual systems in critical traffic towards a reliable collision alert system under constrained computation power. This paper also exhibits a novel, tractable, and affordable robotic approach to evaluate online visual systems in dynamic scenes.

    @article{lincoln46873,
    volume = {8},
    month = {August},
    author = {Qinbing Fu and Xuelong Sun and Tian Liu and Cheng Hu and Shigang Yue},
    title = {Robustness of Bio-Inspired Visual Systems for Collision Prediction in Critical Robot Traffic},
    publisher = {Frontiers Media},
    year = {2021},
    journal = {Frontiers in Robotics and AI},
    doi = {10.3389/frobt.2021.529872},
    pages = {529872},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46873/},
    abstract = {Collision prevention sets a major research and development obstacle for intelligent robots and vehicles. This paper investigates the robustness of two state-of-the-art neural network models inspired by the locust's LGMD-1 and LGMD-2 visual pathways as fast and low-energy collision alert systems in critical scenarios. Although both the neural circuits have been studied and modelled intensively, their capability and robustness against real-time critical traffic scenarios where real-physical crashes will happen have never been systematically investigated due to difficulty and high price in replicating risky traffic with many crash occurrences. To close this gap, we apply a recently published robotic platform to test the LGMDs inspired visual systems in physical implementation of critical traffic scenarios at low cost and high flexibility. The proposed visual systems are applied as the only collision sensing modality in each micro-mobile robot to conduct avoidance by abrupt braking. The simulated traffic resembles on-road sections including the intersection and highway scenes wherein the roadmaps are rendered by coloured, artificial pheromones upon a wide LCD screen acting as the ground of an arena. The robots with light sensors at bottom can recognise the lanes and signals, tightly follow paths. The emphasis herein is laid on corroborating the robustness of LGMDs neural systems model in different dynamic robot scenes to timely alert potential crashes. This study well complements previous experimentation on such bio-inspired computations for collision prediction in more critical physical scenarios, and for the first time demonstrates the robustness of LGMDs inspired visual systems in critical traffic towards a reliable collision alert system under constrained computation power. This paper also exhibits a novel, tractable, and affordable robotic approach to evaluate online visual systems in dynamic scenes.}
    }

  • S. Brewer, S. Pearson, R. Maull, P. Godsiff, J. G. Frey, A. Zisman, G. Parr, A. McMillan, S. Cameron, H. Blackmore, L. Manning, and L. Bidaut, “A trust framework for digital food systems.,” Nature food, vol. 2, p. 543–545, 2021. doi:10.1038/s43016-021-00346-1
    [BibTeX] [Abstract] [Download PDF]

    The full potential for a digitally transformed food system has not yet been realised – or indeed imagined. Data flows across, and within, vast but largely decentralised and tiered supply chain networks. Data defines internal inputs, bi-directional flows of food, information and finance within the supply chain, and intended and extraneous outputs. Data exchanges can orchestrate critical network dependencies, define standards and underpin food safety. Poore and Nemecek1 hypothesised that digital technologies could drive system transformation for the public good by empowering personalised selection of foods with, for example, lower intrinsic greenhouse gas emissions. Here, we contend that the full potential of a digitally transformed food system can only be realised if permissioned and trusted data can flow seamlessly through complex, multi-lateral supply chains, effectively from farms through to the consumer.

    @article{lincoln47264,
    volume = {2},
    month = {August},
    author = {Steve Brewer and Simon Pearson and Roger Maull and Phil Godsiff and Jeremy G. Frey and Andrea Zisman and Gerard Parr and Andrew McMillan and Sarah Cameron and Hannah Blackmore and Louise Manning and Luc Bidaut},
    title = {A trust framework for digital food systems.},
    publisher = {Nature Research},
    year = {2021},
    journal = {Nature Food},
    doi = {10.1038/s43016-021-00346-1},
    pages = {543--545},
    url = {https://eprints.lincoln.ac.uk/id/eprint/47264/},
    abstract = {The full potential for a digitally transformed food system has not yet been realised - or indeed imagined. Data flows across, and within, vast but largely decentralised and tiered supply chain networks. Data defines internal inputs, bi-directional flows of food, information and finance within the supply chain, and intended and extraneous outputs. Data exchanges can orchestrate critical network dependencies, define standards and underpin food safety. Poore and Nemecek1 hypothesised that digital technologies could drive system transformation for the public good by empowering personalised selection of foods with, for example, lower intrinsic greenhouse gas emissions. Here, we contend that the full potential of a digitally transformed food system can only be realised if permissioned and trusted data can flow seamlessly through complex, multi-lateral supply chains, effectively from farms through to the consumer.}
    }

  • L. Gong, M. Yu, S. Jiang, V. Cutsuridis, S. Kollias, and S. Pearson, “Studies of evolutionary algorithms for the reduced tomgro model calibration for modelling tomato yields,” Smart agricultural technology, vol. 1, p. 100011, 2021. doi:10.1016/j.atech.2021.100011
    [BibTeX] [Abstract] [Download PDF]

    The reduced Tomgro model is one of the popular biophysical models, which can reflect the actual growth process and model the yields of tomato based on environmental parameters in a greenhouse. It is commonly integrated with the greenhouse environmental control system for optimally controlling environmental parameters to maximize the tomato growth/yields under acceptable energy consumption. In this work, we compare three mainstream evolutionary algorithms (genetic algorithm (GA), particle swarm optimization (PSO), and differential evolution (DE)) for calibrating the reduced Tomgro model, to model the tomato mature fruit dry matter (DM) weights. Different evolutionary algorithms have been applied to calibrate 14 key parameters of the reduced Tomgro model. The performance of the calibrated Tomgro models based on different evolutionary algorithms has been evaluated on three datasets obtained from a real tomato grower, with each dataset containing greenhouse environmental parameters (e.g., carbon dioxide concentration, temperature, photosynthetically active radiation (PAR)) and tomato yield information at a particular greenhouse for one year. Multiple metrics (root mean square errors (RMSEs), relative root mean square errors (r-RMSEs), and mean average errors (MAEs)) between actual DM weights and model-simulated ones for all three datasets are used to validate the performance of the calibrated reduced Tomgro model.

    @article{lincoln46525,
    volume = {1},
    month = {December},
    author = {Liyun Gong and Miao Yu and Shouyong Jiang and Vassilis Cutsuridis and Stefanos Kollias and Simon Pearson},
    title = {Studies of evolutionary algorithms for the reduced Tomgro model calibration for modelling tomato yields},
    publisher = {Elsevier},
    year = {2021},
    journal = {Smart Agricultural Technology},
    doi = {10.1016/j.atech.2021.100011},
    pages = {100011},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46525/},
    abstract = {The reduced Tomgro model is one of the popular biophysical models, which can reflect the actual growth process and model the yields of tomato based on environmental parameters in a greenhouse. It is commonly integrated with the greenhouse environmental control system for optimally controlling environmental parameters to maximize the tomato growth/yields under acceptable energy consumption. In this work, we compare three mainstream evolutionary algorithms (genetic algorithm (GA), particle swarm optimization (PSO), and differential evolution (DE)) for calibrating the reduced Tomgro model, to model the tomato mature fruit dry matter (DM) weights. Different evolutionary algorithms have been applied to calibrate 14 key parameters of the reduced Tomgro model. The performance of the calibrated Tomgro models based on different evolutionary algorithms has been evaluated on three datasets obtained from a real tomato grower, with each dataset containing greenhouse environmental parameters (e.g., carbon dioxide concentration, temperature, photosynthetically active radiation (PAR)) and tomato yield information at a particular greenhouse for one year. Multiple metrics (root mean square errors (RMSEs), relative root mean square errors (r-RMSEs), and mean average errors (MAEs)) between actual DM weights and model-simulated ones for all three datasets are used to validate the performance of the calibrated reduced Tomgro model.}
    }

  • L. Gong, M. Yu, S. Jiang, V. Cutsuridis, and S. Pearson, “Deep learning based prediction on greenhouse crop yield combined tcn and rnn,” Sensors, vol. 21, iss. 13, p. 4537, 2021. doi:10.3390/s21134537
    [BibTeX] [Abstract] [Download PDF]

    Currently, greenhouses are widely applied for plant growth, and environmental parameters can also be controlled in the modern greenhouse to guarantee the maximum crop yield. In order to optimally control greenhouses’ environmental parameters, one indispensable requirement is to accurately predict crop yields based on given environmental parameter settings. In addition, crop yield forecasting in greenhouses plays an important role in greenhouse farming planning and management, which allows cultivators and farmers to utilize the yield prediction results to make knowledgeable management and financial decisions. It is thus important to accurately predict the crop yield in a greenhouse considering the benefits that can be brought by accurate greenhouse crop yield prediction. In this work, we have developed a new greenhouse crop yield prediction technique, by combining two state-of-the-art networks for temporal sequence processing – temporal convolutional network (TCN) and recurrent neural network (RNN). Comprehensive evaluations of the proposed algorithm have been made on multiple datasets obtained from multiple real greenhouse sites for tomato growing. Based on a statistical analysis of the root mean square errors (RMSEs) between the predicted and actual crop yields, it is shown that the proposed approach achieves more accurate yield prediction performance than both traditional machine learning methods and other classical deep neural networks. Moreover, the experimental study also shows that the historical yield information is the most important factor for accurately predicting future crop yields.

    @article{lincoln46522,
    volume = {21},
    number = {13},
    month = {July},
    author = {Liyun Gong and Miao Yu and Shouyong Jiang and Vassilis Cutsuridis and Simon Pearson},
    title = {Deep Learning Based Prediction on Greenhouse Crop Yield Combined TCN and RNN},
    publisher = {MDPI},
    year = {2021},
    journal = {Sensors},
    doi = {10.3390/s21134537},
    pages = {4537},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46522/},
    abstract = {Currently, greenhouses are widely applied for plant growth, and environmental parameters can also be controlled in the modern greenhouse to guarantee the maximum crop yield. In order to optimally control greenhouses' environmental parameters, one indispensable requirement is to accurately predict crop yields based on given environmental parameter settings. In addition, crop yield forecasting in greenhouses plays an important role in greenhouse farming planning and management, which allows cultivators and farmers to utilize the yield prediction results to make knowledgeable management and financial decisions. It is thus important to accurately predict the crop yield in a greenhouse considering the benefits that can be brought by accurate greenhouse crop yield prediction. In this work, we have developed a new greenhouse crop yield prediction technique, by combining two state-of-the-art networks for temporal sequence processing{--}temporal convolutional network (TCN) and recurrent neural network (RNN). Comprehensive evaluations of the proposed algorithm have been made on multiple datasets obtained from multiple real greenhouse sites for tomato growing. Based on a statistical analysis of the root mean square errors (RMSEs) between the predicted and actual crop yields, it is shown that the proposed approach achieves more accurate yield prediction performance than both traditional machine learning methods and other classical deep neural networks. Moreover, the experimental study also shows that the historical yield information is the most important factor for accurately predicting future crop yields.}
    }

  • H. Isakhani, C. Xiong, W. Chen, and S. Yue, “Towards locust-inspired gliding wing prototypes for micro aerial vehicle applications,” Royal society open science, vol. 8, iss. 6, p. 202253, 2021. doi:10.1098/rsos.202253
    [BibTeX] [Abstract] [Download PDF]

    In aviation, gliding is the most economical mode of flight explicitly appreciated by natural fliers. They achieve it by high-performance wing structures evolved over millions of years in nature. Among other prehistoric beings, the locust (Schistocerca gregaria) is a perfect example of such a natural glider capable of enduring transatlantic flights that could inspire a practical solution to achieve similar capabilities on micro aerial vehicles. This study investigates the effects of haemolymph on the flexibility of several flying insect wings, further showcasing the superior structural performance of locusts. However, biomimicry of such aerodynamic and structural properties is hindered by the limitations of modern as well as conventional fabrication technologies in terms of availability and precision, respectively. Therefore, here we adopt finite element analysis (FEA) to investigate the manufacturing-worthiness of a 3D digitally reconstructed locust tandem wing, and propose novel combinations of economical and readily-available manufacturing methods to develop the model into prototypes that are structurally similar to their counterparts in nature while maintaining the optimum gliding ratio previously obtained in the aerodynamic simulations. The latter is evaluated in a future study and the former is assessed here via an experimental analysis of the flexural stiffness and maximum deformation rate. Ultimately, a comparative study of the mechanical properties reveals the feasibility of each prototype for gliding micro aerial vehicle applications.

    @article{lincoln47017,
    volume = {8},
    number = {6},
    month = {June},
    author = {Hamid Isakhani and Caihua Xiong and Wenbin Chen and Shigang Yue},
    title = {Towards locust-inspired gliding wing prototypes for micro aerial vehicle applications},
    publisher = {The Royal Society},
    year = {2021},
    journal = {Royal Society Open Science},
    doi = {10.1098/rsos.202253},
    pages = {202253},
    url = {https://eprints.lincoln.ac.uk/id/eprint/47017/},
    abstract = {In aviation, gliding is the most economical mode of flight explicitly appreciated by natural fliers. They achieve it by high-performance wing structures evolved over millions of years in nature. Among other prehistoric beings, the locust (Schistocerca gregaria) is a perfect example of such a natural glider capable of enduring transatlantic flights that could inspire a practical solution to achieve similar capabilities on micro aerial vehicles. This study investigates the effects of haemolymph on the flexibility of several flying insect wings, further showcasing the superior structural performance of locusts. However, biomimicry of such aerodynamic and structural properties is hindered by the limitations of modern as well as conventional fabrication technologies in terms of availability and precision, respectively. Therefore, here we adopt finite element analysis (FEA) to investigate the manufacturing-worthiness of a 3D digitally reconstructed locust tandem wing, and propose novel combinations of economical and readily-available manufacturing methods to develop the model into prototypes that are structurally similar to their counterparts in nature while maintaining the optimum gliding ratio previously obtained in the aerodynamic simulations. The latter is evaluated in a future study and the former is assessed here via an experimental analysis of the flexural stiffness and maximum deformation rate. Ultimately, a comparative study of the mechanical properties reveals the feasibility of each prototype for gliding micro aerial vehicle applications.}
    }

  • M. Al-Khafajiy, S. Otoum, T. Baker, M. Asim, Z. Maamar, M. Aloqaily, M. Taylor, and M. Randles, “Intelligent control and security of fog resources in healthcare systems via a cognitive fog model,” Acm transactions on internet technology, vol. 21, iss. 3, p. 1–23, 2021. doi:10.1145/3382770
    [BibTeX] [Abstract] [Download PDF]

    There have been significant advances in the field of Internet of Things (IoT) recently, which have not always considered security or data security concerns: A high degree of security is required when considering the sharing of medical data over networks. In most IoT-based systems, especially those within smart-homes and smart-cities, there is a bridging point (fog computing) between a sensor network and the Internet which often just performs basic functions such as translating between the protocols used in the Internet and sensor networks, as well as small amounts of data processing. The fog nodes can have useful knowledge and potential for constructive security and control over both the sensor network and the data transmitted over the Internet. Smart healthcare services utilise such networks of IoT systems. It is therefore vital that medical data emanating from IoT systems is highly secure, to prevent fraudulent use, whilst maintaining quality of service providing assured, verified and complete data. In this article, we examine the development of a Cognitive Fog (CF) model, for secure, smart healthcare services, that is able to make decisions such as opting-in and opting-out from running processes and invoking new processes when required, and providing security for the operational processes within the fog system. Overall, the proposed ensemble security model performed better in terms of Accuracy Rate, Detection Rate, and a lower False Positive Rate (standard intrusion detection measurements) than three base classifiers (K-NN, DBSCAN, and DT) using a standard security dataset (NSL-KDD).

    @article{lincoln47555,
    volume = {21},
    number = {3},
    month = {June},
    author = {Mohammed Al-Khafajiy and Safa Otoum and Thar Baker and Muhammad Asim and Zakaria Maamar and Moayad Aloqaily and Mark Taylor and Martin Randles},
    title = {Intelligent Control and Security of Fog Resources in Healthcare Systems via a Cognitive Fog Model},
    publisher = {ACM},
    year = {2021},
    journal = {ACM Transactions on Internet Technology},
    doi = {10.1145/3382770},
    pages = {1--23},
    url = {https://eprints.lincoln.ac.uk/id/eprint/47555/},
    abstract = {There have been significant advances in the field of Internet of Things (IoT) recently, which have not always considered security or data security concerns: A high degree of security is required when considering the sharing of medical data over networks. In most IoT-based systems, especially those within smart-homes and smart-cities, there is a bridging point (fog computing) between a sensor network and the Internet which often just performs basic functions such as translating between the protocols used in the Internet and sensor networks, as well as small amounts of data processing. The fog nodes can have useful knowledge and potential for constructive security and control over both the sensor network and the data transmitted over the Internet. Smart healthcare services utilise such networks of IoT systems. It is therefore vital that medical data emanating from IoT systems is highly secure, to prevent fraudulent use, whilst maintaining quality of service providing assured, verified and complete data. In this article, we examine the development of a Cognitive Fog (CF) model, for secure, smart healthcare services, that is able to make decisions such as opting-in and opting-out from running processes and invoking new processes when required, and providing security for the operational processes within the fog system. Overall, the proposed ensemble security model performed better in terms of Accuracy Rate, Detection Rate, and a lower False Positive Rate (standard intrusion detection measurements) than three base classifiers (K-NN, DBSCAN, and DT) using a standard security dataset (NSL-KDD).}
    }

  • S. D. Mohan, F. J. Davis, A. Badiee, P. Hadley, C. A. Twitchen, and S. Pearson, “Optical and thermal properties of commercial polymer film, modeling the albedo effect,” Journal of applied polymer science, vol. 138, iss. 24, p. 50581, 2021. doi:10.1002/app.50581
    [BibTeX] [Abstract] [Download PDF]

    Greenhouse cladding materials are an important part of greenhouse design. The cladding material controls the light transmission and distribution over the plants within the greenhouse, thereby exerting a major influence on the overall yield. Greenhouse claddings are typically translucent materials offering more diffusive transmission than reflection; however, the reflective properties of the films offer a potential route to increasing the surface albedo of the local environment. We model thermal properties by modeling the films based on their optical transmissions and reflections. We can use this data to estimate their albedo and determine the amount of short wave radiation that will be transmitted/reflected/blocked by the materials and how it can influence the local environment.

    @article{lincoln44141,
    volume = {138},
    number = {24},
    month = {June},
    author = {Saeed D Mohan and Fred J Davis and Amir Badiee and Paul Hadley and Carrie A Twitchen and Simon Pearson},
    title = {Optical and thermal properties of commercial polymer film, modeling the albedo effect},
    publisher = {Wiley},
    year = {2021},
    journal = {Journal of Applied Polymer Science},
    doi = {10.1002/app.50581},
    pages = {50581},
    url = {https://eprints.lincoln.ac.uk/id/eprint/44141/},
    abstract = {Greenhouse cladding materials are an important part of greenhouse design. The cladding material controls the light transmission and distribution over the plants within the greenhouse, thereby exerting a major influence on the overall yield. Greenhouse claddings are typically translucent materials offering more diffusive transmission than reflection; however, the reflective properties of the films offer a potential route to increasing the surface albedo of the local environment. We model thermal properties by modeling the films based on their optical transmissions and reflections. We can use this data to estimate their albedo and determine the amount of short wave radiation that will be transmitted/reflected/blocked by the materials and how it can influence the local environment.}
    }

  • A. Badiee, J. R. Wallbank, J. P. Fentanes, E. Trill, P. Scarlet, Y. Zhu, G. Cielniak, H. Cooper, J. R. Blake, J. G. Evans, M. Zreda, M. Köhli, and S. Pearson, “Using additional moderator to control the footprint of a cosmos rover for soil moisture measurement,” Water resources research, vol. 57, iss. 6, p. e2020WR028478, 2021. doi:10.1029/2020WR028478
    [BibTeX] [Abstract] [Download PDF]

    Cosmic Ray Neutron Probes (CRNP) have found application in soil moisture estimation due to their conveniently large ({\ensuremath{>}}100 m) footprints. Here we explore the possibility of using high density polyethylene (HDPE) moderator to limit the field of view, and hence the footprint, of a soil moisture sensor formed of 12 CRNP mounted on to a mobile robotic platform (Thorvald) for better in-field localisation of moisture variation. URANOS neutron scattering simulations are used to show that 5 cm of additional HDPE moderator (used to shield the upper surface and sides of the detector) is sufficient to (i), reduce the footprint of the detector considerably, (ii) approximately double the percentage of neutrons detected from within 5 m of the detector, and (iii), does not affect the shape of the curve used to convert neutron counts into soil moisture. Simulation and rover measurements for a transect crossing between grass and concrete additionally suggest that (iv), soil moisture changes can be sensed over a length scales of tens of meters or less (roughly an order of magnitude smaller than commonly used footprint distances), and (v), the additional moderator does not reduce the detected neutron count rate (and hence increase noise) as much as might be expected given the extent of the additional moderator. The detector with additional HDPE moderator was also used to conduct measurements on a stubble field over three weeks to test the rover system in measuring spatial and temporal soil moisture variation.

    @article{lincoln45017,
    volume = {57},
    number = {6},
    month = {June},
    author = {Amir Badiee and John R. Wallbank and Jaime Pulido Fentanes and Emily Trill and Peter Scarlet and Yongchao Zhu and Grzegorz Cielniak and Hollie Cooper and James R. Blake and Jonathan G. Evans and Marek Zreda and Markus K{\"o}hli and Simon Pearson},
    title = {Using Additional Moderator to Control the Footprint of a COSMOS Rover for Soil Moisture Measurement},
    publisher = {Wiley},
    year = {2021},
    journal = {Water Resources Research},
    doi = {10.1029/2020WR028478},
    pages = {e2020WR028478},
    url = {https://eprints.lincoln.ac.uk/id/eprint/45017/},
    abstract = {Cosmic Ray Neutron Probes (CRNP) have found application in soil moisture estimation due to their conveniently large ({\ensuremath{>}}100 m) footprints. Here we explore the possibility of using high density polyethylene (HDPE) moderator to limit the field of view, and hence the footprint, of a soil moisture sensor formed of 12 CRNP mounted on to a mobile robotic platform (Thorvald) for better in-field localisation of moisture variation. URANOS neutron scattering simulations are used to show that 5 cm of additional HDPE moderator (used to shield the upper surface and sides of the detector) is sufficient to (i), reduce the footprint of the detector considerably, (ii) approximately double the percentage of neutrons detected from within 5 m of the detector, and (iii), does not affect the shape of the curve used to convert neutron counts into soil moisture. Simulation and rover measurements for a transect crossing between grass and concrete additionally suggest that (iv), soil moisture changes can be sensed over a length scales of tens of meters or less (roughly an order of magnitude smaller than commonly used footprint distances), and (v), the additional moderator does not reduce the detected neutron count rate (and hence increase noise) as much as might be expected given the extent of the additional moderator. The detector with additional HDPE moderator was also used to conduct measurements on a stubble field over three weeks to test the rover system in measuring spatial and temporal soil moisture variation.}
    }

  • D. C. Rose, J. Lyon, A. de Boon, M. Hanheide, and S. Pearson, “Responsible development of autonomous robots in agriculture,” Nature food, vol. 2, iss. 5, p. 306–309, 2021. doi:10.1038/s43016-021-00287-9
    [BibTeX] [Abstract] [Download PDF]

    Despite the potential contributions of autonomous robots to agricultural sustainability, social, legal and ethical issues threaten adoption. We discuss how responsible innovation principles can be embedded into the user-centred design of autonomous robots and identify areas for further empirical research.

    @article{lincoln45058,
    volume = {2},
    number = {5},
    month = {May},
    author = {David Christian Rose and Jessica Lyon and Auvikki de Boon and Marc Hanheide and Simon Pearson},
    title = {Responsible Development of Autonomous Robots in Agriculture},
    publisher = {Springer Nature},
    year = {2021},
    journal = {Nature Food},
    doi = {10.1038/s43016-021-00287-9},
    pages = {306--309},
    url = {https://eprints.lincoln.ac.uk/id/eprint/45058/},
    abstract = {Despite the potential contributions of autonomous robots to agricultural sustainability, social, legal and ethical issues threaten adoption. We discuss how responsible innovation principles can be embedded into the user-centred design of autonomous robots and identify areas for further empirical research.}
    }

  • J. Aguzzi, C. Costa, M. Calisti, V. Funari, S. Stefanni, R. Danovaro, H. Gomes, F. Vecchi, L. Dartnell, P. Weiss, K. Nowak, D. Chatzievangelou, and S. Marini, “Research trends and future perspectives in marine biomimicking robotics,” Sensors, vol. 21, iss. 11, p. 3778, 2021. doi:10.3390/s21113778
    [BibTeX] [Abstract] [Download PDF]

    Mechatronic and soft robotics are taking inspiration from the animal kingdom to create new high-performance robots. Here, we focused on marine biomimetic research and used innovative bibliographic statistics tools to highlight established and emerging knowledge domains. A total of 6980 scientific publications were retrieved from the Scopus database (1950–2020), evidencing a sharp research increase in 2003–2004. Clustering analysis of countries’ collaborations showed two major Asian-North America and European clusters. Three significant areas appeared: (i) energy provision, whose advancement mainly relies on microbial fuel cells; (ii) biomaterials for not yet fully operational soft-robotic solutions; and finally (iii) design and control, chiefly oriented to locomotor designs. In this scenario, marine biomimicking robotics still lacks solutions for long-lasting energy provision, which presently hinders operational autonomy. In the research environment, identifying natural processes by which living organisms obtain energy is thus urgent to sustain energy-demanding tasks while, at the same time, natural designs must increasingly inform efforts to optimize energy consumption.

    @article{lincoln46134,
    volume = {21},
    number = {11},
    month = {May},
    author = {Jacopo Aguzzi and Corrado Costa and Marcello Calisti and Valerio Funari and Sergio Stefanni and Roberto Danovaro and Helena Gomes and Fabrizio Vecchi and Lewis Dartnell and Peter Weiss and Kathrin Nowak and Damianos Chatzievangelou and Simone Marini},
    title = {Research Trends and Future Perspectives in Marine Biomimicking Robotics},
    year = {2021},
    journal = {Sensors},
    doi = {10.3390/s21113778},
    pages = {3778},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46134/},
    abstract = {Mechatronic and soft robotics are taking inspiration from the animal kingdom to create new high-performance robots. Here, we focused on marine biomimetic research and used innovative bibliographic statistics tools to highlight established and emerging knowledge domains. A total of 6980 scientific publications were retrieved from the Scopus database (1950--2020), evidencing a sharp research increase in 2003--2004. Clustering analysis of countries' collaborations showed two major Asian-North America and European clusters. Three significant areas appeared: (i) energy provision, whose advancement mainly relies on microbial fuel cells; (ii) biomaterials for not yet fully operational soft-robotic solutions; and finally (iii) design and control, chiefly oriented to locomotor designs. In this scenario, marine biomimicking robotics still lacks solutions for long-lasting energy provision, which presently hinders operational autonomy. In the research environment, identifying natural processes by which living organisms obtain energy is thus urgent to sustain energy-demanding tasks while, at the same time, natural designs must increasingly inform efforts to optimize energy consumption.}
    }

  • D. D. Barrie, M. Pandya, H. Pandya, M. Hanheide, and K. Elgeneidy, “A deep learning method for vision based force prediction of a soft fin ray gripper using simulation data,” Frontiers in robotics and ai, vol. 8, p. 631371, 2021. doi:10.3389/frobt.2021.631371
    [BibTeX] [Abstract] [Download PDF]

    Soft robotic grippers are increasingly desired in applications that involve grasping of complex and deformable objects. However, their flexible nature and non-linear dynamics make the modelling and control difficult. Numerical techniques such as Finite Element Analysis (FEA) present an accurate way of modelling complex deformations. However, FEA approaches are computationally expensive and consequently challenging to employ for real-time control tasks. Existing analytical techniques simplify the modelling by approximating the deformed gripper geometry. Although this approach is less computationally demanding, it is limited in design scope and can lead to larger estimation errors. In this paper, we present a learning based framework that is able to predict contact forces as well as stress distribution from soft Fin Ray Effect (FRE) finger images in real-time. These images are used to learn internal representations for deformations using a deep neural encoder, which are further decoded to contact forces and stress maps using separate branches. The entire network is jointly learned in an end-to-end fashion. In order to address the challenge of having sufficient labelled data for training, we employ FEA to generate simulated images to supervise our framework. This leads to an accurate prediction, faster inference and availability of large and diverse data for better generalisability. Furthermore, our approach is able to predict a detailed stress distribution that can guide grasp planning, which would be particularly useful for delicate objects. Our proposed approach is validated by comparing the predicted contact forces to the computed ground-truth forces from FEA as well as a real force sensor. We rigorously evaluate the performance of our approach under variations in contact point, object material, object shape, viewing angle, and level of occlusion.

    @article{lincoln45569,
    volume = {8},
    month = {May},
    author = {Daniel De Barrie and Manjari Pandya and Harit Pandya and Marc Hanheide and Khaled Elgeneidy},
    title = {A Deep Learning Method for Vision Based Force Prediction of a Soft Fin Ray Gripper Using Simulation Data},
    publisher = {Frontiers Media},
    year = {2021},
    journal = {Frontiers in Robotics and AI},
    doi = {10.3389/frobt.2021.631371},
    pages = {631371},
    url = {https://eprints.lincoln.ac.uk/id/eprint/45569/},
    abstract = {Soft robotic grippers are increasingly desired in applications that involve grasping of complex and deformable objects. However, their flexible nature and non-linear dynamics make the modelling and control difficult. Numerical techniques such as Finite Element Analysis (FEA) present an accurate way of modelling complex deformations. However, FEA approaches are computationally expensive and consequently challenging to employ for real-time control tasks. Existing analytical techniques simplify the modelling by approximating the deformed gripper geometry. Although this approach is less computationally demanding, it is limited in design scope and can lead to larger estimation errors. In this paper, we present a learning based framework that is able to predict contact forces as well as stress distribution from soft Fin Ray Effect (FRE) finger images in real-time. These images are used to learn internal representations for deformations using a deep neural encoder, which are further decoded to contact forces and stress maps using separate branches. The entire network is jointly learned in an end-to-end fashion. In order to address the challenge of having sufficient labelled data for training, we employ FEA to generate simulated images to supervise our framework. This leads to an accurate prediction, faster inference and availability of large and diverse data for better generalisability. Furthermore, our approach is able to predict a detailed stress distribution that can guide grasp planning, which would be particularly useful for delicate objects. Our proposed approach is validated by comparing the predicted contact forces to the computed ground-truth forces from FEA as well as a real force sensor. We rigorously evaluate the performance of our approach under variations in contact point, object material, object shape, viewing angle, and level of occlusion.}
    }

  • C. Jansen and E. Sklar, “Exploring co-creative drawing workflows,” Frontiers in robotics and ai, vol. 8, 2021. doi:10.3389/frobt.2021.577770
    [BibTeX] [Abstract] [Download PDF]

    This article presents the outcomes from a mixed-methods study of drawing practitioners (e.g., professional illustrators, fine artists, and art students) that was conducted in Autumn 2018 as a preliminary investigation for the development of a physical human-AI co-creative drawing system. The aim of the study was to discover possible roles that technology could play in observing, modeling, and possibly assisting an artist with their drawing. The study had three components: a paper survey of artists’ drawing practices, technology usage and attitudes; video-recorded drawing exercises; and a follow-up semi-structured interview which included a co-design discussion on how AI might contribute to their drawing workflow. Key themes identified from the interviews were (1) drawing with physical mediums is a traditional and primary way of creation; (2) artists’ views on AI varied, where co-creative AI is preferable to didactic AI; and (3) artists have a critical and skeptical view on the automation of creative work with AI. Participants’ input provided the basis for the design and technical specifications of a co-creative drawing prototype, for which details are presented in this article. In addition, lessons learned from conducting the user study are presented with a reflection on future studies with drawing practitioners.

    @article{lincoln50873,
    volume = {8},
    month = {May},
    author = {Chipp Jansen and Elizabeth Sklar},
    title = {Exploring Co-creative Drawing Workflows},
    publisher = {Frontiers},
    journal = {Frontiers in Robotics and AI},
    doi = {10.3389/frobt.2021.577770},
    year = {2021},
    url = {https://eprints.lincoln.ac.uk/id/eprint/50873/},
    abstract = {This article presents the outcomes from a mixed-methods study of drawing practitioners (e.g., professional illustrators, fine artists, and art students) that was conducted in Autumn 2018 as a preliminary investigation for the development of a physical human-AI co-creative drawing system. The aim of the study was to discover possible roles that technology could play in observing, modeling, and possibly assisting an artist with their drawing. The study had three components: a paper survey of artists' drawing practices, technology usage and attitudes; video-recorded drawing exercises; and a follow-up semi-structured interview which included a co-design discussion on how AI might contribute to their drawing workflow. Key themes identified from the interviews were (1) drawing with physical mediums is a traditional and primary way of creation; (2) artists' views on AI varied, where co-creative AI is preferable to didactic AI; and (3) artists have a critical and skeptical view on the automation of creative work with AI. Participants' input provided the basis for the design and technical specifications of a co-creative drawing prototype, for which details are presented in this article. In addition, lessons learned from conducting the user study are presented with a reflection on future studies with drawing practitioners.}
    }

  • A. S. Gomez, E. Aptoula, S. Parsons, and P. Bosilj, “Deep regression versus detection for counting in robotic phenotyping,” Ieee robotics and automation letters, vol. 6, iss. 2, p. 2902–2907, 2021. doi:10.1109/LRA.2021.3062586
    [BibTeX] [Abstract] [Download PDF]

    Work in robotic phenotyping requires computer vision methods that estimate the number of fruit or grains in an image. To decide what to use, we compared three methods for counting fruit and grains, each method representative of a class of approaches from the literature. These are two methods based on density estimation and regression (single and multiple column), and one method based on object detection. We found that when the density of objects in an image is low, the approaches are comparable, but as the density increases, counting by regression becomes steadily more accurate than counting by detection. With more than a hundred objects per image, the error in the count predicted by detection-based methods is up to 5 times higher than when using regression-based ones.

    @article{lincoln44001,
    volume = {6},
    number = {2},
    month = {April},
    author = {Adrian Salazar Gomez and Erchan Aptoula and Simon Parsons and Petra Bosilj},
    title = {Deep Regression versus Detection for Counting in Robotic Phenotyping},
    publisher = {IEEE},
    year = {2021},
    journal = {IEEE Robotics and Automation Letters},
    doi = {10.1109/LRA.2021.3062586},
    pages = {2902--2907},
    url = {https://eprints.lincoln.ac.uk/id/eprint/44001/},
    abstract = {Work in robotic phenotyping requires computer vision methods that estimate the number of fruit or grains in an image. To decide what to use, we compared three methods for counting fruit and grains, each method representative of a class of approaches from the literature. These are two methods based on density estimation and regression (single and multiple column), and one method based on object detection. We found that when the density of objects in an image is low, the approaches are comparable, but as the density increases, counting by regression becomes steadily more accurate than counting by detection. With more than a hundred objects per image, the error in the count predicted by detection-based methods is up to 5 times higher than when using regression-based ones.}
    }

  • S. Sarkadi, A. Rutherford, P. McBurney, S. Parsons, and I. Rahwan, “The evolution of deception,” Royal society open science, vol. 8, iss. 9, 2021. doi:10.1098/rsos.201032
    [BibTeX] [Abstract] [Download PDF]

    Deception plays a critical role in the dissemination of information, and has important consequences on the functioning of cultural, market-based and democratic institutions. Deception has been widely studied within the fields of philosophy, psychology, economics and political science. Yet, we still lack an understanding of how deception emerges in a society under competitive (evolutionary) pressures. This paper begins to fill this gap by bridging evolutionary models of social good – public goods games (PGGs) – with ideas from Interpersonal Deception Theory and Truth-Default Theory. This provides a well-founded analysis of the growth of deception in societies and the effectiveness of several approaches to reducing deception. Assuming that knowledge is a public good, we use extensive simulation studies to explore (i) how deception impacts the sharing and dissemination of knowledge in societies over time, (ii) how different types of knowledge sharing societies are affected by deception, and (iii) what type of policing and regulation is needed to reduce the negative effects of deception in knowledge sharing. Our results indicate that cooperation in knowledge sharing can be re-established in systems by introducing institutions that investigate and regulate both defection and deception using a decentralised case-by-case strategy. This provides evidence for the adoption of methods for reducing the use of deception in the world around us in order to avoid a Tragedy of The Digital Commons.

    @article{lincoln46543,
    volume = {8},
    number = {9},
    month = {September},
    author = {Stefan Sarkadi and Alex Rutherford and Peter McBurney and Simon Parsons and Iyad Rahwan},
    title = {The Evolution of Deception},
    publisher = {Royal Society},
    year = {2021},
    journal = {Royal Society Open Science},
    doi = {10.1098/rsos.201032},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46543/},
    abstract = {Deception plays a critical role in the dissemination of information, and has important consequences on the functioning of cultural, market-based and democratic institutions. Deception has been widely studied within the fields of philosophy, psychology, economics and political science. Yet, we still lack an understanding of how deception emerges in a society under competitive (evolutionary) pressures. This paper begins to fill this gap by bridging evolutionary models of social good--public goods games (PGGs)--with ideas from Interpersonal Deception Theory and Truth-Default Theory. This provides a well-founded analysis of the growth of deception in societies and the effectiveness of several approaches to reducing deception. Assuming that knowledge is a public good, we use extensive simulation studies to explore (i) how deception impacts the sharing and dissemination of knowledge in societies over time, (ii) how different types of knowledge sharing societies are affected by deception, and (iii) what type of policing and regulation is needed to reduce the negative effects of deception in knowledge sharing. Our results indicate that cooperation in knowledge sharing can be re-established in systems by introducing institutions that investigate and regulate both defection and deception using a decentralised case-by-case strategy. This provides evidence for the adoption of methods for reducing the use of deception in the world around us in order to avoid a Tragedy of The Digital Commons.}
    }

  • N. Dethlefs, A. Schoene, and H. Cuayahuitl, “A divide-and-conquer approach to neural natural language generation from structured data,” Neurocomputing, vol. 433, p. 300–309, 2021. doi:10.1016/j.neucom.2020.12.083
    [BibTeX] [Abstract] [Download PDF]

    Current approaches that generate text from linked data for complex real-world domains can face problems including rich and sparse vocabularies as well as learning from examples of long varied sequences. In this article, we propose a novel divide-and-conquer approach that automatically induces a hierarchy of “generation spaces” from a dataset of semantic concepts and texts. Generation spaces are based on a notion of similarity of partial knowledge graphs that represent the domain and feed into a hierarchy of sequence-to-sequence or memory-to-sequence learners for concept-to-text generation. An advantage of our approach is that learning models are exposed to the most relevant examples during training which can avoid bias towards majority samples. We evaluate our approach on two common benchmark datasets and compare our hierarchical approach against a flat learning setup. We also conduct a comparison between sequence-to-sequence and memory-to-sequence learning models. Experiments show that our hierarchical approach overcomes issues of data sparsity and learns robust lexico-syntactic patterns, consistently outperforming flat baselines and previous work by up to 30\%. We also find that while memory-to-sequence models can outperform sequence-to-sequence models in some cases, the latter are generally more stable in their performance and represent a safer overall choice.

    @article{lincoln43748,
    volume = {433},
    month = {April},
    author = {Nina Dethlefs and Annika Schoene and Heriberto Cuayahuitl},
    title = {A Divide-and-Conquer Approach to Neural Natural Language Generation from Structured Data},
    publisher = {Elsevier},
    year = {2021},
    journal = {Neurocomputing},
    doi = {10.1016/j.neucom.2020.12.083},
    pages = {300--309},
    url = {https://eprints.lincoln.ac.uk/id/eprint/43748/},
    abstract = {Current approaches that generate text from linked data for complex real-world domains can face problems including rich and sparse vocabularies as well as learning from examples of long varied sequences. In this article, we propose a novel divide-and-conquer approach that automatically induces a hierarchy of ``generation spaces'' from a dataset of semantic concepts and texts. Generation spaces are based on a notion of similarity of partial knowledge graphs that represent the domain and feed into a hierarchy of sequence-to-sequence or memory-to-sequence learners for concept-to-text generation. An advantage of our approach is that learning models are exposed to the most relevant examples during training which can avoid bias towards majority samples. We evaluate our approach on two common benchmark datasets and compare our hierarchical approach against a flat learning setup. We also conduct a comparison between sequence-to-sequence and memory-to-sequence learning models. Experiments show that our hierarchical approach overcomes issues of data sparsity and learns robust lexico-syntactic patterns, consistently outperforming flat baselines and previous work by up to 30\%. We also find that while memory-to-sequence models can outperform sequence-to-sequence models in some cases, the latter are generally more stable in their performance and represent a safer overall choice.}
    }

  • T. G. Thuruthel, G. Picardi, F. Iida, C. Laschi, and M. Calisti, “Learning to stop: a unifying principle for legged locomotion in varying environments,” Royal society open science, vol. 8, iss. 4, 2021. doi:10.1098/rsos.210223
    [BibTeX] [Abstract] [Download PDF]

    Evolutionary studies have unequivocally proven the transition of living organisms from water to land. Consequently, it can be deduced that locomotion strategies must have evolved from one environment to the other. However, the mechanism by which this transition happened and its implications on bio-mechanical studies and robotics research have not been explored in detail. This paper presents a unifying control strategy for locomotion in varying environments based on the principle of “learning to stop”. Using a common reinforcement learning framework, deep deterministic policy gradient, we show that our proposed learning strategy facilitates a fast and safe methodology for transferring learned controllers from the facile water environment to the harsh land environment. Our results not only propose a plausible mechanism for safe and quick transition of locomotion strategies from a water to land environment but also provide a novel alternative for safer and faster training of robots.

    @article{lincoln44628,
    volume = {8},
    number = {4},
    month = {April},
    author = {T. G. Thuruthel and G. Picardi and F. Iida and C. Laschi and M. Calisti},
    title = {Learning to stop: a unifying principle for legged locomotion in varying environments},
    publisher = {The Royal Society},
    year = {2021},
    journal = {Royal Society Open Science},
    doi = {10.1098/rsos.210223},
    url = {https://eprints.lincoln.ac.uk/id/eprint/44628/},
    abstract = {Evolutionary studies have unequivocally proven the transition of living organisms from water to land. Consequently, it can be deduced that locomotion strategies must have evolved from one environment to the other. However, the mechanism by which this transition happened and its implications on bio-mechanical studies and robotics research have not been explored in detail. This paper presents a unifying control strategy for locomotion in varying environments based on the principle of ``learning to stop''. Using a common reinforcement learning framework, deep deterministic policy gradient, we show that our proposed learning strategy facilitates a fast and safe methodology for transferring learned controllers from the facile water environment to the harsh land environment. Our results not only propose a plausible mechanism for safe and quick transition of locomotion strategies from a water to land environment but also provide a novel alternative for safer and faster training of robots.}
    }

  • C. Laschi and M. Calisti, “Soft robot reaches the deepest part of the ocean,” Nature, vol. 591, p. 35–36, 2021. doi:10.1038/d41586-021-00489-y
    [BibTeX] [Abstract] [Download PDF]

    A self-powered robot inspired by a fish can survive the extreme pressures at the bottom of the ocean’s deepest trench, thanks to its soft body and distributed electronic system – and might enable exploration of the uncharted ocean.

    @article{lincoln52080,
    volume = {591},
    month = {March},
    author = {Cecilia Laschi and Marcello Calisti},
    title = {Soft robot reaches the deepest part of the ocean},
    publisher = {Nature Publishing Group},
    year = {2021},
    journal = {Nature},
    doi = {10.1038/d41586-021-00489-y},
    pages = {35--36},
    url = {https://eprints.lincoln.ac.uk/id/eprint/52080/},
    abstract = {A self-powered robot inspired by a fish can survive the extreme pressures at the bottom of the ocean's deepest trench, thanks to its soft body and distributed electronic system {--} and might enable exploration of the uncharted ocean.}
    }

  • J. Gao, J. C. Westergaard, E. H. R. Sundmark, M. Bagge, E. Liljeroth, and E. Alexandersson, “Automatic late blight lesion recognition and severity quantification based on field imagery of diverse potato genotypes by deep learning,” Knowledge-based systems, vol. 214, p. 106723, 2021. doi:10.1016/j.knosys.2020.106723
    [BibTeX] [Abstract] [Download PDF]

    The plant pathogen Phytophthora infestans causes the severe disease late blight in potato, which can result in huge yield loss for potato production. Automatic and accurate disease lesion segmentation enables fast evaluation of disease severity and assessment of disease progress. In tasks requiring computer vision, deep learning has recently gained tremendous success for image classification, object detection and semantic segmentation. To test whether we could extract late blight lesions from unstructured field environments based on high-resolution visual field images and deep learning algorithms, we collected {$\sim$}500 field RGB images in a set of diverse potato genotypes with different disease severity (0\%–70\%), resulting in 2100 cropped images. 1600 of these cropped images were used as the dataset for training deep neural networks and 250 cropped images were randomly selected as the validation dataset. Finally, the developed model was tested on the remaining 250 cropped images. The results show that the values for intersection over union (IoU) of the classes background (leaf and soil) and disease lesion in the test dataset were 0.996 and 0.386, respectively. Furthermore, we established a linear relationship (R2=0.655) between manual visual scores of late blight and the number of lesions detected by deep learning at the canopy level. We also showed that imbalance weights of lesion and background classes improved segmentation performance, and that fused masks based on the majority voting of the multiple masks enhanced the correlation with the visual disease scores. This study demonstrates the feasibility of using deep learning algorithms for disease lesion segmentation and severity evaluation based on proximal imagery, which could aid breeding for crop resistance in field environments, and also benefit precision farming.

    @article{lincoln43642,
    volume = {214},
    month = {February},
    author = {Junfeng Gao and Jesper Cairo Westergaard and Ea H{\o}egh Riis Sundmark and Merethe Bagge and Erland Liljeroth and Erik Alexandersson},
    title = {Automatic late blight lesion recognition and severity quantification based on field imagery of diverse potato genotypes by deep learning},
    publisher = {Elsevier},
    year = {2021},
    journal = {Knowledge-Based Systems},
    doi = {10.1016/j.knosys.2020.106723},
    pages = {106723},
    url = {https://eprints.lincoln.ac.uk/id/eprint/43642/},
    abstract = {The plant pathogen Phytophthora infestans causes the severe disease late blight in potato, which can result in huge yield loss for potato production. Automatic and accurate disease lesion segmentation enables fast evaluation of disease severity and assessment of disease progress. In tasks requiring computer vision, deep learning has recently gained tremendous success for image classification, object detection and semantic segmentation. To test whether we could extract late blight lesions from unstructured field environments based on high-resolution visual field images and deep learning algorithms, we collected {$\sim$}500 field RGB images in a set of diverse potato genotypes with different disease severity (0\%--70\%), resulting in 2100 cropped images. 1600 of these cropped images were used as the dataset for training deep neural networks and 250 cropped images were randomly selected as the validation dataset. Finally, the developed model was tested on the remaining 250 cropped images. The results show that the values for intersection over union (IoU) of the classes background (leaf and soil) and disease lesion in the test dataset were 0.996 and 0.386, respectively. Furthermore, we established a linear relationship (R2=0.655) between manual visual scores of late blight and the number of lesions detected by deep learning at the canopy level. We also showed that imbalance weights of lesion and background classes improved segmentation performance, and that fused masks based on the majority voting of the multiple masks enhanced the correlation with the visual disease scores. This study demonstrates the feasibility of using deep learning algorithms for disease lesion segmentation and severity evaluation based on proximal imagery, which could aid breeding for crop resistance in field environments, and also benefit precision farming.}
    }

  • P. McBurney and S. Parsons, “Argument schemes and dialogue protocols: doug walton’s legacy in artificial intelligence,” Journal of applied logics, vol. 8, iss. 1, p. 263–286, 2021.
    [BibTeX] [Abstract] [Download PDF]

    This paper is intended to honour the memory of Douglas Walton (1942–2020), a Canadian philosopher of argumentation who died in January 2020. Walton’s contributions to argumentation theory have had a very strong influence on Artificial Intelligence (AI), particularly in the design of autonomous software agents able to reason and argue with one another, and in the design of protocols to govern such interactions. In this paper, we explore two of these contributions – argumentation schemes and dialogue protocols – by discussing how they may be applied to a pressing current research challenge in AI: the automated assessment of explanations for automated decision-making systems.

    @article{lincoln43751,
    volume = {8},
    number = {1},
    month = {February},
    author = {Peter McBurney and Simon Parsons},
    title = {Argument Schemes and Dialogue Protocols: Doug Walton's legacy in artificial intelligence},
    publisher = {College Publications},
    year = {2021},
    journal = {Journal of Applied Logics},
    pages = {263--286},
    url = {https://eprints.lincoln.ac.uk/id/eprint/43751/},
    abstract = {This paper is intended to honour the memory of Douglas Walton (1942--2020), a Canadian philosopher of argumentation who died in January 2020. Walton's contributions to argumentation theory have had a very strong influence on Artificial Intelligence (AI), particularly in the design of autonomous software agents able to reason and argue with one another, and in the design of protocols to govern such interactions. In this paper, we explore two of these contributions --- argumentation schemes and dialogue protocols --- by discussing how they may be applied to a pressing current research challenge in AI: the automated assessment of explanations for automated decision-making systems.}
    }

  • A. Seddaoui and M. C. Saaj, “Collision-free optimal trajectory generation for a space robot using genetic algorithm,” Acta astronautica, vol. 179, p. 311–321, 2021. doi:10.1016/j.actaastro.2020.11.001
    [BibTeX] [Abstract] [Download PDF]

    Future on-orbit servicing and assembly missions will require space robots capable of manoeuvring safely around their target. Several challenges arise when modelling, controlling and planning the motion of such systems, therefore, new methodologies are required. A safe approach towards the grasping point implies that the space robot must be able to use the additional degrees of freedom offered by the spacecraft base to aid the arm to attain the target and avoid collisions and singularities. The controlled-floating space robot possesses this particularity of motion and will be utilised in this paper to design an optimal path generator. The path generator, based on a Genetic Algorithm, takes advantage of the dynamic coupling effect and the controlled motion of the spacecraft base to safely attain the target. It aims to minimise several objectives whilst satisfying multiple constraints. The key feature of this new path generator is that it requires only the Cartesian position of the point to grasp as an input, without prior knowledge of a desired path. The results presented originate from the trajectory tracking using a nonlinear adaptive

    @article{lincoln43074,
    volume = {179},
    month = {February},
    author = {Asma Seddaoui and Mini Chakravarthini Saaj},
    note = {The paper is the outcome of a PhD I supervised at University of Surrey.},
    title = {Collision-free optimal trajectory generation for a space robot using genetic algorithm},
    publisher = {Elsevier},
    year = {2021},
    journal = {Acta Astronautica},
    doi = {10.1016/j.actaastro.2020.11.001},
    pages = {311--321},
    url = {https://eprints.lincoln.ac.uk/id/eprint/43074/},
    abstract = {Future on-orbit servicing and assembly missions will require space robots capable of manoeuvring safely around their target. Several challenges arise when modelling, controlling and planning the motion of such systems, therefore, new methodologies are required. A safe approach towards the grasping point implies that the space robot must be able to use the additional degrees of freedom offered by the spacecraft base to aid the arm to attain the target and avoid collisions and singularities. The controlled-floating space robot possesses this particularity of motion and will be utilised in this paper to design an optimal path generator. The path generator, based on a Genetic Algorithm, takes advantage of the dynamic coupling effect and the controlled motion of the spacecraft base to safely attain the target. It aims to minimise several objectives whilst satisfying multiple constraints. The key feature of this new path generator is that it requires only the Cartesian position of the point to grasp as an input, without prior knowledge of a desired path. The results presented originate from the trajectory tracking using a nonlinear adaptive}
    }
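The entry above describes a genetic-algorithm path generator whose only input is the Cartesian position of the grasp point, minimising several objectives under collision constraints. A minimal, generic GA sketch of that idea follows; the 2-D workspace, single circular keep-out zone, cost weights and all names are illustrative assumptions, not the paper's space-robot model:

```python
# Toy GA path generator: evolve intermediate waypoints from a start pose
# to a grasp point while penalising waypoints inside an obstacle region.
import random

random.seed(0)                       # deterministic for illustration
START, GOAL = (0.0, 0.0), (10.0, 10.0)
OBSTACLE, RADIUS = (5.0, 5.0), 2.0   # one circular keep-out zone (assumed)
N_WAYPOINTS, POP, GENS = 4, 60, 120

def cost(waypoints):
    pts = [START] + waypoints + [GOAL]
    length = sum(((a[0] - b[0])**2 + (a[1] - b[1])**2) ** 0.5
                 for a, b in zip(pts, pts[1:]))
    # heavy penalty for any waypoint inside the keep-out zone
    penalty = sum(100.0 for x, y in pts
                  if (x - OBSTACLE[0])**2 + (y - OBSTACLE[1])**2 < RADIUS**2)
    return length + penalty

def random_path():
    return [(random.uniform(0, 10), random.uniform(0, 10))
            for _ in range(N_WAYPOINTS)]

def mutate(path, sigma=0.5):
    return [(x + random.gauss(0, sigma), y + random.gauss(0, sigma))
            for x, y in path]

population = [random_path() for _ in range(POP)]
for _ in range(GENS):
    population.sort(key=cost)                # rank by fitness
    elite = population[:POP // 4]            # selection
    population = elite + [mutate(random.choice(elite))  # mutation
                          for _ in range(POP - len(elite))]
best = min(population, key=cost)
```

The penalty term plays the role of the collision constraints; in the paper the cost additionally reflects the coupled base-arm dynamics of the controlled-floating space robot.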

  • Á. D. Santos, N. Fili, D. S. Pearson, Y. Hari-Gupta, and C. P. Toseland, “High-throughput mechanobiology: force modulation of ensemble biochemical and cell-based assays.,” Biophysical journal, vol. 120, iss. 4, p. 631–641, 2021. doi:10.1016/j.bpj.2020.12.024
    [BibTeX] [Abstract] [Download PDF]

    Mechanobiology is focused on how the physical forces and mechanical properties of proteins, cells, and tissues contribute to physiology and disease. Although the response of proteins and cells to mechanical stimuli is critical for function, the tools to probe these activities are typically restricted to single-molecule manipulations. Here, we have developed a novel microplate reader assay to encompass mechanical measurements with ensemble biochemical and cellular assays, using a microplate lid modified with magnets. This configuration enables multiple static magnetic tweezers to function simultaneously across the microplate, thereby greatly increasing throughput. We demonstrate the broad applicability and versatility through in vitro and in cellulo approaches. Overall, our methodology allows, for the first time (to our knowledge), ensemble biochemical and cell-based assays to be performed under force in high-throughput format. This approach substantially increases the availability of mechanobiology measurements.

    @article{lincoln46356,
    volume = {120},
    number = {4},
    month = {February},
    author = {{\'A}lia Dos Santos and Natalia Fili and David S. Pearson and Yukti Hari-Gupta and Christopher P. Toseland},
    title = {High-throughput mechanobiology: Force modulation of ensemble biochemical and cell-based assays.},
    publisher = {Elsevier},
    year = {2021},
    journal = {Biophysical Journal},
    doi = {10.1016/j.bpj.2020.12.024},
    pages = {631--641},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46356/},
    abstract = {Mechanobiology is focused on how the physical forces and mechanical properties of proteins, cells, and tissues contribute to physiology and disease. Although the response of proteins and cells to mechanical stimuli is critical for function, the tools to probe these activities are typically restricted to single-molecule manipulations. Here, we have developed a novel microplate reader assay to encompass mechanical measurements with ensemble biochemical and cellular assays, using a microplate lid modified with magnets. This configuration enables multiple static magnetic tweezers to function simultaneously across the microplate, thereby greatly increasing throughput. We demonstrate the broad applicability and versatility through in vitro and in cellulo approaches. Overall, our methodology allows, for the first time (to our knowledge), ensemble biochemical and cell-based assays to be performed under force in high-throughput format. This approach substantially increases the availability of mechanobiology measurements.}
    }

  • M. Lujak, E. I. Sklar, and F. Semet, “Agriculture fleet vehicle routing: a decentralised and dynamic problem,” Ai communications, vol. 34, iss. 1, p. 55–71, 2021. doi:10.3233/AIC-201581
    [BibTeX] [Abstract] [Download PDF]

    To date, the research on agriculture vehicles in general and Agriculture Mobile Robots (AMRs) in particular has focused on a single vehicle (robot) and its agriculture-specific capabilities. Very little work has explored the coordination of fleets of such vehicles in the daily execution of farming tasks. This is especially the case when considering overall fleet performance, its efficiency and scalability in the context of highly automated agriculture vehicles that perform tasks throughout multiple fields potentially owned by different farmers and/or enterprises. The potential impact of automating AMR fleet coordination on commercial agriculture is immense. Major conglomerates with large and heterogeneous fleets of agriculture vehicles could operate on huge land areas without human operators to effect precision farming. In this paper, we propose the Agriculture Fleet Vehicle Routing Problem (AF-VRP) which, to the best of our knowledge, differs from any other version of the Vehicle Routing Problem studied so far. We focus on the dynamic and decentralised version of this problem applicable in environments involving multiple agriculture machinery and farm owners where concepts of fairness and equity must be considered. Such a problem combines three related problems: the dynamic assignment problem, the dynamic 3-index assignment problem and the capacitated arc routing problem. We review the state-of-the-art and categorise solution approaches as centralised, distributed and decentralised, based on the underlying decision-making context. Finally, we discuss open challenges in applying distributed and decentralised coordination approaches to this problem.

    @article{lincoln43570,
    volume = {34},
    number = {1},
    month = {February},
    author = {Marin Lujak and Elizabeth I Sklar and Frederic Semet},
    title = {Agriculture fleet vehicle routing: A decentralised and dynamic problem},
    publisher = {IOS Press},
    year = {2021},
    journal = {AI Communications},
    doi = {10.3233/AIC-201581},
    pages = {55--71},
    url = {https://eprints.lincoln.ac.uk/id/eprint/43570/},
    abstract = {To date, the research on agriculture vehicles in general and Agriculture Mobile Robots (AMRs) in particular has focused on a single vehicle (robot) and its agriculture-specific capabilities. Very little work has explored the coordination of fleets of such vehicles in the daily execution of farming tasks. This is especially the case when considering overall fleet performance, its efficiency and scalability in the context of highly automated agriculture vehicles that perform tasks throughout multiple fields potentially owned by different farmers and/or enterprises. The potential impact of automating AMR fleet coordination on commercial agriculture is immense. Major conglomerates with large and heterogeneous fleets of agriculture vehicles could operate on huge land areas without human operators to effect precision farming. In this paper, we propose the Agriculture Fleet Vehicle Routing Problem (AF-VRP) which, to the best of our knowledge, differs from any other version of the Vehicle Routing Problem studied so far. We focus on the dynamic and decentralised version of this problem applicable in environments involving multiple agriculture machinery and farm owners where concepts of fairness and equity must be considered. Such a problem combines three related problems: the dynamic assignment problem, the dynamic 3-index assignment problem and the capacitated arc routing problem. We review the state-of-the-art and categorise solution approaches as centralised, distributed and decentralised, based on the underlying decision-making context. Finally, we discuss open challenges in applying distributed and decentralised coordination approaches to this problem.}
    }
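The dynamic assignment component at the heart of the AF-VRP can be illustrated with a deliberately simple greedy rule: as field tasks arrive, each is handed to the vehicle that can serve it at least added cost. Everything here (positions, the distance cost, the `assign` helper) is a hypothetical sketch; the paper surveys centralised, distributed and decentralised solution approaches rather than prescribing one algorithm:

```python
# Toy dynamic task-to-vehicle assignment: each arriving task goes to the
# nearest vehicle, which then relocates to the task site. Greedy and
# centralised, for illustration only.
from math import hypot

vehicles = {"v1": (0.0, 0.0), "v2": (10.0, 0.0)}   # current positions (assumed)

def assign(task_xy, fleet):
    """Pick the vehicle with minimum travel distance and move it to the task."""
    best = min(fleet, key=lambda v: hypot(fleet[v][0] - task_xy[0],
                                          fleet[v][1] - task_xy[1]))
    fleet[best] = task_xy            # vehicle relocates to serve the task
    return best

arriving_tasks = [(1.0, 1.0), (9.0, 1.0), (2.0, 2.0)]
plan = [assign(t, vehicles) for t in arriving_tasks]
```

A decentralised variant would replace the global `min` with per-vehicle bidding, which is where the fairness and equity concerns raised in the abstract come in.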

  • T. Vintr, Z. Yan, K. Eyisoy, F. Kubis, J. Blaha, J. Ulrich, C. Swaminathan, S. M. Mellado, T. Kucner, M. Magnusson, G. Cielniak, J. Faigl, T. Duckett, A. Lilienthal, and T. Krajnik, “Natural criteria for comparison of pedestrian flow forecasting models,” 2020 ieee/rsj international conference on intelligent robots and systems (iros), p. 11197–11204, 2021. doi:10.1109/IROS45743.2020.9341672
    [BibTeX] [Abstract] [Download PDF]

    Models of human behaviour, such as pedestrian flows, are beneficial for safe and efficient operation of mobile robots. We present a new methodology for benchmarking of pedestrian flow models based on the afforded safety of robot navigation in human-populated environments. While previous evaluations of pedestrian flow models focused on their predictive capabilities, we assess their ability to support safe path planning and scheduling. Using real-world datasets gathered continuously over several weeks, we benchmark state-of-the-art pedestrian flow models, including both time-averaged and time-sensitive models. In the evaluation, we use the learned models to plan robot trajectories and then observe the number of times when the robot gets too close to humans, using a predefined social distance threshold. The experiments show that while traditional evaluation criteria based on model fidelity differ only marginally, the introduced criteria vary significantly depending on the model used, providing a natural interpretation of the expected safety of the system. For the time-averaged flow models, the number of encounters increases linearly with the percentage operating time of the robot, as might be reasonably expected. By contrast, for the time-sensitive models, the number of encounters grows sublinearly with the percentage operating time, by planning to avoid congested areas and times.

    @article{lincoln48928,
    month = {February},
    author = {Tomas Vintr and Zhi Yan and Kerem Eyisoy and Filip Kubis and Jan Blaha and Jiri Ulrich and Chittaranjan Swaminathan and Sergio Molina Mellado and Tomasz Kucner and Martin Magnusson and Grzegorz Cielniak and Jan Faigl and Tom Duckett and Achim Lilienthal and Tomas Krajnik},
    title = {Natural criteria for comparison of pedestrian flow forecasting models},
    publisher = {IEEE},
    journal = {2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
    doi = {10.1109/IROS45743.2020.9341672},
    pages = {11197--11204},
    year = {2021},
    url = {https://eprints.lincoln.ac.uk/id/eprint/48928/},
    abstract = {Models of human behaviour, such as pedestrian flows, are beneficial for safe and efficient operation of mobile robots. We present a new methodology for benchmarking of pedestrian flow models based on the afforded safety of robot navigation in human-populated environments. While previous evaluations of pedestrian flow models focused on their predictive capabilities, we assess their ability to support safe path planning and scheduling. Using real-world datasets gathered continuously over several weeks, we benchmark state-of-the-art pedestrian flow models, including both time-averaged and time-sensitive models. In the evaluation, we use the learned models to plan robot trajectories and then observe the number of times when the robot gets too close to humans, using a predefined social distance threshold. The experiments show that while traditional evaluation criteria based on model fidelity differ only marginally, the introduced criteria vary significantly depending on the model used, providing a natural interpretation of the expected safety of the system. For the time-averaged flow models, the number of encounters increases linearly with the percentage operating time of the robot, as might be reasonably expected. By contrast, for the time-sensitive models, the number of encounters grows sublinearly with the percentage operating time, by planning to avoid congested areas and times.}
    }
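The safety criterion described above, counting how often a planned robot trajectory comes within a social-distance threshold of recorded pedestrians, can be sketched in a few lines. The data layout and the `count_encounters` helper are assumptions for illustration, not the benchmark's actual interface:

```python
# Count timesteps where the robot is closer than a social-distance
# threshold to any recorded pedestrian (at most one encounter per step).
from math import hypot

SOCIAL_DISTANCE = 1.5  # metres; the threshold is a tunable parameter

def count_encounters(robot_traj, human_tracks, threshold=SOCIAL_DISTANCE):
    """robot_traj: {t: (x, y)}; human_tracks: list of {t: (x, y)} dicts."""
    encounters = 0
    for t, (rx, ry) in robot_traj.items():
        for track in human_tracks:
            if t in track:
                hx, hy = track[t]
                if hypot(rx - hx, ry - hy) < threshold:
                    encounters += 1
                    break   # count at most one encounter per timestep
    return encounters

robot = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (2.0, 0.0)}
humans = [{1: (1.5, 0.5)}, {2: (9.0, 9.0)}]
n = count_encounters(robot, humans)   # one near-miss, at t=1
```

Running this counter over trajectories planned with different flow models gives the comparison the paper advocates: fewer encounters for models that steer plans away from congested areas and times.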

  • L. Korir, A. Drake, M. Collison, C. C. Villa, E. Sklar, and S. Pearson, “Current and emergent economic impacts of covid-19 and brexit on uk fresh produce and horticultural businesses,” Arxiv, 2021. doi:10.22004/ag.econ.312068
    [BibTeX] [Abstract] [Download PDF]

    This paper describes a study designed to investigate the current and emergent impacts of Covid-19 and Brexit on UK horticultural businesses. Various characteristics of UK horticultural production, notably labour reliance and import dependence, make it an important sector for policymakers concerned to understand the effects of these disruptive events as we move from 2020 into 2021. The study design prioritised timeliness, using a rapid survey to gather information from a relatively small (n = 19) but indicative group of producers. The main novelty of the results is to suggest that a very substantial majority of producers either plan to scale back production in 2021 (47\%) or have been unable to make plans for 2021 because of uncertainty (37\%). The results also add to broader evidence that the sector has experienced profound labour supply challenges, with implications for labour cost and quality. The study discusses the implications of these insights from producers in terms of productivity and automation, as well as in terms of broader economic implications. Although automation is generally recognised as the long-term future for the industry (89\%), it appeared in the study as the second most referred short-term option (32\%) only after changes to labour schemes and policies (58\%). Currently, automation plays a limited role in contributing to the UK's horticultural workforce shortage due to economic and socio-political uncertainties. The conclusion highlights policy recommendations and future investigative intentions, as well as suggesting methodological and other discussion points for the research community.

    @article{lincoln46766,
    month = {January},
    title = {Current and Emergent Economic Impacts of Covid-19 and Brexit on UK Fresh Produce and Horticultural Businesses},
    author = {Lilian Korir and Archie Drake and Martin Collison and Carolina Camacho Villa and Elizabeth Sklar and Simon Pearson},
    year = {2021},
    doi = {10.22004/ag.econ.312068},
    journal = {ArXiv},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46766/},
    abstract = {This paper describes a study designed to investigate the current and emergent impacts of Covid-19 and Brexit on UK horticultural businesses. Various characteristics of UK horticultural production, notably labour reliance and import dependence, make it an important sector for policymakers concerned to understand the effects of these disruptive events as we move from 2020 into 2021. The study design prioritised timeliness, using a rapid survey to gather information from a relatively small (n = 19) but indicative group of producers. The main novelty of the results is to suggest that a very substantial majority of producers either plan to scale back production in 2021 (47\%) or have been unable to make plans for 2021 because of uncertainty (37\%). The results also add to broader evidence that the sector has experienced profound labour supply challenges, with implications for labour cost and quality.
    The study discusses the implications of these insights from producers in terms of productivity and automation, as well as in terms of broader economic implications. Although automation is generally recognised as the long-term future for the industry (89\%), it appeared in the study as the second most referred short-term option (32\%) only after changes to labour schemes and policies (58\%). Currently, automation plays a limited role in contributing to the UK's horticultural workforce shortage due to economic and socio-political uncertainties. The conclusion highlights policy recommendations and future investigative intentions, as well as suggesting methodological and other discussion points for the research community.}
    }

  • G. Picardi, H. Hauser, C. Laschi, and M. Calisti, “Morphologically induced stability on an underwater legged robot with a deformable body,” The international journal of robotics research, vol. 40, iss. 1, p. 435–448, 2021. doi:10.1177/0278364919840426
    [BibTeX] [Abstract] [Download PDF]

    For robots to navigate successfully in the real world, unstructured environment adaptability is a prerequisite. Although this is typically implemented within the control layer, there have been recent proposals of adaptation through a morphing of the body. However, the successful demonstration of this approach has mostly been theoretical and in simulations thus far. In this work we present an underwater hopping robot that features a deformable body implemented as a deployable structure that is covered by a soft skin for which it is possible to manually change the body size without altering any other property (e.g. buoyancy or weight). For such a system, we show that it is possible to induce a stable hopping behavior instead of a fall, by just increasing the body size. We provide a mathematical model that describes the hopping behavior of the robot under the influence of shape-dependent underwater contributions (drag, buoyancy, and added mass) in order to analyze and compare the results obtained. Moreover, we show that for certain conditions, a stable hopping behavior can only be obtained through changing the morphology of the robot as the controller (i.e. actuator) would already be working at maximum capacity. The presented work demonstrates that, through the exploitation of shape-dependent forces, the dynamics of a system can be modified through altering the morphology of the body to induce a desirable behavior and, thus, a morphological change can be an effective alternative to the classic control.

    @article{lincoln46149,
    volume = {40},
    number = {1},
    month = {January},
    author = {Giacomo Picardi and Helmut Hauser and Cecilia Laschi and Marcello Calisti},
    title = {Morphologically induced stability on an underwater legged robot with a deformable body},
    year = {2021},
    journal = {The International Journal of Robotics Research},
    doi = {10.1177/0278364919840426},
    pages = {435--448},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46149/},
    abstract = {For robots to navigate successfully in the real world, unstructured environment adaptability is a prerequisite. Although this is typically implemented within the control layer, there have been recent proposals of adaptation through a morphing of the body. However, the successful demonstration of this approach has mostly been theoretical and in simulations thus far. In this work we present an underwater hopping robot that features a deformable body implemented as a deployable structure that is covered by a soft skin for which it is possible to manually change the body size without altering any other property (e.g. buoyancy or weight). For such a system, we show that it is possible to induce a stable hopping behavior instead of a fall, by just increasing the body size. We provide a mathematical model that describes the hopping behavior of the robot under the influence of shape-dependent underwater contributions (drag, buoyancy, and added mass) in order to analyze and compare the results obtained. Moreover, we show that for certain conditions, a stable hopping behavior can only be obtained through changing the morphology of the robot as the controller (i.e. actuator) would already be working at maximum capacity. The presented work demonstrates that, through the exploitation of shape-dependent forces, the dynamics of a system can be modified through altering the morphology of the body to induce a desirable behavior and, thus, a morphological change can be an effective alternative to the classic control.}
    }
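The shape-dependent force balance behind this result can be sketched with back-of-the-envelope scaling: drag grows with frontal area (proportional to r²) while buoyancy grows with volume (proportional to r³), so inflating the body shifts, and can even reverse, the net downward force without touching the controller. All constants below are illustrative assumptions, not the robot's parameters:

```python
# Net vertical force on a spherical body of radius r descending at speed v.
# Weight is fixed; buoyancy (~r^3) and drag (~r^2) grow with body size.
from math import pi

RHO_WATER = 1000.0   # kg/m^3
G, CD = 9.81, 0.8    # gravity; drag coefficient (assumed)
MASS = 5.0           # robot mass in kg, unchanged by morphing (assumed)

def net_vertical_force(radius, velocity):
    """Downward positive: weight minus buoyancy minus drag."""
    buoyancy = RHO_WATER * G * (4 / 3) * pi * radius**3
    drag = 0.5 * RHO_WATER * CD * pi * radius**2 * velocity**2
    return MASS * G - buoyancy - drag

# same descent speed, two body sizes: the larger body reverses the sign
small = net_vertical_force(0.05, 0.5)   # net downward: the robot falls
large = net_vertical_force(0.11, 0.5)   # net upward: descent is arrested
```

This is only the scaling argument; the paper's model additionally includes the shape-dependent added mass and the legged hopping dynamics.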

  • C. Armanini, M. Farman, M. Calisti, F. Giorgio-Serchi, C. Stefanini, and F. Renda, “Flagellate underwater robotics at macroscale: design, modeling, and characterization,” Ieee transactions on robotics, p. 1–17, 2021. doi:10.1109/TRO.2021.3094051
    [BibTeX] [Abstract] [Download PDF]

    The prokaryotic flagellum is considered the only known example of a biological “wheel,” a system capable of converting the action of a rotatory actuator into a continuous propulsive force. For this reason, flagella are an interesting case study in soft robotics and they represent an appealing source of inspiration for the design of underwater robots. A great number of flagellum-inspired devices exist, but these are all characterized by a size ranging in the micrometer scale and mostly realized with rigid materials. Here, we present the design and development of a novel generation of macroscale underwater propellers that draw their inspiration from flagellated organisms. Through a simple rotatory actuation and exploiting the capability of the soft material to store energy when interacting with the surrounding fluid, the propellers attain different helical shapes that generate a propulsive thrust. A theoretical model is presented, accurately describing and predicting the kinematic and the propulsive capabilities of the proposed solution. Different experimental trials are presented to validate the accuracy of the model and to investigate the performance of the proposed design. Finally, an underwater robot prototype propelled by four flagellar modules is presented.

    @article{lincoln46191,
    title = {Flagellate Underwater Robotics at Macroscale: Design, Modeling, and Characterization},
    author = {Costanza Armanini and Madiha Farman and Marcello Calisti and Francesco Giorgio-Serchi and Cesare Stefanini and Federico Renda},
    year = {2021},
    pages = {1--17},
    doi = {10.1109/TRO.2021.3094051},
    journal = {IEEE Transactions on Robotics},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46191/},
    abstract = {The prokaryotic flagellum is considered the only known example of a biological ``wheel,'' a system capable of converting the action of a rotatory actuator into a continuous propulsive force. For this reason, flagella are an interesting case study in soft robotics and they represent an appealing source of inspiration for the design of underwater robots. A great number of flagellum-inspired devices exist, but these are all characterized by a size ranging in the micrometer scale and mostly realized with rigid materials. Here, we present the design and development of a novel generation of macroscale underwater propellers that draw their inspiration from flagellated organisms. Through a simple rotatory actuation and exploiting the capability of the soft material to store energy when interacting with the surrounding fluid, the propellers attain different helical shapes that generate a propulsive thrust. A theoretical model is presented, accurately describing and predicting the kinematic and the propulsive capabilities of the proposed solution. Different experimental trials are presented to validate the accuracy of the model and to investigate the performance of the proposed design. Finally, an underwater robot prototype propelled by four flagellar modules is presented.}
    }

  • N. Kokciyan, I. Sassoon, E. Sklar, S. Parsons, and S. Modgil, “Applying metalevel argumentation frameworks to support medical decision making,” Ieee intelligent systems, 2021. doi:10.1109/MIS.2021.3051420
    [BibTeX] [Abstract] [Download PDF]

    People are increasingly employing artificial intelligence as the basis for decision-support systems (DSSs) to assist them in making well-informed decisions. Adoption of DSS is challenging when such systems lack support, or evidence, for justifying their recommendations. DSSs are widely applied in the medical domain, due to the complexity of the domain and the sheer volume of data that render manual processing difficult. This paper proposes a metalevel argumentation-based decision-support system that can reason with heterogeneous data (e.g. body measurements, electronic health records, clinical guidelines), while incorporating the preferences of the human beneficiaries of those decisions. The system constructs template-based explanations for the recommendations that it makes. The proposed framework has been implemented in a system to support stroke patients and its functionality has been tested in a pilot study. User feedback shows that the system can run effectively over an extended period.

    @article{lincoln43690,
    title = {Applying Metalevel Argumentation Frameworks to Support Medical Decision Making},
    author = {Nadin Kokciyan and Isabel Sassoon and Elizabeth Sklar and Simon Parsons and Sanjay Modgil},
    publisher = {IEEE},
    year = {2021},
    doi = {10.1109/MIS.2021.3051420},
    journal = {IEEE Intelligent Systems},
    url = {https://eprints.lincoln.ac.uk/id/eprint/43690/},
    abstract = {People are increasingly employing artificial intelligence as the basis for decision-support systems (DSSs) to assist them in making well-informed decisions. Adoption of DSS is challenging when such systems lack support, or evidence, for justifying their recommendations. DSSs are widely applied in the medical domain, due to the complexity of the domain and the sheer volume of data that render manual processing difficult. This paper proposes a metalevel argumentation-based decision-support system that can reason with heterogeneous data (e.g. body measurements, electronic health records, clinical guidelines), while incorporating the preferences of the human beneficiaries of those decisions. The system constructs template-based explanations for the recommendations that it makes. The proposed framework has been implemented in a system to support stroke patients and its functionality has been tested in a pilot study. User feedback shows that the system can run effectively over an extended period.}
    }

  • H. Wang, H. Wang, J. Zhao, C. Hu, J. Peng, and S. Yue, “A time-delay feedback neural network for discriminating small, fast-moving targets in complex dynamic environments,” Ieee transactions on neural networks and learning systems, 2021. doi:10.1109/TNNLS.2021.3094205
    [BibTeX] [Abstract] [Download PDF]

    Discriminating small moving objects within complex visual environments is a significant challenge for autonomous micro robots that are generally limited in computational power. By exploiting their highly evolved visual systems, flying insects can effectively detect mates and track prey during rapid pursuits, even though the small targets equate to only a few pixels in their visual field. The high degree of sensitivity to small target movement is supported by a class of specialized neurons called small target motion detectors (STMDs). Existing STMD-based computational models normally comprise four sequentially arranged neural layers interconnected via feedforward loops to extract information on small target motion from raw visual inputs. However, feedback, another important regulatory circuit for motion perception, has not been investigated in the STMD pathway and its functional roles for small target motion detection are not clear. In this paper, we propose an STMD-based neural network with feedback connection (Feedback STMD), where the network output is temporally delayed, then fed back to the lower layers to mediate neural responses. We compare the properties of the model with and without the time-delay feedback loop, and find it shows preference for high-velocity objects. Extensive experiments suggest that the Feedback STMD achieves superior detection performance for fast-moving small targets, while significantly suppressing background false positive movements which display lower velocities. The proposed feedback model provides an effective solution in robotic visual systems for detecting fast-moving small targets that are always salient and potentially threatening.

    @article{lincoln45567,
    title = {A Time-Delay Feedback Neural Network for Discriminating Small, Fast-Moving Targets in Complex Dynamic Environments},
    author = {Hongxin Wang and Huatian Wang and Jiannan Zhao and Cheng Hu and Jigen Peng and Shigang Yue},
    publisher = {IEEE},
    year = {2021},
    doi = {10.1109/TNNLS.2021.3094205},
    journal = {IEEE Transactions on Neural Networks and Learning Systems},
    url = {https://eprints.lincoln.ac.uk/id/eprint/45567/},
    abstract = {Discriminating small moving objects within complex visual environments is a significant challenge for autonomous micro robots that are generally limited in computational power. By exploiting their highly evolved visual systems, flying insects can effectively detect mates and track prey during rapid pursuits, even though the small targets equate to only a few pixels in their visual field. The high degree of sensitivity to small target movement is supported by a class of specialized neurons called small target motion detectors (STMDs). Existing STMD-based computational models normally comprise four sequentially arranged neural layers interconnected via feedforward loops to extract information on small target motion from raw visual inputs. However, feedback, another important regulatory circuit for motion perception, has not been investigated in the STMD pathway and its functional roles for small target motion detection are not clear. In this paper, we propose an STMD-based neural network with feedback connection (Feedback STMD), where the network output is temporally delayed, then fed back to the lower layers to mediate neural responses. We compare the properties of the model with and without the time-delay feedback loop, and find it shows preference for high-velocity objects. Extensive experiments suggest that the Feedback STMD achieves superior detection performance for fast-moving small targets, while significantly suppressing background false positive movements which display lower velocities. The proposed feedback model provides an effective solution in robotic visual systems for detecting fast-moving small targets that are always salient and potentially threatening.}
    }
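The feedback mechanism summarised in the abstract above (the network output temporally delayed, then fed back to mediate lower-layer responses) can be illustrated with a toy one-dimensional sketch. The delay length, feedback gain, and rectification used here are illustrative assumptions, not the published Feedback STMD model:

```python
from collections import deque

def feedback_stmd(signal, delay=3, gain=0.5):
    """Toy sketch of a time-delay feedback loop: each output is
    buffered for `delay` steps, then subtracted (scaled by `gain`)
    from the incoming signal before half-wave rectification."""
    buffer = deque([0.0] * delay, maxlen=delay)
    outputs = []
    for x in signal:
        fed_back = buffer[0]               # output from `delay` steps ago
        y = max(0.0, x - gain * fed_back)  # feedback mediates the response
        buffer.append(y)
        outputs.append(y)
    return outputs

# a sustained stimulus is attenuated once the delayed feedback arrives:
print(feedback_stmd([1.0] * 8)[:5])  # [1.0, 1.0, 1.0, 0.5, 0.5]
```

In this toy form, slow sustained activity is suppressed once the delayed output arrives while brief transients pass through, loosely mirroring the preference for fast-moving targets described above.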

  • A. G. Esfahani, K. N. Sasikolomi, H. Hashempour, and F. Zhong, “Deep-LfD: deep robot learning from demonstrations,” Software impacts, vol. 9, p. 100087, 2021. doi:10.1016/j.simpa.2021.100087
    [BibTeX] [Abstract] [Download PDF]

    Like other robot learning from demonstration (LfD) approaches, deep-LfD builds a task model from sample demonstrations. However, unlike conventional LfD, the deep-LfD model learns the relation between high dimensional visual sensory information and robot trajectory/path. This paper presents a dataset of successful needle insertion by da Vinci Research Kit into deformable objects based on which several deep-LfD models are built as a benchmark of models learning robot controller for the needle insertion task.

    @article{lincoln45212,
    volume = {9},
    month = {August},
    author = {Amir Ghalamzan Esfahani and Kiyanoush Nazari Sasikolomi and Hamidreza Hashempour and Fangxun Zhong},
    title = {Deep-LfD: Deep robot learning from demonstrations},
    publisher = {Elsevier},
    year = {2021},
    journal = {Software Impacts},
    doi = {10.1016/j.simpa.2021.100087},
    pages = {100087},
    url = {https://eprints.lincoln.ac.uk/id/eprint/45212/},
    abstract = {Like other robot learning from demonstration (LfD) approaches, deep-LfD builds a task model from sample demonstrations. However, unlike conventional LfD, the deep-LfD model learns the relation between high dimensional visual sensory information and robot trajectory/path. This paper presents a dataset of successful needle insertion by da Vinci Research Kit into deformable objects based on which several deep-LfD models are built as a benchmark of models learning robot controller for the needle insertion task.}
    }

  • J. Zhao, H. Wang, N. Bellotto, C. Hu, J. Peng, and S. Yue, “Enhancing LGMD’s looming selectivity for UAV with spatial-temporal distributed presynaptic connections,” IEEE transactions on neural networks and learning systems, p. 1–15, 2021. doi:10.1109/TNNLS.2021.3106946
    [BibTeX] [Abstract] [Download PDF]

    Collision detection is one of the most challenging tasks for Unmanned Aerial Vehicles (UAVs). This is especially true for small or micro UAVs, due to their limited computational power. In nature, flying insects with compact and simple visual systems demonstrate their remarkable ability to navigate and avoid collision in complex environments. A good example of this is provided by locusts. They can avoid collisions in a dense swarm through the activity of a motion-based visual neuron called the Lobula Giant Movement Detector (LGMD). The defining feature of the LGMD neuron is its preference for looming. As a flying insect's visual neuron, LGMD is considered to be an ideal basis for building UAV's collision detecting system. However, existing LGMD models cannot distinguish looming clearly from other visual cues such as complex background movements caused by UAV agile flights. To address this issue, we proposed a new model implementing distributed spatial-temporal synaptic interactions, which is inspired by recent findings in locusts' synaptic morphology. We first introduced the locally distributed excitation to enhance the excitation caused by visual motion with preferred velocities. Then radially extending temporal latency for inhibition is incorporated to compete with the distributed excitation and selectively suppress the non-preferred visual motions. This spatial-temporal competition between excitation and inhibition in our model is therefore tuned to preferred image angular velocity representing looming rather than background movements with these distributed synaptic interactions. Systematic experiments have been conducted to verify the performance of the proposed model for UAV agile flights. The results have demonstrated that this new model enhances the looming selectivity in complex flying scenes considerably, and has the potential to be implemented on embedded collision detection systems for small or micro UAVs.

    @article{lincoln47316,
    title = {Enhancing LGMD's Looming Selectivity for UAV With Spatial-Temporal Distributed Presynaptic Connections},
    author = {Jiannan Zhao and Hongxin Wang and Nicola Bellotto and Cheng Hu and Jigen Peng and Shigang Yue},
    publisher = {IEEE},
    year = {2021},
    pages = {1--15},
    doi = {10.1109/TNNLS.2021.3106946},
    journal = {IEEE Transactions on Neural Networks and Learning Systems},
    url = {https://eprints.lincoln.ac.uk/id/eprint/47316/},
    abstract = {Collision detection is one of the most challenging tasks for Unmanned Aerial Vehicles (UAVs). This is especially true for small or micro UAVs, due to their limited computational power. In nature, flying insects with compact and simple visual systems demonstrate their remarkable ability to navigate and avoid collision in complex environments. A good example of this is provided by locusts. They can avoid collisions in a dense swarm through the activity of a motion-based visual neuron called the Lobula Giant Movement Detector (LGMD). The defining feature of the LGMD neuron is its preference for looming. As a flying insect's visual neuron, LGMD is considered to be an ideal basis for building UAV's collision detecting system. However, existing LGMD models cannot distinguish looming clearly from other visual cues such as complex background movements caused by UAV agile flights. To address this issue, we proposed a new model implementing distributed spatial-temporal synaptic interactions, which is inspired by recent findings in locusts' synaptic morphology. We first introduced the locally distributed excitation to enhance the excitation caused by visual motion with preferred velocities. Then radially extending temporal latency for inhibition is incorporated to compete with the distributed excitation and selectively suppress the non-preferred visual motions. This spatial-temporal competition between excitation and inhibition in our model is therefore tuned to preferred image angular velocity representing looming rather than background movements with these distributed synaptic interactions. Systematic experiments have been conducted to verify the performance of the proposed model for UAV agile flights. The results have demonstrated that this new model enhances the looming selectivity in complex flying scenes considerably, and has the potential to be implemented on embedded collision detection systems for small or micro UAVs.}
    }

  • R. Polvara, F. Del Duchetto, G. Neumann, and M. Hanheide, “Navigate-and-seek: a robotics framework for people localization in agricultural environments,” IEEE robotics and automation letters, vol. 6, iss. 4, p. 6577–6584, 2021. doi:10.1109/LRA.2021.3094557
    [BibTeX] [Abstract] [Download PDF]

    The agricultural domain offers a working environment where many human laborers are nowadays employed to maintain or harvest crops, with huge potential for productivity gains through the introduction of robotic automation. Detecting and localizing humans reliably and accurately in such an environment, however, is a prerequisite to many services offered by fleets of mobile robots collaborating with human workers. Consequently, in this paper, we expand on the concept of a topological particle filter (TPF) to accurately and individually localize and track workers in a farm environment, integrating information from heterogeneous sensors and combining local active sensing (exploiting a robot's onboard sensing employing a Next-Best-Sense planning approach) and global localization (using affordable IoT GNSS devices). We validate the proposed approach in topologies created for the deployment of robotics fleets to support fruit pickers in a real farm environment. By combining multi-sensor observations on the topological level complemented by active perception through the NBS approach, we show that we can improve the accuracy of picker localization in comparison to prior work.

    @article{lincoln45627,
    volume = {6},
    number = {4},
    month = {October},
    author = {Riccardo Polvara and Francesco Del Duchetto and Gerhard Neumann and Marc Hanheide},
    title = {Navigate-and-Seek: a Robotics Framework for People Localization in Agricultural Environments},
    publisher = {IEEE},
    year = {2021},
    journal = {IEEE Robotics and Automation Letters},
    doi = {10.1109/LRA.2021.3094557},
    pages = {6577--6584},
    url = {https://eprints.lincoln.ac.uk/id/eprint/45627/},
    abstract = {The agricultural domain offers a working environment where many human laborers are nowadays employed to maintain or harvest crops, with huge potential for productivity gains through the introduction of robotic automation. Detecting and localizing humans reliably and accurately in such an environment, however, is a prerequisite to many services offered by fleets of mobile robots collaborating with human workers. Consequently, in this paper, we expand on the concept of a topological particle filter (TPF) to accurately and individually localize and track workers in a farm environment, integrating information from heterogeneous sensors and combining local active sensing (exploiting a robot's onboard sensing employing a Next-Best-Sense planning approach) and global localization (using affordable IoT GNSS devices). We validate the proposed approach in topologies created for the deployment of robotics fleets to support fruit pickers in a real farm environment. By combining multi-sensor observations on the topological level complemented by active perception through the NBS approach, we show that we can improve the accuracy of picker localization in comparison to prior work.}
    }
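As a rough, hedged illustration of the topological particle filter idea above (particles confined to the nodes of a topological map, re-weighted by a fused sensor likelihood, then resampled), the following sketch uses an invented three-node farm graph and likelihood function; it is not the paper's implementation:

```python
import random

def topological_particle_filter(graph, observe, n_particles=200, steps=10):
    """Track a person on a topological map: particles live on graph
    nodes, diffuse along edges (motion model), and are re-weighted by
    `observe(node) -> float`, a stand-in for fused GNSS/onboard sensing."""
    particles = random.choices(list(graph), k=n_particles)
    for _ in range(steps):
        # motion model: stay put or hop to a random neighbouring node
        particles = [random.choice([p] + graph[p]) for p in particles]
        # importance weights from the (fused) observation likelihood
        weights = [observe(p) for p in particles]
        if sum(weights) == 0:
            continue
        particles = random.choices(particles, weights=weights, k=n_particles)
    # point estimate: the most populated node
    return max(set(particles), key=particles.count)

# hypothetical farm topology; the picker is sensed around "row2"
graph = {"gate": ["row1"], "row1": ["gate", "row2"], "row2": ["row1"]}
print(topological_particle_filter(graph, lambda n: 1.0 if n == "row2" else 0.01))
```

Confining the particles to graph nodes is what makes the filter cheap enough to run per-worker while still fusing heterogeneous observations through the single likelihood function.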

  • H. Isakhani, N. Bellotto, Q. Fu, and S. Yue, “Generative design and fabrication of a locust-inspired gliding wing prototype for micro aerial robots,” Journal of computational design and engineering, vol. 8, iss. 5, p. 1191–1203, 2021. doi:10.1093/jcde/qwab040
    [BibTeX] [Abstract] [Download PDF]

    Gliding is generally one of the most efficient modes of flight in natural fliers that can be further emphasised in the aircraft industry to reduce emissions and facilitate endured flights. Natural wings being fundamentally responsible for this phenomenon are developed over millions of years of evolution. Artificial wings on the other hand, are limited to the human-proposed conceptual design phase often leading to sub-optimal results. However, the novel Generative Design (GD) method claims to produce mechanically improved solutions based on robust and rigorous models of design conditions and performance criteria. This study investigates the potential applications of this Computer-Associated Design (CAsD) technology to generate novel micro aerial vehicle wing concepts that are structurally more stable and efficient. Multiple performance-driven solutions (wings) with high-level goals are generated by an infinite scale cloud computing solution executing a machine learning based GD algorithm. Ultimately, the highest performing CAsD concepts are numerically analysed, fabricated, and mechanically tested according to our previous study, and the results are compared to the literature for qualitative as well as quantitative analysis and validations. It was concluded that the GD-based tandem wings’ (fore-& hindwing) ability to withstand fracture failure without compromising structural rigidity was optimised by 78\% compared to its peer models. However, the weight was slightly increased by 11\% with 14\% drop in stiffness when compared to our models from previous study.

    @article{lincoln46871,
    volume = {8},
    number = {5},
    month = {October},
    author = {Hamid Isakhani and Nicola Bellotto and Qinbing Fu and Shigang Yue},
    title = {Generative design and fabrication of a locust-inspired gliding wing prototype for micro aerial robots},
    publisher = {Oxford University Press},
    year = {2021},
    journal = {Journal of Computational Design and Engineering},
    doi = {10.1093/jcde/qwab040},
    pages = {1191--1203},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46871/},
    abstract = {Gliding is generally one of the most efficient modes of flight in natural fliers that can be further emphasised in the aircraft industry to reduce emissions and facilitate endured flights. Natural wings being fundamentally responsible for this phenomenon are developed over millions of years of evolution. Artificial wings on the other hand, are limited to the human-proposed conceptual design phase often leading to sub-optimal results. However, the novel Generative Design (GD) method claims to produce mechanically improved solutions based on robust and rigorous models of design conditions and performance criteria. This study investigates the potential applications of this Computer-Associated Design (CAsD) technology to generate novel micro aerial vehicle wing concepts that are structurally more stable and efficient. Multiple performance-driven solutions (wings) with high-level goals are generated by an infinite scale cloud computing solution executing a machine learning based GD algorithm. Ultimately, the highest performing CAsD concepts are numerically analysed, fabricated, and mechanically tested according to our previous study, and the results are compared to the literature for qualitative as well as quantitative analysis and validations. It was concluded that the GD-based tandem wings' (fore-\& hindwing) ability to withstand fracture failure without compromising structural rigidity was optimised by 78\% compared to its peer models. However, the weight was slightly increased by 11\% with 14\% drop in stiffness when compared to our models from previous study.}
    }

  • T. Liu, X. Sun, C. Hu, Q. Fu, and S. Yue, “A multiple pheromone communication system for swarm intelligence,” IEEE access, vol. 9, p. 148721–148737, 2021. doi:10.1109/ACCESS.2021.3124386
    [BibTeX] [Abstract] [Download PDF]

    Pheromones are chemical substances essential for communication among social insects. In the application of swarm intelligence to real micro mobile robots, the deployment of a single virtual pheromone has emerged recently as a powerful real-time method for indirect communication. However, these studies usually exploit only one kind of pheromones in their task, neglecting the crucial fact that in the world of real insects, multiple pheromones play important roles in shaping stigmergic behaviours such as foraging or nest building. To explore the multiple pheromones mechanism which enable robots to solve complex collective tasks efficiently, we introduce an artificial multiple pheromone system (ColCOSΦ) to support swarm intelligence research by enabling multiple robots to deploy and react to multiple pheromones simultaneously. The proposed system ColCOSΦ uses optical signals to emulate different evaporating chemical substances i.e. pheromones. These emulated pheromones are represented by trails displayed on a wide LCD display screen positioned horizontally, on which multiple miniature robots can move freely. The colour sensors beneath the robots can detect and identify lingering “pheromones” on the screen. Meanwhile, the release of any pheromone from each robot is enabled by monitoring its positional information over time with an overhead camera. No other communication methods apart from virtual pheromones are employed in this system. Two case studies have been carried out which have verified the feasibility and effectiveness of the proposed system in achieving complex swarm tasks as empowered by multiple pheromones. This novel platform is a timely and powerful tool for research into swarm intelligence.

    @article{lincoln47447,
    volume = {9},
    month = {December},
    author = {Tian Liu and Xuelong Sun and Cheng Hu and Qinbing Fu and Shigang Yue},
    title = {A Multiple Pheromone Communication System for Swarm Intelligence},
    publisher = {IEEE},
    year = {2021},
    journal = {IEEE Access},
    doi = {10.1109/ACCESS.2021.3124386},
    pages = {148721--148737},
    url = {https://eprints.lincoln.ac.uk/id/eprint/47447/},
    abstract = {Pheromones are chemical substances essential for communication among social insects. In the application of swarm intelligence to real micro mobile robots, the deployment of a single virtual pheromone has emerged recently as a powerful real-time method for indirect communication. However, these studies usually exploit only one kind of pheromones in their task, neglecting the crucial fact that in the world of real insects, multiple pheromones play important roles in shaping stigmergic behaviours such as foraging or nest building. To explore the multiple pheromones mechanism which enable robots to solve complex collective tasks efficiently, we introduce an artificial multiple pheromone system (ColCOS$\Phi$) to support swarm intelligence research by enabling multiple robots to deploy and react to multiple pheromones simultaneously. The proposed system ColCOS$\Phi$ uses optical signals to emulate different evaporating chemical substances i.e. pheromones. These emulated pheromones are represented by trails displayed on a wide LCD display screen positioned horizontally, on which multiple miniature robots can move freely. The colour sensors beneath the robots can detect and identify lingering "pheromones" on the screen. Meanwhile, the release of any pheromone from each robot is enabled by monitoring its positional information over time with an overhead camera. No other communication methods apart from virtual pheromones are employed in this system. Two case studies have been carried out which have verified the feasibility and effectiveness of the proposed system in achieving complex swarm tasks as empowered by multiple pheromones. This novel platform is a timely and powerful tool for research into swarm intelligence.}
    }
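The core mechanism described above, multiple virtual pheromones that robots deposit and that decay like evaporating chemicals, can be sketched as a set of decaying concentration grids. The grid size, decay rate, and method names here are illustrative assumptions, not the ColCOSΦ platform's API:

```python
class PheromoneField:
    """Minimal sketch of a multi-pheromone arena: one 2-D concentration
    grid per pheromone, with deposits by robots and exponential decay
    standing in for chemical evaporation."""

    def __init__(self, width, height, names, evaporation=0.05):
        self.evaporation = evaporation  # fraction lost per time step
        self.maps = {n: [[0.0] * width for _ in range(height)] for n in names}

    def deposit(self, name, x, y, amount=1.0):
        self.maps[name][y][x] += amount

    def step(self):
        keep = 1.0 - self.evaporation
        for grid in self.maps.values():
            for row in grid:
                for x in range(len(row)):
                    row[x] *= keep  # evaporate every cell

    def sense(self, name, x, y):
        return self.maps[name][y][x]

field = PheromoneField(100, 100, ["food", "nest"])
field.deposit("food", 10, 10, amount=2.0)
field.step()
print(round(field.sense("food", 10, 10), 3))  # 1.9
```

Keeping each pheromone in its own named grid is what lets robots react to several substances simultaneously, the key difference from single-pheromone systems noted in the abstract.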

  • Z. Maamar, N. Faci, M. Al-Khafajiy, and M. Dohan, “Time-centric and resource-driven composition for the internet of things,” Internet of things, vol. 16, p. 100460, 2021. doi:10.1016/j.iot.2021.100460
    [BibTeX] [Abstract] [Download PDF]

    Internet of Things (IoT), one of the fastest growing Information and Communication Technologies (ICT), is playing a major role in provisioning contextualized, smart services to end-users and organizations. To sustain this role, many challenges must be tackled with focus in this paper on the design and development of thing composition. The complex nature of today's needs requires groups of things, and not separate things, to work together to satisfy these needs. By analogy with other ICTs like Web services, thing composition is specified with a model that uses dependencies to decide upon things that will do what, where, when, and why. Two types of dependencies are adopted, regular that schedule the execution chronology of things and special that coordinate the operations of things when they run into obstacles like unavailability of resources to use. Both resource use and resource availability are specified in compliance with Allen's time intervals upon which reasoning takes place. This reasoning is technically demonstrated through a system extending EdgeCloudSim and backed with a set of experiments.

    @article{lincoln47573,
    volume = {16},
    month = {December},
    author = {Zakaria Maamar and Noura Faci and Mohammed Al-Khafajiy and Murtada Dohan},
    title = {Time-centric and resource-driven composition for the Internet of Things},
    publisher = {Elsevier},
    year = {2021},
    journal = {Internet of Things},
    doi = {10.1016/j.iot.2021.100460},
    pages = {100460},
    url = {https://eprints.lincoln.ac.uk/id/eprint/47573/},
    abstract = {Internet of Things (IoT), one of the fastest growing Information and Communication Technologies (ICT), is playing a major role in provisioning contextualized, smart services to end-users and organizations. To sustain this role, many challenges must be tackled with focus in this paper on the design and development of thing composition. The complex nature of today's needs requires groups of things, and not separate things, to work together to satisfy these needs. By analogy with other ICTs like Web services, thing composition is specified with a model that uses dependencies to decide upon things that will do what, where, when, and why. Two types of dependencies are adopted, regular that schedule the execution chronology of things and special that coordinate the operations of things when they run into obstacles like unavailability of resources to use. Both resource use and resource availability are specified in compliance with Allen's time intervals upon which reasoning takes place. This reasoning is technically demonstrated through a system extending EdgeCloudSim and backed with a set of experiments.}
    }
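The composition model above specifies resource use and availability with Allen's time intervals. As a hedged illustration of that formalism (not the paper's system), some of Allen's thirteen relations between two intervals can be classified as:

```python
def allen_relation(a, b):
    """Classify a subset of Allen's interval relations between
    intervals a = (start, end) and b = (start, end)."""
    a0, a1 = a
    b0, b1 = b
    if a1 < b0:
        return "before"    # a ends strictly before b starts
    if a1 == b0:
        return "meets"     # a ends exactly where b starts
    if a0 < b0 < a1 < b1:
        return "overlaps"  # a starts first; the two share a sub-interval
    if a0 == b0 and a1 == b1:
        return "equal"
    if b0 < a0 and a1 < b1:
        return "during"    # a lies strictly inside b
    return "other"         # remaining relations (starts, finishes, inverses)

# a resource used over (2, 5) while it is available over (0, 10):
print(allen_relation((2, 5), (0, 10)))  # during
```

In this spirit, checking whether a thing's use interval falls `during` a resource's availability interval is one way such dependency reasoning can be grounded.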

  • A. Seddaoui, C. M. Saaj, and M. H. Nair, “Modeling a controlled-floating space robot for in-space services: a beginner’s tutorial,” Frontiers in robotics and AI, vol. 8, 2021. doi:10.3389/frobt.2021.725333
    [BibTeX] [Abstract] [Download PDF]

    Ground-based applications of robotics and autonomous systems (RASs) are fast advancing, and there is a growing appetite for developing cost-effective RAS solutions for in situ servicing, debris removal, manufacturing, and assembly missions. An orbital space robot, that is, a spacecraft mounted with one or more robotic manipulators, is an inevitable system for a range of future in-orbit services. However, various practical challenges make controlling a space robot extremely difficult compared with its terrestrial counterpart. The state of the art of modeling the kinematics and dynamics of a space robot, operating in the free-flying and free-floating modes, has been well studied by researchers. However, these two modes of operation have various shortcomings, which can be overcome by operating the space robot in the controlled-floating mode. This tutorial article aims to address the knowledge gap in modeling complex space robots operating in the controlled-floating mode and under perturbed conditions. The novel research contribution of this article is the refined dynamic model of a chaser space robot, derived with respect to the moving target while accounting for the internal perturbations due to constantly changing the center of mass, the inertial matrix, Coriolis, and centrifugal terms of the coupled system; it also accounts for the external environmental disturbances. The nonlinear model presented accurately represents the multibody coupled dynamics of a space robot, which is pivotal for precise pose control. Simulation results presented demonstrate the accuracy of the model for closed-loop control. In addition to the theoretical contributions in mathematical modeling, this article also offers a commercially viable solution for a wide range of in-orbit missions.

    @article{lincoln48335,
    volume = {8},
    month = {December},
    author = {Asma Seddaoui and Chakravarthini Mini Saaj and Manu Harikrishnan Nair},
    title = {Modeling a Controlled-Floating Space Robot for In-Space Services: A Beginner's Tutorial},
    publisher = {Frontiers Media},
    journal = {Frontiers in Robotics and AI},
    doi = {10.3389/frobt.2021.725333},
    year = {2021},
    url = {https://eprints.lincoln.ac.uk/id/eprint/48335/},
    abstract = {Ground-based applications of robotics and autonomous systems (RASs) are fast advancing, and there is a growing appetite for developing cost-effective RAS solutions for in situ servicing, debris removal, manufacturing, and assembly missions. An orbital space robot, that is, a spacecraft mounted with one or more robotic manipulators, is an inevitable system for a range of future in-orbit services. However, various practical challenges make controlling a space robot extremely difficult compared with its terrestrial counterpart. The state of the art of modeling the kinematics and dynamics of a space robot, operating in the free-flying and free-floating modes, has been well studied by researchers. However, these two modes of operation have various shortcomings, which can be overcome by operating the space robot in the controlled-floating mode. This tutorial article aims to address the knowledge gap in modeling complex space robots operating in the controlled-floating mode and under perturbed conditions. The novel research contribution of this article is the refined dynamic model of a chaser space robot, derived with respect to the moving target while accounting for the internal perturbations due to constantly changing the center of mass, the inertial matrix, Coriolis, and centrifugal terms of the coupled system; it also accounts for the external environmental disturbances. The nonlinear model presented accurately represents the multibody coupled dynamics of a space robot, which is pivotal for precise pose control. Simulation results presented demonstrate the accuracy of the model for closed-loop control. In addition to the theoretical contributions in mathematical modeling, this article also offers a commercially viable solution for a wide range of in-orbit missions.}
    }

  • M. Chellapurath, K. Walker, E. Donato, G. Picardi, S. Stefanni, C. Laschi, F. G. Serchi, and M. Calisti, “Analysis of station keeping performance of an underwater legged robot,” IEEE/ASME transactions on mechatronics, p. 1–12, 2021. doi:10.1109/TMECH.2021.3132779
    [BibTeX] [Abstract] [Download PDF]

    Remotely operated vehicles (ROVs) can exploit contact with the substrate to enhance their station keeping capabilities. A negatively buoyant underwater legged robot can perform passive station keeping, relying on the frictional force to counteract disturbances acting on the robot. Unlike conventional propeller-based ROVs, this approach has similar, slightly higher efficiency while reducing disturbances to the substrate. Detailed analysis on the passive station keeping performance of an underwater legged robot was performed using Seabed Interaction Legged Vehicle for Exploration and Research 2 (SILVER2) as a reference platform, investigating the effect of leg configuration, net weight, and the nature of the substrate on station keeping performance. A numerical model was developed to study the effect of both geometrical and physical parameters on the station keeping performance, which accurately predicted the station keeping behavior of the robot during field tests. Finally, we defined a metric called station keeping efficiency for the evaluation of station keeping performance; the underwater legged robots showed higher station keeping efficiency (28\%) than commercial propeller-based ROVs (11\%), showing they could present an alternative for tasks such as environmental monitoring.

    @article{lincoln52083,
    month = {December},
    author = {Mrudul Chellapurath and Kyle Walker and Enrico Donato and Giacomo Picardi and Sergio Stefanni and Cecilia Laschi and Francesco Giorgio Serchi and Marcello Calisti},
    title = {Analysis of Station Keeping Performance of an Underwater Legged Robot},
    publisher = {IEEE},
    journal = {IEEE/ASME Transactions on Mechatronics},
    doi = {10.1109/TMECH.2021.3132779},
    pages = {1--12},
    year = {2021},
    url = {https://eprints.lincoln.ac.uk/id/eprint/52083/},
    abstract = {Remotely operated vehicles (ROVs) can exploit contact with the substrate to enhance their station keeping capabilities. A negatively buoyant underwater legged robot can perform passive station keeping, relying on the frictional force to counteract disturbances acting on the robot. Unlike conventional propeller-based ROVs, this approach has similar, slightly higher efficiency while reducing disturbances to the substrate. Detailed analysis on the passive station keeping performance of an underwater legged robot was performed using Seabed Interaction Legged Vehicle for Exploration and Research 2 (SILVER2) as a reference platform, investigating the effect of leg configuration, net weight, and the nature of the substrate on station keeping performance. A numerical model was developed to study the effect of both geometrical and physical parameters on the station keeping performance, which accurately predicted the station keeping behavior of the robot during field tests. Finally, we defined a metric called station keeping efficiency for the evaluation of station keeping performance; the underwater legged robots showed higher station keeping efficiency (28\%) than commercial propeller-based ROVs (11\%), showing they could present an alternative for tasks such as environmental monitoring.}
    }

  • J. Le Louedec and G. Cielniak, “3D shape sensing and deep learning-based segmentation of strawberries,” Computers and electronics in agriculture, vol. 190, 2021. doi:10.1016/j.compag.2021.106374
    [BibTeX] [Abstract] [Download PDF]

    Automation and robotisation of the agricultural sector are seen as a viable solution to socio-economic challenges faced by this industry. This technology often relies on intelligent perception systems providing information about crops, plants and the entire environment. The challenges faced by traditional 2D vision systems can be addressed by modern 3D vision systems which enable straightforward localisation of objects, size and shape estimation, or handling of occlusions. So far, the use of 3D sensing was mainly limited to indoor or structured environments. In this paper, we evaluate modern sensing technologies including stereo and time-of-flight cameras for 3D perception of shape in agriculture and study their usability for segmenting out soft fruit from background based on their shape. To that end, we propose a novel 3D deep neural network which exploits the organised nature of information originating from the camera-based 3D sensors. We demonstrate the superior performance and efficiency of the proposed architecture compared to the state-of-the-art 3D networks. Through a simulated study, we also show the potential of the 3D sensing paradigm for object segmentation in agriculture and provide insights and analysis of what shape quality is needed and expected for further analysis of crops. The results of this work should encourage researchers and companies to develop more accurate and robust 3D sensing technologies to assure their wider adoption in practical agricultural applications.

    @article{lincoln47035,
    volume = {190},
    month = {November},
    author = {Justin Le Louedec and Grzegorz Cielniak},
    title = {3D shape sensing and deep learning-based segmentation of strawberries},
    publisher = {Elsevier},
    journal = {Computers and Electronics in Agriculture},
    doi = {10.1016/j.compag.2021.106374},
    year = {2021},
    url = {https://eprints.lincoln.ac.uk/id/eprint/47035/},
    abstract = {Automation and robotisation of the agricultural sector are seen as a viable solution to socio-economic challenges
    faced by this industry. This technology often relies on intelligent perception systems providing information about
    crops, plants and the entire environment. The challenges faced by traditional 2D vision systems can be addressed
    by modern 3D vision systems which enable straightforward localisation of objects, size and shape estimation, or
    handling of occlusions. So far, the use of 3D sensing was mainly limited to indoor or structured environments. In
    this paper, we evaluate modern sensing technologies including stereo and time-of-flight cameras for 3D
    perception of shape in agriculture and study their usability for segmenting out soft fruit from background based
    on their shape. To that end, we propose a novel 3D deep neural network which exploits the organised nature of
    information originating from the camera-based 3D sensors. We demonstrate the superior performance and efficiency
    of the proposed architecture compared to the state-of-the-art 3D networks. Through a simulated study,
    we also show the potential of the 3D sensing paradigm for object segmentation in agriculture and provide insights
    and analysis of what shape quality is needed and expected for further analysis of crops. The results of this
    work should encourage researchers and companies to develop more accurate and robust 3D sensing technologies
    to assure their wider adoption in practical agricultural applications.}
    }

  • A. Astolfi, G. Picardi, and M. Calisti, “Multilegged underwater running with articulated legs,” IEEE transactions on robotics, vol. 38, iss. 3, p. 1841–1855, 2021. doi:10.1109/TRO.2021.3118204
    [BibTeX] [Abstract] [Download PDF]

    Drawing inspiration from the locomotion modalities of animals, legged robots demonstrated the potential to traverse irregular and unstructured environments. Successful approaches exploited single-leg templates, like the spring-loaded inverted pendulum (SLIP), as a reference for the control of multilegged machines. Nevertheless, the anchoring between the low-order model and the actual multilegged structure is still an open challenge. This article proposes a novel strategy to derive actuation inputs for a multilegged robot by expressing the control requirements in terms of jump height and forward speed (derived from the limit cycle). We found that these requirements could be associated with a specific maximum force, successively split on an arbitrary number of legs and their relative actuation sets. The proposed approach has been validated in multibody simulation and real-world experiments by employing the underwater hexapod robot SILVER2. Results show that locomotion performances of the low-order model are reflected by the simulated and actual robot, showing that the articulated-USLIP (a-USLIP) model can faithfully explain the multilegged behavior under the imposed control inputs once hydrodynamic parameters have been tuned. More importantly, the proposed controller can be translated to the terrestrial case with minimal modifications and extended with additional layers to obtain more complex behaviors.

    @article{lincoln52081,
    volume = {38},
    number = {3},
    month = {November},
    author = {Anna Astolfi and Giacomo Picardi and Marcello Calisti},
    title = {Multilegged Underwater Running With Articulated Legs},
    publisher = {IEEE},
    year = {2021},
    journal = {IEEE Transactions on Robotics},
    doi = {10.1109/TRO.2021.3118204},
    pages = {1841--1855},
    url = {https://eprints.lincoln.ac.uk/id/eprint/52081/},
    abstract = {Drawing inspiration from the locomotion modalities of animals, legged robots demonstrated the potential to traverse irregular and unstructured environments. Successful approaches exploited single-leg templates, like the spring-loaded inverted pendulum (SLIP), as a reference for the control of multilegged machines. Nevertheless, the anchoring between the low-order model and the actual multilegged structure is still an open challenge. This article proposes a novel strategy to derive actuation inputs for a multilegged robot by expressing the control requirements in terms of jump height and forward speed (derived from the limit cycle). We found that these requirements could be associated with a specific maximum force, successively split on an arbitrary number of legs and their relative actuation sets. The proposed approach has been validated in multibody simulation and real-world experiments by employing the underwater hexapod robot SILVER2. Results show that locomotion performances of the low-order model are reflected by the simulated and actual robot, showing that the articulated-USLIP (a-USLIP) model can faithfully explain the multilegged behavior under the imposed control inputs once hydrodynamic parameters have been tuned. More importantly, the proposed controller can be translated to the terrestrial case with minimal modifications and extended with additional layers to obtain more complex behaviors.}
    }

  • S. Maleki, S. Maleki, and N. R. Jennings, “Unsupervised anomaly detection with LSTM autoencoders using statistical data-filtering,” Applied soft computing, vol. 108, p. 107443, 2021. doi:10.1016/j.asoc.2021.107443
    [BibTeX] [Abstract] [Download PDF]

    To address one of the most challenging industry problems, we develop an enhanced training algorithm for anomaly detection in unlabelled sequential data such as time-series. We propose the outputs of a well-designed system are drawn from an unknown probability distribution, U, in normal conditions. We introduce a probability criterion based on the classical central limit theorem that allows evaluation of the likelihood that a data-point is drawn from U. This enables the labelling of the data on the fly. Non-anomalous data is passed to train a deep Long Short-Term Memory (LSTM) autoencoder that distinguishes anomalies when the reconstruction error exceeds a threshold. To illustrate our algorithm's efficacy, we consider two real industrial case studies where gradually-developing and abrupt anomalies occur. Moreover, we compare our algorithm's performance with four of the recent and widely used algorithms in the domain. We show that our algorithm achieves considerably better results in that it timely detects anomalies while others either miss or lag in doing so.

    @article{lincoln44910,
    volume = {108},
    month = {September},
    author = {Sepehr Maleki and Sasan Maleki and Nicholas R. Jennings},
    title = {Unsupervised anomaly detection with LSTM autoencoders using statistical data-filtering},
    publisher = {Elsevier},
    year = {2021},
    journal = {Applied Soft Computing},
    doi = {10.1016/j.asoc.2021.107443},
    pages = {107443},
    url = {https://eprints.lincoln.ac.uk/id/eprint/44910/},
    abstract = {To address one of the most challenging industry problems, we develop an enhanced training algorithm for anomaly detection in unlabelled sequential data such as time-series. We propose the outputs of a well-designed system are drawn from an unknown probability distribution, U, in normal conditions. We introduce a probability criterion based on the classical central limit theorem that allows evaluation of the likelihood that a data-point is drawn from U. This enables the labelling of the data on the fly. Non-anomalous data is passed to train a deep Long Short-Term Memory (LSTM) autoencoder that distinguishes anomalies when the reconstruction error exceeds a threshold. To illustrate our algorithm's efficacy, we consider two real industrial case studies where gradually-developing and abrupt anomalies occur. Moreover, we compare our algorithm's performance with four of the recent and widely used algorithms in the domain. We show that our algorithm achieves considerably better results in that it timely detects anomalies while others either miss or lag in doing so.}
    }
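
    The two-stage mechanism described in the abstract above can be sketched roughly as follows. This is a simplified illustration, not the authors' implementation: the function names, the z = 3 cut-off, and the replacement of the LSTM autoencoder by a plain list of precomputed reconstruction errors are all assumptions made here.

    ```python
    import statistics

    def looks_normal(window, mean_u, std_u, z=3.0):
        # Central-limit-theorem criterion: the mean of a window of samples
        # drawn from the normal-condition distribution U should lie within
        # z standard errors of U's mean; such windows are kept for training.
        standard_error = std_u / (len(window) ** 0.5)
        return abs(statistics.fmean(window) - mean_u) <= z * standard_error

    def flag_anomalies(recon_errors, threshold):
        # Reconstruction-error thresholding: a point is flagged as anomalous
        # when the autoencoder's reconstruction error exceeds the threshold.
        return [e > threshold for e in recon_errors]
    ```

    In this reading, `looks_normal` performs the on-the-fly statistical data-filtering that selects training data, and `flag_anomalies` stands in for the final decision rule applied to the trained autoencoder's outputs.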

  • T. Liu, X. Sun, C. Hu, Q. Fu, and S. Yue, “A multiple pheromone communication system for swarm intelligence,” IEEE access, vol. 9, 2021. doi:10.1109/ACCESS.2021.3124386
    [BibTeX] [Abstract] [Download PDF]

    Pheromones are chemical substances essential for communication among social insects. In the application of swarm intelligence to real micro mobile robots, the deployment of a single virtual pheromone has emerged recently as a powerful real-time method for indirect communication. However, these studies usually exploit only one kind of pheromones in their task, neglecting the crucial fact that in the world of real insects, multiple pheromones play important roles in shaping stigmergic behaviors such as foraging or nest building. To explore the multiple pheromones mechanism which enable robots to solve complex collective tasks efficiently, we introduce an artificial multiple pheromone system (ColCOSΦ) to support swarm intelligence research by enabling multiple robots to deploy and react to multiple pheromones simultaneously. The proposed system ColCOSΦ uses optical signals to emulate different evaporating chemical substances i.e. pheromones. These emulated pheromones are represented by trails displayed on a wide LCD display screen positioned horizontally, on which multiple miniature robots can move freely. The color sensors beneath the robots can detect and identify lingering “pheromones” on the screen. Meanwhile, the release of any pheromone from each robot is enabled by monitoring its positional information over time with an overhead camera. No other communication methods apart from virtual pheromones are employed in this system. Two case studies have been carried out which have verified the feasibility and effectiveness of the proposed system in achieving complex swarm tasks as empowered by multiple pheromones. This novel platform is a timely and powerful tool for research into swarm intelligence.

    @article{lincoln47216,
    volume = {9},
    month = {November},
    author = {Tian Liu and Xuelong Sun and Cheng Hu and Qinbing Fu and Shigang Yue},
    title = {A Multiple Pheromone Communication System for Swarm Intelligence},
    publisher = {IEEE},
    journal = {IEEE Access},
    doi = {10.1109/ACCESS.2021.3124386},
    year = {2021},
    url = {https://eprints.lincoln.ac.uk/id/eprint/47216/},
    abstract = {Pheromones are chemical substances essential for communication among social insects.
    In the application of swarm intelligence to real micro mobile robots, the deployment of a single virtual
    pheromone has emerged recently as a powerful real-time method for indirect communication. However,
    these studies usually exploit only one kind of pheromones in their task, neglecting the crucial fact that in
    the world of real insects, multiple pheromones play important roles in shaping stigmergic behaviors such
    as foraging or nest building. To explore the multiple pheromones mechanism which enable robots to solve
    complex collective tasks efficiently, we introduce an artificial multiple pheromone system (ColCOS{\ensuremath{\Phi}}) to
    support swarm intelligence research by enabling multiple robots to deploy and react to multiple pheromones
    simultaneously. The proposed system ColCOS{\ensuremath{\Phi}} uses optical signals to emulate different evaporating
    chemical substances i.e. pheromones. These emulated pheromones are represented by trails displayed on a
    wide LCD display screen positioned horizontally, on which multiple miniature robots can move freely. The
    color sensors beneath the robots can detect and identify lingering "pheromones" on the screen. Meanwhile,
    the release of any pheromone from each robot is enabled by monitoring its positional information over time
    with an overhead camera. No other communication methods apart from virtual pheromones are employed in
    this system. Two case studies have been carried out which have verified the feasibility and effectiveness of
    the proposed system in achieving complex swarm tasks as empowered by multiple pheromones. This novel
    platform is a timely and powerful tool for research into swarm intelligence.}
    }

  • F. Camara, N. Bellotto, S. Cosar, D. Nathanael, M. Althoff, J. Wu, J. Ruenz, A. Dietrich, and C. Fox, “Pedestrian models for autonomous driving part i: low-level models, from sensing to tracking,” IEEE transactions on intelligent transport systems, vol. 22, iss. 10, p. 6131–6151, 2021. doi:10.1109/TITS.2020.3006768
    [BibTeX] [Abstract] [Download PDF]

    Autonomous vehicles (AVs) must share space with pedestrians, both in carriageway cases such as cars at pedestrian crossings and off-carriageway cases such as delivery vehicles navigating through crowds on pedestrianized high-streets. Unlike static obstacles, pedestrians are active agents with complex, interactive motions. Planning AV actions in the presence of pedestrians thus requires modelling of their probable future behaviour as well as detecting and tracking them. This narrative review article is Part I of a pair, together surveying the current technology stack involved in this process, organising recent research into a hierarchical taxonomy ranging from low-level image detection to high-level psychology models, from the perspective of an AV designer. This self-contained Part I covers the lower levels of this stack, from sensing, through detection and recognition, up to tracking of pedestrians. Technologies at these levels are found to be mature and available as foundations for use in high-level systems, such as behaviour modelling, prediction and interaction control.

    @article{lincoln41705,
    volume = {22},
    number = {10},
    month = {October},
    author = {Fanta Camara and Nicola Bellotto and Serhan Cosar and Dimitris Nathanael and Mathias Althoff and Jingyuan Wu and Johannes Ruenz and Andre Dietrich and Charles Fox},
    title = {Pedestrian Models for Autonomous Driving Part I: Low-Level Models, from Sensing to Tracking},
    publisher = {IEEE},
    year = {2021},
    journal = {IEEE Transactions on Intelligent Transport Systems},
    doi = {10.1109/TITS.2020.3006768},
    pages = {6131--6151},
    url = {https://eprints.lincoln.ac.uk/id/eprint/41705/},
    abstract = {Autonomous vehicles (AVs) must share space with pedestrians, both in carriageway cases such as cars at pedestrian crossings and off-carriageway cases such as delivery vehicles navigating through crowds on pedestrianized high-streets. Unlike static obstacles, pedestrians are active agents with complex, interactive motions. Planning AV actions in the presence of pedestrians thus requires modelling of their probable future behaviour as well as detecting and tracking them. This narrative review article is Part I of a pair, together surveying the current technology stack involved in this process, organising recent research into a hierarchical taxonomy ranging from low-level image detection to high-level psychology models, from the perspective of an AV designer. This self-contained Part I covers the lower levels of this stack, from sensing, through detection and recognition, up to tracking of pedestrians. Technologies at these levels are found to be mature and available as foundations for use in high-level systems, such as behaviour modelling, prediction and interaction control.}
    }

  • H. Isakhani, S. Yue, C. Xiong, and W. Chen, “Aerodynamic analysis and optimization of gliding locust wing using nash genetic algorithm,” AIAA journal, vol. 59, iss. 10, p. 4002–4013, 2021. doi:10.2514/1.J060298
    [BibTeX] [Abstract] [Download PDF]

    Natural fliers glide and minimize wing articulation to conserve energy for endured and long range flights. Elucidating the underlying physiology of such capability could potentially address numerous challenging problems in flight engineering. This study investigates the aerodynamic characteristics of an insect species called desert locust (Schistocerca gregaria) with extraordinary gliding skills at low Reynolds number. Here, locust tandem wings are subjected to a computational fluid dynamics (CFD) simulation using 2D and 3D Navier-Stokes equations revealing fore-hindwing interactions, and the influence of their corrugations on the aerodynamic performance. Furthermore, the obtained CFD results are mathematically parameterized using PARSEC method and optimized based on a novel fusion of Genetic Algorithms and Nash game theory to achieve Nash equilibrium being the optimized wings. It was concluded that the lift-drag (gliding) ratio of the optimized profiles were improved by at least 77% and 150% compared to the original wing and the published literature, respectively. Ultimately, the profiles are integrated and analyzed using 3D CFD simulations that demonstrated a 14% performance improvement validating the proposed wing models for further fabrication and rapid prototyping presented in the future study.

    @article{lincoln47016,
    volume = {59},
    number = {10},
    month = {October},
    author = {Hamid Isakhani and Shigang Yue and Caihua Xiong and Wenbin Chen},
    title = {Aerodynamic Analysis and Optimization of Gliding Locust Wing Using Nash Genetic Algorithm},
    publisher = {Aerospace Research Central},
    year = {2021},
    journal = {AIAA Journal},
    doi = {10.2514/1.J060298},
    pages = {4002--4013},
    url = {https://eprints.lincoln.ac.uk/id/eprint/47016/},
    abstract = {Natural fliers glide and minimize wing articulation to conserve energy for endured and long range flights. Elucidating the underlying physiology of such capability could potentially address numerous challenging problems in flight engineering. This study investigates the aerodynamic characteristics of an insect species called desert locust (Schistocerca gregaria) with extraordinary gliding skills at low Reynolds number. Here, locust tandem wings are subjected to a computational fluid dynamics (CFD) simulation using 2D and 3D Navier-Stokes equations revealing fore-hindwing interactions, and the influence of their corrugations on the aerodynamic performance. Furthermore, the obtained CFD results are mathematically parameterized using PARSEC method and optimized based on a novel fusion of Genetic Algorithms and Nash game theory to achieve Nash equilibrium being the optimized wings.
    It was concluded that the lift-drag (gliding) ratio of the optimized profiles were improved by at least 77\% and 150\% compared to the original wing and the published literature, respectively.
    Ultimately, the profiles are integrated and analyzed using 3D CFD simulations that demonstrated a 14\% performance improvement validating the proposed wing models for further fabrication and rapid prototyping presented in the future study.}
    }

  • I. Gould, J. D. Waegemaeker, D. Tzemi, I. Wright, S. Pearson, E. Ruto, L. Karrasch, L. S. Christensen, H. Aronsson, S. Eich-Greatorex, G. Bosworth, and P. Vellinga, “Salinization threats to agriculture across the north sea region,” in Future of sustainable agriculture in saline environments, Taylor and francis, 2021, p. 71–92. doi:10.1201/9781003112327-5
    [BibTeX] [Abstract] [Download PDF]

    Salinization represents a global threat to agricultural productivity and human livelihoods. Historically, much saline research has focussed on arid or semi-arid systems. The North Sea region of Europe has seen very little attention in salinity literature, however, under future climate predictions, this is likely to change. In this review, we outline the mechanisms of salinization across the North Sea region. These include the intrusion of saline groundwater, coastal flooding, irrigation and airborne salinization. The extent of each degradation process is explored for the United Kingdom, Belgium, the Netherlands, Germany, Denmark, Sweden and Norway. The potential threat of salinization across the North Sea varies in a complex and diverse manner. However, we find an overall lack of data, both of water monitoring and soil sampling, on salinity in the region. For agricultural systems in the region to adapt against future salinization risk, more extensive mapping and monitoring of salinization need to be conducted, along with the development of appropriate land management practices.

    @incollection{lincoln45934,
    booktitle = {Future of Sustainable Agriculture in Saline Environments},
    title = {Salinization Threats to Agriculture across the North Sea Region},
    author = {Iain Gould and Jeroen De Waegemaeker and Domna Tzemi and Isobel Wright and Simon Pearson and Eric Ruto and Leena Karrasch and Laurids Siig Christensen and Henrik Aronsson and Susanne Eich-Greatorex and Gary Bosworth and Pier Vellinga},
    publisher = {Taylor and Francis},
    year = {2021},
    pages = {71--92},
    doi = {10.1201/9781003112327-5},
    url = {https://eprints.lincoln.ac.uk/id/eprint/45934/},
    abstract = {Salinization represents a global threat to agricultural productivity and human livelihoods. Historically, much saline research has focussed on arid or semi-arid systems. The North Sea region of Europe has seen very little attention in salinity literature, however, under future climate predictions, this is likely to change. In this review, we outline the mechanisms of salinization across the North Sea region. These include the intrusion of saline groundwater, coastal flooding, irrigation and airborne salinization. The extent of each degradation process is explored for the United Kingdom, Belgium, the Netherlands, Germany, Denmark, Sweden and Norway. The potential threat of salinization across the North Sea varies in a complex and diverse manner. However, we find an overall lack of data, both of water monitoring and soil sampling, on salinity in the region. For agricultural systems in the region to adapt against future salinization risk, more extensive mapping and monitoring of salinization need to be conducted, along with the development of appropriate land management practices.}
    }

  • A. Bikakis, A. Cohen, W. Dvorak, G. Flouris, and S. Parsons, “Joint attacks and accrual in argumentation frameworks,” in Handbook of formal argumentation, volume 2, D. Gabbay, M. Giacomin, G. R. Simari, and M. Thimm, Eds., College publications, 2021.
    [BibTeX] [Abstract] [Download PDF]

    While modelling arguments, it is often useful to represent “joint attacks”, i.e., cases where multiple arguments jointly attack another (note that this is different from the case where multiple arguments attack another in isolation). Based on this remark, the notion of joint attacks has been proposed as a useful extension of classical Abstract Argumentation Frameworks, and has been shown to constitute a genuine extension in terms of expressive power. In this chapter, we review various works considering the notion of joint attacks from various perspectives, including abstract and structured frameworks. Moreover, we present results detailing the relation among frameworks with joint attacks and classical argumentation frameworks, computational aspects, and applications of joint attacks. Last but not least, we propose a roadmap for future research on the subject, identifying gaps in current research and important research directions.

    @incollection{lincoln48565,
    booktitle = {Handbook of Formal Argumentation, Volume 2},
    editor = {Dov Gabbay and Massimiliano Giacomin and Guillermo R. Simari and Matthias Thimm},
    month = {August},
    title = {Joint Attacks and Accrual in Argumentation Frameworks},
    author = {Antonis Bikakis and Andrea Cohen and Wolfgang Dvorak and Giorgos Flouris and Simon Parsons},
    publisher = {College Publications},
    year = {2021},
    url = {https://eprints.lincoln.ac.uk/id/eprint/48565/},
    abstract = {While modelling arguments, it is often useful to represent ``joint attacks'', i.e., cases where multiple arguments jointly attack another (note that this is different from the case where multiple arguments attack another in isolation). Based on this remark, the notion of joint attacks has been proposed as a useful extension of classical Abstract Argumentation Frameworks, and has been shown to constitute a genuine extension in terms of expressive power.
    In this chapter, we review various works considering the notion of joint attacks from various perspectives, including abstract and structured frameworks. Moreover, we present results detailing the relation among frameworks with joint attacks and classical argumentation frameworks, computational aspects, and applications of joint attacks.
    Last but not least, we propose a roadmap for future research on the subject, identifying gaps in current research and important research directions.}
    }

  • J. Gao, J. C. Westergaard, and E. Alexandersson, “Computer vision and less complex image analyses to monitor potato traits in fields,” in Solanum tuberosum, D. Dobnik, K. Gruden, Ž. Ramšak, and A. Coll, Eds., New York: Springer, 2021, p. 273–299. doi:10.1007/978-1-0716-1609-3_13
    [BibTeX] [Abstract] [Download PDF]

    Field phenotyping of crops has recently gained considerable attention leading to the development of new protocols for recording plant traits of interest. Phenotyping in field conditions can be performed by various cameras, sensors and imaging platforms. In this chapter, practical aspects as well as advantages and disadvantages of above-ground phenotyping platforms are highlighted with a focus on drone-based imaging and relevant image analysis for field conditions. It includes useful planning tips for experimental design as well as protocols, sources, and tools for image acquisition, pre-processing, feature extraction and machine learning highlighting the possibilities with computer vision. Several open and free resources are given to speed up data analysis for biologists. This chapter targets professionals and researchers with limited computational background performing or wishing to perform phenotyping of field crops, especially with a drone-based platform. The advice and methods described focus on potato but can mostly be used for field phenotyping of any crops.

    @incollection{lincoln46316,
    number = {2354},
    month = {August},
    author = {Junfeng Gao and Jesper Cairo Westergaard and Erik Alexandersson},
    series = {Methods in Molecular Biology},
    booktitle = {Solanum tuberosum},
    editor = {David Dobnik and Kristina Gruden and {\v Z}iva Ram{\v s}ak and Anna Coll},
    title = {Computer Vision and Less Complex Image Analyses to Monitor Potato Traits in Fields},
    address = {New York},
    publisher = {Springer},
    year = {2021},
    doi = {10.1007/978-1-0716-1609-3\_13},
    pages = {273--299},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46316/},
    abstract = {Field phenotyping of crops has recently gained considerable attention leading to the development of new protocols for recording plant traits of interest. Phenotyping in field conditions can be performed by various cameras, sensors and imaging platforms. In this chapter, practical aspects as well as advantages and disadvantages of above-ground phenotyping platforms are highlighted with a focus on drone-based imaging and relevant image analysis for field conditions. It includes useful planning tips for experimental design as well as protocols, sources, and tools for image acquisition, pre-processing, feature extraction and machine learning highlighting the possibilities with computer vision. Several open and free resources are given to speed up data analysis for biologists.
    This chapter targets professionals and researchers with limited computational background performing or wishing to perform phenotyping of field crops, especially with a drone-based platform. The advice and methods described focus on potato but can mostly be used for field phenotyping of any crops.}
    }

  • E. Black, N. Maudet, and S. Parsons, “Argumentation-based dialogue,” in Handbook of formal argumentation, volume 2, D. Gabbay, M. Giacomin, G. R. Simari, and M. Thimm, Eds., College publications, 2021.
    [BibTeX] [Abstract] [Download PDF]

    Dialogue is fundamental to argumentation, providing a dialectical basis for establishing which arguments are acceptable. Argumentation can also be used as the basis for dialogue. In such “argumentation-based” dialogues, participants take part in an exchange of arguments, and the mechanisms of argumentation are used to establish what participants take to be acceptable at the end of the exchange. This chapter considers such dialogues, discussing the elements that are required in order to carry out argumentation-based dialogues, giving examples, and discussing open issues.

    @incollection{lincoln48566,
    booktitle = {Handbook of Formal Argumentation, Volume 2},
    editor = {Dov Gabbay and Massimiliano Giacomin and Guillermo R. Simari and Matthias Thimm},
    month = {August},
    title = {Argumentation-based Dialogue},
    author = {Elizabeth Black and Nicolas Maudet and Simon Parsons},
    publisher = {College Publications},
    year = {2021},
    url = {https://eprints.lincoln.ac.uk/id/eprint/48566/},
    abstract = {Dialogue is fundamental to argumentation, providing a dialectical basis for establishing which arguments are acceptable.
    Argumentation can also be used as the basis for dialogue. In such ``argumentation-based'' dialogues, participants take part in an exchange of arguments, and the mechanisms of argumentation are used to establish what participants take to be acceptable at the end of the exchange. This chapter considers such dialogues, discussing the elements that are required in order to carry out argumentation-based dialogues, giving examples, and discussing open issues.}
    }

  • D. Dai, J. Gao, S. Parsons, and E. Sklar, “Small datasets for fruit detection with transfer learning,” in 4th uk-ras conference, 2021, p. 5–6. doi:10.31256/Nf6Uh8Q
    [BibTeX] [Abstract] [Download PDF]

    A common approach to the problem of fruit detection in images is to design a deep learning network and train a model to locate objects, using bounding boxes to identify regions containing fruit. However, this requires sufficient data and presents challenges for small datasets. Transfer learning, which acquires knowledge from a source domain and brings that to a new target domain, can produce improved performance in the target domain. The work discussed in this paper shows the application of transfer learning for fruit detection with small datasets and presents analysis between the number of training images in source and target domains.

    @inproceedings{lincoln46542,
    month = {July},
    author = {Dan Dai and Junfeng Gao and Simon Parsons and Elizabeth Sklar},
    booktitle = {4th UK-RAS Conference},
    title = {Small datasets for fruit detection with transfer learning},
    publisher = {UK-RAS},
    doi = {10.31256/Nf6Uh8Q},
    pages = {5--6},
    year = {2021},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46542/},
    abstract = {A common approach to the problem of fruit detection in images is to design a deep learning network and train a model to locate objects, using bounding boxes to identify regions containing fruit. However, this requires sufficient data and presents challenges for small datasets. Transfer learning, which acquires knowledge from a source domain and brings that to a new target domain, can produce improved performance in the target domain. The work discussed in this paper shows the application of transfer learning for fruit detection with small datasets and presents analysis between the number of training images in source and target domains.}
    }

  • L. Korir, A. Drake, M. Collison, C. C. Villa, E. Sklar, and S. Pearson, “Current and emergent economic impacts of covid-19 and brexit on uk fresh produce and horticultural businesses,” in The 94th annual conference of the agricultural economics society (aes), 2021. doi:10.22004/ag.econ.312068
    [BibTeX] [Abstract] [Download PDF]

    This paper describes a study designed to investigate the current and emergent impacts of Covid-19 and Brexit on UK horticultural businesses. Various characteristics of UK horticultural production, notably labour reliance and import dependence, make it an important sector for policymakers concerned to understand the effects of these disruptive events as we move from 2020 into 2021. The study design prioritised timeliness, using a rapid survey to gather information from a relatively small (n = 19) but indicative group of producers. The main novelty of the results is to suggest that a very substantial majority of producers either plan to scale back production in 2021 (47%) or have been unable to make plans for 2021 because of uncertainty (37%). The results also add to broader evidence that the sector has experienced profound labour supply challenges, with implications for labour cost and quality. The study discusses the implications of these insights from producers in terms of productivity and automation, as well as in terms of broader economic implications. Although automation is generally recognised as the long-term future for the industry (89%), it appeared in the study as the second most referred short-term option (32%) only after changes to labour schemes and policies (58%). Currently, automation plays a limited role in contributing to the UK’s horticultural workforce shortage due to economic and socio-political uncertainties. The conclusion highlights policy recommendations and future investigative intentions, as well as suggesting methodological and other discussion points for the research community.

    @inproceedings{lincoln46582,
    booktitle = {The 94th Annual Conference of the Agricultural Economics Society (AES)},
    month = {January},
    title = {Current and Emergent Economic Impacts of Covid-19 and Brexit on UK Fresh Produce and Horticultural Businesses},
    author = {Lilian Korir and Archie Drake and Martin Collison and Carolina Camacho Villa and Elizabeth Sklar and Simon Pearson},
    year = {2021},
    doi = {10.22004/ag.econ.312068},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46582/},
    abstract = {This paper describes a study designed to investigate the current and emergent impacts of Covid-19 and Brexit on UK horticultural businesses. Various characteristics of UK horticultural production, notably labour reliance and import dependence, make it an important sector for policymakers concerned to understand the effects of these disruptive events as we move from 2020 into 2021. The study design prioritised timeliness, using a rapid survey to gather information from a relatively small (n = 19) but indicative group of producers. The main novelty of the results is to suggest that a very substantial majority of producers either plan to scale back production in 2021 (47\%) or have been unable to make plans for 2021 because of uncertainty (37\%). The results also add to broader evidence that the sector has experienced profound labour supply challenges, with implications for labour cost and quality. The study discusses the implications of these insights from producers in terms of productivity and automation, as well as in terms of broader economic implications. Although automation is generally recognised as the long-term future for the industry (89\%), it appeared in the study as the second most referred short-term option (32\%) only after changes to labour schemes and policies (58\%). Currently, automation plays a limited role in contributing to the UK's horticultural workforce shortage due to economic and socio-political uncertainties. The conclusion highlights policy recommendations and future investigative intentions, as well as suggesting methodological and other discussion points for the research community.}
    }

  • H. Harman and E. Sklar, “A practical application of market-based mechanisms for allocating harvesting tasks,” in 19th international conference on practical applications of agents and multi-agent systems, 2021. doi:10.1007/978-3-030-85739-4_10
    [BibTeX] [Abstract] [Download PDF]

    Market-based task allocation mechanisms are designed to distribute a set of tasks fairly amongst a set of agents. Such mechanisms have been shown to be highly effective in simulation and when applied to multi-robot teams. Application of such mechanisms in real-world settings can present a range of practical challenges, such as knowing what is the best point in a complex process to allocate tasks and what information to consider in determining the allocation. The work presented here explores the application of market-based task allocation mechanisms to the problem of managing a heterogeneous human workforce to undertake activities associated with harvesting soft fruit. Soft fruit farms aim to maximise yield (the volume of fruit picked) while minimising labour time (and thus the cost of picking). Our work evaluates experimentally several different strategies for practical application of market-based mechanisms for allocating tasks to workers on soft fruit farms, identifying methods that appear best when simulated using a multi-agent model of farm activity.

    @inproceedings{lincoln46475,
    month = {September},
    author = {Helen Harman and Elizabeth Sklar},
    booktitle = {19th International Conference on Practical Applications of Agents and Multi-Agent Systems},
    title = {A Practical Application of Market-based Mechanisms for Allocating Harvesting Tasks},
    publisher = {Springer},
    journal = {Advances in Practical Applications of Agents, Multi-Agent Systems and Social Good: The PAAMS Collection},
    doi = {10.1007/978-3-030-85739-4\_10},
    year = {2021},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46475/},
    abstract = {Market-based task allocation mechanisms are designed to distribute a set of tasks fairly amongst a set of agents. Such mechanisms have been shown to be highly effective in simulation and when applied to multi-robot teams. Application of such mechanisms in real-world settings can present a range of practical challenges, such as knowing what is the best point in a complex process to allocate tasks and what information to consider in determining the allocation. The work presented here explores the application of market-based task allocation mechanisms to the problem of managing a heterogeneous human workforce to undertake activities associated with harvesting soft fruit. Soft fruit farms aim to maximise yield (the volume of fruit picked) while minimising labour time (and thus the cost of picking). Our work evaluates experimentally several different strategies for practical application of market-based mechanisms for allocating tasks to workers on soft fruit farms, identifying methods that appear best when simulated using a multi-agent model of farm activity.}
    }

  • J. L. Louedec and G. Cielniak, “Gaussian map predictions for 3d surface feature localisation and counting,” in BMVC, 2021.
    [BibTeX] [Abstract] [Download PDF]

    In this paper, we propose to employ a Gaussian map representation to estimate precise location and count of 3D surface features, addressing the limitations of state-of-the-art methods based on density estimation which struggle in presence of local disturbances. Gaussian maps indicate probable object location and can be generated directly from keypoint annotations avoiding laborious and costly per-pixel annotations. We apply this method to the 3D spheroidal class of objects which can be projected into 2D shape representation enabling efficient processing by a neural network GNet, an improved UNet architecture, which generates the likely locations of surface features and their precise count. We demonstrate a practical use of this technique for counting strawberry achenes which is used as a fruit quality measure in phenotyping applications. The results of training the proposed system on several hundreds of 3D scans of strawberries from a publicly available dataset demonstrate the accuracy and precision of the system which outperforms the state-of-the-art density-based methods for this application.

    @inproceedings{lincoln48667,
    booktitle = {BMVC},
    month = {November},
    title = {Gaussian map predictions for 3D surface feature localisation and counting},
    author = {Justin Le Louedec and Grzegorz Cielniak},
    publisher = {BMVA},
    year = {2021},
    url = {https://eprints.lincoln.ac.uk/id/eprint/48667/},
    abstract = {In this paper, we propose to employ a Gaussian map representation to estimate precise location and count of 3D surface features, addressing the limitations of state-of-the-art methods based on density estimation which struggle in presence of local disturbances. Gaussian maps indicate probable object location and can be generated directly from keypoint annotations avoiding laborious and costly per-pixel annotations. We apply this method to the 3D spheroidal class of objects which can be projected into 2D shape representation enabling efficient processing by a neural network GNet, an improved UNet architecture, which generates the likely locations of surface features and their precise count. We demonstrate a practical use of this technique for counting strawberry achenes which is used as a fruit quality measure in phenotyping applications. The results of training the proposed system on several hundreds of 3D scans of strawberries from a publicly available dataset demonstrate the accuracy and precision of the system which outperforms the state-of-the-art density-based methods for this application.}
    }

  • U. A. Zahidi and G. Cielniak, “Active learning for crop-weed discrimination by image classification from convolutional neural network's feature pyramid levels,” in 13th international conference on computer vision systems, ICVS 2021, 2021. doi:10.1007/978-3-030-87156-7_20
    [BibTeX] [Abstract] [Download PDF]

    The amount of effort required for high-quality data acquisition and labelling for adequate supervised learning drives the need for building an efficient and effective image sampling strategy. We propose a novel Batch Mode Active Learning that blends Region Convolutional Neural Network's (RCNN) Feature Pyramid Network (FPN) levels together and employs t-distributed Stochastic Neighbour Embedding (t-SNE) classification for selecting incremental batch based on feature similarity. Later, K-means clustering is performed on t-SNE instances for the selected sample size of images. Results show that t-SNE classification on merged FPN feature maps outperforms the approach based on RGB images directly, random sampling and maximum entropy-based image sampling schemes. For comparison, we employ a publicly available data set of images of Sugar beet for a crop-weed discrimination task together with our newly acquired annotated images of Romaine and Apollo lettuce crops at different growth stages. Batch sampling on all datasets by the proposed method shows that only 60\% of images are required to produce precision/recall statistics similar to the complete dataset. Two lettuce datasets used in our experiments are publicly available (Lettuce datasets: https://bit.ly/3g7Owc5) to facilitate further research opportunities.

    @inproceedings{lincoln46648,
    month = {September},
    author = {Usman A. Zahidi and Grzegorz Cielniak},
    booktitle = {13th International Conference on Computer Vision Systems, ICVS 2021},
    title = {Active Learning for Crop-Weed Discrimination by Image Classification from Convolutional Neural Network's Feature Pyramid Levels},
    publisher = {Springer Verlag},
    doi = {10.1007/978-3-030-87156-7\_20},
    year = {2021},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46648/},
    abstract = {The amount of effort required for high-quality data acquisition and labelling for adequate supervised learning drives the need for building an efficient and effective image sampling strategy. We propose a novel Batch Mode Active Learning that blends Region Convolutional Neural Network's (RCNN) Feature Pyramid Network (FPN) levels together and employs t-distributed Stochastic Neighbour Embedding (t-SNE) classification for selecting incremental batch based on feature similarity. Later, K-means clustering is performed on t-SNE instances for the selected sample size of images. Results show that t-SNE classification on merged FPN feature maps outperforms the approach based on RGB images directly, random sampling and maximum entropy-based image sampling schemes. For comparison, we employ a publicly available data set of images of Sugar beet for a crop-weed discrimination task together with our newly acquired annotated images of Romaine and Apollo lettuce crops at different growth stages. Batch sampling on all datasets by the proposed method shows that only 60\% of images are required to produce precision/recall statistics similar to the complete dataset. Two lettuce datasets used in our experiments are publicly available (Lettuce datasets: https://bit.ly/3g7Owc5) to facilitate further research opportunities.}
    }

  • S. Mghames, M. Hanheide, and A. G. Esfahani, “Interactive movement primitives: planning to push occluding pieces for fruit picking,” in IEEE/RSJ international conference on intelligent robots and systems (IROS), 2021. doi:10.1109/IROS45743.2020.9341728
    [BibTeX] [Abstract] [Download PDF]

    Robotic technology is increasingly considered the major means for fruit picking. However, picking fruits in a dense cluster imposes a challenging research question in terms of motion/path planning as conventional planning approaches may not find collision-free movements for the robot to reach-and-pick a ripe fruit within a dense cluster. In such cases, the robot needs to safely push unripe fruits to reach a ripe one. Nonetheless, existing approaches to planning pushing movements in cluttered environments either are computationally expensive or only deal with 2-D cases and are not suitable for fruit picking, where it needs to compute 3-D pushing movements in a short time. In this work, we present a path planning algorithm for pushing occluding fruits to reach-and-pick a ripe one. Our proposed approach, called Interactive Probabilistic Movement Primitives (I-ProMP), is not computationally expensive (its computation time is in the order of 100 milliseconds) and is readily used for 3-D problems. We demonstrate the efficiency of our approach with pushing unripe strawberries in a simulated polytunnel. Our experimental results confirm I-ProMP successfully pushes table top grown strawberries and reaches a ripe one.

    @inproceedings{lincoln42217,
    booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
    month = {February},
    title = {Interactive Movement Primitives: Planning to Push Occluding Pieces for Fruit Picking},
    author = {Sariah Mghames and Marc Hanheide and Amir Ghalamzan Esfahani},
    year = {2021},
    doi = {10.1109/IROS45743.2020.9341728},
    note = {{\copyright} 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.},
    url = {https://eprints.lincoln.ac.uk/id/eprint/42217/},
    abstract = {Robotic technology is increasingly considered the major means for fruit picking. However, picking fruits in a dense cluster imposes a challenging research question in terms of motion/path planning as conventional planning approaches may not find collision-free movements for the robot to reach-and-pick a ripe fruit within a dense cluster. In such cases, the robot needs to safely push unripe fruits to reach a ripe one. Nonetheless, existing approaches to planning pushing movements in cluttered environments either are computationally expensive or only deal with 2-D cases and are not suitable for fruit picking, where it needs to compute 3-D pushing movements in a short time. In this work, we present a path planning algorithm for pushing occluding fruits to reach-and-pick a ripe one. Our proposed approach, called Interactive Probabilistic Movement Primitives (I-ProMP), is not computationally expensive (its computation time is in the order of 100 milliseconds) and is readily used for 3-D problems. We demonstrate the efficiency of our approach with pushing unripe strawberries in a simulated polytunnel. Our experimental results confirm I-ProMP successfully pushes table top grown strawberries and reaches a ripe one.}
    }

  • M. Cédérick, I. Ferrané, and H. Cuayahuitl, “Reward-based environment states for robot manipulation policy learning,” in NeurIPS 2021 workshop on deployable decision making in embodied systems (DDM), 2021.
    [BibTeX] [Abstract] [Download PDF]

    Training robot manipulation policies is a challenging and open problem in robotics and artificial intelligence. In this paper we propose a novel and compact state representation based on the rewards predicted from an image-based task success classifier. Our experiments – using the Pepper robot in simulation with two deep reinforcement learning algorithms on a grab-and-lift task – reveal that our proposed state representation can achieve up to 97\% task success using our best policies.

    @inproceedings{lincoln47522,
    booktitle = {NeurIPS 2021 Workshop on Deployable Decision Making in Embodied Systems (DDM)},
    month = {December},
    title = {Reward-Based Environment States for Robot Manipulation Policy Learning},
    author = {Mouliets C{\'e}d{\'e}rick and Isabelle Ferran{\'e} and Heriberto Cuayahuitl},
    year = {2021},
    url = {https://eprints.lincoln.ac.uk/id/eprint/47522/},
    abstract = {Training robot manipulation policies is a challenging and open problem in robotics and artificial intelligence. In this paper we propose a novel and compact state representation based on the rewards predicted from an image-based task success classifier. Our experiments{--}using the Pepper robot in simulation with two deep reinforcement learning algorithms on a grab-and-lift task{--}reveal that our proposed state representation can achieve up to 97\% task success using our best policies.}
    }

  • A. L. Zorrilla, I. M. Torres, and H. Cuayahuitl, “Audio embeddings help to learn better dialogue policies,” in IEEE automatic speech recognition and understanding, 2021.
    [BibTeX] [Abstract] [Download PDF]

    Neural transformer architectures have gained a lot of interest for text-based dialogue management in the last few years. They have shown high learning capabilities for open domain dialogue with huge amounts of data and also for domain adaptation in task-oriented setups. But the potential benefits of exploiting the users’ audio signal have rarely been explored in such frameworks. In this work, we combine text dialogue history representations generated by a GPT-2 model with audio embeddings obtained by the recently released Wav2Vec2 transformer model. We jointly fine-tune these models to learn dialogue policies via supervised learning and two policy gradient-based reinforcement learning algorithms. Our experimental results, using the DSTC2 dataset and a simulated user model capable of sampling audio turns, reveal that audio embeddings lead to overall higher task success (than without using audio embeddings) with statistically significant results across evaluation metrics and training algorithms.

    @inproceedings{lincoln46800,
    booktitle = {IEEE Automatic Speech Recognition and Understanding},
    month = {December},
    title = {Audio Embeddings Help to Learn Better Dialogue Policies},
    author = {Asier Lopez Zorrilla and M. Ines Torres and Heriberto Cuayahuitl},
    publisher = {IEEE},
    year = {2021},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46800/},
    abstract = {Neural transformer architectures have gained a lot of interest for text-based dialogue management in the last few years. They have shown high learning capabilities for open domain dialogue with huge amounts of data and also for domain adaptation in task-oriented setups. But the potential benefits of exploiting the users' audio signal have rarely been explored in such frameworks. In this work, we combine text dialogue history representations generated by a GPT-2 model with audio embeddings obtained by the recently released Wav2Vec2 transformer model. We jointly fine-tune these models to learn dialogue policies via supervised learning and two policy gradient-based reinforcement learning algorithms. Our experimental results, using the DSTC2 dataset and a simulated user model capable of sampling audio turns, reveal that audio embeddings lead to overall higher task success (than without using audio embeddings) with statistically significant results across evaluation metrics and training algorithms.}
    }

  • E. Donato, G. Picardi, and M. Calisti, “Statics optimization of a hexapedal robot modelled as a stewart platform,” in Annual conference towards autonomous robotic systems, 2021. doi:10.1007/978-3-030-89177-0_39
    [BibTeX] [Abstract] [Download PDF]

    SILVER2 is an underwater legged robot designed with the aim of collecting litter on the seabed and sampling the sediment to assess the presence of micro-plastics. Besides the original application, SILVER2 can also be a valuable tool for all underwater operations which require interaction with objects directly on the seabed. The advancement presented in this paper is to model SILVER2 as a Gough-Stewart platform, and therefore to enhance its ability to interact with the environment. Since the robot is equipped with six segmented legs with three actuated joints, it is able to make arbitrary movements in the six degrees of freedom. The robot's performance has been analysed from both kinematics and statics points of view. The goal of this work is to provide a strategy to harness the redundancy of SILVER2 by finding the optimal posture to maximize forces/torques that it can resist along/around constrained directions. Simulation results have been reported to show the advantages of the proposed method.

    @inproceedings{lincoln52082,
    booktitle = {Annual Conference Towards Autonomous Robotic Systems},
    month = {October},
    title = {Statics Optimization of a Hexapedal Robot Modelled as a Stewart Platform},
    author = {Enrico Donato and Giacomo Picardi and Marcello Calisti},
    publisher = {Springer},
    year = {2021},
    doi = {10.1007/978-3-030-89177-0\_39},
    url = {https://eprints.lincoln.ac.uk/id/eprint/52082/},
    abstract = {SILVER2 is an underwater legged robot designed with the aim of collecting litter on the seabed and sampling the sediment to assess the presence of micro-plastics. Besides the original application, SILVER2 can also be a valuable tool for all underwater operations which require interaction with objects directly on the seabed. The advancement presented in this paper is to model SILVER2 as a Gough-Stewart platform, and therefore to enhance its ability to interact with the environment. Since the robot is equipped with six segmented legs with three actuated joints, it is able to make arbitrary movements in the six degrees of freedom. The robot's performance has been analysed from both kinematics and statics points of view. The goal of this work is to provide a strategy to harness the redundancy of SILVER2 by finding the optimal posture to maximize forces/torques that it can resist along/around constrained directions. Simulation results have been reported to show the advantages of the proposed method.}
    }

  • M. Hua, Q. Fu, W. Duan, and S. Yue, “Investigating refractoriness in collision perception neuronal model,” in 2021 international joint conference on neural networks (IJCNN), 2021. doi:10.1109/IJCNN52387.2021.9533965
    [BibTeX] [Abstract] [Download PDF]

    Currently, collision detection methods based on visual cues are still challenged by several factors including ultrafast approaching velocity and noisy signal. Taking inspiration from nature, though the computational models of lobula giant movement detectors (LGMDs) in locust's visual pathways have demonstrated positive impacts on addressing these problems, there remains potential for improvement. In this paper, we propose a novel method mimicking neuronal refractoriness, i.e. the refractory period (RP), and further investigate its functionality and efficacy in the classic LGMD neural network model for collision perception. Compared with previous works, the two phases constructing RP, namely the absolute refractory period (ARP) and relative refractory period (RRP) are computationally implemented through a 'link (L) layer' located between the photoreceptor and the excitation layers to realise the dynamic characteristic of RP in discrete time domain. The L layer, consisting of local time-varying thresholds, represents a sort of mechanism that allows photoreceptors to be activated individually and selectively by comparing the intensity of each photoreceptor to its corresponding local threshold established by its last output. More specifically, while the local threshold can merely be augmented by larger output, it shrinks exponentially over time. Our experimental outcomes show that, to some extent, the investigated mechanism not only enhances the LGMD model in terms of reliability and stability when faced with ultra-fast approaching objects, but also improves its performance against visual stimuli polluted by Gaussian or Salt-Pepper noise. This research demonstrates the modelling of refractoriness is effective in collision perception neuronal models, and promising to address the aforementioned collision detection challenges.

    @inproceedings{lincoln46692,
    booktitle = {2021 International Joint Conference on Neural Networks (IJCNN)},
    month = {September},
    title = {Investigating Refractoriness in Collision Perception Neuronal Model},
    author = {Mu Hua and Qinbing Fu and Wenting Duan and Shigang Yue},
    publisher = {IEEE},
    year = {2021},
    doi = {10.1109/IJCNN52387.2021.9533965},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46692/},
    abstract = {Currently, collision detection methods based on visual cues are still challenged by several factors including ultrafast approaching velocity and noisy signal. Taking inspiration from nature, though the computational models of lobula giant movement detectors (LGMDs) in locust's visual pathways have demonstrated positive impacts on addressing these problems, there remains potential for improvement. In this paper, we propose a novel method mimicking neuronal refractoriness, i.e. the refractory period (RP), and further investigate its functionality and efficacy in the classic LGMD neural network model for collision perception. Compared with previous works, the two phases constructing RP, namely the absolute refractory period (ARP) and relative refractory period (RRP) are computationally implemented through a 'link (L) layer' located between the photoreceptor and the excitation layers to realise the dynamic characteristic of RP in discrete time domain. The L layer, consisting of local time-varying thresholds, represents a sort of mechanism that allows photoreceptors to be activated individually and selectively by comparing the intensity of each photoreceptor to its corresponding local threshold established by its last output. More specifically, while the local threshold can merely be augmented by larger output, it shrinks exponentially over time. Our experimental outcomes show that, to some extent, the investigated mechanism not only enhances the LGMD model in terms of reliability and stability when faced with ultra-fast approaching objects, but also improves its performance against visual stimuli polluted by Gaussian or Salt-Pepper noise. This research demonstrates the modelling of refractoriness is effective in collision perception neuronal models, and promising to address the aforementioned collision detection challenges.}
    }

  • R. Kirk, M. Mangan, and G. Cielniak, “Non-destructive soft fruit mass and volume estimation for phenotyping in horticulture,” in 13th international conference on computer vision systems, icvs 2021, 2021. doi:10.1007/978-3-030-87156-7
    [BibTeX] [Abstract] [Download PDF]

    Manual assessment of soft fruits is both laborious and prone to human error. We present methods to compute width, height, cross-section length, volume and mass using computer vision cameras from a robotic platform. Estimation of phenotypic traits from a camera system on a mobile robot is a non-destructive/invasive approach to gathering quantitative fruit data which is critical for breeding programmes, in-field quality assessment, maturity estimation and yield forecasting. Our presented methods can process 324–1770 berries per second on consumer-grade hardware and achieve low error rates of 3.00 cm3 and 2.34 g for volume and mass estimates. Our methods require object masks from 2D images, a typical output of segmentation architectures such as Mask R-CNN, and depth data for computing scale.

    @inproceedings{lincoln55953,
    month = {September},
    author = {Raymond Kirk and Michael Mangan and Grzegorz Cielniak},
    booktitle = {13th International Conference on Computer Vision Systems, ICVS 2021},
    editor = {Marcus Vincze and Timothy Patten and Henrik Christensen and Lazaros Nalpantidis},
    title = {Non-destructive Soft Fruit Mass and Volume Estimation for Phenotyping in Horticulture},
    publisher = {Springer Cham},
    doi = {10.1007/978-3-030-87156-7},
    year = {2021},
    url = {https://eprints.lincoln.ac.uk/id/eprint/55953/},
    abstract = {Manual assessment of soft fruits is both laborious and prone to human error. We present methods to compute width, height, cross-section length, volume and mass using computer vision cameras from a robotic platform. Estimation of phenotypic traits from a camera system on a mobile robot is a non-destructive/invasive approach to gathering quantitative fruit data which is critical for breeding programmes, in-field quality assessment, maturity estimation and yield forecasting. Our presented methods can process 324{--}1770 berries per second on consumer-grade hardware and achieve low error rates of 3.00 cm3 and 2.34 g for volume and mass estimates. Our methods require object masks from 2D images, a typical output of segmentation architectures such as Mask R-CNN, and depth data for computing scale.}
    }

  • H. Harman and E. Sklar, “Auction-based task allocation mechanisms for managing fruit harvesting tasks,” in UKRAS21, 2021, p. 47–48. doi:10.31256/Dg2Zp9Q
    [BibTeX] [Abstract] [Download PDF]

    Multi-robot task allocation mechanisms are designed to distribute a set of activities fairly amongst a set of robots. Frequently, this can be framed as a multi-criteria optimisation problem, for example minimising cost while maximising rewards. In soft fruit farms, tasks, such as picking ripe fruit at harvest time, are assigned to human labourers. The work presented here explores the application of multi-robot task allocation mechanisms to the complex problem of managing a heterogeneous workforce to undertake activities associated with harvesting soft fruit.

    @inproceedings{lincoln45349,
    booktitle = {UKRAS21},
    title = {Auction-based Task Allocation Mechanisms for Managing Fruit Harvesting Tasks},
    author = {Helen Harman and Elizabeth Sklar},
    year = {2021},
    pages = {47--48},
    doi = {10.31256/Dg2Zp9Q},
    url = {https://eprints.lincoln.ac.uk/id/eprint/45349/},
    abstract = {Multi-robot task allocation mechanisms are designed to distribute a set of activities fairly amongst a set of robots. Frequently, this can be framed as a multi-criteria optimisation problem, for example minimising cost while maximising rewards. In soft fruit farms, tasks, such as picking ripe fruit at harvest time, are assigned to human labourers. The work presented here explores the application of multi-robot task allocation mechanisms to the complex problem of managing a heterogeneous workforce to undertake activities associated with harvesting soft fruit.}
    }

  • I. Hroob, R. Polvara, S. M. Mellado, G. Cielniak, and M. Hanheide, “Benchmark of visual and 3D LiDAR SLAM systems in simulation environment for vineyards,” in Towards autonomous robotic systems conference (TAROS), 2021.
    [BibTeX] [Abstract] [Download PDF]

    In this work, we present a comparative analysis of the trajectories estimated from various Simultaneous Localization and Mapping (SLAM) systems in a simulation environment for vineyards. Vineyard environment is challenging for SLAM methods, due to visual appearance changes over time, uneven terrain, and repeated visual patterns. For this reason, we created a simulation environment specifically for vineyards to help studying SLAM systems in such a challenging environment. We evaluated the following SLAM systems: LIO-SAM, StaticMapping, ORB-SLAM2, and RTAB-MAP in four different scenarios. The mobile robot used in this study is equipped with 2D and 3D lidars, IMU, and RGB-D camera (Kinect v2). The results show good and encouraging performance of RTAB-MAP in such an environment.

    @inproceedings{lincoln45642,
    booktitle = {Towards Autonomous Robotic Systems Conference (TAROS)},
    title = {Benchmark of visual and 3D lidar SLAM systems in simulation environment for vineyards},
    author = {Ibrahim Hroob and Riccardo Polvara and Sergio Molina Mellado and Grzegorz Cielniak and Marc Hanheide},
    year = {2021},
    url = {https://eprints.lincoln.ac.uk/id/eprint/45642/},
    abstract = {In this work, we present a comparative analysis of the trajectories estimated from various Simultaneous Localization and Mapping (SLAM) systems in a simulation environment for vineyards. Vineyard environment is challenging for SLAM methods, due to visual appearance changes over time, uneven terrain, and repeated visual patterns. For this reason, we created a simulation environment specifically for vineyards to help studying SLAM systems in such a challenging environment. We evaluated the following SLAM systems: LIO-SAM, StaticMapping, ORB-SLAM2, and RTAB-MAP in four different scenarios. The mobile robot used in this study is equipped with 2D and 3D lidars, IMU, and RGB-D camera (Kinect v2). The results show good and encouraging performance of RTAB-MAP in such an environment.}
    }

  • R. Kirk, M. Mangan, and G. Cielniak, “Robust counting of soft fruit through occlusions with re-identification,” in 13th international conference on computer vision systems, ICVS 2021, 2021. doi:10.1007/978-3-030-87156-7_17
    [BibTeX] [Abstract] [Download PDF]

    Fruit counting and tracking is a crucial component of fruit harvesting and yield forecasting applications within horticulture. We present a novel multi-object, multi-class fruit tracking system to count fruit from image sequences. We first train a recurrent neural network (RNN) comprised of a feature extractor stem and two heads for re-identification and maturity classification. We apply the network to detected fruits in image sequences and utilise the output of both network heads to maintain track consistency and reduce intra-class false positives between maturity stages. The counting-by-tracking system is evaluated by comparing with a popular detect-to-track architecture and against manually labelled tracks (counts). Our proposed system achieves a mean average percentage error (MAPE) of 3\% (L1 loss = 7) improving on the baseline multi-object tracking approach which obtained an MAPE of 21\% (L1 loss = 41). Validating this approach for use in horticulture.

    @inproceedings{lincoln55954,
    booktitle = {13th International Conference on Computer Vision Systems, ICVS 2021},
    month = {September},
    title = {Robust Counting of Soft Fruit Through Occlusions with Re-identification},
    author = {Raymond Kirk and Michael Mangan and Grzegorz Cielniak},
    publisher = {Springer Cham},
    year = {2021},
    doi = {10.1007/978-3-030-87156-7\_17},
    url = {https://eprints.lincoln.ac.uk/id/eprint/55954/},
    abstract = {Fruit counting and tracking is a crucial component of fruit harvesting and yield forecasting applications within horticulture. We present a novel multi-object, multi-class fruit tracking system to count fruit from image sequences. We first train a recurrent neural network (RNN) comprised of a feature extractor stem and two heads for re-identification and maturity classification. We apply the network to detected fruits in image sequences and utilise the output of both network heads to maintain track consistency and reduce intra-class false positives between maturity stages. The counting-by-tracking system is evaluated by comparing with a popular detect-to-track architecture and against manually labelled tracks (counts). Our proposed system achieves a mean average percentage error (MAPE) of 3\% (L1 loss = 7) improving on the baseline multi-object tracking approach which obtained an MAPE of 21\% (L1 loss = 41). Validating this approach for use in horticulture.}
    }

  • W. King, L. Pooley, P. Johnson, and K. Elgeneidy, “Design and characterisation of a variable-stiffness soft actuator based on tendon twisting,” in TAROS 2021, 2021.
    [BibTeX] [Abstract] [Download PDF]

    The field of soft robotics aims to address the challenges faced by traditional rigid robots in less structured and dynamic environments that require more adaptive interactions. Taking inspiration from biological organisms, such as octopus tentacles and elephant trunks, soft robots commonly use elastic materials and novel actuation methods to mimic the continuous deformation of their mostly soft bodies. While current robotic manipulators, such as those used in the DaVinci surgical robot, have seen use in precise minimally invasive surgeries applications, the capability of soft robotics to provide a greater degree of flexibility and inherently safe interactions shows great promise that motivates further study. Nevertheless, introducing softness consequently opens new challenges in achieving accurate positional control and sufficient force generation often required for manipulation tasks. In this paper, the feasibility of a stiffening mechanism based on tendon-twisting is investigated, as an alternative stiffening mechanism for soft actuators that can be easily scaled as needed based on tendon size, material properties, and arrangements, while offering simple means of controlling a gradual increase in stiffening during operation.

    @inproceedings{lincoln45570,
    booktitle = {TAROS 2021},
    month = {September},
    title = {Design and Characterisation of a Variable-Stiffness Soft Actuator Based on Tendon Twisting},
    author = {William King and Luke Pooley and Philip Johnson and Khaled Elgeneidy},
    year = {2021},
    url = {https://eprints.lincoln.ac.uk/id/eprint/45570/},
    abstract = {The field of soft robotics aims to address the challenges faced by traditional rigid robots in less structured and dynamic environments that require more adaptive interactions. Taking inspiration from biological organisms, such as octopus tentacles and elephant trunks, soft robots commonly use elastic materials and novel actuation methods to mimic the continuous deformation of their mostly soft bodies. While current robotic manipulators, such as those used in the DaVinci surgical robot, have seen use in precise minimally invasive surgeries applications, the capability of soft robotics to provide a greater degree of flexibility and inherently safe interactions shows great promise that motivates further study. Nevertheless, introducing softness consequently opens new challenges in achieving accurate positional control and sufficient force generation often required for manipulation tasks. In this paper, the feasibility of a stiffening mechanism based on tendon-twisting is investigated, as an alternative stiffening mechanism for soft actuators that can be easily scaled as needed based on tendon size, material properties, and arrangements, while offering simple means of controlling a gradual increase in stiffening during operation.}
    }

  • C. Fox, “Musichastie: field-based hierarchical music representation,” in International conference on computer music, 2021.
    [BibTeX] [Abstract] [Download PDF]

    MusicHastie is a hierarchical music representation language designed for use in human and automated composition and for human and machine learning based music study and analysis. It represents and manipulates musical structure in a semantic form based on concepts from Schenkerian analysis, western European art music and popular music notations, electronica and some non-western forms such as modes and ragas. The representation is designed to model one form of musical perception by human musicians so can be used to aid human understanding and memorization of popular music pieces. An open source MusicHastie to MIDI compiler is released as part of this publication, now including capabilities for electronica MIDI control commands to model structures such as filter sweeps in addition to keys, chords, rhythms, patterns, and melodies.

    @inproceedings{lincoln45328,
    booktitle = {International Conference on Computer Music},
    month = {July},
    title = {MusicHastie: field-based hierarchical music representation},
    author = {Charles Fox},
    publisher = {ICMC},
    year = {2021},
    url = {https://eprints.lincoln.ac.uk/id/eprint/45328/},
    abstract = {MusicHastie is a hierarchical music representation language designed for use in human and automated composition and for human and machine learning based music study and analysis. It represents and manipulates musical structure in a semantic form based on concepts from Schenkerian analysis, western European art music and popular music notations, electronica and some non-western forms such as modes and ragas. The representation is designed to model one form of musical perception by human musicians so can be used to aid human understanding and memorization of popular music pieces. An open source MusicHastie to MIDI compiler is released as part of this publication, now including capabilities for electronica MIDI control commands to model structures such as filter sweeps in addition to keys, chords, rhythms, patterns, and melodies.}
    }

  • Z. Maamar and M. Al-Khafajiy, “Cloud-edge coupling to mitigate execution failures,” in Proceedings of the 36th annual ACM symposium on applied computing, 2021, p. 711–718. doi:10.1145/3412841.3442334
    [BibTeX] [Abstract] [Download PDF]

    This paper examines the doability of cloud-edge coupling to mitigate execution failures and hence, achieve business process continuity. These failures are the result of disruptions that impact the cycles of consuming cloud resources and/or edge resources. Cloud/Edge resources are subject to restrictions like limitedness and non-shareability that increase the complexity of resuming execution operations to the extent that some of these operations could be halted, which means failures. To mitigate failures, cloud and edge resources are synchronized using messages allowing proper consumption of these resources. A Microsoft Azure-based testbed simulating cloud-edge coupling is also presented in the paper.

    @inproceedings{lincoln47575,
    month = {March},
    author = {Zakaria Maamar and Mohammed Al-Khafajiy},
    booktitle = {Proceedings of the 36th Annual ACM Symposium on Applied Computing},
    title = {Cloud-edge coupling to mitigate execution failures},
    publisher = {Association for Computing Machinery},
    doi = {10.1145/3412841.3442334},
    pages = {711--718},
    year = {2021},
    url = {https://eprints.lincoln.ac.uk/id/eprint/47575/},
    abstract = {This paper examines the doability of cloud-edge coupling to mitigate execution failures and hence, achieve business process continuity. These failures are the result of disruptions that impact the cycles of consuming cloud resources and/or edge resources. Cloud/Edge resources are subject to restrictions like limitedness and non-shareability that increase the complexity of resuming execution operations to the extent that some of these operations could be halted, which means failures. To mitigate failures, cloud and edge resources are synchronized using messages allowing proper consumption of these resources. A Microsoft Azure-based testbed simulating cloud-edge coupling is also presented in the paper.}
    }

  • L. Guevara, M. Khalid, M. Hanheide, and S. Parsons, “Assessing the probability of human injury during UV-C treatment of crops by robots,” in 4th UK-RAS conference, 2021. doi:10.31256/Pj6Cz2L
    [BibTeX] [Abstract] [Download PDF]

    This paper describes a hazard analysis for an agricultural scenario where a crop is treated by a robot using UV-C light. Although human-robot interactions are not expected, it may be the case that unauthorized people approach the robot while it is operating. These potential human-robot interactions have been identified and modelled as Markov Decision Processes (MDP) and tested in the model checking tool PRISM.

    @inproceedings{lincoln46537,
    booktitle = {4th UK-RAS Conference},
    month = {July},
    title = {Assessing the probability of human injury during UV-C treatment of crops by robots},
    author = {Leonardo Guevara and Muhammad Khalid and Marc Hanheide and Simon Parsons},
    publisher = {UK-RAS},
    year = {2021},
    doi = {10.31256/Pj6Cz2L},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46537/},
    abstract = {This paper describes a hazard analysis for an agricultural scenario where a crop is treated by a robot using UV-C light. Although human-robot interactions are not expected, it may be the case that unauthorized people approach the robot while it is operating. These potential human-robot interactions have been identified and modelled as Markov Decision Processes (MDP) and tested in the model checking tool PRISM.}
    }

  • T. Choi and G. Cielniak, “Adaptive selection of informative path planning strategies via reinforcement learning,” in 2021 European conference on mobile robots (ECMR), 2021. doi:10.1109/ECMR50962.2021.9568796
    [BibTeX] [Abstract] [Download PDF]

    In our previous work, we designed a systematic policy to prioritize sampling locations to lead significant accuracy improvement in spatial interpolation by using the prediction uncertainty of Gaussian Process Regression (GPR) as “attraction force” to deployed robots in path planning. Although the integration with Traveling Salesman Problem (TSP) solvers was also shown to produce relatively short travel distance, we here hypothesise several factors that could decrease the overall prediction precision as well because sub-optimal locations may eventually be included in their paths. To address this issue, in this paper, we first explore “local planning” approaches adopting various spatial ranges within which next sampling locations are prioritized to investigate their effects on the prediction performance as well as incurred travel distance. Also, Reinforcement Learning (RL)-based high-level controllers are trained to adaptively produce blended plans from a particular set of local planners to inherit unique strengths from that selection depending on latest prediction states. Our experiments on use cases of temperature monitoring robots demonstrate that the dynamic mixtures of planners can not only generate sophisticated, informative plans that a single planner could not create alone but also ensure significantly reduced travel distances at no cost of prediction reliability without any assist of additional modules for shortest path calculation.

    @inproceedings{lincoln46371,
    booktitle = {2021 European Conference on Mobile Robots (ECMR)},
    month = {October},
    title = {Adaptive Selection of Informative Path Planning Strategies via Reinforcement Learning},
    author = {Taeyeong Choi and Grzegorz Cielniak},
    publisher = {IEEE},
    year = {2021},
    doi = {10.1109/ECMR50962.2021.9568796},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46371/},
    abstract = {In our previous work, we designed a systematic policy to prioritize sampling locations to lead significant accuracy improvement in spatial interpolation by using the prediction uncertainty of Gaussian Process Regression (GPR) as “attraction force” to deployed robots in path planning. Although the integration with Traveling Salesman Problem (TSP) solvers was also shown to produce relatively short travel distance, we here hypothesise several factors that could decrease the overall prediction precision as well because sub-optimal locations may eventually be included in their paths. To address this issue, in this paper, we first explore “local planning” approaches adopting various spatial ranges within which next sampling locations are prioritized to investigate their effects on the prediction performance as well as incurred travel distance. Also, Reinforcement Learning (RL)-based high-level controllers are trained to adaptively produce blended plans from a particular set of local planners to inherit unique strengths from that selection depending on latest prediction states. Our experiments on use cases of temperature monitoring robots demonstrate that the dynamic mixtures of planners can not only generate sophisticated, informative plans that a single planner could not create alone but also ensure significantly reduced travel distances at no cost of prediction reliability without any assist of additional modules for shortest path calculation.}
    }

  • M. Khalid, L. Guevara, M. Hanheide, and S. Parsons, “Assuring autonomy of robots in soft fruit production,” in 4th UK-RAS conference, 2021. doi:10.31256/Ml6Ik7G
    [BibTeX] [Abstract] [Download PDF]

    This paper describes our work to assure safe autonomy in soft fruit production. The first step was hazard analysis, where all the possible hazards in representative scenarios were identified. Following this analysis, a three-layer safety architecture was identified that will minimise the occurrence of the identified hazards. Most of the hazards are minimised by upper layers, while unavoidable hazards are handled using emergency stops. In parallel, we are using probabilistic model checking to check the probability of a hazard’s occurrence. The results from the model checking will be used to improve safety system architecture.

    @inproceedings{lincoln46541,
    booktitle = {4th UK-RAS Conference},
    month = {July},
    title = {Assuring autonomy of robots in soft fruit production},
    author = {Muhammad Khalid and Leonardo Guevara and Marc Hanheide and Simon Parsons},
    publisher = {UK-RAS},
    year = {2021},
    doi = {10.31256/Ml6Ik7G},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46541/},
    abstract = {This paper describes our work to assure safe autonomy in soft fruit production. The first step was hazard analysis, where all the possible hazards in representative scenarios were identified. Following this analysis, a three-layer safety architecture was identified that will minimise the occurrence of the identified hazards. Most of the hazards are minimised by upper layers, while unavoidable hazards are handled using emergency stops. In parallel, we are using probabilistic model checking to check the probability of a hazard's occurrence. The results from the model checking will be used to improve safety system architecture.}
    }

  • H. Rogers, B. Dawson, G. Clawson, and C. Fox, “Extending an open source hardware agri-robot with simulation and plant re-identification,” in Oxford autonomous intelligent machines and systems conference 2021, 2021.
    [BibTeX] [Abstract] [Download PDF]

    Previous work constructed an open source hardware (OSH) agri-robot platform for swarming agriculture research. We summarise recent developments from the community on this platform as a case study of how an OSH project can develop. The original platform has been extended by contributions of a simulation package and a vision-based plant-re-identification system used as a target for blockchain-based food assurance. Gaining new participants in OSH projects requires explicit instructions on how to contribute. The system hardware and software is open-sourced at https://github.com/Harry-Rogers/PiCar as part of this publication. We invite others to get involved and extend the platform.

    @inproceedings{lincoln46862,
    booktitle = {Oxford Autonomous Intelligent Machines and Systems Conference 2021},
    month = {October},
    title = {Extending an Open Source Hardware Agri-Robot with Simulation and Plant Re-identification},
    author = {Harry Rogers and Benjamin Dawson and Garry Clawson and Charles Fox},
    publisher = {Oxford AIMS Conference 2021},
    year = {2021},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46862/},
    abstract = {Previous work constructed an open source hardware (OSH) agri-robot platform for swarming agriculture research. We summarise recent developments from the community on this platform as a case study of how an OSH project can develop. The original platform has been extended by contributions of a simulation package and a vision-based plant-re-identification system used as a target for blockchain-based food assurance. Gaining new participants in OSH projects requires explicit instructions on how to contribute. The system hardware and software is open-sourced at https://github.com/Harry-Rogers/PiCar as part of this publication. We invite others to get involved and extend the platform.}
    }

  • A. Mohtasib, A. G. Esfahani, N. Bellotto, and H. Cuayahuitl, “Neural task success classifiers for robotic manipulation from few real demonstrations,” in International joint conference on neural networks (IJCNN), 2021.
    [BibTeX] [Abstract] [Download PDF]

    Robots learning a new manipulation task from a small amount of demonstrations are increasingly demanded in different workspaces. A classifier model assessing the quality of actions can predict the successful completion of a task, which can be used by intelligent agents for action-selection. This paper presents a novel classifier that learns to classify task completion only from a few demonstrations. We carry out a comprehensive comparison of different neural classifiers, e.g. fully connected-based, fully convolutional-based, sequence2sequence-based, and domain adaptation-based classification. We also present a new dataset including five robot manipulation tasks, which is publicly available. We compared the performances of our novel classifier and the existing models using our dataset and the MIME dataset. The results suggest domain adaptation and timing-based features improve success prediction. Our novel model, i.e. fully convolutional neural network with domain adaptation and timing features, achieves an average classification accuracy of 97.3\% and 95.5\% across tasks in both datasets whereas state-of-the-art classifiers without domain adaptation and timing-features only achieve 82.4\% and 90.3\%, respectively.

    @inproceedings{lincoln45559,
    booktitle = {International Joint Conference on Neural Networks (IJCNN)},
    month = {July},
    title = {Neural Task Success Classifiers for Robotic Manipulation from Few Real Demonstrations},
    author = {Abdalkarim Mohtasib and Amir Ghalamzan Esfahani and Nicola Bellotto and Heriberto Cuayahuitl},
    publisher = {IEEE},
    year = {2021},
    url = {https://eprints.lincoln.ac.uk/id/eprint/45559/},
    abstract = {Robots learning a new manipulation task from a small amount of demonstrations are increasingly demanded in different workspaces. A classifier model assessing the quality of actions can predict the successful completion of a task, which can be used by intelligent agents for action-selection. This paper presents a novel classifier that learns to classify task completion only from a few demonstrations. We carry out a comprehensive comparison of different neural classifiers, e.g. fully connected-based, fully convolutional-based, sequence2sequence-based, and domain adaptation-based classification. We also present a new dataset including five robot manipulation tasks, which is publicly available. We compared the performances of our novel classifier and the existing models using our dataset and the MIME dataset. The results suggest domain adaptation and timing-based features improve success prediction. Our novel model, i.e. fully convolutional neural network with domain adaptation and timing features, achieves an average classification accuracy of 97.3\% and 95.5\% across tasks in both datasets whereas state-of-the-art classifiers without domain adaptation and timing-features only achieve 82.4\% and 90.3\%, respectively.}
    }

  • N. Wagner, R. Kirk, M. Hanheide, and G. Cielniak, “Efficient and robust orientation estimation of strawberries for fruit picking applications,” in IEEE international conference on robotics and automation (ICRA), 2021, p. 13857–1386. doi:10.1109/ICRA48506.2021.9561848
    [BibTeX] [Abstract] [Download PDF]

    Recent developments in agriculture have highlighted the potential of as well as the need for the use of robotics. Various processes in this field can benefit from the proper use of state of the art technology [1], in terms of efficiency as well as quality. One of these areas is the harvesting of ripe fruit. In order to be able to automate this process, a robotic harvester needs to be aware of the full poses of the crop/fruit to be collected in order to perform proper path- and collision planning. The current state of the art mainly considers problems of detection and segmentation of fruit with localisation limited to the 3D position only. The reliable and real-time estimation of the respective orientations remains a mostly unaddressed problem. In this paper, we present a compact and efficient network architecture for estimating the orientation of soft fruit such as strawberries from colour and, optionally, depth images. The proposed system can be automatically trained in a realistic simulation environment. We evaluate the system’s performance on simulated datasets and validate its operation on publicly available images of strawberries to demonstrate its practical use. Depending on the amount of training data used, coverage of state space, as well as the availability of RGB-D or RGB data only, mean errors of as low as 11° could be achieved.

    @inproceedings{lincoln44426,
    month = {October},
    author = {Nikolaus Wagner and Raymond Kirk and Marc Hanheide and Grzegorz Cielniak},
    booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
    title = {Efficient and Robust Orientation Estimation of Strawberries for Fruit Picking Applications},
    publisher = {IEEE},
    doi = {10.1109/ICRA48506.2021.9561848},
    pages = {13857--1386},
    year = {2021},
    url = {https://eprints.lincoln.ac.uk/id/eprint/44426/},
    abstract = {Recent developments in agriculture have highlighted the potential of as well as the need for the use of robotics. Various processes in this field can benefit from the proper use of state of the art technology [1], in terms of efficiency as well as quality. One of these areas is the harvesting of ripe fruit. In order to be able to automate this process, a robotic harvester needs to be aware of the full poses of the crop/fruit to be collected in order to perform proper path- and collision planning. The current state of the art mainly considers problems of detection and segmentation of fruit with localisation limited to the 3D position only. The reliable and real-time estimation of the respective orientations remains a mostly unaddressed problem. In this paper, we present a compact and efficient network architecture for estimating the orientation of soft fruit such as strawberries from colour and, optionally, depth images. The proposed system can be automatically trained in a realistic simulation environment. We evaluate the system’s performance on simulated datasets and validate its operation on publicly available images of strawberries to demonstrate its practical use. Depending on the amount of training data used, coverage of state space, as well as the availability of RGB-D or RGB data only, mean errors of as low as 11° could be achieved.}
    }

  • J. C. Mayoral, L. Grimstad, P. J. From, and G. Cielniak, “Integration of a human-aware risk-based braking system into an open-field mobile robot,” in IEEE international conference on robotics and automation (ICRA), 2021, p. 2435–2442. doi:10.1109/ICRA48506.2021.9561522
    [BibTeX] [Abstract] [Download PDF]

    Safety integration components for robotic applications are a mandatory feature for any autonomous mobile application, including human avoidance behaviors. This paper proposes a novel parametrizable scene risk evaluator for open-field applications that use humans motion predictions and pre-defined hazard zones to estimate a braking factor. Parameters optimization uses simulated data. The evaluation is carried out by simulated and real-time scenarios, showing the impact of human predictions in favor of risk reductions on agricultural applications.

    @inproceedings{lincoln44427,
    month = {October},
    author = {Jose C. Mayoral and Lars Grimstad and P{\r a}l J. From and Grzegorz Cielniak},
    booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
    title = {Integration of a Human-aware Risk-based Braking System into an Open-Field Mobile Robot},
    publisher = {IEEE},
    doi = {10.1109/ICRA48506.2021.9561522},
    pages = {2435--2442},
    year = {2021},
    url = {https://eprints.lincoln.ac.uk/id/eprint/44427/},
    abstract = {Safety integration components for robotic applications are a mandatory feature for any autonomous mobile application, including human avoidance behaviors. This paper proposes a novel parametrizable scene risk evaluator for open-field applications that use humans motion predictions and pre-defined hazard zones to estimate a braking factor. Parameters optimization uses simulated data. The evaluation is carried out by simulated and real-time scenarios, showing the impact of human predictions in favor of risk reductions on agricultural applications.}
    }

  • T. Liu, X. Sun, C. Hu, Q. Fu, and S. Yue, “A versatile vision-pheromone-communication platform for swarm robotics,” in 2021 IEEE international conference on robotics and automation (ICRA), 2021. doi:10.1109/ICRA48506.2021.9561911
    [BibTeX] [Abstract] [Download PDF]

    This paper describes a versatile platform for swarm robotics research. It integrates multiple pheromone communication with a dynamic visual scene along with real time data transmission and localization of multiple-robots. The platform has been built for inquiries into social insect behavior and bio-robotics. By introducing a new research scheme to coordinate olfactory and visual cues, it not only complements current swarm robotics platforms which focus only on pheromone communications by adding visual interaction, but also may fill an important gap in closing the loop from bio-robotics to neuroscience. We have built a controllable dynamic visual environment based on our previously developed ColCOSΦ (a multi-pheromones platform) by enclosing the arena with LED panels and interacting with the micro mobile robots with a visual sensor. In addition, a wireless communication system has been developed to allow transmission of real-time bi-directional data between multiple micro robot agents and a PC host. A case study combining concepts from the internet of vehicles (IoV) and insect-vision inspired model has been undertaken to verify the applicability of the presented platform, and to investigate how complex scenarios can be facilitated by making use of this platform.

    @inproceedings{lincoln47322,
    booktitle = {2021 IEEE International Conference on Robotics and Automation (ICRA)},
    month = {October},
    title = {A Versatile Vision-Pheromone-Communication Platform for Swarm Robotics},
    author = {Tian Liu and Xuelong Sun and Cheng Hu and Qinbing Fu and Shigang Yue},
    publisher = {IEEE},
    year = {2021},
    doi = {10.1109/ICRA48506.2021.9561911},
    url = {https://eprints.lincoln.ac.uk/id/eprint/47322/},
    abstract = {This paper describes a versatile platform for swarm robotics research. It integrates multiple pheromone communication with a dynamic visual scene along with real time data transmission and localization of multiple-robots. The platform has been built for inquiries into social insect behavior and bio-robotics. By introducing a new research scheme to coordinate olfactory and visual cues, it not only complements current swarm robotics platforms which focus only on pheromone communications by adding visual interaction, but also may fill an important gap in closing the loop from bio-robotics to neuroscience. We have built a controllable dynamic visual environment based on our previously developed ColCOSΦ (a multi-pheromones platform) by enclosing the arena with LED panels and interacting with the micro mobile robots with a visual sensor. In addition, a wireless communication system has been developed to allow transmission of real-time bi-directional data between multiple micro robot agents and a PC host. A case study combining concepts from the internet of vehicles (IoV) and insect-vision inspired model has been undertaken to verify the applicability of the presented platform, and to investigate how complex scenarios can be facilitated by making use of this platform.}
    }

  • T. Zhivkov, A. Gomez, J. Gao, E. Sklar, and S. Parsons, “The need for speed: how 5G communication can support AI in the field,” in EPSRC UK-RAS Network (2021). UKRAS21 Conference: Robotics at Home Proceedings, 2021, p. 55–56. doi:10.31256/On8Hj9U
    [BibTeX] [Abstract] [Download PDF]

    Using AI for agriculture requires the fast transmission and processing of large volumes of data. Cost-effective high speed processing may not be possible on-board agricultural vehicles, and suitably fast transmission may not be possible with older generation wireless communications. In response, the work presented here investigates the use of 5G wireless technology to support the deployment of AI in this context.

    @inproceedings{lincoln46574,
    month = {June},
    author = {Tsvetan Zhivkov and Adrian Gomez and Junfeng Gao and Elizabeth Sklar and Simon Parsons},
    booktitle = {EPSRC UK-RAS Network (2021). UKRAS21 Conference: Robotics at home Proceedings},
    title = {The need for speed: How 5G communication can support AI in the field},
    publisher = {UK-RAS},
    doi = {10.31256/On8Hj9U},
    pages = {55--56},
    year = {2021},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46574/},
    abstract = {Using AI for agriculture requires the fast transmission and processing of large volumes of data. Cost-effective high speed processing may not be possible on-board agricultural vehicles, and suitably fast transmission may not be possible with older generation wireless communications. In response, the work presented here investigates the use of 5G wireless technology to support the deployment of AI in this context.}
    }

  • N. Wagner and G. Cielniak, “Inference of mechanical properties of dynamic objects through active perception,” in Towards Autonomous Robotic Systems Conference (TAROS), 2021, p. 430–439. doi:10.1007/978-3-030-89177-0_45
    [BibTeX] [Abstract] [Download PDF]

    Current robotic systems often lack a deeper understanding of their surroundings, even if they are equipped with visual sensors like RGB-D cameras. Knowledge of the mechanical properties of the objects in their immediate surroundings, however, could bring huge benefits to applications such as path planning, obstacle avoidance & removal or estimating object compliance. In this paper, we present a novel approach to inferring mechanical properties of dynamic objects with the help of active perception and frequency analysis of objects’ stimulus responses. We perform FFT on a buffer of image flow maps to identify the spectral signature of objects and from that their eigenfrequency. Combining this with 3D depth information allows us to infer an object’s mass without having to weigh it. We perform experiments on a demonstrator with variable mass and stiffness to test our approach and provide an analysis on the influence of individual properties on the result. By simply applying a controlled amount of force to a system, we were able to infer mechanical properties of systems with an eigenfrequency of around 4.5 Hz in about 2 s. This lab-based feasibility study opens new exciting robotic applications targeting realistic, non-rigid objects such as plants, crops or fabric.

    @inproceedings{lincoln46646,
    month = {October},
    author = {Nikolaus Wagner and Grzegorz Cielniak},
    booktitle = {Towards Autonomous Robotic Systems Conference (TAROS)},
    title = {Inference of Mechanical Properties of Dynamic Objects through Active Perception},
    publisher = {Springer},
    year = {2021},
    journal = {Towards Autonomous Robotic Systems Conference (TAROS) 2021},
    doi = {10.1007/978-3-030-89177-0\_45},
    pages = {430--439},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46646/},
    abstract = {Current robotic systems often lack a deeper understanding of their surroundings, even if they are equipped with visual sensors like RGB-D cameras. Knowledge of the mechanical properties of the objects in their immediate surroundings, however, could bring huge benefits to applications such as path planning, obstacle avoidance \& removal or estimating object compliance.
    In this paper, we present a novel approach to inferring mechanical properties of dynamic objects with the help of active perception and frequency analysis of objects' stimulus responses. We perform FFT on a buffer of image flow maps to identify the spectral signature of objects and from that their eigenfrequency. Combining this with 3D depth information allows us to infer an object's mass without having to weigh it.
    We perform experiments on a demonstrator with variable mass and stiffness to test our approach and provide an analysis on the influence of individual properties on the result. By simply applying a controlled amount of force to a system, we were able to infer mechanical properties of systems with an eigenfrequency of around 4.5 Hz in about 2 s. This lab-based feasibility study opens new exciting robotic applications targeting realistic, non-rigid objects such as plants, crops or fabric.}
    }

  • K. Heiwolt, T. Duckett, and G. Cielniak, “Deep semantic segmentation of 3D plant point clouds,” in Towards Autonomous Robotic Systems Conference, 2021. doi:10.1007/978-3-030-89177-0_4
    [BibTeX] [Abstract] [Download PDF]

    Plant phenotyping is an essential step in the plant breeding cycle, necessary to ensure food safety for a growing world population. Standard procedures for evaluating three-dimensional plant morphology and extracting relevant phenotypic characteristics are slow, costly, and in need of automation. Previous work towards automatic semantic segmentation of plants relies on explicit prior knowledge about the species and sensor set-up, as well as manually tuned parameters. In this work, we propose to use a supervised machine learning algorithm to predict per-point semantic annotations directly from point cloud data of whole plants and minimise the necessary user input. We train a PointNet++ variant on a fully annotated procedurally generated data set of partial point clouds of tomato plants, and show that the network is capable of distinguishing between the semantic classes of leaves, stems, and soil based on structural data only. We present both quantitative and qualitative evaluation results, and establish a proof of concept, indicating that deep learning is a promising approach towards replacing the current complex, laborious, species-specific, state-of-the-art plant segmentation procedures.

    @inproceedings{lincoln46669,
    booktitle = {Towards Autonomous Robotic Systems Conference},
    month = {October},
    title = {Deep semantic segmentation of 3D plant point clouds},
    author = {Karoline Heiwolt and Tom Duckett and Grzegorz Cielniak},
    publisher = {Springer International Publishing},
    year = {2021},
    doi = {10.1007/978-3-030-89177-0\_4},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46669/},
    abstract = {Plant phenotyping is an essential step in the plant breeding cycle, necessary to ensure food safety for a growing world population. Standard procedures for evaluating three-dimensional plant morphology and extracting relevant phenotypic characteristics are slow, costly, and in need of automation. Previous work towards automatic semantic segmentation of plants relies on explicit prior knowledge about the species and sensor set-up, as well as manually tuned parameters. In this work, we propose to use a supervised machine learning algorithm to predict per-point semantic annotations directly from point cloud data of whole plants and minimise the necessary user input. We train a PointNet++ variant on a fully annotated procedurally generated data set of partial point clouds of tomato plants, and show that the network is capable of distinguishing between the semantic classes of leaves, stems, and soil based on structural data only. We present both quantitative and qualitative evaluation results, and establish a proof of concept, indicating that deep learning is a promising approach towards replacing the current complex, laborious, species-specific, state-of-the-art plant segmentation procedures.}
    }

  • R. Ravikanna, M. Hanheide, G. Das, and Z. Zhu, “Maximising availability of transportation robots through intelligent allocation of parking spaces,” in TAROS 2021, 2021. doi:10.1007/978-3-030-89177-0_34
    [BibTeX] [Abstract] [Download PDF]

    Autonomous agricultural robots increasingly have an important role in tasks such as transportation, crop monitoring, weed detection etc. These tasks require the robots to travel to different locations in the field. Reducing time for this travel can greatly reduce the global task completion time and improve the availability of the robot to perform more number of tasks. Looking at in-field logistics robots for supporting human fruit pickers as a relevant scenario, this research deals with the design of various algorithms for automated allocation of parking spaces for the on-field robots, so as to make them most accessible to preferred areas of the field. These parking space allocation algorithms are tested for their performance by varying initial parameters like the size of the field, number of farm workers in the field, position of the farm workers etc. Various experiments are conducted for this purpose on a simulated environment. Their results are studied and discussed for better understanding about the contribution of intelligent parking space allocation towards improving the overall time efficiency of task completion.

    @inproceedings{lincoln46635,
    booktitle = {TAROS2021},
    month = {October},
    title = {Maximising availability of transportation robots through intelligent allocation of parking spaces},
    author = {Roopika Ravikanna and Marc Hanheide and Gautham Das and Zuyuan Zhu},
    year = {2021},
    doi = {10.1007/978-3-030-89177-0\_34},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46635/},
    abstract = {Autonomous agricultural robots increasingly have an important role in tasks such as transportation, crop monitoring, weed detection etc. These tasks require the robots to travel to different locations in the field. Reducing time for this travel can greatly reduce the global task completion time and improve the availability of the robot to perform more number of tasks. Looking at in-field logistics robots for supporting human fruit pickers as a relevant scenario, this research deals with the design of various algorithms for automated allocation of parking spaces for the on-field robots, so as to make them most accessible to preferred areas of the field. These parking space allocation algorithms are tested for their performance by varying initial parameters like the size of the field, number of farm workers in the field, position of the farm workers etc. Various experiments are conducted for this purpose on a simulated environment. Their results are studied and discussed for better understanding about the contribution of intelligent parking space allocation towards improving the overall time efficiency of task completion.}
    }

  • G. Picardi, R. Lovecchio, and M. Calisti, “Towards autonomous area inspection with a bio-inspired underwater legged robot,” in IROS, 2021. doi:10.1109/IROS51168.2021.9636316
    [BibTeX] [Abstract] [Download PDF]

    Recently, a new category of bio-inspired legged robots moving directly on the seabed have been proposed to complement the abilities of traditional underwater vehicles and to enhance manipulation and sampling tasks. So far, only tele-operated use of underwater legged robots has been reported and in this paper we attempt to fill such gap by presenting the first step towards autonomous area inspection. First, we present a 3 dimensional single-legged model for underwater hopping locomotion and derive a path following control strategy. Later, we adapt such control strategy to an underwater hexapod robot SILVER2 on the robotic simulator Webots. Finally, we simulate a full autonomous mission consisting in the inspection of an area over a pre-defined path, target recognition, transition to a safer gait and target approach. Our results show the feasibility of the approach and encourage the implementation of the presented control strategy on the robot SILVER2.

    @inproceedings{lincoln52079,
    booktitle = {IROS},
    month = {September},
    title = {Towards autonomous area inspection with a bio-inspired underwater legged robot},
    author = {Giacomo Picardi and Rossana Lovecchio and Marcello Calisti},
    publisher = {IEEE/RSJ},
    year = {2021},
    doi = {10.1109/IROS51168.2021.9636316},
    url = {https://eprints.lincoln.ac.uk/id/eprint/52079/},
    abstract = {Recently, a new category of bio-inspired legged robots moving directly on the seabed have been proposed to complement the abilities of traditional underwater vehicles and to enhance manipulation and sampling tasks. So far, only tele-operated use of underwater legged robots has been reported and in this paper we attempt to fill such gap by presenting the first step towards autonomous area inspection. First, we present a 3 dimensional single-legged model for underwater hopping locomotion and derive a path following control strategy. Later, we adapt such control strategy to an underwater hexapod robot SILVER2 on the robotic simulator Webots. Finally, we simulate a full autonomous mission consisting in the inspection of an area over a pre-defined path, target recognition, transition to a safer gait and target approach. Our results show the feasibility of the approach and encourage the implementation of the presented control strategy on the robot SILVER2.}
    }

  • C. Jansen and E. Sklar, “Predicting artist drawing activity via multi-camera inputs for co-creative drawing,” in Towards Autonomous Robotic Systems Conference (TAROS), 2021. doi:10.1007/978-3-030-89177-0_23
    [BibTeX] [Abstract] [Download PDF]

    This paper presents the results of experimentation in computer-vision-based perception of the artist drawing with analog media (pen and paper), with the aim of contributing towards a human-robot co-creative drawing framework. Using data gathered from user studies with artists and illustrators, two types of CNN models were designed and evaluated to predict an artist's activity (e.g. are they drawing or not?) and the position of the pen on the canvas based only on a multi-camera input of the drawing surface. Results of different combinations of input sources are presented, with an overall mean accuracy of 95\% (std: 7\%) for predicting when the artist is present and 68\% (std: 15\%) for predicting when the artist is drawing, and a mean squared normalised error of 0.0034 (std: 0.0099) for predicting the pen's position on the drawing canvas. These results point toward an autonomous robotic system having an awareness of an artist at work via camera-based input and contribute toward the development of a more fluid physical-to-digital workflow for creative content creation.

    @inproceedings{lincoln46480,
    booktitle = {Towards Autonomous Robotic Systems Conference (TAROS)},
    month = {October},
    title = {Predicting Artist Drawing Activity via Multi-Camera Inputs for Co-Creative Drawing},
    author = {Chipp Jansen and Elizabeth Sklar},
    year = {2021},
    doi = {10.1007/978-3-030-89177-0\_23},
    journal = {Proceedings of the 22nd Towards Autonomous Robotic Systems (TAROS) Conference},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46480/},
    abstract = {This paper presents the results of experimentation in computer-vision-based perception of the artist drawing with analog media (pen and paper), with the aim of contributing towards a human-robot co-creative drawing framework. Using data gathered from user studies with artists and illustrators, two types of CNN models were designed and evaluated to predict an artist's activity (e.g. are they drawing or not?) and the position of the pen on the canvas based only on a multi-camera input of the drawing surface. Results of different combinations of input sources are presented, with an overall mean accuracy of 95\% (std: 7\%) for predicting when the artist is present and 68\% (std: 15\%) for predicting when the artist is drawing, and a mean squared normalised error of 0.0034 (std: 0.0099) for predicting the pen's position on the drawing canvas. These results point toward an autonomous robotic system having an awareness of an artist at work via camera-based input and contribute toward the development of a more fluid physical-to-digital workflow for creative content creation.}
    }

  • A. Henry and C. Fox, “Open source hardware automated guitar player,” in International conference on computer music, 2021.
    [BibTeX] [Abstract] [Download PDF]

    We present the first open source hardware (OSH) design and build of a physical robotic automated guitar player. As users' own instruments come in different shapes and sizes, the system is designed to be used and/or modified to physically attach to a wide range of instruments. Design objectives include ease and low cost of build. Automation is split into three modules: left-hand fretting, right-hand string picking, and right-hand palm muting. Automation is performed using cheap electric linear solenoids. Software APIs are designed and implemented for both low-level actuator control and high-level music performance.

    @inproceedings{lincoln45327,
    booktitle = {International Conference on Computer Music},
    month = {July},
    title = {Open source hardware automated guitar player},
    author = {Andrew Henry and Charles Fox},
    publisher = {ICMC},
    year = {2021},
    url = {https://eprints.lincoln.ac.uk/id/eprint/45327/},
    abstract = {We present the first open source hardware (OSH) design and build of a physical robotic automated guitar player. As users' own instruments come in different shapes and sizes, the system is designed to be used and/or modified to physically attach to a wide range of instruments. Design objectives include ease and low cost of build. Automation is split into three modules: left-hand fretting, right-hand string picking, and right-hand palm muting. Automation is performed using cheap electric linear solenoids. Software APIs are designed and implemented for both low-level actuator control and high-level music performance.}
    }

  • K. Swann, P. Hadley, M. A. Hadley, S. Pearson, A. Badiee, and C. Twitchen, “The effect of light intensity and duration on yield and quality of everbearer and June-bearer strawberry cultivars in an LED-lit multi-tiered vertical growing system,” in IX International Strawberry Symposium, 2021, p. 359–366. doi:10.17660/ActaHortic.2021.1309.52
    [BibTeX] [Abstract] [Download PDF]

    This study aimed to provide insights into the efficient use of supplementary lighting for strawberry crops produced in a multi-tiered LED-lit vertical growing system, ascertaining the optimal light intensity and duration, with comparative energy use and costs. Furthermore, the suitability of a premium everbearer strawberry cultivar with a high yield potential was compared with a standard winter glasshouse June-bearer cultivar currently used for out-of-season production in the UK. Three lighting durations (11, 16 and 22 h) provided by LEDs were combined with two light intensities (344 and 227 µmol) to give six light treatments on each tier of a three-tiered system to grow the two cultivars. The everbearer showed a higher yield with a higher correlation with increased lighting and a greater proportion of reproductive growth than the June-bearer. Light intensity and duration increased yield, with duration also increasing sugar content (°Brix). However, even with yields of over 100 t ha⁻¹ recorded in this study, yields are likely to be insufficient to cover the cost of electricity.

    @inproceedings{lincoln45160,
    booktitle = {IX International Strawberry Symposium},
    month = {April},
    title = {The effect of light intensity and duration on yield and quality of everbearer and June-bearer strawberry cultivars in a LED lit multi-tiered vertical growing system},
    author = {K Swann and P Hadley and M. A. Hadley and Simon Pearson and Amir Badiee and C. Twitchen},
    year = {2021},
    pages = {359--366},
    doi = {10.17660/ActaHortic.2021.1309.52},
    url = {https://eprints.lincoln.ac.uk/id/eprint/45160/},
    abstract = {This study aimed to provide insights into the efficient use of supplementary lighting for strawberry crops produced in a multi-tiered LED-lit vertical growing system, ascertaining the optimal light intensity and duration, with comparative energy use and costs. Furthermore, the suitability of a premium everbearer strawberry cultivar with a high yield potential was compared with a standard winter glasshouse June-bearer cultivar currently used for out-of-season production in the UK. Three lighting durations (11, 16 and 22 h) provided by LEDs were combined with two light intensities (344 and 227 µmol) to give six light treatments on each tier of a three-tiered system to grow the two cultivars. The everbearer showed a higher yield with a higher correlation with increased lighting and a greater proportion of reproductive growth than the June-bearer. Light intensity and duration increased yield, with duration also increasing sugar content (°Brix). However, even with yields of over 100 t ha⁻¹ recorded in this study, yields are likely to be insufficient to cover the cost of electricity.}
    }

  • Z. Maamar, M. Al-Khafajiy, and M. Dohan, “An IoT application business-model on top of cloud and fog nodes,” in Advanced Information Networking and Applications, 2021, p. 174–186. doi:10.1007/978-3-030-75075-6_14
    [BibTeX] [Abstract] [Download PDF]

    This paper discusses the design of a business model dedicated for IoT applications that would be deployed on top of cloud and fog resources. This business model features 2 constructs, flow (specialized into data and collaboration) and placement (specialized into processing and storage). On the one hand, the flow construct is about who sends what and to whom, who collaborates with whom, and what restrictions exist on what to send, to whom to send, and with whom to collaborate. On the other hand, the placement construct is about what and how to fragment, where to store, and what restrictions exist on what and how to fragment, and where to store. The paper also discusses the development of a system built-upon a deep learning model that recommends how the different flows and placements should be formed. These recommendations consider the technical capabilities of cloud and fog resources as well as the networking topology connecting these resources to things.

    @inproceedings{lincoln47574,
    volume = {226},
    month = {April},
    author = {Zakaria Maamar and Mohammed Al-Khafajiy and Murtada Dohan},
    booktitle = {Advanced Information Networking and Applications},
    title = {An IoT Application Business-Model on Top of Cloud and Fog Nodes},
    publisher = {Springer},
    year = {2021},
    journal = {AINA 2021: Advanced Information Networking and Applications},
    doi = {10.1007/978-3-030-75075-6\_14},
    pages = {174--186},
    url = {https://eprints.lincoln.ac.uk/id/eprint/47574/},
    abstract = {This paper discusses the design of a business model dedicated for IoT applications that would be deployed on top of cloud and fog resources. This business model features 2 constructs, flow (specialized into data and collaboration) and placement (specialized into processing and storage). On the one hand, the flow construct is about who sends what and to whom, who collaborates with whom, and what restrictions exist on what to send, to whom to send, and with whom to collaborate. On the other hand, the placement construct is about what and how to fragment, where to store, and what restrictions exist on what and how to fragment, and where to store. The paper also discusses the development of a system built-upon a deep learning model that recommends how the different flows and placements should be formed. These recommendations consider the technical capabilities of cloud and fog resources as well as the networking topology connecting these resources to things.}
    }

  • J. Heselden and G. Das, “CRH*: a deadlock-free framework for scalable prioritised path planning in multi-robot systems,” in Towards Autonomous Robotic Systems Conference, 2021. doi:10.1007/978-3-030-89177-0_7
    [BibTeX] [Abstract] [Download PDF]

    Multi-robot system is an ever growing tool which is able to be applied to a wide range of industries to improve productivity and robustness, especially when tasks are distributed in space, time and functionality. Recent works have shown the benefits of multi-robot systems in fields such as warehouse automation, entertainment and agriculture. The work presented in this paper tackles the deadlock problem in multi-robot navigation, in which robots within a common work-space, are caught in situations where they are unable to navigate to their targets, being blocked by one another. This problem can be mitigated by efficient multi-robot path planning. Our work focused around the development of a scalable rescheduling algorithm named Conflict Resolution Heuristic A* (CRH*) for decoupled prioritised planning. Extensive experimental evaluation of CRH* was carried out in discrete event simulations of a fleet of autonomous agricultural robots. The results from these experiments proved that the algorithm was both scalable and deadlock-free. Additionally, novel customisation options were included to test further optimisations in system performance. Continuous Assignment and Dynamic Scoring showed to reduce the make-span of the routing whilst Combinatorial Heuristics showed to reduce the impact of outliers on priority orderings.

    @inproceedings{lincoln46453,
    booktitle = {Towards Autonomous Robotic Systems Conference},
    month = {October},
    title = {CRH*: A Deadlock Free Framework for Scalable Prioritised Path Planning in Multi-Robot Systems},
    author = {James Heselden and Gautham Das},
    publisher = {Springer International Publishing},
    year = {2021},
    doi = {10.1007/978-3-030-89177-0\_7},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46453/},
    abstract = {Multi-robot system is an ever growing tool which is able to be applied to a wide range of industries to improve productivity and robustness, especially when tasks are distributed in space, time and functionality. Recent works have shown the benefits of multi-robot systems in fields such as warehouse automation, entertainment and agriculture. The work presented in this paper tackles the deadlock problem in multi-robot navigation, in which robots within a common work-space, are caught in situations where they are unable to navigate to their targets, being blocked by one another. This problem can be mitigated by efficient multi-robot path planning. Our work focused around the development of a scalable rescheduling algorithm named Conflict Resolution Heuristic A* (CRH*) for decoupled prioritised planning. Extensive experimental evaluation of CRH* was carried out in discrete event simulations of a fleet of autonomous agricultural robots. The results from these experiments proved that the algorithm was both scalable and deadlock-free. Additionally, novel customisation options were included to test further optimisations in system performance. Continuous Assignment and Dynamic Scoring showed to reduce the make-span of the routing whilst Combinatorial Heuristics showed to reduce the impact of outliers on priority orderings.}
    }

  • A. Mohtasib, G. Neumann, and H. Cuayahuitl, “A study on dense and sparse (visual) rewards in robot policy learning,” in Towards Autonomous Robotic Systems Conference (TAROS), 2021.
    [BibTeX] [Abstract] [Download PDF]

    Deep Reinforcement Learning (DRL) is a promising approach for teaching robots new behaviour. However, one of its main limitations is the need for carefully hand-coded reward signals by an expert. We argue that it is crucial to automate the reward learning process so that new skills can be taught to robots by their users. To address such automation, we consider task success classifiers using visual observations to estimate the rewards in terms of task success. In this work, we study the performance of multiple state-of-the-art deep reinforcement learning algorithms under different types of reward: Dense, Sparse, Visual Dense, and Visual Sparse rewards. Our experiments in various simulation tasks (Pendulum, Reacher, Pusher, and Fetch Reach) show that while DRL agents can learn successful behaviours using visual rewards when the goal targets are distinguishable, their performance may decrease if the task goal is not clearly visible. Our results also show that visual dense rewards are more successful than visual sparse rewards and that there is no single best algorithm for all tasks.

    @inproceedings{lincoln45983,
    booktitle = {Towards Autonomous Robotic Systems Conference (TAROS)},
    month = {September},
    title = {A Study on Dense and Sparse (Visual) Rewards in Robot Policy Learning},
    author = {Abdalkarim Mohtasib and Gerhard Neumann and Heriberto Cuayahuitl},
    publisher = {University of Lincoln},
    year = {2021},
    url = {https://eprints.lincoln.ac.uk/id/eprint/45983/},
    abstract = {Deep Reinforcement Learning (DRL) is a promising approach for teaching robots new behaviour. However, one of its main limitations is the need for carefully hand-coded reward signals by an expert. We argue that it is crucial to automate the reward learning process so that new skills can be taught to robots by their users. To address such automation, we consider task success classifiers using visual observations to estimate the rewards in terms of task success. In this work, we study the performance of multiple state-of-the-art deep reinforcement learning algorithms under different types of reward: Dense, Sparse, Visual Dense, and Visual Sparse rewards. Our experiments in various simulation tasks (Pendulum, Reacher, Pusher, and Fetch Reach) show that while DRL agents can learn successful behaviours using visual rewards when the goal targets are distinguishable, their performance may decrease if the task goal is not clearly visible. Our results also show that visual dense rewards are more successful than visual sparse rewards and that there is no single best algorithm for all tasks.}
    }

2020

  • H. Wang, Q. Fu, H. Wang, P. Baxter, J. Peng, and S. Yue, “A bioinspired angular velocity decoding neural network model for visually guided flights,” Neural networks, 2020. doi:10.1016/j.neunet.2020.12.008
    [BibTeX] [Abstract] [Download PDF]

    Efficient and robust motion perception systems are important pre-requisites for achieving visually guided flights in future micro air vehicles. As a source of inspiration, the visual neural networks of flying insects such as honeybee and Drosophila provide ideal examples on which to base artificial motion perception models. In this paper, we have used this approach to develop a novel method that solves the fundamental problem of estimating angular velocity for visually guided flights. Compared with previous models, our elementary motion detector (EMD) based model uses a separate texture estimation pathway to effectively decode angular velocity, and demonstrates considerable independence from the spatial frequency and contrast of the gratings. Using the Unity development platform the model is further tested for tunnel centering and terrain following paradigms in order to reproduce the visually guided flight behaviors of honeybees. In a series of controlled trials, the virtual bee utilizes the proposed angular velocity control schemes to accurately navigate through a patterned tunnel, maintaining a suitable distance from the undulating textured terrain. The results are consistent with both neuron spike recordings and behavioral path recordings of real honeybees, thereby demonstrating the model's potential for implementation in micro air vehicles which have only visual sensors.

    @article{lincoln43704,
    title = {A bioinspired angular velocity decoding neural network model for visually guided flights},
    author = {Huatian Wang and Qinbing Fu and Hongxin Wang and Paul Baxter and Jigen Peng and Shigang Yue},
    publisher = {Elsevier},
    year = {2020},
    doi = {10.1016/j.neunet.2020.12.008},
    journal = {Neural Networks},
    url = {https://eprints.lincoln.ac.uk/id/eprint/43704/},
    abstract = {Efficient and robust motion perception systems are important pre-requisites for achieving visually guided flights in future micro air vehicles. As a source of inspiration, the visual neural networks of flying insects such as honeybee and Drosophila provide ideal examples on which to base artificial motion perception models. In this paper, we have used this approach to develop a novel method that solves the fundamental problem of estimating angular velocity for visually guided flights. Compared with previous models, our elementary motion detector (EMD) based model uses a separate texture estimation pathway to effectively decode angular velocity, and demonstrates considerable independence from the spatial frequency and contrast of the gratings. Using the Unity development platform the model is further tested for tunnel centering and terrain following paradigms in order to reproduce the visually guided flight behaviors of honeybees. In a series of controlled trials, the virtual bee utilizes the proposed angular velocity control schemes to accurately navigate through a patterned tunnel, maintaining a suitable distance from the undulating textured terrain. The results are consistent with both neuron spike recordings and behavioral path recordings of real honeybees, thereby demonstrating the model's potential for implementation in micro air vehicles which have only visual sensors.}
    }

  • W. Martindale, S. Pearson, M. Swainson, L. Korir, I. Wright, A. M. Opiyo, B. Karanja, S. Nyalala, and M. Kumar, “Framing food security and food loss statistics for incisive supply chain improvement and knowledge transfer between Kenyan, Indian and United Kingdom food manufacturers,” Emerald open research, vol. 2, iss. 12, 2020. doi:10.35241/emeraldopenres.13414.1
    [BibTeX] [Abstract] [Download PDF]

    The application of global indices of nutrition and food sustainability in public health and the improvement of product profiles has facilitated effective actions that increase food security. In the research reported here we develop index measurements further so that they can be applied to food categories and be used by food processors and manufacturers for specific food supply chains. This research considers how they can be used to assess the sustainability of supply chain operations by stimulating more incisive food loss and waste reduction planning. The research demonstrates how an index driven approach focussed on improving both nutritional delivery and reducing food waste will result in improved food security and sustainability. Nutritional improvements are focussed on protein supply and reduction of food waste on supply chain losses and the methods are tested using the food systems of Kenya and India where the current research is being deployed. Innovative practices will emerge when nutritional improvement and waste reduction actions demonstrate market success, and this will result in the co-development of food manufacturing infrastructure and innovation programmes. The use of established indices of sustainability and security enable comparisons that encourage knowledge transfer and the establishment of cross-functional indices that quantify national food nutrition, security and sustainability. The research presented in this initial study is focussed on applying these indices to specific food supply chains for food processors and manufacturers.

    @article{lincoln40529,
    volume = {2},
    number = {12},
    month = {April},
    author = {Wayne Martindale and Simon Pearson and Mark Swainson and Lilian Korir and Isobel Wright and Arnold M. Opiyo and Benard Karanja and Samuel Nyalala and Mahesh Kumar},
    title = {Framing food security and food loss statistics for incisive supply chain improvement and knowledge transfer between Kenyan, Indian and United Kingdom food manufacturers},
    publisher = {Emerald},
    year = {2020},
    journal = {Emerald Open Research},
    doi = {10.35241/emeraldopenres.13414.1},
    url = {https://eprints.lincoln.ac.uk/id/eprint/40529/},
    abstract = {The application of global indices of nutrition and food sustainability in public health and the improvement of product profiles has facilitated effective actions that increase food security. In the research reported here we develop index measurements further so that they can be applied to food categories and be used by food processors and manufacturers for specific food supply chains. This research considers how they can be used to assess the sustainability of supply chain operations by stimulating more incisive food loss and waste reduction planning. The research demonstrates how an index driven approach focussed on improving both nutritional delivery and reducing food waste will result in improved food security and sustainability. Nutritional improvements are focussed on protein supply and reduction of food waste on supply chain losses and the methods are tested using the food systems of Kenya and India where the current research is being deployed. Innovative practices will emerge when nutritional improvement and waste reduction actions demonstrate market success, and this will result in the co-development of food manufacturing infrastructure and innovation programmes. The use of established indices of sustainability and security enable comparisons that encourage knowledge transfer and the establishment of cross-functional indices that quantify national food nutrition, security and sustainability. The research presented in this initial study is focussed on applying these indices to specific food supply chains for food processors and manufacturers.}
    }

  • X. Sun, S. Yue, and M. Mangan, “A decentralised neural model explaining optimal integration of navigational strategies in insects,” eLife, vol. 9, 2020. doi:10.7554/eLife.54026
    [BibTeX] [Abstract] [Download PDF]

    Insect navigation arises from the coordinated action of concurrent guidance systems but the neural mechanisms through which each functions, and are then coordinated, remain unknown. We propose that insects require distinct strategies to retrace familiar routes (route-following) and directly return from novel to familiar terrain (homing) using different aspects of frequency encoded views that are processed in different neural pathways. We also demonstrate how the Central Complex and Mushroom Bodies regions of the insect brain may work in tandem to coordinate the directional output of different guidance cues through a contextually switched ring-attractor inspired by neural recordings. The resultant unified model of insect navigation reproduces behavioural data from a series of cue conflict experiments in realistic animal environments and offers testable hypotheses of where and how insects process visual cues, utilise the different information that they provide and coordinate their outputs to achieve the adaptive behaviours observed in the wild.

    @article{lincoln41703,
    volume = {9},
    month = {July},
    author = {Xuelong Sun and Shigang Yue and Michael Mangan},
    title = {A decentralised neural model explaining optimal integration of navigational strategies in insects},
    publisher = {eLife Sciences Publications},
    journal = {eLife},
    doi = {10.7554/eLife.54026},
    year = {2020},
    url = {https://eprints.lincoln.ac.uk/id/eprint/41703/},
    abstract = {Insect navigation arises from the coordinated action of concurrent guidance systems but the neural mechanisms through which each functions, and are then coordinated, remain unknown. We propose that insects require distinct strategies to retrace familiar routes (route-following) and directly return from novel to familiar terrain (homing) using different aspects of frequency encoded views that are processed in different neural pathways. We also demonstrate how the Central Complex and Mushroom Bodies regions of the insect brain may work in tandem to coordinate the directional output of different guidance cues through a contextually switched ring-attractor inspired by neural recordings. The resultant unified model of insect navigation reproduces behavioural data from a series of cue conflict experiments in realistic animal environments and offers testable hypotheses of where and how insects process visual cues, utilise the different information that they provide and coordinate their outputs to achieve the adaptive behaviours observed in the wild.}
    }

  • H. Cuayahuitl, “A data-efficient deep learning approach for deployable multimodal social robots,” Neurocomputing, vol. 396, p. 587–598, 2020. doi:10.1016/j.neucom.2018.09.104
    [BibTeX] [Abstract] [Download PDF]

    The deep supervised and reinforcement learning paradigms (among others) have the potential to endow interactive multimodal social robots with the ability of acquiring skills autonomously. But it is still not clear how they can best be deployed in real-world applications. As a step in this direction, we propose a deep learning-based approach for efficiently training a humanoid robot to play multimodal games, and use the game of 'Noughts & Crosses' with two variants as a case study. Its minimum requirements for learning to perceive and interact are based on a few hundred example images, a few example multimodal dialogues and physical demonstrations of robot manipulation, and automatic simulations. In addition, we propose novel algorithms for robust visual game tracking and for competitive policy learning with high winning rates, which substantially outperform DQN-based baselines. While an automatic evaluation shows evidence that the proposed approach can be easily extended to new games with competitive robot behaviours, a human evaluation with 130 humans playing with the Pepper robot confirms that highly accurate visual perception is required for successful game play.

    @article{lincoln42805,
    volume = {396},
    month = {July},
    author = {Heriberto Cuayahuitl},
    note = {The final published version of this article can be accessed online at https://www.journals.elsevier.com/neurocomputing/},
    title = {A Data-Efficient Deep Learning Approach for Deployable Multimodal Social Robots},
    publisher = {Elsevier},
    year = {2020},
    journal = {Neurocomputing},
    doi = {10.1016/j.neucom.2018.09.104},
    pages = {587--598},
    url = {https://eprints.lincoln.ac.uk/id/eprint/42805/},
    abstract = {The deep supervised and reinforcement learning paradigms (among others) have the potential to endow interactive multimodal social robots with the ability of acquiring skills autonomously. But it is still not clear how they can best be deployed in real-world applications. As a step in this direction, we propose a deep learning-based approach for efficiently training a humanoid robot to play multimodal games---and use the game of `Noughts \& Crosses' with two variants as a case study. Its minimum requirements for learning to perceive and interact are based on a few hundred example images, a few example multimodal dialogues and physical demonstrations of robot manipulation, and automatic simulations. In addition, we propose novel algorithms for robust visual game tracking and for competitive policy learning with high winning rates, which substantially outperform DQN-based baselines. While an automatic evaluation shows evidence that the proposed approach can be easily extended to new games with competitive robot behaviours, a human evaluation with 130 humans playing with the {\it Pepper} robot confirms that highly accurate visual perception is required for successful game play.}
    }

  • Q. Fu and S. Yue, “Modelling drosophila motion vision pathways for decoding the direction of translating objects against cluttered moving backgrounds,” Biological cybernetics, 2020. doi:10.1007/s00422-020-00841-x
    [BibTeX] [Abstract] [Download PDF]

    Decoding the direction of translating objects in front of cluttered moving backgrounds, accurately and efficiently, is still a challenging problem. In nature, lightweight and low-powered flying insects apply motion vision to detect a moving target in highly variable environments during flight, which are excellent paradigms to learn motion perception strategies. This paper investigates the fruit fly Drosophila motion vision pathways and presents computational modelling based on cutting-edge physiological research. The proposed visual system model features bio-plausible ON and OFF pathways, wide-field horizontal-sensitive (HS) and vertical-sensitive (VS) systems. The main contributions of this research are on two aspects: (1) the proposed model articulates the forming of both direction-selective and direction-opponent responses, revealed as principal features of motion perception neural circuits, in a feed-forward manner; (2) it also shows robust direction selectivity to translating objects in front of cluttered moving backgrounds, via the modelling of spatiotemporal dynamics including a combination of motion pre-filtering mechanisms and ensembles of local correlators inside both the ON and OFF pathways, which works effectively to suppress irrelevant background motion or distractors, and to improve the dynamic response. Accordingly, the direction of translating objects is decoded as global responses of both the HS and VS systems, with positive or negative output indicating preferred-direction or null-direction translation. The experiments have verified the effectiveness of the proposed neural system model, and demonstrated its responsive preference to faster-moving, higher-contrast and larger-size targets embedded in cluttered moving backgrounds.

    @article{lincoln42133,
    month = {July},
    title = {Modelling Drosophila motion vision pathways for decoding the direction of translating objects against cluttered moving backgrounds},
    author = {Qinbing Fu and Shigang Yue},
    publisher = {Springer},
    year = {2020},
    doi = {10.1007/s00422-020-00841-x},
    journal = {Biological Cybernetics},
    url = {https://eprints.lincoln.ac.uk/id/eprint/42133/},
    abstract = {Decoding the direction of translating objects in front of cluttered moving backgrounds, accurately and efficiently, is still a challenging problem. In nature, lightweight and low-powered flying insects apply motion vision to detect a moving target in highly variable environments during flight, which are excellent paradigms to learn motion perception strategies. This paper investigates the fruit fly Drosophila motion vision pathways and presents computational modelling based on cutting-edge physiological research. The proposed visual system model features bio-plausible ON and OFF pathways, wide-field horizontal-sensitive (HS) and vertical-sensitive (VS) systems. The main contributions of this research are on two aspects: (1) the proposed model articulates the forming of both direction-selective and direction-opponent responses, revealed as principal features of motion perception neural circuits, in a feed-forward manner; (2) it also shows robust direction selectivity to translating objects in front of cluttered moving backgrounds, via the modelling of spatiotemporal dynamics including a combination of motion pre-filtering mechanisms and ensembles of local correlators inside both the ON and OFF pathways, which works effectively to suppress irrelevant background motion or distractors, and to improve the dynamic response. Accordingly, the direction of translating objects is decoded as global responses of both the HS and VS systems, with positive or negative output indicating preferred-direction or null-direction translation. The experiments have verified the effectiveness of the proposed neural system model, and demonstrated its responsive preference to faster-moving, higher-contrast and larger-size targets embedded in cluttered moving backgrounds.}
    }

  • D. Liu, N. Bellotto, and S. Yue, “Deep spiking neural network for video-based disguise face recognition based on dynamic facial movements,” IEEE transactions on neural networks and learning systems, vol. 31, iss. 6, p. 1843–1855, 2020. doi:10.1109/TNNLS.2019.2927274
    [BibTeX] [Abstract] [Download PDF]

    With the increasing popularity of social media and smart devices, the face as one of the key biometrics becomes vital for person identification. Amongst those face recognition algorithms, video-based face recognition methods could make use of both temporal and spatial information just as humans do to achieve better classification performance. However, they cannot identify individuals when certain key facial areas like eyes or nose are disguised by heavy makeup or rubber/digital masks. To this end, we propose a novel deep spiking neural network architecture in this study. It takes dynamic facial movements, the facial muscle changes induced by speaking or other activities, as the sole input. An event-driven continuous spike-timing dependent plasticity learning rule with adaptive thresholding is applied to train the synaptic weights. The experiments on our proposed video-based disguise face database (MakeFace DB) demonstrate that the proposed learning method performs very well – it achieves from 95\% to 100\% correct classification rates under various realistic experimental scenarios.

    @article{lincoln41718,
    volume = {31},
    number = {6},
    month = {June},
    author = {Daqi Liu and Nicola Bellotto and Shigang Yue},
    title = {Deep Spiking Neural Network for Video-based Disguise Face Recognition Based on Dynamic Facial Movements},
    publisher = {IEEE},
    year = {2020},
    journal = {IEEE Transactions on Neural Networks and Learning Systems},
    doi = {10.1109/TNNLS.2019.2927274},
    pages = {1843--1855},
    url = {https://eprints.lincoln.ac.uk/id/eprint/41718/},
    abstract = {With the increasing popularity of social media and smart devices, the face as one of the key biometrics becomes vital for person identification. Amongst those face recognition algorithms, video-based face recognition methods could make use of both temporal and spatial information just as humans do to achieve better classification performance. However, they cannot identify individuals when certain key facial areas like eyes or nose are disguised by heavy makeup or rubber/digital masks. To this end, we propose a novel deep spiking neural network architecture in this study. It takes dynamic facial movements, the facial muscle changes induced by speaking or other activities, as the sole input. An event-driven continuous spike-timing dependent plasticity learning rule with adaptive thresholding is applied to train the synaptic weights. The experiments on our proposed video-based disguise face database (MakeFace DB) demonstrate that the proposed learning method performs very well - it achieves from 95\% to 100\% correct classification rates under various realistic experimental scenarios.}
    }

  • J. Liu, S. Iacoponi, C. Laschi, L. Wen, and M. Calisti, “Underwater mobile manipulation: a soft arm on a benthic legged robot,” IEEE robotics & automation magazine, vol. 27, iss. 4, p. 12–26, 2020. doi:10.1109/MRA.2020.3024001
    [BibTeX] [Abstract] [Download PDF]

    Robotic systems that can explore the sea floor, collect marine samples, gather shallow water refuse, and perform other underwater tasks are interesting and important in several fields, from biology and ecology to off-shore industry. In this article, we present a robotic platform that is, to our knowledge, the first to combine benthic legged locomotion and soft continuum manipulation to perform real-world underwater mission-like experiments. We experimentally exploit inverse kinematics for spatial manipulation in a laboratory environment and then examine the robot’s workspace extensibility, force, energy consumption, and grasping ability in different undersea scenarios.

    @article{lincoln46137,
    volume = {27},
    number = {4},
    month = {December},
    author = {Jiaqi Liu and Saverio Iacoponi and Cecilia Laschi and Li Wen and Marcello Calisti},
    title = {Underwater Mobile Manipulation: A Soft Arm on a Benthic Legged Robot},
    year = {2020},
    journal = {IEEE Robotics \& Automation Magazine},
    doi = {10.1109/MRA.2020.3024001},
    pages = {12--26},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46137/},
    abstract = {Robotic systems that can explore the sea floor, collect marine samples, gather shallow water refuse, and perform other underwater tasks are interesting and important in several fields, from biology and ecology to off-shore industry. In this article, we present a robotic platform that is, to our knowledge, the first to combine benthic legged locomotion and soft continuum manipulation to perform real-world underwater mission-like experiments. We experimentally exploit inverse kinematics for spatial manipulation in a laboratory environment and then examine the robot's workspace extensibility, force, energy consumption, and grasping ability in different undersea scenarios.}
    }

  • Q. Fu, H. Wang, J. Peng, and S. Yue, “Improved collision perception neuronal system model with adaptive inhibition mechanism and evolutionary learning,” IEEE access, vol. 8, p. 108896–108912, 2020. doi:10.1109/ACCESS.2020.3001396
    [BibTeX] [Abstract] [Download PDF]

    Accurate and timely perception of collision in highly variable environments is still a challenging problem for artificial visual systems. As a source of inspiration, the lobula giant movement detectors (LGMDs) in locust's visual pathways have been studied intensively, and modelled as quick collision detectors against challenges from various scenarios including vehicles and robots. However, the state-of-the-art LGMD models have not achieved acceptable robustness to deal with more challenging scenarios like the various vehicle driving scenes, due to the lack of adaptive signal processing mechanisms. To address this problem, we propose an improved neuronal system model, called LGMD+, that is featured by novel modelling of spatiotemporal inhibition dynamics with biological plausibilities including 1) lateral inhibitions with global biases defined by a variant of Gaussian distribution, spatially, and 2) an adaptive feedforward inhibition mediation pathway, temporally. Accordingly, the LGMD+ performs more effectively to detect merely approaching objects threatening head-on collision risks by appropriately suppressing motion distractors caused by vibrations, near-miss or approaching stimuli with deviations from the centre view. Through evolutionary learning with a systematic dataset of various crash and non-collision driving scenarios, the LGMD+ shows improved robustness outperforming the previous related methods. After evolution, its computational simplicity, flexibility and robustness have also been well demonstrated by real-time experiments of autonomous micro-mobile robots.

    @article{lincoln42131,
    volume = {8},
    month = {June},
    author = {Qinbing Fu and Huatian Wang and Jigen Peng and Shigang Yue},
    title = {Improved Collision Perception Neuronal System Model with Adaptive Inhibition Mechanism and Evolutionary Learning},
    publisher = {IEEE},
    year = {2020},
    journal = {IEEE Access},
    doi = {10.1109/ACCESS.2020.3001396},
    pages = {108896--108912},
    url = {https://eprints.lincoln.ac.uk/id/eprint/42131/},
    abstract = {Accurate and timely perception of collision in highly variable environments is still a challenging problem for artificial visual systems. As a source of inspiration, the lobula giant movement detectors (LGMDs) in locust's visual pathways have been studied intensively, and modelled as quick collision detectors against challenges from various scenarios including vehicles and robots. However, the state-of-the-art LGMD models have not achieved acceptable robustness to deal with more challenging scenarios like the various vehicle driving scenes, due to the lack of adaptive signal processing mechanisms. To address this problem, we propose an improved neuronal system model, called LGMD+, that is featured by novel modelling of spatiotemporal inhibition dynamics with biological plausibilities including 1) lateral inhibitions with global biases defined by a variant of Gaussian distribution, spatially, and 2) an adaptive feedforward inhibition mediation pathway, temporally. Accordingly, the LGMD+ performs more effectively to detect merely approaching objects threatening head-on collision risks by appropriately suppressing motion distractors caused by vibrations, near-miss or approaching stimuli with deviations from the centre view. Through evolutionary learning with a systematic dataset of various crash and non-collision driving scenarios, the LGMD+ shows improved robustness outperforming the previous related methods. After evolution, its computational simplicity, flexibility and robustness have also been well demonstrated by real-time experiments of autonomous micro-mobile robots.}
    }

  • Y. M. Lee, R. Madigan, O. Giles, L. Garach-Morcillo, G. Markkula, C. Fox, F. Camara, M. Rothmueller, S. A. Vendelbo-Larsen, P. H. Rasmussen, A. Dietrich, D. Nathanael, V. Portouli, A. Schieben, and N. Merat, “Road users rarely use explicit communication when interacting in today’s traffic: implications for automated vehicles,” Cognition, technology & work, 2020. doi:10.1007/s10111-020-00635-y
    [BibTeX] [Abstract] [Download PDF]

    To be successful, automated vehicles (AVs) need to be able to manoeuvre in mixed traffic in a way that will be accepted by road users, and maximises traffic safety and efficiency. A likely prerequisite for this success is for AVs to be able to communicate effectively with other road users in a complex traffic environment. The current study, conducted as part of the European project interACT, investigates the communication strategies used by drivers and pedestrians while crossing the road at six observed locations, across three European countries. In total, 701 road user interactions were observed and annotated, using an observation protocol developed for this purpose. The observation protocols identified 20 event categories, observed from the approaching vehicles/drivers and pedestrians. These included information about movement, looking behaviour, hand gestures, and signals used, as well as some demographic data. These observations illustrated that explicit communication techniques, such as honking, flashing headlights by drivers, or hand gestures by drivers and pedestrians, rarely occurred. This observation was consistent across sites. In addition, a follow-on questionnaire, administered to a sub-set of the observed pedestrians after crossing the road, found that when contemplating a crossing, pedestrians were more likely to use vehicle-based behaviour, rather than communication cues from the driver. Overall, the findings suggest that vehicle-based movement information such as yielding cues are more likely to be used by pedestrians while crossing the road, compared to explicit communication cues from drivers, although some cultural differences were observed. The implications of these findings are discussed with respect to design of suitable external interfaces and communication of intent by future automated vehicles.

    @article{lincoln41217,
    month = {June},
    title = {Road users rarely use explicit communication when interacting in today's traffic: implications for automated vehicles},
    author = {Yee Mun Lee and Ruth Madigan and Oscar Giles and Laura Garach-Morcillo and Gustav Markkula and Charles Fox and Fanta Camara and Markus Rothmueller and Signe Alexandra Vendelbo-Larsen and Pernille Holm Rasmussen and Andre Dietrich and Dimitris Nathanael and Villy Portouli and Anna Schieben and Natasha Merat},
    publisher = {Springer},
    year = {2020},
    doi = {10.1007/s10111-020-00635-y},
    journal = {Cognition, Technology \& Work},
    url = {https://eprints.lincoln.ac.uk/id/eprint/41217/},
    abstract = {To be successful, automated vehicles (AVs) need to be able to manoeuvre in mixed traffic in a way that will be accepted by road users, and maximises traffic safety and efficiency. A likely prerequisite for this success is for AVs to be able to communicate effectively with other road users in a complex traffic environment. The current study, conducted as part of the European project interACT, investigates the communication strategies used by drivers and pedestrians while crossing the road at six observed locations, across three European countries. In total, 701 road user interactions were observed and annotated, using an observation protocol developed for this purpose. The observation protocols identified 20 event categories, observed from the approaching vehicles/drivers and pedestrians. These included information about movement, looking behaviour, hand gestures, and signals used, as well as some demographic data. These observations illustrated that explicit communication techniques, such as honking, flashing headlights by drivers, or hand gestures by drivers and pedestrians, rarely occurred. This observation was consistent across sites. In addition, a follow-on questionnaire, administered to a sub-set of the observed pedestrians after crossing the road, found that when contemplating a crossing, pedestrians were more likely to use vehicle-based behaviour, rather than communication cues from the driver. Overall, the findings suggest that vehicle-based movement information such as yielding cues are more likely to be used by pedestrians while crossing the road, compared to explicit communication cues from drivers, although some cultural differences were observed. The implications of these findings are discussed with respect to design of suitable external interfaces and communication of intent by future automated vehicles.}
    }

  • Z. Yan, S. Schreiberhuber, G. Halmetschlager, T. Duckett, M. Vincze, and N. Bellotto, “Robot perception of static and dynamic objects with an autonomous floor scrubber,” Intelligent service robotics, 2020. doi:10.1007/s11370-020-00324-9
    [BibTeX] [Abstract] [Download PDF]

    This paper presents the perception system of a new professional cleaning robot for large public places. The proposed system is based on multiple sensors including 3D and 2D lidar, two RGB-D cameras and a stereo camera. The two lidars together with an RGB-D camera are used for dynamic object (human) detection and tracking, while the second RGB-D and stereo camera are used for detection of static objects (dirt and ground objects). A learning and reasoning module for spatial-temporal representation of the environment based on the perception pipeline is also introduced. Furthermore, a new dataset collected with the robot in several public places, including a supermarket, a warehouse and an airport, is released. Baseline results on this dataset for further research and comparison are provided. The proposed system has been fully implemented into the Robot Operating System (ROS) with high modularity, also publicly available to the community.

    @article{lincoln40882,
    month = {June},
    title = {Robot Perception of Static and Dynamic Objects with an Autonomous Floor Scrubber},
    author = {Zhi Yan and Simon Schreiberhuber and Georg Halmetschlager and Tom Duckett and Markus Vincze and Nicola Bellotto},
    publisher = {Springer},
    year = {2020},
    doi = {10.1007/s11370-020-00324-9},
    journal = {Intelligent Service Robotics},
    url = {https://eprints.lincoln.ac.uk/id/eprint/40882/},
    abstract = {This paper presents the perception system of a new professional cleaning robot for large public places. The proposed system is based on multiple sensors including 3D and 2D lidar, two RGB-D cameras and a stereo camera. The two lidars together with an RGB-D camera are used for dynamic object (human) detection and tracking, while the second RGB-D and stereo camera are used for detection of static objects (dirt and ground objects). A learning and reasoning module for spatial-temporal representation of the environment based on the perception pipeline is also introduced. Furthermore, a new dataset collected with the robot in several public places, including a supermarket, a warehouse and an airport, is released. Baseline results on this dataset for further research and comparison are provided. The proposed system has been fully implemented into the Robot Operating System (ROS) with high modularity, also publicly available to the community.}
    }

  • I. Albayati, A. Postnikov, S. Pearson, R. Bickerton, A. Zolotas, and C. Bingham, “Power and energy analysis for a commercial retail refrigeration system responding to a static demand side response,” International journal of electrical power & energy systems, vol. 117, p. 105645, 2020. doi:10.1016/j.ijepes.2019.105645
    [BibTeX] [Abstract] [Download PDF]

    The paper considers the impact of Demand Side Response events on supply power profile and energy efficiency of widely distributed aggregated loads applied across commercial refrigeration systems. Responses to secondary grid frequency static DSR events are investigated. Experimental trials are conducted on a system of refrigerators representing a small retail store, and subsequently on the refrigerators of an operational superstore in the UK. Energy consumption and energy savings during 3 hours of operation, pre- and post-secondary DSR, are discussed. In addition, a simultaneous secondary DSR event is realised across three operational retail stores located in different geographical regions of the UK. A Simulink model for a 3Φ power network is used to investigate the impact of a synchronised return to normal operation of the aggregated refrigeration systems post DSR on the local power network. Results show a ~1% drop in line voltage due to the synchronised return to operation. An analysis of energy consumption shows that DSR events can facilitate energy savings of between 3.8% and 9.3% compared to normal operation. This is a result of the refrigerators operating more efficiently during and shortly after the DSR. The use of aggregated refrigeration loads can contribute to the necessary load-shed by 97.3% at the beginning of DSR and 27% during 30 minutes DSR, based on a simultaneous DSR event carried out on three retail stores.

    @article{lincoln38163,
    volume = {117},
    month = {May},
    author = {Ibrahim Albayati and Andrey Postnikov and Simon Pearson and Ronald Bickerton and Argyrios Zolotas and Chris Bingham},
    title = {Power and Energy Analysis for a Commercial Retail Refrigeration System Responding to a Static Demand Side Response},
    publisher = {Elsevier},
    year = {2020},
    journal = {International Journal of Electrical Power \& Energy Systems},
    doi = {10.1016/j.ijepes.2019.105645},
    pages = {105645},
    url = {https://eprints.lincoln.ac.uk/id/eprint/38163/},
    abstract = {The paper considers the impact of Demand Side Response events on supply power profile and energy efficiency of widely distributed aggregated loads applied across commercial refrigeration systems. Responses to secondary grid frequency static DSR events are investigated. Experimental trials are conducted on a system of refrigerators representing a small retail store, and subsequently on the refrigerators of an operational superstore in the UK. Energy consumption and energy savings during 3 hours of operation, pre and post-secondary DSR, are discussed. In addition, a simultaneous secondary DSR event is realised across three operational retail stores located in different geographical regions of the UK. A Simulink model for a 3{\ensuremath{\Phi}} power network is used to investigate the impact of a synchronised return to normal operation of the aggregated refrigeration systems post DSR on the local power network. Results show {\texttt{\char126}}1\% drop in line voltage due to the synchronised return to operation. An analysis of energy consumption shows that DSR events can facilitate energy savings of between 3.8\% and 9.3\% compared to normal operation. This is a result of the refrigerators operating more efficiently during and shortly after the DSR. The use of aggregated refrigeration loads can contribute to the necessary load-shed by 97.3\% at the beginning of DSR and 27\% during 30 minutes DSR, based on a simultaneous DSR event carried out on three retail stores.}
    }

  • G. Picardi, M. Chellapurath, S. Iacoponi, S. Stefanni, C. Laschi, and M. Calisti, “Bioinspired underwater legged robot for seabed exploration with low environmental disturbance,” Science robotics, vol. 5, iss. 42, p. eaaz1012, 2020. doi:10.1126/scirobotics.aaz1012
    [BibTeX] [Abstract] [Download PDF]

    Robots have the potential to assist and complement humans in the study and exploration of extreme and hostile environments. For example, valuable scientific data have been collected with the aid of propeller-driven autonomous and remotely operated vehicles in underwater operations. However, because of their nature as swimmers, such robots are limited when closer interaction with the environment is required. Here, we report a bioinspired underwater legged robot, called SILVER2, that implements locomotion modalities inspired by benthic animals (organisms that harness the interaction with the seabed to move; for example, octopi and crabs). Our robot can traverse irregular terrains, interact delicately with the environment, approach targets safely and precisely, and hold position passively and silently. The capabilities of our robot were validated through a series of field missions in real sea conditions in a depth range between 0.5 and 12 meters.

    @article{lincoln46143,
    volume = {5},
    number = {42},
    month = {May},
    author = {G. Picardi and M. Chellapurath and S. Iacoponi and S. Stefanni and C. Laschi and M. Calisti},
    title = {Bioinspired underwater legged robot for seabed exploration with low environmental disturbance},
    year = {2020},
    journal = {Science Robotics},
    doi = {10.1126/scirobotics.aaz1012},
    pages = {eaaz1012},
    url = {https://eprints.lincoln.ac.uk/id/eprint/46143/},
    abstract = {Robots have the potential to assist and complement humans in the study and exploration of extreme and hostile environments. For example, valuable scientific data have been collected with the aid of propeller-driven autonomous and remotely operated vehicles in underwater operations. However, because of their nature as swimmers, such robots are limited when closer interaction with the environment is required. Here, we report a bioinspired underwater legged robot, called SILVER2, that implements locomotion modalities inspired by benthic animals (organisms that harness the interaction with the seabed to move; for example, octopi and crabs). Our robot can traverse irregular terrains, interact delicately with the environment, approach targets safely and precisely, and hold position passively and silently. The capabilities of our robot were validated through a series of field missions in real sea conditions in a depth range between 0.5 and 12 meters.}
    }

  • L. Jackson, C. M. Saaj, A. Seddaoui, C. Whiting, S. Eckersley, and S. Hadfield, “Downsizing an orbital space robot: a dynamic system based evaluation,” Advances in space research, vol. 65, iss. 10, p. 2247–2262, 2020. doi:10.1016/j.asr.2020.03.004
    [BibTeX] [Abstract] [Download PDF]

    Small space robots have the potential to revolutionise space exploration by facilitating the on-orbit assembly of infrastructure, in shorter time scales, at reduced costs. Their commercial appeal will be further improved if such a system is also capable of performing on-orbit servicing missions, in line with the current drive to limit space debris and prolong the lifetime of satellites already in orbit. Whilst there have been a limited number of successful demonstrations of technologies capable of these on-orbit operations, the systems remain large and bespoke. The recent surge in small satellite technologies is changing the economics of space and in the near future, downsizing a space robot might become a viable option with a host of benefits. This industry-wide shift means some of the technologies for use with a downsized space robot, such as power and communication subsystems, now exist. However, there are still dynamic and control issues that need to be overcome before a downsized space robot can be capable of undertaking useful missions. This paper first outlines these issues, before analyzing the effect of downsizing a system on its operational capability, thereby presenting the smallest controllable system such that the benefits of a small space robot can be achieved with current technologies. The sizing of the base spacecraft and manipulator is addressed here. The design presented consists of a 3 link, 6 degrees of freedom robotic manipulator mounted on a 12U form factor satellite. The feasibility of this 12U space robot was evaluated in simulation and the in-depth results presented here support the hypothesis that a small space robot is a viable solution for in-orbit operations.

    @article{lincoln48337,
    volume = {65},
    number = {10},
    month = {May},
    author = {Lucy Jackson and Chakravarthini M. Saaj and Asma Seddaoui and Calem Whiting and Steve Eckersley and Simon Hadfield},
    title = {Downsizing an orbital space robot: A dynamic system based evaluation},
    publisher = {Elsevier},
    year = {2020},
    journal = {Advances in Space Research},
    doi = {10.1016/j.asr.2020.03.004},
    pages = {2247--2262},
    url = {https://eprints.lincoln.ac.uk/id/eprint/48337/},
    abstract = {Small space robots have the potential to revolutionise space exploration by facilitating the on-orbit assembly of infrastructure, in shorter time scales, at reduced costs. Their commercial appeal will be further improved if such a system is also capable of performing on-orbit servicing missions, in line with the current drive to limit space debris and prolong the lifetime of satellites already in orbit. Whilst there have been a limited number of successful demonstrations of technologies capable of these on-orbit operations, the systems remain large and bespoke. The recent surge in small satellite technologies is changing the economics of space and in the near future, downsizing a space robot might become a viable option with a host of benefits. This industry-wide shift means some of the technologies for use with a downsized space robot, such as power and communication subsystems, now exist. However, there are still dynamic and control issues that need to be overcome before a downsized space robot can be capable of undertaking useful missions. This paper first outlines these issues, before analyzing the effect of downsizing a system on its operational capability, thereby presenting the smallest controllable system such that the benefits of a small space robot can be achieved with current technologies. The sizing of the base spacecraft and manipulator is addressed here. The design presented consists of a 3 link, 6 degrees of freedom robotic manipulator mounted on a 12U form factor satellite. The feasibility of this 12U space robot was evaluated in simulation and the in-depth results presented here support the hypothesis that a small space robot is a viable solution for in-orbit operations.}
    }

  • H. Wang, J. Peng, and S. Yue, “A directionally selective small target motion detecting visual neural network in cluttered backgrounds,” Ieee transactions on cybernetics, vol. 50, iss. 4, p. 1541–1555, 2020. doi:10.1109/TCYB.2018.2869384
    [BibTeX] [Abstract] [Download PDF]

    Discriminating targets moving against a cluttered background is a huge challenge, let alone detecting a target as small as one or a few pixels and tracking it in flight. In the insect’s visual system, a class of specific neurons, called small target motion detectors (STMDs), have been identified as showing exquisite selectivity for small target motion. Some of the STMDs have also demonstrated direction selectivity which means these STMDs respond strongly only to their preferred motion direction. Direction selectivity is an important property of these STMD neurons which could contribute to tracking small targets such as mates in flight. However, little has been done on systematically modeling these directionally selective STMD neurons. In this paper, we propose a directionally selective STMD-based neural network for small target detection in a cluttered background. In the proposed neural network, a new correlation mechanism is introduced for direction selectivity via correlating signals relayed from two pixels. Then, a lateral inhibition mechanism is implemented on the spatial field for size selectivity of the STMD neurons. Finally, a population vector algorithm is used to encode motion direction of small targets. Extensive experiments showed that the proposed neural network not only is in accord with current biological findings, i.e., showing directional preferences, but also worked reliably in detecting small targets against cluttered backgrounds.

    @article{lincoln33420,
    volume = {50},
    number = {4},
    month = {April},
    author = {Hongxin Wang and Jigen Peng and Shigang Yue},
    note = {The final published version of this article can be accessed online at https://ieeexplore.ieee.org/document/8485659},
    title = {A Directionally Selective Small Target Motion Detecting Visual Neural Network in Cluttered Backgrounds},
    publisher = {IEEE},
    year = {2020},
    journal = {IEEE Transactions on Cybernetics},
    doi = {10.1109/TCYB.2018.2869384},
    pages = {1541--1555},
    url = {https://eprints.lincoln.ac.uk/id/eprint/33420/},
    abstract = {Discriminating targets moving against a cluttered background is a huge challenge, let alone detecting a target as small as one or a few pixels and tracking it in flight. In the insect's visual system, a class of specific neurons, called small target motion detectors (STMDs), have been identified as showing exquisite selectivity for small target motion. Some of the STMDs have also demonstrated direction selectivity which means these STMDs respond strongly only to their preferred motion direction. Direction selectivity is an important property of these STMD neurons which could contribute to tracking small targets such as mates in flight. However, little has been done on systematically modeling these directionally selective STMD neurons. In this paper, we propose a directionally selective STMD-based neural network for small target detection in a cluttered background. In the proposed neural network, a new correlation mechanism is introduced for direction selectivity via correlating signals relayed from two pixels. Then, a lateral inhibition mechanism is implemented on the spatial field for size selectivity of the STMD neurons. Finally, a population vector algorithm is used to encode motion direction of small targets. Extensive experiments showed that the proposed neural network not only is in accord with current biological findings, i.e., showing directional preferences, but also worked reliably in detecting small targets against cluttered backgrounds.}
    }

  • T. Pardi, V. Ortenzi, C. Fairbairn, T. Pipe, A. G. Esfahani, and R. Stolkin, “Planning maximum-manipulability cutting paths,” Ieee robotics and automation letters, vol. 5, iss. 2, p. 1999–2006, 2020. doi:10.1109/LRA.2020.2970949
    [BibTeX] [Abstract] [Download PDF]

    This paper presents a method for constrained motion planning from vision, which enables a robot to move its end-effector over an observed surface, given start and destination points. The robot has no prior knowledge of the surface shape but observes it from a noisy point cloud. We consider the multi-objective optimisation problem of finding robot trajectories which maximise the robot's manipulability throughout the motion, while also minimising surface-distance travelled between the two points. This work has application in industrial problems of rough robotic cutting, e.g., demolition of the legacy nuclear plant, where the cut path need not be precise as long as it achieves dismantling. We show how detours in the path can be leveraged to increase the manipulability of the robot at all points along the path. This helps to avoid singularities while maximising the robot's capability to make small deviations during task execution. We show how a sampling-based planner can be projected onto the Riemannian manifold of a curved surface, and extended to include a term which maximises manipulability. We present the results of empirical experiments, with both simulated and real robots, which are tasked with moving over a variety of different surface shapes. Our planner enables successful task completion while ensuring significantly greater manipulability when compared against a conventional RRT* planner.

    @article{lincoln41285,
    volume = {5},
    number = {2},
    month = {April},
    author = {Tommaso Pardi and Valerio Ortenzi and Colin Fairbairn and Tony Pipe and Amir Ghalamzan Esfahani and Rustam Stolkin},
    title = {Planning maximum-manipulability cutting paths},
    publisher = {IEEE},
    year = {2020},
    journal = {IEEE Robotics and Automation Letters},
    doi = {10.1109/LRA.2020.2970949},
    pages = {1999--2006},
    url = {https://eprints.lincoln.ac.uk/id/eprint/41285/},
    abstract = {This paper presents a method for constrained motion planning from vision, which enables a robot to move its end-effector over an observed surface, given start and destination points. The robot has no prior knowledge of the surface shape but observes it from a noisy point cloud. We consider the multi-objective optimisation problem of finding robot trajectories which maximise the robot's manipulability throughout the motion, while also minimising surface-distance travelled between the two points. This work has application in industrial problems of rough robotic cutting, e.g., demolition of the legacy nuclear plant, where the cut path need not be precise as long as it achieves dismantling. We show how detours in the path can be leveraged to increase the manipulability of the robot at all points along the path. This helps to avoid singularities while maximising the robot's capability to make small deviations during task execution. We show how a sampling-based planner can be projected onto the Riemannian manifold of a curved surface, and extended to include a term which maximises manipulability. We present the results of empirical experiments, with both simulated and real robots, which are tasked with moving over a variety of different surface shapes. Our planner enables successful task completion while ensuring significantly greater manipulability when compared against a conventional RRT* planner.}
    }

  • S. Cosar and N. Bellotto, “Human re-identification with a robot thermal camera using entropy-based sampling,” Journal of intelligent and robotic systems, vol. 98, iss. 1, p. 85–102, 2020. doi:10.1007/s10846-019-01026-w
    [BibTeX] [Abstract] [Download PDF]

    Human re-identification is an important feature of domestic service robots, in particular for elderly monitoring and assistance, because it allows them to perform personalized tasks and human-robot interactions. However, vision-based re-identification systems are subject to limitations due to human pose and poor lighting conditions. This paper presents a new re-identification method for service robots using thermal images. In robotic applications, as the number and size of thermal datasets are limited, it is hard to use approaches that require a huge amount of training samples. We propose a re-identification system that can work using only a small amount of data. During training, we perform entropy-based sampling to obtain a thermal dictionary for each person. Then, a symbolic representation is produced by converting each video into sequences of dictionary elements. Finally, we train a classifier using this symbolic representation and geometric distribution within the new representation domain. The experiments are performed on a new thermal dataset for human re-identification, which includes various situations of human motion, poses and occlusion, and which is made publicly available for research purposes. The proposed approach has been tested on this dataset and its improvements over standard approaches have been demonstrated.

    @article{lincoln35778,
    volume = {98},
    number = {1},
    month = {April},
    author = {Serhan Cosar and Nicola Bellotto},
    title = {Human Re-Identification with a Robot Thermal Camera using Entropy-based Sampling},
    publisher = {Springer},
    year = {2020},
    journal = {Journal of Intelligent and Robotic Systems},
    doi = {10.1007/s10846-019-01026-w},
    pages = {85--102},
    url = {https://eprints.lincoln.ac.uk/id/eprint/35778/},
    abstract = {Human re-identification is an important feature of domestic service robots, in particular for elderly monitoring and assistance, because it allows them to perform personalized tasks and human-robot interactions. However, vision-based re-identification systems are subject to limitations due to human pose and poor lighting conditions. This paper presents a new re-identification method for service robots using thermal images. In robotic applications, as the number and size of thermal datasets are limited, it is hard to use approaches that require a huge amount of training samples. We propose a re-identification system that can work using only a small amount of data. During training, we perform entropy-based sampling to obtain a thermal dictionary for each person. Then, a symbolic representation is produced by converting each video into sequences of dictionary elements. Finally, we train a classifier using this symbolic representation and geometric distribution within the new representation domain. The experiments are performed on a new thermal dataset for human re-identification, which includes various situations of human motion, poses and occlusion, and which is made publicly available for research purposes. The proposed approach has been tested on this dataset and its improvements over standard approaches have been demonstrated.}
    }

  • F. Camara, N. Bellotto, S. Cosar, F. Weber, D. Nathanael, M. Althoff, J. Wu, J. Ruenz, A. Dietrich, G. Markkula, A. Schieben, F. Tango, N. Merat, and C. Fox, “Pedestrian models for autonomous driving part ii: high-level models of human behavior,” Ieee transactions on intelligent transportation systems, 2020. doi:10.1109/TITS.2020.3006767
    [BibTeX] [Abstract] [Download PDF]

    Autonomous vehicles (AVs) must share space with pedestrians, both in carriageway cases such as cars at pedestrian crossings and off-carriageway cases such as delivery vehicles navigating through crowds on pedestrianized high-streets. Unlike static obstacles, pedestrians are active agents with complex, interactive motions. Planning AV actions in the presence of pedestrians thus requires modelling of their probable future behaviour as well as detecting and tracking them. This narrative review article is Part II of a pair, together surveying the current technology stack involved in this process, organising recent research into a hierarchical taxonomy ranging from low-level image detection to high-level psychological models, from the perspective of an AV designer. This self-contained Part II covers the higher levels of this stack, consisting of models of pedestrian behaviour, from prediction of individual pedestrians' likely destinations and paths, to game-theoretic models of interactions between pedestrians and autonomous vehicles. This survey clearly shows that, although there are good models for optimal walking behaviour, high-level psychological and social modelling of pedestrian behaviour still remains an open research question that requires many conceptual issues to be clarified. Early work has been done on descriptive and qualitative models of behaviour, but much work is still needed to translate them into quantitative algorithms for practical AV control.

    @article{lincoln41706,
    month = {July},
    title = {Pedestrian Models for Autonomous Driving Part II: High-Level Models of Human Behavior},
    author = {Fanta Camara and Nicola Bellotto and Serhan Cosar and Florian Weber and Dimitris Nathanael and Matthias Althoff and Jingyuan Wu and Johannes Ruenz and Andre Dietrich and Gustav Markkula and Anna Schieben and Fabio Tango and Natasha Merat and Charles Fox},
    publisher = {IEEE},
    year = {2020},
    doi = {10.1109/TITS.2020.3006767},
    journal = {IEEE Transactions on Intelligent Transportation Systems},
    url = {https://eprints.lincoln.ac.uk/id/eprint/41706/},
    abstract = {Autonomous vehicles (AVs) must share space with pedestrians, both in carriageway cases such as cars at pedestrian crossings and off-carriageway cases such as delivery vehicles navigating through crowds on pedestrianized high-streets. Unlike static obstacles, pedestrians are active agents with complex, interactive motions. Planning AV actions in the presence of pedestrians thus requires modelling of their probable future behaviour as well as detecting and tracking them. This narrative review article is Part II of a pair, together surveying the current technology stack involved in this process, organising recent research into a hierarchical taxonomy ranging from low-level image detection to high-level psychological models, from the perspective of an AV designer. This self-contained Part II covers the higher levels of this stack, consisting of models of pedestrian behaviour, from prediction of individual pedestrians' likely destinations and paths, to game-theoretic models of interactions between pedestrians and autonomous vehicles. This survey clearly shows that, although there are good models for optimal walking behaviour, high-level psychological and social modelling of pedestrian behaviour still remains an open research question that requires many conceptual issues to be clarified. Early work has been done on descriptive and qualitative models of behaviour, but much work is still needed to translate them into quantitative algorithms for practical AV control.}
    }

  • M. Al-Khafajiy, T. Baker, M. Asim, Z. Guo, R. Ranjan, A. Longo, D. Puthal, and M. Taylor, “Comitment: a fog computing trust management approach,” Journal of parallel and distributed computing, vol. 137, p. 1–16, 2020. doi:10.1016/j.jpdc.2019.10.006
    [BibTeX] [Abstract] [Download PDF]

    As an extension of cloud computing, fog computing is considered to be relatively more secure than cloud computing due to data being transiently maintained and analyzed on local fog nodes closer to data sources. However, there exist several security and privacy concerns when fog nodes collaborate and share data to execute certain tasks. For example, offloading data to a malicious fog node can result in an unauthorized collection or manipulation of users' private data. Cryptographic-based techniques can prevent external attacks, but are not useful when fog nodes are already authenticated and part of a network using legitimate identities. We therefore resort to trust to identify and isolate malicious fog nodes and mitigate security risks. In this paper, we present a fog COMputIng Trust manageMENT (COMITMENT) approach that uses quality of service and quality of protection history measures from previous direct and indirect fog node interactions for assessing and managing the trust level of the nodes within the fog computing environment. Using the COMITMENT approach, we were able to reduce/identify the malicious attacks/interactions among fog nodes by approximately 66%, while reducing the service response time by approximately 15 s.

    @article{lincoln47559,
    volume = {137},
    month = {March},
    author = {Mohammed Al-Khafajiy and Thar Baker and Muhammad Asim and Zehua Guo and Rajiv Ranjan and Antonella Longo and Deepak Puthal and Mark Taylor},
    title = {COMITMENT: A Fog Computing Trust Management Approach},
    publisher = {Elsevier},
    year = {2020},
    journal = {Journal of Parallel and Distributed Computing},
    doi = {10.1016/j.jpdc.2019.10.006},
    pages = {1--16},
    url = {https://eprints.lincoln.ac.uk/id/eprint/47559/},
    abstract = {As an extension of cloud computing, fog computing is considered to be relatively more secure than cloud computing due to data being transiently maintained and analyzed on local fog nodes closer to data sources. However, there exist several security and privacy concerns when fog nodes collaborate and share data to execute certain tasks. For example, offloading data to a malicious fog node can result in an unauthorized collection or manipulation of users' private data. Cryptographic-based techniques can prevent external attacks, but are not useful when fog nodes are already authenticated and part of a network using legitimate identities. We therefore resort to trust to identify and isolate malicious fog nodes and mitigate security risks. In this paper, we present a fog COMputIng Trust manageMENT (COMITMENT) approach that uses quality of service and quality of protection history measures from previous direct and indirect fog node interactions for assessing and managing the trust level of the nodes within the fog computing environment. Using the COMITMENT approach, we were able to reduce/identify the malicious attacks/interactions among fog nodes by approximately 66\%, while reducing the service response time by approximately 15 s.}
    }

  • J. Gao, A. French, M. Pound, Y. He, T. Pridmore, and J. Pieters, “Deep convolutional neural networks for image-based convolvulus sepium detection in sugar beet fields,” Plant methods, vol. 16, p. 19, 2020. doi:10.1186/s13007-020-00570-z
    [BibTeX] [Abstract] [Download PDF]

    Background: Convolvulus sepium (hedge bindweed) detection in sugar beet fields remains a challenging problem due to variation in appearance of plants, illumination changes, foliage occlusions, and different growth stages under field conditions. Current approaches for weed and crop recognition, segmentation and detection rely predominantly on conventional machine-learning techniques that require a large set of hand-crafted features for modelling. These might fail to generalize over different fields and environments. Results: Here, we present an approach that develops a deep convolutional neural network (CNN) based on the tiny YOLOv3 architecture for C. sepium and sugar beet detection. We generated 2271 synthetic images, before combining these images with 452 field images to train the developed model. YOLO anchor box sizes were calculated from the training dataset using a k-means clustering approach. The resulting model was tested on 100 field images, showing that the combination of synthetic and original field images to train the developed model could improve the mean average precision (mAP) metric from 0.751 to 0.829 compared to using collected field images alone. We also compared the performance of the developed model with the YOLOv3 and Tiny YOLO models. The developed model achieved a better trade-off between accuracy and speed. Specifically, the average precisions (APs@IoU0.5) of C. sepium and sugar beet were 0.761 and 0.897 respectively with 6.48 ms inference time per image (800 × 1200) on an NVIDIA Titan X GPU environment.

    @article{lincoln41223,
    volume = {16},
    month = {March},
    author = {Junfeng Gao and Andrew French and Michael Pound and Yong He and Tony Pridmore and Jan Pieters},
    title = {Deep convolutional neural networks for image-based Convolvulus sepium detection in sugar beet fields},
    publisher = {BMC},
    year = {2020},
    journal = {Plant Methods},
    doi = {10.1186/s13007-020-00570-z},
    pages = {19},
    url = {https://eprints.lincoln.ac.uk/id/eprint/41223/},
    abstract = {Background
    Convolvulus sepium (hedge bindweed) detection in sugar beet fields remains a challenging problem due to variation in appearance of plants, illumination changes, foliage occlusions, and different growth stages under field conditions. Current approaches for weed and crop recognition, segmentation and detection rely predominantly on conventional machine-learning techniques that require a large set of hand-crafted features for modelling. These might fail to generalize over different fields and environments.
    Results
    Here, we present an approach that develops a deep convolutional neural network (CNN) based on the tiny YOLOv3 architecture for C. sepium and sugar beet detection. We generated 2271 synthetic images, before combining these images with 452 field images to train the developed model. YOLO anchor box sizes were calculated from the training dataset using a k-means clustering approach. The resulting model was tested on 100 field images, showing that the combination of synthetic and original field images to train the developed model could improve the mean average precision (mAP) metric from 0.751 to 0.829 compared to using collected field images alone. We also compared the performance of the developed model with the YOLOv3 and Tiny YOLO models. The developed model achieved a better trade-off between accuracy and speed. Specifically, the average precisions (APs@IoU0.5) of C. sepium and sugar beet were 0.761 and 0.897 respectively with 6.48 ms inference time per image (800 {$\times$} 1200) on an NVIDIA Titan X GPU environment.}
    }
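The anchor-box step described in the entry above (k-means over the training set's bounding-box dimensions, as popularised by YOLO) can be sketched as follows. This is an illustrative reimplementation under the usual 1 − IoU clustering distance, not the authors' code; the `iou_wh` and `kmeans_anchors` helper names are hypothetical:

```python
import numpy as np

def iou_wh(boxes, anchors):
    """IoU between (width, height) pairs, assuming boxes share a top-left corner."""
    inter = (np.minimum(boxes[:, None, 0], anchors[None, :, 0]) *
             np.minimum(boxes[:, None, 1], anchors[None, :, 1]))
    union = (boxes[:, 0] * boxes[:, 1])[:, None] + \
            (anchors[:, 0] * anchors[:, 1])[None, :] - inter
    return inter / union

def kmeans_anchors(boxes, k, iters=100, seed=0):
    """Cluster (w, h) box sizes with k-means under the 1 - IoU distance."""
    rng = np.random.default_rng(seed)
    anchors = boxes[rng.choice(len(boxes), size=k, replace=False)]
    for _ in range(iters):
        assign = iou_wh(boxes, anchors).argmax(axis=1)  # nearest = highest IoU
        new = np.array([boxes[assign == i].mean(axis=0) if np.any(assign == i)
                        else anchors[i] for i in range(k)])
        if np.allclose(new, anchors):
            break
        anchors = new
    return anchors[np.argsort(anchors[:, 0] * anchors[:, 1])]  # sorted by area
```

Run on the training set's box dimensions, this yields k anchor sizes ordered by area, which would then be written into the detector's configuration.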

  • H. Wang, J. Peng, X. Zheng, and S. Yue, “A robust visual system for small target motion detection against cluttered moving backgrounds,” Ieee transactions on neural networks and learning systems, vol. 31, iss. 3, p. 839–853, 2020. doi:10.1109/TNNLS.2019.2910418
    [BibTeX] [Abstract] [Download PDF]

    Monitoring small objects against cluttered moving backgrounds is a huge challenge to future robotic vision systems. As a source of inspiration, insects are quite apt at searching for mates and tracking prey, which always appear as small dim speckles in the visual field. The exquisite sensitivity of insects for small target motion, as revealed recently, comes from a class of specific neurons called small target motion detectors (STMDs). Although a few STMD-based models have been proposed, these existing models only use motion information for small target detection and cannot discriminate small targets from small-target-like background features (named fake features). To address this problem, this paper proposes a novel visual system model (STMD+) for small target motion detection, which is composed of four subsystems–ommatidia, motion pathway, contrast pathway, and mushroom body. Compared with the existing STMD-based models, the additional contrast pathway extracts directional contrast from luminance signals to eliminate false positive background motion. The directional contrast and the extracted motion information by the motion pathway are integrated into the mushroom body for small target discrimination. Extensive experiments showed the significant and consistent improvements of the proposed visual system model over the existing STMD-based models against fake features.

    @article{lincoln36114,
    volume = {31},
    number = {3},
    month = {March},
    author = {Hongxin Wang and Jigen Peng and Xuqiang Zheng and Shigang Yue},
    title = {A Robust Visual System for Small Target Motion Detection Against Cluttered Moving Backgrounds},
    publisher = {Institute of Electrical and Electronics Engineers (IEEE)},
    year = {2020},
    journal = {IEEE Transactions on Neural Networks and Learning Systems},
    doi = {10.1109/TNNLS.2019.2910418},
    pages = {839--853},
    url = {https://eprints.lincoln.ac.uk/id/eprint/36114/},
    abstract = {Monitoring small objects against cluttered moving backgrounds is a huge challenge to future robotic vision systems. As a source of inspiration, insects are quite apt at searching for mates and tracking prey, which always appear as small dim speckles in the visual field. The exquisite sensitivity of insects for small target motion, as revealed recently, comes from a class of specific neurons called small target motion detectors (STMDs). Although a few STMD-based models have been proposed, these existing models only use motion information for small target detection and cannot discriminate small targets from small-target-like background features (named fake features). To address this problem, this paper proposes a novel visual system model (STMD+) for small target motion detection, which is composed of four subsystems--ommatidia, motion pathway, contrast pathway, and mushroom body. Compared with the existing STMD-based models, the additional contrast pathway extracts directional contrast from luminance signals to eliminate false positive background motion. The directional contrast and the extracted motion information by the motion pathway are integrated into the mushroom body for small target discrimination. Extensive experiments showed the significant and consistent improvements of the proposed visual system model over the existing STMD-based models against fake features.}
    }

  • R. Polvara, M. Patacchiola, M. Hanheide, and G. Neumann, “Sim-to-real quadrotor landing via sequential deep q-networks and domain randomization,” Robotics, vol. 9, iss. 1, 2020. doi:10.3390/robotics9010008
    [BibTeX] [Abstract] [Download PDF]

    The autonomous landing of an Unmanned Aerial Vehicle (UAV) on a marker is one of the most challenging problems in robotics. Many solutions have been proposed, with the best results achieved via customized geometric features and external sensors. This paper discusses for the first time the use of deep reinforcement learning as an end-to-end learning paradigm to find a policy for autonomous UAV landing. Our method is based on a divide-and-conquer paradigm that splits a task into sequential sub-tasks, each one assigned to a Deep Q-Network (DQN), hence the name Sequential Deep Q-Network (SDQN). Each DQN in an SDQN is activated by an internal trigger, and it represents a component of a high-level control policy, which can navigate the UAV towards the marker. Different technical solutions have been implemented, for example, combining vanilla and double DQNs, and the introduction of a partitioned buffer replay to address the problem of sample efficiency. One of the main contributions of this work consists in showing how an SDQN trained in a simulator via domain randomization can effectively generalize to real-world scenarios of increasing complexity. The performance of SDQNs is comparable with a state-of-the-art algorithm and human pilots while being quantitatively better in noisy conditions.

    @article{lincoln40216,
    volume = {9},
    number = {1},
    month = {February},
    author = {Riccardo Polvara and Massimiliano Patacchiola and Marc Hanheide and Gerhard Neumann},
    title = {Sim-to-Real Quadrotor Landing via Sequential Deep Q-Networks and Domain Randomization},
    publisher = {MDPI},
    year = {2020},
    journal = {Robotics},
    doi = {10.3390/robotics9010008},
    url = {https://eprints.lincoln.ac.uk/id/eprint/40216/},
    abstract = {The autonomous landing of an Unmanned Aerial Vehicle (UAV) on a marker is one of the most challenging problems in robotics. Many solutions have been proposed, with the best results achieved via customized geometric features and external sensors. This paper discusses for the first time the use of deep reinforcement learning as an end-to-end learning paradigm to find a policy for autonomous UAV landing. Our method is based on a divide-and-conquer paradigm that splits a task into sequential sub-tasks, each one assigned to a Deep Q-Network (DQN), hence the name Sequential Deep Q-Network (SDQN). Each DQN in an SDQN is activated by an internal trigger, and it represents a component of a high-level control policy, which can navigate the UAV towards the marker. Different technical solutions have been implemented, for example, combining vanilla and double DQNs, and the introduction of a partitioned buffer replay to address the problem of sample efficiency. One of the main contributions of this work consists in showing how an SDQN trained in a simulator via domain randomization can effectively generalize to real-world scenarios of increasing complexity. The performance of SDQNs is comparable with a state-of-the-art algorithm and human pilots while being quantitatively better in noisy conditions.}
    }
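The sequential structure described in the entry above, where each sub-task policy is activated by an internal trigger, can be illustrated with a toy dispatcher. The sub-policies here are hand-written stand-ins for trained DQNs, and all names (`SubPolicy`, `align`, `descend`) are hypothetical rather than taken from the paper's code:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class SubPolicy:
    name: str
    done: Callable[[dict], bool]   # internal trigger: is this sub-task complete?
    act: Callable[[dict], str]     # hand-written stand-in for a trained DQN

def sequential_policy(subpolicies, state):
    """Dispatch to the first sub-policy whose completion trigger has not fired."""
    for sp in subpolicies:
        if not sp.done(state):
            return sp.name, sp.act(state)
    return "landed", "idle"

# Hypothetical two-stage landing: align over the marker, then descend.
align = SubPolicy("align",
                  done=lambda s: abs(s["dx"]) < 0.1,
                  act=lambda s: "left" if s["dx"] > 0 else "right")
descend = SubPolicy("descend",
                    done=lambda s: s["z"] < 0.05,
                    act=lambda s: "down")
```

A state far from the marker activates the alignment sub-policy; once its trigger fires, control passes to descent, mirroring the divide-and-conquer decomposition.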

  • M. Bartlett, C. Costescu, P. Baxter, and S. Thill, “Requirements for robotic interpretation of social signals 'in the wild': insights from diagnostic criteria of autism spectrum disorder,” Mdpi information, vol. 11, iss. 81, p. 1–20, 2020. doi:10.3390/info11020081
    [BibTeX] [Abstract] [Download PDF]

    The last few decades have seen widespread advances in technological means to characterise observable aspects of human behaviour such as gaze or posture. Among others, these developments have also led to significant advances in social robotics. At the same time, however, social robots are still largely evaluated in idealised or laboratory conditions, and it remains unclear whether the technological progress is sufficient to let such robots move 'into the wild'. In this paper, we characterise the problems that a social robot in the real world may face, and review the technological state of the art in terms of addressing these. We do this by considering what it would entail to automate the diagnosis of Autism Spectrum Disorder (ASD). Just as for social robotics, ASD diagnosis fundamentally requires the ability to characterise human behaviour from observable aspects. However, therapists provide clear criteria regarding what to look for. As such, ASD diagnosis is a situation that is both relevant to real-world social robotics and comes with clear metrics. Overall, we demonstrate that even with relatively clear therapist-provided criteria and current technological progress, the need to interpret covert behaviour cannot yet be fully addressed. Our discussions have clear implications for ASD diagnosis, but also for social robotics more generally. For ASD diagnosis, we provide a classification of criteria based on whether or not they depend on covert information and highlight present-day possibilities for supporting therapists in diagnosis through technological means. For social robotics, we highlight the fundamental role of covert behaviour, show that the current state-of-the-art is unable to characterise this, and emphasise that future research should tackle this explicitly in realistic settings.

    @article{lincoln40108,
    volume = {11},
    number = {81},
    month = {February},
    author = {M Bartlett and C Costescu and Paul Baxter and S Thill},
    title = {Requirements for Robotic Interpretation of Social Signals 'in the Wild': Insights from Diagnostic Criteria of Autism Spectrum Disorder},
    publisher = {MDPI},
    year = {2020},
    journal = {MDPI Information},
    doi = {10.3390/info11020081},
    pages = {1--20},
    url = {https://eprints.lincoln.ac.uk/id/eprint/40108/},
    abstract = {The last few decades have seen widespread advances in technological means to characterise
    observable aspects of human behaviour such as gaze or posture. Among others, these developments
    have also led to significant advances in social robotics. At the same time, however, social robots
    are still largely evaluated in idealised or laboratory conditions, and it remains unclear whether
    the technological progress is sufficient to let such robots move 'into the wild'. In this paper, we
    characterise the problems that a social robot in the real world may face, and review the technological
    state of the art in terms of addressing these. We do this by considering what it would entail
    to automate the diagnosis of Autism Spectrum Disorder (ASD). Just as for social robotics, ASD
    diagnosis fundamentally requires the ability to characterise human behaviour from observable
    aspects. However, therapists provide clear criteria regarding what to look for. As such, ASD diagnosis
    is a situation that is both relevant to real-world social robotics and comes with clear metrics. Overall,
    we demonstrate that even with relatively clear therapist-provided criteria and current technological
    progress, the need to interpret covert behaviour cannot yet be fully addressed. Our discussions have
    clear implications for ASD diagnosis, but also for social robotics more generally. For ASD diagnosis,
    we provide a classification of criteria based on whether or not they depend on covert information
    and highlight present-day possibilities for supporting therapists in diagnosis through technological
    means. For social robotics, we highlight the fundamental role of covert behaviour, show that the
    current state-of-the-art is unable to characterise this, and emphasise that future research should tackle
    this explicitly in realistic settings.}
    }

  • B. Chen, J. Huang, Y. Huang, S. Kollias, and S. Yue, “Combining guaranteed and spot markets in display advertising: selling guaranteed page views with stochastic demand,” European journal of operational research, vol. 280, iss. 3, p. 1144–1159, 2020. doi:10.1016/j.ejor.2019.07.067
    [BibTeX] [Abstract] [Download PDF]

    While page views are often sold instantly through real-time auctions when users visit Web pages, they can also be sold in advance via guaranteed contracts. In this paper, we combine guaranteed and spot markets in display advertising, and present a dynamic programming model to study how a media seller should optimally allocate and price page views between guaranteed contracts and advertising auctions. This optimisation problem is challenging because the allocation and pricing of guaranteed contracts endogenously affects the expected revenue from advertising auctions in the future. We take into consideration several distinct characteristics regarding the media buyers' purchasing behaviour, such as risk aversion and stochastic demand arrivals, and devise a scalable and efficient algorithm to solve the optimisation problem. Our work is one of a few studies that investigate the auction-based posted price guaranteed contracts for display advertising. The proposed model is further empirically validated with a display advertising data set from a UK supply-side platform. The results show that the optimal pricing and allocation strategies from our model can significantly increase the media seller's expected total revenue, and the model suggests different optimal strategies based on the level of competition in advertising auctions.

    @article{lincoln39575,
    volume = {280},
    number = {3},
    month = {February},
    author = {Bowei Chen and Jingmin Huang and Yufei Huang and Stefanos Kollias and Shigang Yue},
    title = {Combining guaranteed and spot markets in display advertising: Selling guaranteed page views with stochastic demand},
    publisher = {Elsevier},
    year = {2020},
    journal = {European Journal of Operational Research},
    doi = {10.1016/j.ejor.2019.07.067},
    pages = {1144--1159},
    url = {https://eprints.lincoln.ac.uk/id/eprint/39575/},
    abstract = {While page views are often sold instantly through real-time auctions when users visit Web pages, they can also be sold in advance via guaranteed contracts. In this paper, we combine guaranteed and spot markets in display advertising, and present a dynamic programming model to study how a media seller should optimally allocate and price page
    views between guaranteed contracts and advertising auctions. This optimisation problem is challenging because the allocation and pricing of guaranteed contracts endogenously affects the expected revenue from advertising auctions in the future. We take into consideration several distinct characteristics regarding the media buyers' purchasing behaviour, such as risk aversion and stochastic demand arrivals, and devise a scalable and efficient algorithm to solve the optimisation problem. Our work is one of a few studies that investigate the auction-based posted price guaranteed contracts for display advertising. The proposed model is further empirically validated with a display advertising data set from a UK supply-side platform. The results show that the optimal pricing and allocation strategies from our model can significantly increase the media seller's expected total revenue, and the model suggests different optimal strategies based on the level of competition in advertising auctions.}
    }

  • J. P. Fentanes, A. Badiee, T. Duckett, J. Evans, S. Pearson, and G. Cielniak, “Kriging-based robotic exploration for soil moisture mapping using a cosmic-ray sensor,” Journal of field robotics, vol. 37, iss. 1, p. 122–136, 2020. doi:10.1002/rob.21914
    [BibTeX] [Abstract] [Download PDF]

    Soil moisture monitoring is a fundamental process to enhance agricultural outcomes and to protect the environment. The traditional methods for measuring moisture content in the soil are laborious and expensive, and therefore there is a growing interest in developing sensors and technologies which can reduce the effort and costs. In this work, we propose to use an autonomous mobile robot equipped with a state-of-the-art noncontact soil moisture sensor, building moisture maps on the fly and automatically selecting the optimal sampling locations. We introduce an autonomous exploration strategy driven by the quality of the soil moisture model indicating areas of the field where the information is less precise. The sensor model follows the Poisson distribution and we demonstrate how to integrate such measurements into the kriging framework. We also investigate a range of different exploration strategies and assess their usefulness through a set of evaluation experiments based on real soil moisture data collected from two different fields. We demonstrate the benefits of using the adaptive measurement interval and adaptive sampling strategies for building better quality soil moisture models. The presented method is general and can be applied to other scenarios where the measured phenomena directly affect the acquisition time and need to be spatially mapped.

    @article{lincoln37350,
    volume = {37},
    number = {1},
    month = {January},
    author = {Jaime Pulido Fentanes and Amir Badiee and Tom Duckett and Jonathan Evans and Simon Pearson and Grzegorz Cielniak},
    title = {Kriging-based robotic exploration for soil moisture mapping using a cosmic-ray sensor},
    publisher = {Wiley Periodicals, Inc.},
    year = {2020},
    journal = {Journal of Field Robotics},
    doi = {10.1002/rob.21914},
    pages = {122--136},
    url = {https://eprints.lincoln.ac.uk/id/eprint/37350/},
    abstract = {Soil moisture monitoring is a fundamental process to enhance agricultural outcomes and to protect the environment. The traditional methods for measuring moisture content in the soil are laborious and expensive, and therefore there is a growing interest in developing sensors and technologies which can reduce the effort and costs. In this work, we propose to use an autonomous mobile robot equipped with a state-of-the-art noncontact soil moisture sensor, building moisture maps on the fly and automatically selecting the optimal sampling locations. We introduce an autonomous exploration strategy driven by the quality of the soil moisture model indicating areas of the field where the information is less precise. The sensor model follows the Poisson distribution and we demonstrate how to integrate such measurements into the kriging framework. We also investigate a range of different exploration strategies and assess their usefulness through a set of evaluation experiments based on real soil moisture data collected from two different fields. We demonstrate the benefits of using the adaptive measurement interval and adaptive sampling strategies for building better quality soil moisture models. The presented method is general and can be applied to other scenarios where the measured phenomena directly affect the acquisition time and need to be spatially mapped.}
    }
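The exploration loop in the entry above steers the robot toward locations where the kriging model is least certain. A minimal sketch of that idea, using simple kriging with an exponential covariance rather than the paper's Poisson sensor model (all function names here are illustrative):

```python
import numpy as np

def expcov(a, b, sill=1.0, corr_len=10.0):
    """Exponential covariance between two sets of 2-D locations."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return sill * np.exp(-d / corr_len)

def simple_krige(xs, ys, x_new, sill=1.0, corr_len=10.0, mean=0.0):
    """Simple-kriging prediction and variance at the query locations x_new."""
    K = expcov(xs, xs, sill, corr_len) + 1e-9 * np.eye(len(xs))  # tiny nugget
    k = expcov(xs, x_new, sill, corr_len)
    w = np.linalg.solve(K, k)                  # kriging weights, one column per query
    pred = mean + w.T @ (ys - mean)
    var = sill - np.einsum('ij,ij->j', k, w)   # predictive (kriging) variance
    return pred, var

def next_sample(xs, ys, candidates):
    """Exploration step: go where the model is least certain."""
    _, var = simple_krige(xs, ys, candidates)
    return candidates[np.argmax(var)]
```

Repeatedly measuring at `next_sample` and refitting reproduces the uncertainty-driven exploration pattern: points far from all observations carry the highest kriging variance and are visited first.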

  • P. Chudzik, A. Mitchell, M. Alkaseem, Y. Wu, S. Fang, T. Hudaib, S. Pearson, and B. Al-Diri, “Mobile real-time grasshopper detection and data aggregation framework,” Scientific reports, vol. 10, p. 1150, 2020. doi:10.1038/s41598-020-57674-8
    [BibTeX] [Abstract] [Download PDF]

    Insects of the family Orthoptera: Acrididae, including grasshoppers and locusts, devastate crops and ecosystems around the globe. The effective control of these insects requires large numbers of trained extension agents who try to spot concentrations of the insects on the ground so that they can be destroyed before they take flight. This is a challenging and difficult task. No automatic detection system is yet available to increase scouting productivity, data scale and fidelity. Here we demonstrate MAESTRO, a novel grasshopper detection framework that deploys deep learning within RGB images to detect insects. MAESTRO uses a state-of-the-art two-stage training deep learning approach. The framework can be deployed not only on desktop computers but also on edge devices without internet connection such as smartphones. MAESTRO can gather data using cloud storage for further research and in-depth analysis. In addition, we provide a challenging new open dataset (GHCID) of highly variable grasshopper populations imaged in Inner Mongolia. The detection performances of the stationary method and the mobile app are 78 and 49 percent respectively; the stationary method requires around 1000 ms to analyze a single image, whereas the mobile app uses only around 400 ms per image. The algorithms are purely data-driven and can be used for other detection tasks in agriculture (e.g. plant disease detection) and beyond. This system can play a crucial role in the collection and analysis of data to enable more effective control of this critical global pest.

    @article{lincoln39125,
    volume = {10},
    month = {January},
    author = {Piotr Chudzik and Arthur Mitchell and Mohammad Alkaseem and Yingie Wu and Shibo Fang and Taghread Hudaib and Simon Pearson and Bashir Al-Diri},
    title = {Mobile Real-Time Grasshopper Detection and Data Aggregation Framework},
    publisher = {Springer},
    year = {2020},
    journal = {Scientific Reports},
    doi = {10.1038/s41598-020-57674-8},
    pages = {1150},
    url = {https://eprints.lincoln.ac.uk/id/eprint/39125/},
    abstract = {Insects of the family Orthoptera: Acrididae, including grasshoppers and locusts, devastate crops and ecosystems around the globe. The effective control of these insects requires large numbers of trained extension agents who try to spot concentrations of the insects on the ground so that they can be destroyed before they take flight. This is a challenging and difficult task. No automatic detection system is yet available to increase scouting productivity, data scale and fidelity. Here we demonstrate MAESTRO, a novel grasshopper detection framework that deploys deep learning within RGB images to detect insects. MAESTRO uses a state-of-the-art two-stage training deep learning approach. The framework can be deployed not only on desktop computers but also on edge devices without internet connection such as smartphones. MAESTRO can gather data using cloud storage for further research and in-depth analysis. In addition, we provide a challenging new open dataset (GHCID) of highly variable grasshopper populations imaged in Inner Mongolia. The detection performances of the stationary method and the mobile app are 78 and 49 percent respectively; the stationary method requires around 1000 ms to analyze a single image, whereas the mobile app uses only around 400 ms per image. The algorithms are purely data-driven and can be used for other detection tasks in agriculture (e.g. plant disease detection) and beyond. This system can play a crucial role in the collection and analysis of data to enable more effective control of this critical global pest.}
    }

  • R. Kirk, G. Cielniak, and M. Mangan, “L*a*b*fruits: a rapid and robust outdoor fruit detection system combining bio-inspired features with one-stage deep learning networks,” Sensors, vol. 20, iss. 1, p. 275, 2020. doi:10.3390/s20010275
    [BibTeX] [Abstract] [Download PDF]

    Automation of agricultural processes requires systems that can accurately detect and classify produce in real industrial environments that include variation in fruit appearance due to illumination, occlusion, seasons, weather conditions, etc. In this paper, we combine a visual processing approach inspired by colour-opponent theory in humans with recent advancements in one-stage deep learning networks to accurately, rapidly and robustly detect ripe soft fruits (strawberries) in real industrial settings and using standard (RGB) camera input. The resultant system was tested on an existing data-set captured in controlled conditions as well as on our new real-world data-set captured on a real strawberry farm over two months. We utilise F1 score, the harmonic mean of precision and recall, to show our system matches the state-of-the-art detection accuracy (F1: 0.793 vs. 0.799) in controlled conditions; has greater generalisation and robustness to variation of spatial parameters (camera viewpoint) in the real-world data-set (F1: 0.744); and at a fraction of the computational cost allowing classification at almost 30 fps. We propose that the L*a*b*Fruits system addresses some of the most pressing limitations of current fruit detection systems and is well-suited to application in areas such as yield forecasting and harvesting. Beyond the target application in agriculture, this work also provides a proof-of-principle whereby increased performance is achieved through analysis of the domain data, capturing features at the input level rather than simply increasing model complexity.

    @article{lincoln39423,
    volume = {20},
    number = {1},
    month = {January},
    author = {Raymond Kirk and Grzegorz Cielniak and Michael Mangan},
    title = {L*a*b*Fruits: A Rapid and Robust Outdoor Fruit Detection System Combining Bio-Inspired Features with One-Stage Deep Learning Networks},
    publisher = {MDPI},
    year = {2020},
    journal = {Sensors},
    do